Museum Virtual Worlds

Bringing Real and Virtual Together

January 10, 2011
by Rob Rothfarb
2 Comments

Augmented Reality — A Looking Glass into Other Worlds: AR Artist and Researcher Helen Papagiannis Explores Wonderment and Play in Exhibit Design

A few months ago, I was introduced to Helen Papagiannis, an artist, designer, and researcher working with the emerging technology of Augmented Reality (AR). I was captivated by the way her playful AR exhibits and installations drew people in through touch, video, and sound. She’d recently exhibited her work at the Ontario Science Centre and was exploring ways to engage museum audiences through their sense of discovery and wonder. I caught up with her recently and asked her some questions about the ideas behind her work.

“The Amazing Cinemagician: New Media Meets Victorian Wonder” exhibition by Helen Papagiannis at the Ontario Science Centre, May-Sept. 2010, Toronto, Canada. Photos: Pippin Lee

When did you start experimenting with augmented reality?

I began experimenting with AR in September 2005. When I saw AR for the first time, I was so entranced I think I entered a permanent state of wonder with the technology. And it was all very simple: a bare-bones virtual 3D cube seemingly appearing in my physical space. It was completely astonishing! I went into mad-scientist mode from there, tinkering, prototyping, and dreaming of the creative possibilities for AR. Five and a half years later, I’m still riveted.

What are some of the challenges that you’re exploring in your AR work?

When I began working with AR, the challenges were largely around the technical constraints. It has been important for me to work with what is at hand, right now, not tomorrow, or 6 months from now. I always ask, ‘How can we realize this now and make it compelling within the parameters?’ My process has entailed allowing the constraints to guide the work, then finding ways to push beyond those boundaries to create something new.

I strongly believe AR is emerging as a new medium and it will come to play a large role in entertainment and information sharing. The challenge at hand is to continue to investigate how best to apply the medium and really elevate it creatively, and to do this as a community of artists, engineers and industry. We need to identify what is truly unique about this new form, and how we can best leverage these characteristics to tell new stories and create engaging experiences that are unlike anything we’ve ever seen before.

How can museum audiences experience AR?

We’re beginning to see more AR applications in museums, which is very exciting. AR is becoming more accessible and affordable, including through personal devices that museum visitors may already have at hand, such as smartphones. AR can be used to provide additional information about objects in a museum’s collection and to enable experiential learning through discovery and play. Wonderment, as discussed in my TEDx talk, is an important part of my work in AR. For me, AR fosters a great sense of wonder as a looking glass into another world, and it can be used to further ignite curiosity and inquiry in a museum setting.

Museum audiences can experience AR through various viewing devices including tablets equipped with cameras, large wall-mounted screens or kiosks, and mobile devices. Here are a few contemporary examples:

Tablets:
The Natural History Museum’s interactive film “Who Do You Think You Really Are?” and a collaboration between Metaio and Louvre-DNP

Screens:
Examples include my Wonder Turner installation at the Ontario Science Centre, and Total Immersion’s environmental kiosk

Mobile devices:
An unofficial exhibit at MoMA using Layar, and one of my recent projects, “AR Hanging Mobile,” using ARToolKit for iPhone.

What limitations in the current ways people experience AR artworks and installations would you like to move beyond?

I’d like to see more AR work move beyond the single-viewer experience and engage larger audiences in simultaneous viewing, and even collaborative, interactive experiences. I think this is particularly relevant for museums designing and producing AR experiences. My early work in AR started with books and other small hand-held objects, creating an intimate experience for a single user at a given time, and then expanded to a more collaborative, exploratory setting with multiple users engaging in an act of play and discovery together. There’s a great opportunity to enable a larger group dynamic in multi-user AR, one that can combine visitors both on- and off-site.

Let’s also make the work so wondrous people forget they are looking at a screen or using a device! I’m continually exploring ways to heighten “presence”, a term used in AR to describe the illusion of non-mediation.

***
Helen Papagiannis is an artist, designer, and researcher specializing in Augmented Reality (AR). She is presently completing her Ph.D. at York University in Toronto, Canada and is a Senior Research Associate at the Augmented Reality Lab (Department of Film, Faculty of Fine Arts). Helen’s mixed reality art installations were recently featured in a solo exhibition at the Ontario Science Centre, and at TEDx, where she was also an invited speaker. Prior to her augmented life, Helen was a member of the internationally renowned Bruce Mau Design studio, where she was project lead on “Massive Change: The Future of Global Design”, a touring exhibition and book published by Phaidon Press.

December 7, 2010
by Rob Rothfarb
0 comments

How to make virtual worlds collide

I haven’t posted in a few months about the virtual worlds the Exploratorium is developing. Fortunately, my lack of writing doesn’t mean nothing is going on in that realm. Exploratorium Island and ’Sploland island in Second Life continue to thrive, with several hundred visitors per week each, guided tours of exhibits, and occasional building parties and building tutorials. We’re thinking about what events would be good to stage there in the coming year. Many new exhibits have been added to Exploratorium Island recently, and the space is undergoing a needed makeover. More on these developments soon!

Now I’m going to begin a little detour from immersive 3D virtual worlds and introduce some new experiments with mobile augmented reality (AR), an emerging technology and practice that allows you to overlay a view of a physical environment or object with virtual content. We did an experiment with AR here at the Exploratorium last year as part of the kick-off celebration for our 40th anniversary. It presented a real-time rendered view of an interactive 3D model overlaid onto a 2D marker embedded in the cover design of our quarterly publication, Explore. This was inspired by others beginning to use the print medium as an entrée to the technology.

I first became fascinated with the idea of location-based augmented reality when I read descriptions of the pieces that “locative” artists make in William Gibson’s book Spook Country. In his post-cyberpunk story, Gibson describes several large-scale 3D virtual objects placed (via geo-referencing) at specific locations meaningful to the story: a giant octopus, a field of moving flowers, a re-created scene, and so on. Viewers needed to don cumbersome headgear to see these things, which were rendered in high-resolution 3D stereo and composited compellingly into a view of the physical environment. It reminded me of things I’d tried previously with slide and video projections at specific locations. In a digitally enhanced world, with the ubiquity of the Global Positioning System and mass-market access to it through hand-held devices such as smartphones, augmented reality is now a technology that many can experience, and one that museums are experimenting with.
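At its core, the geo-referencing that makes this kind of locative art possible is simple math: the device compares its GPS position against the coordinates where virtual objects have been placed and renders whichever ones are in range. Here’s a minimal sketch of that idea; the object names and coordinates are hypothetical, and a real AR browser would also use compass heading and camera pose, not just distance.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_objects(device_lat, device_lon, placements, radius_m=100.0):
    """Return the geo-referenced virtual objects within radius_m of the device."""
    return [name for (name, lat, lon) in placements
            if haversine_m(device_lat, device_lon, lat, lon) <= radius_m]

# Hypothetical geo-referenced pieces in San Francisco
placements = [
    ("giant_octopus", 37.8029, -122.4484),
    ("field_of_flowers", 37.8100, -122.4100),
]
print(visible_objects(37.8030, -122.4485, placements))
# -> ['giant_octopus']
```

This is essentially what platforms like Layar do behind the scenes: fetch a layer of points of interest near the device and draw only the nearby ones over the camera view.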

For me, augmented reality represents a slice of a virtual world that people can experience on the go, in their day-to-day lives. It seems like a fluid extension of immersive 3D virtual worlds, in that virtual and physical realities are mixed. There are possibilities for blending communication (social aspects) and user-generated content into the augmented views. AR is also being heavily influenced by the commercial gaming space, with haptics and full-body gestural interfaces promising more natural-feeling interaction with augmented views. Products like the Wii and Kinect are being used as interfaces to non-gaming content, and open-source toolkits for those devices are now available. More on that topic soon.

AR technology is at an early stage: hardware and software platforms are rapidly emerging, open and proprietary systems are vying for developer and consumer attention, business models are being invented, and content developers are exploring different uses. Museums are beginning to develop models for using AR, designed to engage visitors in discovering “extra” information about objects, exhibits, and places. The New Media Consortium’s 2010 Horizon Report: Museum Edition tracks the time-to-adoption for AR as two to three years. So it’s an exciting time to watch this new technology grow, and to work with it yourself to see how you can use it to engage your audiences.

May 3, 2010
by Rob Rothfarb
1 Comment

Explainers from the NYHOS visit the Exploratorium in SL

Recently, Paul Doherty and I met in SL with New York Hall of Science (NYHOS) curriculum developer (and Museum Virtual Worlds contributor) Ray Ferrer, along with some adventurous high school Explainers.

The Hall is working with their first cohort of high school Explainers to envision, design, and facilitate the virtual space that will be the new Virtual Hall of Science (VHOS). The meeting/tour participants had a look at what the Exploratorium has been doing with exhibit development in virtual environments and got an introduction to some of the environment and object building processes in Second Life. We played with different exhibits and chatted about things the Exploratorium has learned in developing exhibits there, including the interaction benefits of putting the avatar into the exhibit as much as possible and of moving the avatar as part of the exhibit experience. I’m looking forward to seeing how the new VHOS develops!

Visitors from NYHOS check out some exhibits on 'Sploland in SL




Trying out Dance Floor Color Mixer





NYHOS Explainer avatars being moved in an exhibit

October 5, 2009
by Rob Rothfarb
1 Comment

Encoding Tools for Live Webcasting into Virtual Worlds

Since the beginning of the Exploratorium’s explorations with virtual worlds (circa 2006; I know, it seems like a long time ago!), we’ve combined live webcasting with exhibits to create social events. To bring the outside world in through programs like webcasts, you need a capable audio/video encoding tool that can communicate directly with a streaming media server. Virtual worlds like Second Life can then deliver that stream to the participant by accessing the stream location and rendering it in the viewer application. For video, we can make a surface act like a big projection screen and show the live stream.

Two programs we’ve found useful for encoding digital video from a camera or other video signal are QuickTime Broadcaster from Apple and Wirecast from Telestream. Both are capable of connecting to QuickTime Streaming Server (QTSS), the server application you need to stream video into SL. I haven’t yet been able to use a Helix Universal Server, another popular streaming media server, to get video into SL, but I keep thinking it’s possible. I also haven’t had an opportunity to experiment with other server apps like Wowza for this, but hope to at some point.

QuickTime Broadcaster is only available for the Mac, but it’s free from Apple and makes setting up an encoder system fast and easy. You may need to enter a username and password in the application’s Network configuration to authenticate with QTSS. Wirecast is a commercial application, though you can obtain a discounted educational license, which makes it fairly accessible cost-wise. It’s cross-platform, and I’ve used it on both Mac and Windows to encode and connect to QTSS. Wirecast has many other features, like video switching, that are very useful and worth checking out.

Both programs allow you to select a video input source on your computer (a webcam, or an external camera via a capture interface or FireWire), name the stream (you’ll need the name to know the stream URL), and simultaneously save an encoded file to your computer while sending the encoded video to the streaming server (useful for posting the video for on-demand viewing after the live event). On October 9, 2009, we’ll be doing a live webcast from a remote location (the Lick Observatory on Mt. Hamilton), using Wirecast to encode real-time telescopic images and video of Exploratorium scientists Paul Doherty and Ron Hipschman, who will be hosting a program about NASA’s LCROSS mission. The program will be streamed on the Exploratorium’s website and into our amphitheater on Exploratorium Island in Second Life.
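The stream name matters because it determines the RTSP URL you paste into your parcel’s media settings in SL. As a small sketch of that convention, here is a helper that builds the URL a QTSS live broadcast is typically served from (QTSS announces live streams as an .sdp resource); the server name and stream name below are hypothetical:

```python
def qtss_stream_url(server, stream_name, port=554):
    """Build the RTSP URL for a QTSS live broadcast.

    QTSS typically serves a live stream at rtsp://server[:port]/<name>.sdp.
    Port 554 is the RTSP default, so it can be omitted from the URL.
    """
    host = server if port == 554 else f"{server}:{port}"
    name = stream_name if stream_name.endswith(".sdp") else stream_name + ".sdp"
    return f"rtsp://{host}/{name}"

# Hypothetical server and stream name for a live event
print(qtss_stream_url("media.example.org", "lcross_live"))
# -> rtsp://media.example.org/lcross_live.sdp
```

Whatever stream name you enter in QuickTime Broadcaster or Wirecast is the name you’d pass here, and the resulting URL is what SL requests when a visitor plays the parcel media.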

July 3, 2009
by Rob Rothfarb
0 comments

NASA Virtual Worlds and Immersive Environments Report

NASA has published a report from the workshop on virtual worlds and immersive environments held in 2008 at Ames Research Center. The report, created by workshop organizers and participants, summarizes the presentations and discussions at the workshop, which was attended by a diverse group of people from the research, commercial, education, and gaming sectors. Key themes include remote exploration, global participation paradigms, and the narrowing boundaries between physical and virtual experience. NASA continues to use virtual worlds for training simulations and outreach. Check out the STS-125 Hubble servicing spacewalk simulation, and stay tuned for more interactive virtual worlds about future moon missions. I don’t have a current update on where NASA is with its RFP for an MMOG platform, but I’ll be keeping an eye out for the latest on that.

Workshop Report On NASA Virtual Worlds and Immersive Environments

May 19, 2009
by Ray Ferrer
0 comments

Phase Three of the VHOS Project

For those not familiar with the VHOS project, it is essentially a virtual space within the Active Worlds universe in which the New York Hall of Science intends to create explorable, interactive exhibits through a collaborative process involving contributions from Hall staff, Hall Explainers, participants in the Hall’s camp programs, and finally (and ideally) casual visitors. The first phase of the VHOS project was simple enough: train a group of 18- to 23-year-olds to use Active Worlds to the point where they are comfortable creating things, as well as showing others how to create things in-world. The second phase was a reminder that no design can be efficient without prototyping; middle schoolers have a knack for showing you that the way you think they think is wrong, so anything designed for them will likely have to be revised on the fly.

The third phase of the VHOS project was an interesting reminder for me of how iterative the process of designing something that actually meets needs can be. While I thought I could have veteran participants take a hand in delivering basic skills to newer participants, they just weren’t interested in being teachers. As a solution, we introduced the “Easter egg.” As new participants acquainted themselves with basic navigation and building skills, veteran participants were given a “mission”: first, create an Easter egg containing some scripting skills considered advanced for the newbies, then secretly place that egg somewhere on a newbie’s virtual property. So here we have veterans showing off their skill in a way that newbies can glean important skills from. Some veterans went as far as to create portals that take you to a secret location containing your personalized Easter egg.

Unlike the second phase, the third phase focused on one content area: participants designed and developed virtual exhibits dealing only with the phases of matter. During phase two of the VHOS project, participants appeared to be a bit overwhelmed by the option of selecting any STEM topic of their choice; too much time was spent narrowing down the focus of their designs and not enough designing. The effects can be seen by contrasting a phase-two exhibit, which often illustrates a broad concept, with a phase-three exhibit illustrating some characteristic feature of a substance transitioning from one phase of matter to another.

Phase 2 Exhibit


Phase 3 Exhibit


As we continue to run camps, the VHOS becomes richer with educational experiences, which will inevitably lead to the issue of categorizing the exhibits and directing the user or casual visitor in a way that facilitates learning. I’m excited to see where this is leading, as there is already a feel of being in a place where someone has been before you, someone who has given the space, and how you experience it, significant thought.

VHOS Home Level
