The Exploratorium’s team of Second Life staff and volunteers put on another mixed-reality webcast viewing event in SL on June 5th, 2012, extending the Exploratorium’s live webcast of the 2012 Transit of Venus from the museum and the web into the virtual world. Avatar-scientist Patio Plasma hosted the in-world event, which featured the 6.5-hour live webcast of telescopic images of the sun and the rare planetary transit. Volunteers in SL helped avatars see the stream. For this event, we tried a different method of putting the stream into SL: the HTML-on-a-prim feature of the SL viewer. This method proved workable, though not as easy for SL residents to use as the traditional approach of replacing a prim texture with a QuickTime stream. With each event, we learn more about the best ways to use the multimedia features of SL and how to engage avatars with live webcasts.
Recently, Paul Doherty and I met in SL with New York Hall of Science (NYHOS) curriculum developer (and Museum Virtual Worlds contributor) Ray Ferrer, along with some adventurous high school Explainers.
The Hall is working with their first cohort of high school Explainers to envision, design, and facilitate the virtual space that will be the new Virtual Hall of Science (VHOS). The meeting/tour participants had a look at what the Exploratorium has been doing with exhibit development in virtual environments and got an introduction to some of the environment and object building processes in Second Life. We played with different exhibits and chatted about things the Exploratorium has learned in developing exhibits there, including the interaction benefits of putting the avatar into the exhibit as much as possible and of moving the avatar as part of the exhibit experience. I’m looking forward to seeing how the new VHOS develops!
Since the beginning of the Exploratorium’s explorations with virtual worlds (circa 2006; I know, it seems like a long time ago!), we’ve combined live webcasting with exhibits to create social events. To bring the outside world in through programs like webcasts, you need a capable audio/video encoding tool that can communicate directly with a streaming media server. Virtual worlds like Second Life can then deliver that stream to the participant by accessing the stream location and rendering it in the viewer application. For video, we can make a surface act like a big projection screen and show the live stream.

Two programs that we’ve found useful for encoding digital video from a camera or other video signal are QuickTime Broadcaster from Apple and Wirecast from Telestream. Both are capable of connecting to a QuickTime Streaming Server (QTSS), the server application you need in order to stream video into SL. I haven’t yet been able to use a Helix Universal Server, another popular streaming media server, to get video into SL, but I keep thinking it’s possible. I also haven’t had an opportunity to experiment with other server apps like Wowza for this, but hope to at some point. QuickTime Broadcaster is Mac-only but free from Apple, and it makes setting up an encoder system fast and easy. You may need to enter a username and password in the application’s Network configuration to authenticate with QTSS. Wirecast is a commercial application, though a discounted educational license makes it fairly accessible, cost-wise. It’s cross-platform, and I’ve used it on both Mac and Windows to encode and connect to QTSS. Wirecast has many other features, like video switching, that are very useful and worth checking out.
Both programs let you select a video input source on your computer (a webcam, or an external camera via a capture interface or FireWire), name the stream (you’ll need the name to know the stream URL), and simultaneously save an encoded file to your computer while sending the encoded video to the streaming server (useful for posting the video for on-demand viewing after the live event). On October 9, 2009, we’ll be doing a live webcast from a remote location (the Lick Observatory on Mt. Hamilton), using Wirecast to encode real-time telescopic images and video of Exploratorium scientists Paul Doherty and Ron Hipschman, who will be hosting a program about NASA’s LCROSS mission. The program will be streamed on the Exploratorium’s website and into our amphitheater on Exploratorium Island in Second Life.
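To make the chain of settings concrete, here’s a rough sketch of how the pieces fit together. The hostname and stream name below are hypothetical placeholders, not values from our actual setup, and your port and media settings may differ:

```
# Hypothetical example: wiring an encoder to QTSS and then into SL.
# "media.example.org" and "lcross-live" are placeholder values.

Encoder (QuickTime Broadcaster or Wirecast)
  Destination server:  media.example.org   (your QTSS host)
  Stream name:         lcross-live.sdp     (set in the encoder's network settings)

QTSS then serves the stream over RTSP at:
  rtsp://media.example.org:554/lcross-live.sdp

Second Life parcel media (About Land > Media tab)
  Media URL:  rtsp://media.example.org:554/lcross-live.sdp
  The stream replaces the parcel's media texture wherever
  that texture appears on a prim surface.
```

Once the parcel’s media URL points at the stream, any avatar on the parcel with media playback enabled in their viewer can watch the live video on the “screen” prim.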
NASA has published a report from the workshop on virtual worlds and immersive environments held in 2008 at Ames Research Center. The report, created by the workshop’s organizers and participants, summarizes the presentations and discussions at the workshop, which was attended by a diverse group of people from the research, commercial, education, and gaming sectors. Key themes include remote exploration, global participation paradigms, and the narrowing boundaries between physical and virtual experience. NASA continues to use virtual worlds for training simulations and outreach. Check out the STS-125 Hubble servicing spacewalk simulation, and stay tuned for more interactive virtual worlds about future moon missions. I don’t have a current update on where NASA is with its RFP for an MMOG platform, but I’ll be keeping my eye out for the latest on that.
For those not familiar with the VHOS project, it is essentially a virtual space within the Active Worlds universe in which the New York Hall of Science intends to create explorable, interactive exhibits through a collaborative process involving Hall staff, Hall Explainers, participants in the Hall’s camp programs, and finally (and ideally) casual visitors. The first phase of the VHOS project was simple enough: train a group of 18-to-23-year-olds to use Active Worlds to the point where they are comfortable creating things, as well as showing others how to create things in-world. The second phase was a reminder that no design can be effective without prototyping; middle schoolers have a knack for showing you that the way you think they think is wrong, so anything designed for them will likely have to be revised on the fly.

The third phase of the VHOS project was an interesting reminder for me of how iterative the process of designing something that actually meets needs can be. While I was thinking that I could have veteran participants take a hand in delivering basic skills to newer participants, they just weren’t interested in being teachers. As a solution, we introduced the “Easter egg.” As new participants acquainted themselves with basic navigation and building skills, veteran participants were given a “mission”: first, create an Easter egg containing some scripting skills considered advanced for the newbies, then secretly place that egg somewhere on a newbie’s virtual property. So here we have veterans showing off their skills in a way that lets newbies glean important techniques. Some veterans went as far as to create portals that take you to a secret location containing your personalized Easter egg.
Unlike the second phase, the third phase focused on one content area: participants designed and developed virtual exhibits dealing only with the phases of matter. During phase two of the VHOS project, participants appeared a bit overwhelmed by being able to select any STEM topic of their choice; too much time was spent narrowing down the focus of their designs and not enough time designing. The effects can be seen by contrasting a phase-two exhibit, which often illustrates a broad concept, with a phase-three exhibit illustrating some characteristic feature of a substance transitioning from one phase of matter to another.
As we continue to run camps, the VHOS becomes richer with educational experiences, which will inevitably lead to the issue of categorizing the exhibits and directing the user or casual visitor in a way that facilitates learning. I’m excited to see where this is leading, as there is already a feeling of being in a place where someone before you has given the space, and how you experience that space, significant thought.
Hello World! My name is Ray Ferrer. I’m a Digital Learning Curriculum Developer at the New York Hall of Science, currently incorporating virtual worlds into the learning experiences here at the Hall. This is the Hall’s first endeavor using 3D virtual environments to facilitate learning, and I’m excited to report that our first run was a promising indicator of the kinds of learning experiences that can be had.
Using an Active Worlds space graciously donated by Cornell University, participants in the VHOS project went through a four-day camp in which they learned to navigate and build in the environment, researched a STEM topic of their choice, learned exhibit design from an expert, and finally designed their own exhibits in-world. But that’s not where it ends; in fact, that’s not even how it began. Prior to the camp, a team of Explainers (the Hall’s equivalent of a docent) went through a series of AW trainings in order to help camp participants realize their designs. At the conclusion of the camp, participants completed drafts of their exhibit designs. The images included below are samples.
During the week of April 14th–17th, new participants will begin populating the VHOS space with their own exhibit designs, while returning participants work on iterations of their designs and help teach new participants the fundamentals (and obstacles) of designing in the AW environment.
The apprenticeship model that we are using has been successful for the Hall in past programs, and I trust that it will be just as effective in virtual environments. I’m eager to see the designs that come out of this project and will keep the readers of this blog posted.