Immersive Imaging Technology: VR for the Web in Academia
Stephen D Comer
The Citadel

 

Immersive imaging is a relatively new area that combines photography and virtual reality techniques. It is also called photographic VR and image-based rendering; each name reflects an aspect of the technology. It is immersive because the viewer is given control to interact with images in a non-linear manner, and it is based on the rendering of photographic images rather than computer-generated ones. This article presents an overview of the technology, provides examples of its use in education, and discusses the process of creating a VR movie for a web page or a presentation.

There are two basic types of immersive images: panorama movies and object movies. Both will be referred to as VR movies. In a panorama movie the viewer is placed at the center of a view and, using the mouse, can pan a full 360 degrees horizontally and, in some cases, 180 degrees vertically. In some cases one can also zoom in and out. Partial (less than 360 degrees) panoramas are also possible. When two or more individual panoramas are linked using "hotspots" or enriched by embedding other media types, the result is called a scene or project. In an object movie the object is the center of rotation, and the viewer, by moving the cursor, can rotate the object to examine it from different perspectives. Presently, viewing most VR movies requires first downloading and installing a (free) proprietary viewer or plug-in. The alternative, immediate viewing, requires the movie to be delivered by a Java applet. Panoramas can be developed on a Windows platform for distribution this way. More will be said about applets and plug-ins below.

Examples
Panoramas provide an important ingredient of virtual field trips. Interesting examples in the areas of history and social science include a virtual tour of the ancient Mayan ruins of Tikal in Central America (www.destination360.com/tikal.htm) and a virtual documentary of a journey through Central Asia along the "Silk Route" (www.worldmedia.fr/witness/). It is important to note that the objective is to use a VR movie in a way that enhances the learning experience. A good example in the area of science is TerraQuest’s Virtual Galapagos (www.terraquest.com/galapagos/), which allows a visitor to study the ecology, wildlife, history, and geology of the Galapagos Islands. Another site involving ecology is the Jason Project (www.eds.com/community_affairs/jason/jX_quicktime_cover.shtml), which provides a virtual tour of an Amazon rainforest and an underwater tour of a coral reef in Bermuda, including an underwater panorama of a sunken ship. The Anatomy Department at Wright State University (www.anatomy.wright.edu/QTVR/QTVRmenu) provides object movies that allow visitors to examine skulls, hearts, knees, and more. The Laboratory for Atmospheres at NASA’s Goddard Space Flight Center (rsd.gsfc.nasa.gov/rsd/) uses immersive imaging to visualize space and weather data. Colleges are starting to use panoramas to capture the feel of their campuses; see, for example, William and Mary (www.wm.edu/admission/video) and the University of Nevada, Reno (www.unr.edu/virtualvisit/qtvr.html). Links to a variety of other samples can be found at the author’s site (comers.mathcs.citadel.edu/vr).

The steps to develop a VR movie involve planning, capturing and digitizing images, creating the scenes with authoring tools, and delivering the results. The process will be discussed below for panorama movies.

Planning
OK, we have a presentation that can be enhanced by including a VR movie. How do we start? The choice of authoring tool and delivery method is influenced by the target audience and their equipment. Will the users be students in a single class, a wide range of students at the college, or the general public? Will they access the material using their own computer, a specifically reserved computer or kiosk, a CD-ROM, a college intranet or LAN, or the internet? One issue is whether the intended viewer can be expected to have the necessary plug-in(s). They can if it is a class requirement or if the movie is provided on a CD or a designated computer. If a college network is used, the IT staff must install the appropriate browser plug-ins on all lab computers. If distribution is via the internet, there is no guarantee a potential viewer will take the time to install a plug-in.

What’s the difference between designing a movie for a plug-in versus a Java applet? The main advantage of plug-ins is that they support the embedding of additional media types such as audio, animation, and movies. One must be aware, however, that some plug-ins for video or audio work only on a specific platform. More importantly, many internet users, when a page asks them to download and install a program to continue, simply go elsewhere. The advantage of using a Java applet to distribute a movie is that it provides immediate viewing, with no wait, on any platform. But there’s a catch. Applets allow a viewer to navigate between different movies and to jump to URLs, but presently do not support embedded media types like audio. Also, there is a limit on the file size that an applet will support and on the total number of pixels an image can contain if presented with an applet. Of course, HTML and JavaScript can be used to create workarounds, for example, by embedding audio in the web page instead of in the movie.

Another issue is the movie file size. Download speeds over the internet range from 0.5 to 4 KB/sec depending on the connection. Whether a plug-in or a Java applet is used, the file size needs to be such that the download time is tolerable. File size is less important for projects delivered on disk or CD-ROM.
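To make the tradeoff concrete, here is a small Python sketch of download times across the 0.5-4 KB/sec range quoted above. The file sizes are illustrative choices, not figures from the article:

```python
# Rough download-time estimates for a VR movie file, assuming the
# 0.5-4 KB/sec connection speeds quoted in the text.

def download_time_seconds(file_size_kb, speed_kb_per_sec):
    """Return the time in seconds to download a file at a given speed."""
    return file_size_kb / speed_kb_per_sec

if __name__ == "__main__":
    for size_kb in (100, 250, 500):        # hypothetical movie sizes
        for speed in (0.5, 2.0, 4.0):      # slow, typical, fast connections
            t = download_time_seconds(size_kb, speed)
            print(f"{size_kb:4d} KB at {speed:3.1f} KB/s: {t/60:5.1f} min")
```

Even a modest 250 KB panorama takes over eight minutes on the slowest connection, which is why aggressive compression matters for internet delivery.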

The considerations above need to be kept in mind while developing the storyboard for a project. In addition to determining the location for a panorama one must also consider such things as the light conditions and when sound is to be recorded.

Capturing and Digitizing Images
Any type of camera can be used to capture images, but the camera and lens used affect the process. If a video camera is used, a video capture card is needed to import the images into a computer. If a still film camera is used, the film must be developed and the prints scanned. Some photo developers digitize images with Photo CD. If going this route, be sure to instruct the developers to treat the images in a panorama as a group and not individually. By far the easiest way to obtain digital images is with a digital camera. A digital camera stores its images on either 3.5" floppy disks or Compact Flash cards, making the transfer to a computer easy. The number of high-resolution images a card will hold depends on the camera and ranges from 4 to 40. There are also specialized panorama cameras that can shoot an entire 360-degree scene with one or two shots.

Software that stitches images into a panorama assumes the photos are taken in a clockwise manner and are numbered in increasing order. Automatic stitching works best if consecutive images overlap by 20-50%. This means that if a camera with the equivalent of a 35mm lens is used, expect to take about 12 photos in landscape mode or 18 photos in portrait mode. Using portrait mode gives a larger vertical field of view. When shooting a panorama, the camera must be rotated about its focal point, not the photographer’s axis; parallax errors result when the focal point is not kept aligned. This is more noticeable, and causes more problems, when shooting indoors than outdoors, where the objects are all at a distance. A tripod and tripod head (or pano head) are useful accessories that help keep the camera stable, the plane of rotation level, and the overlap between successive photos uniform. An L-bracket can be used to keep the focal point of the camera aligned with the axis of rotation of the tripod when shooting in portrait mode.
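The photo counts above follow from simple geometry: each shot advances the camera by its horizontal field of view minus the overlap. The sketch below assumes rough field-of-view figures for a 35mm-equivalent lens (about 54 degrees horizontal in landscape, about 38 in portrait) and overlaps in the 20-50% range; these numbers are assumptions for illustration, not from the article:

```python
import math

def shots_needed(horizontal_fov_deg, overlap_fraction):
    """Number of shots to cover a full 360 degrees, given each frame's
    horizontal field of view and the fractional overlap between
    consecutive frames."""
    # Degrees of new scenery each shot contributes.
    step = horizontal_fov_deg * (1.0 - overlap_fraction)
    return math.ceil(360.0 / step)

# Assumed FOV values for a 35mm-equivalent lens:
print(shots_needed(54, 0.40))  # landscape orientation -> about 12 shots
print(shots_needed(38, 0.45))  # portrait orientation  -> about 18 shots
```

With these assumptions the formula reproduces the 12-shot landscape and 18-shot portrait figures quoted above.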

Creating Scenes with Authoring Tools
After digitizing the images, the next step is to stitch adjacent images and wrap them into a single image. There are a number of software packages available to do this. I will mention only those that do everything (produce panoramas, produce object movies, and compose scenes). For a list of providers of authoring tools, check the International Association of Panoramic Photographers’ immersive imaging page (panphoto.com/ImmersiveImaging.html) or Kaidan, Inc. (www.kaidan.com). For the Macintosh platform, Apple’s QuickTime VR Authoring Studio was the first program and is the most widely used. VR Toolbox also provides a suite of tools (VR PanoWorx, ObjectWorx, and SceneWorx) for the Mac OS. For the Windows PC platform, Live Picture’s Reality Studio package includes their PhotoVista and Object Modeler. PictureWorks Technology provides Spin Panorama, Spin PhotoObject, and VRTour (a Java applet for distributing panoramas). Both Live Picture’s and PictureWorks’ products work on a Macintosh as well, but Live Picture’s Mac program is very limited compared to its PC product. I will focus on features of the PC tools since I’m more familiar with them.

Both PhotoVista and Spin Panorama have wizards to lead a user through the construction of a panorama, allow for partial panoramas, and provide a "smart stitch" feature. Stitching with PhotoVista is based on a camera lens specification; manual stitching in this case is done by visually overlaying adjacent images. Stitching with Spin Panorama is based on selecting control points in adjacent images. Each approach has its advantages. Both programs allow the user to specify the amount of compression for the final output. Spin Panorama outputs movies in QuickTime MOV and JPEG formats. VRTour allows JPEG images to be linked using hotspots and distributed either over the web or on a disk. PhotoVista outputs movies as JPEG and FPX (FlashPix) images and creates IVR (Image-based Virtual Reality) and HTML files to display the movie using either their plug-in or Java applet. The IVR files can be imported into Reality Studio, where two or more panoramas can be linked, cinematic effects created, and other media types (audio, etc.) added. The result is output as an IVR file that must be viewed using their plug-in. Because the IVR output is a text file, the results can be enhanced with additional VRML 2 features.

Delivering Results
Before displaying a project, the final image may need to be touched up using an image editor. One then builds an HTML file or edits the one produced by the authoring software. If using a Java applet, the parameters need to be set properly and the files placed in the correct relationship to each other. After uploading to a web server, check the download time using different browsers; if it is too slow, the size of the image needs to be reduced. Some applets only work with images of no more than 300,000 pixels; others allow more. Typical aspect ratios for a viewing window range from 1.66:1 to 2.33:1. The most pleasing size can be found by experimentation.
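A candidate window size can be sanity-checked against the limits mentioned above before uploading. The helper name, default limits, and sample dimensions below are illustrative (the 300,000-pixel cap applies only to some applets):

```python
def check_applet_limits(width, height, max_pixels=300_000,
                        min_aspect=1.66, max_aspect=2.33):
    """Check an image size against an applet pixel limit and the
    typical viewing-window aspect-ratio range. Returns a list of
    problems; an empty list means the size passes both checks."""
    problems = []
    if width * height > max_pixels:
        problems.append(f"{width * height} pixels exceeds the "
                        f"{max_pixels}-pixel limit")
    aspect = width / height
    if not (min_aspect <= aspect <= max_aspect):
        problems.append(f"aspect ratio {aspect:.2f}:1 is outside "
                        f"{min_aspect}:1 to {max_aspect}:1")
    return problems

print(check_applet_limits(768, 384))   # 2:1 window, 294,912 pixels -> []
print(check_applet_limits(1024, 512))  # 524,288 pixels -> one problem
```

A 768x384 window stays under the cap with a 2:1 ratio, while doubling the area trips the pixel limit even though the aspect ratio is unchanged.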

The process described above for creating a VR movie is relatively straightforward. And it is becoming easier every day with the introduction of better and cheaper products for digital photography. The following resources provide in-depth introductions.

  1. Demystifying the Creation and Development Techniques Used to Build QTVR Content (www.outsidethelines.com/EZQTVR.html)
  2. QTVR for Educators (teachnet.edb.utexas.edu/~qtvr/)
  3. The QuickTime VR Book, Susan A. Kitchens, Peachpit Press, 1998