Excerpts from a Virtual Reality Researcher

Projects

 

Expedition VRctica (January 2023 – Present)

In this NSF-funded project, I am developing, with colleagues at the Field Day Lab, five VR scenarios for polar science education and outreach.

 

USDA Forestry Project (September 2022 – Present)

In this USDA Forest Service-funded project, we are developing a VR training scenario for Wood Utilization Agents so that they can better understand and practice their job scenarios remotely.

EasyVizAR (September 2021 – Present)

In this NIST-sponsored project, I am developing 3D scene capture techniques using the Microsoft HoloLens 2.

ARHAT (October 2021 – Present)

In this Retirement Research Foundation (RRF) project, we are developing an app to ease home assessments for aging in place. I am building a series of AR tools toward that goal, as well as the overall framework of the app.

Developing Novel Mixed Reality Tools for Insurance Documentation (October 2021 – Present)

In this American Family Insurance Data Science Institute project, we are seeking to develop an AR-based app to detect hail damage on vehicles.

 

Developing a Novel 3D Capture Based Automated Inventory System for Insurance Documentation (August 2020 – Present)

Through the American Family Insurance Data Science Institute, we are working to create a 3D scanning app for vehicle insurance documentation.

 

Penguins VR (May 2021 – Present)

I am working with a team, including members of the Field Day Lab, to create an Adélie penguin VR experience through an NSF-funded, polar-research-focused AISL project.

Anti-Oppressive VR (August 2020 – Present)

In an ongoing collaboration with Dr. Maxine McKinney de Royston of the Curriculum and Instruction Department in the School of Education, we are working to create a new VR technology-driven anti-oppressive curriculum for secondary education teachers in an effort to reduce discriminatory and racist teaching practices.

Soil VR (January 2021 – Present)

In an ongoing collaboration with Dr. Thea Whitman of the Soil Science department, we are working to create a VR experience in which the user is shrunk to the size of a soil particle and can fly through the soil to learn about its properties.

Home Modifications Project (July 2019 – August 2020)

Members of the Virtual Environments Group and colleagues from the Department of Design Studies at the School of Human Ecology are 3D scanning homes and performing in-home accessibility assessments as a means of improving accessibility for persons with mobility constraints. This project is supported by the Tommy G. Thompson Center on Public Leadership.

Unity Point Cloud Plugin (June 2018 – Present)

Kevin Ponto and Ross Tredinnick are working to adapt their point cloud rendering algorithms for LiDAR data into a Unity plugin. This work is supported by the Wisconsin Alumni Research Foundation.
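
As a rough, hypothetical sketch of what pushing LiDAR points into Unity can look like (not the actual plugin code; the class and field names here are made up), a batch of points can be handed to the GPU as a Mesh with point topology so no triangulation is needed:

```csharp
using UnityEngine;

// Minimal sketch: render a batch of LiDAR points as a Unity point mesh.
// Positions and colors would come from the plugin's native loader; here
// they are plain arrays for illustration.
public class PointCloudBatch : MonoBehaviour
{
    public Material pointMaterial;   // a shader that draws points, assumed supplied

    public void Build(Vector3[] positions, Color32[] colors)
    {
        var mesh = new Mesh();
        // Large LiDAR batches exceed the 16-bit index limit, so use 32-bit indices.
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        mesh.vertices = positions;
        mesh.colors32 = colors;

        // One index per vertex, drawn with point topology (no triangles needed).
        var indices = new int[positions.Length];
        for (int i = 0; i < indices.Length; i++) indices[i] = i;
        mesh.SetIndices(indices, MeshTopology.Points, 0);

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        gameObject.AddComponent<MeshRenderer>().material = pointMaterial;
    }
}
```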


Walkability Project (June 2017 – August 2018)

The LEL and the Department of Continuing Studies, together with Faculty Associate Travis Flohr from the Department of Landscape Architecture, are in the process of 3D scanning several areas of campus with heavy pedestrian traffic to study people's preferences related to the walkability of these areas. The scanning project will be our largest to date as a lab, with around 90 scans covering most of the west end of campus near WID.


Mt Horeb Museum Little Norway 3D Building Kiosk (April 2017 – December 2018)

The LEL, together with the Mt. Horeb Historical Society, is working on a VR HMD kiosk featuring the Stave Church 3D scan (see below). More scans of historical buildings in the Mt. Horeb area will be added later this year. A great thanks to the Vicker Charitable Trust, Alex Hobson, Scott Winner, and the Mount Horeb Historical Society for helping make this project a reality. Look for the display kiosk to be active starting in June 2017!


UW Foundation Alumni Park Project (March 2017 – July 2017)

The LEL is working together with the UW Foundation on a portable VR HMD system for a preview display of the new UW Alumni Park, currently under construction adjacent to the Memorial Union Terrace. The project will be shown at a public event at the University Ridge golf course and will eventually be housed at the new One Alumni Place alumni visitors center!


Crime Scene LiDAR scanning Cost / Benefit Analysis Project (Feb 2017 – December 2019)

In a collaborative project with the Dane County Sheriff's Office, with funding through the Department of Justice, members of the LEL and I will be conducting a research study comparing LiDAR scanning for crime scene evidence acquisition to the traditional diagramming methods currently used by most law enforcement agencies. An in-depth cost-benefit analysis will compare the costs of the two techniques. We are also partnering with the UW-Platteville Criminal Justice program to make use of their crime scene investigation house located on the UW-Platteville campus.


Immersive Molecular Visualization via ChimeraX (November 2016 – Present)

Simon Smith, an LEL research intern, and I are working to adapt a new molecular visualization program called ChimeraX to work in our CAVE and Powerwall display environments. ChimeraX is a state-of-the-art tool for molecular visualization currently being developed at the UCSF RBVI (Resource for Biocomputing, Visualization and Informatics).


Exploring the Universe from Antarctica (September 2016 – September 2019)

In collaboration with the WIPAC group here on campus, we are bringing a VR / touch table display system to the 3D Discovery Niche within the Town Center. This NSF-funded project aims to increase informal learning about polar research among the public by combining a touch table with HMDs to create an interactive VR experience.


New Unity3D CAVE Plugin (August 2016 – Present)

Various students and colleagues within the lab and I have been working on a new and improved CAVE plugin for the Unity3D game engine. This version of the plugin runs natively in Unity3D and does not require an external program to provide data for camera transformations and the like. We are in the beginning stages of testing the plugin with colleagues at other universities to build collaborations and to improve the plugin so it can adapt to several different immersive VR display setups.
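
To give a sense of what the camera-transformation side of such a plugin involves, here is a minimal, hypothetical sketch (not the plugin's actual code) that drives one wall's camera with an off-axis projection computed from the tracked eye position and the wall's corner points, using the standard generalized perspective projection:

```csharp
using UnityEngine;

// Sketch of an off-axis (asymmetric) projection for one CAVE wall. Wall
// corners and the tracked eye position are given in Unity world space.
[RequireComponent(typeof(Camera))]
public class OffAxisWallCamera : MonoBehaviour
{
    public Transform lowerLeft, lowerRight, upperLeft;  // wall corners
    public Transform trackedEye;                        // fed by the tracking system

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();

        // Orthonormal basis of the wall plane.
        Vector3 pa = lowerLeft.position, pb = lowerRight.position, pc = upperLeft.position;
        Vector3 vr = (pb - pa).normalized;              // wall right
        Vector3 vu = (pc - pa).normalized;              // wall up
        Vector3 vn = Vector3.Cross(vu, vr).normalized;  // wall normal toward the viewer (Unity is left-handed)

        Vector3 eye = trackedEye.position;
        Vector3 va = pa - eye, vb = pb - eye, vc = pc - eye;

        float d = -Vector3.Dot(va, vn);                 // perpendicular eye-to-wall distance
        float n = cam.nearClipPlane, f = cam.farClipPlane;

        // Frustum extents on the near plane (asymmetric in general).
        float l = Vector3.Dot(vr, va) * n / d;
        float r = Vector3.Dot(vr, vb) * n / d;
        float b = Vector3.Dot(vu, va) * n / d;
        float t = Vector3.Dot(vu, vc) * n / d;

        // Place the camera at the eye, aligned with the wall, and set the
        // asymmetric frustum directly.
        cam.transform.position = eye;
        cam.transform.rotation = Quaternion.LookRotation(-vn, vu);
        cam.projectionMatrix = Matrix4x4.Frustum(l, r, b, t, n, f);
    }
}
```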


UW Virtual Brain Project (July 2016 – Present)

I am working with new faculty member Karen Schloss on a brain visualization within the CAVE. The goal is to have people navigate a large-scale brain in the CAVE so we can understand how changing the colors of different parts of the brain affects how well people retain anatomical information. This has been very interesting and exciting work so far!


Mosquito Landing Simulation (Sept 2015 – March 2016)

In collaboration with a local industry partner, other members of the LEL and I developed a VR simulation of mosquitoes landing on a person's arms, based on real-life landing data supplied by the partner. The project will be used for education and outreach by the industry partner. It made use of a new Oculus CV1 and a Leap Motion for tracking the user's arms and hands.


Wonder VR Collaboration (2015)

In conjunction with the UW-Madison Discovery to Product (D2P) group, local artist Lisa Frank, LEL undergraduate Sam Solovy, and I are working to develop two outdoor Unity3D scenarios for the Oculus Rift that integrate Lisa's naturalistic photography and artwork into 3D exploratory environments. Visit this site for more information on WonderVR.

3D Scan of Stave Church at Little Norway

We scanned a historic church at the former site of Little Norway in Blue Mounds, WI. It is the only remaining intact building from the 1893 World's Fair in Chicago that hasn't been renovated. Builders in Norway were commissioned to construct the church in only 100 days and ship it across the Atlantic to the fair, 122 years ago now. After the fair, the building was moved to the Wrigley estate in Lake Geneva, and in 1935 it was purchased by Scott Winner's great uncle and moved to the Little Norway site.

Scott had been trying to sell the property after Little Norway closed, and last year he was contacted by people from Norway, one of whom believed their grandfather had done the original carvings and part of the construction of the church. This was confirmed, and the group in Norway gathered enough money to purchase the church and ship it back to Norway, where it will now stand on the family's original land. Only 28 churches of this type exist in the world. The church will now be able to stay in the US, at least in a "virtual" form.


3D Scan of Frank Lloyd Wright's Taliesin (April 2015)

In a joint project with the Frank Lloyd Wright School of Architecture, later this month we will be performing a 3D LiDAR scan of Taliesin, located near Spring Green, WI. We plan to scan as much of the interior of the home as possible over a two- to three-day span, as well as some outdoor areas. The intention is to create a 3D point cloud from the resulting data that we can share and visualize in an educational setting with people who may not have easy access to the location. Perhaps it will inspire some future designers and architects! More info to come on the process and the results!

vizHOME Viewer Point Cloud Rendering (Oct 2013-Present)

The vizHOME Viewer is a large subproject I have been leading for the federally funded research grant "vizHOME: A context-based health information needs assessment strategy," courtesy of AHRQ. The application renders massive point clouds within the CAVE, on the Oculus Rift, in DSCVR, on the desktop, and on the Dev Lab wall. It streams data from an out-of-core octree and displays points in 3D stereo using a variety of GPU shaders. The application can also handle mirrors, render planes, and select points via a tracked 3D wand. It will serve as the main tool for conducting the next phases of the research study.
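
The viewer itself is written in C++ with OpenGL, but the core out-of-core idea can be sketched briefly (conceptual C# with made-up types and thresholds, not the actual vizHOME code): each frame, traverse the octree, keep nodes whose projected size justifies drawing their points, and queue any nodes not yet resident for asynchronous loading from disk:

```csharp
using System.Collections.Generic;

// Conceptual sketch: pick which octree nodes to draw this frame based on
// their approximate on-screen size, and request any missing ones.
public class OctreeNode
{
    public float Size;                                  // edge length of the node's cube (meters)
    public (float x, float y, float z) Center;
    public bool Loaded;                                 // points resident in GPU memory?
    public List<OctreeNode> Children = new List<OctreeNode>();
}

public static class OutOfCoreTraversal
{
    // projectionScale = screenHeightPx / (2 * tan(fov / 2)), precomputed by the caller.
    public static void Select(OctreeNode node, (float x, float y, float z) eye,
                              float projectionScale, float pixelBudget,
                              List<OctreeNode> drawList, Queue<OctreeNode> loadQueue)
    {
        float dx = node.Center.x - eye.x, dy = node.Center.y - eye.y, dz = node.Center.z - eye.z;
        float distance = (float)System.Math.Sqrt(dx * dx + dy * dy + dz * dz);

        // Approximate on-screen size of this node in pixels.
        float screenSize = projectionScale * node.Size / System.Math.Max(distance, 0.001f);
        if (screenSize < pixelBudget) return;           // too small to matter: cull this subtree

        if (!node.Loaded) { loadQueue.Enqueue(node); return; }  // stream in from disk later
        drawList.Add(node);                             // coarse points for this region

        foreach (var child in node.Children)            // refine where more detail is needed
            Select(child, eye, projectionScale, pixelBudget, drawList, loadQueue);
    }
}
```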

Windows 8.1 Upgrade (Oct-Dec 2014)

In the fall of 2014 I upgraded the CAVE operating systems to Windows 8.1, transitioning them from an outdated Windows XP framework. This project presented many challenges in configuring newer graphics drivers to function with our projector setup and in ensuring that our software framework carried over to the upgrade correctly. The project added three machines to our previous three-machine setup, so six render nodes now drive the CAVE, adding more processing and GPU power for viewing scenes. This in turn allowed us to reduce our viewport render total (the number of times we needed to draw to produce a single seamless image) from 8 in the old setup down to 2!

Unity VR CAVE Plugin (June 2014 – June 2015)

In 2014 the lab developed a series of plugins that allow Unity applications to be displayed within our CAVE and other display systems (DSCVR, the Dev Lab wall). The plugins were developed in both C# and C++ and provide correct stereo and off-axis projections within the display systems. Enhancements have been made to the plugin to account for synchronized physics. At the 2014 Wisconsin Science Festival we presented a ski mountain simulation built with our Unity plugin, and it was a popular attraction for all visitors. We plan to continue improving the Unity plugin by adding support for animation and other features.
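
One common way to handle the synchronized-physics problem in a clustered display setup, sketched below with hypothetical names (not necessarily the plugin's actual approach), is to let only the master node simulate and have the render nodes apply the master's rigidbody poses to kinematic copies each frame:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of one way to keep physics in lockstep across cluster nodes: only
// the master simulates; render nodes receive poses each frame and apply them
// to kinematic copies of the same rigidbodies.
public class PhysicsStateSync : MonoBehaviour
{
    public bool isMaster;                       // set per cluster node
    public List<Rigidbody> syncedBodies;        // same order on every node

    public struct BodyState { public Vector3 position; public Quaternion rotation; }

    // Master: capture poses after the physics step to send to render nodes.
    public List<BodyState> Capture()
    {
        var states = new List<BodyState>(syncedBodies.Count);
        foreach (var rb in syncedBodies)
            states.Add(new BodyState { position = rb.position, rotation = rb.rotation });
        return states;                          // the transport layer is outside this sketch
    }

    // Render node: apply the master's poses; local physics stays disabled.
    public void Apply(List<BodyState> states)
    {
        for (int i = 0; i < syncedBodies.Count && i < states.Count; i++)
        {
            syncedBodies[i].MovePosition(states[i].position);
            syncedBodies[i].MoveRotation(states[i].rotation);
        }
    }

    void Start()
    {
        if (!isMaster)
            foreach (var rb in syncedBodies) rb.isKinematic = true;
    }
}
```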

DSCVR (2013-Present)

DSCVR is a 20-panel LCD TV display system located in the School of Human Ecology building on the UW-Madison campus, developed by Kevin Ponto and Joe Kohlmann. I came on board later in the project to aid in software development for the system. The system runs CentOS Linux and can run the same Fiona middleware that our lab developed. Much of my work here has been getting all of our applications that run in the CAVE also running on the DSCVR system. The system can also run the Unity plugin that I developed (see above). The color and clarity of this system are an improvement over the CAVE due to the different display technology being used (3D TVs vs. projectors). The system allows for head and body tracking via a Kinect 2, and interaction takes place via a wireless PlayStation 3 controller.

RISE Over Run (Spring 2014)

In the spring of 2014, the LEL collaborated with renowned UW dance chair Li Chiao-Ping to participate in a building-wide dance performance put on by her group. The CAVE served as one of the performance locations, and I developed a visualization using the Ogre3D rendering engine that coincided with the dance performance taking place around and below the CAVE. More information can be found here and here, and a review of the event can be found here. It was a fascinating and wonderful event to participate in.


CSI with LiDAR Collaboration (Feb 2014)

In February 2014, I worked with the Dane County Sheriff's Office to LiDAR scan a home where a homicide had taken place. The benefit of using LiDAR in this situation is that CSIs can gather potential pieces of evidence much faster than through their traditional means. Traditionally, CSIs hand-measure and record locations of evidence, but in situations with a large number of objects and a lot of clutter, this can take many people many work hours to accomplish. For example, the LiDAR scanning took about 10 hours, whereas traditional methods would have taken four CSIs roughly a 40-hour week to gather the evidence. LiDAR produces a true-scale 3D model of the space that is detailed enough to discern evidence on a desktop computer after leaving the location. We are seeking continued grant opportunities and collaborations in this area of work.

WordCAKE (2013-2014)

WordCAKE is a collaborative project between the UW Digital Humanities group and the LEL. In this project, I teamed up with Carrie Roy to develop a new way to visualize word frequencies in sequential texts. The visualization uses a series of alphabetically arranged rings that continue down a cylinder over time, looking a bit like a cake. The application was developed as a SketchUp plugin and is now on the SketchUp Extension Warehouse as an official plugin. The project was also used in the SMART Health IT project for analyzing changes in word frequencies on several health IT websites over time. The plugin can export to a CAVE-viewable format (Ogre3D) for viewing the visualizations within the CAVE. You are pretty much surrounded by words!
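
As a rough illustration of the layout idea (conceptual C#, not the plugin's actual Ruby code), each word's marker can be placed by mapping its alphabetical position to an angle on the ring, the text's position in the sequence to the ring's height on the cylinder, and the word's frequency to a radial offset:

```csharp
using System;

// Conceptual sketch of a WordCAKE-style layout: words on alphabetized rings,
// with one ring per text in the sequence, stacked along a cylinder.
public static class WordCakeLayout
{
    // Returns a 3D position for a word within a given text "slice".
    // alphaIndex: the word's position in the alphabetically sorted vocabulary.
    public static (double x, double y, double z) Place(
        int alphaIndex, int vocabularySize,
        int textIndex, double ringSpacing,
        int frequency, double baseRadius, double radiusPerOccurrence)
    {
        double angle = 2.0 * Math.PI * alphaIndex / vocabularySize;    // alphabetical ring position
        double radius = baseRadius + radiusPerOccurrence * frequency;  // frequency -> radial offset
        double height = textIndex * ringSpacing;                       // sequence order -> stack height

        return (radius * Math.Cos(angle), height, radius * Math.Sin(angle));
    }
}
```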

IceCube CAVE Visualization (Summer 2013)

The IceCube CAVE Visualization was a collaborative project in which two undergraduate students and I teamed up with the UW IceCube research team to visualize IceCube data within the CAVE. IceCube is the largest running neutrino detection system in the world, located at the South Pole. We created a VR application using the Syzygy VR framework to visualize neutrino events as colored spheres that appear over the (significantly slowed down) time sequence of an event. The user can play events back forward or backward within the CAVE, fly around a to-scale replica of the environment both above and below the ice, and ride a neutrino event from its perspective. The project appeared as a poster at the IEEE VR 2014 conference. A link to the press release regarding this project can be found here.
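
The application itself was built with Syzygy in the CAVE, but the playback idea can be sketched in a few lines (conceptual C# with made-up field names, not the actual application code): scrub a heavily slowed-down event clock forward or backward and show each detector hit once its time has been reached:

```csharp
using System.Collections.Generic;

// Conceptual sketch: scrub a slowed-down neutrino event timeline and report
// which detector hits should currently be drawn as colored spheres.
public struct DomHit
{
    public double X, Y, Z;     // detector module position (meters)
    public double TimeNs;      // hit time within the event (nanoseconds)
    public double Charge;      // used to scale or color the sphere
}

public class EventPlayback
{
    private readonly List<DomHit> hits;
    public double PlaybackTimeNs { get; private set; }
    public double SlowdownFactor = 1e6;    // 1 ms of wall time advances ~1 ns of event time

    public EventPlayback(List<DomHit> hits) { this.hits = hits; }

    // direction = +1 to play forward, -1 to play backward.
    public void Step(double wallClockSeconds, int direction)
    {
        PlaybackTimeNs += direction * wallClockSeconds * 1e9 / SlowdownFactor;
    }

    // Hits whose time has been reached are visible this frame.
    public IEnumerable<DomHit> VisibleHits()
    {
        foreach (var hit in hits)
            if (hit.TimeNs <= PlaybackTimeNs) yield return hit;
    }
}
```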

Kanban (Spring 2013)

Working with undergraduate students Megan Kinneberg and Vito Freese, we developed a CAVE simulation of the health systems engineering organizational technique known as Kanban management. In this CAVE scenario, the Dual View feature of the CAVE was active, so two users could interact within the same scenario while each having a head- and wand-tracked perspective. The wand is used to organize objects into bins and create cards to place on the front of bins, similar to a real surgical suite's material management. An industrial engineering class came to the CAVE as part of a final class project and performed the Kanban exercise within the CAVE in 10 pairs of students.

SculptUp (Fall 2012)

SculptUp was a collaborative project that the lab worked on for the 2013 3DUI contest. The contest called for participants to make a virtual reality application that allows the user to build a world, i.e., the "World Builder" competition. We took an approach slightly different from other participants and developed an in-CAVE sculpting application that allowed a user to create a sculpture "from thin air" within the space of the CAVE using the tracked 3D wand and the Ogre rendering engine. Users could then take their individual sculptures, mesh them when ready, and move them about a space, performing transformations to create a world. A link to the project paper is here.


Say It To See It (Fall 2012)

The Say It To See It project involved using Microsoft Kinect speech recognition to rig up an interface for searching for and downloading 3D models right into the CAVE. This project became a poster at the 2013 3DUI symposium in Orlando, Florida. The process involved setting up speech recognition for several common words and programming the Kinect to recognize a word someone shouts in the CAVE. The recognized word would then be sent through a pipeline that searched the Trimble SketchUp 3D Warehouse website via some web scripting and gathered thumbnails of the results, which we would then display in the CAVE on a 3×3 grid. The user could choose one of the thumbnails, which would trigger another script to download the model from the 3D Warehouse and convert it to OpenSceneGraph. After conversion, the live CAVE app would load the newly made OpenSceneGraph file and the model would drop into the CAVE scene in front of the user.
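
As a simplified sketch of the first stage of such a pipeline (illustrative only; this uses .NET's System.Speech API with a made-up word list and a placeholder hook, rather than the project's actual Kinect audio setup), a small fixed vocabulary can be wired into a recognizer and the recognized keyword handed off to the model-search step:

```csharp
using System;
using System.Speech.Recognition;   // .NET speech API; the real project fed it Kinect audio

// Simplified sketch of the keyword-recognition stage. The downstream steps
// (3D Warehouse search, thumbnail grid, OpenSceneGraph conversion) are only
// represented by the placeholder inside the event handler.
class KeywordListener
{
    static void Main()
    {
        var recognizer = new SpeechRecognitionEngine();

        // Small fixed vocabulary of common, searchable object words (illustrative list).
        var words = new Choices("chair", "table", "car", "tree", "house", "lamp");
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(words)));

        recognizer.SpeechRecognized += (sender, e) =>
        {
            Console.WriteLine($"Heard: {e.Result.Text} ({e.Result.Confidence:F2})");
            // Placeholder: hand the keyword to the model-search pipeline here.
        };

        recognizer.SetInputToDefaultAudioDevice();     // the real setup used the Kinect microphone array
        recognizer.RecognizeAsync(RecognizeMode.Multiple);

        Console.WriteLine("Listening... press Enter to quit.");
        Console.ReadLine();
    }
}
```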

Digital Curation (2012-Present)

The lab maintains a web page dedicated to serving as a repository for the various 3D content our lab has gathered and created over the years. This project involved working with a group of SLIS (School of Library and Information Science) graduate students to reorganize the page to improve workflow and object categories and to make the page more useful to the lab overall. As a side project, I built a series of scripts that allow a user of SketchUp on our lab network to automatically export a model and create an entry on the web page, along with a screenshot of the object, all via a File->Export to Curation command from SketchUp. The same process exports the current model to a CAVE-suitable format using the SketchUp OSG exporter.


Fiona VR Middleware (2012-Present)

Despite some useful functionality provided by Virtual Lab, our lab as a whole wanted more flexibility in our software infrastructure going forward (we had no source code for Virtual Lab) in order to implement specific functionality for research within the CAVE. Therefore, during the summer of 2012, visiting professor Hyun Joon Shin and I started work on our own lab VR middleware. The middleware is named Fiona (after Hyun Joon's daughter's favorite animated character from the movie Shrek). Functionality was completed that summer to enable correct off-axis, non-symmetrical frustum projections within the CAVE and Dev Lab, with basic OpenGL rendering plus tracking via VRPN. Later that summer, integration of a volume rendering application was added, as well as functionality for the Ogre3D rendering engine. I continued work on the framework and added functionality for several other applications, including VMD (Visual Molecular Dynamics), Unity, and a point cloud renderer. Kevin Ponto and Joe Kohlmann also completed a port of Fiona to Linux for use in the DSCVR system. Fiona has become the lab's main software framework and serves as the backbone for several projects. The project is implemented in C++ using OpenGL and GLSL shaders. At present (January 1st, 2015), Fiona has support for Ogre3D 1.9, Voreen 4.4, Unity 4.5, and VMD 1.9.

SketchUp OSG Exporter for Virtual Lab (2012)

My first lab project was learning the Mechdyne-created Virtual Lab VR framework and improving the workflow of creating content for Virtual Lab via a plugin for SketchUp, a free-to-use 3D design program. I developed a plugin for SketchUp using the Ruby programming language that exports content to OpenSceneGraph and simultaneously writes Virtual Lab Python scripts for automatically loading the content in the locations where it existed within the SketchUp model. This allows for a WYSIWYG-style conversion from SketchUp into the CAVE. The plugin enables users to set specific physics properties for individual components and to tag objects as selectable and interactable.