Here is an example of Dr. David Fifer scanning a physical object, converting it into a 3D model via Autodesk’s 123D Catch, and using the generated models in his classroom. AR was used to create a brief instructional video, and the model will be available for students to download and interact with via a Unity web interface.
In this video, Dr. Gwen Graham of EKU shows off her family heirloom – a slave doll named Buttercup. She wanted to share this doll with her face-to-face and online students without bringing in the physical doll and perhaps damaging it in the process. The doll will be available to students via this video and an interactive 3D model interface.
To make this augmentation possible, the original doll was scanned into a 3D model, converted to a .FBX file, and used in this video lecture via an Augmented Reality interface.
This video explains the beginnings of African American art with the use of Augmented Reality. The model (a ship) was built using Google SketchUp and used with the Metaio AR software to augment and design the final scene. Dr. Graham used the marker to explain various parts of the journey African slaves had to endure while being transported to America.
This is our most recent AR project, and it focuses on Early Child Sensorimotor Development. The project is part of OTS 515/715, taught by Dr. Leslie Hardman, and it introduces the culminating project students need to complete for this class. The final project consists of students creating and adapting a classroom area for children with special needs.
This brief presentation covers Constructivist learning theory and its alignment with augmented reality. A brief literature review is included to explain how learners may learn with AR, and several examples of our previous work are included to support the research findings and best practices when using AR for educational purposes.
Google recently announced Project Tango – a 3D mapping framework that will allow for simple scanning and virtual generation of a real-world 3D environment. The project falls into the category of sensing applications, built on sensors such as Microsoft’s Kinect and offerings from PrimeSense, Pelican Imaging, SoftKinetic, and PMD, that have emerged in recent years.
Project Tango includes an SDK and a phone-like sensing device that is similar to the Kinect in functionality, although it appears to be missing a depth sensor and texture projector. Its sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real time and combining that data into a single 3D model of the space around the user.
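Conceptually, the fusion described above can be reduced to transforming each frame’s camera-space depth measurements by the device’s current pose and folding the results into one world-space point cloud. Here is a minimal Python sketch of that idea – the function names and the yaw-only rotation are my own simplifications for illustration, not part of the Tango SDK:

```python
import math

def camera_to_world(point, yaw, position):
    """Rotate a camera-frame point by the device yaw and translate it
    by the device position to get world coordinates (yaw-only rotation
    kept for brevity; a full 3D rotation works the same way)."""
    x, y, z = point
    px, py, pz = position
    c, s = math.cos(yaw), math.sin(yaw)
    # rotate about the vertical (y) axis, then translate
    return (c * x + s * z + px, y + py, -s * x + c * z + pz)

def accumulate(cloud, depth_points, yaw, position):
    """Fold one frame of depth measurements into the running world-space cloud."""
    for p in depth_points:
        cloud.append(camera_to_world(p, yaw, position))
    return cloud

# two frames of the same surface, taken from different device poses,
# land in one shared world-space model
cloud = []
accumulate(cloud, [(0.0, 0.0, 1.0)], yaw=0.0, position=(0.0, 0.0, 0.0))
accumulate(cloud, [(0.0, 0.0, 1.0)], yaw=math.pi / 2, position=(1.0, 0.0, 0.0))
```

Doing this a quarter million times per second, with poses updated in real time, is essentially what makes the single combined 3D model possible.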
This framework will be beneficial when creating augmented reality experiences, and I assume there will be close integration with Google Glass. I can see this being used wherever a 3D point map of real space is needed, such as drone navigation in warehouses or augmented reality scenarios. Imagine creating a 3D point cloud of a factory floor and incorporating scenarios such as a fire or a falling piece of equipment that play out when the user physically approaches a trigger area. Such a scenario would be great for training. Another example would be scanning a museum space and loading audio/visual material to supplement existing exhibits; the supplementary material would be visible with the aid of Google Glass or a similar AR interface. The Tango device could also be useful for 3D model generation, a topic we previously covered here, and here.
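The factory-floor idea boils down to a simple proximity check: a scenario fires when the user’s tracked position enters a trigger zone placed in the scanned space. A hypothetical Python sketch – the scenario names, coordinates, and radii are made up for illustration:

```python
import math

# hypothetical trigger zones placed in a scanned factory-floor model:
# each maps a scenario name to a (center, radius-in-meters) sphere
triggers = {
    "fire_drill": ((2.0, 0.0, 5.0), 1.5),
    "falling_equipment": ((10.0, 0.0, 3.0), 2.0),
}

def active_scenarios(user_pos):
    """Return the training scenarios whose trigger sphere contains the user."""
    return [name for name, (center, radius) in triggers.items()
            if math.dist(user_pos, center) <= radius]

active_scenarios((2.5, 0.0, 5.5))   # user standing near the fire trigger
```

The same pattern covers the museum example: swap the training scenarios for exhibit supplements, with the user’s Tango-tracked position as the input.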
Tango offers a glimpse of things to come in the near future. Google is doing a great thing here by developing a framework that will help future generations of artists, scientists, hobbyists, and other visionaries create new ways of interacting with our environment.
This is the third iteration of the augmented reality solar system project. We posted an early version of this project in 2011 using the BuildAR framework, which worked only on desktop computers. In 2012 we added a Flash implementation of the same concept with downloadable lesson plans and source files.
As we previously mentioned, we recently switched to the Metaio framework, which enables us to publish our projects as mobile applications (via the free Junaio AR browser – an app for iOS and Android) and desktop applications (downloadable standalone packages). This latest version of the Solar System comes with a redesigned book titled Augmented Reality Magic Book: Solar System. The book contains essential factual knowledge about the planets of the Solar System (NASA.gov, 2013), and it comes with a set of interactive AR markers that project multimedia content such as 3D models, videos, images, and audio. Each planet has two markers: a main marker with a 3D model of the planet, and a second marker that contains supplementary content. The book is available for free download in .pdf and .pub formats. The content is taken from NASA.gov, and if you have Microsoft Publisher, you are free to alter it under this Creative Commons license (Attribution-NonCommercial 4.0 International).
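Under the hood, a marker-based AR book like this is essentially a lookup table from detected marker IDs to the content the AR browser should project. A hypothetical Python sketch of such a registry – the marker IDs and asset file names are invented for illustration and are not the book’s actual files:

```python
# hypothetical marker registry for a Solar System magic book:
# each planet gets a main marker (3D model) and a supplementary marker
markers = {}
for planet in ["Mercury", "Venus", "Earth", "Mars",
               "Jupiter", "Saturn", "Uranus", "Neptune"]:
    markers[f"{planet}_main"] = {"type": "model", "asset": f"{planet.lower()}.fbx"}
    markers[f"{planet}_extra"] = {"type": "media", "asset": f"{planet.lower()}_facts.mp4"}

def lookup(marker_id):
    """Resolve a detected marker ID to the content that should be projected,
    or None when the camera sees something that is not a registered marker."""
    return markers.get(marker_id)
```

With two markers per planet, the registry holds sixteen entries, mirroring the book’s main-plus-supplementary marker pairs.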
Our intent is to make this book available to the general public – specifically K-12 teachers, parents, and students – with the goal of making learning more fun, engaging, and constructive. This aligns with our ongoing goal to further explore the use of Augmented Reality in learning and education, and to provide the broader community with free, meaningful, useful, and engaging AR content and frameworks.
It’s been a year since we described how to create accurate 3D models using ReconstructMe. A lot has changed in our little shop, and we reflected a bit on the work we have created over the past 3 years. The use of Augmented Reality has expanded, and AR has entered the mainstream. Tonight I googled “Augmented reality in Education” and ended up reviewing over 30 pages of search results full of AR projects, articles, and presentations on the subject. Amazing stuff.
We are planning to publish several new projects that are compatible with mobile devices and Google Glass. We are now switching to the Metaio SDK, which will allow us to develop AR projects on mobile devices as well as on the PC platform. We will keep you updated.
Back to the topic of 3D scanning and 3D modeling. The latest app that comes pretty close to streamlining 3D model generation is Autodesk’s 123D Catch. If you are not familiar with this tool, 123D Catch lets you take multiple images (up to 40 in the mobile app, up to 70 on the PC) of an object you wish to turn into a 3D model, and then converts them into a textured 3D model. The conversion is actually done in the cloud: the application uploads the images to Autodesk’s servers for processing. Depending on the server load, it takes around 5-10 minutes for the model to be generated, and what comes back is a fairly accurate (80% complete) 3D model. If you perfect the process, you can create 3D models fairly rapidly. You can judge the results for yourself:
We are really impressed with the final results, and we will be using this tool in our upcoming projects. Meanwhile, check out this intro tutorial on 123D Catch:
This lunar phases augmented reality lesson was developed as a part of my doctoral studies, and I used it as the primary learning content for the AR experimental group. I chose lunar phases for my research because (a) the concept depicts material rich in spatial information; (b) it is often difficult to grasp; and (c) several studies have suggested it as suitable learning content for AR treatment.
The lunar phases lesson consists of 6 sections:
General Introduction to the relationship between the earth and the moon,
Introduction to the lunar phases,
Third quarter, and
Student Interacting with Augmented Reality Lunar Phases Lesson
Each section explains the relationship between the earth and the moon, and combined, they create one coherent lesson about lunar phases. The lesson has been validated by 2 experts who hold PhDs in astrophysics, and based on observations during data collection (n=182), students enjoyed it very much. I must note that I was not able to measure differences between the experimental group (AR) and the control group (images and text treatment).
To use this material in your classroom (free), you will need to do the following:
Creating 3D models often requires hours of labor and knowledge of complex 3D modeling software. There is no way around it: if you want a 3D model of a specific object, you have to search for the model, download it, and tweak it; pay a 3D modeling expert to create it; or spend numerous hours learning software such as Blender, 3ds Max, or Google SketchUp to create the model yourself. Simple models such as the Earth, a building, or anything rectangular may not take a lot of time to create, but with complex models (e.g., a buffalo) the creation process becomes far more grueling.
These three articles (link 1, link 2, link 3), which deal with 3D printing and with using the Microsoft Kinect to scan physical objects and convert them into 3D models, got us thinking about using the Kinect to create 3D models for Augmented Reality applications.
Tony Buser explains how to use the Kinect as a 3D scanner in this video: 3D scan cleanup project. We followed his scanning procedure to create 3D models, and after several experiments we identified a way to use the Kinect for 3D model scanning more effectively and efficiently. Below is the breakdown of our procedure:
Prepare the model to be scanned. We used a wooden buffalo (Figure 1) and placed it on a “lazy susan” (a rotating circular tray, normally placed on top of a table to help move food around a large table). Rotating the buffalo by hand would create an inaccurate, deformed scan, so to get the most precise scan we used the lazy susan. While rotating, pay close attention to the rotation speed: too slow or too fast will result in a deformed 3D model.
Scan the model. Make sure the model you are scanning is positioned a minimum of 40 cm from the device and placed within an area of about 1 square meter.
Once you are done scanning, you will be asked to save the model. The file is in OBJ or STL format and will need to be touched up.
Obtaining the scanned 3D model generated via the Kinect and ReconstructMe is only the first step in creating your own 3D models. The scanned model may have missing areas (e.g., holes), rough surfaces, extraneous surfaces, and it may lack color. Fixing the model and preparing it for final use will be the subject of the second part of this tutorial, which we plan to publish by the end of November 2012.
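One of the touch-up tasks mentioned above – finding the holes a scan left behind – can even be detected automatically. In a watertight mesh every edge is shared by exactly two faces, so any edge used only once lies on the boundary of a hole. A small Python sketch of that check (it assumes the face index tuples have already been parsed out of the OBJ file; the parsing itself is omitted):

```python
from collections import Counter

def boundary_edges(faces):
    """Count how many faces use each edge; edges used by exactly one
    face are boundary edges, i.e. the rim of a hole in the scan."""
    edges = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges[(min(a, b), max(a, b))] += 1
    return [edge for edge, count in edges.items() if count == 1]

# a single triangle is an open surface: all three edges are boundaries
open_mesh = [(0, 1, 2)]
# a tetrahedron is watertight: every edge is shared by two faces
closed_mesh = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
```

A scan that comes back from ReconstructMe with a non-empty boundary list is one that still needs hole-filling before final use.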
Here is the video we produced to help you visualize the scanning process: