For our third project, we had to "create an interface that re-imagines how people interact with immersive 360-degree video, using a mobile phone and Google Cardboard." I began by thinking about interaction, and it took a while before I landed on an actual experience that also met the prompt. After deciding on a skydiving simulation, I sketched the idea out further to broaden my thinking about how it would work. I also interviewed a friend of mine, an amateur skydiver, to understand the role movement plays in skydiving. My project therefore takes an "immersive 360-degree video" and re-imagines the interface from a simple 360 view into something that accounts for your movement, just as the real world does. Overall, I think that with revision this simulation could be used in skydiving instruction to help new divers better understand the importance of the neutral position, as well as how much movement affects you while you're falling.
Sketches
Ideation Sketches
As mentioned above, I spent a lot of time sketching out different interactions. The ideas ranged from button presses on objects within a video, to retinal tracking, haptics, orientation detection, converting a 360 video into a 2D video, movement detection, audio interaction, and even smell.
Expansion
After the initial ideation phase, I was still largely focused on the interaction itself. However, I did move in the direction of orientation detection/change and skydiving (falling).
"Expert" Interview
Having mostly decided on a skydiving simulation of some sort at this point, I interviewed my friend, Marc. He helps out with his wife's family's skydiving business, Alberta Sky Dive, during the summer. While he's not a skydiving instructor, he is knowledgeable enough to dive on his own. He really helped me understand just how much minor movements can affect your falling trajectory. It's very different from what you see in the movies. Overall, the goal is to maintain a neutral position: falling essentially on your stomach, with your hands and feet slightly raised, and your head looking forward, not down. With this understanding, I had a much clearer picture of how I wanted to interact with my 360 video.
Final Sketch
With this information in hand, I found a suitable video on YouTube that fit my idea and sketched out my interaction around it. The expanded idea is for the simulation to be used in tandem with official skydiving training, such that a new diver could run this simulation until they feel comfortable with it, then discuss things with their instructor, all before actually doing their first tandem drop.
Prototype
Original Video used: https://www.youtube.com/watch?v=4wyemSUJRwA
Note: I did not make nor do I own the above video. Please support the creator by watching the original video before checking out my changes.
Unity + Kinect + 360 Video + Google Cardboard
Unity proved the biggest challenge for prototyping this project. While it is a robust and very useful development environment, it has a few assets that essentially make it "too smart for its own good."
How it works:
One program runs on a computer with a Kinect connected. This program tracks body movement, specifically (in this iteration) the angles at the left and right elbows, measured between the elbow-to-shoulder and elbow-to-hand segments. These angles are used to determine whether each arm is bent or extended. It then sends two messages per update cycle (one per arm) based on that calculation.
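The elbow-angle logic can be sketched as follows. The actual project is written in C# inside Unity with the Kinect SDK, so this is a hedged Python illustration of the geometry only; the 120-degree threshold and the joint names are my own assumptions, not values from the source code.

```python
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b, formed by the segments b->a and b->c.

    Each argument is an (x, y, z) tuple, e.g. shoulder, elbow, hand
    positions as a skeleton tracker might report them.
    """
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    mag = math.sqrt(sum(x * x for x in v1)) * math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / mag))

def arm_state(shoulder, elbow, hand, bent_below=120.0):
    """Classify one arm as 'bent' or 'extended' from its elbow angle.

    The 120-degree cutoff is an illustrative assumption; a fully straight
    arm gives an angle near 180 degrees.
    """
    return "bent" if joint_angle(shoulder, elbow, hand) < bent_below else "extended"
```

Running this per tracked frame and sending one classification per arm matches the "two messages per update cycle" behaviour described above.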
The other program runs on an Android phone. It displays the 360 video (linked above) and receives the signals broadcast by the Kinect program. Based on these signals, it changes the camera "rotation" by a small amount.
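On the receiving side, the camera nudge can be sketched like this. Again, the real implementation is a Unity C# script; this Python version only illustrates the idea that an asymmetric arm position yaws the view a small amount each update. The step size, sign convention, and the notion that a bent arm turns the diver toward that side are illustrative assumptions.

```python
def update_yaw(current_yaw, left_state, right_state, step=0.5):
    """Return the new camera yaw (degrees) after one pair of arm messages.

    If one arm is bent while the other is extended, the view rotates a
    small fixed amount toward the bent side; a symmetric (neutral) pose
    leaves the yaw unchanged. All values here are assumptions for the
    sake of illustration.
    """
    if left_state == "bent" and right_state == "extended":
        current_yaw -= step
    elif right_state == "bent" and left_state == "extended":
        current_yaw += step
    return current_yaw % 360.0
```

Because the small per-frame steps accumulate, holding one arm bent sends the camera into a continuous spin, which is exactly the effect the simulation is trying to teach.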
While it's not a perfect simulation by any standard, it does get the point across of how a simple change in arm position can set a diver into a spin.
Below are the Video Demo, Source Code, and Executable files.
Video
Source Code
For my personal source code for the simulation, navigate to Skydiving Simulation > Assets > NetworkIt > Scripts > NetworkEventTemplates.cs. For the Kinect program, navigate to SkydivingSimKinect > Assets > Scripts > BodyNetworkTemplate.cs. Both of these were templates given to us by Kevin Ta, our TA, and I've modified both to fit with my program.
Executable Files
Click the button below to get a folder with both executable files inside. The Android .apk may not install on your phone; if that is the case, please grab the source code (above), open the project in Unity, and use Build and Run to deploy it to your phone from there.