Following this week's meeting with my tutor, I was advised to do some more research into Augmented Reality: its current state, the applications available, and how my project could be developed without the need to hold a device in front of us, but instead to wear it.
ARKit – https://developer.apple.com/arkit/
New in iOS 11, ARKit is a framework that allows developers to create augmented reality experiences for iPhone and iPad.
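To get a sense of how lightweight the setup is, here is a minimal sketch of starting an ARKit world-tracking session from a view controller. The class name, the programmatically created ARSCNView and the overall structure are my own assumptions rather than anything prescribed by Apple's documentation.

```swift
import UIKit
import ARKit

// Minimal sketch: a view controller that starts world tracking in an ARSCNView.
// The class and property names here are illustrative, not prescribed by ARKit.
class ARViewController: UIViewController {

    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking uses the camera and motion sensors to follow the device in 3D space.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```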
Sprite-Kit – http://www.sprite-kit.com/about/
Sprite-kit.com, run by iUridium, is a central hub of tutorials, books, open source projects and various assets for developers working with Sprite Kit, Apple's iOS/Mac graphics framework for 2D games.
Sprite Kit is a powerful iOS/Mac graphics framework for 2D games (side-scrolling shooters, puzzle games, and platformers) introduced by Apple, targeting games on iOS 7, OS X Mavericks and potentially Apple TV (a short sketch follows the list below):
- A flexible API lets developers control sprite attributes such as position, size, rotation, gravity, and mass without requiring advanced knowledge of the underlying OpenGL code
- Built-in support for physics to depict the force of gravity and inertia
- Built-in support for particle systems for creating essential game effects such as fire, explosions, and smoke
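To get a feel for the API points listed above, a minimal Sprite Kit scene that positions a sprite, gives it a physics body and attaches a particle emitter might look like the sketch below. The scene class, the node setup and the "Fire" emitter asset name are my own assumptions for illustration.

```swift
import SpriteKit

// Minimal sketch of a SpriteKit scene: one sprite with position, size,
// rotation and a physics body so it falls under the built-in gravity.
class GameScene: SKScene {

    override func didMove(to view: SKView) {
        let box = SKSpriteNode(color: .red, size: CGSize(width: 60, height: 60))
        box.position = CGPoint(x: frame.midX, y: frame.midY)
        box.zRotation = .pi / 8   // rotate the sprite slightly

        // Attach a physics body so gravity and inertia are simulated automatically.
        box.physicsBody = SKPhysicsBody(rectangleOf: box.size)
        box.physicsBody?.mass = 1.0
        addChild(box)

        // Built-in particle support: load an emitter created in Xcode's particle editor.
        // "Fire.sks" is a placeholder asset name, not something SpriteKit ships with.
        if let fire = SKEmitterNode(fileNamed: "Fire") {
            fire.position = CGPoint(x: frame.midX, y: frame.minY)
            addChild(fire)
        }
    }
}
```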
This platform can also be integrated with Apple's ARKit to create an AR game and other interactive artifacts; a sketch of that integration follows below. I could use this combination for my final development – https://blog.pusher.com/building-ar-game-arkit-spritekit/
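Following the general approach in the tutorial linked above, the integration can be sketched with an ARSKView, which runs the AR session and asks its delegate for a SpriteKit node to display at each anchor. The view controller name, the 0.5 m placement distance and the emoji label are my own choices, not part of the tutorial or the ARKit API.

```swift
import UIKit
import ARKit
import SpriteKit

// Sketch of combining ARKit and SpriteKit: ARSKView runs the AR session and
// renders 2D SpriteKit nodes at the positions of ARAnchors in the real world.
class ARGameViewController: UIViewController, ARSKViewDelegate {

    let sceneView = ARSKView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        sceneView.presentScene(SKScene(size: view.bounds.size))
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // Place an anchor half a metre in front of the camera, e.g. when the user taps.
    func placeAnchor() {
        guard let currentFrame = sceneView.session.currentFrame else { return }
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.5
        let transform = simd_mul(currentFrame.camera.transform, translation)
        sceneView.session.add(anchor: ARAnchor(transform: transform))
    }

    // ARSKViewDelegate: return the SpriteKit node to display for each anchor.
    func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
        return SKLabelNode(text: "👾")
    }
}
```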
Aurasma – https://www.aurasma.com/
Aurasma is available as a software development kit, or as a free app for iOS and Android-based mobile devices. Its image recognition technology uses a smartphone's camera to recognise real-world images and overlay media on top of them in the form of animations, videos, 3D models and web pages. This type of platform could be an option for developing my project, as the app is already there for me to use; I would simply have to add my own data and follow the instructions provided.
Wearable Technology –
https://www.androidauthority.com/wearable-computing-history-238324/3/
https://www.interaction-design.org/literature/article/augmented-reality-the-past-the-present-and-the-future
Augmented Reality has come a long way from when it was first achieved in 1957 by the cinematographer Morton Heilig. His invention, the Sensorama, delivered visuals, sounds, vibration and smell to the viewer. It wasn't computer-controlled, but it was the first example of adding additional data to an experience.
Augmented reality today is achieved through a range of technological innovations, all of which can be implemented on their own or in conjunction with each other.
- General Hardware Components – The processor, display, sensors and input devices. These are typically found in a smartphone, which contains all the hardware required to act as an AR device.
- Displays – A monitor is more than capable of displaying AR data; however, other systems can achieve the same, including optical projection systems, head-mounted displays, eyeglasses, contact lenses and handheld displays.
- Sensors and input devices – GPS, gyroscopes, compasses, wireless sensors, touch recognition, speech recognition and eye tracking (see the sketch after this list).
- Software – There is an Augmented Reality Markup Language (ARML), which is being used to standardise an XML grammar for describing AR scenes. There are also software development kits (SDKs) which offer simple environments for AR development.
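As a small illustration of how two of the sensor types listed above are reached on iOS, here is a rough sketch using Core Location (GPS and compass heading) and Core Motion (gyroscope/attitude). The CLLocationManager and CMMotionManager APIs are standard; the surrounding class and what it prints are my own assumptions, and an AR framework such as ARKit would normally fuse this data behind the scenes.

```swift
import CoreLocation
import CoreMotion

// Sketch: reading two of the sensor types listed above on iOS.
// Core Location supplies GPS position and compass heading; Core Motion
// supplies gyroscope/attitude data.
class SensorReader: NSObject, CLLocationManagerDelegate {

    let locationManager = CLLocationManager()
    let motionManager = CMMotionManager()

    func start() {
        // GPS and compass
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
        locationManager.startUpdatingHeading()

        // Gyroscope / device attitude
        if motionManager.isDeviceMotionAvailable {
            motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
                guard let attitude = motion?.attitude else { return }
                print("pitch: \(attitude.pitch), roll: \(attitude.roll), yaw: \(attitude.yaw)")
            }
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        print("compass heading: \(newHeading.trueHeading)")
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let coordinate = locations.last?.coordinate {
            print("GPS: \(coordinate.latitude), \(coordinate.longitude)")
        }
    }
}
```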
Google Glass seems to have established a new kind of consumer electronics device, distinguished by its integration of an optical head-mounted display, augmented reality, a camera, web access, and voice-based interaction. Google Glass can be categorised as a "ubiquitous" computer, mainly because it can be used both actively and passively.