Monday 18 February 2008

Introduction

Though I have been keeping track of progress, I haven't put any of it on my blog before now, so catching the blog up is in order. I guess I have a natural aversion to keeping blogs... Oh well, time to get over it. :-/

I guess an introduction to the project is as good a place to start as any.

My (and Graham's) project is to build an SDK or framework which would allow users to easily build Augmented Reality applications.
This means we are developing hardware and software components which can be used together or in isolation (depending on the application being built) to handle some aspect of an Augmented Reality application, tools to configure and control these components, and an API for building custom components which interact with the "stock" components we are developing. We will also be developing some sample applications to demonstrate the use of our framework.

The hardware/software components collectively would allow us to create virtual environments inside a real physical space. The sensors would provide the system with a stream of input, which would then be processed in some application-dependent way to produce audio feedback for the user.
  • The Ubisense tags will let the framework know where in the physical environment the user is.
  • The digital compass and accelerometer will let the framework know which direction the user is facing. The importance of this will be discussed in a later post.
  • The ultrasonic sensor will detect physical objects which may be in the person's way.
  • The wireless headphones will provide the user with audio feedback. The audio will be 3D sound, generated with the FMOD audio API.
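To make the position/heading pairing concrete, here is a rough sketch (in Python, purely illustrative — the real spatialisation is FMOD's job) of why the compass matters: the Ubisense position alone gives the bearing from the user to a virtual sound source, but only after subtracting the heading do we know whether the sound should appear to the user's left or right. The function names and the crude constant-power pan are my own for illustration, not part of any real API.

```python
import math

def relative_angle(user_pos, heading_deg, source_pos):
    """Angle of a sound source relative to the direction the user is
    facing, in degrees (-180..180, positive = to the user's right).
    Positions are (x, y); heading 0 means facing the +y axis."""
    dx = source_pos[0] - user_pos[0]
    dy = source_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # bearing from user to source
    angle = bearing - heading_deg               # heading from the compass
    return (angle + 180) % 360 - 180            # normalise into -180..180

def stereo_gains(angle_deg):
    """Crude constant-power pan derived from the relative angle —
    just to illustrate the idea; FMOD's 3D engine does the real work."""
    pan = max(-1.0, min(1.0, angle_deg / 90.0))  # clamp to [-1, 1]
    theta = (pan + 1) * math.pi / 4              # map to 0..pi/2
    return math.cos(theta), math.sin(theta)      # (left gain, right gain)
```

So a source directly ahead pans centre, and a source 90 degrees to the left plays only in the left ear — regardless of where in the room the user happens to be standing.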

The planned components are:
  1. A headset consisting of a number of sensors (digital compass, accelerometer, ultrasonic sensor, Ubisense tag) and a pair of wireless headphones. This will be the basic interface through which an end user would interact with the applications developed using this framework.
  2. Software components to match the various hardware components. To keep the design of both the software and the hardware modular, instead of controlling the hardware through a single monolithic piece of software, each distinct hardware component will receive its own software daemon to monitor and control it. Roughly speaking, this means there will be a software component for handling the digital compass, the ultrasonic sensor, Ubisense and 3D sound generation.
  3. A central hub which controls the various components. This program would act as a router between all the other parts of the framework and allow for a central place to configure and manage how components are to interact.
  4. Monitoring and management tools. There should be a set of generic tools to monitor the state of the system at any given time, as well as to manage (and possibly recalibrate?) the system through a GUI. These would register themselves with the routing application to receive the commands which they are monitoring.

Each component will communicate over TCP/IP, allowing the framework to be restructured through the routing application at runtime. This also allows for the possibility of running different parts of the system on different computers, which could be useful for spreading computationally intensive simulations out over a number of machines for better performance.
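As a minimal sketch of the routing idea (names and message format are my own assumptions — say, one JSON message per line over a TCP socket), the hub just keeps a table mapping message types to the components that subscribed to them; in-process callbacks stand in for the sockets here:

```python
import json

class Hub:
    """Toy model of the routing hub: components register interest in
    message types, and the hub forwards each incoming message to all
    subscribers. In the real framework the messages would arrive over
    TCP connections rather than direct callbacks."""

    def __init__(self):
        self.routes = {}  # message type -> list of subscriber callbacks

    def subscribe(self, msg_type, callback):
        self.routes.setdefault(msg_type, []).append(callback)

    def dispatch(self, raw_line):
        """Decode one line of JSON and route it by its "type" field."""
        msg = json.loads(raw_line)
        for callback in self.routes.get(msg["type"], []):
            callback(msg)

# e.g. the 3D-sound daemon subscribing to compass updates:
hub = Hub()
received = []
hub.subscribe("compass", received.append)
hub.dispatch('{"type": "compass", "heading": 87.5}')
```

Because routing lives entirely in this table, rewiring which components talk to which is a configuration change at the hub, not a code change in the daemons.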

Some ideas for demonstration applications:
  1. A sound localization test program which would play sounds in a number of different locations and test whether the user can determine "where" in the virtual space each sound is coming from, possibly by simply looking towards it for a number of seconds.
  2. A simple waypoint-based navigation system where the user must navigate through a set of waypoints using only audio feedback to navigate.
We (Graham and I) plan on implementing a number of demonstration applications. More will be posted on them as we begin working on them.
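The core logic of the waypoint demo is simple enough to sketch now (a hypothetical outline, not the actual implementation): a waypoint counts as "reached" once the Ubisense-reported position comes within some threshold distance, at which point the course advances and the audio cue moves to the next waypoint.

```python
import math

class WaypointCourse:
    """Sketch of the waypoint navigation demo. update() is called with
    each new user position; it returns the waypoint the audio cue
    should currently be attached to, or None when the course is done."""

    def __init__(self, waypoints, threshold=0.5):
        self.waypoints = list(waypoints)  # ordered (x, y) targets
        self.threshold = threshold        # "reached" radius, in metres
        self.index = 0                    # index of the active waypoint

    def update(self, user_pos):
        while self.index < len(self.waypoints):
            wx, wy = self.waypoints[self.index]
            dist = math.hypot(wx - user_pos[0], wy - user_pos[1])
            if dist <= self.threshold:
                self.index += 1  # reached: advance to the next waypoint
            else:
                return self.waypoints[self.index]
        return None  # all waypoints visited
```

The threshold (0.5 m here, an arbitrary placeholder) would in practice depend on the positional accuracy we get out of Ubisense.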

The project also has some interesting potential future uses:
  • Helping blind people navigate
  • Improving the audio aspect of augmented reality (there has been a lot of work done in mixing the real and virtual visually, but 3D audio has not been explored as much as it should be)
  • Augmented Reality computer games
  • Since we are only using audio feedback (and nothing significantly visual), perhaps this could be developed into a set of computer games for blind people?
  • It could be combined with traditional Augmented Reality (for example, by adding a head-mounted display), perhaps to create a more realistic and immersive (audio-wise) version of ARQuake