Monday 18 February 2008

15/02/08 - Meeting with supervisor

Today Graham and I had a meeting with our supervisor, Prof. Alan Smeaton, to show him what we have done so far and to discuss the direction of our project. We also used him as a guinea pig to test our demo app - he was able to localize the sounds very quickly, which is a good sign that the sound localization works!
Alan had some good news for us too - DCU will be installing its Ubisense system soon!! We hope to have our Ubisense component working before then, so that we can begin testing our framework with the Ubisense as soon as it is installed.
He also told us about a project which Donal Fitzpatrick and his postdocs are working on: a virtual white cane, which would use tactile feedback to simulate an environment as it would appear to someone using a real cane. Our project has great potential for a future tie-in with theirs: we do not rely on anything visual, we will already have implemented the location tracking their project will also require, and we will be using various sensors (ultrasonic, compass, accelerometer/gyroscope) which could be useful to them. Our audio feedback system may be useful to their project too! It's always nice to discover new uses for a project that may not have been thought of at the start!

For the near future, we intend to meet with Donal Fitzpatrick to gain some valuable feedback on how we could improve our system for use in a non-visual navigation system, and on which sounds we should concentrate for the greatest effect. Also, as Alan has kindly agreed to obtain a sound card with HRTF (head-related transfer function) support, we hope to significantly improve the quality of the 3D sound localization!
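
To give a rough idea of what HRTF support buys us: spatialising a sound essentially means convolving a mono source with a pair of head-related impulse responses (HRIRs) for the desired direction. Below is a minimal sketch in Python/NumPy; the hrir_left/hrir_right arrays are hypothetical stand-ins for measured HRIR data - a card with HRTF support would effectively do this in hardware.

```python
import numpy as np

def spatialise(mono_signal, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to produce a stereo pair.

    hrir_left and hrir_right are hypothetical placeholders for measured
    head-related impulse responses at the desired source direction.
    """
    left = np.convolve(mono_signal, hrir_left)
    right = np.convolve(mono_signal, hrir_right)
    return np.column_stack((left, right))  # (samples, 2) stereo array
```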

Some other things we will work on over the coming weeks are modifying the GUI of the demo app and measuring users' performance - generating statistics on how well people can localize sounds, how quickly they adapt and learn, and so forth. This would be a valuable tool for testing our framework (providing some much-needed metrics of how successful the 3D sound system is) as well as for further research into which sounds work best.
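
As a sketch of the kind of metrics we have in mind (the angular-error measure and the trial format here are my own assumptions, not a finalized design):

```python
import statistics

def angular_error(true_deg, perceived_deg):
    """Smallest absolute difference between two bearings, in degrees."""
    diff = abs(true_deg - perceived_deg) % 360
    return min(diff, 360 - diff)

def summarise(trials):
    """trials: list of (true_angle, perceived_angle, response_time_s) tuples."""
    errors = [angular_error(t, p) for t, p, _ in trials]
    times = [rt for _, _, rt in trials]
    return {
        "mean_error_deg": statistics.mean(errors),
        "stdev_error_deg": statistics.stdev(errors) if len(errors) > 1 else 0.0,
        "mean_response_s": statistics.mean(times),
    }
```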

Once the Ubisense system is up and running, we will begin incorporating the ultrasonic sensor into the project so that we can detect physical objects and begin working on a means of signaling an object's existence to the user through audio.
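
One simple approach we might try is a Geiger-counter-style loop that beeps faster as an obstacle gets closer. In this sketch, read_distance_cm and play_beep are hypothetical placeholders for the sensor driver and the audio component, and the 300 cm range is an assumed figure:

```python
import time

MAX_RANGE_CM = 300.0  # assumed maximum useful range of the sensor

def proximity_loop(read_distance_cm, play_beep):
    """Beep faster as an obstacle gets closer, Geiger-counter style."""
    while True:
        distance = read_distance_cm()
        if distance < MAX_RANGE_CM:
            play_beep()
            # interval shrinks linearly from ~1 s (far) to 0.1 s (very close)
            time.sleep(0.1 + 0.9 * (distance / MAX_RANGE_CM))
        else:
            time.sleep(0.5)  # nothing in range; just poll occasionally
```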

Once all of the above is complete, it will be time to focus solely on the framework aspect of the project: integrating everything we have so far into the message router application (by tuning the command set each component understands, having components connect through the router instead of directly as they currently do, and writing driver Python scripts) and working on a flexible but intuitive API for applications. I also want to start developing a graphical configuration tool (as explained in a previous entry) which would allow the user to specify the routing behavior visually. We will also need to spend some time testing the framework for usability and develop a number of demo apps.
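
For a flavour of what "connecting through the router" might look like from a component's point of view, here is a minimal sketch. The REGISTER/SEND commands, the port number and the newline-delimited text protocol are all assumptions for illustration - pinning down the real command set is exactly the work described above:

```python
import socket

def run_component(name, router_host="localhost", router_port=9000):
    """Register with the message router and forward a sample reading."""
    sock = socket.create_connection((router_host, router_port))
    sock.sendall(("REGISTER %s\n" % name).encode())
    # the router, not the component, decides where this message ends up
    sock.sendall("SEND heading=273\n".encode())
    sock.close()

run_component("compass")
```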

All in all, the project has been moving along quite successfully, with no show-stopper problems (at least, none we couldn't work around - thanks to UCD for allowing us to use their Ubisense). The coming weeks look set to become exceptionally busy!
