Sunday 4 May 2008

04/05/08 - The end is near

So this is it (ok, last Friday was it, but I needed time to recover, heh). The project's code has been submitted alongside the (rather lengthy, in our case) technical specification. The only thing left now is the demonstrations. I will post a final message to this blog after the demonstrations are over. I will also fill in the details I promised in previous posts but never got around to. Updating this blog is slow going, what with all the assignments over the previous weeks and now exams (and the fact that I got a fairly crap result for this blog, which is somewhat disappointing since I have compared it to blogs which got significantly better grades, and mine has more content than some of those, as well as being updated more often. Doesn't exactly help motivation). Stress!

So now that the development aspect is over (details of development can be found elsewhere on this blog, architectural details, manuals and notes can be found in the technical spec and user manual), what have I learnt? What would I do differently next time?

As always, I firmly believe that some up-front design is necessary; without it, the implementation direction is too unstructured and things will eventually go wrong. However, it is also extremely important to be prepared for changes. Requirements will always change. Nothing in software development is static, and this project was probably more dynamic than most, as it was a research project just as much as it was a project to develop an end product. As new things were learnt or discovered, the design was impacted. For this reason, I am a firm believer that agile development methodologies are a necessity for dynamic software development. Unfortunately, I admit that I did not follow a well-defined agile development plan for this project. Lesson learnt.

I also learnt the importance of a flexible, decentralised communication model between components. In this project, we achieved this by separating each part of the system out into an (almost standalone) component which communicates with other components using a textual commandset over TCP/IP. This gave us enormous flexibility to add or remove components as we saw necessary, to redirect commands, to intercept or monitor them, and to distribute the framework across a number of computers - that last capability alone made this design decision worthwhile, especially when we realised that AudioD would need to run on Microsoft Windows to take advantage of hardware-accelerated HRTF, while the rest of the framework was designed to run on Linux.
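To illustrate the idea (with invented command names - our actual commandset was more involved), here is a minimal Python sketch of a component answering line-based text commands over a TCP socket:

```python
import socket

def serve_component(port):
    # Toy "component": answers one newline-terminated text command per line.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    f = conn.makefile("rw")
    for line in f:
        cmd = line.strip().upper()
        if cmd == "STATUS":          # hypothetical command name
            f.write("OK idle\n")
        elif cmd == "QUIT":
            f.write("OK bye\n")
            f.flush()
            break
        else:
            f.write("ERR unknown\n")
        f.flush()
    conn.close()
    srv.close()

def send_command(port, cmd):
    # One-shot client: connect, send a command, read one reply line.
    with socket.create_connection(("127.0.0.1", port)) as c:
        f = c.makefile("rw")
        f.write(cmd + "\n")
        f.flush()
        return f.readline().strip()
```

Because the protocol is just text over a socket, any language with sockets can join in - which is exactly what let us mix C, Python and the rest.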

Given those two big lessons learnt, what would I do differently, were I given the opportunity to do this project again from scratch?

There are a number of things which could be done to vastly improve the quality of this project, but which were out of our reach because of budget and time constraints. Most importantly, higher quality sensors could be bought (we used the absolute cheapest we could find that would do the job). A lot of accuracy is lost because of the sensors, and this accumulates to produce a significant drop in quality. Not nearly enough to render the framework useless, but enough to be noticeable. I would love to see this framework as it would be with high-quality sensors!

Time and cost aside though, since those are not things we could have done differently, what other improvements could have been made, knowing what I know now?

Probably the biggest change I would make is the overall architecture. It would still be component based. The components would still communicate over TCP/IP. Those were good design decisions and not ones I would want changed. I would, however, define a standardised communications protocol to be used throughout the entire system. I would also implement a software library which manages not only the networking and threading aspects (Graham more or less did this in his C code), but also the protocol and commandset.
That is, every component would contain a number of common commands used for the overall management, configuration and querying of the components. This library would also contain a parsing system, which would parse the commands and pass them to the correct parts of the components. This would allow the components to focus entirely on the actual application logic, instead of dealing with maintenance tasks.
The common commandset would consist of commands to:
  • Configure which ports are used.
  • Terminate, reset or restart the component.
  • Query the component for connection information (how many clients are connected).
  • Query the component for its current status.
  • Query the component for a commandlist.
  • Register a monitor or intercept callback (the component would forward all of its input or output (as requested) to the component making the request. This would either be done asynchronously to processing the input/sending the output to its destination, or it would wait for an "allow" command - this would allow other components to monitor or intercept a component's commands, either as part of an external tool, debugging, or to implement some crazy complex features).
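Sketching that library in Python (the command names, handler signatures and reply format here are invented for illustration, not our actual commandset), the parsing core might look something like this:

```python
# Sketch of the shared command-parsing layer: common commands are handled
# by the library itself; everything else goes to component-specific handlers.
class CommandDispatcher:
    def __init__(self, status="idle"):
        self.status = status
        self.handlers = {}   # component-specific commands
        self.monitors = []   # callbacks registered by monitoring components
        # Common commands every component would understand:
        self.common = {
            "STATUS": lambda args: "OK " + self.status,
            "COMMANDLIST": lambda args: "OK " + " ".join(
                sorted(list(self.common) + list(self.handlers))),
        }

    def register(self, name, handler):
        self.handlers[name.upper()] = handler

    def dispatch(self, line):
        for mon in self.monitors:
            mon(line)                 # forward raw input to any monitors
        parts = line.strip().split()
        if not parts:
            return "ERR empty"
        cmd, args = parts[0].upper(), parts[1:]
        if cmd in self.common:
            return self.common[cmd](args)
        if cmd in self.handlers:
            return self.handlers[cmd](args)
        return "ERR unknown command"
```

A component would then only register its own application-specific handlers; the common commands, parsing and monitor forwarding would come for free from the library, leaving the component to focus on its actual logic.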
I would also stick to the original design plan - but modify it a little. The reason why I changed focus was time constraints. I felt I did not have the time to follow through with the originally planned architecture of having the message router/state machine application (codenamed Seadog) at the center of the framework and connecting external tools and components to/through it. Instead I merged the existing Seadog code with ASEDIT, because I felt I did not have the time to expose all of the required features to ASEDIT, as they ended up needing to be very tightly coupled.
This was both a good move and a bad one: good because it allowed me to get more needed ASEDIT features implemented in a shorter time, and bad because it polluted ASEDIT's software architecture into a somewhat hackish state, while detracting from the framework's overall flexibility (it could be regained by creating a custom component, but then you'd simply be reimplementing Seadog).
To overcome the problems Seadog posed, I would make use of the commandset described above - if such a common interface existed, then a lot of the time constraint issues with maintaining a core component separate from the editors which require its features would be alleviated. This brings me to the final changes:

The centralised message routing component - unlike Seadog - would not really be a message router as such, but rather a node for a more structured architecture than the pure TCP/IP architecture used in the rest of the components. By this I mean that it would not be a central component through which all messages must flow (so it can route them accordingly), but rather a component which acts as a gateway to a higher-level interface: one written entirely in Python and using serialised Python objects as the communication protocol, instead of simple text commands.
This would mean that an ASEDIT-like editor could just as easily be developed as a sub-component of Seadog, without the issues I faced when I tried doing so. It would also provide a simple means of extending the framework, or building applications at a higher level than writing "raw" components.
A Python library would, of course, need to be written to encapsulate the common maintenance code involved in this architecture into a simple API.
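As a rough sketch of what that object protocol could look like (assuming Python's pickle module for serialisation and a simple length prefix for framing over the TCP stream - both choices are illustrative, not what we actually built):

```python
import pickle
import struct

def pack_message(obj):
    # Serialise any Python object and prefix it with its 4-byte length,
    # so whole objects can be framed on a TCP stream.
    payload = pickle.dumps(obj)
    return struct.pack("!I", len(payload)) + payload

def unpack_message(data):
    # Read the length prefix, then deserialise exactly that many bytes.
    (length,) = struct.unpack("!I", data[:4])
    return pickle.loads(data[4:4 + length])

# Hypothetical message a Seadog sub-component might send:
msg = {"cmd": "place_source", "pos": (1.0, 2.0, 0.5)}
assert unpack_message(pack_message(msg)) == msg
```

The appeal is that sub-components exchange structured data (dicts, tuples, whatever) directly, instead of inventing and parsing ad-hoc text commands for every feature.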

Using this, I would build a number of applications, which I feel would make this framework infinitely better and more useful:
  • An ASEDIT-like editor for defining the environment and placing sound sources around it.
  • A graphical configuration tool, which would allow you to "drag and drop" components and connect them together visually - the common configuration commandset would then be leveraged to reconfigure the framework on the fly. This was actually something Graham and I talked about at great length, but it was dropped because of its complexity (though most of Graham's code actually does support this, to a degree, so a simpler interface might have made it feasible).
  • A graphical drag-and-drop editor for creating applications (using custom written python code) which could then be saved and reloaded.
This would make the framework a lot simpler to use, and potentially more powerful and useful for those same reasons. As I'm writing this, I realise that I would not be able to implement all of those things in the given time, so I would probably drop the configuration tool and application builder. They would be damned cool to have, but the rest is more important, as it builds the infrastructure required for everything else. Feature-wise, it would then be more or less the same as what we have now, except that it would be a lot simpler and more convenient to extend in crazy and interesting ways.

Oh well, what could have been.. always seems simpler in hindsight!


After such a lengthy post on what I would do differently and what I wish I had done.. on to something more positive: self-evaluation of my work (well... mine and Graham's; it's much easier to talk about the framework as a whole than just specific parts, since it's reasonably tightly coupled):
Overall, I am quite satisfied with this project. It does pretty much what we envisioned. We have the ultrasonic object detection and the pathfinding navigation working - the two demo apps which started it all (sort of). We have a powerful and extensible architecture. We have an interesting mix of (custom) hardware and software. We got to play with interesting technologies (Ubisense, 3D audio, sensors). We have a nice mix of programming languages, interacting nicely (Synergy! hah). We have our system distributed over a number of computers - something which our architecture allowed us to achieve.
Overall, I feel this project was a huge success! Given the sheer quantity of work and the many interesting outcomes, I feel that this is certainly the best project I've ever worked on (and I'd compare it to others too, but I hate to brag, though I will say this: few projects show such a large number of technologies working together, allow for a distributed computing model, have a reusable component architecture, are easily extended, or make use of custom hardware - not all at once, anyway).


I just hope that our examiners see this.. After the effort that went into this, I'd certainly not be pleased if some database-driven website (which I'd implement in Django over a weekend) were to receive a better grade.. I know I'm being mean, but a lot went into this project and I don't want it all to be in vain.
At the end of the day, the only thing I will get from this is the experience (actually, that alone makes it worth it) and the grade (I hope this will make it worth it too).
Hell, I'd love to do further work in this area (or another similar one), but unfortunately, I do not see it happening (who would want to finance my or Graham's crazy ideas, haha). Well, besides doing a PhD - something I intend on doing in maybe two years' time, after I've had a chance to fix my finances a bit.


Well, I guess there's another post on the way at the end of the month. So, stay tuned.
