Having said that, I do find that once you have spent a certain amount of time on a large project, building up infrastructure and custom libraries, it starts to take on a life and momentum of its own. Whilst working on a particular feature, you often get unexpected inspiration as to how just a small set of changes might allow some existing code to be used for something completely different that you hadn’t previously imagined. This is a kind of emergent behaviour, and experiencing it is one of the absolute joys of programming.
In order to do the required investigation and experimentation, I really need something tangible to work towards: something with a clearly defined end goal that I can test and evaluate, so that I know when I’m done and can move on to the next thing. Unfortunately, trying from the outset to create a commercial tool that accommodates the huge diversity of building models and geometry configurations that projects might require means you simply won’t get very far, or will likely never know how close you are because you can’t properly test it. There will always be edge cases that you never even dreamt of that blow huge holes right through your code. Always.
Thus, when starting out, arbitrary boundaries and self-imposed limitations are your friend. Setting limited but attainable goals will help you take the first few baby steps that will soon see you running and skipping as your subsequent goals become more ambitious. Also, fresh code is notoriously terrible. The first time you code something, just getting it to work is hard enough. However, as you reuse it across lots of other small projects, slightly different requirements each time will make you revise, extend and even re-write it, all of which will invariably make it better, as well as more capable, robust and interoperable.
Writing a series of well-defined educational tools that each demonstrate or simulate a particular aspect of building analysis is a way for me to progressively develop and stress-test the infrastructure required to process real projects. It means that I don’t have to solve everything first; I can just work on specific chunks. Moreover, each new tool brings slightly different issues and, as I solve them, I can go back and update all my previous tools and libraries so that the whole set becomes better and more robust.
Also, there is the old adage that the best way to really learn something is to teach it, or in my case, to dynamically demonstrate it. It’s one thing to write some code that performs a particular simulation or analysis and then add a few unit tests. However, when the results of that code are in plain view, surface mapped and colour coded for all to see, and can be interactively played with in any way the user wants, then any issues or inaccuracies are very soon exposed.
I go through some similar points in my research paper on gamification. In essence, all of the highly focused educational and experimental tools I have built thus far are my way of artificially accelerating the code evolution process, while at the same time gaining the experience and skills needed to make my analytical infrastructure flexible and robust enough to support the more comprehensive analysis tools that I have planned for the future.