TDD in embedded C/C++ too expensive? Think again
Written by Ed Willis   
Saturday, 29 October 2011 18:12

Like many developers, I've spent more time wishing I'd done Test-Driven Development (TDD) than I've spent actually doing it. The mistake I've made is that, when put under excess schedule pressure, I make quality implicitly negotiable, allowing the scarcity of time to drive out all the work I wanted to do to ensure a quality product. I suspect I'm not alone in that. With studies indicating that TDD carries a 15-35% additional cost over traditional development, it would seem unsurprising to see teams abandon it when the going gets tough. That my work is in embedded systems in C/C++ gives us additional excuses to avoid embarking on a TDD path: there is extra effort required to set up and maintain the unit test builds, there are concerns about testing on target versus off target, and there are at least a few other issues that make TDD a little harder to say yes to. So there are lots of reasons not to do TDD, but still that desire to do it. I recently had an experience with TDD in an embedded systems context that turned all of these assumptions on their head and forced me to rethink how I approach TDD in this space.

The product was in the wireless space, and a colleague and I were responsible for writing what might best be described as network selection software. The requirements governing this layer were complex: a mix of initial network preference ordering, currently available networks, and recently seen networks was fed into a fairly complicated algorithm for choosing the current preference for network selection. I'm not doing it justice here; it took me a couple of weeks just to understand what the requirements were saying.
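To give a feel for the shape of the problem, here is a loose, declaration-only sketch of the kind of inputs such an algorithm consumes. Every name and field below is illustrative only, invented for this article; the real interfaces and selection rules were considerably richer.

    // Illustrative sketch only; not the project's actual interfaces.
    #include <string>
    #include <vector>

    struct ObservedNetwork {
        std::string id;            // identity broadcast by the network
        bool        available;     // currently visible to the radio
        bool        recentlySeen;  // seen within some recency window
    };

    // Provisioned preference ordering, most preferred first.
    typedef std::vector<std::string> PreferenceList;

    // The algorithm folds the static preferences, current availability and
    // recency into a single choice: which network to try to select next.
    std::string chooseNetwork(const PreferenceList& preferences,
                              const std::vector<ObservedNetwork>& observed);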

The project involved both the adaptation of the radio software stack to a pre-existing architecture and the creation of new services and components to manage radio state and configuration, of which our work was a part. Counter to my recent experiences, this project was very much a plan-driven, waterfall affair. Schedule targets for the completed product were very aggressive, and the work was parallelized as much as possible as a result. In particular, the work to port and adapt the radio stack was happening at the same time as we were developing our network selection software. Needless to say, this made planning for testing challenging: for the bulk of the schedule, the radio would not be working well enough for us to test against. Compounding that problem, the actual network technology was not available in our region, and the network simulators we had access to early on could not simulate more than one base station. That latter limitation made it virtually impossible to model a dynamic network environment in which to assess the correctness of our network selection software.

So we decided to try TDD. We figured that whatever incremental expense we absorbed in setting up the test build would be more than offset by our ability to establish the correctness of the software in advance of radio and network availability. We chose the test library from Boost, which proved easy to set up and portable enough to allow us to run our test build on target if we wished. That took all of a day to set up. Our test project was a bit broader than a traditional xUnit unit test project, more like testing at the subsystem or component level. When we were done, there were probably six or so production modules bundled into our test build, so our tests ended up being a mix of cases focused on individual functions and modules and cases aimed at the collaborations between them. The key thing, though, is that we had this framework in place before we wrote a single line of production code. That allowed us to tackle smaller pieces of the overall set of requirements incrementally from day one. Each day our tests grew alongside our production code, and our quality, in terms of correct code implementing product requirements, grew monotonically every day.

I had forgotten what a wonderful way to work TDD is. TDD feels like an encompassing framework of correctness in which to do your development. All of your tests pass; then you write a test case that fails because you have yet to write the code to make it pass; then you write that code, refactor, and you're back to a 100% pass rate on a slightly larger set of test cases covering a slightly larger set of functionality. You never feel out of control, or uncertain about the software you're writing.
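To show how little ceremony this took, below is a minimal sketch of a Boost.Test module in the style we used. The types, the selector function and its trivial selection rule are hypothetical stand-ins invented for illustration, not our production code; the point is the rhythm described above, where the test case is written first, fails, and then the code is written to make it pass.

    // Minimal Boost.Test module. The header-only variant avoids linking a
    // separate test library, which helps keep the build portable.
    #define BOOST_TEST_MODULE network_selection
    #include <boost/test/included/unit_test.hpp>

    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the production code under test.
    struct ObservedNetwork {
        std::string id;
        bool        available;
    };

    // Trivial stand-in rule: pick the first entry in the preference list
    // that is currently available, or an empty string if none is.
    static std::string chooseNetwork(const std::vector<std::string>& preferences,
                                     const std::vector<ObservedNetwork>& observed)
    {
        for (size_t p = 0; p < preferences.size(); ++p)
            for (size_t o = 0; o < observed.size(); ++o)
                if (observed[o].available && observed[o].id == preferences[p])
                    return observed[o].id;
        return std::string();
    }

    // Written before chooseNetwork existed; it failed until the code above
    // made it pass, and it stays in the suite as part of the safety net.
    BOOST_AUTO_TEST_CASE(falls_back_when_preferred_network_is_absent)
    {
        std::vector<std::string> preferences;
        preferences.push_back("home");
        preferences.push_back("partner");

        std::vector<ObservedNetwork> observed;
        ObservedNetwork partner = { "partner", true };
        observed.push_back(partner);

        BOOST_CHECK_EQUAL(chooseNetwork(preferences, observed), "partner");
    }

Building that one file and running the resulting executable reports the results; nothing about it is host-specific, which is what made running the same suite on target an option.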

I got a scare one day when we'd moved between branches in Subversion and I had to set up my project anew in Eclipse. Somehow the IDE became confused over where the debug executables were (or, in fairness, perhaps I misconfigured Eclipse), and so when I wrote my first failing test case after setting up the project, it seemed to pass. It took me a bit to realize what the problem was, and in that moment it felt like the bottom was dropping out from under me. What if all of my test results were suspect? The odd part, I realized, was that at the heart of this fear was the sense of returning to what coding is like without TDD: test results are few and far between, and your sense of overall program correctness is much less rich. You feel very much like you're standing on uncertain ground, as opposed to the solid feel of the continuous feedback on quality you get from TDD.

When we finally reached code complete on the system, the radio was still not ready for our testing and so we had a bit of a wait before we could try out our system in anger in the lab. In the interim, a new network simulator came online, allowing us to model the kinds of network interactions required to verify our code. When the radio and lab environment were finally ready for us, we found we had one subtle bug in our system that grew out of a misunderstanding of one aspect of the requirements. We fixed that and the system worked correctly. Essentially, integration between our code and the radio stack was completed in about two hours.

But more than that, I'm convinced that, had we limited our testing to network simulators and live-air environments, we would have covered only the barest fraction of the requirements that our unit test suite eventually addressed. We would inevitably have tested what it was possible for us to test rather than what the requirements called for. Moreover, we would have uncovered our defects only when we had set up or discovered environments where the defects could be found, rather than modeling those environments in our unit tests and surfacing the defects at will. There's no doubt in my mind that we got more testing done in drastically less time than we would have relying on traditional approaches to testing.
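To make that concrete: a scenario that was impractical to stage with a single-base-station simulator is trivial to express as a test case. Continuing the hypothetical module sketched earlier, a preferred network can be made to vanish and reappear at will:

    // Scenario-style case for the same hypothetical module as above: the
    // preferred network drops out of coverage and later returns.
    BOOST_AUTO_TEST_CASE(reselects_when_preferred_network_disappears_and_returns)
    {
        std::vector<std::string> preferences;
        preferences.push_back("home");
        preferences.push_back("partner");

        ObservedNetwork home    = { "home",    true };
        ObservedNetwork partner = { "partner", true };

        std::vector<ObservedNetwork> observed;
        observed.push_back(home);
        observed.push_back(partner);

        // Both networks visible: the most preferred one wins.
        BOOST_CHECK_EQUAL(chooseNetwork(preferences, observed), "home");

        // The preferred network drops out of coverage: fall back.
        observed[0].available = false;
        BOOST_CHECK_EQUAL(chooseNetwork(preferences, observed), "partner");

        // It comes back: the selector should return to it.
        observed[0].available = true;
        BOOST_CHECK_EQUAL(chooseNetwork(preferences, observed), "home");
    }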

As it turned out, though, the project was canceled before we could bring the product to live air, but the net effect of TDD on our work could still be discerned:

  • It allowed us to pursue not just the programming but also the establishment of system correctness in parallel with the development of the systems on which we depended, so we could do more of our work in parallel than would otherwise have been the case.
  • It shortened system integration time. A couple of hours is whimsically short in this domain. This schedule compression would have contributed directly to reduced overall project timelines.
  • It provided an environment where my colleague and I genuinely enjoyed our work and had great confidence in the product we produced.

The set of assumptions I made going in that were overturned includes:

  • Unit testing in C/C++ in embedded systems would be too expensive to set up and maintain.
  • Unit testing our component would cause us to take longer to develop it and would result in a longer project schedule.
  • Unit testing would only provide a sparse coverage of our requirements.

None of them proved true for us, so even in this challenging space, TDD more than pays for itself.
