For Thursday’s discussion, we widened our scope a bit, focusing on two papers which addressed overarching aspects of OS design for WSNs. We began by discussing various aspects of nesC, a C-based language designed specifically for programming wireless sensor networks. One aspect of nesC which sparked some debate was the ‘static’ nature of the language, meaning that all memory allocation for a program is set at compile time. While some people agreed with the authors that this design offered the programmer security in knowing that a mote left running for months at a time would not run out of memory, a number of people took issue with this approach. One issue with this static approach to memory allocation is that programmers may be forced to make inefficient use of memory when buffering data of an unpredictable size, since they must allocate all the memory they might need up front. Worse, if one wants to take code written in nesC for a particular mote platform and port it to one with different memory constraints, it may be necessary to manually rewrite all the parts of the code which allocate buffers. Some people argued that this problem could be solved by good programming practice: defining buffer sizes with named constants in the program header.
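To make that "named constant" discipline concrete, here is a minimal C sketch (the names and the size of 128 bytes are invented for illustration, not taken from any real TinyOS code): everything is allocated statically at compile time, and porting to a mote with different memory constraints means changing one constant rather than hunting through the code.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical per-platform constant: the only line that should need
 * to change when porting to a mote with a different memory budget. */
#define RADIO_BUF_SIZE 128

/* All memory is fixed at compile time -- no malloc, no surprises
 * after the mote has been running for months. */
static uint8_t radio_buf[RADIO_BUF_SIZE];
static size_t  radio_buf_len = 0;

/* Append incoming bytes, silently dropping anything past the static
 * limit; returns how many bytes were actually stored. */
static size_t buffer_append(const uint8_t *data, size_t n) {
    size_t room = RADIO_BUF_SIZE - radio_buf_len;
    size_t take = n < room ? n : room;
    memcpy(radio_buf + radio_buf_len, data, take);
    radio_buf_len += take;
    return take;
}
```

The downside the group identified is visible here too: if incoming data is unpredictable, `RADIO_BUF_SIZE` has to be sized for the worst case up front, wasting RAM the rest of the time.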
I particularly liked one suggested solution to this problem: an automatic buffer-sizing program which could run at compile time and configure buffers appropriately. This might be accomplished by adding a range syntax when defining buffers, allowing the compiler to pick a specific buffer allocation at compile time given the memory constraints of other components in the program and the hardware platform. It seems to me that something like this would go a long way toward making the TinyOS vision of reusable components (more of) a reality.
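No such range syntax exists in nesC; purely as a sketch of the idea, the same effect can be faked today with preprocessor arithmetic. All of the numbers and names below are invented: the programmer declares an acceptable range, and a compile-time expression picks the largest size that fits whatever RAM the other components leave over.

```c
/* Invented platform figures, standing in for what a smarter compiler
 * would know about the hardware and the rest of the program. */
#define PLATFORM_RAM     4096   /* assumed total RAM on the mote    */
#define OTHER_COMPONENTS 3800   /* assumed RAM claimed elsewhere    */

/* The "range syntax": the buffer may be anywhere in [64, 512]. */
#define BUF_MIN 64
#define BUF_MAX 512

/* Pick the largest size in range that fits the leftover RAM. */
#define RAM_LEFT (PLATFORM_RAM - OTHER_COMPONENTS)
#define BUF_SIZE (RAM_LEFT > BUF_MAX ? BUF_MAX : \
                  (RAM_LEFT < BUF_MIN ? BUF_MIN : RAM_LEFT))

static char buf[BUF_SIZE];  /* still a fixed, static allocation */
```

The allocation is still fully static, so the "never runs out of memory" guarantee survives; only the sizing decision moves from the programmer to the build.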
Another key focus of nesC is dealing with concurrency. In nesC, code can be run in one of two ways: asynchronously from an interrupt handler, or synchronously as part of a scheduled 'task'. The creators of nesC are particularly concerned about 'data races' resulting from concurrent attempts to access a single variable. If it is necessary to ensure that a particular block of code cannot be interrupted, we can declare a section of code "atomic." On this point, a number of people worried about the way in which nesC implements atomicity: it turns off all interrupts. This struck me as a rather dramatic approach, and several people worried that a long section of atomic code might lead a mote to miss important sensor information, or render it unable to keep up with the stringent timing requirements of some of the MAC protocols we've been discussing. One person offered a potential fix for this problem, suggesting that the compiler could try to model program timing and detect situations in which the program could be liable to miss a timer interrupt or violate other key timing constraints.
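For readers unfamiliar with the interrupt-disabling pattern, here is a minimal C sketch of how such an atomic section works. On real hardware the two helpers would toggle the processor's interrupt-enable bit; here an ordinary `bool` stands in so the idea can be shown portably, and all the names are my own.

```c
#include <stdbool.h>

/* Stand-in for the hardware interrupt-enable flag. */
static bool interrupts_enabled = true;

/* Enter an atomic section: remember the old state, then disable.
 * Saving the old state is what lets atomic blocks nest safely. */
static bool atomic_begin(void) {
    bool was_enabled = interrupts_enabled;
    interrupts_enabled = false;  /* nothing can preempt us now */
    return was_enabled;
}

/* Leave the section, restoring whatever state we entered with. */
static void atomic_end(bool was_enabled) {
    interrupts_enabled = was_enabled;
}

static int shared_counter = 0;

/* A read-modify-write that an interrupt handler can no longer split. */
static void increment_safely(void) {
    bool saved = atomic_begin();
    shared_counter++;
    atomic_end(saved);
}
```

The sketch also makes the group's worry concrete: between `atomic_begin` and `atomic_end` the mote is deaf to everything, so the longer the protected code, the more sensor readings and MAC-timing deadlines are at risk.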
One aspect of nesC which everyone seemed to appreciate was the use of interfaces and parameters to ensure a clean program structure. While this structure can be overly verbose in trivial programs, it makes program structure considerably more readable and helps to enforce good programming practices. As we considered the potential future of nesC and TinyOS, asking whether they would still be prevalent 10 years from now, this clean structure was especially important. The clean, relatively simple structure of both nesC and TinyOS was a vote in their favor in this regard, especially when we considered alternative UNIX-type approaches.
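For a sense of what the interface-and-wiring idea looks like outside nesC itself, here is a rough C analogue with all names invented for illustration: an interface is a struct of function pointers, a component provides an implementation, and "wiring" is just filling in that struct. It is deliberately cruder than real nesC, which checks wirings at compile time, but it captures the hardware-like metaphor.

```c
/* Hypothetical interface: the commands a sensor user may call. */
typedef struct {
    int (*read)(void);
} SensorInterface;

/* One component that provides the interface. */
static int fake_temperature_read(void) { return 21; }

static const SensorInterface FakeTempSensor = {
    .read = fake_temperature_read,
};

/* A component that *uses* the interface never names the provider;
 * it only sees whatever it was wired to, so providers can be
 * swapped without touching this code. */
static int sample(const SensorInterface *sensor) {
    return sensor->read();
}
```

The readability benefit the group liked falls out of this shape: every dependency between components is spelled out in one wiring step instead of being scattered through the code.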
Personally, I'm looking forward to actually trying out nesC over the next few weeks before I make any final calls as to its strengths and weaknesses. I like the clean structure and syntax of the language, and the almost hardware-like metaphor of wiring components together, but I'm curious to see how these aspects of the language will actually feel when I sit down to write some real code.