Sunday, February 22, 2009

Networking: Dozer and IP is dead

Dozer: Ultra-Low Power Data Gathering in Sensor Networks
(Nicolas Burri, Pascal von Rickenbach, and Roger Wattenhofer, IPSN '07)


IP is Dead, Long Live IP for Wireless Sensor Networks
(Jonathan Hui and David Culler, SenSys '08)



Thursday we started the discussion by contrasting the two papers at a high level. Both address networking in wireless sensor networks at every layer, but with opposite philosophies: minimalist vs. maximalist. Dozer is perfectly orchestrated, hand-tailored to reduce radio duty cycles while maintaining low message loss under a light load (sampling the environment every minute and a half, beacon messages every half minute), while the IPv6 work focuses on interoperability and standardization.

Particularly clever bits in Dozer include using a random jitter within the TDMA schedule to alleviate transmission conflicts between different branches of the routing tree, and tracking clock drift instead of resetting the clock, since earlier papers have noted how sensitive applications can be to small perturbations in timing. Dozer forgoes CSMA and low-power listening, gaining efficiency by letting child nodes sleep through their siblings' transmission times and using random backoffs that are predictable to both parent and child via a shared seed. Nodes follow two TDMA schedules: one dictated by their parent and one they generate for their children.
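The shared-seed trick can be sketched in a few lines. This is our own illustration, not Dozer's actual code: the function name, the seed-mixing arithmetic, and the jitter range are all assumptions, but it shows how both sides of a link can compute the same "random" backoff with zero coordination traffic.

```python
import random

def slot_backoff(shared_seed: int, epoch: int, max_jitter_slots: int = 8) -> int:
    # Parent and child mix the same shared seed with the epoch number,
    # so both derive an identical backoff without exchanging messages.
    rng = random.Random(shared_seed * 1_000_003 + epoch)
    return rng.randrange(max_jitter_slots)

# A child can sleep through its siblings' slots and wake only for the
# slot it predicts for itself; the parent predicts the same slot.
parent_view = slot_backoff(shared_seed=0xBEEF, epoch=42)
child_view = slot_backoff(shared_seed=0xBEEF, epoch=42)
assert parent_view == child_view
```

The backoff still looks random across epochs (defeating persistent collisions between tree branches) while remaining fully deterministic to anyone holding the seed.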

Looking for failure modes and things we would have designed differently, we discussed the problems with adding new nodes in a TDMA scheme, epoch growth negatively impacting turnaround time, and the problems with selecting parents mainly on hopcount. In low-power listening, sniffing gives a node some idea of link quality and contention; that signal is unavailable when the radio is deliberately powered off during others' turns. Selecting parents based on hopcount will frequently mean choosing a node at the edge of detection range, which can be a lower-quality link and result in lost packets, increasing latency. One simple addition would be to factor in RSSI when detecting and choosing parents during the beacon phases. Another would be to track failed packets and switch parents after a certain threshold.
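The two suggested fixes combine naturally into one selection rule. A minimal sketch, with invented names and a hypothetical failure threshold (nothing here comes from the paper):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: int
    hopcount: int
    rssi_dbm: float        # signal strength observed during the beacon phase
    failed_relays: int = 0 # packets this parent failed to relay for us

FAILURE_THRESHOLD = 5      # hypothetical cutoff for abandoning a parent

def choose_parent(candidates):
    # Drop parents that have failed too often, then prefer low hopcount,
    # breaking ties with the strongest signal (higher dBm is stronger),
    # which steers us away from edge-of-range links.
    usable = [c for c in candidates if c.failed_relays < FAILURE_THRESHOLD]
    return min(usable, key=lambda c: (c.hopcount, -c.rssi_dbm), default=None)
```

Hopcount still dominates (preserving short routes to the base station), but among equal-depth candidates the nearby, strong-signal node beats the barely-audible one.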

Dozer buffers one message per node to relay upstream (some thought this meant one per child, others one per descendant: child, grandchild, etc.), and it refuses to accept another packet from that child until the current message has been relayed. Bursty network behavior will therefore cause congestion for a long time.

We spent a few minutes interpreting the graphs, figuring out what span of time each represented and what the low duty cycle actually was (2% and 0.2% are an order of magnitude apart, and each figure is stated in a different part of the paper). One graph we would have liked to see was transmission loss over time: did the network converge towards stability, or were the losses constant throughout? One participant took issue with the energy evaluation: radio transmission normally dominates energy costs, but is it still overwhelming at a 0.2% duty cycle?
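A back-of-envelope calculation suggests an answer to that last question. The current draws below are generic mote-class ballpark figures we are assuming, not numbers from the paper:

```python
RADIO_ON_MA = 20.0    # assumed radio listen/transmit current (~20 mA)
MCU_SLEEP_MA = 0.005  # assumed deep-sleep current (~5 uA)
DUTY_CYCLE = 0.002    # the 0.2% figure

# Time-averaged current attributable to the radio vs. the sleep floor.
avg_radio_ma = DUTY_CYCLE * RADIO_ON_MA            # 0.04 mA
avg_sleep_ma = (1 - DUTY_CYCLE) * MCU_SLEEP_MA     # ~0.005 mA

ratio = avg_radio_ma / avg_sleep_ma
```

Under these assumptions the radio's average draw is still several times the sleep floor, so it still dominates at 0.2%, but only by a single-digit factor rather than the orders of magnitude seen at higher duty cycles; with a leakier sleep mode the balance could tip the other way.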

Wrapping up our discussion on Dozer, we agreed they presented a full working vertical solution instead of attacking a single layer, which is a holy grail of WSN networking. The low-transmission-rate assumption limits their applicability, but it was openly admitted and discussed.

Moving on to IPv6 on WSN, we examined why we would even want to try such an implementation and came up with interoperability and standardization. Several wanted our sensor nodes to be able to communicate with other devices (e.g., an actuator motor locking a door), to reuse existing tools and applications from regular networking contexts, and to make motes more accessible to programmers. One participant was at the conference talk and relayed how differently this paper was presented there. Jonathan started with "Here is an efficient, working system", went into the benchmarks and graphs, and then described the problem domain and what they encountered along the way of building it (which is the inverse of most systems talks).

The underlying idea is to move the narrow waist up from the active message layer to the network layer by adopting a recognized standard. The networking community is concerned with quality of service and reliability; WSNs, however, are not just about moving raw data to the base station, since local aggregation can also be important (e.g., a local max or average of sensor values). Using IPv6 thus loses the local-topology benefits of tweaking our own MAC and networking protocols, but gains a lot in accessibility to ordinary programmers. Chirps allow nodes to power down their radios when they know they won't need to listen to a message, which greatly improves performance. Synchronous acks allow the application to decide whether to retransmit and to whom.

We briefly talked about Zigbee and Bluetooth as heavyweight protocols, over-engineered for their common uses. The internet evolved more from tradition than from standards: processing was done at the endpoints as much as possible, though some companies like Akamai are moving away from that now. It is governed more by rough consensus and running code than by strict adherence to particular standards, which is a very different social model. The IP paper comes from that same philosophy: complex but flexible, and demonstrating a full working example before expecting anyone to listen.