(Levis et al.)
(Stann et al. SenSys’06)
Today we discussed “flooding” protocols and quickly entered a discussion regarding the definition of flooding. The RBP paper defines flooding as a network-level mechanism that sends information to all nodes in a connected network. However, the RBP paper does not actually introduce a flooding protocol, but rather a robust broadcast protocol that can be used in a flooding protocol… very confusing. I agree this is a semantics discussion, but an important one when discussing the minutiae of these protocols. Trickle, on the other hand, is a form of flooding, but more specifically a consensus algorithm. We agreed that there was nothing inherent to Trickle that made it suitable only for code propagation. In general, it can be considered a consensus protocol that, in this case, is used as a precursor to code propagation.
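To make the definitional point concrete, here is a minimal sketch of the textbook notion of flooding that RBP's definition points at (each node rebroadcasts a message exactly once, so every node in the connected component receives it). This is our own illustration, not RBP itself, and the topology is made up:

```python
from collections import deque

def flood(adjacency, source):
    """Textbook flooding sketch: each node rebroadcasts a message exactly
    once (duplicate suppression via the 'seen' set), so every node in the
    connected component eventually receives it."""
    seen = {source}
    queue = deque([source])
    transmissions = 0
    while queue:
        node = queue.popleft()
        transmissions += 1          # this node broadcasts once
        for neighbor in adjacency[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen, transmissions

# Hypothetical 5-node topology: a line 0-1-2-3 plus a branch 1-4.
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
reached, tx = flood(adj, 0)
# All five nodes are reached, with exactly one transmission per node.
```

Under this definition, RBP's contribution is making each of those single-hop broadcasts reliable, not deciding who rebroadcasts.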
Many of us missed the oh so important point that Trickle does not actually perform code propagation, despite its title. The authors even mention this in section 5.1: “The actual code propagation mechanism is outside the scope of Trickle…” What Trickle actually does is form consensus throughout the network on what the most recent version of the code is. Using this consensus, nodes may request updates if they are out of date, but the actual propagation of code is not done by Trickle… hmm, sneaky. I don’t understand why the authors chose to market Trickle in this manner. Why couldn’t they introduce Trickle as a consensus-forming algorithm for WSNs that, by the way, can be used in a code propagation application? A consensus algorithm would be more general and more useful to other people than what Trickle is now.
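To pin down what Trickle actually does, here is a sketch of its per-node timer logic: suppress your own broadcast when you hear enough consistent version summaries, double the interval when the neighborhood is quiet, and reset it when you detect an inconsistency. The parameters follow the paper's τ_l, τ_h, and k, but the class and method names are our own, and the returned action strings stand in for whatever external mechanism actually moves the code:

```python
import random

class TrickleTimer:
    """Sketch of one node's Trickle state machine (Levis et al.).
    Trickle only agrees on the newest code version; the returned
    'request_code' action is handled by a separate transfer protocol."""

    def __init__(self, tau_l=1.0, tau_h=60.0, k=1):
        self.tau_l, self.tau_h, self.k = tau_l, tau_h, k
        self.version = 0
        self.reset_interval()

    def reset_interval(self):
        self.tau = self.tau_l      # shrink interval: news spreads fast
        self.new_round()

    def new_round(self):
        self.counter = 0
        # Broadcast at a random point in the second half of the interval.
        self.t = random.uniform(self.tau / 2, self.tau)

    def hear_summary(self, version):
        if version == self.version:
            self.counter += 1      # consistent gossip heard: suppress
        elif version > self.version:
            self.version = version
            self.reset_interval()
            return "request_code"  # the transfer itself is NOT Trickle's job
        else:
            self.reset_interval()  # neighbor is stale: advertise quickly
            return "send_summary"
        return None

    def interval_expired(self):
        should_send = self.counter < self.k
        self.tau = min(self.tau * 2, self.tau_h)  # back off when quiet
        self.new_round()
        return should_send
```

Nothing in this loop is specific to code: any monotonically versioned datum could sit where `version` does, which is exactly the “it is really a consensus protocol” point from the discussion.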
We talked about Trickle’s usefulness in real-world applications. Someone pointed out that Trickle requires nodes to be always on, which severely impacts node lifetimes by wasting energy. We briefly discussed how we could implement Trickle with LPL and quickly realized that this is not as easy a problem as one might think. With LPL you have a duty cycle and a preamble before each packet. For an energy-efficient network with long check intervals and preambles, the latency of each transmission means that code will take a long time to propagate through the network. You can dynamically adjust your duty cycles, but both nodes must agree on the check interval… you can’t do this independently. Maybe you can use RBP to flood the network with the check intervals… maybe not. Another issue that came up was that in Trickle you are limited to a 30-byte routine payload. This seems very unrealistic and very much tailored to Maté. A general code propagation algorithm should accept larger code payloads.
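The energy/latency tension can be made concrete with a back-of-envelope calculation. With LPL, a sender must front its packet with a preamble at least as long as the receivers' check interval, so each hop costs roughly one check interval. The function and the numbers below are our own illustration, not from either paper:

```python
def lpl_propagation_time(hops, check_interval_s, payload_s=0.01):
    """Rough per-flood latency under low-power listening: each hop pays
    a preamble roughly as long as the check interval, plus payload time."""
    per_hop = check_interval_s + payload_s
    return hops * per_hop

# With a 1 s check interval, pushing a summary 10 hops takes ~10 s;
# stretch the interval to 10 s to save energy and the same flood takes ~100 s.
```

The tenfold energy saving buys a tenfold propagation delay, which is why naively layering Trickle on LPL undermines its “quick propagation” claim.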
Even though this argument might not apply to these papers because they are a couple of years old, here goes. Almost all papers that we have read so far claim scalability and demonstrate that claim using simulators. However, most use a small number of nodes in their real-world evaluations (20-40). A common explanation is that they did not have enough motes for a larger test. However, given today’s testbeds (Harvard’s MoteLab, Ohio State’s Kansei, etc.) with over 200 nodes, there is no excuse not to perform large-scale experiments. Most of these testbeds give out accounts to researchers and were created for exactly this purpose. The next time I see a new paper that does a real-world experiment with 5 motes on a desk but claims scalability, I will be very skeptical.