Friday, April 24, 2009

The Best of CS263

Well, we're done discussing papers in CS263 this term. We read and talked about 41 papers over the course of the semester, and by and large they were really great.

We decided to run the course in a mock "program committee" style, in which our goal was to pick the top 10 or so papers over the course of the semester. This is great practice for students but also provided a focal point for the discussion: Does a given paper belong in our "program"?

After much deliberation, we have selected the following eight papers as The Best Papers from CS263. This could be seen as a kind of "best sensor network systems papers" award. (Of course, there are many papers we did not read at all, so caveat emptor!)

System Architecture Directions for Networked Sensors, J. Hill et al., ASPLOS'00

This is the classic paper on TinyOS, which lays out many of the challenges in the sensor network domain and focuses on the new OS and communication model requirements.

Radio Interferometric Localization, M. Maroti et al., Sensys'05

There's a lot to like about this paper, which applies RF interferometry to low-power wireless sensor nodes; a lot of technical depth.

The Design of an Acquisitional Query Processor for Sensor Networks, S. Madden et al., SIGMOD'03

TinyDB has been hugely influential in terms of new approaches for querying and aggregating data in sensor nets.

Sensor Network-Based Countersniper System, G. Simon et al., Sensys'04 (BEST PAPER AWARD!)

This system uses acoustic sensors to localize the source of gunshots in an urban setting. It brings together event detection, time synchronization, localization, and a nice sensor fusion algorithm to develop a complete system, which is deployed and evaluated in a real field setting. This paper was universally voted as the students' favorite so we've given it our "best paper award."

Distributed Image Search in Sensor Networks, T. Yan et al., Sensys'08

This paper proposed to use a network of wireless cameras to store and index images which can later be queried by similarity. The authors do a very good job at motivating their design and diving into the details of image processing, flash storage, and so forth.

Dozer: Ultra-Low Power Data Gathering in Sensor Networks, N. Burri et al., IPSN 2007

This paper does a great job at a complete cross-layer design spanning the MAC, link, routing, and application layers to achieve very low power and high reliability for data collection. In some sense this is the first paper I'd give someone working on communication in sensor nets, since it nails down how to do it right.

An Analysis of a Large Scale Habitat Monitoring Application, R. Szewczyk et al., Sensys'04

This is another "classic" paper on the sensor net deployment at Great Duck Island. It's a great case study of the challenges faced by a sensor network in the real world.

Synopsis Diffusion for Robust Aggregation in Sensor Networks, S. Nath et al., Sensys'04

This was one of the only "algorithms" papers we read, and it does a great job at developing an aggregation technique that is robust to duplicate and lost packets. A superb example of plucking a somewhat obscure concept from the literature and demonstrating its value in a real system.

It's been a great term and I gained a lot by having these heated discussions with the students, who were approaching many of these papers for the first time.

Next week we're doing in-class presentations of the final student projects -- I'll be blogging summaries of those as well!

Underwater sensor networks

T-Lohi: A New Class of MAC Protocols for Underwater Acoustic Sensor Networks
A. Syed, W. Ye, J. Heidemann

Data Collection, Storage, and Retrieval with an Underwater Sensor Network
I. Vasilescu, K. Kotay, D. Rus
M. Dunbabin, P. Corke

We wrapped up the semester with two papers on underwater sensor networks. As you would imagine, being underwater poses a number of new challenges for the operation of the network. Radio signals don't propagate very far, so optical or acoustic systems must be used instead for communication.

We started with a discussion of the data collection paper. This paper presents an underwater sensor network with fixed sensor nodes resting on the bottom of a body of water, while rechargeable mobile "data mules" periodically come by, collecting data from each node. To collect the data, the nodes communicate via line of sight using timed pulses from an LED on one end and a photodiode on the other end. This requires the mobile node to get fairly close to the stationary node and stay relatively still, which could be very difficult with strong currents. Despite some micro-benchmarks performed in the Charles River, the class doubted that the system would work robustly under murky ocean conditions. We talked about how this system could be made more robust. Maybe by slowing down the pulses of light, the system would have more time to detect a weak signal, but this comes at the cost of burning more energy on the fixed nodes. Some of the class was disappointed that the authors didn't try harder to encode the signals more robustly.

The other way the nodes can communicate with the data mule is acoustically. The nodes were equipped with inexpensive audio transducers. But aside from one simple benchmark, acoustic communication wasn't really used in their system. That was one of the frustrating things about this paper. The introduction lays out a vision of many, many nodes scattered across the ocean bottom; what they actually did was far more modest. They were able to collect data in a large swimming pool. We were also left wanting more of an evaluation. For example, how long did it take to collect data from the nodes? Throughput? Latency? In general there is a fine line between explaining your vision of the system and the risk of overselling your work and losing the trust of the reader.

Someone commented that the paper read more like a technical report than a research paper, with lots of mundane details about the system. We liked that they were able to build the system out as much as they did, and the paper does a good job of motivating why this approach makes the most sense in terms of energy, but some of the class felt the cost of moving the mobile node should have been mentioned. Overall, it was pretty impressive that they got things working as well as they did.

The T-Lohi paper focuses on MAC protocols for UWSNs. They use acoustic modems, which have latencies five orders of magnitude greater than radio and for which transmitting costs 100 times more than receiving. T-Lohi uses short reservation tones and a contention round (CR) before transmitting. The tones are short enough that nodes can often hear other reservation tones. This gives each node a count of how many other nodes wish to transmit, and the backoff timer can be set accordingly for the next CR. When there is only one contender in the CR, that node begins transmitting.

They compare three variants of T-Lohi. First, synchronized T-Lohi (ST-Lohi), where all nodes are time-synced, time is divided up into slots, and nodes only transmit a tone at the start of a CR slot. Next is conservative unsynchronized T-Lohi (cUT-Lohi): every node transmits when it wants, but must listen for twice the propagation delay plus twice the tone length, so no collisions are possible. The last protocol is aggressive unsynchronized T-Lohi (aUT-Lohi): same as above, but nodes wait only the propagation time plus the tone time, at the cost of a chance of collision.
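To make the contention-round idea concrete, here's a rough sketch of the tone-count-and-backoff behavior as we understood it. This is Python pseudocode for discussion only, not the paper's protocol implementation; the backoff rule and all the names here are our own simplifications.

```python
import random

def contention_round(ready_nodes):
    """One T-Lohi-style contention round (CR): every contender sends a
    short reservation tone, counts the tones it hears from others, and
    either wins the round (sole contender) or backs off for a number of
    CRs derived from the contention it observed. The backoff rule is a
    guess at the spirit of the scheme, not the paper's exact policy."""
    tones = len(ready_nodes)
    winners = []
    for node in ready_nodes:
        others = tones - 1
        if others == 0:
            winners.append(node)                      # alone: transmit data
        else:
            node["backoff"] = random.randint(1, others + 1)
    return winners

# Toy run: four nodes contend until each eventually wins a CR.
nodes = [{"id": i, "backoff": 0} for i in range(4)]
pending = list(nodes)
for cr in range(1, 30):
    if not pending:
        break
    ready = [n for n in pending if n["backoff"] == 0]
    for n in pending:
        if n not in ready and n["backoff"] > 0:
            n["backoff"] -= 1
    for winner in contention_round(ready):
        print(f"CR {cr}: node {winner['id']} wins the round and transmits")
        pending.remove(winner)
```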

We talked about the problem they were trying to solve: routing packets back to the base station. They were only sending 8 packets per second in the worst case. That's a pretty low data rate. Is it worth all this extra overhead of the contention rounds? The point of the CR is to reduce collisions, but collisions aren't the end of the world. It's probably the case that, since transmitting costs so much, collisions really do hurt you and should be avoided at all costs, but they could have spelled this out more clearly. It would have been great to compare against a strawman with no protocol at all.

Their use of "optimal" in the graphs confused some of us. It seems like a practical upper bound; it's T-Lohi-optimal. Then the discussion turned to whether CSMA schemes are the right way to go with these networks. TDMA would make more sense, but the time synchronization overhead might not make it worthwhile. What about more of a Z-MAC-style hybrid approach? Would that be better?

Saturday, April 18, 2009

Object Tracking and Image Search

Object Tracking in the Presence of Occlusions via a Camera Network
Ercan, et al.


Distributed Image Search in Sensor Networks
Yan, et al.



The first paper that we discussed this past Thursday addressed the problem of tracking a static object in the presence of multiple occluders (objects blocking the camera's view of the desired object). The occluders could be moving or static. Compared to the other papers we have discussed so far, this one was heavy on the mathematics, mainly because the paper focused on explaining the model used to simulate the situation. Also, the only idea specific to sensor networks was reducing the entire image from a camera to a single scan line to minimize the amount of information transferred on the network. We concluded that this paper falls partly within the "information processing" realm of sensor networks.

The authors state the many assumptions that were made, and it was unclear to some of us what the system would or would not detect in special situations. However, we came to the consensus that the assumptions were valid given the constraints on the mathematical tools that the authors used. Something that was not brought up in the paper but addressed in the discussion was the situation of partial occlusions. The process of reducing the entire frame down to one scan line could be problematic in dealing with such situations and some members of the discussion felt that this reduction simplified the assumptions too much.

One person brought up an interesting point, questioning whether it is even necessary to track a single object among many occluders. Is it not enough to know whether or not the object is in the room? This was under the presumption that this object tracking system would be deployed for security in areas such as malls, airports, etc. A binary output would be much simpler to implement, since each camera just has to report whether or not the desired object is present in its line of vision. It was also hypothesized by some that this work was a preliminary step toward placing cameras on mobile platforms, so that the system could be used to track objects from vehicles, planes, etc.

The second paper we discussed dealt with the problem of distributed image search in a network where each sensor captures images. The user can send ad-hoc or continuous queries to a proxy that obtains the best search results from the entire network. First we established the assumption that the SIFT algorithm does a good enough job of translating image information into visterms, so that the main problem becomes capturing and querying images on a sensor network. Although it was stated in the paper, it was worthwhile to remind everyone that this system only works when all the objects being monitored are similar. For example, one cannot search for a book cover among images of bird species; in that case, the best-result visterms would most likely be irrelevant to the search.

After going through the process by which each mote stores images and visterms in an inverted index table, we went on to discuss the evaluation section. One person brought up the fact that there is no end-to-end experiment measuring the amount of time it takes to actually pull images from the remote sensor nodes. It would have been nice to see such an experiment, since images can be quite large (on the order of 1.6kB) and transferring each image would likely take on the order of 4 packets. There was some skepticism about the power usage value for the CC2420 radio shown in Table 1. One person made the case that the value 214.0 mW was too high and the actual value should be significantly smaller.
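For readers who want the inverted-index idea pinned down, here's a toy sketch of what we pictured: each stored image contributes its visterms to an index, and a query is scored by counting shared visterms. The class, scoring rule, and names are our own simplification, not the paper's data structures.

```python
from collections import defaultdict

class VistermIndex:
    """Toy inverted index from visterms to locally stored image IDs,
    roughly in the spirit of the per-mote index described in the paper
    (the structure and scoring here are our own simplification)."""
    def __init__(self):
        self.index = defaultdict(set)   # visterm -> set of image ids

    def add_image(self, image_id, visterms):
        for t in visterms:
            self.index[t].add(image_id)

    def query(self, query_visterms, top_k=3):
        scores = defaultdict(int)
        for t in query_visterms:
            for image_id in self.index.get(t, ()):
                scores[image_id] += 1   # one point per shared visterm
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

idx = VistermIndex()
idx.add_image("img_001", {17, 42, 99, 256})
idx.add_image("img_002", {42, 300, 301})
print(idx.query({42, 99, 300}))   # both images match, img_001 scores higher
```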

Overall, the paper was received very well and we agreed that every issue pertaining to distributed image search was addressed, from storing images on a constrained platform to searching for them. However, the image capture rate of one image every 30 seconds was viewed as unrealistic. A better value would have been on the order of 10Hz. Also, after doing some quick calculations, we found that if the motes had been battery-powered, their lifetime would be on the order of 1.5 hours, which is unacceptable in a widespread deployment. This led to the conclusion that the authors conducted the experiments with motes plugged into an AC power source, rather than batteries. This brought up the larger question: Are motes a sustainable platform on which to implement distributed image search? We agreed that the authors did a nice job of heavily optimizing their solution to work on motes, but, if this still results in a 1.5 hr battery lifetime, then maybe the technology is just not there yet to handle such problems. It was brought up that maybe the gap between what the motes allow and what they are being used for is too wide for the application of distributed image search.

Acoustics

Sensor Network-Based Countersniper System
Simon et al.

VoxNet: An Interactive, Rapidly-Deployable Acoustic Monitoring Platform
Allen et al.

Acoustic sensing was the topic for Tuesday’s class. The first paper used acoustics to locate a sniper (after taking a shot) and the second presented a generic acoustic sensing framework (to find marmots, among other things).

We clarified a few elements of the countersniper paper first. In particular, the consistency function was unclear to a few people. The function is used to make sense of the multiple shooter predictions coming from nodes: some nodes may be victim to sound reflection and others may be hearing other sounds. The base station uses the consistency function to combine all of these potentially erroneous readings into a confident final prediction of the shooter's location. The function breaks up the space of 3D locations (X,Y,Z) and times (T) into a four-dimensional grid. Each node's prediction falls into one of the grid's unit hypercubes (whose 4D volume is determined by sensor accuracy characteristics), and the unit hypercube with the most predictions is taken as the true shooter location. We had a calculator jam session and found that there are roughly 880 billion potential hypercubes in the shooting range, so an exhaustive search would be slow. Instead, the consistency function starts with a bisected 4D space, chooses the hypercube containing the most predictions, and bisects it again. This process repeats until a final unit hypercube is chosen.
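Here is a minimal sketch of that hierarchical search, just to make the shape of it concrete. The 16-way split per level, the half-open box tests, and the termination rule are our guesses at how such a bisection would be organized; the paper's actual consistency function is more involved.

```python
import itertools

def consistency_search(predictions, lo, hi, min_size):
    """Recursively bisect the 4D (x, y, z, t) search box, keeping the
    sub-box that contains the most shooter predictions, until the box
    shrinks to roughly the sensor-accuracy-sized unit hypercube."""
    box_lo, box_hi = list(lo), list(hi)
    while any(h - l > m for l, h, m in zip(box_lo, box_hi, min_size)):
        best_count, best_box = -1, None
        # 16 candidate sub-boxes: each dimension keeps its lower or upper half
        for halves in itertools.product((0, 1), repeat=4):
            lo2 = [box_lo[d] if halves[d] == 0 else (box_lo[d] + box_hi[d]) / 2
                   for d in range(4)]
            hi2 = [(box_lo[d] + box_hi[d]) / 2 if halves[d] == 0 else box_hi[d]
                   for d in range(4)]
            count = sum(all(lo2[d] <= p[d] < hi2[d] for d in range(4))
                        for p in predictions)
            if count > best_count:
                best_count, best_box = count, (lo2, hi2)
        box_lo, box_hi = best_box
    support = sum(all(box_lo[d] <= p[d] < box_hi[d] for d in range(4))
                  for p in predictions)
    return box_lo, box_hi, support

# Toy example: three predictions agree, one echo-corrupted prediction is far off.
preds = [(10.1, 5.2, 1.5, 0.031), (10.0, 5.3, 1.4, 0.030),
         (10.2, 5.1, 1.6, 0.032), (40.0, 22.0, 1.0, 0.050)]
print(consistency_search(preds, (0, 0, 0, 0), (100, 100, 10, 1),
                         (0.5, 0.5, 0.5, 0.01)))
```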

Signal detection was another important problem the paper had to deal with. To predict a shooter's location, each node must first know there was a shot. The paper's approach uses zero-crossing points in the acoustic signal, which give you information on the frequency of the loudest sound. The paper also uses other statistics (max amplitude, etc.) to create a sound signature that can be compared with the expected gunshot signature. At least a few people in class thought it was cool that the paper used custom logic (or an FPGA, at least) to perform this work more efficiently than a mote could.

There seem to be a few ways the system could be fooled. For example, if more motes heard echoes than heard the direct event, the consistency function would not work properly. Alternatively, someone could maliciously blast gunshot noises from other locations to distract or jam the countersniper system. Finally, the system wouldn't do too well if there were multiple rapid-fire shots.

The consensus on this paper was that it brought a lot of techniques together and motivated the use of sensor networks well. It felt like a paper that set out to solve a real problem rather than making up a contrived one.

We moved on to VoxNet, a more generic platform for monitoring acoustics over some area. The system is a bit beefier than our typical sensor networks because it handles multiple sources with high data rates (four channels of 44.1 kHz audio each). This raised a hearty debate about what a "sensor network" is, exactly. These nodes use StrongARMs and 802.11, have a large battery, and only have a lifetime of eight hours. They are a far cry from the MSP430-powered motes that might last for months in most of the other papers we read. But the system forms a network and uses sensors, so it's technically a sensor network, no?

The paper made accessibility to domain scientists a priority through WaveScope, their dataflow language. The language is pretty simple and can be retargeted to other platforms (such as desktop computers). It also makes use of hand-tuned functions for computationally intensive operations, such as FFTs. This allows WaveScope to get good performance/energy results despite being a higher-level language. However, this also led us to question the performance/energy claims of WaveScope: if most of the processor cycles are spent in these hand-tuned (non-WaveScope) functions, the power/performance evaluation isn't really measuring WaveScope. Rather, it's showing that WaveScope doesn't add much overhead when gluing these functions together. It would be interesting to see how WaveScope performs by itself, without these hand-tuned functions.

Finally, we really liked the idea of letting multiple applications coexist. The introduction had us sold on testing code revisions while letting the old programs run, but sadly the paper never came back to it. We would really have loved to see how that functionality fit into the system.

 

Tuesday, April 14, 2009

ICEM and Triage: Two approaches to energy management in sensor nets

Integrating Concurrency Control and Energy Management in Device Drivers, by Klues et al.


Triage: Balancing Energy and Quality of Service in a Microserver, by Banerjee et al.


Last Tuesday, we discussed two papers focused on novel approaches to increasing power efficiency in wireless sensor networks. One paper, Integrating Concurrency Control and Energy Management in Device Drivers by Klues et al., focuses on building a wireless sensor net OS model that integrates power management at the driver level, automatically turning off peripherals that are not needed. The other paper, Triage: Balancing Energy and Quality of Service in a Microserver, focuses on a specific subset of applications for sensor nets in which resource-constrained nodes are interspersed with more powerful microservers. To save power, Triage presents a novel architecture for these microservers in which a low-power node receives and schedules tasks to be run on a more powerful Stargate-class device, allowing the power-hungry Stargate to be turned off much of the time.


We began by focusing on the ICEM paper by Klues et al. A fundamental underlying argument in ICEM is that much of the inefficiency of existing sensor network systems results from the inefficient handling of concurrency among different parts of an application. To remedy this, ICEM offers a driver system based on the concept of "power locks." To support different types of hardware peripherals, ICEM offers three types of drivers: virtualized, dedicated, and shared. Virtualized drivers encapsulate all the buffering and per-client state maintenance, exposing only a simple functional data path to the application. Dedicated drivers, on the other hand, allow only a single user at a time, which is essential when dealing with low-level hardware control such as turning an LED on or off using a pin on the microcontroller. The third type, shared drivers, permits multiple users but requires them to handle concurrency explicitly by acquiring a lock on the driver. In a nutshell, ICEM uses the low-level information gained from these driver interfaces to make simple inferences about the expected future concurrency for a given peripheral, and then tries to put it to sleep at times when it is not needed.
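To make the power-lock idea concrete, here's a heavily simplified sketch: a shared resource that powers its peripheral on when the first client acquires the lock and powers it off once no requests remain pending. This is Python pseudocode of the general idea, not the actual split-phase nesC interfaces from the paper, and all names are our own.

```python
from collections import deque

class PowerLock:
    """Simplified sketch of the power-lock idea: the lock arbitrates
    access to a peripheral and uses its request queue to decide when
    the hardware can safely be powered down."""
    def __init__(self, name):
        self.name = name
        self.holder = None
        self.waiters = deque()
        self.powered = False

    def request(self, client):
        if self.holder is None:
            self._power_on()
            self.holder = client
            return True          # lock granted immediately
        self.waiters.append(client)
        return False             # client will be granted later

    def release(self, client):
        assert client == self.holder
        if self.waiters:
            self.holder = self.waiters.popleft()   # hand off, stay powered
        else:
            self.holder = None
            self._power_off()    # no pending users: safe to sleep

    def _power_on(self):
        if not self.powered:
            self.powered = True
            print(f"{self.name}: power ON")

    def _power_off(self):
        self.powered = False
        print(f"{self.name}: power OFF")

bus = PowerLock("SPI bus")
bus.request("flash-driver")      # powers the bus on
bus.request("radio-driver")      # queued behind the flash driver
bus.release("flash-driver")      # radio driver now holds the lock
bus.release("radio-driver")      # queue empty: bus powered off
```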


While some thought that the emphasis on deeply integrating power management into the OS was a step in the right direction, there was a good deal of discussion as to whether this approach would truly offer as many gains as the authors suggest in terms of application performance. A key issue here is the problem created by out-of-phase recurring tasks. For example, if two components of an application sample the ADC every 10 seconds, it would be most efficient to have them synchronized so that each samples one right after the other, meaning the ADC would only turn on every 10 seconds for a few cycles. If, however, the components were out of phase, such that component one sampled at time t = n*10 and component two sampled at t = n*10 + 5, then the ADC is effectively turned on every five seconds, which is considerably less efficient. Additionally, several people felt that ICEM had limited utility for applications in which the main power consumption comes from a source other than the hardware peripherals, such as PDA-class devices.


With regard to Triage, there was some discussion as to the applicability of the design to real-world applications. A number of people initially felt that the dual-hardware approach, in which a less powerful but more energy-efficient node controls the scheduling of the more power-hungry and more capable CPU, was innovative, but the discussion brought out several points which raised doubt about its utility. It quickly became apparent that the motivation for the Triage design was based heavily on several assumptions, namely that:

  • The Microserver can be shut down much of the time
  • Messages have to be small (since a lower-power, less capable node receives and processes them)
  • The Microserver will receive tasks from other nodes in the network via radio (as opposed to from the Internet or a central hub)

The last assumption, in particular, seems to me to limit the potential utility of the Triage design: it necessitates an application in which the resource-constrained nodes do not simply relay data to the microserver, but actually send queries that require the heavy processing available on the Stargate-class device. I can imagine situations in which this might be useful, such as using microservers to offer services to mobile nodes in a ubiquitous computing context, but this type of application is never mentioned by the authors.

Monday, April 13, 2009

Mobile Sensing: CarTel and Nericell

CarTel: A Distributed Mobile Sensor Computing System
Hull et al.

Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones

Mohan et al.

Today, we covered two papers on mobile wireless sensor networks. The
main idea behind mobile sensor networks is to be able to either
dynamically deploy sensors to a particular geographic region of
interest or to cover large geographic areas, where static sensor
installation costs would be prohibitive. The general thrust of this
avenue of research is supported by the increasing penetration of
commodity sensing devices on mobile platforms such as smartphones and
even x86-class hand-held computers (in contrast to most WSN papers,
which focus on mote-class devices).

The first of the papers presented CarTel, a distributed, mobile sensor
network and telematics system with an integrated query engine. We
felt that the major contribution of this work was the implementation
of an actual working mobile sensor system, even though it was on a
small scale. However, the system did raise a few interesting issues.
The legality and morality of "borrowing" open wireless access points
by mobile CarTel nodes is questionable, even as the number of open
access points has declined in the last few years. ISPs servicing
last-mile Internet connections to consumers are now providing wireless
APs that are locked out-of-the-box, a strong indication that they will
continue to pursue enforcement of their terms-of-use prohibitions
against public connection sharing. This trend undermines the CarTel
premise (in 2006) that open access points would reach a density that
would allow CarTel to be deployed on a massive scale.

Since 2006, deployment of cellular 3G networks has become commonplace
and most current-generation smartphones and even some hand-held
computers now come with 3G capabilities. This wide-spread
availability of 3G obviates much of the motivation behind CarTel.
True, the data muling aspect of CarTel could be useful for places
where this kind of cellular infrastructure is unavailable. However,
it is getting more difficult to find such places. (One might --
incorrectly -- assume that such scenarios would arise in third-world
countries. However, it is in these countries that cell phone
penetration is even higher than computer penetration.)

We had a sense of slight unease about the way that CarTel incorporated
delay tolerance. Because data-muling and opportunistic open AP usage
are not predictable, data arriving at the base station ("portal" in
CarTel parlance) can be subject to large delays. So how "tolerant"
are we to delays? If delays become huge, one might as well just
purposefully drive a sensor-laden car to a particular region of
interest and drive back! The authors don't give any models of
mobility, node density, range, etc., and so can't provide any feel for
the bounds on delay.

We next turned our attention to several of CarTel's query language
constructs. First, the PRIORITY/DELIVERY ORDER and the
SUMMARIZE AS constructs caught our attention. Prioritization via the
end-point bisection method they propose implies a couple of things: (1)
that users are generally interested in a "big-picture" view of the
query results, and (2) that FIFO is bad because you can get a huge
delay in results. These are interesting points, but the
implementation of data priority implies in an idiosyncratic way that
real-time querying is the goal of CarTel, which is at odds with the
stated goal of delay tolerance. Second, we discussed the relative
merits of using the "trace" as the base unit of data. It was not
clear to us how GROUP BY a location would be implemented, for example.
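For concreteness, here is our reading of the end-point bisection
ordering in toy form: deliver the endpoints of a trace first, then the
midpoints of ever-smaller intervals, so that a partially delivered
trace still gives the portal a coarse "big picture." This is a sketch
of the idea as we understood it, not CarTel's actual code, and the
names are invented.

```python
def bisection_order(samples):
    """Order trace samples so endpoints go first and midpoints of
    progressively smaller intervals follow, refining the picture as
    more data arrives."""
    n = len(samples)
    if n == 0:
        return []
    order, seen = [], set()
    for idx in (0, n - 1):
        if idx not in seen:
            order.append(samples[idx]); seen.add(idx)
    queue = [(0, n - 1)]
    while queue:
        lo, hi = queue.pop(0)
        if hi - lo < 2:
            continue
        mid = (lo + hi) // 2
        if mid not in seen:
            order.append(samples[mid]); seen.add(mid)
        queue.append((lo, mid))
        queue.append((mid, hi))
    return order

trace = list(range(9))            # stand-in for 9 GPS readings along a drive
print(bisection_order(trace))     # [0, 8, 4, 2, 6, 1, 3, 5, 7]
```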

We then switched focus to Nericell, a system for annotating road and
traffic conditions using sensor data from smartphones. The crux of
this paper is an interesting way to virtually re-orient sensors in 3D
to a canonical set of axes. Given this capability, simple heuristics
were used to detect potholes and traffic congestion (via accelerometer
and audio samples). We thought that the simple heuristics were a
weakness of this paper. More sophisticated statistical learning
methods exist for this kind of classification/discrimination.
Moreover, such methods may be orientation invariant, obviating the
need for the virtual orientation correction method on which Nericell
is based. On the other hand, the choice of smartphones is an astute
one, since cell phone penetration in India is quite high. Overall, we
felt that the major contribution of this paper is the real-world
implementation on unreliable/uncalibrated sensors, in an extreme
environment (Bangalore roads).
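As a rough illustration of the virtual reorientation step, here is a
minimal sketch: while the phone is at rest, use the measured gravity
vector to find a rotation that maps phone coordinates onto a canonical
frame with Z pointing up. This is a simplified stand-in for what
Nericell does (its heading disambiguation using GPS and braking cues
is omitted), and all names and numbers are invented.

```python
import numpy as np

def rotation_to_canonical(gravity_sample):
    """Return a rotation matrix that maps the phone's measured gravity
    vector onto the canonical +Z axis (Rodrigues' rotation formula)."""
    a = gravity_sample / np.linalg.norm(gravity_sample)
    b = np.array([0.0, 0.0, 1.0])
    v = np.cross(a, b)
    c = np.dot(a, b)
    if np.isclose(c, -1.0):                 # phone exactly upside down
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Phone lying at some arbitrary tilt: gravity shows up on all three axes.
g = np.array([3.2, 4.1, 8.0])
R = rotation_to_canonical(g)
print(np.round(R @ (g / np.linalg.norm(g)), 3))   # ~[0, 0, 1] after reorientation
```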

Thursday, April 9, 2009

Regiment and Macrolab

The Regiment Macroprogramming System
Ryan Newton et al.
MacroLab: A Vector-based Macroprogramming Framework for Cyber-Physical Systems
Timothy Hnat et al.


Last Thursday in class we discussed Regiment and MacroLab, continuing our tour through programming models for wireless sensor networks. First we looked at Regiment's evaluation, generally agreeing that the number of messages sent was a reasonable rough proxy for energy cost. However, the number of retries and the amount of low-power-listening eavesdropping would grow with network density and affect energy efficiency. Then we looked at node failure, which would drop an entire subtree until the next tree formation round (epoch). There didn't seem to be a good measure of reliability built in, such as how many nodes contributed to the value at a given time. Overall we preferred the region metaphor in Regiment to the "vectors" in MacroLab, though most of us are not Matlab or database folks.

We discussed where Regiment falls in the programming language spectrum, deciding eventually that it sits between tiered and declarative, since it has controls to manage granularity. The big difference between the various languages and programming environments we've read about is in the mental models. Earlier systems addressed problems in great detail just above the hardware level, while the more recent papers present intermediate- and high-level modes of thinking about the problem. The ability to define nested regions is the real difference between Regiment and TinyDB, along with time being a primitive. We returned once again to the problem of time synchronization: what values are really reported at the base station, and how accurately do they reflect the current state of the sensor space? TinyDB merges old data with new as packets get relayed slowly from the outer reaches of the network.

Someone suggested we might be able to fix this by adding a local timestamp as part of the gathered data and buffering the last bunch of values, returning the one that best matches the requested time when queried. But we still need to map between local and global time to adjust for clock drift. Distributed algorithms in general need a lot more information in order to obtain precise measurements: network depth can have a huge impact on the time required to know everything that happened at a specific instant, even discounting the overhead of time synchronization.
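The buffering suggestion is simple enough to sketch directly: keep the last N (local timestamp, value) pairs and answer a query by returning the sample closest to the requested time, after translating from global to local time. The fixed offset below is just a placeholder for whatever a real time-sync protocol would provide.

```python
from collections import deque

class SampleBuffer:
    """Keep the last few (local_time, value) samples and answer queries
    for 'the reading closest to global time T'. The constant offset is
    a stand-in for a real local-to-global clock mapping."""
    def __init__(self, capacity=32, local_to_global_offset=0.0):
        self.samples = deque(maxlen=capacity)
        self.offset = local_to_global_offset

    def record(self, local_time, value):
        self.samples.append((local_time, value))

    def query(self, global_time):
        target_local = global_time - self.offset
        return min(self.samples, key=lambda s: abs(s[0] - target_local))

buf = SampleBuffer(local_to_global_offset=1.25)
for t in range(10):
    buf.record(t, 20.0 + 0.1 * t)    # fake temperature readings
print(buf.query(8.3))                # closest sample to local time ~7.05
```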

We ended up deciding the main contributions were defining functions over regions, allowing programmers to deal with mid-level abstractions with more flexibility than TinyDB but still taking care of all the details of compiling global network definitions to individual node code.

We then moved on to MacroLab, which defines programs in terms of the data needed. Several people expressed concerns about trusting the decomposer to generate reasonable code, which others likened to the relational database revolution. Others worried about the cost model, though all agreed that deciding where to store data under particular network and workload conditions is a real concern worth studying. The paper fell just short of demonstrating the run-time adaptivity that would be extremely useful. Even with run-time adaptivity, poorly balanced workloads could cause thrashing between modes and poor overall behavior.

Tuesday, April 7, 2009

Programming Models -- Tenet and Declarative sensor network

This class continued the theme of working toward picking the best 10 or so papers of the semester for a wireless sensor network "Hall of Fame." Over spring break we were asked to pick five favorites from the 27 papers we've read so far. We started off the discussion by going over people's picks and kicking off a new format for the class. The two top papers were TinyOS (ASPLOS'00) and the radio interferometric localization paper. There were four papers in the running for the three remaining slots: Great Duck Island, nesC, Dozer, and Pixie. We decided to keep them in the running for now.

The topic for this class was programming models. As sensor networks mature, it's natural to think about interacting with them at a higher level. Our first paper for discussion was Tenet. Tenet argues for tiered sensor net deployments, with simple leaf nodes interacting with more powerful master nodes. Programming the network is done through tasks, which are linear chains of tasklets. Master nodes disseminate the tasks to the motes, which perform the requested operations and send their results to an available master.

Some of the class liked the idea of specialized motes, as one size does not fit all, but wondered how easy it is to decide where to place these master nodes. Discussion then turned to whether it would be possible to build a mote that could switch from a low-power mote to a high-power master node, perhaps by using something like dynamic frequency scaling.

There was some concern that this model does not fit the idea of a scattered node deployment (think of nodes being tossed out of a plane), but that deployment style is largely a myth. Very often in real deployments, nodes are placed with some care, though Tenet does require an extra level of planning compared to some of the other systems we have looked at.

One of the advantages of Tenet is that global coordination is easier than in other systems, since tasks only come from the masters. Their main claim is ease of management and usability, but they could do a better job of selling this. It seems like the approach could extend network lifetime, but no real energy evaluation was provided.

One disadvantage is that you have to code for two different platforms, the motes and the master nodes. They didn't make a convincing argument that it's easier to do this work in the master. The usability of the system could be a big win, but the paper didn't convince some of us that it actually is easier. It seems like they are just pushing the complexity around in the system rather than reducing it.

There is a tension in systems between simplicity/reliability and capability. Tenet comes down on the side of simplicity, but did they take advantage of this? They could have talked about all the complexity of the mote-PEG implementation as a straw man and beaten up on that a little. That would have been more convincing. The class voted on the paper for the CS263 hall of fame: 3 weak accepts, 7 weak rejects, and 2 strong rejects, so it didn't make it.

Our second paper was Declarative Sensor Networks (DSN). The authors attempt to come up with a general-purpose, easy-to-use, efficient programming model for sensor networks. They created Snlog, a dialect of Datalog, to program the network. An application gets translated into an intermediate dataflow representation built from chains of database operators, which is then compiled into an executable for the nodes. They implemented several systems in Snlog, such as multi-hop collection, Trickle, a tracking app, and many others that they didn't have room to describe.

The apps seem concise, and each could fit on a PowerPoint slide, but that led to our first point of discussion. Could a domain scientist get her application working and get up to speed quickly in such a system? Someone commented that it's like writing your application in haiku. This got a chuckle out of the class and led Matt to his own blog entry on the topic. The point is that maybe several pages of code that you can understand and reason about are better than a half page of dense code that leaves you little wiggle room if you get it wrong. Debugging sensor networks is hard, and these DSN models are tough for some people to wrap their heads around. Despite what the authors want you to believe, sensor networks are not databases.

What does using a DSN buy you? It gives the compiler and runtime some flexibility in how to implement the program. It's like having an automatic transmission in your car. But you still have to think about programming at the node level; you're basically writing a state machine for the node. The node specifiers are critical to programming the system: you need to pin down the node each statement runs on, yet you are only allowed to name one-hop neighbors. There is implicit communication going on, and statements induce communication in the network. It may be hard to get a handle on energy usage in such a system. As for nominations to the hall of fame, there were 4 weak accepts, 5 weak rejects, and 3 strong rejects, so we were 0 for 2 with this class.

Sunday, March 29, 2009

Dealing With Sensor Data

A Macroscope in the Redwoods
Tolle et al.

Fidelity and Yield in a Volcano Monitoring Sensor Network
Werner-Allen et al.

Today we discussed two papers which involved environmental monitoring. Although we've seen papers which did this sort of monitoring before (Great Duck Island), both of these papers brought something new to the discussion, in that they tackled using WSN as scientific instruments, with a focus on the collected data.

The first paper we discussed was "Macroscope." Opinion was split over the contributions of this work. Some argued that the real deployment in the redwoods gives a good overview of the capabilities of current WSN for data collection, grounding the field. WSN are here! Data valuable to biologists, previously unattainable, was now available. "Macroscope" provided a look at what useful data such a deployment could produce, where the problems for future work lie, and what calibration issues arise.

On the other hand, some argued that there was nothing altogether too impressive here. Previous work had performed environmental monitoring, and other fields have data sets far, far larger than what was considered in this paper. It was proposed that particle physicists, astronomers, and others might get a good laugh out of the relatively tiny data sets from environmental monitoring.

In "Macroscope," the data yield from the network was far less than ideal, and far less than that of the data loggers. Someone wondered whether it wouldn't be better for a static deployment to just run cables, given the unreliability of wireless communication and the need for scientists to miss as few sensor readings as possible. We discussed a few problems with wired connections, including deployment issues and the weight of the cables being cumbersome for the tiny motes. Someone suggested that a WSN could hopefully be deployed by a non-technical person, just by switching the nodes on and letting the network form itself, rather than having to worry about physically connecting things properly.

One point of discussion regarded the need for the radio network at all. "Macroscope" points out that they used data both from the on-board flash data loggers and from the network. However, had the logs been reset at deployment time, there would not have been any need for the network, so why bother? Could environmental monitoring be an application where long-established data loggers are more appropriate? Someone made the point that the network allows us to do time synchronization for sensor reading timestamps without power-hungry GPS. Additionally, the data from some applications, such as seismological event monitoring, may have real-time value.

This motivated a more general discussion about the potential misuse of motes, where other platforms were more appropriate (not saying that this was the case with "Macroscope"). One impassioned individual argued that there may be a temptation to solve an interesting technical problem for motes, for an application where motes aren't the most practical solution. It may indeed be out of balance to stick a mote on an expensive, power hungry sensor.

As the discussion segued to "Volcano," someone suggested that the success or failure of a particular WSN should be data oriented. Indeed, "Volcano" focused on how well a WSN could replace previously used systems for data collection, using data fidelity and yield as its metrics.

In one sense, this work was less successful than "Macroscope." Between software problems and power outages, the network was down for a significant portion of its deployment, making conclusions about performance potentially troublesome. Additionally, a bug in the time sync protocol made data analysis very difficult. Luckily, a sensor near the deployment was used as "ground truth" to recover as much data as possible. The paper stressed that many problems can occur in deployment that are not apparent in lab testing (such as high-velocity rocks destroying an antenna).

One person said that this paper presented itself as "We deployed a broken system, and this is how we extracted useful lessons," rather than "Here's our volcano monitoring system, and here was the performance," as the latter presentation may not have been accepted into a top conference, due to the various problems encountered during deployment.

Someone, perhaps somewhat cynically, suggested that if the system had been used to monitor a forest, rather than a volcano, it would not have been accepted.

Throughout the discussion, we considered which of the papers today, if either, we might include in a cs263 hall of fame. Nobody seemed ready to make a strong commitment to either paper, nor did anybody suggest any of the other papers we read this semester. It seemed hard to directly compare the quality of the contributions of one paper with those of another, and even more so to say that one paper is better, by whatever metric, than most of the rest.

Tuesday, March 17, 2009

Target Tracking

A Line in the Sand: A Wireless Sensor Network for Target Detection, Classification, and Tracking
Arora et al.

Tracking Multiple Targets Using Binary Proximity Sensors
Singh et al.

We took on object tracking in today's class. Despite similar goals, the two papers we read took noticeably different approaches. "Line" clearly made an effort to generalize its approaches and algorithms, while "Tracking" focused on a narrower 1D subset of the problem.

We started our discussion with "Line," and the brief evaluation was the first issue. Perhaps details of the results were obtained but withheld due to the paper's military connection? Still, the scalability results they do release are not ideal. Further, the details of their experiments are vague. Alas, it's hard to know when you don't know what you don't know.

The energy issue came up early too. Many of us would have liked to see some evaluation of the energy consumption of the system. How much energy is consumed by the four consumers identified in the paper (sensing, computing, storing, and communicating)? In the interest of maintaining generalizability, we understand that you wouldn’t want to constrain the system to a certain lifetime or power allocation, but it would be nice to understand the magnitude of power consumption.

We identified some contributions in “Line”. The most obvious was support for tracking in two dimensions, using binary sensors. The paper also introduces the concept of an influence field (the area in which a sensor could detect a target). In addition, the paper attempts to count and categorize tracked objects (ordinary person, soldier, military vehicle). The paper also identifies several potential mote sensors and makes a case for the two they use: a magnetometer and radar. There was a little discussion about whether this list was compiled before or after sensor selection, but this might be some grad students projecting their experiences onto the paper...

The radar sensor was interesting, as most (if not all) of us had never used one. The radar seems to have a good range at 60’, although the specific measurement was not intuitive at first. The sensor is not measuring the velocity of the tracked object, but rather the magnitude of the vector component pointed toward the sensor.

The magnetometer’s characteristics were clearer, but seemed to introduce many of the flaws in this paper. The biggest problem is the range, which seems to be roughly 10' and, we suspect, leads to the need to place sensors about 5' apart from each other. Not only is this kind of sensor density difficult to deploy, but it causes radio density problems as well. The paper makes several references to contention leading to problems: for example, increasing retransmissions results in worse performance, tuning radio power produces large performance improvements, and scalability becomes a problem due to radio density.

We also identified some weaknesses and questionable assumptions. First, the paper assumes targets are moving. This limits the system to detecting intruders; it could not be used for reconnaissance (how many tanks are hiding in a base, etc.). Second, the paper assumes that two targets are at least 10m from each other. This might be true of vehicles, but is not always the case when dealing with people (soldiers or otherwise). In addition to miscounting, violations of the 10m assumption might lead the system to misclassify a group of soldiers as an armored vehicle. Finally, the paper assumes that noise will only affect a minority of motes. Wind is the example given in the paper, but wouldn't wind affect a majority of nodes? Wind isn't terribly discriminating.

We moved on to the “Tracking” paper. The paper also uses binary nodes (this time generalized as outputting a 1 if at least one target is detected, 0 otherwise). The system takes a series of snapshots over time and sends them to a centralized processor. The processor then tries to determine a lower bound on the number of monitored targets.

There were a few terms that needed clarification:

  • Particle: refers to the trajectory of a target
  • Cluster: a set of targets with a common trajectory
  • Particle Filtering: looking at trajectories (particles) over time to determine which targets are in a cluster. To improve accuracy, the particle filter is weighted against changes in velocity (see the sketch just below this list).
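Here is a tiny sketch of that velocity-change weighting: a candidate trajectory is scored so that smooth, constant-velocity motion keeps a high weight while jerky motion is heavily down-weighted. The exponential form and the penalty constant are our own choices, not the paper's.

```python
import math

def trajectory_weight(positions, dt=1.0, penalty=2.0):
    """Score a candidate 1D target trajectory (a 'particle'): the more
    its velocity changes between snapshots, the smaller its weight."""
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accel_cost = sum(abs(v2 - v1) for v1, v2 in zip(velocities, velocities[1:]))
    return math.exp(-penalty * accel_cost)

smooth = [0, 1, 2, 3, 4, 5]          # constant velocity along the 1D sensor line
jerky  = [0, 1, 5, 2, 8, 3]          # repeated speed/direction changes
print(trajectory_weight(smooth))     # 1.0
print(trajectory_weight(jerky))      # vanishingly small weight
```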

Although this paper limited itself to one dimension and we all agreed this was not terribly useful, many in class believed the approach could be extended to 2D and beyond. Perhaps the 1D limitation was a product of the need for a simple explanation, although many wanted to know why some results were not given for 2D. After all, if 2D+ is as simple as the paper claims, why not implement it?

Also, the particle filtering algorithm penalizes changes in velocity. Although this may be reasonable for tracking certain moving objects, it seems more likely that real objects would change their velocity (people, traffic, birds, etc.).

Finally, we had some questions about how two targets in the same cluster would be counted. Would deviating from a shared trajectory at different times have a different result than deviating at the same time?

Monday, March 16, 2009

Storage: Capsule and FlashDB

Capsule: An Energy-Optimized Object Storage System for Memory-Constrained Sensor Devices
Mathur et al.

FlashDB: Dynamic Self-Tuning Database for NAND Flash
Nath and Kansal

The topic of today's discussion was flash storage in wireless sensor networks. The class started by discussing some higher-level issues about storage on motes and about flash storage in general before digging into the specifics of the assigned papers. First, one person questioned the assumption that mote applications are write-intensive. His argument was that motes tend to have low duty cycles, so there is not that much data written after all. In response, one student suggested that the write-intensive classification is based on the read-write ratio rather than on the absolute frequency of writes.

Then, one person questioned the need for advanced storage primitives, such as queues or files, since in practice most applications use only simple append-only logs. However, some motes store their configuration as files on flash, and a filesystem provides a familiar storage abstraction for programmers. If motes supported a filesystem recognized by PCs, such as FAT, we would be able to just pull out an SD card and plug it into a laptop in order to access the data. However, implementing FAT on a mote can be quite challenging, so the network could instead have a few more capable, FAT-enabled motes, which would gather data from the other, simpler motes.

We were also wondering why we would need a lot of storage on a mote. For example, if the application sampled two sensors every 20 minutes, producing 4 bytes each time, a 2 GB disk would be filled in approximately 20,000 years. Even in the case of data-intensive applications, it would take at least several months in order to fill the disk. Another concern that we discussed was pulling out data once it was collected, since it is effectively impossible to stream several gigabytes of data using the radio. Instead, the users should be able to send queries to motes and retrieve only the data they actually need, which justifies the need for indexes and structured storage in general.
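For reference, here is the back-of-the-envelope arithmetic behind that 20,000-year figure, under the same assumptions we used in class (4 bytes of readings every 20 minutes, a 2 GB card).

```python
BYTES_PER_SAMPLE = 4                  # two sensor readings packed into 4 bytes
SAMPLE_PERIOD_S = 20 * 60             # one sample every 20 minutes
FLASH_BYTES = 2 * 1024**3             # 2 GB card

bytes_per_year = BYTES_PER_SAMPLE * (365 * 24 * 3600 / SAMPLE_PERIOD_S)
print(f"{bytes_per_year / 1024:.0f} KB written per year")
print(f"{FLASH_BYTES / bytes_per_year:.0f} years to fill 2 GB")
```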

Then, the topic of discussion changed to flash storage and its properties. Someone pointed out that a mote is inherently incapable of efficiently performing updates in place. In order to do so, it must read the entire block into memory before erasing it, which requires more memory than is found on typical motes. The alternative is to copy the entire block elsewhere in the flash, erase the original block, and then copy it back while performing the requested changes on the fly.

In flash, it is possible to turn 1's into 0's without having to erase the entire block (an erase sets every bit to 1, and a program operation can only clear bits). One person suggested that this could be leveraged to change already-stored pointers into null pointers, provided we represent null as 0000 rather than FFFF. Another student suggested that the application could leave some of the written data in the erased, all-1's state and program it in place later. However, we realized that this would not work, because the flash controllers do not expose an interface for performing such operations.
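A tiny model of that bit-level rule makes the point: programming can only clear bits, so the stored value behaves like the bitwise AND of the old and new contents. Overwriting any value with all-zero bits always works; "un-clearing" a bit requires a block erase. This is a toy model of the programming semantics, not a real flash driver.

```python
def program_in_place(stored, new):
    """Model of flash programming semantics: bits can only go 1 -> 0,
    so the cell ends up holding the bitwise AND of old and new data."""
    return stored & new

ERASED = 0xFFFF                          # an erase sets every bit to 1
ptr = program_in_place(ERASED, 0x1A2B)   # first write: any value is fine
print(hex(ptr))                          # 0x1a2b
null = program_in_place(ptr, 0x0000)     # clearing to 0x0000 always works
print(hex(null))                         # 0x0
broken = program_in_place(null, 0x1A2B)  # trying to 'restore' bits fails
print(hex(broken))                       # still 0x0 -- needs a block erase
```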

Finally, our attention turned to Capsule, the first paper assigned for today. The system uses log-structured storage, which seems to be the only reasonable storage model for motes. This approach works well for write-intensive applications at the cost of slow reads. For example, accessing the head of a queue requires the application to traverse it all the way from the back to the front. On personal computers, this performance issue is addressed by using large buffers, but on motes we do not have the luxury of doing so.

Most of the remaining discussion of the paper revolved around log cleaning and compaction. Because of the log-based approach, the flash storage can quickly fill up with irrelevant (deleted) data. But in order to reclaim the space, the log must be compacted, due to the must-erase-before-write nature of flash. Compaction, however, takes a lot of energy and time. One of our concerns was that Capsule might start this process while the client application is in the middle of a high-rate data capture. Another concern was that when there is nothing to be compacted, the storage component would attempt to do so anyway and just waste energy. Finally, someone pointed out that this scheme effectively allows only 1 GB of storage on a 2 GB device, since it has to leave enough space for compaction.

Towards the end of the class period, we shifted our attention to FlashDB. One of the first observations was that using B+-tree nodes in disk mode requires in-place updates, which would not work on typical motes. We highlighted FlashDB's ability to switch the mode of operation online, instead of just determining the appropriate mode before deployment.

We were wondering why we would need the online self-tuning capability until one student suggested an example in which a mote samples data during the day and streams it to the base station during the night. Ideally, the B+ tree would be in the write-intensive (log) mode during the day and in the read-intensive (disk) mode during the night. However, we were not convinced that FlashDB would make this switch early enough, especially since this dynamic behavior was not evaluated in the paper. Perhaps it would be beneficial if the client application could disable self-tuning and perform the switch manually.
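To show the shape of the switch we had in mind, here is a toy policy: each node tracks its recent read/write mix and flips between log mode and disk mode when the ratio crosses a threshold. FlashDB's actual decision uses a real cost model; the window and thresholds here are placeholders of our own.

```python
class NodeModeTuner:
    """Toy self-tuning policy: a B+-tree node tracks its recent
    read/write mix and switches between 'log' (write-optimized) and
    'disk' (read-optimized) representation."""
    def __init__(self, window=100, read_heavy=0.7, write_heavy=0.3):
        self.window, self.read_heavy, self.write_heavy = window, read_heavy, write_heavy
        self.ops = []                 # recent ops: 'r' or 'w'
        self.mode = "log"

    def record(self, op):
        self.ops.append(op)
        if len(self.ops) > self.window:
            self.ops.pop(0)
        read_frac = self.ops.count("r") / len(self.ops)
        if self.mode == "log" and read_frac >= self.read_heavy:
            self.mode = "disk"        # pay the conversion cost once
        elif self.mode == "disk" and read_frac <= self.write_heavy:
            self.mode = "log"
        return self.mode

tuner = NodeModeTuner(window=20)
for _ in range(30):                   # daytime: mostly sampling (writes)
    tuner.record("w")
print("daytime mode:", tuner.mode)    # stays in log mode
for _ in range(30):                   # nighttime: mostly queries (reads)
    tuner.record("r")
print("nighttime mode:", tuner.mode)  # flips to disk mode
```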

Thursday, March 12, 2009

Data Aggregation: TinyDB, Beyond Average, and Synopsis Diffusion

The Design of an Acquisitional Query Processor for Sensor Networks (Madden et al.)

Beyond Average: Toward Sophisticated Sensing with Queries (Hellerstein et al.)

Synopsis Diffusion for Robust Aggregation in Sensor Networks (Nath et al.)

Today, we addressed the topic of data aggregation. We began by
exploring the contents of the ACQP and Beyond Average papers. Starting
with ACQP, we were quite pleased to see that the issue of power/energy
consumption was finally being brought to the forefront of a paper
related to sensor networks again (in the previous week, there was a
shared longing for papers that addressed the topic of energy more
directly). Moreover, the ACQP paper presented what one student
suggested was a new "programming model for sensor networks," in
particular a paradigm of viewing sensor networks as instruments
for scientific studies.

As we began to philosophize about this idea, we found ourselves discussing the
questions that undergird the distinctions between the systems and
database communities. To a first approximation, we understand that the
systems community tends to approach problems from the bottom-up while
the database community goes from the top-down. A DB example we
discussed regarded the use of query optimizers and other DB internals
to take the declarative statements of a language like SQL and hide the
details of returning the correct answer as quickly and efficiently as
possible. The systems folks in the room tended to agree that we like
to approach problems from a (relatively) low-level base.

Ultimately, we were trying to get at the question: Should the system
optimize queries for you? For DB folks, this seemed like a
no-brainer. We even tended to think about the benefits by analogy
(though not a perfect one) to compiler optimization.

But this is where we began to wonder whether the optimizations
presented in this paper (e.g., prioritization schemes) are
appropriate for sensor networks. In the compiler analogy,
compilers rearrange instructions, but the execution of, say, a
statically-bounded for-loop won't suddenly decide to stop at 10
iterations rather than 20. We questioned the example prioritization
schemes, as winavg and delta seemed to defy the correctness that some
sensor network users would desire for their applications. However, we
acknowledged that specific applications may find those schemes
useful or reasonable, but some skepticism continued to linger on this
point.
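To pin down why these schemes bothered us, here is a toy rendering of
the two policies as we understood them from the paper: winavg merges
the two results at the head of an overflowing queue into their
average, while delta keeps the readings that differ most from the
last value actually transmitted. The exact details in ACQP differ;
this sketch just shows how both degrade or reorder data.

```python
def enqueue_winavg(queue, sample, capacity):
    """On overflow, merge the two results at the head into their average
    (data is degraded rather than dropped)."""
    queue.append(sample)
    if len(queue) > capacity:
        a, b = queue.pop(0), queue.pop(0)
        queue.insert(0, (a + b) / 2)
    return queue

def enqueue_delta(queue, sample, capacity, last_sent):
    """On overflow, keep only the samples with the largest difference
    from the last transmitted value (most 'newsworthy' first)."""
    queue.append(sample)
    queue.sort(key=lambda s: abs(s - last_sent), reverse=True)
    return queue[:capacity]

q = []
for reading in [20.0, 20.1, 20.2, 27.5, 20.3]:
    q = enqueue_winavg(q, reading, capacity=3)
print("winavg queue:", q)     # some readings now exist only as averages

q = []
for reading in [20.0, 20.1, 20.2, 27.5, 20.3]:
    q = enqueue_delta(q, reading, capacity=3, last_sent=20.0)
print("delta queue:", q)      # near-duplicate readings are silently dropped
```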

Moreover, we felt that the Beyond Average paper might in fact be
breaking the abstraction barrier that it set out to establish
(high-level declarative language - SQL - hiding the underlying details
of sensor network operation). The one example from which this
sentiment emerged was in the Query-Handoff Implementation for tracking
(i.e., Fig 8). The use of SIGNAL EVENT to trigger an ON EVENT
statement appeared a lot like an imperative programming language
method or function call. We weren't certain anymore if the high-level
approach to sensor network application writing was appropriate or
possible after realizing this was what the authors were demonstrating.

In general, we appreciated the abstractions that the DB folks aimed
for, but we felt uneasy about whether this approach to programming
sensor networks would be feasible in the future.

Of course, we also spent some time examining the Synopsis
Diffusion paper. Mostly, we sought to understand the COUNT algorithm
(which seeks to know how many nodes are present in the network) to
better understand Synopsis Generation, Fusion, and Evaluation. We felt
that there was a little blackbox magic going on for COUNT, especially
in the SE() phase, but we didn't get too hung up on this process.
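For reference, the COUNT synopsis is essentially a Flajolet-Martin
sketch: SG() sets bit i with probability 2^-(i+1), SF() is just
bitwise OR (which is exactly what makes it order- and
duplicate-insensitive), and SE() estimates the count from the lowest
unset bit. The toy version below is our own rendering; a single
sketch like this is coarse, and the paper's scheme is more refined.

```python
import random

BITS = 32

def sg(node_id, seed=0):
    """Synopsis generation: a node contributes a single bit, chosen so
    bit i is set with probability 2^-(i+1). Keying the coin flips on the
    node id means duplicates always produce the same bit."""
    rng = random.Random(node_id * 1_000_003 + seed)
    i = 0
    while rng.random() < 0.5 and i < BITS - 1:
        i += 1
    return 1 << i

def sf(s1, s2):
    """Synopsis fusion: bitwise OR -- order- and duplicate-insensitive."""
    return s1 | s2

def se(synopsis):
    """Synopsis evaluation: if i is the lowest unset bit, estimate the
    count as 2^i / 0.77351 (the standard Flajolet-Martin correction)."""
    i = 0
    while synopsis & (1 << i):
        i += 1
    return (1 << i) / 0.77351

nodes = range(1000)
synopsis = 0
for n in nodes:                      # any order, any number of re-sends
    synopsis = sf(synopsis, sg(n))
print(f"true count = {len(nodes)}, estimate ~ {se(synopsis):.0f}")
```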

We seemed to unanimously appreciate the presentation of necessary and
sufficient properties for ODI-correctness; however, we wished that the
authors would have spent more time discussing how to create
ODI-correct Generations and Fusions.

We had a lot of papers to get through on Tuesday, but each of them
seemed to generate a fair amount of dialogue and interest from the
class, allowing us to explore everything from DB and systems research
philosophy to the details of proofs and ODI-correctness.

Thursday, March 5, 2009

Time sync and localization: FTSP and Radio Interferometric Localization

The Flooding Time Synchronization Protocol
Maroti et al.

Radio Interferometric Geolocation
Maroti et al.

Today's discussion was a little different from previous classes
because the two papers weren't on the same topic. So discussion was
largely split into two parts.

Part I: FTSP.

We started off by throwing out some thoughts about our likes and
dislikes. Under the "likes" column, people seemed to agree that FTSP
was a generally useful protocol in that it solved a practical problem
well (especially since it produces error on the order of tens of
microseconds). Also, we particularly appreciated the authors'
extensive breakdown of the many (ten!) sources of synchronization
error, the magnitudes of each, and their characteristics. The
exposition made it much easier to understand the problem space and
what problem FTSP was trying to solve.

Under the "dislikes" column, people seemed to have some reservations
about the evaluation section since there was no direct evaluation
alongside other synchronization protocols, just comparisons to their
published performance numbers. Along these lines, there were also a
few questions raised about the metrics used to measure FTSP's
performance (we'll get to that a little later).

Root election came up as a discussion topic. It caught our attention
mainly because the root election process was reported to take so long
to converge (17 mins). There was some confusion about what happened
during convergence, when multiple nodes might be considered as root.
The election algorithm maintains time consistency when a root
relinquishes its status to another node with a lower ID, but only if
the algorithm had previously been stable and the nodes were more or
less in sync. However, when the system is boostrapping, clusters in
disjoint parts of the routing tree may have inconsistent times. It
would have been nice if the authors had included some analysis about
how the system behaves during bootstrapping.
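To make the election concrete, here is roughly how we understood it to
work, as a simplified Python sketch; the constant and field names are
ours, not the paper's.

    ROOT_TIMEOUT = 3   # sync periods to wait before self-electing (made-up value)

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id
            self.root_id = None           # no root known yet
            self.periods_without_root = 0

        def on_sync_message(self, msg_root_id):
            # Adopt the advertised root if we have none, or if it has a lower ID;
            # this is how a single root eventually wins once the network is connected.
            if self.root_id is None or msg_root_id < self.root_id:
                self.root_id = msg_root_id
            if msg_root_id == self.root_id:
                self.periods_without_root = 0

        def on_sync_period(self):
            # Having heard nothing for a while, declare ourselves root; a later
            # message from a lower-ID root will demote us again.
            self.periods_without_root += 1
            if self.periods_without_root > ROOT_TIMEOUT:
                self.root_id = self.node_id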

The talk about root election brought up a more fundamental question
about whether a root is needed at all. Would it instead be possible
for the nodes to arrive, in a distributed way, at a global consensus
time? Recent work by the SYRAH group on DESYNC, a distributed
consensus protocol modeled on pulse-coupled oscillators, might serve
as a primitive for doing just this.

As we delved deeper into the paper, one major issue that came up was
the authors' evaluation metric of average pairwise error. Since the
whole point of this protocol was to institute a global clock amongst
all the nodes, it would seem more prudent to use a metric that
measured each node's deviation from the global clock. Additionally,
the calculation of the average error per hop seemed strange, since it
took the all-pairs average error and divided it by the network
diameter.
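To illustrate the distinction with a toy example (the numbers are our
own, purely for contrast, and not from the paper):

    # Hypothetical clock offsets (in microseconds) of five nodes relative to a
    # reference "global" clock.
    offsets = [0, 5, -5, 20, -20]

    # Metric the paper reports: average error over all node pairs.
    pairs = [(a, b) for i, a in enumerate(offsets) for b in offsets[i + 1:]]
    avg_pairwise_error = sum(abs(a - b) for a, b in pairs) / len(pairs)

    # Metric we would have preferred: each node's deviation from the global clock.
    avg_global_deviation = sum(abs(o) for o in offsets) / len(offsets)
    max_global_deviation = max(abs(o) for o in offsets)

    print(avg_pairwise_error, avg_global_deviation, max_global_deviation)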

One alternative to FTSP we considered was a protocol that used time
deltas. The idea would be that each node would measure its time
difference with its parent and that the root could figure out the
clock offset of a node by simply summing the deltas along the path to
that node. The drawback to this is that only the root has a concept
of the global time and that this would make it hard to coordinate
nodes (e.g., "at 10:24:56.000 execute command X"). One could conceivably
send a command from the root like "in 00:10:00.000 execute command X"
and each node would adjust the time according to its own clock
offset. However, this would only work if the message's propagation
time to the deepest node were smaller than the timing error we are
willing to tolerate. Since we're talking about microseconds, the delta
method may not be feasible.
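A tiny sketch of the delta idea we were kicking around (entirely
hypothetical; the tree and the numbers are invented for illustration):

    # Each entry maps a node to (parent, measured clock delta vs. that parent, in us).
    # The root's offset is defined as 0.
    tree = {
        "root": (None, 0),
        "A": ("root", +12),
        "B": ("root", -7),
        "C": ("A", +3),
    }

    def offset_from_root(node):
        """Sum the per-hop deltas along the path from `node` up to the root."""
        total = 0
        while tree[node][0] is not None:
            parent, delta = tree[node]
            total += delta
            node = parent
        return total

    # e.g. node C's offset relative to the root's clock: 12 + 3 = 15 us
    print(offset_from_root("C"))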

Part II: Radio Interferometric Geolocation

We started off the discussion of the second paper by again throwing
out our "likes" and "dislikes" about the paper, although there weren't
many "dislikes" to speak of. In particular, people seemed to like the
novelty of the idea and were pleasantly surprised that it was actually
possible to implement. Everyone also seemed to enjoy the fact that
the paper started off with some theoretical underpinnings and then
steadily progressed into a very "engineering" kind of paper
(e.g., dealing with inaccuracy of the radio). It was striking that
nothing about this paper was naive -- the authors make important
accommodations for real-world problems that are not immediately
obvious.

One of the highlights of this paper was the method used to compensate
for inaccurate radio crystals. The problem the authors were trying to
solve was that each radio behaved somewhat differently -- when set to
emit at a particular frequency, the emitted signal could be up to 2kHz
off nominal. So the authors built in an automatic calibration
step that involved three nodes: nodes A and B transmitted at close
frequencies f1 and f2. Node B was held fixed at f2 and f1 was varied
until the difference between the frequencies at a third node C was
observed to be minimized. This was pretty clever!
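Our understanding of that calibration loop, as a hypothetical sketch:
only the structure -- sweep f1, watch the beat at C, keep the setting
that minimizes it -- follows the paper; the search range, step size,
and function names are made up.

    def calibrate(transmit_pair, measure_beat_at_C, f2_hz, search_khz=5, step_hz=100):
        """Sweep node A's tuning around node B's fixed frequency f2 and return the
        setting that minimizes the beat frequency observed at node C.
        `transmit_pair` and `measure_beat_at_C` stand in for the real radio and
        measurement calls, which we obviously don't have here."""
        best_f1, best_beat = None, float("inf")
        for df in range(-search_khz * 1000, search_khz * 1000 + 1, step_hz):
            f1 = f2_hz + df
            transmit_pair(f1, f2_hz)        # A transmits at f1, B stays fixed at f2
            beat = measure_beat_at_C()      # a low beat means f1 and f2 are nearly equal
            if beat < best_beat:
                best_f1, best_beat = f1, beat
        return best_f1, best_beat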

Another clever thing done in this paper was in the measurement of the
frequency and phase offset. Note that what they are sampling is NOT
the carrier frequency. If this were the case, then with a carrier
frequency of approximately 400MHz, the sampling rate would have to be
800MHz -- clearly impossible with an ADC that samples at 9kHz max. So
it was neat that the authors sampled the beat frequency instead:
because the two carrier frequencies were close, the beat frequency was
low and easily sampled at 9kHz.
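The back-of-the-envelope version of that argument (the exact
frequencies are illustrative, just roughly in the paper's range):

    f1 = 430_000_000.0        # node A's carrier (Hz), roughly the band used
    f2 = f1 + 500.0           # node B tuned a few hundred Hz away after calibration

    carrier_nyquist = 2 * max(f1, f2)   # ~860 MHz: hopeless for a 9 kHz ADC
    beat = abs(f1 - f2)                 # 500 Hz
    beat_nyquist = 2 * beat             # 1 kHz: comfortably under the 9 kHz ADC limit

    print(carrier_nyquist, beat, beat_nyquist)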

There were some practical considerations that hampered this method.
In particular, the 80 mins it took to localize the nodes seemed a
little onerous, and impossible if nodes were at all mobile. Also, the
experiments were performed in a flat outdoor field, presumably with no
signal interference from other sources. Since this method directly
relies on the radio signal, lack of interference is necessary. This
assumption, however, is not all that realistic. Still, we marvel at
the sheer novelty of the method.

Friday, February 27, 2009

Flush and RCRT: Transporting your data reliably

Flush: A Reliable Bulk Transport Protocol for Multihop Wireless Networks
(Kim, et al.)

RCRT: Rate-Controlled Reliable Transport for Wireless Sensor Networks
(Paek, et al.)

Today we discussed two protocols that address reliable, rate-limited transport in wireless sensor networks. First, we discussed the similarities between the two designs and concluded that both papers use end-to-end NACKs and link-layer ARQ, and both provide some mechanism for communicating control information from the nodes. One thing that was not explicitly stated in either paper was the different applications for which Flush and RCRT were designed. Recognizing this addressed some of the skepticism I had about the feasibility of these implementations for general sensor network deployments, since each paper makes assumptions that are not universally applicable.
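The shared end-to-end recovery mechanism boils down to the sink tracking holes in the sequence-number space and asking for them back. A generic sketch of that idea (not either paper's actual packet format or code):

    def missing_seqnos(received, total):
        """At the sink: given the sequence numbers received so far and the
        expected total, return the holes to request in a NACK."""
        return sorted(set(range(total)) - set(received))

    def handle_nack(buffer, nack_list, send):
        """At the source: retransmit the packets the sink says it is missing.
        `buffer` is the sender's retransmission buffer, `send` the radio call."""
        for seq in nack_list:
            if seq in buffer:
                send(seq, buffer[seq])

    # e.g. sink received packets {0, 1, 3, 4, 7} out of 8 -> NACK asks for [2, 5, 6]
    print(missing_seqnos({0, 1, 3, 4, 7}, 8))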

I thought the Flush paper somewhat oversimplified the problem of source-to-sink data transport by accounting for only one data flow at any instant. This means that the sink has to directly request data from each source node. While the design works great for a single flow, it also means that all the congestion and rate control information only goes one way, from the source to the sink. There is no mention of the Flush design working in the other direction; instead, packets from the sink are transported reliably by having each node transmit to the next node on the routing path and wait for an ACK at each step. Flush could probably be modified to handle both directions with some additional overhead.

In the intro, the authors mention that Flush can adapt to varying network conditions. When I first read this, I assumed that Flush would adapt to nodes leaving and re-entering the network, or to scenarios where the routing path changes. In the evaluation section, I was disappointed to see that the test for adaptability was just having the nodes drop packets with probability 0.5 for an interval of 10 seconds. The adaptability claim therefore wasn't really addressed in terms of node failures, and all other experiments were conducted with the routing tree frozen for every run.

The main discussion point concerning RCRT was the "totalitarian" nature of the relationship between the sink and the network. In other words, when a node's loss-repair time exceeds a threshold of 4 (measured in round-trip times), the sink cuts the rate of EVERY node using a multiplicative rule. To bring the network's rate back up again, every node has to fall below the threshold. We inferred that the authors may have done this because modifying each node's rate individually is too difficult: the network becomes too complex, and it becomes exceedingly difficult for the sink to understand exactly how the nodes affect and interact with each other.
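What we were describing is essentially AIMD applied to the whole network at once. Here is a hypothetical sketch of the sink-side logic; the thresholds and constants are invented, and this is our paraphrase of the rule rather than RCRT's actual code:

    CONGESTION_THRESHOLD = 4.0   # loss-repair time, in RTTs (illustrative)
    MD_FACTOR = 0.5              # multiplicative decrease applied to everyone
    AI_STEP = 0.1                # additive increase (pkts/sec) when all is well

    def adjust_rates(rates, repair_time_in_rtts):
        """rates: {node_id: pkts/sec}; repair_time_in_rtts: {node_id: float}.
        If ANY node is slow to repair losses, cut EVERY node's rate; only when
        all nodes are below the threshold do rates creep back up."""
        if any(t > CONGESTION_THRESHOLD for t in repair_time_in_rtts.values()):
            return {n: r * MD_FACTOR for n, r in rates.items()}
        return {n: r + AI_STEP for n, r in rates.items()}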

One of the more general philosophical points dealt with the inevitable choice between avoiding congestion altogether and detecting congestion and reacting to it. In the RCRT paper, we saw that the network "heals" itself as congestion increases, whereas IFRC takes the opposite approach and does what is necessary to prevent the network from becoming congested in the first place. We concluded that the avoidance approach will almost always lead to better energy utilization because there is never any wasted "effort": packet loss wastes energy, since the energy already spent transmitting a dropped packet cannot be regained.