Friday, April 24, 2009

The Best of CS263

Well, we're done discussing papers in CS263 this term. We read and talked about 41 papers over the course of the semester, and by and large they were really great.

We decided to run the course in a mock "program committee" style, in which our goal was to pick the top 10 or so papers over the course of the semester. This was great practice for the students, but it also provided a focal point for the discussion: Does a given paper belong in our "program"?

After much deliberation, we have selected the following eight papers as The Best Papers from CS263. This could be seen as a kind of "best sensor network systems papers" award. (Of course, there are many papers we did not read at all, so caveat emptor!)

System Architecture Directions for Networked Sensors, J. Hill et al., ASPLOS'00

This is the classic paper on TinyOS, which lays out many of the challenges in the sensor network domain and focuses on the new OS and communication model requirements.

Radio Interferometric Localization, M. Maroti et al., Sensys'05

There's a lot to like about this paper, which applies RF interferometry to low-power wireless sensor nodes; it has a lot of technical depth.

The Design of an Acquisitional Query Processor for Sensor Networks, S. Madden et al., SIGMOD'03

TinyDB has been hugely influential in terms of new approaches for querying and aggregating data in sensor nets.

Sensor Network-Based Countersniper System, G. Simon et al., Sensys'04 (BEST PAPER AWARD!)

This system uses acoustic sensors to localize the source of gunshots in an urban setting. It brings together event detection, time synchronization, localization, and a nice sensor fusion algorithm to develop a complete system, which was deployed and evaluated in a real field setting. This paper was universally voted the students' favorite, so we've given it our "best paper award."

Distributed Image Search in Sensor Networks, T. Yan et al., Sensys'08

This paper proposes using a network of wireless cameras to store and index images, which can later be queried by similarity. The authors do a very good job of motivating their design and diving into the details of image processing, flash storage, and so forth.

Dozer: Ultra-Low Power Data Gathering in Sensor Networks, N. Burri et al., IPSN'07

This paper does a great job at a complete cross-layer design spanning the MAC, link, routing, and application layers to achieve very low power and high reliability for data collection. In some sense this is the first paper I'd give someone working on communication in sensor nets, since it nails down how to do it right.

An Analysis of a Large Scale Habitat Monitoring Application, R. Szewczyk et al., Sensys'04

This is another "classic" paper on the sensor net deployment at Great Duck Island. It's a great case study of the challenges faced by a sensor network in the real world.

Synopsis Diffusion for Robust Aggregation in Sensor Networks, S. Nath et al., Sensys'04

This was one of the only "algorithms" papers we read, and it does a great job at developing an aggregation technique that is robust to duplicate and lost packets. A superb example of plucking a somewhat obscure concept from the literature and demonstrating its value in a real system.

It's been a great term and I gained a lot by having these heated discussions with the students, who were approaching many of these papers for the first time.

Next week we're doing in-class presentations of the final student projects -- I'll be blogging summaries of those as well!

Underwater sensor networks

T-Lohi: A New Class of MAC Protocols for Underwater Acoustic Sensor Networks
A. Syed, W. Ye, J. Heidemann

Data Collection, Storage, and Retrieval with an Underwater Sensor Network
I. Vasilescu, K. Kotay, D. Rus, M. Dunbabin, P. Corke

We wrapped up the semester with two papers on underwater sensor networks. As you would imagine, being underwater poses a number of new challenges for the operation of the network. Radio signals don't propagate very far, so optical or acoustic systems must be used for communication instead.

We started with a discussion of the data collection paper. This paper presents an underwater sensor network with fixed sensor nodes resting on the bottom of a body of water, while rechargeable mobile "data mules" periodically come by, collecting data from each node. To collect the data, the nodes communicate via line of sight, using timed pulses from an LED on one end and a photodiode on the other. This requires the mobile node to get fairly close to the stationary node and hold relatively still, which could be very difficult in strong currents. Despite some micro-benchmarks performed in the Charles River, the class had its doubts that the system would work robustly under murky ocean conditions. We talked about how this system could be made more robust. Perhaps by slowing down the pulses of light, the system would have more time to detect a weak signal, but this comes at the cost of burning more energy on the fixed nodes. Some of the class was disappointed that the authors didn't do more with trying to encode the signals more robustly.
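
Here is a minimal sketch of that tradeoff, assuming a simple integrate-and-threshold detector; all of the numbers (signal level, noise, thresholds) are made up for illustration and are not from the paper.

    import random

    # Toy model of the pulse-rate-vs-energy tradeoff: integrate photodiode
    # samples over one pulse. The summed signal grows linearly with pulse
    # length while summed noise grows only as sqrt(N), so longer (slower)
    # pulses let a weaker signal clear the detection threshold.
    def detect_pulse(pulse_len, signal=0.2, noise_sigma=1.0,
                     threshold_sigmas=3.0, trials=500):
        hits = 0
        for _ in range(trials):
            total = sum(signal + random.gauss(0, noise_sigma)
                        for _ in range(pulse_len))
            if total > threshold_sigmas * noise_sigma * pulse_len ** 0.5:
                hits += 1
        return hits / trials

    for n in (10, 100, 1000):
        # TX energy per pulse grows linearly with pulse length.
        print("%4d-sample pulse: detection rate %.2f, relative TX energy %4dx"
              % (n, detect_pulse(n), n))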

The other way the nodes can communicate with the data mule is acoustically. The nodes were equipped with inexpensive audio transducers, but aside from one simple benchmark, acoustic communication wasn't really used in their system. That was one of the frustrating things about this paper. The introduction has this vision of many, many nodes scattered across the ocean bottom; what they did was far more modest. They were able to collect data in a large swimming pool. We were also left wanting more of an evaluation. For example, how long did it take to collect data from the nodes? Throughput? Latency? In general, there is a fine line between explaining your vision for a system and overselling your work, losing the trust of the reader.

Someone commented that the paper read more like a technical report than a research paper, with lots of mundane details about the system. We liked that they were able to build the system out as much as they did, and the paper does a good job of motivating why this approach makes the most sense in terms of energy, but some of the class felt the cost of moving the mobile node should have been mentioned. Overall, it was pretty impressive that they got things working as well as they did.

The T-Lohi paper focuses on MAC protocols for underwater sensor networks. They use acoustic modems, which have latencies five orders of magnitude greater than radio and cost 100 times more to transmit than to receive. T-Lohi uses short reservation tones and a contention round (CR) before transmitting. The tones are short enough that nodes can usually hear each other's reservation tones; this gives each node a count of how many other nodes wish to transmit, and the backoff timer can be set accordingly for the next CR. When there is only one contender in the CR, that node begins transmitting.
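
Here is a minimal sketch of a T-Lohi-style contention round as we understood it; the backoff rule and constants are illustrative assumptions on our part, not the paper's exact algorithm.

    import random

    # One T-Lohi-style contention round (CR). Each contender sends a short
    # tone; every node counts the tones it hears and, if it lost, backs off
    # a random number of future CRs scaled by that count.
    def contention_round(contenders):
        tones_heard = len(contenders)  # short tones rarely collide, so counting works
        if tones_heard == 1:
            return contenders[0], []   # lone contender wins and sends its data frame
        backoffs = [(node, random.randint(1, tones_heard)) for node in contenders]
        return None, backoffs          # nobody transmits; try again in later CRs

    winner, backoffs = contention_round(["n1", "n2", "n3"])
    print("winner:", winner, "| backoffs (node, CRs to wait):", backoffs)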

They compare three variants of T-Lohi. First is synchronized T-Lohi (ST-Lohi), where all nodes are time-synced, time is divided into slots, and nodes only begin contending at the start of a slot. Next is conservative unsynchronized T-Lohi (cUT-Lohi): every node transmits when it wants, but must listen for twice the propagation delay plus twice the tone length, so no collisions are possible. The last protocol, aggressive unsynchronized T-Lohi (aUT-Lohi), is the same as above but waits only the propagation time plus the tone time, accepting a chance of collision.

We talked about the problem they were trying to solve: routing packets back to the base station. They were only sending 8 packets per second in the worst case, which is a pretty low data rate. Is it worth all the extra overhead of the contention rounds? The point of the CR is to reduce collisions, but collisions aren't the end of the world. It's probably the case that, since transmitting costs so much, collisions really do hurt you and should be avoided at all costs, but they could have spelled this out more clearly. It would have been great to compare against a strawman with no protocol at all.

Their use of "optimal" in the graphs confused some of us; it seems more like a practical upper bound -- it's T-Lohi optimal. The discussion then turned to whether CSMA schemes are the right way to go in these networks. TDMA would make more sense, but the time synchronization overhead might not make it worthwhile. What about more of a Z-MAC-style hybrid approach? Would that be better?

Saturday, April 18, 2009

Object Tracking and Image Search

Object Tracking in the Presence of Occlusions via a Camera Network
Ercan, et al.


Distributed Image Search in Sensor Networks
Yan, et al.



The first paper that we discussed this past Thursday addressed the problem of tracking a static object in the presence of multiple occluders (objects blocking the camera’s view of the desired object). The occluders could be moving or static. Compared to the other papers we have discussed so far, this one was heavy on the mathematics, mainly because it focused on explaining the model used to simulate the situation. Also, the only idea specific to sensor networks was that of reducing the entire image from a camera to a single scan line, to minimize the amount of information transferred on the network. We concluded that this paper fell largely into the “Information Processing” realm of sensor networks.

The authors state the many assumptions they made, and it was unclear to some of us what the system would or would not detect in special situations. However, we came to the consensus that the assumptions were valid given the constraints of the mathematical tools the authors used. Something that was not brought up in the paper but was addressed in the discussion was the situation of partial occlusions. The process of reducing the entire frame down to one scan line could be problematic in such situations, and some members of the discussion felt that this reduction oversimplified the problem.

One person brought up an interesting point, questioning whether it is even necessary to track a single object among many occluders. Is it not enough to know whether or not the object is in the room? This was under the presumption that the object tracking system would be deployed for security in areas such as malls, airports, etc. A binary output would be much simpler to implement, since each camera just has to report whether or not the desired object is present in its line of vision. It was also hypothesized by some that this work was a preliminary step toward placing cameras on mobile platforms, so that the system could track objects from vehicles, planes, etc.

The second paper we discussed dealt with the problem of distributed image search, where each sensor captures images. The user can send ad-hoc or continuous queries to a proxy, which obtains the best search results from the entire network. First we established the assumption that the SIFT algorithm does a good enough job of distilling image content into visterms, so that the main problem becomes capturing and querying images on a sensor network. Although it was stated in the paper, it was worthwhile to remind everyone that this system only works when all the objects being monitored are similar. For example, one cannot search for a book cover among images of bird species; in that case, the best-result visterms would most likely be irrelevant to the search.
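
As a rough sketch of this kind of search, consider an inverted index from visterms to images, queried by counting shared terms; the scoring rule here is a common baseline and an assumption on our part, not necessarily the paper's exact ranking function.

    from collections import defaultdict

    inverted = defaultdict(list)  # visterm -> ids of images containing it

    def index_image(image_id, visterms):
        for term in set(visterms):
            inverted[term].append(image_id)

    # Score each stored image by how many query visterms it shares.
    def query(visterms, top_k=3):
        scores = defaultdict(int)
        for term in visterms:
            for image_id in inverted[term]:
                scores[image_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

    index_image("img1", [17, 42, 99])
    index_image("img2", [42, 7])
    print(query([42, 99]))  # -> [('img1', 2), ('img2', 1)]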

After going through the process by which each mote stores images and visterms in an inverted index table, we went on to discuss the evaluation section. One person brought up the fact that there is no end-to-end experiment measuring the amount of time it takes to actually pull images from the remote sensor nodes. It would have been nice to see such an experiment, since images can be quite large (on the order of 1.6kB) and transferring each image would likely take on the order of 4 packets. There was some skepticism about the power usage value for the CC2420 radio shown in Table 1. One person made the case that the value 214.0 mW was too high and the actual value should be significantly smaller.

Overall, the paper was received very well, and we agreed that every issue pertaining to distributed image search was addressed, from storing images on a constrained platform to searching for them. However, the image capture rate of one image every 30 seconds was viewed as unrealistic; a better value would have been on the order of 10 Hz. Also, after doing some quick calculations, we found that if the motes had been battery-powered, their lifetime would be on the order of 1.5 hours, which is unacceptable in a widespread deployment. This led to the conclusion that the authors conducted the experiments with motes plugged into an AC power source rather than batteries. This brought up the larger question: Are motes a sustainable platform on which to implement distributed image search? We agreed that the authors did a nice job of heavily optimizing their solution to work on motes, but if this still results in a 1.5 hr battery lifetime, then maybe the technology is just not there yet to handle such problems. It was brought up that maybe the gap between what motes allow and what they are being asked to do here is too wide for the application of distributed image search.

Acoustics

Sensor Network-Based Countersniper System
Simon et al.

VoxNet: An Interactive, Rapidly-Deployable Acoustic Monitoring Platform
Allen et al.

Acoustic sensing was the topic for Tuesday’s class. The first paper used acoustics to locate a sniper after a shot is fired, and the second presented a generic acoustic sensing framework (to find marmots, among other things).

We clarified a few elements of the countersniper paper first. In particular, the consistency function was unclear to a few people. The function is used to make sense of the multiple shooter predictions coming from the nodes: some nodes may be victim to sound reflections, and others may be hearing other sounds entirely. The base station uses the consistency function to combine all of these potentially erroneous readings into a confident final prediction of the shooter's location. The function breaks up 3D location (X, Y, Z) and time (T) into a four-dimensional grid. Each node's prediction falls into one of the grid's unit hypercubes (whose 4D volume is determined by sensor accuracy characteristics), and the unit hypercube with the most predictions is considered the true shooter location. We had a calculator jam session and found that there are roughly 880 billion potential hypercubes in the shooting range, so an exhaustive search would be slow. Instead, the consistency function starts with a coarsely bisected 4D space, chooses the hypercube with the most predictions, and bisects that one in turn; this process repeats until a final unit hypercube is chosen.
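
Here is a minimal sketch of that search, assuming a greedy multi-resolution descent over the 4D (x, y, z, t) grid; the scoring and splitting rules are our reading of the consistency function, not code from the paper.

    # Greedy multi-resolution search over the 4D (x, y, z, t) grid.
    def count_in_box(predictions, lo, hi):
        """Number of node predictions falling inside the box [lo, hi)."""
        return sum(all(l <= p[d] < h for d, (l, h) in enumerate(zip(lo, hi)))
                   for p in predictions)

    def bisection_search(predictions, lo, hi, min_size):
        """Split the box in half along every dimension (2^4 = 16 children)
        and descend into the child with the most predictions; repeat until
        the box shrinks to a unit hypercube."""
        while any(h - l > m for l, h, m in zip(lo, hi, min_size)):
            mid = [(l + h) / 2 for l, h in zip(lo, hi)]
            best = None
            for mask in range(16):
                c_lo = [lo[d] if mask >> d & 1 == 0 else mid[d] for d in range(4)]
                c_hi = [mid[d] if mask >> d & 1 == 0 else hi[d] for d in range(4)]
                score = count_in_box(predictions, c_lo, c_hi)
                if best is None or score > best[0]:
                    best = (score, c_lo, c_hi)
            _, lo, hi = best
        return lo, hi

    # Two consistent predictions near (10, 5, 1, 0.3) and one outlier (echo).
    preds = [(10.1, 5.0, 1.2, 0.30), (10.3, 5.1, 1.1, 0.31), (42.0, 9.9, 0.0, 0.90)]
    print(bisection_search(preds, [0, 0, 0, 0], [64, 64, 8, 1], [1, 1, 1, 0.05]))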

Signal detection was another important problem the paper had to deal with. To predict a shooter’s location, each node must first know there was a shot. The paper’s approach used zero-crossing points in the acoustic signal, which give you information on the frequency of the loudest sound. The paper also uses other statistics (max amplitude, etc.) to create a sound signature, which can be compared with the expected gunshot signature. At least a few people in class thought it was cool that the paper used custom logic (an FPGA, at least) to perform this work more efficiently than a mote could.
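
A minimal sketch of this style of detector, with illustrative (made-up) thresholds rather than the paper's trained signature:

    import math

    # Summarize a window of audio as (zero-crossing rate, peak amplitude).
    def signature(samples):
        crossings = sum(1 for a, b in zip(samples, samples[1:])
                        if (a < 0) != (b < 0))
        return crossings / len(samples), max(abs(s) for s in samples)

    # Compare the window's signature against an expected gunshot signature.
    def looks_like_gunshot(samples, zcr_range=(0.05, 0.4), min_peak=0.8):
        zcr, peak = signature(samples)
        return zcr_range[0] <= zcr <= zcr_range[1] and peak >= min_peak

    # A loud tone at 0.1 cycles/sample crosses zero ~20 times in 100 samples.
    burst = [math.sin(2 * math.pi * 0.1 * i) for i in range(100)]
    print(looks_like_gunshot(burst))  # True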

There seem to be a few ways the system could be fooled. For example, if more motes heard echoes than heard the direct event, the consistency function would not work properly. Alternatively, someone could maliciously blast gunshot noises from other locations to distract or jam the countersniper system. Finally, the system wouldn’t do too well with multiple rapid-fire shots.

The consensus on this paper was that it brought a lot of techniques together and motivated the use of sensor networks well. It felt like a paper that wanted to solve a real problem rather than making up a contrived one.

We moved on to VoxNet, a more generic platform for monitoring acoustics over some area. The system is a bit beefier than our typical sensor networks because it handles multiple sources with high data rates (four channels of 44.1 kHz audio each). This raised a hearty debate about what a “sensor network” is, exactly. These nodes use StrongARM processors and 802.11, carry a large battery, and still have a lifetime of only eight hours -- a far cry from the MSP430-powered motes that might last for months in most of the other papers we read. But the system forms a network and uses sensors, so it’s technically a sensor network, no?

The paper made accessibility to domain scientists a priority through WaveScope, their dataflow language. The language is pretty simple and can be retargeted to other platforms (such as desktop computers). It also makes use of hand-tuned functions for computationally intensive operations, such as FFTs, which allows WaveScope to get good performance/energy results despite being a higher-level language. However, this also led us to question the performance/energy claims: if most of the processor cycles are spent in these hand-tuned (non-WaveScope) functions, the power/performance evaluation isn’t really measuring WaveScope. Rather, it’s showing that WaveScope doesn’t glue these functions together too poorly. It would be interesting to see how WaveScope performs by itself, without the hand-tuned functions.
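
To make the “glue vs. kernel” point concrete, here is a toy dataflow pipeline in which the expensive operator is a hand-tuned external function (numpy’s FFT standing in for WaveScope’s tuned kernels); the pipeline structure is our own illustration, not WaveScope code.

    import numpy as np

    def source(n_windows, window=1024):
        for _ in range(n_windows):
            yield np.random.randn(window)     # stand-in for sampled audio frames

    def fft_op(stream):
        for frame in stream:
            yield np.abs(np.fft.rfft(frame))  # hand-tuned kernel does the heavy lifting

    def threshold_op(stream, cutoff=80.0):
        for spectrum in stream:
            if spectrum.max() > cutoff:       # cheap "glue" logic around the kernel
                yield int(spectrum.argmax())

    for peak_bin in threshold_op(fft_op(source(10))):
        print("event: spectral peak at bin", peak_bin)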

Finally, we really liked the idea of letting multiple applications coexist. The introduction had us sold on testing code revisions while letting the old programs run, but sadly the paper never came back to it. We would really have loved to see how that functionality fit into the system.

Tuesday, April 14, 2009

ICEM and Triage: Two approaches to energy management in sensor nets

Integrating Concurrency Control and Energy Management in Device Drivers by Klues et al.


Triage: Balancing Energy and Quality of Service in a Microserver by Banerjee et al.


Last Tuesday, we discussed two papers focused on novel approaches to increasing power efficiency in wireless sensor networks. One paper, Integrating Concurrency Control and Energy Management in Device Drivers by Klues et al., builds a sensor network OS model that integrates power management at the driver level, automatically turning off peripherals that are not needed. The other, Triage: Balancing Energy and Quality of Service in a Microserver, targets a specific subset of sensor net applications in which resource-constrained nodes are interspersed with more powerful microservers. To save power, Triage presents a novel architecture for these microservers in which a low-power node receives and schedules tasks to be run on a more powerful Stargate-class device, allowing the power-hungry Stargate to be turned off much of the time.


We began by focusing on the ICEM paper by Klues et al. A fundamental underlying argument in ICEM is that much of the inefficiency of existing sensor network systems results from inefficient handling of concurrency among different parts of an application. To remedy this, ICEM offers a driver system based on the concept of “power locks.” To support different types of hardware peripherals, ICEM offers three types of drivers: virtualized, dedicated, and shared. Virtualized drivers encapsulate all the buffering and per-client state maintenance, exposing only a simple functional data path to the application. Dedicated drivers, on the other hand, allow only a single user at a time, which is essential when dealing with low-level hardware control such as turning an LED on or off using a pin on the microcontroller. Shared drivers, the third type, permit multiple users but require them to handle concurrency explicitly by acquiring a lock on the driver. In a nutshell, ICEM uses the low-level information gained from these driver interfaces to make simple inferences about the expected future concurrency for a given peripheral, and then tries to efficiently put it to sleep at times when it is not needed.
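
Here is a minimal sketch of a shared driver guarded by a power lock, per our reading of ICEM; the API names and threading model are our own, not ICEM's (which is written in nesC for TinyOS).

    import threading

    # The peripheral powers on for the first client and powers off once no
    # client holds or is waiting for the lock.
    class PowerLock:
        def __init__(self, power_on, power_off):
            self._lock = threading.Lock()      # serializes access to the peripheral
            self._mutex = threading.Lock()     # protects the client count
            self._clients = 0
            self._power_on, self._power_off = power_on, power_off

        def acquire(self):
            with self._mutex:
                if self._clients == 0:
                    self._power_on()           # first client: wake the peripheral
                self._clients += 1
            self._lock.acquire()

        def release(self):
            self._lock.release()
            with self._mutex:
                self._clients -= 1
                if self._clients == 0:
                    self._power_off()          # no pending clients: sleep it

    bus = PowerLock(lambda: print("SPI on"), lambda: print("SPI off"))
    bus.acquire(); print("client 1 uses the bus"); bus.release()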


While some thought that the emphasis on deeply integrating power management into the OS was a step in the right direction, there was a good deal of discussion as to whether this approach would truly offer as many gains in application performance as the authors suggest. A key issue here is the problem created by out-of-phase recurring tasks. For example, if two components of an application sample the ADC every 10 seconds, it would be most efficient to have them synchronized such that each samples one right after the other, meaning the ADC would only turn on every 10 seconds for a few cycles. If, however, the components were out of phase, with component one sampling at time t = n*10 and component two at t = n*10 + 5, then the ADC is effectively turned on every five seconds, which is considerably less efficient. Additionally, several people felt that ICEM had limited utility for applications in which the main power consumption comes from a source other than the hardware peripherals, such as PDA-class devices.
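
A quick back-of-the-envelope sketch of the phase problem: counting ADC wakeups over an hour for aligned versus out-of-phase 10-second samplers, assuming requests that land at the same instant can share one wakeup (the kind of batching ICEM's inference enables).

    # Count ADC wakeups per hour; co-scheduled requests share one wakeup.
    def wakeups(offsets, period=10, horizon=3600):
        times = {n * period + off
                 for off in offsets
                 for n in range(horizon // period)}
        return len(times)

    print("in phase:    ", wakeups([0, 0]))  # 360 wakeups/hour
    print("out of phase:", wakeups([0, 5]))  # 720 wakeups/hour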


With regard to Triage, there was some discussion as to the applicability of the design to real-world applications. A number of people initially felt that the dual-hardware approach, in which a less powerful but more energy-efficient node controls the scheduling of the more power-hungry and more powerful CPU, was innovative, but the discussion brought out several points that raised doubts about its utility. It quickly became apparent that the motivation for the Triage design was based heavily on several assumptions, namely that:

  • The Microserver can be shut down much of the time
  • Messages have to be small (since a lower-power, less capable node receives and processes them)
  • The Microserver will receive tasks from other nodes in the network via radio (as opposed to from the Internet or a central hub)

The last assumption, in particular, seems to me to limit the potential utility of the Triage design: it necessitates an application in which the resource-constrained nodes do not simply relay data to the microserver, but actually send queries that require the heavy processing of the Stargate-class device. I can imagine situations in which this might be useful, such as using microservers to offer services to mobile nodes in a ubiquitous computing context, but this type of application is never mentioned by the authors.

Monday, April 13, 2009

Mobile Sensing: CarTel and Nericell

CarTel: A Distributed Mobile Sensor Computing System
Hull et al.

Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones

Mohan et al.

Today, we covered two papers on mobile wireless sensor networks. The
main idea behind mobile sensor networks is to be able to either
dynamically deploy sensors to a particular geographic region of
interest or to cover large geographic areas, where static sensor
installation costs would be prohibitive. The general thrust of this
avenue of research is supported by the increasing penetration of
commodity sensing devices on mobile platforms such as smartphones and
even x86-class hand-held computers (in contrast to the mote-class
devices that most WSN papers talk about).

The first of the papers presented CarTel, a distributed, mobile sensor
network and telematics system with an integrated query engine. We
felt that the major contribution of this work was the implementation
of an actual working mobile sensor system, even though it was on a
small scale. However, the system did raise a few interesting issues.
The legality and morality of "borrowing" open wireless access points
by mobile CarTel nodes are questionable; moreover, the number of open
access points has declined in the last few years.
last-mile Internet connections to consumers are now providing wireless
APs that are locked out-of-the-box, a strong indication that they will
continue to pursue enforcement of their terms-of-use prohibitions
against public connection sharing. This trend undermines the CarTel
premise (in 2006) that open access points would reach a density that
would allow CarTel to be deployed on a massive scale.

Since 2006, deployment of cellular 3G networks has become commonplace
and most current-generation smartphones and even some hand-held
computers now come with 3G capabilities. This widespread
availability of 3G obviates much of the motivation behind CarTel.
True, the data muling aspect of CarTel could be useful for places
where this kind of cellular infrastructure is unavailable. However,
it is getting more difficult to find such places. (One might --
incorrectly -- assume that such scenarios would arise in third-world
countries. However, it is in these countries that cell phone
penetration is even higher than computer penetration.)

We had a sense of slight unease about the way that CarTel incorporated
delay tolerance. Because data-muling and opportunistic open AP usage
are not predictable, data arriving at the base station ("portal" in
CarTel parlance) can be subject to large delays. So how "tolerant"
are we to delays? If delays become huge, one might as well just
purposefully drive a sensor-laden car to a particular region of
interest and drive back! The authors don't give any models of
mobility, node density, range, etc., and so can't provide any feel
for the bounds on delay.

We next turned our attention to several of CarTel's query language
constructs. First, the PRIORITY/DELIVERY ORDER and SUMMARIZE AS
constructs caught our attention. Prioritization via the end-point
bisection method they propose implies a couple of things: (1) that
users are generally interested in a "big-picture" view of the query
results, and (2) that FIFO ordering is bad because you can get a huge
delay in results. These are interesting points, but the
implementation of data priority implies, in an idiosyncratic way,
that real-time querying is the goal of CarTel, which is at odds with
the stated goal of delay tolerance. Second, we discussed the relative
merits of using the "trace" as the base unit of data. It was not
clear to us how a GROUP BY over locations would be implemented, for
example.
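
Here is a minimal sketch of the end-point bisection idea as we understood it: deliver a trace's endpoints first, then repeatedly send the midpoint of the widest undelivered gap, so early packets sketch the whole route at coarse resolution. This is our reconstruction, not CarTel's code.

    # Return indices of a trace in coarse-to-fine delivery order: endpoints
    # first, then repeatedly the midpoint of the widest undelivered interval.
    def bisection_order(n_points):
        order, queue = [0, n_points - 1], [(0, n_points - 1)]
        while queue:
            lo, hi = max(queue, key=lambda iv: iv[1] - iv[0])
            queue.remove((lo, hi))
            if hi - lo > 1:
                mid = (lo + hi) // 2
                order.append(mid)
                queue += [(lo, mid), (mid, hi)]
        return order

    print(bisection_order(9))  # [0, 8, 4, 2, 6, 1, 3, 5, 7]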

We then switched focus to Nericell, a system for annotating road and
traffic conditions using sensor data from smartphones. The crux of
this paper is an interesting way to virtually re-orient sensors in 3D
to a canonical set of axes. Given this capability, simple heuristics
were used to detect potholes and traffic congestion (via accelerometer
and audio samples). We thought that the simple heuristics were a
weakness of this paper; more sophisticated statistical learning
methods exist for this kind of classification/discrimination.
Moreover, such methods may be orientation invariant, obviating the
need for the virtual orientation correction method on which Nericell
is based. On the other hand, the choice of smartphones is an astute
one, since cell phone penetration in India is quite high. Overall, we
felt that the major contribution of this paper is the real-world
implementation on unreliable/uncalibrated sensors, in an extreme
environment (Bangalore roads).
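
As a sketch of what virtual reorientation involves, here is a tilt correction that rotates readings so the at-rest gravity vector lands on the canonical z-axis; Nericell's actual method also disambiguates heading, and the numbers here are made up for illustration.

    import numpy as np

    # Rotation matrix taking the measured at-rest gravity vector onto
    # (0, 0, 1), via Rodrigues' formula (valid unless gravity points at -z).
    def tilt_correction(gravity):
        g = gravity / np.linalg.norm(gravity)
        z = np.array([0.0, 0.0, 1.0])
        v, c = np.cross(g, z), np.dot(g, z)
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        return np.eye(3) + vx + vx @ vx / (1 + c)

    g_rest = np.array([0.3, 0.1, 9.7])  # phone lying at some odd tilt
    R = tilt_correction(g_rest)
    print(R @ g_rest)                   # ~[0, 0, 9.71]: gravity now on the z-axis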

Thursday, April 9, 2009

Regiment and MacroLab

The Regiment Macroprogramming System
Ryan Newton et al.
MacroLab: A Vector-based Macroprogramming Framework for Cyber-Physical Systems
Timothy Hnat et al.


Last Thursday in class we discussed Regiment and MacroLab, continuing our tour through programming models for wireless sensor networks. First we looked at Regiment's evaluation, generally agreeing that the number of messages sent was a reasonable rough proxy for energy. However, the number of retries and the amount of low-power-listening eavesdropping would increase exponentially with network density and affect the energy efficiency. Then we looked at node failure, which would drop an entire subtree until the next tree formation round (epoch). There didn't seem to be a good measure of reliability built in, such as how many nodes contributed to the value at a given time. Overall we preferred the region metaphor in Regiment to the "vectors" in MacroLab, though most of us are not Matlab or database folks.

We discussed where Regiment fell in the programming language spectrum, eventually deciding that it sits between tiered and declarative, since it has controls to manage granularity. The big difference among the languages and programming environments we've read about lies in their mental models: earlier systems addressed problems in great detail just above the hardware level, while the more recent papers present intermediate- and high-level ways of thinking about the problem. The ability to define nested regions is the real difference between Regiment and TinyDB, along with time being a primitive. We returned once again to the problem of time synchronization: what values are really reported at the base station, and how accurately do they reflect the current state of the sensor space? TinyDB merges old data with new as packets get relayed slowly from the outer reaches of the network.

Someone suggested we might be able to fix this by adding a local timestamp as part of the gathered data and buffering the last bunch of values, returning the one that best matches the requested time when queried. But we still need to map between local and global time to adjust for clock drift. Distributed algorithms in general need a lot more information in order to obtain precise measurements - network depth can have a huge impact on the time required to know everything that happened at a specific instant even discounting the overhead of time synchronization.
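
Here is a minimal sketch of that suggestion: a per-node buffer of timestamped readings that answers a query with the closest sample, assuming a simple linear clock model (skew and offset) for the local-to-global time mapping.

    from collections import deque

    class TimestampedBuffer:
        def __init__(self, capacity=32, skew=1.0, offset=0.0):
            self.buf = deque(maxlen=capacity)      # (local_time, value) pairs
            self.skew, self.offset = skew, offset  # local = skew * global + offset

        def record(self, local_time, value):
            self.buf.append((local_time, value))

        # Convert the requested global time to local time, then return the
        # buffered sample closest to it.
        def query(self, global_time):
            target = self.skew * global_time + self.offset
            return min(self.buf, key=lambda tv: abs(tv[0] - target))

    node = TimestampedBuffer(skew=1.0001, offset=-4.2)
    for t, v in [(10.0, 21.5), (20.0, 21.7), (30.0, 22.0)]:
        node.record(t, v)
    print(node.query(25.0))  # closest buffered sample to global time 25 s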

We ended up deciding the main contributions were defining functions over regions, allowing programmers to deal with mid-level abstractions with more flexibility than TinyDB but still taking care of all the details of compiling global network definitions to individual node code.

We then moved on to MacroLab, which defines programs in terms of the data they need. Several people expressed concerns about trusting the decomposer to generate reasonable code, which others likened to the relational database revolution. Others worried about the cost model, though all thought the question of where to store data under what network and workload conditions was a real concern worth studying. The paper fell just short of demonstrating the run-time adaptivity that would be extremely useful. Even with run-time adaptivity, poorly balanced workloads could cause thrashing between modes and poor overall behavior.