Sunday, March 29, 2009

Dealing With Sensor Data

A Macroscope in the Redwoods
Tolle et al.

Fidelity and Yield in a Volcano Monitoring Sensor Network
Werner-Allen et al.

Today we discussed two papers which involved environmental monitoring. Although we've seen papers which did this sort of monitoring before (Great Duck Island), both of these papers brought something new to the discussion, in that they tackled using WSN as scientific instruments, with a focus on the collected data.

The first paper we discussed was "Macroscope." Opinion was split over the contributions of this work. Some argued that the real deployment in the redwoods gives a good overview of the capabilities of current WSN for data collection, grounding the field. WSN are here! Data valuable to biologists, previously unattainable, was now available. "Macroscope" provided a look into what useful data such a deployment could produce, where the problems for future work lay, and what calibration issues arise.

On the other hand, some argued that there was nothing altogether too impressive here. Previous works had performed environmental monitoring, and other fields have data sets far, far larger than what was considered in this paper. It was proposed that particle physicists, astronomers, and others might get a good laugh out of the relatively tiny data sets from environmental monitoring.

In "Macroscope," the data yield from the network was far less than ideal, and far less than that of the data loggers. Someone wondered whether it wouldn't be better for a static deployment to just run cables, given the unreliability of wireless communication and the need for scientists to miss as few sensor readings as possible. We discussed a few problems with wired connections, including deployment issues and the weight of the cables being cumbersome for the tiny motes. Someone suggested that a WSN could ideally be deployed by a non-technical person, just by switching the nodes on and letting the network form itself, rather than having to worry about physically connecting things properly.

One point of discussion was the need for the radio network at all. "Macroscope" points out that they used data both from the on-board flash data loggers and from the network. However, had the logs been reset at deployment time, there would not have been any need for the network, so why bother? Could environmental monitoring be an application where long-established data loggers are more appropriate? Someone made the point that the network allows time synchronization for sensor-reading timestamps without power-hungry GPS. Additionally, the data from some applications, such as seismological event monitoring, may have real-time value.

This motivated a more general discussion about the potential misuse of motes, where other platforms were more appropriate (not saying that this was the case with "Macroscope"). One impassioned individual argued that there may be a temptation to solve an interesting technical problem for motes, for an application where motes aren't the most practical solution. It may indeed be out of balance to stick a mote on an expensive, power hungry sensor.

As the discussion segued to "Volcano," someone suggested that the success or failure of a particular WSN should be data oriented. Indeed, "Volcano" focused on how well a WSN could replace previously used systems for data collection, using data fidelity and yield as its metrics.

In one sense, this work was less successful than "Macroscope." Between software problems and power outages, the network was down for a significant portion of its deployment, making conclusions about performance potentially troublesome. Additionally, a bug in the time sync protocol made data analysis very difficult. Luckily, a sensor near the deployment was used as "ground truth" to recover as much data as possible. The paper stressed that many problems which were not apparent in lab testing could occur in deployment (such as high-velocity rocks destroying an antenna).

One person said that this paper presented itself as "We deployed a broken system, and this is how we extracted useful lessons," rather than "Here's our volcano monitoring system, and here was the performance," as the latter presentation may not have been accepted into a top conference, due to the various problems encountered during deployment.

Someone, perhaps somewhat cynically, suggested that if the system had been used to monitor a forest, rather than a volcano, it would not have been accepted.

Throughout the discussion, we considered which of the papers today, if either, we might include in a cs263 hall of fame. Nobody seemed ready to make a strong commitment to either paper, nor did anybody suggest any of the other papers we read this semester. It seemed hard to directly compare the quality of the contributions of one paper with those of another, and even more so to say that one paper is better, by whatever metric, than most of the rest.

Tuesday, March 17, 2009

Target Tracking

A Line in the Sand: A Wireless Sensor Network for Target Detection, Classification, and Tracking
Arora et al.

Tracking Multiple Targets Using Binary Proximity Sensors
Singh et al.

We took on object tracking in today’s class. Despite similar goals, the two papers we read took noticeably different approaches. “Line” clearly made an effort to generalize its approaches and algorithms, while “Tracking” focused on a narrower 1D subset.

We started our discussion with “Line,” and the brief evaluation was the first issue. Perhaps detailed results were obtained but withheld due to the paper’s military connection? Still, the scalability results they do release are not ideal. Further, the details of their experiments are vague. Alas, it’s hard to know when you don’t know what you don’t know.

The energy issue came up early too. Many of us would have liked to see some evaluation of the energy consumption of the system. How much energy is consumed by the four consumers identified in the paper (sensing, computing, storing, and communicating)? In the interest of maintaining generalizability, we understand that you wouldn’t want to constrain the system to a certain lifetime or power allocation, but it would be nice to understand the magnitude of power consumption.

We identified some contributions in “Line”. The most obvious was support for tracking in two dimensions, using binary sensors. The paper also introduces the concept of an influence field (the area in which a sensor could detect a target). In addition, the paper attempts to count and categorize tracked objects (ordinary person, soldier, military vehicle). The paper also identifies several potential mote sensors and makes a case for the two they use: a magnetometer and radar. There was a little discussion about whether this list was compiled before or after sensor selection, but this might be some grad students projecting their experiences onto the paper...

The radar sensor was interesting, as most (if not all) of us had never used one. The radar seems to have a good range of 60’, although what it measures was not intuitive at first. The sensor is not measuring the velocity of the tracked object, but rather the magnitude of the velocity component pointed toward the sensor.
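
The distinction can be made concrete in a few lines. This is only an illustrative sketch (the 2D positions and velocities are made up, not from the paper): the radar reports the component of the target's velocity along the line of sight, not its full speed.

```python
import math

def radial_speed(sensor_pos, target_pos, target_vel):
    """The component of the target's velocity pointed toward the
    sensor -- what a Doppler-style radar actually measures."""
    # Unit vector from the target toward the sensor.
    dx = sensor_pos[0] - target_pos[0]
    dy = sensor_pos[1] - target_pos[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist
    # Project the velocity onto the line of sight.
    return target_vel[0] * ux + target_vel[1] * uy

# Moving straight at the sensor: the full speed is reported.
print(radial_speed((0, 0), (10, 0), (-5, 0)))  # 5.0
# Moving perpendicular to the line of sight: reads zero.
print(radial_speed((0, 0), (10, 0), (0, 5)))   # 0.0
```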

The magnetometer’s characteristics were clearer, but seemed to introduce many of the flaws in this paper. The biggest problem is the range, which seems to be roughly 10’, and which we suspect leads to the need to place sensors about 5’ from each other. Not only is this kind of sensor density difficult to deploy, but it causes radio density problems as well. The paper makes several references to contention leading to problems: increasing retransmissions results in worse performance, tuning radio power produces large performance improvements, and scalability becomes a problem due to radio density.

We also identified some weaknesses and questionable assumptions. First, the paper assumes targets are moving. This limits the system to detecting intruders; it could not be used for reconnaissance (how many tanks are hiding in a base, etc.). Second, the paper assumes that two targets are at least 10m from each other. This might be true of vehicles, but is not always the case when dealing with people (soldiers or otherwise). In addition to miscounting, violations of the 10m assumption might lead the system to misclassify a group of soldiers as an armored vehicle. Finally, the paper assumes that noise will only affect a minority of motes. Wind is the example given in the paper, but wouldn’t wind affect a majority of nodes? Wind isn’t terribly discriminating.

We moved on to the “Tracking” paper. The paper also uses binary nodes (this time generalized as outputting a 1 if at least one target is detected, 0 otherwise). The system takes a series of snapshots over time and sends them to a centralized processor. The processor then tries to determine a lower bound on the number of monitored targets.

There were a few terms that needed clarification:

  • Particle: refers to the trajectory of a target
  • Cluster: a set of targets with a common trajectory
  • Particle Filtering: looking at trajectories (particles) over time to determine which nodes are in a cluster. To improve particle filtering accuracy, the algorithm is weighted against changes in velocity.
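
To make the counting idea concrete, here is a minimal sketch (our own construction, not the paper's algorithm) of how binary snapshots yield a lower bound: each maximal run of active sensors must contain at least one target.

```python
def lower_bound_targets(snapshots):
    """Lower-bound the target count from binary snapshots: every
    maximal run of active (1) sensors holds at least one target, so
    the most disjoint runs seen in any one snapshot is a valid lower
    bound. The paper's particle filtering also reasons across time;
    this sketch looks at each snapshot independently."""
    best = 0
    for snap in snapshots:
        runs, inside = 0, False
        for bit in snap:
            if bit and not inside:
                runs += 1      # a new run of active sensors begins
            inside = bool(bit)
        best = max(best, runs)
    return best

snapshots = [
    [0, 1, 1, 0, 0, 1, 0],  # two separated active runs: >= 2 targets
    [0, 0, 1, 1, 1, 0, 0],  # the targets may have merged into one run
]
print(lower_bound_targets(snapshots))  # 2
```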

Although this paper limited itself to one dimension, which we all agreed was not terribly useful, many in the class believed the approach could be extended to 2D and beyond. Perhaps the 1D limitation was a product of the need for a simple explanation, although many wanted to know why no results were given for 2D. After all, if 2D+ is as simple as the paper claims, why not implement it?

Also, the particle filtering algorithm is weighted against changes in velocity. Although this may be reasonable for certain moving objects, it seemed more likely that real targets would change their velocity (people, traffic, birds, etc.).

Finally, we had some questions about how two targets in the same cluster would be counted. Would deviating from a shared trajectory at the same time produce a different result than deviating at different times?

Monday, March 16, 2009

Storage: Capsule and FlashDB

Capsule: An Energy-Optimized Object Storage System for Memory-Constrained Sensor Devices
Mathur et al.

FlashDB: Dynamic Self-Tuning Database for NAND Flash
Nath and Kansal

The topic of today's discussion was flash storage in wireless sensor networks. The class started by discussing some higher-level issues about storage on motes and about flash storage in general before digging into the specifics of the assigned papers. First, one person questioned the assumption that mote applications are write-intensive. His argument was that motes tend to have low duty cycles, so not much data is written after all. In response, one student suggested that the write-intensive classification is based on the read-write ratio rather than on the absolute frequency of writes.

Then, one person questioned the need for advanced storage primitives, such as queues or files, since in practice most applications use only simple append-only logs. However, some motes store their configuration as files on flash, and a filesystem provides a familiar storage abstraction for programmers. If motes supported a filesystem recognized by PCs, such as FAT, we would be able to just pull out an SD card and plug it into a laptop in order to access the data. However, implementing FAT on a mote can be quite challenging, so the network could have a few more capable, FAT-enabled motes, which would gather data from the other, simpler motes.

We were also wondering why we would need a lot of storage on a mote. For example, if the application sampled two sensors every 20 minutes, producing 4 bytes each time, a 2 GB disk would be filled in approximately 20,000 years. Even in the case of data-intensive applications, it would take at least several months in order to fill the disk. Another concern that we discussed was pulling out data once it was collected, since it is effectively impossible to stream several gigabytes of data using the radio. Instead, the users should be able to send queries to motes and retrieve only the data they actually need, which justifies the need for indexes and structured storage in general.
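
The arithmetic behind that in-class estimate, assuming 4 bytes per 20-minute sampling interval (the per-reading size is our assumption):

```python
# One 4-byte reading every 20 minutes against a 2 GB flash.
bytes_per_interval = 4
intervals_per_year = 365 * 24 * 3      # three intervals per hour
bytes_per_year = bytes_per_interval * intervals_per_year   # ~105 KB
capacity = 2 * 1024**3                 # 2 GiB
years_to_fill = capacity / bytes_per_year
print(round(years_to_fill))            # roughly 20,000
```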

Then, the topic of discussion changed to flash storage and its properties. Someone pointed out that a mote is inherently incapable of efficiently performing updates in place. In order to do so, it must read the entire block into memory before erasing it, which requires more memory than is found on typical motes. The alternative is to copy the entire block elsewhere in the flash, erase the original block, and then copy it back while performing the requested changes on the fly.
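
The copy-aside sequence can be sketched in a few lines. This is a toy model (block size and layout invented for illustration), but it captures the constraint that flash cells must be erased before they can be rewritten:

```python
BLOCK = 8  # bytes per block -- tiny, for illustration

def erase(flash, blk):
    # An erase sets every byte in the block to 0xFF.
    flash[blk] = bytearray([0xFF] * BLOCK)

def update_in_place(flash, blk, offset, value, spare_blk):
    # 1. Copy the whole block to a spare block elsewhere in flash.
    erase(flash, spare_blk)
    flash[spare_blk][:] = flash[blk]
    # 2. Erase the original block.
    erase(flash, blk)
    # 3. Copy it back, applying the requested change on the fly.
    for i in range(BLOCK):
        flash[blk][i] = value if i == offset else flash[spare_blk][i]

flash = {0: bytearray(range(8)), 1: bytearray([0xFF] * BLOCK)}
update_in_place(flash, blk=0, offset=3, value=0x42, spare_blk=1)
print(flash[0][3])  # 66 (0x42); the other seven bytes are unchanged
```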

In flash, it is possible to turn 1's into 0's without having to erase the entire block (an erase sets all bits to 1, and a write can only clear bits). One person suggested that this could be leveraged to overwrite already stored pointers with null pointers if we represent null as 0000 instead of FFFF. Another student suggested that the application could leave some data in its erased (all 1's) state and program it in place later. However, we realized that this would not work in practice, because flash controllers do not expose an interface for performing such operations.
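
A one-liner makes the constraint concrete: programming flash is effectively a bitwise AND with the existing contents, which is why an all-zeros null value is always reachable from a stored pointer but an erased value is not. (The 16-bit pointer value below is made up.)

```python
def program(old, write):
    # Bits can go from 1 to 0, never from 0 back to 1 without an erase.
    return old & write

ERASED = 0xFFFF
ptr = program(ERASED, 0x1A2B)   # write a pointer into an erased cell
null = program(ptr, 0x0000)     # 0x0000 is reachable from any value
print(hex(ptr), hex(null))      # 0x1a2b 0x0
# The reverse (ptr -> 0xFFFF) would require erasing the whole block.
```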

Finally, our attention turned to Capsule, the first paper assigned for today. The system uses log-structured storage, which seems to be the only reasonable storage model for motes. This approach works well for write-intensive applications at the cost of slow reads. For example, accessing the head of a queue requires the application to traverse it all the way from the back to the front. On personal computers, this performance issue is addressed by using large buffers, but on motes we do not have that luxury.
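
A sketch of why that read is slow (our own toy model, not Capsule's actual on-flash layout): each appended entry carries a back-pointer, so reaching the oldest element means chasing pointers from the tail.

```python
log = []  # append-only log of (value, index_of_previous_entry)

def enqueue(value):
    prev = len(log) - 1 if log else None
    log.append((value, prev))

def read_head():
    i = len(log) - 1            # start from the tail (the last write)
    while log[i][1] is not None:
        i = log[i][1]           # each hop would cost a flash read
    return log[i][0]

for v in "abc":
    enqueue(v)
print(read_head())  # 'a' -- reached only after walking the whole chain
```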

Most of the remaining discussion of the paper revolved around log cleaning and compaction. Because of the log-based approach, the flash storage can quickly fill up with irrelevant (deleted) data. But in order to reclaim the space, the log must be compacted, due to the must-erase-before-write nature of flash. Compaction, however, takes a lot of energy and time. One of our concerns was that Capsule might start this process while the client application is in the middle of a high-rate data capture. Another concern was that when there is nothing to be compacted, the storage component would attempt to do so anyway and just waste energy. Finally, someone pointed out that this scheme effectively allows only 1 GB of usable storage on a 2 GB device, since it has to leave enough space for compaction.

Towards the end of the class period, we shifted our attention to FlashDB. One of the first observations was that using the B+ tree nodes in disk mode requires in-place updates, which would not work on typical motes. We highlighted FlashDB's ability to switch the mode of operation online, instead of just determining the appropriate mode before deployment.

We were wondering why we would need the online self-tuning capability until one student suggested an example in which a mote samples data during the day and streams it to the base station during the night. Ideally, the B+ tree would be in the write-intensive (log) mode during the day and in the read-intensive (disk) mode during the night. However, we were not convinced that FlashDB would make this switch early enough, especially since this dynamic was not evaluated in the paper. Perhaps it would be beneficial if the client application could disable self-tuning and perform it manually.
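
A minimal sketch of per-node self-tuning in that spirit (the window size and the simple read/write majority rule are our invention, not FlashDB's actual cost model):

```python
class Node:
    """A B+ tree node that re-picks its storage mode every 100 ops."""
    def __init__(self):
        self.mode = "log"       # cheap writes, expensive reads
        self.reads = 0
        self.writes = 0

    def record(self, op):
        if op == "read":
            self.reads += 1
        else:
            self.writes += 1
        if self.reads + self.writes >= 100:
            # Switch to the mode favoring the dominant operation.
            self.mode = "disk" if self.reads > self.writes else "log"
            self.reads = self.writes = 0

node = Node()
for _ in range(80):
    node.record("read")         # e.g., night-time streaming queries
for _ in range(20):
    node.record("write")
print(node.mode)  # "disk" -- the window was read-dominated
```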

Thursday, March 12, 2009

Data Aggregation: TinyDB, Beyond Average, and Synopsis Diffusion

The Design of an Acquisitional Query Processor for Sensor Networks
Madden et al.

Beyond Average: Toward Sophisticated Sensing with Queries
Hellerstein et al.

Synopsis Diffusion for Robust Aggregation in Sensor Networks
Nath et al.

Today, we addressed the topic of data aggregation. We began by
exploring the contents of the ACQP and Beyond Average papers. Starting
with ACQP, we were quite pleased to see that the issue of power/energy
consumption was finally being brought to the forefront of a paper
related to sensor networks again (in the previous week, there was a
shared longing for papers that addressed the topic of energy more
directly). Moreover, the ACQP paper presented what one student
suggested was a new "programming model for sensor networks," but, in
particular, was a paradigm of viewing sensor networks as instruments
for scientific studies.

As we began to philosophize about this idea, we began discussing
the questions that undergird the distinctions between the systems
and database communities. To a first approximation, we understand
that the systems community tends to approach problems from the
bottom-up while the database community goes from the top-down. A DB
example we discussed regarded the use of query optimizers and other
DB internals to take the declarative statements of a language like
SQL and hide the details of returning the correct answer as quickly
and efficiently as possible. The systems folks in the room tended
to agree that we like to approach problems from a (relatively)
low-level base.

Ultimately, we were trying to get at the question: should the
system optimize queries for you? For DB folks, this seemed like a
no-brainer. We even drew an analogy (although not a perfect one) to
the benefits of compiler optimization.

But this is where we began to wonder whether the optimizations
presented in this paper (e.g., prioritization schemes) were
appropriate for sensor networks. In the compiler analogy, compilers
rearrange instructions, but the execution of, say, a
statically-bounded for-loop won't suddenly decide to stop at 10
iterations rather than 20. The example prioritization schemes,
winavg and delta, seemed to defy the correctness that some sensor
network users would desire for their applications. We acknowledged
that specific applications may find those schemes useful or
reasonable, but a skepticism continued to linger on this point.
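
Hedged sketches of the two schemes as we understood them (the exact eviction rules in the paper differ in detail; treat these as illustrations of why correctness worried us): when the outbound result queue overflows, winavg merges readings while delta drops the least surprising one.

```python
def winavg_evict(queue):
    # Average the two oldest results into a single slot: lossy,
    # since two readings silently become one.
    a, b = queue.pop(0), queue.pop(0)
    queue.insert(0, (a + b) / 2)

def delta_evict(queue):
    # Drop the element that differs least from its predecessor,
    # keeping the most "surprising" readings.
    idx = min(range(1, len(queue)),
              key=lambda i: abs(queue[i] - queue[i - 1]))
    queue.pop(idx)

q = [10.0, 10.1, 15.0, 15.1]
winavg_evict(q)
print(q)   # [10.05, 15.0, 15.1]

q2 = [10.0, 10.1, 15.0]
delta_evict(q2)
print(q2)  # [10.0, 15.0]
```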

Moreover, we felt that the Beyond Average paper might in fact be
breaking the abstraction barrier that it set out to establish
(high-level declarative language - SQL - hiding the underlying details
of sensor network operation). The one example from which this
sentiment emerged was in the Query-Handoff Implementation for tracking
(i.e., Fig 8). The use of SIGNAL EVENT to trigger an ON EVENT
statement appeared a lot like an imperative programming language
method or function call. We weren't certain anymore if the high-level
approach to sensor network application writing was appropriate or
possible after realizing this was what the authors were demonstrating.

In general, we appreciated the abstractions that the DB folks aimed
for, but we felt uneasy about whether this approach to programming
sensor networks would be feasible in the future.

Of course, we also spent some time examining the Synopsis Diffusion
paper. Mostly, we sought to understand the COUNT algorithm (which
estimates how many nodes are present in the network) in order to
better understand Synopsis Generation, Fusion, and Evaluation. We
felt that there was a little black-box magic going on for COUNT,
especially in the SE() phase, but we didn't get too hung up on this
process.
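
For the record, here is our reading of the COUNT synopsis as a Flajolet-Martin-style sketch. This is a sketch of the idea, not the paper's exact code; the 0.77351 correction factor comes from the original FM estimator.

```python
import random

def sg(rng):
    """Synopsis Generation: contribute a vector with one bit set,
    at position i with probability 2^-(i+1) (simulated coin flips)."""
    i = 0
    while rng.random() < 0.5:
        i += 1
    return 1 << i

def sf(s1, s2):
    """Synopsis Fusion: bitwise OR, so the result is order- and
    duplicate-insensitive (ODI) by construction."""
    return s1 | s2

def se(synopsis):
    """Synopsis Evaluation: estimate the count from the position of
    the lowest unset bit."""
    i = 0
    while synopsis & (1 << i):
        i += 1
    return (2 ** i) / 0.77351

rng = random.Random(1)
synopsis = 0
for _ in range(1000):               # 1000 nodes contribute
    synopsis = sf(synopsis, sg(rng))
print(se(synopsis))                 # a rough (high-variance) estimate
```

Fusing via OR is what makes the scheme robust to duplicate and reordered messages; the price is the coarse, power-of-two granularity of the estimate.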

We seemed to unanimously appreciate the presentation of necessary
and sufficient properties for ODI-correctness; however, we wished
that the authors had spent more time discussing how to create
ODI-correct Generation and Fusion functions.

We had a lot of papers to get through on Tuesday, but each of them
seemed to generate a fair amount of dialogue and interest from the
class, allowing us to explore everything from DB-versus-systems
research philosophy to the details of proofs and ODI-correctness.

Thursday, March 5, 2009

Time sync and localization: FTSP and Radio Interferometric Localization

The Flooding Time Synchronization Protocol
Maroti et al.

Radio Interferometric Geolocation
Maroti et al.

Today's discussion was a little different from previous classes
because the two papers weren't on the same topic. So discussion was
largely split into two parts.

Part I: FTSP.

We started off by throwing out some thoughts about our likes and
dislikes. Under the "likes" column, people seemed to agree that
FTSP was a generally useful protocol in that it solved a practical
problem well (especially since it produces error on the order of
tens of microseconds). Also, we particularly appreciated the
authors' extensive breakdown of the many (ten!) sources of
synchronization error, the magnitudes of each, and their
characteristics. The exposition made it much easier to understand
the problem space and what problem FTSP was trying to solve.

Under the "dislikes" column, people seemed to have some reservations
about the evaluation section since there was no direct evaluation
alongside other synchronization protocols, just comparisons to their
published performance numbers. Along these lines, there were also a
few questions raised about the metrics used to measure FTSP's
performance (we'll get to that a little later).

Root election came up as a discussion topic. It caught our attention
mainly because the root election process was reported to take so long
to converge (17 mins). There was some confusion about what happened
during convergence, when multiple nodes might be considered as root.
The election algorithm maintains time consistency when a root
relinquishes its status to another node with a lower ID, but only if
the algorithm had previously been stable and the nodes were more or
less in sync. However, when the system is bootstrapping, clusters in
disjoint parts of the routing tree may have inconsistent times. It
would have been nice if the authors had included some analysis about
how the system behaves during bootstrapping.

The talk about root election brought up a more fundamental question
about whether a root was needed at all. Would it instead be
possible for nodes, in a distributed way, to arrive at a global
consensus time? Recent work by the SYRAH group on DESYNC, a
distributed consensus protocol modeled on pulse-coupled
oscillators, might serve as a primitive for doing just this.

As we delved deeper into the paper, one major issue that came up
was the authors' evaluation metric of average pairwise error. Since
the whole point of this protocol was to institute a global clock
amongst all the nodes, it would seem more prudent to use a metric
that measured each node's deviation from the global clock.
Additionally, the calculation of the average error per hop seemed
strange, since it took the all-pairs average error and divided it
by the network diameter in hops.

One alternative to FTSP we considered was a protocol that used time
deltas. The idea would be that each node would measure its time
difference with its parent and that the root could figure out the
clock offset of a node by simply summing the deltas along the path to
that node. The drawback to this is that only the root has a concept
of the global time and that this would make it hard to coordinate
nodes (e.g., "at 10:24:56.000 execute command X"). One could conceivably
send a command from the root like "in 00:10:00.000 execute command X"
and each node would adjust the time according to its own clock
offset. However, this would only work if the time for the message to
propagate to the deepest node was less than the clock skew of any node
along the path. Since we're talking about microseconds, the delta
method may not be feasible.
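
The delta scheme we discussed is easy to sketch (tree shape and microsecond values invented for illustration):

```python
# Each node knows only its offset to its parent; the root recovers a
# node's total offset by summing the deltas along the tree path.
parents = {"B": "root", "C": "B", "D": "C"}
delta_to_parent = {"B": 120, "C": -45, "D": 300}   # microseconds

def offset_from_root(node):
    total = 0
    while node != "root":
        total += delta_to_parent[node]
        node = parents[node]
    return total

print(offset_from_root("D"))  # 375 = 300 - 45 + 120 microseconds
```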

Part II: Radio Interferometric Geolocation

We started off the discussion of the second paper by again throwing
out our "likes" and "dislikes" about the paper, although there weren't
many "dislikes" to speak of. In particular, people seemed to like the
novelty of the idea and were pleasantly surprised that it was actually
possible to implement. Everyone also seemed to enjoy the fact that
the paper started off with some theoretical underpinnings and then
steadily progressed into a very "engineering" kind of paper
(e.g., dealing with inaccuracy of the radio). It was striking that
nothing about this paper was naive -- there are very important
considerations made for some real world problems that are not
immediately obvious.

One of the highlights of this paper was the method used to defend
against inaccurate radio crystals. The problem they were trying to
solve was that each radio behaved somewhat differently -- when you set
it to emit at a particular frequency, the emitted signal was up to
2kHz from nominal. So the authors built in an automatic calibration
step that involved three nodes: nodes A and B transmitted at close
frequencies f1 and f2. Node B was held fixed at f2 and f1 was varied
until the difference between the frequencies at a third node C was
observed to be minimized. This was pretty clever!
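
A simulation of that calibration loop (the crystal offsets and step size below are made-up numbers; the paper's tuning procedure is more involved):

```python
err_a, err_b = 1400.0, -900.0   # unknown per-radio offsets, within 2 kHz
f2_setting = 430e6              # node B's setting, held fixed

def beat_at_c(f1_setting):
    # Node C observes the beat between the two actual emitted signals.
    return abs((f1_setting + err_a) - (f2_setting + err_b))

# Sweep node A's nominal setting in 100 Hz steps around node B's.
candidates = [f2_setting + step * 100 for step in range(-40, 41)]
best = min(candidates, key=beat_at_c)
print(best - f2_setting)  # -2300.0: A tunes low to cancel the offsets
```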

Another clever thing done in this paper was in the measurement of the
frequency and phase offset. Note that what they are sampling is NOT
the carrier frequency. If this were the case, then with a carrier
frequency of approximately 400MHz, the sampling rate would have to be
800MHz -- clearly impossible with an ADC that samples at 9kHz max. So
it was neat that the authors sampled the beat frequency instead --
especially since the two carrier frequencies were close, the beat
frequency was low and easily sampled at 9kHz.
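
The Nyquist arithmetic, with illustrative numbers (the paper's exact carrier separation differs):

```python
carrier = 400e6                      # ~400 MHz carrier
f1, f2 = carrier, carrier + 350.0    # two transmitters, 350 Hz apart

nyquist_direct = 2 * max(f1, f2)     # rate to sample the carrier itself
beat = abs(f1 - f2)                  # frequency of the beat envelope
nyquist_beat = 2 * beat              # rate to sample just the beat

print(nyquist_direct)  # 800000700.0 -- ~800 MHz, far beyond a mote ADC
print(nyquist_beat)    # 700.0 -- comfortably within a 9 kHz ADC
```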

There were some practical considerations that hampered this method.
In particular, the 80 mins it took to localize the nodes seemed a
little onerous, and impossible if nodes were at all mobile. Also, the
experiments were performed in a flat outdoor field, presumably with no
signal interference from other sources. Since this method directly
relies on the radio signal, lack of interference is necessary. This
assumption, however, is not all that realistic. Still, we marvel at
the sheer novelty of the method.