A Line in the Sand: A Wireless Sensor Network for Target Detection, Classification, and Tracking
Arora et al.
Tracking Multiple Targets Using Binary Proximity Sensors
Singh et al.
We took on object tracking in today’s class. Despite similar goals, the two papers we read pursued noticeably different approaches. “Line” clearly made an effort to generalize its approaches and algorithms, while “Tracking” focused on a narrower 1D formulation.
We started our discussion with “Line,” and the brief evaluation was the first issue. Perhaps detailed results were obtained but withheld because of the paper’s military connection? Still, the scalability results they do release are not ideal, and the details of their experiments are vague. Alas, it’s hard to know when you don’t know what you don’t know.
The energy issue came up early too. Many of us would have liked to see some evaluation of the energy consumption of the system. How much energy is consumed by the four consumers identified in the paper (sensing, computing, storing, and communicating)? In the interest of maintaining generalizability, we understand that you wouldn’t want to constrain the system to a certain lifetime or power allocation, but it would be nice to understand the magnitude of power consumption.
We identified some contributions in “Line”. The most obvious was support for tracking in two dimensions, using binary sensors. The paper also introduces the concept of an influence field (the area in which a sensor could detect a target). In addition, the paper attempts to count and categorize tracked objects (ordinary person, soldier, military vehicle). The paper also identifies several potential mote sensors and makes a case for the two they use: a magnetometer and radar. There was a little discussion about whether this list was compiled before or after sensor selection, but this might be some grad students projecting their experiences onto the paper...
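To make the influence-field idea concrete, here is a minimal sketch of our own (not the paper’s code) of how the set of sensors detecting a target might be computed and used for classification; the detection radii and sensor grid below are invented for illustration.

```python
# A minimal sketch of the influence-field idea, with made-up detection radii.
# The influence field of a target is the set of sensors that can currently
# detect it; its size (and shape) can then be used to count and classify targets.
from math import hypot

# Hypothetical detection radii (feet) per target class -- not the paper's numbers.
DETECTION_RADIUS = {"person": 10.0, "soldier": 15.0, "vehicle": 30.0}

def influence_field(target_pos, target_class, sensor_positions):
    """Return the indices of sensors that would detect a target of this class."""
    r = DETECTION_RADIUS[target_class]
    return [i for i, (sx, sy) in enumerate(sensor_positions)
            if hypot(target_pos[0] - sx, target_pos[1] - sy) <= r]

# Example: a vehicle activates many more sensors than a person in the same grid,
# which is what lets the network classify by influence-field size.
grid = [(x, y) for x in range(0, 100, 5) for y in range(0, 100, 5)]
print(len(influence_field((50, 50), "person", grid)),
      len(influence_field((50, 50), "vehicle", grid)))
```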
The radar sensor was interesting, as most (if not all) of us had never used one. The radar seems to have good range at 60’, although the specific measurement was not intuitive at first: the sensor is not measuring the full velocity of the tracked object, but rather the component of that velocity directed toward the sensor.
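A small sketch of what that measurement means may help; the positions and velocities below are purely illustrative.

```python
# Radial velocity: the component of the target's velocity directed toward the
# sensor (positive toward, negative away), not the target's full speed.
import math

def radial_speed(sensor_pos, target_pos, target_vel):
    """Project the target's velocity onto the unit vector from target to sensor."""
    dx = sensor_pos[0] - target_pos[0]
    dy = sensor_pos[1] - target_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    return (target_vel[0] * dx + target_vel[1] * dy) / dist

# A target moving past the sensor tangentially reads ~0, even if it is fast.
print(radial_speed((0, 0), (0, 30), (10, 0)))   # -> 0.0
print(radial_speed((0, 0), (0, 30), (0, -10)))  # -> 10.0 (moving toward sensor)
```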
The magnetometer’s characteristics were clearer, but seemed to introduce many of the flaws in this paper. The biggest problem is the range, which seems to be roughly 10’ and, we suspect, leads to the need to place sensors only about 5’ apart. Not only is this kind of sensor density difficult to deploy, it causes radio density problems as well. The paper makes several references to contention leading to problems: increasing retransmissions results in worse performance, tuning radio power produces large performance improvements, and scalability becomes a problem as radio density grows.
We also identified some weaknesses and questionable assumptions. First, the paper assumes targets are moving. This limits the system to detecting intruders; it could not be used for reconnaissance (how many tanks are hiding in a base, etc.). Second, the paper assumes that any two targets are at least 10m apart. This might be true of vehicles, but it is not always clearly the case when dealing with people (soldiers or otherwise). In addition to miscounting, violations of the 10m assumption might lead the system to misclassify a group of soldiers as an armored vehicle. Finally, the paper assumes that noise will only affect a minority of motes. Wind is the example given in the paper, but wouldn’t wind affect a majority of nodes? Wind isn’t terribly discriminating.
We moved on to the “Tracking” paper. The paper also uses binary nodes (this time generalized as outputting a 1 if at least one target is detected, 0 otherwise). The system takes a series of snapshots over time and sends them to a centralized processor. The processor then tries to determine a lower bound on the number of monitored targets.
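As a toy illustration of the snapshot idea (and not the paper’s actual algorithm), counting separated runs of 1s in a single snapshot already gives a crude lower bound on the target count, assuming the sensors’ ranges overlap enough to cover the line.

```python
# Each 1D snapshot is a list of binary readings from sensors laid out on a line.
# With overlapping coverage, separated runs of 1s cannot come from one target,
# so counting runs gives a (very loose) lower bound on targets at that instant.
def lower_bound_targets(snapshot):
    """Count maximal runs of 1s in a single binary snapshot."""
    runs, inside = 0, False
    for reading in snapshot:
        if reading and not inside:
            runs += 1
        inside = bool(reading)
    return runs

snapshots = [
    [0, 1, 1, 0, 0, 1, 0],   # two separated detections -> at least 2 targets
    [0, 0, 1, 1, 1, 0, 0],   # one contiguous detection -> at least 1 target
]
print(max(lower_bound_targets(s) for s in snapshots))  # -> 2
```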
There were a few terms that needed clarification:
- Particle: refers to the trajectory of a target
- Cluster: a set of targets with a common trajectory
- Particle Filtering: looking at trajectories (particles) over time to determine which targets belong to a cluster. To improve accuracy, the algorithm weights particles against changes in velocity (a rough, hypothetical sketch of this idea follows the list).
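For the curious, here is a rough sketch of particle filtering over 1D binary snapshots with a velocity-change penalty; it tracks a single target, and all parameters (sensor spacing, detection radius, penalty weight) are invented rather than taken from the paper.

```python
import math
import random

NUM_PARTICLES = 500
SENSOR_RANGE = 2.5        # hypothetical binary detection radius
VELOCITY_PENALTY = 2.0    # larger -> stronger bias against velocity changes

def likelihood(position, snapshot, sensor_positions):
    """Score a hypothesized position against one binary snapshot."""
    score = 1.0
    for x, reading in zip(sensor_positions, snapshot):
        detects = abs(position - x) <= SENSOR_RANGE
        score *= 0.9 if detects == bool(reading) else 0.1
    return score

def particle_filter(snapshots, sensor_positions, dt=1.0):
    """Track one target on a line; each particle is a (position, velocity) hypothesis."""
    lo, hi = min(sensor_positions), max(sensor_positions)
    particles = [(random.uniform(lo, hi), random.uniform(-2.0, 2.0))
                 for _ in range(NUM_PARTICLES)]
    for snapshot in snapshots:
        proposals, weights = [], []
        for pos, vel in particles:
            dv = random.gauss(0.0, 0.5)                 # proposed velocity change
            new_vel = vel + dv
            new_pos = pos + new_vel * dt
            w = likelihood(new_pos, snapshot, sensor_positions)
            w *= math.exp(-VELOCITY_PENALTY * abs(dv))  # penalize velocity changes
            proposals.append((new_pos, new_vel))
            weights.append(w)
        total = sum(weights)
        if total == 0:
            particles = proposals                       # uninformative step
        else:
            particles = random.choices(proposals, weights=weights, k=NUM_PARTICLES)
    return particles

# Toy usage: one target crossing a hypothetical line of ten sensors.
sensors = [i * 5.0 for i in range(10)]
snapshots = [[1 if abs(t * 1.5 - x) <= SENSOR_RANGE else 0 for x in sensors]
             for t in range(10)]
estimate = particle_filter(snapshots, sensors)
print(sum(p for p, _ in estimate) / len(estimate))      # rough final-position estimate
```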
Although this paper limited itself to one dimension, and we all agreed this was not terribly useful, many in class believed the approach could be extended to 2D and beyond. Perhaps the 1D limitation was a product of the need for a simple explanation, although many wanted to know why at least some results were not given for 2D. After all, if 2D+ is as simple as the paper claims, why not implement it?
Also, the particle filtering algorithm is biased against changes in velocity. Although this may be reasonable for tracking certain moving objects, it seemed more likely to us that real targets (people, traffic, birds, etc.) would change their velocity often.
Finally, we had some questions about how two targets in the same cluster would be counted. Would deviating from a shared trajectory at the same time produce a different result than deviating at different times?