The Economics of RFID Performance

I've built many types of RFID applications in the past: passive/active, presence/area/position, HF/UHF/UWB. Regardless of the technology and the application, there's always been a disconnect between a user's concept of how RFID should work and its real-world behavior.

The basic concept of RFID is that you can identify an object at a distance using some sort of wireless protocol. There are many different technologies and techniques for doing this, but they all fundamentally rely on RF communication.

I've written previously about the issues with unlicensed RF communications. The unstated implication is that if we could see RF the way we see visible light, we would be stunned by the storm of signals constantly swirling around us. It would be like the inside of a dance club, with disco balls, lasers and colored lights shining everywhere.

But even in "quiet" environments, RF communications still suffer significantly from fundamental limitations of physics. For instance, water absorbs most RF signals – and the human body is the most common mobile container of water when it comes to RFID applications. If the line of sight between a WiFi base station and a WiFi device is blocked by more than three bodies, the signal can be lost entirely. RF signals are also affected by metal, which can reflect, absorb and redirect them.

As a result, performance can be unpredictable in even the simplest deployments.

The root of the disconnect I referred to earlier is that end-users don't perceive these complexities because they aren't visible to the naked eye – you have to visualize the situation mentally to understand what is really going on. Their naive (but rational) assumption is that RFID should just work – and work reliably.

While you could spend a lot of energy educating end-users about these environmental complexities, you are probably better off framing the entire issue in economic terms, which can be summed up in the following chart:

What this chart is saying is that most RFID systems (and applications) have to make a tradeoff between cost and performance. The tradeoff is made such that, on average, you get a reasonable level of performance for a fixed cost. In practice, "on average" often means something like 75% of the time, and "reasonable" performance means something like 95% read accuracy.
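For readers who prefer code to curves, here is a minimal sketch of that relationship, assuming a simple model in which cost grows in proportion to 1 / (1 - accuracy). The functional form and every dollar figure are illustrative assumptions, not data from any real deployment.

```python
# Illustrative sketch of the cost-versus-performance tradeoff described above.
# The functional form (cost ~ 1 / (1 - accuracy)) and the dollar figures are
# assumptions chosen to show the shape of the curve, not measured data.

BASELINE_COST = 10_000    # hypothetical fixed cost of a basic deployment
BASELINE_ACCURACY = 0.95  # the "reasonable" read accuracy that cost buys


def estimated_cost(target_accuracy: float) -> float:
    """Rough cost to reach a given read accuracy under the assumed model.

    Modeled so that halving the remaining error rate roughly doubles the
    total cost, which means cost diverges as accuracy approaches 100%.
    """
    if not 0.0 < target_accuracy < 1.0:
        raise ValueError("accuracy must be strictly between 0 and 1")
    scale = BASELINE_COST * (1.0 - BASELINE_ACCURACY)
    return scale / (1.0 - target_accuracy)


if __name__ == "__main__":
    for accuracy in (0.95, 0.96, 0.97, 0.98, 0.99, 0.995, 0.999):
        print(f"{accuracy:6.1%} read accuracy -> ~${estimated_cost(accuracy):>10,.0f}")
```

Under those assumptions, the last few fractions of a percent cost more than everything that came before them, which is exactly the shape of the curve.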

So, generally speaking, a fixed investment in equipment gets you a high (but not perfect) level of performance. Within that fixed cost you can tune things, rearrange equipment, adjust application parameters and take other steps to improve performance linearly by, say, 1%. Once you reach the limit of those techniques, you can begin to do things like adding redundant equipment to the setup for another linear gain of perhaps 1% – but with a faster increase in cost.

Now you are at the stage where actions are best described as "heroic" and costs begin to rise exponentially. For instance, you start looking into building custom antennas, boosting transmit power beyond regulatory limits and hand-selecting RFID tags for their individual performance characteristics. Yet all of this might get you only another 1% increase in performance.
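To make that escalation concrete, here is a small worked example that tallies a hypothetical cost at each stage. All of the stage names, dollar amounts and accuracy gains are assumptions for illustration; the point is only how the marginal cost per percentage point climbs.

```python
# Hypothetical cost breakdown of the escalation described above: a fixed
# baseline, cheap tuning, pricier redundancy, then "heroic" measures.
# Every dollar figure and accuracy gain below is an illustrative assumption.

baseline_cost, baseline_accuracy = 10_000, 0.95

# Each step: (description, additional cost, accuracy gained)
steps = [
    ("tuning and rearranging equipment",         2_000, 0.01),
    ("adding redundant readers and antennas",    8_000, 0.01),
    ("heroic measures (custom antennas, etc.)", 40_000, 0.01),
]

cost, accuracy = baseline_cost, baseline_accuracy
print(f"baseline deployment: ${cost:,} buys {accuracy:.0%} read accuracy")
for description, extra_cost, gain in steps:
    cost += extra_cost
    accuracy += gain
    cost_per_point = extra_cost / (gain * 100)
    print(f"{description}: +${extra_cost:,} for +{gain:.0%} "
          f"(${cost_per_point:,.0f} per point; cumulative ${cost:,} for {accuracy:.0%})")
```

Each extra percentage point costs several times more than the last one, and the final point is still short of 100%.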

And therein lies the lesson: there is no cost-effective way to get 100% accuracy out of an RFID system.

So take my advice and start RFID projects off with the chart above and a lesson in economics. It will save you and your customer a lot of grief.