Why Are There So Many Wireless Technologies?

by Andy Slote - Director of Customer Success for ObjectSpectrum

Jan 02 2024


Connectivity selection is among the many decisions required for an Internet of Things project. Making a good choice demands a clear understanding of your objectives, and the sheer wealth of options can be confusing. If your requirement is for a wireless capability, the challenge of making this choice raises the question – why are there so many wireless technologies?

Many of the options have been available for years or even decades, and it’s fair to wonder why we still see the introduction of new ones. Can’t we improve the current set through various means to get what we need? Is there an eventual convergence toward a single technology that does it all? Unfortunately, the physical characteristics of wireless communication make this ideal end-state unrealistic. Instead, the challenges involved require many solutions to address diverse situations.

Wireless technologies use various frequencies of the “radio spectrum” to support many navigation, broadcasting, and communication capabilities. The spectrum is a finite resource, so it is managed by governmental entities worldwide (like the FCC in the United States), which designate frequency bands for specific uses, including selling or licensing them for private transmission services like cellular or broadcast TV operations. Some publicly available bandwidth is suitable for, or even reserved for, IoT, as are portions of the private spectrum, like cellular and satellite frequencies.

When evaluating wireless options for IoT, it’s necessary to assess both the operating frequency and the technology choices. For each use case, it is essential to consider distance, terrain, how often data is transmitted, how much data needs to be transmitted, the desired coverage area, operational cost, data criticality, and mobility. Other factors may include the location’s climate, the potential for interference, and things as obscure as seasonal changes in foliage.
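One way to keep such a comparison honest is to score each candidate against weighted factors. Here is a minimal sketch of that idea in Python; every factor, weight, and per-technology score is an illustrative assumption for a hypothetical outdoor sensing project, not a measured benchmark.

```python
# Hypothetical weighted-scoring sketch for comparing wireless options.
# All factors, weights, and 1-5 scores are illustrative assumptions.

WEIGHTS = {
    "range": 0.30,
    "battery_life": 0.25,
    "data_rate": 0.10,
    "coverage_cost": 0.20,
    "mobility": 0.15,
}

CANDIDATES = {
    "LoRaWAN": {"range": 5, "battery_life": 5, "data_rate": 1, "coverage_cost": 4, "mobility": 3},
    "NB-IoT":  {"range": 4, "battery_life": 4, "data_rate": 2, "coverage_cost": 3, "mobility": 4},
    "Wi-Fi":   {"range": 2, "battery_life": 2, "data_rate": 5, "coverage_cost": 4, "mobility": 2},
}

def score(tech):
    # Weighted sum of the per-factor scores for one technology.
    return sum(WEIGHTS[factor] * tech[factor] for factor in WEIGHTS)

# Print candidates from best to worst weighted score.
for name, tech in sorted(CANDIDATES.items(), key=lambda kv: -score(kv[1])):
    print(f"{name:8s} -> {score(tech):.2f}")
```

The point is not the specific numbers but the discipline: writing down weights forces you to decide which requirements actually dominate your use case.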

IoT implementations employ technologies that use a wide range of frequencies – from low enough to be measured in kilohertz (kHz) up to multiple gigahertz (GHz). Generally, lower frequencies accommodate reduced power requirements, provide more ability to penetrate objects, and enable greater range. Conversely, operating higher up on the radio spectrum requires more power while attaining less distance and penetration. Decision-making seeks the best combination of attributes, acknowledging that there is rarely a frequency range that operates with zero tradeoffs.
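The frequency/range tradeoff can be put into numbers with the standard free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. The short sketch below compares the loss over one kilometer at a few representative frequencies; the frequencies chosen are just examples:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Every 10x increase in frequency adds 20 dB of free-space loss,
# which is the arithmetic behind "lower frequency, greater range."
for label, mhz in [("868 MHz (sub-GHz LoRaWAN)", 868),
                   ("2.4 GHz (Wi-Fi/BLE)", 2400),
                   ("28 GHz (5G mmWave)", 28000)]:
    print(f"{label:26s} -> {fspl_db(1.0, mhz):6.1f} dB over 1 km")
```

Real-world losses from walls, foliage, and weather come on top of this, and they also grow with frequency.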

For larger-scale IoT projects like agriculture, technologies using “sub-GHz” frequencies like LoRaWAN and cellular NB-IoT are a good fit, efficiently covering large areas. Extensive deployments of devices in farmers’ fields often mean that battery- or solar-powered devices are the most practical. Long battery life is attainable and often measurable in years, or these devices can be powered indefinitely from small, low-cost solar cells. Strict data timeliness is generally considered less important, with devices typically transmitting small packets of data a few times daily to conserve power.
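A back-of-envelope energy budget shows why multi-year battery life is plausible for a duty-cycled, sub-GHz device. All the figures in this sketch are assumptions chosen to look typical, not measurements, and it ignores receive windows, sensor draw, and battery self-discharge:

```python
# Rough battery-life estimate for a sensor that sleeps between uplinks.
# Every constant below is an assumption for illustration only.

BATTERY_MAH      = 1000.0  # battery capacity (assumed)
SLEEP_CURRENT_MA = 0.010   # 10 uA deep-sleep current (assumed)
TX_CURRENT_MA    = 45.0    # transmit current (assumed)
TX_SECONDS       = 1.5     # airtime per uplink (assumed)
MSGS_PER_DAY     = 4       # "a few times daily"

# Average current = sleep baseline + transmit duty-cycle contribution.
tx_hours_per_day = MSGS_PER_DAY * TX_SECONDS / 3600.0
avg_ma = SLEEP_CURRENT_MA + TX_CURRENT_MA * tx_hours_per_day / 24.0

years = BATTERY_MAH / avg_ma / 24.0 / 365.0
print(f"Average draw: {avg_ma * 1000:.1f} uA -> roughly {years:.1f} years")
```

With these assumptions the average draw works out to around 13 µA, or nearly nine years on paper, which is why “measurable in years” is a realistic claim even after real-world losses cut that figure down.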

An example of the other extreme is 5G cellular “millimeter wave,” operating in frequency bands between 24 and 100 GHz. One of the most significant gains from using these higher frequencies is transmitting large amounts of data at gigabit-per-second (Gbit/s) speeds with very low latency. However, you must accept coverage of roughly one city block, and only if there is nothing (even the leaves on a tree) in the path between the devices and the gateway.

Wi-Fi is a familiar technology operating at higher frequencies than sub-GHz but significantly lower than 5G millimeter wave. It tends to be power-hungry for IoT applications, so electrically powered or rechargeable battery-powered devices make the most sense, although the most recent release of the standards that govern Wi-Fi has put some emphasis on low-power operation. One significant advantage of Wi-Fi is that the 2.4 GHz frequency band it uses exists worldwide, enabling a single device design to be ubiquitous (the same is true for Bluetooth and BLE, and even a recent version of LoRaWAN, which also operate at 2.4 GHz). Many other technologies must use different bands depending on the location, which means device manufacturers have to make and stock different versions of their products for every region they sell into.

Using a GPS (or more broadly, a GNSS) receiver alongside LoRaWAN, Bluetooth, Wi-Fi, cellular, or some combination of these for communications is a common scenario. But interestingly, all of those wireless technologies also have native capabilities that can be used for determining a device’s location. Whether you should use these technologies alone or in conjunction with GPS is driven by factors like accuracy requirements, the intended operating environment (indoor vs. outdoor, for example), cost, and sometimes battery life.
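One building block behind that native positioning capability is estimating distance from received signal strength (RSSI). Here is a minimal sketch using the log-distance path-loss model; the calibration constants are assumptions that a real deployment would measure per environment, and the resulting accuracy is far coarser than GNSS:

```python
# Minimal sketch of RSSI-based ranging, one ingredient in Wi-Fi/BLE
# positioning. Log-distance model: RSSI(d) = RSSI(1 m) - 10*n*log10(d).
# Both constants below are assumptions; calibrate per environment.

RSSI_AT_1M = -59.0   # signal strength measured at 1 m, dBm (assumed)
PATH_LOSS_N = 2.7    # path-loss exponent for an indoor space (assumed)

def estimate_distance_m(rssi_dbm):
    """Invert the log-distance model to estimate distance in meters."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * PATH_LOSS_N))

for rssi in (-59, -70, -80):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")
```

Combining several such distance estimates from known beacon or gateway positions is what turns raw signal strength into a location fix.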

Some wireless technologies focus solely on location and distance determination. One of the newer examples is Ultra-Wideband (UWB), operating in the US in the 3.1 to 10.6 GHz frequency band. Suitable for short-range use, typically in indoor environments, it can determine location with centimeter accuracy. And because UWB transmits short “pulses” focused purely on positioning, its power utilization stays low despite the relatively high operating frequencies.
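UWB gets its accuracy from timing pulses rather than measuring signal strength: light travels about 30 cm per nanosecond, so centimeter-level ranging requires timestamps good to tens of picoseconds, which UWB’s very wide bandwidth makes possible. The sketch below illustrates the arithmetic with a simple two-way ranging example; the timings are invented for illustration:

```python
# Why UWB can range to centimeters: distance comes from pulse
# time-of-flight. Two-way ranging cancels the clock offset between
# the two devices. All timing values here are made-up examples.

C = 299_792_458.0  # speed of light, m/s

def two_way_distance_m(t_round_s, t_reply_s):
    """One-way flight time is half the round trip minus the reply delay."""
    return C * (t_round_s - t_reply_s) / 2.0

# Example: 120 ns round trip with a 100 ns fixed reply delay
# leaves a 10 ns one-way flight time, i.e. about 3 meters.
print(f"~{two_way_distance_m(120e-9, 100e-9):.2f} m")

# Timing error translates directly into distance error:
for err_ps in (100, 50, 10):
    print(f"{err_ps} ps timing error -> {C * err_ps * 1e-12 * 100:.1f} cm")
```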

Public versus private spectrum is another decision that needs the right level of research and expertise. For example, LoRaWAN and Wi-Fi operate in public portions of the spectrum, meaning anyone can deploy a certified gateway/router and devices just about anywhere. Most cellular options, by contrast, rely on commercial networks where carriers manage connections using a “SIM,” with corresponding charges for access. Commercial LoRaWAN networks also exist in some areas, requiring access charges but enabling device deployment without setting up and managing a network of dedicated gateways – and both approaches have their own pros and cons.

After doing all your analysis and making the “best” selection for your application, you may find it doesn’t perform acceptably due to something unforeseen, like interference from an identifiable or even a mysterious source. There is a bit of “magic” to making wireless deployments work effectively, and it’s important to have someone with the expertise to optimize an installation and adjust when conditions change.

So, why are there so many wireless technologies? Necessity. Many options must exist to serve many use cases, offering alternatives in cost, performance, and adaptability across the complex world of the wireless spectrum.
