Most of the companies I talk to have already made at least one connectivity mistake before they call us.
Either they built something that worked fine in the lab and fell apart in the field, or they’re three years into a deployment and realizing the technology they chose won’t scale the way their business needs it to. I’ve watched companies burn six figures redesigning connectivity architecture that could have been right the first time with a better set of questions upfront.
I’m the CRO at ObjectSpectrum, and I spend most of my time in sales and strategy conversations with companies either building an IoT solution for the first time or trying to fix one that underperformed. Almost every one of those conversations starts the same way:
“What kind of connectivity should we use?”
And close behind it:
“Our devices are going to be out in the field (sometimes very remote). What’s even possible?”
Here’s what I’ve learned: connectivity feels like a technical decision, but it’s actually a business decision. The wrong choice limits your scale, surprises you with costs, or forces a redesign at the worst possible time. The right choice depends entirely on asking the right questions before anyone writes a line of code or places a hardware order.
These are the six questions I walk through with every customer before we get anywhere near a protocol recommendation.
1. Do You Need Wired, Wireless, or a Hybrid Approach?
The wired vs. wireless framing is one of the first places customers get stuck, usually because they treat it as an either/or decision when it rarely is.
Wired connectivity (Ethernet, serial connections, and industrial protocols like Modbus or Profibus) is still the right answer in many environments. If your devices are in a fixed location, distances are short, and cabling is practical, wired is simpler and more reliable than anything wireless.
But as soon as devices are remote, mobile, or spread across large areas, wireless becomes necessary. And in most real-world IoT deployments, the best architecture uses both: wired where it’s practical, wireless where it isn’t. Locking yourself into a purely wireless approach early, even when wired would serve you better in parts of the deployment, is a common and avoidable mistake.
The first question isn’t “wired or wireless.” It’s “where does each approach make sense in our specific environment?”
2. How Far Does Your Data Really Need to Travel?
Range is one of the first specs people look at and one of the most misleading.
Every connectivity technology has an advertised range. That number is almost always measured under ideal conditions: flat terrain, no interference, clear line of sight. In a real deployment, you’re dealing with metal structures, concrete walls, dense foliage, RF interference from other equipment, and physical geography that doesn’t care about your spec sheet.
A LoRaWAN gateway might cover 10 miles in an open field and 300 meters in a dense urban industrial environment. Cellular coverage maps show broad coverage that disappears in certain valleys, buildings, or rural stretches. Zigbee and Bluetooth have short ranges that work well in tight mesh configurations but fail the moment you need to reach across a large facility.
When we work through range with customers, we start by mapping the actual deployment environment, not the ideal one. That usually changes the answer.
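To make that gap concrete, here’s the kind of back-of-the-envelope link budget we sketch early in these conversations. The transmit power, antenna gain, and receiver sensitivity below are illustrative placeholder values (loosely LoRaWAN-flavored), and lumping every real-world obstruction into a single loss figure is a crude simplification. But it shows why the spec-sheet number and the field number diverge so sharply.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def link_margin_db(distance_km, freq_mhz=868, tx_power_dbm=14,
                   antenna_gain_db=5, rx_sensitivity_dbm=-137,
                   environment_loss_db=0):
    """dB of margin above receiver sensitivity; negative means no usable link.
    All defaults are illustrative assumptions, not numbers from a real radio."""
    rx_power = (tx_power_dbm + antenna_gain_db
                - fspl_db(distance_km, freq_mhz) - environment_loss_db)
    return rx_power - rx_sensitivity_dbm

# Spec-sheet scenario: open field, ~16 km (10 miles), no extra losses.
print(f"open field, 16 km: {link_margin_db(16):+.1f} dB")   # healthy margin
# Dense urban/industrial: lump buildings and clutter into ~50 dB of extra loss.
print(f"urban, 16 km:      {link_margin_db(16, environment_loss_db=50):+.1f} dB")   # no link
print(f"urban, 300 m:      {link_margin_db(0.3, environment_loss_db=50):+.1f} dB")  # works
```

Same radio, same settings: ten miles in the open, a few hundred meters in clutter. The physics doesn’t change; the environment does.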
3. How Often Are Devices Transmitting, and How Much Data?
Transmission frequency and data volume have a bigger impact on connectivity selection than most people expect, partly because they’re so directly tied to cost and power consumption.
Technologies like NB-IoT and LTE-M are designed for low-power devices that send small, infrequent payloads (a sensor reading every few minutes, a status update once an hour). They’re excellent for those use cases and poor choices for anything requiring high throughput or near-real-time data streams.
Wi-Fi and standard LTE support higher data volumes and frequent updates, but they draw more power and typically cost more to operate at scale. LoRaWAN sits in the middle: low power, longer range, but with strict duty-cycle limits that constrain how often devices can transmit.
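To see how hard those duty-cycle limits bite, here’s a quick back-of-the-envelope calculation. The 1% duty cycle is typical of EU868 sub-bands, and the airtime figures are rough ballpark values for a small payload; your region, payload size, and radio settings will differ.

```python
# Back-of-the-envelope LoRaWAN duty-cycle math.
DUTY_CYCLE = 0.01  # 1% duty cycle, typical of EU868 sub-bands

# Rough airtime for a small (~20-byte) payload; actual values vary.
airtime_s = {
    "SF7 (short range, fast)": 0.06,   # ~60 ms on air per uplink
    "SF12 (long range, slow)": 1.5,    # ~1.5 s on air per uplink
}

for setting, t in airtime_s.items():
    max_per_hour = (3600 * DUTY_CYCLE) / t  # 36 s of airtime budget per hour
    print(f"{setting}: at most ~{max_per_hour:.0f} uplinks/hour")
# SF7: ~600/hour. SF12: ~24/hour -- a device at the edge of range that needs
# SF12 can send roughly one message every 2.5 minutes, and no more.
```

That ceiling is invisible in a lab test with one device near the gateway and very visible in the field, at range, at scale.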
The question isn’t just “how often does my device need to send data today?” It’s also “what happens to my connectivity model if that frequency doubles in two years?” I’ve seen deployments where a product pivot from periodic to continuous monitoring broke the connectivity architecture completely.
4. What Coverage Area Do You Need, Today and Two Years From Now?
Coverage decisions are some of the hardest to undo, so this question matters more than customers usually realize when we’re first talking.
Carrier-based cellular (LTE, LTE-M, NB-IoT) gives you wide geographic coverage without building or maintaining your own network. The tradeoff is ongoing service fees and the reality that “nationwide coverage” still has gaps, especially in rural and remote areas, which is exactly where a lot of industrial IoT deployments live.
Private networks (LoRaWAN being the most common example) can be the right answer when you need coverage in areas carriers don’t reach well, or when ongoing fees don’t fit your cost model. But building a private network means owning the infrastructure, managing the gateways, and solving how data gets backhauled from those gateways to your cloud or enterprise systems. That’s not insurmountable, but it’s real operational complexity.
Satellite connectivity (providers like Starlink and Iridium) has improved significantly and is worth considering for truly remote deployments where no other option is viable. Latency and cost remain constraints, but for certain applications it’s now the most practical choice.
The question I always push customers on: where might this deployment need to operate in 24 months? If there’s any chance of expansion into new geographies or operating environments, that answer needs to be part of the architecture conversation now.
5. Are You Optimizing for Upfront Cost or Long-Term Cost of Ownership?
This is where business model and technology strategy intersect directly, and it’s a conversation that doesn’t happen early enough in most projects.
Carrier-based connectivity has a simple cost structure: you pay recurring service fees and get broad coverage, reliable infrastructure, and minimal operational overhead. That simplicity has real value, especially if you don’t have internal resources to manage network infrastructure.
Private networks require capital investment upfront (hardware, installation, integration) but can reduce or eliminate ongoing per-device connectivity fees. For large deployments with hundreds or thousands of devices, that math can shift significantly in favor of private infrastructure over time.
Neither model is universally better. A company deploying 50 devices across multiple states is usually better served by cellular. A company with 5,000 devices concentrated in a few facilities might find a private LoRaWAN network pays for itself in 18 months. The decision depends on deployment scale, operational capabilities, and financial model, and it needs to be made with real numbers, not assumptions.
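Here’s the shape of that break-even math as a toy calculation. Every number in it is a made-up placeholder to be replaced with real quotes, which is exactly the exercise we run with customers.

```python
# Toy total-cost-of-ownership comparison: carrier cellular vs. private LoRaWAN.
# All figures below are illustrative placeholders -- substitute real quotes.
devices = 5_000
cellular_fee_per_device_month = 2.00   # assumed carrier rate per device
private_capex = 150_000                # assumed gateways + install + integration
private_opex_month = 1_500             # assumed backhaul, power, maintenance

for month in range(1, 61):
    cellular_total = devices * cellular_fee_per_device_month * month
    private_total = private_capex + private_opex_month * month
    if private_total <= cellular_total:
        print(f"Private network breaks even around month {month}")
        break
```

With these placeholder numbers the private network crosses over around month 18. Change the device count to 50 and it never crosses over within five years, which is the whole point: run the math with your numbers before committing.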
6. How Critical Is Your Data, and How Long Will These Devices Be in the Field?
The last question is the one that catches people off guard, but it shapes everything about how you design for reliability and longevity.
Not all data has the same criticality. A temperature reading from an agricultural sensor that misses one transmission is a minor inconvenience. A missed alert from a pressure sensor in an industrial pipeline is a serious problem. The connectivity architecture for those two use cases should look very different: redundancy, guaranteed delivery, failover design, and how you handle connectivity gaps all change.
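Here’s a rough sketch of what designing for criticality can look like at the device level. The two send functions are simulated stand-ins for real radio drivers, and the retry-then-failover policy is one illustrative approach, not a prescription.

```python
import random
import time

# Simulated stand-ins for real radio drivers; each returns True on acknowledgment.
def send_primary(msg: dict) -> bool:   # e.g. LoRaWAN confirmed uplink
    return random.random() > 0.3       # simulated 70% delivery rate

def send_backup(msg: dict) -> bool:    # e.g. cellular fallback modem
    return random.random() > 0.05      # simulated 95% delivery rate

def send_routine(msg: dict) -> None:
    """Fire-and-forget: one lost agricultural temperature reading is acceptable."""
    send_primary(msg)

def send_critical(msg: dict, retries: int = 3) -> bool:
    """Retry with backoff, then fail over: a lost pipeline alert is not acceptable."""
    for attempt in range(retries):
        if send_primary(msg):
            return True
        time.sleep(2 ** attempt)       # exponential backoff: 1 s, 2 s, 4 s
    return send_backup(msg)            # last resort: the secondary link

send_routine({"sensor": "field-12", "temp_c": 21.5})
ok = send_critical({"sensor": "pipe-03", "alert": "overpressure"})
print("critical alert delivered:", ok)
```

The routine path costs almost nothing. The critical path costs a second radio, more power, and more engineering. Knowing which data deserves which path is the design decision.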
Device lifecycle is the other half of this question. IoT devices often have long operational lives: five, seven, ten years in the field. Over that period, they’ll need firmware updates to patch security vulnerabilities, fix bugs, and add capabilities. Not all connectivity technologies handle over-the-air updates equally well, and some make it genuinely difficult. If you’re deploying devices that will live in the field for a decade, your connectivity choice needs to support the full operational lifecycle, not just the first deployment.
A Framework for Choosing IoT Connectivity That Actually Works
After years of these conversations and watching deployments succeed and fail, one thing is consistent: the IoT solutions that hold up over time almost never rely on a single connectivity technology.
The successful ones deliberately combine options, maybe cellular for primary backhaul with LoRaWAN for local sensor networks, or wired Ethernet at the edge with cellular for remote monitoring. The combination is designed around the deployment’s actual constraints: range, power, data requirements, coverage, cost structure, and operational realities.
Connectivity isn’t a checkbox in the architecture. It’s a core part of how your solution scales, how you support it, and how much it costs to operate over time.
If you’re designing an IoT solution, or trying to fix one that isn’t working the way you expected, these six questions are where I’d start. Get those answers right, and the technology decisions that follow tend to be a lot cleaner.
ObjectSpectrum works with companies across industries to design, build, and optimize IoT solutions. If you’re working through these questions and want a second opinion on your connectivity approach, reach out to our team. We’ve seen a lot of deployments. We know what works.
Kevin Lofgren is the Chief Revenue Officer at ObjectSpectrum, an IoT solutions company specializing in connected device strategy, architecture, and deployment.
