See “what is IoT?” But in a sentence: IoT is different because it is the beneficiary of the proliferation of mobile devices and related technologies, which has driven the price down and the accessibility up. And that’s why it’s a revolution now.
And by the way, when the internet became public knowledge – the vast majority of people had not heard of the internet until 1993 – it had existed since 1969. The point is that the ability to do remote monitoring has existed for decades. The ability for it to be accessible – both in cost and in widespread use of the technology – that’s the revolution. The inflection point in the nineties that made the internet what it is today was a culmination of different trends that, over the years, had made it more accessible. Between 1969 and 1993, it was used primarily by the government and the military, because they were the only ones who could afford it, and by universities who were funded to advance it. We’re seeing the same kind of thing happen here with IoT: many companies and industries – particularly large ones – have been using different kinds of telemetry technology for decades, but until IoT, it really hasn’t been accessible to the masses.
Machine learning (ML) is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy. AI is a term that defines a broad category of intelligence – much of which is still highly experimental, or even purely theoretical. But one component of AI is ML, which is a mature technology already in use in thousands of applications across hundreds of industries, and something you will often find as part of an IoT solution. As to whether you should use it or not, the real answer is “it depends.” Machine learning is not a magic bullet or a magic ingredient. It is a technology like any other that, in certain applications, can add a tremendous amount of value to the solution – and in other cases, it has no place in the solution. We have experience using machine learning. We will look at machine learning as a possible element of your solution when it makes sense to do so. But we have no vested interest in whether we include ML in your solution or not.
It’s typically a piece of equipment or object that you are trying to remotely monitor. It could be an HVAC system or it could be a field full of corn plants. It is the thing that you want to remotely monitor or manage. A thermostat in the building, part of the building itself, it could be a truck, it could be a corn crop, it could be a cow. Most IoT is one-way – we’re collecting data and making use of that data. But of course, you can also send commands downstream and in some cases, remotely control those things. Soil moisture probes and weather stations are good one-way examples and remotely monitoring pipe pressure and then remotely controlling the shut-off of a pipe is a good two-way example. The “thing” is any of those things.
And by the way, the reason the term “Internet of Things” was coined, is that we typically think of the endpoints of the Internet as being people – “I am on my computer/phone using the internet.” The internet of things is that the endpoint is not a person. It’s a thing.
There is some confusion around what people mean when they use the term “no-code” (and also “low-code”). The idea, theoretically, behind no-code is that you do not need to be a programmer to use it. The problem with that concept is that some of the things that you are doing within a no-code environment actually do involve defining logic. But you’re typically doing that through some kind of drag-and-drop interface. So instead of writing a piece of code that defines the logic, you’re drawing something like a diagram on the screen, and that’s what defines the logic. So, the question there is, “haven’t we just given you a programming tool that just gives you a different way of programming?” We liken this to a WYSIWYG HTML tool like Dreamweaver used to be. I still had to tell it I want this option field to be a radio button. I want it to have these values, I want these values to mean these things, and I want the submit button to go here and write this to the database. I didn’t have to code it myself with a programming language, per se, but I still was programming in a different way. That is typically what people mean when they call something “no-code.” They mean something you can program through a drag-and-drop interface. And it theoretically does not require any programming skills.
Low-code is kind of a hybrid, where it requires some programming skills, but much of it can theoretically be done by someone without programming skills. So, the problem is that different people in different companies will describe their tools as no-code or low-code without a consistent definition of those terms across different companies, or even among people within those companies. The bottom line is that this category of low-code and no-code tools – which typically go together in discussions, which itself reinforces how much confusion there is between the two terms – tends to be very highly opinionated. Which means that – whatever you’re building – you are going to be operating under the limitations that someone thought you might need when they built their platform. So, if you want to put a chart on the page, then you pick a chart from one of their drop-down menus, they put a chart on the page, and they give you three things you can change, like the color and the title, and that’s it. Highly opinionated just means there are a lot of constraints, because the closer you get to no-code, the less flexible it becomes. The other limitation is that – if you want to do something that the tool has not anticipated – you probably just can’t do it. You probably have to wait until the vendor of that tool provides it, because you have no ability to extend it yourself. Now, some low-code environments will give you a way to extend it, but it’s kind of weird because – if you bought the thing to use it without being a programmer – how are you expected to be a programmer when you get to the things that require a programmer?
They tend to be more suitable for simpler applications and they tend to make for a nice demo. They’ll show you how quickly you can make a page with some charts and gauges on it, how to hook it up to a data source and have a running application. But it’s typically a rather simple application that doesn’t involve anything that’s too far outside of what the capabilities are of the tool and isn’t particularly complex when it comes to the logic that it encapsulates.
The potential benefit to those types of tools is that they can sometimes be used to quickly prototype something. These are often great for prototyping a demo to show your idea’s potential to an investor or your board, for example. So the advantage of it being highly opinionated is that they tend to automatically make a lot of assumptions for you. So you go in and define a data source and the data is coming from wherever and when you drop your chart on the page, it’s usually smart enough to assume that the data is coming from that data source that you specified and it automatically hooks it up for you. As a result of it making a lot of assumptions for you, for doing a quick prototype, it can be useful. But they tend to break down when you go too far beyond simple solutions and basic prototypes and they don’t provide much flexibility when it comes to look and feel and user experience.
Also, just because a no-code environment doesn’t require true programming skills does not mean it doesn’t have a learning curve. The people that do those videos that show them building an app in three minutes? Yeah, they’ve been doing it for years, on that specific platform. It looks super easy because the people who are showing you that have been doing it for a while. They may even be the same people that developed it.
It is an IoT solution that either does not connect to a cloud service of any kind or does not need a cloud service in order to function. So, typically the places you would find cloudless IoT of the first kind are going to be applications where the connectivity between the IoT devices and the user are going to be over a local technology, like Bluetooth, where the functionality is usually running on a phone and the IoT devices are connected to that phone using Bluetooth. You’ll find that sometimes in consumer IoT where you’ll have a phone app that controls a light bulb, for example. It’s cloudless in the sense that there is no need for it to go to and return from the cloud in order to turn on your lightbulb. Although, bizarrely, there are many examples of apps that turn on light bulbs that actually do have to make a roundtrip to the cloud in order to perform that function. Most IoT solutions that we build are not “cloudless”, because they involve interfacing with remote things and accessing them over the Internet.
Device management is the ability to add a device to whatever kind of network or connectivity that device is using to communicate. That process is usually called provisioning. You are provisioning a device on a network. That could be a lot of different things, but one example of that would be provisioning a device on a cellular network. That is how you set that device up so that it can connect to and send data over the cellular network. That’s the first part of it. The second part of it is when you have large quantities of these devices, you need some way to manage those devices. You need some way to detect that a device is not working anymore. If it’s battery-operated, you have to check the battery’s charge to make sure it’s not running out. And if the devices can support “firmware updates over the air” (FUOTA), you need a way to manage that. These are all things that are typically done as a part of device management. It doesn’t necessarily have anything to do with the device, nor with the application – it has to do with how you add, monitor, troubleshoot, replace and ultimately remove the physical devices from the communications network they’re communicating on. And, depending on the particular requirements, Prism can either include device management as part of the application or it can integrate with third-party device management platforms when appropriate.
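To make the bookkeeping side of this concrete, here is a minimal sketch in Python of the kind of registry a device management function maintains. The names (`DeviceRegistry`, `needs_attention`, the two-hour offline threshold) are invented for illustration – this is not Prism’s actual implementation, just the general idea of provisioning devices, recording check-ins, and flagging devices that have gone silent or are low on battery:

```python
# Hypothetical sketch of device-management bookkeeping: provision devices,
# record their check-ins, and flag the ones that need attention.

from datetime import datetime, timedelta

class DeviceRegistry:
    def __init__(self, offline_after=timedelta(hours=2)):
        self.devices = {}
        self.offline_after = offline_after  # assumed "gone silent" threshold

    def provision(self, device_id):
        # Adding a device to the system so it can be tracked.
        self.devices[device_id] = {"last_seen": None, "battery_pct": None}

    def check_in(self, device_id, battery_pct, now):
        self.devices[device_id].update(last_seen=now, battery_pct=battery_pct)

    def needs_attention(self, now):
        """Devices that never reported, went silent, or are low on battery."""
        flagged = []
        for device_id, info in self.devices.items():
            silent = (info["last_seen"] is None
                      or now - info["last_seen"] > self.offline_after)
            low = info["battery_pct"] is not None and info["battery_pct"] < 20
            if silent or low:
                flagged.append(device_id)
        return flagged

now = datetime(2024, 6, 1, 12, 0)
registry = DeviceRegistry()
for dev in ("sensor-1", "sensor-2", "sensor-3"):
    registry.provision(dev)
registry.check_in("sensor-1", battery_pct=80, now=now)
registry.check_in("sensor-2", battery_pct=15, now=now)  # low battery
# sensor-3 never checked in at all
print(registry.needs_attention(now))  # → ['sensor-2', 'sensor-3']
```

A real device management platform layers much more on top of this – network provisioning, FUOTA campaigns, troubleshooting tools – but the core is exactly this kind of fleet-level bookkeeping.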
Think about the “edge” as being the physical location of wherever the IoT solution is happening remotely. So, it’s not where the user is. It’s where the remote thing is. For example, if you are monitoring a generator in a factory, the “edge” is where that generator is in that factory. So, when you hear “edge computing,” it’s basically saying “instead of the computing going on in the cloud somewhere, the computing is going on at the site of the thing.” The reason you might want to do this is because: a) you may not have time for the two-way communications to make the round-trip to the cloud and back (that delay is called “latency”), b) it might be in a remote spot where a connection to the internet is spotty or non-existent, and/or c) the nature of the remote sensing can benefit from doing some of the work locally to reduce the bandwidth needed between there and the cloud. It moves the computing to the place it’s most needed. That speeds it up, and it means the system doesn’t have to be connected to the internet, or can use less bandwidth, to function.
The “edge” could be literally on the device itself, or it could be some type of a local computing appliance or other computer that is located near or within the place where the data is being collected.
Typically, edge computing is used when you need to make very quick decisions – when you don’t have time to send measurement data to the cloud, have the cloud do the analytics, and then send the command back to the device. That delay is called latency. Even if that process is very, very quick, there is a fraction of a second that it takes to send those messages back and forth. And in certain applications you have to make those decisions very quickly.
Another case for edge computing would be cases where you had very limited bandwidth. For example, if you had some type of a sensor – like a camera that was collecting a lot of data for video that requires high bandwidth – but you were using the edge computing to count the cars that were in the field of view of the camera. And all you cared about was the number of cars. That’s a case of edge computing taking very high bandwidth information – the video – and turning it into very low bandwidth by just relaying the message, “there are seven cars now.”
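The car-counting pattern can be sketched in a few lines of Python. This is a toy illustration, not real computer vision: `detect_cars` stands in for an actual model, and a “frame” here is just a list of labeled objects. The point is the shape of the data flow – high-bandwidth frames in, tiny summary messages out:

```python
# Hypothetical sketch of edge-side bandwidth reduction: consume
# high-bandwidth frames locally, transmit only a small summary.

def detect_cars(frame):
    # Placeholder detector: a real system would run a vision model here.
    return sum(1 for obj in frame if obj == "car")

def edge_summarize(frames):
    """Reduce a stream of frames to occasional low-bandwidth messages."""
    last_count = None
    messages = []
    for frame in frames:
        count = detect_cars(frame)
        # Only transmit when the count changes, reducing bandwidth further.
        if count != last_count:
            messages.append({"car_count": count})
            last_count = count
    return messages

frames = [
    ["car", "car", "tree"],
    ["car", "car", "tree"],           # no change: nothing sent
    ["car", "car", "car", "person"],  # change: one small message sent
]
print(edge_summarize(frames))  # → [{'car_count': 2}, {'car_count': 3}]
```

Three frames of “video” became two tiny messages – that is the whole bandwidth argument in miniature.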
The last of the big three reasons to use edge computing is if you need the system to be able to operate without being connected to the cloud. This is less about being “cloud-optional” and more about being fault-tolerant. If you have a system that can’t afford to stop working when your internet connection goes down, then edge computing is one solution for that: it can make local decisions, or continue to collect data until the internet connection comes back up again.
Those are typically the big three: 1) to solve latency (you need it to be fast), 2) when you have limited bandwidth (do the high-bandwidth work locally and convert it into low-bandwidth information), or 3) when you have spotty internet connectivity at the site where it’s collecting data.
5G is a collection of technologies that make up the next generation of cellular technology. The two big selling points of 5G are: 1) much higher bandwidth than is typically possible with current 4G technology, and 2) lower latency than is typically seen with 4G technology. As with so many things, having 5G doesn’t necessarily guarantee that you’ll have higher bandwidth than 4G, but in general, the technology is built around the idea of being able to provide higher bandwidth and lower latency. So, if your application specifically requires either very high wireless bandwidth or very low latency, then 5G might be a good choice. If it doesn’t require either or both of those, then there may not really be an advantage to you in using 5G. The 4G cellular network is not going away any time soon. 5G is the next generation and those are the two things it’s designed to do. If you care about those things, then 5G might be worth looking at. But if you don’t care about those things, then 5G may not matter to you at all.
Also, you may see NB-IoT and Cat-M1 technologies talked about as being part of 5G. While they are considered to be part of 5G, they actually did exist as part of 4G as well. Sometimes people will talk about low-bandwidth/low-power being a feature of 5G, but they are really talking about NB-IoT or Cat-M1.
Effectively, it’s a computer model of a physical thing. That model is not like taking a picture of the physical thing; it’s a set of business logic and rules that define how that thing operates. This is so that you can simulate the operation of that thing without actually doing it in the real world. It’s like a virtualized version of a physical thing. The idea is that you are able to maintain that model – the virtual model, the twin – in such a way that it reflects the physical thing that it’s mapped to. So, it’s one of those things that people tend to talk about, but it’s not something that you will find widely deployed, because it’s incredibly complicated and expensive to do. The more elaborate the physical thing is, the more complicated and expensive it is to build and maintain a digital twin.
So, if you have a fairly simple physical thing – like let’s say it’s a generator – and you’ve got several sensors on that generator, you could build a digital twin of that generator that will allow you to simulate certain aspects of it, so you can do things like – what happens if I run it at 110% of its capacity for a year? You probably don’t want to do that with your real generator, but you could do it with a digital twin, and the theory is that if the computer model is accurate enough, then you will usually get data out of the model that is representative of – and fairly close to – what would happen in the real world. That’s the basic idea of a digital twin.
It can get more complex than that, though, because they can also simulate what would happen if there were physical modifications made to the physical thing. In that case, the modifications to the physical item would also have those same modifications made in the computer to the digital twin. Then you could have an entire fleet of physical generators, all with their own modifications, and then a digital twin for each of those generators in the computer, where each digital twin is an exact duplicate of its specific generator and you can just make the modifications in the computer to each one before you make that modification to the physical generator.
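A deliberately over-simplified version of the generator example can be sketched in Python. Everything here is assumed for illustration – the class name, the cubic wear curve, the wear rate – a real twin would encode the actual physics and business rules of the machine. But it shows the core idea: ask the model a “what if” question you would never ask the real equipment:

```python
# Hypothetical toy "digital twin" of a generator: a rules-based model
# that lets us simulate a year at 110% load without touching the machine.

class GeneratorTwin:
    def __init__(self, rated_kw):
        self.rated_kw = rated_kw
        self.wear = 0.0  # 0.0 = new, 1.0 = end of life

    def simulate(self, load_kw, hours):
        """Apply an assumed wear rule: running above rated load wears faster."""
        load_ratio = load_kw / self.rated_kw
        rate_per_hour = 1e-5 * (load_ratio ** 3)  # invented wear curve
        self.wear = min(1.0, self.wear + rate_per_hour * hours)
        return self.wear

# One simulated year (8,760 hours) at rated load vs. 110% of rated load.
wear_normal = GeneratorTwin(rated_kw=100).simulate(100, 8760)
wear_over = GeneratorTwin(rated_kw=100).simulate(110, 8760)
print(f"wear at 100%: {wear_normal:.3f}, at 110%: {wear_over:.3f}")
```

The value is in the comparison: the twin tells you the overloaded unit wears measurably faster, long before the real fleet ever does.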
A device that has the only (or primary) job of routing data from one technology to another. So, for example, a LoRaWAN gateway is one that sends and receives messages over the LoRa wireless technology and then relays that information back to the cloud, possibly using cellular technology, or maybe just a direct connection to the Internet. And some gateways can have additional functionality beyond that, like computer resources for edge computing, but that’s the basic gist of what a gateway does—it translates data between different networking technologies.
A message broker is a piece of software, and the infrastructure that it runs on, whose job it is to receive and distribute messages. These messages are data packets that are being sent – usually, in the context of IoT, from devices to the cloud application and/or from the cloud application back out to the devices. So, message brokers are very high performance, very good at handling very high volumes of messages; they handle all of the details when it comes to ensuring delivery of a message, retrying that message, and in some cases distributing one message to several different destinations. While there are many different messaging technologies, including some that are proprietary and offered as third-party services, MQTT is probably the most commonly used messaging protocol within IoT, and MQTT brokers are correspondingly common. We use MQTT as well as several other brokers and protocols, depending on the needs of the application.
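The core job of a broker – accept messages on named topics and fan them out to every subscriber – fits in a few lines. This is an in-memory sketch for illustration only (`TinyBroker` is an invented name); a real broker such as one speaking MQTT adds network transport, delivery guarantees, retries, and persistence on top of this idea:

```python
# Minimal in-memory sketch of publish/subscribe message brokering:
# publishers send to a topic, the broker distributes to all subscribers.

from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # One published message may be distributed to several destinations.
        for callback in self.subscribers[topic]:
            callback(topic, message)

broker = TinyBroker()
received = []
broker.subscribe("sensors/temperature", lambda t, m: received.append((t, m)))
broker.subscribe("sensors/temperature", lambda t, m: received.append(("copy", m)))
broker.publish("sensors/temperature", {"celsius": 21.5})
print(received)
```

Note that the device publishing the reading and the applications consuming it never talk to each other directly – that decoupling is what makes brokers scale so well.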
It is what Prism Core and Prism Edge are built on. It uses a software development model where, instead of writing one giant program that performs a set of functions, you break each of those functions down into its own individual service. So, we didn’t invent it, but it’s a way of writing software where you’re not dealing with a big monolithic piece of software – you’re dealing with a whole bunch of separate services that each do one very specific thing. The application built on those services makes use of them to do everything it needs to do. The reason it’s good for IoT is that IoT tends to need to be very scalable and tends to need to be operating 24/7, and a microservice environment is very good at meeting those two requirements. Since you don’t have a giant monolithic piece of software – you have lots of small, independent services – it’s very easy to scale: you literally just add more servers and you’ve scaled up your application.
The other thing is that, for the same reason, it is very good for systems that run 24/7 (“high-availability” systems), because you’re not dependent on any one server continuing to run. If a server fails, you did lose some portion of your capacity, but those microservices are able to run on any one of the other twenty servers that you have. So for both of those points, microservices are extremely good at high availability and massive scale. That’s why we chose it. A third reason is that it makes it easier to do maintenance and to update complex applications, because you can go in and work on a specific microservice without really risking breaking other functionality, since they’re all independent of each other.
In the Prism Edge environment, the individual microservice is the thing that is updated when you’re doing a software update to the Edge. So, it’s very precise and very low bandwidth: you make one change to one microservice and that’s the only thing that’s sent out to the Edge device. You’re not having to send a very large file with an entirely new copy of the application; you’re doing just a pinpoint update on only the microservice that you’re changing.
A time series database is specifically about being able to store the same information over time and then being able to quickly aggregate that information. This is very common in IoT because typically you have data coming in from all the different sensors, and those sensors are reporting that data, usually on some type of interval (maybe it’s every minute or every hour or whatever the case is), and it’s the same information each time but with a different time stamp. You could technically store a time series in any kind of database, but a time series database is specifically designed to be very efficient at storing the same information over and over again with a different time stamp on it. It’s also very efficient at aggregating that data, so you can say, “well, I have rainfall data from every ten minutes over the last week, but I want to know quickly how much rainfall we had in the last week.” A time series database is very good at aggregating that and giving us the total rainfall for the week, as opposed to returning hundreds of individual samples. A time series database is purpose-built and very common in IoT because IoT by its very nature is all about collecting data over time and reporting the aggregated data to the user. Other database technologies just don’t do this very efficiently. A time series database, because it is such a common requirement for IoT applications, is one of the capabilities that’s built into Prism.
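The rainfall example can be made concrete with a small sketch. A real time series database does this rollup efficiently over billions of rows; plain Python just makes the shape of the data – the same measurement repeated with different timestamps – and the aggregation visible:

```python
# Sketch of the aggregation a time-series database is optimized for:
# many timestamped samples of the same measurement, rolled up into one number.

from datetime import datetime, timedelta

# Simulated samples: rainfall in mm, one reading every 10 minutes for a week.
start = datetime(2024, 6, 1)
samples = [
    (start + timedelta(minutes=10 * i), 0.2)  # (timestamp, rainfall_mm)
    for i in range(6 * 24 * 7)                # 6 per hour * 24 h * 7 days
]

def total_since(samples, cutoff):
    """Aggregate: total rainfall across all samples at or after the cutoff."""
    return sum(value for ts, value in samples if ts >= cutoff)

weekly_total = total_since(samples, start)
print(f"rainfall over the week: {weekly_total:.1f} mm")  # → 201.6 mm
```

The user asked one question (“how much rain this week?”) and got one number back – the 1,008 underlying samples never had to leave the database.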
It is when you use video input from a camera (which could be visible spectrum or it could be outside the visible spectrum, such as an infrared camera), and the purpose of the camera is not to send a live video stream back to the application, but instead to capture the video feed, use some type of edge computing (often machine learning) to interpret the information that is being captured by the camera, and then return small bits of sensor data instead of sending the whole video feed. For example, a visual spectrum camera is watching a parking lot and we want to know how many empty parking spaces there are in that parking lot. One camera could potentially see the entire parking lot and allow us to quickly count and transmit the number of cars or empty spaces in the parking lot instead of putting a sensor on every single parking space. Another example would be an infrared camera that is pointed at a particular piece of machinery. It could watch for hot spots on that machine and only transmit an alarm if that machine hit a temperature threshold that was set in the application. Instead of putting a bunch of temperature sensors all over this machine, one infrared camera can detect the temperature at all 73 places on the machine and send an alert when any one of those 73 places gets too hot.
This is a term that you will see spring up here and there, but it is the combination of AI (which is really Machine Learning) and IoT. It’s kind of a marketing term. It doesn’t imply any particular technology. It’s just about using Machine Learning as part of an IoT application.
Templates are basically pre-built modules that are largely complete, but still allow for significant customizations. This is one of the ways we can speed up development of applications. If your application needs user management or device management or needs a visualization that involves markers on a map – those are templates that we have already built, having already done 80% of the work for that function. And then from there, we can incorporate it into your application and customize it for your specific requirements.
Templates are also a secret weapon for us, in the sense that they’re not shrink-wrapped modules that can’t be customized or modified for a particular application need. They’re more like the structure, the components, and the building blocks that have all been pre-integrated together for the common kinds of functionality. So it means that we can very quickly take one of those templates and use it as the starting point for that functionality, while still allowing for a high degree of customization to make it work for the needs of a specific application.
The ecosystem is the whole collection of companies that are part of the stack that makes up an IoT solution. Very rarely will you be able to create an IoT solution on an island in isolation, in a vacuum. Because the nature of IoT is that it makes use of components that typically come from a lot of different places. So that would include companies that make sensors, radios, network providers that are providing connectivity, cloud service providers, application developers, and many other components. So, the ecosystem is this collection of manufacturers and service providers that all play a role in the different components that make up an IoT solution.
Unfortunately, the term “platform” is used in a lot of different ways to mean a lot of different things. It’s unfortunate because it’s hard to really nail down what it means in any specific instance. At a very generic level, a “platform” is something designed to make it faster, easier, lower cost and lower risk to build and manage something. Some of the other ways that the word “platform” is used is to describe a particular family of hardware that comes with development tools and libraries and perhaps some type of provisioning system, and that might be referred to as a “platform” because it’s the building blocks and the scaffolding, if you will, that you would use to build a device.
You can also have “platforms” where their focus is more about device management, so they don’t really have anything to do with the devices themselves, they don’t really have anything to do with the application, but they have to do with how you provision and manage thousands or hundreds of thousands of devices. And as you work your way up the stack, the word “platform” has been used for what Prism is, which is a set of tools and an operating environment that is built around the idea of using it to build custom applications. And sometimes software like Prism is called an “application platform” or even an “application enablement platform.”
A very small-scale deployment where you’re focusing on the key functionality of the desired solution. We strip away everything that is not necessary beyond the key functionality that you’re trying to evaluate. For example, if you’re trying to determine whether or not measuring atmospheric pressure is going to give you useful, actionable data for your factory, then the first thing we would do is create a minimalist solution – almost a laboratory test, in a way – that lets us put some barometric pressure sensors into your factory and collect that data in a way that can determine whether it’s going to be useful or not, before we spend any time and money building a whole application around that process. It’s a minimalist implementation that strips away everything but the core or key functional aspects, and to some extent it could be deployed in a very small scope among data testers, or in maybe a couple of test environments, where the idea is to basically test whether or not something will work and give you the actual data that you’re looking for.
The other thing about a proof of concept is that you will almost always learn things from the POC that will influence the requirements and objectives of the full solution development. So, it’s something that we often recommend as a way to test out a concept before you commit to the full project. And of course, this is on a case-by-case basis. Sometimes it may not be necessary and other times it may not be feasible to do it.
Bluetooth Low Energy (BLE) is a relatively short-range technology that is universal (works on the same frequency anywhere in the world), very accessible (every smartphone has BLE capability), and designed for transmitting relatively small amounts of data. Despite the name, Bluetooth Low Energy is not the same as Bluetooth, and the two are easily confused. They are related to each other, but Bluetooth has enough bandwidth that you can send high-quality audio over it to your earbuds; BLE is designed for low-power devices with small amounts of data. For certain applications where you don’t care that much about range, and/or you want to be able to connect directly to an end user’s smartphone (one example of this is cloudless IoT applications), BLE is often a good choice because it has that universal accessibility combined with an ultra-low-power capability that no other technology really has, meaning you can build battery-powered BLE devices that last for years on a battery. Another place that we tend to see BLE used is as a secondary wireless technology on a device that may have a LoRaWAN or a cellular radio as its primary communication link, where BLE is used as a way to configure or provision the device (because it can be done using any smartphone). It’s not super common, but it does happen.
Declarative means you are not writing “functional” or “procedural” code to do something. Instead, you are declaring what you want to happen.
The main example of a declarative approach within Prism is Prism View. For example, instead of writing code that tells the browser how to draw a bar graph, the UI/UX person that is developing the UI part of the solution declares how they want that bar graph to appear. They are saying, “I want a bar graph, I want it to be here, I want it to be this size, I want it to use these colors, I want it to have these labels, I want it to pull from this data.” That’s what declarative programming is. You are declaring what you want to happen, you are not writing software to make it happen. The advantage is that it’s faster to develop, it’s less error-prone, and it’s easier to maintain over the life-cycle of the application.
Another example is our Binary Schema module, used to decode packed binary data payloads, typically received from sensor nodes. While it runs in Prism Core and Edge, which are not declarative environments, the process of building a payload decoder with the Binary Schema module is itself declarative. So instead of writing code that procedurally extracts data from the binary payload (which can quickly turn into some real spaghetti code), the developer simply declares the desired result: data in the form of meaningful named properties. And as with Prism View, this approach is faster to develop, less error-prone, and much easier to maintain and modify.
IIoT is typically about IoT technologies that are applied in industrial environments. It is differentiated from consumer or commercial IoT applications in the sense that IIoT tends to be more ruggedized and suitable for an industrial environment. The software, and the components that go into building the software, are very similar; industrial IoT’s differences are usually in the physical devices, the sensors that are used, and the functionality of the software.
It’s a managed service, customized to your needs, that can provide everything necessary to deliver your IoT solution turn-key, including things like end-user support and logistics. We can take care of ordering and managing your inventory and shipping devices to your end-users. The customer can do as much as they want to do, and we’ll do the rest – whatever “the rest” is (0%, 100%, or anywhere in between).
IoT, in general, is the ability to see what’s happening remotely through sensors, using modern internet technologies. In the past, this kind of thing would have been called telemetry or machine-to-machine (M2M) communications. The difference between IoT and those solutions (which have been around for a really long time) is that IoT leverages technology that was originally developed for smartphones. And the reason that is important is that the sheer volume of the smartphone market is what effectively paid for the miniaturization and cost reduction of these technologies. Twenty years ago, if you had wanted to put some sensors on a remote piece of oil well equipment, you could absolutely have done it, but it would have cost you tens of thousands of dollars, because the sensors were expensive, the computers were expensive, the connectivity was expensive – everything was expensive. But billions of cell phones are produced every year, and that is what brought the miniaturization – I mean, think about it. Your phone has a GPS in it, right? Remember how big a GPS used to be? It was its own separate thing! It’s all done on a tiny little chip now. And all the other sensors in your cell phone, and even the whole wireless component of it – all of this stuff we use in IoT comes from cell phones.
That market is so large that it drove down the price, the size, and the power requirements of these components. So what IoT is doing is taking advantage of all of that and saying, “Hey, no longer do we have to go out and spend $3,000 on a sensor that can communicate wirelessly and tell us what the humidity is in this place. We can now do that for $30.” That is the big deal, from a “what makes IoT different from the telemetry or machine-to-machine communications that have been around for decades” perspective.
From the standpoint of “why does all this matter,” having remote visibility into things in real-time lets you make better decisions. It really is as simple as that. If you are a manufacturer of HVAC equipment, knowing today that the coolant pressure in a system is 20% low gives you the ability to make better decisions, provide better service to your customers, make the equipment last longer, etc., as compared to not knowing that information for six months until you go out and inspect that piece of equipment on its scheduled maintenance call. That’s really the bottom line: from a business standpoint, IoT gives you remote visibility into actionable data in real-time so you can make better business decisions. And because of this cell phone-driven path to tiny, low-power, low-cost sensors and cheap wireless communication, this technology is suddenly within reach of almost any company, as opposed to only the EXXONs of the world.
These are the salient points – the gospel, if you will – of IoT.
LoRa stands for “long range,” so LoRaWAN literally means “long-range wide area network.” It is a low-power WAN technology that uses license-free frequency bands to transmit small amounts of data – either over long distances (ten miles or more) or through obstructions like buildings, typically being able to penetrate several concrete walls. It’s used in a lot of applications, including agriculture, smart buildings, and smart cities, where you need either the long range or the deep building penetration to wirelessly collect data from sensors and/or wirelessly send commands to actuators. It is often used in applications that need to be very low power, like battery-powered devices that last five years or more on a battery, or energy-harvesting devices, like devices that run on solar power virtually forever. It’s a wireless technology that can be operated either on a private network or on any one of a number of public networks that provide LoRaWAN connectivity for a monthly fee. LoRaWAN is one of the wireless technologies that we use at ObjectSpectrum, and our solutions often include network management services for customers’ private LoRaWAN networks.
These are two different technologies that run on the cellular network. They’re both designed for low-power devices that have relatively low bandwidth requirements. So, in a way, they are competing standards, but in most places today you will find support for either or both. They do have some slight differences in capability that might make you lean one way or the other, but if you were going to build a cellular-based IoT device that didn’t need high bandwidth, then one of these two technologies would probably be the way to go.
Like LoRaWAN, it is also a long-range, low-power, low-bandwidth wireless communication technology. It is available worldwide, with some countries having outdoor coverage and some cities having indoor coverage. It’s always provided by a Sigfox public service provider. So, unlike LoRaWAN, you can’t have a private Sigfox network, but in other ways they are somewhat similar in the sense that they are designed to send small amounts of information over long distances. Sigfox is often found in remote sensing applications, which include things like meter reading, agricultural applications, and tracking applications.
This is where you get into that whole “stack.” Typically, you think about it from the bottom up. You’ve got the thing – what you want to remotely monitor or control. What is the device that does that monitoring or control? What kind of hardware is it built with? What kind of software is that device running? How does that device communicate to a network that allows it to send the data it collects somewhere else? How does that network connect to the application, which is collecting and storing data from all of the deployed devices? How do analytics and machine learning apply to that data? How do users interact with that information (the application layer, with visualization and user input)? How do you store that information long-term so that you can analyze the historical data? That is an IoT solution. It’s the hardware, the firmware, the network communication, the application, the database, the analytics, the user interface, and data visualization – and tying all of that together into an end-to-end solution.
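The bottom-up flow described above can be sketched as a toy pipeline in Python. Every name here is an illustrative stand-in – not a real ObjectSpectrum API – but it shows how a reading moves through the layers of the stack.

```python
# A toy end-to-end sketch of the IoT stack, bottom-up.
# All names are illustrative stand-ins, NOT a real ObjectSpectrum API.

database = []  # stand-in for long-term storage

def sense() -> bytes:
    # The "thing": hardware + firmware producing a raw reading
    # (here, temperature in tenths of a degree as a 2-byte value).
    return (235).to_bytes(2, "big")

def transmit(payload: bytes) -> bytes:
    # The network layer (LoRaWAN, cellular, ...): moves bytes, nothing more.
    return payload

def ingest(payload: bytes) -> float:
    # The application layer: decode the payload, then store it so
    # analytics and historical queries have something to work with.
    value = int.from_bytes(payload, "big") / 10
    database.append(value)
    return value

def visualize() -> str:
    # The user-facing layer: data visualization and interaction.
    return f"latest reading: {database[-1]} °C"

ingest(transmit(sense()))
print(visualize())  # latest reading: 23.5 °C
```

A real solution replaces each of these functions with hardware, radio networks, databases, and UI code, but the shape – device, network, application, storage, visualization – is the same.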