Yes, we do. That’s really about the device itself. So, if we had an application that had sensors for a hazardous environment, we’d have to support devices with Hazardous Area certifications. But from our perspective, it doesn’t really matter that much because Prism is not in the hazardous environment. Your solution is almost always in the cloud. The exception would be on-premise deployments or certain edge computing hardware, and of course options generally exist for those that have Hazardous Area certifications, too.
Device management is the ability to add a device to whatever kind of network or connectivity that device is using to communicate. That process is usually called provisioning. You are provisioning a device on a network. That could be a lot of different things, but one example of that would be provisioning a device on a cellular network. That is how you set that device up so that it can connect to and send data over the cellular network. That’s the first part of it. The second part of it is when you have large quantities of these devices, you need some way to manage those devices. You need some way to detect that a device is not working anymore. If it’s battery-operated, you have to check the battery’s charge to make sure it’s not running out. And if the devices can support “firmware updates over the air” (FUOTA), you need a way to manage that. These are all things that are typically done as a part of device management. It doesn’t necessarily have anything to do with the device, nor with the application – it has to do with how you add, monitor, troubleshoot, replace and ultimately remove the physical devices from the communications network they’re communicating on. And, depending on the particular requirements, Prism can either include device management as part of the application or it can integrate with third-party device management platforms when appropriate.
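One of those monitoring tasks can be sketched in a few lines. This is a minimal, illustrative example, assuming a simple in-memory registry of devices; none of these names come from a real device-management platform:

```python
import time

# Minimal sketch of one device-management task: detecting devices that have
# stopped reporting, and flagging low batteries. Thresholds are illustrative.

OFFLINE_AFTER_SECONDS = 3600  # consider a device offline after 1 hour of silence
LOW_BATTERY_PERCENT = 20

def health_check(devices, now=None):
    """Return (offline_ids, low_battery_ids).

    `devices` maps device ID -> {"last_seen": unix timestamp, "battery": percent}.
    """
    now = now if now is not None else time.time()
    offline = [d for d, info in devices.items()
               if now - info["last_seen"] > OFFLINE_AFTER_SECONDS]
    low_battery = [d for d, info in devices.items()
                   if info.get("battery", 100) < LOW_BATTERY_PERCENT]
    return offline, low_battery

fleet = {
    "sensor-001": {"last_seen": time.time() - 120,  "battery": 87},
    "sensor-002": {"last_seen": time.time() - 7200, "battery": 55},  # silent for 2 h
    "sensor-003": {"last_seen": time.time() - 60,   "battery": 12},  # battery running low
}
offline, low = health_check(fleet)
print(offline)  # ['sensor-002']
print(low)      # ['sensor-003']
```

In a real deployment, `last_seen` would be updated by the network server every time a device uplinks, and the offline threshold would depend on how often that class of device is expected to report.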
Think about the “edge” as being the physical location of wherever the IoT solution is happening remotely. So, it’s not where the user is. It’s where the remote thing is. For example, if you are monitoring a generator in a factory, the “edge” is where that generator is in that factory. So, when you hear “edge computing,” it’s basically saying “instead of the computing going on in the cloud somewhere, the computing is going on at the site of the thing.” The reason you might want to do this is because: a) you may not have time for the two-way communications to make the round-trip to the cloud and back (that delay is called “latency”), b) it might be in a remote spot where a connection to the internet is spotty or non-existent, and/or c) the nature of the remote sensing can benefit from doing some of the work locally to reduce the bandwidth needed between there and the cloud. It moves the computing to the place it’s most needed. That speeds it up, and it means the system doesn’t have to be connected to the internet, or can use less bandwidth, to function.
The “edge” could be literally on the device itself, or it could be some type of a local computing appliance or other computer that is located near or within the place where the data is being collected.
Typically, edge computing is used when you need to make very quick decisions – when you don’t have time to send measurement data to the cloud, have the cloud do the analytics, and then send the command back to the device. That delay is called latency. Even if that process is very, very quick, there is a fraction of a second that it takes to send those messages back and forth. And in certain applications you have to make those decisions very quickly.
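To make the latency point concrete, here is a small sketch contrasting an on-device decision with a cloud round trip. The 150 ms round-trip figure and the pressure rule are invented for illustration, not measurements from any real system:

```python
# Sketch contrasting a local (edge) decision with a cloud round trip.
# Both timing figures below are assumed values for illustration only.

CLOUD_ROUND_TRIP_S = 0.150   # assumed time to send data up and get a command back
EDGE_DECISION_S    = 0.001   # assumed time to decide on the device itself

def decide_locally(pressure_psi, limit=100):
    # The rule runs at the edge, so no network hop is needed before acting.
    return "SHUT_VALVE" if pressure_psi > limit else "OK"

print(decide_locally(142))  # SHUT_VALVE
print(f"saved ~{(CLOUD_ROUND_TRIP_S - EDGE_DECISION_S) * 1000:.0f} ms per decision")
```

For something like an emergency shutoff, that saved fraction of a second per decision is exactly the latency problem edge computing is solving.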
Another case for edge computing is when you have very limited bandwidth. For example, suppose you had some type of a sensor – like a camera that was collecting a lot of data for video that requires high bandwidth – but you were using the edge computing to count the cars that were in the field of view of the camera, and all you cared about was the number of cars. That’s a case of edge computing taking very high bandwidth information – the video – and turning it into very low bandwidth by just relaying the message, “there are seven cars now.”
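The “count the cars” idea reduces to this shape: the edge device consumes a heavy data stream and emits only a tiny summary. In this sketch, `detect_cars` is a stand-in for a real computer-vision model, and the string “frame” stands in for megabytes of pixels:

```python
# Sketch of edge-side bandwidth reduction: heavy input in, tiny message out.
# `detect_cars` is a placeholder for a real object-detection model.

def detect_cars(frame):
    # A real system would run an object detector on the image here.
    return frame.count("car")

def summarize(frame):
    """Turn a high-bandwidth frame into a low-bandwidth message."""
    return {"type": "car_count", "value": detect_cars(frame)}

frame = "car car truck car"   # stand-in for megabytes of video data
print(summarize(frame))        # {'type': 'car_count', 'value': 3}
```

The message that actually crosses the network is a few bytes, no matter how much video the camera captured to produce it.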
The last of the big three reasons to use edge computing is if you need the system to be able to operate without being connected to the cloud. This is less about being “cloud-optional” and more about being fault-tolerant. If you have a system that can’t afford to stop working when your internet connection goes down, then edge computing is one solution for that: it can make local decisions, or continue to collect data, until the internet connection comes back up again.
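That “keep collecting until the connection comes back” behavior is often called store-and-forward. Here is a minimal sketch of the pattern, with all names invented for illustration:

```python
from collections import deque

# Sketch of store-and-forward fault tolerance: readings queue up locally
# while the uplink is down, then flush in order when it returns.

class EdgeBuffer:
    def __init__(self):
        self.pending = deque()

    def record(self, reading, uplink_ok, send):
        self.pending.append(reading)
        if uplink_ok:
            # Connection is up: drain everything that accumulated, oldest first.
            while self.pending:
                send(self.pending.popleft())

sent = []
buf = EdgeBuffer()
buf.record({"temp": 21.5}, uplink_ok=False, send=sent.append)  # down: queued
buf.record({"temp": 21.7}, uplink_ok=False, send=sent.append)  # still down: queued
buf.record({"temp": 21.9}, uplink_ok=True,  send=sent.append)  # back up: all three flush
print(len(sent))  # 3
```

A production version would also bound the queue and persist it to local storage so a power cycle doesn’t lose the backlog, but the core idea is the same.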
Those are typically the big three: 1) to solve latency (need it to be fast), 2) when you have limited bandwidth (does the high bandwidth stuff locally and converts it into low bandwidth information), or 3) when you have spotty internet connectivity at the site where it’s collecting data.
A device that has the only (or primary) job of routing data from one technology to another. So, for example, a LoRaWAN gateway is one that sends and receives messages over the LoRa wireless technology and then relays that information back to the cloud, possibly using cellular technology, or maybe just a direct connection to the Internet. And some gateways can have additional functionality beyond that, like computer resources for edge computing, but that’s the basic gist of what a gateway does—it translates data between different networking technologies.
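The core of that translation job can be sketched as taking a packet in one technology’s framing and re-emitting it in another’s. The packet fields below are simplified illustrations, not the real LoRaWAN frame format, and the JSON shape is invented for the example:

```python
import json

# Sketch of a gateway's core job: translate between networking technologies.
# The uplink fields here are simplified stand-ins for a real LoRaWAN frame.

def lora_to_cloud(packet):
    """Re-emit a (simplified) LoRaWAN uplink as a JSON message for the cloud."""
    return json.dumps({
        "device": packet["dev_addr"],
        "payload": packet["payload"].hex(),
        "rssi": packet["rssi"],
    })

uplink = {"dev_addr": "26011F42", "payload": bytes([0x01, 0x64]), "rssi": -112}
print(lora_to_cloud(uplink))
```

A real gateway does this continuously in both directions – radio packets in, internet messages out, and vice versa – without needing to understand what the payload means.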
It is when you use video input from a camera (which could be visible spectrum or it could be outside the visible spectrum, such as an infrared camera), and the purpose of the camera is not to send a live video stream back to the application, but instead to capture the video feed, use some type of edge computing (often machine learning) to interpret the information that is being captured by the camera, and then return small bits of sensor data instead of sending the whole video feed. For example, a visible spectrum camera is watching a parking lot and we want to know how many empty parking spaces there are in that parking lot. One camera could potentially see the entire parking lot and allow us to quickly count and transmit the number of cars or empty spaces in the parking lot instead of putting a sensor on every single parking space. Another example would be an infrared camera that is pointed at a particular piece of machinery. It could watch for hot spots on that machine and only transmit an alarm if that machine hit a temperature threshold that was set in the application. Instead of putting a bunch of temperature sensors all over this machine, one infrared camera can detect the temperature at all 73 places on the machine and send an alert when any one of those 73 places gets too hot.
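The infrared example boils down to a threshold scan over per-spot temperatures. This sketch invents its readings and spot names; in a real system they would come from the camera’s thermal image, and the threshold would come from the application’s configuration:

```python
# Sketch of the infrared-camera example: scan every monitored spot and
# only transmit when one crosses the application-configured threshold.

ALARM_THRESHOLD_C = 90.0  # would be set in the application

def check_hotspots(readings):
    """Return alerts for any monitored spot over the threshold."""
    return [{"spot": spot, "temp_c": temp}
            for spot, temp in readings.items()
            if temp > ALARM_THRESHOLD_C]

# One frame's worth of per-spot temperatures (73 spots in the example above):
readings = {"bearing_1": 71.2, "bearing_2": 96.4, "housing": 65.0}
alerts = check_hotspots(readings)
print(alerts)  # [{'spot': 'bearing_2', 'temp_c': 96.4}]
# When `alerts` is empty, nothing is transmitted at all.
```

That last comment is the whole point: most frames produce zero network traffic, versus a continuous stream of readings from dozens of individual temperature sensors.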
Bluetooth Low Energy (BLE) is a relatively short-range technology that is universal (works on the same frequency anywhere in the world), very accessible (every smartphone has BLE capability), and designed for transmitting relatively small amounts of data. Despite the name, BLE is not the same as Bluetooth, and the two are easily confused. They are related to each other, but Bluetooth has enough bandwidth that you can send high quality audio over it to your earbuds, while BLE is designed for low power devices with small amounts of data. For certain applications where you don’t care that much about range, and/or you want to be able to connect directly to an end user’s smartphone (one example of this is cloudless IoT applications), BLE is often a good choice because it has that universal accessibility combined with an ultra-low power capability that no other technology really has, meaning you can build battery-powered BLE devices that last for years on a battery. Another place that we would tend to see BLE used is as a secondary wireless technology on a device that may have a LoRaWAN or a cellular radio as its primary communication link, using BLE as a way to configure or provision the device (because that can be done using any smartphone). It’s not super common, but it does happen.
LoRa stands for “long range,” so the word LoRaWAN literally means “long-range wide area network.” It is a low-power wide area network technology that uses license-free frequency bands to transmit small amounts of data – either over long distances (ten miles or more) or through obstructions like buildings, typically being able to penetrate several concrete walls. It’s used in a lot of applications, including agriculture, smart buildings, and smart cities, where you need either the long-range or the deep building penetration capability to wirelessly collect data from sensors and/or wirelessly send commands to actuators. It is often used in applications that need to be very low power, like battery-powered devices that last five years or more on a battery, or energy-harvested devices, like devices that run on solar power virtually forever. It’s a wireless technology that can either be operated on a private network or on any one of a number of public networks that provide LoRaWAN connectivity for a monthly fee. LoRaWAN is one of the wireless technologies that we use at ObjectSpectrum, and we often include network management services for private LoRaWAN customer networks as part of the solution.
ObjectSpectrum supports just about any third-party sensor and device. We have the ability to rapidly integrate new devices. We aren’t limited by any particular list or kind of sensor or device. We figure out what the best solution is for what you’re trying to do, and we are completely agnostic when it comes to the device or any other technology it uses. We can integrate devices from different manufacturers with different technologies so that we can use the best in class for your specific solution’s needs. As a general rule, we try to use off-the-shelf sensors and devices whenever possible, but will work with our partners to develop custom hardware when that’s needed.