What is “edge computing” and when is it needed?

Think of the “edge” as the physical location where the IoT solution is happening remotely. It’s not where the user is; it’s where the remote thing is. For example, if you are monitoring a generator in a factory, the “edge” is where that generator sits in that factory. So when you hear “edge computing,” it basically means “instead of the computing happening in the cloud somewhere, the computing happens at the site of the thing.” You might want to do this because: a) you may not have time for the two-way communications to make the round trip to the cloud and back (that delay is called “latency”), b) the site might be a remote spot where the internet connection is spotty or non-existent, and/or c) the remote sensing can benefit from doing some of the work locally to reduce the bandwidth needed between the site and the cloud. Edge computing moves the computing to the place it’s most needed. That speeds things up, and it means the system can function without an internet connection, or with less bandwidth.

The “edge” could be literally on the device itself, or it could be some type of a local computing appliance or other computer that is located near or within the place where the data is being collected.

Typically, edge computing is used when you need to make very quick decisions – when you don’t have time to send measurement data to the cloud, have the cloud do the analytics, and then send the command back to the device. That delay is called latency. Even if the process is very, very quick, it takes a fraction of a second to send those messages back and forth, and in certain applications you have to make decisions faster than that.
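To make the latency point concrete, here is a toy sketch of a local control loop: the decision (shut off an overheating generator) is made on the device itself, with no cloud round trip in the critical path. The threshold and function names are purely illustrative, not from any real product.

```python
# Illustrative only: an edge device reacting to a sensor reading locally,
# instead of waiting for a cloud round trip before acting.

SHUTDOWN_TEMP_C = 95.0  # hypothetical safety threshold

def on_reading(temp_c, shut_off):
    """Called for each local sensor reading; acts immediately at the edge."""
    if temp_c >= SHUTDOWN_TEMP_C:
        shut_off()          # local actuation: milliseconds, not a round trip
        return "shutdown"
    return "ok"

actions = []
on_reading(80.0, shut_off=lambda: actions.append("off"))  # normal: no action
on_reading(96.0, shut_off=lambda: actions.append("off"))  # too hot: acts at once
```

The same readings could still be forwarded to the cloud afterward for logging and analytics; the point is that the time-critical decision doesn’t wait on that trip.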

Another case for edge computing is when you have very limited bandwidth. For example, suppose you had a sensor – like a camera collecting high-bandwidth video – but you were using edge computing to count the cars in the camera’s field of view, and all you cared about was the number of cars. That’s a case of edge computing taking very high-bandwidth information – the video – and turning it into very low-bandwidth information by relaying just the message, “there are seven cars now.”
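The car-counting example above can be sketched in a few lines. This is a minimal stand-in, not a real system: `detect_cars` substitutes a list of labels for an actual vision model, and the “frame” here is not real video.

```python
# Minimal sketch of the edge data-reduction pattern: turn a high-bandwidth
# input (video frames) into a tiny summary message for the cloud.
# detect_cars and the frame format are hypothetical stand-ins.

import json

def detect_cars(frame):
    """Stand-in for a local vision model: here a frame is just detected labels."""
    return sum(1 for obj in frame if obj == "car")

def summarize(frame):
    """Reduce a whole frame to a few bytes of JSON for the uplink."""
    return json.dumps({"car_count": detect_cars(frame)})

# Instead of streaming megabytes of raw video, the edge device sends only this:
frame = ["car", "truck", "car", "person", "car"]
message = summarize(frame)  # a short JSON string with just the count
```

All the heavy processing stays local; only the answer crosses the limited link.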

The last of the big three reasons to use edge computing is when you need the system to operate without being connected to the cloud. This is less about being “cloud-optional” and more about being fault-tolerant. If you have a system that you can’t afford to have stop working when your internet connection goes down, edge computing is one solution: it can make local decisions, or continue to collect data until the internet connection comes back up again.
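The “keep collecting until the connection comes back” behavior is often called store-and-forward. Here is a minimal sketch of it, assuming a hypothetical `send` callback for the uplink and an arbitrary buffer capacity.

```python
# Store-and-forward sketch: buffer readings locally while the uplink is down,
# then flush the backlog when connectivity returns. Names and capacity are
# illustrative, not from any particular IoT platform.

from collections import deque

class EdgeBuffer:
    def __init__(self, capacity=10_000):
        # With maxlen set, the oldest readings are dropped first if the
        # outage outlasts the buffer.
        self.backlog = deque(maxlen=capacity)

    def record(self, reading, online, send):
        """Send immediately when online; otherwise keep the reading locally."""
        if online:
            while self.backlog:              # flush anything queued offline
                send(self.backlog.popleft())
            send(reading)
        else:
            self.backlog.append(reading)

sent = []
buf = EdgeBuffer()
buf.record({"temp": 71}, online=False, send=sent.append)  # outage: buffered
buf.record({"temp": 72}, online=False, send=sent.append)  # still buffered
buf.record({"temp": 73}, online=True, send=sent.append)   # back up: all three go out
```

A real deployment would also persist the backlog to disk so a power cycle during the outage doesn’t lose it, but the in-memory version shows the shape of the idea.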

Those are typically the big three: 1) to solve latency (you need it to be fast), 2) when you have limited bandwidth (do the high-bandwidth work locally and convert it into low-bandwidth information), or 3) when you have spotty internet connectivity at the site where the data is being collected.
