Edge computing may hold the key to a faster, cheaper, more sustainable internet
An alternative to cloud computing promises smaller carbon footprint, reduced latency
Most of the time, storing data in the cloud—whether that's our own personal photos or Netflix's stockpile of videos—gives us what feels like instant access to digital information. But a developing approach promises a faster, more environmentally sustainable means of managing our ever-expanding digital data.
Blesson Varghese is a senior lecturer and Principal Investigator at the Edge Computing Hub at Queen's University Belfast. He explains that with edge computing, some services of an application are processed not in the cloud but on the "edge of the network," which could mean, say, your home router, small, dispersed "micro-data centres", or even the device itself.
Varghese spoke with Spark host Nora Young about the future of edge computing, and the role it can play in the Internet of Things. Here is part of their conversation.
Edge computing, as I understand it, is often talked about in relation to the Internet of Things. So can you paint me a bit of a picture about how edge computing would fit into this evolving world of the Internet of Things?
I think it is Forbes that predicts that by 2025, there'll be over 50 billion devices connected to the Internet, and trillions and trillions of gigabytes of data being produced. So one of the key challenges that we need to address is: how do we cope better with the number of devices that are connected to the internet?
A lot of these IoT devices and sensors actually require real-time responses. So they need computing to be as close to them as possible. So there is what's called the latency argument. Also, these devices and sensors are capturing sensitive data, much more than what's been done in the past. So how can we make sure that this data is processed, maybe, within one's own legal jurisdiction? So we call that a privacy argument. And then there's also the element of reducing the volume of data going to the cloud. So if you have more gadgets, you have huge amounts of data going to the cloud, and that is expensive. So we call that a bandwidth argument. And there's also an energy argument, which is: can we lower our reliance on power-hungry and concentrated cloud data centres? Edge computing is offering us more localized computing, and it's going to help us with the expansion, latency, privacy, bandwidth, and energy arguments.
You can certainly imagine how some applications might need it more than others. If you have a self-driving car, you don't want there to be any latency, for example, or if you have a health monitor that's keeping track of all of your vital signs, you might be more concerned about privacy than with certain other applications...
You're absolutely right over there. There are basically two types of applications that I think will benefit from edge computing. The first class of applications is what we call 'edge-native'. And edge-native applications can really only come into existence if the edge is available for processing. That would be autonomous cars, for example. Because if you rely on the cloud, you're talking about tens or hundreds of milliseconds of latency. And that can prove fatal. So you need the edge to help process data. And then you have another bunch of applications, which are called 'edge-accelerated'. These applications are, I would say, already in existence today. And they make use of the cloud. But we can perhaps improve their performance by making use of the edge for the variety of reasons that I've just mentioned: latency, privacy, energy.
There's often talk of cannibalization when a new and innovative technology emerges. So is edge computing a direct competitor to the cloud? Could it even make the cloud obsolete? Or are they performing essentially different types of functions?
My initial thinking about five or six years ago was that the cloud would become obsolete. But the more I worked in this particular area of edge computing, the more we realized that the cloud cannot become obsolete: the cloud is still an essential component of the internet and the way the internet works. So in short, the answer is no. But of course, it's a nuanced no.
The cloud is still going to be there, but we're going to have the opportunity to process data outside the cloud data centre before it gets into the cloud. So in that sense, we are going to see a shift in the way the internet works behind the curtains, rather than any significant change to an individual user.
What are some of the challenges?
Some of the challenges involve security. How do we know that a service that's been brought to the edge is actually a legitimate service? How do we know that data that's being processed at the edge is being done in a trustworthy manner? How do we know that the data that has to go to the cloud is actually being well taken care of at the edge? How do we know that the edge is actually not degrading my performance?
So from a technological point of view, security is a challenge. But from an ethical point of view, you're probably solving some of those privacy-related challenges. So, if you think about the multiple instances in the last decade where user data has been unethically harvested and used, the eventual outcome has been a setting of mass 'data xenophobia'. So the edge actually gives us an opportunity to create trust and accountability in the internet, because you're processing data locally. So from that perspective, we are actually creating a more sustainable, more friendly, and more democratized internet, where there are more data-controlling points or data-processing points. From the technical point of view, the challenges here are more related to being able to create that security in an automated manner. Because when you have so many millions of potential data points that are going to process data, we cannot send humans to make sure that the data is being processed safely.
This interview has been edited for length and clarity.
Written and produced by Nora Young.