Edge computing harnesses growing on-device computing capability to deliver near-real-time insights and predictive analysis. This increased analytics power in edge devices can drive innovation, boosting efficiency and value. It also poses crucial strategic questions: with all this processing power available, how do you deploy the workloads that perform these kinds of analyses? How do you make better use of the intelligence embedded in devices to improve operational processes for your staff, customers and business? To derive the most value from these tools, significant amounts of computation must move to the edge.
Edge computing is a networking concept that seeks to bring computation as close to the data source as possible to reduce latency and bandwidth utilization. Simply put, edge computing means running fewer processes in the cloud and moving those operations to local locations, such as a user’s computer, an Internet of Things (IoT) device, or an edge server. Bringing computation to the edge of the network minimizes the amount of long-distance communication that has to occur between a client and a server.
At its most basic level, edge computing brings computation and data storage closer to the devices where the data is gathered, rather than relying on a central location that can be thousands of miles away. This is done so that data, especially real-time data, does not suffer latency issues that can degrade an application’s performance. In addition, companies can save money by having the processing done locally, reducing the amount of data that needs to be sent to a centralized or cloud-based location.
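The idea can be sketched in a few lines of code: process raw readings on the device itself and forward only a compact summary upstream. This is a minimal illustration, not a real deployment; the names `summarize` and `send_to_cloud` are hypothetical stand-ins.

```python
# Process raw sensor readings locally on the edge device and send only
# a small summary to the central cloud service, saving bandwidth.

def summarize(readings):
    """Reduce a batch of raw readings to a compact summary, computed locally."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

def send_to_cloud(payload):
    """Stand-in for an upload; a real device would POST this summary."""
    print(f"uploading {payload}")

raw = [21.4, 21.6, 22.1, 21.9, 35.0, 21.5]  # one batch of temperature samples
summary = summarize(raw)                     # computed on the edge device
send_to_cloud(summary)                       # only four numbers leave the device
```

Instead of streaming every sample to the cloud, the device uploads a handful of numbers per batch; the same pattern scales from simple sensors to cameras and gateways.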
Benefits of edge computing
Whenever a client computer has to communicate with a distant server, that round trip introduces a delay.
For example, two coworkers in the same office chatting over an IM platform could experience a considerable delay because each message has to be routed out of the building, sent to a server somewhere around the globe, and brought back before it appears on the receiver’s screen. When that process is moved to the edge, and the organization’s internal router handles the transfer of intra-office chats, that noticeable pause disappears.
Similarly, when users of all sorts of web applications run into processes that need to communicate with an external server, delays will occur. The duration of these delays will vary with the user’s available bandwidth and the server’s location, but by bringing more processes to the edge of the network, these delays can be avoided altogether.
Privacy and security concerns
As with other emerging innovations, however, solving one problem generates others. From a security point of view, data at the edge can be problematic, especially when it is handled by various devices that may not be as secure as a centralized or cloud-based system. As the number of IoT devices grows, it is imperative that IT understand the potential security problems surrounding these devices and ensure that they can be protected. This involves ensuring data is encrypted and using appropriate access-control methods and even VPN tunneling.
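One of the protections mentioned above, access control, can be sketched with standard-library tools: the edge device signs each payload with a keyed hash so the server can reject tampered or unauthorized data. This is a simplified illustration assuming a pre-shared per-device key; real deployments would also encrypt the channel (e.g. with TLS) and rotate keys.

```python
# Authenticate edge payloads with HMAC-SHA256 so the receiving server can
# verify that data came from a provisioned device and was not modified.
import hmac
import hashlib

SHARED_KEY = b"per-device-secret"  # hypothetical key provisioned to one device

def sign(payload: bytes) -> str:
    """Edge side: compute an HMAC-SHA256 tag over the payload."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    """Server side: constant-time check that the tag matches the payload."""
    return hmac.compare_digest(sign(payload), tag)

message = b'{"sensor": "cam-7", "motion": true}'
tag = sign(message)
print(verify(message, tag))                 # True: authentic payload accepted
print(verify(b'{"tampered": 1}', tag))      # False: modified payload rejected
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences during verification.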
What are the drawbacks of edge computing?
One downside of edge computing is that it increases the number of attack vectors. Adding more ‘smart’ devices to the mix, such as edge servers and IoT devices with robust built-in computers, gives malicious actors new opportunities to compromise those devices.
A further downside of edge computing is that it requires more local hardware. For example, while an IoT camera needs only a basic integrated computer to send its raw video data to a web server, it needs a much more sophisticated computer with more processing power to run its own motion-detection algorithms. However, the falling cost of hardware is making it cheaper to build smarter devices.