Over the last decade, user expectations have evolved rapidly: we expect services to be available anytime, anywhere, and on any device. If you are watching Netflix, you want to be able to watch it on your television, on your tablet, or on your phone. Whether you are in California or Canada, you still want to be able to watch your favorite Netflix shows at any time. How is Netflix able to fulfill these expectations? Simple: their services are built in the cloud. The internet boom and the falling cost of computing and storage gave rise to the cloud revolution, and companies started using networks of remote servers hosted in large data centers (the cloud) to store, manage, and process data, rather than relying on on-premise infrastructure in the form of local servers.
Netflix also knows which shows you are watching and for how long, so that it can curate its catalog according to your preferences. That data comes from the smart device(s) you use to watch the show, and Gartner predicts that the number of connected things worldwide will grow from 8.4 billion in 2017 to 20.4 billion by 2020. These Internet of Things (IoT) and smart devices are generating humongous amounts of data. Imagine if all of Netflix’s 125 million subscribers were driving around in autonomous cars, each one generating up to 1 GB of data every second. How would all of that data be processed, and how can decisions be made in real time with minimum latency?
The Need for Edge Computing in Automotive
The increasing number of devices running software and generating data will require a link to the cloud to store and process the massive amounts of data. The power and bandwidth required to send that data to the cloud are immense, and the space needed to store it is a huge challenge. For a Netflix show or a video game, a millisecond of lag is not a big deal. But for an autonomous car, a time lag is a whole other thing. A lapse of even a fraction of a second is the difference between crashing into another car and avoiding a collision. It is literally a matter of life and death. On average, it takes roughly 100 milliseconds for large amounts of data to travel back and forth from the cloud. Autonomous vehicles need information from their surrounding environment and from the cloud to make quick decisions and transport their passengers and cargo as quickly and safely as possible. Thus, they are constantly sensing and sending data on weather, road conditions, location, and surrounding vehicles. For a self-driving car generating 1 GB of data per second, it is impractical to send even a fraction of those terabytes of data to a centralized cloud or data center for interpretation because of the processing latency. So how do we reduce latency in the communication of critical data?
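The figures quoted above can be put together in a quick back-of-the-envelope calculation. The sketch below is illustrative only: the 1 GB/s data rate and 100 ms round trip come from this article, while the vehicle speed and driving hours are assumptions chosen for the example.

```python
# Back-of-the-envelope latency and data-volume budget for an autonomous car.
# Data rate and round-trip time are the figures quoted in the text;
# speed and hours of driving are illustrative assumptions.

DATA_RATE_BYTES_PER_S = 1 * 1024**3   # ~1 GB of sensor data per second
CLOUD_ROUND_TRIP_S = 0.100            # ~100 ms cloud round trip
SPEED_M_PER_S = 100 * 1000 / 3600     # an assumed highway speed of 100 km/h

# Distance the car covers while waiting for one cloud round trip:
blind_distance_m = SPEED_M_PER_S * CLOUD_ROUND_TRIP_S
print(f"Distance covered per round trip: {blind_distance_m:.2f} m")

# Sensor data produced in an assumed 8-hour driving day:
daily_data_gb = DATA_RATE_BYTES_PER_S * 8 * 3600 / 1024**3
print(f"Sensor data per 8-hour day: {daily_data_gb:,.0f} GB")
```

At highway speed, a single cloud round trip costs the car nearly three meters of travel before any answer arrives, and a day of driving yields tens of terabytes, which is why shipping everything to the cloud is a non-starter.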
The answer is edge computing: the method of processing data at the edge of a network, as close to the data source as possible, instead of in a massive centralized data warehouse (the cloud). Edge computing reduces the strain on clogged networks and provides better reliability by reducing the time lag between processing data and delivering it back to the vehicle. While the cloud is a necessity for certain tasks, autonomous cars require a more decentralized approach. For example, cameras can be given the power to analyze their own video feeds, determine which frames of a video require attention, and send only that data to the server. This decentralized architecture reduces network latency during data transfer, since data no longer has to traverse the network to the cloud for immediate processing; the rest can be sent to the cloud at the end of the day. In edge computing, devices are still connected to the internet and can still tap into cloud computing services. But they typically have more onboard computing power than in the past and can do more on their own. The end result is a flatter network with fewer hops, enabling more predictable latency. Thanks to a combination of enormous amounts of sensor data, critical local processing power, and an equally essential need to connect back to more advanced data analysis tools in the cloud, autonomous cars are driving the development of advanced edge computing. Edge computing deployments are also ideal for connected cars, which are constantly moving and may have poor connectivity in some areas, lacking a constant connection to the cloud.
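The camera example above can be sketched in a few lines. This is a minimal illustration of the idea, not a production pipeline: the per-frame motion scores and the threshold are hypothetical stand-ins for whatever analysis the camera actually runs on-device.

```python
# Sketch of edge-side frame filtering: score each frame locally and
# forward only the frames that cross an "interesting" threshold, so most
# of the video never leaves the device. Scores and threshold are
# illustrative assumptions.

from typing import Iterable, List

def frames_to_upload(motion_scores: Iterable[float],
                     threshold: float = 0.5) -> List[int]:
    """Return indices of frames whose local motion score exceeds the
    threshold; everything else stays on the edge device."""
    return [i for i, score in enumerate(motion_scores) if score > threshold]

# Ten frames; only two show significant motion, so only two are sent on.
scores = [0.1, 0.2, 0.9, 0.1, 0.0, 0.7, 0.3, 0.2, 0.1, 0.0]
print(frames_to_upload(scores))  # -> [2, 5]
```

The payoff is in the ratio: here eight of ten frames never consume network bandwidth, and the same filtering logic scales to any sensor stream where most readings are routine.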
Why Should Edge Computing Be Intelligent?
A big driver of edge computing will likely be the rapid development of artificial intelligence, which can require lots of processing power to be available immediately. The safety of connected and autonomous cars is critical. To be safe on the road, autonomous vehicles have to ensure that they are keeping to their lanes, recognizing and stopping at red lights and stop signs, and identifying pedestrians and bicyclists and yielding to them. All of that requires the cars to crunch massive amounts of sensory data in real time, every second of every ride. Cloud services are fast, but once network latencies are factored in, they are not fast enough to respond in real time to current driving conditions or immediate dangers. Edge computing pushes computing power to the edges of a network, implementing data analytics close to the end devices. So, instead of machines like autonomous cars or smart traffic lights needing to call on the cloud for instructions or data analysis, they can perform some analytics themselves on streaming data and communicate with other devices to accomplish tasks. Thus, edge computing can also speed up the analysis process, allowing decision makers to act on insights faster than before.
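The split described above, deciding time-critical events locally while deferring everything else to the cloud, can be sketched as a simple routing rule. The event names and the single "brake" response are hypothetical placeholders for a real vehicle's event taxonomy and control actions.

```python
# Sketch of an edge-first control loop: safety-critical events get an
# immediate local decision with no network round trip; routine telemetry
# is queued for later upload to the cloud. Event names and the response
# are illustrative assumptions.

CRITICAL = {"pedestrian_detected", "red_light", "obstacle_ahead"}

def route_event(event: str, cloud_queue: list) -> str:
    if event in CRITICAL:
        return "brake"          # decided on-device, zero network latency
    cloud_queue.append(event)   # deferred: batched for the cloud later
    return "continue"

queue: list = []
print(route_event("pedestrian_detected", queue))  # decided locally
print(route_event("engine_temp_reading", queue))  # queued for the cloud
```

The design choice is the point: the latency-sensitive path never touches the network, while the cloud still receives everything it needs for long-term analytics.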
Since machine learning algorithms run repeatedly over a continually refreshed data source, they can also be deployed on servers closer to where the data is collected and aggregated, making processed results available sooner for decisions or feeding a real-time feedback loop with automation systems. Each autonomous vehicle will need enough computing power to become a ‘data center on wheels’. Consider Aptiv’s smart vehicle architecture, which has three layers of protection covering power failure, network failure, and even compute failure. It can dynamically re-route power, network traffic, and even decision making to bring an autonomous car to a safe stop. Add to this an intelligent system with edge computing, providing onboard diagnostics with predictive analytics, and you have a system that can grow and evolve in features over its lifecycle.
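Onboard predictive diagnostics of the kind described above can be as simple as watching a sensor stream for drift. The sketch below uses a rolling mean as a stand-in for a real predictive model; the window size, the 90-degree limit, and the temperature readings are all illustrative assumptions.

```python
# Hedged sketch of on-board predictive diagnostics: a rolling mean over a
# sensor stream flags drift locally, so the warning does not wait on a
# cloud round trip. Window size, limit, and readings are assumptions.

from collections import deque

class DriftDetector:
    def __init__(self, window: int = 5, limit: float = 90.0):
        self.readings = deque(maxlen=window)  # only the recent window is kept
        self.limit = limit

    def update(self, value: float) -> bool:
        """Add a reading; return True if the rolling mean exceeds the limit."""
        self.readings.append(value)
        return sum(self.readings) / len(self.readings) > self.limit

# A temperature stream that slowly climbs past the limit:
detector = DriftDetector(window=3, limit=90.0)
alerts = [detector.update(v) for v in [85, 88, 92, 95, 97]]
print(alerts)  # -> [False, False, False, True, True]
```

Because the detector holds only a small window of state, it fits comfortably on an edge device, and the alert it raises can feed exactly the kind of real-time feedback loop described above.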
In the age of instantaneous information flow, data analytics is required to make critical decisions. An autonomous car can navigate on its own because of immediate data input that tells it where to go or when to stop. The interdependency between human and machine means that real-time information transfer is essential. The proliferation of artificial intelligence and IoT devices is driving a shift of certain types of data processing from the cloud to the ‘edge’ of a sensor or device, reducing the latency of data transfer by ensuring that data is processed quickly, reliably, and securely. Edge computing is meant to complement the cloud, not replace it entirely. Figuring out the right balance between how much processing can be done in the cloud and how much should be done on edge devices will become one of the most important decisions for technology providers.
As the auto industry is changed by technological and economic currents, OEMs and Tier-1 manufacturers will need to partner with technological specialists to thrive in the era of the software defined car. Movimento’s expertise is rooted in our background as an automotive company. This has allowed us to create the technological platform that underpins the future of the software driven and self-driven car. Connect with us today to learn more about how we can work together.