The edge is going to change the way we interact with each other and the world.
Thanks to edge computing, the world is about to look much different.
Within a decade, the edge will boast more computing power—and produce far more data—than the cloud does today, says Lin Nease, HPE Fellow and chief technologist for IoT.
The edge is where the Internet of Things, artificial intelligence, and ultrafast 5G networks are converging. And that will change our lives dramatically over the next few years.
Here are six major trends we can expect to see over the coming decade.
1. Every large facility will have its own edge data center
By the year 2025, the number of connected devices in the wild is expected to exceed 56 billion, according to IDC. And as IoT sensors proliferate through public and private spaces, the volume of data they produce will grow exponentially.
Organizations will need to process that data locally so they can act on it in real time, Nease says. Instead of shuttling that data to the cloud, they’ll be operating their own mini data centers on-site. Modular, self-contained units like the HPE Edge Center, which can be placed wherever needed, will become common across a wide range of industries.
“Within three years, all operations facilities, including retail stores, hospitals, warehouses—any place the physical operations of the company occur—will have data room capabilities to ingest data from cameras, microphones, and environmental sensors,” says Nease. “These little edge clouds will resemble the private clouds companies already have in their big data centers, but with less than a dozen servers.”
And instead of deploying increasingly intelligent end devices, many organizations will opt for inexpensive sensors that simply collect data, which is then processed locally in their small private clouds.
“The low-cost approach is to buy cheap sensors and have software running somewhere in the facility that can do pattern recognition or infrared sensing,” Nease says. “If you’re a large retailer, for example, you will do this in your store in about three years. You will have to if you want to compete.”
2. Smarter spaces will enable frictionless transactions
Data captured by edge devices in public spaces will help provide context around who people are and what they came there to do. That will change both how these spaces are designed and how people interact with them, says Partha Narasimhan, CTO at Aruba, a Hewlett Packard Enterprise company.
“Today, some of the context is split between the digital world and the physical world,” he says. “Those two will absolutely come together.”
For example, when you walk into your bank, smart cameras can identify you as a regular customer and make an intelligent guess as to why you came in, says Narasimhan. If you were researching interest rates on the bank’s website the night before, a loan officer could be expecting you, with your financial records already called up on the screen.
Similarly, when you step into your doctor’s waiting room, edge systems can automatically check you in, pull up your electronic medical record, take your copayment, and alert the nurse you’ve arrived.
“As you journey through your course of care, hospitals will know who you are, where you are, and what assets, like wheelchairs or X-ray machines, you are using,” notes Christian Renaud, research director for IoT at 451 Research. “They’ll have lower operational costs, you’ll have a faster diagnostic experience, and clinicians will get to spend more time with patients and less time entering data into electronic health records.”
And, as we gradually return to the workplace, edge systems in the building will know that we’re there primarily to collaborate with colleagues. We may be assigned a different workspace each day, depending on the people we need to collaborate with that day. Offices will be redesigned with more meeting rooms and fewer cubicles, better teleconferencing gear, smarter whiteboards, and more ways to intelligently capture the content of conversations.
“To some people, it may feel a little creepy that someone already knows who you are and why you’re there,” says Narasimhan. “But so long as you manage that data in a secure way that respects people’s privacy, most of us will overcome that feeling because it’s so convenient.”
3. The robots will be watching us—and learning
The killer app for AI-powered cameras at the edge won’t be security surveillance or autonomous vehicles, Nease says. Instead, these cameras will help many of us become better at our jobs.
“People will be analyzing the physical environment of their operations using computer vision,” he says. “But instead of employing it as a management tool, they’ll be using it to redesign processes to figure out how to make them more efficient.”
Companies like Drishti are already doing this on manual assembly lines for companies like Ford and Honeywell, using smart cameras to quantify processes, identify quality control issues, and train employees more effectively.
A major luxury carmaker is studying how to use computer vision and robots to mimic the work of its craftspeople, notes Dr. Eng Lim Goh, CTO for high-performance computing and artificial intelligence at HPE. The goal is not to replace human workers but to enable the creation of bespoke vehicles, built to the exact requirements of each customer.
“These robots will not only learn from humans but also from each other,” he says. “And if you have factories in different countries, they’ll be able to share their learnings across borders.”
4. Edge devices will begin to learn on their own
Today, edge devices infer decisions based on machine learning models that have been trained in the cloud and then pushed down to the edge. But as the battery life of IoT devices improves and their computing power increases, these devices will begin to learn on their own, Goh says.
“The beauty of this is that the devices collect the data and then immediately learn from it,” he adds. “Imagine the latency reduction in your decision-making.”
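The pattern Goh describes, a device that is seeded with a centrally trained model and then keeps refining it locally on fresh readings, can be sketched in a few lines. This is a hypothetical toy, not any real HPE product: the `EdgeSensor` class, its learning rate, and its anomaly threshold are purely illustrative.

```python
class EdgeSensor:
    """Toy on-device learner: flags readings far from a running mean
    that it keeps updating locally, with no round-trip to the cloud."""

    def __init__(self, initial_mean, alpha=0.1, tolerance=3.0):
        self.mean = initial_mean   # seeded by a centrally trained model
        self.alpha = alpha         # local learning rate
        self.tolerance = tolerance # how far a reading may stray

    def observe(self, reading):
        # Decide immediately, on the device itself.
        anomaly = abs(reading - self.mean) > self.tolerance
        # Learn from the data the moment it is collected.
        self.mean += self.alpha * (reading - self.mean)
        return anomaly


sensor = EdgeSensor(initial_mean=20.0)
for reading in [20.5, 19.8, 21.0]:
    sensor.observe(reading)    # normal readings refine the local model
print(sensor.observe(42.0))    # True: far from the learned mean
```

Because the decision and the model update both happen on the device, there is no network round trip in the loop, which is the latency reduction Goh points to.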
To avoid bias that can be introduced by relying on limited sets of training data, edge devices will be able to learn from and analyze data from multiple sensors, a concept known as swarm learning. Because devices share only the insights gleaned from the data, the data itself remains private and secure, Goh adds.
For example, connected X-ray machines at a hospital that treats a lot of patients with tuberculosis can share insights with another location that sees more cases of pneumonia. The ability to analyze lung X-rays at both facilities improves, while no patient data is shared.
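The core mechanic of swarm learning, sharing model parameters while raw data never leaves a site, can be sketched as follows. This is an illustrative simplification, not Goh's actual system: the "model" here is just a per-feature mean, and the function names are hypothetical.

```python
def local_train(records):
    """Fit a trivial local 'model': the mean of each feature.
    Only this parameter vector ever leaves the site."""
    n = len(records)
    dims = len(records[0])
    return [sum(r[d] for r in records) / n for d in range(dims)]

def swarm_merge(models, weights):
    """Combine parameter vectors from all sites with a weighted
    average; no site ever sees another site's raw records."""
    total = sum(weights)
    dims = len(models[0])
    return [
        sum(w * m[d] for m, w in zip(models, weights)) / total
        for d in range(dims)
    ]


# Private data stays on-site (e.g., features from lung X-rays).
site_a = [[0.9, 0.1], [0.8, 0.2]]   # hospital seeing more tuberculosis
site_b = [[0.2, 0.7], [0.4, 0.9]]   # hospital seeing more pneumonia

model_a = local_train(site_a)
model_b = local_train(site_b)

# Only the parameter vectors cross the network, weighted by sample count.
shared = swarm_merge([model_a, model_b], weights=[2, 2])
print(shared)                        # ≈ [0.575, 0.475]
```

Each hospital ends up with a merged model shaped by both patient populations, while the patient records themselves never move, which is the privacy property the article describes.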
“We’ll continue to see more robust compute and analytics get further and further out in the network until they approach the point of origination, whether that’s a camera, a sensor, a drill press, or an MRI machine,” Renaud says.
5. Augmented and virtual reality will become actual realities
Today’s AR and VR apps have proved how quickly nascent consumer technologies can deliver value in enterprise settings. As edge compute and connectivity continue to advance, experiences will become more immersive while the equipment becomes less intrusive. That will push AR and VR into use cases that today are considered impractical.
With greater compute and storage capability at the edge, images and video can be cached locally and served up instantly via ultra-low-latency 5G networks.
“The edge will enable new services like AR and VR that require content to be closer to where users are,” says Narasimhan. “Virtual experiences will be enhanced to the point where they’re easy and intuitive to consume. And as they get more refined, they will make it easier for people to meet virtually.”
Beyond reducing the need for business travel, AR and VR will be deployed in a wide range of industrial and commercial settings to improve workflows, bridge expertise supply and demand imbalances, and transform customer experiences.
Factory workers can use AR glasses to view 3D schematics as they assemble parts. AR mirrors inside clothing stores will let customers digitally try on different outfits. Doctors will use the technology to perform remote surgery from a thousand miles away. Visitors to theme parks will use it to interact with life-size holograms of their favorite characters.
“AR will allow employees to play with different datasets and explore them from different perspectives,” notes Ross Rubin, principal analyst at Reticle Research. “It will be useful for any application that frees people from having to look at a screen at a particular time and overlay information in a way that sparks insights that might not otherwise be available in that environment.”
6. The big privacy issues will eventually be solved
The ability to deliver digital services to anyone in any location carries with it the ability to track everyone’s behavior everywhere. Organizations that hope to realize the full benefits of edge computing will need to solve the privacy problem.
“We’ve done more than 50 consumer and enterprise surveys over the past five years, and privacy always comes up as one of the top concerns,” Renaud says. “Hopefully, enlightened regulation will moderate the ambitious potential of the technology to get to a point where our privacy is safe.”
In the European Union, for example, data sovereignty rules limit the physical locations where information can be stored, and some countries require connected cameras to automatically blur faces or license plate numbers.
Technologies like swarm learning, which allows insights to be shared without needing the underlying data, can help alleviate data sovereignty concerns. HPE and others are currently working on new data standards that would allow people greater control over how their data is used, Nease says.
“I think these standards will emerge and people will give permission for their data to be used, just as they do with Google every time they perform a search,” he says. “But there is no stopping the privacy problem. It is now upon us.”