IEEE Computer Society tech experts recently published their annual predictions for the future of tech, presenting what they believe will be the most widely adopted technology trends in 2019.
This year, the experts also reviewed additional technologies that haven’t yet reached broad adoption and will be revisited next year, such as digital twins, as well as technologies that have outpaced many others.
“The predictions, based on an in-depth analysis by a team of leading technology experts, identify top technologies that have substantial potential to disrupt the market in the year 2019,” said Hironori Kasahara, IEEE Computer Society President.
The top 10 technology trends predicted to reach adoption in 2019 are:
- Deep learning accelerators such as GPUs, FPGAs, and TPUs. More companies have been announcing plans to design their own accelerators, which are widely used in datacentres. There is also an opportunity to deploy them at the edge, initially for inference and, over time, for limited training. This also includes accelerators for very low-power devices. The development of these technologies will bring machine learning, and hence smart behaviour, to many IoT devices and appliances.
- Assisted transportation. While the vision of fully autonomous, self-driving vehicles might still be a few years away, increasingly automated assistance is appearing in both personal and municipal (dedicated) vehicles. Assisted transportation is already widely recognised as useful and is paving the way for fully autonomous vehicles. This technology depends heavily on deep learning accelerators for video recognition.
- Internet of Bodies (IoB). IoT and self-monitoring technologies are moving closer to, and even inside, the human body. Consumers are comfortable with self-tracking using external devices (e.g., fitness trackers and smart glasses) and gaming using augmented reality devices. Digital pills are entering mainstream medicine, and body-attached, implantable, and embedded IoB devices are beginning to interact with sensors in the environment. These devices yield richer data that enable more useful applications, but they also raise concerns about security, privacy, physical harm and abuse.
- Social credit algorithms. These use facial recognition and other advanced biometrics to identify a person and retrieve data about that person from social media and other digital profiles in order to approve or deny access to consumer products or social services. The combination of biometrics and blended social data streams can turn a brief observation into a judgment of whether a person is a good or bad risk, or worthy of public social sanction. Some countries are already using social credit algorithms to assess loyalty to the state.
- Advanced (smart) materials and devices. Novel and advanced materials and devices for sensors, actuators, and wireless communications, such as tuneable glass, smart paper and ingestible transmitters, will create an explosion of exciting applications in healthcare, packaging, appliances and more. These technologies will also advance pervasive, ubiquitous and immersive computing, such as a mobile phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models.
- Active security protection. The traditional method of protecting computer systems involves the deployment of prevention mechanisms, such as anti-virus software. As attackers become more sophisticated, the effectiveness of such protection mechanisms decreases even as their cost increases. However, a new generation of security mechanisms is emerging that takes an active approach, such as hooks that can be activated when new types of attacks are discovered, and machine-learning mechanisms that identify sophisticated attacks. Attacking the attacker is a technological possibility as well, but it is almost always illegal.
- Virtual reality (VR) and augmented reality (AR). These technologies have been mainstream for a number of years. A well-known example, Pokémon Go, is a game that uses the camera of a smartphone to interpose fictional objects in real-world surroundings. Gaming is clearly a driver of these technologies, with other consumer devices becoming affordable and commonplace. VR and AR technologies are also useful in education, engineering, and other fields. However, there has been a ‘Catch-22’: the high cost of entry has limited the number of applications, yet the cost has stayed high precisely because there are few applications.
- Chatbots. These artificial intelligence (AI) programs simulate interactive human conversation using pre-calculated phrases and auditory or text-based signals. Chatbots have recently started generating their own sentences in lieu of pre-calculated phrases, providing better results. Chatbots are frequently used for basic customer service on social networking hubs and are often included in operating systems as virtual assistants. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human.
- Automated voice spam (robocall) prevention. Spam phone calls are increasingly sophisticated, often spoofing the caller ID number of the victim’s family and business associates. This leads people to regularly ignore phone calls, creating risks such as true emergency calls going unanswered. However, emerging technology can now block spoofed caller ID and intercept questionable calls so the computer can ask questions of the caller to assess whether they are legitimate.
- Technology for humanity — specifically machine learning (ML). Technology can help resolve societal issues. IEEE predicts large-scale use of ML, robots and drones will help improve agriculture, ease drought, secure the food supply, and improve health in remote areas. Some of these activities have already started, but a significant increase in adoption rate and successful deployment is predicted for 2019. “Sensors everywhere” and advances in IoT and edge computing are major factors contributing to this adoption. Recent disasters, such as wildfires and bridge collapses, are further accelerating the urgency to adopt monitoring technologies in places like forests and smart roads.
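The pre-calculated-phrase approach that chatbots (item 8 above) have traditionally used can be sketched in a few lines of Python. This is a minimal illustration only; the patterns and replies below are invented for this example and are not drawn from any real product:

```python
import re

# Minimal rule-based chatbot sketch: responses are pre-scripted and
# matched against user input with simple keyword patterns.
# All patterns and replies here are illustrative assumptions.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How can I help you today?"),
    (re.compile(r"\b(hours|open)\b", re.I),
     "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(human|agent|person)\b", re.I),
     "I am an automated assistant. Transferring you to a human agent."),
]
FALLBACK = "Sorry, I did not understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first pre-scripted response whose pattern matches."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("Hi there"))  # → Hello! How can I help you today?
```

A generative chatbot replaces the fixed `RULES` table with a language model that composes each reply, which is what the shift to “self-created sentences” refers to.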
Below are technologies IEEE considered promising, but felt won’t reach broad adoption until after 2019:
- Digital twins — software representations of assets and processes to understand, predict and optimise performance for improved business outcomes. A digital twin can be a digital representation of any characteristic of a real entity, including humans. The choice of which characteristics are digitised is determined by the intended use of the twin. Digital twins are already being used by many companies: according to analysts, 48% of companies in the IoT space have already started adopting them. This includes digital twins for very complex entities, such as an entire smart city. Digital twins are also expected to play a transformational role in healthcare over the next three years.
- Real-time ray tracing (RT2) — long considered the Holy Grail of rendering computer graphics realistically. Although the technique itself is quite mature, until recently it was too compute-intensive to perform in real time, so all ray-traced scenes had to be scripted and rendered in advance. In 2018, we witnessed the debut of a consumer product family with RT2 capabilities. In the next couple of years we expect to see incremental iterations until true RT2 is widespread. Initially, we expect the growth to be driven by consumer applications, such as gaming, followed by professional applications, such as training and simulation. Combined with #7 (VR), this technology could open up new frontiers in high-fidelity visual simulations.
- ‘Serverless’ computing — the family of lambda-like offerings in the cloud, such as AWS Lambda, Google Cloud Functions, or Azure Functions. ‘Serverless’ is the next step along the continuum of virtualisation, containers and micro-services. Unlike IaaS, in serverless computing the service provider manages resources at a very fine granularity (down to an individual function). End users can focus on their functions and don’t have to pre-allocate instances or containers or manage them explicitly. While adoption is still at an early stage, there is appeal on both sides (better resource utilisation for providers, pay-for-what-you-use for users), so IEEE expects significant adoption in the next couple of years.
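To make the function-level model concrete, here is a minimal sketch of a Lambda-style handler in Python. The event shape and field names are illustrative assumptions, since each provider and trigger defines its own; the point is that the user supplies only this function, while the provider allocates resources per invocation:

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style function sketch.

    The provider invokes this in response to an event and bills per
    invocation; no instance or container is pre-allocated by the user.
    The event shape here is an illustrative assumption.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function call; in the cloud, a trigger
# (HTTP request, queue message, file upload, ...) supplies the event.
print(handler({"name": "IEEE"}))
```

The fine granularity described above follows from this shape: because the deployable unit is a single stateless function, the provider can scale it from zero to thousands of concurrent invocations without the user managing any servers.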