Perfect timing for SoCs as Moore’s Law crawls to an end
Miniaturization is a dying strategy
IHS Markit predicts that within the next few years, almost every processor supplier will be forced to present a solution for supporting artificial intelligence or risk appearing technologically outdated. For many of them, especially system-on-chip (SoC) suppliers, this will be a welcome alternative to the increasingly costly strategy of miniaturization associated with Moore’s Law.
Strictly speaking, Moore simply observed that processor performance doubled approximately every two years because transistor density doubled at the same rate (through miniaturization). It was not a law of physics, but merely a technological trend that drove market expectations very efficiently. Designers dependent upon understanding future processor performance, such as computer OEMs and software developers, could set their financial clocks by it. But, as with most efficient systems, the better it works, the more chaos ensues when it stops working, and Moore’s Law is coming to an end. The semiconductor features that enable processors to function are already only a few atoms thick, and continuing to follow Moore’s Law is becoming financially burdensome. Even if the pace is slowed, within a few generations miniaturization will no longer be a viable option due to physical limitations.
Application-specific heterogeneous processing is the new law
Through uncertainty comes innovation. Just as complex systems have trended toward becoming smaller and more highly integrated, the same trend has played out at the semiconductor scale. For decades, a growing strategy for improving processor performance, beyond increasingly expensive miniaturization, has been tighter integration with functionally optimized subsystems, which continues to deliver roughly the same performance gains. Raw performance is not the only driver of this trend, however; it is also driven by:
- Increasing power efficiency
- Reducing latency
- Reducing system component count
- Improving overall system cost efficiency
This trend has led to the development of highly integrated SoCs, and future processors will contain even more application-specific subsystems. The number of both symmetric and asymmetric cores, as well as highly dedicated acceleration logic, continues to grow. As can be seen in the following illustration, the traditional classes of processor are evolving to the point of no longer being recognizable as distinct classes. The traditional definitions of what constitutes a microprocessor, a digital signal processor, or a microcontroller are all beginning to blur as system designers turn more to application-specific logic and SoCs as efficient mainstream solutions, especially where processor designers supply turnkey solutions. This is leading us further toward heterogeneous processor solutions. The classes tend to be ordered by performance target, with the highest performance on top. As processors become more heterogeneous, however, there is no hard rule that this ordering will hold for all applications. For example, machine learning/deep learning (ML/DL) processors will commonly integrate with vision processors for video analytics applications.
The next generation of heterogeneous SoCs will cater to Machine Learning
Artificial intelligence is not a new concept. It is the body of science covering algorithms and machines capable of performing a version of learning and independent problem solving. The term covers a large scope of techniques and is perhaps too broad to be of much use in analyzing market potential for processors. A more refined subset is machine learning, in which a machine “learns” by iteratively re-evaluating its algorithm’s assumptions based on which responses were deemed correct and which were not. This “reasoning” process of applying the trained model to new data is known as inferencing. Due to the massive number of computations involved, inferencing was historically relegated to high-performance computers.
The evolution of machine learning has turned to a processing strategy known as neural networking. Neural networks are pattern-analysis engines rather than deterministic compute engines; they run highly parallel vector and matrix math to weight characteristics of input data and achieve the desired output. Learning mode, where the system is continuously taught whether its outputs were acceptable, still typically relies on high-performance systems, such as cloud servers, often with a great deal of human-machine interfacing. Once trained, however, the resulting algorithm can be converted to a form of conventional deterministic program, or a refined deep-learning neural-network program in which unused parts of the algorithm are removed, enabling it to run on a more resource-restricted platform. This has significant impact on the future of the processor market.
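The vector-and-matrix character of inferencing described above can be sketched in a few lines. The network below is a hypothetical toy example (the weights are illustrative placeholders, not a trained model); it shows only that, once training is done, inferencing reduces to deterministic multiply-accumulate math that maps naturally onto parallel hardware.

```python
# Minimal sketch of neural-network inferencing as vector/matrix math.
# Weights and sizes are illustrative placeholders, not a real trained model.

def relu(x):
    # Simple nonlinearity applied element-wise after each layer
    return [max(0.0, v) for v in x]

def dense(W, x, b):
    # One dense layer: output[i] = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * v for w, v in zip(row, x)) + bi
            for row, bi in zip(W, b)]

# Hypothetical weights for a 3-input, 2-hidden-unit, 1-output network
W1 = [[0.5, -0.2, 0.1],
      [0.3,  0.8, -0.5]]
b1 = [0.0, 0.1]
W2 = [[1.2, -0.7]]
b2 = [0.05]

def infer(x):
    hidden = relu(dense(W1, x, b1))
    return dense(W2, hidden, b2)

result = infer([1.0, 0.5, -1.0])  # a single deterministic forward pass
```

Each layer is a matrix-vector multiply plus bias followed by a simple nonlinearity; dedicated deep-learning accelerators essentially parallelize these multiply-accumulate operations in silicon.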
At the heart of this evolution is the rapid adoption of integrated vector processing, such as graphics processors repurposed for general-purpose computing, and even dedicated parallel processing elements specifically targeting deep learning. This new mass-market trend is driven by the convergence of several other major market trends. The following additional trends have contributed to the focus on neural networking and, further, on deep learning:
- Cloud data centers – Cloud computing is becoming ubiquitous, from commercial enterprise down to personal electronics: assisting your natural-speech inquiries and browser searches, analyzing social media habits, traffic patterns, financial patterns, and much more. We are fast becoming not only accepting of, but dependent upon, cloud computing run on servers that commonly utilize high-performance CPUs, GPUs, FPGAs, and, more recently, specialized neural-networking coprocessors to perform fast, efficient big-data analysis with neural-networking algorithms.
- The Internet-of-Things – It should be obvious, but needs to be said, that the need for big-data analysis is also being driven by the ever-increasing amount of data available from IoT nodes. The patterns of apps on mobile devices, connected cars, wearable devices, medical devices, smart homes and smart appliances, remote sensors, and billions of other new nodes are creating big data sets that can be used to discover things about process traits, system health, personal habits, and digital environmental factors, thereby creating whole new markets that never existed before.
- Edge computing – It is neither practical nor desirable to spend resources on making every device artificially intelligent. Likewise, it is equally impractical to send all locally collected data to the cloud for inferencing. It is also important to consider how the distribution of artificial intelligence affects latency, privacy, safety, security, and practical investment in resources. The rise of IoT data and the potential downsides of cloud services are driving a growing market for intelligent gateways and other localized data centers servicing industrial, enterprise, commercial, and even campus activity. Given enough data, it is likely that many of these devices will become hubs for local applications of artificial intelligence.
- Foothold applications – Artificial intelligence has been around for a long while; however, only recently have embedded processor performance and robust neural-network algorithms enabled such things as machines that perceive image content better than people. The following are emerging applications that are primed to embrace AI in embedded systems.
- Automotive – Driving safety is a crucial focus of the technology industry, and breakthroughs here often cross over to other industries. The introduction of vision AI supporting advanced driver assistance systems (ADAS) and car safety, such as that measured by the New Car Assessment Program (NCAP), is changing the entire design of automobile electronics. Vision is not the only SoC function benefiting from significantly greater interest in AI; coordinating a growing network of sensors and connectivity will require some of the highest levels of coordinated inferencing in any application.
- Smartphone – The ubiquity of smartphones and the demand for high-performance multimedia SoCs already make the smartphone a prime target for AI. AI is already being introduced to enhance photography, but it also has the potential to enhance AR/VR applications, social media, location-based services, and applications we haven’t even begun to address yet.
- Surveillance – Video cameras are another prime target for AI. Whether the intelligence lies in the camera or in the network video recording equipment, there is strong and growing demand for video analytics that capture not just the content but the context of surveilled scenes in real time. This data can be used to trigger alerts as events unfold and to guide people making low-latency, critical decisions in public and private safety and policing.
- Industry 4.0 – Demand for automation has been growing since the birth of the industrial revolution, but worker safety has, until recently, been a major limiting factor for high-volume integration of robotics and human workers in factory automation. With the introduction of AI, this and thousands of other industrial applications can combine automation and a human workforce in an environment made safer and more efficient through AI enhancements.
- Voice assistance – Video analytics is not the only rapidly growing use of inferencing. Natural voice interpretation is one of the fastest-growing AI-enabled consumer markets. Smart speakers and other smart-home automation are creating an environment where the average consumer increasingly relies on digital voice assistance in daily life, which in turn relies heavily on cloud-based inferencing. It is likely that even more local inferencing and sensor-driven automation will grow the market for multimedia SoCs utilizing embedded deep-learning elements in the smart home.
There are many other emerging AI applications that IHS Markit is tracking, not only for their effect on the electronics industry, but also for their influence on processor design. Processor providers must innovate in new ways beyond the historic miniaturization process. The physics behind Moore’s Law may be coming to an end, but Moore’s Law was never about physics; it was always about economic stability. If the processor industry can continue to promise a predictable increase in performance, and AI is one of the most likely candidates for delivering it, Moore’s Law could live on in a new generation.
For more information on Processors in AI, System-on-Chip Tracking and related processor reports, watch for the new IHS Markit Processor Intelligence Service in 2019.