All change in edge computing

As AMD buys Xilinx and Nvidia acquires Arm, we ask two industry experts what this could mean for the vision sector

Jonathan Hou, president of Pleora Technologies, on vision processing in the XPU era

Welcome to the world of the ‘XPU’ – a term sometimes used for the processors of the future – and to an emerging battle between Intel, AMD, and Nvidia.

The recent wave of industry consolidation – Nvidia’s agreement to acquire Arm, AMD’s purchase of Xilinx, and Intel’s earlier purchase of Altera – shares a common goal: offering more compute options, in different form factors optimised for different markets, from a single vendor. It’s clear that Intel, Nvidia, and AMD are taking a platform approach to try to grow their share of the processing revenue in edge devices, traditional desktops, workstations, and servers.

While x86 CPUs from Intel and AMD have dominated traditional PC-based vision systems, Arm CPUs have emerged over the last couple of years as a viable alternative for edge processing. By offering a compelling balance of price, performance, and power consumption, Arm CPUs are ideal for embedded vision and smart camera applications.

Meanwhile, GPUs have evolved from 2D and 3D graphics accelerators into highly parallel general-purpose compute processors. For intensive applications such as image processing and deep learning AI, GPUs have proven highly effective, offering both performance and ease of programming for software developers.
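To give a flavour of the kind of data-parallel, per-pixel work GPUs excel at, here is a minimal sketch in SYCL-style C++ (a vendor-neutral model; the frame size and threshold value are purely illustrative). Every pixel of a greyscale frame is thresholded concurrently:

#include <sycl/sycl.hpp>
#include <cstdint>
#include <vector>

int main() {
    constexpr std::size_t width = 1920, height = 1080;  // illustrative frame size
    constexpr std::uint8_t threshold = 128;             // illustrative threshold
    std::vector<std::uint8_t> image(width * height, 100); // placeholder pixels

    sycl::queue q{sycl::gpu_selector_v}; // target a GPU if one is present
    {
        sycl::buffer<std::uint8_t, 1> buf{image.data(), sycl::range<1>{image.size()}};
        q.submit([&](sycl::handler& h) {
            sycl::accessor px{buf, h, sycl::read_write};
            // One work-item per pixel; the runtime maps them onto GPU threads.
            h.parallel_for(sycl::range<1>{image.size()}, [=](sycl::id<1> i) {
                px[i] = px[i] > threshold ? 255 : 0;
            });
        });
    } // leaving this scope waits for the kernel and copies results back to 'image'
    return 0;
}

The point is not the threshold itself, but that the developer expresses the operation once per pixel and leaves the parallel scheduling to the runtime.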

At the other extreme, companies like Google are starting to build application-specific integrated circuits (ASICs) designed from the ground up for AI acceleration, rather than taking the general-purpose processor route, with devices like the tensor processing unit (TPU). Field programmable gate arrays (FPGAs) take an in-between approach, providing reprogramming flexibility for specific tasks like AI without the high power consumption associated with GPUs.

For a developer in the machine vision world, this means more options are available, from edge processing through to the servers used to train on massive amounts of data. What the recent industry consolidation signals is the emergence of computing platforms, rather than individual components. Intel, AMD, and Nvidia are now competing to get developers to embrace their entire platform – from CPUs and GPUs to FPGAs – through a common set of tools and libraries.

With these acquisitions, we will continue to see a race to two extremes – one towards integrating more compute functions in a single chip, and another towards bundling more standalone compute devices in a single system – all from a single vendor.

At the edge, we’ll start seeing more integration with systems-on-chip (SoCs) to give developers access to different compute resources in very small form factors. We already see this today in the Arm world, where, from a size and power perspective, putting CPU, GPU and AI processors on a single chip is ideal for edge devices. The Industrial Internet of Things (IIoT) and embedded vision markets are following a path very similar to the integration we’ve seen in the history of CPUs, where additional processing blocks – the floating point unit (FPU), then integrated GPUs – have become standard over time. In fact, Xilinx was already heading in this direction before the AMD acquisition, integrating more compute engines – Arm processors, digital signal processors (DSPs), and FPGA fabric – in a single, tightly coupled package, so data can easily be shared across each engine.

In the server and data centre world, discrete components will continue to exist. There will be discrete CPUs, GPUs and FPGAs that offer the best performance, but at the highest power budget, to crunch through data for applications like AI training.

As Intel, Nvidia, and AMD look to integrate and bundle each compute unit, designers will benefit from a single platform-level application programming interface (API). Intel has already introduced oneAPI for its CPUs, upcoming GPUs, and FPGAs: a ‘program once, run on any compute device’ abstraction layer. Nvidia has its popular Cuda library for GPUs, which it has been extending to support Arm CPUs. AMD and Xilinx have recently showcased the ability to run AMD’s ROCm libraries across GPUs and FPGAs.
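As a sketch of what that ‘program once, run on any compute device’ layer looks like in practice, the SYCL 2020 standard underlying Intel’s oneAPI lets the same kernel source be dispatched to whichever device is available – only the queue’s selector changes. The selector names below are standard SYCL, and the kernel is a deliberately trivial placeholder:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // Swap in sycl::cpu_selector_v, sycl::gpu_selector_v, or
    // sycl::accelerator_selector_v (e.g. FPGA cards) without touching the kernel.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    int result = 0;
    {
        sycl::buffer<int, 1> buf{&result, sycl::range<1>{1}};
        q.submit([&](sycl::handler& h) {
            sycl::accessor out{buf, h, sycl::write_only};
            h.single_task([=] { out[0] = 42; }); // same kernel source, any backend
        });
    } // the buffer destructor synchronises and writes 'result' back
    std::cout << "result = " << result << "\n";
    return 0;
}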

For machine vision developers, the benefit is that there will be a single unified programming interface for each vendor platform. This should make it easier to program across the board – CPU, GPU or FPGA – thanks to better integration. The disadvantage is that the market is fragmenting along vendor lines: you’ll need to choose to be an Intel, Nvidia, or AMD developer.

At the end of the day, industry consolidation should be a positive for system integrators in the machine vision market. For our industry, the XPU – a more integrated, easier-to-program, higher-performance processor – will be a key technology for edge devices and embedded vision applications as we move into the age of AI and Industry 4.0.

--

Jan-Erik Schmitt, vice president of sales at Vision Components, on the opportunities and risks from the Xilinx acquisition

AMD’s plan to take over Xilinx, the inventor of the FPGA, follows a string of acquisitions, notably Intel buying Altera and, most recently, Nvidia purchasing Arm. This is certainly a consequence of booming edge and edge-AI development. Driven by the consumer and automotive markets, more and more applications incorporate embedded technology such as smart sensors, data processing at the edge, and embedded vision. The increasing computing power of ever-smaller hardware reinforces this trend. Many applications that needed x86 processors in the past run on much smaller and more energy-efficient embedded processors today. This is shifting the mass market towards embedded devices – a shift the big players want to take part in, hence the recent acquisitions.

But what are the expectations for the embedded vision market? Decentralisation, data processing at the edge and the tight integration of hardware into specific applications have always been part of our market’s strategy. From the first industrial-grade smart camera Vision Components released 25 years ago to today’s fully integrated embedded vision systems, edge computing and AI at the edge are the logical evolution – one the mass market is now following.

While the PC and desktop market and its big players – Intel, AMD and Nvidia – have always relied on proprietary licensing, the embedded market has traditionally had a strong relationship with open-source approaches. Looked at negatively, there is a risk that proprietary licence models will spread into the embedded market. But from our point of view, this may instead lead to alternative projects such as RISC-V, and entirely new open-source developments, breaking through and revitalising the market landscape.

We at Vision Components have worked with Xilinx FPGAs for the past 25 years, and they have always been an ideal choice for many projects because of Xilinx’s commitment to industrial-grade quality, longevity and support. If the market now grows rapidly and expands into mass-market consumer applications with AMD’s help, we hope that the quality and support Xilinx provides for industrial applications – with their smaller quantities and demand – will remain the same. At the same time, the embedded vision market will benefit from a rapid evolution of the technology, fuelled by the consumer market.

Write for us

Do you have an opinion on how embedded vision could change in the coming years? Please get in touch: greg.blackman@europascience.com.
