AI Workloads Get A Boost From New Chip Designs


New chip designs are changing the way AI workloads are processed, including an analog chip that improves computer vision. These designs promise to revolutionize AI by taking advantage of innovative approaches that handle generative workloads more efficiently.

In an interview, Siddharth Kotwal, Quantiphi’s global head of Nvidia practice, said:

“When it comes to machine learning, keeping up with the requirements of AI/ML workloads, both in terms of hardware and software, is paramount. The potential hardware opportunities revolve around developing workload specific AI accelerators/GPUs to cater to the specialized needs of enterprises.”

Ben Lee, a professor at the University of Pennsylvania’s Penn Engineering, noted in an interview that general-purpose microprocessors like those from Intel and AMD offer high performance across a very broad range of applications.

He also said that chips built for a specific use case, such as AI, can offer much better performance and energy efficiency.

Lee went on to say:

“First, they optimize the movement of data into and within the processor, reducing the number of energy-intensive data transfers. Second, they create large custom instructions that perform much more work per invocation, which allows the chip to amortize the energy costs of supplying data for those instructions. Computer engineers often use a rule of thumb: custom chips tailored for an application domain can improve performance and energy efficiency by two orders of magnitude (i.e., 100x).”
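
To see why amortization matters, here is a rough back-of-the-envelope sketch in Python. The energy figures are invented placeholders rather than measurements; only the ratio between data movement and arithmetic matters for the argument.

```python
# Illustrative only: the energy numbers below are hypothetical placeholders.
energy_per_data_transfer = 100.0   # cost of moving operands into the processor
energy_per_arithmetic_op = 1.0     # cost of one add/multiply

def energy_per_op(ops_per_invocation: int) -> float:
    """Average energy per operation when one data transfer feeds
    `ops_per_invocation` operations (i.e., a larger custom instruction)."""
    return energy_per_data_transfer / ops_per_invocation + energy_per_arithmetic_op

print(energy_per_op(1))    # one op per fetch: ~101 units per op
print(energy_per_op(100))  # 100 ops per fetch: ~2 units per op
```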

A promising area of research is processing-in-memory (PIM), which couples emerging memory technologies with analog computation, according to Lee. These memory technologies use programmable resistors that can represent a machine learning model’s parameters, or weights.

Lee continued:

“As current flows through these programmed resistors, the memory can implement multiplications & additions that form the basis for many machine learning computations. PIM offers much greater efficiency because computation is embedded within the data, eliminating the need to move large volumes of data across long distances to the processor.”
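
To make the idea concrete, here is a minimal NumPy sketch of the arithmetic a PIM crossbar performs. Mapping weights directly onto conductances is a simplification (real devices typically encode signed weights with pairs of non-negative conductances), and this does not model any particular chip.

```python
import numpy as np

# Sketch of a processing-in-memory crossbar: each cell is a programmable
# resistor whose conductance encodes a weight, inputs arrive as voltages,
# and Ohm's law plus current summation perform the multiply-accumulate.
rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 3))        # model weights: 4 outputs, 3 inputs
conductances = weights                   # simplification: weights map 1:1 to conductances
input_voltages = np.array([0.2, 0.5, 0.1])

# Current through each cell is I = G * V; currents sharing an output line
# add up, so each line carries the dot product of one weight row and the input.
output_currents = conductances @ input_voltages

# The same computation a digital processor would perform explicitly:
reference = np.dot(weights, input_voltages)
assert np.allclose(output_currents, reference)
print(output_currents)
```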

Kotwal believes there will likely be growing demand for edge GPUs, especially for edge inference, requiring GPUs from companies like Nvidia, ARM, Qualcomm, and others in the SoC or mobile domains.



Interference Minimization

Researchers at the University of Southern California (USC) have recently developed a way for devices to minimize interference during AI tasks.

Thanks to this innovation, the technology boasts an unprecedented information density of 11 bits per component, making it the most compact memory technology to date.
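
As a rough sense of what 11 bits per component means, a single cell can distinguish 2^11 = 2048 levels, so one device holds what would otherwise take 11 binary cells. The sketch below is a hypothetical illustration of storing one value at that resolution, not a description of the USC device.

```python
# 11 bits per component implies 2**11 distinguishable levels per cell.
bits_per_component = 11
levels = 2 ** bits_per_component
print(levels)  # 2048

# Hypothetical example: quantizing a weight in [-1, 1] into one 11-bit cell.
weight = 0.4375
code = int(round((weight + 1.0) / 2.0 * (levels - 1)))   # map to an integer in 0..2047
recovered = code / (levels - 1) * 2.0 - 1.0              # map back to [-1, 1]
print(code, recovered)
```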

These tiny but powerful chips could prove to be game changers; fitted into our mobile devices, they would greatly enhance their capabilities.

Next-generation NPUs, ASICs, and FPGAs designed for AI workloads can be much more efficient and cost-effective, according to Robert Daigle, Lenovo’s director of global AI, in an interview.

Daigle predicted that such AI accelerators will become more specialized depending on the use case.

For example, there could be accelerators designed specifically for computer vision inference, generative AI inference, and training.

Daigle said that the latest chip designs incorporate the ability to operate in liquid-cooled environments, which marks a shift toward more sustainable energy practices.

A crucial design priority is minimizing energy consumption and improving heat dissipation.

Daigle went on to note that the evolution of AI accelerators is branching into two distinct trajectories:

One branch is discrete, purpose-built accelerators; the other is AI cores integrated into multipurpose silicon such as CPUs.

The convergence of advanced, efficient silicon, innovative liquid-cooling techniques, and streamlined AI code within robust frameworks stands ready to amplify the potential for new AI models and solutions.

Daigle added:

“Soon chips will help lead the way in sustainability efforts to drive peak AI performance results while reducing and reutilizing energy consumption. AI will continue to advance and become more complex; advancing chip design will help lead the way in that evolutionary process. We can expect to see significant power consumption reduction, acoustic improvements and cost savings.”

 

AI & Computer Vision

Researchers at Tsinghua University in China have recently come up with an innovation of their own.

They say they have created a completely analog photoelectric chip that integrates optical and electronic computing for rapid, energy-efficient computer vision processing.

Analog and digital signals are two ways to transmit information.

Analog signals are like light forming an image; they change continuously.

Digital signals, on the other hand, are like binary numbers and are not continuous.

Computer vision workloads such as image recognition and object detection usually begin with an analog signal from the environment.

To process them with neural networks, which are trained to find patterns in the data, the analog signals must first be converted to digital ones.

The conversion takes time and consumes a lot of energy, which can slow the neural network down.
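
Here is a minimal sketch of that conversion step, the stage an all-analog design aims to skip. The signal and the 8-bit converter resolution are assumptions for illustration, not details from the Tsinghua work.

```python
import numpy as np

# Stand-in for a continuously varying analog signal, e.g. light intensity in [0, 1].
t = np.linspace(0.0, 1.0, 1000)
analog_signal = 0.5 * (1.0 + np.sin(2 * np.pi * 5 * t))

# Analog-to-digital conversion: sample values are quantized to discrete codes.
adc_bits = 8                      # assumed converter resolution
levels = 2 ** adc_bits
digital_signal = np.round(analog_signal * (levels - 1)).astype(np.uint8)

# Only after this conversion would the data reach a conventional digital neural network.
print(digital_signal[:10])
```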

Photonic computing, on the other hand, uses light signals instead and shows promise as a solution.

The researchers’ paper, published in Nature, showed how they created an integrated processor that combines the benefits of light and electricity in an all-analog way.

It is called ACCEL, which is short for All-Analog Chip Combining Electronic and Light Computing.

Fang Lu, a researcher on the Tsinghua team, said:

“We maximized the advantages of light and electricity under all analog signals, avoiding the drawbacks of analog to digital conversion and breaking the bottleneck of power consumption and speed.”
