Apr 19, 2026
Google is in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently, according to two people with direct knowledge of the discussions. One is a memory processing unit designed to work alongside Google’s tensor processing unit. The other is a new TPU built specifically for running AI models. The moves underscore surging demand for inference chips, which run the AI models powering commercial products such as autonomous agents. At its GTC conference in March, Nvidia released a chip designed to improve the efficiency of inference workloads. Called a language processing unit, the chip is built on technology Nvidia licensed from startup Groq for $20 billion.
By Qianer Liu