World’s largest chip gets beefier: 850,000 cores for AI
Cerebras Systems has announced its 2nd Generation Wafer Scale Engine processor in what looks to be a major breakthrough for the company.
Announced at the Hot Chips 2020 conference, the new chip, which is as large as a single 300-mm wafer, packs more than twice as many transistors as its predecessor along with a whopping 850,000 AI-optimized cores.
The Cerebras Wafer Scale Engine Generation 2 has a die size of approximately 46,225 mm², as it is limited by the physical dimensions of a 300-mm silicon wafer. The WSE 2 is fabricated using TSMC’s 7 nm process technology and contains 2.6 trillion transistors. To put those numbers into context, NVIDIA’s A100 measures 826 mm² and contains 54.2 billion transistors.
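For a rough sense of how those figures compare, the transistor densities implied by the numbers above can be worked out directly. This is back-of-the-envelope arithmetic using only the values quoted in this article:

```python
# Back-of-the-envelope transistor density from the figures quoted above.
wse2_transistors = 2.6e12   # 2.6 trillion
wse2_area_mm2 = 46_225      # ~46,225 mm^2

a100_transistors = 54.2e9   # 54.2 billion
a100_area_mm2 = 826         # 826 mm^2

print(f"WSE 2: {wse2_transistors / wse2_area_mm2 / 1e6:.1f} M transistors/mm^2")
print(f"A100:  {a100_transistors / a100_area_mm2 / 1e6:.1f} M transistors/mm^2")
# Roughly 56 million transistors/mm^2 for the WSE 2
# versus roughly 66 million transistors/mm^2 for the A100.
```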
Incredible complexity
The Wafer Scale Engine 2 packs up to 850,000 proprietary cores specifically designed for processing AI/DL/ML applications. Each core has its own SRAM (in aggregate, there are tens of gigabytes of SRAM on every WSE), and the cores are interconnected by a proprietary on-chip mesh network that enables all 850,000 of them to work on the same task.
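That memory is spread very thin per core. As a rough illustration, assuming on-chip SRAM on the order of 40 GB (an assumed round figure; the article only says "tens of gigabytes"), each core's local slice works out to a few tens of kilobytes:

```python
# Illustrative only: 40 GB is an assumed round figure for total on-chip SRAM,
# not a number confirmed in the article ("tens of gigabytes").
total_sram_bytes = 40e9
cores = 850_000

per_core_kb = total_sram_bytes / cores / 1e3
print(f"~{per_core_kb:.0f} KB of SRAM per core")  # roughly 47 KB per core
```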
Cerebras claims that its Wafer Scale Engines are designed for workloads that would not perform well enough on traditional supercomputers employing hundreds of CPUs/GPUs/TPUs. At the same time, Wafer Scale Engines can be used with existing AI frameworks, such as TensorFlow or PyTorch, albeit with a special compiler from Cerebras.
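In practice, that means the model itself is written exactly as it would be for any other accelerator; only the compilation target changes. Below is a minimal, ordinary PyTorch model sketch of the kind such a compiler would consume; the Cerebras-specific compile/run step is deliberately omitted because the article does not describe that API:

```python
import torch
import torch.nn as nn

# A standard PyTorch model; per the article, Cerebras's compiler consumes
# models written against ordinary frameworks like this one.
class TinyClassifier(nn.Module):
    def __init__(self, in_features: int = 784, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier()
# The hand-off to Cerebras's compiler is not documented here,
# so it is omitted rather than invented.
```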
Cerebras says that it has already received its 2nd Generation Wafer Scale Engine from the fab and is currently testing it internally. There is no word on actual performance or even power consumption of the WSE 2, but the 1st Generation WSE consumes up to 20kW of power and is supplied as a 15U rack system.
Sources: AnandTech, Tom’s Hardware