d-Matrix scores $110M to undercut Nvidia in AI
-
d-Matrix has raised $110M in funding to build AI inference chips that compete with Nvidia's. Its chips integrate compute directly into SRAM for high memory bandwidth.
-
d-Matrix's Jayhawk II packs 256 compute engines and 256MB of SRAM per chiplet; a full card carries 8 chiplets, for 2GB of SRAM in total.
-
The chip targets smaller AI models of roughly 3-60 billion parameters that can fit in SRAM; larger models spill over into slower DRAM.
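The fit-in-SRAM argument can be sketched with simple arithmetic. The 2GB-per-card figure comes from the article; the 4-bit weight precision and the cards-needed calculation are illustrative assumptions, not d-Matrix specifications.

```python
# Sketch: estimate whether a model's weights fit in on-card SRAM.
# The 2GB-per-card capacity is from the article; the precision
# choices and fit check below are illustrative assumptions.

CARD_SRAM_BYTES = 2 * 1024**3  # 2GB of SRAM across 8 chiplets

def weight_bytes(params: int, bits_per_weight: int) -> int:
    """Raw weight footprint, ignoring activations and overhead."""
    return params * bits_per_weight // 8

def cards_needed(params: int, bits_per_weight: int) -> int:
    """Minimum cards whose combined SRAM holds all the weights."""
    needed = weight_bytes(params, bits_per_weight)
    return -(-needed // CARD_SRAM_BYTES)  # ceiling division

# A 3B-parameter model quantized to 4 bits is 1.5GB: one card.
print(cards_needed(3_000_000_000, 4))
# A 60B-parameter model at 4 bits is 30GB: multiple cards.
print(cards_needed(60_000_000_000, 4))
```

Under these assumptions the low end of the 3-60B range fits on a single card, while the high end would have to be sharded across many cards or fall back to DRAM.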
-
Nvidia's GPUs remain better suited to huge 100B+ parameter models, but d-Matrix can be more cost-effective for mainstream enterprise models.
-
d-Matrix must move fast: Nvidia and cloud providers such as Amazon and Google are developing their own inference chips.