4 comments

  • LargoLasskhyfv a day ago
    • ahartmetz 10 hours ago

      Nice. The article seems to be underselling the performance of the chip with its "not primary goal" wording, though. AFAICT, only air cooling and DDR5 instead of HBM sacrifice performance in favor of flexibility. It otherwise does not seem to be a middling, conservative design meant to sell on features other than performance.

  • ivape 19 hours ago

    In addition, based on AMD ROCm™ software, an open-source AI/HPC software stack for GPUs, and Fujitsu’s Arm-based FUJITSU-MONAKA software, Fujitsu and AMD will enhance their collaboration with the open-source community. Both companies seek to advance the development of open-source AI software that is optimized for the AI computing platforms they will provide, and work to expand the ecosystem.

    https://www.fujitsu.com/global/about/resources/news/press-re...

    ARM is supposed to be releasing its own AI chip soon, so I'm not really sure what's going on here, other than that all these major companies see incredible demand for chips:

    https://www.ft.com/content/95367b2b-2aa7-4a06-bdd3-0463c9bad...

    Would it be wrong to say these will probably be our inferencing chips instead of GPUs going forward?

    • soganess 8 hours ago

      I don't know if you really need the full ROCm stack just to throw matrices at a card. Inference requires less "scaffolding" than training, but maybe Fujitsu just wants to outsource the whole stack and forget about it.
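
      Concretely, "throwing matrices at a card" can be as little as one hipBLAS call, roughly as sketched below. This assumes the Fujitsu/MONAKA platform ends up exposing a hipBLAS-compatible path, which is a guess on my part, not something either announcement says; error handling is also omitted.

        // Minimal sketch: a single SGEMM through hipBLAS -- about all the "stack"
        // you need to throw matrices at a card. Whether the Fujitsu/MONAKA part
        // exposes this path is an assumption, not anything from the press release.
        #include <hip/hip_runtime.h>
        #include <hipblas/hipblas.h>
        #include <vector>
        #include <cstdio>

        int main() {
            const int n = 1024;  // square, all-ones matrices, just to have something to multiply
            std::vector<float> hA(n * n, 1.0f), hB(n * n, 1.0f), hC(n * n, 0.0f);

            float *dA, *dB, *dC;
            hipMalloc((void**)&dA, n * n * sizeof(float));
            hipMalloc((void**)&dB, n * n * sizeof(float));
            hipMalloc((void**)&dC, n * n * sizeof(float));
            hipMemcpy(dA, hA.data(), n * n * sizeof(float), hipMemcpyHostToDevice);
            hipMemcpy(dB, hB.data(), n * n * sizeof(float), hipMemcpyHostToDevice);

            hipblasHandle_t handle;
            hipblasCreate(&handle);

            const float alpha = 1.0f, beta = 0.0f;
            // C = alpha * A * B + beta * C, column-major as in plain BLAS
            hipblasSgemm(handle, HIPBLAS_OP_N, HIPBLAS_OP_N,
                         n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

            hipMemcpy(hC.data(), dC, n * n * sizeof(float), hipMemcpyDeviceToHost);
            printf("C[0] = %f\n", hC[0]);  // 1024.0 for all-ones inputs

            hipblasDestroy(handle);
            hipFree(dA); hipFree(dB); hipFree(dC);
            return 0;
        }

      Frameworks layer graph capture, kernel fusion, quantized formats and so on on top of this, which is where the rest of the ROCm stack earns its keep for inference.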

      My only worry would be that the software model of compute that ROCm has is going to be drastically different from the hardware model of compute that the Fujitsu chip has. This would be like using one of those CUDA compilers that target AMD hardware: sure, you can produce something that will run, but depending on what you are compiling, it will run dog slow.
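
      To make the "runs, but dog slow" failure mode concrete, here is a toy HIP reduction that bakes CUDA's 32-lane warp into its block size. It compiles and returns the right answer on hardware with 64-wide wavefronts, but every block then occupies only half a SIMD unit. The kernel, names, and sizes are made up for illustration; nothing here is specific to the Fujitsu chip.

        // Toy example of code written against one vendor's compute model running
        // correctly but wastefully on another: blocks of 32 threads fill only half
        // of a 64-lane wavefront. Illustrative only; not tied to any real platform.
        #include <hip/hip_runtime.h>
        #include <cstdio>

        constexpr int kWarp = 32;  // CUDA-shaped assumption baked into the "software model"

        __global__ void warp_sum(const float* in, float* out, int n) {
            __shared__ float buf[kWarp];
            int tid = threadIdx.x;  // blocks are launched with exactly kWarp threads
            float acc = 0.0f;
            for (int i = blockIdx.x * kWarp + tid; i < n; i += gridDim.x * kWarp)
                acc += in[i];
            buf[tid] = acc;
            __syncthreads();
            // Tree reduction over 32 values: correct everywhere, but on a 64-lane
            // wavefront this whole block occupies only half the SIMD unit.
            for (int s = kWarp / 2; s > 0; s >>= 1) {
                if (tid < s) buf[tid] += buf[tid + s];
                __syncthreads();
            }
            if (tid == 0) atomicAdd(out, buf[0]);
        }

        int main() {
            const int n = 1 << 20;
            float *d_in, *d_out;
            hipMalloc((void**)&d_in, n * sizeof(float));
            hipMalloc((void**)&d_out, sizeof(float));
            hipMemset(d_in, 0, n * sizeof(float));
            hipMemset(d_out, 0, sizeof(float));
            // 32-thread blocks: legal on 64-wide hardware, just wasteful.
            warp_sum<<<256, kWarp>>>(d_in, d_out, n);
            float result = 0.0f;
            hipMemcpy(&result, d_out, sizeof(float), hipMemcpyDeviceToHost);
            printf("sum = %f\n", result);
            hipFree(d_in); hipFree(d_out);
            return 0;
        }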