Researchers from Stanford, Princeton, and Cornell have developed a new benchmark to better evaluate the coding abilities of large language models (LLMs). Called CodeClash, the new benchmark pits LLMs ...
Today, MLCommons announced new results for the MLPerf Training v5.1 benchmark suite, highlighting the rapid evolution and ...
Researchers are racing to develop more challenging, interpretable, and fair assessments of AI models that reflect real-world use cases. The stakes are high. Benchmarks are often reduced to leaderboard ...
In the latest MLPerf Training v5.1, NVIDIA dominated every benchmark, setting new records across LLMs, image generation, and more, thanks to its Blackwell Ultra GPUs, NVFP4 precision, and ...