Z.ai GLM 4.7 benchmark results

Compare Z.ai GLM 4.7 benchmark results across hosted providers and endpoints. This page summarizes public runs on MMLU, MATH, GSM8K, IFEval, and MuSR, including score, latency, sample coverage, prompts, outputs, and methodology.

Provider Endpoints

Z.ai GLM 4.7 has 5 public runs across 1 provider. Provider-hosted versions of the same model can differ in quantization, infrastructure, and serving configuration, all of which affect benchmark results independently of model capability.

How to Compare Endpoints

Use canonical-prompt runs on the same benchmark to compare endpoints fairly. When providers serve the same model, score differences reflect hosting choices such as quantization and serving configuration rather than differences in the model itself. Check the methodology for how runs are defined and what makes them comparable.
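
As a concrete sketch of that comparison, the TypeScript below filters a list of runs down to canonical-prompt runs on one benchmark and keeps the best score per endpoint. The Run shape, field names, and scores are illustrative assumptions, not Benchscope's actual schema or data.

// Hypothetical run record; field names are illustrative, not Benchscope's schema.
interface Run {
  provider: string;
  endpoint: string;
  benchmark: string;        // e.g. "MMLU"
  canonicalPrompt: boolean; // true when the run used the shared canonical prompt
  score: number;            // accuracy in [0, 1]
}

// Keep only canonical-prompt runs on one benchmark, then take the best
// score per endpoint, so endpoints are compared on equal footing.
function bestCanonicalScores(runs: Run[], benchmark: string): Map<string, number> {
  const best = new Map<string, number>();
  for (const run of runs) {
    if (run.benchmark !== benchmark || !run.canonicalPrompt) continue;
    const key = `${run.provider} / ${run.endpoint}`;
    const prev = best.get(key);
    if (prev === undefined || run.score > prev) best.set(key, run.score);
  }
  return best;
}

// Made-up example data: the non-canonical run is excluded from the comparison.
const runs: Run[] = [
  { provider: "Cerebras", endpoint: "Z.ai GLM 4.7", benchmark: "MMLU", canonicalPrompt: true,  score: 0.82 },
  { provider: "Cerebras", endpoint: "Z.ai GLM 4.7", benchmark: "MMLU", canonicalPrompt: false, score: 0.85 },
];
console.log(bestCanonicalScores(runs, "MMLU")); // Map { "Cerebras / Z.ai GLM 4.7" => 0.82 }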

MMLU results for Z.ai GLM 4.7

Provider | Endpoint                | Best Score | Runs
Cerebras | Cerebras / Z.ai GLM 4.7 |            | 5 runs

Explore all MMLU benchmark results →

MATH results for Z.ai GLM 4.7

No public MATH runs for Z.ai GLM 4.7 are available yet. Explore all MATH benchmark results or run this benchmark on your endpoint.

GSM8K results for Z.ai GLM 4.7

No public GSM8K runs for Z.ai GLM 4.7 are available yet. Explore all GSM8K benchmark results or run this benchmark on your endpoint.

IFEval results for Z.ai GLM 4.7

No public IFEval runs for Z.ai GLM 4.7 are available yet. Explore all IFEval benchmark results or run this benchmark on your endpoint.

MuSR results for Z.ai GLM 4.7

No public MuSR runs for Z.ai GLM 4.7 are available yet. Explore all MuSR benchmark results or run this benchmark on your endpoint.
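
Where no public runs exist, you can still measure your own endpoint. The TypeScript sketch below scores a few GSM8K-style items against an OpenAI-compatible chat completions API; the endpoint URL, model id, API_KEY variable, test items, and the naive substring grading are all placeholder assumptions, not Benchscope's evaluation harness.

// All names below are placeholders: swap in your provider's real endpoint and model id.
const ENDPOINT = "https://your-provider.example/v1/chat/completions";
const MODEL = "glm-4.7";

interface Item { question: string; answer: string; }

// Two made-up GSM8K-style items; a real run would load the benchmark dataset.
const items: Item[] = [
  { question: "Tom has 3 apples and buys 4 more. How many apples does he have?", answer: "7" },
  { question: "A book costs $12 and a pen costs $3. What do 2 books and 1 pen cost?", answer: "27" },
];

// Send one question to the endpoint and return the model's reply text.
async function ask(question: string): Promise<string> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_KEY}`, // assumes the key is set in the environment
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: `${question}\nAnswer with the final number only.` }],
      temperature: 0, // greedy-ish decoding so repeated runs score consistently
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}

async function main(): Promise<void> {
  let correct = 0;
  for (const item of items) {
    const reply = await ask(item.question);
    if (reply.includes(item.answer)) correct++; // naive grading: substring match on the final number
  }
  console.log(`score: ${correct}/${items.length}`);
}

main().catch(console.error);

A real harness would also record latency and sample coverage per item, which is what the run pages above summarize alongside prompts and outputs.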
