Llama 3.1 8B benchmark results

Compare Llama 3.1 8B benchmark results across hosted providers and endpoints. This page summarizes public runs on MMLU, MATH, GSM8K, IFEval, and MuSR, including score, latency, sample coverage, prompts, outputs, and methodology.

Provider Endpoints

Llama 3.1 8B is hosted by 3 providers, with 1 public run so far. Provider-hosted versions of the same model can differ in quantization, infrastructure, and serving configuration, which affects benchmark results independently of model capability.

How to Compare Endpoints

Use canonical-prompt runs on the same benchmark to compare endpoints fairly. Score differences between providers serving the same model reflect hosting differences rather than model differences. Check the methodology for how runs are defined and what makes them comparable.
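As a sketch, the comparison rule above can be expressed in a few lines of Python. The record fields and the second provider's runs are hypothetical (only Groq's 45.0% MMLU result appears on this page); this is not Benchscope's actual schema or API.

```python
# Hypothetical run records; field names and "ExampleHost" scores are illustrative.
runs = [
    {"provider": "Groq", "benchmark": "MMLU", "prompt_set": "canonical", "score": 0.45},
    {"provider": "ExampleHost", "benchmark": "MMLU", "prompt_set": "canonical", "score": 0.47},
    {"provider": "ExampleHost", "benchmark": "MMLU", "prompt_set": "custom", "score": 0.52},
]

def comparable_runs(runs, benchmark):
    """Keep only canonical-prompt runs on the given benchmark, so scores are comparable."""
    return [r for r in runs
            if r["benchmark"] == benchmark and r["prompt_set"] == "canonical"]

def best_by_provider(runs):
    """Best score per provider among the already-filtered runs."""
    best = {}
    for r in runs:
        best[r["provider"]] = max(best.get(r["provider"], 0.0), r["score"])
    return best

print(best_by_provider(comparable_runs(runs, "MMLU")))
```

Note that the custom-prompt run (0.52) is excluded before picking a best score: only canonical-prompt runs on the same benchmark are compared, so the remaining gap reflects hosting differences.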

MMLU results for Llama 3.1 8B

Provider | Endpoint              | Best Score | Runs
Groq     | Groq / Llama 3.1 8B   | 45.0%      | 1 run

Explore all MMLU benchmark results →

MATH results for Llama 3.1 8B

No public MATH runs for Llama 3.1 8B are available yet. Explore all MATH benchmark results or run this benchmark on your endpoint.

GSM8K results for Llama 3.1 8B

No public GSM8K runs for Llama 3.1 8B are available yet. Explore all GSM8K benchmark results or run this benchmark on your endpoint.

IFEval results for Llama 3.1 8B

No public IFEval runs for Llama 3.1 8B are available yet. Explore all IFEval benchmark results or run this benchmark on your endpoint.

MuSR results for Llama 3.1 8B

No public MuSR runs for Llama 3.1 8B are available yet. Explore all MuSR benchmark results or run this benchmark on your endpoint.
