Together AI / Llama 3.3 70B on MUSR

Together AI hosting Llama 3.3 70B on MUSR. Completed at 4/20/2026, 6:30:12 AM. Score: 30.0%. Median (p50) latency: 597 ms.

Run Summary

This run is completed. Benchscope recorded 20 samples, all 20 of which completed. Prompt mode: canonical. Sample scope: random.
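The headline numbers above are simple aggregates over the per-sample records: the score is the fraction of correct samples, and p50 latency is the median of the per-sample latencies. A minimal sketch of that aggregation, using hypothetical field names (`correct`, `latency_ms`) that are assumptions rather than Benchscope's actual schema:

```python
from statistics import median

# Hypothetical per-sample records; 6 of 20 correct gives a 30.0% score.
# Field names and values are illustrative, not Benchscope's real data.
samples = [
    {"correct": i < 6, "latency_ms": 500 + 10 * i}
    for i in range(20)
]

# Score: percentage of completed samples marked correct.
score = 100.0 * sum(s["correct"] for s in samples) / len(samples)

# p50 latency: the median of the per-sample latencies.
p50 = median(s["latency_ms"] for s in samples)

print(f"score: {score:.1f}%")      # -> score: 30.0%
print(f"p50 latency: {p50} ms")
```

With real run data, the same two reductions would reproduce the score and p50 latency shown in the summary.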

Benchmark and Endpoint

Model family: Llama 3.3 70B. Provider: Together AI. Benchmark: MUSR. You can compare this run with other public runs on the MUSR benchmark page and the Llama 3.3 70B model page.
