<a href="https://arxiv.org/abs/2306.07179" target="_blank">Benchmark</a> / <a href="https://openreview.net/forum?id=CtM5xjRSfm" target="_blank">Results</a> Paper

---

> This is the repository for the _AlgoPerf: Training Algorithms benchmark_, which measures neural network training speedups due to algorithmic improvements.
> It is developed by the [MLCommons Algorithms Working Group](https://mlcommons.org/en/groups/research-algorithms/).
> This repository holds the benchmark code, the benchmark's [**technical documentation**](/docs/DOCUMENTATION.md), and [**getting started guides**](/docs/GETTING_STARTED.md). For a detailed description of the benchmark design, see our [**introductory paper**](https://arxiv.org/abs/2306.07179); for the results of the inaugural competition, see our [**results paper**](https://openreview.net/forum?id=CtM5xjRSfm).
>
> **See our [AlgoPerf Leaderboard](https://github.com/mlcommons/submissions_algorithms) for the latest results of the benchmark and to submit your algorithm.**

---

> [!IMPORTANT]

## Installation

> [!TIP]
> **If you have any questions about the benchmark competition or you run into any issues, please feel free to contact us.** Either [file an issue](https://github.com/mlcommons/algorithmic-efficiency/issues), ask a question on [our Discord](https://discord.gg/5FPXK7SMt6), or [join our weekly meetings](https://mlcommons.org/en/groups/research-algorithms/).

You can install this package and dependencies in a [Python virtual environment](/docs/GETTING_STARTED.md#python-virtual-environment) or use a [Docker/Singularity/Apptainer container](/docs/GETTING_STARTED.md#docker) (recommended).
We recommend using a Docker container (or alternatively, a Singularity/Apptainer container) to ensure a similar environment to our scoring and testing environments.
Both options are described in detail in the [**Getting Started**](/docs/GETTING_STARTED.md) document.
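
As a rough illustration, the virtual-environment route typically looks like the sketch below; the environment name and the exact `pip` extras/flags are assumptions here, so follow the [**Getting Started**](/docs/GETTING_STARTED.md) guide for the authoritative commands.

```bash
# Minimal sketch of the virtual-environment route (environment name and pip
# options are assumptions; see docs/GETTING_STARTED.md for the exact commands).
python3 -m venv algoperf-env
source algoperf-env/bin/activate
git clone https://github.com/mlcommons/algorithmic-efficiency.git
cd algorithmic-efficiency
pip install -e .  # framework-specific extras may be required
```

For the recommended container route, build or pull the image as described in the [**Getting Started**](/docs/GETTING_STARTED.md) guide instead of installing into a local environment.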
## License
The _AlgoPerf_ codebase is licensed under the [Apache License 2.0](/LICENSE.md).
## Paper and Citing the AlgoPerf Benchmark
In our paper ["Benchmarking Neural Network Training Algorithms"](http://arxiv.org/abs/2306.07179), we motivate, describe, and justify the _AlgoPerf: Training Algorithms_ benchmark.

If you are using the _AlgoPerf benchmark_, its codebase, baselines, or workloads, please consider citing our paper:
> [Dahl, Schneider, Nado, et al.<br/>
> **Benchmarking Neural Network Training Algorithms**<br/>