@@ -2,8 +2,8 @@ Minigo: A minimalist Go engine modeled after AlphaGo Zero, built on MuGo
 ==================================================

 This is a pure Python implementation of a neural-network based Go AI, using
-TensorFlow. While inspired by Deepmind's AlphaGo algorithm, this project is not
-a Deepmind project nor is it affiliated with the official AlphaGo project.
+TensorFlow. While inspired by DeepMind's AlphaGo algorithm, this project is not
+a DeepMind project nor is it affiliated with the official AlphaGo project.

 ### This is NOT an official version of AlphaGo ###

@@ -32,7 +32,7 @@ Goals of the Project
 Google Cloud Platform for establishing Reinforcement Learning pipelines on
 various hardware accelerators.

-2. Reproduce the methods of the original Deepmind AlphaGo papers as faithfully
+2. Reproduce the methods of the original DeepMind AlphaGo papers as faithfully
 as possible, through an open-source implementation and open-source pipeline
 tools.

@@ -45,15 +45,15 @@ understandable implementation that can benefit the community, even if that
 means our implementation is not as fast or efficient as possible.

 While this product might produce such a strong model, we hope to focus on the
-process. Remember, getting there is half the fun :)
+process. Remember, getting there is half the fun. :)

 We hope this project is an accessible way for interested developers to have
 access to a strong Go model with an easy-to-understand platform of python code
 available for extension, adaptation, etc.

-If you'd like to read about our experiences training models, see RESULTS.md
+If you'd like to read about our experiences training models, see [RESULTS.md](RESULTS.md).

-To see our guidelines for contributing, see CONTRIBUTING.md
+To see our guidelines for contributing, see [CONTRIBUTING.md](CONTRIBUTING.md).

 Getting Started
 ===============
@@ -277,9 +277,9 @@ This command takes multiple tfrecord.zz files (which will probably be KBs in siz
 and shuffles them into tfrecord.zz files that are ~100 MB in size.

 Gathering is done according to model numbers, so that games generated by
-one model stay together. By default, `rl_loop.py` will use directories
+one model stay together. By default, [rl_loop.py](rl_loop.py) will use directories
 specified by the environment variable `BUCKET_NAME`, set at the top of
-`rl_loop.py`
+[rl_loop.py](rl_loop.py).

 ```
 gs://$BUCKET_NAME/data/training_chunks/$MODEL_NAME-{chunk_number}.tfrecord.zz