@@ -20,17 +20,17 @@ modes/precisions:
 ```
 
 3. Clone the [tf_unet](https://github.com/jakeret/tf_unet) repository,
-   and then get [PR #202](https://github.com/jakeret/tf_unet/pull/202)
+   and then get [PR #276](https://github.com/jakeret/tf_unet/pull/276)
    to get cpu optimizations:
 
 ```
 $ git clone git@github.com:jakeret/tf_unet.git
 $ cd tf_unet/
 
-$ git fetch origin pull/202/head:cpu_optimized
+$ git fetch origin pull/276/head:cpu_optimized
 From github.com:jakeret/tf_unet
- * [new ref]         refs/pull/202/head -> cpu_optimized
+ * [new ref]         refs/pull/276/head -> cpu_optimized
 
 $ git checkout cpu_optimized
 Switched to branch 'cpu_optimized'
@@ -60,7 +60,7 @@ modes/precisions:
         --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
         --checkpoint /home/<user>/unet_trained \
         --model-source-dir /home/<user>/tf_unet \
-        -- checkpoint_name=model.cpkt
+        -- checkpoint_name=model.ckpt
 ```
 
 Note that the `--verbose` or `--output-dir` flag can be added to the above
@@ -75,4 +75,4 @@ modes/precisions:
 Total samples/sec: 905.5344 samples/s
 Ran inference with batch size 1
 Log location outside container: {--output-dir value}/benchmark_unet_inference_fp32_20190201_205601.log
-```
+```
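The `git fetch origin pull/<N>/head:<branch>` refspec used in the patch above works against any remote that publishes pull-request heads under `refs/pull/`, as GitHub does. A minimal self-contained sketch of the mechanism, using a throwaway local repository with a simulated `refs/pull/276/head` ref in place of `github.com:jakeret/tf_unet` (the repo names, commit message, and paths here are illustrative, not from the original docs):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "Remote" repository standing in for jakeret/tf_unet, with one commit
# and a ref laid out the way GitHub exposes PR heads: refs/pull/<N>/head.
git init -q remote
cd remote
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "pr work"
git update-ref refs/pull/276/head HEAD
cd ..

# Local clone: a plain clone does NOT copy refs/pull/*, so the PR ref
# must be fetched explicitly into a local branch, exactly as in the README.
git clone -q remote local
cd local
git fetch -q origin pull/276/head:cpu_optimized
git checkout -q cpu_optimized
git log -1 --format=%s
```

The explicit refspec is needed because `git fetch` with no arguments only updates refs matched by the remote's default fetch refspec (`refs/heads/*`); PR heads live outside that namespace.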