The update command implementation runs over files that are independent
of each other. As such, the overall update operation can be trivially
parallelised to speed things up.
This change introduces such parallelisation. The list of files that need
to be compared/updated is collected in a first pass. This list is then
handed to a multiprocessing pool, which farms out the actual update of
each individual file. The degree of parallelism is controlled through a
new "jobs" parameter, command line option and environment variable. If
no value is given for this option, all CPUs are used.
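The two-phase approach described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation; `collect_files`, `update_file` and `run_update` are hypothetical names standing in for the project's real routines:

```python
import os
from multiprocessing import Pool


def collect_files(paths):
    # First pass: gather the files that need to be compared/updated.
    # Hypothetical placeholder; the real selection logic lives in the project.
    return list(paths)


def update_file(path):
    # Per-file worker: compare and update a single file.
    # Files are independent, so each call can run in its own process.
    return path


def run_update(paths, jobs=None):
    # jobs=None mirrors the new option's default: use all CPUs.
    if jobs is None:
        jobs = os.cpu_count()
    files = collect_files(paths)
    with Pool(processes=jobs) as pool:
        # Farm out each individual file update to the pool.
        return pool.map(update_file, files)


if __name__ == "__main__":
    print(run_update(["a.po", "b.po", "c.po"], jobs=2))
```

Because the per-file work is CPU-bound, a process pool (rather than threads) is the natural fit here.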
I noticed this opportunity for improvement while doing a test run of
the update of .po files for the Spanish translation of the CPython
documentation. Local numbers on my 8-core, hyper-threaded AMD Ryzen 7
5825U:
-j 1 (same as old behaviour):
    real 12m5.402s
    user 12m4.942s
    sys  0m0.273s

-j 8:
    real 2m23.609s
    user 17m45.201s
    sys  0m0.460s

<no value given> (all CPUs):
    real 1m57.398s
    user 26m22.654s
    sys  0m0.989s
Signed-off-by: Rodrigo Tobar <[email protected]>