Commit a9144d3

Merge pull request #670 from SciML/opt_bounds
Fix default optimizer when parameter bounds are given
2 parents: 60df115 + 423ba08 · commit a9144d3

File tree

1 file changed: +6 -3 lines


src/train.jl

Lines changed: 6 additions & 3 deletions
@@ -42,8 +42,8 @@ defined in an optimizer-dependent manner.
 
 The current default AD choice is dependent on the number of parameters.
 For <50 parameters both ForwardDiff.jl and Zygote.jl gradients are evaluated
-and the fastest is used. If both methods fail, finite difference method
-is used as a fallback. For ≥50 parameters Zygote.jl is used.
+and the fastest is used. If both methods fail, finite difference method
+is used as a fallback. For ≥50 parameters Zygote.jl is used.
 More refinements to the techniques are planned.
 
 ## Default Optimizer Choice
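The documentation hunk above describes a try-both-and-keep-the-fastest AD policy: below 50 parameters, both ForwardDiff.jl and Zygote.jl are timed and the faster one wins, with finite differences as a last resort; at 50 parameters or more, Zygote.jl is used directly. A minimal sketch of that policy, assuming hypothetical names throughout (this is not DiffEqFlux API):

```python
import time


def pick_fastest_gradient(candidates, fallback, x):
    """Time each candidate gradient function once and keep the fastest.

    If every candidate raises, return the fallback (per the docs,
    finite differencing). All names here are illustrative.
    """
    best, best_time = None, float("inf")
    for grad in candidates:
        try:
            start = time.perf_counter()
            grad(x)
            elapsed = time.perf_counter() - start
        except Exception:
            continue  # this backend failed on the problem; skip it
        if elapsed < best_time:
            best, best_time = grad, elapsed
    return best if best is not None else fallback


def choose_ad_backend(n_params):
    """Dispatch on parameter count as the docs hunk describes."""
    if n_params < 50:
        return "fastest of ForwardDiff/Zygote (finite differences on failure)"
    return "Zygote"
```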
@@ -98,7 +98,7 @@ function sciml_train(loss, θ, opt=nothing, adtype=nothing, args...;
         error("Automatic optimizer determination requires deterministic loss functions (and no data) or maxiters must be specified.")
     end
 
-    if isempty(args) && deterministic
+    if isempty(args) && deterministic && lower_bounds === nothing && upper_bounds === nothing
         # If determinsitic then ADAM -> finish with BFGS
         if maxiters === nothing
             res1 = GalacticOptim.solve(optprob, ADAM(0.01), args...; maxiters=300, kwargs...)
@@ -110,6 +110,9 @@ function sciml_train(loss, θ, opt=nothing, adtype=nothing, args...;
             optfunc, res1.u; lb=lower_bounds, ub=upper_bounds, kwargs...)
         res1 = GalacticOptim.solve(
             optprob2, BFGS(initial_stepnorm=0.01), args...; maxiters, kwargs...)
+    elseif isempty(args) && deterministic
+        res1 = GalacticOptim.solve(
+            optprob, BFGS(initial_stepnorm=0.01), args...; maxiters, kwargs...)
     else
         res1 = GalacticOptim.solve(optprob, ADAM(0.1), args...; maxiters, kwargs...)
     end
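The fix restricts the ADAM-then-BFGS default to the unconstrained case and adds a new branch that sends a deterministic, bounded problem straight to BFGS instead. A hedged Python sketch of just the branch structure (function and argument names are hypothetical, not the GalacticOptim API, and the re-creation of the problem with lb/ub is not modeled):

```python
def default_optimizer_plan(deterministic, has_data_args,
                           lower_bounds=None, upper_bounds=None):
    """Mirror the patched branch structure in sciml_train: return
    descriptions of which default optimizers run, in order."""
    bounded = lower_bounds is not None or upper_bounds is not None
    if not has_data_args and deterministic and not bounded:
        # Unconstrained deterministic loss: ADAM warm-up, finish with BFGS.
        return ["ADAM(0.01)", "BFGS(initial_stepnorm=0.01)"]
    elif not has_data_args and deterministic:
        # The branch this commit adds: with bounds, skip the ADAM warm-up
        # and hand the box-constrained problem directly to BFGS.
        return ["BFGS(initial_stepnorm=0.01)"]
    else:
        # Stochastic loss or extra data arguments: plain ADAM.
        return ["ADAM(0.1)"]
```

Before this commit, a bounded deterministic problem fell into the first branch and ran ADAM, which does not respect the box constraints; the new `elseif` is what the PR title refers to.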
