-
Since you have a custom function, there is no easy way to do it, so you have to define an `AbstractNLPModel` and implement some of its methods.

```julia
using NLPModels

mutable struct YourProblem{T, S} <: AbstractNLPModel{T, S}
  meta::NLPModelMeta{T, S}
  counters::Counters
end

function YourProblem(::Type{T}) where {T}
  meta = NLPModelMeta{T, Vector{T}}(
    2, # number of variables
    ncon = 2, # number of constraints
    x0 = ...,
    lvar = ...,
    uvar = ...,
    lcon = ...,
    ucon = ...,
    name = ...,
  )
  return YourProblem(meta, Counters())
end

YourProblem() = YourProblem(Float64)

function NLPModels.objcons(nlp::YourProblem, x::AbstractVector)
  @lencheck 2 x
  increment!(nlp, :neval_obj)
  increment!(nlp, :neval_cons)
  fx = ...
  obj_value = g(fx)
  cons_value = h(fx)
  return obj_value, cons_value
end
```

You will need to define other functions as well, and you need to handle AD yourself. Since the model is completely new, you can do other things that you might deem necessary, for instance, storing intermediate values. There are a few examples to get inspiration, such as ManualNLPModels.jl, or hs10.jl from NLPModelsTest.jl. Let me know if this is helpful.
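To illustrate the "storing" idea: below is a minimal sketch of a model that caches the expensive evaluation, assuming ForwardDiff for the AD part. `f`, `g`, `h`, `CachedProblem`, and `update_cache!` are names made up for this sketch, not NLPModels API; only `objcons`, `grad!`, and the meta/counters machinery are.

```julia
using NLPModels, ForwardDiff

# toy stand-ins; in the real application f is the expensive part
f(x) = [sum(abs2, x), sum(x) - 1]
g(fx) = fx[1]     # cheap objective of f's output
h(fx) = fx[2:2]   # cheap constraint(s) of f's output

# cache f(x) and its Jacobian so they are computed once per point
mutable struct CachedProblem{T} <: AbstractNLPModel{T, Vector{T}}
    meta::NLPModelMeta{T, Vector{T}}
    counters::Counters
    last_x::Vector{T}   # point at which the cache below is valid
    fx::Vector{T}       # cached f(last_x)
    Jfx::Matrix{T}      # cached Jacobian of f at last_x
end

function CachedProblem(x0::Vector{T}) where {T}
    meta = NLPModelMeta{T, Vector{T}}(
        length(x0);
        ncon = 1,
        x0 = x0,
        lcon = zeros(T, 1),
        ucon = zeros(T, 1),
        name = "cached objcons sketch",
    )
    # last_x starts as NaNs so the first evaluation always fills the cache
    CachedProblem(meta, Counters(), fill(T(NaN), length(x0)), T[], Matrix{T}(undef, 0, 0))
end

function update_cache!(nlp::CachedProblem, x)
    if x != nlp.last_x
        nlp.fx = f(x)                         # the single expensive call
        nlp.Jfx = ForwardDiff.jacobian(f, x)  # one AD pass through f
        nlp.last_x = copy(x)
    end
    nlp
end

function NLPModels.objcons(nlp::CachedProblem, x::AbstractVector)
    increment!(nlp, :neval_obj)
    increment!(nlp, :neval_cons)
    update_cache!(nlp, x)
    return g(nlp.fx), h(nlp.fx)
end

function NLPModels.grad!(nlp::CachedProblem, x::AbstractVector, gx::AbstractVector)
    increment!(nlp, :neval_grad)
    update_cache!(nlp, x)
    gx .= nlp.Jfx' * ForwardDiff.gradient(g, nlp.fx)  # chain rule through f
    return gx
end
```

The remaining methods a solver needs (`cons!`, `jac_coord!`, `hprod!`, and so on) would follow the same pattern, reusing `nlp.fx` and `nlp.Jfx`.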
-
Thanks for your answer. The only part unclear is the AD. You say that

> you need to handle AD yourself

but I could not figure out which part of the API handles the objective, its gradient, the constraint, and the Jacobian at once. Is there even such a thing? If that is not available, I could do the following: for each new `x`, evaluate `f` and its Jacobian once, cache them, and serve the objective, constraint, gradient, and Jacobian from that cache. In any case, I am still in the planning phase, comments are appreciated.
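Concretely, the plan I have in mind relies on the chain rule, so a single evaluation of $f$ and one AD pass through it per point should suffice (my notation, with $J$ denoting Jacobians):

$$
\nabla (g \circ f)(x) = J_f(x)^\top \, \nabla g(f(x)), \qquad
J_{h \circ f}(x) = J_h(f(x)) \, J_f(x).
$$

Once $f(x)$ and $J_f(x)$ are cached, everything downstream involves only the cheap $g$ and $h$.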
-
Forgot to mention: I am probably going to be using Percival.jl, since I find it very robust.
-
I started an implementation at https://github.com/tpapp/ObjConsNLPModels.jl. But I am running into a dimension error with Percival:

```julia
julia> using ObjConsNLPModels

julia> using Percival

julia> model = objcons_nlpmodel(x -> [sum(abs2, x), sum(x) - 1]; x0 = [2.0, 2.0])
ObjConsNLPModels.ObjConsNLPModel{Float64, Vector{Float64}, Float64, var"#24#25"}
  Problem name: Generic
   All variables: ████████████████████ 2      All constraints: ████████████████████ 1
            free: ████████████████████ 2                 free: ████████████████████ 1
           lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
          infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
            nnzh: (  0.00% sparsity)   3               linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
                                                    nonlinear: ████████████████████ 1
                                                         nnzj: (  0.00% sparsity)   2

  Counters:
             obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0          grad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0          cons: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        cons_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0      cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0          jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0           jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0       jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0         jprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     jprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
       jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0        jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0    jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
      jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0          hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0         hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0        jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0

julia> output = percival(model)
ERROR: DimensionError: Input lvar should have length 3 not 2
```

What am I doing wrong?
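My current guess, which I have not verified against Percival's internals: the requested length 3 equals `nvar + ncon`, which is what a slack-variable reformulation of a non-equality constraint would produce, and the printout above shows the single constraint as free. If `sum(x) - 1` is meant as an equality constraint, the meta should say so by making the constraint bounds coincide, along these lines (plain NLPModels, illustrative values):

```julia
using NLPModels

# declare sum(x) - 1 == 0 as an equality constraint: lcon == ucon
meta = NLPModelMeta{Float64, Vector{Float64}}(
    2;                # number of variables
    ncon = 1,         # number of constraints
    x0 = [2.0, 2.0],
    lcon = [0.0],     # equal lower and upper constraint bounds
    ucon = [0.0],     # mark the constraint as an equality
    name = "equality constraint sketch",
)
```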
-
Thanks for all the help so far. I updated the repository https://github.com/tpapp/ObjConsNLPModels.jl with new methods and fixed the dimension error, so now

```julia
julia> using ObjConsNLPModels

julia> using Percival

julia> model = objcons_nlpmodel(x -> [sum(abs2, x), sum(x) - 1]; x0 = [2.0, 2.0])

julia> percival(model)
```

runs, giving me a solution.
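For reference, the value returned by `percival` is a `SolverCore.GenericExecutionStats`, so the result can be inspected along these lines (field names are from SolverCore; the exact values depend on the run):

```julia
stats = percival(model)
stats.status      # a Symbol, e.g. :first_order on convergence
stats.solution    # the minimizer found by the solver
stats.objective   # objective value at the solution
```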
-
Consider the problem

$$\min_{x} \; g(f(x))$$

subject to

$$h(f(x)) = 0,$$

where $x \in \mathbb{R}^n$ is a vector, $f: \mathbb{R}^n \to \mathbb{R}^m$ is expensive to calculate compared to $g$ and $h$, $g: \mathbb{R}^m \to \mathbb{R}$, and $h: \mathbb{R}^m \to \mathbb{R}^k$.
The natural solution would be using `objcons`, but I need help figuring out how to integrate AD with that.
My (admittedly very superficial) understanding of the API is that it allows computing the value and gradient/Jacobian of the objective and of the constraints separately, but not together. But perhaps I missed something.
For the purposes of an MWE, a walk-through using `f(x) = [sum(abs2, x), sum(x) - 1]` (as in the snippets above) would help me a lot. Remember, the exercise is to evaluate `f` once for each `x`, and use AD (ForwardDiff is fine).
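To make the request concrete, the single-evaluation bookkeeping I have in mind looks roughly like this; `eval_all` is a made-up helper, not NLPModels API:

```julia
using ForwardDiff

f(x) = [sum(abs2, x), sum(x) - 1]  # stands in for the expensive function
g(fx) = fx[1]                      # cheap objective of f's output
h(fx) = fx[2:2]                    # cheap constraint(s) of f's output

# evaluate f once and differentiate through it once, then derive the
# objective, constraints, gradient, and Jacobian from the cached values
function eval_all(x)
    fx = f(x)                          # 1 evaluation of f
    Jf = ForwardDiff.jacobian(f, x)    # 1 AD pass through f
    obj = g(fx)
    cons = h(fx)
    grad = Jf' * ForwardDiff.gradient(g, fx)   # ∇(g∘f) by the chain rule
    jac = ForwardDiff.jacobian(h, fx) * Jf     # J(h∘f) by the chain rule
    return (; obj, cons, grad, jac)
end

eval_all([2.0, 2.0])  # (obj = 8.0, cons = [3.0], grad = [4.0, 4.0], jac = [1.0 1.0])
```

The open question is how to plug these cached values into the NLPModels API so that the solver's calls for the objective, constraint, gradient, and Jacobian all hit the same cache.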