Tips and tricks
This last chapter contains some rules of thumb for improving efficiency, solving memory issues, and addressing other frequent problems.
Automatic Differentiation (AD)
Why is Automatic Differentiation important?
Nonlinear optimization software relies on accurate and efficient derivative computations for faster solutions and improved robustness.
Knitro in particular has the ability to exploit second derivative (Hessian matrix) information for faster convergence. Computing partial derivatives and coding them manually in a programming language is time consuming and error prone (Knitro does provide a function to check first derivatives against finite differences).
Automatic Differentiation (AD) is a technique that automatically and efficiently computes exact derivatives, freeing the user from this issue.
Most modeling languages provide automatic differentiation.
Option tuning for efficiency
- If you are unsure how to set non-default options, or which user options to experiment with, simply run your model with the setting tuner=1. This causes the Knitro-Tuner to run many instances of your model with a variety of option settings, and report statistics and recommendations on which non-default option settings may improve performance on your model. Significant performance improvements can often be obtained by choosing non-default option settings. See The Knitro-Tuner for more details.
- The most important user option is the choice of which continuous nonlinear optimization algorithm to use, which is specified by the algorithm option. Try all four algorithms, as it is often difficult to predict which one will work best, or use the multi option (algorithm=5). In particular, the Active Set algorithms often work best for small problems, problems whose only constraints are simple bounds on the variables, or linear programs. The interior-point algorithms are generally preferable for large-scale problems.
- Perhaps the second most important setting is the hessopt user option, which specifies which Hessian (or Hessian approximation) technique to use. If you (or the modeling language) are not providing the exact Hessian to Knitro, you should experiment with different values here.
- One of the most important user options for the interior-point algorithms is the bar_murule option, which controls the handling of the barrier parameter. It is recommended to experiment with different values for this option if you are using one of the interior-point solvers in Knitro.
- If you are using the Interior/Direct algorithm and it seems to be taking a large number of conjugate gradient (CG) steps (as evidenced by a non-zero value under the CGits output column header on many iterations), try a small value for the bar_directinterval user option (e.g., 0-2). This option tries to prevent Knitro from taking an excessive number of CG steps. Additionally, if there are solver iterations where Knitro slows down because it is taking a very large number of CG iterations, you can try enforcing a maximum limit on the number of CG iterations per algorithm iteration.
- The linsolver option can make a big difference in performance for some problems. For small problems (particularly small problems with dense Jacobian and Hessian matrices), it is recommended to try the qr setting, while for large problems, it is recommended to try the hybrid, ma27, ma57 and mklpardiso settings to see which is fastest. When using either the hybrid, qr, ma57, or mklpardiso setting for the linsolver option, it is highly recommended to use the Intel MKL BLAS (blasoption = 1) provided with Knitro, or some other optimized BLAS, as this can result in significant speedups compared to the internal Knitro BLAS (blasoption = 0).
- When solving mixed integer problems (MIPs), if Knitro is struggling to find an integer feasible point, try different values for the mip_heuristic option, which attempts to find an integer feasible point before beginning the branch-and-bound process. Other important MIP options that can significantly impact the performance of Knitro are the mip_selectrule user option, as well as the mip_nodealg option, which determines the Knitro algorithm used to solve the nonlinear, continuous subproblems generated during the branch-and-bound process.
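For concreteness, several of the options discussed above can be collected in a Knitro options file. The sketch below is purely illustrative (the numeric values shown are assumptions and should be verified against the option reference for your Knitro version before use):

```
# knitro.opt -- illustrative settings, not a recommendation
algorithm    5    # "multi": tries several algorithms
hessopt      2    # BFGS approximation, if no exact Hessian is available
bar_murule   2    # a non-default barrier parameter update (interior-point only)
blasoption   1    # Intel MKL BLAS shipped with Knitro
```

Alternatively, start from `tuner 1` alone and let the Knitro-Tuner suggest which of these settings actually pay off on your model.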
Setting bounds efficiently
Why is Knitro not honoring my bound constraints?
By default Knitro does not enforce that the simple bounds on the variables (lb ≤ x ≤ ub) are satisfied throughout the optimization process. Rather, satisfaction of these bounds is only enforced at the solution.
In some applications, however, the user may want to enforce that the initial point and all intermediate iterates satisfy the bounds lb ≤ x ≤ ub. This can be enforced by setting KTR_PARAM_HONORBNDS to 1.
Please note, the honor bounds option pertains only to the simple bounds defined with the vectors lb and ub for x, not to the general equality and inequality constraints cl ≤ c(x) ≤ cu defined with the vectors cl and cu.
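If you set options through a Knitro options file rather than through the API, the file-level counterpart of KTR_PARAM_HONORBNDS is the honorbnds option:

```
# knitro.opt
honorbnds  1    # require the initial point and all iterates to satisfy the bounds
```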
Do I need to specify a constraint with c(x), cl, and cu if I already specified it with the bounds parameters lb and ub?
No, if you have specified a constraint with the bounds parameters then you should not specify it with the general constraints.
For example, a nonnegativity restriction x ≥ 0 is best modeled by setting lb = 0 and ub = KTR_INFBOUND.
Duplicate specification of a constraint can make the problem more difficult to solve.
Do I need to initialize all of the bounds parameters? What if a variable is unbounded?
If every variable is unbounded from below (or above), then you can pass a NULL pointer for lb (or ub), and Knitro will interpret this to mean unbounded.
If at least one variable is bounded from below (or above), then you must initialize all elements of lb (or ub).
Use Knitro's value for infinity, KTR_INFBOUND, to denote unbounded.
If a bound is larger in magnitude than KTR_INFBOUND, then you must rescale the problem. See include/knitro.h for the definition of KTR_INFBOUND.
Do I need to initialize all of the constraint parameters cl and cu? What if a constraint is unbounded?
You must initialize all elements of cl and cu.
If a constraint is unbounded, use Knitro's value for infinity, KTR_INFBOUND.
If a constraint bound is larger in magnitude than KTR_INFBOUND, then you must rescale the problem. See include/knitro.h for the definition of KTR_INFBOUND.
Memory issues
If you receive a Knitro termination message indicating that there was not enough memory on your computer to solve the problem, or if your problem appears to run very slowly because it is using nearly all of the available memory on your system, the following recommendations may help reduce the amount of memory used by Knitro.
- Experiment with different algorithms. Typically the Interior/Direct algorithm is chosen by default and uses the most memory. The Interior/CG and Active Set algorithms usually use much less memory. In particular if the Hessian matrix is large and dense and using most of the memory, then the Interior/CG method may offer big savings in memory. If the constraint Jacobian matrix is large and dense and using most of the memory, then the Active Set algorithm may use much less memory on your problem.
- If much of the memory usage seems to come from the Hessian matrix, try different Hessian options via the hessopt user option. In particular, the hessopt settings product_findiff, product, and lbfgs use the least amount of memory.
- Try different linear solver options in Knitro via the linsolver user option. Sometimes, even if your problem definition (e.g., Hessian and Jacobian matrices) can easily be stored in memory, the sparse linear system solvers inside Knitro may require a lot of extra memory to perform and store matrix factorizations. If your problem is relatively small, try the linsolver setting qr. For large problems, try both the ma27 and ma57 settings, as one may use significantly less memory than the other. In addition, a smaller value for the pivot user option may reduce the amount of memory needed by the linear solver.
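Putting these suggestions together, a hypothetical memory-oriented options file might look like the following (the numeric codes shown are assumptions; verify them against the option reference for your Knitro version, or use the Knitro-Tuner instead):

```
# knitro.opt -- illustrative settings that typically reduce memory use
algorithm  2    # Interior/CG instead of Interior/Direct
hessopt    6    # limited-memory BFGS Hessian approximation
linsolver  4    # ma27; also compare ma57
```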
Reproducibility issues across platform/computer
If you notice different results across platforms/computers for the exact same Knitro run (same model, same initial conditions, same options), it is probably due to one of the following reasons:
- The Knitro library is built with different compilers on different operating systems, which can cause small numerical differences that propagate.
- The Intel MKL library has specializations that optimize performance for particular hardware/CPUs/environments, which can also cause numerical differences.
To avoid the second issue, you may set blasoption = 0.