An optimization problem involves minimizing a function (called the objective function) of several variables, possibly subject to restrictions on the values of the variables defined by a set of constraint functions. The routines in the NAG Library are concerned with function minimization only, since the problem of maximizing a given objective function F(x) is equivalent to minimizing -F(x).
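As a minimal sketch of that equivalence in Python (scipy.optimize is used here purely as an illustrative stand-in for a NAG routine), a function F(x) is maximized by minimizing -F(x):

    # Sketch: maximizing F(x) = 4x - x^2 by minimizing its negation.
    # scipy.optimize stands in for a NAG routine here, for illustration only.
    from scipy.optimize import minimize_scalar

    def F(x):
        return 4.0 * x - x * x   # maximum at x = 2, where F(2) = 4

    res = minimize_scalar(lambda x: -F(x))   # minimize -F to maximize F
    print(res.x, -res.fun)                   # approximately 2.0 and 4.0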
Here we are concerned with finding a local minimum of a function f(x), that is, a point x_best such that f(x) >= f(x_best) for all x near x_best. Note that x is a vector of n variables, where n >= 1. As a trivial example, we might want to minimize f(x) = 3*x*x - 2*x + 1, a function of just one variable.
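For that trivial example the minimum can be found by calculus: f'(x) = 6x - 2 = 0 gives x_best = 1/3, where f(1/3) = 2/3. A numerical minimizer should reproduce this; here is a minimal sketch, again with scipy.optimize standing in for a NAG routine:

    # Sketch: minimizing f(x) = 3x^2 - 2x + 1; the analytic minimum is at
    # x = 1/3 (from f'(x) = 6x - 2 = 0), where f = 2/3.
    from scipy.optimize import minimize_scalar

    f = lambda x: 3.0 * x * x - 2.0 * x + 1.0
    res = minimize_scalar(f)
    print(res.x, res.fun)   # approximately 0.3333 and 0.6667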
Typically a minimization routine requires you to choose a starting point from which the algorithm begins its search for the minimum. It then takes a sequence of steps intended to move towards a minimum.
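As a sketch of that workflow (scipy.optimize.minimize is again an assumed stand-in, not the NAG interface), the starting point is passed as x0, and a callback can record the sequence of points the algorithm visits:

    # Sketch: supplying a starting point and recording the algorithm's steps.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: 3.0 * x[0] ** 2 - 2.0 * x[0] + 1.0   # same quadratic as above

    steps = []                                  # points visited, in order
    res = minimize(f, x0=[5.0],                 # user-chosen starting point
                   callback=lambda xk: steps.append(np.copy(xk)))
    print(res.x, len(steps))                    # ~0.3333, and the number of steps taken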
A measure of the efficiency of an optimization algorithm is how often it needs to evaluate the function being minimized. If the function f(x) is inexpensive to compute, a large number of function evaluations may not matter, but in practice f(x) may be very expensive to evaluate. Clearly it is desirable to avoid unnecessary evaluations where possible.
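One simple way to see this cost is to count the calls made to the objective; most libraries also report the count themselves (scipy, used here as an illustrative stand-in, exposes it as res.nfev):

    # Sketch: counting how many times the objective is evaluated.
    from scipy.optimize import minimize

    n_calls = 0

    def f(x):
        global n_calls
        n_calls += 1                       # each evaluation may be expensive
        return 3.0 * x[0] ** 2 - 2.0 * x[0] + 1.0

    res = minimize(f, x0=[5.0])
    print(n_calls, res.nfev)               # our count and the solver's own count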
In this demo, so that results can be presented easily in graphical form, we consider only the minimization of functions of two variables (call them x and y). However, real-life problems might involve many thousands of variables.
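A two-variable problem of the kind shown in this demo might look like the following sketch; the quadratic bowl here is an assumed example, not one taken from the demo itself:

    # Sketch: a two-variable problem, f(x, y) = (x - 1)^2 + 2(y + 0.5)^2,
    # with minimum at (1, -0.5). An assumed example for illustration.
    from scipy.optimize import minimize

    def f(v):
        x, y = v
        return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2

    res = minimize(f, x0=[0.0, 0.0])
    print(res.x)    # approximately [1.0, -0.5]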
See the NAG Library E04 Chapter Introduction for a great deal of advice on how to choose which NAG optimization methods are appropriate for your problem.