
Extrema of functions of several variables. A necessary condition for an extremum. Sufficient condition for an extremum. Conditional extremum. Method of Lagrange multipliers. Finding the largest and smallest values.

Lecture 5

Definition 5.1. A point M0(x0, y0) is called a maximum point of the function z = f(x, y) if f(x0, y0) > f(x, y) for all points (x, y) from some neighborhood of the point M0.

Definition 5.2. A point M0(x0, y0) is called a minimum point of the function z = f(x, y) if f(x0, y0) < f(x, y) for all points (x, y) from some neighborhood of the point M0.

Remark 1. Maximum and minimum points are called extremum points of a function of several variables.

Remark 2. The extremum point for a function of any number of variables is defined in a similar way.

Theorem 5.1 (necessary conditions for an extremum). If M0(x0, y0) is an extremum point of the function z = f(x, y), then at this point the first-order partial derivatives of this function are equal to zero or do not exist.

Proof.

Let us fix the value of the variable y, setting y = y0. Then the function f(x, y0) is a function of one variable x, for which x = x0 is an extremum point. Therefore, by Fermat's theorem, ∂f/∂x(x0, y0) = 0 or this derivative does not exist. The same assertion is proved for ∂f/∂y(x0, y0).

Definition 5.3. Points belonging to the domain of a function of several variables at which the partial derivatives of the function are equal to zero or do not exist are called stationary points of this function.

Comment. Thus, the extremum can be reached only at stationary points, but it is not necessarily observed at each of them.

Theorem 5.2 (sufficient conditions for an extremum). Let the function z = f(x, y) have continuous partial derivatives up to the 3rd order inclusive in some neighborhood of the point M0(x0, y0), which is a stationary point of this function. Denote A = f''xx(x0, y0), B = f''xy(x0, y0), C = f''yy(x0, y0). Then:

1) f(x, y) has a maximum at the point M0 if AC − B² > 0 and A < 0;

2) f(x, y) has a minimum at the point M0 if AC − B² > 0 and A > 0;

3) there is no extremum at the point M0 if AC − B² < 0;

4) if AC − B² = 0, additional investigation is needed.

Proof.

Let us write the second-order Taylor formula for the function f(x, y), keeping in mind that at a stationary point the first-order partial derivatives are equal to zero:

Δf = f(x0 + Δx, y0 + Δy) − f(x0, y0) = ½(A Δx² + 2B ΔxΔy + C Δy²) + o(Δρ²),

where Δρ = √(Δx² + Δy²). If the angle between the segment M0M, where M(x0 + Δx, y0 + Δy), and the axis Ox is denoted by φ, then Δx = Δρ cos φ, Δy = Δρ sin φ. In this case the Taylor formula takes the form:

Δf = (Δρ²/2)·(A cos²φ + 2B cos φ sin φ + C sin²φ) + o(Δρ²). (5.1)

Let A ≠ 0. Then we can divide and multiply the expression in parentheses by A. We get:

Δf = (Δρ²/(2A))·((A cos φ + B sin φ)² + (AC − B²) sin²φ) + o(Δρ²).

Consider now four possible cases:

1) Let AC − B² > 0, A < 0. Then ((A cos φ + B sin φ)² + (AC − B²) sin²φ)/A < 0 for every φ, and therefore Δf < 0 for sufficiently small Δρ. Hence, in some neighborhood of M0, f(x0 + Δx, y0 + Δy) < f(x0, y0), that is, M0 is a maximum point.

2) Let AC − B² > 0, A > 0. Then, in the same way, ((A cos φ + B sin φ)² + (AC − B²) sin²φ)/A > 0, so Δf > 0 for sufficiently small Δρ, and M0 is a minimum point.

3) Let AC − B² < 0, A > 0. Consider the increment of the arguments along the ray φ = 0. Then it follows from (5.1) that Δf ≈ (Δρ²/2)·A > 0, that is, when moving along this ray the function increases. If instead we move along the ray for which tan φ0 = −A/B, then A cos φ0 + B sin φ0 = 0 and Δf ≈ (Δρ²/(2A))·(AC − B²) sin²φ0 < 0, so when moving along this ray the function decreases. Hence the point M0 is not an extremum point.

3′) When AC − B² < 0, A < 0, the proof of the absence of an extremum is carried out similarly to the previous case.

3″) If AC − B² < 0, A = 0, then B ≠ 0 (otherwise AC − B² = 0), and formula (5.1) gives Δf = (Δρ²/2)·sin φ·(2B cos φ + C sin φ) + o(Δρ²). For sufficiently small φ the expression 2B cos φ + C sin φ is close to 2B, that is, it retains a constant sign, while sin φ changes sign near φ = 0. This means that the increment of the function changes sign in any neighborhood of the stationary point, which is therefore not an extremum point.

4) If AC − B² = 0 and A ≠ 0, then along the ray with tan φ0 = −A/B we have A cos φ0 + B sin φ0 = 0, so Δf = o(Δρ²) there, that is, the sign of the increment is determined by the sign of the higher-order remainder term. In this case further investigation is needed to settle the question of the existence of an extremum.

Example. Let us find the extremum points of the function z = x² − 2xy + 2y² + 2x. To find the stationary points we solve the system z'x = 2x − 2y + 2 = 0, z'y = −2x + 4y = 0. Hence the stationary point is (−2, −1). Here A = 2, B = −2, C = 4. Then AC − B² = 4 > 0, so an extremum is reached at the stationary point, namely a minimum (since A > 0).
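The calculation above can be verified with a short symbolic sketch (a minimal illustration using Python's sympy library; the code is not part of the original lecture):

    import sympy as sp

    x, y = sp.symbols('x y')
    z = x**2 - 2*x*y + 2*y**2 + 2*x

    # Stationary points: z'_x = 0, z'_y = 0
    stat = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y])      # {x: -2, y: -1}

    # Second-order test at the stationary point
    A = sp.diff(z, x, 2).subs(stat)                              # 2
    B = sp.diff(z, x, y).subs(stat)                              # -2
    C = sp.diff(z, y, 2).subs(stat)                              # 4
    print(stat, A * C - B**2)                                    # AC - B**2 = 4 > 0 and A > 0: minimum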

Definition 5.4. If the arguments of the function f(x1, x2,…, xn) are connected by additional conditions in the form of m equations (m < n):

φ1(x1, x2,…, xn) = 0, φ2(x1, x2,…, xn) = 0, …, φm(x1, x2,…, xn) = 0, (5.2)

where the functions φi have continuous partial derivatives, then equations (5.2) are called constraint equations.

Definition 5.5. An extremum of the function f(x1, x2,…, xn) under conditions (5.2) is called a conditional extremum.

Comment. We can offer the following geometric interpretation of the conditional extremum of a function of two variables: let the arguments of the function f(x, y) be related by the equation φ(x, y) = 0, which defines a curve in the plane Oxy. Raising from each point of this curve a perpendicular to the plane Oxy until it meets the surface z = f(x, y), we obtain a spatial curve lying on the surface above the curve φ(x, y) = 0. The problem is to find the extremum points of the resulting curve, which, of course, in general do not coincide with the unconditional extremum points of the function f(x, y).

Let us derive necessary conditions for a conditional extremum of a function of two variables, first introducing the following definition:

Definition 5.6. The function

L(x1, x2,…, xn) = f(x1, x2,…, xn) + λ1φ1(x1, x2,…, xn) + λ2φ2(x1, x2,…, xn) + … + λmφm(x1, x2,…, xn), (5.3)

where the λi are some constants, is called the Lagrange function, and the numbers λi are called the indefinite Lagrange multipliers.

Theorem 5.3 (necessary conditions for a conditional extremum). A conditional extremum of the function z = f(x, y) in the presence of the constraint equation φ(x, y) = 0 can be reached only at stationary points of the Lagrange function L(x, y) = f(x, y) + λφ(x, y).

Proof. The constraint equation defines an implicit dependence of y on x, so we will assume that y is a function of x: y = y(x). Then z is a composite function of x, and its critical points are determined by the condition

dz/dx = f'x + f'y·y'(x) = 0. (5.4)

It follows from the constraint equation that

φ'x + φ'y·y'(x) = 0. (5.5)

We multiply equality (5.5) by some number λ and add it to (5.4). We get:

(f'x + f'y·y'(x)) + λ(φ'x + φ'y·y'(x)) = 0, or (f'x + λφ'x) + (f'y + λφ'y)·y'(x) = 0.

The last equality must hold at stationary points, from which it follows:

f'x + λφ'x = 0,  f'y + λφ'y = 0,  φ(x, y) = 0. (5.6)

A system of three equations for three unknowns is obtained: x, y and λ, with the first two equations being the conditions for the stationary point of the Lagrange function. Eliminating the auxiliary unknown λ from system (5.6), we find the coordinates of the points at which the original function can have a conditional extremum.

Remark 1. The presence of a conditional extremum at the found point can be checked by studying the second-order partial derivatives of the Lagrange function by analogy with Theorem 5.2.

Remark 2. The points at which a conditional extremum of the function f(x1, x2,…, xn) under conditions (5.2) can be reached may be found as solutions of the system

∂L/∂x1 = 0, …, ∂L/∂xn = 0, φ1(x1,…, xn) = 0, …, φm(x1,…, xn) = 0. (5.7)

Example. Find the conditional extremum of the function z = xy under the condition x + y = 1. Compose the Lagrange function L(x, y) = xy + λ(x + y − 1). System (5.6) then looks like this:

y + λ = 0,  x + λ = 0,  x + y − 1 = 0.

Whence −2λ = 1, λ = −0.5, x = y = −λ = 0.5. On the constraint line x + y − 1 = 0 the function can be represented as z = xy = 0.25 − 0.25(x − y)² ≤ 0.25; therefore, at the found stationary point z = xy has a conditional maximum, equal to 0.25.
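System (5.6) for this example can also be solved symbolically (a minimal sympy sketch, not part of the original lecture):

    import sympy as sp

    x, y, lam = sp.symbols('x y lam')
    f = x * y
    phi = x + y - 1
    L = f + lam * phi        # Lagrange function

    # System (5.6): L'_x = 0, L'_y = 0, phi = 0
    sol = sp.solve([sp.diff(L, x), sp.diff(L, y), phi], [x, y, lam], dict=True)
    print(sol)               # [{x: 1/2, y: 1/2, lam: -1/2}]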

Definition 1: A function z = f(x, y) is said to have a local maximum at a point M0(x0, y0) if there exists a neighborhood of the point M0 such that for any point M with coordinates (x, y) from this neighborhood the inequality f(M) < f(M0) is fulfilled. In this case Δf = f(M) − f(M0) < 0, i.e., the increment of the function is negative.

Definition 2: A function z = f(x, y) is said to have a local minimum at a point M0(x0, y0) if there exists a neighborhood of the point M0 such that for any point M with coordinates (x, y) from this neighborhood the inequality f(M) > f(M0) is fulfilled. In this case Δf = f(M) − f(M0) > 0, i.e., the increment of the function is positive.

Definition 3: Local minimum and maximum points are called extremum points.

Conditional Extrema

When searching for extrema of a function of several variables, problems often arise related to the so-called conditional extremum. This concept can be explained by the example of a function of two variables.

Let a function z = f(x, y) and a line L in the plane Oxy be given. The task is to find on the line L a point P(x, y) at which the value of the function is the largest or smallest compared with the values of this function at the points of the line L located near the point P. Such points P are called conditional extremum points of the function on the line L. Unlike an ordinary extremum point, the value of the function at a conditional extremum point is compared with the values of the function not at all points of some neighborhood of it, but only at those that lie on the line L.

It is quite clear that an ordinary extremum point (one also says an unconditional extremum point) is also a conditional extremum point for any line passing through this point. The converse, of course, is not true: a conditional extremum point need not be an ordinary extremum point. Let me explain this with a simple example. The graph of the function z = √(1 − x² − y²) is the upper unit hemisphere (Appendix 3, Fig. 3).

This function has a maximum at the origin; it corresponds to the top M of the hemisphere. If the line L is the straight line passing through the points A and B (its equation is x + y − 1 = 0), then it is geometrically clear that for the points of this line the maximum value of the function is reached at the point lying midway between the points A and B. This is the conditional extremum (maximum) point of the function on the given line; it corresponds to the point M1 on the hemisphere, and it is seen from the figure that there can be no question of any ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values of a function in a closed region, we have to find the extremal values of the function on the boundary of this region, i.e. on some line, and thereby solve a conditional extremum problem.

Let us now proceed to the practical search for conditional extremum points of the function z = f(x, y) provided that the variables x and y are related by the equation φ(x, y) = 0. This relation will be called the constraint equation. If y can be expressed from the constraint equation explicitly in terms of x, y = y(x), we get a function of one variable z = f(x, y(x)) = Φ(x).

Having found the values of x at which this function reaches an extremum, and then determining the corresponding values of y from the constraint equation, we obtain the desired conditional extremum points.

So, in the above example, from the constraint equation x + y − 1 = 0 we have y = 1 − x. Hence

z = √(1 − x² − (1 − x)²) = √(2x − 2x²).

It is easy to check that z reaches its maximum at x = 0.5; but then from the constraint equation y = 0.5, and we get exactly the point P found from geometric considerations.
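Assuming, as in the text above, the upper unit hemisphere z = √(1 − x² − y²), this one-variable reduction can be checked symbolically (a small sympy sketch):

    import sympy as sp

    x = sp.symbols('x')
    z = sp.sqrt(1 - x**2 - (1 - x)**2)           # y = 1 - x substituted from the constraint
    crit = sp.solve(sp.diff(z, x), x)            # [1/2]
    print(crit, z.subs(x, sp.Rational(1, 2)))    # [1/2] sqrt(2)/2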

The conditional extremum problem is also solved simply when the constraint equation can be represented by parametric equations x = x(t), y = y(t). Substituting these expressions for x and y into the function z = f(x, y), we again come to the problem of finding the extremum of a function of one variable.

If the constraint equation has a more complex form and we can neither express one variable explicitly in terms of the other nor replace the equation by parametric ones, then the problem of finding a conditional extremum becomes more difficult. We will continue to assume that in the expression of the function z = f(x, y) the variable y is an implicit function of x defined by the equation φ(x, y) = 0. The total derivative of the function z = f(x, y) with respect to x is equal to

dz/dx = f'x(x, y) + f'y(x, y)·y',

where the derivative y' = −φ'x(x, y)/φ'y(x, y) is found by the rule for differentiating an implicit function. At the conditional extremum points this total derivative must be equal to zero; this gives one equation relating x and y. Since they must also satisfy the constraint equation, we get a system of two equations with two unknowns:

f'x(x, y) + f'y(x, y)·y' = 0,  φ(x, y) = 0.

Let us transform this system into a much more convenient one by writing the first equation as a proportion and introducing a new auxiliary unknown λ:

f'x(x, y)/φ'x(x, y) = f'y(x, y)/φ'y(x, y) = −λ

(the minus sign is placed in front for convenience). From these equalities it is easy to pass to the following system:

f'x(x, y) + λφ'x(x, y) = 0,  f'y(x, y) + λφ'y(x, y) = 0, (*)

which, together with the constraint equation φ(x, y) = 0, forms a system of three equations with the unknowns x, y, and λ.

These equations (*) are easiest to remember using the following rule: in order to find the points that can be conditional extremum points of the function

z = f(x, y) with the constraint equation φ(x, y) = 0, one should form the auxiliary function

F(x, y) = f(x, y) + λφ(x, y),

where λ is some constant, and write down the equations for finding the extremum points of this function.

This system of equations provides, as a rule, only necessary conditions, i.e., not every pair of values x and y satisfying it is necessarily a conditional extremum point. I will not give sufficient conditions for conditional extremum points; very often the specific content of the problem itself suggests what the found point is. The described technique for solving conditional extremum problems is called the method of Lagrange multipliers.

A sufficient condition for an extremum of a function of two variables

1. Let the function z = f(x, y) be continuously differentiable in some neighborhood of the point M0(x0, y0) and have continuous second-order partial derivatives (pure and mixed) there.

2. Denote by Δ the second-order determinant composed of the second-order partial derivatives at this point, i.e. the determinant with rows (f''xx, f''xy) and (f''xy, f''yy):

Δ(x0, y0) = f''xx(x0, y0)·f''yy(x0, y0) − (f''xy(x0, y0))².


Theorem

If the point M0(x0, y0) is a stationary point of the function z = f(x, y), then:

A) when Δ(M0) > 0, it is a local extremum point: a local maximum if f''xx(M0) < 0 and a local minimum if f''xx(M0) > 0;

B) when Δ(M0) < 0, the point is not a local extremum point;

C) if Δ(M0) = 0, both cases are possible and further investigation is required.

Proof

We write the Taylor formula for the function f(x, y) at the point (x0, y0), limiting ourselves to two terms:

f(x0 + Δx, y0 + Δy) = f(x0, y0) + f'x(x0, y0)Δx + f'y(x0, y0)Δy + ½(f''xx(x0, y0)Δx² + 2f''xy(x0, y0)ΔxΔy + f''yy(x0, y0)Δy²) + o(ρ²), where ρ = √(Δx² + Δy²).

Since, according to the hypothesis of the theorem, the point is stationary, the first-order partial derivatives are equal to zero, i.e. f'x(x0, y0) = 0 and f'y(x0, y0) = 0. Then

Δf = f(x0 + Δx, y0 + Δy) − f(x0, y0) = ½(f''xx(x0, y0)Δx² + 2f''xy(x0, y0)ΔxΔy + f''yy(x0, y0)Δy²) + o(ρ²).

Denote

A = f''xx(x0, y0), B = f''xy(x0, y0), C = f''yy(x0, y0).

Then the increment of the function takes the form:

Δf = ½(AΔx² + 2BΔxΔy + CΔy²) + αρ²,

where, due to the continuity of the second-order partial derivatives (pure and mixed) assumed in the theorem, α = α(Δx, Δy) → 0 as ρ → 0.

1. Let Δ > 0, i.e. AC − B² > 0; then A ≠ 0, so A > 0 or A < 0.

2. We multiply and divide the increment of the function by A; we get:

Δf = 1/(2A)·(A²Δx² + 2ABΔxΔy + ACΔy²) + αρ².

3. Complete the expression in curly brackets to the full square of a sum:

Δf = 1/(2A)·{(AΔx + BΔy)² + (AC − B²)Δy²} + αρ².

4. The expression in curly brackets is non-negative, since AC − B² > 0; being a positive definite quadratic form in Δx, Δy, it is not smaller than mρ² for some m > 0, so for sufficiently small ρ the first term determines the sign of Δf.

5. Therefore, if A > 0, then Δf > 0 for all sufficiently small ρ ≠ 0, and, according to the definition, the point (x0, y0) is a point of local minimum.

6. If A < 0, then Δf < 0 for all sufficiently small ρ ≠ 0, and, according to the definition, the point with coordinates (x0, y0) is a local maximum point.

2. Now let Δ < 0, i.e. AC − B² < 0. Consider the square trinomial P(t) = A + 2Bt + Ct²; its discriminant 4(B² − AC) is greater than zero.

3. Hence there are values t1 and t2 at which the trinomial takes values of opposite signs: P(t1) > 0, P(t2) < 0.

4. The total increment of the function at the point, in accordance with the expression obtained above, can be written in the form

Δf = ½Δx²·(A + 2Bt + Ct²) + αρ², where t = Δy/Δx.

5. Owing to the continuity of the second-order partial derivatives assumed in the theorem, α → 0 as ρ → 0; therefore, for each value of t with P(t) ≠ 0 there exists a neighborhood of the point (x0, y0) in which, for every point of the ray Δy = tΔx, the sign of Δf coincides with the sign of P(t).

6. Consider a δ-neighborhood of the point (x0, y0). Choose any Δx ≠ 0 so small that the point (x0 + Δx, y0 + t1Δx) lies in this neighborhood; substituting Δy = t1Δx into the formula for the increment of the function, we get

Δf = ½Δx²·P(t1) + αρ².

7. Since P(t1) > 0 and α → 0, we have Δf > 0 at such points.

8. Arguing similarly for the value t2, for which P(t2) < 0, we get that in any δ-neighborhood of the point there is also a point at which Δf < 0. Therefore, in a neighborhood of the point (x0, y0) the increment Δf does not preserve its sign, and hence there is no extremum at this point.

Conditional extremum of a function of two variables

When searching for extrema of a function of two variables, problems often arise related to the so-called conditional extremum. This concept can be explained by the example of a function of two variables.

Let a function z = f(x, y) and a line L be given in the plane Oxy. The task is to find on the line L a point P(x, y) at which the value of the function is the largest or smallest compared with the values of this function at the points of the line L located near the point P. Such points P are called conditional extremum points of the function on the line L. In contrast to an ordinary extremum point, the value of the function at a conditional extremum point is compared with the values of the function not at all points of some neighborhood of it, but only at those that lie on the line L.

It is quite clear that the point of the usual extremum (they also say the unconditional extremum) is also the point of the conditional extremum for any line passing through this point. The converse, of course, is not true: a conditional extremum point may not be a conventional extremum point. Let's illustrate what has been said with an example.

Example #1. The graph of the function z = √(1 − x² − y²) is the upper unit hemisphere (Fig. 2).

Fig. 2.

This function has a maximum at the origin; it corresponds to the vertex M of the hemisphere. If the line L is the straight line passing through the points A and B (its equation is x + y − 1 = 0), then it is geometrically clear that for the points of this line the maximum value of the function is reached at the point lying midway between the points A and B. This is the conditional extremum (maximum) point of the function on this line; it corresponds to the point M1 on the hemisphere, and it is seen from the figure that there can be no question of any ordinary extremum here.

Note that in the final part of the problem of finding the largest and smallest values of a function in a closed region, one has to find the extremal values of the function on the boundary of this region, i.e. on some line, and thereby solve a conditional extremum problem.

Definition 1. The function z = f(x, y) is said to have a conditional (or relative) maximum (minimum) at a point M0(x0, y0) satisfying the constraint equation φ(x, y) = 0 if, for every point M(x, y) of some neighborhood of M0 that satisfies the constraint equation, the inequality f(M) ≤ f(M0) (respectively f(M) ≥ f(M0)) holds.

Definition 2. An equation of the form φ(x, y) = 0 is called a constraint equation.

Theorem

If the functions f(x, y) and φ(x, y) are continuously differentiable in a neighborhood of a point M0(x0, y0), the partial derivative φ'y(M0) ≠ 0, and the point M0 is a conditional extremum point of the function f with respect to the constraint equation φ(x, y) = 0, then at this point the second-order determinant with rows (f'x, f'y) and (φ'x, φ'y) is equal to zero:

f'x(M0)·φ'y(M0) − f'y(M0)·φ'x(M0) = 0.

Proof

1. Since, according to the hypothesis of the theorem, the partial derivative φ'y(M0) ≠ 0 and the value of the function φ(x0, y0) = 0, in some rectangle containing M0

the constraint equation defines an implicit function y = y(x) with y0 = y(x0).

The composite function of one variable z = f(x, y(x)) has a local extremum at the point x0; therefore dz/dx(x0) = 0, i.e. f'x(M0) + f'y(M0)·y'(x0) = 0.

2. Indeed, by the invariance of the form of the first-order differential, dz = f'x(M0)dx + f'y(M0)dy = 0. (2)

3. The constraint equation φ(x, y(x)) = 0 can be represented in the same form, which means

φ'x(M0)dx + φ'y(M0)dy = 0. (3)

4. Multiply equation (2) by φ'y(M0) and equation (3) by −f'y(M0) and add them; the terms with dy cancel, and we obtain

(f'x(M0)·φ'y(M0) − f'y(M0)·φ'x(M0))·dx = 0.

Therefore, since dx is arbitrary, f'x(M0)·φ'y(M0) − f'y(M0)·φ'x(M0) = 0, which was to be proved.

Corollary

In practice, the search for conditional extremum points of a function of two variables is carried out by solving the system of equations

f'x(x, y)·φ'y(x, y) − f'y(x, y)·φ'x(x, y) = 0,  φ(x, y) = 0.

So, in Example #1 above, from the constraint equation x + y − 1 = 0 we have y = 1 − x. Hence z = √(1 − x² − (1 − x)²) = √(2x − 2x²), and it is easy to check that z reaches its maximum at x = 0.5. But then from the constraint equation y = 0.5, and we get the point P found geometrically.

Example #2. Find the conditional extremum points of the function with respect to the constraint equation.

Let us find the partial derivatives of the given function and of the constraint equation:

Compose the second-order determinant:

Let's write down the system of equations for finding conditional extremum points:

hence, the function has four conditional extremum points; their coordinates are found from this system.

Example #3. Find the extremum points of the function.

Equating the partial derivatives to zero, we find one stationary point, the origin. Here AC − B² < 0; therefore, the point (0, 0) is not an extremum point. The graph of the function is a hyperbolic paraboloid (Fig. 3), and the figure shows that the point (0, 0) is a saddle rather than an extremum point.

Fig. 3.
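The specific function of this example was lost in transcription. Taking the hyperbolic paraboloid z = x² − y² as a stand-in (an assumption, not necessarily the author's function), the second-derivative test looks like this in sympy:

    import sympy as sp

    x, y = sp.symbols('x y')
    z = x**2 - y**2          # assumed stand-in: a hyperbolic paraboloid with a saddle at the origin

    # The only stationary point is the origin
    stat = sp.solve([sp.diff(z, x), sp.diff(z, y)], [x, y])      # {x: 0, y: 0}

    A = sp.diff(z, x, 2)     # 2
    B = sp.diff(z, x, y)     # 0
    C = sp.diff(z, y, 2)     # -2
    print(stat, A * C - B**2)    # {x: 0, y: 0} -4 < 0: no extremum at (0, 0)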

The largest and smallest values of a function in a closed region

1. Let the function z = f(x, y) be defined and continuous in a bounded closed domain D.

2. Let the function have finite partial derivatives in this region, except for individual points of the region.

3. In accordance with the Weierstrass theorem, in this region there are points at which the function takes its largest and its smallest value.

4. If these points are interior points of the region D, then the function obviously has a maximum or a minimum at them.

5. In this case, the points of interest to us are among the points suspected of being extremum points.

6. However, the function can also take on the maximum or minimum value on the boundary of the region D.

7. In order to find the largest (smallest) value of the function in the region D, one needs to find all interior points suspected of being extremum points, calculate the values of the function at them, and then compare them with the values of the function at the boundary points of the region; the largest of all the values found will be the largest in the closed region D.

8. The method of finding a local maximum or minimum was considered earlier, in Sections 1.2 and 1.3.

9. It remains to consider the method of finding the largest and smallest values of the function on the boundary of the region.

10. In the case of a function of two variables, the area usually turns out to be bounded by a curve or several curves.

11. Along such a curve (or several curves), the variables x and y either depend on one another or both depend on one parameter.

12. Thus, on the boundary, the function turns out to be dependent on one variable.

13. The method of finding the largest value of a function of one variable was discussed earlier.

14. Let the boundary of the region D be given by the parametric equations x = x(t), y = y(t), t0 ≤ t ≤ T.

Then on this curve the function of two variables becomes a composite function of the parameter t: z = f(x(t), y(t)). For such a function, the largest and smallest values are determined by the method of finding the largest and smallest values of a function of one variable (a small symbolic sketch of the whole procedure is given below).
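As an illustration of items 7 and 14, here is a sympy sketch for a sample problem; the function f = x² + y² − x − y and the closed unit disk are chosen here only for demonstration and do not come from the lecture:

    import sympy as sp

    x, y, t = sp.symbols('x y t')
    f = x**2 + y**2 - x - y                  # sample function on the closed disk x**2 + y**2 <= 1

    # Interior candidates: stationary points of f inside the region
    interior = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])          # {x: 1/2, y: 1/2}
    candidates = [f.subs(interior)]                                      # value -1/2

    # Boundary: x = cos t, y = sin t, 0 <= t <= 2*pi, so f becomes a function of one parameter
    g = sp.simplify(f.subs({x: sp.cos(t), y: sp.sin(t)}))                # 1 - cos t - sin t
    roots = sp.solveset(sp.diff(g, t), t, sp.Interval(0, 2 * sp.pi))     # {pi/4, 5*pi/4}
    candidates += [sp.simplify(g.subs(t, r)) for r in roots]
    candidates += [g.subs(t, 0), g.subs(t, 2 * sp.pi)]                   # endpoints of the parameter range

    print(sp.Min(*candidates), sp.Max(*candidates))                      # -1/2 and 1 + sqrt(2)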

Example

Find the extremum of the function z = f(x, y) provided that x and y are related by a constraint. Geometrically, the problem means the following: on the ellipse obtained by intersecting a cylinder with a plane, it is required to find the largest or smallest value of the applicate z (Fig. 9). The problem can be solved as follows: from the equation of the cylinder (the constraint) we find y; substituting the found value of y into the equation of the plane, we obtain a function of one variable x. Thus, the problem of finding the extremum of the function z = f(x, y) under the given constraint is reduced to the problem of finding the extremum of a function of one variable on a segment.

So, the problem of finding a conditional extremum is the problem of finding an extremum of the objective function z = f(x, y) provided that the variables x and y are subject to a restriction φ(x, y) = 0, called the constraint equation.

We will say that a point M0(x0, y0) satisfying the constraint equation is a point of local conditional maximum (minimum) if there is a neighborhood of M0 such that for any point M(x, y) of this neighborhood whose coordinates satisfy the constraint equation, the inequality f(M) ≤ f(M0) (respectively f(M) ≥ f(M0)) holds.

If an expression for y can be found from the constraint equation, then, substituting this expression into the original function, we turn the latter into a composite function of one variable x.

The general method for solving the conditional extremum problem is the Lagrange multiplier method. Let us form the auxiliary function L(x, y, λ) = f(x, y) + λφ(x, y), where λ is some number. This function is called the Lagrange function, and λ the Lagrange multiplier. Thus, the problem of finding a conditional extremum has been reduced to finding local extremum points of the Lagrange function. To find the points of a possible extremum, it is necessary to solve a system of three equations with the three unknowns x, y, and λ:

L'x = f'x + λφ'x = 0,  L'y = f'y + λφ'y = 0,  φ(x, y) = 0.

Then one should use the following sufficient extremum condition.

THEOREM. Let the point M0(x0, y0) be a point of possible extremum for the Lagrange function, with the corresponding multiplier λ0. We assume that in a vicinity of the point M0 there are continuous second-order partial derivatives of the functions f and φ. Denote by Δ the determinant of the matrix with rows (0, φ'x, φ'y), (φ'x, L''xx, L''xy), (φ'y, L''xy, L''yy), all derivatives being evaluated at M0 with λ = λ0.

Then, if Δ ≠ 0, M0 is a conditional extremum point of the function z = f(x, y) with the constraint equation φ(x, y) = 0; namely, if Δ < 0, then M0 is a conditional minimum point, and if Δ > 0, then M0 is a point of conditional maximum.
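For the example z = xy with the constraint x + y − 1 = 0 considered earlier, this determinant is easy to evaluate (a sympy sketch of the matrix described in the theorem):

    import sympy as sp

    x, y, lam = sp.symbols('x y lam')
    f, phi = x*y, x + y - 1
    L = f + lam * phi

    # Determinant from the theorem, evaluated at x = y = 1/2, lam = -1/2
    D = sp.Matrix([
        [0,               sp.diff(phi, x),  sp.diff(phi, y)],
        [sp.diff(phi, x), sp.diff(L, x, 2), sp.diff(L, x, y)],
        [sp.diff(phi, y), sp.diff(L, x, y), sp.diff(L, y, 2)],
    ]).det()

    print(D.subs({x: sp.Rational(1, 2), y: sp.Rational(1, 2), lam: -sp.Rational(1, 2)}))
    # 2 > 0, so (1/2, 1/2) is a conditional maximum point, in agreement with the earlier result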

§8. Gradient and directional derivative

Let the function z = f(x, y) be defined in some (open) domain. Consider an arbitrary point M0(x0, y0) of this domain and an arbitrary directed straight line (axis) l passing through this point (Fig. 1). Let M(x, y) be some other point of this axis and Δl the length of the segment between M0 and M, taken with a plus sign if the direction of M0M coincides with the direction of the axis l, and with a minus sign if their directions are opposite.

Let M approach M0 indefinitely. The limit

∂f/∂l = lim (Δl → 0) [f(M) − f(M0)] / Δl

is called the derivative of the function z = f(x, y) in the direction l (or along the axis l) and is denoted ∂f/∂l.

This derivative characterizes the "rate of change" of the function at the point M0 in the direction l. In particular, the ordinary partial derivatives ∂f/∂x and ∂f/∂y can also be thought of as derivatives "with respect to a direction".

Suppose now that the function z = f(x, y) has continuous partial derivatives in the region under consideration, and let the axis l form the angles α and β with the coordinate axes Ox and Oy. Under the assumptions made, the directional derivative exists and is expressed by the formula

∂f/∂l = ∂f/∂x · cos α + ∂f/∂y · cos β.

If the vector a is given by its coordinates (ax, ay), then the derivative of the function z = f(x, y) in the direction of the vector a can be calculated using the formula

∂z/∂a = (∂z/∂x · ax + ∂z/∂y · ay) / |a|, where |a| = √(ax² + ay²).

The vector with coordinates (∂z/∂x, ∂z/∂y) is called the gradient vector of the function z = f(x, y) at the point M(x, y) and is denoted grad z. The gradient vector indicates the direction of the fastest increase of the function at the given point.
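A short symbolic sketch of these formulas (the function z = x² + y², the point (1, 1) and the vector a = (3, 4) are sample data chosen only for illustration; they are not the data of the example below, which were lost in transcription):

    import sympy as sp

    x, y = sp.symbols('x y')
    z = x**2 + y**2                          # sample function
    point = {x: 1, y: 1}

    # Gradient vector: the vector of first-order partial derivatives at the point
    grad = sp.Matrix([sp.diff(z, x), sp.diff(z, y)]).subs(point)     # Matrix([2, 2])

    # Derivative in the direction of a sample vector a = (3, 4)
    a = sp.Matrix([3, 4])
    directional = grad.dot(a) / a.norm()                             # (2*3 + 2*4)/5 = 14/5

    print(grad.T, directional)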

Example

Given a function z = f(x, y), a point A(1, 1) and a vector a, find: 1) grad z at the point A; 2) the derivative of the function at the point A in the direction of the vector a.

First the partial derivatives of the given function are computed at the point A; the gradient vector of the function at this point is formed from them (it can also be written using the expansion in the unit vectors i and j), and the derivative of the function in the direction of the vector a is then obtained from the formula above. ◄
