We need to show that the numerator is also strictly positive, i.e. We know from Proposition 1.

Solutions for review exercises, Chapter 1

1. All partial derivatives exist: away from the origin, they exist by Theorems 1. By Theorem 1. Indeed, matrices of the form (0 b; 0 0) are matrices whose square is 0.
This is a case where it is much easier to think of first rotating the telescope so that it is in the x, z -plane, then changing the elevation, then rotating back. Once the telescope is level, it is pointing in the direction of the x-axis.
You, the astronomer rotating the telescope, are at the negative x end of the telescope. If you rotate it counterclockwise, as seen by you, the matrix is as we say. On the other hand, we are not absolutely sure that the problem is unambiguous as stated.
It is best to think of first rotating the telescope into the (x, z)-plane, then rotating it until it is horizontal (or vertical), then rotating it on its own axis, and then rotating it back in two steps. Call the equations A, B, C, D. Since you can choose the values of y and w arbitrarily, and they determine the values of the other variables, the family of solutions depends on two parameters. This system has a solution for every value of a.
We have two equations in three unknowns; there is no unique solution. There are then two possibilities. The third is neither open nor closed. By the time we are ready to obtain a pivotal 1 at the intersection of the kth column (dotted) and kth row, all the entries on the kth row to the left of the kth column are 0, so we only need to place a 1 in position (k, k) and then justify that act by dividing all the entries on the kth row to the right of the kth column by the (k, k) entry.
If the (k, k) entry is 0, we go down the kth column until we find a nonzero entry. In computing the total number of computations, we are assuming the worst-case scenario, where all entries of the kth column are nonzero. If we have one equation in one unknown, we need to perform one division. If we have two equations in two unknowns, we need two divisions to get a pivotal 1 in the first row (the 1 is free), followed by two multiplications and two additions to get a 0 in the first element of the second row (the 0 is free).
One more division, multiplication, and addition get us a pivotal 1 in the second row and a 0 for the second element of the first row, for a total of nine. We need to do this for n − 1 entries of column k. So the function R(n) − Q(n) is increasing as a function of n for n ≥ 2, and hence is strictly positive for n ≥ 3.
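As a check on this count (a sketch, not part of the original solution), one can instrument full row reduction of an n × (n + 1) augmented matrix and count every division, multiplication, and addition, treating the pivotal 1s and the created 0s as free:

```python
# Sketch: operation count for full row reduction (Gauss-Jordan) of an
# n x (n+1) augmented matrix; pivotal 1s and created 0s cost nothing.

def count_ops(n):
    ops = 0
    for k in range(n):                 # pivot in column k, row k
        ops += n - k                   # divide the entries right of the pivot
        for i in range(n):             # clear column k in every other row
            if i != k:
                ops += 2 * (n - k)     # one multiplication + one addition
                                       # per entry right of column k
    return ops

print(count_ops(1))   # 1 division, as in the text
print(count_ops(2))   # 9 operations, as in the text
```

This gives 1 operation for one equation in one unknown and 9 for two equations in two unknowns, matching the count above.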
Denote by P(n) the total number of computations needed for partial row reduction. So by Proposition 2. Then AE1(i, x) has the same columns as A, except the ith, which is multiplied by x.
AE2(i, j, x) has the same columns as A except the jth, which is the sum of the jth column of A (contributed by the 1 in the (j, j)th position) and x times the ith column (contributed by the x in the (i, j)th position). AE3(i, j) has the same columns as A, except for the ith and jth, which are switched. Denote by a the ith row and by b the jth row of our matrix. Assume we wish to switch the ith and the jth rows. Multiplication on the left by E2(j, i, −1) and then by E1(j, −1) turns the jth row into a.
Finally, we multiply on the left by E2(i, j, −1) to subtract a from the ith row, making that row b. So we can switch rows by multiplying with only the first two types of elementary matrices. Suppose that [A I] row reduces to [I B]. This can be expressed as multiplication on the left by elementary matrices: going from equation 1 to equation 2 in Solution 2.
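The claim that a row swap needs only the first two types of elementary matrices can be verified concretely. The sign conventions below are my reconstruction of the E1/E2 sequence described above:

```python
# Sketch: swapping two rows using only row-addition (type E2) and
# row-scaling (type E1) operations, never an actual swap.

def add_row(M, i, j, x):        # row i += x * row j   (an E2 operation)
    M[i] = [a + x*b for a, b in zip(M[i], M[j])]

def scale_row(M, i, x):         # row i *= x           (an E1 operation)
    M[i] = [x*a for a in M[i]]

M = [[1.0, 2.0], [3.0, 4.0]]    # rows a and b
add_row(M, 0, 1, 1)             # row 0 becomes a + b
add_row(M, 1, 0, -1)            # row 1 becomes b - (a + b) = -a
scale_row(M, 1, -1)             # row 1 becomes a
add_row(M, 0, 1, -1)            # row 0 becomes (a + b) - a = b
print(M)                        # rows swapped: [[3.0, 4.0], [1.0, 2.0]]
```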
So B is also a right inverse of A. In either case, A is not invertible. Solution 2. So the orthonormal basis is ,.

Solutions for Chapter 2

2. But this scheme runs into trouble. Despite these bad sign variations, the Riemann sum works pretty well: the approximation to the integral above gives 0.
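The orthonormal basis referred to above is presumably produced by the Gram-Schmidt process; here is a generic sketch (the exercise's specific vectors did not survive this extraction, so the inputs below are only illustrative):

```python
import math

# Generic Gram-Schmidt sketch; the input vectors are illustrative only.

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                  # subtract projections onto the
            c = dot(w, q)                # orthonormal vectors found so far
            w = [wi - c*qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

q1, q2 = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
print(abs(dot(q1, q2)) < 1e-12)       # orthogonal
print(abs(dot(q1, q1) - 1) < 1e-12)   # unit length
```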
2. Let us suppose that this is the case. The vector ... is in the image of A. These are straightforward computations, using the linearity of T.
So the image is also closed under addition and multiplication by scalars. The last three columns of the matrix are clearly linearly independent, so the matrix has rank at least 3, and it has rank at most 3 because there can be at most three linearly independent vectors in R3.
For example, the first three columns are linearly independent, since the matrix composed of just those columns row reduces to the identity. The 3rd, 4th, and 6th columns are linearly dependent. You cannot freely choose the values of x1, x2, x5.
Since the rank of the matrix is 3, three variables must correspond to pivotal linearly independent columns.
For the variables x1 , x2 , x5 to be freely chosen, i. The kernel has dimension 1 the number of nonpivotal columns , and consists precisely of the constant polynomials.
Elsewhere, the rank (dimension of the image) is 2, so by the dimension formula the kernel has dimension 0. The rank is never 0 or 3. At the origin the rank is 1 and the dimension of the kernel is 2. Elsewhere, the kernel has dimension 0 and the rank is 3. By row operations, we can bring the matrix B to ....
As in Example 2. Row reduction then tells us for what values of a the system has no solutions. This does not contradict Proposition 2. Since (see the discussion after Corollary 2.) .... The second solution reproves the fact that if a square matrix has a left inverse, it has a right inverse, and the two inverses are equal. By Proposition 1. Note that this uses associativity of matrix multiplication (Proposition 1.). Recall that the nullity of a linear transformation is the dimension of its kernel.
Solution 2: Note first the following results: if T1, T2 : Rn → Rn are linear transformations, then 1. So A has rank n, hence nullity 0 by the dimension formula, so A is invertible. Solution 2. Since q cannot have degree greater than n, it must be the zero polynomial. It follows that there exists a solution of .... In fact this is the lowest value of k that always has a solution. If Hn is not invertible, there exist numbers a1, ....
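Assuming Hn is the Hilbert-type matrix with entries 1/(i + j − 1) (its definition did not survive this extraction), its invertibility for small n can be checked with exact rational arithmetic:

```python
from fractions import Fraction

# Sketch: invertibility of the Hilbert matrix H_n, entries 1/(i+j-1),
# via exact rational elimination. The entries are an assumption here;
# the manual's own definition of H_n is not reproduced.

def hilbert(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(M):
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        if M[k][k] == 0:
            return Fraction(0)        # (no pivoting needed for Hilbert)
        d *= M[k][k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            M[i] = [a - r*b for a, b in zip(M[i], M[k])]
    return d

for n in range(1, 6):
    print(det(hilbert(n)) != 0)       # all nonzero: H_n is invertible
```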
First, we will show that if there exists S : Rn → Rn .... Now extend it to Rn in any way, for instance by choosing a basis for img T2, extending it to a basis of Rn, and setting it equal to 0 on all the new basis vectors.
If there exists S : Rn → Rn .... This is possible, since img T2 ⊂ img T1. This is not a subspace, since 0 (the zero function) is not in it. It is an affine subspace. This is a subspace. This is not a vector subspace: 0 is in it, but that is not enough to make it a subspace. Take any basis w1, .... When you are through, the vectors obtained will be linearly independent, so they satisfy Definition 2. The approach is identical: eliminate from v1, .... We will give two solutions.
The change of basis matrix [Pp! This expresses the images of the basis vectors pi under T as linear combinations of the pi , so the coefficients are the columns of the desired matrix, giving again 2 3 1 0 2 2 2 47 60 1 4 5. We give a solution analogous to the first above. Normally one would not expect a system of nine equations in three unknowns to have a solution!
The Cayley-Hamilton theorem Theorem 4. Solutions for Chapter 2 Computing S we find 1 61 is what computers are for, but when the dust has settled, 2 3 2 3 1. Finally, to find the number of digits of c , only the term involving is relevant, and we resort to logarithms base 1 log. Thus c has digits. Inequality 1 follows from the second paragraph of the proof of Theorem 2. This result is of interest in its own right.
Moreover, by the argument above, the corresponding eigenvalues n are nonincreasing and positive, so they have a limit 0. Since is compact, the sequence n 7! By part a, the matrix A has an eigenvector v 2 with eigenvalue 0.
The vector v is still an eigenvector for An , this time with eigenvalue n. There are at least two ways around this. We can use the same Lipschitz ratio. The norm is defined in Definition 2.
What would we get if we used the norm discussed in Section 2. Next, the derivative p at the origin is the identity, so its inverse is also the identity, with length 3.
But it does apply at x1. The second derivative approach gives Solutions for Chapter 2 b. Exercise 2. We will use the length of matrices, but the norm for elements of L Mat 2, 2 , Mat 2, 2 : i. Mat 2, 2. The following virtually identical to equation 2. In fact, Exercise 2. Invertible: the derivative at 11 is , which is invertible; 2 0 hence the mapping is invertible. The same mapping as in part d is invertible at 1 A; the derivative 1 2 3 1 1 1 at that point is 4 2 0 0 5, which is invertible.
But the implicit function theorem does not say that an implicit .... So it is locally invertible near the origin, and the point 2a will be in the domain of the inverse for a sufficiently small. The Hi,j form a basis for Mat(n, n).
Their images are all multiples of themselves, so they are linearly independent unless any multiple is 0, i. It follows from the chain rule that the derivative of the inverse is the inverse of the derivative.
Pick a number y ∈ [c, d], and consider the function fy : x ↦ .... We need to check that g is continuous. Consider the sequences n ↦ .... These sequences are respectively increasing and decreasing, and both are bounded, so they have limits x0 and x1. Indeed, the sequence n ↦ .... Since g is continuous, lim h→0 ....

Solutions for review exercises, Chapter 2

2.
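The increasing and decreasing bounded sequences in this argument are essentially a bisection scheme for the inverse g; a sketch for a continuous, strictly increasing f:

```python
# Sketch of the argument: for a continuous, strictly increasing f on [a, b],
# the inverse g(y) is approximated by bisection, squeezing the answer between
# an increasing and a decreasing sequence of candidate points.

def inverse(f, a, b, y, tol=1e-12):
    lo, hi = a, b                   # f(lo) <= y <= f(hi) throughout
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid                # the increasing sequence of the proof
        else:
            hi = mid                # the decreasing sequence
    return (lo + hi) / 2

g = lambda y: inverse(lambda x: x**3 + x, 0.0, 2.0, y)
print(abs(g(2.0) - 1.0) < 1e-9)    # x^3 + x = 2 at x = 1
```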
In that case, the system of equations has a unique solution. We have already done all the work: the matrix of coefficients, i.e., .... The first statement is true; the others are false. We can now add multiples of the last k rows to the first n − k to cancel the entries of C̃, which row reduces the whole matrix to the identity.
If you guess that there is an inverse of the form ... and multiply out, you find .... Saying that a matrix is invertible is equivalent to saying that it can be row reduced to the identity. A couple of row operations will bring ....
This is an unpleasant row reduction of 2 3 1 1 1 1 0 0 40 a 1 0 1 By Proposition 2. The argument in part a applies. We saw in Example 1.
Note that Proposition 1. The subspace spanned by I, A, A2 , A3 has dimension 2. But they have the same dimension, so they are equal. By definition 2. The other nonzero entries would then be fractions. We will apply the dimension formula to the linear mapping c2k 1 T : P2k 1!
R2k defined in the margin. So from the dimension formula, the image of T has dimension 2k, hence is all of R2k. Then by Proposition 2. Similarly, if v1 ,. Then m n. Column reducing a matrix is the same as row reducing the transpose, then transposing back. Moreover, row reducing A can be done by multiplying A on the left by a product of elementary matrices, which is invertible. So they form a basis for the image of A.
The answer ends up being 2 3 2. The shaded region, described in part c, is guaranteed to contain a unique root. Let w. This is a problem most easily solved using the chain rule, not the inverse function theorem. From Example 1. If it is not, then the chain rule tells you that there are two possibilities. In this case, it is an easy consequence of Theorem 1.
Suppose first that p1 and p2 have no common factors, and that By the fundamental theorem of algebra Theorem 1. If all the ai are distinct, this shows that q2 is a polynomial of degree a1. There will be some such term x aj left from p1 x after all cancellations, since p1 has higher degree than q2 , so if after cancellation we evaluate what is left at aj , the first summand will give 0 and the second will not.
If p1 and p2 are relatively prime, then T is an injective one to one linear transformation between spaces of the same dimension, hence by the dimension formula theorem 2. In particular, the polynomial 1 is in the image. But if p1 and p2 have the common factor x c , i. In the matrix BA, where B is a product of elementary matrices, there is a pivotal 1 in every row that is not all zeros, so by column operations multiplication by elementary matrices D on the right; see Exercise 2. The theorem of the incomplete basis justifies our adding vectors to make a basis; see Exercise 2.
The matrix then has the form m 2 n n Solutions for Chapter 3 0 1 x1 3. Since b is a constant, every value of x gives one and only one value of y. If the straight line is vertical, it is the graph of the function g : R! Note that X0 has a cusp at the origin, which is why it is not a smooth curve.
Since this is a transformation R2! Thus X0 is not a smooth curve. These curves are shown in the figure at left. The graphs of p, p2 , q, and q 2 are shown at top and middle of the figure in the margin.
The graph of the function F is shown at the bottom. Looking ahead to section 3. If you slice this graph parallel to the x, y -plane, you will get the curves of Figure 3. The union is parametrized by t 7! Figure for Solution 3.
Top: The graphs of p and p2. Middle: The graphs of q and q 2. Bottom: Graph of F. The figure below shows the two parametrizations.
The left side shows Solutions for Chapter 3 87 the parametrization of part d; it certainly looks like a cone of revolution. Left: The parametrization for part d.
Right: The parametrization for part a. The equation is x b. If one of these inequalities is satisfied and the other is an equality, the circles are tangent and intersect in one point. The point x2 is on the circle of radius l1 around x1 and on the circle of radius l2 around x3.
Similarly, the point x4 is on the circle of radius l3 around x3 and the circle of radius l4 around x1. Under these conditions, there are two choices for x2 and two choices for x4, leading to four positions in all.

b. This is similar but longer. We just saw that at least one coordinate of such a cross product must be nonzero at a point in M2(3, 3).
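The two choices for x2 (and for x4) are the two intersection points of the corresponding circles; a sketch computing them (the centers and radii below are illustrative, not the exercise's data):

```python
import math

# Sketch: the two possible positions of a linkage joint lying on two
# circles with centers c1, c2 and radii r1, r2.

def circle_intersections(c1, r1, c2, r2):
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0:
        return []                       # circles do not meet (or concentric)
    a = (r1*r1 - r2*r2 + d*d) / (2*d)   # distance from c1 to the chord
    h = math.sqrt(max(r1*r1 - a*a, 0.0))  # h = 0 in the tangent case
    mx, my = c1[0] + a*dx/d, c1[1] + a*dy/d
    return [(mx + h*dy/d, my - h*dx/d),
            (mx - h*dy/d, my + h*dx/d)]

pts = circle_intersections((0.0, 0.0), 1.0, (1.0, 0.0), 1.0)
print(len(pts))                         # two choices, as in the text
```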
Solution 3. But in this case, at these values of a and b the locus Xa,b is a single point, which is not a smooth curve.
No, Theorem 3. We define F : R3 → .... The case where the candidate coordinates consist of x2 and x4 and two y-variables. We can choose the values of the coordinate functions to be whatever we like; in particular, they can be 0. At the base position A, all the yi vanish. There are many (8 choose 4, i.e., 70) candidates. Let us make some preliminary simplifications, which will eliminate many of them. Clearly y1, y2, y3, y4 are not a possible set of coordinates, since we can add any fixed constant to x1, x2, x3, x4 and stay in X2.
Any choice of three xi -variables will contain a pair of linked ones, so any set of coordinates must include either two x-variables and two y-variables, or one x-variable and three y-variables. If our candidate coordinates consist of two x-variables easily seen to be necessarily x2 and x4 , and two y-variables, then in every neighborhood of the position A, there exists a position X where the two coordinate y-variables are 0, and the two non-coordinate y-variables are nonzero.
Thus these candidate coordinates fail: their values do not determine the other four variables. When the candidate consists of one x-coordinate and three y-coordinates, two y-coordinates must belong to non-linked points, say y1 and y3 y2 and y4 would do just as well , and without loss of generality, we may assume that the third coordinate y-variable is y2.
Thus the candidate y1 , y2 , y3 , and xj are not coordinates for any j, as they do not determine y4. The upshot is that there is no successful candidate for coordinate variables, so X2 is not a manifold near A.
So the derivative of F never vanishes on R3. We still need to check that this actually is a solution. R3 v is injective means that its image is a plane in R3 , not a line or a point. Dg u is injective. Thus g is injective. Thus the derivative of g is everywhere injective. X is the kernel of [2u, 3v 2 ], i. The tangent space T Solutions for Chapter 3 93 0 1 u A, the tangent plane to the surface of 3.
Similarly, g(x, z) ... and y ... are on S. So all tangent planes are the same. If An is a convergent sequence of orthogonal matrices, converging to A1, then .... Solution 3. That O(n) is a manifold follows immediately from Theorem 3. Thus equation 3 is true. Any choice of positions of the m stars in the sequence automatically implies the positions of the n − 1 bars, and vice versa.
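The stars-and-bars correspondence just described can be checked by brute force: sequences of m stars and n − 1 bars are counted by the binomial coefficient C(m + n − 1, n − 1), which equals the number of ways to write m as an ordered sum of n nonnegative integers.

```python
from math import comb
from itertools import product

# Sketch checking stars and bars: compositions of m into n nonnegative
# parts vs. the binomial coefficient C(m+n-1, n-1).

def count_compositions(m, n):
    return sum(1 for parts in product(range(m + 1), repeat=n)
               if sum(parts) == m)

m, n = 5, 3
print(count_compositions(m, n) == comb(m + n - 1, n - 1))   # True (both 21)
```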
Clearly this function has partial derivatives of all orders everywhere, except perhaps at the origin. It is also clear that both first partials exist at the origin, and are 0 there, since f vanishes identically on the axes. In particular, f is of class C 1. This does not contradict Theorem 3.
Define g : U1! Thus the implicit function theorem Theorem 2. So locally F 1 M is the graph of h. Moreover, by equation 2. Equation 3. But we recommend against writing x, y, z for those increments, which leads to confusion.
The real danger is in confusing z and the increment to z. In Example 3. As long as you can keep variables and increments straight, it does not matter which you do. The exchange of sum and integral in equation 1 is justified t2 k because the series k!
Solutions for Chapter 3 Solution 3. The advantage of the first solution is that it is a systematic approach that always works: when a quadratic form contains no squares, choose the first term and set the new variable u to be the sum of the variables involved or, as we did in Example 3.
Thus this space of Hermitian matrices is a natural model for spacetime. First, suppose A is a symmetric matrix. But then it holds for the linear combinations of standard basis vectors, i. Thus the matrix is Solution 3. You might confirm this in a couple of cases, but proving it is quite tricky. Both p t 2 and p0 t 2 are polynomials, whose coefficients are homogeneous quadratic polynomials in the coefficients of p.
This formula describes an ellipsoid. It is shown in Figure 2. The hyperbolic cylinder. Parametrizing this hyperboloid was done as in part b, but the computations are quite a bit more unpleasant.
The hyperbola in the x, z -plane over which we are constructing the cylinder. In fact, 2 4Df! We have Figure 5 for Solution 3. The hyperboloid of one sheet of part d. If a quadratic form on W is negative definite, then by Proposition 3. Solutions for Chapter 3 3. The critical points are the points where D1 f , D2 f , and D3 f vanish, i. Here we turn the polynomial into a new polynomial, whose variables are 0 the 1 increments from the 2 point 2 A.
In part b, we use equation 3. The second-degree terms of the Taylor approximation of f about the origin are x^2 and −y^2. This form has signature (1, 1), so f has a saddle at both of these critical points.
This leads to the 3. In this case the constraints are linear, so finding a parametrization is easy. But usually finding a parametrization requires the implicit function theorem, is computationally daunting, and is less straightforward than using Lagrange multipliers (also computationally daunting). We will analyze the critical point two ways: using a parametrization that incorporates the constraints, and using the augmented Hessian matrix. In this case, a parametrization is computationally much easier.
First let us use a parametrization. Now we will use the augmented Hessian matrix. Since the constraint functions F1 and F2 are linear, their second derivatives are 0, so the only nonzero entries of the matrix B of equation 3.
So the signature of the constrained critical point is (1, 0), which means that the constrained critical point is a minimum. There are eight octants, so the total volume is 4√3. Since nothing made the z-coordinate special, the second and third points are also saddles. This could also be computed directly. The fourth corresponds to the quadratic form .... Thus it must have a positive maximum; since the function is equal to 0 at the other critical points, this maximum must be the fourth critical point.
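As an illustration of the parametrization approach to a constrained minimum (a hypothetical example, not the exercise's function): minimize f(x, y, z) = x^2 + y^2 + z^2 subject to the linear constraint x + y + z = 1 by substituting z = 1 − x − y.

```python
# Hypothetical example (not the book's): minimize f(x,y,z) = x^2 + y^2 + z^2
# subject to x + y + z = 1, by the "parametrization" approach: substitute
# z = 1 - x - y and study the unconstrained function g(x, y).

def f(x, y, z):
    return x*x + y*y + z*z

def g(x, y):                      # f restricted to the constraint plane
    return f(x, y, 1.0 - x - y)

def grad(h, x, y, eps=1e-6):      # centered finite-difference gradient
    return ((h(x + eps, y) - h(x - eps, y)) / (2*eps),
            (h(x, y + eps) - h(x, y - eps)) / (2*eps))

# The constrained critical point is x = y = z = 1/3.
gx, gy = grad(g, 1/3, 1/3)
print(abs(gx) < 1e-6, abs(gy) < 1e-6)   # the gradient of g vanishes there

# The second derivatives of g are constant (d2/dx2 = d2/dy2 = 4, cross = 2),
# a positive definite form, so the constrained critical point is a minimum.
```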
We can get the same result using the definition of the directional derivative. But the hyperbola is a closed subset of R3 , and each branch contains a point closest to the origin, i. This is incompatible with the second constraint.
Since both of the local minima of the distance function are critical points, these two points are both local minima. Therefore x0 is a saddle of f Definition 3. The function F is our constraint function. Its derivative is 2! Thus p1 A, p1 A are 2 2 constrained critical points of f. Thus 0 1 0 1 0 1 0 1 0 2 0 2 2A, 0A, 0A, 2A 0 0 0 0 are also constrained critical points of f. We showed in part a that the only critical point in the interior of the ball is at x0 , and that this point is a saddle; therefore it cannot be an extremum.
The curve is shown in the figure in the margin. In theory, you should be willing to ante up any sum less than the expected value, so for f , any sum less than 2. Obviously no bank can pay out the expected value, so to determine the real expectation you would need to know the maximum the bank will pay. Another thing to consider is how much you are willing to lose; consider the comments of Daniel Bernoulli. Solutions for Chapter 3 b. This is indeed what equation 1 gives. To determine the Taylor polynomial of f at the origin, we use equation 3.
The quadratic terms of the Taylor polynomial are X2 2 Y2. Even if you did not use the X, Y, Z notation when you computed the Taylor polynomial you would have found no first-degree terms.
Thus if you tried to apply Proposition 3. That proposition only applies to cases where at least one of a1 and a2 is nonzero. Definition 3. Remember or look up that the radius of the earth is km. Thus the real radius of the arctic circle the radius as measured in the plane containing the arctic circle is sin The corresponding circle has radius on earth about km. The hypocycloid looks like the solid curve in the margin top.
Using Definition 3. Exercise 3. Solutions for review exercises, chapter 3 3. Look at the first entry, then the second. At the point 1 A, the derivative of F is [ 4 3 5 ]. The tangent space to the surface is the kernel of the derivative, i. The derivative of the mapping R6! So X is a manifold of dimension 3 near the points in equation 1.
The tangent space is the kernel of the matrix computed in part b. Moral: If you can, compute Taylor polynomials from known expansions rather than by computing partial derivatives.

Solutions for review exercises, Chapter 3

3. The implicit function theorem says that this will happen if ... Solution 3.
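To illustrate the moral about computing Taylor polynomials from known expansions (a sketch with an assumed example, not the exercise's function): the degree-2 Taylor polynomial of e^x sin y at the origin can be read off by multiplying the known series for e^x and sin y, giving y + xy, and checked numerically.

```python
import math

# Sketch of the moral: multiply known series instead of differentiating.
# e^x = 1 + x + x^2/2 + ... and sin y = y - y^3/6 + ...; keeping total
# degree <= 2 gives the Taylor polynomial p(x, y) = y + x*y.

def f(x, y):
    return math.exp(x) * math.sin(y)

def p(x, y):
    return y + x*y

# The error of a degree-2 Taylor polynomial is O(r^3) near the origin.
for r in (0.1, 0.01):
    print(abs(f(r, r) - p(r, r)) < 10 * r**3)
```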
For a picture of this surface, see Figure 3. One solution is .... The other solutions are of the form ..., for an appropriate value of a. At the origin, the quadratic terms are evidently −x^2 − y^2 − z^2, so that point is a maximum. This point is also a (1, 2)-saddle. Since the expression for the function is symmetric with respect to the variables, evidently the other two points behave the same way.
Consider the figure at left. It follows from high school geometry that the quadrilateral can be inscribed in a circle; see the figure below. Left: Our quadrilateral, inscribed in a circle. Right: An angle inscribed in a circle is half the corresponding angle at the center of the circle. This is key to proving that a quadrilateral can be inscribed in a circle if and only if opposite angles are supplementary, a result you may remember from high school.
The quadratic form 1 is called the second fundamental form of a surface at a point. We could also see that the second fundamental form is degenerate by considering the first equation in Solution 3. This is what one would expect: our surface is a cone, so it becomes flatter as we move away from the vertex of the cone.
One way to explain why the Gaussian curvature is 0 is to look at equation 3. Our surface is a cone, which can be made out of a flat piece of paper, with no distortion. Thus the area of a disc around a point on the surface a point not the vertex of the cone is the same as the area of a flat disc with the same radius. Therefore, K must be 0. For another explanation, note that by Definition 3.
If both A0,2 and A2,0 are negative, it can be written .... Thus the quadratic form (second fundamental form) is degenerate. Why should the second fundamental form of a cone be degenerate? In adapted coordinates, a surface is locally the graph of a function whose domain is the tangent space and whose values are in the normal line; see equation 3. Thus the quadratic terms making up the second fundamental form are also a map from the tangent plane to the normal line.
In the case of a cone, the tangent plane to the cone contains a line of the cone, so the quadratic terms vanish on that line. Therefore, when constructing the augmented Hessian matrix, one must take into account these second derivatives and the Lagrange multipliers corresponding to each critical point. Again, the constrained critical point is a minimum, with signature (1, 0).
This has signature (2, 3), so the constrained critical point has signature (0, 1); it is a maximum. This follows from case (2), using that A and B are symmetric, i.e., .... We have never seen a case where the sequence k ↦ Ak fails to converge, but here is a proof that the sequence always has a convergent subsequence. This is a sequence in the orthogonal group O(n), so it has a subsequence i ↦ Pmi that converges, say, to the orthogonal matrix P. Thus x3 = ....

Solutions for Chapter 4

4. This index is k, which tells where the cube is.
The number n of entries of k gives necessary information: if it is 2, the cube is in R2; if it is 3, the cube is in R3; and so on. But we do not need to know what the entries are to compute the n-dimensional volume. Since our theory of integration is based on dyadic decompositions, we divide the interval from 0 to 1 into 2^N pieces, as suggested by the figure at left. Let us compare upper and lower sums when the interval is [0, 1].
For the lower sum (which corresponds to the left Riemann sum, since the function is increasing), the width of each piece is 1/2^N, the height of the first piece is 0, that of the second is 1/2^N, and so on, ending with (2^N − 1)/2^N.
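These lower and upper dyadic sums for f(x) = x on [0, 1] can be computed directly; they differ by exactly 1/2^N and squeeze to the integral 1/2:

```python
# Sketch of the dyadic lower and upper sums for f(x) = x on [0, 1]:
# the lower sum uses left endpoints, the upper sum right endpoints.

def lower_sum(N):
    w = 1.0 / 2**N
    return sum((k * w) * w for k in range(2**N))

def upper_sum(N):
    w = 1.0 / 2**N
    return sum(((k + 1) * w) * w for k in range(2**N))

N = 10
print(abs(upper_sum(N) - lower_sum(N) - 1.0 / 2**N) < 1e-15)  # gap = 1/2^N
print(abs(lower_sum(N) - 0.5) < 1.0 / 2**N)   # both sums squeeze to 1/2
```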
This is fundamentally easy; the difficulty is all in seeing exactly how many dyadic intervals are contained in [0, a], and fiddling with the little bit at the right that is left over. Consider first the case of the closed interval, i.
To find the integral, we will use the lower sums (which are slightly less messy). By Proposition 4. The outer terms above can be made arbitrarily close, so the inner terms can also. So taking the limit as N → ∞, .... Case X ∪ Y: inequality 1 is Proposition 4. Let X be the set in question.
This is an acceptable integrand. This is not an acceptable integrand; it gives an infinite limit. This is an acceptable integrand; the limit it gives is 1. Then the relevant Riemann sum is .... If N is much smaller than M, the limit is infinite, and if N is much bigger than M, the limit is 0. Therefore, ... is not a reasonable integrand. There exists a sequence n ↦ .... Let f be the indicator function of the rationals in [0, 1], and g the indicator function of the irrationals in [0, 1]: i.e., ....
Some town discovers that 3 times as many children die from leukemia as the national average for towns of that size: 9 deaths rather than 3.
This creates an uproar, and an intense hunt for the cause. But is there really a cause to discover? Among towns of that size, the number of leukemia cases will vary, and some place will have the maximum. The question is: with the number of towns that there are, would you expect to find one that far from the expected value?

4. Every individual outcome is very unlikely (though some are more unlikely than others). The question that makes sense to ask is: how many standard deviations is the observed outcome from average?
You can also ask whether the number of repetitions of the experiment (in this case 15) is large enough to use the figures of Figure 4.
The quick answer is that these results are not very likely. If we chart the results for each integer, the resulting bar graph does not look like a bell curve, as shown in the margin. To give a more detailed answer, we must compute the standard deviation for the experiment. None of the 14 coin tosses resulted in no heads appearing; heads appeared exactly once in two of the 14 coin tosses, and so on. So the standard deviation of the average result when tossing the coin 14 times is 1/(2√14). But we are interested in the actual number of heads, not the average. If we were asked in a court of law, we would not be willing to affirm that a person reporting those results was lying or cheating. If people each tossed a coin 14 times, one of them might very likely come up with results more than 2 standard deviations from average. Since the circle is symmetric, consider only the upper right quadrant, and keep in mind that we are looking just for an upper bound, not for a sharp upper bound.
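The two standard deviations mentioned here (for the number of heads, and for the average number of heads per toss, in 14 tosses of a fair coin) can be checked directly:

```python
import math

# Sketch: 14 tosses of a fair coin. The number of heads has mean 7 and
# standard deviation sqrt(14 * 1/2 * 1/2) = sqrt(14)/2; the average number
# of heads per toss has standard deviation 1/(2*sqrt(14)), as in the text.

n, p = 14, 0.5
mean = n * p
sd_count = math.sqrt(n * p * (1 - p))   # sd of the number of heads
sd_avg = sd_count / n                   # sd of the average per toss

print(abs(sd_count - math.sqrt(14) / 2) < 1e-12)
print(abs(sd_avg - 1 / (2 * math.sqrt(14))) < 1e-12)
```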
Starting at the origin, and going to the point 0 2N p 2 Solution 4. Second, if you looked carefully at equation 4.
This is not im1 0 portant, since we are concerned with the number needed as N gets big. So if we use the same columns, then as N! Denote the unit circle by S 1. The function 1P f is bounded by 1 , and its support is in P , which is bounded. Moreover, it is continuous except on the boundary of P. Solutions for Chapter 4 Solution 4. The volume of an open box equals the volume of its closure. The other direction is a little more difficult.
For any closed box B, let B 0 be the open box with the same center but twice the sidelength. Bi of open boxes such i voln Bi LN 1X 4. Then g obviously and f see Example 4. In this case we did not need Heine-Borel as we did in Exercise 4. Below the x-axis, the domain of integration would be unchanged, as shown in Figure 4. This domain of integration is shown in the figure in the margin. We use induction over k. Why not? The above, incorrect integral does not take into account all the available information.
When we integrate first with respect to x, to determine the upper limit of integration we ask what is the most that x can be. When we next integrate with respect to y, we are doing so for all values of x between 0 and 1.
We will compute the integral for the half of the square above the diagonal, and then add it to the integral for the bottom half, which by equation 4. Can you think of a way to take advantage of Fubini to make this computation easier? Rather surprisingly, when we compute the total integral this way, we encounter some logarithms.
But as x1 → ..., .... For powers and exponentials, see Solution 6. This computation should convince you that taking advantage of symmetries is worthwhile. But there is a better way to compute the integral for the upper half, as we realized after working through the above computation.
Note that this uses the continuity of the second partials. Thus if D2 D1 f and D1 D2 f both exist and are continuous on U, then the crossed partials are equal; the second partials cannot be continuous and satisfy inequality 1.
The volume is given by the following integral. This exercise would be much easier to do using cylindrical coordinates, discussed in Section 4. Rearranging the order of a set of numbers does not change the value of the rth smallest number.
Thus Mr x satisfies the hypothesis of Exercise 4. After we integrate with respect to xr 1 , the Mr function becomes relevant. The minimum of a set of numbers is the same, regardless of the order of the numbers, so the min function also satisfies the hypothesis of Exercise 4.
To evaluate the integral it is convenient to use the same order of integration that we chose in part a. First perform one additional integration: ∫[a,b] g(y) dy. So every s ∈ S has been sampled the same huge number of times, and in the expectation we have divided by this huge number, to find the average of f^2 over S, i.e., ....

d. Note that one N goes with the m to make up the y; the other two give the area of the square.
Theorem 4. We want to show that the linear transformation [D det(A)] : Mat(n, n) → R is not the zero linear transformation, i.e., .... Solution 4. In the first n − k columns, every entry in the matrix has a factor of h, so each term of the determinant has a factor of h^2.
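At the identity, [D det(I)]H = tr H, which is visibly not the zero linear transformation; a finite-difference sketch for 3 × 3 matrices:

```python
# Sketch: the derivative of det at the identity is H -> trace(H), which is
# not the zero linear transformation; checked here by a finite difference.

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def ddet_at_identity(H, t=1e-6):
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    M = [[I[i][j] + t*H[i][j] for j in range(3)] for i in range(3)]
    return (det3(M) - 1.0) / t         # det(I + tH) = 1 + t*tr(H) + O(t^2)

H = [[1.0, 2.0, 0.0], [0.0, 3.0, 4.0], [5.0, 0.0, 0.0]]
trace = H[0][0] + H[1][1] + H[2][2]    # = 4
print(abs(ddet_at_identity(H) - trace) < 1e-4)   # derivative = trace(H)
```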
Solutions for Chapter 4 b. Since Corollary 4. The entry 1 is reflection with respect to some hyperplane. In the subspace spanned by the eigenvectors with eigenvalue 1, A is minus the identity, i. Let A : Rn! Work by induction on the dimension of a subspace of Rn on which A is orthogonal. In that basis the matrix of A : P! Solutions for Chapter 4 2 2 b.