# 3.2: Differentiation

**At Grade** · Created by: CK-12

## Tangent Lines and Rates of Change

The concept of slope is basic and naturally familiar to most students, so it is a good place to begin teaching derivatives. How steep is a certain hill? How do we measure that steepness? What if the hill starts off very gradual and only later becomes steep? You might use diagrams like the following to illustrate as you go along:

These kinds of ideas naturally lead one to the ideas of secant lines, tangent lines, and even the derivative!

It is strongly recommended to practice taking limits using the variable \begin{align*}h\end{align*}, since the definition of the derivative is itself a limit in \begin{align*}h\end{align*}. For example:

1) Find the following limits:

\begin{align*}& \lim_{h \to \infty} \frac{3h^7 - 4h^2 + 5}{9h^7 - h^3 + h^2 - 5h + 2} && \left (\text{answer} = \frac{1}{3} \right )\\
& \lim_{h \to 0} \frac{4h^3 + x^2}{h^2 + x} && (\text{answer} = x)\end{align*}
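As a quick sanity check, the two answers above can be verified numerically by plugging a very large \begin{align*}h\end{align*} into the first expression and a very small \begin{align*}h\end{align*} into the second. A minimal Python sketch (the function names are just for illustration):

```python
# Numeric sanity check of the two practice limits (a sketch, not a proof).

def f1(h):
    # (3h^7 - 4h^2 + 5) / (9h^7 - h^3 + h^2 - 5h + 2): limit 1/3 as h -> infinity
    return (3*h**7 - 4*h**2 + 5) / (9*h**7 - h**3 + h**2 - 5*h + 2)

def f2(h, x):
    # (4h^3 + x^2) / (h^2 + x): limit x as h -> 0 (for x != 0)
    return (4*h**3 + x**2) / (h**2 + x)

print(f1(1e6))        # approximately 1/3
print(f2(1e-6, 5.0))  # approximately 5
```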

The limiting process of allowing \begin{align*}h \rightarrow 0\end{align*} is exactly the process used to define the derivative, so fluency with such limits will pay off throughout the chapter.

## The Derivative

Here it should be made very clear that for the step function, the slope is zero (not \begin{align*}2\end{align*}) when we approach \begin{align*}x_0 = 0\end{align*} *from the left*. If we approach from the right, with \begin{align*}x_0 > 0\end{align*}, the slope is again zero; it is only at the jump itself that the derivative fails to exist.

To decide if a function was continuous, we recommended drawing it and checking whether your pencil ever needs to be lifted. We can perform a similar test for the existence of the derivative, except this time we draw the function and then lay the pencil *along* the curve to indicate the tangent line. Move along the curve, tilting the pencil up and down to track the steepness. If at any point it is unclear what the steepness should be, or it jumps suddenly from one value to another, or the pencil becomes perfectly vertical, then the derivative does not exist there.
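The pencil test can also be mimicked numerically by comparing one-sided difference quotients on either side of a point. A minimal sketch, using \begin{align*}|x|\end{align*} as the classic corner example (the helper names are my own):

```python
# One-sided slopes: if the left and right slopes disagree at a point,
# the derivative does not exist there (a corner).

def left_slope(f, x0, h=1e-6):
    return (f(x0) - f(x0 - h)) / h

def right_slope(f, x0, h=1e-6):
    return (f(x0 + h) - f(x0)) / h

corner = abs  # f(x) = |x| has a corner at x = 0
print(left_slope(corner, 0.0))   # -1.0
print(right_slope(corner, 0.0))  # 1.0  -- they disagree, so no derivative at 0
```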

There are various demos online that show how this works, for example:

http://upload.wikimedia.org/wikipedia/en/7/7a/Graph_of_sliding_derivative_line.gif

Try this with your pencil on the following graphs to see if they are differentiable or not:

## Techniques of Differentiation

It should be pointed out that the Wikipedia page indicated (Calculus with Polynomials) has a nice proof. On the other hand, the binomial theorem and explicit summations can be avoided with a little thought. Proving these rules is a valuable exercise to do in class; as an example, here is how you might teach the power rule:

POWER RULE

Step 1: Review the binomial theorem:

\begin{align*}(x+h)^n = x^n + a_{n-1} x^{n-1} h + \ldots + a_1 x h^{n-1} + h^n\end{align*}

and the fact that the coefficients \begin{align*}a_1, a_2, \ldots, a_{n-1}\end{align*} can be read off from the rows of Pascal's Triangle:

\begin{align*}& \qquad \qquad \quad \ \ 1\\
& \qquad \qquad \ \ 1 \quad \ 1 \\
& \qquad \qquad 1 \quad 2 \quad \ 1\\
& \qquad \ \quad 1 \quad 3 \ \quad 3 \ \quad 1 \ \\
& \qquad \ 1 \quad \ 4 \quad \ 6 \quad 4 \quad 1\\
& \qquad 1 \quad 5 \quad 10 \quad 10 \ \ 5 \ \ 1\\
& \ldots \ldots \ldots \ldots (\text{etc}) \ldots \ldots \ldots \ldots \end{align*}

Step 2: Now simply plug this expansion into the limit definition of the derivative:

\begin{align*}\frac{d}{dx} x^n & = \lim_{h \to 0} \frac{(x + h)^n - x^n}{h}\\ & = \lim_{h \to 0} \frac{x^n + a_{n - 1} x^{n - 1} h + \ldots + a_1 xh^{n - 1} + h^n - x^n}{h}\\ & = \lim_{h \to 0} \frac{a_{n - 1} x^{n - 1} \cancel{h} + \ldots + a_1 x \cancel{h}^{n - 1} + \cancel{h}^n}{\cancel{h}}\\ & = \lim_{h \to 0} \left (a_{n - 1} x^{n - 1} + \ldots + a_1 x h^{n - 2} + h^{n - 1} \right )\\ & = a_{n - 1} x^{n - 1}\end{align*}

Step 3: Recognize from Pascal’s Triangle that the coefficient \begin{align*}a_{n-1}\end{align*} is always just the number \begin{align*}n\end{align*}, the power in \begin{align*}(x+h)^n\end{align*}. So:

\begin{align*}\frac{d}{dx} x^n = nx^{n - 1}\end{align*}
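If students want evidence before (or after) the proof, the power rule can be checked numerically against the difference quotient. A small illustrative sketch (the test values are arbitrary choices of mine):

```python
# Compare the difference quotient of x^n at a point
# against the power rule's prediction n * x^(n-1).

def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

n, x = 5, 2.0
approx = difference_quotient(lambda t: t**n, x, 1e-7)
exact = n * x**(n - 1)   # power rule: d/dx x^n = n x^(n-1)
print(approx, exact)     # both approximately 80.0
```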

Proving the rules for constants and for sums and differences of functions is much more straightforward. The product and quotient rules, however, require a little more work, and it is probably not advisable to show their proofs unless students are interested. Furthermore, once students have the chain rule, one needs only to prove the product rule, since the quotient rule follows by writing \begin{align*}\frac{f}{g} = f \cdot g^{-1}\end{align*} and applying the product and chain rules.

It should also be noted that the product rule is extremely profound. In advanced mathematics the product rule is actually called the “Leibniz Law” and defines an abstract concept called a *derivation.* A derivation is a kind of operator \begin{align*}O\end{align*}, or map from functions to functions. For a simple example, think of the operator \begin{align*}A\end{align*} that acts as \begin{align*}A(f) = f + 2\end{align*}. This just takes a function and adds \begin{align*}2\end{align*} to it everywhere, so applying \begin{align*}A\end{align*} to a product gives \begin{align*}A(fg)=fg + 2\end{align*}. The derivative *operator* is defined by \begin{align*}D(f)=f'\end{align*}, and the product rule gives \begin{align*}D(fg)=D(f)g+fD(g)\end{align*}. Any operator \begin{align*}O\end{align*} satisfying \begin{align*}O(fg)=O(f)g+fO(g)\end{align*} is called a *derivation*, and these are extremely important in areas of math lying at the intersection of algebra and geometry.
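The Leibniz Law can be made concrete by treating numerical differentiation as an operator and checking the product rule on sample functions. A hedged sketch (the operator `D` here is a central-difference approximation of my own construction, not the lesson's notation):

```python
import math

def D(f, h=1e-6):
    # central-difference "derivative operator": maps a function f
    # to a new function approximating f'
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

f = math.sin
g = math.exp
x = 0.7
lhs = D(lambda t: f(t) * g(t))(x)        # D(fg)
rhs = D(f)(x) * g(x) + f(x) * D(g)(x)    # D(f)g + f D(g)
print(lhs, rhs)  # the two sides agree to several decimal places
```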

Higher order derivatives have extremely important applications as well. It was pointed out that the first derivative is useful since it represents instantaneous velocity. The second derivative then gives the instantaneous change in velocity over time, which is the acceleration. In fact, physics might be naively described as the study of acceleration since Newton’s Second Law defines a force as that which produces acceleration:

NEWTON’S SECOND LAW

\begin{align*}F = ma = m \frac{d^2 x}{dt^2}\end{align*}

## Derivatives of Trigonometric Functions

The proofs given in this chapter are fine, but in order to deepen the content we can find the same results in a slightly different way. Complicated functions like Sine, Cosine, and Tangent can actually be represented in terms of *infinite polynomials:*

\begin{align*}\text{Sin}(x) & =x- \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots\\ \text{Cos}(x)& = 1-\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots\end{align*}

Then, we could find the derivatives by simply applying the rules we know for polynomials to each term individually and we would get the same result as in this lesson:

\begin{align*}\frac{d}{dx} \text{Sin}(x) & = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots = \ \text{Cos}(x)\\ \frac{d}{dx} \text{Cos}(x) & = - x + \frac{x^3}{3!} - \frac{x^5}{5!} + \frac{x^7}{7!} - \ldots = \ - \text{Sin}(x)\end{align*}

Later on we will describe where these series come from and how we know they are correct. For the time being, however, this might be a nice way to practice using the power rule from the previous lesson.
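For instance, students could differentiate the truncated Sine series term by term with the power rule and compare against the built-in cosine. A sketch assuming a ten-term truncation (the helper names are illustrative):

```python
import math

def sin_series(x, terms=10):
    # Sin(x) = x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

def sin_series_derivative(x, terms=10):
    # differentiate each term with the power rule: 1 - x^2/2! + x^4/4! - ...
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))             # series matches Sin
print(sin_series_derivative(x), math.cos(x))  # term-by-term derivative matches Cos
```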

## The Chain Rule

The chain rule usually looks pretty daunting at first, but this is mostly because function composition is notationally awkward. So again, in order to make this more comfortable, I recommend beginning with some basic examples of composed functions and the question of how we might find their derivatives. For example, what are the derivatives of the following *composed* functions?

\begin{align*}f(x) & = (1 - x)^2\\ f(x) & = \sqrt{x + \frac{1}{x}}\\ f(x) & = \text{Sin} (x^2)\end{align*}

Now, in order to teach this effectively it is useful to think of a mnemonic. The chain rule can be applied mentally by differentiating in the order: “OUTSIDE THEN INSIDE”. As an example, consider the function:

\begin{align*}f(x) = \text{Sin}(1 + \text{Cos}(x^2))\end{align*}

We begin by differentiating the most outside function, *Sin(stuff)*, to give, *Cos(stuff)*:

\begin{align*}\frac{d}{dx} f(x) = \text{Cos} (1 + \text{Cos}(x^2)) \cdot \ \text{inside}\end{align*}

Then we move inside one step and differentiate \begin{align*}1 + \;\mathrm{Cos(stuff)}\end{align*} to give \begin{align*}-\mathrm{Sin(stuff)}\end{align*}:

\begin{align*}\frac{d}{dx} f(x) = \text{Cos}(1 + \text{Cos}(x^2 )) \cdot (-\text{Sin}(x^2)) \cdot \text{inside}\end{align*}

Finally we move into the innermost part and differentiate \begin{align*}x^2\end{align*} to give \begin{align*}2x\end{align*}:

\begin{align*}\frac{d}{dx}f(x) & = \text{Cos}(1 + \text{Cos}(x^2)) \cdot (- \text{Sin}(x^2)) \cdot (2x)\\ \frac{d}{dx} f(x) & = -2 x \ \text{Sin}(x^2) \text{Cos} (1 + \text{Cos}(x^2))\end{align*}
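The worked answer above can be double-checked against a central difference quotient. A small sketch (the test point is an arbitrary choice of mine):

```python
import math

def f(x):
    return math.sin(1 + math.cos(x**2))

def f_prime(x):
    # outside-then-inside: Cos(1 + Cos(x^2)) * (-Sin(x^2)) * 2x
    return math.cos(1 + math.cos(x**2)) * (-math.sin(x**2)) * (2 * x)

x, h = 0.9, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference quotient
print(numeric, f_prime(x))                  # the two agree closely
```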

## Implicit Differentiation

Here it may be worthwhile to review various examples where the variables \begin{align*}y\end{align*} and \begin{align*}x\end{align*} are replaced by a variety of different letters and symbols. This will help to make students more fluid with recognizing which is the variable of differentiation and which is the function. Then a physical example like the following might help:

- The area of a rectangle is length \begin{align*}(L)\end{align*} times width \begin{align*}(W)\end{align*}. Suppose that a rectangle has area equal to \begin{align*}10\end{align*}. Then how does the length change with respect to changes in the width?
- \begin{align*}LW=10\end{align*}, and \begin{align*}\left (\frac{dL}{dW}\right )W + L = 0\end{align*} so \begin{align*}\frac{dL}{dW} = -\frac{L}{W} = -\frac{10}{W^2}\end{align*}.

Notice that when we use implicit differentiation we usually end up with a derivative \begin{align*}\frac{dy}{dx}\end{align*} that depends on both \begin{align*}y\end{align*} and \begin{align*}x\end{align*}. Before, we had derivatives \begin{align*}\frac{dy}{dx}\end{align*} that depended only on \begin{align*}x\end{align*}. Sometimes, as in the example of rectangles above, it is easy to just solve for \begin{align*}y\end{align*} in terms of \begin{align*}x\end{align*}; we usually favor implicit differentiation precisely when this is not straightforward. In these cases it is acceptable to leave the solution in terms of \begin{align*}x\end{align*} and \begin{align*}y\end{align*}, and to recognize that for any given value of \begin{align*}x\end{align*} there should be a unique value of \begin{align*}y\end{align*} (if indeed we began with a true function). Most of the examples given in this lesson are actually *not functions*, since for a given value of \begin{align*}x\end{align*} there is usually more than one possible value of \begin{align*}y\end{align*}. In Example 3, to note one such case, if \begin{align*}x\end{align*} is \begin{align*}3\end{align*} then \begin{align*}y\end{align*} could be \begin{align*}+3\end{align*} or \begin{align*}-3\end{align*}.

In fact, implicit differentiation is most useful when the graph associated with the values \begin{align*}(x,y)\end{align*} that solve our equation is not the graph of a function. For example, consider the circle below:

We cannot write the equation for \begin{align*}y\end{align*} as a function of \begin{align*}x\end{align*}, since \begin{align*}y\end{align*} is not uniquely determined by \begin{align*}x\end{align*}. However, the circle is the set of points \begin{align*}(x, y)\end{align*} that solve the equation \begin{align*}x^2+y^2=1\end{align*}. Using implicit differentiation on this we obtain \begin{align*}\frac{dy}{dx} = - \frac{x}{y}\end{align*}. For a given value of \begin{align*}x\end{align*}, \begin{align*}y\end{align*} can be either positive or negative, and so the slope of the tangent can be either positive or negative. Using the graph above you can see that corresponding to each \begin{align*}x\end{align*} there are two \begin{align*}y\end{align*} values, and that the corresponding tangent lines have either positive or negative slope.
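On the circle, the implicit formula \begin{align*}\frac{dy}{dx} = -\frac{x}{y}\end{align*} can be compared against differentiating the explicit upper branch \begin{align*}y = \sqrt{1 - x^2}\end{align*} numerically. A quick sketch (the sample point is arbitrary):

```python
import math

# Point on the unit circle x^2 + y^2 = 1, upper branch
x = 0.6
y = math.sqrt(1 - x**2)     # 0.8

implicit_slope = -x / y     # dy/dx = -x/y from implicit differentiation

# Compare against a difference quotient of the explicit branch y = sqrt(1 - x^2)
h = 1e-6
numeric_slope = (math.sqrt(1 - (x + h)**2) - math.sqrt(1 - (x - h)**2)) / (2 * h)
print(implicit_slope, numeric_slope)  # both approximately -0.75
```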

## Newton’s Method

The topics of linearization and Newton’s Method really belong in the next chapter on applications. The entire utility of calculus lies in the fact that when one zooms in on any smooth curve, it looks like its tangent line:

http://www.ima.umn.edu/~arnold/calculus/tangent/tangent-g.html

Functions can be very complicated, involving transcendental pieces like Sines and Cosines or exponentials. This is why linearization is so important, since it allows us to trade in complicated functions for simple ones like \begin{align*}y=mx+b\end{align*}.

The process of using Newton’s Method for finding roots of an equation is, of course, due to Isaac Newton, although Newton’s own description was more complicated than the one known today. Furthermore, the essential idea was used long before Newton to calculate square roots, and in that form is known as the Babylonian Method. The idea itself is quite simple: to find the point where a function becomes zero, form the linearization at your current guess and take *its* zero as the next guess. If the function is heading towards zero in some direction at a particular rate, head in that direction.

There are some notable difficulties with using Newton’s Method, beginning with the fact that sometimes it is difficult to obtain the derivative of a function. Similarly, if the derivative happens to vanish, then we cannot put it in a denominator as the method prescribes. That is to say, wherever the graph is horizontal, or even just nearly so, there is little or no information about any nearby zeroes. Indeed, if the initial point is not chosen carefully, there is no reason the linearization should contain any information about where a distant zero may be.
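The update rule itself, \begin{align*}x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\end{align*}, fits in a few lines of code. A minimal sketch (the tolerance and iteration cap are my own choices), applied to the Babylonian square-root problem \begin{align*}x^2 - 2 = 0\end{align*}:

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Newton's Method: repeatedly jump to the zero of the tangent line."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = f_prime(x)
        if d == 0:
            # a vanishing derivative cannot go in the denominator
            raise ZeroDivisionError("derivative vanished; pick a new start")
        x = x - fx / d          # zero of the linearization at x
    return x

# Babylonian square root of 2: solve x^2 - 2 = 0
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately 1.41421356
```

Note how a poor starting point (say, one near a horizontal stretch of the graph) can send the iterates far from the intended zero, which is exactly the difficulty described above.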
