Question:
Continuity and Derivability of a function?
2008-07-02 12:22:18 UTC
If a function is derivable at every real number, that means that it's continuous as well right? And if we say that a certain function is derivable at x=2, that means that it is also continuous at x=2 right??

Or was it the other way around??

Please explain fully.
Four answers:
♣ K-Dub ♣
2008-07-02 12:44:08 UTC
Differentiability → Continuity.

Suppose f is differentiable at x = c, so that

f'(c) = lim (h → 0) [f(c+h) - f(c)] / h = L

exists (note that the definition requires f(c) to exist). Then, since lim (h → 0) h = 0,

lim (h → 0) [f(c+h) - f(c)]
= lim (h → 0) h * [f(c+h) - f(c)] / h
= lim (h → 0) [f(c+h) - f(c)] / h * lim (h → 0) h
= L * 0
= 0.

Hence lim (h → 0) f(c+h) = f(c); that is, f is continuous at x = c.
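
Here is a small numerical sketch of that argument (my own illustration, using the concrete differentiable function f(x) = x² at c = 3; any differentiable f would do):

# Illustration: for a differentiable f (here f(x) = x**2 at c = 3), the
# quotient [f(c+h) - f(c)]/h settles toward L = 6 while the numerator
# f(c+h) - f(c) itself goes to 0, which is exactly continuity at c.

def f(x):
    return x**2

c = 3.0
for h in (0.1, 0.01, 0.001):
    num = f(c + h) - f(c)        # tends to 0
    quotient = num / h           # tends to f'(c) = 6
    print(f"h = {h}: numerator = {num:.6f}, quotient = {quotient:.4f}")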



Continuity does not necessarily imply differentiability. Take, for instance:

f(x) = |x| at x = 0 (a corner)
f(x) = x^(1/3) at x = 0 (a vertical tangent)

Hope this helps.
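
To see those two failures numerically, here is a quick Python sketch (an illustration only; it just tabulates the difference quotients at 0 from both sides):

# Difference quotients at x = 0 for f(x) = |x| and g(x) = x^(1/3).
# Neither settles down to a single number as h -> 0.

def cbrt(x):
    # real cube root, handling negative inputs
    return x**(1/3) if x >= 0 else -((-x)**(1/3))

for h in (0.1, 0.01, 0.001, -0.1, -0.01, -0.001):
    dq_abs = (abs(0 + h) - abs(0)) / h      # +1 from the right, -1 from the left
    dq_cbrt = (cbrt(0 + h) - cbrt(0)) / h   # grows without bound from both sides
    print(f"h = {h}: |x| quotient = {dq_abs:+.3f}, x^(1/3) quotient = {dq_cbrt:.3f}")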



anonymous dude
2008-07-02 20:17:46 UTC
First of all, a point of terminology to be careful about: "derivable" and "derivability" are not standard terms, regardless of what you have been told. The correct terms are "differentiable" and "differentiability". However, one refers to the "derivative" of a function; you can also talk about the "differential" of a function, but that does not always mean the same thing as the derivative. It's confusing, I know. On to the actual question:



Just to clarify: the first answer is incorrect, but the second answer is correct. If a function is differentiable, then it must be continuous, but a continuous function need not be differentiable.



The fact that differentiability implies continuity should be proven using the formal definition of a limit, but you can see what's going on by looking at the definition of a derivative. We say that a function f is differentiable at a point a if the following limit exists:



lim_{h -> 0} (f(a + h) - f(a)) / h



If this limit exists, then since the denominator is tending to zero, the numerator must be tending to zero as well (otherwise the quotient would blow up). But if the numerator tends to zero then f(a + h) tends to f(a) as h tends to zero, which means that f(a) can be approximated by f(x) for points x near enough to a. This is the intuitive explanation of why differentiability implies continuity; as I mentioned above, the fact requires a formal proof.



Note that a function like the one given in the first response, say f(x) = (x - 2)^2 / (x - 2), is not differentiable at x = 2, because f(2) is not defined and thus we cannot even form the difference quotient (f(2 + h) - f(2))/h. If we modified the function a little bit and said that f(x) equals (x - 2)^2 / (x - 2) when x is not equal to 2 and 0 when x = 2, then f would be differentiable at x = 2 (and indeed continuous there). But if you pick any other value besides 0 for f(2), say f(2) = 1, then differentiability and continuity both fail: f(2 + h) approaches 0 as h approaches 0, so f(2 + h) - f(2) approaches -1 and the difference quotient (f(2 + h) - f(2))/h = (h - 1)/h blows up.
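
Here is a small Python sketch of that comparison (an illustration only; it tabulates the difference quotient at x = 2 for the two choices f(2) = 0 and f(2) = 1):

# Difference quotients at x = 2 for the patched function
# f(x) = (x - 2)**2 / (x - 2) for x != 2, with two choices of f(2).
# With f(2) = 0 the quotients settle at 1; with f(2) = 1 they blow up.

def f(x, value_at_2):
    return (x - 2)**2 / (x - 2) if x != 2 else value_at_2

for v in (0, 1):
    print(f"f(2) = {v}:")
    for h in (0.1, 0.01, 0.001):
        dq = (f(2 + h, v) - f(2, v)) / h
        print(f"  h = {h}: (f(2+h) - f(2))/h = {dq:.1f}")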



To see that continuity does not imply differentiability, it is best to look at examples. Take f(x) = |x|, for example (the absolute value of x). The function behaves like x to the right of 0 and like -x to the left of 0, so the difference quotients in the definition of differentiability are all 1 coming from the right and all -1 coming from the left. Thus they do not approach any single value as h tends to 0.



There are even more dramatic examples. Take the function:



f(x) = Sum_{n = 0 to infinity} (3/4)^n cos(9^n * pi * x)



Functions of this type really surprised mathematicians in the 19th century, because this one turns out to be continuous at every point on the real line but differentiable at no point at all! In other words, its graph is a continuous curve made up entirely of "corners". Weird!
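
If you have a computer handy, you can get a feel for this by truncating the series. A rough Python sketch (the partial sum is of course smooth, so this is only suggestive; the full claim needs a real proof):

import math

N = 15  # number of terms kept in the partial sum

def W(x):
    # partial sum of Sum_{n = 0 to N} (3/4)^n cos(9^n * pi * x)
    return sum((3/4)**n * math.cos(9**n * math.pi * x) for n in range(N + 1))

# The magnitude of the difference quotient at 0 keeps growing as h shrinks,
# instead of settling toward a limit.
for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    dq = (W(h) - W(0)) / h
    print(f"h = {h:g}: difference quotient at 0 is about {dq:.0f}")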



Even more disturbingly, one can prove that the collection of continuous functions which are differentiable at SOME point is negligible within the set of all continuous functions. In other words, if you choose a continuous function at random (in a sense that can be made precise), the probability that it is differentiable at any point at all is 0. Pretty surprising, considering the fact that it was not until the 19th century that anybody could construct such a function!
Mo
2008-07-02 19:51:31 UTC
The standard definitions of differentiability:

d/dx f(x) = lim(h->0) [ (f(x+h) - f(x)) / h ]

or

df/dx (a) = lim(x->a) [ (f(x) - f(a)) / (x-a) ]



For both of these, whenever a function is differentiable, it must be continuous.



However, we can still have functions like f(x) = |x|, which has a corner at x = 0: it is continuous there, but not differentiable.



Some of the nitty-gritty:

The "function with a hole" is not differentiable at the hole, since we can't put the "f(x)" or the "f(a)" into the definition.

I suppose we could make a different definition of the derivative for domains with points missing, but then there'd be no reason not to do the same for continuity, which would make the function count as continuous at the hole as well.
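
A tiny Python sketch of that point, using the "function with a hole" from another answer (purely illustrative):

# At the hole, f(a) itself can't be evaluated, so the difference quotient
# (f(x) - f(a)) / (x - a) can't even be written down.

def f(x):
    return (x**2 - 4) / (x - 2)

print(f(2.001))   # fine: about 4.001
print(f(2))       # raises ZeroDivisionError, because f(2) does not exist
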
Dan B
2008-07-02 19:30:22 UTC
Nope, you can have a point discontinuity, for example:



f(x) = (x² - 4)/(x - 2)



is discontinuous at x=2, but differentiable at that point.



Not the other way around either: it can be continuous but have a corner or a cusp, where the limit of the rate of change coming from one side does not equal the limit of the rate of change coming from the other.



Example:



f(x) = { x² if x < 1 and (x - 2)² if x ≥ 1 }
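
A quick numerical check of that corner (a Python sketch, purely illustrative):

# One-sided difference quotients of the piecewise function at x = 1.
# From the left they approach 2, from the right they approach -2, so there
# is no single derivative at x = 1 even though f is continuous there.

def f(x):
    return x**2 if x < 1 else (x - 2)**2

for h in (0.1, 0.01, 0.001):
    left = (f(1 - h) - f(1)) / (-h)    # approaches 2
    right = (f(1 + h) - f(1)) / h      # approaches -2
    print(f"h = {h}: from the left = {left:.3f}, from the right = {right:.3f}")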


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.