# limit

The mathematical concept of a limit was developed in the late 18^{th} and early 19^{th} centuries as a means of putting the differential and integral calculus on a rigorous foundation, and it remains one of the most important concepts—and sometimes one of the most difficult—for students of mathematics to understand.

In the beginning, when Isaac Newton and Gottfried Leibniz were developing the calculus, the idea upon which their work rested was that of (ratios and products of) arbitrarily small quantities or numbers. Newton called them fluxions, and Leibniz referred to them as differentials. These notions built upon the work of John Wallis, Pierre de Fermat, and René Descartes, all of whom had found a need in their analytic work for some concept of the “infinitely small.”

The root of these issues is the following question: How close can two numbers be without being the same number? Equivalently (since we may consider the difference of two such numbers), how small can a number be without being zero? The effective answer given by Newton and others was that there are infinitesimals, which are to be thought of as positive quantities that are smaller than any non-zero real number. Such a concept seemed necessary, since the differential calculus relied crucially on the consideration of ratios, both of whose terms were vanishing to zero simultaneously.

> A quantity is something or nothing: if it is something, it has not yet vanished; if it is nothing, it has literally vanished. The supposition that there is an intermediate state between these two is a chimera.
>
> —Jean le Rond d’Alembert

Despite the success of Newton’s and Leibniz’s methods, however, and even despite the enthusiasm of such important mathematicians as Lagrange and Euler, the concept of the infinitesimal became more and more difficult to support during the 18^{th} century. The chief complaint of most critics was that it is impossible to imagine in any concrete way an object—even a mathematical one—that is infinitely small. More important in the long run was that, without a firm theoretical basis for infinitesimals, mathematicians could not be completely confident in their methods. Such a theoretical basis did not seem forthcoming, despite nearly two centuries of effort by the mathematical community of Europe.

In the meantime a new way of thinking about ratios of vanishing quantities was being introduced by the French mathematician Jean le Rond d’Alembert, namely the method of limits. D’Alembert’s formulation was nearly identical to that in use today, but unlike our contemporary conception it relied heavily on geometrical intuitions. For instance, d’Alembert saw the tangent to a curve as a limit of secant lines, as the endpoints of the secants converged on the point of tangency and became identical with it “in the limit.”

This is precisely how the derivative is motivated in calculus courses throughout the world today. However, considered purely as a geometrical argument, without a numerical or functional foundation, this conception of limiting secant lines is subject to age-old objections of the sort exhibited by Zeno’s paradoxes.

It took another great French mathematician, Augustin-Louis Cauchy, to provide, in a series of publications in the 1820s, a rigorous formulation of the limit concept that would meet all objections. Cauchy’s definitions of the derivative and the integral as limits transformed our understanding of the calculus, and opened the door to a rich period of growth and innovation in mathematics, at last bringing the field of real analysis to full maturity.

### What is a Limit?

Students should begin by considering limits in a purely numerical context. To do this, it is well to review the Archimedean principle:

Let \(a\) and \(b\) be positive real numbers. Then we may find some natural number \(n\) such that \(a < nb\).

From this principle it follows that \(\displaystyle\frac{a}{n} < b\), and consequently that for any positive real number \(r\) we may find an \(n\) such that \(\displaystyle\frac{1}{n} < r\). Notice that \(\displaystyle\frac{1}{n}\) is never zero for any actual positive integer \(n\), no matter how large that \(n\) might be. However, no matter what positive real number you might pick, an \(n\) can be found so that \(\displaystyle\frac{1}{n}\) is closer to zero than that number. We say then that zero is the *limit* of \(\displaystyle\frac{1}{n}\) as \(n\) grows indefinitely large. In standard notation, this is expressed by

\[\lim_{n\rightarrow \infty} \frac{1}{n} = 0\]

which is commonly read, “the limit as \(n\) goes to infinity of one-over-\(n\) is zero.” The use of the “equals” sign in this expression is somewhat misleading, and it is essential to emphasize that this does not say that \(\displaystyle\frac{1}{n}\) is ever equal to zero. Nor does it mean that \(n\) itself is ever infinite. Instead, it means that—by choosing \(n\) sufficiently large—the quantity \(\displaystyle\frac{1}{n}\) can be made as close to zero as desired.
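This back-and-forth between a given tolerance and a sufficiently large \(n\) is easy to exercise numerically. The following sketch (plain Python; the helper name `archimedean_n` is our own, introduced for illustration) picks, for any given positive \(r\), an \(n\) with \(1/n < r\):

```python
import math

def archimedean_n(r):
    """Return a natural number n with 1/n < r, for positive real r.
    (A hypothetical helper illustrating the Archimedean principle.)"""
    if r <= 0:
        raise ValueError("r must be positive")
    # ceil(1/r) + 1 certainly exceeds 1/r, so 1/n < r.
    return math.ceil(1 / r) + 1

# No matter how small a positive r we pick, 1/n can be pushed below it:
for r in (0.5, 1e-3, 1e-9):
    n = archimedean_n(r)
    assert 0 < 1 / n < r   # 1/n falls below r, yet is never equal to zero
```

Note that the code mirrors the two halves of the limit statement: \(1/n\) is never zero, yet it can always be driven below any given positive bound.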

This last distinction bears all the emphasis we can bring to it. If we have some expression involving a variable \(x\), say, and we consider the limit of that expression as \(x\) approaches a fixed value \(a\), then to say that “the limit exists” and is equal to some value \(L\) is not to say that the expression itself is ever equal to \(L\), or even that \(x\) is ever equal to \(a\). Indeed, what makes limits interesting and useful is that it very often happens that \(x\) can’t be \(a\), but can only become arbitrarily near to \(a\).

All that remains is to make quite precise what we mean by “as close as desired” and “sufficiently close.” We do this by a natural extension of the Archimedean principle discussed above:

Def^{n}: Let \(f\) be a real-valued function of a real variable \(x\). Then we say that “the *limit* of \(f\) as \(x\) approaches \(a\) *exists*” provided there is a real number \(L\) with the property that, whenever we are given a positive real number \(\varepsilon\), we may find another positive real number \(\delta\) so that \(|f(x)-L| < \varepsilon\) for any \(x\) satisfying \(0 < |x-a| < \delta\). (The condition \(0 < |x-a|\) excludes \(x = a\) itself, where \(f\) need not even be defined.)

This is denoted by

\[\lim_{x\rightarrow a}f(x)=L\]

We often call this simply, “the limit of \(f\) at \(a\).” The use of the Greek letters \(\varepsilon\) (epsilon) and \(\delta\) (delta) is traditional; indeed this is generally called the “epsilon-delta” definition of the limit. The definition tells us what a limit is by telling us when it exists, namely, it “exists at \(a\) and is equal to \(L\)” if we may force \(f(x)\) arbitrarily close to \(L\) (that is, within any given positive amount \(\varepsilon\)), simply by choosing, for any such \(\varepsilon\), an appropriate “closeness” (represented by \(\delta\)) of \(x\) to \(a\).
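The quantifier structure of the definition (“for any \(\varepsilon\) we may find a \(\delta\)”) can be exercised numerically. Here is a minimal sketch (plain Python; the function name `check_epsilon_delta` is our own) that spot-checks a proposed \(\delta\) for \(f(x) = x + 2\) near \(a = 2\):

```python
def check_epsilon_delta(f, a, L, eps, delta, samples=1000):
    """Spot-check the epsilon-delta condition: for sampled x with
    0 < |x - a| < delta, verify |f(x) - L| < eps.  (A numerical
    sanity check, not a proof -- only finitely many x are tried.)"""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)   # offsets in (0, delta)
        for x in (a - offset, a + offset):
            if abs(f(x) - L) >= eps:
                return False
    return True

f = lambda x: x + 2
# For this f, choosing delta = eps works at a = 2, L = 4:
assert check_epsilon_delta(f, 2, 4, eps=0.1, delta=0.1)
assert check_epsilon_delta(f, 2, 4, eps=1e-6, delta=1e-6)
# ... while a delta that is too large fails:
assert not check_epsilon_delta(f, 2, 4, eps=0.1, delta=0.5)
```

A check of finitely many sample points can never replace the proof, but it makes vivid how \(\delta\) must answer to the given \(\varepsilon\).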

We observe that the definition provided above is constructive. (This observation is not necessary to understanding limits; feel free to skip this paragraph on a first reading.) By constructive, we mean that it asserts only that we may find a \(\delta\) for any *actual* \(\varepsilon\) we are given. However, this definition is often given in a non-constructive way, asserting “for all \(\varepsilon\) there *exists* a \(\delta\)” satisfying the given conditions. The difference here is a subtle one, but we believe it is important for two reasons. First, few students of calculus have yet received formal instruction in predicate logic, and consequently they have no way of interpreting the universal and existential quantifiers “for all” and “there exists” without ambiguity. Second, the non-constructive formulation is unnecessary, and so as a matter of mathematical style it should be avoided, since it entails certain obligations about the foundations of mathematics that we should wish to avoid in the study of the calculus.

It may happen that a function satisfies the definition of a limit for \(x\)’s approaching \(a\) from one direction on the number line, but not the other. For example, we would denote the case where the limit exists approaching from the left by

\[\lim_{x\rightarrow a^-}f(x)=L\]

(Note the minus sign superscripted on the \(a\); a plus sign, as in \(x\rightarrow a^+\), denotes approach from the right.) In this case we would say that the “left-hand limit” (or the “right-hand limit,” respectively) exists, but we do not say that the limit itself exists unless both the left- and right-hand limits exist and are equal.
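As an illustration (this example is ours, not drawn from the text above): the function \(f(x) = |x|/x\) equals \(-1\) for negative \(x\) and \(+1\) for positive \(x\), so its two one-sided limits at \(0\) exist but disagree, and the two-sided limit does not exist. A quick numerical sketch in Python:

```python
# f(x) = |x|/x has different one-sided limits at 0: approaching from
# the left it is always -1, from the right always +1, so the
# (two-sided) limit at 0 does not exist.
f = lambda x: abs(x) / x   # undefined at x = 0 itself

left_values  = [f(-1 / 10**k) for k in range(1, 8)]
right_values = [f(+1 / 10**k) for k in range(1, 8)]

assert all(v == -1.0 for v in left_values)    # left-hand limit is -1
assert all(v == +1.0 for v in right_values)   # right-hand limit is +1
```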

Let us illustrate the definition with a concrete example. Consider the function \(f\) given by

\[f(x)=\frac{x^2-4}{x-2}\]

By factoring the numerator and cancelling the common factor, we may rewrite this function as

\[f(x)=x+2,\,\,\,x\neq 2\]

that is, as the line \(y = x + 2\) with the point \((2, 4)\) missing.

Although \(x = 2\) is not in the domain of this function (since it would cause division by zero), and consequently 4 is not in the range, nonetheless putting \(x\) sufficiently close to 2 will force the value of the function as close to 4 as we like. In other words, we have

\[\lim_{x\rightarrow 2} \frac{x^2-4}{x-2} =4\]

Appealing to the definition, we observe that if we are given a positive real number \(\varepsilon\), then all we need do is choose any positive \(\delta\) so that \(\delta < \varepsilon\). Our proof of this limit then runs as follows:

\[\begin{array}{rl} \mbox{if} & 0 < |x-2|<\delta \\ \mbox{then} & 2-\delta < x < 2+\delta \\ \mbox{so} & 2-\delta + 2 < x+2 < 2+\delta + 2 \\ \mbox{and so} & 4-\delta < x+2 < 4+\delta \\ \mbox{and so} & 4-\varepsilon < x+2 < 4+\varepsilon \\ \mbox{and so} & |(x+2)-4| < \varepsilon \end{array}\]

and the last line is what we wanted to prove. Notice that the choice of \(\delta\) depends entirely on the \(\varepsilon\) we are given. In this case we only needed that \(\delta\) be smaller than \(\varepsilon\)—remember the Archimedean principle!—and it is this which gets us from the 4^{th} to the 5^{th} line in the above proof. Notice also that this limit exists even though \(x\) can never be 2 and \(f(x)\) can never be 4.
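The choice \(\delta < \varepsilon\) made in the proof can also be spot-checked numerically. A sketch (plain Python; the helper name `delta_works` is ours) taking \(\delta = \varepsilon/2\):

```python
def f(x):
    # f is undefined at x = 2 itself (division by zero),
    # yet its limit as x approaches 2 is 4.
    return (x**2 - 4) / (x - 2)

def delta_works(eps, delta, samples=500):
    """Spot-check: |f(x) - 4| < eps for sampled x with 0 < |x - 2| < delta.
    A numerical sanity check of the proof, not a proof itself."""
    for i in range(1, samples + 1):
        h = delta * i / (samples + 1)          # h in (0, delta)
        if not (abs(f(2 - h) - 4) < eps and abs(f(2 + h) - 4) < eps):
            return False
    return True

# As in the proof, any positive delta smaller than eps suffices.
# (eps kept moderate to avoid floating-point cancellation in x**2 - 4.)
for eps in (0.5, 0.01, 0.001):
    assert delta_works(eps, delta=eps / 2)
```

Notice that the code never evaluates \(f\) at \(x = 2\) itself, just as the definition never requires it to.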

For the student of calculus, proving that limits exist is usually not the focus; rather, we want to calculate what the limit actually is, and in general this won’t be obvious. Making such calculations is important because the derivative and integral are both defined as limits of certain kinds of functions (difference quotients and Riemann sums), so to find a derivative or integral is in fact to calculate a limit. However, before we turn to this valuable skill, which is largely algebraic in character, it is as well to spend a little more time with the analytic aspect of limits.

### Limits of Sequences

In the example with which we started, namely the limit of \(\displaystyle\frac{1}{n}\), the above definition for the limit doesn’t work out, because infinity is not a real number. It just isn’t clear what we could mean by “\(n\) getting closer and closer to infinity,” since any finite \(n\) is just about as close to infinity as any other. (That is, any finite \(n\) still has infinitely far to go!) However, if we insist that our \(n\) be discrete values—say, natural numbers—then what we get as “\(n\) tends towards infinity” is a discrete sequence of values, whose limit may be characterized in a slightly different way than for real-valued functions.

For instance, in the case of natural numbers, we get the following sequence, called the harmonic sequence:

\[1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\frac{1}{5},\frac{1}{6},\ldots,\frac{1}{n},\ldots\]

It’s clear that this sequence “limits to zero,” and what we mean is that, by choosing \(n\) large enough, the \(n^{th}\) term of the sequence may be made as close to zero as desired. This brings us to the following definition of limits for sequences:

Def^{n}: Let \(\{a_n\}\) be a sequence of real numbers indexed by the natural numbers (positive integers). Then we say that “the sequence has a limit” provided there is a real number \(L\) with the property that, whenever we are given a positive real number \(\varepsilon\), we may find a large enough positive integer \(N\) so that \(|a_n-L| < \varepsilon\) whenever \(n > N\).

This is denoted by

\[\lim_{n\rightarrow\infty} a_n = L\]

We also say in this case that the sequence is a *convergent* sequence, and that the sequence “converges to L.”

This definition encodes a topological notion that may be easily generalized to other kinds of sequences and spaces. What it says is that, no matter how small a neighborhood we take around the limit point, eventually (i.e., far enough out in the sequence) all the points of the sequence will lie within that neighborhood. Every convergent sequence also satisfies the closely related *Cauchy* condition: far enough out, the terms of the sequence become arbitrarily close to one another. Conversely, if the space is *complete* (that is, if every Cauchy sequence in it converges), then the Cauchy condition guarantees that the sequence has a limit.
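Both conditions can be spot-checked for the harmonic sequence \(a_n = 1/n\). In the sketch below (plain Python; the helper name `N_for` is ours), we produce, for each \(\varepsilon\), an \(N\) beyond which every term lies within \(\varepsilon\) of the limit 0, and check that later terms also satisfy the Cauchy condition:

```python
import math

def N_for(eps):
    """A positive integer N such that |1/n - 0| < eps whenever n > N.
    Since 1/n < eps exactly when n > 1/eps, N = ceil(1/eps) + 1 is
    certainly large enough (the +1 guards against rounding)."""
    return math.ceil(1 / eps) + 1

for eps in (0.1, 1e-3, 1e-6):
    N = N_for(eps)
    # Convergence: every term beyond N lies within eps of the limit 0.
    assert all(abs(1 / n - 0) < eps for n in range(N + 1, N + 500))
    # Cauchy condition: terms beyond N are within 2*eps of one another.
    assert all(abs(1 / m - 1 / n) < 2 * eps
               for m in range(N + 1, N + 40)
               for n in range(N + 1, N + 40))
```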

This more analytic notion of the limit of a sequence is the foundation of our understanding of infinite series, which in turn gives us a much deeper understanding of real-valued functions. Also, as noted, this notion of convergence may be generalized to other kinds of sequences, such as sequences of sets and sequences of functions.

### Working with Limits

We restrict our attention here to limits of real-valued functions as defined in the **What is a Limit?** section above, rather than limits of sequences.

Working successfully with limits requires that you know about three things:

- The definition.
- How limits commute with continuous functions.
- How to work with indeterminate forms.

The first was already covered, and the second and third are explained below.

By limits “commuting with continuous functions” we mean the following. Suppose that \(f(x)\) is an expression involving \(x\) whose limit at \(a\) exists, and that \(F\) is a function continuous at that limit. Then we have

\[F\left( \lim_{x \rightarrow a} f(x) \right) = \lim_{x \rightarrow a} F\left( f(x) \right)\]

In particular, since the ordinary arithmetic operations are continuous functions, we have

\[\begin{eqnarray*} \lim (a+b) & = & \lim a + \lim b \\ \lim (ab) & = & (\lim a)(\lim b) \\ \lim \frac{a}{b} & = & \frac{\lim a}{\lim b} \\ \lim (\log a) & = & \log(\lim a) \\ \lim e^a & = & e^{\lim a} \end{eqnarray*}\]

Using these identities (valid whenever the limits on the right-hand side exist, and, in the case of the quotient, \(\lim b \neq 0\)) often greatly simplifies the calculation of a limit.
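These identities can be illustrated numerically with two convergent sequences (a sketch of our own; floating-point arithmetic forces us to test to within a tolerance rather than exactly):

```python
import math

# Two convergent sequences: a_n -> 2 and b_n -> 3 as n -> infinity.
a = lambda n: 2 + 1 / n
b = lambda n: 3 - 1 / n

n = 10**8   # far enough out to exhibit the limiting values
tol = 1e-6  # floating-point tolerance
assert math.isclose(a(n) + b(n), 2 + 3, rel_tol=tol)   # lim(a+b) = lim a + lim b
assert math.isclose(a(n) * b(n), 2 * 3, rel_tol=tol)   # lim(ab) = (lim a)(lim b)
assert math.isclose(a(n) / b(n), 2 / 3, rel_tol=tol)   # lim(a/b) = (lim a)/(lim b)
assert math.isclose(math.exp(a(n)), math.exp(2), rel_tol=tol)  # lim e^a = e^(lim a)
```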

By “indeterminate form” we mean a limit which (at first glance) evaluates in a way that is ambiguous. For example, consider the limit

\[\lim_{x\rightarrow\infty}\frac{4x^2-x-2}{x^2 + 1}\]

Here both the numerator and denominator grow without bound, that is, this limit seems to be “infinity over infinity.”

\[\lim_{x\rightarrow\infty}\frac{4x^2-x-2}{x^2 + 1}=\frac{\infty}{\infty}\color{gray}=1?\]

It is tempting to feel that we should just be able to “cancel” and get the answer 1 for this limit. However, dividing both the numerator and the denominator by \(x^2\), we find

\[\lim_{x\rightarrow\infty}\frac{4x^2-x-2}{x^2 + 1}=\lim_{x\rightarrow\infty}\frac{4-\frac{1}{x}-\frac{2}{x^2}}{1 + \frac{1}{x^2}}=\frac{4-0-0}{1+0}=4\]

This answer is correct. What then went wrong before? The answer is that certain “forms” in mathematics have no precise meaning, and so it is not possible to calculate with them. These are called the *indeterminate forms*:

\[\frac{\infty}{\infty},\,\,\,\frac{0}{0},\,\,\,0\times\infty,\,\,\,\infty-\infty,\,\,\,0^0,\,\,\,1^{\infty},\,\,\,\infty^{0}\]

Whenever a limit appears to evaluate to one of these forms, it must be worked with in some fashion until it can be calculated in an unambiguous way. In some cases, it is only possible to put the limiting expression into either the “infinity over infinity” form or the “zero over zero” form. In these cases, one can apply L’Hospital’s Rule to change the limit to an equivalent limit that is no longer indeterminate.
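The resolution of the example above can also be seen numerically. This sketch (plain Python, our own illustration) compares the original expression with its rewritten form, obtained by dividing the numerator and denominator by \(x^2\), and confirms that the values approach 4, not 1:

```python
# The "infinity over infinity" form resolved numerically: as x grows,
# (4x^2 - x - 2)/(x^2 + 1) approaches 4, not 1.
f = lambda x: (4 * x**2 - x - 2) / (x**2 + 1)

# The rewritten form (4 - 1/x - 2/x^2) / (1 + 1/x^2) is the same
# function for x != 0, but its limit at infinity is plainly 4.
g = lambda x: (4 - 1 / x - 2 / x**2) / (1 + 1 / x**2)

for x in (10.0, 1e3, 1e6):
    assert abs(f(x) - g(x)) < 1e-9   # the two forms agree
assert abs(f(1e8) - 4) < 1e-7        # the limit is 4, not 1
```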

There are too many particular techniques for evaluating troublesome limits to present them here. However, any calculus textbook will have a wealth of examples.

Wendy Hageman Smith, reviewer; B. Sidney Smith, author

- [MLA] Smith, B. Sidney. “limit.” *Platonic Realms Interactive Mathematics Encyclopedia.* Platonic Realms, 6 Mar 2013. Web. 6 Mar 2013. <http://platonicrealms.com/>
- [APA] Smith, B. Sidney (6 Mar 2013). limit. Retrieved 6 Mar 2013 from the *Platonic Realms Interactive Mathematics Encyclopedia:* http://platonicrealms.com/encyclopedia/limit/