Proof of the Sum Rule for Limits

Now that we have the formal definition of a limit, we can prove some of the limit properties mentioned earlier in this chapter. In particular, we prove that the limit of a sum of functions is equal to the sum of their limits; this result is called the sum rule (or addition rule) for limits. Here is the formal statement of this limit law and its proof. Since we assume that both limits exist, write $\lim_{x \rightarrow c} f(x) = L_1$ and $\lim_{x \rightarrow c} g(x) = L_2$. We want to show that $$\lim_{x \rightarrow c} \big(f(x) + g(x)\big) = L_1 + L_2.$$ Based on the epsilon-delta definition of a limit, we must show that for every $\epsilon > 0$ we can find a $\delta > 0$ such that $$\textrm{if } 0 < |x - c| < \delta \textrm{, then } \big|\big(f(x) + g(x)\big) - (L_1 + L_2)\big| < \epsilon.$$

The sum rule also extends beyond two functions. If $f_{1}(x), f_{2}(x), f_{3}(x), \ldots, f_{n}(x)$ are functions whose limits at $c$ all exist, then $$\lim_{x \rightarrow c} \big[f_{1}(x) + f_{2}(x) + \cdots + f_{n}(x)\big] = \lim_{x \rightarrow c} f_{1}(x) + \lim_{x \rightarrow c} f_{2}(x) + \cdots + \lim_{x \rightarrow c} f_{n}(x).$$

A common set of questions concerns Apostol's proof of this law in Calculus II (page 248). I am new to reasoning in mathematics, so I want to know whether this informal proof, that the limit of a sum of functions equals the sum of their limits, is valid. If my proof is wrong, what is wrong and how can I correct it? Why is the proof of the limit law in Apostol II on page 248 correct? Why didn't he use the $(\epsilon, \delta)$ definition of limits, as he did in Apostol I on page 132? And why did he assume that the limits $A$ and $B$ are $0$, and then say that this proves the result in all cases? Update: after looking at Apostol's proof, it is clear that he uses the theorems already established for limits of real-valued functions (Apostol I, page 132) and uses them very intelligently to prove the corresponding theorems for vector-valued functions.

The assumption $A = B = 0$ is the key reduction. Set $h(x) = f(x) - A$ and $k(x) = g(x) - B$. Then $h(x), k(x) \rightarrow 0$, and if we prove the theorem for $h$ and $k$, we also get it for $f$ and $g$ by the first line of the proof. (It may even be better to go directly to $f(x), g(x)$ instead of $f(x) - A$ and $g(x) - B$; try using the limit definition directly and there shouldn't be a problem.)
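
To make the reduction explicit, here is a short sketch in the notation of the question above, writing $h(x) = f(x) - A$ and $k(x) = g(x) - B$: $$\big|\big(f(x) + g(x)\big) - (A + B)\big| = \big|\big(h(x) + k(x)\big) - 0\big|,$$ so, directly from the $(\epsilon, \delta)$ definition, $\lim_{x \rightarrow a}\big(f(x) + g(x)\big) = A + B$ holds exactly when $\lim_{x \rightarrow a}\big(h(x) + k(x)\big) = 0$. Proving the zero-limit case therefore proves the general case.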

Your proof begins with a good technique, namely reducing the problem to a simpler case. Here you have $\lim_{x \to a}\{f(x) - A\} = 0$ and $\lim_{x \to a}\{g(x) - B\} = 0$, and you must show that $$\lim_{x \to a}\left\{\{f(x) - A\} + \{g(x) - B\}\right\} = 0$$ (a short sketch of this special case is given just after this passage). This is Apostol's proof (Theorem 8.1), stated for vector-valued functions of vector variables; does that proof also apply to real-valued functions of a vector variable (scalar fields), and why didn't he use $\epsilon, \delta$ there? Assuming you have understood the corresponding proof in Apostol I, your approach to proving the result below is good.

Therefore, we will consider only one of the limit laws, namely that the limit of a sum is the sum of the limits, and show how it follows from the epsilon-delta definition of a limit. We will not attempt in this course to prove each of the limit laws with the epsilon-delta definition. Even though such proofs are technically necessary before the limit laws can be used, they are traditionally not covered in a first-year calculus course; instead, they usually appear in a mathematical analysis course that comes later. (The effect is a bit like shooting a movie prequel.)
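
Here is a minimal sketch of that special case, using the same $\frac{\epsilon}{2}$ idea that reappears in the general proof below. Given $\epsilon > 0$, choose $\delta_1 > 0$ with $|f(x) - A| < \frac{\epsilon}{2}$ whenever $0 < |x - a| < \delta_1$, and $\delta_2 > 0$ with $|g(x) - B| < \frac{\epsilon}{2}$ whenever $0 < |x - a| < \delta_2$. Then, for $\delta = \min(\delta_1, \delta_2)$ and $0 < |x - a| < \delta$, $$\big|\{f(x) - A\} + \{g(x) - B\}\big| \le |f(x) - A| + |g(x) - B| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$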

Suppose, then, that $f(x)$ and $g(x)$ are two different functions of $x$. The sum of the functions is $f(x)+g(x)$, and the limit of this sum as $x$ approaches $a$ is written in the following mathematical form: $$\lim_{x \,\to\, a}\Big[f(x)+g(x)\Big].$$ (If $f$ and $g$ are also continuous at $a$, its value can be found by simply substituting $x = a$.)

Returning to the proof: we know $\lim_{x \rightarrow c} f(x) = L_1$, so, invoking the epsilon-delta definition again, but this time in the other direction, we also know that for every $\epsilon_1 > 0$ we can find a $\delta_1 > 0$ such that if $0 < |x - c| < \delta_1$, then $|f(x) - L_1| < \epsilon_1$. In the same way, since $\lim_{x \rightarrow c} g(x) = L_2$, for every $\epsilon_2 > 0$ we can find a $\delta_2 > 0$ such that if $0 < |x - c| < \delta_2$, then $|g(x) - L_2| < \epsilon_2$. Consider what happens if we take both $\epsilon_1$ and $\epsilon_2$ to be $\frac{\epsilon}{2}$; this is exactly the choice made in the proof written out below, where the two limits are denoted $\lim_{x \rightarrow c} f(x) = L$ and $\lim_{x \rightarrow c} g(x) = M$.

The other limit laws are proved in the same spirit. For example, suppose $\lim_{x \to a} f(x) = L$ for a finite $L$, and let $k$ denote a constant (written $c$ in some texts; here $c$ is already in use for the point being approached). Then $$\lim_{x \to a} k \cdot f(x) = k \cdot \lim_{x \to a} f(x) = k \cdot L.$$
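
A minimal $(\epsilon, \delta)$ sketch of this constant-multiple law, assuming $k \neq 0$ (the case $k = 0$ is immediate): given $\epsilon > 0$, choose $\delta > 0$ so that $0 < |x - a| < \delta$ implies $|f(x) - L| < \frac{\epsilon}{|k|}$. Then $$|k \cdot f(x) - k \cdot L| = |k|\,|f(x) - L| < |k| \cdot \frac{\epsilon}{|k|} = \epsilon.$$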

Note that the basic result for the case $A = B = 0$, which was proven on page 132 of Apostol I, cannot be ignored in any of these arguments. For the sum rule itself, the full $(\epsilon, \delta)$ argument, written for general limits $L$ and $M$, goes as follows. Since $\lim_{x \rightarrow c} f(x) = L$ and $\lim_{x \rightarrow c} g(x) = M$, there must be functions, call them $\delta_f(\epsilon)$ and $\delta_g(\epsilon)$, such that for all $\epsilon > 0$, $|f(x) - L| < \epsilon$ whenever $0 < |x - c| < \delta_f(\epsilon)$, and $|g(x) - M| < \epsilon$ whenever $0 < |x - c| < \delta_g(\epsilon)$. Adding the two inequalities gives $$|f(x) - L| + |g(x) - M| < 2\epsilon.$$ By the triangle inequality, $$\big|(f(x) - L) + (g(x) - M)\big| = \big|\big(f(x) + g(x)\big) - (L + M)\big| \le |f(x) - L| + |g(x) - M|,$$ so we have $$\big|\big(f(x) + g(x)\big) - (L + M)\big| < 2\epsilon$$ whenever $0 < |x - c| < \delta_f(\epsilon)$ and $0 < |x - c| < \delta_g(\epsilon)$. Let $\delta_{fg}(\epsilon)$ be the smaller of $\delta_f\big(\frac{\epsilon}{2}\big)$ and $\delta_g\big(\frac{\epsilon}{2}\big)$. Then, whenever $0 < |x - c| < \delta_{fg}(\epsilon)$, each of the two terms $|f(x) - L|$ and $|g(x) - M|$ is less than $\frac{\epsilon}{2}$, so $\big|\big(f(x) + g(x)\big) - (L + M)\big| < \epsilon$; this $\delta$ therefore satisfies the definition of a limit for $\lim_{x \rightarrow c}\big[f(x) + g(x)\big]$ with limit $L + M$.

The same technique handles the remaining limit laws. For the quotient law, if we can show that $\lim_{x \rightarrow c} \frac{1}{g(x)} = \frac{1}{M}$ (with $M \neq 0$), then we can define a function $h(x) = \frac{1}{g(x)}$ and appeal to the product rule for limits to prove the theorem. It is therefore sufficient to prove that $\lim_{x \rightarrow c} \frac{1}{g(x)} = \frac{1}{M}$.
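
For completeness, here is a minimal sketch of that remaining reciprocal step (it is not spelled out in the passage above), assuming $M \neq 0$. First choose $\delta_1 > 0$ so that $0 < |x - c| < \delta_1$ implies $|g(x) - M| < \frac{|M|}{2}$, which forces $|g(x)| > \frac{|M|}{2}$. For such $x$, $$\left|\frac{1}{g(x)} - \frac{1}{M}\right| = \frac{|M - g(x)|}{|g(x)|\,|M|} < \frac{2\,|g(x) - M|}{M^{2}}.$$ Then choose $\delta_2 > 0$ so that $0 < |x - c| < \delta_2$ implies $|g(x) - M| < \frac{\epsilon M^{2}}{2}$; with $\delta = \min(\delta_1, \delta_2)$, the right-hand side is less than $\epsilon$, exactly as the definition of $\lim_{x \rightarrow c} \frac{1}{g(x)} = \frac{1}{M}$ requires.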