This page is a sub-page of our page on Calculus of Several Real Variables.
///////
The sub-pages of this page are:
• Functions with special properties
Related KMR-pages:
• …
///////
Other relevant sources of information:
• Richard Courant
• The axiom of choice
• Sociology and Mathematics: The Banach-Tarski paradox is nonsense
///////
The interactive simulations on this page can be navigated with the Free Viewer
of the Graphing Calculator.
///////
A list of Anchors into the text below:
Anchors to be added here.
///////
Functions of Several Real Variables:
What is meant by the expression “a function of several independent variables”?
By “several” we mean “more than one,” and by “function” we mean a “rule” that describes how “something” depends on “some other things.” Each “thing” is described by a “variable,” and “the dependent thing” is described by “the dependent variable.” The “things” that “the dependent thing” depends on are called “independent things.” They are described by “independent variables,” and since there are several of them, one speaks of “several independent variables.”
The dependency itself is specified by some kind of “rule” that shows how “the dependent thing” is related to the “independent things”. All the “things” that are related are classified by specifying the respective “types” that they belong to.
For a very long time (almost two thousand years, in fact) people thinking about such dependencies distinguished between algebraic functions, where the rule was specified by an algebraic formula, and geometric functions, where the rule was specified by a geometric construction (such as, e.g., a curve).
With the emergence of set theory (towards the end of the nineteenth century) it became customary to specify the structure of each “thing” by naming the respective “set” that it belongs to (i.e., is a member of).
The structure of a mathematical function:
In modern language we say that a function consists of two sets, \, X \, and \, Y \, and a rule \, f \, which, to each element \, x \, that is a member of the set \, X , assigns one and only one element \, y \, which is a member of the set \, Y . This element \, y \, is denoted by \, f(x) \, and we write \, y = f(x) .
Hence a mathematical function consists of a rule that maps each member of one set to a member of another set. The set \, X \, is called the domain of the function \, f \, and the set \, Y \, is called the codomain of the function \, f \, .
In “mathematese” we write this relationship:
\, X \ni x \mapsto f(x) \in Y \,
or
{{ \,\,\,\, X \, \xrightarrow{ \,\, f \,\, } \, Y \:}\atop {\,\,\,\,\,\, x \,\,\, \mapsto \,\, f(x) } } \, .
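To make this concrete, here is a minimal Python sketch of the definition. The domain \, X , the codomain \, Y , and the rule \, f(x) = x^2 \, are chosen purely for illustration and do not come from the discussion above:

```python
# A minimal sketch of "a function consists of two sets and a rule".
# The domain X, the codomain Y and the rule f below are purely
# illustrative choices.

X = {-2, -1, 0, 1, 2}          # the domain
Y = {0, 1, 2, 3, 4}            # the codomain

def f(x):
    """The rule: to each x in X it assigns one and only one y = f(x) in Y."""
    return x * x

# Each element of the domain is mapped to exactly one element of the codomain:
assert all(f(x) in Y for x in X)

print({x: f(x) for x in sorted(X)})   # {-2: 4, -1: 1, 0: 0, 1: 1, 2: 4}
```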
A conceptual visualization of a general function:
///////
Ordered and unordered collections of objects:
\, X = (x_1, x_2, \dots \, , x_n) \, \qquad \, X = \{ x_1, x_2, \dots \, , x_n \} \,
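The difference between the two kinds of collections can be illustrated in a small Python sketch (the concrete element values are, of course, arbitrary):

```python
# Tuples are ordered collections: the order of the elements matters.
ordered   = (1, 2, 3)
reordered = (3, 2, 1)
assert ordered != reordered

# Sets are unordered collections: order (and repetition) is ignored.
unordered   = {1, 2, 3}
reunordered = {3, 2, 1}
assert unordered == reunordered
assert {1, 2, 2, 3} == {1, 2, 3}
```
///////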
A function \, f \, from \, \mathbb{R}^2 \, to \, \mathbb{R}^2 \, can be described by:
{{ \,\,\,\, \mathbb{R}^2 \, \xrightarrow{ \,\,\,\,\,\, f \,\,\,\,\,\, } \,\,\, \mathbb{R}^2 \:}\atop {\,\,\,\, (x_1,x_2) \,\, \mapsto \,\, f(x_1,x_2) } } \, , where \, f(x_1,x_2) = (f_1(x_1, x_2), f_2(x_1, x_2)) \, .
In order to be compatible with matrix algebra, we will often depict the function \, f \, as operating “in the opposite direction”, i.e., from right to left. In such cases we will write
\, { \,\,\, {\mathbb{R}^2 \,\,\, \xleftarrow{\,\,\,\,\,\, f \,\,\,\,\,\,} \,\,\,\, \mathbb{R}^2 \:}\atop {\, f(x_1,x_2) \,\,\, \leftarrow\shortmid \,\,\, (x_1,x_2) } } \, , instead of {{ \,\,\,\, \mathbb{R}^2 \, \xrightarrow{ \,\,\,\,\,\, f \,\,\,\,\,\, } \,\,\, \mathbb{R}^2 \:}\atop {\,\,\,\, (x_1,x_2) \,\, \mapsto \,\, f(x_1,x_2) } } \, .
NOTE: Since there is no arrow of type “\leftmapsto” in KaTeX, the bottom arrow in our reversed (right-to-left) representation is typeset as “\leftarrow\shortmid”, which is the reason for the small white gap between these symbols. Both of the bar-tailed (“maps to”) arrows symbolize the transformation of elements from the domain of \, f \, to the codomain of \, f . The domain and the codomain of a function are the sets between which the function operates, and, in the diagrams depicted above, their symbols are situated directly above those of their respective elements.
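As a concrete (and entirely arbitrary) example of such a function, the following Python sketch implements \, f(x_1, x_2) = (x_1 + x_2, x_1 x_2) \, in terms of its two component functions:

```python
# A sketch of a function f from R^2 to R^2, written in terms of two
# component functions f1 and f2 as in the diagrams above. The particular
# formulas chosen for f1 and f2 are arbitrary illustrations.

def f1(x1, x2):
    return x1 + x2

def f2(x1, x2):
    return x1 * x2

def f(x1, x2):
    """(x1, x2) |-> (f1(x1, x2), f2(x1, x2))"""
    return (f1(x1, x2), f2(x1, x2))

print(f(2.0, 3.0))   # (5.0, 6.0)
```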
///////
The simplest types of functions
Just as in the case of functions of one variable, the simplest functions are the rational integral functions or polynomials. The most general polynomial of the first degree, regarded as a function, is called an affine function. Such a function has the form \, z = f(x, y) = a x + b y + c \, where \, z \, denotes the dependent variable, \, x \, and \, y \, denote the independent variables, and \, a, b, c \, are regarded as constants (often called parameters). If \, c = 0 \, , the function \, f \, is called a linear function (since it maps zero to zero).
IMPORTANT: An affine function is therefore a linear function plus a constant function.
As we will see, affine functions are fundamentally important when it comes to differentiation.
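As a minimal Python sketch of this terminology, consider the affine function below, with arbitrarily chosen parameter values \, a, b, c \, :

```python
# A sketch of an affine function z = f(x, y) = a*x + b*y + c and of its
# linear part. The parameter values a, b, c are arbitrary choices.

a, b, c = 2.0, -1.0, 5.0

def f(x, y):
    """Affine function: a linear function plus a constant function."""
    return a * x + b * y + c

def f_linear(x, y):
    """The linear part of f (the same formula with c = 0)."""
    return a * x + b * y

# The linear part maps zero to zero; the affine function in general does not:
assert f_linear(0.0, 0.0) == 0.0
assert f(0.0, 0.0) == c

# "An affine function is a linear function plus a constant function":
assert f(3.0, 4.0) == f_linear(3.0, 4.0) + c
```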
/////// Quoting Wikipedia (on Affine transformations):
If \, X \, is the point set of an affine space, then every affine transformation on \, X \, can be represented as the composition of a linear transformation on \, X \, and a translation of \, X \, . Unlike a purely linear transformation, an affine transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.
Examples of affine transformations include translation, scaling, homothety, similarity, reflection, rotation, shear mapping, and compositions of them in any combination and sequence.
/////// End of quote from Wikipedia
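The decomposition “linear transformation plus translation” can be sketched with NumPy. The rotation angle and the translation vector below are arbitrary choices; the point is only that the composed map moves the origin while its purely linear part does not:

```python
import numpy as np

# A sketch of an affine transformation of the plane: a linear transformation
# (here a rotation, chosen arbitrarily) followed by a translation.

theta = np.pi / 6                                  # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # the linear part
t = np.array([3.0, -2.0])                          # the translation part

def affine(x):
    """x |-> A @ x + t : a linear transformation composed with a translation."""
    return A @ x + t

origin = np.array([0.0, 0.0])
print(affine(origin))   # [ 3. -2.]  -- the affine map does not preserve the origin
print(A @ origin)       # [0. 0.]    -- its purely linear part does
```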
NOTE: The perspective japonaise (“Japanese perspective,” i.e., projection from infinity) used by Oscar Reutersvärd is an affine transformation, since it preserves parallel lines.
//// TEXT HERE
///////