This glossary of calculus is a list of definitions about calculus, its sub-disciplines, and related fields.
An infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series $\textstyle\sum_{n=0}^{\infty} a_n$ is said to converge absolutely if $\textstyle\sum_{n=0}^{\infty} |a_n| = L$ for some real number L. Similarly, an improper integral of a function, $\textstyle\int_0^\infty f(x)\,dx$, is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if $\textstyle\int_0^\infty |f(x)|\,dx = L$.
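For example, the alternating harmonic series converges (to ln 2) but not absolutely, since the harmonic series diverges, whereas the corresponding series with squared denominators converges absolutely:
\[ \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \ln 2 \quad\text{(not absolutely convergent)}, \qquad \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} \quad\text{(absolutely convergent, since } \sum_{n=1}^{\infty} \tfrac{1}{n^2} = \tfrac{\pi^2}{6} \text{ is finite).} \]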
The absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x (in which case −x is positive), and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.
The alternating series test is the method used to prove that an alternating series with terms that decrease in absolute value is a convergent series. The test was used by Gottfried Leibniz and is sometimes known as Leibniz's test, Leibniz's rule, or the Leibniz criterion.
An antiderivative, primitive function, primitive integral or indefinite integral[Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f.[1][2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration) and its opposite operation is called differentiation, which is the process of finding a derivative.
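For example, F(x) = x³/3 is an antiderivative of f(x) = x², since F′(x) = x²; so is x³/3 + 1, and more generally x³/3 + C for any constant C.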
In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. Some sources include the requirement that the curve may not cross the line infinitely often, but this is unusual for modern authors.[3] In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity.[4][5]
In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[6][7] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
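As a rough illustration of the forward-mode flavour of this idea, the sketch below propagates a value together with its derivative (a "dual number") through ordinary arithmetic, applying the product and chain rules at each elementary step; the class name Dual, the helper sin, and the sample function f are illustrative choices for this sketch, not part of any particular AD library.

import math

class Dual:
    """A value paired with its derivative, for a forward-mode AD sketch."""
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def sin(x):
    # chain rule applied to an elementary function: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

def f(x):
    # sample program: f(x) = x*sin(x) + 3x
    return x * sin(x) + 3 * x

x = Dual(2.0, 1.0)        # seed the derivative dx/dx = 1
y = f(x)
print(y.value, y.deriv)   # f(2) and f'(2) = sin(2) + 2*cos(2) + 3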
Any of the positive integers that occurs as a coefficient in the binomial theorem is a binomial coefficient. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written $\tbinom{n}{k}$. It is the coefficient of the $x^k$ term in the polynomial expansion of the binomial power $(1 + x)^n$, and it is given by the formula
\[ \binom{n}{k} = \frac{n!}{k!\,(n-k)!}. \]
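For example, the coefficients of $(1 + x)^4$ are the binomial coefficients with n = 4:
\[ (1+x)^4 = \binom{4}{0} + \binom{4}{1}x + \binom{4}{2}x^2 + \binom{4}{3}x^3 + \binom{4}{4}x^4 = 1 + 4x + 6x^2 + 4x^3 + x^4. \]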
A function f defined on some set X with real or complex values is called bounded if the set of its values is bounded. In other words, there exists a real number M such that |f(x)| ≤ M
for all x in X. A function that is not bounded is said to be unbounded.
Sometimes, if f(x) ≤ A for all x in X, then the function is said to be bounded above by A. On the other hand, if f(x) ≥ B for all x in X, then the function is said to be bounded below by B.
Calculus (from Latin calculus, literally 'small pebble', used for counting and calculations, as on an abacus)[8] is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations.
Cavalieri's principle, a modern implementation of the method of indivisibles, named after Bonaventura Cavalieri, is as follows:[9]
2-dimensional case: Suppose two regions in a plane are included between two parallel lines in that plane. If every line parallel to these two lines intersects both regions in line segments of equal length, then the two regions have equal areas.
3-dimensional case: Suppose two regions in three-space (solids) are included between two parallel planes. If every plane parallel to these two planes intersects both regions in cross-sections of equal area, then the two regions have equal volumes.
The chain rule is a formula for computing the derivative of the composition of two or more functions. That is, if f and g are functions, then the chain rule expresses the derivative of their composition f∘g (the function which maps x to f(g(x)) ) in terms of the derivatives of f and g and the product of functions as follows:
\[ (f\circ g)' = (f'\circ g)\cdot g'. \]
This may equivalently be expressed in terms of the variable. Let F = f∘g, or equivalently, F(x) = f(g(x)) for all x. Then one can also write
\[ F'(x) = f'(g(x))\,g'(x). \]
The chain rule may be written in Leibniz's notation in the following way. If a variable z depends on the variable y, which itself depends on the variable x, so that y and z are therefore dependent variables, then z, via the intermediate variable of y, depends on x as well. The chain rule then states
\[ \frac{dz}{dx} = \frac{dz}{dy}\cdot\frac{dy}{dx}. \]
The two versions of the chain rule are related; if z = f(y) and y = g(x), then
\[ \frac{dz}{dx} = \frac{dz}{dy}\cdot\frac{dy}{dx} = f'(y)\,g'(x) = f'(g(x))\,g'(x). \]
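For example, to differentiate h(x) = sin(x²), take f(u) = sin u and g(x) = x²; then
\[ h'(x) = f'(g(x))\,g'(x) = \cos(x^2)\cdot 2x. \]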
Change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem.
A concave function is the negative of a convex function. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap or upper convex.
The indefinite integral of a given function (i.e., the set of all antiderivatives of the function) on a connected domain is only defined up to an additive constant, the constant of integration.[15][16] This constant expresses an ambiguity inherent in the construction of antiderivatives. If a function f is defined on an interval and F is an antiderivative of f, then the set of all antiderivatives of f is given by the functions F(x) + C, where C is an arbitrary constant (meaning that any value of C makes F(x) + C a valid antiderivative). The constant of integration is sometimes omitted in lists of integrals for simplicity.
A continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism.
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane.[17][18][19]
A series is convergent if the sequence of its partial sums tends to a limit; that means that the partial sums become closer and closer to a given number when the number of their terms increases. More precisely, a series $\textstyle\sum_{n=1}^{\infty} a_n$ converges if there exists a number ℓ such that for any arbitrarily small positive number ε, there is a (sufficiently large) integer N such that for all n ≥ N,
\[ \left| a_1 + a_2 + \cdots + a_n - \ell \right| < \varepsilon. \]
If the series is convergent, the number ℓ (necessarily unique) is called the sum of the series.
Any series that is not convergent is said to be divergent.
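For example, the geometric series with ratio 1/2 is convergent,
\[ \sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1, \]
while the harmonic series $\textstyle\sum_{n=1}^{\infty} \tfrac{1}{n}$ is divergent.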
In mathematics, a real-valued function defined on an n-dimensional interval is called convex (or convex downward or concave upward) if the line segment between any two points on the graph of the function lies above or on the graph, in a Euclidean space (or more generally a vector space) of at least two dimensions. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. For a twice differentiable function of a single variable, if the second derivative is always greater than or equal to zero for its entire domain then the function is convex.[20] Well-known examples of convex functions include the quadratic function and the exponential function.
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand-sides of the equations. It is named after Gabriel Cramer (1704–1752), who published the rule for an arbitrary number of unknowns in 1750,[21][22] although Colin Maclaurin also published special cases of the rule in 1748[23] (and possibly knew of it as early as 1729).[24][25][26]
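As an illustration, for the 2 × 2 system ax + by = e, cx + dy = f with ad − bc ≠ 0, Cramer's rule gives
\[ x = \frac{\begin{vmatrix} e & b \\ f & d \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{ed - bf}{ad - bc}, \qquad y = \frac{\begin{vmatrix} a & e \\ c & f \end{vmatrix}}{\begin{vmatrix} a & b \\ c & d \end{vmatrix}} = \frac{af - ec}{ad - bc}. \]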
In geometry, curve sketching (or curve tracing) includes techniques that can be used to produce a rough idea of the overall shape of a plane curve given its equation, without computing the large number of points required for a detailed plot. It is an application of the theory of curves to find their main features. Here the input is an equation.
In digital geometry it is a method of drawing a curve pixel by pixel. Here the input is an array (a digital image).
The degree of a polynomial is the highest degree of its monomials (individual terms) with non-zero coefficients. The degree of a term is the sum of the exponents of the variables that appear in it, and thus is a non-negative integer.
The derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances.
A differentiable function of one real variable is a function whose derivative exists at each point in its domain. As a result, the graph of a differentiable function must have a (non-vertical) tangent line at each point in its domain, be relatively smooth, and cannot contain any breaks, bends, or cusps.
The term differential is used in calculus to refer to an infinitesimal (infinitely small) change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is extremely useful intuitively, and there are a number of ways to make the notion mathematically precise.
Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula
\[ dy = \frac{dy}{dx}\,dx, \]
where dy/dx denotes the derivative of y with respect to x. This formula summarizes the intuitive idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx becomes infinitesimal.
Differential calculus is a subfield of calculus[30] concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus, the study of the area beneath a curve.[31]
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two.
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by
\[ dy = f'(x)\,dx, \]
where f′(x) is the derivative of f with respect to x, and dx is an additional real variable (so that dy is a function of x and dx). The notation is such that the equation
\[ dy = \frac{dy}{dx}\,dx \]
holds, where the derivative is represented in the Leibniz notation dy/dx, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes
\[ df(x) = f'(x)\,dx. \]
The precise meaning of the variables dy and dx depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables dx and dy are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis.
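For example, if y = x², then dy = 2x dx; at x = 3 an increment dx = 0.1 gives the linear approximation dy = 0.6, close to the actual change f(3.1) − f(3) = 0.61.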
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function.
In mathematics, the dot product or scalar product[note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called "the" inner product (or rarely projection product) of Euclidean space even though it is not the only inner product that can be defined on Euclidean space; see also inner product space.
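For example, for the three-dimensional vectors a = (1, 3, −5) and b = (4, −2, −1),
\[ \mathbf{a}\cdot\mathbf{b} = (1)(4) + (3)(-2) + (-5)(-1) = 4 - 6 + 5 = 3. \]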
The multiple integral is a definite integral of a function of more than one real variable, for example, f(x, y) or f(x, y, z). Integrals of a function of two variables over a region in R² are called double integrals, and integrals of a function of three variables over a region of R³ are called triple integrals.[33]
The number e is a mathematical constant that is the base of the natural logarithm: the unique number whose natural logarithm is equal to one. It is approximately equal to 2.71828,[34] and is the limit of $(1 + 1/n)^n$ as n approaches infinity, an expression that arises in the study of compound interest. It can also be calculated as the sum of the infinite series[35]
\[ e = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + \frac{1}{1} + \frac{1}{1\cdot 2} + \frac{1}{1\cdot 2\cdot 3} + \cdots. \]
In integral calculus, elliptic integrals originally arose in connection with the problem of giving the arc length of an ellipse. They were first studied by Giulio Fagnano and Leonhard Euler (c. 1750). Modern mathematics defines an "elliptic integral" as any function f which can be expressed in the form
\[ f(x) = \int_{c}^{x} R\left(t, \sqrt{P(t)}\right)\,dt, \]
where R is a rational function of its two arguments, P is a polynomial of degree 3 or 4 with no repeated roots, and c is a constant.
For an essential discontinuity, only one of the two one-sided limits needs to fail to exist or be infinite.
Consider a function f and a point x0 at which the one-sided limit from the left does not exist and the one-sided limit from the right is infinite.
Then, the point x0 is an essential discontinuity.
In this case, $L^{-} = \lim_{x\to x_0^{-}} f(x)$ does not exist and $L^{+} = \lim_{x\to x_0^{+}} f(x)$ is infinite – thus satisfying twice the conditions of essential discontinuity. So x0 is an essential discontinuity, infinite discontinuity, or discontinuity of the second kind. (This is distinct from the term essential singularity, which is often used when studying functions of complex variables.)
In mathematics, an exponential function is a function of the form
\[ f(x) = b^x, \]
where b is a positive real number, and in which the argument x occurs as an exponent. For real numbers c and d, a function of the form $f(x) = a b^{cx+d}$ is also an exponential function, as it can be rewritten as
\[ a b^{cx+d} = \left(a b^{d}\right)\left(b^{c}\right)^{x}. \]
The extreme value theorem states that if a real-valued function f is continuous on the closed interval [a,b], then f must attain a maximum and a minimum, each at least once. That is, there exist numbers c and d in [a,b] such that
\[ f(c) \ge f(x) \ge f(d) \quad\text{for all } x \in [a,b]. \]
A related theorem is the boundedness theorem, which states that a continuous function f in the closed interval [a,b] is bounded on that interval. That is, there exist real numbers m and M such that
\[ m \le f(x) \le M \quad\text{for all } x \in [a,b]. \]
The extreme value theorem enriches the boundedness theorem by saying that not only is the function bounded, but it also attains its least upper bound as its maximum and its greatest lower bound as its minimum.
In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest value of the function, either within a given range (the local or relative extrema) or on the entire domain of a function (the global or absolute extrema).[37][38][39]Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.
As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum.
Faà di Bruno's formula is an identity in mathematics generalizing the chain rule to higher derivatives, named after Francesco Faà di Bruno (1855, 1857), though he was not the first to state or prove the formula. In 1800, more than 50 years before Faà di Bruno, the French mathematician Louis François Antoine Arbogast stated the formula in a calculus textbook,[40] considered the first published reference on the subject.[41]
Perhaps the most well-known form of Faà di Bruno's formula says that
\[ \frac{d^n}{dx^n} f(g(x)) = \sum \frac{n!}{m_1!\,1!^{m_1}\,m_2!\,2!^{m_2}\,\cdots\,m_n!\,n!^{m_n}} \cdot f^{(m_1+\cdots+m_n)}(g(x)) \cdot \prod_{j=1}^{n} \left(g^{(j)}(x)\right)^{m_j}, \]
where the sum is over all n-tuples of nonnegative integers (m1, …, mn) satisfying the constraint
\[ 1\cdot m_1 + 2\cdot m_2 + 3\cdot m_3 + \cdots + n\cdot m_n = n. \]
Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have a combinatorial interpretation are less explicit:
\[ \frac{d^n}{dx^n} f(g(x)) = \sum \frac{n!}{m_1!\,m_2!\,\cdots\,m_n!} \cdot f^{(m_1+\cdots+m_n)}(g(x)) \cdot \prod_{j=1}^{n} \left(\frac{g^{(j)}(x)}{j!}\right)^{m_j}. \]
Combining the terms with the same value of m1 + m2 + ... + mn = k and noticing that mj has to be zero for j > n − k + 1 leads to a somewhat simpler formula expressed in terms of Bell polynomials Bn,k(x1, ..., xn−k+1):
\[ \frac{d^n}{dx^n} f(g(x)) = \sum_{k=1}^{n} f^{(k)}(g(x)) \cdot B_{n,k}\left(g'(x), g''(x), \dots, g^{(n-k+1)}(x)\right). \]
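For instance, for n = 2 the formula reduces to the familiar second-derivative form of the chain rule:
\[ \frac{d^2}{dx^2} f(g(x)) = f''(g(x))\,\left(g'(x)\right)^2 + f'(g(x))\,g''(x). \]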
The first derivative test examines a function's monotonic properties (where the function is increasing or decreasing) focusing on a particular point in its domain. If the function "switches" from increasing to decreasing at the point, then the function will achieve a highest value at that point. Similarly, if the function "switches" from decreasing to increasing at the point, then it will achieve a least value at that point. If the function fails to "switch", and remains increasing or remains decreasing, then no highest or least value is achieved.
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D,
\[ Df(x) = \frac{d}{dx} f(x), \]
and of the integration operator J,
\[ Jf(x) = \int_0^x f(s)\,ds, \]
and developing a calculus for such operators generalizing the classical one.
In this context, the term powers refers to iterative application of a linear operator to a function, in some analogy to function composition acting on a variable, i.e. f∘2(x) = f ∘ f (x) = f ( f (x) ).
A function is a process or a relation that associates each element x of a set X, the domain of the function, to a single element y of another set Y (possibly the same set), the codomain of the function. If the function is called f, this relation is denoted y = f(x) (read f of x), the element x is the argument or input of the function, and y is the value of the function, the output, or the image of x by f.[43] The symbol that is used for representing the input is the variable of the function (one often says that f is a function of the variable x).
Function composition is an operation that takes two functions f and g and produces a function h such that h(x) = g(f(x)). In this operation, the function g is applied to the result of applying the function f to x. That is, the functions f : X → Y and g : Y → Z are composed to yield a function that maps x in X to g(f(x)) in Z.
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function with the concept of integrating a function. The first part of the theorem, sometimes called the first fundamental theorem of calculus, states that one of the antiderivatives (also called indefinite integral), say F, of some function f may be obtained as the integral of f with a variable bound of integration. This implies the existence of antiderivatives for continuous functions.[44] Conversely, the second part of the theorem, sometimes called the second fundamental theorem of calculus, states that the integral of a function f over some interval can be computed by using any one, say F, of its infinitely many antiderivatives. This part of the theorem has key practical applications, because explicitly finding the antiderivative of a function by symbolic integration avoids numerical integration to compute integrals. This provides generally a better numerical accuracy.
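Stated symbolically for a continuous function f on [a, b] with antiderivative F, the two parts read
\[ \frac{d}{dx}\int_a^x f(t)\,dt = f(x) \qquad\text{and}\qquad \int_a^b f(x)\,dx = F(b) - F(a). \]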
The general Leibniz rule,[45] named after Gottfried Wilhelm Leibniz, generalizes the product rule (which is also known as "Leibniz's rule"). It states that if f and g are n-times differentiable functions, then the product fg is also n-times differentiable and its nth derivative is given by
\[ (fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(n-k)} g^{(k)}. \]
In geometry, a golden spiral is a logarithmic spiral whose growth factor is φ, the golden ratio.[52] That is, a golden spiral gets wider (or further from its origin) by a factor of φ for every quarter turn it makes.
In mathematics, a harmonic progression (or harmonic sequence) is a progression formed by taking the reciprocals of an arithmetic progression. It is a sequence of the form
\[ \frac{1}{a},\ \frac{1}{a+d},\ \frac{1}{a+2d},\ \frac{1}{a+3d},\ \cdots,\ \frac{1}{a+kd}, \]
where −a/d is not a natural number and k is a natural number.
Equivalently, a sequence is a harmonic progression when each term is the harmonic mean of the neighboring terms.
It is not possible for a harmonic progression (other than the trivial case where a = 1 and k = 0) to sum to an integer. The reason is that, necessarily, at least one denominator of the progression will be divisible by a prime number that does not divide any other denominator.[53]
Let f be a differentiable function, and let f ′ be its derivative. The derivative of f ′ (if it has one) is written f ′′ and is called the second derivative of f. Similarly, the derivative of the second derivative, if it exists, is written f ′′′ and is called the third derivative of f. Continuing this process, one can define, if it exists, the nth derivative as the derivative of the (n-1)th derivative. These repeated derivatives are called higher-order derivatives. The nth derivative is also called the derivative of order n.
A differential equation can be homogeneous in either of two respects. A first-order ordinary differential equation is said to be homogeneous if it may be written
\[ f(x,y)\,dy = g(x,y)\,dx, \]
where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form
\[ \frac{dx}{x} = h(u)\,du, \]
which is easy to solve by integration of the two members.
Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term.
An identity function, also called an identity relation, identity map or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x.
An imaginary number is a complex number that can be written as a real number multiplied by the imaginary unit i,[note 2] which is defined by its property i² = −1.[54] The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. Zero is considered to be both real and imaginary.[55]
In mathematics, an implicit equation is a relation of the form $R(x_1, \dots, x_n) = 0$, where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x² + y² − 1 = 0.
An implicit function is a function that is defined implicitly by an implicit equation, by associating one of the variables (the value) with the others (the arguments).[56]: 204–206 Thus, an implicit function for y in the context of the unit circle is defined implicitly by x² + y² − 1 = 0. This implicit equation defines y as a function of x only if −1 ≤ x ≤ 1 and one considers only non-negative (or non-positive) values for the values of the function.
The implicit function theorem provides conditions under which some kinds of relations define an implicit function, namely relations defined as the indicator function of the zero set of some continuously differentiable multivariate function.
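For example, differentiating the unit-circle relation x² + y² − 1 = 0 implicitly with respect to x gives 2x + 2y·(dy/dx) = 0, so the implicitly defined function satisfies
\[ \frac{dy}{dx} = -\frac{x}{y} \quad\text{wherever } y \ne 0. \]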
Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise.[57][58] In general, a common fraction is said to be a proper fraction if the absolute value of the fraction is strictly less than one—that is, if the fraction is greater than −1 and less than 1.[59][60]
It is said to be an improper fraction, or sometimes top-heavy fraction,[61] if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, –3/4, and 4/9; examples of improper fractions are 9/4, –4/3, and 3/3.
In mathematical analysis, an improper integral is the limit of a definite integral as an endpoint of the interval(s) of integration approaches either a specified real number, ∞, or −∞, or in some instances as both endpoints approach limits. Such an integral is often written symbolically just like a standard definite integral, in some cases with infinity as a limit of integration.
Specifically, an improper integral is a limit of the form
\[ \lim_{b\to\infty} \int_a^b f(x)\,dx, \qquad \lim_{a\to-\infty} \int_a^b f(x)\,dx, \]
or
\[ \lim_{c\to b^-} \int_a^c f(x)\,dx, \qquad \lim_{c\to a^+} \int_c^b f(x)\,dx, \]
in which one takes a limit in one or the other (or sometimes both) endpoints (Apostol 1967, §10.23).
In differential calculus, an inflection point, point of inflection, flex, or inflection (British English: inflexion) is a point on a continuous plane curve at which the curve changes from being concave (concave downward) to convex (concave upward), or vice versa.
The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the "instantaneous rate of change", the ratio of the instantaneous change in the dependent variable to that of the independent variable.
If we consider v as velocity and x as the displacement (change in position) vector, then we can express the (instantaneous) velocity of a particle or object, at any particular time t, as the derivative of the position with respect to time:
\[ \mathbf{v} = \frac{d\mathbf{x}}{dt}. \]
From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time graph (v vs. t) is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t).
Since the derivative of the position with respect to time gives the change in position (in metres) divided by the change in time (in seconds), velocity is measured in metres per second (m/s). Although the concept of an instantaneous velocity might at first seem counter-intuitive, it may be thought of as the velocity that the object would continue to travel at if it stopped accelerating at that moment.
An integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse operation, differentiation, being the other.
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be readily derived by integrating the product rule of differentiation.
If u = u(x) and du = u′(x) dx, while v = v(x) and dv = v′(x) dx, then integration by parts states that:
\[ \int u\,dv = uv - \int v\,du. \]
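For example, taking u = x and dv = eˣ dx (so that du = dx and v = eˣ) gives
\[ \int x e^x\,dx = x e^x - \int e^x\,dx = (x - 1)e^x + C. \]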
In mathematical analysis, the intermediate value theorem states that if a continuous function, f, with an interval, [a, b], as its domain, takes values f(a) and f(b) at each end of the interval, then it also takes any value between f(a) and f(b) at some point within the interval.
This has two important corollaries:
If a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem).[64]
The image of a continuous function over an interval is itself an interval.
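Bolzano's theorem underlies the bisection method for locating such a root numerically. The following is a minimal sketch (the name bisect and the tolerance tol are illustrative choices), assuming f is continuous on [a, b] and f(a), f(b) have opposite signs.

def bisect(f, a, b, tol=1e-10):
    """Find a root of a continuous function f on [a, b] when f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # sign change in [a, m], so a root lies there
            b, fb = m, fm
        else:              # otherwise the sign change is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 1.0, 2.0))   # approximates sqrt(2) ≈ 1.41421356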
Consider a function whose one-sided limits at the point x0 = 1 both exist and are finite, but are not equal to each other. Then, the point x0 = 1 is a jump discontinuity.
In this case, a single limit does not exist because the one-sided limits, L− and L+, exist and are finite, but are not equal: since L− ≠ L+, the limit L does not exist. Then, x0 is called a jump discontinuity, step discontinuity, or discontinuity of the first kind. For this type of discontinuity, the function f may have any value at x0.
In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the x-axis. The Lebesgue integral extends the integral to a larger class of functions. It also extends the domains on which these functions can be defined.
L'Hôpital's rule or L'Hospital's rule uses derivatives to help evaluate limits involving indeterminate forms. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be evaluated by substitution, allowing easier evaluation of the limit. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the contribution of the rule is often attributed to L'Hôpital, the theorem was first introduced to L'Hôpital in 1694 by the Swiss mathematician Johann Bernoulli.
L'Hôpital's rule states that for functions f and g which are differentiable on an open interval I except possibly at a point c contained in I, if
\[ \lim_{x\to c} f(x) = \lim_{x\to c} g(x) = 0 \ \text{or} \ \pm\infty \quad\text{and}\quad g'(x) \ne 0 \]
for all x in I with x ≠ c, and $\lim_{x\to c} \frac{f'(x)}{g'(x)}$ exists, then
\[ \lim_{x\to c} \frac{f(x)}{g(x)} = \lim_{x\to c} \frac{f'(x)}{g'(x)}. \]
The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be evaluated directly.
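For example, the limit of sin x / x as x approaches 0 has the indeterminate form 0/0; applying the rule gives
\[ \lim_{x\to 0} \frac{\sin x}{x} = \lim_{x\to 0} \frac{\cos x}{1} = 1. \]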
In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants).[74][75][76] The concept of linear combinations is central to linear algebra and related fields of mathematics.
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is generally written as ln x, logₑ x, or sometimes, if the base e is implicit, simply log x.[77] Parentheses are sometimes added for clarity, giving ln(x), logₑ(x) or log(x). This is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity.
Pappus's centroid theorem (also known as the Guldinus theorem, Pappus–Guldinus theorem or Pappus's theorem) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution.
A parabola is a plane curve that is mirror-symmetrical and approximately U-shaped. It fits several superficially different other mathematical descriptions, which can all be proved to define exactly the same curves.
In algebra, a quadratic function, a quadratic polynomial, a polynomial of degree 2, or simply a quadratic, is a polynomial function with one or more variables in which the highest-degree term is of the second degree. For example, a quadratic function in three variables x, y, and z contains exclusively terms x², y², z², xy, xz, yz, x, y, z, and a constant:
\[ f(x,y,z) = ax^2 + by^2 + cz^2 + dxy + exz + fyz + gx + hy + iz + k, \]
with at least one of the coefficients a, b, c, d, e, or f of the second-degree terms being non-zero.
A univariate (single-variable) quadratic function has the form[78]
\[ f(x) = ax^2 + bx + c, \qquad a \ne 0, \]
in the single variable x. The graph of a univariate quadratic function is a parabola whose axis of symmetry is parallel to the y-axis.
If the quadratic function is set equal to zero, then the result is a quadratic equation. The solutions to the univariate equation are called the roots of the univariate function.
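For example, by the quadratic formula the roots of f(x) = ax² + bx + c (with a ≠ 0) are
\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. \]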
The bivariate case in terms of variables x and y has the form
\[ f(x,y) = ax^2 + by^2 + cxy + dx + ey + f, \]
with at least one of a, b, c not equal to zero, and an equation setting this function equal to zero gives rise to a conic section (a circle or other ellipse, a parabola, or a hyperbola).
In general there can be an arbitrarily large number of variables, in which case the resulting surface is called a quadric, but the highest degree term must be of degree 2, such as x², xy, yz, etc.
The radian is the SI unit for measuring angles, and is the standard unit of angular measure used in many areas of mathematics. The length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends; one radian is just under 57.3 degrees (expansion at OEIS: A072097). The unit was formerly an SI supplementary unit, but this category was abolished in 1995 and the radian is now considered an SI derived unit.[79] Separately, the SI unit of solid angle measurement is the steradian.
^"Calculus". OxfordDictionaries. Archived from the original on April 30, 2013. Retrieved 15 September 2017.
^Eves, Howard (March 1991). "Two Surprising Theorems on Cavalieri Congruence". The College Mathematics Journal. 22 (2): 118–124. doi:10.1080/07468342.1991.11973367.
^Hall, Arthur Graham; Frink, Fred Goodrich (January 1909). "Chapter II. The Acute Angle [10] Functions of complementary angles". Written at Ann Arbor, Michigan, USA. Trigonometry. Vol. Part I: Plane Trigonometry. New York, USA: Henry Holt and Company / Norwood Press / J. S. Cushing Co. - Berwick & Smith Co., Norwood, Massachusetts, USA. pp. 11–12. Retrieved 2017-08-12.
^Démonstration d’un théorème d’Abel. Journal de mathématiques pures et appliquées 2nd series, tome 7 (1862), pp. 253–255. Archived 2011-07-21 at the Wayback Machine.
^Taczanowski, Stefan (October 1978). "On the optimization of some geometric parameters in 14 MeV neutron activation analysis". Nuclear Instruments and Methods. 155 (3): 543–546. Bibcode:1978NucIM.155..543T. doi:10.1016/0029-554X(78)90541-4.
^Ebner, Dieter (2005-07-25). Preparatory Course in Mathematics (PDF) (6 ed.). Department of Physics, University of Konstanz. Archived (PDF) from the original on 2017-07-26. Retrieved 2017-07-26.[page needed]
^Mejlbro, Leif (2010-11-11). Stability, Riemann Surfaces, Conformal Mappings - Complex Functions Theory (PDF) (1 ed.). Ventus Publishing ApS / Bookboon. ISBN 978-87-7681-702-2. Archived (PDF) from the original on 2017-07-26. Retrieved 2017-07-26.[page needed]
^Durán, Mario (2012). Mathematical methods for wave propagation in science and engineering. 1: Fundamentals (1 ed.). Ediciones UC. p. 88. ISBN 978-956141314-6.[page needed]
^Hall, Arthur Graham; Frink, Fred Goodrich (January 1909). "Chapter II. The Acute Angle [14] Inverse trigonometric functions". Written at Ann Arbor, Michigan, USA. Trigonometry. Part I: Plane Trigonometry. New York, USA: Henry Holt and Company / Norwood Press / J. S. Cushing Co. - Berwick & Smith Co., Norwood, Massachusetts, USA. p. 15. Retrieved 2017-08-12. […] α = arcsin m: It is frequently read "arc-sinem" or "anti-sine m," since two mutually inverse functions are said each to be the anti-function of the other. […] A similar symbolic relation holds for the other trigonometric functions. […] This notation is universally used in Europe and is fast gaining ground in this country. A less desirable symbol, α = sin-1m, is still found in English and American texts. The notation α = inv sin m is perhaps better still on account of its general applicability. […]
^Klein, Christian Felix (1924) [1902]. Elementarmathematik vom höheren Standpunkt aus: Arithmetik, Algebra, Analysis (in German). 1 (3rd ed.). Berlin: J. Springer.
^Klein, Christian Felix (2004) [1932]. Elementary Mathematics from an Advanced Standpoint: Arithmetic, Algebra, Analysis. Translated by Hedrick, E. R.; Noble, C. A. (Translation of 3rd German ed.). Dover Publications, Inc. / The Macmillan Company. ISBN 978-0-48643480-3. Retrieved 2017-08-13.
^Dörrie, Heinrich (1965). Triumph der Mathematik. Translated by Antin, David. Dover Publications. p. 69. ISBN 978-0-486-61348-2.
^j is usually used in Engineering contexts where i has other meanings (such as electrical current)
^Antiderivatives are also called general integrals, and sometimes integrals. The latter term is generic, and refers not only to indefinite integrals (antiderivatives), but also to definite integrals. When the word integral is used without additional specification, the reader is supposed to deduce from the context whether it refers to a definite or indefinite integral. Some authors define the indefinite integral of a function as the set of its infinitely many possible antiderivatives. Others define it as an arbitrarily selected element of that set. Wikipedia adopts the latter approach.[citation needed]
^The symbol J is commonly used instead of the intuitive I in order to avoid confusion with other concepts identified by similar I–like glyphs, e.g. identities.