Mathematics is the study of numbers, quantity, space, structure, and change. Mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and the social sciences. Applied mathematics, the branch of mathematics concerned with the application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered.
An algorithm is a procedure (a finite set of well-defined instructions) for accomplishing some task which, given an initial state, will terminate in a defined end-state. The computational complexity and efficient implementation of an algorithm are important concerns in computing, and both often depend on the choice of suitable data structures.
Informally, the concept of an algorithm is often illustrated by the example of a recipe, although many algorithms are much more complex; algorithms often have steps that repeat (iterate) or require decisions (such as comparisons or logical tests). Algorithms can be composed to create more complex algorithms.
The concept of an algorithm originated as a means of recording procedures for solving mathematical problems such as finding the greatest common divisor of two numbers or multiplying two numbers. The concept was formalized in 1936 through Alan Turing's Turing machines and Alonzo Church's lambda calculus, which in turn formed the foundation of computer science.
Most algorithms can be implemented directly as computer programs; any others can, at least in theory, be simulated by computer programs. In many programming languages, algorithms are implemented as functions or procedures.
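A minimal illustration of such a function is Euclid's algorithm for the greatest common divisor, one of the classical problems mentioned above, sketched here in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    when the remainder reaches zero, the last nonzero value is the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

The loop exhibits both of the features noted earlier: it iterates, and each pass makes a decision (is the remainder zero?) that determines whether to continue.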
Anscombe's quartet is a collection of four sets of bivariate data (paired x–y observations) illustrating the importance of graphical displays of data when analyzing relationships among variables. The data sets were specially constructed in 1973 by English statistician Frank Anscombe to have the same (or nearly the same) values for many commonly computed descriptive statistics (values which summarize different aspects of the data) and yet to look very different when their scatter plots are compared.

The four x variables share exactly the same mean (or "average value") of 9; the four y variables have approximately the same mean of 7.50, to 2 decimal places of precision. Similarly, the data sets share at least approximately the same standard deviations for x and y, and correlation between the two variables. When y is viewed as being dependent on x and a least-squares regression line is fit to each data set, almost the same slope and y-intercept are found in all cases, resulting in almost the same predicted values of y for any given x value, and approximately the same coefficient of determination or R² value (a measure of the fraction of variation in y that can be "explained" by x, or more intuitively "how well y can be predicted" from x). Many other commonly computed statistics are also almost the same for the four data sets, including the standard error of the regression equation and the t statistic and accompanying p-value for testing the significance of the slope. Clear differences between the data sets are apparent, however, when they are graphed using scatter plots.
The plots even suggest particular reasons why y cannot be perfectly predicted from x using each regression line: (1) While the variables are roughly linearly related in the first data set, there is more variability in y than can be accounted for by x, as seen in the vertical spread of the points around the regression line; in this case, one or more additional independent variables may be needed to account for some of this "residual" variation in y. (2) The second scatter plot shows strong curvature, so a simple linear model is not even appropriate for the data; polynomial regression or some other model allowing for nonlinear relationships may be appropriate. (3) The third data set contains an outlier, which ruins the otherwise perfect linear relationship between the variables; this may indicate that an error was made in collecting or recording the data, or may reveal an aspect of the variation of y that has not been considered. (4) The fourth data set contains an influential point that is almost completely determining the slope of the regression line; the reliability of the line would be increased if more data were collected at the high x value, or at any other x values besides 8.

Although some other common summary statistics such as quartiles could have revealed differences across the four data sets, the plots give additional information that would be difficult to glean from mere numerical summaries. The importance of visualizing data is magnified (and made more complicated) when dealing with higher-dimensional data sets. Multiple regression is a straightforward generalization of linear regression to the case of multiple independent variables, while "multivariate" regression methods such as the general linear model allow for multiple dependent variables.
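The effect of a single influential point, as in the fourth case above, can be sketched with synthetic data (illustrative values, not Anscombe's actual fourth data set): when every other observation shares one x value, a lone point at a distant x value determines the slope entirely, and without it no slope can be estimated at all.

```python
import statistics as st

# Synthetic illustration: ten points share x = 8; one influential
# point sits at x = 19 and single-handedly fixes the fitted slope.
x = [8] * 10 + [19]
y = [6.6, 5.8, 7.7, 8.8, 8.5, 7.0, 5.3, 5.6, 7.9, 6.9, 12.5]

slope, intercept = st.linear_regression(x, y)
print(round(slope, 2), round(intercept, 2))

# Drop the influential point: x then has zero variance, and the
# regression slope is undefined.
try:
    st.linear_regression(x[:10], y[:10])
except st.StatisticsError:
    print("slope undefined when all x values coincide")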
Other statistical procedures designed to reveal relationships in multivariate data (several of which are closely tied to useful graphical depictions of the data) include principal component analysis, factor analysis, multidimensional scaling, discriminant function analysis, cluster analysis, and many others.
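As one example from this list, principal component analysis can be sketched in a few lines with NumPy (a hypothetical toy data set; PCA computed via eigendecomposition of the covariance matrix):

```python
import numpy as np

# Toy bivariate data (illustrative only): y is nearly 2x, so almost
# all variation lies along a single direction in the plane.
rng = np.random.default_rng(0)
u = rng.normal(size=100)
data = np.column_stack([u, 2 * u + rng.normal(scale=0.1, size=100)])

# PCA: center the data, then eigendecompose the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]        # largest variance first
components = eigvecs[:, order]           # principal directions (columns)
explained = eigvals[order] / eigvals.sum()

print(explained)  # first component captures nearly all the variance
```

This is the numerical counterpart of the graphical point above: a plot of the data would show the points hugging one line, and PCA quantifies that by assigning almost all of the variance to the first component.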