**The straight line**

Mathematical models can be as simple as a straight line. In a linear model, a straight line describes the relation between two variables: we express y according to the value of x, i.e. we predict y from x. Therefore x is called the predictor or **independent variable**, and y the predicted or **dependent variable**.

**Figure 1** shows the relation between y and x.

The straight line represents the average value of y for different values of x. It is a regression line, obtained by fitting a straight-line equation to the data. A simple way to understand how the line is fitted to the scatter of points is to visually guess where it would need to be placed in order to minimise the vertical distances between each point and the line; formally, the least-squares fit minimises the sum of the squared vertical distances.
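The minimisation idea can be sketched numerically. A minimal example, using hypothetical data, compares the sum of squared vertical distances for two candidate lines; the least-squares line is the one that makes this quantity as small as possible:

```python
import numpy as np

# Hypothetical data points (for illustration only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

def sum_squared_residuals(b0, b1):
    """Sum of squared vertical distances between each point and the line y = b0 + b1*x."""
    return float(np.sum((y - (b0 + b1 * x)) ** 2))

# The least-squares line minimises this quantity over all (b0, b1).
good = sum_squared_residuals(0.1, 1.95)  # a line close to the best fit
bad = sum_squared_residuals(0.0, 1.0)    # a poorly placed line
print(good < bad)  # True: the better-placed line leaves smaller residuals
```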

The equation of a straight line is:

y = β_{0} + β_{1}x

In which

β_{0} is the intercept (the value of y when x = 0)

β_{1} is the coefficient of x. It describes the slope of the line: the number of units of change in y when x increases by 1 unit.
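Fitting this equation to data and reading off the two coefficients can be sketched as follows, using hypothetical values and NumPy's `polyfit` (degree 1 gives the least-squares straight line):

```python
import numpy as np

# Hypothetical data: y roughly follows y = 1 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# polyfit with degree 1 returns [slope, intercept] of the least-squares line
b1, b0 = np.polyfit(x, y, 1)

print(round(b0, 2))  # intercept beta_0: predicted y when x = 0
print(round(b1, 2))  # slope beta_1: change in predicted y per 1-unit increase in x
```

Here β_{0} is read as the predicted y at x = 0, and β_{1} as the predicted change in y for each extra unit of x.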

**The general linear model**

Let's suppose that in the above example we want to predict y not only from x_{1} but also from x_{2}. We would then have two predictors. The relation between x_{1}, x_{2} and y is still linear, but it is now represented by a plane rather than a line. The equation is now:

y = β_{0} + β_{1}x_{1} + β_{2}x_{2}

To visualise the fit we need to imagine a three-dimensional rectangular coordinate system, with y expressed according to x_{1} and x_{2}; the fitted equation then corresponds to a plane through the cloud of points.

The coefficients β_{1} and β_{2} respectively provide estimates of the effects of x_{1} and x_{2} that are mutually un-confounded: each coefficient describes the effect of its own predictor with the other held constant. Mathematically, there is no limit to the number of variables that can be included in a model.
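Fitting the two-predictor equation works the same way as the one-predictor case, just with an extra column in the data. A minimal sketch, using hypothetical noise-free data generated from y = 1 + 2x₁ + 3x₂ so that the recovered coefficients are easy to check:

```python
import numpy as np

# Hypothetical data generated from y = 1 + 2*x1 + 3*x2 (no noise, for clarity)
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([1.0, 0.0, 2.0, 1.0, 3.0, 2.0])
y = 1 + 2 * x1 + 3 * x2

# Design matrix: a column of ones (for the intercept) plus one column per predictor
X = np.column_stack([np.ones_like(x1), x1, x2])

# Least-squares solution gives the coefficients [b0, b1, b2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = coef
print(round(b0, 2), round(b1, 2), round(b2, 2))  # recovers 1.0, 2.0, 3.0
```

Each of b1 and b2 estimates the change in y per unit increase in its own predictor while the other is held constant, which is the "mutually un-confounded" property described above.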