Welcome to Numerary’s documentation!¶
Numerary is a scientific library that contains many numerical methods.
Contents¶
Bisection Method¶
Overview¶
In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.

The Method¶
The method is applicable for numerically solving the equation \(f(x) = 0\) for the real variable \(x\), where \(f\) is a continuous function defined on an interval \([a, b]\) and where \(f(a)\) and \(f(b)\) have opposite signs. In this case \(a\) and \(b\) are said to bracket a root since, by the intermediate value theorem, the continuous function \(f\) must have at least one root in the interval \((a, b)\).
At each step the method divides the interval in two by computing the midpoint \(c = \frac{a+b}{2}\) of the interval and the value of the function \(f(c)\) at that point. Unless \(c\) is itself a root (which is very unlikely, but possible) there are now only two possibilities: either \(f(a)\) and \(f(c)\) have opposite signs and bracket a root, or \(f(c)\) and \(f(b)\) have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of \(f\) is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.
Explicitly, if \(f(a)\) and \(f(c)\) have opposite signs, then the method sets \(c\) as the new value for \(b\), and if \(f(b)\) and \(f(c)\) have opposite signs then the method sets \(c\) as the new \(a\). (If \(f(c)=0\) then \(c\) may be taken as the solution and the process stops.) In both cases, the new \(f(a)\) and \(f(b)\) have opposite signs, so the method is applicable to this smaller interval.
Iteration Tasks¶
- Calculate \(c\), the midpoint of the interval, \(c=\frac{a+b}{2}\).
- Calculate the function value at the midpoint, \(f(c)\).
- If convergence is satisfactory (that is, \(c-a\) is sufficiently small, or \(|f(c)|\) is sufficiently small), return \(c\) and stop iterating.
- Examine the sign of \(f(c)\) and replace either \((a, f(a))\) or \((b, f(b))\) with \((c, f(c))\) so that there is a zero crossing within the new interval.
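These iteration tasks can be written as a short standalone loop. Below is a minimal sketch of the algorithm itself (independent of Numerary); the function \(f(x) = x^2 - 2\) and the bracket \([1, 2]\) are chosen purely for illustration:
// bisection_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

/* Example function with a root at sqrt(2) */
double f(double x) {
    return x*x - 2.0;
}

int main() {
    double a = 1.0, b = 2.0;           // bracket: f(a) and f(b) have opposite signs
    const double eps = 1.e-9;
    double fa = f(a), c = a;

    while (b - a > eps) {
        c = 0.5*(a + b);               // 1. midpoint of the interval
        double fc = f(c);              // 2. function value at the midpoint
        if (fabs(fc) < eps) break;     // 3. converged: |f(c)| is sufficiently small
        if (fa*fc < 0.0) {             // 4. keep the subinterval that still brackets the root
            b = c;
        } else {
            a = c;
            fa = fc;
        }
    }

    cout << "Root is in x = " << c << endl; // prints a value close to 1.41421356
    return 0;
}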
Usage¶
Imagine that we want to find the root of the following function: \(f(x) = \sin(x)\) on the segment \([-1, 1]\).
Then the code will look like this:
// example_root_bisection.cpp
#include <iostream>
#include <cmath>               // for sin()
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function whose root we want to find */
double f(double x) {
    return sin(x);
}

/* The main function */
int main() {
    const double eps = 1.e-9; // eps value for method (1.e-9 by default)
    double a = -1;            // "a" value of segment [a, b]
    double b = 1;             // "b" value of segment [a, b]
    double root;
    short int is_found;

    is_found = Numerary::root(f, a, b, &root, "bisection", eps);

    if (is_found == 1) {
        cout << "Root is in x = " << root << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    return 0;
}
Secant Method¶
Overview¶
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function \(f\). The secant method can be thought of as a finite-difference approximation of Newton’s method. However, the method was developed independently of Newton’s method and predates it by over 3000 years.

The Method¶
The secant method is defined by the recurrence relation
\[x_{n}=x_{n-1}-f\left(x_{n-1}\right) \frac{x_{n-1}-x_{n-2}}{f\left(x_{n-1}\right)-f\left(x_{n-2}\right)}.\]
As can be seen from the recurrence relation, the secant method requires two initial values, \(x_0\) and \(x_1\), which should ideally be chosen to lie close to the root.
Derivation Of The Method¶
Starting with initial values \(x_0\) and \(x_1\), we construct a line through the points \((x_0, f(x_0))\) and \((x_1, f(x_1))\). In slope–intercept form, the equation of this line is
\[y=\frac{f\left(x_{1}\right)-f\left(x_{0}\right)}{x_{1}-x_{0}}\left(x-x_{1}\right)+f\left(x_{1}\right).\]
The root of this linear function, that is, the value of \(x\) such that \(y=0\), is
\[x=x_{1}-f\left(x_{1}\right) \frac{x_{1}-x_{0}}{f\left(x_{1}\right)-f\left(x_{0}\right)}.\]
We then use this new value of \(x\) as \(x_2\) and repeat the process, using \(x_1\) and \(x_2\) instead of \(x_0\) and \(x_1\). We continue this process, solving for \(x_3\), \(x_4\), etc., until we reach a sufficiently high level of precision (a sufficiently small difference between \(x_n\) and \(x_{n-1}\)).
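A minimal standalone sketch of this iteration (independent of Numerary) is shown below; the function \(f(x) = x^2 - 2\) and the starting values are chosen purely for illustration:
// secant_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

/* Example function with a root at sqrt(2) */
double f(double x) {
    return x*x - 2.0;
}

int main() {
    double x0 = 1.0, x1 = 2.0;        // two initial values close to the root
    const double eps = 1.e-9;

    for (int i = 0; i < 100 && fabs(x1 - x0) > eps; i++) {
        // recurrence relation of the secant method
        double x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));
        x0 = x1;
        x1 = x2;
    }

    cout << "Root is in x = " << x1 << endl; // prints a value close to 1.41421356
    return 0;
}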
Usage¶
Imagine that we want to find the root of the following function: \(f(x) = \sin(x)\) on the segment \([-1, 1]\).
Then the code will look like this:
// example_root_secant.cpp
#include <iostream>
#include <cmath>               // for sin()
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function whose root we want to find */
double f(double x) {
    return sin(x);
}

/* The main function */
int main() {
    const double eps = 1.e-9; // eps value for method (1.e-9 by default)
    double a = -1;            // "a" value of segment [a, b]
    double b = 1;             // "b" value of segment [a, b]
    double root;
    short int is_found;

    is_found = Numerary::root(f, a, b, &root, "secant", eps);

    if (is_found == 1) {
        cout << "Root is in x = " << root << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    return 0;
}
Integral Approximation - Simpson’s Rule¶
Definition¶
Suppose \(f(x)\) is defined on the interval \([a, b]\). Then Simpson’s rule on the entire interval approximates the definite integral of \(f(x)\) on the interval by the formula
\[\int_{a}^{b} f(x)\, d x \approx \frac{b-a}{6}\left[f(a)+4 f\!\left(\frac{a+b}{2}\right)+f(b)\right].\]
The idea is that if \(f(x)\) is \(1\), \(x\), or \(x^2\), this formula is an exact equality. So Simpson’s rule gives the correct integral of any quadratic function. In general, Simpson’s rule approximates \(f(x)\) by a parabola through the points on the graph of \(f(x)\) with \(x\)-coordinates \(a, \frac{a+b}{2}, b\).
Simpson’s rule is usually applied by breaking the interval into \(N\) equal-sized subintervals, where \(N\) is an even number, and approximating the integral over each pair of adjacent subintervals using the above estimate.
That is, let \(x_0=a\), \(x_1=a+\frac{b-a}{N}\), \(x_2=a+2\frac{b-a}{N}\), \(\dots\), \(x_{N-1}=a+(N-1)\frac{b-a}{N}\), \(x_N=b\), and write \(h = \frac{b-a}{N}\). Then
\[\int_{x_{0}}^{x_{2}} f(x)\, d x \approx \frac{h}{3}\left[f\left(x_{0}\right)+4 f\left(x_{1}\right)+f\left(x_{2}\right)\right], \qquad \int_{x_{2}}^{x_{4}} f(x)\, d x \approx \frac{h}{3}\left[f\left(x_{2}\right)+4 f\left(x_{3}\right)+f\left(x_{4}\right)\right],\]
and so on. Adding these up gives
\[\int_{a}^{b} f(x)\, d x \approx \frac{h}{3}\left[f\left(x_{0}\right)+4 f\left(x_{1}\right)+2 f\left(x_{2}\right)+4 f\left(x_{3}\right)+2 f\left(x_{4}\right)+\dots+4 f\left(x_{N-1}\right)+f\left(x_{N}\right)\right].\]
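A minimal standalone sketch of the composite rule above (independent of Numerary), using the same integrand as the usage example that follows:
// simpson_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

double f(double x) {
    return 5*pow(x, 3) + 2*cos(x);
}

/* Composite Simpson's rule on N equal subintervals (N must be even) */
double simpson(double a, double b, int N) {
    const double h = (b - a)/N;
    double sum = f(a) + f(b);
    for (int i = 1; i < N; i++)
        sum += (i % 2 == 1 ? 4.0 : 2.0)*f(a + i*h); // weight 4 at odd nodes, 2 at even nodes
    return sum*h/3.0;
}

int main() {
    cout << "I ~ " << simpson(0.0, 1.0, 100) << endl; // exact value is 5/4 + 2*sin(1) ~ 2.93294
    return 0;
}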
Usage¶
Imagine that we want to integrate the following expression: \(\int_{0}^{1}\left(5 x^{3}+2 \cos x\right) d x\).
Then the code will look like this:
// example_integral_simpson.cpp
#include <iostream>
#include <cmath>               // for pow() and cos()
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function to be integrated */
double f(double x) {
    return 5*pow(x, 3) + 2*cos(x);
}

/* The main function */
int main() {
    const double from = 0;           // Lower bound of integral
    const double to = 1;             // Upper bound of integral
    const string method = "simpson"; // Numerical method used for integration ("simpson" by default)
    const double eps = 1.e-9;        // eps value for integration (1.e-9 by default)

    double *I = Numerary::integrate(f, from, to, method, eps);

    cout << "ans = " << I[0] << endl; // Value of the calculated integral
    cout << "err = " << I[1] << endl; // Error estimate of the calculated value

    return 0;
}
Minimum Search - Bisection Method¶
Usage¶
Imagine that we want to minimize the following function: \(f(x) = 2x^2 - 5x + 3\) on the segment \([0, 2]\).
Then the code will look like this:
// example_minimum_bisection.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function whose local minimum we want to find */
double f(double x) {
    return 2*x*x - 5*x + 3;
}

/* The main function */
int main() {
    const double eps = 1.e-9; // eps value for method (1.e-9 by default)
    double a = 0;             // "a" value of segment [a, b]
    double b = 2;             // "b" value of segment [a, b]
    double minimum;
    short int is_found;

    is_found = Numerary::minimum(f, a, b, &minimum, "bisection", eps);

    if (is_found == 1) {
        cout << "Minimum is in x = " << minimum << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    return 0;
}
Minimum Search - Golden Ratio Method¶
Usage¶
Imagine that we want to minimize the following function: \(f(x) = x^2 + \sin(3x)\) on the segment \([-1, 1]\).
Then the code will look like this:
// example_minimum_golden_ratio.cpp
#include <iostream>
#include <cmath>               // for sin()
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function whose local minimum we want to find */
double f(double x) {
    return x*x + sin(3*x);
}

/* The main function */
int main() {
    const double eps = 1.e-9; // eps value for method (1.e-9 by default)
    double a = -1;            // "a" value of segment [a, b]
    double b = 1;             // "b" value of segment [a, b]
    double minimum;
    short int is_found;

    is_found = Numerary::minimum(f, a, b, &minimum, "golden_ratio", eps);

    if (is_found == 1) {
        cout << "Minimum is in x = " << minimum << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    return 0;
}
Maximum Search - Bisection Method¶
Usage¶
Imagine that we want to maximize the following function: \(f(x) = \sin(x)\) on the segment \([-2, 2]\).
Then the code will look like this:
// example_maximum_bisection.cpp
#include <iostream>
#include <cmath>               // for sin()
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function whose local maximum we want to find */
double f(double x) {
    return sin(x);
}

/* The main function */
int main() {
    const double eps = 1.e-9; // eps value for method (1.e-9 by default)
    double a = -2;            // "a" value of segment [a, b]
    double b = 2;             // "b" value of segment [a, b]
    double maximum;
    short int is_found;

    is_found = Numerary::maximum(f, a, b, &maximum, "bisection", eps);

    if (is_found == 1) {
        cout << "Maximum is in x = " << maximum << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    return 0;
}
Maximum Search - Golden Ratio Method¶
Usage¶
Imagine that we want to maximize the following function: \(f(x) = \frac{1}{1+x^2}\) on the segment \([-2, 2]\).
Then the code will look like this:
// example_maximum_golden_ratio.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function whose local maximum we want to find */
double f(double x) {
    return 1.0 / (1.0 + x*x);
}

/* The main function */
int main() {
    const double eps = 1.e-9; // eps value for method (1.e-9 by default)
    double a = -2;            // "a" value of segment [a, b]
    double b = 2;             // "b" value of segment [a, b]
    double maximum;
    short int is_found;

    is_found = Numerary::maximum(f, a, b, &maximum, "golden_ratio", eps);

    if (is_found == 1) {
        cout << "Maximum is in x = " << maximum << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    return 0;
}
Gauss Elimination Method¶
Algorithm¶
The Gauss method is a classical method for solving a system of linear algebraic equations (SLAE). Consider a system of \(n\) linear equations with real constant coefficients
\[\begin{cases}a_{11} x_{1}+a_{12} x_{2}+\dots+a_{1 n} x_{n}=y_{1} \\ a_{21} x_{1}+a_{22} x_{2}+\dots+a_{2 n} x_{n}=y_{2} \\ \quad\vdots \\ a_{n 1} x_{1}+a_{n 2} x_{2}+\dots+a_{n n} x_{n}=y_{n}\end{cases}\]
or in matrix form
\[A \mathbf{x}=\mathbf{y},\]
where \(A\) is the \(n \times n\) matrix of the coefficients \(a_{ij}\), \(\mathbf{x}\) is the vector of unknowns, and \(\mathbf{y}\) is the vector of right-hand sides.
The Gauss method of solving a system of linear equations consists of two stages:
- forward (direct) elimination;
- back substitution.
Forward elimination¶
Gaussian elimination is based on the idea of successively eliminating the variables, one at a time, until only one equation with one variable remains. That equation is solved for the single variable, and the system of equations is thereby reduced to a triangular (row echelon) form. To do this, a nonzero (most often the largest in absolute value) element is selected among the elements of the first column of the matrix and moved to the top position by swapping rows. Each equation is then normalized by dividing it by its leading coefficient \(a_{i1}\), where \(i\) is the row number.
The first row obtained after the permutation is then subtracted from the remaining rows, and a new system of equations is obtained in which the corresponding coefficients are replaced.
After these transformations have been completed, the first row and the first column are mentally deleted, and the process is continued for the remaining equations until an equation with a single unknown remains.
Back substitution¶
Back substitution involves substituting the value of \(x_n\) obtained in the last step of the forward elimination into the previous equations. This procedure is repeated for all remaining unknowns \(x_{n-1}, x_{n-2}, \dots, x_{1}\):
\[x_{i}=\frac{1}{a_{i i}}\left(y_{i}-\sum_{j=i+1}^{n} a_{i j} x_{j}\right),\]
where \(a_{ij}\) and \(y_i\) denote the coefficients obtained after the elimination stage.
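To make the two stages concrete, here is a minimal standalone sketch (independent of Numerary) of forward elimination with partial pivoting followed by back substitution, applied to a small 3x3 system chosen purely for illustration:
// gauss_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

int main() {
    const int n = 3;
    // Example system A*x = y with solution x = (2, 3, -1)
    double a[n][n] = {{2, 1, -1}, {-3, -1, 2}, {-2, 1, 2}};
    double y[n] = {8, -11, -3};
    double x[n];

    // Forward elimination
    for (int k = 0; k < n; k++) {
        // choose the row with the largest pivot in column k and move it up
        int p = k;
        for (int i = k + 1; i < n; i++)
            if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
        double t = y[k]; y[k] = y[p]; y[p] = t;
        for (int j = 0; j < n; j++) { t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t; }
        // eliminate the k-th unknown from the rows below
        for (int i = k + 1; i < n; i++) {
            double m = a[i][k]/a[k][k];
            for (int j = k; j < n; j++) a[i][j] -= m*a[k][j];
            y[i] -= m*y[k];
        }
    }

    // Back substitution
    for (int i = n - 1; i >= 0; i--) {
        double s = y[i];
        for (int j = i + 1; j < n; j++) s -= a[i][j]*x[j];
        x[i] = s/a[i][i];
    }

    cout << "x = (" << x[0] << ", " << x[1] << ", " << x[2] << ")" << endl;
    return 0;
}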
Usage¶
Imagine that we want to solve the following linear system of equations:
\[\begin{cases}2 x_{1}+x_{2}=5 \\ -x_{1}+x_{2}=2\end{cases}\]
Then the code will look like this:
// example_gauss_elimination.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* The main function */
int main() {
    double **a = new double*[2];
    double *y = new double[2];
    double *x = new double[2];
    short int is_solved;

    for (int i = 0; i < 2; i++)
        a[i] = new double[2];

    // Initialize matrix A
    a[0][0] = 2;
    a[0][1] = 1;
    a[1][0] = -1;
    a[1][1] = 1;

    // Initialize vector y
    y[0] = 5;
    y[1] = 2;

    is_solved = Numerary::linear_systems_of_equations(a, y, x, 2, "gauss");

    if (is_solved == 1) {
        cout << "System solved!" << endl;
        cout << "x = (" << x[0] << ", " << x[1] << ")" << endl;
    } else {
        cout << "Method is not allowed!";
    }

    for (int i = 0; i < 2; i++) delete[] a[i];
    delete[] a;
    delete[] x;
    delete[] y;

    return 0;
}
Newton’s Method¶
Overview¶
Newton’s method is one of the most popular numerical methods, and is even referred to by Burden and Faires as the most powerful method used to solve the equation \(f(x) = 0\). This method originates from the Taylor series expansion of the function \(f(x)\) about the point \(x_1\):
\[f(x)=f\left(x_{1}\right)+\left(x-x_{1}\right) f^{\prime}\left(x_{1}\right)+\frac{1}{2 !}\left(x-x_{1}\right)^{2} f^{\prime \prime}\left(x_{1}\right)+\ldots,\]
where \(f\) and its first and second order derivatives, \(f'\) and \(f''\), are calculated at \(x_1\). If we take the first two terms of the Taylor series expansion we have:
\[f(x) \approx f\left(x_{1}\right)+\left(x-x_{1}\right) f^{\prime}\left(x_{1}\right).\]
We then set the previous expression to zero (i.e. \(f(x) = 0\)) to find the root of the equation, which gives us:
\[f\left(x_{1}\right)+\left(x-x_{1}\right) f^{\prime}\left(x_{1}\right)=0.\]
Rearranging the previous expression, we obtain the next approximation to the root:
\[x_{2}=x_{1}-\frac{f\left(x_{1}\right)}{f^{\prime}\left(x_{1}\right)}.\]
Thus, generalizing the previous expression, we obtain Newton’s iterative method:
\[x_{i+1}=x_{i}-\frac{f\left(x_{i}\right)}{f^{\prime}\left(x_{i}\right)},\]
where \(x_{i} \rightarrow \bar{x}\) (as \(i \rightarrow \infty\)), and \(\bar{x}\) is the approximation to a root of the function \(f(x)\).
Note: as the iterations begin to produce the same repeated value, i.e. \(x_i = x_{i+1} = \bar{x}\), this is an indication that the iteration has converged to \(\bar{x}\); thus \(x_i\) is a root of the function \(f(x)\).
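A minimal standalone sketch of this scalar iteration (independent of Numerary); the function \(f(x) = x^2 - 2\) and its derivative are chosen purely for illustration:
// newton_scalar_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

double f(double x)  { return x*x - 2.0; } // example function with a root at sqrt(2)
double df(double x) { return 2.0*x; }     // its first derivative

int main() {
    double x = 1.0;                       // initial guess x_1
    const double eps = 1.e-12;

    for (int i = 0; i < 50; i++) {
        double x_next = x - f(x)/df(x);   // Newton step: x_{i+1} = x_i - f(x_i)/f'(x_i)
        if (fabs(x_next - x) < eps) { x = x_next; break; } // repeated value => converged
        x = x_next;
    }

    cout << "Root is in x = " << x << endl; // prints a value close to 1.41421356
    return 0;
}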
The Method¶
Step 1:¶
Let \(\mathbf{x}^{(0)}=\left(x_{1}^{(0)}, x_{2}^{(0)}, \ldots, x_{n}^{(0)}\right)\) be a given initial vector.
Step 2:¶
Calculate \(J\left(\mathbf{x}^{(0)}\right)\) and \(\mathbf{F}\left(\mathbf{x}^{(0)}\right)\).
Step 3:¶
We now have to calculate the vector \(\mathbf{y}^{(0)}\), where
\[\mathbf{x}^{(1)}=\mathbf{x}^{(0)}+\mathbf{y}^{(0)}.\]
In order to find \(\mathbf{y}^{(0)}\), we solve the linear system \(J\left(\mathbf{x}^{(0)}\right) \mathbf{y}^{(0)}=-\mathbf{F}\left(\mathbf{x}^{(0)}\right)\), using Gaussian Elimination.
Note: Rearranging the system in Step 3, we get that \(\mathbf{y}^{(0)}=-J\left(\mathbf{x}^{(0)}\right)^{-1} \mathbf{F}\left(\mathbf{x}^{(0)}\right)\). The significance of this is that, since \(\mathbf{y}^{(0)}=-J\left(\mathbf{x}^{(0)}\right)^{-1} \mathbf{F}\left(\mathbf{x}^{(0)}\right)\), we can replace \(-J\left(\mathbf{x}^{(0)}\right)^{-1} \mathbf{F}\left(\mathbf{x}^{(0)}\right)\) in our iterative formula with \(\mathbf{y}^{(0)}\). This result will yield that
\[\mathbf{x}^{(1)}=\mathbf{x}^{(0)}+\mathbf{y}^{(0)}.\]
Step 4:¶
Once \(\mathbf{y}^{(0)}\) is found, we can now proceed to finish the first iteration by solving for \(\mathbf{x}^{(1)}\). Thus, using the result from Step 3, we have that
\[\mathbf{x}^{(1)}=\mathbf{x}^{(0)}+\mathbf{y}^{(0)}.\]
Step 5:¶
Once we have calculated \(\mathbf{x}^{(1)}\), we repeat the process again, until \(\mathbf{x}^{(k)}\) converges to \(\bar{x}\). This indicates we have reached the solution to \(\mathbf{F}(\mathbf{x})=\mathbf{0}\), where \(\bar{x}\) is the solution to the system.
Note: when a sequence of vectors converges, the norm \(\left\|\mathbf{x}^{(k)}-\mathbf{x}^{(k-1)}\right\|\) tends to \(0\). This means that successive iterates become numerically identical, and the iteration can be stopped.
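Putting the steps together, here is a minimal standalone sketch (independent of Numerary) for a 2x2 system; it uses the same two equations as the usage example below, and solves the linear system \(J\left(\mathbf{x}\right) \mathbf{y}=-\mathbf{F}\left(\mathbf{x}\right)\) by Cramer’s rule instead of Gaussian elimination simply to keep the sketch short:
// newton_system_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

int main() {
    double x = 1.0, y = 2.0;                  // initial vector x^(0)
    const double eps = 1.e-12;

    for (int k = 0; k < 100; k++) {
        // F(x): the same system as in the usage example below
        double F1 = x*x + y*y - 5.0;
        double F2 = y - 3.0*x + 5.0;
        // Jacobian J(x)
        double J11 = 2.0*x, J12 = 2.0*y;
        double J21 = -3.0,  J22 = 1.0;
        // Solve J*d = -F for the step d (2x2 system, Cramer's rule)
        double det = J11*J22 - J12*J21;
        double d1 = (-F1*J22 + F2*J12)/det;
        double d2 = (-F2*J11 + F1*J21)/det;
        // x^(k+1) = x^(k) + d
        x += d1; y += d2;
        if (sqrt(d1*d1 + d2*d2) < eps) break; // ||x^(k+1) - x^(k)|| is small => converged
    }

    cout << "x = " << x << ", y = " << y << endl; // converges to the solution (2, 1)
    return 0;
}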
Usage¶
Imagine that we want to solve the following nonlinear system of equations:
\[\begin{cases}x^{2}+y^{2}-5=0 \\ y-3 x+5=0\end{cases}\]
Then the code will look like this:
// example_newton.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* System to solve */
void system(double *x, double *fv, int n) {
    fv[0] = x[0]*x[0] + x[1]*x[1] - 5;
    fv[1] = x[1] - 3*x[0] + 5;
}

/* The main function */
int main() {
    const int maxiter = 100;  // Maximum iterations for the method (100 by default)
    const double eps = 1.e-7; // eps value for method (1.e-7 by default)
    double *x = new double[2], *fv = new double[2];
    short int is_solved;

    // Initial point
    x[0] = 1; x[1] = 2;

    is_solved = Numerary::nonlinear_systems_of_equations(system, x, fv, 2, "newton", eps, maxiter);

    if (is_solved == 1) {
        cout << "Solved!" << endl;
        cout << "x = " << x[0] << endl;
        cout << "y = " << x[1] << endl;
    } else {
        cout << "Method not allowed!";
    }

    delete[] x;
    delete[] fv;

    return 0;
}
Dormand-Prince Method¶
Definition¶
The one step calculation in the Dormand-Prince method is done as the following.
Then the next step value \(y_{k+1}\) is calculated as
This is a calculation by the Runge-Kutta method of order 4. Note that \(k_2\) does not appear in this formula, although it is needed to calculate \(k_3\) and so on.
Next, we will calculate the next step value \(z_{k+1}\) by Runge-Kutta method of order 5 as
We calculate the difference of the two next values \(\left|z_{k+1}-y_{k+1}\right|\).
This is considered as the error in \(y_{k+1}\). We calculate the optimal time interval \(h_{opt}\) as,
where \(h\) on the right-hand side is the old step size. In practical programming, this new \(h_{opt}\) is used in the next step of the calculation, although when it comes out much smaller than the current \(h\) (half of it or smaller, for example) it should also be used to redo the present step.
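The full Dormand-Prince tableau is lengthy, so the step-size control idea described above is illustrated below with a much simpler embedded pair: an Euler step (order 1) and a Heun step (order 2) play the roles of \(y_{k+1}\) and \(z_{k+1}\). This is only a sketch of the adaptive-step mechanism, not the Dormand-Prince method itself, and the tolerance, safety factor, and growth limits are arbitrary illustrative choices; the ODE is the same one solved in the usage example that follows:
// adaptive_step_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

/* Same ODE as in the usage example below: dy/dx = 3y/x + x^3 + x, y(1) = 3 */
double f(double x, double y) {
    return 3.0*y/x + x*x*x + x;
}

int main() {
    double x = 1.0, y = 3.0;                  // initial condition
    double h = 0.01;                          // initial step size
    const double tol = 1.e-8, x_end = 2.0;

    while (x < x_end) {
        if (x + h > x_end) h = x_end - x;     // do not step past the end point
        double k1 = f(x, y);
        double k2 = f(x + h, y + h*k1);
        double y_low  = y + h*k1;             // order-1 (Euler) estimate, plays the role of y_{k+1}
        double y_high = y + 0.5*h*(k1 + k2);  // order-2 (Heun) estimate, plays the role of z_{k+1}
        double err = fabs(y_high - y_low);    // error estimate |z_{k+1} - y_{k+1}|

        if (err <= tol) {                     // accept the step and keep the better value
            x += h;
            y = y_high;
        }
        // choose the next step size: grow when err is small, shrink when it is large
        double scale = 0.9*sqrt(tol/(err + 1.e-30));
        if (scale > 2.0) scale = 2.0;
        if (scale < 0.1) scale = 0.1;
        h *= scale;
    }

    cout << "y(2) ~ " << y << endl; // exact solution is y = x^4 + 3x^3 - x^2, so y(2) = 36
    return 0;
}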
Usage¶
Imagine that we want to solve the following differential equation:
\[\frac{d y}{d x}=\frac{3 y}{x}+x^{3}+x, \qquad y(1)=3,\]
and to find the value of \(y(2)\).
Then the code will look like this:
// example_dorpi.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Equation to solve */
double equation(double x, double y) {
    return 3.0*y/x + x*x*x + x;
}

/* The main function */
int main() {
    double *y = new double[2];
    double x0, x, h;
    short int is_solved;

    // Initial point
    x0 = 1; y[0] = 3;

    // Point where we want to calculate y(x)
    x = 2.0;

    // Step size
    h = 0.01;

    is_solved = Numerary::ordinary_differential_equations(equation, y, x0, h, x, "dorpi_4_5");

    if (is_solved == 0) {
        cout << "Solved!" << endl;
        cout << "y(" << x << ") = " << y[1] << endl;
    } else {
        cout << "Method not allowed!" << endl;
    }

    delete[] y;

    return 0;
}
Linear Regression¶
Introduction¶
In statistics, linear regression is a linear approach to modeling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

The Simple Linear Regression Model¶
The simplest deterministic mathematical relationship between two variables \(x\) and \(y\) is a linear relationship: \(y = \beta_0 + \beta_1 x\).
The objective of this section is to develop an equivalent linear probabilistic model.
If the two (random) variables are probabilistically related, then for a fixed value of x, there is uncertainty in the value of the second variable.
So we assume \(Y = \beta_0 + \beta_1 x + \varepsilon\), where \(\varepsilon\) is a random variable.
Two variables are related linearly “on average” if for fixed x the actual value of Y differs from its expected value by a random amount (i.e. there is random error).
A Linear Probabilistic Model¶
Definition: (The Simple Linear Regression Model)
There are parameters \(\beta_0\), \(\beta_1\), and \(\sigma^2\), such that for any fixed value of the independent variable \(x\), the dependent variable is a random variable related to \(x\) through the model equation
\[Y=\beta_{0}+\beta_{1} x+\varepsilon.\]

The quantity \(\varepsilon\) in the model equation is the “error” - a random variable, assumed to be symmetrically distributed with
\[E(\varepsilon)=0, \qquad V(\varepsilon)=\sigma^{2}\]
(no assumption made about the distribution of \(\varepsilon\), yet)
- \(\boldsymbol{X}\): the independent, predictor, or explanatory variable (usually known).
- \(\boldsymbol{Y}\): the dependent or response variable. For fixed \(x\), \(Y\) will be a random variable.
- \(\boldsymbol{\varepsilon}\): the random deviation or random error term. For fixed \(x\), \(\varepsilon\) will be a random variable.
- \(\boldsymbol{\beta_0}\): the average value of \(Y\) when \(x\) is zero (the intercept of the true regression line)
- \(\boldsymbol{\beta_1}\): the expected (average) change in \(Y\) associated with a 1-unit increase in the value of \(x\). (the slope of the true regression line)
The points \((x_1, y_1),\dots,(x_n, y_n)\) resulting from \(n\) independent observations will then be scattered about the true regression line:

Estimating Model Parameters¶
The values of \(\beta_0\), \(\beta_1\), and \(\sigma\) will almost never be known to an investigator.
Instead, sample data consists of n observed pairs \((x_1, y_1),\dots,(x_n, y_n)\) from which the model parameters and the true regression line itself can be estimated.
The data (pairs) are assumed to have been obtained independently of one another.
The “best fit” line is motivated by the principle of least squares, which can be traced back to the German mathematician Gauss (1777–1855):
A line provides the best fit to the data if the sum of the squared vertical distances (deviations) from the observed points to that line is as small as it can be.

The sum of squared vertical deviations from the points \((x_1, y_1),\dots,(x_n, y_n)\) to the line \(y = b_0 + b_1 x\) is
\[f\left(b_{0}, b_{1}\right)=\sum_{i=1}^{n}\left[y_{i}-\left(b_{0}+b_{1} x_{i}\right)\right]^{2}.\]
The point estimates of \(\beta_0\) and \(\beta_1\), denoted by \(\hat{\beta}_{0}\) and \(\hat{\beta}_{1}\), are called the least squares estimates – they are those values that minimize \(f(b_0, b_1)\).
The fitted regression line or least squares line is then the line whose equation is \(y=\hat{\beta}_{0}+\hat{\beta}_{1} x\).
The minimizing values of \(b_0\) and \(b_1\) are found by taking partial derivatives of \(f(b_0, b_1)\) with respect to both \(b_0\) and \(b_1\), equating them both to zero [analogously to \(f'(b)=0\) in univariate calculus], and solving the equations
\[\frac{\partial f\left(b_{0}, b_{1}\right)}{\partial b_{0}}=\sum 2\left(y_{i}-b_{0}-b_{1} x_{i}\right)(-1)=0,\]
\[\frac{\partial f\left(b_{0}, b_{1}\right)}{\partial b_{1}}=\sum 2\left(y_{i}-b_{0}-b_{1} x_{i}\right)\left(-x_{i}\right)=0.\]
The least squares estimate of the slope coefficient \(\beta_1\) of the true regression line is
\[b_{1}=\hat{\beta}_{1}=\frac{\sum\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sum\left(x_{i}-\bar{x}\right)^{2}}=\frac{S_{x y}}{S_{x x}}.\]
Shortcut formulas for the numerator and denominator of \(\hat{\beta}_1\) are
\[S_{x y}=\sum x_{i} y_{i}-\frac{\left(\sum x_{i}\right)\left(\sum y_{i}\right)}{n}, \qquad S_{x x}=\sum x_{i}^{2}-\frac{\left(\sum x_{i}\right)^{2}}{n}.\]
The least squares estimate of the intercept \(\beta_0\) of the true regression line is
\[b_{0}=\hat{\beta}_{0}=\frac{\sum y_{i}-\hat{\beta}_{1} \sum x_{i}}{n}=\bar{y}-\hat{\beta}_{1} \bar{x}.\]
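A minimal standalone sketch (independent of Numerary) that computes \(\hat{\beta}_1\) and \(\hat{\beta}_0\) directly from the shortcut formulas above; the five data points are the same as in the usage example that follows:
// least_squares_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>

using namespace std;

int main() {
    const int n = 5;
    double x[n] = {1.0, 2.0, 3.0, 4.0, 5.0};
    double y[n] = {1.0, 2.0, 1.3, 3.75, 2.25};

    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxy += x[i]*y[i];
        sxx += x[i]*x[i];
    }

    // Shortcut formulas: Sxy = sum(x*y) - sum(x)*sum(y)/n, Sxx = sum(x^2) - (sum(x))^2/n
    double b1 = (sxy - sx*sy/n)/(sxx - sx*sx/n); // slope estimate
    double b0 = (sy - b1*sx)/n;                  // intercept estimate

    cout << "y = " << b1 << "*x + " << b0 << endl; // prints y = 0.425*x + 0.785
    return 0;
}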
Usage¶
Imagine that we have the following points and we want to build a linear regression model:
X | Y |
---|---|
1.0 | 1.0 |
2.0 | 2.0 |
3.0 | 1.3 |
4.0 | 3.75 |
5.0 | 2.25 |
Then the code will look like this:
// example_linear_regression.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* The main function */
int main() {
    const int N = 5; // Number of points
    double *X = new double[N], *Y = new double[N];
    double *predicted_kc;

    X[0] = 1.0; Y[0] = 1.0;
    X[1] = 2.0; Y[1] = 2.0;
    X[2] = 3.0; Y[2] = 1.3;
    X[3] = 4.0; Y[3] = 3.75;
    X[4] = 5.0; Y[4] = 2.25;

    // Get predicted linear regression line
    predicted_kc = Numerary::linear_regression(X, Y, N);

    // Equation of the regression line
    cout << "y = " << predicted_kc[0] << "*x + " << predicted_kc[1] << endl;

    // Free memory
    delete[] X;
    delete[] Y;
    delete[] predicted_kc;

    return 0;
}
Lagrange’s Interpolation¶
Lagrange polynomial¶
In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of points \((x_j, y_j)\) with no two \(x_j\) values equal, the Lagrange polynomial is the polynomial of lowest degree that assumes at each value \(x_j\) the corresponding value \(y_j\), so that the functions coincide at each point.
Definition¶
Given a set of \(k+1\) data points \((x_0, y_0),\dots,(x_j, y_j),\dots,(x_k, y_k)\) where no two \(x_j\) are the same, the interpolation polynomial in the Lagrange form is a linear combination \(L(x):=\sum_{j=0}^{k} y_{j} \ell_{j}(x)\) of Lagrange basis polynomials
\[\ell_{j}(x):=\prod_{\substack{0 \leq m \leq k \\ m \neq j}} \frac{x-x_{m}}{x_{j}-x_{m}}=\frac{x-x_{0}}{x_{j}-x_{0}} \cdots \frac{x-x_{j-1}}{x_{j}-x_{j-1}} \cdot \frac{x-x_{j+1}}{x_{j}-x_{j+1}} \cdots \frac{x-x_{k}}{x_{j}-x_{k}},\]
where \(0 \leq j \leq k\). Note how, given the initial assumption that no two \(x_j\) are the same, \(x_j - x_m \neq 0\), so this expression is always well-defined. The reason pairs \(x_i = x_j\) with \(y_i \neq y_j\) are not allowed is that no interpolation function \(L\) such that \(y_i=L(x_i)\) would exist; a function can only take one value for each argument \(x_i\). On the other hand, if also \(y_i = y_j\), then those two points would actually be one single point.
For all \(j \neq i\), \(\ell_j(x)\) includes the term \((x-x_i)\) in the numerator, so the whole product will be zero at \(x = x_i\):
\[\ell_{j \neq i}\left(x_{i}\right)=\prod_{m \neq j} \frac{x_{i}-x_{m}}{x_{j}-x_{m}}=\cdots \frac{x_{i}-x_{i}}{x_{j}-x_{i}} \cdots=0.\]
On the other hand,
\[\ell_{i}\left(x_{i}\right):=\prod_{m \neq i} \frac{x_{i}-x_{m}}{x_{i}-x_{m}}=1.\]
In other words, all basis polynomials are zero at \(x = x_i\), except \(\ell_i(x)\), for which it holds that \(\ell_i(x_i) = 1\), because it lacks the \((x - x_i)\) term.
It follows that \(y_i \ell_i(x_i) = y_i\), so at each point \(x_i\), \(L(x_i) = y_i + 0 + 0 + \dots + 0 = y_i\), showing that \(L\) interpolates the function exactly.
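A minimal standalone sketch (independent of Numerary) that evaluates \(L(x)=\sum_{j} y_{j} \ell_{j}(x)\) directly from the definition; the data points are the same as in the usage example below, and since they all lie on the line \(y = 5x\), the interpolant reproduces that line:
// lagrange_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>

using namespace std;

/* Evaluate the Lagrange polynomial through n points (X[j], Y[j]) at the point x */
double lagrange(const double *X, const double *Y, int n, double x) {
    double L = 0.0;
    for (int j = 0; j < n; j++) {
        double lj = 1.0;                           // basis polynomial l_j(x)
        for (int m = 0; m < n; m++)
            if (m != j) lj *= (x - X[m])/(X[j] - X[m]);
        L += Y[j]*lj;
    }
    return L;
}

int main() {
    double X[5] = {2.0, 3.0, 5.0, 8.0, 12.0};
    double Y[5] = {10.0, 15.0, 25.0, 40.0, 60.0};

    cout << "L(7) = " << lagrange(X, Y, 5, 7.0) << endl; // the points lie on y = 5x, so L(7) = 35
    return 0;
}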
Runge’s example¶
The function \(f(x) = \frac{1}{1+x^2}\) cannot be interpolated accurately on \([−5, 5]\) using a tenth-degree polynomial (dashed curve) with equally-spaced interpolation points. This example that illustrates the difficulty that one can generally expect with high-degree polynomial interpolation with equally-spaced points is known as Runge’s example.

Usage¶
Imagine that we have the following points and we want to build a Lagrange polynomial through these points:
X | Y |
---|---|
2.0 | 10.0 |
3.0 | 15.0 |
5.0 | 25.0 |
8.0 | 40.0 |
12.0 | 60.0 |
Then the code will look like this:
// example_lagrange_polynomial.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* The main function */
int main() {
    const int N = 5;
    double *X = new double[N], *Y = new double[N];
    double y, x;

    // Points to interpolate
    X[0] = 2.0;  Y[0] = 10.0;
    X[1] = 3.0;  Y[1] = 15.0;
    X[2] = 5.0;  Y[2] = 25.0;
    X[3] = 8.0;  Y[3] = 40.0;
    X[4] = 12.0; Y[4] = 60.0;

    // Point where we want to get the value of the Lagrange polynomial
    x = 7.0;

    y = Numerary::lagrange_polynomial(X, Y, x, N);

    cout << "y(" << x << ") = " << y << endl;

    delete[] X;
    delete[] Y;

    return 0;
}
Numerical differentiation. Calculation of the first derivative.¶
Definition¶
By definition, the first derivative of a smooth function \(f(x)\) at a point \(x\) is calculated as
\[\frac{d f(x)}{d x}=\lim _{h \rightarrow 0} \frac{f(x+h)-f(x)}{h}.\]
When calculating the first derivative of the function \(f(x)\) on a computer, we replace the infinitesimal limit \(h \rightarrow 0\) with a small but finite value of \(h\):
\[\frac{d f(x)}{d x} \approx \frac{f(x+h)-f(x)}{h}+\mathrm{O}(h),\]
where \(\mathrm{O}(h)\) is the error of the derivative calculation, which naturally depends on \(h\). The previous formula is called a difference scheme for calculating the first derivative (more precisely, a right difference scheme, or simply a right difference). The left difference scheme can be written similarly.
How do we determine \(\mathrm{O}(h)\)? Expand the function \(f(x)\) in a Taylor series at the point \(x + h\):
\[f(x+h)=f(x)+h \frac{d f(x)}{d x}+\frac{h^{2}}{2 !} \frac{d^{2} f(x)}{d x^{2}}+\ldots,\]
whence it follows that, in the first order of the expansion,
\[\mathrm{O}(h) \sim \frac{h}{2}\left|\frac{d^{2} f(x)}{d x^{2}}\right| \sim h.\]
By choosing a very small \(h\), the round-off errors in computing on a computer can be comparable to or greater than \(h\). Therefore, we are interested in an algorithm that gives lower error value for the same value of \(h\).
Such an improved algorithm can easily be obtained by expanding the function \(f(x)\) in a Taylor series at the points \(x + h\) and \(x - h\) and then subtracting one result from the other, which gives
\[\frac{d f(x)}{d x} \approx \frac{f(x+h)-f(x-h)}{2 h}+\mathrm{O}\left(h^{2}\right),\]
where the error in calculating the first derivative is
\[\mathrm{O}\left(h^{2}\right) \sim \frac{h^{2}}{6}\left|\frac{d^{3} f(x)}{d x^{3}}\right|.\]
This is the central difference scheme (central difference).
In principle, it is possible to follow the path of improving the accuracy of the method for calculating the first derivative and further. For example, considering the expansion of the function \(f(x)\) in a Taylor series at the points \(x + h\), \(x + 2h\), \(x - h\), and \(x - 2h\), one can obtain a four-point scheme etc.
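A minimal standalone sketch (independent of Numerary) comparing the right-difference and central-difference schemes for \(f(x) = \sin(x)\) at \(x = \pi\), where the exact derivative is \(\cos(\pi) = -1\):
// difference_schemes_sketch.cpp (illustrative only, not part of Numerary)
#include <iostream>
#include <cmath>

using namespace std;

double f(double x) {
    return sin(x);
}

int main() {
    const double x = M_PI;
    const double h = 1.e-5;

    double right   = (f(x + h) - f(x))/h;            // right difference, error O(h)
    double central = (f(x + h) - f(x - h))/(2.0*h);  // central difference, error O(h^2)

    cout << "right   difference: " << right   << endl;
    cout << "central difference: " << central << endl;
    return 0;
}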
Usage¶
Imagine that we want to find the derivative of the following function: \(f(x) = \sin(x)\) at the point \(x = \pi\).
Then the code will look like this:
// example_first_order_derivative_h.cpp
#include <iostream>
#include <cmath>               // for sin() and M_PI
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* Function to differentiate */
double f(double x) {
    return sin(x);
}

/* The main function */
int main() {
    const short int order = 1;
    double x, dy_dx;

    // Point where we want to get the value of the derivative
    x = M_PI;

    dy_dx = Numerary::differentiate(f, order, x);

    cout << "dy/dx (" << x << ") = " << dy_dx << endl;

    return 0;
}
Incomplete Gamma Function¶
Definition¶
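The defining formula is not reproduced in this extract. For reference, the lower and upper incomplete gamma functions are commonly defined as below; which variant, normalization, and argument order Numerary::incgamma uses is not stated here, so the mapping to the usage example is an assumption.
\[\gamma(s, x)=\int_{0}^{x} t^{s-1} e^{-t}\, dt, \qquad \Gamma(s, x)=\int_{x}^{\infty} t^{s-1} e^{-t}\, dt, \qquad \gamma(s, x)+\Gamma(s, x)=\Gamma(s).\]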
Usage¶
Imagine that we want to calculate the value of the incomplete gamma function for the arguments \((2, 1)\).
Then the code will look like this:
// example_incomplete_gamma_function.cpp
#include <iostream>
#include "../src/numerary.hpp" // Numerary library

using namespace std;
using namespace numerary;

/* The main function */
int main() {
    double value;

    value = Numerary::incgamma(2, 1);

    cout << "IncGamma(2, 1) = " << value << endl;

    return 0;
}