Important Questions -- Numerical Methods


Module 2 -- Interpolation

1. Newton's forward and backward interpolation

🔢 Sample Dataset (Equally Spaced x-values)

x f(x)
1 1
2 8
3 27
4 64
5 125

✅ This is based on f(x) = x³, which makes checking your results easy.

Step 1: Build the difference table

x	f(x)	Δf(x)	Δ²f(x)	Δ³f(x)	Δ⁴f(x)
1	1	7	12	6	0
2	8	19	18	6
3	27	37	24
4	64	61
5	125

(Each difference is written on the row of the upper of the two entries it comes from.)

The step size is h = 1, since the gap between consecutive x-values is 1.

Step 2: Find p

For forward interpolation, we choose:

$$p = \frac{x - x_0}{h}$$

So, for f(2.4), x = 2.4, and x0 is the first entry in the table, 1.

∴ p = (2.4 - 1)/1 = 1.4

For backward interpolation, we choose:

$$p = \frac{x - x_n}{h}$$

For f(4.6), x = 4.6 and xn is the last entry in the table, 5.

∴ p = (4.6 - 5)/1 = -0.4

Step 3: Forward and Backward Interpolation

Forward Interpolation formula:

$$P(x) = f(x_0) + p\,\Delta f(x_0) + \frac{p(p-1)}{2!}\Delta^2 f(x_0) + \frac{p(p-1)(p-2)}{3!}\Delta^3 f(x_0) + \cdots$$

So,

$$P(2.4) = 1 + (1.4)(7) + \frac{(1.4)(0.4)}{2!}(12) + \frac{(1.4)(0.4)(-0.6)}{3!}(6) + 0$$
$$P(2.4) = 1 + 9.8 + 3.36 - 0.336 + 0 = 13.824$$

Backward Interpolation formula

$$P(x) = f(x_n) + p\,\nabla f(x_n) + \frac{p(p+1)}{2!}\nabla^2 f(x_n) + \frac{p(p+1)(p+2)}{3!}\nabla^3 f(x_n) + \cdots$$

So,

$$P(4.6) = 125 + (-0.4)(61) + \frac{(-0.4)(0.6)}{2!}(24) + \frac{(-0.4)(0.6)(1.6)}{3!}(6) + 0$$
$$P(4.6) = 125 - 24.4 - 2.88 - 0.384 + 0 = 97.336$$

If you plug both 2.4 and 4.6 into f(x) = x³, you can verify that the answers are correct.
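To double-check with a computer, here is a minimal Python sketch (the helper names are my own) that builds the forward-difference table and evaluates both formulas:

```python
from math import factorial

x = [1, 2, 3, 4, 5]
y = [1, 8, 27, 64, 125]  # f(x) = x^3

def difference_table(y):
    """Columns of the forward-difference table: [f, Δf, Δ²f, ...]."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(x, y, xv):
    h = x[1] - x[0]
    p = (xv - x[0]) / h
    result, coeff = 0.0, 1.0
    for k, col in enumerate(difference_table(y)):
        result += coeff * col[0] / factorial(k)  # top entry of each column
        coeff *= (p - k)                         # builds p(p-1)(p-2)...
    return result

def newton_backward(x, y, xv):
    h = x[1] - x[0]
    p = (xv - x[-1]) / h
    result, coeff = 0.0, 1.0
    for k, col in enumerate(difference_table(y)):
        result += coeff * col[-1] / factorial(k)  # bottom entry of each column
        coeff *= (p + k)                          # builds p(p+1)(p+2)...
    return result

print(newton_forward(x, y, 2.4))   # 13.824 = 2.4**3
print(newton_backward(x, y, 4.6))  # 97.336 = 4.6**3
```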


2. Lagrange Interpolation and Newton's Divided Difference Interpolation

The formula

Given a set of n+1 data points:

$$(x_0, y_0),\ (x_1, y_1),\ \ldots,\ (x_n, y_n),$$

the Lagrange interpolation polynomial P(x) is given by:

$$P(x) = \sum_{j=0}^{n} y_j\,L_j(x)$$

where the Lagrange basis polynomial Lj(x) is defined as:

$$L_j(x) = \prod_{\substack{i=0 \\ i \neq j}}^{n} \frac{x - x_i}{x_j - x_i}$$

Let's practice an example to clear this up.

📊 New Dataset (Unequally spaced x-values):

x f(x)
1 2
3 10
4 22
7 70

Find f(5) using Lagrange Interpolation.

Or better yet we can write the dataset like this:

x 1 3 4 7
f(x) 2 10 22 70

which can be considered pairs of the following format:

$$(x_0, y_0) = (1, 2),\quad (x_1, y_1) = (3, 10),\quad (x_2, y_2) = (4, 22),\quad (x_3, y_3) = (7, 70)$$

Step 1: Create the Lagrange Polynomials

For each basis polynomial, the numerator is the product of (x - xi) over every data point except the one it belongs to. The denominator is the same product with x replaced by the "hidden" value (x0 for L0, x1 for L1, and so on). Then multiply the whole fraction by the corresponding f(x) value. (Note: we are folding the yj factor directly into each Lj here.)

$$L_0 = \frac{(x-3)(x-4)(x-7)}{(1-3)(1-4)(1-7)} \times 2 = \frac{(x-3)(x-4)(x-7)}{-36} \times 2 = -\frac{(x-3)(x-4)(x-7)}{18}$$

$$L_1 = \frac{(x-1)(x-4)(x-7)}{(3-1)(3-4)(3-7)} \times 10 = \frac{(x-1)(x-4)(x-7)}{8} \times 10 = \frac{5(x-1)(x-4)(x-7)}{4}$$

$$L_2 = \frac{(x-1)(x-3)(x-7)}{(4-1)(4-3)(4-7)} \times 22 = \frac{(x-1)(x-3)(x-7)}{-9} \times 22 = -\frac{22(x-1)(x-3)(x-7)}{9}$$

$$L_3 = \frac{(x-1)(x-3)(x-4)}{(7-1)(7-3)(7-4)} \times 70 = \frac{(x-1)(x-3)(x-4)}{72} \times 70 = \frac{35(x-1)(x-3)(x-4)}{36}$$

For x = 5,

We put 5 in place of x and add up all the Lagrange polynomials.

L0 + L1 + L2 + L3

So,

$$f(5) = -\frac{(2)(1)(-2)}{18} + \frac{5(4)(1)(-2)}{4} - \frac{22(4)(2)(-2)}{9} + \frac{35(4)(2)(1)}{36}$$
$$= 0.222 - 10 + 39.111 + 7.778 = 37.111$$
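If you want to sanity-check the algebra, here is a minimal Lagrange sketch in plain Python (the function name is my own):

```python
def lagrange(xs, ys, xv):
    """Evaluate the Lagrange interpolating polynomial at xv."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                basis *= (xv - xi) / (xj - xi)  # L_j evaluated at xv
        total += yj * basis
    return total

xs = [1, 3, 4, 7]
ys = [2, 10, 22, 70]
print(lagrange(xs, ys, 5))  # ~37.111
```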

Using Newton's Divided Difference Interpolation

So again we have the dataset.

x 1 3 4 7
f(x) 2 10 22 70

Step 1: Build the divided-difference table

x	f(x)	1st divided difference	2nd divided difference	3rd divided difference
1	2	(10 - 2)/(3 - 1) = 4	(12 - 4)/(4 - 1) = 2.667	(1 - 2.667)/(7 - 1) = -0.278
3	10	(22 - 10)/(4 - 3) = 12	(16 - 12)/(7 - 3) = 1
4	22	(70 - 22)/(7 - 4) = 16
7	70

(Each entry is built from the two entries in the column to its left, divided by the spread of the x-values involved.)

Then use this formula:

$$P(x) = f(x_0) + (x - x_0)\,f[x_0, x_1] + (x - x_0)(x - x_1)\,f[x_0, x_1, x_2] + \cdots$$

where f[x0, x1], f[x0, x1, x2], ... are the divided differences along the top row of the table.

So,

P(5) = 2 + (4 × 4) + (4)(2) × 2.667 + (4)(2)(1) × (-0.278)
P(5) = 2 + 16 + 21.33 - 2.22
P(5) ≈ 37.11

which agrees with the Lagrange result (the small difference is just rounding in the table entries).
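The same procedure as a sketch, using exact fractions so that hand-rounding (2.667, -0.278) doesn't creep into the answer:

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Top edge of the divided-difference table: [f[x0], f[x0,x1], ...]."""
    n = len(xs)
    col = [Fraction(v) for v in ys]
    coeffs = [col[0]]
    for order in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + order] - xs[i])
               for i in range(n - order)]
        coeffs.append(col[0])
    return coeffs

def newton_dd(xs, ys, xv):
    result, product = Fraction(0), Fraction(1)
    for k, c in enumerate(divided_differences(xs, ys)):
        result += c * product
        product *= (xv - xs[k])  # (x - x0)(x - x1)... built up term by term
    return result

xs = [1, 3, 4, 7]
ys = [2, 10, 22, 70]
print(float(newton_dd(xs, ys, 5)))  # 37.111..., matching Lagrange exactly
```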

Module 3 -- Numerical Integration

1. Trapezoidal Rule

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\Big[f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{n-1}) + f(x_n)\Big]$$

All terms except the first and last one are multiplied by 2.

Approximate the value of:

$$\int_1^2 \ln(x)\,dx$$

with 4 sub-intervals

Step 1: Find step size h

n = 4 subintervals, b = 2, a = 1

$$h = \frac{b - a}{n} = \frac{2 - 1}{4} = 0.25$$

Step 2: Calculate xi values

x0 = a = 1
x1 = a + h = 1.25
x2 = a + 2h = 1.5
x3 = a + 3h = 1.75
x4 = a + 4h = 2

Step 3: Evaluate the values for f(xi)

  1. f(1) = ln(1) = 0
  2. f(1.25) = ln(1.25) = 0.2231
  3. f(1.5) = ln(1.5) = 0.4054
  4. f(1.75) = ln(1.75) = 0.5596
  5. f(2) = ln(2) = 0.693

Step 4: Apply Trapezoidal rule.

$$\int_1^2 \ln(x)\,dx \approx \frac{h}{2}\Big[f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(x_4)\Big]$$
= 0.125 × [0 + 0.4462 + 0.8108 + 1.1192 + 0.693]
= 0.125 × 3.0692
= 0.38365
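As a quick cross-check, a sketch of the composite trapezoidal rule (the helper below is my own, not a library routine):

```python
import math

def trapezoidal(f, a, b, n):
    h = (b - a) / n
    total = f(a) + f(b)                                  # endpoints, weight 1
    total += 2 * sum(f(a + i * h) for i in range(1, n))  # interior, weight 2
    return h / 2 * total

print(trapezoidal(math.log, 1, 2, 4))  # ~0.3837
# Exact: 2*ln(2) - 1 ~ 0.3863, so 4 subintervals already get close.
```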

2. Simpson's 1/3 Rule

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Big[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 4f(x_{n-1}) + f(x_n)\Big]$$

All interior terms are multiplied by a constant: odd-indexed terms by 4, even-indexed terms by 2. (The first and last terms keep weight 1, and n must be even.)

Approximate the value of:

$$\int_0^6 \frac{1}{1 + x^2}\,dx$$

with n = 6 subintervals

Step 1: Find h

$$h = \frac{b - a}{n} = \frac{6 - 0}{6} = 1$$

Step 2: Calculate xi values

x0 = a = 0
x1 = a + h = 1
x2 = a + 2h = 2
x3 = a + 3h = 3
x4 = a + 4h = 4
x5 = a + 5h = 5
x6 = a + 6h = 6

Step 3: Calculate f(xi) values

Step 4: Apply formula

$$\int_0^6 \frac{1}{1 + x^2}\,dx \approx \frac{h}{3}\Big[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + 4f(x_5) + f(x_6)\Big]$$
= (1/3) × [1 + 2 + 0.4 + 0.4 + 0.1176 + 0.1538 + 0.0270]
= (1/3) × 4.0984
= 1.3661

(The exact value is arctan(6) ≈ 1.4056; the approximation is rough here because h = 1 is a coarse step for this integrand.)
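And the same idea for Simpson's 1/3 rule as a sketch (again my own helper; note that n must be even):

```python
import math

def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of subintervals")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # odd index -> 4, even -> 2
    return h / 3 * total

print(simpson(lambda x: 1 / (1 + x**2), 0, 6, 6))  # ~1.3661
print(math.atan(6))                                # exact: ~1.4056
```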

Module 5 -- Solutions to a non-linear algebraic equation

1. The Bisection Method

Algorithm

  1. Initial Bracket: Choose a and b so that f(a)f(b)<0.
  2. Midpoint: Compute the midpoint c = (a + b)/2.
  3. Test Sign:
    • If f(c)=0 (or is close enough to zero), then c is the root.
    • If f(a)f(c)<0, set b=c; otherwise, set a=c.
  4. Repeat: Continue the process until the interval [a,b] is sufficiently small or until the error is less than a predetermined tolerance (or till c stops changing).
Example: f(x) = x³ - x - 2

Let's go with the classic pair (-1, 1):

f(-1) = -1 + 1 - 2 = -2
f(1) = 1 - 1 - 2 = -2

Their product is 4, which is not less than zero, so we need a different interval.

Let's try (1, 2):

f(2) = 8 - 2 - 2 = 4

And f(1) × f(2) = -8 < 0.

So we have a valid interval with a = 1, b = 2.

So we find the midpoint c = (1 + 2)/2 = 1.5

and f(1.5) = -0.125

So far, we have had:

Iteration	a	b	midpoint (c)	f(c)
1	1	2	1.5	-0.125

Let's continue until the value of c stops changing (to the precision we care about), which takes 17 iterations here.

The rule: if f(c) × f(a) > 0, set a = c; otherwise, set b = c.

Iteration	a	b	midpoint (c)	f(a)	f(c)	f(c) × f(a) > 0?
1	1	2	1.5	-2	-0.125	Yes, set a = c
2	1.5	2	1.75	-0.125	1.609	No, set b = c
3	1.5	1.75	1.625	-0.125	0.666	No, set b = c
4	1.5	1.625	1.5625	-0.125	0.252	No, set b = c
5	1.5	1.5625	1.53125	-0.125	0.059	No, set b = c
6	1.5	1.53125	1.515625	-0.125	-0.034	Yes, set a = c
7	1.515625	1.53125	1.5234375	-0.034	0.012	No, set b = c
8	1.515625	1.5234375	1.51953125	-0.034	-0.011	Yes, set a = c
9	1.51953125	1.5234375	1.52148438	-0.011	0.0006	No, set b = c
10	1.51953125	1.52148438	1.52050781	-0.011	-0.0052	Yes, set a = c
11	1.52050781	1.52148438	1.52099609	-0.0052	-0.0023	Yes, set a = c
12	1.52099609	1.52148438	1.52124023	-0.0023	-0.0008	Yes, set a = c
13	1.52124023	1.52148438	1.52136230	-0.0008	-0.0001	Yes, set a = c
14	1.52136230	1.52148438	1.52142334	-0.0001	0.0003	No, set b = c
15	1.52136230	1.52142334	1.52139282	-0.0001	0.00008	No, set b = c
16	1.52136230	1.52139282	1.52137756	-0.0001	-0.00001	Yes, set a = c
17	1.52137756	1.52139282	1.52138519	-0.00001	0.00003	Stop

After 17 iterations, the midpoint c has stopped changing to 4 decimal places and f(c) is very close to zero, which is good enough. So the root is approximately 1.52138.
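A minimal bisection sketch (the tolerance and iteration cap are my own choices, not part of the algorithm itself):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if abs(f(c)) < tol or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) > 0:
            a = c  # root lies in [c, b]
        else:
            b = c  # root lies in [a, c]
    return (a + b) / 2

f = lambda x: x**3 - x - 2
print(bisection(f, 1, 2))  # ~1.521380
```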


2. The Regula-Falsi Method

Algorithm

We start off similar to how we do in the Bisection Method

  1. Initial Bracket: As with bisection, choose a and b so that f(a) × f(b) < 0, that is, f(a) and f(b) have opposite signs.
  2. Interpolation: Compute the new estimate using the formula:

$$c = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}$$

  3. Test Sign:
    • If f(c) = 0 (or is close enough to zero), then c is the root.
    • Otherwise, if f(a) f(c) < 0, set b = c, else set a = c.

This method usually reaches the root faster than the bisection method.

For the same equation:

f(x) = x³ - x - 2

We already know a valid bracket: a = 1, b = 2,

and f(a) = -2, f(b) = 4.

Compute c:

$$c = \frac{a \times f(b) - b \times f(a)}{f(b) - f(a)} = \frac{(1)(4) - (2)(-2)}{4 - (-2)} = \frac{8}{6} = 1.3333$$

f(c) = f(1.3333) = -0.963

f(c) × f(a) = (-0.963) × (-2) = 1.926 > 0, so we set a = c.

Iteration	a	b	c	f(a)	f(c)	f(c) × f(a) > 0?
1	1	2	1.33333	-2	-0.96296	Yes, set a = c
2	1.33333	2	1.46276	-0.96296	-0.33294	Yes, set a = c
3	1.46276	2	1.50404	-0.33294	-0.10169	Yes, set a = c
4	1.50404	2	1.51634	-0.10169	-0.02989	Yes, set a = c
5	1.51634	2	1.51993	-0.02989	-0.00860	Yes, set a = c
6	1.51993	2	1.52096	-0.00860	-0.00248	Yes, set a = c
7	1.52096	2	1.52126	-0.00248	-0.00071	Yes, set a = c
8	1.52126	2	1.52135	-0.00071	-0.00021	Yes, set a = c
9	1.52135	2	1.52137	-0.00021	-0.00006	Yes, set a = c
10	1.52137	2	1.52138	-0.00006	-0.00002	Stop

Notice that f(c) stays negative for this function, so b never moves and only a is updated; regula falsi often converges one-sidedly like this. After 10 iterations, c has settled to 4 decimal places and f(c) is very close to zero, which is good enough.

So the root is approximately 1.52138.
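The same harness with the regula-falsi update in place of the midpoint, again as a sketch:

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))  # secant through (a, f(a)), (b, f(b))
        if abs(f(c)) < tol:
            return c
        if f(a) * f(c) > 0:
            a = c
        else:
            b = c
    return c

f = lambda x: x**3 - x - 2
print(regula_falsi(f, 1, 2))  # ~1.521380, with b stuck at 2 throughout
```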


3. Newton-Raphson Method

  1. Initial Guess: Start with an initial guess x0​.

  2. Iteration: Compute the next approximation using:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Taking the same equation again,

f(x) = x³ - x - 2

And the derivative of this function is:

f′(x) = 3x² - 1

Recall the bracket endpoints from before:

f(1) = -2, f(2) = 4

Neither value is especially close to zero, so a sensible starting guess is the midpoint of the bracket.

So, x0 = (1 + 2)/2 = 1.5

So, f(1.5) = 1.5³ - 1.5 - 2 = -0.125

and f′(1.5) = 3(1.5)² - 1 = 5.75

Let's iterate until the value stops changing.

So, x1:

$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 1.5 - \frac{-0.125}{5.75} = 1.5 + 0.02174 = 1.52174$$

For x2: f(1.52174) = 0.00214 and f′(1.52174) = 5.94707, so

$$x_2 = 1.52174 - \frac{0.00214}{5.94707} = 1.52174 - 0.00036 = 1.52138$$

For x3: f(1.52138) ≈ 0.000002, so the correction is only about 3 × 10⁻⁷:

$$x_3 = 1.52138 - \frac{0.000002}{5.94370} \approx 1.52138$$

x2 and x3 agree to five decimal places, so we stop the process here.

The root is approximately 1.52138, reached in just 3 iterations (versus 17 for bisection).
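A Newton-Raphson sketch, with the derivative supplied by hand just as in the worked example:

```python
def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # stop once the updates become negligible
            return x
    return x

f = lambda x: x**3 - x - 2
df = lambda x: 3 * x**2 - 1
print(newton_raphson(f, df, 1.5))  # ~1.521380 after only a few iterations
```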


Module 6 -- Solutions to an ODE.

1. Euler's Method

Formula

If you know yn ≈ y(xn) and want to advance one step h:

$$y_{n+1} = y_n + h\,f(x_n, y_n)$$

Here f(x, y) is simply the right-hand side of the ODE dy/dx = f(x, y); we evaluate it directly at (xn, yn), with no differentiation required.

Euler's method is only first-order accurate, so it requires very small step sizes to stay accurate.

Given ODE:

$$\frac{dy}{dx} = x + y$$

and y(0) = 1

Given step size: h = 0.1

So, we make a table like this:

n	xn	yn
0	0	1

y1 = y0 + h × f(x0, y0) = 1 + 0.1 × (0 + 1) = 1 + 0.1 = 1.1

n	xn	yn
0	0	1
1	0.1	1.1

where xn increases by h at each step.

So, we found y(0.1) ≈ 1.1

Let's continue till n = 2

So,

y2 = y1 + h × f(x1, y1) = 1.1 + 0.1 × (0.1 + 1.1) = 1.1 + 0.12 = 1.22

n	xn	yn
0	0	1
1	0.1	1.1
2	0.2	1.22

And y(0.2) ≈ 1.22. (The exact solution is y = 2e^x - x - 1, which gives y(0.2) = 1.2428, so Euler is already noticeably off.)
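Euler's method fits in a few lines; a sketch (the names are my own):

```python
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)  # y_{n+1} = y_n + h * f(x_n, y_n)
        x += h
    return y

f = lambda x, y: x + y
print(euler(f, 0, 1, 0.1, 2))  # y(0.2) ~ 1.22; exact is 2e^0.2 - 1.2 ~ 1.2428
```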


2. Runge-Kutta 2nd Order (RK2)

Despite the name, this is a method for first-order ODEs; "2nd order" refers to its accuracy, not to the order of the equation.

It is obtained by modifying Euler's method.

First, an Euler step gives a predictor:

$$y^{*}_{n+1} = y_n + h\,f(x_n, y_n)$$

then we correct it using the average of the slopes at both ends of the step:

$$y_{n+1} = y_n + \frac{h}{2}\Big[f(x_n, y_n) + f(x_{n+1}, y^{*}_{n+1})\Big]$$

For the same ODE:

$$\frac{dy}{dx} = x + y$$

with step size h = 0.1

So,

n	xn	yn	f(xn, yn)
0	0	1	1

For y1, the Euler predictor gives:

y*1 = 1.1

(from Euler's method above).

Now:

y1 = y0 + (0.1/2) × [ f(x0, y0) + f(x1, y*1) ]

x1 = x0 + h = 0.1

y1 = 1 + 0.05 × [ 1 + f(0.1, 1.1) ] = 1 + 0.05 × (1 + 1.2) = 1 + 0.11 = 1.11
n	xn	yn	f(xn, yn)
0	0	1	1
1	0.1	1.11	1.21

So, y(0.1) ≈ 1.11

Now, the predictor for the next step must start from the corrected y1 = 1.11:

y*2 = y1 + h × f(x1, y1) = 1.11 + 0.1 × 1.21 = 1.231

(a plain Euler step from the corrected value). Then:

y2 = y1 + 0.05 × [ f(x1, y1) + f(0.2, 1.231) ]
y2 = 1.11 + 0.05 × [ 1.21 + 1.431 ]
y2 = 1.11 + 0.13205
y2 = 1.24205
n	xn	yn	f(xn, yn)
0	0	1	1
1	0.1	1.11	1.21
2	0.2	1.24205	1.44205

So, y(0.2) ≈ 1.24205

Notice the precision difference? The exact value is y(0.2) = 1.2428, so RK2's 1.24205 is far closer than Euler's 1.22.

It gets even more precise with RK4.
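Heun/RK2 as a sketch, reproducing the table above:

```python
def heun(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y_pred = y + h * f(x, y)                      # predictor (Euler step)
        y = y + h / 2 * (f(x, y) + f(x + h, y_pred))  # corrector (average slope)
        x += h
    return y

f = lambda x, y: x + y
print(heun(f, 0, 1, 0.1, 2))  # ~1.24205 (exact: 1.24281)
```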


3. Runge-Kutta 4th Order (RK4)

Builds on RK2:

We first calculate these four slopes:

$$k_1 = h \times f(x_n, y_n)$$
$$k_2 = h \times f\left(x_n + \frac{h}{2},\; y_n + \frac{k_1}{2}\right)$$
$$k_3 = h \times f\left(x_n + \frac{h}{2},\; y_n + \frac{k_2}{2}\right)$$
$$k_4 = h \times f\left(x_n + h,\; y_n + k_3\right)$$

(Note that k4 uses the whole of k3, not half of it.)

And finally :

$$y_{n+1} = y_n + \frac{1}{6}\big(k_1 + 2k_2 + 2k_3 + k_4\big)$$

To keep things familiar, let's stick with the same ODE:

dydx = x + y

with step size h = 0.1

and y(0) = 1

So, x0 = 0 and y0 = 1.

So:

k1 = 0.1 × f(x0, y0) = 0.1 × 1 = 0.1
k2 = 0.1 × f(x0 + 0.05, y0 + k1/2) = 0.1 × f(0.05, 1.05) = 0.1 × 1.1 = 0.11

Next,

k3 = 0.1 × f(0.05, y0 + k2/2) = 0.1 × f(0.05, 1.055) = 0.1 × 1.105 = 0.1105

Next,

k4 = 0.1 × f(0.1, y0 + k3) = 0.1 × f(0.1, 1.1105) = 0.1 × 1.2105 = 0.12105

Finally

y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
y1 = 1 + (1/6)[ 0.1 + 2 × 0.11 + 2 × 0.1105 + 0.12105 ]
y1 = 1 + (1/6)(0.66205)
y1 = 1.110342

which matches the exact value y(0.1) = 1.110342 to six decimal places, noticeably better than RK2's 1.11.

Now for x = 0.2, starting from x1 = 0.1 and y1 = 1.110342 (the step size stays h = 0.1):

k1 = 0.1 × f(0.1, 1.110342) = 0.1 × 1.210342 = 0.121034
k2 = 0.1 × f(0.15, 1.110342 + 0.060517) = 0.1 × f(0.15, 1.170859) = 0.132086
k3 = 0.1 × f(0.15, 1.110342 + 0.066043) = 0.1 × f(0.15, 1.176385) = 0.132639
k4 = 0.1 × f(0.2, 1.110342 + 0.132639) = 0.1 × f(0.2, 1.242981) = 0.144298

Finally,

y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4)
y2 = 1.110342 + (1/6)[ 0.121034 + 2 × 0.132086 + 2 × 0.132639 + 0.144298 ]
y2 = 1.110342 + (1/6)(0.794782)
y2 = 1.110342 + 0.132464
y2 = 1.242806

which again matches the exact value y(0.2) = 1.242806, far more precise than RK2's 1.24205.
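Classical RK4 as a sketch, reproducing the hand computation:

```python
def rk4(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: x + y
print(rk4(f, 0, 1, 0.1, 2))  # ~1.2428055, matching 2e^0.2 - 1.2 to 7 digits
```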


4. Predictor-Corrector Methods

1. Heun's Method.

We'll look at Heun's method, a simple predictor-corrector pair (it's especially good for exams):

It's a 2-step process and a bit similar to RK2.

Step 1: Predictor (Euler's estimate):

$$y^{\text{Predict}}_{n+1} = y_n + h \times f(x_n, y_n)$$

Step 2: Corrector (Average slope):

$$y^{\text{Corrected}}_{n+1} = y_n + \frac{h}{2}\Big[f(x_n, y_n) + f(x_{n+1}, y^{\text{Predict}}_{n+1})\Big]$$

(Note that the second slope is evaluated at xn+1, not xn.)

Yeah, it's exactly the same as RK2.


5. Finite Difference Method (FDM)

FDM converts differential equations into algebraic equations by replacing derivatives with finite differences.

We commonly solve second-order ODEs like:

$$\frac{d^2y}{dx^2} = f(x), \qquad y(a) = \alpha,\quad y(b) = \beta$$

We do that by replacing the second derivative with a central difference, which at each interior point xi gives:

$$y_{i-1} - 2y_i + y_{i+1} = h^2\,f(x_i)$$

This gives us a system of linear equations which we can solve using methods like Gauss elimination or LU factorization.

The number of unknowns equals the number of interior points, i.e. the grid points that lie strictly between the two boundary conditions.

For example:

$$\frac{d^2y}{dx^2} = x, \qquad y(0) = 0,\quad y(1) = 0$$

with step size h = 0.25 and a chosen interval [0, 1]

So the grid points are:

x0 = 0, x1 = 0.25, x2 = 0.50, x3 = 0.75, x4 = 1

Five points in total, three of them interior.

The boundary values are known: y0 = y(0) = 0 and y4 = y(1) = 0.

Unknown values: y1, y2, y3,

and h² = 0.0625.

So, for x1 (i = 1), the right-hand side is:

$$h^2 \times x_1 = 0.0625 \times 0.25 = 0.015625$$

Writing the central-difference equation at each interior point, and using y0 = y4 = 0, we get the system:

$$-2y_1 + y_2 = 0.015625$$
$$y_1 - 2y_2 + y_3 = 0.03125$$
$$y_2 - 2y_3 = 0.046875$$

Now if we solve these three equations we can get all the intermediate values.

That can be done using any method for linear systems, such as Gauss elimination or LU factorization.
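As a sketch, here is the same system assembled and solved with NumPy (assuming NumPy is available; the matrix is just the three equations above):

```python
import numpy as np

h = 0.25
xs = np.array([0.25, 0.5, 0.75])  # interior points x1, x2, x3

# Coefficients of y_{i-1} - 2*y_i + y_{i+1} = h^2 * x_i, with y0 = y4 = 0
A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -2.0]])
b = h**2 * xs

print(np.linalg.solve(A, b))
# [-0.0390625 -0.0625 -0.0546875]
# This matches the exact solution y = (x^3 - x)/6 at these points, since the
# central difference is exact for cubic polynomials.
```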