
35 Cards in this Set

  • Front
  • Back

Name the two Bracketing methods for finding roots

The bisection method and the method of false position

PROS: Bracketing methods vs. locally convergent methods

Always convergent once an initial interval [a, b] is found where f(a) and f(b) have opposite signs.

(GLOBALLY CONVERGENT)

Needs no analytical derivative (unlike Newton's method, where computing f'(x) is not always possible or easy).

CONS: Bracketing methods

- Slow convergence (the method of false position is the faster of the two)

- If f(x) has several roots in [a, b], you need to find a separate bracketing interval for each root
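The bisection method mentioned above can be sketched in a few lines; this is a minimal illustrative Python version (the function name and tolerances are my own choices, not from the cards):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: repeatedly halve [a, b] while keeping a sign change inside."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:          # sign change in [a, m]
            b, fb = m, fm
        else:                    # sign change in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Example: root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisect(lambda x: x**2 - 2, 1.0, 2.0)
```

Note how the sign-change invariant is maintained at every step, which is exactly why the method is globally convergent on the initial bracket.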

What does it mean if a root method is locally convergent?

Locally convergent: the method requires an initial guess close to the root to guarantee convergence.

Name the locally convergent root methods

Newton's method and the secant method
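A minimal sketch of both locally convergent methods in Python (names and stopping criteria are illustrative assumptions); note that the secant method replaces the analytical derivative with a finite-difference slope:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: same idea, but f' is approximated from two points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                       # avoid division by zero
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Both find sqrt(2), given starting points near the root
r1 = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)
r2 = secant(lambda x: x**2 - 2, 1.0, 2.0)
```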

Name the bracketing methods to minimize a function

The Golden Ratio (golden-section) method and the Fibonacci method

What conditions are there to use a bracketing method to minimize a function (of one variable)?

f(x) must be unimodal on [a, b], with its minimum at some point p:

f(x) is decreasing on [a, p]

f(x) is increasing on [p, b]

Golden-section search method: the internal points c and d are given as

c = a + (1 - r)(b - a)




d = b - (1 - r)(b - a)

r in Golden Ratio method is

r = ( -1 + sqrt(5) ) / 2
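The two formulas above can be combined into a working golden-section search; this is an illustrative Python sketch (function name and tolerance are assumptions). The key trick is that, because r^2 = 1 - r, one of the two interior points can always be reused, so only one new function evaluation is needed per iteration:

```python
def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    r = (-1 + 5 ** 0.5) / 2              # r = (-1 + sqrt(5)) / 2 ~ 0.618
    c = a + (1 - r) * (b - a)
    d = b - (1 - r) * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc          # old c becomes the new d
            c = a + (1 - r) * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd          # old d becomes the new c
            d = b - (1 - r) * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: minimum of (x - 1.5)^2 on [0, 4]
xmin = golden_section(lambda x: (x - 1.5) ** 2, 0.0, 4.0)
```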

PROS/CONS Golden Ratio

PROS: Although it may be slow, it can be applied in cases where f(x) is not differentiable

CONS: If the function is very flat near the minimum, it has limited accuracy

Golden ratio vs Fibonacci method

The Fibonacci method (which places the test points using Fibonacci numbers) is more efficient than the Golden Ratio method for small n; otherwise the two are essentially identical

Explain the difference between local and global error

The local error is the truncation error from approximating using finite differences (one iteration).

The global error is due to the accumulation of the truncation errors of previous points when calculating the current point.

How would you efficiently solve the matrix system Ax = b?

• Jacobi method

• Gauss-Seidel

• Steepest descent

• Conjugate gradient method

Jacobi vs Gauss-Seidel

Gauss-Seidel converges faster than Jacobi (it immediately utilizes the newly computed values).

But there are cases where Gauss-Seidel won't converge while Jacobi will.
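The difference between the two iterations is visible directly in code: Jacobi computes every update from the previous sweep, while Gauss-Seidel overwrites components in place. A minimal NumPy sketch (the system and iteration counts are illustrative assumptions):

```python
import numpy as np

def jacobi(A, b, x0, iters):
    """Jacobi: every update uses only values from the previous sweep."""
    D = np.diag(A)                       # diagonal of A
    R = A - np.diagflat(D)               # off-diagonal part
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D              # all components updated at once
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel: each update immediately uses the newest values."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # uses already-updated x[:i]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Strictly diagonally dominant system -> both methods converge
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b, np.zeros(2), 50)
```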

Cons: Newton's method to find a root of f(x) = 0

- Division by zero if f'(x) = 0

- The next iterate may land far from the root when the slope f'(x) is small

- Requires the analytical derivative f'(x) (not always possible)

Requirement for steepest descent and CG?

The matrix must be symmetric and positive definite.
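For a symmetric positive definite A, solving Ax = b is equivalent to minimizing 0.5 x^T A x - b^T x, and the residual r = b - Ax is the negative gradient. A minimal steepest-descent sketch in NumPy (matrix, tolerances, and iteration cap are illustrative assumptions):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=500):
    """Steepest descent for Ax = b with A symmetric positive definite.

    Moves along the residual r = b - A x (the negative gradient) with
    the exact line-search step alpha = (r.r) / (r.Ar).
    """
    x = x0.copy()
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))  # optimal step along r
        x = x + alpha * r
        r = b - A @ x
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
```

Conjugate gradients improves on this by making the search directions A-conjugate instead of following the raw residual each step.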

Jacobi and Gauss-Seidel are methods that find...

solution to Ax=b by iteration.

What requirements are there for the Jacobi method to converge?

The spectral radius (largest eigenvalue in absolute value) of the iteration matrix must be less than 1.

If A is strictly diagonally dominant (which guarantees the spectral radius is less than one), the method will converge.

NB: the method might still converge even if A is not strictly diagonally dominant.

What requirements are there for the Gauss-Seidel method to converge?

The spectral radius (largest eigenvalue in absolute value) of the iteration matrix must be less than 1.

If A is strictly diagonally dominant (which guarantees the spectral radius is less than one), the method will converge.

NB: the method might still converge even if A is not strictly diagonally dominant.

Pros/cons of Jacobi and Gauss-Seidel, and which is faster

Gauss-Seidel is the faster to converge.

Pros: Fast, simple to implement

Cons: Not exact, possible convergence issues

Seidel and Newton's are methods to solve

Nonlinear systems of equations, e.g.

f1(x, y) = 0
f2(x, y) = 0

rewritten to fixed-point form

x = g1(x, y)
y = g2(x, y)

What is the requirement of convergence for Seidel and Newton's method to solve nonlinear equations?

The row sums of the absolute values of the partial derivatives of g (the Jacobian matrix) must be less than one:

|dg1/dx| + |dg1/dy| < 1

|dg2/dx| + |dg2/dy| < 1
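The fixed-point Seidel iteration for a nonlinear system can be sketched as below. The example system here is my own (hypothetical), chosen so that the Jacobian row-sum condition holds (each row sum is 1/3 < 1), which guarantees convergence:

```python
def nonlinear_seidel(g1, g2, x, y, iters=100):
    """Seidel iteration for x = g1(x, y), y = g2(x, y).

    As in linear Gauss-Seidel, the new x is used immediately
    when updating y within the same sweep.
    """
    for _ in range(iters):
        x = g1(x, y)
        y = g2(x, y)
    return x, y

# Hypothetical example: 3x - y = 1 and x - 3y = -2, rewritten as
# x = (y + 1) / 3,  y = (x + 2) / 3  (partial-derivative row sums = 1/3 < 1)
x, y = nonlinear_seidel(lambda x, y: (y + 1) / 3,
                        lambda x, y: (x + 2) / 3,
                        0.0, 0.0)
```

The iteration converges to the fixed point (5/8, 7/8), which satisfies both original equations.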

Taylor polynomial, Lagrange, Divided Differences and Chebyshev are...

Different ways to find polynomials P to use for interpolation

Taylor polynomial interpolation, what is good and bad about this method?

Taylor polynomials only use information about f around a single point x0.

Hence, Taylor polynomial interpolation is very accurate close to x0 but inaccurate away from x0.

Taylor polynomials are therefore not well suited for interpolation.

Lagrange polynomials are useful for interpolation, what are some things that are not so good about the method?

The error term is not usable in practice, since we do not know f. The error is

E_n(x) = f^(n+1)(c) / (n+1)! * (x - x0)(x - x1)...(x - xn)

Thus, it is difficult to know the order of Pn to use --> may need to try different sets of points to interpolate.
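Evaluating the Lagrange interpolating polynomial directly from its definition is short enough to sketch (function name and example nodes are illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x.

    P(x) = sum_i y_i * L_i(x),  where
    L_i(x) = prod over j != i of (x - x_j) / (x_i - x_j)
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += yi * L
    return total

# Interpolate f(x) = x^2 through three nodes; P2 reproduces it exactly,
# so P2(1.5) = 1.5^2 = 2.25
p = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```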

What are the n distinct roots of the Chebyshev polynomial Tn(x)?

xk = cos( (2k + 1) * pi / (2n) ),  k = 0, 1, ..., n - 1

What are the roots used for?

They can be used as collocation points in Lagrange polynomial interpolation to minimize the interpolation error.

Using Chebyshev nodes removes the Runge phenomenon.
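The node formula from the previous card translates directly to code; this small sketch generates the nodes and is easy to verify against Tn itself (for n = 4, T4(x) = 8x^4 - 8x^2 + 1):

```python
import math

def chebyshev_nodes(n):
    """Roots of T_n on [-1, 1]: x_k = cos((2k + 1) * pi / (2n))."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

nodes = chebyshev_nodes(4)
```

Note that the nodes cluster toward the endpoints of [-1, 1], which is precisely what suppresses the Runge phenomenon compared to equally spaced nodes.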

A good way of reducing the error of a differential approximation (finite differences) is to use ...

Richardson extrapolation

Explain briefly how Richardson extrapolation works

Two difference approximations (of even order O(h^(2k))), one with step length h and one with step length 2h, are combined so that the largest error term cancels.

Can also be used in numerical integration, e.g. on the trapezoidal and Simpson's rules. This is the idea Romberg integration is built upon.
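A concrete instance of the idea: the central difference has error O(h^2), and the combination (4*D(h) - D(2h)) / 3 cancels the h^2 term, leaving O(h^4). A small illustrative sketch (the test function and step length are my own choices):

```python
import math

def central_diff(f, x, h):
    """Central difference approximation of f'(x), truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Combine step h and step 2h to cancel the O(h^2) term:
    D = (4*D(h) - D(2h)) / 3, leaving an O(h^4) error."""
    return (4 * central_diff(f, x, h) - central_diff(f, x, 2 * h)) / 3

# Differentiate sin at x = 1; exact answer is cos(1)
exact = math.cos(1.0)
err_plain = abs(central_diff(math.sin, 1.0, 0.1) - exact)
err_rich = abs(richardson(math.sin, 1.0, 0.1) - exact)
```

With h = 0.1 the extrapolated estimate is several orders of magnitude more accurate than the plain central difference.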

Euler's, Heun's, and RK methods are used to...

Solve ODEs

What is the order of the (global) error in Euler's, Heun's and RK4?

Euler: O(h)

Heun: O(h^2)

RK4: O(h^4)
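The three error orders can be demonstrated side by side; this sketch solves y' = y, y(0) = 1 (so the exact answer at t = 1 is e) with each single-step scheme (the driver function and step count are illustrative):

```python
import math

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def heun_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)          # Euler predictor
    return y + h * (k1 + k2) / 2       # trapezoidal corrector

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(step, f, t0, y0, t_end, n):
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y
errs = {name: abs(solve(s, f, 0.0, 1.0, 1.0, 100) - math.e)
        for name, s in [("euler", euler_step),
                        ("heun", heun_step),
                        ("rk4", rk4_step)]}
```

For the same step count, the errors rank Euler > Heun > RK4, matching the O(h), O(h^2), O(h^4) orders above.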

What is the difference between predictor-corrector methods and the regular numerical ODE solvers (Euler, Heun, RK)?

Predictor-corrector methods are multistep (they use information from a set of previous points: k, k-1, k-2, ...).

Regular methods are single-step.

Multistep methods can be implicit or explicit.

Use Adams-Bashforth (explicit) to predict the next step, then Adams-Moulton (implicit) to correct it.
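A minimal sketch of this predict-then-correct pattern, assuming a two-step Adams-Bashforth predictor and the Adams-Moulton trapezoidal corrector evaluated at the predicted value (the bootstrap choice and step sizes are my own illustrative assumptions):

```python
import math

def ab2_am_pc(f, t0, y0, h, n):
    """Two-step predictor-corrector sketch:
    predict with Adams-Bashforth 2, correct with the (implicit)
    Adams-Moulton trapezoidal rule evaluated at the predicted value.
    A single Heun step bootstraps the first multistep point.
    """
    t, y = t0, y0
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y1 = y + h * (k1 + k2) / 2           # bootstrap y_1
    f_prev, y, t = k1, y1, t + h
    for _ in range(n - 1):
        f_cur = f(t, y)
        y_pred = y + h / 2 * (3 * f_cur - f_prev)    # AB2 predictor (explicit)
        y = y + h / 2 * (f(t + h, y_pred) + f_cur)   # AM corrector
        f_prev, t = f_cur, t + h
    return y

# y' = y, y(0) = 1 integrated to t = 1; exact answer is e
y_end = ab2_am_pc(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

The multistep character is visible in `f_prev`: the predictor reuses the derivative from the previous point instead of evaluating f at a new internal stage.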

What are the two ways to solve PDE systems numerically?

It depends on the finite-difference scheme. If explicit, the update is simple and straightforward. If implicit, you need to solve a system of equations to find the next value. This can be done by:

- Iterative methods:

Jacobi, Gauss-Seidel

- Gradient search:

Steepest descent

(Newton's)

CG

What are the advantages of the Spectral Element Method over finite differences?

- High order --> accurate solutions

- You can have an irregular mesh

- All boundary conditions are automatically incorporated in the method

- Based on the weak formulation

What are the disadvantages of the Spectral Element Method over finite differences?

Difficult to implement:

- Node numbering

- Mesh generation

- Parallelization

Meshes are tedious to create