3.3. Interpolation and Lagrange polynomial — Numerical_Analysis (2024)


3.3.1. Lagrange Interpolating Polynomials#

For a set of \(n + 1\) distinct data points (no two \(x_{j}\) are the same)

\[\begin{equation*}(x_{0},y_{0}),~(x_{1},y_{1}),~(x_{2},y_{2})\ldots ,~(x_{n-1},y_{n-1}),~(x_{n},y_{n}),\end{equation*}\]

the Lagrange basis polynomials are defined as follows,

(3.15)#\[\begin{equation}{L_{j}(x)=\prod _{\begin{smallmatrix}0\leq m\leq n\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}},}\end{equation}\]

where \(y_{j} = f(x_{j})\) and \(0\leq j\leq n\).

Furthermore, the interpolation polynomial in the Lagrange form is a linear combination of these Lagrange basis polynomials

(3.16)#\[{L(x)=\sum _{j=0}^{n}y_{j}L_{j}(x) = y_{0} L_{0}(x) + y_{1} L_{1}(x) + \ldots + y_{n} L_{n}(x)}\]
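Each basis polynomial equals \(1\) at its own node and \(0\) at every other node,

\[\begin{equation*}L_{j}(x_{k}) = \begin{cases} 1, & k = j,\\ 0, & k \neq j,\end{cases}\end{equation*}\]

so \(L(x_{k}) = \sum_{j=0}^{n} y_{j}L_{j}(x_{k}) = y_{k}\); that is, the interpolating polynomial passes through every data point.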

Theorem

Assume that \(x_0\), \(x_1\), … , \(x_{n}\) are \(n+1\) distinct numbers in \([a, b]\) and \(f \in C^{n+1} [a,b]\). Then, for each \(x \in [a,b]\), there exists a number \(\xi(x) \in (x_{0},x_{1})\bigcup \ldots \bigcup (x_{n-1},x_{n})\) such that

(3.17)#\[f(x) = L(x) + \frac{f^{(n+1)} (\xi(x)) }{ (n+1)!} (x- x_0) (x - x_1) \ldots (x - x_n),\quad x\in [a,b],\]

where \(L(x)\) is the interpolating polynomial provided in equation (3.16).

Proof:

Note that \(f(x_{j}) = L(x_{j})\) for \(0\leq j \leq n\). Therefore, when \(x = x_{j}\) for some \(0\leq j \leq n\), the product term in equation (3.17) vanishes and both sides equal \(f(x_{j})\), so any choice of \(\xi(x) \in (x_{0},x_{1})\bigcup \ldots \bigcup (x_{n-1},x_{n})\) yields equation (3.17).

Now, when \(x \neq x_{j}\) for \(j=0,1,\ldots, n\), define a function \(g(t)\) for \(t\in[a,b]\) as follows,

\[\begin{align*}g(t) &= f(t) - L(t) - [f(x) - L(x)]\frac{(t- x_0) (t - x_1) \ldots (t - x_n)}{(x- x_0) (x - x_1) \ldots (x - x_n)}\\& = f(t) - L(t) - [f(x) - L(x)]\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)}.\end{align*}\]

Since \(f \in C^{n+1}[a, b]\) and \(L \in C^{\infty}[a, b]\), it follows that \(g \in C^{n+1}[a, b]\). Now, for \(t = x_{k}\), \(k=0,1,\ldots, n\), we have

\[\begin{align*}g(x_{k}) & = f(x_{k}) - L(x_{k}) - [f(x) - L(x)]\prod _{j = 0 }^{n}\frac{(x_{k} - x_j)}{(x - x_j)}\\& = 0 - [f(x) - L(x)]\cdot 0 = 0.\end{align*}\]

Moreover, when \(t = x\), we have,

\[\begin{align*}g(x) &= f(x) - L(x) - [f(x) - L(x)]\prod _{j = 0 }^{n}\frac{(x - x_j)}{(x - x_j)}\\& = f(x) - L(x) - [f(x) - L(x)] = 0.\end{align*}\]

Therefore, \(g \in C^{n+1}[a, b]\) and \(g\) vanishes at the \(n+2\) distinct points \(x\), \(x_0\), \(x_1\), … , \(x_n\). By the Generalized Rolle's Theorem, there is a number \(\xi\in (a, b)\) such that

\[\begin{align*}g^{(n+1)}(\xi) = 0.\end{align*}\]

It follows that

(3.18)#\[0 = g^{(n+1)}(\xi) = f^{(n+1)}(\xi) - L^{(n+1)}(\xi) - [f(x) - L(x)]\frac{d^{n+1}}{dt^{n+1}}\left[\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)}\right]_{t = \xi}\]

On the other hand, since \(L\) is a polynomial of degree at most \(n\), we have \(L^{(n+1)} \equiv 0\). Moreover, \(\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)}\) is a polynomial of degree \(n+1\) in \(t\):

\[\begin{align*}\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)} = \left[\frac{1}{\prod _{j = 0 }^{n}(x-x_{j})} \right] t^{n+1} + \text{lower-degree terms.}\end{align*}\]

The \((n+1)\)st derivative of the lower-degree terms is zero, and

(3.19)#\[\frac{d^{n+1}}{dt^{n+1}}\prod _{j = 0 }^{n}\frac{(t - x_j)}{(x - x_j)} = \frac{(n+1)!}{\prod _{j = 0 }^{n}(x-x_{j})}.\]

Substituting these results into equation (3.18), we obtain

\[\begin{align*}0 = g^{(n+1)}(\xi) = f^{(n+1)}(\xi) - 0 - [f(x) - L(x)] \frac{(n+1)!}{\prod _{j = 0 }^{n}(x-x_{j})}.\end{align*}\]

Solving the above equation for \(f(x)\), we have

\[\begin{align*}f(x) = L(x) + \frac{f^{(n+1)} (\xi(x)) }{ (n+1)!} \prod _{j = 0 }^{n}(x-x_{j}),\end{align*}\]

which is equation (3.17). This completes the proof.
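As a quick symbolic sanity check of equation (3.19) (a sketch of ours, not from the text), one can differentiate the product \(n+1\) times with a computer algebra system; here \(n = 2\), and the point \(x\) and the nodes are arbitrary illustrative values:

import sympy as sp

t = sp.symbols('t')
x_val = sp.Rational(7, 2)      # an arbitrary evaluation point x
nodes = [0, 1, 2]              # arbitrary distinct nodes x0, x1, x2 (n = 2)

# Build the product from (3.19), mirroring the loop style used in this section
P = sp.Integer(1)
for xj in nodes:
    P *= (t - xj) / (x_val - xj)

# Left-hand side: the (n+1)-st derivative in t
lhs = sp.diff(P, t, len(nodes))

# Right-hand side of (3.19): (n+1)! / prod(x - xj)
rhs = sp.factorial(len(nodes))
for xj in nodes:
    rhs /= (x_val - xj)

print(sp.simplify(lhs - rhs))  # prints 0, confirming (3.19) for this case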

Corollary

In addition to the assumptions of the theorem above, assume that

(3.20)#\[\begin{equation}\left| f^{(n+1)} (\xi(x)) \right| \leq M\end{equation}\]

for some constant \(M\) and all points \(\xi(x) \in (x_{0},~x_{1})\bigcup \ldots \bigcup (x_{n-1},~x_{n})\), then

(3.21)#\[\begin{align}|f(x) - L(x)| &= \frac{\left| f^{(n+1)} (\xi(x)) \right|}{(n+1)!} \left| (x-x_{0})(x-x_{1})\ldots(x-x_{n}) \right| \notag\\& \leq \frac{M}{(n+1)!} \left| (x-x_{0})(x-x_{1})\ldots(x-x_{n}) \right|.\end{align}\]

import numpy as np

def LagrangePolyCoeff(x, i, xn):
    '''
    Evaluate the Lagrange basis polynomial L_i at the point(s) x.

    Parameters
    ----------
    x : float or array
        point(s) at which to evaluate
    i : int
        index of L_i
    xn : list/array
        the interpolation nodes x0, x1, x2, ..., xn

    Returns
    -------
    Lj : float or array
        the value of L_i at x
    '''
    Lj = 1
    for j in range(len(xn)):
        if i != j:
            Lj *= (x - xn[j]) / (xn[i] - xn[j])
    return Lj

def LagrangePoly(x, xn, yn):
    '''
    Evaluate the Lagrange interpolating polynomial L at the point(s) x.

    Parameters
    ----------
    x : float or array
        point(s) at which to evaluate
    xn : list/array
        the interpolation nodes x0, x1, x2, ..., xn
    yn : list/array
        the data values y0 = f(x0), y1 = f(x1), ..., yn = f(xn)

    Returns
    -------
    L : float or array
        L(x)
    '''
    # Stack the basis polynomials L_0(x), ..., L_n(x) and take the
    # linear combination with the data values yn, as in equation (3.16).
    basis = np.array([LagrangePolyCoeff(x, i, xn) for i in range(len(xn))])
    L = np.dot(yn, basis)
    return L
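For instance, a quick usage sketch with the data set appearing later in this section; at the nodes the interpolant reproduces the data exactly:

import numpy as np

xn = np.array([1, 2, 3, 4, 5, 6])
yn = np.array([-3, 0, -1, 2, 1, 4])

print(LagrangePoly(2.5, xn, yn))               # the interpolant evaluated at x = 2.5
print(LagrangePoly(xn.astype(float), xn, yn))  # reproduces yn at the nodes: [-3, 0, -1, 2, 1, 4]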

function [Lj] = LagrangePolyCoeff(x, i, xn)
%{
Parameters
----------
x : float/array
    point(s) at which to evaluate
i : int
    index of L_i
xn : array
    the interpolation nodes x0, x1, x2, ..., xn
Returns
-------
Lj : float/array
    the value of L_i at x
%}
Lj = 1;
for j = 1:length(xn)
    if i ~= j
        Lj = Lj .* (x - xn(j)) / (xn(i) - xn(j));
    end
end

function [L] = LagrangePoly(x, xn, yn)
%{
Parameters
----------
x : float/array
    point(s) at which to evaluate
xn : array
    the interpolation nodes x0, x1, x2, ..., xn
yn : array
    the data values y0 = f(x0), y1 = f(x1), ..., yn = f(xn)
Returns
-------
L : float/array
    L(x)
Example:
xn = [1, 2, 3, 4, 5, 6];
yn = [-3, 0, -1, 2, 1, 4];
x = linspace(min(xn) - 1, max(xn) + 1, 100);
%}
% Row i holds the basis polynomial L_{i-1} evaluated at every point in x.
LagrangeBasis = zeros(length(xn), length(x));
for i = 1:length(xn)
    LagrangeBasis(i, :) = LagrangePolyCoeff(x, i, xn);
end
L = 0 .* x;
for j = 1:length(x)
    L(j) = dot(yn, LagrangeBasis(:, j));
end

Example: Consider the following data points

\[\begin{align*}\{(1,-3),~(2,0),~(3,-1),~(4,2),~(5,1),~(6,4)\}\end{align*}\]

and construct an interpolating polynomial through these data using Lagrange polynomials.

Solution:

Calculating \(L_{j}\)s with \(n = 3\) for \(j=0,1,2,3\) (for brevity, the hand computation uses only the first four data points; the interpolant through all six points is computed numerically below):

\[\begin{equation*}{L_{j}(x)={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}},}\end{equation*}\]

\[\begin{align*}L_{0}(x)&=\frac {(x-x_{1})}{(x_{0}-x_{1})}\frac {(x-x_{2})}{(x_{0}-x_{2})}\frac {(x-x_{3})}{(x_{0}-x_{3})}=\frac {(x- 2)}{(1-2)}\frac {(x- 3)}{(1-3)}\frac {(x- 4)}{(1-4)}\\&=-\frac{\left(x-2\right)\,\left(x-3\right)\,\left(x-4\right)}{6} = -\frac{x^3}{6}+\frac{3\,x^2}{2}-\frac{13\,x}{3}+4.\end{align*}\]

Similarly, we can calculate all \(L_{j}\)s:

\[\begin{align*}L_{0}(x)&=-\frac{\left(x-2\right)\,\left(x-3\right)\,\left(x-4\right)}{6} = -\frac{x^3}{6}+\frac{3\,x^2}{2}-\frac{13\,x}{3}+4,\\L_{1}(x)&=\frac{\left(x-1\right)\,\left(x-3\right)\,\left(x-4\right)}{2}=\frac{x^3}{2}-4\,x^2+\frac{19\,x}{2}-6,\\L_{2}(x)& = -\frac{\left(x-1\right)\,\left(x-2\right)\,\left(x-4\right)}{2}=-\frac{x^3}{2}+\frac{7\,x^2}{2}-7\,x+4,\\L_{3}(x)&=\frac{\left(x-1\right)\,\left(x-2\right)\,\left(x-3\right)}{6}=\frac{x^3}{6}-x^2+\frac{11\,x}{6}-1.\end{align*}\]

Moreover,

\[\begin{equation*}L(x)= y_{0} L_{0}(x) + y_{1} L_{1}(x) + y_{2} L_{2}(x) + y_{3} L_{3}(x)=\frac{4\,x^3}{3}-10\,x^2+\frac{71\,x}{3}-18.\end{equation*}\]
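This hand computation can be verified numerically (a quick sketch; np.polyval takes the coefficients in descending powers):

import numpy as np

c = np.array([4/3, -10, 71/3, -18])  # coefficients of L(x), descending powers
print(np.polyval(c, [1, 2, 3, 4]))   # reproduces y0..y3: [-3, 0, -1, 2]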

# This part is used for producing tables and figures
import sys
sys.path.insert(0, '..')
import hd_tools as hd
import numpy as np
from hd_Interpolation_Algorithms import LagrangePoly

# A set of distinct points
xn = np.array([1, 2, 3, 4, 5, 6])
yn = np.array([-3, 0, -1, 2, 1, 4])
x = np.linspace(xn.min() - 1, xn.max() + 1, 100)
y = LagrangePoly(x, xn, yn)

# Plots
hd.interpolation_method_plot(xn, yn, x, y, title='Lagrange Interpolating Polynomials')

[Figure: 'Lagrange Interpolating Polynomials' — the resulting interpolant plotted through the six data points.]

Example: Assume that \(x_{0}<x_{1}\) are distinct numbers in \([a,~b]\) and \(f \in C^{2} [a,~b]\). Also, assume that the Lagrange interpolating polynomial \(L(x)\) is built from only the two data points \(\{(x_{0},y_{0}),~(x_{1},y_{1})\}\). Then, using the error formula for Lagrange interpolation, show that

(3.22)#\[\begin{equation}|f(x) - L(x)| \leq \frac{h^2}{8}M\end{equation}\]

where \( \displaystyle{M = \max_{x_0\leq x\leq x_1}} |f''(x)|\) and \(h = x_1 - x_0\).

Solution: The Lagrange interpolating polynomial through the two data points \(\{(x_{0},y_{0}),~(x_{1},y_{1})\}\) is

(3.23)#\[\begin{equation}L(x) = y_{0}L_{0}(x) + y_{1}L_{1}(x) =\frac{(x - x_{1})}{(x_{0} - x_{1})} y_{0} + \frac{(x - x_{0})}{(x_{1} - x_{0})} y_{1}.\end{equation}\]
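For instance, with the illustrative pair of points \((x_0, y_0) = (1, 2)\) and \((x_1, y_1) = (3, 8)\) (ours, not from the text),

\[\begin{equation*}L(2) = \frac{2 - 3}{1 - 3}\cdot 2 + \frac{2 - 1}{3 - 1}\cdot 8 = 1 + 4 = 5.\end{equation*}\]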

Specializing the error formula (3.17) to these two data points gives

(3.24)#\[\begin{equation}f(x) - L(x) = \frac{(x - x_{0})(x - x_{1})}{2!} f''(\xi(x)),\end{equation}\]

where \(\xi(x)\) is an unknown point between \(x_{0}\) and \(x_{1}\). Hence

(3.25)#\[\begin{equation}|f(x) - L(x)| = \left|\frac{(x - x_{0})(x - x_{1})}{2!}\right| \left| f''(\xi(x)) \right|.\end{equation}\]

Because \(\xi(x)\) is unknown, the value of \(f''(\xi(x))\) cannot be calculated precisely. We can, however, bound the error by bounding \(|f''(\xi(x))|\) from above. Let

(3.26)#\[\begin{equation}M = \max_{x_0\leq x\leq x_1}~|f''(x)|.\end{equation}\]

Then \(|f''(\xi(x))|\) is bounded on \([x_{0},~x_{1}]\) as follows,

(3.27)#\[\begin{equation}|f''(\xi(x))| \leq M.\end{equation}\]

Therefore,

(3.28)#\[\begin{equation}|f(x) - L(x)| \leq \frac{M}{2} |(x - x_{0})(x - x_{1})|.\end{equation}\]

Note that this inequality can also be obtained from the Corollary above.

Observe that the maximum of \(|g(x)|\), where \(g(x) = (x - x_{0})(x - x_{1})\), on \([x_{0},~x_{1}]\) occurs at the critical point \(x = \dfrac{x_{0} + x_{1}}{2}\), since

(3.29)#\[\begin{equation}g'(x) = (x - x_{0}) + (x - x_{1}),\end{equation}\]

and it follows from setting \(g'(x) = 0\) that

(3.30)#\[\begin{equation}g'(x) = 0\quad \Rightarrow \quad(x - x_{0}) + (x - x_{1}) = 0\quad \Rightarrow \quad x = \dfrac{x_{0} + x_{1}}{2}.\end{equation}\]

Thus the maximum of \(|g(x)| = |(x - x_{0})(x - x_{1})|\) on \([x_{0},~x_{1}]\) is attained at this midpoint, where

(3.31)#\[\begin{equation}\max_{x_0\leq x\leq x_1}~|(x - x_{0})(x - x_{1})| = \left|\frac{x_{1} - x_{0}}{2}\right|\left|\frac{x_{0} - x_{1}}{2}\right| = \frac{(x_{1} - x_{0})^2}{4}.\end{equation}\]

As a result, for each \(x \in [x_{0},~x_{1}]\), we obtain

(3.32)#\[\begin{equation}|f(x) - L(x)| \leq \frac{(x_{1} - x_{0})^2}{8}M.\end{equation}\]

Now, since \(h = x_1 - x_0\),

(3.33)#\[\begin{equation}|f(x) - L(x)| \leq \frac{h^2}{8}M.\end{equation}\]
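As a numerical illustration (an example of ours, not from the text): for \(f(x) = \sin x\) on \([x_0, x_1] = [0, 0.5]\) we may take \(M = 1\), since \(|f''(x)| = |\sin x| \leq 1\), and compare the observed interpolation error with the \(h^2 M / 8\) bound:

import numpy as np

# Linear interpolation of f(x) = sin(x) on [0, 0.5]; |f''| = |sin| <= 1 there.
x0, x1 = 0.0, 0.5
h = x1 - x0
M = 1.0                                  # a valid (if loose) bound on |f''|

x = np.linspace(x0, x1, 1001)
L = np.sin(x0) + (np.sin(x1) - np.sin(x0)) / h * (x - x0)  # line through the endpoints
err = np.max(np.abs(np.sin(x) - L))

print(err)           # observed maximum error, about 7.7e-03
print(h**2 / 8 * M)  # theoretical bound: 3.125e-02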

Example: Consider the following table containing data for \(f(x) = e^{x} + \sin(2x)\) on \([0.1,~0.5]\):

\(x_i\)   0.1000   0.3000   0.5000
\(y_i\)   1.3038   1.9145   2.4902

  • a. Use Lagrange polynomials to interpolate the points.

  • b. Find the approximation of \(f(0.4)\) using the Lagrange interpolated polynomial and estimate an error bound and absolute error for the approximation.

Solution:

a. Calculating \(L_{j}\)s with \(n = 2\) for \(j=0,1,2\):

\[\begin{equation*}{L_{j}(x)={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}},}\end{equation*}\]

\[\begin{align*}L_{0}(x)&= \frac {(x-x_{1})}{(x_{0}-x_{1})}\frac {(x-x_{2})}{(x_{0}-x_{2})}= \frac {(x-0.3)}{(0.1-0.3)}\frac {(x-0.5)}{(0.1-0.5)}\\ & = \frac{5\,\left(2\,x-1\right)\,\left(10\,x-3\right)}{8}= \frac{25\,x^2}{2}-10\,x+\frac{15}{8},\\L_{1}(x)&= \frac {(x-x_{0})}{(x_{1}-x_{0})}\frac {(x-x_{2})}{(x_{1}-x_{2})}= \frac {(x-0.1)}{(0.3-0.1)}\frac {(x-0.5)}{(0.3-0.5)}\\ &= -\frac{5\,\left(2\,x-1\right)\,\left(10\,x-1\right)}{4} = -25\,x^2+15\,x-\frac{5}{4},\\L_{2}(x)&= \frac {(x-x_{0})}{(x_{2}-x_{0})}\frac {(x-x_{1})}{(x_{2}-x_{1})}= \frac {(x-0.1)}{(0.5-0.1)}\frac {(x-0.3)}{(0.5-0.3)}\\ & =\frac{\left(10\,x-1\right)\,\left(10\,x-3\right)}{8} = \frac{25\,x^2}{2}-5\,x+\frac{3}{8}.\end{align*}\]

Observe that in the above computations, we multiplied both numerator and denominator by 10 for convenience. Moreover,

\[\begin{equation*}L(x)= y_{0} L_{0}(x) + y_{1} L_{1}(x) + y_{2} L_{2}(x) = -0.437\,x^2+3.228\,x+0.985.\end{equation*}\]

b. First,

\[\begin{equation*}L(0.4) = 2.206718,\end{equation*}\]

while \(f(0.4) = 2.209181\). Therefore, the absolute error here is

(3.34)#\[\begin{align}|L(0.4) - f(0.4)| = 2.462763\times 10^{-03}.\end{align}\]

Now, to compute an error bound for the approximation, we use the following formula:

(3.35)#\[\begin{align}|f(x) - L(x)| &= \frac{|f^{(3)}(\xi)|}{3!} \left| (x - 0.1) (x - 0.3) (x - 0.5) \right| \notag\\& \leq \frac{M}{3!} \left| (x - 0.1) (x - 0.3) (x - 0.5) \right|.\end{align}\]

Taking the third derivative of the given function, we get

\[\begin{align*}f'(x) & = 2\,\cos\left(2\,x\right)+{\mathrm{e}}^x,\\f''(x) & = {\mathrm{e}}^x-4\,\sin\left(2\,x\right),\\f'''(x) & = {\mathrm{e}}^x-8\,\cos\left(2\,x\right).\end{align*}\]

Since \(f''''(x) = {\mathrm{e}}^x + 16\,\sin(2x) > 0\) on \((0.1, 0.5)\), \(f'''(x)\) is increasing there; moreover, \(f'''\) is negative throughout the interval, so

\[\begin{equation*}|f^{(3)}(\xi(x))| = |{\mathrm{e}}^x-8\,\cos\left(2\,x\right)|,\qquad\xi(x) \in (0.1, 0.5),\end{equation*}\]

takes its maximum at the left endpoint \(x = 0.1\), and

\[\begin{equation*}M = \max_{0.1 \leq x \leq 0.5} |f^{(3)}(x)| = \left|{\mathrm{e}}^{0.1}-8\,\cos\left(2\,(0.1)\right)\right| = 6.7354.\end{equation*}\]

Therefore, the error bound at \(x = 0.4\):

(3.36)#\[\begin{equation}|f(0.4) - L(0.4)| \leq \frac{6.7354}{3!} \left| (0.4 - 0.1) (0.4 - 0.3) (0.4 - 0.5) \right| = 3.367700 \times 10^{-03}.\end{equation}\]
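These numbers can be checked with a short script (a sketch; it recomputes the interpolant directly from the three nodes):

import numpy as np

f = lambda x: np.exp(x) + np.sin(2 * x)
fppp = lambda x: np.exp(x) - 8 * np.cos(2 * x)  # f'''(x)

xn = np.array([0.1, 0.3, 0.5])
yn = f(xn)
x = 0.4

# Evaluate the Lagrange interpolant at x = 0.4
L = sum(yn[i] * np.prod([(x - xn[j]) / (xn[i] - xn[j])
                         for j in range(3) if j != i])
        for i in range(3))

print(abs(f(x) - L))                                   # absolute error, ~2.4628e-03
M = abs(fppp(0.1))                                     # max of |f'''| on [0.1, 0.5]
print(M / 6 * abs((x - 0.1) * (x - 0.3) * (x - 0.5)))  # error bound, ~3.3677e-03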

As expected, the error bound is larger than the actual absolute error at \(x = 0.4\): the bound reflects the worst-case error, not the exact one.

