Interpolation of sparse pointwise and integral geographical Data over Time

Martin Papke*
 * Universität Koblenz-Landau, Germany

Abstract

Geographical data, for example distribution densities of species, are often given both as pointwise and as integral values. We aim to give two interpolation algorithms which can handle both data types together. As these data tend to change over time, we extend our algorithms to allow a timestamp to be attached to the data and to give newer data more weight in the interpolation. This allows us to model time dependent functions and data. We compare both approaches and discuss their advantages and disadvantages. The possibility of giving both pointwise and integral data extends basic interpolation ideas by allowing for integral data points.

Keywords: Interpolation, Applied Mathematics, Numerical Mathematics

Introduction
Let $$\mathbb{T}$$ be a time set. Assume we have collected data about an unknown mapping $$(f_t)_{t\in \mathbb{T}}$$ that changes over time, where $$f_t$$ is the mapping at time $$t\in \mathbb{T}$$. A collected datum $$(d,t)$$ indicates that $$d$$ was collected at time $$t\in \mathbb{T}$$. As time set we use $$\mathbb{T}:=[0,+\infty)=\mathbb{R}^{+}_0$$, where $$t_0 \in \mathbb{T}$$ denotes the current time stamp. All $$t > t_0$$ are time stamps in the future. Collected data $$(d,t)$$ always have a time stamp $$t \leq t_0$$.

For the following sections we use the time index $$t \in \mathbb{T}$$ as the last argument of the function $$f$$ and write $$f_t(x)$$ for $$f(x,t)$$. If $$f_t$$ does not change over time, we have $$f_{t_1}=f_{t_2}$$ for all $$t_1,t_2 \in \mathbb{T}$$. If the domain of $$f$$ does not contain the time set $$\mathbb{T}$$, the function $$f$$ is regarded as static in time.

We propose two algorithms to handle sparse pointwise and integral data as they arise in geographical problems. The first algorithm is based on a least squares idea, the second one on convex combinations.

Setting and Notations
We consider a triangulation $$T_\Delta = \{\Delta_i: i = 1, \ldots, n\}$$ of a region $$\Omega \subseteq \mathbb R^2$$. We denote the set of nodes of $$T_\Delta$$ by $$\{p_j: j = 1, \ldots, m\}$$. For each triangle $$\Delta_i$$, we denote by $$j(i,\alpha)$$, $$\alpha = 1,2,3$$ the indices of $$\Delta_i$$'s nodes, that is, the nodes of $$\Delta_i$$ are $$p_{j(i,1)}$$, $$p_{j(i,2)}$$ and $$p_{j(i,3)}$$. For each node $$p_j$$, we denote by $$p_{n(j,\beta)}$$, $$\beta = 1, \ldots, N(j)$$, the neighbours of $$p_j$$, that is, the nodes which are connected to $$p_j$$ by an edge.
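To make the neighbour indexing $$n(j,\beta)$$ concrete, the following sketch builds the neighbour lists from a triangle list. It is written in Python for illustration (the accompanying source files are MATLAB), and the two-triangle mesh of the unit square is a made-up example:

```python
import numpy as np

# Illustrative mesh: the unit square split into two triangles along a diagonal.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
triangles = [(0, 1, 2), (0, 2, 3)]  # vertex indices j(i, alpha), 0-based

def neighbours(triangles, m):
    """For each node j, collect the nodes n(j, beta) connected to it by an edge."""
    nbrs = [set() for _ in range(m)]
    for (a, b, c) in triangles:
        for u, v in [(a, b), (b, c), (c, a)]:
            nbrs[u].add(v)
            nbrs[v].add(u)
    return [sorted(s) for s in nbrs]

nbrs = neighbours(triangles, len(nodes))
```

Here $$N(j)$$ is simply the length of the $$j$$-th neighbour list.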

We are given two types of data for an unknown function $$f \colon \Omega \to \mathbb R$$. Firstly, point values $$y_k$$, which are measured values of $$f$$ at nodes $$p_{j(k)}$$. Secondly, we are given integrals $$I_\ell$$ of $$f$$ over triangles $$\Delta_{i(\ell)}$$.

Our aim is to construct a function $$f\colon \Omega \to \mathbb R$$ which interpolates the given values in the sense that $$f(p_{j(k)}) \approx y_k, k = 1, \ldots, K$$

$$\int_{\Delta_{i(\ell)}} f(x)\, dx \approx I_\ell, \ell = 1, \ldots, L $$ taking into account that our data are, for example, values of a density distribution of a species, and hence determine $$f$$ only locally.

Example: Single Point Data Collection
For example, a camera placed at a single point in a habitat automatically takes snapshots of all animals that pass this point. The aggregated data provides pointwise values of the density distribution of a species. Including the time index $$t\in \mathbb{T}$$, the collected data provides pointwise data of the density distribution for every month.

Combine Different Data Collections
Two different data collections at the same time $$t$$ for a given population density could indicate different population estimates, which can arise for example when using the capture-recapture method for estimating populations.

The least squares approach
The first algorithm we propose to tackle this kind of problem is a least squares approach, which allows for more than one data point at a fixed site of our lattice.

As geographical data tend to be time dependent, we want to add a time stamp to each data point and consider a time dependent function $$f \colon \Omega \times [0,T] \to \mathbb R$$. We therefore replace the above equations by $$ f(p_{j(k)}, t_k) \approx y_k, k = 1, \ldots, K $$

$$\int_{\Delta_{i(\ell)}} f(x, t_\ell)\, dx \approx I_\ell, \ell = 1, \ldots, L $$

At a given time $$t$$ we will only consider values with a timestamp $$\le t$$, which allows live data to be considered. Since the importance of measurements decays over time, we attach to each equation a weight, given for a data point with time stamp $$\tau$$ by $$\phi(t-\tau)$$, where $$\phi$$ denotes the weighting function $$\phi(s) := e^{-\eta s}$$ and $$\eta \ge 0$$ denotes the speed of decay. $$\eta$$ has to be chosen in a way that adapts to the given problem; a possible choice could be an estimate of the time derivative of the model function $$f$$, because if $$f$$ fluctuates strongly in time, the importance of past values decays more rapidly.
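The weighting function can be sketched as follows; the default value $$\eta = 1$$ is an arbitrary illustration, not a recommendation:

```python
import numpy as np

def phi(s, eta=1.0):
    """Exponential decay weight phi(s) = exp(-eta * s) for the age s = t - tau >= 0
    of a measurement; a larger eta makes older data lose influence faster."""
    return np.exp(-eta * s)
```

With $$\eta = 0$$ all measurements are weighted equally, which recovers the static case.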

Constructing a linear least squares system
For a given time $$t$$ we construct an overdetermined linear system of equations for the node values $$x_j := f(p_j, t)$$ of our function $$f$$, which we solve in a weighted least squares sense. That is, we transform both point and integral data into equations for the values of $$f$$ at the nodes $$p_j$$. Each given datum gives rise to one linear equation. Another set of equations models the smoothness of our function $$f$$.

Equations for the point data
For each given point datum with $$t_k \le t$$ - future measurements are not considered as giving information for the current time - we add the equation $$ x_{j(k)} = y_k $$ with weight $$\phi(t - t_k)$$. The weight is chosen such that it attains its maximal value when $$t = t_k$$ and decays over time.

Equations for the volume data
To give an equation for a given volume datum we first have to approximate the integral by point values of the function $$f$$. We use the two-dimensional variant of the trapezoidal rule, that is, we approximate the integral of a function $$g$$ over a triangle $$\Delta_i$$ by the volume of the trapezoidal body determined by the values of $$g$$ at the nodes of $$\Delta_i$$, hence $$ \int_{\Delta_i} g(x)\, dx \approx \frac 13 |\Delta_i|\sum_{\alpha=1}^3 g(p_{j(i,\alpha)}) $$ where $$|\Delta_i|$$ denotes the area of the triangle.
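A minimal Python sketch of this quadrature rule, with vertex coordinates given as pairs; the helper names are ours, not the paper's:

```python
def tri_area(p1, p2, p3):
    """Area |Delta| of a triangle via the cross product of two edge vectors."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))

def trapezoidal_integral(p1, p2, p3, g):
    """Approximate the integral of g over the triangle by (1/3)|Delta| times
    the sum of the vertex values, as in the formula above."""
    return tri_area(p1, p2, p3) / 3.0 * (g(p1) + g(p2) + g(p3))
```

The rule is exact for affine functions; for $$g(x_1,x_2) = x_1$$ on the triangle with vertices $$(0,0), (1,0), (0,1)$$ it reproduces the exact value $$1/6$$.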

Hence, we get the equation $$ \frac 13|\Delta_{i(\ell)}| \cdot \sum_{\alpha=1}^3 x_{p(i(\ell),\alpha)} = I_\ell$$ with weight $$\phi(t-t_\ell)$$,

where the weight is chosen exactly as in the point value case.

Smoothness equations
If we have only a small set of data points, our system will be underdetermined and the calculated function may have strong fluctuations. To prevent that, we add for each node the equation $$  \frac{1}{N(j)}\sum_{\beta=1}^{N(j)} x_{n(j,\beta)} - x_j = 0,$$ with weight $$0.1$$,

stating that we want the value at each node to be approximately the mean of its neighbouring values.

The resulting linear system $$Ax = b$$ is solved in a weighted least squares sense, giving us point values for $$f$$ which we interpolate by linear functions.

Implementation
We represent the triangulation by a structure consisting of the nodes, given by their coordinates, and the triangles, given by the indices of their vertices. From that we calculate the neighbours of each node and store them in the lattice structure as well. Given the data, we assemble for each time $$t$$ a matrix $$A$$ and a right hand side $$b$$ collecting the equations.
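A possible assembly routine, sketched in Python with NumPy; the data layout and function names are illustrative assumptions, the paper's own implementation is in the MATLAB files listed at the end:

```python
import numpy as np

def assemble_and_solve(nodes, triangles, nbrs, point_data, integral_data, t, eta=1.0):
    """Assemble the weighted overdetermined system and solve it by least squares.

    point_data:    list of (node index j, value y_k, time t_k)
    integral_data: list of (triangle index i, integral I_l, time t_l)
    """
    m = len(nodes)
    rows, rhs = [], []

    def area(tri):
        (x1, y1), (x2, y2), (x3, y3) = (nodes[v] for v in tri)
        return 0.5 * abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))

    # Point equations x_{j(k)} = y_k, weighted by phi(t - t_k).
    for j, y, tk in point_data:
        if tk <= t:  # future measurements are not considered
            w = np.exp(-eta * (t - tk))
            row = np.zeros(m)
            row[j] = 1.0
            rows.append(w * row)
            rhs.append(w * y)

    # Integral equations (|Delta|/3) * sum of vertex values = I_l.
    for i, I, tl in integral_data:
        if tl <= t:
            w = np.exp(-eta * (t - tl))
            row = np.zeros(m)
            for v in triangles[i]:
                row[v] = area(triangles[i]) / 3.0
            rows.append(w * row)
            rhs.append(w * I)

    # Smoothness equations: mean of neighbours minus node value, weight 0.1.
    for j in range(m):
        row = np.zeros(m)
        row[j] = -1.0
        for v in nbrs[j]:
            row[v] = 1.0 / len(nbrs[j])
        rows.append(0.1 * row)
        rhs.append(0.0)

    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x
```

Multiplying each row and its right hand side by the weight before solving turns the ordinary least squares solve into the weighted one described above.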

The convex combination approach
As a second possibility to attack our problem, we propose an algorithm that handles point and integral data as distinct interpolation problems and afterwards takes a convex combination of the two solutions to obtain a solution to the full problem.

The pointwise interpolation
At each given point, we first combine all data we have at this point into one value by taking a convex combination of the measured values, where each value is weighted with $$\phi(t-t_k)$$ as in the first algorithm. That is, for a node $$p_j$$ in our lattice, we look at all values given for that point with $$t_k \le t$$ and form the average

$$ q_j := \frac{\sum_{k: t_k \le t,\ j(k) = j} \phi(t-t_k)\,y_k}{\sum_{k : t_k \le t,\ j(k) = j} \phi(t-t_k) } $$

of the values measured at this particular point.
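A sketch of this weighted average for a single node, with illustrative names:

```python
import numpy as np

def weighted_point_value(values, times, t, eta=1.0):
    """Combine all measurements (y_k, t_k) at one node into q_j, a convex
    combination with weights phi(t - t_k) = exp(-eta * (t - t_k))."""
    data = [(y, tk) for y, tk in zip(values, times) if tk <= t]
    w = np.array([np.exp(-eta * (t - tk)) for _, tk in data])
    y = np.array([y for y, _ in data])
    return float(np.dot(w, y) / w.sum())
```

Since the weights are positive and normalised, $$q_j$$ always lies between the smallest and the largest measured value, and newer measurements pull it more strongly towards themselves.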

Interpolation of the integral values
In the same way as we interpolated the point values, we now interpolate the given integral values. Hence, we first assign to each triangle of our lattice a unique integral value by weighting the given ones. Afterwards, we interpolate these values in a linear sense, assigning the value divided by the area of the particular cell to the center of gravity of each triangle. That is, we approximate the integral by means of the midpoint rule, i.e.

$$ \int_\Delta f(x) \, dx \approx f(p_\Delta) \cdot |\Delta| $$

where $$p_\Delta$$ denotes $$\Delta$$'s center of gravity.

The value that we want the integral over $$\Delta_i$$ to have is calculated exactly as in the point case, that is, we assign

$$ I_{\Delta_i} := \frac{\sum_{\ell: t_\ell \le t,\ i(\ell) = i} \phi(t-t_\ell)\,I_\ell}{\sum_{\ell : t_\ell \le t,\ i(\ell) = i} \phi(t-t_\ell) } $$

Obtaining the solution to the full problem
Let $$f_p\colon \Omega \to \mathbb R$$ denote the interpolating function obtained in step one and $$f_i\colon \Omega \to \mathbb R$$ the interpolation of the integral values. We now set $$f:= \frac{n_p}{n_p + n_i} f_p + \frac{n_i}{n_p + n_i} f_i$$, where $$n_p$$ and $$n_i$$ are the numbers of given point and integral values, respectively.
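The combination step is straightforward; a sketch, with hypothetical interpolant callables standing in for $$f_p$$ and $$f_i$$:

```python
def combine(f_p, f_i, n_p, n_i):
    """Convex combination of the pointwise interpolant f_p and the integral
    interpolant f_i, weighted by the respective numbers of data points."""
    def f(x):
        return (n_p * f_p(x) + n_i * f_i(x)) / (n_p + n_i)
    return f
```

Weighting by the data counts means that whichever data type is more abundant dominates the combined interpolant.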

Another idea that did not work out
As a further possibility to attack our problem, we tried an algorithm that starts by interpolating the point data and then matches the integral data by a refinement of the lattice. The value at the center of gravity of each cell is chosen in a way to match the integral data.

The pointwise interpolation and the interpolation of the integral values are carried out exactly as in the convex combination approach described above.

Full solution
We then choose a function $$f_3$$ that interpolates both the point and the integral values. This did not lead to a good result, because the fluctuation of the function $$f_3$$ was so large that it could not be regarded as an approximation of a smooth function.

Examples
As an example we generate, for a simple triangulation of $$\Omega = [0,1]^2$$, 4000 random point data and 1000 random volume data over 10 seconds. As timestep we choose $$\tau=0.1$$, that is, we interpolate our data every $$0.1$$ seconds. As random data, we start with a simple function, here
 * $$f_t(x) = f_t(x_1,x_2) = \sin(2\pi (x_1+t)) + \cos(\pi (x_2+t))$$,

that creates a diagonal shift of the graph of $$f_0$$ over time. Furthermore, we add some normally distributed noise. The points and time values to interpolate are also generated randomly.
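The generation of the point data can be sketched as follows; the noise level and the random seed are arbitrary choices for illustration:

```python
import numpy as np

def f(x1, x2, t):
    """Model function whose graph shifts diagonally over time."""
    return np.sin(2 * np.pi * (x1 + t)) + np.cos(np.pi * (x2 + t))

rng = np.random.default_rng(0)  # fixed seed for reproducibility

# 4000 random point data over 10 seconds, perturbed by Gaussian noise.
ts = rng.uniform(0.0, 10.0, size=4000)
xs = rng.uniform(0.0, 1.0, size=(4000, 2))
ys = f(xs[:, 0], xs[:, 1], ts) + rng.normal(scale=0.05, size=4000)
```

The volume data can be generated analogously by integrating $$f_t$$ (or applying a quadrature rule) over randomly chosen triangles.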


Conclusion
Extending the underlying interpolation ideas, we allow for integral values to be given. Representing the data points as a weighted least squares system would allow us to use an iterative least squares solver to reuse the calculations we have already done, for example the solver LSQR.

Source code files

 * Interpolation_of_sparse_pointwise_and_integral_geographical_Data_over_Time/spline2.m
 * Interpolation_of_sparse_pointwise_and_integral_geographical_Data_over_Time/splinetest2.m
 * Interpolation_of_sparse_pointwise_and_integral_geographical_Data_over_Time/assemble_lattice.m
 * Interpolation_of_sparse_pointwise_and_integral_geographical_Data_over_Time/neighbours.m
 * Interpolation_of_sparse_pointwise_and_integral_geographical_Data_over_Time/point_interpolate.m