MADALGO summer school on geometric data structures (2).

It's raining today. As usual.
Yesterday Sariel Har-Peled started his series of talks on data structures for
geometric approximation. He presented various topics such as grids, quadtrees,
and a technique for approximately representing the distances between points in
space (the well-separated pair decomposition).
Speaking of grids, it is interesting how this data structure can be used to find the
closest pair in a set $P$ of $n$ points in the plane. The data structure can
determine in linear time, given $P$ and a real number $\alpha$, whether the closest-pair distance $CP(P)$ is equal to, smaller than, or larger than $\alpha$.
A grid with cells of size $\alpha \times \alpha$ is built on top of $P$. The data structure
maintains, for each cell, a linked list of the points falling in that cell. This allows
constant-time insertions (using hashing), and each time a new point is
added it can be checked in constant time whether the smallest distance between points drops below $\alpha$: only the $3 \times 3$ block of cells around the new point needs to be inspected, and each such cell holds $O(1)$ points as long as all pairwise distances are at least $\alpha$.
Since $n$ insertions are needed, the result follows.
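The decider above can be sketched in a few lines. This is my own minimal version, not code from the lecture; it assumes hashing (here a Python dict) costs $O(1)$ and that points are given as coordinate pairs.

```python
import math
from collections import defaultdict

def grid_decider(points, alpha):
    """Decide whether CP(points) is smaller than, equal to, or larger
    than alpha, returning "<", "==", or ">" (a sketch of the grid
    technique described above)."""
    grid = defaultdict(list)   # cell index -> points in that cell
    best = math.inf            # smallest pairwise distance seen so far
    for (x, y) in points:
        cx, cy = int(math.floor(x / alpha)), int(math.floor(y / alpha))
        # Only the 3x3 block of cells around (cx, cy) can contain a
        # point within distance alpha of the new point.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for (px, py) in grid[(cx + dx, cy + dy)]:
                    best = min(best, math.hypot(x - px, y - py))
                    if best < alpha:
                        return "<"   # stop early: CP(P) < alpha
        grid[(cx, cy)].append((x, y))
    return "==" if best == alpha else ">"
```

Note the early exit: the moment some pair gets closer than $\alpha$, the answer is known and the scan can stop, which is what keeps each insertion constant time.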
Now, using this data structure inside a randomized algorithm, the closest pair of a set
can be found in expected linear time, by bounding the expected number of times the
closest pair changes as the points are inserted in random order (note that the model of
computation assumes constant-time hashing, true randomness, and a constant-time floor
function).
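The randomized algorithm can be sketched as follows. This is my own illustration of the idea (insert in random order, keep the grid cell size equal to the current closest-pair distance, and rebuild the grid whenever that distance shrinks); it assumes the points are distinct.

```python
import math
import random
from collections import defaultdict

def closest_pair(points):
    """Expected-linear-time closest-pair distance via grids.
    A sketch under the model above (O(1) hashing and floor);
    assumes at least two distinct points."""
    pts = points[:]
    random.shuffle(pts)                 # random insertion order

    def cell(p, r):
        return (int(math.floor(p[0] / r)), int(math.floor(p[1] / r)))

    def build(prefix, r):
        g = defaultdict(list)
        for p in prefix:
            g[cell(p, r)].append(p)
        return g

    r = math.dist(pts[0], pts[1])       # closest pair of the prefix so far
    grid = build(pts[:2], r)
    for i in range(2, len(pts)):
        p = pts[i]
        cx, cy = cell(p, r)
        # Any point within distance < r of p lies in the 3x3 block
        # of cells around p's cell.
        d = min((math.dist(p, q)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 for q in grid[(cx + dx, cy + dy)]), default=math.inf)
        if d < r:                       # closest pair shrank: rebuild grid
            r = d
            grid = build(pts[:i + 1], r)
        else:
            grid[(cx, cy)].append(p)
    return r
```

The expensive step is the rebuild, which costs time linear in the prefix; the random order makes the expected number of rebuilds small enough that the total expected time stays linear.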

Today also saw the first lectures by John Iacono on the point
location problem: given a space and a partition of it into polygons, find the
polygon containing a query point.
One technique for attacking this problem is Kirkpatrick's method. Let's look at the
method in $\mathbb{R}^2$, where we have a triangulation of the space. The
technique relies on repeatedly removing an independent set of low-degree vertices. This
operation breaks the triangulation, so the resulting holes have to be retriangulated
(which can be done in constant time per hole in this case). We then proceed in the
same way on the resulting instance, until we end up with a single, big triangle.
To locate a point, we start from that single triangle and replay the construction
in reverse, keeping track at each step of the triangle the point lies in (this
can be done easily since the data structure maintains pointers between triangles at
consecutive levels). It turns out that the number of steps needed to locate
a point is $O(\log n)$, where $n$ is the number of vertices, while
building the structure (setting up the pointers mentioned above)
takes $O(n)$ time. Moreover the algorithm is completely deterministic.
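The query phase boils down to a walk in a DAG of triangles, from the single bounding triangle down to a leaf of the original triangulation. A minimal sketch (the `Node` class and function names are mine; each node stores $O(1)$ children, the triangles one level down that overlap it):

```python
class Node:
    def __init__(self, tri, children=()):
        self.tri = tri                    # triangle: three (x, y) vertices
        self.children = list(children)    # overlapping triangles one level down

def cross(a, b, p):
    # Signed area test: >0 if p is left of the directed edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def contains(tri, p):
    """Point-in-triangle test, accepting either vertex orientation."""
    a, b, c = tri
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def locate(root, p):
    """Walk Kirkpatrick's hierarchy: O(log n) levels, O(1) work per level."""
    node = root
    while node.children:
        node = next(c for c in node.children if contains(c.tri, p))
    return node
```

Each level of the walk tests only a constant number of candidate triangles, and the independent-set removal guarantees there are $O(\log n)$ levels, which gives the query bound.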

Tomorrow is the last day of the summer school; then I'm heading back to København.