11 |
Algorithms for Optimizing Search Schedules in a Polygon. Bahun, Stephen. January 2008.
In the area of motion planning, considerable work has been done on guarding
problems, where "guards", modelled as points, must guard a polygonal
space from "intruders". Different variants
of this problem involve varying a number of factors. The guards performing
the search may vary in terms of their number, their mobility, and their
range of vision. The model of intruders may or may not allow them to
move. The polygon being searched may have a specified starting point,
a specified ending point, or neither of these. The typical question asked
about one of these problems is whether or not certain polygons can be
searched under a particular guarding paradigm defined by the types
of guards and intruders.
In this thesis, we focus on two cases of a chain of guards searching
a room (polygon with a specific starting point) for mobile intruders.
The intruders must never be allowed to escape through the door undetected.
In the case of the two guard problem, the guards must start at the door
point and move in opposite directions along the boundary of the
polygon, never crossing the door point. At all times, the
guards must be able to see each other. The search is complete once both
guards occupy the same point elsewhere on the polygon boundary. In the case of
a chain of three guards, consecutive guards in the chain must always
be visible. Again, the search starts at the door point, and the outer
guards of the chain must move from the door in opposite directions.
These outer guards must always remain on the boundary of the polygon.
The search is complete once the chain lies entirely on a portion of
the polygon boundary not containing the door point.
Determining whether a polygon can be searched is a problem in the area
of visibility in polygons; beyond that, our work is related
to the area of planning algorithms. We seek optimal schedules that minimize
the distance or time required to complete the search. This is done
by finding shortest paths in visibility diagrams that indicate valid
positions for the guards. In the case of
the two-guard room search, we are able to find the shortest distance
schedule and the quickest schedule. The shortest distance schedule
is found in O(n^2) time by solving an L_1 shortest path problem
among curved obstacles in two dimensions. The quickest search schedule is
found in O(n^4) time by solving an L_infinity shortest path
problem among curved obstacles in two dimensions.
For the chain of three guards, a search schedule minimizing the total
distance travelled by the outer guards is found in O(n^6) time by
solving an L_1 shortest path problem among curved obstacles in two dimensions.
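As an informal illustration (not an algorithm from the thesis), the sketch below shows the basic feasibility test underlying such visibility diagrams: a pair of guard positions is admissible only if the guards can see each other, i.e. the segment between them properly crosses no polygon edge. The function names are ours, and degenerate cases (the sight line grazing a vertex or running along an edge) are ignored.

```python
# Illustrative sketch only: mutual visibility of two guards in a simple polygon.
def cross(o, a, b):
    """2D cross product of the vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_intersect(p, q, r, s):
    """True if segments pq and rs cross at a point interior to both."""
    d1, d2 = cross(r, s, p), cross(r, s, q)
    d3, d4 = cross(p, q, r), cross(p, q, s)
    return d1 * d2 < 0 and d3 * d4 < 0

def guards_see_each_other(polygon, g1, g2):
    """polygon: vertex list of a simple polygon; g1, g2: guard positions on its
    boundary. Returns False if the sight line g1-g2 properly crosses an edge
    (and therefore leaves the polygon)."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if properly_intersect(g1, g2, a, b):
            return False
    return True
```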
|
12 |
Intuition in formal proof: a novel framework for combining mathematical tools. Meikle, Laura Isabel. January 2014.
This doctoral thesis addresses one major difficulty in formal proof: removing obstructions to intuition which hamper the proof endeavour. We investigate this in the context of formally verifying geometric algorithms using the theorem prover Isabelle, first proving the correctness of Graham’s Scan algorithm for finding convex hulls, then using the challenges we encountered to motivate the design of a general, modular framework for combining mathematical tools. We introduce our integration framework, the Prover’s Palette, describing in detail the guiding principles from software engineering and the key differentiator of our approach: emphasising the role of the user. Two integrations are described, using the framework to extend Eclipse Proof General so that the computer algebra systems QEPCAD and Maple are directly available in an Isabelle proof context, capable of running either fully automated or with user customisation. The versatility of the approach is illustrated by showing a variety of ways in which these tools can be used to streamline the theorem proving process, enriching the user’s intuition rather than disrupting it. The usefulness of our approach is then demonstrated through the formal verification of an algorithm for computing Delaunay triangulations in the Prover’s Palette.
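For reference, the sketch below is a standard, unverified rendering of the convex hull algorithm in question (the monotone-chain variant of Graham’s Scan). It only illustrates the algorithm whose correctness the thesis establishes; it is not the Isabelle formalisation itself.

```python
# Monotone-chain variant of Graham's Scan; informal reference only.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))              # lexicographic sort replaces the angular sort
    if len(pts) <= 2:
        return pts
    def half(seq):
        hull = []
        for p in seq:
            # pop while the last two hull points and p do not make a strict left turn
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]          # counter-clockwise hull, no repeated endpoint

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```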
|
13 |
Analysis-ready models of tortuous, tightly packed geometries. Edwards, John Martin. 22 September 2014.
In the brain, complex networks of cells called neurons enable human learning and memory. The topology and electrophysiological function of these networks are affected by the nano- and microscale geometries of neurons. Understanding these structure-function relationships is an important component of neuroscience, in which simulation plays a fundamental role. This thesis addresses four specific geometric problems raised by modeling and simulation of intricate neuronal structure and behavior at the nanoscale. The first two problems deal with 3D surface reconstruction: neurons are geometrically complex structures that are tightly intertwined in the brain, presenting great challenges in reconstruction. We present the first algorithm that reconstructs surface meshes from polygonal contours and provably guarantees watertight, manifold, and intersection-free forests of densely packed structures. Many algorithms produce surfaces from cross-sectional contours, but all either use heuristics to fit the surface or fail when presented with tortuous objects in close proximity. Our algorithm reconstructs surfaces that are not only internally correct, but also free of intersections with other reconstructed objects in the same region. We also present a novel surface remeshing algorithm suitable for models of neuronal dual space. The last two problems treated by this thesis deal with producing derivative models from surface meshes. A range of neuronal simulation methodologies exist, and we offer a framework to derive appropriate models for each from surface meshes. We present two specific algorithms that yield analysis-ready 1D cable models in one case and proposed "aligned cell" models in the other. In the creation of aligned cells we also present a novel adaptive distance transform. Finally, we present a software package called VolRoverN in which we have implemented many of our algorithms and which we expect will serve as a repository of important tools for the neuronal modeling community. Our algorithms are designed to meet the immediate needs of the neuroscience community, but as we show in this thesis, they are general and suitable for a variety of applications.
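As a small, self-contained illustration (ours, not code from the thesis or from VolRoverN), the check below captures one necessary condition behind the watertight and manifold guarantee: in a closed manifold triangle mesh, every undirected edge is shared by exactly two triangles.

```python
from collections import Counter

def is_closed_manifold(triangles):
    """triangles: list of (i, j, k) vertex-index triples. Checks that every
    undirected edge is used by exactly two triangles -- a necessary condition
    for a watertight, manifold surface (vertex-link checks are omitted)."""
    edge_count = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_count[frozenset((u, v))] += 1
    return all(count == 2 for count in edge_count.values())

tetrahedron = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_closed_manifold(tetrahedron))        # True
print(is_closed_manifold(tetrahedron[:-1]))   # False: boundary edges appear once
```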
|
14 |
Algorithms for Geometric Covering and Piercing Problems. Fraser, Robert. January 2012.
This thesis involves the study of a range of geometric covering and piercing problems, where the unifying thread is approximation using disks. While some of the problems addressed in this work are solved exactly with polynomial time algorithms, many problems are shown to be at least NP-hard. For the latter, approximation algorithms are the best that we can do in polynomial time assuming that P is not equal to NP.
One of the best known problems involving unit disks is the Discrete Unit Disk Cover (DUDC) problem, in which the input consists of a set of points P and a set of unit disks in the plane D, and the objective is to compute a subset of the disks of minimum cardinality which covers all of the points. Another perspective on the problem is to consider the centre points (denoted Q) of the disks D as an approximating set of points for P. An optimal solution to DUDC provides a minimal cardinality subset Q*, a subset of Q, so that each point in P is within unit distance of a point in Q*. In order to approximate the general DUDC problem, we also examine several restricted variants.
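For comparison only, and not one of the algorithms developed in the thesis, the sketch below applies the textbook greedy set-cover heuristic to DUDC. It carries only a logarithmic approximation guarantee, the kind of baseline the constant-factor algorithms discussed below improve upon; the function names and unit radius are assumptions of the sketch.

```python
import math

def greedy_dudc(points, centres, radius=1.0):
    """Greedy set cover applied to DUDC: repeatedly pick the disk that covers
    the most still-uncovered points. Logarithmic approximation only; shown
    purely as a baseline."""
    covers = [{i for i, p in enumerate(points) if math.dist(p, c) <= radius}
              for c in centres]
    uncovered, chosen = set(range(len(points))), []
    while uncovered:
        best = max(range(len(centres)), key=lambda j: len(covers[j] & uncovered))
        if not covers[best] & uncovered:
            raise ValueError("some point is not covered by any disk")
        chosen.append(best)
        uncovered -= covers[best]
    return chosen  # indices into `centres`

pts = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)]
ctrs = [(0.5, 0.0), (2.5, 0.0), (1.5, 0.0)]
print(greedy_dudc(pts, ctrs))   # [0, 1]
```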
In the Line-Separable Discrete Unit Disk Cover (LSDUDC) problem, P and Q are separated by a line l in the plane. We write l^- for the half-plane defined by l that contains P, and l^+ for the half-plane containing Q. LSDUDC may be solved exactly in O(m^2n) time using a greedy algorithm. We augment this result by describing a 2-approximate solution for the Assisted LSDUDC problem, where the union of all disks centred in l^+ covers all points in P, but we consider using disks centred in l^- as well to try to improve the solution.
Next, we describe the Within-Strip Discrete Unit Disk Cover (WSDUDC) problem, where P and Q are confined to a strip of the plane of height h. We show that this problem is NP-complete, and we provide a range of approximation algorithms for the problem with trade-offs between the approximation factor and running time.
We outline approximation algorithms for the general DUDC problem which make use of the algorithms for LSDUDC and WSDUDC. These results provide the fastest known approximation algorithms for DUDC. As with the WSDUDC results, we present a set of algorithms in which better approximation factors may be had at the expense of greater running time, ranging from a 15-approximate algorithm which runs in O(mn + m log m + n log n) time to an 18-approximate algorithm which runs in O(m^6n + n log n) time.
The next problems that we study are Hausdorff Core problems. These problems accept an input polygon P, and we seek a convex polygon Q which is fully contained in P and minimizes the Hausdorff distance between P and Q. Interestingly, we show that this problem may be reduced to that of computing the minimum disk radius, call it k_opt, such that a convex polygon Q contained in P intersects all disks of radius k_opt centred on the vertices of P. We begin by describing a polynomial time algorithm for the simple case where P has only a single reflex vertex. On general polygons, we provide a parameterized algorithm which performs a parametric search on the possible values of k_opt. The solution to the decision version of the problem, i.e. determining whether there exists a Hausdorff Core for P given k_opt, requires some novel insights. We also describe an FPTAS for the decision version of the Hausdorff Core problem.
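To make the objective concrete, the brute-force sketch below computes the symmetric Hausdorff distance between two finite point sets, such as dense samples of the boundaries of P and of a candidate core Q. It merely illustrates the quantity being minimised; it is unrelated to the parametric-search algorithm of the thesis.

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between finite point sets A and B,
    brute force in O(|A||B|) time."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

outer = [(0, 0), (2, 0), (2, 2), (0, 2)]
inner = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(hausdorff(outer, inner))   # ~0.707: each outer corner is sqrt(0.5) from the inner square
```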
Finally, we study Generalized Minimum Spanning Tree (GMST) problems, where the input consists of imprecise vertices, and the objective is to select a single point from each imprecise vertex in order to optimize the weight of the MST over the points. In keeping with one of the themes of the thesis, we begin by using disks as the imprecise vertices. We show that the minimization and maximization versions of this problem are NP-hard, and we describe some parameterized and approximation algorithms. Finally, we look at the case where the imprecise vertices consist of just two vertices each, and we show that the minimization version of the problem (which we call 2-GMST) remains NP-hard, even in the plane. We also provide an algorithm to solve the 2-GMST problem exactly if the combinatorial structure of the optimal solution is known.
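As an illustration of the 2-GMST objective (not a method from the thesis), the exhaustive sketch below tries every choice of one point per imprecise vertex and reports the cheapest resulting MST. Since the problem is NP-hard, this is feasible only for tiny instances; the helper names are ours.

```python
from itertools import product
import math

def mst_weight(pts):
    """Prim's algorithm on the complete Euclidean graph over pts."""
    n = len(pts)
    in_tree, dist, total = [False] * n, [math.inf] * n, 0.0
    dist[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                dist[v] = min(dist[v], math.dist(pts[u], pts[v]))
    return total

def two_gmst_brute_force(pairs):
    """pairs: imprecise vertices, each a pair of candidate points.
    Tries all 2^n selections -- exponential, for tiny instances only."""
    return min(mst_weight(choice) for choice in product(*pairs))

pairs = [((0, 0), (0, 3)), ((1, 0), (1, 3)), ((2, 0), (5, 5))]
print(two_gmst_brute_force(pairs))   # 2.0, choosing (0,0), (1,0), (2,0)
```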
We identify a number of open problems in this thesis that are worthy of further study.
Among them:
Is the Assisted LSDUDC problem NP-complete?
Can the WSDUDC results be used to obtain an improved PTAS for DUDC?
Are there classes of polygons for which the determination of the Hausdorff Core is easy?
Is there a PTAS for the maximum weight GMST problem on (unit) disks?
Is there a combinatorial approximation algorithm for the 2-GMST problem (particularly with an approximation factor under 4)?
|
15 |
Morphing Parallel Graph Drawings. Spriggs, Michael John. January 2007.
A pair of straight-line drawings of a graph is called parallel if, for every edge of the graph, the line segment that represents the edge in one drawing is parallel with the line segment that represents the edge in the other drawing. We study the problem of morphing between pairs of parallel planar drawings of a graph, keeping all intermediate drawings planar and parallel with the source and target drawings. We call such a morph a parallel morph. Parallel morphs have application to graph visualization. The problem of deciding whether two parallel drawings in the plane admit a parallel morph turns out to be NP-hard in general. However, for some restricted classes of graphs and drawings, we can efficiently decide parallel morphability. Our main positive result is that every pair of parallel simple orthogonal drawings in the plane admits a parallel morph. We give an efficient algorithm that computes such a morph. The number of steps required in a morph produced by our algorithm is linear in the complexity of the graph, where a step involves moving each vertex along a straight line at constant speed. We prove that this upper bound on the number of steps is within a constant factor of the worst-case lower bound. We explore the related problem of computing a parallel morph where edges are required to change length monotonically, i.e. to be either non-increasing or non-decreasing in length. Although parallel orthogonally-convex polygons always admit a monotone parallel morph, deciding morphability under these constraints is NP-hard, even for orthogonal polygons. We also begin a study of parallel morphing in higher dimensions. Parallel drawings of trees in any dimension always admit a parallel morph. This is not so for parallel drawings of cycles in 3-space, even if orthogonal. Similarly, not all pairs of parallel orthogonal polyhedra admit a parallel morph, even if they are topological spheres. In fact, deciding parallel morphability turns out to be PSPACE-hard for both parallel orthogonal polyhedra, and parallel orthogonal drawings in 3-space.
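The defining condition is straightforward to check directly; the sketch below (ours, not part of the thesis) tests whether two straight-line drawings of the same graph are parallel by comparing edge direction vectors.

```python
def are_parallel_drawings(edges, pos1, pos2, eps=1e-9):
    """edges: list of (u, v) pairs; pos1, pos2: dicts mapping each vertex to its
    (x, y) coordinates in the two drawings. The drawings are parallel iff every
    edge has parallel direction vectors in both (cross product ~ 0)."""
    for u, v in edges:
        ax, ay = pos1[v][0] - pos1[u][0], pos1[v][1] - pos1[u][1]
        bx, by = pos2[v][0] - pos2[u][0], pos2[v][1] - pos2[u][1]
        if abs(ax * by - ay * bx) > eps:   # non-zero cross product: not parallel
            return False
    return True

# Two drawings of the path a-b-c whose edges keep their directions:
edges = [("a", "b"), ("b", "c")]
pos1 = {"a": (0, 0), "b": (1, 0), "c": (1, 1)}
pos2 = {"a": (0, 0), "b": (3, 0), "c": (3, 2)}
print(are_parallel_drawings(edges, pos1, pos2))   # True
```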
|
16 |
On Geometric Range Searching, Approximate Counting and Depth Problems. Afshani, Peyman. January 2008.
In this thesis we deal with problems connected to range searching,
which is one of the central areas of computational geometry.
The dominant problems in this area are
halfspace range searching, simplex range searching and orthogonal range searching and
research into these problems has spanned decades.
For many range searching problems, the best possible
data structures cannot offer fast (i.e., polylogarithmic) query
times if we limit ourselves to near linear storage.
Even worse, it is conjectured (and proved in some cases)
that only very small improvements to these bounds might be possible.
This inefficiency has encouraged many researchers to seek alternatives through approximations.
In this thesis we continue this line of research and focus on
relative approximation of range counting problems.
One important problem where it is possible to achieve significant speedup
through approximation is halfspace range counting in 3D.
Here we continue the previous research done
and obtain the first optimal data structure for approximate halfspace range counting in 3D.
Our data structure has the slight advantage of being Las Vegas (the result is always correct) in contrast
to the previous methods that were Monte Carlo (the correctness holds with high probability).
Another series of problems where approximation can provide us with
substantial speedup comes from robust statistics.
We recognize three problems here:
approximate Tukey depth, regression depth and simplicial depth queries.
In 2D, we obtain an optimal data structure capable of approximating
the regression depth of a query hyperplane.
We also offer a linear space data structure which can answer approximate
Tukey depth queries efficiently in 3D.
These data structures are obtained by applying our ideas for the
approximate halfspace counting problem.
Approximating the simplicial depth turns out to be much more
difficult, however.
Computing the simplicial depth of a given point is computationally more
challenging than computing most other notions of data depth.
In 2D we obtain the first data structure which uses near linear space
and can answer approximate simplicial depth queries in polylogarithmic time.
As applications of this result, we provide two non-trivial methods to
approximate the simplicial depth of a given point in higher dimension.
Along the way, we establish a tight combinatorial relationship between
the Tukey depth of any given point and its simplicial depth.
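For readers unfamiliar with these depth notions, the brute-force sketch below computes exact simplicial depth and Tukey depth in 2D. It only illustrates the definitions that the data structures above approximate at far lower cost, and it assumes the points are in general position with the query point not among them.

```python
from itertools import combinations

def orient(o, a, b):
    """Sign of the 2D cross product of o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def simplicial_depth(q, pts):
    """Number of triangles spanned by pts that contain q (boundary included);
    brute force, O(n^3)."""
    count = 0
    for a, b, c in combinations(pts, 3):
        s1, s2, s3 = orient(a, b, q), orient(b, c, q), orient(c, a, q)
        if (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0):
            count += 1
    return count

def tukey_depth(q, pts):
    """Smallest number of points of pts in a closed half-plane whose boundary
    passes through q; brute force over lines through q and each data point.
    Assumes q is not in pts and no three points are collinear."""
    best = len(pts)
    for p in pts:
        left = sum(1 for r in pts if orient(q, p, r) > 0)
        right = sum(1 for r in pts if orient(q, p, r) < 0)
        best = min(best, left, right)
    return best

pts = [(0, 0), (4, 0), (2, 4), (2, 1), (1, 2), (3, 2)]
print(simplicial_depth((2, 1.5), pts), tukey_depth((2, 1.5), pts))  # depths of the query point
```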
Another problem investigated in this thesis is the dominance reporting problem,
an important special case of orthogonal range reporting.
In three dimensions, we solve this
problem in the pointer machine model and the external memory model
by offering the first optimal data structures in these models of computation.
Also, in the RAM model and for points from
an integer grid we reduce the space complexity of the fastest
known data structure to optimal.
Using known techniques in the literature, we can use our
results to obtain solutions for the orthogonal range searching problem as well.
The query complexities offered by our orthogonal range reporting data structures
match the most efficient query complexities
known in the literature, but our space bounds are lower than those of previous methods in the external
memory model and in the RAM model where the input is a subset of an integer grid.
The results also yield improved orthogonal range searching in
higher dimensions (which shows the significance
of the dominance reporting problem).
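As a point of reference (not the thesis's data structure), the linear scan below answers a 3D dominance reporting query directly from the definition; the contribution of the thesis is to answer such queries with optimal space and query cost rather than by scanning all points.

```python
def dominance_report(points, q):
    """Report every point p that dominates the query q, i.e. p[i] >= q[i]
    in all three coordinates. Brute force, O(n) per query."""
    return [p for p in points if all(p[i] >= q[i] for i in range(3))]

pts = [(1, 2, 3), (4, 4, 4), (0, 5, 5), (3, 3, 1)]
print(dominance_report(pts, (1, 2, 1)))   # [(1, 2, 3), (4, 4, 4), (3, 3, 1)]
```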
Intersection searching is a generalization of range searching where
we deal with more complicated geometric objects instead of points.
We investigate the rectilinear disjoint polygon counting problem
which is a specialized intersection counting problem.
We provide a linear-size data structure capable of counting
the number of disjoint rectilinear polygons
intersecting any rectilinear polygon of constant size.
The query time (as well as some other properties of our data structure) resembles
that of the classical simplex range searching data structures.
|
17 |
Solving Geometric Problems in Space-Conscious Models. Chen, Yu. January 2009.
When dealing with massive data sets, standard algorithms may
easily "run out of memory". In this thesis, we design efficient
algorithms in space-conscious models. In particular, in-place
algorithms, multi-pass algorithms, read-only algorithms, and
stream-sort algorithms are studied, and the focus is on
fundamental geometric problems, such as 2D convex hulls, 3D convex
hulls, Voronoi diagrams and nearest neighbor queries, Klee's
measure problem, and low-dimensional linear programming.
In-place algorithms only use O(1) extra space besides the input
array. We present a data structure for 2D nearest neighbor queries
and algorithms for Klee's measure problem in this model.
Algorithms in the multi-pass model only make read-only sequential
access to the input, and use sublinear working space and small
(usually a constant) number of passes on the input. We present
algorithms and lower bounds for many problems, including
low-dimensional linear programming and convex hulls, in this
model.
Algorithms in the read-only model only make read-only random
access to the input array, and use sublinear working space. We
present algorithms for Klee's measure problem and 2D convex hulls
in this model.
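To illustrate the flavour of the read-only model (this is not one of the thesis's algorithms), the sketch below computes a 2D convex hull by gift wrapping over a read-only input array, using O(1) working space and streaming hull vertices to the output; it assumes at least three distinct, not-all-collinear points.

```python
def hull_readonly(pts, emit):
    """Gift wrapping (Jarvis march) over the read-only array pts; hull vertices
    are written to the output stream via emit(). O(1) extra space, O(nh) time."""
    def turn(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    start = min(range(len(pts)), key=lambda i: pts[i])   # lowest, leftmost point
    cur = start
    while True:
        emit(pts[cur])
        nxt = (cur + 1) % len(pts)
        for cand in range(len(pts)):
            # switch to cand if it lies strictly to the right of the ray cur -> nxt
            if cand != cur and turn(pts[cur], pts[nxt], pts[cand]) < 0:
                nxt = cand
        cur = nxt
        if cur == start:
            break

hull_readonly([(0, 0), (2, 1), (1, 1), (1, 3), (0, 2)], print)
# prints (0, 0), (2, 1), (1, 3), (0, 2) -- the hull in counter-clockwise order
```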
Algorithms in the stream-sort model use sorting as a primitive
operation. Each pass can either sort the data or make sequential
access to the data. As in the multi-pass model, these algorithms
can only use sublinear working space and a small (usually a
constant) number of passes on the data. We present algorithms for
constructing convex hulls and polygon triangulation in this model.
|