
NSF Org: CCF Division of Computing and Communication Foundations
Initial Amendment Date: March 31, 2011
Latest Amendment Date: March 31, 2011
Award Number: 1065106
Award Instrument: Standard Grant
Program Manager: Jack S. Snoeyink, CCF Division of Computing and Communication Foundations, CSE Directorate for Computer and Information Science and Engineering
Start Date: April 1, 2011
End Date: March 31, 2015 (Estimated)
Total Intended Award Amount: $772,857.00
Total Awarded Amount to Date: $772,857.00
Recipient Sponsored Research Office: 5000 Forbes Ave, Pittsburgh, PA 15213-3815, US; (412) 268-8746
Primary Place of Performance: 5000 Forbes Ave, Pittsburgh, PA 15213-3815, US
NSF Program(s): Algorithmic Foundations
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
Meshing has been a cornerstone of simulations using the finite
element method. More recently, it has found applications wherever
one needs to define a function over a domain, such as graphics,
computer-aided design, robotics, and even machine learning. This
project will design algorithms that work efficiently in any fixed
dimension, with guarantees on output size and quality.
At present, no theory exists to formulate and produce optimal meshes
in the presence of small input angles, even in the 2D case. The
research will develop efficient algorithms, in 2D and higher dimensions,
that generate optimal-size Delaunay triangulations using only simplices
with no angles approaching 180 degrees.
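As an illustration of the quality measure just mentioned (not the
project's algorithm), the sketch below builds a 2D Delaunay
triangulation with SciPy's general-purpose routine and reports the
largest angle in the mesh; the helper name and point counts are
illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

# Build a Delaunay triangulation of random 2D points and measure its
# worst angle; angles near 180 degrees are the "slivers" optimal-quality
# meshes avoid.
rng = np.random.default_rng(1)
pts = rng.random((50, 2))
tri = Delaunay(pts)

def max_angle_deg(a, b, c):
    # Largest corner angle of triangle abc, via the angle between edge vectors.
    angles = []
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return max(angles)

worst = max(max_angle_deg(*pts[s]) for s in tri.simplices)
print(f"largest angle in the mesh: {worst:.1f} degrees")
```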
Machine learning applications need a meshing algorithm that runs in
polynomial time when meshing n points in log n dimensions.
Historically, meshing algorithms return a set of space-filling
simplices. But even good-aspect-ratio simplices have too small a
volume, forcing a mesh of superpolynomial size. Thus, new
algorithms will be developed that handle atomic objects with
much larger volume than simplices.
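A rough back-of-the-envelope computation, under an assumed constant c,
suggests why small simplex volume forces superpolynomial mesh sizes
when the dimension is log n:

```python
import math

# Hypothetical constant: assume a good-aspect-ratio simplex in d
# dimensions has volume at most about c**d / d!, so any space-filling
# mesh of the unit cube needs at least d! / c**d simplices.
c = 2.0
for n in (2**8, 2**16, 2**32):
    d = int(math.log2(n))  # meshing n points in log n dimensions
    simplices_needed = math.factorial(d) / c**d
    print(f"n = {n:>13,}   d = {d:>2}   simplices >= {simplices_needed:.2e}")
```

The printed lower bound grows far faster than any fixed polynomial in
n, which is the obstruction the proposed larger-volume atomic objects
are meant to sidestep.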
The results from this project are eminently practical and have broad
impact on science, engineering, manufacturing, and machine learning.
In particular, meshing is an enabling technology for designing
efficient windmills and cars, and for simulating earthquakes and
medical devices. One goal is to incorporate techniques from this
research into our first-generation 3D code, which we have made
available on the web. In addition, this material will be incorporated
into classes taught at CMU, lectures, and papers presented at
conferences.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The primary focus of the project was to design more efficient
algorithms for fundamental problems that arise in geometry, graph
theory, and linear algebra. Our motivation for studying these
algorithmic problems comes partly from their applications to
problems in science, industry, medicine, and the military.
One of the applications studied in more detail by this project was
image processing. We demonstrated how these new algorithms can be
applied to problems such as medical image processing. Other fast
algorithms were found for numeric problems arising from
electromagnetic radiation and 3D fluid problems, such as those
involving Maxwell's equations and radar imaging.
Solving these applications efficiently required us to study and solve
more fundamental questions in algorithm design, including problems in
computational geometry, graph theory, and linear algebra. The main
approach to solving a problem in one of these three areas was to bring
in the tools developed in the other two. A now-classic algorithmic
design technique is to use graph theory to solve linear algebra
problems and linear algebra to solve graph problems, in what is known
as spectral graph theory. The project also developed new algorithms
for problems in computational geometry and numerical analysis.
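A toy example of this interplay, assuming only NumPy: a basic fact of
spectral graph theory is that the number of zero eigenvalues of a
graph's Laplacian matrix equals its number of connected components, so
a purely linear-algebraic computation answers a purely graph-theoretic
question.

```python
import numpy as np

# Two components: {0, 1, 2} and {3, 4}.
edges = [(0, 1), (1, 2), (3, 4)]
n = 5

# Graph Laplacian L = D - A: degrees on the diagonal, -1 per edge.
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

eigenvalues = np.linalg.eigvalsh(L)
print("zero eigenvalues:", np.sum(np.isclose(eigenvalues, 0.0)))  # prints 2
```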
One of the simplest problems studied by this project, and one that is
reasonably easy to describe, is computing the Voronoi diagram of a set
of points given in some fixed-dimensional space. The problem is to
partition the space according to the distance to the nearest input
point. This decomposes the space into a polytope around each input
point. Unfortunately, the number of corners of these polytopes may be
exponential in the number of points and the dimension. Even in three
dimensions, the number of corners may be quadratic in the number of
points. Quite often, however, the number of corners is only linear in
the number of points. We showed how to use ideas from finite element
mesh generation to compute the Voronoi diagram faster when the output
size is small. In this case we obtained a much faster algorithm than
was previously known.
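For concreteness, a minimal sketch (using SciPy's general-purpose
Voronoi routine, not the project's algorithm) that builds a 3D Voronoi
diagram and counts its corners; the point count is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import Voronoi

# Voronoi diagram of random points in the unit cube.  In 3D the number
# of corners (Voronoi vertices) can be quadratic in the number of
# points, but for typical inputs it is closer to linear, which is the
# regime where an output-sensitive algorithm pays off.
rng = np.random.default_rng(0)
points = rng.random((200, 3))
vor = Voronoi(points)
print("input points:", len(points), " Voronoi corners:", len(vor.vertices))
```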
Another very well-studied problem is finding what is known as a
low-diameter decomposition of a graph. That is, given an unweighted,
undirected graph G with n vertices, remove a small number of edges such
that each connected component of the remaining graph has small
diameter, O(log n). We found a very simple linear-work parallel algorithm for
this classic problem. This algorithm is now used in the fastest known
algorithm for solving linear systems that are symmetric and diagonally
dominant (SDD). We have also used this decomposition algorithm to
improve the running time of parallel approximate shortest-path algorithms,
as well as approximate graph spanners.
The algorithm is so simple it can be explained to a general audience.
The input to the algorithm is an unweighted, undirected graph, and we
think of each node as having its own agent or processor.
The algorithm is as follows (a sketch in code appears after the list):
1) Each agent independently picks a value from an exponential
distribution; say the ith agent picks Xi.
2) The agents determine the maximum value picked, say Xm.
3) The ith agent comes alive at time Xm - Xi; if no one owns the
agent's node yet, it starts its own cluster by claiming its own node, and otherwise it quits.
4) At each successive time step, every live agent looks to see whether any nodes
neighboring the nodes it owns are still unowned and, if so, claims them for its cluster.
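A minimal sequential sketch of these steps, assuming the graph is
given as an adjacency dictionary; the function name and the rate
parameter beta are illustrative choices, not from the report. Growing
all clusters outward at unit speed is simulated with a single
Dijkstra-style search seeded at every node with start time Xm - Xi.

```python
import heapq
import random

def low_diameter_decomposition(adj, beta=0.2, seed=0):
    """Assign every node to a cluster following steps 1-4 above.

    adj: dict mapping each node to a list of its neighbors.
    beta: rate of the exponential distribution (illustrative choice);
          larger beta spreads out start times and shrinks clusters.
    Returns a dict mapping each node to its cluster's starting node.
    """
    rng = random.Random(seed)
    # Steps 1-2: each agent draws X_i ~ Exp(beta); find the maximum X_m.
    x = {v: rng.expovariate(beta) for v in adj}
    x_max = max(x.values())
    # Step 3: agent v comes alive at time X_m - X_v.  Seeding the search
    # with these start times lets whichever cluster reaches a node first
    # claim it, exactly as in the race described above.
    owner = {}
    heap = [(x_max - x[v], v, v) for v in adj]  # (arrival time, cluster start, node)
    heapq.heapify(heap)
    while heap:
        t, start, v = heapq.heappop(heap)
        if v in owner:
            continue  # some cluster already claimed this node, so this agent quits
        owner[v] = start
        # Step 4: the cluster reaches unowned neighbors one time unit later.
        for w in adj[v]:
            if w not in owner:
                heapq.heappush(heap, (t + 1.0, start, w))
    return owner

# Example: decompose a 12-node cycle into a handful of clusters.
n = 12
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(low_diameter_decomposition(cycle))
```

Edges whose endpoints end up in different clusters are the removed
edges of the decomposition; each cluster's diameter is bounded because
the exponential start times keep the race between clusters short.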