
Award Abstract # 1065106
AF: Medium: Theory and Practice of Optimal Meshing

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: CARNEGIE MELLON UNIVERSITY
Initial Amendment Date: March 31, 2011
Latest Amendment Date: March 31, 2011
Award Number: 1065106
Award Instrument: Standard Grant
Program Manager: Jack S. Snoeyink
  CCF / Division of Computing and Communication Foundations
  CSE / Directorate for Computer and Information Science and Engineering
Start Date: April 1, 2011
End Date: March 31, 2015 (Estimated)
Total Intended Award Amount: $772,857.00
Total Awarded Amount to Date: $772,857.00
Funds Obligated to Date: FY 2011 = $772,857.00
History of Investigator:
  • Gary Miller (Principal Investigator)
    glmiller@cs.cmu.edu
Recipient Sponsored Research Office: Carnegie-Mellon University
5000 FORBES AVE
PITTSBURGH
PA  US  15213-3815
(412)268-8746
Sponsor Congressional District: 12
Primary Place of Performance: Carnegie-Mellon University
5000 FORBES AVE
PITTSBURGH
PA  US  15213-3815
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): U3NKNFLNQ613
Parent UEI: U3NKNFLNQ613
NSF Program(s): Algorithmic Foundations
Primary Program Source: 01001112DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 7924, 7929, 9218, HPCC
Program Element Code(s): 779600
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Meshing has been a cornerstone of simulations using the finite
element method. More recently it has found applications wherever
one needs to define a function over a domain, such as graphics,
computer-aided design, robotics, and even machine learning. Algorithms
will be designed to work efficiently in any fixed dimension with
guarantees on output size and quality.

At present, no theory exists to formulate and produce optimal meshes
in the presence of small input angles, even in the 2D case. The
research will find efficient algorithms, in 2D and higher dimensions,
that generate optimal-size Delaunay triangulations using only
simplices with no angles approaching 180 degrees.

Machine learning applications need a meshing algorithm that runs in
polynomial time for meshing n points in log n dimensions.
Historically, meshing algorithms return a set of space-filling
simplices. Even good-aspect-ratio simplices have too small a volume,
yielding a mesh of superpolynomial size. Thus, new algorithms will be
developed that handle atomic objects that have much larger volume
than simplices.

The results from this project are eminently practical and have broad
impact on the sciences, engineering, manufacturing, and machine
learning. In particular, meshing is an enabling technology for
designing efficient windmills and cars, and for simulations of
earthquakes and medical devices. One goal is to incorporate techniques
from this research into our first-generation 3D code that we made
available on the web. In addition, this material will be incorporated
into classes taught at CMU, lectures, and papers presented at
conferences.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


(Showing: 1 - 10 of 11)
Gary L. Miller and Donald R. Sheehy "Approximate centerpoints with proofs" Comput. Geom. , v.43 , 2010 , p.647-654
Gary L. Miller and Donald Sheehy "A New Approach to Output-Sensitive Construction of Voronoi Diagrams and Delaunay Triangulations" Discrete & Computational Geometry , v.52 , 2014 , p.476-491 10.1007/s00454-014-9629-y
Gary L. Miller and Todd Phillips and Donald R. Sheehy "Beating the Spread: Time-Optimal Point Meshing" Symposium on Computational Geometry , 2011
Guy E. Blelloch and Anupam Gupta and Ioannis Koutis and Gary L. Miller and Richard Peng and Kanat Tangwongsan "Nearly-Linear Work Parallel SDD Solvers, Low-Diameter Decomposition, and Low-Stretch Subgraphs" Theory Comput. Syst. , v.55 , 2014 , p.521-554 10.1007/s00224-013-9444-5
Ioannis Koutis and Gary L. Miller and Richard Peng "A fast solver for a class of linear systems" Commun. ACM , v.55 , 2012 , p.99-107
Ioannis Koutis and Gary L. Miller and Richard Peng "A Generalized Cheeger Inequality" CoRR , v.abs/141 , 2014
Ioannis Koutis and Gary L. Miller and Richard Peng "Approaching Optimality for Solving SDD Linear Systems" SIAM J. Comput. , v.43 , 2014 , p.337-354 10.1137/110845914
Michael B. Cohen and Brittany Terese Fasy and Gary L. Miller and Amir Nayyeri and Donald Sheehy and Ameya Velingker "Approximating Nearest Neighbor Distances" CoRR , v.abs/150 , 2015
Michael B. Cohen and Gary L. Miller and Jakub W. Pachocki and Richard Peng and Shen Chen Xu "Stretching Stretch" CoRR , v.abs/140 , 2014
Mihail N. Kolountzakis and Gary L. Miller and Richard Peng and Charalampos E. Tsourakakis "Efficient Triangle Counting in Large Graphs via Degree-Based Vertex Partitioning" Internet Mathematics , v.8 , 2012 , p.161-185
Tsourakakis, Charalampos E. and Peng, Richard and Tsiarli, Maria A. and Miller, Gary L. and Schwartz, Russell "Approximation algorithms for speeding up dynamic programming and denoising aCGH data" J. Exp. Algorithmics , v.16 , 2011 , p.1.8:1.1-- 10.1145/1963190.2063517

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The primary focus of the project was to design more efficient
algorithms for fundamental problems that arise in geometry, graph
theory, and linear algebra.  Our motivation and interest in studying
these algorithmic problems arises partly from their applications to
problems in the sciences, industry, medicine, and the military.

One of the applications studied in more detail by this project was
image processing.  We demonstrated how these new algorithms can be
applied to problems such as medical image processing.  Other fast
algorithms were found for the solution of numeric problems arising
from electromagnetic radiation and 3D fluid problems such as those
involving Maxwell's equations and radar imaging.

Solving these applications efficiently required us to study and solve
more fundamental questions in algorithm design, including problems in
computational geometry, graph theory, and linear algebra.  The main
approach undertaken to solve a problem in one of these three areas
was to apply the tools developed in the other two.  A now-classic
algorithm design technique is to use graph theory to solve linear
algebra problems and linear algebra to solve graph problems, in what
is now known as spectral graph theory.  The project also developed
new algorithms for problems in computational geometry and numerical
analysis.

One of the simplest problems studied by this project, and one that is
reasonably easy to describe, is computing the Voronoi diagram of a
set of points given in some fixed-dimensional space. The problem is
to partition the space according to the distance to the nearest input
point. This decomposes the space into a polytope around each input
point. Unfortunately, the number of corners of these polytopes may be
exponential in the number of points and the dimension; even in three
dimensions the number of corners may be quadratic in the number of
points.  But quite often the number of corners is only linear in the
number of points.  We showed how to use ideas from finite element
mesh generation to speed up computing the Voronoi diagram when the
output size is small. In this case we obtained a much faster
algorithm than was previously known.
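The nearest-point partition described above can be illustrated with a tiny brute-force sketch. This is not the project's algorithm (which uses mesh-generation ideas to be output-sensitive); the helper name and the sample sites below are purely illustrative.

```python
def nearest_site(point, sites):
    """Return the index of the input site closest to `point`,
    i.e. the Voronoi cell that `point` belongs to."""
    return min(range(len(sites)),
               key=lambda i: (point[0] - sites[i][0]) ** 2
                           + (point[1] - sites[i][1]) ** 2)

# three input points in the plane
sites = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

# label every point of a coarse 5x5 grid with its Voronoi cell;
# the labels partition the grid into one region per input point
cells = {(x / 4, y / 4): nearest_site((x / 4, y / 4), sites)
         for x in range(5) for y in range(5)}
```

Enumerating cell corners this way would of course be far too slow in higher dimensions, which is exactly the regime where the output-sensitive algorithm matters.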

Another very well-studied problem is finding what is known as a
low-diameter decomposition of a graph. That is, given an unweighted,
undirected graph G with n vertices, remove a small number of edges so
that each connected component of the remaining graph has small
diameter, O(log n).  We found a very simple linear-work parallel
algorithm for this classic problem.  This algorithm is now used in
the fastest known algorithm for solving linear systems that are
symmetric and diagonally dominant (SDD).  We have also used this
decomposition algorithm to improve the running time of parallel
approximate shortest-path algorithms as well as approximate graph
spanners.

The algorithm is so simple it can be explained to a general audience.
The input to the algorithm is an unweighted, undirected graph, and we
think of each node as having its own agent or processor.

The algorithm is as follows:

1) Each agent independently picks a value from an exponential
distribution; say the ith node picks Xi.

2) The agents determine the maximum value picked, say Xm.

3) The ith agent comes alive at time Xm - Xi; if no one owns the
agent's node, it starts its own cluster by claiming that node, otherwise it quits.

4) At each successive time the agent looks to see if any nodes
neighboring the nodes it owns are not owned and i...
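The steps above can be sketched as a short sequential simulation of the parallel process. This is a minimal illustration, not the project's implementation; the function name and the `beta` rate parameter of the exponential distribution are assumptions made for the sketch.

```python
import heapq
import random

def low_diameter_decomposition(adj, beta=0.5, seed=0):
    """Sequential simulation of the exponential-shift clustering above.
    `adj` is an adjacency list; returns, for each node, the cluster
    center that owns it."""
    rng = random.Random(seed)
    n = len(adj)
    # 1) each agent independently picks an exponential value X_i
    shift = [rng.expovariate(beta) for _ in range(n)]
    # 2) the agents determine the maximum value picked, X_m
    x_max = max(shift)
    owner = [None] * n
    # 3) agent i comes alive at time X_m - X_i and tries to start
    #    a cluster at its own node; events are (time, node, center)
    events = [(x_max - shift[i], i, i) for i in range(n)]
    heapq.heapify(events)
    while events:
        t, v, center = heapq.heappop(events)
        if owner[v] is not None:
            continue  # node already claimed: this attempt quits
        owner[v] = center
        # 4) one time step later, try to claim unowned neighbors
        for u in adj[v]:
            if owner[u] is None:
                heapq.heappush(events, (t + 1, u, center))
    return owner
```

Because a cluster's nodes are claimed by breadth-first waves starting from its center, each cluster's diameter is bounded by how long its center's head start lasts, which is where the exponential distribution's properties enter the O(log n) diameter analysis.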
