Award Abstract # 1538217
Collaborative Research: Perfect Simulation of Stochastic Networks

NSF Org: CMMI
Division of Civil, Mechanical, and Manufacturing Innovation
Recipient: THE TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
Initial Amendment Date: August 25, 2015
Latest Amendment Date: August 25, 2015
Award Number: 1538217
Award Instrument: Standard Grant
Program Manager: Georgia-Ann Klutke
gaklutke@nsf.gov
 (703)292-2443
CMMI
 Division of Civil, Mechanical, and Manufacturing Innovation
ENG
 Directorate for Engineering
Start Date: September 1, 2015
End Date: August 31, 2018 (Estimated)
Total Intended Award Amount: $127,588.00
Total Awarded Amount to Date: $127,588.00
Funds Obligated to Date: FY 2015 = $127,588.00
History of Investigator:
  • Jose Blanchet (Principal Investigator)
    jose.blanchet@stanford.edu
Recipient Sponsored Research Office: Columbia University
615 W 131ST ST
NEW YORK
NY  US  10027-7922
(212)854-6851
Sponsor Congressional District: 13
Primary Place of Performance: Columbia University
500 W 120th St.
New York
NY  US  10027-6902
Primary Place of Performance Congressional District: 13
Unique Entity Identifier (UEI): F4N1QNPB95M4
Parent UEI:
NSF Program(s): OE Operations Engineering
Primary Program Source: 01001516DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 072E, 073E, 077E, 9102
Program Element Code(s): 006Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.041

ABSTRACT

Stochastic networks are a general class of time-varying probabilistic models in which there is competition for limited resources. They are used in a wide range of engineering applications such as communication networks, call centers, and manufacturing systems. Operators of these types of systems are often interested in achieving a high level of performance over the long run, i.e., in steady state. Thus, it is important to devise efficient computational methods for steady-state analysis of stochastic networks. Simulation is one of the most commonly used methods for estimating steady-state performance, but its straightforward application results in an initial-transient bias. This award provides a comprehensive set of tools that will enable exact (i.e., with no initial-transient bias) steady-state stochastic simulation of a wide range of complex stochastic networks of interest. This characteristic (complete bias elimination) is what defines a perfect simulation algorithm. This research will therefore enable accurate steady-state analysis in a wide range of areas of societal impact, allowing operators to improve efficiency and performance. Because steady-state analysis arises in a wide variety of areas, including Bayesian statistics, the award will also have impact beyond the types of applications mentioned earlier.

Steady-state performance analysis of stochastic networks (including general queueing networks) is of great importance in operations research. Stochastic simulation has been a traditional tool used by modelers and researchers to perform steady-state computations. The key challenge in steady-state simulation is the quantification of the bias caused by the initial transient behavior associated with any direct stochastic simulation procedure. This award focuses on algorithms that fully eliminate the initial-transient bias in a non-asymptotic sense; these are known as perfect simulation algorithms. This research will produce the first class of perfect simulation algorithms for general stochastic networks with features such as non-Markovian input, time-inhomogeneous (periodic) characteristics, long-range dependent traffic (e.g., fractional Brownian motion), and multidimensional networks with and without capacity constraints (such as generalized Jackson networks). The research combines techniques from areas such as rare-event simulation and steady-state simulation, which have not previously been connected for the purpose of developing computational methods. The project also has important implications for other scientific areas of great relevance, such as Bayesian statistics, due to the connection between steady-state simulation and Markov chain Monte Carlo methods.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Blanchet, J. and Liu, Z. "Malliavin-based Multilevel Monte Carlo Estimators for Densities of Max-stable Processes" Proceedings of MCQMC 2016, 2017, p. 75. 2194-1017
Blanchet, J., Dong, J., and Pei, Y. "Perfect Sampling of GI/GI/c Queues" Queueing Systems: Theory and Applications, v.90, 2017, p. 1. 10.1007/s11134-018-9573-2

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Many applications are concerned with the design of engineering systems that exhibit a desirable long-term average performance. One may think, for example, of applications such as health care operations or manufacturing systems, in which managers and engineers are interested in implementing and evaluating policies that minimize the long-term average cost per unit of time of running such systems. These types of systems are complex, high dimensional, and non-stationary in time, so even evaluating the performance of a policy is a non-trivial undertaking. Stochastic simulation is therefore a natural approach to performance analysis and optimization in these types of engineering systems, which are commonly known in the operations research literature as stochastic networks.

In turn, stochastic simulation consists of generating scenarios of the system under consideration, driven by the random inputs that are inherently part of the system's evolution: for instance, the flow of patients and the type of care they require when arriving at a hospital, or the demand and supply fluctuations in a manufacturing operation. The basic premise of stochastic simulation, which rests on the statistical Law of Large Numbers, is that by averaging many independent scenarios one can compute the expected performance of the system. The difficulty in the setting of steady-state cost analysis is that, to compute the long-term average cost of running a system, each scenario must in principle be simulated for an infinite amount of time if one genuinely wishes to estimate the steady-state cost per unit of time. Naturally, this can be computationally very expensive. One can, of course, truncate the simulation at a fixed time horizon, but such a truncation, if not done carefully, may introduce bias.
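The initial-transient bias described above can be seen in a few lines of code. The sketch below is a standard textbook-style illustration, not code from the project: it simulates customer waiting times in an M/M/1 queue via the Lindley recursion, starting from an empty system, and compares a short-horizon average against a long-horizon one and the known steady-state mean. The rates and seed are chosen only for the example.

```python
import random

def lindley_waits(lam, mu, n, seed):
    """Simulate n customer waiting times in an M/M/1 queue via the
    Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1}),
    starting from an empty system (W_0 = 0)."""
    rng = random.Random(seed)
    w, waits = 0.0, []
    for _ in range(n):
        waits.append(w)
        s = rng.expovariate(mu)   # service time of current customer
        a = rng.expovariate(lam)  # interarrival time to next customer
        w = max(0.0, w + s - a)
    return waits

lam, mu = 0.5, 1.0                  # arrival and service rates (utilization 0.5)
exact = lam / (mu * (mu - lam))     # known steady-state mean wait: 1.0
waits = lindley_waits(lam, mu, 20000, seed=42)
short_run = sum(waits[:100]) / 100  # contaminated by the empty start
long_run = sum(waits) / len(waits)  # transient washes out over a long horizon
```

The short-horizon average is pulled toward zero by the empty initial state; the long-horizon average approaches the exact value, but only at the cost of a long (in principle infinite) simulation per scenario.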

One of the main outcomes of this project is the construction of a class of algorithms that estimate, without any bias, the steady-state cost of complex stochastic networks by running each scenario for only a finite (but random) amount of time. These estimators of steady-state costs are known in the community as "perfect" because they carry no bias. The advantage of perfect, or unbiased, samples is that they can easily be replicated in parallel, distributing the computation among many computers and yielding unbiased estimates of steady-state costs whose errors can be reduced by increasing the number of independent replications. It is important to note that each replication or scenario now requires only a finite window of time to be produced.
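The project's algorithms target general networks, but the core idea of perfect simulation — an exact stationary draw after a finite, random amount of simulated time — can be illustrated with the classical coupling-from-the-past method of Propp and Wilson on a toy finite-state chain. The sketch below is an illustration only, not the project's algorithm (which handles infinite state spaces and general inputs); the capacity, transition probabilities, and names are chosen for the example.

```python
import random

K = 3                     # capacity: states 0..K (a finite birth-death chain)
P_UP, P_DOWN = 0.3, 0.4   # birth/death probabilities (remainder: stay put)

def step(x, u):
    """Monotone update rule: the same uniform u drives every state."""
    if u < P_UP:
        return min(x + 1, K)
    if u < P_UP + P_DOWN:
        return max(x - 1, 0)
    return x

def cftp(rng):
    """Monotone coupling from the past: double the lookback T until the
    trajectories started from the minimal (0) and maximal (K) states,
    driven by the SAME uniforms, coalesce by time 0.  The common value
    is an exact draw from the stationary distribution."""
    us = []               # us[t] drives the transition at time -(t+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, K
        for t in range(T - 1, -1, -1):   # from time -T forward to time 0
            lo, hi = step(lo, us[t]), step(hi, us[t])
        if lo == hi:
            return lo
        T *= 2            # look further into the past, reusing old randomness

rng = random.Random(1)
samples = [cftp(rng) for _ in range(4000)]
```

Each call terminates after a finite random lookback, yet the output has exactly the stationary distribution (here proportional to (P_UP/P_DOWN)^i), with no initial-transient bias to quantify.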

The methodological elements of the project have been abstracted and applied to other settings in which bias plays a key role. For instance, in machine learning and the analysis of massive data sets, very large optimization problems are often solved for model fitting. The data sets may be so massive that a subsample is needed to implement the fitting procedure; unfortunately, optimizing over a subsample introduces a bias. The tools studied in this project have also been applied to the problem of finding unbiased estimators of the solutions of massive optimization problems by solving small problems, each of a size that is independent of the size of the original data set (which can be arbitrarily large).
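One standard device for removing this kind of bias is the randomized multilevel (debiasing) estimator of Rhee and Glynn, which turns a convergent sequence of biased approximations A_0, A_1, ... into a single unbiased estimator of the limit, evaluated at a random level N. The sketch below shows only the mechanics on a deterministic toy sequence; it is not the project's estimator, and the sequence A_n = 1 - 0.5^n (with limit 1) is chosen purely for illustration.

```python
import random

def debiased(limit_seq, r, rng):
    """Single-term randomized multilevel estimator: draw N geometric
    with P(N = n) = (1 - r) r^n on {0, 1, 2, ...} and return
    (A_N - A_{N-1}) / P(N = N), whose expectation telescopes to
    lim_n A_n whenever the differences shrink fast enough."""
    n = 0
    while rng.random() < r:
        n += 1
    a_n = limit_seq(n)
    a_prev = limit_seq(n - 1) if n > 0 else 0.0  # convention: A_{-1} = 0
    p_n = (1.0 - r) * r ** n
    return (a_n - a_prev) / p_n

A = lambda n: 1.0 - 0.5 ** n   # toy approximation sequence, limit 1.0
rng = random.Random(7)
reps = [debiased(A, 0.5, rng) for _ in range(100000)]
estimate = sum(reps) / len(reps)   # unbiased for the limit
```

Each replication evaluates the sequence at only one random, finite level, yet the average across replications converges to the exact limit — the same principle that lets a finite (random) amount of work produce an unbiased answer to an "infinite-horizon" question.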

In sum, the tools studied in this project enable the unbiased evaluation of quantities of interest, such as steady-state costs and solutions of optimization problems over massive data sets. Although some of these quantities may, in principle, take an arbitrarily long time to compute, the algorithms produced by this project deliver unbiased estimators of them in a finite amount of time. These estimators can be averaged to improve accuracy in a timely manner and can easily be produced in parallel, making the approach scalable and versatile.


Last Modified: 04/23/2019
Modified by: Jose Blanchet
