Article: Optimal uncertainty quantification for legacy data observations of Lipschitz functions

I’m happy to report that the article “Optimal uncertainty quantification for legacy data observations of Lipschitz functions”, jointly written with Mike McKerns, Dominik Meyer, Florian Theil, Houman Owhadi and Michael Ortiz, has now appeared in ESAIM: Mathematical Modelling and Numerical Analysis, vol. 47, no. 6. The preprint version can be found at arXiv:1202.1928.



Article: Optimal Uncertainty Quantification

Almost three years on from the initial submission, the article “Optimal Uncertainty Quantification”, jointly written with Houman Owhadi, Clint Scovel, Mike McKerns and Michael Ortiz, is now in print. It will appear in this year’s second-quarter issue of SIAM Review, and is already accessible online for those with SIAM subscriptions; the preprint version can be found at arXiv:1009.0679.

This paper was a real team effort, with everyone bringing different strengths to the table. Given the length of the review process, I think that our corresponding author Houman Owhadi deserves a medal for his patience (as does Ilse Ipsen, the article’s editor at SIAM Review), but, really, congratulations and thanks to all. 🙂

We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
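To give a rough flavour of what this means in symbols (a schematic statement in generic notation, not a verbatim quotation from the paper): if \mathcal{A} denotes the set of all pairs (f, \mu) of response functions and probability measures compatible with the given assumptions and information, and “failure” means that the response exceeds a threshold a, then the optimal (i.e. sharpest) upper bound on the probability of failure is the value of the optimization problem

\mathcal{U}(\mathcal{A}) = \sup_{(f, \mu) \in \mathcal{A}} \mu\bigl[ f(X) \geq a \bigr].

The finite-dimensional reductions referred to above assert that, under quite general conditions on the constraints defining \mathcal{A}, this supremum can be computed over measures \mu supported on only finitely many points, which is what makes the problem amenable to numerical optimization.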

Preprint: Stratified graphene-noble metal systems for low-loss plasmonics applications

Lauren Rast, Vinod Tewary and I have just posted to the arXiv a preprint of our joint paper “Stratified graphene-noble metal systems for low-loss plasmonics applications”, which has been accepted for publication in Physical Review B later this year.

Graphene, which is essentially a sheet of carbon just one atom thick (a single layer of graphite), is often hailed in the press as a “wonder material”, usually because of material properties such as its exceptionally high strength-to-weight ratio. Another area of interest, however, is the study of the electronic and optical properties of graphene.

Graphene has a very high carrier mobility, meaning that if you excite it by shooting some light at it, then the electrons in the graphene sheet will wiggle about collectively (an excitation called a plasmon) and thereby carry a large chunk of the energy of the incident light as electrical current through the sheet. This is exactly what you want a solar cell to do if you want to generate electricity from sunlight, and a graphene-based “thin film” solar cell would be much thinner and lighter than today’s solar cells.

But there’s a tiny problem: graphene doesn’t do this very well at visible frequencies of light. Silver, on the other hand, responds very sympathetically to visible light, but is much more lossy. A-ha! What if one were to make a graphene and silver sandwich? Could one have the best of both worlds, i.e. absorption in the visible range like silver, and low electronic energy loss like graphene?

Our paper is a theoretical and numerical exploration of such sandwich structures, and we show that these composites do indeed go a long way towards this best of both worlds.

Preprint: Thermalization of rate-independent processes by entropic regularization

Somewhat belatedly, I’ve just uploaded to the arXiv a preprint of a joint paper with Marisol Koslowski, Florian Theil and Michael Ortiz entitled “Thermalization of rate-independent processes by entropic regularization”. The full paper is slated to appear in Discrete and Continuous Dynamical Systems – Series S in early 2013.

The topic of the paper is a cute little extension of some of my PhD work, in which we show that the effect of coupling a rate-independent process (a decent model for plastic evolutions such as dry friction) to a heat bath (injecting a bit of statistically disordered energy) is equivalent to “softening” the dissipation potential by taking its Cramér transform (a smoothing and strict convexification procedure often used in large deviations theory).
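For readers who haven’t met it before, here is the standard large-deviations definition of the Cramér transform, stated in generic notation (not necessarily the notation of the paper): the Cramér transform of a probability measure \nu on \mathbb{R} is the Legendre–Fenchel transform of its cumulant generating function,

\Lambda^{*}(x) = \sup_{\lambda \in \mathbb{R}} \Bigl( \lambda x - \log \int_{\mathbb{R}} e^{\lambda y} \, \nu(\mathrm{d}y) \Bigr).

The function \Lambda^{*} is always convex and lower semicontinuous, and under suitable conditions on \nu it is smooth and strictly convex, which is the sense in which the transform “softens” a non-smooth dissipation potential.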
