Past Working Groups

2019

Year-long Program: Games, Decisions, Risk and Reliability (GDRR)

Fall Semester 2019: Program on Deep Learning

Spring Semester 2020: Program on Causal Inference

2018

Program on Statistical, Mathematical, and Computational Methods for Precision Medicine (PMED)

Working Group Group Leaders
Sequential Decision Making (observational data) Working Group Leaders: Dan Lizotte (University of Western Ontario) and Erica Moodie (McGill University)
Subgroup (formerly WG6): Andrew Roberts (Cerner Corp.)

Description: The objective was to develop and apply methodology for sequential decision-making using observational data derived from patient care. The focus was on using electronic health record data to derive sequential diagnostic and treatment strategies, particularly for diseases that entail substantial burden and that are sometimes difficult to distinguish from similar conditions.

Observational Microbiome Working Group Leader: Li Ma (Duke University)

Description: The “microbiome” group studied and developed models and methods for effective analysis of microbiome data in the context of predicting health outcomes and inferring the underlying biological processes. Problems investigated included modeling the latent structure in compositional data, integrating microbiome data with other “-omics” data, and modeling the dynamics of microbiome data.

mHealth Working Group Leader: Rumi Chunara (New York University)

Description: The agile nature of mobile tools (spanning, but not limited to, phones, wearables, and online media) creates new opportunities for both dynamic treatment and dynamic observation in health. At the same time, these new data and interventions raise statistical challenges, including the observational nature of the captured data, dynamic and opt-in sampling frames, and the extraction of appropriate spatial and temporal features from the data. This working group brought together experts and those interested in mobile health tools across the statistical, computational, and health domains.

Neural Networks in Biomedicine Working Group Leader: Michael Mayhew (Inflammatix)

Description: Neural networks, deep and otherwise, have recently set the standard for performance in a number of challenging tasks in computer vision, natural language processing, speech recognition, and other disciplines. These highly expressive, graphical models have also proven effective in a range of tasks in biology and medicine. Despite the many application successes of neural networks, theory and even empirical best practices for specification and selection of these models are still under development. Current practices in the field, while effective, could be (and in some cases have already been) enhanced with a statistical approach. We believe there is an opportunity for researchers from statistical and mathematical disciplines to contribute and adapt their perspectives and techniques for model specification, estimation of predictive uncertainty, incorporation of prior/domain information, and other common statistical tasks to the field of neural networks. This working group aimed to create an updated (the last comparable effort being Cheng and Titterington in 1994), open, and community-driven review article bridging exciting recent developments in statistical and neural network modeling and at their interface. The review highlighted state-of-the-art statistics-based neural network models, developed rough guidelines for carrying out routine statistical tasks in the neural network context, and listed software and data resources for the development and application of neural network models in biomedical research. The working group leveraged this knowledge to develop and evaluate neural network models for detection of structural abnormalities in a publicly available set of musculoskeletal radiographs.

Model Learning for Model Selection Working Group Leader: Heiko Enderling (Moffitt Cancer Center)

Description: The aim of this working group was to develop machine learning techniques for identifying model structures that simulate tumor growth and treatment response with maximal predictive power. The working group focused on aligning mathematical, statistical, and bioinformatics concepts, combining machine learning for parameter identification with statistics for model selection to arrive at a model learning platform.

Working Group VI
** Working Group VI is a subgroup of Working Group I – refer to the information presented in Working Group I **
Tumor Heterogeneity Working Group Leaders: Kevin Flores (NC State University); Erica Rutter (NC State University) and John Nardini (SAMSI/NC State University)

Description: In vivo tumors exhibit a broad range of heterogeneity: spatial, genotypic, phenotypic, and environmental. A fundamental question the group aimed to address was how, why, and whether heterogeneity is important, and what can be done about it. The group approached these issues by seeking to understand this variability, making use of new imaging tools and recent mathematical and statistical modeling techniques.
The group identified a few specific problems to tackle:

  • Survey, in a review article, the different mathematical models that incorporate heterogeneity
  • Examine multi-treatment scenarios in the context of different types of heterogeneity: why is adaptive therapy so successful with a simple measurement of tumor volume despite such heterogeneity, and how does this differ across cancer types?
  • Compare multiple models of cell competition in light of constrained sources of heterogeneity (such as mobility)

Clinical Trials in Precision Medicine Working Group Leader: Elizabeth Slate (Florida State University)

Description: The goal of developing personalized medicine shifts the focus in clinical trials to decision making for the individual, or for subgroups, rather than the more traditional treatment comparison via average response. This change of focus has implications for all aspects of clinical trials. The group studied the role of heterogeneity/subgroups in clinical trials in three ways: (1) preparing review paper(s) on methods for identification of subgroups, beginning with analysis of existing clinical trials data (where a drug may have “failed” at the population level), then exploring the design of clinical trials for the purpose of identifying subgroups (“responders”), and potentially also studying methods for shrinkage over subgroups such as are used in basket trials; (2) studying the implications of heterogeneity/subgroups for resource allocation in clinical trials, specifically the influence on decision making at interim analysis points; and (3) attempting to quantify the contribution of a single agent in trials that evaluate sequences of agents (such as SMART studies). A fourth area of strong interest was handling multiple, competing utilities in clinical trial design, particularly incorporating patient-specific utility functions.

Course: Spring 2019: Introduction to Dynamic Treatment Regimes Description: This course was offered in conjunction with the SAMSI year-long research Program on Statistical, Mathematical, and Computational Methods for Precision Medicine (PMED).

This course provided a comprehensive introduction to methodology for data-based development and evaluation of dynamic treatment regimes. A dynamic treatment regime is a set of sequential decision rules, each corresponding to a key point in a disease or disorder process at which a decision on the next treatment action must be made. Each rule takes patient information up to that point as input and returns the treatment the patient should receive from among the available options, thus tailoring treatment decisions to a patient’s individual characteristics. Methods were motivated and developed through a formal time-dependent causal inference framework. Examples were drawn from cancer and other chronic disease research and from research in the behavioral, educational, and other sciences.
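For readers new to the area, the following is a minimal sketch of backward-induction Q-learning, one common way to estimate a two-stage dynamic treatment regime. The simulated data, linear Q-functions, and variable names are illustrative assumptions, not material from the course.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical two-stage data: X1 baseline covariate, A1/A2 randomized binary
# treatments coded -1/+1, X2 intermediate covariate, Y final outcome (larger is better).
X1 = rng.normal(size=n)
A1 = rng.choice([-1.0, 1.0], size=n)
X2 = 0.5 * X1 + 0.3 * A1 + rng.normal(scale=0.5, size=n)
A2 = rng.choice([-1.0, 1.0], size=n)
Y = X1 + X2 + A2 * (0.8 * X2 - 0.2) + A1 * (0.5 * X1 + 0.1) + rng.normal(scale=0.5, size=n)

def fit_ols(design, y):
    """Least-squares coefficients for a linear Q-function."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

# Stage 2: model Q2(X1, X2, A2) = b0 + b1*X1 + b2*X2 + A2*(c0 + c1*X2).
D2 = np.column_stack([np.ones(n), X1, X2, A2, A2 * X2])
beta2 = fit_ols(D2, Y)

def q2(x1, x2, a2, b=beta2):
    return b[0] + b[1] * x1 + b[2] * x2 + a2 * (b[3] + b[4] * x2)

# Pseudo-outcome: the predicted outcome under the best stage-2 action.
Ytilde = np.maximum(q2(X1, X2, -1.0), q2(X1, X2, 1.0))

# Stage 1: model Q1(X1, A1) = b0 + b1*X1 + A1*(c0 + c1*X1), fit to the pseudo-outcome.
D1 = np.column_stack([np.ones(n), X1, A1, A1 * X1])
beta1 = fit_ols(D1, Ytilde)

def recommend(x1, x2):
    """Treatments recommended for a new patient under the estimated regime."""
    a1 = 1.0 if beta1[2] + beta1[3] * x1 > 0 else -1.0
    a2 = 1.0 if beta2[3] + beta2[4] * x2 > 0 else -1.0
    return a1, a2

print("example recommendation for (x1=0.7, x2=0.2):", recommend(0.7, 0.2))
```

The backward pass (stage 2 before stage 1) is what lets each rule be tailored to the information available at its decision point.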

Schedule: January 9 – April 24, 2019 / Wednesdays at 4:30PM – 7:00PM

Program on Model Uncertainty: Mathematical and Statistical (MUMS)

Working Group Group Leaders
UQ in Materials Working Group Leaders: Wei Chen (Northwestern University) and Ralph Smith (NC State University)

Description: SubGroup 1-1: Multi-scale Model Calibration and UQ
Leaders: Wei Chen and Chris Hoyle
Uncertainty in materials science shows itself in many places: multi-scale physics and model building for these systems, reduced order models, and surrogates. The general theme of UQ in Materials was studied in several working groups. This group focused on multi-scale roll-up and UQ across scales for an integrated design application and a problem arising in the modeling of welding. Topics that were investigated included: construction of appropriate discrepancy functions, passing of model uncertainty from lower to higher scales, design of multi-scale data collections, and techniques to improve model identifiability.
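As a toy illustration of the discrepancy-function idea mentioned above, the sketch below calibrates a deliberately incomplete simulator against synthetic field data, with and without an explicit discrepancy term. The simulator, data, and quadratic discrepancy basis are hypothetical; the basis deliberately omits the simulator's own term so the two are not confounded, echoing the identifiability concern noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field data: the "true" process has physics the simulator lacks.
x = np.linspace(0.0, 1.0, 30)
y_obs = 2.0 * x + 0.4 * np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=x.size)

# Simplified computer model: linear in x with calibration parameter theta,
# so it cannot reproduce the sinusoidal component of reality.
def simulator(x, theta):
    return theta * x

# Joint least squares over theta and a discrepancy delta(x) = d0 + d2*x^2.
# The discrepancy basis excludes the term x so it is not confounded with the
# simulator term.
design = np.column_stack([x, np.ones_like(x), x**2])
coef, *_ = np.linalg.lstsq(design, y_obs, rcond=None)
theta_disc = coef[0]

# Calibration that ignores the discrepancy, for comparison.
theta_naive = float(np.sum(x * y_obs) / np.sum(x * x))

rms_disc = np.std(y_obs - design @ coef)
rms_naive = np.std(y_obs - simulator(x, theta_naive))
print("theta with / without discrepancy:", round(theta_disc, 3), round(theta_naive, 3))
print("RMS residual with / without     :", round(rms_disc, 3), round(rms_naive, 3))
```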

SubGroup 1-2: Surrogates in Multi-scale Modeling
Leader: Ralph Smith

This group broadly focused on physical and statistical techniques to construct surrogate models that can be employed for system models, design, and control. Topics investigated included: surrogates that maintain physical structure and conservation properties, time- and path-dependent surrogates, and dimension reduction/active subspace construction when constructing surrogates. This group coordinated closely with the Reduced Order Models working group. The group initially considered two applications that have well-developed high-fidelity models and experimental data: hysteresis and rate-dependent modeling for PZT and shape memory alloys, and biological viscoelasticity.

Reduced Order Models (ROMs): Theory and Application Working Group Leaders: Elaine Spiller (Marquette University); Ralph Smith (NC State University); Youssef Marzouk (MIT) and Matthew Plumlee (Northwestern University)

Description: The group sought to quantify the faithfulness/accuracy of reduced order models to the underlying science. We further sought methods for the development of reduced order models that leverage scientific knowledge. The theoretical and practical advancements in reduced order models should be applicable to modern scientific settings.

Prediction Uncertainty and Extrapolation Working Group Leaders: Elaine Spiller (Marquette University); Ralph Smith (NC State University); Youssef Marzouk (MIT) and Matthew Plumlee (Northwestern University)

Description: This research thrust sought to develop methods and case studies for developing physically motivated error models to account for the discrepancy between computational models and reality, resulting in more reliable prediction uncertainties. This thrust considered a number of applications and activities that motivated new modeling approaches and helped to determine overarching principles for identifying settings in which reliable prediction uncertainties can be developed.

Potential projects included:

  • Discrepancy models for OCO-2 or simpler test model
  • Discovering PDE structure to better characterize discrepancy between model and reality
  • Use of stochastic “emulators” to better capture the link between model and reality
  • Discrepancy models in rich/structured design space
  • Designing model runs and physical experiments in additive manufacturing
  • Potential workshop on agent-based models in November
Data Fusion Working Group Leaders: Dongchu Sun (University of Missouri); Chong He (University of Missouri) and Jim Berger (Duke University)

Description:
Integration of multiple data sources to:
1. Infer the parameters/inputs to a computer model (data fusion in the UQ world)
2. Obtain improved inference/predictions on a process (data fusion of model outputs and observations)
3. Deal with datasets of varying quality

Foundations of Model Uncertainty Working Group Leaders: Rui Paulo (Universidade de Lisboa) and Jan Hannig (University of North Carolina – Chapel Hill)

Description: This research group concentrated on fundamental questions arising when considering uncertainty in the presence of both scientific and statistical models. Topics of interest included assessing the performance of a collection of competing scientific models; the detection of active inputs in a scientific model; reconciling, comparing, and interpreting Bayesian and other forms of uncertainty quantification; and the inclusion of a bias term in statistical and in scientific models.
Storm Surge Hazard & Risk Working Group Leaders: Taylor Asher (University of North Carolina – Chapel Hill) and Whitney Huang (University of Victoria [CAN])

Description: This application-focused working group centered on coastal flood hazards, particularly storm surge. It focused on advancing the treatment of uncertainties within flood hazard problems. Emphasis was placed on advancing emulators/surrogates for surge models and on uncertainty quantification/attribution in existing methods. Additional potential topics included:

  • Model inversion for parameter calculation
  • Uncertainty in climate models for hurricane characterization
  • Uncertainty in statistical characterization of hurricane data
Course: Fall 2018: Model Uncertainty: Mathematical and Statistical Description: This course was offered in conjunction with the SAMSI year-long research Program on Model Uncertainty: Mathematical and Statistical.

This course introduced statistical and mathematical sensitivity analysis and uncertainty quantification techniques for large-scale models arising in current applications. This included the construction and verification of surrogate models and emulators for complex simulation models, frequentist and Bayesian inference techniques, and techniques to quantify uncertainties associated with statistical quantities of interest. Examples were drawn from applications in materials science, biology and biomedical sciences, and nuclear engineering.

Schedule: August 28 – December 4, 2018 / Tuesdays at 4:30PM – 7:00PM


2017

Program on Mathematical and Statistical Methods for Climate and the Earth System (CLIM)

Working Group Group Leaders
Parameter Estimation
Working Group Leader: Peter Challenor (Exeter)
Data Assimilation
Working Group Leaders: Chris Jones (UNC-Chapel Hill); Erik Van Vleck (Kansas)
Data Analytics
Working Group Leaders: Doug Nychka (NCAR); Vipin Kumar (Minnesota)
Climate Prediction
Working Group Leader: Leonard Smith (London School of Economics)
Extremes
Working Group Leaders: Dan Cooley (Colorado State); Richard Smith (UNC-Chapel Hill)
Stochastic Parameterization
Working Group Leader: Adam Monahan (University of Victoria, Canada)
Environmental Health
Working Group Leader: Brian Reich (NCSU)
Food Systems
Working Group Leaders: Hans Kaper (Georgetown); Mary Lou Zeeman (Bowdoin College)
Ice Dynamics
Working Group Leaders: Chris Jones (UNC-Chapel Hill); Alberto Carrassi (NERSC) and Murali Haran (Penn State)
Detection and Attribution
Working Group Leaders: Dorit Hammerling (NCAR) and Matthias Katzfuss (Texas A&M)
Risk & Coastal Hazards
Working Group Leaders: Brian Blanton (RENCI); Slava Lyubchich (Maryland) and Richard Smith (UNC-Chapel Hill)
Statistical Oceanography
Working Group Leaders: Michael Stein (University of Chicago) and Mikael Kuusela (Carnegie Mellon University)
Course: Fall 2017: Statistics for Climate Research Description: This course is being offered in conjunction with the SAMSI year-long research program on Mathematical and Statistical Methods for Climate and the Earth System. The course will cover statistical and computational methods for the analysis of data arising in climate research. Specific topics will include:

  • Time series methods and assessments of trends in climatological data
  • Analysis of large spatial datasets in climate research
  • Methods based on Empirical Orthogonal Functions
  • Climate Informatics: the application of machine learning methods and high-performance computing in climate research
  • Statistics for climate extremes
  • Climate and Health

Schedule: August 29 – December 5, 2017 / Tuesdays at 4:30PM – 7:00PM

Course: Spring 2018: Data Assimilation in Dynamical Systems Description: This course is being offered in conjunction with the SAMSI year-long research program on Mathematical and Statistical Methods for Climate and the Earth System. The course will cover statistical and computational methods for the analysis of data arising in climate research. Specific topics will include:

  • Dynamical systems: unstable manifolds and attractors, Lyapunov exponents, sensitivity to initial conditions and concept of predictability.
  • Data assimilation and filtering theory: Bayesian viewpoint
  • Nonlinear Filtering: Particle filtering and sampling methods
  • Advanced topics: parameter estimation, Lagrangian data assimilation

Schedule: January 16 – April 24, 2018 / Tuesdays at 4:30PM – 7:00PM

Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applied Mathematics (QMC)

Working Group Description
QMC Working Group I: Parallel Monte Carlo Neutronics Simulation Working Group Leaders: C.T. Kelley (NCSU); T.M. Evans (Oak Ridge National Laboratory) and S.P. Hamilton (Oak Ridge National Laboratory)

Description: We simulate neutron flux intensity far from the source or behind shielding. Applications include nuclear power plants where we model radiation in machine rooms near the core, but behind shielding. Monte Carlo simulation randomly generates particles using information on sources and scattering properties of the problem domain and then follows those particles until they either leave the domain or are absorbed. This is not a PDE-based approach and has no discretization error.

Fluxes are very small and standard MC would resolve the radiation field inefficiently. We address this problem with CADIS (Consistent Adjoint-Driven Importance Sampling), which eliminates particles that move away from a region of interest. The computation must be done in parallel, and the research focus of this project is to optimize the domain decomposition that is central to parallelization.
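The toy sketch below illustrates only the importance-sampling idea behind such variance reduction; it is not CADIS or the production code described above, just a one-dimensional absorbing-slab example with an exponentially biased path-length distribution (all parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy deep-penetration problem: a purely absorbing slab of thickness L with
# total cross section sigma.  The chance a particle crosses without a
# collision is exp(-sigma*L), which is tiny, so analog Monte Carlo wastes
# almost every history.
sigma, L, n = 1.0, 15.0, 200_000
exact = np.exp(-sigma * L)

# Analog MC: sample the free path and count the rare crossings.
path = rng.exponential(1.0 / sigma, size=n)
analog = np.mean(path > L)

# Importance sampling: draw from a stretched distribution (smaller sigma_b),
# so crossings are common, and reweight by the likelihood ratio.
sigma_b = 1.0 / L   # biased cross section chosen so the mean free path ~ L
path_b = rng.exponential(1.0 / sigma_b, size=n)
weights = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * path_b)
scores = weights * (path_b > L)

print(f"exact              : {exact:.3e}")
print(f"analog MC          : {analog:.3e}")
print(f"importance sampling: {scores.mean():.3e} +/- {scores.std() / np.sqrt(n):.1e}")
```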

QMC Working Group II: Probabilistic Numerics Working Group Leaders: Chris Oates (University of Newcastle upon Tyne and Alan Turing Institute, UK) and Tim Sullivan (Free University of Berlin, Germany)

Description: The accuracy and robustness of numerical predictions that are based on mathematical models depend critically upon the construction of accurate discrete approximations to key quantities of interest. The exact error due to approximation will be unknown to the analyst, but worst-case upper bounds can often be obtained. This working group aims, instead, to develop Probabilistic Numerical Methods, which provide the analyst with a richer, probabilistic quantification of the numerical error in their output, thus providing better tools for reliable statistical inference.

QMC Working Group III: Sampling by Interacting Particle Systems Working Group Leaders: Jianfeng Lu (Duke University) and Jonathan Mattingly (Duke University)

Description: In recent years, several algorithms, based on interacting samplers, have been proposed and further advanced; such algorithms have been used in various settings, including high dimensional Bayesian sampling, particle filtering, computational physics, etc. This working group plans to explore the connections between various ideas proposed in different communities and try to advance the theoretical understanding of efficiency and design principle for such sampling algorithms.

QMC Working Group IV: Representative Points for Small-data and Big-data Problems Working Group Leaders: Roshan Vengazhiyil (Georgia Institute of Technology) and Simon Mak (Georgia Institute of Technology)

Description: Representative points, which compact a probability distribution into a finite point set, are useful for a wide array of “small-data” and “big-data” problems. Small-data problems arise naturally in many engineering applications, where a key challenge is to allocate limited experimental runs for performing functional approximation, uncertainty propagation or design optimization. Similarly, given the massive volume, variety and velocity of big-data (particularly in Bayesian problems), the reduction of such datasets using representative points allows for meaningful and timely analysis. This working group aims to investigate the theory and application of representative points to the aforementioned small-data and big-data problems, with an emphasis on engineering applications and Bayesian computation.
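As a small illustration of the idea, the sketch below compresses a bivariate normal distribution into k representative points by running Lloyd's (k-means) algorithm on a large sample; the distribution, k, and the k-means criterion are illustrative choices, not the specific constructions studied by the group.

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw a large sample from an illustrative bivariate normal and compress it
# into k representative points with Lloyd's (k-means) algorithm.
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=20_000)
k = 8
centers = sample[rng.choice(sample.shape[0], size=k, replace=False)]

for _ in range(50):
    # Assign each sample point to its nearest representative point.
    d2 = ((sample[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    # Move each representative point to the mean of its cell (keep it if the cell is empty).
    new_centers = np.array([sample[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print("representative points:\n", np.round(centers, 3))
```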

QMC Working Group V: Sampling and Analysis in High Dimensions When Samples Are Expensive Working Group Leaders: Fred J. Hickernell (Illinois Institute of Technology); Mac Hyman (Tulane University) and Paul Constantine (University of Colorado)

Description: For some problems, generating observations (multivariate function values) is expensive, such as running a time-consuming computer code or conducting a real-world experiment. This can result in having only a few samples for high-dimensional input variables. These samples can be noisy and may not be available for some important ranges of the parameters. Examples include medical trials, large-scale computer simulations such as climate models, and financial market data. This working group will study how best to use the available small samples and how to strategically guide the selection of future samples. The group will consider algorithms based on assumptions about the underlying input-output relationship to improve the effectiveness of the statistical analysis under these difficult constraints. The assumption will be that the cost of the observations will far exceed the computational cost of the post-processing algorithms.

Working Group VI: Adaptive Choice of Sobolev Space Weights Working Group Leader: Art Owen (Stanford University)

Description: High dimensional numerical integration problems can often be solved efficiently using algorithms devised for weighted Sobolev spaces. A given black box function may belong to all of the spaces people use, which raises the question of which weights should be used. This WG looks at methods to select weights using ideas from global sensitivity analysis and active subspaces.
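One way to operationalize the global-sensitivity idea is sketched below: pick-freeze (Saltelli-type) estimates of first-order Sobol' indices for a black-box function, which could then serve as a rough proxy for the relative importance of each coordinate when choosing weights. The test function, dimension, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

d, n = 6, 50_000

def f(x):
    # Illustrative black-box integrand on [0,1]^d with rapidly decaying
    # coordinate importance.
    coeff = 1.0 / (np.arange(1, d + 1) ** 2)
    return np.sum(coeff * (x - 0.5), axis=1) + np.prod(1 + 0.2 * (x - 0.5), axis=1)

# Pick-freeze estimator: two independent sample blocks A and B.
A = rng.random((n, d))
B = rng.random((n, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

S = np.empty(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                       # replace ("pick") coordinate i only
    S[i] = np.mean(fB * (f(AB) - fA)) / var  # first-order Sobol' index estimate

print("estimated first-order indices:", np.round(S, 4))
```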

Working Group VII: Multivariate Decomposition Method (MDM) and Applications Working Group Leaders: Dirk Nuyens (KU Leuven – Belgium) and Alec Gilbert (University of New South Wales Sydney, Australia)

Description: The multivariate decomposition method allows us to approximate infinite-dimensional problems by a sum of finite-dimensional subproblems in the “active set” whose size depends on the requested error demand. The weight structure imposed on these finite-dimensional subspaces often implies that the maximum number of variables in each subspace grows very slowly in terms of the requested error demand. Thus we only need to solve many low-dimensional problems (say, 1, 2, 3 dimensional) and a few medium-dimensional problems (say, up to 10 dimensions) to reach a reasonable error demand. These problems can be solved in parallel. The working group will develop efficient implementation of the MDM and consider its application to PDE problems with random coefficients as well as other potential applications.

Working Group VIII: Application of QMC to PDEs with random coefficients Working Group Leaders: Frances Kuo (University of New South Wales Sydney, Australia) and Alec Gilbert (University of New South Wales Sydney, Australia)

Description: Quasi-Monte Carlo methods have recently been successfully used in calculating expected values of quantities of interest which depend on the solutions of PDEs with random coefficients, for so-called “uniform” and “lognormal” random fields. The construction of “randomly shifted lattice rules” and “interlaced polynomial lattice rules” based on decay properties of the random field has been implemented in the QMC4PDE software package.

The working group will extend the theory and implementation of these methods in various directions: for example, to generate lognormal random fields by the circulant embedding technique, to stochastic wave propagation, to neutron diffusion, to Bayesian inverse problems in uncertainty quantification, and more.
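For concreteness, here is a minimal randomly shifted rank-1 lattice rule. The Korobov-style generating vector is a placeholder rather than one of the component-by-component constructions implemented in QMC4PDE, and the integrand is a simple smooth test function.

```python
import numpy as np

rng = np.random.default_rng(5)

n, d, shifts = 4093, 8, 16          # n prime, d dimensions, 16 independent random shifts
a = 1487                            # Korobov parameter (illustrative)
z = np.array([pow(a, j, n) for j in range(d)])   # generating vector z_j = a^j mod n

def integrand(x):
    # Smooth product-form test integrand on [0,1]^d with exact integral 1.
    return np.prod(1.0 + (x - 0.5) / (1 + np.arange(x.shape[1])) ** 2, axis=1)

k = np.arange(n)[:, None]
base = (k * z[None, :] % n) / n     # the unshifted rank-1 lattice points

estimates = []
for _ in range(shifts):
    pts = (base + rng.random(d)) % 1.0        # apply one uniform random shift
    estimates.append(integrand(pts).mean())

estimates = np.array(estimates)
print("QMC estimate :", estimates.mean())
print("std. error   :", estimates.std(ddof=1) / np.sqrt(shifts))
```

The random shifts give an unbiased estimate together with a practical error estimate from the spread across shifts.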

Working Group IX: Multivariate Integration and Approximation in the Context of IBC Working Group Leader: Peter Kritzer (Johann Radon Institute, Austria)

Description: The working group studies multivariate algorithms for numerical integration and function approximation in the context of Information-Based Complexity (IBC). Apart from deriving error bounds for these algorithms, we are particularly interested in studying how much information about a given problem is needed to solve it to within a given error bound.

Working Group X: Simpson Plays Billiards? Working Group Leader: Dirk Nuyens (KU Leuven – Belgium)

Description: Lattice rules are normally applied in the context of integrating periodic functions. There are however several results known in which lattice rules can be used in the context of non-periodic function spaces. We are interested in obtaining higher-order convergence for such non-periodic function spaces.

As a first wild idea, inspired by an interpretation of tent-transformed lattice rules as trapezoidal rules over a billiard ball trajectory, we might look at a fractal-transform which generates a Simpson’s rule. We are then interested in deriving worst-case error bounds which show higher-order convergence.


2016

Program on Optimization (OPT)

Working Group Description
Statistical Inverse Problems
Contact: [email protected]
Working Group Leaders: Alen Alexanderian, Arvind Saibaba
EM & MM Algorithms
Working Group Leaders: Hua Zhou, Eric Chi, Ekkehard Sachs, Dirk Lorenz
SAMSI Webmaster: Liuyu Hu
Sums of Squares and Semidefinite Programming Contact: [email protected]
Working Group Leaders: David Papp
SAMSI Webmaster: Sercan Yildiz
Bayesian Optimization & Decision Analysis Contact: [email protected]
Working Group Leaders: Mike West
SAMSI Webmaster: Mike Lindon
Mixed Integer – PDE Constrained Optimization Contact: [email protected]
Working Group Leaders: Sven Leyffer, Ekkehard Sachs and Bart van Bloemen Waanders
SAMSI Webmaster: Joey Hart
Modeling & Computation of Equilibrium Problems Contact: [email protected]
Working Group Leaders: Shu Lu
SAMSI Webmaster: Hongsheng Liu
Applications to Energy and the Environment Contact: [email protected]
Working Group Leaders: Mihai Anitescu and Jianfeng Lu
SAMSI Webmaster: Peter Diao
Probabilistic Numerics Working Group Leaders: David Bortz and Vanja Dukic
Small Area Estimation Contact: [email protected]
Radiotherapy Optimization Contact: [email protected]
Working Group Leaders: David Papp
SAMSI Webmaster: Melissa Gaddy
Faster Statistics Contact: [email protected]
Working Group Leaders: Xiaoming Huo
SAMSI Webmaster: Cheng Huang
Optimization for Electronic Structure Models Working Group Leaders: Jianfeng Lu
Minimum action principle for multi-physics problems Working Group Leaders: Jianfeng Lu
SAMSI Webmaster: Jeff LaComb
Course: Fall 2016: Numerical Optimization and Applications – Part I Description: In this course we consider the numerical solution of constrained and unconstrained optimization problems. We review the classical schemes with a special emphasis on new and modern developments in this area, such as nonlinear trust region methods, inexact Newton methods, or conditional gradient methods.
We also analyze how these methods can be specially adapted to optimization problems with constraints that come from ordinary, partial, or even stochastic differential equations. The computation of Lagrange multipliers plays an important role in this context, both from a traditional point of view and in recent research using second-order adjoints.
Another important topic in this course will be applications in optimization which come from problems in statistics and which often exhibit special structure that can be exploited to reduce computation time and storage.
Schedule: Tuesdays 4:30 pm at SAMSI, Research Triangle Park, NC, beginning Tuesday, September 6, 2016
No class during week of Thanksgiving: Tuesday November 22, 2016
Last class: Tuesday, November 29, 2016
Course: Numerical Optimization and Applications – Part II Description: The main topic of this class is the numerical solution of constrained optimization problems. We review the classical schemes with a special emphasis on new and modern developments in this area. We also analyze how the algorithms can be specially adapted to optimization problems with constraints that come from ordinary, partial, or even stochastic differential equations. Another important topic in this course will be applications in optimization which come from problems in statistics and which often exhibit special structure that can be exploited to reduce computation time and storage.
Schedule: Tuesdays 4:30 pm at SAMSI, Research Triangle Park, NC, beginning Tuesday, January 17, 2017
(No class March 14, 2017 for Spring Break)
Last class: Tuesday, April 25, 2017

Program on Statistical, Mathematical and Computational Methods for Astronomy (ASTRO)

Working Group Description
Uncertainty Quantification and Astrophysical Emulation (UQAE) Contact: [email protected]
Working Group Leaders: Derek Bingham (Simon Fraser) and Earl Lawrence (LANL)
SAMSI Webmaster: David Stenning
Synoptic Time Domain Surveys (TDA) Contact: [email protected]
Working Group Leaders: Ashish Mahabal (Astro, Caltech) and G. Jogesh Babu (Stat, PSU)
SAMSI Webmaster: David Jones
Multivariate and Irregularly Sampled Time Series (MISTS) Contact: [email protected]
Working Group Leaders: Ben Farr (Astro, U. Chicago) and Soumen Lahiri (Stat, NCSU)
SAMSI Webmaster: Hyungsuk Tak/David Jones
Astrophysical Populations (AP) Contact: [email protected]
Working Group Leaders: Jessi Cisewski (Stat, Yale) and Eric Ford (Astro, Penn State)
SAMSI Webmaster: David Stenning
Statistics, Computation, and Modeling in Cosmology (COSMO) Contact: [email protected]
Working Group Leaders: Jeff Jewell (Astro, JPL) and Joe Guinness (Stat, NCSU)
SAMSI Webmaster: Hyungsuk Tak
Course: Fall 2016: Analytical Methods and Applications to Astrophysics and Astronomy Description: With the advance of digital imaging techniques, astronomy has become a data science in which knowledge creation depends on applying and developing sophisticated statistical methodology to large and/or complex data sets. This course will cover common types of data in astronomy such as light curves, spectra, and images as well as statistical methods used for analyzing these data sets, such as functional data analysis, measurement error models, hierarchical models, survival analysis, and machine learning techniques. An emphasis will be placed on the complexity of the inference tasks faced by astronomers and the propagation of uncertainty across several levels of inference. Guest lecturers will discuss topical issues in the analysis of astronomy data. Students will complete a final project in groups based on data or statistical methodology presented during the course.
The course will be aimed at a wide audience in an effort to appeal to students with either an astronomy or statistics background. While there are no formal prerequisites students will benefit from having some past experience with medium to large data sets and a familiarity with statistical methods such as maximum likelihood and regression. Theory will be kept to a minimum with an emphasis instead on astronomy data and statistical methodology.
Schedule: Wednesdays 4:30 pm at SAMSI, Research Triangle Park, NC, beginning Wednesday, September 7, 2016
No class during week of Thanksgiving: Wednesday November 23, 2016
Last class: Wednesday, November 30, 2016
Course: Spring 2017: Time Series Methods for Astronomy Description: The course starts with an overview of variable cosmic phenomena and characteristics of astronomical time series. Classical time series analysis in the time and frequency domain for evenly spaced data will be reviewed. This includes Gaussian and Poisson processes, smoothing and interpolation, autocorrelation and autoregressive modeling, Fourier analysis, and wavelet analysis. The class then proceeds to treatments of unevenly spaced time series commonly found in astronomical datasets, again in both the time and frequency domain. Guest lectures by expert SAMSI scholars developing advanced techniques for unevenly spaced data will be featured. Throughout the course, methods will be exercised using the public domain R statistical software environment using contemporary astronomical datasets. Students will complete R-based homeworks and a personal project in time series analysis involving a dataset of their choice.
Schedule: January 11 – April 26, 2017 / Wednesdays at 4:30PM – 7:00PM

2015

Working Group Description
Multiple Sources of Bias Contact: [email protected]
Working Group Leaders: Sandy Zabell, Cliff Spiegelman
Pattern Evidence Contact: [email protected]
Working Group Leaders: Karen Kafadar, Anil Jain
Forensic Experiments Contact: [email protected]
Working Group Leader: Dennis Lin
Statistical Evidence Contact: [email protected]
Working Group Leaders: Colin Aitken, Anjali Mazumder
Possible Matches Contact: [email protected]
Working Group Leader: Len Stefanski
Ballistic Images Contact: [email protected]
Working Group Leaders: Nell Sedransk, Cliff Spiegelman, Sarena Wiesner
Forensic Evidence Contact: [email protected]
Working Group Leader: Cedric Neumann
Clinical Brain Imaging Contact: [email protected]
Working Group Leader: Ciprian Crainiceanu, Johns Hopkins University
Computational Approaches to Large-scale Inverse Problems with Applications to Neuroscience Contact: [email protected]
Working Group Leader: Arvind Saibaba, North Carolina State University
Understanding Neuromechanical Processes in Locomotion with Physical Modeling and Network Analysis Contact: [email protected]
Working Group Leaders: Laura Miller, UNC and Katie Newhall, UNC
Mathematical and Statistical Approaches to Modeling Brain Networks Contact: [email protected]
Working Group Leaders: Rob Kass, Carnegie Mellon University; Uri Eden, Boston U; Mark Kramer, Boston U
Theory of neural networks: structure and dynamics Contact: [email protected]
Working Group Leaders: Carina Curto, PSU; Brent Doiron, U. of Pittsburgh; Chris Hillar, MSRI
Acquisition, Reconstruction, and Processing of MRI Data Contact: [email protected]
Working Group Leader: Daniel Rowe, Marquette University
Imaging Genetics Contact: [email protected]
Working Group Leader: Hongtu Zhu, UNC
Structural Connectivity Contact: [email protected]
Working Group Leaders: David Dunson, Duke University; Hongtu Zhu, UNC
Functional Imaging Methods and Functional Connectivity Contact: [email protected]
Working Group Leaders: Hernando Ombao, UCI; John Aston, University of Cambridge
Big Data Integration in Neuroimaging Contact: [email protected]
Working Group Leaders: Martin Lindquist and Timothy Johnson
Analysis of Optical Imaging Data Contact: [email protected]
Working Group Leader: Mark Reimers

2014

Working Group Description
Fall Course 2014-2015: Stochastic Process Modeling for Ecological Processes
Schedule: Wednesdays 4:30 pm at SAMSI, Research Triangle Park, NC, beginning September 3, 2014. No class during week of Thanksgiving: Wednesday, November 26, 2014. Last class: December 3, 2014.
Instructors: L. Miller, A. Lloyd, D. Adalsteinsson, J. Clark, A. Gelfand
Description: The central theme of this course is the use of stochastic process models to introduce desired behaviors into models for complex ecological processes. Objectives include both emulation and data analysis. Specifically, there will be a focus on (stochastic) PDEs for dynamical systems, with application to invasive and emerging infections (Lloyd) and to dispersal and spatial effects in competition models (Miller). There can be demanding coding aspects to this work, with the possibility of some flipped-classroom “lectures” (Adalsteinsson, Miller). On the more statistical side, the emphasis would be on hierarchical modeling, with examples of hierarchical models for inference on seed dispersal and PDEs for population growth (Clark), supplemented with space-time diffusions for invasive species (Gelfand). Model development here would be complemented with some computer labs in R.
Registration for this course is being processed through your respective university: UNC-CH: STOR 930 Section 001 (cross-listed with MATH 892 Section 001); Duke: STA 790.04; NCSU: MA 810.002. For additional information about this course, send e-mail to [email protected]

2013

Working Group Description
Kepler Working Group This group is specifically geared toward starting discussions to help people prepare for the actual meeting in June 2013. There are already discussion threads where you are encouraged to: introduce yourself, discuss ideas for focused working groups, and suggest background reading that may be helpful for other participants. Please contribute via the comments section of each discussion. Also, feel free to start a discussion topic of your own. (Membership is limited to invited participants.)
Online Streaming and Sketching Purpose: Methodology and fast algorithms for computing leverage scores, with application to astronomy and genomics. Preliminary list of topics:

  • Leverage scores: computation, fast approximation, sensitivity, numerical stability of algorithms, behavior under sketching
  • Randomized low-rank approximations: subset selection, CUR, Nyström, PCA, robust PCA, subsampled regression, regression on manifolds, construction of robust linear models, windowed and online streaming approaches
  • Randomized sketching and importance sampling strategies: methodology and numerical computation
  • Local vs. global: eigenvector and invariant subspace localization, eigen-analysis of data connectivity matrices, numerical stability of streaming and updating methods, relation to generalized eigenvalue problems
  • Data fusion/integration: robust and fast/streaming methods with application to galaxy formation and evolution
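As background for the leverage-score topics listed above, the sketch below computes exact leverage scores from a thin QR factorization and a cheap approximation based on a small sketch. The matrix, sketch size, and the single-projection variant are illustrative assumptions; a dense Gaussian sketch is used only for simplicity, where practical algorithms would use fast or sparse transforms and possibly a second projection.

```python
import numpy as np

rng = np.random.default_rng(6)

# Tall illustrative matrix with a handful of unusually influential rows.
m, d = 20_000, 20
A = rng.normal(size=(m, d)) @ np.diag(np.linspace(1.0, 10.0, d))
A[:50] *= 20.0

# Exact leverage scores: squared row norms of the thin-QR factor Q.
Q, _ = np.linalg.qr(A, mode="reduced")
lev_exact = np.sum(Q * Q, axis=1)

# Approximation: orthogonalize A against the R factor of a small sketch S @ A.
r = 200
S = rng.normal(size=(r, m)) / np.sqrt(r)
_, R = np.linalg.qr(S @ A, mode="reduced")
AR = np.linalg.solve(R.T, A.T).T          # A @ R^{-1} without forming the inverse
lev_approx = np.sum(AR * AR, axis=1)

rel_err = np.abs(lev_approx - lev_exact) / lev_exact
print("median / max relative error:", np.median(rel_err), rel_err.max())
```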
2013-14 Course: Geometric and Topological Summaries of Data and Inference
The course will focus on geometric and topological summaries computed from data that are routinely generated across science and engineering. The focus is on modeling objects that have geometric or topological structure. Examples include curves, or surfaces such as bones or teeth, or objects of higher dimension such as positive definite matrices, or subspaces that describe variation in phenotypic traits due to genetic variation, or the geometry of multivariate trajectories generated from cellular processes. Specific topics will include the following.
(1) Geometry in statistical inference — Material covered will include recent work in machine learning and statistics on the topics of manifold learning, subspace inference, factor models, and inferring covariance/positive definite matrices. Applications will be used to highlight methodologies. The focus will be on methods used to reduce high-dimensional data to low-dimensional summaries using geometric ideas.
(2) Topology in statistical inference — Material covered will focus on probabilistic perspectives on topological summaries such as persistence homology and on inference of topological summaries based on the Hodge operator and the Laplacian on forms. Again, applications will be used to highlight methodologies.
(3) Random geometry and topology — Material will cover the geometry and topology induced by random processes. Topics include the topology of random clique complexes, random geometric complexes, limit theorems of Betti numbers of random simplicial complexes.
(4) Applications of the Laplacian operator in data analysis — Material will cover the various uses of the Laplacian in data analysis, including manifold learning, spectral clustering, and Cheeger inequalities. More advanced topics will include the Hodge operator or combinatorial Laplacian and applications to data analysis including decomposing ranked data into consistent and inconsistent components, inference of structure in social networks, and decomposing games into parts that have Nash equilibria and parts that cycle.
Prerequisites: Background in calculus and linear algebra and some reasonable foundation in statistics and probability.
Course Format: The main instructor will be Sayan Mukherjee but there will be several guest lecturers, with material and instructors paralleling certain of the major themes in the 2013-2014 year-long SAMSI program on Low-Dimensional Structure in High-Dimensional Systems (LDHD). All course updates including example projects, reading material, and lecture slides will be posted at http://www.stat.duke.edu/~sayan/SAMSI/.
Fall 2013 Course: Computational Methods for Social Sciences
Coordinating Instructor: Richard L. Smith (Department of STOR, UNC)
Lead Instructors: David Banks, Department of Statistical Science, Duke; Thomas Carsey, Department of Political Science and Odum Institute, UNC; Peter Mucha, Departments of Mathematics and Applied Physical Sciences, UNC; Jerry Reiter, Department of Statistical Science, Duke
Time and Place: Wednesdays from 4:30pm-7:00pm beginning August 28, 2013; Statistical and Applied Mathematical Sciences Institute, 19 T.W. Alexander Drive, RTP, N.C. First class is Wednesday, August 28, 2013.
Course description: The Statistical and Applied Mathematical Sciences Institute (SAMSI) is hosting a year-long research program on Computational Methods for Social Sciences. As part of this program, there will be an advanced graduate course on the topics of the program. The syllabus will cover the three main research themes in the program: (a) Social Networks; (b) Statistical Methods for Censuses and Surveys; (c) Agent-based Models. The course will meet once a week at SAMSI and will consist primarily of lectures by senior researchers. The course will be suitable for advanced graduate students in Mathematics, Statistics, Biostatistics or quantitative social sciences (e.g. Sociology, Political Science, Psychology). There are no specific prerequisites. Assessment will be by class presentations or a written project, the exact format to be determined partly based on the number of students participating. For additional information about the course, send e-mail to [email protected].
CMSS: Social Networks
CMSS: Causal Inference
CMSS: Censuses and Surveys This working group is investigating several methods related to computation in surveys and censuses. The group is particularly interested in combining information from multiple sources. One specific application is record linkage. How does one account for uncertainty in imperfect linkage? How effective is creating a joint distribution from available information compared to imperfect, multi-way record linkage? What should one do with survey weights in linked files? Another specific application is merging big data with probability samples. Can we use information from surveys to help generalize analyses from large-scale administrative/private or organic data?
CMSS: Weighting in Surveys
CMSS: Agent-based Models
High-dimensional Graphical Models Graphical models are now a standard tool in statistics, and high-dimensional graphical models have been studied extensively. One of the ideas behind graphical models is to break down a high-dimensional problem into several low-dimensional ones. The difficult problem is model selection: one way to do that is to identify the low-dimensional components and see how they fit together. Meinshausen and Bühlmann (2006) were among the first to introduce the local method of neighborhood regression for model selection among graphical Gaussian models. More recent approaches include local l_1 regularized logistic regression for model selection among discrete Ising models, the concept of sparse local separators, and the usage of neighborhood structure. One of the activities of the working group will be a review of the current literature on local methods for structure estimation in high dimensions. Another will be the exploration of geometric and topological local methods for the identification of structure in graphical models. Suggestions for other activities are, of course, welcome.
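A minimal sketch of the neighborhood-regression idea follows, assuming scikit-learn is available: each variable is lasso-regressed on all the others, and the selected coefficients define candidate edges. The chain-structured Gaussian model, fixed penalty, and OR-rule for symmetrizing are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

# Simulate from a Gaussian graphical model whose true graph is a chain.
p, n = 15, 400
precision = (np.eye(p)
             + np.diag(0.45 * np.ones(p - 1), 1)
             + np.diag(0.45 * np.ones(p - 1), -1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(precision), size=n)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Neighborhood selection: lasso-regress each node on the remaining nodes.
adj = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = [k for k in range(p) if k != j]
    fit = Lasso(alpha=0.1).fit(X[:, others], X[:, j])
    adj[j, np.array(others)[fit.coef_ != 0]] = True

adj = adj | adj.T   # OR rule: keep an edge if either regression selects it
print("estimated edges:", sorted(map(tuple, np.argwhere(np.triu(adj, 1)))))
```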
Data Analysis on Hilbert Manifolds and their Applications The theoretical focus of this Working Group is nonparametric statistics on Hilbert manifolds and dimensionality reduction from infinity to low dimension as small as 1. This will include an extension of CLT for iid variables to infinite dimensional Hilbert manifolds and beyond — to infinite dimensional stratified spaces — as well as the neighborhood hypothesis testing methodology, extending recent results to two or multiple samples on Hilbert manifolds or even to infinite dimensional stratified spaces. Applications to Hilbert manifolds data analysis that will be potentially discussed, depending on the WG participants, could be to any of the following: i. MRI imaging including MRI, DTI, or f-MRI; ii. CT imaging; iii. eye medical imaging, including stereo imaging; iv. 2D and 3D scene recognition from similarity shapes of curves and surfaces; v. projective shape of 3D scenes reconstructed from digital camera imaging; vi. color and texture imaging data; vii. spatial and temporal data on the geoid; viii. plate tectonics data and continental drift; ix. paths of eyes of storms on planet Earth; x. volcanic activity, earthquakes, and other Earth Sciences data; xi. astronomy data; xii. solar system data; xiii. DNA based data.
Nonlinear Low-dimensional Structures in High-dimensions for Biological Data Many problems in biology, particularly at the molecular level, involve very high dimensions. Traditional methods, such as principal component analysis, provide an important first step toward understanding the underlying lower dimensional structure. Nevertheless, finding even lower-dimensional and possibly nonlinear structures requires more qualitative procedures. Some possible candidates, among others, include geodesic principal component analysis, SiZer analysis, and persistent homology. The aim of this Working Group will be to describe and understand the nature of certain biological data coming from areas such as brain artery tree networks, gene networks, biomechanical motion data, and metagenomics, to name a few. We will explore and formulate strategies to best apply these qualitative procedures. In addition, a recent insight into a connection between SiZer analysis and persistent homology will be discussed.
Inference: Dimension Reduction Dimension reduction methods search for low dimensional structure in high dimensional data. Although unsupervised methods such as PCA have been used for over 80 years, new issues in robustness, very high dimensionality (p >> n), nonlinear structure, and so on continue to provide rich sources of statistical and computational problems. Supervised methods, such as Sliced Inverse Regression, which look for low dimensional association between predictors and response, have been developed within the past 35 years, and many useful extensions continue to be developed. Performing valid inference to correctly account for the impact of dimension reduction, including variable selection, also poses problems that have not been thoroughly explored. This working group will work on problems of extending unsupervised and supervised methods to some of the interesting new areas such as machine learning, functional data, discrete data, and complex data structures such as networks. As well, we will work on inferential issues.
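The following is a bare-bones sketch of Sliced Inverse Regression on a simulated single-index model; the model, number of slices, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative single-index model: y depends on X only through one direction beta.
n, p = 2_000, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[0], beta[1] = 1.0, -1.0
y = np.tanh(X @ beta) + 0.1 * rng.normal(size=n)

# Standardize the predictors.
mu = X.mean(axis=0)
L = np.linalg.cholesky(np.cov(X, rowvar=False))
Z = np.linalg.solve(L, (X - mu).T).T

# Slice on y and form the weighted covariance of the slice means of Z.
H = 10
slices = np.array_split(np.argsort(y), H)
M = np.zeros((p, p))
for idx in slices:
    m = Z[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(m, m)

# The leading eigenvector of M spans the effective direction on the
# standardized scale; map it back to the original predictor scale.
_, vecs = np.linalg.eigh(M)
d_hat = np.linalg.solve(L.T, vecs[:, -1])
d_hat /= np.linalg.norm(d_hat)
if d_hat[0] < 0:
    d_hat = -d_hat                 # resolve the sign ambiguity for display

print("estimated direction:", np.round(d_hat, 2))
print("true direction     :", np.round(beta / np.linalg.norm(beta), 2))
```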

2012

Working Group Description
SAMSI Fall Course – Operations Research Methods in Healthcare
Principal Instructor: V. Kulkarni
Course Day and Time: Course will be held at SAMSI (driving directions) in RTP on Wednesdays, 4:30-7:00 p.m. in Room 150.
Schedule: First class Wednesday, September 5, 2012; last class day, Thursday, December 6, 2012
Course Description: This is a seminar-style course treating application of operations research methods such as stochastic modeling, queuing theory (including fluid models), optimization and simulation to problems in healthcare. Potential problems to be studied are data-based design of healthcare operations, patient flow, scheduling of facilities and personnel, management of transplant lists, mass casualty events and comparative effectiveness research. Students will be expected to read and make presentations of material from the relevant literature.
Registration for this course is being processed through your respective university: Duke: STA 790-02; NCSU: MA 810.002; UNC: STOR 892.1
Questions about the course or the Healthcare program should be emailed to [email protected]
MD Imaging The focus of the Imaging Working Group will be on methodological and computational questions of statistics, mathematics, and computer science posed by imaging science and technology, with applications to astronomy, high energy physics, the environment, health sciences, or other areas. This will be a venue to discuss the challenges and to motivate and develop innovative approaches towards computing environments, analyses, methods, algorithms, and tools, in relation to imaging science and innovative applications.
MD Online Streaming & Sketching Working Group leaders: Petros Drineas, Ilse Ipsen, and Michael Mahoney. Webmasters: John Holodnak and Kevin Penner.
Purpose: Development and analysis of fast randomized algorithms for computing leverage scores, and their application.
Preliminary, partial list of topics:

  • Approximating leverage scores for L2 and other regression problems: online, streaming, incremental streaming algorithms
  • Numerical analysis: sensitivity of leverage scores, numerical stability of algorithms
  • Applications in astronomy: characterization of streaming and time-dependent aspects of low-rank approximations; incremental computation of leverage scores
  • Applications in feature selection: how to distinguish among almost identical columns with high leverage scores (RRQR factorization, clustering)
  • Derivation of formal bounds

2011

Working Group Description
Statistics of Extremes – Climate and Methodology UQ This group will examine the characterization of extreme events from a statistical point of view. The overall group will be composed of several project groups, each with a specific focus. The project groups may be application-driven, may work to develop new statistical methodologies, or may work to further the theory on which extreme value analyses rely. The overall group will provide a structure for the project groups as well as an environment for investigating aspects of general interest.
Parallel Computing Issues – Climate UQ
  • Adaptive design of experiments, and resolution issues
  • Embedding of emulated sub-models to resolve sub-processes that are now computationally prohibitive
  • Python open-source software, open platform
Simulation of Rare Events – Methodology UQ This group will focus on methods for simulating rare events in high-dimensional physical systems, especially PDE models. We will explore the use of importance sampling and large deviation theory in order to identify important mechanisms or configurations of the parameters that lead to rare events. We also will consider the role of asymptotic analysis in constructing effective sampling weights for such computations.
Data Assimilation – Methodology UQ Data assimilation is the process of fusing information from imperfect models, noisy measurements, and priors, to produce an optimal representation of the state of a physical system. Data assimilation can be interpreted and carried out in a Bayesian framework. Practical methods for large-scale systems include suboptimal and the ensemble Kalman filter approaches, optimal interpolation, and three and four dimensional variational methods. This working group will focus on emerging problems that include, but are not limited to: new computational algorithms, modeling of model errors, impact of observations, and quantification of posterior uncertainties. There will be strong ties between theory and applications investigated throughout the program.
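A minimal sketch of a stochastic (perturbed-observation) ensemble Kalman filter on a toy one-dimensional model appears below; the dynamics, noise levels, and ensemble size are all illustrative assumptions, not a method endorsed by the working group.

```python
import numpy as np

rng = np.random.default_rng(9)

T, N = 50, 100                      # time steps, ensemble members
q, r = 0.05, 0.5                    # model and observation error std devs

def model(x):
    return x + 0.1 * np.sin(x)      # toy nonlinear dynamics

# Synthetic truth and noisy observations.
truth = np.zeros(T)
truth[0] = 1.0
for t in range(1, T):
    truth[t] = model(truth[t - 1]) + q * rng.normal()
obs = truth + r * rng.normal(size=T)

ens = rng.normal(0.0, 2.0, size=N)  # initial ensemble (deliberately poor prior)
est = np.zeros(T)
for t in range(T):
    # Forecast: propagate each member through the model with model noise.
    ens = model(ens) + q * rng.normal(size=N)
    # Analysis: Kalman update using the ensemble variance and perturbed observations.
    P = np.var(ens, ddof=1)
    K = P / (P + r**2)
    ens = ens + K * (obs[t] + r * rng.normal(size=N) - ens)
    est[t] = ens.mean()

print("RMSE of analysis mean:", np.sqrt(np.mean((est - truth) ** 2)))
```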
Stochastic to Deterministic Models and Back Again – Methodology UQ Models of complex multiscale and/or multiphysics phenomena often require combining stochastic and deterministic models:

  • Direct coupling of stochastic and deterministic models, e.g. molecular dynamics with a continuum model
  • Stochastic parameterization, with parameters determined by a stochastic model simulation or other statistical models

Such models are often used to predict “engineering scale” questions from limited microscale information. Of course, this is a classic analysis/modeling problem. The working group will focus on computational issues, including:

  • Rigorous formulation and analysis of coupling mechanisms and their discretizations
  • Numerical treatment of averaging and computed expectations and the effect of approximations
  • A posteriori error analysis, resolutions required in different components, adaptive computation
  • Rigorous treatment of feedback between stochastic and deterministic models, e.g. nonlinear iterative methods, convergence
Model Validation – Methodology UQ Model validation refers to the process of assessing the accuracy with which mathematical models can predict physical events, or, more specifically, quantities of interest observed in physical phenomena. Validation should be a prerequisite for predictive modeling, which often forms the basis for decision-making. This working group will study the principles, merits, and limitations of various probabilistic approaches to model validation. Special emphasis will be laid on methods for splitting datasets for calibration and validation purposes, on the analysis of model discrepancies, on the development of rejection metrics, and on any other issues of interest raised during the working group meetings. The working group is organized and managed by Serge Prudhomme (ICES, UT Austin), Sujit Ghosh (Statistics, NCSU), and Jan Hannig (STOR, UNC).
Multiphysics – Methodology UQ Multiphysics models comprising compositions of models of several physical processes, often at different scales, dominate many areas of science and engineering. The working group will study UQ topics for MP models (including both forward and inverse topics). Research issues:

  • Complex feedback between physical processes, highly nonlinear responses
  • Complex and unresolved coupling mechanisms
  • Different kinds and representations of uncertainty for different components, and complex interactions between sources of uncertainty and error
  • Complex, high dimension parameter space
  • Bifurcations and discontinuous model changes
  • High performance computational issues
Approximating Computationally Intensive Functions and Sampling Design in High Dimensions – Methodology UQ This is a core problem in constructing surrogates, with application to uncertainty propagation, inference, prediction, and design. Relevant research questions:

  • Cross-examination of different methods: projection, regression, interpolation, L1 minimization, Gaussian process/kriging, etc.
  • Appropriate measures of performance/accuracy and their dependence on the intended use of the surrogate model
  • Error analysis and convergence properties
  • Sparse representations: l1 minimization, pursuit algorithms, low-rank approximation
  • “Optimal” choices of nodes/design points for different methods
  • Adaptive approaches: a posteriori error estimates; derivative properties; dimension reduction; additional optimization of nodes; ANOVA; relation to sequential design of experiments
  • Interpreting and combining uncertainty information from stochastic surrogates (e.g., Gaussian process variance) and deterministic error bounds
  • Deriving optimality criteria and search algorithms that are good for high dimensions
  • Borrow existing theoretical results in high dimensional statistics (Donoho, etc.) to shed light on the structure of “optimal” designs in high dimension

Additional issues:

  • Lack of regularity: discovering and approximating discontinuities in high dimensions
  • Incorporating gradient information
  • Enforcing constraints on output and input domains
  • Fault tolerance and missing samples
Inverse Function-based Inference – Methodology UQ
Jan Hannig (lead), D. Estep, Troy Butler (U. Texas), Simon Tavener (CSU)
The working group will study the use of set-valued inversion of models for inference. Research issues:
- Approximation of set-valued inverses in complex spaces
- Computation of inverse measures in parameter space
- Convergence and accuracy of computed inverse measures
- Theoretical issues regarding inversion of multiple observations
- Relation to fiducial inference and the Dempster-Shafer calculus
- Intrusive and non-intrusive algorithms; dimension-benign computational algorithms
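A minimal, hypothetical sketch of one way to approximate a set-valued inverse by sampling: push reference samples of the parameters through the model map and retain those whose outputs fall in an observed output set, giving a crude Monte Carlo approximation of the inverse image and of a measure on it. The map Q and the output set below are illustrative assumptions.

```python
# Sketch: Monte Carlo approximation of a set-valued inverse Q^{-1}(B) and of
# the induced measure on parameter space. All modeling choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def Q(theta):
    """Model map from 2-d parameter space to a scalar quantity of interest."""
    return theta[:, 0] ** 2 + np.sin(theta[:, 1])

# Reference samples on the parameter domain [0,1] x [0,1].
theta = rng.random((100_000, 2))
q = Q(theta)

# Observed output set B: an interval around the measured value of the QoI.
q_obs, half_width = 0.8, 0.05
in_set = np.abs(q - q_obs) <= half_width

# Samples approximating the set-valued inverse Q^{-1}(B).
inverse_samples = theta[in_set]
print("fraction of parameter samples mapping into the output set:", in_set.mean())
print("mean of the approximate inverse measure:", np.round(inverse_samples.mean(axis=0), 3))
```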
Surrogate Models – Methodology UQ
This working group focuses on exploring the properties, utility, and performance of two classes of model surrogates, namely Polynomial Chaos and Gaussian Process surrogates. The study will be done in the context of specific model problems with a range of difficulty involving nonlinearity and dimensionality. Test problems will include both algebraic functions and simple ODE/PDE problems.
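To complement the Gaussian process sketch above, here is a minimal, hypothetical one-dimensional polynomial chaos surrogate: project a nonlinear algebraic test function onto probabilists' Hermite polynomials of a standard normal input. The test function and the expansion order are illustrative assumptions.

```python
# Sketch: 1-d Hermite polynomial chaos surrogate built by Gauss quadrature
# projection. The test function and order are illustrative.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def f(xi):                         # algebraic test function of xi ~ N(0, 1)
    return np.exp(0.5 * xi) + np.sin(xi)

order = 8
nodes, weights = He.hermegauss(30)          # Gauss quadrature for weight e^{-x^2/2}
weights = weights / sqrt(2 * pi)            # normalize to the standard normal density

# PC coefficients c_k = E[f(xi) He_k(xi)] / k!
coeffs = np.array([
    np.sum(weights * f(nodes) * He.hermeval(nodes, np.eye(order + 1)[k])) / factorial(k)
    for k in range(order + 1)
])

# The surrogate is a cheap polynomial: f(xi) ~ sum_k c_k He_k(xi).
xi_test = np.array([-1.0, 0.0, 1.5])
print("PC surrogate:", np.round(He.hermeval(xi_test, coeffs), 4))
print("exact       :", np.round(f(xi_test), 4))
print("PC estimate of E[f]:", round(coeffs[0], 4))
```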
Engineered Systems – Engineering UQ
Sustainability – Engineering UQ
Materials – Engineering UQ
Renewable Energy – Engineering UQ
The group will consider uncertainty quantification issues arising in specific applications linked to renewable energy; in particular, we will study biofuels and wind farms. Other aspects involve the inclusion, in a UQ framework, of factors such as technological advances and/or regulations.
Nuclear Energy – Engineering UQ
Geosciences – Geosciences UQ
Data Assimilation in IPCC Level Models – Climate UQ
In the IPCC AR4, the results were based on runs from ca. 24 models. These were built and run at climate research centers around the world, and each is an integrated Earth system model comprising many components, including atmosphere, ocean, ice and land. The so-called dynamical core of such models is a computational model covering both the atmosphere and ocean and based on the primitive equations of geophysical fluid dynamics (GFD). While this is essentially computational, data enter the process of forming the final model. This incorporation occurs at a number of stages of model development, including parametrization of sub-grid scale effects and model tuning. The process is not, however, done systematically, and current practice is not thought of as “data assimilation” (DA). There seems to be a growing realization that DA will have a significant role to play in future climate model development, driven in part by the need to quantify uncertainty in model predictions. Nevertheless, there is no consensus as to how DA should be used in these large-scale climate models. This working group will consider the issues involved in formulating a plan for DA in such models. The first step will be to understand how such a model is put together and to uncover all the steps where data are currently used in model formation. For this purpose, we will look at the latest CESM (Community Earth System Model) from NCAR.
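For readers unfamiliar with DA, the following toy sketch shows the basic forecast/analysis cycle of an ensemble Kalman filter on a scalar state. The scalar linear model, noise levels, and ensemble size are illustrative assumptions only and bear no relation to an actual climate model.

```python
# Sketch: perturbed-observation ensemble Kalman filter on a scalar toy model,
# illustrating the forecast/analysis cycle that "data assimilation" refers to.
import numpy as np

rng = np.random.default_rng(4)

n_ens, obs_var = 50, 0.1
state = rng.normal(0.0, 1.0, size=n_ens)       # initial ensemble

def forecast(x):                                # toy "dynamical core"
    return 0.9 * x + rng.normal(0.0, 0.2, size=x.shape)

for obs in [1.2, 1.0, 0.8]:                     # sequence of noisy observations
    state = forecast(state)                     # forecast step
    # Analysis step: Kalman gain from the ensemble forecast variance.
    gain = np.var(state, ddof=1) / (np.var(state, ddof=1) + obs_var)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=n_ens)
    state = state + gain * (perturbed_obs - state)
    print(f"analysis mean {state.mean():.3f}  spread {state.std(ddof=1):.3f}")
```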
Numerical Methods for Uncertainty Quantification – Spring Part 2
Principal Instructors: Various
Course Day and Time: Wednesdays, 4:30-7:00 p.m., Room 150, SAMSI, RTP
Schedule: First class Wednesday, January 18, 2012; last class Wednesday, April 25, 2012
This course focuses on numerical methods for stochastic computation and uncertainty quantification (UQ). It is a two-semester course, where the first semester focuses on fundamental material for UQ computing and the second semester on more advanced research material. The main topics covered in the second semester include:
- advanced numerical techniques for SPDEs: adaptive methods and compressive sensing
- propagation of probability distributions
- Bayesian inference
- data assimilation
- model calibration
Prerequisites: Numerical linear algebra; numerical methods for ordinary and partial differential equations; programming skill in one language, e.g., C/C++, FORTRAN, or Matlab. The course is open to all levels of graduate students in Mathematics and Statistics, as well as to those in other departments of science and engineering. Senior-level undergraduate students with an outstanding background are also considered.
Registration for this course is processed through your university:
- Duke: STA 294-01
- NCSU: MA 810.002
- UNC: STOR 891-001
Questions about the course or the UQ program should be emailed to [email protected].

2010

Working Group Description
Dynamics OF Networks Complex Networks Program working group.

The dynamics of networks working group is exploring a variety of mathematical and statistical approaches for describing and understanding the changing connection topology of networks over time, the interplay of these network dynamics with other dynamic processes on the network, and the connections between these different mathematical and statistical methodologies.

Sampling / Modeling / Inference CN Program working group.

The Working Group on Sampling/Modeling/Inference in networks aims to move the current state of knowledge on these inter-related tasks, in the specific context of networks, onto a more principled and integrated mathematical and statistical foundation. We are pursuing this goal by focusing on a handful of specific prototype problems in certain application areas, ranging from information networks to animal communities to neuroscience.

Dynamics ON Networks CN Program working group.

Random graphs are useful models of social and technological networks. To date, most of the research in this area has concerned geometric properties of the graphs. This working group will focus on processes taking place ON the network. In particular, we are interested in how the behavior of such processes on networks differs from that in homogeneously mixing populations or on regular lattices of the type commonly used in ecology and physics.
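As a minimal, hypothetical illustration of a dynamic process ON a network, the sketch below runs a discrete-time SIR epidemic on an Erdos-Renyi random graph, the kind of object one might compare against a homogeneously mixing population. Graph size, edge probability, and infection/recovery rates are illustrative assumptions.

```python
# Sketch: discrete-time SIR epidemic on an Erdos-Renyi random graph.
import numpy as np

rng = np.random.default_rng(5)

n, p, beta, gamma = 200, 0.05, 0.3, 0.1
A = rng.random((n, n)) < p                       # random adjacency draws
A = np.triu(A, 1)
A = A | A.T                                      # symmetrize, no self-loops

state = np.zeros(n, dtype=int)                   # 0 = S, 1 = I, 2 = R
state[rng.choice(n, size=5, replace=False)] = 1  # seed infections

for t in range(100):
    infected = state == 1
    # Each susceptible is infected independently by each infected neighbor.
    pressure = A[:, infected].sum(axis=1)
    new_inf = (state == 0) & (rng.random(n) < 1 - (1 - beta) ** pressure)
    new_rec = infected & (rng.random(n) < gamma)
    state[new_inf] = 1
    state[new_rec] = 2
    if not (state == 1).any():
        break

print(f"epidemic ended after {t+1} steps; final size = {(state == 2).sum()} recovered")
```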

Geometrical / Spectral Analysis CN Program working group

This working group is concerned with the following topics: detection of communities in networks, multiscale spectral methods for the analysis of the geometry of networks, algorithms that simplify graphs into simpler graphs in order to speed up certain optimization problems, metrics for comparing graphs, and multiscale homogenization of random walks. These topics have applications in biology and to the spread of “epidemics” in financial networks.
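A minimal, hypothetical illustration of one spectral idea from this list: partitioning a graph into two communities using the Fiedler vector, the eigenvector of the graph Laplacian associated with the second-smallest eigenvalue. The planted two-block graph and its parameters are illustrative assumptions.

```python
# Sketch: spectral bipartition of a planted two-community graph via the
# Fiedler vector of the graph Laplacian.
import numpy as np

rng = np.random.default_rng(6)

# Planted partition: two blocks of 15 nodes, dense within, sparse between.
n_block = 15
blocks = np.repeat([0, 1], n_block)
p_in, p_out = 0.4, 0.05
prob = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
A = (rng.random(prob.shape) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, no self-loops

# Graph Laplacian and its Fiedler vector.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                       # eigenvector of the 2nd-smallest eigenvalue

labels = (fiedler > 0).astype(int)            # sign of Fiedler vector -> community label
agreement = max(np.mean(labels == blocks), np.mean(labels != blocks))
print("fraction of nodes assigned to the planted community:", agreement)
```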