Archived Talks

Feb. 22nd, 2019 (Friday 11:00 am) – Tauhid Zaman (MIT Sloan School of Management)

  • Optimizing Opinions with Stubborn Agents

    [Abstract] Recent news about Russian propaganda bots influencing elections has created a need for social media counter-measures. One approach is to deploy agents in a social network to counter the influence campaign. These can be viewed as “stubborn” agents whose goal is to shift the opinions of non-stubborn people by sharing relevant content. In this talk we show how to deploy stubborn agents in a social network in order to accomplish this. We begin by proposing an opinion dynamics model for interacting individuals in a social network. Our model differs from previous models by allowing individuals to grow more stubborn with time. Despite the time-varying nature of our model, we are able to provide a precise characterization of the conditions under which the opinions converge to an equilibrium. We find that the equilibrium opinions are given by a linear system which has a form similar to Ohm’s Law from circuit theory. To validate this opinion model, we use it to predict the opinions of hundreds of thousands of Twitter users discussing various polarized topics. We use the content of these users’ tweets as a measure of their true opinions. We develop a neural network to extract the users’ opinions from their tweet text. We find that our opinion model accurately predicts the mean opinion of the populations and correlates strongly with their tweet-based opinions. Using state-of-the-art detection algorithms, we are able to identify bots in our datasets and use our opinion model to measure the effect they have on the population opinions. We also show how to place stubborn agents in a network to have maximal impact on the population opinion. We formulate this as a discrete optimization problem. We show that the problem is submodular, which allows us to find an accurate and fast solution using a greedy algorithm. We find that a relatively small number of stubborn agents can have a strong effect on the population opinion in several real social networks.
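
    A minimal simulation sketch of this setup (illustrative only, not code from the talk): a DeGroot-style network in which two stubborn agents hold fixed opinions, everyone else grows more stubborn over time under an assumed schedule, and the equilibrium is recovered from the graph Laplacian in the spirit of the Ohm's-Law analogy.

        import numpy as np

        # DeGroot-style opinion dynamics with two stubborn agents; a ring is
        # added to a random graph so it is connected. The stubbornness
        # schedule and all parameters are illustrative assumptions.
        rng = np.random.default_rng(0)
        n = 50
        A = (rng.random((n, n)) < 0.1).astype(float)
        A = np.maximum(A, A.T)
        for i in range(n):                        # ring edges guarantee connectivity
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
        np.fill_diagonal(A, 0.0)
        deg = A.sum(axis=1)

        stubborn = {0: 1.0, 1: -1.0}              # agent -> fixed opinion
        x = rng.uniform(-1.0, 1.0, n)
        for v, op in stubborn.items():
            x[v] = op

        for t in range(1, 2001):
            s = min(0.9, 0.01 * t)                # stubbornness grows with time
            x = (1.0 - s) * (A @ x) / deg + s * x
            for v, op in stubborn.items():
                x[v] = op                         # stubborn agents never move

        # Equilibrium from a linear ("Ohm's Law") system: non-stubborn opinions
        # are harmonic on the graph, so solve L_ff x_f = -L_fs x_s.
        free = [v for v in range(n) if v not in stubborn]
        sv = list(stubborn)
        L = np.diag(deg) - A
        x_eq = np.linalg.solve(L[np.ix_(free, free)],
                               -L[np.ix_(free, sv)] @ np.array([stubborn[v] for v in sv]))
        print("max gap, simulation vs. equilibrium:", np.abs(x[free] - x_eq).max())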

    [Bio] Tauhid is an Associate Professor of Operations Management at the MIT Sloan School of Management. He received his BS, MEng, and PhD degrees in electrical engineering and computer science from MIT. His research focuses on solving operational problems involving social network data using probabilistic models, network algorithms, and modern statistical methods. Some of the topics he studies in the social networks space include predicting the popularity of content, finding online extremists, and geo-locating users. His broader interests cover data-driven approaches to investing in startup companies, non-traditional choice modeling, algorithmic sports betting, and biometric data. His work has been featured in the Wall Street Journal, Wired, Mashable, the LA Times, and Time Magazine.


Feb. 15th, 2019 (Friday 11:00 am) – Michael Johnson (University of Massachusetts, Boston)

  • Community-Engaged Operations Research: Localized Interventions, Appropriate Methods, Social Impact

    [Abstract]
    Community-engaged operations research is an extension of multiple OR/MS traditions to support participatory research, localized impact and social change. It applies critical thinking, evidence-based policy analysis, community participation and decision modeling to local interventions. It emphasizes the needs, voices and values of disadvantaged and marginalized populations. It rests on a foundation of meaningful engagement with communities. Through a survey of current scholarship in two complementary areas of inquiry, ‘community operational research’ (referring to work by primarily European researchers) and ‘community-based operations research’ (referring to work by primarily American researchers), I develop principles for community-engaged OR, present critical questions that represent opportunities to expand the impact of this work, and discuss current projects whose methods and applications have the potential to enrich research and practice in operations research, management science and analytics.

    [Bio]
    Michael Johnson


Oct. 26th, 2018 (Friday 11:15 am) – Thinh T. Doan

  • Distributed learning over a network of agents under communication constraints!

    [Abstract]
    In this talk, I will present a popular distributed method, namely, the distributed consensus-based gradient (DCG) method, for solving optimal learning problems over a network of agents. Such problems arise in many applications, such as finding optimal parameters over a large dataset distributed among a network of processors or seeking an optimal policy for coverage control problems in robotic networks. The focus is to present our recent results, where we study the performance of DCG when the agents are only allowed to exchange their quantized values due to their finite communication bandwidth. In particular, we develop a novel distributed variant of the popular stochastic approximation method under random quantization. Our notable contribution is to provide an explicit formula for its rate of convergence, which shows the dependence on the underlying network topology and the size of the bandwidths shared between agents. Finally, I conclude my talk with some discussion about potential applications of distributed stochastic approximation in multi-agent reinforcement learning.
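
    A toy sketch of the quantized-communication idea (the uniform quantizer, quadratic local objectives, and complete-graph mixing matrix below are all illustrative assumptions, not the talk's model):

        import numpy as np

        # Distributed consensus-based gradient step with a uniform quantizer
        # standing in for finite communication bandwidth.
        def quantize(x, delta=0.05):
            return delta * np.round(x / delta)

        n_agents, dim = 5, 3
        rng = np.random.default_rng(1)
        targets = rng.normal(size=(n_agents, dim))   # agent i minimizes ||x - targets[i]||^2
        W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing

        x = np.zeros((n_agents, dim))
        for k in range(1, 2001):
            shared = quantize(x)                 # agents only exchange quantized values
            grad = 2.0 * (x - targets)           # local gradients at current iterates
            x = W @ shared - (1.0 / k) * grad    # consensus on messages + gradient step

        # Every agent should end up near the global minimizer (the mean target),
        # up to an error floor set by the quantization resolution.
        print(np.abs(x - targets.mean(axis=0)).max())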

    [Bio]
    Thinh T. Doan is a TRIAD postdoctoral fellow at Georgia Institute of Technology, joint between the School of Industrial and Systems Engineering and the School of Electrical and Computer Engineering (ECE). He was born in Vietnam, where he received his Bachelor's degree in Automatic Control from Hanoi University of Science and Technology in 2008. He obtained his Master's and Ph.D. degrees, both in ECE, from the University of Oklahoma in 2013 and the University of Illinois at Urbana-Champaign in 2018, respectively. His research interests lie at the intersection of control theory, optimization, distributed algorithms, and applied probability, with the main applications in machine learning, reinforcement learning, power networks, and multi-agent systems.


Oct. 19th, 2018 (Friday 11:15 am) – Sebastian Salazar

  • Dynamic Resource Allocation in the Cloud with Near-Optimal Efficiency

    [Abstract]
    The new era of Cloud Computing has given average users the benefit of high-performance technology without specialized infrastructure or in-depth knowledge. Sharing resources efficiently between different users and taking advantage of economies of scale make this new business model economically viable. In many cloud settings, providers would like to operate resources at high utilization while simultaneously satisfying user requirements. Nonetheless, there is an unavoidable trade-off between these two objectives. For instance, focusing solely on satisfying individual requirements can lead to poor utilization. In this talk, I will introduce a multiplicative weights algorithm for dynamic allocation of a single divisible resource, such as CPU, that simultaneously guarantees near-optimal utilization and user satisfaction. Joint work with Ishai Menache, Mohit Singh and Alejandro Toriello.
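
    The following is a hedged illustration of a multiplicative-weights-style allocation of one divisible resource; the update rule, demands, and parameters are invented for illustration and are not the algorithm analyzed in the talk:

        import numpy as np

        # Multiplicative-weights-style sharing of one divisible resource among
        # users; weights of under-served users get boosted (numbers invented).
        rng = np.random.default_rng(2)
        n_users, T, eta = 4, 200, 0.1
        w = np.ones(n_users)

        for t in range(T):
            share = w / w.sum()                       # fractional allocation this round
            demand = rng.uniform(0, 0.5, n_users)     # observed demands
            satisfied = np.minimum(share, demand) / np.maximum(demand, 1e-9)
            w *= np.exp(eta * (1.0 - satisfied))      # boost under-served users
        print("final shares:", np.round(w / w.sum(), 3))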

    [Bio]
    Sebastian is a second-year PhD student in ACO, currently working with Mohit Singh and Alejandro Toriello on optimization, online algorithms and approximation algorithms.


Oct. 12th, 2018 (Friday 11:15 am) – Gerdus Benade (CMU)

  • How to make envy vanish over time

    [Abstract]
    We study the dynamic fair division of indivisible goods. Suppose at every time step an item arrives online and must be allocated to an agent upon arrival, and that agents have heterogeneous values for the items. Our goal is to design allocation algorithms that minimize the maximum envy, defined as the maximum difference between any agent’s overall value for items allocated to another agent and to herself. We say that an algorithm has vanishing envy if the ratio of envy over time goes to zero as time goes to infinity. We design a polynomial-time, deterministic algorithm that achieves vanishing envy, and show the rate at which envy vanishes is asymptotically optimal. We also derive tight (in the number of items) bounds for a more general setting where items arrive in batches.
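
    As a quick numerical illustration of the vanishing-envy notion (not the talk's algorithm), the sketch below allocates each arriving item uniformly at random and tracks maximum envy divided by time; for random allocation this ratio tends to zero:

        import numpy as np

        # Quick experiment on vanishing envy with a random-allocation baseline.
        rng = np.random.default_rng(3)
        n, T = 3, 20000
        value = np.zeros((n, n))           # value[i, j] = agent i's value for j's bundle

        for t in range(1, T + 1):
            v = rng.uniform(0, 1, n)       # each agent's value for the arriving item
            j = rng.integers(n)            # uniformly random recipient
            value[:, j] += v
        envy = (value - value.diagonal()[:, None]).max()
        print("max envy:", round(envy, 2), " envy/T:", envy / T)  # envy/T -> 0 as T grows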

    [Bio]
    Gerdus Benade is a Ph.D. student in Operations Research in the Tepper School of Business at Carnegie Mellon University. He is interested in discrete optimization and computational social choice.


Oct. 5th, 2018 (Friday 11:15 am) – Swati Gupta

  • Mathematical Theories of Bias and Fairness

    [Abstract]
    With ubiquitous automated decision-making comes great responsibility (or does it?). Modern optimization methods shape and affect almost all aspects of modern life — search, finance, urban transportation, credit applications, and criminal justice, to name a few. It is important that we understand (and mitigate) the unintended consequences of automated optimized decisions to avoid discrimination among the users (even perceived discrimination), as well as ensure due process and transparency in decision-making. For instance, (i) delivery of certain services has been made available at a higher cost based on the location of the customers, thus inadvertently discriminating against minority neighborhoods, (ii) algorithms like COMPAS used in the criminal justice system have been shown to incorrectly classify black defendants as high risk with much higher likelihood than white defendants, and (iii) experiments on recommendation engines have shown that women are much less likely to be shown higher paying jobs compared to men. These are just some examples in which automated decisions have had adverse effects on various members of the population. In this talk, we will survey some of the mathematical theories developed to model bias and fairness from a statistical as well as an optimization perspective. We will discuss impossibility results and trade-offs between various fairness metrics proposed in the literature, with the hope of informing policymakers on what is even achievable mathematically. We will also discuss some ongoing work along these lines in the context of making balanced inventory routing decisions, placing public facilities while ensuring equity and matching students fairly to schools, as well as interesting open questions in this domain.

    [Bio]
    Dr. Swati Gupta is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. Gupta’s research interests lie primarily in combinatorial, convex, and robust optimization with applications in online learning and data-driven decision-making under partial information. Her work focuses on speeding up fundamental bottlenecks that arise in learning problems due to the combinatorial nature of the decisions, as well as drawing from machine learning to improve traditional optimization methods.


Sep. 28th, 2018 (Friday 11:15 am) – Ashia C. Wilson (Microsoft Research)

  • A Dynamical View of Optimization Algorithms

    [Abstract]
    Optimization is a core primitive in statistics, machine learning, and data analytics. Within these fields, the rapid growth in the scale and complexity of modern datasets has led to a focus on two classes of algorithms: gradient methods and momentum methods (also referred to as accelerated methods). Momentum methods, first proposed by Nesterov in 1983, achieve faster convergence rates than gradient methods. However, unlike gradient methods, they are not descent methods and providing robust performance guarantees remains a challenge. In simple settings, momentum methods can be understood as modeling the dynamics of a damped harmonic oscillator; making this intuition precise, however, and generalizing it to other geometries has been difficult. Furthermore, derivations of momentum methods do not flow from a single underlying principle, but tend to rely on case-specific algebra using a technique – considered by many to be esoteric – called estimate sequences. The first part of our work introduces a variational, continuous-time framework for understanding momentum methods. We show that there is a family of Lagrangian functionals, which we call Bregman Lagrangians, which generate dynamics corresponding to momentum methods in continuous time. In particular, momentum methods can be understood as arising from various discretization techniques applied to these continuous time dynamics. The second part of our work strengthens this connection. We demonstrate how to derive families of Lyapunov functions which can certify rates of convergence for the continuous time momentum dynamics. We further demonstrate how the proofs of convergence of momentum methods can be understood as bounding discretization errors of the Lyapunov function when moving to discrete time. Along the way, we prove an equivalence between this family of Lyapunov functions and the technique of estimate sequences, as well as demonstrate how this framework allows us to derive and analyze several new optimization methods. This is joint work with Andre Wibisono, Stephen Tu, Shivaram Venkataraman, Alex Gittens, Benjamin Recht, and Michael I. Jordan.
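
    For reference, the Bregman Lagrangian from the companion paper by Wibisono, Wilson, and Jordan (PNAS 2016) can be written as

        $$ \mathcal{L}(X, V, t) = e^{\alpha_t + \gamma_t} \Big( D_h\big(X + e^{-\alpha_t} V, X\big) - e^{\beta_t} f(X) \Big), $$

    where $f$ is the objective, $D_h$ is the Bregman divergence of a convex distance-generating function $h$, and $\alpha_t, \beta_t, \gamma_t$ are time-dependent scaling functions subject to certain "ideal scaling" conditions; its Euler-Lagrange equations yield the continuous-time momentum dynamics.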


Sep. 21st, 2018 (Friday 11:15 am) – Natashia Boland

  • Decomposition Branching for Mixed Integer Programming

    [Abstract]
    In the last few decades, there have been enormous advances in exact methods for the solution of general mixed integer programming (MIP) problems. These advances have come from attention to several key aspects of the LP-based branch-and-bound framework, including preprocessing, cutting planes, search strategies, and branching variable selection. However, the branching constraints themselves have received relatively little attention, and those used in modern codes remain the same as they were in the 1960s. This talk explores one idea for branching that seeks to exploit decomposable structure in MIPs: decomposition branching. The approach uses the power of MIP solvers to solve (or nearly solve) smaller subproblems so as to derive the branching constraint. The computational cost of doing so, in experiments with instances of a weighted set covering problem and a regionalized p-median facility location problem with assignment range constraints, is greatly outweighed by its benefits. The number of branch-and-bound nodes is vastly reduced and run times can be orders of magnitude less than those of a commercial solver.

    [Bio]
    Natashia Boland is a professor in the Stewart School of Industrial and Systems Engineering at Georgia Tech. Recently, her work has focused on two main directions. The first is the better solution of planning problems in time-space networks: she has developed techniques that generate only the parts of the network really needed to determine near-optimal solutions, with high accuracy and in far faster computing times than previously available. She is applying these concepts in the design and operation of less-than-truckload carrier networks. The second is addressing problems with multiple objectives. She is developing technology that can efficiently, in practice, offer the decision maker a comprehensive view of the trade-offs available to them, given the constraints of their system. Such technology is especially helpful in settings in which issues such as fairness, risk, environmental impact, and human factors, as well as costs or profits, play an important role in decision-making.


Sep. 14th, 2018 (Friday 11:15 am) – Pascal Van Hentenryck

  • Hybrid Optimization: The Many Marriages of CP, MIP, and SAT

    [Abstract]
    In recent decades, significant progress in discrete optimization has originated from hybridizations of synergistic optimization methods. This talk presents three case studies of combining constraint programming, mixed-integer programming, and satisfiability. The benefits are illustrated on applications in routing and scheduling.

    [Bio]
    Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. His current research focuses on artificial intelligence, data science, and operations research with applications in energy, mobility, and privacy.


Apr. 6th, 2018 (Friday 11am) – Rediet Abebe (Cornell)

  • Information and Overexposure in Word-of-Mouth Campaigns

    [Abstract]

    Access to information is of major concern in a wide range of domains including health, education, and labor markets. Word-of-mouth campaigns are an effective tool for disseminating information that can be leveraged to improve people’s lives. Within health, specifically, by targeting individuals with crucial health information that might be relevant to them, we can potentially create an information cascade that reaches a large portion of the population, and in turn, improve health outcomes for many people.

    In traditional models for word-of-mouth campaigns, the objective has generally been based on reaching as many people as possible. However, a number of studies both within health and in commercial settings have shown that the indiscriminate spread of a product by word-of-mouth can result in overexposure where the information or product reaches a part of the population that is not receptive to it. This can lead to negative reputational effects or decrease the efficacy of future campaigns. In the first part of the talk, we ask how we should make use of social influence when there is a risk of overexposure. We develop and analyze a theoretical model for this process; we show that it captures a number of the qualitative phenomena associated with overexposure and provide an efficient algorithm to find the optimal campaigning strategy.

    In the second part of the talk, we focus on the stark inequalities in the availability of health-related data. We focus on the lack of data on health information needs of individuals in Africa and explore the role that search engines can play in uncovering people’s everyday health information needs, concerns, and misconceptions. We analyze Bing searches related to HIV/AIDS in all 54 nations in Africa, and show that these expose a wide range of themes that shed light on how health information needs vary geographically as well as by demographic groups. We discuss the potential for these results to inform targeted education efforts to decrease the burden of the disease.

    [Bio]
    Rediet Abebe is a Ph.D. candidate in Computer Science at Cornell. Her research focuses on algorithms, artificial intelligence, and applications to social good. Her work has been generously supported by fellowships and scholarships through Facebook (2017-2019), Google (2016-2017), and the Cornell Graduate School (2015-2016). She is also a 2013-2014 Harvard Cambridge Scholar.


Apr. 10th, 2018 (Tuesday 11am) – Weijun Xie (VT)

  • Multi-Product Newsvendor Model with Substitutions

    [Abstract]
    We study a multi-product newsvendor model with customer-driven demand substitutions, where each product, once out of stock, can be proportionally substituted by the others. Although this model has been widely studied in the literature, its nonconvexity and intractability mean that only very limited analytical properties have been discovered and few efficient solution approaches proposed. This paper first completely characterizes the optimal order policy when the demand is known and recasts the model as a discrete submodular maximization problem. When the demand is stochastic, we reformulate this nonconvex model as a two-stage stochastic integer program, where the recourse decision variables are mixed-integer. We further study a tight upper bound via the nonanticipativity dual, which is proven to be close to the true optimum and can be computed as a linear program. Finally, numerical studies demonstrate the effectiveness of the algorithms.
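
    A Monte Carlo sketch of the substitution effect for two products (all prices, costs, substitution rates, and demand distributions below are illustrative; the talk's exact reformulations are not implemented here):

        import numpy as np

        # Two-product newsvendor with proportional demand substitution,
        # evaluated by simulation and a coarse grid search.
        rng = np.random.default_rng(4)
        price, cost = np.array([10.0, 8.0]), np.array([4.0, 3.0])
        alpha = np.array([[0.0, 0.5],              # alpha[i, j]: fraction of product i's
                          [0.3, 0.0]])             # unmet demand that shifts to product j
        d = rng.gamma(shape=2.0, scale=5.0, size=(20000, 2))   # demand scenarios

        def expected_profit(q):
            first = np.minimum(q, d)               # first-choice sales
            spill = (d - first) @ alpha            # substituted demand
            second = np.minimum(q - first, spill)  # leftover stock serves spillover
            return ((first + second) @ price - q @ cost).mean()

        grid = [(q1, q2) for q1 in range(0, 31, 2) for q2 in range(0, 31, 2)]
        best = max(grid, key=lambda q: expected_profit(np.array(q, float)))
        print("best order quantities on the grid:", best)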

    [Bio]
    Dr. Weijun Xie graduated from Georgia Tech in 2017 and is an Assistant Professor in the Department of Industrial and Systems Engineering at Virginia Tech.


Mar. 30th, 2018 (Friday 11am) – Gauri Joshi (CMU)

  • Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD

    [Abstract]
    Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods can alleviate stragglers, but cause gradient staleness that can adversely affect convergence. In this work, we present the first theoretical characterization of the speed-up offered by asynchronous methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time). The novelty in our work is that our runtime analysis considers random straggler delays, which helps us design and compare distributed SGD algorithms that strike a balance between stragglers and staleness. We also present a new convergence analysis of asynchronous SGD variants without bounded or exponential delay assumptions, and a novel learning rate schedule to compensate for gradient staleness.
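
    A toy calculation of the runtime side of this trade-off, assuming i.i.d. exponential compute times per learner (an illustrative assumption): synchronous SGD pays the maximum delay per step, while a fully asynchronous scheme pays roughly the minimum, at the price of staleness.

        import numpy as np

        # Sync vs. async wallclock time under random straggler delays.
        rng = np.random.default_rng(5)
        K, steps = 8, 10000
        delays = rng.exponential(1.0, size=(steps, K))

        sync_time = delays.max(axis=1).sum()    # each step waits for the straggler
        async_time = delays.min(axis=1).sum()   # a step completes when any learner does
        print(f"sync: {sync_time:.0f}, async: {async_time:.0f} "
              f"(speedup ~{sync_time / async_time:.1f}x, at the price of staleness)")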

    [Bio]
    Gauri Joshi has been an assistant professor in the ECE department at Carnegie Mellon University since September 2017. Prior to that, she worked as a Research Staff Member at IBM T. J. Watson Research Center. Gauri completed her Ph.D. from MIT EECS in June 2016. She also received her B.Tech. and M.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. Her awards and honors include the Best Thesis Prize in Computer Science at MIT (2012), the Institute Gold Medal of IIT Bombay (2010), the Claude Shannon Research Assistantship (2015-16), and the Schlumberger Faculty for the Future fellowship (2011-2015).


Feb. 13th, 2018 (Tuesday 11am) – Roy Schwartz (Technion)

  • Fast and Simple Algorithms for Submodular Maximization

    [Abstract]
    Submodular maximization captures both classical problems in combinatorial optimization and practical applications that arise in other disciplines, e.g., machine learning and data mining. The size of the inputs in these applications is usually very large. Hence, it is interesting to devise approximation algorithms that, in addition to providing a provable guarantee, are also very fast and simple to use. In this talk, I will present several such examples from recent years.

    [Bio]
    Roy Schwartz is an assistant professor at the Computer Science Department at the Technion. His research focuses on the design and analysis of algorithms, approximation algorithms, combinatorial optimization, the geometry of metric spaces and its applications, submodular optimization, and randomized algorithms.


Jan. 23rd, 2018 (Tuesday 11am) – David Hemmi (Monash)

  • High-level modeling and solution of combinatorial stochastic programs

    [Abstract]
    Stochastic programming is concerned with decision making under uncertainty, seeking an optimal policy with respect to a set of possible future scenarios. While the value of stochastic programming is obvious to many practitioners, in reality, uncertainty in decision making is oftentimes neglected. For deterministic optimisation problems, a coherent chain of modeling and solving exists. Employing standard modeling languages and solvers for stochastic programs is, however, difficult. First, modeling languages have (with exceptions) no native support for formulating stochastic programs. Second, solving stochastic programs with standard solvers (e.g., MIP solvers) is often computationally intractable. I will talk about my research that aims to make stochastic programming more accessible. First, I will discuss modeling deterministic and stochastic programs in the Constraint Programming language MiniZinc – a modeling paradigm that retains the structure of a problem much more strongly than MIP formulations. Second, I will talk about decomposition algorithms I have been working on to solve combinatorial stochastic programs.

    [Bio]
    David is a PhD student at Monash University in Melbourne. Before joining Monash, he completed a Bachelor's degree in Systems Engineering in Switzerland, studied Robotics at the University of Bristol and worked in various engineering roles. As part of his PhD, David is working on a solver system for stochastic combinatorial optimisation problems that are modeled in the Constraint Programming language MiniZinc. In the future, he would like to combine his passion for automation with the research he is doing in operations research.

Dec. 4th, 2017 (Monday 3pm) – Ruiwei Jiang (UMich)

  • Ambiguous Risk Constraints with Moment and Structural Information: Three Examples

    [Abstract]
    Optimization problems face random constraint violations when uncertainty arises in constraint parameters. Effective ways of controlling such violations include risk constraints, e.g., chance constraints and conditional Value-at-Risk (CVaR) constraints. This talk discusses risk constraints when the distributional information of the uncertain parameters consists of moment information (e.g., mean, covariance, support) and certain structural information, for which we mention three specific examples: alpha-unimodality, logconcavity, and dominance on the tail. We find that the ambiguous risk constraints in these settings can be recast or approximated using conic constraints that facilitate computation. Finally, we demonstrate the theoretical results via case studies on power system operation and appointment scheduling.
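
    For background (standard definitions rather than the talk's new results), the ambiguous chance constraint and its CVaR counterpart over an ambiguity set $\mathcal{P}$ can be written as

        $$ \inf_{\mathbb{P} \in \mathcal{P}} \mathbb{P}\{ a(x)^\top \xi \le b(x) \} \ge 1 - \epsilon, \qquad \sup_{\mathbb{P} \in \mathcal{P}} \mathrm{CVaR}_{1-\epsilon}^{\mathbb{P}}\big( a(x)^\top \xi - b(x) \big) \le 0, $$

    where CVaR admits the Rockafellar-Uryasev representation $\mathrm{CVaR}_{1-\epsilon}(Z) = \inf_{\tau} \{ \tau + \epsilon^{-1}\, \mathbb{E}[Z - \tau]_+ \}$; the talk's results give conic reformulations or approximations of such constraints under the stated moment and structural information.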

    [Bio]
    Ruiwei Jiang is an Assistant Professor of Industrial and Operations Engineering at the University of Michigan, Ann Arbor. His research interests include stochastic optimization and integer programming. Application areas of his work include power and water systems, healthcare, and transportation systems. Recognition of his research includes the Stochastic Programming Society student paper award, the INFORMS George E. Nicholson student paper award, and the INFORMS Junior Faculty Interest Group paper award (honorable mention).


Nov. 17th, 2017 – Basak Kalkanci (Georgia Tech, Scheller College of Business)

  • Supply Risk Mitigation via Supplier Diversification and Improvement: An Experimental Evaluation

    [Abstract]
    Due to the trend towards decentralization and greater complexity in supply chains, companies are increasingly exposed to supply risks. Various strategies to mitigate supply risks have been developed using modeling-based approaches, including risk diversification by dual sourcing and direct investment to improve supplier performance. Yet, despite the overwhelming evidence that managerial decisions are influenced by behavioral factors, particularly under risk and uncertainty, such behavioral factors are typically not considered by previous theoretical studies. In this paper, we use controlled lab experiments to evaluate the performance of dual sourcing and single sourcing with supplier improvement strategies to mitigate the risks of a buyer facing suppliers with different costs and risk profiles, and we develop behavioral theories to elucidate the decision-making process under supply risks more effectively. With dual sourcing, human buyers do not diversify their orders effectively (relying on a more even allocation of orders between suppliers than theory suggests) and exhibit quantity hedging behavior. To explain this phenomenon, we propose and empirically validate a behavioral theory in which human buyers choose order quantities to minimize their disutility from order allocation errors between suppliers. Human buyers use the single sourcing with supplier improvement strategy relatively effectively, despite being subject to supplier selection errors due to bounded rationality.


Nov. 3rd, 2017 – Pierre Le Bodic (Monash)

  • Online Estimation of the Size of the Branch and Bound Tree in MIP Solvers

    [Abstract]
    We present an online method that estimates the final size of the branch-and-bound tree in Mixed-Integer Programming solvers. The method combines an old sampling method due to Knuth (1975) and recent work on branching by Le Bodic and Nemhauser (2017). This method is implemented in the MIP solver SCIP and its results are displayed as an extra column. This is joint work with Gleb Belov, Samuel Esler, Dylan Fernando and George Nemhauser.


Oct. 27th, 2017  – Regan Baucke

  • A Deterministic Decomposition Algorithm to Solve Multistage Stochastic Programs

    [Abstract]

    Multistage stochastic programming problems are an important class of optimisation problems, especially in energy planning and scheduling. These problems and their solution methods have been of particular interest to researchers in stochastic programming recently. Because of the large scenario trees that these problems induce, current solution methods require random sampling of the tree in order to build a candidate policy. Candidate policies are then evaluated using Monte Carlo simulation. Under certain sampling assumptions, theoretical convergence is obtained almost surely. In practice, convergence of a given policy requires a statistical test and is only guaranteed at a given level of confidence.

    In this talk, I will present a deterministic algorithm to solve these problems. The main feature of this algorithm is a deterministic path sampling scheme during the forward pass phase of the algorithm which is guaranteed to reduce the bound gap at all the nodes visited. Because policy simulation is no longer required, there is an improvement in performance over traditional methods for problems in which a high level of confidence is sought.

    [Bio]
    Regan Baucke is a PhD student at the University of Auckland working under the supervision of Golbon Zakeri and Anthony Downward. His work focuses on multistage stochastic programming and risk aversion. He is a member of the EPOC research group at the University of Auckland, which focuses on mathematical modelling and optimisation with a view to analysing and improving the New Zealand electricity market.


Oct. 20th, 2017 (10am, ISyE Main 126)

  • Di Wu – Fixed Budget Ranking and Selection under Input Uncertainty

    [Abstract]
    Studies of fixed budget Ranking and Selection (R & S, a.k.a. Best Arm Identification in the multi-armed bandits literature) mainly focus on maximizing the probability of correctly identifying the best design/arm using a fixed number of simulation runs/pulls. In the realm of stochastic simulation, R & S is mostly studied under the assumption of an accurate input model, while in practice the input model is usually estimated using finite input data, causing the so-called “input uncertainty”. In this talk, we propose a new formulation to account for input uncertainty, where a budget is introduced for collecting input data as well as running simulations. Different types of algorithms are developed to approximately solve the formulation, and convergence analysis is conducted to show (among other asymptotics) the large deviations properties of related estimators, revealing that the probability of false selection still converges to 0 exponentially fast. Numerical results suggest that adaptive algorithms typically achieve superior finite-sample performance.

  • Rui Gao – Wasserstein Distributional Robustness and Regularization in Statistical Learning

    [Abstract]
    A central question in statistical learning is to design algorithms that generalize to new data. We tackle this problem by formulating a Wasserstein distributionally robust stochastic optimization problem and establishing its connection with gradient-norm regularization. This connection provides new interpretations for problems involving regularization, including many machine learning problems and discrete choice models. Moreover, it suggests a systematic approach to regularizing high-dimensional non-convex problems, as illustrated by the training of generative adversarial networks and the estimation of mixed logit models.
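
    Informally, and under smoothness assumptions (a schematic statement of the kind of result discussed, not its precise form), the connection reads

        $$ \sup_{Q:\, W_p(Q, \hat{P}_n) \le \rho} \mathbb{E}_{Q}[\ell_\theta(\xi)] \;\approx\; \mathbb{E}_{\hat{P}_n}[\ell_\theta(\xi)] + \rho \big( \mathbb{E}_{\hat{P}_n} \| \nabla_\xi \ell_\theta(\xi) \|_*^{q} \big)^{1/q}, \qquad \tfrac{1}{p} + \tfrac{1}{q} = 1, $$

    i.e., robustness over a Wasserstein ball of radius $\rho$ behaves, to first order, like gradient-norm regularization of the empirical risk.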

  • Zhaowei She – The Inefficiency of Capitated Healthcare System (A Case Study of Medicare Advantage)

    [Abstract]
    Health care in the United States has been undergoing another wave of changes from fee-for-service payment systems to capitated systems. It is generally believed that capitation payments can incentivize healthcare providers to provide better care at lower cost. However, empirical evidence shows the contrary: sicker patients are leaving Medicare Advantage, the largest capitated healthcare system in the United States. In this talk, we model the current payment mechanism of Medicare Advantage in a queueing framework and unveil the reason why this capitated payment mechanism cannot achieve an efficient resource allocation in the Medicare system.


Sep. 22nd, 2017  – William B. Haskell

  • Markov chain methods for analyzing algorithms

    [Abstract]
    We are interested in using Markov chain methods to establish convergence in probability for various algorithms in dynamic programming and optimization. We start by investigating simple “empirical” variants of classical value and policy iteration for dynamic programming. In this case, we show that the progress of these algorithms is stochastically dominated by an easy-to-analyze Markov chain, from which we can extract a convergence rate for the original algorithms. We continue by showing that this same line of reasoning covers several empirical algorithms in optimization as well. We argue that the advantage of this approach lies in its simplicity and intuitive appeal.
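
    A minimal sketch of the “empirical” flavor of value iteration (the MDP data and sample sizes are illustrative; the talk's dominance argument is about analyzing such iterates, which this code only simulates):

        import numpy as np

        # Empirical value iteration: the Bellman update uses sample averages
        # of the next-state value instead of exact expectations.
        rng = np.random.default_rng(6)
        nS, nA, gamma, N = 10, 3, 0.9, 64
        P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # transition probabilities
        R = rng.uniform(0, 1, size=(nS, nA))            # rewards

        V = np.zeros(nS)
        for _ in range(200):
            # For each (s, a), draw N next states and average their values.
            est = np.array([[V[rng.choice(nS, N, p=P[s, a])].mean()
                             for a in range(nA)] for s in range(nS)])
            V = (R + gamma * est).max(axis=1)
        print("approximate optimal values:", np.round(V, 2))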


May 1st, 2017  – Rahul Mazumder

  • Best Subset Selection via Modern Optimization

    [Abstract]
    Sparsity plays a key role in linear statistical modeling and beyond. In this talk I will discuss the best subset selection problem, a central problem in statistics, wherein the task is to select a set of k relevant features from among p variables, given n samples. I will discuss recent computational techniques relying on integer optimization and first order optimization methods, that enable us to obtain high-quality, near-optimal solutions for best-subsets regression, for sizes well beyond what was considered possible. This sheds interesting new light on the statistical behavior of subset selection problems vis-a-vis popular, computationally friendlier methods like L1 regularization — thereby motivating the design of new statistical estimators with better statistical and computational properties. If time permits, I will also discuss another closely related, extremely effective, but relatively less understood sparse regularization scheme: the forward stage-wise regression (aka Boosting) in linear models.
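
    A brute-force illustration of best subset selection for tiny p (the talk's point is precisely that integer optimization scales far beyond such enumeration; data here are synthetic):

        import itertools
        import numpy as np

        # Exhaustive best-subset regression: pick the k features minimizing
        # residual sum of squares over all size-k supports.
        rng = np.random.default_rng(7)
        n, p, k = 100, 8, 3
        X = rng.normal(size=(n, p))
        beta_true = np.zeros(p)
        beta_true[:k] = [3.0, -2.0, 1.5]
        y = X @ beta_true + 0.5 * rng.normal(size=n)

        def rss(support):
            Xs = X[:, support]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            return ((y - Xs @ beta) ** 2).sum()

        best = min(itertools.combinations(range(p), k), key=rss)
        print("selected features:", best)   # ideally (0, 1, 2)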


April 27th, 2017  – Andy Philpott

  • Risk-Averse Competitive Equilibrium in Electricity Markets

    [Abstract]
    Markets for wholesale electricity supply are now ubiquitous throughout the industrialized world. In the simplest form of these markets, a perfectly competitive partial equilibrium corresponds to the optimal solution to a convex economic dispatch problem, where Lagrange multipliers yield nodal energy prices. This construction can be extended to the setting where the dispatch problem becomes a convex stochastic program (for example to deal with intermittent renewable energy, or uncertain hydro-reservoir inflows) and agents maximize expected profits. When agents are risk averse, competitive partial equilibrium corresponds to a risk-adjusted social optimum as long as derivative instruments enable agents to trade risk. We illustrate these ideas using some simple examples drawn from the New Zealand electricity market.

    [Bio]
    Dr. Andy Philpott is a Professor at the University of Auckland, and he is the director of the Electric Power Optimization Centre. He received his PhD and MPhil from the University of Cambridge, and his BSc and BA degrees from Victoria University of Wellington. His research interests span most of mathematical programming, in particular linear, non-linear and stochastic programming and their application to operations research problems such as optimal planning under uncertainty, capacity expansion in telecommunications and power networks, optimal power generation in hydro-electric power systems, stochastic optimization in supply chains, and optimal yacht routing under uncertainty. Much of his recent research has been conducted as part of the Electric Power Optimization Centre, which develops optimization and equilibrium models of electricity markets.


April 21st, 2017  – Lin Yang

  • Streaming Symmetric Norms via Measure Concentration

    [Abstract]

    We characterize the streaming space complexity of every symmetric norm l (a norm on R^n invariant under sign-flips and coordinate-permutations) by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a (1±ϵ)-approximation to the norm of the stream, for every 0 < ϵ ≤ 1/2. (The bounds match up to poly(ϵ^{-1} log n) factors.) We further extend those bounds to any large approximation ratio D ≥ 1.1, showing that the decrease in space complexity is proportional to D^2, and that this factor is the best possible. All of the bounds depend on the median of l(x) when x is drawn uniformly from the l_2 unit sphere. The same median governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky’s Theorem.

    The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not studied before in the streaming model, including the top-k norm and the k-support norm, which was recently employed for machine learning tasks. Overall, these results make progress on two outstanding problems in the area of sublinear algorithms (Problems 5 and 30 in https://sublinear.info). Based on the paper: https://arxiv.org/abs/1511.01111. Joint work with: Jaroslaw Blasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer.

    [Bio]
    Lin Yang is currently a PhD student in the Computer Science Department and Physics Department of Johns Hopkins University. He will obtain two PhD degrees at the end of this year. He works on algorithms for streaming data analysis, studying the fundamental complexity of the streaming computation model as well as designing new and better algorithms for important problems. His research connects theoretical computer science with machine learning and computational cosmology in the big data regime.


April 17th, 2017  – Ezgi Karabulut

  • Decentralized Online Integer Programming

    [Abstract]
    We consider a set of collaborative agents that need to coordinate their actions so as to socially optimize the sum of their objectives while satisfying a common resource constraint. The objective functions of the players are unknown to them a priori and are revealed in an online manner over time. Given a resource allocation, each player’s action is determined by solving an integer program. Due to privacy issues, players want to share limited information while solving this problem in a decentralized way. A cardinality resource constraint links all player actions. The resulting problem is an online optimization problem to optimally allocate the resource among the players prior to observing the item values. The performance of an online algorithm is measured by regret, defined as the difference between the total gain of the best fixed decision considering all data in hindsight and the total gain incurred by the online decisions we make. A good online algorithm is expected to have regret as a sublinear function of the number of rounds, $T$. In this research, we show that for any deterministic online algorithm for the resource allocation problem, there exist objective function coefficients that guarantee $O(T)$ regret. Furthermore, for the case when the players' integer programs satisfy a special concavity condition, we propose a randomized online algorithm for the resource allocation problem that guarantees an upper bound of $O(\sqrt T)$ on the expected regret.

    This is joint work with Shabbir Ahmed and George Nemhauser.


April 7th, 2017 – Rediet Abebe

  • Fair Division via Social Comparison

    [Abstract]
    We study cake cutting on a graph, where agents can only evaluate their shares relative to their neighbors. This is an extension of the classical problem of fair division to incorporate the notion of social comparison from the social sciences. We say an allocation is locally envy-free if no agent envies a neighbor’s allocation, and locally proportional if each agent values its own allocation as much as the average value of its neighbors’ allocations. We generalize the classical “Cut and Choose” protocol for two agents to this setting, by fully characterizing the set of graphs for which an oblivious single-cutter protocol can give locally envy-free (thus also locally proportional) allocations. We study the price of envy-freeness, which compares the total value of an optimal allocation with that of an optimal, locally envy-free allocation. Surprisingly, a lower bound of Ω(√n) on the price of envy-freeness for global allocations also holds for local envy-freeness in any connected graph, so sparse graphs do not provide more flexibility asymptotically with respect to the quality of envy-free allocations.
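
    A small checker for the local fairness notions just defined (graph, valuations, and allocation below are illustrative):

        # Check local envy-freeness of an allocation on a graph.
        def locally_envy_free(values, bundles, neighbors):
            """values[i][g]: agent i's value for good g; bundles[i]: set of goods."""
            def v(i, B):
                return sum(values[i][g] for g in B)
            return all(v(i, bundles[i]) >= v(i, bundles[j])
                       for i in neighbors for j in neighbors[i])

        neighbors = {0: [1], 1: [0, 2], 2: [1]}      # path graph: 0 - 1 - 2
        values = [[5, 1, 3], [2, 4, 2], [1, 1, 6]]
        bundles = [{0}, {1}, {2}]
        print(locally_envy_free(values, bundles, neighbors))  # True for this instance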

    [Bio]
    Rediet Abebe is a PhD student in the Department of Computer Science at Cornell University, advised by Jon Kleinberg. Her research focuses on algorithms, computational social science, and social networks. In particular, she is interested in using insights from theoretical computer science to better understand and implement interventions in socioeconomic inequality and opinion dynamics. She is the 2016 recipient of the Google Generation Scholarship. Prior to Cornell, she completed a B.A. in Mathematics from Harvard University, an M.A. in Mathematics from the University of Cambridge, and an M.S. in Applied Mathematics from Harvard University. She was born and raised in Addis Ababa, Ethiopia.


March 27th, 2017 (Monday) – Arild Helseth

  • Stochastic Optimization Applied to Hydropower Scheduling

    [Abstract]
    Hydropower producers rely on stochastic optimization in their daily scheduling of resources. Due to uncertainties in future inflow to reservoirs and market prices, stochastic optimization models often prove to outperform their deterministic and heuristic counterparts. In this presentation, the combined SDP/SDDP methodology widely used by Nordic producers will be described. We discuss how both changes in power market design and the future generation mix in Europe introduce new challenges to this methodology. In particular, it will become increasingly important for the methodology to capture the exact unit commitment of generators and to incorporate the uncertainty in multiple price processes.


March 17th, 2017 (Friday) – Soomin Lee

  • Communication-Efficient Decentralized and Stochastic Optimization

    [Abstract]
    Optimization problems arising in decentralized multi-agent systems have gained significant attention in the context of cyber-physical, communication, power, and robotic networks combined with privacy preservation, distributed data mining and processing issues. The distributed nature of the problems is inherent due to partial knowledge of the problem data (i.e., a portion of the cost function or a subset of the constraints is known to different entities in the system), which necessitates costly communications among neighboring agents. In this talk, we present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems which can significantly reduce the number of inter-node communications. Our major contribution is the development of decentralized communication sliding methods, which can skip inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions.

    This is a joint work with Guanghui (George) Lan and Yi Zhou.

    [Bio]
    Soomin Lee is a postdoc fellow in industrial and systems engineering at Georgia Tech. She received her Ph.D. in Electrical and Computer Engineering (2013) and Master’s in Computer Science (2012) from the University of Illinois, Urbana-Champaign. After graduation, she joined the Duke Robotics Group as a postdoc associate. In 2009, she worked as an assistant research officer at the Advanced Digital Science Center (ADSC) in Singapore. She is a recipient of an NSF fellowship for enhancing partnerships with industry. Her research interests include control and optimization of various distributed engineering systems interconnected over complex networks, large-scale machine learning for big data analytics, as well as theoretical optimization.


March 10th, 2017 – Yingbin Liang

  • Nonconvex Approach for High Dimensional Estimation

    [Abstract]
    High dimensional estimation problems, such as phase retrieval, low rank matrix estimation, and blind deconvolution, have recently attracted intensive attention due to their wide applications in medical imaging, signal processing, social networks, etc. Traditional approaches to solving these problems are either empirical, which work well but lack theoretical guarantees, or via convex formulations, which come with performance guarantees but are computationally costly in large dimensions. Nonconvex approaches are emerging as a powerful alternative, which come with provable performance guarantees and are computationally efficient.

    In this talk, I will first introduce general ideas of using nonconvex methods for solving high dimensional estimation problems. I will then focus on the phase retrieval problem to present our recent advances in nonconvex methods. In particular, I will first describe our design of a nonconvex objective that yields a first-order algorithm outperforming all existing algorithms in both statistical and computational efficiency. I will then present our further design of stochastic algorithms for large-scale phase retrieval with provable convergence guarantees. Towards the end of the talk, I will discuss insights learned from our studies, which are beneficial to future directions of this topic.

    [Bio]
    Dr. Yingbin Liang received the Ph.D. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2005. From 2005 to 2007, she was a postdoctoral research associate at Princeton University. From 2008 to 2009, she was an assistant professor at the University of Hawaii. Since December 2009, she has been on the faculty at Syracuse University, where she is an associate professor. Dr. Liang’s research interests include information theory, statistical learning theory, optimization for large scale machine learning, and wireless communication and networks.

    Dr. Liang was a Vodafone Fellow at the University of Illinois at Urbana-Champaign during 2003-2005, and received the Vodafone-U.S. Foundation Fellows Initiative Research Merit Award in 2005. She also received the M. E. Van Valkenburg Graduate Research Award from the ECE department, University of Illinois at Urbana-Champaign, in 2005. In 2009, she received the National Science Foundation CAREER Award, and the State of Hawaii Governor Innovation Award. In 2014, she received EURASIP Best Paper Award for the EURASIP Journal on Wireless Communications and Networking. She served as an Associate Editor for the Shannon Theory of the IEEE Transactions on Information Theory during 2013-2015.


Feb. 27th, 2017 – Mohit Singh

  • Minimum Birkhoff-von Neumann Decompositions

    [Abstract]
    Motivated by the applications in routing in data centers, we study the problem of expressing a doubly stochastic matrix as a linear combination using the smallest number of (sub)permutation matrices. The Birkhoff-von Neumann decomposition theorem proves the existence of such a decomposition, but does not give a representation with the smallest number of permutation matrices. In this talk, I will discuss the tractability of this problem from an exact and an approximate viewpoint. This is joint work with Janardhan Kulkarni and Euiwoong Lee.
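
    For contrast with the minimization problem studied in the talk, here is the standard greedy Birkhoff-von Neumann peeling heuristic, which produces some decomposition but not necessarily one with the fewest permutation matrices (matrix and tolerance are illustrative):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Greedy peeling: repeatedly find a perfect matching on the support of
        # the doubly stochastic matrix and subtract its smallest matched entry.
        def birkhoff_greedy(D, tol=1e-9):
            D = D.astype(float).copy()
            terms = []
            while D.max() > tol:
                # Maximize the product of matched entries (sum of logs), which
                # steers the matching away from (near-)zero entries.
                rows, cols = linear_sum_assignment(-np.log(np.maximum(D, tol)))
                theta = D[rows, cols].min()     # largest coefficient we can peel off
                P = np.zeros_like(D)
                P[rows, cols] = 1.0
                terms.append((theta, P))
                D -= theta * P
            return terms

        D = np.array([[0.5, 0.3, 0.2],
                      [0.2, 0.5, 0.3],
                      [0.3, 0.2, 0.5]])
        terms = birkhoff_greedy(D)
        print(len(terms), "permutation matrices; coefficients:",
              [round(t, 3) for t, _ in terms])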


Feb. 24th, 2017 (Friday 3pm-4pm)

  • Joseph Paat – How to choose what you lift

    [Abstract]
    One way to generate valid cuts (that is, valid inequalities) for a mixed-integer set is to first relax any integrality constraints and create a cut for the continuous relaxation. However, as this approach ignores integrality information, cuts generated in this way may be weak. To remedy this, one may hope to somehow reintroduce integrality information in an attempt to strengthen the cut. This process of reimposing integrality on a variable is referred to as lifting the variable. In this talk we explore the idea of lifting from the viewpoint of cut-generating functions. We examine questions such as “Does lifting one variable produce a stronger cut than lifting another?” and “How much strength is gained from lifting a single variable?” This work was done in collaboration with Santanu Dey and Amitabh Basu.

  • Amitabh Basu – Understanding Deep Neural Networks with Rectified Linear Units

    [Abstract]
    In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give the first-ever polynomial time (in the size of data) algorithm to train a ReLU DNN with one hidden layer to global optimality. This follows from our complete characterization of the ReLU DNN function class whereby we show that a $\mathbb{R}^n \to \mathbb{R}$ function is representable by a ReLU DNN if and only if it is a continuous piecewise linear function. The main tool used to prove this characterization is an elegant result from tropical geometry. Further, for the $n=1$ case, we show that a single hidden layer suffices to express all piecewise linear functions, and we give tight bounds for the size of such a ReLU DNN. We follow up with gap results showing that there is a smoothly parameterized family of $\mathbb{R}\to \mathbb{R}$ “hard” functions that lead to an exponential blow-up in size, if the number of layers is decreased by a small amount. An example consequence of our gap theorem is that for every natural number $N$, there exists a function representable by a ReLU DNN with depth $N^2+1$ and total size $N^3$, such that any ReLU DNN with depth at most $N+1$ will require at least $\frac{1}{2} N^{N+1} - 1$ total nodes. Finally, we construct a family of $\mathbb{R}^n\to \mathbb{R}$ functions for $n \geq 2$ (also smoothly parameterized), whose number of affine pieces scales exponentially with the dimension $n$ at any fixed size and depth. To the best of our knowledge, such a construction with exponential dependence on $n$ has not been achieved by previous families of “hard” functions in the neural nets literature. This construction utilizes the theory of zonotopes from polyhedral theory.


Feb. 14th, 2017 – Weijun Xie

  • On Deterministic Reformulations of Distributionally Robust Chance Constrained Program

    [Abstract]
    A chance constrained optimization problem involves multiple uncertain constraints, i.e. constraints with stochastic parameters, that are required to be satisfied with probability at least a pre-specified threshold. In a distributionally robust chance constrained program (DRCCP), the chance constraint is required to hold for all probability distributions of the stochastic parameters from a given family of distributions, called an ambiguity set. In this work, we consider DRCCP involving linear uncertain constraints and an ambiguity set specified by convex moment inequalities. In general, a DRCCP is nonconvex and hence NP-hard to optimize. We develop deterministic reformulations of such problems and establish sufficient conditions under which these formulations are convex and tractable. We further approximate DRCCP by a system of single chance constraints, one for each uncertain constraint. The tractability of such approximation has been “an open question” since 2006. We provide sufficient conditions under which the approximation set is equivalent to the feasible region of DRCCP and can be reformulated as a convex program. Finally, we present a chance constrained optimal power flow model to illustrate the proposed methodology.


Jan. 30th, 2017 – Andreas Bärmann

  • Staircase Compatibility and its Applications in Scheduling and Piecewise Linearization

    [Abstract]
    We consider the clique problem with multiple-choice constraints (CPMC) and characterize a case where it is possible to give an efficient description of the convex hull of its feasible solutions. This case, which we call staircase compatibility, generalizes common properties in applications and allows for a linear description of the integer feasible solutions to (CPMC) with a totally unimodular constraint matrix of polynomial size. We derive two such totally unimodular reformulations for the problem: one that is obtained by a strengthening of the compatibility constraints and one that is based on a representation as a dual network flow problem. We also evaluate our reformulations from a computational point of view by applying them to two different real-world applications. The first one is a problem in railway timetabling where we try to adapt a given timetable slightly such that energy costs from operating the trains are reduced. The second one is the piecewise linearization of non-linear flow problems on a gas network. In both cases, we are able to reduce the solution times significantly by passing to the theoretically stronger formulations of the problem.


Dec. 6th, 2016 – Lewis Ntaimo

  • Irreducible Infeasible Subsystem (IIS) Decomposition for Probabilistically Constrained Stochastic Programming

    [Abstract]
    Probabilistically constrained stochastic programs (PC-SPs) have many applications in science and engineering but are very challenging to solve. Furthermore, linear programming (LP) provides very weak bounds on the optimal value. In this talk, we introduce a new decomposition approach using irreducible infeasible subsystem (IIS) inequalities to strengthen the LP-relaxation of PC-SPs. We first establish the theoretical results for determining IIS inequalities for the continuous case, and then extend the results to the binary case and give example illustrations. Next, we present an IIS branch-and-cut algorithm for PC-SP and report on preliminary computational results.

    [Bio]
    Lewis Ntaimo received his Ph.D. degree in systems and industrial engineering in 2004, his M.S. degree in mining and geological engineering in 2000, and his B.S. degree in mining engineering, all from the University of Arizona. He has been with Texas A&M University since 2004. Dr. Ntaimo's research interests are in algorithms for large-scale stochastic optimization, systems modeling, and discrete event simulation. Recent applications include wildfire response planning, energy reduction in data centers, wind farm operations and maintenance, and patient and resource management in healthcare. His research has been funded by the National Science Foundation, Department of Homeland Security, and industry. Dr. Ntaimo is a member of INFORMS and IISE. He served as Vice Chair for the INFORMS Optimization Society from 2008-2010 and is now Vice President for the INFORMS Minority Issues Forum. He is on the Editorial Board of the Journal of Global Optimization and is a member of the technical committee for the Society for Computer Simulation DEVS symposium.


Nov. 18th, 2016 (Friday) – Anthony Papavasiliou

  • An Asynchronous Distributed Algorithm for Solving Stochastic Unit Commitment

    [Abstract]
    We present an asynchronous algorithm for solving the stochastic unit commitment (SUC) problem using scenario decomposition. The algorithm is motivated by the scale of the problem and the significant differences in run times observed among scenario subproblems, which can result in inefficient use of distributed computing resources by synchronous parallel algorithms. Dual iterations are performed asynchronously using a block-coordinate subgradient descent method which allows performing block-coordinate updates using delayed information. We provide convergence guarantees for the asynchronous block-coordinate subgradient method based on previous results for incremental subgradient methods and stochastic subgradient methods. The algorithm recovers candidate primal solutions from the solutions of scenario subproblems using recombination heuristics.

    The asynchronous algorithm is implemented on a high performance computing cluster, and we conduct numerical experiments for two-stage SUC instances of the Western Electricity Coordinating Council (WECC) system and of the Central Western European (CWE) system. The WECC system that we study consists of 130 thermal generators, 182 nodes and 319 lines with hourly resolution and up to 1000 scenarios, while the CWE system consists of 656 thermal generators, 679 nodes and 1073 lines with quarter-hourly resolution and up to 120 scenarios. When using 10 nodes of the cluster per instance, the algorithm provides solutions that are within 2% of optimality for all problems within 47 minutes for WECC and 3 hours, 54 minutes for CWE. Moreover, we find that an equivalent synchronous parallel subgradient algorithm would leave processors idle up to 84% of the time, an observation which underscores the need for designing asynchronous optimization schemes in order to fully exploit distributed computing on real-world applications.
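
    For orientation, the scenario decomposition being dualized has the following generic shape (notation mine, simplified): copy the first-stage commitment variables per scenario and relax the nonanticipativity constraints,

        $$ \min \sum_s p_s\, c(x_s, y_s) \quad \text{s.t.} \quad (x_s, y_s) \in X_s \;\; \forall s, \qquad x_s = z \;\; \forall s. $$

    Relaxing $x_s = z$ with multipliers $\lambda_s$ (restricted to $\sum_s \lambda_s = 0$) makes the Lagrangian separable into one subproblem per scenario, $\min_{(x_s, y_s) \in X_s} [\, p_s\, c(x_s, y_s) + \lambda_s^\top x_s \,]$, so each scenario block can be solved, and its dual coordinate updated, asynchronously.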

    [Bio]
    Ignacio Aravena obtained his B.Sc. and M.Sc. degrees in Electrical Engineering from Universidad Tecnica Federico Santa Maria (UTFSM), Chile, where he also served as a lecturer and industrial consultant. He is currently a Ph.D. student in Applied Mathematics at Universite Catholique de Louvain (UCL), Belgium. His research focuses on evaluating the impact of renewable energy integration on the European electricity markets, using detailed models and high performance computing.


Nov. 10th, 2016 (Thursday) – William Haskell

  • Online algorithms for constrained optimization   Show Abstract

    [Abstract]
    Much of the literature on online optimization focuses on unconstrained minimization of objective functions with a large number of terms. We are interested in extending this development to create online algorithms for convex optimization problems with large numbers of constraints. We offer two approaches in this regard. First, we combine random constraint sampling with the classical primal-dual algorithm. Second, we combine random constraint sampling with classical penalty/barrier methods. We are able to give a convergence rate analysis for both approaches.
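
    A stylized instance of the first approach on a toy problem (my own sketch, not the authors' algorithm): at each iteration, sample one constraint, take a primal step on the sampled Lagrangian gradient, and take a projected dual ascent step on the sampled multiplier.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy problem: min ||x - x0||^2  s.t.  a_i^T x <= b_i,  i = 1..m.
        n, m = 10, 500
        A = rng.normal(size=(m, n))
        b = rng.uniform(1.0, 2.0, size=m)
        x0 = 3.0 * rng.normal(size=n)

        x, lam = np.zeros(n), np.zeros(m)
        eta = 1e-3  # step size (untuned; a decaying schedule is more typical)

        for t in range(100000):
            i = rng.integers(m)  # sample one constraint uniformly
            # Primal step: objective gradient plus the sampled constraint's term,
            # scaled by m so the Lagrangian-gradient estimate is unbiased.
            x -= eta * (2.0 * (x - x0) + m * lam[i] * A[i])
            # Dual step: projected coordinate ascent on the sampled multiplier.
            lam[i] = max(0.0, lam[i] + eta * (A[i] @ x - b[i]))

        print("worst violation:", max(0.0, (A @ x - b).max()))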

    [Bio]
    William B. Haskell completed his Ph.D. in operations research at the University of California, Berkeley in 2011. He is currently an assistant professor in the Department of Industrial and Systems Engineering at the National University of Singapore. His research focuses on large-scale decision-making, and he has a special interest in risk-aware sequential optimization.


Nov. 8th, 2016

  • Luis Damian Reyes Rodriguez – Complexity of dynamic delivery problems with release dates and deadlines   Show Abstract

    [Abstract]
    Motivated by a case study in food delivery operations, we investigate the complexity of dynamic delivery problems with release times and service guarantees. At the heart of these problems is a trade-off between waiting to consolidate more orders – enabling cost-effective delivery routes – and dispatching a vehicle earlier – completing some orders while preserving capacity for others released later in the operating period. We introduce polynomial-time algorithms for some deterministic variants on a 1-dimensional geometry.

  • Asteroide Santana – Cut generating functions for conic sets   Show Abstract

    [Abstract]
    In this paper, we study cut generating functions for conic sets. Our first main result shows that if the conic set is bounded, then cut generating functions for integer linear programs can easily be adapted to give the integer hull of the conic integer program. We then introduce a new class of cut generating functions which are non-decreasing with respect to the second-order cone. We show that, under some minor technical conditions, these functions together with integer linear programming-based functions are sufficient to yield the integer hull of intersections of conic sections in $\mathbb{R}^2$.


Nov. 1st, 2016 – Huan Xu

  • Goal Scoring, Coherent Loss, and Application to Machine Learning   Show Abstract

    [Abstract]
    Motivated by the binary classification problem in machine learning, we study a class of decision problems where the decision maker has a list of goals and aims to attain as many of them as possible. In binary classification, this essentially means seeking a prediction rule that achieves the lowest probability of misclassification, which computationally involves minimizing a (difficult) non-convex, 0-1 loss function. To address the intractability, previous methods minimize the cumulative loss, i.e., the sum of convex surrogates of the 0-1 loss of each goal. We revisit this paradigm and instead develop an axiomatic framework: we propose a set of salient properties for goal-scoring functions and then propose the coherent loss approach, a tractable upper bound on the loss over the entire set of goals. We show that the proposed approach yields a strictly tighter approximation to the total loss (i.e., the number of missed goals) than any convex cumulative loss approach while preserving the convexity of the underlying optimization problem. Moreover, this approach, applied to binary classification, also has a robustness interpretation which builds a connection to robust SVMs.

    [Bio]
    Dr. Huan Xu is an assistant professor at the Stewart School of Industrial & Systems Engineering at Georgia Tech. His current research interest focuses on data, learning, and decision making. Specifically, he is interested in machine learning, high-dimensional statistics, robust and stochastic optimization, sequential decision making, and application to large-scale systems.


Oct. 25th, 2016 – Fabian Rigterink

  • On the strength of relaxations of the boolean quadric polytope   Show Abstract

    [Abstract]
    In his seminal 1989 paper, The boolean quadric polytope: Some characteristics, facets and relatives [Mathematical Programming, 45(1-3):139-172], Padberg introduced five classes of valid inequalities for the boolean quadric polytope: triangle, clique, cut, generalized cut, and odd cycle inequalities. Added to the McCormick relaxation, these inequalities give a stronger relaxation of the convex hull of the graph of a bilinear function. In this talk, we study classes of bilinear functions for which the McCormick relaxation together with some of the Padberg inequalities characterizes the convex hull, and we study which of the Padberg inequalities give the strongest relaxation of the convex hull. We then apply the strong inequalities to (quadratically constrained) quadratic programs from the literature to find good lower bounds quickly. Finally, we demonstrate that warm starting a global optimization solver with these lower bounds can improve the solver's performance.

    This is joint work with Natashia Boland, Thomas Kalinowski, and Hamish Waterer.
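
    For concreteness, in the boolean quadric polytope the variable $y_{ij}$ linearizes the product $x_i x_j$ of binary variables, and the McCormick relaxation consists of

        $$ y_{ij} \ge 0, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \le x_i, \qquad y_{ij} \le x_j, $$

    while, for example, one family of Padberg's triangle inequalities reads $x_i + x_j + x_k - y_{ij} - y_{ik} - y_{jk} \le 1$. (These are the standard statements; which combinations characterize the convex hull for which bilinear functions is exactly the subject of the talk.)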


Oct. 18th, 2016 – Ilbin Lee

  • Uncertainty Quantification for Optimization Models and Batch Solution Methods   Show Abstract

    [Abstract]
    In most optimization models, the input parameters are uncertain estimates derived from data. Existing approaches in optimization focus on finding a single optimal decision, one that performs well on average or guarantees a worst-case performance under uncertainty. Although making a single decision given data is key to the operation of any system, additional insight into how uncertain such a solution might be is also important in decision-making. The intuition that classical sensitivity analysis can provide is limited for complex systems because there can be a large number of input parameters. We propose a computational framework to empirically estimate this uncertainty by solving the optimization problem for sampled input parameters. In this talk, we focus on solving linear programs (LPs) with varying parameters.

    The common approach is to solve the LPs for all combinations of given parameter values, called the brute-force approach, which can be computationally infeasible when the parameter space is high-dimensional and/or the underlying LP is computationally challenging. We introduce a new approach for solving a large number of LPs that differ only in the right-hand side of the constraints ($b$ in $Ax = b$). The computational approach builds on theoretical properties of the geometry of the space of critical regions, where a critical region is defined as the set of $b$'s for which a given basis is optimal. To formally support our computational approach, we provide proofs of geometric properties of neighboring critical regions. While these properties have been stated and applied in the existing parametric programming literature, to the best of our knowledge their formal proofs have not been provided. Based on the geometric properties of critical regions, we develop an algorithm that solves the LPs in batches by finding critical regions that contain multiple $b$'s. Moreover, we suggest a data-driven version of our algorithm that uses the distribution (e.g., shape) of the sample of $b$'s for which the LPs need to be solved. Experimental results on a benchmark problem show that our approach can be more efficient and scale better in the number of $b$'s than brute force, but they also indicate some limitations of the algorithm. Possible extensions of the method are also discussed.
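
    A minimal sketch of the basis-reuse idea (my own illustration; `optimal_basis` stands in for any LP solver routine that returns an optimal basis): the critical region of a basis $B$ is $\{b : A_B^{-1} b \ge 0\}$, so one factorization can classify a whole batch of $b$'s at once.

        import numpy as np

        def covers(basis, A, b_batch):
            # A dual-feasible basis B is optimal exactly for the b's in its
            # critical region {b : A_B^{-1} b >= 0}; test the whole batch at once.
            AB = A[:, basis]                     # m x m basis matrix
            xB = np.linalg.solve(AB, b_batch.T)  # columns are A_B^{-1} b
            return (xB >= -1e-9).all(axis=0)     # True where the basis stays optimal

        # Usage sketch, assuming optimal_basis(A, c, b0) is available:
        #   mask = covers(optimal_basis(A, c, b0), A, b_samples)
        #   b_samples[mask]   -> solved as one batch by this basis
        #   b_samples[~mask]  -> re-solve, e.g. from a neighboring critical region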

    [Bio]
    Dr. Lee is a postdoctoral researcher in our department working with Dr. Serban. His research interests are in health data analytics and large-scale optimization problems arising in data analytics. He applies his expertise in operations research and machine learning to translate large-scale health data into recommendations for policy makers and to develop novel approaches for solving optimization problems in healthcare applications.


Oct. 4th, 2016 – Rui Gao

  • Distributionally Robust Stochastic Optimization with Wasserstein Distance   Show Abstract

    [Abstract]
    Optimization under uncertainty is often formulated as a stochastic optimization problem. In many settings, a “true” probability distribution may not be known, or the notion of a true distribution may not even be applicable. We consider an approach, called distributionally robust stochastic optimization (DRSO), in which one hedges against all probability distributions that are within a chosen Wasserstein distance from a nominal distribution, for example an empirical distribution. Compared to the popular phi-divergences, the Wasserstein distance results in more reasonable worst-case distributions. We derive a dual reformulation of the corresponding DRSO problem and construct the worst-case distribution explicitly via the first-order optimality condition of the dual problem.
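
    In generic notation (mine), the model and the dual reformulation mentioned above take the form

        $$ \min_x \; \sup_{Q :\, W_p(Q, \widehat{P}_n) \le r} \mathbb{E}_Q[h(x,\xi)] \;=\; \min_{x,\, \lambda \ge 0} \; \lambda r^p + \frac{1}{n} \sum_{i=1}^n \sup_{\xi}\, \big[ h(x,\xi) - \lambda\, d(\xi, \widehat{\xi}_i)^p \big], $$

    where $\widehat{P}_n$ is the empirical distribution of the data $\widehat{\xi}_1, \dots, \widehat{\xi}_n$ and $r$ is the Wasserstein radius.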

    Our contributions are five-fold.

    1. We identify necessary and sufficient conditions for the existence of a worst-case distribution, which are naturally related to the growth rate of the objective function.
    2. We show that the worst-case distributions resulting from an appropriate Wasserstein distance have a concise structure and a clear interpretation.
    3. Using this structure, we show that data-driven DRSO problems can be approximated to any accuracy by robust optimization problems, and thereby many DRSO problems become tractable by using tools from robust optimization.
    4. To the best of our knowledge, our proof of strong duality is the first constructive proof for DRSO problems, and we show that this technique is also useful in other contexts.
    5. Our strong duality result holds in a very general setting, and can be applied to infinite dimensional process control problems and worst-case value-at-risk analysis.

    This is joint work with Anton Kleywegt.


Sep. 27th, 2016 – Merve Bodur

  • Decomposition for loosely coupled integer programs: A multiobjective perspective   Show Abstract

    [Abstract]
    We consider integer programming (IP) problems consisting of (possibly a large number of) interrelated subsystems and a small number of coupling constraints which link blocks of variables that correspond to different subsystems. Such problems are called loosely coupled or nearly-decomposable. Motivated by recent developments in multiobjective programming (MOP), we develop a MOP-based decomposition algorithm to solve loosely coupled IPs. More specifically, we reformulate the problem in such a way that it can be decomposed into a (resource-directive) master problem and a set of MOP subproblems. The proposed algorithm is a column generation algorithm. However, it is based on a new lower bounding problem (which is an IP), and considers a more structured (and usually smaller) set of columns than a traditional column generation algorithm. We provide preliminary computational results on instances with knapsack structure in the subsystems, demonstrating the potential benefits of our approach.

    This is joint work with Shabbir Ahmed, Natashia Boland and George L. Nemhauser.
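
    Schematically (notation mine), a loosely coupled IP has the block-angular form

        $$ \min \Big\{ \sum_k c_k^\top x_k \;:\; \sum_k A_k x_k \le b, \;\; x_k \in X_k \;\; \forall k \Big\}, $$

    with only a few coupling rows $\sum_k A_k x_k \le b$ linking otherwise independent subsystems $X_k$ (knapsack sets, in the experiments above).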


Sep. 20th, 2016 – Yi Zhou

  • An optimal randomized incremental gradient method   Show Abstract

    [Abstract]
    In this talk, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of m ($\ge 1$) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a primal-dual termination criterion. Our major contribution is to develop a randomized primal-dual gradient (RPDG) method, which needs to compute the gradient of only one randomly selected smooth component at each iteration, but can possibly achieve better complexity than PDG in terms of the total number of gradient evaluations. More specifically, we show that the total number of gradient evaluations performed by RPDG can be $O(\sqrt m)$ times smaller, both in expectation and with high probability, than those performed by deterministic optimal first-order methods under favorable situations. We also show that the complexity of the RPDG method is not improvable by developing a new lower complexity bound for a general class of randomized methods for solving large-scale finite-sum convex optimization problems. Moreover, through the development of PDG and RPDG, we introduce a novel game-theoretic interpretation for these optimal methods for convex optimization.


Oct. 30th, 2015 (Friday 1pm)

  • Qianyi Wang – Effectiveness of Sparse Cuts   Show Abstract

    [Abstract]
    In this talk, we present an analysis of the quality of sparse cuts for IPs with sparse formulations. In order to accomplish this analysis, we define a notion of an induced graph based on the constraint matrix. Then, we are able to relate the strength of sparse cutting-planes to graph-theoretic parameters of the induced graph.

  • Burak Kocuk – New Formulation And Strong MISOCP Relaxations For AC Optimal Transmission Switching Problem   Show Abstract

    [Abstract]
    In this work, we formulate the AC Optimal Transmission Switching (AC OTS) problem as a mixed-integer nonlinear program. We propose a mixed-integer second-order cone programming (MISOCP) relaxation and strengthen it via several types of valid inequalities, some of which have demonstrated excellent performance for the AC Optimal Power Flow (AC OPF) problem and others of which are developed specifically for AC OTS. Finally, we propose practical algorithms that utilize solutions from the MISOCP relaxation to obtain high-quality feasible solutions for the AC OTS problem. This is joint work with Santanu Dey and Andy Sun.

  • Caglar Caglayan – Physician Staffing in the Emergency Department: Opening the Blackbox   Show Abstract

    [Abstract]
    We propose an “intuitive”, “realistic” and “tractable” model of the emergency department (ED) as a multi-class, multi-stage queueing network with multiple targeted service levels. Based on an infinite-server approximation and offered-load analysis, we employ the square-root safety principle to determine the right number of physicians in the ED. Our model is detailed enough to capture the key dynamics of the ED but simple enough to understand, to infer results from, and to implement in a clinical setting.

  • Weijun Xie – On The Quantile Cut Closure of Chance-constrained Problems   Show Abstract

    [Abstract]
    Quantile cuts for a chance-constrained problem (CCP) are projections of a subset of the well-known mixing set inequalities onto the original problem space. We study the closure of quantile cuts, and show that iterative application of the closure recovers the convex hull of a CCP.

  • Caglar Caglayan – Optimal Screening Policies for Women at High Risk of Breast Cancer   Show Abstract

    [Abstract]
    Women with high breast density, a family history of breast or ovarian cancer, or BRCA1 or BRCA2 mutations are at higher risk of breast cancer. For such women, non-mammographic modalities such as ultrasound or MRI, as an adjunct to or instead of mammography, can be beneficial, but they lead to increased screening cost. Considering both the potential health benefits and the financial aspects, we study this multi-modality breast cancer screening problem and identify cost-effective optimal screening policies.


Oct. 16th, 2015 (Friday) – Juan Pablo Vielma

  • Embedding Formulations and Complexity for Unions of Polyhedra   Show Abstract

    [Abstract]
    We consider a systematic procedure to construct small but strong Mixed Integer Programming (MIP) formulations for unions of polyhedra. A key feature of the procedure is the flexible use of 0-1 variables that signal the selection among the polyhedra. This flexibility is achieved by considering several possible embeddings of the polyhedra in a higher dimensional space. An important characteristic of these formulations is that they do not use auxiliary variables other than the 0-1 variables strictly needed to construct a formulation (in contrast to “extended” formulations, which are allowed to use any number of auxiliary variables). Nonetheless, the formulations obtained through this embedding procedure can be smaller than the smallest known alternative formulation (extended or not). Furthermore, the Linear Programming (LP) relaxations of these formulations often have extreme points that naturally satisfy the appropriate integrality constraints (such formulations are usually called “ideal”). These characteristics allow some versions of these formulations to provide a significant computational advantage over alternative formulations.

    The embedding formulation approach leads to two notions of complexity for unions of polyhedra: 1) the size of the smallest non-extended formulation, and 2) the size of the smallest ideal non-extended formulation. We give upper and lower bounds for these complexity measures for several classes of polyhedra. We also study how the embedding selection affects the sizes of the associated formulations. Finally, we compare these measures to other complexity notions, such as the size of the convex hull of the union of the polyhedra and the extension complexity of this convex hull.


Oct. 15th, 2015 (11am Thursday, Main 228) – Liron Yedidsion

  • A Polynomial Time Approximation Scheme (PTAS) for the bi-scenario sum of completion times problem.   Show Abstract

    [Abstract]
    An influential aspect of any scheduling problem is the processing time of a task (job), which can be deterministic, stochastic or even uncertain. Scheduling according to unique and known (a.k.a. nominal) processing times may be naïve, since real production systems are usually subject to inherent uncertainty. Moreover, there are typically several objectives that the decision-maker seeks to satisfy. We offer a novel approach in the context of deterministic scheduling, borrowed from scenario-based optimization. The new approach copes with uncertainty by simultaneously optimizing the sum-of-completion-times criterion under two different instances of the processing times. We term the new problem a bi-scenario trade-off problem. We develop a PTAS that approximates the Pareto-optimal set of solutions and show that it is tight. We then introduce a branch-and-bound based algorithm that solves the problem optimally.


Oct. 8th, 2015 (Thursday) – Sheldon Jacobson

  • Balance Optimization Subset Selection (BOSS): An Alternative Approach for Causal Inference with Observational Data.   Show Abstract

    [Abstract]
    Researchers in medicine and the social sciences attempt to identify and document causal relationships. Those not fortunate enough to be able to design and implement randomized control trials must resort to observational studies. To preserve the ability to make causal inferences outside the experimental realm, researchers attempt to post-process observational data to draw meaningful insights and conclusions. Finding the subset of data that most closely resembles experimental data is a challenging, complex problem. However, the rise in computational power and advances in discrete optimization algorithms suggest an operations research solution as an alternative to the methods currently employed. Joint work with Jason J. Sauppe (University of Wisconsin – La Crosse).


Oct. 2nd, 2015 (Friday) – Avinash Bhardwaj

  • Submodular Knapsacks – A Polyhedral Discussion.   Show Abstract

    [Abstract]
    Submodular and supermodular knapsack sets arise naturally when modeling utilities, risk and probabilistic constraints on discrete variables. Although submodular functions have been explored extensively in set-function optimization, our understanding of the level sets of these set functions is still underdeveloped. The results for submodular optimization do not translate directly to optimizing over the level sets of submodular functions. In particular, even though the convex lower envelope of a general submodular function f can be completely characterized, the convex hull description of the lower level set of f is unknown even in the special case when f is monotone. In this talk, we will study the facial structure of the convex hull of the level sets of a given (not necessarily monotone) submodular set function f : {0, 1}^n → R. In particular, we derive valid inequalities and their extensions for the lower level sets of submodular set functions, and provide facet-defining conditions for the same. We will investigate the questions of separation, lifting and sequence-independent bounds in this context, generalizing the corresponding results for the linear 0-1 knapsack set. This is joint work with Alper Atamtürk.


Sept. 30th, 2015 – Bistra Dilkina

  • Learning to Branch in Mixed Integer Programming.   Show Abstract

    [Abstract]
    Choosing good variables to branch on often leads to a dramatic reduction in the number of nodes needed to solve an instance. The design of branching strategies in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using average performance. Once devised, these strategies (and the values of their parameters) are essentially input-agnostic. To address these issues, we develop a novel framework for data-driven, on-the-fly design of variable selection strategies. By leveraging recent advances in supervised ranking, we aim to produce strategies that gather the best of all properties: 1) using a small number of search nodes, approaching the good performance of strong branching (SB); 2) maintaining a low computational footprint, as in pseudocost branching (PC); and 3) selecting variables adaptively based on the properties of the given instance. Based on a limited number of observed strong branching decisions at the start of the search and a set of dynamic features computed for each candidate variable at a search node, we learn an easy-to-evaluate surrogate function that mimics the SB strategy by solving a “learning-to-rank” problem, common in machine learning. We will show an instantiation of this framework using CPLEX, a state-of-the-art MIP solver, and evaluate its performance on the MIPLIB benchmark. This is joint work with Elias Khalil, Le Song, Pierre Le Bodic and George Nemhauser.
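
    A toy sketch of the surrogate step (my own simplification: the framework solves a learning-to-rank problem, whereas this stand-in fits a pointwise regressor to strong-branching scores; the features and labels below are placeholders):

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Observed at the start of the search: one row of dynamic features per
        # candidate variable, labeled with its strong-branching (SB) score.
        X_train = np.random.rand(500, 20)  # placeholder features
        y_train = np.random.rand(500)      # placeholder SB scores

        surrogate = GradientBoostingRegressor().fit(X_train, y_train)

        def pick_branching_variable(candidate_features):
            # At later nodes, rank candidates by predicted SB score and branch on
            # the top one instead of running expensive strong branching.
            return int(np.argmax(surrogate.predict(candidate_features)))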


Sept. 23rd, 2015 – Merve Bodur

  • Cutting Planes from Extended LP Formulations.   Show Abstract

    [Abstract]
    For mixed-integer sets, we study extended formulations of their LP relaxations and analyze the effect of adding cutting planes in the extended space. In terms of optimization, extended LP formulations do not lead to better bounds, as their projection onto the original space is precisely the original LP relaxation. However, we show that applying split cuts to such extended formulations can be more effective than applying split cuts to the original formulation. For any 0-1 mixed-integer set with n integer and k continuous variables, we construct an extended formulation with 2n+k-1 variables whose elementary split closure is integral. We extend this idea to general mixed-integer sets and construct the best extended formulation with respect to split cuts. This is joint work with Sanjeeb Dash and Oktay Gunluk.


Aug. 26th, 2015 – Pierre Le Bodic

  • An Abstract Model for Branching and its Application to Mixed Integer Programming   Show Abstract

    [Abstract]
    The selection of branching variables is a key component of branch-and-bound algorithms for solving Mixed-Integer Programming (MIP) problems, since the quality of the selection procedure is likely to have a significant effect on the size of the enumeration tree. State-of-the-art procedures base the selection of variables on their “LP gains”, i.e., the dual bound improvements obtained after branching on a variable. There are various ways of selecting variables depending on their LP gains; however, all of them are evaluated empirically. In this talk we present a theoretical model for the selection of branching variables. It is based upon an abstraction of MIPs to a simpler setting in which it is possible to analytically evaluate the dual bound improvement of choosing a given variable. We then discuss how the analytical results can be used to choose branching variables for MIPs, and we give experimental results that demonstrate the effectiveness of the method on MIPLIB instances. This is joint work with George Nemhauser.


Apr. 22nd, 2015 – Arefin Huq (Theory Group, Georgia Tech)

  • The Matching Problem Has No Small Symmetric SDP   Show Abstract

    [Abstract]
    Yannakakis showed that the matching problem does not have a small symmetric linear program. Rothvoß recently proved that any linear program for matching, symmetric or not, has exponential size. It is natural to ask whether the matching problem can be expressed compactly in a framework such as semidefinite programming (SDP) that is more powerful than linear programming but still allows efficient optimization. We answer this question negatively for symmetric SDPs: any symmetric SDP for the matching problem has exponential size. We also show that an O(k)-round Lasserre SDP relaxation for the metric traveling salesperson problem (TSP) yields at least as good an approximation as any symmetric SDP relaxation of size n^k. The key technical ingredient underlying both these results is an upper bound on the degree needed to derive polynomial identities that hold over the space of matchings or traveling salesperson tours. This is joint work with Jonah Brown-Cohen, Prasad Raghavendra, and Benjamin Weitz from Berkeley, and Gabor Braun, Sebastian Pokutta, Aurko Roy, and Daniel Zink from Georgia Tech.


Apr. 8th, 2015 – Burak Kocuk

  • Strong SOCP Relaxations for Optimal Power Flow Problem   Show Abstract

    [Abstract]
    Optimal Power Flow (OPF) is a fundamental optimization problem in electrical power systems analysis. Although local optimization methods are generally successful, they do not provide guarantees on solution quality. Recently, much research has focused on Semidefinite Programming (SDP) relaxations to obtain strong lower bounds. In this work, we instead utilize Second-Order Cone Programming (SOCP) relaxations due to their computational advantages. However, since SOCP relaxations are weaker than their SDP counterparts, we propose three improvements to strengthen the SOCP relaxation and show that two of them are incomparable to the SDP relaxation. Finally, we present extensive computational experiments on standard benchmark instances from the literature to demonstrate the accuracy and efficiency of SOCP-based methods. This is joint work with Santanu Dey and Andy Sun.


Mar. 11th, 2015 – Murat Yildirim

  • Sensor Driven Condition Based Maintenance Models for Generator Fleets   Show Abstract

    [Abstract]
    We provide a framework that links low-level performance and condition monitoring data with high-level operational and maintenance decisions for generators. The operational decisions identify the optimal commitment and dispatch to satisfy demand and transmission constraints. The maintenance decisions focus on an optimal condition-based maintenance (CBM) schedule that accounts for asset-specific CBM schedules driven by the condition monitoring data. We propose new mixed-integer optimization models and efficient algorithms that exploit the special structure of the proposed formulation. We present extensive computational results to show that the proposed models achieve significant improvements in cost and reliability. This is joint work with Andy Sun and Nagi Gebraeel.


Mar. 4th, 2015 – Andres Iroume

  • Some Lower Bounds on Sparse Outer Approximations of Polytopes   Show Abstract

    [Abstract]
    In an effort to understand the use of sparse cutting-planes in integer programming solvers, a recent paper by Dey, Molinaro and Wang studied how well polytopes are approximated using only sparse valid inequalities. In this talk, we consider “less-idealized” questions, such as: the effect of sparse inequalities added to the linear-programming relaxation; the effect on the approximation of adding a budgeted number of dense valid inequalities; sparse approximation of a polytope under every rotation; and approximation by sparse inequalities in specific directions.


Feb. 27, 2015 (Friday) – Hadi Charkhgard

  • A Polynomial Time Algorithm to Solve a Class of Optimization Problems with a Multi-linear Objective Function and Affine Constraints   Show Abstract

    [Abstract]
    In this talk, I will present the first polynomial-time linear programming based algorithm for a class of optimization problems with a multi-linear objective function and affine constraints. This class of optimization problems arises naturally in a number of settings in game theory, such as the bargaining problem, linear Fisher markets, and Kelly capacity allocation markets, but has applications in other fields of study as well. The algorithm computes an optimal solution by solving at most O(p^3) linear programs, where p is the number of variables in the multi-linear objective function.


Feb. 18, 2015 – Jeff Baker (Southern Company Generation)

  • Modeling the Unit Commitment Problem for Large Power Systems   Show Abstract

    [Abstract]
    Headquartered in Atlanta, GA, Southern Company is one of the largest providers of electricity in the Southeast, with over 4 million customers and nearly 46,000 MW of generation capacity. Southern Company’s Fleet Operations and Trading manages the day-ahead and real-time unit commitment and economic dispatch of its generation fleet. Over the last decade, operation of the fleet has become significantly more complicated due to changing market structures, significant additions of natural gas units (e.g., combined cycles, combustion turbines), complex power purchase agreements (PPAs), environmental legislation, and increased penetration of variable energy resources (e.g., wind, solar). On a daily and hourly basis, system operators use complex software models and optimization to assist in difficult unit scheduling decisions. Making smart, efficient decisions is worth millions of dollars annually to the organization and, ultimately, to Southern Company customers.

    This talk will introduce the unit commitment and economic dispatch problem. Unit commitment determines the on/off state of each generator, while economic dispatch determines each generator’s optimal output level (set point). The unit commitment model is a large mixed-integer problem with numerous constraints (hourly electric demand, system operating and regulating reserves, generator minimum up and down times, etc.). A seven-day unit commitment problem for a large power system may contain hundreds of thousands of binary decision variables. In addition to day-ahead and real-time scheduling, other applications of unit commitment and economic dispatch to power systems, such as long-term system planning and incorporating variable energy resources, will also be discussed.
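
    To make the model concrete, here is a deliberately small commitment/dispatch sketch in Python (toy data and only a basic startup constraint; minimum up/down times, reserves, and ramping are omitted):

        import pulp

        G, T = range(3), range(6)                # 3 generators, 6 hourly periods
        demand = [180, 220, 300, 320, 260, 200]  # MW per period (toy numbers)
        pmin, pmax = [40, 30, 20], [150, 120, 100]
        mcost = [20, 30, 45]                     # $/MWh marginal cost
        scost = [500, 300, 100]                  # $ per startup

        m = pulp.LpProblem("unit_commitment", pulp.LpMinimize)
        u = pulp.LpVariable.dicts("on", (G, T), cat="Binary")     # commitment
        v = pulp.LpVariable.dicts("start", (G, T), cat="Binary")  # startup
        p = pulp.LpVariable.dicts("gen", (G, T), lowBound=0)      # dispatch (MW)

        m += pulp.lpSum(mcost[g] * p[g][t] + scost[g] * v[g][t]
                        for g in G for t in T)
        for t in T:
            m += pulp.lpSum(p[g][t] for g in G) == demand[t]      # load balance
            for g in G:
                m += p[g][t] >= pmin[g] * u[g][t]                 # min stable level
                m += p[g][t] <= pmax[g] * u[g][t]                 # capacity
                if t > 0:
                    m += v[g][t] >= u[g][t] - u[g][t - 1]         # startup logic

        m.solve()
        print(pulp.LpStatus[m.status], pulp.value(m.objective))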


Feb. 13, 2015 (Friday) – Siqian Shen (University of Michigan)

  • Decomposition Algorithm for Optimizing Multi-server Appointment Scheduling with Chance Constraints   Show Abstract

    [Abstract]
    In this talk, we consider an integrated multiserver allocation and appointment scheduling problem under random service durations. We minimize the costs of operating servers and serving appointments, subject to a joint chance constraint limiting the risk of server overtime. Using finite samples of the uncertainty, we formulate the problem as a mixed-integer linear program, and propose a two-stage programming framework where we keep cross-server decisions and constraints at the first stage, so that the remaining problem features a server-wise decomposable structure. The first-stage problem is a variant of the chance-constrained binary packing problem, in which appointments are packed to servers with a probabilistic overtime-free guarantee. The second-stage problem seeks feasible appointment schedules given appointment-to-server assignments. We solve the problems at both stages by methods of coefficient strengthening, bounding, and branch-and-cut with diverse forms of cutting planes. We also investigate variants that consider bounded appointment waiting time, expected overtime/waiting penalty, and use individual chance constraints to limit server overtime. We test instances with diverse problem and sample sizes to demonstrate the computational efficacy of different approaches.
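
    In generic notation (mine), the joint chance constraint above caps the probability that any server runs overtime: with $y_{js} = 1$ if appointment $j$ is assigned to server $s$, random durations $d_j$, session length $T_s$, and risk level $\epsilon$,

        $$ \mathbb{P}\Big\{ \sum_j d_j\, y_{js} \le T_s \;\; \text{for all servers } s \Big\} \;\ge\; 1 - \epsilon, $$

    which the sample-based mixed-integer reformulation enforces over finitely many duration scenarios.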


Feb. 11, 2015 – Martin Savelsbergh

  • Advances in Multiobjective Integer Programming   Show Abstract

    [Abstract]
    Multiobjective optimization, one of the earliest fields of study in operations research, has been experiencing a resurgence of interest in the last decade. This is due, in part, to the ever increasing adoption of optimization-based decision support tools in industry. Since most real-world problems involve multiple, often conflicting, goals, the demand for multiobjective optimization decision support tools is not surprising. The development of effective, and relatively easy to use, evolutionary algorithms for multiobjective optimization is another contributing factor. Finally, the availability of cheap computing power has played a role: solving multiobjective optimization problems is highly computationally intensive (more so than solving single-objective optimization problems), and thus cheap computing power has acted as an enabler. Surprisingly, handling multiple objectives has received little attention from the integer programming community. In this talk, I will introduce the concepts of multiobjective integer programming and present some recent work on developing criterion space search algorithms for 2- and 3-objective integer programming.


Feb. 4, 2015 – Timm Oertel (IFOR at ETH)

  • Center-points: A Link Between Discrete Geometry and Optimization   Show Abstract

    [Abstract]
    In this talk, I will consider mixed-integer convex minimization problems. First, I will present optimality conditions for this class of optimization problems. Then, I will introduce the concept of center-points, a generalization of the median from the one-dimensional space to vector spaces. Through the theory of center-points I will show how to extend the general cutting plane scheme from the continuous setting to the mixed-integer setting. Further, I will present several properties of center-points and how to compute them approximately.


Jan. 28, 2015 – Brian Kues

  • The Multiple-Job Repair Kit Problem with Forward Stocking Location Recourse   Show Abstract

    [Abstract]
    The multiple-job repair kit problem is concerned with choosing which spare parts a traveling technician should carry. The demand for parts at each job is not known until the technician diagnoses the problem on-site. The general objective is to understand and optimize the trade-off between low inventory cost and high customer service level, which is defined as the fraction of jobs completed in a single visit. Hence, jobs which require parts not stocked in the repair kit are considered service failures. We introduce a new model for a multiple-job repair kit problem where a technician can retrieve needed parts not carried in the repair kit from a forward stocking location in order to complete a job successfully. However, such trips require additional time and hinder the technician’s ability to complete later jobs. This type of spare parts supply chain trades off low inventory cost and high technician productivity. We develop a model for the problem as well as an algorithm to find an inventory policy. We show that the algorithm is not always optimal and can be arbitrarily bad in the worst case but performs well in many computational experiments. We also perform factor screening experiments to examine the sensitivity of the heuristic solutions to the problem parameters.


Jan. 21, 2015 – Kartik Sivaramakrishnan (Axioma Research)

  • Multi-period Portfolio Optimization with Alpha Decay   Show Abstract

    [Abstract]
    The traditional Markowitz mean-variance framework is based on a single-period portfolio model. Single-period portfolio optimization does not use any data or decisions beyond the rebalancing time horizon, with the result that its policies are “myopic” in nature. For long-term investors, multi-period optimization offers the opportunity to make “wait-and-see” policy decisions by including approximate forecasts and long-term policy decisions beyond the rebalancing time horizon. We consider portfolio optimization with a composite alpha signal that is composed of a short-term and a long-term alpha signal. The short-term alpha better predicts returns at the end of the rebalancing period, but it decays quickly. The long-term alpha, on the other hand, has less predictive power, but it decays slowly. We develop a two-stage multi-period model that incorporates this alpha model to construct the optimal portfolio at the end of the rebalancing period. We compare this model with the traditional single-period MVO model on simulated and realistic backtests and show that the multi-period model generates portfolios with superior realized performance.


Nov. 26, 2014 – Marcus Raymundo

  • A Note on Complexity of Multistage Stochastic Programs   Show Abstract

    [Abstract]
    In Shapiro [2006], estimates of the sample sizes required to solve a multistage stochastic programming problem with a given accuracy by the conditional sample average approximation method were derived. In this paper we construct an example in the multistage setting that shows that these estimates cannot be significantly improved.


Nov. 21, 2014 – Joey Huchette (MIT)

  • Modeling Optimization Problems with JuMP in Julia   Show Abstract

    [Abstract]
    We present JuMP, an optimization modeling language like AMPL, GAMS, YALMIP, and others, which is embedded in Julia, a recently developed language for technical computing. A key theme of JuMP, and Julia in general, is the ability to achieve performance competitive with C from a language with high-level syntax comparable to Python and MATLAB. We describe JuMP’s functionality for linear, mixed-integer, and nonlinear optimization and provide an overview of the broader “JuliaOpt” ecosystem of open-source optimization packages. The tutorial will be interactive, so we urge attendees to bring laptops with Julia preinstalled by following the accompanying instructions.


Nov. 5, 2014

  • Burak Kocuk – Some Polyhedral Results for DCOPF Problem with Switching   Show Abstract

    [Abstract]
    Transmission line switching in power networks has become increasingly important in practice. However, the underlying theory of this problem is not well studied. In this work, we first prove some hardness results. Then, we propose a new formulation based on cycles, which is used to find valid inequalities and polyhedral results. Finally, numerical results are provided. This is joint work with Santanu Dey and Andy Sun.

  • Tugce Isik – Processor Sharing in Non-Collaborative Queueing Networks   Show Abstract

    [Abstract]
    We study processor sharing (PS) in non-collaborative queueing networks where multiple servers cannot work at the same station. For tandem lines with two stations, two servers, and infinite buffers, we show that either a dedicated or a PS policy maximizes the throughput, and the optimal PS policy can be achieved as a limit of round-robin policies that rotate the servers among the jobs. For systems with more than two stations, we evaluate how the performance of round-robin policies depends on the quantum of time spent at each station and the buffer size. Our numerical results show that the round-robin policies are near-optimal even if the buffers are finite as long as the servers are rotated frequently enough. This is joint work with Sigrún Andradóttir and Hayriye Ayhan.


Oct. 30, 2014 – Warren Powell (Princeton)

  • Stochastic Optimization Made Easy: Practical Strategies for Planning Under Uncertainty   Show Abstract

    [Abstract]
    Uncertainty pervades planning and operations in a variety of settings in transportation and logistics, energy, health and finance. Customer demands, transit delays, weather events, commodity prices, equipment problems, market behavior and natural disasters are all familiar partners. Mathematical programming has given us a powerful and widely used canonical framework for modeling deterministic optimization problems. However, if we introduce uncertainty, the field fragments into a number of competing communities with names such as stochastic programming, dynamic programming, stochastic search, robust optimization, and optimal control. Lost in the jungle of stochastic optimization is the realization that we solve these problems every day. The problem is that there has been a fundamental misunderstanding of how to think about and formulate sequential decision problems under uncertainty. While deterministic optimization problems are solved for decisions, sequential stochastic optimization problems require that we find robust policies. I will identify four fundamental classes of policies which unify the competing communities into a common modeling framework, covering any strategy used in practice. Further, I will show, using a simple energy storage problem, that depending on the characteristics of the data, any of these four classes (or a hybrid) may be best on any given problem.


Oct. 29, 2014

  • Niao He – Mirror Prox Algorithm for Multi-Term Composite Minimization and Semi-Separable Problems   Show Abstract

    [Abstract]
    This work is inspired by the recent trend, in many fields, of building composite optimization models with hybrid regularizations and mixed penalties. We present a composite version of the Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, allowing us to cover saddle point/variational analogues of the usual composite minimization. We demonstrate that the composite Mirror Prox algorithm inherits the favourable efficiency estimate of its prototype, and that the proposed approach can be successfully applied to Lasso-type problems with several penalizing terms and to problems of semi-separable structure considered in alternating direction methods, in both cases yielding methods with O(1/t) complexity bounds. This is joint work with Anatoli Juditsky and Arkadi Nemirovski.

  • Mathias Klapp – The One-dimensional Dynamic Dispatch Waves Problem   Show Abstract

    [Abstract]
    We study same-day delivery (SDD) distribution systems by formulating the dynamic dispatch wave problem (DDWP) that models a depot where orders arrive randomly throughout the day. At each dispatch period (wave), the decision maker decides whether or not to dispatch a vehicle and serve a subset of open orders to minimize operational costs and charges for unattended requests at the end of the day. We begin studying the DDWP with a single vehicle and customer delivery locations on a line; we efficiently solve the deterministic case, provide an optimal a priori solution of a specific class for the stochastic case, and give heuristic policies and dual bounds for the stochastic-dynamic case. This is joint work with Alan Erera and Alejandro Toriello.


Oct. 15, 2014 – Alex Shapiro

  • Risk Neutral and Risk Averse Approaches to Multistage Stochastic Programming   Show Abstract

    [Abstract]
    In many practical situations one has to make decisions sequentially based on data available at the time of the decision and facing uncertainty of the future. This leads to optimization problems which can be formulated in a framework of multistage stochastic optimization. In this talk we consider risk neutral and risk averse approaches to multistage stochastic programming. We discuss conceptual and computational issues involved in formulation and solving such problems. As an example we give numerical results based on the Stochastic Dual Dynamic Programming method applied to planning of the Brazilian interconnected power system.


Oct. 1, 2014 – Chris Ryan (University of Chicago)

  • Duality theory via Fourier-Motzkin Elimination   Show Abstract

    [Abstract]
    We explore how Fourier-Motzkin elimination, a standard tool in finite-dimensional linear programming, can be used to understand the duality theory of more general optimization problems, including semi-infinite linear programming, convex programming, and conic programming. This is joint work with Amitabh Basu (Johns Hopkins) and Kipp Martin (University of Chicago).
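
    A two-variable reminder of the elimination step (standard material, not the talk's contribution): to project out $x$ from $\{\, y - 1 \le x,\; x \le 2y,\; x \le 3 \,\}$, pair every lower bound on $x$ with every upper bound, giving $y - 1 \le 2y$ and $y - 1 \le 3$, i.e. $y \ge -1$ and $y \le 4$, the projection onto $y$. The duality theory in the talk arises from tracking the nonnegative multipliers used in such combinations.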


Sep. 26, 2014 – Brian Dandurand

  • Distributed and Coordinated Optimization Motivated by Multidisciplinary Nonconvex Multiobjective Optimization   Show Abstract

    [Abstract]
    The need for distributed solution approaches to decomposable nonconvex multiobjective optimization problems motivates the study of augmented Lagrangian coordination (ALC) methods and Gauss-Seidel methods from single-objective optimization. The application of certain scalarization techniques to a nonconvex, vector-valued objective function results in a reformulated single objective problem (RSOP) whose objective function may not preserve certain properties assumed in Gauss-Seidel approaches such as the alternating direction method of multipliers (ADMM). The block coordinate descent (BCD) method is considered as an alternative distributed optimization approach. While BCD requires that the constraints possess a certain decomposable structure, formulating the RSOP as a decomposable problem may require introducing extra variables and extra nondecomposable constraints. Thus, an ALC approach such as the method of multipliers must be integrated with BCD, and a corresponding analysis of such an integrated approach is warranted. In this presentation, a brief introduction to these concepts will be provided. State-of-the-art results for BCD and ALC methods are then presented and compared with those for ADMM. A BCD coordination (BCDCoor) method, consisting of an integration of BCD and ALC methods, is stated as an alternative to ADMM for solving nonconvex RSOPs with nonseparable objective functions. A convergence analysis of the solution-multiplier pair sequences generated by BCDCoor requires certain extensions of the state-of-the-art results for BCD and ALC methods. These required extensions are described, and current contributions to this end are discussed briefly.


Sep. 17th, 2014 – Linwei Xin

  • Optimality Gap of Constant-order Policies Decays Exponentially in the Lead Time for Lost Sales Models   Show Abstract

    [Abstract]
    Inventory models with lost sales and large lead times have traditionally been considered intractable, due to the curse of dimensionality which arises from having to track the set of orders placed but not yet received (i.e., the pipeline vector). Recently, Goldberg et al. (2012) laid the foundations for a new approach to solving these models, by proving that as the lead time grows large (with the other problem parameters fixed), a simple constant-order policy (proposed earlier by Reiman (2004)) is asymptotically optimal. This was quite surprising, as it is exactly this setting (i.e., large lead times) that was previously believed intractable. However, the bounds proven there are impractical, requiring the lead time to be very large before the constant-order policy becomes nearly optimal, e.g., requiring a lead time which is $\Omega(\epsilon^{-2})$ to ensure a $(1+\epsilon)$-approximation guarantee, and involving a massive prefactor. The authors note that numerical experiments of Zipkin (2008) suggest that the constant-order policy performs quite well even for small lead times, and pose closing this gap (thus making the results practical) as an open problem. In this work, we make significant progress towards resolving this open problem and closing this gap. In particular, for the infinite-horizon variant of the finite-horizon problem considered by Goldberg et al. (2012), we prove that the optimality gap of the same constant-order policy actually converges exponentially fast to zero, i.e., we prove that a lead time which is $O(\log(\epsilon^{-1}))$ suffices to ensure a $(1+\epsilon)$-approximation guarantee. We demonstrate that the corresponding rate of exponential decay is at least as fast as the exponential rate of convergence of the expected waiting time in a related single-server queue to its steady-state value. We also derive simple and explicit bounds for the optimality gap. For the special case of exponentially distributed demand, we further compute all expressions appearing in our bound in closed form, and numerically evaluate them, demonstrating good performance for a wide range of parameter values. Our main proof technique combines convexity arguments with ideas from queueing theory. This is joint work with David A. Goldberg.


Sep. 11, 2014 – Kimia Ghobadi (University of Toronto)

  • Radiation Therapy Treatment Planning with Continuous Dose Delivery for Head-and-Neck Tumours   Show Abstract

    [Abstract]
    In this talk we explore automated inverse treatment planning for Gamma Knife Perfexion to treat head-and-neck tumours. The main goal in designing a treatment plan is to deliver an appropriate amount of dose to tumour structures while avoiding the surrounding healthy tissue as much as possible. In continuous dose delivery treatment planning, the radiation is delivered with a continuous beam along a path inside the tumour instead of the conventional method of using multiple discrete beams. To obtain a quality treatment plan, we first use a set of approximation algorithms to find the positions of the focal points of the shots inside the tumour volume. We then use these focal points to construct a path that covers the tumour volume. Once the path is selected, a mixed-integer optimization model and its linear approximation are formulated to find the beam intensity and duration for each point on the path. In this optimization model we minimize the dose spillage to healthy organs while ensuring the tumour receives the prescribed dose. We impose additional clinical and machine constraints, such as maximum dose to healthy organs, maximizing the utility of the available beams, and speed constraints for the radiation unit. We test our approach on clinical cases and observe that the quality of the resulting plans is better than or comparable to clinical treatment plans.


Jun. 25, 2014 – Jesus De Loera

  • Augmentation or Pivoting-like Algorithms for Integer Programming   Show Abstract

    [Abstract]
    In the study of the simplex method one is interested in the performance of pivot rules and the number of pivots needed to reach the optimum. Departing from traditional methods, one can take a similar approach in integer programming: separable convex integer programs can be solved via analogous “pivoting” or augmentation techniques. The steps for linear programs (LPs) are circuits or edges; for IPs we instead use Graver basis moves. In this talk I will survey several Graver basis pivoting algorithms with excellent complexity results, showing that with the right pivot rule one can prove good performance. We show that polynomially many augmentation steps are possible if best augmenting steps along Graver basis directions are performed. Using instead augmentation along directions with the best ratio of cost improvement to unit length, we show that for linear objectives the number of augmentation steps is bounded by the number of elements in the Graver basis of the problem matrix, giving strongly polynomial-time algorithms for the solution of N-fold LPs and ILPs. This is joint work with Raymond Hemmecke (Technische Universität München) and Jon Lee (University of Michigan).
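
    The augmentation scheme itself is simple to state (standard form, generic notation): given a feasible $x$ for $\min\{c^\top x : Ax = b,\; l \le x \le u,\; x \in \mathbb{Z}^n\}$ and the Graver basis $\mathcal{G}(A)$, repeat

        $$ x \leftarrow x + \alpha g \quad \text{for some } g \in \mathcal{G}(A),\; \alpha \in \mathbb{Z}_{>0} \text{ with } x + \alpha g \text{ feasible and } c^\top (x + \alpha g) < c^\top x, $$

    and stop when no such move exists; for linear (and separable convex) objectives, every feasible non-optimal point admits an augmenting Graver move, which is what makes the step counts above meaningful.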


Apr. 16, 2014 – Cristóbal Guzmán

  • Lower Complexity Bounds for Large-Scale Smooth Convex Optimization   Show Abstract

    [Abstract]
    We prove lower bounds on the black-box oracle complexity of large-scale smooth convex minimization problems. These lower bounds hold for unit balls of normed spaces under a technical smoothing condition, and for an arbitrary smoothness parameter of the objective with respect to this norm. As a consequence, we obtain a unified framework for the complexity of convex optimization on \ell^p-balls for 2 <= p <= \infty. In particular, we prove that the T-step Conditional Gradient algorithm as applied to minimizing smooth convex functions over the n-dimensional box with T <= n is nearly optimal. On the other hand, we prove lower bounds for the complexity of convex optimization over \ell^p-balls, for 1 <= p < 2, by combining a random subspace method with the p = \infty lower bounds. In particular, we establish the complexity of problem classes that contain the sparse recovery problem. This is joint work with Arkadi Nemirovski.


Apr. 11, 2014 – Karen Aardal

  • A Local-search Based Approximation Algorithm for the Transportation Problem with Market Choice   Show Abstract

    [Abstract]
    In the transportation problem with market choice (TPMC), we are given a set of markets with demands and a set of facilities with limited capacities in a common metric space. Each market is either accepted or rejected. For each accepted market, its demand has to be fully satisfied by the facilities, which incurs transportation cost. For each rejected market, a penalty cost has to be paid. The goal is to decide which markets should be accepted and which rejected, such that the sum of transportation and penalty costs is minimized. TPMC was introduced by Damci-Kurt, Dey, and Küçükyavuz, who showed that it is NP-hard and examined its polyhedral structure. Here we derive a constant-factor approximation algorithm based on local search. This is joint work with Pieter van den Berg and Shanfei Li.


Apr. 4, 2014 – Camilo Ortiz

  • An Inexact Block-decomposition Method for Extra Large-scale Conic Semidefinite Programming   Show Abstract

    [Abstract]
    In this paper, we present an inexact block-decomposition (BD) first-order method for solving standard form conic semidefinite programming (SDP) which avoids computations of exact projections onto the manifold defined by the affine constraints and, as a result, is able to handle extra large SDP instances. The method is based on a two-block reformulation of the optimality conditions where the first block corresponds to the complementarity conditions and the second one corresponds to the manifolds consisting of both the primal and dual affine feasibility constraints. While the proximal subproblem corresponding to the first block is solved exactly, the one corresponding to the second block is solved inexactly in order to avoid finding the exact solution of the underlying augmented primal-dual linear system. The error condition required by the BD method naturally suggests the use of a relative error condition in the context of the latter augmented primal-dual linear system. Our implementation uses the conjugate gradient (CG) method applied to a reduced positive definite dual linear system to obtain inexact solutions of the augmented linear system. In addition, we implemented the proposed BD method with the following refinements: an adaptive choice of stepsize for performing an extragradient step; and a new dynamic scaling scheme that uses two scaling factors to balance three inclusions comprising the optimality conditions of the conic SDP. Finally, we present computational results showing the efficiency of our method for solving various extra large SDP instances, several of which cannot be solved by other existing methods, including some with at least two million constraints and/or fifty million non-zero coefficients in the affine constraints.


Mar. 26, 2014 – Samuel Fiorini

  • Cut-dominant and Forbidden Minors   Show Abstract

    [Abstract]
    The cut-dominant of a connected graph G is the polyhedron that corresponds to the problem of computing global min-cuts in G. Despite the fact that computing a global min-cut can be done in polynomial time, the geometry of the cut-dominant is far from being understood. We study graphs for which all facets of the corresponding cut-dominant have right-hand side at most a fixed integer k. These graphs form a minor-closed collection. We give a complete list of forbidden minors for k <= 2. This is then applied to the TSP to give a shorter proof of a classic result of Fonlupt and Naddef (Math. Prog., 1992) that characterizes TSP-perfect graphs. This work in progress is joint with Kanstantsin Pashkovich (Brussels) and Michele Conforti (Padova).
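
    In standard notation (our addition), the cut dominant of G = (V, E) is the unbounded polyhedron

        D(G) = conv\{ \chi^{\delta(S)} : \emptyset \neq S \subsetneq V \} + R_+^E,

    where \chi^{\delta(S)} is the incidence vector of the cut \delta(S); minimizing a nonnegative weight vector over D(G) computes a global min-cut, and the facets studied here are the valid inequalities a^T x \ge b for D(G) with b \le k.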


Mar. 12, 2014 – Deepak Rajan

  • Strong Formulations in Electric Power Generation: A Polyhedral Study of the Unit Commitment Problem   Show Abstract

    [Abstract]
    In this talk, we will illustrate the importance of strong formulations in Mixed-Integer Programming by focusing on recent polyhedral research in electric power generation. We will study the Unit Commitment problem, in which the goal is to minimize the operational cost of power generators over a finite horizon (typically a day) while meeting demand and satisfying generator physical constraints. In particular, we consider a relaxation involving two generator constraints: up/down time constraints and ramping constraints. We present several classes of new facet-defining inequalities and describe the convex hull for special cases of the relaxation considered. Computational experiments using our inequalities in a branch-and-cut algorithm confirm that stronger formulations can decrease computation times significantly.
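
    For reference, the two constraint families have the following generic textbook form (our notation, not necessarily the exact relaxation studied): with commitment variables u_t \in \{0,1\} and power output p_t,

        u_t - u_{t-1} \le u_\tau, \quad \tau = t, \dots, \min(t + L - 1, T) \quad \text{(minimum up time } L\text{)}
        u_{t-1} - u_t \le 1 - u_\tau, \quad \tau = t, \dots, \min(t + \ell - 1, T) \quad \text{(minimum down time } \ell\text{)}
        -R^- \le p_t - p_{t-1} \le R^+ \quad \text{(ramp-down/ramp-up limits)}.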


Mar. 10, 2014 – Karen Aardal

  • GMI/Split Cuts Based on Lattice Information   Show Abstract

    [Abstract]
    Cutting planes incorporated in a branch-and-bound framework are the dominant solution approach for (mixed-)integer optimization problems. One important family of cutting planes is the family of split cuts. A computational study by Balas and Saxena indicates that the first closure associated with the family of split inequalities is a very good approximation of the convex hull of feasible solutions. It is, however, NP-hard to optimize a linear function over the split closure, so achieving these results is computationally expensive. A special case of split cuts, which can trivially be generated, is the family of GMI inequalities obtained from optimal basic feasible solutions. The computational effectiveness of these inequalities is, however, much more modest (Bixby, Gu, Rothberg, and Wunderling). The discrepancy between the potential effectiveness of GMI/split inequalities indicated by the study of Balas and Saxena, and the results that have so far been realized by generating such inequalities from optimal basic solutions, led Cornuejols to suggest that one should look for deep split cuts that can be separated efficiently. In our work we suggest a heuristic way of generating GMI/split inequalities that is based on information from the structure of the underlying lattice. We present some initial computational findings. This talk is based on joint work with Frederik von Heymann, Andrea Lodi, Andrea Tramontani, and Laurence Wolsey.
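
    For reference (the standard statement, our addition): given a simplex tableau row x_B + \sum_j \bar{a}_j x_j = \bar{b} with fractional part f_0 = \bar{b} - \lfloor \bar{b} \rfloor > 0 and f_j = \bar{a}_j - \lfloor \bar{a}_j \rfloor for the integer nonbasic variables, the GMI cut reads

        \sum_{j : f_j \le f_0} \frac{f_j}{f_0} x_j \; + \sum_{j : f_j > f_0} \frac{1 - f_j}{1 - f_0} x_j \; \ge \; 1,

    with continuous nonbasic variables entering with coefficient \bar{a}_j / f_0 if \bar{a}_j \ge 0 and -\bar{a}_j / (1 - f_0) otherwise.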


Mar. 5, 2014 – Tugce Isik

  • Optimal Control of Non-Collaborative Queueing Systems with Setup Costs   Show Abstract

    [Abstract]
    We consider tandem queueing networks with flexible, non-collaborative servers and finite buffers. Servers are able to work at multiple workstations as long as at most one server serves each station, and server reassignments may incur setup costs. For two-station tandem lines with no setup costs, we completely characterize the server assignment policy that maximizes the long-run average throughput. For longer lines, we develop server assignment heuristics under the assumption that server rates do not depend on the task (i.e., tasks are homogeneous). Numerical results show that the heuristic policy whose primary assignment keeps servers ordered from slower to faster, as in bucket brigade manufacturing, is near-optimal. When server reassignments are costly, we show that the optimal policy depends on the magnitude of the setup cost. We characterize the double-threshold server assignment policy that maximizes the long-run average profit in two-station systems when the tasks are homogeneous and the setup costs are constant. For systems with non-homogeneous tasks and non-constant setup costs, we use a homogenized approximation of the system to develop heuristic server assignment policies. Numerical results suggest that for non-homogeneous tasks, the heuristics we propose are near-optimal. This is joint work with Sigrún Andradóttir and Hayriye Ayhan.


Feb. 26, 2014 – Burak Kocuk

  • Approximating the Objective Gradient Using Perceptrons for Constrained Minimization with Application in Drag Reduction   Show Abstract

    [Abstract]
    This research is concerned with constrained minimization problems whose objective function has no closed-form analytical expression but can be evaluated numerically, which enables the construction of an estimator from random samples. We assume that the constraints are well defined and explicitly known, and we use perceptrons to estimate the objective function. The new method, which we call the Neural Reduced Gradient Algorithm, approximates the reduced gradient by combining the gradient of the estimated objective function with the exact gradients of the constraints. It is tested on randomly generated instances as well as available convex and nonconvex optimization problems. We also provide an interesting real application of the new method to shear stress minimization for drag reduction, and compare its performance with that of linear estimators, which are also proposed as part of this work. This is joint work with Kuban Altinel.
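
    A toy sketch of the surrogate-gradient idea (our own construction, not the authors' algorithm): fit a one-hidden-layer perceptron to samples of the black-box objective, differentiate the fitted network with respect to its input, and use that gradient in a projected-gradient step on box constraints, standing in for the full reduced-gradient machinery.

        import numpy as np

        rng = np.random.default_rng(1)
        f = lambda x: np.sin(3 * x[..., 0]) + x[..., 1] ** 2    # "black-box" objective

        # Fit f_hat(x) = w2 . tanh(W1 x + b1) + b2 by gradient descent on the MSE.
        X = rng.uniform(-1, 1, size=(500, 2)); y = f(X)
        H = 32
        W1 = rng.standard_normal((H, 2)); b1 = np.zeros(H)
        w2 = 0.1 * rng.standard_normal(H); b2 = 0.0
        lr = 0.05
        for _ in range(2000):
            Z = np.tanh(X @ W1.T + b1)                # hidden activations, (500, H)
            err = Z @ w2 + b2 - y                     # residuals, (500,)
            gw2 = Z.T @ err / len(X); gb2 = err.mean()
            dZ = np.outer(err, w2) * (1 - Z ** 2)     # backprop through tanh
            gW1 = dZ.T @ X / len(X); gb1 = dZ.mean(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

        def grad_fhat(x):
            """Gradient of the fitted network with respect to its input x."""
            z = np.tanh(W1 @ x + b1)
            return W1.T @ (w2 * (1 - z ** 2))

        # Projected gradient descent on the box [-1, 1]^2 using the surrogate gradient.
        x = np.array([0.9, 0.9])
        for _ in range(100):
            x = np.clip(x - 0.1 * grad_fhat(x), -1.0, 1.0)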


Feb. 19, 2014 – Laurence Wolsey

  • On the Facets of Continuous Knapsack Constraints   Show Abstract

    [Abstract]
    We consider a mixed integer set defined by a single knapsack constraint with n integer variables and m bounded continuous variables: {(x, y) \in R_+^m \times Z_+^n : \sum_{i=1}^{m} x_i + \sum_{j=1}^{n} a_j y_j \ge b, x_i \le u_i, i = 1, …, m}. Our aim is to understand when all the nontrivial facets of the convex hull of solutions have 0-1 coefficients on the continuous variables. The special case with n = 1 is the single arc set studied by Magnanti et al., and the case with n = 2 and one unbounded continuous variable is treated by Agra and Constantino. We show that the number of distinct non-zero coefficients of the continuous variables in a facet is bounded above by 2^n - n, and that for n = 2 the bound is exactly one. Our main result is to show that the property holds when n = 2, and we provide a complete description of the convex hull. We then show by example that more complicated facets arise when n \ge 3. We also show that similar results hold for all m and n when the coefficients of the integer variables are divisible. This is joint work with Oktay Gunluk, Sanjeeb Dash, and Hande Yaman.


Feb. 14, 2014 – M. Javad Feizollahi

  • The Robust (Minmax Regret) Quadratic Assignment Problem with Interval Flows   Show Abstract

    [Abstract]
    We consider a generalization of the classical quadratic assignment problem in which the material flows between facilities are uncertain, and only upper and lower bounds are known for each flow. The objective is to find a minmax regret solution. We present an exact Benders decomposition algorithm based on two mathematical programming formulations and on linearizations of the master problems, together with a heuristic that uses tabu search within a Benders decomposition framework. We then develop a hybrid Benders decomposition approach that combines the speed of heuristics with the rigor and precision of the exact Benders method. We discuss the results of extensive computational experiments. This is joint work with Igor Averbakh (University of Toronto).
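
    Schematically (our notation), with C(x, f) the assignment cost under flow matrix f and the flows constrained to a box [l, u], the minmax regret objective is

        \min_{x} \; \max_{f \in [l, u]} \Big( C(x, f) - \min_{x'} C(x', f) \Big),

    i.e., one seeks the assignment whose worst-case optimality gap over all admissible flow realizations is smallest.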


Feb. 5, 2014 – Marco Molinaro

  • How Good Are Sparse Cutting-Planes?   Show Abstract

    [Abstract]
    Sparse cutting-planes are often the ones used in mixed-integer programming (MIP) solvers, since they help solve the linear programs encountered during branch-and-bound more efficiently. However, how well can we approximate the integer hull using only sparse cutting-planes? To understand this question better, given a polytope P (e.g., the integer hull of a MIP), let P^k be its best approximation using cuts with at most k non-zero coefficients. We consider d(P, P^k) = \max_{x \in P^k} \min_{y \in P} \|x - y\| as a measure of the quality of sparse cuts. In our first result, we present general upper bounds on d(P, P^k) which depend on the number of vertices of the polytope and exhibit three phases as k increases. Our bounds imply that if P has polynomially many vertices, using half sparsity already approximates it very well. Second, we present a lower bound on d(P, P^k) for random polytopes that shows the upper bounds are quite tight. Third, we show that for a class of hard packing IPs, sparse cutting-planes do not approximate the integer hull well. Finally, we show that using sparse cutting-planes in extended formulations is at least as good as using them in the original polyhedron, and give an example where the former is actually much better. Joint work with Santanu Dey and Qianyi Wang.


Jan. 31, 2014 – Alberto Del Pia

  • Integer Quadratic Programming in the Plane   Show Abstract

    [Abstract]
    We show that the problem of minimizing a quadratic polynomial with integer coefficients over the integer points in a general two-dimensional rational polyhedron is solvable in time bounded by a polynomial in the input size. This is joint work with Robert Weismantel.


Jan. 15, 2014 – Sergio Bruno

  • Investment Strategies for Renewable Energy Generation under Uncertainty   Show Abstract

    [Abstract]
    Renewable energy has drawn increasing attention in recent years due to technological improvements and environmental demands. Yet, its profitability depends largely on generation and price risks. Investment strategies for renewable generation rely on real options approaches such as postponement, hedging with fixed (forward) contracts, and combination with other sources. We develop a dynamic programming model that includes such options and is solvable by a procedure based on the Stochastic Dual Dynamic Programming (SDDP) method. We extend the framework to the risk-averse setting. The numerical results show that the generated policies are efficient investment strategies. This is joint work with Shabbir Ahmed and Alexander Shapiro.
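
    In generic form (our notation, not the talk's exact model), SDDP applies to nested Bellman recursions of the type

        Q_t(x_{t-1}, \xi_t) = \min_{x_t \in X_t(x_{t-1}, \xi_t)} \; c_t(x_t, \xi_t) + E\big[ Q_{t+1}(x_t, \xi_{t+1}) \big],

    building outer cutting-plane approximations of each expected cost-to-go function; in the risk-averse extension the expectation is replaced by a coherent risk measure, e.g. a convex combination of expectation and AV@R.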


Nov. 25, 2013 – George L. Nemhauser

  • Integer Programming: the Global Impact   Show Abstract

    [Abstract]
    Integer programming is the (not very appealing or descriptive) name for optimization models and algorithms in which some variables are required to have integer values. Planning and operational problems in energy, finance, health, manufacturing, military, transportation, and in almost any imaginable domain where decisions are made, are formulated and solved using integer programming. For example, most Fortune 500 companies use integer programming in some aspects of their business. Currently available software is capable of solving models with thousands, and sometimes millions, of variables and constraints. We will discuss some integer programming models whose solutions have had big impact in solving important problems, and present recent progress that has made it possible to solve very large instances and to obtain provably good solutions quickly. We will close by speculating on future advances in methodology and applications.


Nov. 13, 2013 – Linwei Xin

  • Distributionally Robust Inventory Control When Demand is a Martingale   Show Abstract

    [Abstract]
    Recently, there has been considerable interest in taking a more robust approach to the stochastic models which arise in inventory control. In the distributionally robust optimization paradigm, one assumes that the joint distribution (over time) of the sequence of future demands belongs to some set of joint distributions, and solves the min-max problem of computing the control policy which is optimal against a worst-case distribution belonging to this set. Although such models have been analyzed previously, the cost and policy implications of positing different dependency structures remain poorly understood, and there have been recent efforts to develop a deeper understanding (e.g. Agrawal et al. 2012). In this talk, we combine the framework of distributionally robust optimization with the theory of martingales, and consider the setting in which the sequence of future demands is assumed to belong to the family of martingales with given mean and support. We explicitly compute the optimal policy and value, and compare them to the analogous setting in which demand is independent across periods (analyzed previously in Shapiro 2012). We identify several interesting qualitative differences between these settings, and prove that the cost incurred by an optimal policy in the independence model is always at least as large as the corresponding cost in the martingale model. We also perform an asymptotic analysis (as the number of time periods $T \rightarrow \infty$) to shed light on the interplay between the optimal policy and worst-case distribution, and derive closed-form expressions for the ratio between these costs. Interestingly, for the setting in which the penalties for over-stocking and under-stocking inventory are equivalent and the expected demand is exactly half the maximum support, we find this limiting ratio to be exactly $\frac{1}{2}$. This is joint work with David A. Goldberg.
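
    Schematically (our notation), the problem solved is

        \min_{\pi} \; \max_{Q \in \mathcal{Q}} \; E^Q\Big[ \sum_{t=1}^{T} c_t(x_t^{\pi}, D_t) \Big],

    where \pi is an inventory control policy and \mathcal{Q} is the set of demand processes (D_1, \dots, D_T) that are martingales with a given mean and support, in contrast to the set of independent demands considered in Shapiro (2012).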


Nov. 4, 2013 – Álvaro Lorca

  • Adaptive Robust Economic Dispatch with Dynamic Uncertainty Sets for Power Systems with Significant Wind   Show Abstract

    [Abstract]
    The benefits of wind power as an environmentally responsible renewable energy resource have led to an increasing penetration of wind energy in today’s power systems. This trend has started to reshape the paradigms of power system operations, as dealing with the uncertainty of highly intermittent wind power becomes a significant issue. Motivated by this, we present a framework using adaptive robust optimization for the economic dispatch of power systems with a high level of wind penetration. We will also discuss solution algorithms and present a practical evaluation method for testing our approach in terms of cost and reliability. This is joint work with Andy Sun.
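
    In schematic two-stage form (our notation), the adaptive robust dispatch problem is

        \min_{x \in X} \Big( c^\top x + \max_{w \in \mathcal{W}} \; \min_{y \in Y(x, w)} b^\top y \Big),

    where x is the first-stage commitment/dispatch decision, w is the wind realization ranging over an uncertainty set \mathcal{W} (made dynamic by letting \mathcal{W} depend on recently observed wind), and y is the recourse adjustment.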


Oct. 1, 2013

  • Hyunwoo Park – Evolution of Product and Service Provider Networks: An Empirical Analysis and Visualization   Show Abstract

    [Abstract]
    We examine how the network structure among service providers, product manufacturers, and platform providers has shaped the inter-firm coordination of nearly 1,500 smartphone launches in the mobile ecosystem between 2000 and 2012. Network analysis on a tripartite graph, consisting of product-service-platform triads, reveals dynamic patterns of strategic service alliance and switching between platforms. We complement our analysis with network visualizations and discuss service-level strategic implications. This is joint work with Rahul Basole.

  • Weihong Hu – Strategic Health Workforce Planning: Modeling, Optimality, and Uncertainty   Show Abstract

    [Abstract]
    We propose an infinite-horizon linear programming model for workforce planning for a single worker class in a large health system. We prove the optimality of a natural look-ahead policy applicable to any system of this kind, and use real-world data to examine the performance of the policy in more complex systems with additional detail. The results motivate us to further analyze the impact of demand uncertainty.