Big Data and optimization
Traditional optimization methods for decision-making under uncertainty assume perfect information, i.e., accurate values for all relevant system parameters and fully specified probability distributions for the random variables, say Gaussian with known mean and variance. Such precise information, however, is rarely available in practice: typically one has access only to noisy historical observations of some of the system parameters. Unfortunately, the optimal decisions produced by these approaches are often highly sensitive to the specific parameter values used, and their practical relevance is therefore questionable.
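This sensitivity can be illustrated with a classical newsvendor problem, a standard textbook example (not taken from this text): the optimal order quantity is a quantile of the assumed demand distribution, so even a modest misestimate of the distribution's parameters shifts the decision noticeably. All numbers below are hypothetical.

```python
from statistics import NormalDist

# Newsvendor problem: choose an order quantity q before observing random
# demand. With underage cost c_u and overage cost c_o, the classical optimum
# is the critical-fractile quantile
#     q* = F^{-1}( c_u / (c_u + c_o) )
# of the assumed demand distribution F.
c_u, c_o = 4.0, 1.0           # cost per unit of unmet / excess demand
fractile = c_u / (c_u + c_o)  # = 0.8

# "True" demand model versus a slightly misestimated one
# (the standard deviation is underestimated by half).
true_demand = NormalDist(mu=100, sigma=20)
est_demand = NormalDist(mu=100, sigma=10)

q_true = true_demand.inv_cdf(fractile)
q_est = est_demand.inv_cdf(fractile)
print(f"order quantity under true model:      {q_true:.1f}")
print(f"order quantity under estimated model: {q_est:.1f}")
```

Here a misjudged variance alone moves the "optimal" order quantity by roughly eight units, even though the mean is estimated perfectly.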
The increasing digitalization of society and the advent of cheap sensing devices have, however, caused an explosion of available historical data, a phenomenon usually referred to as "Big Data". In light of these recent trends, it should not come as a surprise that in optimization, too, the focus is shifting from traditional model-driven methods to data-driven approaches.
The purpose of data-driven optimization methods is to enable decision makers to make informed decisions using the limited historical data available, and to provide them with rigorous optimality guarantees. Recently, several new approaches have been developed to formulate and solve optimization problems in this big-data setting.
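One of the simplest data-driven approaches is the sample average approximation: rather than assuming a distribution, the decision is computed directly from historical observations. A minimal sketch, again using the hypothetical newsvendor setting (the cost parameters and synthetic "historical" data below are illustrative assumptions, not from this text):

```python
import random

# Sample average approximation for the newsvendor problem: the optimal
# data-driven order quantity is simply the empirical critical-fractile
# quantile of the historical demand observations.
random.seed(0)
history = [random.gauss(100, 20) for _ in range(500)]  # synthetic demand data

c_u, c_o = 4.0, 1.0           # cost per unit of unmet / excess demand
fractile = c_u / (c_u + c_o)  # = 0.8

# Empirical quantile of the data = the sample average approximation solution.
data = sorted(history)
q_saa = data[int(fractile * len(data))]
print(f"data-driven order quantity: {q_saa:.1f}")
```

With enough data this empirical quantile converges to the true optimum, but with few samples it can still be unreliable, which is precisely what the optimality guarantees of more refined data-driven methods aim to address.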
The purpose of this website is not to provide an exhaustive overview of the available material on data-driven optimization, but rather to give the interested practitioner or researcher an introductory overview of the strengths and weaknesses of these methods, and to point to helpful references for further study.