
Multi-Armed Bandits with Applications to Markov Decision Processes and Scheduling Problems


dc.contributor.advisor Hu, Jiaqiao en_US
dc.contributor.author Muqattash, Isa Mithqal en_US
dc.contributor.other Department of Applied Mathematics and Statistics en_US
dc.date.accessioned 2017-09-20T16:49:52Z
dc.date.available 2017-09-20T16:49:52Z
dc.date.issued 2014-12-01
dc.identifier.uri http://hdl.handle.net/11401/76260 en_US
dc.description 118 pages en_US
dc.description.abstract The focus of this work is on practical applications of stochastic multi-armed bandits (MABs) in two distinct settings. First, we develop and present REGA, a novel adaptive sampling-based algorithm for the control of finite-horizon Markov decision processes (MDPs) with very large state spaces and small action spaces. We apply a variant of the epsilon-greedy multi-armed bandit algorithm to each stage of the MDP in a recursive manner, thus computing an estimate of the 'reward-to-go' value at each stage of the MDP. We provide a finite-time analysis of REGA. In particular, we bound the probability that the approximation error exceeds a given threshold, where the bound is expressed in terms of the number of samples collected at each stage of the MDP. We empirically compare REGA against other sampling-based algorithms and find that our algorithm is competitive. We discuss measures to mitigate the curse of dimensionality arising from the backwards-induction nature of REGA; these measures become necessary when the MDP horizon is large. Second, we introduce e-Discovery, a topic of great significance to the legal industry, which concerns sifting through large volumes of data in order to identify the 'needle in the haystack' documents relevant to a lawsuit or investigation. Surprisingly, the topic has not been explicitly investigated in academia. Approaching the problem from a scheduling perspective, we highlight its main properties and challenges and outline a formal model for the problem. We examine an approach based on related work from the field of scheduling theory and provide simulation results that demonstrate the performance of our approach on a very large data set. We also provide an approach based on list scheduling that incorporates a side multi-armed bandit in lieu of standard heuristics. To this end, we propose the first MAB algorithm that accounts for both sleeping bandits and bandits with history. The empirical results are encouraging. Surveys of multi-armed bandits and of scheduling theory are included. Many new and known open problems are proposed and documented. en_US
dc.description.sponsorship This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for the completion of the degree. en_US
dc.format Monograph en_US
dc.format.medium Electronic Resource en_US
dc.language.iso en_US en_US
dc.publisher The Graduate School, Stony Brook University: Stony Brook, NY. en_US
dc.subject.lcsh Applied mathematics en_US
dc.subject.other Electronic Discovery, Markov Decision Process (MDP), Multi-Armed Bandit (MAB), Optimization Under Uncertainties, Sampling, Stochastic Scheduling en_US
dc.title Multi-Armed Bandits with Applications to Markov Decision Processes and Scheduling Problems en_US
dc.type Dissertation en_US
dc.mimetype application/pdf en_US
dc.contributor.committeemember Arkin, Esther en_US
dc.contributor.committeemember Deng, Yuefan en_US
dc.contributor.committeemember Ortiz, Luis en_US
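
The abstract describes REGA only at a high level. As a rough illustration of the recursive per-stage epsilon-greedy idea (not the dissertation's actual pseudocode), the minimal Python sketch below estimates the 'reward-to-go' value of a finite-horizon MDP by treating the action set at each stage as the arms of a bandit. The names rega_estimate, sample_transition, and sample_reward are hypothetical stand-ins for simulator access, and num_samples and epsilon are illustrative parameters.

    import random

    def rega_estimate(state, stage, horizon, actions,
                      sample_transition, sample_reward,
                      num_samples=20, epsilon=0.1):
        """Estimate the optimal reward-to-go from `state` at `stage` (a sketch)."""
        # Base case: no reward remains beyond the horizon.
        if stage == horizon:
            return 0.0
        q = {a: 0.0 for a in actions}    # running Q-value estimate per action (arm)
        pulls = {a: 0 for a in actions}  # number of times each arm was sampled
        for _ in range(num_samples):
            # Epsilon-greedy arm selection over the (small) action space.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=q.get)
            # Sample one transition and recurse to estimate the downstream value.
            next_state = sample_transition(state, a)
            total = sample_reward(state, a) + rega_estimate(
                next_state, stage + 1, horizon, actions,
                sample_transition, sample_reward, num_samples, epsilon)
            pulls[a] += 1
            q[a] += (total - q[a]) / pulls[a]  # incremental sample mean
        # The greedy value is this stage's estimated optimal reward-to-go.
        return max(q.values())

    # Toy usage with hypothetical two-action dynamics.
    actions = [0, 1]
    trans = lambda s, a: s + a
    rew = lambda s, a: float(a == 1)
    print(rega_estimate(state=0, stage=0, horizon=3, actions=actions,
                        sample_transition=trans, sample_reward=rew))

Note that each call recurses num_samples times per stage, so the total sampling cost of this naive sketch grows exponentially with the horizon; this is the kind of blow-up with horizon length that the dimensionality-mitigation measures mentioned in the abstract are meant to address.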

