11

Algorithmic Game Theory

Mehta, Aranyak 19 July 2005 (has links)
The interaction of theoretical computer science with game theory and economics has resulted in the emergence of two very interesting research directions. First, it has provided a new model for algorithm design, which is to optimize in the presence of strategic behavior. Second, it has prompted us to consider the computational aspects of various solution concepts from game theory, economics and auction design which have traditionally been considered mainly in a non-constructive manner. In this thesis we present progress along both these directions. We first consider optimization problems that arise in the design of combinatorial auctions. We provide an online algorithm in the important case of budget-bounded utilities. This model is motivated by the recent development of the business of online auctions of search engine advertisements. Our algorithm achieves a factor of $1-1/e$, via a new linear programming based technique to determine optimal tradeoffs between bids and budgets. We also provide lower bounds in terms of hardness of approximation in more general submodular settings, via a PCP-based reduction. Second, we consider truth-revelation in auctions, and provide an equivalence theorem between two notions of strategy-proofness in randomized auctions of digital goods. Last, we consider the problem of computing an approximate Nash equilibrium in multi-player general-sum games, for which we provide the first subexponential time algorithm.
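The abstract leaves the online allocation rule implicit; below is a minimal Python sketch of the bid-versus-budget trade-off idea commonly used to obtain a $1-1/e$ guarantee for budget-bounded advertisers, assuming each arriving query carries a dictionary of bids and each bidder has a fixed budget. The scaling function and all names are illustrative assumptions, not the thesis's exact LP-derived construction.

```python
import math

def allocate_online(queries, bids, budgets):
    """Sketch of budget-aware online allocation.

    Each arriving query goes to the bidder maximizing
    bid * psi(fraction of budget already spent), with psi(x) = 1 - exp(x - 1),
    so bidders whose budgets are nearly exhausted are discounted.
    Illustrative sketch only; not the thesis's exact method.
    """
    spent = {b: 0.0 for b in budgets}        # money spent per bidder so far
    assignment = {}                          # query -> winning bidder

    def psi(x):
        return 1.0 - math.exp(x - 1.0)       # discount: 1 - 1/e at x = 0, down to 0 at x = 1

    for q in queries:                        # queries arrive one at a time
        best, best_score = None, 0.0
        for b, bid in bids[q].items():
            remaining = budgets[b] - spent[b]
            if remaining <= 0.0:
                continue                     # bidder is out of budget
            score = min(bid, remaining) * psi(spent[b] / budgets[b])
            if score > best_score:
                best, best_score = b, score
        if best is not None:
            assignment[q] = best
            spent[best] += min(bids[q][best], budgets[best] - spent[best])
    return assignment, spent

# Example (hypothetical data): two advertisers, three queries.
alloc, spend = allocate_online(
    queries=["q1", "q2", "q3"],
    bids={"q1": {"A": 1.0, "B": 0.9},
          "q2": {"A": 0.8, "B": 1.0},
          "q3": {"A": 0.5, "B": 0.6}},
    budgets={"A": 1.5, "B": 1.0},
)
```

Under the standard assumption that individual bids are small relative to budgets, discounting a bidder's effective bid as its budget depletes is what protects the $1-1/e$ worst-case fraction of the offline optimum.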
12

Two-Sided Matching Markets: Models, Structures, and Algorithms

Zhang, Xuan January 2022 (has links)
Two-sided matching markets are a cornerstone of modern economics. They model a wide range of applications such as ride-sharing, online dating, job positioning, school admissions, and many more. In many of those markets, monetary exchange does not play a role. For instance, the New York City public high school system is free of charge, so the decision of how eighth-graders are assigned to public high schools must be made using concepts of fairness rather than price. There is therefore a large body of literature, mostly in the economics community, defining various concepts of fairness in different settings and showing the existence of matchings that satisfy these fairness conditions. Those concepts have enjoyed widespread success, inside and outside academia. However, finding such matchings is as important as showing their existence, and it is crucial to have fast (i.e., polynomial-time) algorithms as the size of the markets grows. In many cases, modern algorithmic tools must be employed to tackle the intractability issues arising in the big-data era. The aim of my research is to provide mathematically rigorous and provably fast algorithms to find solutions that extend and improve on a well-studied concept of fairness in two-sided markets known as stability. This concept was initially employed by the National Resident Matching Program in assigning medical doctors to hospitals, and is now widely used, for instance, by cities in the US for assigning students to public high schools and by certain refugee agencies to relocate asylum seekers. In the classical model, a stable matching can be found efficiently using the renowned deferred acceptance algorithm of Gale and Shapley. However, stability by itself does not address important concerns that have arisen recently, some of which were featured in national newspapers: how can we make sure students are admitted to the best school they deserve, and how can we enforce diversity in a cohort of students? By building on known and new tools from Mathematical Programming, Combinatorial Optimization, and Order Theory, my goal is to provide fast algorithms that answer questions like those above, and to test them on real-world data.

In Chapter 1, I introduce the stable matching problem and related concepts, as well as its applications in different markets. In Chapter 2, we investigate two extensions introduced in the framework of school choice that aim at finding an assignment more favorable to students -- legal assignments and the Efficiency Adjusted Deferred Acceptance Mechanism (EADAM) -- through the lens of the classical theory of stable matchings. We prove that the set of legal assignments is exactly the set of stable assignments of another instance. Our result implies that essentially all optimization problems over the set of legal assignments can be solved within the same time bound needed to solve them over the set of stable assignments. We also give an algorithm that obtains the assignment output by EADAM; it has the same running time as the deferred acceptance algorithm, improving substantially, in both theory and practice, over known algorithms.

In Chapter 3, we introduce a property of distributive lattices, which we term affine representability, and show its role in efficiently solving linear optimization problems over the elements of a distributive lattice, as well as in describing the convex hull of the characteristic vectors of the lattice elements. We apply this concept to the stable matching model with path-independent quota-filling choice functions, thus giving efficient algorithms and a compact polyhedral description for this model. Such choice functions can be used to model many complex real-world decision rules that are not captured by the classical model, such as those with diversity concerns. To the best of our knowledge, this model generalizes all those for which similar results were known, and ours is the first work that proposes efficient algorithms for stable matchings with choice functions beyond classical extensions of the deferred acceptance algorithm.

In Chapter 4, we study the discovery program (DISC), an affirmative action policy used by the New York City Department of Education (NYC DOE) for specialized high schools, and explore two other affirmative action policies that could minimally modify and improve it: the minority reserve (MR) and the joint-seat allocation (JSA) mechanisms. Although the discovery program is beneficial in increasing the number of admissions of disadvantaged students, our empirical analysis of the student-school matches from 12 recent academic years (2005-06 to 2016-17) shows that about 950 in-group blocking pairs were created each year within the disadvantaged group of students, impacting about 650 disadvantaged students every year. Moreover, we find that the program usually benefits lower-performing disadvantaged students more than top-performing ones (in terms of the ranking of their assigned schools), thus unintentionally creating an incentive to under-perform. In contrast, we show theoretically, by employing choice functions, that (i) both MR and JSA produce no in-group blocking pairs, and (ii) JSA is weakly group strategy-proof, ensures that at least one disadvantaged student is not worse off, and, when reservation quotas are carefully chosen, leaves no disadvantaged student worse off. We show that DISC satisfies none of these properties. In the general setting, we show that there is no clear winner among the matchings produced by DISC, JSA, and MR from the perspective of disadvantaged students. We do, however, characterize a condition on markets, which we term high competitiveness, under which JSA dominates MR for disadvantaged students. This condition holds, in particular, in certain markets where the demand for seats exceeds the supply and the performance of disadvantaged students is significantly lower than that of advantaged students. Data from the NYC DOE satisfy the high competitiveness condition, and on this dataset our empirical results corroborate our theoretical predictions, showing the superiority of JSA. We believe that the discovery program, and affirmative action mechanisms more generally, can be changed for the better by implementing the JSA mechanism, preserving incentives for top-performing disadvantaged students while retaining many of the benefits of the affirmative action program.
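Since both the classical results and the extensions above build on Gale and Shapley's deferred acceptance algorithm, a compact sketch of the textbook student-proposing version is included here for orientation. The data structures and parameter names are illustrative, and the sketch omits the choice-function and affirmative-action extensions developed in the thesis.

```python
from collections import deque

def deferred_acceptance(student_prefs, school_rank, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley), classical model.

    student_prefs[s] : list of schools, most preferred first
    school_rank[c]   : dict mapping student -> rank (lower is better)
    capacities[c]    : number of seats at school c
    Returns a stable assignment as a dict school -> set of held students.
    """
    next_idx = {s: 0 for s in student_prefs}   # next school s will propose to
    held = {c: set() for c in capacities}      # students tentatively accepted by each school
    free = deque(student_prefs)                # students still proposing

    while free:
        s = free.popleft()
        if next_idx[s] >= len(student_prefs[s]):
            continue                           # s exhausted their list; stays unmatched
        c = student_prefs[s][next_idx[s]]
        next_idx[s] += 1
        held[c].add(s)                         # school tentatively holds s
        if len(held[c]) > capacities[c]:
            worst = max(held[c], key=lambda x: school_rank[c][x])
            held[c].remove(worst)              # reject the student the school ranks worst
            free.append(worst)                 # rejected student proposes again later
    return held
```

Deferred acceptance runs in time linear in the total length of the preference lists; this is the running-time benchmark that the EADAM algorithm of Chapter 2 matches.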
13

On learning and visualizing lexicographic preference trees

Moussa, Ahmed S. 01 January 2019 (has links)
Preferences are very important in research fields such as decision making, recommender systems, and marketing. The focus of this thesis is on preferences over combinatorial domains, which are domains of objects configured with categorical attributes. For example, the domain of cars includes car objects that are constructed with values for attributes such as ‘make’, ‘year’, ‘model’, ‘color’, ‘body type’, and ‘transmission’. Different values can instantiate an attribute. For instance, values for the attribute ‘make’ can be Honda, Toyota, Tesla, or BMW, and the attribute ‘transmission’ can have automatic or manual. To this end, this thesis studies problems in preference visualization and learning for lexicographic preference trees, graphical preference models that are often compact over complex domains of objects built from categorical attributes. Visualizing preferences is essential to give users insight into the process of decision making, while learning preferences from data is practically important, as it is ineffective to elicit preference models directly from users.

The results of this thesis come in two parts: 1) for preference visualization, a web-based system is created that visualizes various types of lexicographic preference tree models learned by a greedy learning algorithm; 2) for preference learning, a genetic algorithm, called GA, is designed and implemented that learns a restricted type of lexicographic preference tree, called unconditional importance and unconditional preference trees, or UIUP trees for short. Experiments show that GA achieves higher accuracy than the greedy algorithm at the cost of more computational time. Moreover, a dynamic programming algorithm (DPA) is devised and implemented that computes an optimal UIUP tree model, in the sense that it satisfies as many examples in the dataset as possible. This novel exact algorithm is used to evaluate the quality of the models computed by GA, and it reduces the factorial time complexity of the brute-force algorithm to exponential. The major contribution of this thesis to machine learning and data mining is the novel exact learning algorithm DPA, which finds, in a huge search space, the UIUP tree model that correctly classifies the largest number of examples in the training dataset; such a model is referred to as the optimal model in this thesis. Finally, using datasets produced from randomly generated UIUP trees, this thesis presents experimental results on the performance (e.g., accuracy and computational time) of GA compared to the existing greedy algorithm and DPA.
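To make the UIUP model concrete, the following hypothetical sketch shows how a single importance order over attributes, together with one fixed value order per attribute, compares two objects lexicographically. The attribute names reuse the car example from the abstract; the representation is an illustrative assumption, not the exact encoding used by the greedy algorithm, GA, or DPA.

```python
def uiup_prefers(obj_a, obj_b, importance, value_order):
    """Compare two objects under a UIUP (unconditional importance,
    unconditional preference) lexicographic model.

    importance  : attributes from most to least important
    value_order : per attribute, values from most to least preferred
    Returns True if obj_a is preferred to obj_b, False if obj_b is preferred,
    and None if they agree on every listed attribute.
    """
    for attr in importance:                   # walk attributes by importance
        rank_a = value_order[attr].index(obj_a[attr])
        rank_b = value_order[attr].index(obj_b[attr])
        if rank_a != rank_b:                  # first differing attribute decides
            return rank_a < rank_b
    return None

# Example usage (hypothetical data):
car1 = {'make': 'Tesla', 'transmission': 'manual'}
car2 = {'make': 'Honda', 'transmission': 'automatic'}
prefers = uiup_prefers(
    car1, car2,
    importance=['make', 'transmission'],
    value_order={'make': ['Tesla', 'Honda', 'Toyota', 'BMW'],
                 'transmission': ['automatic', 'manual']},
)
# prefers is True: 'make' is the most important attribute and Tesla outranks Honda.
```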
