61

Survey techniques used in Hong Kong marketing research.

January 1970 (has links)
Thesis (M.B.A.)--Chinese University of Hong Kong. / Bibliography: leaves 92-94. / Summary in Chinese.
62

A comparison of selected methods of replacing missing data and their impact on multidimensional scaling results: a simulation study.

January 1975 (has links)
Summary in Chinese. / Thesis (M.B.A.)--Chinese University of Hong Kong. / Bibliography: leaves 127-128.
63

Large scale prediction models and algorithms

Monsch, Matthieu (Matthieu Frederic) January 2013 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Operations Research Center, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 129-132). / Over 90% of the data available across the world has been produced over the last two years, and the trend is increasing. It has therefore become paramount to develop algorithms that are able to scale to very high dimensions. In this thesis we are interested in showing how we can use structural properties of a given problem to come up with models applicable in practice, while keeping most of the value of a large data set. Our first application provides a provably near-optimal pricing strategy under large-scale competition, and our second focuses on capturing the interactions between extreme weather and damage to the power grid from large historical logs. The first part of this thesis is focused on modeling competition in Revenue Management (RM) problems. RM is used extensively across a swathe of industries, ranging from airlines to the hospitality industry to retail, and the internet has, by reducing search costs for customers, potentially added a new challenge to the design and practice of RM strategies: accounting for competition. This work considers a novel approach to dynamic pricing in the face of competition that is intuitive, tractable, and leads to asymptotically optimal equilibria. We also provide empirical support for the notion of equilibrium we posit. The second part of this thesis was done in collaboration with a utility company in the Northeast of the United States. In recent years, a number of powerful storms have led to extensive power outages. We provide a unified framework to help power companies reduce the duration of such outages. We first train a data-driven model to predict the extent and location of damage from weather forecasts. This information is then used in a robust optimization model to optimally dispatch repair crews ahead of time. Finally, we build an algorithm that uses incoming customer calls to compute the likelihood of damage at any point in the electrical network. / by Matthieu Monsch. / Ph.D.
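
The damage-prediction step lends itself to a small illustration. Below is a minimal sketch, assuming hypothetical weather features and fully synthetic training data; the actual model and utility data in the thesis are far more detailed.

```python
# Hedged sketch of a data-driven storm-damage model: map weather-forecast
# features to expected damage counts per grid cell. Feature names and the
# synthetic data are illustrative assumptions, not from the thesis.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000  # historical (storm, grid-cell) observations
X = np.column_stack([
    rng.uniform(0, 40, n),   # peak wind gust (m/s)
    rng.uniform(0, 120, n),  # total precipitation (mm)
    rng.uniform(0, 1, n),    # tree-canopy density around the feeder
])
# Synthetic ground truth: damage grows nonlinearly with wind and canopy.
y = 0.02 * X[:, 0] ** 2 * X[:, 2] + 0.05 * X[:, 1] * rng.normal(1.0, 0.2, n)

model = GradientBoostingRegressor().fit(X, y)
forecast = np.array([[35.0, 80.0, 0.7]])  # incoming storm, one grid cell
print(f"predicted damage units: {model.predict(forecast)[0]:.1f}")
```

A prediction of this kind would then feed the robust crew-dispatch optimization described in the abstract.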
64

Essays in Consumer Choice Driven Assortment Planning

Saure, Denis R. January 2011 (has links)
Product assortment selection is among the most critical decisions facing retailers: product variety and relevance are fundamental drivers of consumers' purchase decisions and ultimately of a retailer's profitability. In the last couple of decades an increasing number of firms have gained the ability to frequently revisit their assortment decisions during a selling season. In addition, the development and consolidation of online retailing have introduced new levels of operational flexibility and cheap access to detailed transactional information. These new operational features present the retailer with both benefits and challenges. The ability to revisit the assortment decision frequently allows the retailer to introduce and test new products during the selling season, adjust on the fly to unexpected changes in consumer preferences, and use customer profile information to customize the online shopping experience in real time. Our main objective in this thesis is to formulate and solve assortment optimization models addressing the challenges present in modern retail environments. We begin by analyzing the role of the assortment decision in balancing information collection and revenue maximization, when consumer preferences are initially unknown. By considering utility-maximizing consumers, we establish fundamental limits on the performance of any assortment policy whose aim is to maximize long-run revenues. In addition, we propose adaptive assortment policies that attain such performance limits. Our results highlight salient features of this dynamic assortment problem that distinguish it from similar problems of sequential decision making under model uncertainty. Next, we extend the analysis to the case when additional consumer profile information is available; our primary motivation here is the emerging area of online advertisement. As in the previous setup, we identify fundamental performance limits and propose adaptive policies attaining these limits. Finally, we focus on the effects of competition and consumers' access to information on assortment strategies. In particular, we study competition among retailers when they have access to common products, i.e., products that are available to the competition, and where consumers have full information about the retailers' offerings. Our results shed light on equilibrium properties in such settings and the effect common products have on this behavior.
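
The dynamic assortment problem above is amenable to a toy illustration: an explore-and-exploit policy under a multinomial logit (MNL) choice model. The sketch below uses made-up utilities, prices, and a crude frequency estimator, not the thesis's policies or performance limits.

```python
# Hedged sketch of adaptive assortment planning under an MNL choice model.
# All numbers (utilities, prices, horizon, explore rate) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, C, T = 8, 3, 5000          # products, assortment size, selling horizon
true_u = rng.normal(0, 1, N)  # unknown mean utilities
price = rng.uniform(1, 2, N)

counts = np.ones(N)  # times each product was offered
wins = np.ones(N)    # times it was chosen when offered

def mnl_choice(assortment):
    """Sample a purchase (or -1 = no purchase) from the MNL model."""
    w = np.exp(true_u[assortment])
    p = np.append(w, 1.0)  # outside option has weight 1
    p /= p.sum()
    i = rng.choice(len(p), p=p)
    return -1 if i == len(assortment) else assortment[i]

revenue = 0.0
for t in range(T):
    u_hat = np.log(np.maximum(wins / counts, 1e-3))  # crude utility estimate
    if rng.random() < 0.05:                          # explore
        S = rng.choice(N, C, replace=False)
    else:                                            # exploit: top products by
        S = np.argsort(-price * np.exp(u_hat))[:C]   # estimated weighted price
    chosen = mnl_choice(S)
    counts[S] += 1
    if chosen >= 0:
        wins[chosen] += 1
        revenue += price[chosen]

print(f"average revenue per period: {revenue / T:.3f}")
```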
65

Algorithms for Sparse and Low-Rank Optimization: Convergence, Complexity and Applications

Ma, Shiqian January 2011 (has links)
Solving optimization problems with sparse or low-rank optimal solutions has been an important topic since the recent emergence of compressed sensing and its matrix extensions such as the matrix rank minimization and robust principal component analysis problems. Compressed sensing enables one to recover a signal or image with fewer observations than the "length" of the signal or image, and thus provides potential breakthroughs in applications where data acquisition is costly. However, the potential impact of compressed sensing cannot be realized without efficient optimization algorithms that can handle extremely large-scale and dense data from real applications. Although the convex relaxations of these problems can be reformulated as linear programming, second-order cone programming, or semidefinite programming problems, the standard methods for solving these relaxations are not applicable because the problems are usually of huge size and contain dense data. In this dissertation, we give efficient algorithms for solving these "sparse" optimization problems and analyze the convergence and iteration complexity properties of these algorithms. Chapter 2 presents algorithms for solving the linearly constrained matrix rank minimization problem. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast and solved as a semidefinite programming problem, such an approach is computationally expensive when the matrices are large. In Chapter 2, we propose fixed-point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems. Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves such good recoverability. Numerical experiments on online recommendation, DNA microarray, and image inpainting problems demonstrate the effectiveness of our algorithms. In Chapter 3, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving linearly constrained matrix rank minimization problems are reported. Chapters 4 and 5 consider alternating direction type methods for solving composite convex optimization problems. We present in Chapter 4 alternating linearization algorithms that are based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions.
Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. For the more general problem of minimizing the sum of K convex functions, we propose multiple-splitting algorithms, in both basic and accelerated versions, with O(1/ε) and O(1/√ε) iteration complexity bounds for obtaining an ε-optimal solution. To the best of our knowledge, the complexity results presented in these two chapters are the first ones of this type that have been given for splitting and alternating direction type methods. Numerical results on various applications in sparse and low-rank optimization, including compressed sensing, matrix completion, image deblurring, and robust principal component analysis, are reported to demonstrate the efficiency of our methods.
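
The core step behind the fixed-point iteration that FPCA builds on is singular value shrinkage, which is concrete enough to sketch. The following is a minimal proximal-gradient loop on a synthetic matrix completion instance; it omits FPCA's continuation strategy and approximate-SVD acceleration, so it should be read as an illustration of the iteration, not the thesis's algorithm.

```python
# Sketch of a fixed-point (proximal gradient) iteration for
#   min_X  mu*||X||_*  +  0.5*||P_Omega(X - M)||_F^2,
# where the prox of the nuclear norm is singular value shrinkage.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, r = 60, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r ground truth
mask = rng.random((n, n)) < 0.4                        # observe 40% of entries

X, mu, step = np.zeros((n, n)), 1.0, 1.0
for _ in range(300):
    grad = mask * (X - M)                # gradient of the data-fit term
    X = svt(X - step * grad, step * mu)  # shrinkage (fixed-point) step

err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {err:.3f}")
```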
66

Many-Server Queues with Time-Varying Arrivals, Customer Abandonment, and non-Exponential Distributions

Liu, Yunan January 2011 (has links)
This thesis develops deterministic heavy-traffic fluid approximations for many-server stochastic queueing models. The queueing models, with many homogeneous servers working independently in parallel, are intended to model large-scale service systems such as call centers and health care systems. Such models have also been employed to study communication, computing and manufacturing systems. The heavy-traffic approximations yield relatively simple formulas for quantities describing system performance, such as the expected number of customers waiting in the queue. The new performance approximations are valuable because, in the generality considered, these complex systems are not amenable to exact mathematical analysis. Since the approximate performance measures can be computed quite rapidly, they usefully complement more cumbersome computer simulation. Thus these heavy-traffic approximations can be used to improve capacity planning and operational control. More specifically, the heavy-traffic approximations here are for large-scale service systems, having many servers and a high arrival rate. The main focus is on systems that have time-varying arrival rates and staffing functions. The system is considered under the assumption that there are alternating periods of overloading and underloading, which commonly occurs when service providers are unable to adjust the staffing frequently enough to economically meet demand at all times. The models also allow the realistic features of customer abandonment and non-exponential probability distributions for the service times and the times customers are willing to wait before abandoning. These features make the overall stochastic model non-Markovian and thus very difficult to analyze directly. This thesis provides effective algorithms to compute approximate performance descriptions for these complex systems. These algorithms are based on ordinary differential equations and fixed point equations associated with contraction operators. Simulation experiments are conducted to verify that the approximations are effective. This thesis consists of four pieces of work, each presented in one chapter. The first chapter (Chapter 2) develops the basic fluid approximation for a non-Markovian many-server queue with time-varying arrival rate and staffing. The second chapter (Chapter 3) extends the fluid approximation to systems with complex network structure and Markovian routing of customers to other queues after completing service at each queue. The extension to open networks of queues has important applications. For one example, in hospitals, patients usually move among different units such as emergency rooms, operating rooms, and intensive care units. For another example, in manufacturing systems, individual products visit different work stations one or more times. The open network fluid model has multiple queues, each of which has a time-varying arrival rate and staffing function. The third chapter (Chapter 4) studies the large-time asymptotic dynamics of a single fluid queue. When the model parameters are constant, convergence to the steady state as time evolves is established. When the arrival rates are periodic functions, such as in service systems with daily or seasonal cycles, the existence of a periodic steady state and the convergence to that periodic steady state as time evolves are established. Conditions are provided under which this convergence is exponentially fast.
The fourth chapter (Chapter 5) uses a fluid approximation to gain insight into the nearly periodic behavior seen in overloaded stationary many-server queues with customer abandonment and nearly deterministic service times. Deterministic service times are of applied interest because computer-generated service times, such as automated messages, may well be deterministic, and computer-generated service is becoming more prevalent. With deterministic service times, if all the servers remain busy for a long interval of time, then the times at which customers enter service assume a periodic pattern throughout that interval. In overloaded large-scale systems, these intervals tend to persist for a long time, producing nearly periodic behavior. To gain insight, a heavy-traffic limit theorem is established showing that the fluid model arises as the many-server heavy-traffic limit of a sequence of appropriately scaled queueing models, all having these deterministic service times. Simulation experiments confirm that the transient behavior of the limiting fluid model provides a useful description of the transient performance of the queueing system. However, unlike the asymptotic loss-of-memory results in the previous chapter for service times with densities, the stationary fluid model with deterministic service times does not approach steady state as time evolves independently of the initial conditions. Since the queueing model with deterministic service times approaches a proper steady state as time evolves, this model provides an example where the limit interchange (limiting steady state as time evolves and heavy traffic as scale increases) is not valid.
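
A flavor of the fluid approximations can be given in the Markovian special case M(t)/M/s(t)+M, where the total fluid content solves a simple ordinary differential equation. The sketch below uses illustrative rates; the thesis's algorithms handle general (non-exponential) service and patience distributions, which require substantially more machinery.

```python
# Minimal fluid sketch for the Markovian special case M(t)/M/s(t)+M:
# fluid in service drains at rate mu, waiting fluid abandons at rate theta.
# All parameters are illustrative assumptions.
import numpy as np

mu, theta = 1.0, 0.5                  # service and abandonment rates
lam = lambda t: 100 + 20 * np.sin(t)  # time-varying arrival rate
s = lambda t: 100.0                   # staffing function (constant here)

def rhs(t, x):
    busy = min(x, s(t))       # fluid in service
    queue = max(x - s(t), 0)  # fluid waiting
    return lam(t) - mu * busy - theta * queue

# Forward Euler over one "day" of 24 time units.
dt, x, path = 0.01, 0.0, []
for t in np.arange(0, 24, 0.01):
    x += dt * rhs(t, x)
    path.append(x)

print(f"peak fluid content: {max(path):.1f}, final: {path[-1]:.1f}")
```

Alternating overloaded (x > s) and underloaded (x < s) stretches of this trajectory correspond to the alternating regimes the thesis analyzes.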
67

Essays on Inventory Management and Object Allocation

Lee, Thiam Hui January 2012 (has links)
This dissertation consists of three essays. In the first, we establish a framework for proving equivalences between mechanisms that allocate indivisible objects to agents. In the second, we study a newsvendor model where the inventory manager has access to two experts that provide advice, and examine how and when an optimal algorithm can be efficiently computed. In the third, we study the classical single-resource capacity allocation problem and investigate the relationship between data availability and performance guarantees. We first study mechanisms that solve the problem of allocating indivisible objects to agents. We consider the class of mechanisms that utilize the Top Trading Cycles (TTC) algorithm (these may differ based on how they prioritize agents), and show a general approach to proving equivalences between mechanisms from this class. This approach is used to give alternative and simpler proofs for two recent equivalence results for mechanisms with linear priority structures. We also use the same approach to show that these equivalence results can be generalized to mechanisms where the agent priority structure is described by a tree. Second, we study the newsvendor model where the manager has recourse to advice, or decision recommendations, from two experts, and where the objective is to minimize worst-case regret from not following the advice of the better of the two experts. We show the model can be reduced to the classic machine-learning problem of predicting binary sequences, but with an asymmetric cost function, allowing us to obtain an optimal algorithm by modifying a well-known existing one. However, the algorithm we modify, and consequently the optimal algorithm we describe, is not known to be efficiently computable, because it requires evaluations of a function v which is the objective value of recursively defined optimization problems. We analyze v and show that when the two cost parameters of the newsvendor model are small multiples of a common factor, its evaluation is computationally efficient. We also provide a novel and direct asymptotic analysis of v that differs from previous approaches. Our asymptotic analysis gives us insight into the transient structure of v as its parameters scale, enabling us to formulate a heuristic for evaluating v generally. This, in turn, defines a heuristic for the optimal algorithm whose decisions we find in a numerical study to be close to optimal. In our third essay, we study the classical single-resource capacity allocation problem. In particular, we analyze the relationship between data availability (in the form of demand samples) and performance guarantees for solutions derived from that data. This is done by describing a class of solutions called ε-backwards accurate policies and determining a suboptimality gap for this class of solutions. The suboptimality gap we find is in terms of ε and is also distribution-free. We then relate solutions generated by a Monte Carlo algorithm and ε-backwards accurate policies, showing a lower bound on the quantity of data necessary to ensure that the solution generated by the algorithm is ε-backwards accurate with high probability. Combining the two results then allows us to give a lower bound on the data needed to generate an α-approximation with a given confidence probability 1 − δ. We find that this lower bound is polynomial in the number of fares, M, and 1/α.
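
The Top Trading Cycles algorithm at the heart of the first essay is concrete enough to sketch. The version below is the textbook housing-market variant, in which each agent is endowed with one object; the essay's mechanisms with linear and tree priority structures generalize this.

```python
# Sketch of the classic Top Trading Cycles (TTC) algorithm for the housing
# market (each agent initially owns one object). This is the textbook case,
# not the priority-based mechanisms analyzed in the essay.
def top_trading_cycles(prefs, owner):
    """prefs[a]: a's preference list over objects; owner[o]: current owner."""
    assignment, active = {}, set(prefs)
    while active:
        # Each active agent points at the owner of their best remaining object.
        points_to = {}
        for a in active:
            best = next(o for o in prefs[a] if owner[o] in active)
            points_to[a] = (best, owner[best])
        # Follow pointers until an agent repeats; that closes a cycle.
        a, seen = next(iter(active)), []
        while a not in seen:
            seen.append(a)
            a = points_to[a][1]
        # Clear the cycle: everyone in it gets the object they point at.
        for b in seen[seen.index(a):]:
            assignment[b] = points_to[b][0]
            active.remove(b)
    return assignment

# Tiny example: three agents, agent i endowed with house hi.
prefs = {1: ["h2", "h1", "h3"], 2: ["h1", "h2", "h3"], 3: ["h1", "h2", "h3"]}
owner = {"h1": 1, "h2": 2, "h3": 3}
print(top_trading_cycles(prefs, owner))  # {1: 'h2', 2: 'h1', 3: 'h3'}
```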
68

Three Essays on Dynamic Pricing and Resource Allocation

Nur, Cavdaroglu January 2012 (has links)
This thesis consists of three essays that focus on different aspects of pricing and resource allocation. We use techniques from supply chain and revenue management, scenario-based robust optimization, and game theory to study the behavior of firms in different competitive and non-competitive settings. We develop dynamic programming models that account for pricing and resource allocation decisions of firms in such settings. In Chapter 2, we focus on the resource allocation problem of a service firm, particularly a health-care facility. We formulate a general model that is applicable to various resource allocation problems of a hospital. To this end, we consider a system with multiple customer classes that display different reactions to delays in service. By adopting a dynamic-programming approach, we show that the optimal policy is not simple but exhibits desirable monotonicity properties. Furthermore, we propose a simple threshold heuristic policy that performs well in our experiments. In Chapter 3, we study a dynamic pricing problem for a monopolist seller that operates in a setting where buyers have market power, and where each potential sale takes the form of a bilateral negotiation. We review the dynamic programming formulation of the negotiation problem, and propose a simple and tractable deterministic "fluid" analogue for this problem. The main emphasis of the chapter is on expanding the formulation to the dynamic setting where both the buyer and seller have limited prior information on their counterparty's valuation and negotiation skill. In Chapter 4, we consider the revenue maximization problem of a seller who operates in a market with two types of customers, namely "investors" and "regular buyers". In a two-period setting, we model and solve the pricing game between the seller and the investors in the latter period, and based on the solution of this game, we analyze the revenue maximization problem of the seller in the former period. Moreover, we study the effects on total system profits when the seller and the investors cooperate through a contracting mechanism rather than competing with each other, and explore the contracting opportunities that lead to higher profits for both agents.
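
The kind of dynamic-programming analysis used in Chapter 2 can be illustrated with a small finite-horizon admission model: one customer of a random class arrives each period, and the facility decides whether to admit given remaining capacity. The rewards, class probabilities, and horizon below are illustrative assumptions, not the thesis's model.

```python
# Hedged sketch of a finite-horizon admission-control dynamic program.
# V[t, c] is the optimal expected reward with c units of capacity left
# at period t; the induced policy is monotone in c, which is what makes
# simple threshold heuristics attractive.
import numpy as np

T, C = 50, 10                  # horizon and capacity (illustrative)
p = np.array([0.5, 0.3, 0.2])  # arrival probability of each class
r = np.array([1.0, 2.0, 5.0])  # per-class admission reward

V = np.zeros((T + 1, C + 1))
admit = np.zeros((T, C + 1, len(p)), dtype=bool)
for t in range(T - 1, -1, -1):
    for c in range(C + 1):
        v = 0.0
        for k in range(len(p)):
            reject = V[t + 1, c]
            take = r[k] + V[t + 1, c - 1] if c > 0 else -np.inf
            admit[t, c, k] = take > reject
            v += p[k] * max(reject, take)
        V[t, c] = v

# Low-value classes are admitted only when capacity is ample relative
# to the time remaining.
print("admit class 0 at t=0, by remaining capacity:", admit[0, :, 0].astype(int))
```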
69

Data-driven System Design in Service Operations

Lu, Yina January 2013 (has links)
The service industry has become an increasingly important component of the world's economy. Simultaneously, the data collected from service systems has grown rapidly in both size and complexity due to the rapid spread of information technology, providing new opportunities and challenges for operations management researchers. This dissertation aims to explore methodologies to extract information from data and provide powerful insights to guide the design of service delivery systems. To do this, we analyze three applications in the retail, healthcare, and IT service industries. In the first application, we conduct an empirical study to analyze how waiting in queue in the context of a retail store affects customers' purchasing behavior. The methodology combines a novel dataset collected via video recognition technology with traditional point-of-sales data. We find that waiting in queue has a nonlinear impact on purchase incidence and that customers appear to focus mostly on the length of the queue, without adjusting enough for the speed at which the line moves. We also find that customers' sensitivity to waiting is heterogeneous and negatively correlated with price sensitivity. These findings have important implications for queueing system design and pricing management under congestion. The second application focuses on disaster planning in healthcare. According to a U.S. government mandate, the New York City metropolitan area must be capable of caring for 400 burn-injured patients in a catastrophic event, which far exceeds the current burn bed capacity. We develop a new system for prioritizing patients for transfer to burn beds as they become available and demonstrate its superiority over several other triage methods. Based on data from previous burn catastrophes, we study the feasibility of admitting the required number of patients to burn beds within the critical three-to-five-day time frame. We find that this is unlikely and that the ability to do so is highly dependent on the type of event and the demographics of the patient population. This work has implications for how disaster plans in other metropolitan areas should be developed. In the third application, we study workers' productivity in a global IT service delivery system, where service requests from possibly globally distributed customers are managed centrally and served by agents. Based on a novel dataset that tracks the detailed time intervals an agent spends on all business-related activities, we develop a methodology, motivated by econometric tools from survival analysis, to study the variation of productivity over time. This approach can be used to identify different mechanisms by which workload affects productivity. The findings provide important insights for the design of workload allocation policies that account for agents' workload management behavior.
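
The first study's methodology can be illustrated with a purchase-incidence regression in which queue length enters nonlinearly. The sketch below runs on synthetic data standing in for the video-recognition and point-of-sales dataset; the covariates and data-generating process are assumptions for illustration only.

```python
# Illustrative purchase-incidence model: logistic regression with queue
# length (linear and squared) and queue speed as covariates. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
queue_len = rng.poisson(6, n).astype(float)  # customers observed in line
speed = rng.uniform(0.5, 2.0, n)             # service speed (customers/min)
# Synthetic behavior mimicking the reported finding: purchase odds fall
# with the length of the line, with little adjustment for its speed.
logit = 1.5 - 0.25 * queue_len - 0.05 * speed
buy = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([queue_len, queue_len ** 2, speed])
fit = LogisticRegression(max_iter=1000).fit(X, buy)
print("coefficients (len, len^2, speed):", np.round(fit.coef_[0], 3))
```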
70

Perfect Simulation, Sample-path Large Deviations, and Multiscale Modeling for Some Fundamental Queueing Systems

Chen, Xinyun January 2014 (has links)
As a primary branch of Operations Research, Queueing Theory models and analyzes engineering systems with random fluctuations. With the development of the internet and computational techniques, the engineering systems of today are much larger in scale and more complicated in structure than those of 20 years ago, raising numerous new problems for researchers in the field of queueing theory. The aim of this thesis is to explore new methods and tools, from both algorithmic and analytical perspectives, that are useful for solving such problems. In Chapters 1 and 2, we introduce some techniques of asymptotic analysis that are relatively new to queueing applications in order to give a more accurate probabilistic characterization of queueing models with large scale and complicated structure. In particular, Chapter 1 gives the first functional large deviations result for infinite-server systems with general inter-arrival and service times. The functional approach we use enables a nice description of the whole system over the entire time horizon of interest, which is important in real problems. In Chapter 2, we construct a queueing model for the so-called limit order book that is used in major financial markets worldwide. We use an asymptotic approach called multi-scale modeling to disentangle the complicated dependence among the elements of the trading system and to reduce the model dimensionality. The asymptotic regime we use is inspired by empirical observations, and the resulting limit process explains and reproduces stylized features of real market data. Chapter 2 also provides a nice example of novel applications of queueing models in systems, such as the electronic trading system, that are traditionally outside the scope of queueing theory. Chapters 3 and 4 focus on stochastic simulation methods for performance evaluation of queueing models where analytic approaches fail. In Chapter 3, we develop a perfect sampling algorithm to generate exact samples from the stationary distribution of stochastic fluid networks in polynomial time. Our approach can be used for time-varying networks with general inter-arrival and service times, whose stationary distributions have no analytic expression. In Chapter 4, we focus on stochastic systems with continuous random fluctuations, in which, for instance, the workload arrives at the system as a continuous flow such as a Lévy process. We develop a general framework of simulation algorithms featuring a deterministic error bound and an almost square-root convergence rate. As an application, we apply this framework to estimate the stationary distributions of reflected Brownian motions, and the performance of our algorithm is better than that of prevalent existing numerical methods.
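
As a baseline illustration for the last application, one-dimensional reflected Brownian motion can be simulated naively through the Skorokhod reflection map and time discretization. Unlike the algorithms in Chapter 4, this carries no deterministic error bound; the parameters are illustrative.

```python
# Naive Monte Carlo for reflected Brownian motion (RBM) with negative drift,
# via the Skorokhod map R(t) = X(t) - min(0, min_{s<=t} X(s)).
# For drift mu < 0 the stationary law of RBM is exponential with rate
# 2|mu|/sigma^2, so the long-run mean should be near sigma^2/(2|mu|) = 0.5.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = -1.0, 1.0           # drift and volatility of the free process
T, dt, paths = 50.0, 0.01, 2000
n = int(T / dt)

steps = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=(paths, n))
X = np.cumsum(steps, axis=1)
running_min = np.minimum.accumulate(X, axis=1)
R = X - np.minimum(running_min, 0.0)  # reflected paths

print(f"estimated stationary mean: {R[:, -1].mean():.3f} (theory: 0.500)")
```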
