551

Essays on Service Operations Systems: Incentives, Information Asymmetries and Bounded Rationalities

He, Qiao-Chu 19 September 2018 (has links)
This dissertation is concerned with service operations systems, with attention to incentives, information asymmetries, and bounded rationality. Chapter 1 provides an overview of the dissertation.

In Chapter 2, motivated by information service operations for the agricultural sectors of developing economies, we propose a Cournot quantity-competition model with price uncertainty, in which the marketing boards of farmers' cooperatives have the option to obtain costly private information and to form information-sharing coalitions. We study the social value of market information and the incentives for information sharing among farmers.

In Chapter 3, we offer a behavioral (bounded rationality) theory to explain the product/technology adoption puzzle: why are superior investment goods not widely purchased by consumers? We show that present bias encourages procrastination but discourages strategic consumer behavior. Advance selling benefits not only the consumers, as a commitment device, but also the seller, as a price discrimination instrument.

In Chapter 4, motivated by the fresh-product delivery industry, we propose a model of service operations systems in which customers are heterogeneous in both their private delay sensitivity and their taste preference. The service provider maximizes revenue through jointly optimal pricing strategies, steady-state scheduling rules, and probabilistic routing policies under information asymmetry. Our results guide service mechanism design using substitution strategies.

In Chapter 5, motivated by the puzzle of excessively long queues for low-quality service in the tourism and healthcare industries, we study customers' learning behavior in service operations systems when they hold incorrect beliefs about the population distribution. We highlight a simple behavioral explanation for the blind "buying frenzy" in service systems with low quality: customers underestimate others' patience and are trapped in a false optimism about the service quality.

Chapter 6 concludes the dissertation with a summary of the main results and policy recommendations.
552

Estimating Uncertainty Attributable to Inconsistent Pairwise Comparisons in the Analytic Hierarchy Process (AHP)

Webb, Michael John 30 May 2018 (has links)
This praxis explores a new approach to the problem of estimating the uncertainty attributable to inconsistent pairwise comparison judgments in the Analytic Hierarchy Process (AHP), a prominent decision-making methodology used in numerous fields, including systems engineering and engineering management. Based on insights from measurement theory and established error propagation equations, the work develops techniques to estimate the uncertainty of aggregated priorities for decision alternatives based on measures of inconsistency for component pairwise comparison matrices. This research develops two formulations for estimating the error: the first, more computationally intensive and accurate, uses detailed calculations of parameter errors to estimate the aggregated uncertainty, while the second, significantly simpler, uses an estimate of mean relative error (MRE) for each pairwise comparison matrix to estimate the aggregated error. This paper describes the derivation of both formulations for the linear weighted sum method of priority aggregation in AHP and uses Monte Carlo simulation to test their estimation accuracy for diverse problem structures and parameter values. The work focuses on the two most commonly used methods of deriving priority weights in AHP: the eigenvector method (EVM) and the geometric mean method (GMM). However, the approach of estimating the propagation of measurement errors can be readily applied to other hierarchical decision support methodologies that use pairwise comparison matrices. The developed techniques provide analysts with the ability to easily assess decision model uncertainties attributable to comparative judgment inconsistencies without recourse to more complex optimization routines or simulation experiments described previously in the professional literature.
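For readers unfamiliar with the mechanics, the sketch below (not drawn from the praxis itself) shows the two priority-derivation methods named above, GMM and EVM, together with Saaty's consistency index CI = (lambda_max - n)/(n - 1), the standard scalar measure of pairwise-comparison inconsistency; the 3x3 judgment matrix is hypothetical.

```python
# Background sketch (not the praxis's code): deriving AHP priority
# weights from one pairwise comparison matrix by the geometric mean
# method (GMM) and the eigenvector method (EVM), plus Saaty's
# consistency index. The 3x3 judgment matrix is hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = A.shape[0]

# GMM: normalized row geometric means.
g = np.prod(A, axis=1) ** (1.0 / n)
w_gmm = g / g.sum()

# EVM: normalized principal right eigenvector.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w_evm = np.abs(vecs[:, k].real)
w_evm /= w_evm.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); CI = 0 means
# perfectly consistent judgments, and larger CI means larger expected
# uncertainty in the aggregated priorities.
ci = (vals.real.max() - n) / (n - 1)
print(np.round(w_gmm, 3), np.round(w_evm, 3), round(ci, 4))
```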
553

Using Multi Criteria Decision Analysis Decision Support Systems to Conduct Analysis of Alternatives for Department of Defense Acquisition Programs

Mahalak, David Matthew 06 February 2018 (has links)
Although the analysis of alternatives is a mandated requirement, the U.S. Government Accountability Office found a lack of guidance across the Department of Defense for conducting such analyses, a gap that contributed to significant cost, schedule, and performance problems for Defense acquisition programs. In a 2008 review of ninety-six major weapon system programs, findings showed cost growth of $296 billion, average program delays of twenty-two months, and the delivery of fewer systems with reduced capabilities. Without specific guidance and criteria for how analyses of alternatives should be conducted, the Department of Defense will continue to struggle to make informed trade-offs and start executable programs. This praxis presents a decision support system that enables decision makers to analyze cost, schedule, and performance ratings for multi-criteria decision analysis problems. The decision support system provides interactive visualization tools that allow decision makers to execute sensitivity and uncertainty analyses, analyze the decision problem from multiple stakeholder-specific viewpoints, and synthesize results in a meaningful way. Although the primary motivation of this praxis is to fill the gap identified by the U.S. Government Accountability Office, the decision support system presented in this praxis can be modified and applied across multiple domains.
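As a minimal illustration of the kind of analysis such a decision support system performs, and not of the praxis's actual tool, the sketch below scores three hypothetical alternatives on cost, schedule, and performance with a linear weighted sum and sweeps the performance weight as a one-way sensitivity analysis; all ratings and weights are invented.

```python
# Illustrative sketch (invented numbers, not the praxis's tool): a
# linear weighted-sum score over cost/schedule/performance ratings,
# plus a one-way sensitivity sweep on the performance weight.
import numpy as np

ratings = np.array([[0.8, 0.6, 0.5],   # rows: alternatives
                    [0.5, 0.7, 0.9],   # cols: cost, schedule, performance
                    [0.6, 0.8, 0.7]])
base_w = np.array([0.4, 0.3, 0.3])

def ranking(w):
    return np.argsort(-(ratings @ (w / w.sum())))  # best alternative first

print("baseline:", ranking(base_w))

# Sensitivity: vary the performance weight, rescaling the other two
# weights so that all three still sum to one.
for wp in np.linspace(0.1, 0.7, 7):
    w = np.array([0.4, 0.3, 0.0])
    w *= (1.0 - wp) / w.sum()
    w[2] = wp
    print(round(wp, 2), ranking(w))
```

A rank reversal appearing partway through the sweep is exactly the kind of finding the interactive visualizations described above are meant to surface for stakeholders.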
554

Innovation Management and Crowdsourcing: A Quantitative Analysis of Sponsor and Crowd Assessments

Jones, Kyle Thomas 15 March 2018 (has links)
Crowdsourcing is an increasingly common method used for new product development in large engineering-focused companies. While crowdsourcing is effective at generating a large number of ideas, previous research has noted that there is no efficient mechanism to sort those ideas based on the sponsor's desired outcomes. Without such a mechanism, the sponsor is left to evaluate ideas individually in a labor-intensive effort. This paper evaluates the extent to which information revealed by the crowd during the course of a crowdsourcing event can be used to accurately predict sponsor selection of submitted ideas. The praxis reviews current literature relevant to new product development, innovation management, and crowdsourcing, as well as methods for efficient sorting. Using a quantitative methodology, the author develops and evaluates several predictive models using various attributes of the crowd reaction to crowdsourced ideas. Ultimately, the praxis proposes a model that can significantly reduce the burden of sorting through submissions and determining which submissions merit further review.
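A minimal sketch of this style of predictive modeling, using hypothetical stand-in features (vote count, comment count, unique voters) and simulated data rather than anything from the praxis:

```python
# Illustrative sketch with simulated data (the features and data are
# hypothetical stand-ins, not the praxis's dataset): logistic
# regression predicting sponsor selection from crowd signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: vote count, comment count, unique voters (all invented).
X = rng.poisson(lam=[20.0, 5.0, 15.0], size=(200, 3)).astype(float)
# Invented ground truth: more engagement -> more likely selected.
p = 1.0 / (1.0 + np.exp(-(0.10 * X[:, 0] + 0.20 * X[:, 1] - 3.5)))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.score(X, y))  # crowd-signal weights, accuracy
```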
555

The Role of Trust and Collaboration toward Innovation in Outsourced Manufacturing Supply Chains: A Systematic Review

Mallett, Brian 23 March 2018 (has links)
As organizations shift more work to outsourced partners, a problem for management is how to accomplish not only short-term, tactical performance but also how to leverage network relationships for long-term, strategic advantage. Outsourced manufacturing supply chains represent a unique context for study, as internal and external participants share a common goal for supply chain performance but also have separate and independent goals. Trust and collaboration are among the inputs that can influence supply chain outcomes, but there is a gap in understanding these variables with respect to strategic outcomes like innovation. This research uses a systematic review of peer-reviewed literature to examine the role of trust and collaboration in outsourced manufacturing supply chains, and specifically the potential for these variables to shape relationships for advancing innovation. Two conditions are found to derive from the presence of trust: (1) willingness to engage, and (2) commitment to a long-term relationship and to overcoming failures. Three conditions are found to derive from the presence of collaboration: (1) awareness of capability, (2) sharing of information, and (3) integration of resources. These conditions shape an underlying mindset that can either advance or diminish innovation, and together they create transactional, operational, serendipitous, or strategic orientations. The conclusion is that a strategic orientation promotes the path to innovation and arises from high willingness, commitment, awareness, sharing, and integration as shaped by trust and collaboration. The findings have implications for organizations that seek to foster interactions for innovation and to go beyond what is necessary to accomplish short-term operational objectives.
556

Impacts of Airline Mergers on Passenger Welfare

Luo, Tian 06 October 2017 (has links)
Since 2005, the U.S. domestic airline industry has undergone a series of consolidations. The overall effects of these consolidations on air travelers are of considerable interest to researchers and policy makers alike. In this thesis, unlike previous studies in the literature, we provide a comprehensive assessment of the overall effect of each of the five major recent mergers on passenger welfare, as evaluated through changes in consumer surplus, starting with the US Airways–America West Airlines merger in 2005 and ending with the American Airlines–US Airways merger in 2013. We develop discrete choice models with fare, nonstop and one-stop service frequency, travel time, and other carrier and route attributes as parameters. The consumer surplus, as a function of these parameters, is calculated for each market as the measure of passenger welfare. By using the markets not affected by the mergers as a control group, we are able to separate the welfare effects of mergers from those of other extrinsic factors, such as oil price changes and changes in economic conditions. Several new insights are obtained. We find that mergers of legacy network carriers with a significant proportion of overlapping markets are generally accompanied by flight reallocation and network reorganization, which in turn contribute to an increase in passenger welfare. However, overall passenger welfare for very small communities declined after the mergers. Overall passenger welfare in markets with many competitors also declined, consistent with the classic economic theory of consolidation-induced welfare losses. We also find that the welfare gain from mergers of legacy network carriers with a significant proportion of overlapping markets progressively decreased as the number of existing major domestic carriers decreased, and that after the most recent mergers, any further potential mergers of legacy network carriers are likely to result in welfare losses.
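In a multinomial logit discrete choice model of the kind described, the change in consumer surplus per passenger is conventionally measured with the "logsum" formula, the log of the sum of exponentiated utilities divided by the marginal utility of income (here, minus the fare coefficient). The sketch below illustrates that calculation with hypothetical utilities and a hypothetical fare coefficient; it is not the thesis's estimated model.

```python
# Illustrative sketch (hypothetical utilities and fare coefficient,
# not the thesis's estimated model): the logit "logsum" measure of
# consumer surplus per passenger, before and after a merger.
import numpy as np

beta_fare = -0.012                     # hypothetical fare coefficient (1/$)
V_pre = np.array([-2.1, -2.6, -3.0])   # systematic utilities, pre-merger
V_post = np.array([-2.0, -2.8])        # one itinerary dropped, one improved

def surplus(V):
    # Expected consumer surplus = logsum / marginal utility of income.
    return np.log(np.exp(V).sum()) / (-beta_fare)

print("welfare change per passenger ($):", surplus(V_post) - surplus(V_pre))
```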
557

Implicit methods for iterative estimation with large data sets

Toulis, Panagiotis 25 July 2017 (has links)
The ideal estimation method needs to fulfill three requirements: (i) efficient computation, (ii) statistical efficiency, and (iii) numerical stability. The classical stochastic approximation method of Robbins and Monro (1951) is an iterative estimation method in which the current iterate (parameter estimate) is updated according to some discrepancy between what is observed and what is expected assuming the current iterate has the true parameter value. Classical stochastic approximation undoubtedly meets the computation requirement, which explains its widespread popularity, for example, in modern applications of machine learning with large data sets, but it cannot effectively combine computation with efficiency and stability. Surprisingly, the stability issue can be improved substantially if the aforementioned discrepancy is computed not using the current iterate, but using the conditional expectation of the next iterate given the current one. The computational overhead of the resulting implicit update is minimal for many statistical models, whereas statistical efficiency can be achieved through simple averaging of the iterates, as in classical stochastic approximation (Ruppert, 1988). Thus, implicit stochastic approximation is fast and principled, fulfills requirements (i)–(iii) for a number of popular statistical models, including generalized linear models, M-estimation, and proportional hazards models, and is poised to become the workhorse of estimation with large data sets in statistical practice.
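For concreteness, here is a minimal sketch (not the dissertation's code) of the implicit update for least-squares regression, where it admits a closed form: the classical step is shrunk by a factor 1/(1 + a_n * ||x_n||^2), which is the source of the method's numerical stability, and averaging the iterates recovers statistical efficiency. The learning-rate schedule is a hypothetical choice.

```python
# A minimal sketch: implicit stochastic approximation (implicit SGD)
# for least-squares regression. The implicit update
#   theta_{n+1} = theta_n + a_n * (y_n - x_n' theta_{n+1}) * x_n
# has the closed form used below; it shrinks the classical step by
# 1 / (1 + a_n * ||x_n||^2), which is what stabilizes the iteration.
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 10_000
theta_star = rng.normal(size=d)          # ground truth for the simulation
theta = np.zeros(d)                      # implicit-SGD iterate
theta_bar = np.zeros(d)                  # running (Ruppert) average

for t in range(1, n + 1):
    x = rng.normal(size=d)
    y = x @ theta_star + rng.normal()
    a = 1.0 / t ** 0.7                   # hypothetical learning-rate schedule
    step = a / (1.0 + a * (x @ x))       # implicit shrinkage factor
    theta = theta + step * (y - x @ theta) * x
    theta_bar += (theta - theta_bar) / t # averaging restores efficiency

print(np.linalg.norm(theta_bar - theta_star))  # should be small
```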
558

Efficient set relations among data envelopment analysis models and resource use efficiency in manufacturing

Heimerman, Kathryn T 01 January 1993 (has links)
Data Envelopment Analysis (DEA) is a multi-criteria data analysis methodology introduced by Charnes, Cooper, and Rhodes in 1978. Since that time, it has proven to be a valuable analysis tool for strategic, policy, and operational decision problems. Its primary use is to conduct performance evaluations of technical, scale, and managerial efficiency. Because DEA generalizes the single-dimensional engineering and economic efficiency measure into a multi-dimensional measure, it has useful applications in engineering and economic studies. This dissertation addresses several aspects of the DEA methodology and presents original research results of both a theoretical and an applied nature. The early chapters provide the reader with an intuitive understanding of DEA in addition to a finely tuned technical understanding of the method. The later chapters build on this understanding through new theoretical results that contribute to a unifying DEA theory and through an empirical study of resource use efficiency in manufacturing. The theoretical results thoroughly examine and specify the relationships between the economic concept of returns to scale enforced by different DEA models and the dimensionality of the variable set. These relationships become apparent by examining properties of the set of units classified as efficient by each DEA model, and they are delineated, in set-theoretic terms, in a sequence of theorems with proofs. The applied research is an empirical DEA study of global resource use efficiency in international manufacturing using actual data obtained from the United Nations. By using the aggregate measure of efficiency that DEA provides, this research links multiple manufacturing outputs to consumption levels of multiple resources, thereby incorporating complexities of manufacturing environments that prior, simpler productivity analyses have been unable to capture. In particular, we analyze and interpret relationships between resource use and manufacturing efficiency. We compare the performance of the manufacturing sectors of nations around the globe, detecting temporal trends in efficiency, including differences in performance by economy type and by geographic location. Both the theoretical and the applied contributions presented in this dissertation are springboards to areas of future research, and the dissertation concludes with a discussion of such possible extensions and follow-on studies.
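As background for readers new to DEA, and not as a reproduction of the dissertation's analysis, the sketch below solves the input-oriented CCR model, the original Charnes-Cooper-Rhodes formulation, as one linear program per decision-making unit (DMU) on hypothetical data; a DMU with score theta = 1 is rated efficient.

```python
# Background sketch (not the dissertation's code): the input-oriented
# CCR DEA model as one linear program per decision-making unit (DMU),
# on hypothetical data. theta = 1 means the DMU is rated efficient.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0],      # rows: inputs,  cols: DMUs
              [3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 2.0]])     # rows: outputs, cols: DMUs

def ccr_efficiency(k):
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]     # variables: [theta, lambda_1..lambda_n]
    # Inputs:  X @ lam <= theta * x_k   ->  -x_k * theta + X @ lam <= 0
    A_in = np.c_[-X[:, [k]], X]
    # Outputs: Y @ lam >= y_k           ->  -Y @ lam <= -y_k
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]
    res = linprog(c,
                  A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, k]],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.x[0]

print([round(ccr_efficiency(k), 3) for k in range(X.shape[1])])
```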
559

Algorithms to extract hidden networks and applications to set partitioning problems

Han, Hyun-Soo 01 January 1994 (has links)
Recently, tremendous advancements have been made in the solution of the set partitioning problem (SPP), which finds application in numerous real-world industrial scheduling problems such as fleet assignment and airline crew scheduling. Two major thrusts in the development of solution procedures for SPP have been variable reduction and the use of special structures, either inherent or enforced, in the element-set incidence matrix on which the problem is defined. This dissertation demonstrates the use of hidden network structures for the solution of SPP. The hidden network structures in the element-set incidence matrix are revealed via preprocessing; the importance of problem preprocessing in the development of efficient computational techniques has been well established. We develop heuristic procedures to extract hidden network submatrices from a (0, 1) matrix. The heuristics are based on Fujishige's PQ-graph algorithm (1980), which is one of the most computationally efficient algorithms for testing the graph realizability of a (0, ±1) matrix. The computational implementation of the algorithm is the first of its kind, and the computational experience reported in this dissertation validates the almost-linear-time computational complexity of testing graph realizability. Fujishige's algorithm lends itself to modification for the development of heuristic procedures that identify submatrices that transform to a pure network. The heuristics we develop are based on rules that use Fujishige's PQ-graph characteristics, so that the computational efficiencies afforded by PQ-graphs are retained. By finding an embedded network row submatrix, SPP is transformed to a network with side constraints. Flow conditions on the revealed pure network are then used in a procedure for effecting variable reduction, which has been recognized as essential for the efficient solution of SPP. By finding an embedded network column submatrix, SPP is transformed to a network with side columns; the resulting formulation is used to find a feasible solution for SPP quickly. To obtain optimal and suboptimal solutions to SPP, we use a reformulation of the problem: by repeated application of the heuristic that extracts an embedded network column submatrix, the element-set incidence matrix is transformed to a concatenation of pure network submatrices. This reformulation is to be used within an enumerative procedure that allows significant variable reduction. The research is summarized as follows: (i) Fujishige's PQ-graph algorithm is implemented, supplemented with algorithmic details that have never been published; (ii) heuristic procedures to extract hidden network submatrices are developed, using rules designed to maintain Fujishige's PQ-graph characteristics; (iii) the use of embedded network structures is demonstrated for the solution of SPP via variable reduction procedures and procedures for efficiently generating a feasible solution; and (iv) a procedure that uses the reformulation to find optimal solutions for set partitioning problems is developed.
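For readers unfamiliar with SPP, here is a minimal statement of the problem itself, not of the dissertation's hidden-network algorithms: minimize c'x subject to Ax = 1 with x binary, so that every element is covered by exactly one chosen set. The sketch below solves a tiny hypothetical instance with scipy.optimize.milp (available in SciPy 1.9+).

```python
# Background sketch: the set partitioning problem itself, on a tiny
# hypothetical instance, solved with scipy.optimize.milp (SciPy >= 1.9).
# The dissertation's heuristics operate on (sub)matrices of A; this
# sketch only shows the model they serve.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

A = np.array([[1, 0, 1, 0],    # rows: elements (e.g., flights to cover)
              [1, 1, 0, 0],    # cols: candidate sets (e.g., crew pairings)
              [0, 1, 0, 1],
              [0, 0, 1, 1]])
c = np.array([4.0, 3.0, 5.0, 2.0])   # cost of selecting each set

res = milp(c,
           constraints=LinearConstraint(A, lb=1, ub=1),  # Ax = 1: exact cover
           integrality=np.ones(c.size),                  # x integer
           bounds=Bounds(0, 1))                          # x binary
print(res.x, res.fun)   # expect columns 1 and 4 selected, at cost 6
```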
560

A dynamic programming approach to maintenance: Inspection models for a single machine under stochastic deterioration

Cohn, Sanford 01 January 1995 (has links)
Consider a single machine that produces N items each period. This machine, which deteriorates over time, needs occasional maintenance to restore it to its "new" condition. Our only indication that such deterioration has occurred is an increase in the incidence of defective items produced by the machine: the more periods that pass without maintenance, the higher the chance that the machine has deteriorated and will start producing more inferior items. We suppose that at the beginning of each period, the decision maker has three options: (1) let the machine produce during that period without interfering with its production or inspecting it; (2) inspect n items produced that period, and if the inspected items are bad, perform maintenance at the end of the period to restore the machine's "new" condition before the start of the next period; or (3) automatically perform maintenance on the machine at the start of the period without inspecting any of the items or doing any production. This choice must be made taking into account (a) the machine's deterioration rate, (b) the type of inspection that can be done, and (c) the costs involved, e.g., inspection, maintenance, bad items produced, and lost revenues. This thesis considers two different finite-time-horizon discounted dynamic programming models that can be used to optimally choose the correct option each period. The first model assumes that any inspection data obtained in a given period are used only in that period. The second model assumes that a single summary statistic of all past and present data is kept and employed in making the decision. For both models, we prove the existence of a set of sufficient conditions, based exclusively on input data, that ensure that the optimal policy has a special simple structure. In the first model, the optimal policy indicates that it is optimal to do nothing for the first few periods since the last maintenance, to inspect before making a decision for the next few periods, and, if no maintenance is chosen in those periods, to automatically maintain in the following period. This structure is called a three-tier policy. The second model's special structure says that, for any given summary statistic, the optimal policy also has a three-tier structure. In addition, for any given period, the optimal policy is to do nothing for the "best" summary statistics, inspect for the "next best" summary statistics, and automatically maintain for the worst summary statistics.
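A heavily simplified sketch in the spirit of the first model, not the thesis's actual formulation: the state is the number of periods since the last maintenance, inspection is assumed perfectly informative, and all parameters are hypothetical. Backward induction over the three options often yields the do-nothing/inspect/maintain ordering that the abstract calls a three-tier policy.

```python
# A heavily simplified stand-in for the first model (all parameters
# hypothetical; inspection assumed perfectly informative). State s =
# periods since the last maintenance; each period choose among
# 0 = do nothing, 1 = inspect (maintain at end of period if bad),
# 2 = maintain at the start of the period.
import numpy as np

T, S = 12, 12                  # horizon, largest tracked state
q = 0.2                        # per-period chance of deteriorating
N, c_bad = 100, 1.0            # items per period, cost per defective item
d_new, d_old = 0.02, 0.25      # defect rates: "new" vs. deteriorated machine
c_insp, c_maint, beta = 20.0, 150.0, 0.95

def p_old(s):                  # P(deteriorated | s periods since maintenance)
    return 1.0 - (1.0 - q) ** s

V = np.zeros(S + 1)            # terminal values
acts = np.zeros(S + 1, dtype=int)
for t in range(T):             # backward induction
    Vn = np.empty_like(V)
    for s in range(S + 1):
        s1 = min(s + 1, S)
        exp_def = N * c_bad * (p_old(s) * d_old + (1 - p_old(s)) * d_new)
        do_nothing = exp_def + beta * V[s1]
        inspect = c_insp + exp_def + beta * (
            p_old(s) * (c_maint + V[0]) + (1 - p_old(s)) * V[s1])
        maintain = c_maint + N * c_bad * d_new + beta * V[1]
        opts = (do_nothing, inspect, maintain)
        Vn[s], acts[s] = min(opts), int(np.argmin(opts))
    V = Vn
print(acts)   # often 0s, then 1s, then 2s -- the "three tier" shape
```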
