11

Facility location with economies of scale and congestion

Lu, Da January 2010 (has links)
Most literature on facility location assumes a fixed set-up cost and a linear variable cost. In practice, however, as production volume increases, cost savings are achieved through economies of scale; then, once production exceeds a certain capacity level, congestion occurs and costs start to increase significantly. The result is an S-shaped cost function that makes the location-allocation decisions challenging. This thesis presents a nonlinear mixed integer programming formulation for the facility location problem with economies of scale and congestion and proposes a Lagrangian solution approach. Testing on a variety of cost functions and settings shows that the proposed approach efficiently finds solutions within an average gap of 3.79% from optimality.
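To make the cost structure concrete, the sketch below (Python, with parameter values that are illustrative rather than taken from the thesis) builds an S-shaped total cost: a fixed set-up cost, a concave economies-of-scale term, and a convex congestion term that activates past a capacity threshold.

```python
def s_shaped_cost(q, fixed=100.0, scale=5.0, threshold=80.0, penalty=0.02):
    """Illustrative S-shaped production cost: fixed set-up cost, a concave
    economies-of-scale term, and a convex congestion term past a threshold."""
    if q <= 0:
        return 0.0
    economies = scale * q ** 0.5                          # concave: unit cost falls with volume
    congestion = penalty * max(q - threshold, 0.0) ** 2   # convex: congestion past capacity
    return fixed + economies + congestion

# Average cost per unit first falls, then rises -- the non-convexity that
# makes exact location-allocation optimization hard.
for q in (10, 40, 80, 120, 160):
    print(q, round(s_shaped_cost(q) / q, 2))
```

Any concave-then-convex total cost behaves this way; the thesis tests a variety of such functional forms.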
12

A Lagrangian Relaxation Approach to a Two-Stage Stochastic Facility Location Problem with Second-Stage Activation Cost

Ghodsi, Ghazal January 2012 (has links)
We study a two-stage stochastic facility location problem in the context of disaster response network design. In the first stage (pre-event response), planners decide where to locate a set of facilities in strategic regions. In the second stage (post-event response), some of these facilities are activated to respond to demand in the disaster-affected region. Because the second-stage decisions depend on disaster occurrence and impact, which are highly uncertain, a large number of scenarios is defined to capture the spectrum of possible occurrences, and facility activation and demand allocation decisions are made under each scenario. The aim is to minimize the total cost of locating facilities in the first stage plus the expected cost of facility activation and demand allocation over all scenarios in the second stage, while satisfying demand subject to facility and arc capacities. We propose a mixed integer programming model with binary facility location variables in the first stage, and binary facility activation variables and fractional demand allocation variables in the second stage. We propose two Lagrangian relaxations and several valid cuts to improve the bounds, experiment with aggregated, disaggregated, and hybrid implementations for calculating the Lagrangian bound, and develop several Lagrangian heuristics. Extensive numerical testing investigates the effect of the valid cuts and of disaggregation and compares the two relaxations; the second relaxation proves to provide a tight bound as well as high-quality feasible solutions.
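In outline, and with notation assumed here rather than taken from the thesis, the objective and core constraints of such a model read:

```latex
\min_{x,\,y,\,z} \quad \sum_{i \in I} f_i x_i
  \;+\; \sum_{s \in S} p_s \Bigl( \sum_{i \in I} a_i\, y_i^s
  \;+\; \sum_{i \in I} \sum_{j \in J} c_{ij}\, z_{ij}^s \Bigr)
\qquad \text{s.t.} \quad
y_i^s \le x_i, \qquad
\sum_{i \in I} z_{ij}^s = d_j^s, \qquad
\sum_{j \in J} z_{ij}^s \le u_i\, y_i^s,
```

where x_i ∈ {0,1} are the first-stage location decisions, y_i^s ∈ {0,1} the scenario-s activation decisions (a facility can only be activated where one was located), z_ij^s ≥ 0 the fractional demand allocations, p_s the scenario probabilities, f_i and a_i the location and activation costs, and d_j^s and u_i the demands and facility capacities; arc capacities would add upper bounds on individual z_ij^s.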
13

The Generalized DEA Model of Fundamental Analysis of Public Firms, with Application to Portfolio Selection

Zhang, Xin 01 December 2007 (has links)
Fundamental analysis is an approach for evaluating a public firm's investment worthiness by looking at its business at the basic, or fundamental, financial level. The focus of this thesis is on utilizing financial statement data and a new generalization of Data Envelopment Analysis, termed the GDEA model, to determine a relative financial strength (RFS) indicator that represents the underlying business strength of a firm. The approach is based on maximizing a correlation metric between the GDEA-based score of financial strength and stock price performance. The correlation maximization problem is a difficult binary nonlinear optimization problem that requires iterative reconfiguration of financial statement parameters as inputs and outputs. A two-step heuristic algorithm that combines random sampling and local search optimization is developed. Theoretical optimality conditions are also derived for checking solutions of the GDEA model. Statistical tests are developed for validating the utility of the RFS indicator for portfolio selection, and the approach is computationally tested and compared with competing approaches. The GDEA model is further extended by incorporating expert information on input/output selection. In addition to deriving theoretical properties of the model, a new methodology is developed for testing whether such exogenous expert knowledge is significant in obtaining stronger RFS indicators. Finally, the RFS approach under expert information is applied in a case study involving more than 800 firms covering all sectors of the U.S. stock market, to determine optimized RFS indicators for stock selection. The selected stocks are then used within portfolio optimization models to demonstrate the superiority of the techniques developed in this thesis.
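A minimal sketch of the two-step heuristic's structure, assuming a stand-in objective in place of the actual GDEA-score/price-performance correlation (which would require solving DEA programs); the function names and the toy objective are illustrative, not the thesis's implementation.

```python
import random

def correlation_objective(config):
    """Stand-in for corr(GDEA score, stock performance) over a binary
    input/output assignment of financial-statement parameters."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]  # toy hidden pattern, illustration only
    return sum(c == t for c, t in zip(config, target)) / len(target)

def random_sample(n_vars, n_draws):
    """Step 1: broad random sampling of binary configurations."""
    best = max((tuple(random.randint(0, 1) for _ in range(n_vars))
                for _ in range(n_draws)), key=correlation_objective)
    return list(best)

def local_search(config):
    """Step 2: single-bit-flip hill climbing from the sampled start."""
    improved = True
    while improved:
        improved = False
        for i in range(len(config)):
            trial = config.copy()
            trial[i] ^= 1  # flip one input/output assignment
            if correlation_objective(trial) > correlation_objective(config):
                config, improved = trial, True
    return config

start = random_sample(n_vars=8, n_draws=200)
best = local_search(start)
print(best, correlation_objective(best))
```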
14

Data Mining with Multivariate Kernel Regression Using Information Complexity and the Genetic Algorithm

Beal, Dennis Jack 01 December 2009 (has links)
Kernel density estimation is a data smoothing technique that depends heavily on bandwidth selection. The current literature has focused on optimal, primarily data-driven selectors for the univariate case; plug-in and cross-validation selectors have recently been extended to the general multivariate case. This dissertation introduces and develops novel techniques for data mining with multivariate kernel regression, using information complexity and the genetic algorithm as a heuristic optimizer to choose the optimal bandwidth and the best predictors in kernel regression models. Simulated and real data are used to cross-validate the optimal bandwidth selectors using information complexity. The genetic algorithm is used in conjunction with information complexity to determine kernel density estimates for variable selection from high-dimensional multivariate data sets. Kernel regression is also hybridized with the implicit enumeration algorithm to determine the set of independent variables for the globally optimal solution, using information criteria as the objective function. The results from the genetic algorithm are compared to the optimal solution from the implicit enumeration algorithm and to the known global optimum from an explicit enumeration of all possible subset models.
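As a simplified, univariate illustration of bandwidth-driven kernel regression, the sketch below selects a Gaussian-kernel bandwidth by leave-one-out cross-validation over a grid; the dissertation replaces this with the ICOMP information-complexity criterion and a genetic-algorithm search in the multivariate setting.

```python
import numpy as np

def nw_predict(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def loo_score(x, y, h):
    """Leave-one-out CV error: a simple stand-in for the ICOMP criterion."""
    n = len(x)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        pred = nw_predict(x[mask], y[mask], x[i:i + 1], h)[0]
        errs.append((y[i] - pred) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 6, 80))
y = np.sin(x) + rng.normal(0, 0.2, 80)
bandwidths = np.linspace(0.05, 1.0, 20)
h_best = min(bandwidths, key=lambda h: loo_score(x, y, h))
print("selected bandwidth:", round(h_best, 3))
```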
15

Algorithms for Multi-Sample Cluster Analysis

Almutairi, Fahad 01 August 2007 (has links)
In this study, we develop algorithms to solve the Multi-Sample Cluster Analysis (MSCA) problem. This problem arises when we have multiple samples and need to find the statistical model that best fits their cluster structure. One important application area for our algorithms is international market segmentation, where samples of customers' preferences and characteristics are collected from different regions in the market, and the goal is to group regions with similar customer characteristics into clusters (segments). We develop branch and bound algorithms and a genetic algorithm, in which any of the available information criteria (AIC, CAIC, SBC, and ICOMP) can be used as the objective function to be optimized. Our algorithms use the Clique Partitioning Problem (CPP) formulation; they are the first algorithms to use information criteria with the CPP formulation. When the branch and bound algorithms are allowed to run to completion, they converge to the optimal MSCA alternative, and they also prove to find good solutions when stopped short of convergence. In particular, we develop a branching strategy that uses a "look-ahead" technique, which we call the complete adaptive branching strategy. This strategy makes the branch and bound algorithm quickly search for the optimal solution in multiple branches of the enumeration tree before switching to a depth-first branching strategy. In computational tests, this method's performance was superior to other branching methods as well as to the genetic algorithm.
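As a toy illustration of the MSCA objective, the sketch below explicitly enumerates all partitions of three univariate samples, scores each partition with AIC (one Gaussian fitted per cluster of pooled samples), and keeps the minimum; the thesis's branch and bound and genetic algorithms, operating on the CPP formulation, replace this brute-force enumeration when the number of samples grows. Data and model here are illustrative.

```python
import numpy as np

def partitions(items):
    """All set partitions of a list (explicit enumeration; small K only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def aic_of_partition(samples, part):
    """AIC of fitting one Gaussian (mean, variance) per cluster of pooled samples."""
    aic = 0.0
    for cluster in part:
        pooled = np.concatenate([samples[i] for i in cluster])
        mu, sigma = pooled.mean(), pooled.std(ddof=0)
        loglik = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - (pooled - mu) ** 2 / (2 * sigma**2))
        aic += 2 * 2 - 2 * loglik  # two parameters per cluster
    return aic

rng = np.random.default_rng(1)
samples = {0: rng.normal(0, 1, 30), 1: rng.normal(0.2, 1, 30),
           2: rng.normal(5, 1, 30)}
best = min(partitions(list(samples)), key=lambda p: aic_of_partition(samples, p))
print("best grouping of samples:", best)  # expect {0,1} merged, {2} separate
```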
16

Approximation Methods for the Standard Deviation of Flow Times in the G/G/s Queue

Zhao, Xiaofeng 01 August 2007 (has links)
We provide approximation methods for the standard deviation of flow time in system for a general multi-server queue with infinite waiting capacity (G/G/s). The approximations require only the mean and standard deviation (or the coefficient of variation) of the inter-arrival and service time distributions, and the number of servers. They are simple enough to be implemented in manual or spreadsheet calculations, yet in comparisons to Monte Carlo simulations they give good approximations (within ±10%) for cases in which the coefficients of variation of the inter-arrival and service times are between 0 and 1. The approximations also have the desirable property of being exact for the Markovian M/M/s model, as well as for some embedded Markov queuing models (Ek/M/1 and M/Eα/1). The practical significance of this research is that (1) many real-world queuing problems involve G/G/s systems, and (2) decision making often requires predicting the range of variation of the time in system, not just its average. For example, one job shop facility with which the authors have worked guarantees its customers a nine-day turnaround time and must determine the minimum number of machines of each type required to achieve nine days as a "worst case" time in system. In many systems, the "worst case" flow time is highly relevant because it represents the lead time that can safely be promised to customers; estimating it requires both the average and the standard deviation of the time in system. The usefulness of our results stems from their computational simplicity: they provide quick approximations without resorting to complex numerical techniques or Monte Carlo simulation. While many accurate approximations for the G/G/s queue have been proposed previously, they often result in algebraically intractable expressions, which hinders the derivation of closed-form solutions for the decision variables in optimization models and inevitably leads to complex numerical methods. Moreover, applying many of these approximations requires specifying the actual inter-arrival and service time distributions, and previous results have tended to focus on delay probabilities and average waiting time rather than the standard deviation of the time in system. We also extend the approximations to the standard deviation of flow times of each priority class in G/G/s priority queues and compare the results to Monte Carlo simulations; these experiments show good approximations for all priority classes except the lowest priority class in systems with high utilization. In addition, we use the approximations to estimate the average and standard deviation of the total flow time through queuing networks and validate these results via Monte Carlo simulation. The primary theoretical contribution of this work is the derivation of an original expression for the coefficient of variation of waiting time in the G/G/s queue, which holds exactly for the G/M/s and M/G/1 queues. We also perform error-sensitivity analysis of the formula and develop interpolation models for the probability of waiting, which is needed to calculate the coefficient of variation of waiting time in the G/G/s queue.
More broadly, we develop a general queuing-system performance predictor that can be used to estimate a wide range of performance measures for steady-state, infinite-capacity queues, and we intend to make available a user-friendly tool implementing our approximation methods. The advantage of these models is that they make no assumptions about the distributions of inter-arrival and service times. Our techniques generalize previously developed approximations and can also be used in queuing networks and priority queues. We hope these approximation methods will benefit practitioners who want simple, quick answers for their multi-server queuing systems. Key words and phrases: queuing system, standard deviation, waiting time, stochastic process, heuristics, G/G/s, approximation methods, priority queues, queuing networks.
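The thesis's standard-deviation formulas are its own contribution and are not reproduced here, but the flavor of such two-moment approximations can be seen in the classical Allen-Cunneen estimate of the mean wait in G/G/s, which likewise needs only the rates, the squared coefficients of variation, and the number of servers:

```python
import math

def erlang_c(s, rho):
    """Probability an arrival waits in M/M/s (Erlang-C formula)."""
    a = s * rho  # offered load
    summation = sum(a**k / math.factorial(k) for k in range(s))
    tail = a**s / (math.factorial(s) * (1 - rho))
    return tail / (summation + tail)

def allen_cunneen_wq(lam, mu, s, ca2, cs2):
    """Two-moment (Allen-Cunneen) approximation of mean wait in G/G/s:
    the M/M/s wait scaled by the average of the squared CVs of the
    inter-arrival and service time distributions."""
    rho = lam / (s * mu)
    assert rho < 1, "queue must be stable"
    wq_mms = erlang_c(s, rho) / (s * mu - lam)
    return wq_mms * (ca2 + cs2) / 2

# Example: 2 servers at utilization 0.8, moderately variable arrivals/services.
wq = allen_cunneen_wq(lam=1.6, mu=1.0, s=2, ca2=0.8, cs2=0.5)
print("approx. mean wait:", round(wq, 3), " mean flow time:", round(wq + 1.0, 3))
```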
17

Experimental Investigation of Quasi-Newton Approaches to a Learning Problem in Electronic Negotiation

Meloche, Paul January 2007 (has links)
The recent growth in electronic commerce has motivated the development of semi-autonomous negotiation systems capable of conducting multiple negotiations simultaneously, and different approaches have recently been presented in the literature to serve this growing market segment. This thesis examines optimization approaches for learning the parameters of a time-dependent decision function that has recently attracted significant interest in the negotiation literature. Twelve nonlinear optimization variants are evaluated on 800 problems, and the resulting 9600 runs are statistically analyzed on four performance measures. Potential implications of our analysis for electronic negotiation are discussed.
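A sketch of the kind of learning problem examined, under stated assumptions: the time-dependent decision function is taken to be the widely used Faratin-style concession tactic (the thesis's exact function and data are not reproduced), the offers are synthetic, and one quasi-Newton variant (L-BFGS-B via SciPy) stands in for the twelve evaluated.

```python
import numpy as np
from scipy.optimize import minimize

def offer(t, beta, p_min=10.0, p_max=100.0, deadline=1.0):
    """Time-dependent concession tactic (Faratin-style): the agent moves
    from p_min toward p_max as its deadline approaches; beta sets the pace."""
    return p_min + (p_max - p_min) * (t / deadline) ** (1.0 / beta)

# Synthetic observed offers generated with a hidden beta (illustration only).
rng = np.random.default_rng(42)
t_obs = np.linspace(0.05, 1.0, 25)
y_obs = offer(t_obs, beta=0.6) + rng.normal(0, 1.0, t_obs.size)

def sse(params):
    """Least-squares learning objective over the concession parameter."""
    (beta,) = params
    return np.sum((offer(t_obs, beta) - y_obs) ** 2)

# Quasi-Newton fit; the bound keeps beta positive.
result = minimize(sse, x0=[1.0], method="L-BFGS-B", bounds=[(1e-3, 10.0)])
print("recovered beta:", round(result.x[0], 3))
```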
19

Innovation-Performance relationship: the moderating role of the Degree of Internationalization

Wahid, Fazli January 2010 (has links)
Moderator variables are typically introduced when there is an unexpectedly weak or inconsistent relationship between a predictor and a criterion variable (Baron and Kenny, 1986). Holak, Parry and Song (1991) and Zhang, Li, Hitt, and Cui (2007) found an inconsistent relationship between R&D spending (a measure of innovation) and firm performance, and concluded that this relationship should be studied under different contextual factors. One such factor is a firm's Degree of Internationalization (DOI). This paper therefore evaluates the innovation-performance link in the presence of DOI as a moderator, proposing that DOI moderates the innovation-performance relationship, and tests the hypothesis that DOI can affect either the form or the strength of that relationship. Only one previous study has evaluated the moderating effect of DOI on the innovation-performance relationship, and it did not investigate the influence on the form of the relationship. The findings of this study are based on time-series cross-sectional data on 102 large U.S. manufacturing firms from seven industries; data for each firm were obtained for eight years (2000-2007) from the Compustat database. Hypotheses were tested using the TSCSREG procedure with the Fuller-Battese method implemented in SAS, and the moderation effect was identified and differentiated into form and strength using the typology of Sharma, Durand and Gur-Arie (1981). The results show that DOI moderates the innovation-performance relationship positively and significantly; moreover, DOI affects the form of the relationship (it also enters directly) and is thus a quasi-moderator. In terms of theory, there are two implications. First, DOI is an important contingency factor when examining the innovation-performance relationship, and predicting that relationship without including DOI may lead to misleading conclusions. Second, when evaluating the relationship between R&D and firm performance, identifying whether DOI moderates the form or the strength of the relationship is needed in order to use a proper analytical technique. In terms of practice, the results sensitize managers to the need to focus not only on innovation activities but also on internationalization in order to appropriate the full benefits of their innovations.
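The moderation test itself reduces to a regression with a product term. Below is a minimal OLS sketch on synthetic data: plain cross-sectional OLS stands in for the panel Fuller-Battese estimator used in the thesis, and the variable names and effect sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
rnd = rng.normal(size=n)   # R&D intensity (innovation proxy)
doi = rng.normal(size=n)   # degree of internationalization
# Synthetic performance with a positive R&D x DOI interaction (illustration only).
perf = 0.2 * rnd + 0.1 * doi + 0.3 * rnd * doi + rng.normal(0, 1, n)

# Moderated regression: the interaction term carries the moderation of form.
X = np.column_stack([np.ones(n), rnd, doi, rnd * doi])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)

# t-statistic of the interaction coefficient (simple OLS standard errors).
resid = perf - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
print("interaction beta:", round(beta[3], 3), " t:", round(beta[3] / se[3], 2))
```

In the Sharma, Durand and Gur-Arie typology, a significant interaction together with a significant direct effect of the moderator indicates a quasi-moderator, which is the pattern the study reports for DOI.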
20

Numerosity and Cognitive Complexity of a Medium as Moderators of Medium Effect on Effort

Rahimi Nejad, Mona 27 September 2010 (has links)
As part of loyalty programs in marketing and incentive plans in companies, mediums have attracted considerable interest from marketing and organizational behavior researchers. Previous studies focused mainly on the effects of mediums on people's choices, not on the role of moderators of the medium effect. The goal of the present thesis is to study two such moderators, namely the numerosity and the cognitive complexity of a medium. After a thorough theoretical analysis, experimental data are analyzed to examine how the numerosity and cognitive complexity of a medium affect individuals' effort. Our findings suggest that the medium effect is stronger when a medium is more numerous, and that greater cognitive complexity makes a medium more effective.
