  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The Generalized DEA Model of Fundamental Analysis of Public Firms, with Application to Portfolio Selection

Zhang, Xin 01 December 2007 (has links)
Fundamental analysis is an approach for evaluating a public firm for its investment worthiness by looking at its business at the basic or fundamental financial level. The focus of this thesis is on utilizing financial statement data and a new generalization of the Data Envelopment Analysis, termed the GDEA model, to determine a relative financial strength (RFS) indicator that represents the underlying business strength of a firm. This approach is based on maximizing a correlation metric between the GDEA-based score of financial strength and stock price performance. The correlation maximization problem is a difficult binary nonlinear optimization problem that requires iterative re-configuration of parameters of financial statements as inputs and outputs. A two-step heuristic algorithm that combines random sampling and local search optimization is developed. Theoretical optimality conditions are also derived for checking solutions of the GDEA model. Statistical tests are developed for validating the utility of the RFS indicator for portfolio selection, and the approach is computationally tested and compared with competing approaches. The GDEA model is also further extended by incorporating expert information on input/output selection. In addition to deriving theoretical properties of the model, a new methodology is developed for testing whether such exogenous expert knowledge can be significant in obtaining stronger RFS indicators. Finally, the RFS approach under expert information is applied in a case study, involving more than 800 firms covering all sectors of the U.S. stock market, to determine optimized RFS indicators for stock selection. Those selected stocks are then used within portfolio optimization models to demonstrate the superiority of the techniques developed in this thesis.
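The two-step heuristic described above (random sampling followed by local search over input/output configurations) can be sketched in outline. The code below is an illustrative simplification, not the thesis's GDEA formulation: a configuration is a binary vector assigning each financial metric as an input or an output, a firm's score is a plain output/input ratio rather than a DEA efficiency score, and the objective is the Pearson correlation between scores and stock returns. All function names (`two_step_search`, `objective`, `score`) are hypothetical.

```python
import random

def pearson(xs, ys):
    # Sample Pearson correlation between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def score(firm, config):
    # config[j] = 1 -> metric j treated as an output (benefit), 0 -> input (cost)
    out = sum(m for m, c in zip(firm, config) if c)
    inp = sum(m for m, c in zip(firm, config) if not c)
    return out / inp if inp else 0.0

def objective(firms, returns, config):
    # Correlation between configuration-induced scores and stock returns
    return pearson([score(f, config) for f in firms], returns)

def two_step_search(firms, returns, n_metrics, restarts=20, seed=0):
    rng = random.Random(seed)
    best_cfg, best_val = None, float("-inf")
    for _ in range(restarts):
        # Step 1: sample a random configuration (skip degenerate all-in/all-out)
        cfg = [rng.randint(0, 1) for _ in range(n_metrics)]
        if not any(cfg) or all(cfg):
            continue
        val = objective(firms, returns, cfg)
        improved = True
        # Step 2: one-bit-flip local search until no flip improves correlation
        while improved:
            improved = False
            for j in range(n_metrics):
                cand = cfg[:]
                cand[j] ^= 1
                if not any(cand) or all(cand):
                    continue
                v = objective(firms, returns, cand)
                if v > val:
                    cfg, val, improved = cand, v, True
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val
```

The random restarts guard against the local search stalling in a poor basin, which is the practical motivation for the two-step structure.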
2

Data Mining with Multivariate Kernel Regression Using Information Complexity and the Genetic Algorithm

Beal, Dennis Jack 01 December 2009 (has links)
Kernel density estimation is a data smoothing technique that depends heavily on the bandwidth selection. The current literature has focused on optimal selectors for the univariate case that are primarily data driven. Plug-in and cross-validation selectors have recently been extended to the general multivariate case. This dissertation will introduce and develop novel techniques for data mining with multivariate kernel regression using information complexity and the genetic algorithm as a heuristic optimizer to choose the optimal bandwidth and the best predictors in kernel regression models. Simulated and real data will be used to cross-validate the optimal bandwidth selectors using information complexity. The genetic algorithm is used in conjunction with information complexity to determine kernel density estimates for variable selection from high-dimensional multivariate data sets. Kernel regression is also hybridized with the implicit enumeration algorithm to determine the set of independent variables for the global optimal solution using information criteria as the objective function. The results from the genetic algorithm are compared to the optimal solution from the implicit enumeration algorithm and the known global optimal solution from an explicit enumeration of all possible subset models.
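As a minimal sketch of the kernel-regression setting, the code below implements univariate Nadaraya-Watson regression with a Gaussian kernel and selects the bandwidth by leave-one-out cross-validation over a grid. This is a stand-in for the dissertation's machinery, not a reproduction of it: it uses squared-error cross-validation rather than information complexity, and a grid search rather than the genetic algorithm.

```python
import math

def gauss(u):
    # Standard Gaussian kernel
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def nw_predict(x0, xs, ys, h):
    # Nadaraya-Watson kernel regression estimate at x0 with bandwidth h
    w = [gauss((x0 - x) / h) for x in xs]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / s if s else 0.0

def loo_cv_bandwidth(xs, ys, grid):
    # Leave-one-out CV over a bandwidth grid: a simple stand-in
    # for the GA/information-complexity selector in the dissertation
    best_h, best_err = None, float("inf")
    for h in grid:
        err = 0.0
        for i in range(len(xs)):
            xs_i = xs[:i] + xs[i + 1:]
            ys_i = ys[:i] + ys[i + 1:]
            err += (ys[i] - nw_predict(xs[i], xs_i, ys_i, h)) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```

Too small a bandwidth chases noise; too large a bandwidth oversmooths — the cross-validated error trades these off, which is exactly the role the dissertation assigns to information complexity.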
3

Algorithms for Multi-Sample Cluster Analysis

Almutairi, Fahad 01 August 2007 (has links)
In this study, we develop algorithms to solve the Multi-Sample Cluster Analysis (MSCA) problem. This problem arises when we have multiple samples and we need to find the statistical model that best fits the cluster structure of these samples. One important area among others in which our algorithms can be used is international market segmentation. In this area, samples about customers’ preferences and characteristics are collected from different regions in the market. The goal in this case is to join the regions with similar customers’ characteristics in clusters (segments). We develop branch and bound algorithms and a genetic algorithm. In these algorithms, any of the available information criteria (AIC, CAIC, SBC, and ICOMP) can be used as the objective function to be optimized. Our algorithms use the Clique Partitioning Problem (CPP) formulation. They are the first algorithms to use information criteria with the CPP formulation. When the branch and bound algorithms are allowed to run to completion, they converge to the optimal MSCA alternative. These methods also proved to find good solutions when they were stopped short of convergence. In particular, we develop a branching strategy which uses a "look-ahead" technique. We refer to this strategy as the complete adaptive branching strategy. This strategy makes the branch and bound algorithm quickly search for the optimal solution in multiple branches of the enumeration tree before using a depth-first branching strategy. In computational tests, this method’s performance was superior to other branching methods as well as to the genetic algorithm.
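The role of an information criterion as the clustering objective can be illustrated with a toy version: score each candidate grouping of samples under a univariate Gaussian model per cluster and pick the grouping with the lowest AIC or SBC. This sketch enumerates given partitions directly rather than using the CPP formulation or branch and bound, and all function names are illustrative.

```python
import math

def gaussian_loglik(data):
    # Maximized log-likelihood of a univariate Gaussian fit to `data`
    # (MLE variance; floored to avoid log(0) for degenerate clusters)
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    if var <= 0:
        var = 1e-12
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def aic(partition):
    # partition: list of clusters, each a list of pooled observations
    ll = sum(gaussian_loglik(c) for c in partition)
    k = 2 * len(partition)            # mean + variance per cluster
    return -2 * ll + 2 * k

def sbc(partition):
    ll = sum(gaussian_loglik(c) for c in partition)
    n = sum(len(c) for c in partition)
    k = 2 * len(partition)
    return -2 * ll + k * math.log(n)

def best_partition(partitions, criterion=aic):
    # Pick the grouping of samples that minimizes the chosen criterion
    return min(partitions, key=criterion)
```

The penalty terms (2k for AIC, k·log n for SBC) are what stop the criterion from always favoring the finest partition — the same balancing act the branch and bound algorithms exploit at scale.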
4

Approximation Methods for the Standard Deviation of Flow Times in the G/G/s Queue

Zhao, Xiaofeng 01 August 2007 (has links)
We provide approximation methods for the standard deviation of flow time in system for a general multi-server queue with infinite waiting capacity (G/G/s). The approximations require only the mean and standard deviation or the coefficient of variation of the inter-arrival and service time distributions, and the number of servers. These approximations are simple enough to be implemented in manual or spreadsheet calculations, but in comparisons to Monte Carlo simulations have proven to give good approximations (within ±10%) for cases in which the coefficients of variation for the inter-arrival and service times are between 0 and 1. The approximations also have the desirable property of being exact for the specific case of the Markov queue model M/M/s, as well as some embedded Markov queuing models (Ek/M/1 and M/Eα/1). The practical significance of this research is that (1) many real-world queuing problems involve G/G/s queuing systems, and (2) predicting the range of variation of the time in the system (rather than just the average) is needed for decision making. For example, one job shop facility with which the authors have worked guarantees its customers a nine-day turnaround time and must determine the minimum number of machines of each type required to achieve nine days as a "worst case" time in the system. In many systems, the "worst case" value of flow time is very relevant because it represents the lead time that can safely be promised to customers. To estimate this we need both the average and standard deviation of the time in system. The usefulness of our results stems from the fact that they are computationally simple and thus provide quick approximations without resorting to complex numerical techniques or Monte Carlo simulations. While many accurate approximations for the G/G/s queue have been proposed previously, they often result in algebraically intractable expressions. 
This hinders attempts to derive closed-form solutions for the decision variables incorporated in optimization models, and inevitably leads to the use of complex numerical methods. Furthermore, actual application of many of these approximations often requires specification of the actual distributions of the inter-arrival time and the service time. Also, these results have tended to focus on delay probabilities and average waiting time, and do not provide a means of estimating the standard deviation of the time in the system. We also extend the approximations to computing the standard deviation of flow times of each priority class in G/G/s priority queues and compare the results to those obtained via Monte Carlo simulations. These simulation experiments reveal good approximations for all priority classes, with the exception of the lowest priority class in queuing systems with high utilization. In addition, we use the approximations to estimate the average and the standard deviation of the total flow time through queuing networks and have validated these results via Monte Carlo simulations. The primary theoretical contribution of this work is the derivation of an original expression for the coefficient of variation of waiting time in the G/G/s queue, which holds exactly for G/M/s and M/G/1 queues. We also perform error sensitivity analysis of the formula and develop interpolation models to calculate the probability of waiting, since we need to estimate the probability of waiting for the G/G/s queue to calculate the coefficient of variation of waiting time. Technically, we develop a general queuing system performance predictor, which can be used to estimate many kinds of performance measures for any steady-state, infinite-capacity queue. We intend to make available a user-friendly predictor implementing our approximation methods. The advantage of these models is that they make no assumptions about the distribution of inter-arrival time and service time. 
Our techniques generalize the previously developed approximations and can also be used in queuing networks and priority queues. We hope our approximation methods will be beneficial to practitioners who want simple and quick practical answers for their multi-server queuing systems. Key words and phrases: Queuing System, Standard Deviation, Waiting Time, Stochastic Process, Heuristics, G/G/s, Approximation Methods, Priority Queue, Queuing Networks.
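A concrete example of a two-moment G/G/s approximation of this family is the classical Allen-Cunneen formula for the mean waiting time, which scales the exact M/M/s result (via the Erlang C delay probability) by (Ca² + Cs²)/2. This is a standard textbook approximation shown for illustration only; it is not the thesis's own expression for the standard deviation of flow time.

```python
import math

def erlang_c(lam, mu, s):
    # Probability that an arriving customer must wait in an M/M/s queue
    a = lam / mu                      # offered load in Erlangs
    rho = a / s                       # server utilization
    assert rho < 1, "system must be stable"
    partial = sum(a ** k / math.factorial(k) for k in range(s))
    top = a ** s / (math.factorial(s) * (1 - rho))
    return top / (partial + top)

def wq_ggs_approx(lam, mu, s, ca2, cs2):
    # Allen-Cunneen two-moment approximation for mean wait in G/G/s:
    #   Wq ~ ((Ca^2 + Cs^2) / 2) * Wq(M/M/s)
    # where Ca^2, Cs^2 are squared coefficients of variation of the
    # inter-arrival and service time distributions.
    wq_mms = erlang_c(lam, mu, s) / (s * mu - lam)
    return (ca2 + cs2) / 2.0 * wq_mms
```

Note the exactness property the abstract mentions: with Ca² = Cs² = 1 the correction factor is 1 and the formula reduces to the exact M/M/s result.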
5

Myopic Policies for Inventory Control

Çetinkaya, Sila 06 1900 (has links)
<p>In this thesis we study a typical retailer's problem characterized by a single item, periodic review of inventory levels in a multi-period setting, and stochastic demands. We consider the case of full backlogging where backorders are penalized via fixed and proportional backorder costs simultaneously. This treatment of backorder costs is a nonstandard aspect of our study. The discussion begins with an introduction in Chapter 1. Next, a review of the relevant literature is provided in Chapter 2. In Chapter 3 we study the infinite horizon case which is of both theoretical and practical interest. From a theoretical point of view the infinite horizon solution represents the limiting behavior of the finite horizon case. Solving the infinite horizon problem also has its own practical benefits since its solution is easier to compute. Our motivation to study the infinite horizon case in the first place is pragmatic. We prove that a myopic base-stock policy is optimal for the infinite horizon case, and this result provides a basis for our study. We show that the optimal myopic policy can be computed easily for the Erlang demand in Chapter 4; solve a disposal problem which arises under the myopic policy in Chapter 5, and also study in Chapters 6 and 7 the finite horizon problem for which a myopic policy is not optimal. For the finite horizon problem computation of the exact policy may require a substantial effort. From a computational point of view, there is a need for developing a method that overcomes this burden. In Chapter 6 we develop a model for such a method by restricting our attention to the class of myopic base-stock policies, and call the resulting policy the 'best myopic' policy. We discuss analytical and numerical results for the computation of the best myopic policy in Chapter 7. Finally we present a summary of our main findings in Chapter 8.</p> / Doctor of Philosophy (PhD)
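For the standard proportional-cost case, the myopic base-stock level is the classical newsvendor critical fractile: the smallest S with F(S) ≥ p/(p+h), where p is the unit backorder cost and h the unit holding cost. The sketch below computes it for a discrete demand distribution. Note that the thesis treats the nonstandard case with an additional fixed backorder cost, which this textbook sketch omits; the function name is illustrative.

```python
def myopic_base_stock(demand_pmf, h, p):
    # Critical-fractile base-stock level: smallest S with F(S) >= p/(p+h).
    # demand_pmf: {demand_value: probability}; h: unit holding cost;
    # p: unit (proportional) backorder cost. The fixed backorder cost
    # treated in the thesis is not modeled here.
    target = p / (p + h)
    cum = 0.0
    for d in sorted(demand_pmf):
        cum += demand_pmf[d]
        if cum >= target:
            return d
    return max(demand_pmf)
```

Raising the backorder-to-holding cost ratio pushes the fractile toward 1 and therefore the base-stock level toward the upper tail of demand, which matches the usual intuition.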
6

Stochastic optimization models for service and manufacturing industry

Denton, Brian T. 05 1900 (has links)
<p>We explore two novel applications of stochastic optimization inspired by real-world problems. The first application involves the optimization of appointments-based service systems. The problem here is to determine an optimal schedule of start times for jobs that have random durations, and a range of potential cost structures based on common performance metrics such as customer waiting and server idling. We show that the problem can be formulated as a two-stage stochastic linear program and develop an algorithm that utilizes the problem structure to obtain a near-optimal solution. Various aspects of the problem are considered, including the effects of job sequence, dependence on cost parameters, and job duration distributions. A range of numerical experiments is provided and some resulting insights are summarized. Some simple heuristics are proposed, based on relaxations of the problem, and evidence of their effectiveness is provided. The second application relates to inventory deployment at an integrated steel manufacturer (ISM). The models presented in this case were developed for making inventory design-choice (what to carry) and lot-size (how much to carry) decisions. They were developed by working with managers from several different functional areas at a particular ISM. They are, however, applicable to other ISMs and to other continuous-process industries with similar architectures. We discuss details of the practical implementation of the models, the structure of the problems, and algorithms and heuristics for solving them. Numerical experiments illustrate the accuracy of the heuristics, and examples based on empirical data from an ISM show the advantages of using such models in practice and suggest some managerial insights.</p> / Doctor of Philosophy (PhD)
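The second-stage cost that the appointment-scheduling model averages over scenarios (weighted customer waiting plus server idling for a given vector of start times) can be estimated by simple Monte Carlo simulation for a single server. This evaluation sketch is not the two-stage solution algorithm itself; the function names and cost weights are illustrative.

```python
import random

def schedule_cost(starts, duration_sampler, cw=1.0, ci=1.0, reps=2000, seed=0):
    # Monte Carlo estimate of expected waiting + idling cost for a
    # single-server appointment schedule. starts: sorted start times;
    # duration_sampler(rng): draws one random job duration;
    # cw, ci: unit waiting and idling costs.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        free = 0.0                         # time the server becomes free
        cost = 0.0
        for s in starts:
            if free > s:
                cost += cw * (free - s)    # job waits for the server
            else:
                cost += ci * (s - free)    # server idles until the job arrives
            begin = max(free, s)
            free = begin + duration_sampler(rng)
        total += cost
    return total / reps
```

An optimizer over `starts` sitting on top of this evaluator would be the brute-force analogue of the two-stage stochastic linear program; the point of the LP formulation in the thesis is precisely to avoid that brute force.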
7

Bi-Axial Testing of Zinc and Zinc Alloy Sheets under Superimposed Hydrostatic Pressures

Sandhu, Harjeet S. 07 1900 (has links)
<p>The effects of pressurization on the properties of metals have long been of interest to scientists. Bridgman found that in general, the ductility (ability of the metal to deform without fracture) increased with superimposed hydrostatic pressure. Pugh et al. confirmed similar findings.</p> <p>The effects of hydrostatic pressure on the mechanical properties of thin anisotropic zinc, heat-treated and non-heat-treated zinc alloy sheets subjected to biaxial tension (via the circular bulge test) are investigated in this project.</p> <p>A brief look is taken into the generalized conditions for the onset of tensile plastic instability in a thin circular diaphragm bulged under superimposed hydrostatic pressure. The material is assumed to obey Hill's theory of yielding for anisotropic materials. These predictions are verified by conducting bulge tests using back pressures up to 10,000 psi. It is concluded that within the pressure range of investigation there are no detectable changes in the properties of the materials tested.</p> <p>In the appendix section a brief look is taken into the microstructure of the materials tested.</p> / Master of Engineering (ME)
8

Development of Harmonic Excitation technique for Machine Tool stability analysis

Lau, King-Chun Michael 08 1900 (has links)
<p>The project described in this thesis was to establish the instrumentation and technique for analysing stability of machine-tools against chatter by harmonic excitation. To test out the technique, two sets of experiments were performed on centre lathes:</p> <p>1) comparison of cutting stability with four different types of boring bars, and</p> <p>2) comparison of cutting stability of a tool oriented in seven orientations in a single plane perpendicular to the spindle axis.</p> <p>The electro-dynamic exciter was used in 1) while an electro-magnetic exciter was used in 2). Data from the excitation tests were used to compute and plot the cross-receptances, which indicate the limit width of cut together with the chatter frequency, and the modal shapes, which identify the main masses and springs of the structure. The contribution of the individual modes to the resulting degree of stability can also be obtained. Cutting tests were conducted to provide some means of checking the reliability of the excitation test results. Also included in this report are the theory of vibration, the theory of chatter, and the specification of various parts of the instrumentation.</p> / Master of Engineering (ME)
9

Agent based buddy finding methodology for knowledge sharing

Li, Xiaoqing 07 1900 (has links)
<p>The Internet provides opportunity for knowledge sharing among people with similar interests (i.e., buddies). Common methods available for people to identify buddies for knowledge sharing include emails, mailing lists, chat rooms, electronic bulletin boards, and newsgroups. However, these manual buddy finding methods are time consuming and inefficient. In this thesis, we propose an agent-based buddy finding methodology based on a combination of case-based reasoning methodology and fuzzy logic technique. We performed two experiments to assess the effectiveness of our proposed methodology. The first experiment was comprised of a stock market portfolio knowledge sharing environment in which a conventional cluster analysis was used as a benchmark to assess the technical goodness of the proposed methodology in identifying the clusters of buddies. Statistical analysis showed that there was no significant ranking difference between conventional cluster analysis and the proposed buddy-finding methodology in identifying buddies. Cluster analysis requires a centralized database to form buddies (clusters) with similar properties. The unique advantage of our proposed agent-based buddy finding methodology is that it can identify similar buddies in distributed as well as centralized database environments. A second experiment, in the context of sharing musical knowledge among human subjects, was used to find out whether selection of the buddies by the proposed methodology is as good as that done by human subjects. The findings from this latter empirical test showed that the buddies found by agents are as good as the buddies found manually by humans.</p> / Doctor of Philosophy (PhD)
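A toy version of the similarity step can illustrate the idea: represent each user's interests as a profile of attributes scaled to [0, 1], score candidates with a simple averaged fuzzy match, and keep those above a threshold. The measure and names below are illustrative assumptions, not the thesis's exact case-based reasoning and fuzzy logic formulation.

```python
def fuzzy_similarity(a, b):
    # Average per-attribute fuzzy match; attributes assumed scaled to [0, 1],
    # so 1 - |x - y| is a simple membership degree for "x matches y"
    sims = [1.0 - abs(x - y) for x, y in zip(a, b)]
    return sum(sims) / len(sims)

def find_buddies(me, candidates, threshold=0.8):
    # Rank candidates by similarity to `me`; keep those above the threshold.
    # candidates: {name: interest_profile}
    scored = sorted(((fuzzy_similarity(me, prof), name)
                     for name, prof in candidates.items()), reverse=True)
    return [name for sim, name in scored if sim >= threshold]
```

Because each agent only needs its own profile and a candidate's, this kind of pairwise scoring works over distributed data, which is the advantage claimed over centralized cluster analysis.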
10

Evolutionary theory: a 'good' explanatory framework for research into technological innovation

Myers, Stephen Keir Unknown Date (has links)
This study attempts to answer the question: does evolutionary theory provide a ‘good’ explanatory framework for examining the phenomenon of technological innovation? In doing so, the study critically examines mainstream marketing’s in-place explanatory frameworks, offers an explanation of ‘evolution’ based on new insights from a broad range of (sub)disciplines, and makes the practical proposition that this explanation is a ‘good’ analogous representation of the technological innovation process. As an alternative to the ‘scientific empiricist’ approach that dominates much of marketing’s research into technological innovation, the study develops a research methodology that is based within a postmodern philosophy, adopts an epistemology of transcendental realism, bases the research design on abduction and textual explanation, and brings together a research method based on the criteria of interestingness, plausibility and acceptability. Familiarisation with mainstream marketing’s explanatory frameworks for research into technological innovation identified Diffusion theory, New Product Development theory and Network theory as dominant. It is concluded that these frameworks are based on problematic theoretical foundations, a situation considered largely due to a pre-occupation with assumptions that are atomistic, reductionistic, deterministic, gradualistic and mechanistic in nature. It is argued that, in concert with the ‘socialised’ dominance of mathematical form over conceptual substance, mainstream marketing’s research into technological innovation is locked into a narrow range of ‘preordained axiomatics’. The explanation of ‘evolution’ offered within the study is based on a why, what, how, when and where format. 
The resultant explanation represents a significant departure from the (neo)Darwinian biological perspective that tends to dominate evolutionary explanations of socio-economic behaviour, in that the focus is on the principles (and associated conceptualizations) of ‘variation’, ‘selection’ and ‘preservation’ within the broader context of open and dissipative systems. The offered explanation presents a number of theoretical ideas, and in particular, that evolution can only occur in a ‘multidimensional space of possibilities’, denotes a process of ‘adaptive emergence’, and is essentially concerned with ‘on-going resilience through adaptability’. The practical proposition is made that the offered explanation of ‘evolution’ can be used in an ‘as if’ manner, that is, the principles of ‘variation’, ‘selection’, and ‘preservation’ (and the meanings ascribed to them through conceptualization) are analogically transferable to the technological innovation research area. The proposition is supported through reference to theoretical and empirical research, highlighting the similarity with respect to the generative mechanisms, structures and contingent conditions underpinning both ‘evolution’ and ‘technological innovation’.
