71

A Material Flow Evaluation at Scania Production Slupsk S.P.S

Gustafsson, Daniel, Johansson, Mikael January 2007 (has links)
<p>This master’s thesis was performed at the Department of Management and Engineering, Linköping University, for Scania Omni at Scania Production Slupsk (S.P.S). Omni is responsible for the development, manufacturing and marketing of city, suburban and intercity buses. After the acquisition of the production unit in Slupsk in 2002, a lower production cost per bus became possible. Without control over the organisation, however, costs are rising due to late-delivery fees and high stock levels. At the outset, the thesis included three clearly defined objectives:</p><p>- Map the present situation at Scania Production Slupsk regarding material flow from supplier to assembly line, including a part and storage analysis.</p><p>- Benchmark the current routines at Scania Production Slupsk against other successful companies. Furthermore, conduct literature research in order to find theories and philosophies that support the problem analysis and thesis solution.</p><p>- Develop standard routines for material control methods (MCM) and material supply methods (MSM).</p><p>A complementary objective was to act as a catalyst during the course of the thesis.</p><p>The mapping of the present situation showed that MCM and MSM are very tightly connected to each other, and it was questioned whether this structure was the best way to manage the material flow. After a part and storage analysis, material was divided into different segments depending on price, consumption and movement.</p><p>The benchmarking studies showed different ways to manage the material flow. Implementation of unit loads, kanban and a clearly defined interface between departments showed potential to improve material handling and increase effectiveness.</p><p>New routines and part segment definitions, described in a logistics manual (Appendix I), were drawn up alongside a comparison between the previous and the recommended definitions.</p><p>The results showed that some parts need to be controlled differently.
The primary recommendation is that the logistics manual be used when new parts are introduced into the Scala system. Responsible personnel are supposed to give suggestions concerning MCM and MSM decisions, and with the help of the logistics manual this work can be made more efficient, resulting in a material flow that is flexible and has potential for improvement.</p><p>Secondly, to reduce material handling, implementation of a two-bin system is recommended. An additional recommendation regarding the two-bin system is to handle material in unit loads, which enables FIFO, traceability and a higher turnover rate.</p>
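The recommended two-bin replenishment logic can be sketched as follows; the class and bin sizes are illustrative, not taken from the thesis.

```python
class TwoBinStation:
    """Minimal two-bin kanban sketch (illustrative, not the thesis' design).

    Parts are picked from an active bin; when it empties, picking switches
    to the reserve bin and a replenishment signal for one full bin is sent,
    so each bin circulates as a unit load (supporting FIFO and traceability).
    """

    def __init__(self, bin_size):
        self.bin_size = bin_size
        self.active = bin_size    # units left in the bin being picked from
        self.reserve = bin_size   # untouched second bin
        self.orders = []          # replenishment signals sent upstream

    def pick(self, qty=1):
        if qty > self.active + self.reserve:
            raise RuntimeError("stockout")
        self.active -= qty
        if self.active <= 0:
            # active bin empty: switch to the reserve bin, order a refill
            self.active += self.reserve
            self.reserve = 0
            self.orders.append(self.bin_size)

    def receive(self):
        # a full unit load arrives and becomes the new reserve bin
        self.reserve = self.bin_size
```

Picking ten parts from a ten-part bin triggers exactly one replenishment order, so the supplier signal is tied to whole unit loads rather than individual parts.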
72

Optimizing a military supply chain in the presence of random, non-stationary demands /

Ng, Yew Soon. January 2003 (has links) (PDF)
Thesis (M.S. in Operations Research)--Naval Postgraduate School, December 2003. / Thesis advisor(s): Moshe Kress, Robert Dell. Includes bibliographical references (p. 45-47). Also available online.
73

Important factors in predicting detection probabilities for radiation portal monitors

Tong, Fei, 1986- 12 November 2010 (has links)
This report analyzes the impact of some important factors on the prediction of detection probabilities for radiation portal monitors (RPMs). The application of innovative detection technology to improve operational sensitivity of RPMs has received increasing attention in recent decades. In particular, two alarm algorithms, gross count and energy windowing, have been developed to try to distinguish between special nuclear material (SNM) and naturally occurring radioactive material (NORM). However, the use of the two detection strategies is quite limited due to a very large number of unpredictable threat scenarios. We address this problem by implementing a new Monte Carlo radiation transport simulation approach to model a large set of threat scenarios with predefined conditions. In this report, our attention is focused on the effect of two important factors on the detected energy spectra in RPMs, the mass of individual nuclear isotopes and the thickness of shielding materials. To study the relationship between these factors and the resulting spectra, we apply several advanced statistical regression models for different types of data, including a multinomial logit model, an ordinal logit model, and a curvilinear regression model. By utilizing our new simulation technique together with these sophisticated regression models, we achieve a better understanding of the system response under various conditions. We find that the different masses of the isotopes change the isotopes’ effect on the energy spectra. In analyzing the joint impact of isotopes’ mass and shielding thickness, we obtain a nonlinear relation between the two factors and the gross count of gamma photons in the energy spectrum. / text
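A gross-count alarm of the kind mentioned compares the summed detected counts against a threshold derived from the expected background. This is a minimal sketch, assuming a Poisson-distributed background and an illustrative k-sigma threshold; neither assumption is taken from the report.

```python
import math

def gross_count_alarm(counts, background_mean, k=4.0):
    """Gross-count alarm sketch: flag the measurement if the summed counts
    exceed the expected background by k standard deviations.

    Assumes Poisson-distributed background counts (variance = mean);
    the k-sigma threshold is illustrative, not taken from the report.
    """
    threshold = background_mean + k * math.sqrt(background_mean)
    return sum(counts) > threshold
```

Energy windowing refines this idea by applying such a test per energy region rather than to the total spectrum.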
74

A study of courteous behavior on the University of Texas campus

Lu, Zhou, 1978- 22 February 2011 (has links)
This study focused on measuring courteous behavior among University of Texas at Austin (UT) students on campus. The behavior was measured by analyzing various factors involved when one person opened a door for another. The goal was to determine which factors significantly affect the probability that a person will hold a door open for another. Three UT buildings with no automatic doors were selected (RLM, FAC and GRE), and 200 pairs of students at each location were observed to see whether they would open doors for others. The subjects were not disturbed during the data collection process. For each observation, the door-holding conditions, genders, position (whether the person was the one who opened the door or the recipient of this courteous gesture, abbreviated as recipient), distance between the person opening the door and the recipient, and the number of recipients were recorded. Descriptive statistics and logistic regression were used to analyze the data. The results showed that the probability of people opening doors for others was significantly affected by gender, position, distance between the door opener and the recipient, the number of recipients, and the interaction term between gender and position. The study revealed that men had a slightly higher propensity to open doors for recipients: the odds for men were a multiplicative factor of 1.09 of those for women on average, holding all other factors constant. However, women had a much higher probability of having doors held open for them: the odds for men were a multiplicative factor of 0.55 of those for women on average, holding all other factors constant. In terms of the distance between the person opening the door and the recipient, for each meter increase in distance, the odds that the door would be held open decreased by a multiplicative factor of 0.40 on average.
Additionally, for each additional recipient, the odds that the door would be held open increased by a multiplicative factor of 1.32 on average. / text
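The reported odds multipliers translate to probability changes via the odds transform; the helper below and the baseline probability are illustrative, not from the study.

```python
def adjust_probability(p, odds_factor):
    """Multiply the odds p/(1-p) of an event by odds_factor and
    return the resulting probability."""
    odds = p / (1.0 - p) * odds_factor
    return odds / (1.0 + odds)

# Reported multipliers: each extra meter of distance scales the odds of
# door-holding by 0.40; each extra recipient scales them by 1.32.
baseline = 0.5  # hypothetical baseline probability of a door being held
one_meter_farther = adjust_probability(baseline, 0.40)   # ~0.286
one_more_recipient = adjust_probability(baseline, 1.32)  # ~0.569
```

This makes the asymmetry of odds ratios concrete: a factor of 0.40 on even odds drops the probability from 50% to roughly 29%, not to 20%.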
75

Analysis of Longitudinal Data in the Case-Control Studies via Empirical Likelihood

Jian, Wen 09 June 2006 (has links)
Case-control studies are primary tools for the study of risk factors (exposures) related to a disease of interest. Case-control studies using longitudinal data are cost- and time-efficient when the disease is rare and assessing the exposure level of risk factors is difficult. As an alternative to the GEE method, Park and Kim (2004) proposed using a prospective logistic model for analyzing case-control longitudinal data and explored a semiparametric inference procedure. In this thesis, we apply an empirical likelihood ratio method to derive the limiting distribution of the empirical likelihood ratio and construct a likelihood-ratio-based confidence region for the unknown regression parameters. Our approach does not require estimating the covariance matrices of the parameters. Moreover, the proposed confidence region adapts to the data set and is not necessarily symmetric; it thus reflects the nature of the underlying data and gives a more representative way to make inferences about the parameters of interest. We compare the empirical likelihood method with the normal-approximation-based method; simulation results show that the proposed empirical likelihood ratio method performs well in terms of coverage probability.
76

Logistikos paslaugų vystymosi ir tobulinimo perspektyvos Lietuvoje / The prospects of development and improvement of Logistic Services in Lithuania

Bernotas, Arūnas 06 June 2005 (has links)
In this work, an analysis of the scientific literature on logistic services was carried out, the main characteristics of the theory of logistic services were assessed, and the possibilities for developing logistic services in Lithuania were examined. The Master's thesis proposes a conception of logistics, identifies the types of logistic activity, and formulates the trends and problems in the development of logistic services. The most significant findings of the study are the proposals for how to improve the development of logistic services in Lithuania. The thesis also contains a study of the possibilities for small and medium-sized Lithuanian logistics enterprises to obtain financial support from the EU. The material supports the author's hypothesis that logistics in Lithuania has good prospects; today the logistic services business in Lithuania is still at an early stage of development. Overall, the thesis serves the analysis, development and improvement of logistic services in Lithuania.
77

Evaluation of logistic regression and random forest classification based on prediction accuracy and metadata analysis

Wålinder, Andreas January 2014 (has links)
Model selection is an important part of classification. In this thesis we study two classification models, logistic regression and random forest. They are compared and evaluated based on prediction accuracy and metadata analysis. The models were trained on 25 diverse datasets. We calculated the prediction accuracy of both models using RapidMiner, and collected metadata for the datasets concerning the number of observations, the number of predictor variables and the number of classes in the response variable. The performances of logistic regression and random forest are correlated, with a significant correlation of 0.60 and a confidence interval of [0.29, 0.79]. The models appear to perform similarly across the datasets, with performance influenced more by the choice of dataset than by model selection. Random forest, with an average prediction accuracy of 81.66%, performed better on these datasets than logistic regression, with an average prediction accuracy of 73.07%. The difference is, however, not statistically significant, with a p-value of 0.088 for Student's t-test. Multiple linear regression analysis reveals that none of the analysed metadata has a significant linear relationship with logistic regression performance; the regression of logistic regression performance on metadata has a p-value of 0.66. We get similar results for random forest: the regression of random forest performance on metadata has a p-value of 0.89, and none of the analysed metadata has a significant linear relationship with random forest performance. We conclude that the prediction accuracies of logistic regression and random forest are correlated. Random forest performed slightly better on the studied datasets, but the difference is not statistically significant. The studied metadata do not appear to have a significant effect on the prediction accuracy of either model.
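The paired Student's t-test used above to compare the two models' mean accuracies can be sketched as follows; the function is a generic implementation, and the accuracy vectors in the usage note are hypothetical, not the thesis' 25-dataset results.

```python
import math

def paired_t_statistic(acc_a, acc_b):
    """Paired Student's t statistic for two accuracy vectors measured on
    the same datasets, as in a paired model comparison.

    t = mean(d) / sqrt(var(d) / n), where d holds the per-dataset
    accuracy differences and var is the unbiased sample variance.
    """
    diffs = [a - b for a, b in zip(acc_a, acc_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)
```

With hypothetical accuracies [0.8, 0.9, 0.7] and [0.7, 0.85, 0.65] the statistic is 4.0; the p-value then follows from the t distribution with n - 1 degrees of freedom (e.g. via scipy.stats).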
78

Practical aspects of kernel smoothing for binary regression and density estimation

Signorini, David F. January 1998 (has links)
This thesis explores the practical use of kernel smoothing in three areas: binary regression, density estimation and Poisson regression sample size calculations. Both nonparametric and semiparametric binary regression estimators are examined in detail and extended to two-bandwidth cases. The asymptotic behaviour of these estimators is presented in a unified way, and their practical performance is assessed using a simulation experiment. It is shown that, when using the ideal bandwidth, the two-bandwidth estimators often lead to dramatically improved estimation. These benefits are not reproduced, however, when two general bandwidth selection procedures described briefly in the literature are applied to the estimators in question. Only in certain circumstances does the two-bandwidth estimator prove superior to the one-bandwidth semiparametric estimator, and a simple rule of thumb based on robust scale estimation is suggested. The second part summarises and compares many different approaches to improving upon the standard kernel method for density estimation. These estimators all have asymptotically 'better' behaviour than the standard estimator, but a small-sample simulation experiment is used to examine which, if any, can give important practical benefits. Very simple bandwidth selection rules which rely on robust estimates of scale are then constructed for the most promising estimators. It is shown that a particular multiplicative bias-correcting estimator is in many cases superior to the standard estimator, both asymptotically and in practice with a data-dependent bandwidth. The final part shows how the sample size or power for Poisson regression can be calculated using knowledge about the distribution of covariates. This knowledge is encapsulated in the moment generating function, and it is demonstrated that, in most circumstances, the use of the empirical moment generating function and related functions is superior to kernel-smoothed estimates.
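The standard kernel density estimator, paired with a robust-scale rule-of-thumb bandwidth in the spirit of the rules the thesis suggests, can be sketched as follows. The constants follow Silverman's well-known rule and the quartile computation is deliberately crude; this is not the thesis' exact proposal.

```python
import math

def kde(x, data, bandwidth):
    """Standard Gaussian kernel density estimate at point x."""
    n = len(data)
    return sum(
        math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data
    ) / (n * bandwidth * math.sqrt(2.0 * math.pi))

def robust_bandwidth(data):
    """Rule-of-thumb bandwidth from a robust scale estimate (IQR / 1.34),
    with Silverman's constant 0.9 and the usual n^(-1/5) rate; the
    quartile computation here is deliberately crude."""
    s = sorted(data)
    n = len(s)
    iqr = s[(3 * n) // 4] - s[n // 4]
    return 0.9 * (iqr / 1.34) * n ** (-0.2)
```

Using the interquartile range rather than the sample standard deviation is what makes the rule robust: a few outliers inflate the standard deviation, and hence the bandwidth, far more than they move the quartiles.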
79

The Classification Model for Corporate Failures in Malaysia

MATYATIM, Rosliza 12 1900 (has links) (PDF)
No description available.
80

Multi-scaling methods applied to population models

Grozdanovski, Tatjana, Tatjana.grozdanovski@rmit.edu.au January 2009 (has links)
This dissertation presents several applications of the multi-scaling (multi-timing) technique to the analysis of both single- and two-species population models in which the defining parameters vary slowly with time. Although exact solutions would be preferred in such cases, they are almost always impossible to obtain when slow variation is involved. Numerical solutions can be obtained; however, they are often time-consuming and offer limited insight into what causes the behaviour seen in the solution. Here an approximation method is chosen, as it gives an explicit analytic approximate expression for the solutions of such population models. The multi-scaling method was chosen because the defining parameters vary slowly compared to the response of the system. This technique is well established in the physical and engineering sciences literature; however, it has rarely been applied to population modelling. All single-species differential equation population models incorporate parameters which define the model - for example, the growth rate r and the carrying capacity k for the logistic model. For constant parameter values an exact solution may be found, giving the population as a function of time. However, for arbitrary time-varying parameters, exact solutions are rarely possible, and numerical solution techniques must be employed. Here we demonstrate that for a logistic model where the growth rate and carrying capacity both vary slowly with time, an analysis with multiple time scales leads to approximate closed-form solutions that are explicit. These solutions prove valid for a range of parameter values and compare favourably with numerically generated ones.
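The logistic model discussed above has a closed-form solution for constant parameters, while the slowly varying case must otherwise be integrated numerically; this sketch shows both, with illustrative parameter values.

```python
import math

def logistic_exact(p0, r, k, t):
    """Closed-form logistic solution for constant r and k:
    p(t) = k / (1 + (k/p0 - 1) * exp(-r t))."""
    return k / (1.0 + (k / p0 - 1.0) * math.exp(-r * t))

def logistic_numeric(p0, r_of_t, k_of_t, t_end, steps=100_000):
    """Forward-Euler integration of p' = r(t) p (1 - p / k(t)), the
    time-varying case that the multi-timing approximation targets."""
    dt = t_end / steps
    p, t = p0, 0.0
    for _ in range(steps):
        p += dt * r_of_t(t) * p * (1.0 - p / k_of_t(t))
        t += dt
    return p
```

With constant parameters the numerical solution reproduces the closed form; with, say, a slowly growing carrying capacity k(t) = 10 + 0.01 t, only the numerical or multi-timing route is available, which is the gap the dissertation's approximation fills.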
