1

Rescheduling blocked Vehicles at Daimler AG

Caap Hällgren, Eric. January 2012.
The purpose of this thesis is to develop a heuristic for the static problem of resequencing unblocked vehicles, as part of an ongoing research project at Daimler AG whose target client is Mercedes-Benz Cars. An unblocked vehicle is one that, for some reason, could not be processed in its assigned time slot and must later be reinserted into the production sequence. Work overload is work that a worker cannot finish before reaching the station border. The resequencing problem can thus be described as finding positions for a set of unblocked vehicles within a sequence of never-blocked vehicles such that the combined sequence causes as little work overload as possible. Because the decision must be made in real time, the solution method has to return an answer within one cycle time. Today, Mercedes-Benz Cars uses the "car sequencing" approach, which relies on so-called spacing constraints: work-intensive vehicles are distributed as evenly as possible over the planning horizon in the hope of enabling smooth production. Car sequencing needs only limited information; the difficulty is finding spacing constraints that fit the high degree of product customization characteristic of a modern car manufacturer. To overcome these difficulties, a new approach is being considered, mixed-model sequencing, which takes more detailed data into account than car sequencing but is computationally more expensive. Within this setting, a simple but promising tabu search scheme was developed that, for many instances, found the optimal solution in under 30 seconds of computing time and clearly outperformed all benchmark heuristics.
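The abstract does not reproduce the thesis's tabu search, so the following is only a minimal sketch of the idea in Python: insertion slots for the unblocked vehicles are searched by best-improvement moves under a short tabu list, against a toy single-station work-overload model. The cost function, parameter values, and data are all illustrative assumptions, not Daimler's.

```python
import random

def work_overload(seq, cycle=1.0, border=2.0):
    """Toy single-station work-overload proxy (an assumption, not the
    thesis's cost model): the worker drifts downstream when a vehicle
    needs more than one cycle; work past the station border is overload."""
    pos, overload = 0.0, 0.0
    for t in seq:
        end = pos + t
        overload += max(0.0, end - border)        # unfinishable work
        pos = max(0.0, min(end, border) - cycle)  # drift into next cycle
    return overload

def tabu_reinsert(base, unblocked, iters=300, tenure=7, seed=0):
    """Tabu search over insertion slots for the unblocked vehicles."""
    rng = random.Random(seed)
    slots = [rng.randrange(len(base) + 1) for _ in unblocked]

    def build(slots):
        seq = list(base)
        # insert highest slot first so earlier inserts don't shift later ones
        for v, s in sorted(zip(unblocked, slots), key=lambda p: -p[1]):
            seq.insert(s, v)
        return seq

    cur = slots
    best, best_cost = list(cur), work_overload(build(cur))
    tabu = {}                                     # (vehicle, slot) -> expiry
    for it in range(iters):
        candidates = []
        for i in range(len(unblocked)):
            for s in range(len(base) + 1):
                if s == cur[i]:
                    continue
                nxt = list(cur); nxt[i] = s
                cost = work_overload(build(nxt))
                # aspiration: a tabu move is allowed if it beats the incumbent
                if tabu.get((i, s), -1) >= it and cost >= best_cost:
                    continue
                candidates.append((cost, i, s, nxt))
        if not candidates:
            break
        cost, i, s, nxt = min(candidates)
        tabu[(i, cur[i])] = it + tenure           # forbid moving straight back
        cur = nxt
        if cost < best_cost:
            best, best_cost = list(nxt), cost
    return build(best), best_cost

base = [0.9, 1.1, 0.8, 1.3, 1.0, 0.7, 1.2, 0.9] * 5  # processing times per cycle
seq, cost = tabu_reinsert(base, unblocked=[1.4, 1.3])
print(f"work overload after reinsertion: {cost:.2f}")
```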
2

Comparison of four growth curve models in Angus cow: an application of Bayesian nonlinear mixed model

Qin, Qing, Master of Science in Statistics. 21 August 2012.
The purpose of this study was to compare four growth curve functions (Brody, Logistic, Gompertz, and Von Bertalanffy) for describing weight change across age in Angus cows. A total of 1,705 weight-age records, from birth to at least three years of age, were collected from 171 cows. Each growth model was fitted as a nonlinear mixed model using the NLMIXED procedure in SAS 9.2 (REML approach) and via MCMC in WinBUGS (Bayesian approach). The goodness of fit of the four models was compared in terms of AIC, BIC, and DIC. The results show that the Gompertz model fitted the data best under the REML approach, while the Brody model appeared best under the Bayesian approach. Compared to the REML approach, the Bayesian approach provided more flexibility in specifying the mixed model and more reasonable estimates for all growth models, illustrating some advantages of Bayesian nonlinear mixed modeling.
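For reference, the four mean functions are standard three-parameter growth curves. The sketch below fits them to invented data with plain nonlinear least squares in Python rather than the thesis's NLMIXED/WinBUGS mixed models, so the random effects and Bayesian machinery are omitted; the AIC shown is the usual Gaussian-likelihood form up to a constant.

```python
import numpy as np
from scipy.optimize import curve_fit

# A = mature weight, b = scale constant, k = maturing rate.
def brody(t, A, b, k):           return A * (1 - b * np.exp(-k * t))
def logistic(t, A, b, k):        return A / (1 + b * np.exp(-k * t))
def gompertz(t, A, b, k):        return A * np.exp(-b * np.exp(-k * t))
def von_bertalanffy(t, A, b, k): return A * (1 - b * np.exp(-k * t)) ** 3

rng = np.random.default_rng(1)
age = np.linspace(1, 60, 40)                        # months (toy data)
weight = gompertz(age, 550, 2.3, 0.08) + rng.normal(0, 15, age.size)

for name, f in [("Brody", brody), ("Logistic", logistic),
                ("Gompertz", gompertz), ("Von Bertalanffy", von_bertalanffy)]:
    try:
        p, _ = curve_fit(f, age, weight, p0=[600, 1, 0.1], maxfev=20000)
        rss = float(np.sum((weight - f(age, *p)) ** 2))
        aic = age.size * np.log(rss / age.size) + 2 * 3
        print(f"{name:16s} A = {p[0]:6.1f}   AIC = {aic:7.1f}")
    except RuntimeError:
        print(f"{name:16s} did not converge")
```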
3

Consumer choice modeling: comparing and contrasting the MAAM, AHP, TOPSIS and AHP-TOPSIS methodologies

Zhang, Yan. 26 August 2014.
While making decisions, consumers are often confronted with multiple product and brand alternatives that may be viewed as specific bundles of attributes or criteria. Researchers attempting to understand this decision-making process employ multi-criteria decision making (MCDM) models in numerous ways to predict ultimate brand choice. This thesis compares and contrasts four MCDM models within a laptop brand choice context: the Multi Attribute Attitude Model (MAAM; Fishbein, 1967), the Analytic Hierarchy Process (AHP; Saaty, 1980), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS; Hwang & Yoon, 1981), and a mixed AHP-TOPSIS model (Ghosh, 2011; Bhutia & Phipon, 2012). While Fishbein's MAAM evaluates brand choice by multiplying attribute belief ratings with their importance weights, the AHP uses pairwise comparisons to elicit relative weights of brand attributes and alternatives. The TOPSIS method, on the other hand, proposes that consumers choose the brand with the shortest distance to their ideal solution and the greatest distance from their worst solution. Advantages and disadvantages of each method are reviewed, and a mixed AHP-TOPSIS method that addresses some of the drawbacks is proposed. The rankings attained via TOPSIS and AHP-TOPSIS are the same here, but this agreement is coincidental to the chosen laptop example: applying the two models to an alternative hotel choice scenario yields different rankings, and sensitivity analyses further demonstrate the differences across models. The thesis has both theoretical and practical implications. Theoretically, it brings decision-making methodologies from the supply chain management field to bear on marketing questions, and it is the first work to apply a mixed AHP-TOPSIS model demonstrating greater accuracy in predicting consumer brand choice. Practically, it allows companies to improve the impressions customers hold of their performance on specific attributes.
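A compact sketch of the TOPSIS step follows; the laptop attributes, ratings, and weights are invented for illustration, and in the mixed AHP-TOPSIS model the weight vector would come from AHP pairwise comparisons rather than being set by hand.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) on criteria (columns).
    w: criterion weights summing to 1; benefit[j] True if larger is better."""
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each column
    V = R * w                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    worst = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(V - worst, axis=1)   # distance to worst solution
    return d_neg / (d_pos + d_neg)              # closeness: higher = better

# Hypothetical laptops rated on price ($), battery (h), CPU score, weight (kg)
X = np.array([[900.0, 8, 70, 1.4],
              [1200.0, 10, 85, 1.8],
              [700.0, 6, 60, 2.0]])
w = np.array([0.35, 0.25, 0.25, 0.15])          # e.g., AHP-derived weights
benefit = np.array([False, True, True, False])  # price and weight are costs
print(topsis(X, w, benefit))                    # closeness coefficient per brand
```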
4

Comparison of Denominator Degrees of Freedom Approximations for Linear Mixed Models in Small-Sample Simulations

January 2020.
Whilst linear mixed models offer a flexible approach to data with multiple sources of random variability, hypothesis testing for the fixed effects encounters obstacles when the sample size is small and the distribution of the test statistic is unknown. Five methods of approximating the denominator degrees of freedom (residual, containment, between-within, Satterthwaite, and Kenward-Roger) have been developed to overcome this problem. This study evaluates the performance of these five methods in a mixed model with a random intercept and a random slope. Specifically, simulations provide insight into the F-statistics, denominator degrees of freedom, and p-values each method gives under different settings of the sample structure, the fixed-effect slopes, and the proportion of missing data. The simulation results show that the residual method performs worst in terms of F-statistics and p-values, and that the Satterthwaite and Kenward-Roger methods tend to be more sensitive to changes in the design. The Kenward-Roger method performs best in terms of F-statistics when the null hypothesis is true.
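None of the five approximations is spelled out in the abstract; as a hedged illustration, the sketch below computes the Satterthwaite degrees of freedom in its simplest (two-sample Welch) form. Mixed-model software applies the same moment-matching idea to estimated variance components rather than to two sample variances.

```python
import numpy as np
from scipy import stats

def satterthwaite_df(x, y):
    """Satterthwaite df for a two-sample mean comparison: match the first
    two moments of a weighted sum of variance estimates to a chi-square."""
    v1, v2 = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    return (v1 + v2) ** 2 / (v1**2 / (len(x) - 1) + v2**2 / (len(y) - 1))

rng = np.random.default_rng(0)
x, y = rng.normal(0, 1, 8), rng.normal(0, 3, 12)   # small, unbalanced samples
df = satterthwaite_df(x, y)
t = (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1)/len(x) + y.var(ddof=1)/len(y))
print(df, 2 * stats.t.sf(abs(t), df))              # hand-rolled Welch test
print(stats.ttest_ind(x, y, equal_var=False))      # cross-check against scipy
```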
5

The Generalized Linear Mixed Model for Finite Normal Mixtures with Application to Tendon Fibrilogenesis Data

Zhan, Tingting. January 2012.
We propose the generalized linear mixed model for finite normal mixtures (GLMFM), together with estimation procedures, applicable to hierarchical datasets with small numbers of individual units and multi-modal distributions at the lowest level of clustering. The modeling task is two-fold: (a) model each lowest-level cluster as a finite mixture of normal distributions; and (b) model the suitably transformed mixture proportions, means, and standard deviations of the lowest-level clusters through a linear hierarchical structure. We propose robust generalized weighted likelihood estimators and a new cubic-inverse weight for estimating the finite mixture model (Zhan et al., 2011), and two robust methods for estimating the GLMFM that accommodate contamination at all clustering levels: a standard two-stage approach (Chervoneva et al., 2011) and a robust joint estimation. The research was motivated by data from the tendon fibril experiment reported in Zhang et al. (2006), but the methodology is quite general and has potential application in a variety of relatively complex statistical modeling situations.
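The thesis's robust generalized weighted likelihood estimators are not available in standard libraries, so the sketch below illustrates step (a) only, fitting a finite normal mixture by ordinary EM maximum likelihood to an invented bimodal sample standing in for one lowest-level cluster.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Invented bimodal "fibril diameter" sample for one lowest-level cluster (nm)
d = np.concatenate([rng.normal(60, 5, 300), rng.normal(110, 12, 200)])

gm = GaussianMixture(n_components=2, random_state=0).fit(d.reshape(-1, 1))
print("weights:", gm.weights_)
print("means:  ", gm.means_.ravel())
print("SDs:    ", np.sqrt(gm.covariances_.ravel()))
# Step (b) of the GLMFM would then relate the transformed weights/means/SDs
# across clusters through a linear hierarchical structure.
```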
6

Two-Stage SCAD Lasso for Linear Mixed Model Selection

Yousef, Mohammed A. 07 August 2019.
No description available.
7

A Review and Comparison of Models and Estimation Methods for Multivariate Longitudinal Data of Mixed Scale Type

Codd, Casey. 23 September 2014.
No description available.
8

A study of covariance structure selection for split-plot designs analyzed using mixed models

Qiu, Chen. January 1900.
Master of Science, Department of Statistics, Christopher I. Vahl. In the classic split-plot design, where whole plots follow a completely randomized design, the conventional analysis assumes a compound symmetry (CS) covariance structure for the observation errors. Often, however, this assumption does not hold. This report examines the use of the covariance models available in PROC MIXED in the SAS system, widely used in repeated-measures analysis, to model the covariance structure of split-plot data for which simple compound symmetry fails. The covariance-structure models in PROC MIXED are compared with the conventional split-plot model through a simulation study. In the example analyzed, the heterogeneous compound symmetry (CSH) model has the smallest Akaike and Schwarz Bayesian information criterion values and is therefore the best-fitting model for the example data.
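The two covariance structures at issue are easy to state directly; the sketch below builds both with illustrative parameter values. In SAS these correspond to TYPE=CS and TYPE=CSH in PROC MIXED.

```python
import numpy as np

def cs(sigma2, rho, k):
    """Compound symmetry: common variance sigma2, common correlation rho."""
    return sigma2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))

def csh(sigmas, rho):
    """Heterogeneous CS: measurement-specific SDs, common correlation rho."""
    s = np.asarray(sigmas, dtype=float)
    R = np.full((s.size, s.size), rho)
    np.fill_diagonal(R, 1.0)
    return np.outer(s, s) * R               # entry (i, j) = rho * s_i * s_j

print(cs(4.0, 0.5, 3))                      # equal variances on the diagonal
print(csh([1.0, 2.0, 3.0], 0.5))            # variances 1, 4, 9; correlation 0.5
```

Model selection then amounts to fitting each structure and comparing information criteria, as the report does with AIC and BIC.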
9

A study of the calibration-inverse prediction problem in a mixed model setting

Yang, Celeste. January 1900.
Master of Science, Department of Statistics, Paul I. Nelson. The calibration (inverse prediction) problem was investigated in a mixed-model setting, and two methods were used to construct inverse prediction intervals. Method 1 ignores the random block effect in the mixed model and constructs the interval in the standard manner, using quantiles from an F distribution. Method 2 uses a bootstrap to estimate quantiles of an approximate pivot and then follows essentially the same procedure as Method 1. A simulation study compared the intervals produced by the two methods in terms of coverage rate and mean interval length. The results suggest that when the block variance component is large relative to the location variance component, the coverage rates of the two methods differ significantly: Method 2 yields intervals with slightly higher coverage and greater length than Method 1. Both methods produced coverage rates below nominal for roughly one third of the simulation settings.
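A simplified sketch of the inverse-prediction step: ordinary simple linear regression with the block effect ignored (in the spirit of Method 1), and a case-resampling percentile bootstrap for the interval rather than the thesis's bootstrap of an approximate pivot. All data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)                      # known standards
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, x.size)  # calibration responses
y0 = 9.5                                        # new response to invert

def invert(xs, ys, y_new):
    b1, b0 = np.polyfit(xs, ys, 1)              # fitted slope, intercept
    return (y_new - b0) / b1                    # classical inverse prediction

x_hat = invert(x, y, y0)
boot = []
for _ in range(2000):                           # case-resampling bootstrap
    i = rng.integers(0, x.size, x.size)
    boot.append(invert(x[i], y[i], y0))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"x_hat = {x_hat:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```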
10

Quantifying Power and Bias in Cluster Randomized Trials Using Mixed Models vs. Cluster-Level Analysis in the Presence of Missing Data: A Simulation Study

Vincent, Brenda. January 2016.
In cluster randomized trials (CRTs), groups rather than individuals are randomized to treatment arms, while the outcome is assessed on the individuals within each cluster. Individuals within a cluster tend to be more similar than individuals in a randomly selected sample; this dependence, if ignored, can lead to underestimated standard errors. To adjust for the within-cluster correlation, two main approaches are used to analyze CRTs: cluster-level and individual-level analysis. In a cluster-level analysis, summary measures are obtained for each cluster and the two sets of cluster-specific measures are compared, for example with a t-test on the cluster means. A mixed model that accounts for cluster membership is an example of an individual-level analysis. We used a simulation study to quantify and compare the power and bias of these two methods, additionally taking missing data into account: complete datasets were generated and observations were then deleted to simulate data missing completely at random (MCAR) and missing at random (MAR). A balanced design with two treatment groups and two time points was assumed, and cluster size, the variance components (within-subject, within-cluster, and between-cluster), and the proportion of missingness were varied to reflect scenarios common in practice. For each combination of parameters, 1,000 datasets were generated and analyzed. The results indicate that cluster-level analysis suffered a substantial loss of power when data were MAR, whereas individual-level analysis had higher power and remained unbiased, even with a small number of clusters.
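A stripped-down sketch of the two analyses being compared: one follow-up measurement instead of the thesis's two time points, and uniform random deletion as a crude stand-in for the MCAR mechanism. Cluster counts, variances, and the treatment effect are illustrative values, not the simulation settings of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(4)
rows = []
for c in range(20):                          # 10 clusters per arm
    treat = c % 2                            # randomize clusters, not subjects
    u = rng.normal(0, 1.0)                   # between-cluster random effect
    for _ in range(15):                      # subjects per cluster
        rows.append((c, treat, 0.5 * treat + u + rng.normal(0, 2.0)))
df = pd.DataFrame(rows, columns=["cluster", "treat", "y"])
df = df.sample(frac=0.8, random_state=0)     # crude MCAR-style deletion

# cluster-level analysis: t-test on the cluster means
means = df.groupby(["cluster", "treat"])["y"].mean().reset_index()
print(stats.ttest_ind(means.loc[means.treat == 1, "y"],
                      means.loc[means.treat == 0, "y"]))

# individual-level analysis: random-intercept linear mixed model
print(smf.mixedlm("y ~ treat", df, groups=df["cluster"]).fit().summary())
```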
