1

Maquiladoras in Central America: An Analysis of Workforce Schedule, Productivity and Fatigue.

Barahona, Jose L 01 July 2019 (has links)
Textile factories, or maquiladoras, are abundant and economically prominent in Central America. However, they do not all follow the same standardized work schedules or routines. Most maquiladoras follow only the schedules and regulations established by current labor law, without taking into account the many variables within their organization that could affect overall performance. The purpose of this study is therefore to analyze the current working structure of a textile maquiladora and determine the most suitable schedule, one that fits the current working structure while increasing production levels and employee morale and decreasing employee fatigue. A maquiladora located in El Salvador, Central America, which supplies finished goods to one of the leading textile companies in the United States of America, has been chosen for the study. The study will consist of collecting production numbers for two of its manufacturing cells over five consecutive days; in addition, a questionnaire will be administered to measure employee fatigue. Once all data have been collected, they will be analyzed to determine the working structure that best benefits both the employee and the employer.
2

A New Screening Methodology for Mixture Experiments

Weese, Maria 01 May 2010 (has links)
Many materials we use in daily life, such as plastics, gasoline, food, and medicine, are mixtures. Mixture experiments, in which the factors are proportions of components and the response depends only on the relative proportions of the components, are an integral part of product development and improvement. However, when the number of components is large and there are complex constraints, experimentation can be a daunting task. We study screening methods in a mixture setting using the framework of the Cox mixture model [1]. We exploit the easy interpretation of the parameters in the Cox mixture model and develop methods for screening in a mixture setting: specific methods for adding a component and removing a component, and a general method for screening a subset of components in mixtures with complex constraints. The variances of our parameter estimates are comparable with those of the commonly used Scheffé model, and our methods reduce the run size of screening experiments for mixtures containing a large number of components. We then extend the new screening methods using Evolutionary Operation (EVOP), developed by Box and Draper [2]. EVOP methods use small movements in a subset of process parameters, together with replication, to reveal effects above the process noise. Mixture experiments inherently involve small movements (since the proportions can only range from zero to one), and the effects have large variances. We update the EVOP methods by using sequential testing of effects, as opposed to the confidence-interval method originally proposed by Box and Draper. We show that the sequential testing approach, compared with a fixed sample size, reduces the required sample size by as much as 50 percent with all other testing parameters held constant. We present two methods for adding a component and a general screening method using a graphical sequential t-test, and we provide R code to reproduce the limits for the test.
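The early-stopping idea behind the sequential t-test can be sketched roughly as follows. This is an illustrative toy in Python (not the dissertation's R code), and the boundary values are arbitrary placeholders rather than the graphical limits derived in the thesis:

```python
import statistics

def sequential_t(observations, mu0=0.0, reject=3.0, accept=0.5, min_n=4):
    """Recompute the one-sample t statistic after each new observation and
    stop as soon as it crosses a boundary, rather than waiting for a fixed
    sample size. The reject/accept boundaries here are placeholders."""
    data, t = [], float("nan")
    for x in observations:
        data.append(x)
        n = len(data)
        if n < min_n:
            continue
        mean = statistics.fmean(data)
        sd = statistics.stdev(data)
        if sd == 0:
            continue  # t statistic undefined without variability
        t = (mean - mu0) / (sd / n ** 0.5)
        if abs(t) >= reject:
            return ("effect", n, t)      # stop early: effect detected
        if abs(t) <= accept:
            return ("no effect", n, t)   # stop early: effect ruled out
    return ("inconclusive", len(data), t)
```

A strong, consistent effect triggers an early stop well before a fixed-sample design would finish collecting data, which is where the sample-size savings come from.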
3

A Fault-Based Model of Fault Localization Techniques

Hays, Mark A 01 January 2014 (has links)
Every day, ordinary people depend on software working properly. We take it for granted: from banking software to railroad switching software, flight control software, and software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main activity used to ensure the quality of software is testing; often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying fault localization techniques, a researcher intentionally seeds a fault (deliberately breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily, so there is potential for bias in their selection. Previous researchers have established an ontology, called fault size, for understanding and expressing this bias. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future. While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques, and they often select their statistical techniques without justification. This is worrisome because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation suggest that MeansTest performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
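The general idea of automating statistical-test selection (choosing a parametric or nonparametric comparison based on a distributional check) can be sketched as below. This is a hypothetical illustration, not the actual MeansTest algorithm; the skewness check and tolerance are assumptions of this sketch:

```python
import statistics

def choose_means_test(sample, skew_tol=1.0):
    """Pick a technique for comparing means based on a crude check of the
    sample's distribution: roughly symmetric data gets a t-test, heavily
    skewed data gets a rank-based test. Illustrative only."""
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    # adjusted Fisher-Pearson sample skewness
    skew = sum(((x - mean) / sd) ** 3 for x in sample) * n / ((n - 1) * (n - 2))
    return "t-test" if abs(skew) < skew_tol else "Wilcoxon rank-sum"
```

The point of such a dispatcher is that the researcher no longer picks the test by habit; the data's own shape drives the choice.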
4

Tuning Optimization Software Parameters for Mixed Integer Programming Problems

Sorrell, Toni P 01 January 2017 (has links)
The tuning of optimization software is of key interest to researchers solving mixed integer programming (MIP) problems. The efficiency of the optimization software can be greatly affected by the solver's parameter settings and the structure of the MIP. A designed-experiment approach is used to fit a statistical model that suggests the parameter settings providing the largest reduction in the primal integral metric. Using tuning exemplars of six and 59 factors (solver parameters), experimentation takes place on three classes of MIPs: survivable fixed telecommunication network design, a formulation of the support vector machine with the ramp loss and L1-norm regularization, and node packing for coding theory graphs. This research presents and demonstrates a framework for tuning a portfolio of MIP instances, both to obtain good parameter settings for future instances of the same class of MIPs and to gain insight into which parameters and parameter interactions are significant for that class. The framework is also used to benchmark solvers with tuned parameters on a portfolio of instances. A group screening method reduces the number of factors in a design and the time the tuning process takes, while portfolio benchmarking provides performance information for optimization solvers on a class of similarly structured instances.
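The primal integral metric used above rewards solvers that find good incumbents early. A simplified sketch of one common formulation (integrating the relative primal gap over the solve; exact normalizations in the literature vary) might look like this, where the event list and tolerance are assumptions of the sketch:

```python
def primal_integral(events, t_end, opt_value):
    """Integrate the primal gap over a solve. `events` is a sorted list of
    (time, incumbent_objective) pairs; the gap is 1.0 before the first
    incumbent is found and |inc - opt| / max(|inc|, |opt|) afterwards, so
    finding good solutions earlier yields a smaller integral."""
    total, prev_t, gap = 0.0, 0.0, 1.0
    for t, inc in events:
        total += gap * (t - prev_t)
        gap = abs(inc - opt_value) / max(abs(inc), abs(opt_value), 1e-12)
        prev_t = t
    total += gap * (t_end - prev_t)  # carry the final gap to the time limit
    return total
```

Two parameter settings that reach the same final objective can thus still be ranked: the one whose incumbents arrive sooner has the smaller primal integral.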
5

Estimating Prevalence from Complex Surveys

O'Brien, Sophie 07 November 2014 (has links)
Massachusetts passed legislation in the fall of 2012 allowing the construction of three casinos and a slot parlor in the state. The prevalence of problem gambling in the state, and particularly in the areas where casinos will be constructed, is of interest; the goal is to evaluate the change in prevalence after construction of the casinos using a multi-mode, address-based sample survey. The objective of this thesis is to evaluate and describe ways of using statistical inference to estimate prevalence rates in finite populations. Four estimators were considered in the context of the gambling study: the simple mean, the post-stratified mean, the best linear unbiased predictor (BLUP), and the empirical best linear unbiased predictor (EBLUP). These were evaluated in three examples, both unconditionally and conditionally controlling for gender, using mean squared error (MSE) as a measure of accuracy. In conditional analyses of a population with N = 1,000, a crude problem gambling rate of 1.5, and samples of n = 200, the simple mean and the post-stratified mean each performed better in certain situations, as measured by low MSE over 10,000 simulations: when a sample contained fewer females than expected, the post-stratified mean produced the lower mean MSE, and when it contained more females than expected, the simple mean did. Conditional analysis provided more appropriate results than unconditional analysis.
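The simulation comparison described above can be sketched as follows. The population values, stratum labels, and simulation sizes below are illustrative placeholders, not the gambling-survey data:

```python
import random
import statistics

def compare_estimators(pop_vals, pop_strata, n=200, sims=2000, seed=1):
    """Draw repeated simple random samples from a finite population and
    compare the MSE of (a) the simple sample mean and (b) a post-stratified
    mean that reweights by known stratum proportions."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(pop_vals)
    N = len(pop_vals)
    stratum_w = {s: pop_strata.count(s) / N for s in set(pop_strata)}
    se_simple = se_post = 0.0
    for _ in range(sims):
        idx = rng.sample(range(N), n)
        vals = [pop_vals[i] for i in idx]
        strata = [pop_strata[i] for i in idx]
        se_simple += (statistics.fmean(vals) - true_mean) ** 2
        post = 0.0
        for s, w in stratum_w.items():
            in_s = [v for v, g in zip(vals, strata) if g == s]
            if in_s:  # guard against an empty stratum in the sample
                post += w * statistics.fmean(in_s)
        se_post += (post - true_mean) ** 2
    return se_simple / sims, se_post / sims
```

When the strata genuinely differ and the sample's gender mix drifts from the population's, reweighting to the known stratum sizes is what lets the post-stratified mean recover, mirroring the conditional results reported in the thesis.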
6

How Other Drivers’ Vehicle Characteristics Influence Your Driving Speed

Brockett, Russell 01 January 2011 (has links)
The effect of passing vehicles' characteristics on other drivers' speeds was investigated. Three experimental studies were proposed and their likely outcomes discussed. Experiment 1 focused on the effect of passing vehicle type (SUV, sedan, or truck) on driver speed; drivers were hypothesized to go faster when passed by the same vehicle type they were driving than when passed by no vehicle or a different vehicle type. Experiment 2 focused on the effect of the passing SUV's age on driver speed; evidence suggests that passing older SUVs will increase the driver's speed more than new SUVs. Experiment 3 focused on the effect of the passing SUV's color on speed; drivers were hypothesized to go faster when the passing vehicle was painted a brighter color (red or yellow) rather than a cooler color (grey or black).
7

Route Choice Behavior in Risky Networks with Real-Time Information

Razo, Michael D 01 January 2010 (has links)
This research investigates route choice behavior in networks with risky travel times and real-time information. A stated preference survey is conducted in which subjects use a PC-based interactive map to choose routes link-by-link in various scenarios. The scenarios include two types of maps: the first presenting a choice between one stochastic route and one deterministic route, and the second offering real-time information and an available detour. The first type measures the basic risk attitude of the subject; the second allows for strategic planning and measures the effect of this opportunity on subjects' choice behavior. Results from each subject are analyzed to determine whether subjects planned strategically for the en route information or simply selected fixed paths from origin to destination. The full data set is used to estimate route choice models that account for both risk attitude and strategic thinking. Estimation results are used to assess whether models that incorporate strategic behavior reflect route choice more accurately than simpler path-based models do.
8

Non-Conventional Approaches to Syntheses of Ferromagnetic Nanomaterials

Clifford, Dustin M 01 January 2016 (has links)
The work of this dissertation centers on two non-conventional synthetic approaches to ferromagnetic nanomaterials: high-throughput experimentation (HTE) using the polyol process, and continuous flow (CF) synthesis using aqueous reduction and the polyol process. HTE was performed to investigate phase control between FexCo1-x and Co3-xFexOy. Synthesis limits were explored on the basis of magnetic properties, reproducing a saturation magnetization of Ms = 210 emu/g. Morphological control of the FexCo1-x alloy was achieved by forming linear chains under an external magnetic field (Hext). The final study of the FexCo1-x chains used design of experiments (DoE) to determine the factors controlling the FexCo1-x phase, diameter, crystallite size, and morphology: [Ag] together with [Metal] provides statistically significant control of crystallite size, and an [OH]/[Metal] ratio greater than 30 predicts 100% FexCo1-x. To conclude the first section, a morphological study was performed on the synthesis of Co3-xFexOy by the polyol process. Co3-xFexOy micropillars were synthesized at various sizes; the close proximity of the particles in the nanostructure produced a magnetically induced optical anisotropy, which is evidence of the magneto-birefringence effect. The second non-conventional synthetic approach involves continuous flow (CF) chemistry. Co nanoparticles (Ms = 125 emu/g, 30 ± 10 nm diameter) were newly synthesized by aqueous reduction in a microreactor at a rate above 1 g/hr, a marker of industrial scale-up viability. The final work was the CF synthesis of FexCo1-x, which was limited in composition: the maximum FexCo1-x phase composition of 20% resulted from the aqueous carrier solvent triggering oxide formation over FexCo1-x.
9

Methods for evaluating dropout attrition in survey data

Hochheimer, Camille J 01 January 2019 (has links)
As researchers increasingly use web-based surveys, the ease of dropping out of an online survey is a growing threat to data quality. One theory is that dropout, or attrition, occurs in phases that can be generalized as phases of high dropout and phases of stable use. Several methods for detecting these phases are explored. First, existing methods with user-specified thresholds are applied to survey data, with a significant change in the dropout rate between two questions interpreted as the start or end of a high-dropout phase. Next, survey dropout is treated as a time-to-event outcome, and tests within change-point hazard models are introduced and their performance compared. Finally, all methods are applied to survey data on patient cancer screening preferences, testing the null hypothesis of no attrition phases (no change-points) against the alternative hypothesis that distinct attrition phases exist (at least one change-point).
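The threshold approach mentioned above can be sketched as follows; the threshold value here is an arbitrary assumption of the sketch, not one taken from the dissertation:

```python
def dropout_phases(respondents_per_q, threshold=0.05):
    """Given the number of respondents remaining at each question, compute
    the dropout rate between consecutive questions and return the 1-based
    indices of questions after which the rate exceeds the threshold,
    flagging candidate high-dropout phases."""
    rates = []
    for before, after in zip(respondents_per_q, respondents_per_q[1:]):
        rates.append((before - after) / before if before else 0.0)
    return [i + 1 for i, r in enumerate(rates) if r > threshold]
```

The change-point hazard models explored later in the dissertation address the main weakness of this sketch: the flagged phases depend entirely on the user's choice of threshold.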
10

Mangiferin as a Biomarker for Mango Anthracnose Resistance

Pierre, Herma 02 July 2015 (has links)
Mangos (Mangifera indica L.) are tropical and subtropical fruits belonging to the plant family Anacardiaceae. Anthracnose, most commonly caused by the Colletotrichum gloeosporioides complex, is the most damaging disease of mango both in the field and during postharvest handling. Mangiferin, a xanthonoid compound found in at least twelve plant families worldwide (Luo et al., 2012), is present in large amounts in the leaves and edible fruit of mango. Although this compound plays a pivotal role in the plant's defense against biotic and abiotic stressors, no correlation has been made between the compound and mango anthracnose resistance. Mangos were collected, grouped according to their countries of origin, and evaluated for their mangiferin concentrations at four different stages of development. Extracts of interest were then tested against different strains of C. gloeosporioides. The results demonstrated that mangiferin concentrations differ significantly at different stages of fruit development; the antifungal assays were inconclusive.
