81

Maquiladoras in Central America: An Analysis of Workforce Schedule, Productivity and Fatigue.

Barahona, Jose L 01 July 2019 (has links)
Textile factories, or maquiladoras, are abundant and economically prominent in Central America. However, they do not all follow the same standardized work schedules or routines. Most maquiladoras simply follow the schedules and regulations established by current labor law, without considering the many variables within their organizations that could affect overall performance. The purpose of this study is therefore to analyze the current working structure of a textile maquiladora and determine the most suitable schedule, one that complies with that structure while increasing production levels and employee morale and decreasing employee fatigue. A maquiladora located in El Salvador, Central America, was chosen for the study; it currently supplies finished goods to one of the leading textile companies in the United States. The study consists of collecting production numbers for two of the plant's manufacturing cells over five consecutive days, together with a questionnaire administered to measure employee fatigue. Once collected, the data will be analyzed to determine the working structure that best benefits both the employee and the employer.
82

Contribution to the Experimental Modelling of Transient Tire Behavior

Alarcon, Laura 02 July 2015 (has links)
The car of tomorrow is taking shape. It will be connected and autonomous, that is, it will substitute entirely for the human driver. A few thousand communicating vehicles were expected on the road by 2016. Through the development of multiple advanced driver-assistance and active-safety functions, it is already possible to speak of partially autonomous driving. In recent years a large number of such systems have appeared in vehicles, for example adaptive cruise control, lane-departure warning, and parking assistance. They rely on increasingly sophisticated technologies that carry significant development costs. Car manufacturers are numerous and face intense competition, which strongly influences the design phase: the interval between vehicle design and manufacturing keeps shrinking in order to stay competitive. Numerical simulation is therefore being developed to reduce prototyping costs and vehicle development time. It relies on generic, accurate models that simulate the behavior of the vehicle and of its subsystems. Modeling vehicle dynamics in steady state is now well established. For the transient regime, the dynamic characterization of vehicle behavior and of the physical phenomena experienced by occupants has long been studied but still has gaps. This is precisely the case for current tire models, which do not correctly reproduce the transient behavior of the tire, in particular during emergency maneuvers. This work addresses the problem of improving the representativeness of tire models in the transient regime.
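One concrete way to see what "transient tire behavior" means is the classical first-order relaxation-length model, in which lateral force lags a change in slip angle rather than responding instantly. The sketch below is a generic textbook model, not the experimental model developed in this thesis; the cornering stiffness, relaxation length, speed, and steer input are all assumed values for illustration.

```python
import numpy as np

# First-order relaxation-length model: the lateral force F_y lags the
# steady-state response when the slip angle alpha changes. All parameter
# values are illustrative, not taken from the thesis.
C_alpha = 80_000.0   # cornering stiffness [N/rad] (assumed)
sigma = 0.6          # relaxation length [m] (assumed)
v = 20.0             # forward speed [m/s] (assumed)
dt = 1e-3            # integration step [s]

t = np.arange(0.0, 1.0, dt)
alpha = np.where(t >= 0.1, np.radians(2.0), 0.0)  # step steer input at t = 0.1 s

F_y = np.zeros_like(t)
for k in range(1, len(t)):
    F_ss = C_alpha * alpha[k]                     # steady-state lateral force
    # (sigma / v) * dF_y/dt + F_y = F_ss, integrated with explicit Euler
    F_y[k] = F_y[k - 1] + dt * (v / sigma) * (F_ss - F_y[k - 1])

steady = C_alpha * np.radians(2.0)
print(f"force at t = 0.2 s: {F_y[int(0.2 / dt)]:.0f} N of {steady:.0f} N steady state")
```

The time constant sigma/v shrinks with speed, which is why transient tire effects matter most in low-speed or rapidly changing maneuvers; richer models of exactly this lag are what the thesis sets out to identify experimentally.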
83

Sensitivity Analysis Of Design Parameters For Trunnion-Hub Assemblies Of Bascule Bridges Using Finite Element Methods

Paul, Jai P 31 January 2005 (has links)
Hundreds of thousands of dollars can be lost to failures during fabrication of the trunnion-hub-girder (THG) assemblies of bascule bridges. Two different procedures are currently used for THG assembly. Crack formation in the hubs of several bridges during assembly led the Florida Department of Transportation (FDOT) to commission a project to investigate why the assemblies failed, and a research contract was granted to the Mechanical Engineering department at USF in 1998 to conduct theoretical, numerical, and experimental studies. It was found that the steady-state stresses were well below the yield strength of the material and could not have caused the failures. A parametric finite element model was built in ANSYS to analyze the transient stresses, temperatures, and critical crack lengths in the THG assembly during the two assembly procedures, and the critical points and critical stages of assembly were identified based on the critical crack length. Experiments with cryogenic strain gauges and thermocouples were also developed to determine the stresses and temperatures at the critical points during the two procedures. One result of these studies was that large tensile hoop stresses develop in the hub at the trunnion-hub interface during the first assembly procedure (AP1), when the trunnion-hub assembly is cooled for insertion into the girder. These stresses occur at low temperatures and result in low critical crack lengths. One suggested remedy was to study the influence of hub thickness on critical stresses and crack lengths. In addition, American Association of State Highway and Transportation Officials (AASHTO) standards call for a hub radial thickness of 0.4 times the inner diameter, while a thickness of 0.1 to 0.2 times the inner diameter is currently used. In this thesis, the geometric dimensions are varied according to design-of-experiments standards to determine the sensitivity of the critical stresses and critical crack lengths to these parameters during assembly. The parameters varied are the ratio of hub radial thickness to trunnion outer diameter, the ratio of trunnion outer diameter to trunnion bore diameter, and the interference. The radial thickness of the hub was found to be the most influential parameter on critical stresses and critical crack lengths.
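The tensile hoop stress at the hub bore that drives these failures can be estimated in closed form before any finite element work. Below is a minimal sketch using the classical Lamé thick-cylinder solution for an interference fit between same-material steel parts; the dimensions and interference value are illustrative assumptions, not the thesis's THG geometry.

```python
# Lame thick-cylinder estimate of interference-fit contact pressure and of
# the tensile hoop stress at the hub bore, the location the study identifies
# as critical. All dimensions below are placeholder values.
E = 200e9        # steel Young's modulus [Pa]
nu = 0.3         # Poisson's ratio
a = 0.0          # trunnion bore radius [m] (solid trunnion assumed)
b = 0.20         # interface radius: trunnion outer / hub inner [m] (assumed)
c = 0.26         # hub outer radius [m] (assumed thickness ratio)
delta = 150e-6   # radial interference [m] (assumed)

# Contact pressure for two cylinders of the same material.
hub_term = (c**2 + b**2) / (c**2 - b**2) + nu
shaft_term = (b**2 + a**2) / (b**2 - a**2) - nu
p = E * delta / b / (hub_term + shaft_term)

# Hoop stress at the hub inner radius (maximum tensile value).
sigma_theta = p * (c**2 + b**2) / (c**2 - b**2)
print(f"contact pressure: {p/1e6:.1f} MPa, hub bore hoop stress: {sigma_theta/1e6:.1f} MPa")
```

Note how the hoop stress scales with (c² + b²)/(c² - b²): a thin hub amplifies the interface pressure into a large tensile hoop stress, which is consistent with the thesis's finding that hub radial thickness is the most influential parameter.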
84

Sensitivity Analysis of Three Assembly Procedures for a Bascule Bridge Fulcrum

Snyder, Luke Allen 04 November 2009 (has links)
Many different hub assembly procedures have been used over the years in bascule bridge construction. The first assembly procedure (AP1) shrink-fits a trunnion into a hub, then shrink-fits the entire trunnion-hub (TH) assembly into the girder of the bridge. The second assembly procedure (AP2) shrink-fits the hub into the girder first, then shrink-fits the trunnion into the hub-girder (HG) assembly. The third assembly procedure (AP3) uses a warm shrink-fitting process in which induction coils are placed on the girder and heat is applied until thermal expansion of the girder hole allows insertion of the hub. All three procedures cool components at some stage to contract them so that one part can be inserted into the next. Occasionally, during these cooling and heating steps, cracks develop in the material due to the large thermal shock and the resulting thermal stresses. Previous work conducted a formal design-of-experiments analysis on AP1 to determine the effect of various factors on the critical design parameters: the overall minimum stress ratio (OMSR) and the overall minimum critical crack length (OMCCL). This work conducts a formal design-of-experiments analysis on AP1, AP2, and AP3, using the same cooling methods and parameters as the previous studies and adding bridge size as a factor. For AP1, the medium bridge size yields the largest OMCCL values of any bridge and the second-largest OMSR values, while the large bridge size yields the largest OMSR values across all factors; OMCCL and OMSR increase for every bridge size as the alpha ratio increases. For AP2 and AP3, the smallest bridge shows the largest OMCCL and OMSR values for every cooling method and every alpha ratio, and OMCCL and OMSR decrease for every bridge size as the alpha ratio increases.
85

Antiviral Resistance and Dynamic Treatment and Chemoprophylaxis of Pandemic Influenza

Paz, Sandro 21 March 2014 (has links)
Public health data show the tremendous economic and societal impact of past influenza pandemics. The welfare of society is currently threatened by a lack of planning to ensure an adequate response to the next pandemic. Preparation is difficult because the characteristics of the virus that would cause the pandemic are unknown, but primarily because the response requires decision-support tools based on scientific methods. The response to the next influenza pandemic will likely include extensive use of antiviral drugs, which will create an unprecedented selective pressure for the emergence of antiviral-resistant strains. Nevertheless, the literature lacks comprehensive models that simulate the spread and mitigation of pandemic influenza, including infection by an antiviral-resistant strain. We are building a large-scale simulation-optimization framework for developing dynamic antiviral strategies, including treatment of symptomatic cases and pre- and post-exposure chemoprophylaxis. The model considers an oseltamivir-sensitive strain and a resistant strain with a low or high fitness cost, induced by the use of the various antiviral measures. The mitigation strategies incorporate age- and immunity-based risk groups for treatment and pre-/post-exposure chemoprophylaxis, as well as the duration of pre-exposure chemoprophylaxis. The model is tested on a hypothetical region in Florida, U.S., involving more than one million people. The analysis is conducted under different virus transmissibility and severity scenarios, varying the intensity of non-pharmaceutical interventions and the level of antiviral stockpile availability. The model is intended to support pandemic preparedness and response policy making.
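The selective-pressure mechanism at the core of this abstract can be illustrated with a toy two-strain compartmental model: treatment shortens sensitive-strain infections, but a small fraction of treated cases emerges resistant, and the resistant strain is untouched by treatment despite paying a transmission fitness cost. This is a deliberately minimal sketch, far simpler than the large-scale simulation-optimization framework described; all rates below are assumed.

```python
# Toy two-strain SIR model with treatment-driven emergence of resistance.
# Parameter values are illustrative assumptions, not calibrated estimates.
N = 1_000_000
beta_s, beta_r = 0.50, 0.45   # transmission rates; resistant strain pays a fitness cost
gamma = 0.25                  # recovery rate, untreated [1/day]
gamma_t = 0.40                # faster recovery under effective treatment [1/day]
tau = 0.4                     # fraction of sensitive cases treated
rho = 0.02                    # per-treated-case probability of resistance emergence

S, I_s, I_r, R = N - 10.0, 10.0, 0.0, 0.0
cum_s, cum_r = 10.0, 0.0
for day in range(300):
    new_s = beta_s * S * I_s / N           # new sensitive-strain infections
    new_r = beta_r * S * I_r / N           # new resistant-strain infections
    emerge = rho * tau * new_s             # resistance emerging under treatment
    rec_s = (tau * gamma_t + (1 - tau) * gamma) * I_s
    rec_r = gamma * I_r                    # treatment assumed ineffective on resistant strain
    S -= new_s + new_r
    I_s += new_s - emerge - rec_s
    I_r += new_r + emerge - rec_r
    R += rec_s + rec_r
    cum_s += new_s - emerge
    cum_r += new_r + emerge

print(f"attack rate: {R / N:.1%}; resistant share of all infections: {cum_r / (cum_s + cum_r):.1%}")
```

Even with a fitness cost, the resistant strain can gain a relative advantage because treatment suppresses only its competitor, which is why stockpile size and treatment intensity appear as decision variables in the thesis's scenarios.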
86

A New Screening Methodology for Mixture Experiments

Weese, Maria 01 May 2010 (has links)
Many materials we use in daily life, including plastics, gasoline, food, and medicine, are mixtures. Mixture experiments, in which the factors are proportions of components and the response depends only on the relative proportions of the components, are an integral part of product development and improvement. However, when the number of components is large and the constraints are complex, experimentation can be a daunting task. We study screening methods in a mixture setting using the framework of the Cox mixture model [1]. Exploiting the easy interpretation of the parameters in the Cox model, we develop methods for screening in a mixture setting: specific methods for adding a component and for removing a component, and a general method for screening a subset of components in mixtures with complex constraints. The variances of our parameter estimates are comparable with those of the commonly used Scheffé model, and our methods reduce the run size of screening experiments for mixtures with a large number of components. We then extend the new screening methods using Evolutionary Operation (EVOP), developed by Box and Draper [2]. EVOP methods use small movements in a subset of process parameters, with replication, to reveal effects above the process noise. Mixture experiments inherently involve small movements (since proportions range only from zero to one), and the effects have large variances. We update the EVOP methods by using sequential testing of effects instead of the confidence interval method originally proposed by Box and Draper, and show that the sequential testing approach reduces the required sample size by as much as 50 percent relative to a fixed-sample-size test, with all other testing parameters held constant. We present two methods for adding a component and a general screening method using a graphical sequential t-test, and provide R code to reproduce the limits for the test.
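The defining constraint of a mixture experiment is that proportions must sum to one, so a component cannot be varied in isolation: moving one component along its Cox-effect direction means rescaling all the others proportionally. The Python sketch below (a stand-in illustration; the thesis itself provides R code) shows that perturbation and a simple two-sample screening test for one component. The response function, effect size, and noise level are invented for illustration.

```python
import numpy as np

# Screening sketch in the spirit of the Cox-model approach: perturb one
# component along its Cox direction and test whether the response shifts.
rng = np.random.default_rng(1)

def cox_direction(x, i, delta):
    """Move component i by delta; rescale the others so the mixture sums to 1."""
    x_new = x * (1 - (x[i] + delta)) / (1 - x[i])
    x_new[i] = x[i] + delta
    return x_new

def response(x):
    # Hypothetical true model: component 0 is active, the others inert.
    return 10.0 + 8.0 * x[0] + rng.normal(scale=0.5)

base = np.array([1 / 3, 1 / 3, 1 / 3])
lo = np.array([response(cox_direction(base, 0, -0.10)) for _ in range(8)])
hi = np.array([response(cox_direction(base, 0, +0.10)) for _ in range(8)])

# Two-sample t statistic for the screening effect of component 0.
diff = hi.mean() - lo.mean()
se = np.sqrt(hi.var(ddof=1) / 8 + lo.var(ddof=1) / 8)
print(f"effect of component 0: {diff:.2f} (t = {diff / se:.1f})")
```

The sequential-testing refinement in the thesis replaces the fixed sample of 8 runs per arm with a test evaluated after each new observation, stopping as soon as the evidence crosses a boundary, which is where the reported sample-size savings of up to 50 percent come from.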
87

Laminar Flow Control Flight Experiment Design

Tucker, Aaron 1975- 14 March 2013 (has links)
Demonstration of spanwise-periodic discrete roughness element laminar flow control (DRE LFC) technology at operationally relevant flight regimes requires extremely stable flow conditions in flight, so a balance must be struck between the capabilities of the host aircraft and the scientific apparatus. A safe, effective, and efficient flight experiment is described to meet the test objectives: a flight test technique is designed to gather research-quality data, flight characteristics are analyzed for data compatibility, and an experiment is designed for data collection and analysis. The objective is to demonstrate DRE effects in a flight environment relevant to transport-category aircraft: Mach numbers of 0.67 to 0.75 and Reynolds numbers of 17.0 to 27.5 million. Within this envelope, flight conditions are determined that meet evaluation criteria for minimum lift coefficient and crossflow transition location; the angle-of-attack data band is determined, and the natural laminar flow characteristics are evaluated. Finally, DRE LFC technology is demonstrated in the angle-of-attack data band at the specified flight conditions. Within the data band, a test angle of attack must be held within a tolerance of ± 0.1° for 15 seconds. A flight test technique is developed that controls angle of attack to this precision: lateral-directional stability characteristics of the host aircraft are exploited to manipulate the position of the flight controls near the wing glove, and directional control inputs are applied in conjunction with lateral control inputs to achieve the desired flow conditions. The data are statistically analyzed in a split-plot factorial that produces a system response model in six variables: angle of attack, Mach number, Reynolds number, DRE height, DRE spacing, and the surface roughness of the leading edge. Aircraft performance is modeled to provide planning tools for efficient flight research while still producing statistically rigorous flight data. The Gulfstream IIB is determined to be suitable for a laminar flow control wing glove experiment, using a low-bank-angle-turn flight test technique to enable precise, repeatable data collection at stabilized flight conditions. Analytical angle-of-attack models and an experimental design were generated to ensure efficient and effective flight research.
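The ± 0.1° for 15 seconds requirement implies a simple data-quality screen during post-flight processing: only windows in which every angle-of-attack sample stays in band count as research-quality test points. The sketch below shows one way such a screen could look; the sample rate, target angle, and synthetic trace are all assumptions, not values from the experiment.

```python
import numpy as np

# Hypothetical stabilized-window screen for the +/-0.1 deg, 15 s requirement.
fs = 20                      # samples per second (assumed)
target = 3.0                 # test angle of attack [deg] (assumed)
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic AoA trace: slow drift plus sensor noise (illustrative only).
aoa = target + 0.05 * np.sin(0.1 * t) + rng.normal(scale=0.01, size=t.size)

in_band = np.abs(aoa - target) <= 0.1
window = 15 * fs             # number of samples in a 15 s window

# A window qualifies only if every sample in it is within the band.
counts = np.convolve(in_band.astype(int), np.ones(window, dtype=int), "valid")
starts = np.flatnonzero(counts == window)
if starts.size:
    print(f"first research-quality window starts at t = {starts[0] / fs:.1f} s")
else:
    print("no stabilized 15 s window found")
```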
88

Model Discrimination Using Markov Chain Monte Carlo Methods

Masoumi, Samira 24 April 2013 (has links)
Model discrimination deals with situations in which several candidate models are available to represent a system; the objective is to find the "best" model among the rivals with respect to predicting system behavior. Empirical and mechanistic models are two important categories. Mechanistic models are developed from physical mechanisms: they can be applied for prediction, but they are also built to improve understanding of the underlying physical mechanism or to estimate physico-chemical parameters of interest. When model discrimination is applied to mechanistic models, the main goal is typically to determine the "correct" underlying physical mechanism. This study focuses on mechanistic models and presents a model discrimination procedure for studying the underlying physical mechanism. Obtaining the necessary data from the real system is a challenge, particularly when experiments are expensive or time-consuming, so it is beneficial to extract the maximum information from the real system using the fewest possible experiments. This research presents a new approach to model discrimination that takes advantage of Monte Carlo (MC) methods: it combines a design-of-experiments (DOE) method with an adaptation of MC model selection methods to obtain a sequential Bayesian Markov chain Monte Carlo model discrimination framework that is general and usable for a wide range of model discrimination problems. The procedure has been applied to four chemical engineering case studies: order of reaction, rate of Fe(III) formation, copolymerization, and RAFT polymerization. The first three benchmark problems allowed the proposed approach to be refined, and applying the sequential Bayesian Monte Carlo model discrimination framework to the RAFT problem contributes to the polymer community by recommending an approach to selecting the correct mechanism.
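The Bayesian core of such a framework is easy to show in miniature: competing models accumulate posterior probability observation by observation. The sketch below discriminates between two rival reaction mechanisms (first- versus second-order decay) with fixed, known parameters and Gaussian noise; the thesis's full framework also handles unknown parameters via MCMC and chooses experiments by DOE, which this toy omits. Rate constants and the noise level are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

# Sequential Bayesian model discrimination between two rival mechanisms.
k1, k2, c0, sd = 0.30, 0.12, 1.0, 0.02   # assumed rate constants, initial conc., noise

def m1(t):  # first-order decay: c = c0 * exp(-k1 * t)
    return c0 * np.exp(-k1 * t)

def m2(t):  # second-order decay: c = c0 / (1 + k2 * c0 * t)
    return c0 / (1 + k2 * c0 * t)

rng = np.random.default_rng(3)
times = np.arange(1.0, 9.0)
data = m1(times) + rng.normal(scale=sd, size=times.size)  # truth here: model 1

p = np.array([0.5, 0.5])                 # equal prior model probabilities
for t_i, y_i in zip(times, data):
    like = np.array([norm.pdf(y_i, m1(t_i), sd), norm.pdf(y_i, m2(t_i), sd)])
    p = p * like
    p = p / p.sum()                      # posterior after this observation
    print(f"t = {t_i:.0f}: P(model 1) = {p[0]:.3f}")
```

Because the two mechanisms diverge as the reaction proceeds, later observations carry far more discriminating power, which is exactly the intuition a DOE step exploits when choosing where to measure next.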
89

Model Refinement and Reduction for the Nitroxide-Mediated Radical Polymerization of Styrene with Applications on the Model-Based Design of Experiments

Hazlett, Mark Daniel 21 September 2012 (has links)
Polystyrene (PS) is an important commodity polymer. In its most common form, PS is a high-molecular-weight linear polymer, typically produced through free-radical polymerization, a well-understood and robust process. This process yields a clear thermoplastic that is hard and rigid with good thermal and melt-flow properties for use in moldings, extrusions, and films. However, polystyrene produced through the free-radical process has a very broad molecular weight distribution, which can lead to poor performance in some applications. Nitroxide-mediated radical polymerization (NMRP), by contrast, can synthesize materials with a much more consistently defined molecular architecture and lower polydispersity than other methods. NMRP is radical polymerization in the presence of a nitroxide mediator, usually a stable radical that can bind to and deactivate the growing polymer chain. This "ties up" some of the free radicals, forming a dynamic equilibrium between active and dormant species through a reversible coupling process. NMRP can be conducted through one of two processes: (1) the bimolecular process, initiated with a conventional peroxide initiator (e.g., BPO) in the presence of a stable nitroxide radical (e.g., TEMPO) that can reversibly bind the growing polymer radical chain, and (2) the unimolecular process, in which a nitroxyl ether is introduced to the system and then decomposes to create both the initiator and mediator radicals. Building on previous experimental investigations in the group with both unimolecular and bimolecular NMRP under various conditions, an earlier model was extended into an improved, detailed mechanistic model. Certain parameters were found to have little impact on overall model performance, suggesting that they could be removed, which also reduces the complexity of the model. Model predictions were compared with experimental data from within the group and from the general literature, and trends were verified. Further work developed an additionally reduced model and tested these different levels of model complexity against data. The aim of this analysis was a model that captures the key process responses in a simple, easy-to-implement manner with accuracy comparable to the complete models; because of its lower complexity, this substantially reduced model would be a much likelier candidate for on-line applications. These model levels were then applied to model-based D-optimal design of experiments, with results compared to those of a parallel Bayesian design project conducted within the group. Additional work used a different optimality criterion, targeted at reducing the parameter correlation that can arise in D-optimal designs. Finally, conclusions and recommendations for future work are given, including a detailed explanation of how a model similar to those described in this thesis could be used in the optimal selection of sensors and design of experiments.
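The activation/deactivation equilibrium that makes NMRP "living" can be isolated in a toy simulation: dormant chains P-N release a growing radical P* and a nitroxide N*, which recombine quickly, so only a tiny fraction of chains is active at any instant. The sketch below integrates just this reversible coupling step, omitting propagation and termination entirely; the rate constants are illustrative placeholders, not the fitted values from the thesis model.

```python
from scipy.integrate import solve_ivp

# Reversible activation/deactivation at the heart of NMRP:
#   P-N  -> P* + N*   (activation, rate k_act)
#   P* + N* -> P-N    (deactivation, rate k_deact)
# Rate constants are assumed, order-of-magnitude placeholders.
k_act, k_deact = 1e-3, 1e7    # [1/s] and [L/(mol*s)]

def rhs(t, y):
    dormant, radical, nitroxide = y
    act = k_act * dormant
    deact = k_deact * radical * nitroxide
    return [deact - act, act - deact, act - deact]

y0 = [0.02, 0.0, 0.0]         # all chains start dormant [mol/L], as in the unimolecular route
sol = solve_ivp(rhs, (0.0, 200.0), y0, method="LSODA", rtol=1e-8, atol=1e-12)

dormant, radical, nitroxide = sol.y[:, -1]
print(f"active fraction of chains at equilibrium: {radical / (radical + dormant):.2e}")
```

The vanishingly small active fraction is what suppresses irreversible termination and narrows the molecular weight distribution; a full mechanistic model like the one refined in this thesis layers propagation, transfer, and termination kinetics on top of this equilibrium.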
90

An Efficient Robust Concept Exploration Method and Sequential Exploratory Experimental Design

Lin, Yao 31 August 2004 (has links)
Experimentation and approximation are essential for efficiency and effectiveness in concurrent engineering analyses of large-scale complex systems. The approximation-based design strategy is not fully utilized in industrial applications, where designers face multi-disciplinary, multi-variable, multi-response, and multi-objective analyses using complicated and expensive-to-run computer analysis codes or physical experiments. With current experimental design and metamodeling techniques, it is difficult to develop acceptable metamodels for irregular responses and to achieve good design solutions in large design spaces at low cost. To circumvent this problem, engineers tend either to adopt low-fidelity simulations or models in which important response properties may be lost, or to restrict the study to very small design spaces. Information from expensive physical or computer experiments is often used for validation in late design stages rather than as an analysis tool in early-stage design, which increases the likelihood of expensive redesign and lengthens time-to-market. In this dissertation, two methods are developed to address these problems: Sequential Exploratory Experimental Design (SEED) and the Efficient Robust Concept Exploration Method (E-RCEM). SEED and E-RCEM help develop acceptable metamodels for irregular responses with expensive experiments and achieve satisficing design solutions in large design spaces with limited computational or monetary resources. It is verified that more accurate metamodels are developed and better design solutions are achieved with SEED and E-RCEM than with traditional approximation-based design methods; together they enable full use of the simulation-and-approximation-based design strategy in engineering and scientific applications. After verifying that the widely used method of leave-one-out cross-validation is theoretically inappropriate for testing the accuracy of metamodels, several preliminary approaches to metamodel validation with additional validation points are proposed. The performance of kriging and MARS metamodels is compared, and a sequential metamodeling approach is proposed that uses different types of metamodels along the design timeline. Several one- and two-variable examples and two engineering examples, the design of pressure vessels and the design of unit cells for linear cellular alloys, are used to support these studies.
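The validation idea argued for here, judging a metamodel on additional held-out points rather than by leave-one-out cross-validation on the training design, is easy to sketch. Below, scikit-learn's GaussianProcessRegressor stands in for a kriging metamodel (the two are mathematically equivalent in this setting); the one-dimensional test function and point counts are assumptions chosen for brevity, not from the dissertation's examples.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Validate a kriging metamodel on additional held-out points.
rng = np.random.default_rng(7)
f = lambda x: np.sin(3 * x) + 0.5 * x          # toy stand-in for an expensive response

# Training design (would come from a DOE in practice).
X_train = rng.uniform(0, 3, size=(12, 1))
y_train = f(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Additional validation points, kept separate from the training design.
X_val = rng.uniform(0, 3, size=(6, 1))
rmse = np.sqrt(np.mean((gp.predict(X_val) - f(X_val).ravel()) ** 2))
print(f"validation RMSE on held-out points: {rmse:.3f}")
```

For an irregular response, an error estimate from genuinely new points reflects predictive accuracy in the regions between design sites, which is precisely what leave-one-out on the design itself can misjudge.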
