471

Analysis of comet rotation through modeling of features in the coma

Harris, Ien 13 May 2022 (has links) (PDF)
An integral field unit fiber array spectrograph was used to observe the emission spectra of radical species (C2, C3, CH, CN, and NH2) in multiple comets. The resulting azimuthal and radial division maps produced from the reduced data provide a unique method of analyzing features traced by these radicals in the comae, as well as how those features behave over time. A Monte Carlo model was developed to simulate the behavior of particles from the outer nucleus and coma of each comet as a function of parameters including rotational period, outflow velocity, and active-area location. The results from the model were used to constrain the physical parameters of three comets: 10P/Tempel 2, C/2009 P1 (Garradd), and 168P/Hergenrother.
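As a rough illustration of the kind of Monte Carlo coma model described above, the sketch below releases particles from a single active area on a rotating nucleus and bins them azimuthally, as one might compare against azimuthal divisions of observed radical emission. All parameter values (rotation period, outflow speed, active-area longitude) are illustrative placeholders, not the thesis's fitted values.

```python
import numpy as np

# Illustrative parameters (not the thesis values)
rotation_period_h = 9.0          # nucleus rotation period [hours]
outflow_speed = 0.5              # radial outflow speed [km/s]
active_lon0 = 0.0                # active-area longitude at t = 0 [rad]
n_particles = 50_000
obs_window_h = 24.0              # simulate 24 hours of emission

rng = np.random.default_rng(42)

# Emission times spread uniformly over the observation window
t_emit = rng.uniform(0.0, obs_window_h, n_particles)          # [hours]

# Longitude of the active area at each emission time (rotation sweeps it around)
omega = 2.0 * np.pi / rotation_period_h                       # [rad/hour]
lon = active_lon0 + omega * t_emit + rng.normal(0.0, 0.1, n_particles)

# Radial distance travelled by each particle by the end of the window
travel_s = (obs_window_h - t_emit) * 3600.0                   # [s]
r = outflow_speed * travel_s                                  # [km]

# Project onto the sky plane: the rotating jet traces a spiral in the coma
x = r * np.cos(lon)
y = r * np.sin(lon)

# Azimuthal "division map": count particles per azimuthal bin
az = np.arctan2(y, x)
counts, edges = np.histogram(az, bins=36, range=(-np.pi, np.pi))
print(counts)
```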
472

Evaluation of novel S-ICD electrode placements using computational modeling / Utvärdering av nya S-ICD-elektrodplaceringar med hjälp av beräkningsmodellering

Plancke, Anne-Marie January 2018 (has links)
OBJECTIVES The aim of this study is to investigate the influence of new subcutaneous Implantable Cardioverter Defibrillator (ICD) electrode placements on the Defibrillation Threshold (DFT), shock impedance, and sites of activation in the heart caused by a low-voltage shock. BACKGROUND The subcutaneous ICD, though it has many advantages owing to its less invasive nature compared with the transvenous ICD, requires higher-energy shocks to terminate ventricular arrhythmias and is not currently thought to be able to reliably pace the heart. METHODS A cohort of nine heart-torso computer models built from computed tomography scans is used to simulate the electric field that occurs during defibrillation. The conventional subcutaneous configuration is tested, along with new “dual sternal”, “L”, and “posterior” configurations, in which an additional ground or shocking electrode is present. The estimated DFT (defined as the voltage required to produce an electric field of 5 V/cm or more in at least 95% of the ventricular myocardium) and the shock impedance are computed for all electrode configurations. In addition, the number and location of stimulated sites in the heart are noted for each configuration after delivery of a low-voltage shock, to assess the respective potential pacing capabilities. RESULTS On average, the addition of an anterior shocking electrode in the “L” or “dual sternal” position leads to a DFT and shock impedance 1.5 times lower than those obtained with the conventional configuration. The small torso has, on average, a DFT 1.6 times lower than the medium and large torsos. Dilated and hypertrophic cardiomyopathies have little impact on the computed DFT. The different electrode configurations produce different numbers and locations of stimulated sites in the heart after delivery of a low-voltage shock. CONCLUSIONS The models suggest that adding an anterior shocking electrode in the “L” or “dual sternal” position to the conventional S-ICD configuration reduces the estimated DFT for all torso sizes. Pacing by the S-ICD appears feasible.
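A minimal sketch of the DFT criterion stated above, assuming the electric-field solution scales linearly with the applied shock voltage (so the threshold can be rescaled from a single unit-voltage simulation). The per-element field values below are synthetic placeholders, not output from the heart-torso models.

```python
import numpy as np

def estimated_dft(field_per_volt, coverage=0.95, target_v_per_cm=5.0):
    """Estimate the defibrillation threshold voltage.

    field_per_volt : array of electric-field magnitudes [V/cm] in each
        myocardial element, computed for a 1 V shock. Because the field
        scales linearly with the applied voltage, the DFT is the voltage
        that lifts the (1 - coverage) quantile element up to the 5 V/cm
        target, so that `coverage` of the tissue meets or exceeds it.
    """
    percentile_field = np.quantile(field_per_volt, 1.0 - coverage)
    return target_v_per_cm / percentile_field

# Synthetic per-element field magnitudes for a 1 V shock (illustrative only)
rng = np.random.default_rng(0)
field_1v = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=100_000)  # V/cm per volt

print(f"Estimated DFT: {estimated_dft(field_1v):.0f} V")
```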
473

A Test of Abelson and Baysinger's (1984) Optimal Turnover Hypothesis in the Context of Public Organizations using Computational Simulation

Kohn, Harold D. 02 May 2008 (has links)
Both practitioners and researchers have long noted that employee turnover creates both positive and negative consequences for an organization. From a management perspective, the question is how much turnover is the right amount. Abelson and Baysinger (1984) first proposed that an optimal level of turnover could be found based on individual, organizational, and environmental factors. However, as Glebbeek and Bax (2004) noted, their approach was too complex to verify empirically, let alone to use at the practitioner level. This study is an attempt to demonstrate whether a logic- and theory-based model and computational simulation of the employee turnover-organizational performance relationship can actually produce Abelson and Baysinger's optimal turnover curve (the inverted U-shape) when studied in the context of a public organization. The modeling approach is based on developing and integrating causal relationships derived from logic and the theory found in the literature. The computational approach parallels that of Scullen, Bergey, and Aiman-Smith (2005). The level of analysis is the functional department of large public organizations, placing the study below the macro level of entire agencies as studied in public administration but above the level of small-group research. The focus is on agencies that employ thousands of employees in specific professional occupations such as engineers, attorneys, and contract specialists. Employee attrition (equivalent to turnover as the model is structured) is the independent variable. Workforce performance capacity and staffing costs are the dependent variables. Work organization and organizational “character” (i.e., culture, HRM policies, and environment) are moderating elements that are held constant. Organizational parameters and initial conditions are varied to explore the problem space through a number of case scenarios of interest. The model examines the effects on the dependent variables of annual turnover rates ranging from 0% to 100% over a 10-year period. Organizational size is held constant over this period. The simulation model introduces several innovative concepts in order to adapt verbal theory to mathematical expression: an organizational stagnation factor, a turbulence factor due to turnover, and workforce performance capacity. Its value to research comes from providing a framework of concepts, relationships, and parametric values that can be empirically tested, for example through comparative analyses of similar workgroups in an organization. Its value for management lies in the conceptual framework it provides for logical actions that can be taken to control turnover and/or mitigate turnover's impact on the organization. The simulation used a 100-employee construct, following Scullen, Bergey, and Aiman-Smith (2005), but was also tested with 1,000 employees, and no significant differences in outcome were found. Test cases were run over a 10-year period; the model was also run out to 30 years to test stability, and no instability was found. Key findings and conclusions of the analysis are as follows: 1. Results demonstrate that Abelson and Baysinger's (1984) inverted-U curve can occur, but only under certain conditions, such as bringing in higher-skilled employees or alleviating stagnation. 2. Results support Scullen, Bergey, and Aiman-Smith's (2005) finding that workforce performance potential increases when the quality of replacement employees is increased. 3. Organizational type, as defined in the public administration literature, does not affect the results. In addition, an analysis of recent empirical work by Meier and Hicklin (2007), who examined the relationship between employee turnover and student test performance using data from Texas school districts, is provided as an Addendum. This analysis demonstrates how the modeling and simulation methodology can be used to analyze and contribute to theory development based in empirical research. / Ph. D.
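The sketch below is a toy illustration, not the dissertation's model: it combines a stagnation factor, replacement-hire quality, and a turbulence penalty (all with made-up parameter values) to show how an inverted-U relationship between annual turnover rate and workforce performance capacity can emerge under such assumptions.

```python
import numpy as np

def simulate_capacity(turnover_rate, years=10, n=100,
                      hire_skill=1.0, stagnation=0.02, turbulence=0.5, seed=0):
    """Mean workforce performance capacity over a 10-year run.

    Illustrative mechanism only: incumbents' skill decays slowly
    ("stagnation"), replacements arrive at hire_skill, and turnover imposes
    a turbulence penalty on realized capacity.
    """
    rng = np.random.default_rng(seed)
    skill = np.full(n, 0.9)                    # starting skill levels
    capacities = []
    for _ in range(years):
        skill *= (1.0 - stagnation)            # stagnation of incumbents
        leavers = rng.random(n) < turnover_rate
        skill[leavers] = hire_skill            # replacements join at hire_skill
        realized = skill.mean() * (1.0 - turbulence * turnover_rate)
        capacities.append(realized)
    return float(np.mean(capacities))

# Sweep the annual turnover rate from 0% to 100%
for rate in np.linspace(0.0, 1.0, 11):
    print(f"turnover {rate:4.0%}: capacity {simulate_capacity(rate):.3f}")
```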
474

Multivariate Analysis of Factors Regulating the Formation of Synthetic Allophane and Imogolite Nanoparticles

Bauer, McNeill John 30 August 2019 (has links)
Imogolite and allophane are nanosized aluminosilicates with high value in industrial and technological applications; however, it remains unclear what factors control their formation and abundance in nature and in the laboratory. This work investigated the complex system of physical and chemical conditions that influences the formation of these nanominerals. Samples were synthesized and analyzed by powder X-ray diffraction, in situ and ex situ small-angle X-ray scattering, and high-resolution transmission electron microscopy. Multivariate regression analysis combined with linear combination fitting of pXRD patterns was used to model the influence of different synthesis conditions, including concentration, hydrolysis ratio and rate, and Al:Si elemental ratio, on the particle size of the initial precipitate and on the phase abundances of the final products. The developed models described how increasing Al:Si ratio, particle size, and hydrolysis ratio increased the proportion of imogolite produced, while increasing the concentration of starting reagents decreased it. The model confidence levels were >99%, and the models explained 86 to 98% of the data variance. The models indicated that hydrolysis ratio was the strongest control on the final phase composition, and they consistently predicted experimentally derived results from other studies. These results demonstrate the ability of this approach to untangle complex geochemical systems with competing influences, and they provide insight into the formation of imogolite and allophane. / Master of Science / Allophane and imogolite are nanosized aluminosilicate minerals that strongly control the physical and chemical behavior of soil, and they hold promise for use in technological applications. In nature, allophane and imogolite are often observed together in varying proportions. Similarly, laboratory synthesis by various methods usually does not result in pure phases. These observations suggest that they form at the same time, over a wide range of solution chemical conditions. It remains unclear what factors determine how and when these phases form in solution, which limits our understanding of their occurrence in nature and the laboratory. The objective of this study is to understand and explain which solution chemical and physical conditions control the formation of synthetic imogolite and allophane. We did this by systematically varying the starting conditions of particle formation and then using analytical and statistical methods to develop a model that describes how each starting condition (concentration, size, pH, atomic ratio, and hydrolysis ratio) affects the phase abundance of the particles.
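A minimal sketch of linear combination fitting of the kind mentioned above: an observed pattern is modeled as a non-negative weighted sum of end-member reference patterns, and the weights are normalized to phase proportions. The reference and "observed" patterns here are synthetic stand-ins, not measured pXRD data.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic "reference patterns" for end-member phases (columns), purely
# illustrative stand-ins for measured imogolite / allophane / amorphous patterns.
two_theta = np.linspace(5, 60, 500)

def peak(center, width, height):
    return height * np.exp(-0.5 * ((two_theta - center) / width) ** 2)

ref_imogolite = peak(9.5, 0.8, 1.0) + peak(27.0, 1.0, 0.4)
ref_allophane = peak(26.0, 3.0, 0.6) + peak(40.0, 3.5, 0.3)
ref_amorphous = np.full_like(two_theta, 0.1)
A = np.column_stack([ref_imogolite, ref_allophane, ref_amorphous])

# "Observed" pattern: a known mixture plus noise, to check the recovery
true_w = np.array([0.55, 0.35, 0.10])
observed = A @ true_w + np.random.default_rng(1).normal(0, 0.01, two_theta.size)

# Non-negative least squares gives the phase weights; normalize to proportions
weights, _ = nnls(A, observed)
proportions = weights / weights.sum()
print(dict(zip(["imogolite", "allophane", "amorphous"], proportions.round(3))))
```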
475

Hybrid Model for Monitoring and Optimization of Distillation Columns

Aljuhani, Fahad January 2016 (has links)
Distillation columns are primary equipment in petrochemical plants, gas plants, and refineries, and their energy consumption is estimated at 40% of total plant energy consumption. Optimization of distillation columns therefore has the potential to save large amounts of energy and contributes to plant-wide optimization. Currently, rigorous tray-to-tray models are used to describe column separation with high accuracy. Rigorous distillation models are used in design, in optimization, and as part of on-line real-time optimization (RTO) applications. Due to their large number of nonlinear equations, however, rigorous distillation models are not suitable for inclusion in optimization models of complex plants (e.g., refineries), since they would make the model too large. For this reason, current practice in plant-wide optimization for planning or scheduling is to include simplified models. The accuracy of these simplified models is significantly lower than that of the rigorous models, causing discrepancy between production planning and RTO decisions. This work describes a reduced-size hybrid model of distillation columns, suitable for use as a stand-alone tool for an individual column or as part of a complete plant model, either for RTO or for production planning. The hybrid models are comprised of first-principles material and energy balances and empirical models describing separation in the column, and they can be used for production planning, scheduling, and optimization. In addition, this work describes the development of an inferential model for estimating stream purity from real-time data. The inferential model eliminates the need for gas chromatography (GC) analyzers and can be used for monitoring and control purposes. Predictions from the models are sufficiently accurate, and the small size of the models enables a significant reduction in the size of total plant models. / Thesis / Master of Applied Science (MASc)
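A minimal sketch of the hybrid idea described above, assuming a binary separation: first-principles component balances around the column are closed with an empirical correlation for key-component recoveries. The correlation form and the numbers are illustrative, not fitted column data from the thesis.

```python
import numpy as np

def empirical_recoveries(reflux_ratio):
    """Illustrative empirical separation model (not fitted to real column data):
    recoveries of light key to distillate and heavy key to bottoms increase
    with reflux ratio and saturate below 1."""
    r_light = 1.0 - 0.5 * np.exp(-0.8 * reflux_ratio)
    r_heavy = 1.0 - 0.6 * np.exp(-0.6 * reflux_ratio)
    return r_light, r_heavy

def hybrid_column(feed_kmol_h, z_light, reflux_ratio):
    """First-principles component balances closed with the empirical recoveries."""
    r_light, r_heavy = empirical_recoveries(reflux_ratio)
    light_feed = feed_kmol_h * z_light
    heavy_feed = feed_kmol_h * (1.0 - z_light)

    light_D = r_light * light_feed            # light key recovered overhead
    heavy_D = (1.0 - r_heavy) * heavy_feed    # heavy key slipping overhead
    D = light_D + heavy_D
    B = feed_kmol_h - D                       # overall material balance
    x_D = light_D / D                         # distillate light-key purity
    x_B = (light_feed - light_D) / B          # bottoms light-key impurity
    return D, B, x_D, x_B

D, B, x_D, x_B = hybrid_column(feed_kmol_h=100.0, z_light=0.45, reflux_ratio=3.0)
print(f"D = {D:.1f} kmol/h, B = {B:.1f} kmol/h, xD = {x_D:.3f}, xB = {x_B:.3f}")
```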
476

Modeling and Parameter Estimation in Biological Applications

Macdonald, Brian January 2016 (has links)
Biological systems, processes, and applications present modeling challenges in the form of system complexity, limited steady-state availability, and limited measurements. One primary issue is the lack of well-estimated parameters. This thesis presents two contributions in the area of modeling and parameter estimation for these kinds of biological processes. The primary contribution is the development of an adaptive parameter estimation process comprising parameter selection, evaluation, and estimation, applied together with modeling of cell growth in culture. The second contribution shows the importance of parameter estimation for the evaluation of experiment and process design. / Thesis / Master of Applied Science (MASc)
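A minimal sketch of the kind of parameter estimation involved, assuming a simple closed-form logistic growth model and synthetic, sparsely sampled culture data; the thesis's adaptive selection/evaluation/estimation procedure is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_growth(t, mu, K, x0):
    """Closed-form logistic growth: specific rate mu, capacity K, inoculum x0."""
    return K / (1.0 + ((K - x0) / x0) * np.exp(-mu * t))

# Synthetic, sparsely sampled "culture" data with measurement noise
rng = np.random.default_rng(3)
t_meas = np.array([0, 12, 24, 36, 48, 72, 96], dtype=float)      # hours
x_true = logistic_growth(t_meas, mu=0.08, K=5.0, x0=0.1)
x_meas = x_true + rng.normal(0.0, 0.1, t_meas.size)

# Estimate the parameters and inspect their uncertainty via the covariance
p0 = [0.05, 4.0, 0.2]                                            # initial guesses
popt, pcov = curve_fit(logistic_growth, t_meas, x_meas, p0=p0)
perr = np.sqrt(np.diag(pcov))                                    # 1-sigma estimates
for name, val, err in zip(["mu", "K", "x0"], popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
```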
477

Parent Training in Video Modeling: Comparing Least to Greatest Supports

Robbins, Janae Hammond 08 December 2023 (has links) (PDF)
This study examines one type of parent training—Video Modeling (VM)—and how intensive the training needs to be for a parent to learn to implement this intervention effectively. The researchers observed three types of training: self-training, didactic training through a pre-recorded lesson, and one-on-one training. A repeated-acquisition single-case design across participants was used, along with a control group. The participants were nine parents from the areas surrounding Brigham Young University, randomly assigned to either the intervention group or the control group. Four of the five participants in the intervention (treatment) group achieved mastery once they received didactic training; one participant in the intervention group required one-on-one feedback before achieving mastery. These results support previous studies' findings that parents can successfully create video models and implement the interventions with high fidelity, and that the necessary training can be minimal in comparison with other types of parent training. This suggests that didactic training may be an alternative to the often costly professional training given to parents. Recommendations for future research are also included in this paper.
478

The Path to Measles Elimination in the United States

O'Meara, Elizabeth January 2022 (has links)
The eradication of infectious diseases has been of key interest for many years. While the World Health Organization (WHO) and other health organizations typically track the progress of disease eradication by whether regions are meeting their eradication targets, being able to quantify and/or visually track the eradication of a disease could prove beneficial. This thesis constructs the “canonical path” to the elimination of measles in the United States (US), using methods similar to those defined by Graham et al. [1]. We build on preliminary work conducted to fulfill the requirements of an Honours Bachelor of Science in Integrated Science at McMaster University, and on the analysis conducted by Graham et al. [1], by investigating the sensitivity of the path to changes in its definition, as well as how the path changes when the characteristics of the disease are changed. This thesis demonstrates that a canonical path can be used on a smaller, country-level scale, by using US state-level data to create the US canonical path. We also determine the model structures necessary to simulate the canonical path, which suggests that the canonical path method is most useful for eradicable diseases for which we have ample knowledge of the disease, including the natural history of infection and vaccination. We also predict how the path is affected by the pattern of seasonality and by the natural history of infection. Overall, the analysis suggests that applying this method to other countries that have eliminated measles, or to other diseases for which elimination has been achieved, may yield insight into the successes and failures of elimination strategies. This knowledge could help the WHO and other organizations improve their disease elimination and eradication strategies in the future. / Thesis / Master of Science (MSc)
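For illustration only, a generic SIR model with births, routine vaccination, and seasonal forcing of the kind commonly used for measles; the parameter values and structure are textbook placeholders, not the model structures identified in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_vax(t, y, beta0, gamma, mu, p_vax, seasonality):
    """SIR model with births, deaths, routine vaccination, and seasonal forcing.
    A generic textbook structure, not the thesis's fitted model."""
    S, I, R = y
    beta = beta0 * (1.0 + seasonality * np.cos(2.0 * np.pi * t))  # t in years
    dS = mu * (1.0 - p_vax) - beta * S * I - mu * S
    dI = beta * S * I - gamma * I - mu * I
    dR = mu * p_vax + gamma * I - mu * R
    return [dS, dI, dR]

# Illustrative measles-like parameters (per year)
params = dict(beta0=500.0, gamma=365.0 / 13.0, mu=1.0 / 70.0,
              p_vax=0.95, seasonality=0.15)

sol = solve_ivp(sir_vax, t_span=(0.0, 50.0), y0=[0.06, 1e-4, 0.9399],
                args=tuple(params.values()), dense_output=True, max_step=0.01)

# Track the decline of prevalence toward elimination at 95% coverage
years = np.linspace(0, 50, 11)
for yr, prev in zip(years, sol.sol(years)[1]):
    print(f"year {yr:4.0f}: prevalence {prev:.2e}")
```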
479

Data-Driven Modeling and Control of Batch and Continuous Processes using Subspace Methods

Patel, Nikesh January 2022 (has links)
This thesis focuses on subspace-based data-driven modeling and control techniques for batch and continuous processes. Motivated by the increasing amount of process data, data-driven modeling approaches have become more popular; compared with first-principles models, they can better capture the true process dynamics. However, data-driven models rely solely on mathematical correlations and are subject to overfitting. As such, applying first-principles-based constraints to the subspace model can lead to better predictions and subsequently better control. This thesis demonstrates that adding process gain constraints leads to a more accurate constrained model, and that using the constrained model in a model predictive control (MPC) algorithm allows the system to reach desired setpoints faster. The novel MPC algorithm described in this thesis is specially formulated as a quadratic program that includes a feedthrough matrix. This term is traditionally ignored in industry; however, this thesis shows that its inclusion leads to more accurate process control. Given the importance of accurate process data during model identification, the missing-data problem is another area that needs improvement. There are two main missing-data scenarios: infrequent sampling/sensor errors and quality variables. In the infrequent-sampling case, data points are missing at set intervals, so correlating between different batches is not possible because the data are missing in the same places everywhere. The quality-variable case is different in that quality measurements require additional expensive tests, making them unavailable for over 90% of the observations at the regular sampling frequency. This thesis presents a novel subspace approach using partial least squares and principal component analysis to identify a subspace model. This algorithm is used to handle each missing-data case in both simulated (polymethyl methacrylate) and industrial (bioreactor) processes with improved performance. / Dissertation / Doctor of Philosophy (PhD) / An important consideration in chemical processes is the maximization of production and product quality. To that end, developing an accurate controller is necessary to avoid wasting resources and producing off-spec products. All advanced process control approaches rely on the accuracy of the process model; it is therefore important to identify the best model. This thesis presents two novel subspace-based modeling approaches, the first using first-principles-based constraints and the second handling missing data. These models are then applied within a modified state-space model and a predictive control strategy to show that the improved models lead to improved control. The approaches in this work are tested on both simulated (polymethyl methacrylate) and industrial (bioreactor) processes.
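A minimal sketch of how a feedthrough matrix enters the stacked prediction equations of a state-space MPC formulation, which is the term the abstract notes is usually ignored; the model matrices below are illustrative, and the subspace identification and quadratic-program details of the thesis are not reproduced.

```python
import numpy as np

def prediction_matrices(A, B, C, D, N):
    """Stack predictions y_k .. y_{k+N-1} as  Y = Phi x_k + Gamma U,
    with U = [u_k; ...; u_{k+N-1}], for  x+ = A x + B u,  y = C x + D u.
    The feedthrough D appears on Gamma's block diagonal, so u_{k+i} acts on
    y_{k+i} within the same step -- the term that is often dropped."""
    nx, nu = B.shape
    ny = C.shape[0]
    Phi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N)])
    Gamma = np.zeros((N * ny, N * nu))
    for i in range(N):
        for j in range(i + 1):
            if j < i:
                blk = C @ np.linalg.matrix_power(A, i - 1 - j) @ B
            else:
                blk = D                      # same-step (feedthrough) effect
            Gamma[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = blk
    return Phi, Gamma

# Illustrative 2-state SISO model (not an identified subspace model)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.05]])

Phi, Gamma = prediction_matrices(A, B, C, D, N=5)
print(Phi.shape, Gamma.shape)   # (5, 2) (5, 5)
```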
480

Support Vector Machines for Speech Recognition

Ganapathiraju, Aravind 11 May 2002 (has links)
Hidden Markov models (HMM) with Gaussian mixture observation densities are the dominant approach in speech recognition. These systems typically use a representational model for acoustic modeling which can often be prone to overfitting and does not translate to improved discrimination. We propose a new paradigm centered on principles of structural risk minimization using a discriminative framework for speech recognition based on support vector machines (SVMs). SVMs have the ability to simultaneously optimize the representational and discriminative ability of the acoustic classifiers. We have developed the first SVM-based large vocabulary speech recognition system that improves performance over traditional HMM-based systems. This hybrid system achieves a state-of-the-art word error rate of 10.6% on a continuous alphadigit task, a 10% improvement relative to an HMM system. On SWITCHBOARD, a large vocabulary task, the system improves performance over a traditional HMM system from 41.6% word error rate to 40.6%. This dissertation discusses several practical issues that arise when SVMs are incorporated into the hybrid system.
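For illustration, a toy SVM classifier trained on synthetic frame-level acoustic-style feature vectors using scikit-learn (a modern library, not the software used in the dissertation); the real hybrid system operates on segment-level features within an HMM framework.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Toy stand-in for frame-level acoustic features (e.g., 13-dim cepstral vectors)
# for two phone classes; purely synthetic data for illustration.
rng = np.random.default_rng(7)
n, dim = 2000, 13
class0 = rng.normal(loc=0.0, scale=1.0, size=(n, dim))
class1 = rng.normal(loc=0.8, scale=1.2, size=(n, dim))
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM: maximizes the margin between classes (structural risk
# minimization) rather than fitting class-conditional densities.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```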
