  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

An Empirical Investigation of Critical Success Factors for Continuous Improvement Projects in Hospitals

Gonzalez Aleu Gonzalez, Fernando 17 August 2016 (has links)
A continuous improvement project (CIP) is a structured improvement project using a team of people, typically representing different departments or units in the organization, working to improve a process or work area over a relatively short period of time, such as a few days or up to several months. A CIP may use different improvement methodologies and tools, and may thus be defined according to the improvement approach. For instance, an organization adopting Lean as an improvement approach is likely to have CIPs implementing Lean tools, such as 5S or value stream mapping. These projects may be referred to as Lean projects in general, although they may also represent accelerated improvement projects such as Kaizen events, Kaizen blitz, or rapid improvement projects. Alternatively, an organization utilizing Six Sigma as an improvement approach may have Six Sigma projects that use the Define-Measure-Analyze-Improve-Control (DMAIC) process and statistical tools. Some organizations adopt an integrated improvement approach, such as Lean Six Sigma, and therefore may have CIPs with an even broader set of tools from which to choose. Lastly, many organizations may have an improvement approach not characterized by any single set of improvement processes and tools; these may be thought of generally as process improvement, or quality improvement, projects using a traditional methodology such as plan-do-study/check-act (PDSA or PDCA). In this dissertation, all of these types of improvement projects are referred to as CIPs. Since the 1980s, hospitals have been using CIPs to address problems such as quality in healthcare delivery, internal process efficiency, communication and coordination, and the cost of services. Some hospitals have achieved significant improvements, such as reducing the turnaround time for clinical laboratory results by 60 percent and reducing instrument decontamination and sterilization cycle time by 70 percent.
However, as with many other organizations, hospitals often experience difficulty achieving their desired level of improvement with CIPs. Therefore, the purpose of this dissertation is to identify the critical success factors (CSFs) related to CIP success. In order to achieve this goal, five objectives were addressed: creating a methodology to assess the maturity or evolution of a research field (manuscript #1), identifying a comprehensive list of CSFs for CIPs (manuscript #2), assessing the maturity of the published literature on CIPs in hospitals (manuscript #3), identifying the most important factors related to CIPs in hospitals (manuscript #4), and conducting an empirical investigation to define the CSFs for CIPs in hospital settings (manuscripts #5 and #6). This investigation was conducted in three phases: research framing, variable reduction, and model development and testing. During these phases, the researcher used the following methodologies and data collection tools: systematic literature review, maturity framework (developed as part of this dissertation), expert study, retrospective survey questionnaire, exploratory factor analysis, partial least squares structural equation modeling, and regression modeling. A maturity framework with nine dimensions was created (manuscript #1) and applied in order to identify a list of 53 factors related to CIPs in general, involving any organization (manuscript #2). Additionally, the maturity framework was used to assess the literature available on CIPs in hospitals, considering only the authorship characteristics dimension (manuscript #3). Considering the frequency of new authors per year, the relatively recent formation of research groups, and the limited set of predominant authors, the research field of CIPs in hospitals is one with opportunities for improving maturity.
Using the systematic literature review from manuscript #3, the list of 53 factors, and the list of predominant authors, a review of the literature was conducted, along with an expert study, to more fully characterize the importance of various factors (manuscript #4). A conclusion from this particular work was that it was not possible to reduce the list of 53 factors based on these results; thus, a field study using the complete comprehensive list of factors was determined to have stronger practical implications. A field study was conducted to identify the factors most related to CIP perceived success (manuscript #5) and CIP goal achievement (manuscript #6). The final results and practical implications of this dissertation consist of the identification of the following CSFs for CIP success in hospitals: Goal Characteristics, Organizational Processes, Improvement Processes, and Team Operation. These CSFs include several specific factors that, to the researcher's knowledge, have not been previously studied in empirical investigations: goal development process, organizational policies and procedures, CIP progress reporting, and CIP technical documentation. Practitioners involved with CIPs, such as CIP leaders, facilitators, stakeholders/customers, and continuous improvement managers/leaders, can utilize these results to increase the likelihood of success by considering these factors when planning and conducting CIPs. / Ph. D.
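The variable-reduction step above relies on exploratory factor analysis. As a rough illustration of that technique only (synthetic responses, not the dissertation's survey data or factor structure), a minimal sketch:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical survey data: 200 respondents rating 12 CIP-related items,
# generated from 3 latent factors plus noise (all numbers are illustrative).
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 12))
X = latent @ loadings + 0.5 * rng.normal(size=(200, 12))

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(X)     # factor scores per respondent
print(fa.components_.shape)      # loadings of the 12 items on the 3 factors
```

In a real study the number of factors would be chosen from the data (e.g. by scree plot), not fixed in advance as here.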
552

Parameter Estimation Methods for Ordinary Differential Equation Models with Applications to Microbiology

Krueger, Justin Michael 04 August 2017 (has links)
The compositions of in-host microbial communities (microbiota) play a significant role in host health, and a better understanding of the microbiota's role in a host's transition from health to disease or vice versa could lead to novel medical treatments. One of the first steps toward this understanding is modeling interaction dynamics of the microbiota, which can be exceedingly challenging given the complexity of the dynamics and difficulties in collecting sufficient data. Methods such as principal differential analysis, dynamic flux estimation, and others have been developed to overcome these challenges for ordinary differential equation models. Despite their advantages, these methods are still vastly underutilized in mathematical biology, and one potential reason for this is their sophisticated implementation. While this work focuses on applying principal differential analysis to microbiota data, we also provide comprehensive details regarding the derivation and numerics of this method. For further validation of the method, we demonstrate the feasibility of principal differential analysis using simulation studies and then apply the method to intestinal and vaginal microbiota data. In working with these data, we capture experimentally confirmed dynamics while also revealing potential new insights into those dynamics. We also explore how we find the forward solution of the model differential equation in the context of principal differential analysis, which amounts to a least-squares finite element method. We provide alternative ideas for how to use the least-squares finite element method to find the forward solution and share the insights we gain from highlighting this piece of the larger parameter estimation problem. / Ph. D.
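The parameter-estimation setting described above can be illustrated with a direct least-squares fit of an ODE model to noisy data (a sketch using a made-up logistic model, not principal differential analysis itself, which avoids repeated forward solves):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical example: recover (r, K) of logistic growth dx/dt = r*x*(1 - x/K)
# from noisy observations of one trajectory.
def logistic(t, x, r, K):
    return r * x * (1 - x / K)

t_obs = np.linspace(0, 10, 25)
true = (0.8, 5.0)
sol = solve_ivp(logistic, (0, 10), [0.1], t_eval=t_obs, args=true)
rng = np.random.default_rng(1)
y_obs = sol.y[0] + 0.05 * rng.normal(size=t_obs.size)

def residuals(p):
    fit = solve_ivp(logistic, (0, 10), [0.1], t_eval=t_obs, args=tuple(p))
    return fit.y[0] - y_obs

# Bounds keep the iterates in a physically sensible region.
est = least_squares(residuals, x0=[0.5, 3.0],
                    bounds=([0.1, 1.0], [5.0, 20.0])).x
print(est)  # close to (0.8, 5.0)
```

Principal differential analysis instead fits a smooth curve to the data first and penalizes the ODE residual of that curve, avoiding the nested forward solves seen here.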
553

New Methods for Synchrophasor Measurement

Zhang, Yingchen 09 February 2011 (has links)
Recent developments in smart grid technology have spawned interest in the use of phasor measurement units to help create a reliable power system transmission and distribution infrastructure. Wide-area monitoring systems (WAMSs) utilizing synchrophasor measurements can help with understanding, forecasting, or even controlling the status of power grid stability in real time. A power system Frequency Monitoring Network (FNET) was first proposed in 2001 and was established in 2004. As a pioneering WAMS, it serves the entire North American power grid through advanced situational awareness techniques, such as real-time event alerts, accurate event location estimation, animated event visualization, and post-event analysis. Traditionally, phasor measurement units (PMUs) have utilized signals obtained from current transformers (CTs) to compute current phasors. Unfortunately, this requires that CTs be directly connected to buses, transformers, or power lines. Chapters 2 and 3 introduce an innovative phasor measurement instrument, the Non-contact Frequency Disturbance Recorder (NFDR), which uses the magnetic and electric fields generated by power transmission lines to obtain current phasor measurements. The NFDR is developed on the same hardware platform as the Frequency Disturbance Recorder (FDR), which is essentially a single-phase PMU. Prototype testing of the NFDR in both laboratory and field environments was performed. Testing results show that the measurement accuracy of the NFDR satisfies the requirements for power system dynamics observation. Researchers have been developing various techniques for power system phasor measurement and frequency estimation, due to their importance in reflecting system health. Each method has its own pros and cons regarding accuracy and speed.
The DFT (Discrete Fourier Transform)-based algorithm adopted by the FDR device is particularly suitable for tracking system dynamic changes and is immune to harmonic distortion, but it has not proven very robust when the input signal is polluted by random noise. Chapter 4 discusses least-mean-squares-based methods for power system frequency tracking and compares them with the DFT-based algorithm. Wide-area monitoring systems based on real-time PMU measurements can provide great visibility into angle instability conditions. Chapter 5 focuses on developing an early warning algorithm on the FNET platform. / Ph. D.
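The idea behind a DFT-based phasor estimate can be sketched as follows (an illustrative one-cycle estimator with made-up signal parameters, not the FDR's actual algorithm):

```python
import numpy as np

# One-cycle DFT phasor estimate of a 60 Hz signal: correlate one nominal
# cycle of samples with a complex exponential at the nominal frequency.
f0, fs = 60.0, 1440.0          # nominal frequency; sampling rate (24 samples/cycle)
N = int(fs / f0)               # samples per nominal cycle
n = np.arange(N)
A, phi = 1.2, np.pi / 6        # illustrative amplitude and phase
x = A * np.cos(2 * np.pi * f0 * n / fs + phi)

phasor = (2 / N) * np.sum(x * np.exp(-1j * 2 * np.pi * n / N))
print(abs(phasor), np.angle(phasor))   # recovers A = 1.2 and phi = pi/6
```

When the actual frequency drifts off nominal, this window no longer spans an integer number of cycles and the estimate picks up leakage error, which is one reason frequency tracking (the subject of Chapter 4) matters.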
554

Real-time system identification using intelligent algorithms

Madkour, A.A.M., Hossain, M. Alamgir, Dahal, Keshav P., Yu, H. January 2004 (has links)
This research presents an investigation into the development of real-time system identification using intelligent algorithms. A simulation platform of flexible beam vibration using the finite difference (FD) method is used to demonstrate the real-time capabilities of the identification algorithms. A number of approaches and algorithms for online system identification are explored and evaluated to demonstrate the merits of the algorithms for real-time implementation. These approaches include identification using (a) the traditional recursive least squares (RLS) filter, (b) genetic algorithms (GAs), and (c) an adaptive neuro-fuzzy inference system (ANFIS) model. The above algorithms are used to estimate a linear discrete second-order model of the flexible beam vibration. The model is implemented, tested, and validated to evaluate and demonstrate the merits of the algorithms for real-time system identification. Finally, a comparative performance analysis of the error convergence and real-time computational complexity of the algorithms is presented and discussed through a set of experiments.
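The RLS approach in (a) can be sketched as follows (a minimal noise-free identification of a hypothetical second-order model, not the paper's beam simulation; coefficients and the forgetting factor are illustrative):

```python
import numpy as np

# Identify y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# with recursive least squares (RLS).
rng = np.random.default_rng(2)
true_theta = np.array([1.5, -0.7, 1.0, 0.5])   # made-up stable system
u = rng.normal(size=500)                        # excitation signal
y = np.zeros(500)
for k in range(2, 500):
    y[k] = true_theta @ np.array([y[k-1], y[k-2], u[k-1], u[k-2]])

theta = np.zeros(4)            # parameter estimate
P = 1e3 * np.eye(4)            # inverse correlation matrix
lam = 0.99                     # forgetting factor
for k in range(2, 500):
    phi = np.array([y[k-1], y[k-2], u[k-1], u[k-2]])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta += gain * (y[k] - phi @ theta)       # correct by prediction error
    P = (P - np.outer(gain, phi @ P)) / lam

print(theta)  # converges to [1.5, -0.7, 1.0, 0.5]
```

The forgetting factor lets the filter track slowly varying dynamics, at the cost of noisier estimates, which is the accuracy/speed trade-off the abstract alludes to.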
555

Algorithms for Tomographic Reconstruction of Rectangular Temperature Distributions using Orthogonal Acoustic Rays

Kim, Chuyoung 09 September 2016 (has links)
Non-intrusive acoustic thermometry using an acoustic impulse generator and two microphones is developed and integrated with tomographic techniques to reconstruct temperature contours. A low-velocity plume at around 450 °F exiting through a rectangular duct (3.25 by 10 inches) was used for validation and reconstruction. A static temperature relative error of 0.3% compared with thermocouple-measured data was achieved using a cross-correlation algorithm to calculate the speed of sound. Tomographic reconstruction algorithms, the simplified multiplicative algebraic reconstruction technique (SMART) and the least squares method (LSQR), are investigated for visualizing temperature contours of the heated plume. A rectangular arrangement of the transmitter and microphones with a traversing mechanism collected two orthogonal sets of acoustic projection data. Both reconstruction techniques successfully recreated the overall characteristics of the contour; however, in future work, the integration of the refraction effect and the implementation of additional angled projections are required to improve local temperature estimation accuracy. The root-mean-square percentage errors of reconstructing non-uniform, asymmetric temperature contours using the SMART and LSQR methods are 20% and 19%, respectively. / Master of Science
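The cross-correlation time-of-flight idea can be sketched as follows (the pulse shape, sampling rate, delay, and microphone spacing are all made up for illustration):

```python
import numpy as np

# A pulse arrives at the second microphone after a propagation delay; the
# peak of the cross-correlation locates that delay, giving the speed of sound.
fs = 100_000                                 # sampling rate, Hz
t = np.arange(2048) / fs
pulse = np.exp(-((t - 0.002) / 1e-4) ** 2)   # Gaussian pulse at mic 1
delay_samples = 37                           # true delay (illustrative)
mic1 = pulse
mic2 = np.roll(pulse, delay_samples)

xcorr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(xcorr) - (len(mic1) - 1)     # estimated delay in samples
spacing = 0.125                              # assumed mic spacing, meters
speed = spacing / (lag / fs)
print(lag, speed)                            # 37 samples, about 338 m/s
```

Since the speed of sound in a gas scales with the square root of absolute temperature, the measured speed along each acoustic path yields a path-averaged temperature, which is what the tomographic step then inverts into a contour.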
556

A legal and economic analysis of the Virginia Chesapeake Bay Preservation Act

Richardson, Jesse J. 12 September 2009 (has links)
This paper focuses on the Virginia Chesapeake Bay Preservation Act (the Act), the regulations promulgated thereunder, and several questions arising from an examination of the Act and regulations. Specifically, the analysis examines the agricultural provisions within the regulations and asks whether the provisions are economically desirable, as well as legally enforceable. The provisions of the Act and the regulations constitute the major focus of Chapter One. Chapter One's discussion concludes with an analysis of the regulations’ main pollution prevention tool, the vegetative buffer strip, and a brief listing of various issues and controversies involving the Act and the regulations. Chapter Two introduces a linear programming model designed to determine the most cost-effective means of pollution control based on farmer profits under several regulatory scenarios. The results imply that the mandatory provisions of the regulations prevent farmers from achieving the desired level of pollution reduction in the least costly fashion. The takings issue, a major concern for all environmental legislation, forms the focus of Chapter Three. This analysis considers the provisions of the Act and regulations within the historic context of the takings clause of the Fifth Amendment of the United States Constitution, as well as the takings clause of the Virginia Constitution, and determines the legal validity of the provisions. Finally, Chapter Four presents several criticisms of the regulations as presently constituted. Suggestions for a more cost-effective regulatory scheme conclude the analysis. / Master of Science
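The kind of linear program described in Chapter Two can be sketched as follows (the crops, profit coefficients, and runoff limits are entirely hypothetical, not the thesis's model):

```python
from scipy.optimize import linprog

# Choose acres of two crops to maximize profit subject to a land limit
# and a cap on nutrient runoff (all numbers illustrative).
profit = [-120, -90]            # $/acre, negated because linprog minimizes
A_ub = [[1, 1],                 # total acres      <= 100
        [4, 2]]                 # runoff units/acre <= 320
b_ub = [100, 320]
res = linprog(profit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)          # optimal acreage mix and total profit
```

Comparing the optimum under different constraint sets (e.g., with and without a mandated buffer-strip acreage) is how such a model can show whether a regulation forces farmers off the least-cost abatement path.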
557

Non-invasive Estimation of Skin Chromophores Using Hyperspectral Imaging

Karambor Chakravarty, Sriya 07 March 2024 (has links)
Melanomas account for more than 1.7% of global cancer diagnoses and about 1% of all skin cancer diagnoses in the United States. This type of cancer occurs in the melanin-producing cells in the epidermis and exhibits distinctive variations in melanin and blood concentration values in the form of skin lesions. The current approach for evaluating skin cancer lesions involves visual inspection with a dermatoscope, typically followed by biopsy and histopathological analysis. However, decreasing the risk of misdiagnosis in this process requires invasive biopsies, contributing to the emotional and financial distress of patients. The implementation of a non-invasive imaging technique to aid the analysis of skin lesions in the early stages can potentially mitigate these consequences. Hyperspectral imaging (HSI) has shown promise as a non-invasive technique to analyze skin lesions. Images taken of human skin using a hyperspectral camera are the result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin has chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be harnessed and examined with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of parameters such as melanin concentration, blood volume fraction, and blood oxygen saturation in the skin. The human skin is modelled as a bi-layer planar system, whose surface reflectance is theoretically calculated using the Kubelka-Munk theory and the absorption laws of Beer and Lambert. The model is evaluated for its sensitivity to the parameters and then fitted to measured hyperspectral data of four volunteer subjects under different conditions. Mean values of melanin, blood volume fraction, and oxygen saturation obtained for each of the subjects are reported and compared with theoretical values from the literature.
Sensitivity analysis revealed that the wavelengths and wavelength groups producing the maximum change in the percentage reflectance calculated from the model were 450 and 660 nm for melanin, 500-520 nm and 590-625 nm for blood volume fraction, and 606, 646, and 750 nm for blood oxygen saturation. / Master of Science / Melanoma, the most serious type of skin cancer, develops in the melanin-producing cells in the epidermis. A characteristic marker of skin lesions is the abrupt variation in melanin and blood concentration in areas of the lesion. The present technique to inspect skin cancer lesions involves dermatoscopy, which is a qualitative visual analysis of the lesion’s features using a few standardized techniques such as the 7-point checklist and the ABCDE rule. Typically, dermatoscopy is followed by a biopsy and then a histopathological analysis of the biopsy. To reduce the possibility of misdiagnosing actual melanomas, a considerable number of dermoscopically unclear lesions are biopsied, increasing the emotional, financial, and medical consequences. A non-invasive imaging technique to analyze skin lesions during the dermoscopic stage can help alleviate some of these consequences. Hyperspectral imaging (HSI) is a promising methodology to non-invasively analyze skin lesions. Images taken of human skin using a hyperspectral camera are the result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin has chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be harnessed and analyzed with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction, and blood oxygen saturation. The mean and standard deviation of these estimates are reported and compared with theoretical values from the literature.
The model is also evaluated for its sensitivity with respect to these parameters to identify the most relevant wavelengths.
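A drastically simplified version of the chromophore-estimation idea can be sketched as follows (plain Beer-Lambert mixing with made-up extinction spectra and concentrations, not the thesis's bi-layer Kubelka-Munk model):

```python
import numpy as np

# If absorbance at each wavelength is a linear mix of two chromophore
# spectra, concentrations follow from linear least squares. The "extinction
# spectra" below are invented smooth curves, not real chromophore data.
wl = np.linspace(450, 750, 61)                    # wavelengths, nm
eps_mel = 10.0 * (wl / 450.0) ** -3               # melanin-like decay
eps_hb = 0.5 + np.exp(-((wl - 560) / 30) ** 2)    # blood-like peak near 560 nm
E = np.column_stack([eps_mel, eps_hb])

c_true = np.array([0.02, 0.8])                    # hypothetical concentrations
reflectance = 10.0 ** (-(E @ c_true))             # Beer-Lambert attenuation

A_meas = -np.log10(reflectance)                   # measured absorbance
c_est, *_ = np.linalg.lstsq(E, A_meas, rcond=None)
print(c_est)                                      # recovers [0.02, 0.8]
```

The thesis's actual problem is harder: Kubelka-Munk reflectance is nonlinear in the parameters, so the fit there is a nonlinear optimization rather than one linear solve.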
558

Investigative Application of the Intrinsic Extended Finite Element Method for the Computational Characterization of Composite Materials

Fave, Sebastian Philipp 05 September 2014 (has links)
Computational micromechanics analysis of carbon nanotube-epoxy nanocomposites, containing aligned nanotubes, is performed using the mesh independent intrinsic extended finite element method (IXFEM). The IXFEM employs a localized intrinsic enrichment strategy to treat arbitrary discontinuities defined through the level-set method separate from the problem domain discretization, i.e. the finite element (FE) mesh. A global domain decomposition identifies local subdomains for building distinct partitions of unity that appropriately suit the approximation. Specialized inherently enriched shape functions, constructed using the moving least squares method, enhance the approximation space in the vicinity of discontinuity interfaces, maintaining accuracy of the solution, while standard FE shape functions are used elsewhere. Comparison of the IXFEM in solving validation problems with strong and weak discontinuities against a standard finite element method (FEM) and analytic solutions validates the enriched intrinsic bases, and shows anticipated trends in the error convergence rates. In applying the IXFEM to model composite materials through a representative volume element (RVE), the filler agents are defined as individual weak bimaterial interfaces. Through a series of RVE studies, calculating the effective elastic material properties of carbon nanotube-epoxy nanocomposite systems, the benefits of substituting the conventional mesh dependent FEM with the mesh independent IXFEM when completing micromechanics analysis, investigating effects of high filler count or an evolving microstructure, are demonstrated. / Master of Science
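The moving least squares construction mentioned above can be illustrated in one dimension (a toy version with an invented weight function, not the IXFEM's enriched shape functions):

```python
import numpy as np

# Moving least squares: at each evaluation point, fit a linear polynomial
# to nearby nodal values, weighted by a compactly supported weight.
def mls_eval(x, nodes, values, radius):
    w = np.maximum(0.0, 1.0 - np.abs(nodes - x) / radius) ** 2  # compact support
    P = np.column_stack([np.ones_like(nodes), nodes])           # linear basis
    A = P.T @ (w[:, None] * P)                                  # weighted moment matrix
    b = P.T @ (w * values)
    coeff = np.linalg.solve(A, b)
    return coeff[0] + coeff[1] * x

nodes = np.linspace(0.0, 1.0, 11)
values = 2.0 * nodes + 1.0                  # sample a linear field
print(mls_eval(0.5, nodes, values, 0.3))    # MLS reproduces linear fields: 2.0
```

The reproduction property shown here (linear fields are recovered exactly) is what lets MLS-built shape functions maintain consistency; enrichment then adds discontinuous or kinked terms to the basis near interfaces.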
559

Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling

Grimm, Alexander Rudolf 02 July 2018 (has links)
Dynamical systems are a commonly used and studied tool for simulation, optimization, and design. In many applications, such as inverse problems, optimal control, shape optimization, and uncertainty quantification, these systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly severe. To address these issues, parametric reduced models have gained popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed.
In this case, we construct a parametric reduced model that minimizes a discretized least-squares error over the finite set of measurements. Toward this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might be a reduced model of moderate size. In this case, we perform a post-processing step to reduce the output of the parametric VF approach using H2 optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model, but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response, and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely through the influence of the delay. / Ph. D.
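The linear inner step behind vector-fitting-style least-squares problems can be sketched as follows (the poles are assumed known here, whereas full VF iterates to relocate them; all values are illustrative):

```python
import numpy as np

# Given frequency samples of H(s) = sum_k r_k/(s - p_k) + d with known
# poles p_k, the residues r_k and constant d follow from linear least squares.
poles = np.array([-1.0, -5.0])
res_true = np.array([2.0, -3.0])
d_true = 0.5
s = 1j * np.linspace(0.1, 50, 200)                 # sample frequencies
H = (res_true / (s[:, None] - poles)).sum(axis=1) + d_true

# Basis: one partial-fraction column per pole, plus a constant column.
A = np.column_stack([1.0 / (s[:, None] - poles), np.ones_like(s)])
x, *_ = np.linalg.lstsq(A, H, rcond=None)
print(x.real)                                      # recovers [2.0, -3.0, 0.5]
```

Full vector fitting wraps this linear solve in an iteration that updates the pole locations, and the parametric extension in the dissertation adds a second (parameter) variable to the basis.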
560

Analysis of Static and Dynamic Deformations of Laminated Composite Structures by the Least-Squares Method

Burns, Devin James 27 October 2021 (has links)
Composite structures, such as laminated beams, plates, and shells, are widely used in the automotive, aerospace, and marine industries due to their superior specific strength and tailorable mechanical properties. Because of their use in a wide range of applications, and their prevalence in the engineering design community, the need to accurately predict their behavior under external stimuli is crucial. We consider in this thesis the application of the least-squares finite element method (LSFEM) to problems of static deformations of laminated and sandwich plates and transient plane stress deformations of sandwich beams. Models are derived to express the governing equations of linear elasticity in terms of layer-wise continuous variables for composite plates and beams, which allow inter-laminar continuity conditions at layer interfaces to be satisfied. When Legendre-Gauss-Lobatto (LGL) basis functions with the LGL nodes taken as integration points are used to approximate the unknown field variables, the methodology yields a system of discrete equations with a symmetric positive definite coefficient matrix. The main goal of this research is to determine the efficacy of the LSFEM in accurately predicting stresses in laminated composites when subjected to both quasi-static and transient surface tractions. Convergence of the numerical algorithms with respect to the LGL basis functions in space and time (when applicable) is also explored. In the transient analysis of sandwich beams, we study the sensitivity of the first failure load to the beam's aspect ratio (AR), facesheet-core thickness ratio (FCTR), and facesheet-core stiffness ratio (FCSR). We then explore how failure of sandwich beams is affected by considering facesheet and core materials with different in-plane and transverse stiffness ratios.
Computed results are compared to available analytical solutions, published results, and those found by using the commercial FE software ABAQUS where appropriate. / Master of Science / Composite materials are formed by combining two or more materials on a macroscopic scale such that they have better engineering properties than either material individually. They are usually in the form of a laminate comprised of numerous plies, with each ply having unidirectional fibers. Laminates are used in all sorts of engineering applications, such as boat hulls, racing car bodies, and storage tanks. Unlike their homogeneous material counterparts, such as metals, laminated composites present structural designers and analysts with a number of computational challenges. Chief among these challenges is the satisfaction of the so-called continuity conditions, which require certain quantities to be continuous at the interfaces of the composite's layers. In this thesis, we use a mathematical model, called a state-space model, that allows us to simultaneously solve for these quantities in the composite structure's domain and satisfy the continuity conditions at layer interfaces. To solve the governing equations derived from this model, we use a numerical technique called the least-squares method, which seeks to minimize the squares of the governing-equation and side-condition residuals over the computational domain. With this mathematical model and numerical method, we investigate static and dynamic deformations of laminated composite structures. The goal of this thesis is to determine the efficacy of the proposed methodology in predicting stresses in laminated composite structures when subjected to static and transient mechanical loading.
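The LGL nodes mentioned above can be computed as follows (a sketch for one illustrative degree, using the standard characterization of LGL points as the endpoints plus the roots of the Legendre polynomial's derivative):

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre-Gauss-Lobatto (LGL) nodes on [-1, 1] for degree N:
# the interior nodes are the roots of P_N'(x), plus the endpoints.
N = 6                                        # illustrative degree
PN = legendre.Legendre.basis(N)
interior = PN.deriv().roots()
nodes = np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
print(nodes)                                 # 7 symmetric nodes in [-1, 1]

# Standard LGL quadrature weights: w_i = 2 / (N*(N+1) * P_N(x_i)^2).
weights = 2.0 / (N * (N + 1) * PN(nodes) ** 2)
print(weights.sum())                         # weights integrate 1 exactly: 2.0
```

Taking the LGL nodes as both interpolation and integration points makes the discrete mass matrix diagonal, which is part of why the least-squares formulation above yields a symmetric positive definite system.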
