721

Incompressible Flow Simulations Using Least Squares Spectral Element Method On Adaptively Refined Triangular Grids

Akdag, Osman 01 September 2012 (has links) (PDF)
The main purpose of this study is to develop a flow solver that employs triangular grids to solve two-dimensional, viscous, laminar, steady, incompressible flows. The flow solver is based on the Least Squares Spectral Element Method (LSSEM). It has p-type adaptive mesh refinement/coarsening capability and supports p-type nonconforming element interfaces. To validate the developed flow solver, several benchmark problems are studied and successful results are obtained. The performances of two different triangular nodal distributions, namely the Lobatto distribution and the Fekete distribution, are compared in terms of accuracy and implementation complexity. The accuracies provided by triangular and quadrilateral grids of equal computational size are compared. Adaptive mesh refinement studies are conducted using three different error indicators, including a novel one based on elemental mass loss. The effect of modifying the least-squares functional by multiplying the continuity equation by a weight factor is investigated with regard to mass conservation.
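The weighting idea mentioned in the last sentence can be illustrated with a small sketch: in a least-squares formulation, the discrete momentum and continuity residuals are stacked into one functional, and scaling the continuity rows by a weight w pushes the minimizer toward better mass conservation. The matrices below are random placeholders standing in for a discretized system, not an actual spectral-element operator.

```python
import numpy as np

# Toy illustration of weighting the continuity residual in a least-squares
# functional: minimize ||A_mom u - b_mom||^2 + w^2 * ||A_cont u - b_cont||^2.
# A_mom/A_cont are random stand-ins for discretized momentum/continuity
# operators, not an actual LSSEM discretization.
rng = np.random.default_rng(0)
n_unknowns = 20
A_mom = rng.standard_normal((40, n_unknowns))
b_mom = rng.standard_normal(40)
A_cont = rng.standard_normal((10, n_unknowns))
b_cont = rng.standard_normal(10)

def solve_weighted_ls(w):
    """Stack the momentum and (weighted) continuity residuals and solve."""
    A = np.vstack([A_mom, w * A_cont])
    b = np.concatenate([b_mom, w * b_cont])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

for w in (1.0, 10.0, 100.0):
    u = solve_weighted_ls(w)
    mass_residual = np.linalg.norm(A_cont @ u - b_cont)
    print(f"w = {w:6.1f}  ->  continuity residual = {mass_residual:.3e}")
```

Increasing w shrinks the continuity (mass) residual at the expense of the momentum residual, which is the trade-off the thesis studies.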
722

Aerodynamic Parameter Estimation Of A Missile In Closed Loop Control And Validation With Flight Data

Aydin, Gunes 01 September 2012 (has links) (PDF)
Aerodynamic parameter estimation from closed-loop data has developed into a research area of its own since control and stability augmentation systems became mandatory for aircraft. This thesis focuses on aerodynamic parameter estimation of an air-to-ground missile from closed-loop data using separate surface excitations. A design procedure is proposed for designing the separate surface excitations. The effect of the excitation signals on the system is also analyzed by examining autopilot disturbance rejection performance. Aerodynamic parameters are estimated using two different estimation techniques, ordinary least squares and complex linear regression. The results are compared with each other and with the aerodynamic database. An application of the studied techniques to a real system is also given to validate that they are directly applicable to real-life systems.
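As a rough illustration of the ordinary least squares step named above, the sketch below fits a linear aerodynamic-coefficient model to synthetic excitation data. The model form and coefficient names (Cm0, Cm_alpha, Cm_delta) are assumptions made for the example, not the thesis's aerodynamic database.

```python
import numpy as np

# Illustrative ordinary-least-squares fit of a linear pitching-moment model
#   Cm = Cm0 + Cm_alpha * alpha + Cm_delta * delta
# from noisy synthetic closed-loop data with a separate-surface excitation.
rng = np.random.default_rng(1)
n = 500
alpha = rng.uniform(-0.1, 0.1, n)                 # angle of attack [rad]
delta = 0.05 * np.sin(np.linspace(0.0, 20.0, n))  # excitation signal [rad]
true_coeffs = np.array([0.01, -1.2, -0.8])        # [Cm0, Cm_alpha, Cm_delta], assumed
X = np.column_stack([np.ones(n), alpha, delta])
Cm = X @ true_coeffs + 0.002 * rng.standard_normal(n)

# OLS estimate beta_hat = argmin ||X beta - Cm||^2, solved via lstsq
beta_hat, *_ = np.linalg.lstsq(X, Cm, rcond=None)
print("estimated [Cm0, Cm_alpha, Cm_delta]:", np.round(beta_hat, 4))
```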
724

Chemical identification under a Poisson model for Raman spectroscopy

Palkki, Ryan D. 14 November 2011 (has links)
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory. The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramer-Rao lower bound (CRLB). Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference. In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework is applied naturally to both the detection of individual target chemicals and to the detection of chemicals from a given class. The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem. Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem. Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.
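As a side note on the Poisson mixing model described above: the maximum-likelihood nonnegative mixing coefficients for counts y ~ Poisson(S x) can be computed with the classical EM (Richardson-Lucy) multiplicative update. The sketch below uses a synthetic reference library S and synthetic coefficients; it illustrates that standard update, not the thesis's specific algorithms.

```python
import numpy as np

# ML estimation of nonnegative mixing coefficients x under y ~ Poisson(S @ x),
# where the columns of S are reference spectra. The multiplicative EM
# (Richardson-Lucy) update is a standard way to compute this ML estimate.
rng = np.random.default_rng(2)
n_bins, n_chems = 200, 4
S = rng.uniform(0.1, 1.0, size=(n_bins, n_chems))  # synthetic reference library
x_true = np.array([50.0, 0.0, 20.0, 5.0])           # true mixing coefficients
y = rng.poisson(S @ x_true)                          # measured Raman counts

x = np.ones(n_chems)                                 # positive initial guess
for _ in range(500):
    ratio = y / np.maximum(S @ x, 1e-12)             # avoid division by zero
    x *= (S.T @ ratio) / S.sum(axis=0)               # EM multiplicative update

print("ML estimate of mixing coefficients:", np.round(x, 2))
```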
725

Multiresolutional partial least squares and principal component analysis of fluidized bed drying

Frey, Gerald M. 14 April 2005
Fluidized bed dryers are used in the pharmaceutical industry for the batch drying of pharmaceutical granulate. Maintaining optimal hydrodynamic conditions throughout the drying process is essential to product quality. Due to the complex interactions inherent in the fluidized bed drying process, mechanistic models capable of identifying these optimal modes of operation are either unavailable or limited in their capabilities. Therefore, empirical models based on experimentally generated data are relied upon to study these systems.
Principal Component Analysis (PCA) and Partial Least Squares (PLS) are multivariate statistical techniques that project data onto linear subspaces that are the most descriptive of variance in a dataset. By modeling data in terms of these subspaces, a more parsimonious representation of the system is possible. In this study, PCA and PLS are applied to data collected from a fluidized bed dryer containing pharmaceutical granulate.
System hydrodynamics were quantified in the models using high frequency pressure fluctuation measurements. These pressure fluctuations have previously been identified as a characteristic variable of hydrodynamics in fluidized bed systems. As such, contributions from the macroscale, mesoscale, and microscale of motion are encoded into the signals. A multiresolutional decomposition using a discrete wavelet transformation was used to resolve these signals into components more representative of these individual scales before modeling the data.
The combination of multiresolutional analysis with PCA and PLS was shown to be an effective approach for modeling the conditions in the fluidized bed dryer. In this study, datasets from both steady state and transient operation of the dryer were analyzed. The steady state dataset contained measurements made on a bed of dry granulate, and the transient dataset consisted of measurements taken during the batch drying of granulate from approximately 33 wt.% moisture to 5 wt.%. Correlations involving several scales of motion were identified in both studies.
In the steady state study, deterministic behavior related to superficial velocity, pressure sensor position, and granulate particle size distribution was observed in PCA model parameters. It was determined that these properties could be characterized solely with the use of the high frequency pressure fluctuation data. Macroscopic hydrodynamic characteristics such as bubbling frequency and fluidization regime were identified in the low frequency components of the pressure signals, and the particle-scale interactions of the microscale were shown to be correlated to the highest frequency signal components. PLS models were able to characterize the effects of superficial velocity, pressure sensor position, and granulate particle size distribution in terms of the pressure signal components. Additionally, it was determined that statistical process control charts capable of monitoring the fluid bed hydrodynamics could be constructed using PCA.
In the transient drying experiments, deterministic behaviors related to inlet air temperature, pressure sensor position, and initial bed mass were observed in PCA and PLS model parameters. The lowest frequency component of the pressure signal was found to be correlated to the overall temperature effects during the drying cycle. As in the steady state study, bubbling behavior was also observed in the low frequency components of the pressure signal.
PLS was used to construct an inferential model of granulate moisture content. The model was found to be capable of predicting the moisture content throughout the drying cycle. Preliminary statistical process control models were constructed to monitor the fluid bed hydrodynamics throughout the drying process. These models show promise but will require further investigation to better determine their sensitivity to process upsets.
In addition to the PCA and PLS analyses, Multiway Principal Component Analysis (MPCA) was used to model the drying process. Several key states related to the mass transfer of moisture and changes in temperature throughout the drying cycle were identified in the MPCA model parameters. It was determined that the mass transfer of moisture throughout the drying process affects all scales of motion and overshadows other hydrodynamic behaviors found in the pressure signals.
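A minimal sketch of the multiresolution-plus-PCA workflow described above, assuming PyWavelets and scikit-learn are available: each (here synthetic) pressure-fluctuation signal is decomposed with a discrete wavelet transform, each scale is summarized by its energy, and PCA is run on the resulting feature matrix. The wavelet family, decomposition level, and feature choice are assumptions made for the example.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

# Decompose each pressure signal into wavelet levels, summarize each level by
# its energy, and run PCA on the feature matrix. Signals here are synthetic
# placeholders; the thesis uses measured bed pressure data and also fits PLS.
rng = np.random.default_rng(3)
n_signals, n_samples, level = 30, 2048, 5
signals = rng.standard_normal((n_signals, n_samples))

features = []
for s in signals:
    coeffs = pywt.wavedec(s, "db4", level=level)       # [cA5, cD5, ..., cD1]
    features.append([np.sum(c ** 2) for c in coeffs])  # energy per scale
X = np.array(features)

pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```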
726

Identifying Nursing Activities to Estimate the Risk of Cross-contamination

Seyed Momen, Kaveh 07 January 2013 (has links)
Hospital Acquired Infections (HAI) are a global patient safety challenge, costly to treat, and affect hundreds of millions of patients annually worldwide. It has been shown that the majority of HAI are transferred to patients by caregivers' hands and can therefore be prevented by proper hand hygiene (HH). However, many factors, including cognitive load, cause caregivers to forget to cleanse their hands, and hand hygiene compliance among caregivers remains low around the world. In this thesis I showed that it is possible to build a wearable accelerometer-based HH reminder system that identifies ongoing nursing activities with the patient, indicates the high-risk activities, and prompts the caregivers to clean their hands. Eight subjects participated in this study, each wearing five wireless accelerometer sensors on the wrist, upper arms and the back. A pattern recognition approach was used to classify six nursing activities offline. Time-domain features that included the mean, standard deviation, energy, and correlation among accelerometer axes were found to be suitable features. On average, a 1-Nearest Neighbour classifier was able to classify the activities with 84% accuracy. A novel algorithm was developed to adaptively segment the accelerometer signals to identify the start and stop time of each nursing activity. The overall accuracy of the algorithm for a total of 96 events performed by 8 subjects was approximately 87%, and was higher than 91% for 5 of the 8 subjects. The sequence of nursing activities was modelled by an 18-state Markov chain, and the model was evaluated against recently published data. The simulation results showed that the risk of cross-contamination decreases exponentially with the frequency of HH, and that this decrease is most rapid up to a hand hygiene rate of 50%-60%. It was also found that if the caregiver enters the room with a high risk of transferring infection to the current patient then, given the assumptions in this study, a HH rate of only 55% is enough to reduce the risk of infection transfer to the lowest level. This may help to prevent the next patient from acquiring an infection, preventing an infection outbreak. The model is also capable of simulating the effects of imperfect HH on the risk of cross-contamination.
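A rough sketch of the feature-extraction and 1-Nearest Neighbour step described above, using synthetic tri-axial accelerometer windows: the features follow the listed time-domain quantities (mean, standard deviation, energy, inter-axis correlation), but the data, window length, and labels are random placeholders rather than recorded nursing activities, so the printed accuracy is meaningless here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_windows, n_samples = 200, 128
windows = rng.standard_normal((n_windows, 3, n_samples))  # x, y, z axes
labels = rng.integers(0, 6, n_windows)                     # 6 activity classes (placeholder)

def window_features(w):
    mean = w.mean(axis=1)
    std = w.std(axis=1)
    energy = (w ** 2).sum(axis=1) / w.shape[1]
    corr = np.corrcoef(w)                                  # 3x3 inter-axis correlation
    corr_feats = corr[np.triu_indices(3, k=1)]             # xy, xz, yz correlations
    return np.concatenate([mean, std, energy, corr_feats])

X = np.array([window_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=1)                  # 1-Nearest Neighbour
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```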
727

Relationships Between Felt Intensity And Recorded Ground Motion Parameters For Turkey

Bilal, Mustafa 01 January 2013 (has links) (PDF)
Earthquakes are among the natural disasters with significant damage potential; however, it is possible to reduce the losses by taking several remedial measures. Reduction of seismic losses starts with identifying and estimating the expected damage to some accuracy. Since both design styles and construction defects exhibit mostly local properties all over the world, damage estimations should be performed at regional levels. Another important issue in disaster mitigation is to determine a robust measure of ground motion intensity parameters. As of now, well-established correlations between shaking intensity and instrumental ground motion parameters have not yet been studied in detail for Turkish data. In the first part of this thesis, regional empirical Damage Probability Matrices (DPMs) are formed for Turkey. As the input data, the detailed damage database of the 17 August 1999 Kocaeli earthquake (Mw=7.4) is used. The damage probability matrices are derived for Sakarya, Bolu and Kocaeli, for both reinforced concrete and masonry buildings. Results are compared with previous similar studies and the differences are discussed. After validation with future data, these DPMs can be used in the calculation of earthquake insurance premiums. In the second part of this thesis, two relationships between felt intensity and peak ground motion parameters are generated using the linear least-squares regression technique. The first correlates Modified Mercalli Intensity (MMI) to Peak Ground Acceleration (PGA) whereas the second does the same for Peak Ground Velocity (PGV). Old damage reports and isoseismal maps are employed to derive the 92 data pairs of MMI, PGA and PGV used in the regression analyses. These local relationships can be used in the future for ShakeMap applications in rapid response and disaster management activities.
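The second part's regression has a simple form that can be sketched directly: a linear least-squares fit of MMI against log10(PGA) (and, analogously, log10(PGV)). The data pairs below are synthetic placeholders, not the 92 pairs compiled from damage reports and isoseismal maps, and the assumed coefficients are illustrative only.

```python
import numpy as np

# Linear least-squares fit of the form MMI = a + b * log10(PGA) on synthetic data.
rng = np.random.default_rng(5)
pga = rng.uniform(10, 800, 92)                                   # PGA [cm/s^2], placeholder
mmi = 1.5 + 2.1 * np.log10(pga) + 0.4 * rng.standard_normal(92)  # assumed relation + noise

A = np.column_stack([np.ones_like(pga), np.log10(pga)])
(a, b), *_ = np.linalg.lstsq(A, mmi, rcond=None)
print(f"fitted relation: MMI = {a:.2f} + {b:.2f} * log10(PGA)")
# The same recipe applies with log10(PGV) in place of log10(PGA).
```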
728

Second-order least squares estimation in regression models with application to measurement error problems

Abarin, Taraneh 21 January 2009 (has links)
This thesis studies the Second-order Least Squares (SLS) estimation method in regression models with and without measurement error. Applications of the methodology in general quasi-likelihood and variance function models, censored models, and linear and generalized linear models are examined and strong consistency and asymptotic normality are established. To overcome the numerical difficulties of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is used and its asymptotic properties are studied. Finite sample performances of the estimators in all of the studied models are investigated through simulation studies.
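For readers unfamiliar with SLS, a minimal sketch of the idea for an ordinary linear model (no measurement error) follows: the estimator matches both the first and second conditional moments of the response, E[y|x] = b0 + b1*x and E[y^2|x] = (b0 + b1*x)^2 + sigma^2, by minimizing the summed squared moment residuals. An identity weight matrix is used here for simplicity (a consistent but not efficient choice), and the simulated data are not connected to the thesis's simulation studies.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 300
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + 1.5 * rng.standard_normal(n)     # true b0=2, b1=0.5, sigma=1.5

def sls_objective(theta):
    b0, b1, log_sigma = theta
    mu = b0 + b1 * x
    sigma2 = np.exp(log_sigma) ** 2                  # keep the variance positive
    r1 = y - mu                                      # first-moment residual
    r2 = y ** 2 - mu ** 2 - sigma2                   # second-moment residual
    return np.sum(r1 ** 2 + r2 ** 2)                 # identity-weighted SLS criterion

res = minimize(sls_objective, x0=np.zeros(3), method="Nelder-Mead",
               options={"maxiter": 5000, "maxfev": 5000})
b0_hat, b1_hat, log_sigma_hat = res.x
print("SLS estimates (b0, b1, sigma):",
      round(b0_hat, 3), round(b1_hat, 3), round(np.exp(log_sigma_hat), 3))
```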
729

Análisis de datos longitudinales y multivariantes mediante distancias con modelos lineales generalizados

Melo Martínez, Sandra Esperanza 06 September 2012 (has links)
Several methodologies are proposed for analysing longitudinal data (in univariate form, through MANOVA, as growth curves, and under non-normal response through generalized linear models) using distances between observations (or individuals) on the explanatory variables, with continuous response variables. In all of the proposed methodologies, adding more components of the principal coordinate matrix is found to improve the predictions with respect to the classical models, so the approach is an alternative to the classical methodology for making predictions. It is shown that the distance-based (DB) MANOVA model and the DB univariate longitudinal approach produce results as robust as the classical MANOVA and classical univariate approaches for longitudinal data, where the classical approach uses restricted maximum likelihood and weighted least squares under normality conditions. The parameters of the DB univariate model were estimated by restricted maximum likelihood and by generalized least squares; for the DB MANOVA approach, least squares under normality conditions was used. In addition, it is shown how to carry out inference on the parameters involved in the model for large samples. A methodology is also described for analysing longitudinal data through generalized linear models with distances between observations on the explanatory variables, where results similar to the classical methodology are found, together with the advantage of being able to model continuous non-normal responses over time. The proposed model is first presented together with the main ideas that motivate it, and parameter estimation and hypothesis testing are then carried out; estimation is performed using the generalized estimating equations (GEE) methodology. The proposed methodologies are illustrated with an application in each chapter: the model is fitted, estimates of the different parameters involved are obtained, and statistical inference and validation of the proposed model are carried out. Small differences between the DB method and the classical one were found in the case of mixed data, especially in small samples of size 50, a result obtained from the simulation study. Through simulation for several sample sizes it was found that the fitted DB model produces better predictions than the traditional methodology when the explanatory variables are mixed and the Gower distance is used, in small samples of size 50, regardless of the value of the correlation, the autocorrelation structure, the variance and the number of time points, according to the Akaike and Bayesian information criteria (AIC and BIC). Moreover, for small samples of size 50 the DB method is found to be more efficient (efficiency greater than 1) than the classical method under the different scenarios considered. Another important result is that the DB method gives a better fit in large samples (100 and 200), with high correlations (0.5 and 0.9), high variance (50) and a larger number of measurements over time (7 and 10). When the explanatory variables are only continuous, categorical or binary, the predictions were shown to be the same as those of the classical method.
In addition, programs were developed in the R software for the analysis of this type of data with both the classical methodology and the DB distance-based proposals of each chapter of the thesis; these programs are attached on a CD included with the thesis. Work is in progress on an R package based on this code, so that all users have access to this type of analysis. The proposed methods have the advantage of allowing predictions over time, the autocorrelation structure can be modelled, data with mixed, binary, categorical or continuous explanatory variables can be modelled, and independence of the components of the principal coordinate matrix can be guaranteed, whereas independence cannot always be guaranteed for the original variables. Finally, the proposed method produces good predictions for estimating missing data, since adding one or more components to the model with respect to the original explanatory variables can improve the fit without altering the original information; it is therefore a good alternative for the analysis of longitudinal data and of great use for researchers whose interest centres on obtaining good predictions. / LONGITUDINAL AND MULTIVARIATE DATA ANALYSIS THROUGH DISTANCES WITH GENERALIZED LINEAR MODELS: We are introducing new methodologies for the analysis of longitudinal data with continuous responses (univariate, multivariate for growth curves and with non-normal response using generalized linear models) based on distances between observations (or individuals) on the explanatory variables. In all cases, after adding new components of the principal coordinate matrix, we observe a prediction improvement with respect to the classic models, thus providing an alternative prediction methodology to them. It was proven that both the distance-based MANOVA model and the univariate longitudinal models are as robust as the classical counterparts using restricted maximum likelihood and weighted least squares under normality assumptions. The parameters of the distance-based univariate model were estimated using restricted maximum likelihood and generalized least squares. For the distance-based MANOVA we used least squares under normality conditions. We also showed how to perform inference on the model parameters on large samples. We indicated a methodology for the analysis of longitudinal data using generalized linear models and distances between the explanatory variables, where the results were similar to the classical approach. However, our approach allowed us to model continuous, non-normal responses over time. As well as presenting the model and the motivational ideas, we indicate how to estimate the parameters and test hypotheses on them. For this purpose we use generalized estimating equations (GEE). We present an application case in each chapter for illustration purposes. The models were fit and validated. After performing some simulations, we found small differences in the distance-based method with respect to the classical one for mixed data, particularly in the small sample setting (about 50 individuals). Using simulation we found that for some sample sizes, the distance-based models improve the traditional ones when explanatory variables are mixed and the Gower distance is used.
This is the case for small samples, regardless of the correlation, the autocorrelation structure, the variance, and the number of periods, when using both the Akaike (AIC) and Bayesian (BIC) information criteria. Moreover, for these small samples, we found greater efficiency (>1) in our model with respect to the classical one. Our models also provide better fits in large samples (100 or 200) with high correlations (0.5 and 0.9), high variance (50) and a larger number of time measurements (7 and 10). We proved that the new and the classical models coincide when explanatory variables are all either continuous or categorical (or binary). We also created programs in R for the analysis of the data considered in the different chapters of this thesis for both models, the classical and the newly proposed one, which are attached on a CD. We are currently working to create a public, accessible R package. The main advantages of these methods are that they allow for time predictions, the modelling of the autocorrelation structure, and the analysis of data with mixed variables (continuous, categorical and binary). In such cases, as opposed to the classical approach, the independence of the components of the principal coordinate matrix can always be guaranteed. Finally, the proposed models allow for good missing data estimation: adding extra components to the model with respect to the original variables improves the fit without changing the original information. This is particularly important in longitudinal data analysis and for those researchers whose main interest resides in obtaining good predictions.
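A minimal sketch of the distance-based (DB) regression idea that underlies these models: compute distances between observations on the explanatory variables, obtain principal coordinates by classical multidimensional scaling (double-centering plus eigendecomposition), and regress the response on the leading coordinates. Plain Euclidean distance and a cross-sectional response are used here for simplicity; the thesis uses the Gower distance for mixed variables and extends the idea to longitudinal and generalized linear models.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 60, 4
Z = rng.standard_normal((n, p))                      # explanatory variables (placeholder)
y = Z @ np.array([1.0, -0.5, 0.3, 0.0]) + 0.2 * rng.standard_normal(n)

D2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared Euclidean distance matrix
J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
B = -0.5 * J @ D2 @ J                                # double-centered Gram matrix
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]
k = 4                                                # number of principal coordinates kept
coords = eigvec[:, order[:k]] * np.sqrt(np.maximum(eigval[order[:k]], 0.0))

X = np.column_stack([np.ones(n), coords])            # regression on principal coordinates
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2 of the regression on principal coordinates:", round(r2, 3))
```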
730

Faktoren für eine erfolgreiche Steuerung von Patentaktivitäten

Günther, Thomas, Moses, Heike 12 September 2006 (has links) (PDF)
Empirical studies have shown that patents can have a positive effect on corporate success. However, this effect does not occur by itself: companies have to make an effort to build up and systematically develop a sustainable and valuable patent portfolio. So far, no academic studies have investigated which actions a company can take to establish the internal conditions for successful patent management. To identify these internal factors and to quantify their relevance, a broad empirical study was conducted in 2005 among the active patent applicants in the German-speaking countries (Germany, Austria, Switzerland, Liechtenstein), more than 1,000 companies, using a standardized written questionnaire. In total, 325 valid questionnaires were included in the analyses; this corresponds to an above-average response rate of 36.8%. These analyses revealed insights into the current task profile of patent departments and their organizational and personnel structures. This status quo analysis also covered the awareness and implementation level of methods and systems in use (e.g. patent evaluation methods, patent IT systems). Furthermore, the study exposed the internal determinants that technology-oriented companies should focus on to lay the foundation for successful patent management.
