891

Multiresolutional partial least squares and principal component analysis of fluidized bed drying

Frey, Gerald M. 14 April 2005
Fluidized bed dryers are used in the pharmaceutical industry for the batch drying of pharmaceutical granulate. Maintaining optimal hydrodynamic conditions throughout the drying process is essential to product quality. Due to the complex interactions inherent in the fluidized bed drying process, mechanistic models capable of identifying these optimal modes of operation are either unavailable or limited in their capabilities. Therefore, empirical models based on experimentally generated data are relied upon to study these systems.

Principal Component Analysis (PCA) and Partial Least Squares (PLS) are multivariate statistical techniques that project data onto linear subspaces that are the most descriptive of the variance in a dataset. By modeling data in terms of these subspaces, a more parsimonious representation of the system is possible. In this study, PCA and PLS are applied to data collected from a fluidized bed dryer containing pharmaceutical granulate.

System hydrodynamics were quantified in the models using high frequency pressure fluctuation measurements. These pressure fluctuations have previously been identified as a characteristic variable of hydrodynamics in fluidized bed systems. As such, contributions from the macroscale, mesoscale, and microscale of motion are encoded into the signals. A multiresolutional decomposition using a discrete wavelet transformation was used to resolve these signals into components more representative of these individual scales before modeling the data.

The combination of multiresolutional analysis with PCA and PLS was shown to be an effective approach for modeling the conditions in the fluidized bed dryer. In this study, datasets from both steady state and transient operation of the dryer were analyzed. The steady state dataset contained measurements made on a bed of dry granulate, and the transient dataset consisted of measurements taken during the batch drying of granulate from approximately 33 wt.% moisture to 5 wt.%. Correlations involving several scales of motion were identified in both studies.

In the steady state study, deterministic behavior related to superficial velocity, pressure sensor position, and granulate particle size distribution was observed in the PCA model parameters. It was determined that these properties could be characterized solely with the use of the high frequency pressure fluctuation data. Macroscopic hydrodynamic characteristics such as bubbling frequency and fluidization regime were identified in the low frequency components of the pressure signals, and the particle scale interactions of the microscale were shown to be correlated to the highest frequency signal components. PLS models were able to characterize the effects of superficial velocity, pressure sensor position, and granulate particle size distribution in terms of the pressure signal components. Additionally, it was determined that statistical process control charts capable of monitoring the fluid bed hydrodynamics could be constructed using PCA.

In the transient drying experiments, deterministic behaviors related to inlet air temperature, pressure sensor position, and initial bed mass were observed in the PCA and PLS model parameters. The lowest frequency component of the pressure signal was found to be correlated to the overall temperature effects during the drying cycle. As in the steady state study, bubbling behavior was also observed in the low frequency components of the pressure signal. PLS was used to construct an inferential model of granulate moisture content, and the model was found to be capable of predicting the moisture content throughout the drying cycle. Preliminary statistical process control models were constructed to monitor the fluid bed hydrodynamics throughout the drying process. These models show promise but will require further investigation to better determine their sensitivity to process upsets.

In addition to the PCA and PLS analyses, Multiway Principal Component Analysis (MPCA) was used to model the drying process. Several key states related to the mass transfer of moisture and changes in temperature throughout the drying cycle were identified in the MPCA model parameters. It was determined that the mass transfer of moisture throughout the drying process affects all scales of motion and overshadows other hydrodynamic behaviors found in the pressure signals.
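The wavelet-then-PCA pipeline this abstract describes can be sketched compactly. The fragment below is a minimal illustration, not the thesis's implementation: the wavelet family ('db4'), the decomposition depth, and the use of per-scale component variances as features are assumptions made for the example, using the PyWavelets and scikit-learn libraries.

```python
# Minimal sketch of a multiresolution decomposition feeding PCA.
# Wavelet family, depth, and variance features are illustrative assumptions.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def scale_components(signal, wavelet="db4", level=5):
    """Split a pressure signal into per-scale components via the DWT."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[: len(signal)])
    return components  # [approximation, detail_level, ..., detail_1]

# Stand-in for high-frequency pressure fluctuation records (rows = runs).
signals = rng.standard_normal((40, 2048))

# One feature per scale: the variance of that scale's component.
features = np.array([[c.var() for c in scale_components(s)] for s in signals])

pca = PCA(n_components=3)
scores = pca.fit_transform(features)
print(pca.explained_variance_ratio_)  # variance captured by each PC
```

Scores from such a model are what a PCA-based control chart would monitor, with limits set from in-control runs.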
892

Identifying Nursing Activities to Estimate the Risk of Cross-contamination

Seyed Momen, Kaveh 07 January 2013
Hospital Acquired Infections (HAI) are a global patient safety challenge, costly to treat, and affect hundreds of millions of patients annually worldwide. It has been shown that the majority of HAI are transferred to patients by caregivers' hands and can therefore be prevented by proper hand hygiene (HH). However, many factors, including cognitive load, cause caregivers to forget to cleanse their hands, and HH compliance among caregivers remains low around the world. In this thesis I showed that it is possible to build a wearable accelerometer-based HH reminder system that identifies ongoing nursing activities with the patient, indicates the high-risk activities, and prompts the caregivers to clean their hands. Eight subjects participated in this study, each wearing five wireless accelerometer sensors on the wrist, upper arms, and back. A pattern recognition approach was used to classify six nursing activities offline. Time-domain features that included the mean, standard deviation, energy, and correlation among accelerometer axes were found to be suitable features. On average, a 1-Nearest-Neighbour classifier was able to classify the activities with 84% accuracy. A novel algorithm was developed to adaptively segment the accelerometer signals to identify the start and stop times of each nursing activity. The overall accuracy of the algorithm for a total of 96 events performed by 8 subjects was approximately 87%, and was higher than 91% for 5 of the 8 subjects. The sequence of nursing activities was modelled by an 18-state Markov chain, and the model was evaluated against recently published data. The simulation results showed that the risk of cross-contamination decreases exponentially with the frequency of HH, and that the decrease is most rapid up to a HH rate of 50%-60%. It was also found that if the caregiver enters the room with a high risk of transferring infection to the current patient then, given the assumptions in this study, a HH rate of only 55% is capable of reducing the risk of infection transfer to the lowest level. This may help prevent the next patient from acquiring an infection, preventing an infection outbreak. The model is also capable of simulating the effects of imperfect HH on the risk of cross-contamination.
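The feature-extraction and 1-NN classification step can be illustrated as follows. This is a hedged sketch: the window length, axis layout, and cross-validation scheme are assumptions for the example, not the thesis's exact protocol.

```python
# Sketch of time-domain feature extraction and 1-NN activity classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def window_features(window):
    """Per-axis mean, std, and energy, plus pairwise correlations among
    axes -- the feature set named in the abstract."""
    means = window.mean(axis=0)
    stds = window.std(axis=0)
    energies = (window ** 2).sum(axis=0) / len(window)
    corr = np.corrcoef(window.T)
    upper = corr[np.triu_indices_from(corr, k=1)]  # unique axis pairs
    return np.concatenate([means, stds, energies, upper])

# Stand-in data: 96 segmented events, 128 samples each, 15 axes (5 sensors x 3).
windows = rng.standard_normal((96, 128, 15))
labels = rng.integers(0, 6, size=96)  # six nursing activities

X = np.array([window_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=1)
print(cross_val_score(clf, X, labels, cv=5).mean())  # accuracy estimate
```

On real data, a subject-wise split (e.g., leave-one-subject-out) would give a fairer accuracy estimate than random folds.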
893

Relationships Between Felt Intensity And Recorded Ground Motion Parameters For Turkey

Bilal, Mustafa 01 January 2013
Earthquakes are among the natural disasters with significant damage potential; however, it is possible to reduce the losses through several remedial measures. Reducing seismic losses starts with identifying and estimating the expected damage with reasonable accuracy. Since both design styles and construction defects are largely local in character all over the world, damage estimation should be performed at the regional level. Another important issue in disaster mitigation is to determine a robust measure of ground motion intensity. To date, well-established correlations between shaking intensity and instrumental ground motion parameters have not been studied in detail for Turkish data. In the first part of this thesis, regional empirical Damage Probability Matrices (DPMs) are formed for Turkey. As the input data, the detailed damage database of the 17 August 1999 Kocaeli earthquake (Mw=7.4) is used. The damage probability matrices are derived for Sakarya, Bolu, and Kocaeli, for both reinforced concrete and masonry buildings. Results are compared with previous similar studies and the differences are discussed. After validation with future data, these DPMs can be used in the calculation of earthquake insurance premiums. In the second part of this thesis, two relationships between felt intensity and peak ground motion parameters are generated using the linear least-squares regression technique. The first correlates Modified Mercalli Intensity (MMI) to Peak Ground Acceleration (PGA), whereas the second does the same for Peak Ground Velocity (PGV). Old damage reports and isoseismal maps are employed to derive the 92 data pairs of MMI, PGA, and PGV used in the regression analyses. These local relationships can be used in the future for ShakeMap applications in rapid response and disaster management activities.
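For illustration, a regression of this kind is a one-liner once the data pairs exist. The sketch below uses synthetic pairs and assumes the common functional form MMI = a + b·log10(PGA); the thesis derives 92 real pairs from damage reports and isoseismal maps, and its exact functional form may differ.

```python
# Illustrative MMI-PGA regression by linear least squares (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

pga = rng.uniform(5, 500, size=92)  # peak ground acceleration, cm/s^2
mmi = 1.5 + 3.0 * np.log10(pga) + rng.normal(0, 0.5, size=92)  # synthetic MMI

fit = stats.linregress(np.log10(pga), mmi)  # linear least-squares fit
print(f"MMI = {fit.intercept:.2f} + {fit.slope:.2f} * log10(PGA), "
      f"r = {fit.rvalue:.2f}")
```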
894

Distribution and habitat of the least bittern and other marsh bird species in southern Manitoba

Hay, Stacey 28 March 2006
Call-response surveys were conducted to better delineate and estimate the population of the nationally threatened least bittern, and to identify its habitat requirements, in southern Manitoba, Canada. Other marsh bird species whose populations are believed to be declining due to wetland loss throughout, or in parts of, their range were also surveyed, including the American bittern, pied-billed grebe, sora, Virginia rail, and yellow rail. Surveys were conducted during the 2003 and 2004 breeding seasons within 46 different wetlands. Least bitterns were encountered on 26 occasions at 15 sites within 5 wetlands. The sora was the most abundant and widely distributed target species and was encountered on 330 occasions in 39 of the 46 surveyed wetlands. Yellow rails, which call mainly after dark, were not detected in either survey year because of the survey methodology. Use of the call-response survey protocol led to an increase in the numbers of all target species detected; this increase was greatest for the least bittern, sora, and Virginia rail. Habitat was assessed as percent vegetation cover within a 50-m radius around the calling sites, and forest resource inventory data were used in a Geographic Information System to determine the landscape composition within a 500-m radius around the sites and within a 5-km radius around the wetlands surveyed. Logistic regression analyses were used to evaluate the relationship between the presence of the target species and the site and landscape characteristics. The target species responded differently to different site and landscape characteristics. Least bittern and pied-billed grebe selected areas with higher proportions of Typha spp. and tall shrubs; American bittern also selected areas with higher proportions of tall shrubs. At the 5-km scale, the American bittern responded positively to the amount of wetland, and some positive trends were also detected for the pied-billed grebe. Sora and Virginia rail were not associated with any of the measured landscape characteristics. One of the most important steps towards the conservation of marsh bird species in Manitoba and elsewhere is the development, adoption, and implementation of a standardized survey protocol. Based on the results of the present study, I recommend that future surveys include both a passive and a call-broadcast period for marsh bird species. Future surveys should be conducted in both the morning and evening, and sites should be visited 3 times each during the breeding season. In southern Manitoba, call-response surveys should begin as early as the beginning of May to ensure the survey incorporates the period of peak vocalization. I recommend that future yellow rail surveys be conducted after dark. In this study many of the target species selected sites that had a greater area of wetland habitat surrounding them. Future wetland conservation efforts should focus on the protection and/or restoration of wetland complexes to ensure that remaining wetlands do not become smaller and increasingly isolated from one another. In addition, the Rat River Swamp was found to be the most productive marsh complex for least bittern in southern Manitoba. Measures should be taken to protect this area from future development and alteration.
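The presence/absence logistic regression used here has a standard form. The sketch below is illustrative only: the covariate names (percent Typha cover, tall shrub cover, surrounding wetland area) are assumptions modeled loosely on the study's reported predictors, and the data are synthetic.

```python
# Sketch of a presence/absence logistic regression on habitat covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_sites = 120

# Hypothetical habitat covariates measured around each calling site.
pct_typha = rng.uniform(0, 100, n_sites)      # % cover of Typha spp.
pct_tall_shrub = rng.uniform(0, 60, n_sites)  # % cover of tall shrubs
wetland_area = rng.uniform(0, 500, n_sites)   # ha of wetland within 5 km

X = sm.add_constant(np.column_stack([pct_typha, pct_tall_shrub, wetland_area]))
logit = 0.03 * pct_typha + 0.05 * pct_tall_shrub - 2.5  # synthetic truth
presence = (rng.random(n_sites) < 1 / (1 + np.exp(-logit))).astype(int)

model = sm.Logit(presence, X).fit(disp=False)
print(model.params)  # positive coefficients indicate habitat selection
```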
895

Second-order least squares estimation in regression models with application to measurement error problems

Abarin, Taraneh 21 January 2009
This thesis studies the Second-order Least Squares (SLS) estimation method in regression models with and without measurement error. Applications of the methodology in general quasi-likelihood and variance function models, censored models, and linear and generalized linear models are examined and strong consistency and asymptotic normality are established. To overcome the numerical difficulties of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is used and its asymptotic properties are studied. Finite sample performances of the estimators in all of the studied models are investigated through simulation studies.
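For context, the SLS criterion can be written down compactly. The display below is background in the spirit of Wang and Leblanc's formulation of SLS, stated as an assumption on our part rather than an excerpt from the thesis: the estimator matches both the first and second conditional moments of the response, and the simulation-based variant replaces moments involving multiple integrals with Monte Carlo approximations.

```latex
% Background sketch (not quoted from the thesis): SLS for a regression model
% $y_i = g(x_i;\beta) + \varepsilon_i$ with $E[\varepsilon_i \mid x_i] = 0$ and
% $\mathrm{Var}(\varepsilon_i \mid x_i) = \sigma^2$, where $\gamma = (\beta, \sigma^2)$.
\rho_i(\gamma) =
\begin{pmatrix}
  y_i - g(x_i;\beta)\\[2pt]
  y_i^{2} - g(x_i;\beta)^{2} - \sigma^{2}
\end{pmatrix},
\qquad
\hat{\gamma}_{\mathrm{SLS}}
  = \arg\min_{\gamma}\ \sum_{i=1}^{n} \rho_i(\gamma)^{\top} W_i\, \rho_i(\gamma),
```

where each W_i is a positive semidefinite weight matrix; using second moments alongside first moments is what yields efficiency gains over ordinary least squares when errors are non-normal.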
896

Longitudinal and Multivariate Data Analysis Through Distances with Generalized Linear Models

Melo Martínez, Sandra Esperanza 06 September 2012
We introduce new methodologies for the analysis of longitudinal data with continuous responses (univariate, via MANOVA, for growth curves, and for non-normal responses via generalized linear models) based on distances between observations (or individuals) on the explanatory variables. In all of the proposed methodologies, adding more components of the principal coordinate matrix improves predictions relative to the classical models, so the approach provides an alternative prediction methodology to them.

It was proven that both the distance-based (DB) MANOVA model and the DB univariate longitudinal approach produce results as robust as their classical counterparts, which use restricted maximum likelihood and weighted least squares under normality assumptions. The parameters of the DB univariate model were estimated by restricted maximum likelihood and by generalized least squares; for the DB MANOVA approach, least squares under normality conditions was used. We also show how to perform inference on the model parameters in large samples.

We further describe a methodology for analyzing longitudinal data using generalized linear models with distances between observations on the explanatory variables. The results were similar to those of the classical approach, with the added advantage of being able to model continuous, non-normal responses over time. After presenting the model and the main ideas behind it, we derive parameter estimation and hypothesis tests; estimation is carried out using generalized estimating equations (GEE). Each chapter illustrates its proposed methodology with an application in which the model is fitted, the parameters are estimated, and statistical inference and model validation are carried out.

In simulations, small differences between the DB method and the classical one were found for mixed-type data, especially in small samples of size 50. For several sample sizes, the DB models produced better predictions than the traditional methodology when the explanatory variables are mixed and the Gower distance is used: in small samples of size 50, regardless of the correlation value, the autocorrelation structure, the variance, and the number of time points, under both the Akaike (AIC) and Bayesian (BIC) information criteria. Moreover, for samples of size 50 the DB method was more efficient (efficiency greater than 1) than the classical method under the different scenarios considered. The DB method also fit better in large samples (100 and 200) with high correlations (0.5 and 0.9), high variance (50), and more measurements over time (7 and 10). When the explanatory variables are all continuous, categorical, or binary, the predictions were proven to coincide with those of the classical method.

Additionally, R programs were developed for the analysis of this type of data with both the classical methodology and the DB proposals of each chapter of the thesis; these are attached on a CD, and an R package making them publicly available is in preparation. The proposed methods have several advantages: predictions over time are possible, the autocorrelation structure can be modeled, data with mixed, binary, categorical, or continuous explanatory variables can be handled, and independence of the components of the principal coordinate matrix can be guaranteed, whereas independence cannot always be guaranteed for the original variables. Finally, the proposed method produces good predictions for estimating missing data: adding one or more components to the model with respect to the original explanatory variables can improve the fit without altering the original information. It is therefore a good alternative for longitudinal data analysis and highly useful for researchers whose interest centers on obtaining good predictions.
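The core distance-based idea (distances on the explanatory variables, principal coordinates, regression on the leading components) can be sketched in a few lines. The example below is a minimal illustration, not the thesis's models: it uses the Euclidean distance for simplicity, whereas the thesis uses the Gower distance precisely to handle mixed-type explanatory variables.

```python
# Sketch of distance-based (DB) regression via principal coordinates.
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 4
X = rng.standard_normal((n, p))  # explanatory variables (continuous here)
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(0, 0.1, n)

# Squared distance matrix and double-centering (classical scaling / PCoA).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J

# Principal coordinates: eigenvectors scaled by sqrt of eigenvalues.
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
k = 3  # number of principal coordinates kept
coords = vecs[:, order[:k]] * np.sqrt(vals[order[:k]])

# Ordinary least squares of the response on the principal coordinates.
Z = np.column_stack([np.ones(n), coords])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(beta)  # fitted coefficients; predictions follow as Z @ beta
```

Because the principal coordinates are mutually orthogonal, adding components never destabilizes previously fitted ones, which is the independence property the abstract emphasizes.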
897

Factors for the Successful Management of Patent Activities

Günther, Thomas; Moses, Heike 12 September 2006
Empirical studies have shown that patents can have a positive effect on corporate success. However, this effect does not occur by itself: companies have to make an effort to create and develop a sustainable and valuable patent portfolio. So far, no academic studies have investigated which actions a company can take to establish the internal conditions for successful patent management. To identify these internal factors and to quantify their relevance, a broad empirical study was conducted in 2005 using a standardized written questionnaire sent to the active patent applicants in the German-speaking countries (Germany, Austria, Switzerland, Liechtenstein), more than 1,000 companies in total. Based on 325 usable questionnaires, corresponding to an above-average response rate of 36.8 %, the study yields insights into the current task profile of patent departments and their organizational and personnel structures. This status quo analysis also covers the awareness and implementation levels of relevant methods and systems (e.g. patent evaluation methods, patent IT systems). Furthermore, the study identifies the internal factors that technology-oriented companies should focus on to lay the foundation for successful patent management.
898

Training of Template-Specific Weighted Energy Function for Sequence-to-Structure Alignment

Lee, En-Shiun Annie January 2008
Threading is a protein structure prediction method that uses a library of template protein structures in two steps: first, the target sequence is matched against the template library and the best template structure is selected; second, the predicted structure of the target sequence is modeled on this selected template structure. The decelerating rate at which new folds are added to the Protein Data Bank suggests that the template structure library is approaching completeness. This thesis uses a new set of template-specific weights to improve the energy function for sequence-to-structure alignment in the template selection step of the threading process. The weights are estimated by least squares, using the quality of the modeling step in the threading process as the label. These new weights show an average 12.74% improvement in estimating the label. Further family analysis shows a correlation between the performance of the new weights and the number of seed sequences in Pfam.
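Fitting template-specific weights by least squares reduces to a single linear solve. The sketch below is a hedged illustration: the energy terms and the model-quality label are stand-ins, since the thesis defines its own energy components and quality measure.

```python
# Sketch: least-squares fit of template-specific energy weights.
import numpy as np

rng = np.random.default_rng(5)
n_alignments, n_terms = 200, 6  # alignments to one template, energy terms

# Each row holds the energy components of one sequence-template alignment.
E = rng.standard_normal((n_alignments, n_terms))

# Label: quality of the model built from that alignment (higher is better).
true_w = np.array([0.8, -0.3, 0.5, 0.1, -0.6, 0.2])
quality = E @ true_w + rng.normal(0, 0.05, n_alignments)

# Template-specific weights: least-squares regression of quality on terms.
w, *_ = np.linalg.lstsq(E, quality, rcond=None)
print(w)  # the weighted energy E @ w now tracks expected model quality
```

At selection time, the template whose weighted alignment energy predicts the best model quality would be chosen.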
900

Throughput Scaling Laws in Point-to-Multipoint Cognitive Networks

Jamal, Nadia 07 1900
Simultaneous operation of different wireless applications in the same geographical region and the same frequency band gives rise to undesired interference. Since licensed (primary) applications have been granted priority access to the frequency spectrum, unlicensed (secondary) services should avoid imposing interference on the primary system; in other words, the secondary system's activity in the same bands should be controlled so that the primary system maintains its quality of service (QoS) requirements. In this thesis, we consider collocated point-to-multipoint primary and secondary networks that have simultaneous access to the same frequency band. In particular, we examine three different levels at which the two networks may coexist: the pure interference, asymmetric co-existence, and symmetric co-existence levels. At the pure interference level, both networks operate simultaneously regardless of their interference to each other. At the other two levels, at least one of the networks attempts to mitigate its interference to the other network by deactivating some of its users. Specifically, at the asymmetric co-existence level, the secondary network selectively deactivates its users based on knowledge of the interference and channel gains, whereas at the symmetric level, the primary network also schedules its users in the same way. Our aim is to derive optimal sum-rates (i.e., throughputs) of both networks at each co-existence level as the number of users grows asymptotically, and to evaluate how the sum-rates scale with the network size. In order to obtain the asymptotic throughput results, we derive two propositions: one on the asymptotic behaviour of the largest order statistic, and one on the asymptotic behaviour of the sum of the lower order statistics. As a baseline comparison, we calculate primary and secondary sum-rates for time division (TD) channel sharing. Then, we compare the asymptotic secondary sum-rate in TD to that under simultaneous channel sharing, while ensuring the primary network maintains the same sum-rate in both cases. Our results indicate that simultaneous channel sharing at both the asymmetric and symmetric co-existence levels can outperform TD. Furthermore, this enhancement is achievable when user scheduling in uplink mode is based only on the interference gains to the opposite network and not on a network's own channel gains. In other words, the optimal secondary sum-rate is achievable by applying a scheduling strategy, referred to as the least interference strategy, for which only knowledge of the interference gains is required and which can be performed in a distributed way.
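The largest-order-statistic behaviour underlying such scaling laws is easy to check numerically. The sketch below is an assumption-laden illustration, not the thesis's model: it takes i.i.d. unit-mean exponential channel gains (Rayleigh fading power), for which the expected maximum over n users grows like log(n), the multiuser-diversity growth that drives sum-rate scaling with network size.

```python
# Numerical check: E[max of n i.i.d. Exp(1) gains] grows like log(n).
import numpy as np

rng = np.random.default_rng(6)

for n in [10, 100, 1000, 10000]:
    gains = rng.exponential(1.0, size=(500, n))  # 500 trials, n users
    emp_max = gains.max(axis=1).mean()           # empirical E[max]
    print(f"n={n:6d}  E[max]={emp_max:5.2f}  log(n)={np.log(n):5.2f}")
```

The gap between the two columns settles near the Euler-Mascheroni constant (about 0.577), consistent with E[max] = H_n for exponential variables.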
