761.
Second-order least squares estimation in regression models with application to measurement error problems. Abarin, Taraneh (21 January 2009)
This thesis studies the Second-order Least Squares (SLS) estimation method in regression models with and without measurement error. Applications of the methodology in general quasi-likelihood and variance function models, censored models, and linear and generalized linear models are examined and strong consistency and asymptotic normality are established. To overcome the numerical difficulties of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is used and its asymptotic properties are studied. Finite sample performances of the estimators in all of the studied models are investigated through simulation studies.
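As a rough illustration of the SLS idea, the sketch below fits a toy nonlinear regression by minimizing a criterion that matches the first two conditional moments of the response. The model, the data, and the identity weighting matrix are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# toy model (assumption): y = exp(b*x) + e, with E[e] = 0, Var(e) = s2;
# theta = (b, s2) is estimated jointly by matching E[y|x] and E[y^2|x]
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 2.0, size=500)
y = np.exp(0.7 * x) + rng.normal(scale=0.5, size=500)

def sls_objective(theta):
    b, s2 = theta
    m1 = np.exp(b * x)           # first conditional moment E[y | x]
    m2 = m1 ** 2 + s2            # second conditional moment E[y^2 | x]
    # identity weighting matrix for simplicity; the method admits a
    # general positive-definite weight for efficiency
    return np.sum((y - m1) ** 2 + (y ** 2 - m2) ** 2)

theta_hat = minimize(sls_objective, x0=[0.5, 1.0], method="Nelder-Mead").x
print(theta_hat)                 # close to (0.7, 0.25)
```

Note that the error variance is estimated jointly with the regression parameter, which is what the second moment condition buys.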
762.
Development of Fluorescence-based Tools for Characterization of Natural Organic Matter and Development of Membrane Fouling Monitoring Strategies for Drinking Water Treatment Systems. Peiris, Ramila Hishantha (06 November 2014)
The objective of this research was to develop fluorescence-based tools for rapid, accurate and direct characterization of natural organic matter (NOM) and colloidal/particulate substances present in natural water. Most available characterization methods can neither characterize all the major NOM fractions (protein-, humic acid-, fulvic acid- and polysaccharide-like substances) together with colloidal/particulate matter, nor deliver rapid analyses. Individually and in combination, these NOM fractions and colloidal/particulate matter contribute to membrane fouling, disinfection by-product formation and undesirable biological growth in drinking water treatment processes and distribution systems. The novel techniques developed in this research therefore provide an avenue for improved understanding of these negative effects and for proactive implementation of control and/or optimization strategies.
The fluorescence excitation-emission matrix (EEM) method was used for characterization of NOM and colloidal/particulate matter present in water. Unlike most NOM and colloidal/particulate matter characterization techniques, this method can provide fast and consistent analyses with high instrumental sensitivity. The feasibility of using this method for monitoring NOM at very low concentration levels was also demonstrated with an emphasis on optimizing the instrument parameters necessary to obtain reproducible fluorescence signals.
Partial least squares regression (PLS) was used to develop calibration models by correlating the fluorescence EEM intensities of water samples that contained surrogate NOM fractions with their corresponding dissolved organic carbon (DOC) concentrations. These fluorescence-based calibration models were found to be suitable for identifying/monitoring the extent of the relative changes that occur in different NOM fractions and the interactions between polysaccharide- and protein-like NOM in water treatment processes and distribution systems.
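A minimal sketch of this kind of calibration is shown below, with hypothetical EEM grids and DOC values standing in for real measurements; the array dimensions and variable names are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# hypothetical data: each EEM is an excitation x emission intensity grid,
# unfolded into one feature vector; doc holds measured DOC values (mg/L)
rng = np.random.default_rng(6)
n_samples, n_ex, n_em = 40, 25, 50
eems = rng.random((n_samples, n_ex, n_em))
doc = 2.0 + 5.0 * eems[:, 10, 30] + 0.1 * rng.normal(size=n_samples)

x = eems.reshape(n_samples, -1)        # unfold each EEM into one row
pls = PLSRegression(n_components=3).fit(x, doc)
print(pls.score(x, doc))               # R^2 of the calibration model
```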
Principal component analysis (PCA) of fluorescence EEMs was identified as a viable tool for monitoring the performance of biological filtration as a pre-treatment step, as well as ultrafiltration (UF) and nanofiltration (NF) membrane systems. The principal components (PCs) extracted in this approach were related to the major membrane foulant groups such as humic substances (HS), protein-like and colloidal/particulate matter in natural water. The PC score plots generated using the fluorescence EEMs obtained after just one hour of UF or NF operation could be related to high fouling events likely caused by elevated levels of colloidal/particulate-like material in the biofilter effluents. This fluorescence EEM-based PCA approach was sensitive enough to be used at low organic carbon levels present in NF permeate and has potential as an early detection method to identify high fouling events, allowing appropriate operational countermeasures to be taken.
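For context, a hedged sketch of the EEM-based PCA monitoring idea follows: unfold the EEMs, extract principal components, and track the PC scores over operating time. The data and dimensions are simulated placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
eems = rng.random((60, 25, 50))        # hypothetical EEMs from permeate samples
x = eems.reshape(60, -1)

pca = PCA(n_components=3).fit(x)       # PCs ~ major foulant groups (HS,
scores = pca.transform(x)              # protein-like, colloidal/particulate)
print(pca.explained_variance_ratio_)
# in this monitoring scheme, a drift of the scores along one PC after
# ~1 h of operation would flag a likely high-fouling event
```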
This fluorescence EEM-based PCA approach was also used to extract information relevant to reversible and irreversible membrane fouling behaviour in a bench-scale flat sheet cross flow UF process consisting of cycles of permeation and back-washing. PC score-based analysis revealed that colloidal/particulate matter mostly contributed to reversible fouling, while HS and protein-like matter were largely responsible for irreversible fouling. This method therefore has potential for monitoring modes of membrane fouling in drinking water treatment applications.
The above approach was further improved by using the evolution of the PC scores over filtration time and relating it to membrane fouling through PC score balance-based differential equations. Using these equations, the proposed fluorescence-based modeling approach was capable of forecasting UF fouling behaviour with good accuracy based solely on fluorescence data obtained 15 min after the initiation of the filtration process. In addition, this approach was tested experimentally as a basis for optimization by modifying the UF back-washing times with the objective of minimizing energy consumption and maximizing water production. Preliminary optimization results demonstrated the potential of this approach to reduce power consumption significantly. This approach was also useful for identifying the NOM components contributing to reversible and irreversible membrane fouling.
Grand River water (Southwestern Ontario, Canada) was used as the natural water source for developing the techniques presented in this thesis. Future research focusing on testing these methods for monitoring membrane fouling and treatment processes in large-scale drinking water treatment facilities that draw on different raw water sources would be useful for identifying the limitations of these techniques and areas for improvement.
763.
Méthode numérique d'estimation du mouvement des masses molles [Numerical method for estimating soft tissue motion]. Thouzé, Arsène (10 1900)
Biomechanical analysis of human movement using optoelectronic systems and skin markers treats body segments as rigid bodies. However, the motion of the soft tissue (muscle and adipose tissue) relative to the bone causes the markers to move. This displacement has two components: an own component, the random motion of each individual marker, and an in-unison component, the common displacement of the skin markers caused by the movement of the underlying wobbling mass. While most studies aim to minimize these displacements, computer simulations have shown that soft tissue motion relative to the bone reduces the joint kinetics. This observation is available only through simulation, because no existing method can separate the kinematics of the wobbling mass from that of the bone. The main objective of this thesis is to develop a numerical method able to distinguish these two kinematics.
The first aim was to assess a local optimisation method for estimating soft tissue motion relative to the humerus, measured with intra-cortical pins screwed into the bone in three subjects. The results show that local optimisation underestimates marker displacement by 50% and ranks the markers differently according to their displacement. The limitation of local optimisation is that it does not account for all the components of soft tissue motion, in particular the in-unison component.
The second aim was to develop a numerical method that accounts for all the components of soft tissue motion. Specifically, this method should yield joint kinematics similar to those of conventional approaches while estimating larger marker displacements, and should separate the two components. The lower limb was modelled as a kinematic chain with 10 degrees of freedom, reconstructed by global optimisation using only the markers placed on the pelvis and the medial face of the shank. Estimating the joint kinematics without the markers placed on the thigh and the calf avoids the influence of their displacement on the reconstruction of the kinematic model. Tested on 13 subjects performing hopping trials, this method captured up to 2.1 times more marker displacement, depending on the method compared against, while producing similar joint kinematics. A vector approach showed that marker displacement is driven mainly by the in-unison component. A matrix approach combining local optimisation with the kinematic chain showed that the wobbling mass moves mainly about the longitudinal axis and along the antero-posterior axis of the bone.
The originality of this thesis is to numerically separate the bone kinematics from the wobbling-mass kinematics, and to separate the two components of soft tissue motion. The methods developed in this thesis extend our knowledge of soft tissue motion and make it possible to study its effect on joint kinetics.
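For context, local optimisation of the kind assessed in the first aim amounts to a least-squares rigid-body fit of a marker cluster; below is a minimal numpy sketch of the standard SVD-based solution, in which the per-marker residuals are the portion of marker displacement the rigid model cannot explain. The data and function names are illustrative, not taken from the thesis.

```python
import numpy as np

def rigid_fit(ref, cur):
    """Least-squares rigid transform (R, t) mapping ref markers onto cur.

    ref, cur: (n_markers, 3) arrays of marker positions.
    Returns the rotation R, translation t, and per-marker residuals,
    i.e. the displacement not explained by rigid-body motion.
    """
    a = ref - ref.mean(axis=0)
    b = cur - cur.mean(axis=0)
    u, _, vt = np.linalg.svd(b.T @ a)
    d = np.sign(np.linalg.det(u @ vt))         # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    t = cur.mean(axis=0) - r @ ref.mean(axis=0)
    residuals = cur - (ref @ r.T + t)
    return r, t, residuals

# illustrative use: reference pose and a moved pose of a 4-marker cluster
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
rot = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
cur = ref @ rot.T + [0.1, 0.2, 0.0]
cur[0] += [0.02, 0.0, 0.01]            # soft-tissue-like artefact on one marker
r, t, res = rigid_fit(ref, cur)
print(np.linalg.norm(res, axis=1))     # per-marker non-rigid displacement
```

Because the common (in-unison) displacement of the cluster is absorbed into the fitted rotation and translation, residuals of this kind understate the total soft tissue motion, which is consistent with the underestimation reported above.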
765.
Second-order Least Squares Estimation in Generalized Linear Mixed Models. Li, He (06 April 2011)
Maximum likelihood is a ubiquitous method for the estimation of generalized linear mixed models (GLMMs). However, the method entails computational difficulties and relies on the normality assumption for the random effects. We propose a second-order least squares (SLS) estimator based on the first two marginal moments of the response variables. The proposed estimator is computationally feasible and requires fewer distributional assumptions than the maximum likelihood estimator. To overcome the numerical difficulty of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is proposed. We show that the SLS estimators are consistent and asymptotically normally distributed under fairly general conditions in the framework of GLMMs.
Missing data are almost inevitable in longitudinal studies, and problems arise if the missing-data mechanism is related to the response process. This thesis extends the proposed estimators to deal with response data missing at random, either by adapting the inverse probability weighting method or by applying the multiple imputation approach.
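As a hedged, simplified illustration of inverse probability weighting under a missing-at-random mechanism (a toy linear model rather than a GLMM; all data are simulated assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy setup: y is missing with probability depending on an auxiliary
# variable a that is correlated with the response, so a complete-case
# fit is biased but the observation probability is estimable from (x, a)
rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
e = rng.normal(size=n)
y = 1.0 + 2.0 * x + e
a = e + rng.normal(scale=0.5, size=n)   # auxiliary, always observed
obs = rng.random(n) < 1 / (1 + np.exp(-1.5 * a))

# step 1: model the probability of being observed
feats = np.column_stack([x, a])
pi_hat = LogisticRegression().fit(feats, obs).predict_proba(feats)[:, 1]

# step 2: weighted least squares on observed cases, weights 1/pi_hat
xo = np.column_stack([np.ones(obs.sum()), x[obs]])
w = 1.0 / pi_hat[obs]
beta_ipw = np.linalg.solve(xo.T @ (w[:, None] * xo), xo.T @ (w * y[obs]))

# unweighted complete-case fit for comparison (intercept biased here)
beta_cc = np.linalg.lstsq(xo, y[obs], rcond=None)[0]
print(beta_cc, beta_ipw)                # IPW recovers ~(1.0, 2.0)
```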
In practice, some covariates are not directly observed but are measured with error. It is well known that simply substituting a proxy variable for the unobserved covariate in the model generally leads to biased and inconsistent estimates. We propose the instrumental variable method for the consistent estimation of GLMMs with covariate measurement error. The proposed approach does not require any parametric assumption on the distribution of the unknown covariates, which makes it less restrictive than methods that rely on a parametric distribution of the covariates or estimate that distribution using extra information.
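The sketch below illustrates the instrumental variable idea in its simplest linear form rather than the GLMM setting of the thesis; the data-generating values and noise scales are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
xi = rng.normal(size=n)                 # unobserved true covariate
w = xi + rng.normal(scale=0.8, size=n)  # proxy measured with error
z = xi + rng.normal(scale=0.5, size=n)  # instrument: correlated with xi,
                                        # independent of the measurement error
y = 1.0 + 2.0 * xi + rng.normal(size=n)

# naive regression on the proxy is attenuated toward zero
naive = np.polyfit(w, y, 1)[0]

# two-stage least squares: replace w by its projection on z
slope, intercept = np.polyfit(z, w, 1)
w_hat = slope * z + intercept
iv = np.polyfit(w_hat, y, 1)[0]
print(naive, iv)                        # ~1.2 (biased) vs ~2.0 (consistent)
```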
In the presence of outliers, there is a concern that the SLS estimators may be vulnerable because they involve second-order moments. We investigated the robustness of the SLS estimators using their influence functions and showed that the proposed estimators have a bounded influence function and a redescending property, so they are robust to outliers.
The finite sample performance and properties of the SLS estimators are studied and compared with other popular estimators in the literature through simulation studies and real-world data examples.
766.
The 3σ-rule for outlier detection from the viewpoint of geodetic adjustment. Lehmann, Rüdiger (21 January 2015)
The so-called 3σ-rule is a simple and widely used heuristic for outlier detection. It is a generic term for a family of statistical hypothesis tests whose test statistics are known as normalized or studentized residuals. The conditions under which this rule is statistically substantiated were analyzed, and the extent to which it applies to geodetic least-squares adjustment was investigated. The efficiency or non-efficiency of the method was then analyzed and demonstrated on the example of repeated observations.
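A small numpy sketch of the normalized-residual test underlying the 3σ-rule, for the repeated-observation example; the observation values, σ, and sample size are assumptions for illustration, and σ is taken as known (the studentized variant would use an estimate instead).

```python
import numpy as np

# toy least-squares adjustment: repeated observations of a single value
rng = np.random.default_rng(2)
sigma = 0.01
obs = 100.0 + rng.normal(scale=sigma, size=10)
obs[3] += 0.05                          # plant one outlier

a = np.ones((obs.size, 1))              # design matrix of the adjustment
x_hat, *_ = np.linalg.lstsq(a, obs, rcond=None)
v = obs - a @ x_hat                     # residuals

# cofactor matrix of the residuals: Q_vv = I - A (A^T A)^-1 A^T
q_vv = np.eye(obs.size) - a @ np.linalg.inv(a.T @ a) @ a.T
nv = np.abs(v) / (sigma * np.sqrt(np.diag(q_vv)))  # normalized residuals

print(np.where(nv > 3)[0])              # indices flagged by the 3σ-rule
```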
767.
Designs of orthogonal filter banks and orthogonal cosine-modulated filter banks. Yan, Jie (23 April 2010)
This thesis investigates several design problems concerning two-channel conjugate quadrature (CQ) filter banks and orthogonal wavelets, as well as orthogonal cosine-modulated (OCM) filter banks.
It is well known that the optimal design of CQ filters and wavelets, and of prototype filters (PFs) for OCM filter banks, in the least squares (LS) or minimax sense is a nonconvex problem, and to date only local solutions can be claimed. In this thesis, we first improve several direct design techniques for the local design problems in terms of convergence and solution accuracy. Building on recent progress in global polynomial optimization and on the improved local design methods, we then develop several design strategies that may be viewed as endeavors towards global solutions for LS CQ filter banks, minimax CQ filter banks, and OCM filter banks. In brief, the proposed strategies are based on observations made among globally optimal impulse responses of low-order filter banks; they are essentially order-recursive algorithms in terms of filter length, combined with techniques for identifying a desirable initial point in each round of iteration.
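For reference, the sketch below verifies the defining conjugate quadrature relation and the even-shift orthonormality conditions for the classical length-4 Daubechies lowpass filter; it illustrates the constraints such designs must satisfy, not the design algorithms proposed in the thesis.

```python
import numpy as np

# D4 (Daubechies, length-4) lowpass impulse response
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# conjugate quadrature highpass: g[n] = (-1)^n * h[N-1-n]
n = np.arange(h.size)
g = ((-1.0) ** n) * h[::-1]

# orthonormality / perfect-reconstruction check:
# sum_n h[n] h[n+2k] = delta_k for all even shifts k
for k in range(h.size // 2):
    print(k, np.dot(h[: h.size - 2 * k], h[2 * k:]))
# prints 1.0 for k = 0 and ~0.0 otherwise; h and g are likewise
# orthogonal to each other at all even shifts
```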
This main idea is applied to three design scenarios in this thesis, namely, LS design of orthogonal filter banks and wavelets, minimax design of orthogonal filter banks and wavelets, and design of orthogonal cosine-modulated filter banks. Simulation studies are presented to evaluate and compare the performance of the proposed design methods with several well established algorithms in the literature.
768.
Credit Value Adjusted Real Options Based Valuation of Multiple-Exercise Government Guarantees for Infrastructure Projects. Naji Almassi, Ali (24 July 2013)
Public-Private-Partnership (P3) is gaining momentum as the delivery method for the development of public infrastructure. These projects, however, are exposed to economic risks. If the private parties are not comfortable with the level of the risks, they would not participate in the project and, as a result, the infrastructure will most likely not be realized. As an incentive for participation in the P3 project, private parties are sometimes offered guarantees against unfavorable economic risks. Therefore, the valuation of these guarantees is essential for deciding whether or not to participate in the project.
While previous work focused on the valuation of the guarantees, the incorporation of credit risk into the value of P3 projects and guarantees has been neglected. The effect of credit risk can be taken into account using the rigorous Credit Value Adjustment (CVA) method. CVA is computationally demanding, however, and the valuation methods currently in the literature are not capable of handling it.
This research offers a novel approach to the valuation of guarantees and P3 projects that is computationally superior to existing methods. Because of this computational efficiency, CVA can be implemented to account for credit risk. To develop this method, a continuous stochastic differential equation (SDE) is derived from the forecasted curve of an economic risk. Using the SDE, the partial differential equation (PDE) governing the value of the guarantees is derived and then solved using the Finite Difference Method (FDM). A new feature of this method is that it obtains exercise strategies for the Australian guarantees.
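As a hedged illustration of the PDE/FDM machinery (not the thesis's multiple-exercise model), the sketch below values a simple European put-style floor on a single risk factor under an assumed geometric Brownian motion, using an explicit finite-difference scheme; all parameter values are invented.

```python
import numpy as np

# toy guarantee: pays max(K - S_T, 0) at T (a minimum-revenue-style floor),
# with the risk factor S assumed to follow GBM, valued via the BS-type PDE
r, vol, K, T = 0.03, 0.25, 100.0, 1.0
s_max, ns, nt = 4 * K, 200, 20000       # nt chosen for explicit-scheme stability
ds, dt = s_max / ns, T / nt
s = np.linspace(0.0, s_max, ns + 1)

v = np.maximum(K - s, 0.0)              # terminal payoff
for _ in range(nt):                     # march backwards in time
    gamma = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds ** 2
    delta = (v[2:] - v[:-2]) / (2 * ds)
    v[1:-1] += dt * (0.5 * vol ** 2 * s[1:-1] ** 2 * gamma
                     + r * s[1:-1] * delta - r * v[1:-1])
    v[0] *= np.exp(-r * dt)             # S = 0: discounted strike
    v[-1] = 0.0                         # guarantee worthless for very large S

print(np.interp(100.0, s, v))           # ~ Black-Scholes put value at S = K
```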
The present work extends the literature by providing a valuation method for cases in which multiple risks affect P3 projects. It also presents an approach for the valuation of Asian-style guarantees, contracts that reimburse the private party based on the average of the risk factor. Finally, a hypothetical case study illustrates the implementation of the FDM-based valuation method and CVA to obtain the value of the P3 project and of the guarantees adjusted for counterparty credit risk.
769.
Identifying Nursing Activities to Estimate the Risk of Cross-contamination. Seyed Momen, Kaveh (07 January 2013)
Hospital Acquired Infections (HAI) are a global patient safety challenge, costly to treat, and affect hundreds of millions of patients annually worldwide. It has been shown that the majority of HAI are transferred to patients by caregivers' hands and can therefore be prevented by proper hand hygiene (HH). However, many factors, including cognitive load, cause caregivers to forget to cleanse their hands, and hand hygiene compliance among caregivers remains low around the world.
In this thesis I showed that it is possible to build a wearable accelerometer-based HH reminder system to identify ongoing nursing activities with the patient, indicate the high-risk activities, and prompt the caregivers to clean their hands.
Eight subjects participated in this study, each wearing five wireless accelerometer sensors on the wrist, upper arms and the back. A pattern recognition approach was used to classify six nursing activities offline. Time-domain features that included mean, standard deviation, energy, and correlation among accelerometer axes were found to be suitable features. On average, 1-Nearest Neighbour classifier was able to classify the activities with 84% accuracy.
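A minimal sketch of this feature-extraction-plus-1-NN pipeline follows, using simulated windows in place of real accelerometer data; the window length, axis count, and class structure are invented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(win):
    """Time-domain features for one window of shape (n_samples, n_axes):
    per-axis mean, standard deviation, energy, and pairwise axis correlations."""
    mean = win.mean(axis=0)
    std = win.std(axis=0)
    energy = (win ** 2).sum(axis=0) / len(win)
    corr = np.corrcoef(win.T)
    iu = np.triu_indices_from(corr, k=1)
    return np.concatenate([mean, std, energy, corr[iu]])

# hypothetical training data: labelled accelerometer windows, 6 activities
rng = np.random.default_rng(3)
windows = [rng.normal(size=(128, 3)) + i for i in range(6) for _ in range(20)]
labels = [i for i in range(6) for _ in range(20)]

x = np.array([window_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=1).fit(x, labels)   # 1-NN classifier
new_win = rng.normal(size=(128, 3)) + 4
print(clf.predict(window_features(new_win)[None, :]))      # -> [4]
```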
A novel algorithm was developed to adaptively segment the accelerometer signals to identify the start and stop time of each nursing activity. The overall accuracy of the algorithm for a total of 96 events performed by 8 subjects was approximately 87%. The accuracy was higher than 91% for 5 out of 8 subjects.
The sequence of nursing activities was modelled by an 18-state Markov chain, and the model was evaluated against recently published data. The simulation results showed that the risk of cross-contamination decreases exponentially with the frequency of HH, and that the decrease is most rapid up to a HH rate of 50%-60%. It was also found that if the caregiver enters the room with a high risk of transferring infection to the current patient, then, given the assumptions in this study, a HH rate of only 55% suffices to reduce the risk of infection transfer to its lowest level. This may help prevent the next patient from acquiring an infection, averting an outbreak. The model is also capable of simulating the effect of imperfect HH on the risk of cross-contamination.
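A toy sketch of this kind of Markov-chain simulation is given below, reduced to a hypothetical 3-state chain (the thesis uses 18 states) with an invented transition matrix and a simplified contamination rule.

```python
import numpy as np

# hypothetical states: 0 = enter room, 1 = high-risk contact, 2 = low-risk task
p = np.array([[0.0, 0.6, 0.4],
              [0.1, 0.5, 0.4],
              [0.2, 0.5, 0.3]])

def simulate(n_steps, hh_rate, rng):
    """Count transfer opportunities: high-risk contacts with contaminated
    hands. HH is performed with probability hh_rate before each contact."""
    state, contaminated, transfers = 0, True, 0
    for _ in range(n_steps):
        state = rng.choice(3, p=p[state])
        if rng.random() < hh_rate:      # HH clears the hands
            contaminated = False
        if state == 1:                  # high-risk patient contact
            if contaminated:
                transfers += 1
            contaminated = True         # hands re-contaminated by contact
    return transfers

rng = np.random.default_rng(4)
for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
    mean_t = np.mean([simulate(50, rate, rng) for _ in range(500)])
    print(rate, mean_t)   # transfers fall steeply up to roughly 50-60% HH
```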