391 |
A method for reducing dimensionality in large design problems with computationally expensive analyses. Berguin, Steven Henri, 08 June 2015.
Strides in modern computational fluid dynamics and leaps in high-power computing have led to unprecedented capabilities for handling large aerodynamic problems. In particular, the emergence of adjoint design methods has been a breakthrough in the field of aerodynamic shape optimization: it enables expensive, high-dimensional optimization problems to be tackled efficiently using gradient-based methods in CFD, a task that was previously inconceivable. However, adjoint design methods are intended for gradient-based optimization; the curse of dimensionality is still very much alive when it comes to design space exploration, where gradient-free methods cannot be avoided. This research describes a novel approach for reducing dimensionality in large, computationally expensive design problems to a point where gradient-free methods become possible. This is done through an innovative application of Principal Component Analysis (PCA), in which PCA is applied to the gradient distribution of the objective function, something that had not been done before. This yields a linear transformation that maps a high-dimensional problem onto an equivalent low-dimensional subspace. None of the original variables are discarded; they are simply linearly combined into a new set of variables that are fewer in number. The method is tested on a range of analytical functions, a two-dimensional staggered airfoil test problem and a three-dimensional Over-Wing Nacelle (OWN) integration problem. In all cases, the method performed as expected and was found to be cost effective, requiring only a relatively small number of samples to achieve a large dimensionality reduction.
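As an illustrative aside, the gradient-PCA idea can be sketched in a few lines of Python; the function name, the 99% variance threshold and the toy objective below are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

def gradient_pca_subspace(grads, var_threshold=0.99):
    # grads: (n_samples, n_dims) array of sampled objective-function gradients
    C = grads.T @ grads / grads.shape[0]      # second-moment matrix of gradients
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh returns ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_threshold)) + 1
    return eigvecs[:, :k]                     # columns span the low-dim subspace

# Toy check: f(x) = (a.x)^2 varies only along a, so one direction suffices.
rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
X = rng.normal(size=(200, 5))
grads = 2.0 * (X @ a)[:, None] * a            # analytic gradient of f
W = gradient_pca_subspace(grads)
print(W.shape)                                # -> (5, 1); reduced variables: y = x @ W
```

Because every sampled gradient of this toy objective is parallel to a, PCA recovers a single dominant direction, which is the kind of linear map the abstract describes.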
|
392 |
A Fusion Model For Enhancement of Range Images. Hua, Xiaoben; Yang, Yuxia, January 2012.
In this thesis, we present a new way to enhance depth-map images, which we call the fusion of depth images. The goal of the thesis is to enhance depth images through a fusion of different classification methods. To that end, we use three related but distinct methodologies, the graph-cut, super-pixel and Principal Component Analysis algorithms, to compute the enhancement and produce our result. We then compare the enhanced result against the original depth images; this comparison indicates the effectiveness of our methodology.
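The abstract does not state the fusion rule itself; as a speculative sketch under that caveat, one simple per-pixel rule for combining the three methods' outputs is a median:

```python
import numpy as np

def fuse_depth_maps(enhanced_maps):
    # enhanced_maps: list of HxW depth arrays, e.g. the graph-cut,
    # super-pixel and PCA-based enhancement results for the same scene.
    # A per-pixel median rejects outliers from any single method.
    return np.median(np.stack(enhanced_maps, axis=0), axis=0)

# fused = fuse_depth_maps([gc_depth, sp_depth, pca_depth])  # hypothetical inputs
```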
|
393 |
Robotic Hand Evaluation Based on Task-Specific Kinematic Requirements. Neninger, Carlos Rafael, 01 January 2011.
With the rise of autonomous and robotic systems in field applications, the need for dexterous, highly adaptable end effectors has become a major research topic. Controlling robotic hands with a high number of independent actuators is recognized as a complex, high-dimensional problem with exponentially complex algorithms. However, recent studies have shown that human hand motion exhibits very high joint correlation: the hand produces a motion through complementary contributions of multiple joints, called synergies, which translate into a set of predefined postures. These correlations place the joint variables onto a common low-dimensional space, effectively reducing the number of independent variables.
In this thesis, we analyze the motion of the hand during a set of object grasps using multivariate Principal Component Analysis (mPCA) to extract both the principal variables and their correlation during grasping. We introduce the use of functional PCA (fPCA) on the principal components to study the dynamic requirements of the motion. The goal is to define a set of synergies that are common to all motions as well as synergies specific to each. We expand the analysis by classifying the object grasps, or tasks, using their functional components, or harmonics, over the entire motion. A set of groups is described based on this classification, confirming empirical findings. Lastly, we evaluate the motions generated from the analysis by applying them to robotic hands. The results from the mPCA and fPCA procedures are used to map the principal components from each motion onto underactuated robotic designs. We produce a viable routine that specifies how the mapping is performed, and finally we implement the generated motion on a real hand. The resulting robotic motion was evaluated on how well it mimics the human motion.
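As a hedged illustration of the synergy-extraction step only, the following sketch applies PCA to joint-angle data; the array shapes, noise level and 95% variance cutoff are assumptions, not the thesis protocol:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_samples, n_joints = 5000, 20                 # assumed: motion frames x joint angles
latent = rng.normal(size=(n_samples, 3))       # three underlying synergy activations
mixing = rng.normal(size=(3, n_joints))        # joint-coupling patterns
angles = latent @ mixing + 0.05 * rng.normal(size=(n_samples, n_joints))

pca = PCA(n_components=0.95)                   # keep components explaining 95% variance
scores = pca.fit_transform(angles)             # synergy activations per frame
print(pca.n_components_)                       # ~3 synergies recovered from 20 joints
reconstructed = pca.inverse_transform(scores)  # posture = mean + weighted synergies
```

Each row of `pca.components_` is one joint-coupling pattern, which is the sense in which a few synergies can drive an underactuated hand.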
|
394 |
Atrial Fibrillation Signal Analysis. Vaizurs, Raja Sarath Chandra Prasad, 01 January 2011.
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia encountered in clinical practice and is associated with increased mortality and morbidity. Identification of the sources of AF has been a goal of researchers for over 20 years. Current treatment procedures such as cardioversion, radiofrequency ablation, and multiple drugs have reduced the incidence of AF. Nevertheless, these treatments succeed in only 35-40% of AF patients, as they have limited effect in maintaining the patient in normal sinus rhythm. The problem stems from the fact that no methods have been developed to analyze the electrical activity generated by the cardiac cells during AF and to detect the aberrant atrial tissue that triggers it.
In clinical practice, the sources triggering AF are generally expected to lie at one of the four pulmonary veins in the left atrium. Classifying the signals originating from the four pulmonary veins in the left atrium has been the mainstay of signal analysis in this thesis, ultimately leading to correct localization of the source triggering AF. Unlike much current research, which relies on ECG signals for AF analysis, we collect intra-cardiac signals along with ECG signals. AF signals collected from catheters placed inside the heart give a better understanding of AF characteristics than the ECG alone.
In recent years, mechanisms leading to AF induction have begun to be explored, but current research and diagnosis of AF are mainly limited to inspection of the 12-lead ECG, QRS-subtraction methods, and spectral analysis to find the fibrillation rate, and thus to establishing its presence or absence. The main goal of this thesis research is to develop a methodology and algorithm for finding the source of AF. Pattern recognition techniques were used to classify the AF signals originating from the four pulmonary veins. The classification of AF signals recorded by a stationary intra-cardiac catheter was based on dominant frequency, frequency distribution and normalized power. Principal Component Analysis was used to reduce the dimensionality and, further, Linear Discriminant Analysis was used as the classification technique. An algorithm was developed and tested on recorded periods of AF with promising results.
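A minimal sketch of the PCA-then-LDA pipeline described above; the feature dimensions, sample counts and random data are placeholders, not the thesis's recordings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))    # placeholder features per recording: dominant
                                  # frequency, frequency distribution, normalized power
y = rng.integers(0, 4, size=200)  # label: which of the four pulmonary veins

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```

PCA first compresses the correlated spectral features; LDA then finds the directions that best separate the four vein classes.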
|
395 |
Face identity recognition from video recordings. Χαντζιάρας, Γεώργιος, 30 December 2014.
The subject of this diploma thesis is the creation of a face recognition system for video sequences. After a thorough study of the techniques proposed for face detection and recognition, the Viola-Jones algorithm was chosen for the detection part and principal component analysis (PCA) for the recognition part. The PCA algorithm was also applied to the ORL face database, and the parameters that affect its performance were studied. Finally, the face identification system that was built was tested under real conditions, and conclusions were drawn about its performance.
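A minimal sketch of the detection-plus-eigenfaces chain described above; the cascade file, the 92x112 crop size (borrowed from ORL's image format) and the random gallery are assumptions for illustration:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_frame, size=(92, 112)):          # ORL images are 92x112
    boxes = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray_frame[y:y + h, x:x + w], size) for x, y, w, h in boxes]

# Eigenfaces: fit PCA on a gallery of flattened face images (placeholder data),
# then identify a probe face by nearest neighbour in PCA space.
gallery = np.random.rand(40, 92 * 112)                  # stand-in for ORL vectors
pca = PCA(n_components=20).fit(gallery)
gallery_proj = pca.transform(gallery)

def identify(face_img):
    probe = pca.transform(face_img.reshape(1, -1).astype(float) / 255.0)
    return int(np.argmin(np.linalg.norm(gallery_proj - probe, axis=1)))
```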
|
396 |
Evaluation of methods for quantitative analysis of cardiosignals. Tamošiūnas, Mindaugas, 29 January 2008.
The aim: to elaborate quantitative methods for evaluating the morphology of signals reflecting the viability of heart tissue, heart function control and central/peripheral hemodynamics. Objectives: 1. Investigate the usefulness of truncated signal representation methods (decomposition of a signal over a finite set of basis functions) for analyzing the morphology of signals reflecting the viability of heart tissue, heart function control and central/peripheral hemodynamics; 2. Elaborate a method for the optimal representation of signals reflecting heart tissue viability; 3. Elaborate a method for the quantitative evaluation of ECG P-wave morphology dynamics reflecting autonomic heart function control; 4. Elaborate a method for the structural and morphological analysis of chest impedance signals reflecting central hemodynamics.
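To illustrate what a truncated signal representation means here, the sketch below projects a signal onto the first K functions of an orthonormal basis; the cosine (DCT-like) basis is an assumption, since the abstract does not name the basis actually used:

```python
import numpy as np

def truncated_representation(signal, K):
    # Build the first K functions of an orthonormal cosine basis and
    # represent the signal by K coefficients plus a reconstruction.
    N = len(signal)
    t = (np.arange(N) + 0.5) / N
    basis = np.array([np.cos(np.pi * k * t) for k in range(K)])
    basis /= np.linalg.norm(basis, axis=1, keepdims=True)     # orthonormal rows
    coeffs = basis @ signal                # K numbers summarize the morphology
    return coeffs, basis.T @ coeffs        # coefficients and reconstruction

signal = np.sin(2 * np.pi * np.linspace(0, 1, 256)) ** 3
coeffs, approx = truncated_representation(signal, K=8)
print(np.linalg.norm(signal - approx) / np.linalg.norm(signal))  # relative error
```

The coefficients, rather than the raw samples, become the quantitative description of signal morphology that the objectives refer to.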
|
397 |
Comparative Study of the Chemostratigraphic and Petrophysical Characteristics of Wells A-A1, A-L1, A-U1 and A-I1 in the Orange Basin, South Atlantic Margin, Offshore South Africa. Bailey, Carlynne, January 2009.
Many hydrocarbon reservoirs are situated in barren sequences that display poor stratigraphic control. Correlation between wells can become extremely difficult, and traditional correlation techniques can prove inadequate. Past studies have shown that trace- and major-element concentrations can be used as a correlation tool; this practice of using geochemical fingerprints to characterize and correlate wells is called chemostratigraphic analysis (Pearce et al., 1999). Chemostratigraphy has been recognized as a very important correlation technique, as it can be used for rocks of any age, in any geological setting, as well as for sequences that are traditionally defined as barren. Chemostratigraphic analysis can be used to resolve ambiguities in data produced by traditional correlation methods such as biostratigraphy, lithostratigraphy and geophysical logging. In areas where stratigraphic data are not available, it can be used to construct correlation frameworks for the sequences found in the area. The motivation behind this study is that the research is not only worthy of academic investigation but can also provide the industry with new insights into areas that were previously misunderstood because traditional correlation methods were inadequate. The study area, the Orange Basin, is located offshore South Africa and is largely underexplored. The basin, which hosts two gas fields, namely the Ibhubesi and Kudu gas fields, has large potential but has not been given due attention in the past, with only 34 wells drilled in the area. The Orange Basin has recently become a topic of investigation because of the belief that it may host more hydrocarbons. This study will use chemostratigraphy to provide geological information on this relatively under-explored basin. The aim of this research is to produce a chemostratigraphic framework for the Orange Basin in order to facilitate reservoir-scale interwell correlation. The objectives are to identify chemostratigraphic units or indices, to demonstrate the use of chemostratigraphy as an independent correlation technique, and to integrate the chemostratigraphic and petrophysical characteristics of the four wells to facilitate lithological identification.
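As an illustrative sketch of geochemical fingerprinting (the element suite, sample count and random concentrations below are placeholders, not data from these wells), samples can be standardized and grouped by their PCA scores:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
elements = ["Zr", "Ti", "Al", "K", "Rb", "Sr"]        # assumed element suite
conc = rng.lognormal(size=(120, len(elements)))       # concentrations per depth sample
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(conc))
# Samples with similar scores share a geochemical fingerprint and can be
# correlated between wells even where biostratigraphic control is poor.
print(scores.shape)                                   # (120, 2)
```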
|
398 |
Borderline psychopathology and the Defense Mechanism Test. Sundbom, Elisabet, January 1992.
The main purpose of the present studies has been to develop the Defense Mechanism Test (DMT) for clinical assessment of severe psychopathology, with the focus on the concept of Borderline Personality Organization (BPO) according to Kernberg. By relating the DMT and the Structural Interview to each other, the concurrent validity of the concept of Personality Organization (PO) for psychiatric inpatients has been investigated. Two different assessment approaches have been used for this purpose. One has been to take a theoretical perspective as the starting point for the classification of PO by means of the DMT. The other has been a purely empirical approach designed to discern natural and discriminating patterns of DMT distortions for different diagnostic groups. A dialogue is also in progress between the DMT and current research on the Rorschach test in order to increase understanding of borderline phenomena and pathology. The overall results support Kernberg's idea that borderline patients are characterized by specific intrapsychic constellations different from those of both psychotic and neurotic patients. Both the DMT and the Structural Interview provide reliable and consistent judgements of PO. Patients with the syndrome diagnosis Borderline Personality Disorder exhibit perceptual distortions different from those of patients suffering from other personality disorders. The classic borderline theory is a one-dimensional developmental model, where BPO constitutes a stable intermediate form between neurosis and psychosis. The present results suggest that a two-dimensional model might be more powerful. Hence, the level of self- and object representations and reality orientation might be considered both from a developmental and an affective perspective across varying forms of pathology. Kernberg suggests that borderline and psychotic patients share a common defensive constellation, centered around splitting, organizing self- and object representations. This view did not find support: the defensive pattern of the BPO patients is significantly different from the PPO defensive pattern. The BPO patients form their self- and object images affectively, and thus the self- and object representations would seem to influence the defensive organization and not the other way around. The results have implications for the procedure and interpretation of the DMT; for example, one and the same DMT picture can discern different kinds of personality; reactions other than the operationalized defense categories in the DMT manual can be valid predictors of PO; and some of the DMT defenses described in the manual, such as isolation, repression and to some degree denial, have to be reconceptualized. Multivariate models are powerful tools for the integration of reactions to the DMT into diagnostic patterns. (Dissertation summary, Umeå University, 1992, with four appended papers.)
|
399 |
Development of Fluorescence-based Tools for Characterization of Natural Organic Matter and Development of Membrane Fouling Monitoring Strategies for Drinking Water Treatment Systems. Peiris, Ramila Hishantha, 06 November 2014.
The objective of this research was to develop fluorescence-based tools that are suitable for performing rapid, accurate and direct characterization of natural organic matter (NOM) and colloidal/particulate substances present in natural water. Most available characterization methods are not suitable for characterizing all the major NOM fractions, such as protein-, humic acid-, fulvic acid- and polysaccharide-like substances, together with the colloidal/particulate matter present in natural water, nor are they suitable for rapid analyses. These NOM fractions and colloidal/particulate matter, individually and in combination, contribute to membrane fouling, disinfection by-product formation and undesirable biological growth in drinking water treatment processes and distribution systems. The novel techniques developed in this research therefore provide an avenue for improved understanding of these negative effects and for proactive implementation of control and/or optimization strategies.
The fluorescence excitation-emission matrix (EEM) method was used for characterization of NOM and colloidal/particulate matter present in water. Unlike most NOM and colloidal/particulate matter characterization techniques, this method can provide fast and consistent analyses with high instrumental sensitivity. The feasibility of using this method for monitoring NOM at very low concentration levels was also demonstrated with an emphasis on optimizing the instrument parameters necessary to obtain reproducible fluorescence signals.
Partial least squares regression (PLS) was used to develop calibration models by correlating the fluorescence EEM intensities of water samples that contained surrogate NOM fractions with their corresponding dissolved organic carbon (DOC) concentrations. These fluorescence-based calibration models were found to be suitable for identifying/monitoring the extent of the relative changes that occur in different NOM fractions and the interactions between polysaccharide- and protein-like NOM in water treatment processes and distribution systems.
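A hedged sketch of such an EEM-to-DOC calibration using scikit-learn's PLSRegression; the EEM dimensions, component count and DOC range are assumptions, not the thesis's measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
eems = rng.random(size=(60, 40 * 30))   # 60 samples, each a 40x30 EEM unfolded
doc = rng.uniform(1.0, 8.0, size=60)    # matching DOC concentrations, mg/L

pls = PLSRegression(n_components=5).fit(eems, doc)
print(pls.predict(eems[:5]).ravel())    # predicted DOC for new EEM samples
```

PLS is suited here because the unfolded EEM has far more (and highly collinear) wavelength pairs than there are samples, which ordinary least squares cannot handle.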
Principal component analysis (PCA) of fluorescence EEMs was identified as a viable tool for monitoring the performance of biological filtration as a pre-treatment step, as well as ultrafiltration (UF) and nanofiltration (NF) membrane systems. The principal components (PCs) extracted in this approach were related to the major membrane foulant groups such as humic substances (HS), protein-like and colloidal/particulate matter in natural water. The PC score plots generated using the fluorescence EEMs obtained after just one hour of UF or NF operation could be related to high fouling events likely caused by elevated levels of colloidal/particulate-like material in the biofilter effluents. This fluorescence EEM-based PCA approach was sensitive enough to be used at low organic carbon levels present in NF permeate and has potential as an early detection method to identify high fouling events, allowing appropriate operational countermeasures to be taken.
This fluorescence EEM-based PCA approach was also used to extract information relevant to reversible and irreversible membrane fouling behaviour in a bench-scale flat sheet cross flow UF process consisting of cycles of permeation and back-washing. PC score-based analysis revealed that colloidal/particulate matter mostly contributed to reversible fouling, while HS and protein-like matter were largely responsible for irreversible fouling. This method therefore has potential for monitoring modes of membrane fouling in drinking water treatment applications.
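The score-monitoring idea can be sketched as follows; the EEM grid size and the three-component choice (loosely matching the HS, protein-like and colloidal/particulate groups mentioned above) are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
train_eems = rng.random(size=(30, 40 * 30))   # historical feed-water EEMs, unfolded
pca = PCA(n_components=3).fit(train_eems)     # PCs ~ HS, protein-like, colloidal

new_eem = rng.random(size=(1, 40 * 30))       # EEM taken after 1 h of filtration
print(pca.transform(new_eem))                 # drift in these scores flags fouling
```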
The above approach was further improved by utilizing the evolution of the PC scores over the filtration time and relating these to membrane fouling through PC-score balance-based differential equations. Using these equations, the proposed fluorescence-based modeling approach was capable of forecasting UF fouling behaviours with good accuracy based solely on fluorescence data obtained at time = 15 min from the initiation of the filtration process. In addition, this approach was tested experimentally as a basis for optimization by modifying the UF back-washing times with the objective of minimizing energy consumption and maximizing water production. Preliminary optimization results demonstrated the potential of this approach to reduce power consumption significantly. This approach was also useful for identifying the foulant components of the NOM that were contributing to reversible and irreversible membrane fouling.
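The abstract does not give the score-balance equations themselves; a speculative sketch of one first-order form, ds/dt = -k(s - s_inf), with the scores, plateau value and time points invented for illustration, would be:

```python
import numpy as np

t_early = np.array([2.0, 5.0, 10.0, 15.0])       # minutes into filtration
s_early = np.array([1.00, 0.78, 0.55, 0.42])     # invented PC-1 scores
s_inf = 0.30                                     # assumed plateau score

# ds/dt = -k (s - s_inf)  =>  log(s - s_inf) is linear in t with slope -k,
# so k is fitted from the early points and the solution extrapolated forward.
slope, intercept = np.polyfit(t_early, np.log(s_early - s_inf), 1)
k = -slope
t = np.linspace(0.0, 120.0, 49)
s_forecast = s_inf + np.exp(intercept) * np.exp(-k * t)
print(k, s_forecast[-1])                          # forecast approaches s_inf
```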
Grand River water (Southwestern Ontario, Canada) was used as the natural water source for developing the techniques presented in this thesis. Future research focusing on testing these methods for monitoring of membrane fouling and treatment processes in large-scale drinking water treatment facilities that experience different sources of raw water would be useful for identifying the limitations of these techniques and areas for improvement.
|
400 |
Automatic Target Recognition in Infrared Imagery. Bayik, Tuba Makbule, 01 September 2004.
The task of automatically recognizing targets in IR imagery has a history of approximately 25 years of research and development. ATR is an application of pattern recognition and scene analysis in the defense industry, and it is still one of the challenging problems. This thesis may be viewed as an exploratory study of the ATR problem, implementing recognition algorithms that are reported to perform well in the literature. Throughout the study, PCA, subspace LDA, ICA, the nearest-mean classifier, the K-nearest-neighbors classifier, the nearest-neighbor classifier and the LVQ classifier are implemented, and their performances are compared in terms of recognition rate. According to the simulation results, the system that uses ICA as the feature extractor and LVQ as the classifier performs best. The good performance of this system is due to ICA's exploitation of the higher-order statistics of the data and the success of LVQ in modifying the decision boundaries.
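A hedged sketch of that best-performing combination, ICA features followed by an LVQ classifier; the chip size, class count and the textbook LVQ1 update below are assumptions, not the thesis's exact implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 64 * 64))          # flattened IR target chips (assumed size)
y = rng.integers(0, 3, size=300)             # three target classes

Z = FastICA(n_components=20, random_state=0).fit_transform(X)  # ICA features

# LVQ1: one prototype per class; the winning prototype moves toward
# same-class samples and away from other-class samples.
protos = np.array([Z[y == c].mean(axis=0) for c in range(3)])
lr = 0.05
for _ in range(20):
    for z, label in zip(Z, y):
        w = int(np.argmin(np.linalg.norm(protos - z, axis=1)))
        protos[w] += lr * (z - protos[w]) * (1.0 if w == label else -1.0)

pred = np.argmin(np.linalg.norm(Z[:, None, :] - protos[None, :, :], axis=2), axis=1)
print((pred == y).mean())                    # chance-level on this random data
```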
|