  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
391

Robotic Hand Evaluation Based on Task Specific Kinematic Requirements

Neninger, Carlos Rafael 01 January 2011 (has links)
With the rise of autonomous and robotic systems in field applications, the need for dexterous, highly adaptable end effectors has become a major research topic. Control of robotic hands with a high number of independent actuators is recognized as a complex, high-dimensional problem with exponentially complex algorithms. However, recent studies have shown that human hand motion exhibits very high joint correlation, which translates into a set of predefined postures, or synergies: the hand produces a motion through the complementary contribution of multiple joints. These correlations place the variables onto a common low-dimensional space, effectively reducing the number of independent variables. In this thesis, we analyze the motion of the hand during a set of object grasps using multivariate Principal Component Analysis (mPCA) to extract both the principal variables and their correlation during grasping. We introduce the use of Functional PCA (fPCA) on the principal components to study the dynamic requirements of the motion. The goal is to define a set of synergies common to all motions as well as synergies specific to each. We expand the analysis by classifying the object grasps, or tasks, using their functional components, or harmonics, over the entire motion. A set of groups is described based on this classification, confirming empirical findings. Lastly, we evaluate the motions generated from the analysis by applying them to robotic hands. The results from the mPCA and fPCA procedures are used to map the principal components from each motion onto underactuated robotic designs. We produce a viable routine that indicates how the mapping is performed, and finally we implement the generated motion on a real hand. The resulting robotic motion was evaluated on how well it mimics human motion.
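The synergy-extraction step described above can be sketched with ordinary PCA on a joint-angle matrix. The sketch below uses synthetic data with made-up dimensions (40 grasps, 15 joint angles, two underlying postures); it illustrates the dimensionality reduction the thesis exploits, not the thesis's actual mPCA/fPCA pipeline.

```python
import numpy as np

# Hypothetical setup: 40 recorded grasps, each described by 15 joint angles.
# All names and dimensions are illustrative, not taken from the thesis.
rng = np.random.default_rng(0)
n_grasps, n_joints = 40, 15

# Simulate strongly correlated joint motion: every grasp is (approximately)
# a combination of two underlying postures ("synergies") plus small noise.
synergies = rng.normal(size=(2, n_joints))
weights = rng.normal(size=(n_grasps, 2))
angles = weights @ synergies + 0.01 * rng.normal(size=(n_grasps, n_joints))

# PCA via SVD of the mean-centered data matrix.
centered = angles - angles.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# With only two true synergies, the first two components capture almost
# all of the variance, mirroring the reduction exploited in the thesis.
print(f"variance explained by 2 PCs: {explained[:2].sum():.3f}")
```

The rows of `Vt[:2]` play the role of the synergies; projecting a new grasp onto them gives the low-dimensional posture weights.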
392

Atrial Fibrillation Signal Analysis

Vaizurs, Raja Sarath Chandra Prasad 01 January 2011 (has links)
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia encountered in clinical practice and is associated with increased mortality and morbidity. Identification of the sources of AF has been a goal of researchers for over 20 years. Current treatment procedures such as cardioversion, radiofrequency ablation, and multiple drugs have reduced the incidence of AF. Nevertheless, these treatments succeed in only 35-40% of AF patients, as they have limited effect in maintaining the patient in normal sinus rhythm. The problem stems from the fact that no methods have been developed to analyze the electrical activity generated by the cardiac cells during AF and to detect the aberrant atrial tissue that triggers it. In clinical practice, the sources triggering AF are generally expected to be at one of the four pulmonary veins in the left atrium. Classifying the signals originating from the four pulmonary veins in the left atrium has been the mainstay of signal analysis in this thesis, which ultimately leads to correctly locating the source triggering AF. Unlike much current research, which uses ECG signals for AF analysis, we collect intracardiac signals along with ECG signals. AF signals collected from catheters placed inside the heart give a better understanding of AF characteristics than the ECG alone. In recent years, mechanisms leading to AF induction have begun to be explored, but the current state of research and diagnosis of AF mainly involves inspection of the 12-lead ECG, QRS subtraction methods, and spectral analysis to find the fibrillation rate, and is limited to establishing its presence or absence. The main goal of this thesis research is to develop a methodology and algorithm for finding the source of AF. Pattern recognition techniques were used to classify the AF signals originating from the four pulmonary veins.
The classification of AF signals recorded by a stationary intra-cardiac catheter was based on dominant frequency, frequency distribution, and normalized power. Principal Component Analysis was used to reduce the dimensionality, and Linear Discriminant Analysis was then used as the classification technique. An algorithm was developed and tested during recorded periods of AF with promising results.
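A minimal sketch of this pipeline on synthetic feature vectors (stand-ins for dominant frequency, frequency distribution, and normalized power): PCA for dimensionality reduction followed by a classification rule in the reduced space. Note the thesis uses Linear Discriminant Analysis as the classifier; a simpler nearest-class-mean rule is substituted here to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "AF feature vectors" for four classes, one per pulmonary vein.
# Class separations and feature counts are entirely illustrative.
n_per_class, n_features = 30, 12
class_means = rng.normal(scale=3.0, size=(4, n_features))
X = np.vstack([m + rng.normal(size=(n_per_class, n_features))
               for m in class_means])
y = np.repeat(np.arange(4), n_per_class)

# PCA: project onto the top 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Nearest-class-mean classification in the reduced space
# (a stand-in for the LDA classifier used in the thesis).
centroids = np.array([Z[y == k].mean(axis=0) for k in range(4)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```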
393

Αναγνώριση ταυτότητας προσώπου από βιντεοσκοπήσεις / Face identification from video recordings

Χαντζιάρας, Γεώργιος 30 December 2014 (has links)
The subject of this diploma thesis is the creation of a face recognition system for video sequences. After a thorough study of the methods proposed for face detection and recognition, the Viola-Jones algorithm was chosen for the detection part and principal component analysis (PCA) for the recognition part. PCA was also applied to the ORL face database, and the parameters affecting its performance were studied. Finally, the face identification system was tested under real conditions and conclusions were drawn about its performance.
394

Kardiosignalų kiekybinės analizės metodų įvertinimas / Evaluation of methods for quantitative analysis of cardiosignals

Tamošiūnas, Mindaugas 29 January 2008 (has links)
The aim: to elaborate quantitative methods for evaluating the morphology of signals reflecting the viability of heart tissue, heart function control, and central/peripheral hemodynamics. Objectives: 1. Investigate the usefulness of truncated signal representation methods for analyzing the morphology of signals reflecting the viability of heart tissue, heart function control, and central/peripheral hemodynamics; 2. Elaborate a method for the optimal representation of signals reflecting heart tissue viability; 3. Elaborate a method for the structural and morphological analysis of chest impedance signals reflecting central hemodynamics; 4. Elaborate a method for the quantitative evaluation of ECG P-wave morphology dynamics reflecting autonomous heart function control.
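The first objective, truncated signal representation, amounts to keeping only the leading coefficients of a signal's expansion in a finite basis. A minimal numpy sketch with an illustrative cosine basis (the specific basis functions used in the thesis are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
t = np.arange(N)

# Orthonormal DCT-II style cosine basis, one basis function per row.
k = np.arange(N)[:, None]
basis = np.sqrt(2.0 / N) * np.cos(np.pi * (t + 0.5) * k / N)
basis[0] /= np.sqrt(2.0)

# A test signal dominated by two low-order basis functions plus small noise,
# standing in for a cardiosignal with a compact morphology description.
signal = 2.0 * basis[3] + 1.0 * basis[7] + 0.001 * rng.normal(size=N)

coeffs = basis @ signal              # analysis: project onto the basis
K = 20
approx = basis[:K].T @ coeffs[:K]    # synthesis from the first K terms only

rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(f"relative error using {K} of {N} coefficients: {rel_err:.4f}")
```

The K retained coefficients serve as a compact quantitative description of the signal's morphology, which is the idea behind the evaluation methods developed in the thesis.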
395

Comparative Study of the Chemostratigraphic and Petrophysical Characteristics of Wells A-A1, A-L1, A-U1 and A-I1 in the Orange Basin, South Atlantic Margin, Offshore South Africa

Bailey, Carlynne. January 2009 (has links)
Many hydrocarbon reservoirs are situated in barren sequences that display poor stratigraphic control. Correlation between wells can become extremely difficult, and traditional correlation techniques can prove inadequate. Past studies have shown that trace and major element concentrations can be used as a correlation tool; this practice of using geochemical fingerprints to correlate between wells is called chemostratigraphic analysis (Pearce et al., 1999). Chemostratigraphy has been recognized as a very important correlation technique because it can be used for rocks of any age, in any geological setting, as well as for sequences traditionally defined as barren. Chemostratigraphic analysis can resolve ambiguities in data produced by traditional correlation methods such as biostratigraphy, lithostratigraphy and geophysical logging. In areas where stratigraphic data are not available, it can be used to construct correlation frameworks for the sequences found in the area. The motivation behind this study is that the research is not only worthy of academic investigation but can also provide the industry with new insights into areas that were previously misunderstood because traditional correlation methods were inadequate. The study area, the Orange Basin, is located offshore South Africa and is largely underexplored. The basin, which hosts two gas fields, namely the Ibhubesi and Kudu fields, has large potential but has not been given due attention in the past, with only 34 wells drilled in the area. The Orange Basin has recently become a topic of investigation because of the belief that it may host more hydrocarbons. This study utilises chemostratigraphy to provide geological information on this relatively underexplored basin. The aim of this research is to produce a chemostratigraphic framework for the Orange Basin in order to facilitate reservoir-scale interwell correlation.
The objectives of this research are to identify chemostratigraphic units or indices, to demonstrate the use of chemostratigraphy as an independent correlation technique, and to integrate the chemostratigraphic and petrophysical characteristics of the four wells to facilitate lithological identification.
396

Borderline psychopathology and the defense mechanism test

Sundbom, Elisabet January 1992 (has links)
The main purpose of the present studies has been to develop the Defense Mechanism Test (DMT) for clinical assessment of severe psychopathology, with the focus on the concept of Borderline Personality Organization (BPO) according to Kernberg. By relating the DMT and the Structural Interview to each other, the concurrent validity of the concept of Personality Organization (PO) for psychiatric inpatients has been investigated. Two different assessment approaches have been used for this purpose. One has been to take a theoretical perspective as the starting point for the classification of PO by means of the DMT. The other has been a purely empirical approach designed to discern natural and discriminating patterns of DMT distortions for different diagnostic groups. A dialogue is also in progress between the DMT and current research on the Rorschach test in order to increase understanding of borderline phenomena and pathology. The overall results support Kernberg's idea that borderline patients are characterized by specific intrapsychic constellations different from those of both psychotic and neurotic patients. Both the DMT and the Structural Interview provide reliable and consistent judgements of PO. Patients with the syndrome diagnosis Borderline Personality Disorder exhibit perceptual distortions different from those of patients suffering from other personality disorders. The classic borderline theory is a one-dimensional developmental model in which BPO constitutes a stable intermediate form between neurosis and psychosis. The present results suggest that a two-dimensional model might be more powerful. Hence, the level of self- and object representations and reality orientation might be considered both from a developmental and an affective perspective across varying forms of pathology. Kernberg suggests that borderline and psychotic patients share a common defensive constellation, centered around splitting, that organizes self- and object representations. This view did not find support.
The defensive pattern of the BPO patients is significantly different from the PPO defensive pattern. The BPO patients form their self- and object images affectively, and thus the self- and object representations would seem to influence the defensive organization and not the other way around. The results have implications for the procedure and the interpretation of the DMT: one and the same DMT picture can discern different kinds of personality; reactions other than the operationalized defense categories in the DMT manual can be valid predictors of PO; and some of the DMT defenses described in the manual, such as isolation, repression and to some degree denial, have to be reconceptualized. Multivariate models are powerful tools for integrating reactions to the DMT into diagnostic patterns. / Diss. (summary) Umeå: Umeå universitet, 1992; with 4 accompanying papers.
397

Development of Fluorescence-based Tools for Characterization of Natural Organic Matter and Development of Membrane Fouling Monitoring Strategies for Drinking Water Treatment Systems

Peiris, Ramila Hishantha 06 November 2014 (has links)
The objective of this research was to develop fluorescence-based tools that are suitable for performing rapid, accurate and direct characterization of natural organic matter (NOM) and colloidal/particulate substances present in natural water. Most available characterization methods are neither capable of characterizing all the major NOM fractions, such as protein-, humic acid-, fulvic acid- and polysaccharide-like substances, together with the colloidal/particulate matter present in natural water, nor suitable for rapid analyses. These NOM fractions and colloidal/particulate matter, individually and in combination, contribute to membrane fouling, disinfection by-product formation and undesirable biological growth in drinking water treatment processes and distribution systems. The novel techniques developed in this research therefore provide an avenue for improved understanding of these negative effects and for proactive implementation of control and/or optimization strategies. The fluorescence excitation-emission matrix (EEM) method was used for characterization of NOM and colloidal/particulate matter present in water. Unlike most NOM and colloidal/particulate matter characterization techniques, this method can provide fast and consistent analyses with high instrumental sensitivity. The feasibility of using this method for monitoring NOM at very low concentration levels was also demonstrated, with an emphasis on optimizing the instrument parameters necessary to obtain reproducible fluorescence signals. Partial least squares (PLS) regression was used to develop calibration models by correlating the fluorescence EEM intensities of water samples that contained surrogate NOM fractions with their corresponding dissolved organic carbon (DOC) concentrations.
These fluorescence-based calibration models were found to be suitable for identifying/monitoring the extent of the relative changes that occur in different NOM fractions and the interactions between polysaccharide- and protein-like NOM in water treatment processes and distribution systems. Principal component analysis (PCA) of fluorescence EEMs was identified as a viable tool for monitoring the performance of biological filtration as a pre-treatment step, as well as of ultrafiltration (UF) and nanofiltration (NF) membrane systems. The principal components (PCs) extracted in this approach were related to the major membrane foulant groups, such as humic substances (HS), protein-like and colloidal/particulate matter in natural water. The PC score plots generated using the fluorescence EEMs obtained after just one hour of UF or NF operation could be related to high fouling events likely caused by elevated levels of colloidal/particulate-like material in the biofilter effluents. This fluorescence EEM-based PCA approach was sensitive enough to be used at the low organic carbon levels present in NF permeate and has potential as an early detection method for identifying high fouling events, allowing appropriate operational countermeasures to be taken. The approach was also used to extract information relevant to reversible and irreversible membrane fouling behaviour in a bench-scale flat-sheet cross-flow UF process consisting of cycles of permeation and back-washing. PC score-based analysis revealed that colloidal/particulate matter mostly contributed to reversible fouling, while HS and protein-like matter were largely responsible for irreversible fouling. This method therefore has potential for monitoring modes of membrane fouling in drinking water treatment applications. The above approach was further improved by following the evolution of the PC scores over the filtration time and relating these to membrane fouling by means of PC score balance-based differential equations. Using these equations, the proposed fluorescence-based modeling approach was capable of forecasting UF fouling behaviour with good accuracy based solely on fluorescence data obtained 15 min after the initiation of filtration. In addition, this approach was tested experimentally as a basis for optimization by modifying the UF back-washing times with the objective of minimizing energy consumption and maximizing water production. Preliminary optimization results demonstrated the potential of this approach to reduce power consumption significantly. The approach was also useful for identifying the NOM components contributing to reversible and irreversible membrane fouling. Grand River water (Southwestern Ontario, Canada) was used as the natural water source for developing the techniques presented in this thesis. Future research focusing on testing these methods for monitoring membrane fouling and treatment processes in large-scale drinking water treatment facilities that draw on different sources of raw water would be useful for identifying the limitations of these techniques and areas for improvement.
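The unfold-and-PCA step can be sketched as follows on synthetic EEMs: each excitation-emission matrix is flattened into a row vector, PCA is applied to the collection, and samples dominated by different fluorophore regions separate in the score plot. Peak locations, grid sizes and amplitudes below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ex, n_em = 20, 30          # excitation x emission grid (illustrative)

def gaussian_peak(cx, cy):
    """A smooth fluorescence-like peak centered at (cx, cy) on the grid."""
    ex, em = np.meshgrid(np.arange(n_ex), np.arange(n_em), indexing="ij")
    return np.exp(-((ex - cx) ** 2 + (em - cy) ** 2) / 20.0)

humic = gaussian_peak(5, 22)      # stand-in "humic-like" peak region
protein = gaussian_peak(14, 8)    # stand-in "protein-like" peak region

# Two groups of samples dominated by different fluorophores.
eems = [a * humic + 0.2 * protein + 0.02 * rng.normal(size=(n_ex, n_em))
        for a in rng.uniform(1, 2, size=10)]
eems += [0.2 * humic + a * protein + 0.02 * rng.normal(size=(n_ex, n_em))
         for a in rng.uniform(1, 2, size=10)]
X = np.array([e.ravel() for e in eems])   # unfold each EEM into a row

# PCA of the unfolded EEMs; scores along PC1 separate the two groups.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

gap = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
print(f"group separation along PC1: {gap:.2f}")
```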
398

Automatic Target Recognition In Infrared Imagery

Bayik, Tuba Makbule 01 September 2004 (has links)
The task of automatically recognizing targets in IR imagery has a history of approximately 25 years of research and development. Automatic target recognition (ATR) is an application of pattern recognition and scene analysis in the defense industry, and it remains a challenging problem. This thesis may be viewed as an exploratory study of the ATR problem using promising recognition algorithms implemented in the area. The examined algorithms are among the solutions to the ATR problem that are reported to perform well in the literature. Throughout the study, PCA, subspace LDA, ICA, the nearest mean classifier, the K nearest neighbors classifier, the nearest neighbor classifier, and the LVQ classifier are implemented, and their performances are compared in terms of recognition rate. According to the simulation results, the system that uses ICA as the feature extractor and LVQ as the classifier performs best. The good performance of this system is due to the higher-order statistics of the data and the success of LVQ in modifying the decision boundaries.
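Of the compared classifiers, LVQ is the least standard; a minimal LVQ1 sketch on synthetic two-class data is shown below. The data, prototype initialisation and learning rate are illustrative, and the ICA feature-extraction stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated 2-D classes standing in for extracted target features.
X = np.vstack([rng.normal(loc=(-2, 0), size=(50, 2)),
               rng.normal(loc=(+2, 0), size=(50, 2))])
y = np.repeat([0, 1], 50)

# One prototype per class, initialised at a rough class location.
protos = np.array([[-1.0, 0.5], [1.0, -0.5]])
proto_labels = np.array([0, 1])

# LVQ1: the winning prototype is attracted to correctly labelled samples
# and repelled from incorrectly labelled ones.
lr = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):
        d = ((protos - X[i]) ** 2).sum(axis=1)
        w = np.argmin(d)                      # winning prototype
        sign = 1.0 if proto_labels[w] == y[i] else -1.0
        protos[w] += sign * lr * (X[i] - protos[w])

# Classify by nearest prototype.
pred = proto_labels[
    np.argmin(((X[:, None] - protos[None]) ** 2).sum(-1), axis=1)]
accuracy = (pred == y).mean()
print(f"LVQ1 training accuracy: {accuracy:.2f}")
```

The prototype update is what "modifies the decision boundaries" in the abstract's terms: prototypes migrate toward their own class and away from confusable samples.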
399

Application Of A Natural-resonance Based Feature Extraction Technique To Small-scale Aircraft Modeled By Conducting Wires For Electromagnetic Target Classification

Ersoy, Mehmet Okan 01 October 2004 (has links)
The problem studied in this thesis is the classification of small-scale aircraft targets using a natural-resonance-based electromagnetic feature extraction technique. The aircraft targets are modeled by perfectly conducting, thin wire structures. The electromagnetic back-scattered data used in the classification process are numerically generated for five aircraft models. A contemporary signal processing tool, the Wigner-Ville distribution (WD), is employed in this study, in addition to the principal components analysis technique, to extract target features mainly from late-time target responses. The WD is applied to the electromagnetic back-scattered responses from different aspects. Feature vectors are then extracted from suitably chosen late-time portions of the WD outputs, which include natural-resonance-related information, for every target and aspect, to decrease aspect dependency. The database of the classifier is constructed from the feature vectors extracted at only a few reference aspects. Principal components analysis is also used to fuse the feature vectors and/or late-time aircraft responses extracted from the reference aspects of a given target into a single characteristic feature vector of that target, to further reduce aspect dependency. Consequently, an almost aspect-independent classifier achieving a high correct classification rate is designed for small-scale aircraft targets.
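The Wigner-Ville distribution at the core of the feature extraction can be sketched directly: for each time instant, form the instantaneous autocorrelation over symmetric lags and Fourier-transform it over the lag variable. The single-tone test signal and sizes below are illustrative; for a tone, the WD energy concentrates at a single frequency bin.

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution, one FFT over lags per time step."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)        # largest symmetric lag available
        tau = np.arange(-taumax, taumax + 1)
        acf = np.zeros(n, dtype=complex)
        acf[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[t] = np.real(np.fft.fft(acf))   # conjugate-symmetric, so real
    return W

# Analytic single-tone test signal at normalized frequency f0.
n, f0 = 128, 0.25
t = np.arange(n)
x = np.exp(2j * np.pi * f0 * t)

W = wvd(x)
# With this lag convention, a tone at f0 peaks at bin 2*f0*n.
peak_bin = int(np.argmax(W[n // 2]))
print("peak frequency bin:", peak_bin)
```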
400

一種基於函數型資料主成分分析的曲線對齊方式 / A Curve Alignment Method Based on Functional PCA

林昱航, Lin, Yu-Hang Unknown Date (has links)
In this thesis, a procedure combining curve alignment and functional principal component analysis, proposed by Kneip and Ramsay, is studied. In the alignment step, time-warping functions are used to remove timing differences between the observed curves; principal component analysis then helps uncover the main characteristics of the data. In functional principal component analysis, if the data curves are roughly linear combinations of k basis curves, then the data curves are expected to be explained well by the principal component curves. The goal of this study is to examine whether this property still holds when the curves need to be aligned. It is found that, if the aligned data curves can be approximated well by k basis curves, then applying Kneip and Ramsay's procedure to the unaligned curves gives k principal components that explain the aligned curves well. Several approaches for selecting the number of principal components are also proposed and compared.
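The align-then-PCA idea can be sketched with the simplest possible warp, a time shift (a stand-in for the general warping functions of Kneip and Ramsay's procedure): curves sharing one template shape but shifted in time are registered by cross-correlation, after which a single principal component explains nearly all remaining variation. All curves and sizes here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_curves, n_pts = 25, 200
t = np.linspace(0, 1, n_pts)
template = np.exp(-((t - 0.5) ** 2) / 0.01)   # common bump-shaped trend

# Each observed curve is the template, shifted in time and scaled.
shifts = rng.integers(-15, 16, size=n_curves)
amps = 1.0 + 0.3 * rng.normal(size=n_curves)
curves = np.array([a * np.roll(template, s) for a, s in zip(amps, shifts)])

# Register each curve to the template by the lag maximizing circular
# cross-correlation (computed via FFT).
aligned = np.empty_like(curves)
for i, c in enumerate(curves):
    xcorr = np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(template))))
    aligned[i] = np.roll(c, -int(np.argmax(xcorr)))

# After alignment, amplitude is the only remaining mode of variation,
# so the first principal component dominates.
Ac = aligned - aligned.mean(axis=0)
s = np.linalg.svd(Ac, compute_uv=False)
ratio = s[0] ** 2 / np.sum(s ** 2)
print(f"variance explained by PC1 after alignment: {ratio:.3f}")
```

Running PCA on the unaligned `curves` instead spreads the variance over many components, which is exactly the phenomenon the alignment step is meant to remove.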
