241 |
Vision-assisted Object Tracking. Ozertem, Kemal Arda, 01 February 2012 (has links) (PDF)
In this thesis, a video tracking method is proposed that is based on both computer vision and estimation theory. For this purpose, the overall study is partitioned into four related subproblems. The first part is moving object detection, for which two different background modeling methods are developed. The second part is feature extraction and estimation of the optical flow between video frames. As the feature extraction method, a well-known corner detector algorithm is employed, and the extraction is applied only to the moving regions of the scene. For the feature points, optical flow vectors are calculated using an improved version of the Kanade-Lucas tracker. The resulting optical flow field between consecutive frames is used directly in the proposed tracking method. In the third part, a particle filter structure is built to carry out the tracking; the filter is improved by adding the optical flow data to the state equation as a correction term. In the last part of the study, the performance of the proposed approach is compared against standard implementations of particle-filter-based trackers. Based on the simulation results, it can be argued that inserting vision-based optical flow estimation into the tracking formulation improves the overall performance.
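As a rough illustration of the idea in this abstract, the sketch below runs a 2D particle filter whose prediction step adds an optical-flow vector to the state equation as a correction term. All numbers (noise levels, particle count, the synthetic motion) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_step(particles, weights, flow, measurement, q=2.0, r=5.0):
    """One step of a 2D particle filter whose prediction is corrected by an
    optical-flow estimate (the flow enters the state equation additively).
    particles: (N, 2) positions; flow: (2,) mean flow at the tracked region;
    measurement: (2,) observed target position."""
    # Prediction: random walk plus the optical-flow correction term.
    particles = particles + flow + rng.normal(0.0, q, particles.shape)
    # Update: Gaussian measurement likelihood.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / r**2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Synthetic run: the target drifts by (2, 1) per frame and the optical flow
# is a noisy estimate of exactly that motion.
particles = rng.normal([50.0, 50.0], 5.0, (200, 2))
weights = np.full(200, 1.0 / 200)
truth = np.array([50.0, 50.0])
for _ in range(20):
    truth = truth + np.array([2.0, 1.0])
    flow = np.array([2.0, 1.0]) + rng.normal(0.0, 0.2, 2)
    z = truth + rng.normal(0.0, 3.0, 2)
    particles, weights = track_step(particles, weights, flow, z)
estimate = weights @ particles  # posterior mean position
```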
|
242 |
Ανάπτυξη μεθόδων αυτόματης κατηγοριοποίησης κειμένων προσανατολισμένων στο φύλο / Development of methods for automatic gender-oriented text classification. Αραβαντινού, Χριστίνα, 15 May 2015 (has links)
The rapid growth of social media in recent years raises key questions for the research community. The collection and organization of the huge volume of available information by topic, author, age or gender are characteristic examples of the problems that need to be addressed. The accumulation of such information from the digital traces users leave as they express opinions on different subjects or describe moments of their lives creates trends, which spread rapidly through tweets, blog posts and Facebook statuses. An interesting question is how all this information can be classified according to demographic characteristics, such as gender or age. Direct clues that users provide about themselves, along with indirect information that can be derived from linguistic analysis of their texts, are useful elements for identifying an author's gender. More specifically, detecting a user's gender from textual data can be cast as a document classification problem: the text is processed and machine learning is then applied to predict the gender. In particular, statistical and linguistic analysis of the texts yields features (e.g., word frequencies, parts of speech, word lengths, content-related characteristics), which are then used for gender identification. The aim of this thesis is the study and development of a system for gender-based classification of blog and social media posts. The performance of different combinations of features and classifiers for gender identification is examined.
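The pipeline described above, extracting features from text and feeding them to a classifier, can be sketched with a minimal multinomial naive Bayes over word counts. The toy corpus, its two labels, and the choice of naive Bayes are illustrative assumptions here, not the thesis's actual data or classifiers.

```python
from collections import Counter
import math

def train_nb(docs, labels):
    """Fit a multinomial naive Bayes model: per-class word counts and priors."""
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for doc, c in zip(docs, labels):
        counts[c].update(doc.lower().split())
    vocab = {w for cnt in counts.values() for w in cnt}
    return counts, priors, vocab

def predict_nb(doc, counts, priors, vocab):
    """Return the class with the highest log-posterior (Laplace smoothing)."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for c, cnt in counts.items():
        lp = math.log(priors[c] / total)
        denom = sum(cnt.values()) + len(vocab)
        for w in doc.lower().split():
            lp += math.log((cnt[w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical toy corpus with invented stylistic markers; the thesis's real
# feature set is much richer (word frequencies, POS tags, word lengths, etc.).
docs = ["honestly I totally adore this so much",
        "match report final score stats recap",
        "adore these lovely photos so so pretty",
        "score update stats from the final match"]
labels = ["female", "male", "female", "male"]
model = train_nb(docs, labels)
```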
|
243 |
Feature extraction via dependence structure optimization / Požymių išskyrimas optimizuojant priklausomumo struktūrą. Daniušis, Povilas, 01 October 2012 (has links)
In many important real-world applications the initial representation of the data is inconvenient, or even prohibitive, for further analysis. For example, in image analysis, text analysis and computational genetics, high-dimensional, massive, structural, incomplete, and noisy data sets are common. Therefore, feature extraction, or the revelation of informative features from the raw data, is one of the fundamental machine learning problems. Efficient feature extraction helps to understand the data and the process that generates it, and reduces the costs of future measurements and data analysis. The representation of structured data as a compact set of informative numeric features allows applying well-studied machine learning techniques instead of developing new ones.
The dissertation focuses on supervised and semi-supervised feature extraction methods which optimize the dependence structure of the features. Dependence is measured with the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure). Two dependence structures are investigated: in the first case we seek features which maximize the dependence on the dependent variable, and in the second we additionally minimize the mutual dependence among the features. Linear and kernel formulations of HBFE and HSCA are provided, and semi-supervised variants of HBFE and HSCA are constructed using the Laplacian regularization framework.
The suggested algorithms were investigated experimentally on conventional and multi-label classification data; the experiments show that the proposed algorithms improve classification performance compared with PCA or LDA.
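The HSIC measure underlying the dissertation has a simple empirical form: the biased kernel estimator tr(KHLH)/(n-1)^2 with centering matrix H. Below is a minimal numpy sketch of that estimator; the Gaussian kernels, their width, and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gaussian-kernel Gram matrix for the samples in the rows of x."""
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2, with H = I - 11^T/n."""
    n = x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
dep = hsic(x, x + 0.1 * rng.normal(size=(200, 1)))   # strongly dependent pair
indep = hsic(x, rng.normal(size=(200, 1)))           # independent pair
```

Dependent pairs score markedly higher than independent ones, which is what makes HSIC usable as an optimization objective for feature extraction.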
|
244 |
Feature Extraction Workflows for Urban Mobile-Terrestrial LiDAR Data. McQuat, Gregory John, 27 May 2011 (has links)
Mobile Terrestrial LiDAR (MTL) is an active remote sensing technology that uses laser-based ranging and global positioning systems (GPS) to record 3D point-location measurements on surfaces within and near transportation corridors, such as along a railroad track or a street. This thesis examines geovisualization for improving user-oriented workflows, and geographic object-based image analysis (GEOBIA) for the development of automated feature extraction. A LiDAR sensor-centric perspective adopted during the data acquisition phase is used to organize data for the user and to transform the data into a 2D reference frame for object-oriented image analysis of MTL data.
Organizing the display of MTL data relative to the scanner presented new opportunities for visualization techniques and proved an effective way of communicating which space in an urban scene was scanned and which was not. By explicitly displaying gaps in data coverage, it offers new avenues for quality assessment of MTL surveys of urban environments. A number of techniques for navigating and visualizing data from a sensor perspective are examined.
A novel sensor-perspective transformation of MTL data from three to two dimensions enables analysis of MTL data in common GIS and image-processing environments. GEOBIA software (Definiens’ eCognition) is used to construct a procedural feature extraction workflow. The procedures are constructed with semantic classes, data processing rules and functions that drive geometric segmentation and feature recognition. Geometric regularities in urban scenes and knowledge about spatial and semantic relationships are incorporated into the rule set. The results are fluidly integrated back into a GIS environment.
Investigation of alternative approaches to handling MTL data, such as those carried out in this thesis, is essential if this technology is to see widespread use. / Thesis (Master, Geography) -- Queen's University, 2011-05-24.
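The sensor-perspective idea above can be sketched as a projection of scanner-relative 3D points into a 2D range image whose empty cells expose coverage gaps. The grid resolution and the nearest-return rule are assumptions for illustration, not the thesis's actual transformation.

```python
import numpy as np

def sensor_perspective_image(points, h=64, w=360):
    """Project scanner-relative 3D points (away from the origin) into a 2D
    elevation-by-azimuth range image. Cells never hit by a return stay NaN,
    so coverage gaps in the scan become explicit."""
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                 # azimuth in (-pi, pi]
    el = np.arcsin(z / r)                 # elevation in [-pi/2, pi/2]
    col = ((az + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    row = ((el + np.pi / 2.0) / np.pi * h).astype(int).clip(0, h - 1)
    img = np.full((h, w), np.nan)
    for ri, ci, rr in zip(row, col, r):
        # Keep the nearest return when several points fall in one cell.
        img[ri, ci] = rr if np.isnan(img[ri, ci]) else min(rr, img[ri, ci])
    return img

# A single point one metre straight ahead of the scanner.
img = sensor_perspective_image(np.array([[1.0, 0.0, 0.0]]))
```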
|
246 |
Learning Hierarchical Feature Extractors For Image Recognition. Boureau, Y-Lan, 01 September 2012 (has links) (PDF)
Telling cow from sheep is effortless for most animals, but requires much engineering for computers. In this thesis, we seek to tease out basic principles that underlie many recent advances in image recognition. First, we recast many methods into a common unsupervised feature extraction framework based on an alternation of coding steps, which encode the input by comparing it with a collection of reference patterns, and pooling steps, which compute an aggregation statistic summarizing the codes within some region of interest of the image. Within that framework, we conduct extensive comparative evaluations of many coding or pooling operators proposed in the literature. Our results demonstrate a robust superiority of sparse coding (which decomposes an input as a linear combination of a few visual words) and max pooling (which summarizes a set of inputs by their maximum value). We also propose macrofeatures, which import into the popular spatial pyramid framework the joint encoding of nearby features commonly practiced in neural networks, and obtain significantly improved image recognition performance. Next, we analyze the statistical properties of max pooling that underlie its better performance, through a simple theoretical model of feature activation. We then present results of experiments that confirm many predictions of the model. Beyond the pooling operator itself, an important parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional feedforward architectures, and examine how to incorporate supervision into feature extraction schemes. Overall, our experiments offer insights into what makes current systems work so well, and state-of-the-art results on several image recognition benchmarks.
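The two operations the framework alternates can be sketched compactly: sparse coding via iterative soft thresholding (ISTA) followed by max pooling. The toy identity dictionary and the penalty and step sizes are illustrative assumptions, not the thesis's trained dictionaries.

```python
import numpy as np

def sparse_code(x, D, lam=0.5, iters=50, step=0.1):
    """ISTA sketch of sparse coding: minimize 0.5*||x - Dc||^2 + lam*||c||_1
    by soft-thresholded gradient steps; columns of D are the visual words."""
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        c = c - step * (D.T @ (D @ c - x))                        # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # shrinkage
    return c

def max_pool(codes):
    """Summarize a set of codes by their componentwise maximum."""
    return np.max(codes, axis=0)

D = np.eye(4)  # toy orthonormal dictionary, one atom per axis
codes = np.stack([sparse_code(np.array([1.0, 0.0, 0.0, 0.0]), D),
                  sparse_code(np.array([0.0, 0.8, 0.0, 0.0]), D)])
pooled = max_pool(codes)  # one summary vector for the pooling region
```

With the identity dictionary the lasso solution is just soft thresholding of the input, which makes the sketch easy to verify by hand.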
|
247 |
Design Of An Electromagnetic Classifier For Spherical Targets. Ayar, Mehmet, 01 May 2005 (has links) (PDF)
This thesis applies an electromagnetic feature extraction technique to design electromagnetic target classifiers for conductors, dielectrics, and dielectric-coated conductors using their natural-resonance-related late-time scattered responses. The classifier databases contain scattered data at only a few aspects for each candidate target. The targets are dielectric spheres of varying sizes and refractive indices, perfectly conducting spheres of varying sizes, and dielectric-coated conducting spheres of varying refractive indices and coating thicknesses. The applied classifier design technique is suitable for real-time target classification because of the computational efficiency of its feature extraction and decision-making approaches. The Wigner-Ville Distribution (WD) is employed in this study, in addition to the Principal Components Analysis (PCA) technique, to extract target features mainly from late-time target responses. The WD is applied to the back-scattered responses at different aspects. To decrease aspect dependency, feature vectors are extracted from selected late-time portions of the WD outputs that include natural-resonance-related information. Principal components analysis is also used to fuse the feature vectors and/or late-time target responses extracted from reference aspects of a given target into a single characteristic feature vector for each target, to further reduce aspect dependency.
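A minimal discrete (pseudo) Wigner-Ville distribution can be computed as an FFT over the lag axis of the instantaneous autocorrelation. The simple edge truncation used here is an assumption for illustration, not the thesis's exact implementation.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a (complex) signal x:
    for each time index, FFT the instantaneous autocorrelation
    x[t + k] * conj(x[t - k]) over the lag k (truncated at the edges)."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        kmax = min(t, n - 1 - t)
        acf = np.zeros(n, dtype=complex)
        for k in range(-kmax, kmax + 1):
            acf[k % n] = x[t + k] * np.conj(x[t - k])
        W[t] = np.real(np.fft.fft(acf))
    return W  # rows: time; columns: frequency (doubled, as usual for the WD)

# A pure tone at bin f0 concentrates at frequency column 2 * f0.
N, f0 = 64, 8
x = np.exp(1j * 2.0 * np.pi * f0 * np.arange(N) / N)
W = wigner_ville(x)
```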
|
248 |
Automatic Fault Diagnosis of Rolling Element Bearings Using Wavelet Based Pursuit Features. Yang, Hongyu, January 2005
Today's industry uses increasingly complex machines, some with extremely demanding performance criteria. Failed machines can lead to economic loss and safety problems due to unexpected production stoppages, so fault diagnosis in the condition monitoring of these machines is crucial for increasing machinery availability and reliability. Fault diagnosis of machinery is often a difficult and daunting task; to be truly effective, the process needs to be automated to reduce the reliance on manual data interpretation. It is the aim of this research to automate this process using data from machinery vibrations. This thesis focuses on the design, development, and application of an automatic diagnosis procedure for rolling element bearing faults. Rolling element bearings are representative elements in most industrial rotating machinery, and they can also be tested economically in the laboratory using relatively simple test rigs. Novel modern signal processing methods were applied to vibration signals collected from rolling element tests to destruction. These included three advanced time-frequency signal processing techniques: best-basis Discrete Wavelet Packet Analysis (DWPA), Matching Pursuit (MP), and Basis Pursuit (BP). This research presents the first successful application of Basis Pursuit to diagnosing rolling element faults. Best-basis DWPA and Matching Pursuit were also benchmarked against Basis Pursuit and further extended with novel ideas, particularly on the extraction of defect-related features. The DWPA was researched in two respects: i) selecting a suitable wavelet, and ii) choosing a best basis. To determine the most appropriate wavelet function and best-basis decomposition tree for bearing fault diagnostics, several different wavelets and decomposition trees were applied and compared.
The Matching Pursuit and Basis Pursuit techniques were implemented by choosing a powerful wavelet packet dictionary. These algorithms were also studied with respect to their ability to extract precise features as well as their speed in achieving a result, and the advantages and disadvantages of these techniques for feature extraction of bearing faults were evaluated. An additional contribution of this thesis is the automation of fault diagnosis using Artificial Neural Networks (ANNs). Most work presented in the current literature has been concerned with a standard pre-processing technique, the spectrum. This research employed additional pre-processing techniques such as the spectrogram and DWPA-based kurtosis, as well as the MP and BP features, which were subsequently incorporated into ANN classifiers. Discrete wavelet packets and spectra were used to extract features by calculating RMS (root mean square), crest factor, variance, skewness, kurtosis, and a matched filter. Certain spikes in the Matching Pursuit and Basis Pursuit analyses were also used as features. These alternative pre-processing methods for feature extraction were tested and evaluated on the classification performance of the neural networks. Numerous experimental tests were conducted to simulate the real-world environment; the data were obtained from a variety of bearings with a series of fault severities. The mechanism of bearing fault development was analysed and modelled to evaluate the performance of the research methodology.
The results are presented, discussed, and evaluated in the results and discussion chapter of this thesis. The Basis Pursuit technique proved to be effective in diagnostic tasks. The applied neural network classifiers were designed as multi-layer feed-forward networks; using them, automatic diagnosis methods based on spectrum analysis, DWPA, Matching Pursuit, and Basis Pursuit proved effective in diagnosing conditions such as normal bearings, bearings with inner-race and outer-race faults, and rolling element faults, with high accuracy. Future research topics are proposed in the final chapter to provide perspectives and suggestions for advancing research into fault diagnosis and condition monitoring.
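The time-domain statistics named above (RMS, crest factor, variance, skewness, kurtosis) can be sketched directly; the simulated impulse train standing in for a bearing defect is an illustrative assumption, not data from the thesis's test rigs.

```python
import numpy as np

def vibration_features(x):
    """Time-domain statistics of a vibration frame, as fed to the ANN
    classifiers: RMS, crest factor, variance, skewness and kurtosis."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    rms = np.sqrt(np.mean(x**2))
    return {
        "rms": rms,
        "crest_factor": np.max(np.abs(x)) / rms,
        "variance": sigma**2,
        "skewness": np.mean(((x - mu) / sigma) ** 3),
        "kurtosis": np.mean(((x - mu) / sigma) ** 4),  # about 3 for Gaussian noise
    }

# A localized bearing defect adds periodic impacts, which inflate kurtosis.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 4096)
faulty = healthy.copy()
faulty[::256] += 8.0  # simulated impulse train from a defect
feats_h = vibration_features(healthy)
feats_f = vibration_features(faulty)
```

The jump in kurtosis between the two frames is exactly the kind of defect-sensitive feature these classifiers rely on.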
|
249 |
Feature Extraction for the Cardiovascular Disease Diagnosis. Tang, Yu, January 2018
Cardiovascular disease is a serious life-threatening disease: it can occur suddenly and progress rapidly. Finding the right disease features at an early stage is important to decrease the number of deaths and to ensure that the patient can fully recover. Though there are several methods of examination, describing heart activity in signal form is the most cost-effective approach. Here the ECG is the best choice, because it records heart activity as a signal and is safer, faster and more convenient than other examinations. However, there are still problems with the ECG: not all ECG features are clear and easily understood, and frequency features are not directly visible in the traditional time-domain ECG. To solve these problems, the project uses an optimized continuous wavelet transform (CWT) algorithm to transform the data from the time domain into the time-frequency domain. The result is evaluated by three data mining algorithms with different mechanisms. The evaluation shows that the features in the ECG are successfully extracted and that the important diagnostic information in the ECG is preserved. A user interface is designed to increase efficiency and facilitate the implementation.
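The time-domain-to-time-frequency step can be sketched with a plain-numpy Morlet CWT. The mother wavelet, its center frequency w0, and the scale grid are illustrative assumptions, not the thesis's optimized CWT.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Plain-numpy continuous wavelet transform with a Morlet mother wavelet.
    Rows of the result index scale, columns index time."""
    n = len(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(x, psi, mode="same")
    return out

# A tone at 0.05 cycles/sample should respond most strongly near the scale
# s = w0 / (2 * pi * f), roughly 19 samples here.
x = np.cos(2.0 * np.pi * 0.05 * np.arange(512))
scales = np.arange(5, 40)
W = morlet_cwt(x, scales)
best = scales[np.argmax(np.mean(np.abs(W) ** 2, axis=1))]
```

Reading the energy ridge across scales is what exposes the frequency content that a raw time-domain ECG trace hides.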
|
250 |
Extração semi-automática da malha viária em imagens aéreas digitais de áreas rurais utilizando otimização por programação dinâmica no espaço objeto / Semi-automatic road network extraction from digital aerial images of rural areas using dynamic programming optimisation in object space. Gallis, Rodrigo Bezerra de Araújo, January 2006 (has links)
Abstract: This work proposes a novel road extraction methodology for digital aerial images. The innovation lies in using the dynamic programming (DP) algorithm to carry out the optimisation in object space, instead of in image space as in traditional DP road extraction methodologies. Road features are traced in object space, which requires a rigorous mathematical model relating image-space and object-space points. The operator is required to measure a few seed points in image space to describe the roads sparsely and coarsely; these points are transformed into object space to initialise the DP optimisation. Although the methodology can operate in different modes (mono-plotting or stereo-plotting) and with several image types, including multisensor images, this work presents details of the single- and stereo-image methodology, along with the experimental results.
/ Advisor: João Fernando Custódio da Silva / Co-advisor: Aluir Porfírio Dal Poz / Committee member: Júlio Kiyoshi Hasegawa / Committee member: Messias Meneguette Júnior / Doctorate
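The dynamic programming optimisation at the heart of the method above can be illustrated in its classic image-space form, a cheapest path through a cost grid (the thesis's contribution is performing this optimisation in object space instead). The grid size and cost values here are purely illustrative.

```python
import numpy as np

def dp_min_cost_path(cost):
    """Dynamic programming: cheapest top-to-bottom path through a cost grid,
    each step moving to one of the three cells below. The thesis applies the
    same optimisation idea, but in object space rather than a pixel grid."""
    h, w = cost.shape
    acc = cost.copy().astype(float)      # accumulated cost table
    back = np.zeros((h, w), dtype=int)   # backpointers for path recovery
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            j = int(np.argmin(acc[r - 1, lo:hi])) + lo
            acc[r, c] += acc[r - 1, j]
            back[r, c] = j
    path = [int(np.argmin(acc[-1]))]
    for r in range(h - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]  # column index of the path at each row

# Toy grid: a zero-cost "road" runs down column 2.
cost = np.ones((5, 5))
cost[:, 2] = 0.0
path = dp_min_cost_path(cost)
```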
|