1

Actionable Knowledge Discovery using Multi-Step Mining

DharaniK, Kalpana Gudikandula 01 December 2012 (has links)
Enterprise-level data mining operates on huge amounts of data from sources such as government agencies, banks, and insurance companies. Inevitably, these businesses produce complex data that may be distributed in nature. Single-step mining on such data produces business intelligence reflecting only one aspect. This is not sufficient in an enterprise, where different aspects and standpoints must be considered before taking business decisions. Enterprises therefore need to mine based on multiple features, data sources, and methods, an approach known as combined mining. Combined mining can produce patterns that reflect all aspects of the enterprise, so the derived intelligence can be used to take business decisions that lead to profits. This kind of knowledge is known as actionable knowledge. / Data mining is a process of finding trends or patterns in historical data. Such trends form business intelligence, which in turn supports well-informed decisions. However, data mining with a single technique does not yield actionable knowledge, because enterprises have huge, heterogeneous databases containing complex data, and mining such data requires multi-step rather than single-step mining. When multiple approaches are involved, they provide business intelligence on all aspects, and that kind of information can lead to actionable knowledge. Data mining has recently seen tremendous real-world use, but the drawback of existing approaches is that they provide insufficient business intelligence for large enterprises. This paper presents a combination of existing works and algorithms. We work on multiple data sources, multiple methods, and multiple features. The combined patterns thus obtained from complex business data provide actionable knowledge.
A prototype application has been built to test the efficiency of the proposed framework, which combines multiple data sources, methods, and features in the mining process. The empirical results show that the proposed approach is effective and can be used in the real world.
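The multi-source side of combined mining can be illustrated with a minimal sketch (hypothetical data and function names, not code from the paper): frequent patterns are mined separately per data source with a single-step method, and only patterns confirmed by every source are kept as combined patterns.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Single-step mining: frequent item pairs in one data source."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair for pair, c in counts.items() if c / n >= min_support}

def combined_patterns(sources, min_support):
    """Multi-source combined mining: keep only patterns supported by
    every data source, so the result reflects all aspects at once."""
    pattern_sets = [frequent_pairs(src, min_support) for src in sources]
    return set.intersection(*pattern_sets)

# Toy records from two hypothetical enterprise sources.
bank = [["loan", "default", "young"], ["loan", "default", "young"], ["deposit", "old"]]
insurance = [["loan", "default", "young"], ["claim", "young"], ["loan", "default", "young"]]
patterns = combined_patterns([bank, insurance], min_support=0.5)
print(sorted(patterns))
# → [('default', 'loan'), ('default', 'young'), ('loan', 'young')]
```

A pattern such as ('default', 'loan') is actionable precisely because it survives in both sources, whereas ('deposit', 'old') is frequent only in the bank data and is discarded.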
2

Tracking Human in Thermal Vision using Multi-feature Histogram

Roychoudhury, Shoumik January 2012 (has links)
This thesis presents a multi-feature histogram approach to tracking a person in thermal vision. Illumination variation is a primary constraint on the performance of object tracking in the visible spectrum. A thermal infrared (IR) sensor, which measures the heat energy emitted by an object, is less sensitive to illumination variations; thermal vision therefore has an immense advantage for object tracking under varying illumination conditions. Kernel-based approaches such as the mean shift tracking algorithm, which uses a single-feature histogram for object representation, have gained popularity in computer vision due to their efficiency and robustness in tracking non-rigid objects against significantly complex backgrounds. However, due to the low resolution of IR images, gray-level intensity alone does not provide a strong enough cue for object representation using a histogram. A multi-feature histogram, combining gray-level intensity and edge information, generates an object representation that is more robust in thermal vision. The objective of this research is to develop a robust human tracking system that can autonomously detect, identify, and track a person in a complex thermal IR scene. In this thesis the tracking procedure is adapted from the well-known and efficient mean shift tracking algorithm and modified to fuse multiple features, increasing the robustness of tracking in thermal vision. To identify the object of interest before tracking, rapid human detection in the thermal IR scene is achieved using the AdaBoost classification algorithm. Furthermore, a computationally efficient body-pose recognition method is developed that uses Hu invariant moments for matching object shapes.
An experimental setup consisting of a Forward Looking Infrared (FLIR) camera mounted on a Pioneer P3-DX mobile robot platform was used to test the proposed human tracking system in both indoor and uncontrolled outdoor environments. Performance evaluation on the OTCBVS benchmark dataset shows improved tracking compared to the traditional mean shift tracking algorithm. Moreover, experimental results in different indoor and outdoor scenarios involving different appearances of people show that tracking remains robust under cluttered backgrounds, varying illumination, and partial occlusion of the target object. / Electrical and Computer Engineering
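The multi-feature target model can be sketched as follows (a simplified illustration, not the thesis implementation): a joint histogram over gray-level intensity and edge magnitude replaces the intensity-only histogram, and candidate regions are compared with the Bhattacharyya coefficient that mean shift iterations maximize.

```python
import numpy as np

def multi_feature_histogram(patch, n_bins=8):
    """Joint histogram over gray-level intensity and edge magnitude --
    a multi-feature object model in place of an intensity-only
    histogram for low-resolution thermal IR patches."""
    gy, gx = np.gradient(patch.astype(float))
    edges = np.hypot(gx, gy)  # edge-magnitude feature per pixel
    hist, _, _ = np.histogram2d(
        patch.ravel(), edges.ravel(),
        bins=n_bins,
        range=[[0, 256], [0, edges.max() + 1e-6]],
    )
    hist = hist.ravel()
    return hist / hist.sum()  # normalized so it sums to 1

def bhattacharyya(p, q):
    """Similarity between target-model and candidate histograms."""
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(42)
patch = rng.integers(0, 256, size=(24, 24))  # stand-in for an IR patch
model = multi_feature_histogram(patch)
```

A full tracker would recompute the candidate histogram at each mean shift iteration and move the window toward the location maximizing the Bhattacharyya coefficient; identical model and candidate give a coefficient of 1.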
3

Development- and noise-induced changes in central auditory processing at the ages of 2 and 4 years

Niemitalo-Haapola, E. (Elina) 23 May 2017 (has links)
Abstract To be able to acquire, produce, and comprehend language, precise central auditory processing (CAP), the neural processes used to manage auditory input, is essential. However, auditory environments are not always optimal for CAP, because noise levels in children's daily environments can be surprisingly high. In young children, CAP, its developmental trajectory, and the influence of noise on it have scarcely been investigated. Event-related potentials (ERPs) offer a promising means to study different stages of CAP in small children. Sound processing, preattentive auditory discrimination, and attention-shifting processes can be addressed with obligatory ERP responses, mismatch negativity (MMN), and novelty P3, respectively. In this thesis the developmental trajectory of CAP from 2 to 4 years of age, as well as noise-induced changes in it, was investigated. In addition, the feasibility in children of the multi-feature paradigm with syllable stimuli and novel sounds was evaluated. To this end, obligatory responses (P1, N2, and N4) and MMNs for consonant, frequency, intensity, vowel, and vowel-duration changes, as well as novelty P3 responses, were recorded in a silent condition and with babble noise using the multi-feature paradigm. The participants were volunteer, typically developing children. Significant P1, N2, N4, and MMN responses were elicited at both ages. A significant novelty P3, studied at the age of 2 years, was also found. From 2 to 4 years, the P1 and N2 latencies shortened. The amplitudes of N2, N4, and the MMNs increased, and the increase was largest at frontal electrode locations. In noise, P1 decreased, N2 increased, the N4 latency shortened, and the MMNs degraded. The noise-induced changes were largely similar at both ages. In conclusion, the multi-feature paradigm with five syllable deviant types and novel sounds was found to be an appropriate measure of CAP in toddlers.
The changes in ERP morphology from 2 to 4 years of age suggest maturational changes in CAP. Noise degraded sound encoding, representation forming, and auditory discrimination. The children were similarly vulnerable to the hampering effects of noise at both ages. Thus, noise might harmfully influence language processing, and thereby language acquisition, in childhood.
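The MMN analysis described above rests on a simple operation: subtracting the averaged standard-stimulus ERP from the averaged deviant-stimulus ERP and locating the negative peak of the difference wave in a post-stimulus window. A sketch on synthetic data (all waveforms and parameters are invented for illustration, not recordings from the study):

```python
import numpy as np

fs = 500  # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 ms to +500 ms

# Synthetic averaged ERPs (microvolts): the deviant adds a negative
# deflection around 200 ms, mimicking an MMN on top of the standard.
standard = 2.0 * np.exp(-((t - 0.1) ** 2) / 0.002)
deviant = standard - 3.0 * np.exp(-((t - 0.2) ** 2) / 0.002)

# MMN is the deviant-minus-standard difference wave; its peak is the
# most negative point in a post-stimulus window (here 100-300 ms).
mmn = deviant - standard
window = (t >= 0.1) & (t <= 0.3)
peak_idx = int(np.argmin(mmn[window]))
peak_latency_ms = t[window][peak_idx] * 1000
peak_amplitude = mmn[window][peak_idx]
print(round(peak_latency_ms), round(peak_amplitude, 2))  # → 200 -3.0
```

In a real multi-feature paradigm this subtraction is repeated once per deviant type (consonant, frequency, intensity, vowel, vowel duration), each against the shared standard.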
4

Hierarchical multi-feature image representation

Randrianasoa, Tianatahina Jimmy Francky 08 December 2017 (has links)
Segmentation is a crucial task in image analysis. Novel acquisition devices produce images of higher resolution containing more heterogeneous objects, and it has also become easier to obtain many images of the same area from different sources. This phenomenon is encountered in many domains (e.g. remote sensing, medical imaging), making classical image segmentation methods difficult to use. Hierarchical segmentation approaches provide solutions to such issues. In particular, the Binary Partition Tree (BPT) is a hierarchical data structure modeling image content at different scales. It is built in a mono-feature way (i.e. one image, one metric) by progressively merging similar connected regions. However, the metric has to be carefully chosen by the user, and several images are generally handled by gathering the information provided by the various spectral bands into a single metric. Our first contribution is a generalized framework for BPT construction in a multi-feature way. It relies on a strategy that establishes a consensus between several metrics, allowing us to obtain a unified hierarchical segmentation space. Surprisingly, few works have been devoted to the evaluation of hierarchical structures. Our second contribution is a framework for evaluating the quality of BPTs, relying on both intrinsic and extrinsic quality analysis based on ground-truth examples. We also discuss the use of this evaluation framework both for assessing the quality of a given BPT and for determining which BPT should be built for a given application. Experiments using satellite images emphasize the relevance of the proposed frameworks in the context of image segmentation.
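The BPT construction, and the multi-metric consensus that generalizes it, can be sketched on a toy 1-D signal (an illustrative reduction: the thesis operates on 2-D connected regions and proposes richer consensus strategies than the simple mean used here):

```python
import numpy as np

def consensus(a, b, metrics):
    """Consensus between several metrics -- here simply their mean,
    one of many possible strategies."""
    return float(np.mean([m(a, b) for m in metrics]))

def build_bpt(values, metrics):
    """Sketch of multi-feature BPT construction on a 1-D signal:
    repeatedly merge the adjacent pair of regions with the smallest
    consensus distance; each merge becomes an internal tree node."""
    # Each region: (member pixel values, tree-node index).
    regions = [([v], i) for i, v in enumerate(values)]
    next_node = len(values)
    merges = []
    while len(regions) > 1:
        dists = [consensus(regions[i][0], regions[i + 1][0], metrics)
                 for i in range(len(regions) - 1)]
        i = int(np.argmin(dists))
        (va, na), (vb, nb) = regions[i], regions[i + 1]
        merges.append((next_node, na, nb))          # record tree node
        regions[i:i + 2] = [(va + vb, next_node)]   # fuse the regions
        next_node += 1
    return merges  # list of (parent, left_child, right_child)

# Two hypothetical metrics playing the role of two image features.
mean_gap = lambda a, b: abs(np.mean(a) - np.mean(b))
range_gap = lambda a, b: abs(max(a) - min(b))
tree = build_bpt([1, 2, 9, 10], [mean_gap, range_gap])
print(tree)  # → [(4, 0, 1), (5, 2, 3), (6, 4, 5)]
```

The root node (6) correctly groups the two homogeneous segments {1, 2} and {9, 10} before merging them last; cutting the tree at different depths yields the hierarchy of segmentations the BPT represents.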
5

Spatio-temporal statistical variational models for the quantitative assessment of myocardial perfusion in magnetic resonance imaging

Hamrouni-Chtourou, Sameh 11 July 2012 (has links)
Quantitative assessment of myocardial perfusion, i.e. computation of perfusion parameters that are then confronted with normative values, is a key issue for the diagnosis, therapy planning, and monitoring of ischemic cardiomyopathies --the leading cause of death in Western countries. Within the last decade, perfusion magnetic resonance imaging (p-MRI) has emerged as a reference modality for reliably assessing myocardial perfusion in a noninvasive and accurate way. In p-MRI acquisitions, short-axis image sequences are captured at multiple slice levels along the long axis of the heart during the transit of a vascular contrast agent through the cardiac chambers and muscle. The resulting p-MRI exams exhibit strong nonlinear contrast variations and complex cardio-thoracic motions. Perfusion assessment is therefore faced with the complex problems of non-rigid registration and segmentation of cardiac structures in p-MRI exams. The objective of this thesis is to enable an automated quantitative computer-aided diagnosis tool for first-pass cardiac perfusion MRI, comprising four processing steps: 1. automated extraction of a cardiac region of interest; 2. non-rigid registration of cardio-thoracic motions throughout the whole sequence; 3. segmentation of cardiac boundaries; 4. quantification of myocardial perfusion. The answers we give to the various challenges identified in each step are based on a common idea: exploiting information related to the kinematics of contrast-agent transit in the tissues to discriminate the anatomical structures and drive the alignment process. The latter is the central work of this thesis.
Non-rigid image registration methods based on the optimization of information measures provide versatile solutions for robustly aligning medical data. Their usual application setting is the alignment of image pairs by statistically matching luminance distributions, handled through marginal and joint probability densities estimated via kernel techniques. Though efficient for joint densities exhibiting well-separated clusters or reducible to simple mixtures, these approaches reach their limits for nonlinear mixtures, where pixelwise luminance is too coarse a feature to allow unambiguous statistical decisions, as well as for monomodal data with nonlinear variations and for multimodal data. This thesis presents a unified mathematical model for information-theoretic multi-feature/multi-view non-rigid registration that addresses the identified challenges: (i) simultaneous registration of the whole p-MRI exam, using a natural or synthetic atlas generated as a motion-free exam depicting the transit of the vascular contrast agent through the cardiac structures, with local contrast-enhancement curves as a dense feature set; and (ii) the ability to generalize easily to richer feature spaces combining radiometric and geometric information. The resulting model is based on novel consistent k-nearest-neighbor estimators of information measures in high dimension, for both the classical Shannon and the generalized Ali-Silvey frameworks. We study their variational optimization by deriving closed-form expressions of their gradient flows over finite- and infinite-dimensional smooth transform spaces, and by proposing computationally efficient gradient descent schemes. The resulting generic theoretical framework is applied to the groupwise alignment of cardiac p-MRI exams, and its performance, in terms of accuracy and robustness, is evaluated in a qualitative and quantitative experimental protocol.
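The kind of geometric k-nearest-neighbor estimator the thesis builds on can be illustrated with the classical Kozachenko-Leonenko entropy estimate (a standard textbook construction shown for intuition, not the thesis's exact estimator):

```python
import numpy as np
from math import gamma, log

EULER = 0.5772156649015329

def psi_int(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{i<m} 1/i."""
    return -EULER + sum(1.0 / i for i in range(1, m))

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats).
    Such geometric estimators are consistent in arbitrary dimension,
    which is what makes high-dimensional feature sets tractable."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # Distance from each sample to every other sample (self -> inf).
    dists = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    eps = np.sort(dists, axis=1)[:, k - 1]  # k-th NN distance per point
    c_d = np.pi ** (d / 2) / gamma(d / 2 + 1)  # unit-ball volume in R^d
    return psi_int(n) - psi_int(k) + log(c_d) + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=(1500, 1))
est = knn_entropy(sample)  # true value: 0.5*ln(2*pi*e) ≈ 1.419 nats
```

Plugging such an estimator into a registration criterion means the entropy (or divergence) of high-dimensional feature distributions can be differentiated and minimized directly, without kernel density estimation.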
