1

Segmentação de nome e endereço por meio de modelos escondidos de Markov e sua aplicação em processos de vinculação de registros / Segmentation of names and addresses through hidden Markov models and its application in record linkage

Rita de Cássia Braga Gonçalves 11 December 2013
The segmentation of names into their constituent parts is a fundamental step in integrating databases by means of record linkage techniques. This segmentation can be accomplished in different ways. This study aimed to evaluate the use of hidden Markov models (HMM) for segmenting people's names and addresses, and the effect of this segmentation on the record linkage process. Databases of the Mortality Information System (SIM) and the Information Subsystem for High-Complexity Procedures (APAC) of the state of Rio de Janeiro between 1999 and 2004 were used. An eight-stage method was proposed for segmenting names and addresses, using routines implemented in PL/SQL and the JAHMM library, a Java implementation of HMM algorithms. A random sample of 100 records from each database was used to verify the correctness of the HMM-based segmentation. To assess the effect of segmenting names with the HMM, three record linkage processes were applied to a sample of the two databases, each using a different segmentation strategy: 1) dividing the name into first name, last name, and middle initials; 2) dividing the name into five parts; 3) segmentation by the HMM. The HMM segmentation showed good agreement with a human observer. The three linkage processes produced very similar results, with the first strategy performing slightly better than the others. This study suggests that segmenting Brazilian names by means of hidden Markov models is no more effective than traditional segmentation methods.
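To make the HMM approach concrete, here is a minimal sketch of Viterbi decoding over name tokens, where hidden states label each token as first name, middle name, or surname. The states, transition/emission probabilities, and the initials-are-middle-names heuristic are invented for illustration; the thesis's actual model was trained with the JAHMM library.

```python
# Toy illustration (not the thesis's trained model): an HMM whose states
# label each token of a full name, decoded with the Viterbi algorithm.
# All probabilities below are made up for the example.

def viterbi(tokens, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for a token sequence."""
    V = [{s: start_p[s] * emit_p[s](tokens[0]) for s in states}]
    path = {s: [s] for s in states}
    for tok in tokens[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] * trans_p[p].get(s, 0.0) * emit_p[s](tok), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

STATES = ["FIRST", "MIDDLE", "LAST"]
START = {"FIRST": 0.9, "MIDDLE": 0.05, "LAST": 0.05}
TRANS = {
    "FIRST": {"MIDDLE": 0.6, "LAST": 0.4},
    "MIDDLE": {"MIDDLE": 0.5, "LAST": 0.5},
    "LAST": {"LAST": 1.0},
}

# Crude emission model: short tokens (initials) are likelier middle names.
def make_emit(state):
    def emit(tok):
        if state == "MIDDLE":
            return 0.8 if len(tok) <= 2 else 0.2
        return 0.5
    return emit

EMIT = {s: make_emit(s) for s in STATES}

labels = viterbi("Rita C Goncalves".split(), STATES, START, TRANS, EMIT)
print(labels)  # ['FIRST', 'MIDDLE', 'LAST']
```

A trained model would estimate the transition and emission tables from hand-labelled names instead of hard-coding them.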
3

Zpracování zákaznických dat a jejich využití / Processing and utilizing of customer data

Bartelová, Jana January 2012
The topic of this master's thesis is the mining of customer data for marketing purposes within an enterprise; the information resulting from this process is then used to create targeted marketing campaigns. Nowadays, identifying and exploiting customers' needs is vital for any enterprise. With that in mind, the theoretical part focuses primarily on methods of data analysis such as segmentation, profiling, customer scoring, and determining customer value. A significant section covers web analytics, which studies customers' browsing behaviour. The practical part is based on a case study of a specific e-shop: it identifies and solves problems in executing e-mail campaigns, and solving them with Silverpop Engage opens new opportunities for e-mailing. The main goal of the thesis is to show new possibilities for using behavioural data in the execution of e-mail campaigns.
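One common way to implement the "customer scoring" surveyed in the theoretical part is RFM (recency, frequency, monetary) scoring. The sketch below is a generic illustration, not the thesis's method; the thresholds and sample customers are invented.

```python
# Minimal RFM (recency, frequency, monetary) scoring sketch.
# Thresholds and the sample data are invented for illustration.

def rfm_score(recency_days, n_orders, total_spent):
    """Score each dimension 1-3 (3 = best) and return the combined score."""
    r = 3 if recency_days <= 30 else 2 if recency_days <= 90 else 1
    f = 3 if n_orders >= 10 else 2 if n_orders >= 3 else 1
    m = 3 if total_spent >= 1000 else 2 if total_spent >= 200 else 1
    return r + f + m  # ranges from 3 (worst) to 9 (best)

customers = {
    "alice": (12, 14, 2500.0),  # recent, frequent, high-value
    "bob": (200, 1, 50.0),      # lapsed, one small order
}
scores = {name: rfm_score(*v) for name, v in customers.items()}
print(scores)  # {'alice': 9, 'bob': 3}
```

Scores like these feed directly into the targeted-campaign selection the thesis describes, e.g. mailing only customers above a score cutoff.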
4

Hodnocení viability kardiomyocytů / Evaluation of viability of cardiomyocytes

Kremličková, Lenka January 2017
The aim of this diploma thesis is to become acquainted with the properties of the image data and the principle of their acquisition, to review the literature on image segmentation methods in cardiac tissue imaging, and, last but not least, to find methods for classifying dead cardiomyocytes and analyzing their viability. Dead cardiomyocytes were analyzed by their shape and by their similarity to a template created as the mean of dead cells. Another approach applied a method based on local binary patterns and computed features from simple and joint histograms.
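The local-binary-pattern descriptor mentioned above can be sketched in a few lines. This is a generic 8-neighbour LBP for a single pixel, not the thesis's pipeline; real work would typically use a library such as scikit-image and build histograms of these codes over cell regions.

```python
# Minimal local binary pattern (LBP) sketch: each neighbour that is at
# least as bright as the centre pixel contributes one bit to the code.

def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch (clockwise from top-left)."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << i
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
# Only the three top neighbours are >= the centre (5), setting bits 0-2:
print(lbp_code(patch))  # 7
```

Histograms of such codes are robust texture features, which is what makes them useful for telling dead cells from live ones.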
5

Naturliga kluster av funktionella enheter i ultraljudssekvenser : En utvärdering av klusteranalys för att detektera motoriska enheter i kontraherande skelettmuskulatur / Natural clusters of functional units in ultrasound sequences : An evaluation of cluster analysis for detection of motor units in contracting skeletal muscle tissue

Mårell Ohlsson, Adam January 2014
Structural imaging with ultrasound can be used to detect diseases and disorders in the body. Structural imaging alone is not always sufficient for a reliable diagnosis; sometimes physiological information is also needed. Functional ultrasound imaging can measure this information in the body's various physiological systems. These systems are built up of functionally distinct units: in skeletal muscle they are called motor units, and in diseases such as atherosclerosis they can be arterial plaques, which consist of tissue with varying properties and clinical relevance. The ability to analyze functional units in such systems could contribute greatly to the diagnosis of diseases and disorders. This study presents a method for finding natural clusters of functional units in skeletal muscle from 3D data collected as ultrasound sequences. Synthetic data was generated from a model that simulates sequences of action potentials in contracting muscle tissue. The data was preprocessed and clustered, and the results were evaluated with silhouette coefficients. Combinations of four preprocessing methods and two clustering algorithms are compared, and tests on real ultrasound data of muscle contractions were also performed. The best combination, which also proved relatively insensitive to noise in the data, used data normalization and temporal bandpass filtering as preprocessing together with hierarchical complete-linkage clustering. On real ultrasound data, the method produced a coarse division of the muscle into regions that visually agrees with the anatomy in the structural image.
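The silhouette coefficient used to evaluate the clusterings can be sketched directly from its definition: for each point, compare the mean distance to its own cluster (a) with the mean distance to the nearest other cluster (b). This is a pure-Python, 1-D toy version with invented data; a real analysis would use something like `sklearn.metrics.silhouette_score` on the 3D ultrasound features.

```python
# Minimal silhouette-coefficient sketch on 1-D toy data: a good labelling
# of two separated groups scores near 1, a labelling that ignores the
# structure scores near (or below) 0.

def silhouette(points, labels):
    """Mean silhouette coefficient over all points (1-D Euclidean distance)."""
    n = len(points)
    scores = []
    for i in range(n):
        same = [abs(points[i] - points[j]) for j in range(n)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same)  # mean intra-cluster distance
        b = min(                   # mean distance to the nearest other cluster
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / n

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]      # two well-separated groups
good = silhouette(points, [0, 0, 0, 1, 1, 1])  # labels match the groups
bad = silhouette(points, [0, 1, 0, 1, 0, 1])   # labels ignore the structure
print(round(good, 2), round(bad, 2))
```

Comparing such scores across preprocessing/clustering combinations is exactly how the study ranks its candidates.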
6

Real time intelligent decision making from heterogeneous and imperfect data / La prise de décision intelligente en temps réel à partir de données hétérogènes et imparfaites

Sfar, Hela 09 July 2019
Pervasive computing is advancing rapidly. This paradigm is characterized by multiple sensors highly integrated into objects of the physical world. The development of personal applications using the data provided by these sensors has prompted the creation of smart environments, designed as an advanced overlay framework that proactively, but sensibly, assists individuals in their everyday lives. A smart environment application gathers streaming data from the deployed sensors, then processes and analyzes it before making decisions and executing actions on the physical environment. Online data processing consists mainly of data segmentation, which divides the stream into fragments. In the literature the fragment size is generally fixed, but such a static view usually yields imprecise outputs, so dynamic segmentation using variable-size observation windows remains an open issue. The analysis phase takes a segment of sensor data as input and extracts knowledge by means of reasoning or mining processes. In particular, understanding users' daily activities and preventing anomalous situations are growing concerns in the literature, but addressing these problems with small and imperfect data is still a key issue. Data provided by sensors is often imprecise, inaccurate, outdated, contradictory, or simply missing, so handling uncertainty has become an important aspect. Moreover, monitoring a user long enough to obtain a large amount of data about his or her life routine is not always possible and is too intrusive; people are rarely willing to be monitored for long periods. When the acquired data about the user is sufficient, most existing methods can provide precise recognition, but their performance declines sharply on small datasets. In this thesis we mainly explored the cross-fertilization of statistical and symbolic learning approaches, and the contributions are threefold: (i) DataSeg, an algorithm that takes advantage of both unsupervised learning and an ontology representation for data segmentation; unlike most existing methods, this combination chooses the segment size dynamically for several applications, and DataSeg can be adapted to any application's features; (ii) AGACY Monitoring, a hybrid model for activity recognition and uncertainty handling that uses supervised learning, possibilistic logic inference, and an ontology to extract meaningful knowledge from small datasets; (iii) CARMA, a method based on Markov logic networks (MLN) and causal association rules to detect the causes of anomalies in a smart environment so as to prevent their occurrence. By automatically extracting logic rules about the causes of anomalies and integrating them into the MLN rules, we achieve more accurate situation identification even with partial observations. Each contribution was prototyped, tested, and validated on data obtained from real-world scenarios.
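The contrast between fixed and dynamic windows can be illustrated with a deliberately simple rule: start a new segment whenever consecutive readings jump by more than a threshold. This is far cruder than DataSeg's unsupervised-learning-plus-ontology approach; the stream and threshold below are invented.

```python
# Hedged sketch of dynamic (variable-size) window segmentation: segment
# boundaries follow the data rather than a fixed fragment length.

def dynamic_segments(stream, jump=2.0):
    """Split a numeric stream into variable-length segments at large jumps."""
    segments, current = [], [stream[0]]
    for x in stream[1:]:
        if abs(x - current[-1]) > jump:
            segments.append(current)  # close the current window
            current = [x]             # open a new one at the change point
        else:
            current.append(x)
    segments.append(current)
    return segments

readings = [1.0, 1.1, 0.9, 5.0, 5.2, 5.1, 5.3, 0.2]
segs = dynamic_segments(readings)
print([len(s) for s in segs])  # variable window sizes: [3, 4, 1]
```

A fixed-size segmentation of the same stream would cut through the level changes, mixing two regimes in one fragment, which is the "imprecise outputs" problem the thesis targets.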
7

Pokročilé metody detekce kontury srdečních buněk / Advanced methods for cardiac cells contour detection

Spíchalová, Barbora January 2015
This thesis focuses on advanced methods of detecting the contours of cardiac cells and measuring their contraction. The theoretical section describes the types of confocal microscopes used for capturing biological samples. The following chapter is devoted to methods of cardiac cell segmentation and introduces the generally applied approaches. The most widespread segmentation methods are active contours and mathematical morphology, which are the crucial topics of this thesis; they allow us to accurately detect the required elements in the visual data and measure the change of their surface area over time. The acquired theoretical knowledge then leads to a practical implementation of the methods in MATLAB.
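The mathematical-morphology route to a contour can be sketched as an internal morphological gradient: erode the binary cell mask with a 3x3 structuring element and subtract the result from the mask, leaving only the boundary pixels. The Python version below is a generic illustration on an invented toy mask (the thesis itself works in MATLAB).

```python
# Minimal mathematical-morphology contour sketch: contour = mask - erosion.

def erode(mask):
    """Binary erosion with a 3x3 structuring element (border pixels erode away)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def contour(mask):
    """Internal morphological gradient: mask minus its erosion."""
    er = erode(mask)
    return [[m - e for m, e in zip(mrow, erow)]
            for mrow, erow in zip(mask, er)]

cell = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
edge = contour(cell)
# Only the centre pixel survives erosion, so the contour is the outer ring:
print(edge[2])  # [0, 1, 0, 1, 0]
```

Counting the contour (or interior) pixels of such a mask frame by frame is one way to measure the change of a cell's surface area over time.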
8

Měření seismické činnosti pomocí optických vláknových senzorů / Seismic activity measurement using fiber optic sensors

Vaněk, Stanislav January 2018
The aim of this master's thesis is to become familiar with the measurement and analysis of seismic waves. The theoretical part describes seismic waves, in particular their types, sources, and properties, and then turns to the systems that measure these waves, with emphasis on their principles and advantages. The practical part discusses methods for reducing noise and highlighting significant events in the measured data. Finally, the individual methods are integrated into a user-friendly graphical interface.
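One classic way to highlight significant events in a seismic record is the STA/LTA trigger: the ratio of a short-term to a long-term average of the signal amplitude spikes when an event arrives. The abstract does not name the thesis's exact method, so the sketch below is a generic illustration with an invented toy trace; production code would typically use a library such as ObsPy.

```python
# Hedged STA/LTA sketch: the short-term average reacts to an event faster
# than the long-term average, so their ratio peaks at the event onset.

def sta_lta(trace, sta=3, lta=10):
    """Ratio of short-term to long-term mean absolute amplitude."""
    ratios = []
    for i in range(lta, len(trace)):
        short = sum(abs(x) for x in trace[i - sta:i]) / sta
        long_ = sum(abs(x) for x in trace[i - lta:i]) / lta
        ratios.append(short / long_ if long_ else 0.0)
    return ratios

# Quiet background noise followed by a burst (the "event") at index 20:
trace = [0.1] * 20 + [2.0] * 5 + [0.1] * 5
r = sta_lta(trace)
trigger = max(range(len(r)), key=lambda i: r[i]) + 10  # offset by LTA window
print(trigger)  # peaks inside the burst
```

Thresholding the ratio (rather than taking its maximum) gives the usual on/off trigger used to flag event windows for further analysis.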
9

Interaktivní segmentace 3D CT dat s využitím hlubokého učení / Interactive 3D CT Data Segmentation Based on Deep Learning

Trávníčková, Kateřina January 2020
This thesis deals with CT data segmentation using convolutional neural networks and describes the problem of training with limited training sets. User interaction is suggested as a means of improving segmentation quality for models trained on small training sets, and the possibility of using transfer learning is also considered. All of the chosen methods improve segmentation quality compared with the baseline, an automatic data-specific segmentation model; with very small training sets, the Dice score improves by tens of percentage points. These methods can be used, for example, to simplify the creation of a new segmentation dataset.
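The Dice score used to report segmentation quality is simply twice the overlap of the predicted and ground-truth masks divided by their total size. A minimal sketch, with masks flattened to lists for brevity (the masks below are invented examples, not thesis data):

```python
# Minimal Dice-coefficient sketch for binary segmentation masks.

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

truth = [1, 1, 1, 1, 0, 0, 0, 0]
good = [1, 1, 1, 0, 0, 0, 0, 0]  # misses one foreground pixel
bad = [0, 0, 1, 1, 1, 1, 0, 0]   # half-shifted prediction
print(dice(good, truth), dice(bad, truth))  # ~0.857 vs 0.5
```

An improvement of "tens of percentage points" on this scale is large: it is the difference between a barely usable mask and one close to the ground truth.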
