61 |
Seleção de características usando algoritmos genéticos para classificação de imagens de textos em manuscritos e impressos / Feature selection using genetic algorithms for classification of handwritten and machine-printed text images. Coelho, Gleydson Vilanova Viana 31 January 2013 (has links)
Previous issue date: 2013 / The presence of handwritten and machine-printed text in the same document poses a major challenge to current Optical Character Recognition systems. Since these two classes of text have their own recognition routines, techniques that discriminate between them have become indispensable, and their effectiveness depends on choosing the features that best represent the text elements the classifiers must act on. Given the wide variety of features used for this purpose in the literature, this work develops a method that, through an optimization process based on Genetic Algorithms applied to an initial set of 52 features, selects subsets of the best features which are not only smaller than the original set but also improve classification results. The experiments were carried out with kNN classifiers and MLP Neural Networks on images of segmented words. The proposed method was evaluated on a public database of handwritten text and on a database of machine-printed text created specifically for this work. The experimental results show that the proposed goals were met: the Mean Classification Errors were statistically equivalent for the two classifiers, with the best performance obtained by kNN. The influence of the different font types and styles used in the printed texts was also analyzed and showed that fonts imitating handwriting, such as "Lucida Handwriting" and "Comic Sans MS", produce the most classification errors; likewise, most errors occurred in printed text set in italics.
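The optimization loop described above can be sketched with a toy genetic algorithm: bitmasks over a handful of synthetic features (standing in for the 52 real text features) are evolved against the holdout error of a 1-NN classifier. All data, parameter values and helper names here are hypothetical illustrations, not taken from the thesis.

```python
import random

random.seed(0)

# Tiny synthetic dataset: 2 informative features plus 4 noise features,
# standing in for the 52 texture/shape features of real word images.
def make_point(label):
    informative = [label + random.gauss(0, 0.3), -label + random.gauss(0, 0.3)]
    noise = [random.gauss(0, 1) for _ in range(4)]
    return informative + noise, label

data = [make_point(lbl) for lbl in (0, 1) for _ in range(30)]
train, test = data[::2], data[1::2]

def knn_error(mask):
    """Holdout error of 1-NN using only the features where mask[i] == 1."""
    if not any(mask):
        return 1.0
    errors = 0
    for x, y in test:
        nearest = min(train, key=lambda t: sum(
            (x[i] - t[0][i]) ** 2 for i in range(len(mask)) if mask[i]))
        errors += nearest[1] != y
    return errors / len(test)

def evolve(n_features=6, pop_size=20, generations=15):
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=knn_error)          # lower error = fitter
        survivors = pop[:pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)
            child = a[:cut] + b[cut:]    # one-point crossover
            if random.random() < 0.2:    # bit-flip mutation
                i = random.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=knn_error)

best = evolve()
print("selected mask:", best, "holdout error:", knn_error(best))
```

The fitness function is the classifier's error itself, so the same loop works unchanged if the 1-NN stand-in is swapped for an MLP.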
|
62 |
Massively parallel nearest neighbors searches in dynamic point clouds on GPU. José Silva Leite, Pedro 31 January 2010 (has links)
Previous issue date: 2010 / Conselho Nacional de Desenvolvimento Científico e Tecnológico / This dissertation introduces a grid-based data structure implemented on the GPU. It was developed for nearest neighbors searches in dynamic point clouds in a massively parallel fashion. The implementation runs in real time, and both the grid construction and the nearest neighbors searches (exact and approximate) are executed on the GPU. Memory transfer between host and device is thus minimized, improving overall performance. The proposed algorithm can be used in different applications with static or dynamic scenes. Moreover, the data structure supports three-dimensional point clouds and, given its dynamic nature, the user can change its parameters at runtime; the same applies to the number of neighbors searched. A CPU reference implementation was also written, and performance comparisons justify the use of GPUs as massively parallel processors. In addition, the performance of the proposed data structure is compared against CPU and GPU implementations from previous work. Finally, a point-based rendering application was developed to demonstrate the potential of the data structure.
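The core of such a uniform grid can be sketched on the CPU in a few lines: points are hashed into cells, and a query inspects only its own cell plus the 26 adjacent ones, which is also what makes the search approximate when a true neighbour lies further away. Cell size and k are runtime parameters, as in the dissertation; the concrete values below are invented for illustration.

```python
from collections import defaultdict
import math

CELL = 1.0  # cell edge length; tunable at runtime, as in the dissertation

def cell_of(p):
    """Map a 3D point to its integer grid cell."""
    return tuple(int(math.floor(c / CELL)) for c in p)

def build_grid(points):
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        grid[cell_of(p)].append(idx)
    return grid

def knn(grid, points, q, k=3):
    """Approximate kNN: only the query cell and its 26 neighbours are scanned."""
    cx, cy, cz = cell_of(q)
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                candidates += grid.get((cx + dx, cy + dy, cz + dz), [])
    candidates.sort(key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], q)))
    return candidates[:k]

points = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (0.9, 0.9, 0.9), (5.0, 5.0, 5.0)]
grid = build_grid(points)
print(knn(grid, points, (0.0, 0.0, 0.0), k=2))  # → [0, 1]
```

On the GPU the same two phases, cell hashing and per-query cell scans, map naturally onto one thread per point and one thread per query, which is what removes the host-device transfer bottleneck.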
|
63 |
Automatic Eartag Recognition on Dairy Cows in Real Barn Environment. Ilestrand, Maja January 2017 (has links)
All dairy cows in Europe wear unique identification tags in their ears. These eartags are standardized and contain the cow's identification number, today only used for visual identification by the farmer. The cow also needs to be identified by an automatic identification system connected to milking machines and other robotics used on the farm. Currently this is solved with a non-standardized radio transmitter that can be placed in different places on the cow, and different receivers need to be used on different farms. Other drawbacks of the currently used identification system are that it is expensive and unreliable. This thesis explores the possibility of replacing this non-standardized radio-frequency-based identification system with a standardized computer-vision-based system. The proposed method uses a color-threshold approach for detection; a flood-fill approach followed by the Hough transform and a projection method for segmentation; and it evaluates template matching, k-nearest neighbour and support vector machines as optical character recognition methods. The results show that the quality of the input data is vital: given good data, k-nearest neighbour, which showed the best results of the three OCR approaches, correctly handles 98 % of the digits.
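The final OCR step can be illustrated with a toy k-nearest-neighbour digit classifier over tiny 3x5 binary glyphs. The glyph bitmaps and the Hamming-distance feature are invented stand-ins; the thesis derives its digit images from the flood-fill and Hough segmentation stages.

```python
from collections import Counter

# Hypothetical 3x5 binary glyph templates (rows concatenated into strings),
# standing in for segmented eartag digit images.
GLYPHS = {
    0: "111101101101111",
    1: "010010010010010",
    7: "111001001001001",
}

def hamming(a, b):
    """Number of differing pixels between two equal-length bitmaps."""
    return sum(x != y for x, y in zip(a, b))

def knn_classify(bitmap, k=1):
    """Vote among the k templates nearest to the query bitmap."""
    ranked = sorted(GLYPHS, key=lambda d: hamming(GLYPHS[d], bitmap))
    return Counter(ranked[:k]).most_common(1)[0][0]

noisy_seven = "111001001011001"  # the '7' template with one flipped pixel
print(knn_classify(noisy_seven))  # → 7
```

Real eartag digits would use richer features than raw pixels, but the nearest-template vote is the same.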
|
64 |
Performance Analysis of kNN Query Processing on large datasets using CUDA & Pthreads : comparing between CPU & GPU. Kalakuntla, Preetham January 2017 (has links)
Telecom companies do a lot of analytics to provide consumers with a better service and to stay competitive. These companies accumulate big data with the potential to provide inputs for business decisions. Query processing is one of the major tools for running analytics on their data. Traditional in-memory query processing techniques cannot cope with the large data volumes of telecom operators. The k-nearest neighbour (kNN) technique is well suited for classification and regression on large datasets. Our research focuses on implementing kNN as a query processing algorithm and evaluating its performance on large datasets on a single core, on multiple cores and on a GPU. This thesis presents an experimental implementation of kNN query processing on a single-core CPU, a multi-core CPU and a GPU using Python, Pthreads and CUDA respectively. We considered different dataset sizes, dimensionalities and values of k as inputs to evaluate performance. The experiments show that the GPU performs better than the single-core CPU by a factor of 1.4 to 3 and better than the multi-core CPU by a factor of 5.8 to 16 across the different input levels.
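The multi-core scheme reduces to partitioning the query list across workers, each running the same brute-force kNN kernel. The sketch below uses Python threads only to illustrate the partitioning and result-merging logic (it will not show a real speedup under the GIL, unlike the Pthreads and CUDA versions in the thesis); the dataset is synthetic.

```python
from concurrent.futures import ThreadPoolExecutor
import random

random.seed(1)

def knn_chunk(queries, data, k):
    """Brute-force kNN for one chunk of queries: sort indices by distance."""
    out = []
    for q in queries:
        order = sorted(range(len(data)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(data[i], q)))
        out.append(order[:k])
    return out

data = [[random.random() for _ in range(4)] for _ in range(200)]
queries = [[random.random() for _ in range(4)] for _ in range(40)]

serial = knn_chunk(queries, data, k=5)

# Split queries round-robin across 4 "cores" and run the same kernel.
chunks = [queries[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(lambda c: knn_chunk(c, data, 5), chunks))

# Re-interleave so results line up with the original query order.
parallel = [None] * len(queries)
for w, part in enumerate(parts):
    for j, res in enumerate(part):
        parallel[w + 4 * j] = res

print(parallel == serial)  # → True
```

On a GPU the chunking disappears entirely: each query becomes one thread, which is where the 1.4x to 16x factors reported above come from.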
|
65 |
Analysis and Visualization of the Two-Dimensional Blood Flow Velocity Field from Videos. Jun, Yang January 2015 (has links)
We estimate the velocity field of the blood flow in a human face from videos. Our approach first performs spatial preprocessing to improve the signal-to-noise ratio (SNR) and the computational efficiency. The discrete Fourier transform (DFT) and a temporal band-pass filter are then applied to extract the frequency corresponding to the subject's heart rate. We propose a multiple-kernel-based k-NN classification for removing noisy positions from the resulting phase and amplitude maps. The 2D blood flow field is then estimated from the relative phase shift between pixels. We evaluate both the segmentation and the velocity field of our approach on real and synthetic face videos, reporting average recall and precision for the segmentation and average angular and magnitude errors for the velocity field.
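The DFT plus band-pass step can be made concrete with a synthetic per-pixel signal: the spectrum is searched only inside a physiological pulse band, and the strongest bin gives the heart-rate frequency. The frame rate, band limits and signal below are illustrative values, not the thesis's.

```python
import math

FS = 30.0       # assumed video frame rate (Hz)
HEART_HZ = 1.2  # ground-truth pulse of the synthetic signal, i.e. 72 bpm
N = 300         # 10 s of frames

# Synthetic pixel intensity: pulse component plus an out-of-band disturbance.
signal = [math.sin(2 * math.pi * HEART_HZ * n / FS)
          + 0.3 * math.sin(2 * math.pi * 5.0 * n / FS)
          for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of DFT bin k, computed directly from its definition."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = -sum(x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return math.hypot(re, im)

# Band-pass: keep only bins whose frequency k*FS/N lies in 0.7-4.0 Hz
# (roughly 42-240 bpm), then take the dominant bin.
band = [k for k in range(1, N // 2) if 0.7 <= k * FS / N <= 4.0]
peak = max(band, key=lambda k: dft_magnitude(signal, k))
print(peak * FS / N)  # → 1.2
```

The phase of that same bin, computed per pixel, is what the relative phase-shift estimate of the flow field builds on.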
|
66 |
Automatic retrieval of data for industrial machines with handheld devices : Positioning in indoor environments using iBeacons. Sjöbro, Linus January 2021 (has links)
Positioning of mobile phones or other handheld devices in indoor environments is hard because it is often not possible to receive a GPS signal, so other techniques are needed. Despite the difficulties of indoor positioning, the Swedish mining company LKAB wants to do exactly this in its processing plants. LKAB has developed an Apple iPhone application that presents real-time process data and documents for its machines. To retrieve the information, an OCR code needs to be scanned manually with the application. Instead of manually scanning these codes, LKAB wants to develop an Indoor Positioning System that can automatically locate handheld devices in its production plants. This thesis aimed to create a proof-of-concept Apple iOS application that can position devices without GPS signals. In the developed system, Bluetooth Low Energy iBeacons are used to transmit data to the application. From this data, Received Signal Strength Indication (RSSI) values are collected and sent to a server that transforms them into positioning fingerprints. These fingerprints are used together with the k-Nearest Neighbour classification algorithm to determine in which of a set of predefined groups the user is located. Each group has a defined set of machines that is presented back to the user. Tests conducted with the proof-of-concept application show that the implemented system works and gives a positioning accuracy of up to 75%.
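The fingerprinting step can be sketched with a toy lookup: a live RSSI vector is matched against labelled reference vectors, and the k nearest fingerprints vote on the group. All RSSI values and group names below are invented, and a real deployment would use many beacons and fingerprints per group.

```python
from collections import Counter

# Hypothetical fingerprint database: RSSI readings (dBm) from three
# iBeacons, each labelled with the predefined machine group it was taken in.
fingerprints = [
    ((-50, -80, -90), "group_A"),
    ((-55, -75, -92), "group_A"),
    ((-85, -52, -70), "group_B"),
    ((-88, -55, -68), "group_B"),
    ((-70, -90, -50), "group_C"),
]

def locate(rssi, k=3):
    """Classify a live RSSI vector by majority vote of the k nearest fingerprints."""
    ranked = sorted(fingerprints,
                    key=lambda fp: sum((a - b) ** 2 for a, b in zip(fp[0], rssi)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(locate((-53, -78, -91)))  # → group_A
```

Because the output is a group rather than a coordinate, the server only needs to return the machine list defined for that group.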
|
67 |
Rozpoznávání domácích spotřebičů na základě jejich odběrové charakteristiky / Recognition of Home Appliances Based on Their Power Consumption Characteristics. Vaňková, Klára January 2015 (has links)
The goal of this master's thesis is to design and implement a system for recognition of home appliances based on their power consumption characteristics. The system should identify individual home appliances from measurements of the total household consumption. The acquired data could be used for statistics on the usage of a particular appliance and for subsequent detection of errors or non-standard behavior of the measured device. An important part of the work is the design and hardware implementation of a measurement unit and a system for processing the measured signal. The first version of the project uses the pulse output of an electricity meter to measure energy. This method does not provide a sufficient sample rate, but it is a quick way to obtain data for processing and analysis. The second version monitors power consumption with a multi-purpose AC converter that measures active and reactive power at the desired sample rate. The data is then processed and recognized by two classifiers: HMM and KNN.
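A first processing step in this kind of system is spotting switching events in the aggregate signal. The toy edge detector below, over an invented consumption trace, illustrates the idea; the step magnitude at each event is a crude feature that classifiers such as the HMM and KNN mentioned above could then match against known appliances.

```python
# Invented total-consumption trace in watts: a ~1500 W appliance (e.g. a
# kettle) switching on at t=3 and off at t=7, on top of a base load.
watts = [100, 100, 100, 1600, 1600, 1600, 1650, 150, 150, 150]

def steps(signal, threshold=200):
    """Return (time, power delta) for every step larger than the threshold."""
    events = []
    for t in range(1, len(signal)):
        delta = signal[t] - signal[t - 1]
        if abs(delta) >= threshold:
            events.append((t, delta))
    return events

print(steps(watts))  # → [(3, 1500), (7, -1500)]
```

The 50 W wiggle at t=6 stays below the threshold, so only the genuine on/off transitions survive as candidate appliance events.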
|
68 |
Modelos probabilísticos e não probabilísticos de classificação binária para pacientes com ou sem demência como auxílio na prática clínica em geriatria / Probabilistic and non-probabilistic binary classification models for patients with or without dementia as an aid to clinical practice in geriatrics. Galdino, Maicon Vinícius. January 2020 (has links)
Advisor: Liciana Vaz de Arruda Silveira / Abstract: The goals of this work were to present classification models (Logistic Regression, Naive Bayes, Classification Trees, Random Forest, k-Nearest Neighbors and Artificial Neural Networks), to compare them using resampling procedures on a dataset from geriatrics (dementia diagnosis), and to analyze the assumptions of each methodology, its advantages, its disadvantages and the scenarios in which each one is best applied. The justification and relevance of this project rest on the importance and usefulness of the proposed topic: as the elderly population grows worldwide (in developed countries and in developing ones such as Brazil), classification models can be useful to medical professionals, especially general practitioners, in the diagnosis of dementias, since in many cases the diagnosis is not simple. / Doctorate
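A resampling-based comparison like the one described can be sketched with model-agnostic k-fold cross-validation: each classifier is just a function from a training set and a sample to a label, so logistic regression, trees, kNN and neural networks can all be scored on the same folds. The two classifiers and the one-feature dataset below are invented stand-ins for the real geriatric data.

```python
import random

random.seed(2)

def majority_classifier(train, x):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def nn_classifier(train, x):
    """1-nearest-neighbour prediction."""
    return min(train, key=lambda t: sum((a - b) ** 2
                                        for a, b in zip(t[0], x)))[1]

def kfold_accuracy(data, classifier, k=5):
    """Mean held-out accuracy over k folds."""
    data = data[:]
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        hits = sum(classifier(train, x) == y for x, y in test)
        accs.append(hits / len(test))
    return sum(accs) / k

# Toy (feature vector, diagnosis) pairs: one roughly separable feature.
data = [((i / 10 + random.gauss(0, 0.2),), i >= 10) for i in range(20)]
print("1-NN accuracy:     ", kfold_accuracy(data, nn_classifier))
print("majority accuracy: ", kfold_accuracy(data, majority_classifier))
```

Running all candidate models through the same folds is what makes their accuracy differences comparable, which is the point of the resampling comparison in the thesis.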
|
69 |
Analysis of Data from a Smart Home Research Environment. Guthenberg, Patrik January 2022 (has links)
This thesis project presents a system for gathering and using data in the context of a smart home research environment. The system was developed at the Human Health and Activity Laboratory, H2Al, at Luleå University of Technology and consists of two distinct parts. First, a data export application that runs in the H2Al environment. This application synchronizes data from various sensor systems and forwards the data for further analysis. This analysis was performed in the iMotions platform in order to visualize, record and export data. As a delimitation, the only sensor used was the WideFind positional system installed at the H2Al. Secondly, an activity recognition application that uses data generated from the iMotions platform and the data export application. This includes several scripts which transform raw data into labeled datasets and translate them into activity recognition models with the help of machine learning algorithms. As a delimitation, activity recognition was limited to fall detection. The fall detection models were then hosted on a basic server to test accuracy and to act as an example use case for the rest of the project. The project resulted in an effective data gathering system and was generally successful as a tool to create datasets. The iMotions platform was especially successful in both visualizing and recording data together with the data export application. The example fall detection models trained showed theoretical promise, but failed to deliver good results in practice, partly due to the limitations of the positional sensor system used. Some of the conclusions drawn at the end of the project were that the data collection process needed more structure, planning and input from professionals, and that a better positional sensor system may be required for better fall detection results, but also that this kind of system shows promise in the context of smart homes, especially within areas like elderly healthcare.
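The thesis trains machine-learned fall-detection models, but the underlying signal can be illustrated with a simple rule over positional samples: flag a fall when the tag's height drops sharply within a short window and ends up near the floor. The thresholds, window and traces below are entirely hypothetical.

```python
def detect_fall(heights, drop=0.8, window=3, floor=0.5):
    """Return the sample index where a fall is detected, or None.

    heights: z-coordinates (metres) of a worn positional tag over time.
    A fall is a drop of at least `drop` metres within `window` samples
    that ends at or below `floor` metres.
    """
    for t in range(len(heights) - window):
        if (heights[t] - heights[t + window] >= drop
                and heights[t + window] <= floor):
            return t + window
    return None

walking = [1.2, 1.2, 1.1, 1.2, 1.2]
falling = [1.2, 1.2, 1.1, 0.3, 0.2, 0.2]
print(detect_fall(walking), detect_fall(falling))  # → None 3
```

A learned model replaces the hand-set thresholds with parameters fitted to labeled traces, which is exactly where noisy positional data, as noted above, starts to hurt.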
|
70 |
Using machine learning to predict power deviations at Forsmark. Björn, Albin January 2021 (has links)
The power output at the Forsmark nuclear power plant sometimes deviates from the expected value, and the causes of these deviations are sometimes known and sometimes unknown. Three machine learning methods (k-nearest neighbors, support vector machines and linear regression) were trained to predict whether or not the power deviation would fall outside an expected interval. The data used to train the models was gathered from points in the power production process, and the signals consisted mostly of temperatures, pressures and flows. A large part of the project was dedicated to preparing the data before using it to train the models. Temperature signals were shown to be the best predictors of the power deviation, followed by pressure and flow. The best-performing model type was k-nearest neighbors, followed by support vector machines and linear regression. Principal component analysis was used to reduce the size of the training datasets, and the models trained on the reduced data performed equally well in the prediction task as those trained without it.
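The PCA step can be sketched in two dimensions: the covariance matrix of two correlated signals is estimated, and power iteration extracts the dominant direction onto which the data can be projected. The "temperature/pressure" readings below are synthetic, generated along the direction (1, 2).

```python
import math
import random

random.seed(3)

# Synthetic correlated 2-D readings, standing in for two process signals.
data = [(t + random.gauss(0, 0.1), 2 * t + random.gauss(0, 0.1))
        for t in [i / 10 for i in range(30)]]

# Sample means and covariance matrix entries.
mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
cxx = sum((x - mx) ** 2 for x, _ in data) / len(data)
cyy = sum((y - my) ** 2 for _, y in data) / len(data)
cxy = sum((x - mx) * (y - my) for x, y in data) / len(data)

# Power iteration on the 2x2 covariance matrix: repeatedly multiply a
# vector by the matrix and normalize; it converges to the top eigenvector.
v = (1.0, 0.0)
for _ in range(50):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = math.hypot(*w)
    v = (w[0] / norm, w[1] / norm)

# Projecting each sample onto v halves the dataset width (2 -> 1 columns).
print("first principal direction:", v)
```

Since the data was generated along (1, 2), the recovered direction should be close to (1, 2) normalized; the projection onto it is the size reduction that, per the thesis, cost no prediction accuracy.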
|