21 |
Algorithms for image segmentation in fermentation. Mkolesia, Andrew Chikondi. January 2011 (has links)
M. Tech. Mathematical Technology. / The aim of this research project is to mathematically analyse froth patterns and to build a database of images at different stages of the fermentation process, so that a decision-making procedure can be developed which enables a computer to react according to what has been observed. This would allow around-the-clock observation, which is not possible with humans, and mechanised decision-making would minimise the errors usually associated with human actions. Different mathematical algorithms for image processing will be considered and compared. These algorithms have been designed for different image-processing situations; in this dissertation they will be applied to froth images in particular and used to simulate the human eye for decision-making in the fermentation process. The study will begin by considering algorithms for the detection of edges and then analysing those edges. MATLAB will be used to pre-process the images and to write and test any new algorithms designed for this project.
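The abstract names edge detection as the entry point of the froth-image analysis. As a rough illustration of that pre-processing step (the thesis itself works in MATLAB), the following Python/OpenCV sketch extracts bubble edges from a single froth image; the file name, blur kernel and Canny thresholds are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch of froth-image edge detection; parameters are illustrative.
import cv2

def detect_froth_edges(path, low=50, high=150):
    """Return a binary edge map and a crude bubble-region count for one froth image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    # Smooth to suppress sensor noise before differentiation.
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.5)
    # Canny combines gradient magnitude with hysteresis thresholding.
    edges = cv2.Canny(blurred, low, high)
    # Count closed outer contours as a rough proxy for the number of bubbles.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return edges, len(contours)

if __name__ == "__main__":
    edge_map, n_regions = detect_froth_edges("froth_stage1.png")  # hypothetical image file
    print(f"detected {n_regions} candidate bubble regions")
```

Statistics such as the region count could then be stored per image to build the kind of stage-by-stage database the abstract describes.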
|
22 |
Detecção do complexo QRS através de morfologia matemática multiescalar. Saraiva, Aratã Andrade. 05 September 2012 (has links)
This work presents multiscale mathematical morphology with four scales applied to the ECG signal for detection of the QRS complex. This multidisciplinary Biomedical Engineering work draws on Cardiology, Electrocardiography, Biostatistics, Biomedical Digital Signal Processing, Signal Detection Theory, ROC Analysis and classifier performance indexes, interacting with Medicine, Statistics, Mathematics, Computer Engineering and Electrical Engineering. Tests were carried out on the MIT/BIH ECG signal database, and the performance of the method was evaluated using ROC curves and the DER index. The results were compared with multiscale morphology using one, two and three scales. The four-scale multiscale morphology method, applied under the established conditions, achieved better QRS detection indexes, confirming its potential in biomedical signal processing, supporting the handling of the QRS complex and offering improvements in detection.
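As a hedged illustration of the kind of multiscale morphological filtering the abstract refers to, the sketch below averages opening-closing and closing-opening filters over four structuring-element scales to estimate a baseline, then picks R peaks from the residual. The scale lengths, the threshold and the synthetic test signal are assumptions rather than the thesis' parameters; only the 360 Hz sampling rate of the MIT/BIH arrhythmia records is taken as given.

```python
# Hedged sketch of four-scale morphological filtering of an ECG trace.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing
from scipy.signal import find_peaks

def multiscale_morphology(ecg, scales=(3, 5, 7, 9)):
    """Average opening-closing and closing-opening filters over several scales."""
    filtered = np.zeros_like(ecg, dtype=float)
    for s in scales:
        oc = grey_closing(grey_opening(ecg, size=s), size=s)
        co = grey_opening(grey_closing(ecg, size=s), size=s)
        filtered += 0.5 * (oc + co)
    return filtered / len(scales)

def detect_qrs(ecg, fs=360):
    """Enhance narrow deflections removed by the morphological baseline, then pick peaks."""
    baseline = multiscale_morphology(ecg)
    enhanced = np.abs(ecg - baseline)          # narrow QRS-like spikes survive here
    threshold = 4 * np.median(enhanced)        # crude adaptive threshold (assumption)
    peaks, _ = find_peaks(enhanced, height=threshold, distance=int(0.2 * fs))
    return peaks

if __name__ == "__main__":
    t = np.arange(0, 10, 1 / 360.0)
    synthetic = 0.1 * np.sin(2 * np.pi * t)    # stand-in for baseline wander
    synthetic[::360] += 1.0                    # artificial R peaks, one per second
    print(detect_qrs(synthetic)[:5])
```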
|
23 |
Metodologia para a captura, detecção e normalização de imagens faciais. Prodossimo, Flávio das Chagas. 29 May 2013 (has links)
CAPES / With the evolution of information technology, facial recognition is becoming a common task. It can be used in security, for example to control access to restricted places and to identify people who have committed unlawful acts. Facial recognition is a complex task, and the complete process comprises several stages: capture of facial images, detection of regions of interest, facial normalization, feature extraction and the recognition itself. The first three of these are treated in this work, whose main objective is the automatic normalization of faces. For both image capture and frontal normalization there are international standards that define how these tasks should be carried out, and these standards were followed in this work. In addition, some standards were adapted in order to build a database of facial images intended to support the face-recognition process, and a new methodology was created for the normalization of lateral (profile) facial images, based on the rules for frontal normalization. Semiautomatic frontal, semiautomatic lateral and automatic lateral normalization were implemented. Automatic facial normalization requires two control points, the two eyes, which makes the detection of regions of interest an indispensable step. Two similar detection methodologies were compared in this work: first a region containing both eyes is detected and then, within this region, each eye is detected more precisely. Both methodologies combine image-processing and pattern-recognition techniques. The first, based on the Viola-Jones approach, uses Haar-like features as the filter together with Adaptive Boosting for pattern recognition; the equivalent techniques in the second are the Local Binary Pattern and Support Vector Machines, respectively. The second methodology also uses a neighbourhood-based search optimization algorithm, Variable Neighborhood Search. The study resulted in a database of 3726 images, a frontally normalized database of 966 images and a laterally normalized database of 276 images. In the best tests, eye detection reached approximately 99% precision for the first methodology and 95% for the second, and the first was the faster in all tests. Future work includes making the image databases public, improving the accuracy and processing speed of all tests, and improving normalization by also implementing background and lighting normalization.
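A minimal sketch of the first detection pipeline described above (Haar-like features with a boosted cascade, as in Viola-Jones), followed by a two-eye frontal alignment, is given below. It relies on the pre-trained cascades shipped with OpenCV rather than the classifiers trained for the thesis, and the output size and target eye positions are assumptions.

```python
# Illustrative Haar-cascade eye detection plus two-point frontal alignment.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_centers(gray):
    """Detect the face region first, then the two eyes inside it."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None
    # Order detections left to right and keep the first two as eye candidates.
    eyes = sorted(eyes, key=lambda e: e[0])[:2]
    return [(x + ex + ew // 2, y + ey + eh // 2) for ex, ey, ew, eh in eyes]

def normalize_frontal(gray, centers, size=(128, 160)):
    """Rotate, scale and translate so the eye centres land on fixed positions."""
    (x1, y1), (x2, y2) = centers
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # tilt of the eye line
    scale = 0.4 * size[0] / np.hypot(x2 - x1, y2 - y1) # assumed target inter-eye distance
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, scale)
    # Shift the eye midpoint to a fixed location in the output image.
    M[0, 2] += size[0] / 2.0 - center[0]
    M[1, 2] += 0.35 * size[1] - center[1]
    return cv2.warpAffine(gray, M, size)
```

Swapping the cascades for an LBP-based detector with an SVM classifier would mirror the second methodology; the alignment step is unchanged.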
|
25 |
3D numerical techniques for determining the foot of a continental slope. Pantland, Nicolette Ariana. 12 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: The United Nations Convention on the Law of the Sea (UNCLOS) provides an opportunity for qualifying coastal signatory states to claim extended maritime estate. The opportunity to claim rests on the precept
that in certain cases a continental shelf extends beyond the traditionally demarcated two hundred nautical
mile (200M) Exclusive Economic Zone (EEZ) mark. In these cases a successful claim results in states
having sovereign rights to the living and non-living resources of the seabed and subsoil, as well as the
sedentary species, of the area claimed. Where the continental shelf extends beyond the 200M mark, the
Foot of the Continental Slope (FoS) has to be determined as one of the qualifying criteria. Article 76 of
UNCLOS de nes the FoS as ". . . the point of maximum change in the gradient at its base." Currently
Caris Lots is the most widely used software which incorporates public domain data to determine the
FoS as a step towards defining the offshore extent of an extended continental shelf. In this software,
existing methods to compute the FoS are often subjective, typically involving an operator choosing the
best perceived foot point during consideration of a two dimensional profile of the continental slope.
These foot points are then joined by straight lines to form the foot line to be used in the desktop study
(feasibility study). The purpose of this thesis is to establish a semi-automated and mathematically based
three dimensional method for determination of the FoS using South African data as a case study.
Firstly, a general background of UNCLOS is given (with emphasis on Article 76), including a brief
discussion of the geological factors that influence the characteristics of a continental shelf and thus
factors that could influence the determination of the FoS.
Secondly, a mathematical method for determination of the surfaces of extremal curvature (on three
dimensional data), originally proposed by Vanicek and Ou in 1994, is detailed and applied to two smooth,
hypothetical sample surfaces. A discussion of the bathymetric data to be used for application introduces
the factors to be taken into account when using extensive survey data as well as methods to process
the raw data for use. The method is then applied to two sets of gridded bathymetric data of differing
resolution for four separate regions around the South African coast. The ridges formed on the resulting
surfaces of maximum curvature are then traced in order to obtain a foot line definition for each region
and each resolution.
The results obtained from application of the method are compared with example foot points provided
by the subjective two dimensional method of computation within the Caris Lots software suite. A
comparison of the results for the different resolutions of data is included to provide insight as to the
effectiveness of the method with differing spatial coarseness of data.
Finally, an indication of further work is provided in the conclusion to this thesis, in the form of a
number of recommendations for possible adaptations of the mathematical and tracing methods, and
improvements thereof.
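As a simple numerical illustration of the Article 76 definition quoted above, the sketch below locates the point of maximum change in gradient along one synthetic offshore depth profile. It is a two-dimensional stand-in for intuition only, not the three-dimensional surfaces-of-maximum-curvature method of Vanicek and Ou applied in the thesis, and the profile shape and search window are assumptions.

```python
# Toy FoS estimate on a single bathymetric profile: largest gradient change near the base.
import numpy as np

def foot_of_slope(distance_km, depth_m):
    """Return the index of maximum gradient change along one offshore profile."""
    slope = np.gradient(depth_m, distance_km)       # first derivative: gradient
    curvature = np.gradient(slope, distance_km)     # second derivative: change in gradient
    # Restrict the search to the seaward half so shelf-edge curvature is ignored
    # (a crude stand-in for "at its base").
    half = len(depth_m) // 2
    return half + int(np.argmax(np.abs(curvature[half:])))

if __name__ == "__main__":
    d = np.linspace(0, 400, 801)                         # distance offshore, km (synthetic)
    depth = -150 - 4800 / (1 + np.exp(-(d - 200) / 20))  # synthetic shelf-slope-rise profile
    i = foot_of_slope(d, depth)
    print(f"FoS candidate at {d[i]:.0f} km offshore, depth {depth[i]:.0f} m")
```

On gridded data, the analogous step is to evaluate a curvature measure over the whole surface and trace the ridges of maximum curvature into a foot line, as the thesis does.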
|
26 |
Multiscale Active Contour Methods in Computer Vision with Applications in Tomography. Alvino, Christopher Vincent. 10 April 2005 (has links)
Most applications in computer vision suffer from two major difficulties. The first is that they are notoriously ridden with sub-optimal local minima. The second is that they typically require high
computational cost to be solved robustly. The reason for these two drawbacks is that most problems in computer vision, even when
well-defined, typically require finding a solution in a very large high-dimensional space.
It is for these two reasons that multiscale methods are particularly well-suited to problems in computer vision. Multiscale methods, by
way of looking at the coarse scale nature of a problem before considering the fine scale nature, often have the ability to avoid sub-optimal local minima and obtain a more globally optimal solution. In addition, multiscale methods typically enjoy reduced computational
cost.
This thesis applies novel multiscale active contour methods to several problems in computer vision, especially in simultaneous segmentation
and reconstruction of tomography images. In addition, novel multiscale methods are applied to contour registration using minimal surfaces and to the computation of non-linear rotationally invariant optical flow. Finally, a methodology for fast robust image segmentation is presented that relies on a lower dimensional image
basis derived from an image scale space.
The specific advantages of using multiscale methods in each of these problems are highlighted in the various simulations throughout the
thesis, particularly their ability to avoid sub-optimal local minima and their ability to solve the problems at a lower overall
computational cost.
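A hedged sketch of the coarse-to-fine idea is given below: segment at the coarsest level of a Gaussian pyramid and warm-start each finer level from the upsampled coarse result. It uses scikit-image's morphological Chan-Vese model and a stock test image as stand-ins for the active-contour formulations and tomography data treated in the thesis; the pyramid depth and iteration counts are assumptions.

```python
# Coarse-to-fine active-contour segmentation over a Gaussian pyramid (illustrative).
from skimage import data, img_as_float
from skimage.transform import pyramid_gaussian, resize
from skimage.segmentation import morphological_chan_vese

def multiscale_segmentation(image, levels=3, iters=50):
    pyramid = list(pyramid_gaussian(image, max_layer=levels - 1, downscale=2))
    # Coarsest level: start from a generic checkerboard level set.
    mask = morphological_chan_vese(pyramid[-1], iters, init_level_set="checkerboard")
    # Finer levels: warm-start from the previous (coarser) segmentation.
    for img in reversed(pyramid[:-1]):
        init = resize(mask.astype(float), img.shape) > 0.5
        mask = morphological_chan_vese(img, iters, init_level_set=init)
    return mask

if __name__ == "__main__":
    image = img_as_float(data.camera())
    segmentation = multiscale_segmentation(image)
    print("foreground fraction:", segmentation.mean())
```

The warm start is what gives multiscale schemes their robustness to local minima: the coarse solve fixes the gross topology cheaply, and the fine levels only refine boundaries.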
|
27 |
Applications of Riesz Transforms and Monogenic Wavelet Frames in Imaging and Image Processing. Reinhardt, Martin. 15 March 2019 (has links)
The dissertation 'Applications of Riesz Transforms and Monogenic Wavelet Frames in Imaging and Image Processing' is concerned with modern signal-processing methods in imaging and in image processing. To this end, Riesz transforms and translation-invariant wavelet frames are combined into monogenic frames and applied. Established techniques such as the structure tensor and the energy operator are improved by means of the new methods and used for orientation estimation in images. A further application is the 'Equalization of Brightness' algorithm, which, with some adaptations, is used to present an implementation of the monogenic wavelet frames based on the NVIDIA CUDA framework. An empirical comparison of the presented techniques with the original methods showed more precise results with lower susceptibility to noise. A further part of the work deals with the possibility of using monogenic wavelet frames as filters in optical systems. A brief sketch of the frequency-domain Riesz transform is given after the contents listing below.

Contents: Preface
Introduction
1. Time Frequency Analysis for Signal Processing
2. The Riesz Transform and Monogenic Wavelet Frames
3. Applications of Monogenic Wavelet Frames
Conclusion
A. Mathematical Appendix
B. Source Code Listings
Bibliography
List of Figures
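As a small, hedged illustration of the central construction, the sketch below computes the two Riesz components of an image in the frequency domain and derives a local orientation and amplitude from one band-pass channel. The difference-of-Gaussians band-pass is an assumption standing in for the monogenic wavelet frames of the dissertation.

```python
# Frequency-domain Riesz transform and a single-scale monogenic orientation estimate.
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq
from scipy.ndimage import gaussian_filter

def riesz_transform(image):
    """Return the two Riesz components (r1, r2) of a 2D image."""
    rows, cols = image.shape
    u = fftfreq(cols)[np.newaxis, :]
    v = fftfreq(rows)[:, np.newaxis]
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                       # avoid division by zero at DC
    spectrum = fft2(image)
    r1 = np.real(ifft2(spectrum * (-1j * u / radius)))
    r2 = np.real(ifft2(spectrum * (-1j * v / radius)))
    return r1, r2

def monogenic_orientation(image, sigma_fine=1.0, sigma_coarse=3.0):
    """Local orientation and amplitude from one band-pass (difference-of-Gaussians) scale."""
    band = gaussian_filter(image, sigma_fine) - gaussian_filter(image, sigma_coarse)
    r1, r2 = riesz_transform(band)
    orientation = np.arctan2(r2, r1)                    # dominant local direction
    amplitude = np.sqrt(band ** 2 + r1 ** 2 + r2 ** 2)  # local energy
    return orientation, amplitude
```

Repeating the band-pass step over several scales of a wavelet frame yields the multiscale orientation and energy maps that the structure-tensor and energy-operator comparisons in the dissertation build on.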
|
28 |
Development of a novel sensor for soot deposition measurement in a diesel particulate filter using electrical capacitance tomography. Huq, Ragibul. January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This paper presents a novel approach to particulate matter (soot) measurement in a Diesel particulate filter using Electrical Capacitance Tomography. Modern Diesel engines are equipped with Diesel Particulate Filters (DPF) as well as on-board technologies to evaluate the status of the DPF, because complete knowledge of DPF soot loading is critical for robust and efficient operation of the engine exhaust after-treatment system. Emission regulations are imposed by environmental regulatory agencies upon all internal combustion engines, including Diesel engines, covering gaseous as well as particulate (soot) emissions. Over time, soot is deposited inside the DPF, which tends to clog the filter and hence generate back pressure in the exhaust system, negatively impacting fuel efficiency. To remove the soot build-up, the DPF must be regenerated as part of the exhaust after-treatment process at pre-determined intervals. Passive regeneration uses exhaust heat and a catalyst to burn the deposited soot, whereas active regeneration uses external energy, such as injection of diesel into an upstream DOC, to burn the soot. Since the regeneration process consumes fuel, robust and efficient operation based on accurate knowledge of the particulate matter deposit (or soot load) becomes essential in order to keep fuel consumption at a minimum. In this paper, we propose a sensing method for a DPF that can accurately measure the in-situ soot load using Electrical Capacitance Tomography (ECT). Simulation results show that the proposed method offers an effective way to accurately estimate the soot load in a DPF. The proposed method is expected to have a profound impact on improving overall PM filtering efficiency (and thereby fuel efficiency) and the durability of a Diesel Particulate Filter (DPF) through appropriate closed-loop regeneration operation.
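As an illustrative, hedged sketch of the simplest ECT reconstruction step such a sensor could use, the code below normalises the inter-electrode capacitances and applies linear back-projection through a sensitivity matrix. The electrode count, pixel grid and the random stand-in sensitivity matrix are assumptions; in practice the sensitivity matrix would come from an electrostatic model of the proposed DPF sensor, which the abstract does not specify.

```python
# Linear back-projection (LBP) reconstruction sketch for ECT soot imaging.
import numpy as np

def normalise_capacitance(c_meas, c_empty, c_full):
    """Scale each electrode-pair measurement between the empty and full reference states."""
    return (c_meas - c_empty) / (c_full - c_empty)

def linear_back_projection(sensitivity, lam):
    """LBP estimate of the permittivity (soot) distribution on the pixel grid."""
    g = sensitivity.T @ lam                       # back-project normalised capacitances
    norm = sensitivity.T @ np.ones_like(lam)      # per-pixel sensitivity sum for scaling
    return g / norm

if __name__ == "__main__":
    n_pairs, n_pixels = 66, 32 * 32               # 12-electrode sensor -> 66 pairs (assumed)
    rng = np.random.default_rng(0)
    S = rng.random((n_pairs, n_pixels))           # stand-in sensitivity matrix
    lam = rng.random(n_pairs)                     # stand-in normalised capacitances
    soot_image = linear_back_projection(S, lam).reshape(32, 32)
    print("estimated mean soot level:", soot_image.mean())
```

A closed-loop regeneration strategy would then act on a scalar soot-load estimate integrated from the reconstructed image.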
|