31

Head Rotation Detection in Marmoset Monkeys

January 2014 (has links)
abstract: Head movement is known to improve the accuracy of sound localization for humans and animals. The marmoset is a small-bodied New World monkey species that has become an emerging model for studying auditory function. This thesis aims to detect the horizontal and vertical rotation of head movement in marmoset monkeys. Experiments were conducted in a sound-attenuated acoustic chamber. Head movement of marmoset monkeys was studied under various auditory and visual stimulation conditions. With increasing complexity, these conditions were (1) idle, (2) sound alone, (3) sound and visual signals, and (4) an alert signal produced by opening and closing the chamber door. All of these conditions were tested with the house light either on or off. An infrared camera with a frame rate of 90 Hz was used to capture the head movement of the monkeys. To assist signal detection, two circular markers were attached to the top of the monkey's head. The data analysis used an image-based marker detection scheme. Images were processed using the Computer Vision Toolbox in Matlab. The markers and their positions were detected using blob detection techniques. Based on the frame-by-frame marker positions, the angular position, velocity and acceleration were extracted in the horizontal and vertical planes. Adaptive Otsu thresholding, Kalman filtering and bounds on marker properties were used to overcome a number of challenges encountered during this analysis, such as choosing the image segmentation threshold, continuously tracking markers during large head movements, and rejecting false detections. The results show that the blob detection method combined with Kalman filtering performed better than other image-based techniques such as optical flow and SURF features. The median of the maximal head turn in the horizontal plane was in the range of 20 to 70 degrees, and the median of the maximal velocity in the horizontal plane was in the range of a few hundred degrees per second. The natural alert signal - door opening and closing - evoked faster head turns than the other stimulus conditions. These results suggest that behaviorally relevant stimuli such as alert signals evoke faster head-turn responses in marmoset monkeys. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2014
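The angle-extraction step described above lends itself to a brief illustration. The sketch below (Python/NumPy) computes the head angle from the two marker centroids and tracks angle and angular velocity with a constant-velocity Kalman filter at the 90 Hz frame rate; it assumes per-frame marker positions from the blob detector, and the noise parameters are placeholders rather than the thesis's values (the original analysis was done in Matlab).

```python
# Minimal sketch: head angle from two tracked markers, filtered with a
# constant-velocity Kalman model. Parameter values are illustrative only.
import numpy as np

def head_angle_deg(marker_a, marker_b):
    """Angle (degrees) of the line joining the two head markers."""
    dx, dy = marker_b[0] - marker_a[0], marker_b[1] - marker_a[1]
    return np.degrees(np.arctan2(dy, dx))

class ConstantVelocityKalman1D:
    """Tracks angle and angular velocity from noisy per-frame angle measurements."""
    def __init__(self, dt=1 / 90, q=50.0, r=2.0):
        self.x = np.zeros(2)                         # state: [angle, angular velocity]
        self.P = np.eye(2) * 100.0                   # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])              # only the angle is observed
        self.Q = np.eye(2) * q                       # process noise (placeholder)
        self.R = np.array([[r]])                     # measurement noise (placeholder)

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured angle z
        y = np.array([z]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x                                # filtered angle and angular velocity
```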
32

Insulator Fault Detection using Image Processing

Banerjee, Abhik 01 February 2019 (has links)
This thesis presents a method for detecting faults (burn marks) on insulators using only image processing algorithms. This is accomplished by extracting the insulator from the background image and then detecting the burn marks on the segmented image. Apart from several other challenges encountered during the detection phase, the main challenge was to eliminate connector marks, which could otherwise be detected as burn marks. Little prior research exists on burn-mark detection on insulator surfaces, and several algorithms were considered before settling on the particular combination applied here. The first phase of the work focuses on detecting the insulator in the image. In addition to pre-processing and other segmentation techniques, symmetry detection and an adaptive GrabCut are the main algorithms used for this purpose; feature detection and matching approaches were also considered but set aside after weighing their pros and cons. The second phase is the detection of burn marks on the extracted image while eliminating the connector marks. Blob detection and contour detection, adapted in a particular manner and drawing on references from medical image processing, are used for this purpose. The elimination of connector marks is achieved by applying a set of mathematical calculations. The entire project is implemented in Visual Studio using OpenCV libraries, and the results obtained are cross-validated on an image data set.
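As a hedged sketch of the two-phase pipeline described above, the OpenCV fragment below runs GrabCut to separate the insulator from the background and then applies a blob detector to the segmented region. The file name, initial rectangle, iteration count and detector parameters are illustrative assumptions rather than the thesis's tuned values, and the connector-mark elimination step is omitted.

```python
# Phase 1: GrabCut segmentation of the insulator; Phase 2: blob detection on the
# segmented image. All numeric settings are placeholders.
import cv2
import numpy as np

img = cv2.imread("insulator.jpg")                       # hypothetical input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)   # rough bounding box

cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
segmented = cv2.bitwise_and(img, img, mask=fg_mask)

# Dark, compact regions on the segmented insulator are burn-mark candidates.
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 30
params.filterByCircularity = False
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY))
```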
33

Machine learning for blob detection in high-resolution 3D microscopy images

Ter Haak, Martin January 2018 (has links)
The aim of blob detection is to find regions in a digital image that differ from their surroundings with respect to properties such as intensity or shape. Bio-image analysis is a common application, where blobs can denote regions of interest that have been stained with a fluorescent dye. In image-based in situ sequencing for ribonucleic acid (RNA), for example, the blobs are local intensity maxima (i.e. bright spots) corresponding to the locations of specific RNA nucleobases in cells. Traditional methods of blob detection rely on simple image processing steps that must be guided by the user. The problem is that the user must seek the optimal parameters for each step, which are often specific to that image and cannot be generalised to other images. Moreover, some of the existing tools are not suitable for the scale of the microscopy images, which are often in very high resolution and 3D. Machine learning (ML) is a collection of techniques that give computers the ability to "learn" from data. To eliminate the dependence on user parameters, the idea is to apply ML to learn the definition of a blob from labelled images. The research question is therefore how ML can be effectively used to perform blob detection. A blob detector is proposed that first extracts a set of relevant and non-redundant image features, then classifies pixels as blobs and finally uses a clustering algorithm to split up connected blobs. The detector works out-of-core, meaning it can process images that do not fit in memory, by dividing the images into chunks. The results demonstrate the feasibility of this blob detector and show that it can compete with other popular software for blob detection. Unlike other tools, however, the proposed blob detector does not require parameter tuning, making it easier to use and more reliable.
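A minimal sketch of the proposed three-stage detector is given below, under stated assumptions: the per-voxel features, the random-forest classifier and the connected-component labelling that stands in for the clustering stage are illustrative choices, not necessarily those used in the thesis.

```python
# Sketch: per-voxel features -> supervised pixel classification -> grouping of
# foreground voxels into blob candidates. Feature set and classifier are assumptions.
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(volume, sigmas=(1.0, 2.0, 4.0)):
    """Stack simple multi-scale Gaussian and Laplacian-of-Gaussian responses per voxel."""
    feats = [ndi.gaussian_filter(volume, s) for s in sigmas]
    feats += [ndi.gaussian_laplace(volume, s) for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, 2 * len(sigmas))

def train(volume, labels):
    """labels: binary array of the same shape as volume, marking blob voxels."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(pixel_features(volume), labels.ravel())
    return clf

def detect_blobs(clf, volume):
    mask = clf.predict(pixel_features(volume)).reshape(volume.shape).astype(bool)
    # Connected-component labelling stands in here for the clustering step that
    # the thesis uses to split touching blobs.
    labelled, n = ndi.label(mask)
    return ndi.center_of_mass(mask, labelled, range(1, n + 1))

# Out-of-core idea: run detect_blobs on overlapping chunks of a large volume and
# merge the resulting centroids, so the full image never has to fit in memory.
```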
34

Object detection algorithms analysis and implementation for augmented reality system / Objecktų aptikimo algoritmai, jų analizė ir pritaikymas papildytosios realybės sistemoje

Zavistanavičiūtė, Rasa 05 November 2013 (has links)
Object detection is the initial step in any image analysis procedure and is essential for the performance of object recognition and augmented reality systems. Research concerning the detection of edges and blobs is particularly rich, and many algorithms and methods have been proposed in the literature. This master's thesis presents the four most common blob and edge detectors, proposes a method for separating the detected numbers, and describes the experimental setup and the results on object detection and number separation performance. Finally, we determine which detector gives the best results for a mobile augmented reality system.
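For illustration, the scikit-image calls below exercise three standard blob detectors and a Canny edge detector of the kind surveyed above; the specific four detectors compared in the thesis and their parameter settings are not given here, so the choices below are assumptions.

```python
# Common blob and edge detectors in scikit-image; the test image and thresholds
# are placeholders for quick experimentation.
from skimage import data, feature, color

image = color.rgb2gray(data.hubble_deep_field())

blobs_log = feature.blob_log(image, max_sigma=30, num_sigma=10, threshold=0.1)  # Laplacian of Gaussian
blobs_dog = feature.blob_dog(image, max_sigma=30, threshold=0.1)                # Difference of Gaussian
blobs_doh = feature.blob_doh(image, max_sigma=30, threshold=0.01)               # Determinant of Hessian
edges = feature.canny(image, sigma=2.0)                                         # Canny edge detector
```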
35

An empirical study of the impact of two antipatterns on program comprehension

Abbes, Marwen 11 1900 (has links)
Antipatterns are "poor" solutions to recurring design problems that are conjectured in the literature to make object-oriented systems harder to maintain. However, little quantitative evidence exists to support this conjecture. We performed an empirical study to investigate whether the occurrence of antipatterns does indeed affect the understandability of systems by developers during comprehension and maintenance tasks. We designed and conducted three experiments, each with 24 subjects, to collect data on the performance of these subjects on basic tasks related to program comprehension and to assess the impact of two antipatterns, Blob and Spaghetti Code, and their combinations. We measured the subjects' performance with: (1) the NASA task load index (TLX) for their effort; (2) the time that they spent performing their tasks; and (3) their percentages of correct answers. The collected data show that the occurrence of one antipattern does not significantly decrease developers' performance, while the combination of two antipatterns impedes developers significantly. We conclude that developers can cope with one antipattern, but combinations of antipatterns should be avoided, possibly through detection and refactoring.
36

Méthodes de reconstruction d'images à partir d'un faible nombre de projections en tomographie par rayons x / Image reconstruction methods from a small number of projections in X-ray tomography

Wang, Han 24 October 2011 (has links) (PDF)
To improve the safety (lower dose) and productivity (faster acquisition) of X-ray computed tomography (CT) systems, we seek to reconstruct a high-quality image from a small number of projections. Classical algorithms are not suited to this situation: the reconstruction is unstable and corrupted by artifacts. The Compressed Sensing (CS) approach assumes that the unknown image is "sparse" or "compressible" and reconstructs it through an optimization problem (TV/L1-norm minimization) that promotes sparsity. To apply CS to CT using the pixel/voxel as the representation basis, a sparsifying transform is needed, and it must be combined with the X-ray projector applied to a pixelized image. In this thesis, we adapt a radial basis of the Gaussian family, called a "blob", to CS-based CT reconstruction. It has better space-frequency localization than the pixel, and operations such as the X-ray transform can be evaluated analytically and are easily parallelized (on GPU platforms, for example). Compared with the classical Kaiser-Bessel blob, the new basis has a multiscale structure: an image is a sum of translated and dilated radial Mexican-hat functions. Typical medical images are compressible in this basis, so the sparse representation system of ordinary CS algorithms is no longer needed. 2D simulations show that the existing TV/L1 algorithms are more efficient, and the reconstructions have better visual quality, than with the equivalent approach based on a pixel/wavelet basis. This new approach was also validated on 2D experimental data, where we observed that the number of projections can in general be reduced by up to 50% without compromising image quality.
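As a hedged formalization of the optimization mentioned above, with notation assumed rather than taken from the thesis: b denotes the measured projections, A the X-ray transform acting on the blob coefficients x (or on the image f), lambda a regularization weight and epsilon a noise level.

```latex
% L1 minimization over blob coefficients, or TV minimization over the image.
\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 + \lambda\,\lVert x \rVert_1
\qquad\text{or}\qquad
\hat{f} = \arg\min_{f}\; \mathrm{TV}(f)
\;\;\text{s.t.}\;\; \lVert A f - b \rVert_2 \le \varepsilon
```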
38

Detekce objektu ve videosekvencích / Object Detection in Video Sequences

Šebela, Miroslav January 2010 (has links)
The thesis consists of three parts: a theoretical description of digital image processing, optical character recognition, and the design of a system for car licence plate recognition (LPR) in an image or video sequence. The theoretical part describes image representation, smoothing and methods used for blob segmentation, and proposes two methods for optical character recognition (OCR). The practical part is concerned with finding a solution and designing a procedure for an LPR system, including OCR. The design comprises image pre-processing, blob segmentation, object detection based on object properties, and OCR. The proposed solution uses grayscale transformation, histogram processing, thresholding, connected-component analysis, and region recognition based on patterns and properties. An optical recognition method for licence plates is also implemented, in which the recognized values are compared against a database used to manage the entry of vehicles into the premises.
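The pre-processing and segmentation chain described above can be sketched roughly as follows with OpenCV; the file name, threshold choice and the size and aspect-ratio filters are placeholder assumptions, and the OCR stage is omitted.

```python
# Grayscale conversion, histogram equalization, Otsu thresholding and
# connected-component filtering of character-sized regions.
import cv2

img = cv2.imread("car.jpg")                                    # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                                  # histogram processing
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
chars = []
for i in range(1, n):                                          # skip background label 0
    x, y, w, h, area = stats[i]
    if 100 < area < 5000 and 1.0 < h / w < 4.0:                # plausible character shape
        chars.append((x, y, w, h))
chars.sort(key=lambda box: box[0])                             # left-to-right reading order
```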
39

Development and Implementation of Star Tracker Electronics / Utveckling och implementering av elektronik för en stjärnkamera

Lindh, Marcus January 2014 (has links)
Star trackers are essential instruments commonly used on satellites. They provide precise measurements of the orientation of a satellite and are part of the attitude control system. For cubesats, star trackers need to be small, consume little power and preferably be cheap to manufacture. In this thesis work the electronics for a miniature star tracker have been developed. A star detection algorithm has been implemented in hardware logic, tested and verified. A platform for continued work is presented, and future improvements of the current implementation are discussed.
40

Discrete Scale-Space Theory and the Scale-Space Primal Sketch

Lindeberg, Tony January 1991 (has links)
This thesis, within the subfield of computer science known as computer vision, deals with the use of scale-space analysis in early low-level processing of visual information. The main contributions comprise the following five subjects:
- The formulation of a scale-space theory for discrete signals. Previously, the scale-space concept had been expressed for continuous signals only. We propose that the canonical way to construct a scale-space for discrete signals is by convolution with a kernel called the discrete analogue of the Gaussian kernel, or equivalently by solving a semi-discretized version of the diffusion equation. Both the one-dimensional and two-dimensional cases are covered. An extensive analysis of discrete smoothing kernels is carried out for one-dimensional signals, and the discrete scale-space properties of the most common discretizations of the continuous theory are analysed.
- A representation, called the scale-space primal sketch, which gives a formal description of the hierarchical relations between structures at different levels of scale. It is aimed at making information in the scale-space representation explicit. We give a theory for its construction and an algorithm for computing it.
- A theory for extracting significant image structures, and for determining the scales of these structures, from this representation in a solely bottom-up, data-driven way.
- Examples demonstrating how such qualitative information extracted from the scale-space primal sketch can be used for guiding and simplifying other early visual processes. Applications are given to edge detection, histogram analysis and classification based on local features. Among other possible applications one can mention perceptual grouping, texture analysis, stereo matching, model matching and motion.
- A detailed theoretical analysis of the evolution properties of critical points and blobs in scale-space, comprising drift velocity estimates under scale-space smoothing, a classification of the possible types of generic events at bifurcation situations, and estimates of how the number of local extrema in a signal can be expected to decrease as a function of the scale parameter. For two-dimensional signals the generic bifurcation events are annihilations and creations of extremum-saddle point pairs. Interpreted in terms of blobs, these transitions correspond to annihilations, merges, splits and creations.
Experiments on different types of real imagery demonstrate that the proposed theory gives perceptually intuitive results.
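The first contribution above is concrete enough for a short sketch: the discrete analogue of the Gaussian kernel is T(n; t) = e^{-t} I_n(t), with I_n the modified Bessel function of integer order, so discrete scale-space smoothing reduces to a one-dimensional convolution. The truncation radius below is an implementation choice, not part of the theory.

```python
# Discrete scale-space smoothing with the discrete analogue of the Gaussian
# kernel, T(n; t) = exp(-t) * I_n(t), computed via scipy's exponentially
# scaled Bessel function ive(n, t) = I_n(t) * exp(-t).
import numpy as np
from scipy.special import ive

def discrete_gaussian_kernel(t, radius=None):
    """1-D discrete Gaussian kernel at scale parameter t (the variance)."""
    if radius is None:
        radius = int(np.ceil(4 * np.sqrt(t))) + 1   # roughly four standard deviations
    n = np.arange(-radius, radius + 1)
    kernel = ive(np.abs(n), t)                      # T(n; t) = exp(-t) * I_n(t)
    return kernel / kernel.sum()                    # renormalise the truncated tail

def smooth(signal, t):
    """Scale-space representation of a 1-D signal at scale t."""
    return np.convolve(signal, discrete_gaussian_kernel(t), mode="same")
```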
