  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
571

Colour image segmentation using perceptual colour difference saliency algorithm

Bukola, Taiwo Tunmike 23 August 2017 (has links)
Submitted in fulfillment of the requirements for the Master's Degree in Information and Communication Technology, Durban University of Technology, Durban, South Africa, 2017. / Colour image segmentation has been, and remains, an active research topic in computer vision and image processing because of its wide range of practical applications. This has led to the development of numerous colour image segmentation algorithms for extracting salient objects from colour images. However, because of the diverse imaging conditions in different application domains, the accuracy and robustness of several state-of-the-art colour image segmentation algorithms still leave room for improvement. This dissertation reports on the development of a new image segmentation algorithm based on perceptual colour difference saliency combined with binary morphological operations. The algorithm consists of four processing stages: colour image transformation, luminance image enhancement, salient pixel computation and image artefact filtering. The input RGB colour image is first transformed into the CIE L*a*b* colour space to achieve perceptual saliency and obtain the best possible calibration of the transformation model. The luminance channel of the transformed colour image is then enhanced using an adaptive gamma correction function to alleviate the adverse effects of illumination variation and low contrast and to improve image quality. The salient objects in the input colour image are then determined by computing saliency at each pixel in order to preserve spatial information. Finally, the computed saliency map is filtered using morphological operations to eliminate undesired artefacts that may be present in the colour image. A series of experiments was performed to evaluate the effectiveness of the new perceptual colour difference saliency algorithm for colour image segmentation.
This was accomplished by testing the algorithm on a set of 190 images drawn from four distinct publicly available benchmark corpora. The accuracy of the developed algorithm was quantified using four widely used statistical evaluation metrics: precision, F-measure, error and Dice coefficient. Promising results were obtained despite the experimental images coming from four different corpora under varying imaging conditions. The results demonstrate that the newly developed colour image segmentation algorithm performs consistently, with improved performance compared with a number of other saliency and non-saliency state-of-the-art image segmentation algorithms. / M
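A minimal sketch of the kind of pipeline the abstract describes: per-pixel saliency as the CIE76 colour difference from the mean image colour (assuming the input is already in CIE L*a*b*), followed by a crude morphological opening. This is an illustrative stand-in, not the dissertation's actual algorithm.

```python
import numpy as np

def colour_difference_saliency(lab_image):
    """Per-pixel saliency as the Euclidean colour difference (CIE76
    Delta-E) between each pixel and the mean colour of the image,
    computed in CIE L*a*b* space, normalised to [0, 1]."""
    mean_colour = lab_image.reshape(-1, 3).mean(axis=0)
    diff = lab_image - mean_colour              # broadcast over H x W x 3
    saliency = np.sqrt((diff ** 2).sum(axis=-1))
    return saliency / saliency.max() if saliency.max() > 0 else saliency

def binary_cleanup(mask, iterations=1):
    """Crude morphological opening (erosion then dilation) with a 3x3
    cross structuring element, implemented with NumPy shifts only."""
    def erode(m):
        out = m.copy()
        out[1:, :] &= m[:-1, :]
        out[:-1, :] &= m[1:, :]
        out[:, 1:] &= m[:, :-1]
        out[:, :-1] &= m[:, 1:]
        return out
    def dilate(m):
        out = m.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
        return out
    for _ in range(iterations):
        mask = erode(mask)
    for _ in range(iterations):
        mask = dilate(mask)
    return mask
```

Thresholding the saliency map and applying `binary_cleanup` removes isolated noise pixels, which mirrors the role of the artefact-filtering stage described above.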
572

Object Recognition Using Digitally Generated Images as Training Data

Ericson, Anton January 2013 (has links)
Object recognition is a much-studied computer vision problem, where the task is to find a given object in an image. This Master's thesis presents a MATLAB implementation of an object recognition algorithm that finds three kinds of objects in images: electrical outlets, light switches and wall-mounted air-conditioning controls. Visually, these three objects are quite similar, and the aim is to locate them in an image as well as to distinguish them from one another. The object recognition was accomplished using Histograms of Oriented Gradients (HOG). During the training phase, the program was trained with images of the objects to be located, as well as reference images which did not contain the objects. A Support Vector Machine (SVM) was used in the classification phase. The performance was measured for two different setups: one where the training data consisted of photos, and one where the training data additionally included digitally generated images created with 3D modeling software. The results show that using digitally generated images as training images did not improve the accuracy in this case. The likely reason is that there is too little intraclass variability in the gradients of digitally generated images; they are too synthetic in a sense, which makes them poor at reflecting reality for this specific approach. The result might have been different if a larger number of digitally generated images had been used.
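The HOG descriptor the thesis relies on can be sketched as magnitude-weighted histograms of unsigned gradient orientation per cell. Block normalisation, which full HOG pipelines apply before SVM classification, is omitted for brevity, and the cell size and bin count are illustrative defaults, not the thesis's settings.

```python
import numpy as np

def hog_descriptor(image, cell_size=8, n_bins=9):
    """Minimal Histogram of Oriented Gradients: unsigned gradient
    orientations (0-180 degrees) histogrammed per cell, weighted by
    gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))      # row, column gradients
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = image.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            ys, xs = cy * cell_size, cx * cell_size
            mag = magnitude[ys:ys + cell_size, xs:xs + cell_size]
            ori = orientation[ys:ys + cell_size, xs:xs + cell_size]
            bins = (ori // bin_width).astype(int) % n_bins
            for b in range(n_bins):
                hist[cy, cx, b] = mag[bins == b].sum()
    return hist.ravel()
```

The flattened histogram vector is what would be fed to an SVM in the classification phase.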
573

Point cloud densification

Forsman, Mona January 2010 (has links)
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. The thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Square Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see if they converge to the same result. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and by comparison of the standard deviation in the x- and y-directions. If a point is rejected, there is an option to try again with a larger template size, called Adaptive Template Size (ATS).
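As context for the matching step, a dense normalised cross-correlation search, the coarse precursor that least-squares template matching typically refines, can be sketched in a few lines. This is a generic illustration, not the thesis's implementation.

```python
import numpy as np

def normalised_cross_correlation(image, template):
    """Slide the template over the image and score each position with
    normalised cross-correlation in [-1, 1]."""
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw].astype(float)
            p = patch - patch.mean()
            p_norm = np.sqrt((p ** 2).sum())
            if p_norm > 0 and t_norm > 0:
                scores[y, x] = (p * t).sum() / (p_norm * t_norm)
    return scores

def best_match(image, template):
    """Top-left corner of the best-scoring template position."""
    scores = normalised_cross_correlation(image, template)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

A least-squares matcher would then refine this integer-pixel peak to sub-pixel precision by optimizing an affine and radiometric transform of the template.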
574

Image analysis, an approach to measure grass roots from images

Hansson, Jonas January 2001 (has links)
In this project a method to analyse images is presented. The images document the development of grass roots in a tilled field in order to study the movement of nitrate in the field. The final aim of the image analysis is to estimate the volume of dead and living roots in the soil. Since the roots and the soil have broad and overlapping ranges of colours, the fundamental problem is to find the roots in the images. Earlier methods for analysing root images have extracted the roots using thresholds. For a threshold to work, the pixels of the object must have a unique range of colours separating them from the background; this is not the case for the images in this project. Instead, the method uses a neural network to classify the individual pixels. In this paper a complete method to analyse the images is presented, and although the results are far from perfect, the method gives interesting results.
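The per-pixel classification idea can be illustrated with a single-neuron (logistic regression) classifier trained on colour features. The thesis uses a neural network, so this is a deliberately simplified stand-in, and the feature dimensions and labels below are hypothetical.

```python
import numpy as np

def train_pixel_classifier(features, labels, lr=0.5, epochs=500):
    """Logistic regression on per-pixel colour features (a single-neuron
    stand-in for a pixel-classifying neural network): features is (N, D),
    labels is (N,) with 1 = root, 0 = soil."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid activation
        grad = p - labels                       # gradient of log-loss
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def classify_pixels(features, w, b):
    """True where the trained model predicts 'root'."""
    return (features @ w + b) > 0.0
```

Because the decision boundary is learned from labeled examples rather than fixed, overlapping colour ranges can still be separated wherever the feature distributions differ, which is the motivation for replacing a global threshold.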
575

Positioning of Nuclear Fuel Assemblies by Means of Image Analysis on Tomographic Data

Troeng, Mats January 2004 (has links)
A tomographic measurement technique for nuclear fuel assemblies has been developed at the Department of Radiation Sciences at Uppsala University [1]. The technique requires highly accurate information about the position of the measured fuel assembly relative to the measurement equipment. In earlier experimental campaigns, separate positioning measurements were therefore performed in connection with the tomographic measurements. In this work, another positioning approach has been investigated, which requires only the collection of tomographic data. Here, a simplified tomographic reconstruction is performed, whereby an image is obtained. By performing image analysis on this image, the lateral and angular position of the fuel assembly can be determined. The position information can then be used to perform a more accurate tomographic reconstruction involving detailed physical modeling. Two image analysis techniques have been developed in this work, and their stability with respect to some central parameters has been studied. The agreement between these image analysis techniques and the previously used positioning technique was found to meet the desired requirements. Furthermore, it has been shown that the image analysis techniques offer more detailed information than the previous technique. In addition, their off-line analysis properties reduce the need for valuable measurement time. When the positions obtained from the image analysis techniques were used in tomographic reconstructions of the rod-by-rod power distribution, the repeatability of the reconstructed values was improved. Furthermore, the reconstructions resulted in better agreement with theoretical data.
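One generic way to recover a lateral and angular position from a reconstructed image is via image moments: the intensity centroid gives the lateral offset and the principal axis of the central second moments gives the rotation. This is a textbook approach offered for illustration; it is not claimed to be either of the thesis's two techniques.

```python
import numpy as np

def lateral_and_angular_position(image):
    """Estimate the lateral (centroid) and angular (principal-axis)
    position of an object from its intensity image using raw and
    central image moments."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    cy = (ys * image).sum() / total            # centroid row
    cx = (xs * image).sum() / total            # centroid column
    mu20 = ((xs - cx) ** 2 * image).sum() / total
    mu02 = ((ys - cy) ** 2 * image).sum() / total
    mu11 = ((xs - cx) * (ys - cy) * image).sum() / total
    # orientation of the principal axis, in radians
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cy, cx), angle
```

Because this is pure post-processing of the reconstructed image, it matches the off-line character of the positioning approach described above.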
576

Image analysis tool for geometric variations of the jugular veins in ultrasonic sequences : Development and evaluation

Westlund, Arvid January 2018 (has links)
The aim of this project is to develop and perform a first evaluation of software, based on the active contour, which automatically computes the cross-section area of the internal jugular veins through a sequence of 90 ultrasound images. The software is intended to be useful in future research on intracranial pressure and its associated diseases. The biomechanics of the internal jugular veins and their relationship to intracranial pressure are studied with ultrasound. This generates data in the form of ultrasound sequences shot in seven different body positions, from supine to upright. Vein movements in cross section over the cardiac cycle are recorded for all body positions. From these films, it is of interest how the cross-section area varies over the cardiac cycle and between body positions, in order to estimate the pressure. The software created is semi-automatic: the operator loads each individual sequence and sets the initial contour on the first frame. It was evaluated in a test comparing its computed areas with manually estimated areas. The test showed that the software was able to track and compute the area with satisfactory accuracy for a variety of sequences. It is also faster and more consistent than manual measurement. The most difficult sequences to track were small vessels with narrow geometries, fast-moving walls, and blurry edges. Further development is required to correct a few bugs in the algorithm, and the improved algorithm should be evaluated on a larger sample of sequences before it is used in research.
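Once an active contour has converged to the vein wall, the enclosed cross-section area follows directly from the contour vertices via the shoelace formula. A minimal sketch (the thesis software may compute the area differently):

```python
import numpy as np

def contour_area(xs, ys):
    """Area enclosed by a closed polygonal contour (e.g. an active
    contour around a vein cross section), via the shoelace formula:
    A = 0.5 * |sum_i (x_i * y_{i+1} - y_i * x_{i+1})|."""
    return 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))
```

Applied frame by frame, this yields the area-versus-time curve over the cardiac cycle that the analysis above calls for.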
577

Automated fundus images analysis techniques to screen retinal diseases in diabetic patients

Giancardo, Luca 27 September 2011 (has links)
In this Ph.D. thesis, we study new methods to analyse digital fundus images of diabetic patients. In particular, we concentrate on the development of the algorithmic components of an automatic screening system for diabetic retinopathy. The techniques developed fall into three categories: quality assessment and improvement, lesion segmentation, and diagnosis. For the first category, we present a fast algorithm to numerically estimate the quality of a single image by employing vasculature- and colour-based features; additionally, we show how it is possible to increase image quality and remove reflection artefacts by merging information gathered from multiple fundus images (captured by changing the stare point of the patient). For the second category, two families of lesions are targeted: exudates and microaneurysms. Two new algorithms which work on single fundus images are proposed and compared with existing techniques in order to prove their efficacy; in the microaneurysm case, a new Radon transform-based operator was developed.
In the last category, diagnosis, we have developed an algorithm that diagnoses diabetic retinopathy and diabetic macular edema based on the segmented lesions; starting from a single unseen image, our algorithm can generate a diabetic retinopathy and macular edema diagnosis in ~22 seconds on a 1.6 GHz machine with 4 GB of RAM. Additionally, we show the first results of a macular edema detection algorithm based on multiple fundus images, which can potentially identify the swelling of the macula even when no lesions are visible.
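The Radon transform underlying the microaneurysm operator maps an image to line-integral projections over a range of angles. A nearest-bin discrete version can be sketched in pure NumPy; the thesis's operator builds on this transform, but its details are not reproduced here.

```python
import numpy as np

def radon_sinogram(image, angles):
    """Discrete Radon transform: for each angle, bin every pixel by its
    signed distance to the line through the image centre and accumulate
    intensities, giving one projection (sinogram row) per angle."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(image.shape)
    y0, x0 = ys - cy, xs - cx
    diag = int(np.ceil(np.hypot(h, w)))        # projection axis length
    sinogram = np.zeros((len(angles), diag))
    for i, theta in enumerate(angles):
        s = x0 * np.cos(theta) + y0 * np.sin(theta)
        bins = np.round(s + diag // 2).astype(int)
        np.add.at(sinogram[i], bins.ravel(), image.ravel())
    return sinogram
```

A small round lesion such as a microaneurysm produces a compact response at a consistent offset across all projection angles, which is the property a Radon-based detector can exploit.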
578

Analysis of surface coverage in regards to surface functionalization : A microscopic approach

Leppälä, Daniel January 2017 (has links)
Understanding how white blood cells react when coming into contact with various surfaces is of major importance for a wide range of biomaterial and biosensor applications. This study investigates whether it is possible to determine how neutrophils react to a sensor chip under development called the cell clinic, by examining the cell surface coverage on the chip and how it correlates with the signal response of the sensor. Neutrophils, like other white blood cells, adhere quickly to surfaces, and during the adhesion process they activate to different degrees depending on, for example, the type of surface or surface functionalization; this activation can be visualized by the change in morphology. When measuring the change of capacitance with the cell clinic sensor during cell adhesion, the cell surface coverage is of central importance. The main focus of this diploma work has been to develop an image analysis script capable of automated analysis of a large body of images to estimate the surface coverage. Input data for this analysis is taken from fluorescence microscopy images. The experiments conducted during this project indicate that white blood cells adhered to the sensor surface show signs of being activated even without external stimulation. This shows that knowledge of how neutrophils react to surface modifications is of great importance, as is the awareness that any surface, including the cell clinic, may trigger a response from the immune system, i.e. neutrophil activation. It might therefore be difficult to evaluate the effect of a foreign substance on neutrophils when a significant fraction is activated simply by contact with the surface. Across the different surfaces, the white blood cells did not display any preference for adhering to a specific surface.
The surfaces used in this project were silicon oxide wafers, silicon oxide wafers with a nitride surface functionalization, and the intended sensor chip. However, the addition of PMA clearly affects how many cells adhere to the surface as well as the average area of each cell.
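Estimating surface coverage from a fluorescence image reduces to thresholding and counting bright pixels. A common automatic choice for the threshold, shown here as a hypothetical stand-in for whatever the thesis's script does, is Otsu's method:

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Otsu's method: pick the grey-level threshold that maximises the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(n_bins))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)                  # best split bin
    return edges[k + 1]

def surface_coverage(image):
    """Fraction of pixels brighter than the Otsu threshold, i.e. the
    cell-covered fraction of a fluorescence image."""
    return float((image > otsu_threshold(image)).mean())
```

Running `surface_coverage` over each image in a folder gives the automated batch estimate of coverage described above.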
579

Quantitative bioimaging in single cell signaling

Bernhem, Kristoffer January 2017 (has links)
Imaging of cellular samples has for several hundred years been a way for scientists to investigate biological systems. With the discovery of immunofluorescence labeling in the 1940s and later genetic fluorescent protein labeling in the 1980s, the most important aspects of imaging, contrast and specificity, were drastically improved. Ever since, we have seen an increased use of fluorescence imaging in biological research, and the applications and tools are constantly being developed further. Specific ion imaging has long been a way to discern signaling events in cell systems. Through the use of fluorescent ion reporters, ionic concentrations can be measured in living cells in response to applied stimuli. Using Ca2+ imaging we have demonstrated that plasma membrane voltage-gated calcium channels have an inverse influence on the angiotensin II type 1 receptor (a protein involved in blood pressure regulation). This has direct implications for the treatment of hypertension (high blood pressure), one of the most common serious diseases in the western world today, with approximately one billion afflicted adults worldwide in 2016. Extending from this lower-resolution live-cell bioimaging, I have moved into super-resolution imaging. This thesis includes work on the interpretation of super-resolution imaging data of the neuronal Na+, K+-ATPase α3, a protein responsible for maintaining cell homeostasis during brain activity. The imaging data are correlated with electrophysiological measurements and computer models to point towards possible artefacts in super-resolution imaging that need to be taken into account when interpreting imaging data. Moreover, I proceeded to develop software for single-molecule localization microscopy analysis aimed at the wider research community, and employed this software to identify expression artifacts in transiently transfected cell systems.
In the concluding work, super-resolution imaging was used to map out the early steps of the intrinsic apoptotic signaling cascade in space and time. In intact cells, I mapped out at which time points and at which locations the various proteins involved in apoptotic regulation are activated and interact.
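Single-molecule localization microscopy, mentioned above, rests on estimating each fluorophore's sub-pixel position from its diffraction-limited spot. The simplest estimator, sketched here as a generic illustration rather than the thesis software's method, is a background-subtracted centre of mass (Gaussian fitting is the usual, more precise alternative):

```python
import numpy as np

def localise_spot(roi, background=0.0):
    """Centre-of-mass localisation of a single fluorescent spot in a
    small region of interest: subtract the background, then weight each
    pixel coordinate by its remaining intensity."""
    weights = np.clip(roi - background, 0.0, None)
    ys, xs = np.indices(roi.shape)
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total
```

Repeating this over thousands of sparsely activated emitters per frame is what lets the final image resolve structure well below the diffraction limit.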
580

Learning and recognizing texture characteristics using local binary patterns

Turtinen, M. (Markus) 05 June 2007 (has links)
Abstract Texture plays an important role in numerous computer vision applications. Many methods for describing and analyzing of textured surfaces have been proposed. Variations in the appearance of texture caused by changing illumination and imaging conditions, for example, set high requirements on different analysis methods. In addition, real-world applications tend to produce a great deal of complex texture data to be processed that should be handled effectively in order to be exploited. A local binary pattern (LBP) operator offers an efficient way of analyzing textures. It has a simple theory and combines properties of structural and statistical texture analysis methods. LBP is invariant against monotonic gray-scale variations and has also extensions to rotation invariant texture analysis. Analysis of real-world texture data is typically very laborious and time consuming. Often there is no ground truth or other prior knowledge of the data available, and important properties of the textures must be learned from the images. This is a very challenging task in texture analysis. In this thesis, methods for learning and recognizing texture categories using local binary pattern features are proposed. Unsupervised clustering and dimensionality reduction methods combined to visualization provide useful tools for analyzing texture data. Uncovering the data structures is done in an unsupervised fashion, based only on texture features, and no prior knowledge of the data, for example texture classes, is required. In this thesis, non-linear dimensionality reduction, data clustering and visualization are used for building a labeled training set for a classifier, and for studying the performance of the features. The thesis also proposes a multi-class approach to learning and labeling part based texture appearance models to be used in scene texture recognition using only little human interaction. 
Also a semiautomatic approach to learning texture appearance models for view based texture classification is proposed. The goal of texture characterization is often to classify textures into different categories. In this thesis, two texture classification systems suitable for different applications are proposed. First, a discriminative classifier that combines local and contextual texture information of the image in scene recognition is proposed. Secondly, a real-time capable texture classifier with a self-intuitive user interface to be used in industrial texture classification is proposed. Two challenging real-world texture analysis applications are used to study the performance and usefulness of the proposed methods. The first one is visual paper analysis which aims to characterize paper quality based on texture properties. The second application is outdoor scene image analysis where texture information is used to recognize different regions in the scenes.
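The basic 8-neighbour LBP operator described in the abstract thresholds each pixel's ring of neighbours against the centre and packs the comparisons into an 8-bit code; the histogram of codes is the texture descriptor. A compact NumPy sketch with fixed radius 1 and no interpolation, i.e. a simplification of the general multi-scale LBP:

```python
import numpy as np

def lbp_8_1(image):
    """Basic 8-neighbour local binary pattern: each interior pixel is
    encoded as an 8-bit word of threshold comparisons against its
    neighbours (rotation-invariant and uniform variants build on this)."""
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

def lbp_histogram(image):
    """Normalised 256-bin histogram of LBP codes, the texture feature
    vector used for clustering or classification."""
    codes = lbp_8_1(image)
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

Because the codes depend only on the sign of intensity differences, any monotonic grey-scale transformation leaves them unchanged, which is the invariance property the abstract highlights.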
