471

Graph-based Methods for Interactive Image Segmentation

Malmberg, Filip January 2011
The subject of digital image analysis deals with extracting relevant information from image data, stored in digital form in a computer. A fundamental problem in image analysis is image segmentation, i.e., the identification and separation of relevant objects and structures in an image. Accurate segmentation of objects of interest is often required before further processing and analysis can be performed. Despite years of active research, fully automatic segmentation of arbitrary images remains an unsolved problem. Interactive, or semi-automatic, segmentation methods use human expert knowledge as additional input, thereby making the segmentation problem more tractable. The goal of interactive segmentation methods is to minimize the required user interaction time, while maintaining tight user control to guarantee the correctness of the results. Methods for interactive segmentation typically operate under one of two paradigms for user guidance: (1) specification of pieces of the boundary of the desired object(s); (2) specification of correct segmentation labels for a small subset of the image elements. These types of user input are referred to as boundary constraints and regional constraints, respectively. This thesis concerns the development of methods for interactive segmentation, using a graph-theoretic approach. We view an image as an edge-weighted graph, whose vertex set is the set of image elements, and whose edges are given by an adjacency relation among the image elements. Due to its discrete nature and mathematical simplicity, this graph-based image representation lends itself well to the development of efficient, and provably correct, methods. The contributions in this thesis may be summarized as follows: Existing graph-based methods for interactive segmentation are modified to improve their performance on images with noisy or missing data, while maintaining a low computational cost. Fuzzy techniques are utilized to obtain segmentations from which feature measurements can be made with increased precision. A new paradigm for user guidance, which unifies and generalizes regional and boundary constraints, is proposed. The practical utility of the proposed methods is illustrated with examples from the medical field.
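To make the graph view concrete, the sketch below illustrates seeded (regional-constraint) segmentation on an edge-weighted pixel graph: each seed propagates its label along cheapest paths, with edge costs given by intensity differences between neighbouring pixels. It is a generic illustration of the graph formulation, not the specific algorithms developed in the thesis; the function name and cost choice are illustrative.

```python
# Minimal sketch (not the thesis's exact algorithm): seeded segmentation on a
# 4-connected, edge-weighted pixel graph. Each seed propagates its label along
# cheapest paths; edge cost = intensity difference between neighbours.
import heapq
import numpy as np

def seeded_segmentation(image, seeds):
    """image: 2D float array; seeds: dict {(row, col): label}."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)          # 0 = unlabelled
    cost = np.full((h, w), np.inf)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        labels[r, c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > cost[r, c]:
            continue                               # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + abs(float(image[nr, nc]) - float(image[r, c]))
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    labels[nr, nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return labels

# Toy example: two flat regions separated by an intensity step, one seed in each.
img = np.zeros((20, 20)); img[:, 10:] = 1.0
seg = seeded_segmentation(img, {(10, 2): 1, (10, 17): 2})
```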
472

Image Filtering Methods for Biomedical Applications

Niazi, M. Khalid Khan January 2011
Filtering is a key step in digital image processing and analysis. It is mainly used for amplification or attenuation of some frequencies depending on the nature of the application. Filtering can either be performed in the spatial domain or in a transformed domain. The selection of the filtering method, filtering domain, and filter parameters is often driven by the properties of the underlying image. This thesis presents three different kinds of biomedical image filtering applications, where the filter parameters are automatically determined from the underlying images. Filtering can be used for image enhancement. We present a robust image-dependent filtering method for intensity inhomogeneity correction of biomedical images. In the presented filtering method, the filter parameters are automatically determined from the grey-weighted distance transform of the magnitude spectrum. An evaluation shows that the filter provides an accurate estimate of intensity inhomogeneity. Filtering can also be used for analysis. The thesis presents a filtering method for heart localization and robust signal detection from video recordings of rat embryos. It presents a strategy to decouple motion artifacts produced by the non-rigid embryonic boundary from the heart. The method also filters out noise and the trend term with the help of empirical mode decomposition. Again, all the filter parameters are determined automatically based on the underlying signal. Transforming the geometry of one image to fit that of another one, so-called image registration, can be seen as a filtering operation on the image geometry. To assess the progression of eye disorders, registration between temporal images is often required to determine the movement and development of the blood vessels in the eye. We present a robust method for retinal image registration. The method is based on particle swarm optimization, where the swarm searches for optimal registration parameters based on the direction of its cognitive and social components. An evaluation of the proposed method shows that it is less susceptible to becoming trapped in local minima than previous methods. With these thesis contributions, we have augmented the filter toolbox for image analysis with methods that adjust to the data at hand.
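As an illustration of the registration-by-optimization idea, the sketch below uses a basic particle swarm to search for a 2D translation minimizing the sum of squared differences between two images. It shows only generic PSO mechanics; the thesis's retinal registration, its parameterization, and its direction-based cognitive and social components are not reproduced here, and all names and parameter values are illustrative.

```python
# Minimal PSO sketch (illustrative only): the swarm searches for a translation
# (ty, tx) that minimizes the sum of squared differences between a reference
# image and a shifted moving image.
import numpy as np
from scipy import ndimage

def ssd(reference, moving, t):
    shifted = ndimage.shift(moving, t, order=1, mode="nearest")
    return float(np.sum((reference - shifted) ** 2))

def pso_register(reference, moving, n_particles=20, iters=50, bound=10.0):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-bound, bound, size=(n_particles, 2))   # candidate translations
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([ssd(reference, moving, p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                                  # common PSO weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -bound, bound)
        for i, p in enumerate(pos):
            v = ssd(reference, moving, p)
            if v < pbest_val[i]:
                pbest_val[i], pbest[i] = v, p
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest                                               # best translation found
```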
473

Image Analysis in Support of Computer-Assisted Cervical Cancer Screening

Malm, Patrik January 2013
Cervical cancer is a disease that annually claims the lives of over a quarter of a million women. A substantial number of these deaths could be prevented if population-wide cancer screening, based on the Papanicolaou test, were globally available. The Papanicolaou test involves a visual review of cellular material obtained from the uterine cervix. While being relatively inexpensive from a material standpoint, the test requires highly trained cytology specialists to conduct the analysis. There is a great shortage of such specialists in developing countries, causing these countries to be grossly overrepresented in the mortality statistics. For the last 60 years, numerous attempts at constructing an automated system able to perform the screening have been made. Unfortunately, a cost-effective, automated system has yet to be produced. In this thesis, a set of methods aimed to be used in the development of an automated screening system is presented. These have been produced as part of an international cooperative effort to create a low-cost cervical cancer screening system. The contributions are linked to a number of key problems associated with the screening: deciding which areas of a specimen warrant analysis, delineating cervical cell nuclei, and rejecting artefacts to make sure that only cells of diagnostic value are included when drawing conclusions regarding the final diagnosis of the specimen. Also, to facilitate efficient method development, two methods for creating synthetic images that mimic images acquired from specimens are described.
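As a toy illustration of the synthetic-image idea mentioned above, the sketch below renders dark elliptical "nuclei" on a bright background with blur and noise. It is far simpler than the synthesis methods of the thesis, and all parameter values are arbitrary stand-ins.

```python
# Toy sketch (purely illustrative): a synthetic specimen-like image with dark
# elliptical "nuclei" on a bright background, plus optical blur and sensor noise.
import numpy as np
from scipy import ndimage

def synthetic_cell_image(size=256, n_nuclei=15, seed=0):
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.9)                       # bright background
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_nuclei):
        cy, cx = rng.uniform(20, size - 20, size=2)        # nucleus centre
        a, b = rng.uniform(6, 14, size=2)                  # semi-axes in pixels
        theta = rng.uniform(0, np.pi)                      # orientation
        xr = (xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta)
        yr = -(xx - cx) * np.sin(theta) + (yy - cy) * np.cos(theta)
        img[(xr / a) ** 2 + (yr / b) ** 2 <= 1.0] = 0.2    # dark elliptical nucleus
    img = ndimage.gaussian_filter(img, sigma=1.5)          # optical blur
    img += rng.normal(0, 0.03, img.shape)                  # acquisition noise
    return np.clip(img, 0, 1)
```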
474

Automatic Virus Identification using TEM : Image Segmentation and Texture Analysis

Kylberg, Gustaf January 2014
Viruses and their morphology have been detected and studied with electron microscopy (EM) since the end of the 1930s. The technique has been vital for the discovery of new viruses and in establishing the virus taxonomy. Today, electron microscopy is an important technique in clinical diagnostics. It serves both as a routine diagnostic technique and as an essential tool for detecting infectious agents in new and unusual disease outbreaks. The technique does not depend on virus-specific targets and can therefore detect any virus present in the sample. New or reemerging viruses can be detected in EM images while being unrecognizable by molecular methods. One problem with diagnostic EM is its high dependency on experts performing the analysis. Another problematic circumstance is that the EM facilities capable of handling the most dangerous pathogens are few, and decreasing in number. This thesis addresses these shortcomings of diagnostic EM by proposing image analysis methods mimicking the actions of an expert operating the microscope. The methods cover strategies for automatic image acquisition, segmentation of possible virus particles, as well as methods for extracting characteristic properties from the particles enabling virus identification. One discriminative property of viruses is their surface morphology, or texture, in the EM images. Describing texture in digital images is an important part of this thesis. Viruses show up in arbitrary orientations in the TEM images, making rotation-invariant texture description important. Rotation invariance and noise robustness are evaluated for several texture descriptors in the thesis. Three new texture datasets are introduced to facilitate these evaluations. Invariant features and generalization performance in texture recognition are also addressed in a more general context. The work presented in this thesis has been part of the project Panvirshield, aiming for an automatic diagnostic system for viral pathogens using EM. The work is also part of the miniTEM project, in which a new desktop low-voltage electron microscope is developed with the aspiration to become an easy-to-use system reaching high levels of automation for clinical tissue sections, viruses and other nano-sized particles.
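As an example of the kind of rotation-invariant texture description discussed above, the sketch below computes a classic 8-neighbour local binary pattern histogram, made rotation invariant by mapping each code to the minimum over its circular bit rotations. It illustrates the general idea only; the specific descriptors and invariance mechanisms evaluated in the thesis may differ.

```python
# Minimal sketch of a rotation-invariant texture descriptor: the classic
# 8-neighbour local binary pattern (LBP), made rotation invariant by mapping
# each code to the minimum over its circular bit rotations.
import numpy as np

# Offsets of the 8 neighbours, in circular order around the centre pixel.
_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def _rotation_invariant_code(code, bits=8):
    mask = (1 << bits) - 1
    return min(((code >> i) | (code << (bits - i))) & mask for i in range(bits))

def lbp_histogram(image):
    """Histogram of rotation-invariant LBP codes over the image interior."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    hist = np.zeros(256, dtype=float)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            centre = img[r, c]
            code = 0
            for bit, (dr, dc) in enumerate(_OFFSETS):
                if img[r + dr, c + dc] >= centre:
                    code |= 1 << bit
            hist[_rotation_invariant_code(code)] += 1
    return hist / hist.sum()    # normalised, so differently sized images are comparable
```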
475

Distance Functions and Their Use in Adaptive Mathematical Morphology

Ćurić, Vladimir January 2014
One of the main problems in image analysis is the comparison of different shapes in images. It is often desirable to determine the extent to which one shape differs from another. This is usually a difficult task because shapes vary in size, length, contrast, texture, orientation, etc. Shapes can be described using sets of points, crisp or fuzzy. Hence, distance functions between sets have been used for comparing different shapes. Mathematical morphology is a non-linear theory related to the shape or morphology of features in the image, and morphological operators are defined by the interaction between an image and a small set called a structuring element. Although morphological operators have been extensively used to differentiate shapes by their size, it is not an easy task to differentiate shapes with respect to other features such as contrast or orientation. One approach to differentiation with respect to these types of features is to use data-dependent structuring elements. In this thesis, we investigate the usefulness of various distance functions for: (i) shape registration and recognition; and (ii) construction of adaptive structuring elements and functions. We examine existing distance functions between sets, and propose a new one, called the Complement weighted sum of minimal distances, where the contribution of each point to the distance function is determined by the position of the point within the set. The usefulness of the new distance function is shown for different image registration and shape recognition problems. Furthermore, we extend the new distance function to fuzzy sets and show its applicability to classification of fuzzy objects. We propose two different types of adaptive structuring elements derived from the salience map of the edge strength: (i) the shape of a structuring element is predefined, and its size is determined from the salience map; (ii) the shape and size of a structuring element are dependent on the salience map. Using this salience map, we also define adaptive structuring functions. We also present the applicability of adaptive mathematical morphology to image regularization. The connection between adaptive mathematical morphology and Lasry-Lions regularization of non-smooth functions provides an elegant tool for image regularization.
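The sketch below gives one plausible reading of a complement-weighted sum of minimal distances between two binary shapes: each point's minimal distance to the other set is weighted by that point's distance to its own set's complement, so interior points contribute more than boundary points. The exact weighting and normalisation used in the thesis may differ; this only conveys the general construction.

```python
# Hedged sketch of a complement-weighted sum of minimal distances between two
# binary shapes A and B: each point of A contributes its distance to the nearest
# point of B, weighted by its distance to A's complement (interior points count
# more). The thesis's exact definition may differ.
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def cwsmd(A, B):
    """One-sided weighted distance from binary image A to binary image B."""
    pts_a = np.argwhere(A)                               # coordinates of A's points
    pts_b = np.argwhere(B)
    weights = ndimage.distance_transform_edt(A)[A]       # distance to A's complement
    d_min, _ = cKDTree(pts_b).query(pts_a)               # minimal distances to B
    return float(np.sum(weights * d_min) / np.sum(weights))

def symmetric_cwsmd(A, B):
    return max(cwsmd(A, B), cwsmd(B, A))                 # symmetrise, Hausdorff-style

# Toy example: two discs with slightly shifted centres.
yy, xx = np.mgrid[0:64, 0:64]
A = (yy - 32) ** 2 + (xx - 30) ** 2 <= 15 ** 2
B = (yy - 32) ** 2 + (xx - 36) ** 2 <= 15 ** 2
print(symmetric_cwsmd(A, B))
```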
476

Contributions to 3D Image Analysis using Discrete Methods and Fuzzy Techniques : With Focus on Images from Cryo-Electron Tomography

Gedda, Magnus January 2010
With the emergence of new imaging techniques, researchers are always eager to push the boundaries by examining objects either smaller or further away than what was previously possible. The development of image analysis techniques has greatly helped to introduce objectivity and coherence in measurements and decision making. It has become an essential tool for facilitating both large-scale quantitative studies and qualitative research. In this thesis, methods were developed for analysis of low-resolution (with respect to the size of the imaged objects) three-dimensional (3D) images with low signal-to-noise ratios (SNR), applied to images from cryo-electron tomography (cryo-ET) and fluorescence microscopy (FM). The main focus is on methods of low complexity that take into account both grey-level and shape information, to facilitate large-scale studies. Methods were developed to localise and represent complex macromolecules in images from cryo-ET. The methods were applied to Immunoglobulin G (IgG) antibodies and MET proteins. The low resolution and low SNR required that grey-level information be utilised to create fuzzy representations of the macromolecules. To extract structural properties, a method was developed that uses grey-level-based distance measures to facilitate decomposition of the fuzzy representations into sub-domains. The structural properties of the MET protein were analysed by developing an analytical curve representation of its stalk. To facilitate large-scale analysis of structural properties of nerve cells, a method for tracing neurites in FM images using local path-finding was developed. Both theoretical and implementation details of computationally heavy approaches were examined to keep the time complexity of the developed methods low. Grey-weighted distance definitions and various aspects of their implementations were examined in detail to form guidelines on which definition to use in which setting and which implementation is the fastest. Heuristics were developed to speed up computations when calculating grey-weighted distances between two points. The methods were evaluated on both real and synthetic data, and the results show that the methods provide a step towards facilitating large-scale studies of images from both cryo-ET and FM.
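To make the grey-weighted distance idea concrete, the sketch below implements one common definition, in which the cost of a path is the sum over steps of the mean grey value of the two pixels times the Euclidean step length, minimised with a Dijkstra-style front propagation. The thesis compares several such definitions and implementations; this shows only one of them.

```python
# Minimal sketch of one common grey-weighted distance definition: path cost is
# the sum over steps of the mean grey value of the two pixels times the step
# length, minimised with Dijkstra's algorithm on an 8-connected grid.
import heapq
import numpy as np

def grey_weighted_distance(image, seeds):
    """image: 2D non-negative array; seeds: list of (row, col) start points."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, r, c) for r, c in seeds]
    for _, r, c in heap:
        dist[r, c] = 0.0
    heapq.heapify(heap)
    steps = [(dr, dc, np.hypot(dr, dc))
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                               # stale heap entry
        for dr, dc, length in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 0.5 * (image[r, c] + image[nr, nc]) * length
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```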
477

Transitional and turbulent fibre suspension flows

Kvick, Mathias January 2014
In this thesis the orientation of macro-sized fibres in turbulent flows is studied, as well as the effect of nano-sized fibrils on hydrodynamic stability. The focus lies on enabling processes for new materials where cellulose is the main constituent. When fibres (or any elongated particles) are added to a fluid, the complexity of the flow problem increases. The fluid flow will influence the rotation of the fibres, and therefore also affect the overall fibre orientation. Exactly how the fibres rotate depends to a large extent on the mean velocity gradient in the flow. In addition, when fibres are added to a suspending fluid, the total stress in the suspension will increase, resulting in an increased apparent viscosity. The increase in stress is related to the direction of deformation in relation to the orientation of the particle, i.e. whether the deformation happens along the long or short axis of the fibre. The increase in stress, which in most cases is not constant in either time or space, will in turn influence the flow. This thesis starts off with the orientation and spatial distribution of fibres in the turbulent flow down an inclined plate. By varying fibre and flow parameters it is discovered that the main parameter controlling the orientation distribution is the aspect ratio of the fibres, with only minor influences from the other parameters. Moreover, the fibres are found to agglomerate into streamwise streaks. A new method to quantify this agglomeration is developed, taking care of the problems that arise due to the low concentration in the experiments. It is found that streakiness, i.e. the tendency to agglomerate in streaks, varies with Reynolds number. Going from fibre orientation to flow dynamics of fibre suspensions, the influence of cellulose nanofibrils (CNF) on laminar/turbulent transition is investigated in three different setups, namely plane channel flow, curved-rotating channel flow, and the flow in a flow focusing device. This last flow case is selected since it can be used for assembly of CNF-based materials. In the plane channel flow, the addition of CNF delays the transition more than predicted from measured viscosities, while in the curved-rotating channel the opposite effect is discovered. This is qualitatively confirmed by linear stability analyses. Moreover, a transient growth analysis in the plane channel reveals an increase in streamwise wavenumber with increasing concentration of CNF. In the flow focusing device, i.e. at the intersection of three inlets and one outlet, the transition is found to depend mainly on the Reynolds number of the side flow. Recirculation zones forming downstream of two sharp corners are hypothesised to be the cause of the transition. With that in mind, the two corners are given a larger radius in an attempt to stabilise the flow. However, if anything, the flow seems to become unstable at a smaller Reynolds number, indicating that the separation bubble is not the sole cause of the transition. The choice of fluid in the core flow is found to have no effect on the stability, neither when using fluids with different viscosities nor when using a non-Newtonian CNF dispersion. Thus, Newtonian model fluids can be used when studying the flow dynamics in this type of device. As a proof of concept, a flow focusing device is used to produce a continuous film from CNF. The fibrils are believed to be aligned due to the extensional flow created in the setup, resulting in a transparent film with an estimated thickness of 1 µm.
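As classical background to the statement that fibre rotation is set by the velocity gradient (and not a result of this thesis), the snippet below evaluates Jeffery's tumbling period for an isolated spheroidal fibre in simple shear, which grows with the fibre aspect ratio.

```python
# Classical background, not a thesis result: Jeffery's solution for an isolated
# spheroidal particle in simple shear gives a tumbling period that increases
# with the particle aspect ratio, illustrating how shear rate and shape together
# set the rotation rate.
import numpy as np

def jeffery_period(shear_rate, aspect_ratio):
    """Tumbling period T = (2*pi/G) * (r + 1/r) for shear rate G and aspect ratio r."""
    return 2.0 * np.pi / shear_rate * (aspect_ratio + 1.0 / aspect_ratio)

print(jeffery_period(shear_rate=10.0, aspect_ratio=20.0))   # slender fibres tumble slowly
```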
478

Intelligent optical methods in image analysis for human detection

Graumann, Jean-Marc January 2005
This thesis introduces the concept of a person recognition system for use on an integrated autonomous surveillance camera. Developed to enable generic surveillance tasks without the need for complex setup procedures or operator assistance, this is achieved through the novel use of a simple dynamic noise reduction and object detection algorithm that requires no previous knowledge of the installation environment and no training of the system to its installation. The combination of this initial processing stage with a novel hybrid neural network structure, composed of a SOM mapper and an MLP classifier using a combination of common and individual input data lines, has enabled the development of a reliable detection process capable of dealing with both noisy environments and partial occlusion of valid targets. With a final correct classification rate of 94% on single-image analysis, this represents a major step forward compared to the reported 97% failure rate of standard camera surveillance systems.
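The sketch below conveys the general SOM-then-MLP idea in a generic form: a small self-organising map compresses each feature vector to its best-matching unit, and the unit's grid coordinates are appended to the raw features fed to an MLP classifier. The thesis's actual hybrid structure, with its combination of common and individual input data lines, is not reproduced, and the stand-in data and parameters are illustrative.

```python
# Rough sketch of a SOM-then-MLP hybrid (generic, not the thesis architecture):
# a small self-organising map maps each feature vector to its best-matching unit,
# whose grid coordinates are appended to the raw features given to an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_som(X, grid=(6, 6), iters=2000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))        # best-matching unit
        sigma = max(grid) * (1 - t / iters) + 0.5                   # shrinking neighbourhood
        influence = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        weights += (lr * (1 - t / iters)) * influence[:, None] * (x - weights)
    return weights, coords

def som_features(X, weights, coords):
    bmus = np.argmin(np.linalg.norm(X[:, None, :] - weights[None, :, :], axis=2), axis=1)
    return np.hstack([X, coords[bmus]])                             # raw features + BMU position

# Usage sketch with random stand-in data (real inputs would be image-derived features).
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 16)), rng.integers(0, 2, size=200)
w, c = train_som(X)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(som_features(X, w, c), y)
```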
479

Computational Systems Biology of Saccharomyces cerevisiae Cell Growth and Division

Mayhew, Michael Benjamin January 2014
Cell division and growth are complex processes fundamental to all living organisms. In the budding yeast, Saccharomyces cerevisiae, these two processes are known to be coordinated with one another, as a cell's mass must roughly double before division. Moreover, cell-cycle progression is dependent on cell size, with smaller cells at birth generally taking more time in the cell cycle. This dependence is a signature of size control. Systems biology is an emerging field that emphasizes connections or dependencies between biological entities and processes over the characteristics of individual entities. Statistical models provide a quantitative framework for describing and analyzing these dependencies. In this dissertation, I take a statistical systems biology approach to study cell division and growth and the dependencies within and between these two processes, drawing on observations from richly informative microscope images and time-lapse movies. I review the current state of knowledge on these processes, highlighting key results and open questions from the biological literature. I then discuss my development of machine learning and statistical approaches to extract cell-cycle information from microscope images and to better characterize the cell-cycle progression of populations of cells. In addition, I analyze single cells to uncover correlation in cell-cycle progression, evaluate potential models of dependence between growth and division, and revisit classical assertions about budding yeast size control. This dissertation presents a unique perspective and approach towards comprehensive characterization of the coordination between growth and division.
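The snippet below illustrates, with synthetic stand-in numbers rather than data from the dissertation, the size-control signature described above: if smaller cells at birth spend longer in the cell cycle, regressing cycle duration on the log of birth size yields a clearly negative slope.

```python
# Illustrative sketch with synthetic stand-in numbers (not data from the
# dissertation): the size-control signature appears as a negative slope when
# regressing cell-cycle duration on the log of birth size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
birth_size = rng.lognormal(mean=3.5, sigma=0.25, size=300)              # arbitrary units
duration = 150.0 - 30.0 * np.log(birth_size) + rng.normal(0, 5, 300)    # minutes

fit = stats.linregress(np.log(birth_size), duration)
print(f"slope = {fit.slope:.1f} min per log-unit of size, r = {fit.rvalue:.2f}")
```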
480

Retinal Image Analysis and its use in Medical Applications

Zhang, Yibo (Bob) 19 April 2011
The retina, located in the back of the eye, is not only a vital part of human sight but also contains valuable information that can be used in biometric security applications or for the diagnosis of certain diseases. In order to analyze this information from retinal images, features such as the blood vessels, microaneurysms and the optic disc must be extracted and detected. We propose a method to extract vessels called MF-FDOG. MF-FDOG consists of two filters, a Matched Filter (MF) and the first-order derivative of Gaussian (FDOG). The vessel map is extracted by applying a threshold to the response of the MF, which is adaptively adjusted by the mean response of the FDOG. This method allows us to better distinguish vessel objects from non-vessel objects. Microaneurysm (MA) detection is accomplished with two proposed algorithms, Multi-scale Correlation Filtering (MSCF) and Dictionary Learning (DL) with a Sparse Representation Classifier (SRC). MSCF is hierarchical in nature, consisting of two levels: coarse-level microaneurysm candidate detection and fine-level true microaneurysm detection. In the first level, all possible microaneurysm candidates are found, while the second level extracts features from each candidate and compares them to a discrimination table for decision (MA or non-MA). In Dictionary Learning with Sparse Representation Classifier, MA and non-MA objects are extracted from images and used to learn two dictionaries, MA and non-MA. The Sparse Representation Classifier is then applied to each MA candidate object detected beforehand, using the two dictionaries to determine class membership. The detection result is further improved by adding a class discrimination term into the Dictionary Learning model. This approach is known as Centralized Dictionary Learning (CDL) with Sparse Representation Classifier. The optic disc (OD) is an important anatomical feature in retinal images, and its detection is vital for developing automated screening programs. Currently, there is no algorithm designed to automatically detect the OD in fundus images captured from Asians, whose optic discs are larger and have thicker vessels compared to those of Caucasians. We propose such a method to complement current algorithms, using two steps: OD vessel candidate detection and OD vessel candidate matching. The proposed extraction/detection approaches are tested in medical applications, specifically the case study of detecting diabetic retinopathy (DR). DR is a complication of diabetes that damages the retina and can lead to blindness. DR has four stages and is a leading cause of sight loss in industrialized nations. Using MF-FDOG, blood vessels were extracted from DR images, while DR images fed into MSCF and into Dictionary and Centralized Dictionary Learning with Sparse Representation Classifier produced good microaneurysm detection results. Using a new database consisting of only Asian DR patients, we successfully tested our OD detection method. As part of future work we intend to improve existing methods, for example by enhancing low-contrast microaneurysms and by better scale selection. In addition, we will extract other features from the retina, develop a generalized OD detection method, apply Dictionary Learning with Sparse Representation Classifier to vessel extraction, and use the new image database to carry out more experiments in medical applications.
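A simplified, single-orientation sketch of the MF-FDOG idea described above is given below: the image is convolved with a Gaussian matched filter (MF) and with its first-order derivative (FDOG), and the MF response is thresholded at a level adjusted by the local mean of the FDOG response. Kernel construction, parameter values and the threshold rule are simplified stand-ins; a full implementation applies the filters at many orientations and keeps the maximum response.

```python
# Simplified single-orientation sketch of the MF-FDOG idea: threshold the
# matched-filter response with a level raised where the local mean of the FDOG
# response is strong (suppressing step edges that are not vessels).
import numpy as np
from scipy import ndimage

def mf_fdog_kernels(sigma=1.5, length=9):
    """Matched-filter and FDOG kernels for one (vertical-vessel) orientation."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    mf_profile = -(g - g.mean())                   # zero-mean Gaussian cross-section
    fdog_profile = -x * g / sigma ** 2             # first-order derivative of Gaussian
    ones = np.ones((length, 1))
    return ones * mf_profile[None, :], ones * fdog_profile[None, :]

def mf_fdog_vessels(image, sigma=1.5, c=2.0, window=31):
    mf_k, fdog_k = mf_fdog_kernels(sigma)
    mf = ndimage.convolve(image.astype(float), mf_k)
    fdog = ndimage.convolve(image.astype(float), fdog_k)
    local_mean = ndimage.uniform_filter(np.abs(fdog), size=window)   # local FDOG level
    d_norm = local_mean / (local_mean.max() + 1e-9)                  # normalised to [0, 1]
    threshold = c * np.abs(mf).mean() * (1.0 + d_norm)               # raised where FDOG is strong
    return mf >= threshold                         # binary vessel map (single orientation)
```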
