About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cytotoxicita vybraných naftochinonů na prostatických buněčných liniích / Cytotoxicity of selected naphthoquinones on prostatic cell cultures

Mondeková, Věra January 2013 (has links)
This master's thesis discusses the cytotoxicity of selected naphthoquinones on prostatic cell cultures. The introductory part is dedicated to the general characteristics of naphthoquinones, with a focus on their cytotoxicity, cytotoxicity testing and the mechanisms of cytotoxicity. This part is followed by chapters on the cytotoxicity, characteristics and biological activities of the selected naphthoquinones, plumbagin and naphthazarin. The last part of the theoretical section covers fluorescence microscopy and its use in research on naphthoquinone cytotoxicity. The practical part is dedicated to the evaluation of the cytotoxicity test results and to the analysis of cell images obtained with a fluorescence microscope. At the end of the thesis, all findings are summarized and put into context.
2

Mobile high-throughput phenotyping using watershed segmentation algorithm

Dammannagari Gangadhara, Shravan January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / This research is part of BREAD PHENO, a PhenoApps BREAD project at K-State that combines contemporary advances in image processing and machine vision to deliver transformative mobile applications through established breeder networks. In this platform, novel image analysis segmentation algorithms are being developed to model and extract plant phenotypes. As part of this research, the traditional watershed segmentation algorithm has been extended, with the primary goal of accurately counting and characterizing the seeds in an image. The new approach can be used to characterize a wide variety of crops. Further, the algorithm has been migrated to Android using the Android APIs, and the first user-friendly Android application implementing the extended watershed algorithm has been developed for mobile field-based high-throughput phenotyping (HTP).
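
As an illustration of the marker-based watershed seed counting described above (and not the thesis's Android implementation), the sketch below separates and counts touching seeds in a grayscale image; the Otsu threshold, the minimum peak distance and the assumption of bright seeds on a dark background are illustrative choices.

    # Hedged sketch of watershed-based seed counting; parameters are
    # illustrative assumptions, not the thesis's values.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def count_seeds(gray_image):
        """Count seed-like objects, splitting touching seeds with a watershed."""
        # Foreground mask (assumes seeds brighter than the background)
        mask = gray_image > threshold_otsu(gray_image)
        # Peaks of the distance transform give one marker per seed
        distance = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(distance, min_distance=5, labels=mask)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        # Flooding the inverted distance map splits touching seeds at their necks
        labels = watershed(-distance, markers, mask=mask)
        return int(labels.max()), labels

Each seed region receives its own label, so the label count is the seed count; on a mobile device the same steps would be expressed through an image-processing library with Android bindings rather than scikit-image.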
3

New data structure and process model for automated watershed delineation

Mudgal, Naveen 19 April 2005
DEM analysis to delineate the stream network and its associated subwatersheds comprises the primary steps in the raster-based parameterization of watersheds. There are two widely used methods for delineating subwatersheds. One is the Upstream Catchment Area (UCA) method, which employs a user-specified threshold value of upstream catchment area to delineate subwatersheds from the extracted stream network. The other common technique is the nodal method, in which subwatersheds are initiated at stream network nodes, where nodes occur at the upstream starting points of streams and at the points where streams in the network intersect. Neither the UCA approach nor the nodal approach permits watershed initiation at points of specific interest, and both fail to explicitly recognize drainage features other than single channel reaches; that is, they exclude water bodies, wetlands, braided channels and other important hydrologic features. TOPAZ (TOpographic PArameteriZation) [Garbrecht and Martz, 1992] is a typical program for raster-based, automated drainage analysis. It initiates subwatersheds at source points and at points of intersection of drainage channels, and it treats lakes as spurious depressions arising from errors in the DEM and removes them. This research addresses an important limitation of the currently used modeling techniques and tools: it adds the capability to initiate watershed delineation at points of specific interest other than junction and source points in the networks delineated from Digital Elevation Models (DEMs). The research project evaluates qualitative and quantitative aspects of a new object-oriented data structure and process model for raster-format data analysis in spatial hydrology. The concept of incorporating a user-specified analysis in the extraction and parameterization of watersheds is based on the need for a tool that allows studies specific to certain points in the stream network, including gauging stations, and on the need for an improved delineation of hydrologic features (water bodies) in hydrologic modeling. The project developed an interface module for TOPAZ [Garbrecht and Martz, 1992] to offset the aforementioned disadvantages of the subwatershed delineation techniques, along with an internal hybrid, raster-based, object-oriented data structure and processing model similar to that of a vector data type. The new internal data structure permits further augmentation of the software tool, and together with its algorithms it provides an improved framework for discretization of important hydrologic entities (water bodies) and the capability of extracting homogeneous hydrological subwatersheds.
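
For readers unfamiliar with the UCA method summarized above, the sketch below shows the idea on a raster DEM: D8 flow directions, flow accumulation, and a user-specified upstream-catchment-area threshold that extracts the stream network. It is a simplified illustration under stated assumptions (no depression filling or flat-area handling) and is neither TOPAZ nor the thesis's interface module.

    # Simplified UCA-style stream extraction on a synthetic DEM.
    import numpy as np

    D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    def d8_flow_directions(dem):
        """Index into D8 of each cell's steepest downslope neighbour (-1 = pit/edge)."""
        rows, cols = dem.shape
        flow = np.full((rows, cols), -1, dtype=int)
        for r in range(rows):
            for c in range(cols):
                best_drop, best_k = 0.0, -1
                for k, (dr, dc) in enumerate(D8):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > best_drop:
                            best_drop, best_k = drop, k
                flow[r, c] = best_k
        return flow

    def flow_accumulation(dem, flow):
        """Number of cells (including itself) draining through each cell."""
        rows, cols = dem.shape
        acc = np.ones((rows, cols), dtype=int)
        # Water only flows strictly downhill here, so processing cells from
        # highest to lowest finalizes each cell before it donates downstream.
        for idx in np.argsort(dem, axis=None)[::-1]:
            r, c = divmod(idx, cols)
            if flow[r, c] >= 0:
                dr, dc = D8[flow[r, c]]
                acc[r + dr, c + dc] += acc[r, c]
        return acc

    dem = np.random.rand(50, 50) + np.linspace(0, 5, 50)  # synthetic tilted surface
    acc = flow_accumulation(dem, d8_flow_directions(dem))
    uca_threshold = 30                    # user-specified UCA threshold (cells)
    streams = acc >= uca_threshold        # extracted stream network mask

Lowering the threshold yields a denser stream network and therefore more, smaller subwatersheds, which is exactly the user-specified trade-off the UCA method exposes.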
4

Digital Image Analysis of Cells : Applications in 2D, 3D and Time

Pinidiyaarachchi, Amalka January 2009 (has links)
Light microscopes are essential research tools in biology and medicine. Cell and tissue staining methods have improved immensely over the years, and microscopes are now equipped with digital image acquisition capabilities. The image data produced require the development of specialized analysis methods. This thesis presents digital image analysis methods for cell image data in 2D, 3D and time sequences. Stem cells have the capability to differentiate into specific cell types. The mechanism behind differentiation can be studied by tracking cells over time. This thesis presents a combined segmentation and tracking algorithm for time-sequence images of neural stem cells. The method handles splitting and merging of cells, and the results are similar to those achieved by manual tracking. Methods for detecting and localizing signals from fluorescence-stained biomolecules are essential when studying how they function and interact. A study of Smad proteins, which serve as transcription factors by forming complexes that enter the cell nucleus, is included in the thesis. Confocal microscopy images of cell nuclei are delineated using gradient information, and Smad complexes are localized using a novel method for 3D signal detection. Thus, the localization of Smad complexes in relation to the nuclear membrane can be analyzed. A detailed comparison between the proposed and previous methods for detection of point-source signals is presented, showing that the proposed method has better resolving power and is more robust to noise. The thesis also shows how cell confluence can be measured by classification of wavelet-based texture features. Monitoring cell confluence is valuable for optimization of cell culture parameters and cell harvest. The results obtained agree with visual observations and provide an efficient approach to monitoring cell confluence and detecting necrosis. Quantitative measurements on cells are important in both cytology and histology. The color provided by Pap (Papanicolaou) staining increases the available image information. The thesis explores different color spaces of Pap smear images from thyroid nodules, with the aim of finding the representation that maximizes detection of malignancies using color information in addition to quantitative morphological parameters. The presented methods provide useful tools for cell image analysis, but they can of course also be used for other image analysis applications.
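
As a rough illustration of the wavelet-based texture features mentioned above for confluence monitoring, the sketch below computes sub-band energies of an image patch; the wavelet, decomposition level and energy measure are assumptions for illustration, and the classifier that would consume these features is omitted.

    # Hedged sketch of wavelet texture features for a grayscale patch.
    import numpy as np
    import pywt

    def wavelet_energy_features(gray_patch, wavelet='db2', level=2):
        """Mean energy of each detail sub-band; more texture gives higher energy."""
        coeffs = pywt.wavedec2(gray_patch, wavelet=wavelet, level=level)
        features = []
        for detail_level in coeffs[1:]:      # skip the approximation band
            for band in detail_level:        # horizontal, vertical, diagonal details
                features.append(float(np.mean(band ** 2)))
        return np.array(features)            # feed these into any classifier

    patch = np.random.rand(64, 64)           # toy patch; level=2 gives 6 features
    print(wavelet_energy_features(patch).shape)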
5

Automatic Multi-scale Segmentation Of High Spatial Resolution Satellite Images Using Watersheds

Sahin, Kerem 01 January 2013 (has links) (PDF)
Extracting useful information from satellite images for higher-level applications such as road network extraction and update or city planning is a very important and active research area. Pixel-based techniques become insufficient for this task as the spatial resolution of satellite imaging sensors keeps increasing, so the use of object-based techniques becomes indispensable, and the choice of segmentation method is crucial for them. In this thesis, various segmentation algorithms applied in the remote sensing literature are presented, and a segmentation process based on watersheds and multi-scale segmentation is proposed for use as the segmentation step of an object-based classifier. For every step of the proposed segmentation process, qualitative and quantitative comparisons with alternative approaches are made, and the best-performing options are incorporated into the proposed algorithm. An unsupervised segmentation accuracy metric is also proposed to determine all parameters of the algorithm, making the proposed segmentation algorithm fully automatic. Experiments on a database of images taken from the Google Earth® software provide promising results.
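
A minimal sketch of the watershed-with-markers idea that such multi-scale segmentation schemes build on is given below; the Sobel gradient, the h-minima marker selection and the h values are illustrative assumptions rather than the algorithm or parameters proposed in the thesis.

    # Hedged sketch: watershed on the gradient at several marker scales.
    from skimage import color, filters, morphology, segmentation, measure

    def multiscale_watershed(rgb_image, h_values=(0.02, 0.05, 0.10)):
        """One label image per scale h (larger h suppresses more minima, giving coarser regions)."""
        gradient = filters.sobel(color.rgb2gray(rgb_image))
        results = []
        for h in h_values:
            # Keep only gradient minima deeper than h, then use them as flooding markers
            markers = measure.label(morphology.h_minima(gradient, h))
            results.append(segmentation.watershed(gradient, markers))
        return results

Sweeping h produces a family of segmentations from fine to coarse, which an object-based classifier can then draw on.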
6

Fusion d'informations par la théorie de l'évidence pour la segmentation d'images / Information fusion using theory of evidence for image segmentation

Chahine, Chaza 31 October 2016 (has links)
Information fusion has been widely studied in the field of artificial intelligence. Information is generally considered imperfect; therefore, the combination of several (possibly heterogeneous) sources of information can lead to more comprehensive and complete information. In the field of fusion, probabilistic approaches are generally distinguished from non-probabilistic ones, which include the theory of evidence, developed in the 1970s. This method represents both the uncertainty and the imprecision of the information by assigning masses not to a single hypothesis (the most common case for probabilistic methods) but to sets of hypotheses. The work presented in this thesis concerns the fusion of information for image segmentation. To develop this method we start with the watershed algorithm, one of the most widely used methods for edge detection. Intuitively, the principle of the watershed is to consider the image as a topographic relief where the height of each point corresponds to its grey level. Assuming that the local minima are pierced with holes and the landscape is immersed in a lake, the water filling up from these minima generates the catchment basins, whereas the watershed lines are the dams built to prevent waters coming from different basins from mixing. The watershed is usually applied to the gradient magnitude, and a region is associated with each minimum. Therefore, the fluctuations in the gradient image and the large number of local minima generate a large set of small regions, yielding an over-segmented result that can hardly be useful. Meyer and Beucher proposed the seeded, or marker-controlled, watershed to overcome this over-segmentation problem. The essential idea is to specify a set of markers (or seeds) to be considered as the only minima to be flooded. The number of detected objects is then equal to the number of seeds, and the result depends on the markers. The automatic extraction of markers from images does not always lead to a satisfying result, especially for complex images, and several methods have been proposed for determining these markers automatically. We are particularly interested in the stochastic approach of Angulo and Jeulin, who estimate a probability density function (pdf) of contours after M simulations of the conventional watershed segmentation, with N markers randomly selected for each simulation. A high pdf value is therefore assigned to strong contour points that are detected frequently through the process, but the decision that a point belongs to the "contour class" remains dependent on a threshold value, so a single result cannot be obtained. To increase the robustness of this method and the uniqueness of its response, we propose to combine information using the theory of evidence. The watershed is generally computed on the gradient image, a first-order derivative, which gives comprehensive information on the contours in the image, while the Hessian matrix, the matrix of second-order derivatives, gives more local information on the contours. Our goal is to combine these two complementary sources of information using the theory of evidence. The different versions of the fusion are tested on real images from the Berkeley database, and the results are compared with the five manual segmentations provided as ground truth with this database. The quality of the segmentations obtained by our methods is assessed with different measures: uniformity, precision, recall, specificity, sensitivity and the Hausdorff metric distance.
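
To make the evidence-theory step concrete, the sketch below applies Dempster's rule of combination to two mass functions on the frame {contour, background}, standing in for the gradient-based and Hessian-based sources; how the masses would be derived from the image responses is an assumption left out here and is not the thesis's mass assignment.

    # Hedged sketch of Dempster's rule for two contour-evidence sources.
    def dempster_combine(m1, m2):
        """Combine two mass functions given as dicts over frozensets of {'c', 'b'}."""
        theta = frozenset({'c', 'b'})
        combined = {frozenset({'c'}): 0.0, frozenset({'b'}): 0.0, theta: 0.0}
        conflict = 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] += ma * mb   # mass assigned to the intersection
                else:
                    conflict += ma * mb          # conflicting mass renormalized away
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Example masses: confident gradient evidence, vaguer Hessian evidence
    m_gradient = {frozenset({'c'}): 0.6, frozenset({'b'}): 0.1, frozenset({'c', 'b'}): 0.3}
    m_hessian  = {frozenset({'c'}): 0.4, frozenset({'b'}): 0.2, frozenset({'c', 'b'}): 0.4}
    fused = dempster_combine(m_gradient, m_hessian)
    print(fused[frozenset({'c'})])               # mass committed exactly to 'contour'

Because mass can be assigned to the whole frame {'c', 'b'}, each source can express ignorance explicitly, which is what distinguishes the evidential combination from a purely probabilistic one.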
7

Segmentace obrazu jako výškové mapy / Image Segmentation Using Height Maps

Moučka, Milan January 2011 (has links)
This thesis deals with image segmentation of volumetric medical data. It describes the well-known watershed technique, which has received much attention in the field of medical image processing. An application for direct segmentation of 3D data is proposed and implemented using the ITK and VTK toolkits. Several kinds of pre-processing steps applied before the watershed method are presented and evaluated. The results obtained are compared against manually annotated datasets by means of the F-Measure and discussed.
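
As a small illustration of the F-Measure comparison mentioned above (and not the thesis's ITK/VTK pipeline), the sketch below scores a binary 3D watershed result voxel-wise against a manually annotated mask.

    # Hedged sketch of a voxel-wise F-measure between two boolean 3D masks.
    import numpy as np

    def f_measure(segmentation, reference, beta=1.0):
        seg = segmentation.astype(bool)
        ref = reference.astype(bool)
        tp = np.logical_and(seg, ref).sum()
        precision = tp / seg.sum() if seg.sum() else 0.0
        recall = tp / ref.sum() if ref.sum() else 0.0
        if precision + recall == 0:
            return 0.0
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Toy example with two partially overlapping cubes
    seg = np.zeros((10, 10, 10), dtype=bool); seg[2:6, 2:6, 2:6] = True
    ref = np.zeros((10, 10, 10), dtype=bool); ref[3:7, 3:7, 3:7] = True
    print(round(f_measure(seg, ref), 3))         # ~0.422 for this toy overlap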
