  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

SIMULATION-BASED PERFORMANCE COMPARISONS OF GEOCAST ROUTING PROTOCOLS

Zhang, Hequn; Wang, Rui January 2014 (has links)
Intelligent Transportation Systems (ITS) are a major research domain for making road transport safer and more comfortable. To increase the benefits of ITS, Inter-Vehicle Communication (IVC) projects have been proposed to enable communication among vehicles, so that they can exchange traffic information and avoid accidents. To create a communication network among vehicles, or between vehicles and infrastructure, Vehicular Ad hoc Networks (VANETs) have been proposed. Many VANET applications need to send messages to vehicles within a specific geographic region; this behavior is called geocast, and the region is called the Zone of Relevance (ZOR). Several geocast routing protocols for VANETs have been proposed in the literature, so it is important to evaluate and compare their performance. In this thesis, categories of routing protocols as well as communication forwarding schemes are introduced, and the routing protocols in VANETs are summarized and compared. To evaluate the performance of these protocols, evaluation methods are proposed, and a geocast routing simulator is designed and used to simulate the geocast network environment and several geocast routing protocols.
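A minimal sketch of the geocast delivery criterion described above: a message is delivered only to vehicles whose position lies inside the ZOR, here assumed to be circular. All names and coordinates are hypothetical, not taken from the thesis.

```python
import math

def in_zor(position, center, radius):
    """True if a vehicle's position lies inside a circular Zone of Relevance."""
    return math.dist(position, center) <= radius

def geocast_targets(vehicles, center, radius):
    """Select the vehicle ids that should receive a geocast message."""
    return sorted(vid for vid, pos in vehicles.items() if in_zor(pos, center, radius))

vehicles = {"v1": (0.0, 0.0), "v2": (60.0, 80.0), "v3": (300.0, 0.0)}
print(geocast_targets(vehicles, center=(0.0, 0.0), radius=100.0))  # ['v1', 'v2']
```

A real geocast protocol additionally decides which intermediate vehicles forward the message toward the ZOR; this only captures the membership test.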
2

Region-based classification potential for land-cover classification with Very High spatial Resolution satellite data

Carleer, Alexandre A.P. 14 February 2006 (has links)
Abstract: Since 1999, Very High spatial Resolution satellite data (Ikonos-2, QuickBird and OrbView-3) have represented the surface of the Earth in more detail. However, information extraction by multispectral pixel-based classification has become more complex, owing to the increased internal variability of land-cover units and to the weakness of spectral resolution. One possibility is therefore to treat the internal spectral variability of land-cover classes as a valuable source of spatial information, used as an additional clue in characterizing and identifying land cover. Moreover, the spatial-resolution gap that existed between satellite images and aerial photographs has narrowed considerably, and the features used in visual interpretation, transposed to digital analysis (texture, morphology and context), can be used on top of spectral features for land-cover classification. The difficulty of this approach is often transposing the visual features to digital analysis. Region-based classification can overcome this problem. Segmentation, performed before classification, produces regions that are more homogeneous within themselves than with nearby regions and that represent discrete objects or areas in the image. Each region then becomes a unit of analysis, which avoids much of the structural clutter and makes it possible to measure and use a number of features on top of the spectral ones: surface, perimeter, compactness, and the degree and kind of texture. Segmentation is one of the few methods that guarantees the morphological features (surface, perimeter, ...) and the textural features are measured on a non-arbitrary neighbourhood. In pixel-based methods, texture is calculated with moving windows that smooth the boundaries between discrete land-cover regions and create between-class texture, which can cause an edge effect in the classification.
In this context, our research focuses on the potential of region-based land-cover classification of VHR satellite data, through the study of the object-extraction capacity of segmentation processes and of the relevance of region features for classifying land-cover classes in different kinds of Belgian landscapes, always keeping in mind the parallel with visual interpretation, which remains the reference. Firstly, the assessment of four segmentation algorithms belonging to the two main categories (contour- and region-based methods) shows that contour-detection methods are sensitive to local variability, which is precisely the problem we want to overcome; a pre-processing step such as filtering may be used, at the risk of losing part of the information. The region-growing segmentation, which uses the local variability in the segmentation process, appears to be the best compromise for segmenting different kinds of landscape. Secondly, the features computed thanks to segmentation prove relevant for identifying some land-cover classes in urban/sub-urban and rural areas. These relevant features are of the same type as the features selected visually, which shows that region-based classification comes close to visual interpretation. The research demonstrates the real usefulness of region-based classification for classifying land cover with VHR satellite data. Even in cases where the features computed from the segmentation prove useless, region-based classification has other advantages: working with regions instead of pixels avoids the salt-and-pepper effect and makes GIS integration easier. The research also highlights some problems that are independent of region-based classification and recurrent in VHR satellite data, such as shadows and the weakness of the spatial resolution for identifying some land-cover classes.
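The morphological region features the abstract names (surface, perimeter, compactness) can be sketched for a region given as a set of pixels. This is an illustrative toy, not the thesis's implementation; the 4-connectivity perimeter estimate and the compactness normalization are assumptions.

```python
import math

def region_features(pixels):
    """Surface (area), perimeter and compactness of a region given as a set of
    (row, col) pixels, using 4-connectivity for the perimeter estimate."""
    region = set(pixels)
    area = len(region)
    # Count pixel edges that face a non-region neighbour.
    perimeter = sum(
        1
        for (r, c) in region
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
        if nb not in region
    )
    # Compactness normalised so that a disc approaches 1.0.
    compactness = 4 * math.pi * area / perimeter ** 2
    return area, perimeter, compactness

# A 2x2 square: area 4, perimeter 8, compactness pi/4.
print(region_features([(0, 0), (0, 1), (1, 0), (1, 1)]))
```

In a real region-based classifier these values would be computed per segment and appended to the spectral feature vector before classification.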
3

Multi-Manifold learning and Voronoi region-based segmentation with an application in hand gesture recognition

Hettiarachchi, Randima 12 1900 (has links)
A computer vision system consists of many stages, depending on its application. Feature extraction and segmentation are two key stages of a typical computer vision system and hence developments in feature extraction and segmentation are significant in improving the overall performance of a computer vision system. There are many inherent problems associated with feature extraction and segmentation processes of a computer vision system. In this thesis, I propose novel solutions to some of these problems in feature extraction and segmentation. First, I explore manifold learning, which is a non-linear dimensionality reduction technique for feature extraction in high dimensional data. The classical manifold learning techniques perform dimensionality reduction assuming that original data lie on a single low dimensional manifold. However, in reality, data sets often consist of data belonging to multiple classes, which lie on their own manifolds. Thus, I propose a multi-manifold learning technique to simultaneously learn multiple manifolds present in a data set, which cannot be achieved through classical single manifold learning techniques. Secondly, in image segmentation, when the number of segments of the image is not known, automatically determining the number of segments becomes a challenging problem. In this thesis, I propose an adaptive unsupervised image segmentation technique based on spatial and feature space Dirichlet tessellation as a solution to this problem. Skin segmentation is an important as well as a challenging problem in computer vision applications. Thus, thirdly, I propose a novel skin segmentation technique by combining the multi-manifold learning-based feature extraction and Voronoï region-based image segmentation.
Finally, I explore hand gesture recognition, which is a prevalent topic in intelligent human-computer interaction, and demonstrate that the proposed improvements in the feature extraction and segmentation stages improve the overall recognition rates of the proposed hand gesture recognition framework. I use the proposed skin segmentation technique to segment the hand (the object of interest in hand gesture recognition), and manifold learning for feature extraction to automatically extract the salient features. Furthermore, in this thesis, I show that different instances of the same dynamic hand gesture have similar underlying manifolds, which allows manifold-matching based hand gesture recognition. / February 2017
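The core of a Voronoi (Dirichlet) tessellation is the nearest-seed assignment: every point is labelled with the index of its closest seed, and the induced partition delimits the regions. A minimal sketch under that assumption (the thesis's actual spatial/feature-space construction is richer):

```python
import math

def voronoi_labels(points, seeds):
    """Assign each point the index of its nearest seed; the induced partition
    is the Voronoi (Dirichlet) tessellation used to delimit regions."""
    return [
        min(range(len(seeds)), key=lambda i: math.dist(p, seeds[i]))
        for p in points
    ]

seeds = [(0.0, 0.0), (10.0, 0.0)]
points = [(1.0, 1.0), (9.0, -1.0), (4.0, 0.0)]
print(voronoi_labels(points, seeds))  # [0, 1, 0]
```

The same assignment rule works unchanged in a feature space: replace the 2-D coordinates with feature vectors of any dimension.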
4

Fusion of images from dissimilar sensor systems

Chow, Khin Choong 12 1900 (has links)
Approved for public release; distribution is unlimited. / Different sensors exploit different regions of the electromagnetic spectrum; therefore, a multi-sensor image fusion system can take full advantage of the complementary capabilities of the individual sensors in the suite to produce information that cannot be obtained by viewing the images separately. In this thesis, a framework for the multiresolution fusion of night-vision device and thermal infrared imagery is presented. It encompasses a wavelet-based approach that supports both pixel-level and region-based fusion, and aims to maximize scene content by incorporating spectral information from both source images. In pixel-level fusion, source images are decomposed into different scales, and salient directional features are extracted and selectively fused together by comparing the corresponding wavelet coefficients. To increase the degree of subject relevance in the fusion process, a region-based approach is proposed which uses a multiresolution segmentation algorithm to partition the image domain at different scales. The regions' characteristics are then determined and used to guide the fusion process. The experimental results obtained demonstrate the feasibility of the approach. Potential applications of this development include improvements in night piloting (navigation and target discrimination), law enforcement, etc. / Civilian, Republic of Singapore
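A common selection rule for the pixel-level step described above is "choose max": at each position, keep the wavelet detail coefficient with the larger magnitude, on the assumption that larger coefficients mark more salient features. A hedged one-dimensional sketch (the thesis works on 2-D subbands):

```python
def fuse_choose_max(coeffs_a, coeffs_b):
    """Pixel-level fusion rule: at each position keep the wavelet coefficient
    with the larger magnitude, i.e. the more salient directional feature."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(coeffs_a, coeffs_b)]

# Detail coefficients from two source images (hypothetical values).
print(fuse_choose_max([0.9, -0.1, 0.3], [-0.2, 0.8, 0.1]))  # [0.9, 0.8, 0.3]
```

Region-based fusion replaces this per-coefficient decision with a per-region decision guided by the segmentation.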
5

Region-based Crossover for Clustering Problems

Dsouza, Jeevan 01 January 2012 (has links)
Data clustering, which partitions data points into clusters, has many useful applications in economics, science and engineering. Data clustering algorithms can be partitional or hierarchical. The k-means algorithm is the most widely used partitional clustering algorithm because of its simplicity and efficiency. One problem with the k-means algorithm is that the quality of partitions produced is highly dependent on the initial selection of centers. This problem has been tackled using genetic algorithms (GA) where a set of centers is encoded into an individual of a population and solutions are generated using evolutionary operators such as crossover, mutation and selection. Of the many GA methods, the region-based genetic algorithm (RBGA) has proven to be an effective technique when the centroid was used as the representative object of a cluster (ROC) and the Euclidean distance was used as the distance metric. The RBGA uses a region-based crossover operator that exchanges subsets of centers that belong to a region of space rather than exchanging random centers. The rationale is that subsets of centers that occupy a given region of space tend to serve as building blocks. Exchanging such centers preserves and propagates high-quality partial solutions. This research aims at assessing the RBGA with a variety of ROCs and distance metrics. The RBGA was tested along with other GA methods, on four benchmark datasets using four distance metrics, varied number of centers, and centroids and medoids as ROCs. The results obtained showed the superior performance of the RBGA across all datasets and sets of parameters, indicating that region-based crossover may prove an effective strategy across a broad range of clustering problems.
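The region-based crossover idea can be sketched directly: instead of swapping random centers, the children exchange the subsets of centers lying on one side of a cutting hyperplane, so spatially coherent building blocks are preserved. The axis-aligned cut and all values below are illustrative assumptions, not the RBGA's exact operator.

```python
def region_crossover(parent_a, parent_b, cut, axis=0):
    """Region-based crossover sketch: exchange the cluster centers lying in
    the half-space coordinate[axis] < cut between the two parents."""
    child_a = [c for c in parent_a if c[axis] >= cut] + \
              [c for c in parent_b if c[axis] < cut]
    child_b = [c for c in parent_b if c[axis] >= cut] + \
              [c for c in parent_a if c[axis] < cut]
    return child_a, child_b

pa = [(1.0, 1.0), (8.0, 2.0)]
pb = [(2.0, 5.0), (9.0, 6.0)]
print(region_crossover(pa, pb, cut=5.0))
```

In a full GA this operator would be applied to center sets encoded in individuals, followed by mutation, selection, and a k-means-style refinement step.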
6

STATISTICAL ANALYSES TO DETECT AND REFINE GENETIC ASSOCIATIONS WITH NEURODEGENERATIVE DISEASES

Katsumata, Yuriko 01 January 2017 (has links)
Dementia is a clinical state caused by neurodegeneration and characterized by a loss of function in cognitive domains and behavior. Alzheimer’s disease (AD) is the most common form of dementia. Although amyloid β (Aβ) protein and hyperphosphorylated tau aggregates in the brain are considered the key pathological hallmarks of AD, the exact cause of AD is yet to be identified. In addition, clinical diagnoses of AD can be error prone. Many previous studies have compared the clinical diagnosis of AD against the gold standard of autopsy confirmation and shown substantial AD misdiagnosis. Hippocampal sclerosis of aging (HS-Aging) is one type of dementia that is often clinically misdiagnosed as AD. AD and HS-Aging are controlled by different genetic architectures. Familial AD, which often occurs early in life, is linked mainly to mutations in three genes: APP, PSEN1, and PSEN2. Late-onset AD (LOAD) is strongly associated with the ε4 allele of the apolipoprotein E (APOE) gene. In addition to the APOE gene, genome-wide association studies (GWAS) have identified several single nucleotide polymorphisms (SNPs) in or close to genes associated with LOAD. On the other hand, GRN, TMEM106B, ABCC9, and KCNMB2 have been reported to harbor risk alleles associated with HS-Aging pathology. Although GWAS have succeeded in revealing numerous susceptibility variants for dementias, it is an ongoing challenge to identify functional loci and to understand how they contribute to dementia pathogenesis. Until recently, rare variants were not investigated comprehensively. GWAS rely on genotype imputation, which is not reliable for rare variants; therefore, imputed rare variants are typically removed from GWAS analysis. Recent advances in sequencing technologies enable accurate genotyping of rare variants, thus potentially improving our understanding of the role of rare variants in disease. There are significant computational and statistical challenges for these sequencing studies.
Traditional single-variant association tests are underpowered to detect rare variant associations. Instead, more powerful and computationally efficient approaches for aggregating the effects of rare variants have become a standard approach for association testing. The sequence-kernel association test (SKAT) is one of the most powerful rare variant analysis methods. A recently proposed scan-statistic-based test is another approach to detect the location of rare variant clusters influencing disease. In the first study, we examined the gene-based associations of the four putative risk genes, GRN, TMEM106B, ABCC9, and KCNMB2, with HS-Aging pathology. We analyzed haplotype associations of a targeted ABCC9 region with HS-Aging pathology and with ABCC9 gene expression. In the second study, we elucidated the role of the non-coding SNPs identified in the International Genomics of Alzheimer’s Project (IGAP) consortium GWAS within a systems genetics framework to understand the flow of biological information underlying AD. In the last study, we identified genetic regions which contain rare variants associated with AD using a scan-statistic-based approach.
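The aggregation idea behind rare-variant tests can be illustrated with the simplest member of the family, a burden-style collapse: rare variants (minor allele frequency below a threshold) in a gene are summed into one score per subject, which is then tested against the phenotype. This is a simplified stand-in; SKAT itself uses a weighted variance-component kernel rather than a plain sum.

```python
def burden_scores(genotypes, mafs, threshold=0.01):
    """Collapse rare variants (MAF < threshold) into one burden score per
    subject by summing their minor-allele counts (0/1/2 per variant)."""
    rare = [j for j, maf in enumerate(mafs) if maf < threshold]
    return [sum(subject[j] for j in rare) for subject in genotypes]

# 3 subjects x 4 variants (hypothetical counts); variants 1 and 3 are rare.
genotypes = [[0, 1, 2, 0], [1, 0, 0, 1], [2, 2, 1, 0]]
mafs = [0.30, 0.005, 0.20, 0.002]
print(burden_scores(genotypes, mafs))  # [1, 1, 2]
```

The resulting score vector would typically enter a regression against disease status with covariates, which is where the actual association test happens.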
7

Automatic Stability Checking for Large Analog Circuits

Mukherjee, Parijat 1985- 14 March 2013 (has links)
Small-signal stability has always been an important concern for analog designers. Recent advances such as the Loop Finder algorithm allow designers to detect and identify local, potentially unstable return loops without the need to identify and add breakpoints. However, this method suffers from extremely high time and memory complexity and thus cannot be scaled to very large analog circuits. In this research work, we first take an in-depth look at the Loop Finder algorithm so as to identify certain key enhancements that can be made to overcome these shortcomings. We next propose pole discovery and impedance computation methods that address these shortcomings by exploring only a certain region of interest in the s-plane. The reduced time and memory complexity obtained via the new methodology allows us to extend automatic stability checking to much larger circuits than was previously possible.
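The "region of interest in the s-plane" idea can be sketched as a filter over candidate poles: only poles inside a rectangular window (e.g. a right-half-plane band where unstable loops would show up) are examined further. The window bounds and pole values below are hypothetical, not the thesis's method.

```python
def poles_in_region(poles, re_min, re_max, im_max):
    """Keep only the poles lying in a rectangular region of interest of the
    s-plane; restricting pole discovery to such a region is what keeps the
    search tractable for large circuits."""
    return [p for p in poles
            if re_min <= p.real <= re_max and abs(p.imag) <= im_max]

def is_potentially_unstable(poles):
    """Any pole in the right half-plane flags a potentially unstable loop."""
    return any(p.real > 0 for p in poles)

poles = [complex(-2.0, 1.0), complex(0.5, 3.0), complex(-0.1, 50.0)]
roi = poles_in_region(poles, re_min=0.0, re_max=1.0, im_max=10.0)
print(roi, is_potentially_unstable(roi))  # [(0.5+3j)] True
```

The hard part the thesis addresses is discovering the poles of a very large circuit in the first place; this sketch only shows the classification step once candidates are known.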
8

Design, Analysis and Resource Allocations in Networks In Presence of Region-Based Faults

January 2013 (has links)
Abstract: Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network-analysis and fault-tolerance metrics, valid under non-spatially-correlated faults, are no longer applicable. To this effect, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient design of networks. It has been shown in this dissertation that, in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network.
Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults, in distributed file storage networks and data center networks. / Dissertation/Thesis / Ph.D. Computer Science 2013
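The RBLCS metric named above can be sketched directly: remove every node inside a circular fault region, then measure the largest surviving connected component. The circular fault shape, coordinates, and graph below are illustrative assumptions.

```python
import math

def rblcs(nodes, edges, fault_center, fault_radius):
    """Region-Based Largest Component Size sketch: remove every node inside a
    circular fault region and return the size of the largest surviving
    connected component."""
    alive = {n for n, pos in nodes.items()
             if math.dist(pos, fault_center) > fault_radius}
    adj = {n: set() for n in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in alive:                 # depth-first search per component
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            size += 1
            stack.extend(adj[n] - seen)
        best = max(best, size)
    return best

nodes = {"a": (0, 0), "b": (1, 0), "c": (5, 0), "d": (6, 0), "e": (7, 0)}
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
print(rblcs(nodes, edges, fault_center=(1, 0), fault_radius=1.5))  # 3
```

Sweeping the fault center over all candidate positions and taking the worst case gives the region-based analogue of the classical worst-case connectivity analysis.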
9

AIMM - Analyse d'Images nucléaires dans un contexte Multimodal et Multitemporel / IAMM - nuclear Imaging Analysis in a Multimodal and Multitemporal context

Alvarez padilla, Francisco Javier 13 September 2019 (has links)
This work focuses on the proposition of cancerous tumor segmentation strategies in a multimodal and multitemporal context. The multimodal scope refers to coupling PET/CT data in order to jointly exploit both information sources and improve segmentation performance. The multitemporal scope refers to the use of images acquired at different dates, which limits a possible spatial correspondence between them. In a first method, a tree is used to process and extract information dedicated to feeding a random-walker segmentation. A set of region-based attributes is used to characterize the tree nodes, filter the tree, and then project data into the image space to build a vectorial image. A random walker guided by the vectorial tree data on the image lattice is used to label voxels for segmentation. The second method is geared toward the multitemporality problem by replacing the voxel-to-voxel paradigm with a node-to-node one. A tree structure is applied to model two hierarchical graphs, from PET and contrast-enhanced CT respectively, and attribute distances between their nodes are compared in order to match those assumed similar while discarding the others. In a third method, an extension of the first one, the tree is directly involved as the data structure on which the algorithm runs. A tree structure is built on the PET image, and CT data are then projected onto the tree as contextual information. A node-stability algorithm is applied to detect and prune unstable nodes. PET-based seeds are projected into the tree to assign seed labels (tumor and background) to the corresponding nodes and propagate them through the hierarchy. The uncertain nodes, with region-based attributes as descriptors, are involved in a vectorial random-walker method to complete the tree labeling and build the segmentation.
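The seed-label propagation step can be illustrated on a toy hierarchy: each unlabelled node inherits the label of its closest labelled ancestor. This is a deliberately simplified sketch; the thesis propagates labels within a component tree built on the PET image, with a random walker resolving the uncertain nodes.

```python
def propagate_labels(parent, seeds):
    """Propagate seed labels (e.g. 'tumor'/'background') through a hierarchy:
    each unlabelled node inherits the label of its closest labelled ancestor;
    nodes with no labelled ancestor stay unlabelled."""
    labels = dict(seeds)

    def label_of(node):
        if node in labels:
            return labels[node]
        if parent[node] is None:
            return None
        labels[node] = label_of(parent[node])
        return labels[node]

    for node in parent:
        label_of(node)
    return labels

# root -> a -> b and root -> c; seed the root and node 'a' (hypothetical tree).
parent = {"root": None, "a": "root", "b": "a", "c": "root"}
print(propagate_labels(parent, {"root": "background", "a": "tumor"}))
```

In the full method the nodes left uncertain by this propagation are the ones handed to the vectorial random walker.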
10

Region-based approximation to solve inference in loopy factor graphs : decoding LDPC codes by the Generalized Belief Propagation

Sibel, Jean-Christophe 07 June 2013 (has links) (PDF)
This thesis addresses the problem of inference in factor graphs, especially for LDPC codes, which is commonly tackled with message-passing algorithms. In particular, the Belief Propagation (BP) algorithm is investigated as a message-passing algorithm whose suboptimality is discussed in the case where the factor graph has loops. Starting from the equivalence between BP and the Bethe approximation in statistical physics, which is generalized to the region-based approximation, the Generalized Belief Propagation (GBP) algorithm is detailed as a message-passing algorithm between clusters of the factor graph. It is experimentally shown to surpass BP when the clustering deals with the harmful topological structures, namely the trapping sets, that prevent BP from correctly decoding an LDPC code. We compare the BP and GBP algorithms not only on their channel-coding performance through the error rate, but also on their dynamical behavior for non-trivial error events, for which both algorithms can exhibit chaotic behavior. By means of classical and original dynamical quantifiers, it is shown that the GBP algorithm can overcome the BP algorithm.
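The elementary operation of sum-product message passing can be shown at a single binary variable node: its belief is the normalized elementwise product of the messages arriving from neighbouring factors. A minimal sketch with hypothetical message values (GBP applies the analogous rule to clusters of nodes rather than single variables):

```python
def variable_belief(messages):
    """Sum-product rule at a binary variable node: the belief is the
    normalized elementwise product of the incoming factor messages
    (each message is a pair of nonnegative reals)."""
    belief = [1.0, 1.0]
    for m in messages:
        belief = [b * x for b, x in zip(belief, m)]
    z = sum(belief)
    return [b / z for b in belief]

# Two factors both favouring state 0: the belief concentrates on state 0.
print(variable_belief([[0.9, 0.1], [0.8, 0.2]]))
```

On a loop-free graph, iterating this rule together with the factor-to-variable updates yields exact marginals; on loopy LDPC graphs it is only approximate, which is exactly the gap GBP's region-based clustering tries to close.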
