81

Detecção de pele humana utilizando modelos estocásticos multi-escala de textura / Skin detection for hand gesture segmentation via multi-scale stochastic texture models

Medeiros, Rafael Sachett January 2013
Gesture detection is an important task in human-computer interaction applications. If the user's hand is precisely detected, both analysis and recognition of the hand gesture become simpler and more reliable. This work describes a new method for human skin detection, used as a pre-processing stage for hand gesture segmentation in recognition systems. First, we train models of skin color and texture (the material to be identified) from a training set of skin images: a Gaussian mixture model (GMM) for skin color tones and a dictionary of textons for skin texture. Then, we introduce a stochastic region-merging strategy to determine all segments of different materials present in the image, each associated with a texture. Once the texture regions are obtained, each segment is classified based on the skin color (GMM) and skin texture (texton dictionary) models. To verify the performance of the developed algorithm, we perform experiments on the SDC database, designed specifically for this kind of evaluation (human skin detection). Compared with other state-of-the-art skin segmentation techniques, the results of our experiments show that the proposed approach is robust to color and illumination variations arising from different skin tones (the user's ethnicity), as well as to changes of hand pose, while retaining its ability to discriminate human skin from other highly textured background materials.
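The abstract names two trained models, a color GMM and a texton dictionary, applied per merged region. The following is a minimal sketch of that classification step using scikit-learn as a stand-in; the cluster counts, thresholds, and features are illustrative assumptions, not the thesis code.

```python
# Illustrative sketch (not the thesis code): GMM skin-color model plus a
# k-means texton dictionary; thresholds and feature choices are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Train the color model on N x 3 skin pixels (stand-in random data here).
skin_pixels = np.random.rand(5000, 3)
color_model = GaussianMixture(n_components=4).fit(skin_pixels)

# Train the texton dictionary on filter-bank responses of skin patches.
skin_responses = np.random.rand(5000, 8)
textons = KMeans(n_clusters=32, n_init=10).fit(skin_responses)

def segment_is_skin(seg_pixels, seg_responses, color_thresh=-6.0, hist_thresh=0.5):
    """Classify one merged region with both models (thresholds are assumptions)."""
    color_score = color_model.score(seg_pixels)             # mean log-likelihood under the GMM
    labels = textons.predict(seg_responses)                 # nearest texton per pixel
    hist = np.bincount(labels, minlength=32) / len(labels)  # region texton histogram
    skin_hist = np.bincount(textons.labels_, minlength=32) / len(textons.labels_)
    texture_score = np.minimum(hist, skin_hist).sum()       # histogram intersection
    return color_score > color_thresh and texture_score > hist_thresh
```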
83

Construção de Ontologias de Domínio a Partir de Mapas Conceituais / Construction of domain ontologies from conceptual maps.

Macedo, Gretchen Torres de 14 May 2007
Ontologies have been built and used in a variety of applications as a form of knowledge representation meant for software systems, or agents, as well as for human users. Compared with ontologies, conceptual maps are a more informal, simpler, and thus more accessible form of knowledge representation. However, the freedom enjoyed in defining concepts and their links makes it difficult to derive formal representations directly from conceptual maps. This work presents a transcription process that transforms conceptual maps into ontologies specified in OWL (Web Ontology Language). In this way, the ease of construction of conceptual maps can be exploited to alleviate the knowledge acquisition bottleneck inherent in ontology engineering. The transcription process consists of two main stages: translation and merging. In the translation stage, a group of conceptual maps about the same knowledge domain is transformed into a set of preliminary ontologies by means of a translator software module. In the merging stage, ontology merging techniques are applied to the set of preliminary ontologies so as to yield a single unified ontology; this stage was carried out with an existing merging tool. Conceptual maps were also built experimentally and submitted to the two stages of the process in order to evaluate it.
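A minimal sketch of the translation stage, under the assumption that each concept-map proposition is a (concept, linking phrase, concept) triple; rdflib stands in for the thesis's own translator module, and the namespace and propositions are invented for illustration.

```python
# Illustrative sketch (not the thesis tool): turning concept-map propositions
# into a preliminary OWL ontology with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/onto#")   # assumed namespace
g = Graph()
g.bind("ex", EX)

# Assumed input: propositions extracted from one conceptual map.
propositions = [("Plant", "produces", "Oxygen"),
                ("Plant", "needs", "Water")]

for subj, link, obj in propositions:
    s, p, o = EX[subj], EX[link], EX[obj]
    g.add((s, RDF.type, OWL.Class))          # each concept becomes an OWL class
    g.add((o, RDF.type, OWL.Class))
    g.add((p, RDF.type, OWL.ObjectProperty)) # each linking phrase becomes a property
    g.add((p, RDFS.domain, s))               # domain/range record the proposition
    g.add((p, RDFS.range, o))

print(g.serialize(format="xml"))             # a preliminary OWL/RDF-XML ontology
```

The preliminary ontologies produced this way would then be unified in the merging stage by an external merging tool, as the abstract describes.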
84

Multi-Criteria Evaluation in Support of the Decision-Making Process in Highway Construction Projects

Jia, Jianmin 31 March 2017
The decision-making process in highway construction projects identifies and selects the optimal alternative based on user requirements and evaluation criteria. Current practice does not consider all construction impacts in an integrated decision-making process. This dissertation developed a multi-criteria evaluation framework to support decision-making in highway construction projects. In addition to construction cost and mobility impacts, reliability, safety, and emission impacts are assessed at different evaluation levels and used as inputs to the decision-making process. Two levels of analysis, referred to as the planning level and the operation level, are proposed in this research to provide input to a Multi-Criteria Decision-Making (MCDM) process that considers user prioritization of the assessed criteria. The planning-level analysis provides faster, less detailed assessments of the MCDM inputs using analytical tools, mainly in spreadsheet form. The operation-level analysis produces more detailed inputs to the MCDM and utilizes a mesoscopic simulation-based dynamic traffic assignment tool and a microscopic simulation tool, combined with other utilities. The outputs generated from the two levels of analysis are used as inputs to a decision-making process based on present-worth analysis and the Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) MCDM method, and the results are compared.
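For orientation, here is a compact sketch of the crisp TOPSIS ranking step that the fuzzy variant used in the dissertation generalizes; the alternatives, criterion values, and weights below are made-up illustrations.

```python
# Illustrative crisp TOPSIS sketch (the dissertation uses the fuzzy variant;
# alternatives, criterion values, and weights below are made up).
import numpy as np

# Rows: construction alternatives; columns: cost, mobility, reliability, safety, emissions.
X = np.array([[4.2, 7.0, 6.5, 8.0, 5.0],
              [3.8, 6.0, 7.0, 7.5, 6.0],
              [5.0, 8.0, 6.0, 7.0, 4.5]])
w = np.array([0.3, 0.25, 0.15, 0.2, 0.1])             # user prioritization of the criteria
benefit = np.array([False, True, True, True, False])  # cost/emissions: lower is better

R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
V = R * w                                   # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_neg = np.linalg.norm(V - anti,  axis=1)   # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)         # rank alternatives by closeness
print(np.argsort(closeness)[::-1])          # best alternative first
```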
85

Assisting in the reuse of existing materials to build adaptive hypermedia / Aide à la Création d’Hypermédia Adaptatifs par Réutilisation des Modèles des Créateurs

Zemirline, Nadjet 12 July 2011
Nowadays, there is a growing demand for personalization, and the "one-size-fits-all" approach for hypermedia systems is no longer applicable. Adaptive hypermedia (AH) systems adapt their behavior to the needs of individual users. However, due to the complexity of their authoring process and the different skills required from authors, only a few have been developed. In recent years, numerous efforts have been made to assist authors in creating their own AH, yet some problems remain. This thesis tackles two of them. The first problem concerns the integration of authors' materials (information and user profile) into models of existing systems, allowing authors to directly reuse existing reasoning and execute it on their materials. We propose a semi-automatic merging/specialization process to integrate an author's model into a model of an existing system. Our objectives are twofold: to support the definition of mappings between elements of an existing system's model and elements of the author's model, and to help create a consistent, relevant model that integrates the two models and the mappings between them, without transformation or loss of information. The second problem concerns the adaptation specification, famously the hardest part of authoring adaptive web-based systems. We propose the EAP framework, with three main contributions: a set of 22 elementary adaptation patterns for adaptive navigation, a typology organizing these patterns, and a semi-automatic process to generate adaptation strategies by using and combining the patterns. The objective is to easily define adaptation strategies at a high level of abstraction by combining simple ones. We have compared the expressivity of the EAP framework with existing solutions for specifying adaptation, discussing, based on this study, the pros and cons of various decisions in terms of an ideal adaptation language. We also propose a unified vision of adaptation and adaptation languages, based on the analysis of these solutions and of our framework, together with a study of adaptation expressivity and interoperability between them, resulting in an adaptation typology. The unified vision and typology are not limited to the solutions analysed and can be used to compare and extend other approaches. Besides these qualitative theoretical studies, the thesis also describes implementations and experimental evaluations of our contributions in an e-learning application. A toy sketch of the pattern-combination idea follows.
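The sketch below illustrates only the general idea of composing elementary adaptation patterns into a navigation strategy; the pattern names and user-model fields are invented, and the actual EAP framework defines its own set of 22 patterns.

```python
# Toy sketch of combining elementary adaptation patterns (pattern names and
# user-model fields are invented for illustration).
def hide_if_not_ready(link, user):
    return {**link, "visible": user["knowledge"].get(link["prereq"], 0) >= 0.5}

def annotate_if_new(link, user):
    mark = " (new)" if link["target"] not in user["visited"] else ""
    return {**link, "label": link["label"] + mark}

def combine(*patterns):
    """Sequential composition of elementary patterns into one strategy."""
    def strategy(link, user):
        for p in patterns:
            link = p(link, user)
        return link
    return strategy

strategy = combine(hide_if_not_ready, annotate_if_new)
user = {"knowledge": {"html": 0.8}, "visited": set()}
link = {"label": "CSS basics", "target": "css", "prereq": "html", "visible": True}
print(strategy(link, user))   # annotated and shown, since the prerequisite is met
```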
86

Systém chránění s využitím výstupu z elektronického senzorického systému měření proudu a napětí / The Protection System Working on Output of Electronic Sensor System Measuring Current and Voltage

Bajánek, Tomáš January 2017
At present, alternative measurement technologies such as current and voltage sensors are in widespread use in electrical networks. Their use is closely related to IEC 61850-9-2 for transferring measured values within the substation for protection and measurement purposes. Sensors and the IEC 61850 communication standard, together with high-speed Ethernet, simplify the arrangement of protection terminals in substations and enable the development of a new protection system based on central protection. The dissertation focuses on protection algorithms that use sampled values (SV) according to IEC 61850-9-2 and their implementation in a central protection model. The thesis describes developments in the field of substation protection, the currently available solutions using IEC 61850-9-2, and the principle of central protection. It explains algorithms for selected protection functions: overcurrent protection, negative-sequence overcurrent protection, logic busbar protection, and differential protection. It further describes the programming of these protection algorithms in the LabVIEW development environment in the form of a central protection model. The model processes data from a process bus according to IEC 61850-9-2 and sends a GOOSE message over Ethernet in the event of a fault. To verify the correct function of the programmed protection algorithms, a testing procedure was developed using an OMICRON 256plus test set, a current sensor, and a merging unit. The results of testing the central protection model and the proposed algorithms were compared with the results of testing currently used protections. Finally, the thesis assesses the benefits of central protection for substations and the possibilities for further use of the central protection model. The thesis highlights a new way of protecting the power system using digital data from merging units transferred via the process bus described in IEC 61850-9-2.
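A simplified sketch of one protection function of the kind the thesis implements: definite-time overcurrent fed by a sampled-values stream. The 80-samples-per-cycle rate matches the common IEC 61850-9-2 LE protection profile at 50 Hz; the pickup and delay values are assumptions, and the GOOSE publication is stubbed out.

```python
# Simplified definite-time overcurrent sketch over an SV stream (80 samples
# per cycle at 50 Hz per the common 9-2 LE profile; pickup/delay are assumed,
# and GOOSE sending is a stub).
import math
from collections import deque

SAMPLES_PER_CYCLE = 80
PICKUP_A = 400.0                          # assumed pickup current (A, RMS)
DELAY_SAMPLES = 2 * SAMPLES_PER_CYCLE     # assumed definite-time delay: 2 cycles

window = deque(maxlen=SAMPLES_PER_CYCLE)
over_count = 0

def publish_goose_trip():
    print("GOOSE: TRIP")                  # stand-in for a real GOOSE publisher

def on_sample(i_a):
    """Called for every current sample arriving from the merging unit."""
    global over_count
    window.append(i_a)
    if len(window) < SAMPLES_PER_CYCLE:
        return
    rms = math.sqrt(sum(x * x for x in window) / len(window))  # one-cycle RMS
    over_count = over_count + 1 if rms > PICKUP_A else 0
    if over_count >= DELAY_SAMPLES:       # pickup held for the whole delay
        publish_goose_trip()
        over_count = 0

# Feed a 50 Hz test current: 300 A RMS, stepping to 800 A RMS (simulated fault).
for n in range(80 * 10):
    amp = 300.0 if n < 80 * 4 else 800.0
    on_sample(amp * math.sqrt(2) * math.sin(2 * math.pi * n / 80))
```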
87

Structural and dynamic models for complex road networks

Jiawei Xue 04 May 2020
The interplay between network topology and traffic dynamics in road networks impacts various performance measures. Extensive existing research focuses on link-level fundamental diagrams and on traffic assignment under route-choice assumptions; however, the underlying coupling of structure and dynamics leaves network-level traffic not fully investigated. In this thesis, we build structural and dynamic models to address three challenges: 1) describing road network topology and understanding the differences between cities; 2) quantifying network congestion considering both road network topology and traffic flow information; 3) allocating transportation management resources to optimize road network connectivity.

The first part of the thesis focuses on structural models for complex road networks. Online road map data platforms, like OpenStreetMap, provide reliable road network data for the world. To solve the duplicate-node problem, an O(n) time-complexity node merging algorithm is designed to pre-process the raw road network with n nodes (see the sketch after this abstract). We then define unweighted and weighted node degree distributions for road networks. Numerical experiments show the heterogeneity in node degree distribution for the Beijing and Shanghai road networks. Additionally, we find that a power-law distribution fits the weighted road network under certain parameter settings, extending the current knowledge that the degree distribution of the primal road network is not power-law.

In the second part, we develop a road network congestion analysis and management framework. Unlike previous methods, our framework incorporates both network structure and dynamics. Moreover, it relies on link speed data only, which is more accessible than the previously used link density data. Specifically, we start from the existing traffic percolation theory and critical relative speed to describe the network-level congestion level. Based on traffic component curves, we construct a measure A_ij for two road segments i and j to quantify the necessity of considering the two segments in the same traffic zone. Finally, we apply the Louvain algorithm to the resulting road segment networks to generate road network partition candidates. These candidate partitions help transportation engineers control regional traffic.

The last part formulates and solves a resource allocation optimization for road network management. The objective is to maximize the critical relative speed, which is defined from traffic component curves and is closely related to personal driving comfort; an upper bound on the budget serves as one of the constraints. To solve this simulation-based nonlinear optimization problem, we propose a simple allocation method and a meta-heuristic based on the genetic algorithm. Three applications demonstrate that the meta-heuristic finds better solutions than simple allocation. The results inform the optimal allocation of resources to each road segment in metropolitan cities to enhance road network connectivity.
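The abstract states the node merging pre-processing runs in O(n) but not its exact rule; the sketch below assumes duplicates are nodes whose coordinates fall in the same small grid cell, which yields the stated linear behavior via hashing. Names and the cell size are illustrative.

```python
# Illustrative O(n) duplicate-node merge via coordinate hashing (the exact
# merging rule is an assumption: nodes snapping to the same grid cell merge).
def merge_duplicate_nodes(nodes, edges, cell=1e-5):
    """nodes: {id: (lon, lat)}, edges: [(u, v)]; cell ~ 1 m in degrees."""
    cell_to_rep = {}
    node_to_rep = {}
    for nid, (lon, lat) in nodes.items():           # one hash probe per node: O(n)
        key = (round(lon / cell), round(lat / cell))
        rep = cell_to_rep.setdefault(key, nid)      # first node in the cell wins
        node_to_rep[nid] = rep
    merged_nodes = {r: nodes[r] for r in set(node_to_rep.values())}
    merged_edges = {(node_to_rep[u], node_to_rep[v])
                    for u, v in edges if node_to_rep[u] != node_to_rep[v]}
    return merged_nodes, sorted(merged_edges)

nodes = {1: (116.400000, 39.900000), 2: (116.400004, 39.900004), 3: (116.41, 39.91)}
edges = [(1, 2), (2, 3)]
print(merge_duplicate_nodes(nodes, edges))   # nodes 1 and 2 collapse into one
```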
88

Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping / Radar och optisk datafusion för objektbaserad kartering av urbant marktäcke

Jacob, Alexander January 2011
The creation and classification of segments for object-based urban land cover mapping is the key goal of this master thesis. An algorithm based on region growing and merging was developed, implemented, and tested, and the synergy effects of a fused dataset of SAR and optical imagery were evaluated based on the classification results. Testing was mainly performed with data for the city of Beijing, China. The dataset consists of SAR and optical data, and the classified land cover/use maps were evaluated using standard accuracy assessment methods such as confusion matrices, kappa values, and overall accuracy. The classification comprises 9 classes: low-density built-up, high-density built-up, road, park, water, golf course, forest, agricultural crop, and airport. Development was performed in Java, and a graphical interface for user-friendly interaction was created in parallel with the algorithm; this proved very useful during the extensive testing of the parameters, which could easily be entered through the dialogs of the interface. The algorithm treats the image as a connected graph of pixels, where each pixel can always merge with its direct neighbors, meaning those sharing an edge with it. Three criteria are available in the current state of the algorithm: a mean-based spectral homogeneity measure, a variance-based textural homogeneity measure, and a fragmentation test as a shape measure. The algorithm has 3 key parameters: the minimum and maximum segment size, and a homogeneity threshold based on a weighted combination of the relative change due to merging two segments. The growing and merging is divided into two phases: the first is based on mutual-best-partner merging and the second on the homogeneity threshold. In both phases it is possible to use all three criteria in arbitrary weighting constellations. A third step checks that the minimum size is fulfilled and can be performed before or after the other two steps. The segments can then be labeled interactively in a supervised manner, once again using the graphical user interface, to create a training sample set. This training set is used to train a support vector machine with a radial basis function (RBF) kernel; the optimal settings for the required parameters of the SVM training process are found by a cross-validation grid search, also implemented within the program. The SVM algorithm is based on the LibSVM Java implementation. Once training is completed, the SVM can be used to predict the whole dataset and obtain a classified land-cover map, which can be exported as a vector dataset. The results show that incorporating texture features already in the segmentation is superior to spectral information alone, especially when working with unfiltered SAR data. Incorporating the suggested shape feature, however, does not appear advantageous, especially considering the much longer processing time this criterion entails. The classification results also make it evident that the fusion of SAR and optical data is beneficial for urban land cover mapping: the distinction between urban areas and agricultural crops is improved greatly, and the confusion between high and low density is also reduced by the fusion.
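The thesis uses the Java LibSVM implementation; the sketch below reproduces the same cross-validated grid search over an RBF-kernel SVM with scikit-learn (whose SVC wraps LibSVM), on stand-in segment features, so grid values and feature dimensions are assumptions.

```python
# Sketch of the cross-validated RBF-SVM grid search described in the abstract,
# using scikit-learn as a stand-in for the Java LibSVM; features/labels here
# are random stand-ins for the labeled segments' spectral/texture statistics.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))              # per-segment features (means, variances, ...)
y = rng.integers(0, 9, size=300)           # 9 land-cover classes, as in the thesis

grid = {"C": [2**k for k in range(-3, 8, 2)],        # the usual LibSVM-style
        "gamma": [2**k for k in range(-9, 2, 2)]}    # log2 grid for C and gamma
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X, y)

print(search.best_params_)                 # settings used to train the final SVM
labels = search.best_estimator_.predict(X) # classify every segment -> land-cover map
```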
89

Road Estimation Using GPS Traces and Real Time Kinematic Data

Ghanbarynamin, Samira 29 April 2022 (has links)
Advanced Driver Assistance Systems (ADAS) are becoming a central concern in today's automotive industry, and the new generation of ADAS aims at greater detail and accuracy. To achieve this, automotive research and development seeks to utilize the Global Positioning System (GPS) by integrating it with existing ADAS tools. Several driver assistance systems are served by a digital map as a primary or secondary sensor. Traditional techniques for digital map generation, however, are expensive, time-consuming, and require extensive manual effort, so keeping maps up to date is an issue; moreover, existing commercial digital maps are not highly accurate. This Master's thesis presents several algorithms for automatically converting raw Universal Serial Bus (USB) GPS and Real Time Kinematic (RTK) GPS traces into a routable road network. The traces were gathered by driving 20 times on a highway. The work begins by pruning the raw GPS traces using four different algorithms; this first step aims to minimize the number of outliers. Once smoothed, the traces tend to consolidate into smooth paths, so a Trace Merging algorithm is applied to merge all 20 trips and estimate the road network. Finally, a Non-Uniform Rational B-Spline (NURBS) curve is fitted as an approximating curve to smooth the road shape and further reduce the effect of noisy data. Since the RTK-GPS receiver provides highly accurate data, the curve derived from its traces gives the best estimate of the road shape; it is therefore used as ground truth against which the result of each pruning algorithm applied to the USB-GPS data is compared. Lastly, the results of this work are presented and a quality evaluation is performed for all methods.
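A sketch of the final curve-fitting step on a merged trace. Note that scipy's splprep fits a non-rational B-spline rather than a full NURBS (no per-control-point weights), so this is a simplifying assumption; the trace data are synthetic.

```python
# Sketch of the final smoothing step. scipy fits a non-rational B-spline; a
# true NURBS adds per-control-point weights, so treat this as a simplified
# stand-in. The trace below is synthetic.
import numpy as np
from scipy.interpolate import splprep, splev

# Merged (noisy) trace: one point list after outlier pruning and trace merging.
t = np.linspace(0, 1, 200)
lon = 13.40 + 0.01 * t + 0.00005 * np.random.randn(200)
lat = 52.52 + 0.005 * np.sin(3 * t) + 0.00005 * np.random.randn(200)

# s controls smoothing (larger = smoother); k=3 gives a cubic spline.
tck, u = splprep([lon, lat], s=2e-6, k=3)
lon_s, lat_s = splev(np.linspace(0, 1, 500), tck)   # smooth road centerline
print(lon_s[:3], lat_s[:3])
```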
90

A Model-Driven Approach for LoD-2 Modeling Using DSM from Multi-stereo Satellite Images

Gui, Shengxi January 2020
No description available.
