  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Constrained Time-Dependent Adaptive Eco-Routing Navigation System / Systèmes eco-routing adaptatifs de navigation dépendant du temps avec des contraintes

Kubička, Matěj 16 November 2017 (has links)
Eco-routing is a vehicle navigation method that selects the paths to a destination that minimize fuel consumption, energy consumption, or pollutant emissions. It is one of the techniques that attempt to lower a vehicle's operational cost and environmental footprint. This work reviews current eco-routing methods and proposes a new method designed to overcome their shortcomings. Most current methods assign every road in the road network a constant cost representing either the vehicle's consumption there or the amount of emitted pollutants; an optimal routing algorithm is then used to find the path that minimizes the sum of these costs. Various extensions are considered in the literature. Constrained eco-routing allows imposing limits on travel time, energy consumption, and pollutant emissions. Time-dependent eco-routing allows routing on a graph whose costs are functions of time. Adaptive eco-routing allows updating the eco-routing solution in case it becomes invalid due to an unexpected development on the road. Published optimal eco-routing methods exist that solve time-dependent, constrained, or adaptive eco-routing, but each comes with considerably higher computational overhead than standard eco-routing, and, to the author's best knowledge, no published method supports the combination of all three: constrained time-dependent adaptive eco-routing.
This work argues that routing costs are uncertain because they depend on the immediate traffic around the vehicle, on the driver's behavior, and on other perturbations. It is further argued that since these costs are uncertain, there is little benefit in using optimal routing, because the optimality of the solution holds only as long as the routing costs are correct. Instead, an approximation method is proposed. Its computational overhead is lower since the solution is not required to be optimal, and this enables constrained time-dependent adaptive eco-routing.
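The standard eco-routing baseline the thesis starts from (constant per-road costs, optimal routing) can be sketched as Dijkstra's algorithm on an energy-weighted road graph. This is a minimal illustrative sketch, not the thesis's method; the toy network and its costs are invented:

```python
import heapq

def eco_route(graph, start, goal):
    """Dijkstra's algorithm over per-segment energy costs.

    graph maps node -> list of (neighbor, energy_cost) pairs.
    Returns (total_cost, path); (float('inf'), []) if the goal is unreachable.
    """
    best = {start: 0}
    prev = {}
    heap = [(0, start)]
    done = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            # Reconstruct the path by walking the predecessor links.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return cost, path[::-1]
        for nbr, w in graph.get(node, []):
            c = cost + w
            if c < best.get(nbr, float('inf')):
                best[nbr] = c
                prev[nbr] = node
                heapq.heappush(heap, (c, nbr))
    return float('inf'), []

# Toy road network; segment costs are illustrative energy values (Wh).
roads = {
    'A': [('B', 12), ('C', 8)],
    'B': [('D', 10)],
    'C': [('D', 25)],
    'D': [],
}
cost, path = eco_route(roads, 'A', 'D')
print(cost, path)  # → 22 ['A', 'B', 'D']
```

The constrained, time-dependent, and adaptive extensions discussed above all add work on top of this baseline, which is why their combination becomes computationally expensive under optimal routing.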
232

Preprocessing für das Matchen von Produktangeboten

Thomas, Stefan 19 February 2018 (has links)
Digitally stored data is used more and more widely. In the commercial domain, manual consolidation of this data is practically no longer feasible for cost and time reasons, yet forgoing duplicate detection is not an alternative either. Many approaches already exist for performing object matching fully or at least semi-automatically, but data sets obtained from web data in particular exhibit such high heterogeneity that existing approaches reach their limits. Product matching is especially affected. To support product-matching procedures, this work presents preprocessing techniques. In particular, a strategy is developed that makes it possible to recognize and extract product codes in text attributes. These and further strategies were implemented and integrated into the existing framework of the WDI-Lab.
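A minimal form of the product-code extraction strategy described above could apply regular expressions to free-text title attributes. The patterns and the sample title below are illustrative assumptions, not the thesis's actual rules:

```python
import re

# Hypothetical patterns: GTIN/EAN/UPC-style digit runs and vendor model
# numbers such as "WD10EZEX" -- both patterns are illustrative only.
CODE_PATTERNS = [
    re.compile(r'\b\d{12,14}\b'),                 # GTIN/EAN/UPC-style codes
    re.compile(r'\b[A-Z]{2,}\d[A-Z0-9-]{2,}\b'),  # vendor model numbers
]

def extract_codes(title):
    """Return product-code candidates found in a free-text title attribute."""
    codes = []
    for pat in CODE_PATTERNS:
        codes.extend(pat.findall(title))
    return codes

result = extract_codes("Western Digital WD10EZEX Blue 1TB, EAN 0718037779911")
print(result)  # → ['0718037779911', 'WD10EZEX']
```

Extracted codes like these can then serve as high-precision keys for the downstream product-matching step.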
233

Brandmal-Erkennung zur Detektion beschädigter Glaskappenisolatoren an Hochspannungsfreileitungen

Junghanns, Nico 23 September 2020 (has links)
For the reliable operation of high-voltage overhead power lines, the insulators used on them must be inspected regularly so that more serious damage can be prevented. Several techniques are suitable for this inspection; burn-mark detection is still a relatively new one, but it makes it possible to detect even the smallest damage. In this bachelor's thesis a new method for detecting burn marks is presented. It uses a template-matching algorithm to find the insulators, with a detection rate of 90.18%. All insulators found this way are examined for burn marks, which are segmented and then localized by connected-component labeling. In total, 71.05% of the burn marks were detected, and the condition of 88.19% of the insulators was determined correctly.
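The connected-component-labeling step used to localize burn marks in a segmented binary mask can be sketched as follows; the 4-connectivity choice and the toy mask are assumptions for illustration:

```python
def label_components(mask):
    """4-connected component labeling of a binary image (lists of 0/1).

    Returns a label image and the number of components found - a minimal
    stand-in for the connected-component-labeling step described above.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1          # start a new component
                stack = [(y, x)]
                labels[y][x] = current
                while stack:          # flood-fill the component
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return labels, current

# Two separated "burn mark" blobs in a toy segmented mask.
mask = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
]
labels, n = label_components(mask)
print(n)  # → 2
```

Each labeled component then corresponds to one burn-mark candidate whose position and extent can be reported.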
234

Instance-Based Matching of Large Life Science Ontologies

Kirsten, Toralf, Thor, Andreas, Rahm, Erhard 06 February 2019 (has links)
Ontologies are heavily used in the life sciences, so there is increasing value in matching different ontologies in order to determine related conceptual categories. We propose a simple yet powerful methodology for instance-based ontology matching which utilizes the associations between molecular-biological objects and ontologies. The approach can build on many existing ontology associations for instance objects such as sequences and proteins, and thus makes heavy use of available domain knowledge. Furthermore, the approach is flexible and extensible, since each instance source with associations to the ontologies of interest can contribute to the ontology mapping. We study several approaches to determine the instance-based similarity of ontology categories. We perform an extensive experimental evaluation using protein associations for different species to match between subontologies of the Gene Ontology and OMIM. We also provide a comparison with metadata-based ontology matching.
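One simple instance-based similarity of the kind studied here is a set-overlap measure over the instance (e.g. protein) sets associated with two categories. The choice of the Dice coefficient and the accession numbers below are illustrative assumptions, not necessarily what the paper uses:

```python
def instance_similarity(cat_a_instances, cat_b_instances):
    """Dice coefficient over the instance sets associated with two
    ontology categories: 2|A∩B| / (|A|+|B|)."""
    a, b = set(cat_a_instances), set(cat_b_instances)
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical protein accessions associated with a GO category and an
# OMIM entry (identifiers are illustrative only).
go_category = {"P04637", "P38398", "Q09472"}
omim_entry = {"P04637", "P38398", "O15350"}
sim = instance_similarity(go_category, omim_entry)
print(sim)  # → 0.6666... (2 shared proteins out of 3 + 3)
```

Category pairs whose similarity exceeds a threshold would then be proposed as matches in the ontology mapping.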
235

Ontology Integration with Non-Violation Check and Context Extraction

Wu, Dan January 2013 (has links)
Matching and integrating ontologies has been a desirable technique in areas such as data fusion, knowledge integration, the Semantic Web, and the development of advanced services in distributed systems. Unfortunately, the heterogeneity of ontologies poses major obstacles to the development of this technique. This licentiate thesis describes an approach to the problem of ontology integration using description logics and production rules, on both a syntactic and a semantic level. Concepts in ontologies are matched and integrated to generate ontology intersections. Context is extracted, and rules for reasoning over heterogeneous ontologies with contexts are developed. Ontologies are integrated by two processes. The first integration generates an ontology intersection from two OWL ontologies: an independent ontology containing non-contradictory assertions based on the original ontologies. The second integration is carried out by rules that use extracted context, such as ontology content and ontology description data, e.g. time and ontology creator. The integration is designed for conceptual ontology integration; instance information is considered neither in the integration process nor in its results. An ontology reasoner is used in the integration process for a non-violation check of the two OWL ontologies, and a rule engine handles conflicts according to production rules. The ontology reasoner checks the satisfiability of concepts with the help of anchors, i.e. synonyms and string-identical entities; production rules are applied to integrate the ontologies under the constraint that the original ontologies must not be violated. The second integration process is carried out with production rules over the ontologies' context data. Ontology reasoning in a repository is normally conducted within the boundary of each ontology.
Nonetheless, with context rules, reasoning is carried out across ontologies. The contents of an ontology provide context for its defined entities and are extracted, with the help of an ontology reasoner, to provide this context. Metadata of ontologies are criteria useful for describing them. Rules using context, also called context rules, are developed and built into the repository; new rules can also be added. The scientific contribution of the thesis is the suggested approach of applying semantics-based techniques to provide a complementary method for matching and integrating ontologies semantically. Through the illustration of the ontology integration process, the context rules, and a few manually integrated ontology results, the approach shows the potential to help develop advanced knowledge-based services.
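The anchor-finding step (string-identical entities plus synonyms) might be sketched like this; the data structures, the case-insensitive comparison, and the example ontologies are invented for illustration and are not the thesis's actual implementation:

```python
def find_anchors(labels_a, labels_b, synonyms=None):
    """Collect anchor pairs between two ontologies: string-identical labels
    (compared case-insensitively) plus pairs related via a synonym table."""
    synonyms = synonyms or {}
    norm_b = {label.lower(): label for label in labels_b}
    anchors = set()
    for label in labels_a:
        key = label.lower()
        if key in norm_b:                       # string-identical anchor
            anchors.add((label, norm_b[key]))
        for syn in synonyms.get(key, []):       # synonym-based anchor
            if syn in norm_b:
                anchors.add((label, norm_b[syn]))
    return anchors

# Tiny invented ontologies and synonym table.
onto_a = {"Car", "Engine", "Wheel"}
onto_b = {"car", "Motor", "Tyre"}
syns = {"engine": ["motor"], "wheel": ["tyre"]}
anchors = find_anchors(onto_a, onto_b, syns)
print(sorted(anchors))
```

Anchors found this way would then seed the reasoner's satisfiability checks when building the ontology intersection.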
236

Matchningsverktyg för reumatiker

Mona, Magan, Shahrina, Uddin January 2020 (has links)
Matching people within different groups and interests has become increasingly common on different types of platforms. Matching is the process of bringing together two or more objects based on given parameters.
At ReumaKompis, which is part of the Stockholm Rheumatism Association, matching is used to pair rheumatics who want support and someone to talk to about their rheumatic disease. This manual matching is carried out by ReumaKompis and is, according to the association, a very time-consuming process that requires resources in both materials and manpower. ReumaKompis therefore needs help matching people with each other in a more automated way. In this thesis, a case study investigates how to develop a matching tool, in the form of a web application, that matches rheumatics with each other based on diagnosis and duration of illness. This matching tool is intended to help people with rheumatism be matched together at ReumaKompis. The report examines the previous matching process to determine what is needed and what measures must be taken for the matching tool to replace it. The study consists of finding available matching algorithms that can be used in the tool. To make the matching tool usable and testable, a web application has been developed. The results of the case study show that it is possible to improve matching for rheumatics with a computerized matching tool intended for them. The improvement has been achieved as increased availability, since users can be matched at any time, provided there is someone to match with.
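A matching step of the kind the tool performs could be sketched as a greedy pairing over diagnosis and illness duration. The field names, the two-year threshold, and the greedy strategy are assumptions for illustration, not the application's actual schema or algorithm:

```python
def match_users(users, max_year_gap=2):
    """Greedily pair users who share a diagnosis and whose illness
    durations differ by at most max_year_gap years."""
    unmatched = list(users)
    pairs = []
    while unmatched:
        u = unmatched.pop(0)
        for i, v in enumerate(unmatched):
            if (u["diagnosis"] == v["diagnosis"]
                    and abs(u["years_ill"] - v["years_ill"]) <= max_year_gap):
                pairs.append((u["name"], v["name"]))
                unmatched.pop(i)   # v is now matched, remove from the pool
                break
    return pairs

# Invented example users.
users = [
    {"name": "Alice", "diagnosis": "RA", "years_ill": 3},
    {"name": "Bo", "diagnosis": "SLE", "years_ill": 5},
    {"name": "Cem", "diagnosis": "RA", "years_ill": 4},
    {"name": "Dana", "diagnosis": "SLE", "years_ill": 10},
]
pairs = match_users(users)
print(pairs)  # → [('Alice', 'Cem')]
```

Running such a step automatically whenever a new user registers is what provides the increased availability described above: a match can be made at any time, as soon as a compatible partner exists.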
237

Compressed Pattern Matching For Text And Images

Tao, Tao 01 January 2005 (has links)
The amount of information that we deal with today is being generated at an ever-increasing rate. On the one hand, data compression is needed to efficiently store and organize the data and to transport it over limited-bandwidth networks. On the other hand, efficient information retrieval is needed to quickly find the relevant information in this huge mass of data using available resources. The compressed pattern matching problem can be stated as: given the compressed format of a text or an image and a pattern string or a pattern image, report the occurrence(s) of the pattern in the text or image with minimal (or no) decompression. The main advantages of compressed pattern matching over the naïve decompress-then-search approach are: first, reduced storage cost, since no decompression (or only minimal decompression) is needed, disk space and memory costs are reduced; second, less search time, since the compressed data is smaller than the original, a search performed on it takes less time. The challenge of efficient compressed pattern matching has two inseparable aspects. First, to effectively utilize the full potential of compression for information retrieval systems, search-aware compression algorithms need to be developed. Second, for data compressed with a particular technique, whether search-aware or not, efficient searching techniques are needed; that is, techniques must be developed that search the compressed data with no or minimal decompression and without too much extra cost. Compressed pattern matching algorithms can be categorized as either for text compression or for image compression.
Although compressed pattern matching for text compression has been studied for a few years and many publications are available in the literature, there is still room to improve efficiency in terms of both compression and searching; none of the search engines available today makes explicit use of compressed pattern matching. Compressed pattern matching for image compression, on the other hand, has been relatively unexplored. It is getting more attention, however, because lossless compression has become more important for the ever-increasing amounts of medical images, satellite images, and aerospace photos that must be stored losslessly. Developing efficient information retrieval techniques for losslessly compressed data is therefore a fundamental research challenge. In this dissertation, we study the compressed pattern matching problem for both text and images and present a series of novel compressed pattern matching algorithms, divided into two major parts. The first major work targets the popular LZW compression algorithm; the second targets the current lossless image compression standard JPEG-LS. Specifically, our contributions from the first major work are: 1. We have developed an "almost-optimal" compressed pattern matching algorithm that reports all pattern occurrences. An earlier "almost-optimal" algorithm reported in the literature is only capable of detecting the first occurrence of the pattern, and its practical performance is unclear. We have implemented our algorithm and provide extensive experimental results measuring its speed. We also developed a faster implementation for so-called "simple patterns": patterns in which no symbol appears more than once. The algorithm takes advantage of this property and runs in optimal time. 2.
We have developed a novel compressed pattern matching algorithm for multiple patterns using the Aho-Corasick algorithm. The algorithm takes O(mt+n+r) time with O(mt) extra space, where n is the size of the compressed file, m is the total size of all patterns, t is the size of the LZW trie, and r is the number of occurrences of the patterns. The algorithm is particularly efficient for archival search when the archives are compressed with a common LZW trie. All of the above algorithms have been implemented, and extensive experiments have been conducted to test their performance and compare them with the best existing algorithms. The experimental results show that our compressed pattern matching algorithm for multiple patterns is competitive among the best algorithms and is practically the fastest of all approaches when the number of patterns is not very large; it is therefore preferable for general string matching applications. LZW is one of the most efficient and popular compression algorithms in extensive use, and both of our algorithms require no modification of the compression algorithm. Our work therefore has great economic and market potential. Our contributions from the second major work are: 1. We have developed a new global-context variation of the JPEG-LS compression algorithm and the corresponding compressed pattern matching algorithm. Compared to the original JPEG-LS, the global-context variation is search-aware and has faster encoding and decoding speeds. The searching algorithm based on the global-context variation requires partial decompression of the compressed image. The experimental results show that it improves search speed by about 30% compared to the decompress-then-search approach. To the best of our knowledge, this is the first two-dimensional compressed pattern matching work for the JPEG-LS standard.
2. We have developed a two-pass variation of the JPEG-LS algorithm and the corresponding compressed pattern matching algorithm. The two-pass variation achieves search-awareness through a common compression technique called a semi-static dictionary. Compared to the original algorithm, the new algorithm compresses equally well, but encoding takes slightly longer. The searching algorithm based on the two-pass variation requires no decompression at all and therefore works in the fully compressed domain. It runs in time O(nc+mc+nm+m^2) with extra space O(n+m+mc), where n is the number of columns of the image, m is the number of rows and columns of the pattern, nc is the compressed image size, and mc is the compressed pattern size. This is the first known two-dimensional algorithm that works in the fully compressed domain.
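The Aho-Corasick automaton underlying the multiple-pattern algorithm can be sketched on plain text as follows. (The dissertation's contribution is running such matching on LZW-compressed input without full decompression, which this illustrative sketch does not attempt.)

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton: goto trie, failure links, outputs."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                      # insert each pattern into the trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    queue = deque(goto[0].values())         # BFS to set failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]          # inherit outputs of the fail state
    return goto, fail, out

def search(text, patterns):
    """Report (end_index, pattern) for every occurrence of any pattern."""
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i, p) for p in out[s])
    return hits

hits = search("ushers", ["he", "she", "his", "hers"])
```

The automaton processes the text in a single pass regardless of how many patterns are loaded, which is what makes this family of algorithms attractive for multi-pattern archival search.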
238

A Software Suite to Detect Hardware Trojans on Integrated Circuits Using Computer Vision

Bowman, David January 2022 (has links)
No description available.
239

The Maximum Induced Matching Problem for Some Subclasses of Weakly Chordal Graphs

Krishnamurthy, Chandra Mohan January 2009 (has links)
No description available.
240

The Relation of Race/Ethnic-Matching to the Engagement, Retention, and Treatment Outcomes of Adolescent Substance Users

Weekes, Jerren C., M.A. 26 September 2011 (has links)
No description available.
