  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Numerické modelování chování částicového kompozitu se sesíťovanou polymerní matricí / Numerical modeling of behavior of a particle composite with crosslinked polymer matrix

Máša, Bohuslav January 2011 (has links)
The master's thesis deals with the determination of the macroscopic behavior of a particulate composite with a cross-linked polymer matrix under tensile load. The main focus is the estimation of the mechanical properties of the composite under tensile loading using numerical methods, in particular the finite element method. The investigated composite consists of a matrix in the rubbery state filled with alumina (Al2O3) particles. The hyperelastic behavior of the matrix has been described by the Mooney-Rivlin material model. Different particle shapes, orientations and volume fractions have been considered, and numerical models have been developed for each of these configurations. Damage mechanisms of the matrix have also been taken into account. The results of the numerical analyses have been compared with experimental data, and good agreement has been found between the models that include matrix damage mechanisms and the experiments.
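For context, a standard two-parameter Mooney-Rivlin strain energy density for a nearly incompressible rubbery matrix is shown below; the abstract does not state which variant or constants the thesis actually uses, so this is only the commonly cited form:

```latex
W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{D_1}\,(J - 1)^2
```

Here \(\bar{I}_1\) and \(\bar{I}_2\) are the invariants of the isochoric left Cauchy-Green deformation tensor, \(J\) is the volume ratio, and \(C_{10}\), \(C_{01}\), \(D_1\) are material parameters fitted to experimental data.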
422

Entwicklung und Realisierung einer Strategie zur Syndikation von Linked Data / Development and Implementation of a Strategy for the Syndication of Linked Data

Doehring, Raphael 20 October 2017 (has links)
The publication of structured data on the Linked Data Web has increased considerably. For many Internet users, however, this data is not usable, because accessing it is impossible without knowledge of a programming language. The web application LESS provides a template engine for Linked Data sources and SPARQL results. On the platform, templates can be created, published, and reused by other users. Users are supported during template development, so that it is possible to work with Semantic Web data even with limited technical knowledge. LESS enables the integration of data from different sources as well as the generation of text-based output formats such as RSS, XML, and HTML with JavaScript. Templates can be created for different resources and then easily integrated into existing web applications and websites. To improve the reliability and speed of the Linked Data Web, LESS caches the data it uses for a certain period of time, or for the case that the original data source becomes unavailable.
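LESS's own template syntax is not documented in this record, so the following is only a hypothetical Python sketch of the underlying idea: fetch SPARQL results from a Linked Data endpoint and render them through a simple text template. It uses the real SPARQLWrapper library and a plain Python format string, not LESS's actual template language.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query a public SPARQL endpoint (DBpedia) for a handful of resources.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?city ?name WHERE {
        ?city a dbo:City ;
              rdfs:label ?name .
        FILTER (lang(?name) = "en")
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# A minimal "template": one HTML list item per SPARQL result row.
# LESS lets non-programmers define such templates in a web UI instead.
template = '<li><a href="{uri}">{label}</a></li>'
items = [
    template.format(uri=row["city"]["value"], label=row["name"]["value"])
    for row in results["results"]["bindings"]
]
print("<ul>\n" + "\n".join(items) + "\n</ul>")
```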
423

Aspekte der Kommunikation und Datenintegration in semantischen Daten-Wikis / Aspects of Communication and Data Integration in Semantic Data Wikis

Frischmuth, Philipp 20 October 2017 (has links)
The Semantic Web, an extension of the original World Wide Web by a semantic layer, can greatly simplify the integration of information from different data sources. With RDF and the SPARQL query language, standards have been established that allow structured information to be represented uniformly and queried. With Linked Data, this information is made available via a uniform protocol, and a web of data emerges instead of a web of documents. This thesis examines and analyzes aspects of data integration based on such semantic technologies. Building on this, a system is specified and implemented that realizes the results of these investigations in a concrete application. The implementation is based on OntoWiki, a semantic data wiki.
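As a hedged illustration of the kind of RDF-based data integration described above (not of OntoWiki's actual API), the following Python sketch merges triples from two sources into one rdflib graph and queries the combined data with SPARQL; the example.org namespace and resource names are invented for the demo.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace for the demo

# Two "data sources" contributing statements about the same resource.
source_a = Graph()
source_a.add((EX.alice, RDF.type, FOAF.Person))
source_a.add((EX.alice, FOAF.name, Literal("Alice")))

source_b = Graph()
source_b.add((EX.alice, FOAF.mbox, URIRef("mailto:alice@example.org")))

# Integration step: merge both graphs into a single knowledge base.
merged = source_a + source_b

# SPARQL over the merged graph now sees information from both sources.
query = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?mbox WHERE {
        ?person a foaf:Person ;
                foaf:name ?name ;
                foaf:mbox ?mbox .
    }
"""
for name, mbox in merged.query(query):
    print(name, mbox)
```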
424

EAGLE - learning of link specifications using genetic programming

Lyko, Klaus 13 February 2018 (has links)
On the way to the Linked Data Web, efficient and semi-automatic approaches for generating links between data sources are needed. Many common Link Discovery frameworks require the user to create a link specification manually before the actual linking process can be started. While time-efficient approaches for executing such link specifications have been developed over the last years, the discovery of accurate link specifications remains a non-trivial problem. This thesis presents EAGLE, a machine-learning approach for learning link specifications. The overall goal of EAGLE is to limit the labeling effort for the user, i.e. the amount of manually annotated training data, while generating highly accurate link specifications. To achieve this goal, EAGLE builds on the algorithms implemented in the time-efficient LIMES framework and extends them with both batch and active learning mechanisms based on genetic programming techniques. Both learning strategies are compared and evaluated on several real-world datasets of varying origin and complexity. The evaluation shows that EAGLE can efficiently discover link specifications with F-measures comparable to other approaches while relying on a smaller number of labeled instances and requiring significantly less execution time.
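EAGLE's actual genetic-programming operators are not reproduced here; the sketch below is only a hypothetical Python illustration of the core active-learning idea: a link specification modeled as a similarity threshold that is refined using labels the user provides for the most uncertain candidate pairs.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1], standing in for LIMES-style metrics."""
    return SequenceMatcher(None, a, b).ratio()

def learn_threshold(candidates, oracle, rounds=3, queries_per_round=2):
    """Actively learn a similarity threshold for linking entity pairs.

    candidates: list of (label_a, label_b) pairs from two data sources
    oracle: function (a, b) -> bool, i.e. the human annotator
    """
    threshold = 0.5          # initial guess for the link specification
    labeled = []             # (score, is_link) pairs collected so far
    for _ in range(rounds):
        # Query the pairs whose score is closest to the current threshold:
        # these are the most informative ones for the annotator to label.
        scored = sorted(candidates, key=lambda p: abs(similarity(*p) - threshold))
        for a, b in scored[:queries_per_round]:
            labeled.append((similarity(a, b), oracle(a, b)))
        # Re-fit the threshold between positive and negative examples.
        pos = [s for s, y in labeled if y]
        neg = [s for s, y in labeled if not y]
        if pos and neg:
            threshold = (min(pos) + max(neg)) / 2
    return threshold

# Tiny usage example with a hand-written oracle (in practice: the user).
pairs = [("Leipzig", "Leipzig, Germany"), ("Berlin", "Bern"), ("Dresden", "Dresden")]
spec = learn_threshold(pairs, oracle=lambda a, b: a.split(",")[0] == b.split(",")[0])
print("learned threshold:", spec)
```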
425

Expanding The NIF Ecosystem - Corpus Conversion, Parsing And Processing Using The NLP Interchange Format 2.0

Brümmer, Martin 26 February 2018 (has links)
This work presents a thorough examination and expansion of the NIF ecosystem.
426

Integrace Linked Data / Linked Data Integration

Michelfeit, Jan January 2013 (has links)
Linked Data has emerged as a successful publication format that could mean to structured data what the Web meant to documents. The strength of Linked Data lies in its fitness for the integration of data from multiple sources. Linked Data integration opens the door to new opportunities but also poses new challenges. New algorithms and tools need to be developed to cover all steps of data integration. This thesis examines established data integration processes and how they can be applied to Linked Data, with a focus on data fusion and conflict resolution. Novel algorithms for Linked Data fusion are proposed, and the tasks of supporting trust with provenance information and assessing the quality of fused data are addressed. The proposed algorithms are implemented as part of the Linked Data integration framework ODCleanStore.
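The thesis's own fusion algorithms are not reproduced here; as a hedged illustration of conflict resolution during Linked Data fusion, the following Python sketch merges conflicting property values asserted by several sources using simple, pluggable resolution functions. The property names, source identifiers, and policies are invented for the example and are not ODCleanStore's actual configuration.

```python
from collections import Counter

# Conflicting values for the same (subject, property) pair, one claim per source.
# Each claim carries a source identifier that could feed a trust/provenance score.
claims = {
    ("ex:Berlin", "ex:population"): [
        ("dbpedia", 3644826),
        ("wikidata", 3677472),
        ("geonames", 3644826),
    ],
    ("ex:Berlin", "ex:label"): [
        ("dbpedia", "Berlin"),
        ("wikidata", "Berlin"),
    ],
}

def resolve_vote(values):
    """Pick the most frequently asserted value (majority vote)."""
    return Counter(v for _, v in values).most_common(1)[0][0]

def resolve_max(values):
    """Alternative policy: pick the maximum value (e.g. the largest count)."""
    return max(v for _, v in values)

# A per-property conflict-resolution policy (policy assignment is made up here).
policies = {"ex:population": resolve_vote, "ex:label": resolve_vote}

fused = {}
for (subj, prop), values in claims.items():
    resolve = policies.get(prop, resolve_vote)
    fused[(subj, prop)] = resolve(values)
print(fused)
```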
427

Scalable Data Integration for Linked Data

Nentwig, Markus 06 August 2020 (has links)
Linked Data describes an extensive set of structured but heterogeneous data sources where entities are connected by formal semantic descriptions. In the vision of the Semantic Web, these semantic links are extended towards the World Wide Web to provide as much machine-readable data as possible for search queries. The resulting connections allow an automatic evaluation to find new insights into the data. Identifying these semantic connections between two data sources with automatic approaches is called link discovery. We derive common requirements and a generic link discovery workflow based on similarities between entity properties and associated properties of ontology concepts. Most of the existing link discovery approaches disregard the fact that in times of Big Data, an increasing volume of data sources poses new demands on link discovery. In particular, the problem of complex and time-consuming link determination escalates with an increasing number of intersecting data sources. To overcome the restriction of pairwise linking of entities, holistic clustering approaches are needed to link equivalent entities of multiple data sources to construct integrated knowledge bases. In this context, the focus on efficiency and scalability is essential. For example, reusing existing links or background information can help to avoid redundant calculations. However, when dealing with multiple data sources, additional data quality problems must also be dealt with. This dissertation addresses these comprehensive challenges by designing holistic linking and clustering approaches that enable reuse of existing links. Unlike previous systems, we execute the complete data integration workflow via a distributed processing system. At first, the LinkLion portal will be introduced to provide existing links for new applications. These links act as a basis for a physical data integration process to create a unified representation for equivalent entities from many data sources. We then propose a holistic clustering approach to form consolidated clusters for the same real-world entities from many different sources. At the same time, we exploit the semantic type of entities to improve the quality of the result. The process identifies errors in existing links and can find numerous additional links. Additionally, the entity clustering has to react to the high dynamics of the data. In particular, this requires scalable approaches for continuously growing data sources with many entities as well as additional new sources. Previous entity clustering approaches are mostly static, focusing on the one-time linking and clustering of entities from few sources. Therefore, we propose and evaluate new approaches for incremental entity clustering that support the continuous addition of new entities and data sources. To cope with the ever-increasing number of Linked Data sources, efficient and scalable methods based on distributed processing systems are required. Thus we propose distributed holistic approaches to link many data sources based on a clustering of entities that represent the same real-world object. The implementation is realized on Apache Flink. In contrast to previous approaches, we utilize efficiency-enhancing optimizations for both distributed static and dynamic clustering. An extensive comparative evaluation of the proposed approaches with various distributed clustering strategies shows high effectiveness for datasets from multiple domains as well as scalability on a multi-machine Apache Flink cluster.
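The dissertation's Flink-based implementation is not reproduced here; the following is a hedged, single-machine Python sketch of one ingredient it describes, incremental entity clustering: existing links are treated as edges, and new entities or links are merged into the evolving clusters with a union-find structure instead of re-clustering everything from scratch. The entity identifiers are illustrative only.

```python
class IncrementalClustering:
    """Maintain entity clusters under a stream of new links (union-find)."""

    def __init__(self):
        self.parent = {}

    def _find(self, e):
        self.parent.setdefault(e, e)
        while self.parent[e] != e:
            self.parent[e] = self.parent[self.parent[e]]  # path halving
            e = self.parent[e]
        return e

    def add_link(self, a, b):
        """A new owl:sameAs-style link merges the clusters of a and b."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[ra] = rb

    def clusters(self):
        groups = {}
        for e in self.parent:
            groups.setdefault(self._find(e), set()).add(e)
        return list(groups.values())

# Existing links (e.g. harvested from a link repository such as LinkLion)...
cc = IncrementalClustering()
for a, b in [("dbpedia:Leipzig", "geonames:2879139"), ("geonames:2879139", "wikidata:Q2079")]:
    cc.add_link(a, b)

# ...and a later, incremental update from a newly added source.
cc.add_link("dbpedia:Leipzig", "nyt:leipzig_geo")
print(cc.clusters())
```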
428

Large-Scale Multilingual Knowledge Extraction, Publishing and Quality Assessment: The case of DBpedia

Kontokostas, Dimitrios 04 September 2018 (has links)
No description available.
429

CubeViz.js: A Lightweight Framework for Discovering and Visualizing RDF Data Cubes

Abicht, Konrad, Alkhouri, Georges, Arndt, Natanael, Meissner, Roy, Martin, Michael 30 October 2018 (has links)
In this paper we present CubeViz.js, the successor of CubeViz, as an approach for lightweight visualization and exploration of statistical data using the RDF Data Cube vocabulary. In several use cases in which we deployed CubeViz, such as the European Union's Open Data Portal, we were able to gather various requirements that eventually led to the decision to reimplement CubeViz as a JavaScript-only application. In this paper we showcase the major functionalities of CubeViz.js and its improvements in comparison to the prior version.
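For readers unfamiliar with the RDF Data Cube vocabulary that CubeViz.js visualizes, the following hedged Python/rdflib sketch builds a tiny cube with one observation and extracts it roughly the way a chart component might; the dataset, dimension, and measure names are invented for the example.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/")  # hypothetical dataset namespace

g = Graph()
obs = EX.obs1
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.unemploymentCube))
g.add((obs, EX.refArea, URIRef("http://example.org/region/DE")))
g.add((obs, EX.refPeriod, Literal("2016")))
g.add((obs, EX.unemploymentRate, Literal(4.1)))

# Collect the observations of one cube, as a visualization front end would.
rows = []
for o in g.subjects(RDF.type, QB.Observation):
    if (o, QB.dataSet, EX.unemploymentCube) in g:
        rows.append({
            "area": str(g.value(o, EX.refArea)),
            "period": str(g.value(o, EX.refPeriod)),
            "value": float(g.value(o, EX.unemploymentRate)),
        })
print(rows)
```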
430

Escape Simulation Suite

Merrell, Thomas Yates 21 April 2005 (has links)
Ever since we were children the phrase "In case of an emergency, walk, DON'T run, to the nearest exit" has been drilled into our heads. How to evacuate a large number of people from a given area as quickly and safely as possible has been a question of great importance since the first congregation of man; a question that has yet to be optimally answered. There have been many attempts at finding an answer and many more yet to be made. In light of recent world events, 9/11 for instance, the need for a better answer is apparent. While finding a solution to this problem is the end objective, the goal of this thesis is to develop an application or tool that will aid in the search for an answer to this problem.

There are several aspects of traditional evacuation plans that make them inherently suboptimal. First among these is that they are static by nature. When a building is designed, some care is taken in analyzing its floor plan and finding an optimal evacuation route for everyone. These plans are made under several assumptions and with the obvious constraint that they cannot be modified during the actual emergency. It is possible for such a plan to end up being the optimal plan during any given evacuation, but the likelihood of this being the case is most definitely less than 100%. There are many reasons for this. The most obvious is that the situation the plan is trying to solve is a very dynamic one. People will not be where they should be, or in the quantities that the static plan was prepared for. Many of them will probably not know what they should do in an emergency and so will most likely follow any large group of people, like lemmings. Finally, most situations that require the evacuation of a building or area occur because all or part of the building has become, or is becoming, unsafe. It is impossible for a static evacuation plan to take into account the way a fire or poisonous gas is spreading, or the state of the structural stability of the building.

What is needed during a crisis is an artificially intelligent and dynamic evacuation system that is capable of (1) analyzing the state of the building and its occupants, (2) coming up with a plan to get everyone out as fast as possible, and (3) directing all occupants along the best exit routes. Furthermore, the system should be able to modify its plan as the evacuation progresses.

This application is intended to provide researchers in this area the means to quickly and accurately simulate different evacuation theories and ideas. That being the case, it will have powerful graphical capabilities, allowing researchers to easily see the real-time results of their work. It will be able to use diverse modeling techniques in order to handle the many different ways of approaching this problem, and it will provide a simple way to enter equations and mathematical models that can affect the behavior of most aspects of the simulated world. This work is in conjunction with, and closely tied to, Dr. Pushkin Kachroo's research on this same topic. The application is designed so that future developers can quickly add to and modify its design to meet their specifications. It is not the goal of this work to provide an application that directly solves the optimal evacuation problem, or one that inherently simulates everything perfectly. It is the job of the researchers using this application to define the specific physics equations and models for each component of the simulation. This application provides an easy way to add these definitions into the simulation calculations.

In brief, the Escape Simulator is a client-server application. All of the graphics and human interaction are handled client-side using Win32 and Direct3D. The actual simulation world calculations are handled server-side, and the client and server communicate via DirectPlay. The algorithm used by the server to model the objects and world will be completely configurable; in fact, everything in the world, including the world physics, will be completely modifiable. Though the researchers will need to write the necessary plugins that define the actual models and algorithms used by the agents, objects, and world, ultimately this will give them much more power and flexibility. It will also allow third parties to develop libraries of commonly used algorithms and resources that the researchers can use.

This research was supported in part by the National Science Foundation through grant no. CMS-0428196 with Dr. S. C. Liu as the Program Director. This support is gratefully acknowledged. Any opinions, findings, and conclusions or recommendations expressed in this study are those of the writer and do not necessarily reflect the views of the National Science Foundation. / Master of Science
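The thesis describes a plugin architecture rather than a specific routing algorithm, so the following Python sketch is only a hedged illustration of the dynamic replanning idea it motivates: a building is modeled as a graph, hazards raise edge costs, and exit routes are recomputed with Dijkstra's algorithm whenever the hazard state changes. The floor plan, node names, and costs are invented for the example.

```python
import heapq

def shortest_exit(graph, start, exits):
    """Dijkstra from `start`; return (cost, path) to the cheapest exit."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(queue, (nd, nbr))
    best = min((e for e in exits if e in dist), key=dist.get, default=None)
    if best is None:
        return float("inf"), []
    path, node = [best], best
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[best], path[::-1]

# Hypothetical floor plan: rooms/corridors as nodes, traversal times as costs.
floor = {
    "office": {"corridor": 1.0},
    "corridor": {"office": 1.0, "stairs_a": 2.0, "stairs_b": 3.0},
    "stairs_a": {"exit_a": 1.0, "corridor": 2.0},
    "stairs_b": {"exit_b": 1.0, "corridor": 3.0},
    "exit_a": {}, "exit_b": {},
}
print(shortest_exit(floor, "office", {"exit_a", "exit_b"}))

# Dynamic update: a fire near stairs A makes that route very expensive,
# so the plan is recomputed and occupants are redirected to exit B.
floor["corridor"]["stairs_a"] = 100.0
print(shortest_exit(floor, "office", {"exit_a", "exit_b"}))
```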
