  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Integrating Natural Language Processing (NLP) and Language Resources Using Linked Data

Hellmann, Sebastian 12 January 2015 (has links) (PDF)
This thesis is a compendium of scientific works and engineering specifications that have been contributed to a large community of stakeholders to be copied, adapted, mixed, built upon and exploited in any way possible to achieve a common goal: integrating Natural Language Processing (NLP) and language resources using Linked Data. The explosion of information technology in the last two decades has led to a substantial growth in the quantity, diversity and complexity of web-accessible linguistic data. These resources become even more useful when linked with each other, and recent years have seen the emergence of numerous approaches in various disciplines concerned with linguistic resources and NLP tools. It is the challenge of our time to store, interlink and exploit this wealth of data, accumulated in more than half a century of computational linguistics, of empirical, corpus-based study of language, and of computational lexicography in all its heterogeneity. The vision of the Giant Global Graph (GGG) was conceived by Tim Berners-Lee with the aim of connecting all data on the Web and allowing new relations to be discovered between these openly accessible data. This vision has been pursued by the Linked Open Data (LOD) community, whose cloud of published datasets comprised 295 data repositories and more than 30 billion RDF triples as of September 2011. RDF is based on globally unique and accessible URIs, and it was specifically designed to establish links between such URIs (or resources). This is captured in the Linked Data paradigm, which postulates four rules: (1) referred entities should be designated by URIs, (2) these URIs should be resolvable over HTTP, (3) data should be represented by means of standards such as RDF, and (4) a resource should include links to other resources.
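As an illustration of the four rules, the toy sketch below models RDF-style triples as plain tuples; all URIs are hypothetical and the snippet is illustrative, not part of the thesis:

```python
# Rules 1 and 2: entities are named by HTTP-resolvable URIs (hypothetical here).
berlin = "http://example.org/resource/Berlin"
germany = "http://example.org/resource/Germany"

# Rule 3: data is represented as (subject, predicate, object) triples,
# the shape that RDF standardizes.
triples = [
    (berlin, "http://example.org/ontology/populationTotal", 3_500_000),
    # Rule 4: a resource links to other resources by using their URIs as objects.
    (berlin, "http://example.org/ontology/country", germany),
]

# The outgoing links of a resource are exactly the triples whose object is a URI.
outgoing_links = [(s, p, o) for (s, p, o) in triples
                  if isinstance(o, str) and o.startswith("http")]
print(len(outgoing_links))  # 1
```

Following rule 4, a consumer that resolves `germany` over HTTP would discover further triples, which is how the link structure of the LOD cloud emerges.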
Although it is difficult to precisely identify the reasons for the success of the LOD effort, advocates generally argue that open licenses as well as open access are key enablers for the growth of such a network, as they provide a strong incentive for collaboration and contribution by third parties. In his keynote at BNCOD 2011, Chris Bizer argued that with RDF the overall data integration effort can be “split between data publishers, third parties, and the data consumer”, a claim that can be substantiated by observing the evolution of many large datasets constituting the LOD cloud. As noted in the acknowledgements, parts of this thesis have received extensive feedback from other scientists, practitioners and industry in many different ways. The main contributions of this thesis are summarized here: Part I – Introduction and Background. During his keynote at the Language Resource and Evaluation Conference in 2012, Sören Auer stressed the decentralized, collaborative, interlinked and interoperable nature of the Web of Data. The keynote provided strong evidence that Semantic Web technologies such as Linked Data are on their way to becoming mainstream for the representation of language resources. The jointly written companion publication for the keynote was later extended as a book chapter in The People’s Web Meets NLP and serves as the basis for “Introduction” and “Background”, outlining some stages of the Linked Data publication and refinement chain. Both chapters stress the importance of open licenses and open access as enablers of collaboration and the ability to interlink data on the Web as a key feature of RDF, and they discuss scalability issues and decentralization. Furthermore, we elaborate on how conceptual interoperability can be achieved by (1) re-using vocabularies, (2) agile ontology development, (3) meetings to refine and adapt ontologies and (4) tool support to enrich ontologies and match schemata.
Part II – Language Resources as Linked Data. “Linked Data in Linguistics” and “NLP & DBpedia, an Upward Knowledge Acquisition Spiral” summarize the results of the Linked Data in Linguistics (LDL) Workshop in 2012 and the NLP & DBpedia Workshop in 2013 and give a preview of the MLOD special issue. In total, five proceedings – three published at CEUR (OKCon 2011, WoLE 2012, NLP & DBpedia 2013), one Springer book (Linked Data in Linguistics, LDL 2012) and one journal special issue (Multilingual Linked Open Data, MLOD, to appear) – have been (co-)edited to create incentives for scientists to convert and publish Linked Data and thus to contribute open and/or linguistic data to the LOD cloud. Based on the disseminated call for papers, 152 authors contributed one or more accepted submissions to our venues, and 120 reviewers were involved in peer-reviewing. “DBpedia as a Multilingual Language Resource” and “Leveraging the Crowdsourcing of Lexical Resources for Bootstrapping a Linguistic Linked Data Cloud” contain this thesis’ contributions to the DBpedia project, made in order to further increase the size and interlinkage of the LOD cloud with lexical-semantic resources. Our contribution comprises data extracted from Wiktionary (an online, collaborative dictionary similar to Wikipedia) in more than four languages (now six) as well as language-specific versions of DBpedia, including a quality assessment of inter-language links between Wikipedia editions and internationalized content-negotiation rules for Linked Data. In particular, the work described there laid the foundation for a DBpedia Internationalisation Committee, with members from over 15 different languages, whose common goal is to push DBpedia as a free and open multilingual language resource. Part III – The NLP Interchange Format (NIF). “NIF 2.0 Core Specification”, “NIF 2.0 Resources and Architecture” and “Evaluation and Related Work” constitute one of the main contributions of this thesis.
The NLP Interchange Format (NIF) is an RDF/OWL-based format that aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. The core specification describes which URI schemes and RDF vocabularies must be used for (parts of) natural language texts and annotations in order to create an RDF/OWL-based interoperability layer, with NIF built upon Unicode code points in Normal Form C (NFC). The classes and properties of the NIF Core Ontology formally define the relations between a text, its substrings and their URI schemes. For the evaluation of NIF, we put questions to 13 developers using NIF in a questionnaire. UIMA, GATE and Stanbol are extensible NLP frameworks, and NIF was not yet able to provide off-the-shelf NLP domain ontologies for all possible domains, but only for the plugins used in this study. After inspecting the software, however, the developers agreed that NIF is adequate for producing generic RDF output based on NIF using literal objects for annotations. All developers were able to map their internal data structures to NIF URIs to serialize RDF output (adequacy). The development effort in hours (ranging between 3 and 40) as well as the number of lines of code (ranging between 110 and 445) suggest that the implementation of NIF wrappers is easy and fast for an average developer. Furthermore, the evaluation contains a comparison to other formats and an evaluation of the available URI schemes for web annotation. In order to collect input from the wide group of stakeholders, a total of 16 presentations were given, with extensive discussions and feedback, which led to constant improvement of NIF from 2010 until 2013.
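The offset-based addressing can be illustrated with a small sketch. NIF 2.0 identifies substrings with RFC 5147-style fragments of the form `#char=begin,end`, counted over the NFC-normalized text; the document URI below is hypothetical and the helper is illustrative, not the reference implementation:

```python
import unicodedata

def nif_char_uri(doc_uri, text, begin, end):
    """Mint a NIF-style fragment URI for a substring, with offsets counted
    in Unicode code points of the NFC-normalized text."""
    nfc = unicodedata.normalize("NFC", text)
    assert 0 <= begin <= end <= len(nfc), "offsets must lie within the text"
    return f"{doc_uri}#char={begin},{end}", nfc[begin:end]

doc = "http://example.org/doc1"  # hypothetical document URI
text = "Berlin is the capital of Germany."
uri, substring = nif_char_uri(doc, text, 0, 6)
print(uri)        # http://example.org/doc1#char=0,6
print(substring)  # Berlin
```

Because every tool computes offsets over the same NFC-normalized string, two NLP tools annotating the same text mint identical URIs for identical substrings, which is what makes their RDF annotations mergeable.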
After the release of NIF (version 1.0) in November 2011, a total of 32 vocabulary employments and implementations for different NLP tools and converters were reported (8 by the (co-)authors, including the Wiki-link corpus, 13 by people participating in our survey and 11 more of which we have heard). Several roll-out meetings and tutorials were held (e.g. in Leipzig and Prague in 2013) and are planned (e.g. at LREC 2014). Part IV – The NLP Interchange Format in Use. “Use Cases and Applications for NIF” and “Publication of Corpora using NIF” describe eight concrete instances where NIF has been successfully used. One major contribution is the usage of NIF as the recommended RDF mapping in the Internationalization Tag Set (ITS) 2.0 W3C standard, together with the conversion algorithms from ITS to NIF and back. The discussions in the standardization meetings and telephone conferences for ITS 2.0 led to the conclusion that there was no alternative RDF format or vocabulary other than NIF with the required features to fulfill the working-group charter. Five further uses of NIF are described for the Ontology of Linguistic Annotations (OLiA), the RDFaCE tool, the Tiger Corpus Navigator, the OntosFeeder and visualisations of NIF using the RelFinder tool. These eight instances provide an implemented proof of concept of the features of NIF. We then describe the conversion and hosting of the huge Google Wikilinks corpus, with 40 million annotations for 3 million websites; the resulting RDF dump contains 477 million triples in a 5.6 GB compressed dump file in Turtle syntax. We further describe how NIF can be used to publish facts extracted from news feeds in the RDFLiveNews tool as Linked Data. Part V – Conclusions. This part provides lessons learned for NIF, conclusions and an outlook on future work. Most of the contributions are already summarized above.
One particular aspect worth mentioning is the increasing number of NIF-formatted corpora for Named Entity Recognition (NER) that have come into existence since the publication of the main NIF paper, Integrating NLP using Linked Data, at ISWC 2013. These include the corpora converted by Steinmetz, Knuth and Sack for the NLP & DBpedia workshop and an OpenNLP-based CoNLL converter by Brümmer. Furthermore, we are aware of three LREC 2014 submissions that leverage NIF (NIF4OGGD – NLP Interchange Format for Open German Governmental Data, N^3 – A Collection of Datasets for Named Entity Recognition and Disambiguation in the NLP Interchange Format, and Global Intelligent Content: Active Curation of Language Resources using Linked Data) as well as an early implementation of a GATE-based NER/NEL evaluation framework by Dojchinovski and Kliegr. Further funding for the maintenance, interlinking and publication of Linguistic Linked Data, as well as support and improvements of NIF, is available via the expiring LOD2 EU project and the CSA EU project LIDER, which started in November 2013. Based on the evidence of successful adoption presented in this thesis, we can expect a decent-to-high chance that Linked Data technology, as well as the NIF standard, will reach critical mass in the field of Natural Language Processing and Language Resources.
82

Collaborative new product development strategy : the case of the automotive industry /

Wolff, Timo. January 2007 (has links)
Hochsch. für Wirtschafts-, Rechts- und Sozialwiss., Diss.--St. Gallen, 2007.
83

Nelly Sachs’s Literary Transformation in Exile, 1940–1947

Pedersen, Daniel 29 July 2019 (has links)
No description available.
84

Vom Kritischen Bericht zur Kritischen Dokumentation am Beispiel der Digital-interaktiven Mozart-Edition

Dubowy, Norbert 29 October 2020 (has links)
A digital music edition that follows the principles implemented in the fully-digital, MEI-coded Digital Interactive Mozart Edition, pursued by the Mozarteum Foundation and the Packard Humanities Institute, has many advantages over conventional analog editions. One advantage is greater transparency, which is achieved not only at the level of the material, e.g. the inclusion of digital images of the sources, but above all by making editorial processes and decisions visible in the edition itself. In the digital edition, the Critical Report, a defining component of any critical edition and often physically separate from the edited musical text, becomes part of the overall digital code. The philological findings and editorial processes reported encompass the entire range of forms of expression, from verbal comments and annotations to pure code and non-verbal, largely visual communication strategies. Therefore, the format of the traditional printed Critical Report, which is mainly made up of text and tables, dissolves and is replaced by an immaterial, non-delimitable field of data, information, references and media for which the term Critical Documentation is more appropriate.
85

Marjan Asgari: Makom – deterritorialisiert. Gegenorte in der deutschsprachigen jüdischen Literatur

Ludewig, Anna-Dorothea 19 January 2021 (has links)
No description available.
86

Mobile privacy and apps: investigating behavior and attitude

Havelka, Stefanie 31 August 2020 (has links)
This dissertation examines the behavior and attitudes of smartphone and app users, and the role culture plays with respect to mobile privacy. The core research question is: are there differences in the mobile privacy behaviors and attitudes of American and German library and information science students? The dissertation uses ethnography as its research methodology, since culture is at the heart of ethnography: ethnographers try to make sense of the behavior, customs, and attitudes of the culture they observe and research. The author aims to portray a thick narrative, transforming participants' mobile privacy attitudes and behavior into a rich account. The research design comprises semi-structured interviews, coupled with experiments and participant observations about mobile technology use. Fieldwork 1 was conducted in person at two different sites: Humboldt-Universität zu Berlin, Germany, and Rutgers, the State University of New Jersey, USA. Fieldwork 2 was conducted digitally via an online video conferencing platform. Contrary to what the researcher predicted, the findings reveal almost no cultural differences in mobile privacy behavior and attitude. Instead, similar attitudes, namely mobile privacy complacency, mobile privacy learned helplessness, and mobile privacy pragmatism, seem to affect German and American students equally. This result was unexpected, as the study initially assumed that the differing levels of knowledge and awareness of mobile privacy in the two cultures would lead to different reactions. The findings nonetheless offer a useful starting point for further research, and in conclusion the researcher highlights three contributions this study makes to the scholarly literature.
87

Wireless Networking in Future Factories: Protocol Design and Evaluation Strategies

Naumann, Roman 17 January 2020 (has links)
As smart factory trends gain momentum, there is a growing need for robust network protocols that capture and make available sensor information gathered by individual machines during the production process. Wireless transmission provides the flexibility required for industrial adoption but poses challenges for timely and reliable information delivery in harsh industrial environments. This work focuses on protocol design and evaluation for industrial use cases. We first introduce the industrial use case, identify its requirements and derive concrete design principles that protocols should implement. We then propose mechanisms that implement these principles for different types of protocols and that retain compatibility with existing networks and hardware to varying degrees. We show that use-case-tailored prioritization at the source is a powerful tool for achieving robustness against challenged connectivity, by first conveying an accurate preview of information from the production process; we also derive precise bounds for the quality of that preview. Moving parts of the computational work into the network, we show that reordering queues in accordance with our prioritization scheme improves fairness among machines. We further demonstrate that network coding can benefit our use case, by introducing specialized encoding and decoding mechanisms. Finally, we propose a novel software architecture and evaluation techniques that allow potentially proprietary protocol implementations to be used within modern discrete-event network simulators, rendering, among other things, the adaptation of protocols to specific industrial use cases more cost-efficient. We demonstrate that our approach performs well enough for practical applications and, moreover, improves the validity of evaluation results over the state of the art.
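The source-side prioritization idea, sending the most informative readings first so that the receiver quickly holds an accurate preview of the process even if connectivity degrades, can be sketched generically. The following toy example (with a hypothetical priority function based on deviation from the mean, not the protocol developed in the thesis) uses a heap to emit readings in priority order:

```python
import heapq

def prioritized_transmission(readings, priority):
    """Yield readings in descending priority order; ties break by arrival index."""
    heap = [(-priority(r), i, r) for i, r in enumerate(readings)]
    heapq.heapify(heap)
    while heap:
        _, _, r = heapq.heappop(heap)
        yield r

# Hypothetical priority: readings that deviate most from the mean are assumed
# to carry the most information about the production process.
readings = [20.1, 20.2, 35.7, 20.0, 19.8]
mean = sum(readings) / len(readings)
order = list(prioritized_transmission(readings, lambda r: abs(r - mean)))
print(order[0])  # 35.7, the anomalous reading is transmitted first
```

If the link drops after the first few transmissions, the receiver has still seen the readings that matter most, which is the essence of the preview property described above.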
88

Deutsches Haus Radeberg – Hotel, Gaststätte, Vereinszentrum - Aufstieg und Untergang

Schönfuß, Klaus 21 June 2021 (has links)
The “Deutsches Haus” was built during industrialization, in the wake of a new district emerging north of the railway station. With the growing industrial and residential development of this area (expanding glassworks, the “Saxonia Eisenwerke und Eisenbahnbedarfsfabrik”), the workforce was also to be adequately supplied with food and drink. In addition, the Deutsches Haus, in the immediate vicinity of the station, became a cultural and political meeting point; a high point was the founding assembly of the “Consum Verein für Radeberg” held here on 27 May 1877, at which the name “ALLGEMEINER CONSUMVEREIN ZU RADEBERG” was adopted and registered. Historic political events for Radeberg were the speeches of August Bebel and Wilhelm Liebknecht in the Deutsches Haus, which aimed at a social-democratic orientation of the numerically strong workforce. Under GDR conditions the building was put to other uses, first as the “Haus der Antifa-Jugend” and then, from 1946, as the “Haus der FDJ”. After the founding of the Pioneer organization in 1948, it became the “Haus der Jungen Pioniere”.
89

Das kulturelle Leben in Radeberg 1945 - 1989 als Spiegel der Zeit

Schönfuß, Klaus 21 June 2021 (has links)
In the period after 1990, there was much discussion of the GDR's cultural past. What was GDR culture, and what role did it play in the almost 45 years of SBZ and GDR history? Was it merely an ideological instrument of power of the ruling system? Or was it an end in itself, because individual elements had perhaps at times taken on a life of their own? Or was it simply the need of most people to take pleasure in experiencing and enjoying art and culture, or even in their own active cultural work, whether with like-minded people or alone, and to feel a deep joy in doing so? Many questions; may everyone find the answers for themselves. It is undisputed, however, that our lives in this period would have been much poorer without this variety of cultural opportunities and experiences, which also led to, and fostered, artistic career paths, and that in the GDR these opportunities were offered by the state free of charge to anyone interested, with very good professional guidance.
90

Quantitative Modeling and Verification of Evolving Software

Getir Yaman, Sinem 15 September 2021 (has links)
Software plays an innovative role in many different domains, such as the car industry, autonomous and smart systems, and communication. Hence, the quality of software is of utmost importance and needs to be properly addressed during software evolution. Several approaches have been developed to evaluate systems' quality attributes, such as the reliability, safety, and performance of software. Due to the dynamic nature of modern software systems, the probabilistic models representing the quality of the software change over time: their transition probabilities fluctuate, which is a significant problem that must be solved to obtain correct evaluation results for quantitative properties. To address this, probabilistic models need to be continually updated at run-time. However, continuous re-evaluation of complex probabilistic models is expensive. Recently, incremental approaches have been found promising for the verification of evolving and self-adaptive systems. Nevertheless, substantial improvements have not yet been achieved for evaluating structural changes in the model. Probabilistic systems are usually represented in matrix form in order to solve the equations based on states and transition probabilities. Evolutionary changes, however, can have diverse effects on these models and force a re-verification of the whole system; run-time models such as matrices or graph representations lack the expressiveness to identify the effect of a change on the model. In this thesis, we develop a modular framework using stochastic regular expression trees with action-based probabilistic logic in the model-checking context. Such a modular framework enables us to develop change operations for the incremental computation of local changes that can occur in the model. Furthermore, we describe probabilistic change patterns to apply efficient incremental quantitative verification using stochastic regular expression trees, and we evaluate our results.
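The matrix view of probabilistic systems mentioned above, solving equations over states and transition probabilities, can be illustrated with a toy example. The following sketch (a hypothetical four-state Markov chain, not the stochastic regular expression framework of the thesis) computes the probability of eventually reaching a goal state by fixed-point iteration over the linear equation system:

```python
# Hypothetical Markov chain: from each non-absorbing state, the outgoing
# transition probabilities sum to 1; "goal" and "trap" are absorbing.
P = {
    "s0": {"s1": 0.5, "trap": 0.5},
    "s1": {"goal": 0.8, "s0": 0.2},
}

def reachability(P, goal, trap, sweeps=1000):
    """Solve p(s) = sum_t P(s, t) * p(t) with p(goal) = 1 and p(trap) = 0
    by fixed-point iteration (a dependency-free stand-in for a linear solve)."""
    p = {goal: 1.0, trap: 0.0}
    for s in P:
        p.setdefault(s, 0.0)
    for _ in range(sweeps):
        for s, succ in P.items():
            p[s] = sum(w * p[t] for t, w in succ.items())
    return p

p = reachability(P, "goal", "trap")
print(round(p["s0"], 4))  # 0.4444, the exact solution is 4/9
```

A structural change, such as adding a transition, alters this equation system, and the point of incremental verification is to avoid re-solving it from scratch for every such local change.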
