21

Flyover : a technique for achieving high performance CORBA-based systems /

Wu, Wai Keung, January 1900 (has links)
Thesis (M. App. Sc.)--Carleton University, 2001. / Includes bibliographical references (p. 126-130). Also available in electronic format on the Internet.
22

Application level guidelines for building high performance CORBA-based system /

Tao, Weili, January 1900 (has links)
Thesis (M.C.S.)--Carleton University, 2002. / Includes bibliographical references (p. 107-112). Also available in electronic format on the Internet.
23

Design and implementation of high performance filing systems

Zhang, Zhihui. January 2005 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Computer Science Department. / Includes bibliographical references.
24

Communication infrastructure for a distributed actor system /

Gandhi, Rajiv. January 1994 (has links)
Report (M.S.)--Virginia Polytechnic Institute and State University, 1994. / Abstract. Includes bibliographical references (leaf 36). Also available via the Internet.
25

IMPRESS : improving multicore performance and reliability via efficient software support for monitoring /

Nagarajan, Vijayanand. January 2009 (has links)
Thesis (Ph. D.)--University of California, Riverside, 2009. / Includes abstract. Title from first page of PDF file (viewed March 12, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 151-158). Also issued in print.
26

Risk-based proactive availability management

Cai, Zhongtang. January 2008 (has links)
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008. / Committee Member: Ahamad, Mustaque; Committee Member: Eisenhauer, Greg; Committee Member: Milojicic, Dejan; Committee Member: Pu, Calton; Committee Member: Schwan, Karsten.
27

Easing information extraction on the web through automated rules discovery

Ortona, Stefano January 2016 (has links)
The advent of the era of big data on the Web has made automatic web information extraction an essential tool in data acquisition processes. Unfortunately, automated solutions are in most cases more error prone than those created by humans, resulting in dirty and erroneous data. Automatic repair and cleaning of the extracted data is thus a necessary complement to information extraction on the Web. This thesis investigates the problem of inducing cleaning rules on web extracted data in order to (i) repair and align the data w.r.t. an original target schema, (ii) produce repairs that are as generic as possible such that different instances can benefit from them. The problem is addressed from three different angles: replace cross-site redundancy with an ensemble of entity recognisers; produce general repairs that can be encoded in the extraction process; and exploit entity-wide relations to infer common knowledge on extracted data. First, we present ROSeAnn, an unsupervised approach to integrate semantic annotators and produce a unified and consistent annotation layer on top of them. Both the diversity in vocabulary and widely varying accuracy justify the need for middleware that reconciles different annotator opinions. Considering annotators as "black-boxes" that do not require per-domain supervision allows us to recognise semantically related content in web extracted data in a scalable way. Second, we show in WADaR how annotators can be used to discover rules to repair web extracted data. We study the problem of computing joint repairs for web data extraction programs and their extracted data, providing an approximate solution that requires no per-source supervision and proves effective across a wide variety of domains and sources. The proposed solution is effective not only in repairing the extracted data, but also in encoding such repairs in the original extraction process.
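The idea of reconciling independent "black-box" annotators can be illustrated with a minimal majority-vote sketch. The annotator names, spans, and labels below are invented for illustration; this is not ROSeAnn's actual aggregation algorithm, which handles richer disagreement than a simple vote:

```python
from collections import Counter

def reconcile(annotations):
    """Pick, for each text span, the label most annotators agree on.

    `annotations` maps annotator name -> {span: label}.
    """
    votes = {}
    for labels in annotations.values():
        for span, label in labels.items():
            votes.setdefault(span, Counter())[label] += 1
    return {span: counts.most_common(1)[0][0] for span, counts in votes.items()}

# Three hypothetical annotators disagree on "Paris".
annotators = {
    "A": {"Paris": "CITY", "Obama": "PERSON"},
    "B": {"Paris": "CITY", "Obama": "PERSON"},
    "C": {"Paris": "PERSON", "Obama": "PERSON"},
}
print(reconcile(annotators))  # {'Paris': 'CITY', 'Obama': 'PERSON'}
```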
Third, we investigate how relationships among entities can be exploited to discover inconsistencies and additional information. We present RuDiK, a disk-based scalable solution to discover first-order logic rules over RDF knowledge bases built from web sources. We present an approach that does not limit its search space to rules that rely on "positive" relationships between entities, as is the case with traditional mining of constraints. On the contrary, it extends the search space to also discover negative rules, i.e., patterns that lead to contradictions in the data.
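The notion of a negative rule over an RDF knowledge base — a pattern whose instances signal a contradiction — can be sketched with a toy example. The triples and the single hard-coded rule below are invented for illustration and do not reflect RuDiK's actual rule-discovery algorithm, which searches a space of candidate rules:

```python
# Toy knowledge base as (subject, predicate, object) triples.
kb = {
    ("alice", "hasChild", "bob"),
    ("alice", "spouse", "carol"),
    ("dave", "hasChild", "eve"),
    ("dave", "spouse", "eve"),  # contradicts the negative rule below
}

def violations(kb):
    """Negative rule: hasChild(x, y) => NOT spouse(x, y).

    Returns the (x, y) pairs satisfying both predicates,
    i.e. likely errors in the extracted data.
    """
    children = {(s, o) for s, p, o in kb if p == "hasChild"}
    spouses = {(s, o) for s, p, o in kb if p == "spouse"}
    return children & spouses

print(violations(kb))  # {('dave', 'eve')}
```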
28

The Hadoop ecosystem for big data analysis : master's thesis / Ecosystem of analysis of big data hadoop

Харин, А. В., Kharin, A. V. January 2017 (has links)
Currently, technologies for storing and analyzing large amounts of information of different kinds are at the forefront of information technology development for most companies. The goal of the thesis is to create an instructional manual on the Hadoop ecosystem for big data analysis, aimed at students, developers, and anyone wishing to broaden their technical horizons. This research paper is a manual on “the Ecosystem of Big Data Analysis Hadoop.” This system is considered to be one of the foundational technologies of “Big Data.”
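Hadoop's core programming model, MapReduce, can be illustrated with a plain-Python simulation of the map, shuffle, and reduce phases. This mimics only the model; a real job would be submitted through the Hadoop APIs and run distributed across a cluster:

```python
from collections import defaultdict

def map_phase(docs):
    # Map: emit (key, value) pairs — here, (word, 1) per occurrence.
    for doc in docs:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data", "Big Data analysis", "data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 3, 'analysis': 1}
```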
29

Data Science and Analytics in Industrial Maintenance: Selection, Evaluation, and Application of Data-Driven Methods

Zschech, Patrick 02 October 2020 (has links)
Data-driven maintenance bears the potential to realize various benefits based on multifaceted data assets generated in increasingly digitized industrial environments. By taking advantage of modern methods and technologies from the field of data science and analytics (DSA), it is possible, for example, to gain a better understanding of complex technical processes and to anticipate impending machine faults and failures at an early stage. However, successful implementation of DSA projects requires multidisciplinary expertise, which can rarely be covered by individual employees or single units within an organization. This expertise covers, for example, a solid understanding of the domain, analytical method and modeling skills, experience in dealing with different source systems and data structures, and the ability to transfer suitable solution approaches into information systems. Against this background, various approaches have emerged in recent years to make the implementation of DSA projects more accessible to broader user groups. These include structured procedure models, systematization and modeling frameworks, domain-specific benchmark studies to illustrate best practices, standardized DSA software solutions, and intelligent assistance systems. The present thesis ties in with previous efforts and provides further contributions for their continuation. More specifically, it aims to create supportive artifacts for the selection, evaluation, and application of data-driven methods in the field of industrial maintenance. For this purpose, the thesis covers four artifacts, which were developed in several publications. 
These artifacts include (i) a comprehensive systematization framework for the description of central properties of recurring data analysis problems in the field of industrial maintenance, (ii) a text-based assistance system that offers advice regarding the most suitable class of analysis methods based on natural language and domain-specific problem descriptions, (iii) a taxonomic evaluation framework for the systematic assessment of data-driven methods under varying conditions, and (iv) a novel solution approach for the development of prognostic decision models in cases of missing label information. Individual research objectives guide the construction of the artifacts as part of a systematic research design. The findings are presented in a structured manner by summarizing the results of the corresponding publications. Moreover, the connections between the developed artifacts as well as related work are discussed. Subsequently, a critical reflection is offered concerning the generalization and transferability of the achieved results. 
Thus, the thesis not only provides a contribution based on the proposed artifacts; it also paves the way for future opportunities, for which a detailed research agenda is outlined.

Contents: List of Figures; List of Tables; List of Abbreviations; 1 Introduction (1.1 Motivation, 1.2 Conceptual Background, 1.3 Related Work, 1.4 Research Design, 1.5 Structure of the Thesis); 2 Systematization of the Field (2.1 The Current State of Research, 2.2 Systematization Framework, 2.3 Exemplary Framework Application); 3 Intelligent Assistance System for Automated Method Selection (3.1 Elicitation of Requirements, 3.2 Design Principles and Design Features, 3.3 Prototypical Instantiation and Evaluation); 4 Taxonomic Framework for Method Evaluation (4.1 Survey of Prognostic Solutions, 4.2 Taxonomic Evaluation Framework, 4.3 Exemplary Framework Application); 5 Method Application Under Industrial Conditions (5.1 Conceptualization of a Solution Approach, 5.2 Prototypical Implementation and Evaluation); 6 Discussion of the Results (6.1 Connections Between Developed Artifacts and Related Work, 6.2 Generalization and Transferability of the Results); 7 Concluding Remarks; Bibliography; Appendix I: Implementation Details; Appendix II: List of Publications (A Publication P1: Focus Area Systematization; B Publication P2: Focus Area Method Selection; C Publication P3: Focus Area Method Selection; D Publication P4: Focus Area Method Evaluation; E Publication P5: Focus Area Method Application)
30

The Sea of Stuff : a model to manage shared mutable data in a distributed environment

Conte, Simone Ivan January 2019 (has links)
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
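The core idea of immutable, location-independent entities can be sketched as a content-addressed store, where an entity's identifier is a hash of its content and is therefore the same on every node, irrespective of where the data is held. This is a deliberate simplification of the SOS model, not its actual implementation:

```python
import hashlib

class Store:
    """A content-addressed store: data is immutable and named by its hash."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # The identifier is derived from the content itself,
        # so identical content yields the same GUID on any node.
        guid = hashlib.sha256(data).hexdigest()
        self._blobs[guid] = data
        return guid

    def get(self, guid: str) -> bytes:
        return self._blobs[guid]

node_a, node_b = Store(), Store()
guid = node_a.put(b"hello")
# Location independence: a second node derives the same GUID.
assert node_b.put(b"hello") == guid
assert node_a.get(guid) == b"hello"
```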
