81 |
Handling Big Data using a Distributed Search Engine : Preparing Log Data for On-Demand Analysis / Ekman, Niklas, January 2017
Big data are datasets that are very large and computationally complex. As the volume of data increases, even a trivial processing task can become challenging. Companies collect data at a fast rate, but knowing what to do with the data can be hard. A search engine is a system that indexes data, making it efficiently queryable by users. When a bug occurs in a computer system, log data is consulted in order to understand why, but processing big log data can take a long time. The purpose of this thesis is to investigate, compare and implement a distributed search engine that can prepare log data for analysis, which will make it easier for a developer to investigate bugs. There are three popular search engines: Apache Lucene, Elasticsearch and Apache Solr. Elasticsearch and Apache Solr are built as distributed systems, making them capable of handling big data. Requirements were established through interviews. Log data totalling 40 GB was provided to be indexed in the selected search engine. The log data provided was generated in a proprietary binary format and had to be decoded first. The distributed search engines were evaluated based on distributed architecture, text analysis, indexing and querying. Elasticsearch was selected for implementation. A cluster was set up on Amazon Web Services and tests were executed in order to determine how different configurations performed. Indexing software was written to transfer data to the cluster. Results were verified through a case study with participants from the stakeholder. / Big data are datasets that are very large and complex to perform computations on. As a dataset grows, a trivial processing task becomes considerably more challenging. Companies today collect data at an ever-increasing rate, but it is hard to know exactly what to do with that data. A search engine is a system that indexes data and makes it efficient for users to search through it. When an error occurs in a computer system, developers go through log data to gain insight into why, but it can take a long time to search through a large amount of log data. The purpose of this thesis is to investigate, compare and implement a distributed search engine that can prepare log data for analysis, making it easier for developers to investigate bugs. There are three popular search engines: Apache Lucene, Elasticsearch and Apache Solr. Elasticsearch and Apache Solr are built as distributed systems and can therefore handle big data. Requirements were established through interviews. A large amount of log data, totalling 40 GB, was indexed in the selected search engine. The log data used was generated in a proprietary binary format that had to be decoded before it could be used. The distributed search engines were evaluated on the criteria: distributed architecture, text analysis, indexing and querying. Elasticsearch was selected for implementation. A cluster was set up on Amazon Web Services and tests were run to determine how different configurations performed. Indexing software was written to transfer data to the cluster. The results were verified through a study with participants from the stakeholder.
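As a rough illustration of the kind of indexing software this thesis describes, the sketch below bulk-loads already-decoded log records into an Elasticsearch cluster using the official Python client. The cluster URL, index name, field names and the stand-in decoder are hypothetical placeholders, not the thesis's actual implementation.

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

# Hypothetical endpoint; the thesis used a cluster hosted on Amazon Web Services.
es = Elasticsearch("http://example-log-cluster:9200")

def decoded_log_records():
    """Stand-in for the decoder that turns the proprietary binary log format into dicts."""
    yield {"timestamp": "2017-03-01T12:00:00", "level": "ERROR", "message": "disk full"}
    yield {"timestamp": "2017-03-01T12:00:05", "level": "INFO", "message": "retrying write"}

# Wrap each decoded record in a bulk action targeting a (hypothetical) index.
actions = ({"_index": "logs-2017", "_source": record} for record in decoded_log_records())

indexed, _errors = bulk(es, actions)  # streams the documents to the cluster in batches
print(f"indexed {indexed} documents")
```

Once documents are indexed this way, a developer can run ordinary match or range queries against the cluster instead of scanning raw binary logs.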
|
82 |
Analysis Of Aircraft Arrival Delay And Airport On-time Performance / Bai, Yuqiong, 01 January 2006
While existing grid environments cater to the specific needs of a particular user community, we need to go beyond them and consider general-purpose large-scale distributed systems consisting of large collections of heterogeneous computers and communication systems shared by a large user population with very diverse requirements. Coordination, matchmaking, and resource allocation are among the essential functions of large-scale distributed systems. Although deterministic approaches for coordination, matchmaking, and resource allocation have been well studied, they are not suitable for large-scale distributed systems due to the scale, autonomy, and dynamics of such systems. We therefore have to seek nondeterministic solutions. In this dissertation we describe our work on a coordination service, a matchmaking service, and a macro-economic resource allocation model for large-scale distributed systems. The coordination service coordinates the execution of complex tasks in a dynamic environment, the matchmaking service supports finding the appropriate resources for users, and the macro-economic resource allocation model allows a broker to mediate between resource providers, who want to maximize their revenue, and resource consumers, who want to get the best resources at the lowest possible price, subject to some global objective, e.g., maximizing the resource utilization of the system.
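To make the broker idea concrete, here is a small, hypothetical sketch of a broker matching resource consumers and providers by price: providers post asks, consumers post bids, and the broker pairs the highest bid with the cheapest ask while a mutually beneficial trade exists. The class and function names, the prices, and the surplus-splitting rule are illustrative assumptions, not the dissertation's actual macro-economic model.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    owner: str
    price: float  # ask (provider) or bid (consumer), per resource unit

def broker_match(asks: list[Offer], bids: list[Offer]) -> list[tuple[str, str, float]]:
    asks = sorted(asks, key=lambda o: o.price)                 # cheapest providers first
    bids = sorted(bids, key=lambda o: o.price, reverse=True)   # highest-paying consumers first
    trades = []
    for ask, bid in zip(asks, bids):
        if bid.price < ask.price:
            break                                              # no further mutually beneficial trades
        clearing = (ask.price + bid.price) / 2                 # split the surplus between the parties
        trades.append((bid.owner, ask.owner, clearing))
    return trades

print(broker_match(
    [Offer("provider-A", 2.0), Offer("provider-B", 5.0)],
    [Offer("consumer-X", 6.0), Offer("consumer-Y", 3.0)],
))  # -> [('consumer-X', 'provider-A', 4.0)]
```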
|
83 |
A Study of Implementation Methodologies for Distributed Real Time Collaboration / Craft, Lauren A, 01 June 2021
Collaboration drives our world and is almost unavoidable in the programming industry. From higher education to the top technological companies, people are working together to drive discovery and innovation. Software engineers must work with their peers to accomplish goals daily in their workplace. When working with others there are a variety of tools to choose from, such as Google Docs, Google Colab and Overleaf. Each of these collaborative tools uses the Operational Transform (OT) technique to implement its real time collaboration functionality. Operational transform is the technique used by most, if not all, major collaborative tools in the industry today. However, there is another way of implementing real time collaboration, through a data structure called a Conflict-free Replicated Data Type (CRDT), for which claims of superiority over OT have been made. Previous studies have focused on comparing the theory behind OT and CRDTs, but as far as we know, there have not been studies that compare real time collaboration performance using an OT implementation versus a CRDT implementation in a widely used product such as Google Docs or Overleaf.
Our work focuses on comparing the real time collaboration performance of OT and CRDTs in Overleaf, an academic authoring tool that allows for easy collaboration on academic and professional papers. Overleaf's current published version implements real time collaboration using operational transform. This thesis contributes an analysis of the current real time collaboration performance of operational transform in Overleaf, an implementation of CRDTs for real time collaboration in Overleaf, and an analysis of the real time collaboration performance of that CRDT implementation. This thesis also describes the main advantages and disadvantages of OT versus CRDTs, as well as, to our knowledge, the first results of a non-theoretical attempt at implementing CRDTs for handling document edits in a collaborative environment that was originally operating on an OT implementation.
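For readers unfamiliar with CRDTs, the sketch below shows the core convergence property on the simplest possible example, a grow-only counter: replicas update independently and merge with an element-wise maximum, which is commutative, associative and idempotent, so they converge without coordination. This toy example is not Overleaf's editor or the thesis's implementation; sequence CRDTs used for text editing apply the same principle to characters tagged with unique identifiers.

```python
class GCounter:
    """Grow-only counter CRDT: one monotonically increasing count per replica."""
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = {}  # per-replica contribution

    def increment(self, n: int = 1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter"):
        # Element-wise maximum is commutative, associative and idempotent,
        # so concurrent merges cannot conflict and all replicas converge.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("alice"), GCounter("bob")
a.increment(3); b.increment(2)   # concurrent, uncoordinated updates
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```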
|
84 |
Blockchain of Learning Logs (BOLL): Connecting Distributed Educational Data across Multiple Systems / ブロックチェーン・オブ・ラーニングログ(BOLL)：複数のシステムに分散した教育データの連結 / OCHEJA, PATRICK ILEANWA, 26 September 2022
Kyoto University / New system, doctoral program / Doctor of Informatics / 甲第24260号 / 情博第804号 / 新制||情||136 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Hiroaki Ogata, Professor Takayuki Ito, Professor Masatoshi Yoshikawa / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
85 |
NETWORKING ISSUES IN DEFER CACHE - IMPLEMENTATION AND ANALYSIS / PRABHU, SHALAKA K., January 2003
No description available.
|
86 |
E-CRADLE v1.1 - An improved distributed system for Photovoltaic Informatics / Zhao, Pei, 27 January 2016
No description available.
|
87 |
Resource Efficient Parallel VLDB with Customizable Degree of Redundancy / Xiong, Fanfan, January 2009
This thesis focuses on the practical use of very large scale relational databases. It leverages two recent breakthroughs in parallel and distributed computing: a) synchronous transaction replication technologies by Justin Y. Shi and Suntain Song; and b) the Stateless Parallel Processing principle pioneered by Justin Y. Shi. These breakthroughs enable scalable performance and reliability of database service using multiple redundant shared-nothing database servers. This thesis presents a Functional Horizontal Partitioning method with a customizable degree of redundancy to address practical very large scale database application problems. The prototype VLDB implementation is designed for transparent, non-intrusive deployment. The prototype system supports Microsoft SQL Server databases. Computational experiments are conducted using an industry-standard benchmark (TPC-E). / Computer and Information Science
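The placement idea behind horizontal partitioning with a customizable degree of redundancy can be sketched as follows: each row key is hashed onto a ring of database servers and replicated to the next k servers. The server names, hash choice and replication scheme here are assumptions for illustration only, not the thesis's prototype.

```python
import hashlib

def placement(row_key: str, servers: list[str], redundancy: int) -> list[str]:
    """Map a row key to `redundancy` consecutive servers on a hash ring."""
    h = int(hashlib.sha256(row_key.encode()).hexdigest(), 16)
    start = h % len(servers)
    return [servers[(start + i) % len(servers)] for i in range(redundancy)]

servers = ["db-node-1", "db-node-2", "db-node-3", "db-node-4"]
print(placement("customer:42", servers, redundancy=2))  # two redundant copies, e.g. ['db-node-3', 'db-node-4']
```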
|
88 |
Development and Application of Dynamic Architecture Flow Optimization to Assess the Impact of Energy Storage on Naval Ship Mission Effectiveness, System Vulnerability and Recoverability / Kara, Mustafa Yasin, 20 May 2022
This dissertation presents the development and application of a naval ship distributed system architecture framework, Architecture Flow Optimization (AFO), Dynamic Architecture Flow Optimization (DAFO), and Energy Storage System (ESS) model in naval ship Concept and Requirements Exploration (CandRE). The particular objective of this dissertation is to determine and assess Energy Storage System (ESS) capacity, charging and discharging capabilities in a complex naval ship system of systems to minimize vulnerability and maximize recoverability and effectiveness. The architecture framework is implemented through integrated Ship Behavior Interaction Models (SBIMs) that include the following: Warfighting Model (WM), Ship Operational Model (OM), Capability Model (CM), and Dynamic Architecture Flow Optimization (DAFO). These models provide a critical interface between logical, physical, and operational architectures, quantifying warfighting and propulsion capabilities through system measures of performance at specific capability nodes. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based, linear programming optimization methods used to design and analyze MPESs at a sufficient level of detail to understand system energy flow, define MPES architecture and sizing, model operations, reduce system vulnerability and improve system effectiveness and recoverability with ESS capabilities. AFO incorporates system topologies, energy coefficient component models, preliminary arrangements, and (nominal and damaged) steady state scenarios to minimize the energy flow cost required to satisfy all operational scenario demands and constraints. The refined DAFO applies the same principles as AFO, but adds two more capabilities, Propulsion and ESS charging, and maximizes effectiveness at each scenario timestep. DAFO also integrates with a warfighting model, operational model, and capabilities model that quantify the performance of tasks enabled by capabilities through system measures of performance at specific capability nodes. This dissertation provides a description of the design tools developed to implement these processes and methods, including a ship synthesis model, hullform exploration, MPES explorations and objective attribute metrics for cost, effectiveness and risk, using design of experiments (DOEs) response surface models (RSMs) and Energy Storage System (ESS) applications. / Doctor of Philosophy / This dissertation presents the development and application of a naval ship distributed system architecture framework, Architecture Flow Optimization (AFO), Dynamic Architecture Flow Optimization (DAFO), and Energy Storage System (ESS) design in naval ship Concept and Requirements Exploration (CandRE). The particular objective of this dissertation is to determine and assess Energy Storage System (ESS) capacity, charging and discharging capabilities in a complex naval ship system of systems to minimize vulnerability and maximize recoverability and effectiveness. The architecture framework is implemented through integrated Ship Behavior Interaction Models (SBIMs) that include the following: Warfighting Model (WM), Ship Operational Model (OM), Capability Model (CM), and Dynamic Architecture Flow Optimization (DAFO). 
These models provide a critical interface between logical, physical, and operational architectures, quantifying warfighting and propulsion capabilities through system measures of performance at specific capability nodes. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based, linear programming optimization methods used to design and analyze MPESs at a sufficient level of detail to understand system energy flow, define MPES architecture and sizing, model operations, reduce system vulnerability and improve system effectiveness and recoverability with ESS capabilities. AFO incorporates system topologies, energy coefficient component models, preliminary arrangements, and (nominal and damaged) steady state scenarios to minimize the energy flow cost required to satisfy all operational scenario demands and constraints. DAFO applies the same principles as AFO, but adds two more capabilities, Propulsion and ESS charging, and maximizes effectiveness at each scenario timestep. DAFO also integrates with a warfighting model, operational model, and capabilities model that quantify the performance of tasks enabled by capabilities through system measures of performance at specific capability nodes. This dissertation provides an overview of the design tools developed to implement these processes and methods, including a ship synthesis model, hullform exploration, MPES explorations and objective attribute metrics for cost, effectiveness and risk, using design of experiments (DOEs) response surface models (RSMs) and Energy Storage System (ESS) applications.
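A toy version of the network-flow formulation behind AFO/DAFO can be written with an off-the-shelf minimum-cost flow solver: nodes carry supplies (generators) and demands (vital loads), edges carry capacities and per-unit costs, and the solver routes energy at minimum total cost. The node names, capacities and costs below are invented for illustration; the dissertation's MPES networks are far larger and add damaged-scenario constraints and time-stepped behavior.

```python
import networkx as nx

G = nx.DiGraph()
# demand < 0 means the node supplies flow; demand > 0 means it consumes flow (kW).
G.add_node("generator_1", demand=-1000)
G.add_node("radar_load", demand=600)
G.add_node("propulsion_load", demand=400)
# Edges carry a capacity and a per-unit routing cost ("weight").
G.add_edge("generator_1", "switchboard_port", capacity=800, weight=1)
G.add_edge("generator_1", "switchboard_stbd", capacity=800, weight=1)
G.add_edge("switchboard_port", "radar_load", capacity=600, weight=2)
G.add_edge("switchboard_stbd", "radar_load", capacity=600, weight=3)
G.add_edge("switchboard_port", "propulsion_load", capacity=500, weight=2)
G.add_edge("switchboard_stbd", "propulsion_load", capacity=500, weight=2)

flow = nx.min_cost_flow(G)  # minimizes total weighted flow subject to demands and capacities
for src, targets in flow.items():
    for dst, f in targets.items():
        if f:
            print(f"{src} -> {dst}: {f} kW")
```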
|
89 |
Refinement of Surface Combatant Ship Synthesis Model for Network-Based System Design / Stinson, Nicholas Taylor, 17 June 2019
This thesis describes an adaptable component level machinery system weight and size estimation tool used in the context of a ship distributed system architecture framework and ship synthesis model for naval ship concept design. The system architecture framework decomposes the system of systems into three intersecting architectures (physical, logical, and operational) to describe the spatial and functional relationships of the system together with their temporal behavior characteristics. Following an Architecture Flow Optimization (AFO), or energy flow analysis based on this framework, vital components are sized based on their energy flow requirements for application in the ship synthesis model (SSM). Previously, components were sized manually or parametrically. This was not workable for assessing many designs in concept exploration, and outdated parametric models based on historical data were not sufficiently applicable to new ship designs. The new methodology presented in this thesis uses the energy flow analysis, baseline component data, and physical limitations to individually calculate sizes and weights for each vital component in a ship power and energy system. The methodology allows for new technologies to be quickly and accurately implemented to assess their overall impact on the design. The optimized flow analysis combined with the component level data creates a higher fidelity design that can be analyzed to assess the impact of various systems and operational cases on the overall design. This thesis describes the SSM, discusses the AFO's contribution, and provides background on the component sizing methodology, including the underlying theory, baseline data, energy conversion, and physical assumptions. / Master of Science / This thesis describes an adaptable component level machinery system weight and size estimation tool used in the context of a preliminary ship system design and naval ship concept design. The system design decomposes the system of systems into three intersecting areas (physical, logical, and operational) to describe the spatial and functional relationships of the system together with their time dependent behavior characteristics. Following an Architecture Flow Optimization (AFO), or energy flow analysis based on this system design, vital components are sized based on their energy flow requirements for application in the ship synthesis model (SSM). Previously, components were sized manually or with estimated equations. This was not workable for assessing many designs in concept exploration, and outdated equation models based on historical data were not sufficiently applicable to new ship designs. The new methodology presented in this thesis uses the energy flow analysis, baseline component data, and physical limitations to individually calculate sizes and weights for each vital component in a ship power and energy system. The methodology allows for new technologies to be quickly and accurately implemented to assess their overall impact on the design. The optimized flow analysis combined with the component level data creates a more accurate design that can be analyzed to assess the impact of various systems and operational cases on the overall design. This thesis describes the SSM, discusses the AFO's contribution, and provides background on the component sizing methodology, including the underlying theory, baseline data, energy conversion, and physical assumptions.
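The component-level sizing step can be illustrated with a small hedged sketch: given the power a component must handle according to the energy-flow solution, its weight and volume are scaled from a baseline unit with an assumed power-law exponent. The baseline figures and exponent below are placeholders, not the thesis's calibrated component data.

```python
def size_component(required_kw: float,
                   baseline_kw: float,
                   baseline_weight_t: float,
                   baseline_volume_m3: float,
                   exponent: float = 0.8) -> dict:
    """Scale a baseline component's weight/volume to a required power rating."""
    ratio = (required_kw / baseline_kw) ** exponent
    return {
        "rated_kw": required_kw,
        "weight_t": baseline_weight_t * ratio,
        "volume_m3": baseline_volume_m3 * ratio,
    }

# e.g. a (hypothetical) 2.5 MW motor scaled from a 2 MW baseline unit
print(size_component(2500, baseline_kw=2000, baseline_weight_t=18.0, baseline_volume_m3=25.0))
```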
|
90 |
CASM: A Content-Aware Protocol for Secure Video Multicast / Yin, H., Lin, C., Qiu, F., Liu, J., Min, Geyong, Li, B., January 2006
Information security has been a critical issue in the design and development of reliable distributed communication systems and has attracted significant research effort. A challenging task is how to maintain a high level of information security for multiple-destination video applications, given the huge volume of data and the dynamic nature of clients. This paper proposes a novel Content-Aware Secure Multicast (CASM) protocol for video distribution that seamlessly integrates three important modules: 1) a scalable, lightweight algorithm for group key management; 2) a content-aware key embedding algorithm that keeps video quality distortion imperceptible while allowing clients to reliably detect embedded keys; and 3) a smart two-level video encryption algorithm that selectively encrypts only a small set of video data, yet ensures that both the video and the embedded keys are unrecognizable without a genuine key. The implementation of the CASM protocol is independent of the underlying multicast mechanism and is fully compatible with existing coding standards. Performance evaluation studies built upon a CASM prototype have demonstrated that CASM is highly robust and scalable in dynamic multicast environments. Moreover, it ensures secure distribution of key and video data with minimal communication and computation overheads. The proposed content-aware key embedding and encryption algorithms are fast enough to support real-time video multicasting.
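The selective-encryption idea can be illustrated with a toy sketch: only the frames that the rest of the stream depends on (simulated "I-frames" here) are encrypted with a symmetric key, leaving most of the data untouched yet unwatchable without the key. This uses the generic cryptography library's Fernet cipher purely for illustration; it is not the CASM two-level algorithm, its key-embedding scheme, or its group key management.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in CASM this role is played by the managed group key
cipher = Fernet(key)

frames = [("I", b"intra-coded frame data"),
          ("P", b"predicted frame data"),
          ("P", b"predicted frame data 2")]

# Encrypt only the I-frames; P-frames are useless without them, so most bytes stay plaintext.
protected = [
    ("I", cipher.encrypt(payload)) if ftype == "I" else (ftype, payload)
    for ftype, payload in frames
]

# Only holders of the genuine key can recover the I-frames the P-frames depend on.
recovered = [cipher.decrypt(p) if t == "I" else p for t, p in protected]
assert recovered[0] == b"intra-coded frame data"
```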
|