  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Anomaly-based Self-Healing Framework in Distributed Systems

Kim, Byoung Uk January 2008 (has links)
One of the most important design criteria for distributed systems and their applications is reliability and robustness to hardware and software failures. The increase in complexity, interconnectedness, and dependency, together with the asynchronous interactions between components that include hardware resources (computers, servers, network devices) and software (application services, middleware, web services, etc.), makes fault detection and tolerance a challenging research problem. In this dissertation, we present a self-healing methodology based on the principles of autonomic computing and on statistical and data-mining techniques to detect faults (hardware or software) and identify their source. In our approach, we monitor and analyze in real time all interactions between the components of a distributed system using two software modules: a Component Fault Manager (CFM), which monitors a set of measurement attributes for applications and nodes, and an Application Fault Manager (AFM), which is responsible for activities such as monitoring, anomaly analysis, root-cause analysis, and recovery. We use a three-dimensional array of features to capture spatial and temporal behavior; an anomaly-analysis engine uses this array to generate an alert as soon as an abnormal behavior pattern caused by a software or hardware failure is detected. We use several fault-tolerance metrics (false positives, false negatives, precision, recall, missed-alarm rate, detection accuracy, latency, and overhead) to evaluate the effectiveness of our self-healing approach against other techniques. We applied our approach to an industry-standard web e-commerce application to emulate a complex e-commerce environment, evaluated its effectiveness and performance in detecting software faults injected asynchronously, and compared the results across different noise levels.
Our experimental results show that applying our anomaly-based approach significantly improves false positives, false negatives, missed-alarm rate, and detection accuracy. For example, for faults injected asynchronously, the approach achieves a detection rate above 99.9% with no false alarms across a wide range of faulty and normal operational scenarios.
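The dissertation's CFM/AFM pipeline is not reproduced here; as a loose sketch of the core idea only, the following assumes a hypothetical three-dimensional feature array (component × attribute × time window) and substitutes a simple z-score rule for the actual anomaly-analysis engine:

```python
import numpy as np

def detect_anomalies(features, threshold=3.0):
    """Flag (component, attribute) cells whose newest time-window value
    deviates more than `threshold` standard deviations from history.

    features: 3-D array of shape (components, attributes, time_windows).
    Returns a list of (component, attribute) index pairs."""
    history = features[:, :, :-1]      # all but the newest window
    latest = features[:, :, -1]        # newest window
    mean = history.mean(axis=2)
    std = history.std(axis=2) + 1e-9   # avoid division by zero
    z = np.abs(latest - mean) / std
    comps, attrs = np.where(z > threshold)
    return list(zip(comps.tolist(), attrs.tolist()))

# Hypothetical data: 2 components, 2 attributes, 20 windows;
# inject a spike in component 1, attribute 0 to simulate a fault.
rng = np.random.default_rng(0)
f = rng.normal(10.0, 1.0, size=(2, 2, 20))
f[1, 0, -1] = 100.0
print(detect_anomalies(f))
```

A real engine would use richer statistics and learned baselines, but the spatial/temporal indexing of the feature array follows the same shape.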
32

Vision utility framework : a new approach to vision system development

Afrah, Amir 05 1900 (has links)
We address two aspects of vision-based system development that are not fully exploited in current frameworks: abstraction over low-level details and high-level module reusability. Through an evaluation of existing frameworks, we relate these shortcomings to the lack of a systematic classification of the sub-tasks in vision-based system development. Our approach to both issues is to classify vision into decoupled sub-tasks, thereby defining a clear scope for a vision-based system development framework and its sub-components. First, we decompose the task of vision system development into data management and processing. We then further decompose data management into three components: data access, conversion, and transportation. To verify this approach we present two frameworks: the Vision Utility (VU) framework, which provides abstraction over the data management component, and the Hive framework, which provides data transportation and high-level code reuse. VU provides data management functionality for developers while hiding low-level system details behind a simple yet flexible Application Programming Interface (API). VU mediates the communication between the developer's application, vision processing modules, and data sources by utilizing different frameworks for data access, conversion, and transportation (Hive). We demonstrate VU's ability to abstract over low-level system details by examining a vision system developed with the framework. Hive is a standalone, event-based framework for developing distributed vision-based systems; it provides simple high-level methods for managing the communication, control, and configuration of reusable components. We verify Hive's requirements (reusability and abstraction over inter-module data transportation) by presenting a number of different systems built on the framework from a set of reusable modules.
Through this work we aim to demonstrate that this approach could fundamentally change vision-based system development by providing the necessary abstraction and promoting high-level code reuse.
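Hive's actual API is not given in the abstract; as a hypothetical, heavily simplified illustration of the event-based, reusable-module style it describes, here is an in-process publish/subscribe bus chaining a simulated camera source to a toy processing module (all names invented):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal in-process stand-in for event-based inter-module transport."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, data: Any):
        for handler in self._subscribers[topic]:
            handler(data)

bus = EventBus()
results = []

def threshold_module(frame):
    # Toy reusable processing module: keep pixel values above 128.
    results.append([p for p in frame if p > 128])

bus.subscribe("frames", threshold_module)
bus.publish("frames", [10, 200, 130, 90])  # simulated camera frame
print(results)                              # → [[200, 130]]
```

The real Hive is distributed and handles control and configuration as well; the point here is only that modules stay reusable because they never address each other directly.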
33

Structures for supporting BIM-based automation in the design process / Strukturer för stödjande av BIM-baserad automation i designprocessen

Mukkavaara, Jani January 2018 (has links)
During recent decades, the advent of IT in the construction industry has prompted a gradual shift from manual, paper-based processes to computer-aided design and production. With this shift has come increasing interest in applying building information modelling (BIM) to the overall management of information throughout a building's lifecycle. Implementing BIM and automating its workflows could reduce the time spent on engineering tasks and increase the focus on building performance during the design process. Because of the complexity of the design process, a single BIM application can rarely manage all of its activities; multiple applications, tools, and sources of information must therefore be coupled. The challenges this poses for interoperability and information exchange have received a wealth of attention in research, yet many of these operations still require manual input. Automating parts of a BIM-based workflow depends on the possibilities that exist for exchanging information and controlling its flow. This implies that we need to understand not only the data level but also how system and information structures can be managed to enable automation. The purpose of this thesis was to investigate how structures could be applied at both the system and information levels to support automation within a BIM-based design process, and more specifically how these structures could be used to overcome some of the challenges of information exchange. Three studies were conducted to explore different methods and their potential for achieving automated workflows. The findings were then analysed against a theoretical framework based on structures of systems and information.
The results show that choosing a distributed method for structuring systems allows multiple software applications, tools, and sources of information to be coupled without a single shared schema. The critical component of the distributed system structure is a middleware responsible for controlling the flow of information; when implemented, it manages multiple sources of information, each with its own schema. The results also show that the information travelling between the components of the distributed system can be structured according to its relationships to provide the foundation for a mapping. This structure enables a decomposition of the information so that only what is relevant to the current activity is transferred, which helps resolve the coupling of information at each activity in an automated BIM-based workflow.
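The thesis does not publish its middleware; as a minimal sketch of the mapping idea under invented schemas and field names, the snippet below shows a middleware-style translation step that forwards only the fields the current activity needs:

```python
# Hypothetical source schema for one BIM tool: maps the tool's field
# names to the middleware's neutral target names.
ARCHITECT_SCHEMA = {"wall_id": "id", "wall_area_m2": "area", "u_value": "u"}

def map_record(record, mapping, needed):
    """Translate `record` keys via `mapping`, keeping only the target
    fields in `needed` — the per-activity decomposition step."""
    out = {}
    for src_key, dst_key in mapping.items():
        if dst_key in needed and src_key in record:
            out[dst_key] = record[src_key]
    return out

# An energy-analysis activity only needs area and U-value:
wall = {"wall_id": "W-101", "wall_area_m2": 12.5, "u_value": 0.18}
print(map_record(wall, ARCHITECT_SCHEMA, needed={"area", "u"}))
```

Each source keeps its own schema; only the middleware holds the relationship map, which is what removes the need for one shared schema.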
34

[en] FAULT TOLERANCE IN DISTRIBUTED SYSTEMS / [pt] RECUPERAÇÃO DE ERROS EM SISTEMAS DE PROCESSOS DISTRIBUÍDOS

ALEXANDRE DE REZENDE ABIBE 27 December 2006 (has links)
[en] This dissertation deals with the problem of fault tolerance in distributed systems. It begins with a brief analysis of the origins of the problem and of the solutions found; some resolution methods are then presented. To simulate a distributed system, a multi-tasking operating-system kernel was developed on an IBM PC-XT compatible machine, using MS-DOS (version 3.0 or above) as a server. Finally, two proposals are presented. The first supplies a process with resources that allow recovery from algorithmic faults, using the backward error recovery method. The second uses redundancy in a set of processes over different stations to guarantee that the system as a whole remains operative even with a faulty station.
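The dissertation's kernel-level mechanism is not reproduced here; as a toy sketch of backward error recovery only, with all names invented, a process checkpoints its state before a risky step and rolls back when the step faults:

```python
import copy

class Process:
    """Toy process with backward error recovery: state is checkpointed
    before a risky step and restored if the step fails."""
    def __init__(self):
        self.state = {"counter": 0}
        self._checkpoint = None

    def checkpoint(self):
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        self.state = copy.deepcopy(self._checkpoint)

    def risky_step(self, fail):
        self.checkpoint()
        try:
            self.state["counter"] += 1
            if fail:
                raise RuntimeError("simulated algorithmic fault")
        except RuntimeError:
            self.rollback()   # backward error recovery: undo the step

p = Process()
p.risky_step(fail=False)      # succeeds: counter becomes 1
p.risky_step(fail=True)       # faults: counter rolls back to 1
print(p.state["counter"])     # → 1
```

A real implementation must also coordinate checkpoints across processes to avoid the domino effect, which a single-process sketch cannot show.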
36

Privacy policy-based framework for privacy disambiguation in distributed systems

Alhalafi, Dhafer January 2015 (has links)
With the increasing pervasiveness of distributed systems, now and into the future, there is growing concern for the privacy of users in a world where almost everyone is connected to the internet through numerous devices. Current ways of considering privacy in distributed system development are based on protecting personally identifiable information such as names and national insurance numbers; however, with the abundance of distributed systems it is becoming easier to identify people through information that is not personally identifiable, which heightens privacy concerns. Ideas about privacy have therefore changed and should be reconsidered in the development of distributed systems, and this requires a new way to conceptualise privacy. Despite active effort on privacy and security concerns during the initial design stages of distributed systems, little work has been done on specifying and designing a reliable and meaningful privacy policy framework. Beyond the difficulty of understanding how this earliest stage of the work is carried out, the usual procedure for privacy policy development risks marginalising stakeholders, thereby defeating the object of what such policies are designed to do. This study proposes a new Privacy Policy Framework (PPF) that combines a new method for disambiguating the meaning of privacy among users, owners, and developers of distributed systems with distributed system architecture and technical considerations. Towards the development of the PPF, semi-structured interviews and questionnaires were conducted to determine the current situation regarding privacy policy and technical considerations; these methods were also employed to demonstrate the application and evaluation of the PPF itself.
The study contributes a new understanding of and approach to privacy in distributed systems, and a practical means of achieving user privacy and privacy disambiguation through the development of a privacy button concept.
37

Desenvolvimento de técnicas de anycast na camada de aplicação para a provisão de qualidade de serviço em computação na nuvem / Development of application layer anycast techniques for quality of service provision in cloud computing

Lucas Junqueira Adami 13 October 2015 (has links)
In recent years, the complexity and variety of services available on the Internet have increased, which has driven the search for efficient techniques for routing a client's requests to the best available server, one such technique being application layer anycast (ALA). The objective of this master's research is to develop efficient ways of providing application-layer anycast with quality of service in the context of cloud computing: approaches that are scalable and that select the closest servers with the shortest possible latency. To achieve this goal, a new system, GALA (Global Application Layer Anycast), was proposed. It inherits features from an existing system and employs geolocation as its differentiator to improve overall performance.
Simulation results indicated that the new system, compared with its predecessor, maintains the efficiency of client requests while considerably decreasing their latency. The proposed system was also deployed in a real environment to strengthen the simulation results. With the data obtained, the modelled system was validated and its effectiveness confirmed.
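GALA's actual selection algorithm combines more signals than geography; as a minimal sketch of the geolocation step alone, with invented replica names and coordinates, a request can be routed to the great-circle-nearest replica:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def pick_server(client, servers):
    """Route a request to the geographically nearest replica."""
    return min(servers, key=lambda s: haversine_km(client[0], client[1],
                                                   s["lat"], s["lon"]))

# Hypothetical replicas; names and coordinates are illustrative only.
replicas = [
    {"name": "us-east", "lat": 39.0, "lon": -77.5},
    {"name": "eu-west", "lat": 53.3, "lon": -6.3},
    {"name": "sa-east", "lat": -23.5, "lon": -46.6},
]
print(pick_server((-21.0, -47.8), replicas)["name"])  # client in southeast Brazil
```

Geographic proximity is only a proxy for latency, which is why a full system would combine it with measured round-trip times and server load.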
38

Marine visualization system: an augmented reality approach

Cojoc-Wisernig, Eduard 28 August 2020 (has links)
Sailboat operation must account for a variety of environmental factors, including wind, tidal currents, shore features, and atmospheric conditions. We introduce the first method of rendering an augmented reality (AR) scene for sailing, using visual techniques such as particle-cloud animations to represent environmental aspects like wind and current. The visual content is provided by a hardware/software system that gathers data from various scattered sources on a boat (e.g. instruments), processes the data, and broadcasts the information over a local network to one or more displays that render the immersive 3D graphics. Current technology provides information about environmental factors via a diverse collection of displays rendering data collected by sensors and instruments; this data is typically presented numerically or through rudimentary abstract graphics, with minimal processing and little or no integration of the scattered sources. My goal was to build the first working prototype of a system that centralizes the data collected on a boat and provides an integrated 3D rendering through a unified AR visual interface. Since this research is the first of its kind in several largely unexplored areas, I found that the most fruitful way to evaluate the iterations of the different components was an autobiographical design method. Sailing is the process of controlling various aspects of boat operation to produce propulsion by harnessing wind energy with sails; devising a strategy for safe and adequate sailboat control relies on a solid understanding of the surrounding environment and its behaviour, in addition to many layers of know-how about applying that knowledge.
My research is grouped into three distinct yet interdependent parts: first, a hardware and software system that collects data in order to process and broadcast visual information; second, a graphical interface that provides information through immersive AR graphics; and last, an in-depth investigation and discussion of the problem and potential solutions from a design-thinking perspective. The scope of this investigation is broad, covering aspects from assembling mechanical implements to building electronics with customized sensing capabilities, interfacing with the ship's existing instruments, configuring a local network and server, implementing processing strategies, and broadcasting a WebGL-based AR scene as an immersive visual experience. I also performed a design-thinking investigation that incorporates recent research from the most relevant fields of study (e.g. HCI and visualization) with the ultimate goal of integrating it into a conceptual system and a taxonomy of relevant factors. The term interdisciplinary is the most accurate description of this body of work. At the time of writing, two major players are starting to develop AR-based commercial products for marine navigation: Raymarine (an AR extension of their chart-based data) and Mitsubishi (AR navigation software for commercial/industrial shipping). I am not aware of any marine AR visualization targeted at environmental awareness for sailboats through visualization of wind, tidal currents, and similar factors, and my research constitutes the first documented and published effort to approach this topic. / Graduate
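The thesis's actual broadcast protocol is not specified in the abstract; as a small sketch of the centralization step only, with all field names invented, scattered instrument readings can be merged into one JSON message for the network-connected displays to render:

```python
import json
import time

def make_scene_update(readings):
    """Merge scattered instrument readings into a single JSON message
    that a display could render as one AR scene update.

    Field names here are illustrative, not an actual protocol."""
    return json.dumps({
        "timestamp": readings.get("timestamp", time.time()),
        "wind": {"speed_kn": readings["wind_speed"],
                 "dir_deg": readings["wind_dir"]},
        "current": {"speed_kn": readings["current_speed"],
                    "dir_deg": readings["current_dir"]},
        "heading_deg": readings["heading"],
    })

# Simulated readings from separate sensors, already gathered:
sample = {"timestamp": 1000.0, "wind_speed": 12.5, "wind_dir": 310,
          "current_speed": 1.8, "current_dir": 95, "heading": 278}
msg = make_scene_update(sample)
print(json.loads(msg)["wind"]["speed_kn"])  # → 12.5
```

In the prototype this kind of message would be pushed over the local network (e.g. a WebSocket) to every display running the WebGL scene, keeping all displays in sync from one source of truth.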
39

A Conceptual Framework for Distributed Software Quality Network

Patil, Anushka H. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The advancement of technology has revolutionized the role of software in recent years. Software is used in practically all areas of industry and has become a prime factor in the overall working of companies. With this increase in the use of software, software quality assurance parameters have become more crucial and complex. The quality measurement approaches, standards, and models currently applied in the software industry are extremely divergent, and the correct approach often turns out to be a combination of concepts and techniques from different software assurance approaches [1]. Thus, a platform that provides a single workspace for incorporating multiple software quality assurance approaches would ease the overall software quality process. In this thesis we propose a theoretical framework for distributed software quality assurance that can continuously monitor a source code repository and create a snapshot of the system for any given commit, past or present. The snapshot can be used to create a multi-granular blockchain of the system and its metrics (i.e., metadata), which we believe will let tool developers and vendors participate continuously in assuring the quality and security of systems, remain accessible when required, and be rewarded for their services.
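The thesis describes the blockchain conceptually; as a minimal sketch of hash-chaining per-commit metric snapshots (metric names and values invented), tampering with any past snapshot changes every later hash:

```python
import hashlib
import json

def snapshot_hash(metrics, prev_hash):
    """Chain one snapshot of per-commit quality metrics to its
    predecessor, so altering any past snapshot invalidates all
    later ones."""
    payload = json.dumps({"metrics": metrics, "prev": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Hypothetical metric snapshots for two successive commits:
chain = []
prev = "0" * 64  # genesis value
for metrics in [{"loc": 1200, "coverage": 0.71},
                {"loc": 1250, "coverage": 0.74}]:
    prev = snapshot_hash(metrics, prev)
    chain.append(prev)

print(len(chain), chain[0] != chain[1])
```

A multi-granular variant would keep separate chains (or a Merkle structure) at file, module, and system granularity; the linkage rule is the same at every level.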
40

Network-Based Naval Ship Distributed System Design and Mission Effectiveness using Dynamic Architecture Flow Optimization

Parsons, Mark Allen 16 July 2021 (has links)
This dissertation describes the development and application of a naval ship distributed system architectural framework, Architecture Flow Optimization (AFO), and Dynamic Architecture Flow Optimization (DAFO) to naval ship Concept and Requirements Exploration (CandRE). The architectural framework decomposes naval ship distributed systems into physical, logical, and operational architectures representing the spatial, functional, and temporal relationships of distributed systems, respectively. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based linear programming optimization methods used to design and analyze MPES at a level of detail sufficient to understand system energy flow, define MPES architecture and sizing, model operations, reduce system vulnerability, and improve system reliability. AFO incorporates system topologies, energy coefficient component models, preliminary arrangements, and (nominal and damaged) steady-state scenarios to minimize the energy flow cost required to satisfy all operational scenario demands and constraints. DAFO applies the same principles as AFO and adds a second commodity, data flow. DAFO also integrates with warfighting, operational, and capabilities models that quantify tasks and capabilities through system measures of performance at specific capability nodes. This enables the simulation of operational situations, including MPES configuration and operation, during CandRE. This dissertation provides an overview of the design tools developed to implement this process and these methods, including objective attribute metrics for cost, effectiveness, and risk, a ship synthesis model, hullform exploration, and MPES exploration using designs of experiments (DOEs) and response surface models.
/ Doctor of Philosophy / This dissertation describes the development and application of a warship system architectural framework, Architecture Flow Optimization (AFO), and Dynamic Architecture Flow Optimization (DAFO) to warship Concept and Requirements Exploration (CandRE). The architectural framework decomposes warship systems into physical, logical, and operational architectures representing the spatial, functional, and time-based relationships of systems, respectively. This decomposition greatly simplifies the Mission, Power, and Energy System (MPES) design process for use in CandRE. AFO and DAFO are network-based linear programming optimization methods used to design and analyze MPES at a level of detail sufficient to understand system energy usage, define MPES connections and sizing, model operations, reduce system vulnerability, and improve system reliability. AFO incorporates system templates, simple physics- and energy-based component models, preliminary arrangements, and simple undamaged/damaged scenarios to minimize the energy flow usage required to satisfy all operational scenario demands and constraints. DAFO applies the same principles and adds a second commodity, data flow, representing system operation. DAFO also integrates with warfighting, operational, and capabilities models that quantify tasks and capabilities through system measures of performance. This enables the simulation of operational situations, including MPES configuration and operation, during CandRE. This dissertation provides an overview of the design tools developed to implement this process and these methods, including optimization objective attribute metrics for cost, effectiveness, and risk.
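AFO itself is a linear program over ship-specific scenarios and constraints not reproduced here; as a loose illustration of the underlying network-flow idea only, a toy min-cost flow solver can route energy from a generator through switchboards to a load (topology, capacities, and costs all invented):

```python
def min_cost_flow(n, edges, source, sink, demand):
    """Toy min-cost flow via successive shortest paths (Bellman-Ford).
    edges: list of [u, v, capacity, unit_cost]. Returns the total cost
    of sending `demand` units, or None if the demand cannot be met."""
    INF = float("inf")
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        fwd = {"to": v, "cap": cap, "cost": cost}
        bwd = {"to": u, "cap": 0, "cost": -cost}
        fwd["rev"], bwd["rev"] = bwd, fwd
        graph[u].append(fwd)
        graph[v].append(bwd)
    total = 0
    while demand > 0:
        dist = [INF] * n
        dist[source] = 0
        parent = [None] * n                 # (tail node, edge) per node
        for _ in range(n - 1):              # Bellman-Ford relaxation
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    if e["cap"] > 0 and dist[u] + e["cost"] < dist[e["to"]]:
                        dist[e["to"]] = dist[u] + e["cost"]
                        parent[e["to"]] = (u, e)
        if dist[sink] == INF:
            return None                     # demand cannot be satisfied
        push, v = demand, sink
        while v != source:                  # bottleneck on cheapest path
            u, e = parent[v]
            push = min(push, e["cap"])
            v = u
        v = sink
        while v != source:                  # apply the augmentation
            u, e = parent[v]
            e["cap"] -= push
            e["rev"]["cap"] += push
            total += push * e["cost"]
            v = u
        demand -= push
    return total

# Toy MPES-style network: generator (0) feeds two switchboards (1, 2)
# that feed one load (3); numbers are illustrative, not from the thesis.
net = [[0, 1, 10, 1], [0, 2, 10, 2], [1, 3, 5, 1], [2, 3, 10, 1]]
print(min_cost_flow(4, net, source=0, sink=3, demand=8))  # → 19
```

Returning None when the demand is infeasible mirrors how a damaged-scenario run can reveal that a topology cannot serve its loads, which is the vulnerability signal AFO exploits at much larger scale.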
