  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Riskbedömning och underhållsstrategier för ABB:s högeffektlaboratorium i Västerås / Risk assessment and maintenance strategies for ABB's High Power Laboratory in Västerås

Nyberg, Johan January 2015 (has links)
Inom de närmaste åren står ABB:s högeffektlaboratorium i Västerås inför omfattande reinvesteringar. För att underlätta vid beslutsarbetet, samt lägga grunden till optimala underhållsprogram vid nyanskaffning, är det viktigt med en tydlig underhållsstrategi samt metoder för att bedöma de risker som kan knytas till olika anläggningsdelar. Rapporten syftar till att lyfta fram sådana strategier och metoder för riskbedömning. Arbetet bedrivs i form av en litteraturstudie inom ämnesområdena riskhantering och underhållsteknik, följt av en kvalitativ studie av de förhållanden som råder i anläggningen. Resultatet är en underhållsstrategi, med utgångspunkt i ABB Corporate Researchs övergripande affärsstrategier, där kvalitativa riskbedömningar används som beslutsunderlag för underhållsschemat. Genom att dela in anläggningens olika enheter i riskklasser kan underhållsinsatserna anpassas efter de ekonomiska och arbetsmiljömässiga risker som dessa medför. Det datoriserade underhållssystemet står för den sammanhållande strukturen; därför är det av stor vikt att systemets information är korrekt, komplett och sökbar. Vid högeffektlaboratoriet används ofta riskbedömningar som ett verktyg i det löpande arbetsmiljöarbetet, och dessutom är tillgången på teknisk kompetens inom ABB Corporate Researchs egen personal god. Mot bakgrund av detta ser förutsättningarna goda ut för en implementering av kvalitativa riskbedömningar i underhållsarbetet. / In the upcoming years, ABB's high power laboratory in Västerås is facing a period of reinvestments. In order to facilitate decision-making, and to lay the foundation for effective maintenance programs for new equipment, it is important to have a clearly defined maintenance strategy, including methods to adequately assess the plant and its risk factors.
The aim of this report is to suggest such strategies and risk assessment methods. A literature study of maintenance engineering, followed by a qualitative study of the high power laboratory's prerequisites, leads to a maintenance strategy based on the primary business strategies of ABB Corporate Research. Assigning a risk class to each component enables a risk-adapted maintenance schedule to be deployed. These risk classes are the weighted result of qualitative risk assessments that include both economic and human consequences. The cohesive structure in this strategy is the Computerized Maintenance Management System (CMMS), hence the importance of its data being correct, complete and searchable. At the high power laboratory, risk assessments in areas related to personal safety are commonly used, and the level of technical competence within ABB Corporate Research's own personnel is high. Thus, implementing a qualitative risk assessment approach to maintenance management should be fairly straightforward.
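The risk-class scheme described in the abstract lends itself to a small illustration: map consequence scores to a qualitative class, and let the class set the maintenance interval. The scores, weights, class boundaries and intervals below are hypothetical, not taken from the thesis:

```python
# Hypothetical sketch of risk-class-driven maintenance intervals.
# A risk class is a weighted combination of economic and safety
# consequence scores (1-5 each); weights and boundaries are illustrative.

def risk_class(economic: int, safety: int,
               w_econ: float = 0.4, w_safe: float = 0.6) -> str:
    """Map consequence scores (1-5) to a qualitative risk class."""
    score = w_econ * economic + w_safe * safety
    if score >= 4.0:
        return "A"   # highest risk: shortest maintenance interval
    if score >= 2.5:
        return "B"
    return "C"       # lowest risk: run-to-failure may be acceptable

# Maintenance interval (months) per class -- purely illustrative values.
INTERVALS = {"A": 3, "B": 12, "C": 36}

def maintenance_interval(economic: int, safety: int) -> int:
    return INTERVALS[risk_class(economic, safety)]
```

With these illustrative values, a unit scoring high on both economic and safety consequences lands in class A and is inspected most often.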
32

Anskaffningsprocessens inverkan på leveransprecisionen hos kundorderdrivna företag / The procurement process's impact on delivery dependability at customer-order-driven companies

Nordin, Johannes, Buebo, Ludvig January 2015 (has links)
Bakgrund – För kundorderdrivna företag lyfts vikten av att hålla en hög leveransprecision. Avsaknaden av material är något som påverkar leveransprecisionen negativt; genom att förbättra anskaffningsprocessen ökar chanserna till att köpt material inkommer när det efterfrågas. Syfte – Syftet med denna studie är att undersöka hur leveransprecisionen kan ökas genom att förbättra anskaffningsprocessen hos ett kundorderdrivet företag. För att uppnå syftet kommer följande frågeställningar att besvaras: 1. Vilka brister kan identifieras i en anskaffningsprocess hos ett kundorderdrivet företag? 2. Hur kan bristerna hanteras för att uppnå en ökad leveransprecision? Metod och genomförande – För att uppnå studiens syfte och besvara de två frågeställningarna har en fallstudie utförts på ett kundorderdrivet företag. Genom litteraturstudier har ett teoretiskt ramverk skapats för att ge en djup kunskapsgrund inom området. Observationer, intervjuer och dokumentstudier har utförts på fallföretaget för att samla in empiri som sedan har analyserats mot den insamlade teorin. Resultat – Studiens resultat bygger på den insamlade empirin och genomförda litteraturstudier. Resultatet visar vilka brister som författarna har identifierat inom en anskaffningsprocess som har en påverkan på leveransprecisionen ut till kunderna. Genom att besvara studiens två frågeställningar visar resultatet att bristande informationsflöde, manuell hantering, brist på rutiner och bristande helhetssyn är samtliga faktorer som har en påverkan på anskaffningsprocessen och i slutändan fallföretagets leveransprecision. Implikationer – De framtagna lösningarna bör implementeras på fallföretaget först efter det att en kartläggning av samtliga processer på fallföretaget har utförts, för att finna gemensamma utmaningar och undvika suboptimeringar.
Slutsatser – Enligt författarna hänger många av de identifierade bristerna ihop med avsaknaden av en helhetssyn i anskaffningsprocessen och de ingående aktiviteterna och avdelningarna hos fallföretaget. / Background – For customer-order-driven companies it is very important to maintain high delivery dependability. A lack of materials affects delivery dependability negatively, and by improving the procurement process the chances increase that purchased materials are received when requested. Purpose – The purpose of this study is to examine how improving the procurement process in a customer-order-driven company can increase delivery dependability. To achieve the purpose, the following questions will be answered: 1. Which areas of improvement can be identified in the procurement process at a customer-order-driven company? 2. How can the areas of improvement be handled to achieve increased delivery dependability? Method and implementation – In order to achieve the purpose of the study and answer the two questions, a case study was conducted at a customer-order-driven company. Through literature studies, a theoretical framework has been created to provide a deep knowledge base in the field. Observations, interviews and document studies were conducted at the case company to collect empirical data, which was then analysed against the collected theory. Findings – The results of the study are based on the collected empirical data and the literature studies performed. The results show which areas of improvement the authors have identified in the procurement process that have an impact on delivery dependability. By answering the study's two questions, the results show that deficient information flow, manual handling, lack of procedures and lack of a holistic view are all factors that have an impact on the procurement process and ultimately the case company's delivery dependability.
Implications – The proposed solutions should be implemented at the case company only after mapping all processes within the case company, in order to find common challenges and avoid sub-optimization. Conclusion – According to the authors, many of the identified areas of improvement stem from the lack of a holistic view of the procurement process and the activities and departments involved at the case company.
33

Sistemų su klaidų įterpimu formalizavimas / Formalization of systems with the possibility of fault insertion

Blažaitytė, Eglė 16 August 2007 (has links)
Kiekvienos sistemos kūrimo tikslas yra veikianti, gyvybinga ir saugi sistema, teikianti norimus ir patikimus rezultatus. Sistemos saugumas – tai sistemos savybė, reiškianti, kad sistemos funkcionavimo metu neįvyks jokia nenumatyta situacija. Gyvybingumas – sistemos reakcija į tam tikrus įvykius ir sugebėjimas atlikti nustatytas užduotis bei pateikti teisingus sprendimus arba rezultatus. Norint sukurti tokią sistemą, kuri ateityje tenkins nustatytus reikalavimus, yra labai svarbu iš anksto nustatyti jos formalią reikalavimų specifikaciją, nes nuo to priklauso galutinis produktas – kiek įvairių situacijų, į kurias sistema gali patekti, ar bus numatyta, kaip ji susidoros su atitinkamais išoriniais ar vidiniais įvykiais. Tokią specifikaciją galima praplėsti įvairiomis modifikacijomis, kurios gali padėti aptikti potencialias klaidas sistemoje, kurias įvertinus kūrimo metu, galima sistemai suteikti tolerancijos klaidoms savybę. / In order to create a fault-tolerant system, very clear requirements should be prepared and all possible fault events should be analyzed. This can be done properly using any system modeling formalism. In this work, an alternating bit protocol system was chosen to be formalized and analyzed from a fault-tolerant software perspective. The alternating bit protocol was modified in two ways – its functionality under perfect circumstances, and with added faults, in order to make the system fault tolerant. Both cases were formalized using the PLA and DEVS formalization methods. After researching different formalisms and adjusting FDEVS to the alternating bit protocol, the FPLA formalization method was created.
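The alternating bit protocol that this thesis formalizes can be sketched as a small simulation: a one-bit sequence number lets the receiver discard duplicates caused by lost acknowledgements. This is a generic ABP illustration in Python, not the PLA/DEVS/FPLA formalization itself; the loss probability and step limit are arbitrary:

```python
import random

# Minimal alternating-bit protocol simulation over a lossy channel.
# Delivers messages in order, exactly once, despite random frame/ack loss.

def abp_transfer(messages, loss_prob=0.3, rng=None, max_steps=10_000):
    rng = rng or random.Random(0)
    delivered = []
    bit = recv_expect = 0          # sender bit / receiver's expected bit
    i = steps = 0
    while i < len(messages) and steps < max_steps:
        steps += 1
        # Sender transmits (messages[i], bit); the frame may be lost.
        if rng.random() < loss_prob:
            continue               # frame lost: sender times out, retransmits
        if bit == recv_expect:     # new frame: deliver exactly once
            delivered.append(messages[i])
            recv_expect ^= 1
        # Receiver acknowledges with the frame's bit; the ack may be lost.
        if rng.random() < loss_prob:
            continue               # ack lost: sender retransmits same frame
        i += 1                     # ack received: advance and flip the bit
        bit ^= 1
    return delivered
```

With a lossless channel the loop delivers one message per step; with losses it retransmits until each frame and its ack get through.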
34

Fault-Tolerance Strategies and Probabilistic Guarantees for Real-Time Systems

Aysan, Hüseyin January 2012 (has links)
Ubiquitous deployment of embedded systems is having a substantial impact on our society, since they interact with our lives in many critical real-time applications. Typically, embedded systems used in safety or mission critical applications (e.g., aerospace, avionics, automotive or nuclear domains) work in harsh environments where they are exposed to frequent transient faults such as power supply jitter, network noise and radiation. They are also susceptible to errors originating from design and production faults. Hence, they have the design objective to maintain the properties of timeliness and functional correctness even under error occurrences. Fault-tolerance plays a crucial role towards achieving dependability, and the fundamental requirement for the design of effective and efficient fault-tolerance mechanisms is a realistic and applicable model of potential faults and their manifestations. An important factor to be considered in this context is the random nature of faults and errors, which, if addressed in the timing analysis by assuming a rigid worst-case occurrence scenario, may lead to inaccurate results. It is also important that the power, weight, space and cost constraints of embedded systems are addressed by efficiently using the available resources for fault-tolerance. This thesis presents a framework for designing predictably dependable embedded real-time systems by jointly addressing the timeliness and the reliability properties. It proposes a spectrum of fault-tolerance strategies particularly targeting embedded real-time systems. Efficient resource usage is attained by considering the diverse criticality levels of the systems' building blocks. The fault-tolerance strategies are complemented with the proposed probabilistic schedulability analysis techniques, which are based on a comprehensive stochastic fault and error model.
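The point about the random nature of faults, and the inaccuracy of assuming a rigid worst-case occurrence scenario, can be illustrated with a Monte Carlo sketch: estimate the probability that a task misses its deadline when transient faults arrive as a Poisson process and each fault costs a recovery re-execution. This is only an illustration of the underlying idea, not the thesis's analysis technique, and all parameters below are hypothetical:

```python
import random

# Monte Carlo estimate of deadline-miss probability under transient faults.
# Faults arrive as a Poisson process with rate `fault_rate` (faults per
# time unit); each fault extends the response time by `recovery`.

def deadline_miss_probability(wcet, deadline, fault_rate, recovery,
                              trials=100_000, rng=None):
    rng = rng or random.Random(42)
    misses = 0
    for _ in range(trials):
        response = wcet
        # Sample fault arrivals via exponential inter-arrival times,
        # counting every fault that lands inside the (growing) window.
        t = rng.expovariate(fault_rate)
        while t < response:
            response += recovery   # each fault costs one recovery slot
            t += rng.expovariate(fault_rate)
        if response > deadline:
            misses += 1
    return misses / trials
```

A rigid worst-case analysis would declare the task unschedulable whenever even one fault breaks the deadline; the estimate above instead quantifies how likely that actually is.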
35

Safety in Action : Designing a Crew Resource Management prototype for N-USOC

Valle, Rune Kristiansen January 2014 (has links)
High-risk industries are operating in an increasingly complex and dynamic environment; this leads to new perspectives on the role of the human operator in the safety management system, encouraging organizations to exploit the uniquely human capabilities of operator teams in order to maintain safe operations. Crew resource management is a popular framework for training operator teams, but has not yet been adapted to accommodate this theoretical development in any major way. Through an action research project within N-USOC, a control room supporting science missions at the International Space Station, a prototypical CRM course is developed for a distributed team working in a complex-dynamic environment, guided by theoretical analysis of safety literature and by the specific needs of the N-USOC context. Adaptive decision making strategies and skills are identified as important success factors for the human operator, along with developing team processes to increase the team capacity for managing safety margins. For N-USOC operators, building this desired adaptive expertise while learning how to manage workload and utilize domain expertise in time-critical situations is especially important. While the development of CRM training for N-USOC is not complete, the study represents a foundation to build upon for the organization, and a theoretical contribution to safety research.
36

Providing Adaptability in Survivable Systems through Situation Awareness

Öster, Daniel January 2006 (has links)
System integration, interoperability, just-in-time delivery, window of opportunity, and dust-to-dust optimization are all keywords of our computerized future. Survivability is an important concept that, together with dependability and quality of service, is a key issue in the systems of the future, i.e. infrastructural systems, business applications, and everyday desktop applications. The importance of dependable systems and the widespread usage of dependable systems, together with the complexity of those systems, make middleware and frameworks for survivability imperative to the system builder of the future. This thesis presents a simulation approach to investigate the effect on data survival when the defending system uses knowledge of the current situation to protect the data. The results show the importance of situation awareness in avoiding wasted resources. A number of characteristics of the situational information provided are identified, together with ways in which this information may be used to optimize the system.
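The simulation idea described above can be caricatured in a few lines: a defender that knows which nodes are actually under attack wastes no protection resources, while a blind defender spreads the same budget at random. The node counts, rounds and attack model below are invented purely for illustration:

```python
import random

# Toy simulation: a defender protects k of n data nodes each round.
# With situation awareness it protects exactly the attacked nodes;
# without it, it allocates the same k protection slots blindly.

def surviving_nodes(n=20, k=5, rounds=30, aware=True, rng=None):
    rng = rng or random.Random(1)
    alive = set(range(n))
    for _ in range(rounds):
        if not alive:
            break
        attacked = set(rng.sample(sorted(alive), min(k, len(alive))))
        if aware:
            protected = attacked                       # defend where attacked
        else:
            protected = set(rng.sample(range(n), k))   # blind allocation
        alive -= (attacked - protected)                # unprotected nodes die
    return len(alive)
```

The aware defender keeps every node alive by construction; the blind one loses data steadily, which is the resource-wasting effect the abstract refers to.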
37

Performance et fiabilité des protocoles de tolérance aux fautes / Towards Performance and Dependability Benchmarking of Distributed Fault Tolerance Protocols

Gupta, Divya 18 March 2016 (has links)
A l'ère de l'informatique omniprésente et à la demande, où les applications et les services sont déployés sur des infrastructures bien gérées et approvisionnées par de grands fournisseurs d'informatique en nuage (Cloud Computing), tels Amazon, Google, Microsoft, Oracle, etc., la performance et la fiabilité de ces systèmes sont devenues des objectifs primordiaux. Cette informatique a rendu particulièrement nécessaire la prise en compte des facteurs de la Qualité de Service (QoS), tels que la disponibilité, la fiabilité, la vivacité, la sûreté et la sécurité, dans la définition complète d'un système. En effet, les systèmes informatiques doivent être résistants aussi bien aux défaillances qu'aux attaques, et ce afin d'éviter qu'ils ne deviennent inaccessibles, n'entraînent des coûts de maintenance importants et la perte de parts de marché. L'augmentation de la taille et de la complexité des systèmes en nuage rend les défauts de plus en plus communs, augmentant la fréquence des pannes, et n'offrant donc plus la garantie de service visée. Les fournisseurs d'informatique en nuage font ainsi face épisodiquement à des fautes arbitraires, dites byzantines, durant lesquelles les systèmes ont des comportements imprévisibles. Ce constat a amené les chercheurs à s'intéresser de plus en plus à la tolérance aux fautes byzantines (BFT) et à proposer de nombreux prototypes de protocoles et logiciels. Ces solutions de BFT visent non seulement à fournir des services cohérents et continus malgré des défaillances arbitraires, mais cherchent aussi à réduire le coût et l'impact sur les performances des systèmes sous-jacents. Néanmoins, les prototypes BFT ont le plus souvent été évalués dans des contextes ad hoc, soit dans des conditions idéales, soit en limitant les scénarios de fautes. C'est pourquoi ces protocoles de BFT n'ont pas réussi à convaincre les professionnels des systèmes distribués de les adopter.
Cette thèse entend répondre à ce problème en proposant un environnement complet de banc d'essai dont le but est de faciliter la création de scénarios d'exécution utilisables pour aussi bien analyser que comparer l'efficacité et la robustesse des propositions BFT existantes. Les contributions de cette thèse sont les suivantes. Nous introduisons une architecture générique pour analyser des protocoles distribués. Cette architecture comprend des composants réutilisables permettant la mise en œuvre d'outils de mesure des performances et d'analyse de la fiabilité des protocoles distribués. Elle permet de définir les charges de travail et de défaillance, ainsi que leur injection, et fournit des statistiques de performance et de fiabilité, de même que des statistiques de bas niveau sur le système et le réseau. En outre, cette thèse présente les bénéfices d'une architecture générale. Nous présentons BFT-Bench, le premier système de banc d'essai de la BFT, pour l'analyse et la comparaison d'un panel de protocoles BFT utilisés dans des situations identiques. BFT-Bench permet aux utilisateurs d'évaluer différentes implémentations pour lesquelles ils définissent des comportements défaillants avec différentes charges de travail. Il permet de déployer automatiquement les protocoles BFT étudiés dans un environnement distribué et offre la possibilité de suivre et de rendre compte des aspects performance et fiabilité. Parmi nos résultats, nous présentons une comparaison de certains protocoles BFT actuels, réalisée avec BFT-Bench, en définissant différentes charges de travail et différents scénarios de fautes. Cette application réelle de BFT-Bench en démontre l'efficacité. Le logiciel BFT-Bench a été conçu en ce sens pour aider les utilisateurs à comparer efficacement différentes implémentations de BFT et à apporter des solutions effectives aux lacunes identifiées des prototypes BFT.
De plus, cette thèse défend l'idée que les techniques BFT sont nécessaires pour assurer un fonctionnement continu et correct des systèmes distribués confrontés à des situations critiques. / In the modern era of on-demand, ubiquitous computing, where applications and services are deployed on well-provisioned, well-managed infrastructures administered by large cloud providers such as Amazon, Google, Microsoft, Oracle, etc., performance and dependability of the systems have become primary objectives. Cloud computing has made Quality-of-Service (QoS) factors such as availability, reliability, liveness, safety and security essential to the complete definition of a system. Indeed, computing systems must be resilient in the presence of failures and attacks to prevent them from becoming inaccessible, which can lead to expensive maintenance costs and loss of business. As cloud systems grow in size and complexity, faults occur more commonly, resulting in frequent cloud outages and failures to guarantee QoS. Cloud providers have seen episodic incidents of arbitrary (i.e., Byzantine) faults, where systems demonstrate unpredictable behaviour, including incorrect responses to client requests, corrupt messages, intentional message delays, disobeying the ordering of requests, etc. This has led researchers to study Byzantine Fault Tolerance (BFT) extensively and to propose numerous protocols and software prototypes. These BFT solutions not only provide consistent and available services despite arbitrary failures, they also intend to reduce the cost and performance overhead incurred by the underlying systems. However, BFT prototypes have been evaluated in ad-hoc settings, considering either ideal conditions or very limited fault scenarios. This has failed to convince practitioners to adopt BFT protocols in distributed systems.
Some question the applicability of expensive and complex BFT protocols for tolerating arbitrary faults, while others remain skeptical of the maturity of BFT techniques. This thesis addresses precisely this problem and presents a comprehensive benchmarking environment which eases the setup of execution scenarios to analyze and compare the effectiveness and robustness of existing BFT proposals. Specifically, the contributions of this dissertation are as follows. First, we introduce a generic architecture for benchmarking distributed protocols. This architecture comprises reusable components for building a benchmark for performance and dependability analysis of distributed protocols. The architecture allows defining a workload and a faultload, and their injection. It also produces performance, dependability, and low-level system and network statistics. Furthermore, the thesis presents the benefits of such a general architecture. Second, we present BFT-Bench, the first BFT benchmark, for analyzing and comparing representative BFT protocols under identical scenarios. BFT-Bench allows end-users to evaluate different BFT implementations under user-defined faulty behaviors and varying workloads. It automatically deploys these BFT protocols in a distributed setting, with the ability to monitor and report on performance and dependability aspects. In our results, we empirically compare some existing state-of-the-art BFT protocols under various workloads and fault scenarios with BFT-Bench, demonstrating its effectiveness in practice. Overall, this thesis aims to make BFT benchmarking easy for developers and end-users of BFT protocols to adopt. The BFT-Bench framework is intended to help users perform efficient comparisons of competing BFT implementations, and to incorporate effective solutions to the loopholes detected in BFT prototypes.
Furthermore, this dissertation strengthens the belief in the need for BFT techniques to ensure the correct and continued progress of distributed systems during critical fault occurrences.
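The workload/faultload scenario definition that such a benchmark needs might look like the following sketch. The field names and structure are purely illustrative and do not reflect BFT-Bench's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical scenario descriptor for a BFT benchmark: a client workload
# plus a faultload of injected misbehaviours. Illustrative names only.

@dataclass
class Workload:
    clients: int
    request_size_bytes: int
    duration_s: int

@dataclass
class FaultEvent:
    at_s: int       # injection time, seconds into the run
    replica: int    # which replica misbehaves
    kind: str       # e.g. "crash", "message-delay", "corrupt-reply"

@dataclass
class Scenario:
    protocol: str   # e.g. "PBFT"
    replicas: int
    workload: Workload
    faultload: list = field(default_factory=list)

    def max_tolerated_faults(self) -> int:
        # Standard bound: n = 3f + 1 replicas tolerate f Byzantine faults.
        return (self.replicas - 1) // 3
```

The n = 3f + 1 bound encoded above is the classical replica requirement for Byzantine fault tolerant replication protocols such as PBFT; a scenario injecting more than f simultaneous faults tests behaviour beyond the protocol's guarantee.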
38

La modélisation et le contrôle des services BigData : application à la performance et la fiabilité de MapReduce / Modeling and control of cloud services : application to MapReduce performance and dependability

Berekmeri, Mihaly 18 November 2015 (has links)
Le grand volume de données généré par nos téléphones mobiles, tablettes, ordinateurs, ainsi que nos montres connectées présente un défi pour le stockage et l'analyse. De nombreuses solutions ont émergé dans l'industrie pour traiter cette grande quantité de données, la plus populaire d'entre elles étant MapReduce. Bien que la complexité de déploiement des systèmes informatiques soit en constante augmentation, la disponibilité permanente et la rapidité du temps de réponse restent une priorité. En outre, avec l'émergence des solutions de virtualisation et du cloud, les environnements de fonctionnement sont devenus de plus en plus dynamiques. Par conséquent, assurer les contraintes de performance et de fiabilité d'un service MapReduce pose un véritable challenge. Dans cette thèse, les problématiques de garantie de la performance et de la disponibilité de services de cloud MapReduce sont abordées en utilisant une approche basée sur la théorie du contrôle. Pour commencer, plusieurs modèles dynamiques d'un service MapReduce exécutant simultanément de multiples tâches sont introduits. Par la suite, plusieurs lois de contrôle assurant les différents objectifs de qualité de service sont synthétisées. Des contrôleurs classiques par retour de sortie avec feedforward garantissant les performances de service ont d'abord été développés. Afin d'adapter nos contrôleurs au cloud, tout en minimisant le nombre de reconfigurations et les coûts, une nouvelle architecture de contrôle événementiel a été mise en œuvre. Finalement, l'architecture de contrôle optimal MR-Ctrl a été développée. C'est la première solution à fournir aux systèmes MapReduce des garanties en termes de performances et de disponibilité, tout en minimisant le coût. Les approches de modélisation et de contrôle ont été évaluées à la fois en simulation et en expérimentation sous MRBS, qui est une suite de tests complète pour évaluer la performance et la fiabilité des systèmes MapReduce.
Les tests ont été effectués en ligne sur un cluster MapReduce de 60 nœuds exécutant une tâche de calcul intensif de type Business Intelligence. Nos expériences montrent que le contrôle ainsi conçu peut garantir les contraintes de performance et de disponibilité. / The amount of raw data produced by everything from our mobile phones, tablets and computers to our smart watches brings novel challenges in data storage and analysis. Many solutions have arisen in the industry to treat these large quantities of raw data, the most popular being the MapReduce framework. However, while the deployment complexity of such computing systems is steadily increasing, continuous availability and fast response times are still the expected norm. Furthermore, with the advent of virtualization and cloud solutions, the environments where these systems need to run are becoming more and more dynamic. Therefore, ensuring the performance and dependability constraints of a MapReduce service still poses significant challenges. In this thesis we address this problem of guaranteeing the performance and availability of MapReduce-based cloud services, taking an approach based on control theory. We develop the first dynamic models of a MapReduce service running a concurrent workload. Furthermore, we develop several control laws to ensure different quality of service objectives. First, classical feedback and feedforward controllers are developed to guarantee service performance. To further adapt our controllers to the cloud, for instance by minimizing the number of reconfigurations and their cost, a novel event-based control architecture is introduced for performance management. Finally, we develop the optimal control architecture MR-Ctrl, which is the first solution to provide guarantees in terms of both performance and dependability for MapReduce systems, while keeping cost to a minimum.
All the modeling and control approaches are evaluated both in simulation and experimentally using MRBS, a comprehensive benchmark suite for evaluating the performance and dependability of MapReduce systems. Validation experiments were run in a real 60-node Hadoop MapReduce cluster, running a data-intensive Business Intelligence workload. Our experiments show that the proposed techniques can successfully guarantee performance and dependability constraints.
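The event-based control idea mentioned above, reconfiguring only when the measured performance leaves a deadband around its target so as to limit the number of reconfigurations, can be sketched as follows. The gain, threshold and node limits are hypothetical values for illustration, not the thesis's tuned controller:

```python
# Illustrative event-based proportional controller for cluster sizing:
# resize the MapReduce cluster only when the measured service time
# deviates from its target by more than a threshold (the "event").

class EventBasedController:
    def __init__(self, target_s, gain=0.5, threshold_s=2.0,
                 min_nodes=1, max_nodes=60):
        self.target = target_s
        self.gain = gain
        self.threshold = threshold_s
        self.min_nodes, self.max_nodes = min_nodes, max_nodes

    def step(self, measured_s, nodes):
        """Return the new node count (unchanged inside the deadband)."""
        error = measured_s - self.target
        if abs(error) <= self.threshold:
            return nodes                      # no event: no reconfiguration
        delta = round(self.gain * error)      # slower than target -> add nodes
        return max(self.min_nodes, min(self.max_nodes, nodes + delta))
```

The deadband is what distinguishes event-based control from a periodically sampled controller: small transient deviations never trigger the (costly) cluster reconfiguration.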
40

A tool for automatic formal analysis of fault tolerance

Nilsson, Markus January 2005 (has links)
The use of computer-based systems is rapidly increasing, and such systems can now be found in a wide range of applications, including safety-critical ones such as cars and aircraft. To make the development of such systems more efficient, there is a need for tools for automatic safety analysis, such as analysis of fault tolerance. In this thesis, a tool for automatic formal analysis of fault tolerance was developed. The tool is built on top of the existing development environment for the synchronous language Esterel, and provides output that can be visualised in the Item toolkit for fault tree analysis (FTA). The development of the tool demonstrates how fault tolerance analysis based on formal verification can be automated. The generated output from the fault tolerance analysis can be represented as a fault tree that is familiar to engineers from traditional FTA. The work also demonstrates that interesting attributes of the relationship between a critical fault combination and the input signals can be generated automatically. Two case studies were used to test and demonstrate the functionality of the developed tool: a fault tolerance analysis was performed on a hydraulic leakage detection system, a real industrial system, as well as on a synthetic system modeled for this purpose.
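The kind of fault tree such a tool emits can be evaluated in a few lines. This is a generic AND/OR fault tree probability calculation under an independence assumption between basic events, shown only to illustrate FTA output, not the thesis's Esterel-based analysis:

```python
# Top-event probability of an AND/OR fault tree from independent
# basic-event probabilities. A node is either a basic-event name or a
# ("AND"|"OR", [children]) pair.

def tree_probability(node, probs):
    if isinstance(node, str):
        return probs[node]
    gate, children = node
    p_children = [tree_probability(c, probs) for c in children]
    if gate == "AND":              # all children must fail
        out = 1.0
        for p in p_children:
            out *= p
        return out
    if gate == "OR":               # P(union) = 1 - prod(1 - p_i)
        out = 1.0
        for p in p_children:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(f"unknown gate {gate!r}")
```

For example, a top event that occurs if a pump fails outright, or if both a sensor and its backup fail, is the tree `("OR", [("AND", ["sensor_fail", "backup_fail"]), "pump_fail"])`.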
