51

A Runtime Safety Analysis Concept for Open Adaptive Systems

Kabir, Sohag, Sorokos, I., Aslansefat, K., Papadopoulos, Y., Gheraibia, Y., Reich, J., Saimler, M., Wei, R. 11 October 2019 (has links)
Yes / In the automotive industry, modern cyber-physical systems feature cooperation and autonomy. Such systems share information to enable collaborative functions, allowing dynamic component integration and architecture reconfiguration. Given the safety-critical nature of the applications involved, an approach for addressing safety in the context of reconfiguration impacting functional and non-functional properties at runtime is needed. In this paper, we introduce a concept for runtime safety analysis and decision input for open adaptive systems. We combine static safety analysis and evidence collected during operation to analyse, reason and provide online recommendations to minimize deviation from a system’s safe states. We illustrate our concept via an abstract vehicle platooning system use case. / DEIS H2020 Project under Grant 732242.
52

Dependability of the Internet of Things: current status and challenges

Abdulhamid, Alhassan, Kabir, Sohag, Ghafir, Ibrahim, Lei, Ci 03 February 2023 (has links)
Yes / Advances in the Internet of Things (IoT) have substantially contributed to the automation of modern societies by making the physical things around us more interconnected and remotely controllable over the internet. This technological progress has inevitably created an intelligent society in which various mechatronic systems are becoming increasingly efficient, innovative, and convenient. Undoubtedly, the IoT paradigm will continue to impact human life by providing efficient control of the environment with minimum human intervention. However, despite the ubiquity of IoT devices in modern society, the dependability of IoT applications remains a crucial challenge. Accordingly, this paper systematically reviews the current status and challenges of IoT dependability frameworks. The review shows that existing IoT dependability frameworks are mainly based on informal reliability models. These informal reliability models cannot effectively support a unified treatment of the safety faults and cyber-security threats of IoT systems. Additionally, the existing frameworks are unable to deal with conflicting interactions between co-located IoT devices and with the dynamic features of self-adaptive, reconfigurable, and other autonomous IoT systems. To this end, this paper suggests the design of a novel model-based dependability framework for quantifying safety faults and cyber-security threats, as well as the interdependencies between safety and cyber-security, in IoT ecosystems. Robust approaches for dealing with conflicting interactions between co-located IoT systems and with the dynamic behaviours of IoT systems in reconfigurable and other autonomous systems are also required.
53

A Runtime Safety Analysis Concept for Open Adaptive Systems

Kabir, Sohag, Sorokos, I., Aslansefat, K., Papadopoulos, Y., Gheraibia, Y., Reich, J., Saimler, M., Wei, R. 18 October 2019 (has links)
No / In the automotive industry, modern cyber-physical systems feature cooperation and autonomy. Such systems share information to enable collaborative functions, allowing dynamic component integration and architecture reconfiguration. Given the safety-critical nature of the applications involved, an approach for addressing safety in the context of reconfiguration impacting functional and non-functional properties at runtime is needed. In this paper, we introduce a concept for runtime safety analysis and decision input for open adaptive systems. We combine static safety analysis and evidence collected during operation to analyse, reason and provide online recommendations to minimize deviation from a system’s safe states. We illustrate our concept via an abstract vehicle platooning system use case. / This conference paper is available to view at http://hdl.handle.net/10454/17415.
54

Model-based dependability analysis: State-of-the-art, challenges, and future outlook

Sharvia, S., Kabir, Sohag, Walker, M., Papadopoulos, Y. 21 October 2019 (has links)
No
55

Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

Tuzov, Ilya 25 January 2021 (has links)
Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for those systems that are based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered as one of the primary criteria for decision making throughout the whole design flow, which should be complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) makes it possible to optimally configure the selected IP cores and EDA tools so as to improve as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems.
Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and minimally intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, being usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing the robustness evaluation effort as much as possible.
Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. / Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883 / TESIS
56

An empirical investigation of the linkage between dependability, quality and customer satisfaction in information intensive service firms

Kumar, Vikas January 2010 (has links)
The information service sector, e.g. utilities, telecommunications and banking, has grown rapidly in recent years and is a significant contributor to the Gross Domestic Product (GDP) of the world's leading economies. Though the information service sector has grown significantly, there have been relatively few attempts by researchers to explore it. The lack of research in this sector has motivated my PhD research, which aims to explore the pre-established relationships between dependability, quality and customer satisfaction (RQ1) within the context of the information service sector. Literature looking at the interrelationship between dependability and quality (RQ2a), and their further impact on customer satisfaction (RQ2b), is also limited. With the understanding that Business to Business (B2B) and Business to Customer (B2C) businesses are different, exploring these relationships in these two different types of information firms will further add to the existing literature. This thesis also attempts to investigate the relative significance of dependability and quality in both B2B and B2C information service firms (RQ3a and RQ3b). To address these issues, this PhD research follows a theory-testing approach and uses multiple case studies to address the research questions. In total, five cases from different B2B and B2C information service firms are investigated. To explore causality, time-series data sets spanning 24 to 60 months and the 'Path Analysis' method have been used. For the generalisation of the findings, the Cumulative Meta-Analysis method has been applied. The findings of this thesis indicate that dependability significantly affects customer satisfaction and that an interrelationship exists between dependability and quality that further impacts customer satisfaction. The findings from the B2C cases challenge the traditional priority afforded to the relational aspect of quality by showing that dependability is the key driver of customer satisfaction. However, the findings from the B2B cases show that both dependability and quality are key drivers of customer satisfaction. The findings of this thesis therefore add considerably to the literature in the B2B and B2C information services context.
57

A general trace-based causality analysis framework for component systems

Geoffroy, Yoann 07 December 2016 (has links)
In a concurrent, possibly embedded and distributed system, it is often crucial to be able to determine which component(s) caused an observed failure - be it for debugging, to establish the contractual liability of component providers, or to isolate or reset the failing components. The diagnosis relies on an analysis of logical causality to distinguish component failures that actually contributed to the outcome from failures that had little or no impact on the system-level failure.
More precisely, necessary causality of a component C characterizes cases when a system-level property P would not have been violated if the specification S of C had been fulfilled. Sufficient causality characterizes cases where P would have been violated even if all the components but C had fulfilled their specifications. In other words, the violation of S by C was sufficient to violate P. The initial approach to causality analysis on execution traces was formalized for the BIP interaction model. The goal of this project is to formalize a similar reasoning for functional programs where functions are equipped with invariants describing the expected behavior. The analysis should take a (faulty) execution trace and the invariants and determine which function(s) caused the failure. The results should be implemented and applied to case studies from the medical and automotive domains.
58

Partitionable group membership in mobile ad hoc networks

Lim, Léon 29 November 2012 (has links)
In Mobile Ad hoc NETworks (MANETs), partitionable group membership is a basic service for building partition-tolerant distributed applications. None of the existing specifications satisfies the following two antagonistic requirements: 1) it must be strong enough to simplify the design of partition-tolerant distributed applications in partitionable systems; 2) it must be weak enough to be implementable.
In this thesis, we propose a solution to partitionable group membership in very dynamic network environments such as MANETs. To this end, we proceed in three steps. First, we develop a dynamic distributed system model that characterises stability in MANETs. Then, we propose a solution to the problem of partitionable group membership by adapting Paxos for such systems. This adaptation results in a specification of abortable consensus AC, which is built on top of an eventual α partition-participants detector ♢PPD and an eventual register per partition ♢RPP. ♢PPD guarantees liveness in a partition even if the partition is not completely stable, whereas ♢RPP ensures safety in the same partition. Finally, partitionable group membership is solved by transforming it into a sequence of abortable consensus instances AC. Each of the modules ♢PPD, ♢RPP, AC, and partitionable group membership is implemented and proved correct. We then analyse the performance of ♢PPD through simulation.
59

Design of complex mechanical systems with dynamic behavior: a contribution to a physical-reliability-based approach based on a fuel cell system for a hydrogen electric vehicle

Collong, Sophie 07 April 2016 (has links)
The integration of complex mechanical systems subject to stringent vibration environments requires consideration of the real conditions of use from the beginning of the design phase. The thesis shows that the vibration environment and the duration of exposure to this environment depend on the use of the system throughout its life cycle. The evaluation of its use is based on the joint evolution of both user behaviour and system technology development. The dependability analysis of a complex mechanical system leads to considering the system as a whole and thus to investigating in depth the dynamic behaviour of critical components. A basic model of the mechanical system qualitatively and quantitatively identifies the key dynamic behaviours and determines the vibration loads to which selected critical components are subjected. On this basis, modelling the behaviour of a mechanical component allows its fatigue damage to be assessed. This indicator helps the designer in his choice of component geometry. Finally, the climatic environment, as well as effects related to the internal functioning of the system, has been taken into account by performing vibro-climatic tests on an operating system, i.e. a fuel cell system integrated into a hydrogen electric vehicle. These studies helped to develop a procedure to support the design of complex mechanical systems.
60

Selecting the best strategy to improve quality, keeping in view the cost and other aspects

Karahasanovic, Ermin, Lönn, Henrik January 2007 (has links)
The purpose of the thesis was to create a general model that can help companies take the best decision when it comes to improving the quality of an object. The model was created to solve the problem formulation: how to find the best way to improve the quality of an object, focusing primarily on the relationship between cost and quality while also taking other important aspects into consideration. Before the model was created, a literature study was performed in ELIN without any usable result. After the literature study, quality models such as Quality Function Deployment (QFD) and Total Quality Management (TQM) were studied. The study showed that QFD and TQM are somewhat complicated and often consider the entire organisation. The Simple Quality Model (SQM) proposed here is a smaller model and focuses on only one object at a time; TQM and QFD have, however, been good inspiration for its creation. The model was tested in a real situation at Saab Communication. Together with Saab Communication, we decided to apply SQM to the basic connections of the Swedish defence telenetwork (FTN). SQM generated 7 different alternatives for improving the dependability of a basic connection, and the application showed that alternative 7, decreasing the switch-over time, was the best. The switch-over is today not handled by a dedicated employee and is instead shared among several workers. By hiring two new employees, it would be possible to lower the switch-over time by 50%, from today's 60 minutes to 30. Implementing this alternative would bring a cost of 5 374 034 SEK and a quality increase of 0.1398955% for the basic connections in the Swedish defence telenetwork.
