About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

A model on how to use field data to improve product design : A case study

Christoffersson, Karolina January 2009 (has links)
To stay competitive, companies must improve their products continuously. Field data is a source of information that shows the actual performance of products during operation, and it can be used to identify the items most in need of improvement. This master thesis aims to identify the set of field data required for dependability improvements and to develop a working procedure that enables increased utilization of field data for cost-effective design improvements. To achieve this, a 12-step model called the Design Improvement Cycle (DIC) was developed and tested in a single case study. The field data need was identified using a top-down method and was included as part of the DIC. Testing showed that the model was practicable and that each step could be carried through, even though the last steps could only be tested hypothetically in discussions with the personnel concerned. According to personnel with competence in the subject, the model implied a working procedure worth aiming for. As the DIC appeared to be very flexible, it should be usable in several areas. Field data alone turned out not to be a sufficient source of information to support design improvements, but it could indicate which items to focus on in further investigations. The quality of the field data had a large impact on the analysis possibilities, and the identified data need for dependability improvements could be used to point out which data quality issues had to be amended to make the data more useful.
62

Selecting the best strategy to improve quality, keeping in view the cost and other aspects

Karahasanovic, Ermin, Lönn, Henrik January 2007 (has links)
The purpose of the thesis was to create a general model that can help companies make the best decision when it comes to improving the quality of an object. The model was created to answer the problem formulation: how to find the best way to improve the quality of an object, focusing primarily on the relationship between cost and quality while also taking other important aspects into consideration. Before the model was created, a literature study was performed in ELIN without any usable result. Afterwards, quality models such as Quality Function Deployment (QFD) and Total Quality Management (TQM) were studied. The study showed that QFD and TQM are somewhat complicated and often consider the entire organisation, whereas the Simple Quality Model (SQM) is smaller and focuses on only one object at a time. TQM and QFD nevertheless served as good inspiration for the creation of SQM. The model was tested in a real situation at Saab Communication, where we decided together to apply SQM to the Swedish defence tele-network (FTN). In FTN the model was tested on the basic connections. SQM generated 7 different alternatives to improve the dependability of a basic connection, and the application showed that alternative 7, decreasing the switch-over time, was the best. The switch-over is today not handled by a dedicated employee but is shared among several workers. By hiring two new employees it would be possible to lower the switch-over time by 50%, from today's 60 minutes to 30. Implementing this alternative would cost 5,374,034 SEK and yield a quality increase of 0.1398955% for the basic connections in the Swedish defence tele-network.
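The abstract does not spell out how SQM compares its seven alternatives; a minimal sketch of one plausible cost-effectiveness ranking might look as follows. Only alternative 7's cost and quality gain are quoted from the thesis; the other alternatives and their figures are invented for illustration.

```python
# Rank quality-improvement alternatives by cost per percentage point of
# quality gain -- a sketch, not the actual SQM procedure.
alternatives = [
    # (name, cost in SEK, quality increase in percentage points)
    ("Alt 5: redundant power feed", 8_000_000, 0.10),         # hypothetical
    ("Alt 6: extra spare parts", 2_500_000, 0.03),            # hypothetical
    ("Alt 7: halve switch-over time", 5_374_034, 0.1398955),  # from the thesis
]

def cost_per_point(alt):
    """SEK spent per percentage point of quality improvement."""
    name, cost, gain = alt
    return cost / gain

ranked = sorted(alternatives, key=cost_per_point)
for name, cost, gain in ranked:
    print(f"{name}: {cost / gain:,.0f} SEK per percentage point")
```

With these illustrative numbers, alternative 7 comes out cheapest per point of quality gained, matching the thesis' conclusion.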
64

Investigation of the regional internet network infrastructure dependability

Rainys, Rytis 06 January 2012 (has links)
The dissertation investigates the dependability of the Internet network infrastructure by means of network topology analysis, graph theory, and network modelling. The object of the study is the Internet network infrastructure, based on autonomous systems and their interconnecting lines. Stable functioning of the Internet network determines the availability of information, electronic commerce, control of remote objects, and so on. The main objective is to develop methodologies and algorithms for analysing the Internet network infrastructure and controlling the reliability of its functioning. The scope of application of the study is the supervision and regulation of the continuity of the Internet.
The following main tasks are solved: development of a topological scheme of the Internet network and selection of models and tools for analysis; Internet connectivity analysis to identify the critical network elements whose failure would result in loss of network connectivity; development of a model for monitoring the critical elements of the Internet network; cyber-attack simulation experiments; and selection of measures for strengthening the dependability of the Internet network infrastructure. The dissertation consists of an introduction, 5 chapters, a summary of results, lists of references and of the author's publications on the topic, and annexes.
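The critical-element identification described in this abstract can be illustrated with a small sketch: remove each node of an AS-level graph in turn and check whether the remaining graph stays connected. The toy topology below is invented; real AS-level graphs are derived from routing data, and the thesis' actual method may differ.

```python
# Identify critical nodes: nodes whose removal disconnects the graph.
# Toy AS-level topology (invented); edges are interconnection links.
edges = [(1, 2), (2, 3), (3, 4), (2, 4), (4, 5)]
nodes = {n for e in edges for n in e}

def connected(nodes, edges):
    """Depth-first connectivity check over an undirected graph."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen == nodes

critical = sorted(n for n in nodes
                  if not connected(nodes - {n},
                                   [e for e in edges if n not in e]))
print(critical)  # nodes whose failure splits the network
```

In this toy graph, nodes 2 and 4 are critical: removing either one isolates part of the network, which is exactly the kind of element the dissertation's monitoring model targets.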
66

Modelling of Safety Concepts for Autonomous Vehicles using Semi-Markov Models

Bondesson, Carl January 2018 (has links)
Autonomous vehicles will soon be part of everyday life, but before they can be used commercially they need to be proven safe. The current standard for functional safety on roads, ISO 26262, does not yet cover autonomous vehicles, which is why this project uses an approach based on semi-Markov models to assess safety. A semi-Markov process is a stochastic process described by a state-space model in which the holding times between state transitions can be arbitrarily distributed. The approach is realized as a MATLAB tool in which the user can assess safety with a steady-state-based analysis called the Loss and Risk based measure of safety. The tool works and can assess the safety of semi-Markov systems as long as they are irreducible and positive recurrent. For systems fulfilling these properties, it is possible to draw conclusions about the safety of the system through a risk analysis, and about which autonomous-driving level the system is at through a sensitivity analysis. The developed tool, or the semi-Markov approach as such, might be a good complement to ISO 26262.
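The steady-state analysis mentioned above rests on a standard result: for an irreducible, positive-recurrent semi-Markov process, the long-run fraction of time in each state is the stationary distribution of the embedded jump chain weighted by the mean holding times. A minimal sketch, with an invented three-state system (the thesis' actual states and distributions are not given here):

```python
import numpy as np

# Steady-state probabilities of a semi-Markov process: a sketch with
# illustrative numbers, assuming an irreducible embedded chain P and
# finite mean holding times tau.
P = np.array([[0.0, 0.9, 0.1],   # embedded jump-chain transition matrix
              [0.8, 0.0, 0.2],
              [0.5, 0.5, 0.0]])
tau = np.array([10.0, 2.0, 1.0])  # mean holding time in each state

# Stationary distribution nu of the embedded chain: nu = nu P, sum(nu) = 1,
# obtained as the eigenvector of P^T for eigenvalue 1.
w, v = np.linalg.eig(P.T)
nu = np.real(v[:, np.argmax(np.real(w))])
nu = nu / nu.sum()

# Long-run time fractions weight nu by the holding times.
p = nu * tau / (nu * tau).sum()
print(p.round(4))
```

A safety measure of the Loss-and-Risk kind would then combine these time fractions with per-state loss or hazard values; only the steady-state computation itself is shown here.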
67

Approches logicielles de sûreté de fonctionnement pour les systèmes RFID / Software dependability approches for RFID systems

Kheddam, Rafik 09 April 2014 (has links)
We are witnessing today a growing use of RFID (Radio Frequency IDentification) systems in various application areas (logistics, production systems, inventories, product traceability, etc.). Some of these applications are critical, such as cold-chain logistics for foodstuffs or baggage-handling systems in airports. Nevertheless, RFID systems are very sensitive to their environment, including electromagnetic disturbances or the presence of obstacles, which makes them error-prone. Also, because of the large number of elements (tags, readers, sensors) constituting current RFID systems, erroneous behaviors can arise from faults in the various elements. Hence the importance of addressing dependability and fault tolerance in order to make these systems more robust.
The goal of this thesis is the development of software-based online test and diagnosis approaches for RFID systems to improve their robustness. In recent years, the effective use of RFID systems has driven the development of RFID middleware solutions, whose role is to provide services for managing the large amounts of raw data coming from the various RFID sources. Due to the distributed nature of current RFID systems, such middleware is of great interest for RFID system dependability, in particular through the integration of dependability mechanisms, specifically online test and diagnosis, at the middleware level. Since the middleware is the backbone of an RFID system, through which the whole RFID dataflow passes, all the information the proposed approaches need to perform a correct diagnosis is available there. We proposed several solutions to cover the two central layers of the system: the middleware layer and its communication interface with the data sources, the Low Level Reader Protocol (LLRP). We proposed a middleware solution compliant with the RFID communication standard, used to host a probabilistic diagnosis algorithm that detects potential failures of the system components on the basis of a probabilistic model that takes the execution environment into account. We then proposed a mechanism for analyzing the log files of the LLRP communication interface, complementary to the probabilistic algorithm, which deepens the diagnosis by searching for the causes of a detected failure against a set of already established failure signatures. Finally, we proposed an extension of the LLRP standard that takes several failure behaviors into account in order to make it more reliable.
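The flavor of a probabilistic reader diagnosis can be conveyed with a small sketch: flag a reader as suspect when its observed read success rate is improbably low under a simple binomial model of a healthy reader. The expected rate, the significance threshold, and the reader names are all invented; the thesis' actual model conditions on the execution environment and is more elaborate.

```python
# Sketch of probabilistic RFID reader diagnosis: a reader whose read
# results are very unlikely under the "healthy reader" model is flagged.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def diagnose(readers, expected_rate=0.95, alpha=0.01):
    """readers: {name: (successful_reads, attempted_reads)}."""
    suspects = []
    for name, (ok, total) in readers.items():
        # How likely is a result this bad if the reader were healthy?
        if binom_cdf(ok, total, expected_rate) < alpha:
            suspects.append(name)
    return suspects

readers = {"dock_door_1": (97, 100), "dock_door_2": (80, 100)}
print(diagnose(readers))
```

Here `dock_door_2`'s 80% read rate is far below what a healthy 95%-rate reader would plausibly produce over 100 attempts, so it alone is flagged; log-file analysis would then look for the cause.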
68

EVALUATING THE IMPACT OF UNCERTAINTY ON THE INTEGRITY OF DEEP NEURAL NETWORKS

Harborn, Jakob January 2021 (has links)
Deep Neural Networks (DNNs) have shown excellent performance and are very successful in image classification and object detection. Safety-critical industries such as the automotive and aerospace industries aim to develop autonomous vehicles with the help of DNNs. In order to certify the usage of DNNs in safety-critical systems, it is essential to prove the correctness of data within the system. In this thesis, the research focuses on investigating the sources of uncertainty, the effects various sources of uncertainty have on neural networks (NNs), and how uncertainty within an NN can be reduced. Probabilistic methods are used to implement an NN with uncertainty estimation in order to analyze and evaluate how the integrity of the NN is affected. By analyzing and discussing the effects of uncertainty in an NN, it is possible to understand the importance of including a method of estimating uncertainty. Preventing, reducing, or removing uncertainty in such a network improves the correctness of data within the system. With the implemented NN, results show that estimating uncertainty makes it possible to identify and classify the presence of uncertainty in the system and to reduce it, achieving an increased level of integrity and improving the correctness of the predictions.
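The abstract does not name the probabilistic method used; one common choice is to average the softmax outputs of an ensemble (or of repeated stochastic forward passes) and use the entropy of that mean as an uncertainty score. A minimal sketch with made-up logits, shown only to illustrate how such a score separates confident from uncertain predictions:

```python
import numpy as np

# Ensemble-based uncertainty sketch for a 3-class classifier. The logits
# below are synthetic; a real system would collect them from N forward
# passes of the network on one input.
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Softmax outputs from 10 ensemble members for two hypothetical inputs:
# one the members agree on, one they disagree on.
confident = softmax(rng.normal([4.0, 0.0, 0.0], 0.3, size=(10, 3)))
uncertain = softmax(rng.normal([1.0, 0.9, 0.8], 1.5, size=(10, 3)))

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution (total uncertainty)."""
    p = probs.mean(axis=0)
    return float(-(p * np.log(p + 1e-12)).sum())

print(predictive_entropy(confident) < predictive_entropy(uncertain))
```

A low entropy means the averaged prediction is sharply peaked; a high entropy flags an input whose prediction should not be trusted without further checks, which is the integrity argument the thesis makes.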
69

On Dependable Wireless Communications through Multi-Connectivity

Hößler, Tom 23 December 2020 (has links)
The realization of wireless ultra-reliable low-latency communications (URLLC) is one of the key challenges of the fifth generation (5G) of mobile communications systems and beyond. Ensuring ultra-high reliability together with latency in the (sub-)millisecond range is expected to enable self-driving cars, wireless factory automation, and the Tactile Internet. In wireless communications, reliability is usually considered only as the percentage of successfully delivered packets, with URLLC aiming for 1 − 10⁻⁵ up to 1 − 10⁻⁹.
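The multi-connectivity idea in the title can be made concrete with a small calculation: if a packet is duplicated over k independent links, each failing with probability p, delivery fails only when all copies are lost. Both the independence assumption and the loss rate below are illustrative simplifications.

```python
# Reliability gain from multi-connectivity: duplicate each packet over
# k independent wireless links; failure requires every copy to be lost.
def mc_reliability(p_link_failure: float, k: int) -> float:
    return 1.0 - p_link_failure ** k

p = 1e-3  # per-link packet loss probability (illustrative)
for k in (1, 2, 3):
    print(k, mc_reliability(p, k))
```

Under these idealized assumptions, three parallel links with 10⁻³ loss each already reach the 1 − 10⁻⁹ regime quoted above; real links are correlated, which is precisely why dependable multi-connectivity is a research topic.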
70

Conception de systèmes embarqués fiables et auto-réglables : applications sur les systèmes de transport ferroviaire / Design of self-tuning reliable embedded systems and its application in railway transportation systems

Alouani, Ihsen 26 April 2016 (has links)
Tremendous progress in the performance of semiconductor devices has been accomplished during the last few decades. In this era of high-performance applications, embedded systems need to be not only efficient but also dependable, at both circuit and system level. Several works have been proposed to increase embedded systems' efficiency by reducing the gap between the flexibility of software solutions and the high performance of hardware solutions. Due to their reconfigurable nature, Field Programmable Gate Arrays (FPGAs) represent a relevant step towards bridging this performance/flexibility gap. Nevertheless, dynamic reconfiguration has always suffered from a bottleneck: the long reconfiguration time. In this thesis, we propose a novel medium-grained high-speed dynamic reconfiguration technique for DSP48E1-based circuits. The idea is to take advantage of the runtime reprogrammability of DSP48E1 slices, coupled with a re-routable interconnection block, to change the overall circuit functionality in one clock cycle.
In addition to embedded-system efficiency, this thesis deals with the reliability challenges of new sub-micron electronic systems. As new technologies rely on reduced transistor sizes and lower supply voltages to improve performance, electronic circuits are becoming remarkably sensitive and increasingly susceptible to transient errors. The system-level impact of these errors can be far-reaching, and Single Event Transients (SETs) have become a serious threat to embedded-system reliability, especially for safety-critical applications such as transportation systems. Reliability-enhancement techniques based on overestimated soft error rates (SERs) can lead to unnecessary resource overheads as well as high power consumption; taking error-masking phenomena into account is fundamental for an accurate estimation of SERs. This thesis proposes a new cross-layer model of circuit vulnerability based on combined modeling of Transistor-Level Masking (TLM) and System-Level Masking (SLM) mechanisms. We then use this model to build a self-adaptive fault-tolerant architecture that evaluates the circuit's effective vulnerability at runtime. The reliability-enhancement strategy is adapted to protect only the vulnerable parts of the system, leading to a reliable circuit with optimized overheads. Experiments performed on a radar-based obstacle-detection system for railway transportation show that the proposed approach enables relevant reliability/resource-utilization tradeoffs.
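The selective-protection idea can be sketched numerically: scale a raw error rate by the probability that neither transistor-level nor system-level masking absorbs the transient, then protect only the modules whose effective rate exceeds a budgeted threshold. The module names, rates, and threshold below are all invented for illustration; the thesis' TLM/SLM model is far more detailed.

```python
# Cross-layer vulnerability sketch: effective SER of a module is the raw
# SER scaled by the probability that neither transistor-level (TLM) nor
# system-level (SLM) masking absorbs the transient.
modules = {
    # name: (raw_ser_per_hour, p_tlm_masked, p_slm_masked) -- invented
    "radar_frontend": (1e-4, 0.60, 0.30),
    "tracker":        (5e-5, 0.40, 0.80),
    "logger":         (2e-4, 0.50, 0.95),
}

def effective_ser(raw, p_tlm, p_slm):
    """Rate of transients that survive both masking layers."""
    return raw * (1 - p_tlm) * (1 - p_slm)

threshold = 1e-5  # protect only modules above this effective rate
protect = sorted(name for name, (raw, t, s) in modules.items()
                 if effective_ser(raw, t, s) > threshold)
print(protect)
```

With these numbers only the radar front end needs hardening: the logger's high system-level masking makes its large raw SER harmless, which is exactly the overhead saving the selective approach aims for.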
