81

Reliable UDP and Circular DHT implementation for the MediaSense Open-Source Platform

Schröder, Timo, Rüter, Florian January 2012 (has links)
MediaSense is an EU-funded platform that implements an Internet-of-Things framework. This project adds two fundamental functions to it: a new lookup service based on a peer-to-peer Distributed Hash Table (DHT) called Chord, and a reliable communication protocol based on UDP (RUDP). The lookup service makes a central server, which can be a single point of failure or become compromised, unnecessary. Reliable UDP transmits data from the very first packet onwards and avoids any connection management, as it is packet-based. The methodology for both functions was to develop a simulation environment, compatible with MediaSense, in which their functionality can be tested and measurements can be taken. The resulting DHT simulation environment gives deep insight into, and control over, the state and actions of the DHT. The resulting graphs show the performance properties of both the DHT and RUDP. In conclusion, the MediaSense platform has been extended with two usable functionalities, which also leaves space for further development such as security enhancements and performance improvements. / MediaSense
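The abstract names Chord as the DHT behind the new lookup service. As a minimal, hedged sketch of how a Chord-style lookup resolves a key without any central server (this is not the MediaSense code; the peer names, the ring size M and the SHA-1 hash are assumptions for the example):

```python
# Minimal sketch of a Chord-style successor lookup (illustrative only, not
# the MediaSense implementation). Node names and keys are hashed onto an
# identifier ring of size 2**M; a key is stored on the first node whose
# identifier follows it clockwise on the ring.
import hashlib
from bisect import bisect_left

M = 16  # ring size 2**M, an assumption for this example

def ring_id(name: str) -> int:
    """Hash a node name or key onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

class ChordRing:
    def __init__(self, node_names):
        # Sorted ring positions of all participating peers.
        self.nodes = sorted((ring_id(n), n) for n in node_names)
        self.positions = [pos for pos, _ in self.nodes]

    def successor(self, key: str) -> str:
        """Return the peer responsible for `key` (first node at or after it)."""
        idx = bisect_left(self.positions, ring_id(key))
        return self.nodes[idx % len(self.nodes)][1]  # wrap around the ring

ring = ChordRing(["peer-a", "peer-b", "peer-c", "peer-d"])
print(ring.successor("sensor/temperature/42"))
```

A real Chord deployment keeps per-node finger tables and resolves a key in O(log N) hops; the full sorted node list above is used only to keep the sketch short.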
82

Architectures and Algorithms for Mitigation of Soft Errors in Nanoscale VLSI Circuits

Bhattacharya, Koustav 22 October 2009 (has links)
The occurrence of transient faults like soft errors in computer circuits poses a significant challenge to the reliability of computer systems. Soft errors, which occur when energetic neutrons coming from space or alpha particles arising from packaging materials hit the transistors, may manifest themselves as a bit flip in a memory element or as a transient glitch generated at an internal node of combinational logic, which may subsequently propagate to and be captured in a latch. Although the problem of soft errors was earlier only a concern for space applications, aggressive technology scaling trends have extended the problem to modern VLSI systems, even for terrestrial applications. In this dissertation, we explore techniques at all levels of the design flow to reduce the vulnerability of VLSI systems to soft errors without compromising other design metrics like delay, area and power. We propose new models for estimating soft errors in storage structures and combinational logic. While soft errors in caches are estimated using the vulnerability metric, soft errors in logic circuits are estimated using two new metrics called the glitch enabling probability (GEP) and the cumulative probability of observability (CPO). These metrics, based on the signal probabilities of nets, accurately model soft errors in radiation-aware synthesis algorithms and help in efficient exploration of the design solution space during optimization. At the physical design level, we leverage longer net lengths to provide larger RC ladders that effectively filter out transient glitches. Towards this, a new heuristic has been developed to selectively assign larger wirelengths to certain critical nets, which reduces the delay and area overhead while improving the immunity to soft errors. Based on this, we propose two placement algorithms, based on simulated annealing and quadratic programming, which significantly reduce the soft error rates of circuits. At the circuit level, we develop techniques for hardening circuit nodes using a novel radiation jammer technique. The proposed technique is based on the principles of an RC differentiator and is used to isolate the driven cell from the driving cell that is hit by a radiation strike. Since the blind insertion of radiation blocker cells on all circuit nodes is expensive, candidate nodes are selected for insertion of these cells using a new metric called the probability of radiation blocker circuit insertion (PRI). At the logic level, we investigate a gate sizing algorithm in which we simultaneously optimize both the soft error rate (SER) and the crosstalk noise, besides the power and performance of circuits, while considering the effect of process variations. This reliability-centric gate sizing technique has been formulated as a mathematical program and is solved efficiently. At the architectural level, we develop solutions for the correction of multi-bit errors in large L2 caches by controlling or mining the redundancy in the memory hierarchy, and methods to increase the amount of redundancy in the memory hierarchy by employing a redundancy-based replacement policy, in which the amount of redundancy is controlled using a user-defined redundancy threshold. The novel architectures and the new reliability-centric synthesis algorithms proposed for the various design abstraction levels have been shown to achieve significant reductions in soft error rates in current nanometer circuits.
The design techniques, algorithms and architectures can be integrated into existing design flows. A VLSI system implementation can leverage the architectural solutions for the reliability of the caches, while the custom hardware synthesized for the VLSI system can be protected against radiation strikes by utilizing the circuit-level, logic-level and layout-level optimization algorithms that have been developed.
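The GEP and CPO metrics build on the signal probabilities of nets. As a hedged illustration of that idea (the exact metric definitions are not given in this abstract), the sketch below estimates the probability that a glitch at an internal net reaches an output, using the standard non-controlling-value condition for propagation through AND/OR gates; the tiny netlist and the input probabilities are assumptions for the example.

```python
# Illustrative sketch only, not the dissertation's code: a glitch on one input
# of an AND gate propagates only if the other input is 1 (non-controlling),
# and through an OR gate only if the other input is 0.

def and_propagate(p_other_is_one: float) -> float:
    return p_other_is_one

def or_propagate(p_other_is_one: float) -> float:
    return 1.0 - p_other_is_one

# Tiny example circuit: out = (a AND b) OR c, with a glitch striking net "a".
p_b = 0.5   # assumed signal probability P(b = 1)
p_c = 0.5   # assumed signal probability P(c = 1)

# The glitch must pass the AND gate (b == 1) and then the OR gate (c == 0).
p_observable = and_propagate(p_b) * or_propagate(p_c)
print(f"P(glitch on 'a' reaches the output) = {p_observable:.3f}")  # 0.250
```

Nets whose glitches are rarely observable need less protection, which is in line with the selective hardening the abstract describes.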
83

Directed connectivity analysis and its application on LEO satellite backbone

Hu, Junhao 03 September 2021 (has links)
Network connectivity is a fundamental property affecting network performance. Given the reliability of each link, network connectivity determines the probability that a message can be delivered from the source to the destination. In this thesis, we study directed network connectivity, where the message is forwarded toward the destination hop by hop, as long as the neighbor(s) is (are) closer to the destination. Directed connectivity, closely related to directed percolation, is very complicated to calculate: the existing state of the art can only calculate directed connectivity for a lattice network up to a size of 10 × 10. In this thesis, we devise a new approach that is simpler and more scalable and can handle general network topologies and heterogeneous links. The proposed approach divides the network into hops using an unambiguous hop count, applies a two-step pre-processing procedure to transform hop-count-ambiguous networks into unambiguous ones, and then uses the Markov property to obtain the state transition probabilities hop by hop and derive the end-to-end connectivity. Furthermore, with tens of thousands of Low Earth Orbit (LEO) satellites covering the Earth, LEO satellite networks can provide coverage and services that are otherwise not possible using terrestrial communication systems. The regular and dense LEO satellite constellation also provides new opportunities and challenges for network protocol design. In this thesis, we apply the directed connectivity analytical model to LEO satellite backbone networks to enable ultra-reliable and low-latency (URLL) services, and propose a directed percolation routing (DPR) algorithm to lower the cost of transmission without sacrificing speed. Using the Starlink constellation (with 1,584 satellites) as an example, the proposed DPR can achieve a latency reduction of a few to tens of milliseconds for inter-continental transmissions compared to the Internet backbone, while maintaining high reliability without link-layer retransmissions. Finally, considering the link redundancy overhead and the delay/reliability tradeoff, DPR can control the size of the percolation; in other words, a subset of links can be chosen to be active, considering the reliability and cost tradeoff. / Graduate
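As a hedged illustration of the hop-by-hop idea described above, the state after each hop can be taken as the set of nodes reached in that hop, and the distribution over these states propagated forward. The toy three-hop topology and link reliabilities below are assumptions for the example, not the thesis's model, and its pre-processing for hop-count-ambiguous networks is not reproduced here.

```python
# Sketch: end-to-end directed connectivity by propagating, hop by hop, a
# probability distribution over "which nodes of the current hop were reached".
from itertools import product

layers = [["s"], ["a", "b"], ["d"]]                    # nodes grouped by hop count
links = {("s", "a"): 0.9, ("s", "b"): 0.8,             # directed link reliabilities
         ("a", "d"): 0.9, ("b", "d"): 0.85}

def end_to_end_connectivity(layers, links):
    dist = {frozenset(layers[0]): 1.0}                 # source is reached w.p. 1
    for nxt in layers[1:]:
        new_dist = {}
        for reached, p_state in dist.items():
            out = [(u, v) for (u, v) in links if u in reached and v in nxt]
            # Enumerate success/failure of every forwarding link out of `reached`.
            for outcome in product([True, False], repeat=len(out)):
                p, hit = p_state, set()
                for (u, v), ok in zip(out, outcome):
                    p *= links[(u, v)] if ok else 1.0 - links[(u, v)]
                    if ok:
                        hit.add(v)
                new_dist[frozenset(hit)] = new_dist.get(frozenset(hit), 0.0) + p
        dist = new_dist
    # Delivered if at least one node in the destination layer was reached.
    return sum(p for reached, p in dist.items() if reached)

print(f"P(s delivers to d) = {end_to_end_connectivity(layers, links):.4f}")
```

Because the state space is the power set of each hop's nodes, this brute-force enumeration only works for very small layers; making such analysis scale is precisely what the thesis addresses.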
84

Software Development Process and Reliability Quantification for Safety Critical Embedded Systems Design

Lockhart, Jonathan A. 01 October 2019 (has links)
No description available.
85

Dealing with uncertainty in global warming impact assessments of refrigeration systems / Hantering utav osäkerheter inom bedömningen av den globala uppvärmningseffekten hos kylsystem

Boström, Linn Caroline, Ljungberg, Hanna January 2018 (has links)
The United Nations recognises anthropogenic greenhouse gas emissions as the leading cause of global warming. The International Institute of Refrigeration further reports that in 2014, 7.8% of global greenhouse gas emissions were attributed to the refrigeration sector. This underlines the importance of using metrics to evaluate the climate impact of a refrigeration system. However, as these metrics rely on uncertain values, it is difficult to assess how reliable they are. The purpose of this study is therefore to evaluate the reliability of two environmental metrics by applying methods for dealing with uncertainties, and to present possible improvements to the applied methods and metrics. The study begins by introducing refrigeration systems and their environmental context. The background further introduces the reader to the topic by describing the evaluated metrics, TEWI and LCCP, as well as three different methods for dealing with uncertainties: Sensitivity analysis, Uncertainty analysis and Monte Carlo Simulation. In order to fulfil the purpose, a data centre is modelled, and the restrictions and operating conditions of the system are further described in section 3. The results consist of two parts. The first part considers the theoretical aspect of the study as well as sources and typologies of values and uncertainties. The second part consists of the empirical results from applying the mentioned methods to the modelled system. These are presented in graphs sorted by method and metric, and are then analysed and evaluated in the discussion. It is seen that only a few parameters dominate the influence in the Sensitivity and Uncertainty analyses, but that the most influential parameter depends on the relative order of magnitude. It is also found that LCCP renders no additional information under the analysed conditions. When applying the Monte Carlo Simulation, TEWI is considered more reliable, in that its deterministic value is a more accurate estimate of the 'true' environmental impact of the system. One possible improvement may be to use the rendered standard deviation for TEWI as an uncertainty range to incorporate the uncertainties in the deterministic value. The study concludes that the Sensitivity and Uncertainty analyses illustrate the influence of a single parameter on the final metric value. However, the analyses do not determine to what extent these final values may be considered reliable. A Monte Carlo Simulation is better suited to some uncertainty typologies than others, and as such TEWI is considered more reliable than LCCP. The study arrives at the conclusion that the presented methods may be improved by assigning uncertainty typologies in order to evaluate whether a given method, e.g. a Monte Carlo Simulation, is viable for incorporating the uncertainties. / Förenta Nationerna erkänner antropocentriska utsläpp av växthusgaser som den främsta orsaken till global uppvärmning. Vidare belyser IIR att kylsektorn stod för 7.8% av de globala utsläppen av växthusgaser år 2014. Detta åskådliggjorde vikten av att använda mätmetoder som kan utvärdera klimatpåverkan hos ett kylsystem. Då dessa mätmetoder baseras på osäkra värden är det svårt att bedöma hur pålitliga de faktiskt är. Syftet med detta projekt är därför att utvärdera tillförlitligheten hos två mätmetoder genom att tillämpa metoder för att hantera osäkerheter och att presentera möjliga förbättringar till de tillämpade metoderna och mätmetoderna.
Projektet börjar med att introducera kylsystem och deras miljösammanhang. I bakgrunden får läsaren lära sig mer om ämnet genom en redogörelse för de utvärderade mätmetoderna, TEWI och LCCP, samt tre olika metoder för att hantera osäkerheter, Känslighetsanalys, Osäkerhetsanalys och Monte Carlo-simulation. För att uppfylla syftet modelleras ett data center, och systemets begränsningar och driftsförhållanden beskrivs vidare under rubriken Metod. Resultatet består av två delar. Den första delen redovisar den teoretiska aspekten av studien så som källor för osäkerheter och typologier samt att här tilldelas parametervärden och osäkerheter. Den andra delen består av de empiriska resultaten som fås då metoderna tillämpas på det modellerade systemet. Dessa presenteras i diagram vilka sorteras efter metod och mätmetod. Dessa analyseras och utvärderas sedan i diskussionen. Från resultaten går det att se att endast ett fåtal parametrar dominerar inflytandet i Känslighets- och Osäkerhetsanalysen men att den inflytelserika parametern är beroende av den relativa storleksordningen. Det visar sig även att LCCP inte bidrar till ytterligare information vid de analyserade förhållandena. Vid tillämpningen av Monte Carlo-simuleringen anses TEWI vara mer tillförlitlig. En möjlig förbättring kan vara att använda den givna standardavvikelsen för TEWI som ett osäkerhetsintervall för att inkorporera osäkerheten i det deterministiska värdet. Projektet landar i slutsatsen att Känslighets- och Osäkerhetsanalysen illustrerar inflytandet av en enskild parameter på det slutliga metriska värdet. Analyserna avgör emellertid inte i vilken utsträckning dessa värden kan anses vara tillförlitliga. En Monte Carlo-simulering är bättre tillämplig för en viss osäkerhetstypologi än andra och som sådan anses TEWI vara mer tillförlitlig än LCCP. Projektet landar även i slutsatsen att de presenterade metoderna kan förbättras genom att tilldela osäkerhetstypologier för att utvärdera huruvida en metod kan anses tillämplig för att inkorporera osäkerheter, t.ex. en Monte Carlo-simulering.
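As a hedged sketch of the Monte Carlo treatment described above, the following samples uncertain inputs to a TEWI-style estimate (direct impact from refrigerant leakage plus indirect impact from electricity use) and reports the mean and standard deviation. All parameter values, distributions and the simplified TEWI expression are illustrative assumptions, not the thesis's data-centre model.

```python
# Hedged Monte Carlo sketch of TEWI (Total Equivalent Warming Impact) for a
# refrigeration system with uncertain inputs; values below are made up.
import random

N = 100_000            # number of Monte Carlo samples
GWP = 1430             # refrigerant GWP (e.g. R134a), assumed fixed
years = 15             # operating lifetime
charge_kg = 50         # refrigerant charge
recovery = 0.95        # recovery factor at end of life

def tewi_sample():
    leak_rate = random.uniform(0.02, 0.10)          # annual leakage fraction
    energy_kwh = random.gauss(200_000, 20_000)      # annual energy use
    co2_per_kwh = random.uniform(0.02, 0.40)        # grid emission factor
    direct = GWP * charge_kg * (leak_rate * years + (1 - recovery))
    indirect = energy_kwh * co2_per_kwh * years
    return direct + indirect                        # kg CO2-equivalent

samples = [tewi_sample() for _ in range(N)]
mean = sum(samples) / N
std = (sum((x - mean) ** 2 for x in samples) / N) ** 0.5
print(f"TEWI ~ {mean:,.0f} kg CO2e (std {std:,.0f})")
```

The reported standard deviation is the kind of quantity the abstract suggests could serve as an uncertainty range around the deterministic TEWI value.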
86

On Age-of-Information Aware Resource Allocation for Industrial Control-Communication-Codesign

Scheuvens, Lucas 23 January 2023 (has links)
Unter dem Überbegriff Industrie 4.0 wird in der industriellen Fertigung die zunehmende Digitalisierung und Vernetzung von industriellen Maschinen und Prozessen zusammengefasst. Die drahtlose, hoch-zuverlässige, niedrig-latente Kommunikation (engl. ultra-reliable low-latency communication, URLLC) – als Bestandteil von 5G – gewährleistet höchste Dienstgüten, die mit industriellen drahtgebundenen Technologien vergleichbar sind, und wird deshalb als Wegbereiter von Industrie 4.0 gesehen. Entgegen diesem Trend haben eine Reihe von Arbeiten im Forschungsbereich der vernetzten Regelungssysteme (engl. networked control systems, NCS) gezeigt, dass die hohen Dienstgüten von URLLC nicht notwendigerweise erforderlich sind, um eine hohe Regelgüte zu erzielen. Das Co-Design von Kommunikation und Regelung ermöglicht eine gemeinsame Optimierung von Regelgüte und Netzwerkparametern durch die Aufweichung der Grenze zwischen Netzwerk- und Applikationsschicht. Durch diese Verschränkung wird jedoch eine fundamentale (gemeinsame) Neuentwicklung von Regelungssystemen und Kommunikationsnetzen nötig, was ein Hindernis für die Verbreitung dieses Ansatzes darstellt. Stattdessen bedient sich diese Dissertation einem Co-Design-Ansatz, der beide Domänen weiterhin eindeutig voneinander abgrenzt, aber das Informationsalter (engl. age of information, AoI) als bedeutenden Schnittstellenparameter ausnutzt. Diese Dissertation trägt dazu bei, die Echtzeitanwendungszuverlässigkeit als Folge der Überschreitung eines vorgegebenen Informationsalterschwellenwerts zu quantifizieren und fokussiert sich dabei auf den Paketverlust als Ursache. Anhand der Beispielanwendung eines fahrerlosen Transportsystems wird gezeigt, dass die zeitlich negative Korrelation von Paketfehlern, die in heutigen Systemen keine Rolle spielt, für Echtzeitanwendungen äußerst vorteilhaft ist. Mit der Annahme von schnellem Schwund als dominanter Fehlerursache auf der Luftschnittstelle werden durch zeitdiskrete Markovmodelle, die für die zwei Netzwerkarchitekturen Single-Hop und Dual-Hop präsentiert werden, Kommunikationsfehlerfolgen auf einen Applikationsfehler abgebildet. Diese Modellierung ermöglicht die analytische Ableitung von anwendungsbezogenen Zuverlässigkeitsmetriken wie die durchschnittliche Dauer bis zu einem Fehler (engl. mean time to failure). Für Single-Hop-Netze wird das neuartige Ressourcenallokationsschema State-Aware Resource Allocation (SARA) entwickelt, das auf dem Informationsalter beruht und die Anwendungszuverlässigkeit im Vergleich zu statischer Multi-Konnektivität um Größenordnungen erhöht, während der Ressourcenverbrauch im Bereich von konventioneller Einzelkonnektivität bleibt. Diese Zuverlässigkeit kann auch innerhalb eines Systems von Regelanwendungen, in welchem mehrere Agenten um eine begrenzte Anzahl Ressourcen konkurrieren, statistisch garantiert werden, wenn die Anzahl der verfügbaren Ressourcen pro Agent um ca. 10 % erhöht wird. Für das Dual-Hop-Szenario wird darüber hinaus ein Optimierungsverfahren vorgestellt, das eine benutzerdefinierte Kostenfunktion minimiert, die niedrige Anwendungszuverlässigkeit, hohes Informationsalter und hohen durchschnittlichen Ressourcenverbrauch bestraft, und so das benutzerdefinierte optimale SARA-Schema ableitet. Diese Optimierung kann offline durchgeführt und als Look-Up-Table in der unteren Medienzugriffsschicht zukünftiger industrieller Drahtlosnetze implementiert werden.
/ In industrial manufacturing, Industry 4.0 refers to the ongoing convergence of the real and virtual worlds, enabled through intelligently interconnecting industrial machines and processes through information and communications technology. Ultra-reliable low-latency communication (URLLC) is widely regarded as the enabling technology for Industry 4.0 due to its ability to fulfill the highest quality-of-service (QoS), comparable to that of industrial wireline connections. In contrast to this trend, a range of works in the research domain of networked control systems have shown that URLLC's supreme QoS is not necessarily required to achieve high quality-of-control; the co-design of control and communication makes it possible to jointly optimize and balance both quality-of-control parameters and network parameters by blurring the boundary between application and network layer. However, through this tight interlacing, the approach requires a fundamental (joint) redesign of both control systems and communication networks and may therefore not lead to short-term widespread adoption. Therefore, this thesis instead embraces a novel co-design approach which keeps both domains distinct but leverages the combination of control and communications by exploiting the age of information (AoI) as a valuable interface metric. This thesis contributes to quantifying application dependability as a consequence of exceeding a given peak AoI, with a particular focus on packet losses. The beneficial influence of negative temporal packet loss correlation on control performance is demonstrated by means of the automated guided vehicle use case. Assuming small-scale fading as the dominant cause of communication failure, a series of communication failures is mapped to an application failure through discrete-time Markov models for single-hop (e.g., only uplink or downlink) and dual-hop (e.g., subsequent uplink and downlink) architectures. This enables the derivation of application-related dependability metrics, such as the mean time to failure, in closed form. For single-hop networks, an AoI-aware resource allocation strategy termed state-aware resource allocation (SARA) is proposed that increases the application reliability by orders of magnitude compared to static multi-connectivity while keeping the resource consumption in the range of best-effort single-connectivity. This dependability can also be statistically guaranteed on a system level – where multiple agents compete for a limited number of resources – if the provided amount of resources per agent is increased by approximately 10 %. For the dual-hop scenario, an AoI-aware resource allocation optimization is developed that minimizes a user-defined penalty function which punishes low application reliability, high AoI, and high average resource consumption. This optimization may be carried out offline and each resulting optimal SARA scheme may be implemented as a look-up table in the lower medium access control layer of future wireless industrial networks.
[Contents: 1. Introduction; 2. Related Work; 3. Deriving Proper Communications Requirements; 4. Modeling Control-Communication Failures; 5. Single Hop – Single Agent; 6. Single Hop – Multiple Agents; 7. Dual Hop – Single Agent; 8. Conclusions and Outlook; A. DC Motor Model]
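As a hedged sketch of how a discrete-time Markov failure model yields a closed-form mean time to failure (MTTF): suppose the application fails once N consecutive packets are lost, i.e. once the AoI exceeds N slots. The i.i.d. per-slot loss probability and the threshold below are illustrative assumptions, not values or the exact model from the dissertation (which also covers correlated losses and dual-hop architectures).

```python
# Hedged sketch: MTTF of a control application that fails after N consecutive
# packet losses, modelled as an absorbing discrete-time Markov chain.
import numpy as np

def mttf_consecutive_losses(p: float, N: int) -> float:
    """Expected number of slots until N consecutive losses occur."""
    # Transient states 0..N-1 count current consecutive losses; state N absorbs.
    Q = np.zeros((N, N))
    for k in range(N):
        Q[k, 0] = 1.0 - p           # a successful packet resets the loss count
        if k + 1 < N:
            Q[k, k + 1] = p         # another loss increments the count
    # Fundamental matrix: expected visits to each transient state.
    fundamental = np.linalg.inv(np.eye(N) - Q)
    return fundamental[0].sum()     # expected slots starting from state 0

# Example: 1 % packet loss per slot, failure after 3 consecutive losses.
print(f"MTTF = {mttf_consecutive_losses(0.01, 3):,.0f} slots")
```

For p = 0.01 and N = 3 this gives roughly one million slots, matching the closed-form expression (1 - p^N) / ((1 - p) p^N) for i.i.d. losses.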
87

Advanced Theory, Materials and Applications for Electrowetting on Structured Surfaces

Dhindsa, Manjeet S. 19 April 2011 (has links)
No description available.
88

Creating additional Internet Gateways for Wireless Mesh Networks and Virtual Cell implementation using Dynamic Multiple Multicast Trees

Weragama, Nishan S. 25 October 2013 (has links)
No description available.
89

Untersuchungen zur Risikominimierungstechnik Stealth Computing für verteilte datenverarbeitende Software-Anwendungen mit nutzerkontrollierbar zusicherbaren Eigenschaften / Investigations of the risk minimisation technique Stealth Computing for distributed data-processing software applications with user-controllable guaranteed properties

Spillner, Josef 05 July 2016 (has links) (PDF)
Die Sicherheit und Zuverlässigkeit von Anwendungen, welche schutzwürdige Daten verarbeiten, lässt sich durch die geschützte Verlagerung in die Cloud mit einer Kombination aus zielgrößenabhängiger Datenkodierung, kontinuierlicher mehrfacher Dienstauswahl, dienstabhängiger optimierter Datenverteilung und kodierungsabhängiger Algorithmen deutlich erhöhen und anwenderseitig kontrollieren. Die Kombination der Verfahren zu einer anwendungsintegrierten Stealth-Schutzschicht ist eine notwendige Grundlage für die Konstruktion sicherer Anwendungen mit zusicherbaren Sicherheitseigenschaften im Rahmen eines darauf angepassten Softwareentwicklungsprozesses. / The security and reliability of applications processing sensitive data can be significantly increased, and controlled by the user, through their protected offloading to the cloud with a combination of techniques. These encompass target-metric-dependent data coding, continuous multiple service selection, service-specific optimal data distribution and coding-specific algorithms. The combination of these techniques into an application-integrated stealth protection layer is a necessary precondition for the construction of secure applications with guaranteeable security properties in the context of an accordingly adapted software development process.
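As a hedged illustration of the dispersal idea behind such a stealth protection layer (keeping data unusable for any single storage service while reconstruction stays under the user's control), the sketch below uses a simple (n, n) XOR secret-sharing scheme. The scheme, the share count and the example payload are assumptions for illustration and do not reproduce the dissertation's coding or service-selection methods.

```python
# Illustrative sketch: split data into shares so that no single storage
# service holds usable information; all shares are needed to reconstruct.
import os

def xor_all(chunks: list[bytes]) -> bytes:
    """Byte-wise XOR of equally long byte strings."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def split_into_shares(data: bytes, n: int) -> list[bytes]:
    """Split `data` into n shares; all n are required to reconstruct it."""
    random_shares = [os.urandom(len(data)) for _ in range(n - 1)]
    final_share = xor_all(random_shares + [data])
    return random_shares + [final_share]

def reconstruct(shares: list[bytes]) -> bytes:
    return xor_all(shares)

# Each share would be handed to a different storage service; no single
# service can recover the payload on its own.
secret = b"patient-record-4711"
shares = split_into_shares(secret, n=3)
assert reconstruct(shares) == secret
```

A deployment in the spirit of the abstract would also add redundancy, for example erasure coding across more services than strictly needed, so that availability improves alongside confidentiality.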
90

Farliga metaller från reningsverk : En jämförelse av kunskapsläge mellan yngre och äldre vad gäller metallerna bly, arsenik, kadmium och antimon / Hazardous metals from sewage treatment plants : A comparison of the knowledge between younger and older people regarding the metals lead, arsenic, cadmium and antimony

Samuel, Seliger January 2015 (has links)
Miljöproblem kan ses som ett växande problem. Studien visar hur stor kunskapen är hos gemene man angående särskilda miljöproblem och dess risker. I denna studie tas problemet upp angående miljö- och hälsofarliga metaller som ansamlas i slammet hos svenska reningsverk. Metallerna kan ge allvarliga hälsoeffekter på människan och förekommer i en rad kända vardagliga produkter. Studien grundar sig i en enkätstudie med 200 deltagande respondenter och syftet är att mäta deras kunskapsläge. Respondenterna är indelade i en yngre och en äldre åldersgrupp. Skillnader och likheter har analyserats mellan de yngre och de äldre. Studien tar upp miljö- och hälsokonsekvenser för respektive metall samt dess förekomst i olika varor och produkter. Det allmänna kunskapsläget om miljöproblem och dess risker tas upp, samt om metaller kan renas i svenska reningsverk eller inte. En annan del av studien behandlar gemene mans tillförlitlighet till olika aktörer som har förmågan att sprida information om miljöproblem och dess risker. Respondenterna får även svara på vilka vägar av informationsspridning de själva föredrar. Enkätundersökningen är indelad i en kvantitativ och en kvalitativ del. Den kvantitativa delen har kommit fram till att den äldre åldersgruppen visar sig ha högre kunskapsnivå än den yngre åldersgruppen vad gäller de fyra olika metallerna rent allmänt. De har även större kunskap vad gäller metallernas hälsoeffekter och förekomst i varor och produkter. Den yngre åldersgruppen har dock bättre kunskap om miljöproblem och dess risker i allmänhet samt om metaller kan renas i reningsverk eller inte. I den kvalitativa delen undersöks vilka spridningsmetoder respondenterna föredrar samt vilka aktörer de känner störst tillförlitlighet till. Respondenterna föredrar att kunskap bör förmedlas genom bland annat skolväsendet och media. Tillförlitliga aktörer anses inom informationsspridning vara myndigheter, forskare och organisationer. / Environmental problems can be seen as a growing concern. The study shows how much ordinary people know about specific environmental problems and their risks. In this study, the issues concerning environmentally and health-hazardous metals that accumulate in the sludge of Swedish treatment plants are analysed. The metals may have serious health effects on humans and appear in a range of well-known everyday products. The study is based on a survey with 200 participating respondents, and the purpose is to measure their level of knowledge. The respondents are divided into a younger and an older age group, and similarities and differences between the two groups have been analysed. The study addresses the environmental and health impacts of each metal and its occurrence in different goods and products. The general state of knowledge about environmental problems and their risks is addressed, as well as whether metals can be removed in Swedish sewage treatment plants or not. Another part of the study deals with how much ordinary people trust the different actors who are able to disseminate information about environmental problems and risks. The respondents also answer which routes of information dissemination they themselves prefer. The survey is divided into a quantitative and a qualitative part. The quantitative part comes to the conclusion that the older age group has a higher level of knowledge than the younger age group regarding the four metals in general. They also have greater knowledge of the metals' health effects and their occurrence in goods and products. The younger age group, however, has a better understanding of environmental problems and risks in general, and of whether metals can be removed in treatment plants or not. The qualitative part examines which methods of dissemination the respondents prefer and which actors they trust the most. The respondents prefer that knowledge be conveyed through, among other channels, the school system and the media. Authorities, researchers and organisations are considered the most trustworthy actors for disseminating information.
