51 |
Technique Comparisons for Estimating Fragility Analysis in the Central Mid-West. Walker, Kimberly Ann 01 April 2016 (has links)
Climate change studies and examinations of increasing sea levels and temperatures show that storm intensity and frequency are increasing. As storms increase in intensity and frequency, their effects must be monitored to determine the probable damage or impacts to critical infrastructure [2, 35]. These storms suddenly place new demands and requirements upon already stressed critical infrastructure sectors [1]. A combined, interdisciplinary effort must be made to identify these stresses and to mitigate any failures, so that the 21st Century Smart Grid is robust and resilient enough to be secured against all hazards. This project focuses on anticipating loss of above-ground electrical power due to extreme wind speeds. The thesis selected a study region of Indiana, Illinois, Kentucky, and Tennessee to investigate the skill of fragility curve generation for this region during Hurricane Irene in August 2011. Three published fragility techniques are compared within the Midwest study region to determine the most skillful technique for the low wind speeds experienced in this region in August 2011. The three techniques studied are: 1) the Powerline Technique [6], a correlation between “as published” state-based construction standards and surface wind speeds sustained for greater than one minute; 2) the ANL Headout Technique [37], a correlation of Hurricane Irene three-second wind gusts with DOE situation reports of outages; and 3) the Walker Technique [1], a correlation of utility-reported outages in Eastern Seaboard counties with three-second surface gusts.
The deliverable outcomes for this project include: 1) metrics for determining the best method for the study region, from archival data for the Hurricane Irene timeframe; 2) a fragility curve methodology description for each technique; and 3) a mathematical representation of each technique suitable for inclusion in automated forecast algorithms. Overall, this project combines situational awareness modeling to provide distinct fragility techniques that can be used by the public and private sectors to improve emergency management, restoration processes, and critical infrastructure preparedness. This work was supported by Western Kentucky University (WKU) and the National Oceanic and Atmospheric Administration (NOAA).
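All three techniques reduce to a fragility curve: a mapping from a wind-speed measure to a probability of outage. A minimal sketch of that idea, assuming a lognormal curve shape and entirely hypothetical (gust, outage-fraction) data points rather than the thesis's archival data:

```python
import math

def lognormal_cdf(v, mu, sigma):
    """P(outage) at wind speed v under a lognormal fragility curve."""
    return 0.5 * (1.0 + math.erf((math.log(v) - mu) / (sigma * math.sqrt(2))))

# Hypothetical observations: (3-second gust in m/s, observed outage fraction).
observations = [(10, 0.01), (15, 0.05), (20, 0.15), (25, 0.35), (30, 0.60), (35, 0.80)]

def sse(mu, sigma):
    # Sum of squared errors between the curve and the observed fractions.
    return sum((lognormal_cdf(v, mu, sigma) - p) ** 2 for v, p in observations)

# Coarse grid search over (mu, sigma); a real study would fit by maximum likelihood.
mu_hat, sigma_hat = min(
    ((mu / 10.0, s / 10.0) for mu in range(20, 45) for s in range(1, 15)),
    key=lambda ms: sse(*ms))
print(f"fitted mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")
print(f"P(outage) at a 28 m/s gust ~ {lognormal_cdf(28, mu_hat, sigma_hat):.2f}")
```

The grid search only illustrates the curve-fitting step; fitting against county-level outage counts and comparing the three techniques' skill would require the archival data described above.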
|
52 |
Algorithmic Analysis of Complex Semantics for Timed and Hybrid Automata. Doyen, Laurent 13 June 2006 (has links)
In the field of formal verification of real-time systems, major developments have been recorded in the last fifteen years, concerning logics, automata, process algebras, programming languages, etc. From the beginning, one formalism has played an important role: timed automata and their natural extension, hybrid automata. These models allow the definition of real-time constraints using real-valued clocks or, more generally, analog variables whose evolution is governed by differential equations. They generalize finite automata in that their semantics defines timed words, where each symbol is associated with an occurrence timestamp.
The decidability and algorithmic analysis of timed and hybrid automata have been intensively studied in the literature. The central result for timed automata is that their reachability problem is decidable. This is not the case for hybrid automata, but semi-algorithmic methods are known when the dynamics is relatively simple, namely a linear relation between the derivatives of the variables.
With the increasing complexity of today's systems, however, the classical semantics of these models is too limited for modelling realistic implementations or dynamical systems.
In this thesis, we study the algorithmics of complex semantics for timed and hybrid automata.
On the one hand, we propose implementable semantics for timed automata and we study their computational properties: by contrast with other works, we identify a semantics that is implementable and that has decidable properties.
On the other hand, we give new algorithmic approaches to the analysis of hybrid automata whose dynamics is given by an affine function of its variables.
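The timed-word semantics described above, where each symbol carries an occurrence timestamp and transitions are guarded by clock constraints, can be illustrated with a toy one-clock timed automaton. The states, guards and timed words below are invented for illustration:

```python
# Toy one-clock timed automaton: each transition carries a guard interval on the
# clock x and a reset flag. We check acceptance of a timed word, i.e. a sequence
# of (symbol, timestamp) pairs.
transitions = {
    # (state, symbol): (guard_low, guard_high, reset, next_state)
    ("s0", "a"): (0.0, 2.0, True,  "s1"),   # 'a' allowed while x <= 2, resets x
    ("s1", "b"): (1.0, 3.0, False, "s2"),   # 'b' allowed when 1 <= x <= 3
}
accepting = {"s2"}

def accepts(timed_word):
    state, clock, last_t = "s0", 0.0, 0.0
    for symbol, t in timed_word:
        clock += t - last_t          # the clock advances with elapsed time
        last_t = t
        if (state, symbol) not in transitions:
            return False
        lo, hi, reset, nxt = transitions[(state, symbol)]
        if not (lo <= clock <= hi):
            return False             # guard violated
        if reset:
            clock = 0.0
        state = nxt
    return state in accepting

print(accepts([("a", 1.0), ("b", 2.5)]))  # True: x=1 <= 2, then x=1.5 in [1,3]
print(accepts([("a", 1.0), ("b", 4.5)]))  # False: x=3.5 violates the 'b' guard
```

This only simulates one concrete run; the decidability results discussed above rest on symbolic analyses (region and zone abstractions) rather than run-by-run checking, and the implementable semantics studied in the thesis refine this idealized picture.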
|
53 |
Robustness and Preferences in Combinatorial Optimization. Hites, Romina 15 December 2005 (has links)
In this thesis, we study robust combinatorial problems with interval data. We introduce several new measures of robustness in response to the drawbacks of existing ones. The idea behind these new measures is to ensure that solutions are satisfactory for the decision maker in all scenarios, including the worst-case scenario. We therefore introduce a threshold over the worst-case costs, above which solutions are no longer satisfactory for the decision maker. It is, however, important to consider criteria other than just the worst case.
Therefore, in each of these new measures, a second criterion is used to evaluate the performance of the solution in other scenarios, such as the best-case one.
We also study the robust deviation p-elements problem. In particular, we study when its solution equals the optimal solution in the scenario where the cost of each element is the midpoint of its corresponding interval.
Finally, we formulate the robust combinatorial problem with interval data as a bicriteria problem. We also integrate the decision maker's preferences over certain types of solutions into the model, and propose a method that uses these preferences to find the set of solutions that are never preferred by any other solution. We call this set the final set.
We study the properties of the final sets from a coherence point of view and from a robust point of view. From a coherence point of view, we study necessary and sufficient conditions for the final set to be monotonic, for the corresponding preferences to be without cycles, and for the set to be stable.
Final sets that do not satisfy these properties are eliminated, since we believe these properties to be essential. We also study other properties, such as the transitivity of the preference and indifference relations. We note that many of our final sets are included in one another, and some are even intersections of other final sets. From a robustness point of view, we compare our final sets with different measures of robustness and with first- and second-degree stochastic dominance. We show which sets contain all of these solutions and which contain only solutions of these types. Therefore, when the decision maker chooses his preferences to find the final set, he knows what types of solutions may or may not be in it.
Lastly, we implement this method and apply it to the Robust Shortest Path Problem. We look at how this method performs using different types of randomly generated instances.
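The threshold-plus-second-criterion idea can be sketched on a tiny shortest-path instance with interval edge costs. The graph, intervals and threshold below are hypothetical, and the candidate paths are simply enumerated rather than optimized:

```python
# Small DAG with interval edge costs [lo, hi]; candidate paths from 's' to 't'.
edges = {
    ("s", "a"): (1, 4), ("s", "b"): (2, 3),
    ("a", "t"): (1, 5), ("b", "t"): (2, 4),
    ("a", "b"): (0, 1),
}
paths = [["s", "a", "t"], ["s", "b", "t"], ["s", "a", "b", "t"]]

def cost(path, pick):
    # 'pick' maps an edge's interval to a scenario cost (worst, best, midpoint, ...).
    return sum(pick(edges[(u, v)]) for u, v in zip(path, path[1:]))

worst    = lambda iv: iv[1]
midpoint = lambda iv: (iv[0] + iv[1]) / 2

# Threshold over worst-case cost: solutions above it are unacceptable.
threshold = 8
feasible = [p for p in paths if cost(p, worst) <= threshold]
# Second criterion: among acceptable paths, prefer the best midpoint-scenario cost.
chosen = min(feasible, key=lambda p: cost(p, midpoint))
print(chosen, cost(chosen, worst), cost(chosen, midpoint))
```

Here only the path s-b-t survives the worst-case threshold of 8, so it is chosen; the thesis's final-set machinery generalizes this filter-then-rank pattern to arbitrary preferences.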
|
54 |
Anonymity and time in public-key encryption. Quaglia, Elizabeth January 2012 (has links)
In a world that is increasingly reliant on digital technologies, the ability to securely communicate and distribute information is of crucial importance. Cryptography plays a key role in this context, and the research presented in this thesis focuses on developing cryptographic primitives whose properties address more closely the needs of users. We start by considering the notion of robustness in public-key encryption, a property which models the idea that a ciphertext should not decrypt to a valid message under two different keys. In contexts where anonymity is relevant, robustness is likely to be needed as well, since a user cannot tell from the ciphertext whether or not it is intended for them. We develop and study new notions of robustness, relating them to one another and showing how to achieve them. We then consider the important issue of protecting users' privacy in broadcast encryption. Broadcast encryption (BE) is a cryptographic primitive designed to efficiently broadcast an encrypted message to a target set of users that can decrypt it. Its extensive real-life application to radio, television and web-casting renders BE an extremely interesting area. However, all the work so far has striven for efficiency, focusing in particular on solutions which achieve short ciphertexts, while very little attention has been given to anonymity. To address this issue, we formally define anonymous broadcast encryption, which guarantees recipient-anonymity, and we provide generic constructions to achieve it from public-key, identity-based and attribute-based encryption. Furthermore, we present techniques to improve the efficiency of our constructions. Finally, we develop a new primitive, called time-specific encryption (TSE), which allows us to include the important element of time in the encryption and decryption processes. In TSE, the sender is able to specify during what time interval a ciphertext can be decrypted by a receiver.
This is a relevant property since information may become useless after a certain point, sensitive data may not be released before a particular time, or we may wish to enable access to information for only a limited period. We define security models for various flavours of TSE and provide efficient instantiations for all of them. These results represent our efforts in developing public-key encryption schemes with enhanced properties, whilst maintaining the delicate balance between security and efficiency.
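The robustness notion discussed above, that a ciphertext should not decrypt to a valid message under a key other than the intended one, can be illustrated by binding a hash of the recipient's public key into the plaintext and checking it on decryption, in the spirit of generic robustness transforms. The sketch below uses a toy SHA-256-based stream cipher as a stand-in for a real public-key scheme and is in no way secure:

```python
import hashlib, os

def stream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode (illustrative only, not secure).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def keygen():
    sk = os.urandom(16)
    pk = hashlib.sha256(b"pk" + sk).digest()  # stand-in "public key"
    return sk, pk

TAG = 8  # bytes of H(pk) bound into the plaintext

def encrypt(sk, pk, msg: bytes) -> bytes:
    payload = msg + hashlib.sha256(pk).digest()[:TAG]  # bind the recipient's pk
    return bytes(a ^ b for a, b in zip(payload, stream(sk, len(payload))))

def decrypt(sk, pk, ct: bytes):
    payload = bytes(a ^ b for a, b in zip(ct, stream(sk, len(ct))))
    msg, tag = payload[:-TAG], payload[-TAG:]
    # Robustness check: reject unless the tag matches *this* receiver's key.
    return msg if tag == hashlib.sha256(pk).digest()[:TAG] else None

sk1, pk1 = keygen()
sk2, pk2 = keygen()
ct = encrypt(sk1, pk1, b"hello")
print(decrypt(sk1, pk1, ct))  # intended receiver recovers b'hello'
print(decrypt(sk2, pk2, ct))  # wrong key: tag check fails, decryption rejected
```

Without the tag, the second decryption would silently return random-looking bytes; with it, the wrong receiver gets an explicit rejection, which is exactly what robustness demands in anonymity-relevant settings.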
|
55 |
Design and analysis of electronic feedback mechanisms. Li, Qin January 2012 (has links)
With the advent and development of modern information technology, such as the Internet, the difficulty of transmitting data has been reduced significantly, making it easier for entities to share their experience to a larger extent than before. In this thesis, we study the design and analysis of feedback mechanisms: information systems that enable entities to learn from others' experience. We first provide an abstract model of a feedback mechanism, which defines the scope of our concept and identifies the necessary components of such a mechanism. We then provide a framework for feedback mechanisms, giving a global and systematic view of them, and use the model and framework to decompose and analyse several existing feedback mechanisms. We propose an electronic marketplace which can be used for trading online services such as computational resources and digital storage. This marketplace incorporates a dispute prevention and resolution mechanism that is explicitly designed to encourage the good conduct of marketplace users, while providing important security features and remaining cost-effective. We also show how to incorporate the marketplace into Grid computing for exchanging computational resources. We propose a novel feedback mechanism for electronic marketplaces. In this setting, the role of feedback is no longer a “shadow of the future” but a “shadow of the present”: feedback directly impacts the seller's payoff for the current transaction instead of future transactions. This changes the fundamental functionality of feedback and solves many inherent problems of the reputation systems commonly applied in electronic marketplaces. Finally, we provide a novel announcement scheme for vehicular ad-hoc networks (VANETs) based on a reputation system, in order to evaluate message reliability.
This scheme features robustness against adversaries, efficiency and fault tolerance to temporary unavailability of the central server.
|
56 |
Modelling and optimisation of mechanical ventilation for critically ill patients. Das, Anup January 2012 (has links)
This thesis is made up of three parts: i) the development of a comprehensive computational model of the pulmonary (patho)physiology of healthy and diseased lungs, ii) the application of a novel optimisation-based approach to validate this computational model, and iii) the use of this model to optimise mechanical ventilator settings for patients with diseased lungs. The model described in this thesis is an extended implementation of the Nottingham Physiological Simulator (NPS) in MATLAB. An iterative multi-compartmental modelling approach is adopted, and modifications (based on physiological mechanisms) are proposed to characterise healthy as well as diseased states. In the second part of the thesis, an optimisation-based approach is employed to validate the robustness of this model. The model is subjected to simultaneous variations in the values of multiple physiologically relevant uncertain parameters with respect to a set of specified performance criteria, based on expected levels of variation in arterial blood gas values found in the patient population. Performance criteria are evaluated using computer simulations. Local and global optimisation algorithms are employed to search for the worst-case parameter combination that could cause the model outputs to deviate from their expected range of operation, i.e. violate the specified model performance criteria. The optimisation-based analysis is proposed as a useful complement to current statistical model validation techniques, which are reliant on matching data from in vitro and in vivo studies. The last section of the thesis considers the problem of optimising settings of mechanical ventilation in an Intensive Therapy Unit (ITU) for patients with diseased lungs. 
This is a challenging task for physicians who have to select appropriate mechanical ventilator settings to satisfy multiple, sometimes conflicting, objectives including i) maintaining adequate oxygenation, ii) maintaining adequate carbon dioxide clearance and iii) minimising the risks of ventilator associated lung injury (VALI). Currently, physicians are reliant on guidelines based on previous experience and recommendations from a very limited number of in vivo studies which, by their very nature, cannot form the basis of personalised, disease-specific treatment protocols. This thesis formulates the choice of ventilator settings as a constrained multi-objective optimisation problem, which is solved using a hybrid optimisation algorithm and a validated physiological simulation model, to optimise the settings of mechanical ventilation for a healthy lung and for several pulmonary disease cases. The optimal settings are shown to satisfy the conflicting clinical objectives, to improve the ventilation perfusion matching within the lung, and, crucially, to be disease-specific.
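The optimisation-based validation step described above, searching a box of uncertain physiological parameters for a worst case that violates the performance criteria, can be sketched with a toy two-parameter model. The model, parameter bounds and clinical band below are invented stand-ins, and plain random search replaces the hybrid local/global algorithms used in the thesis:

```python
import random

# Toy stand-in model: an arterial oxygen output as a function of two uncertain
# physiological parameters (shunt fraction, dead-space fraction); hypothetical.
def model_output(shunt, dead_space):
    return 13.0 - 20.0 * shunt - 8.0 * dead_space  # kPa, illustrative only

# Performance criterion: the output must stay within an expected clinical band.
LOW, HIGH = 8.0, 14.0
bounds = {"shunt": (0.05, 0.25), "dead_space": (0.2, 0.5)}

def violation(params):
    y = model_output(*params)
    return max(LOW - y, y - HIGH, 0.0)  # distance outside the allowed band

# Random search for the worst-case parameter combination over the box.
random.seed(0)
worst = max(
    ((random.uniform(*bounds["shunt"]), random.uniform(*bounds["dead_space"]))
     for _ in range(10_000)),
    key=violation)
print(f"worst case: shunt={worst[0]:.3f}, dead_space={worst[1]:.3f}, "
      f"violation={violation(worst):.2f} kPa")
```

A nonzero worst-case violation flags a parameter combination under which the model leaves its expected operating range, which is the signal the optimisation-based validation looks for before trusting the model for ventilator-setting optimisation.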
|
57 |
In-silico Models for Capturing the Static and Dynamic Characteristics of Robustness within Complex Networks. Kamapantula, Bhanu K 01 January 2015 (has links)
Understanding the role of structural patterns within complex networks is essential to establish the governing principles of such networks. Social networks, biological networks, technological networks, etc. can be considered complex networks in which information processing and transport play a central role. Complexity in these networks can be due to abstraction, scale, functionality and structure, and depending on the abstraction each of these can be categorized further. Gene regulatory networks (GRNs) are one such category of biological networks, and are assumed to be robust under internal and external perturbations. Network motifs such as the feed-forward loop motif and the bifan motif are believed to play a central functional role in retaining GRN behavior under lossy conditions. While the role of static characteristics, such as average shortest path, density and degree centrality among other topological features, is well documented by the research community, the structural role of motifs and their dynamic characteristics are not well understood. Wireless sensor networks were intensively studied in the last decade using network simulators. Can we use in-silico experiments to understand biological network topologies better? Does the structure of these motifs have any role to play in ensuring robust information transport in such networks? How do their static and dynamic roles differ? To address these questions, we use in-silico network models to capture the dynamic characteristics of complex network topologies. Developing these models involves network mapping, sink selection strategies and identifying metrics that capture robust system behavior. Further, we study the dynamic aspect of network characteristics using variation in network information flow under perturbations defined by lossy conditions and channel capacity, and use machine learning techniques to identify significant features that contribute to robust network performance.
Our work demonstrates that although the structural role of feed-forward loop motif in signal transduction within GRNs is minimal, these motifs stand out under heavy perturbations.
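The feed-forward loop (FFL) motif discussed above is a triple of nodes x, y, z with edges x→y, y→z and x→z. Counting FFLs in a small directed network can be sketched as below, on a hypothetical wiring rather than a real GRN:

```python
from itertools import permutations

# Toy directed regulatory network (edges: regulator -> target); hypothetical wiring.
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "E")}

def count_ffl(edges):
    """Count ordered triples (x, y, z) forming a feed-forward loop."""
    nodes = {n for e in edges for n in e}
    return sum(1 for x, y, z in permutations(nodes, 3)
               if (x, y) in edges and (y, z) in edges and (x, z) in edges)

print(count_ffl(edges))  # (A,B,C) and (A,C,D) are the two FFLs here
```

The brute-force triple scan is fine for a sketch; motif detection in real GRNs uses dedicated subgraph-enumeration tools, and the thesis's point is precisely that such static counts must be complemented by dynamic, simulation-based measures of information flow.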
|
58 |
An analysis of schedule buffer time for increased robustness and cost efficiency in Scandinavian Airlines' traffic program. Forsberg, Lucas, Ström, Anders January 2016 (has links)
The airline industry has become more competitive since the introduction of low-fare airlines in the 1990s; optimized planning is therefore essential in order to obtain high market shares. The planning behind an airworthy traffic program is complex and includes many different resources, such as fleet, crew and maintenance, which have to be synchronized. With a constant risk of unforeseen disruptions and variance in weather conditions, robustness in the form of time buffers is necessary in order to give the system a chance to recover. Without such buffers, one delay often affects several subsequent flights. Yet in order to achieve cost reductions and maximize profit, airlines tend to create traffic programs that maximize airborne hours. This report culminates in a conclusion of where and when time buffers can, in a cost-efficient way, be added to Scandinavian Airlines' traffic program in order to obtain higher robustness. This is performed by analyzing historical flight data and by implementing a Monte Carlo simulation.
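The Monte Carlo step can be sketched as a delay-propagation simulation along one aircraft's daily rotation: each leg receives a random primary delay, and the schedule buffer after each leg absorbs part of the accumulated delay. The leg count, buffer sizes and exponential disturbance model below are illustrative assumptions, not SAS data:

```python
import random

random.seed(42)
legs = 5                  # flights in one aircraft's daily rotation
mean_primary_delay = 12   # mean random disturbance per leg (minutes)

def final_delay(buffer_min):
    """Simulate one day; return the arrival delay of the last leg."""
    delay = 0.0
    for _ in range(legs):
        delay += random.expovariate(1 / mean_primary_delay)  # new disturbance
        delay = max(0.0, delay - buffer_min)                 # buffer absorbs delay
    return delay

runs = 20_000
for buf in (0, 5, 10, 15):
    avg = sum(final_delay(buf) for _ in range(runs)) / runs
    print(f"buffer {buf:>2} min -> mean last-leg arrival delay {avg:5.1f} min")
```

Repeating this over historical delay distributions, per airport and per time of day, is how one locates where extra buffer time buys the most robustness per scheduled minute.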
|
59 |
Study and Robustness of Filter Bank Multicarrier Modulation: FBMC. Valette, Agathe 09 January 2017 (has links)
Multi-carrier systems are well established in many communication standards such as ADSL, DVB-T, WiMax, and LTE. The dominant technology for broadband communications today is OFDM (Orthogonal Frequency Division Multiplexing). However, the introduction of frequency band aggregation is allowing communication systems to better exploit a radio spectrum that is scarce, expensive and underutilized. Future 5G communications must find a way to exploit this fragmented spectrum as flexibly as possible. Similar problems arise in introducing future broadband PMR (Private Mobile Radio) services into the already crowded PMR spectrum. This requires waveforms with almost perfect spectral occupation, in order to limit the guard bands between users, and OFDM's spectral occupation is simply not good enough to fulfil these requirements. Among the waveforms considered in this context, FBMC/OQAM (Filter Bank Multicarrier/Offset QAM) systems are designed to provide a much better spectral occupation than OFDM systems, with optimal data rate and no need for a cyclic prefix. However, a major disadvantage of multi-carrier systems such as OFDM or FBMC/OQAM is their non-constant envelope, with numerous high-power peaks that appear when the independently modulated sub-carriers add coherently. This makes multi-carrier signals very sensitive to the non-linearities of electronic components, especially those of the PA (Power Amplifier) at the transmitter. PA non-linearities generate in-band and out-of-band distortions, creating spectral regrowth which degrades the spectral occupation of the transmitted signals. The aim of this thesis is to evaluate the impact of the PA on the FBMC/OQAM signal's spectral performance, and to reduce the waveform's sensitivity to these non-linearities. Through Matlab simulations and experimental measurements, using the OFDM signal as a basis for comparison, we first confirm FBMC/OQAM's better spectral occupation, then quantify the effect of PA non-linearities on the FBMC/OQAM and OFDM signals. We then propose an improved precoding-based method for controlling the dynamics of the signal envelope, which aims to reduce the FBMC/OQAM signal's sensitivity to PA non-linearities at limited additional complexity, and study its parameters in order to provide an optimal parameter choice. Finally, we present simulations and measurements of the method's ability to reduce in-band and out-of-band spectral regrowth when the FBMC/OQAM signal is subjected to PA non-linearities.
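The high-power peaks mentioned above are usually quantified by the signal's PAPR (peak-to-average power ratio). A minimal sketch of computing it for a plain OFDM symbol, with an invented QPSK mapping and no claim about the thesis's measurement setup:

```python
import cmath, math, random

# Toy OFDM symbol: N QPSK sub-carriers combined by an inverse DFT.
random.seed(1)
N = 64
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
symbols = [random.choice(qpsk) for _ in range(N)]

# Inverse DFT (direct O(N^2) form, fine for a sketch) with 4x oversampling
# to capture envelope peaks that fall between sample instants.
L = 4 * N
x = [sum(symbols[k] * cmath.exp(2j * math.pi * k * n / L) for k in range(N)) / N
     for n in range(L)]

powers = [abs(v) ** 2 for v in x]
papr_db = 10 * math.log10(max(powers) / (sum(powers) / L))
print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB")
```

Generating an FBMC/OQAM signal additionally requires the prototype filter bank and the offset-QAM staggering, so this only illustrates the metric that the envelope-control precoding aims to reduce, not the waveform under study.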
|
60 |
Design and performance of precast concrete structures. Robinson, Gary P. January 2014 (has links)
A precast concrete structural system offers many advantages over in-situ casting. For example, greater control over the quality of materials and workmanship, improved health and safety (with casting carried out at ground level rather than at height) and cost efficiency (with standard forms continually re-used) are all realised through the off-site production of structural elements. As a result, a large body of research has been conducted into their performance, with many national codes of practice also devoting specific sections to design and detailing. However, contemporary design practice has been shown not always to reflect correctly the findings of published experimental studies. Concrete technology is continually evolving, as is the industry's knowledge of how to model and predict the behaviour of the resulting structural components. Using such understanding to design and justify the more efficient, cost-effective or flexible manufacture of precast components can offer a key commercial advantage to a precast manufacturer. In this context, the numerical and experimental investigations undertaken as part of this study have been specifically focussed on quantifying the advantages of utilising beneficial alternatives. Specifically, the research has looked at improvements in concrete mixes, lightweight aggregates and reinforcing strategies for precast structural elements required to transfer loads both vertically and horizontally. However, because of the non-standard solutions considered, different approaches have been used to demonstrate their suitability. Towards this goal, an alternative assessment strategy was devised for slender precast concrete panels with central reinforcement. The procedure was found to lead to design capacities that are in good agreement with experimental findings and should thus result in future manufacturing efficiency. The method can also be used for alternative concrete types and reinforcement layouts.
Fresh and early-age material characteristics of self-compacting concrete mixes with a partial or complete replacement of traditional gravel and sand constituents with lightweight alternatives were investigated. This was done to demonstrate the feasibility of their use for the manufacture of large scale structural components, with clear benefits in terms of lifting and transportation. A computational push-down procedure was utilised to demonstrate the potential unsuitability of current tying regulations for avoiding a progressive collapse event in precast framed structures. The findings are considered to be of particular significance for these structures due to the segmental nature of the construction and the associated inherent lack of structural continuity.
|