About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Investigations into testability and related concepts

Al-Khanjari, Zuhoor Abdullah January 1999 (has links)
No description available.
12

Robust Network Design and Robustness Factor

Ghayoori, Armin 07 August 2013 (has links)
This thesis presents a robust design approach for communication networks that includes capacitation and routing strategy design. Robustness is a mandatory property of core networks: they must withstand perturbations in network parameters to maintain performance stability and deliver reliable service to different customers. Our proposed design approach is applicable to any system that can be modelled as a weighted directed graph. To quantify robustness, we borrow and develop concepts and properties from the Markov chain literature as well as survivability results in graph theory. We propose a new robustness definition for Markov chains, which has several applications in network design. We define robustness as the sensitivity of the mean first passage time between any two states of the Markov chain, measured by the variation of the mean first passage times under perturbations of the transition probabilities. We show that this definition of robustness is related to the sensitivity of the betweenness of a node/state in a Markov chain, defined as the number of visits by a random walker that wanders through the chain according to its transition probabilities. It is known that, for an infinite walk, the proportion of visits to a state relative to the total number of hops converges to its stationary probability. An analogy can therefore be drawn between the well-known condition number and the robustness factor of a Markov chain. We extend the robustness factor definition to network design problems and show that the robustness factor can be used as a design criterion. The newly defined robustness factor is a function of the network capacitation, routing, and external input and output traffic. We also emphasize the importance of a graph-theoretic metric, the Kemeny constant, in network design problems, and show that a function of the Kemeny constant and the robustness factor bounds the sensitivity of network performance parameters to perturbations in the network.
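The quantities at the heart of this abstract are standard and easy to compute for a small chain. The sketch below (a minimal illustration via the fundamental matrix, an assumed method rather than the thesis's own algorithm; all names are hypothetical) computes the mean first passage times and the Kemeny constant of an ergodic Markov chain; perturbing the transition matrix and recomputing the passage times gives exactly the kind of sensitivity the robustness factor is said to measure.

```python
import numpy as np

def mfpt_and_kemeny(P):
    """Mean first passage times and Kemeny constant of an ergodic Markov chain.

    Uses the fundamental matrix Z = (I - P + 1*pi)^(-1):
      m_ij = (z_jj - z_ij) / pi_j,   K = trace(Z) - 1.
    """
    n = P.shape[0]
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]  # M[i, j] = m_ij; diagonal is 0
    K = np.trace(Z) - 1.0  # Kemeny constant: sum_j pi_j * m_ij, independent of i
    return M, K, pi

# Two-state example: m_01 = 1/0.1 = 10, m_10 = 1/0.2 = 5, K = 1/(1 - 0.7).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
M, K, pi = mfpt_and_kemeny(P)
```

A robustness-factor-style experiment would then compare `M` before and after a small perturbation of one row of `P` (renormalised), taking the ratio of the changes as the sensitivity.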
14

Fragile robustness : principles and practice

Quinton-Tulloch, Mark January 2012 (has links)
Selective robustness is a key feature of biochemical networks, conferring a fitness benefit on organisms living in dynamic environments. The (in)sensitivity of a network to external perturbations results from the interaction between network dynamics, design, and enzyme kinetics. In this work, we focus on the subtle interplay between robustness and fragility. We describe a quantitative method for defining the fragility and robustness of system fluxes and metabolite concentrations with respect to perturbations in enzyme activity. We find that for many mathematical models of metabolic pathways the robustness coefficients follow a broad distribution, and we demonstrate that, unlike fragility, robustness is not a conserved quantity. Using a combination of existing in silico models and novel sets of models, designed so that specific network features of interest can be studied in isolation, we examine the effect of various network properties on the robustness of such pathways. We discuss how to measure, in a meaningful way, the robustness of a pathway as a whole, defining several summary metrics which, in combination, can be used to compare the robustness of different pathways. We show that networking increases robustness, but that different aspects of complexity affect robustness in different ways. We analyse the effect of system control loops on robustness and find that, in general, the addition of such regulation increases pathway robustness. We also examine the evolution of flux robustness, showing that robustness in metabolic pathways is unlikely to be simply a by-product of selection for other pathway traits, and highlighting several trade-offs that result from the evolution of robust systems. Finally, we extend our definition of robustness, defining robustness coefficients for cellular properties other than flux or metabolite concentration, and for perturbations other than changes in enzyme activity. Using the effect of benzoic acid on glycolysis as a case study, we show how such robustness coefficients can be used to draw novel insights from experimental data.
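One concrete reading of a flux robustness coefficient (here assumed, for illustration only, to be the reciprocal of the scaled sensitivity d ln J / d ln e; the toy pathway, rate constants, and function names are all hypothetical, not taken from the thesis) can be estimated numerically for a tiny two-step pathway whose steady-state flux is known in closed form:

```python
import math

def flux(e1, e2, k1=2.0, km1=1.0, k2=1.0, s0=1.0):
    """Steady-state flux of a toy pathway: S0 <-> S1 (enzyme e1, reversible),
    then S1 -> P (enzyme e2, irreversible), with mass-action kinetics."""
    return e1 * e2 * k1 * k2 * s0 / (e1 * km1 + e2 * k2)

def control_coefficient(J, e, h=1e-6):
    """Scaled sensitivity C = d ln J / d ln e by central finite difference."""
    return (math.log(J(e * (1 + h))) - math.log(J(e * (1 - h)))) / (
        math.log(1 + h) - math.log(1 - h))

e1, e2 = 1.0, 1.0
C1 = control_coefficient(lambda e: flux(e, e2), e1)
C2 = control_coefficient(lambda e: flux(e1, e), e2)
R1, R2 = 1.0 / C1, 1.0 / C2  # robustness coefficients: reciprocal sensitivity
```

In this symmetric toy model each enzyme has control coefficient 1/2 (the summation theorem gives C1 + C2 = 1), so both robustness coefficients come out as 2; a "robust" flux would show coefficients much larger than that.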
15

Towards Generalized and Robust Knowledge Association

Pei, Shichao 17 November 2021 (has links)
The next generation of artificial intelligence builds on human knowledge and experience, which can help artificial intelligence evolve towards the capability of planning and reasoning. Although knowledge collection and organization have achieved tremendous progress, it is non-trivial to construct a comprehensive knowledge graph, owing to differing data sources, various construction methods, and alternate entity surface forms. This difficulty motivates the study of knowledge association. Knowledge association has attracted the attention of researchers, and some solutions have been proposed, yet current solutions still suffer from two primary shortcomings: limited generalization and limited robustness. Specifically, most knowledge association methods require a sufficient amount of labeled data and ignore the effective exploration and utilization of complex relationships between entities. Moreover, prevailing approaches rely on clean labeled data as the training set, making the model vulnerable to noise in the given labels. These drawbacks motivate the research on generalization and robustness of knowledge association in this dissertation. This dissertation explores two kinds of knowledge association tasks, entity alignment and entity synonym discovery, and makes innovative contributions to address the above drawbacks. First, semi-supervised entity alignment frameworks are proposed that take advantage of both labeled and unlabeled entities. One employs an entity-level loss based on the cycle-consistency translation loss; another dually minimizes both entity-level and group-level losses by using optimal transport theory to ease the strict constraint imposed by the cycle-consistency loss and to match the whole picture of labeled and unlabeled data across data sources. Second, robust entity alignment methods are proposed to address the robustness drawback.
One follows the adversarial training principle, leverages graph neural networks, and is optimized by a unified reinforced training strategy that combines its two components, noise detection and noise-aware entity alignment. Another resorts to non-sampling and curriculum learning to address the negative sampling issue and the positive data selection issue remaining in the previous method. Lastly, a set-aware entity synonym discovery model, which enables a flexible receptive field by exploiting entity synonym set information, is proposed to explore the complex relationships between entities. The contextual information of entities and entity synonym sets is arranged by a two-level network, from which both can be mapped into the same space to facilitate synonym discovery by encoding high-order contexts from flexible receptive fields.
16

PARALLELIZED ROBUSTNESS COMPUTATION FOR CYBER-PHYSICAL SYSTEMS VERIFICATION

Cralley, Joseph 01 May 2020 (has links)
Failures in cyber-physical systems can be costly in terms of money and lives. The Mars Climate Orbiter alone had a mission cost of 327.6 million USD, almost completely wasted due to an uncaught design flaw. This shows the importance of being able to define formal requirements, and to test a design against those requirements. One way to define requirements is Metric Temporal Logic (MTL), which allows constraints that also have a time component. MTL can also be given a distance metric that quantifies how close an MTL constraint is to being falsified; this quantity is termed robustness.
Being able to calculate MTL robustness quickly can help reduce development time and costs for a cyber-physical system. In this thesis, improvements to the current method of computing MTL robustness are proposed. These improvements lower the time complexity, allow parallel processing to be used, and reduce the memory footprint of the MTL robustness calculation. They will hopefully make MTL robustness practical in settings that were previously inaccessible due to time constraints or data resolution, including real-time systems that need results quickly and systems that operate for long periods and produce large amounts of signal data.
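For intuition, the quantitative (robustness) semantics of two common MTL operators over a sampled signal can be sketched in a few lines (a minimal discrete-time illustration, not the thesis's algorithm; the function names and example signal are hypothetical): a predicate maps each sample to a signed margin, "globally" takes the minimum margin over its interval, and "eventually" takes the maximum. The sign of the result says whether the formula holds; its magnitude says how robustly.

```python
def rob_globally(signal, interval, margin):
    """Robustness of G_[a,b] (margin(x) >= 0) at time 0, discrete-time signal."""
    a, b = interval
    return min(margin(signal[t]) for t in range(a, b + 1))

def rob_eventually(signal, interval, margin):
    """Robustness of F_[a,b] (margin(x) >= 0) at time 0, discrete-time signal."""
    a, b = interval
    return max(margin(signal[t]) for t in range(a, b + 1))

sig = [0.2, 0.5, 0.9, 0.4]
# "x stays below 1 throughout [0,3]": smallest slack is 1 - 0.9 = 0.1 (> 0: satisfied)
r_safe = rob_globally(sig, (0, 3), lambda x: 1.0 - x)
# "x exceeds 0.8 somewhere in [0,3]": best margin is 0.9 - 0.8 = 0.1
r_reach = rob_eventually(sig, (0, 3), lambda x: x - 0.8)
```

The naive evaluation above is quadratic when nested; the speed-ups the thesis proposes target exactly this kind of sliding min/max computation.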
17

Enhanced shape-from-shading for object recognition

Worthington, Philip Lee January 1999 (has links)
No description available.
18

Data contamination versus model deviation

Fonseca, Viviane Grunert da January 1999 (has links)
No description available.
19

Quantification of structural redundancy and robustness

Brett, Colin Joseph January 2015 (has links)
Historical collapse events are testament to the inherent dangers of non-robust structures. Designing robust structures is vital to ensure that localised damage events, such as the failure of a single structural element, do not lead to catastrophic disproportionate collapse. While the advent of robustness research can be dated to the collapse of the Ronan Point building in 1968, the quantification of robustness remains an active and important research field. The importance of developing effective robustness assessment methods is emphasized by a number of factors. One issue is the growing problem of inspecting, maintaining and ensuring the safety of ageing infrastructure: older structures are more likely to be non-redundant and are more susceptible to structural defects. Another factor is the pursuit of greater efficiency and design optimisation, which has eliminated traditional design conservatism and many undocumented factors of safety; as a result, modern buildings may be more vulnerable to unforeseen conditions during their service life. The objective of quantifying robustness highlights the need for a new system-oriented perspective on structural performance to complement traditional component-based design. There is, as yet, no single framework that incorporates all the essential aspects in an explicit, transparent and quantitative manner and leads to a comprehensive quantification of structural robustness. This thesis focuses primarily on the quantification of redundancy and robustness, taking the view that the capacity of a structure to withstand a damage event is an inherent property of the structure, complementary to other commonly discussed structural properties such as strength and ductility. Hence, a comprehensive unified framework for redundancy quantification is proposed, which builds upon existing strength-based measures.
The role of structural uncertainties in the quantification of robustness is investigated, with a focus on the importance of the sequence of events which precede the collapse of a structure. Directly incorporating structural uncertainties into robustness quantification typically requires computationally expensive methods such as Monte Carlo simulation. Moreover, such collapse analyses are susceptible to numerical instabilities, further complicating the simulation of multiple collapse scenarios. To address these issues, a novel incremental elastic analysis method is proposed in this thesis, which traces the full load-displacement relationship of a structure and, additionally, has an inbuilt capacity to incorporate structural variability and thus output a spectrum of possible response outcomes.
20

ARTS: Agent-Oriented Robust Transactional System

Wang, Mingzhong January 2009 (has links)
Internet computing enables the construction of large-scale and complex applications by aggregating and sharing computational, data and other resources across institutional boundaries. The agent model can address the ever-increasing challenges of scalability and complexity, driven by the prevalence of Internet computing, through its intrinsic properties of autonomy and reactivity, which support the flexible management of application execution in distributed, open, and dynamic environments. However, the non-deterministic behaviour of autonomous agents leads to a lack of control, which complicates exception management in the system and thus threatens its robustness and reliability, because improperly handled exceptions may cause unexpected system failures and crashes.

In this dissertation, we investigate and develop mechanisms to integrate intrinsic support for concurrency control, exception handling, recoverability, and robustness into multi-agent systems. The research covers agent specification, planning and scheduling, execution, and overall coordination, in order to reduce the impact of environmental uncertainty. Simulation results confirm that our model can improve the robustness and performance of the system, while relieving developers from dealing with the low-level complexity of exception handling.

A survey, along with a taxonomy, of existing proposals and approaches for building robust multi-agent systems is provided first, and the merits and limitations of each category are highlighted.

Next, we introduce the ARTS (Agent-Oriented Robust Transactional System) platform, which allows agent developers to compose recursively-defined, atomically-handled tasks in order to specify scoped and hierarchically-organized exception-handling plans for a given goal. ARTS then supports automatic selection, execution, and monitoring of appropriate plans in a systematic way, for both normal and recovery executions. Moreover, we propose multiple-step backtracking, which extends the existing step-by-step plan reversal, to serve as the default exception handling and recovery mechanism in ARTS. This mechanism utilizes previous planning results in determining the response to a failure, and allows a substitutable path to start prior to, or in parallel with, the compensation process, thus allowing an agent to achieve its goals more directly and efficiently. ARTS helps developers to focus on high-level business logic and relieves them of the low-level complexity of exception management.

One of the reasons for the occurrence of exceptions in a multi-agent system is that agents are unable to adhere to their commitments. We propose two scheduling algorithms for minimising such exceptions when commitments are unreliable. The first is trust-based scheduling, which incorporates the concept of trust, that is, the probability that an agent will comply with its commitments, along with the constraints of system budget and deadline, to improve the predictability and stability of the schedule. Trust-based scheduling supports the runtime adaptation and evolution of the schedule by interleaving the processes of evaluation, scheduling, execution, and monitoring in the life cycle of a plan. The second is commitment-based scheduling, which focuses on the interaction and coordination protocol among agents, and augments agents with the ability to reason about and manipulate their commitments. Commitment-based scheduling supports the refactoring and parallel execution of commitments to maximize the system's overall robustness and performance. While the first algorithm must be performed by a central coordinator, the second is designed to be distributed and embedded into the individual agents.

Finally, we discuss the integration of our approaches into Internet-based applications, to build flexible but robust systems. Specifically, we discuss the designs of an adaptive business process management system and of robust scientific workflow scheduling.
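The trust-based selection idea can be illustrated with a deliberately tiny sketch (the data model, names, exhaustive search, and independence assumption are hypothetical simplifications, not ARTS's algorithm): each task has candidate agents annotated with a compliance probability ("trust"), a cost, and a duration, and the scheduler picks one agent per task to maximize the probability that every commitment is honoured, subject to a budget and a deadline.

```python
from itertools import product

def trust_based_schedule(tasks, budget, deadline):
    """Pick one candidate agent per task, maximizing the product of trust
    values (compliance assumed independent) subject to total cost <= budget
    and total duration <= deadline (tasks assumed to run sequentially).

    tasks: list of candidate lists; each candidate is (trust, cost, duration).
    Returns (chosen candidates, overall trust), or (None, -1.0) if infeasible.
    """
    best, best_trust = None, -1.0
    for choice in product(*tasks):  # exhaustive search: fine for tiny inputs
        cost = sum(c for _, c, _ in choice)
        time = sum(d for _, _, d in choice)
        if cost > budget or time > deadline:
            continue
        trust = 1.0
        for t, _, _ in choice:
            trust *= t
        if trust > best_trust:
            best, best_trust = choice, trust
    return best, best_trust

# Two tasks, two candidate agents each: (trust, cost, duration).
tasks = [[(0.9, 5, 2), (0.6, 2, 1)],
         [(0.8, 4, 3), (0.95, 7, 2)]]
best, best_trust = trust_based_schedule(tasks, budget=10, deadline=5)
```

In the example, the most trustworthy pairing (0.9 and 0.95) busts the budget, so the scheduler settles for 0.9 x 0.8 = 0.72; a runtime-adaptive version, as described in the abstract, would re-run this selection as trust estimates and constraints evolve.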
