1

Optimal Sensor Placement for Infrastructure System Monitoring using Probabilistic Graphical Models and Value of Information

Malings, Carl Albert 01 May 2017 (has links)
Civil infrastructure systems form the backbone of modern civilization, providing the basic services that allow society to function. Effective management of these systems requires decision-making about the allocation of limited resources to maintain and repair infrastructure components and to replace failed or obsolete components. Making informed decisions requires an understanding of the state of the system; such an understanding can be achieved through a computational or conceptual system model combined with information gathered on the system via inspections or sensors. Gathering of this information, referred to generally as sensing, should be optimized to best support the decision-making and system management processes, in order to reduce long-term operational costs and improve infrastructure performance. In this work, an approach to optimal sensing in infrastructure systems is developed by combining probabilistic graphical models of infrastructure system behavior with the value of information (VoI) metric, which quantifies the utility of information gathering efforts (referred to generally as sensor placements) in supporting decision-making in uncertain systems. Computational methods are presented for the efficient evaluation and optimization of the VoI metric based on the probabilistic model structure. Various case studies on the application of this approach to managing infrastructure systems are presented, illustrating the flexibility of the basic method as well as various special cases for its practical implementation. Three main contributions are presented in this work. First, while the computational complexity of the VoI metric generally grows exponentially with the number of components, growth can be greatly reduced in systems with certain topologies (designated as cumulative topologies). Following from this, an efficient approach to VoI computation based on a cumulative topology and Gaussian random field model is developed and presented. Second, in systems with non-cumulative topologies, approximate techniques may be used to evaluate the VoI metric. This work presents extensive investigations of such systems and draws some general conclusions about the behavior of this metric. Third, this work presents several complete application cases for probabilistic modeling techniques and the VoI metric in supporting infrastructure system management. Case studies are presented in structural health monitoring, seismic risk mitigation, and extreme temperature response in urban areas. Other minor contributions included in this work are theoretical and empirical comparisons of the VoI with other sensor placement metrics and an extension of the developed sensor placement method to systems that evolve in time. Overall, this work illustrates how probabilistic graphical models and the VoI metric can allow for efficient sensor placement optimization to support infrastructure system management. Areas of future work to expand on the results presented here include the development of approximate, heuristic methods to support efficient sensor placement in non-cumulative system topologies, as well as further validation of the efficient sensing optimization approaches used in this work.
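As a toy illustration of the VoI metric at the heart of this approach, the sketch below computes the value of a single noisy sensor for a one-component repair decision under a Gaussian state model. All parameters (costs, noise level, failure threshold) are invented for illustration and are not taken from the thesis; the thesis's contribution is the efficient extension of this computation to networks of components via probabilistic graphical models.

```python
import numpy as np
from scipy import stats

# Hypothetical single-component example: damage state x ~ N(MU, SIGMA^2),
# a binary repair decision, and a sensor reading y = x + N(0, TAU^2).
MU, SIGMA = 0.5, 0.3      # prior damage state
TAU = 0.2                 # sensor noise standard deviation
THRESHOLD = 0.8           # damage level at which the component fails
C_REPAIR, C_FAIL = 1.0, 10.0

def expected_loss(mu, sigma):
    """Best achievable expected cost when x ~ N(mu, sigma^2): repair or not."""
    p_fail = 1.0 - stats.norm.cdf(THRESHOLD, mu, sigma)
    return min(C_REPAIR, C_FAIL * p_fail)

# Expected loss when deciding on the prior alone.
prior_loss = expected_loss(MU, SIGMA)

# Expected loss after observing y, averaged over the Gaussian predictive of y
# (Gauss-Hermite quadrature). Conjugacy makes the posterior Gaussian with a
# fixed variance and a mean linear in y.
post_var = 1.0 / (1.0 / SIGMA**2 + 1.0 / TAU**2)
nodes, weights = np.polynomial.hermite_e.hermegauss(50)
y_std = np.sqrt(SIGMA**2 + TAU**2)        # predictive std of y
posterior_loss = 0.0
for t, w in zip(nodes, weights / np.sqrt(2.0 * np.pi)):
    y = MU + y_std * t
    post_mean = post_var * (MU / SIGMA**2 + y / TAU**2)
    posterior_loss += w * expected_loss(post_mean, np.sqrt(post_var))

# VoI = reduction in expected loss from observing the sensor; it is always
# nonnegative, and candidate sensor placements are ranked by it.
print(f"VoI of this sensor: {prior_loss - posterior_loss:.3f}")
```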
2

Advanced modelling and visualisation of liquid-liquid separations of complex sample components, with variable phase distribution and mode of operation

De Folter, Jozefus Johannes Martinus January 2013 (has links)
This research concerns liquid-liquid chromatography modelling. While the main focus was on liquid-liquid chromatography, where both the stationary and mobile phases are liquid, the theory of other types of chromatography, including the most widely used techniques, was considered as well. The main goal of this research was to develop a versatile liquid-liquid separation model able to represent all potential operating scenarios and modes of operation. A second goal was to create effective and usable interfaces to such a model, implying primarily information visualisation and secondarily educative visualisation. The first model developed was based on Counter-Current Distribution. Next, a new, more elemental model was developed, the probabilistic model, which better represents continuous liquid-liquid chromatography techniques. Finally, a more traditional model was developed using transport theory. These models were applied and compared to experimental data taken from the literature. The models were demonstrated to cover all main liquid-liquid chromatography techniques, incorporated the different modes of operation, and were able to accurately model many sample components and complex sample injections. A model interface was developed, permitting functional and effective model configuration, exploration and analysis through visualisation and interactivity. Different versions of the interface were then evaluated using questionnaires, group interviews and Insight Evaluation. The visualisation and interactivity enhancements were shown to contribute to understanding of, and insight into, the underlying chromatography process. This also demonstrated the value of the Insight Evaluation method, which provided the qualitative results this interface evaluation required. A prototype of a new graphical user interface was developed and showed great potential for combining model parameter input with exploration of the liquid-liquid chromatography process. Additionally, a new visualisation method was developed that can accurately visualise different modes of operation. This was used to create animations, which were also evaluated; the results show the new visualisation helps CCC novices understand the liquid-liquid chromatography process. The model software will be a valuable tool for industry for predicting, evaluating and validating experimental separations and production processes. While effective models already existed, interactive visualisation lets users explore the relationship between parameters and performance in a simpler yet more powerful way. It will also be a valuable tool for academia for teaching and training both staff and students in how to use the technology. Prior to this work, no such tool existed, or existing tools were limited in their accessibility and educational value.
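For readers unfamiliar with Counter-Current Distribution, the basis of the first model, the following sketch simulates the classic Craig CCD process. It is a minimal illustration of the idea, not the thesis software; the cell counts and partition coefficients are invented.

```python
import numpy as np

# Craig counter-current distribution: a solute equilibrates between two liquid
# phases in each cell, then the upper (mobile) phase advances one cell. With
# partition coefficient K and equal phase volumes, a fraction p = K/(K+1)
# moves on at every transfer, giving a binomial spread over the cells.

def ccd_profile(n_cells: int, n_transfers: int, k_partition: float) -> np.ndarray:
    """Amount of solute in each cell after n_transfers transfer steps."""
    p = k_partition / (k_partition + 1.0)   # fraction carried by mobile phase
    cells = np.zeros(n_cells)
    cells[0] = 1.0                          # unit amount injected into cell 0
    for _ in range(n_transfers):
        moved = p * cells                   # solute leaving each cell
        cells = cells - moved
        cells[1:] += moved[:-1]             # upper phase shifted one cell right
    return cells

# Two components with different partition coefficients separate over transfers:
for K in (0.5, 2.0):
    profile = ccd_profile(n_cells=60, n_transfers=50, k_partition=K)
    print(f"K={K}: peak at cell {profile.argmax()}")
```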
3

Bounds for the Maximum-Time Stochastic Shortest Path Problem

Kozhokanova, Anara Bolotbekovna 13 December 2014 (has links)
A stochastic shortest path problem is an undiscounted infinite-horizon Markov decision process with an absorbing and cost-free target state, where the objective is to reach the target state while optimizing total expected cost. In almost all cases, the objective in solving a stochastic shortest path problem is to minimize the total expected cost to reach the target state. But in probabilistic model checking, it is also useful to solve a problem where the objective is to maximize the expected cost to reach the target state. This thesis considers the maximum-time stochastic shortest path problem, a special case of the maximum-cost stochastic shortest path problem in which actions have unit cost. The contribution is an efficient approach to computing high-quality bounds on the optimal solution for this problem. The bounds are useful in themselves, but can also be used by other algorithms to accelerate the search for an optimal solution.
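For concreteness, with unit action costs the optimal maximum expected time satisfies the fixed-point equation V(s) = max_a [1 + Σ_s' P(s'|s,a) V(s')] with V(goal) = 0. The value iteration below, on an invented three-state MDP in which every policy reaches the goal, produces monotonically increasing lower bounds on this value; the thesis's efficient bounding techniques are not reproduced here.

```python
import numpy as np

# P[a][s][s']: transition probabilities for two actions; state 2 is the
# absorbing goal state.
P = np.array([
    [[0.1, 0.6, 0.3],     # action 0
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]],
    [[0.7, 0.2, 0.1],     # action 1
     [0.4, 0.1, 0.5],
     [0.0, 0.0, 1.0]],
])
GOAL = 2

V = np.zeros(3)
for _ in range(1000):
    # Bellman backup: unit cost per step, maximize expected time.
    Q = 1.0 + P @ V                 # Q[a, s]
    V_new = Q.max(axis=0)
    V_new[GOAL] = 0.0               # no further cost once absorbed
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("Max expected steps to goal from each state:", np.round(V, 3))
```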
4

A probabilistic model of virus adsorption in packed beds

Visneski, Michael J. January 1986 (has links)
No description available.
5

Bilayer Network Modeling

Creasy, Miles Austin 14 September 2011 (has links)
This dissertation presents the development of a modeling scheme for the membrane potentials and ion currents in a bilayer network system. The modeling platform builds on the work of Hodgkin and Huxley, who modeled cell membrane potentials and ion currents with electrical circuits. This platform is built specifically for cell mimics in which individual aqueous volumes are separated by single bilayers, such as the droplet-interface bilayer. A potential applied in one of the aqueous volumes propagates through the system, creating membrane potentials across the bilayers and, when proteins are incorporated that form pores or channels within the bilayers, ion currents through the membranes. The model design allows the system to be divided into individual nodes of single bilayers. The conductance properties of the proteins embedded within these bilayers are modeled, and a finite element analysis scheme is used to form the system equations for all of the nodes. The system equation can be solved for the membrane potentials throughout the network and then for the ion currents through individual membranes in the system. A major part of this work is modeling the conductance of the proteins embedded within the bilayers. Some proteins embedded in bilayers open pores and channels through the bilayer in response to specific stimuli, allowing ion currents to flow from one aqueous volume to an adjacent volume. Modeling examples of the conductance behavior of specific proteins are presented. The examples demonstrate the aggregate conductance behavior of multiple embedded proteins in a single bilayer, as well as examples where few proteins are embedded in the bilayer and the conductance comes from a single channel or pore. The effect of ion gradients on the single-channel conductance example is explored, and those effects are included in the single-channel conductance model. Ultimately, these conductance models are used with the system model to predict ion currents through a bilayer or through part of a bilayer network system. These modeling efforts provide a tool that will assist engineers in designing bilayer network systems. / Ph. D.
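The sketch below illustrates the kind of nodal assembly the dissertation describes: droplets as nodes, bilayers as conductance edges with reversal potentials, and Kirchhoff's current law yielding a linear system for the steady-state membrane potentials. Topology, conductances, and applied potentials are invented for illustration; the dissertation additionally models stimulus-dependent (nonlinear) protein conductances, which are omitted here.

```python
import numpy as np

# Bilayer edges: (node_a, node_b, conductance_nS, reversal_mV). Each edge
# carries current I = g * (v_a - v_b - E_rev), in pA for these units.
edges = [(0, 1, 1.2, 0.0), (1, 2, 0.8, 10.0), (1, 3, 0.5, 0.0), (2, 3, 1.0, -5.0)]
n_nodes = 4
G = np.zeros((n_nodes, n_nodes))
i = np.zeros(n_nodes)

for a, b, g, e_rev in edges:
    # Standard conductance stamp; the reversal potential becomes a source term.
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g
    i[a] += g * e_rev
    i[b] -= g * e_rev

# Boundary conditions: 50 mV applied at node 0, node 3 grounded. Eliminate
# the known-potential nodes and solve for the free ones.
known = {0: 50.0, 3: 0.0}
free = [n for n in range(n_nodes) if n not in known]
rhs = i[free] - sum(G[np.ix_(free, [n])].ravel() * val for n, val in known.items())
v = np.zeros(n_nodes)
v[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
for n, val in known.items():
    v[n] = val

print("Node potentials (mV):", np.round(v, 2))
for a, b, g, e_rev in edges:
    print(f"bilayer {a}-{b}: {g * (v[a] - v[b] - e_rev):.2f} pA")
```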
6

Adaptation Timing in Self-Adaptive Systems

Moreno, Gabriel A. 01 April 2017 (has links)
Software-intensive systems are increasingly expected to operate under changing and uncertain conditions, including not only varying user needs and workloads, but also fluctuating resource capacity. Self-adaptation is an approach that aims to address this problem, giving systems the ability to change their behavior and structure to adapt to changes in themselves and their operating environment without human intervention. Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, some adaptation tactics—atomic adaptation actions that leave the system in a consistent state—have latency: they take some time to produce their effect. In that case, reactive adaptation causes the system to lag behind environment changes. What is worse, a long-running adaptation action may prevent the system from performing other adaptations until it completes, further limiting its ability to effectively deal with the environment changes. To address these limitations and improve the effectiveness of self-adaptation, we present proactive latency-aware adaptation, an approach that considers the timing of adaptation (i) leveraging predictions of the near future state of the environment to adapt proactively; (ii) considering the latency of adaptation tactics when deciding how to adapt; and (iii) executing tactics concurrently. We have developed three different solution approaches embodying these principles. One is based on probabilistic model checking, making it inherently able to deal with the stochastic behavior of the environment, and guaranteeing optimal adaptation choices over a finite decision horizon. The second approach uses stochastic dynamic programming to make adaptation decisions, and thanks to performing part of the computations required to make those decisions off-line, it achieves a speedup of an order of magnitude over the first solution approach without compromising optimality. A third solution approach makes adaptation decisions based on repertoires of adaptation strategies—predefined compositions of adaptation tactics. This approach is more scalable than the other two because the solution space is smaller, allowing an adaptive system to reap some of the benefits of proactive latency-aware adaptation even if the number of ways in which it could adapt is too large for the other approaches to consider all these possibilities. We evaluate the approach using two different classes of systems with different adaptation goals, and different repertoires of adaptation strategies. One of them is a web system, with the adaptation goal of utility maximization. The other is a cyber-physical system operating in a hostile environment. In that system, self-adaptation must not only maximize the reward gained, but also keep the probability of surviving a mission above a threshold. In both cases, our results show that proactive latency-aware adaptation improves the effectiveness of self-adaptation with respect to reactive time-agnostic adaptation.
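A minimal sketch of the core timing idea, with invented numbers: a tactic whose effect arrives only after a latency must be started before the predicted need, and a proactive planner can pick the start time that maximizes utility over a forecast horizon, whereas a reactive system starts too late.

```python
# Hypothetical "add server" tactic: it takes LATENCY ticks to complete, so its
# extra capacity is only available from (start_tick + LATENCY) onward.
LATENCY = 2
predicted_load = [3, 3, 6, 9, 9]   # assumed forecast of requests per tick
CAPACITY_NOW, CAPACITY_AFTER = 5, 10

def utility(load: int, capacity: int) -> float:
    # Served requests minus a penalty proportional to dropped ones.
    served = min(load, capacity)
    return served - 2.0 * (load - served)

def total_utility(start_tick: int) -> float:
    return sum(
        utility(load, CAPACITY_AFTER if t >= start_tick + LATENCY else CAPACITY_NOW)
        for t, load in enumerate(predicted_load)
    )

# Proactive: search the horizon for the best start time. Reactive adaptation
# would start only when overload is first observed (tick 2), too late to help.
best = max(range(len(predicted_load) + 1), key=total_utility)
print("proactive start tick:", best, "utility:", total_utility(best))
print("reactive (start at 2) utility:", total_utility(2))
```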
7

Estimation du contexte par vision embarquée et schémas de commande pour l’automobile / Context estimation using embedded vision and control schemes for automobiles

Ammar, Moez 21 December 2012 (has links)
Autonomous systems must continuously assess their environment through embedded sensors in order to make decisions relevant to their mission, as well as to the endosystem and the exosystem. In the case of so-called 'intelligent' vehicles, attention to the surrounding context focuses, on the one hand, on perfectly standardized objects such as vertical or horizontal road signage and, on the other hand, on objects that are difficult to model because of their number and variety (pedestrians, cyclists, other vehicles, animals, balls, arbitrary obstacles on the roadway, etc.). A contrario decision-making offers a formal framework suited to this problem of detecting variable objects, since it models the noise rather than enumerating the objects to be detected. The main contribution of this thesis is to adapt probabilistic NFA (Number of False Alarms) measures to the problem of detecting objects that either have their own motion or are salient with respect to the road plane. A strength of the developed algorithms is that they are free of any detection threshold. A first NFA measure identifies the sub-domain of the image (pixels not necessarily connected) whose gray-level values are the most surprising under a Gaussian noise hypothesis (naive model). A second NFA measure then identifies the subset of windows of maximal significance under a binomial hypothesis (naive model). We show that these NFA measures can also serve as criteria for parameter optimization, whether for the 6D motion of the onboard camera or for a gray-level binarization threshold. Finally, we show that the proposed algorithms are generic, in the sense that they apply to different types of input images, radiometric images or disparity maps. In contrast to the a contrario approach, Markov models allow a priori knowledge about the sought objects to be injected; we exploit them for the classification of road markings. Building on the context estimation (signage, detection of 'unknown' objects), the control part comprises, first, a specification of the possible trajectories and, second, closed-loop laws ensuring tracking of the selected trajectory. The possible trajectories are grouped into a bundle, that is, a set of functions of time whose parameters set the local geometric invariants (slope, curvature). These parameters are globally a function of the context outside the vehicle (presence of vulnerable road users, fixed obstacles, speed limits, etc.) and determine which element of the bundle is chosen. Tracking of the chosen trajectory is then carried out using differential-flatness techniques, which prove particularly well suited to trajectory-tracking problems. A differentially flat system is indeed entirely parameterized by its flat outputs and their derivatives. Another characteristic property of such systems is that they are exactly (and hence globally) linearizable by endogenous dynamic feedback and coordinate transformation. Stabilizing tracking is then trivially obtained on the linearized system.
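A minimal sketch of the NFA principle described above, on synthetic data: an event is detected when its number of false alarms, the number of tests times the tail probability of the observation under the naive noise model, falls below 1, removing the need for a tuned detection threshold. The image, noise model, and salient patch are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(64, 64))   # pure-noise background
image[30:33, 30:33] += 5.0                    # a small salient object

# NFA = (number of tests) * P(event at least as extreme | naive model).
# Here the naive model is i.i.d. Gaussian gray levels and each pixel is a test.
n_tests = image.size
p_tail = 2.0 * stats.norm.sf(np.abs(image))   # two-sided tail probability
nfa = n_tests * p_tail

# Parameterless decision rule: NFA < 1 means pure noise would, on average,
# produce fewer than one such detection over the whole image.
detections = nfa < 1.0
print(f"{detections.sum()} meaningful pixels detected")
```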
8

Approche Bayésienne pour la Sélection de l'Action et la Focalisation de l'Attention. Application à la Programmation de Robots Autonomes. / A Bayesian Approach to Action Selection and Attention Focusing, with Application to Autonomous Robot Programming

Chagas E Cavalcante Koike, Carla Maria 14 November 2005 (has links) (PDF)
Autonomous sensorimotor systems placed in dynamic environments must continually answer the ultimate question: how should motor commands be controlled from sensory inputs? Answering this question is a very complex problem, mainly because of the enormous amount of information that must be processed while respecting several constraints: real-time operation, limited memory, and limited data-processing capacity. Another major challenge is handling the incomplete and imprecise information usually present in dynamic environments. This thesis addresses the problem of controlling autonomous sensorimotor systems and proposes a chain of hypotheses and simplifications. These hypotheses and simplifications are defined within a precise and strict mathematical framework called Bayesian programming, an extension of Bayesian networks. The chain proceeds in five stages: the use of internal states; the first-order Markov and stationarity hypotheses and Bayesian filters; the exploitation of partial independence between state variables; the addition of a behavior-selection mechanism; and attention focusing guided by behavioral intention. The description of each stage is followed by an analysis of its memory requirements, computational complexity, and modeling difficulty. We also present in-depth discussions concerning robot programming on the one hand and cognitive systems on the other. Finally, we describe the application of this programming framework to a mobile robot.
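A minimal sketch of the Bayesian filter that the first stages of this chain lead to, using an invented two-state model: the belief over the internal state is propagated with the first-order Markov transition model (predict) and reweighted by the sensor likelihood (update).

```python
import numpy as np

# Recursion: P(s_t | o_1:t) ∝ P(o_t | s_t) * Σ_{s_{t-1}} P(s_t | s_{t-1}) P(s_{t-1} | o_1:t-1)
transition = np.array([[0.9, 0.1],    # P(s_t | s_{t-1}), one row per previous state
                       [0.2, 0.8]])
observation = np.array([[0.7, 0.3],   # P(o_t | s_t), one row per state
                        [0.1, 0.9]])

belief = np.array([0.5, 0.5])         # prior over the internal state
for obs in [0, 0, 1, 1, 1]:           # stream of sensory readings
    belief = transition.T @ belief    # predict with the Markov model
    belief *= observation[:, obs]     # weight by likelihood of the reading
    belief /= belief.sum()            # normalize (update)
    print(np.round(belief, 3))
# The motor command is then chosen from this belief, which is where the
# behavior-selection and attention-focusing stages plug in.
```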
9

Quantitative Analysis of Information Leakage in Probabilistic and Nondeterministic Systems

Andrés, Miguel 01 July 2011 (has links) (PDF)
As we dive into the digital era, there is growing concern about the amount of personal digital information that is being gathered about us. Websites often track people's browsing behavior, health care insurers gather medical data, and many smartphones and navigation systems store or transmit information that makes it possible to track the physical location of their users at any time. Hence, anonymity, and privacy in general, are increasingly at stake. Anonymity protocols counter this concern by offering anonymous communication over the Internet. To ensure the correctness of such protocols, which are often extremely complex, a rigorous framework is needed in which anonymity properties can be expressed, analyzed, and ultimately verified. Formal methods provide a set of mathematical techniques that allow us to rigorously specify and verify anonymity properties. This thesis addresses the foundational aspects of formal methods for applications in security, and in particular in anonymity. More concretely, we develop frameworks for the specification of anonymity properties and propose algorithms for their verification. Since in practice anonymity protocols always leak some information, we focus on quantitative properties that capture the amount of information leaked by a protocol. We start our research on anonymity from its very foundations, namely conditional probabilities, the key ingredient of most quantitative anonymity properties. In Chapter 2 we present cpCTL, the first temporal logic making it possible to specify conditional probabilities. In addition, we present an algorithm to verify cpCTL formulas in a model-checking fashion. This logic, together with the model checker, allows us to specify and verify quantitative anonymity properties over complex systems where probabilistic and nondeterministic behavior may coexist. We then turn our attention to more practical grounds: the construction of algorithms to compute information leakage. More precisely, in Chapter 3 we present polynomial algorithms to compute the (information-theoretic) leakage of several kinds of fully probabilistic protocols (i.e. protocols without nondeterministic behavior). The techniques presented in this chapter are the first to enable the computation of (information-theoretic) leakage in interactive protocols. In Chapter 4 we attack a well-known problem in distributed anonymity protocols, namely full-information scheduling. To overcome this problem, we propose an alternative definition of schedulers together with several new definitions of anonymity (varying according to the attacker's power), and revise the famous definition of strong anonymity from the literature. Furthermore, we provide a technique to verify that a distributed protocol satisfies some of the proposed definitions. In Chapter 5 we provide (counterexample-based) techniques to debug complex systems, allowing for the detection of flaws in security protocols. Finally, in Chapter 6 we briefly discuss extensions to the frameworks and techniques proposed in Chapters 3 and 4.
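As a toy illustration of the information-theoretic leakage notion computed in Chapter 3 (not the thesis's polynomial algorithms themselves), the sketch below measures the mutual information I(S; O) between a secret and an observable for a small invented channel.

```python
import numpy as np

# Model the protocol as a channel C[s, o] = P(observable o | secret s);
# leakage is the mutual information I(S; O) = H(S) - H(S | O).

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

prior = np.array([0.5, 0.5])              # P(s), invented for illustration
channel = np.array([[0.8, 0.2],           # P(o | s): rows are secrets
                    [0.3, 0.7]])

joint = prior[:, None] * channel          # P(s, o)
p_obs = joint.sum(axis=0)                 # P(o)

# H(S|O) = Σ_o P(o) H(S | O = o), using the posterior for each observation.
h_s_given_o = sum(p_o * entropy(joint[:, o] / p_o)
                  for o, p_o in enumerate(p_obs))
leakage = entropy(prior) - h_s_given_o    # bits of information leaked
print(f"leakage I(S;O) = {leakage:.3f} bits")
```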
10

Contributions to Bayesian Network Learning / Contributions à l'apprentissage des réseaux bayésiens

Auvray, Vincent 19 September 2007 (has links)
No description available.
