  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Preference elicitation using evaluation over observed behaviours

Valdinei Freire da Silva 07 April 2009 (has links)
Recently, a variety of tasks have been delegated to computer systems, especially when a computer system is more reliable than a human or when the task is not suitable for a human being. Preference elicitation supports this delegation by enabling even lay people to easily program a computer system with their own preferences. A person's preferences are elicited through answers to specific questions that the computer system formulates by itself. The person acts as a user of the computer system, while the system can be seen as an agent acting on the person's behalf. The structure and context of the questions are known sources of variance in users' answers, and such variance can jeopardize the feasibility of preference elicitation. One way to reduce this variance is to ask the user to choose between two behaviours he has observed. Relative evaluation of observed behaviours makes questions simpler and more transparent for the user, reducing variance, but the resulting evaluations may not be easy for the agent to interpret. If the agent's and the user's perceptions diverge, the agent may be unable to learn the user's preferences: evaluations are generated from the user's perceptions, but all the agent can do is relate them to its own. A further issue is that questions, which are presented to the user as demonstrated behaviours, are constrained by the dynamics of the environment: a behaviour cannot be chosen arbitrarily, it must be feasible, and a policy must be executed in the environment to demonstrate it. While the first issue affects the inference of how the user evaluates behaviours, the second affects how fast and how accurately the learning process can proceed.
This thesis formulates the problem of Preference Elicitation from Evaluations of Observed Behaviours within the Markov Decision Process framework, and develops theoretical properties in that framework that make the problem computationally feasible. The problem of differing perceptions is analysed and solutions are developed under restricted conditions. The problem of demonstrating behaviours is analysed through questions based on stationary and non-stationary policies; algorithms implementing both kinds of questions are used to solve preference elicitation in a scenario under restricted conditions.
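The elicitation loop described above can be sketched in a toy setting: assume the user's choices between two demonstrated behaviours follow a Bradley-Terry comparison of trajectory feature vectors under a hidden reward vector, and let the agent recover that vector by online logistic regression. Everything here (features, reward, learning rate) is an illustrative assumption, not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the user's hidden preference is a reward vector w_true
# over trajectory features; the agent only observes pairwise choices.
w_true = np.array([1.0, -0.5, 0.2])

def trajectory_features():
    """Random discounted feature counts of a demonstrated behaviour."""
    return rng.normal(size=3)

def user_prefers(phi_a, phi_b):
    """Bradley-Terry style noisy comparison driven by the hidden reward."""
    p = 1.0 / (1.0 + np.exp(-(phi_a - phi_b) @ w_true))
    return rng.random() < p

# Agent: online logistic-regression estimate of w from observed comparisons.
w_hat = np.zeros(3)
lr = 0.5
for _ in range(2000):
    phi_a, phi_b = trajectory_features(), trajectory_features()
    d = phi_a - phi_b
    y = 1.0 if user_prefers(phi_a, phi_b) else 0.0
    p_hat = 1.0 / (1.0 + np.exp(-d @ w_hat))
    w_hat += lr * (y - p_hat) * d  # gradient ascent on the log-likelihood

# Up to scale, w_hat should align with the hidden preference direction.
cos = w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true))
print(round(float(cos), 2))
```

Note that only the direction of the reward vector is identifiable from pairwise choices, which is why alignment rather than exact recovery is checked.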
122

Recursive robust filtering for discrete-time Markovian jump linear systems

Gildson Queiroz de Jesus 26 August 2011 (has links)
This work deals with robust state estimation for discrete-time uncertain linear systems subject to Markovian jumps. Predicted and filtered estimates are developed from recursive algorithms that are useful in on-line applications. Two classes of robust filters are developed: one based on an H∞ approach and the other on a robust regularized least-squares method. In addition, information filters and their respective array algorithms are developed to estimate this class of systems. The jump parameters of the Markovian system are assumed not to be accessible.
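As an illustration of the recursive predict/filter structure on a Markovian jump linear system (not the thesis's H∞ or regularized least-squares filters), the sketch below runs a standard Kalman-style recursion with the unobserved modes averaged under their stationary distribution. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-mode jump linear system:
#   x_{k+1} = A[m_k] x_k + w_k,   y_k = C x_k + v_k,
# where the mode m_k is an unobserved Markov chain.
A = [np.array([[0.9]]), np.array([[0.5]])]
C = np.array([[1.0]])
Q, R = 0.01, 0.04
P_trans = np.array([[0.95, 0.05], [0.10, 0.90]])

# Stationary mode distribution (left eigenvector of P_trans for eigenvalue 1).
evals, evecs = np.linalg.eig(P_trans.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
A_bar = pi[0] * A[0] + pi[1] * A[1]  # mode-averaged dynamics

# Simulate the true system.
x, m = np.array([1.0]), 0
xs, ys = [], []
for _ in range(400):
    m = rng.choice(2, p=P_trans[m])
    x = A[m] @ x + rng.normal(0.0, np.sqrt(Q), 1)
    ys.append((C @ x)[0] + rng.normal(0.0, np.sqrt(R)))
    xs.append(x[0])

# Recursive predict / filter cycle with the averaged model.
x_hat, P = np.array([0.0]), np.array([[1.0]])
est = []
for y in ys:
    x_hat = A_bar @ x_hat                 # prediction step
    P = A_bar @ P @ A_bar.T + Q
    S = C @ P @ C.T + R                   # measurement update
    K = P @ C.T / S
    x_hat = x_hat + K.ravel() * (y - (C @ x_hat)[0])
    P = P - K @ C @ P
    est.append(x_hat[0])

rmse = float(np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))
print(round(rmse, 3))
```

Averaging the modes is the crudest possible workaround for inaccessible jump parameters; the recursion itself (predict, then gain-weighted measurement update) is what the sketch is meant to show.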
123

Setting up a multi-state leak model in a partially instrumented water distribution sector / Mastering losses on drinking water networks

Claudio, Karim 19 December 2014 (has links)
The evolution of equipment on drinking water networks has considerably improved their monitoring. Automatic meter reading (AMR) is clearly the technology that has brought the greatest progress in water management in recent years, both for the operator and for end-users. It has moved consumption information from annual (following manual meter reading) to infra-daily. But however efficient AMR may be, it has one main drawback: its cost. Full instrumentation of a network entails capital expenditure that some operators cannot afford. Constructing a sample of meters to equip therefore makes it possible to estimate the total consumption of a network while minimizing investment. This sample must be built intelligently, so that the imprecision of the estimator does not harm the assessment of consumption. Precise knowledge of water consumption makes it possible to quantify the volumes lost on the network. But even an exact assessment of losses is not enough to eliminate all leaks. Indeed, just as the water distribution network is mostly buried, and therefore invisible, so are its leaks. A fraction of leaks is invisible and even undetectable by current leakage-control techniques, and therefore cannot be repaired. A multi-state leak model decomposes the leakage flow according to the stages through which a leak passes: invisible and undetectable, invisible but detectable by leakage control, and finally visible at the surface. This semi-Markovian model takes operational constraints into account, in particular the fact that only panel data are available.
Decomposing the leakage flow thus allows better network management, by targeting and adapting the leak-reduction actions to be deployed according to the state of degradation of the network.
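The three-stage decomposition can be illustrated with a small simulation: each leak dwells an exponentially distributed time in the undetectable and then the detectable stage before surfacing, and the lost volume is split by stage. The sojourn means and flow rates below are invented for illustration; the thesis's semi-Markov estimation from panel data is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 0: invisible and undetectable; stage 1: detectable by active
# leakage control; stage 2: visible at the surface. All numbers invented.
MEAN_SOJOURN = [180.0, 90.0]        # mean dwell time (days) in stages 0 and 1
FLOW = np.array([0.05, 0.3, 1.0])   # relative leak flow rate per stage

def stage_times(horizon=365.0):
    """Days a single leak spends in each stage over one year."""
    t, spent = 0.0, [0.0, 0.0, 0.0]
    for stage in (0, 1):
        dwell = rng.exponential(MEAN_SOJOURN[stage])
        if t + dwell >= horizon:
            spent[stage] = horizon - t
            return spent
        spent[stage] = dwell
        t += dwell
    spent[2] = horizon - t  # remaining time in the visible stage
    return spent

totals = np.zeros(3)
for _ in range(5000):
    totals += np.array(stage_times()) * FLOW  # volume lost per stage

shares = totals / totals.sum()
print(np.round(shares, 2))
```

The resulting shares show how much of the annual leakage volume each stage accounts for, which is the quantity a network operator would use to prioritize leakage-control actions.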
124

Asymptotic study of rotating wave turbulence

Bellet, Fabien 23 July 2003 (has links) (PDF)
The aim is to determine the influence of solid-body rotation on the structure of homogeneous incompressible turbulence. While the EDQNM spectral model gives convincing results for purely isotropic turbulence, spatial discretization becomes a limiting factor in the anisotropic case. When the Rossby number is small, an asymptotic expansion in time is possible. Since the role played by the resonant surfaces is dominant, the new model leads to a closed integro-differential equation for the spectral energy. With careful numerical treatment, a parallelized code yields quantitative results. The energy is found to concentrate over time towards the plane perpendicular to the rotation vector. Moreover, the integrated spectrum follows a -3 slope in the inertial range, and this is not due to the horizontal wave vectors alone. There is therefore no true two-dimensionalization, but wave vectors close to the horizontal plane exhibit a specific dynamics.
125

Task-Driven Integrity Assessment and Control for Vehicular Hybrid Localization Systems

Drawil, Nabil 17 January 2013 (has links)
Throughout the last decade, vehicle localization has attracted significant attention in a wide range of applications, including navigation systems, road tolling, smart parking, and collision avoidance. To deliver on their requirements, these applications need specific localization accuracy. However, current localization techniques lack the required accuracy, especially for mission-critical applications. Although various approaches for improving localization accuracy have been reported in the literature, there is still a need for more efficient and more effective measures that can ascribe a level of accuracy to the localization process. Such measures would enable localization systems to manage the localization process and its resources so as to achieve the highest possible accuracy, and to mitigate the impact of inadequate accuracy on the target application. In this thesis, a framework for fusing different localization techniques is introduced in order to estimate the location of a vehicle along with a location integrity assessment that captures the impact of the measurement conditions on localization quality. Knowledge about estimate integrity allows the system to plan the use of its localization resources to match the target accuracy of the application. The framework provides tools for modeling the impact of the operating conditions on estimate accuracy and integrity, and as such enables more robust system performance, in three steps. First, localization system parameters are used to construct a feature space that constitutes probable accuracy classes. Because the accuracy classes overlap strongly in the feature space, a hierarchical classification strategy with class unfolding (HCCU) is developed to address the class ambiguity problem. The HCCU strategy is shown to be superior to other hierarchical configurations.
Furthermore, a context-based accuracy classification (CBAC) algorithm is introduced to enhance the performance of the classification process. In this algorithm, knowledge about the surrounding environment is used to optimize classification performance as a function of the observation conditions. Second, a task-driven integrity (TDI) model is developed to make application modules aware of the trust level of the localization output. Typically, this trust level is a function of the measurement conditions; the TDI model therefore monitors specific parameters of the localization technique and infers the impact of changing environmental conditions on the quality of the localization process. A generalized TDI solution is also introduced to handle cases where sufficient information about the sensing parameters is unavailable. Finally, the outputs of the employed localization techniques (location estimates, accuracy, and integrity level assessments) need to be fused. These techniques are heterogeneous, however, and their pieces of information conflict in many situations. Therefore, a novel evidence structure model, the Spatial Evidence Structure Model (SESM), is developed and used to construct a frame of discernment comprising discretized spatial data. SESM-based fusion paradigms are capable of fusing the information provided by the employed techniques. Both the location estimate accuracy and the aggregated integrity resulting from the fusion process are superior to those of the individual localization techniques. Furthermore, a context-aware, task-driven resource allocation mechanism is developed to manage the fusion process; its main objective is to optimize the usage of system resources and achieve task-driven performance. Extensive experimental work is conducted on real-life and simulated data to validate the models developed in this thesis.
The experimental results show that task-driven integrity assessment and control is applicable and effective for hybrid localization systems.
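As a much-simplified illustration of fusing location estimates with integrity information (not the SESM evidence-theoretic fusion developed in the thesis), one can weight each technique's fix by an integrity-derived variance. All fixes and sigma values below are invented.

```python
import numpy as np

# Hypothetical (x, y) fixes from three localization techniques, each with an
# integrity-derived standard deviation: lower integrity -> larger sigma.
estimates = np.array([[10.2, 4.9],
                      [9.8, 5.3],
                      [11.0, 4.5]])
sigmas = np.array([0.5, 0.8, 2.0])

# Inverse-variance weighting: the classical minimum-variance linear fusion.
weights = 1.0 / sigmas ** 2
weights /= weights.sum()
fused = weights @ estimates
fused_sigma = float(np.sqrt(1.0 / np.sum(1.0 / sigmas ** 2)))

print(np.round(fused, 2), round(fused_sigma, 2))
```

The fused uncertainty is smaller than that of the best individual technique, which mirrors the thesis's observation that fusion outperforms the individual localization techniques; a resource allocation layer would then decide which techniques are worth running for a given target accuracy.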
126

Noise, Delays, and Resonance in a Neural Network

Quan, Austin 01 May 2011 (has links)
A stochastic delay differential equation (SDDE) model of a small neural network with recurrent inhibition is presented and analyzed. The model exhibits unexpected transient behavior: oscillations that occur at the boundary of the basins of attraction when the system is bistable, known as delay-induced transitory oscillations (DITOs). This behavior is analyzed in the context of stochastic resonance, an unintuitive though widely researched phenomenon in physical bistable systems whereby noise can play a constructive role in strengthening an input signal. A method for modeling the dynamics using a probabilistic three-state model is proposed and supported with numerical evidence. The potential implications of this dynamical phenomenon for nocturnal frontal lobe epilepsy (NFLE) are also discussed.
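A minimal Euler-Maruyama integration of a generic bistable stochastic delay equation (an illustrative stand-in with made-up coefficients, not the paper's network model) shows how such SDDE dynamics are simulated numerically:

```python
import numpy as np

rng = np.random.default_rng(3)

# dx = (x - x^3 - k * x(t - tau)) dt + sigma dW, integrated by Euler-Maruyama.
# The cubic term makes the noise-free system bistable; the delayed term adds
# the feedback that can produce transient oscillations near basin boundaries.
dt, tau, k, sigma = 0.01, 1.0, 0.4, 0.1
lag = int(tau / dt)
n_steps = 20000

x = np.empty(n_steps + lag)
x[:lag] = 1.0  # constant history function on [-tau, 0]
for i in range(lag, n_steps + lag):
    drift = x[i - 1] - x[i - 1] ** 3 - k * x[i - 1 - lag]
    x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

print(bool(np.isfinite(x).all()))
```

The delayed state is handled by keeping the trajectory in a buffer and indexing `lag` steps back, which is the standard trick for fixed-delay SDDEs.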
128

Essays on interest rate theory

Elhouar, Mikael January 2008 (has links)
Diss. (summary) Stockholm: Handelshögskolan, 2008. Summary together with 3 papers.
129

Credit risk & forward price models

Gaspar, Raquel M. January 2006 (has links)
This thesis consists of three distinct parts. Part I introduces the basic concepts and the notion of general quadratic term structures (GQTS), essential in some of the following chapters. Part II focuses on credit risk models, and Part III studies forward price term structure models using both the classical and the geometrical approach. Part I is organized as follows. Chapter 1 is divided into two main sections. The first presents some of the fundamental concepts that are a prerequisite to the papers that follow. All of the concepts and results are well known, so the section can be regarded as an introduction to notation and the basic principles of arbitrage theory. The second part of the chapter is of a more technical nature; its purpose is to summarize key results on point processes and differential geometry that are used later in the thesis. For finite-dimensional factor models, Chapter 2 studies GQTS. These term structures include, as special cases, the affine term structures and the Gaussian quadratic term structures previously studied in the literature. We show, however, that there are other, non-Gaussian, quadratic term structures and derive sufficient conditions for the existence of these GQTS for zero-coupon bond prices. In Part II we focus on credit risk models. In Chapter 3 we propose a reduced-form model for default that allows us to derive closed-form solutions for all the key ingredients in credit risk modeling: risk-free bond prices, defaultable bond prices (with and without stochastic recovery) and survival probabilities. We show that all these quantities can be represented in general exponential quadratic forms, despite the fact that the intensity of default is allowed to jump, producing shot-noise effects. In addition, we show how to price defaultable digital puts, CDSs and options on defaultable bonds. Further on, we study a model for portfolio credit risk that considers both firm-specific and systematic risk.
The model generalizes that of Duffie and Garleanu (2001). We find that the model produces realistic default correlation and clustering effects. Next, we show how to price CDOs and options on CDOs, and how to incorporate the link to currently proposed credit indices. In Chapter 4 we start by presenting a reduced-form multiple-default model and derive abstract results on the influence of a state variable X on credit spreads when both the intensity and the loss-quota distribution are driven by X. The aim is to apply the results to a real-life situation, namely the influence of macroeconomic risks on the term structure of credit spreads. There is increasing support in the empirical literature for the proposition that both the probability of default (PD) and the loss given default (LGD) are correlated and driven by macroeconomic variables. Paradoxically, there has been very little effort in the theoretical literature to develop credit risk models that take this into account. One explanation might be the additional complexity this leads to, even for the tractable default intensity models. The goal of this chapter is to develop the theoretical framework necessary to deal with this situation and, through numerical simulation, to understand the impact of macroeconomic factors on the term structure of credit spreads. In the proposed setup, periods of economic depression are periods of both higher default intensity and lower recovery, producing a business-cycle effect. Furthermore, we allow for the possibility of an index volatility that depends negatively on the index level, and show that when we include this realistic feature the impact on the credit spread term structure is emphasized. Part III studies forward price term structure models. Forward prices differ from futures prices in stochastic interest rate settings and become an interesting object of study in their own right.
Forward prices with different maturities are martingales under different forward measures. This mathematical property implies that the term structure of forward prices is always linked to the term structure of bond prices, a dependence that makes forward price term structure models relatively harder to handle. For finite-dimensional factor models, Chapter 5 applies the concept of GQTS to the term structure of forward prices. We show how the forward price term structure equation depends on the term structure of bond prices. We then exploit this connection and show that even in quadratic short rate settings we can have affine term structures for forward prices. Finally, we show how the study of futures prices is naturally embedded in the study of forward prices, and that the difference between the two term structures may be deterministic in some (non-trivial) stochastic interest rate settings. In Chapter 6 we study a fairly general Wiener-driven model for the term structure of forward prices. The model, under a fixed martingale measure Q, is described by two infinite-dimensional stochastic differential equations (SDEs). The first system is a standard HJM model for (forward) interest rates, driven by a multidimensional Wiener process W. The second system is an infinite SDE for the term structure of forward prices on some specified underlying asset, driven by the same W. Since the zero-coupon bond volatilities enter the drift of the SDE for these forward prices, the interest rate system is needed as input to the forward price system. Given this setup, we use the Lie algebra methodology of Björk et al. to investigate under what conditions, on the volatility structure of the forward prices and/or interest rates, the inherently (doubly) infinite-dimensional SDE for forward prices can be realized by a finite-dimensional Markovian state space model. / Diss. Stockholm: Handelshögskolan, 2006
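The quadratic term structure idea can be illustrated numerically: take a one-factor Ornstein-Uhlenbeck state x and a short rate r = x^2 (hence nonnegative, unlike a Gaussian rate), and compute zero-coupon bond prices by Monte Carlo. All parameter values are hypothetical and this is a toy of the GQTS setting, not the thesis's closed-form machinery.

```python
import numpy as np

rng = np.random.default_rng(4)

kappa, theta, vol = 1.0, 0.0, 0.2  # OU parameters (hypothetical)
dt, n_paths = 0.01, 20000

def bond_price(T):
    """Monte Carlo estimate of E[exp(-integral_0^T x_t^2 dt)] with OU x."""
    n = int(round(T / dt))
    x = np.full(n_paths, 0.1)        # initial state x_0
    integral = np.zeros(n_paths)
    for _ in range(n):
        integral += x ** 2 * dt      # accumulate the short rate r = x^2
        x += kappa * (theta - x) * dt + vol * np.sqrt(dt) * rng.normal(size=n_paths)
    return float(np.exp(-integral).mean())

p1, p5 = bond_price(1.0), bond_price(5.0)
print(round(p1, 3), round(p5, 3))
```

Because the rate is nonnegative, prices lie in (0, 1) and decrease with maturity; in the GQTS theory these expectations would instead be computed in closed exponential-quadratic form.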
130

Cellular automaton models for time-correlated random walks: derivation and analysis

Nava-Sedeño, Josue Manik, Hatzikirou, Haralampos, Klages, Rainer, Deutsch, Andreas 05 June 2018 (has links) (PDF)
Many diffusion processes in nature and society were found to be anomalous, in the sense of being fundamentally different from conventional Brownian motion. An important example is the migration of biological cells, which exhibits non-trivial temporal decay of velocity autocorrelation functions. This means that the corresponding dynamics is characterized by memory effects that slowly decay in time. Motivated by this we construct non-Markovian lattice-gas cellular automata models for moving agents with memory. For this purpose the reorientation probabilities are derived from velocity autocorrelation functions that are given a priori; in that respect our approach is “data-driven”. Particular examples we consider are velocity correlations that decay exponentially or as power laws, where the latter functions generate anomalous diffusion. The computational efficiency of cellular automata combined with our analytical results paves the way to explore the relevance of memory and anomalous diffusion for the dynamics of interacting cell populations, like confluent cell monolayers and cell clustering.
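The derivation of reorientation probabilities from a prescribed velocity autocorrelation can be checked in the simplest 1-D case: a persistent random walk that keeps its direction with probability q has autocorrelation (2q - 1)^t, so a target exponential decay exp(-t/T) fixes q = (1 + exp(-1/T)) / 2. The sketch below is a minimal data-driven example in this spirit, not the paper's lattice-gas cellular automaton.

```python
import numpy as np

rng = np.random.default_rng(5)

T_corr = 5.0                               # prescribed correlation time (steps)
q = (1.0 + np.exp(-1.0 / T_corr)) / 2.0    # keep-direction probability

n_agents, n_steps = 20000, 30
v = rng.choice([-1, 1], size=n_agents)     # initial velocities
v0 = v.copy()
corr = [1.0]
for _ in range(n_steps):
    flip = rng.random(n_agents) > q        # reorient with probability 1 - q
    v = np.where(flip, -v, v)
    corr.append(float(np.mean(v0 * v)))

# Measured autocorrelation should match the prescribed exponential decay.
predicted = np.exp(-np.arange(n_steps + 1) / T_corr)
max_err = float(np.max(np.abs(np.array(corr) - predicted)))
print(round(max_err, 3))
```

A power-law target correlation would require a non-Markovian (memory-dependent) reorientation rule rather than a single constant q, which is exactly the regime the paper's non-Markovian automata address.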