About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Axiomatizations of the Choquet integral on general decision spaces

Timonin, Mikhail January 2017
We propose an axiomatization of the Choquet integral model for the general case of a heterogeneous product set X = X1 × ... × Xn. Previous characterizations of the Choquet integral have been given for the particular cases X = Y^n and X = R^n. This makes those results inapplicable to problems in many fields of decision theory, such as multicriteria decision analysis (MCDA), state-dependent utility, and social choice. For example, in multicriteria decision analysis the elements of X are interpreted as alternatives, characterized by criteria taking values from the sets Xi. Obviously, the identicalness or even commensurateness of the criteria cannot be assumed a priori. Despite this theoretical gap, the Choquet integral model is quite popular in the MCDA community and is widely used in applied and theoretical works. In fact, the absence of a sufficiently general axiomatic treatment of the Choquet integral has been recognized several times in the decision-theoretic literature. In our work we aim to provide the missing results: we construct the axiomatization based on a novel axiomatic system and study its uniqueness properties. We also extend our construction to various particular cases of the Choquet integral and analyse the constraints of the earlier characterizations. Finally, we discuss in detail the implications of our results for applications of the Choquet integral as a model of decision making.
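For readers unfamiliar with the model: over n criteria, the discrete Choquet integral of a score vector x with respect to a capacity (fuzzy measure) mu is C_mu(x) = sum_k (x_(k) - x_(k-1)) * mu(A_(k)), where x_(1) <= ... <= x_(n) is the ascending reordering of x, x_(0) = 0, and A_(k) is the set of criteria scoring at least x_(k). A minimal Python sketch (the function name and the toy capacity are ours, purely for illustration, not taken from the thesis):

    def choquet_integral(x, mu):
        """Discrete Choquet integral of a non-negative score vector x
        with respect to a capacity mu, given as a dict that maps
        frozensets of criterion indices to values in [0, 1]."""
        order = sorted(range(len(x)), key=lambda i: x[i])  # ascending scores
        total, prev = 0.0, 0.0
        for k, i in enumerate(order):
            coalition = frozenset(order[k:])  # criteria scoring >= x[i]
            total += (x[i] - prev) * mu[coalition]
            prev = x[i]
        return total

    # Toy capacity on two criteria with a positive interaction:
    mu = {frozenset(): 0.0, frozenset({0}): 0.3,
          frozenset({1}): 0.4, frozenset({0, 1}): 1.0}
    print(choquet_integral([0.6, 0.9], mu))  # 0.6*1.0 + 0.3*0.4 = 0.72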
2

Evaluating service quality by a Choquet-fuzzy-integral model

Tsai, Hui-Hua 09 December 2002
Considering measurable evidence and fuzzy measures expressed in linguistic terms, this thesis proposes a fuzzy-number based Choquet integral to aggregate linguistic information when information fusion between criteria must be considered. The proposed operator generalizes the standard Choquet integral so that it can cope with interval-valued or fuzzy-valued measurable evidence and fuzzy measures. Furthermore, by investigating the relevant properties of the standard and the fuzzy-number based Choquet integrals, the operation process of the latter is clarified. Combining linguistic terms, their psychological compatibility, and fuzzy numbers, the fuzzy-number based Choquet integral is then introduced to evaluate service quality and aggregate information in the three-column format of SERVQUAL. Finally, a numerical example comparing the overall service performance of e-stores illustrates how the fuzzy-number based Choquet integral and its two-stage aggregation process operate in the three-column format of SERVQUAL.
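Since the Choquet integral is monotone nondecreasing in each argument, a simple way to extend it to interval-valued scores under a precise capacity is to evaluate it at the lower and upper endpoints; applying this level-wise to the alpha-cuts of fuzzy numbers gives one common fuzzy-number extension. A sketch under that assumption (not necessarily the exact operator defined in the thesis), reusing choquet_integral and mu from the sketch after entry 1:

    def interval_choquet(lo, hi, mu):
        """Aggregate interval scores [lo[i], hi[i]] under a precise
        capacity mu; monotonicity of the Choquet integral makes the
        endpoint evaluations valid bounds for the result."""
        return choquet_integral(lo, mu), choquet_integral(hi, mu)

    # Interval scores [0.5, 0.7] and [0.8, 1.0] aggregate to (0.62, 0.82).
    print(interval_choquet([0.5, 0.8], [0.7, 1.0], mu))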
3

Contracting under Heterogeneous Beliefs

Ghossoub, Mario 25 May 2011
The main motivation behind this thesis is the lack of belief subjectivity in problems of contracting, and especially in problems of demand for insurance. The idea that an underlying uncertainty in contracting problems (e.g. an insurable loss in problems of insurance demand) is a given random variable on some exogenously determined probability space is so ingrained in the literature that one can easily forget that the notion of an objective uncertainty is only one possible approach to the formulation of uncertainty in economic theory. On the other hand, the subjectivist school led by De Finetti and Ramsey challenged the idea that uncertainty is totally objective, and advocated a personal view of probability (subjective probability). This ultimately led to Savage's approach to the theory of choice under uncertainty, where uncertainty is entirely subjective and it is only one's preferences that determine one's probabilistic assessment. It is the purpose of this thesis to revisit the "classical" insurance demand problem from a purely subjectivist perspective on uncertainty. To do so, we will first examine a general problem of contracting under heterogeneous subjective beliefs and provide conditions under which we can show the existence of a solution and then characterize that solution. One such condition will be called "vigilance". We will then specialize the study to the insurance framework, and characterize the solution in terms of what we will call a "generalized deductible contract". Subsequently, we will study some mathematical properties of collections of vigilant beliefs, in preparation for future work on the idea of vigilance. This and other envisaged future work will be discussed in the concluding chapter of this thesis. In the penultimate chapter, we will examine a model of contracting for innovation under heterogeneity and ambiguity, simply to demonstrate how the ideas and techniques developed in the first chapter can be used beyond problems of insurance demand.
4

Insights and Characterization of l1-norm Based Sparsity Learning of a Lexicographically Encoded Capacity Vector for the Choquet Integral

Adeyeba, Titilope Adeola 09 May 2015
This thesis aims to simultaneously minimize function error and model complexity for data fusion via the Choquet integral (CI). The CI is a generator function: it is parametric and yields a wealth of aggregation operators depending on the specifics of the underlying fuzzy measure. It is often the case that we wish to learn a fusion from data, with the goal of minimizing the sum of squared errors between the trained model and a set of labels; however, we also wish to learn solutions that are as "simple" as possible. Herein, l1-norm regularization of a lexicographically encoded capacity vector relative to the CI is explored. The impact of regularization is analysed in terms of the capacities and aggregation operators it induces under different common and extreme scenarios. Synthetic experiments are provided to illustrate the propositions and concepts put forth.
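To make the setup concrete: once a sample's inputs are fixed, the Choquet integral is linear in the capacity, so stacking one coefficient row per sample into a matrix A turns training into the l1-regularized least-squares problem min_u ||A u - y||^2 + lambda * ||u||_1 over the lexicographically encoded capacity vector u. A hedged sketch of how such a row is built (the monotonicity constraints on u are omitted here, and the variable names are ours, not the thesis's):

    from itertools import combinations
    import numpy as np

    def lex_index(n):
        """Lexicographic encoding: map every non-empty subset of
        {0, ..., n-1} to a position in the capacity vector u
        (singletons first, then pairs, and so on)."""
        idx = {}
        for size in range(1, n + 1):
            for c in combinations(range(n), size):
                idx[frozenset(c)] = len(idx)
        return idx

    def ci_row(x, idx):
        """Coefficient row of one sample: CI_u(x) = row @ u, because the
        Choquet integral is linear in the capacity for fixed inputs."""
        order = sorted(range(len(x)), key=lambda i: x[i])
        row, prev = np.zeros(len(idx)), 0.0
        for k, i in enumerate(order):
            row[idx[frozenset(order[k:])]] = x[i] - prev
            prev = x[i]
        return row

    def l1_objective(u, A, y, lam):
        """Squared error plus l1 penalty: larger lam drives u towards
        sparser, hence 'simpler', capacities."""
        return np.sum((A @ u - y) ** 2) + lam * np.sum(np.abs(u))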
5

Fuzzy Integral-based Rule Aggregation in Fuzzy Logic

Tomlin, Leary, Jr 07 May 2016
The fuzzy inference system (FIS) has been tuned and revamped many times over and applied to numerous domains. New and improved techniques have been presented for fuzzification, implication, rule composition, and defuzzification, leaving rule aggregation relatively underrepresented. Current FIS aggregation operators are comparatively simple and have remained more or less unchanged over the years. For many problems, these simple operators produce intuitive, useful, and meaningful results. However, there exists a wide class of problems for which quality aggregation requires nonadditivity and the exploitation of interactions between rules. Herein, the fuzzy integral, a parametric non-linear aggregation operator, is used to fill this gap. Specifically, recent extensions of the fuzzy integral to "unrestricted" fuzzy sets, i.e., subnormal and non-convex ones, make this now possible. The roles of two such extensions, the gFI and the NDFI, are explored; we demonstrate when and where to apply these aggregations and present efficient algorithms to approximate their solutions.
6

Non-concave and behavioural optimal portfolio choice problems

Meireles Rodrigues, Andrea Sofia January 2014
Our aim is to examine the problem of optimal asset allocation for investors whose behaviour in the face of uncertainty is not consistent with the usual axioms of Expected Utility Theory. This thesis is divided into two main parts. In the first one, comprising Chapter II, we consider an arbitrage-free discrete-time financial model and an investor whose risk preferences are represented by a possibly non-concave utility function (defined on the non-negative half-line only). Under straightforward conditions, we establish the existence of an optimal portfolio. Chapter III studies the optimal investment problem within a continuous-time and (essentially) complete market framework, where asset prices are modelled by semi-martingales. We deal with an investor who behaves in accordance with Kahneman and Tversky's Cumulative Prospect Theory (CPT), and we begin by analysing the well-posedness of the optimisation problem. In the case where the investor's utility function is not bounded above, we derive necessary conditions for well-posedness, which are related only to the behaviour of the distortion functions near the origin and to that of the utility function as wealth becomes arbitrarily large (both positive and negative). Next, we focus on an investor whose utility is bounded above. The problem's well-posedness is trivial, and a necessary condition for the existence of an optimal trading strategy is obtained. This condition requires that the investor's probability distortion function on losses does not tend to zero faster than a given rate, which is determined by the utility function. Provided that certain additional assumptions are satisfied, we show that this condition is indeed the borderline for attainability, in the sense that, for slower convergence of the distortion function, an optimal portfolio does exist. Finally, we turn to the case of an investor with a piecewise power-like utility function and power-like distortion functions. Easily verifiable necessary conditions for well-posedness are found to be sufficient as well, and the existence of an optimal strategy is demonstrated.
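For context, the objective maximized in such problems has the standard CPT form; in LaTeX (textbook formulation, the thesis's exact assumptions may differ):

    V(X) = \int_0^{\infty} w_+\big(\mathbb{P}(u_+(X^+) > t)\big)\,dt
         - \int_0^{\infty} w_-\big(\mathbb{P}(u_-(X^-) > t)\big)\,dt

where X^+ and X^- are the gains and losses relative to a reference point, u_+ and u_- are the (typically piecewise power) utilities on gains and losses, and w_+, w_- : [0,1] -> [0,1] are the probability distortion functions. The well-posedness conditions above constrain how fast w_- may vanish near the origin relative to the growth of the utility at large wealth.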
7

Modèle de performance agrégée et raisonnement approché pour l’optimisation de la consommation énergétique et du confort dans les bâtiments / Aggregate performance model and approximate reasoning for optimization of building energy consumption and occupant comfort

Denguir, Afef 27 May 2014
The present work is part of the FUI RIDER project (Research for IT-Driven Energy Efficiency). It aims to develop an energy management system that depends only weakly on the specifics of the building under control, so that it can easily be deployed in different kinds of buildings, and it proposes a new approach for reducing energy costs. This approach exploits the notion of thermal comfort, relying on the idea that thermal comfort is a subjective, multidimensional concept, in order to compute new optimized setpoints for the building's conditioning control system. The literature provides statistical models of thermal comfort, but these models are strongly non-linear and hard to interpret, which makes them of little use for control or optimization.
We therefore propose a new comfort model based on multi-attribute utility theory and Choquet integrals. Its advantages are that it is interpretable in terms of preferences for control purposes, linear on each simplex (which simplifies the solving of optimization problems), and more concise than a rule-based control formalism. In the second part of this work, the THermal Process Enhancement (THPE) addresses how to reach, efficiently, the setpoints computed with the thermal comfort model. The THPE rests on approximate reasoning built from an Extended Qualitative Model (EQM), which results from the mathematical and qualitative study of the differential equations governing thermal processes. The EQM is continuously enriched by an experience-management system based on learning with penalties, which supplies the quantitative information needed to infer quantified control recommendations from the qualitative tendencies modelled in the EQM. The EQM and the associated reasoning require few parameters and are operational even if the learning database is empty when RIDER is first launched; the experience-management system simply quantifies the recommendations and speeds up convergence towards an optimal control. The model-based reasoning supporting our approach depends only weakly on the thermal process, is relevant from RIDER's launch, and scales easily with the granularity of a building's thermal analysis. The performance of our THPE, its stability, and its adaptation to variations in the environment are illustrated on various control and optimization problems. Optimal controls are generally obtained in a few iterations and allow adaptive, room-by-room control of a building.
8

Méthode non-additive intervalliste de super-résolution d'images, dans un contexte semi-aveugle / A non-additive interval-valued image super-resolution method, in a semi-blind context

Graba, Farès 17 April 2015
Super-resolution is an image processing technique that consists in reconstructing a high-resolution image from one or several low-resolution images. It appeared in the 1980s as an attempt to artificially increase image resolution and thus to overcome, algorithmically, the physical limits of image sensors. Like many reconstruction problems in image processing, super-resolution is known to be an ill-posed problem whose numerical resolution is ill-conditioned. This ill-conditioning makes the quality of the reconstructed high-resolution images very sensitive to the choice of the image acquisition model, and particularly to the model of the imager's Point Spread Function (PSF). In the panorama of super-resolution methods that we draw, we show that none of the methods proposed in the literature properly models the fact that the imager's PSF is, at best, imprecisely known: at best, the deviation between model and reality is treated as a random variable, while the bias is in fact systematic. We propose to model the imprecise knowledge of the PSF by a convex set of PSFs. Using such a model challenges the classical inversion techniques, so we adapt one of the most popular classical methods, known as iterative back-projection, to this imprecise representation. The reconstructed super-resolved image is interval-valued, i.e. the value associated with each pixel is a real interval. This reconstruction turns out to be robust to the PSF model and to some other errors, and the width of the obtained intervals quantifies the reconstruction error.
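For orientation, classical iterative back-projection alternates between simulating the acquisition (blur by the PSF, then decimation), comparing with the observed low-resolution image, and back-projecting the residual onto the high-resolution estimate; the thesis replaces the single PSF with a convex set of PSFs, which makes the reconstructed pixels interval-valued. A 1-D sketch of the classical scalar loop only (our own minimal rendition, not the thesis's interval-valued algorithm; convergence depends on the choice of back-projection kernel):

    import numpy as np

    def iterative_back_projection(y, psf, scale, n_iter=30):
        """Classical IBP in 1-D: y is the low-resolution signal, psf the
        assumed blur kernel, scale the integer decimation factor."""
        x = np.repeat(y, scale)  # initial high-resolution guess
        for _ in range(n_iter):
            simulated = np.convolve(x, psf, mode="same")[::scale]
            residual = y - simulated  # error in the low-resolution domain
            # Back-project: upsample the residual and spread it with the
            # flipped PSF, used here as a crude back-projection kernel.
            x = x + np.convolve(np.repeat(residual, scale) / scale,
                                psf[::-1], mode="same")
        return x

    # Toy usage; in the thesis's setting the PSF is only imprecisely known.
    psf = np.array([0.25, 0.5, 0.25])
    y = np.array([0.0, 1.0, 0.0, 0.0])
    print(iterative_back_projection(y, psf, scale=2))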
9

Contribution à la formalisation de bilans/états de santé multi-niveaux d'un système pour aider à la prise de décision en maintenance : agrégation d'indicateurs par l'intégrale de Choquet / Contribution to the formalization of health assessment for a multi-level system to aid maintenance decision making: Choquet integral-based aggregation of heterogeneous indicators

Abichou, Bouthaïna 18 April 2013
This thesis defends the interest of assessing the health of a multi-component industrial system through a hierarchical, multi-level health check-up. Its main purpose is thus to establish the essential elements of a generic health check-up concept that represents the actual state of a system as a vector of indicators of different natures. On this foundation, the thesis focuses specifically on the functions of anomaly detection, normalization, and aggregation of heterogeneous indicators, used to build a synthetic index representing the overall health status of each element of the system. A new approach for conditional anomaly detection is proposed; it quantifies the deviation of each indicator from its nominal behaviour while taking into account the context in which the system operates. An extension of the Choquet integral used as an indicator-aggregation operator is also proposed. This extension concerns, on the one hand, an unsupervised learning process of the capacity coefficients for the lowest level of abstraction, namely the component level, and, on the other hand, an approach to infer them from one level to the next. These contributions are applied to a ship diesel engine, the critical system of the BMCI project of the MER-PACA cluster, within which this thesis was carried out.
