211

Contributions aux algorithmes stochastiques pour le Big Data et à la théorie des valeurs extrêmes multivariées. / Contributions to stochastic algorithms for Big Data and multivariate extreme value theory.

Ho, Zhen Wai Olivier 04 October 2018 (has links)
La thèse comporte deux parties distinctes. La première partie concerne des modèles pour les extrêmes multivariés. On donne une construction de vecteurs aléatoires multivariés à variations régulières. La construction se base sur une extension multivariée d'un lemme de Breiman établissant la propriété de variation régulière d'un produit $RZ$ de variables aléatoires, avec $R$ positive à variation régulière et $Z$ positive suffisamment intégrable. En prenant $\mathbf{Z}$ multivarié et suffisamment intégrable, on montre que $R\mathbf{Z}$ est un vecteur aléatoire à variations régulières et on caractérise sa mesure limite. On montre ensuite que pour $\mathbf{Z}$ de loi bien choisie, on retrouve des modèles max-stables classiques comme le modèle t-extremal, Hüsler-Reiss, etc. Puis, on étend notre construction pour considérer la notion de variation régulière multivariée non standard. On montre ensuite que le modèle de Pareto (qu'on appelle Hüsler-Reiss Pareto) associé au modèle max-stable Hüsler-Reiss forme une famille exponentielle complète. On donne quelques propriétés du modèle Hüsler-Reiss Pareto puis on propose un algorithme de simulation exacte. On étudie l'inférence par le maximum de vraisemblance. Finalement, on considère une extension du modèle Hüsler-Reiss Pareto utilisant la notion de variation régulière non standard. On étudie l'inférence par le maximum de vraisemblance du modèle généralisé et on propose une méthode d'estimation des paramètres. On donne une étude numérique sur l'estimateur du maximum de vraisemblance pour le modèle Hüsler-Reiss Pareto. Dans la seconde partie, qui concerne l'apprentissage statistique, on commence par donner une borne sur la valeur singulière minimale d'une matrice perturbée par l'ajout d'une colonne. On propose alors un algorithme de sélection de colonnes afin d'extraire les caractéristiques de la matrice. On illustre notre algorithme sur des données réelles de séries temporelles où chaque série est prise comme une colonne de la matrice. Deuxièmement, on montre que si une matrice $X$ a une propriété d'incohérence, alors $X$ possède aussi une version affaiblie de la propriété NSP (null space property). Puis, on s'intéresse au problème de sélection de matrice incohérente. À partir d'une matrice $X \in \mathbb{R}^{n \times p}$ et de $\mu > 0$, on cherche la plus grande sous-matrice de $X$ avec une cohérence inférieure à $\mu$. Ce problème est formulé comme un programme linéaire avec contrainte quadratique sur $\{0,1\}^p$. Comme ce problème est NP-dur, on considère une relaxation sur la sphère et on obtient une borne sur l'erreur lorsqu'on considère le problème relaxé. Enfin, on analyse l'algorithme de gradient stochastique projeté pour l'analyse en composantes principales en ligne. On montre qu'en espérance, l'algorithme converge vers un vecteur propre dominant et on propose une règle pour sélectionner le pas de l'algorithme. On illustre ensuite cet algorithme par une expérience de simulation. / This thesis is divided into two parts. The first part studies models for multivariate extremes. We give a method to construct multivariate regularly varying random vectors. The method is based on a multivariate extension of Breiman's lemma, which states that the product $RZ$ of a non-negative regularly varying random variable $R$ and a sufficiently integrable non-negative variable $Z$ is also regularly varying. Replacing $Z$ with a random vector $\mathbf{Z}$, we show that the product $R\mathbf{Z}$ is regularly varying and we give a characterisation of its limit measure.
Then, we show that taking specific distributions for $\mathbf{Z}$, we obtain classical max-stable models. We extend our result to non-standard regular variation. Next, we show that the Pareto model associated with the Hüsler-Reiss max-stable model forms a full exponential family. We show some properties of this model and we give an algorithm for exact simulation. We study the properties of the maximum likelihood estimator. Then, we extend our model to non-standard regular variation. To finish the first part, we propose a numerical study of the Hüsler-Reiss Pareto model. In the second part, we start by giving a lower bound on the smallest singular value of a matrix perturbed by appending a column. Then, we give a greedy algorithm for feature selection and we illustrate this algorithm on a time series dataset. Secondly, we show that an incoherent matrix satisfies a weakened version of the null space property (NSP). Thirdly, we study the problem of column selection of $X \in \mathbb{R}^{n \times p}$ given a coherence threshold $\mu$; that is, we seek the largest submatrix satisfying the coherence property. We formulate the problem as a linear program with a quadratic constraint on $\{0,1\}^p$. Then, we consider a relaxation on the sphere and we bound the relaxation error. Finally, we study projected stochastic gradient descent for online PCA. We show that, in expectation, the algorithm converges to a leading eigenvector and we suggest an algorithm for step-size selection. We illustrate this algorithm with a numerical experiment.
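The last contribution above, projected stochastic gradient for online PCA, is easy to illustrate. The sketch below is a toy version under assumed choices (synthetic Gaussian data, a $c/\sqrt{t}$ step size, convergence measured by alignment with the known leading eigenvector), not the thesis's exact algorithm or step-size rule:

```python
# Projected stochastic gradient ascent for online PCA: take a stochastic
# gradient step of w^T C w with each incoming sample, then project back
# onto the unit sphere by renormalizing. All constants are placeholders.
import numpy as np

rng = np.random.default_rng(0)
d = 20
v1 = rng.normal(size=d)                     # planted leading eigenvector
v1 /= np.linalg.norm(v1)
cov = 5.0 * np.outer(v1, v1) + np.eye(d)    # well-separated top eigenvalue

samples = rng.multivariate_normal(np.zeros(d), cov, size=20000)
w = rng.normal(size=d)
w /= np.linalg.norm(w)
c = 0.5                                     # hypothetical step-size constant

for t, x in enumerate(samples, start=1):
    grad = x * (x @ w)                      # stochastic gradient of w^T C w
    w = w + (c / np.sqrt(t)) * grad         # gradient step
    w /= np.linalg.norm(w)                  # projection onto the unit sphere

print("alignment with leading eigenvector:", abs(w @ v1))  # close to 1 when converged
```

The projection step is simply a renormalization, which keeps the iterate a valid candidate eigenvector while the stochastic updates rotate it towards the dominant direction.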
212

Design, modeling and evaluation of a bidirectional highly integrated AC/DC converter / Conception, modélisation et évaluation d'un convertisseur AC/DC réversible isolé

Le Lesle, Johan 05 April 2019 (has links)
De nos jours, les énergies renouvelables remplacent les énergies fossiles. Pour assurer l'interconnexion entre toutes ces installations électriques, l'électronique de puissance est nécessaire. Les principales spécifications de la prochaine génération de convertisseurs de puissance sont un rendement et une densité de puissance élevés, une grande fiabilité et de faibles coûts. L'intégration PCB des composants actifs et/ou passifs est perçue comme une approche prometteuse, peu onéreuse et efficace. Les délais ainsi que les coûts de fabrication des convertisseurs de puissance peuvent être considérablement réduits. L'intégration permet également d'améliorer les performances des convertisseurs. Dans ce but, un concept original d'inductance 3D pliable utilisant la technologie PCB est présenté. Il permet un coût faible pour une production en série, ainsi qu'une excellente reproductibilité. Un usinage partiel de la carte PCB est utilisé, permettant le pliage et la conception des enroulements de l'inductance. Différents prototypes sont développés par le biais d'une procédure d'optimisation. Des tests électriques et thermiques sont réalisés pour valider l'applicabilité du concept au sein de convertisseurs de puissance. Le développement d'une procédure d'optimisation appliquée aux convertisseurs hautement intégrés utilisant l'enfouissement PCB est présenté. Tous les choix importants facilitant l'intégration PCB, e.g. la réduction des composants passifs, sont présentés. Cela inclut la sélection de la topologie adéquate avec la modulation associée. La procédure de design et les modèles analytiques sont introduits. Il en résulte un convertisseur comprenant quatre ponts complets entrelacés avec des bras fonctionnant à basse (50 Hz) et haute (180 kHz) fréquences. Cette configuration autorise une variation de courant importante dans les inductances, assurant ainsi la commutation des semi-conducteurs à zéro de tension (ZVS), et ce, sur une période complète du réseau. L'impact de la forte variation de courant sur le filtre CEM est compensé par l'entrelacement. Deux prototypes d'un convertisseur AC/DC bidirectionnel de 3.3 kW sont présentés, et les résultats théoriques et pratiques sont analysés. Pour augmenter la densité de puissance du système, un filtre actif de type “Buck” est étudié. La procédure d'optimisation est adaptée à partir de la procédure implémentée pour le convertisseur AC/DC. L'approche utilisée mène à un convertisseur opérant également en ZVS durant une période complète du réseau, et ce, à fréquence de commutation fixe. Les technologies sélectionnées, condensateurs céramiques et inductances compatibles avec la technologie PCB, sont favorables à l'intégration et sont implémentées sur le prototype. / Nowadays, green energy sources are replacing fossil fuels. To assure proper interconnections between all these different electrical facilities, power electronics is mandatory. The main requirements of next-generation converters are high efficiency, high power density, high reliability and low cost. The Printed Circuit Board (PCB) integration of dies and/or passives is foreseen as a promising, low-cost and efficient approach. The manufacturing time and cost of power converters can be drastically reduced. Moreover, integration allows the converter performance to be improved. For this purpose, an original 3D folded power inductor concept using PCB technology is introduced. It is low cost for mass production and presents good reproducibility. A partial milling of the PCB is used to allow bending and building the inductor winding.
Prototypes are designed through an optimisation procedure. Electrical and thermal tests are performed to validate the applicability in power converters. The development of an optimisation procedure for highly integrated converters, using PCB embedding, is presented. All important choices facilitating the PCB integration, e.g. the reduction of passive components, are presented. It includes the selection of the suitable converter topology with the associated modulation. The design procedure and implemented analytical models are introduced. The result is a converter comprising four interleaved full bridges with legs operating at low (50 Hz) and high (180 kHz) frequency. This configuration allows a high current ripple in the input inductors, inducing zero-voltage switching (ZVS) for all the semiconductors over a complete grid period. The impact of the high current ripple on the EMI filter is compensated by the interleaving. Two prototypes of a 3.3 kW bidirectional AC/DC converter are presented, and theoretical and practical results are discussed. To further increase the power density of the overall system, a Buck-type power pulsating buffer is investigated. The optimisation procedure is derived from the procedure implemented for the AC/DC converter. The result favours an original approach, where the converter also operates with ZVS along the entire mains period at a fixed switching frequency. The technologies selected for prototyping are integration-friendly: ceramic capacitors and PCB-based inductors are implemented in the final prototype.
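The interleaving claim above (large per-leg ripple for ZVS, reduced filter-side ripple) can be illustrated numerically. The sketch below is an assumption-heavy toy, not the thesis design: identical triangular inductor ripples with an arbitrary 0.35 duty ratio are phase-shifted by 1/N of the 180 kHz switching period and summed, showing how the combined ripple seen by the EMI filter shrinks as the number of interleaved legs grows.

```python
# Toy demonstration of ripple cancellation by interleaving N phase-shifted legs.
# Amplitudes, duty ratio and waveform shape are placeholders, not design values.
import numpy as np

f_sw = 180e3                          # switching frequency of the HF legs (from the abstract)
duty = 0.35                           # arbitrary duty ratio (cancellation is duty-dependent)
t = np.linspace(0, 2 / f_sw, 8001)    # two switching periods

def ripple(t, f, duty, phase):
    """Unit peak-to-peak triangular inductor ripple: rises during duty*T, falls after."""
    x = (t * f + phase) % 1.0
    return np.where(x < duty, -0.5 + x / duty, 0.5 - (x - duty) / (1 - duty))

for n in (1, 2, 4):
    total = sum(ripple(t, f_sw, duty, k / n) for k in range(n)) / n
    print(f"{n} interleaved leg(s): combined peak-to-peak ripple = "
          f"{total.max() - total.min():.2f} (per-leg ripple = 1.00)")
```

In the converter described above, the per-leg ripple is deliberately kept large to obtain ZVS, and interleaving is what keeps the grid-side ripple, and hence the EMI filter size, manageable.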
213

Developments in statistics applied to hydrometeorology : imputation of streamflow data and semiparametric precipitation modeling / Développements en statistiques appliquées à l'hydrométéorologie : imputation de données de débit et modélisation semi-paramétrique de la précipitation

Tencaliec, Patricia 01 February 2017 (has links)
Les précipitations et les débits des cours d'eau constituent les deux variables hydrométéorologiques les plus importantes pour l'analyse des bassins versants. Ils fournissent des informations fondamentales pour la gestion intégrée des ressources en eau, telles que l'approvisionnement en eau potable, l'hydroélectricité, les prévisions d'inondations ou de sécheresses ou les systèmes d'irrigation. Dans cette thèse de doctorat sont abordés deux problèmes distincts. Le premier prend sa source dans l'étude des débits des cours d'eau. Dans le but de bien caractériser le comportement global d'un bassin versant, de longues séries temporelles de débit couvrant plusieurs dizaines d'années sont nécessaires. Cependant, les données manquantes constatées dans les séries représentent une perte d'information et de fiabilité, et peuvent entraîner une interprétation erronée des caractéristiques statistiques des données. La méthode que nous proposons pour aborder le problème de l'imputation des débits se base sur des modèles de régression dynamique (DRM), plus spécifiquement, une régression linéaire multiple couplée à une modélisation des résidus de type ARIMA. Contrairement aux études antérieures portant sur l'inclusion de variables explicatives multiples ou la modélisation des résidus à partir d'une régression linéaire simple, l'utilisation des DRM permet de prendre en compte les deux aspects. Nous appliquons cette méthode pour reconstruire les données journalières de débit à huit stations situées dans le bassin versant de la Durance (France), sur une période de 107 ans. En appliquant la méthode proposée, nous parvenons à reconstituer les débits sans utiliser d'autres variables explicatives. Nous comparons les résultats de notre modèle avec ceux obtenus à partir d'un modèle complexe basé sur les analogues et la modélisation hydrologique et d'une approche basée sur le plus proche voisin. Dans la majorité des cas, les DRM montrent une meilleure performance lors de la reconstitution de périodes de données manquantes de tailles différentes, pouvant dans certains cas aller jusqu'à 20 ans. Le deuxième problème que nous considérons dans cette thèse concerne la modélisation statistique des quantités de précipitations. La recherche dans ce domaine est actuellement très active car la distribution des précipitations exhibe une queue supérieure lourde et, au début de cette thèse, il n'existait aucune méthode satisfaisante permettant de modéliser toute la gamme des précipitations. Récemment, une nouvelle classe de distribution paramétrique, appelée distribution généralisée de Pareto étendue (EGPD), a été développée dans ce but. Cette distribution exhibe une meilleure performance, mais elle manque de flexibilité pour modéliser la partie centrale de la distribution. Dans le but d'améliorer la flexibilité, nous développons deux nouveaux modèles reposant sur des méthodes semi-paramétriques. Le premier estimateur développé transforme d'abord les données avec la fonction de répartition EGPD, puis estime la densité des données transformées en appliquant un estimateur non paramétrique à noyau. Nous comparons les résultats de la méthode proposée avec ceux obtenus en appliquant la distribution EGPD paramétrique sur plusieurs simulations, ainsi que sur deux séries de précipitations du sud-est de la France.
Les résultats montrent que la méthode proposée se comporte mieux que l'EGPD, l'erreur absolue moyenne intégrée (MIAE) de la densité étant dans tous les cas presque deux fois inférieure. Le deuxième modèle considère une distribution EGPD semi-paramétrique basée sur les polynômes de Bernstein ; plus précisément, nous utilisons un mélange creux de densités bêta. De même, nous comparons nos résultats avec ceux obtenus par la distribution EGPD paramétrique sur des jeux de données simulés et réels. Comme précédemment, la MIAE de la densité est considérablement réduite, cet effet étant encore plus évident à mesure que la taille de l'échantillon augmente. / Precipitation and streamflow are the two most important meteorological and hydrological variables when analyzing river watersheds. They provide fundamental insights for water resources management, design, or planning, such as urban water supplies, hydropower, forecasting of flood or drought events, or irrigation systems for agriculture. In this PhD thesis we approach two different problems. The first one originates from the study of observed streamflow data. In order to properly characterize the overall behavior of a watershed, long datasets spanning tens of years are needed. However, the quality of the measurement dataset decreases the further we go back in time, and blocks of data of different lengths are missing from the dataset. These missing intervals represent a loss of information and can cause erroneous summary data interpretation or unreliable scientific analysis. The method that we propose for approaching the problem of streamflow imputation is based on dynamic regression models (DRMs), more specifically, a multiple linear regression with ARIMA residual modeling. Unlike previous studies that address either the inclusion of multiple explanatory variables or the modeling of the residuals from a simple linear regression, the use of DRMs allows both aspects to be taken into account. We apply this method for reconstructing the data of eight stations situated in the Durance watershed in the south-east of France, each containing daily streamflow measurements over a period of 107 years. By applying the proposed method, we manage to reconstruct the data without making use of the additional variables that other models require. We compare the results of our model with the ones obtained from a complex approach based on analogs coupled to a hydrological model and a nearest-neighbor approach, respectively. In the majority of cases, DRMs show an increased performance when reconstructing blocks of missing values of various lengths, in some cases ranging up to 20 years. The second problem that we approach in this PhD thesis addresses the statistical modeling of precipitation amounts. The research area regarding this topic is currently very active as the distribution of precipitation is a heavy-tailed one, and at the moment, there is no general method for modeling the entire range of data with high performance. Recently, in order to propose a method that models the full range of precipitation amounts, a new class of distribution called the extended generalized Pareto distribution (EGPD) was introduced, specifically with focus on the EGPD models based on parametric families. These models provide an improved performance when compared to previously proposed distributions; however, they lack flexibility in modeling the bulk of the distribution.
We aim to improve this aspect by proposing, in the second part of the thesis, two new models relying on semiparametric methods. The first method that we develop is the transformed kernel estimator based on the EGPD transformation. That is, we propose an estimator obtained by, first, transforming the data with the EGPD cdf, and then, estimating the density of the transformed data by applying a nonparametric kernel density estimator. We compare the results of the proposed method with the ones obtained by applying EGPD on several simulated scenarios, as well as on two precipitation datasets from the south-east of France. The results show that the proposed method behaves better than parametric EGPD, the MIAE of the density being in all the cases almost twice as small. A second approach consists of a new model from the general EGPD class, i.e., we consider a semiparametric EGPD based on Bernstein polynomials; more specifically, we use a sparse mixture of beta densities. Once again, we compare our results with the ones obtained by EGPD on both simulated and real datasets. As before, the MIAE of the density is considerably reduced, this effect being even more obvious as the sample size increases.
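The imputation idea above (multiple linear regression on neighbouring stations with ARIMA-modelled residuals) maps naturally onto a regression-with-ARMA-errors state-space model. The sketch below is a minimal illustration on synthetic data, not the Durance setup; the donor stations, the gap location and the ARMA(1,1) order are all assumptions:

```python
# A dynamic-regression-style imputation sketch: regress the target series on
# two hypothetical neighbouring stations while modelling the residuals as
# ARMA(1,1), all inside one SARIMAX state-space model. SARIMAX tolerates NaNs
# in the target, so in-sample predictions over the gap act as imputed values.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 1000
neighbours = pd.DataFrame({                      # two hypothetical donor stations
    "q_up": 50 + rng.standard_normal(n).cumsum(),
    "q_down": 80 + rng.standard_normal(n).cumsum(),
})
target = 0.6 * neighbours["q_up"] + 0.3 * neighbours["q_down"] + rng.normal(0, 2, n)
target.iloc[300:420] = np.nan                    # a synthetic gap to reconstruct

model = SARIMAX(target, exog=neighbours, order=(1, 0, 1))  # regression + ARMA(1,1) errors
res = model.fit(disp=False)
imputed = res.predict(start=300, end=419)        # model-based values over the gap
print(imputed.head())
```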
214

Distribuição generalizada de chuvas máximas no Estado do Paraná. / Local and regional frequency analysis by LH-moments and generalized distributions

Pansera, Wagner Alessandro 07 December 2013 (has links)
The purpose of hydrologic frequency analysis is to relate the magnitude of events to their frequency of occurrence through a probability distribution. The generalized probability distributions can be used in the study of extreme hydrological events: generalized extreme value, generalized logistic and generalized Pareto. There are several methodologies to estimate probability distribution parameters; however, L-moments are often used due to their computational convenience. The reliability of quantiles with a high return period can be increased by LH-moments, or higher-order L-moments. L-moments have been widely studied; however, there is little information about LH-moments in the literature, so further research in this area is needed. Therefore, in this study, LH-moments were studied under two approaches commonly used in hydrology: (i) local frequency analysis (LFA) and (ii) regional frequency analysis (RFA). Moreover, a database of 227 rainfall stations (annual daily maxima) in Paraná State, covering 1976 to 2006, was assembled. LFA was subdivided into two steps: (i) Monte Carlo simulations and (ii) application of the results to the database. The main result of the Monte Carlo simulations was that LH-moments make the 0.99 and 0.995 quantiles less biased. In addition, the simulations supported the creation of an algorithm to perform LFA with the generalized distributions. The algorithm was applied to the database and enabled the 227 studied series to be fitted. In RFA, the 227 stations were divided into 11 groups and regional growth curves were obtained; local quantiles were then derived from the regional growth curves. The difference between the local quantiles obtained via RFA and those obtained via LFA was quantified. The differences can be approximately 33 mm for return periods of 100 years. / O objetivo da análise de frequência das variáveis hidrológicas é relacionar a magnitude dos eventos com sua frequência de ocorrência por meio do uso de uma distribuição de probabilidade. No estudo de eventos hidrológicos extremos, podem ser usadas as distribuições de probabilidade generalizadas: de eventos extremos, logística e Pareto. Existem diversas metodologias para a estimativa dos parâmetros das distribuições de probabilidade, no entanto, devido às facilidades computacionais, utilizam-se frequentemente os momentos-L. A confiabilidade dos quantis com alto período de retorno pode ser aumentada utilizando os momentos-LH ou momentos-L de altas ordens. Os momentos-L foram amplamente estudados, todavia, os momentos-LH apresentam literatura reduzida, logo, mais pesquisas são necessárias. Portanto, neste estudo, os momentos-LH foram estudados sob duas abordagens comumente utilizadas na hidrologia: (i) Análise de frequência local (AFL) e (ii) Análise de frequência regional (AFR). Além disso, foi montado um banco de dados com 227 estações pluviométricas (máximas diárias anuais), localizadas no Estado do Paraná, no período de 1976 a 2006. A AFL subdividiu-se em duas etapas: (i) Simulações de Monte Carlo e (ii) Aplicação dos resultados ao banco de dados. O principal resultado das simulações de Monte Carlo foi que os momentos-LH tornam os quantis 0,99 e 0,995 menos enviesados. Além disso, as simulações viabilizaram a criação de um algoritmo para realizar a AFL utilizando as distribuições generalizadas. O algoritmo foi aplicado ao banco de dados e possibilitou o ajuste das 227 séries estudadas.
Na AFR, as 227 estações foram divididas em 11 grupos e foram obtidas as curvas de crescimento regional. Os quantis locais foram obtidos a partir das curvas de crescimento regional. Foi quantificada a diferença entre os quantis locais obtidos via AFL e aqueles obtidos via AFR. As diferenças podem ser de aproximadamente 33 mm para períodos de retorno de 100 anos.
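As a concrete anchor for the L-moment machinery mentioned above, the sketch below computes sample L-moments from probability-weighted moments on synthetic annual maxima. It covers only the ordinary case; the LH-moments used in the study generalize these by weighting the upper order statistics more heavily, and the Gumbel data are purely illustrative.

```python
# Sample L-moments via probability-weighted moments (Hosking's estimators).
import numpy as np

def sample_l_moments(data):
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # location
    l2 = 2 * b1 - b0             # scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3 / l2       # l1, l2 and the L-skewness ratio tau_3

rng = np.random.default_rng(42)
annual_maxima = rng.gumbel(loc=60.0, scale=15.0, size=31)   # 31 synthetic "years"
l1, l2, tau3 = sample_l_moments(annual_maxima)
print(f"l1 = {l1:.1f} mm, l2 = {l2:.1f} mm, tau3 = {tau3:.3f}")
# For a Gumbel distribution tau3 is about 0.17, a quick sanity check on the sample.
```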
215

Modelli di distribuzione della dimensione di impresa per i settori manifatturieri italiani: il problema della regolarità statistica e relative implicazioni economiche / Modelling Firm Size Distribution of Italian Manufacturing Industries: the Puzzle of Statistical Regularity and Related Economic Implications

CROSATO, LISA 13 July 2007 (has links)
Questo lavoro studia la distribuzione della dimensione d'impresa sulla base di due dataset. Il primo è l'indagine Micro1 di ISTAT, che include tutte le imprese manifatturiere con più di 20 addetti sopravvissute dal 1989 al 1997. Il secondo è il file Cerved riguardante l'universo delle imprese del settore meccanico (Ateco DK29), dal 1997 al 2002. Lo scopo generale della tesi è quello di esplorare la possibilità di trovare nuove regolarità empiriche riguardanti la distribuzione della dimensione d'impresa, sulla base della passata evidenza empirica che attesta la (in)capacità di Lognormale e Pareto di modellare in modo soddisfacente la dimensione d'impresa nell'intero arco dimensionale. Vengono per questo proposti due modelli mai utilizzati prima. Gli stessi vengono poi convalidati su differenti variabili dimensionali e a diversi livelli di aggregazione. La tesi cerca anche di esplicitare al meglio le implicazioni economiche dei modelli parametrici di distribuzione adottati secondo diversi punti di vista. / The present work studies the firm size distribution of Italian manufacturing industries on the basis of two datasets. The first is the Micro1 survey carried out by ISTAT, which recorded all manufacturing firms with 20 or more employees surviving from 1989 to 1997. The second is the Cerved file regarding all firms of the mechanical sector (DK29) from 1997 to 2002. The general aim of this research is to explore the possibility of finding new empirical regularities in the size distribution of firms, building on the relevant past evidence about the (in)capacity of the Lognormal and Pareto distributions to satisfactorily model the whole size range. Two previously unused statistical models are proposed and validated on different size proxies and at different levels of data aggregation. The thesis also addresses the economic implications of parametric models of firm size distribution from different points of view.
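The Lognormal-versus-Pareto benchmark that motivates the thesis can be reproduced in a few lines. The sketch below uses synthetic firm sizes rather than the Micro1 or Cerved data and simply compares in-sample log-likelihoods of the two classical fits; the data-generating choices are arbitrary.

```python
# Fit Lognormal and Pareto models to synthetic firm sizes and compare fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
employees = np.round(np.exp(rng.normal(3.5, 1.0, 5000))) + 20   # synthetic firms, 20+ employees

ln_shape, ln_loc, ln_scale = stats.lognorm.fit(employees, floc=0)
pa_shape, pa_loc, pa_scale = stats.pareto.fit(employees, floc=0)

ll_lognorm = stats.lognorm.logpdf(employees, ln_shape, ln_loc, ln_scale).sum()
ll_pareto = stats.pareto.logpdf(employees, pa_shape, pa_loc, pa_scale).sum()
print(f"log-likelihood  lognormal: {ll_lognorm:.0f}   pareto: {ll_pareto:.0f}")
```

In real size data the lognormal tends to win in the body and the Pareto in the upper tail, which is exactly the regularity puzzle the thesis's alternative models are meant to address.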
216

Misskötsel av sopor : ett utbrett fenomen / Mismanagement of waste : a widespread phenomenon

Sågström, Karin, Stark, Anna January 2004 (has links)
No description available.
217

Game Theory and Microeconomic Theory for Beamforming Design in Multiple-Input Single-Output Interference Channels

Mochaourab, Rami 24 July 2012 (has links) (PDF)
In interference-limited wireless networks, interference management techniques are important in order to improve the performance of such systems. Given that spectrum and energy are scarce resources in these networks, techniques that exploit these resources efficiently are desired. We consider a set of base stations operating concurrently in the same spectral band. Each base station is equipped with multiple antennas and transmits data to a single-antenna mobile user. This setting corresponds to the multiple-input single-output (MISO) interference channel (IFC). The receivers are assumed to treat interference signals as noise. Moreover, each transmitter is assumed to know the channels between itself and all receivers perfectly. We study the conflict between the transmitter-receiver pairs (links) using models from game theory and microeconomic theory. These models provide solutions to resource allocation problems which in our case correspond to the joint beamforming design at the transmitters. Our interest lies in solutions that are Pareto optimal. Pareto optimality ensures that it is not possible to further improve the performance of any link without reducing the performance of another link. Strategic games in game theory determine the noncooperative choice of strategies of the players. The outcome of a strategic game is a Nash equilibrium. While the Nash equilibrium in the MISO IFC is generally not efficient, we characterize the necessary null-shaping constraints on the strategy space of each transmitter such that the Nash equilibrium outcome is Pareto optimal. An arbitrator is involved in this setting and dictates the constraints at each transmitter. In contrast to strategic games, coalitional games provide cooperative solutions between the players. We study cooperation between the links via coalitional games without transferable utility. The cooperative beamforming schemes considered are either zero-forcing transmission or Wiener filter precoding. We characterize the necessary and sufficient conditions under which the core of the coalitional game with zero-forcing transmission is not empty. The core solution concept specifies the strategies with which all players have the incentive to cooperate jointly in a grand coalition. While the core only considers the formation of the grand coalition, coalition formation games study coalition dynamics. We utilize a coalition formation algorithm, called merge-and-split, to determine stable link groupings. Numerical results show that while in the low signal-to-noise ratio (SNR) regime noncooperation between the links is efficient, at high SNR all links benefit from forming a grand coalition. Coalition formation shows its significance in the mid-SNR regime, where subset link cooperation provides joint performance gains. We use the models of exchange and competitive market from microeconomic theory to determine Pareto optimal equilibria in the two-user MISO IFC. In the exchange model, the links are represented as consumers that can trade goods between themselves. The goods in our setting correspond to the parameters of the beamforming vectors necessary to achieve all Pareto optimal points in the utility region. We utilize the conflict representation of the consumers in the Edgeworth box, a graphical tool that depicts the allocation of the goods for the two consumers, to provide closed-form solutions for all Pareto optimal outcomes.
The exchange equilibria are a subset of the points on the Pareto boundary at which both consumers achieve larger utility than at the Nash equilibrium. We propose a decentralized bargaining process between the consumers which starts at the Nash equilibrium and ends at an outcome arbitrarily close to an exchange equilibrium. The design of the bargaining process relies on a systematic study of the allocations in the Edgeworth box. In comparison to the exchange model, a competitive market additionally defines prices for the goods. The equilibrium in this economy is called Walrasian and corresponds to the prices that equate the demand to the supply of goods. We calculate the unique Walrasian equilibrium and propose a coordination process that is realized by the arbitrator, which distributes the Walrasian prices to the consumers. The consumers then calculate in a decentralized manner their optimal demands, corresponding to beamforming vectors that achieve the Walrasian equilibrium. This outcome is Pareto optimal and lies in the set of exchange equilibria. In this thesis, based on the game theoretic and microeconomic models, efficient beamforming strategies are proposed that jointly improve the performance of the systems. The obtained results are applicable in interference-limited wireless networks requiring either coordination from the arbitrator or direct cooperation between the transmitters.
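To make the beamforming trade-off above tangible, the sketch below numerically samples the two-user MISO IFC rate region: each transmitter's beamformer is swept between maximum ratio transmission (selfish) and zero forcing towards the other receiver (altruistic), rates are computed with interference treated as noise, and the non-dominated pairs approximate the Pareto boundary. The channels, SNR and the real-valued interpolation are assumptions made for illustration, not the thesis's parametrization.

```python
# Sample the two-user MISO interference channel rate region by sweeping
# MRT-to-ZF beamformer combinations and keep the non-dominated rate pairs.
import numpy as np

rng = np.random.default_rng(3)
nt, snr = 4, 10.0
h = {(k, j): (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
     for k in (1, 2) for j in (1, 2)}            # h[(k, j)]: channel from tx k to rx j

def beam(k, other, lam):
    """Interpolate between MRT towards rx k and ZF towards the other receiver."""
    mrt = h[(k, k)] / np.linalg.norm(h[(k, k)])
    hkj = h[(k, other)]
    zf = h[(k, k)] - (hkj.conj() @ h[(k, k)]) / (np.linalg.norm(hkj) ** 2) * hkj
    zf = zf / np.linalg.norm(zf)
    w = lam * mrt + (1 - lam) * zf
    return w / np.linalg.norm(w)

pairs = []
for l1 in np.linspace(0, 1, 41):
    for l2 in np.linspace(0, 1, 41):
        w1, w2 = beam(1, 2, l1), beam(2, 1, l2)
        sinr1 = snr * abs(h[(1, 1)].conj() @ w1) ** 2 / (1 + snr * abs(h[(2, 1)].conj() @ w2) ** 2)
        sinr2 = snr * abs(h[(2, 2)].conj() @ w2) ** 2 / (1 + snr * abs(h[(1, 2)].conj() @ w1) ** 2)
        pairs.append((np.log2(1 + sinr1), np.log2(1 + sinr2)))

pareto = [p for p in pairs
          if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in pairs)]
print(f"{len(pareto)} non-dominated rate pairs out of {len(pairs)} sampled")
```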
218

A Markovian state-space framework for integrating flexibility into space system design decisions

Lafleur, Jarret Marshall 16 December 2011 (has links)
The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to considering a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes (MDPs) from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. Overall, this thesis unifies state-centric concepts of flexibility from the economics and engineering literature with sequential decision-making techniques from operations research. The end objective of this thesis' framework and its supporting analytic and computational tools is to enable selection of the next-generation space systems today, tailored to decision-maker budget and performance preferences, that will be best able to adapt and perform in a future of changing environments and requirements. Following extensive theoretical development, the framework and its steps are applied to the space system planning problems of (1) DARPA-motivated multiple- or distributed-payload satellite selection and (2) NASA human space exploration architecture selection.
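The five steps above line up with a small finite-horizon Markov decision process, which the toy sketch below illustrates; the configuration count, cost transition matrix, mission demand chain, performance matrix and horizon are invented placeholders rather than values from the thesis.

```python
# Backward-induction value iteration over (configuration, mission) states:
# actions are "switch to configuration a", costs come from a reconfiguration
# matrix, rewards from a per-mission performance matrix, and mission demand
# evolves as a Markov chain. Single objective only, for brevity.
import numpy as np

n_cfg, n_mis, horizon = 3, 2, 10
switch_cost = np.array([[0, 4, 9],          # cost[i][j]: switch from config i to j
                        [4, 0, 5],
                        [9, 5, 0]], dtype=float)
demand = np.array([[0.8, 0.2],              # mission demand Markov chain
                   [0.3, 0.7]])
perf = np.array([[10, 1],                   # perf[config][mission]: reward earned
                 [6, 6],
                 [1, 12]], dtype=float)

value = np.zeros((n_cfg, n_mis))            # terminal value
for _ in range(horizon):                    # backward induction
    new_value = np.empty_like(value)
    policy = np.empty((n_cfg, n_mis), dtype=int)
    for c in range(n_cfg):
        for m in range(n_mis):
            # Value of switching to each configuration a: pay the switching
            # cost, earn the mission reward, then face the next mission drawn
            # from the demand chain.
            q = -switch_cost[c] + perf[:, m] + demand[m] @ value.T
            policy[c, m] = int(np.argmax(q))
            new_value[c, m] = q[policy[c, m]]
    value = new_value

print("optimal first-stage policy (rows: current config, cols: mission):")
print(policy)
```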
219

Méthodes et applications industrielles en optimisation multi-critère de paramètres de processus et de forme en emboutissage / Methods and industrial applications of multi-criteria optimization of process and shape parameters in stamping

Oujebbour, Fatima Zahra 12 March 2014 (has links) (PDF)
Given the current competitive and economic demands in the automotive sector, stamping, as a forming process based on large deformations, has the advantage of producing, at high production rates, parts of better geometric quality than other mechanical manufacturing processes. It is, however, difficult to set up; in industry this is generally done by the classical trial-and-error method, which is long and very costly. In research, simulating the process with the finite element method is an alternative; it is currently one of the technological innovations aimed at reducing the cost of production and tooling, and it facilitates the analysis and resolution of problems related to the process. Within this thesis, the objective is to predict and prevent, in particular, springback and rupture. These two problems are the most widespread in stamping and are difficult to handle in optimization since they are antagonistic. A part formed by stamping with a cross-shaped punch was the subject of the study. We first analyzed the sensitivity of the two phenomena with respect to two characteristic parameters of the stamping process (the initial blank thickness and the punch speed), then with respect to four parameters (the initial blank thickness, the punch speed, the blank-holder force and the friction coefficient), and finally with respect to the shape of the blank contour. The use of metamodels was necessary to optimize the two criteria.
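The metamodel-based optimization mentioned at the end lends itself to a compact illustration. The sketch below is a generic surrogate-assisted bi-objective search, not the thesis's actual procedure: two invented stand-ins for springback and rupture risk are sampled at a few points over (blank thickness, punch speed), Gaussian-process (kriging) surrogates are fitted with scikit-learn, and the surrogate predictions are screened for non-dominated candidates.

```python
# Kriging surrogates of two antagonistic objectives, then a Pareto filter.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def springback(x):    # placeholder for a costly finite-element result
    return 2.0 - 1.2 * x[:, 0] + 0.3 * x[:, 1]

def rupture_risk(x):  # placeholder, increases where springback decreases
    return 0.5 + 1.0 * x[:, 0] + 0.4 * x[:, 1] ** 2

X_train = rng.uniform(0, 1, size=(12, 2))             # 12 "simulations" over normalized inputs
gp_s = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X_train, springback(X_train))
gp_r = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X_train, rupture_risk(X_train))

grid = np.array([[a, b] for a in np.linspace(0, 1, 30) for b in np.linspace(0, 1, 30)])
f1, f2 = gp_s.predict(grid), gp_r.predict(grid)
pareto = [i for i in range(len(grid))
          if not np.any((f1 <= f1[i]) & (f2 <= f2[i]) & ((f1 < f1[i]) | (f2 < f2[i])))]
print(f"{len(pareto)} candidate trade-off designs out of {len(grid)} grid points")
```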
220

Cash-to-cash-styrning : ett spelteoretiskt angreppssätt / Cash-to-cash management : a game-theoretic approach

Mangs, Christian, Fernholm, Nicholas January 2015 (has links)
Objective: The objective is to examine whether long terms of payment are created because of a perceived zero-sum game in the cash-to-cash cycles between companies. The study will also examine whether dependence affects the dependent companies' cash-to-cash cycle because of long terms of payment. Scientific method: The study uses a qualitative and a quantitative method, where primary data is collected from semi-structured interviews. Additional primary data is collected from an unstructured interview with an expert in the field being examined. Additional data has been collected from the studied companies' annual reports. Theoretical references: The primary theory has been game theory, which the researchers have used to analyze the behavior of the companies. This theory has been reinforced by the theory of Pareto efficiency, to help analyze which strategy gives the highest net outcome. The cash-to-cash theory has also been a focal point in the study, and has been used in concert with supply chain finance and supply chain management in order to further analyze the behavior of the companies and the cooperation in the supply chain. Result: The result shows that there seems to exist a perceived zero-sum game in the cash-to-cash cycles between companies. The consequence of this is that companies in a dominant position use a strategy that prolongs days payable outstanding and shortens accounts receivable. This in turn prolongs the cash-to-cash cycles of the dominant companies' suppliers. Focus on improving the cash-to-cash cycles of both companies is very small or non-existent. The authors want to emphasize that additional research is needed within this area, as other contributing factors may be affecting the companies' decisions. / Syfte: Studien syftar till att undersöka om ofördelaktiga betalningsvillkor skapas på grund av ett uppfattat nollsummespel inom Cash-to-Cash-cykler mellan företag. Studien syftar även till att undersöka om beroendeförhållanden påverkar beroende aktörers Cash-to-Cash-cykel på grund av ofördelaktiga betalningsvillkor. Metod: Studien använde en metodtriangulering, där primärdata samlades in via semistrukturerade intervjuer. Ytterligare primärdata samlades in via en ostrukturerad intervju med en expert inom området. Slutligen har data samlats in angående intervjuobjektens årsredovisning. Teoretisk referensram: Den primära teorin har varit Game theory, där forskarna använt teorin för att kunna analysera aktörernas beteenden. Denna teori har förstärkts av ytterligare teorier såsom Paretooptimalitet som hjälper att analysera utfall som ger högst möjlig nytta. Teorier angående Cash-to-Cash har även de varit centrala i arbetet som har kopplats till Supply chain management samt Supply chain finance för att ytterligare kunna analysera beteenden och data angående samarbeten i kedjor av aktörer. Resultat: Resultatet visar att det verkar finnas ett uppfattat nollsummespel inom cash-to-cash-cykeln mellan företagen. Detta leder till att aktörer med maktpositioner utgår från en strategi som innebär att förlänga betaltiderna för leverantörer, som då förlänger leverantörernas Cash-to-Cash-cykel. Fokus på att förbättra Cash-to-Cash-cyklerna mellan företagen är väldigt liten eller obefintlig. Författarna vill dock tydliggöra att ytterligare forskning kring ämnet behövs för att kunna säkerställa resultatet, då andra faktorer kan ha påverkat aktörernas beslut och beteenden.
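The zero-sum intuition above follows directly from the cash-to-cash identity C2C = DIO + DSO - DPO: days the dominant buyer adds to its payment terms leave its own cycle and reappear in the supplier's. The figures in the sketch below are invented purely to show the mechanism.

```python
# Cash-to-cash cycle = days inventory outstanding + days sales outstanding
#                      - days payables outstanding.
def c2c(dio, dso, dpo):
    return dio + dso - dpo

supplier = {"dio": 40, "dso": 30, "dpo": 25}
buyer = {"dio": 35, "dso": 20, "dpo": 30}

for extra_days in (0, 30, 60):          # buyer unilaterally extends payment terms
    buyer_c2c = c2c(buyer["dio"], buyer["dso"], buyer["dpo"] + extra_days)
    supplier_c2c = c2c(supplier["dio"], supplier["dso"] + extra_days, supplier["dpo"])
    print(f"+{extra_days:>2} days terms: buyer C2C = {buyer_c2c:>3}, "
          f"supplier C2C = {supplier_c2c:>3}, sum = {buyer_c2c + supplier_c2c}")
```

The combined cycle stays constant while the split shifts, which is exactly the perceived zero-sum game the study describes; only joint supply-chain-finance measures change the sum itself.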
