291

Tactical production planning for physical and financial flows for supply chain in a multi-site context

Bian, Yuan 19 December 2017 (has links)
In times of financial crisis, companies need free cash flow to react efficiently to uncertainties and to ensure their solvency. This thesis sits at the interface between operations and finance: it develops tactical production-planning models that jointly manage physical and financial flows in the supply chain. In these models, the financing cost of the operation-based working capital requirement (WCR) is integrated as a financial aspect not previously considered in the lot-sizing literature. We first extend the classic EOQ model to include the financing cost of WCR, with a profit-maximization objective. An analytic formula for the optimal production quantity is derived, together with a sensitivity analysis of the model. Comparisons with the EOQ model and with a model that considers the cost of capital are also discussed. Second, a discounted-cash-flow model based on the uncapacitated dynamic lot-sizing model is established; the zero-inventory-ordering (ZIO) property is proven to hold for this case, which allows a polynomial-time algorithm. Third, a multi-level, infinite-capacity scenario is investigated with both sequential and centralized approaches; the ZIO property is shown to hold in both cases, and dynamic-programming algorithms are constructed to obtain optimal solutions. This thesis can be seen as a first but significant step in combining production planning with working capital management in tactical planning models. We show that financial aspects have a significant impact on production plans. The cases studied here can serve as subproblems in the study of more realistic scenarios.
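The thesis builds on the classic EOQ model before adding the WCR-financing cost. As background only, a minimal sketch of the standard EOQ baseline (the WCR extension and profit-maximization objective of the thesis are not reproduced here):

```python
import math

def eoq_quantity(demand_rate: float, setup_cost: float, holding_cost: float) -> float:
    """Classic economic order quantity: lot size minimizing setup + holding cost per period."""
    return math.sqrt(2.0 * demand_rate * setup_cost / holding_cost)

def eoq_cost(q: float, demand_rate: float, setup_cost: float, holding_cost: float) -> float:
    """Average cost per period for lot size q: amortized setup cost plus average inventory cost."""
    return setup_cost * demand_rate / q + holding_cost * q / 2.0

# Illustrative parameter values, not taken from the thesis.
q_star = eoq_quantity(demand_rate=1000, setup_cost=50, holding_cost=2)
```

At the optimum the two cost components are equal, and any nearby lot size costs strictly more; the thesis's contribution is to re-derive this trade-off when the financing cost of working capital is added to the objective.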
292

Optimization techniques for radio resource management in wireless communication networks

Weeraddana, P. C. (Pradeep Chathuranga) 22 November 2011 (has links)
Abstract: This thesis considers the application of optimization techniques to resource management in wireless communication networks. A wide variety of resource-management problems of recent interest, including power/rate control, link scheduling, cross-layer control, network utility maximization, beamformer design for multiple-input multiple-output networks, and many others, rely directly or indirectly on the general weighted sum-rate maximization (WSRMax) problem. This dissertation therefore places particular emphasis on the WSRMax problem, which is known to be NP-hard. A general method based on the branch-and-bound technique is developed that globally solves the nonconvex WSRMax problem with an optimality certificate, and efficient analytic bounding techniques are derived. More broadly, the proposed method is not restricted to WSRMax: it can maximize any system performance metric that is Lipschitz continuous and increasing in the signal-to-interference-plus-noise ratio (SINR). The method can be used to find the optimal performance of any network design method that relies on WSRMax, and it is therefore also useful for evaluating the performance loss incurred by any heuristic algorithm. The considered link-interference model is general enough to accommodate a wide range of network topologies and node capabilities, such as single-packet transmission, multi-packet transmission, and simultaneous transmission and reception. Since global methods become slow on large-scale problems, fast local optimization methods for the WSRMax problem are also developed. First, a general multicommodity, multichannel wireless multihop network in which all receivers perform single-user detection is considered, and algorithms based on homotopy methods and complementary geometric programming are developed for WSRMax; they efficiently exploit the available multichannel diversity. The homotopy-based algorithm efficiently handles the self-interference problem that arises when a node transmits and receives simultaneously in the same frequency band. This is important because it avoids the supplementary combinatorial constraints that would otherwise be needed to prevent simultaneous transmission and reception at any node. In addition, the algorithm, together with the considered interference model, provides a mechanism for evaluating the gains obtained when network nodes employ self-interference cancellation techniques of varying accuracy. Next, a similar multicommodity wireless multihop network is considered, but with all receivers performing multiuser detection; solutions to the WSRMax problem are obtained by imposing additional constraints, such as allowing only one node to transmit to others at a time, or only one node to receive from others at a time. The WSRMax problem for downlink OFDMA systems is also considered: a fast algorithm based on primal decomposition techniques is developed that jointly optimizes the multiuser subcarrier assignment and power allocation to maximize the weighted sum-rate (WSR). Numerical results show that the proposed algorithm converges faster than methods based on Lagrange relaxation. Finally, a distributed algorithm for WSRMax is derived for multiple-input single-output multicell downlink systems. The proposed method is based on classical primal decomposition and subgradient methods, and, unlike many other distributed variants, it does not rely on zero-forcing beamforming or on a high-SINR approximation. The algorithm coordinates local subproblems (one per base station) to resolve the inter-cell interference so that the WSR is maximized. The numerical results show that significant gains can be achieved with only a small amount of message passing between the coordinating base stations, although the global optimality of the solution cannot be guaranteed.
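The WSRMax objective the dissertation centres on can be stated compactly. As an illustration only (not the author's branch-and-bound solver), a sketch of the weighted sum-rate for a single-channel interference network, assuming the convention `gains[i][j]` = channel gain from transmitter j to receiver i; the coupling through interference is what makes maximizing this quantity over the powers nonconvex:

```python
import numpy as np

def weighted_sum_rate(weights, powers, gains, noise=1.0):
    """Weighted sum-rate  sum_i w_i * log2(1 + SINR_i)  for a single-channel
    interference network. gains[i][j] is the gain from transmitter j to receiver i."""
    gains = np.asarray(gains, dtype=float)
    powers = np.asarray(powers, dtype=float)
    signal = np.diag(gains) * powers            # desired-link received power per link
    interference = gains @ powers - signal      # all other transmitters' contribution
    sinr = signal / (noise + interference)
    return float(np.sum(np.asarray(weights) * np.log2(1.0 + sinr)))
```

With zero cross-gains the links decouple and the rate is simply the sum of single-link capacities; any cross-gain lowers it, which is the effect the global and local WSRMax methods in the thesis must contend with.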
293

Influencers characterization in a social network for viral marketing perspectives

Jendoubi, Siwar 16 December 2016 (has links)
Viral marketing is a relatively new form of marketing that exploits social networks to promote a product, a brand, etc. It is based on the influence that one user exerts on another. Influence maximization is the scientific problem underlying viral marketing: its main purpose is to select a set of influential users who could adopt the product and trigger a large cascade of influence and adoption through the network. In this thesis, we propose two evidential influence-maximization models for social networks. The proposed approach uses the theory of belief functions to estimate user influence. Furthermore, we introduce an influence measure that fuses several aspects of influence, such as the importance of the user in the network and the popularity of his messages. Next, we propose three viral-marketing scenarios, introducing two influence measures for each. The first scenario targets influencers having a positive opinion about the product; the second targets influencers having a positive opinion who influence users with a positive opinion; and the last targets influencers having a positive opinion who influence users with a negative opinion. We then turn to another important problem: predicting the topic of a social message. Indeed, the topic is also an important parameter in the influence-maximization problem. For this purpose, we introduce four classification algorithms that do not need the content of a message to classify it; they only need its propagation traces. In our experiments, we compare the proposed solutions to existing ones and show the performance of the proposed influence-maximization models and classifiers.
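The thesis estimates influence with belief functions, which is not reproduced here. As a generic illustration of the influence-maximization loop such models plug into, a sketch of greedy seed selection under the standard independent-cascade model (the activation probability `p` and the Monte-Carlo settings are illustrative assumptions, not values from the thesis):

```python
import random

def simulate_cascade(adj, seeds, p, rng):
    """One independent-cascade run: each newly activated node gets one chance to
    activate each inactive neighbour with probability p. Returns the spread size."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(adj, k, p=0.1, runs=200, seed=0):
    """Greedy hill-climbing: repeatedly add the node whose Monte-Carlo estimated
    marginal spread is largest. adj maps each node to its list of neighbours."""
    rng = random.Random(seed)
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for u in sorted(nodes - set(chosen)):
            spread = sum(simulate_cascade(adj, chosen + [u], p, rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = u, spread
        chosen.append(best)
    return chosen
```

On a star graph the greedy pass correctly picks the hub as the single best seed; the thesis's contribution is to replace the cascade-based influence estimate with an evidential (belief-function) one.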
294

Parallel Tomographic Image Reconstruction On Hierarchical Bus-Based And Extended Hypercube Architectures

Rajan, K 07 1900 (has links) (PDF)
No description available.
295

Resource allocation and performance metrics analysis in cooperative cellular networks

Maaz, Mohamad 03 December 2013 (has links)
In wireless systems, transmitting large amounts of information at low energy cost has been a central concern of the research community over the past decade. Cooperative communication has recently been shown to be an appealing technique, notably because it exploits spatial diversity in the wireless channel. It promises robust and reliable communication and a higher quality of service (QoS), which makes the cooperation concept attractive for future generations of cellular systems. Typical QoS metrics are the packet error rate, throughput, and delay. These metrics are affected by the delay induced by Hybrid Automatic Repeat reQuest (HARQ) mechanisms, under which each erroneous packet is retransmitted several times; on the other hand, HARQ retransmissions create temporal diversity. Jointly adopting cooperative communication and HARQ mechanisms could therefore be beneficial for designing cross-layer schemes. First, a new strategy for maximizing the total rate under heterogeneous data-rate constraints among users is proposed. We introduce an algorithm that allocates the optimal power at the base station (BS) and the relays, assigns subcarriers, and selects relays for each user. The achievable data rate is investigated, as well as the average starvation rate in the network as the load (the number of active users) increases. Compared with the existing literature, the algorithm achieves a significant gain in global capacity. Second, theoretical analyses of the packet error rate, delay, and throughput efficiency of relay-assisted HARQ networks are provided for block-fading channels. In slow-fading channels, the average delay of HARQ mechanisms with respect to the fading states is not relevant, because the fading process is non-ergodic. The delay outage is therefore invoked: it is defined as the probability that the average delay with respect to the AWGN channel exceeds a predefined threshold. This criterion had not previously been studied in the literature, although it is of prime importance for delay-sensitive applications in slow-fading channels. An analytical form of the delay-outage probability is proposed, which avoids lengthy simulations. These analyses consider a finite packet length and a given modulation and coding scheme (MCS), so they capture the performance of practical systems. Third, a theoretical analysis of the energy efficiency (bits/joule) of relay-assisted HARQ networks is provided. Based on this analysis, an energy-minimization problem in multiuser relay-assisted downlink cellular networks is investigated: each user has an average delay constraint to satisfy, subject to a total power constraint in the system. The BS is assumed to know only the average channel statistics, with no instantaneous channel state information (CSI). Finally, an algorithm is proposed that jointly allocates the optimal power at the BS and the relay stations and selects the optimal relay for each user so as to satisfy the delay constraints. Simulations show that, in delay-constrained systems, relay-assisted techniques clearly outperform non-aided transmission in terms of energy consumption. The work proposed in this thesis can thus provide useful engineering rules for the design of next-generation energy-efficient cellular systems.
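To make the HARQ delay and throughput metrics concrete, a hedged sketch of the mean round count, packet error rate, and throughput efficiency of a truncated HARQ scheme, assuming given per-round conditional failure probabilities (a simplification: the thesis derives these quantities for block-fading channels and practical MCS, which this toy recursion does not model):

```python
def harq_metrics(fail_probs):
    """Truncated HARQ with at most len(fail_probs) rounds per packet.
    fail_probs[k] = P(decoding fails in round k+1 | all earlier rounds failed);
    with HARQ combining these conditional probabilities typically decrease.
    Returns (mean rounds used, packet error rate, throughput efficiency
    = delivered packets per round consumed)."""
    reach = 1.0          # probability that the current round is transmitted
    mean_rounds = 0.0
    for p in fail_probs:
        mean_rounds += reach
        reach *= p       # the next round happens only if this one failed
    per = reach          # packet is lost if every round failed
    return mean_rounds, per, (1.0 - per) / mean_rounds
```

For a single round with failure probability 0.5 this gives one transmission, a 50% packet error rate, and efficiency 0.5; adding a second, perfectly reliable round raises the efficiency to 2/3 at the cost of 1.5 rounds on average, which is the kind of delay/reliability trade-off the thesis analyses.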
296

Learning from uncertain data and knowledge: application to natural rubber quality prediction

Sutton-Charani, Nicolas 28 May 2014 (has links)
When learning predictive models, the quality of the available data is essential to the reliability of the predictions obtained. In practice, learning data are very often imperfect or uncertain (imprecise, noisy, etc.). This PhD thesis addresses this context, using the theory of belief functions to adapt standard statistical tools to uncertain data. The chosen predictive model is the decision tree, a basic classifier in artificial intelligence that is usually built from precise data. The main methodology developed in this thesis generalizes decision trees to uncertain data (fuzzy, probabilistic, missing, etc.) in both input and output. The central tool of this extension is a likelihood adapted to belief functions, recently introduced in the literature, whose properties are studied here in depth. To estimate the parameters of a decision tree, this likelihood is maximized via the E2M algorithm, which extends the EM algorithm to belief functions. The resulting methodology, E2M decision trees, is then applied to a real case: the prediction of natural rubber quality. The learning data, mainly cultural and climatic, contain many uncertainties, which are modelled by belief functions adapted to these imperfections. After a standard descriptive statistical study of the data, E2M decision trees are built, evaluated, and compared with standard decision trees. Taking the data uncertainty into account slightly improves the predictive accuracy and, more importantly, highlights the role of some variables little considered until now by rubber experts.
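The E2M algorithm extends the EM alternation to belief-function data; that extension is not reproduced here. As background, a minimal sketch of the plain EM alternation it generalizes, on a two-component 1-D Gaussian mixture (initialization and iteration count are illustrative choices):

```python
import math

def em_gaussian_mixture(xs, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture: the E-step computes
    responsibilities, the M-step re-estimates parameters by weighted maximum
    likelihood. E2M keeps the same alternation but replaces the precise
    observations with belief functions."""
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: weighted maximum-likelihood parameter updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi
```

On well-separated data the means converge to the two cluster centres; the thesis's E2M variant performs the same maximization when each observation is only known through a belief function.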
297

Censored regression models under the class of scale mixture of skew-normal distributions

Massuia, Monique Bettio, 1989- 03 June 2015 (has links)
Advisor: Víctor Hugo Lachos Dávila / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica / Abstract: This work presents linear regression models with censored responses under the class of scale mixtures of skew-normal (SMSN) distributions, generalizing the well-known Tobit model by providing alternatives more robust than the normal distribution. A classical inference study is developed for these censored models under two special cases of this family of distributions, the normal and the Student-t, using the EM algorithm to obtain maximum-likelihood estimates of the model parameters and developing diagnostic methods based on global and local influence, following the methodology proposed by Cook (1986) and Poon & Poon (1999). Under a Bayesian approach, the censored regression model is studied under several special cases of the SMSN class: the normal, Student-t, skew-normal, skew-t, and skew-slash distributions. In these cases, the Gibbs sampler is the main tool used for inference about the model parameters. We also present simulation studies evaluating the developed methodologies, which are finally applied to two real data sets. The packages SMNCensReg, CensRegMod, and BayesCR for the R software give computational support to this work. / Master's degree in Statistics
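The classical Tobit model that the SMSN class generalizes can be fitted by direct maximum likelihood. A sketch under the normal special case (left-censoring at zero, simulated data; the SMSN machinery and the R packages of the dissertation are not reproduced here):

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y, left=0.0):
    """Negative log-likelihood of the classic Tobit model: latent y* = X @ beta + eps,
    eps ~ N(0, sigma^2), observed y = max(y*, left). Censored observations
    contribute a normal CDF term, uncensored ones a normal density term."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)           # parameterize sigma on the log scale
    mu = X @ beta
    cens = y <= left
    ll = np.where(
        cens,
        stats.norm.logcdf((left - mu) / sigma),
        stats.norm.logpdf((y - mu) / sigma) - log_sigma,
    )
    return -ll.sum()

# Simulated example with true beta = (1, 2) and sigma = 1 (illustrative values).
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = np.maximum(X @ np.array([1.0, 2.0]) + rng.normal(size=n), 0.0)

res = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y), method="BFGS")
beta_hat = res.x[:2]
```

The maximum-likelihood estimates recover the true coefficients despite roughly a third of the observations being censored at zero, which a naive least-squares fit on the observed y would not.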
298

Information and rumor propagation in social networks

Didier Augusto Vega Oliveros 12 May 2017 (has links)
Online social networks have become a new and important medium for exchanging information and ideas, bringing relatives and friends together regardless of distance. Given the open nature of the Internet, information can flow through a population very easily and quickly. A network can be represented as a graph in which individuals or organizations form the set of vertices and the relationships or connections among them form the set of edges. Moreover, social networks intrinsically reflect the structure of a more complex system: society itself. These structures are related to characteristics of the individuals; for example, the most popular individuals are those with the most connections, and correlation in the connectivity of vertices is a trace of the homophily phenomenon.
It is well accepted that the structure of a network can affect how information propagates through it. However, how the structure influences propagation, how to measure its impact, and what strategies can control the diffusion process remain unclear. In this thesis we contribute to the analysis of the interplay between the dynamics of information and rumor spreading and the structure of the network. We propose a more realistic propagation model that accounts for the heterogeneity of individuals in transmitting ideas or information. We confirm the presence of influential spreaders in the rumor dynamics and show that selecting a very small fraction of influential spreaders can markedly improve or reduce the diffusion of a piece of information. When the goal is to select a set of initial spreaders that maximizes diffusion, the simplest and best option is to choose the most central or important individuals within the network's communities; if the connectivity pattern of the vertices is negatively correlated, however, the best alternative is to choose among the most central individuals of the whole network.
On the other hand, using topological measures and machine-learning techniques, we identify the least influential spreaders and show that they act as a firewall in the diffusion process. We propose an adaptive method that rewires one edge of a least influential vertex to a central individual of the network without affecting the network's degree distribution. Applying this method to a small fraction of the least influential spreaders yields a substantial increase in the propagation capacity of those vertices and of the network as a whole. Our results come from a wide range of simulations on artificial and real-world data sets and from comparisons with classical propagation models from the literature.
The propagation of information in networks is highly relevant to advertising and marketing, education, and political or health campaigns, among other areas. The results of this thesis can be applied and extended in different research fields, such as biological networks, models of animal social behavior, epidemic spreading models, and public health.
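The effect of seed selection on diffusion described in this abstract can be illustrated with a toy simulation. The sketch below is not the thesis's heterogeneous model: it is a minimal Maki-Thompson-style rumor dynamics (ignorant, spreader, stifler) on a random graph, built only with the standard library, and all graph sizes and rates are illustrative assumptions.

```python
import random

def erdos_renyi(n, p, rng):
    """Undirected Erdos-Renyi graph as an adjacency-set dict over all n vertices."""
    adj = {u: set() for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def rumor_spread(adj, seeds, beta=0.3, alpha=0.2, rng=None, max_steps=10_000):
    """Maki-Thompson-style dynamics: a spreader informs an ignorant neighbour
    with probability beta; on contacting an already-informed neighbour it
    turns stifler with probability alpha. Returns the fraction of the
    population that ever heard the rumor (spreaders + stiflers)."""
    rng = rng or random.Random()
    state = {v: "I" for v in adj}          # I = ignorant
    for s in seeds:
        state[s] = "S"                     # S = spreader
    spreaders = set(seeds)
    for _ in range(max_steps):
        if not spreaders:
            break
        u = rng.choice(sorted(spreaders))
        if not adj[u]:
            state[u] = "R"                 # R = stifler
            spreaders.discard(u)
            continue
        v = rng.choice(sorted(adj[u]))
        if state[v] == "I" and rng.random() < beta:
            state[v] = "S"
            spreaders.add(v)
        elif state[v] in ("S", "R") and rng.random() < alpha:
            state[u] = "R"
            spreaders.discard(u)
    informed = sum(1 for s in state.values() if s != "I")
    return informed / len(state)

rng = random.Random(42)
g = erdos_renyi(200, 0.04, rng)
by_degree = sorted(g, key=lambda v: len(g[v]), reverse=True)[:3]  # central seeds
at_random = rng.sample(sorted(g), 3)                              # baseline seeds
runs = 30
central = sum(rumor_spread(g, by_degree, rng=random.Random(i)) for i in range(runs)) / runs
baseline = sum(rumor_spread(g, at_random, rng=random.Random(i)) for i in range(runs)) / runs
print(f"central seeds reach {central:.2f}, random seeds reach {baseline:.2f}")
```

Degree is used here as a crude stand-in for the community-aware centrality measures the thesis actually compares; the point is only that the same dynamics can be rerun with different seed sets to measure their influence.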
299

An Analysis of Markov Regime-Switching Models for Weather Derivative Pricing

Gerdin Börjesson, Fredrik January 2021 (has links)
The valuation of weather derivatives depends heavily on accurate modeling and forecasting of the underlying temperature indices. The complexity and uncertainty of such modeling have led to several temperature processes being developed for the Monte Carlo simulation of daily average temperatures. In this report, we compare two recently developed models, by Gyamerah et al. (2018) and by Evarest, Berntsson, Singull, and Yang (2018). The paper gives a thorough introduction to option theory, Lévy and Wiener processes, and the generalized hyperbolic distributions frequently used in temperature modeling. Maximum likelihood estimation is implemented to fit the Lévy process distributions, and the expectation-maximization algorithm with Kim's smoothed transition probabilities is implemented to fit both models' parameters. Both models are then used to price European HDD and CDD options by Monte Carlo simulation. Evaluated on three data sets, the estimation favors the shifted temperature regime over the base regime, in contrast to the findings of both articles. Simulation is successfully demonstrated for the model of Evarest et al., but the model of Gyamerah et al. could not be replicated. We conclude that both articles contain several incorrect derivations, which leaves the thesis question unanswered and calls the articles' conclusions into question. We end by proposing further validation of the two models and summarize the alterations required for a correct implementation.
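The Monte Carlo pricing step for an HDD option can be sketched independently of the regime-switching Lévy models the abstract discusses. The sketch below deliberately substitutes a much simpler temperature process (a Gaussian mean-reverting discretization around a sinusoidal seasonal mean); the seasonal parameters, strike, and tick size are illustrative assumptions, not values from the thesis.

```python
import math
import random

def simulate_temperatures(days, kappa=0.25, sigma=2.0, rng=None):
    """Euler discretization of a mean-reverting (OU-type) process around a
    sinusoidal seasonal mean -- a simplified stand-in for the regime-switching
    Levy-driven models compared in the thesis."""
    rng = rng or random.Random()
    seasonal = lambda t: 8.0 + 10.0 * math.sin(2 * math.pi * (t - 110) / 365)
    temps = []
    x = seasonal(0)
    for t in range(days):
        x += kappa * (seasonal(t) - x) + sigma * rng.gauss(0.0, 1.0)
        temps.append(x)
    return temps

def price_hdd_call(strike, tick, days, n_paths=5000, r=0.01, seed=7):
    """Monte Carlo price of a European call on the HDD index:
    HDD = sum_t max(0, 18 - T_t), payoff = tick * max(HDD - strike, 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        temps = simulate_temperatures(days, rng=rng)
        hdd = sum(max(0.0, 18.0 - t) for t in temps)
        total += tick * max(hdd - strike, 0.0)
    discount = math.exp(-r * days / 365)
    return discount * total / n_paths

price = price_hdd_call(strike=300.0, tick=20.0, days=90)
print(f"estimated HDD call price: {price:.2f}")
```

The same discount-the-average-payoff structure applies whichever temperature process is plugged into `simulate_temperatures`; replacing it with a regime-switching model changes only the path generator, not the pricing loop.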
300

Statistical inference of time-dependent data

Suhas Gundimeda (5930648) 11 May 2020 (has links)
Probabilistic graphical modeling is a framework that can succinctly represent multivariate probability distributions of time series in terms of each time series's dependence on the others. In general, it is computationally prohibitive to statistically infer an arbitrary model from data. However, if we constrain the model to have a tree topology, the corresponding learning algorithms become tractable. The expressive power of tree-structured distributions is low, since only n − 1 dependencies are explicitly encoded for an n-node tree. One way to improve the expressive power of tree models is to combine many of them in a mixture model. This work presents, and uses simulations to validate, extensions of the standard mixtures-of-trees model for i.i.d. data to the setting of time-series data. We also consider the setting where the tree mixture itself forms a hidden Markov chain, which could be better suited to approximating time-varying seasonal data in the real world. Both extensions are evaluated on artificial data sets.
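The tractable tree-learning step the abstract alludes to is classically the Chow-Liu algorithm: the maximum-likelihood tree-structured distribution is the maximum-weight spanning tree under pairwise empirical mutual information. A minimal stdlib sketch for discrete i.i.d. data (the toy chain data at the end is an illustrative assumption, not from the thesis):

```python
import math
import random
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    # sum over observed pairs of p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def chow_liu_tree(data):
    """Return the n-1 edges of the maximum-weight spanning tree over
    pairwise mutual information (Kruskal with path-halving union-find)."""
    n_vars = len(data[0])
    cols = list(zip(*data))
    edges = sorted(
        ((mutual_information(cols[i], cols[j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data: X1 copies X0 with 10% noise, X2 copies X1 with 10% noise,
# so the true dependency structure is the chain 0 - 1 - 2.
rng = random.Random(0)
rows = []
for _ in range(2000):
    x0 = rng.randint(0, 1)
    x1 = x0 if rng.random() < 0.9 else 1 - x0
    x2 = x1 if rng.random() < 0.9 else 1 - x1
    rows.append((x0, x1, x2))
tree = chow_liu_tree(rows)
print(tree)  # recovers the chain: edges (0, 1) and (1, 2), in either order
```

A mixture of trees, as studied in this work, would run such a tree learner inside an EM loop, one tree per mixture component weighted by the component responsibilities.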
