  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Bayesian Off-policy Sim-to-Real Transfer for Antenna Tilt Optimization

Larsson Forsberg, Albin January 2021 (has links)
Choosing the correct angle of electrical tilt in a radio base station is essential when optimizing for coverage and capacity. A reinforcement learning agent can be trained to make this choice. If training the agent in the real world is restricted or even impossible, alternative methods can be used. Training in simulation combined with an approximation of the real world is one option, which comes with a set of challenges associated with the reality gap. In this thesis, a method based on Bayesian optimization is implemented to tune the environment in which domain randomization is performed, in order to improve the quality of the simulation training. Two off-policy estimators, based on inverse propensity scoring and direct method evaluation, were tested in combination with an offline dataset of previously collected cell traces. The results show that using Bayesian optimization to find a good subset of parameters works even when access to the real world is constrained: the method manages to find an isolated subspace of the whole domain that optimizes the randomization while still giving good performance in the target domain.
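The two off-policy estimators named in the abstract can be sketched in a few lines. This is a minimal illustration under assumed interfaces (function names, array shapes, and the reward-model callable are all hypothetical), not the thesis's implementation:

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """Inverse propensity scoring: reweight each logged reward by the
    ratio of the probability the target policy would have taken the
    logged action to the probability the logging policy took it."""
    return np.mean((target_probs / logging_probs) * rewards)

def direct_method_estimate(contexts, target_action_probs, reward_model):
    """Direct method: average the predicted reward of a learned model
    under the target policy's action distribution."""
    predicted = reward_model(contexts)          # shape (n, n_actions)
    return np.mean(np.sum(target_action_probs * predicted, axis=1))
```

IPS is unbiased when the logging probabilities are known but can have high variance; the direct method trades that variance for the bias of the fitted reward model.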
102

Insulin Resistance : Causes, biomarkers and consequences

Nowak, Christoph January 2017 (has links)
The worldwide increasing number of persons affected by largely preventable diseases like diabetes demands better prevention and treatment. Insulin is required for effective utilisation of circulating nutrients. Impaired responsiveness to insulin (insulin resistance, IR) is a hallmark of type 2 diabetes and independently raises the risk of heart attack and stroke. The pathophysiology of IR is incompletely understood. High-throughput measurement of large numbers of circulating biomarkers may provide new insights beyond established risk factors. The aims of this thesis were to (i) use proteomics, metabolomics and genomics methods in large community samples to identify biomarkers of IR; (ii) assess biomarkers for risk prediction and insights into aetiology and consequences of IR; and (iii) use Mendelian randomisation analysis to assess causality. In Study I, analysis of 80 circulating proteins in 70-to-77-year-old Swedes identified cathepsin D as a biomarker for IR and highlighted a tentative causal effect of IR on raised plasma tissue plasminogen activator levels. In Study II, nontargeted fasting plasma metabolomics was used to discover 52 metabolites associated with glycaemic traits in non-diabetic 70-year-old men. Replication in independent samples of several thousand persons provided evidence for a causal effect of IR on reduced plasma oleic acid and palmitoleic acid levels. In Study III, nontargeted metabolomics in plasma samples obtained at three time points during an oral glucose challenge in 70-year-old men identified associations between a physiologic measure of IR and concentration changes in medium-chain acylcarnitines, monounsaturated fatty acids, bile acids and lysophosphatidylethanolamines. Study IV provided evidence in two large longitudinal cohorts for causal effects of type 2 diabetes and impaired insulin secretion on raised coronary artery disease risk. 
In conclusion, the Studies in this thesis provide new insights into the pathophysiology and adverse health consequences of IR and illustrate the value of combining traditional epidemiologic designs with recent molecular techniques and bioinformatics methods. The results provide limited evidence for the role of circulating proteins and small molecules in IR and require replication in separate studies and validation in experimental designs.
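Mendelian randomisation, used in several of the studies above, treats genetic variants as instruments for an exposure. A minimal sketch of the single-instrument Wald ratio and the standard inverse-variance weighted (IVW) combination follows; the toy interface and numbers are illustrative, not taken from the thesis:

```python
import numpy as np

def wald_ratio(beta_exposure, beta_outcome):
    """Single-instrument Wald ratio: estimated causal effect of the
    exposure on the outcome = (SNP-outcome effect) / (SNP-exposure effect)."""
    return beta_outcome / beta_exposure

def ivw_estimate(beta_exp, beta_out, se_out):
    """Inverse-variance weighted estimate combining per-SNP Wald
    ratios across multiple genetic instruments."""
    weights = beta_exp ** 2 / se_out ** 2
    return np.sum(weights * (beta_out / beta_exp)) / np.sum(weights)
```

The validity of either estimate rests on the usual instrumental-variable assumptions: the variants affect the outcome only through the exposure and are not confounded.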
103

Genetics and molecular epidemiology of metabolic syndrome-related traits: focus on metabolic profiling of lipid-lowering therapies and fatty liver, and the role of genetic factors in inflammatory load

Sliz, E. (Eeva) 14 May 2019 (has links)
Metabolic syndrome is a constellation of metabolic abnormalities predisposing to cardiovascular disease (CVD), type 2 diabetes, and increased mortality. Due to its high prevalence and severe co-morbidities, metabolic syndrome constitutes a major burden for both public health and the global economy. An improved understanding of the detailed molecular mechanisms could provide novel strategies for the treatment, and preferably prevention, of metabolic syndrome-related health issues. Recent advancements in 'omics' technologies have facilitated the development of novel tools to examine the links between genetic variation and human health. The new techniques allow the determination of millions of genotypes or the quantification of hundreds of metabolic measures from a single blood sample. In this thesis, genomics and metabolomics approaches are coupled to improve our understanding of metabolic syndrome-related health issues. More precisely, my projects evaluate the metabolic effects of two lipid-lowering therapies and of non-alcoholic fatty liver, and assess genetic determinants of chronic inflammation. The present results indicate generally consistent metabolic effects of statins and of proprotein convertase subtilisin/kexin type 9 (PCSK9) genetic inhibition; the subtle discrepancies observed could potentially contribute to differences between statins and PCSK9 inhibitors in their efficacy at lowering CVD risk. The dissimilar metabolic effects of the four genetic variants that increase the risk of non-alcoholic fatty liver disease (NAFLD) highlight the heterogeneity of the molecular mechanisms involved in NAFLD pathogenesis. The results further suggest that, at the population level, fatty liver by itself might not promote the unfavourable metabolic aberrations associated with it. The newly identified loci associating with inflammatory phenotypes elucidate the genetic mechanisms contributing to the inflammatory load. In particular, the present results suggest an important role for the locus determining the ABO blood types in the regulation of soluble adhesion molecule levels. To conclude, this thesis complements our knowledge of the molecular mechanisms involved in metabolic syndrome-related traits and provides examples of how to couple omics technologies in the study of complex traits and in the evaluation of drug effects.
104

Three essays in matching mechanism design

Nesterov, Alexander 26 September 2016 (has links)
I consider the problem of allocating indivisible objects among agents according to their preferences when transfers are absent. In Chapter 1, I study the tradeoff between fairness and efficiency in the class of strategy-proof allocation mechanisms.
The main finding is that for strategy-proof mechanisms the following efficiency and fairness criteria are mutually incompatible: (1) Ex-post efficiency and envy-freeness, (2) ordinal efficiency and weak envy-freeness and (3) ordinal efficiency and equal division lower bound. In Chapter 2, the focus is on two representations of an allocation when randomization is used: as a probabilistic assignment and as a lottery over deterministic assignments. To help facilitate the design of practical lottery mechanisms, we provide new tools for obtaining stochastic improvements in lotteries. As applications, we propose lottery mechanisms that improve upon the widely-used random serial dictatorship mechanism, and a lottery representation of its competitor, the probabilistic serial mechanism. In Chapter 3, I propose a new mechanism to assign students to primary schools: the Adaptive Acceptance rule (AA). AA collects von Neumann-Morgenstern utilities of students over schools and implements the assignment using an iterative procedure similar to the prevalent Immediate Acceptance rule (IA). AA enjoys a strong combination of incentive and efficiency properties compared to IA and its rival, the Deferred Acceptance rule (DA). In case of strict priorities, AA implements the student-optimal stable matching in dominant strategies, which dominates each equilibrium outcome of IA. In case of no priorities, AA is ex-post efficient while some equilibrium outcomes of IA are not; also, AA causes loss of ex-ante efficiency less often than DA. If, in addition, students have common ordinal preferences, AA is approximately strategy-proof and ex-ante dominates DA.
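The random serial dictatorship mechanism that Chapter 2 improves upon can be sketched as follows. This is a minimal illustration with hypothetical data structures (a dict of ordered preference lists), not the thesis's formulation:

```python
import random

def random_serial_dictatorship(preferences, objects, rng=random):
    """Random serial dictatorship: draw a uniformly random order over
    the agents, then let each agent in turn take their most-preferred
    object among those still unassigned."""
    agents = list(preferences)
    rng.shuffle(agents)
    remaining = set(objects)
    assignment = {}
    for agent in agents:
        for obj in preferences[agent]:
            if obj in remaining:
                assignment[agent] = obj
                remaining.remove(obj)
                break
    return assignment
```

Averaging the resulting deterministic assignments over all orderings yields the probabilistic assignment that the chapter's lottery representations are concerned with.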
105

Wi-Fi tracking: Fingerprinting attacks and counter-measures

Matte, Célestin 07 December 2017 (has links)
The recent spread of everyday-carried Wi-Fi-enabled devices (smartphones, tablets and wearable devices) comes with a privacy threat to their owners, and to society as a whole. These devices continuously emit signals which can be captured by a passive attacker using cheap hardware and basic knowledge. These signals contain a unique identifier, called the MAC address. To mitigate the threat, device vendors are currently deploying a countermeasure on new devices: MAC address randomization. Unfortunately, we show that this mitigation, in its current state, is insufficient to prevent tracking. To do so, we introduce several attacks based on the content and the timing of emitted signals. In addition, we study implementations of MAC address randomization in some recent devices, and find a number of shortcomings limiting the efficiency of these implementations at preventing device tracking. At the same time, we perform two real-world studies. The first one considers the development of actors exploiting this issue to install Wi-Fi tracking systems. We list some real-world installations and discuss their various aspects, including regulation, privacy implications, consent and public acceptance. The second one deals with the spread of MAC address randomization in the device population. Finally, we present two tools: an experimental Wi-Fi tracking system for testing and public awareness-raising purposes, and a tool estimating the uniqueness of a device based on the content of its emitted signals, even if the identifier is randomized.
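The content-based fingerprinting idea, identifying a device from what its probe requests contain rather than from the (possibly randomized) MAC address, can be sketched roughly as below. This is a simplified illustration; the field layout and function names are assumptions, not the thesis's tools:

```python
import hashlib
from collections import Counter

def fingerprint(probe_request_ies):
    """Hash the ordered (tag, payload) information elements of a probe
    request into a short fingerprint. The MAC address is deliberately
    excluded, since it may be randomized; the set, order and content
    of information elements often remain device-specific."""
    h = hashlib.sha256()
    for tag, payload in probe_request_ies:
        h.update(bytes([tag]))
        h.update(payload)
    return h.hexdigest()[:16]

def uniqueness(fingerprints):
    """Fraction of observed devices whose fingerprint is unique in the
    population -- a rough measure of how identifying the content is."""
    counts = Counter(fingerprints)
    return sum(1 for f in fingerprints if counts[f] == 1) / len(fingerprints)
```

If a large fraction of devices in a capture have a unique fingerprint, MAC randomization alone does little to prevent tracking.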
106

Solving dense linear systems on accelerated multicore architectures

Rémy, Adrien 08 July 2015 (has links)
In this PhD thesis, we study algorithms and implementations to accelerate the solution of dense linear systems by using hybrid architectures with multicore processors and accelerators. We focus on methods based on the LU factorization, and our code development takes place in the context of the MAGMA library. We study different hybrid CPU/GPU solvers based on the LU factorization which aim at reducing the communication overhead due to pivoting. The first one is based on a communication-avoiding pivoting strategy (CALU), while the second uses a random preconditioning of the original system to avoid pivoting altogether (RBT). We show that both of these methods outperform the solver using LU factorization with partial pivoting when implemented on hybrid multicore/GPU architectures. We also present new solvers based on randomization for hybrid architectures with Nvidia GPUs or Intel Xeon Phi coprocessors. With this method, we can avoid the high cost of pivoting while remaining numerically stable in most cases. The highly parallel architecture of these accelerators allows us to perform the randomization of our linear system at a very low computational cost compared to the time of the factorization. Finally, we investigate the impact of non-uniform memory accesses (NUMA) on the solution of dense general linear systems using an LU factorization algorithm. In particular, we illustrate how an appropriate placement of the threads and data on a NUMA architecture can improve the performance of the panel factorization and consequently accelerate the global LU factorization. We show how these placements can improve performance when applied to hybrid multicore/GPU solvers.
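The random butterfly transformation (RBT) idea, randomizing the system so that pivoting can be skipped, can be sketched as below. This is a simplified depth-1 illustration under assumed names; the actual MAGMA implementation uses recursive butterflies and a pivot-free LU kernel:

```python
import numpy as np

def butterfly(n, rng):
    """One random butterfly matrix of even order n:
    B = 1/sqrt(2) * [[R0, R1], [R0, -R1]] with random diagonal
    blocks R0 and R1 (entries bounded away from zero, so B is
    invertible and well conditioned)."""
    r0 = np.diag(rng.uniform(0.5, 1.5, n // 2))
    r1 = np.diag(rng.uniform(0.5, 1.5, n // 2))
    return np.vstack([np.hstack([r0, r1]),
                      np.hstack([r0, -r1])]) / np.sqrt(2)

def rbt_solve(a, b, rng=None):
    """Solve Ax = b by randomizing A as U^T A V, factorizing the
    randomized matrix without pivoting, then undoing the transform
    (x = V y where (U^T A V) y = U^T b)."""
    rng = rng or np.random.default_rng(0)
    n = a.shape[0]                      # assumed even in this sketch
    u, v = butterfly(n, rng), butterfly(n, rng)
    ar = u.T @ a @ v
    # Stand-in for Gaussian elimination with NO pivoting; NumPy's
    # generic solver is used here only to keep the sketch short.
    y = np.linalg.solve(ar, u.T @ b)
    return v @ y
```

The point of the transform is probabilistic: after randomization, zero or tiny pivots become very unlikely, so the costly row interchanges of partial pivoting can be dropped.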
107

Joint Source-Channel Coding Reliability Function for Single and Multi-Terminal Communication Systems

Zhong, Yangfan 15 May 2008 (has links)
Traditionally, source coding (data compression) and channel coding (error protection) are performed separately and sequentially, resulting in what we call a tandem (separate) coding system. In practical implementations, however, tandem coding might involve a large delay and a high coding/decoding complexity, since one needs to remove the redundancy in the source coding part and then insert certain redundancy in the channel coding part. On the other hand, joint source-channel coding (JSCC), which coordinates source and channel coding or combines them into a single step, may offer substantial improvements over the tandem coding approach. This thesis deals with the fundamental Shannon-theoretic limits for a variety of communication systems via JSCC. More specifically, we investigate the JSCC reliability function (the best attainable exponential rate of decay of the error probability as the blocklength increases) for the following discrete-time communication systems: (i) discrete memoryless systems; (ii) discrete memoryless systems with perfect channel feedback; (iii) discrete memoryless systems with source side information; (iv) discrete systems with Markovian memory; (v) continuous-valued (particularly Gaussian) memoryless systems; (vi) discrete asymmetric 2-user source-channel systems. For the above systems, we establish upper and lower bounds for the JSCC reliability function and we analytically compute these bounds. The conditions under which the upper and lower bounds coincide are also provided. We show that these conditions are satisfied for a large class of source-channel systems, and hence exactly determine the reliability function. We next provide a systematic comparison between the JSCC reliability function and the tandem coding reliability function (the reliability function resulting from separate source and channel coding).
We show that the JSCC reliability function is substantially larger than the tandem coding reliability function for most cases. In particular, the JSCC reliability function is close to twice as large as the tandem coding reliability function for many source-channel pairs. This exponent gain provides a theoretical underpinning and justification for JSCC design as opposed to the widely used tandem coding method, since JSCC will yield a faster exponential rate of decay for the system error probability and thus provides substantial reductions in complexity and coding/decoding delay for real-world communication systems. / Thesis (Ph.D., Mathematics & Statistics), Queen's University, 2008.
108

Statistical analysis of clinical trial data using Monte Carlo methods

Han, Baoguang 11 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In medical research, data analysis often requires complex statistical methods for which no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we propose several novel statistical models in which MC methods are utilized. In the first part, we focus on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we propose flexible random effects models. Further, we extend our model to the setting of joint modeling, where semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods are utilized for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods are demonstrated in both simulation and case studies. In the second part, we focus on the re-randomization test, a nonparametric method that makes inferences solely based on the randomization procedure used in a clinical trial. With this type of inference, a Monte Carlo method is often used to generate the null distribution of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice: the null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigate this property of the re-randomization test and propose a weighted re-randomization method to overcome the issue. The proposed method is demonstrated through extensive simulation studies.
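A basic Monte Carlo re-randomization test for a treatment difference can be sketched as follows. For simplicity this sketch re-draws assignments under complete randomization (a permutation of the observed labels); the issue studied in the dissertation arises under the minimization algorithm with unbalanced allocation, which is not reproduced here:

```python
import numpy as np

def rerandomization_test(outcomes, assignment, n_draws=2000, seed=0):
    """Monte Carlo re-randomization test: compute the observed mean
    difference between arms, re-draw the assignment many times under
    the randomization procedure (here: a random permutation of the
    observed labels), and return the two-sided Monte Carlo p-value."""
    rng = np.random.default_rng(seed)
    outcomes = np.asarray(outcomes, dtype=float)
    assignment = np.asarray(assignment)
    observed = (outcomes[assignment == 1].mean()
                - outcomes[assignment == 0].mean())
    null = np.empty(n_draws)
    for i in range(n_draws):
        perm = rng.permutation(assignment)
        null[i] = (outcomes[perm == 1].mean()
                   - outcomes[perm == 0].mean())
    return float((np.abs(null) >= abs(observed)).mean())
```

Because the null distribution is built from the same procedure that allocated the subjects, the test needs no distributional assumptions; the dissertation's weighted variant corrects the off-center null that minimization can produce.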
