291

Single and Multiple Emitter Localization in Cognitive Radio Networks

Ureten, Suzan January 2017 (has links)
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters operating simultaneously in the area. A probabilistic model of the problem is developed, and three approaches are proposed to solve the localization problem. The first estimates the locations from the generated interference map when no information about the propagation model or its parameters is available. The second approximates the maximum likelihood (ML) estimate of the transmitter locations with a grid search when the model is known and its parameters are available. The third also requires knowledge of the model parameters, but instead generates samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the computationally demanding grid search. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay-triangulation-based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimates, these rough estimates can be used to initialize a more accurate algorithm such as the MCMC technique and thereby reduce its complexity.
The complexity of ML estimators based on a full grid search is also addressed by various iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculation at each grid location. This motivates our investigation of the sum-of-log-normals approximations studied in the literature, in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation, based on fitting a distribution to a set of simulated data, and compare it with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first two moments of the sum. We demonstrate that the location estimation accuracy of the grid search obtained with our proposed approximation is higher than that obtained with Fenton-Wilkinson's in many different scenarios.
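As a concrete reference point, the Fenton-Wilkinson fit mentioned above can be sketched in a few lines (assuming independent summands and the LN(μ_i, σ_i²) parameterization; illustrative code, not the thesis's implementation):

```python
import math

def fenton_wilkinson(mus, sigmas):
    """Fit a single log-normal LN(mu_s, sigma_s^2) to the sum of independent
    log-normals LN(mu_i, sigma_i^2) by matching the first two moments of the sum."""
    # Mean and variance of each summand, then of the (independent) sum.
    means = [math.exp(m + s * s / 2.0) for m, s in zip(mus, sigmas)]
    variances = [(math.exp(s * s) - 1.0) * math.exp(2.0 * m + s * s)
                 for m, s in zip(mus, sigmas)]
    mean_sum = sum(means)
    var_sum = sum(variances)
    # Invert the log-normal moment formulas to recover the fitted parameters.
    sigma_s2 = math.log(1.0 + var_sum / mean_sum ** 2)
    mu_s = math.log(mean_sum) - sigma_s2 / 2.0
    return mu_s, math.sqrt(sigma_s2)
```

By construction the fitted distribution reproduces the exact mean and variance of the sum; the mismatch the thesis targets lies in the tails.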
292

Acceleration Strategies of Markov Chain Monte Carlo for Bayesian Computation / Stratégies d'accélération des algorithmes de Monte Carlo par chaîne de Markov pour le calcul Bayésien

Wu, Chang-Ye 04 October 2018 (has links)
MCMC algorithms are difficult to scale, since they need to sweep over the whole data set at each iteration, which prohibits their application in big data settings. Roughly speaking, all scalable MCMC algorithms can be divided into two categories: divide-and-conquer methods and subsampling methods. The aim of this project is to reduce the computing time induced by complex or large likelihood functions.
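The subsampling family of scalable MCMC can be illustrated with a minimal sketch (all names here are illustrative, not from the thesis): a Metropolis-Hastings sampler whose acceptance test evaluates the log-likelihood on a random minibatch, rescaled to the full data size. Naive subsampling of this kind is biased; the corrections proposed in the literature are beyond this sketch.

```python
import math
import random

def subsampled_mh(data, loglik, theta0, proposal_sd, batch_size, n_iter, seed=0):
    """Metropolis-Hastings where each acceptance test evaluates the
    log-likelihood on a random subsample, rescaled to the full data size."""
    rng = random.Random(seed)
    n = len(data)
    theta = theta0
    chain = [theta]
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, proposal_sd)
        batch = rng.sample(data, batch_size)
        scale = n / batch_size  # rescale the minibatch to the full data set
        log_ratio = scale * sum(loglik(x, prop) - loglik(x, theta) for x in batch)
        if math.log(rng.random()) < log_ratio:
            theta = prop
        chain.append(theta)
    return chain
```

Each iteration touches only `batch_size` points instead of all `n`, which is the whole point of the subsampling category.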
293

Evoluce velikosti mozku u letounů (Chiroptera) / Evolution of brain size in bats (Chiroptera)

Králová, Zuzana January 2010 (has links)
According to the prevailing doctrine, brain size has mainly increased throughout the evolution of mammals, and reductions in brain size were rare. On the other hand, the energetic costs of developing and maintaining a big brain are high, so brain size reduction should occur whenever the respective selective pressure is present. Modern phylogenetic methods make it possible to test for the presence of an evolutionary trend and to infer ancestral values of the trait in question from knowledge of the phylogeny and trait values of recent species. However, this approach has so far rarely been applied to the study of brain evolution. In this thesis, I focus on bats (Chiroptera). Bats are a suitable group for demonstrating the importance of brain size reductions. Considering their energetically demanding mode of locomotion, they are likely to have been under selection pressure for brain reduction. Furthermore, a large amount of data on the body and brain mass of recent species is available. Finally, phylogenetic relationships among bats are relatively well resolved. My present study is based on the body masses and brain masses of 334 recent bat species (Baron et al., 1996) and on a phylogeny obtained by adjusting the existing bat supertree (Jones et al., 2002) according to recent molecular studies. Analysing the data for...
294

Efficient and Scalable Subgraph Statistics using Regenerative Markov Chain Monte Carlo

Mayank Kakodkar (12463929) 26 April 2022 (has links)
<p>In recent years there has been growing interest in data mining and graph machine learning in techniques that can obtain frequencies of <em>k</em>-node Connected Induced Subgraphs (<em>k</em>-CIS) contained in large real-world graphs. While recent work has shown that 5-CISs can be counted exactly, no exact polynomial-time algorithms are known that solve this task for <em>k</em> > 5. In the past, sampling-based algorithms have been proposed that work well in moderately sized graphs for <em>k</em> ≤ 8. In this thesis I push this boundary up to <em>k</em> ≤ 16 for graphs containing up to 120M edges, and to <em>k</em> ≤ 25 for smaller graphs containing between 1M and 20M edges. I do so by re-imagining two older but elegant and memory-efficient algorithms -- FANMOD and PSRW -- which have large estimation errors by modern standards: FANMOD produces highly correlated <em>k</em>-CIS samples, and the cost of sampling the PSRW Markov chain becomes prohibitive for <em>k</em> > 8.</p> <p>In this thesis, I introduce:</p> <p>(a) <strong>RTS:</strong> a novel regenerative Markov chain Monte Carlo (MCMC) sampling procedure on the tree generated on-the-fly by the FANMOD algorithm. RTS is able to run on multiple cores and multiple machines (embarrassingly parallel) and to compute confidence intervals for its estimates, all while preserving the memory-efficient nature of FANMOD. RTS is thus able to estimate subgraph statistics for <em>k</em> ≤ 16 in larger graphs containing up to 120M edges, and for <em>k</em> ≤ 25 in smaller graphs containing between 1M and 20M edges.</p> <p>(b) <strong>R-PSRW:</strong> which scales the PSRW algorithm to larger CIS sizes using a rejection sampling procedure to efficiently sample transitions from the PSRW Markov chain. 
R-PSRW matches RTS in its scaling to larger CIS sizes.</p> <p>(c) <strong>Ripple:</strong> which achieves unprecedented scalability by stratifying the R-PSRW Markov chain state space into ordered strata via a new technique that I call <em>sequential stratified regeneration</em>. I show that the Ripple estimator is consistent, highly parallelizable, and scales well. Ripple is able to <em>count</em> CISs of size up to <em>k</em> ≤ 12 in real-world graphs containing up to 120M edges.</p> <p>My empirical results show that the proposed methods offer a considerable improvement over the state of the art. Moreover, my methods are able to run at a scale that had been considered unreachable until now, not only by prior MCMC-based methods but also by other sampling approaches.</p> <p><strong>Optimization of Restricted Boltzmann Machines.</strong> In addition, I propose a regenerative transformation of MCMC samplers of Restricted Boltzmann Machines (RBMs). My approach, Markov Chain Las Vegas (MCLV), gives statistical guarantees in exchange for random running times. MCLV uses a stopping set built from the training data and a maximum Markov-chain step count <em>K</em> (referred to as MCLV-<em>K</em>). I present an MCLV-<em>K</em> gradient estimator (LVS-<em>K</em>) for RBMs and explore the correspondences and differences between LVS-<em>K</em> and Contrastive Divergence (CD-<em>K</em>). LVS-<em>K</em> significantly outperforms CD-<em>K</em> in training RBMs on the MNIST dataset, indicating that MCLV is a promising direction for learning generative models.</p>
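The regeneration idea underlying RTS and Ripple can be sketched in a few lines (illustrative code, not from the thesis): once a chain is split at regeneration times, the tours between them are i.i.d., so a ratio estimator over tours comes with a CLT-based standard error, which is how regenerative MCMC yields confidence intervals.

```python
import math

def regenerative_estimate(tours):
    """Ratio estimator and standard error from regenerative MCMC tours.
    Each tour is the list of sampled function values f(X_t) between two
    successive regeneration times; tours are i.i.d., so a CLT applies."""
    sums = [sum(t) for t in tours]
    lens = [len(t) for t in tours]
    n = len(tours)
    mean_s, mean_l = sum(sums) / n, sum(lens) / n
    est = mean_s / mean_l  # estimate of E_pi[f]
    # Delta-method variance of the ratio estimator over i.i.d. tours.
    resid = [s - est * l for s, l in zip(sums, lens)]
    var = sum(r * r for r in resid) / (n - 1)
    se = math.sqrt(var / n) / mean_l
    return est, se
```

Because tour statistics are independent, they can also be accumulated on separate workers and merged, which is what makes regenerative schemes embarrassingly parallel.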
295

New attempts for error reduction in lattice field theory calculations

Volmer, Julia Louisa 23 August 2018 (has links)
Lattice QCD is a very successful tool to compute QCD observables non-perturbatively from first principles. The evaluation of the QCD path integral that this requires consists of two parts: first, sampling points are generated; second, the path integral is evaluated at these points. The first part is typically handled by Markov-chain Monte Carlo (MCMC) methods, which work very well for most applications but also have issues such as slow error scaling and the numerical sign problem. The second part includes the computation of quark-connected and quark-disconnected diagrams. Improvements of the signal-to-noise ratio have to be found, since the disconnected diagrams, though very noisy to estimate, contribute significantly to physical observables. Methods are proposed to overcome these difficulties in both parts of the evaluation of the lattice QCD path integral and thus to estimate observables more efficiently and more accurately. For the computation of quark-disconnected diagrams we tested exact eigenmode reconstruction with deflation and found that it yields a 5.5-fold reduction in runtime. To address the difficulties of MCMC methods, we tested recursive numerical integration, which simplifies the evaluation of the integral. We applied the method in combination with a Gauss quadrature rule to the one-dimensional quantum-mechanical rotor and found error estimates that converge exponentially to the correct result. A generalization to higher space-time dimensions is left for future work. Additionally, we developed symmetrized quadrature rules to address the sign problem. Applied to one-dimensional QCD with a chemical potential, they overcome the sign problem completely and are very efficient for models with one variable. Improving their efficiency in multi-variable scenarios is left for future work.
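Recursive numerical integration with a Gauss quadrature rule can be sketched as follows for a one-dimensional rotor (the nearest-neighbour action and periodic chain here are illustrative assumptions, not the thesis's exact setup): the quadrature turns the nested path-integral sums into powers of a small transfer matrix.

```python
import numpy as np

def rotor_partition_function(beta, n_sites, n_nodes=32):
    """Partition function of a 1D rotor chain with Boltzmann weight
    exp(beta * cos(phi_i - phi_{i+1})) on a periodic chain, via recursive
    numerical integration: a Gauss-Legendre rule on [0, 2*pi) turns the
    path integral into the trace of a transfer-matrix power."""
    # Gauss-Legendre nodes/weights, mapped from [-1, 1] to [0, 2*pi].
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    phi = np.pi * (x + 1.0)
    wt = np.pi * w
    # Symmetrized transfer matrix: T[a, b] = sqrt(w_a) K(phi_a, phi_b) sqrt(w_b).
    T = np.sqrt(wt)[:, None] * np.exp(
        beta * np.cos(phi[:, None] - phi[None, :])) * np.sqrt(wt)[None, :]
    return np.trace(np.linalg.matrix_power(T, n_sites))
```

The error of the Gauss rule falls exponentially in `n_nodes` for this smooth integrand, which mirrors the exponentially converging error estimates reported above.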
296

Machine Learning for Exploring State Space Structure in Genetic Regulatory Networks

Thomas, Rodney H. 01 January 2018 (has links)
Genetic regulatory networks (GRN) offer a useful model for clinical biology. Specifically, such networks capture interactions among genes, proteins, and other metabolic factors. Unfortunately, it is difficult to understand and predict the behavior of networks of realistic size and complexity. In this dissertation, behavior refers to the trajectory of a state, through a series of state transitions over time, to an attractor in the network. This project assumes asynchronous Boolean networks, implying that a state may transition to more than one attractor. The goal of this project is to efficiently identify a network's set of attractors and to predict the likelihood with which an arbitrary state leads to each of the network's attractors. These probabilities are represented using a fuzzy membership vector. Predicting fuzzy membership vectors using machine learning techniques may address the intractability posed by networks of realistic size and complexity. Modeling and simulation can be used to provide the necessary training sets for machine learning methods to predict fuzzy membership vectors. The experiments comprise several GRNs, each represented by a set of output classes. These classes consist of thresholds τ and ¬τ, where τ = [τ_low, τ_high]; a state s belongs to class τ if the probability of its transitioning to attractor A lies in the range [τ_low, τ_high], and otherwise it belongs to class ¬τ. Finally, each machine learning classifier was trained with the training sets previously collected. The objective is to explore methods for discovering patterns for meaningful classification of states in realistically complex regulatory networks. The research design took a GRN and a machine learning method as input and produced an output class ⟨A_τ⟩ and its negation ¬⟨A_τ⟩.
For each GRN, the attractors were identified, data was collected by sampling each state to create fuzzy membership vectors, and machine learning methods were trained to predict whether a state is in a healthy attractor or not. For T-LGL, SVMs had the highest accuracy (between 93.6% and 96.9%) and precision (between 94.59% and 97.87%), while naive Bayes classifiers had the highest recall (between 94.71% and 97.78%). This study showed that all experiments are extremely significant, with p-value < 0.0001. This research helps clinical biologists submit genetic states and obtain an initial prediction of their outcomes. Future work could use other machine learning classifiers, such as XGBoost or deep learning methods, and develop methods that improve the performance of state-transition sampling so that larger training sets can be collected.
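The sampling step that produces fuzzy membership vectors could be sketched as below (function names are illustrative, and the restriction to fixed-point attractors is a simplifying assumption of this sketch; the dissertation's networks can have richer attractors): repeated asynchronous trajectories from a state estimate the fraction ending in each attractor.

```python
import random

def fuzzy_membership(update_fns, state, n_samples=200, max_steps=100, seed=0):
    """Estimate the fuzzy membership vector of `state` in an asynchronous
    Boolean network: the fraction of random trajectories from `state`
    that end in each fixed-point attractor."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        s = list(state)
        for _ in range(max_steps):
            i = rng.randrange(len(s))   # asynchronous: update one gene at a time
            s[i] = update_fns[i](s)
            if all(f(s) == s[j] for j, f in enumerate(update_fns)):
                break                   # reached a fixed-point attractor
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return {a: c / n_samples for a, c in counts.items()}
```

For a two-gene mutual-activation network, the unstable state (1, 0) splits its membership roughly evenly between the attractors (0, 0) and (1, 1), which is exactly the kind of vector the classifiers above are trained to predict.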
297

Predictive Modeling and Statistical Inference for CTA returns : A Hidden Markov Approach with Sparse Logistic Regression

Fransson, Oskar January 2023 (has links)
This thesis focuses on predicting trends in the returns of Commodity Trading Advisors (CTAs), also known as trend-following hedge funds. A Hidden Markov Model (HMM) is applied to classify trends, and a regularized logistic regression model incorporating further features is used to enhance predictive capability. The model demonstrates success in identifying positive trends in CTA funds, with particular emphasis on precision and risk-adjusted return metrics. In the context of regularized regression models, techniques for statistical inference such as bootstrap resampling and Markov Chain Monte Carlo are applied to estimate the distribution of the parameters. The findings suggest the model is effective in predicting favorable CTA performance and mitigating equity market drawdowns. For future research, it is recommended to explore alternative classification models and to extend the methodology to different markets and datasets.
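The bootstrap-resampling step for estimating parameter distributions can be sketched generically (the `fit` callback and all names are illustrative; the thesis pairs this idea with a regularized logistic regression):

```python
import random

def bootstrap_coefficients(fit, X, y, n_boot=200, seed=0):
    """Bootstrap distribution of model parameters: refit the model on
    resampled data to approximate the sampling distribution of whatever
    `fit(X, y)` returns (e.g. a coefficient vector)."""
    rng = random.Random(seed)
    n = len(X)
    draws = []
    for _ in range(n_boot):
        # Resample (X, y) pairs with replacement, keeping pairs aligned.
        idx = [rng.randrange(n) for _ in range(n)]
        draws.append(fit([X[i] for i in idx], [y[i] for i in idx]))
    return draws
```

Percentiles of `draws` then give confidence intervals for each parameter, complementing the MCMC-based posterior estimates mentioned above.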
298

Thesis_deposit.pdf

Sehwan Kim (15348235) 25 April 2023 (has links)
<p>    Adaptive MCMC is advantageous over traditional MCMC because it automatically adjusts its proposal distributions during the sampling process, providing improved sampling efficiency and faster convergence to the target distribution, especially in complex or high-dimensional problems. However, the adaptive scheme must be designed and validated cautiously to ensure the algorithm's validity and to prevent the introduction of bias. This dissertation focuses on the use of adaptive MCMC for deep learning, specifically addressing the mode collapse issue in Generative Adversarial Networks (GANs) and implementing fiducial inference, with an application to causal inference in individual treatment effect (ITE) problems.</p> <p><br></p> <p>    First, the GAN was recently introduced in the literature as a novel machine learning method for training generative models. However, GANs are very difficult to train due to mode collapse, i.e., a lack of diversity among the generated data. We identify the reason why GANs suffer from this issue and lay out a new theoretical framework for GANs, based on randomized decision rules, under which the mode collapse issue can essentially be overcome. Under the new framework, the discriminator converges to a fixed point while the generator converges to a distribution at the Nash equilibrium.</p> <p><br></p> <p>    Second, fiducial inference was generally considered R.A. Fisher's big blunder, but the goal he initially set, <em>making inference about the uncertainty of model parameters on the basis of observations</em>, has been continually pursued by many statisticians. By leveraging advanced statistical computing techniques such as stochastic approximation Markov chain Monte Carlo, we develop a new statistical inference method, called extended fiducial inference, which achieves the initial goal of fiducial inference. 
</p> <p><br></p> <p>    Lastly, estimating the ITE is important for decision making in various fields, particularly in health research, where precision medicine is being investigated. The conditional average treatment effect (CATE) is often used for this purpose, but uncertainty quantification and an explanation of the variability of the predicted ITE are still needed for fair decision making. We discuss using extended fiducial inference to construct prediction intervals for the ITE, and we introduce a double neural network algorithm for efficient prediction and estimation of nonlinear ITEs.</p>
299

Effects of power-law correlated disorder on a 3D XY model / Effekterna av potenslagskorrelerad oordning på 3D XY-modeller

Broms, Philip January 2022 (has links)
This thesis investigates the effects of power-law correlated disorder on a three-dimensional XY model and the predictive ability of the Weinrib-Halperin disorder-relevance criterion. Ising models are used as a map to realise the disorder couplings. Simulations are conducted using a hybrid Monte Carlo method combining the Metropolis and Wolff algorithms. Two cases are tested, using two-dimensional and three-dimensional Ising-generated disorder corresponding to (d + 1)- and d-dimensional models. In addition, a superficial scaling analysis is performed to highlight the change of universality class. It is shown that the magnetisation, the response functions, and the Binder ratio along with its temperature derivative display stark differences from the pure XY model case. The results agree with the Weinrib-Halperin criterion in predicting a change of universality class but show discrepancies in both the qualitative and numerical results. The main new result is that power-law correlated disorder can introduce two phase transitions at different critical couplings. This disagrees with prior established theory and predicts new physics to be investigated in superconductors and superfluids with correlated disorder.
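The local-update half of such a hybrid scheme can be sketched as a single-spin Metropolis sweep over an XY model with bond disorder (illustrative code; the thesis's hybrid method combines local updates like these with Wolff cluster moves, and the energy convention E = -Σ J_ij cos(θ_i - θ_j) is an assumption of this sketch):

```python
import math
import random

def metropolis_sweep(theta, J, beta, rng):
    """One Metropolis sweep of an XY model on a periodic L x L x L lattice.
    theta[x][y][z] is a spin angle; J[a][x][y][z] is the (possibly disordered)
    coupling on the bond leaving site (x, y, z) in direction +a.
    Returns the number of accepted single-spin updates."""
    L = len(theta)
    accepted = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                old = theta[x][y][z]
                new = rng.uniform(0.0, 2.0 * math.pi)
                dE = 0.0
                # Energy change over the six bonds touching the site.
                for a, (dx, dy, dz) in enumerate([(1, 0, 0), (0, 1, 0), (0, 0, 1)]):
                    fwd = theta[(x + dx) % L][(y + dy) % L][(z + dz) % L]
                    bwd = theta[(x - dx) % L][(y - dy) % L][(z - dz) % L]
                    Jf = J[a][x][y][z]
                    Jb = J[a][(x - dx) % L][(y - dy) % L][(z - dz) % L]
                    dE += Jf * (math.cos(old - fwd) - math.cos(new - fwd))
                    dE += Jb * (math.cos(old - bwd) - math.cos(new - bwd))
                if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                    theta[x][y][z] = new
                    accepted += 1
    return accepted
```

The disorder enters only through the bond array `J`, so the same sweep serves both the pure model (constant `J`) and the Ising-generated correlated-disorder case.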
300

Parallel distributed-memory particle methods for acquisition-rate segmentation and uncertainty quantifications of large fluorescence microscopy images

Afshar, Yaser 08 November 2016 (has links)
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at a high data rate. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. Another issue is the information loss during image acquisition due to limitations of the optical imaging systems. Analysis of the acquired images may therefore find multiple solutions (or no solution) due to imaging noise, blurring, and other uncertainties introduced during image acquisition. In this thesis, we address the computational processing-time and memory issues by developing a distributed parallel algorithm for the segmentation of large fluorescence-microscopy images. The method is based on the versatile Discrete Region Competition algorithm (Cardinale et al., 2012), which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables the segmentation of large images (we test images of up to 10^10 pixels) but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data inspection and interactive experiments. Second, we estimate the segmentation uncertainty on large images that do not fit in the main memory of a single computer. We therefore develop a distributed parallel algorithm for efficient Markov-chain Monte Carlo Discrete Region Sampling (Cardinale, 2013). The parallel algorithm provides a measure of segmentation uncertainty in a statistically unbiased way. 
It approximates the posterior probability densities over the high-dimensional space of segmentations around the previously found segmentation.
