Inference methods for locally ordered and common breaks in multiple regressions

Li, Ye 04 November 2024 (has links)
This dissertation consists of two chapters on inference about locally ordered and common breaks in multiple regressions and one chapter pertaining to modeling exchange rate volatility with random level shifts. The first chapter considers inference about locally ordered breaks in a system of equations. These apply when the break dates in different equations are not separated by a positive fraction of the sample size. We extend the results of Qu and Perron (2007) in several directions, allowing: a) the covariates to be any mix of trends and stationary or integrated regressors; b) breaks in the variance-covariance matrix of the errors; c) an arbitrary number of breaks occurring in a subset of equations. We show that the limit distributions derived provide good approximations to the finite sample distributions, and that forming confidence intervals jointly yields more precision. The second chapter considers testing for common breaks and estimating locally ordered breaks in a multivariate system with joined segmented trends. To test for common breaks, we consider a likelihood ratio type test. The null hypothesis is that some subsets of the coefficients for the slope shifts share common breaks, while the alternative hypothesis is that the break dates are distinct and possibly locally ordered. To estimate locally ordered breaks, we use a quasi-maximum likelihood estimation method. We show consistency and derive the rates of convergence and asymptotic distributions of the test statistics and the estimates of the break dates. Simulation results show that the asymptotic results provide useful approximations in finite samples. In the third chapter, we estimate a random level shifts model for the log-absolute returns of the Dollar-Mark and Dollar-Yen exchange rates in order to assess whether random level shifts can explain the long memory property. The results show that there are few level shifts, but once they are taken into account, the long memory property disappears.
We also provide out-of-sample forecasting comparisons, which show that the random level shifts model outperforms standard fractionally integrated models.
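The break-date estimation studied in the first two chapters can be caricatured, for a single equation, as picking the date that minimizes the sum of squared residuals under a mean shift. The sketch below uses invented sizes, dates and noise levels, and a plain one-break mean-shift model rather than the dissertation's multi-equation setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: mean 0 before the (assumed) break at date 60, mean 1 after.
T, true_break = 100, 60
y = np.concatenate([rng.normal(0.0, 0.2, true_break),
                    rng.normal(1.0, 0.2, T - true_break)])

def ssr(k):
    # Total squared residuals when the mean is allowed to shift at date k.
    return float(((y[:k] - y[:k].mean())**2).sum()
                 + ((y[k:] - y[k:].mean())**2).sum())

# Least-squares break-date estimate, trimming the sample ends.
k_hat = min(range(10, T - 10), key=ssr)
print(k_hat)
```

With a shift this large relative to the noise, the SSR minimizer lands at or very near the true break date; confidence intervals for the break date are then built from the limit distribution of this estimator.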

Generalization of prior information for rapid Bayesian time estimation

Roach, N.W., McGraw, Paul V., Whitaker, David J., Heron, James December 2016 (has links)
To enable effective interaction with the environment, the brain combines noisy sensory information with expectations based on prior experience. There is ample evidence showing that humans can learn statistical regularities in sensory input and exploit this knowledge to improve perceptual decisions and actions. However, fundamental questions remain regarding how priors are learned and how they generalize to different sensory and behavioral contexts. In principle, maintaining a large set of highly specific priors may be inefficient and restrict the speed at which expectations can be formed and updated in response to changes in the environment. However, priors formed by generalizing across varying contexts may not be accurate. Here, we exploit rapidly induced contextual biases in duration reproduction to reveal how these competing demands are resolved during the early stages of prior acquisition. We show that observers initially form a single prior by generalizing across duration distributions coupled with distinct sensory signals. In contrast, they form multiple priors if distributions are coupled with distinct motor outputs. Together, our findings suggest that rapid prior acquisition is facilitated by generalization across experiences of different sensory inputs but organized according to how that sensory information is acted on.
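The combination of a noisy measurement with a learned prior can be sketched as a toy Gaussian model: the reproduced duration is the precision-weighted average of the sensory measurement and the prior mean. All numerical values below are illustrative assumptions, not the study's estimates:

```python
# Illustrative Gaussian cue-combination sketch (assumed values, in seconds).
prior_mu, prior_sd = 0.8, 0.2      # prior learned from the duration distribution
sensory_sd = 0.15                  # sensory measurement noise

def posterior_mean(measurement):
    # Precision-weighted average of measurement and prior mean.
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)   # weight on the measurement
    return w * measurement + (1 - w) * prior_mu

# Central-tendency bias: estimates are pulled toward the prior mean.
for true_d in (0.5, 0.8, 1.1):
    print(true_d, round(posterior_mean(true_d), 3))   # 0.608, 0.8, 0.992
```

Short durations are overestimated and long ones underestimated, which is the contextual bias the experiments exploit; whether one or several priors feed this computation is exactly what the study probes.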

Stochastic Neural Network and Causal Inference

Yaxin Fang (17069563) 10 January 2025 (has links)
Estimating causal effects from observational data has been challenging due to high-dimensional complex datasets and confounding biases. In this thesis, we tackle these issues by leveraging deep learning techniques, including sparse deep learning and stochastic neural networks, that have been developed in the recent literature.

With the advancement of data science, the collection of increasingly complex datasets has become commonplace. In such datasets, the data dimension can be extremely high, and the underlying data generation process can be unknown and highly nonlinear. As a result, the task of making causal inference with high-dimensional complex data has become a fundamental problem in many disciplines, such as medicine, econometrics, and social science. However, existing methods for causal inference are frequently developed under the assumption that the data dimension is low or that the underlying data generation process is linear or approximately linear. To address these challenges, chapter 3 proposes a novel causal inference approach for dealing with high-dimensional complex data. By using sparse deep learning techniques, the proposed approach addresses both the high dimensionality and the unknown data generation process in a coherent way. Furthermore, the proposed approach can also be used when missing values are present in the datasets. Extensive numerical studies indicate that the proposed approach outperforms existing ones.

One of the major challenges in causal inference with observational data is handling missing confounders. Latent variable modeling is a valid framework to address this challenge, but current approaches within the framework often suffer from consistency issues in causal effect estimation and are hard to extend to more complex application scenarios. To bridge this gap, in chapter 4, we propose a new latent variable modeling approach. It utilizes a stochastic neural network, where the latent variables are imputed as the outputs of hidden neurons using an adaptive stochastic gradient HMC algorithm. Causal inference is then conducted based on the imputed latent variables. Under mild conditions, the new approach provides a theoretical guarantee for the consistency of causal effect estimation. The new approach also serves as a versatile tool for modeling various causal relationships, leveraging the flexibility of the stochastic neural network in natural process modeling. We show that the new approach matches state-of-the-art performance on benchmarks for causal effect estimation and demonstrate its adaptability to proxy variable and multiple-cause scenarios.
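The stochastic gradient HMC ingredient can be caricatured on a one-dimensional standard normal target. The step size and friction below are assumed values, and the thesis's adaptive, network-embedded version is far more involved than this bare update rule:

```python
import numpy as np

rng = np.random.default_rng(5)

# SGHMC-style update for U(theta) = theta^2 / 2, i.e. theta ~ N(0, 1).
# Step size and friction are illustrative assumptions.
lr, alpha, steps = 0.01, 0.1, 20000
theta, v = 0.0, 0.0
draws = []
for _ in range(steps):
    grad_u = theta                     # dU/dtheta, here available exactly
    # Momentum update: gradient step, friction, and injected Gaussian noise.
    v = v - lr * grad_u - alpha * v + rng.normal(0.0, np.sqrt(2 * alpha * lr))
    theta += v                         # position update uses the new momentum
    draws.append(theta)

kept = np.array(draws[steps // 2:])    # discard the first half as burn-in
print(round(float(kept.mean()), 2), round(float(kept.var()), 2))
```

The kept draws should have mean near 0 and variance near 1; in the thesis the same kind of dynamics runs over hidden-neuron outputs, so each draw imputes a full set of latent variables.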

Inferences in context : contextualism, inferentialism and the concept of universal quantification

Tabet, Chiara January 2008 (has links)
This Thesis addresses issues that lie at the intersection of two broad philosophical projects: inferentialism and contextualism. It discusses and defends an account of the logical concepts based on the following two ideas: 1) that the logical concepts are constituted by our canonical inferential usages of them; 2) that to grasp, or possess, a logical concept is to undertake an inferential commitment to the canonical consequences of the concept when deploying it in a linguistic practice. The account focuses on the concept of universal quantification, with respect to which it also defends the view that linguistic context contributes to an interpretation of instances of the concept by determining the scope of our commitments to the canonical consequences of the quantifier. The model that I offer for the concept of universal quantification relies on, and develops, three main ideas: 1) our understanding of the concept’s inferential role is one according to which the concept expresses full inferential generality; 2) what I refer to as the ‘domain model’ (the view that the universal quantifier always ranges over a domain of quantification, and that the specification of such a domain contributes to determine the proposition expressed by sentences in which the quantifier figures) is subject to a series of crucial difficulties, and should be abandoned; 3) we should regard the undertaking of an inferential commitment to the canonical consequences of the universal quantifier as a stable and objective presupposition of a universally quantified sentence expressing a determinate proposition in context. In the last chapter of the Thesis I sketch a proposal about how contextual quantifier restrictions should be understood, and articulate the main challenges that a commitment-theoretic story about the context-sensitivity of the universal quantifier faces.

Exponenciální třídy a jejich význam pro statistickou inferenci / Exponential families in statistical inference

Moneer Borham Abdel-Maksoud, Sally January 2011 (has links)
This diploma thesis provides an evaluation of exponential families of distributions, which occupy a special position in mathematical statistics. The thesis introduces the basic concepts and facts associated with distributions of exponential type, focusing in particular on the advantages of exponential families in classical parametric statistics, that is, in the theory of estimation and hypothesis testing. Emphasis is placed on one-parameter and multi-parameter systems.
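A worked instance of the exponential-family form discussed above, using the Poisson family (a standard textbook example, not taken from the thesis):

```python
import math

# Poisson pmf in exponential-family form f(x|theta) = h(x) exp(theta*T(x) - A(theta)),
# with natural parameter theta = log(lambda), sufficient statistic T(x) = x,
# log-partition A(theta) = exp(theta), and base measure h(x) = 1/x!.
def poisson_pmf_expfam(x, theta):
    return math.exp(theta * x - math.exp(theta)) / math.factorial(x)

# In an exponential family the MLE solves A'(theta) = mean of T(x);
# for the Poisson this gives lambda_hat = sample mean.
data = [2, 3, 1, 4, 0, 2]                                    # illustrative counts
lam_hat = sum(data) / len(data)
theta_hat = math.log(lam_hat)
print(lam_hat, round(poisson_pmf_expfam(2, theta_hat), 4))   # 2.0 0.2707
```

The fact that estimation reduces to matching the mean of the sufficient statistic is one of the "appropriate properties" that make exponential families convenient for inference.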

Exponenciální třídy a jejich význam pro statistickou inferenci / Exponential families in statistical inference

Moneer Borham Abdel-Maksoud, Sally January 2011 (has links)
Title: Exponential families in statistical inference Author: Sally Abdel-Maksoud Department: Department of Probability and Mathematical Statistics Supervisor: doc. RNDr. Daniel Hlubinka, Ph.D. Supervisor's e-mail address: Daniel.Hlubinka@mff.cuni.cz Abstract: This diploma thesis provides an evaluation of exponential families of distributions, which occupy a special position in mathematical statistics, including properties well suited to the estimation of population parameters, hypothesis testing and other inference problems. The thesis introduces the basic concepts and facts associated with distributions of exponential type, focusing in particular on the advantages of exponential families in classical parametric statistics, that is, in the theory of estimation and hypothesis testing. Emphasis is placed on one-parameter and multi-parameter systems. It also presents important concepts concerning the curvature of a statistical problem, including curvature in exponential families. We define a quantity that measures how nearly "exponential" a family is; this quantity is called the statistical curvature of the family. We show that families with small curvature enjoy the good properties of exponential families. Moreover, the properties of the curvature, hypothesis testing and some...

Testování učení restartovacích automatů genetickými algoritmy / Testing Learning of Restarting Automata using Genetic Algorithm

Kovářová, Lenka January 2012 (has links)
Title: Testing the Learning of Restarting Automata using Genetic Algorithm Author: Bc. Lenka Kovářová Department: Department of Software and Computer Science Education Supervisor: RNDr. František Mráz, CSc. Abstract: A restarting automaton is a theoretical model of a device recognizing a formal language. Constructing the various versions of restarting automata by hand can be hard work, and many different methods of learning such automata have been developed to date. Among them are methods for learning a target restarting automaton from a finite set of positive and negative samples using genetic algorithms. In this work, we propose a method for improving the learning of restarting automata by means of evolutionary algorithms. The improvement consists in inserting new rules of a special form, enabling the learning algorithm to adapt to the particular language. Furthermore, we propose a system for testing learning algorithms for restarting automata, supporting especially learning by evolutionary algorithms. Part of the work is a program that learns restarting automata using the proposed new method, then tests the discovered automata and evaluates them, mainly in graphical form. Keywords: machine learning, grammatical inference, restarting automata, genetic algorithms
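The genetic-algorithm learning loop can be sketched with the automaton abstracted to a tiny parity acceptor: a genome is a 4-bit lookup table, and fitness counts how many labeled sample words the encoded acceptor classifies correctly. The encoding, samples and GA settings are invented for illustration; real restarting-automaton genomes are far richer:

```python
import random

random.seed(1)

# Target language (illustrative): words over {a, b} where the parities
# of the a-count and b-count agree.
POS = ["", "ab", "ba", "aabb", "bbaa"]
NEG = ["a", "b", "aab", "abb", "aba"]

def accepts(genome, word):
    # Genome is a truth table indexed by (#a mod 2, #b mod 2).
    ca, cb = word.count("a") % 2, word.count("b") % 2
    return genome[2 * ca + cb] == 1

def fitness(genome):
    # Correctly accepted positives plus correctly rejected negatives.
    return (sum(accepts(genome, w) for w in POS)
            + sum(not accepts(genome, w) for w in NEG))

pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(8)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(POS) + len(NEG):
        break                                # perfect acceptor found
    parents = pop[:4]                        # truncation selection
    children = []
    for _ in range(4):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 4)         # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:            # point mutation
            i = random.randrange(4)
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), best)
```

The thesis's improvement concerns the rule encoding inside such a loop; this sketch only shows the select-crossover-mutate skeleton that the testing system exercises.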

A forecasting of indices and corresponding investment decision making application

Patel, Pretesh Bhoola 01 March 2007 (has links)
Student Number : 9702018F - MSc(Eng) Dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment / Due to the volatile nature of the world economies, investing is crucial in ensuring an individual is prepared for future financial necessities. This research proposes an application that employs computational intelligence methods to assist investors in making financial decisions. The system consists of two components. The Forecasting Component (FC) is employed to predict closing index price performance. Based on these predictions, the Stock Quantity Selection Component (SQSC) recommends that the investor purchase stocks, hold the current investment position or sell stocks in possession. The development of the FC module involved the creation of Multi-Layer Perceptron (MLP) as well as Radial Basis Function (RBF) neural network classifiers. The categories that these networks classify are based on a profitable trading strategy that outperforms the long-term "Buy and hold" trading strategy. The Dow Jones Industrial Average, Johannesburg Stock Exchange (JSE) All Share, Nasdaq 100 and Nikkei 225 Stock Average indices are considered. It has been determined that the MLP neural network architecture is particularly suited to the prediction of closing index price performance. Accuracies of 72%, 68%, 69% and 64% were obtained for the prediction of the closing price performance of the Dow Jones Industrial Average, JSE All Share, Nasdaq 100 and Nikkei 225 Stock Average indices, respectively. Three designs of the Stock Quantity Selection Component were implemented and compared in terms of their complexity as well as scalability. Complexity is defined as the number of classifiers employed by the design. Scalability is defined as the ability of the design to accommodate the classification of additional investment recommendations. Designs that utilized 1, 4 and 16 classifiers, respectively, were developed.
These designs were implemented using MLP neural networks, RBF neural networks, Fuzzy Inference Systems as well as Adaptive Neuro-Fuzzy Inference Systems. The design that employed 4 classifiers achieved low complexity and high scalability. As a result, this design is most appropriate for the application of concern. It has also been determined that the neural network architecture as well as the Fuzzy Inference System implementation of this design performed equally well.
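A minimal stand-in for the FC idea is a small MLP classifying next-period direction from lagged returns. Everything below is invented for illustration (synthetic data, architecture, learning rate); it is not the dissertation's networks, features or indices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic task: the next move is "up" when the two (standardized) lagged
# returns sum to a gain. Linearly separable, so a tiny MLP should learn it.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)             # 1 = up, 0 = down

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

for _ in range(2000):                             # full-batch gradient descent
    H = np.tanh(X @ W1 + b1)                      # hidden layer
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))      # predicted P(up)
    g = (p - y[:, None]) / len(X)                 # d(cross-entropy)/d(logit)
    gH = (g @ W2.T) * (1.0 - H**2)                # backprop through tanh
    W2 -= 0.5 * (H.T @ g);  b2 -= 0.5 * g.sum(0)
    W1 -= 0.5 * (X.T @ gH); b1 -= 0.5 * gH.sum(0)

acc = float(((p[:, 0] > 0.5) == (y > 0.5)).mean())
print(round(acc, 2))
```

In the application, a classifier of this kind feeds the SQSC, which maps the predicted direction categories to a buy, hold or sell recommendation.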

Novel methods for biological network inference : an application to circadian Ca2+ signaling network

Jin, Junyang January 2018 (has links)
Biological processes involve complex biochemical interactions among a large number of species such as cells, RNA, proteins and metabolites. Learning these interactions is essential to intervening artificially in biological processes in order to, for example, improve crop yield, develop new therapies, and predict new cell or organism behaviors in response to genetic or environmental perturbations. For a biological process, two pieces of information are of most interest. For a particular species, the first step is to learn which other species regulate it. This reveals topology and causality. The second step involves learning the precise mechanisms by which this regulation occurs. This step reveals the dynamics of the system. Applying this process to all species leads to the complete dynamical network. Systems biology is making considerable efforts to learn biological networks at low experimental cost. The main goal of this thesis is to develop advanced methods to build models of biological networks, taking the circadian system of Arabidopsis thaliana as a case study. A variety of network inference approaches have been proposed in the literature to study dynamic biological networks. However, many successful methods either require prior knowledge of the system or focus mainly on topology. This thesis presents novel methods that identify both network topology and dynamics and do not depend on prior knowledge. Hence, the proposed methods are applicable to general biological networks. These methods are initially developed for linear systems and, at the cost of higher computational complexity, can also be applied to nonlinear systems. Overall, we propose four methods of increasing computational complexity: one-to-one, combined group and element sparse Bayesian learning (GESBL), the kernel method and the reversible jump Markov chain Monte Carlo method (RJMCMC).
All methods are tested with challenging dynamical network simulations (including feedback, random networks, and different levels of noise and numbers of samples) and with realistic models of the circadian system of Arabidopsis thaliana. These simulations show that, while the one-to-one method scales to the whole genome, the kernel method and the RJMCMC method are superior for smaller networks. They are robust to tuning variables and able to provide stable performance. The simulations also indicate the advantage of GESBL and RJMCMC over state-of-the-art methods. We envision that the estimated models can benefit a wide range of research. For example, they can locate biological compounds responsible for human disease through mathematical analysis and help predict the effectiveness of new treatments.
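The topology-plus-dynamics idea can be caricatured for the linear case: simulate a sparse linear network, estimate the transition matrix by least squares, and read the topology off the large coefficients. This is a deliberately simplified stand-in for the thesis's estimators, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse ground-truth network: x[t+1] = A x[t] + noise.
A = np.array([[0.9,  0.0, 0.0],
              [0.5,  0.8, 0.0],
              [0.0, -0.4, 0.7]])
T, n = 500, 3
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = A @ X[t] + 0.05 * rng.normal(size=n)

# Least squares solves X[1:] ~ X[:-1] @ W with W = A^T (node-by-node OLS).
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# Topology: keep links whose estimated weight is clearly nonzero.
topology = (np.abs(A_hat) > 0.2).astype(int)
print(topology)
```

Recovering both `A_hat` (dynamics) and `topology` (who regulates whom) from time series is the two-step task described above; the thesis's sparse Bayesian and sampling methods replace the crude threshold with principled model selection.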

Fast Inference for Interactive Models of Text

Lund, Jeffrey A 01 September 2015 (has links)
Probabilistic models of text are a useful tool for enabling the analysis of large collections of digital text. For example, Latent Dirichlet Allocation can quickly produce topical summaries of large collections of text documents. Many important use cases of such models include human interaction during the inference process. For example, the Interactive Topic Model extends Latent Dirichlet Allocation to incorporate human expertise during inference in order to produce topics which are better suited to individual user needs. However, interactive use cases of probabilistic models of text introduce new constraints on inference: the inference procedure must not only be accurate, but also fast enough to facilitate human interaction. If the inference is too slow, the human interaction is harmed, and the interactive aspect of the probabilistic model becomes less useful. Unfortunately, the most popular inference algorithms in use today either require strong approximations which can degrade the quality of some models, or require time-consuming sampling. We explore the use of Iterated Conditional Modes, an algorithm which obtains locally optimal maximum a posteriori estimates, as an alternative to popular inference algorithms such as Gibbs sampling or mean field variational inference. The Iterated Conditional Modes algorithm is not only fast enough to facilitate human interaction, but can produce better maximum a posteriori estimates than sampling. We demonstrate the superior performance of Iterated Conditional Modes on a wide variety of models. First we use a DP Mixture of Multinomials model applied to the problem of web search result clustering, and show that not only can we outperform previous methods in clustering quality, but we can achieve interactive runtimes when performing inference with Iterated Conditional Modes. We then apply Iterated Conditional Modes to the Interactive Topic Model.
Not only is Iterated Conditional Modes much faster than the previously published Gibbs sampler, but we are better able to incorporate human feedback during inference, as measured by accuracy on a classification task using the resultant topic model. Finally, we utilize Iterated Conditional Modes with MomResp, a model used to aggregate multiple noisy crowdsourced annotations. Compared with Gibbs sampling, Iterated Conditional Modes is better able to recover ground-truth labels from simulated noisy annotations, and runs orders of magnitude faster.
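The Iterated Conditional Modes loop can be shown on a toy two-cluster mixture of multinomials: alternate hard conditional-mode assignments with smoothed parameter updates until nothing changes, yielding a locally optimal MAP estimate with no sampling at all. Counts and initialization are invented for illustration; the dissertation's models are richer:

```python
import numpy as np

# Four toy documents as word-count vectors over a 2-word vocabulary.
docs = np.array([[8, 1], [7, 2], [1, 8], [2, 7]])
K = 2
z = np.array([0, 1, 1, 0])        # deliberately scrambled initial assignments

for _ in range(20):
    # Update cluster-word proportions (add-one smoothing keeps them valid
    # even if a cluster is momentarily empty).
    phi = np.array([docs[z == k].sum(0) + 1.0 for k in range(K)])
    phi /= phi.sum(1, keepdims=True)
    # ICM step: set each z_d to the mode of its conditional given phi.
    new_z = np.argmax(docs @ np.log(phi).T, axis=1)
    if np.array_equal(new_z, z):
        break                     # a fixed point: locally optimal MAP estimate
    z = new_z

print(z)                          # -> [0 0 1 1]
```

Each update is a cheap argmax rather than a draw from a conditional distribution, which is why ICM reaches a (local) mode in a handful of passes and supports interactive runtimes.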
