101

Applications of machine learning to agricultural land values: prediction and causal inference

Er, Emrah January 1900 (has links)
Doctor of Philosophy / Department of Agricultural Economics / Nathan P. Hendricks / This dissertation focuses on the prediction of agricultural land values and the effects of water rights on land values using machine learning algorithms and hedonic pricing methods. I predict agricultural land values with different machine learning algorithms, including ridge regression, least absolute shrinkage and selection operator (LASSO), random forests, and extreme gradient boosting. To analyze the causal effects of water right seniority on agricultural land values, I use the double-selection LASSO technique. The second chapter presents the data used in the dissertation. A unique set of parcel sales from the Property Valuation Division of Kansas constitutes the backbone of the data used in the estimation. Along with the parcel sales data, I collected detailed basis, water, tax, soil, weather, and urban influence data. This chapter provides a detailed explanation of the various data sources and variable construction processes. The third chapter presents different machine learning models for predicting irrigated agricultural land prices in Kansas. Researchers and policymakers use different models and data sets for price prediction. Recently developed machine learning methods can improve the predictive ability of the estimated models. In this chapter I estimate several machine learning models for predicting agricultural land values in Kansas. Results indicate that the predictive power of the machine learning methods is stronger than that of standard econometric methods: the median absolute error of the extreme gradient boosting estimation is 0.1312, whereas it is 0.6528 for the simple OLS model. The fourth chapter examines whether water right seniority is capitalized into irrigated agricultural land values in Kansas. Using a unique data set of irrigated agricultural land sales, I analyze the causal effect of water right seniority on agricultural land values. A possible concern when estimating hedonic models is omitted variable bias, so I use double-selection LASSO regression and its variable selection properties to overcome it. I also estimate generalized additive models to analyze nonlinearities that may exist. Results show that water rights have a positive impact on irrigated land prices in Kansas: an additional year of water right seniority causes irrigated land value to increase by nearly $17 per acre. Further analysis also suggests a nonlinear relationship between seniority and agricultural land prices.
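
To make the chapter-3 comparison concrete, here is a minimal sketch (not the dissertation's code) of fitting OLS and a boosted-tree model on parcel-level features and comparing median absolute error. The data and feature names are synthetic stand-ins, and scikit-learn's GradientBoostingRegressor stands in for extreme gradient boosting.

```python
# Sketch only: synthetic parcel data, illustrative feature names.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import median_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical parcel features: soil quality, rainfall, tax rate, urban distance.
X = rng.normal(size=(n, 4))
# Nonlinear price surface, so boosting has room to beat a linear fit.
y = (3000 + 800 * X[:, 0] + 400 * np.maximum(X[:, 1], 0) ** 2
     - 300 * X[:, 2] * X[:, 3] + rng.normal(scale=200, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

for name, model in [("OLS", ols), ("boosting", gbm)]:
    print(name, median_absolute_error(y_te, model.predict(X_te)))
```
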
102

Labor market policies in an equilibrium matching model with heterogeneous agents and on-the-job search

Stavrunova, Olena 01 January 2007 (has links)
This dissertation quantitatively evaluates selected labor market policies in a search-matching model with skill heterogeneity, where high-skilled workers can take temporary jobs with skill requirements below their skill levels. The joint posterior distribution of the structural parameters of the theoretical model is obtained conditional on data on the labor market histories of NLSY79 respondents. Information on individuals' AFQT scores and the skill requirements of occupations is used to identify the skill levels of workers and the complexity levels of jobs in the job-worker matches realized in the data. The model and the data are used to simulate the posterior distributions of the impacts of labor market policies on the endogenous variables of interest to a policy-maker, including unemployment rates, durations, and wages of low- and high-skilled workers. In particular, the effects of the following policies are analyzed: an increase in the proportion of high-skilled workers, subsidies for employing or hiring high- and low-skilled workers, and an increase in unemployment income.
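
The simulation step described above follows a generic recipe: push each posterior draw of the structural parameters through the model to obtain a posterior distribution over a policy impact. The sketch below illustrates only that recipe; `simulate_unemployment` is a hypothetical placeholder, not the dissertation's matching model.

```python
# Schematic of posterior policy-impact simulation; all names are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def simulate_unemployment(params, high_skill_share):
    # Placeholder "model": unemployment falls with matching efficiency and
    # rises with the separation rate; purely illustrative functional form.
    match_eff, sep_rate = params
    return sep_rate / (sep_rate + match_eff * (1 + high_skill_share))

# Stand-in posterior draws of (matching efficiency, separation rate).
draws = np.column_stack([rng.gamma(5.0, 0.1, 500), rng.beta(2.0, 30.0, 500)])

# Policy: raise the share of high-skilled workers from 0.3 to 0.4.
impact = np.array([simulate_unemployment(d, 0.4) - simulate_unemployment(d, 0.3)
                   for d in draws])
print("posterior mean impact:", impact.mean())
print("95% credible interval:", np.percentile(impact, [2.5, 97.5]))
```
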
103

A factoring approach for probabilistic inference in belief networks

Li, Zhaoyu 08 June 1993 (has links)
Reasoning about any realistic domain always involves a degree of uncertainty. Probabilistic inference in belief networks is one effective way of reasoning under uncertainty. Efficiency is critical in applying this technique, and many researchers have worked on the topic. This thesis reports our research in this area. It contributes a new framework for probabilistic inference in belief networks. Previously developed algorithms depend on the topological structure of a belief network to perform inference efficiently; they are constrained by the way they use topological information and may not work efficiently for some inference tasks. This thesis explores the essence of probabilistic inference, analyzes previously developed algorithms, and presents a factoring approach for probabilistic inference. It proposes that efficient probabilistic inference in belief networks can be treated as an optimal factoring problem. The optimal factoring framework provides an alternative perspective on probabilistic inference and a quantitative measure of efficiency for an algorithm. Using this framework, this thesis presents an optimal factoring algorithm for poly-tree networks and for arbitrary belief networks (albeit with exponential worst-case time complexity for non-poly-tree networks). Since the optimal factoring problem is in general hard, a heuristic algorithm, called set-factoring, is developed for multiply-connected belief networks. Set-factoring is shown to outperform previously developed algorithms. We also apply the optimal factoring framework to the problem of finding the instantiation of all nodes of a belief network that has the largest probability, and present an efficient algorithm for that task. The extensive computation required renders any currently used exact probabilistic inference algorithm intractable for large belief networks. One way to extend this boundary is to consider parallel hardware. This thesis also explores parallelizing probabilistic inference in belief networks; the feasibility of doing so is demonstrated analytically and experimentally. Exponential-time numerical computation can be reduced by a polynomial-time factoring heuristic. The thesis offers insight into the effect of the structure of a belief network on speedup and efficiency. / Graduation date: 1994
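
A minimal illustration of the factoring idea (not the thesis's algorithm): the cost of computing a marginal depends on the order in which factors are combined and variables summed out, as the sketch below shows for a three-node chain network.

```python
# Factoring view of inference: two orders of combining the same factors.
import numpy as np

rng = np.random.default_rng(0)
k = 10  # states per variable

# Chain network A -> B -> C: P(a), P(b|a), P(c|b) as conditional tables.
pa = rng.dirichlet(np.ones(k))            # P(A)
pb_a = rng.dirichlet(np.ones(k), size=k)  # P(B|A), rows indexed by A
pc_b = rng.dirichlet(np.ones(k), size=k)  # P(C|B), rows indexed by B

# Good factoring: sum out A first, then B -- O(k^2) work per step.
pb = pa @ pb_a       # P(B) = sum_a P(a) P(b|a)
pc = pb @ pc_b       # P(C) = sum_b P(b) P(c|b)

# Poor factoring: build the full joint first -- O(k^3) work.
joint = pa[:, None, None] * pb_a[:, :, None] * pc_b[None, :, :]
pc_naive = joint.sum(axis=(0, 1))

assert np.allclose(pc, pc_naive)  # same answer, very different cost
```
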
104

Efficient Computation of Probabilities of Events Described by Order Statistics and Applications to Queue Inference

Jones, Lee K., Larson, Richard C., 1943- 03 1900 (has links)
This paper derives recursive algorithms for efficiently computing event probabilities related to order statistics and applies the results in a queue inferencing setting. Consider a set of N i.i.d. random variables in [0, 1]. When the experimental values of the random variables are arranged in ascending order from smallest to largest, one has the order statistics of the set of random variables. Both a forward and a backward recursive O(N^3) algorithm are developed for computing the probability that the order statistics vector lies in a given N-rectangle. The new algorithms have applicability in inferring the statistical behavior of Poisson arrival queues, given only the start and stop times of service of all N customers served in a period of continuous congestion. The queue inference results extend the theory of the "Queue Inference Engine" (QIE), originally developed by Larson in 1990 [8]. The methodology is extended to a third O(N^3) algorithm, employing both forward and backward recursion, that computes the conditional probability that a random customer of the N served waited in queue less than r minutes, given the observed customer departure times and assuming first come, first served service. To our knowledge, this result is the first O(N^3) exact algorithm for computing points on the in-queue waiting time distribution function, conditioned on the start and stop time data. The paper concludes with an extension to the computation of certain correlations of in-queue waiting times. Illustrative computational results are included throughout.
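
The paper's own recursions are not reproduced here, but the quantity they compute can be illustrated with Steck's classical determinant formula for the probability that the order statistics of N i.i.d. Uniform(0,1) variables fall in a given N-rectangle, cross-checked below by simulation.

```python
# Steck's determinant for P(a_i <= U_(i) <= b_i for all i), where a and b
# are nondecreasing with a_i <= b_i. Illustrative stand-in for the paper's
# recursive algorithms, which compute the same rectangle probability.
import math
import numpy as np

def steck_probability(a, b):
    n = len(a)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            p = j - i + 1  # exponent in Steck's formula
            if p < 0:
                continue   # entry is zero below the sub-diagonal
            d = max(b[i] - a[j], 0.0)
            m[i, j] = 1.0 if p == 0 and d > 0 else d ** p / math.factorial(p)
    return math.factorial(n) * np.linalg.det(m)

a = [0.0, 0.1, 0.3]
b = [0.4, 0.6, 0.9]
exact = steck_probability(a, b)

# Monte Carlo check on the same rectangle event.
rng = np.random.default_rng(0)
u = np.sort(rng.random((200_000, 3)), axis=1)
mc = np.mean(np.all((u >= a) & (u <= b), axis=1))
print(exact, mc)  # the two values should agree to a few decimals
```
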
105

The Performance of a Mechanical Design 'Compiler'

Ward, Allen C., Seering, Warren 01 January 1989 (has links)
A mechanical design "compiler" has been developed which, given an appropriate schematic, specifications, and utility function for a mechanical design, returns catalog numbers for an optimal implementation. The compiler has been successfully tested on a variety of mechanical and hydraulic power transmission designs and a few temperature sensing designs. Times required have been at worst proportional to the logarithm of the number of possible combinations of catalog numbers.
106

TYPICAL: A Knowledge Representation System for Automated Discovery and Inference

Haase, Kenneth W., Jr. 01 August 1987 (has links)
TYPICAL is a package for describing and making automatic inferences about a broad class of SCHEME predicate functions. These functions, called types following popular usage, delineate classes of primitive SCHEME objects, composite data structures, and abstract descriptions. TYPICAL types are generated by an extensible combinator language from either existing types or primitive terminals. These generated types are located in a lattice of predicate subsumption which captures necessary entailment between types; if satisfaction of one type necessarily entails satisfaction of another, the first type is below the second in the lattice. The inferences made by TYPICAL compute the position of a new definition within the lattice and establish it there. This information is then accessible both to later inferences and to other programs (reasoning systems, code analyzers, etc.) which may need it for their own purposes. TYPICAL was developed as a representation language for the discovery program Cyrano; particular examples are given of TYPICAL's application in the Cyrano program.
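
As a toy illustration of the subsumption lattice (in Python rather than SCHEME, and covering only the special case where a type is a conjunction of atomic constraints), the sketch below places a type by set containment.

```python
# Toy model: a type is a conjunction of atomic constraints, so type S is
# below type T in the lattice exactly when S's constraint set contains T's.
# This is only the set-containment special case, not TYPICAL itself.
types = {
    "number": frozenset({"number?"}),
    "positive-number": frozenset({"number?", "positive?"}),
    "integer": frozenset({"number?", "integer?"}),
    "positive-integer": frozenset({"number?", "integer?", "positive?"}),
}

def below(s, t):
    # s entails t when every constraint of t is carried by s.
    return types[t] <= types[s]

def parents(name):
    # Immediate ancestors: strictly-above types with no type in between.
    above = [t for t in types if t != name and below(name, t)]
    return [t for t in above
            if not any(below(u, t) and u != t for u in above)]

print(parents("positive-integer"))  # ['positive-number', 'integer']
```
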
107

Bayesian Methods for On-Line Gross Error Detection and Compensation

Gonzalez, Ruben 11 1900 (has links)
Data reconciliation and gross error detection are traditional methods for detecting mass balance inconsistency within process instrument data. These methods use a static approach to statistical evaluation. This thesis is concerned with using an alternative statistical approach (Bayesian statistics) to detect mass balance inconsistency in real time. The proposed dynamic Bayesian solution makes use of a state space process model which incorporates mass balance relationships, so that a governing set of mass balance variables can be estimated using a Kalman filter. Due to the incorporation of mass balances, many model parameters are defined by first principles. However, some parameters, namely the observation and state covariance matrices, need to be estimated from process data before the dynamic Bayesian methods can be applied. This thesis makes use of Bayesian machine learning techniques to estimate these parameters, separating process disturbances from instrument measurement noise. / Process Control
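
A minimal sketch of the state-space idea, with illustrative parameters rather than the thesis's model: a single tank obeys a mass balance, the level sensor is noisy, and the Kalman filter's innovation sequence flags a gross error such as a sensor bias appearing mid-run.

```python
# One-tank mass balance: mass(t+1) = mass(t) + inflow - outflow.
# The filter predicts via the balance and tests each innovation.
import numpy as np

rng = np.random.default_rng(7)
T, q, r = 200, 0.05, 1.0   # steps, state noise var, measurement noise var

mass = np.zeros(T)
flows = rng.normal(0.0, 0.5, T)          # net inflow - outflow, assumed known
for t in range(1, T):
    mass[t] = mass[t - 1] + flows[t] + rng.normal(0, np.sqrt(q))

z = mass + rng.normal(0, np.sqrt(r), T)
z[120:] += 4.0                           # gross error: sensor bias from t=120

x, p = 0.0, 1.0                          # state estimate and its variance
for t in range(1, T):
    x, p = x + flows[t], p + q           # predict via the mass balance
    s = p + r                            # innovation variance
    innov = z[t] - x
    if abs(innov) > 3 * np.sqrt(s):      # simple 3-sigma gross-error test
        print(f"gross error flagged at t={t}")
        break
    k = p / s                            # Kalman gain (scalar case)
    x, p = x + k * innov, (1 - k) * p    # update
```
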
108

Principal typings for interactive ruby programming

Hnativ, Andriy 16 December 2009
A novel and promising method of software development is the interactive style, in which code is written and tested incrementally. Interpreted dynamic languages such as Ruby, Python, and Lua support this interactive development style. However, because they lack semantic analysis as part of a compilation phase, they do not provide type checking. The programmer is only informed of type errors when they are encountered in the execution of the program, far too late and often at a less informative location in the code. We introduce a typing system for Ruby in which types are determined before execution by inferring principal typings. This system overcomes the obstacles that interactive and dynamic program development imposes on type checking, yielding an effective type-checking facility for dynamic programming languages. Our development is embodied as an extension to irb, the Ruby interactive mode, allowing us to evaluate principal typings during interactive development.
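
As a toy illustration of inferring a most-general type before execution (a Python stand-in; the actual system extends Ruby's irb), the sketch below infers the principal type of a small lambda term by unification.

```python
# Minimal unification-based inference for a toy language with integer
# literals, variables, addition, and lambdas. Types are "int", type
# variables "t0", "t1", ..., or arrow tuples ("->", arg, result).
import itertools

fresh = (f"t{i}" for i in itertools.count())

def apply(s, t):
    # Apply substitution s to type t.
    if isinstance(t, tuple):
        return tuple(apply(s, x) for x in t)
    while t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = apply(s, a), apply(s, b)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith("t"):
        return {**s, a: b}
    if isinstance(b, str) and b.startswith("t"):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple):
        s = unify(a[1], b[1], s)        # argument types
        return unify(a[2], b[2], s)     # result types
    raise TypeError(f"cannot unify {a} with {b}")

def infer(expr, env, s):
    kind = expr[0]
    if kind == "int":                    # integer literal
        return "int", s
    if kind == "var":                    # variable lookup
        return env[expr[1]], s
    if kind == "add":                    # both operands must be int
        for sub in expr[1:]:
            t, s = infer(sub, env, s)
            s = unify(t, "int", s)
        return "int", s
    if kind == "lambda":                 # introduce a fresh argument type
        tv = next(fresh)
        body_t, s = infer(expr[2], {**env, expr[1]: tv}, s)
        return ("->", tv, body_t), s
    raise ValueError(kind)

# Principal type of (lambda x. x + 1) is int -> int.
t, s = infer(("lambda", "x", ("add", ("var", "x"), ("int", 1))), {}, {})
print(apply(s, t))  # ('->', 'int', 'int')
```
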
109

Testing the Role of Source Credibility on Memory for Inferences

Guillory, Jimmeka Joy 2011 August 1900 (has links)
Research shows that people have difficulty forgetting inferences they make after reading a passage, even when the information that the inferences are based on is later known to be untrue. This dissertation examined the effects of these inferences on memory for political information and tested whether the credibility of the source of a correction influences whether people use the correction or continue relying on the original information when making inferences. According to source credibility theory, two main factors contribute to credibility: expertise and trustworthiness. Experiment 1 examined credibility as a function of both expertise and trustworthiness. The results showed that a correction from a source who is high on both factors significantly decreased use of the original information. Experiment 2 examined credibility as a function of expertise alone. Its results showed no significant decrease in participants' use of the original information if the correction came from a source that was simply more expert (but not more trustworthy) than another source, suggesting that source expertise alone is not sufficient to reduce reliance on the original information. Experiment 3, which examined credibility as a function of trustworthiness, demonstrated that a highly trustworthy source does significantly decrease the use of the original information when making inferences. This study is the first to provide direct support for the hypothesis that making the source of a correction more believable decreases use of the original, discredited information when making inferences.
