101
Evaluation of relationship inference and weighting schemes of TheSys. January 1997.
by Chan Chi Wai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves [89]-91).
Contents: Abstract; List of Tables; List of Figures.
Chapter 1: Introduction.
Chapter 2: Background Information and Scope of Thesis (2.1 Related Work; 2.2 Scope of Thesis).
Chapter 3: System Architecture and Design Principle (3.1 Overall System Architecture; 3.2 Entry Term Construct and Thesaurus Frame: 3.2.1 Semantic Classification, 3.2.2 "Word, Entry Term and Semanteme", 3.2.3 Relationship Type and Relationship Link; 3.3 Thesaurus Module and Maintenance Module; 3.4 Data Structure: 3.4.1 Semantic Classification Tree, 3.4.2 Entry Term Construct, 3.4.3 Thesaurus Frame).
Chapter 4: Relationship Inference (4.1 Study on a Traverse of Two Relationship Links; 4.2 Grammar of the Relationship Inference Rules Definition; 4.3 Implementation Detail and API of Relationship Inference; 4.4 Evaluation on Relationship Inference).
Chapter 5: Weighting Schemes (5.1 Thesaurus Frame Construction and Relationship Type Definition; 5.2 Two Kinds of Relationship Types; 5.3 Evaluation on Different Weighting Scheme Formulas; 5.4 Term Ranking; 5.5 Normalization on Semantic Distance).
Chapter 6: User Interface and API (6.1 User Interface; 6.2 API: 6.2.1 Thesaurus Management, 6.2.2 Semantic Classifications, 6.2.3 Entry Terms, 6.2.4 Semantemes, 6.2.5 Relationship Types and Relationship Links, 6.2.6 Weighting Schemes).
Chapter 7: Conclusion.
References.
Appendix A: System Installation (A.1 File Organization of TheSys: A.1.1 API Source Codes (THESYS/API), A.1.2 Header Files (THESYS/include), A.1.3 Interface Source Codes and Library (THESYS/UI and THESYS/lib), A.1.4 System Generated Files; A.2 Setup TheSys with its User Interfaces; A.3 Employ TheSys with Customized Applications).
Appendix B: API Description (B.1 Thesaurus Management; B.2 Semantic Classifications; B.3 Entry Terms; B.4 Semantemes; B.5 Relationship Types and Relationship Links; B.6 Relationship Inference; B.7 Weighting Schemes).
102
Applications of machine learning to agricultural land values: prediction and causal inference. Er, Emrah, January 1900.
Doctor of Philosophy / Department of Agricultural Economics / Nathan P. Hendricks / This dissertation focuses on the prediction of agricultural land values and the effects of water rights on land values, using machine learning algorithms and hedonic pricing methods. I predict agricultural land values with several machine learning algorithms, including ridge regression, the least absolute shrinkage and selection operator (LASSO), random forests, and extreme gradient boosting. To analyze the causal effect of water right seniority on agricultural land values, I use the double-selection LASSO technique.
The second chapter presents the data used in the dissertation. A unique set of parcel sales from the Property Valuation Division of Kansas constitutes the backbone of the data used in the estimation. Along with the parcel sales data, I collected detailed basis, water, tax, soil, weather, and urban influence data. This chapter provides a detailed explanation of the various data sources and variable construction processes.
The third chapter presents different machine learning models for predicting irrigated agricultural land prices in Kansas. Researchers and policymakers use different models and data sets for price prediction, and recently developed machine learning methods can improve the predictive ability of the estimated models. In this chapter I estimate several machine learning models for predicting agricultural land values in Kansas. Results indicate that the predictive power of the machine learning methods is stronger than that of standard econometric methods: the median absolute error of the extreme gradient boosting estimation is 0.1312, versus 0.6528 for the simple OLS model.
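A minimal sketch of the kind of model comparison the chapter describes, assuming scikit-learn and xgboost are available; the synthetic data, feature count, and hyperparameters are illustrative assumptions, not the dissertation's actual specification.

```python
# Compare OLS against the learners named above by median absolute
# error on synthetic stand-in data for parcel characteristics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.metrics import median_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))             # stand-in parcel features
y = 2.0 * X[:, 0] + 3.0 * np.sin(X[:, 1]) + rng.normal(scale=0.5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "OLS": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "LASSO": Lasso(alpha=0.01),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = median_absolute_error(y_te, model.predict(X_te))
    print(f"{name:>13}: median absolute error = {err:.4f}")
```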
The fourth chapter examines whether water right seniority is capitalized into irrigated agricultural land values in Kansas. Using a unique data set of irrigated agricultural land sales, I analyze the causal effect of water right seniority on agricultural land values. A possible concern when estimating hedonic models is omitted variable bias, so I use double-selection LASSO regression and its variable selection properties to overcome it. I also estimate generalized additive models to capture nonlinearities that may exist. Results show that water rights have a positive impact on irrigated land prices in Kansas: an additional year of water right seniority increases irrigated land value by nearly $17 per acre. Further analysis also suggests a nonlinear relationship between seniority and agricultural land prices.
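A hedged sketch of double-selection LASSO (in the spirit of Belloni, Chernozhukov, and Hansen), again on synthetic stand-in data; the variable names and the true effect of $17 baked into the simulated data are assumptions for the demo, not the dissertation's estimates.

```python
# Double-selection LASSO: select controls that predict the outcome,
# select controls that predict the treatment, then run OLS of the
# outcome on the treatment plus the union of both selected sets.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(1)
n, p = 1000, 50
X = rng.normal(size=(n, p))                          # controls
d = X[:, 0] + rng.normal(size=n)                     # treatment: seniority (years)
y = 17.0 * d + 5.0 * X[:, 0] + rng.normal(size=n)    # outcome: price ($/acre)

sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)   # step 1: y on X
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)   # step 2: d on X
keep = np.union1d(sel_y, sel_d)                         # union of controls
Z = np.column_stack([d, X[:, keep]])
effect = LinearRegression().fit(Z, y).coef_[0]          # step 3: OLS
print(f"estimated seniority effect: ${effect:.2f} per acre per year")
```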
103
Labor market policies in an equilibrium matching model with heterogeneous agents and on-the-job search. Stavrunova, Olena, 01 January 2007.
This dissertation quantitatively evaluates selected labor market policies in a search-matching model with skill heterogeneity, where high-skilled workers can take temporary jobs with skill requirements below their skill levels. The joint posterior distribution of the structural parameters of the theoretical model is obtained conditional on data on the labor market histories of NLSY79 respondents. Information on the AFQT scores of individuals and the skill requirements of occupations is used to identify the skill levels of workers and the complexity levels of jobs in the job-worker matches realized in the data. The model and the data are used to simulate the posterior distributions of the impacts of labor market policies on the endogenous variables of interest to a policy-maker, including the unemployment rates, durations, and wages of low- and high-skilled workers. In particular, the effects of the following policies are analyzed: an increase in the proportion of high-skilled workers, subsidies for employing or hiring high- and low-skilled workers, and an increase in unemployment income.
104
A factoring approach for probabilistic inference in belief networks. Li, Zhaoyu, 08 June 1993.
Reasoning about any realistic domain always involves a degree of uncertainty.
Probabilistic inference in belief networks is one effective way of reasoning under
uncertainty. Efficiency is critical in applying this technique, and many researchers
have been working on this topic. This thesis reports on our research in this area.
This thesis contributes a new framework for probabilistic inference in belief
networks. The previously developed algorithms depend on the topological structure
of a belief network to perform inference efficiently. Those algorithms are constrained
by the way they use topological information and may not work efficiently for some
inference tasks. This thesis explores the essence of probabilistic inference, analyzes
previously developed algorithms, and presents a factoring approach for probabilistic
inference. It proposes that efficient probabilistic inference in belief networks can be cast as an optimal factoring problem.
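As a hedged illustration of this factoring view (not the thesis's own algorithm or notation), the sketch below computes the same marginal in a three-node chain network under two factorings; the order in which factors are combined and summed out determines the amount of numerical work, which is what an optimal factoring seeks to minimize.

```python
# Inference-as-factoring on a chain network A -> B -> C (all binary).
# Both factorings compute the same marginal P(C); the CPT values are
# arbitrary assumptions for the demo.
import numpy as np

pA = np.array([0.6, 0.4])                    # P(A)
pBgA = np.array([[0.7, 0.3], [0.2, 0.8]])    # P(B|A), rows indexed by A
pCgB = np.array([[0.9, 0.1], [0.5, 0.5]])    # P(C|B), rows indexed by B

# Good factoring: sum out A first, then B -- small intermediate tables.
pB = pA @ pBgA          # P(B) = sum_a P(a) P(B|a)
pC = pB @ pCgB          # P(C) = sum_b P(b) P(C|b)

# Poor factoring: build the full joint table first, then sum it out.
joint = pA[:, None, None] * pBgA[:, :, None] * pCgB[None, :, :]
pC_naive = joint.sum(axis=(0, 1))

assert np.allclose(pC, pC_naive)
print("P(C) =", pC)
```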
The optimal factoring framework provides an alternative perspective on
probabilistic inference and a quantitative measure of efficiency for an algorithm.
Using this framework, this thesis presents an optimal factoring algorithm for poly-tree
networks and for arbitrary belief networks (albeit with exponential worst-case
time complexity for non-poly-tree networks). Since the optimal factoring problem
in general is a hard problem, a heuristic algorithm, called set-factoring, is developed
for multiply-connected belief networks. Set factoring is shown to outperform
previously developed algorithms. We also apply the optimal factoring framework to the problem of finding the most probable instantiation of all the nodes of a belief network, and we present an efficient algorithm for that task.
The extensive computation that probabilistic inference requires renders every currently used exact inference algorithm intractable for large belief networks. One way to push this boundary is to consider parallel hardware. This thesis also
explores the issue of parallelizing probabilistic inference in belief networks. The
feasibility of parallelizing probabilistic inference is demonstrated analytically and
experimentally. Exponential-time numerical computation can be reduced by a
polynomial-time factoring heuristic. This thesis offers an insight into the effect
of the structure of a belief network on speedup and efficiency. / Graduation date: 1994
105
Efficient Computation of Probabilities of Events Described by Order Statistics and Applications to Queue Inference. Jones, Lee K.; Larson, Richard C., 1943-, 03 1900.
This paper derives recursive algorithms for efficiently computing event probabilities related to order statistics and applies the results in a queue inferencing setting. Consider a set of N i.i.d. random variables in [0, 1]. When the experimental values of the random variables are arranged in ascending order from smallest to largest, one has the order statistics of the set of random variables. Both a forward and a backward recursive O(N³) algorithm are developed for computing the probability that the order statistics vector lies in a given N-rectangle. The new algorithms have applicability in inferring the statistical behavior of Poisson arrival queues, given only the start and stop times of service of all N customers served in a period of continuous congestion. The queue inference results extend the theory of the "Queue Inference Engine" (QIE), originally developed by Larson in 1990 [8]. The methodology is extended to a third O(N³) algorithm, employing both forward and backward recursion, that computes the conditional probability that a random customer of the N served waited in queue less than r minutes, given the observed customer departure times and assuming first-come, first-served service. To our knowledge, this result is the first O(N³) exact algorithm for computing points on the in-queue waiting time distribution function, conditioned on the start and stop time data. The paper concludes with an extension to the computation of certain correlations of in-queue waiting times. Illustrative computational results are included throughout.
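A sketch in the spirit of these recursions, assuming SciPy: a dynamic program for the simpler one-sided event P(X(1) ≤ b1, ..., X(N) ≤ bN) for uniform order statistics; the paper's algorithms handle the harder two-sided N-rectangle case, and the bounds and Monte Carlo check here are illustrative assumptions.

```python
# f tracks the probability that exactly s points fall in [0, b[i-1]]
# while every prefix constraint (at least j points <= b[j-1]) holds.
# N stages x O(N) states x O(N) transitions gives O(N^3)-style work.
import numpy as np
from scipy.stats import binom

def one_sided_prob(b):
    """P(X(1) <= b[0], ..., X(N) <= b[N-1]) for the order statistics
    of N i.i.d. Uniform(0,1) variables; b nondecreasing, values < 1."""
    N = len(b)
    f = {0: 1.0}                    # count s in [0, edge] -> probability
    edge = 0.0
    for i in range(1, N + 1):
        q = (b[i - 1] - edge) / (1.0 - edge)   # conditional cell probability
        g = {}
        for s_prev, prob in f.items():
            # Remaining N - s_prev points are i.i.d. uniform on (edge, 1].
            for c in range(N - s_prev + 1):
                s = s_prev + c
                if s >= i:                     # need >= i points <= b[i-1]
                    g[s] = g.get(s, 0.0) + prob * binom.pmf(c, N - s_prev, q)
        f, edge = g, b[i - 1]
    return f.get(N, 0.0)                       # X(N) <= b[N-1] forces s = N

b = [0.3, 0.5, 0.9]
exact = one_sided_prob(b)
# Sanity check against Monte Carlo.
sims = np.sort(np.random.default_rng(0).random((200_000, 3)), axis=1)
print(exact, (sims <= np.array(b)).all(axis=1).mean())
```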
106
The Performance of a Mechanical Design 'Compiler'. Ward, Allen C.; Seering, Warren, 01 January 1989.
A mechanical design "compiler" has been developed which, given an appropriate schematic, specifications, and utility function for a mechanical design, returns catalog numbers for an optimal implementation. The compiler has been successfully tested on a variety of mechanical and hydraulic power transmission designs and a few temperature sensing designs. Times required have been at worst proportional to the logarithm of the number of possible combinations of catalog numbers.
107
TYPICAL: A Knowledge Representation System for Automated Discovery and Inference. Haase, Kenneth W., Jr., 01 August 1987.
TYPICAL is a package for describing and making automatic inferences about a broad class of SCHEME predicate functions. These functions, called types following popular usage, delineate classes of primitive SCHEME objects, composite data structures, and abstract descriptions. TYPICAL types are generated by an extensible combinator language from either existing types or primitive terminals. These generated types are located in a lattice of predicate subsumption which captures necessary entailment between types; if satisfaction of one type necessarily entails satisfaction of another, the first type is below the second in the lattice. The inference made by TYPICAL computes the position of a new definition within the lattice and establishes it there. This information is then accessible both to later inferences and to other programs (reasoning systems, code analyzers, etc.) which may need it for their own purposes. TYPICAL was developed as a representation language for the discovery program Cyrano; particular examples are given of TYPICAL's application in the Cyrano program.
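A toy sketch of placing a new type into a subsumption lattice, written in Python rather than SCHEME; deciding subsumption by brute-force testing over a small sample universe is an assumption for the demo, since TYPICAL instead reasons over its combinator language.

```python
# Types are predicates; "general subsumes specific" means every
# sample satisfying the specific predicate also satisfies the
# general one (approximated here over a finite universe).
UNIVERSE = range(-50, 51)

def subsumes(general, specific):
    return all(general(x) for x in UNIVERSE if specific(x))

class Lattice:
    def __init__(self):
        self.types = {}      # name -> predicate
        self.above = {}      # name -> names of strictly-more-general types

    def insert(self, name, pred):
        # Position the new type relative to every existing type.
        self.above[name] = {t for t, p in self.types.items() if subsumes(p, pred)}
        for t, p in self.types.items():
            if subsumes(pred, p):      # new type is more general than t
                self.above[t].add(name)
        self.types[name] = pred

lat = Lattice()
lat.insert("number", lambda x: True)
lat.insert("positive", lambda x: x > 0)
lat.insert("even-positive", lambda x: x > 0 and x % 2 == 0)
print(sorted(lat.above["even-positive"]))   # ['number', 'positive']
```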
108
Bayesian Methods for On-Line Gross Error Detection and Compensation. Gonzalez, Ruben, 11 1900.
Data reconciliation and gross error detection are traditional methods for detecting mass balance inconsistency in process instrument data. These methods take a static approach to statistical evaluation. This thesis is concerned with using an alternative statistical approach (Bayesian statistics) to detect mass balance inconsistency in real time.
The proposed dynamic Bayesian solution makes use of a state space process model which incorporates mass balance relationships, so that a governing set of mass balance variables can be estimated using a Kalman filter. Because the mass balances are built in, many model parameters are defined by first principles. However, some parameters, namely the observation and state covariance matrices, need to be estimated from process data before the dynamic Bayesian methods can be applied. This thesis makes use of Bayesian machine learning techniques to estimate these parameters, separating process disturbances from instrument measurement noise. / Process Control
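A hedged sketch of the kind of setup described, assuming NumPy: a scalar Kalman filter over a one-tank mass balance, with a gross error flagged when the measurement innovation is improbably large; all flows, noise variances, and the injected sensor bias are invented for the demo.

```python
# Scalar Kalman filter on a one-tank mass balance with an
# innovation-based gross error test. Q and R are the (assumed)
# process and measurement noise variances; a 2.0-unit sensor bias
# is injected at t = 60 to simulate a gross error.
import numpy as np

rng = np.random.default_rng(0)
Q, R = 0.01, 0.04
net_flow = 0.5                      # known inflow - outflow per step

m = 10.0                            # true tank holdup
x_hat, P = 10.0, 1.0                # filter estimate and its variance
for t in range(100):
    m += net_flow + rng.normal(scale=Q**0.5)
    z = m + rng.normal(scale=R**0.5) + (2.0 if t >= 60 else 0.0)

    # Predict step (model: x_t = x_{t-1} + net_flow + w, w ~ N(0, Q)).
    x_pred, P_pred = x_hat + net_flow, P + Q
    # Innovation test: with no gross error, e ~ N(0, S).
    e, S = z - x_pred, P_pred + R
    if abs(e) > 3.0 * S**0.5:       # ~0.3% false alarm rate
        print(f"t={t}: gross error suspected (innovation {e:+.2f})")
        x_hat, P = x_pred, P_pred   # withhold the suspect measurement
    else:
        K = P_pred / S              # Kalman gain
        x_hat, P = x_pred + K * e, (1.0 - K) * P_pred
```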
109
Principal typings for interactive Ruby programming. Hnativ, Andriy, 16 December 2009.
A novel and promising method of software development is the interactive style of development, where code is written and incrementally tested simultaneously. Interpreted dynamic languages such as Ruby, Python, and Lua support this interactive development style. However, because they lack semantic analysis as part of a compilation phase, they do not provide type-checking. The programmer is only informed of type errors when they are encountered in the execution of the program, far too late and often at a less informative location in the code. We introduce a typing system for Ruby in which types are determined before execution by inferring principal typings. This system overcomes the obstacles that interactive and dynamic program development imposes on type checking, yielding an effective type-checking facility for dynamic programming languages. Our development is embodied as an extension to irb, the Ruby interactive mode, allowing us to evaluate principal typings during interactive development.
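A toy sketch of principal-type inference by unification, written in Python for a tiny lambda-calculus stand-in rather than real Ruby syntax; the expression encoding and the omission of an occurs check are simplifications for the demo, not the thesis's system.

```python
# Expressions: ("int", n), ("var", x), ("lam", x, body), ("app", f, a).
# Types: "int", type variables "t0", "t1", ..., or ("fun", arg, res).
import itertools

fresh = (f"t{i}" for i in itertools.count())

def resolve(t, sub):
    while isinstance(t, str) and t in sub:
        t = sub[t]
    return t

def unify(a, b, sub):
    a, b = resolve(a, sub), resolve(b, sub)
    if a == b:
        return
    if isinstance(a, str) and a.startswith("t"):     # type variable
        sub[a] = b                                   # (occurs check omitted)
    elif isinstance(b, str) and b.startswith("t"):
        sub[b] = a
    elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] == "fun":
        unify(a[1], b[1], sub)
        unify(a[2], b[2], sub)
    else:
        raise TypeError(f"cannot unify {a} with {b}")

def infer(expr, env, sub):
    kind = expr[0]
    if kind == "int":
        return "int"
    if kind == "var":
        return env[expr[1]]
    if kind == "lam":                                # fresh type for the argument
        t_arg = next(fresh)
        t_res = infer(expr[2], {**env, expr[1]: t_arg}, sub)
        return ("fun", t_arg, t_res)
    if kind == "app":                                # constrain f : arg -> result
        t_fun = infer(expr[1], env, sub)
        t_arg = infer(expr[2], env, sub)
        t_res = next(fresh)
        unify(t_fun, ("fun", t_arg, t_res), sub)
        return resolve(t_res, sub)

# (lambda x: x) applied to 42 has principal type int.
print(infer(("app", ("lam", "x", ("var", "x")), ("int", 42)), {}, {}))
```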
110
Testing the Role of Source Credibility on Memory for Inferences. Guillory, Jimmeka Joy, August 2011.
Research shows that people have difficulty forgetting inferences they make after reading a passage, even when the information that the inferences are based on is later known to be untrue. This dissertation examined the effects of these inferences on memory for political information and tested whether the credibility of the source of a correction influences whether people use the correction or continue relying on the original information when making inferences. According to source credibility theory, two main factors contribute to credibility: expertise and trustworthiness. Experiment 1 examined credibility as a function of both expertise and trustworthiness. The results showed that a correction from a source who is high on both factors significantly decreased use of the original information. Experiment 2 examined credibility as a function of expertise alone. Its results showed no significant decrease in participants' use of the original information if the correction came from a source that was merely more expert (but not more trustworthy) than another source. This finding suggests that source expertise alone is not sufficient to reduce reliance on the original information. Experiment 3, which examined credibility as a function of trustworthiness, demonstrated that a highly trustworthy source does significantly decrease use of the original information when making inferences. This study is the first to provide direct support for the hypothesis that making the source of a correction more believable decreases use of the original, discredited information when making inferences.