  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The Use Of Kullback-Leibler Divergence In Opinion Retrieval

Cen, Kun 24 September 2008 (has links)
With the huge amount of subjective content in on-line documents, there is a clear need for an information retrieval system that supports retrieval of documents containing opinions about the topic expressed in a user's query. In recent years, blogs, a new publishing medium, have attracted a large number of people who express personal opinions on all kinds of topics in response to real-world events. The opinionated nature of blogs makes them an interesting new research area for opinion retrieval. The identification and extraction of subjective content from blogs has become the subject of several research projects. In this thesis, four novel methods are proposed to retrieve blog posts that express opinions about given topics. The first method uses the Kullback-Leibler divergence (KLD) to weight a lexicon of subjective adjectives around query terms. Taking into account the distances between query terms and subjective adjectives, the second method uses distance-based KLD scores of subjective adjectives for document re-ranking. The third method calculates KLD scores of subjective adjectives for predefined query categories. In the fourth method, collocates, words co-occurring with query terms in the corpus, are used to construct the subjective lexicon automatically; the KLD scores of collocates are then calculated and used for document ranking. Four groups of experiments are conducted to evaluate the proposed methods on TREC test collections. The experimental results are compared with baseline systems to determine the effectiveness of using KLD in opinion retrieval. Further studies are recommended to explore more sophisticated approaches to identifying subjectivity and promising techniques for extracting opinions.
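The core quantity behind all four methods can be illustrated directly: the discrete KL divergence between a term distribution observed near query terms and a background distribution. This is a toy sketch with invented word counts and simple epsilon smoothing, not the thesis's actual weighting scheme:

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, vocab, eps=1e-9):
    """Discrete KL divergence D(P||Q) over a shared vocabulary,
    with a small epsilon to keep the ratio finite for unseen words."""
    p_total = sum(p_counts.values()) or 1
    q_total = sum(q_counts.values()) or 1
    d = 0.0
    for w in vocab:
        p = p_counts.get(w, 0) / p_total + eps
        q = q_counts.get(w, 0) / q_total + eps
        d += p * math.log(p / q)
    return d

# Toy corpora: words near query terms vs. background text (invented).
near_query = Counter("great awful great amazing poor great".split())
background = Counter("great neutral report data report data".split())
vocab = set(near_query) | set(background)

score = kl_divergence(near_query, background, vocab)
print(round(score, 4))  # positive when the local distribution diverges
```

A larger score indicates that the vocabulary around the query terms diverges more strongly from the background, which is the signal the subjective-lexicon weighting exploits.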
3

A Kullback-Leibler Divergence Filter for Anomaly Detection in Non-Destructive Pipeline Inspection

Zhou, Ruikun 14 September 2020 (has links)
Anomaly detection generally refers to algorithmic procedures aimed at identifying relatively rare events in data sets that differ substantially from the majority of the data set to which they belong. In the context of data series generated by sensors mounted on mobile devices for non-destructive inspection and monitoring, anomalies typically identify the defects to be detected, thereby defining the main task of this class of devices. In this case, a useful way of operationally defining anomalies is to look at their information content with respect to the background data, which is typically noisy and can therefore easily mask the relevant events if unfiltered. In this thesis, a Kullback-Leibler (KL) Divergence filter is proposed to detect signals with relatively high information content, namely anomalies, within data series. The data is generated by using the model of a broad class of proximity sensors that applies to devices commonly used in engineering practice. This includes, for example, sensory devices mounted on mobile robotic devices for the non-destructive inspection of hazardous or other environments that may not be accessible to humans for direct inspection. The raw sensory data generated by this class of sensors is often challenging to analyze due to the prevalence of noise over the signal content that reveals the presence of relevant features, such as damage in gas pipelines. The proposed filter is built to detect the difference in information content between the data series collected by the sensor and a baseline data series, with the advantage of not requiring the design of a threshold. Moreover, unlike traditional filters, which require prior knowledge of or distributional assumptions about the data, the KL Divergence filter is model-free and suitable for all kinds of raw sensory data. Of course, it is also compatible with classical signal distribution assumptions, such as the Gaussian approximation.
Also, the robustness and sensitivity of the KL Divergence filter are discussed under different scenarios, with various signal-to-noise ratios, using data generated by a simulator that reproduces very realistic scenarios based on models of real sensors provided by manufacturers or widely accepted in the literature.
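The threshold-free filtering idea can be sketched as follows: compare a sliding-window histogram of the sensor series against a baseline histogram via KL divergence and rank windows by their score. This is a simplified illustration with synthetic Gaussian data and an injected mean shift; the thesis's filter and sensor models are more elaborate:

```python
import math
import random

def histogram(xs, bins, lo, hi):
    """Normalized histogram with add-one smoothing so KL stays finite."""
    counts = [1.0] * bins
    width = (hi - lo) / bins
    for x in xs:
        i = min(bins - 1, max(0, int((x - lo) / width)))
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl(p, q):
    """Discrete KL divergence D(P||Q) over matching histogram bins."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(2000)]   # background noise
signal = [random.gauss(0.0, 1.0) for _ in range(300)]
signal[150:170] = [random.gauss(4.0, 1.0) for _ in range(20)]  # injected defect

q = histogram(baseline, 20, -6, 6)
window = 40
scores = []
for start in range(len(signal) - window):
    p = histogram(signal[start:start + window], 20, -6, 6)
    scores.append(kl(p, q))

peak = max(range(len(scores)), key=lambda i: scores[i])
print(peak)  # window index overlapping the injected anomaly
```

No detection threshold is designed here; windows with anomalous information content simply stand out by their divergence from the baseline.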
4

Information Theoretical Measures for Achieving Robust Learning Machines

Zegers, Pablo, Frieden, B., Alarcón, Carlos, Fuentes, Alexis 12 August 2016 (has links)
Information theoretical measures are used to design, from first principles, an objective function that can drive a learning machine process to a solution that is robust to perturbations in parameters. Full analytic derivations are given and tested with computational examples showing that indeed the procedure is successful. The final solution, implemented by a robust learning machine, expresses a balance between Shannon differential entropy and Fisher information. This is also surprising in being an analytical relation, given the purely numerical operations of the learning machine.
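For the Gaussian case the entropy-information balance is concrete: the differential entropy of N(mu, sigma^2) is (1/2)ln(2*pi*e*sigma^2) and the Fisher information about the mean is 1/sigma^2, so exp(2h) * I is constant in sigma. This is a standard identity illustrating the trade-off, not the paper's objective function:

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2) in nats."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def gaussian_fisher_info(sigma):
    """Fisher information about the location parameter of N(mu, sigma^2)."""
    return 1.0 / sigma ** 2

# Entropy rises and Fisher information falls with sigma, but
# exp(2h) * I stays fixed at 2*pi*e for every sigma.
for sigma in (0.5, 1.0, 3.0):
    h = gaussian_entropy(sigma)
    info = gaussian_fisher_info(sigma)
    print(sigma, round(math.exp(2 * h) * info, 6))
```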
5

Effective Authorship Attribution in Large Document Collections

Zhao, Ying, ying.zhao@rmit.edu.au January 2008 (has links)
Techniques that can effectively identify the authors of texts are of great importance in scenarios such as detecting plagiarism and identifying a source of information. A range of attribution approaches has been proposed in recent years, but none of these is particularly satisfactory; some are ad hoc and most have defects in terms of scalability, effectiveness, and computational cost. Good test collections are critical for the evaluation of authorship attribution (AA) techniques. However, there are no standard benchmarks available in this area; it is almost always the case that researchers have their own test collections. Furthermore, the collections that have been explored in AA are usually small, and thus whether the existing approaches are reliable or scalable is unclear. We develop several AA collections that are substantially larger than those in the literature; machine learning methods are used to establish the value of using such corpora in AA. The results, also used as baseline results in this thesis, show that the developed text collections can be used as standard benchmarks and are able to clearly distinguish between different approaches. One of the major contributions is that we propose the use of the Kullback-Leibler divergence, a measure of how different two distributions are, to identify authors based on elements of writing style. The results show that our approach is at least as effective as, if not always better than, the best existing attribution methods (support vector machines) for two-class AA, and is superior for multi-class AA. Moreover, our proposed method has much lower computational cost and is cheaper to train. Style markers are the key elements of style analysis. We explore several approaches to tokenising documents to extract style markers, examining which marker type works best.
We also propose three systems that boost AA performance by combining evidence from various marker types, motivated by the observation that no one type of marker can satisfy all AA scenarios. To address the scalability of AA, we propose the novel task of authorship search (AS), inspired by document search and intended for large document collections. Our results show that AS is reasonably effective at finding documents by a particular author, even within a collection consisting of half a million documents. Beyond search, we also propose an AS-based method to identify authorship. Our method is substantially more scalable than any method published in prior AA research, in terms of both collection size and the number of candidate authors; the discrimination scales up to several hundred authors.
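The KLD-based attribution idea can be sketched with character n-grams as style markers: score each candidate author by the divergence between the unknown document's marker distribution and the candidate's, and pick the smallest. The texts and the epsilon smoothing here are toy choices for illustration; the thesis evaluates far larger collections and several marker types:

```python
import math
from collections import Counter

def char_ngram_dist(text, n=2):
    """Normalized character n-gram distribution of a text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def kld(p, q, eps=1e-6):
    """Smoothed KL divergence over the union of the two vocabularies."""
    keys = set(p) | set(q)
    return sum((p.get(k, 0) + eps) * math.log((p.get(k, 0) + eps) /
               (q.get(k, 0) + eps)) for k in keys)

# Hypothetical candidate-author profiles and an unattributed text.
authors = {
    "A": "the cat sat on the mat and the cat slept",
    "B": "stochastic gradients converge under convexity assumptions",
}
unknown = "the dog sat on the mat and the dog slept"

best = min(authors, key=lambda a: kld(char_ngram_dist(unknown),
                                      char_ngram_dist(authors[a])))
print(best)
```

Ranking by divergence needs only counting and a linear pass over the vocabulary, which is one reason a KLD-based attributor can be much cheaper to train than a discriminative classifier.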
6

Delay estimation in computer networks

Johnson, Nicholas Alexander January 2010 (has links)
Computer networks are becoming increasingly large and complex; more so with the recent penetration of the internet into all walks of life. It is essential to be able to monitor and to analyse networks in a timely and efficient manner; to extract important metrics and measurements and to do so in a way which does not unduly disturb or affect the performance of the network under test. Network tomography is one possible method to accomplish these aims. Drawing upon the principles of statistical inference, it is often possible to determine the statistical properties of either the links or the paths of the network, whichever is desired, by measuring at the most convenient points thus reducing the effort required. In particular, bottleneck-link detection methods in which estimates of the delay distributions on network links are inferred from measurements made at end-points on network paths, are examined as a means to determine which links of the network are experiencing the highest delay. Initially two published methods, one based upon a single Gaussian distribution and the other based upon the method-of-moments, are examined by comparing their performance using three metrics: robustness to scaling, bottleneck detection accuracy and computational complexity. Whilst there are many published algorithms, there is little literature in which said algorithms are objectively compared. In this thesis, two network topologies are considered, each with three configurations in order to determine performance in six scenarios. Two new estimation methods are then introduced, both based on Gaussian mixture models which are believed to offer an advantage over existing methods in certain scenarios. Computationally, a mixture model algorithm is much more complex than a simple parametric algorithm but the flexibility in modelling an arbitrary distribution is vastly increased. Better model accuracy potentially leads to more accurate estimation and detection of the bottleneck. 
The concept of increasing flexibility is again considered by using a Pearson type-1 distribution as an alternative to the single Gaussian distribution. This increases the flexibility but with a reduced complexity when compared with the mixture model approaches, which necessitate the use of iterative approximation methods. A hybrid approach is also considered, in which the method-of-moments is combined with the Pearson type-1 method in order to circumvent problems with the output stage of the former. This algorithm has a higher variance than the method-of-moments but its output stage is more convenient for manipulation. Also considered is a new approach to detection algorithms which is not dependent on any a priori parameter selection and makes use of the Kullback-Leibler divergence. The results show that it accomplishes its aim but is not robust enough to replace the current algorithms. Delay estimation is then cast in a different role, as an integral part of an algorithm to correlate input and output streams in an anonymising network such as the onion router (TOR). TOR is used in an attempt to conceal network traffic from observation. Breaking the encryption protocols used is not possible without significant effort, but by correlating the un-encrypted input and output streams from the TOR network, it is possible to provide a degree of certainty about the ownership of traffic streams. The delay model is essential, as the network is treated as adding a pseudo-random delay to each packet; having an accurate model allows the algorithm to better correlate the streams.
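A minimal sketch of KL-based comparison of estimated link delay distributions, assuming each link's delay has already been fitted with a single Gaussian so the closed-form univariate Gaussian KL applies. The link names and parameters below are invented, and the thesis's mixture-model and Pearson variants are considerably more flexible than this single-Gaussian picture:

```python
import math

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL divergence D(N0||N1) between two univariate Gaussians."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

# Hypothetical per-link delay estimates (mean, std in ms) vs. a nominal model.
nominal = (10.0, 2.0)
links = {"link1": (10.5, 2.1), "link2": (48.0, 9.0), "link3": (11.0, 2.5)}

# The link whose estimated delay distribution diverges most from nominal
# is the bottleneck candidate.
bottleneck = max(links, key=lambda l: kl_gauss(*links[l], *nominal))
print(bottleneck)
```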
7

Diagnosability performance analysis of models and fault detectors

Jung, Daniel January 2015 (has links)
Model-based diagnosis compares observations from a system with predictions from a mathematical model in order to detect and isolate faulty components. Analyzing which faults can be detected and isolated given the model provides useful information when designing a diagnosis system. This information can be used, for example, to determine which residual generators can be generated or to select a sufficient set of sensors to detect and isolate the faults. With more information about the system taken into consideration during such an analysis, more accurate estimates can be computed of how good a fault detectability and isolability can be achieved. Model uncertainties and measurement noise are the main causes of reduced fault detection and isolation performance and can make it difficult to design a diagnosis system that fulfills given performance requirements. By taking information about different uncertainties into consideration early in the development process of a diagnosis system, it is possible to predict how good a performance can be achieved and to avoid bad design choices. This thesis deals with the quantitative analysis of fault detectability and isolability performance when taking model uncertainties and measurement noise into consideration. The goal is to analyze fault detectability and isolability performance given a mathematical model of the monitored system, before a diagnosis system is developed. A quantitative measure of fault detectability and isolability performance for a given model, called distinguishability, is proposed based on the Kullback-Leibler divergence. The distinguishability measure answers questions like "How difficult is it to isolate a fault fi from another fault fj?". Different properties of the distinguishability measure are analyzed.
It is shown, for example, that for linear descriptor models with Gaussian noise, distinguishability gives an upper limit on the fault-to-noise ratio of any linear residual generator. The proposed measure is used for a quantitative analysis of a nonlinear mean value model of gas flows in a heavy-duty diesel engine, analyzing how fault diagnosability performance varies across operating points. It is also used to formulate the sensor selection problem, i.e., to find the cheapest set of available sensors that should be used in a system to achieve the required fault diagnosability performance. As a case study, quantitative fault diagnosability analysis is used during the design of an engine misfire detection algorithm based on the crankshaft angular velocity measured at the flywheel. Decisions during the development of the misfire detection algorithm are motivated by quantitative analysis of misfire detectability performance, showing, for example, varying detection performance at different operating points and for different cylinders, which identifies when it is more difficult to detect misfires. This thesis presents a framework for quantitative fault detectability and isolability analysis that is a useful tool during the design of a diagnosis system. The different applications show examples of how quantitative analysis can be applied during a design process, either as feedback to an engineer or by formulating different design steps as optimization problems to assure that the required performance can be achieved.
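In the scalar, equal-variance Gaussian special case the distinguishability between two faults reduces to a closed form, (mu_i - mu_j)^2 / (2 sigma^2), which makes the role of measurement noise explicit. This is an illustrative special case with invented fault responses, not the thesis's general definition for linear descriptor models:

```python
def distinguishability(mu_i, mu_j, sigma):
    """KL divergence between N(mu_i, sigma^2) and N(mu_j, sigma^2);
    with equal variances it collapses to (mu_i - mu_j)^2 / (2 sigma^2)."""
    return (mu_i - mu_j) ** 2 / (2 * sigma ** 2)

# Hypothetical mean residual responses of three faults for one residual generator.
fault_means = {"f1": 0.8, "f2": 0.9, "f3": 3.0}
sigma = 0.5  # measurement noise level

# Small values flag fault pairs that are hard to isolate from each other.
for fi in fault_means:
    for fj in fault_means:
        if fi != fj:
            d = distinguishability(fault_means[fi], fault_means[fj], sigma)
            print(fi, fj, round(d, 3))
```

Here f1 and f2 are nearly indistinguishable while f3 separates easily, and doubling the noise quarters every score, which matches the intuition that noise, not the model alone, limits isolability.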
8

Collective reasoning under uncertainty and inconsistency

Adamcik, Martin January 2014 (has links)
In this thesis we investigate some global desiderata for probabilistic knowledge merging given several possibly jointly inconsistent, but individually consistent, knowledge bases. We show that the most naive methods of merging, which combine applications of a single expert inference process with the application of a pooling operator, fail to satisfy certain basic consistency principles. We therefore adopt a different approach. Following recent developments in machine learning, where Bregman divergences appear to be powerful, we define several probabilistic merging operators which minimise the joint divergence between the merged knowledge and the given knowledge bases. In particular, we prove that in many cases the result of applying such operators coincides with the sets of fixed points of averaging projective procedures, that is, procedures which combine knowledge updating with the pooling operators of decision theory. We develop relevant results concerning the geometry of Bregman divergences and prove new theorems in this field. We show that this geometry connects nicely with some desirable principles which have arisen in the epistemology of merging. In particular, we prove that the merging operators which we define by means of convex Bregman divergences satisfy analogues of the principles of merging due to Konieczny and Pino-Perez. Additionally, we investigate how such merging operators behave with respect to principles concerning irrelevant information, independence, and relativisation, which have previously been intensively studied in the case of single-expert probabilistic inference. Finally, we argue that two particular probabilistic merging operators, which are based on the Kullback-Leibler divergence, a special type of Bregman divergence, have overall the most appealing properties amongst the merging operators hitherto considered. By investigating some iterative procedures we propose algorithms to compute them in practice.
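The relationship between the two divergences can be checked numerically: the Bregman divergence generated by negative entropy, D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>, coincides with the KL divergence on probability vectors. This is a standard fact, sketched here on a small example:

```python
import math

def neg_entropy(p):
    """Negative Shannon entropy F(p) = sum_i p_i log p_i (the generator)."""
    return sum(x * math.log(x) for x in p)

def grad_neg_entropy(q):
    """Gradient of the generator: (log q_i + 1) componentwise."""
    return [math.log(x) + 1 for x in q]

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    g = gradF(q)
    return F(p) - F(q) - sum(gi * (pi - qi) for gi, pi, qi in zip(g, p, q))

def kl(p, q):
    """Plain KL divergence for strictly positive distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]
print(bregman(neg_entropy, grad_neg_entropy, p, q), kl(p, q))  # equal values
```

The linear correction term cancels exactly because both vectors sum to one, which is why KL inherits the projection geometry of the Bregman family that the thesis exploits.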
9

ASSOCIATION OF TOO SHORT ARCS USING ADMISSIBLE REGION

Surabhi Bhadauria (8695017) 24 April 2020 (has links)
The near-Earth space is filled with over 300,000 artificial debris objects with a diameter larger than one cm. For objects in the GEO and MEO regions, observations are made mainly through optical sensors. These sensors take observations over a short time which covers only a negligible part of the object's orbit. Two or more such observations are taken as one single Too Short Arc (TSA). Each TSA from an optical sensor consists of several angles: the right ascension and declination, along with their rates of change. However, because one TSA covers only a very small fraction of the orbit, its observational data is not sufficient for the complete initial determination of an object's orbit. For a newly detected unknown object, only TSAs are available, with no information about the object's orbit. Therefore, two or more TSAs that belong to the same object are required for its orbit determination. To solve this correlation problem, the framework of the probabilistic Admissible Region is used, which restricts the possible orbits based on a single TSA. To propagate the Admissible Region to the time of a second TSA, it is represented in a closed-form Gaussian Mixture representation. This way, propagation with an Extended Kalman filter is possible. To decide whether two TSAs are correlated, that is, whether they belong to the same object, an overlap between the regions is found in a suitable orbital-mechanics-based coordinate frame. To compute the overlap, the information measure of Kullback-Leibler divergence is used.
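Since two Gaussian mixtures admit no closed-form KL divergence, a Monte Carlo estimate is a common workaround: sample from one mixture and average the log density ratio. This is a 1-D sketch with invented mixture parameters purely to show the overlap idea; actual admissible regions live in a higher-dimensional orbital state space:

```python
import math
import random

def gmm_pdf(x, comps):
    """Density of a 1-D Gaussian mixture; comps = [(weight, mu, sigma), ...]."""
    return sum(w * math.exp(-(x - m) ** 2 / (2 * s ** 2)) /
               (s * math.sqrt(2 * math.pi)) for w, m, s in comps)

def mc_kl(p_comps, q_comps, n=20000, seed=1):
    """Monte Carlo estimate of D(P||Q): sample from P, average log(p/q)."""
    rng = random.Random(seed)
    weights = [w for w, _, _ in p_comps]
    total = 0.0
    for _ in range(n):
        w, m, s = rng.choices(p_comps, weights=weights)[0]
        x = rng.gauss(m, s)
        total += math.log(gmm_pdf(x, p_comps) / gmm_pdf(x, q_comps))
    return total / n

# Hypothetical admissible-region mixtures from two tracklets.
region_a = [(0.5, 0.0, 1.0), (0.5, 2.0, 1.0)]
region_b = [(0.5, 0.2, 1.0), (0.5, 2.1, 1.0)]   # nearly overlapping
region_c = [(0.5, 8.0, 1.0), (0.5, 11.0, 1.0)]  # disjoint

print(mc_kl(region_a, region_b), mc_kl(region_a, region_c))
```

A small divergence (regions a and b) suggests the two TSAs are consistent with the same object, while a large one (regions a and c) argues against correlating them.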
10

Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning

Nounagnon, Jeannette Donan 12 July 2016 (has links)
Geolocation accuracy is a crucial, life-or-death factor for rescue teams. Natural disasters and man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through the use of collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation for the performance of collaborative positioning has been disproportionately lacking. This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed states: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty in the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount; hence collaboration is beneficial in terms of uncertainty. However, reduced uncertainty does not necessarily imply improved accuracy. We therefore define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine the factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on the fly (ahead of time) whether or not it is worth collaborating in order to improve accuracy.
The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications. / Ph. D.
