231. Network event detection with entropy measures. Eimann, Raimund E. A. (January 2008)
Information measures may be used to estimate the amount of information emitted by discrete information sources, and network streams are an example of such sources. This thesis investigates the use of information measures for the detection of events in network streams. Starting with the fundamental entropy and complexity measures proposed by Shannon and Kolmogorov, it reviews a range of candidate information measures for network event detection, including algorithms from the Lempel-Ziv family and a relative newcomer, the T-entropy. Using network trace data from the University of Auckland, the thesis demonstrates experimentally that these measures are in principle suitable for the detection of a wide range of network events. Several key parameters influence the detectability of network events with information measures, including the amount of data considered in each traffic sample and the choice of observables. A study of the entropy behaviour of individual observables in event and non-event scenarios investigates the optimisation of these parameters. The thesis also examines the impact of some of the detected events on different information measures, motivating a discussion of the sensitivity of the various measures. A set of experiments demonstrating multi-dimensional network event classification with multiple observables and multiple information measures concludes the thesis.
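The windowed-entropy idea at the core of this abstract can be illustrated in a few lines. The sketch below is a deliberately simplified assumption, not the thesis's method: it reduces each packet to a single observable (say, its destination port), computes Shannon entropy over fixed-size windows, and flags windows whose entropy deviates sharply from the running statistics. The window size, the z-score rule, and all names are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of a sequence of observable values."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def detect_events(observable_stream, window=1000, threshold=1.5):
    """Flag windows whose entropy deviates from the mean of past windows.

    `observable_stream` is a list of per-packet observable values
    (e.g. destination ports); `threshold` is in standard deviations.
    Both the parameter values and the z-score rule are illustrative
    choices, not taken from the thesis.
    """
    entropies, events = [], []
    for start in range(0, len(observable_stream) - window + 1, window):
        h = shannon_entropy(observable_stream[start:start + window])
        if len(entropies) >= 2:
            mean = sum(entropies) / len(entropies)
            var = sum((e - mean) ** 2 for e in entropies) / len(entropies)
            if var > 0 and abs(h - mean) / math.sqrt(var) > threshold:
                events.append((start, h))
        entropies.append(h)
    return events
```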
234. Advances in Cross-Entropy Methods. Thomas Taimre (n.d.)
The cross-entropy method is an established technique for solving difficult estimation, simulation, and optimisation problems. The method has its origins in an adaptive importance sampling procedure for rare-event estimation published by R. Y. Rubinstein in 1997. In that publication, the adaptive procedure produces a parametric probability density function whose parameters minimise the variance of the associated likelihood ratio estimator. This variance minimisation can also be viewed as minimising a measure of divergence to the minimum-variance importance sampling density over all members of the parametric family in question. Soon thereafter it was realised that the same adaptive importance sampling procedure could be used to solve combinatorial optimisation problems by viewing the set of solutions to the optimisation problem as a rare event. This realisation led to the debut of the cross-entropy method in 1999, where it was introduced as a modification of the existing adaptive importance sampling procedure with a different choice of directed divergence measure, namely the Kullback-Leibler cross-entropy. The contributions of this thesis are threefold. Firstly, in a review capacity, it provides an up-to-date consolidation of material on the cross-entropy method and its generalisations, together with background material on importance sampling and Monte Carlo methods; the reviews are elucidated with original commentary and examples. Secondly, two major new applications of the cross-entropy methodology to optimisation problems are presented, advancing the boundary of knowledge on cross-entropy in the applied arena. Thirdly, two methodological contributions are made: (a) an extension of the generalised cross-entropy framework that enables the construction of state- and time-dependent importance sampling algorithms, and (b) a new algorithm for counting solutions to difficult binary-encoded problems.
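To make the cross-entropy recipe concrete, here is a minimal sketch of CE optimisation with a Gaussian sampling family; the elite-fraction update below is the standard textbook form, and every parameter value and name is an illustrative assumption rather than anything taken from the thesis.

```python
import numpy as np

def cross_entropy_maximise(score, dim, n_samples=200, elite_frac=0.1,
                           iters=50, init_std=5.0):
    """Cross-entropy optimisation with a Gaussian sampling family.

    Draw candidates, keep the elite fraction with the best scores, and
    refit the Gaussian to the elite set; this refit is the KL
    (cross-entropy) projection onto the parametric family.
    """
    mean = np.zeros(dim)
    std = np.full(dim, init_std)
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(iters):
        samples = mean + std * np.random.randn(n_samples, dim)
        scores = np.apply_along_axis(score, 1, samples)
        elite = samples[np.argsort(scores)[-n_elite:]]   # best-scoring candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mean

# Example: maximise a multimodal test function in two dimensions.
best = cross_entropy_maximise(lambda x: -np.sum(x**2) + np.sum(np.cos(3 * x)), dim=2)
```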
235. Contributions to the Theory and Practice of Inequality Measurement. Nicholas Rohde (n.d.)
The thesis presents results from five related studies concerned with the development and application of analytical techniques for the measurement of inequality. Four of the pieces are analytical works on the methodology of inequality measurement; the fifth is an empirical study of income mobility and inequality in Australia. The most significant work of the dissertation derives a new information-theoretic index for the measurement of inequality. The proposed index is based upon the same relationship between information theory and inequality measurement used to construct the Generalized Entropy (GE) inequality measures, and is equivalent to a technique established in the field of signal processing. The measure shares the axiomatic superiority of GE measures over other measurement techniques and exhibits an additional attractive decomposition property: the contribution of any set of individuals towards total inequality is directly observable. No existing axiomatically complete measure possesses this property, giving the new measure a degree of dominance over techniques such as the Gini coefficient and Theil's entropy measures. An empirical illustration using U.S. unit-record income data demonstrates the decomposition: persons self-described as 'White' or 'Japanese' in the U.S. census contribute a greater share of total inequality, relative to their population sizes, than persons from other racial groups. Other theoretical work in the thesis focuses on the construction and interpretation of Lorenz curves. A new parametric functional form for estimating the Lorenz curve is presented, and closed-form expressions for the implicit probability density function, cumulative distribution function, and Gini coefficient are derived. The proposed Lorenz curve is shown to fit a range of real-world data better than other single-parameter specifications such as the Pareto formulation. A separate chapter addresses Lorenz curve estimation from grouped data by constructing a convex interpolating spline, which provides better estimates of the Gini coefficient than other interpolation techniques and always satisfies the regularity conditions required of a Lorenz curve. Additional work examines the link between Lorenz curves and GE inequality metrics: the thesis shows that these indices may be derived as direct functions of the Lorenz curve, and gives analytical expressions for various GE measures in terms of Lorenz curve parameters. These results provide a basis for studying how Lorenz curve construction affects the calculation of entropy-based inequality measures. The thesis concludes with an empirical study of income inequality and mobility in Australia using unit-record household data from the HILDA panel, showing that Australian income mobility is slightly lower than in other developed countries and that much of the observed mobility occurs at the lower end of the income distribution.
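Two of the standard indices the abstract refers to, Theil's T (the alpha = 1 member of the Generalized Entropy family) and the Gini coefficient, are straightforward to compute; the sketch below shows both. The thesis's new decomposable index is not reproduced here, and the sample income vector is invented for illustration.

```python
import numpy as np

def theil_t(incomes):
    """Theil's T index: mean of (y/mu) * ln(y/mu). Assumes strictly positive incomes."""
    y = np.asarray(incomes, dtype=float)
    mu = y.mean()
    return np.mean((y / mu) * np.log(y / mu))

def gini(incomes):
    """Gini coefficient via the rank-weighted formula on sorted incomes."""
    y = np.sort(np.asarray(incomes, dtype=float))
    n = y.size
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * y)) / (n * np.sum(y)) - (n + 1.0) / n

incomes = [12_000, 25_000, 31_000, 47_000, 110_000]   # illustrative data
print(theil_t(incomes), gini(incomes))
```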
236. Scaling conditional random fields for natural language processing. Cohn, Trevor A. (n.d.)
This thesis deals with the use of Conditional Random Fields (CRFs; Lafferty et al., 2001) for Natural Language Processing (NLP). CRFs are probabilistic models for sequence labelling which are particularly well suited to NLP. They have many compelling advantages over other popular models such as Hidden Markov Models and Maximum Entropy Markov Models (Rabiner, 1990; McCallum et al., 2001), and have been applied to a number of NLP tasks with considerable success (e.g., Sha and Pereira, 2003; Smith et al., 2005). Despite their apparent success, CRFs suffer from two main failings. Firstly, they often over-fit the training sample. This is a consequence of their considerable expressive power, and can be limited by a prior over the model parameters (Sha and Pereira, 2003; Peng and McCallum, 2004). Secondly, the standard methods for CRF training are often very slow, sometimes requiring weeks of processing time. This efficiency problem is largely ignored in the current literature, although in practice the cost of training prevents the application of CRFs to many new, more complex tasks, and also prevents the use of densely connected graphs, which would allow much richer feature sets. (For the complete abstract, open the document.)
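The training cost the abstract refers to comes from exact inference: every gradient step requires the forward algorithm, at O(T K^2) per sentence for T positions and K labels. A minimal sketch of that computation for a linear-chain CRF follows; the array layout and function names are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def crf_log_partition(emissions, transitions):
    """Log partition function log Z of a linear-chain CRF (forward algorithm).

    emissions:   (T, K) per-position label scores (log potentials)
    transitions: (K, K) label-to-label scores (log potentials)
    Training needs this quantity, and its gradient, for every sentence
    on every iteration, which is the dominant cost of exact training.
    """
    log_alpha = emissions[0]
    for t in range(1, emissions.shape[0]):
        # log-sum-exp over the previous label, computed stably
        scores = log_alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        log_alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

def log_likelihood(labels, emissions, transitions):
    """Log-probability of one label sequence under the chain CRF."""
    score = emissions[0, labels[0]]
    for t in range(1, len(labels)):
        score += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    return score - crf_log_partition(emissions, transitions)
```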
238. Maximum Entropy Regularisation Applied to Ultrasonic Image Reconstruction. Battle, David John (1999)
Image reconstruction, in common with many other inverse problems, is often mathematically ill-posed in the sense that solutions are neither stable nor unique. Ultrasonic image reconstruction is particularly notorious in this regard, with narrow transducer bandwidths and limited, sometimes sparsely sampled, apertures posing formidable difficulties for conventional signal processing. To overcome these difficulties, some form of regularisation is mandatory, whereby the ill-posed problem is restated as a closely related, well-posed problem and then solved uniquely. This thesis explores the application of maximum entropy (MaxEnt) regularisation to the problem of reconstructing complex-valued imagery from sparsely sampled coherent ultrasonic field data, with particular emphasis on three-dimensional problems in the non-destructive evaluation (NDE) of materials. MaxEnt has not previously been applied to this class of problem, yet in comparison with many other approaches to image reconstruction it emerges as the clear leader in terms of resolution and overall image quality. To account for this performance, it is argued that the default image model used with MaxEnt is particularly meaningful in cases of ultrasonic scattering by objects embedded in homogeneous media. To establish physical and mathematical insights into the forward problem, linear equations describing scattering from penetrable and impenetrable objects are first derived using the Born and physical optics approximations, respectively. These equations are then expressed as a shift-invariant computational model that explicitly incorporates sparse sampling. To validate this model, time-domain scattering responses are computed and compared with analytical solutions for a simple canonical test case drawn from the field of NDE; the numerically computed responses accurately reproduce the analytical ones. To solve inverse scattering problems via MaxEnt, the robust Cambridge algorithm is generalised to the complex domain and extended to handle broadband (multiple-frequency) data. Two versions of the augmented algorithm are then compared with a range of other algorithms, including several linearly regularised algorithms and, owing to its acknowledged status as a competitor with MaxEnt in radio astronomy, the non-linear CLEAN algorithm. These comparisons are made through simulated 3-D imaging experiments under conditions of both complete and sparse aperture sampling, with low and high levels of additive Gaussian noise. As required in any investigation of inverse problems, experimental confirmation of algorithmic performance is emphasised, and two common imaging geometries relevant to NDE are selected for this purpose. In monostatic synthetic aperture imaging experiments involving side-drilled holes in an aluminium plate and test objects immersed in water, MaxEnt image reconstruction is demonstrated to be robust against grating-lobe and side-lobe formation, in addition to temporal bandwidth restriction. This enables efficient reconstruction of 2-D and 3-D images from small numbers of discrete samples in the spatial and frequency domains. The thesis concludes with the design and testing of a novel polyvinylidene fluoride (PVDF) bistatic array transducer that offers advantages over conventional point-sampled arrays in terms of construction simplicity and signal-to-noise ratio. This ultra-sparse orthogonal array is the only one of its kind yet demonstrated, and was made possible by MaxEnt signal processing.
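As a rough illustration of entropy-regularised reconstruction in general (not the Cambridge algorithm the thesis actually extends, and restricted to real, nonnegative images rather than the complex-valued case treated there), one can penalise a least-squares data fit with the entropy of the image relative to a default model:

```python
import numpy as np

def maxent_reconstruct(A, b, model=1.0, lam=0.1, step=1e-3, iters=5000):
    """Gradient-descent sketch of MaxEnt-regularised reconstruction.

    Minimises ||A x - b||^2 - lam * S(x) with the standard entropy
    S(x) = sum(x - m - x * log(x / m)) relative to a default model m,
    whose gradient is -log(x / m). All parameter values here are
    illustrative assumptions.
    """
    n = A.shape[1]
    m = np.full(n, model, dtype=float)
    x = m.copy()                      # start at the default model
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b) + lam * np.log(x / m)
        x = np.clip(x - step * grad, 1e-12, None)   # keep the image positive
    return x

# Example: recover a sparse spike train from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60); x_true[[10, 25, 40]] = [2.0, 1.0, 3.0]
x_hat = maxent_reconstruct(A, A @ x_true, model=0.01, lam=0.05)
```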
240. Analysis of access-to-space missions utilizing on-board energy management and entropic analysis. Winter, Tyler Forrest (January 2008)
Thesis (M.S.), Missouri University of Science and Technology, 2008. Includes bibliographical references (pp. 127-129).