221

IfD - information for discrimination

Cai, Di January 2004 (has links)
The problem of term mismatch and ambiguity has long been a serious and outstanding one in IR. It can lead the system to formulate an incomplete and imprecise query representation, and hence to retrieval failure. Many query reformulation methods have been proposed to address the problem. These methods employ term classes considered to be related to individual query terms. They are hindered by the computational cost of term classification, and by the fact that the terms in a class are generally related to the specific query term belonging to the class rather than relevant to the context of the query as a whole. In this thesis we propose a series of methods for automatic query reformulation (AQR). The methods constitute a formal model called IfD, standing for Information for Discrimination. In IfD, each discrimination measure is modelled as the information contained in terms supporting one of two opposite hypotheses. The extent of association of terms with the query can thus be defined directly in terms of the discrimination. The strength of association of candidate terms with the query can then be computed, and good terms can be selected to enhance the query. Justifications for IfD are presented from several aspects: formal interpretations of information for discrimination are introduced to show its soundness; criteria are put forward to show its rationality; properties of discrimination measures are analysed to show its appropriateness; examples are examined to show its usability; an extension is discussed to show its potential; an implementation is described to show its feasibility; comparisons with other methods are made to show its flexibility; and improvements in retrieval performance are exhibited to show its capability. Our conclusion is that the advantages and promise of IfD should make it an indispensable methodology for AQR, which we believe can be an effective technique for improving retrieval performance.
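IfD defines its own family of discrimination measures; purely to give a flavour of the general idea — scoring a candidate term by the information it contributes towards one of two opposing hypotheses (text drawn from a feedback sample versus the collection at large) — a minimal pointwise-KL sketch follows. The document interface and the specific KL scoring are illustrative assumptions, not the IfD model itself.

```python
import math
from collections import Counter

def expansion_terms(feedback_docs, collection_docs, k=10):
    """Rank candidate terms by a discrimination score: the pointwise
    KL contribution of each term towards the hypothesis that text came
    from the feedback sample rather than the collection at large.
    (Illustrative only -- IfD defines its own measures.)"""
    fb = Counter(t for doc in feedback_docs for t in doc)
    coll = Counter(t for doc in collection_docs for t in doc)
    fb_total, coll_total = sum(fb.values()), sum(coll.values())

    def score(term):
        p = fb[term] / fb_total            # P(term | feedback sample)
        q = coll[term] / coll_total        # P(term | collection)
        return p * math.log(p / q)         # information for discrimination

    # feedback docs are assumed drawn from the collection, so q > 0
    return sorted(fb, key=score, reverse=True)[:k]

# docs are token lists, e.g. expansion_terms([["query", "drift"]], all_docs)
```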
222

Probability models for information retrieval based on divergence from randomness

Amati, Giambattista January 2003 (has links)
This thesis devises a novel methodology based on probability theory, suitable for the construction of term-weighting models of Information Retrieval. Our term-weighting functions are created within a general framework made up of three components, each built independently of the others. We obtain the term-weighting functions from the general model in a purely theoretical way, instantiating each component with different probability distribution forms. The thesis begins by investigating the nature of the statistical inference involved in Information Retrieval. We explore the estimation problem underlying the process of sampling. De Finetti’s theorem is used to show how to convert the frequentist approach into Bayesian inference, and we present and employ the derived estimation techniques in the context of Information Retrieval. We initially pay particular attention to the construction of the basic sample spaces of Information Retrieval. The notion of single or multiple sampling from different populations in the context of Information Retrieval is extensively discussed and used throughout the thesis. The language modelling approach and the standard probabilistic model are studied under the same foundational view and are experimentally compared to the divergence-from-randomness approach. In revisiting the main information retrieval models in the literature, we show that even the language modelling approach can be exploited to assign term-frequency normalization to the models of divergence from randomness. We finally introduce a novel framework for query expansion. This framework is based on the models of divergence from randomness and can be applied to arbitrary models of IR, divergence-based, language modelling and probabilistic models included. We have carried out a very large number of experiments, and the results show that the framework generates highly effective Information Retrieval models.
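One well-known instantiation of the three-component framework is the InL2 weighting model: an inverse-document-frequency randomness model I(n), Laplace first normalisation (L), and the H2 term-frequency normalisation. A minimal sketch, with parameter names chosen here for illustration:

```python
import math

def inl2_weight(tf, doc_len, avg_doc_len, n_docs, doc_freq, c=1.0):
    """InL2: I(n) randomness model x Laplace first normalisation,
    with H2 term-frequency normalisation (c is the free parameter)."""
    tfn = tf * math.log2(1.0 + c * avg_doc_len / doc_len)   # H2: normalise tf by doc length
    inf1 = math.log2((n_docs + 1.0) / (doc_freq + 0.5))     # informative content, -log2 Prob1
    return (tfn / (tfn + 1.0)) * inf1                       # Laplace normalisation scales the gain

# e.g. inl2_weight(tf=3, doc_len=200, avg_doc_len=250, n_docs=10_000, doc_freq=120)
```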
223

Functional programming and graph algorithms

King, David Jonathan January 1996 (has links)
This thesis is an investigation of graph algorithms in the non-strict, purely functional language Haskell. Emphasis is placed on the importance of achieving an asymptotic complexity as good as with conventional languages. This is achieved by using the monadic model for including actions on the state. Work on the monadic model was carried out at Glasgow University by Wadler, Peyton Jones, and Launchbury in the early nineties and has opened up many diverse application areas. One area is the ability to express data structures that require sharing. Although graphs are not presented in this style, the data structures that graph algorithms use are. Several examples of stateful algorithms are given, including union/find for disjoint sets and the linear-time sort binsort. The graph algorithms presented are not new, but are traditional algorithms recast in a functional setting. Examples include strongly connected components, biconnected components, Kruskal's minimum-cost spanning tree, and Dijkstra's shortest paths. The presentation is lucid, giving more insight than usual. The functional setting allows for complete calculational-style correctness proofs, which is demonstrated with many examples. The benefits of using a functional language for expressing graph algorithms are quantified by looking at the issues of execution time, asymptotic complexity, correctness, and clarity, in comparison with traditional approaches. The intention is to be as objective as possible, pointing out both the weaknesses and the strengths of using a functional language.
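As a flavour of the kind of stateful structure involved — which the thesis expresses monadically in Haskell — here is the classic imperative union/find with path compression and union by rank, sketched in Python for brevity rather than in the thesis's own setting:

```python
class DisjointSets:
    """Union/find over elements 0..n-1 -- the kind of mutable-state
    structure the thesis encapsulates with the monadic model."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                 # already in the same set
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra              # union by rank: attach shorter tree
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```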
224

Investigating TCP performance in mobile ad hoc networks

Papanastasiou, Stylianos January 2006 (has links)
Mobile ad hoc networks (MANETs) have become increasingly important in view of their promise of ubiquitous connectivity beyond traditional fixed infrastructure networks. Such networks, consisting of potentially highly mobile nodes, have provided new challenges by introducing special considerations stemming from the unique characteristics of the wireless medium and the dynamic nature of the network topology. The TCP protocol, which has been widely deployed on a multitude of internetworks including the Internet, is naturally viewed as the de facto reliable transport protocol for use in MANETs. However, assumptions made at TCP’s inception reflected characteristics of the prevalent wired infrastructure of networks at the time and can lead to sub-optimal performance when used in wireless ad hoc environments. The basic presupposition underlying TCP congestion control is that packet losses are predominantly an indication of congestion in the network. The detrimental effect of this assumption on TCP’s performance in MANET environments has been a long-standing research problem. Hence, previous work has focused on addressing the ambiguity behind the cause of packet loss as perceived by TCP, proposing changes at various levels across the network protocol stack, such as at the MAC mechanism of the transceiver or via coupling with the routing protocol at the network layer. The main challenge addressed by the current work is to propose new methods to ameliorate the ill effects of TCP’s misinterpretation of the causes of packet loss in MANETs. An assumed restriction on any proposed modification is that the resulting performance increases should be achievable by introducing limited changes confined to the transport layer. Such a restriction aids incremental adoption and ease of deployment by requiring minimal implementation effort. Further, the issue of packet loss ambiguity, from a transport-layer perspective, has, by definition, to be dealt with in an end-to-end fashion. As such, a proposed solution may involve implementation at the sender, the receiver or both to address TCP’s shortcomings. Some attempts at describing TCP behaviour in MANETs have previously been reported in the literature. However, a thorough enquiry into the performance of those TCP agents popular in terms of research and adoption has been lacking. Specifically, very little work has been performed on an exhaustive analysis of TCP variants across different MANET routing protocols and under various mobility conditions. The first part of the dissertation addresses this shortcoming through extensive simulation evaluation in order to ascertain the relative performance merits of each TCP variant in terms of achieved goodput over dynamic topologies. Careful examination reveals the sub-par performance of TCP Reno and the largely equivalent performance of NewReno and SACK, whilst the effectiveness of a proactive TCP variant (Vegas) is explicitly stated and justified for the first time in a dynamic MANET environment. Examination of the literature reveals that, in addition to losses caused by route breakages, the hidden terminal effect contributes significantly to non-congestion-induced packet losses in MANETs, which in turn has a noticeably negative impact on TCP goodput.

By adapting the conservative slow start mechanism of TCP Vegas into a form suitable for reactive TCP agents, like Reno, NewReno and SACK, the second part of the dissertation proposes a new Reno-based congestion avoidance mechanism which increases TCP goodput considerably across long paths by mitigating the negative effects of hidden terminals and alleviating some of the ambiguity of non-congestion-related packet loss in MANETs. The proposed changes keep the end-to-end semantics of TCP intact and are applicable solely to the sender. The new mechanism is further contrasted with an existing transport-layer-focused solution and is shown to perform significantly better in a range of dynamic scenarios. As a solution from an end-to-end perspective may be applicable to either or both communicating ends, the idea of implementing receiver-side alterations is also explored. Previous work has been primarily concerned with reducing receiver-generated cumulative ACK responses by “bundling” them into as few packets as possible, thereby reducing misinterpretations of packet loss due to hidden terminals. However, a thorough evaluation of such receiver-side solutions reveals limitations in common evaluation practices and in the solutions themselves. In an effort to address this shortcoming, the third part of this research work first specifies a tighter problem domain, identifying the circumstances under which the problem may be tackled by an end-to-end solution. Subsequent original analysis reveals that by taking into account optimisations possible in wireless communications, namely the partial or complete omission of the RTS/CTS handshake, noticeable improvements in TCP goodput are achievable, especially over long paths. This novel modification is applied in a variety of topologies and is assessed using new metrics to more accurately gauge its effectiveness in a wireless multihop environment.
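For context, the proactive mechanism the second part adapts is Vegas congestion avoidance, which estimates the backlog queued in the network from RTT measurements and adjusts the window before losses occur. A textbook sketch of that rule (not the thesis's modified Reno-based mechanism):

```python
def vegas_window(cwnd, base_rtt, sampled_rtt, alpha=1, beta=3):
    """Classical TCP Vegas congestion avoidance: compare the expected
    and actual sending rates and adjust the window proactively."""
    expected = cwnd / base_rtt                 # rate with no queueing
    actual = cwnd / sampled_rtt                # rate actually achieved
    backlog = (expected - actual) * base_rtt   # estimated packets in queues
    if backlog < alpha:
        return cwnd + 1                        # spare capacity: grow
    if backlog > beta:
        return cwnd - 1                        # queues building: back off
    return cwnd                                # in the sweet spot: hold
```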
225

Cheap deforestation for non-strict functional languages

Gill, Andrew John January 1996 (has links)
In functional languages, intermediate data structures are often used as glue to connect separate parts of a program. Deforestation is the process of automatically removing such intermediate data structures. In this thesis we present and analyse a new approach to deforestation that is both practical and general. We analyse in detail the problem of list removal rather than the more general problem of arbitrary data structure removal. This more limited scope allows a complete evaluation of the pragmatic aspects of using our deforestation technology. We have implemented our list deforestation algorithm in the Glasgow Haskell compiler, and the implementation has provided practical feedback. One important conclusion is that a new analysis is required to infer function arities and the linearity of lambda abstractions; this analysis renders the basic deforestation algorithm far more effective. We give a detailed assessment of our implementation of deforestation, measuring its effectiveness on a suite of real application programs and observing the costs of the deforestation algorithm.
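GHC performs deforestation by program transformation over Haskell. Purely as an illustration of the effect — fusing a list producer with its consumer so the "glue" list is never built — the same rewrite can be done by hand in Python, with a generator expression standing in for the removed intermediate lists:

```python
# Staged version: each step materialises an intermediate list,
# the "glue" that deforestation aims to remove.
def sum_squares_staged(n):
    xs = list(range(n))              # intermediate list 1
    ys = [x * x for x in xs]         # intermediate list 2
    return sum(ys)

# "Deforested" version: producer and consumer are fused, so no
# intermediate list ever exists. This is the effect the compiler
# transformation achieves automatically on compositions such as
# sum (map sq [0..n-1]) in Haskell.
def sum_squares_fused(n):
    return sum(x * x for x in range(n))
```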
226

A computational model of space-variant vision based on a self-organised artificial retina tessellation

Balasuriya, Sumitha January 2006 (has links)
No description available.
227

A web-based intelligent learning environment for the teaching of industrial continuous quality improvement

Chi, Xuesong January 2008 (has links)
This thesis presents a methodology for developing an intelligent platform for continuous quality improvement, in order to deliver an efficient learning environment in which students learn quality improvement techniques in a structured manner. Many quality improvement programmes fail because these techniques and their applications are not understood in a specific domain. The proposed methodology helps students identify the fundamental link between theory and realistic systems, as well as providing educators with an effective technique for teaching continuous quality improvement. A prototype system for the web-based learning environment is described, demonstrating the implementation of the methodology and the development of intrinsic links between a virtual learning environment and real systems. Through tests carried out during two quality engineering courses, the study demonstrates, with positive results, that a game-based learning paradigm immerses and motivates students in the web-based virtual environments. By extending the prototype modules, the capability of the proposed system to balance the relationship between quality, productivity and cost is highlighted. This delivers a holistic and multidimensional approach for quality engineering courses and training, with opportunities to extend the benefits of the virtual learning environment to other areas of expertise, such as operations and supply chain management. This study also explores the importance of capturing the dynamic characteristics of a real system and representing them within a virtual learning environment that aims to provide a realistic experience to its users. Two artificial neural network modules (a Fuzzy Adaptive Resonance Theory neural network and a back-propagation neural network) are implemented to facilitate the understanding of statistical tools and different types of variation in a realistic process.
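The canonical statistical tool for distinguishing the types of variation mentioned here is the Shewhart control chart. A toy illustration of that distinction (common-cause versus special-cause variation via 3-sigma control limits), offered as general background rather than as the thesis's implementation:

```python
import statistics

def classify_variation(samples, baseline):
    """Shewhart-style check: points outside the 3-sigma control limits
    signal special-cause variation; points inside reflect common-cause
    (random) variation."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma   # upper/lower control limits
    return [(x, "special-cause" if not lcl <= x <= ucl else "common-cause")
            for x in samples]
```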
228

Approximate dynamic programming with parallel stochastic planning operators

Child, Christopher H. T. January 2011 (has links)
This thesis presents an approximate dynamic programming (ADP) technique for environment modelling agents. The agent learns a set of parallel stochastic planning operators (P-SPOs) by evaluating changes in its environment in response to actions, using an association rule mining approach. An approximate policy is then derived by iteratively improving state value aggregation estimates attached to the operators, using the P-SPOs as a model in a Dyna-Q-like architecture. Reinforcement learning and dynamic programming are powerful techniques for automated agent decision making in stochastic environments. Dynamic programming is effective when there is a known environment model, while reinforcement learning is effective when a model is not available. Both techniques derive a policy: a mapping from each environment state to an action which optimizes the long-term reward the agent receives. The standard methods become less effective as the state space for the environment grows, because they require a value to be associated with each state, the storage and processing of which is exponential in the number of state variables. Resolving this “curse of dimensionality” is an important topic of research amongst all communities working on this problem. Two key methods are to: (i) derive an estimate of the value (approximate dynamic programming) using function approximation or state aggregation; or (ii) build a model of the environment from experience. This thesis presents a method of combining these approaches by exploiting structure in the state transition and value functions, captured in a set of planning operators which are learnt through experience in the environment. Standard planning operators define the deterministic changes that occur in an environment in response to an action. This work presents Parallel Stochastic Planning Operators (P-SPOs), a novel form of planning operator providing a structured model of the state transition function in environments which are non-deterministic and in which changes can occur outside the influence of actions. Next, an automated method for extracting P-SPOs from observations in an environment is explored using an adaptation of association rule mining. Finally, methods of relating the state transition structure encapsulated in the P-SPOs to state values, using the operators to store state value aggregation estimates, are evaluated. The framework described provides a method by which approximate dynamic programming can be applied by designers of AI agents and AI planning systems for which they have minimal prior knowledge. The framework and P-SPO-based implementations are tested against standard techniques in two benchmark stochastic environments: a “slippery gripper” block-painting robot and a “predator-prey” agent environment. Experimental results show that an agent using a P-SPO-based approach is able to learn an accurate model of its environment if successor state variables exhibit conditional independence, and an approximate model in the non-independent case. Results also demonstrate that the agent’s ability to generalise to previously unseen states using the model allows it to form an improved policy over an agent employing a standard Dyna-Q-based technique. Finally, an approximate policy stored in state aggregation estimates attached to operators is shown to be optimal in experiments for which the P-SPO set contains sufficient information for effective aggregations to be formed.
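The planning loop the operators plug into is the standard Dyna-Q architecture. A tabular sketch follows, with a flat (state, action) model table standing in for the structured P-SPO model the thesis proposes; the env interface (reset/step/actions) is assumed for illustration:

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=200, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q: learn Q from real steps, record each transition
    in a model, then replay random model transitions as planning."""
    Q = defaultdict(float)
    model = {}                               # (s, a) -> (reward, next_s, done)

    def update(s, a, r, s2, done):
        target = r if done else r + gamma * max(Q[s2, b] for b in env.actions)
        Q[s, a] += alpha * (target - Q[s, a])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda x: Q[s, x]))
            s2, r, done = env.step(a)
            update(s, a, r, s2, done)        # direct reinforcement learning
            model[s, a] = (r, s2, done)      # model learning
            for _ in range(planning_steps):  # planning: replay the model
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                update(ps, pa, pr, ps2, pdone)
            s = s2
    return Q
```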
229

Connection robustness for wireless moving networks using transport layer multi-homing

Behbahani, Peyman January 2010 (has links)
Given any form of mobility management through wireless communication, one useful enhancement is improving the reliability and robustness of transport-layer connections in a heterogeneous mobile environment. This is particularly true in the case of mobile networks with multiple vertical handovers. In this thesis, issues and challenges in mobility management for mobile terminals in such a scenario are addressed, and a number of techniques to facilitate such handovers and improve their efficiency and QoS are proposed and investigated. These are considered in an end-to-end context, with all protocol changes confined to the middleware of the connection, so that the network handles the handover and transparency to the end user is preserved. The thesis begins by investigating mobility management solutions, particularly transport-layer models, and makes significant observations pertinent to multi-homing for moving networks in general. A new scheme for transport-layer tunnelling based on SCTP is proposed. Building on this, a novel protocol to handle seamless network mobility in heterogeneous mobile networks, named nSCTP, is proposed. The efficiency of this protocol with respect to handover QoS parameters in an end-to-end connection, where both wired and wireless networks are available, is considered. It is shown analytically and experimentally that the new scheme can significantly increase throughput, particularly when mobile networks roam frequently. A detailed plan for future improvements and extensions is also provided.
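nSCTP itself is specified in the thesis; as a hedged sketch of the transport-layer multi-homing idea it builds on — each endpoint keeps several paths, monitors their health end-to-end, and hands over when the current one degrades — consider the following, where all names and thresholds are illustrative assumptions:

```python
class Path:
    """One destination address of a multi-homed association, with a
    smoothed RTT health estimate (illustrative, not the nSCTP spec)."""
    def __init__(self, remote_addr, rtt_threshold=0.5):
        self.remote_addr = remote_addr
        self.rtt_threshold = rtt_threshold
        self.srtt = None

    def report_rtt(self, sample):
        # exponentially weighted RTT smoothing with the usual 7/8 gain
        self.srtt = sample if self.srtt is None else 0.875 * self.srtt + 0.125 * sample

    def healthy(self):
        return self.srtt is not None and self.srtt < self.rtt_threshold

def select_path(paths, current):
    """Keep sending on the current path while it is healthy; otherwise
    hand over to the first healthy alternative -- purely end-to-end,
    with no network-layer involvement."""
    if current.healthy():
        return current
    return next((p for p in paths if p.healthy()), current)
```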
230

ERES methodology and approximate algebraic computations

Christou, D. January 2011 (has links)
The area of approximate algebraic computations is a fast-growing area of modern computer algebra which has attracted many researchers in recent years. Amongst the various algebraic computations, the computation of the Greatest Common Divisor (GCD) and the Least Common Multiple (LCM) of a set of polynomials are challenging problems that arise from several applications in applied mathematics and engineering. Several methods have been proposed for the computation of the GCD of polynomials using tools and notions either from linear algebra or linear systems theory. Amongst these, a matrix-based method, which relies on the properties of the GCD as an invariant of the original set of polynomials under elementary row transformations and shifting of elements in the rows of a matrix, shows interesting properties in relation to the problem of the GCD of sets of many polynomials. These transformations are referred to as Extended-Row-Equivalence and Shifting (ERES) operations, and their iterative application to a basis matrix, which is formed directly from the coefficients of the given polynomials, formulates the ERES method for the computation of the GCD of polynomials and establishes the basic principles of the ERES methodology. The main objective of the present thesis concerns the improvement of the ERES methodology and its use for the efficient computation of the GCD and LCM of sets of several univariate polynomials with parameter uncertainty, as well as the extension of its application to other related algebraic problems. New theoretical and numerical properties of the ERES method are established in this thesis by introducing the matrix representation of the Shifting operation, which is used to change the position of the elements in the rows of a matrix. This important theoretical result opens the way for a new algebraic representation of the GCD of a set of polynomials, the remainder, and the quotient of Euclid's division for two polynomials based on ERES operations. The principles of the ERES methodology provide the means to develop numerical algorithms for the GCD and LCM of polynomials that inherently have the potential to work efficiently with sets of several polynomials with inexactly known coefficients. The present new implementation of the ERES method, referred to as the “Hybrid ERES Algorithm”, is based on the effective combination of symbolic-numeric arithmetic (hybrid arithmetic) and shows interesting computational properties concerning the approximate GCD and LCM problems. The evaluation of the quality, or “strength”, of an approximate GCD is equivalent to an evaluation of a distance problem in a projective space and is thus reduced to an optimisation problem. An efficient implementation of an algorithm computing the strength bounds is introduced here by exploiting some of the special aspects of the respective distance problem. Furthermore, a new ERES-based method has been developed for the approximate LCM which involves a least-squares minimisation process applied to a matrix formed from the remainders of Euclid's division by ERES operations. The residual from the least-squares process characterises the quality of the obtained approximate LCM. Finally, the developed framework of the ERES methodology is also applied to the representation of continued fractions to improve the stability criterion for linear systems based on the Routh-Hurwitz test.
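ERES operates on a basis matrix of the whole set rather than on pairs, but the Euclidean division it builds on, extended with a tolerance so that near-zero remainders are treated as zero for inexactly known coefficients, can be sketched directly. A toy approximate set-GCD routine, not the Hybrid ERES Algorithm itself:

```python
import numpy as np
from functools import reduce

def poly_gcd(a, b, tol=1e-8):
    """Euclid's algorithm on coefficient vectors (descending powers),
    treating remainders below the tolerance as zero (approximate GCD)."""
    a = np.trim_zeros(np.asarray(a, float), 'f')
    b = np.trim_zeros(np.asarray(b, float), 'f')
    while b.size:
        _, r = np.polydiv(a, b)          # Euclid's division: a = q*b + r
        r = np.trim_zeros(r, 'f')
        if r.size and np.max(np.abs(r)) < tol * np.max(np.abs(a)):
            r = np.array([])             # remainder vanishes under the tolerance
        a, b = b, r
    return a / a[0]                      # monic normalisation

def set_gcd(polys, tol=1e-8):
    """GCD of a set of polynomials by pairwise reduction."""
    return reduce(lambda p, q: poly_gcd(p, q, tol), polys)

# (x-1)(x+2), (x-1)(x-3), (x-1)(x+5): common factor x - 1
print(set_gcd([[1, 1, -2], [1, -4, 3], [1, 4, -5]]))   # -> [ 1. -1.]
```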
