251

Learning to select for information retrieval

Peng, Jie January 2010
The effective ranking of documents in search engines is based on various document features, such as the frequency of the query terms in each document, or the length and authoritativeness of each document. To obtain better retrieval performance, instead of using a single feature or a few features, there is a growing trend to create a ranking function by applying a learning to rank technique to a large set of features. Learning to rank techniques aim to generate an effective document ranking function by combining a large number of document features. Different ranking functions can be generated by using different learning to rank techniques or different document feature sets. While the generated ranking function may be uniformly applied to all queries, several studies have shown that different ranking functions favour different queries, and that retrieval performance can be significantly enhanced if an appropriate ranking function is selected for each individual query. This thesis proposes Learning to Select (LTS), a novel framework that selectively applies an appropriate ranking function on a per-query basis, regardless of the given query's type and the number of candidate ranking functions. In the learning to select framework, the effectiveness of a ranking function for an unseen query is estimated from neighbouring training queries. The proposed framework employs a classification technique (e.g. k-nearest neighbour) to identify the neighbouring training queries for an unseen query by using a query feature. In particular, a divergence measure (e.g. Jensen-Shannon), which determines the extent to which a document ranking function alters the scores of an initial ranking of documents for a given query, is proposed for use as a query feature. The ranking function that performs best on the identified training queries is then chosen for the unseen query. The proposed framework is thoroughly evaluated on two different TREC retrieval tasks (namely, the Web search and ad hoc search tasks) and on two large standard LETOR feature sets, which contain as many as 64 document features, deriving conclusions about the key components of LTS: the query feature and the identification of neighbouring queries. Two different types of experiments are conducted. The first is to select an appropriate ranking function from a number of candidate ranking functions. The second is to select multiple appropriate document features from a number of candidate document features, for building a ranking function. Experimental results show that our proposed LTS framework is effective both in selecting an appropriate ranking function and in selecting multiple appropriate document features, on a per-query basis. In addition, retrieval performance is further enhanced as the number of candidates increases, suggesting the robustness of the learning to select framework. This thesis also demonstrates how the LTS framework can be deployed in other search applications: the selective integration of a query-independent feature into a document weighting scheme (e.g. BM25), the selective estimation of the relative importance of different query aspects in a search diversification task (where the goal is to retrieve a ranked list of documents that provides maximum coverage for a given query while avoiding excessive redundancy), and the selective application of an appropriate resource for expanding and enriching a given query for document search within an enterprise. The effectiveness of the LTS framework is observed across these search applications, and on different collections, including a large-scale Web collection that contains over 50 million documents. This suggests the generality of the proposed learning to select framework. The main contributions of this thesis are the introduction of the LTS framework and the proposed use of divergence measures as query features for identifying similar queries. In addition, this thesis draws insights from a large set of experiments, involving four different standard collections, four different search tasks and large document feature sets, illustrating the effectiveness, robustness and generality of the LTS framework in tackling various retrieval applications.
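To make the selection step concrete, a minimal sketch follows (in Haskell, chosen here purely for illustration; it is not the thesis's implementation). The data layout, function names and the summing of neighbour performances are assumptions made for the example, not details taken from the thesis.

```haskell
import Data.List (sortOn)
import qualified Data.Map as M

-- Jensen-Shannon divergence between two (normalised, equal-length)
-- score distributions: the query feature measuring how much a ranking
-- function perturbs the initial ranking's score distribution.
jsDivergence :: [Double] -> [Double] -> Double
jsDivergence p q = 0.5 * kl p m + 0.5 * kl q m
  where
    m      = zipWith (\a b -> 0.5 * (a + b)) p q
    kl a b = sum [ x * logBase 2 (x / y) | (x, y) <- zip a b, x > 0, y > 0 ]

-- Choose, for an unseen query, the candidate ranking function that
-- performed best over the k training queries whose feature value lies
-- closest to the unseen query's feature value.
selectFunction
  :: Int                             -- k, number of neighbour queries
  -> [(Double, [(String, Double)])]  -- training queries: (feature, per-function performance)
  -> Double                          -- feature value of the unseen query
  -> String
selectFunction k train feat =
  fst (last (sortOn snd (M.toList totals)))  -- assumes a non-empty candidate set
  where
    neighbours = take k (sortOn (\(f, _) -> abs (f - feat)) train)
    totals     = M.fromListWith (+) (concatMap snd neighbours)
```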
252

On the performance of probabilistic flooding in wireless mobile ad hoc networks

Bani Yassein, Muneer O. January 2006
Broadcasting in MANETs has traditionally been based on flooding, but this can induce broadcast storms that severely degrade network performance due to redundant retransmissions, collisions and contention. Probabilistic flooding, where a node rebroadcasts a newly arrived one-to-all packet with some probability, p, was an early suggestion for reducing the broadcast storm problem. The first part of this thesis investigates the effects of a number of important MANET parameters on the performance of probabilistic flooding, including node speed, traffic load and node density. It transpires that these parameters have a critical impact both on the reachability and on the number of so-called "saved rebroadcast packets" achieved. For instance, across a range of rebroadcast probability values, as network density increases from 25 to 100 nodes, the reachability achieved by probabilistic flooding increases from 85% to 100%. Moreover, as node speed increases from 2 to 20 m/sec, reachability increases from 90% to 100%. The second part of this thesis proposes two new probabilistic algorithms that dynamically adjust the rebroadcast probability according to the node distribution, using only one-hop neighbourhood information and without requiring assistance from distance measurements or location-determination devices. The performance of the new algorithms is assessed and compared to blind flooding as well as the fixed probabilistic approach. It is demonstrated that the new algorithms have superior performance in terms of both reachability and saved rebroadcasts. For instance, the suggested algorithms can improve saved rebroadcasts by up to 70% and 47% compared to blind and fixed probabilistic flooding, respectively, even under conditions of high node mobility and high network density, without degrading reachability. The final part of the thesis assesses the impact of probabilistic flooding on the performance of routing protocols in MANETs. Our performance results indicate that using our new probabilistic flooding algorithms during route discovery enables AODV to achieve a higher data packet delivery ratio while keeping a lower routing overhead, compared to using blind and fixed probabilistic flooding. For instance, the packet delivery ratio using our algorithm is improved by up to 19% and 12% compared to using blind and fixed probabilistic flooding, respectively. This performance advantage is achieved with a routing overhead that is lower by up to 28% and 19% than in fixed probabilistic and blind flooding, respectively.
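As a rough illustration of the two families of algorithm compared above, the following sketch (Haskell; the thresholds and probabilities are invented for the example, not the values studied in the thesis) contrasts fixed probabilistic rebroadcasting with a density-adaptive variant that consults only the one-hop neighbour count.

```haskell
import System.Random (randomRIO)  -- from the random package

-- Fixed probabilistic flooding: rebroadcast a newly arrived one-to-all
-- packet with some fixed probability p.
shouldRebroadcast :: Double -> IO Bool
shouldRebroadcast p = do
  r <- randomRIO (0.0, 1.0)
  return (r < p)

-- A density-adaptive variant in the spirit of the proposed algorithms:
-- the rebroadcast probability is chosen from the one-hop neighbour
-- count alone, with no distance or location information. The thresholds
-- and probabilities below are illustrative assumptions.
adaptiveProbability :: Int -> Double
adaptiveProbability neighbours
  | neighbours < 6  = 0.9  -- sparse neighbourhood: rebroadcast almost always
  | neighbours < 15 = 0.6  -- medium density
  | otherwise       = 0.3  -- dense neighbourhood: most rebroadcasts are redundant
```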
253

Document ranking with quantum probabilities

Zuccon, Guido January 2012
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and the expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events, in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insight into approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the rankings recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to the other retrieved documents, as is the case in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. Indeed, it demonstrates that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and in retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research. These include (1) investigating estimations and approximations of quantum interference in qPRP, (2) exploiting complex numbers for the representation of documents and queries, and (3) applying the concepts underlying qPRP to tasks other than document ranking.
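Schematically, and in simplified notation rather than the thesis's formal development, the qPRP selection rule and its interference term take the following form:

```latex
\[
  d^{*} = \arg\max_{d}\Bigl( P(d) + \sum_{d' \in R} I_{d,d'} \Bigr),
  \qquad
  I_{d,d'} = 2\sqrt{P(d)\,P(d')}\,\cos\theta_{d,d'}
\]
```

Here $R$ is the set of documents already ranked, $P(d)$ is the estimated relevance probability, and $\theta_{d,d'}$ is the phase difference between the complex amplitudes associated with the two documents; how to estimate this interference term effectively for retrieval is one of the questions the thesis examines.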
254

An automated marking system for graphical user interfaces

Gray, Geoffrey Richard January 2008
This research investigates the feasibility and effectiveness of assessing students' programming solutions to Graphical User Interface (GUI) exercises in an automated fashion. Automated marking systems ease the burden on the staff involved in running a course and allow students to get results and feedback in a timely fashion. Several automated marking systems exist, but they are currently unable to mark GUIs: the inherent complexity of GUIs and the need for aesthetic analysis have rendered GUIs beyond the scope of most marking systems. The marking approach described in this thesis implements a number of novel concepts. By exploiting language design properties, such as the hierarchical relationship between components, it was possible to develop a framework capable of testing and marking students' GUI programs. Introspectively analysing the interface enables the marking system to obtain access to the intrinsic elements contained within the GUI. Once access has been obtained, the tests can be performed on the actual interface components themselves rather than on a mere representation. GUI assessment is more than functional testing: aesthetics play a major role in the creation of an interface. Existing aesthetic metrics do not provide the analytical capabilities required, owing to their failure to take colour into account. The distractive effects that colours have were therefore quantified and incorporated into the metrics. The results of the dynamic and aesthetic testing show that, through the implementation of the novel components detailed, the creation of a GUI marking system is feasible and its marking both consistent and effective. The design enables the system to return results in a timely fashion, and the effects that colour has can be seen in the results of basic aesthetic testing.
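By way of illustration only, the sketch below (Haskell) models the two ingredients described above: an introspective traversal of a hypothetical component hierarchy, and a placeholder colour-distraction weight. Both the data model and the weighting are assumptions for the example, not the system's actual design.

```haskell
-- A hypothetical component tree standing in for the introspected GUI;
-- the real system walks the interface objects themselves, but the same
-- hierarchical traversal applies.
data Component = Component
  { kind     :: String           -- e.g. "Button", "TextField"
  , colour   :: (Int, Int, Int)  -- RGB, 0..255 per channel
  , children :: [Component]
  }

-- Flatten the hierarchy so that functional tests and aesthetic metrics
-- can run over every intrinsic element rather than a representation:
allComponents :: Component -> [Component]
allComponents c = c : concatMap allComponents (children c)

-- A placeholder colour-distraction weight: heavily saturated colours
-- score higher. This weighting is invented for the example; it is not
-- the metric developed in the thesis.
distraction :: Component -> Double
distraction c = fromIntegral (maximum [r, g, b] - minimum [r, g, b]) / 255
  where (r, g, b) = colour c
```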
255

Lifting of operations in modular monadic semantics

Jaskelioff, Mauro Javier January 2009
Monads have become a fundamental tool for structuring denotational semantics and programs by abstracting a wide variety of computational features such as side-effects, input/output, exceptions, continuations and non-determinism. In this setting, the notion of a monad is equipped with operations that allow programmers to manipulate these computational effects. For example, a monad for side-effects is equipped with operations for setting and reading the state, and a monad for exceptions is equipped with operations for throwing and handling exceptions. When several effects are involved, one can employ the incremental approach to modular monadic semantics, which uses monad transformers to build up the desired monad one effect at a time. However, a limitation of this approach is that the effect-manipulating operations need to be manually lifted to the resulting monad, and consequently, the lifted operations are non-uniform. Moreover, the number of liftings needed in a system grows as the product of the number of monad transformers and operations involved. This dissertation proposes a theory of uniform lifting of operations that extends the incremental approach to modular monadic semantics with a principled technique for lifting operations. Moreover, the theory is generalised from monads to monoids in a monoidal category, making it possible to apply it to structures other than monads. The extended theory is taken to practice with the implementation of a new extensible monad transformer library in Haskell, and with the use of modular monadic semantics to obtain modular operational semantics.
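The lifting problem is easy to exhibit in a few lines of Haskell (a sketch of the standard monad-transformer situation using the transformers library, not the library developed in the dissertation):

```haskell
import Control.Monad.Trans.Class  (lift)
import Control.Monad.Trans.Except (ExceptT)
import Control.Monad.Trans.State  (StateT, get, put)

-- A two-effect stack built incrementally: exceptions layered over state.
type App s e = ExceptT e (StateT s IO)

-- First-order operations such as get and put lift uniformly with `lift`,
-- one application per transformer layer above the operation's home monad:
tick :: Num s => App s e s
tick = do
  n <- lift get            -- get lives in StateT; lift it through ExceptT
  lift (put (n + 1))
  return n

-- Operations whose types mention the monad in argument position, such as
-- an exception handler of type  m a -> (e -> m a) -> m a, cannot be
-- transported by `lift` alone; this forces the non-uniform, hand-written
-- liftings that the dissertation's theory of uniform lifting eliminates.
```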
256

HIPPO: an adaptive open hypertext system

Newton, Paul K. January 1998
The hypertext paradigm offers a powerful way of modelling complex knowledge structures. Information can be arranged into networks and connected using hypertext links. This has led to the development of more open hypertext designs, which allow hypertext services to be integrated seamlessly into the user's environment. Recent research has also seen the emergence of adaptive hypertext, which uses feedback from the user to modify objects in the hypertext. The research presented in this thesis describes the HIPPO hypertext model, which combines many of the ideas in open hypertext research with existing work on adaptive hypertext systems. The idea of fuzzy anchors is introduced, allowing authors to express the uncertainty and vagueness which is inherent in a hypertext anchor. Fuzzy anchors use partial truth values which allow authors to define a "degree of membership" for anchors. Anchors no longer have fixed, discrete boundaries, but have more in common with the contour lines used in map design. These fuzzy anchors are used as the basis for an adaptive model, so that anchors can be modified in response to user actions. The HIPPO linking model introduces linkbase trees, which combine link collections into inheritance hierarchies. These are used to construct reusable inheritance trees, which allow authors to reuse and build on existing link collections. An adaptive model is also presented to modify these linkbase hierarchies. Finally, the HIPPO system is re-implemented using a widely distributed architecture. This distributed model implements a hypertext system as a collection of lightweight, distributed services. The benefits of this distributed hypertext model are discussed, and an adaptive model is then suggested.
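A hypothetical rendering of the fuzzy-anchor idea in Haskell may help fix intuitions; the representation and the adaptive rule below are invented for illustration, not taken from the HIPPO implementation:

```haskell
-- An invented representation of a fuzzy anchor: each position in the
-- anchor region carries a degree of membership in [0,1], so boundaries
-- resemble contour lines rather than hard edges.
data FuzzyAnchor = FuzzyAnchor
  { document   :: String
  , membership :: [(Int, Double)]  -- (character offset, degree of membership)
  }

-- An illustrative adaptive rule: following a link from near an offset
-- nudges membership upward around that offset, so the anchor's contours
-- shift with use. The rate and radius are assumptions for the example.
reinforce :: Int -> FuzzyAnchor -> FuzzyAnchor
reinforce offset a = a { membership = map adjust (membership a) }
  where
    rate   = 0.1
    radius = 5
    adjust (pos, mu)
      | abs (pos - offset) <= radius = (pos, min 1.0 (mu + rate * (1.0 - mu)))
      | otherwise                    = (pos, mu)
```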
257

A genetic programming hyper-heuristic approach to automated packing

Hyde, Matthew January 2010
This thesis presents a programme of research which investigated a genetic programming hyper-heuristic methodology to automate the heuristic design process for one, two and three dimensional packing problems. Traditionally, heuristic search methodologies operate on a space of potential solutions to a problem. In contrast, a hyper-heuristic is a heuristic which searches a space of heuristics, rather than a solution space directly. The majority of hyper-heuristic research papers, so far, have involved selecting a heuristic, or sequence of heuristics, from a set pre-defined by the practitioner. Less well studied are hyper-heuristics which can create new heuristics from a set of potential components. This thesis presents a genetic programming hyper-heuristic which makes it possible to automatically generate heuristics for a wide variety of packing problems. The genetic programming algorithm creates heuristics by intelligently combining components, and the evolved heuristics are shown to be highly competitive with human-created heuristics. The methodology is first applied to one dimensional bin packing, where the evolved heuristics are analysed to determine their quality, specialisation, robustness, and scalability. Importantly, it is shown that these heuristics can be reused on unseen problems. The methodology is then applied to the two dimensional packing problem, to determine if automatic heuristic generation is possible for this domain. The three dimensional bin packing and knapsack problems are then addressed. It is shown that the genetic programming hyper-heuristic methodology can evolve human-competitive heuristics for the one, two, and three dimensional cases of both of these problems, with no change of parameters or code required between runs. This represents the first packing algorithm in the literature able to claim human-competitive results in such a wide variety of packing domains.
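The following sketch (Haskell; the component set and attribute names are illustrative assumptions) shows the general shape such an evolved heuristic takes for one-dimensional bin packing: an expression tree over bin and item attributes, used to score each feasible bin for the current item.

```haskell
-- An evolved heuristic is an expression tree over problem attributes;
-- here, the free capacity F of a bin and the size S of the current item.
data Expr
  = F | S | Const Double
  | Add Expr Expr | Sub Expr Expr | Mul Expr Expr | Div Expr Expr

eval :: Expr -> Double -> Double -> Double
eval F         f _ = f
eval S         _ s = s
eval (Const c) _ _ = c
eval (Add a b) f s = eval a f s + eval b f s
eval (Sub a b) f s = eval a f s - eval b f s
eval (Mul a b) f s = eval a f s * eval b f s
eval (Div a b) f s = let d = eval b f s
                     in if d == 0 then 0 else eval a f s / d  -- protected division

-- Place an item in the feasible bin that the evolved expression scores
-- highest; Nothing signals that a new bin must be opened.
place :: Expr -> Double -> [Double] -> Maybe Int
place h item bins
  | null feasible = Nothing
  | otherwise     = Just (snd (maximum feasible))
  where
    feasible = [ (eval h free item, i)
               | (i, free) <- zip [0 ..] bins, free >= item ]
```

Genetic programming then searches over such trees, evaluating each candidate by the quality of the packings it produces.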
258

Reasoning about resource-bounded multi-agent systems

Nguyen, Nguyen January 2011
The thesis presents logic-based formalisms for modelling and reasoning about resource-bounded multi-agent systems. In the field of multi-agent systems, it is well known that temporal logics such as CTL and ATL are powerful tools for reasoning about multi-agent systems. However, there is no natural way to use these logics to express and reason about properties of multi-agent systems in which agents require resources to perform their actions. This thesis extends logics that have been used to reason about multi-agent systems, including Computation Tree Logic (CTL), Coalition Logic (CL) and Alternating-time Temporal Logic (ATL), so that the extended logics can specify and reason about properties of resource-bounded multi-agent systems. While the extension of CTL is adapted for specifying and reasoning about properties of systems of resource-bounded reasoners, where the resources are explicitly memory, communication and time, the extensions of CL and ATL are generalised so that any resource-bounded multi-agent system can be modelled, specified and reasoned about. For each of the logics, we describe the range of resource-bounded multi-agent systems it can account for, and give an axiomatisation system for reasoning that is proved to be sound and complete. Moreover, we also study the satisfiability problem for these logics.
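As a schematic illustration (the notation is simplified and not the thesis's own), the generalised coalition modalities annotate ATL-style operators with resource bounds:

```latex
\[
  \langle\!\langle A^{b} \rangle\!\rangle \,\Diamond\, \varphi
\]
```

read: coalition $A$ has a strategy whose executions cost at most $b$ (a vector over resource types such as memory, communication and time) and which guarantees that $\varphi$ eventually holds; the standard ATL modality is recovered when every component of $b$ is unbounded.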
259

Towards safe and efficient functional reactive programming

Sculthorpe, Neil January 2011
Functional Reactive Programming (FRP) is an approach to reactive programming where systems are structured as networks of functions operating on time-varying values (signals). FRP is based on the synchronous data-flow paradigm and supports both continuous-time and discrete-time signals (hybrid systems). What sets FRP apart from most other reactive languages is its support for systems with highly dynamic structure (dynamism) and higher-order reactive constructs (higher-order data-flow). However, the price paid for these features has been the loss of the safety and performance guarantees provided by other, less expressive, reactive languages. Statically guaranteeing safety properties of programs is an attractive proposition. This is true in particular for typical application domains for reactive programming such as embedded systems. To that end, many existing reactive languages have type systems or other static checks that guarantee domain-specific constraints, such as feedback being well-formed (causality analysis). However, compared with FRP, they are limited in their capacity to support dynamism and higher-order data-flow. On the other hand, as established static techniques do not suffice for highly structurally dynamic systems, FRP generally enforces few domain-specific constraints, leaving the FRP programmer to manually check that the constraints are respected. Thus, there is currently a trade-off between static guarantees and dynamism among reactive languages. This thesis contributes towards advancing the safety and efficiency of FRP by studying highly structurally dynamic networks of functions operating on mixed (yet distinct) continuous-time and discrete-time signals. First, an ideal denotational semantics is defined for this kind of FRP, along with a type system that captures domain-specific constraints. The correctness and practicality of the language and type system are then demonstrated by proof-of-concept implementations in Agda and Haskell. Finally, temporal properties of signals and of functions on signals are expressed using techniques from temporal logic, as motivation and justification for a range of optimisations.
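The conceptual setting can be summarised by the classic idealised FRP types (a denotational sketch for reasoning, not an efficient implementation), rendered here in Haskell:

```haskell
-- Signals are functions from continuous time; signal functions
-- transform signals.
type Time     = Double               -- continuous time
type Signal a = Time -> a            -- a time-varying value
type SF a b   = Signal a -> Signal b -- a signal function

-- Discrete-time signals carry a value only at isolated points,
-- modelled here as an optional occurrence at each time:
type Event a   = Maybe a
type SignalE a = Time -> Event a

-- Pointwise lifting of a pure function to a stateless signal function:
arrSF :: (a -> b) -> SF a b
arrSF f sig = f . sig

-- Serial composition of signal functions:
(>>>>) :: SF a b -> SF b c -> SF a c
f >>>> g = g . f
```

The type system contributed by the thesis refines this picture by keeping continuous-time and discrete-time signals distinct, so that domain-specific constraints can be checked statically.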
260

Verifying requirements for resource-bounded agents

Abdur, Rakib January 2011
This thesis presents frameworks for the modelling and verification of resource-bounded reasoning agents. The resources considered include the time, memory, and communication bandwidth required by agents to achieve a goal. The scalability and expressiveness of standard model checking techniques are investigated using two typical multi-agent reasoning problems which can easily be parameterised to increase or decrease the problem size. Both a complexity analysis and experimental results suggest that reasonably sized problem instances are unlikely to be tractable for a standard model checker without steps to reduce the branching factor of the state space. We propose two approaches to address this problem: the use of abstract specifications to model the behaviour of some of the agents in the system, and the exploitation of information about the reasoning strategy adopted by the agents. Abstract specifications are given as Linear Temporal Logic (LTL) formulae which describe the external behaviour of the agents, allowing their temporal behaviour to be compactly modelled. Conversely, reasoning strategies allow the detailed specification of the ordering of steps in the agent's reasoning process. Both approaches have been combined in TVRBA, an automated verification tool for rule-based multi-agent systems which allows the designer to specify information about agents' interaction, behaviour, and execution strategy at different levels of abstraction. The TVRBA tool generates an encoding of the system for the Maude LTL model checker, allowing properties of the system to be verified. The scalability of the new approach is illustrated using three case studies.
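An illustrative property of the kind that can be checked in this setting (schematic, with invented atomic propositions; the thesis's actual specifications target the Maude LTL model checker) is:

```latex
\[
  \Box\bigl(\mathit{start} \rightarrow
      \Diamond(\mathit{goal} \wedge \mathit{withinBound})\bigr)
\]
```

read: whenever the agents begin a derivation, a goal state is eventually reached while the resource counters remain within their bounds.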
