231 |
Using document clustering and language modelling in mediated information retrieval. Muresan, Gheorghe. January 2002.
Our work addresses a well documented problem: users are frequently unable to articulate a query that clearly and comprehensively expresses their information need. This can be attributed to the information need being too ambiguous and not clearly defined in the user's mind, to a lack of knowledge of the domain of interest on the part of the user, to a lack of understanding of a retrieval system's conceptual model, or to an inability to use a certain query syntax. This thesis proposes a software tool that emulates the human search mediator. It helps a user explore a domain of interest, learn its structure, terminology and key concepts, and clarify and refine an information need. It can also help a user generate high-quality queries for searching the World Wide Web or other such large and heterogeneous document collections. Our work was inspired by library studies which have highlighted the role of the librarian in helping the user explore her information need, define the problem to be solved, articulate a formulation of the information need and adapt it for the retrieval system at hand in order to get information. Our approach, mediated access through a clustered collection, is based on an information access environment in which the user can explore a relatively small, well structured, pre-clustered document collection covering a particular subject domain, in order to understand the concepts encompassed and to clarify and refine her information need. At the same time, the user can ostensively indicate clusters and documents of interest so that the system builds a model of the user's topic of interest. Based on this model, the system assists and guides the user's exploration, or generates `mediated queries' that can be used to search other collections. We present the design and evaluation of WebCluster, a system that reifies the concept of mediated retrieval. 
Additionally, a variety of mediation experiments are presented, which provide guidelines as to which mediation strategies are more appropriate for different types of tasks. A set of experiments is presented that evaluates document clustering's capacity to group topical documents together and support mediation. In this context we propose and experimentally test a new formulation of the cluster hypothesis. We also examine the ability of language models to convey content, to represent topics and to highlight specific concepts in a given context. Language models are also successfully applied to generate flexible, task-dependent cluster representatives that support exploration through browsing and searching, respectively. Our experimental results show that mediation has the potential to significantly improve user queries and, consequently, retrieval effectiveness.
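The mediated-query idea described above can be sketched in a few lines: estimate a unigram language model from the documents or clusters the user has marked as relevant, and emit the terms that are most distinctive relative to the whole collection as the generated query. This is an illustrative sketch only; the term-scoring scheme and all names are assumptions, not WebCluster's actual implementation.

```python
from collections import Counter

def mediated_query(selected_docs, collection_docs, k=5):
    """Generate a query from documents the user marked as relevant:
    score each term by how much more frequent it is in the selection
    than in the collection overall (a crude relevance weighting)."""
    sel = Counter(w for d in selected_docs for w in d.lower().split())
    bg = Counter(w for d in collection_docs for w in d.lower().split())
    sel_total = sum(sel.values())
    bg_total = sum(bg.values())

    def score(w):
        p_sel = sel[w] / sel_total   # P(w | selected documents)
        p_bg = bg[w] / bg_total      # P(w | whole collection)
        return p_sel / p_bg          # lift over the background model

    ranked = sorted(sel, key=score, reverse=True)
    return ranked[:k]

docs = ["tennis court booking", "tennis racket shop", "grid model checking"]
print(mediated_query(docs[:2], docs, k=3))
```

In a real mediated-retrieval setting the background statistics would come from the pre-clustered collection, and the generated terms would be submitted as a query to an external search engine.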
|
232 |
An investigation into children's use of the lookback strategy. Cataldo, Maria Guilia. January 2000.
No description available.
|
233 |
Perceptuomotor incoordination during manually-assisted search. Solman, Grayden J. F. January 2012.
The thesis introduces a novel search paradigm, and explores a previously unreported behavioural error detectable in this paradigm. In particular, the 'Unpacking Task' is introduced: a search task in which participants use a computer mouse to sort through random heaps of items in order to locate a unique target. The task differs from traditional search paradigms by including an active motor component in addition to purely perceptual inspection. While completing this task, participants are often found to select and move the unique target item without recognizing it, at times continuing to make many additional moves before correcting the error. This 'unpacking error' is explored with perceptual, memory load, and instructional manipulations, evaluating eye movements and motor characteristics in addition to traditional response time and error rate metrics. It is concluded that the unpacking error arises because perceptual and motor systems fail to adequately coordinate during completion of the task. In particular, the motor system is found to 'process' items (i.e., to select and discard them) more quickly than the perceptual system is able to reliably identify those same items. On those occasions where the motor system selects and rejects the target item before the perceptual system has had time to resolve its identity, the unpacking error results. These findings have important implications for naturalistic search, where motor interaction is common, and provide further insights into the conditions under which perceptual and motor systems will interact in a coordinated or an uncoordinated fashion.
|
234 |
Novelty and Diversity in Retrieval Evaluation. Kolla, Maheedhar. 21 December 2012.
Queries submitted to search engines rarely provide a complete and precise description of a user's information need. Most queries are ambiguous to some extent, having multiple interpretations. For example, the seemingly unambiguous query "tennis lessons" might be submitted by a user interested in attending classes in her neighborhood, seeking lessons for her child, looking for online video lessons, or planning to start a business teaching tennis. Search engines face the challenging task of satisfying different groups of users having diverse information needs associated with a given query. One solution is to optimize ranking functions to satisfy diverse sets of information needs. Unfortunately, existing evaluation frameworks do not support such optimization; instead, ranking functions are rewarded for satisfying only the most likely intent associated with a given query. In this thesis, we propose a framework and associated evaluation metrics that are capable of optimizing ranking functions to satisfy diverse information needs. Our proposed measures explicitly reward ranking functions that present the user with information that is novel with respect to previously viewed documents. Our measures reflect the quality of a ranking function by taking into account its ability to satisfy the diverse users submitting a query. Moreover, the task of identifying and establishing test frameworks to compare ranking functions at web scale can be tedious. One reason is the dynamic nature of the web, where documents are constantly added and updated, making it necessary for search engine developers to seek additional human assessments. Along with issues of novelty and diversity, we explore an approximate approach to comparing ranking functions that overcomes the lack of complete human assessments. We demonstrate that our approach can accurately sort ranking functions by their capacity to satisfy diverse users, even in the face of incomplete human assessments.
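A measure in the spirit the abstract describes, one that rewards novelty with respect to previously viewed documents, can be sketched along the lines of the well-known alpha-DCG family: each document is modelled as the set of query intents (subtopics) it covers, and a subtopic's gain decays each time it is covered again. The details below are a generic sketch, not the thesis's exact formulation.

```python
import math

def novelty_gain(ranking, alpha=0.5):
    """Cumulative gain that discounts redundancy: each ranked document
    is a set of the subtopics (intents) it covers, and a subtopic's
    gain decays by (1 - alpha) each time it has already been seen."""
    seen = {}        # subtopic -> number of times already covered
    total = 0.0
    for rank, subtopics in enumerate(ranking, start=1):
        gain = sum((1 - alpha) ** seen.get(t, 0) for t in subtopics)
        total += gain / math.log2(rank + 1)   # rank-based discount
        for t in subtopics:
            seen[t] = seen.get(t, 0) + 1
    return total

diverse = [{"a"}, {"b"}, {"c"}]     # each result covers a new intent
redundant = [{"a"}, {"a"}, {"a"}]   # every result repeats intent "a"
print(novelty_gain(diverse), novelty_gain(redundant))
```

A ranking that covers three distinct intents scores higher than one that repeats a single intent, which is exactly the behaviour a diversity-aware evaluation framework is meant to reward.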
|
235 |
Learning multi-agent pursuit of a moving target. Lu, Jieshan. 11 1900.
In this thesis we consider the task of catching a moving target with multiple pursuers, also known as the "Pursuit Game", in which coordination among the pursuers is critical. Our testbed is inspired by the pursuit problem in video games, which requires fast planning to guarantee fluid frame rates. We apply supervised machine learning methods to automatically derive efficient multi-agent pursuit strategies on rectangular grids. Learning is achieved by computing training data off-line and exploring the game tree on small problems. We also generalize the data to previously unseen and larger problems by learning robust pursuit policies, and run empirical comparisons between several sets of state features using a simple learning architecture. The empirical results show that 1) the application of learning across different maps can help improve game-play performance, especially on non-trivial maps against intelligent targets, and 2) a simple heuristic works effectively on simple maps or against less intelligent targets.
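The "simple heuristic" baseline that such comparisons are made against can be illustrated as greedy chasing on a grid: each pursuer simply moves one cell so as to reduce its Manhattan distance to the target. This sketch is generic (it is not the thesis's learned policy, and the function names are illustrative).

```python
def greedy_step(pursuer, target):
    """Move one grid cell that reduces Manhattan distance to the target
    (the simple greedy-pursuit heuristic used as a baseline strategy)."""
    px, py = pursuer
    tx, ty = target
    dx = (tx > px) - (tx < px)   # -1, 0, or +1 toward the target
    dy = (ty > py) - (ty < py)
    # Move along the axis with the larger remaining gap first.
    if abs(tx - px) >= abs(ty - py) and dx != 0:
        return (px + dx, py)
    return (px, py + dy)

def chase(pursuer, target, max_steps=50):
    """Run greedy pursuit against a stationary target; return steps taken."""
    steps = 0
    while pursuer != target and steps < max_steps:
        pursuer = greedy_step(pursuer, target)
        steps += 1
    return steps

print(chase((0, 0), (3, 2)))   # Manhattan distance is 5, so 5 steps
```

Against an intelligent, moving target on a map with obstacles this greedy rule breaks down, which is precisely where the thesis's learned, coordinated policies are reported to help.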
|
236 |
Automatic generation and evaluation of recombination games. Browne, Cameron Bolitho. January 2008.
Many new board games are designed each year, ranging from the unplayable to the truly exceptional. For each successful design there are untold numbers of failures; game design is something of an art. Players generally agree on some basic properties that indicate the quality and viability of a game; however, these properties have remained subjective and open to interpretation. The aims of this thesis are to determine whether such quality criteria may be precisely defined and automatically measured through self-play, in order to estimate the likelihood that a given game will be of interest to human players, and whether this information may be used to direct an automated search for new games of high quality. Combinatorial games provide an excellent test bed for this purpose, as they are typically deep yet described by simple, well-defined rule sets. To test these ideas, a game description language was devised to express such games, and a general game system was implemented to play, measure and explore them. Key features of the system include modules for measuring statistical aspects of self-play and for synthesising new games through the evolution of existing rule sets. Experiments were conducted to determine whether automated game measurements correlate with rankings of games by human players, and whether such correlations could be used to inform the automated search for new high-quality games. The results support both hypotheses and demonstrate the emergence of interesting new rule combinations.
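Measuring quality criteria through self-play, as the abstract describes, can be sketched for a trivial game: play many random games and record aggregate indicators such as average game length and first-player win rate (a crude stand-in for criteria like duration and balance). The game chosen here (a one-pile Nim variant) and the indicator names are illustrative assumptions, not the thesis's actual measures.

```python
import random

def play_random_nim(stones=10, max_take=3, rng=None):
    """Play one uniformly random game of one-pile Nim (the player who
    takes the last stone wins); return (winner, number of moves)."""
    rng = rng or random.Random()
    player, moves = 0, 0
    while True:
        stones -= rng.randint(1, min(max_take, stones))
        moves += 1
        if stones == 0:
            return player, moves
        player = 1 - player

def measure_game(n_games=1000, seed=42):
    """Self-play indicators: mean game length and first-player win rate."""
    rng = random.Random(seed)
    wins0 = total_moves = 0
    for _ in range(n_games):
        winner, moves = play_random_nim(rng=rng)
        wins0 += (winner == 0)
        total_moves += moves
    return total_moves / n_games, wins0 / n_games

avg_len, p0_rate = measure_game()
print(f"avg length {avg_len:.2f}, first-player win rate {p0_rate:.2f}")
```

An automated game-design search would compute indicators like these for each candidate rule set and keep the candidates whose profiles correlate with human judgements of quality.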
|
237 |
Guided random-walk based model checking. Bui, Hoai Thang, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2009.
The ever-increasing use of computer systems in society brings emergent challenges to companies and system designers. The reliability of software and hardware can be financially critical, and lives can depend on it. The growth in size and complexity of software, together with increasing concurrency, compounds the problem. The potential for errors is greater than ever before, and the stakes are higher than ever before. Formal methods, particularly model checking, attempt to prove mathematically that a model of the behaviour of a product is correct with respect to certain properties. Certain errors can therefore be proven never to occur in the model. This approach has tremendous potential in system development to provide guarantees of correctness. Unfortunately, in practice, model checking cannot handle the enormous sizes of the models of real-world systems, because the approach requires an exhaustive search of the model. While there are exceptions, in general model checkers do not scale well. In this thesis, we address this scaling issue with a guiding technique that avoids searching areas of the model that are unlikely to contain errors. The technique is based on model abstraction, in which a new, much smaller model is generated that retains certain important model information but discards the rest. This new model is called a heuristic. While model checking using a heuristic as a guide can be extremely effective, in the worst case (when the guide is of no help) it performs the same as exhaustive search, and hence it too does not scale well in all cases. A second technique is employed to deal with the scaling issue, based on the concept of random walks. A random walk is simply a 'walk' through the model of the system, carried out by selecting states in the model at random. Such a walk may encounter an error, or it may not.
It is a non-exhaustive technique in the sense that only a manageable number of walks are carried out before the search is terminated. This technique cannot replace conventional model checking, as it can never guarantee the correctness of a model. It can, however, be a very useful debugging tool, because it scales well. From this point of view, it relieves the system designer of the difficult task of dealing with the problem of size in model checking; using random walks, the effort goes instead into looking for errors. The effectiveness of model checking can be greatly enhanced if the above two techniques are combined: a random walk is used to search for errors, but the walk is guided by a heuristic. This, in a nutshell, is the focus of this work. We should emphasise that the random walk approach uses the same formal model as model checking, and the same heuristic technique is used to guide the random walk as in a guided model checker. Together, guidance and random walks are shown in this work to yield vastly improved performance over conventional model checking. Verification has been sacrificed, of course, but the new technique is able to find errors far more quickly and to deal with much larger models.
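The combination described, a random walk through the state space biased by a heuristic estimate of distance to an error, can be sketched generically. Everything here is illustrative: the toy state graph, the error predicate, the exact-distance heuristic (a best-case guide), and the exponential weighting are assumptions, not the thesis's implementation.

```python
import math
import random

def guided_random_walk(start, successors, is_error, heuristic,
                       max_steps=500, rng=None):
    """Walk the state graph, choosing each successor with probability
    proportional to exp(-heuristic(s)), so states estimated to be
    closer to an error are more likely to be picked. Returns the
    error state found, or None if the walk budget is exhausted."""
    rng = rng or random.Random(0)
    state = start
    for _ in range(max_steps):
        if is_error(state):
            return state
        succs = successors(state)
        if not succs:
            return None   # deadlock; a real tool would restart the walk
        weights = [math.exp(-heuristic(s)) for s in succs]
        state = rng.choices(succs, weights=weights)[0]
    return None

# Toy model: states are integers, the "error" lives at 20, and the
# heuristic is the exact distance to it.
found = guided_random_walk(
    start=0,
    successors=lambda s: [s - 1, s + 1],
    is_error=lambda s: s == 20,
    heuristic=lambda s: abs(20 - s),
)
print(found)
```

With a strong guide the walk drifts toward the error almost deterministically; with a useless guide the weights become uniform and the procedure degrades to a plain random walk, mirroring the worst-case behaviour the abstract notes.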
|
240 |
Mobile robot for search and rescue. Litter, Jansen J. January 2004.
Thesis (M.S.), Ohio University, June 2004. Title from PDF title page. Includes bibliographical references (leaves 98-100).
|