1

Global inference for sentence compression : an integer linear programming approach

Clarke, James January 2008
In this thesis we develop models for sentence compression. This text rewriting task has recently attracted a lot of attention due to its relevance for applications (e.g., summarisation) and its simple formulation by means of word deletion. Previous models for sentence compression have been inherently local and thus fail to capture the long-range dependencies and complex interactions involved in text rewriting. We present a solution by framing the task as an optimisation problem with local and global constraints and recast existing compression models into this framework. Using the constraints we instil syntactic, semantic and discourse knowledge that the models otherwise fail to capture. We show that the addition of constraints allows relatively simple local models to reach state-of-the-art performance for sentence compression. The thesis provides a detailed study of sentence compression and its models. The differences between automatic and manually created compression corpora are assessed, along with how compression varies across written and spoken text. We also discuss various techniques for automatically and manually evaluating compression output against a gold standard. Models are reviewed based on their assumptions, training requirements, and scalability. We introduce a general method for extending previous approaches to allow for more global models. This is achieved through the optimisation framework of Integer Linear Programming (ILP). We reformulate three compression models (an unsupervised model, a semi-supervised model and a fully supervised model) as ILP problems and augment them with constraints. These constraints are intuitive for the compression task and are both syntactically and semantically motivated. We demonstrate how they improve compression quality and reduce the requirements on training material. Finally, we delve into document compression, where the task is to compress every sentence of a document and use the resulting summary as a replacement for the original document. For document-based compression we investigate discourse information and its application to the compression task. Two discourse theories, Centering and lexical chains, are used to automatically annotate documents. These annotations are then used in our compression framework to impose additional constraints on the resulting document. The goal is to preserve the discourse structure of the original document and most of its content. We show how a discourse-informed compression model can outperform a discourse-agnostic state-of-the-art model using a question answering evaluation paradigm.
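To make the optimisation framing concrete, here is a minimal word-deletion ILP sketch, assuming the PuLP library; the relevance scores and constraints are invented for illustration and are not the models or constraints developed in the thesis.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

words = ["the", "very", "big", "dog", "barked", "loudly"]
score = [0.2, 0.1, 0.4, 1.0, 1.0, 0.3]   # invented per-word relevance scores

prob = LpProblem("compression", LpMaximize)
keep = [LpVariable(f"x{i}", cat="Binary") for i in range(len(words))]

# objective: retain the most relevant words
prob += lpSum(score[i] * keep[i] for i in range(len(words)))

# global constraint: compression rate -- keep between 2 and 4 words
prob += lpSum(keep) >= 2
prob += lpSum(keep) <= 4

# toy "syntactic" constraints: a modifier may only survive if its head survives
prob += keep[1] <= keep[2]   # "very" needs "big"
prob += keep[2] <= keep[3]   # "big" needs "dog"
prob += keep[5] <= keep[4]   # "loudly" needs "barked"

prob.solve()
print(" ".join(w for i, w in enumerate(words) if keep[i].value() == 1))
```

Because the solver optimises over the whole sentence at once, the dependency constraints interact globally rather than through purely local deletion decisions, which is the point of the ILP reformulation.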
2

Evaluating the impact of variation in automatically generated embodied object descriptions

Foster, Mary Ellen January 2007
The primary task for any system that aims to automatically generate human-readable output is choice: the input to the system is usually well-specified, but there can be a wide range of options for creating a presentation based on that input. When designing such a system, an important decision is to select which aspects of the output are hard-wired and which allow for dynamic variation. Supporting dynamic choice requires additional representation and processing effort in the system, so it is important to ensure that incorporating variation has a positive effect on the generated output. In this thesis, we concentrate on two types of output generated by a multimodal dialogue system: linguistic descriptions of objects drawn from a database, and conversational facial displays of an embodied talking head. In a series of experiments, we add different types of variation to one of these types of output. The impact of each implementation is then assessed through a user evaluation in which human judges compare outputs generated by the basic version of the system to those generated by the modified version; in some cases, we also use automated metrics to compare the versions of the generated output. This series of implementations and evaluations allows us to address three related issues. First, we explore the circumstances under which users perceive and appreciate variation in generated output. Second, we compare two methods of including variation into the output of a corpus-based generation system. Third, we compare human judgements of output quality to the predictions of a range of automated metrics. The results of the thesis are as follows. The judges generally preferred output that incorporated variation, except for a small number of cases where other aspects of the output obscured it or the variation was not marked. In general, the output of systems that chose the majority option was judged worse than that of systems that chose from a wider range of outputs. However, the results for non-verbal displays were mixed: users mildly preferred agent outputs where the facial displays were generated using stochastic techniques to those where a simple rule was used, but the stochastic facial displays decreased users’ ability to identify contextual tailoring in speech while the rule-based displays did not. Finally, automated metrics based on simple corpus similarity favour generation strategies that do not diverge far from the average corpus examples, which are exactly the strategies that human judges tend to dislike. Automated metrics that measure other properties of the generated output correspond more closely to users’ preferences.
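As a hedged illustration of the kind of simple corpus-similarity metric discussed above (not one of the specific metrics used in the thesis), the snippet below scores a generated sentence by unigram overlap with a small reference corpus; outputs close to the corpus average score highest, which is exactly the bias that the human judges' preferences run against.

```python
from collections import Counter

corpus = [
    "this red chair has a wooden frame",
    "this blue chair has a metal frame",
    "this red sofa has a wooden frame",
]

def overlap_score(candidate: str, references: list[str]) -> float:
    """Fraction of candidate tokens that also appear in the references (toy metric)."""
    ref_tokens = Counter(tok for ref in references for tok in ref.split())
    cand = candidate.split()
    return sum(1 for tok in cand if ref_tokens[tok] > 0) / len(cand)

# an "average", corpus-like output scores high; a more varied one scores low
print(overlap_score("this red chair has a metal frame", corpus))
print(overlap_score("an unusual crimson seat with oak legs", corpus))
```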
3

From surfaces to objects : recognizing objects using surface information and object models

Fisher, Robert B. January 1986
This thesis describes research on recognizing partially obscured objects using surface information like Marr's 2½D sketch ([MAR82]) and surface-based geometrical object models. The goal of the recognition process is to produce fully instantiated object hypotheses, with either image evidence for each feature or explanations for their absence in terms of self or external occlusion. The central point of the thesis is that using surface information should be an important part of the image understanding process. This is because surfaces are the features that directly link perception to the objects perceived (for normal "camera-like" sensing) and because surfaces make explicit information needed to understand and cope with some visual problems (e.g. obscured features). Further, because surfaces are both the data and model primitive, detailed recognition can be made both simpler and more complete. Recognition input is a surface image, which represents surface orientation and absolute depth. Segmentation criteria are proposed for forming surface patches with constant curvature character, based on surface shape discontinuities which become labeled segmentation boundaries. Partially obscured object surfaces are reconstructed using stronger surface-based constraints. Surfaces are grouped to form surface clusters, which are 3D identity-independent solids that often correspond to model primitives. These are used here as a context within which to select models and find all object features. True three-dimensional properties of image boundaries, surfaces and surface clusters are directly estimated using the surface data. Models are invoked using a network formulation, where individual nodes represent potential identities for image structures. The links between nodes are defined by generic and structural relationships. They define indirect evidence relationships for an identity. Direct evidence for the identities comes from the data properties. A plausibility computation is defined according to the constraints inherent in the evidence types. When a node acquires sufficient plausibility, the model is invoked for the corresponding image structure. Objects are primarily represented using a surface-based geometrical model. Assemblies are formed from subassemblies and surface primitives, which are defined using surface shape and boundaries. Variable affixments between assemblies allow flexibly connected objects. The initial object reference frame is estimated from model-data surface relationships, using correspondences suggested by invocation. With the reference frame, back-facing, tangential, partially self-obscured, totally self-obscured and fully visible image features are deduced. From these, the oriented model is used for finding evidence for missing visible model features. If no evidence is found, the program attempts to find evidence that the features are obscured by an unrelated object. Structured objects are constructed using a hierarchical synthesis process. Fully completed hypotheses are verified using both existence and identity constraints based on surface evidence. Each of these processes is defined by its computational constraints and is demonstrated on two test images. These test scenes are interesting because they contain partially and fully obscured object features, a variety of surface and solid types and flexibly connected objects. All modeled objects were fully identified and analyzed to the level represented in their models and were also acceptably spatially located.
Portions of this work have been reported elsewhere ([FIS83], [FIS85a], [FIS85b], [FIS86]) by the author.
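The invocation step lends itself to a small illustration. The sketch below only guesses at the general flavour of an invocation network; the node names, weights and threshold are invented and this is not the plausibility computation defined in the thesis. Each node holds direct evidence from data properties and gathers indirect evidence from linked identities; a model is invoked for an image structure once its plausibility crosses a threshold.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Potential identity (model) for one image structure."""
    name: str
    direct_evidence: float = 0.0                     # from the structure's data properties
    links: list = field(default_factory=list)        # (neighbour Node, weight) pairs

    def plausibility(self) -> float:
        # direct evidence plus weighted indirect evidence from related identities
        indirect = sum(w * n.direct_evidence for n, w in self.links)
        return self.direct_evidence + indirect

cup_side = Node("cup-side-surface", direct_evidence=0.6)
cup_handle = Node("cup-handle-surface", direct_evidence=0.7)
cup = Node("cup", direct_evidence=0.2,
           links=[(cup_side, 0.5), (cup_handle, 0.5)])

THRESHOLD = 0.8   # assumed value, for illustration only
if cup.plausibility() > THRESHOLD:
    print("invoke the 'cup' model for this surface cluster")
```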
4

Nonlinear dimensionality reduction for motion synthesis and control

Bitzer, Sebastian January 2011
Synthesising motion of human character animations or humanoid robots is vastly complicated by the large number of degrees of freedom in their kinematics. Control spaces become so large that automated methods designed to adaptively generate movements become computationally infeasible or fail to find acceptable solutions. In this thesis we investigate how demonstrations of previously successful movements can be used to inform the production of new movements that are adapted to new situations. In particular, we evaluate the use of nonlinear dimensionality reduction techniques to find compact representations of demonstrations, and investigate how these can simplify the synthesis of new movements. Our focus lies on the Gaussian Process Latent Variable Model (GPLVM), because it has proven capable of capturing the nonlinearities present in the kinematics of robots and humans. We present an in-depth analysis of the underlying theory, which results in an alternative approach to initialising the GPLVM based on Multidimensional Scaling. We show that the new initialisation is better suited than PCA for nonlinear, synthetic data, but note that its advantage shrinks on motion data. Subsequently we show that the incorporation of additional structure constraints leads to low-dimensional representations which are sufficiently regular that, once learned, dynamic movement primitives can be adapted to new situations without the need for relearning. Finally, we demonstrate, in a number of experiments in which movements are generated for bimanual reaching, that through the use of nonlinear dimensionality reduction, reinforcement learning can be scaled up to optimise humanoid movements.
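A minimal sketch of the initialisation idea, assuming scikit-learn and random toy data: the latent coordinates of a GPLVM can be seeded either with PCA or with metric Multidimensional Scaling on pairwise distances between poses. This only illustrates the two initialisation routes, not the thesis's implementation or its data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 30))     # stand-in for 100 poses with 30 joint angles
latent_dim = 2

# PCA initialisation: linear projection onto the top principal components
X_pca = PCA(n_components=latent_dim).fit_transform(Y)

# MDS initialisation: preserve pairwise distances between poses instead
X_mds = MDS(n_components=latent_dim, dissimilarity="euclidean",
            random_state=0).fit_transform(Y)

# either X_pca or X_mds would then serve as the starting latent coordinates
# before optimising the GPLVM likelihood
print(X_pca.shape, X_mds.shape)
```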
5

Modelling the transition to complex, culturally transmitted communication

Ritchie, Graham R. S. January 2009
Human language is undoubtedly one of the most complex and powerful communication systems to have evolved on Earth. Study of the evolution of this behaviour is made difficult by the lack of comparable communication systems elsewhere in the animal kingdom, and by the fact that language leaves little trace in the fossil record. The human language faculty can, however, be decomposed into several component abilities and a proposed evolutionary explanation of the whole must address (at least) the evolution of each of these components. Some of these features may also be found in other species, and thus permit use of the powerful comparative method. This thesis addresses the evolution of two such component features of human language; complex vocal signalling and the cultural transmission of these vocal signals. I argue that these features make a significant contribution to the nature of human language as we observe it today and so a better understanding of the evolutionary processes that gave rise to them will contribute to study of the evolution of language. This thesis addresses the evolution of these features firstly by identifying other communication systems found in nature that display them, and focusing in particular on the song of the oscine passerines (songbirds). Bird song is chosen as a model system because of the wealth of empirical data on nearly all aspects of the behaviour and the variety of song behaviour found in this group. There also appear to be some striking similarities in the development of language and song. I argue that a better understanding of the evolution of complex signalling and cultural transmission in songbirds and other species will provide useful insight into the evolution of these features in language. This thesis presents a series of related formal models that investigate several issues in the evolution of these features. I firstly present a simple formal model of bird song acquisition and use this in a computational model of evolution to investigate some ecological conditions under which vocal behaviour can become more or less reliant on cultural transmission. I then present a pertinent case study of two closely related songbird sub-species and develop a computational model that demonstrates that domestication, or a similar shift in the fitness landscape, may play a surprising role in the evolution of signal complexity (in some sense) and increased vocal plasticity. Finally, I present several models that investigate the plausibility and consistency of the ‘developmental stress hypothesis’, an important hypothesis drawn from the biological literature that proposes that song learning and song complexity may serve as a sexually selected mate quality indicator mechanism. These models provide the first theoretical support for this important but complex hypothesis and identify a number of relevant parameters that may affect the evolution of such a system.
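As a purely illustrative sketch (not one of the thesis's models), the toy simulation below evolves a population in which each bird's song is a blend of an innate template and a tutor's song; the evolved "learning weight" gives a rough feel for how reliance on cultural transmission can rise or fall depending on how fast the optimal song drifts between generations. All parameters are invented.

```python
import random

random.seed(1)
POP, GENS = 200, 300
drift = 0.05            # how fast the optimal song changes each generation (assumed)

def fitness(song, target):
    return -abs(song - target)

population = [{"innate": random.uniform(0, 1), "learn_w": random.uniform(0, 1)}
              for _ in range(POP)]
target, tutor_song = 0.5, 0.5

for gen in range(GENS):
    target += random.uniform(-drift, drift)
    # each bird sings a mix of its innate template and the tutor song it heard
    songs = [b["learn_w"] * tutor_song + (1 - b["learn_w"]) * b["innate"]
             for b in population]
    scored = sorted(zip(population, songs),
                    key=lambda bs: fitness(bs[1], target), reverse=True)
    survivors = [b for b, _ in scored[:POP // 2]]
    tutor_song = scored[0][1]          # the next generation learns from the best singer
    # reproduce with small mutations on both traits
    population = [{"innate": min(1, max(0, p["innate"] + random.gauss(0, 0.02))),
                   "learn_w": min(1, max(0, p["learn_w"] + random.gauss(0, 0.02)))}
                  for p in random.choices(survivors, k=POP)]

print("mean learning weight:", sum(b["learn_w"] for b in population) / POP)
```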
6

An investigation into tournament poker strategy using evolutionary algorithms

Carter, Richard G. January 2007
Poker has become the subject of an increasing amount of study in the computational intelligence community. The element of imperfect information presents new and greater challenges than those previously posed by games such as checkers and chess. Advances in computer poker have great potential, since reasoning under conditions of uncertainty is typical of many real world problems. To date the focus of computer poker research has centred on the development of ring game players for limit Texas hold’em. For a computer to compete in the most prestigious poker events, however, it will be required to play in a tournament setting with a no-limit betting structure. This thesis is the first academic attempt to investigate the underlying dynamics of successful no-limit tournament poker play. Professional players have proffered advice in the non-academic poker literature on correct strategies for tournament poker play. This study seeks to empirically validate their suggestions on a simplified no-limit Texas hold’em tournament framework. Starting by using exhaustive simulations, we first assess the hypothesis that a strategy including information related to game-specific factors performs better than one founded on hand strength knowledge alone. Specifically, we demonstrate that the use of information pertaining to one’s seating position, the opponents’ prior actions, the stage of the tournament, and one’s chip stack size all contribute towards a statistically significant improvement in the number of tournaments won. In extending the research to combine all factors we explain the limitations of the exhaustive simulation approach, and introduce evolutionary algorithms as a method of searching the strategy space. We then test the hypothesis that a strategy which combines information from all the aforementioned factors performs better than one which employs only a single factor. We show that an evolutionary algorithm is successfully able to resolve conflicting signals from the specified factors, and that the resulting strategies are statistically stronger than those previously discovered. Our research continues with an analysis of the results, as we interpret them in the context of poker strategy. We compare our findings to poker authors’ recommendations, and conclude with a discussion on the many possible extensions to this work.
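To give a flavour of the evolutionary search described above (a hedged sketch only; the strategy representation, factors and tournament simulator in the thesis are far richer), the snippet below evolves a weight vector that combines hand strength with game-specific factors, scoring each candidate against a stand-in fitness function.

```python
import random

random.seed(7)
FACTORS = ["hand_strength", "position", "stack_size", "stage", "opponent_action"]

def fitness(weights):
    """Stand-in for 'tournaments won with this strategy'; a real evaluation
    would simulate many no-limit hold'em tournaments using these weights."""
    target = [0.5, 0.1, 0.2, 0.1, 0.1]          # invented optimum, illustration only
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, sigma=0.05):
    return [min(1.0, max(0.0, w + random.gauss(0, sigma))) for w in weights]

population = [[random.random() for _ in FACTORS] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print(dict(zip(FACTORS, (round(w, 2) for w in best))))
```

The interesting part in the thesis is precisely what this sketch hides: how conflicting signals from the individual factors are resolved by the evolved strategy rather than by a hand-tuned target.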
7

Decentralised compliant control for hexapod robots : a stick insect based walking model

Rosano-Matchain, Hugo Leonardo January 2007
This thesis aims to transfer knowledge from insect biology into a hexapod walking robot. The similarity of the robot model to the biological target allows the testing of hypotheses regarding control and behavioural strategies in the insect. Therefore, this thesis supports biorobotic research by demonstrating that robotic implementations are improved by using biological strategies and that these models can be used to understand biological systems. Specifically, this thesis addresses two central problems in hexapod walking control: the single leg control mechanism and its control variables; and the different roles of the front, middle and hind legs that allow a decentralised architecture to co-ordinate complex behavioural tasks. To investigate these problems, behavioural studies on insect curve walking were combined with quantitative simulations. Behavioural experiments were designed to explore the control of turns of freely walking stick insects, Carausius morosus, toward a visual target. A program for insect tracking and kinematic analysis of the observed motion was developed. The results demonstrate that the front legs are responsible for most of the body trajectory. Nonetheless, to replicate insect walking behaviour it is necessary for all legs to contribute with specific roles. Additionally, statistics on leg stepping show that the middle and hind legs continuously influence each other. This cannot be explained by previous models that depend heavily on positive feedback controllers. After careful analysis, it was found that the hind legs could actively rotate the body while the middle legs move to the inside of the curve, tangentially to the body axis. The single leg controller is known to be independent from the other legs but still capable of mechanical synchronisation. To explain this behaviour, positive feedback controllers have been proposed. This mechanism works for the closed kinematic chain problem, but has complications when implemented in a dynamic model. Furthermore, neurophysiological data indicate that legs always respond to disturbances as a negative feedback controller. Additional experimental data presented herein indicate that legs continuously oppose forces created by other legs. This thesis proposes, as the core controller, a model in which velocity positive feedback control, modulated via a subordination variable, is placed in cascade with a position negative feedback mechanism. This allows legs to oppose external and internal forces without compromising inter-leg collaboration for walking. The single leg controller is implemented using a distributed artificial neural network. This network was trained with a wider range of movement than that so far found in the simulation model. The controller implemented with a plausible biological.
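A minimal sketch of the kind of cascaded controller described above, for a single joint with invented gains; it illustrates velocity positive feedback gated by a subordination variable and cascaded with position negative feedback, not the thesis's actual control law or parameters.

```python
def leg_joint_command(q, q_ref, q_dot, subordination, k_p=2.0, k_v=0.8):
    """Toy single-joint controller (illustrative gains only).

    q             -- current joint angle
    q_ref         -- reference joint angle
    q_dot         -- measured joint velocity
    subordination -- 0..1, how strongly the leg yields to motion imposed
                     by the other legs (modulates the positive feedback)
    """
    position_negative_fb = k_p * (q_ref - q)             # resist disturbances
    velocity_positive_fb = subordination * k_v * q_dot   # follow imposed motion
    return position_negative_fb + velocity_positive_fb

# a stance leg being dragged forward by the other legs (q_dot > 0):
# with high subordination it assists the motion, with low subordination it resists
print(leg_joint_command(q=0.30, q_ref=0.35, q_dot=0.5, subordination=0.9))
print(leg_joint_command(q=0.30, q_ref=0.35, q_dot=0.5, subordination=0.1))
```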
8

Social reasoning in multi-agent systems with the expectation-strategy-behaviour framework

Wallace, Iain Andrew January 2010
Multi-agent systems (MAS) are an increasingly relevant field of research due to their many applications to modelling real-world situations where the behaviour of many individual, self-motivated agents must be reasoned about and controlled. The problem of agent social reasoning is central to MAS, where an agent reasons about its actions and interactions with other agents. This is the most important component of MAS, as it is the interactions, cooperation and competition between agents that make MAS a powerful approach suited for tackling many complex problems. Existing work focuses either on specific types of social reasoning or on general-purpose agent practical reasoning - reasoning directed toward actions. This thesis argues that social reasoning should be considered separately from practical reasoning. There are many possible benefits to this separation compared to existing approaches. Principally, it can allow general algorithms for agent implementation, analysis and bounded reasoning. This viewpoint is motivated by the desire to implement social reasoning agents and to allow for a more general theory of social reasoning in agents. This thesis presents the novel Expectation-Strategy-Behaviour (ESB) framework for social reasoning, which provides a generic way to specify and execute agent reasoning approaches. ESB is a powerful tool, allowing an agent designer to write expressive social reasoning specifications and have a computational model generated automatically. Through a formalism and a description of an implemented reasoner based on this theory, it is shown that it is possible and beneficial to implement a social reasoning engine as a complementary component to practical reasoning. By using ESB to specify, and then implement, existing social reasoning schemes for joint commitment and normative reasoning, the framework is shown to be a suitable general reasoner. Examples are provided of how reasoning can be bounded in an ESB agent, and the mechanism that allows analysis of agent designs is discussed. Finally, there is discussion of the merits of the ESB solution and possible future work.
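A hedged, highly simplified sketch of how an expectation-strategy-behaviour style reasoner might be wired up (the names and the control loop are invented for illustration; the ESB formalism in the thesis is considerably richer): expectations are monitored conditions, strategies decide how the agent reacts when an expectation is violated, and behaviours are the resulting hooks into practical reasoning.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    name: str
    holds: Callable[[dict], bool]          # test against the agent's beliefs

@dataclass
class Strategy:
    expectation: str
    on_violation: Callable[[dict], None]   # how to revise beliefs / goals

class ESBAgent:
    def __init__(self, expectations, strategies):
        self.expectations = expectations
        self.strategies = {s.expectation: s for s in strategies}

    def social_reasoning_step(self, beliefs: dict):
        """Check every expectation; delegate violations to the matching strategy.
        The practical reasoner would then act on the revised beliefs and goals."""
        for exp in self.expectations:
            if not exp.holds(beliefs) and exp.name in self.strategies:
                self.strategies[exp.name].on_violation(beliefs)

# toy joint-commitment example: expect the partner to still share our goal
beliefs = {"partner_committed": False, "goals": ["lift_table_together"]}
agent = ESBAgent(
    [Expectation("partner_committed", lambda b: b["partner_committed"])],
    [Strategy("partner_committed",
              lambda b: b["goals"].remove("lift_table_together"))],
)
agent.social_reasoning_step(beliefs)
print(beliefs)   # the joint goal has been dropped
```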
9

An OS-based alternative to full hardware coherence on tiled chip-multiprocessors

Fensch, Christian January 2008
The interconnect mechanisms (shared bus or crossbar) used in current chip-multiprocessors (CMPs) are expected to become a bottleneck that prevents these architectures from scaling to a larger number of cores. Tiled CMPs offer better scalability by integrating relatively simple cores with a lightweight point-to-point interconnect. However, such interconnects make snooping impractical and thus require alternative solutions to cache coherence. This thesis proposes a novel, cost-effective hardware mechanism to support shared-memory parallel applications that forgoes hardware-maintained cache coherence. The proposed mechanism is based on the key ideas that the mapping of lines to physical caches is done at the page level with OS support and that the hardware supports remote cache accesses. It allows only some controlled migration and replication of data and provides a sufficient degree of flexibility in the mapping through an extra level of indirection between virtual pages and physical tiles. The proposed tiled CMP architecture is evaluated on the SPLASH-2 scientific benchmarks and the ALPBench multimedia benchmarks against one with private caches and a distributed directory cache coherence mechanism. Experimental results show that the performance degradation is as little as 0%, and 16% on average, compared to the cache-coherent architecture across all benchmarks for 16 and 32 processors.
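A toy sketch of the page-level indirection described above (the class, fields and sizes are invented and this is not the proposed hardware/OS interface): the OS assigns each virtual page a home tile, and a load either hits the local tile or becomes a remote cache access over the interconnect, so data placement is controlled by the OS mapping rather than by a coherence protocol.

```python
class TiledCMP:
    """Toy model of OS-managed page-to-tile placement with remote cache access."""

    def __init__(self, n_tiles: int, page_size: int = 4096):
        self.page_size = page_size
        self.page_to_tile = {}                     # extra level of indirection, set by the OS
        self.tile_caches = [dict() for _ in range(n_tiles)]
        self.remote_accesses = 0

    def os_map_page(self, vpage: int, tile: int):
        self.page_to_tile[vpage] = tile            # migration = updating this mapping

    def load(self, cpu_tile: int, addr: int):
        vpage = addr // self.page_size
        home = self.page_to_tile[vpage]            # each page has exactly one home tile
        if home != cpu_tile:
            self.remote_accesses += 1              # served over the point-to-point interconnect
        return self.tile_caches[home].get(addr, 0)

cmp16 = TiledCMP(n_tiles=16)
cmp16.os_map_page(vpage=0, tile=3)
cmp16.tile_caches[3][0x40] = 42
print(cmp16.load(cpu_tile=7, addr=0x40), cmp16.remote_accesses)   # remote access to tile 3
```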
10

Type-based amortized stack memory prediction

Campbell, Brian January 2008
Controlling resource usage is important for the reliability, efficiency and security of software systems. Automated analyses for bounding resource usage can be invaluable tools for ensuring these properties. Hofmann and Jost have developed an automated static analysis for finding linear heap space bounds in terms of the input size for programs in a simple functional programming language. Memory requirements are amortized by representing them as a requirement for an abstract quantity, potential, which is supplied by assigning potential to data structures in proportion to their size. This assignment is represented by annotations on their types. The type system then ensures that all potential requirements can be met from the original input’s potential if a set of linear constraints can be solved. Linear programming can optimise this amount of potential subject to the constraints, yielding an upper bound on the memory requirements. However, obtaining bounds on the heap space requirements does not detect a faulty or malicious program which uses excessive stack space. In this thesis, we investigate extending Hofmann and Jost’s techniques to infer bounds on stack space usage, first by examining two approaches: using the Hofmann-Jost analysis unchanged by applying a CPS transformation to the program being analysed, then showing that this predicts the stack space requirements of the original program; and directly adapting the analysis itself, which we will show is more practical. We then consider how to deal with the different allocation patterns stack space usage presents. In particular, the temporary nature of stack allocation leads us to a system where we calculate the total potential after evaluating an expression in terms of assignments of potential to the variables appearing in the expression as well as to the result. We also show that this analysis subsumes our previous systems, and improves upon them. We further increase the precision of the bounds inferred by noting the importance of expressing stack memory bounds in terms of the depth of data structures and by taking the maximum of the usage bounds of subexpressions. We develop an analysis which uses richer definitions of the potential calculation to allow depth and maxima to be used, albeit with a more subtle inference process.
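As a hedged illustration of the "solve linear constraints to obtain a bound" idea above, the snippet below uses scipy's linear programming to pick the per-element and constant potential for a toy recursive list traversal; the constraints and costs are invented for this example and do not come from the thesis.

```python
from scipy.optimize import linprog

# Toy program: a recursive function over a list needing 2 stack cells per
# element and 3 cells of fixed overhead.  We look for per-element potential k
# and constant potential c, minimising the bound k*n + c that they induce:
#
#   minimise  k + c
#   subject to  k >= 2   (each element's potential pays for its stack frame)
#               c >= 3   (the constant potential pays the fixed overhead)

c_obj = [1.0, 1.0]                     # coefficients of k and c in the objective
A_ub = [[-1.0, 0.0], [0.0, -1.0]]      # encodes  -k <= -2  and  -c <= -3
b_ub = [-2.0, -3.0]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
k, c = res.x
print(f"inferred bound: {k:.0f} * n + {c:.0f} stack cells")   # 2*n + 3
```

In the real analysis the constraints are generated automatically from the typing derivation rather than written by hand, but the shape of the problem handed to the LP solver is the same.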
