1. A probabilistic reasoning and learning system based on Bayesian belief networks. Luo, Zhiyuan, January 1992.
No description available.
2. A framework for the design and evaluation of magic tricks that utilises computational systems configured with psychological constraints. Williams, Howard Manning, January 2014.
A human magician blends science, psychology and performance to create a magical effect. This thesis explores what can be achieved when that human intelligence is replaced or assisted by machine intelligence. Magical effects are all in some form based on hidden mathematical, scientific or psychological principles; the parameters controlling these underpinning techniques are hard for a magician to blend so as to maximise the magical effect required. The complexity is often caused by interacting and conflicting physical and psychological constraints that need to be optimally balanced. Normally this tuning is done by trial and error, combined with human intuition. This thesis focuses on applying Artificial Intelligence methods to the creation, and optimisation, of magic tricks that exploit mathematical principles. Experimentally derived, crowd-sourced data about particular perceptual and cognitive features is combined with a model of the underlying mathematical process to provide a psychologically valid metric for optimising magical impact. The thesis describes an optimisation framework that can be flexibly applied to a range of different types of mathematics-based tricks. Three case studies are presented as exemplars of the methodology at work, the outputs of which are language- and image-based prediction and mind-reading tricks, a magical jigsaw, and a mind-reading card trick effect. Each trick created is evaluated through testing at public engagement events and in a laboratory environment. Further, a demonstration of the real-world efficacy of the approach for professional performers is presented in the form of sales of the tricks in a reputable magic shop in London.
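As a rough illustration of the optimisation framework this abstract describes, the sketch below searches a trick's parameter space for the setting that maximises a crowd-derived impact score while keeping the underlying mathematical principle intact. The `math_model` and `perceptual_model` interfaces, the parameter names, and the exhaustive grid search are illustrative assumptions, not the thesis's actual implementation.

```python
from itertools import product

def perceived_magic(params, math_model, perceptual_model):
    """Score one parameter setting: the trick must remain mathematically
    valid, and among valid settings we prefer the highest crowd-derived
    perceptual impact. Both model objects are assumed interfaces."""
    if not math_model.is_valid(params):
        return float("-inf")
    return perceptual_model.impact(params)

def optimise_trick(param_grid, math_model, perceptual_model):
    """Exhaustive search over a small hypothetical parameter grid, e.g.
    param_grid = {"deck_size": [40, 52], "reveal_step": [3, 5, 7]}."""
    keys = list(param_grid)
    candidates = (dict(zip(keys, values)) for values in product(*param_grid.values()))
    return max(candidates, key=lambda p: perceived_magic(p, math_model, perceptual_model))
```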
3. Cross-view learning. Zhang, Li, January 2018.
Key to achieving more efficient machine intelligence is the capability to analyse and understand data across different views, which can be camera views or modality views (such as visual and textual). One generic learning paradigm for automatically understanding data from different views is cross-view learning, which includes cross-view matching, cross-view fusion and cross-view generation. Specifically, this thesis investigates two of them, cross-view matching and cross-view generation, by developing new methods for addressing the following specific computer vision problems. The first problem is cross-view matching for person re-identification, in which a person is captured by multiple non-overlapping camera views and the objective is to match him/her across views among a large number of imposters. Typically a person's appearance is represented using features of thousands of dimensions, whilst only hundreds of training samples are available due to the difficulties in collecting matched training samples. With the number of training samples much smaller than the feature dimension, existing methods face the classic small sample size (SSS) problem and have to resort to dimensionality reduction techniques and/or matrix regularisation, which lead to a loss of discriminative power for cross-view matching. To that end, this thesis proposes to overcome the SSS problem in subspace learning by matching cross-view data in a discriminative null space of the training data. The second problem is cross-view matching for zero-shot learning, where data are drawn from different modalities, each for a different view (e.g. visual or textual), versus the single-modal data considered in the first problem. This is inherently more challenging as the gap between different views becomes larger. Specifically, the zero-shot learning problem can be solved if the visual representation/view of the data (object) and its textual view are matched. Moreover, it requires learning a joint embedding space onto which data from different views can be projected for nearest neighbour search. This thesis argues that the key to making zero-shot learning models succeed is to choose the right embedding space. Different from most existing zero-shot learning models, which utilise a textual or an intermediate space as the embedding space for achieving cross-view matching, the proposed method uniquely explores the visual space as the embedding space. This thesis finds that in the visual space, the subsequent nearest neighbour search suffers much less from the hubness problem and thus becomes more effective. Moreover, a natural mechanism for jointly optimising multiple textual modalities in an end-to-end manner in this model demonstrates significant advantages over existing methods. The last problem is cross-view generation for image captioning, which aims to automatically generate textual sentences from visual images. Most existing image captioning studies are limited to investigating variants of deep learning-based image encoders, improving the inputs for the subsequent deep sentence decoders. Existing methods have two limitations: (i) They are trained to maximise the likelihood of each ground-truth word given the previous ground-truth words and the image, termed Teacher-Forcing. This strategy may cause a mismatch between training and testing, since at test time the model uses the previously generated words from the model distribution to predict the next word. This exposure bias can result in error accumulation in sentence generation at test time, since the model has never been exposed to its own predictions. (ii) The training supervision metric, such as the widely used cross-entropy loss, is different from the evaluation metrics used at test time. In other words, the model is not directly optimised towards the task expectation, and the learned model is therefore suboptimal. One main underlying reason is that the evaluation metrics are non-differentiable and therefore much harder to optimise against. This thesis overcomes the above problems by exploring reinforcement learning. Specifically, a novel actor-critic based learning approach is formulated to directly maximise the reward, the actual Natural Language Processing quality metrics of interest. Compared to existing reinforcement learning based captioning models, the new method has the unique advantage that per-token advantage and value computation is enabled, leading to better model training.
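To make the discriminative null space idea concrete, here is a minimal numerical sketch of learning such a space in the SSS setting: it keeps the directions in which the within-class scatter vanishes (so every training identity collapses to a single point) and, within those, the directions that best separate the class means. The function name and the exact construction are an illustrative reconstruction of the general technique using plain NumPy, not the thesis's code.

```python
import numpy as np

def null_space_projection(X, labels):
    """Learn a discriminative null space for cross-view matching (sketch).
    X: (n_samples, n_features) with n_features >> n_samples; labels: identities."""
    classes = np.unique(labels)
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in classes:
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Null space of Sw: eigenvectors with (numerically) zero eigenvalues.
    w, V = np.linalg.eigh(Sw)
    N = V[:, w < 1e-8]
    # Within that null space, keep directions maximising between-class scatter.
    w2, V2 = np.linalg.eigh(N.T @ Sb @ N)
    W = N @ V2[:, np.argsort(w2)[::-1][:len(classes) - 1]]
    return W   # project with X @ W, then match across camera views by nearest neighbour
```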
4. Perspective relativity: a conceptual examination of the applicability of an articulated notion of "perspective" to such matters as the problem of meanings. Heppel, V. J. H., January 1985.
The aim of this thesis is to articulate and defend a general notion of 'perspectives' and some of the ways that they relate to one another, in order to help to clarify one of the preliminary conceptual problems in cybernetics, namely, the relation between energy propagation (signal) and information propagation (message). The literature on this topic is meagre, although the literature relevant to it is too great to cover comprehensively. The approach closely follows the ideas of Thomas Kuhn and Paul Feyerabend in the philosophy of science. It is found that the perspective notion has possible uses other than that of signal and message, since the same arguments apply to a wide variety of conceptual and human situations. The concepts considered include: point of view, field space, overall view, three broad categories of perspective difference, compatible and incompatible perspectives, the effect of values and goals, and mutual sensitivity and relevance of perspective spaces. There are five chapters: the first introduces the perspective approach to the 'problem of meanings' and provides a brief introduction to the other four chapters; the second examines two fragments of the philosophical background; the third offers a relatively informal discussion of perspectives and perspective relativity; the fourth suggests an example of a terminology of perspectives (true to perspective relativity, not the only possible one); and the final chapter summarises some immediate results as well as suggesting some possible specialised applications, including political models, information retrieval and machine intelligence.
5. Using Machine Intelligence to Prioritise Code Review Requests. Saini, Nishrith, January 2020.
Background: Modern Code Review (MCR) is a commonly used practice in software development: the process of reviewing any new code changes that need to be merged with the existing codebase. A developer receives many code review requests daily, and these requests arrive unprioritised. Manually prioritising them is a challenging and time-consuming process. Objectives: This thesis aims to address these issues by developing a machine intelligence-based code review prioritisation tool. The goal is to identify the factors that impact the code review prioritisation process, with the help of feedback provided by experienced developers and the literature; these factors are then used to develop and implement a solution that prioritises code review requests automatically. The developed prioritisation tool, named Pineapple, is deployed and evaluated through user and reviewer feedback in a real large-scale project. Methods: A case study has been conducted at Ericsson. The factors that impact the code review prioritisation process were identified through a literature review and semi-structured interviews. The feasibility, usability and usefulness of Pineapple have been evaluated using a static validation method, based on responses provided by developers after using the tool. Results: The results indicate that Pineapple can help developers prioritise their code review requests and assist them while performing code reviews. The majority of respondents believed that Pineapple can decrease the lead time of the code review process while providing reliable prioritisations. The prioritisations are performed in a production environment in an average time of two seconds. Conclusions: The implementation and validation of Pineapple suggest the tool's usefulness in helping developers prioritise their code review requests. The tool helps decrease the code review lead time and reduces the workload on developers when reviewing code changes.
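The core of such a tool can be pictured as scoring and sorting the open review requests; the sketch below does this with a hand-written linear score. The feature names and weights are purely hypothetical stand-ins for the factors the thesis elicits from developers and the literature, and Pineapple's actual model is not shown here.

```python
from dataclasses import dataclass

@dataclass
class ReviewRequest:
    change_id: str
    lines_changed: int       # hypothetical factor
    hours_waiting: float     # hypothetical factor
    blocks_release: bool     # hypothetical factor

def priority_score(r: ReviewRequest) -> float:
    # Toy linear score; a real tool would learn these weights from data.
    return 2.0 * r.blocks_release + 0.05 * r.hours_waiting - 0.001 * r.lines_changed

def prioritise(requests: list[ReviewRequest]) -> list[ReviewRequest]:
    # Highest score first: the order in which a reviewer would pick them up.
    return sorted(requests, key=priority_score, reverse=True)
```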
6. Active Vision through Invariant Representations and Saccade Movements. Li, Yue, 08 September 2006.
No description available.
7. Dynamically Self-reconfigurable Systems for Machine Intelligence. He, Haibo, 03 October 2006.
No description available.
8. Tactile sensation imaging system and algorithms for tumor detection. Lee, Jong-Ha, January 2011.
Diagnosing the early formation of tumors or lumps, particularly those caused by cancer, has been a challenging problem. To help physicians detect tumors more efficiently, various imaging techniques with different modalities, such as computed tomography, ultrasonic imaging, nuclear magnetic resonance imaging, and mammography, have been developed. However, each of these techniques has limitations, including exposure to radiation, excessive cost, and complexity of machinery. Tissue elasticity is an important indicator of tissue health, with increased stiffness pointing to an increased risk of cancer. In addition to elasticity, geometric parameters such as the size of a tissue inclusion are also important factors in assessing a tumor, so combined knowledge of tissue elasticity and geometry would aid tumor identification. In this research, we present a tactile sensation imaging system (TSIS) and algorithms which can be used in practical medical diagnostic experiments for measuring the stiffness and geometry of a tissue inclusion. The TSIS incorporates an optical waveguide sensing probe unit, a light source unit, a camera unit, and a computer unit. The optical phenomenon of total internal reflection in an optical waveguide is adopted as the tactile sensation imaging principle. The light sources are attached along the edges of the waveguide and illuminate it at the critical angle, so that the light is totally reflected within the waveguide. Once the waveguide is deformed by a stiff object, the trapped light no longer satisfies the critical angle condition and diffuses out of the waveguide. The scattered light is captured by a camera. To estimate various target parameters, we develop a tactile data processing algorithm for measuring target elasticity via direct contact. This algorithm adopts a new non-rigid point matching algorithm called "topology preserving relaxation labeling" (TPRL). Using this algorithm, a series of tactile images is registered and strain information is calculated. Stress information is measured through the summation of the pixel values of the tactile data. The stress and strain measurements are then used to estimate the elasticity of the touched object. This method is validated on commercial soft polymer samples with known Young's moduli. The experimental results show that, using the TSIS and its algorithm, the elasticity of the touched object is estimated to within a 5.38% relative estimation error. We also develop a tissue inclusion parameter estimation method via indirect contact for characterising tissue inclusions. This method comprises a forward algorithm and an inversion algorithm. The finite element modeling (FEM) based forward algorithm is designed to comprehensively predict the tactile data from the parameters of an inclusion in soft tissue. This algorithm is then used to develop an artificial neural network (ANN) based inversion algorithm for extracting various characteristics of tissue inclusions, such as size, depth, and Young's modulus. The estimation method is validated using realistic tissue phantoms with stiff inclusions. The experimental results show that the minimum relative estimation errors for tissue inclusion size, depth, and hardness are 0.75%, 6.25%, and 17.03%, respectively. The work presented in this dissertation is an initial step towards the early detection of malignant breast tumors. / Electrical and Computer Engineering
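The stress/strain estimation described above can be sketched in a few lines: stress is taken as a calibrated multiple of the summed pixel intensity of each tactile frame, strain as the relative deformation of the contact, and the elastic modulus as the slope of the stress-strain relation. The calibration constant, the use of indentation depth as strain, and the simple linear fit are assumptions for illustration only; the thesis's pipeline additionally registers the tactile frames with TPRL before computing strain.

```python
import numpy as np

def estimate_elasticity(tactile_frames, strains, k_stress=1.0):
    """Minimal sketch of the stress/strain idea: stress from summed pixel
    intensities, Young's modulus as the least-squares slope of stress vs.
    strain. k_stress is a hypothetical calibration factor."""
    stresses = np.array([k_stress * np.asarray(frame).sum() for frame in tactile_frames])
    strains = np.asarray(strains)             # e.g. indentation depth / sample thickness
    E, _ = np.polyfit(strains, stresses, 1)   # slope ~ elastic modulus estimate
    return E
```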
9. Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications. Bai, Kang Jun, 08 June 2021.
Rapid advances in machine learning have made information analysis more efficient than ever before. However, to extract valuable information from trillions of bytes of data for learning and decision-making, general-purpose computing systems or cloud infrastructures are often deployed to train large-scale neural networks, consuming a colossal amount of resources while exposing significant security issues. Among potential approaches, the neuromorphic architecture, which is not only amenable to low-cost implementation but can also be deployed with an in-memory computing strategy, has been recognized as an important method for accelerating machine intelligence applications. In this dissertation, theoretical and practical properties of a hybrid neural computing architecture are introduced; the architecture utilizes a dynamic reservoir with short-term memory to enable historical learning, with the potential to classify non-separable functions. The hybrid neural computing architecture integrates both spatial and temporal processing structures, sidestepping the limitations introduced by the vanishing gradient. Specifically, this is made possible through four critical features: (i) a feature extractor built upon the in-memory computing strategy, (ii) a high-dimensional mapping with the Mackey-Glass neural activation, (iii) a delay-dynamic system with historical learning capability, and (iv) a unique learning mechanism that updates only the readout weights. To support the integration of neuromorphic architecture and deep learning strategies, the first generation of the delay-feedback reservoir network was successfully fabricated in 2017, and the spatial-temporal hybrid neural network with an improved delay-feedback reservoir was successfully fabricated in 2020. To demonstrate effectiveness and performance across diverse machine intelligence applications, the introduced network structures are evaluated through (i) time series prediction, (ii) image classification, (iii) speech recognition, (iv) modulation symbol detection, (v) radio fingerprint identification, and (vi) clinical disease identification. / Doctor of Philosophy / Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities in raw sensory information. This is made possible through multiple processing layers with a colossal number of neurons, in a similar way to humans. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, general-purpose computing systems and cloud infrastructures can no longer offer a timely response, while exposing significant security issues. With the introduction of the neuromorphic architecture, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years.
The major contributions of this dissertation include designing and fabricating a new class of hybrid neural computing architecture and applying various deep learning strategies to diverse machine intelligence applications. The resulting hybrid neural computing architecture offers an alternative solution for accelerating the neural computations required by sophisticated machine intelligence applications with a simple system-level design, thereby opening the door to low-power system-on-chip design for future intelligent computing and providing prominent design solutions and performance improvements for Internet of Things applications.
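The delay-feedback reservoir idea at the heart of this architecture can be sketched in software: a single nonlinear node with a Mackey-Glass style activation drives a delay line of virtual nodes, and only the linear readout weights are trained. The parameter values, the simplified virtual-node update, and the ridge-regression readout below are illustrative assumptions, not the fabricated chip or its exact equations.

```python
import numpy as np

def mackey_glass_activation(x, gamma=0.8, p=1.0):
    """Mackey-Glass style nonlinearity often used as the node function in
    delay-feedback reservoirs (parameter values here are illustrative)."""
    return gamma * x / (1.0 + np.abs(x) ** p)

def run_delay_reservoir(u, n_virtual=50, alpha=0.3, mask_seed=0):
    """One nonlinear node, a delay line of `n_virtual` virtual nodes,
    and a random time-multiplexing input mask; u is a 1-D input sequence."""
    rng = np.random.default_rng(mask_seed)
    mask = rng.uniform(-1, 1, n_virtual)
    state = np.zeros(n_virtual)              # delay line = reservoir state
    states = []
    for u_t in u:
        new_state = np.empty(n_virtual)
        for i in range(n_virtual):
            feedback = state[i]               # value from one delay period ago
            new_state[i] = mackey_glass_activation(alpha * feedback + mask[i] * u_t)
        state = new_state
        states.append(state.copy())
    return np.array(states)                   # (T, n_virtual) reservoir responses

def train_readout(states, targets, ridge=1e-4):
    """Train only the readout weights (ridge regression), the learning
    mechanism highlighted in the abstract."""
    S, y = states, np.asarray(targets)
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ y)
```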
10. Inteligência de máquina: esboço de uma abordagem construtivista [Machine intelligence: sketch of a constructivist approach]. Costa, Antonio Carlos da Rocha, January 1993.
This work introduces a definition for the notion of machine intelligence, establishes the concrete possibility of that definition and gives indications of its necessity - that is, it gives the notion an objective content and shows the interest and utility that the definition may have for computer science in general and for artificial intelligence in particular. Specifically, we take a particular reading of the definition of intelligence given by J. Piaget and establish the conditions under which that definition can be interpreted in the domain of machines. To achieve this, a revision of the fundamental notions of computer science was necessary, in order to make explicit the dynamical aspects of variability, controllability and adaptability underlying those concepts (machine, program, computation, and machine organization, regulation and adaptation). On the other hand, a change of attitude toward the objectives of artificial intelligence was also required. The given definition presupposes that one recognizes the operational autonomy of machines, and this implies abandoning, or at least setting aside, the point of view we call artificialism - the search for the imitation of the intelligent behavior of human beings or animals - and adopting the point of view we call naturalism - the consideration of machine intelligence as a natural phenomenon in machines, worthy of being studied in its own right. The work presents the results of the reflection through which we tried to realize those goals.