511. Semi-supervised topic models applied to mathematical document classification. Evans, Ieuan. January 2017.
Our objective is to build a mathematical document classifier: a machine which, for a given mathematical document $\mathbf{x}$, determines the mathematical subject area $c$. In particular, we wish to construct a function $f$ such that $f(\mathbf{x}, \boldsymbol{\Theta}) = c$, where $f$ requires the possibly unknown parameters $\boldsymbol{\Theta}$, which may be estimated using an existing corpus of labelled documents. The novelty here is that our proposed classifiers observe a mathematical document over dual vocabularies, namely as a collection of both words and mathematical symbols. In this thesis, we predominantly review the claim made in \cite{Watt} that mathematical document classification is possible via symbol frequency analysis. In particular, we investigate whether this claim is justified, since \cite{Watt} contains no experimental evidence to support it. Furthermore, we extend this research and investigate whether the inclusion of mathematical notational information improves classification accuracy over the existing single-vocabulary approaches. To do so, we review a selection of machine learning methods for document classification, refine and extend these models to incorporate mathematical notational information, and investigate whether they yield higher classification performance than the existing word-only versions. In this research, we develop the novel mathematical document models "Dual Latent Dirichlet Allocation" and "Dual Pachinko Allocation", which extend the existing topic models "Latent Dirichlet Allocation" and "Pachinko Allocation" respectively. Our proposed models observe mathematical documents over two separate vocabularies (words and mathematical symbols). Furthermore, we present Online Variational Bayes for Pachinko Allocation and for our proposed models, allowing fast parameter estimation over a single pass of the data. We perform a systematic analysis of these models, verify the claims made in \cite{Watt}, and observe that the inclusion of symbol data via Dual Pachinko Allocation alone yields an increase in classification performance over the single-vocabulary variants and the prior art in this field.
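A minimal sketch (not from the thesis) of the dual-vocabulary idea behind Dual Latent Dirichlet Allocation: each document draws a single set of topic proportions, and its words and mathematical symbols are then generated from separate per-topic distributions over their own vocabularies. All sizes and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions, not the thesis's values):
K, W, S = 10, 5000, 300                 # topics, word vocabulary, symbol vocabulary
alpha = 0.1                             # Dirichlet prior on topic proportions
phi_words = rng.dirichlet(np.full(W, 0.01), size=K)    # per-topic word distributions
phi_symbols = rng.dirichlet(np.full(S, 0.01), size=K)  # per-topic symbol distributions

def generate_document(n_words=200, n_symbols=40):
    """Sample one document over both vocabularies from a shared topic mixture."""
    theta = rng.dirichlet(np.full(K, alpha))            # document's topic proportions
    word_topics = rng.choice(K, size=n_words, p=theta)
    symbol_topics = rng.choice(K, size=n_symbols, p=theta)
    words = np.array([rng.choice(W, p=phi_words[z]) for z in word_topics])
    symbols = np.array([rng.choice(S, p=phi_symbols[z]) for z in symbol_topics])
    return words, symbols

words, symbols = generate_document()
```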
512. Conflict, conciliation and computer-mediated communication: using online dispute resolution to explain the impact of media properties on relational communication. Billings, Matthew J. January 2008.
No description available.
513. Designing for effective freehand gestural interaction. Ren, Gang. January 2013.
Gestural interaction has been investigated as an interaction technique for many years and could potentially deliver more natural and intuitive methods for human-computer interaction. As a novel input mode compared to traditional input devices such as the keyboard and mouse, gestural interaction has been applied by researchers in many different domains, and many different gestural interaction methods and systems have been built for both research and mass production. However, most previous gesture user interfaces rely on hand-held devices or the wearing of fiducial markers for gesture tracking. Freehand gestures, which are tracked by distance sensors without requiring users to hold or wear special devices, have not been fully explored in previous research. Considering that freehand gestural interaction could be easier to use and to deploy for ordinary users in everyday life, further research should be conducted into designing effective freehand gestural interaction. In an effort to extend the knowledge and understanding of human factors and interaction design issues relating to freehand gestural interaction, we first provide a review of related work and analyse the characteristics, design space, and design challenges and opportunities of freehand gestural interaction. We then progressively investigate several aspects of freehand gestural interaction, including option selection in 2D and 3D layouts, object selection in densely populated environments, and 3D navigation in public settings. Based on the interaction design, prototype development and user evaluations, we report results on user performance, behaviour and preference, and compare our findings with previous research to extend the state of the art. Furthermore, we extend the discussion to a set of practical design suggestions for effective freehand gestural interaction design for different scenarios and interaction tasks. We conclude the thesis with directions for the future development of freehand gestural interaction technologies and methods.
514. Adaptive learning for segmentation and detection. Deng, Jingjing (Eddy). January 2017.
Segmentation and detection are two fundamental problems in computer vision and medical image analysis; they are intrinsically interlinked by the nature of machine learning based classification, especially supervised learning methods. Many automatic segmentation methods have been proposed which rely heavily on hand-crafted discriminative features for specific geometries and on powerful classifiers for delineating the foreground object from the background region. The aim of this thesis is to investigate adaptive schemes that can be used to derive efficient interactive segmentation methods for medical imaging applications, and adaptive detection methods for addressing generic computer vision problems. In this thesis, we consider adaptive learning as a progressive learning process that gradually builds the model given sequential supervision from user interactions. The learning process can be either adaptive re-training for small-scale models and datasets, or adaptive fine-tuning for medium to large scales. In addition, adaptive learning is considered a progressive learning process that gradually subdivides a big and difficult problem into a set of smaller but easier problems, where a final solution can be found by combining the individual solvers consecutively. We first show that when discriminative features are readily available, the adaptive learning scheme can lead to an efficient interactive method for segmenting the coronary artery, where promising segmentation results can be achieved with limited user intervention. We then present a more general interactive segmentation method that integrates a CNN-based cascade classifier and a parametric implicit shape representation. The features are self-learnt during the supervised training process; no hand-crafting is required. The segmentation is then obtained by imposing a piecewise constant constraint on the detection result through the proposed shape representation using region-based deformation. Finally, we show that the adaptive learning scheme can also be used to address the face detection problem in an unconstrained environment, where two CNN-based cascade detectors are proposed. Qualitative and quantitative evaluations of the proposed methods are reported and show the efficiency of adaptive schemes for addressing segmentation and detection problems in general.
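As an illustration only (the thesis's own models use CNN cascades and shape priors), the sketch below shows the basic adaptive re-training loop for interactive segmentation: the classifier is re-fitted each time the user supplies new corrective labels, and the updated mask is shown back to the user. The feature extraction and the user-interaction callback are assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def interactive_segmentation(features, get_user_labels, n_rounds=5):
    """Adaptive re-training from sequential user supervision (illustrative only).

    features        : (n_pixels, n_features) array of per-pixel descriptors
    get_user_labels : callback taking the current mask and returning
                      (pixel_indices, labels) scribbled by the user this round
    """
    labelled_idx, labels = [], []
    clf = LogisticRegression(max_iter=1000)
    mask = np.zeros(len(features), dtype=int)           # initial empty segmentation
    for _ in range(n_rounds):
        idx, lab = get_user_labels(mask)                # user corrects the current mask
        labelled_idx.extend(idx)
        labels.extend(lab)
        clf.fit(features[labelled_idx], labels)         # re-train on all feedback so far
        mask = clf.predict(features)                    # updated mask shown to the user
    return mask
```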
515. A conceptual e-learning system for teaching mathematics. Alghurabi, Yasser Mohammed. January 2017.
Researchers have been attempting to implement effective e-learning methods that improve educational outcomes while remaining consistent with human psychology and the current state of technology. Hence, e-learning systems with a particular focus on cognitive aspects have emerged as a potential solution. This research study aimed at enhancing students' conceptualisations and mental perceptions of mathematical geometric concepts using an e-learning system developed on key aspects of the cognitive theory of Gärdenfors's Conceptual Spaces (Gärdenfors, 2000), which utilises a combination of visual and audio elements. The research achieved this through an effective agent-based model designed to help in teaching basic mathematical geometric concepts in primary school. The e-learning system adapts to the individual student's needs and pace in grasping the geometric concepts, while maintaining the design objective of a flexible, proactive and semi-autonomous approach for the agents. This research study investigated the instruction of geometric concepts for primary school students based on three national curricula (UK, New Zealand and Saudi Arabia). Sets of questions were developed to study the students' understanding of selected concepts through a rigorous process of surveying primary school mathematics teachers, determining the appropriate level of question difficulty, and requesting verification and appraisal of the type and format of the questions from the mathematics instructor community. Based on the generated sets of questions, a prototype e-learning system was developed and used in a pilot experiment conducted on students from the UK and Saudi Arabia to investigate whether students' misconceptions were consistent with Gärdenfors's Conceptual Spaces cognitive model. In addition to the variations in students' answers, the experiment revealed consistent misconceptions based on the mistakes made on specific questions, which confirmed several aspects of the Conceptual Spaces cognitive model. These results led to the implementation of the CABELS e-learning system, developed on this theory, to enhance student conceptualisations of the previously identified common misconceptions in a way that assimilates their mental perceptions of the studied concepts. CABELS includes two parallel modules, a language-based and a visual-based module, and involves three stages: a pre-test, lessons explaining the concepts, and a post-test. The CABELS system was first used in experiments on primary school students in Saudi Arabia, which greatly improved their understanding of line and shape concepts. Furthermore, a statistical analysis of the experimental data showed that there was no noticeable effect of the teaching methods, groups, classrooms or genders on the students' scores, nor any interdependence between these variables. These results therefore reinforce the effectiveness of CABELS for teaching basic geometric shape concepts to primary school pupils. The effectiveness of the CABELS system was also evaluated through post-session interviews with students to assess their satisfaction and experiences with CABELS. The results showed overall student satisfaction with the system, which the students attributed mainly to its usability and usefulness.
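A toy sketch of the Conceptual Spaces idea the system builds on (not the CABELS implementation): geometric concepts are represented as prototype points in a space of quality dimensions, and an observed shape is assigned to the concept whose prototype is closest under a weighted distance. The dimensions, prototypes and weights below are invented for illustration.

```python
import numpy as np

# Invented quality dimensions for simple geometric concepts:
# (number of sides, angle regularity, side-length regularity).
prototypes = {
    "square":    np.array([4.0, 1.0, 1.0]),
    "rectangle": np.array([4.0, 1.0, 0.5]),
    "triangle":  np.array([3.0, 0.8, 0.8]),
}
weights = np.array([1.0, 0.5, 0.5])      # assumed salience of each dimension

def nearest_concept(observation):
    """Assign an observed shape to the concept with the closest prototype."""
    distances = {name: np.sqrt(np.sum(weights * (observation - p) ** 2))
                 for name, p in prototypes.items()}
    return min(distances, key=distances.get)

print(nearest_concept(np.array([4.0, 0.9, 0.6])))   # -> rectangle
```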
516. Study of the Fly algorithm for 2-D and 3-D image reconstruction. Abbood, Zainab Ali. January 2017.
The aim of this study is to investigate the behaviour and application of an evolutionary algorithm (EA) based on a particular type of cooperative co-evolution algorithm (CCEA), the Parisian Approach. It evolves and keeps an entire population as the solution to the problem, instead of keeping only the best individual as in classical EAs. The CCEA we selected is called the “Fly algorithm”. It is named after flies because the individuals are extremely primitive and correspond to three-dimensional (3-D) points. This algorithm has been relatively overlooked despite showing promising results in real-time robotics and in image reconstruction in tomography. Our focus in this study is on two types of applications: medical imaging and digital art. i) In the medical application, we aim to improve quantitative results in 3-D reconstructed volumes in positron emission tomography (PET). We investigate the use of density fields, based on Metaballs and on Gaussian functions respectively, to obtain a realistic output. We also investigate how to exploit individuals' fitness to modulate their individual footprint in the final reconstructed volume; an individual's fitness can be seen as a level of confidence in its 3-D position. The resulting volumes are compared with previous work in terms of normalised cross-correlation. In our test cases, data fidelity increases by more than 10% when density fields are used instead of a naive approach. Our method also provides reconstructions comparable to those obtained using well-established techniques used in medicine, e.g. filtered back-projection (FBP) and ordered subset expectation-maximization (OSEM). Our algorithm relies heavily on the mutation operator. We propose four different fully adaptive mutation operators: basic mutation, adaptive mutation variance, dual mutation and directed mutation. Their impact on the algorithm's efficiency is analysed and validated on PET reconstruction. ii) In the digital art application, we present the first application of the Fly algorithm in digital art, in the branch called “evolutionary art”. The motivation is to evaluate the algorithm with a much more complex structure of flies: they are still defined as simplistic primitives (3-D points) but with colours, sizes and rotations. Different visual effects were investigated, such as mosaic-like images and spray-paint rendering. An online survey (with 41 participants) was conducted to validate our approach; participants compared our results with similar ones generated with open-source software (GIMP). Again, our method shows promising results. In conclusion, our investigations confirm that the Fly algorithm works well with a complex search space. We demonstrate a fast and accurate solution to optimise a set of parameters in both applications. The Fly algorithm can improve reconstructed image quality compared to FBP and OSEM in the medical application, and to GIMP in the digital art application.
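A rough sketch, not the thesis code, of two ingredients described above: an adaptive mutation whose variance shrinks for fitter flies, and a fitness-modulated Gaussian footprint splatted into the reconstructed volume. Parameter values and the fitness normalisation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def adaptive_mutation(position, fitness, sigma_max=5.0, sigma_min=0.1):
    """Mutate a fly's 3-D position; fitter flies receive smaller perturbations.

    fitness is assumed to be normalised to [0, 1] for this sketch.
    """
    sigma = sigma_max - fitness * (sigma_max - sigma_min)
    return position + rng.normal(0.0, sigma, size=3)

def splat_fly(volume, position, fitness, radius=2):
    """Add a Gaussian footprint to the volume, modulated by the fly's fitness."""
    z0, y0, x0 = np.round(position).astype(int)
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                z, y, x = z0 + dz, y0 + dy, x0 + dx
                if all(0 <= c < s for c, s in zip((z, y, x), volume.shape)):
                    volume[z, y, x] += fitness * np.exp(-(dz*dz + dy*dy + dx*dx) / 2.0)

volume = np.zeros((32, 32, 32))
fly = np.array([16.0, 16.0, 16.0])
splat_fly(volume, adaptive_mutation(fly, fitness=0.8), fitness=0.8)
```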
517. Analysis and synthesis of critical design-thinking for data visualisation designers and learners. Alnjar, Hanan R. January 2017.
Designers of data-visualisation tools think deeply about their designs, and constantly question their own judgements and design decisions. They make these judgements to ascertain how to improve their ideas, so as to make something suitable and fit-for-purpose. But self-reflection is often difficult; learners in particular often find it difficult to critically reflect upon their own work. Therefore, there is a need to guide learners to perform appropriate critical reflections of their work, and to develop skills to make better judgements. There are many visualisation computer tools and programming libraries that help users create visualisation systems; however, there are few tools or techniques to help them systematically critique or evaluate their creations, to ascertain what is good or bad in their designs. Learners who wish to create data-visualisation tools lack structures and guidelines that will aid them in critiquing their visualisations. Such critical analysis could be achieved by creating an appropriate computing tool that uses metrics and heuristics to perform the judgement, or by human judgement aided by a written guide. Subsequently, this research explores structures to help humans perform better critical evaluations. First, the dissertation uses a traditional research methodology to investigate metrics in visualisation, to explore related work and to investigate how metrics are used by computers to perform judgements. We design a framework that describes how and where metrics are used in the visualisation design process. Second, the focus turns to investigating how humans think and make critical judgements on designs, especially visualisation designs. We undertake an observational study in which participants critique a range of objects and designs. An in-depth analysis of this observational study is performed; through analysis and markup of this data, themes are extracted using a thematic analysis approach, and the resulting categories are used to develop our critical analysis system. Third, we follow an iterative approach to engineer our critical-evaluation system. The output is the Critical Design Sheet, created after much refinement and adaptation, with several think-aloud sessions held to detect design problems and refine it into an effective model. Fourth, an evaluation process is performed to assess the usability of the Critical Design Sheet with users. The reliability and learnability of the tool were tested through the analysis of user usage data over two different cohorts of students (PhD and undergraduate). Fifth, an online implementation of the Critical Design Sheet was developed and briefly evaluated to discover whether participants were satisfied with the computer version of the critical analysis system. These five parts represent the five contributions of the dissertation, respectively: the metric framework, the critical thinking workshop and its analysis, the design of the Critical Design Sheet, the evaluation of the method, and finally the prototype online system. In conclusion, this dissertation provides learners and practitioners with a technique (the CDS) that has been proven to help students successfully critique their visual designs and make decisions on their creations. The CDS works by breaking down the visual design into individual categories, making it easier for practitioners to critique their work.
518. DSP-enabled reconfigurable optical network devices and architectures for cloud access networks. Duan, Xiao. January 2018.
To meet the ever-increasing bandwidth requirements, the rapid growth in highly dynamic traffic patterns, and the increasing complexity in network operation, whilst providing high power consumption efficiency and cost-effectiveness, the approach of combining traditional optical access networks, metropolitan area networks and fourth-generation (4G)/fifth-generation (5G) mobile front-haul/back-haul networks into unified cloud access networks (CANs) is one of the most preferred “future-proof” technical strategies. The aim of this dissertation research is to extensively explore, both numerically and experimentally, the technical feasibility of utilising digital signal processing (DSP) to achieve key fundamental elements of CANs from the device level to the network architecture level, including: i) software-reconfigurable optical transceivers, ii) DSP-enabled reconfigurable optical add/drop multiplexers (ROADMs), iii) network operation characteristics-transparent digital filter multiple access (DFMA) techniques, and iv) DFMA-based passive optical networks (PONs) with DSP-enabled software reconfigurability. As reconfigurable optical transceivers constitute fundamental building blocks of the CAN physical layer, novel software-reconfigurable transceivers based on digital orthogonal filtering are proposed and explored, both experimentally and numerically, for the first time. By making use of Hilbert-pair-based 32-tap digital orthogonal filters implemented in field-programmable gate arrays (FPGAs), a 2GS/s, 8-bit digital-to-analogue converter (DAC)/analogue-to-digital converter (ADC), and an electro-absorption modulated laser (EML) intensity modulator (IM), world-first reconfigurable real-time transceivers are successfully demonstrated experimentally in a 25km intensity-modulation and direct-detection (IMDD) standard single-mode fibre (SSMF) system. The transceiver dynamically multiplexes two orthogonal frequency division multiplexed (OFDM) channels with a total capacity of 3.44Gb/s. Experimental results also indicate that the transceiver performance is fully transparent to various subcarrier modulation formats of up to 64-QAM, and that the maximum achievable transceiver performance is mainly limited by the cross-talk effect between the two spectrally overlapped orthogonal channels, which can, however, be minimised by adaptive modulation of the OFDM signals. For further transceiver optimisation, the impacts of major transceiver design parameters, including digital filter tap number and subcarrier modulation format, on the transmission performance are also numerically explored. Reconfigurable optical add/drop multiplexers (ROADMs) are also vital networking devices for application in CANs, as they play a critical role in offering fast and flexible network reconfiguration. A new optical-electrical-optical (O-E-O) conversion-free, software-switched flexible ROADM is extensively explored, which is capable of providing dynamic add/drop operations at wavelength, sub-wavelength and orthogonal sub-band levels in software-defined networks incorporating the reconfigurable transceivers. Firstly, the basic add and drop operations of the proposed ROADMs are theoretically explored and the ROADM designs are optimised. To crucially validate their practical feasibility, the ROADMs are experimentally demonstrated for the first time. Experimental results show that the add and drop operation performances are independent of the sub-band signal spectral location, and that add/drop power penalties are < 2dB.
The ROADMs are also robust against a differential optical power dynamic range of > 2dB and a drop RF signal power range of 7.1dB. In addition to exploring key optical networking devices for CANs, the first-ever DFMA PON experimental demonstrations are also conducted, using two real-time, reconfigurable, OOFDM-modulated optical network units (ONUs) operating on spectrally overlapped multi-Gb/s orthogonal channels, and an offline optical line terminal (OLT). For multipoint-to-point upstream signal transmission over 26km of SSMF in an IMDD DFMA PON, experiments show that each ONU achieves a similar upstream bit error rate (BER) performance, excellent robustness to inter-ONU sample timing offset (STO), and a large ONU launch power variation range. Given the importance of the IMDD DFMA PON channel frequency response roll-off, both theoretical and experimental explorations are undertaken to investigate the impact of channel frequency response roll-off on the upstream transmission of the DFMA PON system. Such work provides valuable insights into channel roll-off-induced performance dependencies to facilitate cost-effective practical network/transceiver/component designs.
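As an illustration of the digital orthogonal filtering principle (not the exact 32-tap Hilbert-pair design used in the thesis), the sketch below modulates one low-pass prototype by a cosine and a sine at the same carrier, giving two spectrally overlapped filters whose inner product is numerically zero; this orthogonality is what allows two channels to share the spectrum and still be separated by matched filtering.

```python
import numpy as np
from scipy.signal import firwin

# Illustrative parameters, not the thesis's filter design.
n_taps = 32
prototype = firwin(n_taps, cutoff=0.25)            # low-pass prototype (Nyquist-normalised)
n = np.arange(n_taps)
carrier = 0.5 * np.pi * (n - (n_taps - 1) / 2)     # quarter-sample-rate carrier, centred
h_i = prototype * np.cos(carrier)                  # in-phase shaping filter
h_q = prototype * np.sin(carrier)                  # quadrature shaping filter

# The two filters occupy the same spectral region but are orthogonal,
# so two channels shaped by them can be recovered by matched filtering.
print("inner product of the pair:", np.dot(h_i, h_q))   # ~0 (up to floating point)
```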
519. Indirect 3D reconstruction through appearance prediction. Godard, Clément. January 2018.
As humans, we easily perceive shape and depth, which helps us navigate our environment and interact with objects around us. Automating these abilities for computers is critical for many applications such as self-driving cars, augmented reality and architectural surveying. While active 3D reconstruction methods, such as laser scanning or structured light, can produce very accurate results, they are typically expensive and their use cases can be limited. In contrast, passive methods that make use of only easily captured photographs are typically less accurate, as mapping from 2D images to 3D is an under-constrained problem. In this thesis we focus on passive reconstruction techniques. We explore ways to recover 3D shape from images in two challenging situations: 1) where a collection of images features a highly specular surface whose appearance changes drastically between the images, and 2) where only one input image is available. For both cases, we pose the reconstruction task as an indirect problem. In the first situation, the rapid change in appearance of highly specular objects makes it infeasible to directly establish correspondences between images. Instead, we develop an indirect approach that uses a panoramic image of the environment to simulate reflections, and recover the surface which best predicts the appearance of the object. In the second situation, the ambiguity inherent in single-view reconstruction is typically resolved with machine learning, but acquiring depth data for training is both difficult and expensive. We present an indirect approach in which we train a neural network to regress depth by performing the proxy task of predicting the appearance of the image when the viewpoint changes. We demonstrate that highly specular objects can be accurately reconstructed in uncontrolled environments, producing results that are 30% more accurate than the initialisation surface. For single-frame depth estimation, our approach improves object boundaries in the reconstructions and significantly outperforms all previously published methods. In both situations, the proposed methods shrink the accuracy gap between camera-based reconstruction and what is achievable with active sensors.
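A simplified sketch of the appearance-prediction idea for single-image depth (assuming a rectified grayscale stereo pair; the thesis trains a neural network, whereas this is plain NumPy): the right image is warped into the left view using the predicted disparity, and the photometric error between the reconstruction and the real left image serves as the training signal, so no ground-truth depth is needed.

```python
import numpy as np

def warp_right_to_left(right_img, disparity):
    """Synthesise the left view by sampling the right image at x - d(x)
    along each row (rectified pair, linear interpolation)."""
    h, w = disparity.shape
    xs = np.arange(w)[None, :] - disparity            # source x-coordinate per pixel
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    frac = np.clip(xs - x0, 0.0, 1.0)
    rows = np.arange(h)[:, None]
    return (1 - frac) * right_img[rows, x0] + frac * right_img[rows, x0 + 1]

def photometric_loss(left_img, right_img, disparity):
    """Appearance-prediction loss: compares the warped right image with the
    real left image, so training needs no ground-truth depth."""
    reconstruction = warp_right_to_left(right_img, disparity)
    return np.mean(np.abs(reconstruction - left_img))  # mean L1 photometric error
```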
520. Data-driven modelling of shape structure. Averkiou, M. January 2015.
In recent years, the study of shape structure has shown great promise, by taking steps towards exposing shape semantics and functionality to algorithms spanning a wide range of areas in computer graphics and vision. By shape structure, we refer to the set of parts that make a shape, the relations between these parts, and the ways in which they correspond and vary between shapes of the same family. These developments have been largely driven by the abundance of 3D data, with collections of 3D models becoming increasingly prominent and websites such as Trimble 3D Warehouse offering millions of free 3D models to the public. The ability to use large amounts of data inside these shape collections for discovering shape structure has made novel approaches to acquisition, modelling, fabrication, and recognition of 3D objects possible. Discovering and modelling the structure of shapes using such data is therefore of great importance. In this thesis we address the problem of discovering and modelling shape structure from large, diverse and unorganized shape collections. Our hypothesis is that by using the large amounts of data inside such shape collections we can discover and model shape structure, and thus use such information to enable structure-aware tools for 3D modelling, including shape exploration, synthesis and editing. We make three key contributions. First, we propose an efficient algorithm for co-aligning large and diverse collections of shapes, to tackle the first challenge in detecting shape structure, which is to place shapes in a common coordinate frame. Then, we introduce a method to parameterize shapes in terms of locations and sizes of their parts, and we demonstrate its application to concurrently exploring a shape collection and synthesizing new shapes. Finally, we define a meta-representation for a shape family, which models the relations of shape parts to capture the main geometric characteristics of the family, and we demonstrate how it can be used to explore shape collections and intelligently edit shapes.
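A small sketch (invented for illustration, not the thesis's representation) of parameterising shapes by the locations and sizes of their parts: each labelled part is reduced to its centre and axis-aligned extent, the concatenation forms the shape's parameter vector, and new shapes can be described by blending the vectors of two family members, assuming a consistent part ordering across the family.

```python
import numpy as np

def parameterise(parts):
    """Encode a segmented shape as a flat vector of per-part (centre, size).

    parts: list of (N_i, 3) point arrays, one per labelled part, given in a
    consistent part order across the shape family (an assumption here).
    """
    descriptor = []
    for pts in parts:
        centre = pts.mean(axis=0)
        size = pts.max(axis=0) - pts.min(axis=0)      # axis-aligned extent
        descriptor.append(np.concatenate([centre, size]))
    return np.concatenate(descriptor)

def synthesise(params_a, params_b, t=0.5):
    """Blend two family members' parameter vectors to describe a new shape."""
    return (1 - t) * params_a + t * params_b
```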