
Design considerations of an intelligent tutoring system for programming languages

Elsom-Cook, Mark January 1984 (has links)
The overall goal of the thesis is to attempt to highlight the major topics which must be considered in the design of any Intelligent Tutoring System and to illustrate their application within the particular domain of LISP programming. There are two major sections to the thesis. The first considers the background to the educational application of computers. It examines possible roles for the computer, explores the relationship between education theory and computer-based teaching, and identifies some important links among existing Tutoring Systems. The section concludes with a summary of the design goals which an Intelligent Tutoring System should attempt to fulfill. The second section applies the design goals to the production of an Intelligent Tutoring System for programming languages. It devises a formal semantic description for programming languages and illustrates its application to tutoring. A method for modelling the learning process is introduced. Some techniques for maintaining a structured tutoring interaction are described. The work is set within the methodology of Artificial Intelligence research. Although a fully implemented tutoring system is not described, all features discussed are implemented as short programs intended to demonstrate the feasibility of the approach taken.

High-fidelity graphics using unconventional distributed rendering approaches

Bugeja, Keith January 2015 (has links)
High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments. While desktop computing, with the aid of modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, real-time rendering of moderately complex scenes is still unachievable on the majority of desktop machines and on the plethora of mobile computing devices that have recently become commonplace. This work provides a wide range of computing devices with high-fidelity rendering capabilities via oft-unused distributed computing paradigms. It speeds up the rendering process on capable devices and provides full functionality to otherwise incapable devices. Novel scheduling and rendering algorithms have been designed to take best advantage of the characteristics of these systems and to demonstrate the efficacy of such distributed methods. The first is a novel system that provides multiple clients with parallel resources for rendering a single task, and adapts in real time to the number of concurrent requests. The second is a distributed algorithm for the remote asynchronous computation of the indirect diffuse component, which is merged with locally computed direct lighting for a full global illumination solution. The third is a method for precomputing indirect lighting information for dynamically generated multi-user environments using the aggregated resources of the clients themselves. The fourth is a novel peer-to-peer system for improving rendering performance in multi-user environments through the sharing of computation results, propagated via a mechanism based on epidemiology. The results demonstrate that the boundaries of the distributed computing typically used for computer graphics can be significantly and successfully expanded by adapting alternative distributed methods.
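The epidemiology-based propagation mentioned in the fourth contribution can be illustrated with a minimal push-gossip sketch. This is a generic illustration of the idea, not the thesis's actual protocol; the function name, fanout parameter and peer model are all hypothetical:

```python
import random

def gossip_rounds(n_peers, fanout, seed_peer=0, rng=None):
    """Count push-gossip rounds until every peer holds a shared
    rendering result that initially only `seed_peer` has computed."""
    rng = rng or random.Random(42)
    informed = {seed_peer}
    rounds = 0
    while len(informed) < n_peers:
        pushes = set()
        for _ in informed:
            # each informed peer pushes the result to `fanout` random peers
            for _ in range(fanout):
                pushes.add(rng.randrange(n_peers))
        informed |= pushes
        rounds += 1
    return rounds

print(gossip_rounds(1000, fanout=3))
```

The epidemic character is what makes such sharing attractive: the informed set roughly multiplies each round, so a result reaches a thousand peers in a handful of rounds without any central coordinator.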

Multi-variate image analysis for detection of biomedical anomalies

Raza, Shan-e-Ahmed January 2014 (has links)
Multi-modal images are commonly used in the field of medicine for anomaly detection, for example CT/MRI images for tumour detection. Recently, thermal imaging has demonstrated its potential for detecting anomalies (e.g., water stress, disease) in plants. In biology, multi-channel imaging systems are now becoming available which combine information about the level of expression of various molecules of interest (e.g., proteins) and which can be employed to investigate molecular signatures of diseases such as cancer, or their subtypes. Before combining information from multiple modalities/channels, however, we need to align (register) the images so that the same point in the multiple images obtained from different sources/channels corresponds to the same point on the object under observation (e.g., a particular point on a leaf, or a particular cell in a tissue). In this thesis, we propose registration methods to align multi-modal/channel images of plants and human tissues. For registration of thermal and visible-light images of plants we propose a registration method based on silhouette extraction. For silhouette extraction, we propose a novel multi-scale method which can be used to extract highly accurate silhouettes of diseased plants in thermal and visible-light images. The extracted silhouettes can then be used to register plant regions across the two modalities. After alignment of the multi-modal images, we combine thermal and visible-light information for classification of water-deficient regions of spinach canopies. We add depth information as a further dimension to our feature set for detection of diseased plants. For depth estimation, we use the disparity between a stereo image pair. We then compare different disparity estimation algorithms and propose a method which produces disparity maps that are not only accurate and smooth but also less sensitive to acquisition noise.
Our results show that by combining information from multiple modalities, the classification accuracy of different classifiers can be increased. In the second part of this thesis, we propose a block-based registration method using mutual information as a similarity measure for registration of multi-channel fluorescence microscopy images. The proposed block-based approach is fast, accurate and robust to local variations in the images. In addition, we propose a method for selecting a reference image with maximal overlap, i.e., a method to choose, from a stack of dozens of multi-channel images, the image which, when used as the reference, causes the minimum amount of information loss during the registration process. Images registered using this method have been used in other studies to investigate techniques for mining molecular patterns of cancer. Both registration algorithms proposed in this thesis produce highly accurate results, and the block-based registration algorithm is shown to be capable of registering the images to sub-pixel accuracy. The disparity estimation algorithm produces smooth and accurate disparity maps in the presence of noise where commonly used disparity estimation algorithms fail to perform. Our results show that by combining multi-modal image data, one can easily increase the accuracy of classifiers in detecting anomalies in plants, which helps to avoid large losses due to disease or lack of water at a commercial level.
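Mutual information, the similarity measure named above, can be sketched in a few lines: it is estimated from a joint intensity histogram of two blocks and is largest when the blocks are well aligned. This is a textbook plug-in estimator, not the thesis's specific implementation; the bin count and block size are arbitrary choices:

```python
import numpy as np

def mutual_information(block_a, block_b, bins=32):
    """Histogram-based mutual information between two equally-sized
    image blocks: sum over p(a,b) * log(p(a,b) / (p(a) p(b)))."""
    hist, _, _ = np.histogram2d(block_a.ravel(), block_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of block_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of block_b
    nz = pxy > 0                               # skip log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img))                    # self-MI is maximal
print(mutual_information(img, rng.random((64, 64))))   # near zero if unrelated
```

A block-based registration would evaluate this score over candidate shifts of each block and keep the shift that maximises it, which is what makes the measure robust to local intensity variations between channels.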

A framework to support multilingual mobile learning : a South African perspective

Jantjies, Mmaki January 2014 (has links)
The proliferation of mobile phone ownership across the world has motivated education technology specialists to find ways of supporting the process of learning in both formal and informal environments through mobile devices. Mobile learning has introduced an opportunity for extending resources to learners in schools through ubiquitous devices. While there have been various pedagogical guidelines on how to create mobile learning systems, little research offers support for developing multilingual mobile learning technology that can be used to support high school learning. This research presents three case studies contributing to the development of a framework that can be used to support the development of multilingual mobile learning software, combining technical and key pedagogical considerations to support the software development process. The approaches described by this framework also take into consideration the code-switching practice common in multilingual classrooms. Code-switching is a technique used by teachers and learners in multilingual classrooms to help learners interpret and understand learning content by switching between two human languages in order to gain deeper perspectives on a topic. The first case study presented in the thesis describes creating appropriate content and learning activities that can be delivered through mobile learning while supporting the code-switching behaviour of multilingual learners in formal learning. The second case study reports on supporting learning activities and content in informal learning environments. The third case study reflects on different language-support characteristics that can be embedded in systems, or used as additional systems, to support multilingual mobile learning content development in cases where language specialists are a rare resource.
The thesis concludes with an evaluation of the framework's practicality in supporting the pedagogical considerations to be made when developing mobile learning systems for use in multilingual high schools. The cases presented in this thesis are based on a South African context.

Modality based perception for selective rendering

Harvey, Carlo January 2011 (has links)
A major challenge in generating high-fidelity virtual environments for use in Virtual Reality (VR) is to provide interactive rates of realism. The high-fidelity simulation of light and sound wave propagation is still unachievable in real time, as physically accurate simulation is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance through a series of novel exploitations: parts of the scene that are not currently being attended to by the viewer are rendered at a much lower quality without the difference being perceived. This thesis investigates the effect that spatialised directional sounds (both discrete and converged) and smells have on the visual attention of the user towards rendered scene images. These perceptual artefacts are utilised in selective rendering pipelines via the use of multi-modal maps. This work verifies the worth of investigating subliminal saccade shifts (fast movements of the eyes) caused by directional audio impulses via a pilot study that eye-tracked participants freely viewing a scene with and without an audio impulse, and with and without a congruent object for that impulse. This experiment showed that even without an acoustic identifier in the scene, directional sound provides an impulse that guides subliminal saccade shifts. A novel technique for generating interactive discrete acoustic samples from arbitrary geometry is also presented. This work is extended by investigating whether temporal auditory sound-wave saliencies can be used as a feature vector in the image rendering process. The method works by producing image maps of the sound wave flux and attenuating these maps via the auditory saliency feature vectors. Whilst selectively rendering, the method encodes spatial auditory distracters into the standard visual saliency map.
Furthermore, this work investigates the effect various smells have on the visual attention of a user freely viewing a set of images whilst being eye-tracked. This thesis explores these saccade shifts towards a congruent smell object. By analysing the gaze points, the time spent attending to a particular area of a scene is considered. The work presents a technique derived from measured data to modulate traditional saliency maps of image features to account for the observed smell congruences, and shows that smell provides an impulse on visual attention. Finally, the observed data is used to apply modulated image saliency maps that address the additional effects cross-modal stimuli have on human perception in a selective renderer. These multi-modal maps, derived from measured data for smells and from sound spatialisation techniques, attempt to exploit the extra stimuli present in multi-modal VR environments and help to re-quantify the saliency map to account for observed cross-modal perceptual features of the human visual system. The multi-modal maps are tested through rigorous psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform better than image saliency maps naively applied to multi-modal virtual environments.
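The core idea of a multi-modal map — re-weighting an image saliency map with a cross-modal (sound or smell) map and then spending the rendering budget accordingly — can be sketched as follows. This is a schematic illustration only; the blending weight, the linear blend itself, and the ray-budget mapping are assumptions, not the thesis's measured model:

```python
import numpy as np

def modulate_saliency(image_saliency, modal_map, weight=0.5):
    """Blend a per-pixel image-feature saliency map with a cross-modal
    map (e.g. sound flux or smell congruence), renormalised to [0, 1]."""
    combined = (1.0 - weight) * image_saliency + weight * modal_map
    return combined / combined.max()

def ray_budget(saliency, min_rays=1, max_rays=64):
    """Fixed-cost selective rendering: salient pixels get more rays."""
    return np.rint(min_rays + saliency * (max_rays - min_rays)).astype(int)

rng = np.random.default_rng(1)
saliency = rng.random((4, 4))   # stand-in for an image saliency map
sound_map = rng.random((4, 4))  # stand-in for a spatialised-audio map
combined = modulate_saliency(saliency, sound_map, weight=0.3)
budget = ray_budget(combined)
```

Under a fixed total cost, pixels near a cross-modal distracter receive more samples while unattended regions are rendered coarsely, which is the exploitation the psychophysical experiments evaluate.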

A quest for a better simulation-based knowledge elicitation tool

Lee, Poh Khoon Ernie January 2007 (has links)
Knowledge elicitation is a well-known bottleneck in the development of Knowledge-Based Systems (KBS). This is mainly due to the tacit nature of knowledge, which makes it difficult to explicate and therefore to analyse. Previous research shows that Visual Interactive Simulation (VIS) can be used to elicit episodic knowledge, in the form of example cases of decisions made by decision makers, for machine learning purposes, with a view to subsequently building a KBS. Nevertheless, there are still issues that need to be explored; these include how to make better use of existing commercial off-the-shelf VIS packages in order to improve the effectiveness and efficiency of the knowledge elicitation process. Based in a Ford Motor Company (Ford) engine assembly plant in Dagenham (East London), an experiment was planned and performed to investigate the effects of using VIS models with different levels of visual fidelity and different settings on the elicitation process. The empirical work can be grouped broadly into eight activities, beginning with gaining an understanding of the case study. This was followed by four concurrent activities: designing the experiment, adapting a current VIS model provided by Ford to support a gaming mode, assessing the adapted model, and devising the measures for evaluating the elicitation process. Following these, eight Ford personnel, all proficient decision makers in the simulated operations system, played with the game models in 48 knowledge elicitation sessions over 19 weeks. In so doing, example cases were collected during the personnel's interactions with the game models. Lastly, the example cases were processed and analysed, and the findings were discussed. It emerges that the decisions elicited through a 2-Dimensional (2D) VIS model are probably more realistic than those elicited through otherwise equivalent models with a higher level of visual fidelity.
Moreover, the former also emerges as the more efficient knowledge elicitation tool. In addition, it appears that the decisions elicited through a VIS model adjusted to simulate more uncommon and extreme scenes cover a wider range of situations. Consequently, it can be concluded that a 2D VIS model adjusted to simulate more uncommon and extreme situations is the optimal VIS-based means of eliciting episodic knowledge.

Supporting authoring of adaptive hypermedia

Hendrix, Maurice January 2010 (has links)
It is well-known that students benefit from personalised attention. However, frequently teachers are unable to provide this, most often due to time constraints. An Adaptive Hypermedia (AH) system can offer a richer learning experience, by giving personalised attention to students. The authoring process, however, is time-consuming and cumbersome. Our research explores the two main aspects of authoring AH: authoring of content and of adaptive behaviour. The research proposes possible solutions to overcome the hurdles towards acceptance of AH in education. Automation methods can help authors; for example, teachers could create linear lessons and our prototype can add content alternatives for adaptation. Creating adaptive behaviour is more complex. Rule-based systems, XML-based conditional inclusion, Semantic Web reasoning and reusable, portable scripting in a programming language have been proposed. These methods all require specialised knowledge. Hence authoring of adaptive behaviour is difficult and teachers cannot be expected to create such strategies. We investigate three ways to address this issue. 1. Reusability: we investigate limitations of adaptation engines which influence the authoring and reuse of adaptation strategies. We propose a metalanguage, as a supplement to the existing LAG adaptation language, showing how it can overcome such limitations. 2. Standardisation: there are no widely accepted standards for AH. The IMS Learning Design (IMS-LD) specification has similar goals to Adaptive Educational Hypermedia (AEH). Investigation shows that IMS-LD is more limited in terms of adaptive behaviour, but its authoring process focuses more on learning sequences and outcomes. 3. Visualisation: another way is to simplify the authoring of strategies using a visual tool. We define a reference model and a tool, the Conceptual Adaptation Model (CAM) and the GRAPPLE Authoring Tool (GAT), which allow specification of an adaptive course in a graphical way.
A key feature is the separation between content, strategy and adaptive course, which increases reusability compared to approaches that combine all factors in one model.
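The separation of content alternatives from adaptation strategy can be illustrated with a toy rule: the strategy consults a learner model and picks the matching content fragment. This is a deliberately simplified sketch, not the LAG, CAM or GAT formalisms; the concept name, thresholds and fragment texts are all hypothetical:

```python
def select_alternative(knowledge, alternatives):
    """Return the first content alternative whose prerequisite level the
    learner's knowledge score meets (alternatives sorted high to low)."""
    for min_level, fragment in alternatives:
        if knowledge >= min_level:
            return fragment
    return alternatives[-1][1]  # fall back to the easiest fragment

# content alternatives for one concept, authored separately from the rule
alternatives = [
    (0.8, "advanced: formal definition with edge cases"),
    (0.4, "intermediate: worked example with brief notes"),
    (0.0, "beginner: step-by-step guided explanation"),
]

print(select_alternative(0.5, alternatives))  # → intermediate variant
```

Because the rule and the alternatives are independent artefacts, the same strategy can be reused across courses and the same content across strategies, which is the reusability gain the thesis argues for.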

Out-of-equilibrium economic dynamics and persistent polarisation

Porter, James A. January 2012 (has links)
Most of economics is equilibrium economics of one sort or another; the study of out-of-equilibrium economics has largely been neglected. This thesis, engaging with ideas and techniques from complexity science, develops frameworks and tools for out-of-equilibrium modelling. We initially focus our attention on models of exchange before examining methods of agent-based modelling. Finally, we look at a set of models for social dynamics with non-trivial micro-macro interrelationships. Chapter 2 introduces complexity science and relevant economic concepts. In particular we examine the idea of complex adaptive systems, the application of complexity to economics, some key ideas from microeconomics, agent-based modelling, and models of segregation and/or polarisation. Chapter 3 develops an out-of-equilibrium, fully decentralised model of bilateral exchange. Initially we study the limiting properties of our out-of-equilibrium dynamic, characterising the conditions required for convergence to pairwise and Pareto optimal allocation sets. We illustrate problems that can arise for a rigid version of the model and show how even a small amount of experimentation can overcome these. We investigate the model numerically, characterising the speed of convergence and changes in ex post wealth. In chapter 4 we explicitly model the trading structure on a network. We derive analytical results for this general network case. We investigate the effect of network structure on outcomes numerically and contrast the results with the fully connected case of chapter 3. We look at extensions of the model, including a version with an endogenous network structure and a version where agents can learn to accept a `worthless' but widely available good in exchanges. Chapter 5 outlines and demonstrates a new approach to agent-based modelling which draws on a number of techniques from contemporary software engineering.
We develop a prototype framework to illustrate how the ideas might be applied in practice in order to address methodological gaps in many current approaches. We develop example agent-based models and contrast the approach with existing agent-based modelling approaches and the kind of purpose-built models used for the numerical results in chapters 3 and 4. Chapter 6 develops a new set of models for thinking about a wide range of social dynamics issues, including human capital acquisition and migration. We analyse the models initially from a Nash equilibrium perspective. Both continuum and finite versions of the model are developed and related. Using the criterion of stochastic stability, we consider the long-run behaviour of a version of the model. We introduce agent heterogeneity into the model. We conclude with a fully dynamic version of the model (using techniques from chapter 5) which looks at endogenous segregation.
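The flavour of a decentralised out-of-equilibrium exchange dynamic can be conveyed in a few lines: random pairs propose a small trade, and the trade executes only if it strictly improves both parties' utilities, so the allocation drifts towards the pairwise-optimal set without any market-clearing auctioneer. This is a generic illustration, not the thesis's model; the Cobb-Douglas utility, step size and two-good setup are arbitrary assumptions:

```python
import math
import random

def utility(bundle):
    # symmetric Cobb-Douglas utility over two goods (illustrative choice)
    return math.sqrt(bundle[0] * bundle[1])

def trade_step(agents, rng, step=0.1):
    """One decentralised step: a random pair considers swapping a small
    amount of good 0 for good 1; the trade executes only if both gain."""
    i, j = rng.sample(range(len(agents)), 2)
    a, b = agents[i], agents[j]
    new_a = (a[0] - step, a[1] + step)
    new_b = (b[0] + step, b[1] - step)
    if (min(new_a + new_b) >= 0
            and utility(new_a) > utility(a)
            and utility(new_b) > utility(b)):
        agents[i], agents[j] = new_a, new_b

rng = random.Random(7)
agents = [(9.0, 1.0), (1.0, 9.0)]   # two agents with mirrored endowments
before = sum(utility(a) for a in agents)
for _ in range(2000):
    trade_step(agents, rng)
after = sum(utility(a) for a in agents)
```

Every accepted trade is Pareto-improving by construction, so total utility rises monotonically; with mirrored endowments the allocation settles near an equal split, illustrating convergence without ever computing an equilibrium price.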

Network coding via evolutionary algorithms

Karunarathne, Lalith January 2012 (has links)
Network coding (NC) is a relatively recent technique that generalises network operation beyond traditional store-and-forward routing, allowing intermediate nodes to combine independent data streams linearly. The rapid uptake of bandwidth-hungry applications such as video conferencing and HDTV means that NC is a decisive future network technology. NC is gaining popularity since it offers significant benefits, such as throughput gain, robustness, adaptability and resilience. However, it does this at a potential complexity cost in terms of both operational complexity and set-up complexity. This is particularly true of network code construction. Most NC problems related to these complexities are classified as non-deterministic polynomial-time hard (NP-hard), and an evolutionary approach is adopted to find good solutions in polynomial time. This research concentrates on the multicast scenario, particularly: (a) network code construction with optimum network and coding resources; (b) optimising network coding resources; (c) optimising network security against a cost criterion (to combat the unintentionally introduced Byzantine-modification security issue). The proposed solution identifies minimal configurations by which the source can deliver its multicast traffic whilst allowing intermediate nodes only to perform forwarding and coding. In the method, a preliminary process first provides unevaluated individuals to a search space that it creates using two algorithms (augmenting path and linear disjoint path). An initial population is then formed by randomly picking individuals from the search space. Finally, the Multi-Objective Genetic Algorithm (MOGA) and Vector-Evaluated Genetic Algorithm (VEGA) approaches search the population to identify minimal configurations. Genetic operators (crossover, mutation) help to include optimum features (e.g. lower cost, lower coding resources) in feasible minimal configurations.
A fitness assignment and individual evaluation process is performed to identify the feasible minimal configurations. Simulations performed on randomly generated acyclic networks are used to quantify the performance of MOGA and VEGA.
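The linear combination at the heart of network coding is easiest to see in the classic butterfly example over GF(2), where a bottleneck node forwards the XOR of two packets instead of relaying them in turn. This textbook sketch illustrates the coding idea only; it is not the thesis's MOGA/VEGA construction:

```python
def encode(a, b):
    """A coded packet over GF(2): the bitwise XOR of two incoming packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Butterfly intuition: the bottleneck link carries a XOR b; each sink
# already holds one original packet (received on a direct link) and
# recovers the other by XOR-ing it with the coded packet.
a, b = b"hello", b"world"
coded = encode(a, b)
print(encode(coded, a))  # sink holding a recovers b → b'world'
print(encode(coded, b))  # sink holding b recovers a → b'hello'
```

Both sinks thus receive both packets in one use of the bottleneck link, the throughput gain that pure store-and-forward routing cannot achieve; the thesis's evolutionary search then asks where in the network such coding nodes are actually necessary.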

Politics and public opinion in China : the impact of the Internet, 1993-2003

Hung, Chin-fu January 2005 (has links)
This dissertation provides empirical evidence and in-depth discussion of the impact and implications of new technologies, such as the Internet, on the political system and public opinion in the Chinese context. Its premise is that technology can transform the mode of political communication, and that this in turn can change the nature of political participation, as well as the milieu in which political discussions take place. The project concludes that the Internet has not at this stage fundamentally transformed China's political system, let alone caused a sudden regime collapse or engendered a sweeping democratisation process. The Internet is, however, expanding people's minds, facilitating public discourse, and pushing for more transparent and accountable governance. In other words, the Chinese government is not as much in control of public debates on the Internet as it is of debates in other media channels; it cannot control and manipulate public opinion as much as it has traditionally done. Thanks to the Internet, this work contributes a more systematic picture of public opinion on political issues, with documented examples. The research also sheds light on how to measure and document the political impact of the Internet on political debates. Moreover, this dissertation highlights a usually neglected phenomenon: research on political change or transformation in China can also be conducted from other angles, such as the impact of Information Communication Technologies on the political system. Conventional approaches may be enriched thanks to the advent of new technologies in an increasingly networked, globalised and marketised world.