431

A role for introspection in AI research

Freed, Samuel January 2017 (has links)
The main thesis is that introspection is recommended for the development of anthropic AI. Human-like AI, as distinct from rational AI, would suit robots caring for the elderly and performing other tasks that require interaction with naïve humans. “Anthropic AI” is a sub-type of human-like AI, aiming for the pre-cultured, universal intelligence that is available to healthy humans regardless of time and civilisation. This is contrasted with the western, modern, well-trained and adult intelligence that is often the focus of AI. Anthropic AI would pick up local cultures and habits, ignoring optimality. Introspection is recommended to the AI developer as a source of ideas for designing an artificial mind, in the context of technology rather than science. Existing notions of introspection are analysed, and the aspiration for “clean” or “good” introspection is exposed as a mirage. Nonetheless, introspection is shown to be a legitimate source of ideas for AI, using considerations of the contexts of discovery vs. justification. Moreover, introspection is shown to be a positively plausible basis for ideas in AI: if a teacher can use introspection to extract mental skills from themselves in order to transmit them to a student, an AI developer can likewise use introspection to uncover the human skills that they want to transfer to a computer. Methods and pitfalls of this approach are detailed, including the common error of polluting one's introspection with highly-educated notions such as mathematical methods. Examples are coded and run, showing promising learning behaviour. This is interpreted as a compromise between classic AI and Dreyfus's tradition: so far AI practitioners have largely ignored the subjective, while the phenomenologists have not written code; this thesis bridges that gap. One of the examples is shown to have Gadamerian characteristics, as recommended by Winograd and Flores (1986).
This serves also as a response to Dreyfus's more recent publications critiquing AI (Dreyfus, 2007, 2012).
432

Co-located and distributed multicarrier space-time shift keying for wideband channels

Kadir, Mohammad Ismat January 2014 (has links)
Multicarrier (MC) transmissions are proposed for the space-time shift keying (STSK) concept. Specifically, OFDM-, MC-CDMA- and OFDMA/SC-FDMA-aided STSK are proposed for transmissions over dispersive wireless channels. Additionally, a successive relaying (SR) aided cooperative MC STSK scheme is conceived for gleaning cooperative space-time diversity and for mitigating the half-duplex throughput loss of conventional relaying. Furthermore, a multiple-symbol differential sphere decoding (MSDSD) aided multicarrier STSK arrangement is proposed to dispense with channel estimation (CE). We design a novel modality of realizing STSK amalgamated with OFDM for facilitating high-rate data transmissions through a number of low-rate parallel subchannels, thus overcoming the dispersion induced by broadband channels. An MC-CDMA-aided STSK system is also proposed for mitigating the channel-induced dispersion, while providing additional frequency-domain (FD) diversity and supporting multiuser transmissions. As a further advance, we design OFDMA- and SC-FDMA-aided STSK systems, which are capable of communicating in dispersive multiuser scenarios, whilst maintaining a low peak-to-average power ratio (PAPR) in the SC-FDMA-aided STSK uplink. Additionally, complexity reduction techniques are proposed for OFDMA/SC-FDMA-aided STSK. We also conceive the concept of SR-aided cooperative multicarrier STSK for frequency-selective channels, both for mitigating the typical 50% throughput loss of conventional half-duplex relaying in the context of MC-CDMA and for reducing the SR-induced interference. We additionally propose a differentially encoded cooperative MC-CDMA STSK scheme for facilitating communications over hostile dispersive channels without requiring CE. Finally, the noncoherent multicarrier STSK arrangement is further developed by using MSDSD.
Conventional differential detection suffers from a typical 3 dB performance loss, which is further aggravated in the presence of high Doppler frequencies. Hence, for the sake of mitigating this performance loss in high-Doppler scenarios while maintaining a modest decoding complexity, both hard-decision-based and iterative soft-decision MSDSD-aided multicarrier STSK arrangements are developed.
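As a rough illustration of why the SC-FDMA uplink is attractive here, the following sketch (with hypothetical parameters, not drawn from the thesis) compares the peak-to-average power ratio of a constant-envelope single-carrier QPSK burst with the same symbols transmitted over OFDM subcarriers, where the IFFT superposition inflates the envelope peaks:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (illustrative)

# Random unit-modulus QPSK symbols.
bits = rng.integers(0, 4, N)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

def papr_db(x):
    """Peak-to-average power ratio of a signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Single-carrier: symbols transmitted directly -> constant envelope, 0 dB PAPR.
papr_sc = papr_db(qpsk)

# OFDM: the IFFT superimposes N subcarriers, producing occasional large peaks.
ofdm_time = np.fft.ifft(qpsk) * np.sqrt(N)  # unitary scaling
papr_ofdm = papr_db(ofdm_time)

print(f"single-carrier PAPR: {papr_sc:.2f} dB")
print(f"OFDM PAPR:           {papr_ofdm:.2f} dB")
```

The several-dB gap is what motivates keeping the DFT-spread (single-carrier-like) structure of SC-FDMA for power-limited uplink transmitters.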
433

Reduced-complexity near-optimal Ant-Colony-aided multi-user detection for CDMA systems

Xu, Chong January 2009 (has links)
Reduced-complexity near-maximum-likelihood Ant-Colony Optimization (ACO) assisted Multi-User Detectors (MUDs) are proposed and investigated. The exhaustive search complexity of the optimal detection algorithm may be deemed excessive for practical applications. For example, a Space-Time Block Coded (STBC) two-transmit-antenna-assisted K = 32-user system has to search through the candidate space 2^64 times per symbol duration, each time invoking the Euclidean-distance calculation of a 64-element complex-valued vector, in order to find the final detection output. Hence, near-optimal or near-ML MUDs are required in order to provide near-optimal BER performance at a significantly reduced complexity. Specifically, the proposed ACO-assisted MUD algorithms are investigated in the context of a Multi-Carrier DS-CDMA (MC DS-CDMA) system, in a Multi-Functional Antenna Array (MFAA) assisted MC DS-CDMA system and in an STBC-aided DS-CDMA system. The ACO-assisted MUD algorithm is shown to allow a fully loaded multi-user system to achieve near-single-user performance, similar to that of the classic Minimum Mean Square Error (MMSE) detection algorithm. More quantitatively, when the STBC-assisted system supports K = 32 users, the complexity imposed by the ACO-based MUD algorithm is a fraction of 1 × 10^-18 of that of the full-search-based optimum MUD. In addition to the hard-decision-based ACO-aided MUD, a soft-output MUD was also developed, which was investigated in the context of an STBC-assisted DS-CDMA system using a three-stage concatenated, iterative-detection-aided system. It was demonstrated that the soft-output system is capable of achieving the optimal performance of the Bayesian detection algorithm.
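To illustrate the exhaustive search that motivates the near-ML ACO detector, the toy sketch below (a hypothetical noise-free K = 4-user BPSK system, far smaller than the K = 32 case above) runs the full maximum-likelihood search over all 2^K candidates; the spreading matrix and bits are randomly generated for illustration only:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
K = 4  # users -- tiny for illustration; the thesis considers K = 32
S = rng.standard_normal((K, K)) / np.sqrt(K)  # toy spreading/channel matrix
b_true = rng.choice([-1.0, 1.0], size=K)      # BPSK bits of the K users
y = S @ b_true                                 # received vector (noise-free)

# Exhaustive ML search: one Euclidean-distance evaluation per candidate,
# i.e. 2^K of them -- already 2^64 for the K = 32 STBC scenario above.
candidates = list(product([-1.0, 1.0], repeat=K))
best = min(candidates, key=lambda b: np.linalg.norm(y - S @ np.array(b)))

print(len(candidates))        # 16 candidates for K = 4
print(best == tuple(b_true))  # ML recovers the transmitted bits
```

Doubling K doubles nothing so benign: the candidate count doubles per added bit, which is why heuristics such as ACO that sample only a tiny fraction of the space are needed.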
434

Dataflow methods in HPC, visualisation and analysis

Biddiscombe, John A. January 2017 (has links)
The processing power available to scientists and engineers using supercomputers has grown exponentially over the last few decades, permitting significantly more sophisticated simulations and, as a consequence, generating proportionally larger output datasets. This change has taken place in tandem with a gradual shift in the design and implementation of simulation and post-processing software: from simulation as a first step and visualisation/analysis as a second, towards in-situ, on-the-fly methods that provide immediate visual feedback, place less strain on file systems and reduce overall data movement and copying. Concurrently, processor speed increases have dramatically slowed, and multi- and many-core architectures have instead become the norm for virtually all High Performance Computing (HPC) machines. This in turn has led to a shift away from the traditional distributed one-rank-per-node model, to one rank per process using multiple processes per multicore node, and then back towards one rank per node again, using distributed and multi-threaded frameworks combined. This thesis consists of a series of publications that demonstrate how software design for analysis and visualisation has tracked these architectural changes and pushed the boundaries of HPC visualisation using dataflow techniques in distributed environments. The first publication shows how support for the time dimension in parallel pipelines can be implemented, demonstrating how information flow within an application can be leveraged to optimise performance and add features such as analysis of time-dependent flows and comparison of datasets at different timesteps. A method of integrating dataflow pipelines with in-situ visualisation is subsequently presented, using asynchronous coupling of user-driven GUI controls and a live simulation running on a supercomputer.
The loose coupling of analysis and simulation allows for reduced IO, immediate feedback and the ability to change simulation parameters on the fly. A significant drawback of parallel pipelines is the inefficiency caused by improper load-balancing, particularly during interactive analysis where the user may select between different features of interest. This problem is addressed in the fourth publication by integrating a high-performance partitioning library into the visualisation pipeline and extending the information flow up and down the pipeline to support it. This extension is demonstrated in the third publication (published earlier) on massive meshes of extremely high complexity, and shows that general-purpose visualisation tools such as ParaView can be made to compete with bespoke software written for a dedicated task. The future of software running on many-core architectures will involve task-based runtimes, with dynamic load-balancing, asynchronous execution based on dataflow graphs, work stealing and concurrent data sharing between simulation and analysis. The final paper of this thesis presents an optimisation for one such runtime, in support of these future HPC applications.
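The dataflow-pipeline idea underlying this work can be sketched, in miniature, as a chain of generators that stream per-timestep frames from a source through an analysis stage to a sink; the stage names and the toy "field" data are hypothetical, not taken from the publications:

```python
def source(timesteps):
    """Hypothetical simulation output: one frame per timestep."""
    for t in range(timesteps):
        yield {"t": t, "field": [t * i for i in range(4)]}

def analyse(stream):
    """Analysis stage: annotate each frame as it flows past."""
    for frame in stream:
        frame["mean"] = sum(frame["field"]) / len(frame["field"])
        yield frame

def sink(stream):
    """Sink: collect (timestep, statistic) pairs, e.g. for plotting."""
    return [(f["t"], f["mean"]) for f in stream]

# Frames are pulled through the pipeline one at a time -- no intermediate
# files, and each timestep is analysed as soon as it is produced.
result = sink(analyse(source(3)))
print(result)  # [(0, 0.0), (1, 1.5), (2, 3.0)]
```

Real in-situ pipelines replace the generators with distributed, asynchronous filters, but the same pull-driven information flow applies.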
435

An intrusion detection scheme for identifying known and unknown web attacks (I-WEB)

Kamarudin, Muhammad Hilmi January 2018 (has links)
The number of utilised features can increase a system's computational effort when processing large volumes of network traffic. In reality, it is pointless to use all features, considering that redundant or irrelevant features deteriorate detection performance. Meanwhile, statistical approaches are extensively practised in the Anomaly-Based Detection System (ABDS) environment. These statistical techniques do not require any prior knowledge of attack traffic; this advantage has attracted many researchers to the method. Nevertheless, the performance is still unsatisfactory, since it produces high false detection rates. In recent years, the demand for data mining (DM) techniques in the field of anomaly detection has significantly increased. Even though this approach can distinguish normal and attack behaviour effectively, the performance (true positive, true negative, false positive and false negative rates) still does not achieve the expected improvement. Moreover, the need to re-initiate the whole learning procedure, despite the attack traffic having previously been detected, seems to contribute to the poor system performance. This study aims to improve the detection of normal and abnormal traffic by determining the prominent features and recognising outlier data points more precisely. To achieve this objective, the study proposes a novel Intrusion Detection Scheme for Identifying Known and Unknown Web Attacks (I-WEB), which combines various strategies and methods. The proposed I-WEB is divided into three phases, namely pre-processing, anomaly detection and post-processing. In the pre-processing phase, the strengths of both filter and wrapper procedures are combined to select the optimal set of features. In the filter procedure, Correlation-based Feature Selection (CFS) is proposed, whereas the Random Forest (RF) classifier is chosen to evaluate feature subsets in the wrapper procedure.
In the anomaly detection phase, statistical analysis is used to formulate a normal profile and to calculate a normality score for each traffic instance. The threshold measurement is defined using Euclidean Distance (ED) alongside the Chebyshev Inequality Theorem (CIT), with the aim of improving the attack recognition rate by eliminating the set of outlier data points accurately. To improve attack identification and reduce the misclassification rates first observed with statistical analysis alone, ensemble learning, in particular a boosting classifier, is proposed, using LogitBoost as the meta-classifier and RF as the base classifier. Furthermore, verified attack traffic detected by ensemble learning is extracted and computed into signatures before being stored in the signature library for future identification. This helps to reduce the detection time, since similar traffic behaviour will not have to be re-analysed in the future.
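A Chebyshev-based threshold of the kind described can be sketched as follows; the scores and the 1% false-alarm budget are illustrative, not values from the thesis. Chebyshev's inequality guarantees P(|X − μ| ≥ kσ) ≤ 1/k² for any distribution, so solving 1/k² = α gives a distribution-free cut-off:

```python
import math

def chebyshev_threshold(mean, std, max_false_alarm):
    """Distance beyond which at most `max_false_alarm` of normal traffic
    can fall, for ANY score distribution (Chebyshev's inequality)."""
    k = math.sqrt(1.0 / max_false_alarm)  # from P(|X-mu| >= k*sigma) <= 1/k^2
    return mean + k * std

# Toy normality scores: five normal traffic instances plus one anomaly.
scores = [0.1, 0.12, 0.09, 0.11, 0.10, 0.95]
normal = scores[:-1]
mu = sum(normal) / len(normal)
sigma = (sum((s - mu) ** 2 for s in normal) / len(normal)) ** 0.5

thr = chebyshev_threshold(mu, sigma, 0.01)  # allow <= 1% false alarms
outliers = [s for s in scores if s > thr]
print(outliers)  # [0.95]
```

The appeal of the distribution-free bound is that it holds even when the traffic normality scores are far from Gaussian, at the cost of a conservative (wide) threshold.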
436

Context-aware sentence categorisation : word mover's distance and character-level convolutional recurrent neural network

Fu, Xinyu January 2018 (has links)
A supervised k-nearest-neighbour algorithm and an unsupervised hierarchical agglomerative clustering algorithm can be enhanced through a word mover's distance-based sentence distance metric to offer superior context-aware sentence categorisation performance. An advanced neural-network-oriented classifier is able to achieve competitive results on the benchmark streams via an aggregated recurrent unit incorporated with a sophisticated convolving layer. The continually increasing number of textual snippets produced each year necessitates ever-improving information processing methods for searching, retrieving, and organising text. Central to these information processing methods are sentence classification and clustering, which have become an important application for natural language processing and information retrieval. The present work proposes three novel sentence categorisation frameworks, namely hierarchical agglomerative clustering-word mover's distance, k nearest neighbour-word mover's distance, and convolutional recurrent neural network. Hierarchical agglomerative clustering-word mover's distance employs the word mover's distance distortion function to effectively cluster unlabelled sentences into nearby centroids. K nearest neighbour-word mover's distance classifies testing textual snippets through word mover's distance-based sentence similarity. Both models are from the spectrum of count-based frameworks, since they apply term frequency statistics when building the vector space matrix. Experimental evaluation on the two unsupervised learning datasets shows the better performance of hierarchical agglomerative clustering-word mover's distance over other competitors on mean squared error, completeness score, homogeneity score, and v-measure value. For k nearest neighbour-word mover's distance, two benchmark textual streams are experimented with to verify its superior classification performance against comparison algorithms on precision rate, recall ratio, and F1 score.
Performance comparison is statistically validated via the Mann-Whitney U test. Through extensive experiments and results analysis, each research hypothesis is confirmed. Unlike a traditional singleton neural network, the convolutional recurrent neural network model incorporates a character-level convolutional network with a character-aware recurrent neural network to form a combined framework. The proposed model benefits from the character-aware convolutional neural network in that only salient features are selected and fed into the integrated character-aware recurrent neural network. The character-aware recurrent neural network effectively learns long-sequence semantics via a sophisticated update mechanism. The experiments presented in the current thesis compare the convolutional recurrent neural network framework against state-of-the-art text classification algorithms on four popular benchmarking corpora. The present work also analyses the impact of three different recurrent neural network hidden recurrent cells on performance and their runtime efficiency. It is observed that the minimal gated unit achieves the optimal runtime and comparable performance against the gated recurrent unit and long short-term memory. For term frequency-inverse document frequency-based algorithms, the current experiment examines word2vec, global vectors for word representation, and sent2vec embeddings and reports their performance differences. Performance comparison is statistically validated through the Mann-Whitney U test, and the corresponding hypotheses are confirmed by the reported statistical analysis.
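A simplified word mover's distance can be sketched as below. Full WMD solves an earth mover's (optimal transport) problem over term-frequency weights; with uniform weights and equal-length sentences it reduces to a minimum-cost word assignment, which is enough to show the idea. The 2-D embeddings are toy values invented here (real systems use word2vec or similar):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy 2-D "embeddings" (hypothetical; real WMD uses trained word vectors).
emb = {
    "obama":  np.array([1.0, 0.0]), "president": np.array([0.9, 0.1]),
    "speaks": np.array([0.0, 1.0]), "greets":    np.array([0.1, 0.9]),
    "media":  np.array([0.5, 0.5]), "press":     np.array([0.55, 0.45]),
    "banana": np.array([-1.0, -1.0]),
}

def wmd_simplified(s1, s2):
    """Uniform-weight, equal-length WMD: min-cost word-to-word assignment."""
    cost = np.array([[np.linalg.norm(emb[a] - emb[b]) for b in s2] for a in s1])
    rows, cols = linear_sum_assignment(cost)  # optimal matching
    return cost[rows, cols].mean()

d_close = wmd_simplified(["obama", "speaks", "media"],
                         ["president", "greets", "press"])
d_far = wmd_simplified(["obama", "speaks", "media"],
                       ["banana", "banana", "banana"])
print(d_close < d_far)  # True: semantically similar sentences are nearer
```

Because the matching is over embedding distances rather than exact tokens, two sentences sharing no words can still score as close, which is what makes the metric context-aware.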
437

Hyper-heuristic approaches to automatically designing heuristics as mutation operators for evolutionary programming on function classes

Hong, Libin January 2018 (has links)
A hyper-heuristic is a search method or learning mechanism for selecting or generating heuristics to solve computational search problems. Researchers classify hyper-heuristics according to the source of feedback during learning: online learning hyper-heuristics learn while solving a given instance of a problem; offline learning hyper-heuristics learn from a set of training instances, a method that can generalise to unseen instances. Genetic programming (GP) can be considered a specialisation of the more widely known genetic algorithms (GAs), in which each individual is a computer program. GP automatically generates computer programs to solve specified tasks; it is a method of searching a space of computer programs. GP can be used as a kind of hyper-heuristic, acting as a learning algorithm when it uses feedback from the search process. Our research mainly uses genetic programming as an offline hyper-heuristic approach to automatically design various heuristics as mutation operators for evolutionary programming.
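A minimal sketch of the offline setting is given below, with two hand-written candidate mutation operators standing in for operators a GP hyper-heuristic would assemble from primitives; each candidate is scored by running a simple (1+1)-style evolutionary programming loop on a sphere-function training instance. All names and parameters here are illustrative, not the thesis's actual configuration:

```python
import random

random.seed(0)

def sphere(x):  # a training instance from a benchmark function class
    return sum(v * v for v in x)

# Two candidate mutation operators; a GP hyper-heuristic would generate
# such operators automatically from arithmetic/random primitives.
def gaussian_mut(x):
    return [v + random.gauss(0, 0.5) for v in x]

def shrink_mut(x):
    return [0.9 * v + random.gauss(0, 0.1) for v in x]

def evolutionary_programming(mutate, gens=200, dim=5):
    """(1+1)-style EP: keep the child only if it improves fitness."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(gens):
        child = mutate(x)
        if sphere(child) < sphere(x):
            x = child
    return sphere(x)

# Offline learning: score each operator on the training function and
# keep the better one for unseen instances of the same function class.
scores = {m.__name__: evolutionary_programming(m)
          for m in (gaussian_mut, shrink_mut)}
print(min(scores, key=scores.get))
```

The hyper-heuristic's job is exactly this outer loop, except that the candidate operators are evolved rather than hand-written, and scoring uses a whole training set rather than one function.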
438

Critical Infrastructure Automated Immuno-Response System (CIAIRS)

Badri, S. K. A. January 2018 (has links)
Critical Infrastructures play a central role in the world around us and are the backbone of everyday life. Their service provision has become more widespread, to the point where it is now practically ubiquitous in many societies. Critical Infrastructure assets contribute to the economy and society as a whole, and their impact on security, the economy and the health sector is vital. Critical Infrastructures now possess levels of automation that require the integration of often mutually incompatible technologies. Their increasing complexity has led to the creation of direct and indirect interdependent connections amongst the infrastructure groupings. In addition, the data generated is vast, as the intricate level of interdependency between infrastructures has grown. Since Critical Infrastructures are the backbone of everyday life, their protection from cyber-threats is an increasingly pressing issue for governments and private industries. Any failures caused by cyber-attacks have the ability to spread through interconnected systems and are a challenge to detect, especially as the Internet is now heavily reliant on Critical Infrastructures. This has led to different security threats facing interconnected security systems. Understanding the complexity of Critical Infrastructure interdependencies, and how to take advantage of it in order to minimise the cascading problem, enables potential problems to be predicted before they happen. Therefore, this work firstly discusses the interdependency challenges facing Critical Infrastructures and how they can be used to create a support network against cyber-attacks, in much the same way as the human immune system is able to respond to intrusion. Next, the development of a distributed support system is presented. The system employs behaviour analysis techniques to support interconnected infrastructures and distribute security advice throughout a distributed system of systems.
The approach put forward is tested through a statistical analysis methodology, in order to investigate the cascading failure effect whilst taking into account the independent variables. Moreover, the proposed system is able to detect cyber-attacks and share the knowledge with interconnected partners to create an immune-system-like network. The development of the Critical Infrastructure Automated Immuno-Response System (CIAIRS) is presented, with a detailed discussion of the main segments that comprise the framework, illustrating the functioning of the system. A semi-structured interview helped to demonstrate the approach, using a realistic simulation to construct data and evaluate the system output.
439

The development of a modular framework for Serious Games and the Internet of Things

Henry, J. M. January 2018 (has links)
The combination of Serious Games and the Internet of Things is a recent academic domain of research. By combining the software and gaming advantages of Serious Games with the interconnected hardware- and middleware-driven ecosystem of the Internet of Things, it is possible to develop data-driven games that source data from the local or extended physical environment to progress in the virtual environment of gaming. The following thesis presents research into Serious Games and the Internet of Things, focusing on the development of a modular framework that represents the combination of the two technologies. Current research in the domain of Smart Serious Games lacks a modular framework that is application-independent and outlines the software and hardware interaction between Serious Games and the Internet of Things; this thesis is therefore the first to introduce one. By developing such a framework, this thesis contributes to the academic domain and encourages new and innovative real-world applications of Smart Serious Games in healthcare, education, simulation and other fields. Further to the framework, this thesis presents a survey of network topologies for Serious Games and the Internet of Things, and a computer algorithm that provides a measure of student engagement, integrated into a Smart Serious Game developed as part of the undertaken research and named the Student Engagement Application (SEA). The thesis utilises a semester-long experiment, with control groups and randomised controlled trials, to compare the measures of engagement obtained through SEA with self-reflection questionnaires, and to compare the measure of student engagement against academic performance, respectively. After statistical analysis, the data presented strong confidence in the measure of engagement through SEA, validating the effectiveness of the proposed framework for Smart Serious Games.
440

Investigation into game-based crisis scenario modelling and simulation system

Praiwattana, P. January 2018 (has links)
A crisis is an infrequent and unpredictable event, and training and preparation require tools for representing the crisis context. In particular, crisis events consist of different situations which can occur at the same time, combining into a complex situation and posing a challenge in coordinating several crisis management departments. In this regard, disaster prevention, preparedness and relief can be conceptualised in the design of a hypothetical crisis game. The many complex tasks arising during an emergency provide an opportunity for practitioners to train their skills in situation analysis, decision-making, and coordination procedures. While physical training exercises give crisis personnel hands-on experience of a given situation, they often require a long preparation time and a considerable budget. Alternatively, a computational framework that allows simulation models tailored to a crisis scenario can become a cost-effective substitute for such study and training. Although several computational toolsets exist for simulating crises, no system provides generic functionality for defining the crisis scenario, the simulation model, agent development, and artificial intelligence problem planning in a single unified framework. In addition, developing such a generic framework can become complex, owing to the multi-disciplinary knowledge required for each component. Moreover, existing systems have not fully incorporated game technology toolsets, which would speed up development and provide a rich set of features and functionality for these components. To develop such a crisis simulation system, several technologies must be studied to derive the requirements for a software engineering approach to the system's specification design.
With modern game technology available on the market, the framework can be prototyped rapidly, integrating a cutting-edge graphics rendering engine, asset management, networking, and scripting libraries; a serious game application for education in crisis management can therefore be developed early. Still, many features must be developed specifically for the novel simulation framework on top of the selected game engine. In this thesis, we identify the essential core components and design a software specification, in UML format, for a serious game framework that eases crisis scenario generation, terrain design, and agent simulation. From these diagrams, the framework was prototyped to demonstrate the proposed concepts. First, crisis models for different disasters were analysed for their design and environment representation techniques, motivating the choice of cellular automata as the base simulation technique in our framework. Importantly, a study of the suitability of candidate game engine products was conducted, since state-of-the-art game engines ease integration with upcoming technologies. The literature on procedural generation of crisis scenario context was also reviewed, as it provides a structure for the crisis parameters. Next, real-time map visualisation of the dynamics of resource distribution in the area was developed. Simulation systems for large-scale emergency response were then discussed with respect to their framework designs and example test-case studies. Since an agent-based modelling tool is not provided by the game engine, its design and decision-making procedure were developed. In addition, procedural content generation (PCG) was integrated for automated map generation, allowing scenario control parameters to configure terrain design at run-time.
Likewise, artificial intelligence planning (AI planning), which solves for a sequence of suitable actions toward a specific goal, was considered useful for investigating an emergency plan. However, AI planning usually requires offline computation with a specific planning language, so a comparison study was conducted to select a fast and reliable planner. An integration pipeline between the planner and the agents was then developed over a web-service architecture, separating the heavy computation from the client while providing easy configuration of AI planning through a web-application editor interface. Finally, the resulting framework, called CGSA-SIM (Crisis Game for Scenario design and Agent modelling simulation), was evaluated for run-time performance and scalability. It showed an acceptable frame rate for a real-time application, with a worst case of 15 frames per second (FPS) under the maximum number of visual objects; normal gameplay ran at the 60 FPS cap. At the same time, a simulation scenario for a wildfire situation was tested with agent intervention, generating simulation data for personal or case evaluation. As a result, we have developed the CGSA-SIM framework to address the challenge of incorporating an emergency simulation system into modern game technology. The framework aims to be a generic application providing the main functionality of a crisis simulation game: visualisation, crisis model development and simulation, real-time interaction, and agent-based modelling with an AI planning pipeline.
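The cellular-automaton basis of a wildfire scenario like the one tested can be sketched as a probabilistic fire-spread CA; the grid size, spread probability and ignition point below are illustrative scenario parameters, not values from the thesis:

```python
import random

random.seed(42)
EMPTY, TREE, FIRE, BURNT = 0, 1, 2, 3

def step(grid, p_spread=0.7):
    """One CA update: fire spreads to 4-neighbouring trees, then burns out."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == FIRE:
                new[i][j] = BURNT
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < n and grid[a][b] == TREE:
                        if random.random() < p_spread:
                            new[a][b] = FIRE
    return new

n = 10
grid = [[TREE] * n for _ in range(n)]
grid[n // 2][n // 2] = FIRE  # ignition point (a scenario control parameter)

for _ in range(15):
    grid = step(grid)

burnt = sum(row.count(BURNT) for row in grid)
print(f"burnt cells after 15 steps: {burnt}")
```

Agent intervention then amounts to mutating cell states between steps (e.g. turning FIRE cells back to EMPTY along a firebreak), which is what makes the CA a convenient substrate for scenario control parameters and PCG-generated terrain.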
