About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The design of a control unit and parallel algorithms for a SIMD computer

Cruz, A. J. O. January 1988
No description available.
2

Techniques of design optimisation for algorithms implemented in software

Hopson, Benjamin Thomas Ken January 2016
The overarching objective of this thesis was to develop tools for parallelising, optimising, and implementing algorithms on parallel architectures, in particular General Purpose Graphics Processors (GPGPUs). Two projects were chosen from different application areas in which GPGPUs are used: a defence application involving image compression, and a modelling application in bioinformatics (computational immunology). Each project had its own specific objectives, as well as supporting the overall research goal.

The defence / image compression project was carried out in collaboration with the Jet Propulsion Laboratory. The specific questions were: to what extent an algorithm designed for bit-serial hardware implementation, for the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs), could be parallelised; whether GPGPUs could be used to implement that algorithm; and whether a software implementation, with or without GPGPU acceleration, could match the throughput of a dedicated hardware (FPGA) implementation. The dependencies within the algorithm were analysed, and the algorithm parallelised. The algorithm was implemented in software for GPGPU and optimised. During the optimisation process, profiling revealed less than optimal device utilisation, but no further optimisations resulted in an improvement in speed: the design had hit a local maximum of performance. Analysis of the arithmetic intensity and data flow exposed flaws in kernel occupancy, the standard metric used for GPU optimisation. Redesigning the implementation with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board implementation of the CCSDS lossless hyperspectral image compression algorithm, exceeding the performance of the hardware reference implementation and providing sufficient throughput for the next generation of image sensor as well.

The second project was carried out in collaboration with biologists at the University of Arizona and involved modelling a complex biological system: VDJ recombination, which is involved in the formation of T-cell receptors (TCRs). Generation of immune receptors (T-cell receptors and antibodies) by VDJ recombination is an enormously complex process, which can theoretically synthesize greater than 10^18 variants. Originally thought to be a random process, the underlying mechanisms clearly have a non-random nature that preferentially creates a small subset of immune receptors in many individuals. Understanding this bias is a longstanding problem in the field of immunology. Modelling the process of VDJ recombination to determine the number of ways each immune receptor can be synthesized, previously thought to be untenable, is a key first step in determining how this special population is made. The computational tools developed in this thesis have allowed immunologists for the first time to comprehensively test and invalidate a longstanding theory (convergent recombination) for how this special population is created, while generating the data needed to develop novel hypotheses.
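The occupancy-versus-locality trade-off described in this abstract can be illustrated with a simple arithmetic-intensity (roofline-style) estimate. The sketch below is illustrative only: the helper function, the peak-performance figures, and the byte counts are assumptions, not numbers from the thesis.

```python
# Sketch: estimating arithmetic intensity (FLOPs per byte of DRAM traffic)
# to judge whether a GPU kernel is memory-bound. All figures are invented.

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs executed per byte moved to or from DRAM."""
    return flops / bytes_moved

# A device's "ridge point" is peak FLOP/s divided by peak bandwidth;
# kernels below it are bandwidth-limited no matter how high the occupancy.
PEAK_FLOPS = 4.0e12        # 4 TFLOP/s (hypothetical GPU)
PEAK_BANDWIDTH = 320.0e9   # 320 GB/s (hypothetical GPU)
ridge = PEAK_FLOPS / PEAK_BANDWIDTH   # 12.5 FLOPs/byte

# Two separate kernels each re-read the same image tile from DRAM...
separate = arithmetic_intensity(flops=2e9, bytes_moved=2 * 8e8)
# ...whereas a fused kernel reads it once, doubling the intensity.
fused = arithmetic_intensity(flops=2e9, bytes_moved=8e8)

for name, ai in [("separate kernels", separate), ("fused kernel", fused)]:
    bound = "memory-bound" if ai < ridge else "compute-bound"
    print(f"{name}: {ai:.2f} FLOPs/byte -> {bound}")
```

Fusing kernels halves the DRAM traffic for the same arithmetic, which is the kind of data-locality gain the redesign described above targeted.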
3

Algorithmic techniques for the Micron Automata Processor

Roy, Indranil 21 September 2015
Our research is the first in-depth study of the use of the Micron Automata Processor, a novel reconfigurable streaming co-processor purpose-built to execute thousands of Non-deterministic Finite Automata (NFA) in parallel. By design, this processor is well-suited to accelerating applications which need to find all occurrences of thousands of complex string patterns in the input data. We have validated this by implementing two such applications, one from network security and the other from bioinformatics, both of which are significantly faster than their state-of-the-art counterparts. Our research has also widened the scope of the applications which can be accelerated by this processor by finding ways to quickly program any generic graph into it and then search for hard-to-find features such as maximal cliques and Hamiltonian paths. These applications and algorithms have yielded valuable design inputs for the next generation of the chip, which is currently in the design phase. We hope that this work paves the way for the early adoption of this upcoming architecture and for efficient solutions to some currently computationally challenging problems.
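The Automata Processor's execution model (every automaton sees the same input symbol each cycle, and all active states advance in lock-step) can be emulated in ordinary software. The sketch below is a minimal emulation under simplifying assumptions: patterns are literal strings rather than the full regular expressions the hardware supports, and the function names are illustrative, not from the thesis.

```python
# Sketch: software emulation of the Automata Processor's streaming model --
# one input symbol per "cycle", all automata stepping in parallel.

def step_all(patterns, active_sets, symbol, position, matches):
    """Advance every pattern automaton by one input symbol."""
    for idx, pattern in enumerate(patterns):
        next_states = set()
        # State 0 is always re-activated: matches may start at any offset.
        for state in active_sets[idx] | {0}:
            if pattern[state] == symbol:
                if state + 1 == len(pattern):
                    matches.append((pattern, position - len(pattern) + 1))
                else:
                    next_states.add(state + 1)
        active_sets[idx] = next_states

def scan(stream, patterns):
    """Report every occurrence of every pattern as (pattern, start_offset)."""
    active = [set() for _ in patterns]
    matches = []
    for pos, symbol in enumerate(stream):   # one symbol per cycle
        step_all(patterns, active, symbol, pos, matches)
    return matches

print(scan("gattacagatt", ["gatt", "taca", "attac"]))
# [('gatt', 0), ('attac', 1), ('taca', 3), ('gatt', 7)]
```

On the real chip each state is a hardware element, so stepping all automata costs one cycle regardless of how many patterns are loaded; the software loop above makes that implicit parallelism explicit.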
4

Algorithms for Computing Motorcycle Graphs

Yan, Lie 12 1900
No description available.
5

Game Theory and Algorithm Design in Network Security and Smart Grid

Zhang, Ming January 2018
No description available.
6

A Dynamic Taxi Ride Sharing System Using Particle Swarm Optimization

Silwal, Shrawani 30 April 2020
No description available.
7

GRAPH-BASED ANALYSIS FOR E-COMMERCE RECOMMENDATION

Huang, Zan January 2005
Recommender systems automate the process of recommending products and services to customers based on various types of data, including customer demographics, product features, and, most importantly, previous interactions between customers and products (e.g., purchasing, rating, and catalog browsing). Despite significant research progress and growing acceptance in real-world applications, two major challenges remain to be addressed before effective e-commerce recommendation applications can be implemented. The first challenge is making recommendations based on sparse transaction data. The second is the lack of a unified framework to integrate multiple types of input data and recommendation approaches.

This dissertation investigates graph-based algorithms to address these two problems. The proposed approach is centered on consumer-product graphs that represent sales transactions as links connecting consumer and product nodes. To address the sparsity problem, I investigate network spreading-activation algorithms and a newly proposed link-analysis algorithm motivated by ideas from Web graph analysis techniques. Experimental results with several e-commerce datasets indicated that both classes of algorithms outperform a wide range of existing collaborative filtering algorithms, especially on sparse data. Two graph-based models that enhance the simple consumer-product graph were proposed to provide unified recommendation frameworks. The first, a two-layer graph model, enhances the consumer-product graph by incorporating consumer/product attribute information as consumer and product similarity links. The second is based on probabilistic relational models (PRMs) developed in the relational-learning literature. It is demonstrated with e-commerce datasets that the proposed frameworks not only conceptually unify many of the existing recommendation approaches but also allow a wider range of data patterns to be exploited in an integrated manner, leading to improved recommendation performance.

In addition to the recommendation-algorithm design research, this dissertation also employs random graph theory to study the topological characteristics of consumer-product graphs and the fundamental mechanisms that generate the sales transaction data. This research represents an early step towards a meta-level analysis framework for validating the fundamental assumptions made by different recommendation algorithms regarding the consumer-product interaction generation process, and thus supporting systematic recommendation model/algorithm selection and evaluation.
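Spreading activation over a consumer-product graph is straightforward to sketch: activation starts at a consumer node, diffuses across transaction links with decay, and unpurchased products that accumulate activation become recommendations. The following is a minimal illustration, assuming a toy transaction set and a hand-picked decay factor; the dissertation's actual algorithms and parameters may differ.

```python
# Sketch: spreading activation on a bipartite consumer-product graph.
from collections import defaultdict

# Sales transactions as (consumer, product) links. Toy data.
transactions = [("u1", "p1"), ("u1", "p2"), ("u2", "p2"),
                ("u2", "p3"), ("u3", "p3"), ("u3", "p4")]

graph = defaultdict(set)
for consumer, product in transactions:
    graph[consumer].add(product)
    graph[product].add(consumer)

def recommend(consumer, hops=3, decay=0.5):
    """Propagate activation from a consumer node; rank unpurchased products."""
    activation = {consumer: 1.0}
    for _ in range(hops):
        spread = defaultdict(float)
        for node, energy in activation.items():
            share = decay * energy / len(graph[node])
            for neighbour in graph[node]:
                spread[neighbour] += share
        activation = spread
    purchased = graph[consumer]
    # Product nodes are named "p*" in this toy graph.
    scores = {n: a for n, a in activation.items()
              if n.startswith("p") and n not in purchased}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("u1"))  # [('p3', 0.015625)]
```

Even though u1 and p3 share no direct transaction, the transitive path u1-p2-u2-p3 carries activation to p3, which is how this class of algorithms copes with sparse transaction data.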
8

Natural language processing of online propaganda as a means of passively monitoring an adversarial ideology

Holm, Raven R. 03 1900
Approved for public release; distribution is unlimited. Reissued 30 May 2017 with Second Reader's non-NPS affiliation added to title page.

Online propaganda embodies a potent new form of warfare, one that extends the strategic reach of our adversaries and overwhelms analysts. Foreign organizations have effectively leveraged an online presence to influence elections and recruit at a distance. The Islamic State has also shown proficiency in outsourcing violence, proving that propaganda can enable an organization to wage physical war at very little cost and without the resources traditionally required. To augment new counter-foreign-propaganda initiatives, this thesis presents a pipeline for defining, detecting, and monitoring ideology in text. A corpus of 3,049 modern online texts was assembled and two classifiers were created: one for detecting authorship and another for detecting ideology. The classifiers demonstrated 92.70% recall and 95.84% precision in detecting authorship, and detected ideological content with 76.53% recall and 95.61% precision. Both classifiers were combined to simulate how an ideology can be detected and how its composition could be passively monitored across time. Implementation of such a system could conserve manpower in the intelligence community and add a new dimension to analysis. Although this pipeline makes presumptions about the quality and integrity of its input, it is a novel contribution to the fields of Natural Language Processing and Information Warfare.

Lieutenant, United States Coast Guard
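The two-stage scheme (an authorship classifier combined with an ideology classifier) can be sketched as below. The feature representation and model family are assumptions for illustration; the abstract does not specify which the thesis used.

```python
# Sketch: a two-stage text-classification pipeline -- one classifier for
# authorship, one for ideological content. TF-IDF features and logistic
# regression are assumed, not taken from the thesis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_classifier():
    """Word/bigram TF-IDF features feeding a linear classifier."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )

def train_stages(docs, author_labels, ideology_labels):
    """Fit both stages on the same corpus with parallel label lists."""
    authorship = build_classifier().fit(docs, author_labels)
    ideology = build_classifier().fit(docs, ideology_labels)
    return authorship, ideology

def monitor(stream, authorship, ideology):
    """Flag texts attributed to the adversary AND carrying its ideology,
    enabling passive monitoring of the ideology's composition over time."""
    for text in stream:
        if (authorship.predict([text])[0] == "adversary"
                and ideology.predict([text])[0] == "ideological"):
            yield text
```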
9

Evolutionary many-objective optimisation: pushing the boundaries

Li, Miqing January 2015
Many-objective optimisation poses great challenges to evolutionary algorithms. To start with, the ineffectiveness of the Pareto dominance relation, the most important criterion in multi-objective optimisation, results in the underperformance of traditional Pareto-based algorithms. Also, the aggravation of the conflict between proximity and diversity, along with increasing time or space requirements as well as parameter sensitivity, has become a key barrier to the design of effective many-objective optimisation algorithms. Furthermore, the infeasibility of directly observing solutions can lead to serious difficulties in investigating and comparing algorithms' performance. In this thesis, we address these challenges, aiming to make evolutionary algorithms as effective in many-objective optimisation as in two- or three-objective optimisation.

First, we significantly enhance Pareto-based algorithms, making them suitable for many-objective optimisation by placing individuals with poor proximity into crowded regions so that these individuals have a better chance of being eliminated. Second, we propose a grid-based evolutionary algorithm which explores the potential of the grid to deal with many-objective optimisation problems. Third, we present a bi-goal evolution framework that converts the many objectives of a given problem into two objectives regarding proximity and diversity, thus creating an optimisation problem in which the objectives are the goals of the search process itself. Fourth, we propose a comprehensive performance indicator to compare evolutionary algorithms on optimisation problems with various Pareto front shapes and any objective dimensionality. Finally, we construct a test problem to aid the visual investigation of evolutionary search, with its Pareto optimal solutions in a two-dimensional decision space having a distribution similar to that of their images in a higher-dimensional objective space.

The work reported in this thesis is the outcome of innovative attempts at addressing some of the most challenging problems in evolutionary many-objective optimisation. This research has not only made some existing approaches, such as the Pareto-based and grid-based algorithms traditionally regarded as unsuitable, effective for many-objective optimisation, but has also pushed other important boundaries with novel ideas, including bi-goal evolution, a comprehensive performance indicator, and a test problem for visual investigation. All the proposed algorithms have been systematically evaluated against the existing state of the art, and some have already been taken up by researchers and practitioners in the field.
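The "ineffectiveness of the Pareto dominance relation" mentioned above has a simple empirical demonstration: as the number of objectives grows, almost every solution in a random population becomes non-dominated, so dominance-based ranking stops discriminating between individuals. A minimal sketch (population size and seed are arbitrary choices):

```python
# Sketch: fraction of mutually non-dominated points in a random population,
# as a function of the number of objectives (minimisation assumed).
import random

def dominates(a, b):
    """a Pareto-dominates b: no worse in any objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points=200, n_objectives=2, seed=0):
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(n_objectives)] for _ in range(n_points)]
    front = [p for p in pts if not any(dominates(q, p) for q in pts)]
    return len(front) / n_points

for m in (2, 3, 5, 10, 15):
    frac = nondominated_fraction(n_objectives=m)
    print(f"{m:2d} objectives: {frac:.2f} of the population is non-dominated")
```

With two objectives only a few percent of random points are non-dominated; by ten or more objectives nearly all of them are, which is why purely Pareto-based selection loses its driving force in many-objective optimisation.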
10

Determining when to interact: The Interaction Algorithm

Sykes, Edward 07 September 2012
Current trends in society and technology make interruption a central human-computer interaction problem. Many intelligent computer systems exist, but none that determines when best to interact with a user as s/he performs computer-based tasks. In this work, an Interaction Algorithm was designed, developed, and evaluated that draws on a user model and real-time observations of the user's actions during computer-based tasks to determine ideal times to interact with the user. This research addresses the complex problem of determining the precise time to interrupt a user and how best to support him/her during and after the interruption task. Many sub-problems have been taken into account, such as determining the difficulty of the task, the intent of the user while performing it, and how to incorporate personal user characteristics. This research is quite timely, as the number of interruptions people experience daily has grown considerably over the last decade and shows no signs of subsiding. Furthermore, with the exponential growth of mobile computing, interruptions are permeating the user experience. Thus, systems must be developed that manage interruptions by reasoning about the ideal timing of interactions and determining appropriate notification formats. This research sheds light on this problem as described below:

1. The algorithm developed uses a user model in its reasoning computations. Most research in this area has focused on task-based contextual information when designing systems that reason about interruptions; researchers have argued that subjective preferences should also be included.

2. The algorithm's performance is quite promising, at 96% accuracy across the several models created.

3. The algorithm was implemented using an advanced machine-learning technique, an Adaptive Neuro-Fuzzy Inference System (ANFIS), which is a novel contribution.

4. The algorithm does not rely on any user involvement. In other systems, users must laboriously review video of their sessions after working with the system and record interruption annotations so that the system can learn.

5. This research sheds light on reasoning about ideal interruption points for free-form tasks, which is currently an unsolved problem.
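At the heart of an ANFIS is Sugeno-type fuzzy inference: input memberships combine into rule firing strengths, and the output is a normalised weighted sum of rule consequents. The sketch below applies that core to interruption timing. A trained ANFIS learns the membership parameters and consequents from data; here they are hand-picked illustrative values, and the two inputs are assumptions, not the thesis's actual feature set.

```python
# Sketch: Sugeno-type fuzzy inference (the core of an ANFIS) scoring how
# suitable the current moment is for interrupting the user.
import math

def gaussian(x, centre, width):
    """Gaussian membership function on a normalised [0, 1] input."""
    return math.exp(-((x - centre) ** 2) / (2 * width ** 2))

def interruption_score(task_difficulty, idle_time):
    """Suitability of interrupting now, in [0, 1] (higher = better moment).
    task_difficulty comes from the user model; idle_time from real-time
    observation of the user's actions. Both inputs normalised to [0, 1]."""
    # Layers 1-2: rule firing strengths (product of input memberships).
    easy_and_idle = gaussian(task_difficulty, 0.0, 0.3) * gaussian(idle_time, 1.0, 0.3)
    hard_and_busy = gaussian(task_difficulty, 1.0, 0.3) * gaussian(idle_time, 0.0, 0.3)
    # Layers 3-5: normalised weighted sum of (hand-picked) rule consequents.
    rules = [(easy_and_idle, 0.95),   # easy task, user idle -> interrupt
             (hard_and_busy, 0.05)]   # hard task, user busy -> wait
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5

print(interruption_score(task_difficulty=0.2, idle_time=0.8))  # ~0.95: good moment
print(interruption_score(task_difficulty=0.9, idle_time=0.1))  # ~0.05: bad moment
```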
