171

Portfolio of compositions

Luque Ancona, Sergio January 2012 (has links)
A portfolio of compositions for acoustic instruments, electronic resources alone, and for acoustic instruments and live electronics. The accompanying commentary describes the aesthetic and the context of these works, their approach to form, and traces the development of the techniques used in their composition. In particular, it discusses a variety of approaches to computer-aided algorithmic composition and stochastic processes for the generation of musical elements (e.g. chord sequences, rhythmic patterns, sound structures). Included in the commentary is a description of research into stochastic synthesis, and of the development of a personal implementation of Dynamic Stochastic Synthesis and Stochastic Concatenation of Dynamic Stochastic Synthesis in SuperCollider.

LIST OF WORKS
Surveillance (2011) for computer, 15:00
Daisy (2011) for computer, 9:40
Absorbed (2010) for 2 violas, 9:00
My idea of fun (2010) for clarinet, percussion and viola, 7:00
Brazil (2009) for computer, 8:10
Spine (2008) for English horn, percussion, violin and double bass, 8:30
"Sex, Drugs and Rock 'n Roll" was never meant to be like this (2007) for computer, 9:40
Don't have any evidence (2007) for bass flute, English horn, bass clarinet, bassoon, percussion, piano, violin, viola, cello and double bass, 9:30
Happy Birthday (2006/2007) for computer, 8:00
Résistance (2006) for accordion, 4:00
My life has been filled with terrible misfortune; most of which never happened (2004) for bass clarinet, violin, viola, cello, double bass and live electronics, 9:00
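Dynamic Stochastic Synthesis, as developed by Xenakis, generates a waveform from a small set of breakpoints whose amplitudes and durations are perturbed by random walks within reflecting bounds. The sketch below illustrates that idea; parameter names and ranges are illustrative only and are not drawn from the thesis or from SuperCollider's Gendy implementation.

```python
# A minimal sketch of Dynamic Stochastic Synthesis (after Xenakis's GENDY),
# assuming a simple uniform random walk on breakpoint amplitudes and durations
# with reflecting boundaries. Illustrative parameters, not the thesis's own.
import random

def reflect(x, lo, hi):
    """Fold a value back into [lo, hi] (Xenakis's 'elastic mirrors')."""
    while x < lo or x > hi:
        if x < lo:
            x = 2 * lo - x
        if x > hi:
            x = 2 * hi - x
    return x

def dynamic_stochastic_synthesis(n_points=12, n_periods=200,
                                 amp_step=0.1, dur_step=0.002,
                                 min_dur=0.001, max_dur=0.02):
    """Yield (duration, amplitude) breakpoints, one waveform period at a time."""
    amps = [random.uniform(-1.0, 1.0) for _ in range(n_points)]
    durs = [random.uniform(min_dur, max_dur) for _ in range(n_points)]
    for _ in range(n_periods):
        for i in range(n_points):
            # Random walk on each breakpoint's amplitude and duration,
            # reflected back into the allowed ranges.
            amps[i] = reflect(amps[i] + random.uniform(-amp_step, amp_step), -1.0, 1.0)
            durs[i] = reflect(durs[i] + random.uniform(-dur_step, dur_step), min_dur, max_dur)
        yield list(zip(durs, amps))

# Linear interpolation between successive breakpoints would then render
# each period to audio samples at a given sample rate.
```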
172

Penalized regression methods with application to generalized linear models, generalized additive models, and smoothing

Utami Zuliana, Sri January 2017 (has links)
Recently, penalized regression has been used to deal with problems found in maximum likelihood estimation, such as correlated parameters and a large number of predictors. The main issue in this kind of regression is how to select the optimal model. In this thesis, Schall's algorithm is proposed as an automatic method for selecting the penalty weight. The algorithm has two steps. First, the coefficient estimates are obtained with an arbitrary penalty weight. Second, an estimate of the penalty weight λ is calculated as the ratio of the error variance to the variance of the coefficients. The iteration is continued from step one until the estimate of the penalty weight converges. The computational cost is kept low because the optimal penalty weight can be obtained within a small number of iterations. In this thesis, Schall's algorithm is investigated for ridge regression, lasso regression and two-dimensional histogram smoothing. The proposed algorithm is applied to real and simulated data sets. In addition, a new algorithm for lasso regression is proposed. The algorithm performed comparably across all applications. Schall's algorithm can therefore be an efficient method for selecting the penalty weight.
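The two-step iteration can be made concrete for ridge regression. The following is a minimal sketch assuming the usual variance-components form of the Schall update, in which the penalty weight is the ratio of the residual variance to the coefficient variance and the effective degrees of freedom come from the trace of the hat matrix; the exact degrees-of-freedom corrections used in the thesis may differ.

```python
# Schall-type iteration for the ridge penalty weight (a sketch, assuming
# lambda = sigma2_error / sigma2_coef with edf = trace of the hat matrix).
import numpy as np

def schall_ridge(X, y, lam=1.0, tol=1e-6, max_iter=100):
    n, p = X.shape
    for _ in range(max_iter):
        # Step 1: ridge estimate for the current penalty weight.
        A = X.T @ X + lam * np.eye(p)
        beta = np.linalg.solve(A, X.T @ y)
        # Effective degrees of freedom: trace of (X'X + lam*I)^{-1} X'X.
        edf = np.trace(np.linalg.solve(A, X.T @ X))
        # Step 2: variance of the residuals and of the coefficients.
        resid = y - X @ beta
        sigma2_e = resid @ resid / max(n - edf, 1e-8)
        sigma2_b = beta @ beta / max(edf, 1e-8)
        lam_new = sigma2_e / sigma2_b
        if abs(lam_new - lam) < tol * lam:
            return beta, lam_new
        lam = lam_new
    return beta, lam
```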
173

Dataflow methods in HPC, visualisation and analysis

Biddiscombe, John A. January 2017 (has links)
The processing power available to scientists and engineers using supercomputers has grown exponentially over the last few decades, permitting significantly more sophisticated simulations and, as a consequence, generating proportionally larger output datasets. This change has taken place in tandem with a gradual shift in the design and implementation of simulation and post-processing software, away from simulation as a first step and visualisation/analysis as a second, towards in-situ, on-the-fly methods that provide immediate visual feedback, place less strain on file systems, and reduce overall data movement and copying. Concurrently, processor speed increases have dramatically slowed, and multi- and many-core architectures have instead become the norm for virtually all High Performance Computing (HPC) machines. This in turn has led to a shift away from the traditional distributed one-rank-per-node model, to one rank per process using multiple processes per multicore node, and then back towards one rank per node again, using distributed and multi-threaded frameworks combined. This thesis consists of a series of publications that demonstrate how software design for analysis and visualisation has tracked these architectural changes and pushed the boundaries of HPC visualisation using dataflow techniques in distributed environments. The first publication shows how support for the time dimension in parallel pipelines can be implemented, demonstrating how information flow within an application can be leveraged to optimise performance and add features such as analysis of time-dependent flows and comparison of datasets at different timesteps. A method of integrating dataflow pipelines with in-situ visualisation is subsequently presented, using asynchronous coupling of user-driven GUI controls and a live simulation running on a supercomputer. The loose coupling of analysis and simulation allows for reduced IO, immediate feedback and the ability to change simulation parameters on the fly. A significant drawback of parallel pipelines is the inefficiency caused by improper load balancing, particularly during interactive analysis where the user may select between different features of interest. This problem is addressed in the fourth publication by integrating a high-performance partitioning library into the visualisation pipeline and extending the information flow up and down the pipeline to support it. This extension is demonstrated in the third publication (published earlier) on massive meshes of extremely high complexity, and shows that general-purpose visualisation tools such as ParaView can be made to compete with bespoke software written for a dedicated task. The future of software running on many-core architectures will involve task-based runtimes, with dynamic load balancing, asynchronous execution based on dataflow graphs, work stealing and concurrent data sharing between simulation and analysis. The final paper of this thesis presents an optimisation for one such runtime, in support of these future HPC applications.
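The sketch below illustrates the kind of demand-driven pipeline with a time dimension discussed in the first publication: metadata and requests (such as which timestep to produce) flow upstream while data flows back downstream. It is loosely in the spirit of VTK/ParaView-style pipelines but does not use their API; class and method names are illustrative only.

```python
# A toy dataflow pipeline with time support: requests flow upstream,
# data flows downstream. Not the thesis's implementation.
class Filter:
    def __init__(self, upstream=None):
        self.upstream = upstream

    def request_information(self):
        # Metadata (e.g. available timesteps) propagates down from the source.
        return self.upstream.request_information() if self.upstream else {}

    def request_data(self, request):
        data = self.upstream.request_data(request) if self.upstream else None
        return self.execute(data, request)

    def execute(self, data, request):
        return data  # pass-through by default


class TimeSource(Filter):
    def request_information(self):
        return {"timesteps": [0.0, 0.5, 1.0, 1.5]}

    def request_data(self, request):
        t = request["timestep"]
        return {"timestep": t, "values": [t * i for i in range(4)]}


class Difference(Filter):
    """Compare the requested timestep with the previous one."""
    def execute(self, data, request):
        steps = self.request_information()["timesteps"]
        i = steps.index(data["timestep"])
        prev = self.upstream.request_data({"timestep": steps[max(i - 1, 0)]})
        data["delta"] = [a - b for a, b in zip(data["values"], prev["values"])]
        return data


pipeline = Difference(TimeSource())
print(pipeline.request_data({"timestep": 1.0}))
```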
174

An intrusion detection scheme for identifying known and unknown web attacks (I-WEB)

Kamarudin, Muhammad Hilmi January 2018 (has links)
The number of utilised features can increase a system's computational effort when processing large volumes of network traffic. In practice, it is pointless to use all features, since redundant or irrelevant features deteriorate detection performance. Meanwhile, statistical approaches are extensively practised in the Anomaly-Based Detection System (ABDS) environment. These statistical techniques do not require any prior knowledge of attack traffic; this advantage has attracted many researchers to employ the method. Nevertheless, performance remains unsatisfactory, since it produces high false detection rates. In recent years, the demand for data mining (DM) techniques in the field of anomaly detection has increased significantly. Even though this approach can distinguish normal and attack behaviour effectively, its performance (true positive, true negative, false positive and false negative rates) still does not achieve the expected improvement. Moreover, the need to re-initiate the whole learning procedure, even when the attack traffic has previously been detected, seems to contribute to the poor system performance. This study aims to improve the detection of normal and abnormal traffic by determining the prominent features and recognising outlier data points more precisely. To achieve this objective, the study proposes a novel Intrusion Detection Scheme for Identifying Known and Unknown Web Attacks (I-WEB) which combines various strategies and methods. The proposed I-WEB is divided into three phases, namely pre-processing, anomaly detection and post-processing. In the pre-processing phase, the strengths of both filter and wrapper procedures are combined to select the optimal set of features. In the filter, Correlation-based Feature Selection (CFS) is proposed, whereas the Random Forest (RF) classifier is chosen to evaluate feature subsets in the wrapper procedure. In the anomaly detection phase, statistical analysis is used to formulate a normal profile and to calculate a normality score for each traffic record. The threshold is defined using Euclidean Distance (ED) alongside the Chebyshev Inequality Theorem (CIT), with the aim of improving the attack recognition rate by accurately eliminating outlier data points. To improve attack identification and reduce the misclassification rates of the traffic first detected by statistical analysis, ensemble learning, specifically a boosting classifier, is proposed, using LogitBoost as the meta-classifier and RF as the base classifier. Furthermore, verified attack traffic detected by ensemble learning is extracted and computed into signatures before being stored in a signature library for future identification. This helps to reduce detection time, since similar traffic behaviour will not have to be re-processed in future.
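The thresholding idea can be sketched as follows: each record's Euclidean distance from a normal profile is compared against a cut-off derived from Chebyshev's inequality, which bounds the fraction of normal records exceeding the threshold without assuming a particular distribution. The feature vectors below are synthetic and the exact normality score used in I-WEB is not reproduced; this is only the general shape of the method.

```python
# Chebyshev-based outlier threshold on Euclidean distances (a sketch).
import numpy as np

def build_profile(normal_traffic):
    """Mean feature vector of known-normal traffic, plus distance statistics."""
    profile = normal_traffic.mean(axis=0)
    dists = np.linalg.norm(normal_traffic - profile, axis=1)
    return profile, dists.mean(), dists.std()

def flag_outliers(traffic, profile, mu_d, sigma_d, max_false_rate=0.05):
    # Chebyshev: P(|D - mu| >= k*sigma) <= 1/k^2, so choosing
    # k = 1/sqrt(max_false_rate) bounds the fraction of normal records
    # falling beyond the threshold by max_false_rate.
    k = 1.0 / np.sqrt(max_false_rate)
    threshold = mu_d + k * sigma_d
    dists = np.linalg.norm(traffic - profile, axis=1)
    return dists > threshold  # True = candidate anomaly

# Example with synthetic feature vectors (illustrative only):
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 10))
mixed = np.vstack([rng.normal(0, 1, size=(50, 10)), rng.normal(5, 1, size=(5, 10))])
profile, mu_d, sigma_d = build_profile(normal)
print(flag_outliers(mixed, profile, mu_d, sigma_d).sum(), "records flagged")
```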
175

A rule-based approach for recognition of chemical structure diagrams

Sadawi, Noureddin January 2013 (has links)
In the chemical literature, much information is given in the form of diagrams depicting chemical structures. In order to access this information electronically, diagrams have to be recognized and translated into a processable format. Although a number of approaches have been proposed in the literature for the recognition of molecule diagrams, they traditionally employ procedural methods with limited flexibility and extensibility. This thesis presents a novel approach that models the principal recognition steps for molecule diagrams in a strictly rule-based system. We develop a framework that enables the definition of a set of rules for the recognition of different bond types and arrangements, as well as for resolving possible ambiguities. This allows us to view the diagram recognition problem as a process of rewriting an initial set of geometric artefacts into a graph representation of a chemical diagram, without the need to adhere to a rigid procedure. We demonstrate the flexibility of the approach by extending it to capture new bond types and compositions. In an experimental evaluation we show that an implementation of our approach outperforms the leading open-source system currently available. Finally, we discuss how our framework could be applied to other automatic diagram recognition tasks.
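The rewriting view of recognition can be illustrated with a toy rule engine: rules inspect the current set of geometric artefacts and, when they match, replace primitives with higher-level graph elements until no rule applies. The rule below (fusing two near-parallel segments into a double bond) is a placeholder, not one of the thesis's actual rules.

```python
# A toy rule-based rewriting loop over geometric primitives (a sketch).
def parallel_pair_to_double_bond(artefacts):
    """If two near-parallel line segments lie close together, fuse them."""
    lines = [a for a in artefacts if a["kind"] == "line"]
    for i, a in enumerate(lines):
        for b in lines[i + 1:]:
            if abs(a["angle"] - b["angle"]) < 5 and abs(a["offset"] - b["offset"]) < 10:
                rest = [x for x in artefacts if x not in (a, b)]
                return rest + [{"kind": "double_bond", "angle": a["angle"]}]
    return None  # rule does not apply

def rewrite(artefacts, rules):
    """Apply rules until none fires; rule order acts as a simple priority."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            result = rule(artefacts)
            if result is not None:
                artefacts, changed = result, True
                break
    return artefacts

primitives = [{"kind": "line", "angle": 30, "offset": 0},
              {"kind": "line", "angle": 31, "offset": 6}]
print(rewrite(primitives, [parallel_pair_to_double_bond]))
```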
176

Speech recognition by computer : algorithms and architectures

Tyler, J. E. M. January 1988 (has links)
This work is concerned with the investigation of algorithms and architectures for computer recognition of human speech. Three speech recognition algorithms have been implemented, using (a) Walsh Analysis, (b) Fourier Analysis and (c) Linear Predictive Coding. The Fourier Analysis algorithm made use of the Prime-number Fourier Transform technique. The Linear Predictive Coding algorithm made use of LeRoux and Gueguen's method for calculating the coefficients. The system was organised so that the speech samples could be input to a PC/XT microcomputer in a typical office environment. The PC/XT was linked via Ethernet to a Sun 2/180s computer system, which allowed the data to be stored on a Winchester disk so that the data used for testing each algorithm was identical. The recognition algorithms were implemented entirely in Pascal, to allow evaluation to take place on several different machines. The effectiveness of the algorithms was tested with a group of five naive speakers, with results in the form of recognition scores. The results showed the superiority of the Linear Predictive Coding algorithm, which achieved a mean recognition score of 93.3%. The software was implemented on three different computer systems: an 8-bit microprocessor, a 16-bit microcomputer based on the IBM PC/XT, and a Motorola 68020-based Sun workstation. The effectiveness of the implementations was measured in terms of the speed of execution of the recognition software. By limiting the vocabulary to ten words, it has been shown that it would be possible to achieve recognition of isolated utterances in real time using a single 68020 microprocessor. Real time in this context is understood to mean that the recognition task will, on average, be completed within the duration of the utterance, for all the utterances in the recogniser's vocabulary. A speech recogniser architecture is proposed which would achieve real-time speech recognition without any limitation being placed upon (a) the order of the transform, and (b) the size of the recogniser's vocabulary. This is achieved by utilising a pipeline of four processors, with the pattern matching process performed in parallel on groups of words in the vocabulary.
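As an illustration of the LPC stage, the sketch below computes predictor coefficients by the autocorrelation method. For simplicity it uses the widely known Levinson-Durbin recursion rather than the LeRoux-Gueguen formulation used in the thesis; both solve the same normal equations, with LeRoux-Gueguen producing reflection coefficients in a form better suited to fixed-point arithmetic.

```python
# Autocorrelation LPC via Levinson-Durbin (a sketch, not the thesis's code).
import numpy as np

def autocorrelation(frame, order):
    return np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])

def levinson_durbin(r, order):
    """Return prediction filter coefficients a[0..order] (a[0]=1) and the error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                           # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a, err

# Example: 10th-order LPC of a short synthetic voiced-like frame.
t = np.arange(256)
frame = np.sin(2 * np.pi * 0.03 * t) + 0.1 * np.random.default_rng(1).normal(size=256)
coeffs, err = levinson_durbin(autocorrelation(frame, 10), 10)
print(np.round(coeffs, 3))
```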
177

An investigation into automation of fire field modelling techniques

Taylor, Stephen John January 1997 (has links)
The research described in this thesis has produced a prototype system based on fire field modelling techniques for use by members of the Fire Safety Engineering community who are not expert in modelling techniques. The system captures the qualitative reasoning of an experienced modeller in the assessment of room geometries in order to set up important initial parameters of the problem. The prototype system is based on artificial intelligence techniques, specifically expert system technology. It is implemented as a case-based reasoning (CBR) system, primarily because it was discovered that the expert uses case-based reasoning when dealing with such problems manually. The thesis answers three basic research questions, organised into a primary question and two subsidiary questions. The primary question is: how can CFD setup for fire modelling problems be automated? From this, the two subsidiary questions are concerned with how to represent the qualitative and quantitative knowledge associated with fire modelling, and with selecting the most appropriate method of knowledge storage and retrieval. The thesis describes how knowledge has been acquired and represented for the system, pattern recognition issues, the chosen methods of knowledge storage and retrieval, the implementation of the prototype system, and validation. Validation has shown that the system models the expert's knowledge in a satisfactory way and that the system performs competently when faced with new problems. The thesis concludes with a section regarding new research questions arising from the research, and the further work these questions entail.
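The retrieve step of such a case-based reasoner can be sketched as a weighted nearest-neighbour search over geometry features, returning the setup parameters the expert chose for the most similar past room. The features, weights and parameter values below are invented for illustration and are not taken from the thesis.

```python
# A minimal CBR retrieve step for CFD setup (illustrative features only).
def similarity(query, case, weights):
    """Weighted inverse-distance similarity over geometry features."""
    score = 0.0
    for feature, weight in weights.items():
        diff = abs(query[feature] - case["geometry"][feature])
        score += weight / (1.0 + diff)
    return score

def retrieve(query, case_base, weights):
    return max(case_base, key=lambda c: similarity(query, c, weights))

case_base = [
    {"geometry": {"length": 5.0, "width": 4.0, "height": 2.4, "n_vents": 1},
     "setup": {"mesh_cells": 40000, "turbulence_model": "k-epsilon"}},
    {"geometry": {"length": 20.0, "width": 15.0, "height": 6.0, "n_vents": 4},
     "setup": {"mesh_cells": 300000, "turbulence_model": "LES"}},
]
weights = {"length": 1.0, "width": 1.0, "height": 0.5, "n_vents": 2.0}
query = {"length": 6.0, "width": 4.5, "height": 2.5, "n_vents": 1}
print(retrieve(query, case_base, weights)["setup"])
```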
178

The use of some non-minimal representations to improve the effectiveness of genetic algorithms

Robbins, Phil January 1995 (has links)
In the unitation representation used in genetic algorithms, the number of genotypes that map onto each phenotype varies greatly. This leads to an attractor in phenotype space which impairs the performance of the genetic algorithm. The attractor is illustrated theoretically and empirically. A new representation, called the length varying representation (LVR), allows unitation chromosomes of varying length (and hence with a variety of attractors) to coexist. Chromosomes whose lengths yield attractors close to optima come to dominate the population. The LVR is shown to be more effective than the unitation representation against a variety of fitness functions. However, the LVR preferentially converges towards the low end of phenotype space. The phenotype shift representation (PSR) is therefore defined: it retains the ability of the LVR to select for attractors that are close to optima, whilst using a fixed-length chromosome and thus avoiding the asymmetries inherent in the LVR. The PSR is more effective than the LVR, and the results compare favourably with previously published results from eight other algorithms. The internal operation of the PSR is investigated, and the PSR is extended to cover multi-dimensional problems. The premise that improvements in performance may be attained by inserting introns (non-coding sequences affecting linkage) into traditional bit-string chromosomes is investigated. In this investigation, using a population size of 50, there was no evidence of improvement in performance. However, the position of the optima relative to the Hamming cliffs is shown to have a major effect on the performance of the genetic algorithm using the binary representation, and the inadequacy of the traditional crossover and mutation operators in this context is demonstrated. Also, disallowing duplicate population members was found to improve performance over the standard generational replacement strategy in all trials.
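The attractor in the unitation representation follows from simple counting: an L-bit chromosome has C(L, k) genotypes with phenotype k (the number of ones), so unbiased variation pulls phenotypes towards L/2. The short sketch below demonstrates both the counts and the drift; parameter values are illustrative only.

```python
# Why the unitation representation has an attractor near L/2 (a sketch).
from math import comb
import random

L = 20  # chromosome length

# Genotype-to-phenotype redundancy: sharply peaked at the centre of the range.
counts = {k: comb(L, k) for k in range(L + 1)}
print({k: counts[k] for k in (0, 5, 10, 15, 20)})   # 1, 15504, 184756, 15504, 1

# Empirically: mutation with no selection drifts phenotypes towards ~L/2,
# even when every chromosome starts at phenotype 0.
population = [[0] * L for _ in range(500)]
for _ in range(100):
    for chrom in population:
        chrom[random.randrange(L)] ^= 1              # bit-flip mutation only
print(sum(map(sum, population)) / len(population))   # mean phenotype ~ L/2
```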
179

Applying experientialism to HCI methods

Imaz, Manuel January 2001 (has links)
The aim of this thesis is to incorporate the results of Experientialism into the domain of Human-Computer Interaction. The purpose is twofold: on the one hand, it shows how concepts of Experientialism such as metaphor, image schema, stories and conceptual integration may be used to explain where some concepts of HCI come from. On the other hand, it uses the same conceptual background to support the design activity: the same concepts of Experientialism may be employed to build new conceptual artifacts with which to design User Interfaces and application software in general. One of the most fruitful ideas Experientialism offers is conceptual integration as the basis upon which to construct new design solutions. Notwithstanding the pervasive use of metaphor in everyday language and even in HCI texts, there is a considerable amount of criticism regarding the use of metaphor in designing user interfaces, based on the assumption that this practice may be the origin of difficulties when using such software products. That is why one of the chapters aims to show that not only metaphor but figurative language in general is pervasive in HCI. Figurative language is not only commonly employed; it is one of the main tools for conceptualising the new ideas and concepts required in the activity of software development. The thesis proposes a framework for designing User Interfaces based on the concepts of Experientialism. The proposal integrates two phases (analysis and design), in the same way as most software development methods, and draws on the broad scope of cognitive processes such as image schema, metaphor and conceptual integration. These general concepts are well suited to building the conceptual models upon which user interfaces are elaborated, and the optimality principles proposed to study the suitability of conceptual integration may also be used as validity criteria to evaluate such design artifacts. In order to validate the proposal, the thesis shows how to use the framework in two different situations: i) to explain why a problem such as the Mac trashcan (used to eject diskettes) is not a problem of using metaphors but an unfortunate design decision, and ii) to apply it in the design of a new User Interface. Other concepts of Experientialism are proposed for capturing user requirements. The concept of story is the ground on which to build scenarios or use cases, as stories are a more general cognitive process and a form of telling things at a more general level. This is why user stories may be mapped to use cases: both are essentially different types of stories, and the capture of requirements is a way of specifying one type of story (use cases) based on the original stories (user stories).
180

A configuration approach for selecting a data warehouse architecture

Weir, Robert January 2008 (has links)
Living in the Information Age, organisations must be able to exploit their data alongside the traditional economic resources of man, machine and money. Accordingly, organisations implement data warehouses to organise and consolidate their data, which creates a decision support system that is “subject oriented”, “time variant”, “integrated” and “non-volatile”. However, the organisation's ability to successfully exploit their data is determined by the degree of strategic alignment. As such, this study poses the question: how can a data warehouse be successfully and demonstrably aligned to an organisation's strategic objectives? This thesis demonstrates that strategic alignment can be achieved by following a new "top down" data warehouse implementation framework, the Configuration Approach, which is based upon determining an organisation's target configuration. This was achieved by employing Miles and Snow's Ideal Types to formulate a questionnaire that reveals an organisation's target configuration in terms of its approach to the Entrepreneurial, Administration and Information Systems challenges. Crucially, this thesis also provides the means to choose a data warehouse architecture that is wholly based on the organisation's target configuration. The Configuration Approach was evaluated using a single case study undergoing a period of strategic transformation where the implementation of a data warehouse was key to its strategic ambitions. The case study illustrated how it is possible to articulate an organisation's strategic configuration, which becomes the key driver for building a warehouse that demonstrably supports the resolution of its Entrepreneurial and Administration challenges. Significantly, the case study also provides a unique opportunity to demonstrate how the target configuration helps organisations to make the right choice of data warehouse architecture to satisfy the Information Systems challenge. In this case, the Configuration Approach provides a basis for challenging the architectural choices made by a consultancy on behalf of the participating organisation. Accordingly, it can be asserted that data warehouses are strategic investments, if implemented using the Configuration Approach.
