About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Configuration tool prototype for the Trusted Computing Exemplar project

Welliver, Terrence M. January 2009 (has links) (PDF)
Thesis (M.S. in Computer Science)--Naval Postgraduate School, December 2009. / Thesis Advisor(s): Irvine, Cynthia E. Second Reader: Clark, Paul C. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: Trusted computing exemplar, Least privilege separation kernel, Graphical user interface, Wxpython, Java, Configuration vector, LPSK, Configuration vector tool, TCX, GUI, SKPP. Includes bibliographical references (p. 97-98). Also available in print.
2

XMI-based transformation of UML interaction diagrams to activity diagrams /

Wong, Eric C. January 1900 (has links)
Thesis (M.Sc.) - Carleton University, 2002. / Includes bibliographical references (p. 129-133). Also available in electronic format on the Internet.
3

Evolution and adoption of UML-based development tools /

Napoles, Rodolfo, January 1900 (has links)
Thesis (M.Eng.) - Carleton University, 2005. / Includes bibliographical references (p. 92-93). Also available in electronic format on the Internet.
4

Disk based model checking /

Bao, Tonglaga, January 2004 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Computer Science, 2004. / Includes bibliographical references (p. 33-34).
5

An algorithm for computing short-range forces in molecular dynamics simulations with non-uniform particle densities

Law, Timothy R. January 2017 (has links)
We develop the projection sorting algorithm, used to compute pairwise short-range interaction forces between particles in molecular dynamics simulations. We contrast this algorithm with the state of the art and discuss situations where it may be particularly effective. We then explore the efficient implementation of the projection sorting algorithm in both on-node (shared memory parallel) and off-node (distributed memory parallel) environments. We provide AVX, AVX2, KNC and AVX-512 intrinsic implementations of the force calculation kernel. We use modern multi- and many-core architectures, Intel Haswell, Broadwell, Knights Corner (KNC) and Knights Landing (KNL), as a representative slice of modern High Performance Computing (HPC) installations. In the course of implementation we use our algorithm to optimise a contemporary biophysical molecular dynamics simulation of chromosome condensation. We compare state-of-the-art Molecular Dynamics (MD) algorithms with projection sorting, and experimentally demonstrate the performance gains possible with our algorithm. These experiments are carried out in single- and multi-node configurations. We observe speedups of up to 5x when comparing our algorithm to the state of the art, and up to 10x when compared to the original unoptimised simulation. These optimisations have directly affected the ability of domain scientists to carry out their work.
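A minimal sketch of the general idea behind projection sorting, assuming a simple Lennard-Jones interaction: particles are sorted along one projection axis so the inner sweep can stop as soon as the projected separation exceeds the cutoff. This is illustrative Python only, not the thesis's vectorised (AVX/AVX-512) or distributed implementation; the function name and parameters are hypothetical.

```python
# Minimal sketch (not the thesis's implementation): prune pair interactions by
# sorting particles along one projection axis, so each particle only tests
# neighbours whose projected coordinate lies within the cutoff radius.
import numpy as np

def short_range_forces(pos, cutoff, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces with a cutoff, using a 1-D projection sweep."""
    n = len(pos)
    forces = np.zeros_like(pos)
    order = np.argsort(pos[:, 0])          # projection onto the x-axis
    sorted_pos = pos[order]
    for a in range(n):
        for b in range(a + 1, n):
            # The sweep terminates early once the projected separation exceeds the cutoff.
            if sorted_pos[b, 0] - sorted_pos[a, 0] > cutoff:
                break
            r_vec = sorted_pos[a] - sorted_pos[b]
            r2 = np.dot(r_vec, r_vec)
            if r2 < cutoff * cutoff:
                inv_r2 = sigma * sigma / r2
                inv_r6 = inv_r2 ** 3
                f = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2 * r_vec
                forces[order[a]] += f
                forces[order[b]] -= f
    return forces

forces = short_range_forces(np.random.rand(100, 3) * 10.0, cutoff=2.5)  # toy usage
```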
6

A visual adaptive authoring framework for adaptive hypermedia

Khan, Javed Arif January 2018 (has links)
In a linear hypermedia system, all users are offered a standard series of hyperlinks. Adaptive Hypermedia (AH) tailors what the user sees to the user's goals, abilities, interests, knowledge and preferences. Adaptive Hypermedia is said to be the answer to the 'lost in hyperspace' phenomenon, where the user has too many hyperlinks to choose from, and has little knowledge to select the most appropriate hyperlink. AH offers a selection of links and content that is most appropriate to the current user. In an Adaptive Educational Hypermedia (AEH) course, a student's learning experiences can be personalised using a User Model (UM), which could include information such as the student's knowledge level, preferences and culture. Besides these basic components, a Goal Model (GM) can represent the goals the users should meet and a Domain Model (DM) would represent the knowledge domain. Adaptive strategies are sets of adaptive rules that can be applied to these models to allow the personalisation of the course for students, according to their needs. From the many interacting elements, it is clear that the authoring process is a bottleneck in adaptive course creation, which needs to be improved in terms of interoperability, usability and reuse of the adaptive behaviour (strategies). Authoring of Adaptive Hypermedia is considered to be difficult and time consuming. There is great scope for improving authoring tools in Adaptive Educational Hypermedia systems, to aid already burdened authors in creating adaptive courses easily. Adaptation specifications are very useful in creating adaptive behaviours to support the needs of a group of learners. Authors often lack the time or the skills needed to create new adaptation specifications from scratch. Creating an adaptation specification requires the author to know and remember the programming language syntax, which creates a knowledge barrier for the author. LAG is a complete and useful programming language, which, however, is considered too complex for authors to deal with directly. This thesis thus proposes a visual framework (LAGBlocks) for the LAG adaptation language and an authoring tool (VASE) that uses the proposed visual framework to create adaptive specifications by manipulating visual elements. It is shown that the VASE authoring tool, along with the visual framework, enables authors to create adaptive specifications with ease and assists them in creating specifications that promote the "separation of concerns". The VASE authoring tool offers code completeness and correctness at design time, and also allows adaptive strategies to be used within other tools for adaptive hypermedia. The goal is thus to make adaptive specifications easier to create and to share for authors with little or no programming knowledge and experience. This thesis looks at three aspects of authoring in adaptive educational hypermedia systems. The first aspect of the thesis is concerned with problems faced by the author of an adaptive hypermedia system; the second aspect is concerned with describing the findings gathered from investigating previously developed authoring tools; and the final aspect of the thesis is concerned with the proposal, implementation and evaluation of a new authoring tool that improves the authoring process for authors with different knowledge, backgrounds and experience.
The purpose of the new tool, VASE, is to enable authors to create adaptive strategies in a puzzle-building manner; moreover, the created adaptive strategies are compatible with other adaptive hypermedia systems that use the LAG programming language.
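As a rough illustration of what an adaptation specification encodes, the following sketch expresses a single adaptive rule in plain Python; it is not LAG syntax and not the VASE/LAGBlocks representation, and all names and values are hypothetical.

```python
# Hypothetical illustration only -- not LAG syntax. A sketch of the kind of rule
# an adaptation specification encodes: user-model attributes decide which
# domain-model content variant is shown to the learner.
user_model = {"knowledge": {"recursion": 0.2}, "preference": "visual"}

def select_content(concept, user_model):
    """Pick a presentation of a concept based on the learner's user model."""
    level = user_model["knowledge"].get(concept, 0.0)
    if level < 0.5:
        variant = "beginner"          # show introductory material first
    else:
        variant = "advanced"          # unlock the advanced explanation
    if user_model["preference"] == "visual":
        variant += "+diagram"         # adapt the media type as well
    return f"{concept}:{variant}"

print(select_content("recursion", user_model))   # -> recursion:beginner+diagram
```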
7

Towards a model of giftedness in programming : an investigation of programming characteristics of gifted students at University of Warwick

Qahmash, Ayman January 2018 (has links)
This study investigates characteristics related to learning programming among gifted first-year computer science students. These characteristics include mental representations, knowledge representations, coding strategies, and attitudes and personality traits. The study was motivated by the aim of developing a theoretical framework to define giftedness in programming; in doing so, it aims to close the gap between gifted education and computer science education, allowing gifted programmers to be supported. Previous studies indicated a lack of theoretical foundation for gifted education in computer science, especially for identifying gifted programmers, which may have resulted in concerns about the identification process and/or inappropriate support. The study starts by investigating the relationship between mathematics and programming. We collected 3060 records of raw data of students' grades from 1996 to 2015. Descriptive statistics and the Pearson product-moment correlation test were used for the analysis. The results indicate a statistically significant positive correlation between mathematics and programming in general, and between specific mathematics and programming modules. The study then investigates other programming-related characteristics using a case study methodology, collecting quantitative and qualitative data. A sample of n=9 cases of gifted students was selected and interviewed. In addition, we collected the students' grades, code-writing problems and project (Witter) source code, and analysed these data using analysis procedures specific to each method. The results indicate that gifted student programmers may possess a single characteristic or multiple characteristics with large overlaps. We introduced a model to define giftedness in programming that consists of three profiles, mathematical ability, creativity and personal traits, with each profile consisting of sub-characteristics.
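The correlation step described above amounts to a Pearson product-moment test on paired module grades. A minimal sketch with made-up data (the thesis's 3060 student records are not reproduced here) might look like this:

```python
# Sketch of the correlation analysis described above, using hypothetical grades.
import numpy as np
from scipy.stats import pearsonr

maths_grades = np.array([72, 65, 81, 58, 90, 77, 69, 84])        # hypothetical module marks
programming_grades = np.array([70, 60, 85, 55, 88, 80, 66, 79])  # hypothetical module marks

r, p_value = pearsonr(maths_grades, programming_grades)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")   # positive correlation on this toy data
```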
8

High-dimensional-output surrogate models for uncertainty and sensitivity analyses

Triantafyllidis, Vasileios January 2018 (has links)
Computational models that describe complex physical phenomena tend to be computationally expensive and time consuming. Partial differential equation (PDE) based models in particular produce spatio-temporal data sets in high-dimensional output spaces. Repeated calls of computer models to perform tasks such as sensitivity analysis, uncertainty quantification and design optimization can become computationally infeasible as a result. While constructing an emulator is one solution to approximate the outcome of expensive computer models, it is not always capable of dealing with high-dimensional data sets. To deal with high-dimensional data, in this thesis emulation strategies (Gaussian processes (GPs), artificial neural networks (ANNs) and support vector machines (SVMs)) are combined with linear and non-linear dimensionality reduction techniques (kPCA, Isomap and diffusion maps) to develop efficient emulators. For variance-based sensitivity analysis, a probabilistic framework is developed to account for the emulator uncertainty, and the method is extended to multivariate outputs, with new semi-analytical results derived for performing rapid sensitivity analysis of univariate or multivariate outputs. The developed emulators are also used to extend reduced order models (ROMs) based on proper orthogonal decomposition to parameter-dependent PDEs, including an extension of the discrete empirical interpolation method to non-linear PDE systems.
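A minimal sketch of the emulation strategy described above, assuming scikit-learn and random stand-in data in place of an expensive PDE solver: reduce the high-dimensional output with kernel PCA, fit a Gaussian-process emulator per retained component, and map predictions back to the full output space. This is illustrative only, not the thesis's implementation.

```python
# Sketch: dimensionality reduction + per-component GP emulation of a high-dimensional output.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))            # 50 samples of 3 input parameters (stand-in)
Y = rng.normal(size=(50, 1000))          # 1000-dimensional outputs (stand-in for PDE fields)

kpca = KernelPCA(n_components=5, kernel="rbf", fit_inverse_transform=True)
Z = kpca.fit_transform(Y)                # 5 latent coefficients per sample

# One independent GP emulator per retained latent component.
emulators = [GaussianProcessRegressor(kernel=RBF()).fit(X, Z[:, i]) for i in range(Z.shape[1])]

def emulate(x_new):
    """Predict the full-field output for new inputs via the latent-space emulators."""
    z_pred = np.column_stack([gp.predict(x_new) for gp in emulators])
    return kpca.inverse_transform(z_pred)   # map back to the high-dimensional space

print(emulate(rng.uniform(size=(2, 3))).shape)   # -> (2, 1000)
```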
9

Dataflow methods in HPC, visualisation and analysis

Biddiscombe, John A. January 2017 (has links)
The processing power available to scientists and engineers using supercomputers over the last few decades has grown exponentially, permitting significantly more sophisticated simulations and, as a consequence, generating proportionally larger output datasets. This change has taken place in tandem with a gradual shift in the design and implementation of simulation and post-processing software, from simulation as a first step and visualisation/analysis as a second towards in-situ, on-the-fly methods that provide immediate visual feedback, place less strain on file systems and reduce overall data movement and copying. Concurrently, processor speed increases have dramatically slowed, and multi- and many-core architectures have instead become the norm for virtually all High Performance Computing (HPC) machines. This in turn has led to a shift away from the traditional distributed one-rank-per-node model, to one rank per process using multiple processes per multicore node, and then back towards one rank per node again, using distributed and multi-threaded frameworks combined. This thesis consists of a series of publications that demonstrate how software design for analysis and visualisation has tracked these architectural changes and pushed the boundaries of HPC visualisation using dataflow techniques in distributed environments. The first publication shows how support for the time dimension in parallel pipelines can be implemented, demonstrating how information flow within an application can be leveraged to optimise performance and add features such as analysis of time-dependent flows and comparison of datasets at different timesteps. A method of integrating dataflow pipelines with in-situ visualisation is subsequently presented, using asynchronous coupling of user-driven GUI controls and a live simulation running on a supercomputer. The loose coupling of analysis and simulation allows for reduced IO, immediate feedback and the ability to change simulation parameters on the fly. A significant drawback of parallel pipelines is the inefficiency caused by improper load balancing, particularly during interactive analysis where the user may select between different features of interest. This problem is addressed in the fourth publication by integrating a high-performance partitioning library into the visualisation pipeline and extending the information flow up and down the pipeline to support it. This extension is demonstrated in the third publication (published earlier) on massive meshes with extremely high complexity, and shows that general-purpose visualisation tools such as ParaView can be made to compete with bespoke software written for a dedicated task. The future of software running on many-core architectures will involve task-based runtimes, with dynamic load balancing, asynchronous execution based on dataflow graphs, work stealing and concurrent data sharing between simulation and analysis. The final paper of this thesis presents an optimisation for one such runtime, in support of these future HPC applications.
10

An intrusion detection scheme for identifying known and unknown web attacks (I-WEB)

Kamarudin, Muhammad Hilmi January 2018 (has links)
A large number of utilised features can increase the system's computational effort when processing large volumes of network traffic. In reality, it is pointless to use all features, considering that redundant or irrelevant features would degrade the detection performance. Meanwhile, statistical approaches are extensively practised in the Anomaly Based Detection System (ABDS) environment. These statistical techniques do not require any prior knowledge of attack traffic; this advantage has therefore attracted many researchers to employ this method. Nevertheless, the performance is still unsatisfactory, since it produces high false detection rates. In recent years, the demand for data mining (DM) techniques in the field of anomaly detection has significantly increased. Even though this approach can distinguish normal and attack behaviour effectively, the performance (true positive, true negative, false positive and false negative rates) is still not achieving the expected improvement. Moreover, the need to re-initiate the whole learning procedure, even when the attack traffic has previously been detected, seems to contribute to the poor system performance. This study aims to improve the detection of normal and abnormal traffic by determining the prominent features and recognising the outlier data points more precisely. To achieve this objective, the study proposes a novel Intrusion Detection Scheme for Identifying Known and Unknown Web Attacks (I-WEB) which combines various strategies and methods. The proposed I-WEB is divided into three phases, namely pre-processing, anomaly detection and post-processing. In the pre-processing phase, the strengths of both filter and wrapper procedures are combined to select the optimal set of features: Correlation-based Feature Selection (CFS) is proposed as the filter, whereas the Random Forest (RF) classifier is chosen to evaluate feature subsets in the wrapper procedure. In the anomaly detection phase, statistical analysis is used to formulate a normal profile and to calculate a normality score for each traffic instance. The threshold is defined using Euclidean Distance (ED) alongside the Chebyshev Inequality Theorem (CIT), with the aim of improving the attack recognition rate by eliminating the set of outlier data points accurately. To improve attack identification and reduce the misclassification rates of traffic first detected by statistical analysis, ensemble learning, particularly a boosting classifier, is proposed; this method uses LogitBoost as the meta-classifier and RF as the base-classifier. Furthermore, verified attack traffic detected by ensemble learning is extracted and computed as signatures before being stored in the signature library for future identification. This helps to reduce the detection time, since similar traffic behaviour will not have to be re-analysed in future.
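The thresholding idea, a Euclidean-distance normality score bounded via the Chebyshev inequality P(|X - mu| >= k*sigma) <= 1/k^2, can be sketched as follows. This is an illustrative simplification, not the exact I-WEB procedure, and all names and data are hypothetical.

```python
# Sketch of the statistical thresholding idea (not the exact I-WEB procedure):
# score each record by its Euclidean distance from the normal profile, then set
# the anomaly threshold with the Chebyshev inequality P(|X - mu| >= k*sigma) <= 1/k^2.
import numpy as np

def fit_profile(normal_traffic):
    """Normal profile = per-feature mean, plus distance statistics on normal data."""
    mu = normal_traffic.mean(axis=0)
    d = np.linalg.norm(normal_traffic - mu, axis=1)
    return mu, d.mean(), d.std()

def is_anomaly(record, mu, d_mean, d_std, fp_bound=0.01):
    """Flag a record whose distance exceeds the Chebyshev bound for the chosen false-positive rate."""
    k = 1.0 / np.sqrt(fp_bound)              # choose k so that 1/k^2 <= fp_bound
    return np.linalg.norm(record - mu) > d_mean + k * d_std

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(500, 10))     # stand-in for normal web traffic features
mu, d_mean, d_std = fit_profile(normal)
print(is_anomaly(rng.normal(5, 1, size=10), mu, d_mean, d_std))   # -> True (clear outlier)
```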
