
Exploring views of teachers, teacher trainees and educational experts about e-learning-based teacher training programs in Saudi Arabia : an empirical study

Zaenalabedeen, Abdulaziz January 2016
Technological developments have led to changes in teacher training processes globally. In Saudi Arabia, a variety of e-learning platforms and processes have been adopted in education and within teacher training. However, there is a lack of knowledge and understanding of what types of e-learning teacher training programs exist in the country. This research aims to identify and explore the existing systems and to recommend the features that should be included in future platforms for Saudi Arabia. To accomplish this, the study adopts a mixed-methods research design consisting of a literature review, a questionnaire and semi-structured interviews to identify and analyse the existing e-learning systems. The study sample comprised 432 teachers, teacher trainees and educational experts randomly selected from the wider population: 216 teachers, teacher trainees and professionals working within teacher training were interviewed, while another 216 experts in e-learning and the use of technology in training and education were also included in the study. The main data was analysed using the SPSS statistical package. The central results indicate no statistically significant differences in the respondents' views and show that the existing teacher training systems in Saudi Arabia are diverse, generic and generally lack any focus on teacher training. The research concludes that further investigations should be carried out to analyse the existing teacher training approaches and how technology is used in this process. It calls for the development of a strategic plan by the government and those involved in teacher training on how technology can be adapted and used to improve teacher training, working with all parties involved. Finally, it outlines a summary of features to be included in future e-learning teacher training programmes, based on the findings of this research.

A semantic based framework for software regulatory compliance

Jorshari, Fatemeh Zarrabi January 2016
The software development market is currently witnessing an increasing demand for software applications' conformance with the international regime of Governance, Risk and Compliance (GRC). In this thesis, we propose a compliance requirement analysis method for the early stages of software development based on a semantically rich model, where a mapping can be established from the legal and regulatory requirements relevant to a system's context to software system goals and contexts. This research is an attempt to address the requirement of the General Data Protection Regulation (GDPR, Article 25) (European Commission) for the implementation of a "privacy by design" approach as part of organisational IT systems and processes: data protection requirements must be designed into the development of business processes for products and services. The proposed semantic model consists of a number of ontologies, each corresponding to a knowledge component within the developed framework. Each ontology is a thesaurus of concepts in the compliance and risk assessment domain related to system development, along with the relationships and rules between concepts that comprise the domain knowledge. The main contribution of the work presented in this thesis is a novel ontology-based framework that demonstrates how description-logic reasoning techniques can be used to simulate the legal reasoning employed by legal professionals against the description of each ontology. The semantic modelling of each component of the framework strongly influences the compliance of the software system under development and enables the reusability, adaptability and maintainability of these components. Through the discrete modelling of these components, the flexibility and extensibility of compliance systems is improved. Additionally, enriching the ontologies with semantic rules increases their reasoning power and helps to represent rules of laws, regulations and guidelines for compliance, as well as the mapping, refinement and inheritance of components from one another. This approach offers a pedagogically effective and satisfactory learning experience for developers and compliance officers to be trained in the area of compliance and to query for knowledge in this domain. This thesis offers the theoretical models, design and implementation of a compliance system in accordance with this approach.
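As a rough illustration of the kind of rule checking such an ontology-based framework performs (a plain-Python sketch, not the thesis's description-logic implementation; the Goal class, the safeguard names and the privacy_by_design_violations rule are all hypothetical), a GDPR Article 25 style constraint might be checked against annotated system goals as follows:

```python
from dataclasses import dataclass, field

# Hypothetical mini-ontology: system goals annotated with the compliance
# concepts they touch (names are illustrative, not the thesis's ontologies).
@dataclass
class Goal:
    name: str
    handles_personal_data: bool = False
    safeguards: set = field(default_factory=set)

# A rule in the spirit of GDPR Article 25 ("privacy by design"): any goal
# that processes personal data must declare at least one design safeguard.
def privacy_by_design_violations(goals):
    return [g.name for g in goals
            if g.handles_personal_data and not g.safeguards]

goals = [
    Goal("store_customer_profile", handles_personal_data=True,
         safeguards={"pseudonymisation", "access_control"}),
    Goal("export_usage_report", handles_personal_data=True),  # no safeguard
    Goal("render_public_page"),
]

print(privacy_by_design_violations(goals))  # ['export_usage_report']
```

A description-logic reasoner would express the same constraint as an axiom over ontology classes and infer violations automatically rather than by explicit iteration.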

Real-time simulation of rail vehicle dynamics

Simpson, Michael David January 2017
Software simulation is vitally important in a number of industries. It allows engineers to test new products before they leave the drawing board and enables tests that would otherwise be difficult or impossible to perform. Traditional engineering simulations use sophisticated numerical methods to produce models that are highly accurate, but computationally expensive and time-consuming to use. This accuracy is essential in the latter stages of the design process, but can make the early stages (which often involve frequent, iterative design changes) a lengthy and frustrating process. Additionally, the scope of such simulations is often limited by their complexity. An attempt has been made to produce an alternative, real-time simulation tool, developed using software and development practices from the video games industry, which are designed to simulate and render virtual environments efficiently in real time. In particular, this tool makes use of real-time physics engines: iterative, constraint-based solver systems that use rigid body dynamics to approximate the movements and interactions of physical entities. This has enabled the near-real-time simulation of multi-vehicle trains, and is capable of producing reasonably realistic results, within an acceptably small error bound, for situations in which a real-time simulation would be used as an alternative to existing methods. This thesis presents the design, development and evaluation of this simulation tool, which is based on NVIDIA's PhysX engine. The aim was to determine the suitability of a physics engine-based tool for simulating various aspects of rail dynamics. This thesis intends to demonstrate that such a tool, if configured and augmented appropriately, can produce results that approach those produced by traditional methods and is capable of simulating aspects of rail dynamics that are otherwise prohibitively expensive or beyond the capabilities of existing solutions, and may therefore be a useful supplement to the existing tools used in the rail industry.
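For illustration, the fixed-timestep, iterative integration style used by real-time physics engines can be sketched as below. This is a conceptual toy for a single vehicle treated as a point mass with a linear resistance term, not PhysX or the thesis's simulation tool, and the mass, drag and traction values are made up:

```python
# Minimal fixed-timestep integration loop in the style of a real-time
# physics engine (conceptual sketch only; coefficients are illustrative).
DT = 1.0 / 60.0          # 60 Hz simulation step, typical for games
MASS = 40_000.0          # kg, single vehicle
DRAG = 8.0               # N per (m/s), crude linear resistance

def step(position, velocity, traction_force):
    """Advance one frame using semi-implicit (symplectic) Euler."""
    acceleration = (traction_force - DRAG * velocity) / MASS
    velocity += acceleration * DT          # update velocity first...
    position += velocity * DT              # ...then position (semi-implicit)
    return position, velocity

pos, vel = 0.0, 0.0
for frame in range(60 * 10):               # simulate 10 seconds
    pos, vel = step(pos, vel, traction_force=20_000.0)
print(f"after 10 s: {pos:.1f} m travelled, {vel:.2f} m/s")
```

The trade-off the abstract describes is visible even here: a fixed, coarse timestep and a simple solver run fast enough for interactive use, at the cost of the numerical fidelity of traditional offline solvers.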

Formulating test oracles via anomaly detection techniques

Almaghairbe, Rafig January 2017
Developments in the automation of test data generation have greatly improved the efficiency of the software testing process, but the so-called "oracle problem" (deciding the pass or fail outcome of a test execution) is still primarily an expensive and error-prone manual activity. This thesis presents an approach to building an automated test oracle using anomaly detection techniques (based on semi-supervised and unsupervised learning approaches) on dynamic execution data (test input/output pairs and execution traces). Firstly, anomaly detection techniques based on a semi-supervised learning approach were investigated to automatically classify passing and failing executions. A small proportion of the test data is labelled as passing or failing and used in conjunction with the unlabelled data to build a classifier which labels the remaining outputs (classifying them as passing or failing tests). A range of learning algorithms is investigated using several faulty versions of three systems along with varying types of data (inputs/outputs alone, or in combination with execution traces) and different labelling strategies (both failing and passing tests, and passing tests alone). The results show that in many cases labelling just a small proportion of the test cases (as low as 10%) is sufficient to build a classifier that is able to correctly categorise the large majority of the remaining test cases. This has important practical potential: when checking the test results from a system, a developer need only examine a small proportion of these and use this information to train a learning algorithm to automatically classify the remainder. Secondly, anomaly detection techniques based on unsupervised learning (mainly clustering algorithms) were investigated to automatically detect passing and failing executions. The key hypothesis is that failures will group into small clusters whereas passing executions will group into larger ones. In this investigation, the same dynamic execution data and systems used in the previous study were used to evaluate the proposed approach. The results show this hypothesis to be valid and illustrate that the approach has the potential to substantially reduce the number of outputs that would need to be manually examined following a test run. Finally, a comparison study was performed between an existing technique from the specification mining domain (the data invariant detector Daikon [30]) and the anomaly detection techniques (based on semi-supervised and unsupervised learning approaches). In most cases the semi-supervised learning techniques (mainly a self-training approach using Naïve Bayes with the EM clustering algorithm, and a co-training approach using Naïve Bayes) perform far better under both scenarios (the two different labelling strategies) as an automated test classifier than Daikon, especially when input/output pairs are used together with execution traces. Furthermore, the unsupervised learning techniques performed on a par with Daikon in several cases.
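The unsupervised variant described above can be illustrated with a short scikit-learn sketch: cluster the execution data and flag the smallest cluster for manual inspection. The feature encoding, cluster count and data below are assumptions for demonstration only, not the thesis's experimental setup:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy execution data: each row is a numeric encoding of one test execution
# (e.g. input/output values plus trace features). Real encodings would come
# from the system under test; the values here are synthetic for illustration.
rng = np.random.default_rng(0)
passing = rng.normal(loc=0.0, scale=1.0, size=(95, 4))   # bulk of executions
failing = rng.normal(loc=6.0, scale=0.5, size=(5, 4))    # a few anomalies
executions = np.vstack([passing, failing])

# Cluster executions; the working hypothesis from the abstract is that
# failures collect in small clusters and passes in large ones.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(executions)
sizes = np.bincount(labels)
suspect_cluster = sizes.argmin()           # smallest cluster = likely failures
suspects = np.flatnonzero(labels == suspect_cluster)
print(f"{len(suspects)} executions flagged for manual inspection: {suspects}")
```

Only the flagged executions would then need a human verdict, which is the source of the reduction in manual checking effort reported in the abstract.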

Algorithmic debugging for complex lazy functional programs

Faddegon, Maarten January 2017
An algorithmic debugger finds defects in programs by systematic search. It relies on the programmer to direct the search by answering a series of yes/no questions about the correctness of specific function applications and their results. Existing algorithmic debuggers for a lazy functional language work well for small, simple programs but cannot be used to locate defects in complex programs, for two reasons. Firstly, to collect the information required for algorithmic debugging, existing debuggers use different but complex implementations. These debuggers are therefore hard to maintain and do not support all the latest language features. As a consequence, programs with unsupported language features cannot be debugged. Also, inclusion of a library using unsupported language features can make algorithmic debugging unusable even when the programmer is not interested in debugging the library. Secondly, algorithmic debugging breaks down when the size or number of questions is too great for the programmer to handle. This is a pity because, even though algorithmic debugging is a promising method for locating defects, many real-world programs are too complex for the method to be usable. I claim that the techniques in this thesis make algorithmic debugging usable for much more complex lazy functional programs. I present a novel method for collecting the information required for algorithmically debugging a lazy functional program. The method is non-invasive, uses program annotations in suspected modules only and has a simple implementation. My method supports all of Haskell, including laziness, higher-order functions and exceptions. Future language extensions can be supported without changes, or with minimal changes, to the implementation of the debugger. With my method the programmer can focus on untrusted code; the many trusted libraries are unaffected. This makes traces, and hence the number of questions that need to be answered, more manageable. I give a type-generic definition to support custom types defined by the programmer. Furthermore, I propose a method that re-uses properties to automatically answer some of the questions arising during algorithmic debugging, and to replace others by simpler questions. Properties may already be present in the code for testing; the programmer can also encode a specification or reference implementation as a property, or add a new property in response to a statement they are asked to judge.
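The core search that an algorithmic debugger performs can be sketched in a few lines: descend a recorded computation tree, ask whether each application produced the right result, and report a node that is wrong while all of its children are right. The tree, the judge function and the insertion-sort example below are hypothetical stand-ins, not the thesis's Haskell implementation:

```python
# Minimal sketch of algorithmic debugging: walk a computation tree and ask
# the programmer whether each recorded call produced the right result.
# The tree below is hypothetical; a real debugger would record it from a run.
class Node:
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

# Stand-in for the programmer's yes/no answers: these computations are wrong.
WRONG = {("sort [3,1,2]", "[1,2]"), ("insert 3 [1,2]", "[1,2]")}

def judge(node):
    return (node.call, node.result) not in WRONG

def find_defect(node):
    """A node whose result is wrong but whose children are all judged
    correct is where the defect lives."""
    if judge(node):
        return None                       # this computation was correct
    for child in node.children:
        culprit = find_defect(child)
        if culprit is not None:
            return culprit
    return node                           # wrong result, correct children

tree = Node("sort [3,1,2]", "[1,2]", [
    Node("insert 1 [2]", "[1,2]"),
    Node("insert 3 [1,2]", "[1,2]"),      # the defective application
])
defect = find_defect(tree)
print(f"defect located in: {defect.call} = {defect.result}")
```

The thesis's contributions address the two obstacles around this loop: recording the tree without an invasive implementation, and keeping the number of questions (calls to judge) small enough for a programmer to answer.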

Testing from structured algebraic specifications : the oracle problem

Machado, Patricia D. L. January 2000
Work in the area of specification-based testing has pointed out that testing can be effectively used to verify programs against formal specifications. The aim is to derive test information from formal specifications so that testing can be rigorously applied whenever full formal verification is not cost-effective. However, there are still several obstacles to be overcome in order to establish testing as a standard in formal frameworks. Accurate interpretation of test results is an extremely critical one. This thesis is concerned with testing programs against structured algebraic specifications where axioms are expressed in first-order logic with equations, the usual connectives and quantifiers. The main issue investigated is the so-called oracle problem, that is, whether a decision procedure can be defined for interpreting the results of tests according to a formal specification. In this context, testing consists in checking whether specification axioms are satisfied by programs. Consequently, tests exercise operations referred to by the axioms and oracles evaluate the axioms according to the results produced by the tests. The oracle problem for flat (unstructured) specifications often reduces to the problem of comparing two values of a non-observable sort, namely the equality problem, and also how to deal with quantifiers which may demand infinite test sets. Equality on non-observable sorts is interpreted up to behavioural equivalence with observational equivalence as an important special case. However, a procedure for implementing such a behavioural equality may be hard to define or even impossible. In this thesis, a solution to the oracle problem for flat specifications is presented which tackles the equality problem by using a pair of approximate equalities, one finer than behavioural equality and one coarser, and taking the syntactic position of quantifiers in formulae into account. Additionally, when structured specifications are considered, the oracle problem can be harder. The reason is that specifications may be composed of parts over different signatures, and the structure must be taken into account in order to interpret test results according to specification axioms. Also, an implementation of hidden (non-exported) symbols may be required in order to check axioms which refer to them. Two solutions to the oracle problem for structured specifications are presented in this thesis based on a compositional and a non-compositional style of testing, namely structured testing and flat testing respectively. Structured testing handles the oracle problem more effectively than flat testing and under fewer assumptions. Furthermore, testing from structured specifications may require an approach which lies in between flat and structured testing. Therefore, based on normalisation of ordinary specifications, three normal forms are presented for defining a more practical and combined approach to testing and also coping more effectively with the oracle problem. The use of normal forms gives rise to a style of testing called semi-structured testing where some parts of the specification are replaced by normal forms and the result is checked using structured testing. Testing from normal forms can be very convenient whenever the original specification is too complex or oracles cannot be defined from it.
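A small sketch of the approximate-equalities idea (illustrative only; the stack operations, observers and verdict names are chosen for the example, not taken from the thesis): a finer check compares representations, a coarser check compares a finite set of observations, and the oracle reports a definite verdict only when the two agree.

```python
# Oracle sketch for a non-observable sort (a stack held as a Python list),
# using a pair of approximate equalities that bracket behavioural equality.
def push(s, x): return s + [x]
def pop(s):     return s[:-1]
def top(s):     return s[-1] if s else None
def size(s):    return len(s)

def fine_eq(s1, s2):
    """Finer than behavioural equality: identical representations."""
    return s1 == s2

def coarse_eq(s1, s2):
    """Coarser: equal under a finite set of observers only."""
    return top(s1) == top(s2) and size(s1) == size(s2)

def oracle(s1, s2):
    """Definite verdict when the approximations agree, inconclusive otherwise."""
    if fine_eq(s1, s2):       return "pass"
    if not coarse_eq(s1, s2): return "fail"
    return "inconclusive"

# Check the axiom pop(push(s, x)) = s on a few generated test cases.
for s, x in ([[], 1], [[1, 2], 3], [[5], 5]):
    print(s, x, "->", oracle(pop(push(s, x)), s))

print(oracle([1, 2], [9, 2]))   # "inconclusive": the observers cannot tell them apart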

Object-Oriented Specification: Analysable Patterns & Change Management

Heaven, William John Douglas January 2008
Formal techniques have been shown to be useful in the development of correct software. But the level of expertise required of practitioners of these techniques prohibits their widespread adoption. Attempts to integrate formal specification techniques with modern, often agile, software development practices are becoming more successful. However, these new techniques do not yet have development environments that facilitate the construction of consistent specifications for the non-expert developer. Many of the tools that support the analysis of specifications expressed in these languages give misleading feedback in cases where the specification is inconsistent. Further, logical changes made to a specification typically invalidate the results of previous analyses. This thesis is therefore concerned with the development of an environment to facilitate the construction of correct specifications. Analysis patterns are identified that guide a non-expert specifier through some of the logical pitfalls of analysing a program specification. A change management framework for program specifications is described, which minimises the number of SAT calls needed to recheck the consistency of an edited specification. A lightweight program specification language, called Loy, is defined, which can be automatically analysed by the Alloy Analyzer, through a formal encoding of Loy into Alloy. A prototype tool is presented that automates the encoding and implements the analysis patterns and change management framework in the context of Loy specifications.
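The change-management idea of avoiding redundant SAT calls can be illustrated by caching analysis verdicts per specification fragment, keyed by a content hash, so an edit triggers re-analysis of only the fragments it touched. The sketch below is a generic illustration; the sat_check stand-in and the fragment texts are hypothetical, and this is not the Alloy Analyzer's API:

```python
import hashlib

def fingerprint(fragment: str) -> str:
    return hashlib.sha256(fragment.encode()).hexdigest()

class IncrementalChecker:
    """Re-run the (expensive) consistency check only for fragments whose
    text changed since the last analysis. A stand-in sat_check is used here;
    a real tool would call out to a SAT-based analyser."""
    def __init__(self, sat_check):
        self.sat_check = sat_check
        self.cache = {}                    # fingerprint -> verdict

    def check(self, fragments):
        verdicts, calls = {}, 0
        for name, text in fragments.items():
            key = fingerprint(text)
            if key not in self.cache:
                self.cache[key] = self.sat_check(text)
                calls += 1
            verdicts[name] = self.cache[key]
        return verdicts, calls

fake_sat = lambda text: "contradiction" not in text      # illustrative only
checker = IncrementalChecker(fake_sat)
spec = {"Account": "sig Account ...", "Invariant": "no contradiction here"}
print(checker.check(spec))                                # 2 checks performed
spec["Invariant"] = "a contradiction was introduced"
print(checker.check(spec))                                # only 1 new check
```

A real change-management framework would additionally track logical dependencies between fragments, since an edit to one fragment can invalidate the analysis of others that refer to it; content hashing alone only captures direct textual change.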

Modularising change management in dynamic language aspect-oriented programming frameworks to reduce fragility

Waters, Robert William January 2012
Aspect-oriented programming (AOP) is a way of specifying crosscutting concerns (program features) as aspects: a modularisation of the concern and of how it crosscuts the rest of a program. In most AOP facilities there is an implicit assumption that the functional (non-crosscutting) concerns that these crosscutting concerns interact with (the base) do not change. This assumption turns out to be incorrect in the case of script-based dynamic programming languages, which experience change throughout the execution of a program. Aspects specify, in selectors, which program points (join points) in the base they are bound to. When these join points change, there is a risk of an aspect under- or over-matching join points, termed fragility. Aspects may need to be transformed, loaded, unloaded and otherwise dynamically adapted to changes in join point presence. The state of the art provides various ways of addressing the problems of fragility, though these are neither integrated nor applicable in their current form to dynamic languages. To overcome these problems, this thesis proposes an integrated solution using two novel modularisations and three supporting features that address fragility via change management. An adaption plan is a structured, trigger-based module containing internal modules representing choices with associated rules: the choices specify the acceptable triggers and the rules are a structured, typed, parameterised tree specifying a response. Delegation points modularise change management concerns that crosscut selectors. The supporting features are reflection support, metadata and change notification. Reflection allows the state of a program to be reasoned about and changed at runtime, supporting change management decisions. Metadata is an established means of reducing fragility by reducing dependency on the precise structure of a program. Change notification provides a unified mechanism for identifying change, which reduces the complexity of change management in dynamic languages.
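Fragility of this kind is easy to reproduce in a dynamic language such as Python: an aspect woven over all methods matching a selector silently misses a method added to the base after weaving. The weave and logged helpers and the Ticket class below are hypothetical illustrations, not the thesis's framework:

```python
import functools

def logged(fn):
    """Advice: log entry to each advised method."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"-> {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

def weave(cls, selector):
    """Apply the advice to every *current* method whose name matches."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and selector(name):
            setattr(cls, name, logged(attr))

class Ticket:
    def reserve(self): return "reserved"

weave(Ticket, selector=lambda name: name.startswith("re"))
Ticket.refund = lambda self: "refunded"      # the base changes AFTER weaving

t = Ticket()
t.reserve()      # advised: prints "-> reserve"
t.refund()       # silently unadvised: the selector has under-matched
```

A change-notification mechanism of the kind the thesis proposes would report the addition of refund, allowing an adaption plan to decide whether the aspect should be re-woven.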

An empirical assessment of model driven development in industry

Hutchinson, John Edward January 2011
Model driven development (MDD) is one of a number of proposals for software development that promise important benefits. As with any "new" approach, it would be expected that there would be proponents of the approach and those who are opposed to it. MDD has been surprisingly contentious, though, perhaps because it challenges the code-centric model of software development (and therefore the natural approach of those who develop software). Yet stories abound about significant successes resulting from using MDD in industry, while, at the same time, detractors claim that MDD is inherently wrong, an abstraction too far: the cost of raising the level of abstraction from code to models can only ever result in an increase in costs or effort that can never be recovered. This thesis reports on work that has attempted to uncover the truth in this area in a way that has never been applied on a large scale to MDD-based software development in industry. By going to industry practitioners, via a widely completed questionnaire and a number of in-depth interviews, it reports on what real software developers are doing in real companies. The results should lay to rest the belief that "MDD doesn't work": apparently, it does. Companies around the world are using MDD in a variety of settings and are reporting significant benefits from its use. However, there is a subtle balance of potentially positive and potentially negative impacts of MDD use which successful users in industry prove able to manage, so that they can take advantage of the benefits rather than being dogged by the negatives.

The engineering of an object-oriented software development methodology

Ramsin, Raman January 2006
No description available.
