  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

The automated translation of integrated formal specifications into concurrent programs

Yang, Letu January 2008 (has links)
The PROB model checker [LB03] provides tool support for an integrated formal specification approach, which combines the state-based B specification language [Abr96] with the event-based process algebra CSP [Hoa78]. The JCSP package [WM00b] provides a concurrent Java implementation of CSP/occam. In this thesis, we present a development strategy for implementing such combined specifications as concurrent Java programs. The combined semantics in PROB is flexible and well suited to model checking, but it is too abstract to be implemented directly in a programming language. Although the JCSP package gave us significant inspiration for implementing formal specifications in Java, we argue that it is not suitable for directly implementing the combined semantics in PROB. We therefore began by defining a restricted semantics derived from the original PROB semantics, and then developed a new Java package, JCSProB, that implements this restricted semantics. The JCSProB package implements multi-way synchronization with choice for combined B and CSP events, as well as a new multi-threading mechanism at the process level. A GUI sub-package supports the construction of graphical programs for JCSProB, allowing user interaction and run-time assertion checking. A set of translation rules relates the integrated formal models to Java and JCSProB, and we implement these rules in an automated translation tool that generates Java programs from such models. To demonstrate and exercise the tool, several B/CSP models, varying both in syntactic structure and in behavioural properties, are translated by the tool. The models manifest the presence and absence of various safety, deadlock, and fairness properties, and the generated Java code is shown to faithfully reproduce them. Run-time safety and fairness assertion checking is also demonstrated. We also experimented with composition and decomposition of several combined models, as well as of the Java programs generated from them. Composition techniques can help the user to develop large distributed systems, and can significantly improve the scalability of developing combined models in PROB.
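As a hedged sketch of the kind of construct this abstract describes (the class and method names below are hypothetical and are not the JCSProB API), a multi-way synchronization point can be modelled in plain Java as an event on which every registered process must arrive before any of them may proceed:

```java
import java.util.concurrent.CyclicBarrier;

// Hypothetical sketch: a B/CSP-style event on which a fixed set of
// processes must all synchronize before any of them may proceed.
// JCSProB additionally supports *choice* between such events, which
// this minimal illustration does not attempt to reproduce.
class MultiWayEvent {
    private final CyclicBarrier barrier;

    MultiWayEvent(String name, int participants) {
        // The barrier action runs once per synchronization, playing the
        // role of the B operation bound to the event.
        this.barrier = new CyclicBarrier(participants,
                () -> System.out.println("event " + name + " fired"));
    }

    // Each participating process calls engage(); the call blocks until
    // every participant has arrived (multi-way synchronization).
    void engage() throws Exception {
        barrier.await();
    }
}

public class Demo {
    public static void main(String[] args) {
        MultiWayEvent transfer = new MultiWayEvent("transfer", 3);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("process " + id + " ready");
                    transfer.engage();
                    System.out.println("process " + id + " continues");
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```

Supporting choice between several such events, as JCSProB does, requires a more elaborate protocol than a simple barrier; this sketch only shows the multi-way rendezvous itself.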
352

A competency model for semi-automatic question generation in adaptive assessment

Sitthisak, Onjira January 2009 (has links)
The concept of competency is increasingly important because it conceptualises intended learning outcomes within the process of acquiring and updating knowledge. A competency model is critical to successfully managing assessment and to achieving the goals of resource sharing, collaboration, and automation in support of learning. Existing e-learning competency standards, such as the IMS Reusable Definition of Competency or Educational Objective (IMS RDCEO) specification and the HR-XML standard, cannot accommodate complicated competencies, link competencies adequately, support comparisons of competency data between different communities, or support tracking of the learner's knowledge state. Recently, the main goal of assessment has shifted away from content-based evaluation towards evaluation based on intended learning outcomes; the focus is now on identifying learned capability rather than learned content, a change that also calls for new methods of assessment. This thesis presents a system that demonstrates adaptive assessment and automatic generation of questions from a competency model, based on a sound pedagogical and technological approach. The system’s design and implementation involve an ontological database that represents the intended learning outcome to be assessed across a number of dimensions, including level of cognitive ability and subject-matter content. The system generates the list of questions and tests that are possible from a given learning outcome, which may then be used to test for understanding and so determine the degree to which learners actually acquire the desired knowledge. Experiments were carried out to demonstrate and evaluate the generation of assessments and the sequencing of generated assessments from a competency data model, and to compare a variety of adaptive sequences. For each experiment, the methods and results are described. The way in which the system has been designed and evaluated is discussed, along with its educational benefits.
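A hedged illustration of the underlying idea (the templates, class names and concepts below are invented for this sketch and do not reflect the thesis's ontology or code): question generation can be viewed as crossing a cognitive-ability level with a subject-matter concept drawn from the intended learning outcome.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical illustration: generate candidate question stems by pairing
// each cognitive level of an intended learning outcome with each subject
// concept it covers. The real system draws these dimensions from an
// ontological database rather than from hard-coded maps.
public class QuestionSketch {
    // Simple stem templates keyed by cognitive level (Bloom-style labels).
    static final Map<String, String> TEMPLATES = Map.of(
            "remember", "Define %s.",
            "understand", "Explain, in your own words, %s.",
            "apply", "Give a worked example that uses %s.",
            "analyse", "Compare and contrast %s with a related concept.");

    static List<String> generate(List<String> levels, List<String> concepts) {
        List<String> questions = new ArrayList<>();
        for (String level : levels) {
            String template = TEMPLATES.get(level);
            if (template == null) continue;          // unknown level: skip
            for (String concept : concepts) {
                questions.add(String.format(template, concept));
            }
        }
        return questions;
    }

    public static void main(String[] args) {
        // An intended learning outcome covering two concepts at two levels.
        List<String> qs = generate(
                List.of("remember", "apply"),
                List.of("normal forms in relational databases",
                        "functional dependencies"));
        qs.forEach(System.out::println);
    }
}
```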
353

Adaptive Regression Testing Strategy: An Empirical Study

Arafeen, Md. Junaid January 2012 (has links)
When software systems evolve, different versions can involve different amounts of code modification. These differences can affect the costs and benefits of regression testing techniques, so there may be no single regression testing technique that is the most cost-effective on every version. To date, many regression testing techniques have been proposed, but no research has addressed the problem of helping practitioners systematically choose appropriate techniques for new versions as systems evolve. To address this problem, we propose adaptive regression testing (ART) strategies that attempt to identify the regression testing technique that will be the most cost-effective for each regression testing session, taking into account the organization's situation and testing environment. To assess our approach, we conducted an experiment focusing on test case prioritization techniques. Our results show that prioritization techniques selected by our approach can be more cost-effective than those used by the control approaches.
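One concrete prioritization technique of the kind an ART strategy might choose between is greedy "additional coverage" ordering. The sketch below is a generic illustration of that classic technique, not the specific techniques or cost models studied in the thesis.

```java
import java.util.*;

// Illustrative sketch of "additional" (greedy) test-case prioritization:
// repeatedly pick the test that covers the most not-yet-covered statements.
public class AdditionalPrioritization {
    static List<String> prioritize(Map<String, Set<Integer>> coverage) {
        List<String> order = new ArrayList<>();
        Set<Integer> covered = new HashSet<>();
        Set<String> remaining = new HashSet<>(coverage.keySet());
        while (!remaining.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (String test : remaining) {
                // Statements this test would add to what is already covered.
                Set<Integer> gain = new HashSet<>(coverage.get(test));
                gain.removeAll(covered);
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = test;
                }
            }
            order.add(best);
            covered.addAll(coverage.get(best));
            remaining.remove(best);
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> cov = Map.of(
                "t1", Set.of(1, 2, 3),
                "t2", Set.of(3, 4),
                "t3", Set.of(5));
        System.out.println(prioritize(cov)); // e.g. [t1, t2, t3] (ties broken arbitrarily)
    }
}
```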
354

The adoption of cloud-based Software as a Service (SaaS): a descriptive and empirical study of South African SMEs

Maserumule, Mabuke Dorcus 31 October 2019 (has links)
A research report submitted to the Faculty of Commerce, Law and Management, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Commerce (MCom) in the field of Information Systems, 2019 / The purpose of this study was to describe the state of cloud-based Software as a Service (SaaS) adoption among South African SMEs and to investigate the factors affecting their adoption of SaaS solutions. The technological, organisational and environmental (TOE) factors influencing cloud-based software adoption within SMEs were identified through a review of existing TOE literature; institutional theory and diffusion of innovation theory were also used to underpin the study. A research model hypothesising the effect of the identified TOE factors on the adoption of cloud-based software was developed and tested. Specifically, the factors hypothesised to influence SaaS adoption were compatibility, security concern, top management support and coercive pressures. The study employed a relational, quantitative research approach: a structured questionnaire was developed and administered as an online survey, and data was collected from a sample of 134 small and medium enterprises (SMEs) that provided usable responses. The collected data was used first to describe the state of adoption, and second to examine, through multiple regression, the extent to which the various TOE factors affect adoption. It was found that compatibility, security concern, top management support and coercive pressures influence adoption, while trust, cost, relative advantage, complexity, geographic dispersion, and normative and mimetic pressures did not have significant effects. This study adds value to the Information Systems literature as it uses the TOE framework alongside institutional theory and diffusion of innovation theory to explain the adoption of cloud-based software solutions by South African SMEs. It provides information on the current state of cloud-based software adoption within SMEs in South Africa, and organisations can learn about the factors contributing to this adoption and about the technological, organisational and environmental considerations that successful adoption requires. The results assist organisations wanting to implement cloud-based software solutions. Specifically, they provide a benchmark for SMEs on where their organisations stand compared to others with regard to SaaS adoption (for example, whether they are lagging behind, on par, or leading as innovators), which could inform their IT procurement decisions, e.g. whether cloud-based software solutions are strategic and necessary to keep abreast of peers and competitors. / PH2020
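A hedged sketch of the kind of multiple-regression model implied by the hypothesised factors (the coefficient names are illustrative only; the exact operationalisation of each construct follows the thesis's questionnaire, not this equation):

```latex
\text{SaaS adoption} = \beta_0
  + \beta_1\,\text{Compatibility}
  + \beta_2\,\text{Security concern}
  + \beta_3\,\text{Top management support}
  + \beta_4\,\text{Coercive pressure}
  + \varepsilon
```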
355

A methodology for the collection and evaluation of software error data

Fung, Casey Kin-Chee January 1985 (has links)
No description available.
356

Making Software More Reliable by Uncovering Hidden Dependencies

Bell, Jonathan Schaffer January 2016 (has links)
As software grows in size and complexity, it also becomes more interdependent. Multiple internal components often share state and data. Whether these dependencies are intentional or not, we have found that their mismanagement often poses several challenges to testing. This thesis seeks to make it easier to create reliable software by making testing more efficient and more effective through explicit knowledge of these hidden dependencies. The first problem that this thesis addresses, reducing testing time, directly impacts the day-to-day work of every software developer. The frequency with which code can be built (compiled, tested, and packaged) directly impacts the productivity of developers: longer build times mean a longer wait before determining whether a change to the application being built was successful. We have discovered that in some languages, such as Java, the vast majority of build time is spent running tests. It is therefore incredibly important to focus on approaches to accelerating testing, while simultaneously making sure that we do not inadvertently cause tests to fail erratically (i.e. become flaky). Typical techniques for accelerating tests (like running only a subset of them, or running them in parallel) often can't be applied soundly, since there may be hidden dependencies between tests. While we might expect each test to be independent (i.e. that a test's outcome isn't influenced by the execution of another test), we and others have found many examples in real software projects where tests truly have these dependencies: some tests require others to run first, or else their outcome will change. Previous work has shown that these dependencies are often complicated, unintentional, and hidden from developers. We have built several systems, VMVM and ElectricTest, that detect different sorts of dependencies between tests and use that information to soundly reduce testing time by several orders of magnitude. In our first approach, Unit Test Virtualization, we reduce the overhead of isolating each unit test with a lightweight, virtualization-like container, preventing these dependencies from manifesting. Our realization of Unit Test Virtualization for Java, VMVM, eliminates the need to run each test in its own process, reducing test suite execution time by an average of 62% in our evaluation (compared to execution time when running each test in its own process). However, not all test suites isolate their tests: in some, dependencies are allowed to occur between tests. In these cases, common test acceleration techniques such as test selection or test parallelization are unsound in the absence of dependency information. When dependencies go unnoticed, tests can unexpectedly fail when executed out of order, causing unreliable builds. Our second approach, ElectricTest, soundly identifies data dependencies between test cases, allowing for sound test acceleration. To enable broader use of general dependency information for testing and other analyses, we created Phosphor, the first and only portable and performant dynamic taint tracking system for the JVM. Dynamic taint tracking is a form of data flow analysis that applies labels to variables and tracks all other variables derived from those labelled variables, propagating the labels. Taint tracking has many applications to software engineering and software testing, and in addition to our own work, researchers across the world are using Phosphor to build their own systems.
To make testing more effective, we also created Pebbles, which makes it easy for developers to specify data-related test oracles on mobile devices by thinking in terms of high-level objects such as emails, notes or pictures.
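A conceptual sketch of dynamic taint tracking follows; note that this is not Phosphor's API (Phosphor propagates labels transparently via JVM bytecode instrumentation), and the wrapper class below is a hypothetical model of how labels flow through derived values.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration of taint propagation: every value carries a set
// of labels, and any value derived from labelled operands inherits the union
// of their labels. Phosphor achieves this transparently for all JVM code;
// this wrapper only models the idea.
final class Tainted {
    final int value;
    final Set<String> labels;

    Tainted(int value, Set<String> labels) {
        this.value = value;
        this.labels = labels;
    }

    static Tainted source(int value, String label) {
        return new Tainted(value, Set.of(label));
    }

    // Derived values propagate the union of their operands' labels.
    Tainted plus(Tainted other) {
        Set<String> merged = new HashSet<>(labels);
        merged.addAll(other.labels);
        return new Tainted(value + other.value, merged);
    }
}

public class TaintDemo {
    public static void main(String[] args) {
        Tainted a = Tainted.source(40, "test:testA"); // data written by testA
        Tainted b = Tainted.source(2, "test:testB");  // data written by testB
        Tainted c = a.plus(b);
        // c carries both labels, exposing a hidden dependency between tests.
        System.out.println(c.value + " tainted by " + c.labels);
    }
}
```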
357

Exploring the impact of test suite granularity and test grouping technique on the cost-effectiveness of regression testing

Qiu, Xuemei 05 December 2002 (has links)
Regression testing is an expensive testing process used to validate changes made to previously tested software. Different regression testing techniques can have different impacts on the cost-effectiveness of testing, and this cost-effectiveness can also vary with characteristics of the test suites used. One such characteristic, test suite granularity, reflects the way in which test cases are organized within a test suite; another, test grouping technique, involves the way in which test inputs are grouped into test cases. Various cost-benefit tradeoffs have been attributed to choices of test suite granularity and test grouping technique, but little research has formally examined these tradeoffs. In this thesis, we conducted several controlled experiments examining the effects of test suite granularity and test grouping technique on the costs and benefits of several regression testing methodologies across ten releases of a non-trivial software system, empire. Our results expose essential tradeoffs to consider when designing test suites for use in regression testing of evolving systems. / Graduation date: 2003
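A hedged illustration of the granularity characteristic (the inputs and grouping below are invented; the thesis's empire test suites are not shown): with a fine granularity each input becomes its own test case, while a coarse grouping folds several inputs into one larger test case.

```java
import java.util.List;

// Illustrative only: the same three inputs organised at two granularities.
public class GranularityDemo {
    static boolean runInput(String input) {
        // Stand-in for driving the system under test with one input.
        return !input.isEmpty();
    }

    // Fine-grained: each test case exercises exactly one input.
    static boolean fineGrainedTest(String input) {
        return runInput(input);
    }

    // Coarse-grained: one test case exercises the whole group of inputs,
    // failing if any input fails (individual failures are harder to localise).
    static boolean coarseGrainedTest(List<String> inputs) {
        return inputs.stream().allMatch(GranularityDemo::runInput);
    }

    public static void main(String[] args) {
        List<String> inputs = List.of("move north", "attack", "end turn");
        inputs.forEach(i -> System.out.println(i + ": " + fineGrainedTest(i)));
        System.out.println("grouped: " + coarseGrainedTest(inputs));
    }
}
```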
358

Streamlined and prioritized hierarchical relations: a technique for improving the effectiveness of the classification-tree methodology

Kwok, Wing-hong., 郭永康. January 2001 (has links)
published_or_final_version / abstract / toc / Computer Science and Information Systems / Master / Master of Philosophy
359

Applying design metrics to large-scale telecommunications software

Pipkin, Jeffrey A. January 1996 (has links)
The design metrics developed by the Design Metrics team at Ball State University are a suite of metrics that can be applied during the design phase of software development; their benefit lies in the fact that they can be applied early in the software development cycle. The suite includes the external design metric De, the internal design metric Di, the overall design metric D(G), the design balance metric DB, and the design connectivity metric DC. The suite of design metrics has been applied to large-scale industrial software as well as to student projects. Bell Communications Research of New Jersey has made available a software system that can be used to apply design metrics to large-scale telecommunications software. This thesis presents the suite of design metrics and attempts to determine whether the characteristics of telecommunications software are accurately reflected in the conventions used to compute the metrics. / Department of Computer Science
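A hypothetical simplification for illustration only (the actual definitions and weights of De, Di, and D(G) come from the Design Metrics team's publications and are not reproduced here): an external-style design score for a module can be computed from counts available at design time, such as its data inflows, outflows, fan-in, and fan-out.

```java
// Hypothetical simplification: score a module's external design complexity
// from design-time counts. The real De uses the Design Metrics team's
// published weights and terms, which are not reproduced here.
public class DesignMetricSketch {
    static double externalScore(int dataIn, int dataOut, int fanIn, int fanOut) {
        // Modules that both exchange a lot of data and sit on many call
        // paths are flagged as candidates for closer review.
        return (dataIn * dataOut) + (fanIn * fanOut);
    }

    public static void main(String[] args) {
        System.out.println("module A: " + externalScore(2, 1, 3, 2));  // 8.0
        System.out.println("module B: " + externalScore(6, 5, 10, 7)); // 100.0
    }
}
```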
360

Error and occurrence analysis of Stanfins redesign at Computer Sciences Corporation

Khan, Irshad A. January 1990 (has links)
At Ball State University, Dr. Wayne Zage and Professor Dolores Zage are working on a Design Metrics project to develop a metrics approach for analyzing software design. The purpose of this thesis is to test the hypotheses of this metric by calculating the external design component De, and to show the correlation between errors and stress points in the design phase for a large Ada software system professionally developed at Computer Sciences Corporation. From these studies we can tentatively conclude that De does indicate error-prone modules. Since D(G) comprises an internal and an external component, it is also necessary to evaluate Di on a large project to support this hypothesis. Looking at external complexity alone, the metric does a relatively good job of pointing out high-error modules: by viewing only 10% of the modules, we found 33% of the errors. Comparing the results of STANFINS-R with those of the BSU projects, the BSU projects did better at finding errors (53% versus 33% for STANFINS-R). However, in the STANFINS-R project we had a better success rate in identifying error-prone modules: of the modules highlighted, 72% contained errors. Thus, if we loosened the criteria for selecting error-prone modules, we might have captured a larger percentage of the errors. / Department of Computer Science
