151

A value and debt aware framework for evaluating compliance in software systems

Ojameruaye, Bendra January 2016 (has links)
Today's software systems need to be aligned with relevant laws and other prevailing regulations to ensure compliance. Compliance refers to the ability of a system to satisfy its functional and quality goals to levels that are acceptable to predefined standards, guidelines, principles, legislation or other norms within the application domain. Addressing compliance requirements at an early stage of software development is vital for successful development, as it saves the time, cost, resources and effort of repairing software defects. We argue that the management of compliance and compliance requirements is ultimately an investment activity that requires value-driven decision-making. The work presented in this thesis revolves around improving decision support for compliance by making such decisions value, risk and debt aware. Specifically, this thesis presents an economics-driven approach, which combines goal-oriented requirements engineering with portfolio-based thinking and technical debt analysis to enhance compliance-related decisions at design time. The approach is value-driven and systematic; it builds on influential work on portfolio thinking and technical debt to make the links between compliance requirements, risks, value and debt explicit to software engineers. The approach is evaluated with two case studies that illustrate its applicability and effectiveness.
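The portfolio-style, debt-aware reasoning described above can be sketched in a few lines of Python. The scoring model, weights and requirement names below are illustrative assumptions, not the thesis's actual framework:

```python
# Illustrative sketch of portfolio-style prioritisation of compliance
# requirements; the scoring model and numbers are assumptions for the
# sake of the example, not the thesis's own model.
from dataclasses import dataclass

@dataclass
class ComplianceRequirement:
    name: str
    value: float   # estimated value of satisfying the requirement (0..1)
    risk: float    # probability of a compliance breach if deferred (0..1)
    debt: float    # estimated retrofitting cost if deferred (person-days)

def priority(req: ComplianceRequirement, interest: float = 0.1) -> float:
    # Deferred compliance accrues "interest" on its technical debt, so the
    # expected carrying cost grows with both risk and debt.
    expected_cost = req.risk * req.debt * (1 + interest)
    return req.value + expected_cost   # higher score = address earlier

portfolio = [
    ComplianceRequirement("audit-logging", value=0.6, risk=0.8, debt=20),
    ComplianceRequirement("data-retention", value=0.4, risk=0.3, debt=5),
]
for req in sorted(portfolio, key=priority, reverse=True):
    print(f"{req.name}: {priority(req):.2f}")
```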
152

Automated reasoning for reflective programs

Horsfall, Benjamin January 2014 (has links)
Reflective programming allows one to construct programs that manipulate or examine their own behaviour or structure at runtime. One of its benefits is the ability to create generic code that adapts to being incorporated into different larger programs, without modification to suit each concrete setting. Due to the runtime nature of reflection, static verification is difficult and has been largely ignored or only weakly supported. This work focusses on supporting verification for cases where generic code that uses reflection is used in a “closed” program whose structure is known in advance. This thesis first describes extensions to a verification system and semi-automated tool developed to reason about heap-manipulating programs that may store executable code on the heap. These extensions enable the tool to support a wider range of programs because stronger specifications can be provided. The system's underlying logic is an extension of separation logic that includes nested Hoare triples describing the behaviour of stored code. Using this verification tool, with the crucial enhancements made in this work, a specified reflective library has been created. Rather than developing new proof rules for the reflective operations, the approach stores metadata on the heap so that the reflective library can be implemented using primitive commands and then specified and verified. The supported reflective functions characterise a subset of Java's reflection library, and the specifications guarantee both memory safety and a degree of functional correctness. To demonstrate the application of the developed solution, two case studies are carried out, each of which focuses on different reflection features. The contribution to knowledge is a first look at how to support semi-automated static verification of reflective programs with meaningful specifications.
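The core idea, implementing reflection over explicit heap-resident metadata so that reflective calls become ordinary, specifiable heap operations, can be illustrated with a toy sketch (Python standing in for the thesis's Java setting; all names are hypothetical):

```python
# Toy model of the approach: class metadata lives in an explicit table
# ("on the heap"), and a reflective call is an ordinary lookup followed by
# a direct call, so it can be specified and verified like any other heap
# operation rather than needing dedicated proof rules.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def translate(self, dx, dy):
        return Point(self.x + dx, self.y + dy)

# Explicit metadata table, standing in for heap-resident reflection data.
METHOD_TABLE = {
    ("Point", "translate"): Point.translate,
}

def reflective_invoke(obj, method_name, *args):
    # Primitive-command implementation of a Method.invoke-style call:
    # a table lookup, then a direct call.
    fn = METHOD_TABLE[(type(obj).__name__, method_name)]
    return fn(obj, *args)

p = reflective_invoke(Point(1, 2), "translate", 3, 4)
print(p.x, p.y)   # 4 6
```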
153

Tablet computers and technological practices within and beyond the laboratory

Burns, Ryan Patrick January 2015 (has links)
In this thesis I examine emergent technological practices relating to tablet computers in scientific research laboratories. I ask four main questions: To what extent can tablets be considered scientific instruments? How do tablets help to construct technoscientific imaginaries? What role do tablets play in the construction of technoscientific subjectivities? And can tablets, positioned as popular everyday computing devices, be considered in terms of expertise in the context of laboratory science? To answer these questions, I present research that examines the situated practices of scientists using tablet computers. I use textual analysis to examine the marketing discourses surrounding laboratory-specific tablet apps and how their material structure defines scientific community and communication. I present ethnographic research into the way tablets are being introduced as part of a new teaching laboratory in a large UK university, focusing on how institutional power shapes the definition of the tablet. A second ethnographic case study addresses how two chemists define their own scientific subjectivity by constructing the tablet as a futuristic technology. In a third, large-scale ethnographic case study, I consider how tablets can be used in practices of inclusion in, and exclusion from, sites of scientific knowledge. I draw on literature from media and cultural studies and from science and technology studies, arguing that the two fields intersect in ways that can be productive for research in both. This serves as a contribution to knowledge, demonstrating how research into identity, politics and technologies can benefit from a focus on materiality drawn from the two disciplines. I contribute to knowledge in both fields by developing two key concepts, ‘affordance ambiguity’ and ‘tablet imaginary’. These concepts can be applied in the analysis of technology use to better understand, first, how technologies are made meaningful for users and, second, how this individual meaning-making affects broader cultural trends and understandings of technologies.
154

Constructing runtime models with bigraphs to address ubiquitous computing service composition volatility

Krishna, Renan January 2015 (has links)
In this thesis, we explore the appropriateness of the language abstractions provided by Bigraphs to construct a model at runtime to tackle the problem of volatility in a service composition running on a mobile device. Our contributions to knowledge are as follows: 1) We have shown that Bigraphs (Milner, 2009) are suitable for expressing models at runtime. 2) We have offered Bigraph language abstractions as an appropriate solution to some of the research problems posed by the models at runtime community (Aßmann et al., 2012). 3) We have discussed the general lessons learnt from using Bigraphs for a practical application such as a model at runtime. 4) We have discussed the general lessons learnt from our experiences of designing models at runtime. 5) We have implemented the model at runtime using the BPL Tool (ITU, 2011) and have experimentally studied the response times of our Bigraphical model. We have suggested appropriate enhancements for the tool based on our experiences. We present techniques to parameterize the reaction rules so that the matching algorithm of the BPL Tool returns a single match giving us the ability to dynamically program the model at runtime. We also show how to query the Bigraph structure.
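As a rough intuition for the modelling style, a bigraph pairs a place graph (nesting of nodes) with a link graph (shared names). The toy Python sketch below illustrates that structure and a simple structural query; it is not the BPL Tool's interface, and all controls and names are invented:

```python
# Toy rendering of a bigraph-style runtime model: a place graph (nesting)
# plus a link graph (shared names), with a simple structural query. This
# illustrates the modelling idea only; the BPL Tool itself works quite
# differently (reaction rules and matching over bigraph terms).
from dataclasses import dataclass, field

@dataclass
class Node:
    control: str                                   # e.g. "Room", "Device"
    links: set = field(default_factory=set)        # link-graph names
    children: list = field(default_factory=list)   # place-graph nesting

def find(node, control):
    """Query the place graph for all nodes with a given control."""
    hits = [node] if node.control == control else []
    for child in node.children:
        hits.extend(find(child, control))
    return hits

phone = Node("Device", links={"wifi"})
tv = Node("Device", links={"wifi"})
room = Node("Room", children=[phone, tv])

# A service composition over devices sharing the "wifi" link.
peers = [n for n in find(room, "Device") if "wifi" in n.links]
print(len(peers))   # 2
```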
155

Resonance-oriented software design and development

Fleissner, Sebastian January 2009 (has links)
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 181-189). Electronic reproduction: Hong Kong, Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader; available via the World Wide Web. Abstract also in Chinese.
156

Architecture-centric testing for security

Al-Azzani, Sarah January 2014 (has links)
This thesis presents a novel architecture-centric approach that uses Implied Scenarios (ISs) to detect design vulnerabilities in a software architecture. It reviews security testing approaches and highlights their limitations in addressing unpredictable behaviour in the face of evolution. The thesis introduces the concept of security ISs: unanticipated (possibly malicious) behaviours that indicate potential insecurities in the architecture. The IS approach uses the architecture as the appropriate level of abstraction to tackle the complexity of testing, offering the potential to scale to large, complex applications. It proposes a three-phase method for security testing: (1) detecting design-level vulnerabilities in the architecture incrementally, by composing functionalities as they evolve; (2) classifying the impact of detected ISs on the security of the architecture; and (3) using the detected ISs and their impact to guide the refinement of the architecture. The refinement is test-driven and incremental: refinements are tested before they are committed. The thesis also presents SecArch, an extension to the IS approach that enlarges its search space to detect hidden race conditions. The thesis reports on applications of the proposed approach and its extension to three case studies, testing the security of distributed and cloud architectures in the presence of uncertainty in the operating environment, unpredictable interactions and possible security ISs.
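The notion of an implied scenario can be made concrete with a small sketch: each component knows only its own projection of the specified scenarios, so the composed system may admit traces that no scenario specifies. The brute-force check below is a toy illustration of that definition, with invented messages; it is not the detection algorithm used in the thesis:

```python
# Toy implied-scenario detection: a trace is allowed by the composition if
# every component's local view of it matches one of that component's
# specified projections; allowed traces equal to no whole scenario are
# "implied". Scenario contents here are invented for illustration.
from itertools import permutations

scenarios = [
    (("A", "B", "m1"), ("C", "D", "m2")),   # S1: m1 then m2
    (("C", "D", "m2"), ("A", "B", "m3")),   # S2: m2 then m3
]
components = {"A", "B", "C", "D"}
events = {e for s in scenarios for e in s}

def view(comp, trace):
    """A component's local view: the events it sends or receives, in order."""
    return tuple(e for e in trace if comp in (e[0], e[1]))

# Each component's specified local behaviours (projections of the scenarios).
local = {c: {view(c, s) for s in scenarios} for c in components}

implied = [t for t in permutations(events, 2)
           if all(view(c, t) in local[c] for c in components)
           and t not in scenarios]
print(implied)   # e.g. m2 before m1: an ordering no scenario specified,
                 # because A/B cannot observe when C/D exchange m2
```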
157

Trusted execution : applications and verification

Batten, Ian Gilbert January 2016 (has links)
Useful security properties arise from sealing data to specific units of code. Modern processors featuring Intel’s TXT and AMD’s SVM achieve this by a process of measured and trusted execution. Only code which has the correct measurement can access the data, and this code runs in an environment protected from observation and interference. We discuss the history of attempts to provide security for hardware platforms and review the literature in the field. We propose applications that would benefit from trusted execution and discuss the functionality it enables. We present in more detail a novel variation on Diffie-Hellman key exchange which removes some reliance on random number generation. We present a modelling language with primitives for trusted execution, along with its semantics. We characterise an attacker who has access to all the capabilities of the hardware. In order to analyse systems that use trusted execution automatically, without searching a potentially infinite state space, we define transformations that reduce the number of times the attacker needs to use trusted execution to a pre-determined bound. Given reasonable assumptions, we prove the soundness of the transformation: no secrecy attacks are lost by applying it. We then describe using the StatVerif extensions to ProVerif to model the bounded invocations of trusted execution, and we show the analysis of realistic systems, for which we provide case studies.
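For context, the sketch below shows textbook finite-field Diffie-Hellman, the baseline protocol whose dependence on freshly generated random exponents the thesis's variation reduces; the tiny parameters are for readability only:

```python
# Textbook Diffie-Hellman key exchange. Toy-sized parameters for
# readability; real deployments use large standardised safe-prime groups.
import secrets

p, g = 23, 5                      # public group parameters (toy values)

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent: needs a RNG,
b = secrets.randbelow(p - 2) + 1  # Bob's too; this per-exchange reliance
                                  # on randomness is what the thesis's
                                  # variation aims to reduce
A = pow(g, a, p)                  # Alice sends A to Bob
B = pow(g, b, p)                  # Bob sends B to Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both derive g**(a*b) mod p
print(shared_alice)
```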
158

Prototyping parallel functional intermediate languages

Ben-Dyke, Andrew David January 1999 (has links)
Non-strict higher-order functional programming languages are elegant, concise, mathematically sound and contain few environment-specific features, making them obvious candidates for harnessing high-performance architectures. The validity of this approach has been established by a number of experimental compilers. However, while there have been important theoretical developments in the field of parallel functional programming, implementations have been slow to materialise. The myriad design choices and the demands of specific architectures lead to protracted development times. Furthermore, the resulting systems tend to be monolithic entities that are difficult to extend and test, ultimately discouraging experimentation. The traditional solution to this problem is a rapid prototyping framework. However, as each existing system tends to prefer one specific platform and one particular way of expressing parallelism (including implicit specification), it is difficult to envisage a general-purpose framework. Fortunately, most of these systems have at least one point of commonality: the use of an intermediate form. Typically, these abstract representations explicitly identify all parallel components, without the background noise of syntactic and (potentially arbitrary) implementation details. To this end, this thesis outlines a framework for rapidly prototyping such intermediate languages. Based on the traditional three-phase compiler model, the design process is driven by the development of various semantic descriptions of the language. Executable versions of the specifications help both to debug and to informally validate these models. A number of case studies, covering the spectrum of modern implementations, demonstrate the utility of the framework.
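As a flavour of such an intermediate form, the sketch below models a miniature expression tree in which parallelism is an explicit node, free of source-level syntax. The constructors and evaluator are illustrative assumptions, not artefacts of the thesis:

```python
# Hypothetical miniature of an intermediate form with explicit parallelism:
# a Par node marks sub-expressions that may be evaluated concurrently.
from dataclasses import dataclass
from multiprocessing.dummy import Pool   # thread pool keeps the demo simple

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Par:
    tasks: tuple   # sub-expressions that may run in parallel

def eval_ir(node):
    if isinstance(node, Lit):
        return node.value
    if isinstance(node, Add):
        return eval_ir(node.left) + eval_ir(node.right)
    if isinstance(node, Par):            # the parallel component is explicit
        with Pool() as pool:
            return tuple(pool.map(eval_ir, node.tasks))
    raise TypeError(f"unknown node: {node!r}")

# (1 + 2) evaluated in parallel with (3 + 4)
program = Par((Add(Lit(1), Lit(2)), Add(Lit(3), Lit(4))))
print(eval_ir(program))   # (3, 7)
```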
159

Metric learning for incorporating privileged information in prototype-based models

Fouad, Shereen January 2013 (has links)
Prototype-based classification models, and particularly Learning Vector Quantization (LVQ) frameworks with adaptive metrics, are powerful supervised classification techniques with good generalization behaviour. This thesis proposes three advanced learning methodologies, in the context of LVQ, aimed at better classification performance under various settings. The first contribution is a direct and novel methodology for incorporating valuable privileged knowledge in the LVQ training phase, but not in testing. This is done by manipulating the global metric in the input space based on distance relations revealed by the privileged information. Several illustrative experiments demonstrate the benefit of incorporating privileged information for classification accuracy. Subsequently, the thesis presents a relevant extension of LVQ models, with metric learning, to ordinal classification problems. Unlike existing nominal LVQ, ordinal LVQ explicitly utilizes the class order information during training. Competitive results have been obtained on several benchmarks, improving upon standard LVQ as well as benchmark ordinal classifiers. Finally, a novel ordinal-based metric learning methodology is presented that is principally intended to incorporate privileged information in ordinal classification tasks. The model has been verified experimentally on a number of benchmark and real-world data sets.
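For readers unfamiliar with LVQ, the sketch below shows a minimal LVQ1 update step, the nominal baseline such work extends: the winning prototype is attracted to a correctly labelled sample and repelled otherwise. Adaptive-metric variants replace the fixed Euclidean distance with a learned one. The data and parameters here are made up for illustration:

```python
# Minimal LVQ1 training loop on synthetic 2-D data; illustrative only,
# not the thesis's models.
import numpy as np

def lvq1_step(protos, proto_labels, x, y, lr=0.05):
    dists = np.linalg.norm(protos - x, axis=1)      # Euclidean metric
    w = int(np.argmin(dists))                       # winning prototype
    sign = 1.0 if proto_labels[w] == y else -1.0    # attract or repel
    protos[w] += sign * lr * (x - protos[w])
    return protos

rng = np.random.default_rng(0)
protos = rng.normal(size=(2, 2))          # one prototype per class
proto_labels = np.array([0, 1])
for _ in range(200):
    y = int(rng.integers(0, 2))
    x = rng.normal(loc=3.0 * y, size=2)   # class 1 centred at (3, 3)
    protos = lvq1_step(protos, proto_labels, x, y)
print(protos.round(2))                    # prototypes near (0,0) and (3,3)
```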
160

Regression testing experiments

Sayre, Kent 05 August 1999 (has links)
Software maintenance is an expensive part of the software lifecycle: estimates put its cost at up to two-thirds of the entire cost of software. Regression testing, which tests software after it has been modified in order to assess and increase its reliability, is responsible for a large part of this cost, so making regression testing more efficient and effective is worthwhile. This thesis performs two experiments with regression testing techniques. The first experiment involves two regression test selection techniques, Dejavu and Pythia. These techniques select a subset of tests from the original test suite to be rerun, instead of the entire suite, in an attempt to save valuable testing time. The experiment investigates the cost-benefit tradeoffs between these techniques. The data indicate that Dejavu can occasionally select smaller test suites than Pythia, while Pythia is often more efficient than Dejavu at determining which test cases to select. The second experiment investigates program spectra, which characterize a program's behavior, as a tool to enhance regression testing, and examines their applicability to detecting faults in modified software. The data indicate that certain types of spectra identify faults consistently, and they also reveal cost-benefit tradeoffs among spectra types. / Graduation date: 2000
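The selection idea behind such techniques can be sketched simply: keep only the tests whose coverage intersects the modified parts of the program. The coverage map and change set below are invented for illustration and do not reproduce Dejavu or Pythia:

```python
# Coverage-based regression test selection, in the spirit of the techniques
# compared above; all names and data are hypothetical.
coverage = {
    "test_login":    {"auth.check", "auth.token"},
    "test_logout":   {"auth.token"},
    "test_checkout": {"cart.total", "pay.charge"},
}
modified = {"auth.token"}   # program entities changed since the last release

# Select every test that exercises at least one modified entity.
selected = sorted(t for t, covered in coverage.items() if covered & modified)
print(selected)   # ['test_login', 'test_logout'] -- test_checkout is skipped
```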
