81

Ray tracing methods for hybrid global illumination algorithms

Coulthurst, David James January 2010 (has links)
Global illumination algorithms provide a way to model the different light transport phenomena seen in real life, and produce accurate images. The amount of computation to achieve accurate rendering is large, resulting in the development of many different ways of speeding it up. Some of these focus on speeding up the basic processes of rendering, such as ray tracing operations, and some on leveraging parallel hardware to speed up rendering. Two novel algorithms of this type are described in this thesis. One allows incoherent paths to be traced efficiently in parallel by offsetting the latency; the other discovers and exploits empty regions of space to avoid the use of acceleration structures for such operations as soft shadowing and metropolis light transport mutations. The other approach to speeding up rendering is to use a more elegant algorithm. Two such families of algorithms are photon mapping and metropolis light transport. Extensions to progressive photon mapping and hybrid algorithms using photon mapping and metropolis light transport are presented, showing significant speedup in complex scenes that present difficulties to current rendering algorithms.
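For context on the progressive photon mapping extensions mentioned above, the standard progressive update (the published baseline rather than the thesis's own extension) shrinks the gather radius and rescales the accumulated flux after each pass. If a gather region already holds N_i photons and pass i contributes M_i new photons carrying flux Phi_i, then

\[ R_{i+1} = R_i \sqrt{\frac{N_i + \alpha M_i}{N_i + M_i}}, \qquad \tau_{i+1} = \left(\tau_i + \Phi_i\right)\frac{N_i + \alpha M_i}{N_i + M_i}, \]

where alpha in (0,1) controls how aggressively the radius contracts. Repeated passes converge towards the correct radiance estimate without having to store all photons at once.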
82

A framework for exploiting emergent behaviour to capture 'best practice' within a programming domain

Mercer, Sarah Jane January 2004 (has links)
Inspection is a formalised process for reviewing an artefact in software engineering. It is proven to significantly reduce defects, to ensure that what is delivered is what is required, and that the finished product is effective and robust. Peer code review is a less formal inspection of code, normally classified as inadequate or substandard Inspection. Although it has an increased risk of not locating defects, it has been shown to improve the knowledge and programming skills of its participants. This thesis examines the process of peer code review, comparing it to Inspection, and attempts to describe how an informal code review can improve the knowledge and skills of its participants by deploying an agent-oriented approach. During a review the participants discuss defects, recommendations and solutions, or more generally their own experience. It is this instant adaptability to new information that gives the review process the ability to improve knowledge. This observed behaviour can be described as the emergent behaviour of the group of programmers during the review. The wider distribution of knowledge is currently only achieved by programmers attending other reviews. To maximise the benefits of peer code review, a mechanism is needed by which the findings from one team can be captured and propagated to other reviews/teams throughout an establishment. A prototype multi-agent system is developed with the aim of capturing the emergent properties of a team of programmers. As the interactions between the team members are unstructured and the information traded is dynamic, a distributed adaptive system is required to provide communication channels for the team and to provide a foundation for the knowledge shared. Software agents are capable of adaptivity and learning. Multi-agent systems are particularly effective when deployed within distributed architectures and are believed to be able to capture emergent behaviour. The prototype system illustrates that the learning mechanism within the software agents provides a solid foundation upon which the ability to detect defects can be learnt. It also demonstrates that the multi-agent approach is apposite for providing the free flow of ideas between programmers, not only to achieve the sharing of defects and solutions but also at a high enough level to capture social information. It is assumed that this social information is a measure of one element of the review process's emergent behaviour. The system is capable of monitoring the team-perceived abilities of programmers, those who are influential on the programming style of others, and the issues upon which programmers agree or disagree. If the disagreements are classified as unimportant or stylistic issues, can it not therefore be assumed that all agreements are concepts of "Best Practice"? The conclusion is reached that code review is not a substandard Inspection but is in fact complementary to the Inspection model, as the latter improves the process of locating and identifying bugs while the former improves the knowledge and skill of the programmers, and therefore the chance of bugs not being introduced in the first place. The prototype system demonstrates that it is possible to capture best practice from a review team and that agents are well suited to the task. The performance criteria of such a system have also been captured. The prototype system has also shown that a reliable level of learning can be attained for a real world task.
The innovative way of concurrently deploying multiple agents which use different approaches to achieve the same goal shows remarkable robustness when learning from small example sets. The novel way in which autonomy is promoted within the agents' design but constrained within the agent community allows the system to provide a sufficiently flexible communications structure to capture emergent social behaviour, whilst ensuring that the agents remain committed to their own goals.
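The closing remark about multiple agents pursuing the same goal with different approaches can be pictured with a toy sketch. The following Python fragment is purely illustrative: the agent strategies, names and voting rule are hypothetical and not taken from the thesis's prototype. Several agents judge the same code fragment, a majority verdict is formed, and each agent then learns from that verdict.

from collections import Counter

class KeywordAgent:
    """Flags fragments containing patterns it has learned to distrust."""
    def __init__(self):
        self.suspect = {"goto", "eval", "== None"}
    def judge(self, fragment):
        return any(tok in fragment for tok in self.suspect)
    def learn(self, fragment, is_defect):
        if is_defect:
            self.suspect.update(fragment.split()[:1])  # crude: remember the first token

class LengthAgent:
    """Uses a learned length threshold as a proxy for risky code."""
    def __init__(self, threshold=80):
        self.threshold = threshold
    def judge(self, fragment):
        return len(fragment) > self.threshold
    def learn(self, fragment, is_defect):
        # Nudge the threshold towards the observed outcome.
        if is_defect and len(fragment) < self.threshold:
            self.threshold = len(fragment)

def review(fragment, agents):
    """Majority vote across agents, then feed the verdict back to each agent."""
    votes = Counter(agent.judge(fragment) for agent in agents)
    verdict = votes[True] >= votes[False]
    for agent in agents:
        agent.learn(fragment, verdict)
    return verdict

agents = [KeywordAgent(), LengthAgent()]
print(review("if x == None: eval(user_input)", agents))  # flagged as a likely defect

The point of the sketch is only the shape of the design: independent strategies, a shared goal, and a feedback loop through which the group's agreed verdicts shape each agent's future behaviour.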
83

Scalable support for process-oriented programming

Ritson, Carl G. January 2013 (has links)
Process-oriented programming is a method for applying a high degree of concurrency within software design while avoiding associated pitfalls such as deadlocks and race hazards. A process-oriented computer program contains multiple distinct software processes which execute concurrently. All interaction between processes, including information exchange, occurs via explicit communication and synchronisation mechanisms. The explicit nature of interaction in process-oriented programming underpins its ability to provide manageable concurrency. These interaction mechanisms represent both a potential overhead in the execution of process-oriented software and a point of mechanical sympathy with emerging multi-core computer architectures. This thesis details engineering to reduce the overheads associated with a process-oriented style of software design and to evaluate its mechanical sympathy. The first half of this thesis provides an in-depth review of facilities for concurrent programming and their support in programming languages. Common concurrent programming facilities are defined and their relationship to process-oriented design established. It contains an analysis of the significance of mechanical sympathy in programming languages, trends in hardware and software design, and relates these to process-oriented programming. The latter part of this thesis describes techniques for the compilation and execution of process-oriented software on multi-core hardware so as to achieve the maximum utilisation of parallel computing resources with the minimum overhead from process-oriented interaction mechanisms. A new runtime kernel design for the occam-pi programming language is presented and evaluated. This design enables efficient cache-affine work-stealing scheduling of processes on multi-core hardware using wait-free and non-blocking algorithms. This is complemented by modern compilation techniques for occam-pi program code using machine-independent assembly to improve performance and portability, and methods for debugging the execution of process-oriented software using a virtual machine interpreter. Through application, these methods demonstrate the mechanical sympathy and parallel execution potential of process-oriented software.
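The process-oriented style described here can be sketched outside occam-pi as well. The Python fragment below is a minimal illustration, assuming threads and queues stand in for processes and channels; it is not the thesis's runtime, only the shape of shared-nothing processes interacting over explicit channels.

import threading, queue

def producer(out_chan):
    for n in range(5):
        out_chan.put(n)          # explicit communication, no shared state
    out_chan.put(None)           # end-of-stream marker

def doubler(in_chan, out_chan):
    while True:
        n = in_chan.get()
        if n is None:
            out_chan.put(None)
            break
        out_chan.put(n * 2)

def consumer(in_chan):
    while True:
        n = in_chan.get()
        if n is None:
            break
        print("received", n)

a, b = queue.Queue(), queue.Queue()
procs = [threading.Thread(target=producer, args=(a,)),
         threading.Thread(target=doubler, args=(a, b)),
         threading.Thread(target=consumer, args=(b,))]
for p in procs: p.start()
for p in procs: p.join()

Because each process touches only its own data and its channel ends, a scheduler is free to place and migrate processes across cores, which is the mechanical sympathy the abstract refers to.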
84

Patterns of semiosis in requirements engineering

Ketabchi, Shokoofeh January 2011 (has links)
Requirements engineering (RE) is the process of eliciting, analysing, specifying and validating requirements. It is carried out early in the development lifecycle and acts as the basis for other phases of the software development lifecycle. Therefore, proper requirements engineering improves the quality of the development cycle and, thus, the final product. Many methods and frameworks have been developed for RE. They introduce step-by-step guidelines and methods that need prior knowledge and experience to be applied properly; these are not suitable for novice analysts. Applying requirements patterns is a technique to overcome this problem. A pattern is a regularity that repeats again and again. Requirements may repeat across projects; thus, they can be defined as patterns and reused whenever needed instead of being developed from scratch. Several methods have been introduced for developing and reusing patterns; however, they are mainly concerned with technology and implementation aspects, and usually ignore users' high-level informal requirements. This research aims to develop the theory of semiosis patterns and to introduce a requirements engineering patterns method that addresses these problems. The proposed theory and method are mainly inspired by the semiosis process from semiotics theory. The semiosis process helps to make a connection from signs to their objects through an interpretant. The semiosis pattern theory introduces new concepts and principles for patterns in requirements engineering. The semiosis patterns method (SPM) helps to create, reuse, and customise patterns for requirements engineering. To develop the semiosis patterns, a problem domain is decomposed into smaller sections (sub-domains) called problem patterns, for which related requirements patterns are created or, if they already exist in the repository, reused; the semiosis process is used to match problem patterns (regarded as signs) with requirements patterns (regarded as objects). To validate the proposed method, the information management area is chosen for an extensive study and its patterns are developed accordingly. Then, two case studies from the Oil and Gas industry are selected, their information management (IM) function is studied, and the developed patterns are reused and customised. Finally, the whole research, including the theoretical foundation, methodology, SPM and the results of applying SPM, is critically evaluated.
85

Sharing awareness during distributed collaborative software development

Omoronyia, Inah January 2008 (has links)
Software development is a global activity unconstrained by the bounds of time and space. A major effect of this increasing scale and distribution is that the shared understanding that developers previously acquired by formal and informal face-to-face meetings is difficult to obtain. This thesis proposes and evaluates a shared entity model (called CRI) that uses information gathered automatically from developer IDE interactions to make explicit orderings of tasks, artefacts and developers that are relevant to particular work contexts in a distributed software development project. It provides a detailed description of literature related to awareness in collaborative software engineering, a thorough description of the CRI model, and the results of a qualitative empirical evaluation in a realistic development scenario.
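A rough idea of how IDE interactions might be turned into relevance orderings can be sketched as follows. This is a hypothetical illustration, not the CRI model itself: the event types, weights and ranking rule are invented for the example.

from collections import defaultdict

# Illustrative weights for different kinds of developer-artefact interaction.
EVENT_WEIGHTS = {"edit": 3.0, "select": 1.0, "build": 0.5}

def rank_artefacts(events):
    """events: iterable of (developer, artefact, event_type) gathered from IDEs."""
    scores = defaultdict(float)
    for developer, artefact, kind in events:
        scores[artefact] += EVENT_WEIGHTS.get(kind, 0.0)
    # Artefacts touched most heavily rise to the top of the ordering.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

log = [("ann", "Parser.java", "edit"),
       ("bob", "Parser.java", "select"),
       ("ann", "Lexer.java", "build")]
print(rank_artefacts(log))  # Parser.java ranked above Lexer.java

The same accumulation could be restricted to events within a particular task or time window, giving orderings of tasks, artefacts and developers relative to a work context in the spirit the abstract describes.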
86

Models of open source production compared to participative systems in new media art

Smith, Dominic January 2011 (has links)
The term 'Open Source' has in the past decade been used very loosely in relation to art and social practices. This research compares the production processes of Open Source software with those of participative new media art projects. The contextual review examines the behaviours of computer scientists from the 1960s onwards, including hacking, interaction over computer networks and shared use of computers when they were a scarce resource. Collaborative environment strategies for personal success are traced onto free software, FLOSS (Free Libre Open Source Software) and open source. Licensing and copyright are examined in relation to distribution. The development of participative art projects is also traced in relation to new media and the ethics of authorship, freedom, sharing and distribution. The research compares certain political and social ethics between software and art. It identifies different levels of 'openness' and different kinds of hierarchies within production systems, including hierarchies of skill, approval, gatekeeping, and time. Interviews with key open source practitioners help to identify these hierarchies. As part of a practical body of research, a series of participative projects were developed by the researcher. These included both online and physical-space participation, including the Random Information Exchange series and Shredder. These were designed to test the various principles of open source within a new media art context. Through the successes and limitations of these projects, the elements of a project that are necessary for it to be classed as open source were identified. The findings of the research describe important differences in the hierarchical structures of projects' production and distribution, and identify key elements including the 'ownership' of projects over time, and the importance of differentiating the 'instigator' role from the 'developer' role.
87

Verification of hardware dependent software

Taylor, Ramsay G. January 2012 (has links)
Many good processes exist for ensuring the integrity of software systems. Some are analysis processes that seek to confirm that certain properties hold for the system, and these rely on the ability to infer a correct model of the behaviour of the software. To ensure that such inference is possible, many high-integrity systems are written in "safe" language subsets that restrict the program to constructs whose behaviour is sufficiently abstract and well defined that it can be determined independent of the execution environment. This necessarily prevents any assumptions about the system hardware, but consequently makes it impossible to use these techniques on software that must interact with the hardware, such as device drivers. This thesis addresses this shortcoming by taking the opposite approach: if the analyst accepts absolute hardware dependence - that the analysis will only be valid for a particular target system: the hardware that the driver is intended to control - then the specification of the system can be used to infer the behaviour of the software that interacts with it. An analysis process is developed that operates on disassembled executable files and formal system specifications to produce CSP-OZ formal models of the software's behaviour. This analysis process is implemented in a prototype called Spurinna, which is then used in conjunction with the verification tools Z2SAL, the SAL suite, and Isabelle/HOL to demonstrate the verification of properties of the software.
88

Multi-objective genetic programming with an application to intrusion detection in computer networks

Badran, Khaled January 2009 (has links)
The widespread connectivity of computers all over the world has encouraged intruders to threaten the security of computing systems by targeting the confidentiality and integrity of information, and the availability of systems. Traditional techniques such as user authentication, data encryption and firewalls have been implemented to defend computer security but still have problems and weak points. Therefore the development of intrusion detection systems (IDS) has aroused much research interest, with the aim of preventing both internal and external attacks. In misuse-based, network-based IDS, huge history files of computer network usage are analysed in order to extract useful information, and rules are extracted to judge future network usage as legal or illegal. This process is considered as data mining for intrusion detection in computer networks.
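To make the rule-extraction idea concrete, the sketch below shows what one mined rule might look like when applied to connection records. It is a hypothetical illustration: the feature names, thresholds and records are invented, and the thesis evolves such rules with multi-objective genetic programming rather than writing them by hand.

def rule(record):
    # A genetic-programming individual could encode a boolean expression
    # over connection features much like this hand-written one.
    return (record["failed_logins"] > 3) or \
           (record["bytes_sent"] > 1_000_000 and record["duration"] < 2)

connections = [
    {"failed_logins": 0, "bytes_sent": 512, "duration": 30},
    {"failed_logins": 5, "bytes_sent": 128, "duration": 10},
]
for c in connections:
    print("illegal" if rule(c) else "legal")

In a multi-objective setting, candidate rules of this form would be scored on several criteria at once, for example detection rate against false-alarm rate, rather than a single fitness value.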
89

Integral sliding mode fault tolerant control schemes with control allocation

Hamayun, Mirza Tariq January 2013 (has links)
The key attribute of a Fault Tolerant Control (FTC) system is to maintain overall system stability and acceptable performance in the face of faults and failures within the system. In this thesis new integral sliding mode (ISM) control allocation schemes for FTC are proposed, which have the potential to maintain the nominal fault-free performance for the entire system response, in the face of actuator faults and even complete failures of certain actuators. The incorporation of ISM within a control allocation framework uses the measured or estimated values of the actuator effectiveness levels to redistribute the control effort among the healthy actuators to maintain closed-loop stability. This combination allows one controller to be used in fault-free as well as in fault or failure situations. A fault tolerant control allocation scheme which relies on an a posteriori approach, building on an existing state feedback controller designed using only the primary actuators, is also proposed. Retro-fitting of an ISM scheme to an existing feedback controller is advantageous from an industrial perspective, because fault tolerance can be introduced without changing the existing control loops. To deal with a wider range of operating conditions, the fault tolerant features of ISM are also extended to linear parameter varying systems. An FTC scheme considering only the availability of measured system outputs is also proposed, where now the feedback controller design is based on the estimated states. In each of the ISM fault tolerant schemes proposed, a rigorous closed-loop analysis is carried out to ensure the stability of the sliding motion in the face of faults or failures. A high fidelity benchmark model of a large transport aircraft is used to demonstrate the efficacy of the new FTC schemes.
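The redistribution idea can be illustrated with a generic weighted pseudo-inverse allocation law (a standard textbook form, not necessarily the exact structure used in the thesis):

\[ u(t) = W B^{\top}\bigl(B W B^{\top}\bigr)^{-1}\,\nu(t), \qquad W = \operatorname{diag}(w_1,\dots,w_m),\quad 0 \le w_i \le 1, \]

where nu(t) is the virtual control demanded by the underlying (here ISM) controller, B maps individual actuator signals onto that virtual control, and w_i are the measured or estimated effectiveness levels (w_i = 1 for a healthy actuator, w_i = 0 for a total failure). Since B u = nu whenever B W B^T is invertible, the demanded virtual control is still delivered while the effort shifts automatically towards the healthier actuators, which is why a single controller can cover both nominal and faulty conditions.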
90

Design patterns to support the migration between event-triggered and time-triggered software architectures

Lakhani, Farha Naz January 2013 (has links)
There are two main architectures used to develop software for modern embedded systems: these can be labelled as “event-triggered” (ET) and “time-triggered” (TT). This thesis is concerned with the issues involved in migration between these two architectures. Although TT architectures are widely used in safety-critical applications (for example, in aerospace and medical systems) they are less familiar to developers of mainstream embedded systems. The work in this thesis began from the premise that – for a broad class of systems that have been implemented using an ET architecture – migration to a TT architecture would improve reliability. It may be tempting to assume that conversion between ET and TT designs will simply involve converting all event-handling software routines into periodic activities. However, the required changes to the software architecture are, in many cases, rather more profound. The main contribution of the work presented in this thesis is to identify ways in which the significant effort involved in migrating between existing ET architectures and “equivalent” (and effective) TT architectures could be reduced. The research has taken an innovative step in this regard by introducing the use of ‘Design patterns’ for this purpose for the first time. This thesis describes the development, experimental testing and preliminary assessment of a novel set of design patterns. The thesis goes on to evaluate the effectiveness of some of the key patterns in the development of some representative systems. The pattern evaluation process involved both controlled laboratory experiments on real-time applications, and comprehensive feedback from experts in industry. The results presented in this thesis suggest that pattern-based approaches have the potential to simplify the migration process between ET and TT architectures. The thesis concludes by presenting suggestions for future work in this important area.
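As a rough illustration of the TT side of the migration (not one of the thesis's patterns), the sketch below shows the core of a time-triggered cooperative scheduler, where periodic tasks rather than event handlers are dispatched from a fixed tick-driven loop. Task names, periods and the use of Python are all illustrative.

import time

class Task:
    def __init__(self, func, period_ticks, offset_ticks=0):
        self.func, self.period, self.offset = func, period_ticks, offset_ticks

def run(tasks, tick_seconds=0.01, ticks=100):
    for tick in range(ticks):
        start = time.monotonic()
        for t in tasks:
            if (tick - t.offset) >= 0 and (tick - t.offset) % t.period == 0:
                t.func()   # cooperative: each task must run to completion quickly
        # Sleep out the remainder of the tick so the schedule stays periodic.
        time.sleep(max(0.0, tick_seconds - (time.monotonic() - start)))

run([Task(lambda: print("sample sensor"), period_ticks=10),
     Task(lambda: print("update display"), period_ticks=25, offset_ticks=5)],
    ticks=50)

Migrating an ET design to this shape is more than renaming interrupt handlers as tasks: work must be split into short, bounded slices and offsets and periods chosen so the tick budget is never exceeded, which is exactly the kind of restructuring the proposed design patterns aim to guide.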
