71

Forensic verification of operating system activity via novel data acquisition and analysis techniques

Graves, Jamie Robert January 2009 (has links)
Digital forensics is a nascent field that faces a number of technical, procedural and cultural difficulties that must be overcome if it is to be recognised as a scientific discipline rather than an art. The technical problems include the need to develop standardised tools and techniques for the collection and analysis of digital evidence. This thesis is mainly concerned with those technical difficulties, in particular the exploration of techniques that could form the basis of trusted standards for scientifically verifying data. The study presents a set of techniques and methodologies that can be used to describe the fitness of system calls originating from the Windows NT platform as a form of evidence, in a manner that allows open investigation into how the activities described by this form of evidence can be verified. The performance impact on the Device Under Test (DUT) is explored by dividing the Windows NT system calls into service subsets. Of particular interest to this work is the file subset, as its system calls can be directly linked to user interaction. The quality of the data produced by the collection tool is then examined using the Basic Local Alignment Search Tool (BLAST) sequence alignment algorithm. In doing so, this study asserts that system calls provide a recording, or timeline, of evidence extracted from the operating system which represents the actions undertaken, and that these interactions can be compared against known profiles (fingerprints) of activity using BLAST, which provides a set of statistics relating to the quality of a match and a measure of the similarity of the sequences under scrutiny. These are based on Karlin-Altschul statistics, which provide, amongst other values, a P-value describing how often a sequence will occur within a search space. The manner in which these statistics are calculated is augmented by the novel generation of the NM1.5_D7326 scoring matrix, based on empirical data gathered from the operating system, which is compared against the de facto, biologically generated, BLOSUM62 scoring matrix. The impact on the Windows 2000 and Windows XP DUTs of monitoring most of the service subsets, including the file subset, is statistically insignificant when simple user interactions are performed on the operating system: for the file subset, p = 0.58 on Windows 2000 Service Pack 4 and p = 0.84 on Windows XP Service Pack 1. The study shows that if an event occurred in a sequence originating on an operating system that was not subjected to high process load or system stress, a great deal of confidence can be placed in a gapped match, using either the NM1.5_D7326 or BLOSUM62 scoring matrix, indicating that the event occurred, as all fingerprints of interest (FOI) were identified. The worst-case BLOSUM62 P-value was 1.10E-125 and the worst-case NM1.5_D7326 P-value was 1.60E-72, showing that the matrices are comparable in their sensitivity under normal system conditions. The same cannot be said for sequences gathered under high process load or system stress: the NM1.5_D7326 scoring matrix failed to identify any FOI, while the BLOSUM62 scoring matrix returned a number of matches that may have been the FOI, as discerned via the supporting statistics, but that were not positively identified within the evaluation criteria. The techniques presented in this thesis are useful, structured and quantifiable. They provide the basis for a set of methodologies that can supply objective data for further studies into this form of evidence, exploring the details of the calibration and analysis methods and thereby establishing a trusted form of evidence that may be described as fit-for-purpose.
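As a rough illustration of the Karlin-Altschul statistics mentioned above, the sketch below computes an E-value and P-value for a hypothetical system-call alignment score. The λ and K parameters, the score and the sequence lengths are placeholder values, not figures from the thesis.

```python
import math

def karlin_altschul_pvalue(score, m, n, lam=0.318, K=0.13):
    """Estimate how often an alignment of this score arises by chance.

    score : raw alignment score from the scoring matrix
    m, n  : lengths of the query and database sequences
    lam, K: Karlin-Altschul parameters derived from the scoring matrix
            and background symbol frequencies (placeholder values here).
    """
    # Expected number of chance alignments scoring at least this highly.
    e_value = K * m * n * math.exp(-lam * score)
    # Probability of observing one or more such chance alignments.
    p_value = 1.0 - math.exp(-e_value)
    return e_value, p_value

# Example: a 200-call fingerprint matched against a 50,000-call trace.
e, p = karlin_altschul_pvalue(score=300, m=200, n=50_000)
print(f"E-value = {e:.3e}, P-value = {p:.3e}")
```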
72

Wild networks : the articulation of feedback and evaluation in a creative inter-disciplinary design studio

Joel, Sian January 2011 (has links)
It is argued that design exists within a collective social network of negotiation, feedback sharing and reflection that is integral to the design process. Encouraging this requires a technological solution that enables designers to access, be aware of, and evaluate the work of others and, crucially, to reflect upon how they are socially influenced. However, in order to develop software that accurately reveals peer valuation, an understanding is required of the sociality at work in an interdisciplinary design studio. This necessitates an acknowledgement of the complexities of the feedback-sharing process, which is not only socially intricate but also potentially unacknowledged. In order to develop software that addresses these issues and makes explicit the dynamics of social interaction at play in a design studio, a ‘wild networks’ methodological approach is applied to two case studies, one in an educational setting and the other in professional practice. The ‘wild networks’ approach uses social network analysis in conjunction with contextual observation to map the network of numerous stakeholders, actors, views and perceptions at work. This methodological technique has resulted in an understanding of social networks within a design studio and how they are shaped and formed, and has facilitated the development of prototype network visualisation software based upon the needs and characteristics of real design studios. The findings from this thesis can be interpreted in various ways. Firstly, the findings from the case studies and from prototype technological representations enhance previous research surrounding the idea of a social model of design; the research identifies and highlights the importance of evolving peer-to-peer feedback and the role of visual evaluation within social networks of feedback sharing. The results can also be interpreted from a methodological viewpoint: the thesis demonstrates the use of network analysis and contextual observation as an effective way of understanding the interactions of designers in a studio and as an appropriate way to inform the software design process in support of creativity. Finally, the results can be interpreted from a software design perspective: the research, through the application of the ‘wild networks’ methodological process, identifies key features (roles, location, levels, graphics and time) for inclusion within a socially translucent network visualisation prototype based upon real-world research.
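For readers unfamiliar with social network analysis, the sketch below illustrates the kind of measures such an approach can expose in a feedback-sharing network. The studio members, edges and chosen metrics are invented for illustration and are not taken from the case studies.

```python
import networkx as nx

# Hypothetical feedback-sharing network: an edge A -> B means
# "A gave feedback on B's work" during an observed studio session.
feedback = [
    ("Ana", "Ben"), ("Ana", "Cara"), ("Ben", "Cara"),
    ("Cara", "Ana"), ("Dan", "Ana"), ("Dan", "Ben"),
]
G = nx.DiGraph(feedback)

# Who receives the most peer evaluation, and who brokers it?
in_degree = dict(G.in_degree())                 # how often a designer's work is evaluated
betweenness = nx.betweenness_centrality(G)      # who sits between otherwise separate peers

for person in G.nodes:
    print(f"{person}: feedback received={in_degree[person]}, "
          f"betweenness={betweenness[person]:.2f}")
```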
73

An evaluation of the power consumption and carbon footprint of a cloud infrastructure

Yampolsky, Vincent January 2010 (has links)
The Information and Communication Technology (ICT) sector represents two to three per cent of the world's energy consumption, and about the same percentage of greenhouse gas (GHG) emissions, while IT-related costs represent fifty per cent of a company's electricity bill. In January 2010 the GreenTouch consortium, composed of sixteen leading companies and laboratories in the IT field and led by Bell Labs and Alcatel-Lucent, announced that within five years the Internet could require a thousand times less energy than it requires now. Furthermore, Edinburgh Napier University is committed to reducing its carbon footprint by 25% over the 2007/8 to 2012/13 period (Edinburgh Napier University Sustainability Office, 2009), and one of its objectives is to deploy innovative C&IT solutions. There is therefore a general interest, usually led by environmental concerns, in reducing the electrical cost of the IT infrastructure. One of the most prominent technologies whenever Green IT is discussed is cloud computing (Stephen Ruth, 2009). This technology allows on-demand self-service provisioning by making resources available as a service; its elasticity allows automatic scaling with demand and hardware consolidation through virtualisation. An increasing number of companies are therefore moving their resources into a cloud managed by themselves or by a third party. A cloud managed off-premise by a third party is known to reduce a company's electricity bill, but this does not say to what extent the power consumption is reduced: the processing resources seem simply to be located somewhere else. Moreover, hardware consolidation suggests that power saving is achieved only during off-peak times (Xiaobo Fan et al, 2007). Furthermore, the cost of the network is never mentioned when the cloud is described as power saving, and this cost might not be negligible, since the network might need upgrades because what was done locally is done remotely with cloud computing. In the same way, cloud computing is supposed to enhance the capabilities of mobile devices, but the impact of cloud communication on their autonomy is not mentioned anywhere. Experiments have been performed to evaluate the power consumption of an infrastructure relying on a cloud used for desktop virtualisation, and to measure the cost of the same infrastructure without a cloud. The overall infrastructure has been split into elements, respectively the cloud infrastructure, the network infrastructure and the end-devices, and the power consumption of each element has been monitored separately. The experiments considered different servers, network equipment (switches, wireless access points, a router) and end-devices (desktops, an iPhone, an iPad and a Sony Ericsson Xperia running Android), and also measured the impact of cloud communication on the battery of mobile devices. The evaluation considered different deployment sizes and estimated the carbon emissions of the technologies tested. The cloud infrastructure proved to be power saving, and not only during off-peak times, from a deployment size large enough (approximately 20 computers) for the same processing power. For a wide deployment (500 computers) the power saving is large enough to overcome the cost of a network upgrade to a Gigabit access infrastructure and still reduce carbon emissions by 4 tonnes, or 43.97%, over a year across the Napier campuses compared with a traditional deployment on a Fast Ethernet access network. However, the impact of cloud communication on mobile devices is significant, increasing their power consumption by 57% to 169%.
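The sketch below illustrates the style of energy-and-carbon comparison the study describes, for a traditional desktop deployment versus a thin-client/cloud deployment. All wattages, usage hours and the grid emission factor are assumed values for the sketch, not the thesis's measurements.

```python
# Illustrative comparison of annual energy and CO2 for a traditional desktop
# deployment versus a thin-client + cloud deployment. All figures below are
# assumptions for the sketch, not measured values from the study.

HOURS_PER_YEAR = 8 * 230          # assumed 8 h/day usage, 230 working days
CARBON_KG_PER_KWH = 0.5           # assumed grid emission factor

def annual_kwh(watts_per_unit, units, shared_watts=0.0):
    """Energy of `units` devices plus any shared infrastructure, in kWh/year."""
    total_watts = watts_per_unit * units + shared_watts
    return total_watts * HOURS_PER_YEAR / 1000.0

n = 500  # deployment size
traditional = annual_kwh(watts_per_unit=120, units=n)                 # full desktops
cloud = annual_kwh(watts_per_unit=25, units=n, shared_watts=4_000)    # thin clients + servers/network

saving_kwh = traditional - cloud
saving_co2_tonnes = saving_kwh * CARBON_KG_PER_KWH / 1000.0
print(f"Traditional: {traditional:,.0f} kWh, cloud: {cloud:,.0f} kWh")
print(f"Saving: {saving_kwh:,.0f} kWh/year ~ {saving_co2_tonnes:.1f} t CO2 "
      f"({100 * saving_kwh / traditional:.1f}%)")
```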
74

HI-Risk : a socio-technical method for the identification and monitoring of healthcare information security risks in the information society

van Deursen Hazelhoff Roelfze, Nicole January 2014 (has links)
This thesis describes the development of the HI-risk method to assess socio-technical information security risks. The method is based on the concept that related organisations experience similar risks and could benefit from sharing knowledge in order to take effective security measures. The aim of the method is to predict future risks by combining knowledge of past information security incidents with forecasts made by experts. HI-risk articulates the view that information security risk analysis should include human, environmental, and societal factors, and that collaboration amongst disciplines, organisations and experts is essential to improve security risk intelligence in today's information society. The HI-risk method provides the opportunity for participating organisations to register their incidents centrally. From this register, an analysis of the incident scenarios leads to the visualisation of the most frequent scenario trees. These scenarios are presented to experts in the field, who express their opinions about the expected frequency of occurrence in the future. Their expectation is based on their experience, their knowledge of existing countermeasures, and their insight into new potential threats. The combination of incident and expert knowledge forms a risk map. The map is the main deliverable of the HI-risk method, and organisations could use it to monitor their information security risks. The HI-risk method was designed by following the rigorous process of design science research. The empirical methods used included qualitative and quantitative techniques, such as an analysis of historical security incident data from healthcare organisations, expert elicitation through a Delphi study, and a successful test of the risk forecast in a case organisation. The research focused on healthcare, but the method has the potential to be further developed as a knowledge-based system or expert system applicable to any industry. That system could be used as a tool for management to benchmark themselves against other organisations, to make security investment decisions, to learn from past incidents and to provide input for policy makers.
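A minimal sketch of the core idea, combining an incident register with expert forecasts into a ranked risk map, is given below. The scenarios, frequencies, impact scores and the 50/50 blending weight are all invented for illustration; the actual HI-risk method may combine these inputs differently.

```python
# Hypothetical sketch: blend an incident register with expert (Delphi)
# forecasts into a simple ranked risk map. All names and numbers are invented.

past_frequency = {          # incidents per year observed across organisations
    "lost unencrypted laptop": 12,
    "misdirected patient letter": 30,
    "phishing of staff credentials": 8,
}
expert_forecast = {         # experts' expected incidents per year going forward
    "lost unencrypted laptop": 6,         # encryption now widespread
    "misdirected patient letter": 28,
    "phishing of staff credentials": 20,  # threat expected to grow
}
impact = {                  # assumed relative impact scores (1 = low, 5 = severe)
    "lost unencrypted laptop": 4,
    "misdirected patient letter": 2,
    "phishing of staff credentials": 5,
}

def risk_map(weight_history=0.5):
    """Blend historical frequency with expert expectation, then rank by risk."""
    rows = []
    for scenario in past_frequency:
        expected = (weight_history * past_frequency[scenario]
                    + (1 - weight_history) * expert_forecast[scenario])
        rows.append((expected * impact[scenario], scenario, expected))
    return sorted(rows, reverse=True)

for risk, scenario, expected in risk_map():
    print(f"{scenario}: expected {expected:.1f}/year, risk score {risk:.1f}")
```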
75

On evidence gathering in 3D point clouds of static and moving objects

Abuzaina, Anas January 2015 (has links)
The recent and considerable progress in 3D sensing technologies mandates the development of efficient algorithms to process the sensed data. Many of these algorithms are based on computing and matching 3D feature descriptors in order to estimate point correspondences between 3D datasets. The dependency on 3D feature description and computation can be a significant limitation to many 3D perception tasks: the fact that a variety of criteria are used to describe 3D features, such as surface normals and curvature, makes feature-based approaches sensitive to noise and occlusion, and in many cases, such as smooth surfaces, the computation of feature descriptors is non-informative. Moreover, the process of computing and matching features requires more computational overhead than using the points directly. On the other hand, there has not been much focus on employing evidence gathering frameworks to obtain solutions for 3D perception problems. Evidence gathering approaches, which use the data directly, have proved to provide robust performance against noise and occlusion. More importantly, they do not require initialisation or training, and avoid the need to solve the correspondence problem. The capability to detect, extract and reconstruct 3D objects without relying on feature matching and estimating correspondences between 3D datasets has not been thoroughly investigated, yet it is certainly desirable and has many practical applications. In this thesis we present theoretical formulations and practical solutions to 3D perceptual tasks that are based on evidence gathering. We propose a new 3D reconstruction algorithm for rotating objects that is based on motion-compensated temporal accumulation. We also propose two fast and robust Hough Transform based algorithms for 3D static parametric object detection and 3D moving parametric object extraction. Furthermore, we introduce two algorithms for 3D motion parameter estimation that are based on Reuleaux's and Chasles' kinematic theorems. The proposed algorithms estimate 3D motion parameters directly from the data by exploiting the geometry of rigid transformation, and they provide an alternative to both the local and global feature description and matching pipelines commonly used by numerous 3D object recognition and registration algorithms. Our objective is to provide new means for understanding static and dynamic scenes captured by new 3D sensing technologies, as we believe that these technologies, which are undergoing rapid development, will become dominant in the perception field. We provide alternatives to commonly used feature-based approaches through new evidence-gathering methods for processing 3D range data.
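The sketch below illustrates the evidence-gathering principle with a much simpler case than those in the thesis: a Hough-style accumulator detecting a plane in a synthetic 3D point cloud. Points vote directly in a quantised parameter space, with no feature descriptors or correspondences; the parameterisation and bin sizes are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: a noisy plane z = 0.5 plus random clutter points.
plane_pts = np.c_[rng.uniform(-1, 1, (300, 2)), 0.5 + rng.normal(0, 0.01, 300)]
clutter = rng.uniform(-1, 1, (200, 3))
cloud = np.vstack([plane_pts, clutter])

# Accumulator over plane parameters (theta, phi, rho), with normal
# n = (sin t cos p, sin t sin p, cos t) and plane equation n . x = rho.
thetas = np.linspace(0, np.pi, 30)
phis = np.linspace(0, 2 * np.pi, 60)
rho_bins = np.linspace(-2, 2, 80)
acc = np.zeros((len(thetas), len(phis), len(rho_bins) - 1), dtype=int)

for ti, t in enumerate(thetas):
    for pi_, p in enumerate(phis):
        n = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
        rho = cloud @ n                       # each point votes for the rho it implies
        hist, _ = np.histogram(rho, bins=rho_bins)
        acc[ti, pi_] += hist

ti, pi_, ri = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak votes={acc.max()}, theta={thetas[ti]:.2f}, "
      f"phi={phis[pi_]:.2f}, rho~{rho_bins[ri]:.2f}")
```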
76

Multiple objective optimisation of data and control paths in a behavioural silicon compiler

Baker, Keith Richard January 1992 (has links)
The objective of this research was to implement an 'intelligent' silicon compiler that provides the ability to automatically explore the design space and optimise a design, given as a behavioural description, with respect to multiple objectives. The objective has been met by the implementation of the MOODS silicon compiler. The user submits goals or objectives to the system, which automatically finds near-optimal solutions. As objectives may conflict, trade-offs between synthesis tasks are essential, and consequently the tasks must be executed simultaneously. Tasks are decomposed into behaviour-preserving transformations which, due to their completeness, can be applied in any sequence to a multi-level representation of the design. An accurate evaluation of the design is ensured by feeding technology-dependent information up to a cost function. The cost function guides a simulated annealing algorithm in applying transformations to iteratively optimise the design. The simulated annealing algorithm provides an abstraction from the transformations and the designer's objectives. This abstraction avoids the construction of tailored heuristics, which pre-program trade-offs into a system. Pre-programmed trade-offs, used in most systems, assume a particular shape to the trade-off curve and are inappropriate, as trade-offs are technology dependent. The lack of pre-programmed trade-offs in the MOODS system allows it to adapt to changes in technology or library cells. The choice of cells and their subsequent sharing are based on the user's criteria expressed in the cost function, rather than being pre-programmed into the system. The results show that implementations created by MOODS are better than or equal to those achieved by other systems. Comparisons with other systems highlighted the importance of specifying all of a design's data, as missing data misrepresents the design and leads to misleading comparisons. The MOODS synthesis system includes an efficient method for automated design space exploration, whereby a varied set of near-optimal implementations can be produced from a single behavioural specification. Design space exploration is an important aspect of designing by high-level synthesis and of the development of synthesis systems: it allows the designer to obtain a perspicuous characterisation of the design space and to investigate alternative designs.
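A toy sketch of cost-function-guided simulated annealing is shown below. It is not the MOODS cost function or transformation set: the "design space" is a simple binding of operations to a fast or a slow unit, and the area/delay figures and weights are invented, but it shows how different objective weights steer the same annealer towards different trade-offs.

```python
import math
import random

random.seed(1)

# Toy design space: a "design" binds 8 operations to either a fast-but-large
# or slow-but-small functional unit. Conflicting objectives (area, delay) are
# combined by user-supplied weights in the cost function.
FAST = {"area": 5.0, "delay": 1.0}
SLOW = {"area": 1.0, "delay": 4.0}

def cost(design, w_area, w_delay):
    area = sum((FAST if fast else SLOW)["area"] for fast in design)
    delay = sum((FAST if fast else SLOW)["delay"] for fast in design)
    return w_area * area + w_delay * delay

def transform(design):
    """Behaviour-preserving move: re-bind one operation to the other unit."""
    i = random.randrange(len(design))
    return design[:i] + [not design[i]] + design[i + 1:]

def anneal(w_area, w_delay, temp=10.0, cooling=0.95, steps=2000):
    current = [random.random() < 0.5 for _ in range(8)]
    for _ in range(steps):
        candidate = transform(current)
        delta = cost(candidate, w_area, w_delay) - cost(current, w_area, w_delay)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        temp *= cooling
    return current, cost(current, w_area, w_delay)

# Different objective weights steer the same annealer to different trade-offs.
for w_area, w_delay in [(1.0, 0.2), (0.2, 1.0)]:
    design, c = anneal(w_area, w_delay)
    print(f"weights(area={w_area}, delay={w_delay}) -> "
          f"{sum(design)} fast units, cost={c:.1f}")
```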
77

Extending Event-B with discrete timing properties

Sarshogh, Mohammad Reza January 2013 (has links)
Event-B is a formal language for systems modelling, based on set theory and predicate logic. It has the advantage of mechanised proof, and it is possible to model a system at several levels of abstraction by using refinement. Discrete timing properties are important in many critical systems; however, the modelling of timing properties is not directly supported in Event-B. In this work, we identify three main categories of discrete timing properties for trigger-response patterns: deadline, delay and expiry. We introduce language constructs for each of these timing properties that augment the Event-B language, and describe how these constructs have been given a semantics in terms of the standard Event-B constructs. To ease the process of using timing properties in a refinement-based development, we introduce patterns for refining the timing constructs that allow timing properties on abstract models to be replaced by timing properties on refined models. The language constructs and refinement patterns are illustrated through some generic examples. We have developed a tool to support our approach. Our tool is a plug-in to the Rodin tool-set for Event-B and automates the translation of timing properties to Event-B as well as the generation of the gluing invariants required to verify the consistency of timing property refinement. Finally, we demonstrate the practicality of our approach by going through the modelling and verification process of two real-time case studies. The main focus is the usefulness of the timing refinement patterns in a step-wise modelling and verification process of a real-time system.
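To make the deadline construct concrete, the sketch below mimics its intended discrete-time semantics with an explicit clock: once the trigger event has occurred, time may not advance past the deadline until the response event fires. This is an informal illustration in ordinary code, not the Event-B encoding or the Rodin plug-in's translation.

```python
# Illustrative discrete-time semantics of a "deadline" timing property:
# once `trigger` occurs, `response` must occur within DEADLINE ticks.
# This mimics the intent of the construct; it is not its Event-B encoding.

DEADLINE = 3

class TimedSystem:
    def __init__(self):
        self.clock = 0
        self.trigger_time = None   # set when the trigger event fires

    def tick(self):
        # The clock may only advance if doing so does not violate a pending deadline.
        assert not self.deadline_violated(self.clock + 1), "deadline would expire"
        self.clock += 1

    def trigger(self):
        self.trigger_time = self.clock

    def response(self):
        assert self.trigger_time is not None, "response without trigger"
        self.trigger_time = None   # obligation discharged

    def deadline_violated(self, at_time):
        return (self.trigger_time is not None
                and at_time > self.trigger_time + DEADLINE)

s = TimedSystem()
s.trigger()
s.tick(); s.tick(); s.tick()
s.response()          # within 3 ticks: fine
s.tick()              # the clock can now advance freely again
print("deadline respected, clock =", s.clock)
```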
78

Control of large distributed systems using games with pure strategy Nash equilibria

Chapman, Archie C. January 2009 (has links)
Control mechanisms for optimisation in large distributed systems cannot be constructed using traditional methods of control, because such systems are typically characterised by distributed information and costly and/or noisy communication. Furthermore, noisy observations and dynamism are also inherent to these systems, so their control mechanisms need to be flexible, agile and robust in the face of these characteristics. In such settings, a good control mechanism should satisfy four design requirements: (i) it should produce high-quality solutions, (ii) it should be robust and flexible in the face of additions, removals and failures of components, (iii) it should operate with limited use of communication, and (iv) its operation should be computationally feasible. Against this background, in order to satisfy these requirements, in this thesis we adopt a design approach based on dividing control over the system across a team of self-interested agents. Such multi-agent systems (MAS) are naturally distributed (matching the application domains in question), and by pursuing their own private goals, the agents can collectively implement robust, flexible and scalable control mechanisms. In more detail, the design approach we adopt is (i) to use games with pure strategy Nash equilibria as a framework or template for constructing the agents' utility functions, such that good solutions to the optimisation problem arise at the pure strategy Nash equilibria of the game, and (ii) to derive distributed techniques for solving the games for their Nash equilibria. The specific problems we tackle can be grouped into four main topics. First, we investigate a class of local algorithms for distributed constraint optimisation problems (DCOPs). We introduce a unifying analytical framework for studying such algorithms, and develop a parameterisation of the algorithm design space, which represents a mapping from the algorithms' components to their performance according to each of our design requirements. Second, we develop a game-theoretic control mechanism for distributed dynamic task allocation and scheduling problems. The model in question is an expansion of DCOPs to encompass dynamic problems, and the control mechanism we derive builds on the insights from our first topic to address our four design requirements. Third, we elaborate a general class of problems including DCOPs with noisy rewards and state observations, which are realistic traits of great concern in real-world problems, and derive control mechanisms for these environments. These control mechanisms allow the agents either to learn their reward functions or to decide when to make observations of the world's state and/or communicate their beliefs over the state of the world, in such a manner that they perform well according to our design requirements. Fourth, we derive an optimal algorithm for computing and optimising over pure strategy Nash equilibria in games with sparse interaction structure. By exploiting the structure present in many multi-agent interactions, this distributed algorithm can efficiently compute equilibria that optimise various criteria, thus reducing the computational burden on any one agent and operating with less communication than an equivalent centralised algorithm. For each of these topics, the control mechanisms that we derive are developed such that they perform well according to all four of our design requirements.
In sum, by making the above contributions to these specific topics, we demonstrate that the general approach of using games with pure strategy Nash equilibria as a template for designing MAS produces good control mechanisms for large distributed systems.
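As a small illustration of the design template, the sketch below casts a toy channel-allocation problem as a game in which each agent's utility counts neighbours on a different channel. Sequential best-response dynamics on this (potential) game terminates at a pure strategy Nash equilibrium; the graph, channels and utilities are invented and far simpler than the DCOPs treated in the thesis.

```python
import random

random.seed(0)

# A small distributed constraint problem cast as a game: each agent picks a
# channel and its utility is the number of neighbours on a *different* channel.
# Sequential best-response dynamics on this potential game converges to a
# pure-strategy Nash equilibrium.
neighbours = {
    "a": ["b", "c"], "b": ["a", "c", "d"],
    "c": ["a", "b", "d"], "d": ["b", "c"],
}
channels = [0, 1, 2]
choice = {agent: random.choice(channels) for agent in neighbours}

def utility(agent, my_channel):
    return sum(choice[n] != my_channel for n in neighbours[agent])

changed = True
while changed:                      # loop until no agent can improve: a pure NE
    changed = False
    for agent in neighbours:
        best = max(channels, key=lambda ch: utility(agent, ch))
        if utility(agent, best) > utility(agent, choice[agent]):
            choice[agent] = best    # unilateral improvement (best response)
            changed = True

conflicts = sum(choice[a] == choice[n] for a in neighbours for n in neighbours[a]) // 2
print("assignment:", choice, "remaining conflicts:", conflicts)
```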
79

Enhancing retrieval and discovery of desktop documents

Mosweunyane, Gontlafetse January 2009 (has links)
Personal computers provide users with the ability to create, organize, store and access large amounts of information. Most of this information is in the form of documents, in files organized in the hierarchical folder structures provided by the operating system. Operating-system-provided access to these data is mainly through structure-guided navigation and, more recently, through keyword search. This thesis describes the author's research into the accessibility and utilization of personal documents stored and organized using the hierarchical file system provided by common operating systems. An investigation was carried out into how users currently store and access their documents in these structures. Access and utility problems triggered a need to reconsider the navigation methods currently provided, and a further investigation into the navigation of personal document hierarchies using semantic metadata derived from the documents was carried out. A more intuitive exploratory interface that exposes the metadata for browsing-style navigation was implemented. The underlying organization is based on a model for navigation whereby documents are represented using index terms and the associations between them are exposed to create a linked, similarity-based navigation structure. Exposing metadata-derived index terms in an interface was hypothesized to reduce the user's cognitive load and to enable efficient and effective retrieval, while also providing cues for the discovery and recognition of associations between documents. Evaluation results of the implementation support this hypothesis for the retrieval of deeply located documents, and show better overall effectiveness in the association and discovery of documents. The importance of semantic document metadata is also highlighted in demonstrations involving the transfer of documents from the desktop to other organized document stores, such as a repository.
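The sketch below illustrates one way such a similarity-based navigation structure could be built from metadata-derived index terms, using TF-IDF vectors and cosine similarity. The documents and terms are invented, and the thesis does not necessarily use this particular weighting or library.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical desktop documents represented by metadata-derived index terms.
docs = {
    "thesis_draft.doc": "digital forensics system calls evidence verification",
    "budget_2009.xls": "project budget costs travel equipment 2009",
    "forensics_notes.txt": "notes forensics evidence timeline system calls",
    "holiday_photos.ppt": "photos holiday beach family summer",
}

names = list(docs)
tfidf = TfidfVectorizer().fit_transform(docs.values())   # index-term vectors
sim = cosine_similarity(tfidf)                           # pairwise similarity

# Expose, for each document, its most similar neighbour as a navigation link.
for i, name in enumerate(names):
    scores = [(sim[i, j], names[j]) for j in range(len(names)) if j != i]
    score, neighbour = max(scores)
    print(f"{name} -> {neighbour} (similarity {score:.2f})")
```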
80

An incremental refinement approach to a development of a flash-based file system in Event-B

Damchoom, Kriangsak January 2010 (has links)
Nowadays, many formal methods, accompanied by a number of advanced theories and tools, are used in the area of software development. However, more experiments are still required in order to provide significant evidence that will convince and encourage users to use, and gain more benefit from, those theories and tools. Event-B is a formalism used for specifying and reasoning about systems. Rodin is an open and extensible tool for Event-B specification, refinement and proof. The flash file system is a complex system, and such systems are currently a challenge to specify and verify. This system was chosen as a case study for our experiments, carried out using Event-B and the Rodin tool. The experiments were aimed at developing a rigorous model of a flash-based file system, including an implementation of the model, and at providing useful evidence and guidelines to developers and the software industry. We believe these would convince users and make formal methods more accessible. Incremental refinement was chosen as the development strategy, with refinement used for two different purposes: feature augmentation and structural refinement (covering event and machine decomposition). Several techniques and styles of modelling were investigated and compared in order to produce useful guidelines for modelling, refinement and proof. The model of the flash-based file system we have completed covers three main issues: fault tolerance, concurrency and the wear-levelling process. Our model can deal with concurrent read/write operations and other processes such as block relocation and block erasure, and it tolerates faults that may occur during the reading or writing of files. We believe our development acts as an exemplar that other developers can learn from. We also provide systematic rules for the translation of Event-B models into Java code; however, more work is required to make these rules more applicable and useful in the future.
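As a hedged illustration of the general shape of the Event-B-to-code translation (the thesis targets Java; Python is used here to keep one language across these examples), the sketch below maps an event's guard and action onto a method that only fires when the guard holds. The events, state variables and file-system details are invented, not taken from the thesis's model.

```python
# Hedged illustration of the general shape of an Event-B event translated to
# executable code. An event = guard over the state + action updating the state;
# the corresponding method refuses to fire when its guard does not hold.

class FlashFileSystemModel:
    def __init__(self, block_count):
        # Abstract state: which blocks are free and which file owns which block.
        self.free_blocks = set(range(block_count))
        self.owner = {}                      # block -> file name

    def write_block(self, file_name, block):
        """Event WRITE_BLOCK: grd1 block is free; act1 allocate it to the file."""
        if block not in self.free_blocks:    # guard
            return False                     # event not enabled in this state
        self.free_blocks.remove(block)       # action, performed atomically
        self.owner[block] = file_name
        return True

    def erase_block(self, block):
        """Event ERASE_BLOCK: grd1 block is allocated; act1 return it to the pool."""
        if block not in self.owner:          # guard
            return False
        del self.owner[block]                # action
        self.free_blocks.add(block)
        return True

fs = FlashFileSystemModel(block_count=4)
assert fs.write_block("report.txt", 2)
assert not fs.write_block("notes.txt", 2)    # guard fails: block already taken
assert fs.erase_block(2)
print("free blocks:", sorted(fs.free_blocks))
```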
