71

Forensic verification of operating system activity via novel data acquisition and analysis techniques

Graves, Jamie Robert January 2009 (has links)
Digital Forensics is a nascent field that faces a number of technical, procedural and cultural difficulties that must be overcome if it is to be recognised as a scientific discipline, and not just an art. Technical problems involve the need to develop standardised tools and techniques for the collection and analysis of digital evidence. This thesis is mainly concerned with the technical difficulties faced by the domain; in particular, it explores techniques that could form the basis of trusted standards to scientifically verify data. This study presents a set of techniques and methodologies that can be used to describe the fitness of system calls originating from the Windows NT platform as a form of evidence. It does so in a manner that allows for open investigation into how the activities described by this form of evidence can be verified. The performance impact on the Device Under Test (DUT) is explored via the division of the Windows NT system calls into service subsets. Of particular interest to this work is the file subset, as the system calls can be directly linked to user interaction. The subsequent quality of data produced by the collection tool is examined via the use of the Basic Local Alignment Search Tool (BLAST) sequence alignment algorithm. In doing so, this study asserts that system calls provide a recording, or time line, of evidence extracted from the operating system, which represents actions undertaken. In addition, it asserts that these interactions can be compared against known profiles (fingerprints) of activity using BLAST, which can provide a set of statistics relating to the quality of match, and a measure of the similarities of sequences under scrutiny. These are based on Karlin-Altschul statistics, which provide, amongst other values, a P-Value describing how often a sequence will occur within a search space. The manner in which these statistics are calculated is augmented by the novel generation of the NM1.5_D7326 scoring matrix based on empirical data gathered from the operating system, which is compared against the de facto, biologically generated, BLOSUM62 scoring matrix. The impact on the Windows 2000 and Windows XP DUTs of monitoring most of the service subsets, including the file subset, is statistically insignificant when simple user interactions are performed on the operating system. For the file subset, p = 0.58 on Windows 2000 Service Pack 4, and p = 0.84 on Windows XP Service Pack 1. This study shows that if the event occurred in a sequence that originated on an operating system that was not subjected to high process load or system stress, a great deal of confidence can be placed in a gapped match, using either the NM1.5_D7326 or BLOSUM62 scoring matrices, indicating an event occurred, as all fingerprints of interest (FOI) were identified. The worst-case BLOSUM62 P-Value = 1.10E-125, and worst-case NM1.5_D7326 P-Value = 1.60E-72, showing that these matrices are comparable in their sensitivity during normal system conditions. This cannot be said for sequences gathered during high process load or system stress conditions. The NM1.5_D7326 scoring matrix failed to identify any FOI. The BLOSUM62 scoring matrix returned a number of matches that may have been the FOI, as discerned via the supporting statistics, but were not positively identified within the evaluation criteria. The techniques presented in this thesis are useful, structured and quantifiable.
They provide the basis for a set of methodologies that can be used for providing objective data for additional studies into this form of evidence, which can further explore the details of the calibration and analysis methods, thus supplying the basis for a trusted form of evidence, which may be described as fit-for-purpose.
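The abstract leans on Karlin-Altschul statistics to judge whether a fingerprint match could have arisen by chance. A minimal sketch of that calculation is given below; the K and lambda values are illustrative assumptions only, since in practice they are derived from the scoring matrix in use (e.g. BLOSUM62 or NM1.5_D7326) and the residue frequencies.

```python
import math

def karlin_altschul_pvalue(score, m, n, K=0.041, lam=0.267):
    """Estimate how often an alignment of this score arises by chance.

    score : raw alignment score from the scoring matrix
    m, n  : lengths of the query fingerprint and the searched call trace
    K, lam: Karlin-Altschul parameters; the defaults here are illustrative,
            not the values used in the thesis.
    """
    # Expected number of chance alignments with at least this score
    # in a search space of size m * n.
    e_value = K * m * n * math.exp(-lam * score)
    # Probability of observing one or more such alignments by chance.
    p_value = 1.0 - math.exp(-e_value)
    return e_value, p_value

# Example: a 40-call fingerprint matched against a 10,000-call trace.
e, p = karlin_altschul_pvalue(score=120, m=40, n=10_000)
print(f"E-value: {e:.3e}, P-value: {p:.3e}")
```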
72

Wild networks : the articulation of feedback and evaluation in a creative inter-disciplinary design studio

Joel, Sian January 2011 (has links)
It is argued that design exists within a collective social network of negotiation, feedback sharing and reflection that is integral to the design process. Encouraging this requires a technological solution that enables designers to access, be aware of, and evaluate the work of others and, crucially, reflect upon how they are socially influenced. However, in order to develop software that accurately reveals peer valuation, an understanding is required of the sociality at work in an interdisciplinary design studio. This necessitates an acknowledgement of the complexities of the feedback sharing process, which is not only socially intricate in nature but also potentially unacknowledged. In order to develop software that addresses these issues and makes explicit the dynamics of social interaction at play in a design studio, a 'wild networks' methodological approach is applied to two case studies, one in an educational setting, the other in professional practice. The 'wild networks' approach uses social network analysis in conjunction with contextual observation to map the network of the numerous stakeholders, actors, views and perceptions at work. This methodological technique has resulted in an understanding of social networks within a design studio and how they are shaped and formed, and has facilitated the development of prototype network visualisation software based upon the needs and characteristics of real design studios. The findings from this thesis can be interpreted in various ways. Firstly, the findings from the case studies and from the prototype technological representations enhance previous research surrounding the idea of a social model of design. The research identifies and highlights the importance of evolving peer-to-peer feedback, and the role of visual evaluation within social networks of feedback sharing. The results can also be interpreted from a methodological viewpoint. The thesis demonstrates the use of network analysis and contextual observation as an effective way of understanding the interactions of designers in a studio, and as an appropriate way to inform the software design process to support creativity. Finally, the results can be interpreted from a software design perspective. The research, through the application of a 'wild networks' methodological process, identifies key features (roles, location, levels, graphics and time) for inclusion within a socially translucent network visualisation prototype that is based upon real world research.
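The 'wild networks' approach rests on social network analysis of feedback sharing in the studio. As a hedged illustration of the kind of measure such an analysis yields, the sketch below builds a small directed feedback network and ranks studio members by how much their work attracts peer evaluation; the names and edges are invented for illustration, not drawn from the case studies.

```python
import networkx as nx

# Hypothetical feedback-sharing network: an edge A -> B means
# designer A gave feedback on (or drew upon) designer B's work.
feedback = [
    ("Ana", "Ben"), ("Ana", "Cal"), ("Ben", "Cal"),
    ("Cal", "Ana"), ("Dee", "Cal"), ("Dee", "Ben"),
]
g = nx.DiGraph(feedback)

# In-degree centrality: whose work attracts the most peer evaluation.
for designer, score in sorted(
    nx.in_degree_centrality(g).items(), key=lambda kv: -kv[1]
):
    print(f"{designer}: {score:.2f}")
```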
73

An evaluation of the power consumption and carbon footprint of a cloud infrastructure

Yampolsky, Vincent January 2010 (has links)
The Information and Communication Technology (ICT) sector represents two to three percent of the world's energy consumption and about the same percentage of GreenHouse Gas (GHG) emissions. Moreover, IT-related costs represent fifty per cent of the electricity bill of a company. In January 2010 the GreenTouch consortium, composed of sixteen leading companies and laboratories in the IT field led by Bell Labs and Alcatel-Lucent, announced that in five years the Internet could require a thousand times less energy than it requires now. Furthermore, Edinburgh Napier University is committed to reducing its carbon footprint by 25% over the 2007/8 to 2012/13 period (Edinburgh Napier University Sustainability Office, 2009), and one of the objectives is to deploy innovative C&IT solutions. Therefore there is a general interest in reducing the electrical cost of the IT infrastructure, usually led by environmental concerns. One of the most prominent technologies when Green IT is discussed is cloud computing (Stephen Ruth, 2009). This technology allows on-demand self-service provisioning by making resources available as a service. Its elasticity allows the automatic scaling of demand and hardware consolidation thanks to virtualisation. Therefore an increasing number of companies are moving their resources into a cloud managed by themselves or a third party. However, while a cloud managed off-premise by a third party is known to reduce the electricity bill of a company, this does not say to what extent the power consumption is reduced. Indeed, the processing resources seem to be just located somewhere else. Moreover, hardware consolidation suggests that power saving is achieved only during off-peak time (Xiaobo Fan et al, 2007). Furthermore, the cost of the network is never mentioned when the cloud is referred to as power saving, and this cost might not be negligible. Indeed, the network might need upgrades because what was being done locally is done remotely with cloud computing. In the same way, cloud computing is supposed to enhance the capabilities of mobile devices, but the impact of cloud communication on their autonomy is never mentioned. Experiments have been performed in order to evaluate the power consumption of an infrastructure relying on a cloud used for desktop virtualisation, and also to measure the cost of the same infrastructure without a cloud. The overall infrastructure has been split into different elements, respectively the cloud infrastructure, the network infrastructure and end-devices, and the power consumption of each element has been monitored separately. The experiments have considered different servers, network equipment (switches, wireless access-points, router) and end-devices (desktops, iPhone, iPad and Sony-Ericsson Xperia running Android). The experiments have also measured the impact of cloud communication on the battery of mobile devices. The evaluation has considered different deployment sizes and estimated the carbon emissions of the technologies tested. The cloud infrastructure proved to be power saving, and not only during off-peak time, from a deployment size large enough (approximately 20 computers) for the same processing power. For a wide deployment (500 computers) the power saving is large enough that it could overcome the cost of a network upgrade to a Gigabit access infrastructure and still reduce carbon emissions by 4 tonnes, or 43.97%, over a year on Napier campuses compared to a traditional deployment with a Fast-Ethernet access network. However, the impact of cloud communication on mobile devices is important, increasing their power consumption by 57% to 169%.
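The carbon figures quoted above come from converting measured power draw into yearly energy use and then into emissions. A minimal sketch of that arithmetic is shown below; the wattages, device counts and grid carbon intensity are assumptions for illustration, not the thesis's measured values.

```python
def annual_co2_kg(avg_power_watts, devices, hours_per_year=8760,
                  grid_factor_kg_per_kwh=0.5):
    """Convert an average power draw into yearly CO2 emissions.

    grid_factor_kg_per_kwh is an assumed grid carbon intensity;
    the real figure varies by country and year.
    """
    kwh = avg_power_watts * devices * hours_per_year / 1000.0
    return kwh * grid_factor_kg_per_kwh

# Illustrative comparison: traditional desktops vs. thin clients backed by
# a small number of shared cloud hosts (all wattages are assumed).
traditional = annual_co2_kg(avg_power_watts=80, devices=500)
cloud = annual_co2_kg(avg_power_watts=15, devices=500) + annual_co2_kg(300, devices=10)
saving = 100 * (traditional - cloud) / traditional
print(f"traditional: {traditional/1000:.1f} t, cloud: {cloud/1000:.1f} t, "
      f"saving: {saving:.1f}%")
```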
74

HI-Risk : a socio-technical method for the identification and monitoring of healthcare information security risks in the information society

van Deursen Hazelhoff Roelfze, Nicole January 2014 (has links)
This thesis describes the development of the HI-Risk method to assess socio-technical information security risks. The method is based on the concept that related organisations experience similar risks and could benefit from sharing knowledge in order to take effective security measures. The aim of the method is to predict future risks by combining knowledge of past information security incidents with forecasts made by experts. HI-Risk articulates the view that information security risk analysis should include human, environmental, and societal factors, and that collaboration amongst disciplines, organisations and experts is essential to improve security risk intelligence in today's information society. The HI-Risk method provides the opportunity for participating organisations to register their incidents centrally. From this register, an analysis of the incident scenarios leads to the visualisation of the most frequent scenario trees. These scenarios are presented to experts in the field. The experts express their opinions about the expected frequency of occurrence for the future. Their expectation is based on their experience, their knowledge of existing countermeasures, and their insight into new potential threats. The combination of incident and expert knowledge forms a risk map. The map is the main deliverable of the HI-Risk method, and organisations could use it to monitor their information security risks. The HI-Risk method was designed by following the rigorous process of design science research. The empirical methods used included qualitative and quantitative techniques, such as an analysis of historical security incident data from healthcare organisations, expert elicitation through a Delphi study, and a successful test of the risk forecast in a case organisation. The research focused on healthcare, but has potential to be further developed as a knowledge-based system or expert system, applicable to any industry. That system could be used as a tool for management to benchmark themselves against other organisations, to make security investment decisions, to learn from past incidents and to provide input for policy makers.
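The core of the risk map is the combination of registered incident history with expert forecasts per scenario. The sketch below shows one plausible way to combine the two into a ranked list; the weighting rule, impact scale and scenario data are assumptions for illustration, not the HI-Risk method's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class ScenarioRisk:
    name: str
    past_frequency: float   # incidents/year observed in the shared register
    expert_forecast: float  # expected incidents/year from the Delphi round
    impact: float           # assumed impact rating, e.g. 1 (low) .. 5 (severe)

def risk_score(s: ScenarioRisk, w_history: float = 0.5, w_expert: float = 0.5) -> float:
    # Assumed combination rule: weighted blend of history and forecast,
    # scaled by impact. The thesis's own weighting is not specified here.
    likelihood = w_history * s.past_frequency + w_expert * s.expert_forecast
    return likelihood * s.impact

scenarios = [
    ScenarioRisk("lost unencrypted laptop", past_frequency=6, expert_forecast=4, impact=4),
    ScenarioRisk("records faxed to wrong number", past_frequency=9, expert_forecast=2, impact=3),
]
for s in sorted(scenarios, key=risk_score, reverse=True):
    print(f"{s.name}: {risk_score(s):.1f}")
```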
75

Improving reliability of service oriented systems with consideration of cost and time constraints in clouds

Guo, Wei January 2016 (has links)
Web service technology is increasingly popular for the implementation of service oriented systems. Additionally, cloud computing platforms, as an efficient and readily available environment, can provide the computing, networking and storage resources that reduce the cost for companies of deploying and managing their systems. Therefore, more service oriented systems are being migrated to and deployed in clouds. However, these applications need to be improved in terms of reliability, because certain components have low reliability. Fault tolerance approaches can improve software reliability, but they require more redundant units, which increases the cost and the execution time of the entire system. Therefore, a migration and deployment framework that applies fault tolerance approaches under global constraints on cost and execution time may be needed. This work proposes a migration and deployment framework to guide the designers of service oriented systems in improving reliability under global constraints in clouds. A multilevel redundancy allocation model is adopted for the framework to assign redundant units to the structure of systems with fault tolerance approaches. An improved genetic algorithm is utilised to generate the migration plan, taking the execution time of systems and the cost constraints into consideration. Fault tolerance approaches (such as NVP, RB and parallel) can be integrated into the framework so as to improve the reliability of the components at the bottom level. Additionally, a new encoding mechanism based on linked lists is proposed to improve the performance of the genetic algorithm by reducing the movement of redundant units in the model. The experiments compare the performance of the encoding mechanisms and of the model integrated with different fault tolerance approaches. The empirical studies show that the proposed framework, with a multilevel redundancy allocation model integrated with fault tolerance approaches, can generate migration plans for service oriented systems in clouds with consideration of cost and execution time.
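The redundancy allocation model rests on a standard reliability calculation: replicating a component in parallel raises its reliability, at the price of extra cost that the genetic algorithm must keep within budget. A minimal sketch of that evaluation is below; the reliabilities, replica counts and cost figures are assumed placeholders, and the constraint check is far simpler than the thesis's multilevel model.

```python
from math import prod

def parallel_reliability(unit_reliability: float, redundancy: int) -> float:
    """Reliability of a component replicated `redundancy` times in parallel:
    the component fails only if every replica fails."""
    return 1.0 - (1.0 - unit_reliability) ** redundancy

def system_reliability(components):
    """Series system: every (possibly replicated) component must work."""
    return prod(parallel_reliability(r, k) for r, k in components)

# Illustrative allocation: (unit reliability, replicas) per component.
# The cost check stands in for the GA's cost/execution-time constraints.
allocation = [(0.95, 2), (0.90, 3), (0.99, 1)]
unit_cost, budget = 10, 80
cost = sum(k for _, k in allocation) * unit_cost
print(f"reliability = {system_reliability(allocation):.4f}, "
      f"cost = {cost} (budget {budget}, feasible: {cost <= budget})")
```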
76

A quality assessment framework for knowledge management software

Habaragamu Ralalage, Wijendra Peiris Gunathilake January 2016 (has links)
CONTEXT: Knowledge is a strategic asset to any organisation due to its usefulness in supporting innovation, performance improvement and competitive advantage. In order to gain the maximum benefit from knowledge, the effective management of various forms of knowledge is increasingly viewed as vital. A Knowledge Management System (KMS) is a class of Information System (IS) that manages organisational knowledge, and KMS software (KMSS) is a KMS component that can be used as a platform for managing various forms of knowledge. The evaluation of the effectiveness or quality of KMS software is challenging, and no systematic evidence exists on the quality evaluation of knowledge management software which considers the various aspects of Knowledge Management (KM) to ensure the effectiveness of a KMS. AIM: The overall aim is to formalise a quality assessment framework for knowledge management software (KMSS). METHOD: In order to achieve the aim, the research was planned and carried out in the stages identified in the software engineering research methods literature. The need for this research was identified through a mapping study of prior KMS research. The data collected through a Systematic Literature Review (SLR) and the evaluation of a KMSS prototype using a sample of 58 regular users of knowledge management software were used as the main sources of data for the formalisation of the quality assessment framework. A test bed for empirical data collection was designed and implemented based on key principles of learning. A formalised quality assessment framework was applied to select knowledge management software and was evaluated for effectiveness. RESULTS: The final outcome of this research is a quality assessment framework consisting of 41 quality attributes categorised under content quality, platform quality and user satisfaction. A Quality Index was formulated by integrating these three categories of quality attributes to evaluate the quality of knowledge management software. CONCLUSION: This research generates novel contributions by presenting a framework for the quality assessment of knowledge management software, never previously available in the research. This framework is a valuable resource for any organisation or individual in selecting the most suitable knowledge management software by considering the quality attributes of the software.
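The Quality Index described in the results integrates three categories of quality attributes into a single figure. The sketch below shows one plausible weighted formulation; the category scores and weights are assumptions for illustration, since the thesis derives its own formulation from the SLR and the prototype evaluation of the 41 attributes.

```python
def quality_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted aggregate of the three attribute categories named in the
    abstract. The weights are assumed, not those defined in the thesis."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Category scores (0..1) would come from averaging the attribute ratings.
scores = {"content_quality": 0.82, "platform_quality": 0.74, "user_satisfaction": 0.68}
weights = {"content_quality": 0.4, "platform_quality": 0.3, "user_satisfaction": 0.3}
print(f"Quality Index: {quality_index(scores, weights):.2f}")
```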
77

Commitment models and concurrent bilateral negotiation strategies in dynamic service markets

Ponka, Ilja January 2009 (has links)
Technologies such as Web Services, the Semantic Web and the Grid may give rise to new electronic service markets, which have so many services, providers and consumers that software agents need to be employed to search and configure the services for their users. To this end, in this thesis, we investigate bilateral negotiations between agents of service providers and consumers in such markets. Our main interest is in decommitment policies or rules that govern reneging on a commitment, which are essential for operating effectively in such dynamic settings. The work is divided into two main parts. In part I (chapters 3-8), we investigate how the decommitment policies, through the parties’ behaviour, affect the combined utility of all market participants. As a tool, we use decisions that parties make during their interaction. These decisions have previously been discussed in the law and economics literature, but this is the first empirical investigation of them in a dynamic service market setting. We also consider settings (for example, with incomplete information) that have not been addressed before. In particular, we take four of these decisions — performance, reliance, contract and selection — and consider them one by one in a variety of settings. We create a number of novel decommitment policies and increase the understanding of these decisions in electronic markets. In part II (chapters 9-11), we consider a buyer agent that engages in multiple negotiations with different seller agents concurrently and consider how decommitment policies should affect its behaviour. Specifically, we develop a detailed adaptive model for concurrent bilateral negotiation by extending the existing work in several directions. In particular, we pay special attention to choosing the opponents to negotiate with and choosing the number of negotiations to have concurrently, but we also address questions such as bilateral negotiation tactics and interconnected negotiations on different services. These ideas are again evaluated empirically.
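The decommitment policies studied in part I govern when an agent may renege on an existing contract. As a hedged illustration of the simplest such rule, the sketch below has a provider renege only when a better offer still pays after a fixed decommitment penalty; the penalty model and figures are assumptions, and the thesis evaluates considerably richer policies.

```python
def should_decommit(current_utility: float, new_offer_utility: float,
                    penalty: float) -> bool:
    """A simple decommitment rule: renege on the current contract only if
    the better offer still pays after the decommitment penalty is deducted."""
    return new_offer_utility - penalty > current_utility

# A provider holding a contract worth 100 is offered 130 elsewhere,
# but reneging costs a penalty of 25.
print(should_decommit(current_utility=100, new_offer_utility=130, penalty=25))  # True
print(should_decommit(current_utility=100, new_offer_utility=120, penalty=25))  # False
```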
78

On the derivation of value from geospatial linked data

Black, Jennifer January 2013 (has links)
Linked Data (LD) is a set of best practices for publishing and connecting structured data on the web. LD and Linked Open Data (LOD) are often conflated, to the point where there is an expectation that LD will be free and unrestricted. The current research looks at deriving commercial value from LD. When both free and paid-for data are available, the issue arises of how users will react to a situation where two or more options are provided. The current research examines the factors that would affect the choices made by users, and subsequently creates prototypes for users to interact with, in order to understand how consumers react to each of the different options. Our examination of commercial providers of LD uses Ordnance Survey (OS) (the UK national mapping agency) as a case study, by studying their requirements for and experiences of publishing LD, and we further extrapolate from this by comparing the OS to other potential commercial publishers of LD. Our research looks at the business case for LD and introduces the concept of LOD and Linked Closed Data (LCD). We also determine that there are two types of LD users, non-commercial users and commercial users, and as such two types of use of LD: LD as a raw commodity and LD as an application. Our experiments aim to identify the issues users would find when LD is accessed via an application. Our first investigation brought together technical users and users of Geographic Information (GI). With the idea of LOD and LCD, we asked users what factors would affect their view of data quality. We found three different types of buying behaviour on the web. We also found that context actively affected the user's decision, i.e. users were willing to pay when the data was needed to make a professional decision but not for leisure use. To enable us to observe the behaviour of consumers whilst using data online, we built a working prototype of an LD application that would enable potential users of the system to experience the data and give us feedback about how they would behave in an LD environment. This was then extended into a second LD application to find out whether the same principles held true if actual capital was involved and they had to make a conscious decision regarding payment. With this in mind, we proposed a potential architecture for the consumption of LD on the web. We determined potential issues, surrounding quality factors, which affect a consumer's willingness to pay for data. This supported our hypothesis that context affects a consumer's willingness to pay and that willingness to pay is related to a requirement to reduce search times. We also found that a consumer's perception of value and the criticality of purpose also affected their willingness to pay. Finally, we outlined an architecture to enable users to use LD where different scenarios, which may have potential payment restrictions, may be involved. This work is our contribution to the issue of the business case for LD on the web and is a starting point.
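The distinction between Linked Open Data and Linked Closed Data comes down to licence terms attached to resources that are otherwise queried in the same way. The sketch below queries a tiny in-memory geospatial LD fragment and filters on an assumed licence predicate; the URIs, predicates and data are invented placeholders, not the Ordnance Survey vocabulary.

```python
from rdflib import Graph

# A tiny, invented geospatial Linked Data fragment.
turtle = """
@prefix ex: <http://example.org/places/> .
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
ex:Edinburgh geo:lat "55.9533" ; geo:long "-3.1883" ; ex:licence ex:Open .
ex:Leith     geo:lat "55.9750" ; geo:long "-3.1700" ; ex:licence ex:Paid .
"""
g = Graph()
g.parse(data=turtle, format="turtle")

# Ask only for places published under the open (free) licence.
query = """
PREFIX ex: <http://example.org/places/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?place ?lat ?long WHERE {
    ?place ex:licence ex:Open ; geo:lat ?lat ; geo:long ?long .
}
"""
for place, lat, long_ in g.query(query):
    print(place, lat, long_)
```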
79

Achieving accurate opinion consensus in large multi-agent systems

Pryymak, Oleksandr January 2013 (has links)
Modern communication technologies offer the means to share information within decentralised, large and complex networks of agents. A significant number of examples of peer-to-peer interactions can be found in domains such as sensor networks, social web communities and file-sharing networks. Nevertheless, the development of decentralised systems still presents new challenges for sharing uncertain and conflicting information in large communities of agents. In particular, the problem of forming an opinion consensus supported by most of the observations distributed in a large system is still challenging. To date, this problem has been approached from two perspectives: (i) on a system level, by analysing the complex processes of opinion sharing in order to determine which system parameters result in higher performance; and (ii) from the perspective of individual agents, by designing algorithms for interactively reaching agreements on the correct opinion or for reasoning about the accuracy of a received opinion via its additional annotation. However, both of these approaches have significant weaknesses. The first requires centralised control and perfect knowledge of the configuration of the system in order to simulate it, which are unlikely to be available for large decentralised systems. The latter algorithms, meanwhile, introduce a significant communication overhead, whilst in many cases the capabilities of the agents are restricted and communication strictly limited. Therefore, there is a need to fill the gap between these two approaches by addressing the problem of improving the accuracy of consensus in a decentralised fashion with minimal communication expense. With this motivation, in this thesis we focus on the problem of improving the accuracy of consensus in large, complex networks of agents. We consider challenging settings in which communication is strictly limited to the sharing of opinions, which are subjective statements about the correct state of the subject of common interest. These opinions are dynamically introduced by a small number of sensing agents which have low accuracy, and thus the correct opinion only slightly prevails in the readings. In order to form an accurate consensus, the agents have to aggregate opinions from a number of sensing agents with which, however, they are very rarely in direct connection. Against this background, we focus on improving the accuracy of consensus and develop a solution for decentralised opinion aggregation. We build our work on recent research which suggests that large networked systems exhibit a mode of collective behaviour in which accuracy is improved. We extend this research and offer a novel opinion sharing model, which is the first to quantify the impact of collective behaviour on the accuracy of consensus. By investigating the properties of our model, we show that within a narrow range of parameters the accuracy of consensus is significantly improved in comparison to the accuracy of a single sensing agent. However, we show that such critical parameters cannot be predicted, since they are highly dependent on the system configuration. To address this problem, we develop the Autonomous Adaptive Tuning (AAT) algorithm, which controls the parameters of each agent individually and gradually tunes the system into the critical mode of collective behaviour. AAT is the first decentralised algorithm which improves accuracy in settings where communication is strictly limited to opinion sharing. As a result of applying AAT, 80-90% of the agents in a large system form the correct opinion, in contrast to 60-75% for the state-of-the-art message-passing algorithm proposed for these settings, known as DACOR. Additionally, we test other research requirements by evaluating teams with different sizes and network topologies, and thereby demonstrate that AAT is both scalable and adaptive. Finally, we show that AAT is highly robust, since it significantly improves the accuracy of consensus even when deployed in only 10% of the agents in a large heterogeneous system. However, AAT is designed for settings in which agents do not differentiate their opinion sources, whilst in many other opinion sharing scenarios agents can learn who their sources are. Therefore, we design the Individual Weights Tuning (IWT) algorithm, which can benefit from such additional information. IWT is the first behavioural algorithm that differentiates between the peers of an agent in solving the problem of improving the accuracy of consensus. Agents running IWT attribute higher weights to opinions from peers which deliver the most surprising opinions. Crucially, by incorporating information about the source of an opinion, IWT outperforms AAT for systems with dense communication networks. Considering that IWT has a higher computational cost than AAT, we conclude that IWT is more beneficial to use in dense networks, while AAT delivers a similar level of accuracy improvement in sparse networks but with a lower computational cost.
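The opinion sharing setting described above can be illustrated with a much simplified model: agents hold a belief, share a binary opinion only once their confidence crosses a threshold, and neighbours nudge their beliefs towards what they hear. The sketch below is such a simplification for illustration only; the network, thresholds and update rule are assumptions, not the thesis's model or the AAT algorithm.

```python
def run_round(beliefs, network, step=0.15, threshold=0.75):
    """One round of a simplified opinion-sharing model: agents whose
    confidence crosses a threshold share a binary opinion, and their
    neighbours move their belief a small step towards it."""
    shared = {}
    for agent, p in beliefs.items():
        if p >= threshold:
            shared[agent] = 1
        elif p <= 1 - threshold:
            shared[agent] = 0
    for agent, opinion in shared.items():
        target = 1.0 if opinion == 1 else 0.0
        for neighbour in network[agent]:
            beliefs[neighbour] += step * (target - beliefs[neighbour])
    return beliefs

# A small ring network with one confident "sensing" agent seeded towards opinion 1.
n = 20
network = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
beliefs = {i: 0.5 for i in range(n)}
beliefs[0] = 0.8
for _ in range(10):
    beliefs = run_round(beliefs, network)
print(sum(b > 0.5 for b in beliefs.values()), "of", n, "agents lean towards opinion 1")
```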
80

Pre-emptive type checking in dynamically typed programs

Grech, Neville January 2013 (has links)
With the rise of languages such as JavaScript, dynamically typed languages have gained a strong foothold in the programming language landscape. These languages are very well suited to rapid prototyping and to use with agile programming methodologies. However, programmers would benefit from the ability to detect type errors in their code early, without imposing unnecessary restrictions on their programs. Here we describe a new type inference system that identifies potential type errors through a flow-sensitive static analysis. This analysis is invoked at a very late stage, after the compilation to bytecode and initialisation of the program. For every expression it computes the variable's present type (from the values it has last been assigned) and its future type (the type with which it is used in the remainder of the program's execution). Using this information, our mechanism inserts type checks at strategic points in the original program. We prove that these checks, inserted as early as possible, pre-empt type errors earlier than existing type systems. We further show that these checks do not change the semantics of programs that do not raise type errors. Pre-emptive type checking can be added to existing languages without the need to modify the existing runtime environment. We show this with an implementation for the Python language and demonstrate its effectiveness on a number of benchmarks.
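The effect of inserting a check "as early as possible" can be illustrated in Python itself: a latent type error that would otherwise surface deep inside library code is caught at the point where the ill-typed value first flows in. The check below is hand-written for illustration; in the thesis the check sites are derived automatically from the flow-sensitive bytecode analysis.

```python
def average(values):
    # Latent type error: if `values` holds strings, the failure only
    # surfaces inside sum(), far from the call site that caused it.
    return sum(values) / len(values)

def average_checked(values):
    # The kind of check pre-emptive type checking would insert, placed as
    # early as possible so the error points at the real culprit.
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("average() expects a sequence of numbers")
    return sum(values) / len(values)

print(average_checked([1, 2, 3]))        # 2.0
try:
    average_checked(["1", "2", "3"])     # fails immediately, with a clear message
except TypeError as e:
    print(e)
```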
