281

An evaluation of the power consumption and carbon footprint of a cloud infrastructure

Yampolsky, Vincent January 2010 (has links)
The Information and Communication Technology (ICT) sector represents two to three percent of the world's energy consumption and about the same percentage of Greenhouse Gas (GHG) emissions. Moreover, IT-related costs represent fifty percent of a company's electricity bill. In January 2010 the GreenTouch consortium, composed of sixteen leading companies and laboratories in the IT field and led by Bell Labs and Alcatel-Lucent, announced that within five years the Internet could require a thousand times less energy than it does now. Furthermore, Edinburgh Napier University is committed to reducing its carbon footprint by 25% over the 2007/8 to 2012/13 period (Edinburgh Napier University Sustainability Office, 2009), and one of its objectives is to deploy innovative C&IT solutions. There is therefore a general interest, usually led by environmental concerns, in reducing the electrical cost of the IT infrastructure. One of the most prominent technologies when Green IT is discussed is cloud computing (Stephen Ruth, 2009). This technology allows on-demand self-service provisioning by making resources available as a service. Its elasticity allows automatic scaling with demand, and virtualization enables hardware consolidation. An increasing number of companies are therefore moving their resources into a cloud managed by themselves or by a third party. Cloud computing is known to reduce a company's electricity bill when the cloud is managed off-premise by a third party, but this does not say to what extent the power consumption is reduced; the processing resources seem merely to be located somewhere else. Moreover, hardware consolidation suggests that power saving is achieved only during off-peak time (Xiaobo Fan et al., 2007). Furthermore, the cost of the network is never mentioned when the cloud is described as power saving, and this cost might not be negligible: the network might need upgrades because what was done locally is done remotely with cloud computing. In the same way, cloud computing is supposed to enhance the capabilities of mobile devices, but the impact of cloud communication on their battery autonomy is not mentioned anywhere. Experiments were performed to evaluate the power consumption of an infrastructure relying on a cloud used for desktop virtualization, and to measure the cost of the same infrastructure without a cloud. The overall infrastructure was split into elements, respectively the cloud infrastructure, the network infrastructure and end devices, and the power consumption of each element was monitored separately. The experiments considered different servers, network equipment (switches, wireless access points, a router) and end devices (desktops, an iPhone, an iPad and a Sony Ericsson Xperia running Android). The experiments also measured the impact of cloud communication on the battery of mobile devices. The evaluation considered different deployment sizes and estimated the carbon emissions of the technologies tested. The cloud infrastructure proved to be power saving, and not only during off-peak time, from a sufficiently large deployment size (approximately 20 computers) for the same processing power. For a wide deployment (500 computers) the power saving is large enough to overcome the cost of a network upgrade to a Gigabit access infrastructure and still reduce carbon emissions by 4 tonnes, or 43.97%, over a year on Napier campuses compared to a traditional deployment with a Fast Ethernet access network. However, the impact of cloud communication on mobile devices is significant, increasing their power consumption by 57% to 169%.
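The headline result above rests on converting measured energy consumption into CO2-equivalent emissions over a year. The sketch below shows that conversion in minimal form; the grid emission factor, device power figures and deployment sizes are illustrative assumptions, not values taken from the thesis.

```python
# Illustrative conversion of measured power draw into annual CO2e emissions.
# The emission factor and power figures are assumptions for the sketch only.

GRID_EMISSION_FACTOR_KG_PER_KWH = 0.5  # assumed CO2e per kWh of grid electricity

def annual_co2e_kg(average_power_watts: float, hours_per_year: float = 24 * 365) -> float:
    """Convert an average power draw (W) into annual CO2e emissions (kg)."""
    energy_kwh = average_power_watts * hours_per_year / 1000.0
    return energy_kwh * GRID_EMISSION_FACTOR_KG_PER_KWH

# Example: compare a traditional deployment against a cloud/thin-client one.
traditional_kg = annual_co2e_kg(average_power_watts=100 * 20)  # 20 desktops at ~100 W each (assumed)
cloud_kg = annual_co2e_kg(average_power_watts=20 * 20 + 400)   # 20 thin clients (~20 W) plus a shared server (assumed)

saving_pct = 100 * (traditional_kg - cloud_kg) / traditional_kg
print(f"traditional: {traditional_kg:.0f} kg, cloud: {cloud_kg:.0f} kg, saving: {saving_pct:.1f}%")
```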
282

HI-Risk: a socio-technical method for the identification and monitoring of healthcare information security risks in the information society

van Deursen Hazelhoff Roelfze, Nicole January 2014 (has links)
This thesis describes the development of the HI-risk method to assess socio-technical information security risks. The method is based on the concept that related organisations experience similar risks and could benefit from sharing knowledge in order to take effective security measures. The aim of the method is to predict future risks by combining knowledge of past information security incidents with forecasts made by experts. HI-risk articulates the view that information security risk analysis should include human, environmental, and societal factors, and that collaboration amongst disciplines, organisations and experts is essential to improve security risk intelligence in today's information society. The HI-risk method provides the opportunity for participating organisations to register their incidents centrally. From this register, an analysis of the incident scenarios leads to the visualisation of the most frequent scenario trees. These scenarios are presented to experts in the field, who express their opinions about the expected frequency of occurrence in the future. Their expectation is based on their experience, their knowledge of existing countermeasures, and their insight into new potential threats. The combination of incident and expert knowledge forms a risk map. The map is the main deliverable of the HI-risk method, and organisations could use it to monitor their information security risks. The HI-risk method was designed by following the rigorous process of design science research. The empirical methods used included qualitative and quantitative techniques, such as an analysis of historical security incident data from healthcare organisations, expert elicitation through a Delphi study, and a successful test of the risk forecast in a case organisation. The research focused on healthcare, but the method has the potential to be further developed as a knowledge-based or expert system applicable to any industry. Such a system could be used as a tool for management to benchmark themselves against other organisations, to make security investment decisions, to learn from past incidents and to provide input for policy makers.
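As a rough illustration of the central idea, combining registered incident frequencies with expert forecasts into a ranked risk map, consider the sketch below; the blending weights, impact scale and scenario data are illustrative assumptions rather than the thesis's actual model.

```python
# Hedged sketch: blend historical incident frequency with expert forecasts
# into a per-scenario risk score. Weights and fields are assumptions.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    past_frequency: float   # incidents per year observed in the shared register
    expert_forecast: float  # expected incidents per year elicited from experts (e.g. a Delphi study)
    impact: float           # assumed impact rating, e.g. 1 (low) to 5 (high)

def risk_score(s: Scenario, history_weight: float = 0.5) -> float:
    """Blend past frequency and expert forecast, then scale by impact."""
    expected = history_weight * s.past_frequency + (1 - history_weight) * s.expert_forecast
    return expected * s.impact

scenarios = [
    Scenario("lost unencrypted laptop", past_frequency=4.0, expert_forecast=2.5, impact=4),
    Scenario("misdirected patient letter", past_frequency=10.0, expert_forecast=9.0, impact=2),
]

# A "risk map" here is simply the scenarios ranked by score.
for s in sorted(scenarios, key=risk_score, reverse=True):
    print(f"{s.name}: {risk_score(s):.1f}")
```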
283

Improving reliability of service oriented systems with consideration of cost and time constraints in clouds

Guo, Wei January 2016 (has links)
Web service technology is increasingly popular for the implementation of service oriented systems. Additionally, cloud computing platforms, as an efficient and readily available environment, can provide the computing, networking and storage resources needed to decrease the budget companies spend on deploying and managing their systems. Therefore, more and more service oriented systems are migrated to and deployed in clouds. However, these applications need to be improved in terms of reliability, since certain components have low reliability. Fault tolerance approaches can improve software reliability, but they require more redundant units, which increases the cost and the execution time of the entire system. Therefore, a migration and deployment framework that applies fault tolerance approaches while considering global constraints on cost and execution time may be needed. This work proposes a migration and deployment framework to guide the designers of service oriented systems in improving reliability under global constraints in clouds. A multilevel redundancy allocation model is adopted for the framework to assign redundant units to the structure of systems with fault tolerance approaches. An improved genetic algorithm is used to generate the migration plan, taking the execution time of systems and the cost constraints into consideration. Fault tolerance approaches (such as NVP, RB and Parallel) can be integrated into the framework so as to improve the reliability of the components at the bottom level. Additionally, a new encoding mechanism based on linked lists is proposed to improve the performance of the genetic algorithm by reducing the movement of redundant units in the model. The experiments compare the performance of the encoding mechanisms and of the model integrated with different fault tolerance approaches. The empirical studies show that the proposed framework, with a multilevel redundancy allocation model integrated with the fault tolerance approaches, can generate migration plans for service oriented systems in clouds with consideration of cost and execution time.
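For a feel of the underlying optimisation, the sketch below evolves redundancy counts for a small system under a cost budget using a plain genetic algorithm. It is a single-level simplification under assumed reliabilities and costs; the thesis's multilevel model and linked-list chromosome encoding are not reproduced here.

```python
# Hedged sketch: redundancy allocation under a cost budget with a simple GA.
# Component reliabilities, costs and the budget are assumed values.
import random

reliabilities = [0.90, 0.85, 0.95]  # assumed per-component reliability
costs = [2.0, 3.0, 1.5]             # assumed cost per redundant unit
budget = 20.0

def system_reliability(units):
    rel = 1.0
    for r, n in zip(reliabilities, units):
        rel *= 1 - (1 - r) ** n     # parallel redundancy per component
    return rel

def fitness(units):
    cost = sum(c * n for c, n in zip(costs, units))
    return system_reliability(units) if cost <= budget else 0.0  # penalise infeasible plans

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(1, 4) for _ in costs] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(costs))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.2:              # mutate one redundancy count
                i = random.randrange(len(costs))
                child[i] = max(1, child[i] + random.choice([-1, 1]))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("redundancy plan:", best, "system reliability:", round(system_reliability(best), 4))
```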
284

A quality assessment framework for knowledge management software

Habaragamu Ralalage, Wijendra Peiris Gunathilake January 2016 (has links)
CONTEXT: Knowledge is a strategic asset for any organisation because of its usefulness in supporting innovation, performance improvement and competitive advantage. In order to gain the maximum benefit from knowledge, the effective management of its various forms is increasingly viewed as vital. A Knowledge Management System (KMS) is a class of Information System (IS) that manages organisational knowledge, and KMS software (KMSS) is a KMS component that can be used as a platform for managing various forms of knowledge. The evaluation of the effectiveness or quality of KMS software is challenging, and no systematic evidence exists on the quality evaluation of knowledge management software that considers the various aspects of Knowledge Management (KM) to ensure the effectiveness of a KMS. AIM: The overall aim is to formalise a quality assessment framework for knowledge management software (KMSS). METHOD: In order to achieve this aim, the research was planned and carried out in the stages identified in the software engineering research methods literature. The need for this research was identified through a mapping study of prior KMS research. The data collected through a Systematic Literature Review (SLR) and from the evaluation of a KMSS prototype by a sample of 58 regular users of knowledge management software served as the main sources for the formalisation of the quality assessment framework. A test bed for empirical data collection was designed and implemented based on key principles of learning. The formalised quality assessment framework was applied to select knowledge management software and was evaluated for effectiveness. RESULTS: The final outcome of this research is a quality assessment framework consisting of 41 quality attributes categorised under content quality, platform quality and user satisfaction. A Quality Index was formulated by integrating these three categories of quality attributes to evaluate the quality of knowledge management software. CONCLUSION: This research makes a novel contribution by presenting a framework for the quality assessment of knowledge management software, not previously available in the literature. This framework is a valuable resource for any organisation or individual selecting the most suitable knowledge management software by considering the quality attributes of the software.
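A minimal sketch of how such a Quality Index might combine the three attribute categories is given below; the weights, 0-1 scale and example scores are illustrative assumptions rather than the formulation in the thesis.

```python
# Hedged sketch: a Quality Index as a weighted aggregate of the three
# categories named in the abstract. Weights and scores are assumptions.

def quality_index(content_quality: float,
                  platform_quality: float,
                  user_satisfaction: float,
                  weights=(0.4, 0.3, 0.3)) -> float:
    """Each category score is assumed to be a 0-1 average of its attribute ratings."""
    scores = (content_quality, platform_quality, user_satisfaction)
    return sum(w * s for w, s in zip(weights, scores))

# Example: compare two candidate KMS software packages.
candidates = {
    "KMS A": quality_index(0.82, 0.74, 0.69),
    "KMS B": quality_index(0.71, 0.88, 0.80),
}
print(max(candidates, key=candidates.get))
```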
285

Commitment models and concurrent bilateral negotiation strategies in dynamic service markets

Ponka, Ilja January 2009 (has links)
Technologies such as Web Services, the Semantic Web and the Grid may give rise to new electronic service markets, which have so many services, providers and consumers that software agents need to be employed to search and configure the services for their users. To this end, in this thesis, we investigate bilateral negotiations between agents of service providers and consumers in such markets. Our main interest is in decommitment policies or rules that govern reneging on a commitment, which are essential for operating effectively in such dynamic settings. The work is divided into two main parts. In part I (chapters 3-8), we investigate how the decommitment policies, through the parties’ behaviour, affect the combined utility of all market participants. As a tool, we use decisions that parties make during their interaction. These decisions have previously been discussed in the law and economics literature, but this is the first empirical investigation of them in a dynamic service market setting. We also consider settings (for example, with incomplete information) that have not been addressed before. In particular, we take four of these decisions — performance, reliance, contract and selection — and consider them one by one in a variety of settings. We create a number of novel decommitment policies and increase the understanding of these decisions in electronic markets. In part II (chapters 9-11), we consider a buyer agent that engages in multiple negotiations with different seller agents concurrently and consider how decommitment policies should affect its behaviour. Specifically, we develop a detailed adaptive model for concurrent bilateral negotiation by extending the existing work in several directions. In particular, we pay special attention to choosing the opponents to negotiate with and choosing the number of negotiations to have concurrently, but we also address questions such as bilateral negotiation tactics and interconnected negotiations on different services. These ideas are again evaluated empirically.
286

On the derivation of value from geospatial linked data

Black, Jennifer January 2013 (has links)
Linked Data (LD) is a set of best practices for publishing and connecting structured data on the web. LD and Linked Open Data (LOD) are often conflated, to the point where there is an expectation that LD will be free and unrestricted. The current research looks at deriving commercial value from LD. When both free and paid-for data are available, the issue arises of how users will react to a situation where two or more options are provided. The current research examines the factors that would affect the choices made by users, and subsequently created prototypes for users to interact with, in order to understand how consumers reacted to each of the different options. Our examination of commercial providers of LD uses Ordnance Survey (OS), the UK national mapping agency, as a case study by studying its requirements for and experiences of publishing LD, and we extrapolate further by comparing the OS to other potential commercial publishers of LD. Our research looks at the business case for LD and introduces the concepts of LOD and Linked Closed Data (LCD). We also determine that there are two types of LD users, non-commercial and commercial, and as such two types of use of LD: LD as a raw commodity and LD as an application. Our experiments aim to identify the issues users would find where LD is accessed via an application. Our first investigation brought together technical users and users of Geographic Information (GI). With the idea of LOD and LCD, we asked users what factors would affect their view of data quality. We found three different types of buying behaviour on the web. We also found that context actively affected the user's decision, i.e. users were willing to pay when the data was to inform a professional decision but not for leisure use. To enable us to observe the behaviour of consumers whilst using data online, we built a working prototype of an LD application that would enable potential users of the system to experience the data and give us feedback about how they would behave in an LD environment. This was then extended into a second LD application to find out whether the same principles held true if actual capital was involved and users had to make a conscious decision regarding payment. With this in mind we proposed a potential architecture for the consumption of LD on the web. We determined potential issues, surrounding quality factors, which affect a consumer's willingness to pay for data. This supported our hypothesis that context affects a consumer's willingness to pay and that willingness to pay is related to a requirement to reduce search times. We also found that a consumer's perception of value and the criticality of purpose affected their willingness to pay. Finally, we outlined an architecture to enable users to use LD in different scenarios where payment restrictions may apply. This work is our contribution to the issue of the business case for LD on the web and is a starting point.
287

Achieving accurate opinion consensus in large multi-agent systems

Pryymak, Oleksandr January 2013 (has links)
Modern communication technologies offer the means to share information within decentralised, large and complex networks of agents. A significant number of examples of peer-to-peer interactions can be found in domains such as sensor networks, social web communities and file-sharing networks. Nevertheless, the development of decentralised systems still presents new challenges for sharing uncertain and conflicting information in large communities of agents. In particular, the problem of forming an opinion consensus supported by most of the observations distributed in a large system is still challenging. To date, this problem has been approached from two perspectives: (i) on a system level, by analysing the complex processes of opinion sharing in order to determine which system parameters result in higher performance; and (ii) from the perspective of individual agents, by designing algorithms for interactively reaching agreements on the correct opinion or for reasoning about the accuracy of a received opinion through additional annotation. However, both of these approaches have significant weaknesses. The first requires centralised control and perfect knowledge of the configuration of the system in order to simulate it, which are unlikely to be available for large decentralised systems, whereas the latter algorithms introduce a significant communication overhead, whilst in many cases the capabilities of the agents are restricted and communication strictly limited. Therefore, there is a need to fill the gap between these two approaches by addressing the problem of improving the accuracy of consensus in a decentralised fashion with minimal communication expense. With this motivation, in this thesis we focus on the problem of improving the accuracy of consensus in large, complex networks of agents. We consider challenging settings in which communication is strictly limited to the sharing of opinions, which are subjective statements about the correct state of the subject of common interest. These opinions are dynamically introduced by a small number of sensing agents which have low accuracy, so the correct opinion only slightly prevails in the readings. In order to form an accurate consensus, the agents have to aggregate opinions from a number of sensing agents with which, however, they are very rarely in direct connection. Against this background, we focus on improving the accuracy of consensus and develop a solution for decentralised opinion aggregation. We build our work on recent research which suggests that large networked systems exhibit a mode of collective behaviour in which accuracy is improved. We extend this research and offer a novel opinion sharing model, which is the first to quantify the impact of collective behaviour on the accuracy of consensus. By investigating the properties of our model, we show that within a narrow range of parameters the accuracy of consensus is significantly improved in comparison to the accuracy of a single sensing agent. However, we show that such critical parameters cannot be predicted, since they are highly dependent on the system configuration. To address this problem, we develop the Autonomous Adaptive Tuning (AAT) algorithm, which controls the parameters of each agent individually and gradually tunes the system into the critical mode of collective behaviour. AAT is the first decentralised algorithm to improve accuracy in settings where communication is strictly limited to opinion sharing. As a result of applying AAT, 80-90% of the agents in a large system form the correct opinion, in contrast to 60-75% for the state-of-the-art message-passing algorithm proposed for these settings, known as DACOR. Additionally, we test other research requirements by evaluating teams of different sizes and network topologies, and thereby demonstrate that AAT is both scalable and adaptive. Finally, we show that AAT is highly robust, since it significantly improves the accuracy of consensus even when deployed in only 10% of the agents in a large heterogeneous system. However, AAT is designed for settings in which agents do not differentiate their opinion sources, whilst in many other opinion sharing scenarios agents can learn who their sources are. Therefore, we design the Individual Weights Tuning (IWT) algorithm, which can benefit from such additional information. IWT is the first behavioural algorithm that differentiates between the peers of an agent in solving the problem of improving the accuracy of consensus. Agents running IWT attribute higher weights to opinions from peers which deliver the most surprising opinions. Crucially, by incorporating information about the source of an opinion, IWT outperforms AAT for systems with dense communication networks. Considering that IWT has a higher computational cost than AAT, we conclude that IWT is more beneficial in dense networks, while AAT delivers a similar level of accuracy improvement in sparse networks but with a lower computational cost.
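To make the setting concrete, the sketch below shows the kind of per-agent update rule such opinion-sharing models use: a belief nudged by each incoming opinion, and an opinion adopted (and re-shared) only once the belief crosses a threshold. The update rule, weight and threshold are illustrative assumptions; AAT's actual tuning of the weight is not reproduced here.

```python
# Hedged sketch of a threshold-based opinion-sharing agent. The numeric
# parameters are assumptions; AAT would adapt the importance weight at runtime.

class Agent:
    def __init__(self, weight: float = 0.25, threshold: float = 0.8):
        self.belief = 0.5          # P(true state is "1"), starts undecided
        self.weight = weight       # importance given to a peer's opinion (the parameter AAT would tune)
        self.threshold = threshold
        self.opinion = None        # adopted opinion: None, 0 or 1

    def receive(self, peer_opinion: int) -> bool:
        """Update the belief from a peer's opinion; return True if our own opinion changed."""
        target = 1.0 if peer_opinion == 1 else 0.0
        self.belief += self.weight * (target - self.belief)
        new_opinion = self.opinion
        if self.belief >= self.threshold:
            new_opinion = 1
        elif self.belief <= 1 - self.threshold:
            new_opinion = 0
        changed = new_opinion != self.opinion
        self.opinion = new_opinion
        return changed             # only a changed opinion is re-shared, keeping communication minimal

a = Agent()
for o in [1, 1, 0, 1, 1, 1]:       # opinions arriving from peers
    if a.receive(o):
        print("adopted and shared opinion:", a.opinion)
```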
288

Pre-emptive type checking in dynamically typed programs

Grech, Neville January 2013 (has links)
With the rise of languages such as JavaScript, dynamically typed languages have gained a strong foothold in the programming language landscape. These languages are very well suited to rapid prototyping and to use with agile programming methodologies. However, programmers would benefit from the ability to detect type errors in their code early, without imposing unnecessary restrictions on their programs. Here we describe a new type inference system that identifies potential type errors through a flow-sensitive static analysis. This analysis is invoked at a very late stage, after the compilation to bytecode and initialisation of the program. For every expression it computes the variable's present type (derived from the values it has most recently been assigned) and its future type (the type with which it is used in the remainder of the program's execution). Using this information, our mechanism inserts type checks at strategic points in the original program. We prove that these checks, inserted as early as possible, pre-empt type errors earlier than existing type systems. We further show that these checks do not change the semantics of programs that do not raise type errors. Pre-emptive type checking can be added to existing languages without the need to modify the existing runtime environment. We show this with an implementation for the Python language and demonstrate its effectiveness on a number of benchmarks.
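The sketch below illustrates the idea in Python, the thesis's implementation language: if the analysis determines that a variable's future type is int, a check can be inserted at the assignment, surfacing the error well before the offending use. The check_type helper and the example program are illustrative assumptions, not the thesis's actual instrumentation.

```python
# Hedged illustration of an inserted, pre-emptive type check. The helper and
# the program are invented for the sketch; only the idea follows the abstract.

def check_type(value, expected_type, location):
    if not isinstance(value, expected_type):
        raise TypeError(f"{location}: expected {expected_type.__name__}, got {type(value).__name__}")
    return value

def read_config(raw):
    # In the uninstrumented program `port` would only fail at the arithmetic
    # below; the inserted check pre-empts that error at the assignment, the
    # earliest point at which the future type of `port` is known.
    port = check_type(raw["port"], int, "read_config:port")
    return port + 1000  # the later use that fixes the "future type" of `port`

print(read_config({"port": 8080}))      # passes
try:
    read_config({"port": "8080"})       # fails at the assignment, not at the addition
except TypeError as e:
    print(e)
```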
289

Developing a model of mobile Web uptake in the developing world

Purwandari, Betty January 2013 (has links)
This research was motivated by the limited penetration of the Internet within emerging economies and by the ‘mobile miracle’, which refers to a steep increase in mobile phone penetration. In the context of the developing world, harnessing the ‘mobile miracle’ to improve Internet access can leverage the potential of the Web. However, no comprehensive model exists that can identify and measure indicators of Mobile Web uptake. The absence of such a model creates problems in understanding the impact of the Mobile Web. This has generated the key question under study in this thesis: “What is a suitable model for Mobile Web uptake and its impact in the developing world?” In order to address the research question, the Model of Mobile Web Uptake in the Developing World (MMWUDW) was created. It was informed by a literature review, a pilot study in Kenya and expert reviews. The MMWUDW was evaluated using Structural Equation Modelling (SEM) on primary data consisting of questionnaire and interview data from Indonesia. The SEM analysis was triangulated with the questionnaire results and interview findings. Examining the primary data to evaluate the MMWUDW was essential to understanding why people use mobile phones to make or follow links on the Web. The MMWUDW has three main factors: Mobile Web maturity, uptake and impact. The results of the SEM suggested that mobile networks, percentage of income spent on mobile credit, literacy and digital literacy did not affect Mobile Web uptake. In contrast, web-enabled phones, Web applications or content, and mobile operator services strongly indicated Mobile Web maturity, which was a prerequisite for Mobile Web uptake. The uptake then created Mobile Web impact, which included both positive and negative features: ease of access to information and a convenient way to communicate; being entertained and empowered; the maintenance of social cohesion and economic benefits; as well as wasted time and money and exposure to cyberbullying. Moreover, the research identified areas for improvement in the Mobile Web and regression equations to measure the factors and indicators of the MMWUDW. Possible future work comprises advancement of the MMWUDW and new Web Science research on the Mobile Web in developing countries.
290

Direct write printed flexible electronic devices on fabrics

Li, Yi January 2014 (has links)
This thesis describes direct write printing methods for achieving flexible electronic devices on fabrics, by investigating low-temperature processes and functional conductor, insulator and semiconductor inks. The objective is to print flexible electronic devices onto fabrics solely by inkjet printing or pneumatic dispenser printing. Antennas and capacitors, as intermediate inkjet printed electronic devices, are addressed before transistor fabrication. There are many publications that report inkjet printed flexible electronic devices. However, none of the reported methods use fabrics as the target substrate or are processed at a sufficiently low temperature (≤150 °C) for fabrics to survive. The target substrate in this research, standard 65/35 polyester cotton fabric, has a maximum thermal curing condition of 180 °C for 15 minutes and 150 °C for 45 minutes. Therefore the total effective curing is best kept below 150 °C and within 30 minutes to minimise any potential degradation of the fabric substrate. This thesis reports on an inkjet printed flexible half-wavelength fabric dipole antenna, an inkjet printed fabric patch antenna, an all inkjet printed SU-8 capacitor, an all inkjet printed fabric capacitor and an inkjet printed transistor on a silicon dioxide coated silicon wafer. The measured fabric dipole antenna peak operating frequency is 1.897 GHz with 74.1% efficiency and 3.6 dBi gain. The measured fabric patch antenna peak operating frequency is around 2.48 GHz with efficiency up to 57% and 5.09 dBi gain. The measured capacitance of the printed capacitor is 48.5 pF (2.47 pF/mm²) at 100 Hz using the inkjet printed SU-8. The capacitance of an all inkjet printed flexible fabric capacitor is 163 pF (23.1 pF/mm²) at 100 Hz with the UV curable PVP dielectric ink developed as part of this work.
