1401

Limiting vulnerability exposure through effective patch management: threat mitigation through vulnerability remediation

White, Dominic Stjohn Dolin 08 February 2007
This document aims to provide a complete discussion of vulnerability and patch management. The first chapters look at the trends relating to vulnerabilities, exploits, attacks and patches. These trends describe the drivers of patch and vulnerability management and situate the discussion in the current security climate. The following chapters then aim to present both policy and technical solutions to the problem. The policies described lay out a comprehensive set of steps that can be followed by any organisation to implement its own patch management policy, including practical advice on integration with other policies, managing risk, identifying vulnerabilities, strategies for reducing downtime and generating metrics to measure progress. Having covered the steps that can be taken by users, a strategy describing how best a vendor should implement a related patch release policy is provided. An argument is made that current monthly patch release schedules are inadequate to allow users to mitigate vulnerabilities most effectively and timeously. The final chapters discuss the technical aspects of automating parts of the policies described. In particular, the concept of 'defense in depth' is used to discuss additional strategies for 'buying time' during the patch process. The document then concludes that, in the face of increasing malicious activity and more complex patching, solid frameworks such as those provided in this document are required to ensure an organisation can fully manage the patching process. However, more research is required to fully understand vulnerabilities and exploits. In particular, more attention must be paid to threats, as little work has been done to fully understand threat-agent capabilities and activities on a day-to-day basis.
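The policy chapters' call for progress metrics invites a concrete illustration. Below is a minimal Python sketch of one such metric, mean time to patch; the record structure and the choice of metric are illustrative assumptions, not the thesis's prescribed measurements.

```python
from datetime import datetime

# Hypothetical records: (vulnerability disclosed, patch applied) timestamps.
# A minimal sketch of one progress metric a patch management policy might
# track; the data and metric choice are assumptions for illustration.
events = [
    (datetime(2007, 1, 3), datetime(2007, 1, 10)),
    (datetime(2007, 1, 15), datetime(2007, 2, 2)),
]

def mean_days_to_patch(events):
    """Average days between vulnerability disclosure and patch deployment."""
    deltas = [(patched - disclosed).days for disclosed, patched in events]
    return sum(deltas) / len(deltas)

print(f"Mean time to patch: {mean_days_to_patch(events):.1f} days")
```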
1402

Using semantic knowledge to improve compression on log files

Otten, Frederick John 19 November 2008
With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually follow a similar format with a defined syntax. In the log files, not all the ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying the results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces the timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters. In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
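As an illustration of the semantic preprocessing idea, here is a Python sketch that replaces dotted-quad IPv4 addresses in a log line with a marker byte plus their 4-byte binary form, shrinking up to 15 text characters to 5 bytes before a general-purpose compressor runs. The marker byte and the overall scheme are assumptions for illustration, not the thesis's actual preprocessor.

```python
import re
import socket

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def pack_ips(line: str) -> bytes:
    """Replace dotted-quad IPv4 addresses with a 1-byte marker + 4 raw bytes.

    0x01 is an assumed marker byte, chosen as one of the ASCII codes unused
    in the log corpus; the reverse mapping restores the original text.
    """
    out, last = bytearray(), 0
    for m in IP_RE.finditer(line):
        out += line[last:m.start()].encode("ascii", "replace")
        out += b"\x01" + socket.inet_aton(m.group(1))  # 4 bytes instead of up to 15
        last = m.end()
    out += line[last:].encode("ascii", "replace")
    return bytes(out)

print(pack_ips("Oct  9 12:33:16 relay sm-mta[411]: from=<a@b>, relay=192.168.10.12"))
```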
1403

Systems support for distributed learning environments

Allison, Colin January 2003
This thesis contends that the growing phenomenon of multi-user networked "learning environments" should be treated as distributed interactive systems, and that their developers should be aware of the systems and networks issues involved in their construction and maintenance. Such environments are henceforth referred to as distributed learning environments, or DLEs. Three major themes are identified as part of systems support: i) shared resource coherence in DLEs; ii) Quality of Service for the end-users of DLEs; and iii) the need for an integrating framework to develop, deploy and manage DLEs. The thesis reports on several distinct implementations and investigations that are each linked by one or more of those themes. Initially, responsiveness and coherence emerged as potentially conflicting requirements, and although a system was built that successfully resolved this conflict, it proved difficult to move from the "clean room" conditions of a research project into a real-world learning context. Accordingly, subsequent systems adopted a web-based approach to aid deployment in realistic settings. Indeed, production versions of these systems have been used extensively in credit-bearing modules in several Scottish universities. Interactive responsiveness then emerged as a major Quality of Service issue in its own right, and motivated a series of investigations into the sources of delay as experienced by end-users of web-oriented distributed learning environments. Investigations into this issue provided insight into the nature of web-oriented interactive distributed learning and highlighted the need to be QoS-aware. As the volume and range of usage of distributed learning applications increased, the need for an integrating framework emerged. This required identifying and supporting a wide variety of educational resource types and also the key roles occupied by users of the system, such as tutors, students, supervisors, service providers, administrators and examiners. The thesis reports on the approaches taken and lessons learned from researching, designing and implementing systems which support distributed learning. As such, it constitutes a documented body of work that can inform the future design and deployment of distributed learning environments.
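To make the delay investigations concrete, here is a minimal Python sketch of measuring the response delay an end-user would experience fetching a web resource; the URL is a placeholder, and a real study would separate DNS, connection and transfer time rather than report a single wall-clock figure.

```python
import time
import urllib.request

# A minimal sketch of measuring end-user response delay for a web resource.
# The URL is a placeholder assumption; repeated samples and a breakdown of
# the delay sources would be needed for a QoS-aware analysis.
def fetch_delay(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # include full transfer time, not just first byte
    return time.perf_counter() - start

print(f"Round-trip delay: {fetch_delay('http://example.org/') * 1000:.0f} ms")
```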
1404

Design and investigation of scalable multicast recursive protocols for wired and wireless ad hoc networks

Al-Balas, Firas January 2009
The ever-increasing demand for content distribution and media streaming over the Internet has created the need for efficient methods of delivering information. One of the most promising approaches is based on multicasting. However, multicast solutions have to cope with several constraints as well as being able to perform in different environments such as wired, wireless, and ad hoc environments. Additionally, the scale and size of the Internet introduces another dimension of difficulty. Providing scalable multicast for mobile hosts in wireless environments and in mobile ad hoc networks (MANETs) is a challenging problem. In the past few years, several protocols have been proposed to provide efficient multicast solutions over the Internet, but these protocols did not offer an efficient solution to the scalability issue. In this thesis, scalable multicast protocols for wired, wireless and wireless ad hoc networks are proposed and evaluated. These protocols share the idea of building up a multicast tree gradually and recursively, as multicast group members join and leave, using a dynamic branching node-based tree (DBT) concept. The DBT uses a pair of branching node messages (BNMs). These messages traverse between a set of dynamically assigned branching node routers (BNRs) to build the multicast tree. In the proposed protocols, only the branching node routers (BNRs), rather than the multicast group members, carry the state information about their next BNRs, which keeps the control packet header size fixed as the multicast group size increases, i.e. a good solution to the problem of scalability. Also, the process of joining and leaving the multicast group is carried out locally, which gives low join/leave latency. The proposed protocols include: the Scalable Recursive Multicast protocol (SReM), which is proposed using the DBT concepts mentioned above; the Mobile Scalable Recursive Multicast protocol (MoSReM), which extends SReM by taking into consideration the mobility of end hosts and performing an efficient roaming process; and finally, the Scalable Ad hoc Recursive Multicast protocol (SARM), which achieves mobility for all nodes and performs efficient link recovery in response to node movement. Through cost analysis and extensive simulation, the proposed protocols show many positive features: fixed-size control messages, scalability, low end-to-end delay, a high packet delivery rate and low normalized routing overhead. The thesis concludes by discussing the contributions of the proposed protocols to scalable multicast for the Internet community.
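A toy Python sketch of the DBT idea follows: only branching-node routers keep next-BNR state, so per-router state (and the control header) stays fixed as the group grows, and a join touches only the local BNR. Class and method names are illustrative assumptions, not the protocol's message formats.

```python
# A toy sketch of the DBT idea: only branching-node routers (BNRs) keep
# forwarding state for their next BNRs, so per-router state is independent
# of the multicast group size. Names are illustrative assumptions.
class BNR:
    def __init__(self, name):
        self.name = name
        self.next_bnrs = set()   # state kept only at branching nodes

    def join(self, new_bnr):
        """Attach a node locally; the rest of the tree is untouched."""
        self.next_bnrs.add(new_bnr)

    def forward(self, packet):
        for child in self.next_bnrs:
            child.forward(packet)  # fixed header: no member list is carried

class Leaf(BNR):
    def forward(self, packet):
        print(f"{self.name} delivered: {packet}")

root = BNR("root")
b1 = BNR("bnr-1"); root.join(b1)
b1.join(Leaf("host-a")); b1.join(Leaf("host-b"))  # local, low-latency joins
root.forward("stream chunk 42")
```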
1405

A common analysis framework for simulated streaming-video networks

Mulumba, Patrick January 2009
Distributed media streaming has been driven by the combination of improved media compression techniques and an increase in the availability of bandwidth. This increase has led to the development of various streaming distribution engines (systems/services), which currently provide the majority of the streaming media available throughout the Internet. This study aimed to analyse a range of existing commercial and open-source streaming media distribution engines, and to classify them in such a way as to define a Common Analysis Framework for Simulated Streaming-Video Networks (CAFSS-Net). This common framework was used as the basis for a simulation tool intended to aid in the development and deployment of streaming media networks and to predict the performance impacts of network configuration changes, video features (scene complexity, resolution) and general scaling. CAFSS-Net consists of six components: the server, the client(s), the network simulator, the video publishing tools, the videos and the evaluation tool-set. Test scenarios are presented consisting of different network configurations, scales and external traffic specifications. From these test scenarios, results were obtained to highlight interesting observations and to provide an overview of the different test specifications for this study. From these results, an analysis of the system was performed, yielding relationships between the videos, the different bandwidths, the different measurement tools and the different components of CAFSS-Net. Based on the analysis of the results, the implications for CAFSS-Net highlighted different achievements and proposals for future work on the different components. CAFSS-Net was able to successfully integrate all of its components to evaluate the different streaming scenarios. The streaming server, client and video components accomplished their objectives. It is noted that although the video publishing tool was able to provide the necessary compression/decompression services, the implementation of alternative compression/decompression schemes could serve as a suitable extension. The network simulator and evaluation tool-set components were also successful, but further tests (particularly in low-bandwidth scenarios) are suggested in order to improve the accuracy of the framework as a whole. CAFSS-Net is especially successful at analysing high-bandwidth connections, with results similar to those of the physical network tests.
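To suggest how such test scenarios might be parameterised, here is a small Python sketch; the field names and values are assumptions for illustration, not CAFSS-Net's actual configuration schema.

```python
from dataclasses import dataclass

# A sketch of how a streaming test scenario might be parameterised along the
# dimensions the study varies: network configuration, scale, external
# traffic and video features. Field names are illustrative assumptions.
@dataclass
class StreamingScenario:
    clients: int                 # scale of the test
    bandwidth_kbps: int          # network configuration
    cross_traffic_kbps: int      # external traffic specification
    video_resolution: str        # video feature
    scene_complexity: str        # video feature, e.g. "low" or "high"

scenarios = [
    StreamingScenario(10, 2048, 0, "640x480", "low"),
    StreamingScenario(50, 512, 256, "320x240", "high"),  # low-bandwidth case
]
for s in scenarios:
    print(s)
```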
1406

An evaluation of security issues in cloud-based file sharing technologies

Fana, Akhona January 2015
Cloud computing is one of the most promising technologies for backup and data storage that provides flexible access to data. Cloud computing plays a vital role in remote backup. Unfortunately, this computing technique has flaws that leave end users wary of implementing it effectively. These flaws include a lack of integrity, confidentiality and privacy of information. A secure cloud is impossible unless the computer-generated environment is appropriately secured. In any form of technology it is always advisable that security challenges be identified and fixed before that particular technology is implemented. Primarily, this study focuses on finding security issues in cloud computing, with the objective of identifying concerns like credential theft and session management in the "Cloud". Issues like HTTP banner disclosure, Bash "ShellShock" injection and password weaknesses were discovered during the implementation stages of the study. These challenges may provide information that permits hackers to manipulate and exploit the cloud environment. To identify credential theft and session management issues in cloud-based file sharing technologies, a mixed-method approach was implemented throughout the course of the study, owing to the nature of the study and the unit of analysis. Penetration tests were performed as the security testing technique. Prevention of security threats, and guidelines for addressing them, lead to a friendly and authentic world of technology.
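As an example of the kind of finding mentioned above, here is a minimal Python sketch of an HTTP banner disclosure check: a verbose Server header leaks software and version information useful to an attacker. The host is a placeholder assumption, and such checks should only be run against systems one is authorised to test.

```python
import urllib.request

# A minimal sketch of an HTTP banner disclosure check. A Server header such
# as "Apache/2.4.7 (Ubuntu)" reveals version information that helps an
# attacker select exploits; hardened servers suppress or genericise it.
def server_banner(url: str) -> str:
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Server", "<not disclosed>")

# Placeholder host: only test systems you are authorised to assess.
print(server_banner("http://example.org/"))
```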
1407

A scalable architecture for the demand-driven deployment of location-neutral software services

MacInnis, Robert F. January 2010
This thesis presents a scalable service-oriented architecture for the demand-driven deployment of location-neutral software services, using an end-to-end or ‘holistic’ approach to address identified shortcomings of the traditional Web Services model. The architecture presents a multi-endpoint Web Service environment which abstracts over Web Service location and technology and enables the dynamic provision of highly-available Web Services. The model describes mechanisms which provide a framework within which Web Services can be reliably addressed, bound to, and utilized, at any time and from any location. The presented model eases the task of providing a Web Service by consuming deployment and management tasks. It eases the development of consumer agent applications by letting developers program against what a service does, not where it is or whether it is currently deployed. It extends the platform-independent ethos of Web Services by providing deployment mechanisms which can be used independent of implementation and deployment technologies. Crucially, it maintains the Web Service goal of universal interoperability, preserving each actor’s view upon the system so that existing Service Consumers and Service Providers can participate without any modifications to provider agent or consumer agent application code. Lastly, the model aims to enable the efficient consumption of hosting resources by providing mechanisms to dynamically apply and reclaim resources based upon measured consumer demand.
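A Python sketch of the location-neutral binding idea follows: consumers resolve a service name through a registry that maps it to whichever endpoint is currently live, deploying one on demand if none exists. All names are illustrative assumptions, not the thesis's actual interfaces.

```python
import random

# A sketch of demand-driven, location-neutral service binding: consumer
# agents program against a service name, and a registry resolves it to a
# live endpoint, deploying one on demand. All names are assumptions.
class ServiceRegistry:
    def __init__(self):
        self.endpoints: dict[str, list[str]] = {}

    def resolve(self, service: str) -> str:
        live = self.endpoints.get(service)
        if not live:                      # demand-driven deployment
            live = [self._deploy(service)]
            self.endpoints[service] = live
        return random.choice(live)        # abstracts over endpoint location

    def _deploy(self, service: str) -> str:
        print(f"deploying {service} on demand")
        return f"http://host-{random.randint(1, 9)}/{service}"

registry = ServiceRegistry()
print(registry.resolve("grading-service"))
print(registry.resolve("grading-service"))  # reuses the live endpoint
```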
1408

The distributed utility model applied to optimal admission control & QoS adaptation in multimedia systems & enterprise networks

Akbar, Md Mostofa 05 November 2018
Allocation and reservation of resources, such as CPU cycles and I/O bandwidth of multimedia servers and link bandwidth in the network, is essential to ensure the Quality of Service (QoS) of multimedia services delivered over the Internet. We propose a Distributed Multimedia Server System (DMSS) configured out of a collection of networked multimedia servers where multimedia data are partitioned and replicated among the servers. We also introduce Utility Model-Distributed (UM-D), the distributed version of the Utility Model, for admission control and QoS adaptation of multimedia sessions to maximize revenue from multimedia services for the DMSS. Two control architectures, one centralized and one distributed, have been proposed to solve the admission control problem formalized by the UM-D. In the centralized broker architecture, admission control in a DMSS can be mapped to the Multidimensional Multiple-choice Knapsack Problem (MMKP), a variant of the classical 0–1 Knapsack Problem. An exact solution of the MMKP, an NP-hard problem, is not applicable to the on-line admission control problem in the DMSS. We therefore developed three new heuristics, M-HEU, I-HEU and C-HEU, for solving the MMKP for on-line real-time admission control and QoS adaptation. We present a qualitative analysis of the performance of these heuristics for solving admission control problems, based on worst-case complexity analysis and experimental results from different-sized data sets. The fully distributed admission control problem in a DMSS, on the other hand, maps to the Multidimensional Multiple-choice Multi Knapsack Problem (MMMKP), a new variant of the Knapsack Problem. We have developed D-HEU and A-HEU, two new distributed heuristics to solve the MMMKP. D-HEU requires a large number of messages and is not suitable for an on-line admission controller. A-HEU finds a solution with fewer messages but achieves less optimality than D-HEU. We have applied the admission control strategy described in the UM-D to a set of media server farms providing streaming video to users. The performance of the different heuristics in the broker is discussed using simulation results. We have also shown the application of UM-D to distributed SLA (Service Level Agreement) controllers in enterprise networks. Simulation results and a qualitative comparison of the different heuristics are also provided.
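To illustrate the flavour of on-line MMKP admission control, here is a generic greedy heuristic in Python; it sketches the problem structure only and is not the authors' M-HEU, I-HEU or C-HEU. Each session offers several QoS levels as (revenue, resource-demand) pairs, and the controller admits at most one level per session.

```python
# A generic greedy heuristic for the MMKP, sketched to illustrate the kind
# of admission control the UM-D formalises; NOT the authors' heuristics.
# Each session group offers QoS levels as (revenue, resource vector);
# at most one level per group may be admitted within the capacity vector.
def greedy_mmkp(groups, capacity):
    used = [0] * len(capacity)
    chosen, revenue = [], 0
    for group in groups:
        best = None
        for level, (value, demand) in enumerate(group):
            if all(u + d <= c for u, d, c in zip(used, demand, capacity)):
                # prefer the highest revenue per unit of aggregate resource
                density = value / (sum(demand) or 1)
                if best is None or density > best[0]:
                    best = (density, level, value, demand)
        if best:
            _, level, value, demand = best
            used = [u + d for u, d in zip(used, demand)]
            chosen.append(level); revenue += value
        else:
            chosen.append(None)  # session rejected by admission control
    return chosen, revenue

# Two sessions, each with (revenue, [cpu, bandwidth]) per QoS level.
sessions = [[(5, [2, 3]), (9, [4, 6])], [(4, [3, 2]), (7, [6, 5])]]
print(greedy_mmkp(sessions, capacity=[8, 8]))
```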
1409

Computer networks - educational materials for elementary school pupils

SIKORA, Jindřich January 2013
In this thesis I explain, in a form accessible to pupils, the principles of computer networks and the methods of protecting and monitoring them. I created interactive teaching material for primary school pupils that familiarises them with the topic, so that they gain an understanding of how a computer network works, how it can be secured against intruders, and how its operation can be monitored.
1410

The influence of social networks on safety culture

PEREIRA, CARLOS H.V. 09 October 2014
Master's dissertation (Dissertação de Mestrado) - Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN/SP. No abstract is available; the record contains only repository deposit metadata.
