1441

A framework for the development of a personal information security agent

Stieger, Ewald Andreas January 2011
Nowadays information is everywhere. Organisations process, store and create information in unprecedented quantities to support their business processes. Similarly, people use, share and synthesise information to accomplish their daily tasks. Indeed, information and information technology are the core of business activities and a part of daily life. Information has become a crucial resource in today's information age, and any corruption, destruction or leakage of information can have a serious negative impact on an organisation. Thus, information should be kept safe. This requires the successful implementation of information security, which ensures that information assets are only used, modified and accessed by authorised people. Information security faces many challenges, and organisations have still not successfully addressed them. One of the main challenges is the human element. Information security depends to a large extent on people and their ability to follow and apply sound security practices. Unfortunately, people are often not very security-conscious in their behaviour, and this is the cause of many security breaches. There are a variety of reasons for this, such as a lack of knowledge and a negative attitude towards security. Many organisations are aware of this, and they attempt to remedy the situation by means of information security awareness programs. These programs aim to educate, train and increase the security awareness of individuals. However, information security awareness programs are not always successful. They are not a once-off remedy that can quickly cure information security. The programs need to be implemented effectively, and they require an ongoing effort. Unfortunately, this is where many organisations fail. Furthermore, changing individuals' security behaviour is difficult, owing to the complexity of the factors that influence everyday behaviour. In view of the above, this research project proposes an alternative approach in the form of a personal information security agent. The goal of this agent is to influence individuals to adopt more secure behaviour. A variety of factors need to be considered in order to achieve this goal and to positively influence security behaviour. Consequently, this research establishes criteria and principles for such an agent, based on theory and practice. From a theoretical point of view, a variety of factors that influence human behaviour, such as self-efficacy and normative beliefs, were investigated. Furthermore, the field of persuasive technology provided strategies that technology can use to influence individuals. On the practical side, a prototype of a personal information security agent was created and evaluated through a technical software review process. The evaluation of the prototype showed that the theoretical criteria have merit, but that their effectiveness depends largely on how they are implemented. The criteria were thus revised, based on the practical findings. The findings also suggest that a personal information security agent based on the criteria may be able to positively influence individuals to be more secure in their behaviour. The insights gained by the research are presented in the form of a framework that makes both theoretical and practical recommendations for developing a personal information security agent.
One may consequently conclude that the purpose of this research is to provide a foundation for the development of a personal information security agent that positively influences computer users to be more security-conscious in their behaviour.
1442

Performance monitoring in transputer-based multicomputer networks

Jiang, Jie Cheng January 1990
Parallel architectures, like the transputer-based multicomputer network, offer potentially enormous computational power at modest cost. However, writing programs on a multicomputer to exploit parallelism is very difficult, owing to the lack of tools to help users understand the run-time behavior of the parallel system and detect performance bottlenecks in their programs. This thesis examines the performance characteristics of parallel programs in a multicomputer network, and describes the design and implementation of a real-time performance monitoring tool on transputers. We started with a simple graph-theoretical model in which a parallel computation is represented as a weighted directed acyclic graph, called the execution graph. This model allows us to easily derive a variety of performance metrics for parallel programs, such as program execution time, speedup, and efficiency. From this model, we also developed a new analysis method called weighted critical path analysis (WCPA), which incorporates the notion of parallelism into critical path analysis and helps users identify the program activities that have the most impact on performance. Based on these ideas, the design of a real-time performance monitoring tool was proposed and implemented on a 74-node transputer-based multicomputer. The major problems in parallel and distributed monitoring addressed in this thesis are: global state and global clock, minimization of monitoring overhead, and the presentation of meaningful data. New techniques and novel approaches to these problems have been investigated and implemented in our tool. Lastly, benchmarks are used to measure the accuracy and the overhead of our monitoring tool. We also demonstrate how this tool was used to improve the performance of an actual parallel application by more than 50%. / Faculty of Science / Department of Computer Science / Graduate
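To make the execution-graph model concrete, here is a minimal sketch of critical path analysis on a weighted DAG. The graph structure, node weights and function names are illustrative assumptions, not the thesis's WCPA implementation:

```python
# Illustrative sketch: critical (longest) path in a weighted DAG
# execution graph. Node weights model activity durations.
from collections import defaultdict

def critical_path(weights, edges):
    """weights: {node: duration}; edges: (u, v) precedence pairs.
    Returns the chain of activities with the largest total duration."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm gives a topological order.
    order, frontier = [], [n for n in weights if indeg[n] == 0]
    while frontier:
        n = frontier.pop()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                frontier.append(m)
    # Longest-path dynamic programming over the topological order.
    dist = {n: weights[n] for n in weights}
    pred = {n: None for n in weights}
    for u in order:
        for v in succ[u]:
            if dist[u] + weights[v] > dist[v]:
                dist[v] = dist[u] + weights[v]
                pred[v] = u
    end = max(dist, key=dist.get)
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return list(reversed(path)), max(dist.values())

# Example: four activities with precedence constraints.
w = {"A": 3, "B": 2, "C": 7, "D": 1}
print(critical_path(w, [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
# (['A', 'C', 'D'], 11) -- activity C dominates the execution time.
```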
1443

A theoretical study of wireless networks in local area networks

Nagar, Bansi 07 October 2014
M.Com. (Computer Auditing) / With all the technology available in today's world, people have become more connected to each other, as well as to the world around them. This has been echoed by Rutledge (2009:1), who stated: “Emerging technologies are linking the world, but we no longer need wires and cables to connect people. People are no longer trapped by geography. We are, however, facing a digital tsunami as communications technology becomes cheaper, simpler, and more culturally-acceptable.” The new wireless technology has become an aid to most organizations, making networking simpler, cheaper and more effective, and it has not only changed the way businesses operate but has changed the entire world of communications. It has caused not only a change in technology, but a change in the way of life. This is emphasized by Lawlor (2007:3), who stated: “Information technology has been a major driving force behind globalization and that information technology has now become a key component of a corporation’s global business strategy.” It is evident that the use of wireless technologies has changed the mode in which work is carried out and the manner in which communication takes place today. It has made work easier, more effective and more efficient than before with wired technology. Wireless networks provide computing flexibility. They enable employees and individuals to take advantage of mobile networking for e-mail, Internet access, and file sharing, regardless of where they are in the office or in a local area network (hereafter LAN). The advantage of the wireless setting is that it can be moved around at will, with no need for cables, leaving employees free to work from anywhere...
1444

An audit and risk handling prototype for firewall technology.

Van der Walt, Estee 04 June 2008
Throughout the years, computer networks have grown in size and complexity. This growth has contributed to the need for network security. As more and more people use computers and the Internet, more confidential documentation is being kept on computers and sent to other locations over networks. To implement network security, the security administrator should first identify all the needs, resources, threats and risks of the organisation, to ensure that all areas of the network are included within the network security policy. The network security policy contains, amongst other things, the information security services needed within the organisation’s network. These information security services can be implemented via many different security mechanisms, firewalls being but one of them. Today, firewalls are implemented in most organisations for network security purposes. The author, however, feels that the implementation of only a firewall is not enough. Tools such as log file analysers and risk analysers can be added to firewall technology to investigate and analyse the current network security status further, for indications of network failure or attacks not easily detectable by firewalls. Firewalls and these tools do, however, also have their own problems. Firewalls rarely use the information stored within their log files, and the risk handling services they provide are not very effective. Most analysis tools use only one form of log file as input and therefore report on only one aspect of the network’s security. The output of firewalls is rarely user-friendly and is often not real-time. The detection of security problems is consequently a very difficult task for any security administrator. To address these problems, the researcher has developed a prototype: the firewall analyser (FA), an analysis tool that performs log file and risk analysis of the underlying networks of the organisation. Although the prototype represents only an example of the functionality that can be added to a firewall, it illustrates the necessity and value of implementing such a tool for network security purposes. The FA improves on the problems found in firewalls and in log file and risk analysis tools by reporting on the latest security status of the network through the use of a variety of log files. The FA uses not only firewall log files as input, but also Windows NT log files, to cover a greater area of the network in its analysis process. The real-time reports of the FA are user-friendly and aid the security administrator immensely in the process of implementing and enforcing network security. / Eloff, J.H.P., Prof.
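The multi-source log analysis idea can be illustrated with a short sketch. The file layouts, field names and threshold below are assumptions for illustration only; the abstract does not specify the FA prototype's actual formats:

```python
# Hedged sketch of multi-source log analysis: merge records from a
# firewall log and a system (Windows NT-style) log, then flag hosts
# with repeated denied connections. CSV layout and field names are
# assumed, not the FA prototype's actual formats.
import csv
from collections import Counter

def load_events(path, source):
    with open(path, newline="") as f:
        # Assumed CSV columns: timestamp, host, action
        return [dict(row, source=source) for row in csv.DictReader(f)]

def flag_suspicious(events, deny_threshold=5):
    denials = Counter(e["host"] for e in events if e["action"] == "DENY")
    return {h: n for h, n in denials.items() if n >= deny_threshold}

events = load_events("firewall.log", "firewall") + load_events("nt.log", "nt")
for host, count in flag_suspicious(events).items():
    print(f"ALERT: {host} had {count} denied connections across log sources")
```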
1445

Power-benefit analysis of erasure encoding with redundant routing in sensor networks.

Vishwanathan, Roopa 12 1900
One of the problems sensor networks face is adversaries corrupting nodes along the path to the base station. One way to reduce the effect of these attacks is multipath routing. This introduces some intrusion-tolerance in the network by way of redundancy, but at the cost of higher power consumption by the sensor nodes. Erasure coding can be applied to this scenario, in which the base station can receive a subset of the total data sent and reconstruct the entire message packet at its end. This thesis uses two commonly used encodings and compares their power consumption with that of unencoded data in multipath routing. It is found that using encoding with multipath routing reduces power consumption and at the same time enables the user to send reasonably large data sizes. The experiments in this thesis were performed on the TinyOS platform, with the simulations done in TOSSIM and the power measurements taken in PowerTOSSIM. They were performed on the simple radio model and the lossy radio model provided by TinyOS. The lossy radio model was simulated with distances of 10 feet, 15 feet and 20 feet between nodes. It was found that by using erasure encoding, double or triple the data size can be sent at the same power consumption rate as unencoded data. All the experiments were performed with the radio set first at normal transmit power and later at high transmit power.
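The core erasure-coding property, namely that the base station can rebuild the message from any sufficiently large subset of fragments, can be shown with a single-parity sketch. Real codes such as Reed-Solomon tolerate more losses; this toy example, with assumed fragment sizes, is only illustrative:

```python
# Toy erasure code: k data fragments plus one XOR parity fragment, so
# any k of the k+1 fragments reconstruct the original message.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(message: bytes, k: int):
    """Split into k equal fragments (zero-padded) plus one parity fragment."""
    size = -(-len(message) // k)  # ceiling division
    frags = [message[i * size:(i + 1) * size].ljust(size, b"\x00")
             for i in range(k)]
    return frags + [reduce(xor, frags)]  # any k of these k+1 suffice

def reconstruct(frags):
    """frags: the k+1 slots with exactly one None (the lost fragment).
    XOR-ing the k survivors recovers the missing slot."""
    survivors = [f for f in frags if f is not None]
    missing = reduce(xor, survivors)
    return [missing if f is None else f for f in frags][:-1]  # drop parity

# Example: lose fragment 1 of 3 on the way to the base station.
sent = encode(b"sensor reading: 23.5C", 3)
received = [sent[0], None, sent[2], sent[3]]
print(b"".join(reconstruct(received)).rstrip(b"\x00"))
```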
1446

A CAM-Based, High-Performance Classifier-Scheduler for a Video Network Processor.

Tarigopula, Srivamsi 05 1900
Classification and scheduling are key functionalities of a network processor. Network processors are equipped with application-specific integrated circuits (ASICs), so that as IP (Internet Protocol) packets arrive, they can be processed directly without using the central processing unit. A new network processor, called the video network processor (VNP), is proposed for real-time broadcasting of video streams for IP television (IPTV). This thesis explores the challenge of designing a combined classification and scheduling module for a VNP. I propose and design the classifier-scheduler module, which will classify and schedule data for the VNP. The proposed module discriminates between IP packets and video packets. The video packets are further processed for digital rights management (DRM), while IP packets carrying regular traffic traverse the module without any modification. The basic architecture of the VNP, and of a classifier-scheduler module based on content-addressable memory (CAM) and random-access memory (RAM), has been proposed. The module has been designed and simulated in the Xilinx ISE 9.1i simulator, with a throughput of 1.79 Mbps and a maximum working frequency of 111.89 MHz at a power dissipation of 33.6 mW. The code has been translated and mapped for the Spartan and Virtex families of devices.
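The classify-then-schedule behaviour can be modelled in a few lines, with a CAM approximated by an exact-match lookup table. The rule keys, port numbers and queue below are assumptions for illustration, not the VNP hardware design:

```python
# Software model of the classify-then-schedule idea: a CAM approximated
# by a hash lookup on header fields; matched video packets are queued
# for a DRM stage while other IP traffic passes through untouched.
from collections import deque

# "CAM": exact-match rules on (protocol, destination port) -> class
cam_rules = {("udp", 5004): "video", ("udp", 5005): "video"}

drm_queue: deque = deque()  # video packets awaiting DRM processing

def classify_and_schedule(packet: dict) -> str:
    cls = cam_rules.get((packet["proto"], packet["dport"]), "ip")
    if cls == "video":
        drm_queue.append(packet)  # scheduled for the DRM stage
    return cls                    # plain IP traffic is forwarded as-is

print(classify_and_schedule({"proto": "udp", "dport": 5004, "payload": b".."}))
print(classify_and_schedule({"proto": "tcp", "dport": 80, "payload": b".."}))
```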
1447

Internet baseada em redes opticas / Internet upon optical networks

Reigada, Eduardo 20 December 2004
Advisor: Nelson Luis Saldanha da Fonseca / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: The Internet is becoming the de facto universal communication carrier. IP over WDM (Wavelength Division Multiplexing), as well as its variation DWDM (Dense Wavelength Division Multiplexing), has been considered the most promising architecture for the new paradigm of the Internet. The design of IP networks using optical fiber as the transmission medium is of paramount importance for this new paradigm, given the need for fast network recovery. In this work, we analyze the existing proposals for IP networks based on an optical core, from the Internet Engineering Task Force (IETF) as well as other international organizations. We show some alternatives for integrating IP over DWDM, and we discuss topics such as routing, signaling, and network recovery control and capacity. The main idea is that the physical layer can provide fast protection while the network layer can provide a more intelligent recovery. Finally, we analyze the next steps envisioned today for the deployment of IP over DWDM networks, through an analysis of the model adopted by CANARIE / Master's / Computer Networks / Master in Computer Science
1448

Algoritmos para o problema do mapeamento de redes virtuais / Algorithms for the virtual network embedding problem

Silva, Igor Rosberg de Medeiros, 1986- 24 August 2018
Advisors: Eduardo Cândido Xavier, Nelson Luis Saldanha da Fonseca / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: In recent years, network virtualization has gained considerable attention from the scientific community, since it provides mechanisms to overcome the ossification problem of the current Internet architecture. Through the separation of Internet Service Providers into Infrastructure Providers and Service Providers, network virtualization allows multiple heterogeneous virtual networks to share the same physical substrate. One of the main problems in network virtualization is the Virtual Network Embedding Problem, which is NP-hard. Several algorithms and heuristics have been proposed to find good mappings that optimize the use of bandwidth in substrate networks. In this work, we present two new embedding heuristics based on the Tabu Search metaheuristic: VNE-TS and VNE-TS-Clustering. We also propose a virtual network selection policy, 2ks-VN-Selector, based on the Bidimensional Knapsack Problem, which aims to increase the profit of Infrastructure Providers. We compare the results obtained with the VNE-TS and VNE-TS-Clustering heuristics to those obtained with VNE-PSO, one of the best heuristics proposed in the literature for the embedding problem. We also compare the effects of 2ks-VN-Selector with those of another well-known selection policy, Most Prize First.
Results show that both VNE-TS and VNE-TS-Clustering reject fewer virtual network requests than VNE-PSO, and that the 2ks-VN-Selector selection algorithm is able to increase the profit of Infrastructure Providers relative to the Most Prize First algorithm / Master's / Computer Science / Master in Computer Science
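The general Tabu Search pattern behind heuristics like VNE-TS can be sketched briefly. The cost function, neighbourhood (single node reassignment) and tenure below are toy placeholders, not the dissertation's actual formulation:

```python
# Skeleton of Tabu Search for a toy embedding task: map each virtual
# node to a substrate node while minimising an assumed placement cost.
import random

def tabu_search(vnodes, snodes, cost, iters=200, tenure=7):
    current = {v: random.choice(snodes) for v in vnodes}
    best, best_cost = dict(current), cost(current)
    tabu = {}  # (vnode, snode) -> iteration until which the move is forbidden
    for it in range(iters):
        moves = [(v, s) for v in vnodes for s in snodes if s != current[v]]
        allowed = [m for m in moves if tabu.get(m, 0) <= it] or moves

        def score(move):
            cand = dict(current)
            cand[move[0]] = move[1]
            return cost(cand)

        v, s = min(allowed, key=score)       # best admissible neighbour
        tabu[(v, current[v])] = it + tenure  # forbid undoing this move soon
        current[v] = s
        if cost(current) < best_cost:
            best, best_cost = dict(current), cost(current)
    return best, best_cost

# Toy usage: 3 virtual nodes onto 4 substrate nodes with a made-up cost.
vn, sn = ["a", "b", "c"], [0, 1, 2, 3]
toy_cost = lambda m: sum(abs(ord(v) % 5 - s) for v, s in m.items())
print(tabu_search(vn, sn, toy_cost))
```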
1449

Intranet concept for small business

Lenaburg, Allen Gregg 01 January 2004
The purpose of this project is to build a working intranet containing core applications that create the framework for a small business intranet. Small businesses may benefit from an intranet because of its ability to effectively streamline the processes for retrieving and distributing information. Intranets are internal networks using TCP/IP protocols, Web server software, and browser client software to share information created in HTML within an organization, and to access company databases.
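As a toy illustration of the building blocks the abstract names (TCP/IP, web server software and browser clients sharing HTML), Python's bundled web server can stand in for an intranet page server; this is an assumed stand-in, since the project's actual stack is not specified here:

```python
# Minimal intranet-style page server: serves HTML files from the current
# directory to browsers on the internal network. An illustrative stand-in,
# not the project's implementation.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Serving intranet pages on http://0.0.0.0:8080")
server.serve_forever()
```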
1450

Design of a local area network and a wide area network to connect the US Navy's training organization

Hill, Kevin Carlos 24 October 2009
US Navy training commands use a local area and a wide area network known as the Versatile Training System II (VTS). VTS furnishes word processing, electronic mail, and database functions, all of which can be transferred throughout the network. This rather old system is built around a mainframe at each training site, with user terminals dispersed throughout the command. The system was installed and is maintained by civilian contractors. VTS does not have the capabilities required to develop and maintain curricula, because advanced word processing and graphics are needed. This results in the Navy's training commands having redundant computer systems. Owing to the shortcomings of VTS, a need exists to establish local area networks at training commands. Additionally, a wide area network is required that would provide a standard package of electronic mail and file transfer capabilities. All of this must be accomplished using existing command computer resources, and at a more economical price than the remaining life-cycle cost of VTS. To facilitate the design, the systems engineering concept is utilized, and a specific design is developed to fill the identified deficiency. Existing resources and "off the shelf" material are to be used exclusively. / Master of Science
