61

Changing Privacy Concerns in the Internet Era.

Demir, Irfan 08 1900 (has links)
Privacy has been a respected value throughout history, regardless of national borders, cultural differences, or era. This study focuses on the unprecedented changes to traditional forms of privacy, and the consequent concerns about its invasion, that have accompanied the emergence and widespread use of the Internet. Government intrusion into private domains through the Internet is examined as a major concern. Privacy invasions by Web marketers, hacker threats against privacy, and employer invasion of employee privacy in the workplace are then discussed in turn. A set of possible solutions to the current problems and concerns in this field is offered. Legal remedies to be enacted by government are presented as the initial solution; encryption is then introduced as a strong technical method that may help; finally, a set of individual measures is emphasized as a complementary practical necessity. Nevertheless, the study indicates that technology will keep changing the forms of privacy and the concerns surrounding it, possibly rendering these findings outdated in the near future; privacy itself, however, will remain the cherished social value it has always been.
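The abstract names encryption as one practical measure individuals can adopt. As a hedged illustration not tied to the thesis itself, the following sketch uses the third-party `cryptography` package (an assumption; any symmetric cipher would do) to encrypt and decrypt a private note with a locally held key:

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# Generate a symmetric key once and store it somewhere safe (e.g. offline).
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"private note: meeting at 10am")  # ciphertext, safe to store or transmit
print(cipher.decrypt(token).decode())                     # only the key holder can recover the note
```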
62

An Integrative Approach for Examining the Determinants of Abnormal Returns: The Cases of Internet Security Breach and Ecommerce Initiative

Andoh-Baidoo, Francis Kofi 01 January 2006 (has links)
Researchers in various business disciplines use the event study methodology to assess the market value of firms through capital market reaction to news in the public media about the firm's activities. Capital market reaction is assessed through cumulative abnormal return, the sum of abnormal returns over the event window. In this study, the event study methodology is used to assess the impact that two important information technology activities, Internet security breaches and ecommerce initiatives, have on the market value of firms. While prior research on the relationship between these business activities and cumulative abnormal return relied on regression analysis, this study uses both decision tree induction and regression. For the Internet security breach study, negative cumulative abnormal return serves as a surrogate for damage to the breached firm. In contrast to what has been reported in the research literature, our results suggest that the relationship between cumulative abnormal return and the independent variables, for both the Internet security breach and the ecommerce initiative studies, is complex and often involves conditional interactions between the independent variables. We report that incomplete contract theory cannot effectively explain the relationship between cumulative abnormal return and the organizational variables, while other ecommerce theories support the findings of our analysis. We show that both attack characteristics and firm characteristics are determinants of damage to breached firms. Our results reveal that decision tree induction offers insight beyond that provided by regression models. We illustrate that data mining techniques add value in studying the market impact of ecommerce initiatives and Internet security breaches, that the approach is applicable in other domains, and that decision tree induction can enhance the event study methodology. We demonstrate that decision tree induction can be used for both theory building and theory testing: we employ it to test and extend ecommerce theories and to develop a theoretical model of cumulative abnormal return for ecommerce, and we also present theoretical models relating Internet security breaches to damage to the breached firm. These models can be used by decision makers when formulating and implementing Internet security and ecommerce investment strategies.
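The dependent measure here, cumulative abnormal return (CAR), is the sum of abnormal returns over the event window. As a hedged sketch of how such a figure is commonly computed with a simple market model (illustrative only; the window lengths, synthetic data, and estimation details are assumptions, not taken from the dissertation):

```python
import numpy as np

def cumulative_abnormal_return(firm_ret, market_ret, est_slice, event_slice):
    """Market-model event study sketch.

    firm_ret, market_ret : 1-D arrays of daily returns aligned in time.
    est_slice   : slice covering the estimation window (e.g. trading days -120..-21).
    event_slice : slice covering the event window (e.g. days -1..+1).
    """
    # Fit the market model R_firm = alpha + beta * R_market on the estimation window.
    beta, alpha = np.polyfit(market_ret[est_slice], firm_ret[est_slice], deg=1)

    # Abnormal return = actual return minus the return predicted by the market model.
    expected = alpha + beta * market_ret[event_slice]
    abnormal = firm_ret[event_slice] - expected

    # Cumulative abnormal return = sum of abnormal returns over the event window.
    return abnormal.sum()

# Hypothetical usage: 130 days of returns, event at index 120, window [-1, +1].
rng = np.random.default_rng(0)
market = rng.normal(0.0003, 0.01, 130)
firm = 0.0001 + 1.1 * market + rng.normal(0, 0.008, 130)
car = cumulative_abnormal_return(firm, market, slice(0, 100), slice(119, 122))
print(f"CAR over the event window: {car:.4f}")
```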
63

Using Machine Learning to improve Internet Privacy

Zimmeck, Sebastian January 2017 (has links)
Internet privacy lacks transparency, choice, quantifiability, and accountability, especially as the deployment of machine learning technologies becomes mainstream. However, these technologies can be both privacy-invasive and privacy-protective. This dissertation advances the thesis that machine learning can be used to improve Internet privacy. Starting with a case study showing how to estimate a social network's potential to learn the ethnicity and gender of its users from geotags, various strands of machine learning technologies for furthering privacy are explored. While the quantification of privacy is the subject of well-known privacy metrics, such as k-anonymity and differential privacy, I discuss how some of those metrics can be leveraged in tandem with machine learning algorithms to quantify the privacy-invasiveness of data collection practices. Further, I demonstrate how the current notice-and-choice paradigm can be realized through automated machine learning analysis of privacy policies. The implemented system notifies users efficiently and accurately of applicable data practices. Moreover, by analyzing software data flows, users can compare actual data practices against those described, and regulators can enforce them at scale. Machine learning technologies can likewise address the emerging cross-device tracking practices of ad networks, analytics companies, and others, notifying users of privacy practices across devices and giving them the choice to which they are entitled by law. Ultimately, cross-device tracking is a harbinger of the emerging Internet of Things, for which I envision intelligent personal assistants that help users navigate the increasing complexity of privacy notices and choices.
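One of the metrics named above, k-anonymity, lends itself to a compact illustration. The sketch below (hypothetical records and column names, not material from the dissertation) reports a dataset's k value as the size of its smallest equivalence class over the chosen quasi-identifiers:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity of a dataset: the size of the smallest
    equivalence class formed by the quasi-identifier columns."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

# Hypothetical records with quasi-identifiers: zip code, age bracket, gender.
records = [
    {"zip": "10001", "age": "20-29", "gender": "F", "diagnosis": "flu"},
    {"zip": "10001", "age": "20-29", "gender": "F", "diagnosis": "cold"},
    {"zip": "10002", "age": "30-39", "gender": "M", "diagnosis": "flu"},
    {"zip": "10002", "age": "30-39", "gender": "M", "diagnosis": "asthma"},
]
print(k_anonymity(records, ["zip", "age", "gender"]))  # -> 2
```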
64

End-to-End Security of Information Flow in Web-based Applications

Singaravelu, Lenin 25 June 2007 (has links)
Web-based applications and services are increasingly used for security-sensitive tasks. Current security protocols rely on two crucial assumptions to protect the confidentiality and integrity of information: first, that the end-point software handling security-sensitive information is free from vulnerabilities; second, that communication is point-to-point between a client and a service provider. These assumptions do not hold for large, complex, and vulnerable end-point software such as Internet browsers and web services middleware, or in web service compositions where multiple value-adding service providers are interposed between a client and the original service provider. To address the problem of large and complex end-point software, we present the AppCore approach, which uses manual analysis of information flow, as opposed to purely automated approaches, to split existing software into two parts: a simplified trusted part that handles security-sensitive information, and a legacy, untrusted part that handles non-sensitive information without access to sensitive information. Not only does this approach avoid many common and well-known vulnerabilities in the legacy software that have compromised sensitive information, it also greatly reduces the size and complexity of the trusted code, making exhaustive testing or formal analysis more feasible. We demonstrate the feasibility of the AppCore approach by constructing AppCores for two real-world applications: a client-side AppCore for https-based applications and an AppCore for web service platforms. Our evaluation shows that security improvements and complexity reductions (over a factor of five) can be attained with minimal modifications to existing software (a few tens of lines of code, plus the proxy settings of a browser) and an acceptable performance overhead (a few percent). To protect the communication of sensitive information between clients and service providers in web service compositions, we present an end-to-end security framework, WS-FESec, that provides end-to-end security properties even in the presence of misbehaving intermediate services. We show that WS-FESec is flexible enough to support the lattice model of secure information flow and that it guarantees precise security properties for each component service at a modest cost of a few milliseconds per signature or encrypted field.
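To make the trusted/untrusted split concrete, here is a toy sketch (purely illustrative and far simpler than the actual AppCore work): a small trusted core performs the password check, and the large legacy code path only ever receives an opaque session token, so a bug there cannot leak the credential. All names and the credential-store stub are assumptions for illustration.

```python
import secrets
from typing import Optional

class TrustedCore:
    """Small, auditable component: the only code that ever sees the secret."""
    def __init__(self, credential_store):
        self._creds = credential_store          # username -> password (stub store)
        self._sessions = {}                     # token -> username

    def login(self, username: str, password: str) -> Optional[str]:
        # The sensitive comparison stays inside the trusted core.
        if self._creds.get(username) != password:
            return None
        token = secrets.token_hex(16)           # opaque handle for the legacy part
        self._sessions[token] = username
        return token

def legacy_untrusted_ui(token: str) -> None:
    """Large legacy code path (rendering, logging, ...). It only holds the
    opaque token, never the password."""
    print(f"rendering dashboard for session {token[:8]}...")

core = TrustedCore({"alice": "correct horse battery staple"})
token = core.login("alice", "correct horse battery staple")
if token:
    legacy_untrusted_ui(token)
```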
65

Modeling and Simulations of Worms and Mitigation Techniques

Abdelhafez, Mohamed 14 November 2007 (has links)
Internet worm attacks have become increasingly frequent and have had a major impact on the economy, making the detection and prevention of these attacks a top security concern. Several countermeasures have been proposed and evaluated in the recent literature, but the effect of these proposed defensive mechanisms on legitimate competing traffic has not been analyzed. The first contribution of this thesis is a comparative analysis of the effectiveness of several of these proposed mechanisms, including a measure of their effect on normal web browsing activities. In addition, we introduce a new defensive approach that can easily be implemented on existing hosts and that significantly reduces the rate of spread of worms that use TCP connections to perform the infiltration, with no measurable effect on legitimate traffic. The second contribution is a variant of the flash worm, which we term Compact Flash or CFlash, that is capable of spreading even faster than its predecessor. We perform a comparative study between the flash worm and the CFlash worm using a full-detail packet-level simulator, and the results show the increase in propagation rate of the new worm given the same set of parameters. The third contribution is a study of the behavior of TCP-based worms in MANETs. We develop an analytical model of TCP worm spread in the MANET environment that accounts for payload size, bandwidth sharing, radio range, node density, and several other parameters specific to MANET topologies. We also present numerical solutions for the model and verify the results using packet-level simulations. The results show that the analytical model developed here matches the results of the packet-level simulation in most cases.
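For readers unfamiliar with worm-propagation models, the following hedged sketch numerically integrates a plain susceptible-infected model for a random-scanning worm. It is a generic textbook model, not the thesis's MANET-specific analytical model, and all parameter values are assumptions chosen only for illustration.

```python
def si_worm_spread(N=100_000, scan_rate=50, vuln_fraction=0.1, I0=1,
                   dt=1.0, t_max=600):
    """Simple susceptible-infected model of a random-scanning worm.

    N             : size of the scanned address space
    scan_rate     : probes per second per infected host
    vuln_fraction : fraction of addresses that are vulnerable
    """
    S = N * vuln_fraction - I0        # susceptible: vulnerable but not yet infected
    I = float(I0)
    history = []
    for step in range(int(t_max / dt)):
        # Each probe hits a susceptible host with probability S / N.
        new_infections = min(scan_rate * I * (S / N) * dt, S)
        S -= new_infections
        I += new_infections
        history.append((step * dt, I))
    return history

curve = si_worm_spread()
print(f"infected after 10 minutes: {curve[-1][1]:.0f} hosts")
```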
66

Improving Internet security via large-scale passive and active DNS monitoring

Antonakakis, Emmanouil Konstantinos 04 June 2012 (has links)
The Domain Name System (DNS) is a critical component of the Internet. DNS provides the ability to map human-readable and memorable domain names to machine-level IP addresses and other records. These mappings lie at the heart of the Internet's success and are essential for the majority of core Internet applications and protocols. The critical nature of DNS means that it is often the target of abuse. Cyber-criminals rely heavily upon the reliability and scalability of the DNS protocol to serve as an agile platform for their illicit operations. For example, modern malware and Internet fraud techniques rely upon DNS to locate remote command-and-control (C&C) servers, which issue new commands from the attacker, serve as exfiltration points for information stolen from victims' computers, and manage subsequent updates to the malicious toolset. The research described in this thesis scientifically addresses problems in the area of DNS-based detection of illicit operations. In detail, this research studies new methods to quantify and track dynamically changing DNS reputations based on passive network measurements. The research also investigates methods for the creation of early warning systems for DNS. These early warning systems enable the research community to identify emerging threats (e.g., new botnets and malware infections) across the DNS hierarchy in a more timely manner.
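A dynamic DNS reputation system of the kind described typically turns passively observed resolutions into features and scores them with a classifier. The sketch below is a heavily simplified, hypothetical illustration (toy features, synthetic labels, a stock scikit-learn classifier), not the detection systems developed in the thesis.

```python
from sklearn.ensemble import RandomForestClassifier

def domain_features(domain, resolved_ips, ttls):
    """Toy passive-DNS features for one domain (illustrative only)."""
    name = domain.rstrip(".")
    return [
        len(name),                                          # long names often look machine-generated
        sum(c.isdigit() for c in name) / len(name),         # digit ratio in the name
        len(set(resolved_ips)),                             # number of distinct IPs observed
        len({ip.rsplit(".", 1)[0] for ip in resolved_ips}), # distinct /24-style prefixes
        min(ttls),                                          # very low TTLs suit fast-flux hosting
    ]

# Hypothetical labelled examples: 0 = benign, 1 = malicious.
X = [
    domain_features("example.com", ["93.184.216.34"], [86400]),
    domain_features("university.edu", ["128.2.42.10"], [3600]),
    domain_features("x9k2q7.biz", ["5.8.1.2", "91.4.7.9", "203.0.113.7"], [60, 60, 30]),
    domain_features("qw81nf03.info", ["198.51.100.4", "198.51.100.9", "45.3.2.1"], [30, 60, 30]),
]
y = [0, 0, 1, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new = domain_features("a1b2c3d4.net", ["203.0.113.5", "198.18.0.7"], [30, 30])
print("malicious probability:", model.predict_proba([new])[0][1])
```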
67

Desenvolvimento e análise de desempenho de um 'packet/session filter' / Development and performance analysis of a packet/session filter

Spohn, Marco Aurelio January 1997 (has links)
Security is one of the most critical areas of computer science. New "holes" are discovered in systems all the time, and regardless of how serious each one is, a degree of paranoia now surrounds security questions. Security holes can be exploited in many ways: to obtain undue financial benefit, to maliciously impersonate someone as a supposed "joke", to steal confidential information, to damage system files, and so on. These are some of the reasons why ever more is invested in security. There are also legislative concerns: on one hand, governments may treat information security as a federal matter and impose controls on everything used in protection mechanisms; on the other hand, users may demand privacy and freedom of choice in security matters. As systems gain features they become larger and more complex, providing fertile ground for the accumulation of errors and the security problems that follow. Connectivity has changed the reality of computing, as computer networks demonstrate; but while networked systems make many tasks easier, they also expose vulnerabilities. Public networks are a critical example, because all information crosses untrusted links where it can be altered or captured by third parties. The Internet is perhaps one of the most security-critical environments: because of its reach and the speed at which it grows, it is also one of the environments most conducive to the spread and exploitation of security holes. There is, however, a constant effort to develop mechanisms to protect it. Internet firewalls are a set of mechanisms used to build a security barrier between an internal network and the Internet, isolating the internal network from the main threats. The basic components of a firewall are the packet filter and the proxy server; both essentially perform packet screening. The packet filter, as the name suggests, filters packets up to the transport layer (UDP and TCP). The proxy server acts as an intermediary between the client and the real server and filters at the application level, a more refined task. Each of these components brings its own benefits to the security of the internal network, but both impose an overhead on the system and degrade performance. A packet filter must be fast enough not to become the system's bottleneck and, as this work shows, a filter can be made as fast as necessary. Packet filtering is based on filtering rules derived from the security policy, which establishes what traffic may flow between the internal network and the Internet. Each packet that passes through the filter is compared with the filtering rules in the order in which the administrator specified them. The mean filtering time therefore grows with the number of rules, a relevant factor given the large number of services used on a network. One way to improve filtering performance is to filter packets by session (a session filter). Only TCP-based (connection-oriented) services can be handled this way, but since the most widely used Internet services are TCP-based, the mechanism is viable. In session filtering, only the connection-request packet is compared against the list of filtering rules; subsequent packets are checked against a cache of active sessions. If a packet belongs to a valid session it is forwarded; otherwise it is dropped. Access to the cache must be fast enough to justify this procedure. Besides the performance gain, the session filter offers a monitoring advantage: all active sessions between the internal network and the Internet are recorded in the cache. This dissertation presents the design and implementation of a packet/session filter. The filter was implemented as a kernel module of the FreeBSD operating system. The "ip_fw" filter, available as freeware on that platform, served as the basic reference for the development of the proposed filter, and the "ipfw" utility used to manage "ip_fw" was adapted to interact with the new filter. The performance tests show the expected results: the session filter considerably improves the filtering process compared with a conventional packet filter, and the monitoring of active sessions provides an additional facility for controlling the firewall and gathering statistics.
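The session-filtering logic described above can be sketched in simplified user-space Python as follows (an illustrative model only, not the FreeBSD kernel module): TCP connection-request packets are matched against the ordered rule list, while every subsequent packet is accepted or dropped by a lookup in the active-session cache. The packet representation and rule format are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SessionKey:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

class SessionFilter:
    def __init__(self, rules):
        self.rules = rules          # ordered list of (predicate, action) pairs
        self.sessions = set()       # cache of accepted TCP sessions

    def handle(self, pkt):
        key = SessionKey(pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])
        # Packets of an already-accepted session skip the rule scan entirely.
        if key in self.sessions or self._reverse(key) in self.sessions:
            return "accept"
        # Connection requests (SYN without ACK) are checked against the rules in order.
        if pkt["syn"] and not pkt["ack"]:
            for predicate, action in self.rules:
                if predicate(pkt):
                    if action == "accept":
                        self.sessions.add(key)
                    return action
        return "drop"               # default deny

    @staticmethod
    def _reverse(key):
        return SessionKey(key.dst_ip, key.dst_port, key.src_ip, key.src_port)

# Hypothetical policy: allow outbound HTTP from the internal 10.0.0.0/8 network.
rules = [(lambda p: p["src_ip"].startswith("10.") and p["dst_port"] == 80, "accept")]
fw = SessionFilter(rules)
syn = {"src_ip": "10.0.0.5", "src_port": 40000, "dst_ip": "198.51.100.1",
       "dst_port": 80, "syn": True, "ack": False}
data = dict(syn, syn=False, ack=True)
print(fw.handle(syn), fw.handle(data))   # accept accept
```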
70

A study of South African computer users' password usage habits and attitude towards password security

Friedman, Brandon January 2014 (has links)
The challenge of having to create and remember a secure password for each user account has become a problem for many computer users and can lead to poor password management practices. Simpler, less secure passwords are often selected and are regularly reused across multiple user accounts. Computer users within corporations and institutions are subject to password policies that require them to create passwords of a specified length and composition and to change them regularly; these policies often also prevent users from reusing previously selected passwords. Security vendors and professionals have sought to improve or even replace password authentication, developing technologies such as multi-factor authentication and single sign-on to complement or replace it. The objective of the study was to investigate the password habits of South African computer and Internet users: to assess their attitudes toward password security, to determine whether password policies affect the manner in which they manage their passwords, and to investigate their exposure to alternative authentication technologies. The results from the online survey showed that the password practices of the participants, in both their professional and personal contexts, were generally insecure. Participants often used shorter, simpler, and ultimately less secure passwords, and would try to memorise all of their passwords or reuse the same password on most of their accounts. Many participants had not received any security awareness training, and additional security technologies (such as multi-factor authentication or password managers) were seldom used or provided to them. The password policies encountered by the participants in their organisations did little to encourage more secure password practices. Participants lacked knowledge and understanding of password security, having received little or no training pertaining to it.
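As a rough illustration of why the shorter, simpler passwords reported in the survey are weaker, the following sketch estimates password strength as length times log2 of the character-pool size. The pool sizes are common rule-of-thumb assumptions, not figures from the thesis.

```python
import math
import string

def estimated_entropy_bits(password: str) -> float:
    """Naive entropy estimate: length * log2(size of the character pool used)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)   # 32 printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0

for pw in ["password", "P@ssw0rd", "kT9#vR2!mQ7$"]:
    print(f"{pw!r}: ~{estimated_entropy_bits(pw):.0f} bits")
```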
