1

Detecting Hidden Wireless Cameras through Network Traffic Analysis

Cowan, KC Kaye 02 October 2020 (has links)
Wireless cameras dominate the home surveillance market, providing an additional layer of security for homeowners. Cameras are not limited to private residences; retail stores, public bathrooms, and public beaches are only some of the locations where wireless cameras may be monitoring people's movements. When cameras are deployed in an environment, one would typically expect the user to disclose the presence of the camera as well as its location, which should be outside of a private area. However, adversarial camera users may withhold this information and prevent others from discovering the camera, forcing people to determine on their own whether they are being recorded. To uncover hidden cameras, a wireless camera detection system must be developed that recognizes the network traffic characteristics of a camera. We monitor the network traffic within the immediate area using a separately developed packet sniffer, a program that observes and collects information about network packets. We analyze and classify these packets based on how well their patterns and features match those expected of a wireless camera. Using a Support Vector Machine classifier and a second level of classification to reduce false positives, we design and implement a system that uncovers the presence of hidden wireless cameras within an area. / Master of Science / Wireless cameras may be found almost anywhere, whether they are used to monitor city traffic and report on travel conditions or to act as home surveillance when residents are away. Regardless of their purpose, wireless cameras may observe people wherever they are, as long as a power source and a Wi-Fi connection are available. While most wireless camera users install such devices for peace of mind, some take advantage of cameras to record others without their permission, sometimes in compromising positions or places. Because of this, systems are needed that can detect hidden wireless cameras. We develop a system that monitors network traffic packets, specifically their packet lengths and directions, and determines whether the properties of the packets mimic those of a wireless camera stream. A double-layered classification technique is used to uncover hidden wireless cameras and filter out non-camera devices.
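As a concrete illustration of the approach this abstract describes, the sketch below classifies sniffed traffic windows with an SVM over packet-length and direction statistics, then applies a simple second-level filter over consecutive windows. It is a minimal sketch, not the thesis implementation: the feature set, the training data, and the consecutive-window rule are all assumptions.

```python
# Illustrative sketch (not the thesis code): classify traffic windows as
# camera / non-camera from packet-length and direction statistics with an SVM.
import numpy as np
from sklearn.svm import SVC

def window_features(lengths, directions):
    """Summarize one time window of sniffed packets.

    lengths: packet sizes in bytes; directions: +1 upstream, -1 downstream.
    """
    lengths = np.asarray(lengths, dtype=float)
    directions = np.asarray(directions, dtype=float)
    up = lengths[directions > 0]
    return np.array([
        lengths.mean(), lengths.std(),        # size profile of the stream
        up.sum() / max(lengths.sum(), 1.0),   # upstream share (cameras push data up)
        len(lengths),                         # packets per window
    ])

# Toy training set: two camera-like windows (steady, large, upstream-heavy)
# and two generic windows. Real training would use labeled captures.
X = np.array([window_features([1400] * 50, [1] * 50),
              window_features([1350] * 45, [1] * 45),
              window_features([80, 120, 90], [1, -1, 1]),
              window_features([60, 1500, 70, 64], [1, -1, 1, -1])])
y = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf").fit(X, y)

# Second-level check to cut false positives: require several consecutive
# windows on the camera side of the decision boundary before alerting.
def is_camera(windows, consecutive=3):
    scores = clf.decision_function(np.array(windows))
    run = 0
    for s in scores:
        run = run + 1 if s > 0 else 0
        if run >= consecutive:
            return True
    return False
```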
2

Detecting Remote Attacks

Han, Wang-tzu 30 July 2004 (has links)
Advances in technology have improved our lives, but they have also enabled new forms of crime. Because intrusion techniques and intrusion tools evolve day by day, computer crimes such as overstepping system authority, intrusion events, and network attacks now happen everywhere, every day. Such malicious attack behavior is a troublesome problem for network management staff, who must continuously read security advisories issued by security organizations — for example, subscribing to advisories from the Computer Emergency Response Team or to security mailing lists — to keep accumulating security information. In addition, they may need to spend considerable funds on firewalls, intrusion detection systems, antivirus software, and other related protective systems. Attacks have recently evolved from targeting a single computer to large-scale propagation under new intrusion models such as worms. Furthermore, each attack uses different communication protocols and ports aimed at specific system vulnerabilities, which makes these attacks hard to detect. If we can observe variations in network traffic to detect unusual hosts, whether to control network usage or to spot extraordinary phenomena, it can help network managers discover and resolve network attacks in time. Intrusion events have been increasing, and denial of service was the most serious network event in the 2003 FBI/CSI Computer Crime and Security Survey. Among the various attack types, we therefore chose vulnerability scanning and denial of service as our research direction. This research extends IPAudit [16], a network traffic monitoring system, to detect host flow traffic on the local area network. We establish network attack rules by applying data-mining classification (C4.5) to attack data, and we estimate the classification accuracy. This study also runs cross experiments using different attack applications for the same attack type. The results show that data-mining classification (C4.5) can help us efficiently forecast events of the same attack type.
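The classification step described above can be approximated as follows. scikit-learn has no C4.5 proper, so a decision tree with the entropy criterion is a common stand-in; the per-host features and toy data below are invented for illustration, not taken from the thesis.

```python
# Hedged sketch: entropy-criterion decision tree as a C4.5 approximation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Per-host traffic summaries (assumed features):
# [flows/s, distinct dest ports, pkts/s, bytes/pkt]
X = np.array([
    [2,   3,    40,  900],   # normal
    [150, 800,  300, 60],    # vulnerability scan: many ports, tiny packets
    [400, 2,   9000, 64],    # denial of service: one port, extreme rate
    [3,   4,    55,  850],   # normal
])
y = np.array(["normal", "scan", "dos", "normal"])

tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X, y)

# Cross experiment from the abstract: train on one tool's attack traffic,
# then test whether traffic from a different tool of the same attack type
# is still classified correctly.
print(tree.predict([[120, 600, 250, 70]]))   # expected: "scan"
```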
3

Automatic Forensic Analysis of PCCC Network Traffic Log

Senthivel, Saranyan 09 August 2017 (has links)
Most SCADA devices have few built-in self-defense mechanisms and tend to implicitly trust communications received over the network. Monitoring and forensic analysis of network traffic is therefore a critical prerequisite for building an effective defense around SCADA units. In this thesis work, we provide a comprehensive forensic analysis of network traffic generated by the PCCC (Programmable Controller Communication Commands) protocol and present a prototype tool capable of extracting both updates to programmable logic and crucial configuration information. The results of our analysis show that more than 30 files, including configuration and data files, are transferred to/from the PLC when downloading/uploading a ladder logic program using the RSLogix programming software. Interestingly, when RSLogix compiles a ladder-logic program, it does not create any low-level representation of the ladder-logic file; however, the low-level ladder logic is present and can be extracted from the network traffic log using our prototype tool. The tool also extracts the SMTP configuration from the network log and parses it to obtain email addresses, usernames, and passwords, which the network log contains in plain text.
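A minimal sketch of the traffic-log mining step is shown below, assuming the capture carries PCCC encapsulated in EtherNet/IP on TCP port 44818; the real prototype parses PCCC commands and reconstructs file transfers, which is far more involved than this.

```python
# Sketch, not the thesis prototype: pull application payloads exchanged with
# a PLC out of a pcap and scan them for cleartext credential fragments.
from scapy.all import rdpcap, TCP, Raw

def pccc_payloads(pcap_path):
    """Yield raw application payloads on the assumed EtherNet/IP port."""
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            tcp = pkt[TCP]
            if 44818 in (tcp.sport, tcp.dport):   # assumed encapsulation port
                yield bytes(pkt[Raw].load)

def find_cleartext_credentials(pcap_path):
    # The abstract notes SMTP credentials appear in plain text; a naive
    # keyword scan over payloads illustrates why they are recoverable.
    hits = []
    for payload in pccc_payloads(pcap_path):
        if any(needle in payload for needle in (b"USER", b"PASS", b"@")):
            hits.append(payload)
    return hits
```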
4

Real-time analysis of aggregate network traffic for anomaly detection

Kim, Seong Soo 29 August 2005 (has links)
Frequent and large-scale network attacks have led to an increased need for techniques to analyze network traffic. If efficient analysis tools were available, it would become possible to detect attacks and anomalies and to take appropriate action to contain them before they propagate across the network. In this dissertation, we suggest a technique for traffic anomaly detection based on analyzing the correlation of destination IP addresses and the distribution of image-based signals, both postmortem and in real time, by passively monitoring the packet headers of traffic. These address-correlation data are transformed using the discrete wavelet transform for effective detection of anomalies through statistical analysis. Results from trace-driven evaluation suggest that the proposed approach can provide an effective means of detecting anomalies close to the source. We present a multidimensional indicator that uses the correlation of port numbers as a means of detecting anomalies. We also present a network measurement approach that can simultaneously detect, identify, and visualize attacks and anomalous traffic in real time. We propose to represent samples of network packet header data as frames or images; with such a formulation, a series of samples can be seen as a sequence of frames, or video. This enables techniques from image processing and video compression, such as the DCT, to be applied to packet header data to reveal interesting properties of traffic. We show that "scene change analysis" can reveal sudden changes in traffic behavior or anomalies, and that "motion prediction" techniques can be employed to understand the patterns of some attacks. We show that it may be feasible to represent multiple pieces of data as different colors of an image, enabling a uniform treatment of multidimensional packet header data. Measurement-based techniques for analyzing network traffic treat traffic volume and traffic header data as signals or images in order to make the analysis feasible. In this dissertation, we propose an approach based on the classical Neyman-Pearson test employed in signal detection theory to evaluate these different strategies. We use both analytical models and trace-driven experiments to compare the performance of the strategies. Our evaluations on real traces reveal differences in the effectiveness of different traffic header data as potential signals for traffic analysis, in terms of detection rates and false alarm rates. Our results show that address distributions and the number of flows are better signals than traffic volume for anomaly detection, and that statistical techniques can sometimes be more effective than the NP test when attack patterns change over time.
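The wavelet-based detection idea can be sketched as follows, under stated assumptions: the per-interval address-correlation values form a 1-D signal, a discrete wavelet transform decomposes it, and intervals whose detail coefficients deviate strongly from the band's statistics are flagged. The wavelet choice, level, and threshold are invented for illustration.

```python
# Hedged sketch of wavelet-based traffic anomaly detection.
import numpy as np
import pywt

def wavelet_anomalies(signal, wavelet="db4", level=3, k=3.0):
    """Return approximate indices of anomalous intervals in a correlation signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    detail = coeffs[1]                     # coarsest detail band; abrupt
    mu, sigma = detail.mean(), detail.std()   # shifts concentrate energy here
    flagged = np.flatnonzero(np.abs(detail - mu) > k * sigma)
    # Map coefficient positions back to (approximate) signal positions.
    scale = len(signal) // max(len(detail), 1)
    return flagged * scale

# Example: steady destination-address correlation with a sudden jump,
# e.g. the onset of a DDoS where many sources converge on one destination.
sig = np.r_[np.random.normal(0.3, 0.02, 256), np.random.normal(0.9, 0.02, 64)]
print(wavelet_anomalies(sig))
```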
5

Network Traffic Analysis and Anomaly Detection: A Comparative Case Study

Babu, Rona January 2022 (has links)
Computer security concerns protecting the data inside a computer and the information it relays from exposure or from any reduction in the level of security. Communication contents are the main target of malicious attempts to compromise one or more aspects of the information security triad (confidentiality, integrity, and availability). This thesis aims to provide a comprehensive overview of network traffic analysis, various anomaly and intrusion detection systems, and the tools used for them, and finally to compare two Network Traffic Analysis (NTA) tools available on the market, Splunk and Security Onion, analyzing their findings to assess their feasibility and efficiency for anomaly detection. Splunk and Security Onion were found to differ in their monitoring methods, user interface (UI), and the observations noted. Further scope for future work is also suggested from the conclusions drawn.
6

A Visualization Framework for SiLK Data Exploration and Scan Detection

El-Shehaly, Mai Hassan 21 September 2009 (has links)
Network packet traces, despite containing a lot of noise, hold priceless information, especially for investigating security incidents or troubleshooting performance problems. However, given the gigabytes of flows crossing a typical medium-sized enterprise network every day, spotting malicious activity and analyzing trends in network behavior becomes a tedious task. Further, computational mechanisms for analyzing such data usually take substantial time to reach interesting patterns and often mislead the analyst into false positives, where benign traffic is identified as malicious, or false negatives, where malicious activity goes undetected. The appropriate representation of network traffic data to the human user has therefore become an issue of recent concern. Much of the focus, however, has been on visualizing TCP traffic alone, adapting visualization techniques to the data fields relevant to that protocol, rather than on the multivariate nature of network security data in general and the fact that forensic analysis, to be fast and effective, must consider different parameters for each protocol. In this thesis, we bring together two powerful tools from different areas of application: SiLK (System for Internet-Level Knowledge), for command-based network trace analysis, and ComVis, a generic information visualization tool. We integrate the power of both tools by enabling simplified interaction between them through a simple GUI, for the purpose of visualizing network traces, characterizing interesting patterns, and fingerprinting related activity. To obtain realistic results, we applied the visualizations to anonymized packet traces from Lawrence Berkeley National Laboratory, captured during selected hours across three months. We used a sliding-window approach to visually examine traces for two transport-layer protocols: ICMP and UDP. The main contribution of this research is a protocol-specific visualization framework for ICMP and UDP data. We explored the relevant header fields and the visualizations that worked best for each of the two protocols separately. The resulting views led us to a number of guidelines that can be vital in creating "smart books" describing best practices for using visualization and interaction techniques to maintain network security, while producing visual fingerprints that were found to be unique to individual types of scanning activity. Our visualizations use a multiple-views approach that incorporates the power of two-dimensional scatter plots, histograms, parallel coordinates, and dynamic queries. / Master of Science
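A toy stand-in for one of the linked views is sketched below, assuming flow records have already been exported from SiLK (e.g., via rwcut) into simple columns; the actual framework couples SiLK with ComVis through a GUI, so this only shows the kind of scatter/histogram pairing the thesis builds on.

```python
# Hedged sketch: two linked views over synthetic UDP flow records.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3600, 500))       # seconds within one hour
dport = rng.integers(1, 65536, 500)          # UDP destination ports
# A vertical stripe of sequential ports in this view is a classic scan signature.
t = np.r_[t, np.full(200, 1800.0)]
dport = np.r_[dport, np.arange(1000, 1200)]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(t, dport, s=4, alpha=0.5)
ax1.set(xlabel="time (s)", ylabel="UDP dst port", title="flows over time")
ax2.hist(dport, bins=64)
ax2.set(xlabel="UDP dst port", ylabel="flow count", title="port histogram")
plt.tight_layout()
plt.show()
```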
7

Network Service Misuse Detection: A Data Mining Approach

Hsiao, Han-wei 01 September 2004 (has links)
As network services progressively become essential communication and information delivery mechanisms for business operations and individuals' activities, a challenging network management issue emerges: network service misuse. Network service misuse is formally defined as "abuses or unethical, surreptitious, unauthorized, or illegal uses of network services by those who attempt to mask their uses or presence so as to evade the management and monitoring of network or system administrators." Misuses of network services may inappropriately consume the resources of network service providers (i.e., server machines), compromise the confidentiality of information maintained by those providers, and/or prevent other users from using the network normally and securely. Motivated by the importance of network service misuse detection, we attempt to exploit router-based network traffic data to facilitate the detection of network service misuses. Specifically, in this thesis study, we propose a cross-training method for learning and predicting network service types from router-based network traffic data. In addition, we propose two network service misuse detection systems, for detecting underground FTP servers and interactive backdoors, respectively. Our evaluations suggest that the proposed cross-training method (specifically, NN->C4.5) outperforms traditional classification techniques (namely C4.5, backpropagation neural networks, and the Naïve Bayes classifier). Our empirical evaluation conducted in a real-world setting suggests that the proposed underground FTP server detection system can effectively identify underground FTP servers, achieving a recall rate of 95% and a precision rate of 34% (with the NN->C4.5 cross-training technique). Moreover, our empirical evaluation also suggests that the proposed interactive backdoor detection system is capable of capturing "true" (or, more precisely, highly suspicious) interactive backdoors existing in a real-world network environment.
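One plausible reading of the NN->C4.5 cross-training idea is sketched below; this interpretation is an assumption, not the thesis algorithm: a neural network trained on labeled flows pseudo-labels additional traffic, and a decision tree is then trained on the enlarged set, combining the network's generalization with the tree's readable rules.

```python
# Hedged sketch of NN->C4.5 cross-training (entropy tree as C4.5 stand-in).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Assumed per-flow features: [mean pkt size, duration s, pkts/s, server port]
X_ftp = rng.normal([500, 30, 20, 21], [50, 5, 3, 0], (40, 4))    # FTP-like
X_int = rng.normal([90, 600, 2, 22], [10, 60, 1, 0], (40, 4))    # interactive-like
X_train = np.vstack([X_ftp, X_int])
y_train = np.array(["ftp"] * 40 + ["interactive"] * 40)

nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)

# NN labels traffic on unexpected ports (e.g., FTP-shaped flows on port 2121).
X_unlabeled = rng.normal([480, 25, 18, 2121], [60, 5, 3, 0], (100, 4))
pseudo = nn.predict(X_unlabeled)

tree = DecisionTreeClassifier(criterion="entropy").fit(
    np.vstack([X_train, X_unlabeled]), np.r_[y_train, pseudo])
# The tree's explicit rules can then flag FTP-typed service on a
# non-standard port as a candidate underground server.
```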
8

A model for reliable billing in cloud computing

DANTAS, Ana Cristina Alves de Oliveira. 09 May 2018 (has links)
Cloud computing defines a virtual infrastructure for providing network services on demand. Customers contract services whose primary hardware and software infrastructure resides in remote data centers rather than locally, under the customer's own domain. Sharing the same network, or the same physical machine, among various tenants raises concerns about information confidentiality, security, troubleshooting, the separation of responsibilities for guaranteeing quality goals across different abstraction levels, and how the customer can monitor the use of services and eventual failures. There is a need for regulatory monitoring tools, able to operate inside or outside the provider's infrastructure, that keep customers informed of the current state and the performance history of the contracted services. Cloud computing depends heavily on computer networks, and the performance of cloud services can be monitored through network metrics. Knowledge of performance metrics about service execution helps build the trust relationship between customer and provider and provides grounds for contesting invoices when necessary. A reliable billing model involves making performance metrics of the contracted services available so that the customer can verify the fees charged. Customers and providers can exchange roles at different levels of cloud service provision; a customer at the infrastructure level may itself be a data provider, for instance. A reliable billing model also gives the service provider inputs for improving resource allocation, as well as indicators for infrastructure investments that avoid the financial losses caused by paying penalties for service level agreement violations. The goal of this doctoral thesis is to develop a model for the reliable billing of cloud computing services that detects and reports network traffic anomalies in real time, helps estimate the cost such anomalies impose on the business model, and supports a resource allocation process capable of reducing costs from financial penalties. The model was validated through cost-based resource scaling: integrated with the scaling mechanism, the reliable billing model reduced the costs and financial losses arising from service level agreement violations.
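The cost side of the billing model can be illustrated with a hedged sketch: given measured downtime and an availability target, estimate the service credit a provider would owe. The tier percentages and credit schedule below are invented, loosely modeled on common public-cloud SLAs.

```python
# Hedged sketch: SLA penalty estimation from detected anomaly downtime.
def sla_penalty(monthly_fee, downtime_minutes, minutes_in_month=30 * 24 * 60):
    availability = 1.0 - downtime_minutes / minutes_in_month
    # Tiered service credits (illustrative values only).
    if availability >= 0.999:
        credit = 0.0
    elif availability >= 0.99:
        credit = 0.10
    elif availability >= 0.95:
        credit = 0.25
    else:
        credit = 1.00
    return monthly_fee * credit, availability

penalty, avail = sla_penalty(monthly_fee=2000.0, downtime_minutes=500)
print(f"availability={avail:.4%}, penalty owed={penalty:.2f}")
# A cost-based scaler can weigh this penalty against the cost of extra
# instances and scale out only when the expected penalty exceeds it.
```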
9

An approach for profiling distributed applications through network traffic analysis

Vieira, Thiago Pereira de Brito 05 March 2013 (has links)
Distributed systems have been adopted for building modern Internet services and cloud computing infrastructures in order to obtain high performance, scalability, and reliability. Cloud computing SLAs require a short time to identify, diagnose, and solve problems in a production infrastructure in order to avoid negative impacts on the quality of service provided to clients. Thus, detecting the causes of errors, diagnosing them, and reproducing them are challenges that motivate efforts toward less intrusive mechanisms for monitoring and debugging distributed applications at runtime. Network traffic analysis is one option for measuring distributed systems, although it is limited by the capacity to process large amounts of network traffic in a short time and by the scalability needed to process network traffic under varying resource demand. The goal of this dissertation is to analyze the processing-capacity problem of measuring distributed systems through network traffic analysis, in order to evaluate the performance of distributed systems in a data center, using commodity hardware and cloud computing services, in a minimally intrusive way. We propose a new approach based on MapReduce for deep inspection of distributed application traffic, in order to evaluate the performance of distributed systems at runtime using commodity hardware. In this dissertation we evaluate the effectiveness of MapReduce for a deep packet inspection algorithm, its processing capacity, its completion-time speedup, the scalability of its processing capacity, and the behavior of the MapReduce phases when applied to deep packet inspection for extracting indicators of distributed applications.
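The MapReduce formulation can be imitated in a single process, as sketched below; the dissertation uses a real cluster and raw traffic, whereas here packet records are assumed to be simple tuples and the "deep inspection" is reduced to per-connection aggregation.

```python
# Single-process imitation of the MapReduce pipeline: map emits
# (connection, indicator) pairs, shuffle groups by key, reduce aggregates.
from collections import defaultdict
from statistics import mean

# Assumed packet records: (src, dst, payload_bytes, latency_ms).
packets = [
    ("10.0.0.1", "10.0.0.9", 512, 1.2),
    ("10.0.0.2", "10.0.0.9", 256, 9.8),
    ("10.0.0.1", "10.0.0.9", 640, 1.5),
]

def map_phase(pkt):
    src, dst, size, latency = pkt
    yield (src, dst), (size, latency)      # key by connection endpoints

def reduce_phase(key, values):
    sizes, latencies = zip(*values)
    return {"conn": key, "bytes": sum(sizes), "avg_latency_ms": mean(latencies)}

# Shuffle: group mapper output by key, as the framework would between phases.
groups = defaultdict(list)
for pkt in packets:
    for key, value in map_phase(pkt):
        groups[key].append(value)

for key, values in groups.items():
    print(reduce_phase(key, values))
```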
10

Image Segmentation with Deep Neural Networks

Pazderka, Radek January 2019 (has links)
This master's thesis focuses on segmentation of scenes from traffic environments. The solution to this problem is segmentation neural networks, which enable the classification of every pixel in an image. In this thesis, a segmentation neural network is created that achieves better results than current state-of-the-art architectures. The work also addresses segmentation of the top view of the road, for which no freely available annotated datasets exist. For this purpose, an automatic tool was created for generating synthetic datasets using the PC game Grand Theft Auto V. The work compares networks trained solely on synthetic data with networks trained on both real and synthetic data. Experiments prove that synthetic data can be used for segmenting data from the real environment. A system has been implemented that enables working with segmentation neural networks.
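The mixed-data experiment can be sketched as follows, under assumed stand-in datasets: one loader concatenates synthetic (game-generated) and real annotated frames, and a deliberately tiny fully-convolutional network is trained with a per-pixel loss. The class count and model are illustrative, not the thesis architecture.

```python
# Hedged sketch: training a toy segmenter on synthetic + real data together.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

NUM_CLASSES = 5   # e.g. road, car, person, sign, background (assumed)

model = nn.Sequential(                        # toy fully-convolutional segmenter
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, 1),            # per-pixel class scores
)

# Stand-ins for the synthetic (game-generated) and real annotated datasets.
synthetic = TensorDataset(torch.randn(64, 3, 64, 64),
                          torch.randint(0, NUM_CLASSES, (64, 64, 64)))
real = TensorDataset(torch.randn(16, 3, 64, 64),
                     torch.randint(0, NUM_CLASSES, (16, 64, 64)))

loader = DataLoader(ConcatDataset([synthetic, real]), batch_size=8, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()               # per-pixel classification loss

for images, masks in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), masks)      # logits (B,C,H,W) vs masks (B,H,W)
    loss.backward()
    opt.step()
```

Training one model on the synthetic set alone and another on the concatenated set, then evaluating both on held-out real frames, reproduces the comparison the thesis describes.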
