71

PowerScan: A Framework for Dynamic Analysis and Anti-Virus Based Identification of Malware

Langerud, Thomas, Lillesand, Jøran Vagnby January 2008 (has links)
This thesis describes the design and implementation of a framework, PowerScan, which provides the ability to combine multiple tools in the analysis of a malware sample. The framework uses XML configuration to provide extensibility, so that new tools can be added after compilation without significant effort. The framework deals with three major types of malware analysis:

1. Surface scan of a sample with multiple on-demand anti-virus engines.
2. Execution of the malware sample with real-time (on-access) anti-virus engines running in the background.
3. Execution of the malware sample with different dynamic analysis solutions running. These tools may monitor the file system, registry, network or other aspects of the operating system during execution.

The reasoning behind each of these phases is:

1. Using multiple scanners increases the probability that at least one of the vendors has created a detection signature for the given malware.
2. Executing the sample ensures that the malware code will sooner or later be written to disk or memory. This should greatly enhance the detection rate for samples obfuscated using packers with encryption or other techniques, as the code at some point must be deobfuscated before execution. Additionally, on-access scanners might use more advanced (and resource-consuming) techniques when monitoring files executed on the system. As for surface scanning, the odds of correctly identifying the malware also increase when using more scanners.
3. Although several good sandbox analysis tools exist, the solution presented here allows the malware analyst to choose which analysis tools to use, and even to use different tools for analyzing the same aspect of the execution.

A thorough description of the design, implementation and testing is given in the report. In addition to the implementation of the PowerScan framework described above, the theory behind all involved components is presented. This includes a description of the Microsoft Windows platform (which is used for executing malware in PowerScan, and by far the platform most targeted by malware at the time of writing), virtualization (which is used in the virtual machines), anti-virus technology, malware hiding techniques and more. Surveys of the usability of different anti-virus engines and dynamic analysis tools in the framework have been conducted and are presented in the appendices, together with a comprehensive user guide.
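The abstract does not reproduce PowerScan's actual configuration schema. As a rough, hedged illustration of how XML-driven tool registration lets new tools be added post-compilation, here is a minimal sketch; the element and attribute names (tool, name, phase, path, args) are invented for this example and are not the thesis's real schema.

```python
# Hypothetical sketch of XML-driven tool registration, in the spirit of
# PowerScan's extensibility goal. All element/attribute names are invented.
import shlex
import xml.etree.ElementTree as ET

CONFIG = """
<tools>
  <tool name="scanner-a" phase="surface" path="/opt/av/scana" args="--scan {sample}"/>
  <tool name="monitor-b" phase="dynamic" path="/opt/mon/filemon" args="--watch {sample}"/>
</tools>
"""

def load_tools(xml_text):
    """Parse tool definitions; adding a tool needs only a new <tool> element."""
    tools = []
    for el in ET.fromstring(xml_text).iter("tool"):
        tools.append({
            "name": el.get("name"),
            "phase": el.get("phase"),
            "cmd_template": f'{el.get("path")} {el.get("args")}',
        })
    return tools

def command_for(tool, sample_path):
    """Build the concrete command line for one tool and one malware sample."""
    return shlex.split(tool["cmd_template"].format(sample=sample_path))

for tool in load_tools(CONFIG):
    print(tool["name"], "->", command_for(tool, "/samples/suspect.exe"))
```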
72

Newspaper on e-paper with WiFi transfer : The newspaper of the future

Aune, Håkon Rørvik January 2008 (has links)
An e-newspaper is the result of newspaper content adapted to electronic paper. Electronic paper is a display technology with many favourable attributes: it is a passive display technology, meaning it can have the appearance of paper due to its high contrast; it is readable like paper, since its pixels change only when the image changes; and it uses very little power, making it suitable for mobile use.

Making an e-newspaper service involves many challenges; the areas this thesis focuses on are layout and distribution. There are great differences between an online newspaper and a paper newspaper, and the question is which direction is suitable for an e-newspaper. Some of the factors that play a role include the rate of updates, the feel of quality, "charm" and, of course, personal preferences. Distribution can be split into two different approaches, pull or push. The latter is the most preferable, because it gives the quickest updates in an automatic fashion. The reason for considering pull distribution is that it is better suited for devices that wish to limit their time with an active air interface.

A demonstration of such an e-newspaper service was made, with a standard, online-newspaper-inspired template layout, and with pull distribution to accommodate the chosen e-paper device's poor battery life. The pull distribution was further made user-initiated to conserve even more power. The software made for this demonstration can be found in the accompanying archive file. A test group evaluated the demonstration for a period of time. They did not completely agree with the layout choices, as they favoured a layout closer to the original paper source. That the download was user-initiated was not a great problem, though.
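The thesis's actual software is in its accompanying archive and is not reproduced here. As a hedged sketch of the user-initiated pull approach, a conditional HTTP GET keeps the air interface active only for the request itself and downloads nothing when no new edition exists; the endpoint URL below is hypothetical.

```python
# Sketch of user-initiated pull distribution via a conditional GET, so the
# radio is active only during the request. URL and details are illustrative,
# not the thesis's actual implementation.
import urllib.error
import urllib.request

FEED_URL = "http://example.org/enewspaper/latest.bin"  # hypothetical endpoint

def pull_edition(last_modified=None):
    """Fetch a new edition only if one exists; return (data, new_timestamp)."""
    req = urllib.request.Request(FEED_URL)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.read(), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:          # not modified: nothing to download
            return None, last_modified
        raise
```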
73

Fiber to the Home with an Emphasis on Greenfield Developments

Vathanagopalan, Rushanth January 2008 (has links)
Fibre-to-the-home (FTTH) is experiencing great public acceptance throughout the world, as well as in Norway. This thesis investigates the technological, economic and strategic aspects of FTTH with an emphasis on greenfield deployment. The thesis claims that the Active Optical Network (AON) and the Gigabit Passive Optical Network (G-PON) are similar to some extent when compared on cost, while a non-cost comparison shows that the two models have different qualities. Wavelength Division Multiplexing Passive Optical Network (WDM-PON) is largely commercially unavailable for the time being, but would satisfy future needs better once it becomes standardised. The established passive infrastructure could also be considered for Wi-Fi services, and as a feeder network for mobile broadband base stations.

Further, the report claims that the property developer should consider the opportunity to be the network owner in greenfield FTTH deployments in order to reduce civil work costs, although the preferred ownership would vary with the context of a particular deployment. Both open and closed business models are investigated, and it is argued that both have advantages as well as disadvantages. Two finance models are also briefly introduced in this thesis: a two-part tariff pricing model and a price-discriminating model. These are suggested as alternative ways of thinking, in order to address the varying economic capabilities of different customer groupings.

The final part of the thesis is based on a case study of the Lundåsen area in Trondheim, where a hypothetical greenfield FTTH deployment is evaluated. The capital expenditure (CAPEX) per user was calculated to be almost 65% less than the norm for brownfield deployments in Norway. It is also estimated that the home price premium (the extra value added to the price of a house because FTTH is installed in the residence) would be around 1% of the house price, making it a large contributor to the total revenues from a greenfield FTTH roll-out. The profitability analysis also showed a positive end result, indicating that an FTTH roll-out in a project analogous to the case study should be strongly considered.
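The abstract only names the two finance models; for context, the standard textbook form of a two-part tariff, which is presumably what the thesis builds on, charges a fixed access fee plus a usage price:

```latex
% Standard two-part tariff: total payment for consuming q units of service.
%   F = fixed access fee (e.g. connection or installation charge)
%   p = per-unit price (e.g. per month or per capacity tier)
T(q) = F + p \cdot q
```

Choosing different combinations of F and p for different customer groupings is one simple way to address their differing economic capabilities, which is the role the abstract assigns to the price-discriminating model.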
74

Performance Analysis of the Dropping Scheme of DMP Network Nodes

Dai, Kaiyu January 2008 (has links)
The Distributed Multimedia Play (DMP) is a new generation of multimedia communication system which provides near-natural virtual collaboration for long-distance users. To approach near-natural perception, DMP produces a huge amount of traffic and requires very strict delay bounds on the Multimedia Content packets. If a DMP network node is overloaded, the node has to drop some packets selectively. To ensure graceful degradation of quality, a special dropping scheme has been designed particularly for DMP network nodes.

This thesis analyzes the performance of DMP nodes with the special dropping scheme. It focuses on building analytic models to calculate the delay and loss rate of the traffic through DMP nodes. Two simulators are developed to verify the analytic models, and several experiments are carried out on the two simulators to test the performance of the network nodes under different conditions. More specifically, the thesis work aims at the following objectives:

1. To provide a concise and clear introduction to DMP and its special dropping scheme.
2. To describe the simulation models of the two simulators, explaining their functions, inputs and outputs.
3. To build analytic models to calculate the delay and loss rate of packets passing through one DMP network node, or through several network nodes of a DMP collaboration.
4. To provide suggestions on how to improve the performance of DMP networks.

The main contributions of this Master's thesis are the two simulators and the analytic models. Three analytic models are proposed to calculate the delay and loss rate of one DMP network node: an M/D/1 model with an infinite queue, an M/D/1 model with a dynamic queue, and an M/B/1 model with a dynamic queue. The results calculated by these models agree well with the simulation results in the experiments. The analytic models can also be applied to general queueing systems in which a dropping queue and a normal queue converge.
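The abstract does not reproduce the models' equations. For context, the mean waiting time of the baseline among the three models, M/D/1 with an infinite queue, follows directly from the Pollaczek-Khinchine formula with deterministic service; the dynamic-queue variants are the thesis's own extensions and are not reproduced here.

```latex
% M/D/1 mean waiting time (Pollaczek-Khinchine, deterministic service):
%   \lambda = packet arrival rate,   D = 1/\mu = fixed service time,
%   \rho = \lambda D < 1 = utilisation of the node.
W_q = \frac{\rho}{2\mu\,(1-\rho)} = \frac{\rho D}{2(1-\rho)}
```

The formula makes the overload behaviour explicit: as the utilisation \(\rho\) approaches 1, the waiting time blows up, which is exactly the regime in which a DMP node must start dropping packets selectively.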
75

The Wireless Tram : Providing Wireless Trondheim/Wi-Fi coverage using mobile WiMAX as backhaul

Karlsen, Rein Sigve January 2008 (has links)
In spring 2008, a pre-mobile WiMAX network was installed in Trondheim. Its purpose is to cover the tram line and provide wireless backhaul to a Wi-Fi hotspot inside the tram, thus providing Internet access to travelling passengers. Testing was carried out to investigate whether WiMAX can be used in such a scenario. Based on radio propagation modelling, two sectors were initially set up to cover the entire tram line. The WiMAX system operates in the 2.6 GHz band with a bandwidth of 5 MHz.

Testing reveals that reception along the first half of the tram line, covered by the first sector, mostly agrees with the Okumura-Hata models for open and suburban areas. Adequate signal quality for a high grade of service (-80 dBm) is achieved up to 1.5 km from the base station. The second half of the tram line, covered by the second sector, mostly agrees with the Okumura-Hata models for suburban and urban areas. The coverage provided by the second site is only enough to give adequate signal quality half a kilometre from the base station. It is concluded that two sectors are not adequate to cover the entire tram line from start to end.

The impact of second-order diversity is also investigated with respect to signal quality and throughput. Separation in space and separation by polarisation are both tested. A noticeable increase in average throughput of 50% is measured when using diversity with separation in space, while separation by polarisation yields an average increase of only 14%. The improvement in coverage when using diversity is very small, measured at a few hundred metres.

It is noted that the throughput achieved along the tram line varies with the speed of the tram. When standing still, the maximum TCP throughput in the downlink is measured at just above 6 Mbps, which is only 70% of the theoretical maximum of 9 Mbps when using a UL/DL ratio of 50/50. While in motion, the TCP throughput seldom rises above 1 Mbps. The low throughput in motion is expected to be caused by a combination of Doppler effects and the use of an omni-directional antenna on the tram.
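For context, the classical Okumura-Hata urban formula referenced in the abstract is sketched below. Note the hedge: the classical model is nominally valid for 150-1500 MHz, while this deployment runs at 2.6 GHz, and the abstract does not say which extension (such as COST-231) or parameter set was actually used, so this is illustrative only.

```python
# Classical Okumura-Hata urban median path loss (nominally 150-1500 MHz;
# the thesis's 2.6 GHz band lies above this range, so treat as context only).
import math

def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    """Median path loss in dB for an urban macro cell (small/medium city)."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
         - (1.56 * math.log10(f_mhz) - 0.8)       # mobile antenna correction
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Loss grows steeply with distance, matching the measured coverage limits:
print(hata_urban_path_loss(1500, 30, 1.5, 0.5))   # 0.5 km from base station
print(hata_urban_path_loss(1500, 30, 1.5, 1.5))   # 1.5 km from base station
```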
76

Control plane of an OpMiGua network : Simulation or analytic study, analysing the feasibility of suggested solutions.

Rahmati, Mohammad Yaser January 2008 (has links)
This report contains the introduction to and results of the master's thesis carried out during the final semester of my studies at NTNU in 2008. The thesis studied a control plane for OpMiGua based on existing technologies and solutions; for this purpose, a solution was suggested and a simulation was developed to test it. The report starts with an introduction to different networking techniques and the motivation for OpMiGua, and consists of three main parts.

The first part, chapters 1 to 4, gives the theoretical and technical background, introducing optical networks and the technologies around them, including the OpMiGua network and control plane techniques such as MPLS and GMPLS.

The second part, chapter 5, concerns the use of GMPLS in OpMiGua with respect to the classification of traffic. It introduces the GST_LSP, a path for GST packets, explaining how it can be established and how the packets can be labelled. It also introduces the SM_LSP, a path for SM packets, with its properties and its differences compared to GST_LSPs.

The last part, chapters 6 through 9, analyses GST_LSP establishment, explains the simulation structure, and discusses the simulation results with respect to the update rate of the Gasco (gap statistics container), a database in the suggested solution. The simulation was developed around the establishment process of GST_LSPs and the kind of information about each GST_LSP with which the Gasco is to be updated. The simulation was also extended to study SM_LSP switching requests (or requests for establishment) that use the data in the Gasco, in order to see the degree of success of the SM_LSP requests. The simulation of GST_LSPs covers how such a path is routed and established using GMPLS properties: finding the shortest path in the studied network, establishing the path using the wavelength labelling introduced in GMPLS and, once a path is established, generating traffic and sending it through the path. The second process in the simulation concerns which properties of the data flowing in the GST path are recorded and sent to the Gasco, and how. The main property suggested to be of importance in this study is the peak delay, in other words the longest burst duration on a path in different periods. This peak delay is registered in the Gasco and used to find a proper path for an SM_LSP with respect to the longest time an SM packet should wait before it is sent through the SM path. Chapter 9, containing the conclusion and further work, ends the report.
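The abstract describes the Gasco only at a high level. The following toy sketch, with invented names, thresholds and units, illustrates the stated idea of recording per-path peak delays and consulting them when an SM_LSP establishment is requested; it is not the thesis's actual data structure.

```python
# Toy illustration of a Gasco-style gap-statistics container: record the
# peak delay (longest GST burst duration) seen per path, and admit an SM_LSP
# request only if the candidate path's peak delay is below the longest wait
# the SM traffic can tolerate. All names and values are invented.
class Gasco:
    def __init__(self):
        self.peak_delay = {}                       # path id -> worst delay seen

    def update(self, path_id, burst_duration):
        """Called at the Gasco's update rate with per-path burst measurements."""
        current = self.peak_delay.get(path_id, 0.0)
        self.peak_delay[path_id] = max(current, burst_duration)

    def admit_sm_lsp(self, path_id, max_tolerable_wait):
        """Accept an SM_LSP request only if the recorded peak delay permits it."""
        return self.peak_delay.get(path_id, float("inf")) <= max_tolerable_wait

gasco = Gasco()
gasco.update("lsp-1", 0.8)
gasco.update("lsp-1", 1.9)
print(gasco.admit_sm_lsp("lsp-1", 2.0))   # True: peak delay 1.9 <= 2.0
```

The update rate matters because a stale peak-delay record can admit SM_LSPs onto paths whose bursts have since grown, which is the trade-off the simulation results examine.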
77

Malware Analysis : A Systematic Approach

Wedum, Petter Langeland January 2008 (has links)
An almost incomprehensible amount of data and information is stored on millions and millions of computers worldwide. The computers, interconnected in local, national and international networks, use and share a large number of software programs. Individuals, corporations, hospitals, communication networks and authorities, among others, are totally dependent on the reliability and accessibility of the stored data and information, and on the correct and predictable operation of the software programs, the computers and the networks connecting them.

Malware types have different objectives and apply different techniques, but they all compromise security in one way or another. To be able to defend against the threat posed by malware, we need to understand both how and why the malware exists. Malware is under constant development, exploiting new vulnerabilities, employing more advanced techniques, and finding new ways to compromise computer security.

This document presents the nature of malware today and outlines some analytical techniques used by security experts. Furthermore, a process for analyzing malware samples is presented, with the goal of discovering the behaviour of the samples and the techniques they use. A flowchart of malware analysis, with tools and procedures, is suggested. The analysis process is shown to be effective and to minimize the time consumed by manual malware analysis.

An analysis is performed on two distinct malware samples, disclosing the behaviour, location, encryption techniques and other techniques employed by the samples. It is demonstrated that the two malware samples, both using advanced techniques, have different objectives and varying functionality. Although complex in behaviour, the malware samples show evidence of a lack of programming skill on the part of the malware designers, rendering the malware less effective than intended. Both samples are distributed in packed form, and the process of unpacking each of them is described.
78

Visualization of Network Traffic to Detect Malicious Network Activity

Jin, Zhihua January 2008 (has links)
Today, enormous amounts of logging data monitoring Internet traffic are generated every day. However, network administrators still have very limited insight into this logging data, mainly due to the lack of efficient analysis approaches. Most existing network monitoring or analysis tools either focus mainly on the throughput of the network, in order to assist network structure planning and optimization, which is too high-level for security analysis, or dig too deep into every packet, which is too inefficient in practice.

Unfortunately, not all network traffic is legitimate. As a matter of fact, a lot of malicious traffic flows through the Internet all the time. Such malicious traffic can lead to various cyber-crimes and exhaust considerable network bandwidth. The expression that what you do not see can hurt you suits the situation here perfectly.

In order to help network administrators discover malicious activities in their network traffic, this thesis attempts to explore suitable visualization techniques for distinguishing malicious traffic from massive background traffic by means of visual patterns, to which the human visual perception system is sensitive and which it can thus process efficiently. To achieve this goal, we first extract the visual patterns of malicious activities from known malicious traffic. Then, we look for the same visual patterns in normal traffic; when the same visual pattern is found, we identify the relevant malicious activities. The tool used in our experimentation is designed and implemented according to the experience gained from previous related work, with special regard to human visual perception theory. The results of our experimentation show that some malicious activities which could not easily be identified with traditional analysis approaches can be identified by our visualization system under certain conditions.
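The abstract does not name the specific visual mappings used. As one common example of the kind of pattern-based approach it describes, a time-versus-destination-port scatter plot makes a port scan stand out as a vertical stripe against diffuse background traffic; the sketch below uses synthetic data and is not the thesis's tool.

```python
# Illustrative only: scatter of destination port against time. Normal flows
# spread diffusely; a port scan from one source shows up as a dense vertical
# stripe that the eye picks out immediately.
import random
import matplotlib.pyplot as plt

random.seed(42)
# Synthetic background traffic: random times, a handful of popular ports.
bg_t = [random.uniform(0, 600) for _ in range(400)]
bg_p = [random.choice([25, 53, 80, 110, 443, 8080]) for _ in range(400)]
# Synthetic port scan: one burst around t=300 sweeping many ports.
scan_t = [300 + random.uniform(-5, 5) for _ in range(300)]
scan_p = [random.randint(1, 10000) for _ in range(300)]

plt.scatter(bg_t, bg_p, s=8, label="background flows")
plt.scatter(scan_t, scan_p, s=8, label="port scan (vertical stripe)")
plt.xlabel("time (s)")
plt.ylabel("destination port")
plt.legend()
plt.show()
```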
79

Simulation of New Security Elements in an Ad Hoc Network

Ashraful Karim, Syed Md. January 2009 (has links)
The candidate will configure a simulation of an ad hoc network for first responders in a crisis scenario using the NS2 simulation platform. The task will involve adjustment of and experimentation with simulation parameters. Finally, new security protocol elements developed in the OASIS project at SINTEF ICT will be introduced through modification of the protocol definitions for NS2, written in C++. The work will be performed at SINTEF ICT.
80

Web Applications Security : A security model for client-side web applications

Prabhakara, Deepak January 2009 (has links)
The Web has evolved to support sophisticated web applications, which are exposed to a number of attacks and vulnerabilities. The existing security model is unable to cope with these increasing attacks, and there is a need for a new security model that not only provides the required security but also supports recent advances like AJAX and mashups. The attacks on client-side web applications can be attributed to four main reasons: 1) the lack of a security context for web browsers to use when deciding on the legitimacy of requests, 2) inadequate JavaScript security, 3) the lack of network access control, and 4) the lack of security in cross-domain web applications.

This work explores these four reasons and proposes a new security model that attempts to improve the overall security of web applications. The proposed security model allows developers of web applications to define fine-grained security policies which web browsers then enforce, analogous to a configurable firewall for each web application. The browser disallows all unauthorized requests, thus preventing most common attacks such as cross-site script injection, cross-frame scripting and cross-site tracing. In addition, the security model defines a framework for secure cross-domain communication, thus allowing secure mashups of web services. The security model is backward compatible, does not affect the current usability of web applications and has cross-platform applicability. The proposed security model was shown to protect against the most common attacks by a proof-of-concept implementation that was tested against a comprehensive list of known attacks.
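The abstract does not specify the policy syntax. As a hedged sketch of the "configurable firewall per web application" idea (similar in spirit to what later standardized as Content Security Policy, though the thesis predates it), the application declares which origins each kind of request may target and the browser-side check refuses everything else; the policy format and names below are invented for illustration.

```python
# Hedged sketch of per-application, default-deny request policies enforced
# on the client side. Policy keys, hosts and function names are invented.
from urllib.parse import urlparse

POLICY = {
    "script-src": {"myapp.example.com", "cdn.example.com"},
    "xhr-src":    {"api.example.com"},
    "frame-src":  set(),                  # no cross-frame content allowed
}

def is_request_allowed(request_kind, url):
    """Default deny: permit a request only if its host is whitelisted."""
    allowed_hosts = POLICY.get(request_kind, set())
    return urlparse(url).hostname in allowed_hosts

print(is_request_allowed("xhr-src", "https://api.example.com/data"))      # True
print(is_request_allowed("script-src", "https://evil.example.net/x.js"))  # False
```

Default deny is what blocks the injection-style attacks the abstract lists: a cross-site script injection necessarily requests a host the developer never whitelisted.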
