11

Cryptanalysis of IEEE 802.11i TKIP.

Lodhi, Ammar January 2010 (has links)
This thesis is based upon the work of Beck and Tews [24]. It presents and experimentally validates the Beck and Tews attack on a network with a QoS client associated with a non-QoS AP. This is done by slightly extending the source code provided by Beck and Tews. A detailed study of the wireless security protocols has also been carried out, followed by a description of how the original Beck and Tews attack works. Martin Beck defined a new approach to obtaining keystreams [1], which has been thoroughly analysed. A description of how the different packets are used to obtain more usable keystreams is also given. The experimental validation of how extra keystream bytes are obtained through the new approach [1] has been carried out using one of the network security tools.
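As a rough illustration of the keystream-recovery idea referred to in this abstract (a generic sketch, not code from the thesis or from Beck's tool): when the plaintext of a captured frame is largely predictable, XORing it with the ciphertext yields the per-packet keystream, which can then be reused to encrypt a forged packet. All names and values below are illustrative.

```python
def recover_keystream(ciphertext: bytes, known_plaintext: bytes) -> bytes:
    """XOR a captured ciphertext with plaintext assumed to be known (e.g. the
    largely predictable fields of an ARP packet) to recover the per-packet
    keystream. Illustrative only: real TKIP frames also carry a MIC and ICV,
    which the Beck-Tews attack recovers by other means."""
    if len(known_plaintext) > len(ciphertext):
        raise ValueError("known plaintext longer than ciphertext")
    return bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))

def forge_packet(keystream: bytes, new_plaintext: bytes) -> bytes:
    """Re-encrypt a chosen plaintext with a recovered keystream - the step that
    lets a recovered keystream be reused, e.g. on another QoS channel."""
    if len(new_plaintext) > len(keystream):
        raise ValueError("not enough keystream bytes")
    return bytes(k ^ p for k, p in zip(keystream, new_plaintext))
```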
12

Network based QoE Optimization for "Over The Top" Services

Haugene, Kristian, Jacobsen, Alexander January 2011 (has links)
This report focuses on the quality aspects of media delivery over the Internet. We investigate the constructs of Knowledge Plane, Monitor Plane and Action Plane as controlling functions for the Internet. Our goal is to implement functionality for monitoring services in a home network, allowing the router to reason and take actions to obtain an optimal traffic situation based on user preferences. The actions taken to alter ongoing traffic are implemented in a modular router framework called Click. We will use this router to affect the media stream TCP connections into behaving in accordance with the network's optimal state. New features are implemented to complement the functionality found in Click, giving us the tools needed to obtain the wanted results.

Our focus is on adaptive video streaming in general and Silverlight Smooth Streaming in particular. Using custom Silverlight client code, we implemented a solution which allows the applications to report usage statistics to the home gateway. This information will be used by the home gateway to obtain an overview of traffic in the network. Presenting this information to the user, we retrieve the user preferences for the given video streams. The router then dynamically reconfigures itself, and starts altering TCP packets to obtain an optimal flow of traffic in the home network.

Our system has been implemented on a Linux PC where it runs in its current form. All the different areas of the solution, from the clients and router to the Knowledge Plane and traffic manipulation elements, are put together. They form a working system for QoE/QoS optimization which we have tested and demonstrated. In addition to testing the concept on our own streaming services, the reporting feature for Silverlight clients has also been implemented in a private build of TV2 Sumo, the Internet service of the largest commercial television station in Norway. Further testing with the TV2 Sumo client has given promising results. The system is working as it is, although we would like to see more complex action reasoning to improve convergence time for achieving the correct bit rate.
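One plausible way for a home gateway to slow down a de-prioritized TCP flow is to clamp the receive window advertised in its ACKs; the abstract only says TCP packets are altered, so the sketch below is an assumption about how such shaping could work, not the thesis's documented mechanism. Function names and parameter values are illustrative.

```python
def clamp_receive_window(ack_window: int, target_rate_bps: float,
                         rtt_s: float, mss: int = 1460) -> int:
    """Compute a clamped TCP receive window for a de-prioritized flow.
    A sender's throughput is roughly window / RTT, so advertising at most
    target_rate * RTT bytes caps the flow near the target rate.
    Illustrative assumption, not necessarily the mechanism used in the thesis."""
    cap = int(target_rate_bps / 8 * rtt_s)
    cap = max(cap, mss)              # never starve the flow completely
    return min(ack_window, cap)

# Example: cap a flow at roughly 2 Mbit/s over a 50 ms RTT path
print(clamp_receive_window(ack_window=65535, target_rate_bps=2e6, rtt_s=0.05))
```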
13

Dependability Differentiation in Cloud Services

Chilwan, Ameen January 2011 (has links)
As cloud computing is becoming more mature and pervasive, almost all types of services are being deployed in clouds. This has also widened the spectrum of cloud users, which ranges from domestic users to large companies. One of the main concerns of large companies outsourcing their IT functions to clouds is the availability of their functions. On the other hand, availability requirements for domestic users are not very strict. This requires the cloud service providers to guarantee different dependability levels for different users and services. This thesis is based upon this requirement of dependability differentiation of cloud services depending upon the nature of services and target users.

In this thesis, different types of services are identified and grouped together both according to their deployment nature and their target users. A range of techniques for guaranteeing dependability in the cloud environment is also identified and classified. In order to quantify the dependability provided by different techniques, a cloud system is modeled. Two different levels of dependability differentiation are considered, namely differentiation depending upon the state of the standby replica, and differentiation depending upon the spatial separation of active and standby replicas. These two levels are modeled separately using Markov state diagrams and reliability block diagrams, respectively. Due to the limitations imposed by Markov models, the former differentiation level is also studied by using a simulation.

Finally, numerical analysis is conducted and the different techniques are compared. The best technique for each user and service class is identified based on the results obtained. The most crucial components for guaranteeing dependability in a cloud environment are also identified. This will direct future prospects of study and also give cloud service providers an idea of which cloud components are worth investing in to enhance service availability.
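A minimal numeric sketch of the kind of dependability model described above, using the standard steady-state availability formula and a reliability-block-diagram style parallel structure; the MTTF/MTTR values are hypothetical and not taken from the thesis.

```python
def availability(mttf_h: float, mttr_h: float) -> float:
    """Steady-state availability of a single replica."""
    return mttf_h / (mttf_h + mttr_h)

def parallel(*avail: float) -> float:
    """Availability of replicas in parallel (reliability block diagram),
    assuming independent failures and repairs."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Hypothetical parameter values, not from the thesis:
single = availability(mttf_h=1000.0, mttr_h=4.0)   # one replica
with_standby = parallel(single, single)             # spatially separated standby
print(f"single replica: {single:.5f}, with standby: {with_standby:.7f}")
```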
14

OTN switching

Knudsen-Baas, Per Harald January 2011 (has links)
Increasing traffic volumes in the Internet place strict requirements on the architecture of optical core networks. The exploding number of Internet users and the massive increase in Internet content consumption force carriers to constantly upgrade and transform their core networks in order to cope with the traffic growth. The choice of both physical components and transport protocols in the core network is crucial in order to provide satisfactory performance.

Data traffic in the core network consists of a wide variety of protocols. OTN is a digital wrapper technology, responsible for encapsulating existing frames of data, regardless of native protocol, and adding additional overhead for addressing, OAM and error control. The wrapped signal is then transported directly over wavelengths in the optical transport network. The common OTN wrapper overhead makes it possible to monitor and control the signals, regardless of the protocol type being transported.

OTN is standardized by the ITU through a series of recommendations, the two most important being ITU-T G.709 - "Interfaces for the Optical Transport Network" and ITU-T G.872 - "Architecture of the Optical Transport Network". OTN uses a flexible TDM hierarchy in order to provide high wavelength utilization. The TDM hierarchy makes it possible to perform switching at various sub-wavelength bit rates in network nodes.

An introduction to OTN and an overview of recent progress in OTN standardization is given in the thesis. An OTN switch which utilizes the flexible multiplexing hierarchy of OTN is proposed, and its characteristics are tested in a network scenario, comparing it to the packet switched alternative. Simulation results reveal that OTN switching does not provide any performance benefits compared to packet switching in the core network. OTN switches do however provide bypass of intermediate IP routers, reducing the requirements for router processing power in each network node. This reduces overall cost and improves network scalability.

An automatically reconfigurable OTN switch which rearranges link sub-capacities based on differences in output buffer queue lengths is also proposed and simulated in the thesis. Simulation results show that the reconfigurable OTN switch has better performance than both pure packet switching and regular OTN switching in the network scenario.
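A small sketch of the reallocation idea described for the reconfigurable OTN switch: redistributing a link's tributary slots in proportion to output-queue backlog. This is an illustrative heuristic, not necessarily the algorithm used in the thesis, and all numbers are made up.

```python
def reallocate_slots(queue_lengths, total_slots, min_slots=1):
    """Distribute a link's tributary slots across output queues in proportion
    to their current backlog, keeping at least min_slots per queue.
    Illustrative heuristic only."""
    n = len(queue_lengths)
    assert total_slots >= n * min_slots
    slots = [min_slots] * n
    spare = total_slots - n * min_slots
    backlog = sum(queue_lengths)
    if backlog == 0:
        # Idle link: spread the spare slots evenly.
        for i in range(spare):
            slots[i % n] += 1
        return slots
    # Proportional share of the spare capacity for each queue.
    shares = [spare * q / backlog for q in queue_lengths]
    for i, s in enumerate(shares):
        slots[i] += int(s)
    # Hand out slots lost to rounding, largest remainder first.
    leftover = total_slots - sum(slots)
    order = sorted(range(n), key=lambda i: shares[i] - int(shares[i]), reverse=True)
    for i in order[:leftover]:
        slots[i] += 1
    return slots

# Hypothetical queue backlogs (packets) and a 32-slot link:
print(reallocate_slots([120, 30, 0, 450], total_slots=32))
```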
15

Tool-chain development for end-user composite services

Mbaabu, Frank January 2011 (has links)
Telephony has become an integral part of day-to-day communication, and new telephony services are quickly being deployed in the industry. There is a need to provide users with new services on the fly; these services can be composed from existing services to provide an added-value service. The vision is to allow ordinary people, the end users, to easily compose a set of available services and run them on their devices while they are on the move, without requiring specialized IT or telecom skills.

An end-user service composition approach is followed that reduces the composition complexity and difficulty from the end-user perspective. The approach enables end users to personalize the compositions through a powerful presentation and supports them in dynamically customizing the service composition.

A scenario-based approach is followed whereby different practical composition scenarios are explored to shed light on how end users can personalize the composition process with the presented tool, by creating compositions that provide added-value services for the scenarios looked into.
16

Developing a Web Application for Smart Home Technology

Lundeland, Jonas, Waage, Øystein January 2012 (has links)
With AMS come great possibilities for increased energy efficiency, but to achieve its full potential, the end users must be provided with the necessary means of monitoring and controlling their consumption. This thesis describes the process of developing a web application prototype meant to serve such a purpose. It explains the various architectural and technological decisions that support the prototype, and it elaborates on how data from the users’ smart meters can be combined with price information to help users see the economic effect of their current consumption pattern. A working prototype has been developed, and security and performance tests have been carried out to mitigate bottlenecks and prevent security breaches. Observations during the pilot project have shown promising trends, and it is hoped that this thesis will inspire further innovation in the field of smart energy solutions.
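A minimal sketch of the kind of calculation described above: combining hourly smart-meter readings with hourly prices so a user can see what each hour of consumption actually cost. The readings and prices below are hypothetical, not data from the pilot project.

```python
def hourly_cost(consumption_kwh, price_per_kwh):
    """Combine hourly smart-meter readings with hourly prices."""
    if len(consumption_kwh) != len(price_per_kwh):
        raise ValueError("readings and prices must cover the same hours")
    return [c * p for c, p in zip(consumption_kwh, price_per_kwh)]

# Hypothetical values (NOK/kWh), not from the thesis:
readings = [0.8, 0.6, 2.4, 3.1]        # four hourly AMS readings
prices   = [0.55, 0.52, 0.95, 1.10]    # matching hourly spot prices
costs = hourly_cost(readings, prices)
print(f"total: {sum(costs):.2f} NOK, most expensive hour: {max(costs):.2f} NOK")
```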
17

Telepresence Quality

Puig Conca, Daniel January 2012 (has links)
Nowadays, one of the aims of telepresence systems is to provide a sensation of nearness to the people interacting with them. Many factors have a significant impact on this feeling, and some aspects are more important than others, depending on the scope of use. This thesis presents several studies made in order to analyse the degree of importance each factor has.

One of the factors treated is the delay, which limits the interactivity. For this reason, a method is proposed in this thesis to measure the delay through a telepresence system. Another factor treated is the frame rate, in order to determine its influence. In addition, a stereoscopic 3D setup was built to analyse the degree of perceived depth introduced into the system.

Finally, several pilot tests focused on musical rehearsals were carried out to evaluate the influence of the delay. The recording was made at 60 fps in high-definition quality. Subjective opinions about the interactivity and perception of this sort of system were gathered.

It was concluded that this sort of system is viable for interactive applications like conducting a choir, but an effort must be made to decrease the amount of delay added by end devices. In fact, the conductor tolerated a round-trip delay of about 118 ms in rhythmic music, it still being possible to conduct with some difficulty. In contrast, the delay tolerance increased up to 160 ms when conducting a more melodic piece of music. However, the use of 3D when there is more than one viewer does not produce much benefit. Instead, it is proposed to analyse multi-view systems in future research.
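A simple sketch of measuring round-trip delay with timestamped probes. This only captures the network delay between two endpoints; the end-to-end delay of a telepresence system also includes capture, encoding and display, which the thesis measures with its own method, so treat this as a generic illustration with assumed host and port values.

```python
import socket
import struct
import time

def measure_rtt(host, port, samples=20):
    """Send timestamped UDP probes to an echo service and average the
    round-trip time. Network-level sketch only, not the thesis's method."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(samples):
        sent = time.monotonic()
        sock.sendto(struct.pack("!d", sent), (host, port))
        data, _ = sock.recvfrom(64)
        rtts.append(time.monotonic() - struct.unpack("!d", data)[0])
    sock.close()
    return sum(rtts) / len(rtts)

# Example (assumes a UDP echo service is listening at the far end):
# print(f"mean RTT: {measure_rtt('192.0.2.10', 9000) * 1000:.1f} ms")
```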
18

An API to Wi-Fi Direct Using Reactive Building Blocks

Gabrielsen, Erlend Bjerke January 2012 (has links)
Implementing unfamiliar functionality in smartphone applications can be a difficult and tedious task. One reason may be that the API does not have a formal way of representing the sequence of events. This thesis describes the development process of various Arctis building blocks based on Android's Wi-Fi Direct API. The objective of these blocks was to simplify the implementation of Wi-Fi Direct by encapsulating a predictable sequence of events. An Android application was developed in order to test the functionality and to validate the portability of the various building blocks. The work resulted in the construction of three main building blocks, each of which is responsible for a Wi-Fi Direct related function. Developers will be able to seamlessly utilize Wi-Fi Direct functionality by combining and implementing these building blocks in their own applications.
19

Multibiometric Systems

Dhamala, Pushpa January 2012 (has links)
20

Android Apps and Permissions: Security and Privacy Risks

Boksasp, Trond, Utnes, Eivind January 2012 (has links)
This thesis investigates the permissions requested by Android applications, and the possibility of identifying suspicious applications based only on information presented to the user before an application is downloaded. During the course of this project, a large data set consisting of applications published on Google Play and three different third-party Android application markets was collected over a two-month period. These applications are analysed using manual pattern recognition and k-means clustering, focusing on the permissions they request. The pattern analysis is based on a smaller data set consisting of confirmed malicious applications. This method is evaluated based on its ability to recognise malicious potential in the analysed applications. The k-means clustering analysis takes the whole data set into consideration, in an attempt to uncover suspicious patterns. This method is evaluated based on its ability to uncover distinct suspicious permission patterns, and on the findings acquired after further analysis of the clustering results.
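A toy sketch of k-means clustering over binary permission vectors, in the spirit of the analysis described above; scikit-learn is assumed to be available, and the permissions, apps and cluster count are invented for illustration rather than taken from the thesis data set.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data: rows are apps, columns are requested permissions (1 = requested).
# Hypothetical permissions and apps, not the thesis data set.
permissions = ["INTERNET", "SEND_SMS", "READ_CONTACTS", "ACCESS_FINE_LOCATION"]
apps = np.array([
    [1, 0, 0, 0],   # typical free app
    [1, 0, 0, 1],   # location-based app
    [1, 1, 1, 0],   # SMS plus contacts is a combination worth a closer look
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(apps)
for label, centre in enumerate(kmeans.cluster_centers_):
    members = int(np.sum(kmeans.labels_ == label))
    print(f"cluster {label}: {members} apps, centre {np.round(centre, 2)}")
```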
