  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Performance comparison between Apache and NGINX under slow rate DoS attacks

Al-Saydali, Josef, Al-Saydali, Mahdi January 2021 (has links)
One of the novel threats on the internet is the slow HTTP Denial of Service (DoS) attack, an application-level attack targeting web server software. A slow HTTP attack can severely degrade a web server's availability to legitimate users, and it is cheap to mount compared to other types of attacks, which makes it one of the most feasible attacks against web servers. This project investigates the impact of the slow HTTP attack on the Apache and NGINX servers comparatively, and reviews the available configurations for mitigating such attacks. The performance of the Apache and NGINX servers under slow HTTP attack has been compared, as these two are the most widely used web server software packages. Identifying the web server software most resilient to this attack, and knowing the configurations suitable for defeating it, plays a key role in securing web servers against one of the major threats on the internet. Comparing the results of the experiments conducted on the two web servers shows that NGINX performs better than Apache under a slow rate DoS attack when no defense mechanism is configured. However, when defense mechanisms were applied to both servers, Apache behaved similarly to NGINX and successfully defeated the slow rate DoS attack.
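The abstract does not list the specific directives evaluated; as an illustrative sketch, slow-HTTP mitigations on these two servers are typically configured with read timeouts and minimum receive rates (the exact values below are assumptions, not the thesis's settings):

```
# Apache (mod_reqtimeout): drop connections whose headers or body arrive too slowly
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

# NGINX: bound how long a slow client may take
client_header_timeout 10s;
client_body_timeout   10s;
send_timeout          10s;
```

Both mechanisms work on the same principle: a slow-rate attacker holds connections open by trickling bytes, so the server must enforce a minimum progress rate or deadline per request.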
32

HTTP Live Streaming: A study of streaming video protocols

Swärd, Rikard January 2013 (has links)
The use of streaming video is growing rapidly. A popular concept is adaptive bitrate streaming, in which a video is encoded at several different bit rates. These videos are then split into small files and made available via the internet. To play such a video, the client first downloads a file that describes where the files are located and at which bitrates they are encoded. The media player can then begin downloading the files and playing them. If the physical conditions, such as download speed or CPU load, change during playback, the media player can easily change the quality of the video by fetching files of a different bit rate, avoiding playback stalls. This report takes a closer look at four adaptive bitrate streaming techniques: HTTP Live Streaming, Dynamic Adaptive Streaming over HTTP, HTTP Dynamic Streaming, and Smooth Streaming, with respect to the protocols they use. The report also examines how Apple and FFmpeg have implemented HTTP Live Streaming with respect to how much data must be read from a file before the video can start playing. The report shows that there are no large differences between the four techniques, although Dynamic Adaptive Streaming over HTTP stands out somewhat by being completely independent of the audio and video protocols used. The report also identifies a shortcoming in the HTTP Live Streaming specification: it is not specified that the first complete frame of the video stream should be at the beginning of the file. In Apple's implementation, up to 30 kB of data must be read before playback can start, while in FFmpeg's implementation it is about 600 bytes.
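The manifest-then-variants flow described above can be seen concretely in an HLS master playlist: the player downloads this small file first, then chooses a variant and switches between them as conditions change. A minimal hypothetical example (paths and bitrates are illustrative):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
```

Each `#EXT-X-STREAM-INF` entry advertises one encoding of the same content; the referenced media playlists then list the individual segment files.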
33

AN IMPLEMENTATION OF DYNAMIC DATA ACQUISITION MEASUREMENTS

Pesciotta, Eric, Portnoy, Michael October 2006 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / As data acquisition systems evolve and begin utilizing new avenues of acquisition such as Ethernet, an entirely new range of flight test capabilities becomes available. These new capabilities (acquiring, monitoring, and varying test measurements) enhance previous operation, as they can now be exercised during flight. Achieving such a high level of integration between ground station and test vehicle involves complex network protocols. Implementing such systems from scratch would be a time-consuming and costly proposition. Fortunately, employing Internet protocols (TCP/IP) over Ethernet provides a wealth of readily available technology. Using state-of-the-art integration techniques, modern data acquisition systems can leverage years of proven technology offered by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). This paper discusses an implementation of dynamic data acquisition measurements for use in network data acquisition systems. The methodology used to determine whether a measurement can and should be variable during a flight test is examined in detail, along with a discussion of the advantages of dynamically varying flight test measurements. Finally, an implementation is presented that successfully integrates Internet protocols with modern flight test equipment using the techniques described above.
34

Dynamic Partial Reconfigurable FPGA

Zhou, Ruoxing January 2011 (has links)
Partial reconfiguration gives an FPGA the ability to be reconfigured at run-time, but the reconfigurable partition (RP) is disabled while reconfiguration is in progress. To maintain system functionality, the data stream destined for the RP must be held during that time, which makes the reconfiguration time critical to the designed system. This thesis therefore aims to build a functional partially reconfigurable system and determine how much time the reconfiguration takes. A Xilinx ML605 evaluation board is used to implement the system, which has one static part (SP) and two partially reconfigurable modules, ICMP and HTTP. A web client sends different packets to the system requesting different services. The packet type information is analyzed and the requests are held by a MicroBlaze core, which also triggers the system's self-reconfiguration. Reconfiguration swaps the system between the ICMP and HTTP modules to handle the requests; the reconfiguration time is therefore defined as the interval between detection of the packet type and completion of reconfiguration. A counter is built into the SP to measure the reconfiguration time. Verification shows that the system works correctly. Analysis of the test results indicates that reconfiguration takes 231 ms and consumes 9274 KB of storage, saving 93% of time and 50% of storage compared with a static FPGA configuration.
35

Comparative Performance Analysis of MANET Routing Protocols in Internet Based Mobile Ad-hoc Networks

Zabin, Mahe, Mannam, Roja Rani January 2012 (has links)
In crucial times, such as natural disasters (earthquakes, floods), military attacks, and rescue and emergency operations, it is not possible to maintain a fixed infrastructure. In these situations, wireless mobile ad-hoc networks can be an alternative to wired networks. In this thesis, owing to the importance of MANET (Mobile Ad-hoc Network) applications, we study MANETs and their subtype IMANET (Internet-based Mobile Ad-hoc Network). In MANETs, finding an optimal path among nodes is not a simple issue, because nodes move randomly and the topology changes frequently. Simple routing algorithms such as Shortest Path, Dijkstra's, and Link State fail to find routes in such dynamic scenarios. A number of ad-hoc protocols (proactive, reactive, hybrid, and position-based) have been developed for MANETs. In this thesis, we have designed an IMANET in OPNET 14.5 and tested the performance of three routing protocols, namely OLSR (Optimized Link State Routing), TORA (Temporally Ordered Routing Algorithm), and AODV (Ad-hoc On-demand Distance Vector), in different scenarios by varying the number of nodes and the size of the area. The experimental results demonstrate that none of the three routing protocols can ensure good-quality HTTP and voice communication in all of the considered scenarios.
36

A Power Saving Mechanism for Web Traffic in IEEE 802.11 Wireless LAN

Jiang, Jyum-Hao 26 July 2010 (has links)
Web browsing via Wi-Fi wireless access networks has become a basic function on a variety of consumer mobile electronic devices, such as smart phones, PDAs, and the Apple iPad. In terms of energy consumption, wireless communication plays an important role in mobile devices. Since the power-saving mode (PSM) of the IEEE 802.11 a/b/g standard is not tailored to the HTTP protocol, we propose a novel power saving scheme that exploits the characteristics of web applications. After sending an HTTP request, the proposed scheme updates its estimate of the RTT based on the information contained in the TCP timestamp header field, and then adjusts the listening period according to that estimate. When all TCP connections have been closed, the wireless network card can enter a deep-sleep mode. In this case, the listening period can be longer than one second, since the user is reading the web page and is unlikely to send another HTTP request within one second. Use of the deep-sleep mode can significantly reduce the power consumption of mobile devices.
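The abstract outlines the mechanism without pseudocode. A minimal sketch of the idea, assuming a TCP-style smoothed RTT estimator and an illustrative scaling of the listening period (the smoothing constant, floor value, and deep-sleep period are assumptions, not values from the thesis):

```python
# Sketch of the proposed scheme: fold RTT samples (taken from TCP timestamp
# echoes) into a smoothed estimate, then size the 802.11 PSM listening period
# from that estimate; deep-sleep once no TCP connection remains open.

ALPHA = 0.125             # EWMA weight, as in TCP's SRTT estimator
DEEP_SLEEP_PERIOD = 1.0   # seconds; user is likely reading the page
MIN_LISTEN = 0.1          # floor so the card does not poll too aggressively

def update_srtt(srtt, rtt_sample):
    """Incorporate a new RTT sample into the smoothed estimate."""
    if srtt is None:
        return rtt_sample
    return (1 - ALPHA) * srtt + ALPHA * rtt_sample

def listening_period(srtt, open_connections):
    """Wake roughly when the HTTP response is expected; deep-sleep otherwise."""
    if open_connections == 0:
        return DEEP_SLEEP_PERIOD
    return max(MIN_LISTEN, srtt)

# Example: three RTT samples observed on an open connection
srtt = None
for sample in (0.20, 0.24, 0.22):
    srtt = update_srtt(srtt, sample)
print(round(srtt, 4), listening_period(srtt, open_connections=2))
```

The key design point is that the listening period is no longer a fixed beacon multiple: it tracks when a response is actually expected, and collapses to a long deep-sleep interval between page views.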
37

Single sign-on : Kerberos i webbapplikationer

Gustafsson Westman, Hans January 2010 (has links)
This work examines two techniques for implementing single sign-on with Kerberos in web applications. The study covers HTTP authentication based on Microsoft's NegotiateAuth and Cosign from the University of Michigan. These two techniques were compared on criteria such as complexity, implementation effort, and software requirements. The results show that HTTP authentication is very simple to implement but requires the user's browser to be configured for it. Cosign is more complex but uses cookies, which means most browsers support the technique without extra configuration.
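As context for the Negotiate-based approach, Apache's Kerberos support around that time was commonly provided by mod_auth_kerb; a typical configuration sketch looks like the following (paths and realm handling are assumptions, and the exact setup in the thesis may differ):

```
<Location /secure>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    Krb5KeyTab /etc/apache2/http.keytab
    Require valid-user
</Location>
```

This is the server side of the trade-off the study describes: the module itself is simple to enable, but browsers must additionally be configured to trust the site for Negotiate/SPNEGO authentication.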
38

Reliable content delivery using persistent data sessions in a highly mobile environment

Pantoleon, Periklis K. January 2004 (has links) (PDF)
Thesis (M.S. in Computer Science)--Naval Postgraduate School, March 2004. / Thesis advisor(s): Wen Su, John Gibson. Includes bibliographical references (p. 189). Also available online.
39

Performance analysis of transmission protocols for H.265 encoder

UMESH, AKELLA January 2015 (has links)
In recent years there has been a marked increase in multimedia services such as live streaming, Video on Demand (VoD), video conferencing, and educational videos. Streaming high-quality video has become a challenge for service providers seeking to enhance the user's viewing experience, since the perceived quality cannot be guaranteed. Several video streaming protocols are used to stream from server to client. This research does not focus on the user's experience but on the performance behavior of the protocols. In this study, we investigate the performance of the HTTP, RTSP, and WebRTC protocols when streaming video produced by an H.265 encoder. The study addresses the objective assessment of the different protocols for VoD streaming at the network and application layers. Packet loss and delay variations are introduced at the network layer using the network emulator NetEm while streaming from server to client, and metrics at both layers are collected and analyzed. The video is streamed from a server to a client, and its quality is checked by a number of users. The research method is an experimental testbed: metrics such as packet counts at the network layer and stream bitrate at the application layer are collected for the HTTP, RTSP, and WebRTC protocols, while variable delays and packet losses are injected into the network to emulate real-world conditions. Based on the results obtained, it was found at the application layer that, of the three protocols, the stream bitrate of the video transmitted using HTTP was lower than that of the others; hence, HTTP performs better at the application layer.
At the network layer, packet counts were collected on the TCP port for HTTP and on UDP ports for the RTSP and WebRTC protocols. The performance of HTTP was found to be stable in most scenarios. Comparing RTSP and WebRTC, more packets were counted for RTSP than for WebRTC, because the protocol and the streamer use more resources to transmit the video. Overall, both RTSP and WebRTC perform relatively well at the network layer.
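The abstract does not give the exact NetEm parameters used. As a configuration sketch, delay variation and packet loss are typically injected on the streaming server's network interface like this (the device name and all values here are assumptions):

```shell
# Add 100 ms delay with 20 ms jitter and 1% random loss on eth0
tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%

# Change the emulated impairment between test runs
tc qdisc change dev eth0 root netem delay 50ms 10ms loss 0.5%

# Remove the impairment after the experiment
tc qdisc del dev eth0 root
```

These commands require root privileges; varying the `delay` and `loss` arguments across runs is how a testbed like the one described sweeps its scenarios.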
40

Online Content Popularity in the Twitterverse: A Case Study of Online News

January 2014 (has links)
With the advancement of internet technology, online news content has become very popular: people can now get live updates of world news through online news sites. Social networking sites are also very popular among internet users for sharing pictures, videos, news links, and other online content. Twitter is one of the most popular social networking and microblogging sites. With Twitter's URL shortening service, a news link can be included in a tweet using only a small number of characters, leaving the rest of the tweet for expressing views on the news story. Social links in Twitter can be unidirectional, allowing people to follow any person or organization, get their tweet updates, and share those updates with their own followers if desired. Through Twitter, thousands of news links are tweeted every day. Whenever there is a popular news story, different news sites publish identical or nearly identical versions ("clones") of that story. Although these clones have the same or very similar content, the level of popularity they achieve may differ considerably due to content-agnostic factors such as influential tweeters, time of publication, and the popularity of the news sites. It is important for a content provider to know which factors play an important role in making a news link popular. In this thesis research, a data set was collected containing the tweets made for the 218 members of 25 distinct sets of news story clones. The collected data is analyzed with respect to basic popularity characteristics: the number of tweets of various types, relative publication times of clone set members, tweet timing, and number of tweeter followers. Several other factors are then investigated for their impact in making some news story clones more popular than others. It is found that multiple content-agnostic factors, such as maximum number of followers and self-promotional tweets, have an impact on the overall popularity of a news site's stories, and a first step is taken at quantifying their relative importance.
