1

Cost Aware Virtual Content Delivery Network for Streaming Multimedia : Cloud Based Design and Performance Analysis

Vishnubhotla Venkata Krishna, Sai Datta January 2015 (has links)
A significant portion of today’s internet traffic stems from multimedia services. Coupled with the growth in the number of users accessing these services, this leads to a tremendous increase in network traffic. CDNs aid in handling this traffic and offer reliable services by distributing content across different locations. The concept of virtualization has transformed traditional data centers into flexible cloud infrastructure. With the advent of cloud computing technology, multimedia providers have scope for establishing a CDN using a network operator’s cloud environment. However, the main challenge in establishing such a CDN is implementing a cost-efficient and dynamic mechanism that guarantees good service quality to users. This thesis aims to develop, implement and assess the performance of a model that coordinates the deployment of virtual servers in the cloud. A solution that dynamically spawns and releases virtual servers according to variations in user demand is proposed. A cost-based heuristic algorithm is presented for deciding the placement of virtual servers in OpenStack-based federated clouds. Further, the proposed model is implemented on the XIFI cloud and its performance is measured. Results of the performance study indicate that virtual CDNs offer reliable and prompt services. With virtual CDNs, multimedia providers can control expenses and gain a greater level of flexibility for customizing the virtual servers deployed at different locations.
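The abstract does not reproduce the cost-based heuristic itself; purely as a hypothetical illustration of the idea, the sketch below greedily spawns virtual servers at whichever cloud site currently offers the lowest combined hosting and delivery cost per user. Site names, cost figures and the greedy rule are invented for the example and are not taken from the thesis.

```python
# Hypothetical sketch of a cost-based greedy placement heuristic for
# virtual CDN servers in a federated cloud. Sites, costs and demand
# figures are illustrative only; the thesis' actual algorithm may differ.
from dataclasses import dataclass

@dataclass
class Site:
    name: str             # federated cloud region (e.g. an OpenStack site)
    vm_cost: float        # cost of running one virtual streaming server
    delivery_cost: float  # per-user delivery cost from this site
    capacity: int         # max users one VM at this site can serve

def place_servers(sites, demand):
    """Greedily spawn VMs where (VM cost + delivery cost) per user is lowest."""
    placement = {s.name: 0 for s in sites}
    remaining = demand
    while remaining > 0:
        # cost per user of serving the next batch with one more VM at each site
        def batch_cost(s):
            users = min(s.capacity, remaining)
            return (s.vm_cost + s.delivery_cost * users) / users
        best = min(sites, key=batch_cost)
        placement[best.name] += 1
        remaining -= min(best.capacity, remaining)
    return placement

if __name__ == "__main__":
    sites = [
        Site("site-a", vm_cost=5.0, delivery_cost=0.02, capacity=200),
        Site("site-b", vm_cost=3.0, delivery_cost=0.05, capacity=150),
    ]
    print(place_servers(sites, demand=900))
```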
2

Virtualiserat datacenter med heterogena kunder för moln eller network-as-a-service-miljö. : Analys resursanvändning vid delad virtualiserings värdmaskin för webbservrar samt videokonferensservrar. / Virtualized data center with heterogeneous customers for a cloud or network-as-a-service environment: Analysis of resource usage on a shared virtualization host for web servers and video conferencing servers.

Undin, Daniel January 2014 (has links)
This project examines a practical virtualization setup on Ubuntu Server with KVM and virt-manager for use in a cloud or Network-as-a-Service environment. The project also includes a comparison between the web server software NginX and Apache2, and a comparison between the video conferencing software BigBlueButton and OpenMeetings, by measuring CPU, memory and network load on the virtual servers at 1 to 20 connections from a client machine. Based on the project’s results, Apache2 is recommended as the web server, since it is easier to install and the difference in resource usage is negligible, and OpenMeetings is recommended as the video conferencing server, since it is the more complete alternative.
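The abstract above describes measuring CPU, memory and network load on the virtual servers at 1 to 20 client connections. As a loose, hypothetical illustration of that kind of measurement (not the thesis’ actual tooling), the following sketch samples host-level counters with the third-party psutil library; the interval, duration and output format are arbitrary choices.

```python
# Rough sketch of host-side resource sampling during a load test,
# in the spirit of the measurements described above. Uses psutil;
# interval and fields are arbitrary, not the thesis' actual setup.
import psutil

def sample(duration_s=60, interval_s=1.0):
    net0 = psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        cpu = psutil.cpu_percent(interval=interval_s)   # % over the interval
        mem = psutil.virtual_memory().percent           # % of RAM in use
        net1 = psutil.net_io_counters()
        rx = (net1.bytes_recv - net0.bytes_recv) / interval_s
        tx = (net1.bytes_sent - net0.bytes_sent) / interval_s
        net0 = net1
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  "
              f"rx={rx/1e6:6.2f} MB/s  tx={tx/1e6:6.2f} MB/s")

if __name__ == "__main__":
    sample(duration_s=30)
```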
3

Webbserverprogram: Öppen källkods-alternativ till Apache / Web server software: Open-source alternatives to Apache

Svantesson, Carlhåkan January 2012 (has links)
It has become increasingly common for companies to market themselves on the Internet, which usually means the company needs a website. The website relies on a web server program to handle customer requests, and the web server program with by far the largest market share is Apache. Apache has existed for over 15 years and is open source. This thesis examines whether there are any open-source alternatives to the market-leading web server Apache by looking at functionality and performance. The performance tests were carried out with both static and dynamic web pages. The alternatives examined are Nginx and Lighttpd. The results show that, on the whole, both Nginx and Lighttpd perform better than Apache. This is most visible in the static performance tests, where Nginx and Lighttpd perform more than twice as well as Apache. In the dynamic performance tests, Nginx and Apache show comparable performance, while Lighttpd does not quite reach the same level. Nginx lacks some functionality compared with the other two, but none of the missing features are critical.
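The static and dynamic performance tests are not detailed in the abstract; the sketch below only illustrates the general shape of such a comparison, timing sequential GET requests against assumed URLs for each server. The URLs, request count and single-client design are assumptions, not the thesis’ actual benchmark, which would typically use a dedicated load-testing tool.

```python
# Illustrative sketch of a static-vs-dynamic web server benchmark:
# time N sequential GET requests per server/URL pair and report req/s.
# All URLs and counts are placeholders, not the thesis' test setup.
import time
import urllib.request

def bench(url, requests=200):
    start = time.perf_counter()
    for _ in range(requests):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    elapsed = time.perf_counter() - start
    return requests / elapsed  # requests per second

if __name__ == "__main__":
    targets = {
        "nginx/static":   "http://localhost:8080/index.html",
        "nginx/dynamic":  "http://localhost:8080/page.php",
        "apache/static":  "http://localhost:8081/index.html",
        "apache/dynamic": "http://localhost:8081/page.php",
    }
    for name, url in targets.items():
        print(f"{name:15s} {bench(url):8.1f} req/s")
```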
4

Mainframes and media streaming solutions : How to make mainframes great again

Berg, Linus, Ståhl, Felix January 2020 (has links)
Mainframes have been used for well over 50 years and are built to process demanding workloads fast, with the latest models using IBM’s z/Architecture processors. At the time of writing, mainframes are a central unit of the world’s largest corporations in banking, finance and health care, performing, for example, heavy loads of transaction processing. When IBM bought RedHat and acquired the container orchestration platform OpenShift, the IBM lab in Poughkeepsie saw that a new opportunity for the mainframe might have opened: a media streaming server built with OpenShift, running on a mainframe. This is interesting because a media streaming solution built with OpenShift might perform better on a mainframe than on a traditional server. The initial question posed was ’Is it worth running streaming solutions on OpenShift on a mainframe?’. First, the solution has to be built and tested on a mainframe to confirm that such a solution actually works. Later, IBM will perform a benchmark to see if the solution is viable to sell. The authors’ method included finding the most suitable streaming software according to a set of criteria that had to be met. Nginx was the winner, being the only tested software that was open source, scalable, runnable in a container and capable of adaptive streaming. With the software selected, configuration of Nginx, Docker and OpenShift resulted in a fully functional proof of concept. Unfortunately, due to the Covid-19 pandemic, the authors never got access to a mainframe, as promised, to test the solution; however, OpenShift is platform agnostic and should, theoretically, run on a mainframe. The authors built a base solution that can easily be extended, and the functionality left to be built by IBM engineers is described in the future works section; it includes, for example, live streaming and mainframe benchmarking. The authors also leave for future work a study that includes more software, including paid alternatives, since this study covers only open-source options, as well as an extension of the existing solution’s feature set.
5

Utredning och impementation av säkerhetslösningar för publika API:er / Investigation and implementation of security solutions for public APIs

Grahn, Kristoffer January 2020 (has links)
The thesis examines common security risks associated with public APIs and provides information about IIS, Apache, Nginx and OAuth 2.0, along with some of the security modules they provide that can be implemented. IIS and Apache have built-in modules for handling Distributed Denial-of-Service (DDoS) attacks; these are compared against each other by analyzing an existing report that tests two different DDoS attack types. The security solutions’ authentication modules are broken down into different types of verification processes, which turn out to share a common weakness against Man-in-the-Middle (MitM) attacks. The report describes how MitM attacks can be mitigated with strong encryption protocols, Transport Layer Security (TLS), and examines the newest version, TLS 1.3.
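One concrete takeaway from the abstract is the role of TLS 1.3 in resisting MitM attacks. As a minimal client-side sketch (the endpoint URL is a placeholder, and the server-side IIS/Apache/Nginx configuration is not shown), the following snippet refuses to negotiate anything older than TLS 1.3 when calling a public API.

```python
# Minimal sketch of enforcing TLS 1.3 on the client side when calling a
# public API, as a guard against protocol-downgrade MitM attacks.
# The endpoint URL is a placeholder, not from the thesis.
import ssl
import urllib.request

ctx = ssl.create_default_context()            # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older than TLS 1.3

with urllib.request.urlopen("https://api.example.com/health", context=ctx) as resp:
    print(resp.status)
```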
6

Caching HTTP : A comparative study of caching reverse proxies Varnish and Nginx

Logren Dély, Tobias January 2014 (has links)
With the number of users on the web steadily increasing, websites must at times endure heavy loads and risk grinding to a halt beneath the flood of visitors. One solution to this problem is HTTP reverse proxy caching, which acts as an intermediary between the web application and the user. Content from the application is stored and passed on, avoiding the need for the application to produce it anew for every request. One popular application designed solely for this task is Varnish; another interesting application for the task is Nginx, which is primarily designed as a web server. This thesis compares the performance of the two applications in terms of the number of requests served in relation to response time, as well as system load and free memory. With both applications using their default configuration, the experiments find that Nginx performs better in the majority of the tests performed. The difference is, however, very slight in tests with a low request rate.
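To make the reverse-proxy caching idea concrete, here is a deliberately naive, single-threaded sketch of a caching proxy: cache hits are served from memory, misses are fetched from an assumed backend and stored with a fixed TTL. The backend address and TTL are arbitrary, and the sketch is in no way comparable to Varnish or Nginx; it only illustrates the mechanism being compared.

```python
# Toy HTTP reverse proxy with a naive in-memory cache. Single-threaded,
# GET-only, no Cache-Control handling; backend address and TTL are
# arbitrary assumptions. Conceptual sketch only.
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND = "http://localhost:8000"   # assumed origin application
TTL_SECONDS = 30
cache = {}                          # path -> (timestamp, body, content_type)

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        entry = cache.get(self.path)
        if entry and time.time() - entry[0] < TTL_SECONDS:
            _, body, ctype = entry             # cache hit: skip the backend
        else:
            with urllib.request.urlopen(BACKEND + self.path) as resp:
                body = resp.read()
                ctype = resp.headers.get("Content-Type", "text/html")
            cache[self.path] = (time.time(), body, ctype)
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CachingProxy).serve_forever()
```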
7

Platformě nezávislé aplikační rozhraní na architektuře REST. / Platform independent application interface based on REST architecture.

Herma, Tomáš January 2014 (has links)
This diploma thesis deals with the creation of a web application, a REST API, SDKs for the Android and iPhone platforms, and an example application for these two platforms. The first part of the work analyses current application interfaces. The second part describes the selected technologies and the implementation.
8

HTTP Based Adaptive Bitrate Streaming Protocols in Live Surveillance Systems

Dzabic, Daniel, Mårtensson, Jacob January 2018 (has links)
This thesis explores possible solutions to replace Adobe Flash Player by using tools already built into modern web browsers, and explores the tradeoffs between bitrate, quality, and delay when using an adaptive bitrate for live streamed video. Using an adaptive bitrate for streamed video was found to reduce stalls in playback for the client by adapting to the available bandwidth. A newer codec can further compress the video file size while maintaining the same video quality. This can improve the viewing experience for clients on a restricted or a congested network. The tests conducted in this thesis show that producing an adaptive bitrate stream and changing codecs is a very CPU intensive process.
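The adaptive-bitrate decision described above can be illustrated with a simplified client-side selection rule: after each segment download, estimate throughput and request the highest rendition that fits within a safety margin. The rendition ladder and margin below are illustrative assumptions; production HLS/DASH players use more elaborate heuristics.

```python
# Simplified sketch of client-side adaptive-bitrate selection: pick the
# highest rendition below measured bandwidth times a safety margin.
# Renditions and margin are illustrative, not from the thesis.
RENDITIONS_KBPS = [400, 800, 1600, 3000, 6000]  # available encoded bitrates

def pick_bitrate(measured_kbps, margin=0.8):
    """Choose the highest rendition that fits measured bandwidth * margin."""
    budget = measured_kbps * margin
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS_KBPS)

if __name__ == "__main__":
    for throughput in (350, 1200, 2500, 9000):
        print(f"{throughput:5d} kbit/s measured -> "
              f"request {pick_bitrate(throughput)} kbit/s stream")
```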
9

Performance comparison between Apache and NGINX under slow rate DoS attacks

Al-Saydali, Josef, Al-Saydali, Mahdi January 2021 (has links)
One of the novel threats to the internet is the slow HTTP Denial of Service (DoS) attack at the application level, targeting web server software. A slow HTTP attack can have a high impact on web server availability for normal users, and it is cheap to mount compared to other types of attacks, which makes it one of the most feasible attacks against web servers. This project investigates the impact of the slow HTTP attack on the Apache and Nginx servers comparatively, and reviews the available configurations for mitigating such an attack. The performance of the Apache and NGINX servers under slow HTTP attack has been compared, as these two are the most widely used web server software globally. Identifying the most resilient web server software against this attack, and knowing the suitable configurations to defeat it, play a key role in securing web servers from one of the major threats on the internet. Comparing the results of the experiments conducted on the two web servers shows that NGINX performs better than the Apache server under a slow rate DoS attack without any configured defense mechanism. However, when defense mechanisms were applied to both servers, the Apache server behaved similarly to NGINX and successfully defeated the slow rate DoS attack.
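The defense mechanisms evaluated in the thesis are not specified in the abstract; as a conceptual illustration of the general timeout-style defense such configurations apply, the sketch below gives each client a fixed overall deadline to deliver its request headers and drops connections that trickle bytes too slowly. This is a toy server, not Apache’s or Nginx’s actual mechanism, and the 10-second deadline is an arbitrary choice.

```python
# Conceptual sketch of a header-deadline defense against slow HTTP DoS:
# each connection gets a fixed total time to send complete request headers.
# Toy single-threaded server; deadline value is arbitrary.
import socket
import time

HEADER_DEADLINE_S = 10.0   # total time a client gets to deliver its headers

def serve(host="0.0.0.0", port=8080):
    srv = socket.create_server((host, port))
    while True:
        conn, _ = srv.accept()
        start = time.monotonic()
        data = b""
        try:
            while b"\r\n\r\n" not in data:
                remaining = HEADER_DEADLINE_S - (time.monotonic() - start)
                if remaining <= 0:
                    break                    # headers took too long: give up
                conn.settimeout(remaining)
                chunk = conn.recv(4096)
                if not chunk:
                    break
                data += chunk
            if b"\r\n\r\n" in data:
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        except socket.timeout:
            pass                             # client trickled bytes too slowly
        finally:
            conn.close()

if __name__ == "__main__":
    serve()
```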
10

Running Multiple Versions of Services With Continuous Delivery

Wik, Lucas January 2017 (has links)
Continuous Delivery is a software development discipline in which the software is always kept in a release-ready state. It has proven to be a challenge for companies to adopt the practices of Continuous Delivery, but the benefits it brings may well be worth overcoming the challenges that the adoption process poses. The problem is that these challenges appear to be poorly understood: different adoption cases report different problems, and what one case considers a solution another considers a problem. Adopting Continuous Delivery is thus a tricky process. The tech company IST is interested in adopting Continuous Delivery and wants to make a soft start by adding new functionality to their service-oriented system: the ability to run multiple versions of their services at the same time. This study has implemented this functionality in their system and then investigated the possible issues and benefits the functionality brings with respect to Continuous Delivery. Finally, the author discusses how, in his view, any interested company or developer should approach Continuous Delivery.
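The abstract does not describe how the multi-version functionality was realized; purely as a hypothetical illustration of one way to run several service versions side by side, the sketch below routes requests by a version prefix in the URL. It is not IST’s implementation, just a toy dispatcher showing the idea.

```python
# Hypothetical illustration of exposing multiple service versions side by
# side: route requests by a version prefix in the URL. Toy dispatcher only.
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_v1(path):
    return f"v1 handling {path}"

def handle_v2(path):
    return f"v2 handling {path}"

VERSIONS = {"/v1": handle_v1, "/v2": handle_v2}

class VersionRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, handler in VERSIONS.items():
            if self.path == prefix or self.path.startswith(prefix + "/"):
                body = handler(self.path[len(prefix):] or "/").encode()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "unknown service version")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VersionRouter).serve_forever()
```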
