141 |
Média a negativní politická kampaň: volby do PSP ČR v roce 2013 / Media and negative political campaign: Czech parliamentary election in 2013. Mašková, Kateřina. January 2015
The main objective of the thesis is to analyse the role of selected Czech dailies and online news servers in the spread of negative political campaigning before the Czech parliamentary elections in 2013. The research answers the question of how the newspapers, in comparison with the news servers, covered the negative content produced by Czech political subjects during the election campaign. As sources of analysed data, the thesis used the dailies Mladá fronta Dnes and Právo and the online news servers Aktualne.cz and iHNed.cz. The thesis observed the share of negative evaluative remarks in news content dealing with the negative campaigning of Czech political parties, as well as the way the selected media commented on this topic. A crucial part of the thesis is a comparison of the parliamentary elections of 2013 with the preceding elections of 2006 and 2010. The aim was to show how the elections of 2013 fit into the trend of negative political communication in the Czech setting and how the media coverage of the phenomenon changed over the observed years. The thesis finds that the dailies covered negative political assertions much more often than the online...
|
142 |
Překlápění obsahů (shovelware) mezi tištěnými médii a zpravodajskými servery v České republice / Czech Shovelware: from Czech Print Media to News Servers or Conversely. Némethová, Eva. January 2019
This thesis investigates the trend of reusing the content of printed media in their on-line counterparts, based on several selected Czech national newspapers (Mladá fronta DNES, Lidové noviny, Právo) and their corresponding news servers (idnes.cz, lidovky.cz, novinky.cz) over the course of one constructed week in 2015. The theoretical section examines the interrelatedness of printed and on-line media, the impact of digitization on the transformation of the journalistic profession, and its influence on the content and recipients of media communications. The fundamental questions posed by this research are as follows: What percentage of the printed content is reused in the newspaper's corresponding on-line version and vice versa, and which of the selected media reuse the most content? The research operates with three initial hypotheses regarding the quantity and frequency of content conversion: printed journals reuse their content in on-line news servers at a rate of up to 5%, news servers reuse their content in printed journals at a rate of up to 10%, and the practice of shovelware is most frequently employed by the periodicals Mladá fronta DNES and Právo. The aim of the research is to reveal and interpret shovelware trends in the selected media and evaluate how this practice is perceived. The...
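The reuse-rate hypotheses above suggest a simple quantitative procedure. Below is a minimal sketch of how such rates might be estimated from article texts; it is an illustration, not the thesis's actual coding method, and the similarity threshold is an arbitrary assumption.

```python
# Sketch: estimating content reuse (shovelware) between print and online
# editions by pairwise text similarity. Illustrative only; the threshold
# and the sample texts below are assumptions, not the thesis's data.
from difflib import SequenceMatcher

def reuse_rate(print_articles, online_articles, threshold=0.8):
    """Fraction of print articles whose text also appears online."""
    reused = 0
    for p in print_articles:
        if any(SequenceMatcher(None, p, o).ratio() >= threshold
               for o in online_articles):
            reused += 1
    return reused / len(print_articles) if print_articles else 0.0

print_eds = ["Full text of a print article ...", "Another print piece ..."]
online_eds = ["Full text of a print article ...", "An online-only story ..."]
print(f"Reuse rate: {reuse_rate(print_eds, online_eds):.0%}")
```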
|
143 |
Designing and implementing a small scale Internet Service Provider. Brown, Johan; Gustafsson Brokås, Alexander; Hurtig, Niklas; Johansson, Tobias. January 2009
The objective of this thesis is to design and implement a small scale Internet Service Provider (ISP) for the NetCenter sub department at Mälardalen University. The ISP is intended to give NetCenter a network separate from the University's network, providing them with a more flexible environment for lab purposes. This will give their students an opportunity to experience a larger backbone with Internet accessibility, which has not been previously available. At the same time it will place the teachers in control of the network in the NetCenter lab premises. The network is designed with a layered approach including an Internet access layer, a larger core segment and a distribution layer with a separated lab network. It also incorporates both a public and a private server network, housing servers running e.g. Windows Active Directory, external DNS services, monitoring tools and logging applications. The Internet access is achieved by peering with SUNET, providing a full BGP feed. This thesis report presents methods, implementations and results involved in successfully creating the NetCenter ISP as both a lab network and an Internet provider with a few inevitable shortcomings; the most prominent being an incomplete Windows Domain setup.
|
144 |
Jämförelse av Hypervisor & Zoner: Belastningstester vid drift av webbservrar / Comparison of Hypervisors and Zones: Load Testing of Web Servers in Operation. Nyquist, Johan; Manfredsson, Alexander. January 2013
Virtualization of computers in general means that the whole or parts of a machine configuration are split into multiple execution environments. It is not just the computer itself that can be virtualized, but also resources such as memory, storage and networking. Virtualization is often used to utilize system resources more efficiently. A hypervisor acts as a layer between the operating system and the underlying hardware; with a hypervisor, each virtual machine has its own operating system kernel. Another technique that does without this middle layer is called zones. Zones are a natural part of the operating system and all instances share the same kernel, which does not introduce any additional overhead. The problem with the hypervisor is that it is a resource-demanding technique. By using zones this problem can be avoided: the hypervisor layer is removed and instances communicate directly with the operating system kernel. This was so far only a theoretical claim, and no previous research had been done, which motivated this investigation. To illustrate the problem we used Apache as the web server and the tool Httperf to run load tests against it. By doing this we were able to identify that the virtualized server performed worse than a physical server (reference machine), and that the newer zones technique contributes lower overhead, making the system perform better than with the traditional hypervisor. To support our theory, two tests were performed: the first consisted of one virtualized server, the second of three virtual servers. The purpose was to see how the different techniques performed in different scenarios. In both cases we found that zones performed better and did not lose as much performance relative to the reference machines.
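For reference, a load-test campaign of this kind might be scripted as in the sketch below. The httperf flags shown are standard options of the tool, but the server address, request rates, and connection counts are placeholders, not the values used in the thesis.

```python
# Sketch: driving httperf load tests against a web server at increasing
# request rates. Host, rates and connection counts are illustrative
# placeholders, not the thesis's benchmark parameters.
import subprocess

HOST = "192.168.1.10"   # assumed address of the Apache server under test

def run_httperf(rate, num_conns=1000):
    cmd = [
        "httperf", "--server", HOST, "--port", "80", "--uri", "/index.html",
        "--rate", str(rate), "--num-conns", str(num_conns), "--num-calls", "1",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    # httperf prints a summary; grab the measured reply-rate line.
    for line in out.splitlines():
        if line.startswith("Reply rate"):
            return line
    return "no reply-rate line found"

for rate in (100, 200, 400, 800):
    print(rate, "->", run_httperf(rate))
```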
|
145 |
Caching Techniques For Dynamic Web Servers. Suresha. 07 1900
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost – each request requires computation as well as communication across multiple components within the website and across the Internet. Dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load, and bandwidth consumption, as compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites.
A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study of combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently-proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code.
We start with presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches, without suffering from their individual limitations. This technique concentrates on reducing the bandwidth consumption due to dynamic web pages. Then, we move on to presenting mechanisms for reducing dynamic page construction times -- during normal loading, this is done through a hybrid technique of fragment caching and page pre-generation, utilizing the excess capacity with which web servers are typically provisioned to handle peak loads. During peak loading, this is achieved by integrating fragment-caching and code-caching, optionally augmented with page pre-generation.
In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages with the goal of achieving reduced bandwidth consumption from the web infrastructure perspective, and reduced page construction times from user perspective.
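As background on the fragment-caching technique that these mechanisms build on, the sketch below caches slow-changing fragments with a TTL and regenerates volatile ones per request. It is a minimal illustration, not the system developed in the thesis.

```python
# Sketch: fragment caching for a dynamic page. Stable fragments are cached
# with a TTL; volatile fragments are regenerated on every request. This
# illustrates the general technique only; fragment keys and TTLs are assumed.
import time

class FragmentCache:
    def __init__(self):
        self._store = {}  # key -> (expires_at, html)

    def get(self, key, ttl, generate):
        now = time.time()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]                    # fresh hit: skip regeneration
        html = generate()                      # miss or stale: rebuild fragment
        self._store[key] = (now + ttl, html)
        return html

cache = FragmentCache()

def render_page(user):
    # The header changes rarely (long TTL); the ticker is built per request.
    header = cache.get("header", ttl=3600, generate=lambda: "<nav>...</nav>")
    ticker = f"<div>quotes for {user} at {time.time():.0f}</div>"
    return header + ticker

print(render_page("alice"))
```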
|
146 |
Optimization algorithms for video service delivery. Abousabea, Emad Mohamed Abd Elrahman. 12 September 2012
The aim of this thesis is to provide optimization algorithms for accessing video services in either unmanaged or managed ways. We study recent statistics about unmanaged video services like YouTube and propose suitable optimization techniques that could enhance file access and reduce access costs. This cost analysis plays an important role in decisions about caching video files and their hosting periods on servers. For managed video services (IPTV), we conducted experiments on an open-IPTV collaborative architecture between different operators. This model is analyzed in terms of CAPEX and OPEX costs inside the domestic sphere. Moreover, we introduce a dynamic way of optimizing the Minimum Spanning Tree (MST) for the multicast IPTV service; under nomadic access, static trees can fail to provide the service efficiently as bandwidth utilization increases towards the streaming points (the roots of the topologies). Finally, we study reliable security measures in video streaming based on the hash chain methodology, propose a new algorithm, and compare different ways of achieving hash chain reliability based on generic classifications.
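For readers unfamiliar with hash chains, the sketch below shows the generic mechanism over a packet sequence: each packet carries the hash of its successor, so a receiver holding a trusted anchor can verify packets in order. The packet contents and anchor distribution are assumptions for illustration; this is not the new algorithm the thesis proposes.

```python
# Sketch: a basic hash chain over a sequence of video packets. The sender
# distributes the first token (the anchor, e.g. signed out of band); each
# token binds a packet to its successor, so packets verify in order.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(packets):
    """Return per-packet tokens; tokens[i] authenticates packets[i]."""
    tokens = [b""] * len(packets)
    nxt = b""                                  # nothing follows the last packet
    for i in range(len(packets) - 1, -1, -1):
        tokens[i] = sha256(packets[i] + nxt)   # bind packet to its successor
        nxt = tokens[i]
    return tokens                              # tokens[0] is the signed anchor

packets = [b"frame-0", b"frame-1", b"frame-2"]
tokens = build_chain(packets)
# Receiver trusts tokens[0] and checks each packet against the chain:
assert sha256(packets[0] + tokens[1]) == tokens[0]
assert sha256(packets[1] + tokens[2]) == tokens[1]
print("chain verified")
```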
|
147 |
A security architecture for protecting dynamic components of mobile agents. Yao, Ming. January 2004
New techniques, languages and paradigms have facilitated the creation of distributed applications in several areas. Perhaps the most promising paradigm is the one that incorporates the mobile agent concept. A mobile agent in a large-scale network can be viewed as a software program that travels through a heterogeneous network, crossing various security domains and executing autonomously at its destination. Mobile agent technology extends the traditional network communication model by including mobile processes, which can autonomously migrate to new remote servers. This basic idea yields numerous benefits, including flexible, dynamic customisation of the behaviour of clients and servers and robust interaction over unreliable networks. In spite of these advantages, widespread adoption of the mobile agent paradigm is being delayed by various security concerns. Currently available mechanisms for reducing the security risks of this technology do not efficiently cover all the existing threats. Due to the characteristics of the mobile agent paradigm and the threats to which it is exposed, security mechanisms must be designed to protect both agent-hosting servers and agents. Protecting the security of agent-hosting servers is a reasonably well researched issue, and many viable mechanisms have been developed to address it. Protecting agents is technically more challenging, and solutions to do so are far less developed. The primary added complication is that, as an agent traverses multiple servers that are trusted to different degrees, the agent's owner has no control over the behaviour of the agent-hosting servers; consequently the hosting servers can subvert the computation of the passing agent. Since it is infeasible to force remote servers to enact a security policy that would prevent them from corrupting the agent's data, cryptographic mechanisms defined by the agent's owner may be one of the few feasible ways to protect the agent's data. Hence the focus of this thesis is the development and deployment of cryptographic mechanisms for securing mobile agents in an open environment. Firstly, requirements for securing mobile agents' data are presented. For a sound mobile agent application, the data that an agent collects from each visited server must have its integrity protected. In some applications where servers intend to remain anonymous and will reveal their identities only under certain circumstances, privacy is also required. Aimed at these properties, four new schemes are designed to achieve different security levels: two schemes focus on preserving the integrity of the agent's data, the other two on attaining data privacy. Four new security techniques are designed to support these schemes. The first is joint keys, to discourage two servers from colluding to forge a victim server's signature. The second is recoverable key commitment, to enable detection of any illegal operation by hosting servers on an agent's data. The third is conditionally anonymous digital signature schemes, utilising anonymous public-key certificates, to allow any server to digitally sign a document without leaking its identity. The fourth is server pseudonyms, analogues of identities, to enable servers to be recognised as legitimate while their identities remain unknown to anyone; pseudonyms can be deanonymised with the assistance of authorities.
Apart from these new techniques, other mechanisms such as a hash chaining relationship and a mandatory verification process are adopted in the new schemes. To enable the interoperability of these mechanisms, a security architecture is developed that integrates compatible techniques into a generic solution for securing an agent's data. The architecture can be used independently of the particular mobile agent application under consideration, and it can guide and support developers in the analysis of security issues during the design and implementation of services and applications based on mobile agent technology.
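To make the hash chaining relationship concrete, the sketch below builds a forward hash chain over data an agent collects hop by hop, so that tampering with an earlier entry is detectable. The seed nonce and record format are illustrative assumptions; the thesis's schemes add signatures, joint keys, and anonymity on top of such a chain.

```python
# Sketch: a forward hash chain over data collected by a mobile agent. Each
# hosting server appends its data plus a link hash binding the entry to all
# earlier ones; the owner re-walks the chain to detect tampering. The seed
# nonce and record format are assumptions, not the thesis's exact scheme.
import hashlib

SEED = b"agent-owner-nonce"   # assumed secret fixed by the agent's owner

def link(prev_hash: bytes, server_id: str, data: str) -> bytes:
    return hashlib.sha256(prev_hash + server_id.encode() + data.encode()).digest()

def visit(chain, server_id, data):
    prev = chain[-1][2] if chain else SEED
    chain.append((server_id, data, link(prev, server_id, data)))

def verify(chain):
    prev = SEED
    for server_id, data, h in chain:
        if link(prev, server_id, data) != h:
            return False
        prev = h
    return True

chain = []
visit(chain, "server-A", "offer: 120")
visit(chain, "server-B", "offer: 115")
assert verify(chain)
chain[0] = ("server-A", "offer: 90", chain[0][2])   # tamper with A's entry
print("chain valid?", verify(chain))                # -> False
```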
|
148 |
Etude de performances sur processeurs multicoeur : environnement d'exécution événementiel efficace et étude comparative de modèles de programmation / Performance studies on multicore processors: efficient event-driven runtime and programming models comparison. Geneves, Sylvain. 05 April 2013
This thesis studies the performance of data servers on multicore machines, focusing on scalability with the number of cores. First, we study the internals of an event-driven multicore runtime. We show that false sharing and inter-core communication hurt performance badly and prevent applications from scaling, and we propose several optimizations to address these issues. In a second part, we compare the multicore performance of three web servers, each representative of a programming model. We observe that the performance differences between the servers vary as the number of cores increases. After an in-depth analysis of the observed performance, we pinpoint the cause of the scalability limitation, and we present one approach and several perspectives for overcoming this limit.
|
149 |
Modeling, characterization, and optimization of web server power in data centers = Modelagem, caracterização e otimização de potência em centro de dados. Piga, Leonardo de Paula Rosa, 1985-. 11 August 2013
Advisors: Sandro Rigo, Reinaldo Alvarenga Bergamaschi / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: To keep up with an increasing demand for computational resources, IT companies need to build facilities that host hundreds of thousands of computers: the data centers. This environment is highly dependent on electrical energy, a resource that is becoming expensive and limited. In this context, this thesis develops a global, data-center-level power and performance optimization approach for Web server data centers. It presents a power measurement framework for commodity servers, develops empirical models for estimating the power consumed by Web servers, and implements one of the global power optimization heuristics on a state-of-the-art, high-density SeaMicro SM15k cluster by AMD. The power measuring framework is composed of a custom-made board, which captures power consumption; a data acquisition device that samples the measured values; and a piece of software that manages the framework. We show a novel method for developing full-system Web server power models that prunes model parameters and reduces non-linear relationships among performance measurements and system power. The Web server power models use performance indicators read from the machine's internal performance counters as parameters. We evaluate our approach on an AMD Opteron-based Web server and on an Intel i7-based Web server. Our best model displays an average absolute error of 1.92% for the Intel i7 server and 1.46% for the AMD Opteron server, as compared to actual measurements, with 90th-percentile absolute errors of 2.66% (Intel i7) and 2.08% (AMD Opteron). We deploy the global power management system on a state-of-the-art SeaMicro SM15k cluster. The implementation relies on the concept of Virtual Power States, a combination of the CPU utilization rate with the P/C power states available in modern processors, and on our global optimization algorithm called Slack Recovery. We also propose and implement a novel mechanism to control utilization rates in each server, a key aspect of our power/performance optimization system. Experimental results show that our Slack Recovery-based system can reduce power consumption by up to 16% when compared to the Linux performance governor and by up to 6.7% when compared to the Linux ondemand governor. / Doctorate / Computer Science / Doctor in Computer Science
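As an illustration of the empirical modeling step, the sketch below fits a linear full-system power model to performance-counter samples by least squares. The counter choice, sample values, and linear form are assumptions for illustration; the thesis's models additionally prune parameters and reduce non-linearities.

```python
# Sketch: fitting a power model from performance-counter samples by least
# squares. Counter names and sample values are illustrative assumptions.
import numpy as np

# Each row: [instructions/s, LLC misses/s, CPU utilization]; one row per sample.
counters = np.array([
    [2.1e9, 1.2e6, 0.30],
    [3.8e9, 2.9e6, 0.55],
    [5.0e9, 4.1e6, 0.80],
    [5.6e9, 5.0e6, 0.95],
])
measured_watts = np.array([48.0, 61.0, 74.0, 82.0])   # from a power meter

# Linear model: P = w0 + w1*ips + w2*misses + w3*util
X = np.hstack([np.ones((len(counters), 1)), counters])
w, *_ = np.linalg.lstsq(X, measured_watts, rcond=None)

pred = X @ w
err = np.abs(pred - measured_watts) / measured_watts
print("mean abs error: {:.2%}".format(err.mean()))
```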
|
150 |
Novas técnicas de distribuição de carga para servidores Web geograficamente distribuídos / New load balancing techniques for geographically distributed Web servers. Nakai, Alan Massaru, 1979-. 14 September 2012
Advisors: Edmundo Roberto Mauro Madeira, Luiz Eduardo Buzato / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Load balancing is a problem that is intrinsic to distributed systems. In this thesis, we study this problem in the context of geographically distributed web servers. The replication of web servers on geographically distributed datacenters allows the service provider to tolerate failures and to improve the response times perceived by clients. A key issue for achieving good performance in such a deployment is the efficiency of the load balancing solution used to distribute client requests among the replicated servers. Load balancing allows providers to make better use of their resources, softens the need for over-provisioning, and helps tolerate abrupt load peaks until the system can be adjusted. The objective of this work was to study and propose load balancing solutions for geographically distributed web servers. To accomplish this objective, we implemented two tools that support the analysis and development of load balancing solutions: a testbed built on top of a real web service implementation, and simulation software based on a realistic model of web load generation. The main contributions of this thesis are four new load balancing solutions spanning three types: DNS-based, client-based, and server-based. / Doctorate / Computer Science / Doctor in Computer Science
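To make the DNS-based type concrete, the sketch below shows a toy DNS-style balancer that resolves each lookup to the least-loaded replica. Replica names, load reports, and the tie-breaking rule are illustrative assumptions, not the solutions proposed in the thesis.

```python
# Sketch: a toy DNS-style load balancer answering each name lookup with the
# replica reporting the lowest load. Names and load values are assumptions.
import random

class DnsBalancer:
    def __init__(self, replicas):
        self.load = {r: 0.0 for r in replicas}   # most recent load reports

    def report(self, replica, load):
        self.load[replica] = load                # pushed by each datacenter

    def resolve(self, _hostname):
        # Return the least-loaded replica; break ties randomly so bursts of
        # lookups between reports do not all hit the same server.
        low = min(self.load.values())
        candidates = [r for r, l in self.load.items() if l == low]
        return random.choice(candidates)

dns = DnsBalancer(["dc-us.example.com", "dc-eu.example.com", "dc-br.example.com"])
dns.report("dc-us.example.com", 0.72)
dns.report("dc-eu.example.com", 0.31)
dns.report("dc-br.example.com", 0.31)
print(dns.resolve("www.example.com"))   # -> one of the two 0.31-loaded replicas
```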
|