251

IMPROVING ROUTING AND CACHING PERFORMANCE IN DHT BASED PEER-TO-PEER SYSTEMS

XU, ZHIYONG January 2003 (has links)
No description available.
252

Optimizing Request Routing in Heterogeneous Web Computation Environments

Shedimbi, Prudhvi Rao 20 October 2016 (has links)
No description available.
253

Replacing batch-based data extraction with event streaming with Apache Kafka: A comparative study

Axelsson, Richard January 2022 (has links)
For growing organisations that have built their data flow around a monolithic database server, an ever-increasing number of applications and an ever-increasing demand for data freshness will eventually push the existing system to its limits, prompting either hardware upgrades or an updated data architecture. By switching from an approach of full extractions of data at regular intervals to an approach where only changes are extracted, resource consumption could potentially be decreased, while simultaneously increasing data freshness.

The objective of this thesis is to provide insights into how implementing an event streaming setup with Apache Kafka connected to SQL Server through the Debezium source connector affects resource consumption on the database server. Other studies in related work have often been focused on steps further downstream in the data pipeline. This thesis can therefore contribute to an area where more knowledge is needed.

Through an empirical study done using two different setups in the same system, traditional data extraction in batches and extraction through event streaming are measured and compared. The point of measurement is the SQL Server database from which data is extracted. Both memory utilisation and CPU utilisation are measured, using SQL Server Profiler. Different parameters for table sizes, volumes of data and intervals between changes are used to simulate different scenarios.

One of the takeaways of the results is that, at the same number of total changes, the size of the individual transactions has a large impact on the resource consumption caused by event streaming. The study shows that an overhead cost is involved with each transaction, and also that the regular polling that the source connector performs causes resource consumption even in idleness.

The thesis concludes that event streaming can offer reduced resource consumption on the database server. However, when the source table size is small and the number of changes large, extraction in batches is less resource-intensive.
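For context, the change-data-capture side of such a setup can be sketched briefly. The snippet below registers a Debezium SQL Server source connector with a Kafka Connect worker over its REST API. The property names follow the Debezium 2.x connector documentation (older releases used different names, e.g. database.dbname); the hostnames, credentials, and table list are placeholders, not the configuration used in the thesis.

```python
import json
import urllib.request

# Hypothetical connector registration against a local Kafka Connect worker.
# All host names, credentials, and table names below are placeholders.
connector = {
    "name": "inventory-sqlserver-source",
    "config": {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "database.hostname": "sqlserver.example.internal",
        "database.port": "1433",
        "database.user": "debezium",
        "database.password": "change-me",
        "database.names": "inventory",
        "topic.prefix": "inventory",
        # Only stream changes from the tables under study.
        "table.include.list": "dbo.orders,dbo.customers",
        # Debezium persists schema history in its own Kafka topic.
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-history.inventory",
    },
}

# POST the configuration to Kafka Connect's REST API (requires a running worker).
req = urllib.request.Request(
    "http://localhost:8083/connectors",
    data=json.dumps(connector).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```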
254

Exploiting Limited Customer Choice and Server Flexibility

He, Yu-Tong 12 1900 (has links)
Flexible queuing systems arise in a variety of applications, such as service operations, computer/communication systems and manufacturing. In such a system, customer types vary in the flexibility of choosing servers; servers vary in the flexibility of which types of customers to serve. This thesis studies several resource allocation policies which address the concerns of limited customer choice and server flexibility. First, to accommodate different levels of flexibility, we propose the MinDrift affinity routing (MARO) policy and three variants: MARO-2/k, MARO-flex and MARO-tree. These policies are designed to maximize the system capacity by using the first moments of the interarrival times and the service times, while requiring only a small amount of state information to minimize the delay in the system. Using diffusion limits for systems with Poisson arrival processes, we prove that MARO, MARO-flex and MARO-tree have the same heavy-traffic optimality properties, and that this optimality is achieved independent of the flexibility levels. Through their application to distributed computing systems, we show that the MARO-related policies (which require significantly less state information) outperform the MinDrift(Q) policy (which requires global state information) in heterogeneous server systems with either high or medium loads. Second, when no state information is available, we propose both a random routing policy which asymptotically minimizes the delay in the system by using the second moments of the service times, and a pooling strategy which further reduces the delay by combining appropriate parallel single-server queues into a number of multi-server queues. Overall, this thesis intends to provide insights on designing effective policies for allocating servers' time to serve multiple types of customers. / Doctor of Philosophy (PhD)
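To make the flavour of drift-minimizing routing concrete, here is a minimal sketch of a generic MinDrift-style rule: each arrival goes to the eligible server minimizing the product of that server's queue length and the type's mean service time there, i.e. the choice with the smallest contribution to a quadratic Lyapunov drift. The customer types, eligibility sets, and service times are invented; this paraphrases the MinDrift idea, not the thesis's MARO policy or its variants.

```python
import random

# Hypothetical mean service times m[i][j]; each customer type may use only a
# subset of the servers (limited choice), and servers are heterogeneous.
mean_service = {
    "A": {0: 1.0, 1: 2.0},   # type A may use servers 0 and 1
    "B": {1: 1.5, 2: 0.5},   # type B may use servers 1 and 2
}
queue_len = {0: 0, 1: 0, 2: 0}

def route(ctype: str) -> int:
    """Send an arrival to the eligible server minimizing queue_len * mean_service."""
    eligible = mean_service[ctype]
    # Tie-break randomly so no server is systematically favoured.
    best = min(eligible, key=lambda j: (queue_len[j] * eligible[j], random.random()))
    queue_len[best] += 1
    return best

# Departures are omitted; this only illustrates the routing decision itself.
for t in ["A", "B", "A", "B", "A"]:
    print(t, "->", route(t))
```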
255

Modeling and performance analysis of scalable web servers not deployed on the Cloud

Aljohani, A.M.D., Holton, David R.W., Awan, Irfan U. January 2013 (has links)
No / Over the last few years, cloud computing has become quite popular. It offers Web-based companies the advantage of scalability. However, this scalability adds complexity, which makes analysis and predictable performance difficult. There is a growing body of research on load balancing in cloud data centres which studies the problem from the perspective of the cloud provider. Nevertheless, the load balancing of scalable web servers deployed on the cloud has received less research attention. This paper introduces a simple queueing model to analyse the performance metrics of a web server under varying traffic loads. This helps web server managers to manage their clusters and to understand the trade-off between QoS and cost. In the proposed model, two thresholds are used to control the scaling process. A discrete-event simulation (DES) is presented and validated via an analytical solution.
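As an illustration of the two-threshold control the paper describes, here is a toy discrete-event simulation: a server is added when the queue grows past an upper threshold and removed when it falls below a lower one. The arrival and service rates, thresholds, and server limits are invented for the example and are not the paper's parameters or its model.

```python
import heapq
import random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE = 5.0, 1.0   # hypothetical rates
T_HIGH, T_LOW = 10, 2                   # assumed scaling thresholds
MIN_SERVERS, MAX_SERVERS = 1, 8

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
queue, busy, servers, t_end = 0, 0, MIN_SERVERS, 10_000.0

while events:
    t, kind = heapq.heappop(events)
    if t > t_end:
        break
    if kind == "arrival":
        queue += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
    else:  # a service completion frees one server
        busy -= 1
    # Threshold-driven scaling decisions.
    if queue > T_HIGH and servers < MAX_SERVERS:
        servers += 1
    elif queue < T_LOW and servers > MIN_SERVERS:
        servers -= 1
    # Start service on any free server.
    while busy < servers and queue > 0:
        queue -= 1
        busy += 1
        heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))

print(f"at t={t:.0f}: servers={servers}, queue={queue}, busy={busy}")
```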
256

A suitable server placement for peer-to-peer live streaming

Yuan, X.Q., Yin, H., Min, Geyong, Liu, X., Hui, W., Zhu, G.X. January 2013 (has links)
No / With the rapid growth of the scale, complexity, and heterogeneity of Peer-to-Peer (P2P) systems, it has become a great challenge to deal with peers' network-oblivious traffic and self-organization problems. A potential solution is to deploy servers in appropriate locations. However, due to the unique features and requirements of P2P systems, traditional placement models cannot yield the desired service performance. To fill this gap, we propose an efficient server placement model for P2P live streaming systems. Compared to existing solutions, this model takes the ISP-friendliness problem into account and can reduce cross-network traffic among Internet Service Providers (ISPs). Specifically, we introduce the peers' contribution into the proposed model, which makes it more suitable for P2P live streaming systems. Moreover, we deploy servers according to the theoretical solution, fitted to practical data, and apply them to real live streaming applications. The experimental results show that this new model reduces the amount of cross-network traffic, improves system efficiency, adapts better to the Internet environment, and is more suitable for P2P systems than traditional placement models.
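The general idea can be sketched as a facility-location problem: demand that is not served inside its own ISP crosses ISP boundaries, so servers should go where they absorb the most residual demand after peers' own contribution is discounted. The greedy sketch below is only an illustration of that idea under invented demand figures; the paper's actual model and solution method are not given here.

```python
# Hypothetical per-ISP streaming demand and peer upload contribution (Mb/s).
demand = {"isp1": 120.0, "isp2": 80.0, "isp3": 40.0}
peer_contribution = {"isp1": 30.0, "isp2": 50.0, "isp3": 10.0}
k = 2  # number of servers to place

# Demand left over after peers serve what they can within their own ISP.
residual = {i: max(d - peer_contribution[i], 0.0) for i, d in demand.items()}

placement: list[str] = []
for _ in range(k):
    # Place the next server in the ISP with the most unserved residual demand,
    # since that choice removes the most cross-ISP traffic.
    best = max((i for i in residual if i not in placement),
               key=lambda i: residual[i], default=None)
    if best is None:
        break
    placement.append(best)

cross_traffic = sum(residual[i] for i in residual if i not in placement)
print("servers in:", placement, "| remaining cross-ISP traffic:", cross_traffic)
```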
257

Improving the Security, Privacy, and Anonymity of a Client-Server Network through the Application of a Moving Target Defense

Morrell, Christopher Frank 03 May 2016 (has links)
The amount of data that is shared on the Internet is growing at an alarming rate. Current estimates state that approximately 2.5 exabytes of data were generated every day in 2012, and this rate only grows as people continue to increase their online presence. As the amount of data grows, so too does the number of people attempting to gain access to it. Attackers try many methods to gain access to information, including a number of attacks that occur at the network layer. A network-based moving target defense is a technique that obfuscates the location of a machine on the Internet by arbitrarily changing its IP address periodically. MT6D is one such technique; it leverages the size of the IPv6 address space to make it statistically impossible for an attacker to find a specific target machine. MT6D was designed with a number of limitations, including manually generated static configurations and support for only peer-to-peer networks. This work presents extensions to MT6D that provide dynamically generated configurations, a secure and dynamic means of exchanging configurations, and, with these new features, the ability to function as a server supporting a large number of clients. This work makes three primary contributions to the field of network-based moving target defense systems. First, it provides a means to exchange arbitrary information in a way that provides network anonymity, authentication, and security. Second, it demonstrates a technique that gives MT6D the capability to exchange configuration information by sharing only public keys. Finally, it introduces a session establishment protocol that clients can use to establish concurrent connections with an MT6D server. / Ph. D.
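The core rotation mechanism can be sketched briefly. In the published MT6D design, each ephemeral IPv6 address is derived by hashing the host's stable identifier, a shared secret, and the current time slot, so both endpoints compute the same address without any coordination traffic. The snippet below is a simplified illustration of that scheme; the prefix, keys, and rotation interval are invented, and it does not reproduce the extended protocol this dissertation builds.

```python
import hashlib
import ipaddress
import time

PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")  # example prefix
ROTATION_SECONDS = 10                                       # assumed interval

def current_address(shared_key: bytes, host_id: bytes, now: float) -> ipaddress.IPv6Address:
    """Derive the ephemeral address for the current time slot."""
    slot = int(now // ROTATION_SECONDS)
    digest = hashlib.sha256(shared_key + host_id + slot.to_bytes(8, "big")).digest()
    iid = int.from_bytes(digest[:8], "big")   # low 64 bits become the interface ID
    return ipaddress.IPv6Address(int(PREFIX.network_address) | iid)

key, host = b"pre-shared-key", b"host-eui64"   # placeholder values
print(current_address(key, host, time.time()))
# One interval later the same inputs yield a different, equally valid address.
print(current_address(key, host, time.time() + ROTATION_SECONDS))
```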
258

Empirical Analysis of Algorithms for the k-Server and Online Bipartite Matching Problems

Mahajan, Rutvij Sanjay 14 August 2018 (has links)
The k-server problem is of significant importance to the theoretical computer science and operations research communities. In this problem, we are given k servers, their initial locations, and a sequence of n requests that arrive one at a time. All these locations are points from some metric space, and the cost of serving a request is given by the distance between the location of the request and the current location of the server selected to process it. We must immediately process each request by moving a server to the request location. The objective is to minimize the total distance traveled by the servers to process all the requests. In this thesis, we present an empirical analysis of a new online algorithm for the k-server problem. This algorithm maintains two solutions: an online solution and an approximately optimal offline solution. When a request arrives, we update the offline solution and use this update to inform the online assignment. The algorithm is motivated by the Robust-Matching algorithm [RM-Algorithm, Raghvendra, APPROX 2016] for the closely related online bipartite matching problem. We then give a comprehensive experimental analysis of this algorithm and also provide a graphical user interface which can be used to visualize execution instances of the algorithm. We also consider these problems in a stochastic setting and implement a lookahead strategy on top of the new online algorithm. / MS / Motivated by real-time logistics, we study the online versions of the well-known bipartite matching and k-server problems. In these problems, servers (delivery vehicles) are located in different parts of the city. When a request for delivery is made, we have to immediately assign a delivery vehicle to it without any knowledge of the future. Making cost-effective assignments therefore becomes incredibly challenging. In this thesis, we implement and empirically evaluate a new algorithm for the k-server and online matching problems.
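For contrast with the algorithm studied in the thesis, the natural baseline is the greedy rule that serves each request with its nearest server. The sketch below implements that baseline on points of a line (any metric works the same way); the thesis's algorithm instead consults an approximately optimal offline solution before each assignment, precisely to avoid greedy's failure modes.

```python
def greedy_k_server(servers: list[float], requests: list[float]) -> float:
    """Serve each request with the nearest server; return total distance moved."""
    total = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)
        servers[i] = r   # the chosen server moves to the request location
    return total

# Greedy has unbounded competitive ratio: two request points that alternate
# near one server force it to ping-pong while the other server never moves.
print(greedy_k_server([0.0, 10.0], [4.0, 6.0, 4.0, 6.0, 4.0]))
```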
259

Datenschutzkonformes Nutzertracking auf Webseiten

Kiehm, Lisa Katharina 25 June 2024 (has links)
From the early days of logfile analysis to today's use of sophisticated tracking systems such as Google Analytics, user tracking on the web has evolved continuously. While it supplies website operators and advertising providers with valuable information, it also raises questions of privacy and data protection. The collection of personal data and its subsequent use is a source of concern for many people. Legislators have responded with ever stricter data protection laws that restrict the aggregation, processing and storage of personal data in web analytics. Many companies therefore face the challenge of rethinking their tracking infrastructure and bringing it into line with the rules. With the impending phase-out of so-called third-party cookies at the latest, website operators are forced to act. This thesis analyses tracking technologies and strategies with regard to their future viability, in order to find a compromise between the interests of legislators and those of the providers and users of tracking tools. / Contents: List of Abbreviations / 1. Introduction / 2. Data protection law framework: 2.1 History of data protection law; 2.2 The GDPR: effects and principles; 2.3 Legal classification of tracking technologies / 3. Fundamentals of web tracking: 3.1 Cookies (3.1.1 How they work; 3.1.2 Classification by lifetime; 3.1.3 Classification by source; 3.1.4 Classification by type of use; 3.1.5 Third-party cookies under criticism); 3.2 Tracking pixels; 3.3 Device fingerprinting; 3.4 Data quality in crisis / 4. Tracking strategies in practice: 4.1 CNAME cloaking (4.1.1 Implementation; 4.1.2 Risks; 4.1.3 Data protection assessment); 4.2 Server-side tracking (4.2.1 Tagging with the Google Tag Manager; 4.2.2 Risks; 4.2.3 Data protection assessment); 4.3 Shynet (4.3.1 Implementation and source code analysis; 4.3.2 Risks; 4.3.3 Data protection assessment) / 5. Status quo and outlook: 5.1 Google Consent Mode v2; 5.2 Browser vendors; 5.3 Cookie Pledge; 5.4 ePrivacy Regulation / 6. Conclusion / Bibliography / List of Figures / Declaration of Authorship
260

SERVING INTERACTIVE WEB PAGES TO TechSat21 CUSTOMERS

Self, Lance 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / TechSat21 is an innovative satellite program sponsored by the Air Force Research Laboratory Space Vehicles Directorate and the Air Force Office of Scientific Research. Its mission is to control a cluster of satellites that, when combined, create a “virtual satellite” with which to conduct various experiments in sparse aperture sensing and formation flying. Because TechSat21 customers need to view very large data sets that vary from the payload to the satellite state of health, a modern viewing method using Java Server Pages and Active Server Pages is being developed to meet these interactive, dynamic demands.
