31

Annual Report 1999/2000, Communication and Data Processing Division, FZR-324

Fülle, Ruprecht 31 March 2010 (has links) (PDF)
Report on the services and the further development of the FZR's IT infrastructure during 1999 and 2000 in the areas of central servers, the data network, and user support.
32

A transputer based scalable data acquisition system

Ward, Michael Patrick January 1995 (has links)
No description available.
33

Performance analysis of a controlled database unit with single queue configuration subject to control delays with decision errors

Kussard, Michael. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Department of Electrical and Computer Engineering, 2006. / Includes bibliographical references (leaves 126-128).
34

Architecting energy efficient servers

Kgil, Tae Ho. January 1900 (has links)
Thesis (Ph.D.)--University of Michigan, 2007. / Adviser: Trevor N. Mudge. Includes bibliographical references.
35

Comparative implementation of a distributed application using CORBA/IIOP, RMI, and JSP

Tandjung, Kristian. January 2001 (has links)
Thesis (Diplomarbeit)--University of Stuttgart, 2001.
36

A framework for data decay in client-server model

Taber, Matthew. January 2009 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2009. / Typescript. Includes bibliographical references (leaves 57-59).
37

A comprehensive approach to the web presentation of spatially oriented data

Vildomec, Jan January 2008 (has links)
No description available.
38

Annual Report 1999/2000, Communication and Data Processing Division, FZR-324

Fülle, Ruprecht January 2001 (has links)
Report on the services and the further development of the FZR's IT infrastructure during 1999 and 2000 in the areas of central servers, the data network, and user support.
39

Improving Network Performance and Document Dissemination by Enhancing Cache Consistency on the Web Using Proxy and Server Negotiation

Doswell, Felicia 06 September 2005 (has links)
Use of proxy caches in the World Wide Web is beneficial to the end user, the network administrator, and the server administrator, since it reduces the amount of redundant traffic that circulates through the network. In addition, end users get quicker access to documents that are cached. However, the use of proxies introduces additional issues that need to be addressed. In particular, there is growing concern over how to maintain cache consistency and coherency among cached versions of documents. The existing consistency protocols used on the Web are proving insufficient to meet the growing needs of the Internet population; for example, too many messages sent over the network are due to caches guessing when their copy is inconsistent.

One option is to apply the cache coherence strategies already in use in many other distributed systems, such as parallel computers. However, these methods are not satisfactory for the World Wide Web due to its larger size and more diverse access patterns. Many decisions must be made when exploring Web coherency, such as whether to provide consistency at the proxy level (client pull) or to let the server handle it (server push), and what trade-offs are inherent in each of these choices. The suitability of any method depends strongly on the conditions of the network (e.g., the document types that are frequently requested or the network load) and on the resources available (e.g., disk space and the type of cache available). Version 1.1 of HTTP is the first protocol version to give explicit rules for consistency on the Web. Many proposed algorithms require changes to HTTP/1.1, but such changes are not necessary to provide a suitable solution.

One goal of this dissertation is to study the characteristics of document retrieval and modification to determine their effect on proposed consistency mechanisms; a set of effective consistency policies is identified from this investigation. The main objective is to use these findings to design and implement a consistency algorithm that performs better than the mechanisms currently proposed in the literature. Ideally, we want an algorithm that provides strong consistency, but without further degrading the network or placing undue burden on the server. We propose a system based on the notion of soft state and on server push, in which the proxy would have some influence on what state information is maintained at the server (a spatial consideration) as well as how long that information is maintained (a temporal consideration).

We perform a benchmark study of the performance of the new algorithm in comparison with existing proposed algorithms. Our results show that the Synchronous Nodes for Consistency (SINC) framework provides an average of 20% savings in control messages by limiting how much polling occurs with the current Web cache consistency mechanism, Adaptive Client Polling. In addition, the algorithm shows 30% savings in state-space overhead at the server by limiting the amount of per-proxy and per-document state information required at the server. / Ph. D.
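The record does not reproduce the SINC algorithm itself, so the following Python sketch only illustrates the general soft-state, server-push idea the abstract describes: the server keeps one lease per proxy per cached document, pushes invalidations while a lease is live, and lets unrenewed state expire so that neither server storage nor control messaging grows without bound. All names (SoftStateLeaseTable, register, on_modify) and the 60-second TTL are hypothetical, not taken from the dissertation.

```python
import time
from collections import defaultdict


class SoftStateLeaseTable:
    """Hypothetical server-side registry of proxy interest in documents.

    Each entry is soft state: it expires after `ttl` seconds unless the
    proxy renews it, bounding the per-proxy, per-document state the server
    must keep (the spatial and temporal considerations the abstract mentions).
    """

    def __init__(self, ttl=300):
        self.ttl = ttl
        # document URL -> {proxy_id: lease expiry time}
        self._leases = defaultdict(dict)

    def register(self, proxy_id, url, now=None):
        """Proxy asks to be notified when `url` changes (or renews its lease)."""
        now = time.time() if now is None else now
        self._leases[url][proxy_id] = now + self.ttl

    def subscribers(self, url, now=None):
        """Return proxies with unexpired leases; drop expired ones."""
        now = time.time() if now is None else now
        live = {p: exp for p, exp in self._leases[url].items() if exp > now}
        self._leases[url] = live
        return list(live)

    def on_modify(self, url, notify, now=None):
        """Push an invalidation to every proxy still holding a lease.

        Proxies whose lease has lapsed are skipped; they simply fall back to
        validating with the origin server on their next request (client pull),
        so consistency degrades gracefully instead of requiring unbounded
        server-side state.
        """
        for proxy_id in self.subscribers(url, now=now):
            notify(proxy_id, url)


# Usage sketch: two proxies cache the same page, but only the one whose
# lease is still live receives the push when the page changes.
table = SoftStateLeaseTable(ttl=60)
table.register("proxy-A", "/index.html", now=0)
table.register("proxy-B", "/index.html", now=0)
table.register("proxy-A", "/index.html", now=50)   # proxy-A renews, proxy-B does not
table.on_modify("/index.html",
                notify=lambda p, u: print(f"invalidate {u} at {p}"),
                now=90)                             # only proxy-A is pushed to
```

The lease expiry is what makes this "soft" state: forgotten proxies cost the server nothing, and they recover consistency through ordinary client pull, which is one plausible reading of the storage and messaging savings the abstract reports.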
40

The CloudBrowser Web Application Framework

McDaniel, Brian Newsom 06 June 2012 (has links)
While more and more applications are moving from the desktop to the web, users still expect their applications to behave like they did on the desktop. Specifically, users expect that user interface state is preserved across visits, and that the state of the interface truly reflects the state of the underlying data. Unfortunately, achieving this ideal is difficult for web application developers due to the distributed nature of the web. Modern AJAX applications rely on asynchronous network programming to synchronize the client-side user interface with server-side data. Furthermore, since the HTTP protocol is stateless, preserving interface state across visits requires a significant amount of manual work on behalf of the developer.

CloudBrowser is a web application framework that supports the development of rich Internet applications whose entire user interface and application logic resides on the server, while all client/server communication is provided by the framework. CloudBrowser is ideal for single-page web applications, which are the current trend in web development. CloudBrowser thus hides the distributed nature of these applications from the developer, creating an environment similar to that provided by a desktop user interface library. CloudBrowser preserves the user interface state in a server-side virtual browser that is maintained across visits. Furthermore, multiple clients can connect to a single server-side interface instance, providing built-in co-browsing support. Unlike other server-centric frameworks, CloudBrowser's exclusive use of the HTML document model and associated JavaScript execution environment allows it to exploit existing client-side user interface libraries and toolkits while transparently providing access to other application tiers. We have implemented a prototype of CloudBrowser as well as several example applications to demonstrate the benefits of its server-centric design. / Master of Science
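CloudBrowser's actual implementation (a server-side HTML document model with a JavaScript execution environment) is not included in this record; the sketch below is a hypothetical, framework-agnostic Python illustration of the server-centric pattern the abstract describes, in which the authoritative interface state lives in a long-lived server-side instance that several clients can attach to. The class and method names are invented for this example.

```python
class VirtualInterfaceInstance:
    """Hypothetical server-side stand-in for one application instance.

    The authoritative UI state lives here rather than in any browser, which
    is what lets it survive across visits and be shared by several clients
    at once (the co-browsing case described in the abstract).
    """

    def __init__(self):
        self.state = {"counter": 0}   # toy UI state
        self.clients = []             # callbacks for connected clients

    def attach(self, send_to_client):
        """A client connects and immediately receives the current state."""
        self.clients.append(send_to_client)
        send_to_client(dict(self.state))

    def handle_event(self, event):
        """Apply a client event on the server, then fan the update out.

        Every attached client sees the same change, so each rendered
        interface always reflects the server-side state.
        """
        if event == "increment":
            self.state["counter"] += 1
        for send in self.clients:
            send(dict(self.state))


class InstanceRegistry:
    """Maps an application URL to its long-lived server-side instance."""

    def __init__(self):
        self._instances = {}

    def get(self, url):
        return self._instances.setdefault(url, VirtualInterfaceInstance())


# Usage sketch: two "browsers" attach to the same instance and both observe
# the event sent by the first one; revisiting the URL later would return the
# same instance, so the interface state persists across visits.
registry = InstanceRegistry()
app = registry.get("/todo-app")
app.attach(lambda s: print("client 1 renders", s))
app.attach(lambda s: print("client 2 renders", s))
app.handle_event("increment")   # both clients now render counter == 1
```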
