About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Extensible Resource Management for Networked Virtual Computing

Grit, Laura E. January 2007 (has links)
Thesis (Ph. D.)--Duke University, 2007.
22

Prozessorientiertes Management von Client-Server-Systemen [Process-oriented management of client-server systems]

Kirsch, Jürgen. January 1999 (has links)
Also: Saarbrücken, Universität, dissertation, 1998, under the title: Kirsch, Jürgen: Prozessorientiertes Informationssystem-Management.
23

A third generation object-oriented process model: roles and architectures in focus

Kivistö, K. (Kari) 21 November 2000 (has links)
Abstract: This thesis examines and evaluates the Object-Oriented Client/Server (OOCS) model, a process model that can be used when IT organizations develop object-oriented client/server applications. In particular, it defines the roles in the development team and combines them into the process model. Furthermore, the model focuses on the client/server architecture, considering it explicitly. The model has been under construction for several years and it has been tested in a number of industrial projects. Feedback from practice has thus been an important source as the model has evolved into its current form. Other process models and technical progress in this field have been a further source of evolution. This thesis reveals the theoretical and practical aspects that have influenced the model's characteristics and development. The object-oriented paradigm has been the driving force when creating the OOCS model. The first object-oriented development models were, however, both inadequate and mutually contradictory. The OOCS model utilizes the best practices from these early models. The model also defines artifacts to be delivered in each phase. The artifacts are synchronized with the Unified Modeling Language (UML), a new standard modeling notation. From the very beginning the OOCS model has included a strong client/server viewpoint, which is not stated so clearly in other object-oriented models. A three-tier division of the application (presentation, business logic, data management) can be found in each phase. This division has become crucial in recent years, as applications have increasingly been built on distributed architectures. The team-based roles included in the model are based on the work of a few other researchers, although this topic has not gained the importance it deserves: it is people who develop the application, and their involvement in the process should be stated explicitly. The roles of the developers are closely connected to the OOCS process model via the concept of activities included in the model. The roles concentrate mainly on project members, but company-level aspects have also been considered. This thesis summarizes the work carried out in the last five years. It shows how the model has evolved in practice and how other models have contributed to it. The team-based OOCS model is in use in some IT organizations. The cases presented in this thesis illustrate how to adapt the model to specific organizational needs.
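The three-tier split the abstract refers to (presentation, business logic, data management) can be illustrated with a minimal sketch; the classes and names below are hypothetical and are not taken from the OOCS model itself.

```java
// Minimal sketch of a three-tier split: presentation, business logic,
// and data management as separate layers. All names are illustrative.

import java.util.ArrayList;
import java.util.List;

// Data-management tier: hides how orders are stored.
interface OrderRepository {
    List<String> findOrdersForCustomer(String customerId);
}

// Business-logic tier: application rules, independent of UI and storage.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    List<String> openOrders(String customerId) {
        // Illustrative business rule: only non-empty order ids are returned.
        List<String> result = new ArrayList<>();
        for (String order : repository.findOrdersForCustomer(customerId)) {
            if (!order.isEmpty()) {
                result.add(order);
            }
        }
        return result;
    }
}

// Presentation tier: a client (GUI, web page, console) that talks only
// to the business-logic tier, never directly to the data store.
public class ConsoleClient {
    public static void main(String[] args) {
        OrderRepository inMemory = customerId -> List.of("A-100", "A-101");
        OrderService service = new OrderService(inMemory);
        System.out.println(service.openOrders("c42"));
    }
}
```

Because the presentation tier talks only to the business-logic tier, either the user interface or the data store can be replaced without touching the other tiers, which is what makes the client/server boundary explicit in each phase.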
24

Secure Network-Centric Application Access

Varma, Nitesh 23 December 1998 (has links)
In the coming millennium, the establishment of virtual enterprises will become increasingly common. In the engineering sector, global competition will require corporations to create agile partnerships to use each other's engineering resources in mutually profitable ways. The Internet offers a medium for accessing such resources in a globally networked environment. However, remote access to resources requires a secure and mutually trustable environment, which is lacking in the basic infrastructure on which the Internet is based. Fortunately, efforts are under way to provide the required security services on the Internet. This thesis presents a model for making distributed engineering software tools accessible via the Internet. The model consists of an extensible client-server system interfaced with the engineering software tool on the server side. The system features robust security support based on public-key and symmetric cryptography. The system has been demonstrated by providing Web-based access to a .STL file repair program through a Java-enabled Web browser. / Master of Science
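The abstract's combination of public-key and symmetric cryptography is commonly realized as hybrid encryption: a symmetric session key protects the bulk data and is itself wrapped with the server's public key. The sketch below shows this generic pattern with the standard Java cryptography APIs; it is an assumption-laden illustration, not the protocol implemented in the thesis.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;

// Generic hybrid-encryption sketch: an AES session key protects the payload,
// and the session key is wrapped with the server's public RSA key.
// Illustrative only; not the protocol from the thesis.
public class HybridCryptoSketch {
    public static void main(String[] args) throws Exception {
        // Server-side long-term key pair (the public key would be given to clients).
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair serverKeys = rsaGen.generateKeyPair();

        // Client side: fresh AES session key for this connection.
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(128);
        SecretKey sessionKey = aesGen.generateKey();

        // Client wraps the session key with the server's public key.
        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        wrap.init(Cipher.WRAP_MODE, serverKeys.getPublic());
        byte[] wrappedKey = wrap.wrap(sessionKey);

        // Client encrypts the payload with AES-GCM under the session key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, sessionKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = aes.doFinal("repair model.stl".getBytes(StandardCharsets.UTF_8));

        // Server unwraps the session key with its private key and decrypts.
        Cipher unwrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        unwrap.init(Cipher.UNWRAP_MODE, serverKeys.getPrivate());
        SecretKey recovered = (SecretKey) unwrap.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, recovered, new GCMParameterSpec(128, iv));
        System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```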
25

Improved analysis of flow time scheduling

Liu, Kin-shing., 廖建誠. January 2005 (has links)
Computer Science / Master of Philosophy
26

Deriving mathematical significance in palaeontological data from large-scale database technologies

Hewzulla, Dilshat January 2000 (has links)
No description available.
27

A Software Architecture for Client-Server Telemetry Data Analysis

Brockett, Douglas M., Aramaki, Nancy J. October 1994 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / An increasing need among telemetry data analysts for new mechanisms for efficient access to high-speed data in distributed environments has led BBN to develop a new architecture for data analysis. The data sets of concern can be from either real-time or post-test sources. This architecture consists of an expandable suite of tools based upon a data distribution software "backbone" which allows the interchange of high volume data streams among server processes and client workstations. One benefit of this architecture is that it allows one to assemble software systems from a set of off-the-shelf, interoperable software modules. This modularity and interoperability allows these systems to be configurable and customizable, while requiring little applications programming by the system integrator.
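As a rough illustration of the data-distribution "backbone" described above, the sketch below fans one stream of telemetry samples out to subscribed consumers inside a single process. It is a hypothetical, simplified stand-in: the BBN architecture distributes high-volume streams among separate server processes and client workstations.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of a data-distribution backbone: one producer fans out
// telemetry samples to any number of subscribed consumers. Illustrative only.
public class TelemetryBackboneSketch {

    static final class Distributor {
        private final List<BlockingQueue<double[]>> subscribers = new CopyOnWriteArrayList<>();

        BlockingQueue<double[]> subscribe() {
            BlockingQueue<double[]> queue = new LinkedBlockingQueue<>();
            subscribers.add(queue);
            return queue;
        }

        void publish(double[] sample) {
            for (BlockingQueue<double[]> queue : subscribers) {
                queue.offer(sample); // non-blocking fan-out to every subscriber
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Distributor backbone = new Distributor();
        BlockingQueue<double[]> analysisTool = backbone.subscribe();

        // A "server process" publishing samples as (time, value) pairs.
        Thread source = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                backbone.publish(new double[] { i * 0.1, Math.sin(i * 0.1) });
            }
        });
        source.start();
        source.join();

        // A "client workstation" consuming the stream.
        for (int i = 0; i < 5; i++) {
            double[] sample = analysisTool.take();
            System.out.printf("t=%.1f value=%.3f%n", sample[0], sample[1]);
        }
    }
}
```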
28

Reducing Third Parties in the Network through Client-Side Intelligence

Kontaxis, Georgios January 2018 (has links)
The end-to-end argument describes the communication between a client and server using functionality that is located at the end points of a distributed system. From a security and privacy perspective, clients only need to trust the server they are trying to reach instead of intermediate system nodes and other third-party entities. Clients accessing the Internet today, and more specifically the World Wide Web, have to interact with a plethora of network entities for name resolution, traffic routing and content delivery. While individual communications with those entities may sometimes be end-to-end, from the user's perspective they are intermediaries the user has to trust in order to access the website behind a domain name. This complex interaction lacks transparency and control and expands the attack surface beyond the server clients are trying to reach directly. In this dissertation, we develop a set of novel design principles and architectures to reduce the number of third-party services and networks a client's traffic is exposed to when browsing the web. Our proposals bring additional intelligence to the client and can be adopted without changes to the third parties.

Websites can include content, such as images and iframes, located on third-party servers. Browsers loading an HTML page will contact these additional servers to satisfy external content dependencies. Such interaction has privacy implications because it includes context related to the user's browsing history. For example, the widespread adoption of "social plugins" enables the respective social networking services to track a growing part of their members' online activity. These plugins are commonly implemented as HTML iframes originating from the domain of the respective social network. They are embedded in sites users might visit, for instance to read the news or shop online. Facebook's Like button is an example of a social plugin. While one could prevent the browser from connecting to third-party servers altogether, doing so would break existing functionality and would thus be unlikely to be widely adopted. We propose a novel design for privacy-preserving social plugins that decouples the retrieval of user-specific content from the loading of third-party content. Our approach can be adopted by web browsers without the need for server-side changes. Our design has the benefit of avoiding the transmission of user-identifying information to the third-party server while preserving the original functionality of the plugins.

In addition, we propose an architecture which reduces the networks involved when routing traffic to a website. Users then have to trust fewer organizations with their traffic. Such trust is necessary today because, for example, we observe that only 30% of popular web servers offer HTTPS. At the same time there is evidence that network adversaries carry out active and passive attacks against users. We argue that if end-to-end security with a server is not available, the next best thing is a secure link to a network that is close to the server and will act as a gateway. Our approach identifies network vantage points in the cloud, enables a client to establish secure tunnels to them, and intelligently routes traffic based on its destination. The proliferation of infrastructure-as-a-service platforms makes it practical for users to benefit from the cloud. We determine that our architecture is practical because our proposed use of the cloud aligns with existing ways end-user devices leverage it today. Users control both endpoints of the tunnel and do not depend on the cooperation of individual websites. We are thus able to eliminate third-party networks for 20% of popular web servers, reduce network paths to one hop for an additional 20%, and shorten the rest.

We hypothesize that user privacy on the web can be improved in terms of transparency and control by reducing the systems and services that are indirectly and automatically involved. We also hypothesize that such reduction can be achieved unilaterally through client-side initiatives and without affecting the operation of individual websites.
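A minimal sketch of the destination-based routing idea follows: given a destination host, the client either picks a cloud vantage point to tunnel through or connects directly. The policy table and host names are hypothetical; the dissertation's system derives its routing decisions from measurements of the network, not from a hard-coded map.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of destination-based routing through cloud vantage points.
// The policy and names are hypothetical placeholders.
public class VantagePointRouter {

    // Hypothetical policy: destination domain -> cloud vantage point acting as gateway.
    private static final Map<String, String> POLICY = Map.of(
            "example.org", "vantage-us-east.example-cloud.net",
            "example.co.uk", "vantage-eu-west.example-cloud.net"
    );

    // Returns the vantage point to tunnel through, or empty for a direct connection
    // (for instance when the site already offers end-to-end HTTPS).
    static Optional<String> gatewayFor(String destinationHost) {
        for (Map.Entry<String, String> rule : POLICY.entrySet()) {
            if (destinationHost.endsWith(rule.getKey())) {
                return Optional.of(rule.getValue());
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(gatewayFor("www.example.org"));    // tunnel via us-east
        System.out.println(gatewayFor("news.example.co.uk")); // tunnel via eu-west
        System.out.println(gatewayFor("secure.example.net")); // direct (no rule)
    }
}
```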
29

A mobile object container for dynamic component composition

Yung, Chor-ho. January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 111-113).
30

A Java based client server database web application

Hefner, Wayne. January 2000 (has links)
Thesis (M.S.)--Kutztown University of Pennsylvania, 2000. / Source: Masters Abstracts International, Volume: 45-06, page: 3187. Typescript. Abstract precedes thesis as preliminary leaf. Includes bibliographical references (leaves 75-76).
