  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Proposta para aumento da escalabilidade do sistema WSE-OS por meio do escalonamento de conexões e gerenciamento da replicação de dados dos servidores / A proposal to increase the scalability of the WSE-OS system through connection scheduling and management of server data replication

Lima, Leonardo José de. January 2013 (has links)
Advisor: Roberta Spolon / Committee: José Remo Ferreira Brega / Committee: Sarita Mazzini Bruschi / Abstract: Due to the gradual decrease in the cost of acquiring new computers, more and more computing devices are entering the market. This large number of new devices creates heterogeneity among them, which complicates the administration of computing environments, since systems must remain compatible with quite different devices simultaneously. The WSE-OS system proposes a data and resource centralization solution that addresses the heterogeneity problem effectively. Using wireless networking technology, the WSE-OS tool employs a thin-client structure that allows its clients to run instances of virtualized operating systems stored on the server. This work presents a proposal that changes the WSE-OS structure to include the ability to operate with multiple servers, with the goal of increasing the tool's scalability, availability and reliability through server data replication and connection scheduling. Data replication consists of detecting changes to the data held on a given server and transmitting them to the other servers while prioritizing consistency among the replicas. Connection scheduling actively distributes clients among the servers to improve the tool's performance / Master's
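The connection-scheduling idea in the abstract above can be pictured with a small sketch: a dispatcher that places each new client on the server currently holding the fewest clients. This is only a generic illustration of least-loaded scheduling, not code from the WSE-OS thesis; the Server class and assign_client function are hypothetical names.

# Minimal sketch of least-loaded connection scheduling (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    clients: set = field(default_factory=set)

def assign_client(servers, client_id):
    """Place a new client on the server that currently has the fewest clients."""
    target = min(servers, key=lambda s: len(s.clients))
    target.clients.add(client_id)
    return target

servers = [Server("srv-a"), Server("srv-b"), Server("srv-c")]
for cid in range(7):
    chosen = assign_client(servers, cid)
    print(f"client {cid} -> {chosen.name}")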
42

Sins : um editor Xchart na forma de plugin para o ambiente eclipse / Sins : an Xchart editor as a plugin for the eclipse environment

Kollross, Diogo 10 October 2007 (has links)
Advisor: Hans Kurt Edmund Liesenberg / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Reactive systems are of great importance in many areas of engineering and computing, but the quality and maturity of the methodologies and tools that support their development lag behind those aimed at transformational systems. One of the outstanding methodologies is Model-Oriented Architecture, in which reactive systems are described by models that can be translated directly into executable form. The most successful language for modeling reactive systems is Statechart, which gave rise to variations such as the state machine diagrams of the UML standard and the Xchart language. Xchart is an extension of Statechart that introduces constructs for external process control, activation history and event hierarchization. To overcome the limitations of Smart, the existing tool for editing Xchart diagrams, the Sins editor (Sins Is Not Smart) was developed and implemented as a plugin for the Eclipse IDE. With the Sins editor it is possible to edit diagrams through direct manipulation, lay out the specification automatically and generate the corresponding source code in the textual language TEXchart. The layout algorithm implemented is a variation of the Sugiyama algorithm, modified to improve the legibility of the diagram by ensuring consistency in the presentation of its structures and producing layouts similar to those drawn by hand / Master's / Computing Systems / Master of Computer Science
43

A technology reference model for client/server software development

Nienaber, R. C. (Rita Charlotte) 06 1900 (has links)
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computer mainstay of corporations. The client/server model incorporates mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. / Computing / M. Sc. (Information Systems)
44

A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)

Conradie, Pieter Wynand 06 1900 (has links)
The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology and programming languages, as well as the requirement to build better software application systems in less time. The importance of worldwide communication between systems is also growing exponentially. People are using network-based applications daily, communicating not only locally, but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to address the need to develop client/server application systems that communicate over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented. This element is the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation. / Computing / M. Sc. (Computer Science)
45

Eidolon: adapting distributed applications to their environment.

Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed computing applications can be run. Due to the diversity of features provided by these environments, a distributed application that is to perform well must be specifically designed and optimised for the environment in which it is deployed. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently and with improved functionality over wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications. This is used to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. It provides the necessary mechanisms and interconnects to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols. We explore the use of view model properties on several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects and potential shortcomings of the view model by comparing our model with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.
46

Multiple strategy process migration.

De Paoli, Damien, mikewood@deakin.edu.au January 1996 (has links)
The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, could be used to support load balancing, parallel execution, reliability, etc. This thesis identifies the problems past process migration facilities have had and determines the possible differing strategies that can be used to resolve these problems. The result of this analysis has led to a new design philosophy. This philosophy requires the design of a process migration facility and the design of an operating system to be conducted in parallel. Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high-performance facility that is easy to create, debug and maintain. Furthermore, the design easily incorporates providing multiple migration strategies. In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System). RHODOS is a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state - maintained by a process manager, address space - maintained by a memory manager and communication state - maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple strategy migration manager utilises the services of the process, memory and IPC Managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or better than existing systems which use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified. This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use. Instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
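The delegation idea described above, where the migration facility never moves resources itself but asks each resource's owning server to transfer its part, can be sketched roughly as follows. This is not RHODOS code; all class and method names here are hypothetical.

# Illustrative sketch of migration by delegation to per-resource managers.
class ResourceManager:
    def __init__(self, name):
        self.name = name

    def transfer(self, pid, destination):
        # A real manager would serialize its part of the process state
        # (process table entry, address-space pages, message queues) and
        # ship it to its peer manager on the destination host.
        print(f"{self.name}: transferring resources of pid {pid} to {destination}")
        return True

class MigrationManager:
    def __init__(self, managers):
        self.managers = managers  # e.g. process, memory and IPC managers

    def migrate(self, pid, destination):
        # Delegate the transfer of each resource to the server that owns it.
        return all(m.transfer(pid, destination) for m in self.managers)

mm = MigrationManager([ResourceManager("process manager"),
                       ResourceManager("memory manager"),
                       ResourceManager("IPC manager")])
mm.migrate(pid=42, destination="lightly-loaded-host")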
47

Graph and geometric algorithms on distributed networks and databases

Nanongkai, Danupon 16 May 2011 (has links)
In this thesis, we study the power and limits of algorithms on various models, aiming at applications in distributed networks and databases. In distributed networks, graph algorithms are fundamental to many applications. We focus on computing random walks, an important primitive employed in a wide range of applications but one that has always been computed naively. We show that a faster solution exists and subsequently develop faster algorithms by exploiting random walk properties, leading to two immediate applications. We also show that this algorithm is optimal. Our technique for proving a lower bound shows the first non-trivial connection between communication complexity and lower bounds of distributed graph algorithms. We show that this technique has a wide range of applications by proving new lower bounds for many problems. Some of these lower bounds show that the existing algorithms are tight. In database searching, we think of the database as a large set of multi-dimensional points stored on disk and want to help users quickly find the most desired point. In this thesis, we develop an algorithm that is significantly faster than previous algorithms both theoretically and experimentally. The insight is to solve the problem in the streaming model, which helps emphasize the benefits of sequential access over random disk access. We also introduced the randomization technique to the area. These results were complemented with a lower bound. We also initiate a new direction in an attempt to obtain better query answers. We are the first to quantify the output quality using "user satisfaction", which is made possible by borrowing the idea of modeling users by utility functions from game theory, and we justify our approach through a geometric analysis.
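The random walk primitive discussed in the abstract above can be shown with a short sketch. The thesis's contribution is a faster distributed computation of such walks; the sequential version below merely illustrates what is being computed, and the adjacency-list graph is an invented example.

# Illustrative sketch of a naive single-source random walk on a graph.
import random

def random_walk(adj, start, length, seed=0):
    """Return the node reached after `length` uniform random steps from `start`."""
    rng = random.Random(seed)
    node = start
    for _ in range(length):
        node = rng.choice(adj[node])  # move to a uniformly random neighbour
    return node

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(random_walk(graph, start=0, length=10))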
