41

Sins : um editor Xchart na forma de plugin para o ambiente eclipse / Sins : an Xchart editor as a plugin for the eclipse environment

Kollross, Diogo 10 October 2007 (has links)
Advisor: Hans Kurt Edmund Liesenberg / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Reactive systems are of great importance in many areas of engineering and computing, but the quality and maturity of the methodologies and tools supporting their development lag behind those aimed at transformational systems. One prominent methodology is Model-Driven Architecture, in which reactive systems are described by models that can be translated directly into executable form. The most successful language for modelling reactive systems is Statechart, which gave rise to variations such as the state machine diagrams of the UML standard and the Xchart language. Xchart extends Statechart with constructs for controlling external processes, activation history, and event hierarchies. To overcome the limitations of Smart, the existing tool for editing Xchart diagrams, the Sins editor (Sins Is Not Smart) was developed as a plugin for the Eclipse IDE. With Sins it is possible to edit diagrams through direct manipulation, lay out the specification automatically, and generate the corresponding source code in the textual language TEXchart. The layout algorithm implemented is a variation of the Sugiyama algorithm, modified to improve the legibility of the diagram by ensuring consistent presentation of its structures and producing layouts similar to those drawn by hand. / Master's / Computer Systems / M.Sc. (Computer Science)
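
As a point of reference for the layout approach mentioned above: the sketch below shows the classic layer-assignment phase of a Sugiyama-style layered drawing, using longest-path layering. It is a minimal illustration with invented names, not the modified algorithm from the dissertation.

```python
from collections import defaultdict

def assign_layers(nodes, edges):
    """Longest-path layering: each node is placed one layer below its
    deepest predecessor. Assumes the graph is a DAG."""
    preds = defaultdict(list)
    for src, dst in edges:
        preds[dst].append(src)

    layer = {}

    def depth(n):
        if n not in layer:
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=0)
        return layer[n]

    for n in nodes:
        depth(n)
    return layer

# Example: a tiny state diagram assigned to layers
print(assign_layers(["init", "run", "halt"],
                    [("init", "run"), ("run", "halt")]))
# {'init': 1, 'run': 2, 'halt': 3}
```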
42

A technology reference model for client/server software development

Nienaber, R. C. (Rita Charlotte) 06 1900 (has links)
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development and increasing use of personal computers and data communication networks are supplementing or, in many cases, replacing the traditional mainstay of corporate computing. The client/server model combines mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. / Computing / M. Sc. (Information Systems)
43

A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)

Conradie, Pieter Wynand 06 1900 (has links)
The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology and programming languages, as well as the requirement to build better software application systems in less time. The importance of mondial (worldwide) communication between systems is also growing exponentially. People are using network-based applications daily, communicating not only locally, but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promise to address the need to develop client/server application systems that communicate over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented. This element is the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation. / Computing / M. Sc. (Computer Science)
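
To make the meta-modelling approach concrete, here is a toy sketch of how two middleware meta-models could be represented and intersected to seed a comparison table. All concept names are illustrative assumptions; the dissertation's actual meta-models are far richer.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    relates_to: list = field(default_factory=list)  # related concept names

@dataclass
class MetaModel:
    middleware: str
    concepts: dict  # concept name -> Concept

def shared_concepts(a: MetaModel, b: MetaModel) -> set:
    """Concepts present in both meta-models: the basis for a
    side-by-side comparison of the two infrastructures."""
    return set(a.concepts) & set(b.concepts)

corba = MetaModel("CORBA", {
    "Interface": Concept("Interface", ["Operation"]),
    "Operation": Concept("Operation"),
    "ORB": Concept("ORB", ["ObjectReference"]),
})
dcom = MetaModel("DCOM", {
    "Interface": Concept("Interface", ["Operation"]),
    "Operation": Concept("Operation"),
    "ClassFactory": Concept("ClassFactory", ["Interface"]),
})
print(shared_concepts(corba, dcom))  # {'Interface', 'Operation'} (order may vary)
```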
44

Eidolon: adapting distributed applications to their environment.

Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed computing applications can be run. Due to the diversity of features provided by these environments, a distributed application that is to perform well must be specifically designed and optimised for the environment in which it is deployed. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently and with improved functionality over wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications. This is used to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. It provides the necessary mechanisms and interconnects to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols. We explore the use of view model properties on several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects and potential shortcomings of the view model by comparing our model with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.
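
The separation of concerns described above (application data accesses on one side, environment-specific consistency on the other) can be illustrated with a small sketch. The class names and the write-through policy are invented for illustration and are not the thesis's actual view model API.

```python
from abc import ABC, abstractmethod

class ConsistencyProtocol(ABC):
    """Pluggable consistency policy, chosen per deployment rather than
    baked into the application."""
    @abstractmethod
    def read(self, store, key): ...
    @abstractmethod
    def write(self, store, key, value): ...

class WriteThrough(ConsistencyProtocol):
    def read(self, store, key):
        return store.get(key)
    def write(self, store, key, value):
        store[key] = value  # propagate immediately

class View:
    """A view binds the application's data accesses to whatever protocol
    suits the current environment (cluster, grid, NUMA, ...)."""
    def __init__(self, protocol: ConsistencyProtocol):
        self._store = {}
        self._protocol = protocol
    def read(self, key):
        return self._protocol.read(self._store, key)
    def write(self, key, value):
        self._protocol.write(self._store, key, value)

v = View(WriteThrough())  # swap the protocol without touching the app
v.write("x", 42)
print(v.read("x"))  # 42
```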
45

Multiple strategy process migration.

De Paoli, Damien, mikewood@deakin.edu.au January 1996 (has links)
The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability, etc. This thesis identifies the problems past process migration facilities have had and determines the possible differing strategies that can be used to resolve these problems. The result of this analysis has led to a new design philosophy. This philosophy requires the design of a process migration facility and the design of an operating system to be conducted in parallel. Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high performance facility that is easy to create, debug and maintain. Furthermore, the design easily incorporates providing multiple migration strategies. In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System). RHODOS is a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple strategy migration manager utilises the services of the process, memory and IPC managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems that use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified. This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use. Instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
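
A minimal sketch of the delegation design described above: the migration manager never copies resources itself, but asks each per-resource server to hand its state over. All names are hypothetical; the real RHODOS process, memory and IPC manager interfaces are not reproduced here.

```python
class ResourceServer:
    """Each resource of a process (process state, address space, IPC
    state, ...) is owned by a separate server that knows how to export
    and import it."""
    def __init__(self, name):
        self.name = name
        self.state = {}  # pid -> resource blob
    def export_resource(self, pid):
        return self.state.pop(pid, None)
    def import_resource(self, pid, blob):
        self.state[pid] = blob

def migrate(pid, source_servers, dest_servers):
    """The migration manager only coordinates: each source/destination
    server pair transfers its own resource."""
    for src, dst in zip(source_servers, dest_servers):
        dst.import_resource(pid, src.export_resource(pid))

src = [ResourceServer(n) for n in ("process", "memory", "ipc")]
dst = [ResourceServer(n) for n in ("process", "memory", "ipc")]
for s in src:
    s.state[7] = f"{s.name}-state-of-pid-7"
migrate(7, src, dst)
print(dst[1].state)  # {7: 'memory-state-of-pid-7'}
```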
46

Graph and geometric algorithms on distributed networks and databases

Nanongkai, Danupon 16 May 2011 (has links)
In this thesis, we study the power and limits of algorithms on various models, aiming at applications in distributed networks and databases. In distributed networks, graph algorithms are fundamental to many applications. We focus on computing random walks, an important primitive employed in a wide range of applications that has always been computed naively. We show that a faster solution exists and subsequently develop faster algorithms by exploiting random walk properties, leading to two immediate applications. We also show that this algorithm is optimal. Our technique for proving a lower bound shows the first non-trivial connection between communication complexity and lower bounds of distributed graph algorithms. We show that this technique has a wide range of applications by proving new lower bounds for many problems. Some of these lower bounds show that the existing algorithms are tight. In database searching, we think of the database as a large set of multi-dimensional points stored on disk and want to help users quickly find the most desired point. In this thesis, we develop an algorithm that is significantly faster than previous algorithms, both theoretically and experimentally. The insight is to solve the problem in the streaming model, which helps emphasize the benefits of sequential access over random disk access. We also introduced a randomization technique to the area. The results were complemented with a lower bound. We also initiate a new direction in an attempt to obtain better queries. We are the first to quantify output quality using "user satisfaction", which is made possible by borrowing the idea of modeling users by utility functions from game theory, and we justify our approach through a geometric analysis.
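
For contrast with the thesis's contribution, the sketch below simulates the naive random walk primitive the abstract refers to: the token crosses one edge per round, so a walk of length L costs L rounds of communication. The faster algorithm developed in the thesis (not shown) avoids paying one round per step.

```python
import random

def naive_random_walk(adj, source, length, rng=random.Random(1)):
    """Naive primitive: the walk token moves one edge per round, i.e.
    one message per step, so a length-L walk takes L rounds."""
    node, rounds = source, 0
    for _ in range(length):
        node = rng.choice(adj[node])  # one message crosses one edge
        rounds += 1
    return node, rounds

# Toy network as an adjacency list
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
end, rounds = naive_random_walk(adj, source=0, length=10)
print(f"walk ended at node {end} after {rounds} rounds")
```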
50

Um sistema de monitoramento para caracterização de algoritmos distribuídos / A monitoring system for the characterization of distributed algorithms

Fachini, Elizeu Elieber 24 February 2016 (has links)
No funding received. / Monitoring is the act of collecting information concerning the characteristics and status of resources of interest. It can be used for the management and allocation of resources, for the detection and correction of failures, and for the evaluation of performance parameters. To carry out monitoring automatically, a tool is needed with functionalities for acquiring, processing, distributing and presenting monitoring events. In this work we are interested in a monitoring system that supports the experimental execution of distributed algorithms, with the objective of correlating device status with execution data and thereby enabling an analysis of the cluster resources used by the application. This calls for a tool with particular characteristics, such as collecting data at short intervals, with low intrusiveness, and retaining the data in full. As no tool with the required features could be found in the literature, we developed a new monitoring tool named MSPlus. Its features were evaluated through experiments with the tool in isolation and in comparison with another tool. Additionally, we applied the tool in a distributed system to monitor a distributed algorithm.
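
To make the stated requirements concrete (sampling at short intervals, low intrusiveness, full retention of samples), here is a hypothetical collector loop. It is only a sketch of the requirements, not MSPlus's implementation.

```python
import collections
import threading
import time

class Collector:
    """Samples a set of probes at a fixed small interval and keeps every
    sample, so nothing is aggregated away before analysis."""
    def __init__(self, probes, interval=0.1):
        self.probes = probes          # name -> zero-argument callable
        self.interval = interval      # seconds between samples
        self.samples = collections.deque()
        self._stop = threading.Event()

    def _run(self):
        while not self._stop.is_set():
            now = time.time()
            self.samples.append((now, {n: p() for n, p in self.probes.items()}))
            self._stop.wait(self.interval)  # light sleep keeps intrusiveness low

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._stop.set()

c = Collector({"load": lambda: 0.42})  # placeholder probe for illustration
c.start(); time.sleep(0.35); c.stop()
print(len(c.samples), "samples kept in full")
```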
