11

Understanding Java applications using frequency of method calls /

Ghadiri, Amirali. January 2007 (has links)
Thesis (M.Sc.)--York University, 2007. Graduate Programme in Computer Science. / Typescript. Includes bibliographical references (leaves 122-124). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR31995
12

CREWS : a Component-driven, Run-time Extensible Web Service framework /

Parry, Dominic Charles. January 2004 (has links)
Thesis (M. Sc. (Computer Science))--Rhodes University, 2004.
13

CREWS : a Component-driven, Run-time Extensible Web Service framework

Parry, Dominic Charles January 2004 (has links)
There has been an increased focus in recent years on the development of re-usable software, in the form of objects and software components. This increase, together with pressures from enterprises conducting transactions on the Web to support all business interactions on all scales, has encouraged research towards the development of easily reconfigurable and highly adaptable Web services. This work investigates the ability of Component-Based Software Development (CBSD) to produce such systems, and proposes a more manageable use of CBSD methodologies. Component-Driven Software Development (CDSD) is introduced to enable better component manageability. Current Web service technologies are also examined to determine their ability to support extensible Web services, and a dynamic Web service architecture is proposed. The work also describes the development of two proof-of-concept systems, DREW Chat and Hamilton Bank. DREW Chat and Hamilton Bank are implementations of Web services that support extension dynamically and at run-time. DREW Chat is implemented on the client side, where the user is given the ability to change the client as required. Hamilton Bank is a server-side implementation, which is run-time customisable by both the user and the party offering the service. In each case, a generic architecture is produced to support dynamic Web services. These architectures are combined to produce CREWS, a Component-driven Runtime Extensible Web Service solution that enables Web services to support the ever changing needs of enterprises. A discussion of similar work is presented, identifying the strengths and weaknesses of our architecture when compared to other solutions.
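As a generic illustration of run-time extensibility (not the CREWS architecture itself), the Python sketch below keeps a registry of handler components that a running service can extend by loading additional modules while it operates. The module and handler names are hypothetical.

    import importlib

    class ExtensibleService:
        """Toy service whose behaviour can be extended at run time."""

        def __init__(self):
            self.handlers = {}                        # operation name -> callable

        def register(self, name, handler):
            self.handlers[name] = handler

        def load_component(self, module_path, attr, name):
            # Import a module while the service is running and register
            # one of its callables as a new operation handler.
            module = importlib.import_module(module_path)   # e.g. "plugins.billing" (hypothetical)
            self.register(name, getattr(module, attr))

        def handle(self, name, *args, **kwargs):
            return self.handlers[name](*args, **kwargs)

    if __name__ == "__main__":
        svc = ExtensibleService()
        svc.register("echo", lambda msg: msg)         # component shipped with the service
        print(svc.handle("echo", "hello"))            # -> hello
        # Later, without restarting:
        # svc.load_component("plugins.billing", "create_invoice", "invoice")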
14

Identifying Testing Requirements for Modified Software

Apiwattanapong, Taweesup 09 July 2007 (has links)
Throughout its lifetime, software must be changed for many reasons, such as bug fixing, performance tuning, and code restructuring. Testing modified software is the main activity performed to gain confidence that changes behave as they are intended and do not have adverse effects on the rest of the software. A fundamental problem of testing evolving software is determining whether test suites adequately exercise changes and, if not, providing suitable guidance for generating new test inputs that target the modified behavior. Existing techniques evaluate the adequacy of test suites based only on control- and data-flow testing criteria. They do not consider the effects of changes on program states and, thus, are not sufficiently strict to guarantee that the modified behavior is exercised. Also, because of the lack of this guarantee, these techniques can provide only limited guidance for generating new test inputs. This research has developed techniques that will assist testers in testing evolving software and provide confidence in the quality of modified versions. In particular, this research has developed a technique to identify testing requirements that ensure that the test cases satisfying them will result in different program states at preselected parts of the software. This research has also developed supporting techniques for identifying testing requirements. Such techniques include (1) a differencing technique, which computes differences and correspondences between two software versions and (2) two dynamic-impact-analysis techniques, which identify parts of software that are likely affected by changes with respect to a set of executions.
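As a rough illustration of the differencing step mentioned above (not the dissertation's actual technique, which also reasons about correspondences and program states), the sketch below compares two program versions at method granularity and reports which methods are unchanged, modified, added, or deleted; a test-selection step could then target the modified ones. The method names and bodies are hypothetical.

    def diff_versions(old, new):
        """Compare two versions of a program, each given as a mapping
        from method name to method body (plain text here)."""
        unchanged, modified = [], []
        for name, body in new.items():
            if name in old:
                (unchanged if old[name] == body else modified).append(name)
        added = [name for name in new if name not in old]
        deleted = [name for name in old if name not in new]
        return unchanged, modified, added, deleted

    if __name__ == "__main__":
        v1 = {"parse": "return x", "fmt": "return str(x)"}
        v2 = {"parse": "return x + 1", "fmt": "return str(x)", "log": "print(x)"}
        print(diff_versions(v1, v2))
        # (['fmt'], ['parse'], ['log'], [])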
15

A fine-grained model for design pattern detection in Eiffel systems /

Lebon, Maurice. January 2007 (has links)
Thesis (M.Sc.)--York University, 2007. Graduate Programme in Software Engineering. / Typescript. Includes bibliographical references (leaves 153-155). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR38796
16

Automating software evolution: towards using constraints with action for model evolution /

Alam, Shahid, January 1900 (has links)
Thesis (M.App.Sc.) - Carleton University, 2007. / Includes bibliographical references (p. 150-154). Also available in electronic format on the Internet.
17

Users' acceptance of legacy systems integration in the National Department of Human Settlements.

Mathule, L. R. January 2015 (has links)
M. Tech. Business Information Systems / Legacy systems are standalone computer applications, mostly built on old technologies, that remain in use in many organizations despite the availability of newer, more streamlined applications. They stay in place because replacing them is costly and/or because they still perform the functions they were designed for adequately. Legacy systems play an important role in today's business: they consist of application programs that may not be upgradable and of old data that may not be reformattable for new systems, and they survive on the strength of their proven track record and distinct characteristics. Used in silos, however, legacy systems make information sharing, security, and management control difficult, which in turn affects decision making at both the operational and top-management levels. Synchronization of reports from different business units becomes a problem, and in the long run the whole business is rendered ineffective and inefficient. This situation calls for integrating legacy systems into an enterprise resource planning (ERP) system. Even so, there is still limited understanding of the factors that contribute to users' acceptance of such integration. This study therefore sought to determine the factors influencing users' acceptance of legacy systems' integration into an ERP system, taking the National Department of Human Settlements as a case.
18

Evolving Legacy Software Systems with a Resource and Performance-Sensitive Autonomic Interaction Manager

Unknown Date (has links)
Retaining business value in a legacy commercial enterprise resource planning system today often entails more than just maintaining the software to preserve existing functionality. This type of system tends to represent a significant capital investment that may not be easily scrapped, replaced, or re-engineered without considerable expense. A legacy system may need to be frequently extended to impart new behavior as stakeholder business goals and technical requirements evolve. Legacy ERP systems are growing in prevalence and are both expensive to maintain and risky to evolve. Humans are the driving factor behind the expense, from the engineering costs associated with evolving these types of systems to the labor costs required to operate the result. Autonomic computing is one approach that addresses these challenges by imparting self-adaptive behavior into the evolved system. The contribution of this dissertation aims to add to the body of knowledge in software engineering some insight and best practices for development approaches that are normally hidden from academia by the competitive nature of the retail industry. We present a formal architectural pattern that describes an asynchronous, low-complexity, and autonomic approach. We validate the pattern with two real-world commercial case studies and a reengineering simulation to demonstrate that the pattern is repeatable and agnostic with respect to the operating system, programming language, and communication protocols. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
19

From monolithic architectural style to microservice one : structure-based and task-based approaches / Du style architectural monolithique vers le style microservice : approches basées sur la structure et sur les tâches

Selmadji, Anfel 03 October 2019 (has links)
Software technologies are constantly evolving to facilitate the development, deployment, and maintenance of applications in different areas. In parallel, these applications evolve continuously to guarantee an adequate quality of service, and they become more and more complex. Such evolution often involves increased development and maintenance costs, which can become even higher when these applications are deployed on recent execution infrastructures such as the cloud. Reducing these costs and improving the quality of applications are now central objectives of software engineering. Recently, microservices have emerged as an example of a technology or architectural style that helps to achieve these objectives. While microservices can be used to develop new applications, there are monolithic applications (i.e., monoliths) built as a single unit whose owners (e.g., companies) want to maintain them and deploy them in the cloud. In this case, it is common to consider rewriting these applications from scratch or migrating them towards recent architectural styles. Rewriting an application or migrating it manually can quickly become a long, error-prone, and expensive task, so automatic migration appears as an evident solution. The ultimate aim of our dissertation is to contribute to automating the migration of monolithic Object-Oriented (OO) applications to microservices. This migration consists of two steps: microservice identification and microservice packaging. We focus on microservice identification based on source code analysis and propose two approaches. The first identifies microservices from the source code of a monolithic OO application, relying on code structure, data accesses, and software architect recommendations. The originality of this approach can be viewed from three aspects. Firstly, microservices are identified based on the evaluation of a well-defined function measuring their quality; this function relies on metrics reflecting the "semantics" of the concept "microservice". Secondly, software architect recommendations are exploited only when they are available. Finally, two algorithmic models have been used to partition the classes of an OO application into microservices: clustering and genetic algorithms. The second approach extracts from OO source code a workflow that can be used as input to some existing microservice identification approaches. A workflow describes the sequencing of the tasks constituting an application according to two formalisms: control flow and/or data flow. Extracting a workflow from source code requires the ability to map OO concepts into workflow ones. To validate both approaches, we implemented two prototypes and conducted experiments on several case studies. The identified microservices were evaluated qualitatively and quantitatively, and the extracted workflows were evaluated manually against test suites. The results show, respectively, the relevance of the identified microservices and the correctness of the extracted workflows.
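The abstract names clustering and genetic algorithms as the two partitioning models. As a rough illustration of the clustering side only (not the thesis's quality function or actual algorithm), the sketch below greedily merges the most strongly coupled groups of classes until a target number of candidate microservices remains. The class names and coupling weights are hypothetical.

    from itertools import combinations

    def cluster_classes(coupling, classes, k):
        """Agglomerative grouping of classes into k candidate microservices.
        `coupling` maps frozenset({classA, classB}) to a structural-coupling weight."""
        clusters = [{c} for c in classes]

        def weight(a, b):
            # Total coupling between two clusters of classes.
            return sum(coupling.get(frozenset({x, y}), 0.0) for x in a for y in b)

        while len(clusters) > k:
            i, j = max(combinations(range(len(clusters)), 2),
                       key=lambda pair: weight(clusters[pair[0]], clusters[pair[1]]))
            clusters[i] |= clusters.pop(j)   # merge the most strongly coupled pair
        return clusters

    if __name__ == "__main__":
        coupling = {frozenset({"Order", "Invoice"}): 5.0,
                    frozenset({"User", "Session"}): 4.0,
                    frozenset({"Order", "User"}): 1.0}
        print(cluster_classes(coupling, ["Order", "Invoice", "User", "Session"], k=2))
        # e.g. [{'Order', 'Invoice'}, {'User', 'Session'}]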
20

Formalização de um modelo de processo de reengenharia centrado no usuário para conversão de aplicações desktop em RIAs [Formalization of a user-centered reengineering process model for converting desktop applications into RIAs]

Buzatto, David 12 May 2010 (has links)
Financiadora de Estudos e Projetos / Software reengineering matters because organizations need to adapt to new trends, technologies, and user requirements; "organizations" here means universities or companies that develop software used by a large number of people. With users' requirements in mind, this work presents the reengineering of Cognitor, a tool designed to support teachers in creating electronic teaching materials. The need to reengineer Cognitor was identified through a case study in which users requested several changes, including improvements to the text editor, previews of the images inserted into content pages, and better feedback to the user. During this reengineering, a user-centered software reengineering process model for converting desktop applications into RIAs (Rich Internet Applications) was formalized, named UC-RIA (User Centered Rich Internet Application). The model is called UC-RIA because the application's potential users take part throughout the reengineering process, being involved both in prototyping and in validating the graphical interfaces of the new version. The results of this study show that the UC-RIA model can support organizations in the reengineering of their software, mainly because it brings users into the process during the application's prototyping phase, moving the software closer to users' real needs.
