11

RESTful API vs. GraphQL: a CRUD performance comparison

Niklasson, Alexander, Werèlius, Vincent January 2023 (has links)
The utilization of Application Programming Interfaces (APIs) has experienced significant growth due to the increasing number of applications being developed. APIs serve as a means to transfer data between different applications. While RESTful has been the standard API since its emergence around 2000, it is now being challenged by Facebook's GraphQL, which was introduced in 2015. This study aims to fill a knowledge gap in the existing literature on API performance evaluation by extending the focus beyond read operations to include CREATE, UPDATE, and DELETE operations in both RESTful APIs and GraphQL. Previous studies have predominantly examined the performance of read operations, but there is a need to comprehensively understand the behavior and effectiveness of additional CRUD operations. To address this gap, we conducted a series of controlled experiments and analyses to evaluate the response time and RAM utilization of RESTful APIs and GraphQL when executing CREATE, UPDATE, and DELETE operations. We tested various scenarios and performance metrics to gain insights into the strengths and weaknesses of each approach. Our findings indicate that, contrary to our initial beliefs, there are no significant differences between the two API technologies in terms of CREATE, UPDATE, and DELETE operations. However, RESTful did slightly outperform GraphQL in the majority of tests. We also observed that GraphQL's inherent batching functionality resulted in faster response times and lower RAM usage throughout the tests. On the other hand, RESTful, despite its simpler queries, exhibited faster response times in GET operations, consistent with related work. Lastly, our findings suggest that RESTful uses slightly less RAM compared to GraphQL in the context of CREATE, UPDATE, and DELETE operations.
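To make the compared operations concrete, here is a minimal sketch of the same CREATE issued against a hypothetical REST endpoint and a hypothetical GraphQL endpoint; the URLs, schema, and field names are invented for illustration and are not taken from the thesis.

```typescript
// Minimal sketch (TypeScript, Node 18+ for the global fetch). Endpoints and
// schema are hypothetical stand-ins for the kind of CRUD workload compared.

interface User { id?: number; name: string; }

// CREATE via REST: POST a JSON body to a resource collection URL.
async function createUserRest(user: User): Promise<User> {
  const res = await fetch("https://api.example.com/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(user),
  });
  return res.json();
}

// CREATE via GraphQL: POST a mutation document to a single /graphql endpoint.
async function createUserGraphql(user: User): Promise<User> {
  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "mutation Create($name: String!) { createUser(name: $name) { id name } }",
      variables: { name: user.name },
    }),
  });
  const { data } = await res.json();
  return data.createUser;
}
```

Wrapping each call in timing code (e.g. `performance.now()` before and after) and sampling process memory is essentially the response-time and RAM measurement loop the study describes.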
12

Development of a tool allowing to create and use JSON schemas so as to enhance the validation of existing projects

Charles-Elie, Simon January 2017 (has links)
A mobile application is typically divided into two sides that communicate with each other: the front-end (i.e. what the user can see and interact with on the phone) and the back-end (the hidden "server" side, which processes requests from the front-end). Companies such as Applidium, a French startup specialized in mobile applications, constantly investigate ways to improve this production cycle. For instance, the firm often has to deal with external back-ends that are not properly documented, which makes the development of products intricate. Furthermore, test and documentation files for certain parts of projects are manually written, which is time-consuming, and are all largely based on the same information (back-end descriptions). Hence, this information frequently finds itself scattered in different files, sometimes in different versions. Having identified the issues that most regularly disrupt the work of the company's employees, we set a number of goals to solve them, notably centralizing all back-end-related information into one authoritative source and automating the generation of test and documentation files. A tool (in the form of a web application) allowing users to describe back-ends, called Pericles, is then proposed as the outcome of this master's thesis, to deal with the described problems and meet the defined objectives. Finally, a qualitative evaluation is performed through a questionnaire designed to assess how users feel the tool helps them in their work, which constitutes the metric for this project. The evaluation suggests that the implemented tool is relevant with respect to the stated goals and indicates its potential to help Applidium's developers and project managers by making the development and validation of projects easier.
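The core idea — one JSON Schema as the authoritative description of a back-end response, validated mechanically — can be sketched briefly. This is not Pericles code; the schema, data, and use of the Ajv library are illustrative assumptions.

```typescript
import Ajv from "ajv";

// An invented JSON Schema standing in for the authoritative description
// of one back-end endpoint's response.
const userSchema = {
  type: "object",
  properties: {
    id: { type: "integer" },
    name: { type: "string" },
  },
  required: ["id", "name"],
  additionalProperties: false,
};

const ajv = new Ajv();
const validate = ajv.compile(userSchema);

// Validating a (mocked) back-end response. In a tool like the one described,
// the same schema could also drive generated test and documentation files.
const response = { id: 7, name: "Alice" };
if (validate(response)) {
  console.log("response matches the schema");
} else {
  console.error(validate.errors); // precise locations of each mismatch
}
```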
13

Forensic Insights: Analyzing and Visualizing Fitbit Cloud Data

Poorvi Umesh Hegde (17635896) 15 December 2023 (has links)
<p dir="ltr">Wearable devices are ubiquitous. There are over 1.1 billion wearable devices in the<br>market today[1]. The market is projected to grow at a rate of 14.6% annually till 2030[2].<br>These devices collect and store a large amount of data[3]. A major amount of this collected<br>data is stored in the cloud. For many years now, law enforcement organizations have been<br>continuously encountering cases that involve a wearable device in some capacity. There have<br>also been examples of how these wearable devices have helped in crime investigations and<br>insurance fraud investigations [4],[5],[6],[7],[8]. The article [4] performs an analysis of 5 case<br>studies and 57 news articles and shows how the framing of wearables in the context of the<br>crimes helped those cases. However, there still isn’t enough awareness and understanding<br>among law enforcement agencies on leveraging the data collected by these devices to solve<br>crimes. Many of the fitness trackers and smartwatches in the market today have more or<br>less similar functionalities of tracking data on an individual’s fitness-related activities, heart<br>rate, sleep, temperature, and stress [9]. One of the major players in the smartwatch space is<br>Fitbit. Fitbit synchronizes the data that it collects, directly to Fitbit Cloud [10]. It provides<br>an Android app and a web dashboard for users to access some of these data, but not all.<br>Application developers on the other hand can make use of Fitbit APIs to use user’s data.<br>These APIs can also be leveraged by law enforcement agencies to aid in digital forensic<br>investigations. There have been previous studies where they have developed tools that make<br>use of Fitbit Web APIs [11],[12], [13] but for various other purposes, not for forensic research.<br>There are a few studies on the topic of using fitness tracker data for forensic investigations<br>[14],[15]. But very few have used the Fitbit developer APIs [16]. Thus this study aims to<br>propose a proof-of-concept platform that can be leveraged by law enforcement agencies to<br>access and view the data stored on the Fitbit cloud on a person of interest. The results<br>display data on 12 categories - activity, body, sleep, breathing, devices, friends, nutrition,<br>heart rate variability, ECG, temperature, oxygen level, and cardio data, in a tabular format<br>that is easily viewable and searchable. This data can be further utilized for various analyses.<br>The tool developed is Open Source and well documented, thus anyone can reproduce the<br>process.<br>12<br></p>
14

Building a Modular Analytics Platform for 3D Home Design

Öhman, Jesper January 2022 (has links)
With society undergoing a large-scale technological revolution, companies often find themselves having to adapt by developing digitized products and applications. However, these applications typically produce large amounts of data. An additional problem that companies in the 3D home design industry face is having to provide a massive number of options for their products, which without proper usage metrics can lead to large amounts of wasted resources. This thesis aims to combat these problems by designing a prototype of a modular analytics platform, consisting of a data parser, a back-end server & API, and a web interface. The system is capable of displaying highly customizable visual graphs of user statistics as well as breakdowns of how often each product gets picked by the client's users. The system is built on a MERN stack, consisting of MongoDB, Express.js, React.js and Node.js, and is written purely in JavaScript. The thesis achieved moderate success, implementing a modular analytics platform and providing tools that can identify obsolete products, which in turn could reduce the amount of resources wasted by companies that adopt the solution.
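As a sketch of what one module of such a platform might look like — the names, collection layout, and event shape are invented, not taken from the thesis — an Express endpoint could aggregate product-pick counts straight out of MongoDB:

```typescript
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
const client = new MongoClient("mongodb://localhost:27017");

// Counts how often each product option was picked, for the web interface.
app.get("/api/product-picks", async (_req, res) => {
  const events = client.db("analytics").collection("events");
  const counts = await events
    .aggregate([
      { $match: { type: "product_picked" } },
      { $group: { _id: "$productId", picks: { $sum: 1 } } },
      { $sort: { picks: -1 } },
    ])
    .toArray();
  res.json(counts); // options with few or no picks are removal candidates
});

client.connect().then(() => app.listen(3000));
```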
15

Multi agent system for web database processing, on data extraction from online social networks

Abdulrahman, Ruqayya January 2012 (has links)
In recent years, there has been a flood of continuously changing information from a variety of web resources such as web databases, web sites, web services and programs. Online Social Networks (OSNs) represent such a field, where huge amounts of information are posted online over time. Because OSNs offer a productive source of qualitative and quantitative personal information, researchers from various disciplines have contributed methods for extracting data from OSNs. However, there is limited research that addresses extracting data automatically. To the best of the author's knowledge, no research has focused on tracking the real-time changes of information retrieved from OSN profiles over time, and this motivated the present work. This thesis presents different approaches for automated Data Extraction (DE) from OSNs: a crawler, a parser, a Multi Agent System (MAS) and an Application Programming Interface (API). Initially, a parser was implemented as a centralized system to traverse the OSN graph and extract each profile's attributes and list of friends from Myspace, the top OSN at that time, by parsing Myspace profiles and extracting the relevant tokens from the parsed HTML source files. A Breadth First Search (BFS) algorithm was used to traverse the generated OSN friendship graph in order to select the next profile for parsing. The approach was implemented and tested on two types of friends: top friends and all friends. In the case of top friends, 500 seed profiles were visited; 298 public profiles were parsed to obtain 2,197 top friends' profiles and 2,747 friendship edges, while in the case of all friends, 250 public profiles were parsed to extract 10,196 friends' profiles and 17,223 friendship edges. This approach has two main limitations. First, the system is centralized and retrieved the information of each user's profile just once, meaning the extraction process stops if the system fails to process one of the profiles, whether the seed profile (the first profile to be crawled) or one of its friends. To overcome this problem, an Online Social Network Retrieval System (OSNRS) is proposed to decentralize the DE process using MAS. The novelty of OSNRS is its ability to monitor profiles continuously over time. The second challenge is that the parser had to be modified to cope with changes in the profiles' structure. To overcome this problem, OSNRS is improved through the use of an API tool that enables OSNRS agents to obtain the required fields of an OSN profile despite modifications in the representation of the profile's source web pages. The experimental work shows that using an API and MAS simplifies and speeds up the process of tracking a profile's history. It also helps security personnel, parents, guardians, social workers and marketers understand the dynamic behaviour of OSN users. This thesis thus proposes solutions for web database processing and data extraction from OSNs through the use of a parser and MAS, and discusses their limitations and improvements.
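The BFS traversal at the heart of the crawler can be sketched as follows; `fetchFriends` is a hypothetical stub (the Myspace parsing details are specific to the thesis), and skipping unparsable profiles rather than halting reflects the robustness the decentralized OSNRS aims for.

```typescript
type ProfileId = string;

// Hypothetical stub: a real implementation would download and parse the
// profile page (or call a platform API) and return its friend list, or
// null if the profile is private or fails to parse.
async function fetchFriends(id: ProfileId): Promise<ProfileId[] | null> {
  return [];
}

async function bfsCrawl(seed: ProfileId, maxProfiles: number) {
  const visited = new Set<ProfileId>([seed]);
  const queue: ProfileId[] = [seed];
  const edges: Array<[ProfileId, ProfileId]> = [];

  while (queue.length > 0 && visited.size < maxProfiles) {
    const current = queue.shift()!;
    const friends = await fetchFriends(current);
    if (friends === null) continue; // skip failures instead of halting
    for (const friend of friends) {
      edges.push([current, friend]);
      if (!visited.has(friend)) {
        visited.add(friend); // visit each profile once
        queue.push(friend);
      }
    }
  }
  return { profiles: visited, edges };
}
```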
16

Application of Web Mashup Technology to Oyster Information Services

Chuindja Ngniah, Christian 15 December 2012 (has links)
Web mashup is a lightweight technology used to integrate data from remote sources without direct access to their databases. As a data consumer, a Web mashup application creates new content by retrieving data through the Web application programming interface (API) provided by the external sources. As a data provider, the service program publishes its Web API and implements the specified functions. In the project reported in this thesis, we have implemented two Web mashup applications to enhance the Web site oystersentinel.org: the Perkinsus marinus model and the Oil Spill model. Each model overlays geospatial data from a local database on top of a coastal map from Google Maps. In addition, we have designed a Web-based data publishing service. In this experimental system, we demonstrated a successful Web mashup interface that allows outside developers to access the data about the local oyster stock assessment.
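The overlay pattern the two applications share — local geospatial records drawn on a Google Maps base map — looks roughly like this. It assumes the Google Maps JavaScript API (and its type definitions) are loaded on the page; the data URL and coordinates are invented placeholders, not the actual oystersentinel.org service.

```typescript
function initOysterMap(): void {
  const map = new google.maps.Map(
    document.getElementById("map") as HTMLElement,
    { center: { lat: 29.95, lng: -90.07 }, zoom: 8 }, // Gulf-coast placeholder
  );
  // The mashup step: pull GeoJSON from the local data-publishing service
  // and let the Maps Data layer render it over the base map.
  map.data.loadGeoJson("https://oystersentinel.example/api/perkinsus.geojson");
}
```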
17

InGriDE: an integrated and extensible development environment for grid computing

Guerra, Eduardo Leal 07 May 2007 (has links)
Computational grids have evolved considerably over the past few years. These systems have been deployed in production environments in the academic research community and have attracted increasing interest from industry. However, developing applications over such a heterogeneous and distributed infrastructure is still a complex and error-prone process. Initiatives to facilitate this task have, in the majority of cases, resulted in isolated, middleware-specific tools. This work aims to minimize the difficulty of developing grid applications through the construction of an integrated and extensible development environment (IDE) for grid computing, called InGriDE. InGriDE provides a single set of tools, compatible with different middleware systems, built on the Grid Application Toolkit (GAT) programming interface. We developed the InGriDE feature set on top of the Eclipse platform, which both provides a framework for building IDEs and makes it easy to extend the initial set of features. To validate our solution we used the InteGrade middleware, developed in our research group, as our case study. The results showed the viability of providing middleware independence to IDEs through a generic programming interface such as GAT. Moreover, the benefits obtained from using Eclipse as our IDE framework indicate that this kind of framework satisfies the requirements inherent to the grid application development process in an efficient way.
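The design principle — IDE tools programmed against one generic interface, with middleware-specific bindings behind it (the role GAT plays for InGriDE) — can be sketched abstractly. Everything below is an invented illustration, not InGriDE, GAT, or InteGrade code.

```typescript
// Tools call only this generic interface, never a concrete middleware.
interface GridMiddleware {
  submitJob(executable: string, args: string[]): Promise<string>; // job id
  jobStatus(jobId: string): Promise<"queued" | "running" | "done" | "failed">;
}

// One adapter per middleware system (hypothetical InteGrade binding).
class InteGradeAdapter implements GridMiddleware {
  async submitJob(executable: string, args: string[]): Promise<string> {
    // ...translate into InteGrade-specific submission calls here...
    return "integrade-job-42";
  }
  async jobStatus(jobId: string): Promise<"queued" | "running" | "done" | "failed"> {
    return "queued";
  }
}

// An IDE feature (e.g. a "Run on grid" action) stays middleware-independent:
async function runOnGrid(mw: GridMiddleware, exe: string): Promise<void> {
  const id = await mw.submitJob(exe, []);
  console.log(`submitted ${exe} as ${id}; status: ${await mw.jobStatus(id)}`);
}
```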
18

Design and development of a client-server interface to support reasoning in distributed Semantic Web applications

Αγγελόπουλος, Παναγιώτης 21 September 2010 (has links)
In the past few years, research on the development of the World Wide Web (WWW) has moved towards more intelligent and automated ways of discovering and exporting information. The Semantic Web is an extension of the current Web in which information is given well-defined meaning, providing machines with the possibility to better process and "comprehend" the data which until now they simply present. For the Semantic Web to function properly, computers must have access to organized collections of information, called ontologies. Ontologies provide a method of representing knowledge in the Semantic Web and, consequently, can be used by computing systems to conduct automated reasoning. To describe and represent the ontologies of the Semantic Web in a machine-readable language, various initiatives have been proposed and are under development, the most important of which is the Web Ontology Language (OWL). This language constitutes the basis for representing knowledge in the Semantic Web, due to its promotion by the W3C and its increasing degree of adoption in related applications. The main tool for developing applications that manage OWL ontologies is the OWL API, which consists of programming libraries and methods that provide a high-level interface for accessing and handling OWL ontologies. The theoretical background that guarantees the expressive and reasoning power of ontologies is provided by Description Logics, a well-defined, decidable subset of First Order Logic that makes the representation and discovery of knowledge in the Semantic Web possible. Consequently, discovering implicit information calls for systems based on Description Logics; such systems are also called Reasoners, and characteristic examples are FaCT++ and Pellet. This is why both the OWL API and Reasoners are used by proposed models for developing next-generation (Web 3.0) Semantic Web applications, for communicating with and submitting "intelligent" queries to knowledge bases. These models also propose the use of a 3-tier distributed architecture for developing Semantic Web applications. The aim of this diploma thesis is to design and implement a Client-Server interface to support reasoning in distributed Semantic Web applications. The interface we implement consists of two parts. The first provides the files needed to run a Reasoner on a remote machine (the Server), so that this machine offers remote reasoning services. The second part (the Client) contains files that extend the libraries of the OWL API with new capabilities: specifically, they allow an application implemented with the OWL API to use the services offered by a remote Reasoner. Consequently, our interface makes it possible for distributed architectures for Semantic Web applications to adopt the OWL API and Reasoners.
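Purely as an illustration of the client side of such a split — every name, endpoint, and payload shape below is invented, and the thesis implements this on top of the OWL API in Java — a remote reasoning call reduces to shipping a query about an ontology to the server and reading back the entailed results:

```typescript
interface ReasoningRequest {
  ontologyIri: string;                             // ontology the server loads
  query: { type: "subclasses"; classIri: string };
}

async function remoteSubclasses(
  serverUrl: string,
  req: ReasoningRequest,
): Promise<string[]> {
  const res = await fetch(`${serverUrl}/reason`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`reasoner error: ${res.status}`);
  return res.json(); // e.g. class IRIs entailed as subclasses
}
```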
19

Design and development of a platform to support data processing for the Δι@ύγεια (Diavgeia) system

Κριμπάς, Γεώργιος 08 May 2013 (has links)
The goal of this diploma thesis is the design and development of a Web-based system that supports the processing and analysis of data concerning the decisions of government bodies and administrative activity, as published by the «Δι@ύγεια» (Diavgeia) programme at http://et.diavgeia.gov.gr/. The objective is to improve the processing and analysis of the Δι@ύγεια system's data, with emphasis on its financial analysis; our purpose is to provide services that facilitate activities related to the financial analysis of the decisions. The implementation uses the API (application programming interface) that the Δι@ύγεια system provides in order to fetch the full set of published decisions and store them in a way that allows suitable visualization and extraction of the financial data.
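A sketch of the ingestion step such a system needs — fetching published decisions through the Δι@ύγεια open-data API — might look like the following; the endpoint path and query parameters are assumptions based on the public OpenData API and should be verified against the current documentation (the 2013-era API the thesis used differs).

```typescript
async function fetchDecisions(orgId: string, page = 0) {
  // Assumed endpoint of the Δι@ύγεια OpenData API; verify before use.
  const url = new URL("https://diavgeia.gov.gr/opendata/search");
  url.searchParams.set("org", orgId);          // issuing organization
  url.searchParams.set("page", String(page));
  const res = await fetch(url, { headers: { Accept: "application/json" } });
  if (!res.ok) throw new Error(`Diavgeia API error: ${res.status}`);
  return res.json(); // decisions, incl. financial fields for later analysis
}
```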
20

An approach to client adaptation of the Java Collections Framework based on API migration techniques

MAIA, Mikaela Anuska Oliveira. 16 April 2018 (has links)
Despite the API diversity that the Java Collections Framework (JCF) provides, with diverse implementations of several data structures, developers may choose inappropriate interfaces or classes in terms of efficiency or purpose. This may happen due to insufficient API documentation or a lack of thoughtful analysis by the developer according to context requirements. A possible solution is manual replacement, in parallel with an analysis of the program context; however, this is tiresome and error-prone, discouraging the modification. In this work, we define a semi-automatic approach for (i) the selection of interfaces and implementations within the JCF and (ii) the modification of JCF clients, based on API migration techniques. The approach helps the user choose the most appropriate collection, based on requirements collected by means of simple yes/no questions. The selection is resolved with a decision tree that, from the answers given by the developer, decides which interface (and implementation) from the JCF is the most adequate. After this decision, the actual program modification is performed by means of adapters, minimizing the source code modification. We evaluated the approach, as implemented in a supporting tool, with an experimental study comprising computer science students randomly distributed into groups, whose task was to perform changes to JCF clients by different methods (manually, using Eclipse's Java Search, and using our approach); the results were evaluated on quality, effort and time spent. We found that most students had a hard time choosing the right interface or implementation for the given requirements. Our approach seemed to reduce the effort of selecting the best collection for the requirement, saving some time in the process. Regarding the quality of the collection selected, we found the same behavior using both tools.
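The selection step can be pictured as a small decision tree mapping the developer's yes/no answers to a JCF recommendation. The questions and recommendations below are invented placeholders (and sketched in TypeScript rather than the Java of the actual tool); the thesis's real tree is richer.

```typescript
interface Node {
  question?: string;          // internal node
  yes?: Node;
  no?: Node;
  recommendation?: string;    // leaf: a JCF interface/implementation
}

const tree: Node = {
  question: "Do elements need to stay unique?",
  yes: {
    question: "Do you need a predictable iteration order?",
    yes: { recommendation: "LinkedHashSet" },
    no: { recommendation: "HashSet" },
  },
  no: {
    question: "Will you mostly insert/remove at both ends?",
    yes: { recommendation: "ArrayDeque" },
    no: { recommendation: "ArrayList" },
  },
};

// Walk the tree with the developer's answers, in question order.
function recommend(node: Node, answers: boolean[]): string {
  if (node.recommendation) return node.recommendation;
  const [head, ...rest] = answers;
  return recommend(head ? node.yes! : node.no!, rest);
}

// e.g. recommend(tree, [true, false]) === "HashSet"
```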
