About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A Framework for Evaluating the Computational Aspects of Mobile Phones

Aguilar, David Pedro 19 March 2008 (has links)
With sales reaching $4.4 billion in the first half of 2006 in the United States alone, and an estimated 80% of the world's population covered by wireless service that year, interest in these devices as more than mere communicators has greatly increased. In the mid-to-late 1990s, digital cameras began to be incorporated into cellphones, followed shortly thereafter by Global Positioning System (GPS) hardware that allowed location-based services to be offered to customers. Since then, the use of mobile phone hardware for non-communication purposes has continued to expand. Some models, such as the Motorola V3M, have been specifically geared toward the storage and display of music and visual media, as well as receiving Internet broadcasts. It is perhaps surprising, therefore, that relatively little has been done from an academic standpoint to provide a quantitative and comprehensive method of evaluating the performance of mobile phones as computing devices. While some manuals offer comparisons of the Application Programming Interfaces (APIs) that aid in the development of cellphone applications, little documentation exists to provide objective measurements of performance parameters. This dissertation proposes a framework for evaluating the performance of mobile phones from a computational angle, focusing on three criteria: the processing power of the Central Processing Unit (CPU), data transfer capabilities, and the performance of the phone's GPS functionality for acquiring geographic location data. Power consumption has always been a major source of interest in the study of computer systems, and the limited hardware resources of mobile devices such as laptop computers, Personal Digital Assistants (PDAs) and cellular telephones make it a key concern. The power consumption associated with operation is therefore considered alongside the three core criteria of the framework. In addition to the framework design, software tools for the evaluation of cellphones were developed and applied to a test case, the Sanyo SCP-7050. This provides an example of the utility of the framework in evaluating existing phone models and a foundation for the assessment of new models as they are released.
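As a minimal illustration of the CPU criterion, a benchmark in this spirit times a fixed arithmetic workload on the device. The sketch below is a plain-Java assumption for illustration; the workload, iteration counts and class names are not the dissertation's actual test suite.

```java
// Illustrative CPU benchmark: time a deterministic integer workload,
// as a phone-evaluation framework might. The workload is an assumption.
public class CpuBenchmark {
    // Run the workload and return elapsed wall-clock milliseconds.
    static long timeWorkload(int iterations) {
        long start = System.currentTimeMillis();
        long acc = 0;
        for (int i = 1; i <= iterations; i++) {
            acc += (i * 31L) % 7919; // arbitrary arithmetic kernel
        }
        long elapsed = System.currentTimeMillis() - start;
        if (acc == 42) System.out.println(acc); // keep the loop from being optimized away
        return elapsed;
    }

    public static void main(String[] args) {
        long total = 0;
        int runs = 5; // average several runs to smooth scheduler noise
        for (int r = 0; r < runs; r++) {
            total += timeWorkload(10_000_000);
        }
        System.out.println("Average time (ms): " + (total / runs));
    }
}
```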
2

Forensic Insights: Analyzing and Visualizing Fitbit Cloud Data

Poorvi Umesh Hegde (17635896) 15 December 2023 (has links)
Wearable devices are ubiquitous: there are over 1.1 billion wearable devices in the market today [1], and the market is projected to grow at a rate of 14.6% annually until 2030 [2]. These devices collect and store a large amount of data [3], much of it in the cloud. For many years now, law enforcement organizations have encountered cases that involve a wearable device in some capacity, and there are examples of wearable devices aiding crime and insurance fraud investigations [4], [5], [6], [7], [8]. The article [4] analyzes 5 case studies and 57 news articles and shows how framing wearables in the context of the crimes helped those cases. However, there still is not enough awareness and understanding among law enforcement agencies of how to leverage the data collected by these devices to solve crimes. Most fitness trackers and smartwatches on the market today offer broadly similar functionality, tracking an individual's fitness-related activities, heart rate, sleep, temperature, and stress [9]. One of the major players in the smartwatch space is Fitbit. Fitbit synchronizes the data it collects directly to the Fitbit Cloud [10]. It provides an Android app and a web dashboard for users to access some of this data, but not all. Application developers, on the other hand, can use the Fitbit APIs to access a user's data, and these APIs can also be leveraged by law enforcement agencies to aid digital forensic investigations. Previous studies have developed tools that use the Fitbit Web APIs [11], [12], [13], but for other purposes, not forensic research. A few studies address using fitness tracker data for forensic investigations [14], [15], but very few have used the Fitbit developer APIs [16]. This study therefore proposes a proof-of-concept platform that law enforcement agencies can leverage to access and view the data stored on the Fitbit cloud about a person of interest. The results display data in 12 categories (activity, body, sleep, breathing, devices, friends, nutrition, heart rate variability, ECG, temperature, oxygen level, and cardio data) in a tabular format that is easily viewable and searchable, and this data can be further utilized for various analyses. The tool developed is open source and well documented, so anyone can reproduce the process.
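As a hedged sketch of the collection step such a platform performs, the program below requests one day of activity and sleep data with an OAuth 2.0 bearer token. The endpoint paths follow Fitbit's public Web API documentation, but the token, date and response handling are placeholders; this is not the thesis's tool itself.

```java
// Sketch: pull one day of Fitbit activity and sleep data over the Web API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FitbitFetch {
    // Placeholder: obtained through Fitbit's OAuth 2.0 authorization flow.
    static final String TOKEN = "<oauth2-access-token>";

    static String get(HttpClient client, String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + TOKEN)
                .GET()
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // "-" denotes the user who authorized the token.
        System.out.println(get(client,
                "https://api.fitbit.com/1/user/-/activities/date/2023-12-01.json"));
        System.out.println(get(client,
                "https://api.fitbit.com/1.2/user/-/sleep/date/2023-12-01.json"));
    }
}
```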
3

Multi agent system for web database processing, on data extraction from online social networks

Abdulrahman, Ruqayya January 2012 (has links)
In recent years, there has been a flood of continuously changing information from a variety of web resources such as web databases, web sites, web services and programs. Online Social Networks (OSNs) represent such a field, where huge amounts of information are posted online over time. Because OSNs offer a productive source of qualitative and quantitative personal information, researchers from various disciplines have contributed methods for extracting data from them. However, there is limited research addressing automatic data extraction, and to the best of the author's knowledge, no research focuses on tracking the real-time changes of information retrieved from OSN profiles over time; this motivated the present work. This thesis presents different approaches for automated Data Extraction (DE) from OSNs: crawler, parser, Multi Agent System (MAS) and Application Programming Interface (API). Initially, a parser was implemented as a centralized system to traverse the OSN graph and extract each profile's attributes and list of friends from Myspace, the top OSN at that time, by parsing Myspace profiles and extracting the relevant tokens from the parsed HTML source files. A Breadth First Search (BFS) algorithm was used to travel across the generated OSN friendship graph in order to select the next profile for parsing, as sketched below. The approach was implemented and tested on two types of friends: top friends and all friends. In the case of top friends, 500 seed profiles were visited, and 298 public profiles were parsed to obtain 2,197 top friends' profiles and 2,747 friendship edges; in the case of all friends, 250 public profiles were parsed to extract 10,196 friends' profiles and 17,223 friendship edges. This approach has two main limitations. First, the system was designed as a centralized system that controlled and retrieved the information of each user's profile just once, which means the extraction process stops if the system fails to process any profile, whether the seed profile (the first profile to be crawled) or one of its friends. To overcome this problem, an Online Social Network Retrieval System (OSNRS) is proposed to decentralize the DE process using MAS; the novelty of OSNRS is its ability to monitor profiles continuously over time. The second challenge is that the parser had to be modified to cope with changes in the profiles' structure. To overcome this, OSNRS is improved through an API tool that enables its agents to obtain the required fields of an OSN profile despite modifications in the representation of the profile's source web pages. The experimental work shows that using the API and MAS simplifies and speeds up the process of tracking a profile's history. It also helps security personnel, parents, guardians, social workers and marketers understand the dynamic behaviour of OSN users. This thesis thus proposes solutions for web database processing on data extraction from OSNs using a parser and MAS, and discusses their limitations and improvements.
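A compact sketch of that traversal, with a fetchFriendIds helper standing in for the Myspace parser (or, later, the API layer); the helper and limits are assumptions for illustration.

```java
// Breadth-first traversal of a friendship graph, profile by profile.
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class ProfileCrawler {
    // Placeholder for the parser/API step that extracts friend IDs from a profile.
    static List<String> fetchFriendIds(String profileId) {
        return List.of(); // a real implementation would fetch and parse the profile page
    }

    static void bfs(String seedId, int maxProfiles) {
        Queue<String> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        frontier.add(seedId);
        while (!frontier.isEmpty() && visited.size() < maxProfiles) {
            String current = frontier.poll();
            if (!visited.add(current)) continue; // already processed
            for (String friend : fetchFriendIds(current)) {
                // each pair (current, friend) is a friendship edge
                if (!visited.contains(friend)) frontier.add(friend);
            }
        }
        System.out.println("Profiles visited: " + visited.size());
    }

    public static void main(String[] args) {
        bfs("seed-profile-id", 500);
    }
}
```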
4

Design and development of a client-server interface to support reasoning in distributed Semantic Web applications

Αγγελόπουλος, Παναγιώτης 21 September 2010 (has links)
In recent years, research on the evolution of the World Wide Web (WWW) has moved towards more intelligent and automated ways of discovering and extracting information. The Semantic Web is an extension of the current Web in which information is given well-defined meaning, enabling machines to better process and "understand" the data that, until now, they have merely presented. For the Semantic Web to function, computers must have access to organized collections of information, called ontologies. Ontologies provide a method of representing knowledge on the Semantic Web and can therefore be used by computing systems to conduct automated reasoning. To describe and represent Semantic Web ontologies in machine-readable languages, various initiatives have been proposed and are under development, the most important being the Web Ontology Language (OWL). This language now constitutes the basis for knowledge representation on the Semantic Web, owing to its promotion by the W3C and its increasing adoption in relevant applications. The main tool for building applications that manage OWL ontologies is the OWL API, a set of programming libraries and methods that provide a high-level interface for accessing and manipulating OWL ontologies. The theoretical background that guarantees the expressive and reasoning power of ontologies is provided by Description Logics, a well-defined, decidable subset of First Order Logic that makes the representation and discovery of knowledge on the Semantic Web possible. Systems based on Description Logics, called Reasoners, are therefore well suited to discovering implicit information; characteristic examples of such tools are FaCT++ and Pellet. This makes clear why both the OWL API and Reasoners are used by proposed models for building next-generation (Web 3.0) Semantic Web applications, for communicating with and submitting "intelligent" queries to knowledge bases. These models also propose a 3-tier distributed architecture for implementing Semantic Web applications. The aim of this diploma thesis is to design and implement a client-server interface to support reasoning services in distributed Semantic Web applications. The interface consists of two parts. The first provides the files needed to run a Reasoner on a remote machine (the server), so that this machine can offer remote reasoning services. The second part (the client) contains files that supplement the OWL API libraries with new capabilities: they allow an application built with the OWL API to use the services offered by a remote Reasoner. Our interface thus enables the OWL API and Reasoners to be adopted by distributed architectures for building Semantic Web applications.
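A minimal sketch of the client-side pattern the interface builds on: the OWL API loads an ontology and delegates inference to a pluggable OWLReasoner. The factory is deliberately left abstract, since its concrete class depends on the reasoner used (FaCT++, Pellet) or, in this thesis's design, on the client library that proxies a remote reasoning server.

```java
// The OWL API reasoner abstraction that a remote-reasoning client can plug into.
import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

public class RemoteReasoningClient {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology =
                manager.loadOntologyFromOntologyDocument(new File("ontology.owl"));

        // Assumption: supplied by the reasoner vendor, or by a client library
        // that forwards reasoning requests to a remote server.
        OWLReasonerFactory factory = obtainReasonerFactory();
        OWLReasoner reasoner = factory.createReasoner(ontology);

        // A typical "intelligent" query answered by the reasoner.
        System.out.println("Consistent: " + reasoner.isConsistent());
        reasoner.dispose();
    }

    static OWLReasonerFactory obtainReasonerFactory() {
        throw new UnsupportedOperationException(
                "plug in FaCT++/Pellet locally, or a remote-backed factory");
    }
}
```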
5

An approach to client adaptation of the Java Collections Framework based on API migration techniques

MAIA, Mikaela Anuska Oliveira. 16 April 2018 (has links)
Despite the API diversity that the Java Collections Framework (JCF) provides, with diverse implementations of several data structures, developers may choose interfaces or classes that are inappropriate in terms of efficiency or purpose. This may happen because the API documentation is insufficient or because the developer does not analyze the context requirements carefully. Manual replacement, alongside an analysis of the program context, is possible, but it is tiresome and error-prone, discouraging the modification. In this work, we define a semi-automatic approach for (i) the selection of interfaces and implementations within the JCF and (ii) the modification of JCF clients, based on API migration techniques. The approach helps the user choose the most appropriate collection, based on requirements collected by means of simple yes/no questions. The selection is resolved with a decision tree that, from the answers given by the developer, decides which JCF interface (and implementation) is most adequate. After this decision, the actual program modification is performed by means of adapters, minimizing the source code modification. We evaluated the approach, as implemented in a supporting tool, with an experimental study comprising computer science students randomly distributed into groups, whose task was to change JCF clients by different methods: manually, using Eclipse's Java Search, and using our approach. The results were evaluated on quality, effort, and time spent. We found that most students had a hard time choosing the right interface or implementation for the given requirements. Our approach improved the effort of selecting the best collection for the requirements, saving some time in the process; regarding the quality of the selected collection, we observed the same behavior with both tools.
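To make the selection step concrete, here is a toy, hard-coded decision tree that maps yes/no answers to a JCF collection. The questions and tree shape are illustrative assumptions, not the tool's actual decision tree.

```java
// Toy decision tree over yes/no requirements, choosing a JCF collection.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.TreeSet;

public class CollectionSelector {
    static Collection<String> select(boolean allowDuplicates,
                                     boolean needSortedOrder,
                                     boolean needIndexedAccess) {
        if (allowDuplicates) {
            // Lists and deques accept duplicates; indexing decides between them.
            return needIndexedAccess ? new ArrayList<>() : new ArrayDeque<>();
        }
        // Sets reject duplicates; the ordering requirement picks the implementation.
        if (needSortedOrder) return new TreeSet<>(); // sorted order
        return new LinkedHashSet<>();                // insertion order, no sorting cost
    }

    public static void main(String[] args) {
        Collection<String> c = select(false, true, false);
        System.out.println("Chosen: " + c.getClass().getSimpleName()); // TreeSet
    }
}
```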
6

Feeding a data warehouse with data coming from web services. A mediation approach for the DaWeS prototype

Samuel, John 06 October 2014 (has links)
This thesis presents DaWeS [Samuel, 2014; Samuel and Rey, 2014; Samuel et al., 2014], a software platform for deploying and managing online data warehouses fed with data coming from web services and personalized for small and medium enterprises. The role of the data warehouse in business analytics cannot be overstated for any enterprise, regardless of its size, but the growing dependence on web services means that enterprise data is now managed by multiple autonomous and heterogeneous service providers. DaWeS extracts, transforms and stores enterprise data from web services and builds performance indicators from the stored data, hiding from end users the heterogeneity of the numerous underlying services. Its ETL process is grounded on a virtual data integration approach (mediation): a classical query rewriting algorithm, the inverse-rules algorithm, was adapted and tested for this purpose. A theoretical study of the semantics of conjunctive and datalog queries over relations with access limitations (corresponding to web services) yields upper bounds on the number of web service calls required to evaluate such queries; this bound is a tool for comparing future optimization techniques. We also present a heuristic for handling incomplete information. The mediation grounding enables DaWeS (i) to be configurable in a purely declarative manner (XML, XSLT, SQL, datalog) and (ii) to make part of the warehouse schema dynamic so that it can be easily updated; together these let DaWeS managers shift from development to administration when connecting new web services or updating the APIs (Application Programming Interfaces) of already connected ones, making DaWeS scalable and adaptable to the ever-changing and growing offer of web services. This also allows DaWeS to work with the vast majority of actual web service interfaces, which are defined with basic technologies only (HTTP, REST, XML and JSON) rather than more advanced standards (WSDL, WADL, hRESTS or SAWSDL), since the latter are not yet widely used to describe real web services. In terms of applications, the aim is to let a DaWeS administrator provide small and medium companies with a service to store and query the business data arising from their usage of third-party services, without those companies having to manage their own warehouse; in particular, DaWeS enables the easy design (as SQL queries) of personalized performance indicators. Experiments were conducted on real web services in three domains (online marketing, project management, and customer support services), together with a first series of random tests to assess scalability. We present the mediation approach for ETL and the architecture of DaWeS in detail.
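A schematic of one mediated ETL step in this spirit: call a plain REST/JSON service, extract a field, and load it into a warehouse table over JDBC. The endpoint, the ad-hoc JSON handling and the schema are placeholder assumptions, not DaWeS's code; a real deployment would drive the transformation from the declarative XSLT/datalog mappings described above.

```java
// One extract-transform-load step against a basic REST/JSON web service.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class EtlStep {
    public static void main(String[] args) throws Exception {
        // Extract: hypothetical marketing service exposing JSON over HTTP.
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                        URI.create("https://api.example.com/v1/campaigns"))
                .header("Accept", "application/json")
                .GET()
                .build();
        String json = http.send(req, HttpResponse.BodyHandlers.ofString()).body();

        // Transform: stand-in for the declarative mapping layer.
        String campaignName = extractField(json, "name");

        // Load: warehouse table assumed to exist; H2 used as a placeholder store.
        try (Connection db = DriverManager.getConnection("jdbc:h2:./warehouse");
             PreparedStatement ins = db.prepareStatement(
                     "INSERT INTO campaign_facts(name) VALUES (?)")) {
            ins.setString(1, campaignName);
            ins.executeUpdate();
        }
    }

    // Naive field extraction; a real tool would use a JSON parser or XSLT.
    static String extractField(String json, String field) {
        int i = json.indexOf("\"" + field + "\":\"");
        if (i < 0) return null;
        int start = i + field.length() + 4;
        return json.substring(start, json.indexOf('"', start));
    }
}
```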
7

A proposed API for developing multiuser, multi-device Digital TV applications using the Ginga middleware

Silva, Lincoln David Nery e 08 August 2008 (has links)
Progress in Interactive Digital TV (TVDI) applications does not occur at the same speed as in Web or desktop applications. This is due both to constraints in the hardware and middleware on which the applications run and to the limited device used to interact with the TV: the traditional remote control. In the Brazilian scenario, the Ginga middleware specification allows new functionality to be incorporated through the Device Integration API, which is the target of this dissertation. The API allows TVDI applications to use mobile devices both as a means of interaction and as a way to share their multimedia resources. As a result, TVDI applications gain possibilities not available in other existing Digital TV middlewares, such as the simultaneous use of more than one device, support for developing multiuser applications, and access to the continuous-media capture resources available on devices such as mobile phones, which can be integrated with TV sets. The proposed API was implemented and used to develop TVDI applications that explore the new advanced features available.
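Purely to illustrate the programming model such a device integration API suggests, the sketch below registers a listener for device arrivals and shared media. Every class and method name in it is hypothetical; the dissertation's actual API is not reproduced here.

```java
// Hypothetical multi-device TV application: all names are illustrative only.
public class MultiDeviceApp {
    interface DeviceListener {                       // hypothetical callback interface
        void onDeviceJoined(String deviceId);
        void onMediaReceived(String deviceId, byte[] frame);
    }

    static class DeviceIntegrationManager {          // hypothetical TV-side registry
        void register(DeviceListener listener) { /* middleware would wire this up */ }
    }

    public static void main(String[] args) {
        DeviceIntegrationManager manager = new DeviceIntegrationManager();
        manager.register(new DeviceListener() {
            @Override public void onDeviceJoined(String deviceId) {
                // A paired phone can now act as one user's input device
                // among several (multiuser support).
                System.out.println("Device joined: " + deviceId);
            }
            @Override public void onMediaReceived(String deviceId, byte[] frame) {
                // Continuous media captured on the phone (e.g. camera frames)
                // shared with the TV application.
                System.out.println(frame.length + " bytes from " + deviceId);
            }
        });
    }
}
```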
8

Large language models as an interface to interact with API tools in natural language

Tesfagiorgis, Yohannes Gebreyohannes, Monteiro Silva, Bruno Miguel January 2023 (has links)
In this research project, we explore the use of Large Language Models (LLMs) as an interface for interacting with API tools in natural language. Bubeck et al. [1] shed some light on how LLMs could be used to interact with API tools; since then, new versions of LLMs have been launched, and the question of how reliable an LLM can be at this task remains unanswered. The main goal of our thesis is to investigate the designs of the available system prompts for LLMs, identify the best-performing prompts, and evaluate the reliability of different LLMs when using the best-identified prompts. We employ a multiple-stage controlled experiment: a literature review in which we survey the system prompts used in the scientific community and in open-source projects; an analysis, using the F1-score as the metric, of the precision and recall of these system prompts, to select the best-performing ones for interacting with API tools; and a final stage in which we compare a selection of LLMs using the best-performing prompts identified earlier. From these experiments, we find that AI-generated system prompts perform better with GPT-4 than the prompts currently used in open-source projects and the literature, that zero-shot prompts perform better on this specific task with GPT-4, and that a good system prompt for one model does not generalize well to other models.
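The F1-score at the heart of the prompt comparison is simple to compute; the sketch below does so over placeholder counts of correct, spurious and missed API calls for one system prompt.

```java
// Precision, recall and F1 over tool-call decisions (placeholder counts).
public class F1Score {
    public static void main(String[] args) {
        int truePositives = 42;  // correct API calls the LLM made
        int falsePositives = 7;  // API calls it should not have made
        int falseNegatives = 11; // required API calls it missed

        double precision = truePositives / (double) (truePositives + falsePositives);
        double recall = truePositives / (double) (truePositives + falseNegatives);
        double f1 = 2 * precision * recall / (precision + recall);

        System.out.printf("precision=%.3f recall=%.3f F1=%.3f%n",
                precision, recall, f1);
    }
}
```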
9

Comparative study of open source and dot NET environments for ontology development.

Mahoro, Leki Jovial 05 1900 (has links)
M. Tech. (Department of Information & Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / Many studies have evaluated and compared the existing open-source Semantic Web platforms for ontology development, but none have included the dot NET-based semantic web platforms in their empirical investigations. This study conducted a comparative analysis of open-source and dot NET-based semantic web platforms for ontology development. Two popular dot NET-based semantic web platforms, SemWeb.NET and dotNetRDF, were analyzed and compared against open-source environments including the Jena Application Programming Interface (API), Protégé, and RDF4J, also known as the Sesame Software Development Kit (SDK). Metrics such as storage mode, query support, consistency checking, and interoperability with other tools were used to compare the two categories of platforms, with five ontologies of different sizes used in the experiments. The experimental results showed that the open-source platforms provide more facilities for creating, storing and processing ontologies than the dot NET-based tools. Furthermore, the experiments revealed that the open-source Protégé and RDF4J and the dot NET-based dotNetRDF provide both a graphical user interface (GUI) and a command-line interface for ontology processing, whereas the open-source Jena and SemWeb.NET are command-line-only platforms. Moreover, the open-source platforms can process multiple ontology file formats, including Resource Description Framework (RDF) and Web Ontology Language (OWL), whereas the dot NET-based tools process only RDF ontologies. Finally, the results indicate that the dot NET-based platforms have limited in-memory capacity, failing to load and query large ontologies that the open-source environments handled.
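As an example of the load-and-query task behind such comparisons, a minimal Jena program reads an ontology into an in-memory model and runs a SPARQL query over it; the file name and query are illustrative, not the study's benchmark.

```java
// Load an ontology with Jena and list its declared OWL classes via SPARQL.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class OntologyQuery {
    public static void main(String[] args) {
        // Storage mode: in-memory model (Jena also offers persistent stores).
        Model model = ModelFactory.createDefaultModel();
        model.read("ontology.owl"); // OWL serialized as RDF/XML

        // Query support: list every declared owl:Class.
        String sparql =
                "SELECT ?cls WHERE { ?cls a <http://www.w3.org/2002/07/owl#Class> }";
        Query query = QueryFactory.create(sparql);
        try (QueryExecution exec = QueryExecutionFactory.create(query, model)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().getResource("cls"));
            }
        }
    }
}
```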
