11

Řízení informačních toků využíváním systému Business Intelligence ve vybrané firmě / Management of Information Flows Using Business Intelligence in a Selected Company

Zemanovičová, Monika January 2015 (has links)
This diploma thesis proposes the use of Business Intelligence tools in a selected company. It estimates the installation costs, assesses the economic benefits and, based on this analysis, proposes appropriate solutions to the company's currently unsatisfactory situation.
12

Proximal Policy Optimization in StarCraft

Liu, Yuefan 05 1900 (has links)
Deep reinforcement learning is an area of research that has blossomed tremendously in recent years and has shown remarkable potential in computer games, and real-time strategy games have been an important testbed for game-playing artificial intelligence for several years. This thesis introduces an algorithm used to train agents to fight against computer bots. Games are excellent tools for testing deep reinforcement learning algorithms, offering insight into how well an algorithm performs in an isolated environment without real-life consequences, and real-time strategy games in particular are a complex genre that challenges artificial intelligence agents in both short-term and long-term planning. We first review the history of deep learning and reinforcement learning, then apply them to StarCraft. Proximal policy optimization (PPO) retains some of the benefits of trust region policy optimization (TRPO) while being much simpler to implement, more general across environments, and better in sample complexity. The StarCraft environment, the open-source Brood War Application Programming Interface (BWAPI), serves as the testbed. The results show that PPO works well in BWAPI and trains units to defeat their opponents, and the algorithm presented in the thesis is corroborated by experiments.
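The clipped surrogate objective is the core of the PPO algorithm the thesis builds on. A minimal sketch of that loss in Python/PyTorch, with illustrative names and the paper's usual default of clip_eps = 0.2 (a generic rendition of Schulman et al.'s objective, not code from the thesis):

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017)."""
    # Probability ratio r_t(theta) = pi_theta(a|s) / pi_theta_old(a|s),
    # computed from log-probabilities for numerical stability.
    ratio = torch.exp(new_logp - old_logp)
    # Unclipped and clipped surrogate objectives.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum; negate to obtain a loss.
    return -torch.min(unclipped, clipped).mean()
```

Taking the minimum removes the incentive to push the ratio outside the [1 - eps, 1 + eps] band, which is what gives PPO TRPO-like stability without an explicit trust-region constraint.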
13

RESTful API vs. GraphQL a CRUD performance comparison

Niklasson, Alexander, Werèlius, Vincent January 2023 (has links)
The utilization of Application Programming Interfaces (APIs) has experienced significant growth due to the increasing number of applications being developed. APIs serve as a means to transfer data between different applications. While RESTful has been the standard API since its emergence around 2000, it is now being challenged by Facebook's GraphQL, which was introduced in 2015. This study aims to fill a knowledge gap in the existing literature on API performance evaluation by extending the focus beyond read operations to include CREATE, UPDATE, and DELETE operations in both RESTful APIs and GraphQL. Previous studies have predominantly examined the performance of read operations, but there is a need to comprehensively understand the behavior and effectiveness of additional CRUD operations. To address this gap, we conducted a series of controlled experiments and analyses to evaluate the response time and RAM utilization of RESTful APIs and GraphQL when executing CREATE, UPDATE, and DELETE operations. We tested various scenarios and performance metrics to gain insights into the strengths and weaknesses of each approach. Our findings indicate that contrary to our initial beliefs, there are no significant differences between the two API technologies in terms of CREATE, UPDATE, and DELETE operations. However, RESTful did slightly outperform GraphQL in the majority of tests. We also observed that GraphQL's inherent batching functionality resulted in faster response times and lower RAM usage throughout the tests. On the other hand, RESTful, despite its simpler queries, exhibited faster response times in GET operations, consistent with related work. Lastly, our findings suggest that RESTful uses slightly less RAM compared to GraphQL in the context of CREATE, UPDATE, and DELETE operations.
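To make the compared operations concrete, here is a sketch of one CREATE request issued both ways; the endpoint paths, field names, and use of Python's requests library are illustrative assumptions, not the authors' actual test harness:

```python
import requests

BASE = "http://localhost:4000"  # hypothetical server under test

# CREATE via REST: POST a JSON body to a resource-specific URL.
rest_resp = requests.post(f"{BASE}/api/users",
                          json={"name": "Ada", "email": "ada@example.com"})
print(rest_resp.status_code, rest_resp.json())

# The same CREATE via GraphQL: POST a mutation to the single /graphql endpoint.
mutation = """
mutation CreateUser($name: String!, $email: String!) {
  createUser(name: $name, email: $email) { id name }
}
"""
gql_resp = requests.post(f"{BASE}/graphql",
                         json={"query": mutation,
                               "variables": {"name": "Ada",
                                             "email": "ada@example.com"}})
print(gql_resp.json())
```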
14

Development of a tool allowing to create and use JSON schemas so as to enhance the validation of existing projects

Charles-Elie, Simon January 2017 (has links)
A mobile application is typically divided into two sides that communicate with each other: the front-end (i.e. what the user can see and interact with on the phone) and the back-end (the hidden "server" side, which processes requests from the front-end). Ways to improve their production cycle are constantly investigated by corporations such as Applidium, a French startup specialized in mobile applications. For instance, the firm often has to deal with external back-ends that are not properly documented, which makes the development of products intricate. Furthermore, test and documentation files for certain parts of projects are written manually, which is time-consuming, and are all largely based on the same information (back-end descriptions). Hence, this information frequently finds itself scattered across different files, sometimes in different versions. Having identified the issues that most regularly disrupt the work of the company's employees, the thesis sets a number of goals to solve them, notably centralizing all back-end-related information into one authoritative source and automating the generation of test and documentation files. A tool (in the form of a web application) allowing users to describe back-ends, called Pericles, is then proposed as the outcome of the master thesis, to address the described problems and realize the defined objectives. Finally, a qualitative evaluation is performed through a questionnaire designed to assess how users feel the tool helps them in their work, which constitutes the metric for this project. The evaluation suggests that the implemented tool is relevant with respect to the fixed goals and can help Applidium's developers and project managers by making the development and validation of projects easier.
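The kind of schema-driven validation Pericles centralizes can be illustrated with the standard jsonschema package; the schema and response below are invented for illustration and are not drawn from the thesis:

```python
from jsonschema import validate, ValidationError

# Hypothetical description of one back-end response as a JSON schema.
user_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["id", "name"],
}

# A response captured from the back-end under test.
response = {"id": 42, "name": "Ada"}

try:
    validate(instance=response, schema=user_schema)
    print("response matches the documented schema")
except ValidationError as err:
    print("back-end drifted from its documentation:", err.message)
```

Keeping one such schema per endpoint in a single authoritative source is what lets test and documentation files be generated instead of written by hand.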
15

Forensic Insights: Analyzing and Visualizing Fitbit Cloud Data

Poorvi Umesh Hegde (17635896) 15 December 2023 (has links)
<p dir="ltr">Wearable devices are ubiquitous. There are over 1.1 billion wearable devices in the<br>market today[1]. The market is projected to grow at a rate of 14.6% annually till 2030[2].<br>These devices collect and store a large amount of data[3]. A major amount of this collected<br>data is stored in the cloud. For many years now, law enforcement organizations have been<br>continuously encountering cases that involve a wearable device in some capacity. There have<br>also been examples of how these wearable devices have helped in crime investigations and<br>insurance fraud investigations [4],[5],[6],[7],[8]. The article [4] performs an analysis of 5 case<br>studies and 57 news articles and shows how the framing of wearables in the context of the<br>crimes helped those cases. However, there still isn’t enough awareness and understanding<br>among law enforcement agencies on leveraging the data collected by these devices to solve<br>crimes. Many of the fitness trackers and smartwatches in the market today have more or<br>less similar functionalities of tracking data on an individual’s fitness-related activities, heart<br>rate, sleep, temperature, and stress [9]. One of the major players in the smartwatch space is<br>Fitbit. Fitbit synchronizes the data that it collects, directly to Fitbit Cloud [10]. It provides<br>an Android app and a web dashboard for users to access some of these data, but not all.<br>Application developers on the other hand can make use of Fitbit APIs to use user’s data.<br>These APIs can also be leveraged by law enforcement agencies to aid in digital forensic<br>investigations. There have been previous studies where they have developed tools that make<br>use of Fitbit Web APIs [11],[12], [13] but for various other purposes, not for forensic research.<br>There are a few studies on the topic of using fitness tracker data for forensic investigations<br>[14],[15]. But very few have used the Fitbit developer APIs [16]. Thus this study aims to<br>propose a proof-of-concept platform that can be leveraged by law enforcement agencies to<br>access and view the data stored on the Fitbit cloud on a person of interest. The results<br>display data on 12 categories - activity, body, sleep, breathing, devices, friends, nutrition,<br>heart rate variability, ECG, temperature, oxygen level, and cardio data, in a tabular format<br>that is easily viewable and searchable. This data can be further utilized for various analyses.<br>The tool developed is Open Source and well documented, thus anyone can reproduce the<br>process.<br>12<br></p>
16

Building a Modular Analytics Platform for 3D Home Design

Öhman, Jesper January 2022 (has links)
With society undergoing a large-scale technological revolution, companies often find themselves having to adapt by developing digitized products and applications. However, these applications typically produce large amounts of data. An additional problem that companies in the 3D home design industry face is having to provide a massive number of options for their products, which, without proper usage metrics, can lead to large amounts of wasted resources. This thesis aims to combat these problems by designing a prototype of a modular analytics platform, consisting of a data parser, a back-end server & API, and a web interface. The system is capable of displaying highly customizable visual graphs of user statistics as well as breakdowns of how often each product gets picked by the client's users. The system is built on a MERN stack, consisting of MongoDB, Express.js, React.js and Node.js, and is written purely in JavaScript. The thesis achieved moderate success, implementing a working modular analytics platform and providing tools that can correctly identify obsolete products, which in turn could reduce the amount of resources wasted by companies that adopt the solution.
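The product-usage breakdown at the heart of the platform can be pictured as a MongoDB aggregation. The thesis's own server is Node.js/Express; the pymongo version below, with invented collection and field names, is just a compact way to show the query shape:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["design_events"]  # hypothetical collection

# Count how often each product is picked, least-used first, so that
# rarely chosen products (candidates for retirement) surface at the top.
pipeline = [
    {"$match": {"action": "product_selected"}},
    {"$group": {"_id": "$productId", "picks": {"$sum": 1}}},
    {"$sort": {"picks": 1}},
]
for row in events.aggregate(pipeline):
    print(row["_id"], row["picks"])
```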
17

Multi agent system for web database processing, on data extraction from online social networks

Abdulrahman, Ruqayya January 2012 (has links)
In recent years, there has been a flood of continuously changing information from a variety of web resources such as web databases, web sites, web services and programs. Online Social Networks (OSNs) represent such a field, where huge amounts of information are posted online over time. Because OSNs offer a productive source of qualitative and quantitative personal information, researchers from various disciplines have contributed methods for extracting data from OSNs. However, there is limited research addressing automatic data extraction, and to the best of the author's knowledge, no research focuses on tracking the real-time changes of information retrieved from OSN profiles over time; this motivated the present work. This thesis presents different approaches for automated Data Extraction (DE) from OSNs: crawler, parser, Multi Agent System (MAS) and Application Programming Interface (API). Initially, a parser was implemented as a centralized system to traverse the OSN graph and extract each profile's attributes and list of friends from Myspace, the top OSN at that time, by parsing Myspace profiles and extracting the relevant tokens from the parsed HTML source files. A Breadth First Search (BFS) algorithm was used to travel across the generated OSN friendship graph in order to select the next profile for parsing. The approach was implemented and tested on two types of friends: top friends and all friends. In the case of top friends, 500 seed profiles were visited and 298 public profiles were parsed to obtain 2,197 top friends' profiles and 2,747 friendship edges; in the case of all friends, 250 public profiles were parsed to extract 10,196 friends' profiles and 17,223 friendship edges. This approach has two main limitations. First, the system is centralized and retrieves the information of each user's profile just once, which means the extraction process stops if the system fails to process one of the profiles, whether the seed profile (the first profile to be crawled) or one of its friends. To overcome this problem, an Online Social Network Retrieval System (OSNRS) is proposed to decentralize the DE process using MAS; the novelty of OSNRS is its ability to monitor profiles continuously over time. Second, the parser had to be modified to cope with changes in the profiles' structure. To overcome this problem, OSNRS is improved through the use of an API tool that enables OSNRS agents to obtain the required fields of an OSN profile despite modifications in the representation of the profile's source web pages. The experimental work shows that using an API and MAS simplifies and speeds up the process of tracking a profile's history. It also helps security personnel, parents, guardians, social workers and marketers understand the dynamic behaviour of OSN users. This thesis thus proposes solutions for extracting data from OSNs using a parser and MAS, and discusses their limitations and improvements.
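The BFS traversal described above can be sketched as follows; fetch_profile stands in for the Myspace parser (or, later, the API agent) and is a placeholder, not code from the thesis. Skipping unreachable profiles instead of halting is precisely the behaviour the centralized parser lacked:

```python
from collections import deque

def crawl_bfs(seed_id, fetch_profile, max_profiles=500):
    """Breadth-first traversal of an OSN friendship graph.

    fetch_profile(profile_id) is assumed to return (attributes, friend_ids)
    for public profiles and None for private or unreachable ones.
    """
    visited, edges, profiles = set(), [], {}
    queue = deque([seed_id])
    while queue and len(visited) < max_profiles:
        pid = queue.popleft()
        if pid in visited:
            continue
        visited.add(pid)
        result = fetch_profile(pid)
        if result is None:        # private/failed profile: skip, don't stop
            continue
        attributes, friends = result
        profiles[pid] = attributes
        for fid in friends:
            edges.append((pid, fid))  # record the friendship edge
            if fid not in visited:
                queue.append(fid)     # breadth-first: enqueue for later
    return profiles, edges
```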
18

Application of Web Mashup Technology to Oyster Information Services

Chuindja Ngniah, Christian 15 December 2012 (has links)
Web mashup is a lightweight technology used to integrate data from remote sources without direct access to their databases. As a data consumer, a Web mashup application creates new content by retrieving data through the Web application programming interface (API) provided by the external sources. As a data provider, the service program publishes its Web API and implements the specified functions. In the project reported in this thesis, we implemented two Web mashup applications to enhance the Web site oystersentinel.org: the Perkinsus marinus model and the Oil Spill model. Each model overlays geospatial data from a local database on top of a coastal map from Google Maps. In addition, we designed a Web-based data publishing service. In this experimental system, we demonstrated a Web mashup interface that allows outside developers to access the data about the local oyster stock assessment.
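The data-provider half of such a mashup can be sketched as a small web service that publishes the local database's records as GeoJSON for the map layer to consume; Flask, the route, and the field names are illustrative assumptions rather than the thesis's actual stack:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in rows for the local oyster database.
SAMPLES = [
    {"station": "BSL-1", "lat": 30.22, "lon": -89.33, "perkinsus_index": 2.1},
]

@app.route("/api/oyster-samples")
def oyster_samples():
    """Publish samples as GeoJSON so a Google Maps layer can overlay them."""
    features = [{
        "type": "Feature",
        # GeoJSON uses [longitude, latitude] order.
        "geometry": {"type": "Point", "coordinates": [s["lon"], s["lat"]]},
        "properties": {"station": s["station"],
                       "perkinsus_index": s["perkinsus_index"]},
    } for s in SAMPLES]
    return jsonify({"type": "FeatureCollection", "features": features})

if __name__ == "__main__":
    app.run(port=5000)
```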
19

InGriDE: um ambiente integrado e extensível de desenvolvimento para computação em grade / InGriDE: an integrated and extensible development environment for grid computing

Guerra, Eduardo Leal 07 May 2007 (has links)
Computational grids have evolved considerably over the past few years. These systems have been deployed in production environments in the academic research community and have attracted increasing interest from industry. However, developing applications over such a heterogeneous and distributed infrastructure is still a complex and error-prone process. Initiatives to facilitate this task have, in the majority of cases, resulted in isolated, middleware-specific tools. This work aims to minimize the difficulty of developing grid applications through the construction of an integrated and extensible development environment (IDE) for grid computing, called InGriDE. InGriDE provides a single set of tools, compatible with different middleware systems, built on the Grid Application Toolkit (GAT) programming interface. We developed the InGriDE feature set on the Eclipse platform, which both provides a framework for building IDEs and makes it easy to extend the initial set of features. To validate our solution, we used the InteGrade middleware, developed in our research group, as a case study. The results show the viability of providing middleware independence to IDEs through a generic application programming interface such as GAT. Moreover, the benefits obtained from using Eclipse as the framework for building IDEs indicate that this kind of framework satisfies the requirements inherent in the grid application development process in an efficient way.
20

Σχεδιασμός και ανάπτυξη διεπαφής πελάτη-εξυπηρετητή για υποστήριξη συλλογισμού σε κατανεμημένες εφαρμογές του σημαντικού ιστού / Design and development of a client-server interface to support reasoning in distributed Semantic Web applications

Αγγελόπουλος, Παναγιώτης 21 September 2010 (has links)
In the past few years, research on the development of the World Wide Web (WWW) has moved towards more intelligent and automated ways of discovering and extracting information. The Semantic Web is an extension of the current Web in which information is given explicitly defined meaning, enabling machines to better process and "comprehend" the data that until now they simply present. For the Semantic Web to function properly, computers must have access to organized collections of information, called ontologies. Ontologies provide a method of representing knowledge in the Semantic Web and can consequently be used by computing systems to conduct automated reasoning. To describe and represent the ontologies of the Semantic Web in machine-readable form, various initiatives have been proposed and are under development, the most important of which is the Web Ontology Language (OWL). This language constitutes the basis for representing knowledge in the Semantic Web, owing to its promotion by the W3C and its increasing adoption in applications. The main tool for developing applications that manage OWL ontologies is the OWL API, which consists of programming libraries and methods providing a high-level interface for accessing and handling OWL ontologies. The theoretical background that guarantees the expressivity and reasoning power of ontologies is provided by Description Logics, a well-defined, decidable subset of First Order Logic that makes the representation and discovery of knowledge in the Semantic Web possible. To discover implicit information, therefore, systems based on Description Logics, also called Reasoners, are used; characteristic examples are FaCT++ and Pellet. This is why both the OWL API and Reasoners are used by proposed models for developing next-generation (Web 3.0) Semantic Web applications, for communicating with and submitting expressive queries to knowledge bases. These models also propose the use of a 3-tier distributed architecture for building Semantic Web applications. The aim of this diploma thesis is to design and implement a Client-Server interface to support reasoning in distributed Semantic Web applications. The interface consists of two parts. The first provides the files needed to run a Reasoner on a remote machine (the server), so that this machine offers remote reasoning services. The second part (the client) contains files that extend the libraries of the OWL API, allowing an application implemented with the OWL API to use the services offered by a remote Reasoner. Our interface thus makes it possible for distributed architectures for Semantic Web applications to adopt the OWL API and Reasoners.
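The client half of such an interface can be pictured as a thin wrapper that ships an ontology to the remote reasoning service and reads back the verdict; the endpoint, payload shape, and check_consistency helper below are invented for illustration (the actual interface wraps the Java OWL API together with a reasoner such as Pellet or FaCT++):

```python
import requests

REASONER_URL = "http://reasoner.example.org:8080"  # hypothetical remote server

def check_consistency(ontology_path):
    """Send an OWL ontology to the remote reasoner and return its verdict,
    keeping the computationally heavy reasoning off the client machine."""
    with open(ontology_path, "rb") as f:
        resp = requests.post(f"{REASONER_URL}/reason/consistency",
                             files={"ontology": f})
    resp.raise_for_status()
    return resp.json()  # e.g. {"consistent": true, "millis": 124}

print(check_consistency("university.owl"))
```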
