  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Ontology-based data access with NoSQL

Agena, Barbara Tieko 27 November 2017 (has links)
Ontology-based data access (OBDA) aims to give users access to data without specific knowledge of how those data are stored in their sources. To this end, an ontology is used as a high-level conceptual layer, exploiting its capacity to describe the domain and to deal with incomplete data. Currently, NoSQL (Not Only SQL) systems are becoming popular, offering features that relational database systems do not support, which has created the need to adapt OBDA systems to these new types of databases. The objective of this research is to propose a new architecture for OBDA systems that allows access to data in both relational and NoSQL databases. To this end, we propose a simpler mapping responsible for the communication between the ontology and the databases. Two OBDA system prototypes were built, one for NoSQL systems and one for relational database systems, to empirically validate the proposed architecture.
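The core idea of an OBDA mapping, as described in the abstract above, can be sketched in a few lines. This is an illustrative simplification, not the thesis's actual mapping formalism: all class, property, collection and table names below are hypothetical.

```python
# Minimal OBDA-style sketch: a mapping routes a conceptual query over an
# ontology class to either a relational table or a NoSQL collection, hiding
# the storage details from the user. All names here are hypothetical.

MAPPING = {
    "Student": {"backend": "mongodb", "source": "students",
                "properties": {"hasName": "name", "enrolledIn": "course_id"}},
    "Course":  {"backend": "postgres", "source": "courses",
                "properties": {"hasTitle": "title"}},
}

def rewrite(concept, prop, value):
    """Rewrite an ontology-level triple pattern into a backend-specific query."""
    m = MAPPING[concept]
    field = m["properties"][prop]
    if m["backend"] == "mongodb":
        # A MongoDB-style filter document.
        return {"collection": m["source"], "filter": {field: value}}
    # A parameterized SQL query for relational sources.
    return {"sql": f"SELECT * FROM {m['source']} WHERE {field} = %s",
            "params": [value]}

print(rewrite("Student", "hasName", "Ana"))
print(rewrite("Course", "hasTitle", "Databases"))
```

The same conceptual query shape thus reaches two very different storage backends, which is the property the proposed architecture relies on.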
32

Analysis and evaluation of document-oriented structures

Gomez Barreto, Paola 13 December 2018 (has links)
Nowadays, millions of different data sources produce huge quantities of unstructured and semi-structured data that change constantly. Information systems must manage these data while ensuring scalability and performance. As a result, they have had to adapt to support heterogeneous databases, including NoSQL databases. These databases offer schema-free structures with great flexibility, but with no clear separation between the logical and physical layers. Data can be duplicated, fragmented and/or incomplete, and can also change as business needs evolve. The flexibility and absence of schema in document-oriented NoSQL systems such as MongoDB allow new structuring alternatives to be explored without facing those constraints. The choice of structure nevertheless remains important and critical, because there are several impacts to consider and many structuring options to choose from. We therefore propose to return to a design phase in which quality aspects and the impacts of the structure are taken into account, so that the decision can be made in a more informed manner. In this context, we propose SCORUS, a system for the analysis and evaluation of document-oriented structures. It aims to facilitate the study of document-oriented semi-structuring possibilities, such as those offered by MongoDB, and to provide objective metrics that highlight the advantages and disadvantages of each solution with respect to user needs. A design process can be composed of a sequence of three phases, each of which can also be performed independently for analysis and tuning purposes. The general strategy of SCORUS is:
1. Generation of a set of structuring alternatives: starting from a UML model of the data, automatically produce a large set of possible structuring variants for those data.
2. Evaluation of the alternatives using a set of structural metrics: take a set of structuring variants and compute the metrics against the modeled data.
3. Analysis of the evaluated alternatives: use the metrics to assess the interest of the considered alternatives and choose the most appropriate one(s).
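The three phases above can be illustrated on a toy 1-N association. The enumeration and the metric below are hypothetical stand-ins for SCORUS's actual generators and structural metrics; they only show the shape of the process.

```python
# Hypothetical sketch of the SCORUS-style process: from a tiny entity model,
# enumerate document-structuring alternatives (embedding vs referencing) and
# score them with a simple structural metric. Metric and names are illustrative.

def alternatives(parent, child):
    """Two classic document-oriented structurings of a 1-N association."""
    embed = {parent: {"fields": ["id", "name"],
                      "embedded": {child: ["id", "value"]}}}
    reference = {parent: {"fields": ["id", "name"]},
                 child:  {"fields": ["id", "value", f"{parent}_id"]}}
    return [("embedding", embed), ("referencing", reference)]

def depth(doc):
    """Structural metric: maximum nesting depth of a document schema."""
    if not isinstance(doc, dict):
        return 0
    return 1 + max((depth(v) for v in doc.values()), default=0)

for name, variant in alternatives("order", "line_item"):
    print(name, "-> collections:", len(variant), "depth:", depth(variant))
```

Embedding yields fewer collections but deeper documents; referencing yields flatter documents but more collections, which is exactly the kind of trade-off the metrics are meant to surface.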
33

Modeling multidimensional data warehouses into NoSQL

El Malki, Mohammed 08 December 2016 (has links)
Decision support systems occupy a prominent place in companies and large organizations, enabling analyses dedicated to decision making. With the advent of big data, the volume of analyzed data reaches critical sizes, challenging conventional data warehousing approaches, whose current solutions are mainly based on R-OLAP databases. With the emergence of major Web platforms such as Google, Facebook, Twitter and Amazon, solutions for managing big data have been developed and are collectively called "Not Only SQL" (NoSQL). These new approaches are an interesting avenue for building multidimensional data warehouses capable of handling large volumes of data. Questioning the R-OLAP approach requires revisiting the principles of multidimensional data warehouse modeling. In this manuscript, we propose processes for implementing multidimensional data warehouses with NoSQL models. We define four processes for each of the two NoSQL models considered, a column-oriented model and a document-oriented model, each process fostering a specific kind of processing. Moreover, the NoSQL context also complicates the efficient computation of the pre-aggregates that are typically set up in the R-OLAP context (the aggregate lattice). We extend our implementation processes to take the construction of the lattice into account in both retained models. Since it is difficult to choose a single NoSQL implementation that efficiently supports all applicable workloads, we propose two translation processes: the first covers intra-model processes, i.e., rules for passing from one implementation to another within the same NoSQL logical model, while the second defines the transformation rules from an implementation of one logical model to an implementation of another logical model.
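One common way to carry a star schema into a document-oriented model, in the spirit of the implementation processes described above, is to denormalize each fact with its dimension attributes into a single document. The sketch below is an illustration with made-up field names, not one of the thesis's four processes.

```python
# Illustrative document-oriented implementation of a multidimensional fact:
# embed the dimension attributes inside each fact document. Names and data
# are hypothetical sample values.

dimensions = {
    "date":    {"d1": {"day": 1, "month": "Jan", "year": 2016}},
    "product": {"p1": {"name": "widget", "category": "tools"}},
}

facts = [{"date_id": "d1", "product_id": "p1", "amount": 42.0}]

def to_documents(facts, dimensions):
    """Embed dimension attributes inside each fact document."""
    docs = []
    for f in facts:
        doc = {"measures": {"amount": f["amount"]}}
        for dim, table in dimensions.items():
            doc[dim] = table[f[f"{dim}_id"]]
        docs.append(doc)
    return docs

print(to_documents(facts, dimensions))
```

Aggregating such documents by any embedded dimension attribute (e.g. category per year) is what the pre-aggregate lattice mentioned above precomputes.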
34

Integrating heterogeneous data sources in the Web of Data

Michel, Franck 03 March 2017 (has links)
To a great extent, the success of the Web of Data depends on the ability to reach legacy data locked in silos inaccessible from the web. In the last 15 years, various works have tackled the problem of exposing structured data of various kinds in the Resource Description Framework (RDF). Meanwhile, the overwhelming success of NoSQL databases has made the database landscape more diverse than ever, and NoSQL databases are strong potential contributors of valuable linked open data. Hence, the object of this thesis is to enable RDF-based data integration over heterogeneous data sources and, in particular, to harness NoSQL databases to populate the Web of Data. We propose a generic mapping language, xR2RML, to describe the mapping of heterogeneous data sources into an arbitrary RDF representation. xR2RML relies on and extends previous works on the translation of relational databases, CSV/TSV and XML into RDF. With such an xR2RML mapping, we propose either to materialize RDF data or to dynamically evaluate SPARQL queries against the native database. In the latter case, we follow a two-step approach. The first step translates a SPARQL query into a pivot abstract query, based on the xR2RML mapping of the target database to RDF. In the second step, the abstract query is translated into a concrete query, taking into account the specificities of the database query language. Great care is taken of query optimization opportunities, both at the abstract and the concrete levels. To demonstrate the effectiveness of our approach, we have developed a prototype implementation for MongoDB, the popular NoSQL document store, and validated the method on a real-life use case from the Digital Humanities.
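The two-step translation described above can be sketched for a single triple pattern. The mapping and query shapes here are loose illustrations, not actual xR2RML syntax or the thesis's translation algorithm; the vocabulary terms and field names are assumed.

```python
# Minimal sketch of SPARQL-to-MongoDB translation in two steps:
# (1) triple pattern -> pivot abstract query, (2) abstract -> concrete query.
# Mapping, collection and field names are hypothetical.

mapping = {  # RDF property -> JSON field in the 'people' collection
    "foaf:name": "name",
    "foaf:mbox": "email",
}

def sparql_to_abstract(triple):
    """Step 1: a SPARQL triple pattern becomes an abstract (pivot) query."""
    subject, prop, value = triple
    return {"source": "people", "field": mapping[prop], "equals": value}

def abstract_to_mongo(aq):
    """Step 2: the abstract query becomes a concrete MongoDB find() filter."""
    return {"collection": aq["source"], "filter": {aq["field"]: aq["equals"]}}

aq = sparql_to_abstract(("?x", "foaf:name", "Alice"))
print(abstract_to_mongo(aq))
```

Keeping the pivot step separate is what lets the same front end target query languages other than MongoDB's, which is the point of the abstract level.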
35

Efficient and maintainable storage of medical data

Ekberg, Albin, Holm, Jacob January 2014 (has links)
Creating a database to manage medical data is not trivial. We create a database to be used by a presentation tool that presents medical data about patients stored in the database. We examine which of three databases, MySQL with a relational design, MySQL with an EAV design, or MongoDB, is best suited for storing medical data. The analysis is performed in two steps. The first step determines which database retrieves data most efficiently. The second step examines how easy it is to change the structure of each database. The results show that, depending on whether efficiency or maintainability matters most, different databases are the best choice: MySQL with a relational design proves most efficient, while MongoDB is the easiest to maintain.
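The entity-attribute-value (EAV) design compared above trades query speed for schema flexibility. The sketch below illustrates that trade-off with plain dictionaries; the attribute names are made-up examples, not the thesis's schema.

```python
# Relational vs EAV, sketched with plain Python data. In the relational
# design each attribute is a column; in EAV each (entity, attribute, value)
# is its own row, so new attributes need no schema change.

# Relational design: one column per attribute, fixed schema.
relational_row = {"patient_id": 1, "pulse": 72, "temperature": 37.1}

# EAV design: one row per (entity, attribute, value), flexible schema.
eav_rows = [
    {"entity": 1, "attribute": "pulse", "value": 72},
    {"entity": 1, "attribute": "temperature", "value": 37.1},
]

def eav_to_row(entity, rows):
    """Pivot EAV rows back into a single record: the extra work that makes
    EAV slower to query but easier to extend with new attributes."""
    record = {"patient_id": entity}
    for r in rows:
        if r["entity"] == entity:
            record[r["attribute"]] = r["value"]
    return record

print(eav_to_row(1, eav_rows))
```

The pivot step runs on every read, which is one plausible reason the relational design came out ahead on retrieval efficiency.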
36

Prevention of Privilege Abuse on NoSQL Databases : Analysis of MongoDB access control

Ishak, Marwah January 2021 (has links)
Database security is vital to retain the confidentiality and integrity of data and to prevent security threats such as privilege abuse. The most common form is excessive privilege abuse, which entails assigning users privileges beyond their job function; these can be abused deliberately or inadvertently. The thesis's objective is to determine how to prevent privilege abuse in the NoSQL database MongoDB. Prior studies have noted the importance of access control in securing databases against privilege abuse: access control is essential to manage and protect the accessibility of stored data and to restrict unauthorised access. The study therefore analyses MongoDB's embedded access control through experimental testing of various built-in and advanced privilege roles. The results indicate that privilege abuse can be prevented if users are granted roles composed of the least privileges, and that assigning users excessive privileges exposes the system to privilege abuse. The study also underlines that inaccurate allocation of privileges or permissions to database users may have profound consequences for the system and organisation, such as data breaches and data manipulation. Hence, organisations that utilise information technology should be obliged to protect their interests and databases from outsiders as well as their own members through access control policies.
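The least-privilege principle tested above can be made concrete with MongoDB-style privilege documents (a resource plus a list of allowed actions). The check below is a plain-Python simulation of the idea, not MongoDB's actual enforcement code; the role and collection names are invented.

```python
# Simulation of role-based least privilege: an operation is allowed only if
# some privilege in the user's role grants that action on that resource.
# Privilege documents mimic MongoDB's resource/actions shape; names are
# hypothetical.

read_only_role = {
    "role": "reportReader",
    "privileges": [{"resource": {"db": "sales", "collection": "reports"},
                    "actions": ["find"]}],
}

def is_authorized(role, db, collection, action):
    """Return True only if a privilege grants `action` on (db, collection)."""
    for p in role["privileges"]:
        r = p["resource"]
        if r["db"] == db and r["collection"] == collection \
           and action in p["actions"]:
            return True
    return False

# Reading reports is allowed; writing is denied, so a compromised or
# careless account cannot manipulate the data.
print(is_authorized(read_only_role, "sales", "reports", "find"))    # True
print(is_authorized(read_only_role, "sales", "reports", "insert"))  # False
```

Granting only `find` on exactly one collection is what "roles composed of the least privileges" means in practice: every action not explicitly granted is refused.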
37

A Comparative Analysis of Database Management Systems for Time Series Data

Verner-Carlsson, Tove, Lomanto, Valerio January 2023 (has links)
Time series data refers to data recorded over time, often periodically, and can rapidly accumulate into vast quantities. To effectively present, analyse, or conduct research on such data, it must be stored in an accessible manner, for which database management systems (DBMSs) are employed. There are numerous types of such systems, each with its own advantages and disadvantages, making different trade-offs between desired qualities. In this study we conduct a performance comparison between two contrasting DBMSs for time series data. The first system evaluated is PostgreSQL, a popular relational DBMS, equipped with the time series-specific extension TimescaleDB. The second is MongoDB, one of the most well-known and widely used NoSQL systems, with out-of-the-box time series support. We address the question of which of these DBMSs is better suited for time series data by comparing their query execution times. This involves setting up two databases populated with sample time series data, in our case publicly available weather data from the Swedish Meteorological and Hydrological Institute (SMHI). Subsequently, a set of trial queries designed to mimic real-world use cases is executed against each database while measuring runtimes. The benchmark results are compared and analysed query by query to identify relative performance differences. Our study finds considerable variation in the relative performance of the two systems, with PostgreSQL outperforming MongoDB in some queries (by up to more than two orders of magnitude) and MongoDB executing faster in others (by a factor of over 30 in one case). Based on these findings, we conclude that certain queries, and their corresponding real-world use cases, may be better suited to one of the two DBMSs, due to the alignment between query structure and the strengths of that system. The results indicate that TimescaleDB is better at complex queries and queries that extract all data within a given time interval, whereas MongoDB performs better when only data from a subset of the measuring stations is requested. We further explore other possible explanations for these results, elaborating on factors that affect the efficiency with which each DBMS can execute the provided queries, and consider potential improvements.
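One structural reason document stores and relational time-series extensions perform differently is how measurements are grouped on disk. The sketch below illustrates the bucketing idea (one document per station per day) often used in document stores for time series; the field names and sample data are assumptions, not the thesis's benchmark code.

```python
# Group raw (station, timestamp, value) readings into per-day bucket
# documents, versus one row per reading in a relational layout. Sample
# weather-style data is made up.

from collections import defaultdict
from datetime import datetime

readings = [
    ("stationA", datetime(2023, 1, 1, 0, 0), -3.2),
    ("stationA", datetime(2023, 1, 1, 1, 0), -2.9),
    ("stationB", datetime(2023, 1, 1, 0, 0), 0.4),
]

def bucket_by_day(readings):
    """Build one document per (station, day) holding all its measurements."""
    buckets = defaultdict(list)
    for station, ts, temp in readings:
        buckets[(station, ts.date())].append({"t": ts.isoformat(), "temp": temp})
    return [{"station": s, "day": d.isoformat(), "measurements": ms}
            for (s, d), ms in buckets.items()]

docs = bucket_by_day(readings)
print(len(docs), "bucket documents from", len(readings), "readings")
```

Queries that fetch one station's data touch a single bucket, while range scans across all stations must open every bucket, which is consistent with the per-query variation reported above.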
38

Analysis and comparison of relational databases when processing spatial data : A study on the performance of relational databases

Karlsson, David January 2023 (has links)
There are a large number of databases used in many different areas, some of which can process spatial data. The problem this entails is choosing the database that handles a given type of spatial data with the best performance. This report presents an analysis of this question based on a dataset obtained from Norconsult Digital. The chosen databases comprise three SQL databases (PostgreSQL, MySQL and SQLite) and one NoSQL database (MongoDB). These databases underwent five equivalent operations/tests, with PostgreSQL, using its GiST/SP-GiST indexes, and MongoDB performing at a level well above the rest of the databases tested. Based on this work, it can be concluded that more detailed performance tests should be carried out, including larger and more complex datasets as well as more alternative databases and spatial indexes. This would give a better picture of which databases with spatial data support perform best.
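The workload the compared indexes accelerate is typified by the bounding-box query below. A GiST/SP-GiST index in PostgreSQL, or a geospatial index in MongoDB, answers it without scanning every row; the linear scan shown here is the naive baseline such indexes avoid. The coordinates are made-up sample points, not the Norconsult Digital dataset.

```python
# Naive baseline for a spatial query: O(n) bounding-box filter over
# (id, x, y) points. Spatial indexes exist precisely to avoid this scan.

points = [("p1", 18.06, 59.33), ("p2", 11.97, 57.71), ("p3", 13.19, 55.71)]

def in_bbox(points, min_x, min_y, max_x, max_y):
    """Return ids of all points inside the axis-aligned bounding box."""
    return [pid for pid, x, y in points
            if min_x <= x <= max_x and min_y <= y <= max_y]

# Bounding box roughly around Stockholm.
print(in_bbox(points, 17.0, 58.0, 19.0, 60.0))
```

A tree-structured spatial index partitions space so that whole regions outside the box are skipped at once, which is why the indexed systems pulled far ahead in the tests.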
39

Web application for data filtration and visualization : developed with the Python framework Dash

Blomqvist, Andreas, de Brun Mangs, William, Elfstrand, Tobias, Grahn, David, Holm, Hampus, Matstoms, Axel, Mersh, Jamie, Ngo, Robin, Wåtz, Christopher January 2023 (has links)
This report covers the creation of a web application for filtering and visualizing data in the Python framework Dash. The report gives an overview of the group's working methodology and the project's development. The web application was developed within the course TDDD96 Bachelor Project in Software Engineering by nine students in the Computer Engineering and Software Engineering programmes. The project group received the assignment from the company Ericsson. The project resulted in a working web application with the requested functionality. The result and the working methodology, including test-driven development, are discussed in the report with a focus on how the development process was improved. The report concludes that the Dash framework is well suited for web development in a smaller project, particularly for data visualization, and that the product creates value for the customer.
40

Reducing Unnecessary Sign-ups by the Development Solution of Super-client Driving Multiple Sub-clients (SDMS)

Zhao, Xiaolin January 2021 (has links)
Nowadays more and more web applications are becoming part of everybody's daily life, and many Internet users are bothered by having to create new accounts on websites. At the same time, sign-up combined with sign-in is widely regarded as a registration solution that is difficult to replace. In this thesis we consider a scenario in which a number of people need to cooperate for a short period on certain tasks using a web application. If everyone has to create an account, this is a significant annoyance, since it increases everyone's work and extends the working period. For this reason we propose a possible solution in which one user with an account acts as a super-client and generates short-lived login codes or links for the others, who act as sub-clients. This solution is called SDMS, short for Super-client Driving Multiple Sub-clients. The thesis contains the description and analysis of SDMS as well as the design and development of an example application: an online board game assistance platform, whose user scenario exactly matches the case of multiple users cooperating on a certain task described above. We conclude that SDMS can improve the user experience in certain scenarios.
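The short-lived login codes at the heart of SDMS can be sketched with the standard library. The code format, expiry policy, and single-use rule below are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of SDMS-style codes: the super-client issues random short-lived
# codes; each sub-client redeems one once. The in-memory dict stands in for
# whatever store a real application would use.

import secrets
import time

ISSUED = {}  # code -> expiry timestamp

def issue_code(ttl_seconds=300):
    """Super-client side: create a random code valid for ttl_seconds."""
    code = secrets.token_urlsafe(8)
    ISSUED[code] = time.time() + ttl_seconds
    return code

def redeem_code(code):
    """Sub-client side: a code logs in once, then is invalidated."""
    expiry = ISSUED.pop(code, None)
    return expiry is not None and time.time() < expiry

code = issue_code()
print(redeem_code(code))  # first use succeeds -> True
print(redeem_code(code))  # second use fails (single-use) -> False
```

Making codes both short-lived and single-use limits the damage if a link leaks, while still sparing sub-clients the account-creation step the thesis sets out to remove.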
