81

Masterdata och API / Masterdata and API

Alvin, Axel, Axelborn, Lukas January 2022 (has links)
Today's society depends on a constant flow of information and data. Companies and organisations often hold huge amounts of data, ranging from customer and staff records to sales statistics and patient journals. The pace of change has been rapid, and many companies and organisations have not had the time or resources to keep their systems up to date to handle these volumes of data. In this thesis, the task was to link databases from multiple systems in order to make their maintenance and management easier. These systems generally process the same type of data (personnel data divided into groups in the form of units), but name it in different ways, for example with different IDs. As a result, the data is unrelated in a way that makes it very difficult to determine which units correspond to each other, as they have no common denominator. As a solution, two additional databases were created and connected to the others through an API, where data is linked by being assigned a common ID, a master ID. In this way, users and developers can easily search for an object from one system and get back all the data for the corresponding objects in the other systems. In addition, a semi-automated system was created in the form of a user interface used for linking objects.
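A minimal sketch of the master-ID idea described above, assuming a relational mapping table (all table, column, and system names here are hypothetical, not taken from the thesis):

```python
import sqlite3

# Illustrative master-ID mapping: each row ties a local object ID in one
# source system to a shared master ID (all names here are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE master (master_id INTEGER PRIMARY KEY);
    CREATE TABLE mapping (
        master_id INTEGER REFERENCES master(master_id),
        system    TEXT NOT NULL,   -- e.g. 'hr' or 'payroll'
        local_id  TEXT NOT NULL,   -- the object's ID inside that system
        UNIQUE (system, local_id)
    );
""")

def lookup(system: str, local_id: str):
    """Given one system's ID, return the IDs of the corresponding
    objects in every linked system (including the queried one)."""
    return con.execute("""
        SELECT m2.system, m2.local_id
        FROM mapping m1 JOIN mapping m2 USING (master_id)
        WHERE m1.system = ? AND m1.local_id = ?
    """, (system, local_id)).fetchall()
```

An API endpoint placed in front of a lookup like this would then let users and developers resolve an object from one system into all of its counterparts.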
82

Návrh databáze pro připojení systému SAP jako zdroje dat pro webovou aplikaci / Database design for connecting SAP as a data source for a Web application

MARHOUN, Lukáš January 2016 (has links)
The thesis deals with connecting the SAP ERP system to a local MS SQL Server database using SAP BI tools, with data synchronization between the systems, and with advanced use of the T-SQL language to prepare data for web applications and reports written in PHP. The thesis contains a brief overview of the SAP system and the possibilities for connecting to it. The general principles of the described solution can be used in conjunction with other systems and programming languages.
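As an illustration of the pattern the abstract describes (the thesis used PHP; Python with pyodbc stands in here, and the connection details, database, and procedure names are all placeholders):

```python
import pyodbc  # assumes the Microsoft ODBC driver for SQL Server is installed

# Hypothetical: call a T-SQL procedure that pre-aggregates data synchronized
# from SAP, so the web layer only reads rows already prepared in the database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=sap_mirror;UID=web;PWD=secret"
)
rows = conn.cursor().execute(
    "EXEC dbo.usp_sales_summary @year = ?", 2016
).fetchall()
for row in rows:
    print(row)
```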
83

Utveckling av produktprototyp för sortering av hushållsavfall / Development of a product prototype for sorting of household waste

Hamrin, Hamrin January 2015 (has links)
Embedded systems are becoming ever more involved in our daily lives thanks to the concept of the Internet of Things (IoT). An important step in this development is the communication between the systems being used. The possibility of sending data in a compressed format based on a standard protocol, and of using a server with built-in functions, can be a good basis for complex IoT system solutions. The lightweight Message Queuing Telemetry Transport (MQTT) protocol is described as a protocol that minimizes bottlenecks in machine-to-machine (M2M) communication while offering a number of security features: data encryption, unique user credentials (username and password) with authentication, and three Quality of Service (QoS) levels, since the data is transmitted over TCP/IP. Together with such a server solution, this report examines the possibility of implementing the protocol in real communication between a development board and an Android mobile application, where the data is handled by the HiveMQ broker, stored in a MySQL database, and then transferred via a web server to the mobile application. The purpose of the report is thus to examine the feasibility of implementing MQTT in a real scenario with the HiveMQ broker. The project resulted in a complete communication solution demonstrating that the protocol can be implemented, together with a theoretical explanation of the security measures that can be taken and of how well the protocol can scale in a theoretical example. During the work, the CC3200 LaunchPad development board was used as the target platform. Keywords: CC3200 LaunchPad, HiveMQ, Broker, SQL, Android
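A hedged sketch of the communication pattern described above, using the paho-mqtt client library (1.x-style API; broker host, credentials, and topic are placeholders — the thesis's own client ran on the CC3200 board, not in Python):

```python
import ssl
import paho.mqtt.client as mqtt  # assumes paho-mqtt, 1.x-style API

# Publish a sensor reading over TLS to a broker (e.g. HiveMQ) with
# credentials and QoS 1; all connection details are placeholders.
client = mqtt.Client(client_id="cc3200-demo")
client.username_pw_set("device-user", "device-password")
client.tls_set(tls_version=ssl.PROTOCOL_TLS_CLIENT)  # encryption over TCP/IP
client.connect("broker.example.com", 8883)
client.loop_start()
# QoS 1: the broker acknowledges receipt, so the message arrives at least once.
client.publish("sensors/cc3200/temperature", payload="23.5", qos=1)
client.loop_stop()
client.disconnect()
```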
84

Monitoring and Analysis of Disk throughput and latency in servers running Cassandra database

Kalidindi, Rajeev varma January 2016 (has links)
Context. Lightweight process virtualization has been used in the past, e.g., Solaris Zones, jails in FreeBSD, and Linux containers (LXC). But only since 2013 has there been kernel support for user namespaces and process-grouping control that makes lightweight virtualization interesting for creating virtual environments comparable to virtual machines. Telecom providers have to handle massive growth of information due to the growing number of customers and devices. Traditional databases are not designed to handle such massive data ballooning; NoSQL databases were developed for this purpose. Cassandra, with its high read and write throughput, is a popular NoSQL database for handling this kind of data. Running the database using operating-system virtualization, or containerization, offers a significant performance gain compared to virtual machines, and also gives the benefits of migration, fast boot-up and shutdown times, lower latency, and less use of the servers' physical resources. Objectives. This thesis investigates the trade-off in performance while loading a Cassandra cluster in bare-metal and containerized environments. A detailed study of the effect of loading the cluster on each individual node, in terms of latency, CPU, and disk throughput, is presented. Methods. We implement the physical model of the Cassandra cluster based on realistic and commonly used scenarios for database analysis in our experiment. We generate different load cases on the cluster for the bare-metal and Cassandra-in-Docker scenarios and measure CPU utilization, disk throughput, and latency using standard tools like sar and iostat. Statistical analysis (mean-value analysis, higher-moment analysis, and confidence intervals) is performed on measurements of specific interfaces in order to increase the reliability of the results. Results. Experimental results show a quantitative analysis of measurements of latency, CPU, and disk throughput while running a Cassandra cluster in bare-metal and container environments, and a statistical analysis summarizing the performance of the Cassandra cluster is presented. Conclusions. The detailed analysis showed that the resource utilization of the database was similar in both the bare-metal and container scenarios. Disk throughput is similar in the case of mixed load, and containers have a slight overhead in the case of write loads, both at maximum load and at 66% of maximum load. The latency values inside the container are slightly higher for all cases. The mean-value and higher-moment analyses allow a finer analysis of the results. The calculated confidence intervals show a lot of variation in disk performance, which might be due to compactions happening at random times. Future work in the area can address compaction strategies.
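A rough sketch of the kind of measurement and statistical summary described in Methods, assuming the sysstat iostat tool (the device name and column index are assumptions that vary between systems and sysstat versions; this is not the thesis's actual tooling):

```python
import statistics
import subprocess

# Collect one-second iostat samples for a single device and summarize them
# with a mean and a rough 95% confidence interval.
DEVICE = "sda"
WKBS_COL = 7  # write-throughput column in `iostat -dxk`; varies by version

out = subprocess.run(
    ["iostat", "-dxk", DEVICE, "1", "30"],
    capture_output=True, text=True, check=True,
).stdout

samples = [
    float(line.split()[WKBS_COL])
    for line in out.splitlines()
    if line.startswith(DEVICE)
][1:]  # drop the first report, which covers the time since boot

mean = statistics.mean(samples)
half = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5  # normal approx.
print(f"write throughput: {mean:.1f} ± {half:.1f} kB/s (95% CI)")
```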
85

Adaptation of Relational Database Schema

Chytil, Martin January 2012 (has links)
In the presented work we study the evolution of a database schema and its impact on related issues. The work contains a review of important problems related to changes in the underlying storage of the data, and describes existing approaches to these problems as well. In detail, the work analyzes the impact of database schema changes on the database queries that relate to the particular schema. The approach presented in this thesis shows the ability to model database queries together with a database schema model. The thesis describes a solution for adapting database queries to the evolved database schema. Finally, the work contains a number of experiments that verify the proposed solution.
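A toy sketch of the adaptation idea (real approaches, including the one in this thesis, work on a model of the query and schema rather than on raw text; the rename mapping here is hypothetical):

```python
import re

# When the schema evolves, queries that reference renamed elements are
# adapted from a recorded mapping. Plain word substitution is only for
# illustration; a real system rewrites the query's syntax tree.
renames = {"employee": "staff", "dept_no": "department_id"}  # hypothetical

def adapt(query: str) -> str:
    for old, new in renames.items():
        query = re.sub(rf"\b{re.escape(old)}\b", new, query)
    return query

print(adapt("SELECT name, dept_no FROM employee WHERE dept_no = 42"))
# -> SELECT name, department_id FROM staff WHERE department_id = 42
```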
86

Agilní správa databáze / Agile database management

Kotyza, David January 2009 (has links)
This diploma thesis is focused on agile management of relational databases. The goal is to provide a detailed analysis of the changes performed on a daily basis by DBAs or software developers, and to describe how these changes can greatly affect the performance of a database system and its data. The principles of the best-known development methodologies are described in the first part (chapters 2 and 3). The second part (chapter 4) describes the basic steps of agile strategies that are often used in application solutions. Finally, the third part (chapter 5 and following) contains detailed information about commonly performed database tasks.
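One common agile-database practice in this area, sketched as an illustration only (versioned migrations are not necessarily the thesis's own approach; the table and statements are hypothetical):

```python
import sqlite3

# Versioned migrations: schema changes are ordered, numbered scripts, so a
# database at any version can be brought up to date repeatably and the
# change history is reviewable like any other code.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
current = con.execute(
    "SELECT COALESCE(MAX(v), 0) FROM schema_version"
).fetchone()[0]

for version, ddl in MIGRATIONS:
    if version > current:
        con.execute(ddl)
        con.execute("INSERT INTO schema_version VALUES (?)", (version,))
        con.commit()
```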
87

En prestandajämförelse mellan databaskopplingar i R / A Performance Comparison between Database Connections in R

Linnarsson, Gustaf January 2015 (has links)
Traditional databases have long been built on the relational data model and written in SQL. But as datasets grew larger, more capacity was needed to store them, which is why NoSQL was created. With such large amounts of data, analyzing it all naturally became interesting, but at these volumes it is impossible to go through the data row by row. In the world of statistics and analysis there are a number of tools; one of these is R. This study tries to find out whether any database alternative is better than the others at working together with R. The purpose is to give companies and private individuals, through an experiment, a good picture of what to choose when it comes to database alternatives and of the simplest way to bring in data for analysis. The result of the experiment shows that MySQL was the faster alternative for the dataset used; this would likely shift if larger datasets were tested.
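The shape of such a timing experiment, sketched in Python rather than R, with SQLite standing in for the databases compared so that the sketch runs anywhere (the thesis itself measured MySQL and an alternative from R):

```python
import sqlite3
import time

# Time how long fetching a table into the analysis environment takes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x REAL)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100_000)])

start = time.perf_counter()
rows = con.execute("SELECT x FROM t").fetchall()
elapsed = time.perf_counter() - start
print(f"fetched {len(rows)} rows in {elapsed:.3f} s")
```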
88

Kundportal

Karelius, Martin January 2019 (has links)
Dynamic Precision is an electronics company in Herrljunga. One branch of Dynamic Precision is the repair of electronic equipment for third-party companies: these companies send in units from their end customers, and Dynamic Precision repairs the units and returns them, usually directly to the end customer. Dynamic Precision needed a customer portal so that their customers could check the status of their submitted equipment, and this customer portal became an internship project. The project was developed in ASP.NET Core and its database was created in Microsoft SQL Server. The project was completed with the time plan largely intact, even though the construction of the middleware took longer than planned. The focus of the project has been on creating a usable and secure application, and a reasonably secure solution has been reached. The planned number of users is very low, but the application's accessibility still needs to be improved so that a user's possible disability won't prevent them from using the application.
89

Physical Plan Instrumentation in Databases: Mechanisms and Applications

Psallidas, Fotis January 2019 (has links)
Database management systems (DBMSs) are designed with the goal of compiling SQL queries to physical plans that, when executed, provide results to the SQL queries. Building on this functionality, an ever-increasing number of application domains (e.g., provenance management, online query optimization, physical database design, interactive data profiling, monitoring, and interactive data visualization) seek to operate on how queries are executed by the DBMS, for a wide variety of purposes ranging from debugging and data explanation to optimization and monitoring. Unfortunately, DBMSs provide little, if any, support to facilitate the development of this class of important application domains. The effect is that database application developers and database system architects either rewrite the database internals in ad-hoc ways; work around the SQL interface, if possible, with inevitable performance penalties; or even build new databases from scratch only to express and optimize their domain-specific application logic over how queries are executed. To address this problem in a principled manner, in this dissertation we introduce a prototype DBMS, namely, Smoke, that exposes instrumentation mechanisms in the form of a framework that allows external applications to manipulate physical plans. Intuitively, a physical plan is the underlying representation that DBMSs use to encode how a SQL query will be executed, and providing instrumentation mechanisms at this representation level allows applications to express and optimize their logic over how queries are executed. Having such an instrumentation-enabled DBMS in place, we then consider how to express and optimize applications whose logic relies on how queries are executed. To best demonstrate the expressive and optimization power of instrumentation-enabled DBMSs, we express and optimize applications across several important domains, including provenance management, interactive data visualization, interactive data profiling, physical database design, online query optimization, and query discovery. Expressivity-wise, we show that Smoke can express known techniques, introduce novel semantics on known techniques, and introduce new techniques across domains. Performance-wise, we show, case by case, that Smoke is on par with or up to several orders of magnitude faster than state-of-the-art imperative and declarative implementations of important applications across domains. As such, we believe our contributions provide evidence for, and form the basis towards, a class of instrumentation-enabled DBMSs with the goal of expressing and optimizing applications across important domains with core logic over how queries are executed by DBMSs.
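A toy illustration of the physical-plan instrumentation idea (a sketch, not Smoke's actual API): operators are modeled as iterators, and an instrumented operator observes every tuple flowing through it, e.g., to capture provenance, without changing the query's results.

```python
# Operators as Python generators; `instrument` wraps any operator and
# records each tuple it emits, leaving the results unchanged.
def scan(rows):
    yield from rows

def select(pred, child):
    return (r for r in child if pred(r))

def instrument(op, observed):
    for row in op:
        observed.append(row)  # e.g. a provenance-capture point
        yield row

rows = [("alice", 34), ("bob", 19), ("carol", 52)]
captured = []
plan = instrument(select(lambda r: r[1] > 30, scan(rows)), captured)
print(list(plan))  # query results: [('alice', 34), ('carol', 52)]
print(captured)    # tuples observed at the instrumented operator
```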
90

A fuzzy database query system with a built-in knowledge base.

January 1995 (has links)
by Chang Yu. Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 111-115). Contents: 1 Introduction; 2 Review of Related Works (Deduce2, ARES, VAGUE, fuzzy-sets-based approaches); 3 A Fuzzy Database Query Language (basic concepts of fuzzy sets; syntax; fuzzy operators AND, OR, COMB, POLL, HURWICZ, REGRET); 4 System Design (requirements, representation of membership functions, overall architecture, interface, knowledge base, parser, ORACLE, data manager, fuzzy processor); 5 Implementation (knowledge base, concept trees, data manager, dynamic library, precompiling process, calling standard); 6 Case Studies (a database for job application/recruitment; crisp, fuzzy, and concept queries; fuzzy match; fuzzy operators); 7 Conclusion; Appendix A: sample data in the database.
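A small sketch of the core idea behind fuzzy querying (classical Zadeh min/max operators; per the contents above, the thesis's own operator set also includes COMB, POLL, HURWICZ, and REGRET, and the membership functions here are hypothetical):

```python
# Each record gets a membership degree in [0, 1] instead of a yes/no match.
def young(age):
    return max(0.0, min(1.0, (40 - age) / 20))

def well_paid(salary):
    return max(0.0, min(1.0, (salary - 20000) / 30000))

people = [("alice", 25, 45000), ("bob", 38, 28000), ("carol", 45, 60000)]

# Fuzzy AND as min, fuzzy OR as max (the classical operators).
for name, age, salary in people:
    print(f"{name}: {min(young(age), well_paid(salary)):.2f}")
```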
