  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Abordagem para integração automática de dados estruturados e não estruturados em um contexto Big Data / Approach for automatic integration of structured and unstructured data in a Big Data context

Saes, Keylla Ramos 22 November 2018 (has links)
The growing amount of data available for use has sparked interest in generating knowledge by integrating such data. However, the integration task requires knowledge of the data and of the data models used to represent them; that is, it requires the participation of computing specialists, which limits its scalability. In a Big Data context this limitation is reinforced by the presence of a wide variety of heterogeneous sources and data representation models, such as relational models with structured data and non-relational models with unstructured data, and this variety adds complexity to the data integration process. Handling this scenario requires integration tools that reduce or even eliminate the need for human intervention. As its contribution, this work enables the integration of diverse data representation models and heterogeneous data sources through an approach that allows varied techniques to be used, for example structural-similarity comparison algorithms and artificial intelligence algorithms, which, by generating integrating metadata, make the integration of heterogeneous data possible. This flexibility, which makes it possible to deal with the growing variety of data, is provided by the modularisation of the proposed architecture, which enables data integration in a Big Data context automatically, without human intervention.
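The structural-similarity comparison the abstract mentions can be illustrated with a small sketch. This is hypothetical illustration code, not the thesis architecture: the field names, document keys and the 0.45 threshold are all invented, and Python's `difflib` stands in for whatever similarity measure the work actually uses.

```python
from difflib import SequenceMatcher

# Hypothetical sketch, not the thesis architecture: match the fields of a
# relational schema against the keys of an unstructured (JSON-like) record
# by name similarity, producing a small "integrating metadata" mapping.
# All names and the 0.45 threshold are invented for illustration.
relational_fields = ["customer_name", "birth_date", "email_address"]
document_keys = ["name", "dateOfBirth", "email", "tags"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_mapping(fields, keys, threshold=0.45):
    """For each field, keep the most similar key if it clears the threshold."""
    mapping = {}
    for field in fields:
        best = max(keys, key=lambda k: similarity(field, k))
        if similarity(field, best) >= threshold:
            mapping[field] = best
    return mapping

mapping = build_mapping(relational_fields, document_keys)
print(mapping)
```

In a full pipeline of the kind the abstract describes, such a mapping would itself be stored as metadata and refined by further techniques rather than applied directly.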
133

Jämförelse av svarstider för olika bilddatabaser för Javabaserade http-servrar / Benchmark of different image databases for Java-based http-servers

Bäcklin, Staffan January 2016 (has links)
This bachelor thesis concerns databases in Java-based imaging systems where images are stored and retrieved as binary objects. In MySQL and some other database management systems this format is called a Blob (Binary Large Object). For an imaging system to work well, a fast database is required. The aim has been to designate, out of a sample of databases, the one that is fastest in terms of response times for retrieving images stored as binary objects. The databases are the four well-known database management systems MySQL, MariaDB, PostgreSQL and MongoDB. 
The tests were conducted with the databases integrated into Java-based client-server modules in order to mirror, as far as possible, the conditions prevailing in an imaging system. The test tools used were JMeter, an advanced application for measuring response times, and PerfMon, which monitors the consumption of system resources. MongoDB was the fastest image database, but there are many uncertainties that must be considered, which are also explained in this thesis. Although many measures have been taken to counter these uncertainties, the measurement uncertainty remains large. Further measures must be taken to isolate the databases' share of the response times in a client-server system; proposed measures are described in this thesis.
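The Blob storage pattern the thesis benchmarks can be sketched briefly. This is a minimal sketch, not the thesis code: it uses Python's bundled sqlite3 instead of the four benchmarked systems, purely to stay self-contained, and times a single fetch in the spirit of a JMeter sampler.

```python
import sqlite3
import time

# Minimal sketch (not the thesis code): store and fetch an image as a BLOB.
# The thesis benchmarks MySQL, MariaDB, PostgreSQL and MongoDB with JMeter;
# sqlite3 is used here only to keep the example self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, data BLOB)")

fake_image = bytes(range(256)) * 4  # stand-in for real image bytes

conn.execute("INSERT INTO images (id, data) VALUES (?, ?)", (1, fake_image))
conn.commit()

# Crude response-time measurement, analogous in spirit to a JMeter sampler.
start = time.perf_counter()
row = conn.execute("SELECT data FROM images WHERE id = ?", (1,)).fetchone()
elapsed = time.perf_counter() - start

print(f"fetched {len(row[0])} bytes in {elapsed * 1000:.3f} ms")
```

A real benchmark, as the thesis stresses, would have to repeat such samples many times and control for caching, network and JVM effects before the timings mean anything.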
134

Systém pro analýzu a vyhodnocení jízd autoškoly / A System for a Driving School Trip Analysis and Evaluation

Šoulák, Martin January 2017 (has links)
The objective of this master thesis is to design and develop a real-time storage system for geographic data from driving school trips. The system provides tools for the analysis and evaluation of practice trips and is an extension of the DoAutoskoly.cz project, which is described in the text. The next part introduces geographical data, spatial data and the available databases with spatial extensions; understanding spatial databases is essential for the system design. An explanation of the solution for the database layer and the implementation of its major parts follows. A solution for a graphical view of the results and possible extensions of the system are described in the last part of this thesis.
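The kind of spatial computation such a system relies on can be illustrated with a small sketch. This is hypothetical illustration code, not the thesis implementation: it computes the length of a recorded trip from GPS fixes with the haversine formula, the sort of calculation a spatial database extension such as PostGIS would perform server-side; the coordinates are invented.

```python
import math

# Hypothetical sketch, not the thesis code: length of a recorded trip from
# GPS points via the haversine formula. A spatial database extension
# (e.g. PostGIS) would typically compute such distances server-side.
def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# A short invented trip as (lat, lon) fixes; sum the leg distances.
trip = [(49.1951, 16.6068), (49.1960, 16.6100), (49.1975, 16.6150)]
length = sum(haversine_m(*p, *q) for p, q in zip(trip, trip[1:]))
print(f"trip length: {length:.0f} m")
```

Evaluating a practice trip would then layer further analysis (speed, stops, route adherence) on top of primitives like this one.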
135

Prohlížečová hra s umělou inteligencí / Browser Game with Artificial Intelligence

Moravec, Michal January 2019 (has links)
The thesis describes the design and implementation of a web browser game that can be played by multiple players over the internet. The main goal is to manage the economy, although players can cooperate (trading) or play against each other (battles). A NoSQL database is used for persistent storage of progress, which is also described in the thesis. Apart from human players there are also agents (bots), which play the game autonomously via state machines generated by genetic algorithms. The thesis describes the design and functionality of both the genetic algorithms and the state machines.
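The combination of genetic algorithms and state machines mentioned above can be sketched in miniature. This is a toy sketch, not the game's implementation: the two-state machine, the fitness function (which simply rewards changing state) and all parameters are invented to make the example self-contained and runnable.

```python
import random

# Toy sketch, not the thesis implementation: a genetic algorithm evolving
# the transition table of a tiny two-state machine. The fitness function
# is an invented stand-in that rewards machines which keep changing state.
random.seed(42)

N_STATES, N_INPUTS = 2, 2

def random_machine():
    # Transition table flattened as a list: (state, input) -> next state
    return [random.randrange(N_STATES) for _ in range(N_STATES * N_INPUTS)]

def fitness(machine):
    state, score = 0, 0
    for step in range(20):
        inp = step % N_INPUTS
        nxt = machine[state * N_INPUTS + inp]
        score += 1 if nxt != state else 0  # reward a state change
        state = nxt
    return score

def evolve(generations=30, pop_size=20):
    pop = [random_machine() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            if random.random() < 0.3:           # occasional point mutation
                child[random.randrange(len(child))] = random.randrange(N_STATES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```

A game bot would replace the toy fitness with a score earned by actually playing (resources gathered, battles won), which is where the bulk of the real design effort lies.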
136

Synthesis and evaluation of a data management system for machine-to-machine communication

Jordaan, Pieter Willem January 2013 (has links)
A use case for a data management system for machine-to-machine communication was defined. A centralized system for managing data flow and storage is required for machines to communicate securely with other machines. Embedded devices are typical endpoints that must be serviced by this system, and the system must therefore be easy to use. Such systems have to bill the data usage of the machines that make use of their services. Data management systems are subject to variable load and must therefore be able to scale dynamically on demand in order to service endpoints. For robustness, such an online service must be highly available. Following design science research as the research methodology, cloud-based computing was investigated as a target deployment for such a data management system in this research project. An implementation of a cloud-based system was synthesised, evaluated and tested, and shown to be valid for this use case. Empirical testing and a practical field test validated the proposal. / Thesis (MIng (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013.
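One requirement above is that the system bills machines for their data usage. A tiny sketch of such metering, with invented machine IDs, payloads and tariff; it is not the thesis system, only the accounting idea.

```python
from collections import defaultdict

# Hypothetical sketch, not the thesis system: meter per-machine data usage
# so an M2M data management service can bill it. IDs, payloads and the
# tariff below are invented for illustration.
usage_bytes = defaultdict(int)

def record_transfer(machine_id: str, payload: bytes) -> None:
    usage_bytes[machine_id] += len(payload)

record_transfer("sensor-01", b"temp=21.5")
record_transfer("sensor-01", b"temp=21.7")
record_transfer("sensor-02", b"door=open")

PRICE_PER_KB = 0.002  # hypothetical tariff, currency units per kilobyte
bill = {m: b / 1024 * PRICE_PER_KB for m, b in usage_bytes.items()}
print(dict(usage_bytes), bill)
```

In a deployed service the counters would live in durable, replicated storage, since losing them means losing billable revenue.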
138

Användning av NoSQL- och Relationsdatabaser : En undersökning av användarvänlighet och inverkan på arbetseffektivitet / Use of NoSQL and relational databases: A study of usability and impact on work efficiency

Gustafsson, Joakim January 2018 (has links)
The purpose of this study was to clarify the usability of NoSQL and relational databases, and how work efficiency can be affected by these data models. The problem was framed by the research question "How does usability differ between NoSQL databases and relational databases, and what is the impact on work efficiency when the two data models are used in combination?" together with the sub-question "How does the usability of each data model affect work efficiency?". A theoretical foundation in the problem area was established with the help of various kinds of literature. Data was then collected through individual interviews with a total of eleven respondents, transcribed, and analysed by combining categories. The results showed that the main usability strengths of relational databases are that they are standardised in terms of extensions and documentation, and natural for their purpose. NoSQL databases have their main usability strengths in an easily understood data structure, flexibility in handling data, and fast responses with short waiting times for queries. In general, relational databases were perceived to have higher usability than NoSQL databases. Furthermore, the results indicated that a negative impact on work efficiency is most likely when the two data models are used in combination, mainly because more time must be spent on learning and more systems must be administered. The likelihood of no notable impact is smaller; in the few such cases it stems from modifiability and compatibility not being common elements of the work. Least likely is a positive impact, which where it occurs is based on several data models providing broader modifiability. Likewise, the results showed that work efficiency can be positively affected by the usability of relational databases with respect to compatibility, and by the usability of NoSQL databases with respect to modifiability. 
At an overall level, the usability of relational databases can have a greater positive impact on work efficiency than that of NoSQL databases.
139

Vývoj webové aplikace pomocí moderních technologií / Web application development with modern technologies

Ibragimov, Ahliddin January 2016 (has links)
This thesis covers the development of web applications with modern programming languages and technologies. The objective of the thesis is the research and analysis of modern technologies and the subsequent selection of particular programming languages, frameworks and a database for developing a web application with those techniques. Based on an analysis of modern trends I chose the following technologies: backend development using the popular Spring framework, based on Java EE; frontend implementation using one of the most widespread scripting languages, JavaScript, and its well-known framework AngularJS, developed by Google. For data persistence I chose the NoSQL database MongoDB. I provide detailed documentation of each implementation step: project definition, requirements analysis, design, implementation and testing.
140

Návrh postupu tvorby aplikace pro Linked Open Data / The proposal of application development process for Linked Open Data

Budka, Michal January 2014 (has links)
This thesis deals with Linked Open Data. Its goal is to introduce the reader to the topic as a whole and to the possibility of using Linked Open Data to develop useful applications, by proposing a new development process focused on such applications. The theoretical part offers an insight into Open Data, Linked Open Data and NoSQL database systems and their usability in this field; it focuses mainly on graph database systems and compares them with relational database systems using predefined criteria. Additionally, the goal of this thesis is to develop an application using the proposed development process: a tool for data presentation and statistical visualisation of open data sets published by the Supreme Audit Office and the Czech Trade Inspection. The application is developed mainly to verify the proposed development process and to demonstrate the connectivity of open data published by two different organizations. The thesis includes the selection of a development methodology, which is then used to optimise work on the implementation of the resulting application, and the selection of a graph database system that is used to store and modify the open data for the purposes of the application.
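The Linked Open Data idea of connecting data sets from different publishers can be sketched in miniature. This is hypothetical illustration code, not the thesis application: facts are represented as subject-predicate-object triples in a plain in-memory set, standing in for a real graph database, and all triples are invented (only the organisation names echo the abstract).

```python
# Hypothetical sketch, not the thesis application: an in-memory triple
# store illustrating the Linked Open Data model of subject-predicate-object
# facts. All triples are invented; only the organisation names echo the
# abstract. A real deployment would use a graph database instead.
triples = {
    ("dataset:audits2014", "publishedBy", "org:SupremeAuditOffice"),
    ("dataset:inspections2014", "publishedBy", "org:CzechTradeInspection"),
    ("dataset:audits2014", "concerns", "org:CzechTradeInspection"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Linking across publishers: which triples mention the inspection body?
linked = query(o="org:CzechTradeInspection")
print(sorted(linked))
```

The point of the pattern-matching query is exactly the connectivity the thesis demonstrates: once both publishers' data share identifiers, a single query spans both data sets.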
