71

Parallel Algorithm for Reduction of Data Processing Time in Big Data

Silva, Jesús, Hernández Palma, Hugo, Niebles Núñez, William, Ovallos-Gazabon, David, Varela, Noel 07 January 2020 (has links)
Technological advances have made it possible to collect and store large volumes of data over the years, and it is important that today's applications perform well and can analyze these large datasets effectively. It remains a challenge for data mining to keep its algorithms and applications efficient as data size and dimensionality increase [1]. To achieve this goal, many applications rely on parallelism, which reduces the cost associated with algorithm execution time by taking advantage of the ability of current computer architectures to run several processes concurrently [2]. This paper proposes a parallel version of the FuzzyPred algorithm based on the amount of data that can be processed within each processing thread, synchronously and independently.
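A minimal sketch of the data-parallel pattern the abstract describes: the dataset is split into chunks, each processed synchronously and independently by a worker. The `evaluate_fuzzy_predicate` function is a hypothetical stand-in for FuzzyPred's per-record work, not the paper's actual implementation:

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_fuzzy_predicate(chunk):
    # Placeholder: score each record against a fuzzy predicate.
    return [min(1.0, sum(record) / len(record)) for record in chunk]

def parallel_fuzzy_pred(records, n_workers=4):
    # Partition the data by the number of workers, as the abstract suggests.
    size = max(1, len(records) // n_workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    # A process pool sidesteps the GIL for CPU-bound work; a thread pool
    # would follow the same structure.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(evaluate_fuzzy_predicate, chunks)
    return [score for part in results for score in part]

if __name__ == "__main__":
    data = [[0.2, 0.8, 0.5]] * 1_000
    print(parallel_fuzzy_pred(data)[:3])
```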
72

Real-time event based visualization of multivariate abstract datasets : Implementing and evaluating a dashboard visualization prototype / Händelsebaserad visualisering av multivariata abstrakta datamängder i realtid : Implementering och utvärdering av en prototypisk dashboardvisualisering

Ahrsjö, Carl January 2015 (has links)
As datasets in general grow in size and complexity over time while the human cognitive ability to interpret them essentially stays the same, it becomes important to enable intuitive visualization methods for analysis. Based on previous research in the fields of information visualization and visual analytics, a dashboard visualization prototype handling real-time, event-based traffic was implemented and evaluated. The real-time data is collected by a script and sent to a self-implemented web server that opens a websocket connection with the dashboard client, where the data is then visualized. The data consisted of transactions and related metadata from an e-commerce retail site, applied to a real customer scenario. The dashboard was developed using an agile method, continuously involving the thesis supervisor in the design and functionality process. The final design also depended on the results of an interview with a representative from one of the two target groups. The two target groups consisted of 5 novice and 5 expert users in the field of information visualization and visual analytics. The intuitiveness of the dashboard visualization prototype was evaluated in two user studies, one for each target group, where the test subjects were asked to interact with the dashboard visualization, answer some questions, and lastly solve a predefined set of tasks. The time spent solving the tasks, the number of serious misinterpretations, and the number of wrong answers were recorded and evaluated. The results from the user studies showed that the use of colors, icons, level of animation, choice of visualization method, and level of interaction were the most important aspects for carrying out an efficient analytical process according to the test subjects. The test subjects desired to zoom in on each component, to filter the contents of the dashboard, and to get additional information about the components on demand. The most important result of developing the dashboard concerned how to handle the scalability of the application: it is highly important that the websocket connection remains stable when scaling out to handle more concurrent HTTP requests. The research also concludes that the dashboard should use visualization methods that are intuitive for all users, that real-time data needs to be put in relation to historical data if one wishes to carry out a valid analytical process, and that real-time data can be used to discover trends and patterns at as early a stage as possible. Lastly, the research provides a set of guidelines for scalability, modularity, intuitiveness, and relations between datasets.
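A hedged sketch of the push pipeline described above: a server accepts a websocket connection from the dashboard client and streams transaction events to it as JSON. It assumes the third-party `websockets` package (version 10 or newer for the single-argument handler); the event contents are illustrative, not the thesis's actual schema:

```python
import asyncio
import json
import random

import websockets

async def stream_events(websocket):
    # Push one synthetic transaction event per second to the connected client.
    while True:
        event = {"type": "transaction",
                 "amount": round(random.uniform(5, 500), 2)}
        await websocket.send(json.dumps(event))
        await asyncio.sleep(1.0)

async def main():
    async with websockets.serve(stream_events, "localhost", 8765):
        await asyncio.Future()  # serve until the process is stopped

if __name__ == "__main__":
    asyncio.run(main())
```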
73

Natural Language Understanding for Multi-Level Distributed Intelligent Virtual Sensors

Papangelis, Angelos, Kyriakou, Georgios January 2021 (has links)
In our thesis we explore Automatic Question/Answer Generation (AQAG) and the application of Machine Learning (ML) to natural language queries. Initially we create a collection of question/answer tuples conceptually based on processing data received from (virtual) sensors placed in a smart city. Subsequently we train a Gated Recurrent Unit (GRU) model on the generated dataset and evaluate the accuracy we can achieve in answering those questions. This will in turn help address the problem of automatic sensor composition based on natural language queries. To this end, the contribution of this thesis is two-fold: on the one hand we provide an automatic procedure for dataset construction based on natural language question templates, and on the other hand we apply an ML approach that establishes the correlation between the natural language queries and their virtual sensor representation, via their functional representation. We consider virtual sensors to be entities as described by Mihailescu et al., where they provide an interface constructed with certain properties in mind. We use those sensors for our application domain of a smart city environment, thus constructing our dataset around questions relevant to it.
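A minimal sketch, not the authors' architecture, of a GRU that encodes a tokenized natural language query and maps it to one of N candidate answers; the vocabulary size, dimensions, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class QueryGRU(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, n_answers=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_answers)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded question text.
        _, h_n = self.gru(self.embed(token_ids))
        return self.head(h_n[-1])  # logits over candidate answers

model = QueryGRU()
dummy_batch = torch.randint(0, 5000, (8, 20))  # 8 questions, 20 tokens each
print(model(dummy_batch).shape)  # torch.Size([8, 50])
```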
74

A Toolset for Qualitative Dataset Generation of Virtual Reality Environment User Interaction / Ett verktyg för generering av kvalitativa dataset om användarinteraktion i virtuella miljöer

Roos, Daniel, Aaro, Gustav January 2018 (has links)
Virtual reality (VR) is a medium of human interaction that is becoming more popular by the day amid today's technological advancements. Applications are being developed at the same rate as the technology itself, and we have only seen the start of the possible benefits it could bring society. As the technology advances and gains trust, the potential use cases of virtual environments will be allowed to become more complex. Already today, they often involve network streaming components, which often have very strict optimization requirements in order to run in real time with minimal delay under normal network conditions. To reach the required optimizations, it is important to understand how users interact with such virtual environments. To support and facilitate the understanding of this kind of interaction, we have developed a method for creating qualitative datasets containing extensive information about the 3D scene as well as the sensor data from the head-mounted display (HMD). We then apply this method to create a sample dataset from a virtual 3D environment and analyze the collected data with some simple methods for demonstration purposes.
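A hedged sketch of the kind of per-frame record such a dataset could contain, joining HMD sensor readings with scene information in a CSV file; the field names are illustrative, not the thesis's actual schema:

```python
import csv
import time

FIELDS = ["timestamp", "pos_x", "pos_y", "pos_z",
          "yaw", "pitch", "roll", "focused_object"]

def log_sample(writer, pose, focused_object):
    # One row per rendered frame: HMD position/orientation plus the scene
    # object the user is currently looking at.
    writer.writerow({"timestamp": time.time(),
                     "pos_x": pose[0], "pos_y": pose[1], "pos_z": pose[2],
                     "yaw": pose[3], "pitch": pose[4], "roll": pose[5],
                     "focused_object": focused_object})

with open("vr_session.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # In a real capture these values would come from the HMD each frame.
    log_sample(writer, (0.0, 1.7, 0.0, 12.5, -3.0, 0.1), "menu_panel")
```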
75

A comparative evaluation of 3d and spatio-temporal deep learning techniques for crime classification and prediction

Matereke, Tawanda Lloyd January 2021 (has links)
Magister Scientiae - MSc / This research is a comparative evaluation of 3D and spatio-temporal deep learning methods for crime classification and prediction using the Chicago crime dataset, which has 7.29 million records collected from 2001 to 2020. In this study, crime classification experiments are carried out using two 3D deep learning algorithms, i.e., the 3D Convolutional Neural Network and the 3D Residual Network. The crime classification models are evaluated using accuracy, F1 score, area under the receiver operating characteristic curve (AUROC), and area under the precision-recall curve (AUCPR). The effect of spatial grid resolution on the performance of the classification models is also evaluated during training, validation, and testing.
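The four reported metrics can be computed with scikit-learn as sketched below, assuming binary labels and predicted probabilities; the arrays are illustrative stand-ins for model output:

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             roc_auc_score, average_precision_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]           # ground-truth labels
y_prob = [0.1, 0.8, 0.6, 0.3, 0.9, 0.4, 0.2, 0.7]  # model probabilities
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]    # thresholded predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1:      ", f1_score(y_true, y_pred))
print("AUROC:   ", roc_auc_score(y_true, y_prob))            # ROC curve area
print("AUCPR:   ", average_precision_score(y_true, y_prob))  # PR curve area
```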
76

Data augmentation for attack detection on IoT Telehealth Systems

Khan, Zaid A. 11 March 2022 (has links)
Telehealth is an online health care system that has been used extensively during the current pandemic. Our proposed technique is a fog computing-based attack detection architecture to protect IoT telehealth networks. In such networks, the sensor/actuator edge devices are considered the weakest link in the IoT system and are obvious targets of attacks such as botnet attacks. In this thesis, we introduce a novel framework that employs several machine learning and data analysis techniques to detect those attacks. We evaluate the effectiveness of the proposed framework using two publicly available datasets from real-world scenarios, which contain a variety of attacks with different characteristics. The robustness of the proposed framework, and its ability to detect and distinguish between existing IoT attacks, is tested by combining the two datasets for cross-evaluation. This combination is based on a novel technique for generating supplementary data instances, which employs generative adversarial networks (GANs) for data augmentation and ensures that the numbers of samples and features are balanced. / Graduate
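A hedged sketch of GAN-based augmentation for tabular network-traffic features, in the spirit of the framework above rather than its actual model; the dimensions and training length are illustrative:

```python
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM = 20, 16

generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, N_FEATURES))
discriminator = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                              nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()
real_batch = torch.randn(32, N_FEATURES)  # stand-in for real attack records

for step in range(100):
    # Discriminator step: real records vs. generated records.
    fake = generator(torch.randn(32, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    fake = generator(torch.randn(32, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic samples to rebalance the combined dataset.
augmented = generator(torch.randn(500, NOISE_DIM)).detach()
print(augmented.shape)  # torch.Size([500, 20])
```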
77

Text to Image Synthesis via Mask Anchor Points and Aesthetic Assessment

Baraheem, Samah Saeed 15 June 2020 (has links)
No description available.
78

Depth Estimation from Structured Light Fields

Li, Yan 03 July 2020 (has links) (PDF)
Light fields have become popular as a new geometric representation of 3D scenes. Composed of multiple views, they offer large potential to improve depth perception of a scene. Light fields can be captured by different camera sensors, where different acquisitions give rise to different representations: mainly a line of camera views (the 3D light field representation) or a grid of camera views (the 4D light field representation). When the capture positions are uniformly distributed, the outputs are structured light fields. This thesis focuses on depth estimation from structured light fields. Light field representations (or setups) differ not only in dimensionality (3D versus 4D) but also in the density, or baseline, of the camera views. Rather than aiming only at reconstructing high-quality depth from dense (narrow-baseline) light fields, we pursue a more general objective: reconstructing depth from a wide range of light field setups. Hence a series of depth estimation methods for light fields, both traditional and deep learning-based, is presented in this thesis. Extra effort is made to achieve high performance in terms of depth accuracy and computational efficiency. Specifically: 1) a robust traditional framework is put forward for estimating depth in sparse (wide-baseline) light fields, combining cost calculation, window-based filtering, and optimization; 2) this framework is extended with new or alternative components to 4D light fields, showing that depth prediction can be made independent of the number of views and/or the baseline of the 4D light field; 3) two new deep learning-based methods are proposed for light fields with a narrow baseline, where features are learned from the epipolar plane images and the light field images, one of the methods being designed as a lightweight model for more practical goals; 4) owing to dataset deficiency, a large-scale and diverse synthetic wide-baseline dataset with labeled data is created, and a new lightweight deep model is proposed for 4D light fields with a wide baseline; this model also works on 4D light fields with a narrow baseline if trained on narrow-baseline datasets. Evaluations are made on public light field datasets. Experimental results show that the proposed depth estimation methods are capable of achieving high-quality depth across a wide range of light field setups, and some even outperform state-of-the-art methods. / Doctorat en Sciences de l'ingénieur et technologie
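A simplified sketch of the cost-calculation step in a traditional framework of this kind: for a horizontal line of views, each disparity hypothesis shifts the neighbouring views onto the centre view, the photo-consistency cost is their variance, and the per-pixel depth label is the argmin. Real light field pipelines add the filtering and optimization steps on top of this:

```python
import numpy as np

def disparity_from_views(views, disparities):
    """views: list of (H, W) grayscale images from equally spaced cameras."""
    centre = len(views) // 2
    h, w = views[0].shape
    costs = np.zeros((len(disparities), h, w))
    for di, d in enumerate(disparities):
        shifted = []
        for vi, img in enumerate(views):
            # Shift each view toward the centre view by its baseline offset.
            shift = int(round((vi - centre) * d))
            shifted.append(np.roll(img, shift, axis=1))
        # Photo-consistency: low variance across views means a good match.
        costs[di] = np.var(np.stack(shifted), axis=0)
    return np.argmin(costs, axis=0)  # index of best disparity per pixel

views = [np.random.rand(32, 32) for _ in range(5)]
print(disparity_from_views(views, disparities=[0, 1, 2, 3]).shape)
```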
79

AUTONOMOUS SAFE LANDING ZONE DETECTION FOR UAVs UTILIZING MACHINE LEARNING

Nepal, Upesh 01 May 2022 (has links)
One of the main challenges of integrating unmanned aerial vehicles (UAVs) into today's society is the risk of in-flight failures, such as motor failure, occurring over populated areas, which can result in catastrophic accidents. We propose a framework to manage the consequences of an in-flight system failure and to bring down the aircraft safely without causing any serious harm to people, property, or the UAV itself. This can be done in three steps: a) detecting a failure, b) finding a safe landing spot, and c) navigating the UAV to the safe landing spot. This thesis addresses step b. Specifically, we develop an active system that can detect landing sites autonomously without any reliance on UAV resources. To detect a safe landing site, we use a deep learning algorithm named "You Only Look Once" (YOLO) that runs on a Jetson Xavier NX computing module, connected to a camera, for image processing. YOLO is trained on the DOTA dataset, and we show that it can detect landing spots and obstacles effectively. By avoiding the detected objects, we then find a safe landing spot. The effectiveness of this algorithm is shown first through comprehensive simulations. We also plan to validate the algorithm experimentally by flying a UAV, capturing ground images, and applying the algorithm in real time to see whether it can effectively detect acceptable landing spots.
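A sketch of the "avoid detected objects" step described above: given bounding boxes from an object detector such as YOLO, the image is divided into a grid and a free cell is proposed as the landing spot. The grid size and the (x1, y1, x2, y2) box format are assumptions for illustration:

```python
def pick_landing_cell(image_w, image_h, boxes, grid=8):
    cell_w, cell_h = image_w / grid, image_h / grid
    blocked = set()
    for (x1, y1, x2, y2) in boxes:
        # Mark every grid cell the detection overlaps as unsafe.
        for gx in range(int(x1 // cell_w), min(grid - 1, int(x2 // cell_w)) + 1):
            for gy in range(int(y1 // cell_h), min(grid - 1, int(y2 // cell_h)) + 1):
                blocked.add((gx, gy))
    free = [(gx, gy) for gx in range(grid) for gy in range(grid)
            if (gx, gy) not in blocked]
    if not free:
        return None  # no safe cell in view; keep searching
    # Prefer the free cell closest to the image centre.
    centre = (grid - 1) / 2
    gx, gy = min(free, key=lambda c: (c[0] - centre) ** 2 + (c[1] - centre) ** 2)
    return ((gx + 0.5) * cell_w, (gy + 0.5) * cell_h)  # pixel coordinates

print(pick_landing_cell(640, 640, [(100, 120, 220, 300), (400, 50, 520, 200)]))
```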
80

Creating a Customizable Component Based ETL Solution for the Consumer / Skapandet av en anpassningsbar komponentbaserad ETL-lösning för konsumenten

Retelius, Philip, Bergström Persson, Eddie January 2021 (has links)
In today's society, an enormous amount of data is created and stored in various databases. Since the data is in many cases stored in different databases, organizations with a lot of data demand the ability to merge separated data and extract value from this resource. An Extract, Transform and Load (ETL) system is a solution that makes it possible to merge different databases with ease. However, the ETL market is dominated by large actors such as Amazon and Microsoft, and the solutions offered are completely owned by these actors, leaving the consumer with little ownership of the solution. This thesis therefore proposes a framework for creating a component-based ETL that gives consumers the opportunity to own and develop their own ETL solution, customized to their own needs. The result of the thesis is a prototype ETL solution built to be configurable and customizable; it accomplishes this by being independent of inflexible external libraries and through a level of modularity that makes adding and removing components easy. The results are verified with a test that shows how two different files containing data can be combined.
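A minimal sketch of the component idea described above: small, swappable Extract/Transform/Load classes chained into a pipeline that merges two CSV files on a shared key. The file names and join key are illustrative, not the thesis's actual design:

```python
import csv

class CsvExtractor:
    def __init__(self, path):
        self.path = path
    def run(self):
        with open(self.path, newline="") as f:
            return list(csv.DictReader(f))

class MergeTransformer:
    def __init__(self, key):
        self.key = key
    def run(self, left, right):
        # Join the two extracted datasets on the shared key column.
        index = {row[self.key]: row for row in right}
        return [{**row, **index.get(row[self.key], {})} for row in left]

class CsvLoader:
    def __init__(self, path):
        self.path = path
    def run(self, rows):
        with open(self.path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

# Components are independent, so any stage can be swapped without touching
# the others (e.g. a JsonExtractor in place of a CsvExtractor).
customers = CsvExtractor("customers.csv").run()
orders = CsvExtractor("orders.csv").run()
CsvLoader("merged.csv").run(MergeTransformer("customer_id").run(customers, orders))
```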
