291

Využití umělé inteligence na kapitálových trzích / The Use of Artificial Intelligence on the Stock Market

Kudelás, Stanislav January 2013 (has links)
The diploma thesis focuses on the specification of a model for trading support. It points to possible uses of artificial intelligence tools and proposes a trading-support model based on artificial intelligence.
292

Automated event prioritization for security operation center using graph-based features and deep learning

Jindal, Nitika 06 April 2020 (has links)
A security operation center (SOC) is a cybersecurity clearinghouse responsible for monitoring, collecting and analyzing security events from organizations’ IT infrastructure and security controls. Despite their popularity, SOCs are facing increasing challenges and pressure due to the growing volume, velocity and variety of the IT infrastructure and security data observed on a daily basis. Due to the mixed performance of current technological solutions, e.g. intrusion detection systems (IDS) and security information and event management (SIEM), there is an over-reliance on manual analysis of the events by human security analysts. This creates huge backlogs and considerably slows the resolution of critical security events. Obvious solutions include increasing the accuracy and efficiency of crucial aspects of the SOC automation workflow, such as event classification and prioritization. In this thesis, we present a new approach for SOC event classification and prioritization that identifies a set of new machine learning features using graph visualization and graph metrics. Using a real-world SOC dataset and applying different machine learning classification techniques, we demonstrate empirically the benefit of the graph-based features in terms of improved classification accuracy. Three classification techniques are explored, namely logistic regression, XGBoost and a deep neural network (DNN). The experimental evaluation shows, for the DNN (the best-performing classifier), area under curve (AUC) values of 91% for the baseline feature set and 99% for the augmented feature set that includes the graph-based features, a net improvement of 8% in classification performance. / Graduate
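The abstract above describes deriving machine learning features from graph metrics over security events. A minimal sketch of that idea, assuming a toy event format of (source host, destination host) pairs and invented feature names (this is not the thesis code):

```python
# Illustrative sketch: derive simple graph-based features from SOC events,
# where each event links a source and a destination host. The event format
# and feature names are assumptions for illustration only.
from collections import defaultdict

def graph_features(events):
    """Build an undirected host graph from (src, dst) event pairs and
    return per-host degree and triangle-count features."""
    adj = defaultdict(set)
    for src, dst in events:
        adj[src].add(dst)
        adj[dst].add(src)
    feats = {}
    for node, nbrs in adj.items():
        # Triangles through this node: pairs of neighbours that are connected.
        tri = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        feats[node] = {"degree": len(nbrs), "triangles": tri}
    return feats

events = [("h1", "h2"), ("h2", "h3"), ("h1", "h3"), ("h3", "h4")]
f = graph_features(events)
```

Features like these would be concatenated with the baseline event attributes before training the classifiers the abstract compares.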
293

Evaluating the use of neural networks to predict river flow gauge values

Walford, Wesley Michael January 2017 (has links)
Without improved water management the global population could face serious water shortages. River flow discharge rates are one factor that could contribute to improved water management: the ability to predict a forecasted river flow value would support the management of water resources. This research investigates the use of an artificial neural network (ANN) to create a model that predicts river flow gauge values. The Driel Barrage monitoring station on the Thukela river in South Africa was used as a case study. The research makes use of data from the Department of Water and Sanitation (DWS) and weather forecast data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to train the predictive model. An evaluation of the ANN model identified that it is highly sensitive to the selected weather parameters and to the initial weights used in the ANN. These issues were overcome using an ANN ensemble and selective scenarios to identify the best weather parameters to use as input to the ANN model. Five weather parameters and a correlation coefficient cut-off value produced the most accurate prediction by the ANN. The research found that ANNs can be used for predicting river flow gauge values, but a larger ensemble, additional data and different ANN structures may yield a better-performing model. For the ANN model to be used in practice, the research needs to be extended to evaluate the whole catchment area and a range of rivers in South Africa. / Dissertation (MSc)--University of Pretoria, 2017. / Geography, Geoinformatics and Meteorology / MSc / Unrestricted
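The ensemble approach mentioned above counters sensitivity to random initial weights by averaging several independently initialized models. A toy sketch of that averaging, with stand-in "models" whose bias plays the role of initial-weight noise (not the thesis model):

```python
# Illustrative sketch: ensemble averaging to reduce sensitivity to random
# initial weights. The "models" here are toy stand-ins, not trained ANNs.
import random

def make_model(seed):
    """Return a toy predictor whose bias depends on its random init."""
    rng = random.Random(seed)
    bias = rng.uniform(-0.1, 0.1)    # stands in for initial-weight noise
    return lambda x: 2.0 * x + bias  # "trained" linear response

def ensemble_predict(models, x):
    """Average the predictions of all ensemble members."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

models = [make_model(seed) for seed in range(10)]
single = models[0](1.5)                 # one model: biased by its init
averaged = ensemble_predict(models, 1.5)  # ensemble: init noise averages out
```

The same idea scales directly to averaging the gauge-value predictions of several trained ANNs.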
294

Learning Lighting Models with Shader-Based Neural Networks

Qin He (8784458) 01 May 2020 (has links)
To correctly reproduce the appearance of different objects in computer graphics applications, numerous lighting models have been proposed over the past several decades. These models are among the most important components in the modern graphics pipeline, since they decide the final pixel color shown in the generated images. More physically valid parameters and functions have been introduced into recent models. These parameters expanded the range of materials that can be represented and made virtual scenes more realistic, but they also made the lighting models more complex and dependent on measured data.

Artificial neural networks are known for their ability to deal with complex data and to approximate arbitrary functions. They have been adopted by many data-driven approaches in computer graphics and proven to be effective. Neural networks have also been used by artists for creative work and shown to support the creation of visual effects, animation and computational art. It is therefore reasonable to consider artificial neural networks as potential tools for representing lighting models. Since shaders can be used for general-purpose computing, neural networks can be integrated with the modern graphics pipeline through shader implementations.

In this research, the possibility of using shader-based neural networks as an alternative to traditional lighting models is explored. Fully connected neural networks are implemented in fragment shaders to reproduce lighting results in the graphics pipeline, and trained in compute shaders. The implemented networks are shown to be able to approximate mathematical lighting models. This thesis describes experiments that demonstrate the ability of shader-based neural networks and explore the proper network architecture and settings for different lighting models, along with further explorations of manually editing parameters. Mean-square error and runtime are taken as measures of success to evaluate the experiments. Rendered images are also reported for visual comparison and evaluation.
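The networks described above evaluate a fully connected forward pass per fragment. As a degenerate illustration of how such a network can reproduce an analytic lighting model, here is a plain-Python sketch (not the thesis shaders) of a tiny network whose hand-picked weights recover the clamped Lambertian term max(0, n · l):

```python
# Illustrative sketch: the per-pixel forward pass a fragment-shader network
# would run, shown in plain Python. Weights are hand-picked (an assumption)
# so a one-neuron ReLU "network" reproduces clamped Lambertian lighting.
def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def relu(x):
    return x if x > 0.0 else 0.0

def lambert_net(normal, light):
    # Hidden layer: a single neuron with identity weights computes n . l.
    h = dot3(normal, light)
    # ReLU activation supplies the clamp at zero.
    return relu(h)

# Surface facing the light -> full intensity; facing away -> zero.
facing = lambert_net((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
away = lambert_net((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
```

In the thesis, the weights are instead learned in compute shaders, and deeper networks approximate more complex measured lighting models.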
295

The Effect of 5-anonymity on a classifier based on neural network that is applied to the adult dataset

Paulson, Jörgen January 2019 (has links)
Privacy issues related to data being made public are relevant with the introduction of the GDPR. To limit problems related to data becoming public, intentionally or via an event such as a security breach, anonymization of datasets can be employed. In this report, the impact of applying 5-anonymity to the adult dataset was investigated for a neural-network classifier predicting whether people had an income exceeding $50,000, using precision, recall and accuracy. The classifier was trained on the non-anonymized data, the anonymized data, and the non-anonymized data with the attributes that were suppressed in the anonymized data removed. The result was that average accuracy dropped from 0.82 to 0.76, precision from 0.58 to 0.50, and recall increased from 0.82 to 0.87. The average values and distributions seem to support the estimate that the majority of the performance impact of anonymization in this case comes from the suppression of attributes.
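The 5-anonymity property used above means every combination of quasi-identifier values must occur in at least five records. A minimal sketch of checking k-anonymity, with invented column names and generalized values (not the thesis pipeline):

```python
# Illustrative sketch: check k-anonymity by counting records per
# quasi-identifier equivalence class. Column names and the generalized
# values ("30-39", "123**") are assumptions for illustration.
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values
    occurs in at least k records."""
    classes = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in classes.values())

records = [
    {"age": "30-39", "zip": "123**", "income": ">50K"},
    {"age": "30-39", "zip": "123**", "income": "<=50K"},
    {"age": "30-39", "zip": "123**", "income": "<=50K"},
    {"age": "40-49", "zip": "456**", "income": ">50K"},
    {"age": "40-49", "zip": "456**", "income": "<=50K"},
]
ok3 = is_k_anonymous(records, ["age", "zip"], 3)  # class sizes are 3 and 2
ok2 = is_k_anonymous(records, ["age", "zip"], 2)
```

When the check fails, further generalization or attribute suppression (as in the report) is applied until every equivalence class reaches size k.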
296

Object Recognition with Progressive Refinement for Collaborative Robots Task Allocation

Wu, Wenbo 18 December 2020 (has links)
With the rapid development of deep learning techniques, the application of Convolutional Neural Networks (CNN) has benefited the task of target object recognition. Several state-of-the-art object detectors have achieved excellent precision for object recognition. When the detection results are applied to real-world collaborative-robot applications, the reliability and robustness of the target object detection stage are essential to support efficient task allocation. In this work, collaborative robot task allocation is based on the assumption that each individual robotic agent possesses specialized capabilities to be matched with detected targets representing tasks to be performed in the surrounding environment, which impose specific requirements. The goal is to reach a specialized labor distribution among the individual robots by best matching their specialized capabilities with the corresponding requirements imposed by the tasks. To further improve task recognition with convolutional neural networks in the context of robotic task allocation, this thesis proposes an approach for progressively refining the target detection process by taking advantage of the fact that additional images can be collected by mobile cameras installed on robotic vehicles. The proposed methodology combines a CNN-based object detection module with a refinement module. For the detection module, a two-stage object detector, Mask RCNN, for which some adaptations on region proposal generation are introduced, and a one-stage object detector, YOLO, are experimentally investigated in the context considered. The generated recognition scores serve as input for the refinement module. In the latter, the current detection result is treated as a priori evidence to enhance the next detection of the same target, with the goal of iteratively improving the target recognition scores.
Both the Bayesian method and the Dempster-Shafer theory are experimentally investigated to achieve the data fusion process involved in the refinement process. The experimental validation is conducted on indoor search-and-rescue (SAR) scenarios and the results presented in this work demonstrate the feasibility and reliability of the proposed progressive refinement framework, especially when the combination of adapted Mask RCNN and D-S theory data fusion is exploited.
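The Dempster-Shafer fusion step described above combines mass assignments from successive detections. A compact sketch of Dempster's rule over the two-element frame {target, not-target}, with toy mass values (not the thesis implementation):

```python
# Illustrative sketch: Dempster's rule of combination over the frame
# {target (T), not-target (N)}, fusing two detection scores as in the
# refinement module described above. Mass values are toy assumptions.
def combine(m1, m2):
    """Combine two mass functions over {'T'}, {'N'} and the full frame 'TN'."""
    keys = ("T", "N", "TN")
    def meet(a, b):
        inter = set(a) & set(b)
        if inter == {"T"}: return "T"
        if inter == {"N"}: return "N"
        if inter == {"T", "N"}: return "TN"
        return None  # empty intersection -> conflicting evidence
    raw = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            tgt = meet(a, b)
            if tgt is None:
                conflict += m1[a] * m2[b]
            else:
                raw[tgt] += m1[a] * m2[b]
    norm = 1.0 - conflict  # renormalize away the conflicting mass
    return {k: v / norm for k, v in raw.items()}

# Two moderately confident detections of the same target reinforce each other.
m1 = {"T": 0.6, "N": 0.1, "TN": 0.3}
m2 = {"T": 0.7, "N": 0.1, "TN": 0.2}
fused = combine(m1, m2)
```

Iterating this combination as new images arrive is what drives the recognition scores upward in the progressive refinement framework.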
297

Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation

You, Di 06 June 2019 (has links)
To combat fake news, researchers have mostly focused on detecting fake news, while journalists have built and maintained fact-checking sites (e.g., Snopes.com and Politifact.com). However, fake news dissemination has been greatly promoted by social media sites, and these fact-checking sites have not been fully utilized. To overcome these problems and complement existing methods against fake news, this thesis proposes a deep-learning based fact-checking URL recommender system to mitigate the impact of fake news on social media sites such as Twitter and Facebook. In particular, the proposed framework consists of a multi-relational attentive module and a heterogeneous graph attention network that learn the complex/semantic relationships between user-URL pairs, user-user pairs, and URL-URL pairs. Extensive experiments on a real-world dataset show that the proposed framework outperforms seven state-of-the-art recommendation models, achieving at least a 3–5.3% improvement.
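At the core of the attention modules mentioned above is a softmax weighting over relation types. A minimal sketch of that weighting, with invented relevance scores for the three relation types (not the thesis network):

```python
# Illustrative sketch: softmax attention over relation types (user-URL,
# user-user, URL-URL), blending per-relation evidence into one score.
# The scores and values below are toy assumptions.
import math

def softmax(scores):
    """Numerically stable softmax: subtract the max before exponentiating."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(relation_scores, relation_values):
    """Weight each relation's evidence by its attention weight."""
    weights = softmax(relation_scores)
    return sum(w * v for w, v in zip(weights, relation_values))

# Unnormalized relevance of three relation types, and per-relation evidence.
scores = [2.0, 0.5, 1.0]
values = [0.9, 0.2, 0.6]
weights = softmax(scores)
blended = attend(scores, values)
```

In the full model, the scores themselves are learned from node embeddings, so the attention adapts per user-URL pair.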
298

Prediction of Covid-19 Cases using LSTM

Tanveer, Hafsa January 2021 (has links)
No description available.
299

Investigation of real-time lightweight object detection models based on environmental parameters

Persson, Dennis January 2022 (has links)
As the world moves towards a more digital world, with the majority of people having tablets, smartphones and smart objects, solving real-world computational problems with handheld devices is becoming more common. Detection or tracking of objects using a camera is starting to be used in all kinds of fields, from self-driving cars and sorting items to x-rays, as referenced in the Introduction. Object detection is computationally heavy, which is why capable hardware is necessary for it to run relatively fast. Object detection using lightweight models is not as accurate as with a heavyweight model, because the model trades accuracy for inference speed so it can run on such devices. As handheld devices become more powerful and people gain better access to object detection models that work on limited-computing devices, the ability to build small object detection machines at home or at work increases substantially. Knowing which factors have a big impact on object detection can help the user design or choose the correct model. This study aims to explore the impact that distance, angle and light have on Inceptionv2 SSD, MobileNetv3 Large SSD and MobileNetv3 Small SSD on the COCO dataset. The results indicate that distance is the most dominant factor for the Inceptionv2 SSD model using the COCO dataset. The data for the MobileNetv3 SSD models indicate that angle might have the biggest impact on these models, but the data is too inconclusive to say so with certainty. With knowledge of which factors affect a certain model's performance the most, the user can make a more informed choice for their field of use.
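Comparing the impact of environmental factors as described above amounts to grouping detection outcomes by each factor's levels. A small sketch of that aggregation, with invented result records and factor values (not the thesis benchmark):

```python
# Illustrative sketch: aggregate detection results by an environmental
# factor (distance, angle, light) to compare per-level accuracy. The
# result records and their values are invented for illustration.
from collections import defaultdict

def accuracy_by_factor(results, factor):
    """Mean detection success rate grouped by one factor's levels."""
    buckets = defaultdict(list)
    for r in results:
        buckets[r[factor]].append(1.0 if r["detected"] else 0.0)
    return {level: sum(v) / len(v) for level, v in buckets.items()}

results = [
    {"distance": "near", "light": "bright", "detected": True},
    {"distance": "near", "light": "dim", "detected": True},
    {"distance": "far", "light": "bright", "detected": True},
    {"distance": "far", "light": "dim", "detected": False},
]
by_distance = accuracy_by_factor(results, "distance")
```

Running the same aggregation per model and per factor is what lets the study rank which factor dominates a given detector's performance.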
300

Fantastic spiking neural networks and how to train them

Weinberg, David January 2021 (has links)
Spiking neural networks are a new generation of neural networks that use neuronal models that are more biologically plausible than the typically used perceptron model. They do not use analog values to perform computations, as is the case in regular neural networks, but rely on spatio-temporal information encoded into sequences of delta functions known as spike trains. Spiking neural networks are highly energy efficient compared to regular neural networks, which makes them highly attractive in certain applications. This thesis implements two approaches for training spiking neural networks. The first approach uses surrogate gradient descent to deal with the issues of non-differentiability that arise when training spiking neural networks. The second approach is based on Bayesian probability theory and uses variational inference for parameter estimation, leading to a Bayesian spiking neural network. The two methods are tested on two datasets from the spiking neural network literature and limited hyperparameter studies are performed. The results indicate that both training methods work on the two datasets but that the Bayesian implementation yields a lower accuracy on test data. Moreover, the Bayesian implementation appears to be robust to the choice of prior parameter distribution. / Sekretess (Confidential)
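The surrogate-gradient approach mentioned above keeps the non-differentiable spike in the forward pass but substitutes a smooth derivative in the backward pass. A single-neuron sketch of that idea, with arbitrary constants (not the thesis code):

```python
# Illustrative sketch: surrogate gradients for one spiking neuron. The
# forward pass uses the non-differentiable Heaviside step; the backward
# pass substitutes a sigmoid derivative so gradients can flow. The
# threshold and sharpness beta are arbitrary choices for illustration.
import math

def spike_forward(v, threshold=1.0):
    """Heaviside step: emit a spike when membrane potential crosses threshold."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    """Sigmoid-derivative surrogate for the step's zero-almost-everywhere
    true gradient; it peaks at the threshold and decays away from it."""
    s = 1.0 / (1.0 + math.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

below, at = spike_forward(0.5), spike_forward(1.2)
g_far, g_near = spike_surrogate_grad(0.0), spike_surrogate_grad(1.0)
```

In a full network, an autograd framework would use `spike_forward` when computing activations and `spike_surrogate_grad` when backpropagating through them.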
