  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Deep learning methods for predicting flows in power grids: novel architectures and algorithms

Donnot, Benjamin 13 February 2019 (has links)
This thesis addresses security problems in the French power grid operated by RTE, the French Transmission System Operator (TSO). Progress in sustainable energy, electricity market efficiency and novel consumption patterns pushes TSOs to operate the grid closer to its security limits, which makes it essential to make the grid "smarter". To tackle this issue, this work explores the benefits of artificial neural networks. We propose novel deep learning algorithms and architectures, which we call "guided dropout", to assist the decisions of human operators (TSO dispatchers). This allows power flows to be predicted following a willful or accidental modification of the grid. The inputs are separated: continuous data (productions and consumptions) are fed in the standard way, through the neural network's input layer, while discrete data (grid topologies) are encoded directly in the neural network architecture. The architecture is modified dynamically according to the power grid topology by switching the activation of hidden units on or off. The main advantage of this technique lies in its ability to predict the flows even for previously unseen grid topologies. Guided dropout achieves high accuracy (up to 99% precision on flow predictions) with a 300-fold speedup over physical grid simulators based on Kirchhoff's laws, even for unseen contingencies and without detailed knowledge of the grid structure. We also show that guided dropout can be used to rank contingencies by severity. 
In this application, we demonstrate that our algorithm attains the same risk as currently implemented policies while requiring only 2% of today's computational budget. The ranking remains relevant even for grid cases never seen before, and can be used to obtain an overall estimate of the security of the power grid.
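The core mechanism described above, a discrete topology vector switching hidden units on or off while continuous injections flow through a standard input layer, can be sketched as follows. This is a minimal illustration of the idea, not the thesis's actual implementation; the dimensions and weights are toy placeholders.

```python
import random

random.seed(0)

def forward(x, topology, W1, W2):
    """One hidden layer where each hidden unit is gated by a binary
    topology indicator: the grid topology selects which units are
    active (the 'guided dropout' idea, in miniature)."""
    hidden = []
    for j, gate in enumerate(topology):
        pre = sum(xi * W1[i][j] for i, xi in enumerate(x))
        # A unit contributes only when its topology gate is on.
        hidden.append(gate * max(0.0, pre))  # ReLU
    return [sum(hj * W2[j][k] for j, hj in enumerate(hidden))
            for k in range(len(W2[0]))]

# Toy dimensions: 3 continuous inputs (injections), 4 hidden units,
# 2 outputs (flows). Weights are random placeholders, not trained.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
W2 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
x = [0.5, -0.2, 0.8]

base = forward(x, [1, 1, 0, 0], W1, W2)     # reference topology
variant = forward(x, [1, 1, 1, 0], W1, W2)  # one element switched
```

Changing the topology vector changes which hidden units participate, so the same trained weights can serve many grid configurations, including unseen ones.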
582

Automated event prioritization for security operation center using graph-based features and deep learning

Jindal, Nitika 06 April 2020 (has links)
A security operation center (SOC) is a cybersecurity clearinghouse responsible for monitoring, collecting and analyzing security events from organizations' IT infrastructure and security controls. Despite their popularity, SOCs face increasing challenges and pressure due to the growing volume, velocity and variety of the IT infrastructure and security data observed on a daily basis. Due to the mixed performance of current technological solutions, e.g. intrusion detection systems (IDS) and security information and event management (SIEM), there is an over-reliance on manual analysis of the events by human security analysts. This creates huge backlogs and considerably slows down the resolution of critical security events. Obvious solutions include increasing the accuracy and efficiency of crucial aspects of the SOC automation workflow, such as event classification and prioritization. In this thesis, we present a new approach for SOC event classification and prioritization by identifying a set of new machine learning features using graph visualization and graph metrics. Using a real-world SOC dataset and applying different machine learning classification techniques, we demonstrate empirically the benefit of using the graph-based features in terms of improved classification accuracy. Three classification techniques are explored, namely logistic regression, XGBoost and a deep neural network (DNN). The experimental evaluation shows, for the DNN, the best-performing classifier, area under the curve (AUC) values of 91% for the baseline feature set and 99% for the augmented feature set that includes the graph-based features, a net improvement of 8% in classification performance. / Graduate
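As a rough illustration of what "graph-based features" can mean in this setting, the sketch below computes two simple metrics (degree and triangle count) for a node in a toy event graph. The feature choices, node names and edges are illustrative assumptions, not the thesis's actual feature set or data.

```python
from collections import defaultdict

def graph_features(edges, node):
    """Toy graph-based features for a node in an event graph:
    degree, and the number of triangles passing through the node."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    neigh = adj[node]
    # Triangles: pairs of neighbours that are themselves connected.
    triangles = sum(1 for a in neigh for b in neigh
                    if a < b and b in adj[a])
    return {"degree": len(neigh), "triangles": triangles}

# Hypothetical edges linking events, hosts and a firewall alert.
edges = [("ids", "hostA"), ("ids", "hostB"),
         ("hostA", "hostB"), ("hostB", "fw")]
feats = graph_features(edges, "hostB")
```

Features of this kind are then appended to the baseline feature vector of each event before classification.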
583

Can deep learning beat traditional econometrics in forecasting of realized volatility?

Björnsjö, Filip January 2020 (has links)
Volatility modelling is a field dominated by classic econometric methods such as the Nobel Prize-winning autoregressive conditional heteroskedasticity (ARCH) model. This paper therefore investigates whether deep learning can live up to the hype and outperform classic econometrics in forecasting of realized volatility. Letting the Heterogeneous AutoRegressive model of Realized Volatility with multiple jump components (HAR-RV-CJ) represent the econometric field as benchmark model, we compare its efficiency in forecasting realized volatility to four deep learning models. The results of the experiment show that the HAR-RV-CJ performs in line with the four deep learning models: the feed-forward neural network (FNN), recurrent neural network (RNN), long short-term memory network (LSTM) and gated recurrent unit network (GRU). Hence, the paper cannot conclude that deep learning is superior to classic econometrics in forecasting of realized volatility.
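For readers unfamiliar with the HAR family, the benchmark regresses tomorrow's realized volatility on daily, weekly (5-day) and monthly (22-day) averages of past realized volatility. A minimal sketch of those regressors, omitting the jump components that the CJ variant adds:

```python
def har_features(rv, t):
    """HAR-style regressors at day t: yesterday's realized
    volatility plus its 5-day and 22-day trailing averages
    (the jump components of HAR-RV-CJ are omitted here)."""
    daily = rv[t]
    weekly = sum(rv[t - 4:t + 1]) / 5
    monthly = sum(rv[t - 21:t + 1]) / 22
    return daily, weekly, monthly

# A constant series makes all three regressors coincide, which is
# a convenient sanity check for the windowing.
rv = [1.0] * 22
d, w, m = har_features(rv, 21)
```

The forecast is then a linear combination of these three terms, estimated by ordinary least squares.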
584

AI-based Age Estimation using X-ray Hand Images : A comparison of Object Detection and Deep Learning models

Westerberg, Erik January 2020 (has links)
Bone age assessment can be useful in a variety of ways. It can help pediatricians predict growth and puberty entrance, identify diseases, and assess whether a person lacking proper identification is a minor or not. It is a time-consuming process that is also prone to intra-observer variation, which can cause problems in many ways. This thesis attempts to improve and speed up bone age assessment by using different object detection methods to detect and segment the bones anatomically important for the assessment, and using these segmented bones to train deep learning models to predict bone age. A dataset consisting of 12,811 X-ray hand images of persons ranging from infancy to 19 years of age was used. For the first research question, we compared the performance of three state-of-the-art object detection models: Mask R-CNN, YOLO and RetinaNet. We chose the best-performing model, YOLO, to segment all the growth plates in the phalanges of the dataset. We proceeded to train four different pre-trained models: Xception, InceptionV3, VGG19 and ResNet152, using both the segmented and unsegmented datasets, and compared the performance. We achieved good results using both datasets, although the performance was slightly better using the unsegmented dataset. The analysis suggests that a higher accuracy might be achievable with the segmented dataset by adding the detection of growth plates from the carpal bones, the epiphyses and the diaphyses. The best-performing model was Xception, which achieved a mean absolute error of 1.007 years using the unsegmented dataset and 1.193 years using the segmented dataset. / The presentation was given online via Zoom.
585

Deep learning for promoter recognition: a robust testing methodology

Perez Martell, Raul Ivan 29 April 2020 (has links)
Understanding DNA sequences has been an ongoing endeavour within bioinformatics research. Recognizing the functionality of DNA sequences is a non-trivial and complex task that can bring insights into understanding DNA. In this thesis, we study deep learning models for recognizing gene-regulating regions of DNA, more specifically promoters. We first treat DNA as a language by training natural language processing models to recognize promoters. Afterwards, we delve into current models from the literature to learn how they achieve their results. Previous works have focused on limited curated datasets to both train and evaluate their models using cross-validation, obtaining high-performing results across a variety of metrics. We implement and compare three models from the literature against each other, using their datasets interchangeably throughout the comparison tests. This highlights shortcomings within the training and testing datasets for these models, prompting us to create a robust promoter recognition testing dataset and to develop a testing methodology that creates a wide variety of testing datasets for promoter recognition. We then test the models from the literature with the newly created datasets and highlight considerations to take into account when choosing a training dataset. To help others avoid such issues in the future, we open-source our findings and testing methodology. / Graduate
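The cross-dataset testing idea, evaluating every model on every dataset rather than only on its own held-out split, can be sketched as below. The models, datasets and scoring rule are toy stand-ins, not the thesis's actual models or data.

```python
def cross_evaluate(models, datasets, score):
    """Score every model on every dataset (not just its own test
    split), making train/test dataset mismatches visible."""
    return {(m_name, d_name): score(model, data)
            for m_name, model in models.items()
            for d_name, data in datasets.items()}

# Illustrative stand-ins: a "model" is just a GC-content threshold,
# a "dataset" a list of (gc_content, is_promoter) pairs.
def accuracy(threshold, data):
    correct = sum((gc >= threshold) == label for gc, label in data)
    return correct / len(data)

models = {"A": 0.5, "B": 0.7}
datasets = {"D1": [(0.6, True), (0.4, False)],
            "D2": [(0.8, True), (0.65, False)]}
results = cross_evaluate(models, datasets, accuracy)
```

A model that looks strong on its home dataset but degrades elsewhere, as in this toy matrix, is exactly the failure mode the proposed methodology is designed to expose.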
586

Vector representation to enhance pose estimation from RGB images

Zongcheng Chu (8791457) 03 May 2020 (has links)
Head pose estimation is an essential task in computer vision. Existing research on pose estimation from RGB images mainly uses either Euler angles or quaternions to predict pose. Nevertheless, both Euler angle- and quaternion-based approaches encounter the problem of discontinuity when describing three-dimensional rotations. This issue makes learning visual patterns more difficult for the convolutional neural network (CNN), which, in turn, compromises the estimation performance. To solve this problem, we introduce TriNet, a novel method based on three vectors converted from the three Euler angles (roll, pitch, yaw). The orthogonality of the three vectors enables us to implement a complementary multi-loss function, which effectively reduces the prediction error. Our method achieves state-of-the-art performance on the AFLW2000, AFW and BIWI datasets. We also extend our work to general object pose estimation and show results in the experiments section.
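The vector representation at the heart of this approach can be illustrated with the standard conversion from Euler angles to the three column vectors of a rotation matrix, which are orthonormal by construction. This is a generic sketch of that conversion, not the authors' code:

```python
import math

def rotation_vectors(roll, pitch, yaw):
    """Column vectors of R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    Targets of this kind vary continuously with rotation, unlike
    raw Euler angles or quaternions."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    R = [[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
         [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
         [-sp,     cp * sr,                cp * cr]]
    # Return the three columns; they form an orthonormal basis.
    return [[R[i][j] for i in range(3)] for j in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1, v2, v3 = rotation_vectors(0.3, -0.2, 0.5)
```

The mutual orthogonality of the predicted vectors is what makes a complementary multi-loss (one term per vector, plus consistency between them) possible.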
587

Privacy-Preserving Facial Recognition Using Biometric-Capsules

Tyler Stephen Phillips (8782193) 04 May 2020 (has links)
In recent years, developers have used the proliferation of biometric sensors in smart devices, along with recent advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method which addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable and interoperable in its secure feature fusion design.

In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication and identification systems. We compare the performance of the unsecured, underlying biometric systems to the performance of the BC-embedded systems in order to directly demonstrate the minimal effects of the privacy-preserving BC scheme on underlying system performance. Notably, we demonstrate that, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems, which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems are able to achieve accuracies of 95.13% and 99.13% respectively. Furthermore, we demonstrate that the BC scheme outperforms or performs as well as several other proposed secure biometric methods.
588

Physics Informed Neural Networks for Engineering Systems

Sukirt (8828960) 13 May 2020 (has links)
This thesis explores the application of deep learning techniques to problems in fluid mechanics, with particular focus on physics-informed neural networks. Physics-informed neural networks leverage the information gathered over centuries in the form of physical laws, mathematically represented as partial differential equations, to make up for the dearth of data associated with engineering and physical systems. To demonstrate the capability of physics-informed neural networks, an inverse and a forward problem are considered. The inverse problem involves discovering a spatially varying concentration field from observations of the concentration of a passive scalar. A forward problem involving conjugate heat transfer is solved as well, where the boundary conditions on velocity and temperature are used to discover the velocity, pressure and temperature fields in the entire domain. The predictions of the physics-informed neural networks are compared against simulated data generated using OpenFOAM.
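The composite loss behind physics-informed neural networks, a data term plus a residual of the governing equation at collocation points, can be sketched on a toy problem. Here finite differences stand in for the automatic differentiation a real PINN would use, and the ODE u'(x) = -u(x) is an illustrative assumption, not one of the thesis's fluid mechanics problems:

```python
import math

def pinn_style_loss(u, xs, data, h=1e-5):
    """Composite loss for the ODE u'(x) = -u(x): a data-misfit term
    on observed points plus a physics-residual term on collocation
    points, with central differences standing in for autograd."""
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)
    residual = sum(((u(x + h) - u(x - h)) / (2 * h) + u(x)) ** 2
                   for x in xs) / len(xs)
    return data_loss + residual

xs = [0.1 * i for i in range(1, 10)]          # collocation points
data = [(0.0, 1.0), (1.0, math.exp(-1))]      # two observations

# The exact solution exp(-x) drives both terms to (nearly) zero;
# an arbitrary candidate like u(x) = x does not.
loss_exact = pinn_style_loss(lambda x: math.exp(-x), xs, data)
loss_wrong = pinn_style_loss(lambda x: x, xs, data)
```

Training a PINN amounts to minimizing a loss of this shape over the network's weights, which is how sparse data and physical law are combined.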
589

Deep neural networks for the detection of small vehicles in aerial imagery

Ogier du Terrail, Jean 20 December 2018 (has links)
This manuscript attempts to tackle the problem of small-vehicle detection in vertical aerial imagery using deep learning algorithms. The specificities of the problem allow the use of innovative techniques that leverage the invariances and self-similarities of automobiles and planes seen from the sky. We start with a thorough study of single-shot detectors. Building on that, we examine the effect of adding multiple stages to the detection decision process. Finally, we address the domain adaptation problem in detection through the generation of ever more realistic synthetic data and its use in the training process of these detectors.
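Detectors of the kind studied here, single-shot or multi-stage, are typically scored by matching predicted boxes to ground truth via intersection-over-union (IoU). A generic sketch of that criterion, not code from the thesis:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the standard matching criterion used to
    decide whether a detection counts as correct."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # partially overlapping
```

For small vehicles the boxes are only a few pixels wide, so a one-pixel localization error moves the IoU sharply, which is part of what makes this detection setting hard.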
590

Security Framework for the Internet of Things Leveraging Network Telescopes and Machine Learning

Shaikh, Farooq Israr Ahmed 04 April 2019 (has links)
The recent advancements in computing and sensor technologies, coupled with improvements in embedded system design methodologies, have resulted in the novel paradigm called the Internet of Things (IoT). IoT is essentially a network of small embedded devices enabled with sensing capabilities that can interact with multiple entities to relay information about their environments. This sensing information can also be stored in the cloud for further analysis, thereby reducing storage requirements on the devices themselves. The above factors, coupled with the ever-increasing need of modern society to stay connected at all times, have resulted in IoT technology penetrating all facets of modern life. In fact, IoT systems are already seeing widespread applications across multiple industries such as transport, utilities, manufacturing, healthcare and home automation. Although the above developments promise tremendous benefits in terms of productivity and efficiency, they also bring forth a plethora of security challenges. Namely, the current design philosophy of IoT devices, which focuses more on rapid prototyping and usability, results in security often being an afterthought. Furthermore, one needs to remember that, unlike traditional computing systems, these devices operate under tight resource constraints, which makes IoT devices a lucrative target for exploitation by adversaries. This inherent flaw of IoT setups has manifested itself in the form of various distributed denial-of-service (DDoS) attacks that have achieved massive throughputs without the need for techniques such as amplification. Furthermore, once exploited, an IoT device can also function as a pivot point for adversaries to move laterally across the network and exploit other, potentially more valuable, systems and services. 
Finally, vulnerable IoT devices operating in industrial control systems and other critical infrastructure setups can cause sizable loss of property and in some cases even lives, a very sobering fact. In light of the above, this dissertation research presents several novel strategies for identifying known and zero-day attacks against IoT devices, as well as identifying infected IoT devices present inside a network along with some mitigation strategies. To this end, network telescopes are leveraged to generate Internet-scale notions of maliciousness in conjunction with signatures that can be used to identify such devices in a network. This strategy is further extended by developing a taxonomy-based methodology which is capable of categorizing unsolicited IoT behavior by leveraging machine learning (ML) techniques, such as ensemble learners, to identify similar threats in near-real time. Furthermore, to overcome the challenge of insufficient (malicious) training data within the IoT realm, a generative adversarial network (GAN) based framework is also developed to identify known and unseen attacks on IoT devices. Finally, a software defined networking (SDN) based solution is proposed to mitigate threats from unsolicited IoT devices.
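The ensemble learners mentioned above combine several base classifiers; majority voting is the simplest combination rule. The sketch below uses toy hand-written rules on fictional traffic records purely to illustrate the mechanism, not the dissertation's actual features or models:

```python
from collections import Counter

def ensemble_predict(classifiers, sample):
    """Majority vote over base classifiers, the basic mechanism
    behind ensemble learning."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Toy rules flagging unsolicited IoT traffic from a (port, rate)
# record; real base learners would be trained models.
rules = [
    lambda s: "malicious" if s["port"] == 23 else "benign",    # telnet probe
    lambda s: "malicious" if s["rate"] > 1000 else "benign",   # flood rate
    lambda s: "malicious" if s["port"] in (23, 2323) else "benign",
]
verdict = ensemble_predict(rules, {"port": 23, "rate": 10})
```

Because each base learner errs on different inputs, the vote tends to be more robust than any single rule, which is the property exploited for near-real-time categorization of unsolicited IoT behavior.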
