
'Understanding mathematics in depth' : an investigation into the conceptions of secondary mathematics teachers on two UK subject knowledge enhancement courses

Stevenson, Mary January 2013 (has links)
This thesis is an investigation into conceptions of ‘understanding mathematics in depth’, as articulated by two specific groups of novice secondary mathematics teachers in the UK. Most of the participants interviewed had completed one of two government-funded mathematics subject knowledge enhancement courses, which were devised with the aim of strengthening students’ understanding of fundamental mathematics. Qualitative data was drawn from semi-structured interviews with 21 subjects and from more in-depth case studies of two members of the sample. The data reveals some key themes common to both groups, and also some clear differences. The data also brings to light some new emergent theory which is particularly relevant in novice teachers’ contexts. To provide background context to this study, quantitative data on pre-service mathematics Postgraduate Certificate in Education (PGCE) students is also presented, and it is shown that, at the university in the study, there is no relationship between degree classification on entry to the PGCE and effectiveness as a teacher as measured on exit from the course. The data also shows that there are no significant differences in subject knowledge and overall performance on exit from the PGCE between students who have previously followed a subject knowledge enhancement course and those who have followed more traditional degree routes.

Real-Time Instance and Semantic Segmentation Using Deep Learning

Kolhatkar, Dhanvin 10 June 2020 (has links)
In this thesis, we explore the use of Convolutional Neural Networks for semantic and instance segmentation, with a focus on studying the application of existing methods with cheaper neural networks. We modify a fast object detection architecture for the instance segmentation task, and study the concepts behind these modifications both in the simpler context of semantic segmentation and the more difficult context of instance segmentation. Various instance segmentation branch architectures are implemented in parallel with a box prediction branch, using its results to crop each instance's features. We negate the imprecision of the final box predictions and eliminate the need for bounding box alignment by using an enlarged bounding box for cropping. We report and study the performance, advantages, and disadvantages of each. We achieve fast speeds with all of our methods.
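As a hedged illustration of the enlarged-bounding-box cropping idea described above (not the thesis code), the sketch below grows a predicted box around its centre before slicing per-instance features from a feature map; the enlargement factor, helper names, and shapes are illustrative assumptions.

```python
import numpy as np

def enlarge_box(box, scale, h, w):
    """Grow an (x1, y1, x2, y2) box around its centre by `scale`, clipped to the map."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    bw, bh = (x2 - x1) * scale, (y2 - y1) * scale
    return (max(0.0, cx - bw / 2), max(0.0, cy - bh / 2),
            min(float(w), cx + bw / 2), min(float(h), cy + bh / 2))

def crop_instance_features(feature_map, box, scale=1.4):
    """Crop per-instance features with an enlarged box so that a slightly
    imprecise box prediction still covers the whole instance."""
    h, w = feature_map.shape[1:]                 # feature_map: (C, H, W)
    x1, y1, x2, y2 = enlarge_box(box, scale, h, w)
    return feature_map[:, int(y1):int(np.ceil(y2)), int(x1):int(np.ceil(x2))]

# Toy usage: a random feature map and one predicted box (feature-map units).
features = np.random.rand(256, 64, 64)
crop = crop_instance_features(features, box=(10.0, 12.0, 30.0, 40.0))
print(crop.shape)
```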

Automatic Programming Code Explanation Generation with Structured Translation Models

January 2020 (has links)
abstract: Learning programming involves a variety of complex cognitive activities, from abstract knowledge construction to structural operations, which include program design, modifying, debugging, and documenting tasks. In this work, the objective was to explore and investigate the barriers and obstacles that novice programming learners encountered and how the learners overcame them. Several lab and classroom studies were designed and conducted; the results showed that novice students had different behavior patterns compared to experienced learners, which indicates the obstacles they encountered. The studies also showed that proper assistance could help novices find helpful materials to read. However, novices still suffered from a lack of background knowledge and limited cognitive capacity while learning, which resulted in challenges in understanding programming-related materials, especially code examples. Therefore, I further proposed to use a natural language generator (NLG) to generate code explanations for educational purposes. The natural language generator is designed based on Long Short-Term Memory (LSTM), a deep-learning translation model. To establish the model, a data set was collected from Amazon Mechanical Turk (AMT), recording explanations from human experts for programming code lines. To evaluate the model, a pilot study was conducted and showed that the readability of the machine-generated (MG) explanations was comparable with human explanations, while their accuracy is still not ideal, especially for complicated code lines. Furthermore, a code-example-based learning platform was developed to utilize the explanation-generating model in programming teaching. To examine the effect of code example explanations on different learners, two lab-class experiments were conducted separately in a programming novices’ class and an advanced students’ class. The experiment results indicated that when learning programming concepts, the MG code explanations significantly improved learning predictability for novices compared to the control group, and the explanations also extended the novices’ learning time by generating more material to read, which potentially leads to a better learning gain. In addition, a complete correlation model was constructed according to the experiment results to illustrate the connections between different factors and the learning effect. / Dissertation/Thesis / Doctoral Dissertation Engineering 2020
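A highly simplified sketch of the kind of LSTM-based translation model described above (code tokens in, explanation tokens out), written with Keras; the vocabulary sizes, dimensions, and layer names are illustrative assumptions, not the author's implementation.

```python
from tensorflow.keras import layers, models

CODE_VOCAB, TEXT_VOCAB, EMBED, HIDDEN = 5000, 8000, 128, 256  # illustrative sizes

# Encoder: embeds a sequence of code tokens and keeps the final LSTM state.
code_in = layers.Input(shape=(None,), name="code_tokens")
enc = layers.Embedding(CODE_VOCAB, EMBED)(code_in)
_, state_h, state_c = layers.LSTM(HIDDEN, return_state=True)(enc)

# Decoder: generates explanation tokens conditioned on the encoder state.
text_in = layers.Input(shape=(None,), name="explanation_tokens")
dec = layers.Embedding(TEXT_VOCAB, EMBED)(text_in)
dec = layers.LSTM(HIDDEN, return_sequences=True)(dec, initial_state=[state_h, state_c])
out = layers.Dense(TEXT_VOCAB, activation="softmax")(dec)

model = models.Model([code_in, text_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```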

Limitations of Classical Tomographic Reconstructions from Restricted Measurements and Enhancing with Physically Constrained Machine Learning

January 2020 (has links)
abstract: This work is concerned with how best to reconstruct images from limited angle tomographic measurements. An introduction to tomography and to limited angle tomography will be provided and a brief overview of the many fields to which this work may contribute is given. The traditional tomographic image reconstruction approach involves Fourier domain representations. The classic Filtered Back Projection algorithm will be discussed and used for comparison throughout the work. Bayesian statistics and information entropy considerations will be described. The Maximum Entropy reconstruction method will be derived and its performance in limited angular measurement scenarios will be examined. Many new approaches become available once the reconstruction problem is placed within an algebraic form of Ax=b in which the measurement geometry and instrument response are defined as the matrix A, the measured object as the column vector x, and the resulting measurements by b. It is straightforward to invert A. However, for the limited angle measurement scenarios of interest in this work, the inversion is highly underconstrained and has an infinite number of possible solutions x consistent with the measurements b in a high dimensional space. The algebraic formulation leads to the need for high performing regularization approaches which add constraints based on prior information of what is being measured. These are constraints beyond the measurement matrix A added with the goal of selecting the best image from this vast uncertainty space. It is well established within this work that developing satisfactory regularization techniques is all but impossible except for the simplest pathological cases. There is a need to capture the "character" of the objects being measured. The novel result of this effort will be in developing a reconstruction approach that will match whatever reconstruction approach has proven best for the types of objects being measured given full angular coverage. However, when confronted with limited angle tomographic situations or early in a series of measurements, the approach will rely on a prior understanding of the "character" of the objects measured. This understanding will be learned by a parallel Deep Neural Network from examples. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
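As a hedged illustration of the algebraic formulation above (not the thesis's method), the sketch below builds a tiny underconstrained Ax = b system and compares an unregularized least-squares solve with a Tikhonov-regularized one; the system size, noise level, and regularization weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: fewer measurements (rows) than unknowns (columns), mimicking a
# limited-angle geometry where Ax = b is highly underconstrained.
n_meas, n_pix = 40, 100
A = rng.normal(size=(n_meas, n_pix))               # measurement geometry / response
x_true = rng.normal(size=n_pix)                    # object being imaged
b = A @ x_true + 0.01 * rng.normal(size=n_meas)    # noisy measurements

# Unregularized least squares: one of infinitely many consistent solutions.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2 by solving
# (A^T A + lam I) x = A^T b. The prior here is simply "small norm"; a stronger
# prior capturing the "character" of the objects would replace lam * I.
lam = 0.1
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ b)

print("LS error:      ", np.linalg.norm(x_ls - x_true))
print("Tikhonov error:", np.linalg.norm(x_tik - x_true))
```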

Physical layer security in emerging wireless transmission systems

Bao, Tingnan 06 July 2020 (has links)
Traditional cryptographic encryption techniques at higher layers require a certain form of information sharing between the transmitter and the legitimate user to achieve security. They also assume that the eavesdropper has insufficient computational capability to decrypt the ciphertext without the shared information. However, traditional cryptographic encryption techniques may be insufficient or even not suitable in wireless communication systems. Physical layer security (PLS) can enhance the security of wireless communications by leveraging the physical nature of wireless transmission. Thus, in this thesis, we study the PLS performance in emerging wireless transmission systems. The thesis consists of two main parts. We first consider the PLS design and analysis for ground-based networks employing the random unitary beamforming (RUB) scheme at the transmitter. With the RUB technique, the transmitter serves multiple users with pre-designed beamforming vectors, selected using limited channel state information (CSI). We study the multiple-input single-output single-eavesdropper (MISOSE) transmission system, the multi-user multiple-input multiple-output single-eavesdropper (MU-MIMOSE) transmission system, and the massive multiple-input multiple-output multiple-eavesdropper (massive MIMOME) transmission system. The closed-form expressions of the ergodic secrecy rate and the secrecy outage probability (SOP) for these transmission scenarios are derived. The effect of artificial noise (AN) on the secrecy performance of RUB-based transmission is also investigated. Numerical results are presented to illustrate the trade-off between performance and complexity of the resulting PLS design. We then investigate the PLS design and analysis for unmanned aerial vehicle (UAV)-based networks. We first study the secrecy performance of UAV-assisted relaying transmission systems in the presence of a single ground eavesdropper. We derive closed-form expressions of the ergodic secrecy rate and the intercept probability. When multiple aerial and ground eavesdroppers are located in the UAV-assisted relaying transmission system, a directional beamforming technique is applied to enhance the secrecy performance. Assuming the most general κ-μ shadowed fading channel, the SOP performance is obtained in closed form. Exploiting the derived expressions, we investigate the impact of different parameters on secrecy performance. We also utilize a deep learning approach in UAV-based network analysis. Numerical results show that our proposed deep learning approach can predict secrecy performance with high accuracy and short running time. / Graduate
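For readers unfamiliar with the metrics named above, a standard textbook formulation (general literature, not quoted from the thesis) of the secrecy capacity and secrecy outage probability is:

```latex
% \gamma_B and \gamma_E denote the instantaneous SNRs of the legitimate user
% and the eavesdropper, and R_s > 0 is the target secrecy rate.
\begin{align}
  C_s &= \bigl[\log_2(1+\gamma_B) - \log_2(1+\gamma_E)\bigr]^{+}, \\
  P_{\mathrm{SOP}} &= \Pr\{C_s < R_s\}.
\end{align}
```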

Deep learning for image compression / Apprentissage profond pour la compression d'image

Dumas, Thierry 07 June 2019 (has links)
Over the last twenty years, the amount of transmitted images and videos has increased noticeably, mainly urged on by Facebook and Netflix. Even though broadcast capacities improve, this growing amount of transmitted images and videos requires increasingly efficient compression methods. This thesis aims at improving via learning two critical components of the modern image compression standards, which are the transform and the intra prediction. More precisely, deep neural networks are used for this task as they exhibit high power of approximation, which is needed for learning a reliable approximation of an optimal transform (or an optimal intra prediction filter) applied to image pixels. Regarding the learning of a transform for image compression via neural networks, a challenge is to learn a unique transform that is efficient in terms of rate-distortion while keeping this efficiency when compressing at different rates. That is why two approaches are proposed to take on this challenge. In the first approach, the neural network architecture sets a sparsity constraint on the transform coefficients. The level of sparsity gives direct control over the compression rate. To force the transform to adapt to different compression rates, the level of sparsity is stochastically driven during the training phase. In the second approach, the rate-distortion efficiency is obtained by minimizing a rate-distortion objective function during the training phase. During the test phase, the quantization step sizes are gradually increased according to a schedule to compress at different rates using the single learned transform. Regarding the learning of an intra prediction filter for image compression via neural networks, the issue is to obtain a learned filter that is adaptive with respect to the size of the image block to be predicted, with respect to missing information in the context of prediction, and with respect to the variable quantization noise in this context. A set of neural networks is designed and trained so that the learned prediction filter has this adaptability.
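A minimal sketch of the second approach's training objective as described above — a rate-distortion trade-off minimized end-to-end — written in PyTorch; the toy transform pair, additive-noise quantization proxy, and L1 rate proxy are placeholders, not the thesis architecture or its entropy model.

```python
import torch
import torch.nn as nn

class TinyTransformCoder(nn.Module):
    """Toy learned analysis/synthesis transform pair (placeholder architecture)."""
    def __init__(self, dim=64):
        super().__init__()
        self.analysis = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.synthesis = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, q_step):
        y = self.analysis(x) / q_step
        y_hat = y + (torch.rand_like(y) - 0.5)      # additive-noise proxy for quantization
        return self.synthesis(y_hat * q_step), y_hat

def rate_distortion_loss(x, x_hat, y_hat, lam=0.01):
    # Distortion: MSE between input and reconstruction.
    # Rate proxy: L1 on the quantized coefficients (crude stand-in for an entropy model).
    distortion = torch.mean((x - x_hat) ** 2)
    rate = torch.mean(torch.abs(y_hat))
    return distortion + lam * rate

model = TinyTransformCoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)                             # toy "image" vectors
x_hat, y_hat = model(x, q_step=1.0)
loss = rate_distortion_loss(x, x_hat, y_hat)
loss.backward()
opt.step()
print(float(loss))
```

At test time, the same learned transform would be reused while only q_step is increased according to a schedule, which is the mechanism the abstract describes for reaching different rates with a single transform.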

Towards an Accurate ECG Biometric Authentication System with Low Acquisition Time

Arteaga Falconi, Juan Sebastian 31 January 2020 (has links)
Biometrics is the study of physical or behavioral traits that establish the identity of a person. Forensics, physical security and cyber security are some of the main fields that use biometrics. Unlike traditional authentication systems—such as password-based ones—biometrics cannot be lost, forgotten or shared. This is possible because biometrics establishes the identity of a person based on a physiological/behavioural characteristic rather than on what the person possesses or remembers. Biometrics has two modes of operation: identification and authentication. Identification finds the identity of a person among a group of persons. Authentication determines if the claimed identity of a person is truthful. Biometric person authentication is an alternative to passwords or graphical patterns. It prevents shoulder surfing attacks, i.e., people watching from a short distance. Nevertheless, biometric traits of conventional authentication techniques like fingerprints, face—and to some extent iris—are easy to capture and duplicate. This denotes a security risk for modern and future applications such as digital twins, where an attacker can copy and duplicate a biometric trait in order to spoof a biometric system. Researchers have proposed ECG as a biometric authentication modality to solve this problem. ECG authentication conceals the biometric traits and reduces the risk of an attack by duplication of the biometric trait. However, current ECG authentication solutions require 10 or more seconds of an ECG signal in order to have accurate results. The accuracy is directly proportional to the ECG signal time-length for authentication. This makes ECG authentication inconvenient to implement in an end-user product, because a user cannot wait 10 or more seconds to gain secure access to their device. This thesis addresses the problem of spoofing by proposing an accurate and secure ECG biometric authentication system with a relatively short ECG signal length for authentication. The system consists of ECG acquisition from lead I (two electrodes), signal processing approaches for filtering and R-peak detection, a feature extractor and an authentication process. To evaluate this system, we developed a method to calculate the Equal Error Rate—EER—with non-normally distributed data. In the authentication process, we propose an approach based on Support Vector Machine—SVM—and achieve 4.5% EER with 4 seconds of ECG signal length for authentication. This approach opens the door for a deeper understanding of the signal, and hence we enhanced it by applying a hybrid approach of Convolutional Neural Networks—CNN—combined with SVM. The purpose of this hybrid approach is to improve accuracy by automatically detecting and extracting features with Deep Learning—in this case a CNN—and then feeding the output into a one-class SVM classifier for authentication, which proved to improve accuracy for one-class ECG classification. This hybrid approach reduces the EER to 2.84% with 4 seconds of ECG signal length for authentication. Furthermore, we investigated the combination of two different biometric techniques and improved the accuracy to 0.46% EER, while maintaining a short ECG signal length for authentication of 4 seconds. We fuse fingerprint with ECG at the decision level. Decision-level fusion requires only information that is available from any biometric technique. Fusion at other levels—such as feature-level fusion—requires information about features that are incompatible or hidden. Fingerprint minutiae are composed of information that differs from ECG peaks and valleys. Therefore, fusion at the feature level is not possible unless the fusion algorithm provides a compatible conversion scheme. Proprietary biometric hardware does not provide information about the features or the algorithms; therefore, features are hidden and not accessible for feature-level fusion, whereas the result is always available for decision-level fusion.
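A hedged sketch of the hybrid idea described above — a small 1-D CNN as feature extractor feeding a one-class SVM trained only on the genuine user — using PyTorch and scikit-learn; the layer sizes, 4-second window length at 250 Hz, and enrollment data are illustrative assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

class ECGFeatureNet(nn.Module):
    """Toy 1-D CNN that maps a 4-second ECG window to a feature vector."""
    def __init__(self, n_features=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(16, n_features)

    def forward(self, x):                       # x: (batch, 1, samples)
        return self.fc(self.conv(x).squeeze(-1))

net = ECGFeatureNet().eval()

# Illustrative enrollment data: 20 windows of 4 s at 250 Hz from the genuine user.
enroll = torch.randn(20, 1, 1000)
with torch.no_grad():
    enroll_feats = net(enroll).numpy()

# The one-class SVM models "genuine user" only; anything far from it is rejected.
authenticator = OneClassSVM(kernel="rbf", nu=0.05).fit(enroll_feats)

probe = torch.randn(1, 1, 1000)                 # a new 4-second window to verify
with torch.no_grad():
    decision = authenticator.predict(net(probe).numpy())
print("accepted" if decision[0] == 1 else "rejected")
```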

Classification of Heart Views in Ultrasound Images

Pop, David January 2020 (has links)
In today’s society, we experience an increasing challenge to provide healthcare to everyone in need due to the increasing number of patients and the shortage of medical staff. Computers have contributed to mitigating this challenge by offloading some of the tasks from the medical staff. With the rise of deep learning, countless new possibilities have opened up to help the medical staff even further. One domain where deep learning can be applied is the analysis of ultrasound images. In this thesis we investigate the problem of classifying standard views of the heart in ultrasound images with the help of deep learning. We conduct three main experiments. First, we use NasNet mobile, InceptionV3, VGG16 and MobileNet, pre-trained on ImageNet, and fine-tune them to ultrasound heart images. We compare the accuracy of these networks to each other and to the baseline model, a CNN that was proposed in [23]. Then we assess a neural network’s capability to generalize to images from ultrasound machines that the network is not trained on. Lastly, we test how the performance of the networks degrades with a decreasing amount of training data. Our first experiment shows that all networks considered in this study have very similar performance in terms of accuracy, with InceptionV3 being slightly better than the rest. The best performance is achieved when the whole network is fine-tuned to our problem instead of fine-tuning only a part of it, while gradually unlocking more layers for training. The generalization experiment shows that neural networks have the potential to generalize to images from ultrasound machines that they are not trained on. It also shows that having a mix of multiple ultrasound machines in the training data increases generalization performance. In our last experiment we compare the performance of the CNN proposed in [23] with MobileNet pre-trained on ImageNet and MobileNet randomly initialized. This shows that the performance of the baseline model suffers the least with a decreasing amount of training data, and that pre-training drastically helps performance on smaller training datasets.
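A brief sketch of the transfer-learning setup described above — an ImageNet-pretrained MobileNet with a new classification head fine-tuned on heart-view images — using Keras; the number of view classes, image size, and initial freezing policy are assumptions for illustration, not the thesis's exact training recipe.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

NUM_VIEWS = 8                  # assumed number of standard heart views
IMG_SHAPE = (224, 224, 3)

# ImageNet-pretrained backbone without its original classifier head.
backbone = MobileNet(weights="imagenet", include_top=False,
                     input_shape=IMG_SHAPE, pooling="avg")
backbone.trainable = False     # start by training only the new head

model = models.Sequential([
    backbone,
    layers.Dropout(0.3),
    layers.Dense(NUM_VIEWS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# To fine-tune the whole network (as in the best-performing setting reported
# above), set backbone.trainable = True, recompile with a small learning rate,
# and continue training, optionally unlocking layers gradually.
```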

Approaches for Efficient Autonomous Exploration using Deep Reinforcement Learning

Thomas Molnar (8735079) 24 April 2020 (has links)
For autonomous exploration of complex and unknown environments, existing Deep Reinforcement Learning (Deep RL) approaches struggle to generalize from computer simulations to real world instances. Deep RL methods typically exhibit low sample efficiency, requiring a large amount of data to develop an optimal policy function for governing an agent's behavior. RL agents expect well-shaped and frequent rewards to receive feedback for updating policies. Yet in real world instances, rewards and feedback tend to be infrequent and sparse.

For sparse reward environments, an intrinsic reward generator can be utilized to facilitate progression towards an optimal policy function. The proposed Augmented Curiosity Modules (ACMs) extend the Intrinsic Curiosity Module (ICM) by Pathak et al. These modules utilize depth image and optical flow predictions with intrinsic rewards to improve sample efficiency. Additionally, the proposed Capsules Exploration Module (Caps-EM) pairs a Capsule Network, rather than a Convolutional Neural Network, architecture with an A2C algorithm. This provides a more compact architecture without need for intrinsic rewards, which the ICM and ACMs require. Tested using ViZDoom for experimentation in visually rich and sparse feature scenarios, both the Depth-Augmented Curiosity Module (D-ACM) and Caps-EM improve autonomous exploration performance and sample efficiency over the ICM. The Caps-EM is superior, using 44% and 83% fewer trainable network parameters than the ICM and D-ACM, respectively. On average across all “My Way Home” scenarios, the Caps-EM converges to a policy function with 1141% and 437% time improvements over the ICM and D-ACM, respectively.
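A compact sketch of the curiosity-style intrinsic reward that the ICM family builds on — the prediction error of a learned forward model in feature space — written in PyTorch; the feature and forward-model sizes are illustrative, and this is not the ICM, D-ACM, or Caps-EM implementation.

```python
import torch
import torch.nn as nn

class ForwardModelCuriosity(nn.Module):
    """Intrinsic reward = error of predicting the next state's features
    from the current features and the action taken (curiosity-style)."""
    def __init__(self, obs_dim=64, n_actions=4, feat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU())
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.n_actions = n_actions

    def forward(self, obs, action, next_obs):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        a = nn.functional.one_hot(action, self.n_actions).float()
        phi_pred = self.forward_model(torch.cat([phi, a], dim=-1))
        # Per-sample squared prediction error doubles as the intrinsic reward.
        return ((phi_pred - phi_next.detach()) ** 2).mean(dim=-1)

curiosity = ForwardModelCuriosity()
obs, next_obs = torch.randn(5, 64), torch.randn(5, 64)
action = torch.randint(0, 4, (5,))
intrinsic_reward = curiosity(obs, action, next_obs)
print(intrinsic_reward)   # added to the (sparse) extrinsic reward during training
```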

Anomaly detection in mechanical components based on Deep Learning and Random Cut Forests / Detección de anomalías en componentes mecánicos en base a Deep Learning y Random Cut Forests

Aichele Figueroa, Diego Andrés January 2019 (has links)
Thesis submitted to qualify for the degree of Mechanical Civil Engineer / In the field of maintenance, monitoring a machine can be very useful, since it makes it possible to notice any anomaly in its internal operation so that any defect can be corrected before a more serious failure occurs. In data mining, anomaly detection is the task of identifying anomalous elements, that is, elements that differ from what is common within a data set. Anomaly detection has applications in different domains; for example, it is used today in banks to detect fraudulent purchases and possible scams through patterns of user behaviour. Because large amounts of data must be covered, its development within probabilistic machine learning is essential. A variety of algorithms have been developed to find anomalies; among decision-tree methods, one of the best known is Isolation Forest. Several works derived from the Isolation Forest algorithm propose improvements to it, such as the Robust Random Cut Forest, which on the one hand improves the precision of anomaly detection and, in addition, offers the advantage of analysing data dynamically and searching for anomalies in real time. On the other hand, it has the disadvantage that the more attributes the data sets contain, the more computation time is needed to detect an anomaly. Therefore, an attribute-reduction method, also known as dimensionality reduction, is used, and its effect on the effectiveness and efficiency of the algorithm is compared against the case without reducing the dimension of the data. This thesis analyses the Robust Random Cut Forest algorithm and finally proposes a possible improvement to it. To test the algorithm, an experiment with steel bars is carried out, in which their vibrations are recorded while excited by white noise. These data are processed in three different scenarios: without dimensionality reduction, with principal component analysis (PCA), and with an autoencoder. The first scenario (no dimensionality reduction) serves as a reference point to see how scenarios two and three vary in anomaly detection, in effectiveness and efficiency. The study is then carried out within the framework of the three scenarios to detect anomalous points. The results show an improvement when reducing the dimensions, both in computation time (efficiency) and in precision (effectiveness) for finding an anomaly; the best results are obtained with principal component analysis (PCA).
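As a hedged illustration of the pipeline described above — reduce dimensionality first, then run a tree-based anomaly detector on vibration features — the sketch below uses scikit-learn's PCA and IsolationForest as stand-ins (the thesis uses Robust Random Cut Forest and also an autoencoder); the signal shapes, attribute counts, and contamination level are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy vibration features: 200 healthy measurements plus 5 anomalous ones,
# each described by 50 attributes (e.g., spectral bins of the bar response).
healthy = rng.normal(0.0, 1.0, size=(200, 50))
damaged = rng.normal(3.0, 1.0, size=(5, 50))
X = np.vstack([healthy, damaged])

# Dimensionality reduction before detection, as in the PCA scenario.
X_reduced = PCA(n_components=5).fit_transform(X)

# IsolationForest stands in here for the Robust Random Cut Forest.
detector = IsolationForest(contamination=0.03, random_state=0).fit(X_reduced)
labels = detector.predict(X_reduced)            # -1 = anomaly, +1 = normal

print("flagged indices:", np.where(labels == -1)[0])
```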
