31

Etude et conception de circuits innovants exploitant les caractéristiques des nouvelles technologies mémoires résistives / Study and design of an innovative chip leveraging the characteristics of resistive memory technologies

Lorrain, Vincent 09 January 2018
In this thesis, we study dedicated computational approaches for deep neural networks, and more particularly for convolutional neural networks (CNNs). The efficiency of convolutional neural networks makes them an interesting choice for many applications. We study the different implementation possibilities for this type of network in order to deduce their computational complexity, and we show that this complexity can quickly become incompatible with embedded resources. To address this issue, we explored different neuron models and architectures that could minimize the resources required by the application. As a first step, our approach consisted in exploring the possible gains from changing the neuron model. We show that so-called spiking models theoretically reduce the computational complexity while offering interesting dynamic properties, but require a complete rethinking of the hardware architecture. We then proposed our spiking approach to the computation of convolutional neural networks, together with an associated architecture. We set up a software and hardware simulation chain in order to explore the different computation paradigms and hardware implementations, and to evaluate their suitability for embedded environments. This chain allows us to validate the computational aspects and to assess the relevance of our architectural choices. Our theoretical approach was validated by this chain, and our architecture was simulated in 28 nm FDSOI.
We have thus shown that this approach is relatively efficient, with interesting properties in terms of scaling, dynamic precision and computational performance. In the end, implementing convolutional neural networks with spiking models seems promising for improving network efficiency. Moreover, it opens the way to further improvements through the addition of unsupervised STDP-type learning, better spike coding, or the efficient integration of RRAM-type memory.
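The spiking approach described above replaces the multiply-accumulate of a conventional neuron with spike integration. A minimal sketch of an integrate-and-fire neuron, the basic unit of such models, is given below; all parameters (weights, threshold, input spike trains) are illustrative assumptions, not values from the thesis.

```python
# Sketch of an integrate-and-fire neuron: the membrane potential accumulates
# weighted input spikes and the neuron emits a spike (then resets) whenever
# the potential crosses a threshold. Weights and threshold are assumed values.

class IntegrateAndFireNeuron:
    def __init__(self, weights, threshold=1.0):
        self.weights = weights      # one weight per input line
        self.threshold = threshold  # firing threshold
        self.potential = 0.0        # membrane potential

    def step(self, input_spikes):
        """Accumulate weighted input spikes; emit 1 and reset on threshold."""
        self.potential += sum(w for w, s in zip(self.weights, input_spikes) if s)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

neuron = IntegrateAndFireNeuron(weights=[0.4, 0.3, 0.5], threshold=1.0)
# Feed three time steps of binary input spikes.
outputs = [neuron.step(s) for s in [(1, 1, 0), (0, 0, 1), (1, 0, 0)]]
```

Note how computation only happens when spikes arrive, which is the source of the complexity reduction the abstract refers to.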
32

Djupinlärning för kameraövervakning / Deep Learning for Camera Surveillance

Blomqvist, Linus January 2020
According to Brå (the Swedish National Council for Crime Prevention), assault crimes are on the rise in Sweden. To help address this, footage captured by surveillance cameras can be used in criminal investigations and later as evidence to convict the perpetrator or perpetrators of a crime. To optimize monitoring, companies can use automated recognition. Automating the recognition of normal versus abnormal activities can be solved with deep learning. The purpose of this study is to find a suitable model that can identify abnormal activity (for example, a fight). The model architecture used during the project was 3D ResNet, because it can handle deeper architectures; a deeper network means better prediction of the problem. 3D ResNet-34 was the model architecture that gave the highest accuracy, 93.33%.
The project was implemented in the PyTorch framework. The study has shown that, with the help of transfer learning, it is possible to reuse knowledge from pre-trained models and apply it to the problem at hand. This contributes to a more reliable model with accurate predictions on new surveillance footage.
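The transfer-learning idea described above, a pre-trained backbone reused as-is while only a small classification head is trained on the new task, can be sketched without any deep-learning framework. The "backbone" below is a stand-in function and the clips are toy data, not the thesis's 3D ResNet or surveillance footage.

```python
# Sketch of transfer learning: a frozen, pre-trained feature extractor feeds
# a small logistic-regression head, and only the head's parameters are
# trained. Backbone, features, data and hyperparameters are all assumptions.
import math

def frozen_backbone(clip):
    """Stand-in for a pre-trained feature extractor (weights never updated)."""
    return [sum(clip) / len(clip), max(clip) - min(clip)]  # 2 toy features

def train_head(clips, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on top of the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for clip, y in zip(clips, labels):
            f = frozen_backbone(clip)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            for i in range(2):                  # gradient step, head only
                w[i] -= lr * (p - y) * f[i]
            b -= lr * (p - y)
    return w, b

# Toy "normal" vs "abnormal" clips: abnormal ones vary more frame to frame.
clips = [[1, 1, 1], [2, 2, 2], [0, 5, 0], [1, 9, 1]]
labels = [0, 0, 1, 1]
w, b = train_head(clips, labels)
```

The design point is that the backbone's knowledge is reused unchanged, so only a handful of head parameters need data from the new task.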
33

Automated Classification of Steel Samples : An investigation using Convolutional Neural Networks

Ahlin, Björn, Gärdin, Marcus January 2017
Automated image recognition software has previously been used for various analyses in the steelmaking industry. In this study, we investigated the possibility of applying such software to classify Scanning Electron Microscope (SEM) images of two steel samples. The two samples were of the same steel grade, but had been treated with calcium for different lengths of time. To enable automated image recognition, a Convolutional Neural Network (CNN) was built. The network was constructed with open-source code provided by the Keras documentation, ensuring an easily reproducible program. It was trained, validated and tested, first on non-binarized images and then on binarized images. Binarized images were used to ensure that the network's predictions consider only the inclusion information and not the substrate. The non-binarized images gave a classification accuracy of 99.99%; for the binarized images, the accuracy obtained was 67.9%. The results show that it is possible to classify steel samples using CNNs, and this success suggests that further studies on CNNs could enable automated classification of inclusions.
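The binarization step described above can be sketched as a simple threshold: every pixel is mapped to 0 or 1 so that the classifier sees only inclusion shapes and not the substrate texture. The threshold value and the toy image below are illustrative assumptions; the thesis does not specify them.

```python
# Sketch of image binarization: pixels above an assumed threshold map to 1
# (inclusion), the rest to 0 (substrate). Threshold and image are toy values.

def binarize(image, threshold=128):
    """Return a 0/1 mask so a classifier considers only inclusion shapes."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

sem_image = [
    [ 10,  20, 200],
    [  5, 220, 210],
    [  0,  15,  30],
]
mask = binarize(sem_image)
```

The drop from 99.99% to 67.9% accuracy suggests that much of the original network's signal came from the substrate, which this masking deliberately removes.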
34

Objektivitet och unga turkar : En kvalitativ jämförelse av två videomediers dramatiska framställning av nyheter på internet / Objectivity and Young Turks: A qualitative comparison of two video media's dramatic presentation of news on the internet

Lundvall, Mattias, Cederqvist, Adrian January 2013
The ways of reaching people with news have multiplied substantially since the emergence of the Internet, which lets anyone make their message heard. The Young Turks claims to be the largest online news show in the world, with over 650,000 subscribers on YouTube. It is therefore important to study how alternative media differ from more traditional media. This study focuses on alternative journalism on YouTube. Its aim is to identify the differences between the YouTube channel The Young Turks and CNN, with the emphasis on the ways the news is presented. The study uses qualitative content analysis of four news videos made by The Young Turks and four made by CNN, focusing on the specific content of a few news videos rather than the overall content of many. A dramatic analysis method is used to determine how the two media channels differ. The results show that the news videos made by CNN used objectivity as a means to claim professionalism more often than The Young Turks did. CNN's news reporting was also clearer in presenting sources, compared to The Young Turks.
35

Face Recognition with Preprocessing and Neural Networks

Habrman, David January 2016
Face recognition is the problem of identifying individuals in images. This thesis evaluates two methods used to determine whether pairs of face images belong to the same individual or not. The first method is a combination of principal component analysis and a neural network, and the second method is based on state-of-the-art convolutional neural networks. They are trained and evaluated using two different data sets. The first set contains many images with large variations in, for example, illumination and facial expression. The second consists of fewer images with small variations. Principal component analysis allowed the use of smaller networks. The largest network has 1.7 million parameters, compared to the 7 million used in the convolutional network. The use of smaller networks lowered the training and evaluation times significantly. Principal component analysis proved to be well suited for the data set with small variations, outperforming the convolutional network, which needs larger data sets to avoid overfitting. The reduction in data dimensionality, however, led to difficulties classifying the data set with large variations. The generous amount of images in this set allowed the convolutional method to reach higher accuracies than the principal component method.
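The dimensionality reduction at the heart of the first method above, principal component analysis, finds the directions of largest variance so faces can be compared in a much smaller space. A minimal sketch using power iteration on a 2x2 covariance matrix keeps the example self-contained; real face images would of course use thousands of pixel dimensions, and the toy points below are assumptions.

```python
# Sketch of PCA via power iteration: compute the covariance matrix of
# mean-centered 2-D points, then iterate to its dominant eigenvector, the
# first principal component. Data and iteration count are illustrative.
import math

def first_principal_component(data, iters=100):
    """Return the dominant eigenvector of the data's covariance matrix."""
    n = len(data)
    mean = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, mean)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(2)]
           for i in range(2)]                    # 2x2 covariance matrix
    v = [1.0, 0.0]
    for _ in range(iters):                       # power iteration
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = math.hypot(w[0], w[1])
        v = [w[0] / norm, w[1] / norm]
    return v

# Toy "pixel pairs" that vary mostly along the diagonal.
points = [[1, 1], [2, 2.1], [3, 2.9], [4, 4.2]]
pc = first_principal_component(points)
```

Projecting onto the top few such components is what lets the thesis's first method use networks with far fewer parameters.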
36

Kompetent, men kriminell : Framställningen av Hillary Clinton i CNN och Fox News / Competent, but criminal: The portrayal of Hillary Clinton in CNN and Fox News

Hudatzky, Emilia January 2017
This essay aims to find out how Hillary Clinton is portrayed in news articles from Fox News and CNN dated between October 8th and November 8th, 2016. With the help of qualitative framing analysis, the study looks closely at 14 articles from the chosen period to reveal which frames are visible in the news material. The study also raises questions about how those frames portray masculinity or femininity, and how its results differ from those of other researchers. The results reveal three prominent frames in the chosen material: a game frame, a scandal frame, and a frame about competence and trustworthiness. Hillary Clinton is mostly portrayed as a masculine, competent yet criminal person. The previous research matches the findings in areas concerning scandals and trust, and differs in those concerning gender stereotypes.
37

Medierna och Syrien : En kvalitativ innehållsanalys av tre globala mediers rapportering av Syrienkonflikten / The media and Syria: A qualitative content analysis of three global media outlets' reporting on the Syrian conflict

Akhmedov, Samir, Orbring, Gustav January 2016
In September 2015, Russia joined the war in Syria as a military party by beginning to bomb opposition forces. As a result, Russia and the United States once again ended up on opposite sides of a military conflict: Russia supports the Syrian regime, while the United States, together with several other countries including Qatar, supports the opposition. We have chosen to limit our scope to five specific events during the ongoing Syrian conflict and three media outlets' reporting of them. The outlets examined are US-based CNN, Russia-based RT (formerly Russia Today) and Qatar-based Al-Jazeera. Examining these three competitors is interesting because they are among the largest media outlets in the world and thus reach millions of people with their information; moreover, the countries in which they are based stand on different sides of the armed conflict. The aim is to examine whether the reporting has been balanced or tendentious, and whether it is impartial or propagandistic. The articles are analyzed using a research model of our own construction, based on Lundgren, Ney & Thurén's mass-media rhetorical analysis model, supplemented with critical questions drawn from Herman and Chomsky's theories on the critique of international war reporting, and with criteria from the Swedish political scientist Jörgen Westerståhl's research on objectivity and impartiality in media reporting. The results show that RT's and CNN's reporting was largely tendentious and contained what Herman & Chomsky call propaganda on nearly every point, while fulfilling very few of Westerståhl's criteria for impartiality. Most commonly, both outlets showed a large imbalance in which party was given a voice. The tendentious reporting can be explained by the lack of first-hand sources and the large number of tendentious sources, but also by how CNN and RT are financed.
CNN's commercial aims, and the fact that RT is financed by the Russian state, may be factors influencing the reporting. Al-Jazeera's reporting, however, looked different: the study found not a single article in which any propaganda criterion was met, while the criteria for impartial reporting were largely fulfilled. The channel also showed an entirely different balance in its reporting.
38

A comparison of image and object level annotation performance of image recognition cloud services and custom Convolutional Neural Network models

Nilsson, Kristian, Jönsson, Hans-Eric January 2019
Recent advancements in machine learning have contributed to an explosive growth of the image recognition field. Simultaneously, multiple Information Technology (IT) service providers such as Google and Amazon have embraced cloud solutions and software as a service. These factors have helped mature many computer vision tasks from scientific curiosity to practical applications. As image recognition is now accessible to the general developer community, a need arises for a comparison of its capabilities, and of what can be gained from choosing a cloud service over a custom implementation. This thesis empirically studies the performance of five general image recognition services (Google Cloud Vision, Microsoft Computer Vision, IBM Watson, Clarifai and Amazon Rekognition) and of image recognition models of the Convolutional Neural Network (CNN) architecture that we configured and trained ourselves. Image- and object-level annotations of images extracted from different datasets were tested, both in their original state and after being subjected to one of the following six types of distortion: brightness, color, compression, contrast, blurriness and rotation. The output labels and confidence scores were compared to the ground truth at multiple levels of concepts, such as food, soup and clam chowder. The results show that, out of the services tested, there is currently no clear top performer across all categories, and they all show some variations and similarities in their output, but on average Google Cloud Vision performs best by a small margin. The services are all adept at identifying high-level concepts such as food and most mid-level ones such as soup. In terms of further specifics, such as clam chowder, however, they start to vary, some performing better than others in different categories. Amazon was found to be the most capable at identifying multiple unique objects within the same image on the chosen dataset.
Additionally, it was found that using synonyms of the ground-truth labels increased performance, as the semantic gap between our expectations and the actual output of the services was narrowed. The services all showed vulnerability to image distortions, especially compression, blurriness and rotation. The custom models all performed noticeably worse, around half as well as the cloud services, possibly due to the difference in training data standards. The best model, configured with three convolutional layers, 128 nodes and a layer density of two, reached an average performance of almost 0.2, or 20%. In conclusion, if one is limited by a lack of experience with machine learning, computational resources or time, it is recommended to use one of the cloud services to reach a more acceptable performance level. Which to choose depends on the intended application, as the services perform differently in certain categories. All the services are vulnerable to multiple image distortions, potentially allowing adversarial attacks. Finally, there is definitely room for improvement with regard to the performance of these services and the computer vision field as a whole.
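One of the six distortions applied in the study above, a brightness shift, can be sketched as adding a constant to every pixel and clipping to the valid 0-255 range. The offset values and toy image below are illustrative assumptions, not the thesis's test parameters.

```python
# Sketch of the brightness distortion: add a constant offset to each pixel,
# clipping to [0, 255] so the image stays valid. Offsets are assumed values.

def adjust_brightness(image, offset):
    """Return a copy of the image with every pixel shifted by `offset`."""
    return [[min(255, max(0, px + offset)) for px in row] for row in image]

img = [[0, 100, 250]]
brighter = adjust_brightness(img, 20)
darker = adjust_brightness(img, -120)
```

The clipping step matters for the evaluation: once pixels saturate at 0 or 255, information is irrecoverably lost, which is one way such distortions degrade recognition accuracy.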
39

Bidirectional long short-term memory network for proto-object representation

Zhou, Quan 09 October 2018
Researchers have developed many visual saliency models in order to advance the technology of computer vision. Neural networks, Convolutional Neural Networks (CNNs) in particular, have successfully differentiated objects in images through feature extraction. Meanwhile, Cummings et al. have proposed a proto-object image saliency (POIS) model showing that perceptual objects or shapes can be modelled through a bottom-up saliency algorithm. Inspired by their work, this research aims to explore the embedded features in the proto-object representations and to utilize artificial neural networks (ANNs) to capture and predict the saliency output of POIS. A combination of a CNN and a bidirectional long short-term memory (BLSTM) neural network is proposed for this saliency model as a machine learning alternative to the border-ownership and grouping mechanism in POIS. As ANNs become more efficient at visual saliency tasks, this work would extend their application in computer vision through a successful implementation of proto-object based saliency.
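The bidirectional idea behind the BLSTM proposed above is that a sequence is processed left-to-right and right-to-left, and the two hidden states are combined so each position sees both past and future context. The sketch below uses a toy running-sum recurrence in place of a real LSTM cell, an assumption made purely to keep the example self-contained.

```python
# Sketch of bidirectional sequence processing: run a recurrence forward and
# backward over the input and pair the states per position. The recurrence
# here (a running sum) is a stand-in for an LSTM cell, not a real one.

def run_direction(seq):
    """Toy recurrence: a running sum plays the role of the hidden state."""
    states, h = [], 0
    for x in seq:
        h = h + x
        states.append(h)
    return states

def bidirectional(seq):
    forward = run_direction(seq)
    backward = run_direction(seq[::-1])[::-1]   # reverse, run, re-reverse
    return list(zip(forward, backward))         # combined state per position

out = bidirectional([1, 2, 3])
```

In a real BLSTM the two directions have independent learned weights; only the reverse-run-and-realign pattern shown here carries over.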
40

Extracting Information from Encrypted Data using Deep Neural Networks

Lagerhjelm, Linus January 2018
In this paper we explore various approaches to using deep neural networks to perform cryptanalysis, with the ultimate goal of having a deep neural network decipher encrypted data. We use long short-term memory networks to try to decipher encrypted text, and we use a convolutional neural network to perform classification tasks on encrypted MNIST images. We find that although the network is unable to decipher encrypted data, it is able to perform classification on encrypted data. We also find that the network's performance depends on which key was used to encrypt the data. These findings could be valuable for further research into the topic of cryptanalysis using deep neural networks.
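The experimental setup above hinges on data being encrypted under a fixed key before classification. A repeating-key XOR cipher, shown below as an illustrative stand-in for the ciphers used in the thesis, makes the key observation concrete: a fixed key maps identical inputs to identical outputs, so class structure can survive encryption even when the plaintext cannot be recovered.

```python
# Sketch of fixed-key encryption for the classification experiment:
# repeating-key XOR. The key and message are illustrative assumptions.

def xor_encrypt(data, key):
    """Repeating-key XOR: the same key always maps a byte the same way."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"\x2a\x17"
ciphertext = xor_encrypt(plaintext, key)
# XOR is self-inverse, so applying the same key again recovers the data.
recovered = xor_encrypt(ciphertext, key)
```

Because the mapping is deterministic per key, a classifier can learn patterns in the ciphertext, while changing the key changes those patterns, consistent with the finding that performance depends on which key was used.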
