241

Attacks and Vulnerabilities of Hardware Accelerators for Machine Learning: Degrading Accuracy Over Time by Hardware Trojans

Niklasson, Marcus, Uddberg, Simon January 2024 (has links)
The increasing application of Neural Networks (NNs) in various fields has heightened the demand for specialized hardware to enhance performance and efficiency. Field-Programmable Gate Arrays (FPGAs) have emerged as a popular choice for implementing NN accelerators due to their flexibility, high performance, and ability to be customized for specific NN architectures. However, the trend of outsourcing Integrated Circuit (IC) design to third parties has introduced new security vulnerabilities, particularly in the form of Hardware Trojans (HTs). These malicious alterations can severely compromise the integrity and functionality of NN accelerators. Building on this, the study investigates a novel type of HT that degrades the accuracy of Convolutional Neural Network (CNN) accelerators over time. Two variants of the attack are presented: the Gradually Degrading Accuracy Trojan (GDAT) and the Suddenly Degrading Accuracy Trojan (SDAT), implemented in various components of the CNN accelerator. The approach leverages a sensitivity analysis to identify the most impactful targets for the trojan and evaluates the attack's effectiveness in terms of stealthiness, hardware overhead, and impact on accuracy. The overhead of the attacks was found to be competitive with that of other trojans, and the attack has the potential to undermine trust and cause economic damage if deployed. Of the components targeted, the memory component for the feature maps was identified as the most vulnerable, closely followed by the bias memory component. The feature-map trojans caused a significant accuracy degradation of 78.16% with a 0.15% and 0.29% increase in Look-Up Table (LUT) utilization for the SDAT and GDAT variants, respectively. In comparison, the bias trojans caused an accuracy degradation of 63.33% with LUT utilization increases of 0.20% and 0.33% for the respective variants. The power consumption overhead was a consistent 0.16% for both attacks and trojan variants.
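As a concrete illustration, the sketch below is a minimal software simulation of the two trigger behaviours the abstract describes; the thesis implements the trojans in FPGA hardware, so the function name, trigger threshold, and corruption strategy here are illustrative assumptions, not the authors' design.

```python
import numpy as np

def trojan_corrupt(feature_map: np.ndarray, inference_count: int,
                   trigger: int = 100_000, gradual: bool = True) -> np.ndarray:
    """Simulate an accuracy-degrading trojan on feature-map memory.

    gradual=True  ~ GDAT-style: corruption probability ramps up after the trigger.
    gradual=False ~ SDAT-style: full corruption as soon as the trigger fires.
    """
    if inference_count < trigger:
        return feature_map  # dormant: behaves exactly like the clean accelerator
    if gradual:
        # ramp corruption probability from 0 to 1 over the next `trigger` inferences
        p = min(1.0, (inference_count - trigger) / trigger)
    else:
        p = 1.0
    mask = np.random.rand(*feature_map.shape) < p
    corrupted = feature_map.copy()
    corrupted[mask] = 0.0  # zero out selected activations, degrading accuracy
    return corrupted
```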
242

Crowd Counting Camera Array and Correction

Fausak, Andrew Todd 05 1900 (has links)
"Crowd counting" is a term used to describe the process of calculating the number of people in a given context; however, crowd counting has multiple challenges especially when images representing a given crowd span multiple cameras or images. In this thesis, we propose a crowd counting camera array and correction (CCCAC) method using a camera array of scaled, adjusted, geometrically corrected, combined, processed, and then corrected images to determine the number of people within the newly created combined crowd field. The purpose of CCCAC is to transform and combine valid regions from multiple images from different sources and order as a uniform proportioned set of images for a collage or discrete summation through a new precision counting architecture. Determining counts in this manner within normalized view (collage), results in superior counting accuracy than processing individual images and summing totals with prior models. Finally, the output from the counting model is adjusted with learned results over time to perfect the counting ability of the entire counting system itself. Results show that CCCAC crowd counting corrected and uncorrected methods perform superior to raw image processing methods.
243

Nedstängningen av USA:s federala regering 2018–2019 : - En studie av Framing av Fox News och CNN / Shutdown of the US federal government 2018-2019 : - A study of Framing of Fox News and CNN

Leandersson, Pontus January 2019 (has links)
The purpose of this paper is to study how the latest U.S. government shutdown was framed, and what kinds of framing were used, in two major news outlets, Fox News and CNN. The paper asks two questions: first, how is the shutdown presented in CNN's and Fox News's coverage? And second, what examples of framing can be seen in each outlet's coverage? To answer these, the paper applies a qualitative text analysis to several news articles from these outlets, with framing theory as its theoretical background. The paper concludes that, in the more news-focused articles, both CNN and Fox News mostly portray the shutdown as a conflict between different actors as well as a problem, while the opinion pieces show similar framing but differ in that they often add a pro- or anti-Trump frame alongside the conflict frame.
244

Редакционный нейтралитет в международном информационном телевещании / L'identité contre la neutralité dans la politique éditoriale des chaînes transnationales d'information / Identity vs. neutrality in the editorial policies of transnational news channels

Loctier, Denis 26 January 2012 (has links)
This work attempts to shed light on the factors that explain why transnational news channels strive to create an impression of editorial neutrality in conflict coverage. It also shows how this guideline is conditioned by internal editorial constraints as well as by the biases of audience groups. The privileged position of the author, who has worked for more than a decade as a staff journalist within the international editorial team of Euronews, allowed him to test his hypotheses and findings from the inside. The dissertation examines the editorial strategies of the major pan-European and global news channels, placing them in the perspective of the historical evolution of transnational broadcasting and analysing current developments in the context of a globalising world. Focusing in particular on the neutrality commonly declared in these channels' editorial policies, the study analyses divergences in the interpretation of this basic principle both by the editorial teams and by the various audience groups of different broadcasters, a crucial phenomenon that creates distinct identities behind the supposedly neutral flows of information. The methods employed include participant observation within a leading international news channel, an analysis of how viewers from opposing political camps perceive news content, and a review of topical Russian and European periodicals and literature.
245

L'image médiatique de l'identité iranienne contemporaine à travers le discours des télévisions arabes et occidentales / The image of the contemporary Iranian identity through the discourse of Arab and Western tv channels

Ahmadi, Ali 18 November 2014 (has links)
This thesis examines the representation of contemporary Iran in the discourse of Arab and Western rolling news channels. Studying such channels is an excellent opportunity to analyse different representations of the Other, by examining how the channels construct identities through stereotypes and a reductive ideological contrast between "us" and "them". The research problem rests on a comparative analysis of the discourse of transnational television channels (BBC, CNN and France 24 as Western channels; Al-Jazeera and Al-Arabiya as Arab channels) and of the ways in which they represent, among world events, the Other, in this case Iranian identity. Transnational media produce and distribute news, images and symbolic content related to issues that viewers would previously have learned about, mainly or even exclusively, from their national media, if at all.
The study of the representation of the Other is a useful model for exposing, in a scientific way, the routines of the media representation process and the underlying power dynamics of televised representations of the Other. What preceded this postmodern era was a media gaze enclosed within the borders of nations or colonial empires; globalisation has brought the Other into the very heart of the local. The stereotyped representations and images of Iran in the channels' newscasts and programmes appear to re-establish spatial, political and socio-cultural distances between countries and to reproduce Western superiority, especially on the American channels. The Arab channels, for their part, display a strong religious, racial and ethnic orientation in their Iran-related coverage. The news is shaped by the framing process, and the framing done by Arab and Western channels tends to reflect and reinforce the dominant ideology of the country of origin. The results of the study highlight that international news can be interpreted through a combined view, in which the influence of propaganda on media coverage is interconnected with the media system and national interests and, paradoxically, with an anchoring in the local territory that depends on the country's dominant ideology.
246

Produktmatchning EfficientNet vs. ResNet : En jämförelse / Product matching EfficientNet vs. ResNet

Malmgren, Emil, Järdemar, Elin January 2021 (has links)
E-commerce is increasing steadily, and between 2010 and 2014 the share of consumers shopping online rose from 28.9% to 34.2%. Insufficient information about a product's price forces buyers to search among several different retailers for the best deal. There are different ways to produce the information required to compare prices; one of them is automated product matching. This method uses image recognition algorithms whose purpose is to detect, locate, and recognize objects in images. Image recognition algorithms often struggle to find objects in images because of external factors such as lighting, viewing angle, and cluttered, irrelevant image content.
In the past, algorithms such as artificial neural networks (ANNs), random forest classifiers, and support vector machines have been used, but recent studies have shown that convolutional neural networks (CNNs) are better at finding the important properties of objects that make them less sensitive to these external factors. Two alternative CNN architectures that have emerged are EfficientNet and ResNet, both of which have shown good results in previous studies, but there is little research to help one choose the CNN architecture that leads to the best possible result. Our research question is therefore: which of the EfficientNet and ResNet architectures gives the highest result on product matching as measured by f1-score, precision, and recall? The results of the study show that EfficientNet is the overall best architecture for product matching on this dataset. The results also show that ResNet was better than EfficientNet at proposing correct matches: the matches ResNet makes are more often right, as ResNet achieved higher precision than EfficientNet. However, EfficientNet achieves better recall, showing that it is better at finding more, or all, of the correct matches among its potential matches. The difference in recall between the models is greater than the difference in precision, which gives EfficientNet a higher f1-score and makes it better than ResNet overall. Still, what matters most can be discussed: is it more important that the proposed matches are correct, or that all correct matches are found? If the former, ResNet has the advantage; if the latter, EfficientNet does. Which architecture gives the best results therefore depends on what is considered most important.
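For reference, the three evaluation measures discussed above can be computed as follows; the sets and numbers in the usage example are invented for illustration.

```python
def precision_recall_f1(proposed: set, truth: set) -> tuple[float, float, float]:
    """Precision/recall/F1 for a set of proposed product matches.

    precision: share of proposed matches that are correct (ResNet's strength here);
    recall: share of true matches that were found (EfficientNet's strength).
    """
    true_positives = len(proposed & truth)
    precision = true_positives / len(proposed) if proposed else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: 5 proposed matches, 4 of them correct, out of 8 true pairs
p, r, f = precision_recall_f1({1, 2, 3, 4, 9}, {1, 2, 3, 4, 5, 6, 7, 8})
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # 0.80, 0.50, 0.62
```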
247

Automatic identification of northern pike (Esox lucius) with convolutional neural networks

Lavenius, Axel January 2020 (has links)
The population of northern pike in the Baltic Sea has decreased drastically over the last couple of decades. The reasons for this are believed to be many, but the majority of them are most likely anthropogenic. Today, many measures are being taken to prevent further decline of pike populations, ranging from nutrient runoff control to habitat restoration. This inevitably gives rise to the problem addressed in this project, namely: how can we best monitor pike populations so that the effects of these measures can be accurately assessed and verified over the coming decades? Pike are currently monitored in Sweden through expensive and ineffective manual methods, in which individual pike are marked by a handful of experts. This project provides evidence that such methods could be replaced by a Convolutional Neural Network (CNN), an automatic artificial-intelligence system that can be taught to identify pike individuals based on their unique patterns. A neural net simulates the functions of neurons in the human brain, which allows it to perform a range of tasks, while a CNN is a neural net specialized for this type of visual recognition task. The results show that the CNN trained in this project can identify pike individuals in the provided data set with upwards of 90% accuracy, with much potential for improvement.
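A minimal sketch of the kind of CNN classifier described, assuming one output class per known pike individual; the input size, layer widths, and class count are illustrative assumptions, not the thesis's actual network.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_INDIVIDUALS = 50  # hypothetical: one class per marked pike in the data set

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),         # photos of pike flank patterns
    layers.Conv2D(32, 3, activation="relu"),   # learn local pattern features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),  # higher-level pattern motifs
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_INDIVIDUALS, activation="softmax"),  # one score per pike
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```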
248

Image forgery detection using textural features and deep learning

Malhotra, Yishu 06 1900 (has links)
The exponential growth and advancement of technology have made it quite convenient to share visual data, imagery, and video through the vast array of available platforms. With the rapid development of Internet and multimedia technologies, efficient storage and management, fast transmission and sharing, real-time analysis, and processing of digital media resources have gradually become an indispensable part of many people's work and life. Undoubtedly, such technological growth has made forging visual data relatively easy and realistic without leaving any obvious visual clues, and abuse of such tampered data can deceive the public and spread misinformation among the masses. Considering these facts, image forensics must be used to authenticate and maintain the integrity of visual data. For this purpose, we propose a passive image forgery detection technique based on the textural and noise inconsistencies introduced into an image by the tampering operation. The proposed Image Forgery Detection Network (IFD-Net) uses a Convolutional Neural Network (CNN) based architecture to classify images as forged or pristine. The textural and noise residual patterns are extracted from the images using the Local Binary Pattern (LBP) and the Noiseprint model.
The images classified as forged are then used in experiments analysing the difficulties of localizing the forged parts of these images with different deep learning segmentation models. Experimental results show that the IFD-Net performs on par with other image forgery detection methods on the CASIA v2.0 dataset. The thesis also discusses the reasons behind the difficulties in segmenting the forged regions in the images of the CASIA v2.0 dataset.
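As an illustration of the texture-feature step, a uniform LBP histogram can be extracted with scikit-image roughly as follows; the neighbourhood parameters here are common defaults, assumed rather than taken from the thesis.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Texture descriptor for a grayscale image patch.

    Each pixel is coded by comparing it with `points` neighbours on a circle
    of `radius`; 'uniform' patterns give a compact, rotation-robust code.
    """
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2  # uniform LBP yields P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # per-patch histograms become inputs to the classifier
```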
249

Image-classification for Brain Tumor using Pre-trained Convolutional Neural Network : Bildklassificering för hjärntumör med hjälp av förtränat konvolutionellt neuralt nätverk

Osman, Ahmad, Alsabbagh, Bushra January 2023 (has links)
A brain tumor is a disease characterized by the uncontrolled growth of abnormal cells in the brain. The brain regulates the functions of all other organs, so any atypical growth of cells in it can have severe implications for those functions. Brain cancers were estimated to cause 251,329 deaths globally in 2020. Early detection of brain cancer is critical for prompt treatment and for improving patients' quality of life as well as survival rates. Manual classification of medical images for diagnosing diseases has been shown to be extremely time-consuming and labor-intensive. Convolutional Neural Networks (CNNs) have proven to be leading algorithms in image classification, outperforming humans. This paper compares five CNN architectures, namely VGG-16, VGG-19, AlexNet, EfficientNetB7, and ResNet-50, in terms of performance and accuracy using transfer learning. In addition, the authors discuss the economic impact of CNNs, as an AI approach, on the healthcare sector. The models' performance is demonstrated using loss and accuracy curves as well as confusion matrices. The conducted experiment showed VGG-19 achieving the best performance with 97% accuracy, while EfficientNetB7 achieved the worst performance with 93% accuracy.
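A minimal transfer-learning sketch in Keras of the approach described, using ImageNet-pre-trained VGG-19 with a frozen base; the input size, head layers, and class count are illustrative assumptions, not the thesis's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4  # hypothetical tumor categories; depends on the dataset used

# Load VGG-19 pre-trained on ImageNet, without its classification head
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze: reuse ImageNet features, train only the head

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```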
250

Normalization of Deep and Shallow CNNs tasked with Medical 3D PET-scans : Analysis of technique applicability

Pllashniku, Edlir, Stanikzai, Zolal January 2021 (has links)
In recent years there has been interdisciplinary research on utilizing machine learning to detect and classify neurodegenerative disorders, with the goal of outperforming state-of-the-art models in terms of metrics such as accuracy, specificity, and sensitivity. These studies have typically applied existing networks to "novel" methods of pre-processing the data or developed new convolutional neural networks. So far, no work has examined how different normalization techniques affect a deep or shallow convolutional neural network in terms of numerical stability, performance, explainability, and interpretability. This work investigates which normalization technique is most suitable for deep and shallow convolutional neural networks. Two baselines were created, one shallow and one deep, and eight different normalization techniques were applied to these model architectures. Conclusions were drawn from our analysis of numerical stability, performance (metrics), and methods of Explainable Artificial Intelligence. Our findings indicate that normalization techniques affect the models differently with respect to these aspects, especially numerical stability and explainability. Moreover, we show that future studies in this interdisciplinary field should indeed prefer some normalization techniques over others.
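For orientation, the sketch below contrasts four common normalization techniques on a dummy 3D volume in PyTorch; the abstract does not name the eight techniques studied, so this selection is only an assumption.

```python
import torch
import torch.nn as nn

# Dummy batch of 3D PET volumes: (batch, channels, depth, height, width)
x = torch.randn(2, 8, 16, 64, 64)

norms = {
    "batch":    nn.BatchNorm3d(8),          # stats per channel across the batch
    "group":    nn.GroupNorm(4, 8),         # 4 groups of 2 channels, batch-independent
    "instance": nn.InstanceNorm3d(8),       # stats per channel per sample
    "layer":    nn.LayerNorm(x.shape[1:]),  # stats over all non-batch dims
}

for name, norm in norms.items():
    y = norm(x)
    # each technique should yield roughly zero-mean, unit-variance activations
    print(f"{name:8s} mean={y.mean().item():+.3f} std={y.std().item():.3f}")
```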
