131

No Person Detected

Riley, Holly Jane 27 July 2023 (has links)
The collection of Victorian-themed wearables and accessories of "No Person Detected" serves as an innovative solution to the issues surrounding biometric technology and the invasion of privacy. This wearable technology was designed to counteract the involuntary recording of an individual's unique biometric data through the use of body cameras and CCTV, which can be accessed by law enforcement and marketing companies. The project represents a democratization of design ideas and collaboration, allowing individuals to create adversarial fashion that provides a degree of biometric protection. This thesis explores the potential of technological innovation and collaboration to foster a more privacy-conscious society, one where individuals can take control of their personal data and protect themselves against the dangers of biometric tracking. The convergence of fashion, technology, and design has the potential to revolutionize how we approach privacy in a digital age, and "No Person Detected" represents an exciting step towards that future. / Master of Fine Arts / As technology becomes a larger component of our daily lives, our digital footprint continues to expand, leaving behind sensitive identifying information. From this data, law enforcement agencies such as the FBI and ICE derive insights and conclusions about our lives. Due to unreliable data, facial recognition technology (FRT) has demonstrated implicit bias, particularly toward racialized bodies. This highlights the need for public education and responsible online behavior and raises questions about the privacy and security of personal data. At the intersection of fashion, history, and technology, "No Person Detected" aims to fight against the involuntary collection of biometric data in an adversarial way. With the proliferation of FRT and the accumulation of personal data from a variety of sources, it is crucial that both businesses and individuals establish transparent policies to protect user data. This thesis highlights both the historical context of racism in policing and the significance of privacy in the digital age.
132

A state-trait approach for bridging the gap between basic and applied occupational psychological constructs / 状態・特性アプローチによる職業活動に関わる基礎的および応用的心理学的構成概念の統合的理解

Yamashita, Jumpei 23 May 2023 (has links)
Kyoto University / New-system, course-based doctorate / Doctor of Informatics / Degree No. 甲第24821号 / 情博第837号 / Call No. 新制||情||140 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 熊田 孝恒, Professor 西田 眞也, Professor 内田 由紀子, Associate Professor 中島 亮一 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
133

Multi Planar Conditional Generative Adversarial Networks

Somosmita Mitra (11197152) 30 July 2021 (has links)
Brain tumor sub-region segmentation is a challenging problem in magnetic resonance imaging. The tumor regions tend to suffer from a lack of homogeneity, textural differences, variable location, and an ability to proliferate into surrounding tissue. The segmentation task therefore requires an algorithm that is indifferent to such influences and robust to external interference. In this work we propose a conditional generative adversarial network that learns from multiple planes of reference. Using this learning, we evaluate the quality of the segmentation and backpropagate the loss to improve the learning. The results produced by the network show competitive quality on both the training and the testing datasets.
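As a rough illustration of the conditional-GAN training loop this abstract describes, the sketch below conditions a toy generator on three stacked MRI planes and trains it against a mask discriminator. All module names, layer sizes, and the three-plane input convention are assumptions for illustration, not the thesis implementation.

```python
# Hypothetical sketch of one training step for a conditional GAN segmenter that
# conditions on multiple MRI planes stacked as input channels (an assumption here).
import torch
import torch.nn as nn

class PlaneConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # toy encoder-decoder; a real model would be a U-Net
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, planes):            # planes: (B, 3, H, W)
        return self.net(planes)           # predicted tumor mask: (B, 1, H, W)

class MaskDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, planes, mask):      # condition the critic on the input planes
        return self.net(torch.cat([planes, mask], dim=1))

G, D = PlaneConditionedGenerator(), MaskDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

planes = torch.rand(4, 3, 64, 64)                       # placeholder batch of 3-plane MRI crops
true_mask = (torch.rand(4, 1, 64, 64) > 0.9).float()    # placeholder ground-truth masks

# Discriminator step: real (planes, ground-truth mask) vs fake (planes, predicted mask)
fake_mask = G(planes).detach()
d_loss = bce(D(planes, true_mask), torch.ones(4, 1)) + \
         bce(D(planes, fake_mask), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: adversarial loss plus a pixel-wise segmentation loss
pred = G(planes)
g_loss = bce(D(planes, pred), torch.ones(4, 1)) + \
         nn.functional.binary_cross_entropy(pred, true_mask)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```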
134

Towards Designing Robust Deep Learning Models for 3D Understanding

Hamdi, Abdullah 04 1900 (has links)
This dissertation presents novel methods for addressing important challenges related to the robustness of Deep Neural Networks (DNNs) for 3D understanding and in 3D setups. Our research focuses on two main areas: adversarial robustness on 3D data and setups, and the robustness of DNNs to realistic 3D scenarios. One paradigm for 3D understanding is to represent 3D data as a set of 3D points and learn functions on this set directly. Our first work, AdvPC, addresses the limited transferability and ease of defense of current 3D point cloud adversarial attacks. By using a point cloud Auto-Encoder to generate more transferable attacks, AdvPC surpasses state-of-the-art attacks by a large margin on 3D point cloud attack transferability. Additionally, AdvPC increases the ability to break defenses by up to 38% compared to other baseline attacks on the ModelNet40 dataset. Another paradigm of 3D understanding is to perform 2D processing of multiple images of the 3D data. The second work, MVTN, addresses the problem of selecting viewpoints for 3D shape recognition, using a Multi-View Transformation Network (MVTN) to learn optimal viewpoints. Combining MVTN with multi-view approaches leads to state-of-the-art results on the standard benchmarks ModelNet40, ShapeNet Core55, and ScanObjectNN. MVTN also improves robustness to realistic scenarios such as rotation and occlusion. Our third work analyzes the Semantic Robustness of 2D Deep Neural Networks, addressing their high sensitivity to semantic primitives by visualizing the DNN's global behavior as semantic maps and observing the interesting behavior of some DNNs. Additionally, we develop a bottom-up approach to detect robust regions of DNNs, enabling scalable semantic robustness analysis and benchmarking of different DNNs. The fourth work, SADA, showcases the lack of robustness of DNNs in the safety-critical application of autonomous navigation, beyond the simple classification setup. We present a general framework (BBGAN) for black-box adversarial attacks on trained agents, covering semantic perturbations to the environment of the agent performing the task. BBGAN is trained to generate failure cases that consistently fool a trained agent on tasks such as object detection, self-driving, and autonomous UAV racing.
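A minimal sketch of the kind of auto-encoder-regularized point cloud attack that AdvPC describes might look as follows; the loss weighting, step sizes, and model interfaces are assumptions for illustration, not the author's code.

```python
# Minimal sketch, in the spirit of AdvPC: craft a point-cloud perturbation that fools a
# classifier both directly and after passing through a point-cloud auto-encoder, which
# tends to make the attack more transferable. `classifier` and `autoencoder` are assumed
# callables returning class logits and a reconstructed cloud, respectively.
import torch
import torch.nn.functional as F

def advpc_style_attack(classifier, autoencoder, points, label,
                       steps=50, eps=0.05, lr=0.01, gamma=0.5):
    """points: (B, N, 3); label: (B,) true class indices."""
    delta = torch.zeros_like(points, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = points + delta
        loss_direct = -F.cross_entropy(classifier(adv), label)          # push away from the true class
        loss_ae = -F.cross_entropy(classifier(autoencoder(adv)), label)  # also fool the reconstructed cloud
        loss = (1 - gamma) * loss_direct + gamma * loss_ae
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                                      # keep the perturbation small
    return (points + delta).detach()
```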
135

TwinLossGAN: Domain Adaptation Learning for Semantic Segmentation

Song, Yuehua 19 August 2022 (has links)
Most semantic segmentation methods based on Convolutional Neural Networks (CNNs) rely on supervised pixel-level labelling. Because pixel-level labelling is time-consuming and laborious, synthetic images are often generated by software; their label information is already embedded in the data, so labelling can be done automatically. This advantage makes synthetic datasets widely used for training deep learning models for real-world cases. Still, compared to supervised learning with real-world labelled images, models trained on synthetic datasets achieve lower accuracy when applied to real-world data. Researchers have therefore turned to Unsupervised Domain Adaptation (UDA), which transfers knowledge learned from one domain to another: a model can be trained on synthetic data and then apply what it has learned to real-world problems. UDA is an essential part of transfer learning. It aims to make the feature distributions of the two domains as close as possible, so that the knowledge and distribution learned in the source-domain feature space can be migrated to the target space to improve prediction accuracy in the target domain. However, compared with traditional supervised models, UDA accuracy remains low when the trained model is used for scene segmentation of real images. The main reason is that the domain gap between the source and target domains is too large: the image distribution information the model learns from the source domain cannot be applied directly to the target domain, which limits the development of UDA. We therefore propose a new UDA model called TwinLossGAN, which reduces the domain gap in two steps. The first step mixes images from the source and target domains so that the model learns the features of both domains well. Mixing is performed by selecting a synthetic image from the source domain and a real-world image from the target domain; the two selected images are fed to the segmenter to obtain semantic segmentation results separately, and the results are passed to the mixing module. The mixing module uses the ClassMix method to copy and paste some segmented objects from one image into the other using segmentation masks, generating inter-domain composite images and the corresponding pseudo-labels (sketched below). In the second step, we modify a Generative Adversarial Network (GAN) to further reduce the gap between domains. The original GAN has two main parts, a generator and a discriminator. In our proposed TwinLossGAN, the generator performs semantic segmentation on the source-domain and target-domain images separately, and the two segmentation branches are trained in parallel. The source-domain synthetic images are segmented and their loss is computed against the synthetic labels; at the same time, the generated inter-domain composite images are fed to the segmentation module, which compares its results with the pseudo-labels and calculates a second loss. These twin losses serve as the generator loss over the GAN training iterations. The GAN discriminator examines whether a semantic segmentation result originates from the source or the target domain. We used data from GTA5 and SYNTHIA as the source-domain data and images from CityScapes as the target-domain data; the accuracy achieved by our proposed TwinLossGAN was considerably higher than that of the baseline UDA models.
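The ClassMix-style mixing step described above can be sketched roughly as follows, assuming integer segmentation maps are already available for the source and target images; variable names are illustrative only and this is not the thesis implementation.

```python
# Rough sketch of ClassMix-style mixing: paste the pixels of a randomly chosen half of the
# classes from the source image onto the target image, and build the corresponding
# pseudo-label from the two segmentation maps.
import torch

def classmix(src_img, src_seg, tgt_img, tgt_seg):
    """src_img/tgt_img: (3, H, W); src_seg/tgt_seg: (H, W) integer class maps."""
    classes = torch.unique(src_seg)
    chosen = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(src_seg, chosen)                 # True where a chosen source class sits
    mixed_img = torch.where(mask.unsqueeze(0), src_img, tgt_img)
    pseudo_label = torch.where(mask, src_seg, tgt_seg)  # inter-domain composite label
    return mixed_img, pseudo_label
```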
136

Securing Connected and Automated Surveillance Systems Against Network Intrusions and Adversarial Attacks

Siddiqui, Abdul Jabbar 30 June 2021 (has links)
In recent years, connected surveillance systems have been witnessing an unprecedented evolution owing to advancements in Internet of Things and deep learning technologies. However, vulnerabilities to various kinds of attacks, both at the cyber network level and at the physical world level, are also rising. This poses danger not only to the devices but also to human life and property. The goal of this thesis is to enhance the security of Internet of Things deployments, focusing on connected video-based surveillance systems, by proposing multiple novel solutions to address security issues at the cyber network level and to defend such systems at the physical world level. In order to enhance security at the cyber network level, this thesis designs and develops solutions to detect network intrusions in Internet of Things devices such as surveillance cameras. The first solution is a novel method for network flow feature transformation, named TempoCode. It introduces a temporal codebook-based encoding of flow features based on capturing the key patterns of benign traffic in a learnt temporal codebook. The second solution takes an unsupervised learning-based approach and proposes four methods to build efficient and adaptive ensembles of neural network-based autoencoders for intrusion detection in Internet of Things devices such as surveillance cameras. To address physical world-level attacks, this thesis studies, for the first time to the best of our knowledge, adversarial patch-based attacks against a convolutional neural network (CNN)-based surveillance system designed for vehicle make and model recognition (VMMR). Connected video-based surveillance systems based on deep learning models such as CNNs are highly vulnerable to adversarial machine learning attacks that could trick and fool the surveillance systems. In addition, this thesis proposes and evaluates a lightweight defense solution called SIHFR to mitigate the impact of such adversarial patches on CNN-based VMMR systems, leveraging the symmetry in vehicles' face images. Experimental evaluations on recent realistic intrusion detection datasets prove the effectiveness of the developed solutions, in comparison to the state of the art, in detecting intrusions of various types and for different devices. Moreover, using a real-world surveillance dataset, we demonstrate the effectiveness of the SIHFR defense method, which does not require re-training of the target VMMR model and adds only a minimal overhead. The solutions designed and developed in this thesis shall pave the way for future studies to develop efficient intrusion detection systems and adversarial attack mitigation methods for connected surveillance systems such as VMMR.
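As a hedged illustration of the unsupervised ensemble idea (not TempoCode or the thesis code), the snippet below trains a few small autoencoders on benign flow features and flags flows whose average reconstruction error exceeds a threshold set on benign data; all sizes and thresholds are placeholders.

```python
# Illustrative sketch: train several small auto-encoders on benign traffic features, then
# flag a flow as an intrusion when the ensemble's mean reconstruction error is too high.
import torch
import torch.nn as nn

def make_autoencoder(n_features, hidden=8):
    return nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                         nn.Linear(hidden, n_features))

def train_ensemble(benign, n_models=4, epochs=20, lr=1e-3):
    models = [make_autoencoder(benign.shape[1]) for _ in range(n_models)]
    for m in models:
        opt = torch.optim.Adam(m.parameters(), lr=lr)
        for _ in range(epochs):
            loss = nn.functional.mse_loss(m(benign), benign)
            opt.zero_grad(); loss.backward(); opt.step()
    return models

def anomaly_score(models, flows):
    # average per-flow reconstruction error across the ensemble
    with torch.no_grad():
        errors = [((m(flows) - flows) ** 2).mean(dim=1) for m in models]
    return torch.stack(errors).mean(dim=0)

benign = torch.rand(256, 20)                 # placeholder benign flow features
models = train_ensemble(benign)
threshold = anomaly_score(models, benign).quantile(0.99)
is_intrusion = anomaly_score(models, torch.rand(10, 20)) > threshold
```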
137

Quality Assessment of Conversational Agents : Assessing the Robustness of Conversational Agents to Errors and Lexical Variability / Kvalitetsutvärdering av konversationsagenter : Att bedöma robustheten hos konversationsagenter mot fel och lexikal variabilitet

Guichard, Jonathan January 2018 (has links)
Assessing a conversational agent’s understanding capabilities is critical, as poor user interactions could seal the agent’s fate at the very beginning of its lifecycle with users abandoning the system. In this thesis we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated based on known working input by performing lexical substitutions and by introducing multiple spelling divergences. As the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. As demonstrated by a case study, we obtain encouraging results as it appears that this approach can help anticipate potential understanding shortcomings, and that these shortcomings can be addressed by the generated paraphrases. / Att bedöma en konversationsagents språkförståelse är kritiskt, eftersom dåliga användarinteraktioner kan avgöra om agenten blir en framgång eller ett misslyckande redan i början av livscykeln. I denna rapport undersöker vi användningen av parafraser som ett testverktyg för dessa konversationsagenter. Parafraser, vilka är olika sätt att uttrycka samma avsikt, skapas baserat på känd indata genom att utföra lexiska substitutioner och genom att introducera flera stavningsavvikelser. Eftersom det förväntade resultatet för denna indata är känd kan vi använda resultaten för att bedöma agentens robusthet mot språkvariation och upptäcka potentiella förståelssvagheter. Som framgår av en fallstudie får vi uppmuntrande resultat, eftersom detta tillvägagångssätt verkar kunna bidra till att förutse eventuella brister i förståelsen, och dessa brister kan hanteras av de genererade parafraserna.
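A minimal sketch of the paraphrase-based testing loop described above might look like this; the synonym table and the agent.predict_intent interface are assumptions for illustration, not the thesis tooling.

```python
# Start from a known working utterance, generate paraphrases by lexical substitution and
# by injecting spelling divergences, then check the agent still predicts the expected intent.
import random

SYNONYMS = {"book": ["reserve", "schedule"], "flight": ["plane ticket"], "cheap": ["inexpensive", "affordable"]}

def lexical_substitutions(utterance):
    words = utterance.split()
    for i, w in enumerate(words):
        for alt in SYNONYMS.get(w.lower(), []):
            yield " ".join(words[:i] + [alt] + words[i + 1:])

def spelling_divergence(utterance, n_typos=1, seed=0):
    rng = random.Random(seed)
    chars = list(utterance)
    for _ in range(n_typos):
        i = rng.randrange(1, len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]   # swap adjacent characters
    return "".join(chars)

def robustness_report(agent, utterance, expected_intent):
    variants = list(lexical_substitutions(utterance)) + [spelling_divergence(utterance)]
    failures = [v for v in variants if agent.predict_intent(v) != expected_intent]
    return len(variants), failures
```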
138

Semi-supervised Learning for Real-world Object Recognition using Adversarial Autoencoders

Mittal, Sudhanshu January 2017 (has links)
For many real-world applications, labeled data can be costly to obtain. Semi-supervised learning methods make use of abundant unlabeled data along with a few labeled samples. Most of the latest work on semi-supervised learning for image classification reports performance on standard machine learning datasets such as MNIST, SVHN, etc. In this work, we propose a convolutional adversarial autoencoder architecture for real-world data. We demonstrate the application of this architecture to semi-supervised object recognition. We show that our approach can learn from limited labeled data and outperform a fully-supervised CNN baseline by about 4% on real-world datasets. We also achieve competitive performance on the MNIST dataset compared to state-of-the-art semi-supervised learning techniques. To spur research in this direction, we compiled two real-world datasets: an Internet (WIS) dataset and a Real-world (RW) dataset, each consisting of more than 20K labeled samples of small household objects belonging to ten classes. We also show a possible application of this method for online learning in robotics. / I de flesta verklighetsbaserade tillämpningar kan det vara kostsamt att erhålla märkt data. Inlärningsmetoder som är semi-övervakade använder sig oftast i stor utsträckning av omärkt data med stöd av en liten mängd märkt data. Mycket av det senaste arbetet inom semiövervakade inlärningsmetoder för bildklassificering visar prestanda på standardiserad maskininlärning så som MNIST, SVHN, och så vidare. I det här arbetet föreslår vi en convolutional adversarial autoencoder arkitektur för verklighetsbaserad data. Vi demonstrerar tillämpningen av denna arkitektur för semi-övervakad objektidentifiering och visar att vårt tillvägagångssätt kan lära sig av ett begränsat antal märkt data. Därmed överträffar vi den fullt övervakade CNN-baslinjemetoden med ca. 4% på verklighetsbaserade datauppsättningar. Vi uppnår även konkurrenskraftig prestanda på MNIST datauppsättningen jämfört med moderna semi-övervakade inlärningsmetoder. För att stimulera forskningen i den här riktningen, samlade vi två verklighetsbaserade datauppsättningar: Internet (WIS) och Real-world (RW) datauppsättningar, som består av mer än 20 000 märkta prov vardera, som utgörs av små hushållsobjekt tillhörandes tio klasser. Vi visar också en möjlig tillämpning av den här metoden för online-inlärning i robotik.
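A very condensed sketch of a semi-supervised adversarial autoencoder of the kind described here is given below; all layer sizes, names, and the MNIST-like input shape are assumptions, and the encoder's own adversarial (generator) update as well as a second discriminator on the class logits are omitted for brevity.

```python
# Condensed sketch: the encoder outputs class logits y and a style code z; a discriminator
# pushes z towards a Gaussian prior; reconstruction uses unlabeled data and a cross-entropy
# term uses the few labeled samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes, z_dim = 10, 16
enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, n_classes + z_dim))
dec = nn.Sequential(nn.Linear(n_classes + z_dim, 256), nn.ReLU(), nn.Linear(256, 28 * 28))
disc_z = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc_z.parameters(), lr=1e-3)

def split(h):                     # first n_classes units are class logits, the rest is style
    return h[:, :n_classes], h[:, n_classes:]

x_unlab = torch.rand(64, 1, 28, 28)                                   # plentiful unlabeled images
x_lab, y_lab = torch.rand(8, 1, 28, 28), torch.randint(0, n_classes, (8,))  # few labeled samples

# Reconstruction + supervised phase
logits_u, z_u = split(enc(x_unlab))
recon = dec(torch.cat([F.softmax(logits_u, dim=1), z_u], dim=1))
logits_l, _ = split(enc(x_lab))
loss_ae = F.mse_loss(recon, x_unlab.flatten(1)) + F.cross_entropy(logits_l, y_lab)
opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

# Adversarial phase on the style code: match z to a Gaussian prior
_, z_u = split(enc(x_unlab))
d_loss = F.binary_cross_entropy_with_logits(disc_z(torch.randn(64, z_dim)), torch.ones(64, 1)) + \
         F.binary_cross_entropy_with_logits(disc_z(z_u.detach()), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
```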
139

Generative adversarial networks as integrated forward and inverse model for motor control / Generativa konkurrerande nätverk som integrerad framåtriktad och invers modell för rörelsekontroll

Lenninger, Movitz January 2017 (has links)
Internal models are believed to be crucial components in human motor control. It has been suggested that the central nervous system (CNS) uses forward and inverse models as internal representations of the motor systems. However, it is still unclear how the CNS implements the high-dimensional control of our movements. In this project, generative adversarial networks (GAN) are studied as a generative model of movement data. It is shown that, for a relatively small number of effectors, it is possible to train a GAN which produces new movement samples that are plausible given a simulator environment. It is believed that these models can be extended to generate high-dimensional movement data. Furthermore, this project investigates the possibility to use a trained GAN as an integrated forward and inverse model for motor control. / Interna modeller tros vara en viktig del av mänsklig rörelsekontroll. Det har föreslagits att det centrala nervsystemet (CNS) använder sig av framåtriktade modeller och inversa modeller för intern representation av motorsystemen. Dock är det fortfarande okänt hur det centrala nervsystemet implementerar denna högdimensionella kontroll. Detta examensarbete undersöker användningen av generativa konkurrerande nätverk som generativ modell av rörelsedata. Experiment visar att dessa nätverk kan tränas till att generera ny rörelsedata av en tvådelad arm och att den genererade datan efterliknar träningsdatan. Vi tror att nätverken även kan modellera mer högdimensionell rörelsedata. I projektet undersöks även användningen av dessa nätverk som en integrerad framåtriktad och invers modell.
140

Attacking Computer Vision Models Using Occlusion Analysis to Create Physically Robust Adversarial Images

Loh, Jacobsen 01 June 2020 (has links) (PDF)
Self-driving cars rely on their sense of sight to function effectively in chaotic and uncontrolled environments. Thanks to recent developments in computer vision, specifically convolutional neural networks, autonomous vehicles have developed the ability to see at or above human-level capabilities, which in turn has allowed for rapid advances in self-driving cars. Unfortunately, much like humans being confused by simple optical illusions, convolutional neural networks are susceptible to simple adversarial inputs. As there is no overlap between the optical illusions that fool humans and the adversarial examples that threaten convolutional neural networks, little is understood as to why these adversarial examples dupe such advanced models and what effective mitigation techniques might exist to resolve these issues. This thesis focuses on these adversarial images. By extending existing work, this thesis is able to offer a unique perspective on adversarial examples. Furthermore, these extensions are used to develop a novel attack that can generate physically robust adversarial examples. These physically robust instances provide a unique challenge as they transcend both individual models and the digital domain, thereby posing a significant threat to the efficacy of convolutional neural networks and their dependent applications.
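The occlusion-analysis idea underlying this kind of attack can be sketched as follows: slide a neutral occluder across the image, measure the drop in true-class confidence at each position, and use the most sensitive region as a candidate location for a physically printed patch. Function names and sizes are illustrative assumptions, not the thesis code.

```python
# Hedged sketch of an occlusion sensitivity map for patch placement.
import torch
import torch.nn.functional as F

def occlusion_map(model, image, label, patch=16, stride=8):
    """image: (3, H, W); returns a grid of confidence drops, higher = more sensitive."""
    model.eval()
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, label]
        _, H, W = image.shape
        drops = []
        for y in range(0, H - patch + 1, stride):
            row = []
            for x in range(0, W - patch + 1, stride):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0.5   # neutral grey square
                conf = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, label]
                row.append((base - conf).item())
            drops.append(row)
    return torch.tensor(drops)

# The peak of this map marks the region where a physically printed patch is most likely
# to disrupt the classifier.
```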
