  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Identifiering av UNO-kort : En jämförelse av bildigenkänningstekniker / Identification of UNO cards: A comparison of image recognition techniques

Al-Asadi, Yousif, Streit, Jennifer January 2023 (has links)
Engaging in the social game of UNO is a form of social interaction aimed at enjoyment. Each UNO deck consists of five different colors (blue, red, green, yellow, and joker) and various symbols. Participating in such a game can be frustrating for individuals with color vision impairment, since a substantial portion of the game relies on accurately identifying the color of each card. The overall purpose of this thesis is to develop a prototype for object recognition of UNO cards to support individuals with color vision impairment. The work compares object recognition methods, namely a Convolutional Neural Network (CNN) and two Template Matching (TM)-inspired methods: the hue template test and the binary template test. Each method is compared with respect to color and symbol recognition, both separately and combined. The prototype is developed by training two different CNN models, where the first model focuses solely on symbol recognition while the other incorporates both color and symbol recognition. These models are trained with the YOLOv5 algorithm, which is considered state-of-the-art (SOTA) among CNNs for its fast execution. In parallel, the two template tests are developed with OpenCV by creating templates for the cards; each template is compared against the detected card in order to classify it. Additionally, the K-Nearest Neighbor (KNN) machine learning algorithm is applied specifically to identify the color of the cards. Finally, a comparative analysis of these methods is conducted by evaluating performance metrics consisting of accuracy, precision, recall, and latency, with the comparison carried out between each method using a confusion matrix for color and symbol in the respective models. The study's findings show that the model combining CNN and KNN performed best during validation. Furthermore, the study shows that the template tests are faster to implement than the CNN because of the training a neural network requires, while the latency measurements show a difference between the models, with the CNN achieving the highest runtime performance.
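As a rough illustration of the color-identification step described above, here is a minimal KNN sketch in pure Python. The reference pixels and the choice of k are hypothetical, not taken from the thesis:

```python
from collections import Counter
import math

def knn_color(sample, training, k=3):
    """Classify an (R, G, B) pixel by majority vote among its k nearest
    labelled neighbours in RGB space (Euclidean distance)."""
    neighbours = sorted(training, key=lambda t: math.dist(sample, t[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical reference pixels for four UNO card colours.
training = [
    ((200, 30, 30), "red"), ((220, 60, 50), "red"),
    ((30, 60, 200), "blue"), ((50, 80, 220), "blue"),
    ((30, 180, 60), "green"), ((60, 200, 80), "green"),
    ((230, 210, 40), "yellow"), ((240, 220, 60), "yellow"),
]

print(knn_color((210, 45, 40), training))  # a reddish pixel -> "red"
```

In practice the thesis pairs this color vote with a CNN symbol detector; sampling several pixels from the detected card region and voting over them would make the color decision more robust.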
62

Robustness of Image Classification Using CNNs in Adverse Conditions

Ingelstam, Theo, Skåntorp, Johanna January 2022 (has links)
The usage of convolutional neural networks (CNNs) has revolutionized the field of computer vision. Though the algorithms used in image recognition have improved significantly in the past decade, they are still limited by the availability of training data. This paper aims to gain a better understanding of how limitations in the training data might affect the performance of the system. A robustness study was conducted using three different image datasets: CNN models were pre-trained on the ImageNet or CIFAR-10 datasets and then trained on the MAdWeather dataset, whose main characteristic is containing images with differing levels of obscurity in front of the objects in the images. The MAdWeather dataset is used to test how accurately a model can identify images that differ from its training dataset. The study shows that CNN performance on one condition does not translate well to other conditions. / Bachelor's degree project in electrical engineering, 2022, KTH, Stockholm
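The robustness comparison above boils down to grouping predictions by capture condition and comparing per-condition accuracy. A minimal sketch, with hypothetical condition labels and predictions:

```python
def accuracy_by_condition(records):
    """Group (condition, predicted, actual) records and report the
    per-condition accuracy, exposing any gap between conditions."""
    totals, hits = {}, {}
    for cond, pred, actual in records:
        totals[cond] = totals.get(cond, 0) + 1
        hits[cond] = hits.get(cond, 0) + (pred == actual)
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical predictions under two obscurity levels.
records = [
    ("clear", "cat", "cat"), ("clear", "dog", "dog"), ("clear", "cat", "dog"),
    ("fog", "cat", "dog"), ("fog", "dog", "dog"), ("fog", "dog", "cat"),
]
print(accuracy_by_condition(records))
```

A large spread between the reported values is exactly the kind of condition-to-condition gap the study observed.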
63

Bildklassificering av bilar med hjälp av deep learning / Image Classification of Cars using Deep Learning

Lindespång, Victor January 2017 (has links)
This report describes how an image classifier was created with the ability to identify car make and model from a given picture of a car. The classification model was developed using pictures that the company CAB had saved from insurance claims handled through their current products. The report begins with a brief theoretical introduction to machine learning and deep learning to guide the reader into the subject, and then continues with problem-specific methods that were of use for the project. It covers how the data was pre-processed, how the training process was carried out with the chosen tools, and a discussion of the result and what affected it, with comments on what can be done in the future to improve the end product.
64

A new approach to automatic saliency identification in images based on irregularity of regions

Al-Azawi, Mohammad Ali Naji Said January 2015 (has links)
This research introduces an image retrieval system which is, in several ways, inspired by the human vision system. The main problems with existing machine vision systems and image understanding are studied and identified in order to design a system that relies on human image understanding. The main improvement of the developed system is that it uses human attention principles in the process of identifying image contents. Human attention is represented by saliency extraction algorithms, which extract the salient regions, in other words, the regions of interest. This work presents a new approach to saliency identification which relies on the irregularity of a region. Irregularity is clearly defined and measuring tools are developed; these measures are derived from the formality and variation of the region with respect to the surrounding regions. Both local and global saliency have been studied, and appropriate algorithms were developed based on the local and global irregularity defined in this work. The need for suitable automatic clustering motivated us to study the available clustering techniques and to develop a technique suited to salient-point clustering: based on the fact that humans usually look at the region surrounding the gaze point, an agglomerative clustering technique is developed utilising the principles of blob extraction and intersection. Automatic thresholding was needed at different stages of the system's development, so a fuzzy thresholding technique was developed. Evaluation methods for saliency region extraction have been studied and analysed; subsequently, we developed evaluation techniques based on the extracted regions (or points) and compared them with ground truth data. The proposed algorithms were tested against standard datasets and compared with existing state-of-the-art algorithms.
Both quantitative and qualitative benchmarking are presented in this thesis, together with a detailed discussion of the results. The benchmarking showed promising results for the different algorithms. The developed algorithms have been utilised in designing an integrated saliency-based image retrieval system which uses the salient regions to describe the scene. The system auto-labels the objects in the image by identifying the salient objects and assigns labels based on the knowledge database contents. In addition, the system identifies the unimportant part of the image (the background) to give a full description of the scene.
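The thesis defines irregularity at the region level; as a crude stand-in, the idea can be sketched per pixel, scoring how much each value deviates from the mean of its 8-neighbourhood. The measure below is an illustrative simplification, not the thesis's actual formulation:

```python
def local_irregularity(img):
    """For each interior pixel of a 2-D grayscale grid, score its absolute
    deviation from the mean of its 8 neighbours -- a toy local
    irregularity measure. Border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            out[y][x] = abs(img[y][x] - sum(neigh) / len(neigh))
    return out

# A flat background with one bright, "salient" pixel at the centre.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
scores = local_irregularity(img)
```

The salient pixel receives by far the highest score, matching the intuition that regions deviating from their surroundings attract attention.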
65

Método para execução de redes neurais convolucionais em FPGA. / A method for execution of convolutional neural networks in FPGA.

Sousa, Mark Cappello Ferreira de 26 April 2019 (has links)
Convolutional Neural Networks have been used successfully for pattern recognition in images. However, their high computational cost and the large number of parameters involved make it difficult to execute this type of artificial neural network in real time in embedded applications, where processing power and data storage capacity are restricted. This work studied and developed a method for the real-time execution on FPGAs of a trained Convolutional Neural Network, taking advantage of the parallel processing power of this type of device. The focus of this work was the execution of the convolutional layers, since these layers can contribute up to 99% of the computational load of the entire network. In the experiments, an FPGA device was used in conjunction with a dual-core ARM processor on the same silicon substrate; only the FPGA was used to execute the convolutional layers of the AlexNet Convolutional Neural Network. The method focuses on the efficient distribution of FPGA resources through balancing of the pipeline formed by the convolutional layers, the use of buffers to reduce and reuse the memory that stores intermediate data (generated and consumed by the convolutional layers), and 8-bit numeric precision for storing the kernels, which also increases kernel read throughput. With the developed method, it was possible to execute all five AlexNet convolutional layers in 3.9 ms at a maximum operating frequency of 76.9 MHz. It was also possible to store all parameters of the convolutional layers in the FPGA's internal memory, eliminating potential external-memory access bottlenecks.
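The 8-bit kernel storage mentioned above can be sketched as a simple symmetric quantization: map each float weight to a signed 8-bit integer with one shared scale factor. The kernel values below are hypothetical; real FPGA flows use per-layer or per-channel schemes:

```python
def quantize_int8(weights):
    """Map float weights onto signed 8-bit integers [-127, 127] with a
    single scale factor -- the kind of storage reduction used to fit
    kernels in on-chip memory."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

kernel = [0.5, -1.0, 0.25, 0.75]   # hypothetical kernel values
q, scale = quantize_int8(kernel)
approx = dequantize(q, scale)
```

Each weight now occupies one byte instead of four, quadrupling how many kernel values a single memory read delivers, at the cost of a bounded rounding error.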
66

Segmentation et classification dans les images de documents numérisés / Segmentation and classification of digitized document images

Ouji, Asma 01 June 2012 (has links)
In this thesis, we deal with printed document image processing and analysis to automate the production of press reviews. The scanner output images are processed without any prior knowledge or human intervention. To characterize them, we present a scalable analysis system for complex color documents, based on a hybrid color segmentation suited to noisy document images, which segments each page into colorimetrically homogeneous zones and adapts its text extraction algorithms to the local properties of each zone. The color and text information provided by this system feeds a physical layout segmentation method for digitized press pages, and features are computed on the resulting blocks, which are then classified in order to, among other things, detect advertisement zones. Building on the classification work of the first part, we present a new clustering and classification engine, named ACPP, that is generic, fast, and easy to use. This approach differs from the large majority of existing methods, which rely on prior knowledge about the data and depend on abstract parameters that are difficult for the user to determine. From color characterization to article tracking via advertisement detection, all of the presented approaches are combined into a complete content-based processing chain for digitized press documents.
67

Appariement de formes basé sur une squelettisation hiérarchique / Shape matching based on a hierarchical skeletonization

Leborgne, Aurélie 11 July 2016 (has links)
This thesis addresses the matching of planar shapes based on a hierarchical skeletonization. First, we address the creation of a shape skeleton using an algorithm combining tools from discrete geometry with filters. This combination yields a skeleton with the properties desired for matching. Nevertheless, the resulting skeleton remains a representation of the shape that does not differentiate branches representing the overall shape from branches representing a detail of the shape. When matching, however, it seems more relevant to pair branches of the same order of importance and to give more weight to associations describing the global aspect of the shapes. Our second contribution addresses this problem: it concerns the prioritization of the skeleton branches created previously, by assigning each branch a weight reflecting its importance in the shape. To this end, we progressively smooth a shape and study the persistence of the branches to assign their weights. The final step is to match shapes through their hierarchical skeletons, modeled as hypergraphs. In other words, we pair branches two by two to determine a dissimilarity measure between two shapes, taking into account the geometry of the shapes, the relative position of their different parts, and their importance.
68

Reconhecimento de imagens de marcas de gado utilizando redes neurais convolucionais e máquinas de vetores de suporte / Recognition of cattle branding images using convolutional neural networks and support vector machines

Santos, Carlos Alexandre Silva dos 26 September 2017 (has links)
The automatic recognition of cattle branding images is a necessity for the government agencies responsible for this activity. To support this process, this work proposes an architecture capable of performing automatic recognition of these brandings. The architecture was implemented and experiments were conducted with two methods: Bag-of-Features and Convolutional Neural Networks (CNN). For the Bag-of-Features method, the SURF algorithm was used to extract points of interest from the images, and K-means clustering was used to build the visual-word vocabulary. This method achieved an overall accuracy of 86.02% with a processing time of 56.705 seconds on a set of 12 brandings and 540 images. For the CNN method, a complete network was created with five convolutional layers and three fully connected layers; the first convolutional layer took images converted to the RGB color format as input, ReLU was used for activation, and max-pooling was used for reduction. The CNN method achieved an overall accuracy of 93.28% with a processing time of 12.716 seconds on the same set. The CNN pipeline consists of six steps: a) selecting the image database; b) selecting the pre-trained CNN model; c) pre-processing the images and applying the CNN; d) extracting features from the images; e) training and classifying the images using an SVM; f) assessing the classification results. The experiments used the cattle branding image set of a city hall. Overall accuracy, recall, precision, the Kappa coefficient, and processing time were used to assess the performance of the proposed architecture. The results were satisfactory: the CNN method showed the best results compared to the Bag-of-Features method, being 7.26% more accurate and 43.989 seconds faster. Experiments were also conducted with the CNN method on branding sets with larger numbers of samples, yielding overall accuracy rates of 94.90% for 12 brandings and 840 images, and 80.57% for 500 brandings and 22,500 images, respectively.
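The evaluation step above compares methods via overall accuracy, precision, and recall computed from predicted-versus-actual labels. A minimal sketch with hypothetical predictions (Kappa and processing time are omitted for brevity):

```python
def metrics(pairs, positive):
    """Overall accuracy plus precision and recall for one class, computed
    from a list of (predicted, actual) label pairs."""
    tp = sum(1 for p, a in pairs if p == positive and a == positive)
    fp = sum(1 for p, a in pairs if p == positive and a != positive)
    fn = sum(1 for p, a in pairs if p != positive and a == positive)
    correct = sum(1 for p, a in pairs if p == a)
    return {
        "accuracy": correct / len(pairs),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Hypothetical predictions for a two-branding subset.
pairs = [("A", "A"), ("A", "B"), ("B", "B"), ("A", "A"), ("B", "A")]
m = metrics(pairs, positive="A")
```

Repeating this per class and averaging, alongside timing each method on the same image set, reproduces the comparison structure used in the thesis.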
69

Processo de design baseado no projeto axiomático para domínios próximos: estudo de caso na análise e reconhecimento de textura. / Design process based on the axiomatic design for close domain: case study in texture analysis and recognition.

Queiroz, Ricardo Alexandro de Andrade 19 December 2011 (has links)
Recent technological advances have attracted both industry and the academic community to research new methods, techniques, and formal languages for engineering design, in response to the growing demand for increasingly complete products and systems that fully satisfy end-user needs. Such needs may be tied, for example, to the analysis and recognition of the objects composing an image by their texture, an essential process in automating a wide range of applications such as robotic vision, industrial inspection, remote sensing, security, and computer-assisted medical diagnosis. Given the relevance of these applications, and because the application domain is very close to the developer's own context, this work presents a design process based on Axiomatic Design as the most suitable for this situation; specifically, in the texture analysis case study, a faster convergence to the solution, if one exists, is expected. In the case study, a new conception of a self-organizing artificial neural network (ANN) architecture is developed that preserves the two-dimensional spatial structure of the input image and performs texture feature extraction and recognition/classification within a single learning phase. A new concept for the paradigm of competition between neurons is also established. The process is original in allowing the developer to concurrently assume the customer's role in the project, and specifically in establishing the systematization and structuring of the designer's logical reasoning toward a solution to be developed and implemented as an ANN.
70

Medical Identity Theft and Palm Vein Authentication: The Healthcare Manager's Perspective

Cerda III, Cruz 01 January 2018 (has links)
The Federal Bureau of Investigation reported that cyber actors will likely increase cyber intrusions against healthcare systems and their concomitant medical devices because of the mandatory transition from paper to electronic health records, lax cyber security standards, and a higher financial payout for medical records in the deep web. The problem addressed in this quantitative correlational study was uncertainty surrounding the benefits of palm vein authentication adoption relative to the growing crime of medical identity theft. The purpose of this quantitative correlational study was to understand healthcare managers' and doctors' perceptions of the effectiveness of palm vein authentication technology. The research questions were designed to investigate the relationship between intention to adopt palm vein authentication technology and perceived usefulness, complexity, security, peer influence, and relative advantage. The unified theory of acceptance and use of technology was the theoretical basis for this quantitative study. Data were gathered through an anonymous online survey of 109 healthcare managers and doctors, and analyzed using principal axis factoring, Pearson's product-moment correlation, multiple linear regression, and one-way analysis of variance. The results of the study showed a statistically significant positive correlation between perceived usefulness, security, peer influence, relative advantage, and intention to adopt palm vein authentication. No statistically significant correlation existed between complexity and intention to adopt palm vein authentication. These findings indicate that by effectively using palm vein authentication, organizations can mitigate the risk of medical fraud and its associated costs, and positive social change can be realized.
