31

Evaluating mobile edge-computing on base stations : Case study of a sign recognition application

Castellanos Nájera, Eduardo January 2015 (has links)
Mobile phones have evolved from feature phones into smart phones whose processing power rivals that of personal computers from a decade ago. Nevertheless, the computing power of personal computers has also multiplied over the same period, so the gap between mobile platforms and personal computers and servers persists. Mobile Cloud Computing (MCC) has emerged as a paradigm that leverages this difference in processing power: it augments smart phones with resources from the cloud, including processing power and storage capacity. More recently, Mobile Edge Computing (MEC) has brought the benefits of MCC one hop away from the end user, while also providing additional advantages, e.g., access to network context information, reduced latency, and location awareness. This thesis explores the advantages of MEC in practice by augmenting an existing application called Human-Centric Positioning System (HoPS). HoPS estimates a user's location from context information and information extracted from photographs of signposts. The thesis presents the challenges of deploying HoPS in practice and implements strategies that exploit the advantages of MEC to tackle those challenges. It then evaluates the resulting system and discusses the implications of the results. To summarise, we make three primary contributions in this thesis: (1) we find that it is possible to augment HoPS and improve its response time by a factor of four by offloading the processing; (2) we can improve the overall accuracy of HoPS by leveraging the additional processing power at the MEC; (3) we observe that improved network conditions can reduce response time, although the difference is insignificant compared with the heavy processing required.
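The trade-off behind contribution (3) — offloading pays only when the remote compute savings outweigh the network transfer cost — can be sketched as a simple decision rule. This is a hypothetical model for illustration, not the thesis's actual offloading logic; all names and parameters are invented:

```python
def should_offload(input_bytes: int, result_bytes: int,
                   local_time_s: float, remote_time_s: float,
                   bandwidth_bps: float, rtt_s: float) -> bool:
    """Offload when the estimated remote turnaround beats local execution."""
    # Time to ship the request payload up and the result back down.
    transfer_s = (input_bytes + result_bytes) * 8 / bandwidth_bps
    return remote_time_s + transfer_s + rtt_s < local_time_s

# Heavy image processing: offloading wins despite the transfer cost.
decision = should_offload(1_000_000, 1_000, local_time_s=4.0,
                          remote_time_s=1.0, bandwidth_bps=10e6, rtt_s=0.05)
```

Under this toy model, better bandwidth shrinks only the transfer term, which matches the observation that network improvements matter little once processing dominates.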
32

System Agnostic GUI Testing : Analysis of Augmented Image Recognition Testing

Amundberg, Joel, Moberg, Martin January 2021 (has links)
No description available.
33

Positioning and tracking using image recognition and triangulation

Boström, Viktor January 2021 (has links)
Triangulation is used in a wide range of position-estimation applications. Usually it is done by measuring angles by hand to estimate positions in land surveying, navigation, and astronomy. With the rise of image recognition comes the possibility to triangulate automatically. The aim of this thesis is to use the image recognition camera Pixy2 to triangulate a target in three dimensions. It builds on previous projects on the topic, extending the system to estimate positions over a larger space using more Pixy2s. The setup used five Pixy2s with pan-tilt kits and one Raspberry Pi 4 B. Some limitations of the hardware were discovered, restricting the extent of the space in which triangulation could be performed successfully. Furthermore, there were some issues with the image recognition algorithm in the environment where positioning was performed. The thesis was successful in that it manages to triangulate positions over a larger area than previous projects, and in all three dimensions. The system could also follow a target's trajectory, albeit with some gaps in the measurements.
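The core geometric step — intersecting two bearing rays measured by separate cameras — can be sketched in 2-D as follows. This is a minimal illustration only; the thesis's five-camera 3-D setup is more involved, and all names here are invented:

```python
import math

def triangulate(p1, a1, p2, a2):
    """Intersect two bearing rays: the camera at p_i sees the target at angle a_i
    (radians, measured from the x-axis). Returns the intersection point, or
    None if the bearings are parallel."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2-D cross product of ray directions
    if abs(denom) < 1e-12:
        return None                          # parallel rays: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom   # distance along the first ray
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two cameras 2 m apart, sighting the same target at 45 and 135 degrees.
x, y = triangulate((0.0, 0.0), math.radians(45), (2.0, 0.0), math.radians(135))
```

Extending this to three dimensions and five cameras adds a least-squares step, since noisy rays no longer intersect exactly.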
34

Optical Three-Dimensional Image Matching Using Holographic Information

Kim, Taegeun 04 September 2000 (has links)
We present a three-dimensional (3-D) optical image matching technique, together with techniques for extracting the location of matched 3-D objects, for optical pattern recognition. We first describe the 3-D matching technique based on two-pupil optical heterodyne scanning. A hologram of the 3-D reference object is first created and then represented as one pupil function, with the other pupil function being a delta function. The superposition of the beams modulated by the two pupils generates a scanning beam pattern, which scans the 3-D target object to be recognized. The output of the scanning system is the 2-D correlation of the hologram of the reference object with that of the target object. When the 3-D image of the target object matches that of the reference object, the system output exhibits a strong correlation peak. This theory of 3-D holographic matching is analyzed in terms of two-pupil optical scanning, and computer simulation and optical experiment results are presented to reinforce it. The second part of the research concerns extracting the location of a 3-D image-matched object. The proposed system performs a correlation of the hologram of a 3-D reference object with that of a 3-D target object, and hence 3-D matching is possible. However, the system does not directly yield the depth location of matched 3-D target objects, because the correlation of holograms is a 2-D correlation and hence not 3-D shift invariant. We propose two methods to extract the location of matched 3-D objects directly from the correlation output of the system. One method uses an optical system that focuses the output correlation pattern along depth and reads the 3-D location off the focus position. However, this technique has a drawback: only the location of 3-D targets farther away than the 3-D reference object can be extracted.
Thus, in this research, we propose another method that extracts the location of a matched 3-D object without this drawback. The method applies the Wigner distribution to the power fringe-adjusted filtered correlation output to extract the 3-D location of a matched object. We analyze the proposed method and present computer simulation and optical experiment results. / Ph. D.
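The principle that a matched object appears as a strong correlation peak, whose position encodes the transverse location, can be illustrated with a plain digital 2-D cross-correlation. This is a toy sketch only — the thesis performs the correlation optically on holograms — and the function names are invented:

```python
def correlation_peak(ref, tgt):
    """Slide ref over tgt and return (offset, score) of the strongest match."""
    h, w = len(ref), len(ref[0])
    best_score, best_off = float("-inf"), None
    for dy in range(len(tgt) - h + 1):
        for dx in range(len(tgt[0]) - w + 1):
            score = sum(ref[i][j] * tgt[dy + i][dx + j]
                        for i in range(h) for j in range(w))
            if score > best_score:
                best_score, best_off = score, (dy, dx)
    return best_off, best_score

reference = [[1, 1],
             [1, 1]]
scene = [[0, 0, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0]]
offset, peak = correlation_peak(reference, scene)  # peak at row 1, column 2
```

As in the thesis, the peak position recovers only the 2-D (transverse) offset; recovering depth requires the extra machinery the abstract describes.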
35

A Comparative study of cancer detection models using deep learning

Omar Ali, Nasra January 2020 (has links)
Leukemia is a form of cancer that can be fatal; rehabilitating and treating it requires a correct and early diagnosis. To reduce waiting times for test results, the standard methods have been transformed into automated computer tools that can analyze, diagnose, and predict symptoms. In this work, a comparative study was performed between two different leukemia detection methods: a genomic sequencing method, which is a binary classification model, and an image-processing method, which is a multi-class classification model. The methods had different input values, but both used a Convolutional Neural Network (CNN) as the network architecture and split their datasets using 3-way cross-validation. The evaluation methods for analyzing the results were learning curves, confusion matrices, and classification reports. The results showed that the genomic sequencing method performed better, correctly predicting more values with a total accuracy of 98%, compared with the image-processing method's total accuracy of 81%. The different sizes of the datasets may be one cause of the algorithms' different test results.
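The 3-way cross-validation split that both models used can be sketched as a plain k-fold index generator. This is a generic illustration, not the authors' code; shuffling and stratification are omitted:

```python
def kfold_indices(n_samples, k=3):
    """Split indices 0..n_samples-1 into k (train, test) folds."""
    # Earlier folds absorb the remainder when n_samples is not divisible by k.
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        folds.append((train, test))
        start += size
    return folds

folds = kfold_indices(6, k=3)  # each sample lands in exactly one test fold
```

Each model is then trained k times, once per fold, and the reported metrics are averaged over the k held-out test sets.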
36

Application of Analogical Reasoning for Use in Visual Knowledge Extraction

Combs, Kara Lian January 2021 (has links)
No description available.
37

A Small Classification Experiment Between Dolls and Humans With CNN

Reinders, Ylva, Runnstrand, Josefin January 2021 (has links)
This study describes a small experiment using CNN models to see how well they differentiate between dolls and humans. The experiment used two different kinds of CNN models: one built after a classic model and one more rudimentary model. The models were tested on how accurately they predicted the right answer. The experiment was a three-class problem with a set of different parameters to test what would make it harder for the system to classify the images correctly. The original images were digitally enhanced to test different conditions: the models were tested on a dataset with negative images of the originals, one set with higher contrast than the originals, one set with different lighting conditions, one set with higher brightness, and three different levels of low resolution. The study concludes that brightness and lighting are the two most difficult conditions, and that the contours in the image are the most important factor for successful classification. / Bachelor's thesis project in electrical engineering 2021, KTH, Stockholm
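The digitally enhanced test conditions (negatives, contrast, brightness) amount to simple per-pixel transforms. A minimal sketch on an 8-bit grayscale image represented as nested lists — illustrative only, since the study's actual tooling is not described:

```python
def negative(img):
    """Invert an 8-bit grayscale image."""
    return [[255 - p for p in row] for row in img]

def adjust(img, gain=1.0, bias=0):
    """Scale contrast by `gain`, shift brightness by `bias`, clamp to [0, 255]."""
    return [[max(0, min(255, round(p * gain + bias))) for p in row] for row in img]

img = [[0, 100], [200, 255]]
neg = negative(img)                      # [[255, 155], [55, 0]]
brighter = adjust(img, bias=40)          # [[40, 140], [240, 255]]
higher_contrast = adjust(img, gain=1.5)  # clamps the brightest pixels to 255
```

Note that the brightness shift clips highlights to 255, which destroys exactly the contour detail the study found most important for classification.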
38

VISUAL AND SEMANTIC KNOWLEDGE TRANSFER FOR NOVEL TASKS

Ye, Meng January 2019 (has links)
Data is a critical component in a supervised machine learning system. Many successful applications of learning systems on various tasks are based on large amounts of labeled data. For example, deep convolutional neural networks have surpassed human performance on ImageNet classification, which comprises millions of labeled images. However, one challenge for conventional supervised learning systems is their generalization ability. Once a model is trained on a specific dataset, it can only perform the task on those seen classes and cannot be used for novel unseen classes. To make the model work on new classes, one has to collect and label new data and then re-train the model. However, collecting and labeling data is labor-intensive and costly, and in some cases even impossible. Also, there is an enormous number of different tasks in the real world, and it is not feasible to create a dataset for each of them. These problems raise the need for Transfer Learning, which aims to use data from the source domain to improve the performance of a model on the target domain, where the two domains have different data or different tasks. One specific case of transfer learning is Zero-Shot Learning (ZSL). It deals with the situation where the source domain and target domain have the same data distribution but not the same set of classes. For example, a model is given animal images of 'cat' and 'dog' for training and will be tested on classifying 'tiger' and 'wolf' images, which it has never seen. Unlike conventional supervised learning, Zero-Shot Learning does not require training data in the target domain to perform classification. This property gives ZSL the potential to be broadly applied in applications where a system is expected to tackle unexpected situations.
In this dissertation, we develop algorithms that help a model effectively transfer visual and semantic knowledge learned from a source task to a target task. More specifically, we first develop a model that learns a uniform visual representation of semantic attributes, which helps alleviate the domain shift problem in Zero-Shot Learning. Second, we develop an ensemble network architecture with a progressive training scheme, which transfers source-domain knowledge to the target domain in an end-to-end manner. Lastly, we move a step beyond ZSL and explore Label-less Classification, which transfers knowledge from pre-trained object detectors into scene classification tasks. Our label-less classification takes advantage of word embeddings trained on unorganized online text, eliminating the need for expert-defined semantic attributes for each class. Through comprehensive experiments, we show that the proposed methods effectively transfer visual and semantic knowledge between tasks and achieve state-of-the-art performance on standard datasets. / Computer and Information Science
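The zero-shot step — labeling an image from an unseen class by comparing its embedding against semantic class embeddings rather than trained class weights — can be sketched as nearest-neighbour search under cosine similarity. This is a schematic of the general ZSL recipe, not the dissertation's specific models; all vectors below are made up:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def zero_shot_predict(image_vec, class_vecs):
    """Pick the unseen class whose semantic embedding best matches the image."""
    return max(class_vecs, key=lambda c: cosine(image_vec, class_vecs[c]))

# Word-embedding-like vectors for classes never seen during training.
unseen = {"tiger": (1.0, 0.9, 0.0), "wolf": (0.0, 0.2, 1.0)}
label = zero_shot_predict((1.0, 1.0, 0.0), unseen)
```

The domain shift problem the dissertation targets arises precisely here: the mapping from images into this shared embedding space, learned on seen classes, may be biased when applied to unseen ones.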
39

Object Recognition in Satellite images using improved Convolutional Recurrent Neural Network

NATTALA, TARUN January 2023 (has links)
Background: The background of this research lies in detecting images from satellites. The recognition of images from satellites has become increasingly important due to the vast amount of data that can be obtained from them. This thesis aims to develop a method for recognizing satellite images using machine learning techniques. Objective: The main objective of this thesis is a unique approach to recognizing the data with a CRNN (Convolutional Recurrent Neural Network) architecture, applied to image recognition in satellite images. The main task is classifying the images accurately, which is achieved by utilizing object classification algorithms. The CRNN architecture is chosen because it can effectively extract features from satellite images using convolutional blocks and leverage the memory power of Long Short-Term Memory (LSTM) networks to connect the extracted features efficiently. The connected features improve the accuracy of our model significantly. Method: The proposed method involves a literature review to find current image recognition models, followed by experimentation: training a CRNN, a CNN, and an RNN and comparing their performance using the metrics mentioned in the thesis. Results: The performance of the proposed method is evaluated using various metrics, including precision, recall, F1 score, and inference speed, on a large dataset of labeled images. The results indicate that high accuracy is achieved in detecting and classifying objects in satellite images through our approach. The proposed method can be utilized in applications such as environmental monitoring, urban planning, and disaster management. Conclusion: The classification of the satellite images is performed using two datasets, for ships and cars. The proposed architectures are CRNN, CNN, and RNN; these three models are compared to find the best-performing algorithm.
The results indicate that the CRNN has the best accuracy, precision, F1 score, and inference speed, indicating a strong performance by the CRNN. Keywords: Comparison of CRNN, CNN, and RNN; Image recognition; Machine learning; Algorithms; You Only Look Once version 3; Satellite images; Aerial images; Deep learning
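The per-class precision, recall, and F1 used in the comparison can be recovered directly from a confusion matrix. A minimal sketch of the generic definitions, not the thesis's evaluation code, with rows as true labels and columns as predictions:

```python
def per_class_metrics(conf, cls):
    """Precision, recall, and F1 for one class of a confusion matrix.

    `conf[true][pred]` holds counts; rows are true labels, columns predictions."""
    tp = conf[cls][cls]
    fp = sum(row[cls] for true, row in conf.items() if true != cls)
    fn = sum(n for pred, n in conf[cls].items() if pred != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy two-class matrix for the ships/cars setting described above.
conf = {"ship": {"ship": 8, "car": 2},
        "car":  {"ship": 1, "car": 9}}
p, r, f1 = per_class_metrics(conf, "ship")
```

F1 is the harmonic mean of precision and recall, so it only scores high when both are high, which is why it is a stricter single-number comparison than accuracy alone.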
40

An evaluation of image preprocessing for classification of Malaria parasitization using convolutional neural networks / En utvärdering av bildförbehandlingsmetoder för klassificering av malariaparasiter med hjälp av Convolutional Neural Networks

Engelhardt, Erik, Jäger, Simon January 2019 (has links)
In this study, the impact of multiple image preprocessing methods on Convolutional Neural Networks (CNNs) was studied. Metrics such as accuracy, precision, recall, and F1-score (Hossin et al. 2011) were evaluated. Specifically, this study is geared towards malaria classification using the data set made available by the U.S. National Library of Medicine (Malaria Datasets n.d.). This data set contains images of thin blood smears, where uninfected and parasitized blood cells have been segmented. In the study, 3 CNN models were proposed for the parasitization classification task. Each model was trained on the original data set and 4 preprocessed data sets. The preprocessing methods used to create the 4 data sets were grayscale conversion, normalization, histogram equalization, and contrast limited adaptive histogram equalization (CLAHE). CLAHE preprocessing yielded a 1.46% (model 1) and 0.61% (model 2) improvement over the original data set in terms of F1-score; one model (model 3) provided inconclusive results. The results show that CNNs can be used for parasitization classification, but the impact of preprocessing is limited.
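Of the preprocessing methods compared, histogram equalization is easy to sketch from first principles: remap each gray level through the normalized cumulative histogram. A plain-Python illustration on nested lists — not the study's pipeline, which would typically use a library implementation (CLAHE additionally tiles the image and clips the histogram):

```python
def equalize(img, levels=256):
    """Histogram-equalize an 8-bit grayscale image given as nested lists."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:                    # cumulative histogram
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)

    def remap(p):                         # stretch the CDF over the full range
        if n == cdf_min:                  # constant image: map everything to 0
            return 0
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in img]

flat_contrast = [[50, 50, 100, 200]]
stretched = equalize(flat_contrast)       # gray levels spread across 0..255
```

The transform spreads the occupied gray levels over the full dynamic range, which is why it can help a CNN when the source images are low-contrast.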
