101 |
Multimodal Image Classification In Fluoropolymer AFM And Chest X-Ray Images. Meshnick, David Chad, 26 May 2023 (has links)
No description available.
|
102 |
Advanced Processing Techniques and Applications of Synthetic Aperture Radar Interferometry. Mestre-Quereda, Alejandro, 06 September 2019 (has links)
Synthetic Aperture Radar interferometry (InSAR) is a powerful, well-established technique that exploits the phase difference between pairs of SAR images to measure changes in the Earth’s surface. The quality of the interferometric phase is therefore the most critical factor in deriving reliable products. Unfortunately, the phase is often degraded by multiple decorrelation factors, such as geometrical or temporal decorrelation. Accordingly, central to this PhD thesis is the development of advanced processing techniques and algorithms to substantially reduce these decorrelation effects. The new techniques include an improved range spectral filter that fully exploits an external Digital Elevation Model (DEM) to reduce geometrical decorrelation between pairs of SAR images, especially in areas strongly influenced by topography where conventional methods are limited; an improved filter for the final interferometric phase, whose goal is to remove any remaining noise (for instance, noise caused by temporal decorrelation) while appropriately preserving phase detail; and polarimetric optimization algorithms that also enhance phase quality by exploiting the full polarization diversity. Moreover, the thesis evaluates the exploitation of InSAR data for crop-type mapping. Specifically, we have tested whether the multitemporal interferometric coherence is a valuable feature for a machine learning algorithm generating thematic maps of crop types. We have shown that InSAR data are sensitive to the temporal evolution of crops and hence constitute an alternative or a complement to conventional radiometric, SAR-based classifications.
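The two quantities the abstract revolves around, the interferometric phase and its quality (the coherence), can be sketched in a few lines. This is a minimal pure-Python illustration on toy pixel lists, not the thesis's actual processing chain:

```python
import cmath
import math

def interferogram(s1, s2):
    """Per-pixel phase difference between two co-registered complex SAR images."""
    return [cmath.phase(a * b.conjugate()) for a, b in zip(s1, s2)]

def coherence(s1, s2):
    """Sample coherence magnitude over a window of pixels:
    1 = noise-free phase, 0 = fully decorrelated."""
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    den = math.sqrt(sum(abs(a) ** 2 for a in s1) * sum(abs(b) ** 2 for b in s2))
    return abs(num) / den

# Two acquisitions differing only by a constant phase offset stay fully coherent.
master = [1 + 1j, 2 - 1j, 0.5 + 0.3j, -1 + 2j]
slave = [s * cmath.exp(1j * 0.7) for s in master]
print(round(coherence(master, slave), 3))  # 1.0
```

Decorrelation shows up as additive noise in `slave` that drives the coherence below 1, which is exactly what the filters developed in the thesis aim to counteract.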
|
103 |
Comparison of Urban Tree Canopy Classification With High Resolution Satellite Imagery and Three Dimensional Data Derived From LIDAR and Stereoscopic Sensors. Baller, Matthew Lee, 22 August 2008 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Despite growing recognition as a significant natural resource, methods for accurately estimating urban tree canopy cover extent and change over time are not well-established. This study evaluates new methods and data sources for mapping urban tree canopy cover, assessing the potential for increased accuracy by integrating high-resolution satellite imagery and 3D imagery derived from LIDAR and stereoscopic sensors. The results of urban tree canopy classifications derived from imagery, 3D data, and vegetation index data are compared across multiple urban land use types in the City of Indianapolis, Indiana. Results indicate that incorporating 3D data and vegetation index data with high-resolution satellite imagery does not significantly improve overall classification accuracy. Overall classification accuracies range from 88.34% to 89.66%, with corresponding overall Kappa statistics ranging from 75.08% to 78.03%. Statistically significant differences in accuracy occurred only when high-resolution satellite imagery was excluded from the classification treatment and only the vegetation index data or 3D data were evaluated. Overall classification accuracy was 78.33% for both of these treatments, with overall Kappa statistics of 51.36% and 52.59%.
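The Kappa statistic reported alongside the accuracies corrects overall agreement for the agreement expected by chance. A minimal sketch, using a hypothetical 2-class canopy/non-canopy confusion matrix rather than the study's actual counts:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows = reference, cols = predicted)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    row_sums = [sum(row) for row in confusion]
    col_sums = [sum(col) for col in zip(*confusion)]
    expected = sum(r * c for r, c in zip(row_sums, col_sums)) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical canopy / non-canopy confusion matrix.
matrix = [[40, 10],
          [5, 45]]
print(cohens_kappa(matrix))  # ~0.7, even though overall accuracy is 85%
```

The gap between 85% accuracy and kappa 0.7 mirrors the pattern in the abstract, where accuracies near 89% correspond to kappa values in the 75-78% range.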
|
104 |
Using Generative Adversarial Networks to Classify Structural Damage Caused by Earthquakes. Delacruz, Gian P, 01 June 2020 (has links) (PDF)
The amount of structural damage image data produced in the aftermath of an earthquake can be staggering, and it is challenging for a few human volunteers to efficiently filter and tag these images with meaningful damage information. There are several solutions that automate post-earthquake reconnaissance image tagging using machine learning (ML) to classify each occurrence of damage by building material and structural member type. ML algorithms are data-driven, improving with more training data. Thanks to the vast amount of data available and advances in computer architectures, ML, and in particular deep learning (DL), has become one of the most popular approaches to image classification, producing results comparable to, and in some cases superior to, human experts. These algorithms require labeled training images, but even though large numbers of images exist, most are unlabeled, and labeling them takes structural engineers a great deal of time. Current earthquake image databases lack label information or are incomplete, which significantly slows progress toward a solution, and they are very difficult to search. To train an ML algorithm to classify one type of structural damage, it took the architecture school an entire year to gather 200 images of that damage. That number is clearly not enough to avoid overfitting, so for this thesis we decided to generate synthetic images of the specific structural damage. In particular, we use Generative Adversarial Networks (GANs) to generate the synthetic images and enable fast classification of rail and road damage caused by earthquakes. Fast classification of rail and road damage can improve public safety and better prepare the reconnaissance teams that manage recovery tasks. GANs combine classification neural networks with generative neural networks.
For this thesis we combine a convolutional neural network (CNN) with a generative neural network. By taking the classifier trained within a GAN and modifying it to classify other images, the classifier can take advantage of the GAN training without requiring additional training data. The classifier trained in this way achieved an 88% accuracy score when classifying images of structural damage caused by earthquakes.
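The adversarial interplay between the two networks can be illustrated on a toy 1-D problem. The sketch below is purely illustrative (a scalar affine generator and a logistic discriminator with hand-derived gradient ascent steps), nothing like the thesis's CNN-based GAN; all names and constants are invented for the example:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D GAN: real data ~ N(3, 0.5); generator G(z) = a*z + c with z ~ N(0, 1);
# discriminator D(x) = sigmoid(w*x + b). Manual gradient ascent on both objectives.
random.seed(0)
w, b = 0.1, 0.0   # discriminator parameters
a, c = 1.0, 0.0   # generator parameters
lr = 0.01

for step in range(2000):
    x_real = random.gauss(3.0, 0.5)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + c

    # Discriminator step: ascend log D(x_real) + log(1 - D(x_fake)).
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(G(z)) (non-saturating generator loss).
    d_fake = sigmoid(w * x_fake + b)
    grad_x = (1 - d_fake) * w   # d/dx of log D(x)
    a += lr * grad_x * z
    c += lr * grad_x

print(c)  # generator offset typically drifts toward the real-data mean
```

Swapping the toy generator and discriminator for convolutional networks gives the setup the thesis uses to synthesize extra rail- and road-damage images.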
|
105 |
An Investigation in the Use of Hyperspectral Imagery Using Machine Learning for Vision-Aided Navigation. Ege, Isaac Thomas, 15 May 2023 (has links)
No description available.
|
106 |
Using Layer-wise Relevance Propagation and Sensitivity Analysis Heatmaps to Understand the Classification of an Image Produced by a Neural Network. Rosenlew, Matilda, Ljungdahl, Timas, January 2019 (has links)
Neural networks are regarded as state of the art within many areas of machine learning; however, due to their growing complexity and size, questions about their trustworthiness and interpretability have been raised, and neural networks are often considered a "black box". This has led to the emergence of evaluation methods that try to decipher these complex networks. Two of these methods, layer-wise relevance propagation (LRP) and sensitivity analysis (SA), are used to generate heatmaps that highlight the pixels in the input image that influence the classification. In this report, the aim is to perform a usability analysis by evaluating and comparing these methods to see how they can be used to understand a particular classification. The method used in this report is to iteratively distort image regions that were highlighted as important by the two heatmapping methods. This led to the findings that distorting essential features of an image according to the LRP heatmaps decreased the classification score, while distorting inessential features according to the combination of SA and LRP heatmaps increased the classification score. The results corresponded well with the theory behind the heatmapping methods and led to the conclusion that a combination of the two evaluation methods is recommended for fully understanding a particular classification.
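Sensitivity analysis in its simplest form scores each input pixel by the magnitude of the class score's gradient with respect to that pixel. A minimal finite-difference sketch on a toy linear scorer, not the report's actual networks (the weights and pixels below are invented):

```python
def sensitivity_heatmap(score, x, eps=1e-6):
    """Central finite-difference estimate of |d score / d x_i| per input pixel."""
    heat = []
    for i in range(len(x)):
        hi = x[:i] + [x[i] + eps] + x[i + 1:]
        lo = x[:i] + [x[i] - eps] + x[i + 1:]
        heat.append(abs((score(hi) - score(lo)) / (2 * eps)))
    return heat

# Toy "class score": a fixed linear model, so the true sensitivities are |weights|.
weights = [0.5, -2.0, 0.0, 1.5]
score = lambda x: sum(w * v for w, v in zip(weights, x))
pixels = [0.2, 0.9, 0.4, 0.1]
print(sensitivity_heatmap(score, pixels))  # ~ [0.5, 2.0, 0.0, 1.5]
```

LRP differs in that it propagates the output score backwards layer by layer under a conservation rule, rather than differentiating, which is why the report finds the two heatmaps highlight complementary regions.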
|
107 |
Transfer Learning Approach to Powder Bed Fusion Additive Manufacturing Defect Detection. Wu, Michael, 01 June 2021 (links) (PDF)
Laser powder bed fusion (LPBF) remains a predominantly open-loop additive manufacturing process with minimal in-situ quality and process control. Some machines feature optical monitoring systems but lack automated analytical capabilities for real-time defect detection. Recent advances in machine learning (ML) and convolutional neural networks (CNNs) present compelling solutions for analyzing images in real time and developing in-situ monitoring.
Approximately 30,000 selective laser melting (SLM) build images from 31 previous builds were gathered and labeled as either “okay” or “defect”. Then, 14 open-source CNNs were trained using transfer learning to classify the SLM build images. These models were evaluated by F1 score and down-selected to the top 3. The top 3 models were then retrained and evaluated using Dietterich’s 5x2 cross-validation and compared with pairwise Student’s t-tests. The t-test results show no statistically significant difference in performance between VGG-19, Xception, and InceptionResNet. All three models are strong candidates for future development and refinement.
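The F1 score used for model down-selection balances precision and recall of the "defect" class, which matters when defects are rare. A small sketch with hypothetical counts (not taken from the thesis):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for the positive ("defect") class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical layer-image counts: 4 true defects found, 1 false alarm, 4 missed.
print(round(f1_score(tp=4, fp=1, fn=4), 3))  # 0.615
```

A classifier that simply labels every layer "okay" would score high accuracy on an imbalanced build set but an F1 of zero, which is why F1 is the better down-selection metric here.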
Additional work addresses the entire model development process and establishes a foundation for future work. Collaborations with computer science students have produced an image pre-processing program to enhance as-taken SLM images. Other outcomes include initial work to overlay CAD layer images and a preliminary hardware-integration plan for the SLM machine. The results of this work demonstrate the potential of an optical layer-wise image defect detection system when paired with a CNN.
|
108 |
Image classification in Drone using Euclidean distance. Gangavarapu, Mohith, Pawar, Arjun, January 2022 (has links)
Drone vision is a surging area of research, primarily due to its surveillance and military uses. A camera-equipped drone is capable of carrying out a variety of operations such as image detection, recognition, and classification. Image processing is an important part of the process; it is used to denoise and smooth the image before recognition. We aimed to classify different images and command the drone to carry out various tasks depending on the image shown: when shown particular images, the drone would take off or land, respectively. We use the Euclidean distance algorithm to calculate the distance between two images; if the distance equals zero, the images are identical. While the ideal result of 0 is impossible due to noise, digital image processing methods can reduce the noise. We were able to classify basic images with some degree of accuracy, and the drone carried out the given tasks after a successful image classification. While Euclidean distance might be the first choice for many image-classification tasks, it has many limitations, which may call for other image processing algorithms to achieve better results.
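The Euclidean distance comparison described above can be sketched as follows, treating each grayscale image as a flat list of pixel intensities. This is a simplified stand-in for the authors' pipeline; the reference patterns, labels, and threshold are invented for the example:

```python
import math

def euclidean_distance(img_a, img_b):
    """Euclidean distance between two equal-sized grayscale images given as flat
    lists of pixel intensities; 0 means the images are identical."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)))

def classify(img, references, threshold=10.0):
    """Return the label of the closest reference image, or None if nothing is close."""
    label, dist = min(((lbl, euclidean_distance(img, ref))
                       for lbl, ref in references.items()), key=lambda t: t[1])
    return label if dist <= threshold else None

# Hypothetical 2x2 reference "images" mapped to drone commands.
refs = {"takeoff": [255, 0, 0, 255], "land": [0, 255, 255, 0]}
noisy = [250, 5, 3, 251]  # a noisy capture of the "takeoff" pattern
print(classify(noisy, refs))  # takeoff
```

The threshold absorbs the sensor noise the abstract mentions: the noisy capture sits at distance ~8.7 from its reference, nonzero but still well inside the acceptance radius.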
|
109 |
The Augmented Worker. Becerra-Rico, Josue, January 2022 (has links)
Augmented Reality (AR) and Mixed Reality (MR) have attracted increasing attention recently, with several implementations in video games and entertainment as well as in work-related applications. The technology can be used to guide workers so that they complete their tasks faster and with fewer human errors. The potential of this kind of technology is evaluated in this thesis through a proof-of-concept prototype that guides a novice in the kitchen through following a recipe and completing a dish. The thesis compares five different object detection algorithms, selecting the best in terms of time performance, energy performance, and detection accuracy; the selected algorithm is then implemented in the prototype application.
|
110 |
Comparison of Discriminative and Generative Image Classifiers. Budh, Simon, Grip, William, January 2022 (has links)
In this report, a discriminative and a generative image classifier, used to classify images of handwritten digits from zero to nine, are compared. The aim of this project was to compare the accuracy of the two classifiers in the absence and presence of perturbations to the images. This report describes the architectures and training of the classifiers using PyTorch. Images were perturbed in four ways for the comparison. The first perturbation was a model-specific attack that perturbed images to maximize the likelihood of misclassification. The other three perturbations changed pixels in a stochastic fashion. Furthermore, the influence of training on perturbed images on the robustness of the classifiers against image perturbations was studied. The conclusions drawn in this report were that the accuracy of the two classifiers on unperturbed images was similar and that the generative classifier was more robust against the model-specific attack. Also, the discriminative classifier was more robust against the stochastic noise and was significantly more robust against image perturbations when trained on perturbed images. / Bachelor's thesis in Electrical Engineering 2022, KTH, Stockholm
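A generative classifier models p(x | class) for each class and picks the class with the highest posterior, rather than learning the decision boundary directly. A minimal class-conditional Gaussian sketch on 1-D toy data; the report's MNIST classifiers are far richer, and all data below is invented:

```python
import math

def fit(values):
    """Per-class Gaussian parameters: mean and (biased) variance."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, var

def log_likelihood(x, mu, var):
    """Log density of a 1-D Gaussian."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def classify(x, class_params):
    """Argmax over classes of log p(x | class), assuming equal class priors."""
    return max(class_params, key=lambda c: log_likelihood(x, *class_params[c]))

# Toy 1-D "images": two well-separated classes of pixel averages.
params = {0: fit([0.0, 0.2, -0.1]), 1: fit([5.0, 5.2, 4.9])}
print(classify(0.1, params), classify(5.1, params))  # 0 1
```

Because such a model scores how plausible an input is under each class, implausible adversarially perturbed inputs get low likelihood everywhere, one intuition for the report's finding that the generative classifier resists the model-specific attack better.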
|