131 |
Thermal Imaging-Based Instance Segmentation for Automated Health Monitoring of Steel Ladle Refractory Lining / Infraröd-baserad Instanssegmentering för Automatiserad Övervakning av Eldfast Murbruk i Stålskänk. Bråkenhielm, Emil, Drinas, Kastrati. January 2022
Equipment and machines can be exposed to very high temperatures in the steel mill industry. One particularly critical part is the ladles used to hold and pour molten iron into mouldings. A refractory lining is used as an insulation layer between the outer steel shell and the molten iron to protect the ladle from the hot iron. Over time, or if the lining is not completely cured, the lining wears out or can potentially fail. Such a scenario can lead to a breakout of molten iron, which can cause damage to equipment and, in the worst case, workers. Previous work analyses how critical areas can be identified in a proactive manner. Using thermal imaging, failing spots on the lining show up as high-temperature areas on the outside steel shell; the idea is that the outside temperature corresponds to the thickness of the insulating lining. These spots are detected when temperatures over a given threshold are registered within the thermal camera's field of view. The images must then be manually analyzed over time to follow the progression of a detected spot. The existing solution is also prone to background noise from other hot objects. This thesis proposes an initial step towards automated monitoring of the health of refractory lining in steel ladles. The report investigates the use of instance segmentation to isolate the ladle from its background, thus reducing false alarms and background noise in an autonomous monitoring setup. Training is based on Mask R-CNN applied to our own thermal images, with weights pre-trained on visual images. Detection is done on two classes: open or closed ladle. The model proved reasonably successful on a small dataset of 1000 thermal images. Different models were trained with and without augmentation, pre-trained weights, as well as multi-phase fine-tuning. The highest mAP of 87.5% was achieved by a pre-trained model with image augmentation and without fine-tuning. Although not tested in production, temperature readings could finally be extracted from the segmented ladle, decreasing the risk of false alarms from background noise.
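As an illustration of the final step described above, the minimal Python sketch below restricts temperature readings to the pixels of a predicted ladle mask; this is not the thesis code, and the per-pixel temperature conversion, mask source, and alarm threshold are all assumptions.

```python
import numpy as np

def ladle_temperature_stats(temp_image: np.ndarray,
                            ladle_mask: np.ndarray,
                            threshold_c: float = 350.0):
    """Summarize temperatures inside a predicted ladle mask.

    temp_image  : HxW array of per-pixel temperatures in deg C
                  (assumes the radiometric conversion is already done).
    ladle_mask  : HxW boolean array from the instance segmentation model.
    threshold_c : alarm threshold; the value here is purely illustrative.
    """
    ladle_pixels = temp_image[ladle_mask]
    if ladle_pixels.size == 0:
        return None  # no ladle detected in this frame
    return {
        "max_c": float(ladle_pixels.max()),
        "mean_c": float(ladle_pixels.mean()),
        # share of ladle pixels above the threshold
        "hot_fraction": float((ladle_pixels > threshold_c).mean()),
        "alarm": bool(ladle_pixels.max() > threshold_c),
    }
```

Because the statistics are computed only over masked pixels, hot objects in the background no longer contribute to the alarm decision.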
132 |
Point clouds in the application of Bin Picking. Anand, Abhijeet. January 2023
Automatic bin picking is a well-known problem in industrial automation and computer vision, where a robot picks an object from a bin and places it somewhere else. Research has been ongoing for many years to improve contemporary solutions. With camera technology advancing rapidly and fast computation resources available, solving this problem with deep learning has become a topic of current interest for several researchers. This thesis leverages current state-of-the-art deep learning methods for 3D instance segmentation and point cloud registration and combines them to improve the performance and robustness of the bin-picking solution. The problem of bin picking becomes complex when the bin contains identical objects with heavy occlusion. To solve this problem, 3D instance segmentation is performed with the Fast Point Cloud Clustering (FPCC) method to detect and locate the objects in the bin. Further, an extraction strategy is proposed to choose one predicted instance at a time. In the next step, a point cloud registration technique based on the PointNetLK method is implemented to estimate the pose of the selected object in the bin. The above implementation is trained, tested, and evaluated on synthetically generated datasets. The synthetic dataset also contains several noisy point clouds to imitate a real situation. Real data captured at the company SICK IVP is also tested with the implemented model. It is observed that the 3D instance segmentation can detect and locate the objects in the bin. In a noisy environment, the performance degrades as the noise level increases; however, the decrease in performance is not significant. Point cloud registration is observed to work best with the full point cloud of the object, compared to point clouds with missing points.
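The abstract does not specify the extraction strategy; the sketch below shows one plausible, purely illustrative choice in Python: pick the highest-lying segmented instance with enough supporting points, as a proxy for the least-occluded object on top of the pile.

```python
import numpy as np

def select_instance(points: np.ndarray, instance_labels: np.ndarray,
                    min_points: int = 200):
    """Pick one segmented instance to grasp from a labeled bin point cloud.

    points          : Nx3 array (x, y, z), z pointing up out of the bin.
    instance_labels : N array of integer instance ids (-1 = background/noise).
    Returns the points of the chosen instance, or None if nothing qualifies.

    Criterion (illustrative only): prefer the highest-lying instance with
    enough supporting points, i.e. the object most likely on top of the pile.
    """
    best_id, best_height = None, -np.inf
    for inst_id in np.unique(instance_labels):
        if inst_id < 0:
            continue
        mask = instance_labels == inst_id
        if mask.sum() < min_points:          # likely a fragment, skip it
            continue
        height = points[mask, 2].mean()
        if height > best_height:
            best_id, best_height = inst_id, height
    if best_id is None:
        return None
    return points[instance_labels == best_id]
```

The selected instance would then be passed to the registration step to estimate its pose before grasping.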
133 |
Strategies for the Characterization and Virtual Testing of SLM 316L Stainless Steel. Hendrickson, Michael Paul. 02 August 2023
The selective laser melting (SLM) process allows for the control of unique part form and function characteristics not achievable with conventional manufacturing methods and has thus gained interest in several industries, such as the aerospace and biomedical fields. The fabrication processing parameters selected to manufacture a given part influence the created material microstructure and the final mechanical performance of the part. Understanding the process-structure and structure-performance relationships is very important for the design and quality assurance of SLM parts. Image-based analysis methods are commonly used to characterize material microstructures but are very time consuming, traditionally requiring manual segmentation of imaged features. Two Python-based image analysis tools are developed here to automate the instance segmentation of manufacturing defects and subgranular cell features commonly found in SLM 316L stainless steel (SS) for quantitative analysis. A custom-trained mask region-based convolutional neural network (Mask R-CNN) model is used to segment cell features from scanning electron microscopy (SEM) images with an instance segmentation accuracy nearly identical to that of a human researcher, but about four orders of magnitude faster. The defect segmentation tool uses techniques from the OpenCV Python library to identify and segment defect instances from optical images. A melt-pool structure generation tool is also developed to create custom melt-pool geometries based on a few user inputs, with the ability to create functionally graded structures for use in a virtual testing framework. This tool allows for the study of complex melt-pool geometries and graded structures commonly seen in SLM parts and is applied to three finite element analyses to investigate the effects of different melt-pool geometries on part stress concentrations. / Master of Science / Recent advancements in additive manufacturing (AM) processes like the selective laser melting (SLM) process are revolutionizing the way many products are manufactured. The geometric form and material microstructure of SLM parts can be controlled by manufacturing settings, referred to as fabrication processing parameters, in ways not previously possible with conventional manufacturing techniques such as machining and casting. The improved geometric control of SLM parts has enabled more complex part geometries as well as significant manufacturing cost savings for some parts. With improved control over the material microstructure, the mechanical performance of SLM parts can be finely tailored and optimized for a particular application. Complex functionally graded materials (FGM) can also easily be created with the SLM process by varying the fabrication processing parameters spatially within the manufactured part to improve mechanical performance for a desired application. The added control offered by the SLM process has created a need for understanding how changes in the fabrication processing parameters affect the material structure and, in turn, how the produced structure affects the mechanical properties of the part. This study presents three different tools developed for the automated characterization of SLM 316L stainless steel (SS) material structures and the generation of realistic material structures for numerical simulation of mechanical performance.
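A hedged sketch of how such a cell-feature measurement could be driven is shown below; it uses torchvision's off-the-shelf Mask R-CNN as a stand-in for the custom-trained model, and the score threshold and pixel scale are placeholder values.

```python
import numpy as np
import torch
import torchvision

# Placeholder model: the thesis trains a custom Mask R-CNN; the COCO-pretrained
# torchvision weights stand in here purely for illustration.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def cell_equivalent_diameters(image_chw: torch.Tensor,
                              um_per_px: float = 0.05,
                              score_thr: float = 0.7,
                              mask_thr: float = 0.5):
    """Return equivalent-circle diameters (micrometres) of segmented cells.

    image_chw : float tensor of shape (3, H, W), values in [0, 1], i.e. an SEM
                image replicated to three channels. um_per_px is an assumed scale.
    """
    with torch.no_grad():
        pred = model([image_chw])[0]
    keep = pred["scores"] > score_thr
    masks = (pred["masks"][keep, 0] > mask_thr).numpy()   # (N, H, W) booleans
    areas_px = masks.reshape(masks.shape[0], -1).sum(axis=1)
    # Diameter of a circle with the same area as each mask, converted to um.
    return np.sqrt(4.0 * areas_px / np.pi) * um_per_px
```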
A defect content tool is presented to automatically identify and create binary segmentations of defects in SLM parts, consisting of small air pockets within the volume of the parts, from digital optical images. A machine learning-based instance segmentation tool is also trained on a custom dataset and used to measure the size of nanoscale cell features, unique to 316L (SS) and some other metal alloys processed with SLM, from scanning electron microscopy (SEM) images. Both of these tools automate the laborious process of segmenting individual objects of interest from hundreds or thousands of images and are shown to have an accuracy very close to that of manually produced results from a human. The results are also used to analyze three different samples produced with different fabrication processing parameters, which show process-structure relationships similar to those reported in other studies. The SLM structure generation tool is developed to create melt-pool structures similar to those seen in SLM parts, which arise from the successive melting and solidification of material along the laser scanning path. This structural feature is unique to AM processes such as SLM, and the example test cases investigated in this study show that changes in the melt-pool structure geometry have a measurable effect, slightly above a 10% difference, on the stress and strain response of the material when a tensile load is applied. The melt-pool structure generation tool can create complex geometries capable of varying spatially to create FGMs from a few user inputs and, when applied to existing simulation methods for SLM parts, offers improved estimates of the mechanical response of SLM parts.
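The defect tool's exact pipeline is not given in the abstract; a minimal OpenCV sketch of the general approach (threshold dark pores, filter small contours, report porosity) might look as follows, with all parameters assumed.

```python
import cv2
import numpy as np

def segment_defects(gray: np.ndarray, min_area_px: int = 20):
    """Segment pore-like defects in an optical micrograph (illustrative only).

    gray : 8-bit grayscale image of a polished SLM cross-section, where pores
           appear dark against the brighter metal matrix.
    Returns (defect_mask, porosity_fraction, contours).
    """
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu threshold; THRESH_BINARY_INV makes dark pores the foreground (255).
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Drop tiny specks that are more likely polishing artifacts than pores.
    contours = [c for c in contours if cv2.contourArea(c) >= min_area_px]
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, contours, -1, 255, thickness=cv2.FILLED)
    porosity = float((clean > 0).mean())
    return clean, porosity, contours
```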
134 |
Online Panoptic Mapping of Indoor Environments : A complete panoptic mapping framework / Realtid Panoptisk Kartläggning av Inomhusmiljöer : Ett komplett panoptiskt kartläggningsramverk. G Sneltvedt, Isak. January 2024
Replicating a real-world environment is crucial for creating simulations, computer vision, global and local path planning, and localization. While computer-aided design software is a standard tool for such a task, it may not always be practical or effective. An alternative approach is mapping, which uses sensory input and computer vision technologies to reconstruct the environment. However, developing such software requires knowledge of various fields, making it a challenging task. This thesis takes a deep dive into a state-of-the-art mapping framework and explores potential improvements, providing a foundation for an open-source project. The resulting software can replicate a real-world environment while storing panoptic classification data at the voxel level. Through 3D object matching and probability theory, the mapping software is resilient to object misclassifications and retains consistency across the different instances of observed objects. The final software is designed to be easy to use in other projects by substituting the simulation data with a semantic, instance, or panoptic segmentation model. Additionally, the software integrates functionalities that facilitate the visualization of either the different classes or a particular class instance. / Att replikera en verklig miljö är avgörande för att skapa simuleringar, datorseende, global och lokal vägplanering samt lokalisering. Trots att ett datorstött designprogram är ett standardverktyg för sådana uppgifter kanske det inte alltid är praktiskt eller effektivt. Ett alternativt tillvägagångssätt är kartläggning, som använder sensorisk input och datorseendeteknik för att rekonstruera omgivningen. Att utveckla sådan programvara kräver dock kunskap inom olika områden, vilket gör det till en utmanande uppgift. Den här avhandlingen fördjupar sig i ett toppmodernt kartläggningsramverk och utforskar potentiella förbättringar, vilket ger en grund för ett öppet källkodsprojekt. Resultatet av denna avhandling är en programvara som kan replikera en verklig miljö samtidigt som den lagrar panoptisk klassificeringsdata på voxelnivå. Genom 3D-objektmatchning och sannolikhetsteori är kartläggningsprogramvaran motståndskraftig mot felaktiga objektklassificeringar och är konsekvent avseende förekomsten av olika observerade objekt. Den slutliga programvaran är utformad med fokus på att göra den enkel att använda i andra projekt genom att ersätta simuleringsdata med en semantisk, instans- eller panoptisk segmenteringsmodell. Dessutom integrerar programvaran funktioner som underlättar visualiseringen av antingen olika klasser eller en specifik instans av en klass.
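As a rough illustration of the probabilistic part, the sketch below keeps a per-voxel class belief and updates it with each new semantic observation under an assumed confusion model; this is not the thesis's exact formulation.

```python
import numpy as np

class VoxelClassBelief:
    """Per-voxel class belief, updated with each new semantic observation.

    A minimal multinomial Bayes update: each observation of a class is assumed
    correct with probability hit_prob, with the remaining mass spread uniformly
    over the other classes. Illustrative only, not the thesis's update rule.
    """

    def __init__(self, num_classes: int, hit_prob: float = 0.8):
        self.belief = np.full(num_classes, 1.0 / num_classes)
        self.hit_prob = hit_prob
        self.num_classes = num_classes

    def update(self, observed_class: int) -> int:
        likelihood = np.full(self.num_classes,
                             (1.0 - self.hit_prob) / (self.num_classes - 1))
        likelihood[observed_class] = self.hit_prob
        self.belief *= likelihood
        self.belief /= self.belief.sum()       # renormalize
        return int(self.belief.argmax())       # current best class estimate
```

Under such an update, a voxel observed repeatedly as one class keeps that class as its best estimate even if an occasional misclassification is received.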
135 |
A FaaS Instance Management Algorithm for Minimizing Cost subject to Response Time / Algoritm för hantering av FaaS-instanser för att minimera kostnaderna med hänsyn till svarstiden. Zhang, Tianyu. January 2022
With the development of cloud computing technologies, the concept of Function as a Service (FaaS) has become increasingly popular over the years. Developers can choose to create applications in the form of functions and delegate the deployment and management of the infrastructure to the FaaS provider. Before a function can be executed on the infrastructure of the FaaS service provider, an environment to execute the function needs to be initialized; this environment initialization is known as a cold start. Loading and maintaining a function is costly for FaaS providers, and the cold start process in particular consumes more system resources, such as Central Processing Unit (CPU) time and memory, than keeping functions alive. It is therefore essential to prevent cold starts whenever possible, as they increase both response time and cost. An instance management policy needs to be implemented to reduce the probability of cold starts while minimizing costs. This project's objective is to develop an instance management algorithm that minimizes total costs while meeting response time requirements. By investigating three widely used instance management algorithms, we found that none of them utilizes the dependencies that exist between functions. We believe these dependencies can be used to reduce response time and cold start probability by predicting upcoming invocations. Leveraging this observation, we proposed a novel Dependency Based Algorithm (DBA). Using extensive simulations, we showed that the proposed algorithm solves the problem and provides low response times at low cost compared to the baselines. / I och med utvecklingen av molntjänster har konceptet FaaS (Function as a Service) blivit alltmer populärt under årens lopp. Utvecklare kan välja att skapa applikationer i form av funktioner och delegera utplaceringen och förvaltningen av infrastrukturen till FaaS-leverantören. Innan en funktion kan exekveras i FaaS-tjänsteleverantörens infrastruktur måste en miljö för att exekvera en funktion initieras; denna miljöinitialisering kallas kallstart. Att ladda och underhålla en funktion är kostsamt för FaaS-leverantörerna, särskilt kallstartsprocessen som kostar mer systemresurser som CPU och minne än att hålla funktionerna vid liv. Därför är det viktigt att förhindra en kallstart när det är möjligt eftersom detta skulle leda till en ökning av både svarstiden och kostnaden. En policy för hantering av instanser måste införas för att minska sannolikheten för kallstarter och samtidigt minimera kostnaderna. Projektets mål är att utveckla en algoritm för hantering av instanser för att minimera de totala kostnaderna samtidigt som kraven på svarstid uppfylls. Genom att undersöka tre allmänt använda algoritmer för hantering av instanser fann vi att ingen av dem utnyttjar det beroende som finns mellan funktioner. Vi tror att dessa beroenden kan vara användbara för att minska svarstiden och sannolikheten för kallstart genom att förutsäga nästa anrop. Genom att utnyttja denna observation föreslog vi en ny beroendebaserad algoritm. Med hjälp av omfattande simuleringar visade vi att den föreslagna algoritmen kan lösa problemet och ge en låg svarstid med låga kostnader jämfört med baslinjerna.
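A minimal sketch of the dependency-based idea is given below: when a function is invoked, functions that frequently follow it (according to an assumed dependency graph) are pre-warmed so their cold starts are avoided. The graph, threshold, and keep-alive window are all hypothetical and not taken from the thesis.

```python
import time

# Hypothetical dependency graph: for each function, the probability that it
# triggers another function shortly after it finishes.
DEPENDENCIES = {
    "checkout": {"payment": 0.9, "email": 0.6},
    "payment":  {"email": 0.8},
}

class DependencyBasedKeepAlive:
    """Illustrative sketch of a dependency-based pre-warming policy."""

    def __init__(self, prewarm_threshold: float = 0.5, keep_alive_s: float = 60.0):
        self.prewarm_threshold = prewarm_threshold
        self.keep_alive_s = keep_alive_s
        self.warm_until = {}                      # function -> expiry timestamp

    def on_invocation(self, function: str) -> list:
        now = time.time()
        # Keep the invoked function warm for a fixed window.
        self.warm_until[function] = now + self.keep_alive_s
        # Pre-warm likely successors so their cold starts are avoided.
        prewarmed = []
        for nxt, prob in DEPENDENCIES.get(function, {}).items():
            if prob >= self.prewarm_threshold:
                self.warm_until[nxt] = now + self.keep_alive_s
                prewarmed.append(nxt)
        return prewarmed

    def is_warm(self, function: str) -> bool:
        return self.warm_until.get(function, 0.0) > time.time()
```

The trade-off in such a policy is the extra memory cost of pre-warmed instances against the response-time penalty of cold starts, which is exactly the cost/response-time balance the thesis optimizes.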
136 |
Instance segmentation using 2.5D data. Öhrling, Jonathan. January 2023
Multi-modality fusion is an area of research that has shown promising results in the domain of 2D and 3D object detection. However, multi-modality fusion methods have largely not been utilized in the domain of instance segmentation. This master's thesis investigated whether multi-modality fusion methods can be applied to deep learning instance segmentation models to improve their performance on multi-modality data. The two multi-modality fusion methods presented, input extension and feature fusion, were applied to a two-stage instance segmentation model, Mask R-CNN, and a single-stage instance segmentation model, RTMDet. Models were trained on different variations of preprocessed RGBD and ToF data provided by SICK IVP, as well as RGBD data from the publicly available NYUDepth dataset. The master's thesis concludes that the feature fusion method can be applied to the Mask R-CNN model to improve the network's performance by 1.8 percentage points (%pt.) bounding box mAP and 1.6%pt. segmentation mAP on SICK RGBD, 7.7%pt. bounding box mAP and 7.4%pt. segmentation mAP on ToF, and 7.4%pt. bounding box mAP and 7.4%pt. segmentation mAP on NYUDepth. The RTMDet model saw little to no improvement from the inclusion of depth, but had baseline performance similar to that of the improved Mask R-CNN model that utilized feature fusion. The input extension method saw no improvement in performance, as it faced technical implementation limitations.
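For reference, the input extension idea (feeding depth as an extra input channel) can be sketched in PyTorch by widening a backbone's first convolution; this illustrates the general method only, not the thesis's exact Mask R-CNN or RTMDet modification.

```python
import torch
import torch.nn as nn
import torchvision

def extend_first_conv_to_rgbd(model: torchvision.models.ResNet) -> None:
    """Widen a ResNet stem from 3 to 4 input channels (RGB + depth).

    The pretrained RGB filters are kept; the new depth channel is initialized
    with the mean of the RGB filters. Illustrative of 'input extension' fusion.
    """
    old = model.conv1                               # Conv2d(3, 64, 7x7, stride 2)
    new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                    stride=old.stride, padding=old.padding, bias=False)
    with torch.no_grad():
        new.weight[:, :3] = old.weight
        new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)
    model.conv1 = new

backbone = torchvision.models.resnet50(weights="DEFAULT")
extend_first_conv_to_rgbd(backbone)
rgbd = torch.randn(1, 4, 224, 224)                  # RGB-D input batch
logits = backbone(rgbd)                             # forward pass now accepts 4 channels
```

Feature fusion, by contrast, keeps separate RGB and depth branches and merges their intermediate feature maps, which is the variant the thesis found to help Mask R-CNN.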
137 |
Instance Segmentation for Printed Circuit Board (PCB) Component Analysis : Exploring CNNs and Transformers for Component Detection on Printed Circuit Boards. Möller, Oliver. January 2023
In the intricate domain of Printed Circuit Boards (PCBs), object detection poses unique challenges, particularly given the broad size spectrum of components, ranging from a mere 2 pixels to several thousand pixels within a single high-resolution image, often averaging 4000x3000 pixels. Such resolutions are atypical in the realm of deep learning for computer vision, making the task even more demanding. Further complexities arise from the significant intra-class variability and minimal inter-class differences for certain component classes. In this master's thesis, we rigorously evaluated the performance of a CNN-based object detection framework (FCOS) and a transformer model (DETR) for the task. Additionally, by integrating the novel foundation model from Meta, named "Segment Anything", we extended the pipeline to include instance segmentation. The resulting model is proficient in detecting and segmenting component instances in PCB images, achieving F1 scores of 81% and 82% for the primary component classes of resistors and capacitors, respectively. Overall, when aggregated over 18 component classes, the model attains a commendable F1 score of 74%. This study not only underscores the potential of advanced deep learning techniques in PCB analysis but also paves the way for future endeavors in this interdisciplinary convergence of electronics and computer vision. / I det komplicerade området med kretskort (PCB) innebär objektdetektering unika utmaningar, särskilt med tanke på det breda storleksspektrumet av komponenter, från bara 2 pixlar till flera tusen pixlar i en enda högupplöst bild, ofta i genomsnitt 4000x3000 pixlar. Sådana upplösningar är atypiska när det gäller djupinlärning för datorseende, vilket gör uppgiften ännu mer krävande. Ytterligare komplexitet uppstår från den betydande variationen inom klassen och minimala skillnader mellan klasserna för vissa komponentklasser. I denna masteruppsats utvärderade vi noggrant prestandan hos ett CNN-baserat ramverk för objektdetektering (FCOS) och en transformatormodell (DETR) för uppgiften. Genom att integrera den nya grundmodellen från Meta, med namnet "Segment Anything", utvecklade vi dessutom pipelinen för att inkludera instanssegmentering. Den resulterande modellen är skicklig på att upptäcka och segmentera komponentinstanser på PCB-bilder och uppnår en F1-poäng på 81% och 82% för de primära komponentklasserna resistorer respektive kondensatorer. När modellen aggregeras över 18 komponentklasser uppnår den en F1-poäng på 74%. Denna studie understryker inte bara potentialen hos avancerade djupinlärningstekniker vid PCB-analys utan banar också väg för framtida insatser inom denna tvärvetenskapliga konvergens av elektronik och datorseende.
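A sketch of the detection-to-segmentation hand-off might look as follows, assuming Meta's segment_anything package is installed; the checkpoint path and the detector producing the boxes are placeholders.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Assumed setup: a SAM checkpoint downloaded locally (path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

def masks_from_detections(image_rgb: np.ndarray, boxes_xyxy: np.ndarray):
    """Turn detector boxes (e.g. from FCOS or DETR) into instance masks.

    image_rgb  : HxWx3 uint8 image of the PCB.
    boxes_xyxy : Nx4 array of [x0, y0, x1, y1] component boxes.
    Returns a list of HxW boolean masks, one per box.
    """
    predictor.set_image(image_rgb)
    masks = []
    for box in boxes_xyxy:
        m, _, _ = predictor.predict(box=box.astype(np.float32),
                                    multimask_output=False)
        masks.append(m[0])        # single mask per box prompt
    return masks
```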
138 |
Vectorizing Instance-Based Integration Processes. Boehm, Matthias; Habich, Dirk; Preissler, Steffen; Lehner, Wolfgang; Wloka, Uwe. 13 January 2023
The inefficiency of integration processes, as an abstraction of workflow-based integration tasks, is often attributed to low resource utilization and significant waiting times for external systems. Due to the increasing use of integration processes within IT infrastructures, throughput optimization has a high influence on the overall performance of such an infrastructure. In the area of computational engineering, low resource utilization is addressed with vectorization techniques. In this paper, we introduce the concept of vectorization in the context of integration processes in order to achieve a higher degree of parallelism; here, transactional behavior and serialized execution must be ensured. Our evaluation shows that the message throughput can be significantly increased.
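The core idea can be illustrated with a small pipeline sketch: each operator of the process runs as its own stage connected by queues, so several message instances are in flight at once while each message still traverses the operators in order. This is an illustrative Python sketch under those assumptions, not the paper's engine.

```python
import queue
import threading

def run_pipeline(operators, messages):
    """Execute a chain of operators as a pipeline over a stream of messages.

    Each operator runs in its own thread, connected by queues, so different
    messages occupy different operators concurrently while every message
    still passes through the operators in order.
    """
    stages = [queue.Queue() for _ in range(len(operators) + 1)]
    SENTINEL = object()

    def worker(op, inbox, outbox):
        while True:
            msg = inbox.get()
            if msg is SENTINEL:
                outbox.put(SENTINEL)   # propagate shutdown downstream
                break
            outbox.put(op(msg))

    threads = [threading.Thread(target=worker, args=(op, stages[i], stages[i + 1]))
               for i, op in enumerate(operators)]
    for t in threads:
        t.start()
    for msg in messages:
        stages[0].put(msg)
    stages[0].put(SENTINEL)

    results = []
    while True:
        out = stages[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Example: a three-operator integration process applied to ten messages.
ops = [lambda m: m.upper(), lambda m: m + "!", lambda m: {"payload": m}]
print(run_pipeline(ops, [f"msg{i}" for i in range(10)]))
```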
139 |
Study of Effect of Coverage and Purity on Quality of Learned Rules. Gandharva, Kumar. 22 June 2015
No description available.
140 |
Live Cell Imaging Analysis Using Machine Learning and Synthetic Food Image Generation. Yue Han. 17 April 2024
Live cell imaging is a method to optically investigate living cells using microscopy images. It plays an increasingly important role in biomedical research as well as drug development. In this thesis, we focus on label-free mammalian cell tracking and label-free segmentation of abnormally shaped nuclei in microscopy images. We propose a method that uses a precomputed velocity field to enhance cell tracking performance. Additionally, we propose an ensemble method, Weighted Mask Fusion (WMF), which combines the results of multiple segmentation models with shape analysis to improve the final nuclei segmentation mask. We also propose an edge-aware Mask R-CNN and introduce a hybrid architecture, an ensemble of CNNs and Swin-Transformer Edge Mask R-CNNs (HER-CNN), to accurately segment irregularly shaped nuclei in microscopy images. Our experiments indicate that our proposed method outperforms other existing methods for cell tracking and abnormally shaped nuclei segmentation.

While image-based dietary assessment methods reduce the time and labor required for nutrient analysis, the major challenge with deep learning-based approaches is that performance is heavily dependent on the quality of the datasets. Food data in particular suffer from high intra-class variance and class imbalance. In this thesis, we present an effective clustering-based training framework named ClusDiff for generating high-quality and representative food images. Through experiments, we showcase our method's effectiveness in enhancing food image generation. Additionally, we conduct a study on the utilization of synthetic food images to address the class imbalance issue in long-tailed food classification.
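As a rough sketch of the Weighted Mask Fusion idea, the example below fuses the binary masks of several models by weighted pixel voting; the weights and threshold are placeholders, and the shape-analysis step of the thesis's WMF is omitted.

```python
import numpy as np

def weighted_mask_fusion(masks, weights, threshold=0.5):
    """Fuse binary nuclei masks from several models by weighted pixel voting.

    masks     : list of HxW boolean arrays, one per model, for the same image.
    weights   : list of per-model weights (e.g. validation scores), same length.
    threshold : fraction of the total weight a pixel must reach to be kept.

    Simplified sketch of an ensemble fusion step; the thesis's WMF additionally
    uses shape analysis to adjust the fusion, which is not reproduced here.
    """
    stacked = np.stack([m.astype(np.float32) for m in masks], axis=0)  # (M, H, W)
    w = np.asarray(weights, dtype=np.float32).reshape(-1, 1, 1)
    votes = (stacked * w).sum(axis=0) / w.sum()
    return votes >= threshold

# Example with two models that disagree on a few pixels.
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:4, 1:4] = True
fused = weighted_mask_fusion([a, b], weights=[0.6, 0.4])
```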