  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Automatic Image Segmentation of Healthy and Atelectatic Lungs in Computed Tomography

Cuevas, Luis Maximiliano 15 June 2010 (has links)
Computed tomography (CT) has become a standard in pulmonary imaging, allowing the analysis of diseases such as lung nodules, emphysema and embolism. The improved spatial and temporal resolution brings a dramatic increase in the amount of data that has to be stored and processed. This has motivated the development of computer-aided diagnosis (CAD) systems that release the physician from the tedious task of manually delineating the boundaries of the structures of interest in such a large number of images, a pre-processing step known as image segmentation. Apart from being impractical, manual segmentation is prone to high intra- and inter-observer variability. Automatic segmentation of lungs with atelectasis poses a challenge because in CT images they have a texture and gray level similar to the surrounding tissue. Consequently, the available graphical information is not sufficient to distinguish the boundary of the lung. The present work aims to close the gap left by existing methods in the segmentation of atelectatic lungs in volumetric CT data. A priori knowledge of anatomical information plays a key role in achieving this goal.
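As a concrete point of reference, segmentation of healthy, aerated lung typically starts from a Hounsfield-unit threshold — and this is precisely the step that fails for atelectatic tissue, whose density resembles the surrounding soft tissue. A minimal sketch, assuming a commonly used HU range rather than any value specified in the thesis:

```python
import numpy as np

def lung_mask(hu_volume, lower=-1000, upper=-400):
    """Boolean mask of voxels in a typical aerated-lung HU range (assumed values)."""
    return (hu_volume >= lower) & (hu_volume <= upper)

# Toy 2x2x2 "volume" in Hounsfield units: air, lung parenchyma, soft tissue, bone
vol = np.array([[[-1000, -700], [-500, 40]],
                [[-800, -650], [60, 700]]])
mask = lung_mask(vol)  # atelectatic lung (~soft-tissue HU) would be missed here
```

Atelectatic regions fall outside this range, which is why the thesis turns to anatomical priors instead of intensity alone.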
322

AI-based Age Estimation from Mammograms

Dissanayake Lekamlage, Dilukshi Charitha Subashini Dissanayake, Afzal, Fabia January 2020 (has links)
Background: Age estimation has attracted attention because of its various clinical and medical applications. There are many studies on human age estimation from biomedical images such as X-ray images, MRI, facial images and dental images. However, no research has been done on mammograms for age estimation. Therefore, our research focuses on age estimation from mammogram images. Objectives: The purpose of this study is to build an AI-based model for estimating age from mammogram images based on the pectoral muscle (PM) segment and assess its accuracy. First, we segment the pectoral muscle from mammograms. Then we extract deep learning features and handcrafted features from the pectoral muscle segment, as well as from other regions for comparison. From these features, we build models to estimate the age. Methods: We selected an experiment as the method to answer our research question. We used the U-Net model for pectoral muscle segmentation. After that, we extracted handcrafted features and deep learning features from the pectoral muscle using ResNet-50 and Xception. Then we trained Support Vector Regression (SVR) and Random Forest (RF) models to estimate the age based on the pectoral muscle of mammograms. Finally, we observed how accurate these models are in estimating the age by comparing their MSE and MAE values. We also considered the breast region (BR) and the whole MLO view to answer our research question. Results: The MAE values for both SVR and RF models built on handcrafted features are around 10 years in all cases. With deep learning features, the MAE is lower than with handcrafted features. In our experiment, the lowest observed MAE was 8.4656 years, for the model that extracted features from the whole MLO view using ResNet-50, with SVR as the regression model. Conclusions: We conclude that the breast region (BR) is more accurate for estimating age than the PM, having the lowest MAE and MSE values among the models. Moreover, we observed that handcrafted-feature models are not as accurate as deep-feature models in estimating age from mammograms.
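The deep-features-plus-SVR pipeline can be sketched with scikit-learn on synthetic stand-ins for the pooled CNN features; the feature dimension, kernel settings, and data below are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))                        # stand-in for pooled CNN features
y = 40 + 5 * X[:, 0] + rng.normal(scale=2, size=100)  # synthetic "age" labels

# Train an RBF-kernel SVR on 80 samples, evaluate MAE on the held-out 20
model = SVR(kernel="rbf", C=10.0).fit(X[:80], y[:80])
mae = mean_absolute_error(y[80:], model.predict(X[80:]))
```

The thesis compares exactly this kind of MAE (and MSE) across regressors (SVR vs. RF), feature sources (ResNet-50, Xception, handcrafted), and regions (PM, BR, whole MLO).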
323

Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping / Radar och optisk datafusion för objektbaserad kartering av urbant marktäcke

Jacob, Alexander January 2011 (has links)
The creation and classification of segments for object-based urban land cover mapping is the key goal of this master thesis. An algorithm based on region growing and merging was developed, implemented and tested. The synergy effects of a fused dataset of SAR and optical imagery were evaluated based on the classification results. Testing was mainly performed with data over Beijing, China. The dataset consists of SAR and optical data, and the classified land cover/use maps were evaluated using standard methods for accuracy assessment such as confusion matrices, kappa values and overall accuracy. The classification used for testing consists of nine classes: low-density built-up, high-density built-up, road, park, water, golf course, forest, agricultural crop and airport. The development was performed in Java, and a graphical interface for user-friendly interaction was created in parallel with the algorithm. This proved very useful during the period of extensive parameter testing, as parameters could easily be entered through the dialogs of the interface. The algorithm treats the image as a connected graph of pixels, where each pixel can merge with its direct neighbors, i.e. those sharing an edge with it. Three criteria can be used in the current state of the algorithm: a mean-based spectral homogeneity measure, a variance-based textural homogeneity measure, and a fragmentation test as a shape measure. The algorithm has three key parameters: the minimum and maximum segment sizes, and a homogeneity threshold based on a weighted combination of the relative change caused by merging two segments. The growing and merging is divided into two phases: the first is based on mutual-best-partner merging and the second on the homogeneity threshold. In both phases, all three criteria can be combined with arbitrary weights.
A third step checks the fulfillment of the minimum segment size and can be performed before or after the other two steps. The segments can then be labeled interactively in a supervised manner, once again using the graphical user interface, to create a training sample set. This training set is used to train a support vector machine (SVM) with a radial basis function (RBF) kernel. The optimal settings for the SVM training parameters can be found through a cross-validation grid search, which is implemented within the program as well. The SVM algorithm is based on the LibSVM Java implementation. Once training is completed, the SVM can be used to predict classes for the whole dataset, yielding a classified land-cover map that can be exported as a vector dataset. The results show that incorporating texture features already in the segmentation is superior to using spectral information alone, especially when working with unfiltered SAR data. Incorporating the suggested shape feature, however, does not seem to be advantageous, especially considering the much longer processing time it entails. The classification results also make it evident that the fusion of SAR and optical data is beneficial for urban land cover mapping. In particular, the distinction between urban areas and agricultural crops improved greatly, and the confusion between high- and low-density built-up areas was also reduced by the fusion. / Dragon 2 Project
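The cross-validation grid search for the RBF-kernel SVM can be sketched with scikit-learn standing in for the thesis's LibSVM Java implementation; the per-segment features and the two pseudo land-cover classes below are synthetic placeholders:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))            # stand-in for per-segment features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # two pseudo land-cover classes

# Grid search over C and the RBF kernel width, scored by 3-fold cross-validation
grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=3).fit(X, y)
best_params = search.best_params_  # used to train the final classifier
```

After the search, the best-scoring parameter pair is used to train the SVM that labels every segment in the scene.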
324

Algorithm Oriented to the Detection of the Level of Blood Filling in Venipuncture Tubes Based on Digital Image Processing

Castillo, Jorge, Apfata, Nelson, Kemper, Guillermo 01 January 2021 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions of the publishing house where it was published. / This article proposes an algorithm for detecting the blood filling level in venipuncture tubes, with millimeter resolution. The objective of the software is to detect the amount of blood stored in the venipuncture tube and avoid coagulation problems due to excess fluid. It also aims to avoid blood levels below those required for the type of analysis to be performed. The algorithm acquires images from a camera positioned in a rectangular structure located within an enclosure, which has its own internal lighting to ensure adequate segmentation of the pixels of the region of interest. The algorithm consists of an image enhancement stage based on gamma correction, followed by a segmentation stage for the pixel area of interest based on thresholding in the HSI color model, together with filtering to accentuate the contrast between the filling level and the staining, and, as a penultimate stage, localization of the filling level from changes in the vertical tonality of the image. Finally, the blood level in the tube is obtained by detecting the number of pixels that make up the vertical dimension of the tube filling; this pixel count is then converted to physical dimensions expressed in millimeters. The validation results show an average percentage error of 0.96% for the proposed algorithm. / Peer reviewed
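The final pixel-count-to-millimeters step can be sketched as follows; the calibration factor and intensity threshold are illustrative assumptions (the fixed camera geometry would determine the real calibration), not values from the article:

```python
import numpy as np

MM_PER_PIXEL = 0.25  # assumed calibration from the fixed camera-to-tube geometry

def fill_level_mm(column, blood_threshold=80):
    """Count dark (blood) pixels along the tube's vertical axis, convert to mm."""
    blood_pixels = int(np.count_nonzero(column < blood_threshold))
    return blood_pixels * MM_PER_PIXEL

# Simulated vertical intensity profile: bright empty tube above, dark blood below
column = np.array([200] * 60 + [40] * 140)
level = fill_level_mm(column)  # 140 px * 0.25 mm/px = 35.0 mm
```

The article's pipeline does the hard part (segmenting the blood pixels robustly under controlled lighting) before this simple conversion.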
325

Applying Machine Learning to Detect Historical Remains in Swedish Forestry Using LIDAR Data / Tillämpning av maskininlärning för att upptäcka historiska lämningar inom svenskt skogsbruk med hjälp av LIDAR-data

Abdulin, Ruslan January 2021 (has links)
Historical remains in Swedish forests are at risk of being damaged by heavy machinery during regular soil preparation, scarification, and regeneration activities. The reason for this is that the exact locations of these remains are often unknown or their records are inaccurate. Some of the most vulnerable historical remains are the traces left after years of charcoal production. In this thesis, we design and implement a computer vision artificial intelligence model capable of identifying these traces using two accessible visualizations of Light Detection and Ranging (LIDAR) data. The model we used was the ResNet34 convolutional neural network pre-trained on the ImageNet dataset. The model took advantage of the image segmentation approach and required only a small number of annotations distributed over the original images for training. During data preparation, the original images were heavily augmented, which bolstered the training dataset. Results showed that the model can detect charcoal burner sites and mark them on both types of LIDAR visualizations. Being implemented on modern frameworks and featuring state-of-the-art machine learning techniques, the model may reduce the cost of surveying this type of historical remains and thereby help save cultural heritage.
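The heavy augmentation mentioned above can be illustrated with the eight dihedral variants (rotations and flips) commonly applied to aerial or LIDAR tiles; this is a generic sketch of the idea, not the thesis's exact pipeline:

```python
import numpy as np

def augment(tile):
    """Yield the 8 dihedral variants (4 rotations, each also mirrored) of a tile."""
    for k in range(4):
        rot = np.rot90(tile, k)
        yield rot
        yield np.fliplr(rot)

tile = np.arange(9).reshape(3, 3)  # stand-in for a small LIDAR visualization tile
variants = list(augment(tile))     # 8 augmented copies per annotated tile
```

Because a hillshade or slope tile has no preferred orientation, each of the few annotated examples yields eight training samples for free.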
326

Semantic Segmentation of Urban Scene Images Using Recurrent Neural Networks

Daliparthi, Venkata Satya Sai Ajay January 2020 (has links)
Background: In autonomous driving, the vehicle receives pixel-wise data from RGB cameras, point-wise depth information from those cameras, and other sensor data as input. The computer inside the autonomous driving vehicle processes the input data and provides the desired output, such as steering angle, torque, and brake. To make accurate decisions, the computer inside the vehicle must be completely aware of its surroundings and understand each pixel in the driving scene. Semantic segmentation is the task of assigning a class label (such as car, road, pedestrian, or sky) to each pixel in a given image. A better-performing semantic segmentation algorithm will therefore contribute to the advancement of the autonomous driving field. Research Gap: Traditional methods, such as handcrafted features and feature extraction methods, were mainly used to solve semantic segmentation. Since the rise of deep learning, most works use deep learning to deal with semantic segmentation. The most commonly used neural network architecture for semantic segmentation has been the convolutional neural network (CNN). Even though some works have made use of recurrent neural networks (RNNs), the effect of RNNs on semantic segmentation had not yet been thoroughly studied. Our study addresses this research gap. Idea: After going through the existing literature, we came up with the idea of "using RNNs as an add-on module, to augment the skip-connections in semantic segmentation networks through residual connections." Objectives and Method: The main objective of our work is to improve the performance of semantic segmentation networks by using RNNs. An experiment was chosen as the methodology for our study. In our work, we propose three novel architectures, called UR-Net, UAR-Net, and DLR-Net, by applying our idea to the existing networks U-Net, Attention U-Net, and DeepLabV3+, respectively.
Results and Findings: We empirically showed that our proposed architectures improve the segmentation of edges and boundaries. Through our study, we found that there is a trade-off between using RNNs and the inference time of the model: using RNNs to improve the performance of semantic segmentation networks costs some extra seconds during inference. Conclusion: Our findings will not directly benefit the autonomous driving field, where better performance is needed in real time. However, they will contribute to the advancement of biomedical image segmentation, where doctors can trade those extra seconds of inference for better performance.
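The core idea — augmenting a skip connection with an RNN output through a residual connection — can be sketched in plain NumPy; the toy recurrent scan below is only a stand-in for the actual RNN modules used in UR-Net, UAR-Net, and DLR-Net:

```python
import numpy as np

def rnn_scan(feat, w=0.5):
    """Minimal recurrent scan down the rows of a feature map (tanh state update)."""
    h = np.zeros(feat.shape[1])
    out = np.empty_like(feat)
    for i, row in enumerate(feat):
        h = np.tanh(row + w * h)  # state carries context from previous rows
        out[i] = h
    return out

def augmented_skip(skip):
    """Skip connection plus RNN output, combined through a residual connection."""
    return skip + rnn_scan(skip)

skip = np.random.default_rng(0).normal(size=(4, 3))  # tiny stand-in feature map
out = augmented_skip(skip)
```

The residual form means the network can fall back to the plain skip connection if the recurrent context adds nothing, which is what makes the RNN a safe "add-on module."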
327

NOVEL MODEL-BASED AND DEEP LEARNING APPROACHES TO SEGMENTATION AND OBJECT DETECTION IN 3D MICROSCOPY IMAGES

Camilo G Aguilar Herrera (9226151) 13 August 2020 (has links)
<div><div><div><p>Modeling microscopy images and extracting information from them are important problems in the fields of physics and material science. </p><p><br></p><p>Model-based methods, such as marked point processes (MPPs), and machine learning approaches, such as convolutional neural networks (CNNs), are powerful tools to perform these tasks. Nevertheless, MPPs present limitations when modeling objects with irregular boundaries. Similarly, machine learning techniques show drawbacks when differentiating clustered objects in volumetric datasets.</p><p> </p><p>In this thesis we explore the extension of the MPP framework to detect irregularly shaped objects. In addition, we develop a CNN approach to perform efficient 3D object detection. Finally, we propose a CNN approach together with geometric regularization to provide robustness in object detection across different datasets.</p><p><br></p><p>The first part of this thesis explores the addition of boundary energy to the MPP by using active contours energy and level sets energy. Our results show this extension allows the MPP framework to detect material porosity in CT microscopy images and to detect red blood cells in DIC microscopy images.</p><p><br></p><p>The second part of this thesis proposes a convolutional neural network approach to perform 3D object detection by regressing objects voxels into clusters. Comparisons with leading methods demonstrate a significant speed-up in 3D fiber and porosity detection in composite polymers while preserving detection accuracy.</p><p><br></p><p>The third part of this thesis explores an improvement in the 3D object detection approach by regressing pixels into their instance centers and using geometric regularization. 
This improvement demonstrates robustness when comparing 3D fiber detection in several large volumetric datasets.</p><p><br></p></div></div></div><div><div><div><p>These methods can contribute to fast and correct structural characterization of large volumetric datasets, which could potentially lead to the development of novel materials.</p></div></div></div>
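The center-regression idea from the third part can be illustrated as follows: each voxel predicts an offset to its instance center, and voxels whose predicted centers coincide are grouped into one object. The grid-cell grouping here is a deliberate simplification of the clustering used in the thesis:

```python
import numpy as np

def group_by_center(coords, offsets, grid=1.0):
    """Shift each voxel by its regressed offset, then group voxels whose
    predicted centers land in the same grid cell (one cell ~ one instance)."""
    centers = coords + offsets
    keys = np.floor(centers / grid).astype(int)
    _, labels = np.unique(keys, axis=0, return_inverse=True)
    return labels

# Two 2-voxel "fibers": each voxel's offset points at its instance center
coords = np.array([[0., 0., 0.], [1., 0., 0.], [5., 5., 5.], [6., 5., 5.]])
offsets = np.array([[.5, 0, 0], [-.5, 0, 0], [.5, 0, 0], [-.5, 0, 0]])
labels = group_by_center(coords, offsets)
```

Because clustered objects touch in space but not in center space, this separates instances that connected-component labeling would fuse.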
328

FE Modelling Of Two Femur Fixation Implants

Arsiwala, Ali, Shukla, Vatsal January 2021 (has links)
Among women over the age of 50, the likelihood of an atypical fracture increases drastically, partly due to osteoporosis. With a pre-existing implant in the femur, inserted due to a prior atypical fracture, treating a later femoral neck fracture is complex and risky. Currently, a fractured femoral diaphysis is treated using an intramedullary nail that is fixed to the femur either through the femoral neck (Recon locking method) or through the lesser trochanter (Antegrade locking method). A study by Bögl et al., JBJS 102.17 (2020), pp. 1486-1494, found that fixation of the intramedullary nail through the femoral neck reduces the risk of future femoral neck fractures. The study also states that more than 50% of the patients with atypical femoral fractures related to bisphosphonate treatment for osteoporosis (within the study subpopulation) were treated with the Antegrade locking implant. Little literature exists that explains why one locking method shows a lower risk of re-operation than the other. The purpose of this study is to examine the effects these two implants have on the femur using finite element analysis (FEA). The study compares the finite element analysis results for the Recon implant model (Recon model) and the Antegrade implant model (Antegrade model). A femur model without implants (native bone model) is used to verify material behavior, while the other two are used to compare the stress-strain distribution, primarily in the neck region. This is a patient-specific study; hence the femur model is generated from patient computed tomography (CT) scans. The bone model was assigned heterogeneous isotropic material properties derived from the patient CT data. The finite element (FE) model of the bone was meshed using HyperMesh.
The peak loading condition, including the muscle forces, was applied to the native bone model as well as to the Recon and Antegrade models, while the loading conditions of a normal walking cycle were applied only to the Recon and Antegrade models to compare the impact of the two implant types. Both loading conditions were simulated by fixing the distal condyle region of the bone. The analysis results show that the Antegrade implant experiences much higher stresses and strains in the neck region than the Recon implant. Also, the presence of the intramedullary nail through the femoral diaphysis helps distribute the stresses and strains in the anterior distal diaphysis region of the bone. For the case of no implants, the model showed strains and stresses in the lateral distal region of the femoral diaphysis.
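Stress fields like those compared above are commonly reduced to the von Mises equivalent stress per element. A minimal sanity check of the formula, illustrative only and not code from the thesis:

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor (same units)."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

# Uniaxial tension of 100 MPa should yield an equivalent stress of 100 MPa
s = np.diag([100.0, 0.0, 0.0])
eq = von_mises(s)
```

Comparing such an equivalent scalar element by element in the neck region is one standard way to contrast the Recon and Antegrade stress distributions.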
329

Image Vectorization

Price, Brian L. 31 May 2006 (has links) (PDF)
We present a new technique for creating an editable vector graphic from an object in a raster image. Object selection is performed interactively in subsecond time by calling graph cut with each mouse movement. A renderable mesh is then computed automatically for the selected object and each of its (sub)objects by (1) generating a coarse object mesh; (2) performing recursive graph cut segmentation and hierarchical ordering of subobjects; and (3) applying error-driven mesh refinement to each (sub)object. The result is a fully layered object hierarchy that facilitates object-level editing without leaving holes. Object-based vectorization compares favorably with current approaches in representation and rendering quality. Object-based vectorization and complex editing tasks are performed in a few tens of seconds.
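Step (3), error-driven refinement, can be illustrated with a quadtree-style split that subdivides a region until a constant-color patch fits within tolerance; the paper refines meshes rather than axis-aligned cells, so this is only an analogy of the driving principle:

```python
import numpy as np

def refine(region, img, tol=10.0):
    """Recursively split an image region until a constant-color patch
    approximates it within `tol` (error-driven quadtree refinement)."""
    y0, y1, x0, x1 = region
    patch = img[y0:y1, x0:x1]
    if patch.size == 0:
        return []
    if patch.std() <= tol or (y1 - y0 <= 1 and x1 - x0 <= 1):
        return [region]  # approximation error small enough, or cell is minimal
    my, mx = (y0 + y1) // 2, (x0 + x1) // 2
    cells = []
    for r in [(y0, my, x0, mx), (y0, my, mx, x1),
              (my, y1, x0, mx), (my, y1, mx, x1)]:
        cells += refine(r, img, tol)
    return cells

img = np.zeros((4, 4))
img[:2, :2] = 100.0          # one quadrant differs, so exactly one split occurs
cells = refine((0, 4, 0, 4), img)
```

The same principle — spend mesh resolution only where the approximation error demands it — drives the paper's per-object mesh refinement.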
330

Improving character recognition by thresholding natural images / Förbättra optisk teckeninläsning genom att segmentera naturliga bilder

Granlund, Oskar, Böhrnsen, Kai January 2017 (has links)
State-of-the-art optical character recognition (OCR) algorithms are capable of extracting text from images under predefined conditions. OCR is extremely reliable for interpreting machine-written text with minimal distortions, but images taken in a natural scene are still challenging. In recent years, the topic of improving recognition rates in natural images has gained interest because more powerful handheld devices are being used. The main problems in recognition on natural images are distortions such as illumination, font textures, and complex backgrounds. Different preprocessing approaches to separating text from its background have been researched lately. In our study, we assess the improvement achieved by two of these preprocessing methods, k-means and Otsu, by comparing their results through an OCR algorithm. The study showed that the preprocessing brought some improvement in particular cases, but overall yielded worse accuracy than the unaltered images. / Today's optical character recognition (OCR) algorithms are capable of extracting text from images under predefined conditions. Modern methods achieve high accuracy for machine-written text with minimal distortions, but images taken in a natural scene remain hard to handle. In recent years, great interest in improving character recognition algorithms has arisen, as more powerful handheld devices are used. The main problem in recognition on natural images is distortions such as incident light, text texture and complicated backgrounds. Different methods for preprocessing, and thereby separating the text from its background, have been studied recently. In our study, we assess the improvement achieved by preprocessing with two methods, k-means and Otsu, by comparing the responses from an OCR algorithm.
The study shows that Otsu and k-means can improve accuracy under certain conditions but generally give worse results than the unaltered images.
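Otsu's method, one of the two preprocessing approaches evaluated, picks the gray-level threshold that maximizes the between-class variance of the resulting foreground/background split. A self-contained NumPy version, illustrative rather than the thesis's implementation:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0
    sum0 = 0.0
    for t in range(256):
        w0 += hist[t]                 # pixels at or below threshold t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: dark text pixels (~20) on a bright background (~200)
img = np.array([20] * 50 + [200] * 50).reshape(10, 10)
t = otsu_threshold(img)
```

On a cleanly bimodal histogram like this one the threshold lands between the modes; the thesis's finding is that on real natural scenes the bimodality assumption often fails, which is why the preprocessing can hurt overall accuracy.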
