  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
331

Computational Simulation of Southern Pine Lumber Using Finite Element Analysis

Li, Yali 06 August 2021 (has links)
Finite element analysis is a powerful technique for predicting the response of materials and structures under various loading conditions, including applied forces, changes in temperature and humidity, and altered boundary conditions. In this paper, the mechanical properties of wood were analyzed with an emphasis on bending behavior under a lateral applied force, using finite element simulation in ABAQUS (Dassault Systèmes, 2020 version). Two methods were implemented in the ABAQUS commercial software, and the modulus of elasticity (MOE) obtained from the computational results was compared with data from the experimental records. The simulation model that took grain patterns into consideration showed more accurate behavior when compared with the displacement from the third-point bending test within the elastic range. Machine learning methods are widely applied to image processing tasks such as digit recognition. This paper developed a Python script to process wood cross-section images against an environmental background and to calculate the latewood proportion based on unsupervised machine learning concepts. The GrabCut function and Gray Level Co-occurrence Matrix (GLCM) image processing were used to obtain the wood section and the wood texture features, respectively. The K-means method was used to cluster the latewood and earlywood material based on the mean value from the GLCM matrix, after which the script calculated the latewood ratio with a simple equation. The latewood ratios from the Python script were compared with ratios from the dot-grid method. Statistical models in SPSS version 27 (IBM, Chicago, IL) were used to quantitatively model the relationships between several parameters.
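The GLCM-plus-K-means pipeline described above can be sketched in a few lines of numpy. This is an illustration of the general idea only, not the thesis's script: the toy image, quantization level, and the use of the GLCM mean gray level as the clustering feature are all assumptions made for the sketch.

```python
import numpy as np

def glcm(patch, levels=8, dx=1):
    """Gray-level co-occurrence matrix for a horizontal pixel offset dx."""
    q = (patch.astype(float) / 256 * levels).astype(int)  # quantize intensities
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-dx].ravel(), q[:, dx:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; label 1 is assigned to the higher-mean cluster."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                c[k] = values[labels == k].mean()
    return labels if c[1] > c[0] else 1 - labels

# Toy cross-section: a dark "latewood" band amid lighter "earlywood"
img = np.full((32, 32), 200, dtype=np.uint8)
img[8:16, :] = 60

# Feature per 8x8 patch: mean gray level computed from the GLCM, then cluster
feats = []
for r in range(0, 32, 8):
    for col in range(0, 32, 8):
        g = glcm(img[r:r + 8, col:col + 8])
        lv = np.arange(g.shape[0])
        feats.append((g * lv[:, None]).sum())  # GLCM row-marginal mean
labels = kmeans_1d(np.array(feats))
latewood_ratio = (labels == 0).mean()  # cluster 0 = darker patches
print(round(latewood_ratio, 2))  # -> 0.25
```

In practice the thesis additionally uses GrabCut to isolate the wood section from the background before texture analysis; that step is omitted here to keep the sketch dependency-free.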
Since the density, latewood ratio, and number of rings per inch are strongly correlated with each other, this paper proposed a ridge regression statistical model to study the relationship of MOE and modulus of rupture (MOR) with multiple independent variables. Ridge regression, also known as the Tikhonov regularization method, aims to solve the collinearity problems that can lead to statistical bias in stepwise regression analysis.
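The ridge estimator the abstract refers to has a simple closed form, (XᵀX + λI)⁻¹Xᵀy, which stays well-conditioned even when predictors are collinear. A minimal numpy sketch follows; the predictor names and synthetic data are illustrative assumptions, not the thesis's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Deliberately collinear predictors, mimicking density / latewood / ring count
density = rng.normal(0.55, 0.05, n)
latewood = 0.8 * density + rng.normal(0, 0.01, n)
rings = 40 * density + rng.normal(0, 0.5, n)
X = np.column_stack([density, latewood, rings])
y = 20000 * density + 5000 * latewood + rng.normal(0, 100, n)  # synthetic "MOE"

# Standardize, then solve the regularized normal equations
Xs = (X - X.mean(0)) / X.std(0)
ys = y - y.mean()
lam = 1.0  # ridge penalty; larger values shrink coefficients harder
b = np.linalg.solve(Xs.T @ Xs + lam * np.eye(X.shape[1]), Xs.T @ ys)
```

The λI term bounds the smallest eigenvalue of the matrix being inverted away from zero, which is exactly what protects the estimate from the collinearity-driven variance blow-up that plagues ordinary stepwise regression.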
332

Wildfire Modeling with Data Assimilation

Johnston, Andrew 14 December 2022 (has links)
Wildfire modeling is a complex, computationally costly endeavor, but with droughts worsening and fires burning across the western United States, obtaining accurate wildfire predictions is more important than ever. In this paper, we present a novel approach to wildfire modeling using data assimilation. We model wildfire spread with a modification of the partial differential equation model described by Mandel et al. in their 2008 paper. Specifically, we replace some constant parameter values with geospatial functions of fuel type. We combine deep learning and remote sensing to obtain real-time data for the model and employ the Nelder-Mead method to recover optimal model parameters with data assimilation. We demonstrate the efficacy of this approach on computer-generated fires, as well as real fire data from the 2021 Dixie Fire in California. On generated fires, this approach resulted in an average Jaccard index of 0.996 between the predicted and actual fire perimeters and an average Kulczynski measure of 0.997. On data from the Dixie Fire, the average Jaccard index achieved was 0.48, and the average Kulczynski measure was 0.66.
333

Image Segmentation and Object Identification in Cancer Tissue Slides from Fluorescence Microscopy

Eriksson, Sebastian, Forsberg, Fredrik January 2023 (has links)
In cancer research, there is a need to make accurate spatial measurements in multi-layered fluorescence microscopy images. Researchers would like to measure distances in and between biological objects such as nerves and tumours, to investigate questions such as whether nerve distribution in and around tumours has prognostic value in cancer diagnostics. This thesis is split into two parts. The first: given arbitrary fluorescent images of cancer tissue samples, investigate the feasibility of automatically identifying nerves, tumours and blood vessels using classical image analysis. The second: given an image with identified objects, quantify their spatial data. By analysing 58 different cancer tissue samples, we found that a modified Otsu method gives the most promising results for image segmentation. We found that non-verifiable objects and verifiable objects share the same pixel-intensity distributions, which implies that it is in general not possible to separate them using thresholding methods alone. For the spatial analysis, two measurement methods were introduced: an object-based method that provides measurements from nerve edges to tumour edges, and a pixel-based method that provides fraction-based measurements comparable between different tissue samples.
334

Three-Dimensional Fluorescence Microscopy Image Synthesis and Analysis Using Machine Learning

Liming Wu (6622538) 07 February 2023 (has links)
Recent advances in fluorescence microscopy enable deeper cellular imaging in living tissues with near-infrared excitation light. High-quality fluorescence microscopy images provide useful information for analyzing biological structures and diagnosing diseases. Nuclei detection and segmentation are two fundamental steps in the quantitative analysis of microscopy images. However, existing machine learning-based approaches are hampered by three main challenges: (1) hand-annotated ground truth is difficult to obtain, especially for 3D volumes; (2) most object detection methods work only on 2D images and are difficult to extend to 3D volumes; (3) segmentation-based approaches typically cannot distinguish different object instances without proper post-processing steps. In this thesis, we propose several new methods for microscopy image analysis, including nuclei synthesis, detection, and segmentation. Due to the limited availability of manually annotated ground truth masks, we first describe how we generate 2D/3D synthetic microscopy images using SpCycleGAN and use them as a data augmentation technique for our detection and segmentation networks. For nuclei detection, we describe our RCNN-SliceNet for nuclei counting and centroid detection using a slice-and-cluster strategy. We then introduce our 3D CentroidNet for nuclei centroid estimation using a vector flow voting mechanism that does not require any post-processing steps. For nuclei segmentation, we first describe our EMR-CNN for nuclei instance segmentation using ensemble learning and a slice fusion strategy. We then present the 3D Nuclei Instance Segmentation Network (NISNet3D) for nuclei instance segmentation using a gradient vector field array. Extensive experiments were conducted on a variety of challenging microscopy volumes to demonstrate that our approach can accurately detect and segment cell nuclei, outperforming other compared methods. Finally, we describe the Distributed and Networked Analysis of Volumetric Image Data (DINAVID) system we developed for biologists to remotely analyze large microscopy volumes using machine learning.
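The general shape of a slice-and-cluster idea can be illustrated with a toy linker: 2D centroids detected independently on each z-slice are merged into 3D nuclei by chaining detections that lie close in (x, y) on consecutive slices. This is a hedged sketch of the concept only, not RCNN-SliceNet; the distance threshold and greedy nearest-neighbor linking are assumptions:

```python
import numpy as np

def link_slices(detections, max_dist=3.0):
    """detections: list over z of (N, 2) arrays of (x, y) centroids.
    Returns a list of 3D tracks, each a list of (z, x, y) tuples."""
    tracks = []          # all tracks ever started
    open_tracks = []     # tracks still extendable from the previous slice
    for z, pts in enumerate(detections):
        next_open = []
        used = set()
        for tr in open_tracks:
            _, px, py = tr[-1]
            if len(pts):
                d = np.hypot(pts[:, 0] - px, pts[:, 1] - py)
                j = int(d.argmin())
                if d[j] <= max_dist and j not in used:
                    tr.append((z, float(pts[j, 0]), float(pts[j, 1])))
                    used.add(j)
                    next_open.append(tr)
                    continue
            # no nearby detection: the track is closed on this slice
        for j, (x, y) in enumerate(pts):
            if j not in used:  # unmatched detection starts a new 3D nucleus
                tr = [(z, float(x), float(y))]
                tracks.append(tr)
                next_open.append(tr)
        open_tracks = next_open
    return tracks

# Two nuclei: one spanning slices 0-2 near (5, 5), one spanning slices 1-2 near (20, 20)
dets = [np.array([[5.0, 5.0]]),
        np.array([[5.5, 5.2], [20.0, 20.0]]),
        np.array([[5.4, 5.1], [20.3, 19.8]])]
tracks = link_slices(dets)
print(len(tracks))  # -> 2
```

Counting the resulting tracks gives a nuclei count, which is the quantity the abstract says RCNN-SliceNet estimates, however the real method clusters learned detections rather than greedily linking coordinates.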
335

CT-PET Image Fusion and PET Image Segmentation for Radiation Therapy

Zheng, Yiran January 2011 (has links)
No description available.
336

PARALLEL 3D IMAGE SEGMENTATION BY GPU-AMENABLE LEVEL SET SOLUTION

Hagan, Aaron M. 17 June 2009 (has links)
No description available.
337

Computerized 3D Modeling and Simulations of Patient-Specific Cardiac Anatomy from Segmented MRI

Ringenberg, Jordan January 2014 (has links)
No description available.
338

Thresholded K-means Algorithm for Image Segmentation

Girish, Deeptha S. January 2016 (has links)
No description available.
339

Design and Implementation of a Cost-Effective Sky Imager Station

Dehdari, Amirreza, Cazaubon, Tadj Anton January 2024 (has links)
Accurate and cost-effective weather prediction is crucial for various industries, yet current methods and tools are either expensive or lack real-time, local applicability. This thesis presents the development and evaluation of a cost-effective sky-imaging weather station designed to accurately track cloud cover using a combination of visual and environmental data. Our research focuses on constructing a system that utilises a single camera and image processing techniques for cloud separation. By employing colour-space filtering and modern image processing methods, we aim to enhance accuracy while minimising costs. The hardware design leverages consumer-grade components, reducing the unit cost to a fraction of existing solutions. The methodology involves an iterative design process, expert consultation, and rigorous testing to refine the prototype. We evaluate the system's performance by comparing sensor readings to METAR data and assessing accuracy. Additionally, we investigate the feasibility of using the Lifted Condensation Level as a substitute for Cloud Base Height. Our findings demonstrate that it is possible to create a sky-imaging weather station at a cost significantly lower than that of comparable products while achieving accurate cloud tracking and separation. This research contributes to the field by offering a practical, low-cost sky imager with potential applications in everyday weather preparedness, industrial forecasting, and solar energy management.
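Two pieces of the pipeline above lend themselves to short sketches: colour-space cloud separation and the Lifted Condensation Level estimate. Both implementations below are common textbook heuristics assumed for illustration (a red/blue ratio threshold for cloud pixels, and Espy's ~125 m-per-degree LCL approximation), not necessarily the thesis's exact formulas:

```python
import numpy as np

def cloud_mask(rgb, ratio_threshold=0.75):
    """Classify pixels as cloud where red/blue is high: clear sky scatters
    blue strongly, while cloud droplets scatter all wavelengths (near-white)."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6  # avoid division by zero
    return (r / b) > ratio_threshold

def lcl_height_m(temp_c, dewpoint_c):
    """Espy's approximation: LCL rises ~125 m per degree of dewpoint spread."""
    return 125.0 * (temp_c - dewpoint_c)

# Toy frame: left half deep-blue sky, right half neutral-grey cloud
frame = np.zeros((4, 8, 3), np.uint8)
frame[:, :4] = (40, 80, 200)    # sky: blue-dominant
frame[:, 4:] = (180, 180, 185)  # cloud: near-neutral
cloud_cover = cloud_mask(frame).mean()
print(cloud_cover)             # -> 0.5
print(lcl_height_m(20, 12))    # -> 1000.0
```

A fixed ratio threshold is the main cost saver and the main weakness: thin cirrus and pixels near the sun violate the blue-sky assumption, which is why the thesis combines the visual channel with environmental sensor data.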
340

Development of methods for content-based image retrieval in representations of fuzzily bounded objects

Καρτσακάλης, Κωνσταντίνος 11 March 2014 (has links)
Image data acquired through the use of bio-medical scanners are by nature fuzzy, owing to a series of factors including limitations in spatial, temporal and parametric resolution, as well as the physical limitations of the device.
When the object of interest in such an image displays intensity patterns distinct from those of the other objects present, segmenting the image in a hard, binary manner that clearly defines the borders between objects is feasible. Frequently, though, factors such as inhomogeneity of the depicted materials, blurring, noise, or background deviations complicate this process, and intensity values appear in a fuzzy, gradient, "non-binary" manner. An innovative trend in the field is to exploit the fuzzy composition of the objects in such an image, so that fuzziness becomes a characteristic feature of the object instead of an undesirable trait: drawing on the theory of fuzzy sets, such approaches segment an image in a gradient, non-binary manner, avoiding the imposition of a clear boundary between depicted objects. These approaches capture the fuzziness of a blurry image in mathematical terms, turning it into a powerful analysis tool in the hands of an expert. On the other hand, the scale of fuzziness observed in such images often leads experts to different or even contradictory segmentations, even from the same human hand. As a result, image databases are compiled that store multiple segmentations, both binary and fuzzy, for each image.
Can we, given a segmentation of an image, retrieve other similar images whose segmented data were produced by experts, without at any step downgrading the fuzzy nature of the depicted objects? How is retrieval performed against a database that stores multiple such segmentations for each image? Is the frequency with which an expert would place a pixel inside or outside such a blurry object a criterion of similarity between images? Finally, can we treat fuzziness probabilistically, providing a valuable tool for bridging the gap between automatic segmentation algorithms and segmentations produced by field experts? In this thesis, we address these questions by studying the retrieval process for such images in depth. We consider a database in which each image has more than one stored segmentation: crisp ones derived from experts' analyses, and fuzzy ones generated by automatic segmentation algorithms. By exploiting fuzziness, we seek to unify the retrieval process for both cases, approximating the frequency with which an expert would delineate a given fuzzy object in a particular way, together with the intrinsic features of a fuzzy, algorithm-generated object. We propose a suitable retrieval mechanism that handles the transition from the space of indecisiveness and fuzziness to a probabilistic representation, while preserving all the constraints imposed on the data by their original analysis. Finally, we evaluate the retrieval process by applying the new method to an existing data set and draw conclusions about its effectiveness.
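The unifying idea described above can be sketched concretely (the data and similarity choice are assumptions made for illustration): several crisp expert segmentations are averaged into a per-pixel inclusion-frequency map, which lives in the same [0, 1] space as a fuzzy membership map from an automatic algorithm, so the two representations can be compared directly, here with a fuzzy Jaccard similarity:

```python
import numpy as np

# Three hypothetical crisp expert segmentations of the same 2x3 image
expert_masks = np.array([
    [[0, 1, 1], [0, 1, 0]],
    [[0, 1, 1], [1, 1, 0]],
    [[0, 1, 0], [0, 1, 0]],
], dtype=float)
freq_map = expert_masks.mean(axis=0)  # fraction of experts including each pixel

# A hypothetical fuzzy membership map from an automatic algorithm
fuzzy_map = np.array([[0.1, 0.9, 0.7], [0.2, 1.0, 0.1]])

# Fuzzy Jaccard: min plays the role of intersection, max of union
inter = np.minimum(freq_map, fuzzy_map).sum()
union = np.maximum(freq_map, fuzzy_map).sum()
similarity = inter / union
```

Because both maps are probabilistic in the same sense, a single retrieval mechanism can rank database entries against a query regardless of whether each stored segmentation was crisp or fuzzy, which is the unification the abstract argues for.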
