51

Object Detection in Paddy Field for Robotic Combine Harvester Based on Semantic Segmentation

Zhu, Jiajun 25 September 2023 (has links)
Kyoto University / New-system doctoral course / Doctor of Agricultural Science / Kō No. 24913 / Nōhaku No. 2576 / 新制||農||1103 (University Library) / Division of Environmental Science and Technology, Graduate School of Agriculture, Kyoto University / (Chief examiner) Prof. Michihisa Iida, Prof. Naoshi Kondo, Prof. Ryozo Noguchi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
52

SAMPLS: A prompt engineering approach using Segment-Anything-Model for PLant Science research

Sivaramakrishnan, Upasana 30 May 2024 (has links)
Comparative anatomical studies of diverse plant species are vital for understanding changes in gene functions, such as those involved in solute transport and hormone signaling in plant roots. PlantSeg, the state-of-the-art method for confocal image analysis, uses U-Net for cell wall segmentation; U-Net is a neural network model that requires training on a large amount of manually labeled confocal images and lacks generalizability. In this research, we test a foundation model, the Segment Anything Model (SAM), to evaluate its zero-shot learning capability and whether prompt engineering can reduce the effort and time consumed in dataset annotation, facilitating a semi-automated training process. Our proposed method improved the detection rate of cells and reduced the error rate compared to state-of-the-art segmentation tools. We also estimated IoU scores between the proposed method and PlantSeg to reveal the trade-off between accuracy and detection rate for data of different quality. By addressing the challenges specific to confocal images, our approach offers a robust solution for studying plant structure. Our findings demonstrate the efficiency of SAM in confocal image segmentation, showcasing its adaptability and performance compared to existing tools. Overall, our research highlights the potential of foundation models like SAM in specialized domains and underscores the importance of tailored approaches for achieving accurate semantic segmentation in confocal imaging. / Master of Science / Studying the anatomy of different plant species is crucial for understanding how genes work, especially those related to moving substances and signaling in plant roots. Scientists often use advanced techniques like confocal microscopy to examine plant tissues in detail. Traditional techniques like PlantSeg for automatically segmenting plant cells require a lot of computational resources and manual effort in preparing the dataset and training the model. In this study, we develop a novel technique using the Segment-Anything-Model that can learn to identify cells without needing as much training data. We found that SAM performed better than other methods, detecting cells more accurately and making fewer mistakes. By comparing SAM with PlantSeg, we could see how well each works with different types of images. Our results show that SAM is a reliable option for studying plant structures using confocal imaging. This research highlights the importance of using tailored approaches like SAM to get accurate results from complex images, offering a promising solution for plant scientists.
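The abstract includes no code, but a minimal sketch of point-prompted SAM inference of the kind it describes is given below, using the publicly released segment_anything package. The checkpoint filename, placeholder image, and prompt coordinates are illustrative assumptions; this is not the thesis's actual pipeline.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (checkpoint filename is illustrative;
# a real run needs the released weights on disk).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Placeholder for an RGB confocal slice as an HxWx3 uint8 array.
image = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# One foreground point prompt on a cell of interest (coordinates are
# illustrative); label 1 marks foreground, 0 would mark background.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring mask
```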
53

Automated Gland Detection in Colorectal Histopathological Images

Al Zorgani, Maisun M., Mehmood, Irfan, Ugail, Hassan 25 March 2022 (has links)
No / Clinical morphological analysis of histopathological specimens is a well-established approach for diagnosing benign and malignant diseases. Analysis of glandular architecture is a major challenge for colon histopathologists because morphological structures in malignant glandular tumours are hard to identify: gland boundaries are distorted, and the appearance of stained specimens varies. Despite these challenges, several deep learning methods have shown encouraging performance in the automatic segmentation of glands for reliable analysis of colon specimens. In the histopathology field, the major obstacle is the vast number of annotated images required to train deep learning algorithms. In this work, we propose an end-to-end trainable Convolutional Neural Network (CNN) for detecting glands automatically. More specifically, a Modified Res-U-Net is employed to segment colorectal glands in Haematoxylin and Eosin (H&E) stained images from the challenging Gland Segmentation (GlaS) dataset. The proposed Res-U-Net outperformed prior methods that utilise the U-Net architecture on the images of the GlaS dataset.
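The exact Modified Res-U-Net is not specified in the abstract; as a hedged illustration, a residual encoder block of the kind such architectures substitute for U-Net's plain double-convolution blocks might look like this PyTorch sketch (channel sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Double 3x3 convolution with an identity/projection shortcut,
    as used in residual variants of U-Net encoders."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection when the channel count changes.
        self.shortcut = (
            nn.Identity() if in_ch == out_ch
            else nn.Conv2d(in_ch, out_ch, kernel_size=1)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.shortcut(x))

block = ResidualBlock(3, 64)  # e.g. a first encoder stage on H&E RGB input
out = block(torch.randn(1, 3, 256, 256))  # -> shape (1, 64, 256, 256)
```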
54

Using Visual Abstractions to Improve Spatially Aware Nominal Safety in Autonomous Vehicles

Modak, Varun Nimish 05 June 2024 (has links)
As autonomous vehicles (AVs) evolve, ensuring their safety extends beyond traditional metrics. While current nominal safety scores focus on the timeliness of AV responses, such as latency or instantaneous response time, this paper proposes expanding the concept to include the spatial configurations that obstacles form with respect to the ego-vehicle. By analyzing these spatial relationships, including proximity, density and arrangement, this research aims to demonstrate how these factors influence the safety force field around the AV. The goal is to show that beyond meeting Responsibility-Sensitive Safety (RSS) metrics, spatial configurations significantly impact the safety force field, particularly affecting path planning capability. High spatial occupancy of obstacle configurations can impede easy maneuverability, thus challenging safety-critical modules like path planning. This paper aims to capture this by proposing a safety score that leverages the ability of modern computer vision techniques, particularly image segmentation models, to capture high and low levels of spatial and contextual information. By enhancing the scope of nominal safety to include such spatial analysis, this research aims to broaden the understanding of drivable space and enable AV designers to evaluate path planning algorithms based on spatial-configuration-centric safety levels. / Master of Science / As self-driving cars become more common, ensuring their safety is crucial. While current safety measures focus on how quickly these cars can react to dangers, this paper suggests that understanding the spatial relationships between the car and obstacles is just as important and needs to be explored further. Prior metrics use the velocity and acceleration of all the actors to determine the safe distance of obstacles from the vehicle and how fast the car should react before a predicted collision. This paper extends how safety is viewed during normal operating conditions of the vehicle by treating the arrangement of obstacles around it as a factor influencing safety. By using advanced computer vision techniques, particularly models that can understand images in detail, this research proposes a new spatial safety metric. This score considers how well the car navigates through dense environments by understanding the spatial configurations that obstacles form. By studying these factors, I wish to introduce a metric that improves how self-driving cars are designed to navigate and path-plan safely on the roads.
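Purely as a hypothetical illustration of a proximity-weighted spatial-occupancy score (the thesis defines its own metric, which is not reproduced here), one could weight segmented obstacle pixels by their distance to the ego-vehicle; the falloff constant and mask are illustrative:

```python
import numpy as np

def spatial_occupancy_score(obstacle_mask: np.ndarray,
                            ego_rc: tuple[int, int],
                            falloff: float = 50.0) -> float:
    """Toy proximity-weighted occupancy: obstacle pixels close to the
    ego position (row, col) contribute more than distant ones.
    Returns a value in [0, 1]; higher means less free maneuvering space."""
    rows, cols = np.indices(obstacle_mask.shape)
    dist = np.hypot(rows - ego_rc[0], cols - ego_rc[1])
    weights = np.exp(-dist / falloff)          # nearby pixels dominate
    occupied = weights[obstacle_mask.astype(bool)].sum()
    return float(occupied / weights.sum())     # normalise by total weight

# e.g. a bird's-eye segmentation mask with the ego-vehicle at the centre
mask = np.zeros((200, 200), dtype=bool)
mask[80:100, 120:140] = True                   # one nearby obstacle
print(spatial_occupancy_score(mask, ego_rc=(100, 100)))
```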
55

Artificial Interpretation: An investigation into the feasibility of archaeologically focused seismic interpretation via machine learning

Fraser, Andrew I., Landauer, J., Gaffney, Vincent, Zieschang, E. 31 July 2024 (has links)
Yes / The value of artificial intelligence and machine learning applications in heritage research is increasingly appreciated. In specific areas, notably remote sensing, datasets have increased in extent and resolution to the point that manual interpretation is problematic, and the availability of skilled interpreters to undertake such work is limited. Interpretation of the geophysical datasets associated with prehistoric submerged landscapes is particularly challenging. Following the Last Glacial Maximum, sea levels rose by 120 m globally, and vast, habitable landscapes were lost to the sea. These landscapes were inaccessible until extensive remote sensing datasets were provided by the offshore energy sector. In this paper, we provide the results of a research programme centred on AI applications using data from the southern North Sea. Here, an area of c. 188,000 km² of habitable terrestrial land was inundated between c. 20,000 BP and 7,000 BP, along with the cultural heritage it contained. As part of this project, machine learning tools were applied to detect and interpret features of potential archaeological significance from shallow seismic data. The output provides a proof-of-concept model demonstrating verifiable results and the potential for further, more complex leveraging of AI interpretation for the study of submarine palaeolandscapes.
56

Collaborative Unmanned Air and Ground Vehicle Perception for Scene Understanding, Planning and GPS-denied Localization

Christie, Gordon A. 05 January 2017 (has links)
Autonomous robot missions in unknown environments are challenging. In many cases, the systems involved are unable to use a priori information about the scene (e.g. road maps). This is especially true in disaster response scenarios, where existing maps may be out of date. Areas without GPS are another concern, especially when the involved systems are tasked with navigating a path planned by a remote base station. Scene understanding via robots' perception data (e.g. images) can greatly assist in overcoming these challenges. This dissertation makes three contributions that help overcome these challenges, with a focus on the application of autonomously searching for radiation sources with unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) in unknown and unstructured environments. The three main contributions of this dissertation are: (1) an approach to overcome the challenges associated with simultaneously trying to understand 2D and 3D information about the environment; (2) algorithms and experiments involving scene understanding for real-world autonomous search tasks, in which a UAV and a UGV search for potentially hazardous sources of radiation in an unknown environment; and (3) an approach to the registration of a UGV in areas without GPS using 2D image data and 3D data, where localization is performed in an overhead map generated from imagery captured in the air. / Ph. D.
57

Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data

He, Linbo January 2019 (has links)
Semantic segmentation is a key approach to comprehensive image data analysis. It can be applied to analyze 2D images, videos, and even point clouds that contain 3D data points. On the first two problems, CNNs have achieved remarkable progress, but on point cloud segmentation the results are less satisfactory, due to challenges such as limited memory resources and the difficulty of annotating 3D points. One of the research studies carried out by the Computer Vision Lab at Linköping University aimed to ease the semantic segmentation of 3D point clouds. The idea is that by first projecting the 3D data points to 2D space and then focusing only on the analysis of 2D images, we can reduce the overall workload of the segmentation process as well as exploit existing, well-developed 2D semantic segmentation techniques. In order to improve the performance of CNNs for 2D semantic segmentation, the study used input data derived from different modalities. However, how different modalities can be optimally fused is still an open question. Building on the above-mentioned study, this thesis aims to improve the multistream framework architecture. More concretely, we investigate how different singlestream architectures impact the multistream framework with a given fusion method, and how different fusion methods contribute to the overall performance of a given multistream framework. As a result, our proposed fusion architecture outperformed all the investigated traditional fusion methods. Along with the best singlestream candidate and a few additional training techniques, our final proposed multistream framework obtained a relative gain of 7.3% mIoU compared to the baseline on the Semantic3D point cloud test set, raising its ranking from 12th to 5th position on the benchmark leaderboard.
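A minimal sketch of the pinhole projection step that maps 3D points onto a 2D image plane, the first stage of the projection-based pipeline the thesis builds on; the intrinsic matrix below is an illustrative assumption, and the thesis's full framework additionally renders per-point modalities (e.g. colour, depth) into the projected views:

```python
import numpy as np

def project_points(points: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates
    with a pinhole intrinsic matrix K (points must have z > 0)."""
    uvw = (K @ points.T).T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.5, -0.2, 2.0],
                [1.0,  0.3, 4.0]])
print(project_points(pts, K))  # pixel locations of the two 3D points
```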
58

Semantic Segmentation of Iron Ore Pellets with Neural Networks

Svensson, Terese January 2019 (has links)
This master's thesis evaluates five existing Convolutional Neural Network (CNN) models for semantic segmentation of optical microscopy images of iron ore pellets. The models are PSPNet, FC-DenseNet, DeepLabv3+, BiSeNet and GCN. The dataset used for training and evaluation contains 180 microscopy images of iron ore pellets collected from LKAB's experimental blast furnace in Luleå, Sweden. This thesis also investigates the impact of dataset size and data augmentation on performance. The best-performing CNN model on the task was PSPNet, which had an average accuracy of 91.7% on the dataset. Simple data augmentation techniques, horizontal and vertical flipping, improved the models' accuracy by 3.4% on average. From the results in this thesis, it was concluded that there are benefits to using CNNs for the analysis of iron ore pellets, with time savings and improved analysis as the two notable gains.
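The reported flip augmentations are straightforward to reproduce; a sketch using torchvision is given below. For segmentation, the same random flip must be applied jointly to the image and its label mask, which is why the functional API is used rather than independent random transforms; the 0.5 probabilities are an assumption, not taken from the thesis.

```python
import random
import torchvision.transforms.functional as TF

def augment_pair(image, mask):
    """Apply the same random horizontal/vertical flip to a
    microscopy image and its segmentation mask (PIL or tensor)."""
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    if random.random() < 0.5:
        image, mask = TF.vflip(image), TF.vflip(mask)
    return image, mask

# usage: aug_img, aug_mask = augment_pair(img, mask)
```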
59

Discriminative hand-object pose estimation from depth images using convolutional neural networks

Goudie, Duncan January 2018 (has links)
This thesis investigates the task of estimating the pose of a hand interacting with an object from a depth image. The main contribution of this thesis is the development of our discriminative one-shot hand-object pose estimation system. To the best of our knowledge, this is the first attempt at a one-shot hand-object pose estimation system. It is a two-stage system consisting of convolutional neural networks. The first stage segments the object out of the hand from the depth image. This hand-minus-object depth image is combined with the original input depth image to form a 2-channel image for use in the second stage, pose estimation. We show that using this 2-channel image produces better pose estimation performance than a single-stage pose estimation system that takes just the input depth map as input. We also believe that we are amongst the first to research hand-object segmentation. We use fully convolutional neural networks to perform hand-object segmentation from a depth image, and we show that this approach is superior to random decision forests for this task. Datasets were created to train our hand-object pose estimation stage and hand-object segmentation stage. The hand-object pose labels were estimated semi-automatically with a combined manual annotation and generative approach. The segmentation labels were inferred automatically with colour thresholding. To the best of our knowledge, there were no public datasets for these two tasks when we were developing our system. These datasets have been, or are in the process of being, publicly released.
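A sketch of how the 2-channel input described above might be assembled, assuming the first stage has already produced a hand-minus-object depth map; the array contents here are placeholders, not the thesis's data:

```python
import numpy as np

# depth: HxW input depth image; hand_only_depth: the same frame with the
# object segmented out by the first-stage network (placeholders below).
depth = np.random.rand(240, 320).astype(np.float32)
hand_only_depth = np.where(np.random.rand(240, 320) > 0.3,
                           depth, 0.0).astype(np.float32)

# Stack into a 2-channel image (C, H, W) for the second-stage pose CNN.
two_channel = np.stack([depth, hand_only_depth], axis=0)
assert two_channel.shape == (2, 240, 320)
```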
60

Localisation of a vehicle through low-cost sensors and geographic information systems fusion

Salehi, Achkan 11 April 2018 (has links)
The fusion between sensors and databases whose errors are independent is the most reliable, and therefore most widespread, state-of-the-art solution to the localization problem. Current autonomous and semi-autonomous vehicles, as well as augmented reality applications targeting industrial contexts, exploit large sensor and database graphs that are difficult and expensive to design, calibrate and synchronize. Thus, the democratization of these technologies requires exploring the possibility of exploiting low-cost and easily accessible sensors and databases. These information sources are naturally tainted by higher uncertainty levels, and many obstacles to their effective and efficient practical use persist. Moreover, the recent but dazzling successes of deep neural networks in various tasks suggest that they could be a viable and low-cost alternative to some components of current SLAM systems. In this thesis, we focus on the large-scale localization of a vehicle in a georeferenced coordinate frame from a low-cost system, based on the fusion between a monocular video stream, 3D non-textured but georeferenced building models, terrain elevation models, and data either from a low-cost GPS or from vehicle odometry. Our work targets the resolution of two problems. The first arises in the fusion, via barrier-term optimization, of VSLAM and positioning measurements provided by a low-cost GPS. This fusion method is, to the best of our knowledge, the most robust against GPS uncertainties, but it is more demanding in terms of computational resources than fusion via linear cost functions. We propose an algorithmic optimization of that approach based on the definition of a novel barrier term. The second problem is the data association problem between the primitives that represent the geometry of the scene (e.g. 3D points) and the 3D building models. Previous works in this area use simple geometric criteria and are therefore very sensitive to occlusions in urban environments. We exploit deep convolutional neural networks to identify and associate the map elements that correspond to 3D building model façades. Although our contributions are for the most part independent of the underlying SLAM system, we based our experiments on constrained key-frame-based bundle adjustment. The solutions that we propose are evaluated on synthetic sequences as well as on real urban sequences over distances of several kilometres. These experiments show important performance gains for VSLAM/GPS fusion, and considerable improvements in the robustness of building constraints to occlusions.
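The thesis defines its barrier term over the full constrained bundle-adjustment problem; as a heavily simplified sketch of the general idea, a log-barrier cost that keeps an estimated camera position inside a GPS confidence sphere (and steepens near its boundary) could look like the following — the radius and positions are illustrative, and this is a toy stand-in, not the thesis's actual term:

```python
import numpy as np

def gps_log_barrier(cam_pos: np.ndarray, gps_pos: np.ndarray,
                    radius: float) -> float:
    """Log-barrier penalty that grows without bound as the estimated
    camera position approaches the edge of the GPS confidence sphere
    (and is infinite outside it)."""
    d2 = float(np.sum((cam_pos - gps_pos) ** 2))
    if d2 >= radius ** 2:
        return np.inf                    # constraint violated
    return -np.log(radius ** 2 - d2)     # steepens near the boundary

print(gps_log_barrier(np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 0.0, 0.0]), radius=5.0))
```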
