  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

LiDAR Point Cloud Transfer and Rendering for Simulation Purposes

Danielsson, Magnus January 2022
Digital twins in manufacturing, logistics, retail, and healthcare can help companies make business decisions by simulating changes prior to implementing them in real life. In robotic teleoperation, virtual reality technology such as head-mounted displays can increase operator performance. In the mining equipment industry, teleoperation is a well-established concept, typically using a video feed for visualization and often the same or similar control panels as on the real machine. However, cameras do not provide depth perception for the operator, and the lighting conditions in a mine may make photogrammetry a less than ideal solution. Epiroc is currently working on digital twin simulation software in Unity, which could be extended for teleoperation purposes. As a complement to this software, a fast, high-definition Ouster OS0-128 LiDAR was used to render a point cloud of a physical environment. A Unity GameObject script was written in C# that receives and renders coordinates as a point cloud. Two Python scripts were written to convert the LiDAR data to coordinates using the Ouster SDK and then send these coordinates over a TCP connection, either on the same machine or over Wi-Fi. The two Python scripts used different data formats, and the performance difference between the formats was compared. The results indicated that Wi-Fi transfer of LiDAR data could be a viable solution for continuously scanning the surroundings of teleoperated equipment with quite low delay and latency.
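The transfer scheme described above — converting LiDAR returns to coordinates and streaming them over TCP — hinges on a binary framing format for the points. The sketch below shows one hypothetical length-prefixed format (the thesis compares two formats that are not specified here; the field layout is an illustrative assumption):

```python
import struct

# Each point is (x, y, z) as three little-endian float32 values;
# a frame is a 4-byte point count followed by the packed points.
def pack_frame(points):
    payload = b"".join(struct.pack("<3f", *p) for p in points)
    return struct.pack("<I", len(points)) + payload

def unpack_frame(data):
    (n,) = struct.unpack_from("<I", data, 0)
    return [struct.unpack_from("<3f", data, 4 + 12 * i) for i in range(n)]

# Round trip: 2 points -> 4 + 2 * 12 = 28 bytes on the wire
frame = pack_frame([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
points = unpack_frame(frame)
```

A length prefix like this lets the Unity-side receiver know exactly how many 12-byte points to read from the TCP stream before rendering the next frame.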
82

3D Shape Detection for Augmented Reality / 3D form-detektion för förstärkt verklighet

Anadon Leon, Hector January 2018
In previous work, 2D object recognition has shown exceptional results. However, it cannot sense the spatial information of the environment: where the objects are and what they are. This knowledge could bring improvements to several fields, such as Augmented Reality, by allowing virtual characters to interact more realistically with the environment, and autonomous cars, by enabling better decisions based on where objects are in 3D space. The proposed work shows that it is possible to predict 3D bounding boxes with semantic labels for 3D object detection, and a set of primitives for 3D shape recognition, for multiple objects in an indoor scene using an algorithm that takes as input an RGB image and its 3D information. It uses deep neural networks with novel architectures for point cloud feature extraction, and a single feature vector representing the latent space of the object, modelling its shape, position, size, and orientation, for multi-task prediction trained end-to-end on unbalanced datasets. It runs in real time (5 frames per second) on a live video feed. The method is evaluated on the NYU Depth Dataset V2 using Average Precision for object detection, and 3D Intersection over Union and surface-to-surface distance for 3D shape. The results confirm that a shared feature vector can be used for more than one prediction task and that the model generalizes to objects unseen during training, achieving state-of-the-art results for 3D object detection and 3D shape prediction on the NYU Depth Dataset V2. Qualitative results on specifically captured real-world data show that navigation in a real-world indoor environment is possible and that collisions between animations and detected objects can be handled, improving character-environment interaction in Augmented Reality applications.
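The 3D Intersection over Union metric used for evaluation above can be illustrated for the axis-aligned case (a simplification for the sketch; the thesis's boxes may be oriented, which requires more work):

```python
def volume(box):
    """Box given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    return (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])

def iou_3d(a, b):
    """Intersection over Union of two axis-aligned 3D boxes."""
    overlap = 1.0
    for i in range(3):  # overlap length along x, y, z
        side = min(a[i + 3], b[i + 3]) - max(a[i], b[i])
        if side <= 0:
            return 0.0  # boxes are disjoint along this axis
        overlap *= side
    return overlap / (volume(a) + volume(b) - overlap)

# Unit cube vs. the same cube shifted 0.5 along x: overlap 0.5, union 1.5
score = iou_3d((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1))
```

The same quantity generalizes to oriented boxes by intersecting the two convex volumes instead of per-axis intervals.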
83

Extracting masts of overhead supply and street lights from point cloud

Zhu, Yi January 2019
Regular inspection and documentation of railway assets are necessary to monitor the status of the traffic environment. Mobile Laser Scanning (MLS) makes it possible to collect highly accurate spatial information about railway environments in the form of point clouds, and an automatic method to extract objects of interest from the point cloud is needed to avoid excessive manual work. In this project, a point cloud along a railway in Saltsjöbanan was collected by MLS and processed to extract objects of interest. The main purpose of the project is to develop a workflow for the automatic extraction of masts of the overhead supply and street lights from the study area. Researchers have recently proposed various methods for object extraction, such as model-based, shape-based, semantic, and machine learning methods. Different methods were reviewed, and Support Vector Machine (SVM) was chosen for the classification. Several software packages were reviewed as well: TerraScan and CloudCompare were chosen for pre-processing, and the major part was done in MATLAB. The proposed method consists of four steps: pre-processing, voxelization and segmentation, feature computation, and classification and validation. The method computes features describing every object segmented from the point cloud and learns from manually classified objects to train a classifier. The study area was divided into training data and validation data. The SVM classifier was trained on the training data and evaluated on the validation data. In the classification, 90.84% of the masts and 67.65% of the lights were correctly classified. Some objects were lost during pre-processing and segmentation; when this loss is included, 87.5% of the masts and 53.49% of the lights were successfully detected. The street lights have a more varied appearance and a more complicated surrounding environment, which caused the relatively low accuracy.
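As a rough illustration of the classification step above, a linear SVM can separate two segment classes from simple per-segment features. The feature values below are invented for the sketch (the thesis's actual feature set and data are its own):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-segment features: [height (m), horizontal extent (m)]
masts  = rng.normal([7.0, 0.4], 0.3, size=(40, 2))   # tall, slender objects
lights = rng.normal([4.0, 1.5], 0.3, size=(40, 2))   # shorter, with a wider arm
X = np.vstack([masts, lights])
y = np.array([0] * 40 + [1] * 40)                    # 0 = mast, 1 = street light

clf = SVC(kernel="linear").fit(X, y)
train_acc = clf.score(X, y)  # well-separated toy classes -> perfect fit
```

In practice the features would be the geometric descriptors computed per segmented object, and accuracy would be reported on held-out validation segments, as in the thesis.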
84

Bradford Multi-Modal Gait Database: Gateway to Using Static Measurements to Create a Dynamic Gait Signature

Alawar, Hamad M.M.A., Ugail, Hassan, Kamala, Mumtaz A., Connah, David 25 November 2014
Aims: To create a gait database with optimum accuracy of joint rotational data and an accurate representation of 3D volume, and to explore the potential of using the database to study the relationship between static and dynamic features of a human's gait. Study Design: The study collected gait samples from 38 subjects, who were asked to walk, run, transition from walk to run, and walk with a bag. The motion capture, video, and 3D measurement data extracted were used to analyse and build correlations between features. Place and Duration of Study: The study was conducted at the University of Bradford. With ethical approval from the University, 38 subjects' motion and body volumes were recorded at the motion capture studio from May 2011 to February 2013. Methodology: To date, the database includes 38 subjects (5 females, 33 males) conducting walk cycles with speed and load as covariates. A correlation analysis was conducted to explore the potential of using the database to study the relationship between static and dynamic features. The volumes and surface areas of body segments were used as static features. Phase-weighted magnitudes, extracted through a Fourier transform of the temporal joint-rotation data from the motion capture, were used as dynamic features. The Pearson correlation coefficient was used to evaluate the relationship between the two sets of data. Results: A new database was created with 38 subjects conducting four forms of gait (walk, run, walk to run, and walking with a hand bag). Each subject's recording included a total of 8 samples of each form of gait and a 3D point cloud (representing the 3D volume of the subject). Using a P value (P < .05) as the criterion for statistical significance, 386 pairs of features displayed a strong relationship. Conclusion: A novel database available to the scientific community has been created.
The database can be used as an ideal benchmark to apply gait recognition techniques, and based on the correlation analysis, can offer a detailed perspective of the dynamics of gait and its relationship to volume. Further research in the relationship between static and dynamic features can contribute to the field of biomechanical analysis, use of biometrics in forensic applications, and 3D virtual walk simulation.
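The Pearson correlation coefficient used above to relate static features (segment volumes) to dynamic features (phase-weighted Fourier magnitudes) is straightforward to compute; a self-contained sketch:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear -> +1
r_neg = pearson_r([1, 2, 3], [3, 2, 1])         # perfectly inverse -> -1
```

Each of the 386 significant pairs reported above corresponds to one such coefficient, computed between a static measurement and a dynamic gait feature, passing the P < .05 criterion.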
85

Enablement of digital twins for railway overhead catenary system

Patwardhan, Amit January 2022
Railway has the potential to become one of the most sustainable modes of passenger and freight transport. This is possible through continuous updates to the asset management regime supporting Prognostics and Health Management (PHM). Railway tracks and catenaries are linear assets, and their length plays a vital role in maintenance. The railway catenary does not fail as often as the rail track, but the failures that do occur leave little opportunity for quick recovery and cause extensive delays that disrupt railway operations. Such situations can be handled better by updating the maintenance approach. The domain of maintenance explores tools, techniques, and technologies to retain and restore systems. PHM depends on data acquisition and analytics to predict the future state of a system with the least possible divergence. For railway catenary, as in many other domains, a key new data acquisition technology is spatial point cloud collection with Light Detection and Ranging (LiDAR) devices. Current catenary inspection relies on contact-based methods that read signals from the pantograph and contact wire while ignoring the remaining wires and the surroundings. Locomotive-mounted LiDAR devices support the collection of spatial data, in the form of point clouds, from all the surrounding equipment and environment. This point cloud data holds a large amount of information, waiting for algorithms and technologies to harness it. A Digital Twin (DT) is a virtual representation of a physical system or process, achieved through models and simulations, that maintains bidirectional communication for progressive enrichment at both ends. A system's digital twin is virtually exposed to all the same conditions. Such a digital twin can be used to provide prognostics by varying factors such as time, malfunctions in components of the system, and the conditions in which the system operates.
Railways form a multi-stakeholder domain that depends on many organisations to function smoothly. The development of digital twins depends on an understanding of the system, the availability of sensors to read its state, and actuators to affect its state. Enabling a digital twin depends on governance restrictions, business requirements, and technological competence. A concrete step towards enablement of the digital twin is designing an architecture that accommodates the technical requirements of content management, processing, and infrastructure while addressing the governance and business aspects of railway operations. The main objective of this work is to develop and provide an architecture and a platform for the enablement of a DT solution based on Artificial Intelligence (AI) and digital technologies aimed at PHM of the railway catenary system. The main results of this thesis are i) an analysis of content management and processing requirements for the railway overhead catenary system, ii) a methodology for catenary point cloud data processing and information representation, iii) architecture and infrastructure requirements for the enablement of a Digital Twin, and iv) a roadmap for digital twin enablement for PHM of the railway overhead catenary system.
86

Traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data

Foroutan, Morteza 25 November 2020
Scene perception and traversability analysis are real challenges for autonomous driving systems. In the context of off-road autonomy, there are additional challenges due to the unstructured environments and the existence of various vegetation types. Autonomous Ground Vehicles (AGVs) must be able to identify obstacles and load-bearing surfaces in the terrain to ensure safe navigation (McDaniel et al. 2012). The presence of vegetation in off-road autonomy applications presents unique challenges for scene understanding: 1) understory vegetation makes it difficult to detect obstacles or to identify load-bearing surfaces; and 2) trees are usually regarded as obstacles even though only their trunks pose a collision risk in navigation. The overarching goal of this dissertation was to study traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data. More specifically, to address the aforementioned challenges, this dissertation studied the impact of understory vegetation density on the solid-obstacle detection performance of off-road autonomous systems. By leveraging a physics-based autonomous driving simulator, a classification-based machine learning framework was proposed for obstacle detection based on point cloud data captured by LIDAR. Features were extracted using a cumulative approach, meaning that the information related to each feature was updated at each timeframe as new data was collected by LIDAR. It was concluded that an increase in the density of understory vegetation adversely affected the classification performance in correctly detecting solid obstacles. Additionally, a regression-based framework was proposed for estimating the understory vegetation density for safe path-planning purposes, according to which the traversability risk level was regarded as a function of the estimated density.
Thus, the higher the predicted density of an area, the higher the risk of collision if the AGV traversed that area. Finally, for the trees in the terrain, the dissertation investigated statistical features that can be used in machine learning algorithms to differentiate trees from solid obstacles in forested off-road scenes. Using the proposed extracted features, the classification algorithm was able to generate high-precision results for differentiating trees from solid obstacles. Such differentiation can result in more optimized path planning in off-road applications.
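The cumulative feature extraction idea described above — statistics updated at each timeframe as new LIDAR returns arrive — can be sketched as running per-cell aggregates. The grid cell size and the particular statistics here are illustrative assumptions, not the dissertation's feature set:

```python
from collections import defaultdict

class CellStats:
    """Running statistics for one terrain grid cell, updated per scan."""
    def __init__(self):
        self.count = 0
        self.max_z = float("-inf")
        self.mean_z = 0.0

    def update(self, z):
        self.count += 1
        self.max_z = max(self.max_z, z)
        self.mean_z += (z - self.mean_z) / self.count  # incremental running mean

grid = defaultdict(CellStats)

def add_scan(points, cell=1.0):
    """Fold one LIDAR timeframe into the cumulative per-cell features."""
    for x, y, z in points:
        grid[(int(x // cell), int(y // cell))].update(z)

add_scan([(0.2, 0.3, 0.0), (0.4, 0.1, 2.0)])  # first timeframe
add_scan([(0.5, 0.5, 4.0)])                    # later timeframe, same cell
stats = grid[(0, 0)]
```

Accumulating this way avoids re-processing the full point history each timeframe, which matters when the classifier runs online aboard the vehicle.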
87

A requirements engineering approach in the development of an AI-based classification system for road markings in autonomous driving : a case study

Sunkara, Srija January 2023
Background: Requirements engineering (RE) is the process of identifying, defining, documenting, and validating requirements. However, RE approaches are usually not applied to AI-based systems because of their ambiguity, and RE for AI is still a growing subject. Research also shows that the quality of ML-based systems suffers from the lack of a structured RE process. Hence, there is a need to apply RE techniques in the development of ML-based systems.  Objectives: This research aims to identify the practices and challenges concerning RE techniques for AI-based systems in autonomous driving and then to identify a suitable RE approach to overcome the identified challenges. Further, the thesis aims to check the feasibility of the selected RE approach in developing a prototype AI-based classification system for road markings.  Methods: A combination of research methods was used: interviews, a case study, and a rapid literature review. The case company is Scania CV AB. A literature review was conducted to identify possible RE approaches that can overcome the challenges identified through interviews and discussions with the stakeholders. A suitable RE approach, GR4ML, was found and used to develop and validate an AI-based classification system for road markings.  Results: The results indicate that RE is a challenging subject in autonomous driving. Several challenges are faced at the case company in eliciting, specifying, and validating requirements for AI-based systems, especially in autonomous driving. The results also show that the views in the GR4ML framework were suitable for the specification of system requirements and addressed most challenges identified at the case company. The iterative goal-oriented approach maintained flexibility during development. Through the system's development, it was identified that the Random Forest classifier outperformed Logistic Regression and the Support Vector Machine for road-marking classification.
Conclusions: The validation of the system suggests that the goal-oriented requirements engineering approach and the GR4ML framework addressed most challenges identified in eliciting, specifying, and validating requirements for AI-based systems at the case company. The views in the GR4ML framework provide a good overview of the functional and non-functional requirements of the lower-level systems in autonomous driving. However, the GR4ML framework might not be suitable for the validation of higher-level AI-based systems in autonomous driving due to their complexity.
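The reported model comparison can be reproduced in spirit on synthetic data. Only the three model families match the abstract; the features and labels below are invented for the sketch (a non-linear decision boundary is chosen, since it tends to favour tree ensembles over linear models):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Hypothetical road-marking feature vectors (e.g. shape/intensity descriptors)
X = rng.normal(size=(120, 6))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # XOR-like, non-linear class boundary

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
}
# Mean 5-fold cross-validated accuracy per model family
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

A linear model cannot represent the sign-product boundary, so the tree ensemble's advantage here mirrors the kind of result the thesis reports, without implying anything about its actual data.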
88

Automatic processing of LiDAR point cloud data captured by drones / Automatisk bearbetning av punktmolnsdata från LiDAR infångat av drönare

Li Persson, Leon January 2023
As automation is on the rise in the world at large, the ability to automatically differentiate objects in datasets via machine learning is of growing interest. This report details an experimental evaluation of supervised learning on point cloud data using random forests with varying setups. Acquired via airborne LiDAR using drones, the data holds a 3D representation of a landscape area containing power line corridors. Segmentation was performed with the goal of isolating the data points belonging to power line objects from the rest of the surroundings. Pre-processing was performed to extend the machine learning features with geometry-based features that are not inherent to the LiDAR data itself. Because of the scale of the data, the labels were generated by the customer, Airpelago, and supervised learning was applied using this data. With these labels as the benchmark, F1 scores of over 90% were achieved for both classes pertaining to power line objects. The best results were obtained when the data classes were balanced and both the relevant intrinsic and the extended features were used to train the classification models.
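One example of the kind of geometry-based extended feature mentioned above (the report's exact feature set is not listed here, so this is an assumption) is an eigenvalue-based linearity measure over a point's local neighbourhood, which responds strongly to wire-like structures such as power lines:

```python
import numpy as np

def linearity(neighbors):
    """(l1 - l2) / l1 from the covariance eigenvalues of a local neighbourhood:
    near 1 for wire-like (linear) structures, near 0 for planar or scattered ones."""
    pts = np.asarray(neighbors, dtype=float)
    lam = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]  # descending eigenvalues
    return (lam[0] - lam[1]) / lam[0]

wire  = [(t, 0.0, 0.0) for t in range(10)]                 # points along a wire
plane = [(i, j, 0.0) for i in range(4) for j in range(4)]  # points on flat ground
```

Appending such per-point descriptors alongside the intrinsic LiDAR attributes (intensity, return number, etc.) is one common way to give a random forest the geometric context the raw coordinates lack.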
89

An Assessment of 3D Tracking Systems and Lidar Data for RPO Simulation

Meland, Tallak Edward 30 August 2023
This thesis aimed to develop a rendezvous and proximity operation simulation to be tested with physical sensors and hardware, in order to assess the fidelity and performance of low-cost off-the-shelf systems for a hardware-in-the-loop testbed. With the push towards complex autonomous rendezvous missions, a low barrier to entry spacecraft simulator platform allows researchers to test and validate robotics systems, sensors, and algorithms for space applications, without investing in multimillion dollar equipment. This thesis conducted drone flights that followed a representative rendezvous trajectory while collecting lidar data of a target spacecraft model with a lidar sensor affixed to the drone. A relative orbital motion simulation tool was developed to create trajectories of varying orbits and initial conditions, and a representative trajectory was selected for use in drone flights. Two 3D tracking systems, OptiTrack and Vive, were assessed during these flights. OptiTrack is a high-cost state-of-the-art motion capture system that performs pose estimation by tracking reflective markers on a target in the tracking area. Vive is a lower-cost tracking system whose base stations emit lasers for its tracker to detect. Data collection by two lidar types was also assessed during these flights: real lidar data from a physical sensor, and virtual lidar data from a virtual sensor in a virtual environment. Drone flights were therefore performed in these four configurations of tracking system and lidar type, to directly compare the performance of higher-cost configurations with lower-cost configurations. The errors between the tracked drone position time history and the target position time history were analyzed, and the low-cost Vive and real lidar configuration was demonstrated to provide comparable error to the OptiTrack and real lidar configuration because of the dominance of the drone controller error over the tracking system error. 
In addition, lidar data of a target satellite model was collected by real and virtual lidar sensors during these flights, and point clouds were successfully generated. The resulting point clouds were compared by visualizing the data and noting the characteristics of real lidar data and its error, and how it compared to idealized virtual lidar data of a virtual target satellite model. The resulting real-world data characteristics were found to be modellable which can then be used for more robust simulation development within virtual reality. These results demonstrated that low-cost and open-source hardware and software provide satisfactory results for simulating this kind of spacecraft mission and capturing useful and usable data. / Master of Science / As space missions become more complex, there is a need for lower-cost, more accessible spacecraft simulation platforms that can test and validate hardware and software on the ground for a space-based mission. In this thesis, two position tracking systems and two lidar data collection types were assessed to see if the performance of a low-cost tracking system was comparable to a high-cost tracking system for a space-based simulation. The tracking systems tested were the high-cost state-of-the-art OptiTrack system and the low-cost Vive system. The two types of lidar data collected were real lidar from a physical sensor and virtual lidar from a virtual sensor. These assessments were performed in four configurations, to test each configuration of tracking system and lidar type. First, a simulation tool was developed to simulate the orbital dynamics of a spacecraft that operates in proximity to another spacecraft. 
After choosing an orbit and initial conditions that represent one such potential mission, the resulting trajectory was uploaded to a drone which acted as a surrogate for a spacecraft, and it flew the uploaded route around a model satellite, collecting lidar data in the process with a lidar sensor affixed to the drone. The tracking systems provided the drone with its position data, and the lidar sensor on the drone collected lidar data of a model satellite as it flew. The data revealed that the low-cost tracking system performance was comparable to the high-cost tracking system because the drone's controller error dominated over the tracking system errors. Additionally, the low-cost drone and physical lidar sensor generated high quality point cloud data that captured the geometry of the target satellite and illustrated the characteristics of real-world lidar data and its errors. These results demonstrated that low-cost and open-source hardware and software provide satisfactory results for simulating this kind of spacecraft mission and capturing useful and usable data.
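Comparing the real point cloud against its virtual counterpart, as described above, can be done at its simplest with a mean nearest-neighbour distance between clouds. This is a hedged sketch of one plausible error metric, not the thesis's actual analysis:

```python
from math import dist

def cloud_error(cloud, reference):
    """Mean nearest-neighbour distance from each point in `cloud` to `reference`.
    Brute force, O(n*m): fine for an illustration; use a k-d tree at scale."""
    return sum(min(dist(p, q) for q in reference) for p in cloud) / len(cloud)

real    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # toy stand-ins for lidar returns
virtual = [(0.0, 0.0, 0.0), (1.0, 0.0, 1.0)]
err = cloud_error(real, virtual)
```

Aggregating such distances over the target satellite's surface gives one number summarizing how far real sensor noise and dropout pull the measured cloud away from the idealized virtual one.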
90

Intracranial aneurysm rupture management: Comparing morphologic and deep learning features

Sobisch, Jannik 26 September 2023
Intracranial aneurysms (IAs) are a prevalent vascular pathology, present in 3-4% of the population, with an inherent risk of rupture. The growing accessibility of angiography has led to a rising incidence of detected aneurysms. An accurate assessment of the rupture risk is of utmost importance given the very high disability and mortality rates in case of rupture and the non-negligible risk inherent to surgical treatment. However, human evaluation is rather subjective, and current treatment guidelines, such as the PHASES score, remain inefficient. Therefore, we aimed to develop an automatic machine learning-based rupture prediction model. Our study utilized 686 CTA scans comprising 844 intracranial aneurysms, of which 579 were classified as ruptured and 265 as non-ruptured. Notably, the CTAs of ruptured aneurysms were obtained within a week after rupture, during which negligible morphological changes occur compared to the aneurysm's pre-rupture shape, as established by previous research. Based on this observation, our rupture risk assessment focused on the models' ability to classify between ruptured and unruptured IAs. We implemented an automated framework for vessel and aneurysm segmentation, vessel labeling, and feature extraction. The rupture risk prediction used deep learning-based vessel and aneurysm shape features, along with a combination of demographic features (patient sex and age) and morphological features (aneurysm location, size, surface area, volume, sphericity, etc.). An ablation-type study was conducted to evaluate these features. Eight different machine learning models were trained with the objective of identifying ruptured aneurysms.
The best-performing model achieved an area under the receiver operating characteristic curve (AUC) of 0.833, using a random forest with feature selection based on Spearman's rank correlation thresholding, which effectively eliminated highly correlated and anti-correlated features.
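The Spearman-thresholding feature selection described above can be sketched as greedy pruning of monotonically related features. Tie handling in the ranking is omitted for brevity, and the threshold value is illustrative:

```python
def ranks(xs):
    """1-based ranks of a sample (no tie averaging in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) + 1) / 2  # mean rank
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = sum((a - m) ** 2 for a in rx)
    return num / den

def prune(features, threshold=0.9):
    """Keep a feature only if |rho| with every already-kept feature is below threshold."""
    kept = []
    for name, values in features.items():
        if all(abs(spearman(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

features = {
    "size":   [1, 2, 3, 4, 5],
    "volume": [1, 4, 9, 16, 25],  # monotone in size -> rho = 1, pruned
    "age":    [5, 1, 4, 2, 3],
}
kept = prune(features)
```

Because rank correlation captures any monotone relationship, this removes redundant pairs like size and volume that a Pearson threshold on raw values might miss, before the pruned feature set is handed to the random forest.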
