201 |
<strong>Operational Decision Tools for SMART Emergency Medical Services</strong> Juan Camilo Paz Roa (15853232), 31 May 2023
<p>Smart and connected technology solutions have emerged as a promising way to enhance EMS services, particularly in areas where access to professional services is limited. However, a significant challenge to their implementation is determining which technologies to use and how they will change current logistic operations to improve service efficiency and expand access to care. In this context, this thesis explores opportunities for smart and connected technology solutions in EMS logistics.</p>
<p>The first study explores the use of medically trained volunteers in the community, known as Citizen Responders (CRs). These individuals can be notified of an EMS request as soon as it arrives via a mobile alert receiver, allowing them to provide timely and potentially life-saving assistance before an ambulance arrives. However, traditional EMS logistic decision platforms are not equipped to effectively leverage the real-time CR information, such as location and availability, that connected technologies make shareable. To improve coordination between CRs and ambulances, this study proposes two decision tools that incorporate real-time CR information: one for redeploying ambulances after they complete service and another for dispatching ambulances in response to calls. The redeployment procedure uses mixed-integer linear programming (MILP) to maximize patient survival, while the dispatch procedure enhances a locally optimal dispatch rule by integrating real-time CR information for priority-differentiated emergencies.</p>
<p>In the second study, a third decision tool was developed to take advantage of the growing feature information provided by connected technologies: an AI-enabled dispatch rule recommendation model that is more usable for dispatchers than black-box decision models. The model, based on supervised learning, outputs a “promising” metric-based dispatch rule for the human decision-maker, preserving the usability of rules while improving system performance and alleviating dispatchers’ cognitive burden. A set of experiments was performed on a self-developed simulator to assess the performance of all the decision tools. The findings suggest they have the potential to significantly enhance EMS system performance.</p>
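The redeployment objective can be illustrated with a toy sketch: assign idle ambulances to bases so that demand-weighted expected survival is maximized. The survival curve, travel times, demand weights, and brute-force enumeration below are illustrative assumptions for a tiny instance, not the thesis's MILP formulation (which is what scales to realistic fleet sizes):

```python
import itertools

def survival_prob(response_time_min):
    """Illustrative monotone-decreasing survival curve (an assumption,
    not the survival function used in the thesis)."""
    return max(0.0, 1.0 - 0.1 * response_time_min)

def best_redeployment(travel_time, demand_weight):
    """Enumerate one-to-one ambulance-to-base assignments and return the
    one maximizing demand-weighted expected survival.
    travel_time[a][b]: minutes for ambulance a to cover base b's zone.
    demand_weight[b]: relative call volume covered from base b.
    Requires len(travel_time) <= len(demand_weight)."""
    n_amb = len(travel_time)
    bases = range(len(demand_weight))
    best, best_val = None, -1.0
    for assign in itertools.permutations(bases, n_amb):
        val = sum(demand_weight[b] * survival_prob(travel_time[a][b])
                  for a, b in enumerate(assign))
        if val > best_val:
            best, best_val = assign, val
    return best, best_val

travel = [[2, 8], [6, 3]]   # 2 ambulances x 2 bases (hypothetical minutes)
weights = [1.0, 1.5]        # hypothetical demand weights
assignment, value = best_redeployment(travel, weights)
```

An MILP solver replaces this enumeration in practice; the brute force only makes the objective concrete.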
|
202 |
Automatic processing of LiDAR point cloud data captured by drones / Automatisk bearbetning av punktmolnsdata från LiDAR infångat av drönare. Li Persson, Leon, January 2023
As automation is on the rise worldwide, the ability to automatically differentiate objects in datasets via machine learning is of growing interest. This report details an experimental evaluation of supervised learning on point cloud data using random forests with varying setups. Acquired via airborne LiDAR using drones, the data holds a 3D representation of a landscape area containing power line corridors. Segmentation was performed with the goal of isolating data points belonging to power line objects from the rest of the surroundings. The data was pre-processed to extend the machine learning features with geometry-based features that are not inherent to the LiDAR data itself. Because of the scale of the data, the labels were generated by the customer, Airpelago, and supervised learning was applied using this data. With their labels as benchmark, F1 scores above 90% were achieved for both classes pertaining to power line objects. The best results were obtained when the data classes were balanced and both relevant intrinsic and extended features were used to train the classification models.
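The "geometry-based features not inherent to the LiDAR data" step can be sketched in miniature. The two features below (height above the lowest neighbour, horizontal neighbour count) and the toy cloud are illustrative assumptions; real pipelines use richer descriptors (e.g. eigenvalue-based linearity) and spatial indexing rather than a linear scan:

```python
import math

def local_features(points, idx, radius=2.0):
    """Toy geometry-based features for point `idx` of an (x, y, z) cloud:
    height above the lowest point in a horizontal neighbourhood, and the
    neighbour count. Wire points tend to sit high above local ground."""
    x, y, z = points[idx]
    neigh = [(px, py, pz) for (px, py, pz) in points
             if math.hypot(px - x, py - y) <= radius]
    z_min = min(pz for _, _, pz in neigh)
    return {"height_above_min": z - z_min, "n_neighbours": len(neigh)}

# two ground-level points and one wire-height point (hypothetical values)
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (0.5, 0.5, 12.0)]
feats = local_features(cloud, 2)
```

Feature vectors like these, stacked with the intrinsic LiDAR attributes (intensity, return number), are what the random forest is then trained on.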
|
203 |
Self-supervised Representation Learning for Visual Domains Beyond Natural Scenes. Chhipa, Prakash Chandra, January 2023
This thesis investigates the possibility of efficiently adapting self-supervised representation learning to visual domains beyond natural scenes, e.g., medical imaging and non-RGB sensory images. The thesis contributes to i) formalizing the self-supervised representation learning paradigm in a unified conceptual framework and ii) proposing a hypothesis based on a supervision signal from the data itself, called the data-prior. Method adaptations following the hypothesis demonstrate significant progress in downstream task performance on microscopic histopathology and 3-dimensional particle management (3DPM) mining material non-RGB image domains. Supervised learning has proven to achieve higher performance than unsupervised learning on computer vision downstream tasks, e.g., image classification and object detection. However, it imposes limitations due to its reliance on human supervision. Transfer learning reduces this reliance and remains a proven approach for fine-tuning, but it does not leverage unlabeled data. Representation learning in a self-supervised manner has successfully reduced the need for labelled data in the natural language processing and vision domains. Advances in learning effective visual representations without human supervision through self-supervised learning are thought-provoking. This thesis performs a detailed conceptual analysis, method formalization, and literature study of the recent paradigm of self-supervised representation learning. The study’s primary goal is to identify the common methodological limitations across the various approaches when adapting to visual domains beyond natural scenes. The study finds a common component in the transformations that generate distorted views for invariant representation learning.
A significant outcome of the study suggests this component depends closely on human knowledge of the world around natural scenes, which fits the visual domain of natural scenes well but remains sub-optimal for conceptually different visual domains. A hypothesis is proposed: use the supervision signal available in the data itself (the data-prior) to replace the human-knowledge-driven transformations in self-supervised pretraining. Two visual domains beyond natural scenes are considered to explore this hypothesis: breast cancer microscopic histopathology and 3DPM mining material non-RGB images. The first research paper explores breast cancer microscopic histopathology images, actualizing the data-prior hypothesis with multiple magnification factors as the supervision signal, available in the public microscopic histopathology dataset BreakHis. It proposes a self-supervised representation learning method, Magnification Prior Contrastive Similarity, which adapts the contrastive learning approach by replacing the standard image view transformations (augmentations) with views at different magnification factors. The contributions of the work are manifold. It achieves significant performance improvement in the downstream task of malignancy classification in both label-efficient and fully supervised settings. Pretrained models show efficient knowledge transfer on two additional public datasets, supported by qualitative analysis of the learned representations. The second research paper investigates the 3DPM mining material non-RGB image domain, where the material’s pixel-mapped reflectance image and height (depth map) are captured. It actualizes the data-prior hypothesis by using depth maps of mining material on the conveyor belt.
The proposed method, Depth Contrast, also adapts the contrastive learning method, replacing standard augmentations with depth maps of the mining materials. It outperforms ImageNet transfer learning on material classification in fully supervised settings, in both fine-tuning and linear evaluation, and shows consistent improvement in label-efficient settings. In summary, the data-prior hypothesis shows one promising direction for optimal adaptation of contrastive learning methods in self-supervision to visual domains beyond natural scenes. However, a detailed study of the data-prior hypothesis is still required to explore other, non-contrastive approaches to recent self-supervised representation learning, including knowledge distillation and information maximization.
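The shared adaptation in both methods, using a data-prior view (a second magnification, or a depth map) as the positive pair in a contrastive loss instead of a synthetic augmentation, can be sketched as a scalar InfoNCE-style computation. The embeddings and temperature below are made-up toy values; real training runs this batched over GPU tensors:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style loss where the positive comes from the data-prior
    (e.g. the same specimen at another magnification) rather than a
    hand-crafted augmentation. Toy per-sample version."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    negs = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))

# hypothetical embeddings: one patch at 100x (anchor) and 200x (positive),
# plus an unrelated patch (negative)
loss = contrastive_loss([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.2]])
```

Minimizing this loss pulls the two data-prior views of the same sample together while pushing other samples apart, which is the invariance the thesis seeks without human-knowledge-driven transformations.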
|
204 |
Self-learning for 3D segmentation of medical images from single and few-slice annotation. Lassarat, Côme, January 2023
Training deep-learning networks to segment a particular region of interest (ROI) in 3D medical acquisitions (also called volumes) usually requires annotating a lot of data upstream because of the predominantly fully supervised nature of existing state-of-the-art models. To alleviate this annotation burden for medical experts and the associated cost, leveraging self-learning models, whose strength lies in their ability to be trained with unlabeled data, is a natural and straightforward approach. This work thus investigates a self-supervised model (called “self-learning” in this study) to segment the liver as a whole in medical acquisitions, which is very valuable for doctors as it provides insights for improved patient care. The self-learning pipeline utilizes only a single-slice (or few-slice) ground-truth annotation, propagating the annotation iteratively in 3D to predict the complete segmentation mask for the entire volume. The segmentation accuracy of the tested models is evaluated using the Dice score, a metric commonly employed for this task. Applied to Computed Tomography (CT) acquisitions to annotate the liver, the initial implementation of the self-learning framework achieved a segmentation accuracy of 0.86 Dice score. Improvements were explored to address the drifting of the mask propagation, which eventually proved to be of limited benefit. The proposed method was then compared to the fully supervised nnU-Net baseline, the state-of-the-art deep-learning model for medical image segmentation, trained on fully 3D ground truth (Dice score ∼ 0.96). The final framework was assessed as an annotation tool, by evaluating the segmentation accuracy of the state-of-the-art nnU-Net trained with annotations predicted by the self-learning pipeline for a given expert annotation budget.
While the self-learning framework did not generate sufficiently accurate annotations from a single slice annotation, yielding an average Dice score of ∼ 0.85, it demonstrated encouraging results when two ground-truth slice annotations per volume were provided for the same annotation budget (Dice score of ∼ 0.90).
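The iterative slice-to-slice propagation at the heart of the pipeline can be sketched in miniature. The intensity-tolerance rule, toy 2x2 volume, and tolerance value below are illustrative assumptions only; the actual pipeline uses a learned model, and the sketch also shows why drift occurs (the frozen seed statistics stop matching later slices):

```python
def propagate_mask(volume, seed_mask, seed_idx, tol=10):
    """Toy single-slice-to-volume propagation: copy the previous slice's
    mask forward, keeping a voxel only if its intensity stays within
    `tol` of the seeded region's mean intensity."""
    seed_vals = [volume[seed_idx][r][c]
                 for r, row in enumerate(seed_mask)
                 for c, m in enumerate(row) if m]
    mean = sum(seed_vals) / len(seed_vals)
    masks = {seed_idx: seed_mask}
    for z in range(seed_idx + 1, len(volume)):
        prev = masks[z - 1]
        masks[z] = [[prev[r][c] and abs(volume[z][r][c] - mean) <= tol
                     for c in range(len(prev[0]))] for r in range(len(prev))]
    return masks

volume = [                     # 3 slices of 2x2 voxels (made-up intensities)
    [[100, 30], [30, 30]],
    [[105, 30], [30, 30]],
    [[200, 30], [30, 30]],
]
seed = [[True, False], [False, False]]  # single annotated slice
masks = propagate_mask(volume, seed, 0)
```

The masked voxel survives into slice 1 (intensity 105, close to the seed mean of 100) but is lost in slice 2 (intensity 200), mirroring the drift behaviour the study had to address.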
|
205 |
AIRS: a Resource Limited Artificial Immune Classifier. Watkins, Andrew B, 14 December 2001
The natural immune system embodies a wealth of information processing capabilities that can be exploited as a metaphor for the development of artificial immune systems. Chief among these features is the ability to recognize previously encountered substances and to generalize beyond recognition in order to provide appropriate responses to pathogens not seen before. This thesis presents a new supervised learning paradigm, resource limited artificial immune classifiers, inspired by mechanisms exhibited in natural and artificial immune systems. The key abstractions gleaned from these immune systems include resource competition, clonal selection, affinity maturation, and memory cell retention. A discussion of the progenitors of this work is offered. This work provides a thorough explication of a resource limited artificial immune classification algorithm, named AIRS (Artificial Immune Recognition System). Experimental results on both simulated data sets and real-world machine learning benchmarks demonstrate the effectiveness of the AIRS algorithm as a classification technique.
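Of the abstractions listed above, the final classification stage is the simplest to sketch: once training has evolved a pool of memory cells, an unseen antigen is labelled by majority vote among the memory cells with highest affinity. The hand-picked cells, Euclidean affinity, and k value below are illustrative assumptions; the resource-limited training that actually produces the memory pool is omitted:

```python
import math

def classify(memory_cells, x, k=3):
    """AIRS-style classification stage: majority vote among the k memory
    cells with the highest affinity to x (here, lowest Euclidean
    distance). memory_cells is a list of (feature_vector, label) pairs."""
    by_affinity = sorted(memory_cells, key=lambda mc: math.dist(mc[0], x))
    votes = [label for _, label in by_affinity[:k]]
    return max(set(votes), key=votes.count)

# hypothetical evolved memory cells in a 2-feature space
cells = [([0.0, 0.0], "A"), ([0.1, 0.2], "A"),
         ([1.0, 1.0], "B"), ([0.9, 1.1], "B"), ([1.2, 0.9], "B")]
pred = classify(cells, [0.05, 0.1], k=3)
```

The resource limitation matters upstream: because cells compete for a bounded resource pool during training, the memory set stays small, which keeps this vote cheap at prediction time.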
|
206 |
Deep-learning Approaches to Object Recognition from 3D Data. Chen, Zhiang, 30 August 2017
No description available.
|
207 |
AN ALL-ATTRIBUTES APPROACH TO SUPERVISED LEARNING. VANCE, DANNY W., January 2006
No description available.
|
208 |
Identification of Uniform Class Regions using Perceptron Training. Samuel, Nikhil J., 15 October 2015
No description available.
|
209 |
Integral Equations For Machine Learning Problems. Que, Qichao, 28 September 2016
No description available.
|
210 |
A supervised learning approach for transport mode detection using GPS tracking data. Ivanov, Stepan, and Sakellariou, Stefanos, January 2022
The fast development of telecommunication is producing a huge amount of data on how people move and behave over time. Nowadays, travel data are mainly collected through Global Positioning Systems (GPS) and can be used to identify human mobility patterns and travel behaviors. Transport mode detection (TMD) aims to identify the means of transport used by an individual and has become more popular in recent years, as it can benefit various applications. However, developing travel models requires different types of information that must be extracted from raw travel data. Although many useful features like speed, acceleration, and bearing rate can be extracted from raw GPS data, detecting transport modes requires further processing. Some previous studies have successfully applied machine learning algorithms to detect the transport mode. Despite achieving high performance, many of these studies used rather small datasets generated from a limited number of users or identified only a small number of transport modes. Furthermore, most applied more complex methodologies that required extra information such as GIS layers or road and railway networks. The purpose of this study is to propose a simple supervised learning model to identify five common transport modes on large datasets using only raw GPS data. In total, six commonly used supervised learning algorithms are tested on seven selected features extracted from the raw GPS data. The Random Forest (RF) algorithm proved to perform best in detecting the five transport modes from the dataset utilized in this study, with an overall accuracy of 82.7%.
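The feature-extraction step named above (speed and acceleration from raw GPS fixes) can be sketched directly; bearing rate would follow the same pattern. The track coordinates and timestamps are made-up values, and the exact seven features used in the study are not reproduced here:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def point_features(track):
    """Per-segment speed (m/s) and acceleration (m/s^2) from raw
    (lat, lon, unix_time) fixes -- the kind of features fed to the
    supervised classifiers."""
    speeds, accels = [], []
    for (la1, lo1, t1), (la2, lo2, t2) in zip(track, track[1:]):
        speeds.append(haversine_m((la1, lo1), (la2, lo2)) / (t2 - t1))
    for i in range(1, len(speeds)):
        dt = track[i + 1][2] - track[i][2]
        accels.append((speeds[i] - speeds[i - 1]) / dt)
    return speeds, accels

# hypothetical three-fix track, 10 s between fixes
track = [(59.0, 18.0, 0), (59.0, 18.001, 10), (59.0, 18.003, 20)]
speeds, accels = point_features(track)
```

Rows of such features, one per GPS segment, form the training matrix on which algorithms like random forest are then compared.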
|