  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

USING SNP DATA TO PREDICT RADIATION TOXICITY FOR PROSTATE CANCER PATIENTS

Mirzazadeh, Farzaneh 06 1900 (has links)
Radiotherapy is often used to treat prostate cancer. While high doses of radiation kill cancer cells, they can cause toxicity in healthy tissue for some patients. It would be best to apply this treatment only to patients who are unlikely to suffer such toxicity, which requires a classifier that can predict, before treatment, which patients are likely to exhibit severe toxicity. Here, we explore ways to use certain genetic features, called Single Nucleotide Polymorphisms (SNPs), for this task. This thesis applies several machine learning methods to learning such classifiers. The problem is challenging as there are a large number of features (164,273 SNPs) but only 82 samples. We explore an ensemble classification method called Mixture Using Variance (MUV), which first learns several different base probabilistic classifiers and then, for each query, combines the responses of the base classifiers based on their respective variances. The original MUV learns the individual classifiers using bootstrap sampling of the training data; we modify this by considering different subsets of the features for each classifier. We derive a new combination rule for base classifiers in this setting and obtain some new theoretical results. Based on the characteristics of our task, we propose an approach that first clusters the features and then selects only a subset of features from each cluster for each base classifier. Unfortunately, we were unable to predict radiation toxicity in prostate cancer patients using just the SNP values. However, our further experiments reveal a strong relation between the correctness of a classifier's prediction and the variance of its response to the corresponding classification query, which shows that the main idea is promising.
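The thesis's derived combination rule is not reproduced in the abstract; the sketch below only illustrates the general idea it builds on — averaging base classifiers' probabilistic predictions with weights inversely proportional to their response variances. The weighting scheme, inputs, and numbers are illustrative assumptions, not the thesis's actual rule:

```python
import numpy as np

def combine_by_variance(probs, variances, eps=1e-8):
    """Variance-weighted combination of base classifiers' predictions for one
    query (a generic sketch of the MUV idea: base classifiers whose responses
    have low variance receive more weight)."""
    probs = np.asarray(probs)              # (n_classifiers, n_classes)
    weights = 1.0 / (np.asarray(variances) + eps)
    weights /= weights.sum()               # normalize the inverse variances
    combined = weights @ probs             # weighted average of the predictions
    return combined / combined.sum()       # renormalize to a distribution

# Three hypothetical base classifiers, each trained on a different feature subset:
probs = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
variances = [0.01, 0.05, 0.30]             # the third classifier is least certain
print(combine_by_variance(probs, variances))  # dominated by the low-variance classifiers
```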
2

A Non-invasive 2D Digital Imaging Method for Detection of Surface Lesions Using Machine Learning

Hussain, Nosheen, Cooper, Patricia A., Shnyder, Steven, Ugail, Hassan, Bukar, Ali M., Connah, David January 2017 (has links)
As part of the cancer drug development process, evaluation in experimental subcutaneous tumour transplantation models is a key step. This involves implanting tumour material underneath the mouse skin and measuring tumour growth using calipers. This methodology has been shown to have poor reproducibility and accuracy due to observer variation. Furthermore, the physical pressure placed on the tumour by the calipers is not only distressing for the mouse but could also lead to tumour damage. Non-invasive digital imaging of the tumour would reduce handling stress and allow volume determination without any potential tumour damage. This is challenging because the tumours sit under the skin and share the colour pattern of the mouse body, making them hard to differentiate in a 2D image. We used the pre-trained convolutional neural network VGG-16 and extracted features from multiple layers in an attempt to accurately locate the tumour. When extracting from the FC7 layer after ReLU activation, a recognition rate of 89.85% was achieved.
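A feature extractor along these lines can be sketched with torchvision's pretrained VGG-16. Mapping the paper's "FC7" onto torchvision's classifier[3] (the second 4096-unit fully connected layer, with classifier[4] as the ReLU that follows) is an assumption about layer naming, and the preprocessing shown is the standard ImageNet recipe rather than anything specified above:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained VGG-16 in inference mode.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def fc7_relu_features(image_path):
    """Return the 4096-d FC7-after-ReLU descriptor for one image."""
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = vgg.features(x)              # convolutional backbone
        x = vgg.avgpool(x)
        x = torch.flatten(x, 1)
        x = vgg.classifier[:5](x)        # stop right after the ReLU following FC7
    return x.squeeze(0)
```

The resulting 4096-dimensional descriptors could then feed any downstream localisation or recognition stage.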
3

Unsupervised Anomaly Detection and Explainability for Ladok Logs

Edholm, Mimmi January 2023 (has links)
Anomaly detection is the process of finding outliers in data. This report explores the use of unsupervised machine learning for anomaly detection, as well as the importance of explaining the model's decision making. The project focuses on identifying anomalous behaviour in Ladok's frontend access logs, with an emphasis on security issues, specifically attempted intrusion. This is done by implementing an anomaly detection model that consists of a stacked autoencoder and k-means clustering, and by examining the data using k-means alone. To explain the decision-making process, SHAP is used; SHAP is an explainability method that measures feature importance. The report includes an overview of the necessary theory of machine learning, anomaly detection and explainability, describes the implementation of the model, and examines how to explain the decision making of a black-box model. Further, the results are presented and the models' performance on the data is discussed. Lastly, the report concludes whether the chosen approach was appropriate and proposes how the work could be improved in the future. The study concludes that this approach did not produce the desired outcome and might therefore not be the most suitable.
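One plausible shape for such a pipeline — a small stacked autoencoder whose latent codes are clustered with k-means, flagging points far from every centroid — is sketched below. The architecture, feature dimensions, synthetic data, and threshold are illustrative assumptions, not the report's actual configuration:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Toy stand-in for numeric feature vectors derived from access-log entries.
X = torch.randn(1000, 20)

# A small stacked autoencoder (20 -> 8 -> 3 -> 8 -> 20).
encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(200):  # train the autoencoder to reconstruct typical traffic
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# Cluster the learned latent codes; points far from their nearest centroid
# are treated as candidate anomalies.
with torch.no_grad():
    Z = encoder(X).numpy()
km = KMeans(n_clusters=5, n_init=10).fit(Z)
dist = km.transform(Z).min(axis=1)          # distance to the nearest centroid
anomalies = np.where(dist > np.percentile(dist, 99))[0]
# A SHAP explainer (e.g., shap.KernelExplainer over the distance score) could
# then attribute each flagged point's anomaly score to its input features.
```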
4

Machine learning in multi-frame image super-resolution

Pickup, Lyndsey C. January 2007 (has links)
Multi-frame image super-resolution is a procedure which takes several noisy low-resolution images of the same scene, acquired under different conditions, and processes them together to synthesize one or more high-quality super-resolution images, with higher spatial frequency, and less noise and image blur than any of the original images. The inputs can take the form of medical images, surveillance footage, digital video, satellite terrain imagery, or images from many other sources. This thesis focuses on Bayesian methods for multi-frame super-resolution, which use a prior distribution over the super-resolution image. The goal is to produce outputs which are as accurate as possible, and this is achieved through three novel super-resolution schemes presented in this thesis. Previous approaches obtained the super-resolution estimate by first computing and fixing the imaging parameters (such as image registration), and then computing the super-resolution image with this registration. In the first of the approaches taken here, superior results are obtained by optimizing over both the registrations and image pixels, creating a complete simultaneous algorithm. Additionally, parameters for the prior distribution are learnt automatically from data, rather than being set by trial and error. In the second approach, uncertainty in the values of the imaging parameters is dealt with by marginalization. In a previous Bayesian image super-resolution approach, the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the imaging parameters rather than the image, the novel method presented here allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. Finally, a domain-specific image prior, based upon patches sampled from other images, is presented. For certain types of super-resolution problems where it is applicable, this sample-based prior gives a significant improvement in the super-resolution image quality.
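The standard generative model underlying this family of methods (a sketch in generic notation, not the thesis's own) treats each low-resolution frame as a warped, blurred, down-sampled view of the latent super-resolution image, which makes the two estimation strategies described above easy to state:

```latex
% Each low-resolution frame y_k is generated from the latent image x via
% imaging parameters theta_k (registration, blur, decimation) plus noise:
\mathbf{y}_k = W(\boldsymbol{\theta}_k)\,\mathbf{x} + \boldsymbol{\epsilon}_k,
\qquad \boldsymbol{\epsilon}_k \sim \mathcal{N}(\mathbf{0}, \sigma^2 I).

% First approach: optimize image and registrations simultaneously,
% rather than fixing the theta_k in advance:
(\hat{\mathbf{x}}, \hat{\boldsymbol{\theta}}) =
  \arg\max_{\mathbf{x},\,\boldsymbol{\theta}}
  \; p(\mathbf{x}) \prod_{k} p(\mathbf{y}_k \mid \mathbf{x}, \boldsymbol{\theta}_k).

% Second approach: marginalize the imaging parameters instead of the image,
% which admits a more realistic prior p(x) and a lower-dimensional integral:
p(\mathbf{x} \mid \{\mathbf{y}_k\}) \;\propto\;
  p(\mathbf{x}) \int p(\{\mathbf{y}_k\} \mid \mathbf{x}, \boldsymbol{\theta})
  \, p(\boldsymbol{\theta}) \, d\boldsymbol{\theta}.
```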
5

MULTIMODAL DIGITAL IMAGE EXPLORATION WITH SYNCHRONOUS INTELLIGENT ASSISTANCE FOR THE BLIND

Ting Zhang (8636196) 16 April 2020 (has links)
Emerging haptic devices have granted individuals who are blind the ability to explore images in real time, which has always been a challenge for them. However, when only haptic interaction is available and no visual feedback is given, image comprehension demands time and major cognitive resources. This research developed an approach to improve blind people's exploration performance by providing assisting strategies in various sensory modalities when certain exploratory behaviours are performed. The approach has three fundamental components: the user model, the assistance model, and the user interface. The user model recognizes users' image exploration procedures; a learning framework utilizing a spike-timing neural network was developed to classify the frequently applied exploration procedures. The assistance model provides different assisting strategies when a certain exploration procedure is performed. User studies were conducted to understand the goals of each exploration procedure, and assisting strategies were designed based on the discovered goals; these strategies give users hints about objects' locations and relationships. The user interface then determines the optimal sensory modality for delivering each assisting strategy. Within-participants experiments compared three sensory modalities for each assisting strategy: vibration, sound and virtual magnetic force. A complete computer-aided system was developed by integrating all the validated assisting strategies, and experiments were conducted to evaluate the complete system with each assisting strategy expressed through its optimal modality. Performance metrics including task performance and workload assessment were applied in the evaluation.
6

Computer Vision Approach for Estimating Human Health Parameters

Mayank Gupta (5930651) 03 January 2019 (has links)
Measurement of vital cardiovascular health attributes, e.g., pulse rate variability, and estimation of a person's exertion level can help in diagnosing potential cardiovascular diseases and musculoskeletal injuries, and thus in monitoring an individual's well-being. Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries, which not only reduce workers' efficiency and productivity but also affect their quality of life. Existing techniques for such measurements pose a great challenge as they are intrusive, interfere with the human-machine interface, and/or are subjective in nature, and thus are not scalable. Non-contact methods for measuring these metrics can eliminate the need for specialized equipment and manual measurements. Non-contact methods have additional advantages: they are potentially scalable and portable, can be used for continuous measurements, and can be used on patients and workers with varying levels of dexterity and independence, from people with physical impairments to shop-floor workers and infants. In this work, we use face videos and photoplethysmography (PPG) signals to extract relevant features and build a regression model that predicts pulse rate and pulse rate variability, and a classification model that predicts force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), providing a non-intrusive and scalable approach. Efficient feature extraction resulted in high accuracy for both models.
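As a rough illustration of the non-contact idea (not the thesis's actual feature set or models): a face region's mean green-channel trace carries the PPG signal, and the dominant spectral peak in the heart-rate band gives the pulse rate. The band limits and filter order below are common-sense assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_from_trace(green_trace, fps):
    """Estimate pulse rate (BPM) from a mean green-channel trace of a face ROI.

    green_trace: 1-D array with one mean pixel value per video frame
    fps:         video frame rate in Hz
    """
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                                    # remove the DC component
    # Band-pass to the plausible heart-rate band, ~0.7-4 Hz (42-240 BPM).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)
    # The spectral peak within the pass band is the dominant pulse frequency.
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]   # Hz -> beats per minute

# Example with a synthetic 72 BPM (1.2 Hz) signal sampled at 30 fps:
t = np.arange(0, 30, 1 / 30.0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(len(t))
print(pulse_rate_from_trace(trace, fps=30))             # prints approximately 72
```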
7

Learning based event model for knowledge extraction and prediction system in the context of Smart City

Kotevska, Olivera 30 January 2018 (has links)
Billions of "things" connected to the Internet constitute symbiotic networks of communication devices (e.g., phones, tablets, and laptops), smart appliances (e.g., fridges, coffee makers and so forth) and networks of people (e.g., social networks). The concept of traditional networks (e.g., computer networks) is thus expanding and in future will go beyond them, including more entities and information. These networks and devices are constantly sensing, monitoring and generating a vast amount of data on all aspects of human life. One of the main challenges in this area is that the network consists of "things" that are heterogeneous in many ways; another is that the state of the interconnected objects changes over time; and there are so many entities in the network that identifying their interdependencies is crucial for better monitoring and predicting network behaviour. In this research, we address these problems by combining the theory and algorithms of event processing with machine learning. Our goal is to propose a possible solution for making better use of the information generated by these networks. It will help to create systems that detect and respond promptly to situations occurring in urban life, so that smart decisions can be made for citizens, organizations, companies and city administrations. Social media is treated as a source of information about situations and facts related to users and their social environment. First, we tackle the problem of identifying public opinion for a given period (year, month) to get a better understanding of city dynamics. To solve this problem, we propose a new algorithm to analyze complex and noisy textual data such as Twitter messages (tweets). This algorithm permits automatic categorization and similarity identification between event topics using clustering techniques. The second challenge is combining network data with various properties and characteristics into a common format that facilitates data sharing among services. To solve it, we created a common event model that reduces representation complexity while keeping the maximum amount of information. This model has two major additions: semantics and scalability. The semantic part means that our model is underpinned by an upper-level ontology that adds interoperability capabilities, while the scalability part means that the structure of the proposed model is flexible in adding new entries and features. We validated this model using complex event patterns and predictive analytics techniques. To deal with the dynamic environment and unexpected changes, we created a dynamic, resilient network model that always chooses the optimal model for analytics and automatically adapts to changes by selecting the next best model. We used a qualitative and quantitative approach for scalable event stream selection, which narrows down the solution for link analysis and for the optimal and alternative best models. It also identifies efficient relationships between data streams, such as correlation, causality and similarity, to find relevant data sources that can act as alternative data sources or complement the analytics process.
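The generic pattern behind the tweet-clustering step — vectorize noisy short texts, then group them by topic — might be sketched as follows. This is the textbook TF-IDF/k-means recipe with made-up example tweets, not the thesis's new algorithm:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "Heavy traffic on the ring road after this morning's accident",
    "Accident blocks two lanes, expect delays downtown",
    "Open-air concert in the central park this weekend",
    "Free concert tickets for Saturday's show in the park",
]

# Represent the noisy short texts as TF-IDF vectors, then group them by topic.
vectors = TfidfVectorizer(stop_words="english").fit_transform(tweets)
km = KMeans(n_clusters=2, n_init=10).fit(vectors)
for tweet, label in zip(tweets, km.labels_):
    print(label, tweet)   # traffic tweets and concert tweets land in separate clusters
```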
