841

A Machine Learning-Based Statistical Analysis of Predictors for Spinal Cord Stimulation Success

Jacobson, Trolle, Segerberg, Gustav January 2019 (has links)
Spinal Cord Stimulation (SCS) is a treatment for lumbar back pain, and despite the proven efficacy of the technology, there is a lack of knowledge about how the treatment outcome varies between different patient groups. Furthermore, since the method is costly in terms of material, surgery and follow-up time, more accurate patient targeting would decrease healthcare costs. In recent years, Real World Data (RWD) has become a vital source of information for describing the effects of medical treatments. Its complexity, however, calls for new, innovative methods that use a larger set of useful features to explain the outcome of SCS treatments. This study employed machine learning algorithms, e.g., a Random Forest Classifier (RFC) and boosting algorithms, and compared the results with a logistic regression (LR) baseline. The results were that RFC tends to classify successful and unsuccessful patients better, while logistic regression was unstable on unbalanced data. In order to interpret the insights of the models, we also proposed a Soft Accuracy Measurement (SAM) method to explain how RFC and LR differ. Some factors were shown to impact the success of SCS: age, income, duration of pain experience and educational level. Many of these variables are also supported by earlier studies on factors of success in lumbar spine surgery.
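Since the abstract includes no code, here is a minimal illustrative sketch of the comparison it describes — a Random Forest classifier against a logistic-regression baseline on an imbalanced binary outcome. The synthetic dataset and the balanced-accuracy metric are stand-ins for the study's patient data and its SAM measure:

```python
# Hypothetical sketch: RFC vs. LR baseline on imbalanced data.
# Synthetic data stands in for the (unavailable) SCS patient records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              LogisticRegression(max_iter=1000)):
    model.fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, model.predict(X_te))
    print(type(model).__name__, round(score, 3))
```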
842

Machine Learning Methods for Personalized Medicine Using Electronic Health Records

Wu, Peng January 2019 (has links)
The theme of this dissertation is methods for estimating personalized treatment using machine learning algorithms that leverage information from electronic health records (EHRs). Current guidelines for medical decision making largely rely on data from randomized controlled trials (RCTs) studying average treatment effects. However, because RCTs are usually conducted under specific inclusion/exclusion criteria, they may be inadequate for making individualized treatment decisions in real-world settings. Large-scale EHRs provide opportunities to fulfill the goals of personalized medicine and to learn individualized treatment rules (ITRs) that depend on patient-specific characteristics from real-world patient data. On the other hand, since patients' EHRs document treatment prescriptions in the real world, transferring information in EHRs to RCTs, if done appropriately, could potentially improve the performance of ITRs in terms of precision and generalizability. Furthermore, EHR data usually include text notes or similar structures, so topic modeling techniques can be adapted to engineer features.

In the first part of this work, we address challenges with EHRs and propose a machine learning approach based on matching techniques (referred to as M-learning) to estimate optimal ITRs from EHRs. This new learning method uses matching instead of the inverse probability weighting commonly used in many existing methods for estimating ITRs, in order to more accurately assess individuals' treatment responses to alternative treatments and alleviate confounding. Matching-based value functions are proposed to compare matched pairs under a unified framework in which various types of outcomes for measuring treatment response (including continuous, ordinal, and discrete outcomes) can easily be accommodated. We establish the Fisher consistency and convergence rate of M-learning. Through extensive simulation studies, we show that M-learning outperforms existing methods when propensity scores are misspecified or when unmeasured confounders are present in certain scenarios. At the end of this part, we apply M-learning to estimate optimal personalized second-line treatments for type 2 diabetes patients to achieve better glycemic control or reduce major complications, using EHRs from New York Presbyterian Hospital (NYPH).

In the second part, we propose a new domain adaptation method to learn ITRs by incorporating information from EHRs. Unless we assume no unmeasured confounding in EHRs, we cannot directly learn the optimal ITR from the combined EHR and RCT data. Instead, we first pre-train "super" features from EHRs that summarize physicians' treatment decisions and patients' observed benefits in the real world, which are likely to be informative of the optimal ITRs. We then augment the feature space of the RCT and learn the optimal ITRs, stratifying by these features, using RCT patients only. We adopt Q-learning and a modified matched-learning algorithm for estimation. We present theoretical justifications and conduct simulation studies to demonstrate the performance of our proposed method. Finally, we apply our method to transfer information learned from EHRs of type 2 diabetes (T2D) patients to improve the learning of individualized insulin therapies from an RCT.

In the last part of this work, we apply the M-learning method proposed in the first part to learn ITRs using interpretable features extracted from EHR documentation of medications and ICD diagnosis codes. We use a latent Dirichlet allocation (LDA) model to extract latent topics and weights as features for learning ITRs. Our method achieves confounding reduction in observational studies by matching treated and untreated individuals, and it improves treatment optimization by augmenting the feature space with clinically meaningful LDA-based features. We apply the method to extract LDA-based features from EHR data collected at the NYPH clinical data warehouse in studying optimal second-line treatment for T2D patients. We use cross-validation to show that the learned ITRs outperform uniform treatment strategies (i.e., assigning insulin or another class of oral organic compounds to all individuals), and that including topic modeling features leads to a greater reduction of post-treatment complications.
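As a rough illustration of the LDA feature-engineering step described in the last part, the sketch below extracts topic-weight features from free-text notes with scikit-learn. The note texts are invented placeholders, and the downstream M-learning step is omitted:

```python
# Illustrative sketch only: LDA topic weights from free-text notes as
# features. The notes below are hypothetical, not NYPH data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "metformin continued hba1c elevated neuropathy",        # hypothetical
    "insulin initiated glucose uncontrolled retinopathy",   # note texts
    "sulfonylurea added weight gain hypoglycemia episode",
]
counts = CountVectorizer().fit_transform(notes)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(counts)  # one row of topic proportions per note
print(topic_weights)  # rows like these would augment the ITR feature space
```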
843

Anpassning av mobilnotifikationer med hjälp av maskininlärning / Adapting Mobile Notifications Using Machine Learning

Saveh, Diana January 2019 (has links)
The aim of this study was to answer the question of whether it is possible to obtain notifications that work with the user instead of against them, since notifications can be experienced as stressful and bothersome. To reduce stressful notifications, an application was created that acted as a notification controller. The application used machine learning to predict when the user wanted to receive their notifications. For an artificial intelligence to work, some form of pattern recognition is needed; in this case, association rule analysis was used, built on the FP-growth algorithm. Usability tests were conducted before and after installation of the application, examining whether the user experienced stress and how the application worked. The study showed that screen time decreased by one hour and that the number of times the phone was opened was also reduced. This study requires more data, since the user may not have been affected by the application but simply happened to use the phone less.
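For readers unfamiliar with the mining step, here is a hedged sketch of association rule analysis via FP-growth using the mlxtend library; the one-hot event columns are hypothetical, not the thesis's actual usage data:

```python
# Sketch of the pattern-mining step: FP-growth over one-hot usage events,
# then association rules. Requires the mlxtend library; event columns
# are invented for illustration.
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth, association_rules

events = pd.DataFrame([  # hypothetical sessions: which contexts co-occurred
    {"morning": True,  "at_work": True,  "opened_mail": True,  "opened_chat": False},
    {"morning": True,  "at_work": True,  "opened_mail": True,  "opened_chat": True},
    {"morning": False, "at_work": False, "opened_mail": False, "opened_chat": True},
    {"morning": True,  "at_work": False, "opened_mail": True,  "opened_chat": False},
])
frequent = fpgrowth(events, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "confidence"]])
```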
844

Modelagem de propensão ao atrito no setor de telecomunicações / Modeling Attrition Propensity in the Telecommunication Sector

Arruda, Rodolfo Augusto da Silva 12 March 2019 (has links)
Customer satisfaction is key to maintaining the relationship with the company. When customers need to solve a problem, the company needs to provide good service and have the capacity to resolve it. However, mass-market service often makes solutions that are sensitive to customers' needs impossible. Statistical methodology can help the company prioritize customers with a profile likely to complain to a consumer protection agency (ODC), thus avoiding a situation of attrition. In this project, we modeled customer behavior with respect to attrition propensity. Logistic Regression, Random Forest and Genetic Algorithms were tested. The results showed that Genetic Algorithms are a good option for making the model simpler (parsimonious) without loss of performance, and that Random Forest yielded a performance gain but makes the model more complex, both computationally and practically, with regard to deployment in the company's production systems.
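One plausible reading of how Genetic Algorithms yield a parsimonious model is feature-subset selection for logistic regression. The sketch below implements that idea on synthetic data; it is an illustration of the technique under assumed settings, not the author's code:

```python
# Minimal sketch: a genetic algorithm evolving boolean feature masks for a
# parsimonious logistic regression. Data, population size, and the
# parsimony penalty are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    model = LogisticRegression(max_iter=1000)
    # Penalize larger subsets slightly to favor parsimony.
    return cross_val_score(model, X[:, mask], y, cv=3).mean() - 0.002 * mask.sum()

pop = rng.random((20, X.shape[1])) < 0.5            # random feature masks
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(X.shape[1]) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```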
845

Watermarking in Audio using Deep Learning

Tegendal, Lukas January 2019 (has links)
Watermarking is a technique used to mark ownership in media such as audio or images by embedding a watermark, e.g. copyright information, into the media. A good watermarking method should perform this embedding without affecting the quality of the media. Recent methods for watermarking in images use deep learning to embed and extract the watermark. In this thesis, we investigate watermarking in the audible frequencies of audio using deep learning. More specifically, we try to create a watermarking method for audio that is robust to noise in the carrier and that allows the embedded watermark to be extracted from the audio after it has been played over the air. The proposed method consists of two deep convolutional neural networks trained end-to-end on music with simulated noise. Experiments show that the proposed method successfully creates watermarks robust to simulated noise with moderate quality reductions, but it is not robust to the real-world noise introduced by playing and recording the audio over the air.
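A schematic sketch of the two-network idea follows: an embedder CNN adds a bit-string watermark to the waveform and an extractor CNN recovers it after simulated channel noise. The layer sizes, watermark length, and sample rate are illustrative assumptions, not the thesis architecture:

```python
# Sketch (PyTorch, assumptions: 1-D audio, 32-bit watermark) of the
# embed/extract pair described above. Illustrative layer sizes only.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, n_bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1 + n_bits, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )
    def forward(self, audio, bits):
        # Broadcast the watermark bits along time; concatenate as channels.
        b = bits[:, :, None].expand(-1, -1, audio.shape[-1])
        return audio + self.net(torch.cat([audio, b], dim=1))  # residual mark

class Extractor(nn.Module):
    def __init__(self, n_bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, n_bits),
        )
    def forward(self, audio):
        return self.net(audio)  # logits for each watermark bit

audio = torch.randn(4, 1, 16000)                 # 1-second clips at 16 kHz
bits = torch.randint(0, 2, (4, 32)).float()
marked = Embedder()(audio, bits)
noisy = marked + 0.01 * torch.randn_like(marked)  # simulated channel noise
logits = Extractor()(noisy)   # train both nets end-to-end with BCE on bits
```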
846

Optimizing t-SNE using random sampling techniques

Buljan, Matej January 2019 (has links)
The main topic of this thesis is t-SNE, a dimensionality reduction technique that has gained much popularity for its ability to preserve well-separated clusters from a high-dimensional space. Our goal with this thesis is twofold. First, we give an introduction to the use of dimensionality reduction techniques in visualization and, following recent research, show that t-SNE in particular is successful at preserving well-separated clusters. Second, we perform a thorough series of experiments that allow us to draw conclusions about the quality of embeddings obtained by running t-SNE on samples of data drawn with different sampling techniques. We compare pure random sampling, random walk sampling and so-called hubness sampling on a dataset, attempting to find a sampling method that is consistently better at preserving local information than simple random sampling. Throughout our testing, a specific variant of random walk sampling distinguished itself as a better alternative to pure random sampling.
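As a minimal sketch of the simple-random-sampling baseline, the snippet below runs scikit-learn's t-SNE on a random subsample of a standard dataset; the random walk and hubness sampling variants are not reproduced here:

```python
# Minimal sketch: t-SNE on a pure random sample, the baseline the thesis
# compares against. Dataset and sample size are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X = load_digits().data
rng = np.random.default_rng(0)
sample = rng.choice(len(X), size=500, replace=False)   # pure random sampling
embedding = TSNE(n_components=2, random_state=0).fit_transform(X[sample])
print(embedding.shape)  # (500, 2): one 2-D point per sampled observation
```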
847

Materials & Machines: Simplifying the Mosaic of Modern Manufacturing

Birt, Aaron M 25 April 2017 (has links)
Manufacturing in modern society has taken on a different role than in previous generations. Today's manufacturing processes involve many different physical phenomena working in concert to produce the best possible material properties. It is the role of the materials engineer to evaluate, develop, and optimize applications for the successful commercialization of any potential material. Laser-assisted cold spray (LACS) is a solid-state manufacturing process relying on the impact of supersonic particles onto a laser-heated surface to create coatings and near-net structures. A process such as this, which involves thermodynamics, fluid dynamics, heat transfer, diffusion, localized melting, deformation, and recrystallization, is the perfect target for developing a data science framework that enables rapid application development, with the purpose of commercializing such a complex technology on a much shorter timescale than was previously possible. A general framework for such an approach will be discussed, followed by the execution of the framework for LACS. Results from the development of such a materials engineering model will be discussed as they relate to the methods used, the effectiveness of the final fitted model, and the application of such a model to solving modern materials engineering challenges.
848

Segmentation-based Retinal Image Analysis

Wu, Qian January 2019 (has links)
Context. Diabetic retinopathy is the most common cause of new cases of legal blindness in people of working age. Early diagnosis is the key to slowing the progression of the disease and thus preventing blindness. The retinal fundus image is an important basis for judging these retinal diseases. With the development of technology, computer-aided diagnosis is widely used. Objectives. This thesis investigates whether there exist specific regions that could assist in better prediction of retinopathy; that is, it aims to find the region of the fundus image that works best for retinopathy classification using computer vision and machine learning techniques. Methods. An experimental method was used. With image segmentation techniques, the fundus image was divided into regions to obtain an optic disc dataset, a blood vessel dataset, and an "other regions" (regions other than blood vessels and optic disc) dataset. These datasets and the original fundus image dataset were tested on Random Forest (RF), Support Vector Machine (SVM) and Convolutional Neural Network (CNN) models, respectively. Results. The results on different models are inconsistent. Compared to the original fundus image, the blood vessel region exhibits the best performance on the SVM model and the other regions perform best on the RF model, while the original fundus image has higher prediction accuracy on the CNN model. Conclusions. The other-regions dataset has more predictive power than the original fundus image dataset on the RF and SVM models. On the CNN model, extracting regions from the fundus image does not significantly improve predictive performance compared to using the entire fundus image.
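The per-region comparison can be sketched as follows, training the same classifiers on feature matrices from each region; the region features here are random stand-ins, since the actual segmented datasets are not available:

```python
# Illustrative sketch only: same models, per-region feature sets, compare
# cross-validated accuracy. Labels and features are dummies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)         # dummy retinopathy labels
regions = {                               # hypothetical region features
    "optic_disc": rng.random((300, 64)),
    "vessels": rng.random((300, 64)),
    "other": rng.random((300, 64)),
}
for name, X in regions.items():
    for model in (RandomForestClassifier(random_state=0), SVC()):
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(name, type(model).__name__, round(acc, 3))
```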
849

Studies of two-dimensional materials beyond graphene: from first-principles to machine learning approaches

Hanakata, Paul Zakharia Fajar 12 July 2019 (has links)
Monolayers and heterostructures of two-dimensional (2D) electronic materials with spin-orbit interactions offer the promise of observing many novel physical effects. While theoretical predictions of 2D layered materials based on density functional theory (DFT) are many, the DFT approach is limited to small simulation sizes (several nanometers), and thus inhomogeneous strain and boundary effects that are often observed experimentally cannot be simulated within a reasonable time. The aim of this thesis is (i) to study effects of strain on 2D materials beyond graphene using first-principles and tight-binding methods and (ii) to investigate the effects of cuts--"kirigami"--on 2D materials using molecular dynamics and machine learning approaches. The first half of this thesis focuses on the effects of strain on manipulating spin and valley degrees of freedom for two classes of 2D materials--monochalcogenide and lead chalcogenide monolayers--using DFT. A tight-binding (TB) approach is developed to describe the electronic changes in lead chalcogenide monolayers due to strains that often persist in real devices. The strain-dependent TB model allows one to establish a relationship between the Rashba field and the out-of-plane strain or electric polarization from a microscopic view, a connection that is not well understood in the ferroelectric Rashba materials. This framework connecting strain fields and electronic changes is important to overcome the size and computational limitations associated with DFT. The second part of the thesis focuses on defect engineering and design of 2D materials via the "kirigami" technique of introducing different patterns of cuts. A machine learning (ML) approach is presented to provide physical insights and an effective model to describe the physical system. We demonstrate that a machine learning model based on a convolutional neural network is able to find the optimal design from a training data set that is much smaller than the design space.
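A rough sketch of the kind of surrogate model described — a small convolutional network mapping a binary grid of kirigami cuts to a scalar property — is shown below; the grid size, architecture, and target property are assumptions for illustration:

```python
# Sketch under stated assumptions: CNN surrogate from a binary cut grid
# to a scalar property (e.g., yield strain). Not the thesis architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
cuts = (torch.rand(8, 1, 10, 10) < 0.2).float()   # 8 candidate cut patterns
predicted = model(cuts)                           # predicted property per design
print(predicted.shape)                            # torch.Size([8, 1])
# Trained on a modest set of simulated designs, such a surrogate can rank
# the full design space far faster than molecular dynamics.
```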
850

A Forex Trading System Using Evolutionary Reinforcement Learning

Song, Yupu 01 May 2017 (has links)
Building automated trading systems has long been one of the most cutting-edge and exciting fields in the financial industry. In this research project, we built a trading system based on machine learning methods. We used the Recurrent Reinforcement Learning (RRL) algorithm as our fundamental algorithm, and by introducing Genetic Algorithms (GA) into the optimization procedure, we tackled the problems of picking good initial parameter values and dynamically updating the learning speed in the original RRL algorithm. We call this optimization algorithm the Evolutionary Recurrent Reinforcement Learning (ERRL), or GA-RRL, algorithm. ERRL allows us to find many locally optimal solutions more easily and quickly than the original RRL algorithm. Finally, we implemented the GA-RRL system on EUR/USD at the 5-minute level, and the backtest performance showed that our GA-RRL system has potentially promising profitability. In future research we plan to introduce a risk control mechanism, implement the system on different markets and assets, and perform backtests at higher frequencies.
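A toy sketch of the underlying recurrent trading rule, with a crude evolutionary search over its parameters standing in for the GA-RRL optimizer, might look like the following; the simulated returns, lookback window, and transaction cost are invented for illustration:

```python
# Toy sketch, not the authors' system: recurrent rule
# F_t = tanh(w . x_t + u * F_{t-1}) on simulated 5-minute returns, with a
# crude evolutionary search over (w, u) maximizing a Sharpe-like score.
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0, 1e-4, size=2000)      # simulated 5-minute EUR/USD returns
window, cost = 8, 1e-5                  # assumed lookback and transaction cost

def sharpe(params):
    w, u = params[:window], params[window]
    f_prev, rets = 0.0, []
    for t in range(window, len(r)):
        f = np.tanh(w @ r[t - window:t] + u * f_prev)        # position in [-1, 1]
        rets.append(f_prev * r[t] - cost * abs(f - f_prev))  # trading return
        f_prev = f
    rets = np.array(rets)
    return rets.mean() / (rets.std() + 1e-12)

pop = rng.normal(size=(30, window + 1))
for _ in range(20):                     # evolve: keep elites, mutate copies
    elites = pop[np.argsort([sharpe(p) for p in pop])[-10:]]
    pop = np.vstack([elites, elites.repeat(2, axis=0) +
                     rng.normal(0, 0.1, size=(20, window + 1))])
print("best Sharpe:", max(sharpe(p) for p in pop))
```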
