21

Interpretable Machine Learning in Alzheimer’s Disease Dementia

Kadem, Mason January 2023
Alzheimer’s disease (AD) is among the top 10 causes of global mortality, and dementia imposes a yearly $1 trillion USD economic burden. Of particular importance, women and minoritized groups are disproportionately affected by AD, with females at higher risk of developing AD than male cohorts. Differentiating stable mild cognitive impairment (MCI-stable) from early-stage Alzheimer’s disease (MCI-AD) is vital worldwide. Despite genetic markers such as apolipoprotein E (APOE), identification of patients before they develop early stages of MCI-AD, a critical period for possible pharmaceutical intervention, is not yet possible. Based on a review of the literature, three key limitations of existing AD-specific prediction models are apparent: 1) models developed with traditional statistics overlook nonlinear relationships and complex interactions between features; 2) machine learning models are based on difficult-to-acquire, occasionally invasive, manually selected, and costly data; and 3) machine learning models often lack interpretability. Rapid, accurate, low-cost, easily accessible, non-invasive, interpretable and early clinical evaluation of AD is critical if an intervention is to have any hope of success. To support healthcare decision making and planning, and potentially reduce the burden of AD, this research leverages the Alzheimer’s Disease Neuroimaging Initiative (ADNI1/GO/2/3) database and a mathematical modelling approach based on supervised machine learning to identify 1) predictive markers of AD, and 2) patients at the highest risk of AD. Specifically, we implemented a supervised XGBoost classifier with diagnostic (Experiment 1) and prognostic (Experiment 2) objectives. In Experiment 1 (n=441), AD patients (n=72) were classified against healthy controls (n=369); in Experiment 2 (n=738), MCI-stable patients (n=444) were classified against MCI-AD patients (n=294).
In Experiment 1, machine learning tools identified three features (the Everyday Cognition Questionnaire (study partner) total score, the Alzheimer’s Disease Assessment Scale (13 items), and Delayed Total Recall) with ROC AUC scores consistently above 97%. Low performance on delayed recall alone appears to distinguish most AD patients, a finding consistent with the pathophysiology of AD, in which individuals have difficulty storing new information in long-term memory. In Experiment 2, the algorithm identified the major indicators of MCI-to-AD progression by integrating genetic, cognitive-assessment, demographic and brain-imaging data to achieve ROC AUC scores consistently above 87%. This speaks to the multi-faceted nature of MCI progression and the utility of comprehensive feature selection, and these features are important because they are non-invasive and easily collected. As an important focus of this research, the interpretability of the ML models and their predictions was investigated. The interpretable models for both experiments matched the performance of their complex counterparts while improving interpretability, and they provide an intuitive explanation of the decision process, a vital step towards the clinical adoption of machine learning tools for AD evaluation. The models can reliably predict patient diagnosis (Experiment 1) and prognosis (Experiment 2). In summary, our work extends beyond the identification of high-risk factors for developing AD: we identified accessible clinical features, together with clinically operable decision routes, that reliably and rapidly identify the patients at highest risk of developing Alzheimer’s disease, and we addressed the aforementioned limitations by providing an intuitive explanation of the decision process among the high-risk, non-invasive and accessible clinical features that determine a patient’s risk.
/ Thesis / Master of Science in Biomedical Engineering / Early identification of patients at the highest risk of Alzheimer’s disease (AD) is crucial for possible pharmaceutical intervention. Existing prediction models have limitations, including inaccessible data and lack of interpretability. This research used a machine learning approach to identify patients at the highest risk of Alzheimer’s disease and found that certain clinical features, such as specific executive function- related cognitive testing (i.e., task switching), combined with genetic predisposition, brain imaging, and demographics, were important contributors to AD risk. The models were able to reliably predict patient diagnosis and prognosis and were designed to be low-cost, non-invasive, clinically operable and easily accessible. The interpretable models provided an intuitive explanation of the decision process, making it a valuable tool for healthcare decision-making and planning.
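The supervised classification workflow described above can be sketched in a few lines. This is an illustrative example on synthetic data, not the thesis code: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the feature matrix is random rather than ADNI data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Three informative synthetic features (stand-ins for the cognitive
# scores named above) plus five pure-noise features.
X_signal = rng.normal(size=(n, 3))
y = (X_signal.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.hstack([X_signal, rng.normal(size=(n, 5))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.3f}")

# Global interpretability: importances should concentrate on columns 0-2.
print("Top feature:", int(np.argmax(clf.feature_importances_)))
```

In the thesis setting, real ADNI features (and, typically, SHAP-style attributions) would play the role shown here by `feature_importances_`.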
22

Development and validation of prediction models for the discharge destination of elderly patients with aspiration pneumonia / 誤嚥性肺炎の高齢患者における退院先予測モデルの開発と検証

Hirota, Yoshito 24 July 2023
Kyoto University / New-system doctoral program / Doctor of Public Health / Degree no. 甲第24844号 / 社医博第133号 / 新制||社医||13 (University Library) / Kyoto University Graduate School of Medicine, Department of Social Health Medicine / (Chief examiner) Prof. Naoki Kondo, Prof. Koji Kawakami, Prof. Toyohiro Hirai / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Public Health / Kyoto University / DFAM
23

Gene Discovery for Age-related Macular Degeneration

Wang, Yang January 2009
No description available.
24

Developing a Protocol for the External Validation of a Clinical Prediction Model for the Diagnosis of Immune Thrombocytopenia

Mahamad, Syed January 2023
Defined as a platelet count <100×10⁹/L with no known cause, immune thrombocytopenia (ITP) is a diagnosis of exclusion, meaning other thrombocytopenic conditions must be ruled out before establishing the ITP diagnosis. This can lead to errors, unnecessary exposures to expensive and harmful treatments, and increased patient anxiety and distress. In the absence of a standardized diagnostic test, a clinical prediction model, called the Predict-ITP tool, was developed to aid hematologists in establishing the ITP diagnosis among patients who present with thrombocytopenia. Based on a cohort of 839 patients referred to an academic hematology clinic and using penalized logistic regression, the following predictor variables for the ITP diagnosis were identified: 1) high platelet variability index; 2) lowest platelet count; 3) highest mean platelet volume; and 4) history of a major bleed. Internal validation was completed using bootstrap resampling, and showed good discrimination and excellent calibration. Following internal validation and prior to implementation, the Predict-ITP Tool must undergo external validation by evaluating the tool’s performance in a different cohort. A study protocol was developed with the objective of externally validating the Predict-ITP Tool by collecting data from 960 patients from 11 clinics across Canada. The tool will compute the probability of ITP using information available at the time of the initial consultation, and results will be compared with either the local hematologist’s diagnosis at the end of follow-up or the adjudicated diagnosis. Discrimination (the ability to differentiate between patients with and without ITP) and calibration (the agreement between predicted and actual classifications) of the tool will be assessed. The Predict-ITP Tool must demonstrate good discrimination (c-statistic ≥ 0.8) and excellent calibration (calibration-in-the-large close to 0; calibration slope close to 1) to achieve external validation.
If implemented, this tool will improve diagnostic accuracy and reduce delays in diagnosis and unnecessary treatments and investigations. / Thesis / Master of Science (MSc) / The lack of a standardized test to diagnose immune thrombocytopenia (ITP) leads to delays in care, use of incorrect treatments, and increased patient anxiety. The Predict-ITP Tool was developed to classify patients as ITP or non-ITP using the following data: 1) platelet counts in the recent past; 2) the highest mean platelet volume; and 3) major bleeding at any time in the past. The preliminary internal validation study showed promise. I developed a study protocol to externally validate the Predict-ITP Tool, collecting data from 960 patients from 11 clinics across Canada to see how accurately the tool would have classified patients as ITP or non-ITP at the first hematology visit compared with the gold-standard clinical diagnosis by the hematologist or an independent expert committee. A successful external validation demonstrating the tool’s predictive accuracy in an external population must be completed before widespread use.
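The discrimination and calibration criteria named above (c-statistic ≥ 0.8, calibration slope near 1, calibration-in-the-large near 0) can be computed as sketched below on synthetic data. The predictors and effect sizes are invented, not the Predict-ITP variables, and the calibration intercept here is estimated jointly with the slope rather than with the slope fixed at 1, as a simplification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))  # four invented predictors
true_beta = np.array([1.0, -0.8, 0.6, 0.9])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(int)

# Penalized (L2) logistic regression, fit on a "derivation" half.
model = LogisticRegression(penalty="l2", C=1.0).fit(X[:1000], y[:1000])
p = model.predict_proba(X[1000:])[:, 1]
y_val = y[1000:]

c_stat = roc_auc_score(y_val, p)  # discrimination (c-statistic = ROC AUC)

# Calibration: regress the observed outcome on the linear predictor.
lp = np.log(p / (1 - p))
cal = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y_val)
slope = float(cal.coef_[0, 0])        # ideal value: 1
intercept = float(cal.intercept_[0])  # ideal value: 0
print(f"c-statistic={c_stat:.3f}, slope={slope:.2f}, intercept={intercept:.2f}")
```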
25

Derivation and validation of clinical prediction model of postoperative clinically important hypotension in patients undergoing noncardiac surgery

Yang, Stephen January 2020
Introduction: Postoperative medical complications are often preceded by a period of hypotension. Postoperative hypotension is poorly described in the literature, and data are needed to determine the incidence of and risk factors for clinically important postoperative hypotension after noncardiac surgery. Methods: The incidence of postoperative clinically important hypotension was examined in a cohort of 40,004 patients enrolled in the VISION (Vascular Events in Noncardiac Surgery Patients Cohort Evaluation) Study. Eligible patients were ≥45 years of age, underwent an in-patient noncardiac surgical procedure, and required a general or regional anesthetic. I fit a multivariable logistic regression model to determine the predictors of postoperative clinically important hypotension, and validated the model using calibration and discrimination. Results: Of the 40,004 patients included, 20,442 were selected for the derivation cohort and 19,562 for the validation cohort. The incidence of clinically important hypotension in the entire cohort was 12.4% (4,959 patients) [95% confidence interval 12.1-12.8]. Using 41 variables covering baseline characteristics, preoperative hemodynamics, laboratory values, and type of surgery, I developed a model to predict the risk of clinically important postoperative hypotension (bias-corrected C-statistic: 0.73). The prediction model was slightly improved by adding intraoperative variables (bias-corrected C-statistic: 0.75). A simplified prediction model using high-risk surgery, preoperative systolic blood pressure <130 mm Hg, preoperative heart rate >100 beats per minute, and open surgery also predicted clinically important hypotension, albeit with less accuracy (bias-corrected C-statistic: 0.68).
Conclusion: Our clinical prediction model can accurately predict patients’ risk of postoperative clinically important hypotension after noncardiac surgery. This model can help identify which patients should have enhanced monitoring after surgery and which patients to include in clinical trials evaluating interventions to prevent postoperative clinically important hypotension. / Thesis / Master of Science (MSc) / Many patients undergoing noncardiac surgery develop postoperative clinically important hypotension, which may lead to complications including death, stroke, and myocardial infarction. I performed a large observational study to examine which risk factors predict clinically important postoperative hypotension. Once these risk factors are identified, we will use them to conduct randomized trials in patients at risk of clinically important hypotension to determine if we can prevent major postoperative complications.
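As a purely hypothetical illustration of how a simplified bedside rule of this kind is applied: the thesis names the four predictors, but the equal-weight point count below is an assumption for illustration, not the published model's coefficients.

```python
def hypotension_risk_points(high_risk_surgery: bool,
                            sbp_below_130: bool,
                            hr_above_100: bool,
                            open_surgery: bool) -> int:
    """Count how many of the four predictors are present (0-4)."""
    return sum([high_risk_surgery, sbp_below_130, hr_above_100, open_surgery])

# Hypothetical patient: open high-risk surgery, SBP 120 mm Hg, HR 88 bpm.
points = hypotension_risk_points(True, True, False, True)
print(points)  # → 3
```

A real implementation would map the fitted regression coefficients to per-predictor weights and the total score to a predicted probability.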
26

Medication-related risk factors and its association with repeated hospital admissions in frail elderly: A case control study

Cheong, V-Lin, Sowter, Julie, Scally, Andy J., Hamilton, N., Ali, A., Silcock, Jonathan 14 February 2019
Repeated hospital admissions are prevalent in older people, yet the role of medication in repeated hospital admissions has not been widely studied. We hypothesized that medication-related risk factors for initial hospital admissions are also associated with repeated hospital admissions, and examined this association in older people living with frailty. A retrospective case-control study was carried out with 200 patients aged ≥75 years with unplanned medical admissions to a large teaching hospital in England between January and December 2015. Demographic, clinical, and medication-related data were obtained from review of discharge summaries. Statistical comparisons were made between patients with 3 or more hospital admissions during the study period (cases) and those with 2 or fewer admissions (controls), and regressions were performed to establish independent predictors of repeated hospital admissions. Participants had a mean age of 83.8 years (SD 5.68) and 65.5% were female. There were 561 admission episodes across the sample, with the main reasons for admission recorded as respiratory problems (25%) and falls (17%). Univariate logistic regression revealed five medication-related risk factors associated with repeated hospital admissions: hyper-polypharmacy (defined as taking ≥10 medications) (OR 2.50, p < 0.005); prescription of potentially inappropriate medications (PIMs) (OR 1.89; p < 0.05); prescription of a diuretic (OR 1.87; p < 0.05); the number of high-risk medications (OR 1.29; p < 0.05); and the number of 'when required' medications (OR 1.20; p < 0.05). However, the effects of these risk factors became non-significant when comorbid disease was adjusted for in a multivariable model. Medication-related risk factors may nevertheless play an important role in future repeated-admission risk prediction models.
The modifiable nature of medication-related risk factors highlights a real opportunity to improve health outcomes.
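The univariate odds ratios reported above come from exponentiating logistic-regression coefficients (OR = exp(β)). A hedged sketch on simulated data, with an effect size chosen near the reported hyper-polypharmacy OR of 2.50 (the data and variable names are invented, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
exposure = rng.integers(0, 2, size=n)  # e.g. hyper-polypharmacy yes/no
# Simulate readmission with a true log-odds effect of 0.9 (OR ≈ 2.46).
logit = -1.0 + 0.9 * exposure
readmitted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Effectively unpenalized fit; the odds ratio is exp(coefficient).
model = LogisticRegression(C=1e6).fit(exposure.reshape(-1, 1), readmitted)
odds_ratio = float(np.exp(model.coef_[0, 0]))
print(f"Estimated OR: {odds_ratio:.2f}")
```

Adjusting for comorbidity, as in the study's multivariable model, simply means adding those covariates as further columns before fitting.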
27

Modelování predikce bankrotu stavebních podniků / Bankruptcy prediction modelling in construction business

Burdych, Filip January 2017
This master thesis deals with bankruptcy prediction models for construction companies doing business in the Czech Republic. Terms important for understanding the issue are defined in the theoretical part. In the analytical part, five current bankruptcy prediction models are tested on the analysed sample and their accuracy is compared with the originally reported accuracy. On the basis of the knowledge acquired, a new bankruptcy prediction model is developed.
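For context, one classic model of the kind such studies typically test is the original Altman (1968) Z-score for publicly traded firms; whether it was among the five models evaluated in this thesis is not stated, so the sketch below is purely illustrative, with hypothetical input figures.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Original Altman (1968) Z-score for publicly traded firms."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Illustrative figures for a hypothetical firm (e.g. millions of CZK).
z = altman_z(working_capital=50, retained_earnings=150, ebit=60,
             market_equity=400, sales=600, total_assets=500,
             total_liabilities=250)
print(f"Z = {z:.2f}")  # > 2.99 is the "safe" zone; < 1.81 signals distress
```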
28

Machine learning approach for crude oil price prediction

Abdullah, Siti Norbaiti binti January 2014
Crude oil prices impact the world economy and are thus of interest to economic experts and politicians. The oil price’s volatile behaviour, which has moulded today’s world economy, society and politics, has motivated and continues to motivate researchers, and is expected to prompt new and interesting research challenges. In the present research, machine learning and computational intelligence utilising historical quantitative data, together with the linguistic element of online news services, are used to predict crude oil prices via five different models: (1) the Hierarchical Conceptual (HC) model; (2) the Artificial Neural Network-Quantitative (ANN-Q) model; (3) the Linguistic model; (4) the Rule-based Expert model; and, finally, (5) the Hybridisation of Linguistic and Quantitative (LQ) model. First, to understand the behaviour of the crude oil price market, the HC model functions as a platform to retrieve information that explains the behaviour of the market, retrieved from Google News articles using the keyword “Crude oil price”. Through a systematic approach, price data are classified into categories that explain the crude oil price’s level of impact on the market, distinguishing the crucial behaviour information contained in the articles. These distinguished data features are ranked hierarchically according to their level of impact and used as a reference to discover the numeric data implemented in model (2). Model (2) is developed to validate the features retrieved in model (1). It introduces the Back Propagation Neural Network (BPNN) technique as an alternative to conventional techniques used for forecasting the crude oil market. The BPNN technique is shown in model (2) to produce more accurate and competitive results, and the features retrieved from model (1) are likewise validated as drivers of market volatility.
In model (3), a more systematic approach is introduced to extract the features from the news corpus. This approach applies a content utilisation technique to news articles and mines news sentiments by applying a fuzzy grammar fragment extraction. To extract the features from the news articles systematically, a domain-customised ‘dictionary’ containing grammar definitions is built beforehand. These retrieved features are used as the linguistic data to predict the market’s behaviour with crude oil price. A decision tree is also produced from this model which hierarchically delineates the events (i.e., the market’s rules) that made the market volatile, and later resulted in the production of model (4). Then, model (5) is built to complement the linguistic character performed in model (3) from the numeric prediction model made in model (2). To conclude, the hybridisation of these two models and the integration of models (1) to (5) in this research imitates the execution of crude oil market’s regulators in calculating their risk of actions before executing a price hedge in the market, wherein risk calculation is based on the ‘facts’ (quantitative data) and ‘rumours’ (linguistic data) collected. The hybridisation of quantitative and linguistic data in this study has shown promising accuracy outcomes, evidenced by the optimum value of directional accuracy and the minimum value of errors obtained.
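The backpropagation technique that model (2) relies on can be illustrated with a minimal numpy network. This toy learns XOR rather than crude-oil prices; the architecture, seed and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    p = sigmoid(h @ W2 + b2)
    d_out = p - y                        # cross-entropy gradient at the output
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1 - h)   # backpropagate through the hidden layer
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad              # plain gradient-descent update

final_pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(final_pred.ravel(), 2))  # should approach [0, 1, 1, 0]
```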
29

Characterizing the permeability of concrete mixes used in transportation applications: a neuronet approach

Yasarer, Hakan I. January 1900
Master of Science / Department of Civil Engineering / Yacoub M. Najjar / Reliable and economical design of Portland Cement Concrete (PCC) pavement structural systems relies on various factors, among which is the proper characterization of the expected permeability response of the concrete mixes. Permeability is a highly important factor that strongly relates the durability of concrete structures and pavement systems to changing environmental conditions. One of the most common environmental attacks causing deterioration of concrete structures is corrosion of reinforcing steel due to chloride penetration. On an annual basis, corrosion-related structural repairs typically cost millions of dollars. This durability problem has received widespread interest in recent years due to its incidence rate and the associated high repair costs; material characterization is therefore one of the best ways to reduce repair costs. To properly characterize the permeability response of a PCC pavement structure, the Kansas Department of Transportation (KDOT) generally runs the Rapid Chloride Permeability test, which determines the resistance of concrete to penetration of chloride ions, and the Boil test, which determines the percent voids in hardened concrete. The Rapid Chloride test measures the number of coulombs passing through a concrete sample over a period of six hours at concrete ages of 7, 28, and 56 days; the Boil test measures the volume of permeable pore space of the sample over a period of five hours at the same ages. In this research, backpropagation Artificial Neural Network (ANN)-based and regression-based permeability response prediction models for the Rapid Chloride and Boil tests are developed using the databases provided by KDOT, in order to reduce or eliminate the duration of the testing period.
Moreover, another set of ANN- and regression-based permeability prediction models, based on mix-design parameters, is developed using datasets obtained from the literature. The backpropagation ANN learning technique proved to be an efficient methodology for producing relatively accurate permeability response prediction models. Comparison of the prediction accuracy of the developed ANN and regression models showed that the ANN models outperformed their regression-based counterparts. Overall, it can be inferred that the developed ANN-based permeability prediction models are effective and applicable in characterizing the permeability response of concrete mixes used in transportation applications.
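The ANN-versus-regression comparison drawn above can be sketched on a synthetic nonlinear response; the data below are invented (a quadratic "permeability-like" target), not the KDOT test databases, and scikit-learn's MLPRegressor stands in for the thesis's backpropagation networks.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(400, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=400)  # nonlinear response

X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]
lin = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

# On a nonlinear target the network should clearly beat the linear fit.
r2_lin = r2_score(y_te, lin.predict(X_te))
r2_ann = r2_score(y_te, ann.predict(X_te))
print(f"linear R^2 = {r2_lin:.2f}, ANN R^2 = {r2_ann:.2f}")
```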
30

A Timescale Estimating Model for Rule-Based Systems

Moseley, Charles Warren 12 1900
The purpose of this study was to explore the subject of timescale estimating for rule-based systems. A model for estimating the timescale necessary to build rule-based systems was built and then tested in a controlled environment.
