71

Transformations of Copulas and Measures of Concordance

Fuchs, Sebastian 27 November 2015
Copulas are real functions representing the dependence structure of the distribution of a random vector, and measures of concordance associate with every copula a numerical value in order to allow for the comparison of different degrees of dependence. We first introduce and study a group of transformations mapping the collection of all copulas of fixed but arbitrary dimension into itself. These transformations may be used to construct new copulas from a given one or to prove that certain real functions on the unit cube are indeed copulas. It turns out that certain transformations of a symmetric copula may be asymmetric, and vice versa. Applying this group, we then propose a concise definition of a measure of concordance for copulas. This definition, in which the properties of a measure of concordance are defined in terms of two particular subgroups of the group, provides easy access to the investigation of invariance properties of a measure of concordance. In particular, it turns out that for copulas that are invariant under a certain subgroup, the value of every measure of concordance is zero. We also show that the collections of all transformations which preserve symmetry, the concordance order, or the value of every measure of concordance each form a subgroup, and that these three subgroups are identical. Finally, we discuss a class of measures of concordance in which every element is defined as the expectation with respect to the probability measure induced by a fixed copula having an invariance property with respect to two subgroups of the group. This class is rich and includes the well-known examples of Spearman's rho and Gini's gamma.
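For concreteness, the two measures named at the end admit standard copula-based representations (quoted from the general literature, e.g. Nelsen's textbook treatment, rather than from the thesis itself, which derives them as expectations with respect to invariant copulas):

```latex
\rho(C) = 12 \int_{[0,1]^2} C(u,v)\, du\, dv \;-\; 3,
\qquad
\gamma(C) = 4\left( \int_0^1 C(u,\,1-u)\, du \;-\; \int_0^1 \bigl(u - C(u,u)\bigr)\, du \right).
```

Both vanish at the independence copula C(u,v) = uv and attain 1 (resp. -1) at the comonotone (resp. countermonotone) copula, consistent with the zero-value invariance result described above.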
72

National Systems of Innovation: Evidence from the Industry Level

Savin, Maxim January 2012
No description available.
73

Establishing the minimal sufficient number of measurements to validate a 24h blood pressure recording

Agarwal, Rajiv 17 May 2018
Background: Ambulatory blood pressure (BP) monitoring (ABPM) remains a reference standard, but the number of readings required to make the measurement valid has not been empirically established. Methods: Among 360 patients with chronic kidney disease and 38 healthy controls, BP was recorded twice per hour during the night and three times per hour during the day over 24h using a validated ABPM device; all participants had at least 90% of the expected readings. From each full ABPM recording, a variable number of BP measurements was selected, and the performance of the selected readings was compared against that of the full sample using random or sequential selection schemes. To address whether random or sequential selection affects diagnostic performance in diagnosing hypertension control, we compared the diagnostic decisions reached with the subsample and the full sample using the area under the receiver operating-characteristic curve (AUC ROC). To determine the number of readings needed to achieve over 90% coverage of the mean BP of the full ABPM sample, we ascertained point and confidence interval (CI) estimates based on the selected data. Results: To diagnose hypertension control, the number of randomly drawn readings needed to establish, with 2.5% error, a lower bound of 0.9 for the AUC ROC was 3; for 0.95, 7; and for 0.975, 13. In contrast, the corresponding numbers of readings with serial selection were 18, 30, and 39, respectively. With a random selection scheme, 18 readings provided 80% coverage of the 90th percentile CI of the true systolic BP mean; 26 readings were needed for 90% coverage, and 33 for 95% coverage. With serial selection, the required numbers increased to 42, 47, and 50, respectively. Similar results emerged for diastolic BP. Conclusions: For diagnosing hypertension control, 3 random measurements or 18 serial measurements are sufficient. For quantitative analysis, the minimal sufficient number of 24h ambulatory BP readings is 26 random or 42 serial recordings.
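A minimal sketch of the two selection schemes compared in the Methods, applied to a simulated recording. The reading counts follow the protocol above, but the BP values, seed, and error summary are hypothetical illustrations, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one 24h recording: 2 readings/hour over 8 night hours and
# 3 readings/hour over 16 day hours, per the protocol above.
n_night, n_day = 8 * 2, 16 * 3
full = rng.normal(130, 12, n_night + n_day)  # hypothetical systolic BP trace
full_mean = full.mean()

def random_subsample(x, k):
    """Draw k readings at random without replacement."""
    return rng.choice(x, size=k, replace=False)

def serial_subsample(x, k):
    """Take the first k consecutive readings of the recording."""
    return x[:k]

for k in (3, 18, 26, 42):
    err_r = random_subsample(full, k).mean() - full_mean
    err_s = serial_subsample(full, k).mean() - full_mean
    print(f"k={k:2d}  random error={err_r:+6.2f}  serial error={err_s:+6.2f}")
```

Random draws spread across the whole recording and so track the 24h mean with fewer readings, whereas consecutive readings sample only part of the circadian cycle, which is consistent with the larger serial counts reported above.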
74

A Delphi Study To Construct A Script Concordance Test For Spiritual And Religious Competence In Counseling

Christmas, Christopher 01 January 2013
The need to address spiritual and religious issues is well established in the counseling literature and in accreditation standards; however, many graduating counseling students do not feel prepared to address these issues. In the United States, the vast majority of clients consider themselves spiritual or religious, so counselors who lack competence in addressing spiritual and religious issues in counseling are likely to offer ineffective or perhaps unethical care to clients. Counselor educators must improve education and assessment in this critical specialty area of counseling. Of primary concern is a student's ability to demonstrate spiritual competence in counseling. The 2009 ASERVIC Spiritual Competencies offer the most comprehensive standard of spiritual competence in counseling of any mental health profession; however, there is no reliable, standardized assessment that measures demonstrated spiritual competency. Competency can best be measured when the examinee makes choices in a context similar or identical to that in which he or she will practice; therefore, an effective competency measurement must include client cases. The purpose of this study was to investigate whether a case-based assessment for measuring clinical judgment in situations of uncertainty, called a Script Concordance Test, could be constructed by experts using the Delphi Method. This instrument was based on the 2009 ASERVIC Spiritual Competencies as the standard for demonstrated competence. The results of this study indicated that expert practitioners and educators could come to consensus on appropriate cases, appropriate competencies to measure in each case, items to assess competency in each case, and an instrument that included items assessing all 14 of the 2009 Spiritual Competencies. Additionally, the constructed instrument demonstrated excellent test-retest reliability and adequate internal reliability. There are several implications for counselor education. First, this study provides evidence that expert practitioners and educators can come to consensus to construct a highly contextual instrument to measure clinical decision-making about spiritual competence in counseling. Second, a promising new type of instrument with excellent reliability and strong content validity has been introduced to the field of counselor education. Third, with appropriate assessment, counselor education programs can begin to measure student competence, in terms of clinical judgment, in addressing spiritual and religious issues in counseling over time, because this instrument is appropriate for use at different intervals throughout professional development. Fourth, the format of this instrument is also useful for educational purposes and reflective practice. Finally, the theoretical foundations of the Delphi Method and script concordance tests are compatible with one another and with instrument development. The researcher recommends that future studies constructing script concordance tests for other specialty areas of competence employ and refine this method.
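For readers unfamiliar with the instrument, script concordance tests are conventionally scored by aggregate (panel-based) partial credit: each answer option earns credit proportional to how many panel experts chose it. The sketch below shows that standard rule on a hypothetical panel; the scoring used in this particular study may differ:

```python
from collections import Counter

def sct_item_scores(panel_answers):
    """Aggregate scoring for one script concordance test item.

    A sketch of the conventional SCT aggregate-scoring rule; each
    option earns (panelists choosing it) / (count of the modal answer).
    """
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {option: n / modal for option, n in counts.items()}

# Hypothetical 10-member expert panel rating one clinical vignette on a
# -2..+2 Likert scale; an examinee choosing +1 earns full credit here.
panel = [1, 1, 1, 1, 0, 0, 1, -1, 1, 0]
print(sct_item_scores(panel))  # {1: 1.0, 0: 0.5, -1: 0.1666...}
```

This scoring rewards judgments that concord with expert opinion even when experts disagree among themselves, which is exactly the "clinical judgment under uncertainty" the instrument targets.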
75

Foreign Accent, Trust, and Healthcare: The Impact of English-accented Spanish on the Latino Patient-Healthcare Professional Relationship

Pinillos Chavez, Paloma January 2022
No description available.
76

Concordance and the risk of military intervention in post-military states: A comparative case study of Indonesia and Myanmar

Svenheim Paldanius, Elvira January 2023
The 2021 military coup in Myanmar is part of a much bigger trend towards democratic regression in Southeast Asia, in which military influence has played an important role. Previous research on the Southeast Asian region suggests that the citizenry has been overlooked in understanding how civil-military relations have been shaped. Rebecca L. Schiff's concordance theory holds that when concordance, i.e., agreement, exists between the military, the political leadership, and the citizenry on four indicators, (1) the social composition of the officer corps, (2) political decision-making procedures, (3) recruitment method, and (4) military style, military intervention in domestic politics is less likely to occur. The aim of this thesis is to conduct a comparative case study of Myanmar and Indonesia to understand how the three actors have shaped their respective civil-military relations. By applying concordance theory, a comparison is made to assess the theory's predictive and explanatory power across the two cases. The results suggest that the political developments of both cases are in line with the theory. Indonesia demonstrates a higher degree of concordance on all indicators and did not experience a military intervention in the studied time period. By comparison, Myanmar demonstrates a low degree of concordance on all indicators, and military intervention in domestic politics is correspondingly common. However, a lack of data on some indicators weakens these claims. Future research should collect primary material to analyse the concordance of all four indicators in depth and to ensure an accurate representation of the citizenry in both cases.
77

Machine Learning Survival Models: Performance and Explainability

Alabdallah, Abdallah January 2023
Survival analysis is an essential field of statistics and machine learning, with critical applications such as medical research and predictive maintenance. In these domains, understanding models' predictions is paramount. While machine learning techniques are increasingly applied to enhance the predictive performance of survival models, they simultaneously sacrifice transparency and explainability. Survival models, in contrast to regular machine learning models, predict functions rather than the point estimates produced by regression and classification models. This makes it challenging to explain such models using known off-the-shelf machine learning explanation techniques, such as Shapley values and counterfactual examples. Censoring is also a major issue in survival analysis, where the target time variable is not fully observed for all subjects. Moreover, in predictive maintenance settings, recorded events do not always map to actual failures: some components may be replaced because they are considered faulty, or about to fail, based on an expert's opinion. Censoring and noisy labels create modeling and evaluation problems that must be addressed during the development and evaluation of survival models. Considering these challenges and the differences from regular machine learning models, this thesis aims to bridge the gap by facilitating the use of machine learning explanation methods to produce plausible and actionable explanations for survival models. It also aims to enhance survival modeling and evaluation, revealing better insight into the differences among the compared survival models. In this thesis, we propose two methods for explaining survival models, both of which rely on discovering survival patterns in a model's predictions that group the studied subjects into significantly different survival groups. Each pattern reflects a specific survival behavior common to all subjects in its group. We utilize these patterns to explain the predictions of the studied model in two ways. In the first, we employ a classification proxy model that can capture the relationship between the descriptive features of subjects and the learned survival patterns. Explaining such a proxy model using Shapley values provides insight into the feature attribution of belonging to a specific survival pattern. In the second method, we address the "what if?" question by generating plausible and actionable counterfactual examples that would change the predicted pattern of the studied subject. Such counterfactual examples provide insight into the actionable changes required to enhance the survivability of subjects. We also propose a variational-inference-based generative model for estimating the time-to-event distribution. The model relies on a regression-based loss function able to handle censored cases, and on sampling to estimate the conditional probability of event times. Moreover, we propose a decomposition of the C-index into a weighted harmonic average of two quantities: the concordance among the observed events, and the concordance between observed and censored cases. These two quantities, weighted by a factor representing the balance between them, can reveal differences between survival models that are invisible in the total concordance index alone. This gives insight into the performance of different models and its relation to the characteristics of the studied data.
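To make the proposed decomposition concrete, the sketch below separates Harrell's comparable pairs into event-event and event-censored pairs. It is an illustration of the idea only: it combines the two parts by pair counts for reference, whereas the thesis proposes a weighted harmonic average with an explicit balance factor:

```python
import numpy as np

def c_index_parts(time, event, risk):
    """Split Harrell's C-index into event-event and event-censored parts.

    time:  observed time for each subject
    event: 1 if the event was observed, 0 if the subject was censored
    risk:  model risk score (higher score = earlier predicted event)
    """
    ee = np.zeros(2)  # [concordant, total] over event-event pairs
    ec = np.zeros(2)  # [concordant, total] over event-censored pairs
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue  # comparable pairs are anchored at an observed event
        for j in range(n):
            if i == j or time[j] <= time[i]:
                continue  # j must outlive i for the pair to be comparable
            bucket = ee if event[j] == 1 else ec
            bucket[1] += 1
            bucket[0] += risk[i] > risk[j]  # concordant if earlier event has higher risk
    c_ee, c_ec = ee[0] / ee[1], ec[0] / ec[1]
    c_total = (ee[0] + ec[0]) / (ee[1] + ec[1])  # usual pooled C-index
    return c_ee, c_ec, c_total

# Hypothetical toy data: 6 subjects, 2 censored.
t = np.array([2.0, 3.5, 4.0, 5.0, 6.5, 8.0])
e = np.array([1, 1, 0, 1, 0, 1])
r = np.array([0.9, 0.5, 0.8, 0.6, 0.2, 0.1])
print(c_index_parts(t, e, r))  # (0.833..., 0.8, 0.818...)
```

Two models with the same pooled C-index can differ substantially in the two parts, e.g. when one ranks censored cases poorly, which is the kind of difference the decomposition is meant to surface.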
Finally, as part of enhancing survival modeling, we propose an algorithm that can correct erroneous event labels in predictive maintenance time-to-event data. We adopt an expectation-maximization-like approach, utilizing a genetic algorithm to find better labels that maximize the survival model's performance. Over iterations, the algorithm builds confidence about event assignments, which improves the search in subsequent iterations until convergence. Experiments on real and synthetic data show that our proposed methods enhance performance in survival modeling and can reveal the underlying factors contributing to the explainability of survival models' behavior and performance.
78

Adherence to secondary prevention medicines by coronary heart disease patients. First Reported Adherence

Khatib, R. January 2012
Background: Non-adherence to evidence-based secondary prevention medicines (SPM) by coronary heart disease (CHD) patients limits their expected benefits and may result in a lack of improvement or in significant deterioration in health. This study explored self-reported non-adherence to SPM, barriers to adherence, and the perceptions that patients in West Yorkshire have about their medicines, in order to inform practice and improve adherence. Methods: In this cross-sectional study, a specially designed postal survey (the Heart Medicines Survey) assessed medicines-taking behaviour using the 8-item Morisky Medication Adherence Scale (MMAS-8), a modified version of the Single Question (SQ) scale, the Adherence Estimator (AE), the Beliefs about Medicines Questionnaire (BMQ), and additional questions exploring practical barriers to adherence. Patients were also invited to make additional comments about their medicines-taking experience. A purposive sample of 696 patients with long-established CHD who had been on SPM for at least 3 months was surveyed. Ethical approval was granted by the local ethics committee. Results: 503 (72%) patients participated in the survey. 52%, 34% and 11% of patients were prescribed at least four, three and two SPMs, respectively. The level of non-adherence to collective SPM was 44%. The AE predicted that 39% of those patients had an element of intentional non-adherence. The contributions of aspirin, statins, clopidogrel, beta blockers, angiotensin converting enzyme inhibitors (ACEI) and angiotensin receptor blockers (ARBs) to overall non-adherence, as identified by the SQ scale, were 62%, 67%, 7%, 30%, 22% and 5%, respectively. A logistic regression model for overall non-adherence revealed that older age and female gender were associated with less non-adherence (OR = 0.96, 95% CI: 0.94, 0.98; OR = 0.56, 95% CI: 0.34, 0.93, respectively). Specific concern about SPM, having issues with repeat prescriptions, and being on aspirin were associated with more non-adherence (OR = 1.12, 95% CI: 1.07, 1.18; OR = 2.48, 95% CI: 1.26, 4.90; OR = 2.22, 95% CI: 1.18, 4.17, respectively). Other variables were associated with intentional and non-intentional non-adherence. 221 (44%) patients elaborated on their medicines-taking behaviour by providing additional comments about the need for patient-tailored information and better-structured medicines reviews. Conclusions: The Heart Medicines Survey was successful in revealing the prevalence of self-reported non-adherence and barriers to adherence in this population. Healthcare professionals should examine specific modifiable barriers to adherence in their population before developing interventions to improve adherence. Conducting frequent structured medicines reviews, which explore and address patients' concerns about their medicines and healthcare services and enable them to make suggestions, will better inform practice and may improve adherence.
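As background for the reported effect sizes, odds ratios and their confidence intervals are conventionally obtained from fitted logistic regression coefficients as follows (standard practice, not necessarily the exact computation used in the thesis):

```latex
\mathrm{OR} = e^{\hat\beta},
\qquad
95\%\ \mathrm{CI} = \left( e^{\hat\beta - 1.96\,\mathrm{SE}(\hat\beta)},\;
e^{\hat\beta + 1.96\,\mathrm{SE}(\hat\beta)} \right).
```

An OR below 1, such as the 0.56 reported for female gender, corresponds to a negative coefficient, i.e., lower odds of non-adherence; an OR above 1, such as the 2.48 for repeat-prescription issues, corresponds to higher odds.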
79

Tau-Path Test - A Nonparametric Test For Testing Unspecified Subpopulation Monotone Association

Yu, Li January 2009
No description available.
80

Improving the accuracy of statistics used in de-identification and model validation (via the concordance statistic) pertaining to time-to-event data

Caetano, Samantha-Jo January 2020
Time-to-event data is very common in medical research, so clinicians and patients need analyses of such data to be accurate: they are often used to interpret disease screening results, inform treatment decisions, and identify at-risk patient groups (i.e., by sex, race, gene expression, etc.). This thesis tackles three statistical issues pertaining to time-to-event data. The first issue arose from an Institute for Clinical and Evaluative Sciences lung cancer registry data set that was de-identified by censoring patients at an earlier date, resulting in an underestimate of the observed times of censored patients. Five methods were proposed to account for the underestimation incurred by de-identification. A simulation study was then conducted to compare the effectiveness of each method in reducing bias and mean squared error, and in improving the coverage probabilities, of four different Kaplan-Meier (KM) estimates. The simulation results demonstrated that situations with relatively large numbers of censored patients required methods with larger perturbations. In these scenarios, the fourth proposed method (which perturbed censored times such that they were censored in the final year of the study) yielded estimates with the smallest bias, smallest mean squared error, and largest coverage probability. Conversely, when there were smaller numbers of censored patients, any manipulation of the altered data set worsened the accuracy of the estimates. The second issue arises when investigating model validation via the concordance (c) statistic. The c-statistic is intended to measure the accuracy of statistical models that assess the risk associated with a binary outcome; it estimates the proportion of patient pairs in which the patient with the higher predicted risk experienced the event. The definition of the c-statistic cannot be uniquely extended to time-to-event outcomes, so many proposals have been made. The second project developed a parametric c-statistic that assumes the true survival times are exponentially distributed, invoking the memoryless property. A simulation study was conducted, including a comparative analysis against two other time-to-event c-statistics; three different definitions of concordance in the time-to-event setting were compared, as were three different c-statistics. The c-statistic developed by the authors yielded the smallest bias when censoring is present in the data, even when the exponential parametric assumptions do not hold, and appears to be the most robust to censored data. It is therefore recommended for validating prediction models applied to censored data. The third project developed and assessed the appropriateness of an empirical time-to-event c-statistic derived by estimating the survival times of censored patients via the EM algorithm. A simulation study was conducted for various sample sizes, censoring levels, and correlation rates. A non-parametric bootstrap was employed, and the mean and standard error of the bias of four different time-to-event c-statistics were compared, including the empirical EM c-statistic developed by the authors. The newly developed c-statistic yielded the smallest mean bias and standard error in all simulated scenarios and appears to be the most appropriate when estimating the concordance of a time-to-event model.
Thus, this c-statistic is recommended for validating prediction models applied to censored data.
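The exponential assumption behind the second project's parametric c-statistic is attractive because of the memoryless property: for a subject censored at time c, the residual life beyond c has the same exponential distribution as the original survival time (a standard fact, stated here for context):

```latex
P(T > c + t \mid T > c) = \frac{e^{-\lambda (c+t)}}{e^{-\lambda c}} = e^{-\lambda t},
\qquad
\mathbb{E}[T \mid T > c] = c + \frac{1}{\lambda}.
```

This gives a closed-form way to reason about the unobserved event times of censored patients, which is precisely where time-to-event extensions of the c-statistic disagree.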
