41

Hodnocení dopadů veřejných podpor na rozvoj podniků v ČR / The Impact of Public Support on Development of Enterprises in the Czech Republic

Loun, Jakub January 2011 (has links)
Diploma thesis The Impact of Public Subsidies on Development of Enterprises in the Czech Republic examines the real impact of subsidies on profit, revenues, and debt. A counterfactual impact evaluation is applied to companies subsidised from structural funds. The indicators were examined with the difference-in-differences method on a sample of 1,738 companies subsidised from OP Entrepreneurship and Innovation and a control group of the same size. The impact of public support on the profit of subsidised companies was quantified at 608 to 5,547 thousand CZK and the impact on sales at 14,713 to 42,511 thousand CZK; each 1 CZK of subsidy increased the profit of supported companies by 0.05 to 0.44 CZK and sales by 1.16 to 3.35 CZK. The hypothesis that the growth of supported firms increased their debt was not confirmed. A slight positive effect of support on growth in return on equity was also identified.
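For readers unfamiliar with the technique, the core difference-in-differences calculation is small enough to sketch directly; the numbers below are toy values, not the thesis data.

```python
# Minimal difference-in-differences sketch (toy numbers, not the thesis data):
# the DiD estimate is the treated group's pre-to-post change minus the
# control group's pre-to-post change.
import numpy as np

profit_treated_pre  = np.array([100.0, 110.0])
profit_treated_post = np.array([140.0, 150.0])
profit_control_pre  = np.array([ 98.0, 104.0])
profit_control_post = np.array([101.0, 107.0])

did = ((profit_treated_post.mean() - profit_treated_pre.mean())
       - (profit_control_post.mean() - profit_control_pre.mean()))
print("DiD estimate of the subsidy effect on profit:", did)  # 37.0
```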
42

Understanding and Improving Coordination Efficiency in the Minimum Effort Game: Counterfactual- and Behavioral-Based Nudging and Cognitive Modeling

Hough, Alexander R. 27 May 2021 (has links)
No description available.
43

Is it remembered or imagined? The phenomenological characteristics of memory and imagination

Branch, Jared 14 April 2020 (has links)
No description available.
44

The Impact of Counterfactual Thinking on the Career Motivation of Early Career Women Engineers: A Q Methodology Study

Desing, Renee January 2020 (has links)
No description available.
45

Counterfactual explanations for time series

Schultz, Markus January 2022 (has links)
Time series are used in healthcare, meteorology, and many other fields, and extensive research has produced distance measures and classification algorithms for them. Once a time series has been classified, one can ask what changes to the series would make the classifier assign it a different class; a time series with such changes applied is known as a counterfactual explanation. Model-dependent methods for creating counterfactual explanations exist, but the literature lacks a model-agnostic method for creating counterfactual explanations for time series. This study therefore asks: how does a model-agnostic method for counterfactuals for time series perform, in terms of cost and compactness, compared to model-dependent algorithms? To answer the question, a model-agnostic method named Multi-Objective Counterfactuals For Time Series was created. In the evaluation, it performed better than the model-dependent algorithms in compactness but worse in cost.
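The thesis's Multi-Objective algorithm is not reproduced here, but a minimal model-agnostic counterfactual search for a univariate time series can be sketched as follows; the greedy strategy and function names are illustrative assumptions, and the classifier is treated purely as a black box through its predict function.

```python
# Minimal sketch of a model-agnostic counterfactual search for a univariate
# time series (illustrative; not the thesis's Multi-Objective algorithm).
import numpy as np

def greedy_counterfactual(x, target, predict, reference, max_steps=None):
    """Copy values from a reference series of the target class into x,
    one time step at a time, until the black-box prediction flips.
    Cost ~ L2 distance moved; compactness ~ number of steps changed."""
    cf = x.copy()
    steps = max_steps or len(x)
    # Try the time steps where x differs most from the reference first.
    order = np.argsort(-np.abs(x - reference))
    for i in order[:steps]:
        cf[i] = reference[i]                 # perturb one time step
        if predict(cf) == target:            # black-box call only
            changed = int(np.sum(cf != x))   # compactness
            cost = float(np.linalg.norm(cf - x))
            return cf, cost, changed
    return None, None, None                  # no counterfactual found
```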
46

The Effects of Counterfactual Thinking on Readiness to Change Smoking-Related Behaviors

Eavers, Erika R. 29 May 2013 (has links)
No description available.
47

[en] OCEANUI: INTERFACE FOR COUNTERFACTUAL EXPLANATIONS GENERATION / [pt] OCEANUI: INTERFACE PARA GERAÇÃO DE EXPLICAÇÕES CONTRAFACTUAIS

MOISES HENRIQUE PEREIRA 22 August 2022 (has links)
[en] Machine learning (ML) algorithms are becoming incredibly present in our daily lives, from movie and song recommendation systems to high-risk areas like health care, criminal justice, and finance, supporting decision making. But the complexity of those algorithms is increasing while their interpretability is decreasing. Many algorithms and their decisions cannot be easily explained by either developers or users, and the algorithms are not self-explanatory. As a result, mistakes and biases can end up hidden, which can profoundly impact people's lives. Initiatives concerning transparency, explainability, and interpretability are therefore becoming increasingly relevant, as we can see in the General Data Protection Regulation (GDPR), approved in 2016 for the European Union, and in the Brazilian General Data Protection Law (LGPD), approved in 2020. In addition to laws and regulations, several authors consider the use of inherently interpretable algorithms necessary; others show alternatives for explaining black-box algorithms using local explanations, taking the neighborhood of a given point and analyzing the decision boundary in that region; while yet others study the use of counterfactual explanations. Following the path of counterfactuals, we propose a user interface, OceanUI, for the system Optimal Counterfactual Explanations in Tree Ensembles (OCEAN), through which the user generates plausible counterfactual explanations using Mixed Integer Programming and Isolation Forest. The purpose of this user interface is to facilitate counterfactual generation and to allow the user to obtain a personalized, more individually applicable counterfactual by means of restrictions and interactive graphics.
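OCEAN's Mixed Integer Programming formulation is beyond a short sketch, but the role the abstract assigns to the Isolation Forest, keeping generated counterfactuals plausible with respect to the data distribution, can be illustrated as follows; the data and candidate points are stand-ins, not OCEAN's actual pipeline.

```python
# Sketch: screening candidate counterfactuals for plausibility with an
# Isolation Forest, in the spirit of OCEAN (not its actual MIP formulation).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))          # training data (stand-in)

# Fit the anomaly detector on the training distribution.
iso = IsolationForest(random_state=0).fit(X_train)

# Candidate counterfactuals produced by some search procedure.
candidates = rng.normal(size=(10, 4))

# Keep only candidates the forest labels as inliers (prediction == 1),
# i.e. points that look like they come from the data distribution.
plausible = candidates[iso.predict(candidates) == 1]
print(len(plausible), "plausible candidates")
```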
48

Predictive Models for Hospital Readmissions

Shi, Junyi January 2023 (has links)
A hospital readmission can occur due to insufficient treatment or the emergence of an underlying disease that was not apparent during the initial hospital stay. The unplanned readmission rate is often viewed as an indicator of health system performance and may reflect the quality of clinical care provided during hospitalization. Readmissions have also been reported to account for a significant portion of inpatient care expenditures. In an effort to improve treatment quality and clinical outcomes and to reduce hospital operating costs, we present machine learning methods for identifying and predicting potentially preventable readmissions (PPR). In the first part of the thesis, we use logistic regression, extreme gradient boosting, and a neural network to predict 30-day unplanned readmissions. In the second part, we apply association rule analysis to assess the clinical association between the initial admission and the readmission, followed by counterfactual analysis to identify potentially preventable readmissions. This comprehensive analysis can assist health care providers in targeting interventions to effectively reduce preventable readmissions. / Thesis / Master of Science (MSc)
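As a rough illustration of the first part's setup, a minimal 30-day readmission classifier might look like the following; the features and labels are synthetic stand-ins, not the clinical variables used in the thesis.

```python
# Sketch of a 30-day readmission classifier (synthetic stand-in data;
# the features are hypothetical, not the thesis's clinical variables).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))                  # e.g. age, length of stay, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Rank patients by predicted readmission risk; AUROC is a common metric
# for this kind of imbalanced screening task.
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```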
49

User Preference-Based Evaluation of Counterfactual Explanation Methods

Akram, Muhammad Zain January 2023 (has links)
Explainable AI (XAI) has grown into an important field over the years. As more complex AI systems are used in decision-making, the need for explanations of such systems also increases, to ensure transparency and stakeholder trust. This study focuses on a specific type of explanation method, namely counterfactual explanations, which outline what changes should be made to the input to reach a different outcome. The study expands on a previous dissertation in which a proof-of-concept tool was created for comparing several counterfactual explanation methods. This thesis investigates the desirable properties of counterfactual explanation methods along with appropriate metrics, and the identified metrics are then used to evaluate and compare the approaches. The proof-of-concept tool is extended with a properties-to-metrics mapping module, and a user preference-based system is developed that allows users to evaluate different counterfactual approaches depending on their preferences. This addition is a critical step towards providing field researchers with a standardised benchmarking tool.
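As a sketch of the preference-based aggregation idea only: the metric names, scores, and weights below are invented for illustration and are not taken from the thesis or its tool.

```python
# Sketch of preference-weighted scoring of counterfactual methods
# (metric values and weights are made up for illustration).
import numpy as np

# Rows: candidate methods; columns: metrics (proximity, sparsity,
# plausibility), each already normalised so that higher is better.
scores = np.array([
    [0.8, 0.4, 0.7],   # method A
    [0.6, 0.9, 0.5],   # method B
    [0.7, 0.6, 0.8],   # method C
])

# User preference weights over the metrics, summing to 1.
weights = np.array([0.5, 0.2, 0.3])

ranking = scores @ weights                  # weighted aggregate per method
print("preferred method index:", int(np.argmax(ranking)),
      "scores:", ranking.round(3))
```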
50

Implementing the Difference in Differences (DD) Estimator in Observational Education Studies: Evaluating the Effects of Small, Guided Reading Instruction for English Language Learners

Sebastian, Princy 07 1900 (has links)
The present study provides an example of implementing the difference-in-differences (DD) estimator for a two-group, pretest-posttest design with K-12 educational intervention data. The goal is to explore the basis for causal inference via Rubin's potential outcomes framework. The DD method is introduced to educational researchers because it is seldom implemented in educational research. The mathematical formulae and assumptions of DD analytic methods are explored to understand the opportunities and challenges of using the DD estimator for causal inference in educational research. For this example, the teacher intervention effect is estimated with multi-cohort student outcome data. First, the DD method is used to estimate the average treatment effect (ATE) with linear regression as a baseline model. Second, the analysis is repeated using linear regression with cluster-robust standard errors. Finally, a linear mixed-effects analysis is provided with a random-intercept model. The resulting standard errors, parameter estimates, and inferential statistics are compared across the three analyses to identify the analytic method best suited to this context, as sketched below.
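Extending the basic DiD sketch shown earlier, the three specifications the study compares can be illustrated with statsmodels; the data and variable names below are synthetic stand-ins, not the study's student outcome data.

```python
# Sketch of the three DD specifications compared in the study
# (synthetic data; variable names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "treated":   rng.integers(0, 2, n),
    "post":      rng.integers(0, 2, n),
    "classroom": rng.integers(0, 20, n),     # clustering unit
})
df["score"] = (50 + 2 * df["treated"] + 3 * df["post"]
               + 4 * df["treated"] * df["post"] + rng.normal(0, 5, n))

# 1) Plain OLS baseline: the treated:post coefficient is the DD estimate.
ols = smf.ols("score ~ treated * post", data=df).fit()

# 2) Same model with cluster-robust standard errors by classroom.
clustered = smf.ols("score ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["classroom"]})

# 3) Random-intercept mixed model with classroom as the grouping factor.
mixed = smf.mixedlm("score ~ treated * post", df,
                    groups=df["classroom"]).fit()

for name, res in [("OLS", ols), ("Clustered", clustered), ("Mixed", mixed)]:
    print(name, round(res.params["treated:post"], 2),
          round(res.bse["treated:post"], 2))
```

The point estimate barely moves across the three fits; what changes is the standard error, which is why the study compares inferential statistics rather than just coefficients.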
