1201 |
Understanding the impact of pre-existing dementia on stroke rehabilitation. Longley, Verity. January 2018.
Pre-existing dementia is associated with poorer functional outcome after stroke. It is unclear, however, whether this is due to lack of access to, or inequality in, stroke rehabilitation. This PhD used mixed methods to understand whether pre-existing dementia is a factor clinicians consider when referring/admitting patients for rehabilitation and when providing rehabilitation interventions, and whether the rehabilitation received by patients with and without pre-existing dementia differs. A background literature review informed the first study, a systematic review of the factors influencing clinical decision-making about access to stroke rehabilitation. The systematic review suggested that pre-stroke cognition influenced referrals/admissions to rehabilitation; however, no studies examined this specifically. The qualitative study therefore used interviews (n=23) to explore clinicians' experiences of decision-making about rehabilitation for patients with pre-existing dementia/cognitive impairments. The findings highlighted that clinicians' own knowledge influenced their decision-making, with a common perception that people with pre-existing cognitive impairment lack the potential to benefit from rehabilitation. The third study, a prospective cohort study, examined differences in the rehabilitation received by patients with and without pre-existing cognitive impairments (n=139). People with pre-existing cognitive impairments received less rehabilitation than those without, particularly physiotherapy and referral to community therapies, and more non-patient-facing occupational therapy. This PhD identified that people with pre-existing dementia/cognitive impairment receive less rehabilitation than those without. This may be due, in part, to clinicians' decision-making about which patients should receive stroke rehabilitation.
These findings have multiple clinical implications, particularly around the number of patients in stroke services with undiagnosed pre-existing cognitive impairment. Decisions can become more equitable by ensuring clinicians have access to relevant education, training and skills to work alongside patients with pre-existing dementia/cognitive impairments.
|
1202 |
The decisional determinants of self-prioritization. Golubickis, Marius. January 2018.
No description available.
|
1203 |
Mathematical analysis of security investment strategies and influence of cyber-insurance in networks. January 2012.
Hosts (or nodes) in the Internet often face epidemic risks such as virus and worm attacks. Despite awareness of these risks and the importance of network/system security, investment in security protection is still scarce, and hence epidemic risk remains prevalent. Deciding whether to invest in security protection is an interdependent process: the security investment decision made by one node can affect the security risk of others, and therefore affect their decisions as well. Our first goal is to understand how "network externality" and "node heterogeneity" may affect security adoption. Nodes make decisions on security investment by evaluating the epidemic risk and the expected loss. We characterize this as a Bayesian network game in which nodes have only local information, e.g., the number of neighbors, and minimal common information, e.g., the degree distribution of the network. Our second goal is to study a new form of risk management, called cyber-insurance. We investigate how the presence of a competitive insurance market can affect security adoption, and show that if the insurance provider can observe the protection level of nodes, the insurance market is a positive incentive for security adoption provided the protection quality is not very high. We also find that cyber-insurance is more likely to be a good incentive for nodes with higher degree.
Conversely, if the insurance provider cannot observe the protection level of nodes, we verify that partial insurance can serve as a non-negative incentive: while it does not strictly incentivize protection, it improves nodes' utility. / Detailed summary in vernacular field only. / Yang, Zichao. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 59-65). / Abstracts also in Chinese. / Contents: Abstract / Acknowledgement / Chapter 1 Introduction / Chapter 2 Mathematical Models / 2.1 Epidemic Model / 2.2 Investment Model / 2.3 Bayesian Network Game / Chapter 3 Analysis for Strategic Security Adoption / 3.1 General Case / 3.1.1 Estimating the Probability / 3.1.2 Security Adoption / 3.2 Analysis of Node Heterogeneity: Two Types Case / Chapter 4 Analysis for Cyber-insurance Market / 4.1 Supply of Insurance / 4.2 Cyber-insurance Without Moral Hazard / 4.2.1 Security Adoption with Cyber-insurance Market / 4.2.2 Incentive Analysis / 4.3 Cyber-insurance with Moral Hazard / Chapter 5 Simulation & Numerical Results / 5.1 Validating Final Infection Probability / 5.2 Security Adoption with Externality Effect / 5.3 Influence of Cyber-insurance / Chapter 6 Related Work / Chapter 7 Conclusion / Bibliography
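The security-adoption setting described in this abstract can be sketched numerically. The snippet below estimates, via a percolation-style fixed point, the probability that a random edge in a configuration-model network transmits infection, and applies a simple cost-benefit threshold for a degree-k node's investment decision. The functional forms, parameter names, and transmission model are illustrative assumptions, not the thesis's exact formulation.

```python
def final_infection_prob(degree_dist, q, adoption_rate, alpha=0.5):
    """Fixed-point estimate of the probability that a random edge leads to
    an infected node in a configuration-model network (toy model).
    degree_dist: dict degree -> probability; q: per-edge transmission
    probability; adoption_rate: fraction of protected nodes; alpha:
    protection effectiveness (1 = perfect protection)."""
    mean_deg = sum(k * p for k, p in degree_dist.items())
    # probability that a randomly chosen edge points to a degree-k node
    edge_dist = {k: k * p / mean_deg for k, p in degree_dist.items()}
    theta = 0.5  # initial guess for the edge-transmission probability
    for _ in range(200):
        new_theta = 0.0
        for k, p in edge_dist.items():
            # prob. the node at the end of an edge is infected through at
            # least one of its other k-1 edges
            infected = 1 - (1 - q * theta) ** (k - 1)
            # protected nodes see their risk scaled down by (1 - alpha)
            new_theta += p * ((1 - adoption_rate) * infected
                              + adoption_rate * (1 - alpha) * infected)
        theta = new_theta
    return theta

def invest_decision(k, theta, q, loss, cost, alpha=0.5):
    """A degree-k node invests iff the expected loss reduction from
    protection exceeds the protection cost."""
    p_infect = 1 - (1 - q * theta) ** k
    return alpha * p_infect * loss > cost
```

In this toy version the infection probability, and hence the expected benefit of protection, grows with a node's degree, which is at least consistent with the abstract's observation that incentives operate differently on high-degree nodes.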
|
1204 |
Bayesian criterion-based model selection in structural equation models. / CUHK electronic theses & dissertations collection. January 2010.
Structural equation models (SEMs) are commonly used in the behavioral, educational, medical, and social sciences. Many software packages, such as EQS, LISREL, MPlus, and WinBUGS, can be used for the analysis of SEMs, and many methods have been developed to analyze them. One popular method is the Bayesian approach. An important issue in the Bayesian analysis of SEMs is model selection. In the literature, the Bayes factor and the deviance information criterion (DIC) are commonly used statistics for Bayesian model selection. However, as commented in Chen et al. (2004), the Bayes factor relies on posterior model probabilities, for which proper prior distributions are needed, and specifying prior distributions for all models under consideration is usually a challenging task, in particular when the model space is large. In addition, it is well known that the Bayes factor and posterior model probabilities are generally sensitive to the choice of the prior distributions of the parameters. Furthermore, the computational burden of the Bayes factor is heavy. Alternatively, criterion-based methods are attractive in that they generally do not require proper prior distributions and their computation is quite simple. One commonly used criterion-based method is DIC, which, however, assumes the posterior mean to be a good estimator. For some models, like mixture SEMs, WinBUGS does not provide DIC values. Moreover, if the difference in DIC values is small, reporting only the model with the smallest DIC value may be misleading. In this thesis, motivated by the above limitations of the Bayes factor and DIC, a Bayesian model selection criterion called the Lv measure is considered. It is a combination of the posterior predictive variance and bias, and can be viewed as a Bayesian goodness-of-fit statistic.
The calibration distribution of the Lv measure, defined as the prior predictive distribution of the difference between the Lv measures of the candidate model and the criterion-minimizing model, is discussed to aid a detailed understanding of the Lv measure. The computation of the Lv measure is quite simple, and its performance is satisfactory; it is thus an attractive model selection statistic. In this thesis, the application of the Lv measure to various kinds of SEMs is studied, and illustrative examples are presented to evaluate its performance for model selection in SEMs. To compare different model selection methods, the Bayes factor and DIC are also computed. Moreover, different prior inputs and sample sizes are considered to check the impact of prior information and sample size on the performance of the Lv measure. Throughout, when the performances of two models are similar, the simpler one is selected. / Li, Yunxian. / Adviser: Song Xinyuan. / Source: Dissertation Abstracts International, Volume: 72-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 116-122). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
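The idea of a criterion combining posterior predictive variance with a bias term can be illustrated from posterior predictive samples. The sketch below is a generic L-measure-style statistic under that reading; the weighting parameter `nu` and the exact definition of the thesis's Lv measure for SEMs are assumptions here, not the thesis's formulation.

```python
import numpy as np

def l_measure(pred_samples, y_obs, nu=0.5):
    """Criterion-based model selection statistic in the spirit of the
    Lv measure: total posterior predictive variance plus a weighted
    squared bias relative to the observed data. pred_samples: (S, n)
    array of S posterior predictive replicates of the n observations;
    nu in [0, 1) weights the bias term. Smaller values are better."""
    pred_samples = np.asarray(pred_samples, dtype=float)
    var_term = pred_samples.var(axis=0, ddof=1).sum()   # predictive spread
    bias_term = ((pred_samples.mean(axis=0) - np.asarray(y_obs)) ** 2).sum()
    return var_term + nu * bias_term
```

Comparing candidate models then amounts to simulating predictive replicates under each and preferring the model with the smaller value, exactly the criterion-based workflow the abstract contrasts with Bayes factors and DIC.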
|
1205 |
Bayesian statistical analysis for nonrecursive nonlinear structural equation models. / CUHK electronic theses & dissertations collection. January 2007.
Keywords: Bayesian analysis, Finite mixture, Gibbs sampler, Langevin-Hastings sampler, MH sampler, Model comparison, Nonrecursive nonlinear structural equation model, Path sampling. / Structural equation models (SEMs) have been applied extensively in management, marketing, behavioral, and social sciences for studying relationships among manifest and latent variables. Motivated by the more complex data structures that appear in various fields, more complicated models have recently been developed. In the development of SEMs, a usual assumption is made about the regression of the underlying latent variables on themselves; more specifically, it is generally assumed that the structural equation model is recursive. In practice, however, nonrecursive SEMs are not uncommon, so this fundamental assumption is not always appropriate. / The main objective of this thesis is to relax this assumption by developing efficient procedures for some complex nonrecursive nonlinear SEMs (NNSEMs). The work in this thesis is based on Bayesian statistical analysis of NNSEMs. The first chapter introduces background knowledge about NNSEMs. In Chapter 2, Bayesian estimates of NNSEMs are given, and statistical analysis topics such as standard errors and model comparison are discussed. In Chapter 3, we develop an efficient hybrid MCMC algorithm to obtain Bayesian estimates for NNSEMs with mixed continuous and ordered categorical data, and discuss related statistical analysis topics. In Chapter 4, finite mixture NNSEMs are analyzed with the Bayesian approach. The newly developed methodologies are all illustrated with simulation studies and real examples. Finally, conclusions and discussion are given in Chapter 5. / Li, Yong. / "July 2007." / Adviser: Sik-yum Lee. / Source: Dissertation Abstracts International, Volume: 69-01, Section: B, page: 0398. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 99-111). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
|
1206 |
A study of investment decisions. Lamfalussy, Alexandre. January 1958.
No description available.
|
1207 |
Asset Management Decision Support Tools: a conceptual approach for managing their performance. Lattanzio, Susan. January 2018.
Decision Support Tools (DSTs) are commonly utilised within the Asset Management (AM) operations of infrastructure organisations. These manual or computerised tools are used to support decisions about which assets to acquire and how to operate them. Their performance can therefore have significant financial and non-financial implications for a business. Despite their importance, managing the performance of DSTs after implementation has received only limited attention within the literature. The output of this research is a conceptual approach for managing the performance of decision support tools used within an Asset Management context. It encompasses a risk-based DST Performance Management Process and DST Performance Assessment Techniques (the methods for applying the process in an industry setting). The novelty of the approach lies in: (1) alignment with the fundamental principles of the International Standard for Asset Management, ISO 5500x:2014, and thus consistency between the management of DSTs and that of other asset types; (2) a generic process that is tailored to the context of the specific organisation; (3) consistency with the risk management process (ISO 31000:2009) and satisfaction of the requirements for a quality process defined within the Quality Management Standard (ISO 9000:2015); (4) a cyclical process design ensuring that the approach, and how it is applied within an industry setting, will evolve to reflect the changing environment. A case study and the input of subject matter experts from within National Grid Electricity Transmission were used both to inform and to evaluate the conceptual approach design. A semi-structured interview with a water-sector subject matter expert assesses the transferability of the approach to a wider Asset Management population. The results of the evaluation demonstrate the conceptual approach to be both logical and useable in each context. The future research pathway looks to progress the conceptual approach through to industry adoption.
|
1208 |
Exploring the underlying processes and the long term effects of choice architecture. Crookes, Raymond D. January 2017.
As the applications of choice architecture grow, our goal is to better understand both the short- and long-term effects of our interventions. Many of the world's most pressing and complicated problems require many actions, rather than a single action. Choice architecture has been shown to be effective on one-and-done problems, but what about more complicated problems? Can the tool we choose to influence behavior have a positive or negative effect on the likelihood of taking up a second, or possibly third, behavior? In Chapter 1, we explore the mechanism of risky-choice framing, isolating the effects of attraction and repulsion on the number, and the valence, of thoughts supporting either the risky or the riskless outcome. In Chapter 2, we demonstrate behavioral spillover in a lab setting, showing the effects of default setting not only on the initial behavior but also on subsequent behaviors. In Chapter 3, we explore the effects of different messaging on both short- and long-term behavioral change.
|
1209 |
Toward a Robust and Universal Crowd Labeling Framework. Khattak, Faiza Khan. January 2017.
The advent of fast and economical computers with large electronic storage has led to a large volume of data, most of which is unlabeled. While computers provide expeditious, accurate, and low-cost computation, they still lag behind in many tasks that require human intelligence, such as labeling medical images, videos, or text. Consequently, current research focuses on combining computer accuracy and human intelligence to complete labeling tasks. In most cases labeling needs to be done by domain experts; however, because of the variability in expertise, experience, and intelligence of human beings, experts can be scarce.
As an alternative to using domain experts, help is sought from non-experts, also known as the Crowd, to complete tasks that cannot be readily automated. Since crowd labelers are non-experts, multiple labels per instance are acquired for quality purposes. The final label is obtained by combining these multiple labels. It is very common that the ground truth, the instance difficulty, and the labeler ability are unknown entities. The aggregation task therefore becomes a "chicken and egg" problem to start with.
Despite the fact that much research using machine learning and statistical techniques has been conducted in this area (e.g., [Dekel and Shamir, 2009; Hovy et al., 2013a; Liu et al., 2012; Donmez and Carbonell, 2008]), many questions remain unresolved. These include: (a) What are the best ways to evaluate labelers? (b) It is common to use expert-labeled instances (ground truth) to evaluate labeler ability (e.g., [Le et al., 2010; Khattak and Salleb-Aouissi, 2011; Khattak and Salleb-Aouissi, 2012; Khattak and Salleb-Aouissi, 2013]); what should the cardinality of the set of expert-labeled instances be to obtain an accurate evaluation? (c) Which factors other than labeler expertise (e.g., difficulty of the instance, prevalence of a class, bias of a labeler toward a particular class) can affect labeling accuracy? (d) Is there an optimal way to combine multiple labels to get the best labeling accuracy? (e) Should the labels provided by oppositional/malicious labelers be discarded and those labelers blocked? Or is there a way to use the "information" they provide? (f) How can labelers and instances be evaluated if the ground truth is not known with certitude?
In this thesis, we investigate these questions. We present methods that rely on a small number of expert-labeled instances (usually 0.1%-10% of the dataset) to evaluate various parameters using a frequentist and a Bayesian approach. The estimated parameters are then used for label aggregation to produce one final label per instance.
In the first part of this thesis, we propose a method called Expert Label Injected Crowd Estimation (ELICE) and extend it to different versions and variants. ELICE is based on a frequentist approach for estimating the underlying parameters. The first version of ELICE estimates the parameters, i.e., labeler expertise and data instance difficulty, using the accuracy of crowd labelers on expert-labeled instances [Khattak and Salleb-Aouissi, 2011; Khattak and Salleb-Aouissi, 2012]. The multiple labels for each instance are combined using weighted majority voting. The weights are scores of labeler reliability on a given instance, obtained by inputting the parameters into the logistic function.
In the second version of ELICE [Khattak and Salleb-Aouissi, 2013], we introduce entropy as a way to estimate the uncertainty of labeling. This provides the advantage of differentiating between good, random, and oppositional/malicious labelers. The aggregation step of ELICE version 2 flips the label (for binary classification) provided by an oppositional/malicious labeler, thus utilizing information that other labeling methodologies generally discard.
Both versions of ELICE have a cluster-based variant in which, rather than making a random choice of instances from the whole dataset, clusters of data are first formed using any clustering approach, e.g., K-means. An equal number of instances from each cluster is then chosen randomly to obtain expert labels. This is done to ensure equal representation of each class in the test dataset.
Besides taking advantage of expert-labeled instances, the third version of ELICE [Khattak and Salleb-Aouissi, 2016] incorporates pairwise/circular comparisons of labelers to labelers and instances to instances. The idea is to improve accuracy by using the crowd labels, which, unlike expert labels, are available for the whole dataset and may provide a more comprehensive view of labeler ability and instance difficulty. This is especially helpful when the domain experts do not agree on one label and the ground truth is not known for certain. Incorporating information beyond the expert labels can therefore provide better results.
We test the performance of ELICE on simulated labels as well as real labels obtained from Amazon Mechanical Turk. Results show that ELICE is effective compared to state-of-the-art methods. All versions and variants of ELICE are capable of delaying the phase transition. The main contribution of ELICE is that it makes use of all the information available from the crowd and the experts. We also present a theoretical framework to estimate the number of expert-labeled instances needed to achieve a given labeling accuracy, with experiments demonstrating the utility of the theoretical bound.
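As a minimal sketch of the aggregation step described above: the function below estimates each labeler's accuracy on the expert-labeled subset, converts it to a log-odds weight (so below-chance, oppositional labelers receive negative weight and their labels are effectively flipped, in the spirit of ELICE version 2), and takes a weighted majority vote. Instance difficulty and the actual ELICE scoring details are omitted; the function name and smoothing choices are illustrative assumptions.

```python
import math

def elice_style_aggregate(crowd_labels, expert_labels):
    """Weighted majority voting for binary labels in {-1, +1}.
    crowd_labels: dict labeler -> list of labels over all instances;
    expert_labels: dict instance_index -> true label for the small
    expert-labeled subset. Simplified sketch, not the full algorithm."""
    weights = {}
    for labeler, labels in crowd_labels.items():
        correct = sum(1 for i, truth in expert_labels.items()
                      if labels[i] == truth)
        # smoothed accuracy on the expert-labeled subset (Laplace rule)
        acc = (correct + 1) / (len(expert_labels) + 2)
        # log-odds weight: negative for below-chance labelers, so their
        # votes count in reverse (the "flipping" idea)
        weights[labeler] = math.log(acc / (1 - acc))
    n = len(next(iter(crowd_labels.values())))
    aggregated = []
    for i in range(n):
        score = sum(w * crowd_labels[l][i] for l, w in weights.items())
        aggregated.append(1 if score >= 0 else -1)
    return aggregated, weights
```

With three labelers, one accurate, one noisy, and one consistently oppositional, the negative weight on the oppositional labeler turns systematic wrongness into useful signal rather than discarded noise.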
In the second part of this thesis, we present Crowd Labeling Using Bayesian Statistics (CLUBS) [Khattak and Salleb-Aouissi, 2015; Khattak et al., 2016b; Khattak et al., 2016a], a new approach to crowd labeling that estimates labeler and instance parameters along with label aggregation. Our approach is inspired by Item Response Theory (IRT). We introduce new parameters and refine the existing IRT parameters to fit the crowd labeling scenario. The main challenge is that, unlike in IRT, the ground truth is not known in the crowd labeling case and has to be estimated based on the parameters. To overcome this challenge, we acquire expert labels for a small fraction of the instances in the dataset. Our model estimates the parameters based on the expert-labeled instances, and the estimated parameters are used for weighted aggregation of crowd labels for the rest of the dataset. Experiments conducted on synthetic data and real datasets with heterogeneous-quality crowd labels show that our methods perform better than many state-of-the-art crowd labeling methods.
We also conduct significance tests between our methods and other state-of-the-art methods to assess whether the differences in accuracy are significant. The results show the superiority of our method in most cases. Moreover, we present experiments demonstrating the impact of the accuracy of the final aggregated labels when they are used as training data; the results underscore the need for highly accurate aggregated labels.
In the last part of the thesis, we review past and contemporary research related to crowd labeling, and conclude with the future of crowd labeling and further research directions. To summarize, in this thesis we have investigated different methods for estimating crowd labeling parameters and using them for label aggregation. We hope that our contribution will be useful to the crowd labeling community.
|
1210 |
Essays on Aggregation in Deliberation and Inquiry. Stewart, Rush T. January 2017.
Mathematical aggregation frameworks are general and precise settings in which to study ways of forming a consensus, or group point of view, from a set of potentially diverse points of view. Yet the standard frameworks have significant limitations: a number of results show that certain sets of desirable aggregation properties cannot be simultaneously satisfied. Drawing on work in the theory of imprecise probabilities, I propose philosophically motivated generalizations of the standard aggregation frameworks (for probability, preference, and full belief) that I prove can satisfy the desired properties. I then look at some applications and consequences of these proposals in decision theory, epistemology, and the social sciences.
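The move from single probability functions to sets of them can be illustrated with a toy pooling example: applying linear (weighted-average) pooling to every selection of a member from each agent's set of admissible probabilities yields a set-valued consensus, and unanimity is trivially preserved when all agents hold the same point value. This finite sketch is only an illustration of the idea, not the thesis's formal framework or results.

```python
from itertools import product

def linear_pool(probs, weights):
    """Weighted linear pool of point probabilities for a single event."""
    assert abs(sum(weights) - 1) < 1e-9  # weights must sum to one
    return sum(w * p for w, p in zip(weights, probs))

def set_based_pool(credal_sets, weight_grid):
    """Pool sets of probabilities (imprecise credences) for one event:
    collect the linear pool of every combination of one member from each
    agent's set with every admissible weight vector. Values are rounded
    to tame floating-point noise in this finite toy version."""
    pooled = set()
    for combo in product(*credal_sets):
        for weights in weight_grid:
            pooled.add(round(linear_pool(combo, weights), 10))
    return pooled
```

When both agents assign the same sharp probability, the pooled set collapses back to that value; with genuinely imprecise inputs, the output is a richer set-valued consensus, which is the kind of generalization the abstract describes.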
|