391. Compromise, extremism, and guilt. Poterack, Alex, 07 December 2016.
This dissertation is a study of non-standard economic behavior. The first chapter concerns two widely observed violations of Independence of Irrelevant Alternatives, the compromise and attraction effects. I construct a novel method of representing them by reducing the context of a menu to a frame, encompassing the worst option along each attribute in the menu, and observing a collection of preferences indexed by frames. The agent behaves as though a good’s attractiveness along each attribute is judged relative to the frame, with declining marginal utility. This allows me to give a novel interpretation of the compromise and attraction effects: they are consistent with indifference curves rotating clockwise as the frame moves down, and counterclockwise as it moves left. It also allows me to give a representation theorem characterizing the behavioral axioms associated with a utility representation that takes both a good and the frame as arguments.
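As a purely illustrative reading of this representation, the sketch below implements a frame-dependent utility with a concave log value function and reproduces an attraction effect: adding a decoy dominated by one good shifts the frame and flips the choice. The functional form and all numbers are assumptions of this sketch, not the dissertation's.

```python
import math

def frame_utility(good, frame):
    """Utility of a two-attribute good judged relative to the frame
    (the worst level of each attribute on the menu), with declining
    marginal utility modeled by a concave log transform."""
    return sum(math.log(1.0 + (x - f)) for x, f in zip(good, frame))

def choose(menu):
    """Reduce the menu's context to a frame, then maximize frame utility."""
    frame = tuple(min(g[i] for g in menu) for i in range(2))
    return max(menu, key=lambda g: frame_utility(g, frame))

a = (9.0, 3.2)            # strong on attribute 1
b = (3.0, 9.0)            # strong on attribute 2
decoy = (2.0, 8.0)        # asymmetrically dominated by b

print(choose([a, b]))         # -> a on the two-good menu
print(choose([a, b, decoy]))  # decoy moves the frame left; choice flips to b
```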
The second chapter applies the representation from Chapter One to electoral politics. It shows that incorporating these preferences generates equilibria in which extremist candidates enter plurality elections in order to attractively frame their preferred moderate candidate, even when the extremists themselves have probability zero of obtaining office. While such candidates are frequently observed in elections, and there are papers generating equilibria with centrist sure losers (including Solow (2015)), this is the first paper to generate equilibria with such extremist candidates without unusual assumptions on election rules or non-single-peaked preferences. The chapter constructs a four-candidate equilibrium with two extremist sure-loser candidates, one on each fringe of opinion.
The third chapter concerns the effect of guilt on preferences in the context of gift giving. A decision maker who experiences guilt may derive greater surplus from a gift card that permits guilt-free indulgence, potentially exceeding even the surplus she would receive from an equivalent cash gift. This paper isolates guilt-avoidance behavior by exploiting a multi-period setting that incorporates a distinction between the decision maker’s preferences over what she would receive and what she would choose. A representation inspired by Kopylov (2009) is adapted to this setting, providing a representation theorem for these preferences.
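A stylized sketch of the guilt mechanism described above, under the assumption that choosing an indulgence incurs a guilt cost only when a virtuous alternative is on the menu; it is a toy stand-in, not the Kopylov-style representation the chapter adapts. A gift card, by restricting the menu, yields more surplus here than cash.

```python
def menu_value(menu, guilt=0.6):
    """Anticipated surplus from a menu for a guilt-prone decision maker:
    she picks the best option net of guilt, and guilt applies only when
    an indulgence is chosen while a virtuous option was available."""
    def net(option):
        name, utility, indulgent = option
        virtuous_available = any(not ind for _, _, ind in menu)
        return utility - (guilt if indulgent and virtuous_available else 0.0)
    return max(net(o) for o in menu)

spa = ("spa day", 1.0, True)        # indulgence
food = ("groceries", 0.7, False)    # virtuous use of the same money

print(menu_value([spa, food]))  # cash gift: full menu -> 0.7 (guilt binds)
print(menu_value([spa]))        # gift card: indulgence only -> 1.0
```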
392. Propriedades lógicas de classes de testes de hipóteses / Logical properties of classes of hypotheses tests. Silva, Gustavo Miranda da, 03 November 2014.
When performing simultaneous hypothesis tests, one expects the resulting decisions to be logically consistent with one another. In this work, we establish conditions under which simultaneous Bayes tests satisfy these logical requirements separately or jointly. We show that the restrictions under which simultaneous tests satisfy the conditions separately are quite intuitive; however, requiring the conditions jointly comes at the cost of optimality. Furthermore, we evaluate the relationship between simultaneous Bayes tests and tests generated by estimators; that is, we show that, under some conditions, making a decision based on a Bayes estimator is equivalent to making a decision based on a Bayes test. Finally, we show that a decision based on maximum likelihood estimators must coincide with the decision made by a Bayes test, and we conclude that such decisions are admissible and obey the Likelihood Principle.
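One of the logical conditions can be made concrete with a minimal sketch: tests that reject a hypothesis when its posterior probability falls below a fixed threshold respect monotonicity, in the sense that if A implies B, rejecting B forces rejecting A. The posterior sample and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(0.3, 0.2, size=100_000)    # stand-in posterior sample

def bayes_test(indicator, threshold=0.5):
    """Reject the hypothesis when its posterior probability is low."""
    return np.mean(indicator(theta)) < threshold   # True means 'reject'

def in_A(t):
    return (t > 0) & (t < 0.1)    # A: theta in (0, 0.1)

def in_B(t):
    return t > 0                  # B: theta > 0, so A implies B

# A implies B, hence P(A) <= P(B): rejecting B would force rejecting A.
print("reject A:", bayes_test(in_A))   # True  (P(A) is about 0.09)
print("reject B:", bayes_test(in_B))   # False (P(B) is about 0.93)
```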
393. Bayesian Modeling of Latent Heterogeneity in Complex Survey Data and Electronic Health Records. Anthopolos, Rebecca, January 2019.
In population health, the study of unobserved, or latent, heterogeneity in longitudinal data may help inform public health interventions. Growth mixture modeling is a flexible tool for modeling latent heterogeneity in longitudinal data. However, the application of growth mixture models to certain data types, namely, complex survey data and electronic health records, is underdeveloped. For valid statistical inferences in complex survey data, features of the sample design must be incorporated into statistical analysis. In electronic health records, the application of growth mixture modeling is challenged by high levels of missing values. In this dissertation, I have three goals: First, I propose a Bayesian growth mixture model for complex survey data in which I directly incorporate features of the complex sample design. Second, I extend a Bayesian growth mixture model of multiple longitudinal health outcomes collected in electronic health records to a shared parameter model that can account for different missing data assumptions. Third, I develop open-source software packages in R for each method that can be used for model fitting, selection, and checking.
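A minimal generative sketch of the first goal's target, assuming a two-class growth mixture model and letting sampling weights enter through a weighted (pseudo-)likelihood; this is a generic device for illustration, not necessarily the dissertation's exact Bayesian formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 500, 4                               # subjects, time points
t = np.arange(T)

# Two latent trajectory classes with different intercepts and slopes.
z = rng.binomial(1, 0.4, size=n)            # latent class membership
beta = np.array([[2.0, 0.1],                # class 0: intercept, slope
                 [4.0, -0.3]])              # class 1
basis = np.vstack([np.ones(T), t])          # 2 x T design
y = beta[z] @ basis + rng.normal(scale=0.5, size=(n, T))

# Sampling weights from the survey design (illustrative values).
w = rng.uniform(0.5, 2.0, size=n)

def weighted_loglik(pi, beta, sigma):
    """Design-weighted (pseudo) log-likelihood, marginalizing the class."""
    parts = []
    for k, p in enumerate([1.0 - pi, pi]):
        mu_k = beta[k] @ basis
        ll_k = (-0.5 * np.sum((y - mu_k) ** 2, axis=1) / sigma**2
                - T * np.log(sigma))
        parts.append(np.log(p) + ll_k)
    return np.sum(w * np.logaddexp(parts[0], parts[1]))

print(weighted_loglik(0.4, beta, 0.5))
```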
394. Composing Deep Learning and Bayesian Nonparametric Methods. Zhang, Aonan, January 2019.
Recent progress in Bayesian methods has largely focused on non-conjugate models featuring extensive use of black-box functions: continuous functions implemented with neural networks. Using deep neural networks, Bayesian models can reasonably fit big data while at the same time capturing model uncertainty. This thesis targets a more challenging problem: how do we model general random objects, including discrete ones, using random functions? Our conclusion is that many (discrete) random objects are by nature a composition of Poisson processes and random functions. All discreteness is thus handled through the Poisson process, while random functions capture the remaining complexity of the object. Hence the title: composing deep learning and Bayesian nonparametric methods.
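A toy instance of this composition, in which all discreteness comes from a Poisson process while a random function (here a small random basis expansion standing in for a neural network) supplies each atom's mass; the intensity and function family are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Homogeneous Poisson process on [0, 1] x [0, 5]: the source of all atoms.
rate = 50.0
n_atoms = rng.poisson(rate)
locs = rng.uniform(0.0, 1.0, size=n_atoms)     # atom locations
marks = rng.uniform(0.0, 5.0, size=n_atoms)    # latent marks of the points

# A "random function" maps each Poisson point's mark to a positive mass.
w = rng.normal(size=3)
def random_fn(x):
    return np.exp(w[0] * np.sin(x) + w[1] * np.cos(x) + w[2])

weights = random_fn(marks)
atoms = list(zip(locs, weights))               # the discrete random measure
print(n_atoms, weights.sum())
```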
This conclusion is not a conjecture. In special cases such as latent feature models, we can prove this claim by working on infinite-dimensional spaces, which is where Bayesian nonparametrics comes in. Moreover, we assume some regularity conditions on the random objects, such as exchangeability; the representations then emerge, almost magically, from representation theorems. We will see this twice in the course of this thesis.
One may ask: when a random object is too simple, such as a non-negative random vector in the case of latent feature models, how can we exploit exchangeability? The answer is to aggregate infinitely many random objects, map them altogether onto an infinite-dimensional space, and then assume exchangeability on that space. We demonstrate two examples of latent feature models, (1) concatenating them as an infinite sequence (Sections 2 and 3) and (2) stacking them as a two-dimensional array (Section 4).
We will also see that Bayesian nonparametric methods are useful for modeling discrete patterns in time-series data. We showcase two examples: (1) using variance-Gamma processes to model change points (Section 5), and (2) using Chinese restaurant processes to model speech with switching speakers (Section 6).
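For the second example, a standard Chinese restaurant process sampler is sketched below, with each table read as a speaker identity; it shows only the prior over segment labels, not the full switching-speaker model of Section 6.

```python
import numpy as np

rng = np.random.default_rng(3)

def crp(n, alpha=1.0):
    """Sample a Chinese restaurant process partition of n items."""
    tables = []                          # tables[k] = customers at table k
    labels = []
    for _ in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)             # open a new table (new speaker)
        else:
            tables[k] += 1
        labels.append(k)
    return labels

print(crp(20))   # e.g., speaker labels for 20 utterances
```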
We are also aware that inference can be non-trivial in popular Bayesian nonparametric models. In Section 7, we present a novel online-inference solution for the popular HDP-HMM model.
395. Bayesian analysis of a structural model with regime switching. Shami, Roland G. (Roland George), 1960-. January 2001.
Abstract not available
396. Forensic speaker analysis and identification by computer: a Bayesian approach anchored in the cepstral domain. Khodai-Joopari, Mehrdad. Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW, January 2007.
This thesis advances understanding of the forensic value of automatic speech parameters by addressing the following question: what is the potential of the speech cepstrum as a forensic-acoustic parameter? Despite many advances in automatic speech and speaker recognition, robust and unconstrained progress in technical forensic speaker identification has been partly impeded by our incomplete understanding of the interaction and relation between forensic phonetics and the techniques employed in state-of-the-art automatic speech and speaker recognition. The posed question underlies the recurrent and longstanding issue of acoustic parameterisation in forensic phonetics, where 1) speaker identification must often be carried out under less than optimal conditions, and 2) views differ on the usefulness and trustworthiness of formant frequency measurements. To this end, a new formulation for the forensic evaluation of speech data was derived; it is effectively a spectral likelihood ratio with enhanced sensitivity to the local peaks of the formant structure of the speech spectrum of vowel sounds, while retaining the characteristics of the Bayesian framework. This new hybrid formula was used together with a novel approach, founded on a statistically based matched-pairs technique, to account for the various levels of variation inherent in speech recordings, thereby providing a spectrally meaningful measure of the variation between two speech spectra and hence of the true worth of speech samples as forensic evidence. The experimental results are based on a forensically realistic database of a relatively large population of 297 native speakers of Japanese. In sum, the research conducted in this thesis is a major step forward for the forensic-phonetic field, broadening the objective basis of forensic speaker identification. Beyond advancing knowledge in the field, the semi-data-independent nature of the new formula has great implications for technical forensic speaker identification. It also provides a valuable biometric tool with academic and commercial potential for crime investigation, in a field that already suffers from a lack of adequate data.
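For orientation, the sketch below runs a bare-bones Bayesian likelihood-ratio computation on cepstral-style feature vectors, contrasting a same-source model against a different-source (background population) model. It is a generic baseline on synthetic stand-in data, not the thesis's hybrid spectral formula or matched-pairs technique.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
d = 12                                    # cepstral coefficients per frame

# Synthetic stand-ins: frames from the suspect's reference recordings,
# from the questioned recording, and from a background population.
suspect = rng.normal(0.0, 1.0, size=(200, d))
questioned = rng.normal(0.1, 1.0, size=(50, d))
background = rng.normal(0.5, 1.5, size=(2000, d))

def gaussian_fit(x):
    return multivariate_normal(mean=x.mean(axis=0), cov=np.cov(x.T))

same_source = gaussian_fit(suspect)       # H_ss: same speaker
diff_source = gaussian_fit(background)    # H_ds: someone from the population

# Forensic log likelihood ratio over the questioned frames.
log_lr = (same_source.logpdf(questioned).sum()
          - diff_source.logpdf(questioned).sum())
print(log_lr)
```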
397. Flexible Bayesian modelling of gamma ray count data. Leonte, Daniela. School of Mathematics, UNSW, January 2003.
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. In this thesis we describe computational methods for Bayesian inference and model selection for generalized linear models, which improve on existing techniques. These methods are applied to the building of flexible models for gamma ray count data (data measuring the natural radioactivity of rocks) at the Castlereagh Waste Management Centre, which served as a hazardous waste disposal facility for the Sydney region between March 1978 and August 1998. Bayesian model selection methods for generalized linear models enable us to approach problems of smoothing, change point detection and spatial prediction for these data within a common methodological and computational framework, by considering appropriate basis expansions of a mean function.

The data at Castlereagh were collected in the following way. A number of boreholes were drilled at the site, and for each borehole a gamma ray detector recorded gamma ray emissions at different depths as the detector was raised gradually from the bottom of the borehole to ground level. The profile of intensity of gamma counts can be informative about the geology at each location, and estimation of intensity profiles raises problems of smoothing and change point detection for count data. The gamma count profiles can also be modelled spatially, to inform the geological profile across the site. Understanding the geological structure of the site is important for modelling the transport of chemical contaminants beneath the waste disposal area.

The structure of the thesis is as follows. Chapter 1 describes the Castlereagh hazardous waste site and the geophysical data, which motivated the methodology developed in this research. We summarise the principles of Gamma Ray (GR) logging, a method routinely employed by geophysicists and environmental engineers in the detailed evaluation of hazardous site geology, and detail the use of the Castlereagh data in this research. In Chapter 2 we review some fundamental ideas of Bayesian inference and computation and discuss them in the context of generalised linear models. Chapter 3 details the theoretical basis of our work. Here we give a new Markov chain Monte Carlo sampling scheme for Bayesian variable selection in generalized linear models, which is analogous to the well-known Swendsen-Wang algorithm for the Ising model. Special cases of this sampling scheme are used throughout the rest of the thesis. In Chapter 4 we discuss the use of methods for Bayesian model selection in generalized linear models in two specific applications, which we implement on the Castlereagh data. First, we consider smoothing problems where we flexibly estimate the dependence of a response variable on one or more predictors, and we apply these ideas to locally adaptive smoothing of gamma ray count data. Second, we discuss how the problem of multiple change point detection can be cast as one of model selection in a generalized linear model, and consider application to change point detection for gamma ray count data.
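The flavour of Bayesian model selection in generalized linear models can be conveyed by a brute-force sketch: enumerate a tiny space of Poisson regression models and weight them by a BIC-style approximation to the marginal likelihood. The thesis's contribution is an MCMC scheme for spaces too large to enumerate; the data, the approximation and the uniform model prior below are illustrative assumptions.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
X = rng.normal(size=(n, 3))                  # candidate predictors
eta = 0.5 + 1.0 * X[:, 0] - 0.7 * X[:, 1]    # third predictor is irrelevant
y = rng.poisson(np.exp(eta))

log_marg = {}
for k in range(4):
    for subset in itertools.combinations(range(3), k):
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        fit = sm.GLM(y, Xs, family=sm.families.Poisson()).fit()
        # BIC-style (Laplace) approximation to the log marginal likelihood.
        log_marg[subset] = fit.llf - 0.5 * Xs.shape[1] * np.log(n)

lm = np.array(list(log_marg.values()))
post = np.exp(lm - lm.max())
post /= post.sum()                            # uniform prior over models
for model, p in zip(log_marg, post):
    print(model, round(float(p), 3))
```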
In Chapter 5 we consider spatial models based on partitioning a spatial region of interest into cells via a Voronoi tessellation, where the number of cells and the positions of their centres are unknown, and show how these models can be formulated in the framework of established methods for Bayesian model selection in generalized linear models. We implement the spatial partition modelling approach in the spatial analysis of gamma ray data, showing how the posterior distribution of the number of cells, the cell centres and the cell means provides an estimate of the mean response function describing spatial variability across the site. Chapter 6 presents some conclusions and suggests directions for future research.

A paper based on the work of Chapter 3 has been accepted for publication in the Journal of Computational and Graphical Statistics, and a paper based on the work in Chapter 4 has been accepted for publication in Mathematical Geology. A paper based on the spatial modelling of Chapter 5 is in preparation and will be submitted for publication shortly.

The work in this thesis was collaborative, to a greater or lesser extent in its various components. I authored Chapters 1 and 2 entirely, including the definition of the problem in the context of the CWMC site, the gathering and preparation of data for analysis, and the review of the literature on computational methods for Bayesian inference and model selection for generalized linear models. I also authored Chapters 4 and 5, benefiting from some of Dr Nott's assistance in developing the algorithms. In Chapter 3, Dr Nott led the development of sampling scheme B (corresponding to having non-zero interaction parameters in our Swendsen-Wang type algorithm); I developed the algorithm for sampling scheme A (corresponding to setting all interaction parameters to zero), and performed the comparison of the performance of the two sampling schemes. The final discussion in Chapter 6 and the directions for further research in the case study context are also my work.
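To fix ideas on the Chapter 5 model, the sketch below shows only the piecewise-constant prediction step for one fixed Voronoi partition; in the thesis the number of cells, the cell centres and the cell means are unknown and inferred from their posterior, and the values here are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_predict(centres, cell_means, query_points):
    """Piecewise-constant surface: each point takes the mean of the
    Voronoi cell (nearest centre) containing it."""
    _, cell = cKDTree(centres).query(query_points)
    return cell_means[cell]

# One candidate partition: four cells with fixed centres and means
# (in the thesis these are unknowns with a posterior distribution).
centres = np.array([[25.0, 25.0], [25.0, 75.0], [75.0, 25.0], [75.0, 75.0]])
cell_means = np.array([20.0, 35.0, 28.0, 40.0])   # mean gamma counts

grid = np.stack(np.meshgrid(np.arange(0.0, 100.0, 10.0),
                            np.arange(0.0, 100.0, 10.0)),
                axis=-1).reshape(-1, 2)
print(voronoi_predict(centres, cell_means, grid)[:10])
```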
398. An investigation and comparison of the decision-making process used by industry specialist and other auditors. Moroney, Robyn Ann. Accounting, Australian School of Business, UNSW, January 2003.
Large accounting firms have been structuring their audit divisions along industry lines for some years. Industry specialisation is seen as a means of differentiation between otherwise similar accounting firms. At the individual level, industry specialists are identified as being so designated within their firm, and they spend a substantial amount of their time auditing clients in that industry. The purpose of this study is to determine what industry specialist auditors do differently, and what they do similarly, when working on industry-based tasks, one of which is set in the industry in which they specialise. Behavioural decision theory is used to investigate the differences and similarities in the decision-making processes of industry specialist and other auditors. It is known that industry specialists perform better on tasks set in their industry; this study seeks to learn why. To that end, the pre-information search, information search and decision processing phases of the decision-making process are examined. Industry specialists are expected to be more efficient and effective at each stage of the decision-making process when completing a case set in the industry they specialise in.

Two controlled experiments were conducted in the offices of each of the Big 4 international accounting firms. Participants included manufacturing and superannuation industry specialists from each firm. Each participant was invited to take part in both experiments, which were conducted consecutively via the internet.

The first experiment comprised two cases, one set in each industry setting (manufacturing and superannuation); participants completed both cases. Its purpose was to conduct a within-subject examination of the similarities and differences between industry specialist and other auditors during the pre-information search, information search and decision processing phases of the decision-making process. Performance on each case was also monitored and measured. Significant results were found for information search and performance, and moderate results for one proxy each of the pre-information search and decision processing phases. The relationship between efficiency at each stage of the decision-making process and performance was also measured; a significant relationship was found for the pre-information search and decision processing phases.

The second experiment comprised two strategic business risk tasks set in each industry setting (manufacturing and superannuation); participants completed both sets of tasks. Its purpose was to examine effectiveness during the pre-information search (listing key strategic business risks), information search (listing key inputs) and decision processing (listing key processes) phases of the decision-making process, and the ability to identify key outputs (accounts and assertions) for an identified risk (technological change for the manufacturing task; solvency due to insufficient funding for the superannuation task) within each industry setting. The results were highly significant overall: industry specialist auditors were able to list more key strategic business risks, inputs, processes and outputs when the task was set in the industry in which they specialise. The relationship between effectiveness at each stage of the decision-making process and performance was also measured; a significant relationship was found between effectiveness in listing key inputs and effectiveness in listing key outputs (accounts).
399. Incorporating uncertainty into expert models for management of box-ironbark forests and woodlands in Victoria, Australia. Czembor, Christina Anne, January 2009.
Anthropogenic utilization of forest and woodland ecosystems can cause declines in flora and fauna species, and it is imperative to restore these ecosystems to mitigate further declines. In this thesis, I focus on a highly degraded region, the Box-Ironbark forests and woodlands of Victoria, Australia. Rather than mature stands with large trees, stands are currently dominated by high densities of small stems. This change has reduced the populations of many flora and fauna species dependent on older-growth forests and woodlands. Managers are interested in restoring mature Box-Ironbark forests and woodlands through three alternative management strategies: allocating land to National Parks and allowing stands to develop naturally without harvesting, modifying timber harvesting regimes to retain more medium and large trees, or a new ecological thinning technique that retains target habitat trees and removes competing trees to encourage growth of retained stems.

The effects of each management strategy are not easy to predict, owing to complex interactions between intervention and stochastic natural processes. Forest simulation models are often employed to overcome this problem. I constructed state-and-transition simulation models (STSMs) to predict the effects of alternative management actions and natural disturbances on vegetation structure. Because empirical data were lacking, I relied on the knowledge of experts in Box-Ironbark ecology and management to construct the STSMs. The models predicted that development of mature woodlands under all strategies would be minimal over the next 150 years, and that neither current harvesting nor ecological thinning is likely to expedite the development of mature stands relative to growth and natural disturbances. However, differences in experts' opinions led to widely diverging model predictions.

Uncertainty must be acknowledged in model construction because it can affect model predictions. I quantified uncertainty due to four sources (between-expert variation, imperfect expert knowledge, natural stochasticity, and model parameterization) to determine which source caused the most variance in model predictions. I found that the models were very uncertain and that between-expert uncertainty contributed the majority of the variance in model predictions. This calls into question the use of consensus methods in forest management, where differences between experts are ignored.

Using uncertain model predictions to make management decisions is problematic because any given action can have many plausible outcomes. I applied several decision criteria to the uncertain STSM predictions within a formal decision-making framework to determine the optimal management action in Box-Ironbark forests and woodlands. I found that natural development is the most risk-averse option, while ecological thinning is the riskiest, because there is only a small likelihood that it will greatly expedite the development of mature woodlands. Rather than selecting a single option, managers could rely on a risk-spreading approach in which the majority of land is allocated to no-cutting National Parks and a small amount of land is allocated to the other two harvesting strategies. This would allow managers to collect monitoring data for all management strategies, learn about the effects of harvesting, and update model predictions through time using adaptive management.
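A minimal state-and-transition simulation sketch, showing how between-expert differences propagate to predictions: each expert supplies a transition matrix, and the predicted probability of reaching the mature state diverges accordingly. All states and transition probabilities are invented for illustration, not elicited values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
states = ["regrowth", "dense_small_stems", "medium_trees", "mature"]

# One annual transition matrix per hypothetical expert (rows sum to 1).
experts = [
    np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.95, 0.05, 0.00],
              [0.00, 0.00, 0.97, 0.03],
              [0.00, 0.00, 0.00, 1.00]]),
    np.array([[0.85, 0.15, 0.00, 0.00],
              [0.00, 0.98, 0.02, 0.00],
              [0.00, 0.00, 0.99, 0.01],
              [0.00, 0.00, 0.00, 1.00]]),
]

def simulate(P, years=150, start=1):
    """Run one stand forward from `start` under transition matrix P."""
    s = start
    for _ in range(years):
        s = rng.choice(len(states), p=P[s])
    return s

for i, P in enumerate(experts):
    frac = np.mean([simulate(P) == 3 for _ in range(2000)])
    print(f"expert {i}: P({states[3]} after 150 yr) = {frac:.2f}")
```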
400. Bayesian sequential state estimation for MIMO wireless communications. Huber, Kristopher Frederick George.
Thesis (Ph.D.), McMaster University, 2005. Supervisor: Simon Haykin. Includes bibliographical references (leaves [126]-135).