611

Revision mot Ekobrott : En studie om sambandet mellan revision och den ekonomiska brottsligheten i små svenska aktiebolag / Audit against economic crime : A study on the relationship between voluntary audit and the economic crime in Swedish SMEs

Malmqvist, Johan, Björkman, Emma January 2017
BAKGRUND: Sedan revisionsplikten avskaffades 2010 har inte fastställts hur den ekonomiska brottsligheten påverkats av att små svenska aktiebolag inte längre revideras. Trots att Ekobrottsmyndigheten varnat för att avsaknaden av revisor kan vara en riskfaktor för ökad ekonomisk brottslighet pågår en utredning om att utöka gränsvärdena för revisionsplikten för svenska aktiebolag. SYFTE: Studien syftar till att förklara sambandet mellan revision och förekomsten av ekonomisk brottslighet i små svenska aktiebolag. METOD: Denna kvantitativa studie har ett deduktivt angreppssätt. Utifrån agentteori, resursberoendeteori och beslutsteori har en hypotes formulerats. En tvärsnittsdesign har använts och sekundärdata, främst bestående av räkenskapsinformation, har analyserats med Chi2-test och logistisk regression. SLUTSATS: Studiens analys visar på ett svagt negativt samband mellan revisor och ekonomisk brottslighet, vilket innebär att resultatet indikerar att ekonomisk brottslighet är mer förekommande i bolag som väljer bort revisorn. Vidare minskar revisorn risken för att ekonomisk brottslighet förekommer inom små aktiebolag med 10,35 procentenheter. / INTRODUCTION: Since mandatory audit was abolished for small companies in 2010, it has not been established how economic crime in Swedish SMEs has been affected by these companies no longer being audited. Although the Swedish Economic Crime Authority has warned that the absence of an auditor may increase the risk of economic crime, an inquiry into raising the thresholds for mandatory audit of Swedish limited companies is ongoing. PURPOSE: The purpose of this study is to explain the relationship between audit and economic crime in Swedish SMEs. METHOD: This quantitative study is based on a deductive approach. A hypothesis has been developed from agency theory, resource dependence theory and decision theory. A cross-sectional design has been used, and secondary data, mainly from financial statements, have been analysed using a chi-square test and logistic regression. CONCLUSION: The analysis shows a weak negative relationship between having an auditor and economic crime, i.e. the results indicate that economic crime is more prevalent in companies that opt out of the auditor. Furthermore, an auditor reduces the risk of economic crime occurring in small Swedish limited companies by 10.35 percentage points.
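To make the method description concrete, here is a minimal sketch of the chi-square test and logistic regression workflow mentioned in the abstract. It is not the thesis code: the firm-level variables, coefficients and simulated data are invented purely for illustration.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated firm-level data (illustrative only, not the thesis data):
# has_auditor = 1 if the company retains an auditor,
# crime = 1 if an economic-crime record is registered for the company.
n = 5000
has_auditor = rng.binomial(1, 0.6, size=n)
log_assets = rng.normal(14, 1, size=n)  # a firm-size control
p_crime = 1 / (1 + np.exp(-(-3.0 - 0.5 * has_auditor + 0.1 * (log_assets - 14))))
crime = rng.binomial(1, p_crime)

# Chi-square test of independence between auditor choice and crime occurrence.
table = pd.crosstab(has_auditor, crime)
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# Logistic regression of crime occurrence on auditor choice plus a control.
X = sm.add_constant(np.column_stack([has_auditor, log_assets]))
fit = sm.Logit(crime, X).fit(disp=0)
print(fit.summary(xname=["const", "has_auditor", "log_assets"]))
```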
612

A review of generalized linear models for count data with emphasis on current geospatial procedures

Michell, Justin Walter January 2016
Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana, using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details are provided of how a spatial database is compiled so that a statistical analysis can proceed. The automation of the entire process is also explored. The final Bayesian spatial model derived from the non-spatial variable selection procedure was fitted to the data using Markov chain Monte Carlo simulation. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could only be achieved for parts of the process. The non-spatial selection procedure proved practical and was able to identify stable explanatory variables and provide an objective means of selecting one variable over another; however, it was ultimately not entirely successful because a unique set of spatial variables could not be selected.
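As a rough illustration of the kind of prevalence GLM such an analysis builds on, here is a hedged Python sketch. The survey sites, covariates and coefficients are invented, and the Bayesian spatial layer (e.g., spatially structured random effects fitted by MCMC) used in the thesis is deliberately omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Illustrative survey-site data (not the MARA data): number tested and
# number positive per site, plus two climatic covariates.
n_sites = 200
winter_temp = rng.normal(15, 3, size=n_sites)     # degrees C
summer_rain = rng.normal(400, 80, size=n_sites)   # mm
tested = rng.integers(50, 300, size=n_sites)
logit_p = -4 + 0.25 * (winter_temp - 15) + 0.004 * (summer_rain - 400)
positive = rng.binomial(tested, 1 / (1 + np.exp(-logit_p)))

# Non-spatial binomial GLM of prevalence on the covariates; a Bayesian
# spatial model would add a spatial random effect to this linear predictor.
endog = np.column_stack([positive, tested - positive])
exog = sm.add_constant(np.column_stack([winter_temp, summer_rain]))
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.summary(xname=["const", "winter_temp", "summer_rain"]))
```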
613

Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases

Abbas, Kaja Moinudeen 05 1900
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning, which predicts disease progression, including prevalence and incidence, for a given geographic region and demographic composition. Public health resources, prioritized according to the risk levels of the population, can then minimize disease spread and curtail the epidemic as early as possible. A Bayesian network representing an outbreak of influenza and pneumonia in one geographic region is ported to a new region with a different demographic composition, and the corresponding prevalence of influenza and pneumonia among the demographic subgroups of the new region is inferred. Bayesian reasoning coupled with a disease timeline is used to reverse-engineer an influenza outbreak for a given geographic and demographic setting; the temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. Compared with spreading vaccination uniformly, prioritizing the limited vaccination resources toward the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989-2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data, coupled with census population data, to estimate the proportion of HIV incidence among the different demographic subgroups; this demographic-based risk analysis reveals a varied spectrum of HIV risk across subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases, and it is presented in the context of simulated influenza outbreak data. Overall, probabilistic reasoning enhances the understanding of disease progression and helps identify the critical points for surveillance, control and prevention.
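A minimal numerical sketch of the Bayesian-reasoning step described above, inferring subgroup-specific risk from a two-node network. The subgroups and probabilities are made up for illustration and are not taken from the dissertation.

```python
import numpy as np

# Illustrative two-node Bayesian network: demographic subgroup -> influenza.
subgroups = ["child", "adult", "senior"]
p_subgroup = np.array([0.25, 0.55, 0.20])            # P(subgroup)
p_flu_given_subgroup = np.array([0.12, 0.05, 0.15])  # P(flu | subgroup)

# Marginal prevalence in the region: P(flu) = sum_s P(flu | s) P(s).
p_flu = np.sum(p_flu_given_subgroup * p_subgroup)

# Posterior P(subgroup | flu), i.e. which groups carry the most risk,
# the quantity used to prioritize limited public-health resources.
p_subgroup_given_flu = p_flu_given_subgroup * p_subgroup / p_flu

for s, prior, post in zip(subgroups, p_subgroup, p_subgroup_given_flu):
    print(f"{s:>6}: prior {prior:.2f} -> posterior given flu {post:.2f}")
print(f"overall prevalence P(flu) = {p_flu:.3f}")
```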
614

Análise espacial do risco de dengue no município de Campinas : modelagem bayesiana / Spatial analysis of risk of dengue in the municipality of Campinas : bayesian modeling

Costa, Jose Vilton, 1973- 02 January 2013
Advisors: Liciana Vaz de Arruda Silveira, Maria Rita Donalisio Cordeiro / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Ciências Médicas / Resumo: A dengue é uma doença viral de transmissão vetorial causada por um dos quatro sorotipos do vírus dengue (DENV-1, DENV-2, DENV-3 e DENV-4). Atualmente é considerada um dos maiores problemas de Saúde Pública do Mundo. O controle do vetor transmissor, Aedes aegypti, é bem complexo, sendo de grande importância para os serviços de vigilância epidemiológica compreender o processo espacial da dinâmica de transmissão da doença sobre o espaço intramunicipal. A presente tese tem como objetivo analisar a distribuição espacial do risco de dengue e sua associação com fatores socioambientais, no município de Campinas-SP, em 2007. Considerando-se a escala local, analisaram-se duas distintas dimensões espaciais: os setores censitários e as áreas de cobertura dos centros de saúde. Foi construído um indicador de condições socioambientais a partir de variáveis do Censo 2000. Foram investigados 11519 casos de dengue autóctones. Na modelagem de regressão ecológica, foram incluídos efeitos aleatórios estruturados espacialmente e não estruturados para ajustar a variação extra-Poisson presente nas contagens agregadas da doença. Os modelos binomiais negativos (BN), Poisson inflacionado de zeros (ZIP) e binomial negativo inflacionado de zeros (ZINB) foram aplicados para modelar a superdispersão e o excesso de zeros para os dados dos setores censitários. A abordagem hierárquica bayesiana foi utilizada para inferência e o método INLA (Integrated Nested Laplace Approximations) foi empregado para estimação dos parâmetros. A distribuição espacial do risco de dengue sobre o espaço intramunicipal de Campinas não mostrou clara relação com as condições socioambientais. O modelo ZIP mostrou-se mais adequado para modelar o excesso de zeros presente nos dados de contagem para pequenas áreas / Abstract: Dengue is a mosquito-borne viral disease caused by one of four serotypes of the dengue virus (DENV-1 to DENV-4) and is currently one of the major public health problems worldwide. Control of its vector, the Aedes aegypti mosquito, is very complex, and it is of great importance for epidemiological surveillance systems to understand the spatial dynamics of disease transmission within the urban space. This thesis aimed to analyze the spatial distribution of the risk of dengue and its association with socio-environmental factors in Campinas-SP in 2007. Considering the local scale, we analyzed two distinct spatial dimensions: census tracts and health districts. We constructed an indicator of socio-environmental conditions from 2000 Census variables and investigated 11,519 cases of autochthonous dengue. In the ecological regression modeling, spatially structured and unstructured random effects were included to accommodate the extra-Poisson variation present in the aggregated disease counts. Negative binomial (NB), zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models were applied to model the overdispersion and the excess of zeros in the census-tract data. A hierarchical Bayesian approach was used for inference, and the INLA (Integrated Nested Laplace Approximations) method was employed for parameter estimation. The spatial distribution of dengue risk at the local scale in Campinas showed no clear relationship with socio-environmental conditions. The ZIP model was the most appropriate for modeling the excess of zeros present in the small-area count data / Doutorado / Epidemiologia / Doutor em Saude Coletiva
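For readers unfamiliar with zero-inflated Poisson models, the following intercept-only sketch shows the ZIP likelihood at the core of the models compared in the thesis. The simulated counts are illustrative; the covariates, spatial random effects and INLA machinery of the actual analysis are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Illustrative small-area counts with excess zeros (not the Campinas data):
# a fraction pi of areas are "structural zeros" that never report cases.
n, true_pi, true_lam = 500, 0.4, 3.0
structural_zero = rng.random(n) < true_pi
y = np.where(structural_zero, 0, rng.poisson(true_lam, size=n))

def zip_negloglik(params, y):
    """Negative log-likelihood of an intercept-only zero-inflated Poisson."""
    pi, lam = expit(params[0]), np.exp(params[1])     # keep parameters in range
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))    # log P(Y = 0)
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)  # log P(Y = y) for y > 0
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(zip_negloglik, x0=np.zeros(2), args=(y,), method="BFGS")
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"estimated zero-inflation pi = {pi_hat:.2f}, Poisson mean = {lam_hat:.2f}")
```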
615

Classes de testes de hipóteses / Classes of hypotheses tests

Rafael Izbicki 08 June 2010
Na Inferência Estatística, é comum, após a realização de um experimento, testar simultaneamente um conjunto de diferentes hipóteses de interesse acerca de um parâmetro desconhecido. Assim, para cada hipótese, realiza-se um teste de hipótese e, a partir disto, conclui-se algo sobre os parâmetros de interesse. O objetivo deste trabalho é avaliar a (falta de) concordância lógica entre as conclusões obtidas a partir dos testes realizados após a observação de um único experimento. Neste estudo, é apresentada uma definição de classe de testes de hipóteses, uma função que para cada hipótese de interesse associa uma função de teste. São então avaliadas algumas propriedades que refletem como gostaríamos que testes para diferentes hipóteses se comportassem em termos de coerência lógica. Tais propriedades são exemplificadas através de classes de testes que as satisfazem. A seguir, consideram-se conjuntos de axiomas para classes. Estes axiomas são baseados nas propriedades mencionadas. Classes de testes usuais são investigadas com relação aos conjuntos de axiomas propostos. São também estudadas propriedades advindas de tais conjuntos de axiomas. Por fim, estuda-se um resultado que estabelece uma espécie de conexão entre testes de hipóteses e estimação pontual. / In Statistical Inference, it is usual, after an experiment is performed, to test simultaneously a set of hypotheses of interest concerning an unknown parameter. Thus, for each hypothesis, a statistical test is performed and a conclusion about the parameter is drawn from it. The objective of this work is to evaluate the (lack of) logical coherence among conclusions obtained from tests conducted after the observation of a single experiment. In this study, a definition of a class of hypothesis tests, a function that associates a test function with each hypothesis of interest, is presented. Some properties that reflect what one could expect, in terms of logical coherence, from tests of different hypotheses are then evaluated. These properties are exemplified by classes of hypothesis tests that satisfy them. Then, sets of axioms based on the properties studied are proposed for classes of hypothesis tests. Usual classes of hypothesis tests are investigated with respect to these sets of axioms, and some properties that follow from these sets of axioms are analyzed. Finally, a result is stated that establishes a connection of sorts between hypothesis testing and point estimation.
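To illustrate the kind of logical-coherence property at stake, here is a small Python sketch with a toy class of interval-hypothesis tests. The abstract does not list the axioms; monotonicity (rejecting a larger hypothesis forces rejecting every hypothesis it contains) is assumed here purely as an example.

```python
import numpy as np

def test(hypothesis, xbar, n, sigma=1.0, z=1.96):
    """Toy class of tests for H: theta in [a, b] about a normal mean.
    Returns 1 (reject) or 0 (not reject)."""
    a, b = hypothesis
    halfwidth = z * sigma / np.sqrt(n)
    return int(xbar < a - halfwidth or xbar > b + halfwidth)

A, B = (0.0, 0.5), (-1.0, 1.0)  # A is a subset of B
for xbar in np.linspace(-2, 2, 41):
    reject_A, reject_B = test(A, xbar, n=25), test(B, xbar, n=25)
    # Coherence check: rejecting the larger hypothesis B must entail
    # rejecting the smaller hypothesis A.
    assert not (reject_B == 1 and reject_A == 0)
print("monotonicity holds for this class on the grid checked")
```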
616

Effect modification by socioeconomic conditions on the effects of prescription opioid supply on drug poisoning deaths in the United States

Fink, David S. January 2020
The rise in America’s drug poisoning rates has been described as a public health crisis and has long been attributed to the rapid rise in opioid supply due to increased volumes of medical prescribing in the United States that began in the mid-1990s and peaked in 2012. In 2016, the introduction of the “deaths of despair” hypothesis provided a more nuanced explanation for the rising rates of drug poisoning deaths: increasing income inequality and stagnation of middle-class worker wages, driven by long-term shifts in the labor market, reduced employment opportunities and overall life prospects for persons with a high school degree or less, driving increases in “deaths of despair” (i.e., deaths from suicide, cirrhosis of the liver, and drug poisonings). This focus on economic and social conditions as capable of shaping geospatial differences in drug demand and attendant drug-related harms (e.g., drug poisonings) provides a larger context to factors potentially underlying the heterogeneous distribution of prescription opioid supply across the United States. However, despite the likelihood that economic and social conditions may be important demand-side factors that also interact with supply-side factors to produce the rates of fatal drug poisonings, little information exists about the effect of area-level socioeconomic conditions on fatal drug poisoning rates, and no study has investigated whether socioeconomic conditions interact with prescription opioid supply to affect area-level rates of fatal drug poisonings. The overarching goal of this dissertation was to test the independent and joint effects of supply- and demand-side factors, operationalized as prescription opioid supply and socioeconomic conditions, on fatal drug poisoning in the U.S. First, a systematic review of the literature was conducted to critically evaluate the evidence on the ecological relationship of prescription opioid supply and socioeconomic conditions on rates of drug poisoning deaths. The systematic review provides robust evidence of the independent effect of each prescription opioid supply and socioeconomic conditions on rates of drug poisoning deaths. The gap in the literature on the joint effects of prescription opioid supply and socioeconomic conditions was clear, with no study examining the interaction between supply- and demand-side factors on rates of fatal drug poisonings. Moreover, although greater prescription opioid supply was associated with higher rates of fatal drug poisonings in most of the studies, two studies presented contradictory findings, with one study showing no effect of supply on drug poisoning deaths and the other showing locations with higher levels of prescription opioid supply were associated with fewer drug-related deaths. Three limitations were also identified in the reviewed studies that could partially explain the observed associations. First, although studies aggregated data on drug poisoning deaths to a range of administrative spatial levels, including census tract, 5-digit ZIP code, county, 3-digit ZIP code, and state, no study investigated the sensitivity of findings to the level of geographic aggregation. Second, spatial modeling requires the assessment of spatial autocorrelation in both the unadjusted and adjusted data, but few studies even assessed spatial autocorrelation in the data, and fewer still incorporated spatial dependencies in the model. 
This is important because when spatial autocorrelation is present, the independence assumption in standard statistical regression models is violated, potentially causing bias and loss of efficiency. Third, studies operationalized prescription opioid supply and socioeconomic conditions using a variety of different measures, and no study assessed the sensitivity of findings to the different measures of supply and socioeconomic conditions. Second, the ecological relationship between prescription opioid supply and fatal drug poisonings was examined. For this, pooled cross-sectional time series data from 3,109 U.S. counties in 49 states (2006-2016) were used in Bayesian Poisson conditional autoregressive models to estimate the effect of county prescription opioid supply on four types of drug poisoning deaths: any drug (drug-related death), any opioid (opioid-related death), any prescription opioid but not heroin (prescription opioid-related death), and heroin (heroin-related death), adjusting for compositional and contextual differences across counties. Comparisons were made by type of drug poisoning (any drug, any opioid, prescription opioids only, heroin), level of geographic aggregation (county versus state), and measure of prescription opioid supply (rate of opioid-prescribing per 100 persons and morphine milligram equivalents per-capita). Results indicated a positive association between prescription opioid supply and rates of fatal drug poisonings consistent across changes in type of drug poisoning, level of aggregation, and measure of prescription opioid supply. However, removing confounders from the model caused the direction of the effect estimate to reverse for drug poisoning deaths from any drug, any opioid, and heroin. These results suggested that differences in adjustment for confounding could explain most of the inconsistent findings in the literature. Finally, a rigorous test of the hypothesis that worse socioeconomic conditions increase risk of fatal drug poisonings at the county level, and interact with prescription opioid supply was conducted. This analysis used the same pooled cross-sectional time series data from 3,109 U.S. counties in 49 states (2006-2016). The analysis modeled the effect of five key socioeconomic variables, including three single socioeconomic variables (unemployment, poverty rate, income inequality) and two index variables (Rey index, American Human Development Index [HDI]) on four types of drug poisoning deaths: any drug (drug-related death), any opioid (opioid-related death), any prescription opioid but not heroin (prescription opioid-related death), and heroin (heroin-related death). Using a hierarchical Bayesian modeling approach to account for spatial dependence and the variability of fatal drug poisoning rates due to the small number of events, the independent effect of socioeconomic conditions on rates of drug poisoning deaths and their joint multiplicative and additive effect with prescription opioid supply were estimated. Results showed that rates of fatal drug poisonings were higher in more economically and socially disadvantaged counties; the five key indicator variables were differentially associated with drug poisoning rates; and the American Human Development Index (HDI) and income inequality were most strongly associated with fatal drug poisoning rates. Finally, the results indicate that both HDI and income inequality interact with county-level prescription opioid supply to affect drug poisoning rates. 
Specifically, the effect of higher prescription opioid supply on rates of fatal drug poisonings was greater in counties with higher HDI and more equal income distributions than in counties with lower HDI and less equal income distributions. Overall, this dissertation increased knowledge about the separate and conjoint roles of supply- and demand-side factors in the geospatial distribution of fatal drug poisonings in the U.S. The idea that area-level prescription opioid supply is a key driver of prescription drug use, misuse, and addiction and the attendant consequences, including nonfatal and fatal drug poisonings, has been in the literature for well over a decade. However, no study to date has shown that area-level socioeconomic conditions modify the effect of prescription opioid supply on fatal drug poisonings. By identifying important contextual factors capable of modifying the effect of prescription opioid supply reductions on mortality, high-risk geographic areas can be prioritized for interventions to counter any unintended effects of reducing the prescription opioid supply in an area. As federal and state policies continue to target the rising rates of fatal drug poisonings, these findings show that area-level socioeconomic conditions may represent an important target for policy intervention during the current drug poisoning crisis and a critical piece of information necessary for predicting any future drug-related crises.
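A hedged sketch of the non-spatial core of such a model: a Poisson regression with a population offset and a supply-by-socioeconomic-conditions interaction. All county-level variables and coefficients are simulated for illustration, and the Bayesian conditional autoregressive (CAR) random effects used in the dissertation are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Illustrative county-level data (not the study data): opioid prescribing
# rate, a socioeconomic index, population, and drug-poisoning deaths.
n = 1000
opioid_supply = rng.normal(80, 25, size=n)   # prescriptions per 100 persons
ses_index = rng.normal(0, 1, size=n)         # higher = better conditions
population = rng.integers(5_000, 500_000, size=n)
log_rate = (-9.5 + 0.006 * opioid_supply - 0.10 * ses_index
            + 0.004 * opioid_supply * ses_index)
deaths = rng.poisson(np.exp(log_rate) * population)

# Poisson regression with a supply-by-SES interaction and a population offset.
X = sm.add_constant(np.column_stack(
    [opioid_supply, ses_index, opioid_supply * ses_index]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
             offset=np.log(population)).fit()
print(fit.summary(xname=["const", "supply", "ses", "supply_x_ses"]))
```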
617

Méthodes statistiques pour la prédiction de température dans les composants hyperfréquences / Statistical methods for temperature prediction in hyperfrequency components

Mallet, Grégory 25 October 2010
Cette thèse s’intéresse à l’application des méthodes d’apprentissage statistique pour la prédiction de température d’un composant électronique présent dans un radar. On étudie un cas simplifié des systèmes réels, le système étudié se limitant à un seul composant monté sur un système de refroidissement réduit. Le premier chapitre est consacré à la modélisation thermique. Après avoir présenté les principaux modes de transmission de l’agitation thermique, les modèles analytiques et numériques qui en découlent sont étudiés. En utilisant cette connaissance, le deuxième chapitre propose de choisir dans les méthodes de mesures les plus adaptées aux spécifications et aux contraintes de l’application choisie. Une fois que les bases de données ont été établies, nous pouvons utiliser dans le troisième chapitre les techniques de l’apprentissage statistique pour construire un modèle dynamique. Après un bref rappel sur les tenants et les aboutissants de la modélisation statistique, quatre familles de méthodes seront présentées : les modèles linéaires, les réseaux de neurones, les réseaux bayésiens dynamiques et les machines à vecteur support (SVM). Enfin, le quatrième chapitre est l’occasion de présenter une méthode de modélisation originale. En effet, après avoir détaillé la mise en oeuvre des méthodes d’identification de représentation d’état, nous verrons comment prendre en compte des a priori théoriques au cours de l’apprentissage de ce type de modèle, à savoir une contrainte de stabilité. / This thesis is focused on the application of statistical learning methods to the temperature prediction of an electronic component embedded in a radar. We study a simplified case of real systems, the system under study being limited to a single component mounted on a reduced cooling system. The first chapter is devoted to thermal modeling. After presenting the main heat transfer mechanisms, the analytical and numerical models that follow from them are studied. Using this knowledge, the second chapter offers a survey of temperature measurement methods, choosing those best suited to the specifications and constraints of the chosen application. Once the databases have been established, statistical learning techniques are used in the third chapter to build a dynamic model. After a brief reminder of the ins and outs of statistical modeling, four families of methods are presented: linear models, neural networks, dynamic Bayesian networks and support vector machines (SVM). The fourth chapter presents a novel modeling method: after detailing the implementation of state-space identification methods, we show how to take theoretical prior knowledge, namely a stability constraint, into account during the training of this type of model.
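As an illustration of one of the black-box families surveyed (support vector machines), here is a minimal Python sketch of a dynamic SVR model trained on lagged temperature and instantaneous power. The thermal data and parameters are invented, and the stability-constrained state-space identification of the fourth chapter is not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Illustrative data (not the radar measurements): dissipated power drives
# component temperature through a simple first-order thermal response.
n = 2000
power = np.clip(rng.normal(10, 3, size=n), 0, None)  # W
temp = np.empty(n)
temp[0] = 40.0
for t in range(1, n):  # crude first-order dynamics plus measurement noise
    temp[t] = 0.95 * temp[t - 1] + 0.5 * power[t] + rng.normal(0, 0.2)

# Dynamic model: predict T(t) from the previous temperature and current power.
X = np.column_stack([temp[:-1], power[1:]])
y = temp[1:]
split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 3))
```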
618

Drogenkonsum als rationale Wahl / Drug Consumption as a Rational Choice

Berger, Roger, Gautschi, Thomas 12 January 2018
No description available.
619

Statistical and Machine Learning Methods for Pattern Identification in Environmental Mixtures

Gibson, Elizabeth Atkeson January 2021
Background: Statistical and machine learning techniques are now being incorporated into high-dimensional mixture research to overcome issues with traditional methods. Though some methods perform well on specific tasks, no method consistently outperforms all others in complex mixture analyses, largely because different methods were developed to answer different research questions. The research presented here concentrates on answering a single mixtures question: Are there exposure patterns within a mixture corresponding with sources or behaviors that give rise to exposure? Objective: This dissertation details work to design, adapt, and apply pattern recognition methods to environmental mixtures and introduces two methods adapted to specific challenges of environmental health data, (1) Principal Component Pursuit (PCP) and (2) Bayesian non-parametric non-negative matrix factorization (BN²MF). We build on this work to characterize the relationship between identified patterns of in utero endocrine disrupting chemical (EDC) exposure and child neurodevelopment. Methods: PCP, a dimensionality reduction technique from computer vision, decomposes the exposure mixture into a low-rank matrix of consistent patterns and a sparse matrix of unique or extreme exposure events. We incorporated two existing PCP extensions that suit environmental data, (1) a non-convex rank penalty, and (2) a formulation that removes the need for parameter tuning. We further adapted PCP to accommodate environmental mixtures by including (1) a non-negativity constraint, (2) a modified algorithm to allow for missing values, and (3) a separate penalty for measurements below the limit of detection (PCP-LOD). BN²MF decomposes the exposure mixture into three parts, (1) a matrix of chemical loadings on identified patterns, (2) a matrix of individual scores on identified patterns, and (3) a diagonal matrix of pattern weights. It places non-negative continuous priors on pattern loadings, weights, and individual scores and uses a non-parametric sparse prior on the pattern weights to estimate the optimal number of patterns. We extended BN²MF to explicitly account for uncertainty in identified patterns by estimating the full distribution of scores and loadings. To test both methods, we simulated data to represent environmental mixtures with various structures, altering the level of complexity in the patterns, the noise level, the number of patterns, the size of the mixture, and the sample size. We evaluated PCP-LOD's performance against principal component analysis (PCA), and we evaluated BN²MF's performance against PCA, factor analysis, and frequentist nonnegative matrix factorization (NMF). For all methods, we compared their solutions with true simulated values to measure performance. We further assessed BN²MF's coverage of true simulated scores. We applied PCP-LOD to an exposure mixture of 21 persistent organic pollutants (POPs) measured in 1,000 U.S. adults from the 2001-2002 National Health and Nutrition Examination Survey (NHANES). We applied BN²MF to an exposure mixture of 17 EDCs measured in 343 pregnant women in the Columbia Center for Children’s Environmental Health's Mothers and Newborns Cohort. Finally, we designed a two-stage Bayesian hierarchical model to estimate health effects of environmental exposure patterns while incorporating the uncertainty of pattern identification. In the first stage, we identified EDC exposure patterns using BN²MF.
In the second stage, we included individual pattern scores and their distributions as exposures of interest in a hierarchical regression model, with child IQ as the outcome, adjusting for potential confounders. We present sex-specific results. Results: PCP-LOD recovered the true number of patterns through cross-validation for all simulations; based on an a priori specified criterion, PCA recovered the true number of patterns in 32% of simulations. PCP-LOD achieved lower relative predictive error than PCA for all simulated datasets with up to 50% of the data < LOD. When 75% of values were < LOD, PCP-LOD outperformed PCA only when noise was low. In the POP mixture, PCP-LOD identified a rank three underlying structure. One pattern represented comprehensive exposure to all POPs. The other two patterns grouped chemicals based on known properties such as structure and toxicity. PCP-LOD also separated 6% of values as extreme events. Most participants had no extreme exposures (44%) or only extremely low exposures (18%). BN²MF estimated the true number of patterns for 99% of simulated datasets. BN²MF's variational confidence intervals achieved 95% coverage across all levels of structural complexity with up to 40% added noise. BN²MF performed comparably with frequentist methods in terms of overall prediction and estimation of underlying loadings and scores. We identified two patterns of EDC exposure in pregnant women, corresponding with diet and personal care product use as potentially separate sources or behaviors leading to exposure. The diet pattern expressed exposure to phthalates and BPA. One standard deviation increase in this pattern was associated with a decrease of 3.5 IQ points (95% credible interval: -6.7, -0.3), on average, in female children but not in males. The personal care product pattern represented exposure to phenols, including parabens, and diethyl phthalate. We found no associations between this pattern and child cognition. Conclusion: PCP-LOD and BN^2MF address limitations of existing pattern recognition methods employed in this field such as user-specified pattern number, lack of interpretability of patterns in terms of human understanding, influence of outlying values, and lack of uncertainty quantification. Both methods identified patterns that grouped chemicals based on known sources (e.g., diet), behaviors (e.g., personal care product use), or properties (e.g., structure and toxicity). Phthalates and BPA found in food packaging and can linings formed a BN²MF-identified pattern of EDC exposure negatively associated with female child intelligence in the Mothers and Newborns cohort. Results may be used to inform interventions designed to target modifiable behavior or regulations to act on dietary exposure sources.
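For orientation, here is a sketch of plain frequentist NMF on a simulated exposure matrix, which is one of the comparison baselines named in the abstract. BN²MF itself (continuous priors on scores and loadings, automatic selection of the number of patterns, uncertainty quantification) is not implemented here, and the data are invented.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)

# Simulated exposure mixture (illustrative, not the cohort data): two
# latent source patterns mixed across 300 participants and 17 chemicals.
n_subjects, n_chems, n_patterns = 300, 17, 2
true_scores = rng.gamma(2.0, 1.0, size=(n_subjects, n_patterns))
true_loadings = rng.gamma(2.0, 1.0, size=(n_patterns, n_chems))
exposures = true_scores @ true_loadings + rng.gamma(1.0, 0.1, size=(n_subjects, n_chems))

# Frequentist NMF: exposures ~ scores @ loadings, all entries non-negative.
model = NMF(n_components=n_patterns, init="nndsvda", max_iter=500, random_state=0)
scores = model.fit_transform(exposures)   # individual pattern scores
loadings = model.components_              # chemical loadings on patterns
print("reconstruction error:", round(model.reconstruction_err_, 2))
print("loadings shape:", loadings.shape, "scores shape:", scores.shape)
```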
620

Experimental Investigations of the Role of Information in Economic Choices

Ravaioli, Silvio January 2022
Before making a choice, we often have the opportunity to learn more about the options that are available. For example, we can check the characteristics of a product before buying it, or read different newspapers before a political election. Understanding what shapes the demand for information, and its role in the decision process, is important for studying economic choices. This dissertation contains three essays in behavioral and information economics that use experimental data and modeling to analyze how people choose and use information to make decisions. The first chapter, "Coarse and Precise Information in Food Labeling," uses experimental data to determine whether precise food labels can be more effective and informative than coarse ones. In a preregistered online study conducted on a representative US sample, I manipulate front-of-package labels about foods' calorie content. I find that coarse categorical labels generate a larger reduction in calories per serving compared to detailed numerical labels, despite providing less information. Choices violate the predictions of Bayesian decision theory, suggesting that consumers are less responsive to detailed information. Results also show that participants prefer coarse labels, suggesting a general preference for simple, easy-to-interpret information. The second chapter, "The Status Quo and Belief Polarization of Inattentive Agents," studies how differences across agents can drive information acquisition and generate polarization. In a rational inattention model, optimal information acquisition and subsequent belief formation depend crucially on the agent-specific status quo valuation. Beliefs can systematically update away from the realized truth, and even agents with the same initial beliefs might become polarized. A laboratory experiment confirms the model's predictions about information acquisition and its effect on beliefs. In contrast to the model's predictions, participants display a preference for simple messages that can provide certainty. The third chapter, "Dynamic Information Choice with Biased Information Sources," uses experimental data to study how people decide what kind of information to acquire when they have multiple opportunities to learn. Standard theory predicts that decision makers should collect the stream of information that maximizes the expected reward from the final choice. An online experiment on sequential information acquisition shows that people systematically deviate from the predictions of the standard normative model. Participants display certainty-seeking information acquisition behavior, under-respond to the new evidence they collect, and rarely revise their own information acquisition strategy.
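A small numerical sketch of the Bayesian benchmark against which the label experiment in the first chapter is judged; the signal accuracies and prior are invented for illustration.

```python
import numpy as np

# Illustrative Bayesian-updating comparison: a product is high-calorie (H)
# or low-calorie (L) with a 50/50 prior. A "precise" label reports the true
# state with 90% accuracy; a "coarse" label is only 70% informative.
prior_H = 0.5

def posterior_H(prior, p_signal_given_H, p_signal_given_L):
    """P(H | signal) by Bayes' rule."""
    num = p_signal_given_H * prior
    return num / (num + p_signal_given_L * (1 - prior))

precise_post = posterior_H(prior_H, 0.9, 0.1)  # "high" from the precise label
coarse_post = posterior_H(prior_H, 0.7, 0.3)   # "high" from the coarse label
print(f"posterior P(high-calorie): precise label {precise_post:.2f}, "
      f"coarse label {coarse_post:.2f}")
# A Bayesian consumer should react at least as strongly to the precise label;
# the experiment finds the opposite behaviorally, which is the puzzle.
```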
