1 |
Steve Blackwell: A Florida Folk Musician
Haymans, Brian, 01 January 2010
This study investigates the life of Steve Blackwell (1947-2006), a Florida folk singer/songwriter from Punta Gorda, FL, located where the Peace River meets the Gulf of Mexico. The study examines his biographical history, his performance career, his musical output, and the impact he and his music had on the surrounding community. The first part of the study documents Blackwell's history and the major events that shaped his life while also describing what kind of person he was. The second part examines Blackwell's career as a musical performer: the bands he played with, how those bands formed or changed over time, the types of music they performed, and any albums he recorded. The third part looks at Blackwell as a songwriter, discussing how he chose his lyrical topics, his musical style, and his compositional process. The final part examines the impact of Blackwell and his music, giving consideration to his closest social networks as well as to the social implications that he, his music, and his networking had for his local community. Research for this study combined immersion with a close study of Steve Blackwell's personal documents. Personal interviews and correspondence were conducted with Blackwell's family, friends, former band members, and a number of other, unrelated patrons. Primary sources include Blackwell's own documents, such as his letters, journal entries, sketches, working copies of songs, and recording sessions, made available with the gracious permission of the Blackwell family. Few secondary sources were found, save for a few magazine and newspaper articles. After these materials were gathered, a portrait of Steve Blackwell emerged. The evidence supports the view that Blackwell's music was not stylistically unique, although it was competently made and enjoyed by a wide audience. Nevertheless, he was special to his community because of what he was able to accomplish through his music and extroverted personality. This is not a definitive summation of Steve Blackwell's life, but rather a starting point for further research on Blackwell or on the significance of local musicians for their communities.
|
2 |
Diagramas de influência e teoria estatística / Influence Diagrams and Statistical Theory
Stern, Rafael Bassi, 09 January 2009
The main objective of this work is to analyze the controversial concept of information in statistics. To do so, the concept of information according to Basu is first presented. The analysis is then divided into three parts: information in a data set, information in an experiment, and influence diagrams. In the first two parts, we try to define properties that an information function should satisfy in order to be in accordance with Basu's concept. In the first part, we study how the likelihood principle is an equivalence class that follows from believing that trivial experiments bring no information. Metrics satisfying the likelihood principle are also presented and used to analyze an intuitive example. In the second part, the problem becomes that of determining the information of a particular experiment. The relation between Blackwell's sufficiency, trivial experiments, and classical sufficiency is presented. Blackwell's equivalence is also analyzed and its relationship with the likelihood principle is exposed. The metrics presented to evaluate the information in a data set are also adapted to measure the information of an experiment. Finally, a number of symmetries that appeared in the earlier parts are shown to be essential elements of the concept of information. To gain more intuition about these elements, we rewrite them using the graphical tool of influence diagrams. Definitions such as sufficiency, Blackwell's sufficiency, minimal sufficiency, and completeness are thus presented again using only influence diagrams.
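For readers who want the standard formal statements behind the notions discussed above, a minimal sketch follows; these are the usual textbook formulations (the Fisher-Neyman factorization and the likelihood principle), not material quoted from the thesis.

    \[
      \text{Sufficiency (Fisher-Neyman): } \quad
      f_\theta(x) \;=\; g_\theta\bigl(T(x)\bigr)\, h(x),
      \qquad h \text{ not depending on } \theta .
    \]
    \[
      \text{Likelihood principle: } \quad
      f_\theta(x) \;=\; c(x,y)\, f'_\theta(y) \ \ \text{for all } \theta
      \;\;\Longrightarrow\;\;
      x \text{ and } y \text{ carry the same information about } \theta .
    \]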
|
3 |
Renewing Manchester: A supportive life skills center for Manchester's most underprivileged residents
Hamilton, Jennifer Lynne, 01 January 2007
In America today, many people have fallen into sub-standard housing situations. Domestic violence, drug abuse, and lack of educational and employment opportunities are a few of the myriad reasons for this. On average, the number of homeless people in the greater Richmond area is 5,200 individuals. These are people specifically in need of re-integration into society. This thesis examines the role that the built environment can play in this process by providing a sustainable, affordable, and flexible site for a program that encourages people to rise above their current state by "recycling" them into better, more productive citizens. The intent of this design is to provide a program flexible enough to become a prototype for future housing plans involving upward mobility. The existing structure lies in the Manchester district of Richmond, VA. This community comprises many gentrified warehouses and expensive artist lofts, skirted by poverty and the very compromised Blackwell neighborhood. Specifically, this project will serve the needs of the Richmond, VA, community. Richmond, like most American cities, simultaneously houses both affluence and poverty.
|
4 |
Stratégies de descente miroir pour la minimisation du regret et l'approchabilité / Mirror descent strategies for regret minimization and approachability
Kwon, Joon, 18 October 2016
In Chapter I, we present the online linear optimization problem and study Mirror Descent strategies. Chapter II focuses on the case where the Decision Maker has a finite set of actions. We establish in Chapter III that FTPL strategies belong to the Mirror Descent family. In Chapter IV, we construct Mirror Descent strategies for Blackwell's approachability. They are then applied to the construction of optimal strategies for online combinatorial optimization and internal/swap regret minimization. Chapter V studies the regret minimization problem under the additional assumption that the payoff vectors have at most $s$ nonzero components. We show that gains and losses are fundamentally different by deriving optimal regret bounds of different orders for those two cases. Chapter VI studies Blackwell's approachability with partial monitoring. We establish that optimal convergence rates are $O(T^{-1/2})$ in the case of outcome-dependent signals, and $O(T^{-1/3})$ in the general case. Chapter VII defines Mirror Descent strategies in continuous time, for which we establish a no-regret property. A comparison between discrete and continuous time is then conducted. Chapter VIII establishes a universal bound on the variations of bounded convex functions. As a byproduct, we obtain that every bounded convex function is Lipschitz continuous with respect to the Hilbert metric.
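As a concrete point of reference for the mirror descent strategies studied in the thesis, the following Python sketch implements the entropic instance (exponential weights) on the probability simplex for online linear losses. It is a standard textbook construction under the assumption of losses in [0, 1], not code from the thesis, and the step size eta is only a common default.

    import numpy as np

    def exponential_weights(loss_vectors, eta=0.1):
        """Entropic mirror descent (exponential weights) on the simplex.

        loss_vectors: iterable of length-d arrays, the adversary's loss vectors.
        eta: learning rate (a common choice is sqrt(2 * log(d) / T)).
        Returns the sequence of plays and the regret against the best fixed action.
        """
        loss_vectors = [np.asarray(l, dtype=float) for l in loss_vectors]
        d = loss_vectors[0].size
        weights = np.ones(d)              # unnormalized weights, start uniform
        plays, cumulative_loss = [], np.zeros(d)
        incurred = 0.0
        for loss in loss_vectors:
            p = weights / weights.sum()   # current mixed action on the simplex
            plays.append(p)
            incurred += p @ loss          # expected loss of the current play
            cumulative_loss += loss
            weights *= np.exp(-eta * loss)  # multiplicative (entropic) update
        regret = incurred - cumulative_loss.min()
        return plays, regret

    # Example: 3 actions, 100 rounds of random losses in [0, 1]
    rng = np.random.default_rng(0)
    losses = rng.random((100, 3))
    _, regret = exponential_weights(losses, eta=np.sqrt(2 * np.log(3) / 100))
    print(f"regret against best fixed action: {regret:.3f}")

With eta = sqrt(2 log(d) / T), the regret of this strategy against the best fixed action is O(sqrt(T log d)), which is the kind of bound the finite-action case of Chapter II concerns.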
|
5 |
Algorithmes de restauration bayésienne mono- et multi-objets dans des modèles markoviens / Single and multiple object(s) Bayesian restoration algorithms for Markovian models
Petetin, Yohan, 27 November 2013
This thesis focuses on the Bayesian estimation problem for statistical filtering, which consists in estimating hidden states from a history of observations over time in a given stochastic model. The considered models essentially include two classes: hidden Markov chain models and jump Markov state-space systems. In addition, the filtering problem is addressed in its general form, that is, we consider both the single-object and multi-object filtering problems; the latter is addressed in the random finite sets and Probability Hypothesis Density contexts. First, we focus on the class of particle filtering algorithms, which essentially includes the sequential importance sampling and auxiliary particle filter algorithms. We explore the recursive loops for computing the filtering probability density function, and alternative particle filtering algorithms are proposed. The "locally optimal" filtering algorithms, i.e. sequential importance sampling with the optimal conditional importance distribution and the fully adapted auxiliary particle filter, are statistically compared as a function of the parameters of a given stochastic model. Next, variance reduction methods based on the Rao-Blackwell theorem are exploited in the single- and multi-object filtering contexts. These methods are mainly used in single-object filtering when the dimension of the hidden state is large, so we first extend them to Monte Carlo approximations of the Probability Hypothesis Density filter. In addition, alternative variance reduction methods are proposed: although still based on the Rao-Blackwell decomposition, they no longer focus on the spatial aspect of the problem but rather on its temporal one. Finally, we discuss the extension of the classical stochastic models. We first recall pairwise and triplet Markov models and illustrate their interest through several practical examples. We then address the multi-object filtering problem for such models in the random finite sets context. Moreover, the statistical properties of the more general triplet Markov models are used to build new approximations of the optimal Bayesian estimate (in the sense of the mean square error) in jump Markov state-space systems. These new approximations can produce estimates whose performance is comparable to that of particle filters, but at a lower computational cost.
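For readers unfamiliar with the particle filtering algorithms mentioned above, the following Python sketch shows a minimal bootstrap particle filter (sequential importance sampling with multinomial resampling) on a toy scalar linear-Gaussian model. The model, noise levels, and particle count are illustrative assumptions; the code is not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy scalar state-space model (illustrative choices):
    #   x_t = 0.9 * x_{t-1} + process noise,   y_t = x_t + observation noise
    T, N = 50, 500                 # time steps, number of particles
    sig_x, sig_y = 1.0, 0.5

    # Simulate a hidden trajectory and its observations
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + sig_x * rng.standard_normal()
    y = x_true + sig_y * rng.standard_normal(T)

    # Bootstrap particle filter: propagate with the prior kernel,
    # weight by the likelihood, resample to fight weight degeneracy.
    particles = rng.standard_normal(N)
    estimates = np.zeros(T)
    for t in range(T):
        particles = 0.9 * particles + sig_x * rng.standard_normal(N)
        log_w = -0.5 * ((y[t] - particles) / sig_y) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates[t] = np.sum(w * particles)      # filtering mean E[x_t | y_1:t]
        idx = rng.choice(N, size=N, p=w)          # multinomial resampling
        particles = particles[idx]

    print("RMSE of filtering mean:", np.sqrt(np.mean((estimates - x_true) ** 2)))

A Rao-Blackwellized variant would replace part of the sampled state with an exact conditional computation (for instance a Kalman update for a linear-Gaussian sub-state), which is the variance reduction idea the abstract builds on.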
|
6 |
Women in ministry: 1853-1984
Matthews, Leah, January 1985
No description available.
|
7 |
Un théorème limite conditionnel. Applications à l'inférence conditionnelle et aux méthodes d'Importance Sampling / A conditional limit theorem: applications to conditional inference and to importance sampling methods
Caron, Virgile, 16 October 2012
This thesis presents a sharp approximation of the density of long sub-sequences of a random walk conditioned on the value of its endpoint, or on the mean of a function of its increments, as its length tends to infinity. In the large-deviation conditioning regime, this result generalizes the Gibbs conditional principle in the sense that it describes the sub-sequences of the random walk rather than its marginal behaviour. An approximation is also obtained when the conditioning event states that the terminal value of the random walk belongs to a thin set, or to a thick set with non-empty interior. The proposed approximations hold either in probability under the conditional law, or in total variation distance. Two applications are developed. The first concerns the estimation of probabilities of certain rare events via a new importance sampling technique; this case corresponds to large-deviation conditioning. The second explores constructive methods for improving estimators in the spirit of the Rao-Blackwell theorem, and conditional inference under nuisance parameters; the conditioning event then falls within the range of the central limit theorem. The effective choice of the maximal sub-sequence length for which a fixed maximal relative error is attained by the approximation is treated in detail; explicit algorithms allow the effective implementation of this approximation and of its consequences.
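To give a concrete picture of the kind of importance sampling scheme the abstract refers to, here is a minimal Python sketch that estimates a Gaussian tail probability with a mean-shifted proposal. The threshold, sample size, and proposal are illustrative assumptions, and the code is a generic textbook construction rather than the estimator developed in the thesis.

    import math
    import numpy as np

    rng = np.random.default_rng(2)

    # Rare event: p = P(X > a) for X ~ N(0, 1), with a = 5 (p is about 2.9e-7).
    a, n = 5.0, 100_000

    def std_normal_pdf(z):
        return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

    # Crude Monte Carlo: at this sample size the event is essentially never hit.
    crude = np.mean(rng.standard_normal(n) > a)

    # Importance sampling: sample from the mean-shifted proposal N(a, 1) and
    # reweight each sample by the likelihood ratio target_pdf / proposal_pdf.
    x = rng.normal(loc=a, scale=1.0, size=n)
    weights = std_normal_pdf(x) / std_normal_pdf(x - a)
    is_estimate = np.mean((x > a) * weights)

    exact = 0.5 * math.erfc(a / math.sqrt(2.0))
    print("exact tail probability :", exact)
    print("crude Monte Carlo      :", crude)
    print("importance sampling    :", is_estimate)

The mean shift plays the role of an exponential tilting under which the rare event becomes typical, which is exactly the large-deviation conditioning regime mentioned above.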
|
8 |
Sustainable investments: Transparency regulation as a tool to influence investors to choose sustainable investment funds
Petersson, Frida, January 2019
In March 2018 the European Commission published the Action Plan on Financing Sustainable Growth. One of the main objectives of the actions presented in the action plan is to reorient capital flows towards sustainable investments, i.e. to influence more investors to invest sustainably. The action plan was followed by three proposals for transparency regulation regarding an EU taxonomy on sustainability, sustainability benchmarks, and sustainability disclosures. Furthermore, the action plan included actions regarding two other transparency measures: sustainability labels and sustainability ratings. The first purpose of the thesis is to investigate whether transparency regulation in the EU can be used as a tool to influence investors to choose sustainable investment funds. In light of the action plan's aim of reorienting capital flows towards sustainable investments, the Commission's three proposed transparency regulations, as well as the concept of sustainability labels and ratings, are used as a basis for the investigation. The second purpose of the thesis is therefore to critically review the three regulation proposals and the concept of sustainability labels and ratings in order to gain an understanding of how different transparency measures can influence investors to choose sustainable investment funds. The transparency regulations and measures are analysed and critically reviewed in light of their objective to influence more investors to invest sustainably. A behavioural economics perspective, together with consumer behaviour theories and decision-making models, is applied in order to analyse the transparency regulations and measures from an external perspective. Based on the analysis, there are many indicators that transparency regulation can be used as a tool to influence investors to choose sustainable investment funds. However, the extent to which transparency regulation can influence investor behaviour varies depending on which transparency measures are used and how they are designed. Sustainability benchmarks seem to have the least potential to influence investor behaviour, while the EU taxonomy on sustainability and sustainability labels seem to have the best potential.
|