  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
661

Analysis of clustered longitudinal count data /

Gao, Dexiang. January 2007 (has links)
Thesis (Ph.D. in Analytic Health Sciences, Department of Preventive Medicine and Biometrics) -- University of Colorado Denver, 2007. / Typescript. Includes bibliographical references (leaves 75-77). Free to UCD affiliates. Online version available via ProQuest Digital Dissertations;
662

Armor of patience : the National Cancer Institute and the development of medical research policy in the United States, 1937-1971 /

Erdey, Nancy Carol. January 1995 (has links)
Thesis (Ph. D.)--Case Western Reserve University, 1995. / Includes bibliographical references (leaves 191-203). Issued also online.
663

Automatisierte Verfahren für die Themenanalyse nachrichtenorientierter Textquellen / Automated methods for the topic analysis of news-oriented text sources

Niekler, Andreas 20 January 2016 (has links) (PDF)
Topic analysis is an important component of content analysis in media studies. For analysing large digital text collections with respect to their thematic structure, it is therefore important to examine the potential of automated, computer-assisted methods, while respecting and reproducing the methodological and analytical requirements of content analysis, which also apply to topic analysis. This thesis investigates the possibilities of automating topic analysis and the prospects for its application. It draws on the theoretical and methodological foundations of content analysis and on linguistic theories of topic structure in order to derive requirements for an automated analysis. The main contribution is an investigation of the potential and the tools from data mining and text mining that can be employed helpfully and profitably for content-analytic work on text databases. An exemplary analysis is also carried out to demonstrate the applicability of automated methods to topic analyses. The thesis further demonstrates ways of using interactive interfaces, formulates the concept and implementation of suitable software, and presents a possible workflow for topic analysis. By laying out the potential of automated topic investigations in large digital text collections, the thesis contributes to research on automated content analysis.

Starting from the requirements placed on a topic analysis, the thesis shows which text-mining methods and automated procedures can come close to meeting them. Two requirements stand out, each of whose fulfilment influences the other. On the one hand, a quick thematic overview of the topics in a complex document collection is required, in order to map its content structure and to contrast topics with one another. On the other hand, the topics must be representable at a sufficient level of detail to allow an analysis of the sense and meaning of their content. Both demands have methodological roots in the quantitative and qualitative traditions of content analysis. The thesis discusses these parallels and relates automated procedures and algorithms to the requirements. Methods can be identified that separate the data semantically, and thus thematically, and that provide an abstracted overview of large document collections; these include topic models and clustering procedures. With such algorithms it is possible to produce thematically coherent subsets of a document collection and to make their thematic content available for summaries. It is shown that, despite this distanced view, topics remain distinguishable and that their frequencies and distributions in a text collection can be presented diachronically. Data prepared in this way allow the analysis of thematic trends or the selection of particular thematic aspects from a mass of documents; diachronic views of thematically coherent document sets become possible, and the temporal frequencies of topics can be analysed.

For the detailed interpretation and summary of topics, further representations and information must be derived from the topics' contents. It is shown that meanings, statements and contexts can be made visible through a co-occurrence analysis of the documents belonging to a topic. In a variant that respects reading direction and parts of speech, frequently occurring word sequences or statements within a topic can be captured statistically. The phrases generated in this way can be used to define categories or can be contrasted with other topics, publications or theoretical assumptions. In addition, diachronic analyses of individual words, word groups or proper names within a topic are suited to identifying topic phases, key terms or news factors. The information obtained in this way can be complemented by close reading of thematically relevant documents, which the thematic separation of the document sets makes possible. Beyond these methodological perspectives, the automated analyses can serve as empirical measurement instruments in the context of further communication-science theories not discussed here. The thesis also shows that graphical interfaces and software frameworks for carrying out automated topic analyses can be realised and used in practice, and it indicates how the solutions and approaches discussed can be transferred into practice.

The thesis makes substantial contributions to research on automated content analysis and, above all, documents the scholarly engagement with automated topic analyses. In the course of this work the author developed suitable procedures for applying text-mining methods to content analysis in practice, including contributions to the visualisation and easy use of different methods. Procedures from topic modelling, clustering and co-occurrence analysis had to be adapted so that they can be applied in content-analytic settings. Further contributions arose from the methodological positioning of computer-assisted topic analysis and from the definition of innovative applications in this area. All experiments and investigations for this thesis were carried out entirely in software developed for the purpose, which is also used successfully in other projects. Around this system, processing pipelines, data storage, visualisation, graphical interfaces, facilities for interacting with the data, machine-learning procedures and components for document retrieval were implemented. As a result, the complex methods and procedures for automated topic analysis become easy to apply and are available in a user-friendly form for future projects and analyses. Social scientists, political scientists and communication scholars can work with the software environment and carry out content analyses without having to master the details of the automation and computer support.
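As a rough illustration of the co-occurrence statistics described above, the sketch below counts sentence-level word co-occurrences in a tiny invented corpus. It is a generic example, not the software developed in the thesis; there, such counts, restricted to the documents of one topic, are used to surface typical statements and key terms.

```python
# Illustrative sketch only: sentence-level co-occurrence counting.
from collections import Counter
from itertools import combinations

sentences = [
    "die regierung kündigt eine reform der energiepolitik an",
    "die opposition kritisiert die reform der energiepolitik",
    "die regierung verteidigt die geplante reform",
]

cooc = Counter()
for s in sentences:
    tokens = sorted(set(s.split()))       # word types occurring in this sentence
    cooc.update(combinations(tokens, 2))  # unordered co-occurring pairs

# The most frequent pairs hint at recurring statements within the (toy) topic.
for (w1, w2), n in cooc.most_common(5):
    print(f"{w1} + {w2}: {n}")
```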
664

Qualidade dos ensaios clínicos aleatórios em cirurgia plástica / Quality of randomized clinical trials in plastic surgery

Veiga Filho, Joel [UNIFESP] January 2001 (has links) (PDF)
Made available in DSpace on 2015-12-06T23:02:09Z (GMT). Previous issue date: 2001 / Quality of randomized clinical trials in Plastic Surgery. Context: Assessing the quality of randomized clinical trials is important because identifying their errors and flaws allows us to avoid them in the planning, conduct, analysis and publication of future studies, and it is essential for determining how reliable the results of published studies are. Objective: To evaluate the quality of randomized clinical trials in Plastic Surgery. The hypothesis tested was that the studies are of poor quality. Type of study: Descriptive study, with the assessment performed independently by two researchers and followed by a consensus meeting. Sample selection: Randomized clinical trials in Plastic Surgery with adequately described allocation concealment, performed by or with the participation of at least one plastic surgeon, were identified through electronic searches of the LILACS, MEDLINE, EMBASE and CCTR databases. Variable studied: Quality of the randomized clinical trials, assessed with the Delphi List, a quality scale (JADAD et al., 1996) and five complementary items. Results: Of the 139 studies published as randomized clinical trials, 63% (88/139) did not describe allocation concealment, in 17% (23/139) allocation concealment was inadequate, and 20% (28/139) described it adequately. Of the 28 randomized clinical trials with adequately described allocation concealment, 25% did not describe the generation of the allocation sequence, 82% did not describe losses and exclusions, 68% did not report whether the groups were comparable, 50% did not specify inclusion and exclusion criteria, 68% did not present measures of variability and point estimates for the primary variable, and 61% did not present an intention-to-treat analysis. On the quality scale (JADAD et al., 1996), 71% (20/28) scored two points or fewer. Conclusion: The randomized clinical trials in Plastic Surgery are of poor quality. / BV UNIFESP: Teses e dissertações
665

Influencers på Instagram : En jämförande studie om hur olika sorters influencers påverkar konsumenters varumärkesuppfattning / Influencers on Instagram: A comparative study of how different types of influencers affect consumers' brand perception

Abou Khaled, Walle, Baban, Dylan January 2018 (has links)
Aim: The purpose of the study is to investigate how consumers' brand perception is affected by influencer marketing with respect to influencers' number of followers and thematic focus. Method: A quantitative approach was used, collecting 173 responses through a web-based questionnaire. After screening against our sampling criteria, 153 responses were processed in SPSS, where descriptive statistics, correlation analysis and cluster analysis were carried out; the results were then discussed in order to draw further conclusions. Conclusion: The study shows that both thematic influencers and influencers with many followers have a positive impact on consumers' brand perception. Influencers with large follower counts contribute a more positive impact on consumers' brand perception than thematic influencers do. For maximum effect, marketers should use an influencer who both has a large follower count and focuses on a specific theme. Contribution: The study's contribution to marketing research is increased knowledge of how different types of influencers affect consumers' brand perception. It shows that follower count weighs more heavily than theme in determining how positively a consumer's perception of a brand is affected.
666

Explorer et apprendre à partir de collections de textes multilingues à l'aide des modèles probabilistes latents et des réseaux profonds / Mining and learning from multilingual text collections using topic models and word embeddings

Balikas, Georgios 20 October 2017 (has links)
Text is one of the most pervasive and persistent sources of information. Content analysis of text, in its broad sense, refers to methods for studying and retrieving information from documents. Nowadays, with ever-increasing amounts of text becoming available online in several languages and different styles, content analysis of text is of tremendous importance, as it enables a variety of applications. To this end, unsupervised representation-learning methods such as topic models and word embeddings are prominent tools. The goal of this dissertation is to study and address challenging problems in this area, focusing both on the design of novel text-mining algorithms and tools and on how these tools can be applied to text collections written in one or several languages.

In the first part of the thesis we focus on topic models and, more precisely, on how to incorporate prior information about text structure into such models. Topic models are built on the bag-of-words premise, so words are treated as exchangeable. While this assumption simplifies the calculation of the conditional probabilities, it results in a loss of information. To overcome this limitation we propose two mechanisms that extend topic models by integrating knowledge of text structure. We assume that documents are partitioned into thematically coherent text segments. The first mechanism assigns the same topic to all words of a segment. The second capitalizes on the properties of copulas, a tool used mainly in economics and risk management to model the joint probability density of random variables while having access only to their marginals.

The second part of the thesis explores bilingual topic models for comparable corpora with explicit document alignments. Typically, a document collection for such models comes in the form of comparable document pairs: the documents of a pair are written in different languages and are thematically similar. Unless they are translations, the documents of a pair are similar only to some extent, yet representative bilingual topic models assume that they have identical topic distributions, which is a strong and limiting assumption. To overcome it we propose novel bilingual topic models that incorporate a notion of cross-lingual similarity between the documents of a pair into their generative and inference processes. Estimating this cross-lingual document similarity is a task in itself, which we propose to address using cross-lingual word embeddings.

The last part of the thesis concerns the use of word embeddings and neural networks for three text-mining applications. First, we address polylingual document classification, arguing that translations of a document can be used to enrich its representation; using an autoencoder to obtain robust document representations, we demonstrate improvements on a multi-class document classification task. Second, we explore multi-task sentiment classification of tweets, arguing that jointly training classification systems on correlated tasks can improve performance, and we show how to achieve state-of-the-art results on a sentiment classification task using recurrent neural networks. The third application is cross-lingual information retrieval: given a document written in one language, the task is to retrieve the most similar documents from a pool of documents written in another language. In this line of research, we show that adapting the transportation problem to the task of estimating document distances yields substantial improvements.
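The "transportation problem" formulation of document distance mentioned at the end of the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the author's code: it solves a small optimal-transport problem between the word histograms of two documents with SciPy's linear-programming solver, and the random vectors below merely stand in for the bilingual word embeddings such a method would assume.

```python
# Minimal optimal-transport document distance (Word Mover's Distance flavour).
import numpy as np
from scipy.optimize import linprog

def document_distance(p, q, emb_a, emb_b):
    """Minimum cost of moving the word mass of document A onto document B,
    where moving mass between two words costs the distance between their embeddings."""
    n, m = len(p), len(q)
    cost = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=2)
    # Row sums of the transport plan must equal p, column sums must equal q.
    a_eq = []
    for i in range(n):
        row = np.zeros((n, m))
        row[i, :] = 1.0
        a_eq.append(row.ravel())
    for j in range(m):
        col = np.zeros((n, m))
        col[:, j] = 1.0
        a_eq.append(col.ravel())
    res = linprog(cost.ravel(), A_eq=np.vstack(a_eq),
                  b_eq=np.concatenate([p, q]), bounds=(0, None), method="highs")
    return res.fun

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])        # normalised word frequencies, document A
q = np.array([0.4, 0.4, 0.2])        # normalised word frequencies, document B
emb_fr = rng.normal(size=(3, 50))    # stand-ins for bilingual word embeddings
emb_en = rng.normal(size=(3, 50))
print(document_distance(p, q, emb_fr, emb_en))
```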
667

Automatic Identification of Duplicates in Literature in Multiple Languages

Klasson Svensson, Emil January 2018 (has links)
As the number of books available online grows, these collections are growing larger at the same pace and increasingly span multiple languages. Many of these corpora contain duplicates in the form of different editions or translations of the same book. Finding these duplicates is usually done manually, but the growing collection sizes make this time-consuming and demanding. The thesis set out to find a method from the fields of Text Mining and Natural Language Processing that can automate the manual identification of these duplicates in a corpus, provided by Storytel, consisting mainly of fiction in multiple languages. The problem was approached using three different methods to compute distance measures between books. The first approach compared book titles using the Levenshtein distance. The second approach extracted entities from each book using Named Entity Recognition, represented them with tf-idf vectors, and used cosine dissimilarity to compute distances. The third approach used a Polylingual Topic Model to estimate each book's distribution over topics and compared the distributions using the Jensen-Shannon distance. To estimate the parameters of the Polylingual Topic Model, 8,000 books were translated from Swedish to English using Apache Joshua, a statistical machine translation system. For each method, every pair of books written by the same author was tested using a hypothesis test in which the null hypothesis was that the two books are not editions or translations of each other. Since there is no known distribution to assume as the null distribution, a null distribution was estimated for each book from distance measures to books not written by that author. The methods were evaluated on two sets of data manually labeled by the author of the thesis: one randomly sampled using one-stage cluster sampling, and one consisting of books from authors that the corpus provider had, prior to the thesis, considered more difficult to label with automated techniques. Of the three methods, title matching performed best in terms of accuracy and precision on the sampled data. The entity-matching approach had the lowest accuracy and precision but an almost constant recall of around 50%. It was concluded that there appears to be a set of duplicates that is clearly distinguishable from the estimated null distributions; with a higher significance level, better precision and accuracy could have been achieved with similar recall for that method. Topic matching performed worse than title matching, and on inspection the estimated model was not able to produce quality topics, due to multiple factors; it was concluded that further research is needed for the topic-matching approach. None of the three methods was deemed a complete solution for automating the detection of book duplicates.
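A minimal sketch of the title-matching idea (not the thesis implementation): a normalised Levenshtein distance between titles, judged against an empirical null distribution of distances between titles assumed not to be duplicates. The titles and threshold logic here are invented for illustration.

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalised_distance(a: str, b: str) -> float:
    return levenshtein(a.lower(), b.lower()) / max(len(a), len(b), 1)

# Empirical null: distances between titles of books that are NOT duplicates.
null_titles = ["Gösta Berlings saga", "Nils Holgerssons underbara resa",
               "Doktor Glas", "Röda rummet"]
null = np.array([normalised_distance(x, y)
                 for i, x in enumerate(null_titles)
                 for y in null_titles[i + 1:]])

candidate = normalised_distance("Gösta Berlings saga", "Gösta Berling's Saga")
# A small p-value means the pair is unusually similar: flag it as a likely duplicate.
p_value = np.mean(null <= candidate)
print(round(candidate, 3), round(p_value, 3))
```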
668

Příznakový slovosled v italštině a jeho překlady do češtiny. / The marked word-order in Italian and its Czech translations.

BUZKOVÁ, Aneta January 2015 (has links)
This diploma thesis deals with marked word order in Italian and its translation into Czech. The work is divided into two parts. The first, theoretical part opens with an account of how linguistic approaches to word order have developed, and then focuses on describing the basic parameters of Czech and Italian word order with respect to marked structures. The second, analytical part works with selected examples of Italian marked word order and their translations into Czech. The comparative word-order examples in the analytical part are drawn from a spoken corpus, a fictional text and the parallel corpus InterCorp. The aim of the thesis is, first, to compare Italian marked word order with Czech and, second, to map the general principles of its translation into Czech. Conclusions and recommendations for translation into Czech are drawn.
669

Téma holocaustu v literatuře pro děti a mládež / The theme of the Holocaust in children's literature

MAGYAROVÁ, Aneta January 2017 (has links)
The topic of this diploma thesis is the Holocaust in literature aimed at children and young adults, from the Second World War to the present. The development of this literature is explored, but the main emphasis is placed on individual authors and the analysis of their works, based on a comparison of the literary elements in them. The concluding part focuses on the presence of the Shoah as a topic in workbooks used at the second stage of elementary schools.
670

The institutional pluralism of the state

Holperin, Michelle Moretzsohn 05 June 2017 (has links)
Made available in DSpace on 2017-07-07T18:38:02Z (GMT). Previous issue date: 2017-06-05 / What are the logics that public organizations enact in their daily activities? This doctoral dissertation investigated the institutional logics of the State. The concept of institutional logic adopted is that of Friedland and colleagues: institutional logics are 'stable constellations of practice', the necessary coupling of substances and material practices that constitutes an institution's organizing principles (Friedland et al., 2014). The State is understood as one of the central institutions of society, composed of two dimensions. One is the bureaucratic dimension, permeated by different ideas about how things should be done in the State. The other is the capitalist dimension, permeated by different ideas about what should be done, i.e., what the role of the State should be. I have chosen a specific type of public organization to explore the logics of the State: the Brazilian independent regulatory agencies (IRAs). IRAs have diffused widely in recent years, and the literature suggests that they represent the 'appropriate model of governance' of the capitalist economy (Levi-Faur, 2005). They changed both how things were done, emphasizing the state's rule-making instruments, and what should be done, focusing on promoting competition and correcting market failures (Majone, 1994). In Brazil, IRAs were part of a broader process of State Reform and represented an important innovation in terms of organizational design, based on autonomy, and of the role to be performed, based on promoting competition. However, the diffusion of IRAs was strongly shaped by the local context, and despite being idealized as purely regulatory, their policies and activities indicate that they do much more than promote competition. In fact, state policies in general, and regulatory policies in particular, 'are rooted in changing conceptions of what the state is, what it can and should do' (Friedland & Alford, 1991). To assess the institutional logics of the State, this research investigated over 9,000 press releases published by three formal independent regulatory agencies in Brazil between 2002 and 2016; these press releases cover all the news the agencies have released since their creation. Press releases are used frequently by Brazilian IRAs, and they serve as a good proxy for the policies and activities these agencies conduct. I applied a correlated topic model (CTM) to extract the main themes discussed by the agencies over the past years. Originating in natural language processing and machine learning, topic models are probabilistic models that uncover the semantic structure of a collection of documents, or corpus (Blei, 2012; Blei, Ng & Jordan, 2003).
Unlike other content analysis techniques, topic models are purely inductive and conform to the 'relationality of meaning' assumption of the institutional logics literature (DiMaggio, Nag & Blei, 2013). The results indicate that the logics enacted by independent agencies do not refer only to procedural correctness (Meyer & Hammerschmid, 2006) or democracy (Ocasio, Mauskapf & Steele, 2015). In fact, much of what they do is grounded in broader substantive values, reflecting developmental, pro-competition and social-oriented interpretations of the role of the State. Yet the bureaucratic logic is pervasive within IRAs: it permeates the substantive logics, but it also stands as a logic in its own right. Regulatory agencies enact it more often when they are unable to perform their substantive mission. During periods of crisis, IRAs re-frame at their discretion the practices of administrative policing (standard setting and inspections) and public participation (procedural fairness) in order to justify their actions. By doing so, they were able to legitimate their existence, gain a new sense of mission and avoid blame for their actions.
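For readers unfamiliar with topic models, the sketch below shows the general shape of such an analysis on a few invented press-release snippets. It is not the dissertation's pipeline: scikit-learn only ships plain LDA, which stands in here for the correlated topic model (CTM) the study actually used.

```python
# Toy topic-model run over invented press-release snippets (LDA as a CTM stand-in).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

press_releases = [
    "agency opens public consultation on broadband quality rules",
    "inspection finds airport operator violated service standards",
    "new resolution promotes competition in the mobile market",
    "agency fines operator for breaching consumer protection rules",
    "public hearing scheduled on tariff review and price caps",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(press_releases)            # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)                # per-document topic mixtures

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```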
