1 |
Indução de léxicos bilíngües e regras para a tradução automática / Induction of translation lexicons and transfer rules for machine translation. Caseli, Helena de Medeiros, 21 May 2007 (has links)
Machine Translation (MT) -- the translation of a natural (source) language into another (target) language by means of computer programs -- is a hard task, mainly due to the deep linguistic knowledge of the two (or more) languages required to build resources such as translation grammars, bilingual dictionaries, etc. The scarcity of linguistic resources, and even the difficulty of producing them, often limits the applicability of MT systems, restricting them, for example, to certain application domains. In this context, several methods have been proposed for generating linguistic knowledge automatically from multilingual resources, making the construction of machine translators less laborious. The ReTraTos project, presented in this document, is one such proposal: it aims at the automatic induction of bilingual lexicons and translation rules from PoS-tagged, lexically aligned parallel corpora for the Portuguese--Spanish and Portuguese--English language pairs. The proposed rule-induction system takes a novel approach in which translation examples are split into alignment blocks and induction is performed for each block separately. Another novel feature of the system is a more elaborate filtering of the induced rules. Besides the bilingual lexicon and translation rule induction systems, an MT module was also implemented to validate the induced resources. The bilingual lexicons were evaluated intrinsically, and the results agree with those reported in the literature. The translation rules were evaluated directly and indirectly through the MT module, and their use improved word-by-word translation in both directions (source--target and target--source) for the languages under study. The translations generated with the resources induced by ReTraTos were also compared with those generated by commercial systems, showing better results for the Portuguese--Spanish pair than for Portuguese--English.
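As a hedged illustration of the kind of lexicon induction the abstract describes, the sketch below induces bilingual lexicon entries from lexically aligned word pairs by simple co-occurrence counting. This is a generic illustration, not the actual ReTraTos algorithm; the function name, threshold and toy data are invented.

```python
from collections import Counter, defaultdict

def induce_lexicon(aligned_pairs, min_prob=0.5):
    """Induce bilingual lexicon entries from (source, target) word alignments.

    Generic sketch: count how often each source word aligns to each target
    word and keep the dominant translation above a probability threshold.
    """
    counts = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        counts[src][tgt] += 1
    lexicon = {}
    for src, tgt_counts in counts.items():
        total = sum(tgt_counts.values())
        # keep candidates whose alignment probability clears the threshold
        candidates = [(t, c / total) for t, c in tgt_counts.items()
                      if c / total >= min_prob]
        if candidates:
            lexicon[src] = max(candidates, key=lambda x: x[1])[0]
    return lexicon

pairs = [("casa", "house"), ("casa", "house"), ("casa", "home"),
         ("gato", "cat"), ("gato", "cat")]
print(induce_lexicon(pairs))  # → {'casa': 'house', 'gato': 'cat'}
```

Real systems additionally exploit PoS tags and handle multi-word alignments; this sketch only shows the counting core.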
|
2 |
Domain-specific lexicon generation for emotion detection from text. Bandhakavi, Anil, January 2018 (has links)
Emotions play a key role in effective and successful human communication. Text is widely used on the internet and on social media to express and share emotions, feelings and sentiments. However, applications and services built to understand emotions from text are limited in effectiveness by their reliance on general-purpose emotion lexicons, which have static vocabularies, and on sentiment lexicons, which can interpret emotions only coarsely. Emotion detection from text therefore calls for methods and knowledge resources that can deal with challenges such as dynamic and informal vocabulary, domain-level variation in emotional expression, and other linguistic nuances. In this thesis we demonstrate how labelled (e.g. blogs, news headlines) and weakly-labelled (e.g. tweets) emotional documents can be harnessed to learn word-emotion lexicons that account for dynamic and domain-specific emotional vocabulary. We model the characteristics of real-world emotional documents and propose a generative mixture model that iteratively estimates the language models best describing the emotional documents using expectation maximization (EM). The proposed mixture model can represent both emotionally charged words and emotion-neutral words. We then use the mixture model to generate a word-emotion lexicon that quantifies word-emotion associations as probability vectors. Secondly, we introduce novel feature extraction methods that exploit the emotion-rich knowledge captured in our word-emotion lexicon; the extracted features are used to classify text into emotion classes with machine learning. We also propose hybrid text representations for emotion classification that combine lexicon-based features with other representations such as n-grams, part-of-speech information and sentiment information. Thirdly, we propose two methods that jointly use an emotion-labelled corpus of tweets and an emotion-sentiment mapping proposed in psychology to learn word-level numerical quantifications of sentiment strength over a positive-to-negative spectrum. Finally, we evaluate all the proposed methods through a variety of emotion detection and sentiment analysis tasks on benchmark data sets covering domains from blogs and news articles to tweets and incident reports.
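To make the EM idea concrete, here is a deliberately simplified two-component sketch: for each emotion, an emotion language model is estimated against a fixed neutral background model, with EM reweighting words by how strongly the emotion component "owns" them. This is not the thesis's actual mixture model; the function name, the fixed mixture weight and the toy data are invented for illustration.

```python
from collections import Counter

def em_emotion_lexicon(docs, iters=20, lam=0.5):
    """Toy two-component EM: emotion LM vs. a fixed background LM.

    `docs` maps an emotion label to a list of tokenised documents.
    Returns, per emotion, a word distribution biased toward words that
    the background model explains poorly (i.e. emotion-bearing words).
    """
    # background model estimated from all tokens of all emotions
    all_tokens = [w for ds in docs.values() for d in ds for w in d]
    bg_counts = Counter(all_tokens)
    total = sum(bg_counts.values())
    bg = {w: c / total for w, c in bg_counts.items()}

    lexicon = {}
    for emo, ds in docs.items():
        counts = Counter(w for d in ds for w in d)
        n = sum(counts.values())
        p = {w: c / n for w, c in counts.items()}  # init with MLE
        for _ in range(iters):
            # E-step: responsibility of the emotion component per word
            resp = {w: lam * p[w] / (lam * p[w] + (1 - lam) * bg[w])
                    for w in counts}
            # M-step: re-estimate the emotion LM from fractional counts
            z = sum(resp[w] * counts[w] for w in counts)
            p = {w: resp[w] * counts[w] / z for w in counts}
        lexicon[emo] = p
    return lexicon
```

On toy data such as `{"joy": [["happy", "the"]], "anger": [["angry", "the"]]}`, the shared word "the" is largely absorbed by the background, so each emotion's lexicon concentrates probability on its distinctive word.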
|
3 |
Indução de léxicos bilíngües e regras para a tradução automática / Induction of translation lexicons and transfer rules for machine translation. Helena de Medeiros Caseli, 21 May 2007 (has links)
|
4 |
An exploration of the restandardization of Sepedi: the inclusion of the Khelobedu dialect. Malatji, Mmatlou Jerida, January 2017 (has links)
Thesis (M.A. (Translation Studies and Linguistics)) -- University of Limpopo, 2017 / The study explored the restandardization of Sepedi with the aim of including the Khelobedu dialectal lexicon in the standard form. The standardization of Sepedi, unlike that of Shona, excluded many of its dialects from the process, and thus left Khelobedu speakers outside this medium; they were later required to learn it in school, putting them at an academic disadvantage. Very few studies have been conducted on restandardization.
This study follows a mixed-methods approach with a sequential design. Data were collected via self-administered questionnaires and face-to-face interviews using an interview guide. A total of 20 participants from four villages in the Mopani District made up the sample for the quantitative phase, while four participants who are language practitioners by profession made up the qualitative phase of the study.
The findings reveal that dialect speakers do not have much confidence in their dialectal variety; they still regard English and Sepedi as the media of development and progress. Although the language practitioners consider restandardization possible, PanSALB still has much to do in developing the indigenous languages of South Africa.
|
5 |
A Content Analysis of Lexicons, Word Lists, and Basal Readers of the Elementary Grades: Their Relation to Art. Hogan, Priscilla Lea, 05 1900 (has links)
In this investigation, a content analysis was made of eleven lexicographical sources and three basal reading series to determine whether art and art-related words were present. The analysis used two charts, each divided into eight categories of word context. The Composite Chart contained 6,576 words found in six lexicons, five word lists, and forty-two readers, and the Reader Chart contained 407 words found only in the readers.
The analysis revealed: dominant categories and percentages, word and cumulative word frequencies, high- and low-frequency words, and the percentage of words found in the basal readers as compared to the lexicographical sources.
|
6 |
Intégration de ressources lexicales riches dans un analyseur syntaxique probabiliste / Integration of lexical resources in a probabilistic parser. Sigogne, Anthony, 03 December 2012 (has links)
This thesis focuses on the integration of lexical and syntactic resources for French into two fundamental tasks of Natural Language Processing (NLP): probabilistic part-of-speech tagging and probabilistic parsing. For French, a wealth of lexical and syntactic data has been created, by automatic processes or by linguists, and a number of experiments have shown the value of such resources for tagging and parsing, since they can significantly improve system performance. In this thesis, we use these resources to address two problems, described briefly below: data sparseness and the automatic segmentation of texts. Thanks to increasingly sophisticated parsing algorithms, parsing accuracy is now high for many languages, including French. However, several problems are inherent in the mathematical formalisms used to model the task statistically (grammars, discriminative models, etc.). Data sparseness is one of them, caused mainly by the small size of the annotated corpora available for the language. Sparseness makes it difficult to estimate the probability of syntactic phenomena that appear in the texts to be analyzed but are rare or absent from the corpus used to train the parser. Moreover, sparseness is in part a lexical problem: the richer the morphology of a language, the sparser the lexicons built from a treebank will be. Our first goal is therefore to mitigate the negative impact of lexical data sparseness on parsing performance. To this end, we investigate a method called word clustering, which groups the words of the corpus and of the texts into classes. These classes reduce the number of unknown words, and hence the number of rare or unknown lexically conditioned syntactic phenomena in the texts to be analyzed. We propose word clusterings based on information drawn from French syntactic lexicons and observe their impact on parser accuracy. Furthermore, most evaluations of probabilistic tagging and parsing have been carried out with a perfect segmentation of the text, identical to that of the evaluated corpus. In real applications, however, the segmentation of a text is rarely available, and current automatic segmenters fall far short of producing high-quality segmentations, because of the many multi-word units (compound words, named entities, etc.). We focus on continuous multi-word units that form lexical units to which a part-of-speech tag can be assigned, and which we call compound words; for example, "cordon bleu" is a compound noun and "tout à fait" a compound adverb. The task of identifying compound words can thus be assimilated to text segmentation. Our second goal therefore concerns the automatic segmentation of French texts and its impact on the performance of automatic processes. We study an approach that couples, within a single probabilistic model, the recognition of compound words with another task, in our case parsing or part-of-speech tagging, so that compound-word recognition is performed within the probabilistic process rather than in a preliminary phase. We propose innovative strategies for integrating compound-word resources into two probabilistic processes that combine tagging or parsing with text segmentation.
|
7 |
Kulturelle Lexika und metaphorische Profile: zu einer semantisch-kognitiven Theorie der Protosem- und Konzemstrukturen lexikalisierter Konzepte der Arbeitswelt. Brundin, Gudrun, January 1999 (has links)
Based on current results in modern cognitive linguistics, this study examines linguistic evidence from the experiential field 'labor' in German and Swedish in order to formulate a theory that more closely describes the relationship between semantic and conceptual structure in lexical concepts. The author maintains that semantic information can be seen as the content of the culture it reflects and that semantic content is represented in culture-specific modes. On the basis of frequency lists, a chronologically organized cultural lexicon is presented for each language and culture area; its aim is to point out central fields of concern in the two language communities at different times. The second part of the investigation deals with the conceptual structure of two culturally relevant types from the cultural lexicon, labor and unemployment. In accordance with modern relativist views, it is argued that linguistic form must be seen as a result of cultural embodiment and that modes of representation in the human mind must also show traces of this embodiment. It is shown that conceptual (and semantic) structures not only reinforce views of the world, but also play a central role in compatibility restrictions at the performance level. The author suggests that semantic and conceptual structures in lexical concepts are distributed in a metaphorical profile specific to each lexical concept, held together on its various levels by so-called protosemes and concernes.
|
8 |
Mesurer et améliorer la qualité des corpus comparables / Measuring and Improving Comparable Corpus Quality. Li, Bo, 26 June 2012 (has links)
Bilingual corpora are essential resources for crossing the language barrier in multilingual Natural Language Processing (NLP) tasks. Most current work makes use of parallel corpora, which are mainly available for major languages and constrained domains. Comparable corpora, text collections comprising documents that cover overlapping information, are much less expensive to obtain in high volume. Previous work has shown that using comparable corpora benefits several NLP tasks. In parallel with that work, this thesis proposes to improve the quality of comparable corpora so as to improve the performance of the applications that exploit them. The idea is attractive because it can be combined with any existing method that relies on comparable corpora. We first discuss the notion of comparability, inspired by experience in using bilingual corpora. This notion motivates several implementations of a comparability measure within a probabilistic framework, as well as a methodology for evaluating the ability of comparability measures to capture gold-standard comparability levels. The comparability measures are also examined for robustness to changes in the dictionary entries. The experiments show that a symmetric measure relying on vocabulary overlap correlates well with gold-standard comparability levels and is robust to dictionary changes. Building on this comparability measure, two methods, a greedy approach and a clustering approach, are then developed to improve the quality of a given comparable corpus. The general idea of both methods is to select the high-quality subpart of the original corpus and to enrich the low-quality subpart with external resources. The experiments show that both methods improve the quality of a given comparable corpus in terms of comparability scores, with the clustering approach being more efficient than the greedy approach. The enhanced comparable corpus in turn yields better bilingual lexicons when extracted with the standard extraction algorithm. Lastly, we investigate Cross-Language Information Retrieval (CLIR) and the application of comparable corpora to this task. We develop novel CLIR models extending the recently proposed information-based models in monolingual IR; the information-based CLIR model gives the best overall performance. Bilingual lexicons extracted from comparable corpora are then combined with an existing bilingual dictionary and used in CLIR experiments, resulting in a significant improvement of the CLIR system.
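A hedged sketch of a symmetric, dictionary-based comparability measure of the kind the abstract describes: the fraction of source-vocabulary words with at least one dictionary translation present in the target vocabulary, averaged with the reverse direction. This is a simplification of the thesis's probabilistic measures; the function name and toy data are invented.

```python
def comparability(src_vocab, tgt_vocab, dictionary):
    """Symmetric vocabulary-overlap comparability between two vocabularies.

    `dictionary` maps a source word to a set of target translations; the
    reverse direction uses the inverted dictionary.
    """
    def coverage(vocab_a, vocab_b, dict_ab):
        if not vocab_a:
            return 0.0
        hits = sum(1 for w in vocab_a
                   if any(t in vocab_b for t in dict_ab.get(w, ())))
        return hits / len(vocab_a)

    # invert the dictionary for the target-to-source direction
    inverted = {}
    for s, translations in dictionary.items():
        for t in translations:
            inverted.setdefault(t, set()).add(s)

    return 0.5 * (coverage(src_vocab, tgt_vocab, dictionary)
                  + coverage(tgt_vocab, src_vocab, inverted))

d = {"maison": {"house"}, "chat": {"cat"}}
print(comparability({"maison", "chat"}, {"house", "dog"}, d))  # → 0.5
```

In practice the vocabularies would be extracted from the two sides of the comparable corpus, and the score would guide which documents to keep or replace.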
|
9 |
Study of foreign hadith words in the first Islamic literature. Zahīr, Jamīlat Bānū (Zaheer, Jameela Banu), 11 1900 (has links)
From the point of view of literary qualities, Prophetic Traditions stand out in Arabic literature. This study aims at selecting some unique words the Prophet used and searching for their presence or absence in the Arabic literature. Although several sources were used, the choice of words relies mainly on An-Nihayah fi gharib al-Athar of Ibn al-Athir, and the comparison on several published works. The objective is to find out how the Prophetic words affected the literature. An analysis is attempted to arrive at the meaning of these words as used in Hadith literature and in the literatures preceding or following it, and to compare them to find whether they have been used at all and, if used, whether in the same meaning or in a unique sense. Thus, this study brings to light differences between Prophetic literature and literatures other than it. / Arabic & Islamic Studies / M.A. (Islamic Studies)
|
10 |
SOBRE ESTRUTURAS LINGUÍSTICAS E PARADIGMAS: AS RELEITURAS RECENTES DE CARNAP E KUHN / ON LINGUISTIC STRUCTURES AND PARADIGMS: THE RECENT REINTERPRETATION OF CARNAP AND KUHN. Silva, Gilson Olegario da, 26 April 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The recent literature in philosophy of science has been reassessing the positivist legacy. One of the items on the agenda is the alleged opposition between the theses put forth by positivists such as Carnap and the so-called "post-positivists", such as Kuhn. Although the latter came to be viewed as a critic of several important positivist theses, more recent authors such as Friedman, Reisch, Earman, Irzik and Grünberg maintain that several of the most characteristic theses of the Kuhnian view of science were already present in Carnap's philosophy. Against this kind of reading, authors such as Oliveira and Psillos argue that within Carnap's philosophy there is no place for Kuhnian theses like incommensurability, holism or the theory-ladenness of observations. The first article of this dissertation presents the reasons for each of those readings and assesses them in view of the perspectives from which they are offered. It argues that it is possible to show that some of Kuhn's theses have a counterpart in the works of Carnap, although those theses vary in importance for Carnap and Kuhn. The second article presents aspects that can be seen as antagonistic in the two views, namely the conceptions that relate to the distinction, made famous by Reichenbach, between the contexts of discovery and justification.
|