111 |
Le discours politique relatif à l'aménagement linguistique en France (1997-2002) / Political discourse regarding language management in France (1997-2002). Cherkaoui Messin, Kenza. 03 December 2009 (has links)
L’histoire de France est marquée depuis le XVIe siècle par l’uniformisation linguistique. La République a ouvert son ère par une Terreur politique qui s’est accompagnée de Terreur linguistique. Depuis, France et français sont intimement liés dans l’organisation comme dans les imaginaires politiques. Or, à un moment récent et bref de l’histoire de France, lors de la XIème législature [1997-2002], le débat a émergé quant à l’opportunité de reconnaitre une diversité linguistique de moins en moins importante sur le territoire national, les locuteurs des langues régionales disparaissant progressivement par un pur effet démographique. En effet, le débat sur la Charte européenne des langues régionales ou minoritaires [1999] puis sur le statut de la Corse [2001] a occupé la scène politique et médiatique française comme rarement les questions de statut des langues en France l’avaient fait. La multiplicité des lieux d’expression et des conditions de production et de réception des discours politiques a nécessité, pour aborder ce que les médias nomment « la classe politique » et que nous définissons comme une communauté discursive, la construction d’un corpus fortement hétérogène. Séances parlementaires à l’Assemblée nationale ou au Sénat, rapports, avis, projets ou propositions de loi, questions au gouvernement, mais également expression de la communauté discursive des hommes et des femmes politiques dans la presse écrite et audiovisuelle ont été réunis pour tenter de saisir le débat dans son ensemble. L’hétérogénéité constitutive du corpus a justifié un traitement différencié des sous-corpus, en fonction de leur lieu de production et de leurs conditions de transmission : le corpus parlementaire, représentant plus de 250 000 mots, a fait l’objet d’un traitement automatique par Lexico3, ce qui a permis d’entrer dans le corpus. Le traitement lexicométrique de l’ensemble parlementaire et le traitement manuel des corpus médiatiques ont été articulés de manière féconde : une analyse de discours à entrée lexicale a été possible grâce à la façon dont le traitement automatique a mis en valeur des phénomènes de catégorisation opérés par les locuteurs au moyen du lexique. L’approche lexico-sémantique a été complétée d’une cartographie des arguments en présence : la communauté discursive des hommes politiques dessine des imaginaires sociodiscursifs. Des idéologies concurrentes de ce qu’est la Nation et de son devenir s’opposent alors. / French history has been shaped by language standardisation since the 16th century. The French Republic opened its era with a political Terror that was accompanied by a linguistic Terror. Since then, France and French have been intertwined both institutionally and in collective political representations. Yet at a recent and brief moment in French history, during L. Jospin's term as Prime Minister [1997-2002], France debated whether to acknowledge its language diversity. Although this diversity is fading away for purely demographic reasons, it enjoys strong social support. In 1999, with the opportunity of signing the European Charter for Regional or Minority Languages, and in 2001, when a possible new status for Corsica was debated, a language debate finally took place in France. To study this debate, we built a corpus designed to capture all accessible discourse produced by French political personnel, treated here as a discursive community. The result is a highly heterogeneous corpus in which parliamentary debates, reports, law proposals and related documents adjoin excerpts from the written and audiovisual media. This heterogeneity required treating the data differently according to its origin: the large parliamentary corpus [approx. 250,000 words] underwent statistical processing with Lexico3. This lexicometric analysis was articulated with a manual analysis of the smaller media corpus, drawing on the lexical categorisation phenomena that the statistical processing brought to light. The lexico-semantic approach was completed by an analysis of the arguments deployed by the different sides of the discursive community, as well as by an exploration of their collective representations of language management. Competing ideologies of the Nation and of its future emerge from the debate, on a much wider scale than language alone [the country's unity, human rights, diversity, etc.].
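Lexicometric software such as Lexico3 typically ranks vocabulary by how strongly it is over- or under-used in one sub-corpus relative to another. The following minimal Python sketch shows that kind of keyness computation with a log-likelihood score; the two toy word lists stand in for parliamentary versus media discourse and are invented for illustration, not drawn from the thesis corpus.

    import math
    from collections import Counter

    def log_likelihood(a, b, total_a, total_b):
        """Dunning log-likelihood keyness for a word seen a times in sub-corpus A
        (size total_a) and b times in sub-corpus B (size total_b)."""
        e1 = total_a * (a + b) / (total_a + total_b)
        e2 = total_b * (a + b) / (total_a + total_b)
        ll = 0.0
        if a > 0:
            ll += a * math.log(a / e1)
        if b > 0:
            ll += b * math.log(b / e2)
        return 2 * ll

    # Hypothetical sub-corpora standing in for parliamentary vs. media discourse.
    parliament = "la langue de la republique est le francais la charte menace la nation".split()
    media = "les langues regionales sont une richesse la diversite est un droit".split()

    fa, fb = Counter(parliament), Counter(media)
    na, nb = sum(fa.values()), sum(fb.values())
    keyness = {w: log_likelihood(fa[w], fb[w], na, nb) for w in set(fa) | set(fb)}
    for word, score in sorted(keyness.items(), key=lambda x: -x[1])[:5]:
        print(f"{word}\t{score:.2f}")

On a real corpus the same scoring, applied to lemmas or repeated segments, is what surfaces the lexical categorisations that the discourse analysis then examines.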
|
112 |
3D urban cartography incorporating recognition and temporal integration / Cartographie urbaine 3D avec reconnaissance et intégration temporelle. Aijazi, Ahmad Kamal. 15 December 2014 (has links)
Au cours des dernières années, la cartographie urbaine 3D a suscité un intérêt croissant pour répondre à la demande d’applications d’analyse des scènes urbaines tournées vers un large public. Conjointement, les techniques d’acquisition de données 3D progressaient. Les travaux concernant la modélisation et la visualisation 3D des villes se sont donc intensifiés. Des applications fournissent au plus grand nombre des visualisations efficaces de modèles urbains à grande échelle sur la base des imageries aérienne et satellitaire. Naturellement, la demande s’est portée vers des représentations avec un point de vue terrestre pour offrir une visualisation 3D plus détaillée et plus réaliste. Intégrées dans plusieurs navigateurs géographiques comme Google Street View, Microsoft Visual Earth ou Géoportail, ces modélisations sont désormais accessibles et offrent une représentation réaliste du terrain, créée à partir des numérisateurs mobiles terrestres. Dans des environnements urbains, la qualité des données obtenues à partir de ces véhicules terrestres hybrides est largement entravée par la présence d’objets temporairement statiques ou dynamiques (piétons, voitures, etc.) dans la scène. La mise à jour de la cartographie urbaine via la détection des modifications et le traitement des données bruitées dans les environnements urbains complexes, l’appariement des nuages de points au cours de passages successifs, voire la gestion des grandes variations d’aspect de la scène dues aux conditions environnementales constituent d’autres problèmes délicats associés à cette thématique. Plus récemment, les tâches de perception s’efforcent également de mener une analyse sémantique de l’environnement urbain pour renforcer les applications intégrant des cartes urbaines 3D. Dans cette thèse, nous présentons un travail supportant le passage à l’échelle pour la cartographie 3D urbaine automatique incorporant la reconnaissance et l’intégration temporelle. Nous présentons en détail les pratiques actuelles du domaine ainsi que les différentes méthodes, les applications, les technologies récentes d’acquisition des données et de cartographie, ainsi que les différents problèmes et les défis qui leur sont associés. Le travail présenté se confronte à ces nombreux défis, mais principalement à la classification de l’environnement urbain, à la détection automatique des changements, à la mise à jour efficace de la carte et à l’analyse sémantique de l’environnement urbain. Dans la méthode proposée, nous effectuons d’abord la classification de l’environnement urbain en éléments permanents et temporaires. Les objets classés comme temporaires sont ensuite retirés du nuage de points 3D, laissant une zone perforée dans le nuage de points 3D. Ces zones perforées, ainsi que d’autres imperfections, sont ensuite analysées et progressivement éliminées par une mise à jour incrémentale exploitant le concept de multiples passages. Nous montrons que la méthode d’intégration temporelle proposée permet également d’améliorer l’analyse sémantique de l’environnement urbain, notamment les façades des bâtiments. Les résultats, évalués sur des données réelles en utilisant différentes métriques, démontrent non seulement que la cartographie 3D résultante est précise et bien mise à jour, qu’elle ne contient que les caractéristiques permanentes exactes et sans imperfections, mais aussi que la méthode est également adaptée pour opérer sur des scènes urbaines de grande taille.
La méthode est adaptée pour des applications liées à la modélisation et la cartographie du paysage urbain nécessitant une mise à jour fréquente de la base de données. / Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community, due to an ever increasing demand for urban landscape analysis for different popular applications coupled with advances in 3D data acquisition technology. As a result, in the last few years, work on the 3D modeling and visualization of cities has intensified. Lately, applications have been very successful in delivering effective visualizations of large scale models based on aerial and satellite imagery to a broad audience. This has created a demand for ground-based models as the next logical step to offer 3D visualizations of cities. Integrated in several geographical navigators, like Google Street View, Microsoft Visual Earth or Geoportail, several such models are accessible to a large public, who enthusiastically view the realistic representation of the terrain created by mobile terrestrial image acquisition techniques. However, in urban environments, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporarily stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include efficient updating of the urban cartography, effective change detection in the urban environment, processing noisy data in the cluttered urban environment, matching / registration of point clouds in successive passages, and wide variations in environmental conditions. Another aspect that has attracted a lot of attention recently is the semantic analysis of the urban environment, needed to semantically enrich the 3D maps of cities used by various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration. We present in detail the current practices in the domain along with the different methods, applications, recent data acquisition and mapping technologies, as well as the different problems and challenges associated with them. The work presented addresses many of these challenges, mainly pertaining to the classification of the urban environment, automatic change detection, efficient updating of the 3D urban cartography and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. The objects classified as temporary are then removed from the 3D point cloud, leaving behind a perforated 3D point cloud of the urban environment. These perforations, along with other imperfections, are then analyzed and progressively removed by incremental updating exploiting the concept of multiple passages. We also show that the proposed method of temporal integration helps improve the semantic analysis of the urban environment, especially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results not only demonstrate the efficiency, scalability and technical strength of the method, but also that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updating.
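As a rough illustration of the two core steps described above (removing points classified as temporary, then filling the resulting perforations by merging later passages), here is a minimal Python/NumPy sketch; the labels, class names and voxel-grid merging rule are simplifications invented for the example, not the actual classifiers or update rules of the thesis.

    import numpy as np

    def remove_temporary(points, labels, temporary_classes={"pedestrian", "car"}):
        """Keep only points whose semantic label is considered permanent."""
        mask = np.array([lab not in temporary_classes for lab in labels])
        return points[mask]

    def incremental_update(base, new_pass, voxel=0.5):
        """Merge a new passage into the base cloud, keeping one point per voxel.
        This mimics, very roughly, filling perforations left by removed objects."""
        merged = np.vstack([base, new_pass])
        keys = np.floor(merged / voxel).astype(int)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return merged[np.sort(idx)]

    # Hypothetical data: 3D points from two passages, with per-point labels for the first.
    pass1 = np.random.rand(1000, 3) * 50
    labels1 = np.random.choice(["building", "road", "car", "pedestrian"], size=1000)
    pass2 = np.random.rand(800, 3) * 50

    permanent = remove_temporary(pass1, labels1)
    updated_map = incremental_update(permanent, pass2)
    print(permanent.shape, updated_map.shape)

In the real system the classification is learned rather than given, and the update step reasons about occupancy over multiple passages instead of a simple voxel merge, but the data flow is the same.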
|
113 |
Исследование и разработка прототипа вопросно-ответной системы : магистерская диссертация / Research and development of a question-answering system prototype. Алейникова, А. А., Aleinikova, A. A. January 2023 (has links)
В рамках данной работы было проведено исследование существующих типов вопросно-ответных систем и методов анализа текста. Был проведен анализ существующих вопросно-ответных систем. Приведена обобщённая схема работы вопросно-ответных систем и для каждого типа систем приведена детальная схема работы. Описаны и исследованы методы выбора кандидатов ответа и методы их оценки. Также в работе описаны возможные критерии оценки работы таких систем. В ходе исследования был разработан рабочий прототип вопросно-ответной системы, основанный на системе BERT для русского языка. Используемая модель RuBERT была предобучена и протестирована на стандартных задачах SQuAD. В ходе работы модель была протестирована и оценена в разработанном рабочем прототипе и показала высокие результаты по предложенным критериям оценки. / Within the framework of this work, a study was made of the existing types of question-answer systems and text analysis methods. An analysis of the existing question-answer systems was carried out. A generalized scheme of operation of question-answer systems is given, and a detailed scheme of operation is given for each type of system. Methods for selecting response candidates and methods for their evaluation are described and investigated. The paper also describes possible criteria for evaluating the operation of such systems. In the course of the study, a working prototype of a question-answer system based on the BERT system for the Russian language was developed. The RuBERT model used was pre-trained and tested on standard SQuAD problems. During the work, the model was tested and evaluated in the developed working prototype and showed high results according to the proposed evaluation criteria.
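A minimal sketch of how such an extractive question-answering prototype can be wired together with the Hugging Face transformers library is shown below. With no model argument the pipeline falls back to an English SQuAD checkpoint; the thesis's Russian setup would substitute a RuBERT model fine-tuned on a SQuAD-style Russian dataset (that substitution, and the example texts, are assumptions for illustration).

    from transformers import pipeline

    # Default checkpoint is an English extractive QA model trained on SQuAD;
    # for Russian, a RuBERT checkpoint fine-tuned on a SQuAD-style dataset
    # would be passed via the `model` argument (the exact name is an assumption).
    qa = pipeline("question-answering")

    context = ("A question answering system receives a question and a passage of text, "
               "and returns the span of the passage that best answers the question.")
    question = "What does a question answering system return?"

    result = qa(question=question, context=context)
    print(result["answer"], round(result["score"], 3))

The candidate-selection and evaluation methods discussed above operate on exactly this kind of (answer span, confidence score) output.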
|
114 |
Avaliação automática de questões discursivas usando LSA. SANTOS, João Carlos Alves dos. 05 February 2016 (has links)
Este trabalho investiga o uso de um modelo usando Latent Semantic Analysis (LSA) na avaliação automática de respostas curtas, com média de 25 a 70 palavras, de questões discursivas. Com o surgimento de ambientes virtuais de aprendizagem, pesquisas sobre correção automática tornaram-se mais relevantes, pois permitem a correção mecânica com baixo custo para questões abertas. Além disso, a correção automática permite um feedback instantâneo e elimina o trabalho de correção manual. Isto possibilita criar turmas virtuais com grande quantidade de alunos (centenas ou milhares). Pesquisas sobre avaliação automática de textos estão sendo desenvolvidas desde a década de 60, mas somente na década atual estão alcançando a acurácia necessária para uso prático em instituições de ensino. Para que os usuários finais tenham confiança, o desafio de pesquisa é desenvolver sistemas de avaliação robustos e com acurácia próxima de avaliadores humanos. Apesar de alguns estudos apontarem nesta direção, existem ainda muitos pontos a serem explorados nas pesquisas. Um ponto é a utilização de bigramas com LSA, que, mesmo que não contribua muito com a acurácia, contribui com a robustez, que podemos definir como confiabilidade, pois considera a ordem das palavras dentro do texto. Buscando aperfeiçoar um modelo LSA na direção de melhorar a acurácia e aumentar a robustez, trabalhamos em quatro direções: primeira, incluímos bigramas de palavras no modelo LSA; segunda, combinamos modelos de co-ocorrência de unigrama e bigramas com uso de regressão linear múltipla; terceira, acrescentamos uma etapa de ajustes sobre a pontuação do modelo LSA baseados no número de palavras das respostas avaliadas; quarta, realizamos uma análise da distribuição das pontuações atribuídas pelo modelo LSA contra avaliadores humanos. Para avaliar os resultados comparamos a acurácia do sistema contra a acurácia de avaliadores humanos, verificando o quanto o sistema se aproxima de um avaliador humano. Utilizamos um modelo LSA com cinco etapas: 1) pré-processamento, 2) ponderação, 3) decomposição a valores singulares, 4) classificação e 5) ajustes do modelo. Para cada etapa exploraram-se estratégias alternativas que influenciaram na acurácia final. Nos experimentos obtivemos uma acurácia de 84,94% numa avaliação comparativa contra especialistas humanos, onde a correlação da acurácia entre especialistas humanos foi de 84,93%. No domínio estudado, a tecnologia de avaliação automática teve resultados próximos aos dos avaliadores humanos, mostrando que está alcançando um grau de maturidade para ser utilizada em sistemas de avaliação automática em ambientes virtuais de aprendizagem. / This work investigates the use of a Latent Semantic Analysis (LSA) model for the automatic assessment of short answers, averaging 25 to 70 words, to open-ended questions. With the emergence of virtual learning environments, research on automatic scoring has become more relevant, since it allows low-cost mechanical grading of open questions. Automatic scoring also provides instant feedback and eliminates manual correction work, making it possible to run virtual classes with very large numbers of students (hundreds or thousands). Research on automatic text assessment has been under way since the 1960s, but only in the current decade has it reached the accuracy required for practical use in educational institutions. For end users to trust such systems, the research challenge is to develop robust assessment systems whose accuracy approaches that of human graders. Although some studies point in this direction, many issues remain to be explored. One is the use of bigrams with LSA: even though it contributes little to accuracy, it contributes to robustness, which we may define as reliability, because it takes word order within the text into account. Seeking to improve an LSA model in terms of both accuracy and robustness, we worked in four directions: first, we included word bigrams in the LSA model; second, we combined unigram and bigram co-occurrence models using multiple linear regression; third, we added an adjustment step to the LSA score based on the number of words in the answers being assessed; fourth, we analysed the distribution of the scores assigned by the LSA model against human graders. To evaluate the results, we compared the accuracy of the system against the accuracy of human graders, checking how closely the system approaches a human grader. We used an LSA model with five stages: 1) preprocessing, 2) weighting, 3) singular value decomposition, 4) classification and 5) model adjustment. For each stage, alternative strategies that influenced the final accuracy were explored. In the experiments we obtained an accuracy of 84.94% in a comparative evaluation against human experts, where the accuracy correlation between human experts was 84.93%. In the domain studied, the automatic assessment technology produced results close to those of human graders, showing that it is reaching a degree of maturity suitable for use in automatic assessment systems within virtual learning environments.
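As a rough, self-contained sketch of the pipeline stages named above (preprocessing, unigram-plus-bigram weighting, singular value decomposition, and scoring), the following Python example grades a hypothetical student answer by its latent-space similarity to reference answers. The texts, dimensions and scoring rule are invented for illustration; the thesis's actual model also includes the regression-based combination and word-count adjustments.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical reference (teacher) answers and one student answer.
    references = [
        "Photosynthesis converts light energy into chemical energy stored in glucose.",
        "Plants use sunlight, water and carbon dioxide to produce glucose and oxygen.",
    ]
    student = ["Plants turn sunlight and carbon dioxide into sugar and release oxygen."]

    # Weighting step: unigrams plus bigrams, echoing the combined model described above.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    X = vectorizer.fit_transform(references + student)

    # SVD step: project the answers into a low-dimensional latent space.
    svd = TruncatedSVD(n_components=2, random_state=0)
    Z = svd.fit_transform(X)

    # Scoring step: similarity of the student answer to the closest reference.
    score = cosine_similarity(Z[-1:], Z[:-1]).max()
    print(f"similarity score: {score:.2f}")

In a full system this similarity would be mapped to a grade scale and adjusted, as the abstract describes, by answer length and a regression over unigram and bigram models.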
|
115 |
El Uso de información web en el desarrollo de procesos de aprendizaje de conocimientos de ciencias sociales e historia: un estudio empírico en la educación secundaria obligatoria. Guijosa Guzmán, Alejandro. 06 June 2012 (has links)
En aquesta dissertació es pretén valorar l'efectivitat de la WebQuest com a eina didàctica que afavoreix el desenvolupament dels processos d'ensenyament / aprenentatge de continguts curriculars propis de l'àrea de les Ciències Socials i la Història en l'etapa d'Ensenyament Secundari Obligatori.
Per a això es realitza una àmplia revisió bibliogràfica sobre la recerca en WebQuests i es presenten 3 estudis quantitatius amb 210 alumnes en tres cursos de secundària, en què s'analitzen els seus textos (estudis 1 i 2) i el seu procés de resolució de l'activitat (estudi 3).
Els resultats mostren diferències significatives per a la gran majoria de variables extretes de les anàlisis, a favor de l'experiència amb WebQuests.
Es conclou que encara que les anàlisis estadístiques mostren un efecte favorable de l'experiència, convindrien estudis complementaris de tipus qualitatiu que expliquessin la naturalesa d'aquestes diferències d'una manera més detallada a fi de poder valorar amb més precisió l'efectivitat d'aquesta metodologia didàctica. / En esta disertación se pretende valorar la efectividad de la WebQuest como herramienta didáctica que favorecería del desarrollo de los procesos de enseñanza/aprendizaje de contenidos curriculares propios del área de las Ciencias Sociales y la Historia en la etapa de Enseñanza Secundaria Obligatoria.
Para ello se realiza una amplia revisión bibliográfica sobre la investigación en WebQuests y se presentan 3 estudios cuantitativos con 210 alumnos en tres cursos de secundaria, en los que se analizan los textos que estos producen (estudios 1 y 2) y su proceso de resolución de la actividad (estudio 3).
Los resultados muestran diferencias significativas para la gran mayoría de variables extraídas de los análisis, a favor de la experiencia con WebQuests.
Se concluye que aunque los análisis estadísticos muestran un efecto favorable de la experiencia, convendrían estudios complementarios de tipo cualitativo que explicasen la naturaleza de estas diferencias de una manera más pormenorizada en orden a poder valorar con mayor precisión la efectividad de esta metodología didáctica. / This study assess the effectiveness of WebQuest as teaching and learning tools that enhance: 1) learning of contents specific to the area of Social Science and History, 2) causal reasoning and critical thinking development.
To that end, a comprehensive literature review on WebQuests research is carried on, and 3 quantitative studies with 210 students in three secondary education levels is conducted by analyzing the texts students produce (Studies 1 and 2) and their performance level on the activity resolution process (Study 3).
Results show significant differences for the vast majority of variables which means that the experience with WebQuests could enhance: 1) Social Science and History curricular contents learning, and 2) causal reasoning and critical thinking development.
Despite statistical analysis showed a favorable effect of the experience, it is concluded that further qualitative additional studies would be helpful in order to explain the nature of these differences. These studies could spell out the way these differences are working, in order to better assess WebQuests effectiveness.
|
116 |
Comunicação visual no livro ilustrado: palavra, imagem e design contando histórias. / Visual communication in picturebook: word, image and design telling stories. FERREIRA, Anália Adriana da Silva. 11 June 2018 (has links)
Esta pesquisa tem por objetivo investigar a comunicação visual e a relação palavra-imagem mediada pelo design em livros ilustrados; para isso, selecionou-se o acervo da Fundação Nacional do Livro Infantil e Juvenil, FNLIJ, especificamente a categoria Criança. Realizou-se estudo panorâmico de caráter histórico-cultural do livro infantil no Brasil, observando temáticas, concepções gráficas, autores, ilustradores e a narrativa visual com o intuito de compreender o objeto e identificar a participação do design gráfico no mesmo; levantamento e discussão sobre as classificações do livro infantil, as necessidades de leitura de imagens e os aspectos do design na organização visual também foram contemplados nesta dissertação. Os livros selecionados passaram por análise sintática e semântica para melhor observar as articulações entre as narrativas visual e verbal. Para a análise sintática utilizam-se as teorias da Gestalt como norteamento, observando formas, cores, tipografia e composição. Para a análise semântica, o uso da Semiótica, identificando ícones, índices e suas simbologias. A metodologia seguiu as etapas propostas pelo Método Feldman de Leitura de Imagens e precisou ser adaptada para o livro ilustrado com aplicação das teorias mencionadas. Como resultado destaca-se a diversidade de concepções na narrativa visual e as experimentações que o design possibilita ao livro. Espera-se por meio deste trabalho contribuir para a ampliação de estudos do design no livro infantil, fomentar a discussão sobre leitura de imagens e as práticas de projetos de comunicação para criança. / This research aims to investigate visual communication in picturebooks and the text-image relation mediated by design. To that end, the collection of the FNLIJ, the Brazilian section of the International Board on Books for Young People – IBBY, specifically the Child category, was selected. In order to understand the object and identify the participation of graphic design in it, a panoramic historical-cultural study of the children's book in Brazil was carried out, emphasizing its themes, graphic conceptions, authors, illustrators and visual narrative. This study also comprises a survey and discussion of children's book classifications, the need to foster image reading, and the aspects of design in the visual organization of information. The selected books were analyzed syntactically and semantically to better observe the articulations between the visual and verbal narratives. The syntactic analysis followed Gestalt theories as a guide, observing forms, colors, typography and composition. The semantic analysis was conducted through semiotic theories to identify icons, indexes and their symbologies. The methodology was an adaptation of the steps proposed by the Feldman Method of Reading Images, redirected to the picturebook with application of the mentioned theories. As a result, this research highlights the diversity of composition of the visual narrative and the experimentation that design makes possible in the book. Moreover, it seeks to contribute to the growth of design studies in children's books, to stimulate the discussion about reading images, and to inform the practice of communication projects for children.
|
117 |
Le lexique-grammaire des verbes du grec moderne : constructions transitives non locatives à un complément d’objet direct / The lexicon-grammar of Modern Greek verbs : transitive non locative constructions with one direct object. Voskaki, Ourania. 25 March 2011 (has links)
Cette étude a pour objectif la description syntaxique et sémantique des constructions transitives non locatives à un complément d'objet direct en grec moderne : N0 V N1. Nous nous sommes appuyée sur le cadre théorique de la grammaire transformationnelle de Zellig S. Harris et sur le cadre méthodologique du Lexique-Grammaire, défini par Maurice Gross et développé au Laboratoire d'Automatique Documentaire et Linguistique. À partir de 16 560 entrées verbales morphologiques, nous procédons à la classification des constructions transitives non locatives, à partir de 24 classes distinctes, sur la base de critères formels posés. Un inventaire de 2 934 emplois verbaux à construction transitive non locative à un complément d'objet direct a été ainsi produit et scindé en neuf classes. Parmi ces emplois, 1 884 sont formellement décrits dans 9 tables de lexique-grammaire établies : plus précisément, il s'agit de celles qui impliquent des constructions à un complément d'objet direct illustrant les concepts « apparition » (table 32GA), « disparition » (32GD), objet « concret » (32GC), « partie du corps » (32GCL), substantif « humain » (32GH), substantif avec « pluriel obligatoire » (32GPL). En outre, la transformation passive est largement interdite pour les emplois verbaux recensés dans la table 32GNM, alors que les tables 32GCV et 32GRA regroupent des verbes acceptant une transformation à verbe support. Nous présentons l'application des données linguistiques recensées dans le traitement automatique des langues naturelles (TALN), avec la conversion automatique des tables en automates à états finis récursifs, suivie de nos suggestions sur leur applicabilité à la traduction en français et à l'enseignement du grec moderne (langue maternelle ou étrangère) : acquisition/apprentissage / The current research aims to provide a syntactic and semantic analysis of Modern Greek transitive non-locative constructions with one direct object: N0 V N1. Our study is based on the syntactic framework of the Transformational Grammar defined by Zellig S. Harris. We followed the Lexicon-Grammar methodology framework developed by Maurice Gross and elaborated at the LADL (Laboratoire d'Automatique Documentaire et Linguistique). Based on 16 560 morphological verbal entries, we proceeded to the classification of transitive non-locative constructions. On the basis of formal criteria we divided them into 24 distinct classes that formed an inventory of 2 934 transitive non-locative verbal uses with one direct object. Among them, 1 884 verbal uses were split into nine classes and they were formally described in 9 lexicon-grammar tables established for this purpose. More precisely, these structures include a direct object referring to the following concepts: “appearance” (32GA table), “disappearance” (32GD), “concrete” object (32GC), “body part” (32GCL), “human” object (32GH), and “obligatory plural” (32GPL). Likewise, the passive transformation is largely blocked in the 32GNM table, while the 32GCV and 32GRA tables regroup verbs accepting a support verb transformation. We present the linguistic data application in Natural Language Processing (NLP), by means of automatic tables conversion into recursive transition network automata. Moreover, we set forth our remarks on their applicability in translation from Modern Greek to French as well as in language learning/teaching (Modern Greek as first or second language)
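To give a concrete, if highly simplified, picture of how a row of a lexicon-grammar table can be read as a finite-state recognizer for the N0 V N1 construction, here is a toy Python sketch; the verbs, feature columns and acceptability judgments are invented placeholders, not rows from the actual tables established in the thesis.

    # Each entry encodes, very schematically, one row of a lexicon-grammar table:
    # the verb, whether its N1 must be human, and whether the passive is allowed.
    # The verbs and feature values below are invented for illustration.
    LEXICON_GRAMMAR = {
        "vlepo":  {"N1_human": False, "passive": True},   # 'see'
        "agapo":  {"N1_human": True,  "passive": True},   # 'love'
        "exo":    {"N1_human": False, "passive": False},  # 'have'
    }

    def accepts(n0, verb, n1, n1_is_human):
        """A three-state recognizer: N0 -> V -> N1, constrained by table features."""
        entry = LEXICON_GRAMMAR.get(verb)
        if entry is None:
            return False
        if entry["N1_human"] and not n1_is_human:
            return False
        return bool(n0) and bool(n1)

    print(accepts("o Petros", "agapo", "ti Maria", n1_is_human=True))    # True
    print(accepts("o Petros", "agapo", "to vivlio", n1_is_human=False))  # False

Compiling every row of a table in this way is, in spirit, what the automatic conversion of the tables into recursive transition network automata does at a much larger scale.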
|
118 |
Metody sumarizace dokumentů na webu / Methods of Document Summarization on the Web. Belica, Michal. January 2013 (has links)
The work deals with the automatic summarization of documents in HTML format, with Czech chosen as the language of the web documents. The project focuses on text summarization algorithms. The work also covers document preprocessing for summarization and the conversion of text into a representation suitable for summarization algorithms. General text mining is briefly discussed, but the project is mainly focused on automatic document summarization. Two simple summarization algorithms are introduced; the main attention is then paid to an advanced algorithm that uses latent semantic analysis. The result of the work is the design and implementation of a summarization module for the Python language. The final part of the work contains an evaluation of the summaries generated by the implemented summarization methods and the author's subjective comparison of them.
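A minimal sketch of LSA-based extractive summarization in Python is shown below: build a TF-IDF sentence-term matrix, decompose it with SVD, score sentences by their topic-weighted activation (one common scoring variant), and keep the top-ranked sentences. The example sentences are invented and in English rather than Czech, and the scoring rule is an illustration rather than the module's exact implementation.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    sentences = [
        "The cathedral dominates the old town square.",
        "Its gothic towers were completed in the fourteenth century.",
        "A small cafe next to the entrance serves tourists.",
        "Restoration of the towers started last year.",
    ]

    # Sentence-term matrix with TF-IDF weighting.
    X = TfidfVectorizer().fit_transform(sentences).toarray()

    # SVD: U maps sentences into the latent topic space, S weights the topics.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    # Score each sentence by its squared, topic-weighted activation over the
    # first k topics, then keep the top 2 sentences in their original order.
    k = 2
    scores = np.sqrt(((U[:, :k] * S[:k]) ** 2).sum(axis=1))
    top = sorted(np.argsort(scores)[-2:])
    print(" ".join(sentences[i] for i in top))

For HTML input, a preprocessing step (boilerplate removal, sentence splitting, stemming or lemmatization for Czech) would feed such a matrix, as the abstract describes.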
|
119 |
Syntaktická analýza založená na řadě metod / Parsing Based on Several Methods. Dolíhal, Luděk. Unknown Date (has links)
The main goal of this work is to analyze the construction of a composite compiler. A composite compiler is, in this case, a system consisting of several cooperating parts; this compiler is special in that its syntactic analyser consists of two parts. The work focuses on the construction of the parser's parts and on their cooperation and communication. I try to sketch the theoretical background of this solution, which is provided by grammar systems. I then try to justify whether it is necessary and suitable to create such a parser. Last but not least, I analyse the language whose syntactic analyser is to be implemented by the chosen method.
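As a toy illustration of a syntactic analyser split into two cooperating parts, the Python sketch below has a statement-level parser that delegates expression tokens to a second, separate parser; the grammar, token conventions and division of labour are invented for the example and are far simpler than the grammar-system-based design discussed in the thesis.

    import re

    def parse_expression(tokens, pos):
        """Second parser: a tiny recursive-descent analyser for  term (('+'|'*') term)*."""
        def term(p):
            if p < len(tokens) and re.fullmatch(r"\d+|[a-z]+", tokens[p]):
                return p + 1
            raise SyntaxError(f"expected term at position {p}")
        p = term(pos)
        while p < len(tokens) and tokens[p] in "+*":
            p = term(p + 1)
        return p

    def parse_statement(tokens):
        """First parser: recognises  ident '=' <expression> ';'  and hands the
        expression part over to the second parser."""
        if len(tokens) < 2 or not tokens[0].isidentifier() or tokens[1] != "=":
            raise SyntaxError("expected 'ident ='")
        p = parse_expression(tokens, 2)
        if p != len(tokens) - 1 or tokens[-1] != ";":
            raise SyntaxError("expected ';' at end")
        return True

    print(parse_statement("x = a + 2 * b ;".split()))  # True

The point of the example is only the communication pattern: one analyser consumes part of the input and reports back where the other should continue.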
|
120 |
Le développement du neuromarketing aux Etats-Unis et en France. Acteurs-réseaux, traces et controverses / The comparative development of neuromarketing between the United States and France : Actor-networks, traces and controversies. Teboul, Bruno. 20 September 2016 (has links)
Notre travail de recherche explore de manière comparée le développement du neuromarketing aux Etats-Unis et en France. Nous commençons par analyser la littérature sur le neuromarketing. Nous utilisons comme cadre théorique et méthodologique l’Actor Network Theory (ANT) ou Théorie de l’Acteur-Réseau (dans le sillage des travaux de Bruno Latour et Michel Callon). Nous montrons ainsi comment des actants « humains et non-humains » : acteurs-réseaux, traces (publications) et controverses forment les piliers d’une nouvelle discipline telle que le neuromarketing. Notre approche hybride « qualitative-quantitative » nous permet de construire une méthodologie appliquée de l’ANT : analyse bibliométrique (Publish Or Perish), text mining, clustering et analyse sémantique de la littérature scientifique et web du neuromarketing. A partir de ces résultats, nous construisons des cartographies, sous forme de graphes en réseau (Gephi), qui révèlent les interrelations et les associations entre acteurs, traces et controverses autour du neuromarketing. / Our research explores the comparative development of neuromarketing between the United States and France. We start by analyzing the literature on neuromarketing. As our theoretical and methodological framework we use Actor-Network Theory (ANT), in the wake of the work of Bruno Latour and Michel Callon. We show how "human and non-human" entities ("actants"), namely actor-networks, traces (publications) and controversies, form the pillars of a new discipline such as neuromarketing. Our hybrid "qualitative-quantitative" approach allows us to build an applied methodology of ANT: bibliometric analysis (Publish Or Perish), text mining, clustering and semantic analysis of the scientific and web literature on neuromarketing. From these results, we build data visualizations in the form of network graphs (Gephi) that reveal the interrelations and associations between actors, traces and controversies around neuromarketing.
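A minimal sketch of the kind of pipeline described above (mining term or actor co-occurrences from a set of publications, building a weighted network, and exporting it in a format Gephi can open) is given below in Python; the document lists, terms and output file name are invented placeholders, not results from the study.

    from itertools import combinations
    from collections import Counter
    import networkx as nx

    # Hypothetical "traces": per-publication term/actor lists extracted upstream
    # (e.g. from a Publish or Perish export followed by text mining).
    documents = [
        ["neuromarketing", "fMRI", "consumer"],
        ["neuromarketing", "ethics", "consumer"],
        ["fMRI", "attention", "neuromarketing"],
    ]

    # Count how often each pair of terms appears in the same publication.
    edges = Counter()
    for terms in documents:
        for a, b in combinations(sorted(set(terms)), 2):
            edges[(a, b)] += 1

    G = nx.Graph()
    for (a, b), w in edges.items():
        G.add_edge(a, b, weight=w)

    nx.write_gexf(G, "neuromarketing_network.gexf")  # a file Gephi can open directly
    print(G.number_of_nodes(), G.number_of_edges())

Clustering and layout (community detection, force-directed placement) can then be applied in Gephi itself to reveal the groupings of actors, traces and controversies.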
|