61 |
Suivi de l'activité humaine par hypothèses multiples abductives / Human Activity Monitoring with Multiple Abductive Hypotheses. Vettier, Benoît, 24 September 2013.
Ces travaux traitent du suivi de l'activité humaine à travers l'analyse en temps réel de signaux physiologiques et d'accélérométrie. Il s'agit de données issues de capteurs ambulatoires ; elles sont bruitées, ambigües, et ne représentent qu'une vision incomplète de la situation. De par la nature des données d'une part, et les besoins fonctionnels de l'application d'autre part, nous considérons que le monde des possibles n'est ni exhaustif ni exclusif, ce qui contraint le mode de raisonnement. Ainsi, nous proposons un raisonnement abductif à base de modèles interconnectés et personnalisés. Ce raisonnement consiste à manipuler un faisceau d'hypothèses au sein d'un cadre dynamique de contraintes, venues tant de l'observateur (en termes d'activités acceptables) que d'exigences non-fonctionnelles, ou portant sur la santé du sujet observé. Le nombre d'hypothèses étudiées à chaque instant est amené à varier, par des mécanismes de Prédiction-Vérification ; l'adaptation du Cadre participe également à la mise en place d'un pilotage sensible au contexte. Nous proposons un système multi-agent pour représenter ces hypothèses ; les agents sont organisés autour d'un environnement partagé qui leur permet d'échanger l'information. Ces échanges et, de manière générale, la détection des contextes d'activation des agents, sont régis par des filtres qui associent une action à des conditions. Le mode de raisonnement et l'organisation de ces agents hétérogènes au sein d'un cadre homogène confèrent au système expressivité, évolutivité et maîtrise des coûts calculatoires. Une implémentation utilisant des données réelles permet d'illustrer les qualités de la proposition.
/ This work deals with human activity monitoring through the real-time analysis of both physiological and accelerometry data. These data come from ambulatory sensors; they are noisy and ambiguous, and represent only a partial and incomplete observation of the current situation. Given the nature of the data on the one hand, and the application's required features on the other, we consider an open world of non-exclusive possible situations, which constrains the reasoning engine. We thus propose to use abductive reasoning based on interconnected and personalized models. This way of reasoning consists in handling a beam of hypotheses within a dynamic Frame of constraints, which come both from the Observer (who defines acceptable situations) and from non-functional requirements or requirements relating to the observed person's health. The number of hypotheses considered at each timestep varies by means of Prediction-Verification schemes, and the evolution of the Frame leads to context-sensitive adaptive control. We propose a multi-agent system to manage these hypotheses; the agents are organized around a shared environment which allows them to exchange information. These interactions, and more generally the detection of the agents' activation contexts, are regulated by condition-action filters. The way of reasoning and the organization of heterogeneous agents within a homogeneous Frame make the system expressive, extensible and computationally cost-efficient. An implementation using real sensor data is presented to illustrate these qualities.
|
62 |
Processamento da co-referência: pronomes lexicais, nomes repetidos, hiperônimos e hipônimos como formas de retomada anafórica inter-sentencial do sujeito em português brasileiro / Processing co-reference: lexical pronouns, repeated NPs, hypernyms and hyponyms as forms of inter-sentential anaphoric retrieval of the subject in Brazilian Portuguese. Queiroz, Karla Lima de, 22 December 2009.
Co-reference is usually defined as a strategy of textual progression in which a previous entity, called the antecedent, is taken up again through an anaphor. It has been studied by different areas of science because of its relevance to local coherence and, consequently, to discourse comprehension. However, a crucial question remains open and requires clarification: which cognitive mechanisms and linguistic principles underlie the choice of anaphor among the various forms available in the language? The present work is framed within Experimental Psycholinguistics, which deals with sentence processing and, more specifically, with the processing of co-reference. It compares the efficiency of lexical pronouns vs. repeated names, as well as of hypernyms vs. hyponyms, as forms of inter-sentential anaphoric retrieval of the subject in Brazilian Portuguese. We also examined the explanatory scope of Centering Theory (Grosz, Joshi & Weinstein, 1983, 1995), which posits a slowdown effect known as the Repeated-Name Penalty when a syntactically prominent antecedent is retrieved with a repeated noun instead of a pronoun, and of the Informational Load Hypothesis (Almor, 1990, 1999, 2000), an alternative account that relates processing cost to discourse function. To this end, three on-line self-paced reading experiments were run and their results were analysed statistically with t-tests and ANOVA. In the first experiment, lexical pronouns were read faster than repeated names, in accordance with Centering Theory and the Informational Load Hypothesis. In the second experiment, co-reference was established more easily with hypernyms than with hyponyms, supporting the optimisation principle between processing cost and discourse function defended by Almor (1990, 1999, 2000), whereas the Repeated-Name Penalty, first reported by Gordon et al. (1993, 1995), is restricted to the dichotomy between lexical pronouns and repeated names. Syntactic prominence was also examined, as in Chambers & Smith (1999) for English and Leitão (2005) for Brazilian Portuguese, and the third experiment showed that it operates independently of the type of anaphor, although this does not rule out an influence of structural parallelism.
/ A co-referência é comumente definida como uma estratégia de progressão textual e caracterizada pela retomada de uma entidade prévia, também denominada antecedente, através de uma anáfora. Ela vem sendo estudada por várias áreas do conhecimento científico, devido a sua importância para a coerência local e, conseqüentemente, para a compreensão do discurso, mas uma questão crucial continua em aberto e requer maiores esclarecimentos: quais os mecanismos cognitivos e os princípios lingüísticos que subjazem a escolha da anáfora, entre a multiplicidade de formas existente na língua. O presente trabalho se insere no quadro teórico da Psicolingüística Experimental que trata do processamento de frases e, mais especificamente, do processamento da co-referência. Nele, comparamos a eficiência dos pronomes lexicais vs. nomes repetidos e dos hiperônimos vs. hipônimos, como formas de retomada anafórica inter-sentencial do sujeito em Português Brasileiro. Verificamos também a abrangência explicativa da Teoria da Centralização (Grosz, Joshi e Weinstein, 1983, 1995), que postula um efeito de retardamento, mais conhecido como Penalidade do Nome Repetido, ao retomar um antecedente proeminente sintaticamente usando um nome repetido em vez de um pronome, e da Hipótese da Carga Informacional (Almor 1990, 1999, 2000), com uma concepção alternativa que relaciona o custo de processamento e atenção discursiva. Para isso, aplicamos três experimentos on-line de leitura auto-monitorada e validamos estatisticamente seus resultados através do Teste-T e da ANOVA. No primeiro experimento, os pronomes lexicais foram lidos mais rapidamente do que os nomes repetidos, em consonância com a Teoria da Centralização e com a Hipótese da Carga Informacional. No segundo experimento, a co-referência foi estabelecida mais facilmente pelos hiperônimos do que pelos hipônimos, ratificando o princípio de otimização entre custo de processamento e função discursiva, defendido por Almor (1990, 1999, 2000), enquanto a Penalidade do Nome Repetido, constatada pioneiramente por Gordon et al. (1993, 1995), limita-se à dicotomia pronomes lexicais vs. nomes repetidos. A proeminência sintática também foi questionada no estudo de Chambers e Smith (1999) em Inglês e de Leitão (2005) em Português Brasileiro, mas o terceiro e último experimento comprovou sua atuação independente do tipo de anáfora, apesar de não descartar a influência do paralelismo estrutural.
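As a purely illustrative aside (not from the dissertation; the reading times and condition labels below are invented), a minimal Python sketch of the kind of comparison reported above, reading times for a critical region compared across anaphor conditions with a paired t-test and a one-way ANOVA, might look like this:

    # Illustrative only: fabricated reading times (ms) for the critical region,
    # collected from the same participants under different anaphor conditions.
    from scipy import stats

    pronoun_rt  = [412, 388, 430, 405, 398, 420, 415, 390]   # lexical pronoun
    repeated_rt = [455, 440, 470, 452, 430, 465, 448, 441]   # repeated name
    hypernym_rt = [430, 410, 445, 428, 415, 440, 433, 419]   # hypernym anaphor

    # Paired t-test: the same participants read both critical conditions.
    t_stat, p_value = stats.ttest_rel(pronoun_rt, repeated_rt)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

    # One-way ANOVA across the three conditions.
    f_stat, p_anova = stats.f_oneway(pronoun_rt, repeated_rt, hypernym_rt)
    print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")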
|
63 |
A origem e a estruturação das assembleias de aves da infraordem Furnariides ao longo do tempo e do espaço: o papel dos processos históricos / Origin and assembly of Furnariides assemblages across space and time: the role of historical processes. Ledezma, Jesús Nazareno Pinto, 07 June 2017.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / One of the major challenges in biology is to understand the processes that originate and maintain species diversity and that, in turn, determine the observed patterns of biological diversity at different spatial and temporal scales. Here, we explore the historical processes that generate species diversity and drive the assembly of local assemblages of the Furnariides, the largest endemic continental radiation of birds. We used data on geographic distributions, local assemblages, life history (e.g., habitat preference) and molecular phylogenies. The Furnariides diversified mainly during the Tertiary, a period in which South America was an island continent. They are also tightly associated with the habitat they occupy, with forest habitats representing the ancestral habitat of the clade. The species richness pattern of the Furnariides follows the richness pattern of birds in general, with a higher concentration of species at low latitudes and in forest habitats. Although the concentration of species is higher in these regions, regions at higher latitudes and with open habitats show faster rates of speciation, extinction and colonization, suggesting that open habitats represent an effective arena for diversification in the Neotropics and are important for maintaining species diversity in forest habitats. Finally, the phylogenetic structure of Furnariides assemblages is influenced by habitat preference, and the assembly of local assemblages is determined by the combined effect of historical colonization and local extinction, as well as by niche conservatism and environmental filtering.
/ Um dos principais desafios em biologia é entender os processos que dão origem e mantêm a diversidade de espécies, e que, por sua vez, determinam os padrões observados da diversidade biológica em diferentes escalas espaciais e temporais. Nesta tese, exploramos os processos históricos que geram a diversidade de espécies e a montagem de assembleias locais na infraordem Furnariides, a maior radiação continental endêmica de aves. De maneira geral se usaram dados de distribuição das espécies, de assembleias locais, história de vida (e.g., preferência de habitat) e filogenias moleculares. Demonstra-se que os Furnariides diversificaram principalmente no período Terciário, período no qual a América do Sul foi uma ilha continente. Além disso, estão estreitamente relacionadas com o habitat que elas ocupam, sendo que os habitats de floresta representam o habitat ancestral deste clado. O padrão de riqueza de espécies de Furnariides segue o mesmo padrão de riqueza de aves em geral, com uma maior concentração de espécies em latitudes menores e em habitats de floresta. Embora a concentração de espécies seja maior nessas regiões, as regiões de latitudes maiores e de habitats abertos apresentaram taxas de especiação, extinção e dispersão mais rápidas, sugerindo que os habitats abertos representam arenas efetivas de diversificação no Neotrópico e são importantes para a manutenção da diversidade de espécies em habitats de floresta. Finalmente, a estrutura filogenética das assembleias dos Furnariides é influenciada pela preferência de habitat; além disso, a montagem de assembleias locais depende do efeito combinado das taxas diferenciais de colonização e extinção local, assim como da conservação de nicho e da filtragem ambiental.
|
64 |
Contribuições linguísticas cabo-verdiana e sefardita na formação do papiamentu / African and Sephardic linguistic agencies in the formation of Papiamentu. Shirley Freitas, 08 August 2016.
Este estudo propõe uma hipótese que considera fundamental a atuação linguística conjunta dos cabo-verdianos e dos judeus sefarditas e seus escravos na gênese e no desenvolvimento do papiamentu. A justificativa para a pesquisa reside no fato de que, a despeito de ser um tema discutido na literatura, ainda se trata de um assunto controverso entre os estudiosos, havendo até o momento, pelo menos, quatro hipóteses diferentes. Maduro (1965), Rona (1970) e Munteanu (1996), por exemplo, defendem que o papiamentu seria um crioulo de base espanhola, tendo seus elementos portugueses introduzidos posteriormente pelos judeus sefarditas e seus escravos. Já Lenz (1928) e Martinus (1996) consideram o papiamentu como resultado da relexificação de um crioulo ou protocrioulo afroportuguês falado por escravos trazidos da África. De acordo com Goodman (1996 [1987]) e Smith (1999), por seu turno, o papiamentu seria um crioulo de base portuguesa, surgido a partir de um dialeto judeo-português da comunidade sefardita e seus escravos. Por fim, Jacobs (2012) considera que o papiamentu teria se originado a partir do crioulo falado na ilha de Santiago, no arquipélago de Cabo Verde, sendo mais tarde levado para Curaçao. Analisando as hipóteses, observou-se que duas apresentam argumentos e fatos linguísticos evidenciáveis, a saber, as relações com o kabuverdianu (especialmente, a variedade de Santiago) e a participação dos judeus sefarditas e seus escravos. A fim de decidir a favor de uma das hipóteses, itens lexicais e funcionais das variedades setecentistas e oitocentistas do papiamentu e do kabuverdianu clássicos, bem como do papiamentu sefardita, foram comparados, resultando em convergências nos níveis lexicais e funcionais. De um lado, a grande quantidade de elementos derivados do português no papiamentu clássico seria uma evidência de que esses itens representaram um papel basilar no desenvolvimento da língua; de outro, as convergências lexicais e funcionais, uma vez que há uma menor probabilidade de substituição dos itens funcionais (em virtude de sua opacidade semântica) (MATRAS, 2009), não podem ser explicadas por acaso. Já as similaridades com o kabuverdianu clássico confirmariam seu parentesco linguístico. No que diz respeito ao papel da comunidade sefardita e seus escravos, observou-se que a expressão linguística dos judeus também faz parte da estrutura geral do papiamentu clássico, deixando marcas inclusive na variedade moderna. Tendo em vista o material documental dos séculos XVIII e XIX, escolher uma única hipótese resultaria em um quadro parcial, sendo necessário postular uma convergência de hipóteses, que consiste não somente na reunião de duas hipóteses (a cabo-verdiana e a sefardita), mas na proposta de um novo cenário para se explicar a gênese e o desenvolvimento do papiamentu. Dentro dessa perspectiva, é importante considerar que, em situações de contato, as línguas continuam se influenciando mutuamente ao longo dos tempos (PERINI-SANTOS, 2015), sendo necessária, portanto, uma análise que privilegie a contribuição dos falantes de diferentes línguas em diversas sincronias. Assim, seguindo Faraclas et al. (2014), uma convergência de elementos linguísticos cabo-verdianos e dos sefarditas e seus escravos deve ser considerada nos estudos sobre a formação e desenvolvimento do papiamentu.
/ This study proposes a hypothesis that considers the joint linguistic agency of Cape Verdeans and Sephardic Jews and their slaves fundamental to the genesis and development of Papiamentu. The rationale for the study lies in the fact that, despite being a topic discussed in the literature, it is still a controversial subject among scholars: so far there are at least four different hypotheses. Maduro (1965), Rona (1970) and Munteanu (1996), for example, argue that Papiamentu is a Spanish-based Creole and that its Portuguese elements were later introduced by Sephardic Jews and their slaves. Lenz (1928) and Martinus (1996), on the other hand, consider Papiamentu the result of the relexification of a Creole or an African-Portuguese Proto-Creole spoken by slaves brought from Africa. According to Goodman (1996 [1987]) and Smith (1999), Papiamentu was a Portuguese-based Creole that emerged from a Judeo-Portuguese dialect of the Sephardic community and its slaves. Finally, Jacobs (2012) considers that Papiamentu would have originated from the Creole spoken on Santiago island, in the Cape Verde archipelago, and was later taken to Curaçao. Analysis of these hypotheses shows that two of them are supported by demonstrable arguments and linguistic facts: the relations with Cape Verdean Creole (especially the Santiago variety) and the participation of Sephardic Jews and their slaves. In order to decide in favor of one of these hypotheses, lexical and functional items of the eighteenth- and nineteenth-century varieties of Classic Papiamentu, Classic Cape Verdean Creole and Sephardic Papiamentu were compared, revealing convergences at both the lexical and the functional levels. On the one hand, the large number of elements derived from Portuguese in Classic Papiamentu is evidence that these items played a fundamental role in the development of the language; on the other hand, the lexical and functional convergences cannot be explained by mere chance, since functional items are less likely to be replaced, by virtue of their semantic opacity (MATRAS, 2009). The similarities with Classic Cape Verdean Creole confirm their linguistic kinship. Regarding the role of the Sephardic community and its slaves, the linguistic expression of the Jews is also part of the overall structure of Classic Papiamentu, leaving marks even in the modern variety. Given the eighteenth- and nineteenth-century documentation, choosing a single hypothesis would result in a partial picture; it is necessary to postulate a convergence of hypotheses, which consists not only in uniting two hypotheses (the Cape Verdean and the Sephardic) but also in proposing a new scenario to explain the genesis and development of Papiamentu. Within this perspective, it is important to consider that, in contact situations, languages continue to influence each other over time (PERINI-SANTOS, 2015), requiring an analysis that favors agency on the part of speakers of different languages in different synchronies. Thus, following Faraclas et al. (2014), a convergence of linguistic elements from Cape Verdean Creole and from the languages of the Sephardic Jews and their slaves must be considered in studies on the formation and development of Papiamentu.
|
65 |
Tests d'hypothèses pour les processus de Poisson dans les cas non réguliers / Hypotheses testing problems for inhomogeneous Poisson processes. Yang, Lin, 22 January 2014.
Ce travail est consacré aux problèmes de test d'hypothèses pour les processus de Poisson non homogènes. L'objectif principal de ce travail est l'étude du comportement des différents tests dans le cas des modèles statistiques singuliers. L'évolution de la singularité de la fonction d'intensité est comme suit : régulière (l'information de Fisher finie), continue mais non différentiable (singularité de type “cusp”), discontinue (singularité de type saut) et discontinue avec un saut de taille variable. Dans tous les cas, on décrit analytiquement les tests. Dans le cas d'un saut de taille variable, on présente également les propriétés asymptotiques des estimateurs. En particulier, on décrit les statistiques de tests, le choix des seuils et le comportement des fonctions de puissance sous les alternatives locales. Le problème initial est toujours le test d'une hypothèse simple contre une alternative unilatérale. La méthode principale est la théorie de la convergence faible dans l'espace des fonctions discontinues. Cette théorie est appliquée à l'étude des processus de rapport de vraisemblance normalisé dans les modèles singuliers considérés. La convergence faible du rapport de vraisemblance, sous l'hypothèse et sous les alternatives, vers les processus limites correspondants nous permet de résoudre les problèmes mentionnés précédemment. Les résultats asymptotiques sont illustrés par des simulations numériques contenant la construction des tests, le choix des seuils et les fonctions de puissance sous les alternatives locales.
/ This work is devoted to hypothesis testing problems for inhomogeneous Poisson processes. The main object of the work is the study of the behaviour of different tests in the case of singular statistical models. The “evolution of singularity” of the intensity function is the following: regular (finite Fisher information), continuous but not differentiable (“cusp”-type singularity), discontinuous (jump-type singularity), and discontinuous with a variable jump size. In all cases we describe the tests analytically. In the case of a variable jump size we also present the asymptotic properties of the estimators. In particular, we describe the test statistics, the choice of thresholds and the form of the power functions under local alternatives. The initial problem is always the test of a simple hypothesis against a one-sided alternative. The main tool is weak convergence theory in the space of discontinuous functions. This theory is applied to the study of the normalized likelihood ratio processes in the singular models considered. The weak convergence of the likelihood ratio processes, under the hypothesis and under the alternatives, to the corresponding limit processes allows us to solve the problems mentioned above. The asymptotic results are illustrated by numerical simulations which contain the construction of the tests, the choice of the thresholds, and the power functions under local alternatives.
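To make the general testing framework concrete, here is a minimal Python sketch, not taken from the thesis (which focuses on singular intensities) and built on two invented, regular intensity functions: a likelihood-ratio test of a simple hypothesis against a simple alternative for an inhomogeneous Poisson process on [0, T], with the threshold chosen by Monte Carlo simulation under the null hypothesis.

    # Illustrative sketch only: likelihood-ratio test between two known intensities
    # of an inhomogeneous Poisson process observed on [0, T]; threshold by simulation.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 10.0
    lam0 = lambda t: 1.0 + 0.5 * t   # intensity under the null (assumed known)
    lam1 = lambda t: 1.5 + 0.5 * t   # intensity under the alternative (assumed known)

    def simulate_poisson(lam, lam_max, horizon):
        """Event times of an inhomogeneous Poisson process, simulated by thinning."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > horizon:
                return np.array(events)
            if rng.random() < lam(t) / lam_max:
                events.append(t)

    def log_likelihood_ratio(events):
        """log dP1/dP0 for the observed event times on [0, T]."""
        grid = np.linspace(0.0, T, 2001)
        integral = np.sum(lam1(grid) - lam0(grid)) * (grid[1] - grid[0])  # Riemann sum
        return np.sum(np.log(lam1(events) / lam0(events))) - integral

    # One-sided test at level alpha: threshold = empirical (1 - alpha)-quantile under H0.
    alpha = 0.05
    null_stats = [log_likelihood_ratio(simulate_poisson(lam0, lam0(T), T))
                  for _ in range(2000)]
    threshold = np.quantile(null_stats, 1.0 - alpha)

    # Apply the test to one realisation generated under the alternative.
    observed = simulate_poisson(lam1, lam1(T), T)
    print("reject H0:", log_likelihood_ratio(observed) > threshold)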
|
66 |
Quelques contributions à la sélection de variables et aux tests non-paramétriques / A few contributions to variable selection and nonparametric tests. Comminges, Laëtitia, 12 December 2012.
Les données du monde réel sont souvent de très grande dimension, faisant intervenir un grand nombre de variables non pertinentes ou redondantes. La sélection de variables est donc utile dans ce cadre. D'abord, on considère la sélection de variables dans le modèle de régression quand le nombre de variables est très grand. En particulier, on traite le cas où le nombre de variables pertinentes est bien plus petit que la dimension ambiante. Sans supposer aucune forme paramétrique pour la fonction de régression, on obtient des conditions minimales permettant de retrouver l'ensemble des variables pertinentes. Ces conditions relient la dimension intrinsèque à la dimension ambiante et la taille de l'échantillon. Ensuite, on considère le problème du test d'une hypothèse nulle composite sous un modèle de régression non paramétrique multivarié. Pour une fonctionnelle quadratique donnée $Q$, l'hypothèse nulle correspond au fait que la fonction $f$ satisfait la contrainte $Q[f] = 0$, tandis que l'alternative correspond aux fonctions pour lesquelles $|Q[f]|$ est minorée par une constante strictement positive. On fournit des taux minimax de test et les constantes de séparation exactes, ainsi qu'une procédure optimale exacte, pour des fonctionnelles quadratiques diagonales et positives. On peut utiliser ces résultats pour tester la pertinence d'une ou plusieurs variables explicatives. L'étude des taux minimax pour les fonctionnelles quadratiques diagonales qui ne sont ni positives ni négatives fait apparaître deux régimes différents : un régime « régulier » et un régime « irrégulier ». On applique ceci au test de l'égalité des normes de deux fonctions observées dans des environnements bruités.
/ Real-world data are often extremely high-dimensional, severely underconstrained and interspersed with a large number of irrelevant or redundant features. Relevant variable selection is a compelling approach for addressing statistical issues in the scenario of high-dimensional, noisy data with a small sample size. First, we address the issue of variable selection in the regression model when the number of variables is very large. The main focus is on the situation where the number of relevant variables is much smaller than the ambient dimension. Without assuming any parametric form of the underlying regression function, we obtain tight conditions making it possible to consistently estimate the set of relevant variables; these conditions relate the intrinsic dimension to the ambient dimension and the sample size. Secondly, we consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional $Q$, the null hypothesis states that the regression function $f$ satisfies the constraint $Q[f] = 0$, while the alternative corresponds to the functions for which $|Q[f]|$ is bounded away from zero. We provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. These results can be applied to testing the relevance of one or several explanatory variables. Studying minimax rates for quadratic functionals which are neither positive nor negative reveals two different regimes, a “regular” one and an “irregular” one. We apply this to the problem of testing the equality of the norms of two functions observed in noisy environments.
|
67 |
Exploitation d'informations riches pour guider la traduction automatique statistique / Complex Feature Guidance for Statistical Machine Translation. Marie, Benjamin, 25 March 2016.
S'il est indéniable que de nos jours la traduction automatique (TA) facilite la communication entre langues, et plus encore depuis les récents progrès des systèmes de TA statistiques, ses résultats sont encore loin du niveau de qualité des traductions obtenues avec des traducteurs humains. Ce constat résulte en partie du mode de fonctionnement d'un système de TA statistique, très contraint sur la nature des modèles qu'il peut utiliser pour construire et évaluer de nombreuses hypothèses de traduction partielles avant de parvenir à une hypothèse de traduction complète. Il existe cependant des types de modèles, que nous qualifions de « complexes », qui sont appris à partir d'informations riches. Si un enjeu pour les développeurs de systèmes de TA consiste à les intégrer lors de la construction initiale des hypothèses de traduction, cela n'est pas toujours possible, car elles peuvent notamment nécessiter des hypothèses complètes ou impliquer un coût de calcul très important. En conséquence, de tels modèles complexes sont typiquement uniquement utilisés en TA pour effectuer le reclassement de listes de meilleures hypothèses complètes. Bien que ceci permette dans les faits de tirer profit d'une meilleure modélisation de certains aspects des traductions, cette approche reste par nature limitée : en effet, les listes d'hypothèses reclassées ne représentent qu'une infime partie de l'espace de recherche du décodeur, contiennent des hypothèses peu diversifiées, et ont été obtenues à l'aide de modèles dont la nature peut être très différente des modèles complexes utilisés en reclassement. Nous formulons donc l'hypothèse que de telles listes d'hypothèses de traduction sont mal adaptées afin de faire s'exprimer au mieux les modèles complexes utilisés. Les travaux que nous présentons dans cette thèse ont pour objectif de permettre une meilleure exploitation d'informations riches pour l'amélioration des traductions obtenues à l'aide de systèmes de TA statistique. Notre première contribution s'articule autour d'un système de réécriture guidé par des informations riches. Des réécritures successives, appliquées aux meilleures hypothèses de traduction obtenues avec un système de reclassement ayant accès aux mêmes informations riches, permettent à notre système d'améliorer la qualité de la traduction. L'originalité de notre seconde contribution consiste à faire une construction de listes d'hypothèses par passes multiples qui exploitent des informations dérivées de l'évaluation des hypothèses de traduction produites antérieurement à l'aide de notre ensemble d'informations riches. Notre système produit ainsi des listes d'hypothèses plus diversifiées et de meilleure qualité, qui s'avèrent donc plus intéressantes pour un reclassement fondé sur des informations riches. De surcroît, notre système de réécriture précédent permet d'améliorer les hypothèses produites par cette deuxième approche à passes multiples. Notre troisième contribution repose sur la simulation d'un type d'information idéalisé parfait qui permet de déterminer quelles parties d'une hypothèse de traduction sont correctes. Cette idéalisation nous permet d'apporter une indication de la meilleure performance atteignable avec les approches introduites précédemment si les informations riches disponibles décrivaient parfaitement ce qui constitue une bonne traduction.
Cette approche est en outre présentée sous la forme d'une traduction interactive, baptisée « pré-post-édition », qui serait réduite à sa forme la plus simple : un système de TA statistique produit sa meilleure hypothèse de traduction, puis un humain apporte la connaissance des parties qui sont correctes, et cette information est exploitée au cours d'une nouvelle recherche pour identifier une meilleure traduction.
/ Although communication between languages has without question been made easier thanks to Machine Translation (MT), especially given the recent advances in statistical MT systems, the quality of the translations produced by MT systems is still well below the quality that can be obtained through human translation. This gap is partly due to the way in which statistical MT systems operate: the types of models that can be used are limited because of the need to construct and evaluate a great number of partial hypotheses before producing a complete translation hypothesis. While more “complex” models learnt from richer information do exist, in practice their integration into the system is not always possible, may require a complete hypothesis to be computed, or may be too computationally expensive. Such features are therefore typically used in a reranking step applied to the list of the best complete hypotheses produced by the MT system. Using these features in a reranking framework does often provide a better modelling of certain aspects of the translation. However, this approach is inherently limited: reranked hypothesis lists represent only a small portion of the decoder's search space, tend to contain hypotheses that vary little between each other, and were obtained with features that may be very different from the complex features to be used during reranking. In this work, we put forward the hypothesis that such translation hypothesis lists are poorly adapted for exploiting the full potential of complex features. The aim of this thesis is to establish new and better methods of exploiting such features to improve translations produced by statistical MT systems. Our first contribution is a rewriting system guided by complex features. Sequences of rewriting operations, applied to hypotheses obtained by a reranking framework that uses the same features, allow us to obtain a substantial improvement in translation quality. The originality of our second contribution lies in the construction of hypothesis lists by multi-pass decoding that exploits information derived from the evaluation, using a set of complex features, of previously produced translation hypotheses. Our system is therefore capable of producing more diverse hypothesis lists, which are globally of better quality and better adapted to a reranking step with complex features. What is more, our aforementioned rewriting system enables us to further improve the hypotheses produced with this multi-pass decoding approach. Our third contribution is based on the simulation of an idealised type of information, designed to perfectly identify the correct fragments of a translation hypothesis. This perfect information gives us an indication of the best attainable performance with the systems described in our first two contributions, in the case where the complex features are able to model the translation perfectly. Through this approach, we also introduce a novel form of interactive translation, coined “pre-post-editing”, in a very simplified form: a statistical MT system produces its best translation hypothesis, then a human indicates which fragments of the hypothesis are correct, and this new information is then used during a new decoding pass to find a new best translation.
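As a hedged illustration of the reranking baseline discussed above (not the thesis's actual system; the feature names, values and weights are invented, and such weights would normally be tuned on held-out data), an n-best list of translation hypotheses can be rescored with a linear combination of additional feature scores:

    # Illustrative sketch of n-best reranking: each translation hypothesis carries the
    # decoder score plus extra feature scores; a linear model selects a new best hypothesis.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        text: str
        features: dict   # feature name -> value

    # Hypothetical weights (placeholders, normally tuned on a development set).
    weights = {"decoder_score": 1.0, "syntax_lm": 0.4, "semantic_sim": 0.6}

    def rerank(nbest):
        """Sort the n-best list by the weighted sum of its feature values."""
        def score(hyp):
            return sum(weights.get(name, 0.0) * value
                       for name, value in hyp.features.items())
        return sorted(nbest, key=score, reverse=True)

    nbest = [
        Hypothesis("the cat sat on the mat",
                   {"decoder_score": -2.1, "syntax_lm": -0.3, "semantic_sim": 0.8}),
        Hypothesis("the cat sits on mat",
                   {"decoder_score": -1.9, "syntax_lm": -1.2, "semantic_sim": 0.5}),
    ]
    print(rerank(nbest)[0].text)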
|
68 |
Patterns and Determinants of Payout Policy in the 21st Century: A study of the Nordic Countries. / Patterns and Determinants of Payout Policy in the 21st Century. Silva da Costa, Tatiana & Nyassi, Abubacarr Sidy, January 2021.
Payout policy is one of the most discussed topics in corporate finance. Since Miller & Modigliani's (1961) dividend irrelevance theory, which was based on perfect markets, many theories have been developed to incorporate market imperfections into payout decisions. Numerous scholars have tried to explain why companies pay dividends, whether they should compensate investors with alternative methods such as share repurchases, or whether they should not distribute cash at all. The topic has gained much attention during the 21st century, driven by the subprime financial crisis in 2008 and, most recently, in 2020, by the economic impact of the Covid-19 pandemic. Another important aspect that makes the study of payout policy relevant in the 21st century is the impact of unfolding trends such as globalization and volatile markets, the increased importance of ecology and sustainability, the emergence of fast-growing firms (mainly in the tech industry), and the changing characteristics of listed firms. Globally there is a tendency towards a reduction in the number of listed firms and a deterioration in the quality of earnings. In addition, there is no consensus about which factors influence a firm's propensity to distribute cash to shareholders, which makes the topic very intriguing. Previous research has been conducted mainly on US firms; few studies have examined payout policies in the Nordic countries, and most of them give little attention to share repurchases and to the determinants of payout policy. We therefore conducted a study of the patterns and determinants of payout policy in the 21st century with a focus on the Nordic countries.
The purposes of the study are, first, to understand the pattern of payout policies in the Nordic countries during the 21st century and, second, to determine whether there is a relationship between a number of selected firm factors and firms' payout policy. As a sub-purpose, we examine whether the Covid-19 pandemic had any effect on Nordic firms' payout policies. The factors investigated, namely debt, profit, retained earnings, growth opportunities, cash holdings, size and age, were identified through a detailed literature review. We collected data from Thomson Reuters DataStream Eikon covering the period between 2000 and 2020 for 1,153 firms from all Nordic countries: Denmark, Iceland, Finland, Norway and Sweden. The study follows a quantitative research method with a deductive approach, and the theoretical framework is based on the following theories: the Miller-Modigliani dividend irrelevance theory, signaling theory, agency theory, life-cycle theory, and the substitution and flexibility hypotheses. To determine whether there is a relationship between the selected firm factors and the payout ratios, we conducted ordinary least squares (OLS) regression analysis; an additional regression analysis was conducted to verify possible impacts of Covid-19 on Nordic payout policies.
Results indicate that some of the selected firm characteristics, such as debt, size and age, have an impact on Nordic firms' payout policy during the 21st century. Larger firms with lower debt are more willing to pay cash dividends, while older firms tend to exhibit higher levels of share repurchases. Firm characteristics showed no impact on changes in payout ratios during the initial period of Covid-19.
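As a purely illustrative aside (not taken from the thesis; the variable names and numbers are fabricated), an OLS regression of a payout ratio on firm-level factors of the kind listed above could be sketched in Python as follows:

    # Minimal OLS sketch: regress the dividend payout ratio on firm-level factors.
    # All values are fabricated; in the thesis the data come from DataStream Eikon.
    import numpy as np

    # Columns: debt ratio, profitability, firm size (log assets), firm age (years).
    X = np.array([
        [0.30, 0.12, 14.2, 25],
        [0.55, 0.08, 12.9, 10],
        [0.20, 0.15, 15.1, 40],
        [0.45, 0.05, 13.5, 18],
        [0.35, 0.10, 14.8, 33],
    ])
    y = np.array([0.42, 0.10, 0.55, 0.18, 0.37])   # dividend payout ratio

    X_design = np.column_stack([np.ones(len(X)), X])      # add an intercept column
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)   # OLS coefficient estimates

    for name, coef in zip(["intercept", "debt", "profit", "size", "age"], beta):
        print(f"{name:>9}: {coef: .4f}")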
|
69 |
ASSESSMENT OF AGREEMENT AND SELECTION OF THE BEST INSTRUMENT IN METHOD COMPARISON STUDIES. Choudhary, Pankaj K., 11 September 2002.
No description available.
|
70 |
Inligtingswaarde van dividende. Nortjé, André, 11 1900.
Die studie ondersoek die inligtingswaarde van dividende as 'n moontlike verklaring van die waargenome aandeleprysreaksie op dividendaankondigings. Twee algemene hipoteses is getoets, naamlik dat 'n betekenisvolle verandering in 'n maatskappy se dividendbeleid inligting oor daardie maatskappy se toekomstige verdienste per aandeel bevat, en tweedens dat hierdie inligting in die reaksie van aandelepryse na die aankondiging van die verandering gereflekteer word. Die belangrikste bevindinge is soos volg:
• Die inligting vervat in huidige dividendaankondigings kan nie deur beleggers gebruik word om die volgende jaar se verdienste per aandeel van 'n maatskappy te voorspel nie.
• Die aandeleprysreaksie op positiewe, negatiewe en neutrale nuus is statisties beduidend, maar vind hoofsaaklik in dieselfde rigting plaas. Beleggers sou dus nie die inligting vervat in dividendaankondigings kan gebruik om bogemiddelde opbrengskoerse te genereer nie.
• Die inligtingswaarde van dividende is dus 'n onwaarskynlike verklaring van die invloed van 'n maatskappy se dividendbeleid op die waarde van sy gewone aandele.
/ This research investigates the information content of dividends as a possible explanation for the observed share price reaction to dividend announcements. Two hypotheses were tested, namely that a significant change in a company's dividend policy contains information on that company's future earnings per share, and secondly, that this information is reflected in the share price reaction after the announcement of the change. The most important findings are as follows:
• Investors cannot use the information contained in current dividend announcements to predict a company's earnings per share for the next year.
• Share price reactions to positive, negative and neutral news are statistically significant, but occur mainly in the same direction. Hence investors cannot use this information to generate above-normal returns.
• The information content of dividends is therefore an unlikely explanation of the influence a company's dividend policy has on the value of its ordinary shares.
/ Business Management / MCom (Sakebestuur)
|