  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The new guideline for goodwill impairment

Swanson, Nancy Jewel 15 December 2007 (has links)
Goodwill, for financial accounting purposes, is an intangible asset on the balance sheet that represents the excess of the amount paid for an acquired entity over the net fair value of the assets acquired. The Financial Accounting Standards Board has recently issued a new mandate. This new guideline eliminates annual amortization of goodwill and requires annual valuation for potential goodwill impairment and consequent writedown. Determining the amount of impairment requires management estimation, thus allowing managerial discretion in developing the impairment amounts. That discretion may then be used to manage earnings. Earnings management occurs when managers exercise their professional judgment in financial reporting to manipulate earnings. Prior literature documents that managers have strong motivations to manage earnings, and that they sometimes respond by managing earnings to exceed key earnings thresholds. The new goodwill guideline might be used as an earnings management tool. Thus, this dissertation examines whether earnings management results from the judgmental latitude allowed in estimating goodwill impairment when earnings would otherwise just miss key earnings benchmarks. Specifically, this study tests goodwill impairment writedowns in a cross-sectional distributional analysis for the year 2002, the first year following the effective date of the new goodwill standards. The sample is drawn from the financial information of publicly traded companies tracked in the Compustat and CRSP databases. To identify firms that are likely to have managed earnings to exceed key benchmarks, earnings per share, both before and after goodwill impairment writedowns, is compared with two thresholds established in prior research: the first is positive earnings per share, and the second is the prior year’s earnings per share. Results from applying both tobit and logistic regression models suggest that managers are exploiting their discretion in recognizing goodwill impairments to manage earnings. Thus, this project contributes to the earnings management literature by highlighting the exploitation of increased judgmental latitude for earnings management purposes.
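As a purely illustrative sketch of the kind of benchmark test described in this abstract (not the author's actual code; the column names eps_post, prior_eps, impairment_taken, size and leverage are hypothetical), a logistic regression of impairment recognition on benchmark proximity could look like this in Python with pandas and statsmodels:

    # Illustrative sketch only -- hypothetical column names, not the study's data or model.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("firms_2002.csv")  # hypothetical Compustat/CRSP extract

    # EPS after the goodwill impairment writedown, measured against two benchmarks.
    df["delta_zero"] = df["eps_post"] - 0.0               # distance to the positive-earnings benchmark
    df["delta_prior"] = df["eps_post"] - df["prior_eps"]  # distance to last year's EPS

    # Firms landing just above a benchmark after the impairment decision
    # (the 1-cent interval is an arbitrary illustrative choice).
    df["just_beat"] = ((df["delta_zero"].between(0, 0.01)) |
                       (df["delta_prior"].between(0, 0.01))).astype(int)

    # Logistic regression: does benchmark proximity predict recognising an impairment?
    X = sm.add_constant(df[["just_beat", "size", "leverage"]])  # hypothetical controls
    logit = sm.Logit(df["impairment_taken"], X).fit()
    print(logit.summary())

A tobit specification for the size of the writedown could be fit analogously with a censored-regression routine.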
2

The Time-course of Lexical Influences on Fixation Durations during Reading: Evidence from Distributional Analyses

Sheridan, Heather 13 August 2013 (has links)
Competing models of eye movement control during reading disagree over the extent to which eye movements reflect ongoing linguistic and lexical processing, as opposed to visual/oculomotor factors (for reviews, see Rayner, 1998, 2009a). To address this controversy, participants’ eye movements were monitored in four experiments that manipulated a wide range of lexical variables. Experiment 1 manipulated contextual predictability by presenting target words (e.g., teeth) in a high-predictability prior context (e.g., “The dentist told me to brush my teeth to prevent cavities.”) versus a low-predictability prior context (e.g., “I'm planning to take better care of my teeth to prevent cavities.”). Experiment 2 manipulated lexical ambiguity by presenting biased homographs (e.g., bank, crown, dough) in a subordinate-instantiating versus a dominant-instantiating prior context, and Experiments 3A and 3B manipulated word frequency by contrasting high-frequency target words (e.g., table) with low-frequency target words (e.g., banjo). In all four experiments, I used distributional analyses to examine the time-course of lexical influences on fixation times. Ex-Gaussian fitting (Staub, White, Drieghe, Hollway, & Rayner, 2010) revealed that all three lexical variables (i.e., predictability, lexical ambiguity, word frequency) were fast-acting enough to shift the entire distribution of fixation times, and a survival analysis technique (Reingold, Reichle, Glaholt, & Sheridan, 2012) revealed rapid lexical effects that emerged as early as 112 ms from the start of the fixation. Building on these findings, Experiments 3A and 3B provided evidence that lexical processing is delayed in an unsegmented text condition that contained numbers instead of spaces (e.g., “John4decided8to5sell9the7table2in3the9garage6sale”), relative to a normal text condition (e.g., “John decided to sell the table in the garage sale”). These findings have implications for ongoing theoretical debates concerning eye movement control, lexical ambiguity resolution, and the role of interword spaces during reading. In particular, the present findings provide strong support for models of eye movement control that assume that lexical influences can have a rapid effect on the majority of fixation durations, and are inconsistent with models that assume that fixation times are primarily determined by visual/oculomotor constraints.
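To make the ex-Gaussian idea concrete, here is a minimal sketch (simulated data, not the thesis's analysis code) using scipy's exponentially modified normal distribution; the conventional mu/sigma/tau parameters are recovered from scipy's K, loc and scale:

    # Illustrative sketch only -- simulated fixation durations, not the thesis's analysis code.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated fixation durations (ms): a normal component plus an exponential tail.
    durations = rng.normal(200, 30, 2000) + rng.exponential(80, 2000)

    # Fit an ex-Gaussian (exponentially modified normal). scipy parametrises it as
    # exponnorm(K, loc, scale), with mu = loc, sigma = scale, tau = K * scale.
    K, loc, scale = stats.exponnorm.fit(durations)
    mu, sigma, tau = loc, scale, K * scale
    print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")

In this framework, a lexical manipulation that shifts the whole distribution shows up as a change in mu, whereas an effect confined to the slowest fixations surfaces mainly in the tail parameter tau.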
3

Distributional analysis applied to specialised corpora: reduction of data sparsity through context abstraction

Périnet, Amandine 17 March 2015 (has links)
In specialised domains, applications such as information retrieval or machine translation rely on terminological resources to take terms, semantic relations between terms, or groupings of terms into account. To cope with the cost of building these resources, automatic methods have been proposed. Among them, distributional analysis exploits the redundant information found in the contexts of terms to establish relations between those terms. While this hypothesis is usually implemented with vector space models, such models suffer from a very high number of dimensions and from data sparsity in the matrix of context vectors. In specialised corpora, this redundant contextual information is even sparser and rarer because the corpora are much smaller. Likewise, complex terms are generally ignored because of their very low number of occurrences. In this thesis, we address the problem of limiting data sparsity in specialised corpora and propose a method that densifies the context matrix by abstracting distributional contexts. Semantic relations acquired from the corpus are used to generalise and normalise these contexts. We evaluated the robustness of the method on four corpora of different sizes, languages and domains. The analysis of the results shows that, while allowing complex terms to be taken into account in distributional analysis, the abstraction of distributional contexts yields semantic clusters of better quality that are also more consistent and more homogeneous.
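The abstraction step can be illustrated with a toy sketch (hypothetical data and relation table, not the thesis's implementation): each syntactic context is generalised through a semantic relation acquired from the corpus before the term-context matrix is counted, so that terms sharing only specific contexts end up sharing an abstracted, denser column.

    # Toy sketch of context abstraction -- hypothetical data, not the thesis's implementation.
    from collections import defaultdict

    # Hypothetical semantic relations acquired from the corpus: word -> more general class.
    generalise = {"scanner": "imaging_device", "irm": "imaging_device", "radio": "imaging_device"}

    # (term, syntactic context) pairs extracted from parsed text.
    pairs = [("lésion", "obj_of:détecter_par_scanner"),
             ("lésion", "obj_of:détecter_par_irm"),
             ("tumeur", "obj_of:détecter_par_radio")]

    def abstract_context(ctx: str) -> str:
        """Replace the lexical head of a context by its acquired class, if any."""
        head = ctx.rsplit("_", 1)[-1]
        return ctx.replace(head, generalise.get(head, head))

    matrix = defaultdict(lambda: defaultdict(int))
    for term, ctx in pairs:
        matrix[term][abstract_context(ctx)] += 1

    # After abstraction, "lésion" and "tumeur" share the same (denser) context column.
    print({t: dict(c) for t, c in matrix.items()})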
4

Evaluation of resources provided by automatic distributional analysis: a linguistic approach

Morlane-Hondère, François 10 July 2013 (has links)
In this thesis, we address, from a linguistic point of view, the evaluation of lexical resources extracted by automatic distributional analysis. The evaluation methods currently applied to these resources (comparison with gold-standard lexicons, task-based evaluation, the TOEFL test, etc.) take a quantitative view of the data that leaves little room for interpreting the word pairs that are extracted. As a result, the conditions under which some word pairs are extracted while others are not remain poorly understood. Our work aims at a better understanding of the corpus behaviours that govern distributional relatedness. We first took a quantitative approach, comparing several distributional resources computed on different corpora with gold-standard lexicons (the CRISCO Dictionnaire électronique des synonymes and the JeuxDeMots crowdsourced lexical network). This step gave us a global estimate of the content of our resources and allowed us to select samples of word pairs for qualitative study.
This second step is the core of the thesis. We focused on the classical lexico-semantic relations of synonymy, antonymy, hypernymy and meronymy, which we examined through four different protocols. Relying on the relations contained in the gold-standard lexicons, we compared the distributional properties of the synonym/antonym/hypernym/meronym pairs that were extracted by distributional analysis with those of the pairs that were not. We thereby bring to light several phenomena that favour or block the substitutability of word pairs, and hence their extraction. These phenomena are considered with respect to parameters such as the nature of the corpus from which the distributional resources were built (encyclopaedic, journalistic or literary) and the limits of the gold-standard lexicons. In questioning the evaluation methods currently applied to distributional resources, this work also illustrates the value of treating these resources as objects of linguistic study in their own right: they result from a large-scale application of the substitutability principle, which makes them a material of choice for describing lexico-semantic relations.
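A minimal sketch of the quantitative comparison step (hypothetical toy data, not the thesis's evaluation code): the distributional neighbours of each word are checked against a reference lexicon and the share of attested pairs is reported.

    # Minimal sketch of gold-standard comparison -- hypothetical data, not the thesis's code.
    # Distributional thesaurus: word -> ranked list of nearest neighbours.
    thesaurus = {"maison": ["demeure", "habitation", "jardin"],
                 "voiture": ["véhicule", "automobile", "conducteur"]}

    # Reference lexicon (e.g. a synonym dictionary): word -> set of attested synonyms.
    gold = {"maison": {"demeure", "habitation", "logis"},
            "voiture": {"automobile", "véhicule"}}

    def precision_at_k(thesaurus, gold, k=3):
        """Share of the top-k distributional neighbours attested in the reference."""
        hits, total = 0, 0
        for word, neighbours in thesaurus.items():
            ref = gold.get(word, set())
            hits += sum(1 for n in neighbours[:k] if n in ref)
            total += min(k, len(neighbours))
        return hits / total if total else 0.0

    print(f"precision@3 = {precision_at_k(thesaurus, gold):.2f}")

The pairs that fall outside the reference are precisely the ones the qualitative step then examines.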
5

Lexical neighbours for discourse analysis

Adam, Clémentine 28 September 2012 (has links)
This thesis examines the role of lexical cohesion in various approaches to discourse analysis. Two main hypotheses are explored: (i) distributional analysis, which brings lexical units together on the basis of the syntactic contexts they share, uncovers a wide range of semantic relations that can be exploited to detect the lexical cohesion of texts; (ii) lexical cues signal the organisation of discourse and can be exploited both at a local level (identifying rhetorical relations between elementary discourse units) and at a global level (detecting or characterising higher-level segments with a rhetorical function that guarantee the coherence and readability of the text, such as thematically unified passages). On the first point, we show the relevance of a distributional resource for capturing a wide range of relations involved in the lexical cohesion of texts, and we present the projection and filtering methods we implemented to produce usable outputs. On the second point, we provide a series of case studies showing the benefit of a careful treatment of lexical cohesion for a wide variety of problems related to the study and automatic detection of textual organisation: thematic segmentation of texts, characterisation of enumerative structures, study of the correlation between lexicon and the rhetorical structure of discourse, and detection of realisations of a particular discourse relation, the Elaboration relation.
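One of the applications listed above, thematic segmentation, can be sketched as follows (a hypothetical toy example, not the thesis's implementation): lexical cohesion between adjacent sentences is scored by counting word pairs linked in a distributional resource, and boundaries are placed at cohesion minima.

    # Hypothetical sketch of cohesion-based segmentation -- not the thesis's implementation.
    # Distributional resource: word -> set of distributional neighbours.
    neighbours = {"tribunal": {"cour", "juge"}, "juge": {"tribunal", "magistrat"},
                  "récolte": {"moisson", "blé"}, "blé": {"récolte", "champ"}}

    def cohesion(sent_a, sent_b):
        """Count word pairs across two sentences linked in the distributional resource."""
        links = 0
        for a in sent_a:
            for b in sent_b:
                if b in neighbours.get(a, set()) or a in neighbours.get(b, set()):
                    links += 1
        return links

    sentences = [["le", "tribunal", "siège"], ["le", "juge", "délibère"],
                 ["la", "récolte", "commence"], ["le", "blé", "est", "mûr"]]

    scores = [cohesion(sentences[i], sentences[i + 1]) for i in range(len(sentences) - 1)]
    # A low score between sentences 2 and 3 suggests a thematic boundary there.
    print(scores)  # [1, 0, 1]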
