181

Plattform Göteborg : an evaluation of an integration project in Gothenburg

Hansson, Karolin January 2008
In 2005 the Swedish government introduced an amnesty law for refugees in the country, which prompted the Minister of Integration to invite a number of national organizations to discuss the law. Following this, seven organizations in Gothenburg also felt that something should be done, and they began discussing a cooperation to improve the situation for people newly arrived in Gothenburg. They formed a project, "Plattform Göteborg", which is evaluated in this paper according to a manual from Sida. The project consists of seven organizations, each with its own activities designed to improve integration: they teach Swedish, offer a place for counseling, teach sports to young people, and run various activities for children. I present these activities, describe how the organizations planned them, examine whether the plans accord with what actually happened, and evaluate the results. To do this I performed interviews with the people involved and then examined and evaluated the project according to five criteria: effectiveness, impact, relevance, sustainability and efficiency. From this I conclude that the idea of cooperation between organizations is good and necessary, but that better cooperation with the municipality is needed to make it work. The project also needs to be better structured and better planned, and here the organizations could support each other more. Finally, the organizations need to give more consideration to effects than they have done so far: to decide what they want to get out of the project and to examine which effects have actually resulted from it.
182

Learning and Relevance in Information Retrieval: A Study in the Application of Exploration and User Knowledge to Enhance Performance

Hyman, Harvey Stuart 01 January 2012
This dissertation examines the impact of exploration and learning upon eDiscovery information retrieval; it is written in three parts. Part I contains foundational concepts and background on the topics of information retrieval and eDiscovery. This part informs the reader about the research frameworks, methodologies, data collection, and instruments that guide this dissertation. Part II contains the foundation, development and detailed findings of Study One, "The Relationship of Exploration with Knowledge Acquisition." This part of the dissertation reports on experiments designed to measure user exploration of a randomly selected subset of a corpus and its relationship with performance in the information retrieval (IR) result. The IR results are evaluated against a set of scales designed to measure behavioral IR factors and individual innovativeness. The findings reported in Study One suggest a new explanation for the relationship between recall and precision, and provide insight into behavioral measures that can be used to predict user IR performance. Part II also reports on a secondary set of experiments performed on a technique for filtering IR results by using "elimination terms." These experiments have been designed to develop and evaluate the elimination term method as a means to improve precision without loss of recall in the IR result. Part III contains the foundation and development of Study Three, "A New System for eDiscovery IR Based on Context Learning and Relevance." This section reports on a set of experiments performed on an IT artifact, Legal Intelligence®, developed during this dissertation. The artifact developed for Study Three uses a learning tool for context and relevance to improve the IR extraction process by allowing the user to adjust the IR search structure based on iterative document extraction samples. The artifact has been developed based on the needs of the business community of practitioners in the domain of eDiscovery; it has been instantiated and tested during Study Three and has produced significant results supporting its feasibility for use. Part III ends with conclusions and steps for future research extending beyond this dissertation.
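The elimination-term technique lends itself to a compact illustration. The sketch below is a hypothetical reading of the idea, not code from the dissertation: a query first over-retrieves, then documents containing any of a set of user-supplied elimination terms are dropped, the aim being higher precision at unchanged recall.

```python
# A minimal sketch of "elimination term" filtering as described above:
# documents matching any elimination term are removed from the result set,
# aiming to raise precision without discarding relevant (recalled) items.
# The term list and tokenizer are illustrative assumptions, not Hyman's code.

def tokenize(text: str) -> set[str]:
    """Lowercase, whitespace-split bag of words (simplistic on purpose)."""
    return set(text.lower().split())

def filter_by_elimination_terms(results: list[str],
                                elimination_terms: set[str]) -> list[str]:
    """Keep only documents that contain none of the elimination terms."""
    return [doc for doc in results if not (tokenize(doc) & elimination_terms)]

if __name__ == "__main__":
    retrieved = [
        "quarterly earnings memo for the merger",  # relevant hit
        "fantasy football pool standings memo",    # noise that matched "memo"
    ]
    print(filter_by_elimination_terms(retrieved, {"football", "fantasy"}))
    # -> ['quarterly earnings memo for the merger']
```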
183

Does the Use of Personally Relevant Stimuli in Semantic Complexity Training Facilitate Improved Functional Communication Performance Compared to Non-Personally Relevant Stimulus Items among Adults with Chronic Aphasia?

Karidas, Stephanie 01 January 2013
This study investigated the influence of semantic complexity treatment on discourse performance in individuals with fluent aphasia. Semantic treatment is an effective way to improve semantically based word retrieval problems in aphasia. Treatment based on the semantic application of the Complexity Account of Treatment Efficacy (CATE) (Thompson, Shapiro, Kiran, & Sobecks, 2003) promotes training of complex items, resulting in generalization to less complex, untrained items. In addition, research has shown that the personal relevance of treatment material can increase treatment efficacy. This study investigated the effect on discourse performance of semantic treatment of atypical, personally relevant items among individuals with aphasia. Two treatment phases were applied to examine the influence of personally relevant and non-relevant treatment material on discourse performance. In addition, generalization from trained atypical items to untrained typical items was investigated. Methods and procedures were partially replicated from Kiran, Sandberg, & Sebastian (2011), who examined semantic complexity within goal-derived (ad hoc) categories. Three participants with fluent aphasia were trained on three semantic tasks: category sorting, semantic feature generation/selection, and yes/no feature questions. A generative naming task was used for probe data collection every second session. Stimuli consisted of atypical items only. The hypothesis that semantic complexity training of personally relevant items from ad hoc categories would produce greater generalization to associated, untrained items than training of non-relevant items, and would consequently increase discourse performance, was not supported. The findings revealed a failure to replicate the magnitude and type of improvements previously reported for the typicality effect in generative naming. Clinical significance was found for personally relevant and non-relevant discourse performance; however, no consistent pattern was found within or across participants. In addition, the effect size for generalization from trained atypical to untrained typical items was not significant. The limitations of this study suggest future directions: further specifying participant selection criteria (such as cognitive abilities), making procedural changes, and including discourse performance as an outcome measure. Overall, the results of this study provide weak support for replicating semantic treatment of atypical exemplars in ad hoc categories and hence demonstrate the critical role of replication across labs in identifying key issues in the candidacy, procedures, and outcome measurement of any developing treatment.
184

Emergence of comprehension of Spanish second language requests

Sauveur, Robert Paul 23 October 2013
This dissertation examines the developmental trajectory of online processing toward second language (L2) pragmatic comprehension. This goal stems from two shortcomings of previous research: (1) approaching L2 pragmatics as the acquisition of discrete phenomena through progressive stages (see Kasper, 2009), and (2) focusing narrowly on production. Building upon previous L2 pragmatic comprehension work (Carrell, 1981; P. García, 2004; Taguchi, 2005, 2007, 2008a, 2008b, 2011a, 2011b; Takahashi & Roitblat, 1994), the current study investigates the development of L2 Spanish request speech act comprehension by native English-speaking adult learners. The analysis involves accuracy, comprehension speed, and the relationship between the two dimensions across three levels of directness over a 13-week period. Previous research drew on skill acquisition theories (Anderson & Lebiere, 1998) to account for increased accuracy and decreased response times over time. Here, further analysis is based on Complexity Theory / Dynamic Systems Theory (CT/DST) (Larsen-Freeman, 1997; Larsen-Freeman & Cameron, 2008a; de Bot, Lowie, & Verspoor, 2007; Ellis, et al., 2009; Verspoor, de Bot, & Lowie, 2011) to account for the seemingly chaotic results often found in L2 research. The findings of the current study show significant overall improvement in accuracy and speed of Spanish request identification, and a moderate relationship between the two measures. However, the association between slower responses and higher accuracy in the current data contradicts skill acquisition theories; the theoretical framework of CT/DST provides a more authentic account of development. As such, the results indicate that the levels of request directness develop along distinct trajectories and timescales. Direct requests show higher accuracy and faster interpretation. While the most indirect level of requests shows the largest improvement in accuracy, responses for these items are no faster at the end of the study than at the beginning. The development of conventionally indirect requests occupies a middle ground, with accuracy similar to direct requests and comprehension speed similar to that of implied items. Further findings present L2 pragmatic comprehension as a complex, dynamic system that emerges through the differential effects of predictor variables across measures and within sub-groups of participants based on proficiency improvement, motivation and response strategy.
185

Metaphor and relevance theory : a new hybrid model

Stöver, Hanna January 2010
This thesis proposes a comprehensive cognitive account of metaphor understanding that combines aspects of Relevance Theory (e.g. Sperber & Wilson 1986/95; Carston 2002) and Cognitive Linguistics, in particular ideas from Conceptual Metaphor Theory (e.g. Lakoff & Johnson 1980; Lakoff 1987; Johnson 1991) and Situated Conceptualization (e.g. Barsalou 1999; 2005). While Relevance Theory accounts for propositional aspects of metaphor understanding, the model proposed here additionally accounts for nonpropositional effects which intuitively make metaphor feel 'special' compared to literal expressions. This is achieved by (a) assuming a further, more basic processing level of imagistic-experiential representations involving mental simulation patterns (Barsalou 1999; 2005) alongside relevance-theoretic inferential processing and (b) assuming processing of the literal meaning of a metaphorical expression at a metarepresentational level, as proposed by Carston (2010). The approach takes Tendahl's 'Hybrid Theory of Metaphor' (2006), which also combines cognitive-linguistic with relevance-theoretic ideas, as a starting point. Like Tendahl, it incorporates the notion of conceptual metaphors (Lakoff & Johnson 1980), albeit in a modified form, thus accounting for metaphor in thought. Wilson (2009) suggests that some metaphors originate in language (as previously assumed by Relevance Theory) and others originate in thought (as previously assumed within Cognitive Linguistics). The model proposed here can account for both. Unlike Tendahl, it assumes a modular mental architecture (Sperber 1994), which ensures that the different levels of processing are kept apart. This is because each module handles only its own domain-specific input, here consisting of either propositional or imagistic-experiential representations. The propositional level, which remains the dominant processing route in utterance understanding, as in Relevance Theory, receives some input from the imagistic-experiential level. This is mediated at a metarepresentational level, which turns the imagistic-experiential representations into propositional material to be processed at the inferential level in the understanding of literal expressions. In metaphor understanding, however, the literal meaning is not processed as meaning-constitutive content. As a result, the imagistic-experiential aspects of the literal meaning in question are not processed as propositional input. Rather, they are held at the metarepresentational level and experienced as strong impressions of the kind that only metaphors can communicate.
186

The value relevance of comprehensive income

Ringström, Elena, Ekström, Jörgen January 2012
In this study, we look at the effects of the adoption of the revised IAS 1, in effect since January 1, 2009. The revised IAS 1 requires that all changes in equity, except those arising from transactions with owners, be recognized in the statement of comprehensive income, and that companies report total comprehensive income, the sum of net income and other comprehensive income. Total comprehensive income includes all unrealized gains and losses recognized under IFRS. Before the amendment, some unrealized gains and losses were shown in the statement of changes in equity but not in the income statement. We aim to answer whether the inclusion of the components of other comprehensive income provides investors with useful information. We investigate whether stock prices are associated with the components of other comprehensive income, and thereby how effective the IASB's attempts to increase the relevance of accounting information about corporate income have been. We hope that the results of the study will be of interest to the standard-setter. We use data from annual reports and year-end reports for the years 2009 to 2011 for companies listed on the Large and Mid Cap segments of NASDAQ OMX Stockholm, and we use two regression models to test the value relevance of the components of other comprehensive income. We have found some evidence that the share price is statistically related to one component of other comprehensive income, the change in the fair value of cash flow hedges; this can be interpreted as that component having some value relevance. We also found some evidence that the share price is significantly associated with a winning cash flow hedging position. We did not find that the share price is associated with the other components of other comprehensive income.
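The kind of price regression the authors describe can be sketched generically. The specification below is an Ohlson-style price-level model under assumed variable names and toy data, a sketch rather than the study's exact model.

```python
# A hedged sketch of a price regression testing the value relevance of an
# OCI component (here the fair-value change of cash flow hedges), in the
# spirit of the study; column names and data are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: share price, book value per share, net
# income per share, and the cash-flow-hedge OCI component per share.
df = pd.DataFrame({
    "price":   [112.0, 45.5, 78.2, 23.1, 150.3, 61.0],
    "bvps":    [60.0, 30.2, 41.0, 15.5, 80.1, 35.6],
    "nips":    [9.1, 2.3, 5.0, 1.1, 12.4, 3.8],
    "cfh_oci": [0.8, -0.3, 0.2, -0.1, 1.5, 0.0],
})

# If cfh_oci is value relevant, its coefficient should remain significant
# once book value and net income are controlled for.
model = smf.ols("price ~ bvps + nips + cfh_oci", data=df).fit()
print(model.summary())
```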
187

Have IFRS Contributed to an Increased Value-Relevance? : The Scandinavian Evidence

Bogstrand, Oskar, Larsson, Erik Alexander January 2012
This paper examines the value-relevance of Scandinavian earnings information and book values over the past decade in order to shed some light on whether the extensive global adoption of IFRS/IAS has contributed to increased accounting quality in terms of economic decision-usefulness to equity investors. We address this research question using a sample of 4,310 firm-year observations for 431 exchange-listed companies on NASDAQ OMX Nordic and the Oslo Stock Exchange between 2001 and 2010. The degree of value-relevance in our firm sample is operationalized through two price regressions and one return regression, and empirically tested via the statistical association between capitalized values of equity, or annual changes in capitalized values of equity, and the study's three explanatory accounting variables: (i) book values, (ii) accrual-based earnings and (iii) cash-flow-based earnings. Taken as a whole, our results show significant empirical signs of an increased value-relevance in both Scandinavian earnings information and book values, allowing us to draw significant and substantive conclusions on the information content of financial statement information disclosed in the Scandinavian region. We believe our study adds empirical substance to practical debates over the function of financial reporting, as well as resourceful material both to Scandinavian investors and to the ongoing international discussion on the harmonization of financial reporting standards.
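The return-regression side of such a design can be sketched in the same spirit. The Easton-Harris-style specification and all variable names below are assumptions for illustration, not the paper's exact model.

```python
# A hedged sketch of the return regression in a value-relevance test:
# annual returns regressed on earnings levels and changes, both scaled by
# beginning-of-year price. Specification and names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "ret":        [0.12, -0.05, 0.30, 0.02, -0.18, 0.21],  # annual return
    "eps_level":  [0.08, 0.03, 0.11, 0.05, 0.01, 0.09],    # EPS / lagged price
    "eps_change": [0.02, -0.01, 0.05, 0.00, -0.03, 0.03],  # dEPS / lagged price
})

# Value relevance shows up as significant slopes; comparing R-squared
# across pre- and post-IFRS subsamples is one way to gauge whether value
# relevance increased after adoption.
model = smf.ols("ret ~ eps_level + eps_change", data=df).fit()
print(model.rsquared, model.params)
```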
188

A comparison of grade 8 to 10 urban and peri-urban learners' context preferences for mathematical literacy

Blaauw, Christopher January 2009
The study compared the context preferences in mathematical literacy of grade 8 to 10 learners from urban and peri-urban areas. There is currently a strong emphasis on the use of contexts for school mathematics. This is also the case in South Africa, where grade 10 learners have to choose between mathematics and mathematical literacy as one of their compulsory subjects. This study focused on the use of mathematics in real-life situations. Data were collected using questionnaires developed as part of the Relevance of School Mathematics Education (ROSME) project; the questionnaire dealt with contexts preferred by grade 10 learners from urban and peri-urban areas. The data were analysed using non-parametric statistical techniques. The findings indicate that there were contexts highly preferred by learners from both urban and peri-urban areas; contexts least preferred by learners from both areas; contexts highly preferred by learners from peri-urban areas but not by learners from urban areas; and vice versa. It is recommended that contexts highly preferred by learners be incorporated into their learning experiences.
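The non-parametric comparison described can be illustrated with a minimal sketch. The Mann-Whitney U test and the toy Likert-scale ratings below are assumptions, since the abstract does not name the specific techniques used.

```python
# A minimal sketch of a non-parametric comparison of context preferences
# between urban and peri-urban learners. The Mann-Whitney U test and the
# toy ratings are illustrative assumptions, not the study's actual analysis.
from scipy.stats import mannwhitneyu

# Hypothetical preference ratings (1 = not interested, 4 = very interested)
# for one context, e.g. "mathematics involved in sport".
urban      = [4, 3, 4, 2, 3, 4, 3, 4]
peri_urban = [2, 3, 2, 1, 2, 3, 2, 2]

stat, p = mannwhitneyu(urban, peri_urban, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # a small p suggests the groups differ
```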
189

TEMPLAR : efficient determination of relevant axioms in big formula sets for theorem proving

Frank, Mario January 2013
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on unifiability of predicates and can also use a linguistic approach for the selection. The aim of the technique is to reduce the set of formulae and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation and evaluation of both selection concepts. While one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes, and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it: it can be used for higher-order and modal logic, too, with minimal adaptations. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the CASC results and of benchmarks on the problems of the 2012 competition (CASC-J6) shows that the system has a positive impact on the performance of automated theorem provers. Benchmarks with two theorem provers that use different calculi have also shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has proven competitive to some extent with that of SinE, and in the CASC it even helped one of the theorem provers to solve problems that were not solved (or were solved more slowly) with SinE selection. Finally, the evaluation implies that the combination of the unification-based and linguistic selection yields further improved results, even though no optimisation was done for the problems. / This document presents a system that selects, from a (large) given set of formulae of classical predicate logic, a subset that is relevant for the proof of a logical formula. The aim of the system is to make formulae provable within a fixed time limit, or to speed up the proof search through the restricted formula set. The document describes the conception, implementation and evaluation of the system, covering its two different selection approaches. While one concept performs a graph search on either the negation normal forms or the Skolem normal forms of the formulae, building paths from one formula to another by unification of predicates, the other concept analyses the frequencies of lexemes and computes a relevance value by applying the tf-idf measure known from computational linguistics. The results of the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) are presented and the effect of the system on the proof search is analysed, together with the results of testing the system on the problems of the 2012 championship (CASC-J6). On this basis it is evaluated to what extent the restrictions support the theorem provers in proving complex problems. Finally, it is shown that the system has positive effects for the theorem provers and is independent of the calculus they use. Furthermore, the approach is independent of the logic used and can in principle be applied to all orders of predicate logic, to propositional logic, and to modal logic, which makes it universally usable in automated theorem proving. Both selection approaches prove suitable for different formula sets, and it is shown that their combination yields a significant increase in the number of provable formulae and that the selection can keep up with the capabilities of another selection system.
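The tf-idf-based linguistic selection lends itself to a compact sketch. The following is a hypothetical reading of the approach, not TEMPLAR's actual code: formulae are treated as bags of lexemes, and an axiom's relevance to the conjecture is the summed tf-idf weight of the lexemes it shares with it.

```python
# A hedged sketch of tf-idf-based formula selection as described above.
# Tokenisation and scoring details are assumptions, not TEMPLAR's code.
import math
import re
from collections import Counter

def lexemes(formula: str) -> list[str]:
    """Extract identifier-like tokens from a formula string."""
    return re.findall(r"[a-zA-Z_]\w*", formula)

def select(axioms: list[str], conjecture: str, k: int) -> list[str]:
    docs = [Counter(lexemes(a)) for a in axioms]
    n = len(docs)
    df = Counter(tok for d in docs for tok in d)  # document frequency
    goal = set(lexemes(conjecture))

    def score(d: Counter) -> float:
        # tf-idf mass that the axiom shares with the conjecture's lexemes
        return sum(d[t] * math.log(n / df[t]) for t in goal if t in d)

    ranked = sorted(zip(axioms, docs), key=lambda ad: score(ad[1]), reverse=True)
    return [a for a, _ in ranked[:k]]

axioms = ["mortal(X) :- human(X)", "human(socrates)", "green(grass)"]
print(select(axioms, "mortal(socrates)", k=2))
# -> the two axioms sharing lexemes with the conjecture; green(grass) is dropped
```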
190

Axiom relevance decision engine : technical report

Frank, Mario January 2012
This document presents an axiom selection technique for classical first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on unifiability of predicates and does not need statistical information such as symbol frequency. The aim of the technique is to reduce the set of axioms and to increase the number of conjectures provable in a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation and evaluation of ARDE. The selection method, which is based on a breadth-first graph search by unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated by comparison with the results of the 2012 world championship of theorem provers (CASC-J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. The evaluation also shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover. / This technical report describes the conception, implementation and evaluation of a procedure for selecting logical formulae according to their relevance for the proof of a logical formula. The procedure is applied exclusively to first-order predicate logic, although it is also suitable for higher-order predicate logics. It uses a unification-based breadth-first search on a graph in which every node is a predicate and every edge a unifiability relation. The aim of the procedure is to reduce a given set of formulae to a size manageable for current theorem provers, which makes it suitable as a preprocessing step for automated theorem proving. To speed up the search, a weakened form of unification is used alongside standard unification. The system was entered, together with the theorem prover leanCoP, at the world championship of theorem provers (CASC-J6, 2012) in Manchester, where it helped leanCoP solve problems that leanCoP alone could not handle. Subsequent tests with leanCoP and the theorem prover E show that the procedure is independent of the calculus used and has positive effects on the provability of problems with large formula sets for both theorem provers.
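The unification-based breadth-first selection can be illustrated in miniature. In the sketch below, unifiability is approximated by matching predicate symbol and arity, a deliberate simplification standing in for ARDE's weakened unification; all names and data are illustrative.

```python
# A minimal sketch of relevance selection by breadth-first search over a
# predicate-unifiability graph, in the spirit of ARDE. Unifiability is
# approximated by matching predicate symbol and arity, a simplification
# standing in for ARDE's (weakened) unification.
import re
from collections import deque

def predicates(formula: str) -> set[tuple[str, int]]:
    """Crude (symbol, arity) extraction from a first-order formula string."""
    return {(name, args.count(",") + 1)
            for name, args in re.findall(r"(\w+)\(([^()]*)\)", formula)}

def select(axioms: list[str], conjecture: str, max_depth: int) -> list[str]:
    preds = {a: predicates(a) for a in axioms}
    frontier = deque([(conjecture, 0)])
    reached = set(predicates(conjecture))  # predicates connected so far
    selected, seen = [], set()
    while frontier:
        _, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for ax in axioms:
            if ax not in seen and preds[ax] & reached:
                seen.add(ax)
                selected.append(ax)
                reached |= preds[ax]
                frontier.append((ax, depth + 1))
    return selected

axioms = ["human(X) => mortal(X)", "human(socrates)", "green(grass)"]
print(select(axioms, "mortal(socrates)", max_depth=2))
# -> ['human(X) => mortal(X)', 'human(socrates)'] (green(grass) is irrelevant)
```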
