  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Efficient Query Expansion

Billerbeck, Bodo, bodob@cs.rmit.edu.au January 2006 (has links)
Hundreds of millions of users each day search the web and other repositories to meet their information needs. However, queries can fail to find documents due to a mismatch in terminology. Query expansion seeks to address this problem by automatically adding terms from highly ranked documents to the query. While query expansion has been shown to be effective at improving query performance, the gain in effectiveness comes at a cost: expansion is slow and resource-intensive. Current techniques for query expansion use fixed values for key parameters, determined by tuning on test collections. We show that these parameters may not be generally applicable, and, more significantly, that the assumption that the same parameter settings can be used for all queries is invalid. Using detailed experiments, we demonstrate that new methods for choosing parameters must be found. In conventional approaches to query expansion, the additional terms are selected from highly ranked documents returned from an initial retrieval run. We demonstrate a new method of obtaining expansion terms, based on past user queries that are associated with documents in the collection. The most effective query expansion methods rely on costly retrieval and processing of feedback documents. We explore alternative methods for reducing query-evaluation costs, and propose a new method based on keeping a brief summary of each document in memory. This method allows query expansion to proceed three times faster than previously, while approximating the effectiveness of standard expansion. We investigate the use of document expansion, in which documents are augmented with related terms extracted from the corpus during indexing, as an alternative to query expansion. The overheads at query time are small. We propose and explore a range of corpus-based document expansion techniques and compare them to corpus-based query expansion on TREC data. These experiments show that document expansion delivers at best limited benefits, while query expansion, including standard techniques and efficient approaches described in recent work, usually delivers good gains. We conclude that document expansion is unpromising, but it is likely that the efficiency of query expansion can be further improved.
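The expansion loop the abstract describes — take the top-ranked documents from an initial run, select frequent terms, and append them to the query — can be sketched as follows. This is an illustrative sketch, not Billerbeck's implementation: production systems use weighted term-selection formulas (e.g. Rocchio) rather than raw counts, and the fixed defaults for `top_docs` and `extra_terms` here are exactly the kind of one-size-fits-all parameter settings the thesis argues against.

```python
from collections import Counter

def expand_query(query_terms, ranked_docs, top_docs=10, extra_terms=5):
    """Naive pseudo-relevance feedback: add the most frequent terms
    from the top-ranked documents to the original query.

    ranked_docs: documents from an initial retrieval run, best first,
    each represented as a list of terms.
    """
    counts = Counter()
    for doc in ranked_docs[:top_docs]:
        # Count candidate terms, skipping terms already in the query.
        counts.update(t for t in doc if t not in query_terms)
    expansion = [term for term, _ in counts.most_common(extra_terms)]
    return list(query_terms) + expansion
```

Note that scanning the full text of feedback documents is the expensive step; the summary-based method proposed in the thesis replaces it with a brief in-memory summary per document.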
62

Model Selection for Solving Kinematics Problems

Goh, Choon P. 01 September 1990 (has links)
There has been much interest in the area of model-based reasoning within the Artificial Intelligence community, particularly in its application to diagnosis and troubleshooting. The core issue in this thesis, simply put, is: model-based reasoning is fine, but whence the model? Where do the models come from? How do we know we have the right models? What does the right model mean anyway? Our work has three major components. The first component deals with how we determine whether a piece of information is relevant to solving a problem. We have three ways of determining relevance: derivational, situational, and an order-of-magnitude reasoning process. The second component deals with defining and building models for solving problems. We identify these models, determine what we need to know about them, and, importantly, determine when they are appropriate. Currently, the system has a collection of four basic models and two hybrid models. This collection of models has been successfully tested on a set of fifteen simple kinematics problems. The third major component of our work deals with how the models are selected.
63

An Evaluation of Existing Light Stemming Algorithms for Arabic Keyword Searches

Brittany E. Rogerson 17 November 2008 (has links)
The field of Information Retrieval recognizes the importance of stemming in improving retrieval effectiveness. This same tool, when applied to searches conducted in the Arabic language, increases the relevancy of documents returned and expands searches to encompass the general meaning of a word instead of the word itself. Since the Arabic language relies mainly on triconsonantal roots for verb forms and derives nouns by adding affixes, words with similar consonants are closely related in meaning. Stemming allows a search term to focus more on the meaning of a term and closely related terms and less on specific character matches. This paper discusses the strengths of light stemming, the best techniques, and components for algorithmic affix-based stemmers used in keyword searching in the Arabic language.
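An affix-based light stemmer of the kind the paper evaluates strips a small, fixed set of prefixes (conjunctions and forms of the definite article) and suffixes, without attempting full root extraction. The sketch below is illustrative, in the spirit of published Arabic light stemmers; the affix lists and the minimum-stem-length guard are assumptions for the example, not the inventory of any specific published algorithm.

```python
# Illustrative affix lists (not a specific published inventory).
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ية", "ه", "ة", "ي"]

def light_stem(word, min_stem=3):
    """Light stemming: strip at most one prefix and one suffix,
    keeping at least min_stem characters so short words survive intact."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            word = word[: -len(s)]
            break
    return word
```

For example, the definite article is removed from الكتاب ("the book") to yield كتاب, so searches for the definite and indefinite forms match the same stem.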
64

The Influence of Authority: Fame Is Not a Persuasive Factor

Illi, Peter January 2013 (has links)
Iconic authorities are persons ascribed such significance that they have come to symbolize parts or aspects of societal life, or epochs in history, and who can be said to have had a considerable influence on society and culture. This study examined whether iconic authorities influence us more, or differently, than other sources. In an experiment, university students divided into four groups read a text under high elaboration about a psychological theory, in which the variables iconic authority and personal relevance were manipulated. Participants were then asked to rate how credible they considered the theory to be. The study posed two hypotheses: that high iconic authority would increase rated credibility, and that personal relevance would decrease it. The results supported neither hypothesis, but an interaction effect showed that, under low personal relevance, the text was perceived as more credible by participants exposed to low iconic authority than by those exposed to high iconic authority. It is suggested that this interaction is due to a backlash effect.
65

The Clinical Relevance of Paediatric Access Targets for Elective Dental Treatment Under General Anaesthesia

Chung, Sonia 06 April 2010 (has links)
The purpose of this study was to evaluate the clinical relevance of access targets for elective dental general anaesthesia (GA) by assessing incremental changes in dental disease burden over wait times at SickKids. A retrospective review of dental records was completed for 378 children who were prioritized by their dental and medical status. A scale was developed to measure cumulative dental disease burden over time. Statistically significant correlations were identified between cumulative disease burden and wait times for the priority IV (p = 0.004), entire sample (p < 0.003), DOSDCADCA (p = 0.005), comorbid (p = 0.036), healthy (p = 0.0002), female (p = 0.014), and male (p = 0.008) groups. The mean cumulative disease burden was not different between matched healthy and comorbid groups (p = 0.38). A trend of increasing dental disease burden for children with longer wait times for dental GA was found, but it was not clinically significant.
67

Essays on the value relevance of earnings measures

Mbagwu, Chima I 11 September 2007
This dissertation presents two studies on the value relevance and perceived credibility of pro forma earnings. In the first study, I investigate the value relevance of pro forma earnings relative to two alternative earnings measures: GAAP earnings and analysts' actual earnings. Value relevance is assessed using two approaches. The first approach examines whether the market's expectations (contemporaneous returns or price) are best reflected in future pro forma earnings, future GAAP earnings, or future analysts' actual earnings. The second approach is to determine, through pair-wise comparisons of the three earnings measures (e.g., pro forma earnings versus GAAP earnings), which has the greatest explanatory power (comparing adjusted R2s) in explaining price and returns. Across approaches and models, each of the three earnings measures tends to be value relevant. However, pro forma earnings are consistently the most value relevant, followed by analysts' actuals, with GAAP earnings having the least value relevance. That is, pro forma earnings have the greatest information content. This finding is consistent with managers, in aggregate, using pro forma earnings to inform rather than to manage expectations or to mislead.

In the second study, I examine the impact of credibility attributes (board characteristics, auditor quality, and overall information quality) on the value relevance of pro forma earnings. It is hypothesized that the credibility attributes will have a statistically significant impact on investors' reaction to pro forma earnings. Consistent with the predictions, I find that stronger board characteristics, higher auditor quality, and higher overall information quality are positively associated with a stronger market reaction to the pro forma announcement. That is, credibility attributes increase the value relevance of pro forma earnings.
This finding is consistent with some firms providing pro forma earnings that are perceived to be credible and others providing pro formas that are perceived as less credible and possibly provided to manage expectations or to mislead.
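The pair-wise comparison in the first study amounts to regressing price (or returns) on each earnings measure and comparing adjusted R2s. Below is a minimal single-regressor sketch of that comparison; it does not reproduce the dissertation's actual model specifications or controls.

```python
def adjusted_r2(y, x):
    """Simple OLS of y on one regressor x (with intercept);
    returns adjusted R^2, which penalizes the fit for model size."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    r2 = (sxy * sxy) / (sxx * syy)  # R^2 for a single regressor
    k = 1  # number of regressors
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

Running this once per earnings measure against the same price series, the measure with the highest adjusted R2 is the most value relevant in this sense.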
68

Thinking the Impossible: Counterfactual Conditionals, Impossible Cases, and Thought Experiments

Dohutia, Poonam 11 1900 (has links)
In this thesis I present an account of the formal semantics of counterfactuals that systematically deals with impossible antecedents. This, in turn, allows us to gain a richer understanding of what makes certain thought experiments informative in spite of the impossibility of the situations they consider. In Chapter II, I argue that there are major shortcomings in the leading theories of counterfactuals. The leading theories of counterfactuals (based on classical two-valued logic) are unable to account for counterfactuals with impossible antecedents. In such accounts, everything and anything follows from an impossible antecedent. In Chapter III, I examine some crucial notions such as conceivability, imaginability, and possibility. Herein I argue that there is a distinction to be made between the notions of conceiving and imagining. Conceivability, it turns out, is a sufficient condition for being a case. Recent literature on the semantics for relevance logic has made some use of the notion of a “state”, which differs from a world in that contradictions are true in some states; what is not done in that literature is to clarify how the notion of a state differs from an arbitrary collection of claims. I use the notion of a case as a (modal) tool to analyze counterfactuals with impossible antecedents, one for which, unlike the notion of states, it is clear why arbitrary collections of claims do not count. In Chapter IV, I propose a new account of counterfactuals. This involves modifying existing possible worlds accounts of counterfactuals by replacing possible worlds by the “cases” identified in Chapter III. This theory distinguishes counterfactuals such as “If Dave squared the circle, he would be more famous than Gödel”, which seems true, from others like “If Dave squared the circle, the sun would explode”, which seems false.
In Chapter V I discuss one of the main payoffs of having an account of counterfactuals that deals systematically with counterfactuals with impossible antecedents. To apply the new account of counterfactuals to thought experiments, we first have to transform the thought experiment in question into a series of counterfactuals; I show how this is to be done in Chapter V. There are two advantages of such an account when we apply it to thought experiments. First, for thought experiments with impossible scenarios, our new account can explain how such thought experiments can still be informative. Secondly, for thought experiments like the Chinese Room, where it is not clear whether there is a subtle impossibility in the scenario or not, this new account, with its continuous treatment of possible and impossible cases, makes clear why the debate about such thought experiments looks the way it does. The crucial question is not whether there is such an impossibility, but what is the "nearest" situation in which there is a Chinese Room (whether it is impossible or not) and what we would say there (about the intentionality of the room). On traditional accounts, it becomes paramount to deal with the possibility question, because if it is an impossible scenario the lessons we learn are very different from the ones we learn if it is possible. There are no available theories of thought experiments that account for thought experiments with impossible or incomplete scenarios. With the new account of counterfactuals, and by applying it to thought experiments, we overcome this difficulty.
69

On the dynamics of active documents for distributed data management

Bourhis, Pierre 11 February 2011 (has links) (PDF)
One of the major issues faced by Web applications is the management of evolving data. In this thesis, we consider this problem and in particular the evolution of active documents. Active documents are a formalism describing the evolution of XML documents by activating Web service calls included in the document. They have already been used in the context of the management of distributed data \cite{axml}. The main contributions of this thesis are theoretical studies motivated by two systems, for managing stream applications and workflow applications respectively. In the first contribution, we study the problem of view maintenance over active documents. The results served as the basis for an implementation of stream processors based on active documents, called Axlog widgets. In the second, we see active documents as the core of data-centric workflows and consider various ways of expressing constraints on the evolution of documents. The implementation, called Axart, validated the approach of a data-centric workflow system based on active documents. The hidden Web (also known as the deep or invisible Web), that is, the part of the Web not directly accessible through hyperlinks but through HTML forms or Web services, is of great value but difficult to exploit. We discuss a process for the fully automatic discovery, syntactic and semantic analysis, and querying of hidden-Web services. We first propose a general architecture that relies on a semi-structured warehouse of imprecise (probabilistic) content. We provide a detailed complexity analysis of the underlying probabilistic tree model. We describe how we can use a combination of heuristics and probing to understand the structure of an HTML form. We present an original use of a supervised machine-learning method, namely conditional random fields, in an unsupervised manner, on an automatic, imperfect, and imprecise annotation based on domain knowledge, in order to extract relevant information from HTML result pages.
So as to obtain semantic relations between the inputs and outputs of a hidden-Web service, we investigate the complexity of deriving a schema mapping between database instances, relying solely on the presence of constants in the two instances. We finally describe a model for the semantic representation and intensional indexing of hidden-Web sources, and discuss how to process a user's high-level query using such descriptions.
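The core idea of an active document — an XML document whose embedded service-call nodes are replaced by the results of invoking those services — can be sketched as follows. The element names (`call`, `result`) and the `services` registry are illustrative assumptions for this sketch, not the actual AXML vocabulary.

```python
import xml.etree.ElementTree as ET

def activate(doc_xml, services):
    """Activate every <call service="..."/> node in an XML document by
    invoking the named service and splicing its result into the tree."""
    root = ET.fromstring(doc_xml)
    # Snapshot the tree before mutating it.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == "call":
                result = services[child.get("service")]()
                new = ET.Element("result")
                new.text = result
                parent.remove(child)
                parent.append(new)
    return ET.tostring(root, encoding="unicode")
```

A document can thus interleave static content with calls that are materialized lazily, which is what makes view maintenance over such documents non-trivial: the document's content changes each time a call is activated.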
