781

Measuring electronic information systems: the use of the information behaviour model

Cheng, Grace Y. T., n/a January 2002 (has links)
This study focused on measuring the importance and contribution of information obtained from the library, particularly electronic information services (EIS), to success in solving clinical problems in hospitals. Three research questions with three main hypotheses were advanced and tested on clinicians in 44 hospitals in Hong Kong. The findings were tested against the framework of Wilson's (1996) general information behaviour model, from which a new extended model for clinicians was built. Measures of EIS were then derived from the new model. The research was broadly divided into a series of five studies in two stages: nominal group, quantitative survey, and interviews in the first stage, and a randomized controlled study as well as analyses of statistical data and computer transaction logs in the second stage. The key results in Stage I led to the studies in Stage II. The randomized controlled study in Stage II attempted to reduce the barriers identified in the information environment, with a view to testing the results of an educational intervention and confirming that the hypotheses held given reduced barriers and the presence of enabling conditions. The effects of the interventions in this experimental study were validated and verified by statistical data and transaction logs.

Corroborative evidence from the two-stage studies supported the three main inter-connected hypotheses: success in problem-solving is related to the information sources used; user satisfaction is related to success in problem-solving; and EIS use is an indicator of user satisfaction. EIS use is determined by a number of factors: the preference for EIS, the use of the library, skills and knowledge in searching, the profession of the user, and the characteristics of the work environment. Educational intervention was found to improve success in problem-solving; attitudes, skills and knowledge in searching; and the satisfaction with and use of EIS, and is an important enabling condition. The research rejected the part of the first hypothesis positing that success in problem-solving is related to the clinical question posed, and suggests that further research is needed in this area. The study supported the extension of the general model to clinical information needs and behaviours and found new relationships. It identified an additional determinant of EIS satisfaction: satisfaction with the information obtained. EIS satisfaction would not be changed by educational intervention alone if the information obtained was not satisfactory; on the other hand, education can improve EIS satisfaction regardless of whether the problem has been solved. Of critical importance is the time factor in determining the use (or non-use) of EIS. There is new evidence that the user's awareness of an answer in the literature is a determining factor for active searching. Borrowing the concept of opportunity cost from economic theory, the researcher relates it to differing levels of self-efficacy and postulates a model for planning EIS and related library services. From the new extended model of information behaviour, sixteen main measures or indicators were tested on a proposed framework for developing performance measures to diagnose information behaviours and predict EIS use, satisfaction and success in problem-solving. In measuring EIS, the researcher suggests a holistic approach that assesses traditional (non-electronic) library and information services as part of the information behaviours of clinicians.
The study pointed to the imbalance between users' self-efficacy and their actual searching skills and knowledge, and to the implications of this for library practice. Qualitative aspects that require further research on measurement were suggested. The study has important ramifications for theory and practice for the information professional. The new extended model of information behaviour for clinicians establishes deterministic relationships that help explain why an information search is pursued actively, continuously, or not at all. Measures derived from these relationships can help diagnose and predict information behaviours. The study highlights the flexibility and utility of the general model of information behaviour. Also, this is the first time that such a methodological approach has been adopted to derive EIS measures. The application of the randomized-controlled-study methodology in information science was proven feasible and yielded definitive results. The researcher proposes that further development of the information behaviour model should incorporate the knowledge-generation process within an organization.
782

A prototype interactive identification tool to fragmentary wood from eastern central Australia, and its application to Aboriginal Australian ethnographic artefacts

Barker, Jennifer Anne January 2005 (has links)
Wood identification can serve a role wherever wood has been separated from other diagnostic plant structures as a result of cultural or taphonomic processing. In disciplines that study material culture, such as museum anthropology and art history, it may serve to augment and verify existing knowledge, whilst in fields like palaeobotany, zoology and archaeology, wood identification may test existing paradigms of ecology and human behaviour. However, resources to aid wood identification, particularly of non-commercial species, are sorely lacking and, in Australia, there are only a handful of xylotomists, most of whom are attached to forestry organisations. In addition, wood fragments are commonly the limit of material available for identification. They may be the physical remains of a wider matrix - as may often appear in biological, archaeological, palaeobotanical or forensic contexts - or a splinter removed from an ethnographic artefact or antique. This research involved the development of an updateable, interactive, computer-based identification tool to the wood of 58 arid Australian species. The identification tool comprises a series of keys and sub-keys to reflect the taxonomic hierarchies and the difficulty of separating wood beyond family or genus. The central Sub-key to Arid Australian Hardwood Taxa comprises 20 angiosperm taxa, which include families and single representatives of genera. The treated taxa in this key are defined by 57 separate characters, split into sets of like characters, including four sets based upon method of examination: anatomical (scanning electron microscopy), anatomical (light microscopy), chemical observations and physical properties. These character sets follow a logical progression, in recognition of the variability in available sample size and of the fact that non-invasive techniques are often desirable, if not essential. The use of character sets also reflects that this variability in sample size can affect the range of available characters and the available method of identification, whose diagnostic potential tends to increase with the complexity of the identification method. As part of the research, the identification tool is tested against wood fragments removed from several Aboriginal Australian artefacts from central Australia, and case studies are provided. / Thesis (Ph.D.)--School of Earth and Environmental Sciences, 2005.
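The character sets described above behave like a multi-access (interactive) key: the user scores whatever characters the fragment allows, and taxa inconsistent with any observation are eliminated. A minimal sketch of that filtering logic, with hypothetical taxa and character states rather than the thesis's actual 57-character data:

```python
# Minimal sketch of a multi-access (interactive) identification key.
# Taxa, characters and states are hypothetical placeholders, not the
# actual data behind the thesis's keys.

# Each taxon maps character -> set of acceptable states (a set,
# because wood characters can vary within a taxon).
TAXA = {
    "Acacia sp.":     {"vessel_grouping": {"solitary", "radial"},
                       "rays": {"uniseriate"}},
    "Eucalyptus sp.": {"vessel_grouping": {"solitary"},
                       "rays": {"uniseriate", "multiseriate"}},
    "Hakea sp.":      {"vessel_grouping": {"radial"},
                       "rays": {"multiseriate"}},
}

def identify(observations):
    """Return taxa consistent with every observed character state.

    Characters not scored for a taxon are treated as unknown and do
    not eliminate it -- mirroring how interactive keys cope with
    fragments on which some characters cannot be examined.
    """
    return [taxon for taxon, states in TAXA.items()
            if all(obs in states.get(char, {obs})   # unknown: keep taxon
                   for char, obs in observations.items())]

# A fragment showing radial vessel grouping and multiseriate rays:
print(identify({"vessel_grouping": "radial", "rays": "multiseriate"}))
# -> ['Hakea sp.']
```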
783

Managing dynamic XML data

Fisher, Damien Kaine, School of Computer Science & Engineering, UNSW January 2007 (has links)
Recent years have seen a surge in the popularity of XML, a markup language for representing semi-structured data. Some of this popularity can be attributed to the success that the semi-structured data model has had in environments where the relational data model has been insufficiently expressive. Concomitant with XML's growing popularity, the world of database research has seen the rebirth of interest in tree-structured, hierarchical database systems. This thesis analyzes several problems that arise when constructing XML data management systems, particularly in the case where such systems must handle dynamic content. In the first chapter, we consider the problem of incremental schema validation, which arises in almost any XML database system. We build upon previous work by finding several classes of schemas for which very efficient algorithms exist. We also develop an algorithm that works for any schema, and prove that it is optimal. In the second chapter, we turn to the problem of improving query evaluation times on extremely large database systems. In particular, we boost the performance of structural and twig joins, two fundamental XML query evaluation techniques, through the use of an adaptive index. This index tunes itself to the query workload, providing a 20-80% boost in speed for these join operators. The adaptive nature of the index also allows updates to the database to be easily tracked. While accurate selectivity estimation is a critical problem in any database system, due to its importance in choosing optimal query plans, there has been very little work on selectivity estimation in the presence of updates. We ask whether it is possible to design a structure for selectivity estimation in XML databases that is updateable and can return results with theoretically sound error guarantees. Through a combination of lower and upper bounds, we give strong evidence suggesting that this is unlikely in practice. Motivated by these results, we then develop a heuristic selectivity estimation structure for XML databases. This structure is the first such synopsis that can handle all aspects of core XPath and is also updateable. Our experimental results demonstrate the efficacy of the approach.
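For orientation, a structural join pairs region-encoded nodes by containment: with each node labelled (start, end), node A is an ancestor of node D exactly when A.start < D.start and D.end < A.end. The sketch below shows the classic stack-based merge formulation of this operator; it is a simplified illustration of what the adaptive index accelerates, not the thesis's index itself.

```python
# Hedged sketch of a structural (ancestor/descendant) join over
# region-encoded XML nodes. Both inputs are sorted by start position;
# the stack holds ancestors whose intervals are still "open".

def structural_join(ancestors, descendants):
    """Return all (ancestor, descendant) pairs in one merge pass."""
    results, stack = [], []
    a = 0
    for d_start, d_end in descendants:
        # Push every ancestor that starts before this descendant.
        while a < len(ancestors) and ancestors[a][0] < d_start:
            while stack and stack[-1][1] < ancestors[a][0]:
                stack.pop()                 # that ancestor has closed
            stack.append(ancestors[a])
            a += 1
        # Drop ancestors that closed before this descendant starts.
        while stack and stack[-1][1] < d_start:
            stack.pop()
        # In a tree encoding, every interval still open contains it.
        results.extend((anc, (d_start, d_end)) for anc in stack)
    return results

# <sec>(1,10) nests <sec>(3,6); figures sit at (4,5) and (8,9):
sections, figures = [(1, 10), (3, 6)], [(4, 5), (8, 9)]
print(structural_join(sections, figures))
# -> [((1, 10), (4, 5)), ((3, 6), (4, 5)), ((1, 10), (8, 9))]
```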
784

Gestion des données efficace en pair-à-pair / Efficient data management in peer-to-peer systems

Zoupanos, Spyros 09 December 2009 (has links) (PDF)
The development of the Internet has led to a great increase in the information available to users. These users want to express their needs simply, through queries, and they want those queries to be evaluated without having to worry about where the data is located or how the queries are evaluated. The work presented in this thesis contributes to the goal of declarative and efficient management of Web content, and consists of two parts. In the first part, we present OptimAX, an optimizer for the Active XML language that can rewrite a given Active XML document into an equivalent document whose evaluation will be more efficient. OptimAX contributes to solving the distributed query optimization problem in the Active XML setting, and we present two case studies. In the second part, we propose a solution to the optimization problem from a different point of view: we optimize queries using a set of pre-computed queries (materialized views). We have developed a peer-to-peer platform, called ViP2P (views in peer-to-peer), that lets users publish XML documents and specify views over these documents using a tree-pattern language. When a user poses a query, the system tries to find views that can be combined to build a rewriting equivalent to the query. We ran experiments using machines in several laboratories across France and showed that our platform scales up to several GB of data.
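To give a flavour of the view-based rewriting, here is a hedged sketch of one supporting piece: a directory in which peers publish their materialized view definitions keyed by the element labels each view mentions, so that a peer holding a query can retrieve candidate views from the query's own labels. The DHT is mocked as a local dict, the view names and labels are invented, and the label-subset test is only a necessary first filter before genuine tree-pattern embedding checks.

```python
# Hedged sketch of a distributed view directory: peers publish view
# definitions keyed by the element labels they mention, so a peer can
# locate candidate views from its query's labels. The DHT is mocked
# as a local dict; view names and labels are invented.

from collections import defaultdict

class ViewDirectory:
    def __init__(self):
        self.dht = defaultdict(set)   # label -> names of views using it
        self.defs = {}                # view name -> its full label set

    def publish(self, view_name, labels):
        self.defs[view_name] = set(labels)
        for label in labels:
            self.dht[label].add(view_name)

    def candidates(self, query_labels):
        """Views mentioning only labels that occur in the query -- a
        necessary (not sufficient) condition for contributing to a
        rewriting; real tree-pattern embedding tests would follow."""
        qlabels = set(query_labels)
        hits = set().union(*(self.dht[l] for l in qlabels))
        return {v for v in hits if self.defs[v] <= qlabels}

directory = ViewDirectory()
directory.publish("v1", ["article", "title"])
directory.publish("v2", ["article", "author", "email"])
print(directory.candidates(["article", "title", "year"]))   # -> {'v1'}
```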
785

Extension et interrogation de résumés de flux de données / Extension and querying of data stream summaries

Gabsi, Nesrine 31 May 2011 (has links) (PDF)
In recent years, a new environment has developed in which data must be collected and processed instantly upon arrival. Managing this volume requires a new model and new information-processing techniques: data stream processing. Data streams are continuous, evolving and voluminous, and cannot be stored in their entirety as persistent data. A substantial body of research has addressed this problem, giving rise to data stream management systems (DSMS). These systems support continuous queries that are evaluated incrementally over a stream or over windows (finite subsets of the stream). However, in some applications, new needs may arise after the data has passed. In that case the system cannot answer the queries posed, because all data not subject to any processing is definitively lost. It is therefore necessary to keep a summary of the data stream. Many summarization algorithms have been developed; the choice of a particular summarization method depends on the nature of the data to be processed and the problem to be solved. In this thesis, we are interested first in building a general-purpose summary that strikes a compromise between the speed at which the summary is built and the quality of the summary retained. We present a new summarization approach designed to perform well on queries over data from the distant past. We then focus on exploiting and accessing the stream events preserved in these summaries. Our goal is to integrate general-purpose summary structures into the architecture of existing DSMS so as to extend the range of possible queries: evaluating queries that involve the distant past of a stream (i.e., data expired from the DSMS's memory) would then be possible in the same way as queries over its recent past. We present two approaches to this end, which differ in the role the summarization module plays during query evaluation.
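One standard way to realize the compromise between construction speed and retained quality described above is a tilted time window: the recent past is kept at full resolution, and older portions are degraded progressively by sampling. The sketch below illustrates that general idea only; the window sizes, sampling ratio and number of levels are arbitrary assumptions, not the thesis's algorithm.

```python
# Hedged sketch of a tilted-time-window stream summary: recent items
# at full resolution, older items progressively sampled away. Window
# sizes, fan-out and sampling ratio are arbitrary assumptions.

import random

class TiltedSummary:
    def __init__(self, batch_size=100, keep_ratio=0.25, levels=3):
        self.batch_size = batch_size
        self.keep_ratio = keep_ratio
        self.levels = [[] for _ in range(levels)]  # level 0 = newest
        self.recent = []                           # full-resolution buffer

    def append(self, item):
        self.recent.append(item)
        if len(self.recent) == self.batch_size:
            self._demote(0, self.recent)
            self.recent = []

    def _demote(self, level, batch):
        if level == len(self.levels):
            return                    # oldest data leaves the summary
        self.levels[level].append(batch)
        if len(self.levels[level]) > 4:            # level full:
            oldest = self.levels[level].pop(0)     # sample the oldest
            kept = random.sample(oldest, int(len(oldest) * self.keep_ratio))
            self._demote(level + 1, kept)          # and push it down

    def query_past(self):
        """All retained elements, oldest first, at decreasing fidelity."""
        out = []
        for level in reversed(self.levels):
            for batch in level:
                out.extend(batch)
        return out + self.recent

summary = TiltedSummary()
for i in range(1000):
    summary.append(i)
print(len(summary.query_past()))   # well under the 1000 items seen
```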
786

An XML-based Database of Molecular Pathways / En XML-baserad databas för molekylära reaktioner

Hall, David January 2005 (has links)
Research on protein-protein interactions produces vast quantities of data, and there exist a large number of databases holding data from this research. Many of these databases offer the data for download on the web in a number of different formats, many of them XML-based.

With the arrival of these XML-based formats, and especially standardized formats such as PSI-MI, SBML and BioPAX, there is a need for searching in data represented in XML. We wanted to investigate the capabilities of XML query tools when it comes to searching this data. Due to the large datasets, we concentrated on native XML database systems, which in addition to searching XML data also offer storage and indexing specially suited for XML documents.

A number of queries were tested on data exported from the IntAct and Reactome databases using the XQuery language. Both simple and advanced queries were performed. The simpler queries included listing information on a specified protein and counting the number of reactions.

One central issue with protein-protein interactions is finding pathways, i.e. series of interconnected chemical reactions between proteins. This problem involves graph searches, and since we suspected that the complex queries it required would be slow, we also developed a C++ program using a graph toolkit.

The simpler queries were performed relatively fast. Pathway searches in the native XML databases took a long time even for short searches, while the C++ program achieved much faster pathway searches.
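The pathway search that motivated the C++ program is, at heart, a shortest-path search over the interaction graph. A minimal sketch of that search, using breadth-first traversal over a small hypothetical interaction list rather than real IntAct or Reactome data:

```python
# Minimal sketch of a pathway search: breadth-first search for a
# chain of interactions linking two proteins. The interaction list
# is hypothetical, not data exported from IntAct or Reactome.

from collections import deque

INTERACTIONS = [("P53", "MDM2"), ("MDM2", "UBC9"),
                ("P53", "BRCA1"), ("BRCA1", "RAD51")]

def find_pathway(start, goal):
    """Return one shortest interaction chain from start to goal."""
    graph = {}
    for a, b in INTERACTIONS:          # undirected interaction graph
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                        # no connecting pathway

print(find_pathway("P53", "RAD51"))    # -> ['P53', 'BRCA1', 'RAD51']
```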
787

Design av ett objektorienterat datalager / Design of an object oriented data layer

Wikström, Mårten January 2006 (has links)
Systems built on an underlying database need an abstraction layer between the database and the application. This is called the system's data layer.

It is not unusual for a large part of programmers' time to be spent writing code that handles the data layer's peculiarities and transforms data between the application and the data layer.

In an object-oriented data layer, the system's domain model can be integrated into the data layer, making it considerably simpler and more efficient to work with. An object-oriented data layer also lets the application navigate between the objects in the database as if the whole object graph were available in the application's main memory. How information is fetched, when it is fetched, and exactly what information is fetched from the database is transparent to the application.

It is likewise transparent when updates made to objects in the application's main memory reach the underlying database. The data layer guarantees that every object that has been modified within the course of a transaction, and that is reachable by navigation from some object in the database, will be in the database with the correct state when the transaction completes.

An object-oriented data layer thus offers a stricter form of abstraction than a traditional data layer does.

As part of this thesis project I have developed a prototype of an object-oriented data layer, and in this report I present: some general concepts concerning data layers in general and object-oriented data layers in particular; how these concepts can be designed; and a brief overview of the prototype.
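The commit-time guarantee described above is commonly implemented with a unit-of-work: the data layer snapshots each object's state when it is loaded and, at commit, writes back only the objects whose state has changed. A minimal sketch of that pattern, with the database mocked as a dict; this illustrates the general technique, not the prototype's actual design:

```python
# Hedged sketch of a unit-of-work with snapshot-based dirty checking.
# The "database" is a dict mapping id -> state; this is an
# illustration of the general pattern, not the prototype's design.

class UnitOfWork:
    def __init__(self, store):
        self.store = store          # mock database: id -> state dict
        self.identity_map = {}      # id -> the one live object per row
        self.clean_state = {}       # id -> snapshot taken at load time

    def load(self, obj_id):
        if obj_id not in self.identity_map:
            obj = dict(self.store[obj_id])
            self.identity_map[obj_id] = obj
            self.clean_state[obj_id] = dict(obj)   # remember clean state
        return self.identity_map[obj_id]

    def commit(self):
        """Write back exactly the objects modified in this transaction."""
        for obj_id, obj in self.identity_map.items():
            if obj != self.clean_state[obj_id]:    # dirty check
                self.store[obj_id] = dict(obj)
                self.clean_state[obj_id] = dict(obj)

db = {1: {"name": "Ada"}, 2: {"name": "Alan"}}
uow = UnitOfWork(db)
person = uow.load(1)
person["name"] = "Ada Lovelace"     # in-memory change only
uow.commit()                        # only object 1 reaches the database
print(db[1])                        # -> {'name': 'Ada Lovelace'}
```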
788

Boosting Image Database Retrieval

Tieu, Kinh, Viola, Paul 10 September 1999 (has links)
We present an approach for image database retrieval using a very large number of highly selective features and simple on-line learning. Our approach is predicated on the assumption that each image is generated by a sparse set of visual "causes" and that images which are visually similar share causes. We propose a mechanism for generating a large number of complex features which capture some aspects of this causal structure. Boosting is used to learn simple and efficient classifiers in this complex feature space. Finally, we describe a practical implementation of our retrieval system on a database of 3000 images.
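As a rough sketch of the boosting step: each round of AdaBoost selects the single feature whose decision stump incurs the lowest weighted error, which is how a very large feature pool is whittled down to a few highly selective ones. The variant below (binary features, fixed stump polarity) and the toy data are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of AdaBoost as a feature selector: each round picks
# the binary feature whose stump h_j(x) = +1 if x[j] else -1 has the
# lowest weighted error, then re-weights the training examples.

import math

def adaboost(X, y, rounds=3):
    """X: list of binary feature vectors; y: labels in {-1, +1}."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n                       # example weights
    ensemble = []                           # (feature index, alpha)
    for _ in range(rounds):
        # Weighted error of each feature's stump.
        errs = [sum(wi for wi, xi, yi in zip(w, X, y)
                    if (1 if xi[j] else -1) != yi)
                for j in range(d)]
        j = min(range(d), key=errs.__getitem__)
        err = max(errs[j], 1e-10)           # avoid log blow-up on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((j, alpha))
        # Re-weight: examples the chosen stump gets wrong gain weight.
        w = [wi * math.exp(-alpha * yi * (1 if xi[j] else -1))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * (1 if x[j] else -1) for j, alpha in ensemble)
    return 1 if score >= 0 else -1

# Toy data: the label simply follows feature 0.
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 1, -1, -1]
print(predict(adaboost(X, y), [1, 0]))      # -> 1
```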
789

Conducting Online Research: Undergraduate Preferences of Sources

Rosalyn Metz, April 2006 (has links)
When students write research papers, they draw on a variety of sources, ranging from web pages to research articles. The purpose of this study was to determine whether undergraduate students would choose scholarly or non-scholarly sources when presented with both types in a set of search results. Twenty Duke University students were recruited for the study. They were given a research topic and asked to perform a search; both the search results and the interface were fabricated by the researcher in order to control the experimental environment. The students were asked to rate the sources found in the results, choose four sources to use for their research scenario, and finally explain the reasoning behind their choices. The findings showed that the students in this study were more likely to choose scholarly sources over non-scholarly sources and to give the scholarly sources higher ratings.
790

Where Google Scholar Stands on Art: An Evaluation of Content Coverage in Online Databases

Hannah M. Noll, April 2008 (has links)
This study evaluates the content coverage of Google Scholar and three commercial databases (Arts & Humanities Citation Index, Bibliography of the History of Art and Art Full Text/Art Index Retrospective) on the subject of art history. Each database is tested using a bibliography method and evaluated against Péter Jacsó's scope criteria for online databases. Of the 472 articles tested, Google Scholar indexed the smallest number of citations (35%), outshone by the Arts & Humanities Citation Index, which covered 73% of the test set. The content evaluation also examines specific aspects of coverage, finding that, in comparison to the other databases, Google Scholar provides consistent coverage over the time range tested (1975-2008) and considerable access to article abstracts (56%). Google Scholar failed, however, to fully index the most frequently cited art periodical in the test set, Artforum International. Finally, Google Scholar's total citation count is inflated by a significant percentage (23%) of articles with duplicate, triplicate or multiple versions of the same record.
