11

Discriminating Meta-Search: A Framework for Evaluation

Chignell, Mark, Gwizdka, Jacek, Bodner, Richard January 1999
DOI: 10.1016/S0306-4573(98)00065-X / There was a proliferation of electronic information sources and search engines in the 1990s. Many of these information sources became available through the ubiquitous interface of the Web browser. Diverse information sources became accessible to information professionals and casual end users alike. Much of the information was also hyperlinked, so that information could be explored by browsing as well as searching. While vast amounts of information were now just a few keystrokes and mouse clicks away, as the choices multiplied, so did the complexity of choosing where and how to look for electronic information. Much of the complexity in information exploration at the turn of the twenty-first century arose because there was no common cataloguing and control system across the various electronic information sources. In addition, the many search engines available differed widely in terms of their domain coverage, query methods, and efficiency. Meta-search engines were developed to improve search performance by querying multiple search engines at once. In principle, meta-search engines could greatly simplify the search for electronic information by selecting a subset of first-level search engines and digital libraries to submit a query to, based on the characteristics of the user, the query/topic, and the search strategy. This selection would be guided by diagnostic knowledge about which of the first-level search engines work best under what circumstances. Programmatic research is required to develop this diagnostic knowledge about first-level search engine performance. This paper introduces an evaluative framework for this type of research and illustrates its use in two experiments. The experimental results obtained are used to characterize some properties of leading search engines (as of 1998). Significant interactions were observed between search engine and two other factors (time of day and Web domain). These findings supplement those of earlier studies, providing preliminary information about the complex relationship between search engine functionality and performance in different contexts. While the specific results obtained represent a time-dependent snapshot of search engine performance in 1998, the evaluative framework proposed should be generally applicable in the future.
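The select-then-merge idea described in this abstract can be made concrete with a minimal Python sketch. This is not the paper's implementation: the engine names, the diagnostic score table, and the selection rule below are invented placeholders for the kind of empirically derived knowledge the paper argues must be developed.

    # Hypothetical discriminating meta-search: choose the engines expected
    # to perform best for a query's context, then merge their results.

    # Invented diagnostic knowledge: expected effectiveness per topic.
    ENGINE_SCORES = {
        "engine_a": {"news": 0.9, "academic": 0.4},
        "engine_b": {"news": 0.5, "academic": 0.8},
        "engine_c": {"news": 0.6, "academic": 0.7},
    }

    def select_engines(topic, k=2):
        # Rank engines by their expected performance on this topic.
        ranked = sorted(ENGINE_SCORES,
                        key=lambda e: ENGINE_SCORES[e].get(topic, 0.0),
                        reverse=True)
        return ranked[:k]

    def meta_search(query, topic, engines):
        # Query only the selected engines (engines maps name -> callable
        # returning a ranked URL list), then interleave the result lists
        # round-robin, dropping duplicate URLs.
        results = [engines[name](query) for name in select_engines(topic)]
        merged, seen = [], set()
        for group in zip(*results):
            for url in group:
                if url not in seen:
                    seen.add(url)
                    merged.append(url)
        return merged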
12

Exploring the Academic Invisible Web

Lewandowski, Dirk, Mayr, Philipp 2006
Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimation of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimation is given. Research limitations/implications: The precision of our estimation is limited due to small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
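The abstract does not reproduce the authors' calculation, so the Python fragment below illustrates only a generic overlap-based (capture-recapture) size estimate of the kind informetric studies commonly use; the counts are invented and this is not necessarily the method of the paper.

    # Lincoln-Petersen capture-recapture estimate: two independent samples
    # of academic databases, with a known overlap, imply a total size.
    def lincoln_petersen(n1, n2, overlap):
        if overlap == 0:
            raise ValueError("no overlap: the estimate is unbounded")
        return n1 * n2 / overlap

    # Invented example: directory A lists 300 databases, directory B 400,
    # and 60 appear in both, suggesting roughly 2,000 databases overall.
    print(lincoln_petersen(300, 400, 60))  # 2000.0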
13

Web impact factors for Iranian Universities

Noruzi, Alireza 2005
This study investigates the Web Impact Factors (WIFs) for Iranian universities and introduces a new system of measurement. Counts of links to the web sites of Iranian universities were calculated from the output of the AltaVista search engine. The WIFs for Iranian universities were calculated by dividing link page counts by the number of pages found in AltaVista for each university at a given point in time. These WIFs were then compared to study the impact, visibility, and influence of Iranian university web sites. Overall, Iranian university web sites have a low inlink WIF. While specific features of sites may affect an institution's Web Impact Factor, there is a significant correlation between the proportion of English-language pages at an institution's site and the institution's backlink counts. This indicates that, for linguistic reasons, Iranian (Persian-language) web sites may not attract the attention they deserve from the World Wide Web. This raises the possibility that information may be ignored due to linguistic and geographic barriers, and this should be taken into account in the development of the global Web.
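The calculation described above reduces to a one-line formula, shown here as a small Python sketch; the counts are invented, whereas the paper's actual data came from AltaVista link queries.

    # Web Impact Factor: pages linking to a university site divided by
    # the number of pages indexed for that site at the same point in time.
    def web_impact_factor(inlink_page_count, site_page_count):
        return inlink_page_count / site_page_count

    # Hypothetical counts for one university web site:
    print(web_impact_factor(1200, 15000))  # 0.08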
14

The use of web metrics for online strategic decision-making

Weischedel, Birgit January 2005
"I know but one freedom, and that is the freedom of the mind" Antoine de Saint-Exupery. Web metrics offer significant potential for online businesses to incorporate high-quality, real-time information into their strategic marketing decision-making (SDM) process. This SDM process is affected by the firm�s strategic direction, which is critical for web businesses. A review of the widely researched strategy and SDM literature identified that managers use extensive information to support and improve strategic decisions and make informed decisions. Offline SDM processes might be appropriate for the online environment but the limited literature on web metrics has not researched information needs for online SDM. Even though web metrics can be a valuable tool for web businesses to inform strategic marketing decisions, and their collection might be less expensive and easier than offline measures, virtually no published research has combined web metrics and SDM concepts into one research project. To address this gap in the literature, the thesis investigated the differences and commonalities of online and offline SDM process approaches, the use of web metrics categories for online SDM stages, and the issues encountered during that process through four research questions. A preliminary conceptual model based on the literature review was refined through preliminary research, which addressed the research questions and investigated the current state of web metrics. After investigating various methodologies, a multi-stage qualitative methodology was selected. The use of qualitative methods represents a contribution to knowledge regarding methodological approaches to online research. Four stages within the online SDM process were shown to benefit from the use of web metrics: the setting of priorities, the setting of objectives, the pretest stage and the review stage. The results identified the similarity of online and offline SDM processes; demonstrated that Traffic, Transactions, Customer Feedback and Consumer Behaviour categories provide basic metrics used by most companies; identified the Environment, Technology, Business Results and Campaigns categories as supplementary categories that are applied according to the marketing objectives; and investigated the results based on different types of companies (website classification, channel focus, size and cluster association). Three clusters were identified that relate to the strategic importance of the website and web metrics. Modifying the initial conceptual model, six issues were distinguished that affect the use of web metrics: the adoption and use of web metrics by managers; the integration of multiple sources of metrics; the establishment of industry benchmarks; data quality; the differences to offline measures; as well as resource constraints that interfere with the appropriate web metrics analysis. Links to offline marketing strategy literature and established business concepts were explored and explanations provided where the results confirmed or modified these concepts. Using qualitative methods, the research assisted in building theory of web metrics and online SDM processes. The results show that offline theories apply to the online environment and conventional concepts provide guidance for online processes. Dynamic aspects of strategy relate to the online environment, and qualitative research methods appear suitable for online research. Publications during this research project: Weischedel, B., Matear, S. and Deans, K. R. 
(2003) The Use of E-metrics in Strategic Marketing Decisions - A Preliminary Investigation. Business Excellence �03 - 1st International Conference on Performance Measures, Benchmarking and Best Practices in the New Economy, Guimaraes, Portugal; June 10-13, 2003. Weischedel, B., Deans, K. R. and Matear, S. (2004) Emetrics - An Empirical Study of Marketing Performance Measures for Web Businesses. Performance Measurement Association Conference 2004, Edinburgh, UK; July 28-30, 2004. Weischedel, B., Matear, S. and Deans, K. R. (2005) "A Qualitative Approach to Investigating Online Strategic Decision-Making" Qualitative Market Research, Vol. 8 No 1, pp. 61-76. Weischedel, B., Matear, S. and Deans, K. R. (2005) "The Use of Emetrics in Strategic Marketing Decisions - A Preliminary Investigation" International Journal of Internet Marketing and Advertising, Vol. 2 Nos 1/2, p. 109-125.
15

Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science vs. Scopus and Google Scholar

Meho, Lokman I., Yang, Kiduk 2007
The Institute for Scientific Information's (ISI) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. ISI databases (or Web of Science [WoS]), however, may no longer be sufficient because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science faculty members as a case study, this paper examines the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined. Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.
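As a rough illustration of why adding sources can reorder scholars, here is a Python sketch with invented counts. A real analysis must deduplicate citing documents across databases rather than naively summing counts, as the study's examination of over 10,000 citing documents implies; the sketch only shows how rankings shift.

    from collections import defaultdict

    # Hypothetical citation counts per source for three authors.
    CITATIONS = {
        "wos":    {"author_a": 120, "author_b": 95, "author_c": 90},
        "scopus": {"author_a": 10,  "author_b": 40, "author_c": 35},
        "gs":     {"author_a": 30,  "author_b": 20, "author_c": 80},
    }

    def ranking(sources):
        # Sum counts over the chosen sources and rank authors by total.
        totals = defaultdict(int)
        for src in sources:
            for author, count in CITATIONS[src].items():
                totals[author] += count
        return sorted(totals, key=totals.get, reverse=True)

    print(ranking(["wos"]))                  # ['author_a', 'author_b', 'author_c']
    print(ranking(["wos", "scopus", "gs"]))  # ['author_c', 'author_a', 'author_b']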
16

Searching the long tail: Hidden structure in social tagging

Tonkin, Emma January 2006
In this paper we explore a method of decomposition of compound tags found in social tagging systems and outline several results, including improvement of search indexes, extraction of semantic information, and benefits to usability. Analysis of tagging habits demonstrates that social tagging systems such as del.icio.us and flickr include both formal metadata, such as geotags, and informally created metadata, such as annotations and descriptions. The majority of tags represent informal metadata; that is, they are not structured according to a formal model, nor do they correspond to a formal ontology. Statistical exploration of the main tag corpus demonstrates that searches use only a subset of the available tags; for example, many tags are composed as ad hoc compounds of terms. In order to improve the accuracy of searching across the data contained within these tags, a method must be employed to decompose compounds in such a way that there is a high degree of confidence in the result. An approach to decomposition of English-language compounds, designed for use within a small initial sample tagset, is described. Possible decompositions are identified from a generous wordlist, subject to selective lexicon snipping. In order to identify the most likely decomposition, a Bayesian classifier is used across term elements. To compensate for the limited sample set, a word classifier is employed and the results classified using a similar method, resulting in a successful classification rate of 88% and a false negative rate of only 1%.
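A toy Python sketch of the decomposition step follows. The wordlist and frequency table are invented, and the mean-frequency scoring is a crude stand-in for the Bayesian classifier the paper describes; it is meant only to show the enumerate-then-rank shape of the approach.

    # Enumerate every way to split a compound tag into known words, then
    # keep the split whose terms are most plausible.
    WORDLIST = {"open", "source", "opens", "ource", "web", "design"}
    FREQ = {"open": 50, "source": 40, "opens": 5, "ource": 1,
            "web": 60, "design": 45}

    def decompositions(tag):
        if not tag:
            return [[]]
        splits = []
        for i in range(1, len(tag) + 1):
            head = tag[:i]
            if head in WORDLIST:
                splits += [[head] + rest for rest in decompositions(tag[i:])]
        return splits

    def best_split(tag):
        # Score candidates by mean term frequency (the classifier
        # stand-in); fall back to the whole tag if nothing splits.
        return max(decompositions(tag),
                   key=lambda s: sum(FREQ[w] for w in s) / len(s),
                   default=[tag])

    print(best_split("opensource"))  # ['open', 'source'], not ['opens', 'ource']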
17

Finding Finding Aids on the World Wide Web

Tibbo, Helen R., Meho, Lokman I. January 2001
Reports results of a study to explore how well six popular Web search engines performed in retrieving specific electronic finding aids mounted on the World Wide Web. A random sample of online finding aids was selected and then searched using AltaVista, Excite, Fast Search, Google, Hotbot and Northern Light, employing both word and phrase searching. As of February 2000, approximately 8 percent of repositories listed at the 'Repositories of Primary Resources' Web site had mounted at least four full finding aids on the Web. The most striking finding of this study was the importance of using phrase searches whenever possible, rather than word searches. Also of significance was the fact that if a finding aid were to be found using any search engine, it was generally found in the first ten or twenty items at most. The study identifies the best performers among the six chosen search engines. Combinations of search engines often produced much better results than did the search engines individually, evidence that there may be little overlap among the top hits provided by individual engines.
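The overlap point lends itself to a quick illustration. The sets below are invented top-hit lists, not the study's data; small pairwise intersections like these are what made combining engines pay off.

    # Measure how little the engines' top hits overlap.
    TOP_HITS = {
        "engine_a": {"u1", "u2", "u3", "u4", "u5"},
        "engine_b": {"u4", "u6", "u7", "u8", "u9"},
        "engine_c": {"u2", "u9", "u10", "u11", "u12"},
    }

    union = set().union(*TOP_HITS.values())
    pairwise = {(a, b): len(TOP_HITS[a] & TOP_HITS[b])
                for a in TOP_HITS for b in TOP_HITS if a < b}

    print(len(union))  # 12 distinct documents across three engines
    print(pairwise)    # every pair shares at most one hit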
18

Can web-based statistic services be trusted? / Kan man lita på webb-baserade statistiktjänster?

Birkestedt, Sara, Hansson, Andreas January 2004
A large number of statistics services exist today, which shows that there is great interest in knowing more about the visitors to a web site. But how reliable are the results these services give? The hypothesis examined in the thesis is: web-based statistics services do not show an accurate result. The purpose of the thesis is to find out how accurate the web-based statistics services are regarding unique visitors and number of pages viewed. Our hope is that this thesis will bring more knowledge about the different statistics services that exist today and the problems surrounding them. We also draw attention to the importance of knowing how your statistics software works in order to interpret the results correctly. To investigate this, we chose to run practical tests on a selection of web-based statistics services. The services registered the traffic from the same web site during a test period. During the same period a control program registered the same things and stored the result in a database. In addition to the test, we conducted an interview with a person working with web statistics. The investigation showed that there are big differences between the results from the web-based statistics services in the test and that none of them showed an accurate result, neither for the total number of page views nor for unique visitors. This led us to the conclusion that web-based statistics services do not show an accurate result, which verifies our hypothesis. The interview also confirmed that there are problems with measuring web statistics. / A large number of statistics systems exist today for finding out information about the visitors to web sites. But how reliable are these services really? The purpose of the thesis is to find out how reliable they are at reporting the number of unique visitors and the total number of page views. The hypothesis we have formulated is: web-based statistics systems do not show an accurate result. To test this, we performed practical tests of five different web-based statistics services that were used on the same web site during the same period. The information these services registered was stored in a database, while we used our own control program to measure the same data. We also conducted an interview with a person who works with web statistics at a web company. The investigation shows that the results of the different services differ considerably, both compared with each other and with the control program. This applied to both the number of page views and unique visitors. This leads to the conclusion that the systems do not show accurate figures, which allows us to verify our hypothesis. The interview also pointed to the problems involved in measuring visitor statistics.
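The control measurement described above amounts to counting two quantities from the same raw traffic. Here is a minimal Python sketch with invented log entries; real services disagree partly because they define and detect these quantities differently.

    # Page views = number of requests; unique visitors = distinct visitor
    # identifiers seen during the period.
    REQUESTS = [  # (visitor_id, page) pairs from a hypothetical log
        ("v1", "/home"), ("v1", "/products"), ("v2", "/home"),
        ("v1", "/home"), ("v3", "/contact"), ("v2", "/products"),
    ]

    page_views = len(REQUESTS)
    unique_visitors = len({visitor for visitor, _ in REQUESTS})
    print(page_views, unique_visitors)  # 6 3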
19

Strategic management and development of UK university library websites

Manuel, Sue January 2012
This research assessed website management and development practices across the United Kingdom university library sector. As a starting point, the design and features of this group of websites was recorded against criteria drawn from the extant literature. This activity established core content and features of UK library websites as: a search box or link for searching the library catalogue, electronic resources or website; a navigation column on the left and breadcrumb trail to aid information location and website orientation; homepage design was repeated on library website sub-pages; university brand elements appeared in the banner; and a contact us link was provided for communication with library personnel. Library websites conformed to 14 of the 20 homepage usability guidelines examined indicating that web managers were taking steps to ensure that users were well served by their websites. Areas for improvement included better navigation support (sitemap/index), greater adoption of new technologies and more interactive features. Website management and development practices were established through national survey and in-depth case studies. These illustrated the adoption of a team approach to website management and development; formal website policy and strategy were not routinely created; library web personnel and their ability to build effective links with colleagues at the institution made a valuable contribution to the success of a library website; corporate services and institutional practices played an important part in library website development; library staff were actively engaged in consultations with their website audience; and a user focused approach to website development prevailed. User studies and metric data were considered in the website evaluation and development process. However, there were some issues with both data streams and interpreting metric data to inform website development. Evaluation and development activities were not always possible due to staff/time shortages, technical constraints, corporate website templates, and, to a lesser extent, lack of finance.
20

The Rise and Rise of Citation Analysis

Meho, Lokman I. January 2007
Accepted for publication in Physics World (January 2007) / With the vast majority of scientific papers now available online, this paper (accepted for publication in Physics World) describes how the Web is allowing physicists and information providers to measure more accurately the impact of these papers and their authors. Provides a historical background of citation analysis, impact factor, new citation data sources (e.g., Google Scholar, Scopus, NASA's Astrophysics Data System Abstract Service, MathSciNet, ScienceDirect, SciFinder Scholar, Scitation/SPIN, and SPIRES-HEP), as well as h-index, g-index, and a-index.
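Of the indicators listed, the h-index is the easiest to make concrete: an author has index h if h of their papers have at least h citations each. A minimal Python sketch with invented citation counts:

    def h_index(citation_counts):
        # Sort descending; h is the length of the prefix where the i-th
        # paper (1-based) still has at least i citations.
        ranked = sorted(citation_counts, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers with >= 4 citations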
