21

Co-occurrence Matrices and their Applications in Information Science: Extending ACA to the Web Environment

Leydesdorff, Loet, Vaughan, Liwen January 2006 (has links)
To be published in Journal of the American Society for Information Science and Technology [JASIST] 57(12) (2006) 1616-1628. Abstract: Co-occurrence matrices, such as co-citation, co-word, and co-link matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of these data. The underlying problem, in our opinion, involves understanding the nature of the various types of matrices. This paper discusses the difference between a symmetrical co-citation matrix and an asymmetrical citation matrix, as well as the appropriate statistical techniques that can be applied to each. Similarity measures (such as the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical co-citation matrix, but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment, where the nature of the available data, and thus the data collection methods, differ from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed using both traditional methods of multivariate analysis and the new visualization software Pajek, which is based on social network analysis and graph theory.
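The distinction drawn in this abstract, applying a similarity measure such as the cosine to the asymmetrical citation matrix in order to derive a proximity matrix rather than to the symmetrical co-citation matrix, can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' code, and the small citation matrix is invented.

```python
# A minimal sketch: deriving a proximity matrix from an asymmetrical citation matrix
# with cosine similarity, rather than applying a similarity measure to a symmetrical
# co-citation matrix. All data are invented.
import numpy as np

# Rows: citing documents; columns: cited authors (asymmetrical occurrence matrix).
citation = np.array([
    [1, 0, 2, 0],
    [0, 1, 1, 0],
    [3, 0, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

def cosine_proximity(m: np.ndarray) -> np.ndarray:
    """Cosine similarity between the columns (cited authors) of an asymmetrical matrix."""
    norms = np.linalg.norm(m, axis=0)
    norms[norms == 0] = 1.0           # avoid division by zero for empty columns
    normalized = m / norms
    return normalized.T @ normalized  # proximity matrix among cited authors

print(np.round(cosine_proximity(citation), 3))
```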
22

Multiple Presents: How Search Engines Re-write the Past

Hellsten, Iina, Leydesdorff, Loet, Wouters, Paul January 2006 (has links)
To be published in New Media & Society, 8(6), 2006 (forthcoming). Abstract: Internet search engines function in a present which changes continuously. The search engines update their indices regularly, overwriting Web pages with newer ones, adding new pages to the index, and losing older ones. Some search engines can be used to search for information on the internet for specific periods of time. However, these 'date stamps' are not determined by the first occurrence of the pages on the Web, but by the last date at which a page was updated or a new page was added, and at which the search engine's crawler registered this change in the database. This has major implications for the use of search engines in scholarly research, as well as theoretical implications for conceptions of time and temporality. We examine the interplay between the different updating frequencies by using AltaVista and Google for searches at different moments in time. Both the retrieval of the results and the structure of the retrieved information erode over time.
23

Social Network Analysis of Researchers' Communication and Collaborative Networks Using Self-reported Data

Cimenler, Oguz 16 June 2014 (has links)
This research seeks an answer to the following question: what is the relationship between the structure of researchers' communication network and the structure of their collaborative output networks (e.g., co-authored publications, joint grant proposals, and joint patent applications), and what is the impact of these structures on their citation performance and the volume of their collaborative research outputs? Three complementary studies are performed to answer this main question, as discussed below. 1. Study I: A frequently used output for measuring scientific (or research) collaboration is co-authorship in scholarly publications. Less frequently used are joint grant proposals and patents. Many scholars believe that co-authorship as the sole measure of research collaboration is insufficient, because collaboration between researchers might not result in co-authorship. Collaborations involve informal communication (i.e., conversational exchange) between researchers. Using self-reports from 100 tenured/tenure-track faculty in the College of Engineering at the University of South Florida, researchers' networks are constructed from their communication relations and their collaborations in three areas: joint publications, joint grant proposals, and joint patents. The data collection: 1) provides a rich data set of both in-progress and completed collaborative outputs, 2) yields a rating from the researchers on the importance of a tie to them, and 3) obtains multiple types of ties between researchers, allowing comparison of their multiple networks. Exponential Random Graph Model (ERGM) results show that the more communication researchers have, the more likely they are to produce collaborative outputs. Furthermore, the impact of four demographic attributes (gender, race, department affiliation, and spatial proximity) on collaborative output relations is tested. The results indicate that grant proposals in the College of Engineering are submitted by mixed-gender teams, and that researchers of the same race are more likely to publish together. The demographic attributes have no additional leverage on joint patents. 2. Study II: Previous research shows that researchers' social network metrics obtained from a collaborative output network (e.g., a joint publications or co-authorship network) affect their performance as determined by the g-index. This study uses a richer dataset to show that a scholar's performance should be considered with respect to position in multiple networks. Previous research using only the network of researchers' joint publications shows that a researcher's distinct connections to other researchers (i.e., degree centrality), number of repeated collaborative outputs (i.e., average tie strength), and redundant connections to a group of researchers who are themselves well-connected (i.e., efficiency coefficient) have a positive impact on performance, while a researcher's tendency to connect with other researchers who are themselves well-connected (i.e., eigenvector centrality) has a negative impact. The findings of this study are similar, except that eigenvector centrality has a positive impact on the performance of scholars.
Moreover, the results demonstrate that a researcher's tendency toward dense local neighborhoods (as measured by the local clustering coefficient) and demographic attributes such as gender should also be considered when investigating the impact of social network metrics on researchers' performance. 3. Study III: This study investigates to what extent researchers' interactions in the early stage of their collaborative network activities affect the number of collaborative outputs produced (e.g., joint publications, joint grant proposals, and joint patents). Path models using the Partial Least Squares (PLS) method are run to test the extent to which researchers' individual innovativeness, as determined by specific indicators obtained from their interactions in the early stage of their collaborative network activities, affects the number of collaborative outputs they produce, taking into account the tie strength of a researcher to other conversational partners (TS). Within a college of engineering, it is found that researchers' individual innovativeness positively affects the volume of their collaborative outputs. TS positively affects researchers' individual innovativeness, whereas TS negatively affects the volume of their collaborative outputs. Furthermore, TS negatively moderates the relationship between researchers' individual innovativeness and the volume of their collaborative outputs, which is consistent with the 'Strength of Weak Ties' theory. The results of this study contribute to the literature on the transformation of tacit knowledge into explicit knowledge in a university context.
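The network metrics named in Study II (degree centrality, eigenvector centrality, the local clustering coefficient) and the tie strength used in Study III can be computed on a toy co-authorship network with networkx. The sketch below is purely illustrative and is not the dissertation's code; the researchers, edges, and weights are invented.

```python
# Illustrative sketch: degree centrality, eigenvector centrality, local clustering
# coefficient, and average tie strength on a small invented co-authorship network.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 3),  # weight = number of joint outputs (hypothetical)
    ("A", "C", 1),
    ("B", "C", 2),
    ("C", "D", 1),
])

degree = nx.degree_centrality(G)
eigenvector = nx.eigenvector_centrality(G, weight="weight")
clustering = nx.clustering(G)  # local clustering coefficient per researcher
avg_tie_strength = {
    n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) / G.degree(n)
    for n in G.nodes
}

for n in G.nodes:
    print(n, round(degree[n], 2), round(eigenvector[n], 2),
          round(clustering[n], 2), round(avg_tie_strength[n], 2))
```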
24

Redesign of Library Workflows: Experimental Models for Electronic Resource Description

Calhoun, Karen January 2000 (has links)
This paper explores the potential for and progress of a gradual transition from a highly centralized model for cataloging to an iterative, collaborative, and broadly distributed model for electronic resource description. The author's purpose is to alert library managers to some experiments underway and to help them conceptualize new methods for defining, planning, and leading the e-resource description process under moderate to severe time and staffing constraints. To build a coherent library system for discovery and retrieval of networked resources, librarians and technologists are experimenting with team-based efforts and new workflows for metadata creation. In an emerging new service model for e-resource description, metadata can come from selectors, public service librarians, information technology staff, authors, vendors, publishers, and catalogers. Arguing that e-resource description demands a level of cross-functional collaboration and creative problem-solving that is often constrained by libraries' functional organizational structures, the author calls for reuniting functional groups into virtual teams that can integrate the e-resource description process, speed up operations, and provide better service. The paper includes an examination of the traditional division of labor for producing catalogs and bibliographies, a discussion of experiments that deploy a widely distributed e-resource description process (e.g., the use of CORC at Cornell and Brown), and an exploration of the results of a brief study of selected ARL libraries' e-resource discovery systems.
25

Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science vs. Scopus and Google Scholar

Meho, Lokman I., Yang, Kiduk 01 1900 (has links)
The Institute for Scientific Information's (ISI) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. ISI databases (or Web of Science [WoS]), however, may no longer be sufficient, because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science faculty members as a case study, this paper examines the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined. Results show that Scopus significantly alters the relative ranking of those scholars who appear in the middle of the rankings, and that GS stands out in its coverage of conference proceedings as well as of international, non-English-language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. The WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.
26

Ranking the Research Productivity of LIS Faculty and Schools: An Evaluation of Data Sources and Research Methods

Meho, Lokman I., Spurgin, Kristina M. 10 1900 (has links)
This study evaluates the data sources and research methods used in earlier studies to rank the research productivity of Library and Information Science (LIS) faculty and schools. In doing so, the study identifies both tools and methods that generate more accurate publication count rankings and databases that should be taken into consideration when conducting comprehensive searches of the literature for research and curricular needs. With a list of 2,625 items published between 1982 and 2002 by 68 faculty members of 18 American Library Association (ALA)-accredited LIS schools, hundreds of databases were searched. Results show that only 10 databases provide significant coverage of the indexed LIS literature. Results also show that restricting the data sources to one, two, or even three databases leads to inaccurate rankings and erroneous conclusions. Because no single database provides comprehensive coverage of the LIS literature, researchers must rely on a wide range of disciplinary and multidisciplinary databases for ranking and other research purposes. The study answers questions such as the following: Is the Association for Library and Information Science Education's (ALISE's) directory of members a reliable tool for identifying a complete list of faculty members at LIS schools? How many and which databases are needed in a multifile search to arrive at accurate publication count rankings? What coverage will be achieved using a certain number of databases? Which research areas are well covered by which databases? What alternative methods and tools are available to fill gaps among databases? Did the coverage performance of databases change over time? What counting method should be used when determining what and how many items each LIS faculty member and school has published? The authors recommend advanced analysis to provide a more detailed assessment of the research productivity of authors and programs.
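One of the questions the study poses, how many databases are needed to reach a given level of coverage, can be illustrated with a small greedy-coverage sketch. This is not the authors' method, only a hedged illustration; the database names and their coverage sets below are invented.

```python
# Greedy illustration: repeatedly pick the database that covers the most not-yet-covered
# publications and report cumulative coverage. Data are invented.
from typing import Dict, List, Set, Tuple

def greedy_coverage(publications: Set[str],
                    databases: Dict[str, Set[str]]) -> List[Tuple[str, float]]:
    covered: Set[str] = set()
    order: List[Tuple[str, float]] = []
    remaining = dict(databases)
    while remaining and covered != publications:
        name, items = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        if not items - covered:          # no database adds anything new
            break
        covered |= items
        order.append((name, len(covered) / len(publications)))
        del remaining[name]
    return order

pubs = {f"p{i}" for i in range(1, 11)}
dbs = {
    "LISA":  {"p1", "p2", "p3", "p4", "p5"},
    "LISTA": {"p3", "p4", "p5", "p6"},
    "SSCI":  {"p1", "p6", "p7"},
    "ERIC":  {"p8", "p9"},
}
for name, frac in greedy_coverage(pubs, dbs):
    print(f"{name}: cumulative coverage {frac:.0%}")
```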
27

A Scientometric Method to Analyze Scientific Journals as Exemplified by the Area of Information Science

Boell, Sebastian K. 12 1900 (has links)
==Background== In most academic disciplines, journals play an important role in disseminating research findings among members of the disciplinary community. Understanding a discipline's body of journals is therefore of great importance when looking for previous research, when compiling an overview of previous research, and when deciding on the best place to publish research results. Furthermore, based on Bradford's Law of scattering, one can assume that compiling a satisfactory overview of previous research requires scanning a wide range of journals, but also that there are some 'core' journals which are of more importance to a specific discipline than others. ==Aim== This thesis aims to compile a comprehensive master list of journals which publish articles of relevance to Library and Information Science (LIS). A method to rank journals by their importance is introduced, and some key characteristics of the discipline's body of journals are discussed. Databases indexing the discipline's journals are also compared. ==Method== The master list of LIS journals was created by combining the journal listings of secondary sources indexing the field's literature. These sources were six databases focusing on LIS literature (INFODATA, Current Contents, Library and Information Science Abstracts, Library Information Science Technology Abstracts, Information Science and Technology Abstracts, and Library Literature and Information Science), the LIS subsections of three databases with a general focus (Social Science Citation Index, Academic Search Premier, and Expanded Academic ASAP), and the listing of LIS journals from the Elektronische Zeitschriften Bibliothek. Problems related to editorial policies and technical shortcomings are discussed before comparing predominant publication languages, places of publication, open access, peer review, and ISI Journal Impact Factors (JIF). Journals were also ranked by the number of databases in which they occur in order to identify 'core' publications. The number of journals overlapping between databases is estimated, and a matrix giving the overlap is visualized using multidimensional scaling. Lastly, the degree to which journals overlap with other disciplines is measured. ==Results== A comprehensive master list of 1,205 journals publishing articles of relevance to LIS was compiled. The 968 active journals are mostly published in English, with one third of the journals coming from the US and another third from the UK and Germany. Nearly 16% of all journals are open access, 11% have an ISI JIF, and 42% are peer reviewed. Fifteen core journals could be identified, and a list of the top fourteen journals published in Germany is presented. Databases have between five and 318 journals in common, and the journal collection shows a substantial overlap with a wide range of subjects, the largest overlaps being with Computing Studies and with Business and Economics. ==Conclusion== The aim of compiling a comprehensive list of LIS journals was achieved. The list will contribute to our understanding of scholarly communication within the LIS discipline and provide academics and practitioners with a better understanding of journals within the discipline. The ranking approach proved sufficient, showing good agreement with other studies from the last 40 years. The master list of LIS journals also has potential uses for further research.
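The ranking idea described in the Method section, counting in how many databases a journal occurs to identify 'core' publications, together with the database-by-database overlap matrix later visualized with multidimensional scaling, can be sketched as follows. This is an illustrative sketch rather than the thesis's code; the database and journal names are invented.

```python
# Rank journals by the number of databases that index them and build a pairwise
# database overlap matrix. All names and listings are invented examples.
from itertools import combinations

databases = {
    "LISA":  {"JASIST", "Scientometrics", "JDoc", "IP&M"},
    "LISTA": {"JASIST", "JDoc", "Library Quarterly"},
    "SSCI":  {"JASIST", "Scientometrics", "IP&M"},
}

# 'Core' journals are those that occur in most databases.
counts: dict[str, int] = {}
for journals in databases.values():
    for j in journals:
        counts[j] = counts.get(j, 0) + 1
for journal, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(journal, n)

# Pairwise overlap (journals two databases have in common) -- the kind of matrix
# that multidimensional scaling would then lay out in two dimensions.
for a, b in combinations(databases, 2):
    print(a, b, len(databases[a] & databases[b]))
```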
28

The Rise and Rise of Citation Analysis

Meho, Lokman I. 01 1900 (has links)
Accepted for publication in Physics World (January 2007) / With the vast majority of scientific papers now available online, this paper describes how the Web is allowing physicists and information providers to measure more accurately the impact of these papers and their authors. It provides a historical background of citation analysis and the impact factor, surveys new citation data sources (e.g., Google Scholar, Scopus, NASA's Astrophysics Data System Abstract Service, MathSciNet, ScienceDirect, SciFinder Scholar, Scitation/SPIN, and SPIRES-HEP), and discusses the h-index, g-index, and a-index.
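For readers unfamiliar with the indicators mentioned at the end of the abstract, the h-index and g-index can be computed from a list of per-paper citation counts as in the sketch below. The code is illustrative and not taken from the paper; the citation counts are invented.

```python
# h-index: largest h such that h papers have at least h citations each.
# g-index: largest g such that the top g papers together have at least g**2 citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [24, 18, 12, 9, 7, 6, 3, 2, 1, 0]  # invented per-paper citation counts
print(h_index(papers))  # 6: six papers with at least 6 citations each
print(g_index(papers))  # 9: the top 9 papers have 82 >= 81 citations in total
```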
29

Mínimos quadrados ordinários (MQO) na produção científica brasileira: a interdisciplinaridade entre a econometria e as metrias da informação (bibliometria, informetria e cientometria) [Ordinary least squares (OLS) in Brazilian scientific production: the interdisciplinarity between econometrics and the information metrics (bibliometrics, informetrics, and scientometrics)]

Santos, Levi Alã Neves dos 05 December 2017 (has links)
This thesis analyzes Brazilian scientific production (national articles, international articles, conference proceedings, and books) using ordinary least squares (OLS). To that end, it traces the historical development and application of the metrics that Information Science (IS) has been building, from the earliest of all, bibliometrics, which originated in librarianship, through modern perspectives such as scientometrics and informetrics. It explains how econometrics builds its analytical model, which is used for research in economics, and reflects on how this method can be brought into the information metrics. It then presents the OLS estimation method for regression analysis, which is the proposal of this thesis. The work is descriptive applied research with a quantitative approach, using case-study procedures on data collected from the CNPq Tabular Plan Portal for the year 2010. The criteria for the research design were developed in the literature review from references in IS as well as in bibliometrics, statistics, and econometrics. Methodologically, the study combines the conceptual approach of bibliometrics and IS, in search of theories applicable to OLS studies, with an empirical application of OLS close to the econometric conception. The thesis concludes that the use of techniques for analyzing regression functions built by OLS makes it possible to create a model for predicting Brazilian scientific production. This model is built from the correlation and determination detected between the number of doctorate holders in each Brazilian state and their scientific production. By applying econometric strategies (correlation coefficient, coefficient of determination, functional form of the regression curve, and estimation of the function's parameters by OLS), it was possible to construct a predictive model.
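The core of the predictive model described above, an OLS regression of scientific output on the number of doctorate holders per state, can be sketched with a toy calculation. This is not the thesis's data or code; the figures below are invented for illustration.

```python
# Toy OLS fit via the normal equations / least squares, with an intercept column.
import numpy as np

phds   = np.array([1200, 3400, 800, 5600, 2100], dtype=float)   # PhD holders per state (invented)
output = np.array([1500, 4100, 950, 6900, 2600], dtype=float)   # publications per state (invented)

X = np.column_stack([np.ones_like(phds), phds])
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
predicted = X @ beta
r_squared = 1 - np.sum((output - predicted) ** 2) / np.sum((output - output.mean()) ** 2)

print(f"intercept={beta[0]:.2f}, slope={beta[1]:.4f}, R^2={r_squared:.3f}")
print("predicted output for a state with 3,000 PhDs:",
      round(beta[0] + beta[1] * 3000))
```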
30

Descubrimiento y evaluación de recursos web de calidad mediante Patent Link Analysis [Discovery and evaluation of quality web resources through Patent Link Analysis]

Font Julián, Cristina Isabel 26 July 2021 (has links)
Patents are legal documents that describe the exact operation of an invention, granting the right of economic exploitation to their owners in exchange for disclosing to society the details of how the invention works. For a patent to be granted, it must meet three requirements: it must be novel (not previously exhibited or published), involve an inventive step, and have industrial application. Patents are therefore valuable documents, since they contain a large amount of technical information not previously included in any other published or available document. Because of the particular characteristics of patents, the resources they mention, as well as the resources that mention patents, contain links that can be useful and can support various applications (technology watch, development and innovation, Triple-Helix studies, etc.) by providing complementary information, given tools and techniques that allow these links to be extracted and analyzed. The method proposed to achieve the objectives of the thesis is divided into two complementary blocks, Patent Outlink and Patent Inlink, which together make up the Patent Link Analysis technique. The study uses the United States Patent and Trademark Office (USPTO), collecting all patents granted between 2008 and 2018 (both included). Once the information for each block was extracted, the dataset comprised 3,133,247 patents, 2,745,973 links contained in patents, 2,297,366 web pages linked from patents, 17,001 unique web pages linking to patents, and 990,663 unique patents linked from web documents. The results of the Patent Outlink analysis show that both the share of patents that contain links (20%) and the number of links per patent (median 4-5) are still low, but both have grown significantly in recent years, and greater use can be expected in the future. There is a clear difference in the use of links between areas of knowledge (42% belong to Physics, especially Computing and Calculating), as well as between sections within the documents, which explains the results obtained and informs future analyses. The Patent Inlink analysis identifies considerably fewer web domains linking to patents (17,001 versus 256,724), but more links per linking document (the total number of links is similar for both blocks). The data also show high dispersion, with a few domains generating a large share of the links. Both blocks show a strong relationship with technology companies and services, with differences in links to universities and governments (more links in the Outlink block). Finally, the proposed model is shown to allow the extraction and analysis of links contained in and directed at patent documents in an efficient, effective, and replicable way, and to facilitate the discovery and evaluation of quality web resources. The thesis further concludes that webometrics, through link analysis, provides information of interest for the analysis of quality web resources via the links contained in and directed at patent documents. The proposed and validated method makes it possible to define, model, and characterize Patent Link Analysis as a subgenre of Link Analysis that can be used to build link-intelligence monitoring, evaluation, and quality systems, among others, based on the inbound and outbound links of patent documents, and applicable to universities, research centers, and public and private companies. / This doctoral thesis was funded by the Government of Spain through predoctoral contract FPI BES-2017-079741 for doctoral training, awarded by the Ministerio de Ciencia e Innovación. / Font Julián, CI. (2021). Descubrimiento y evaluación de recursos web de calidad mediante Patent Link Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/170640
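The Patent Outlink side of the technique, extracting the links contained in patent documents and aggregating them by web domain, can be sketched in a few lines. This is a hedged illustration rather than the thesis pipeline; the patent identifiers, texts, and URLs are invented.

```python
# Pull URLs out of patent full text with a regular expression and count which
# web domains are linked most often. All inputs are invented snippets.
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s,;)\"]+", re.IGNORECASE)

patent_texts = {
    "US0000001": "The dataset is available at http://archive.example.org/data and ...",
    "US0000002": "See https://www.w3.org/TR/xml/ and http://archive.example.org/spec.",
}

domain_counts = Counter()
for patent_id, text in patent_texts.items():
    for url in URL_RE.findall(text):
        domain_counts[urlparse(url).netloc.lower()] += 1

for domain, n in domain_counts.most_common():
    print(domain, n)
```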
