1 |
Investigating website design factors that influence customers to use third-party websites for booking hotels: the Saudi customers' perspective. Baeshen, Yasser Ali Mohammed-Saleh, January 2017
Customers in the physical world are influenced by their surroundings: factors such as packaging, human interaction and atmosphere (environment) play important roles in any purchase decision. Today's customers are moving towards faster and more efficient ways of purchasing products and services. One of the most influential features in online purchase decisions is the Virtual Store Atmosphere (VSA), a marketing tool that not only influences purchase decisions but also measures the level of satisfaction in tourism and other industries. A high level of customer satisfaction increases the chance that customers will recommend the product or service to others. However, despite various studies on information technology development and the impact of Electronic Word of Mouth (eWOM) on online Customer Purchase Decisions (CPD), eWOM has been little explored in the sphere of web design. Given the nature of online customers and the volume of online bookings made in the tourism sector, it is vital that this research gap be addressed. Accordingly, this study critically investigates and examines the impact of the online shopping environment on eWOM and customer purchase decisions, with respect to online bookings in the hotel industry, and develops a framework. The study assesses whether or not this impact is due to customers' web satisfaction and willingness to book a hotel online. Additionally, it looks at the influence of the online tourism environment on eWOM and Saudi Arabian customers' purchase decisions with respect to trust and perceived risk in online hotel booking. The research mainly adopts a quantitative method to achieve these objectives. A conceptual framework was developed from the existing literature on eWOM, web design and the hotel industry, and validated using measurement scales from previously validated studies. Two main theories underpin the study: the Technology Acceptance Model (TAM) and the Stimulus-Organism-Response (S-O-R) model. The research used an online survey of 1,002 respondents, distributed across two groups (Saudi national undergraduate students and Saudi national academic employees); interviews, focus groups, and a pilot study were also conducted to validate the survey. Data analysis applied Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modelling (SEM) to validate the relationships between constructs and to test the research hypotheses. The findings show that most of the selected environmental (web design) factors affect e-satisfaction (ES) and motivate users to book a hotel online: Perceived Ease of Use, Perceived Usefulness, Website Content, Intrusive Marketing Tools (pop-up ads and banner ads), Search Engine and Enjoyability, but not System Quality. In addition, the results suggest that one of the organism factors, eWOM, does not influence CPD. This study thus contributes to the customer behaviour and web design/quality literature within the travel and tourism context in Saudi Arabia; it adds to existing knowledge and supports practitioners of third-party hotel websites in shaping their web development priorities, enabling them to focus on the most influential and critical factors.
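To make the EFA/CFA/SEM pipeline described above concrete, here is a minimal Python sketch using the factor_analyzer and semopy packages. The survey.csv file, item names (peou1, esat1, ...) and construct structure are illustrative assumptions, not the thesis's actual data or model.

```python
# Minimal sketch of an EFA -> CFA/SEM pipeline.
# Assumptions: survey.csv holds Likert-scale items named as below;
# the constructs and paths are illustrative, not the thesis's model.
import pandas as pd
from factor_analyzer import FactorAnalyzer
import semopy

df = pd.read_csv("survey.csv")  # hypothetical respondent-level dataset

# 1) Exploratory factor analysis: check that items load on distinct factors.
efa = FactorAnalyzer(n_factors=3, rotation="varimax")
efa.fit(df[["peou1", "peou2", "pu1", "pu2", "enjoy1", "enjoy2"]])
print(efa.loadings_)

# 2) CFA/SEM: measurement model plus structural paths
#    (environmental stimuli -> e-satisfaction -> booking intention).
model_desc = """
PEOU =~ peou1 + peou2
PU =~ pu1 + pu2
ENJOY =~ enjoy1 + enjoy2
ESAT =~ esat1 + esat2
INTENT =~ int1 + int2
ESAT ~ PEOU + PU + ENJOY
INTENT ~ ESAT
"""
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())           # path estimates and p-values
print(semopy.calc_stats(model))  # fit indices (chi-square, CFI, RMSEA, ...)
```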
|
2 |
Understanding the factors that attract travellers to use airline websites for purchasing air tickets. Bukhari, Saleh Mohammed Fadel, January 2015
For e-commerce businesses to attract customers and consequently increase revenue, it is essential to understand the behaviour of online consumers: how they react to website elements, and what could influence their adoption of online channels. Given the applied nature of this research area, a number of studies have been carried out by marketers and information systems experts to develop a better understanding of consumer behaviour and of the impact of web elements on the adoption of online services. However, as web services continue to expand, so does the need for further research concentrating on specific types of products or services. Many academic articles have been published on specific web portals such as retailing, banking, governmental transactions, and hotel booking; nevertheless, there remains a lack of research examining customer behaviour when using airline websites. Given the specific nature of online consumers and the amount of business surrounding e-ticketing, it is imperative that this gap is addressed. Multi-faceted limitations surrounding online consumer behaviour within the airline industry emerge from the literature. For example, the majority of previous research has relied solely upon traditional theories, so other important perspectives related to travel warrant investigation. Additionally, apparent links between website qualities and website adoption remain under-investigated in the context of the airline industry. Another gap concerns the moderating role of travellers' characteristics such as their demographics, internet experience, and travel habits. Based on these limitations, this research aims to develop a comprehensive, multi-disciplinary (consumer behaviour, information systems, travel and tourism) theoretical model capable of examining the factors that influence travellers' online satisfaction and intention to purchase air tickets from airline websites. In developing this model, the research adopts a positivist, deductive, and quantitative approach. Based on the analysis and synthesis of the literature, a conceptual model comprising nine constructs is proposed. Inspired by the Information Systems Success Model, e-satisfaction is central to the model and is suggested as the main predictor of intention to purchase airline tickets. Information web qualities and system web qualities are considered antecedents of e-satisfaction. The two constructs from the Technology Acceptance Model (TAM), perceived usefulness and perceived ease of use, are also integrated into the model, as are other important factors such as e-trust, airline reputation, and price perception. The model has been validated using a measurement scale based on previously validated items. The research adopts an online survey targeting real travellers from Saudi Arabia who have used airline websites; interviews, focus groups, and a pilot study were conducted to validate the survey items. Data collection used the social media channels of the two airlines operating in Saudi Arabia as well as a snowball method. Data analysis techniques including exploratory factor analysis, confirmatory factor analysis, and structural equation modelling were used to validate the relationships and to test the overarching research hypotheses.
Additionally, group comparison techniques including invariance analysis were used to explore the moderating effect of demographic characteristics (gender, age, education level, monthly income, occupation, and location), internet experience, and travel habits (origin of the airline used, actual ticket purchase, travel frequency, motivation for travel, type of travel, and type of funding). The results suggest that the most influential factors motivating travellers to purchase online are e-satisfaction followed by website trust. Travellers' perceptions of website quality also played an important role in influencing e-satisfaction, with price the next most influential factor. Several other factors showed direct and indirect associations with intention to purchase and e-satisfaction. Additionally, findings from the group analysis suggest that some demographic factors and travel habits moderate the hypothesised relationships. As such, this research makes several contributions to the consumer behaviour and web quality literature within the travel and tourism context. The findings can assist airlines in shaping their web development priorities and enable them to focus on the most influential factors. The thesis concludes with a discussion of the application of these findings, an evaluation of the studies undertaken, and suggestions for future research.
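A minimal sketch of the multigroup (moderation) step described above: the same structural model is fitted per group and the path of interest compared. The survey.csv file, gender column and model specification are illustrative assumptions; a formal invariance test would additionally compare constrained and unconstrained nested models.

```python
# Minimal sketch of group comparison (moderation) with semopy:
# fit the same SEM separately per group, then compare path estimates.
# The grouping column and model are illustrative assumptions.
import pandas as pd
import semopy

MODEL = """
ESAT =~ esat1 + esat2
INTENT =~ int1 + int2
INTENT ~ ESAT
"""

df = pd.read_csv("survey.csv")  # hypothetical dataset with a 'gender' column

for group, sub in df.groupby("gender"):
    m = semopy.Model(MODEL)
    m.fit(sub)
    est = m.inspect()
    path = est[(est["lval"] == "INTENT") & (est["rval"] == "ESAT")]
    print(group, float(path["Estimate"].iloc[0]))

# A formal invariance test would constrain loadings/paths to be equal
# across groups and compare chi-square values of the nested models.
```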
|
3 |
Metodologia para a análise da qualidade de Web sites baseada em técnicas de aprendizado de máquina / Methodology to analyze the quality of Web sites based on machine learning techniques. Ganzeli, Heitor de Souza, 14 March 2014
The World Wide Web is the most popular application on the Internet and, since its creation, it has changed people's lives in many ways; hence, it has become the subject of several social, technological, economic and political studies. The methodology described in this dissertation may be understood as an extension of the TIC Web project, which was developed in a partnership among NIC.br, the Brazilian W3C office and the InWeb institute in order to study quality-related characteristics of the Brazilian Web. Accordingly, the methodology presented here aims to automate the analysis of Web domains and sites, building mainly on the results on the Brazilian Governmental Web obtained by TIC Web. In other words, this work focuses on the definition and application of a methodology based on machine learning techniques to automate the analysis of extracted data, with the goal of easing the classification of Web sites according to the quality perceived by their users. The topics discussed include: the importance of open standards and performance features in determining the quality of a site; the fundamentals of machine learning; details of the tools used to collect and extract information from sites, as well as the attributes and indicators they acquire; the proposed methodology, including a description of the algorithms used; and a use case demonstrating its applicability. Additionally, it is proposed, as part of the methodology, to use the results of the domain analysis to evaluate other websites according to their perceived quality.
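As one way of picturing the classification step the methodology describes, here is a minimal scikit-learn sketch that predicts a perceived-quality label from automatically collected site indicators. The sites.csv file, feature names and label are illustrative assumptions, not the project's actual attributes.

```python
# Minimal sketch of a site-quality classifier: predict a perceived-quality
# label from crawled indicators. Feature names and sites.csv are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

df = pd.read_csv("sites.csv")  # hypothetical: one row per crawled site
features = ["html_validation_errors", "css_validation_errors",
            "page_load_seconds", "page_weight_kb", "broken_links"]
X, y = df[features], df["perceived_quality"]  # e.g. "good" / "poor"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```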
|
4 |
An architectural framework for assessing quality of experience of web applications. Radwan, Omar Amer, January 2017
Web-based service providers have long been required to deliver high-quality services in accordance with standards and customer requirements. Increasingly, however, providers are required to think beyond service quality and develop a deeper understanding of their customers' Quality of Experience (QoE). Whilst models exist that assess the QoE of Web Applications, significant challenges remain in defining QoE factors from a Web engineering perspective, as well as in mapping between so-called 'objective' and 'subjective' factors of relevance. Specifically, the following are considered general fundamental problems for assessing QoE: (1) quantifying the relationship between QoE factors; (2) predicting QoE while dealing with the limited data available on subjective factors; (3) optimising and controlling QoE; and (4) perceiving QoE. In response, this research presents a novel model, called QoEWA (with an associated software instantiation), that integrates factors through Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The mapping is incorporated into a correlation model that assesses the QoE of Web Applications in particular, defining the factors in terms of quality requirements derived from web architecture. The data resulting from the mapping is used as input for the proposed model to develop artefacts that quantify, predict, optimise and perceive QoE. The development of QoEWA is framed and guided by the Design Science Research (DSR) approach, with the purpose of enabling providers to make more informed decisions regarding QoE and/or to optimise resources accordingly. The evaluation of the designed artefacts is based on a build-and-evaluate cycle that provides feedback and a better understanding of the utilised solutions. The key artefacts are developed and evaluated through four iterations. Iteration 1 uses the Actual-versus-Target approach to quantify QoE and applies statistical analysis to evaluate the outputs. Iteration 2 uses a Machine Learning (ML) approach to predict QoE and applies statistical tests to compare the performance of ML algorithms. Iteration 3 uses Multi-Objective Optimisation (MOO) to optimise QoE and control the balance between resources and user experience. Iteration 4 uses Agent-Based Modelling to perceive and gain insights into QoE; its design is rigorously tested using verified and validated models.
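A minimal sketch of the Actual-versus-Target quantification used in iteration 1, under the assumption that each KPI is scored against a target and the scores are aggregated into a KQI by weights; the KPI names, targets and weights are illustrative, not QoEWA's actual mapping.

```python
# Minimal Actual-versus-Target sketch: score each KPI by how close its
# measured value is to its target, then aggregate KPI scores into a KQI.
# All names, targets and weights are illustrative assumptions.
ACTUAL = {"page_load_s": 2.8, "availability": 0.999, "error_rate": 0.02}
TARGET = {"page_load_s": 2.0, "availability": 0.9995, "error_rate": 0.01}
LOWER_IS_BETTER = {"page_load_s", "error_rate"}
WEIGHTS = {"page_load_s": 0.5, "availability": 0.3, "error_rate": 0.2}

def kpi_score(name: str) -> float:
    """Return 1.0 when the target is met, decaying as the actual misses it."""
    actual, target = ACTUAL[name], TARGET[name]
    ratio = target / actual if name in LOWER_IS_BETTER else actual / target
    return min(1.0, ratio)

kqi = sum(WEIGHTS[k] * kpi_score(k) for k in ACTUAL)
print(f"responsiveness KQI = {kqi:.3f}")  # would feed the correlation model
```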
|
5 |
Appropriate Web Usability Evaluation Method during Product Development. Umar, Azeem; Tatari, Kamran Khan, January 2008
Web development differs from traditional software development. As in all software applications, usability is one of the core components of web applications, and usability engineering and web engineering are rapidly growing fields. Companies can improve their market position by making their products and services more accessible through usability engineering. Yet user testing is often skipped when a deadline approaches, and this is very much true of web application development. Achieving good usability is one of the main concerns of web development, and several methods have been proposed in the literature for evaluating web usability; there is not yet agreement in the software development community about which usability evaluation method is more useful than another. Extensive usability evaluation is usually not feasible in web development, while an unusable website increases the total cost of ownership. Improved usability is one of the major factors in achieving a sufficient level of user satisfaction. It can be achieved by applying an appropriate usability evaluation method, but cost-effective usability evaluation tools are still lacking. In this thesis we study usability inspection and usability testing methods, and we seek an appropriate usability evaluation method for web applications during product development, proposing one based on the common opinion observed in the web industry.

There is no standard framework or mechanism for selecting a usability evaluation method for software development. In web development projects, where time and budget are more limited than in traditional software projects, it becomes even harder to select an appropriate method. It is certainly not feasible for a web development project to apply multiple usability inspection methods and multiple usability testing methods during product development; a good choice can be a combined method composed of one usability inspection method and one usability testing method. The thesis contributes by identifying the usability evaluation methods that are common in the literature and in current web industry practice.
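As a rough illustration of the combined method argued for above, the sketch below merges findings from one inspection method (heuristic evaluation severities) with one testing method (task success in user testing). The findings, tasks and thresholds are illustrative assumptions, not the thesis's proposal.

```python
# Minimal sketch of a combined evaluation: one inspection method
# (heuristic evaluation severities, 0-4) plus one testing method
# (user task success). Data and thresholds are illustrative.
heuristic_findings = [  # (heuristic violated, severity 0-4)
    ("visibility of system status", 3),
    ("error prevention", 2),
    ("consistency and standards", 1),
]
task_results = [  # (task, completed, seconds)
    ("find product", True, 42), ("checkout", False, 180), ("search", True, 25),
]

avg_severity = sum(s for _, s in heuristic_findings) / len(heuristic_findings)
success_rate = sum(ok for _, ok, _ in task_results) / len(task_results)

print(f"avg heuristic severity: {avg_severity:.1f} / 4")
print(f"task success rate: {success_rate:.0%}")
if avg_severity >= 2 or success_rate < 0.8:
    print("usability rework recommended before release")
```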
|
6 |
A quality-centered approach for web application engineering / Une approche centrée sur la qualité pour l'ingénierie des applications Web. Do, Tuan Anh, 18 December 2018
Web application developers are not all experts. Even if they use methods such as UWE (UML-based Web Engineering) and CASE tools, they are not always able to make good decisions regarding the content of the web application, the navigation schema, and/or the presentation of information. The literature provides them with many guidelines for these tasks, but this knowledge is scattered across many sources and is not structured. In this dissertation, we capitalise on the knowledge offered by these guidelines. The contribution is threefold: (i) we propose a meta-model allowing a rich representation of these guidelines; (ii) we propose a grammar enabling the description of existing guidelines; (iii) based on this grammar, we develop a guideline management tool. We enrich the UWE method with this knowledge base, leading to a quality-based approach. Our tool thus enriches existing UWE-based Computer-Aided Software Engineering prototypes with ad hoc guidance.
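A minimal sketch of how a guideline conforming to such a meta-model might be represented and applied to a page model; the fields (scope, condition, recommendation) and the example menu-size rule are illustrative assumptions, not the dissertation's actual grammar.

```python
# Minimal sketch of a structured guideline and a checker applying it
# to a page model. Fields and the example rule are assumptions.
from dataclasses import dataclass

@dataclass
class Guideline:
    scope: str         # artefact constrained: content/navigation/presentation
    condition: str     # when the rule applies
    recommendation: str

    def check(self, page: dict) -> bool:
        """Return True if the page satisfies this (single, hard-coded) rule."""
        # Example rule: main navigation menus should not exceed 7 items.
        return len(page.get("menu_items", [])) <= 7

g = Guideline(
    scope="navigation",
    condition="page has a main menu",
    recommendation="keep main menus to at most 7 items",
)
page_model = {"menu_items": ["Home", "Products", "Blog", "About", "Contact"]}
print("complies" if g.check(page_model) else f"violation: {g.recommendation}")
```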
|
7 |
Descubrimiento y evaluación de recursos web de calidad mediante Patent Link Analysis / Discovery and evaluation of quality web resources through Patent Link Analysis. Font Julián, Cristina Isabel, 26 July 2021
Patents are legal documents that describe the exact operation of an invention, granting their owners the right of economic exploitation in exchange for disclosing to society the details of how the invention works. For a patent to be granted, it must meet three requirements: it must be novel (not previously disclosed or published), involve an inventive step, and have industrial application. This is why patents are valuable documents: they contain a large amount of technical information not previously included in any other (published or available) type of document. Owing to the particular characteristics of patents, the resources they mention, as well as the resources that mention patents, contain links that can be useful and can support various applications (technological surveillance, development and innovation, Triple-Helix, etc.) by providing complementary information, together with tools and techniques that allow these links to be extracted and analyzed.
The proposed method to achieve the objectives that define the thesis is divided into two complementary blocks: Patent Outlink and Patent Inlink, which together make up the Patent Link Analysis technique.
To carry out the study, the United States Patent and Trademark Office (USPTO) was selected, collecting all patents granted between 2008 and 2018 (inclusive). Once the information to be analyzed had been extracted for each block, the dataset comprised: 3,133,247 patents; 2,745,973 links contained in patents; 2,297,366 web pages linked from patents; 17,001 unique web domains linking to patents; and 990,663 unique patents linked from web documents.
The results of the Patent Outlink analysis show that both the share of patents containing links (20%) and the number of links per patent (median 4-5) are still low, but both have grown significantly in recent years, and greater use can be expected in the future. There are clear differences in the use of links between areas of knowledge (42% belong to Physics, especially Computing and Calculating), as well as between sections within the documents; these differences explain the results obtained and inform the projection of future analyses.
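A minimal sketch of how the outlink statistics above could be computed: extract URLs from patent full texts, then derive the share of patents containing links and the median number of links per linking patent. The regex and the three-patent sample are illustrative, not the thesis's actual extraction pipeline.

```python
# Minimal Patent Outlink sketch: extract URLs from patent texts and
# compute share-with-links and median links per linking patent.
# The regex and the patents list are illustrative assumptions.
import re
from statistics import median

URL_RE = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

patents = [  # hypothetical (patent_id, full_text) pairs
    ("US1", "See http://example.com/spec and www.acme.org/docs ..."),
    ("US2", "No external references."),
    ("US3", "Datasets at http://data.example.org ..."),
]

link_counts = [len(URL_RE.findall(text)) for _, text in patents]
with_links = [c for c in link_counts if c > 0]

print(f"patents with links: {len(with_links) / len(patents):.0%}")
print(f"median links per linking patent: {median(with_links)}")
```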
The Patent Inlink analysis identifies considerably fewer web domains linking to patents (17,001 vs. 256,724), but more links per linking document (the total number of links is similar for the two blocks). The data also show high dispersion, with a few domains generating a large share of the links. Both blocks show a strong relationship with technology companies and services, with differences in the links to universities and governments (more links in the Outlink block).
Finally, the proposed model is shown to enable efficient, effective and replicable extraction and analysis of the links contained in, and directed at, patent documents, and to facilitate the discovery and evaluation of quality web resources. In addition, it is concluded that cybermetrics, through link analysis, provides information of interest for analysing quality web resources via the links contained in and directed at patent documents.
The proposed and validated method makes it possible to define, model and characterize Patent Link Analysis as a subgenre of Link Analysis that can be used to build link-intelligence monitoring, evaluation and/or quality systems, among others, through the use of the inbound and outbound links of patent documents, applicable to universities, research centres, and public and private companies. / This doctoral thesis was funded by the Government of Spain through the FPI predoctoral contract BES-2017-079741 for doctoral training, awarded by the Ministerio de Ciencia e Innovación. / Font Julián, CI. (2021). Descubrimiento y evaluación de recursos web de calidad mediante Patent Link Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/170640
|
8 |
Qualité de l'information et vigilance collective sur le web: étude des stratégies d'évaluation des sources en ligne par les professionnels de la gestion de l'information dans les organisations / Information quality and collective mindfulness on the web: study of the online source selection strategies by corporate information professionals. Depauw, Jeremy, 29 September 2009
The complexity of the environment in which organizations operate results in the permanent coexistence of multiple, contradictory interpretations of any situation. With their capacity for decision and action disrupted, organizations deploy sophisticated collective mindfulness mechanisms to cope with this complexity, restoring their ability to make sense of events and to guide their action. Information management is one of them. Among all available sources, the internet in general, and the web in particular, are the main points of access to information and are now an integral part of these sensemaking processes. The recent evolution of the information landscape, in terms of both tools and practices, raises new challenges: through their ease of use and growing accessibility, new types of online sources have called into question the way users share, consume and produce content.

This thesis examines the challenge to these habitual collective mindfulness processes, and in particular how information specialists adapt their strategies for evaluating the quality of information from online sources. The research question brings together three main elements: information management professionals, information quality evaluation, and the evolution of the information landscape commonly known as Web 2.0. To answer it, a field survey was conducted among 53 information management professionals in Belgium between November 2007 and July 2008. In the absence of stable, widely accepted theoretical foundations, a specific conceptual framework was developed to ensure the fit between the elements of the research question and the survey design. This framework shed light on those elements, and a socio-psychological approach was used to articulate them, notably by drawing on the work of Karl Weick and the concept of sensemaking.

Information management (IM), considered as a collective mindfulness process, is a generic concept covering monitoring, scanning and intelligence activities (economic, strategic, competitive, etc.). Its conceptualization, built from an analysis of definitions of the main terms associated with it, highlighted the importance of its role as information mediator within the organization, structured around the recurring stages of collecting, processing and distributing information. The concept of organizational learning made it possible to go beyond purely mechanical approaches, highlighting IM's capacity to create meaning.

At the heart of this mediation, at the intersection of information collection and processing, another kind of sensemaking takes place: evaluating the quality of information. It is seen as an ambiguity-reduction process whose outcome determines whether or not a source or a piece of information is selected for the rest of the mediation. Information quality is approached from the angle of information seeking, which sheds light on this sensemaking; in the literature it is generally treated in terms of relevance, credibility or fitness for use. Field studies and practitioner contributions have identified the attributes and criteria that can be mobilized to build a quality judgement of online sources. For the field survey, a checklist of 72 criteria grouped under 9 attributes was chosen as the frame of reference: the source's objectives, its coverage, its authority and reputation, its accuracy, its currency, its accessibility, its presentation, its ease of use, and its comparison with other sources.

To capture concretely the changing aspects of the information landscape, an analysis of definitions and descriptions of Web 2.0 produced a morphological description of its main characteristics. Web 2.0 can thus be considered a set of tools, practices and trends. The tools identify five types of sources specific to it: blogs, wikis, podcasts, file-sharing platforms and social networking sites. These source types are examined through the concept of genre and, together, are positioned as a repertoire to be compared with the repertoire of 'classic' online source genres.

The change in information quality evaluation strategies was examined through a telephone questionnaire that crossed the quality criteria of the reference checklist with the five typical Web 2.0 genres. The indicator observed was the relative importance given to a criterion when evaluating information. Respondents were asked to indicate whether they consider the importance of the criteria to 'change' ('≠') or 'not change' ('=') when evaluating one of these genres, compared with the importance they would give it for a classic online source genre. In case of change, the questionnaire allowed them to note whether it was an increase ('>') or a decrease ('<') in importance. To complete the design, 14 semi-structured face-to-face interviews were conducted with questionnaire respondents in order to capture the explanations behind their answers.

The data analysis showed that a majority of answers (57% '=') indicate that the importance of evaluation criteria does not change when information is made available through a Web 2.0 genre rather than a classic online source genre. This nevertheless implies that 43% of the criteria change in one way or another. On this basis the research argues for a perceived change which, while not fundamentally calling the quality judgement process into question, nevertheless leads the IM professionals surveyed to adapt it. Reading the data through secondary variables revealed, in particular, a strong disparity in answer distributions across respondents, supporting the subjective, personal and context-dependent nature of the evaluation process. It also identified two groups of attributes: the first contains attributes related to the source's content (objectives, authority, accuracy, etc.), while the second is made up of attributes related to its form (presentation, ease of use, etc.).

The second-phase interviews refined the analysis by shedding light on the nature of the change and on its causes. Respondents indicated that the evaluation process is fundamentally the same whichever repertoire is considered. They admit, however, that the typical Web 2.0 genres can cause a loss of bearings, explained by a perceived lack of familiarity with the sources and resulting in a loss of the confidence they place in the sources and in their own judgement. The perceived change therefore appears as an increase in the importance of certain attributes, which helps respondents restore that confidence. The explanatory element of this change can be seen as a blurring of the modalities of content creation, with three dimensions: how the content is created (How?), the identity of its creator (Who?) and its nature (What?). These dimensions can be summed up by the idea that anyone can publish anything.

The in-depth interviews confirm that the content-related and form-related attribute groups do explain how the change manifests itself. For the attributes that gain importance, the reasons given refer to the fact that the ease of content creation with these genres lets 'anyone' create content; this is why the source's authority and objectives may receive closer attention than on classic online source genres. The fact that anyone can publish 'anything' refers to the nature of the content these genres create: it is seen as dynamic, personal, trend-indicating, a source of weak and subjective signals, and so on, which leads respondents sensitive to these issues to question accuracy more seriously, for example. It is also because of the ease of content creation, and because Web 2.0 tools reduce the author's responsibility for the quality of the source's design, that form attributes, when their importance changes, tend to decrease. Respondents described the second group more as indicators of seriousness and as arbiters in their evaluation process.

Whether discussing differences of opinion among respondents or the specificities of the genres, a crucial aspect of information quality turns out to be its ability to meet the needs of the moment: fitness for use. This notion is closely tied to relevance, and both were firmly presented as decisive in the strategies, both in judging a specific piece of information and in the attitude towards sources in general. In every case, it is information needs that guide the choice first. Together, these observations provide a clear, rich and nuanced answer to the research question. / Doctorate in Information and Communication
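A minimal pandas sketch of how the questionnaire's '=' / '>' / '<' answers could be tallied into the unchanged-versus-changed split reported above; the answers table (one row per respondent, criterion and genre) is an illustrative assumption, not the survey data.

```python
# Minimal sketch of tallying the "=" / ">" / "<" questionnaire answers.
# The answers DataFrame is an illustrative assumption.
import pandas as pd

answers = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3, 3, 3],
    "criterion": ["authority", "accuracy", "authority", "presentation",
                  "objectives", "ease_of_use", "currency"],
    "genre": ["blog", "wiki", "blog", "podcast", "wiki", "blog", "wiki"],
    "answer": ["=", ">", "=", "<", ">", "=", "="],
})

share = answers["answer"].value_counts(normalize=True)
print(f"unchanged ('='): {share.get('=', 0):.0%}")
print(f"changed ('>' or '<'): {1 - share.get('=', 0):.0%}")
print(answers.groupby("answer").size())  # direction of the change
```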
|