About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Algoritmos para avaliação de confiança em apontadores encontrados na Web / Algorithms for Assessing the Reliability of Pointers Found on the Web

Souza, Jucimar Brito de 23 April 2009 (has links)
Search engines have become an essential tool for Web users. They use link analysis algorithms to explore the Web's link structure and estimate a popularity score for each page, treating each link as a vote of quality for the page it points to. This information is used in search engine ranking algorithms. However, many links found on the Web cannot be considered good quality votes; they introduce noise into ranking algorithms. Examples of such noisy links include repeated links, links resulting from page duplication, and spam. This work aims to detect noise in the link structure of search engine collections. We studied the impact of the noisy-link detection methods developed here in scenarios where page reputation is computed with both the PageRank and Indegree algorithms. Experiments showed improvements of up to 68.33% in Mean Reciprocal Rank (MRR) for navigational queries, and of up to 35.36% for randomly selected navigational queries, when the search engine uses PageRank. / Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
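Mean Reciprocal Rank, the metric reported above, averages the reciprocal of the rank at which the correct page first appears for each query. A minimal sketch in Python; the query results and target pages below are hypothetical:

```python
def mean_reciprocal_rank(ranked_results, correct_answers):
    """MRR: average of 1/rank of the first correct result per query."""
    total = 0.0
    for results, correct in zip(ranked_results, correct_answers):
        for rank, page in enumerate(results, start=1):
            if page == correct:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Two navigational queries: the target page ranks 1st and 2nd respectively.
queries = [["home", "spam1", "spam2"], ["spam1", "home2", "spam2"]]
targets = ["home", "home2"]
print(mean_reciprocal_rank(queries, targets))  # (1/1 + 1/2) / 2 = 0.75
```

Removing noisy links changes the ranked lists fed to this metric, which is how gains such as the reported 68.33% would be measured.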
22

Using Observers for Model Based Data Collection in Distributed Tactical Operations

Thorstensson, Mirko January 2008 (has links)
Modern information technology increases the use of computers in training systems as well as in command-and-control systems in military services and public-safety organizations. This computerization, combined with new threats, presents a challenging complexity. Situational awareness in evolving distributed operations, and follow-up in training systems, depend on humans in the field reporting observations of events. The use of this observer-reported information can be largely improved by models supporting both reporting and computer representation of objects and phenomena in operations.

This thesis characterises and describes observer model-based data collection in distributed tactical operations, where multiple, dispersed units work to achieve common goals. Reconstruction and exploration of multimedia representations of operations is becoming an established means of supporting taskforce training. We explore how modelling of operational processes and entities can support observer data collection and increase the information content of mission histories. We use realistic exercises to test the developed models, methods and tools for observer data collection, and transfer the results to live operations.

The main contribution of this thesis is the systematic description of the model-based approach to using observers for data collection. Methodological aspects of using humans to collect data for information systems, and modelling aspects of phenomena occurring in the emergency-response and communication areas, contribute to the body of research. We describe a general methodology for using human observers to collect adequate data for use in information systems. In addition, we describe methods and tools to collect data on the chain of medical attendance in emergency-response exercises, and on command-and-control processes in several domains.
23

Distribuição de tarefas em sistemas de workflow por meio da seleção induzida de recursos / Task Distribution in Workflow Systems Based on Induced Resource Selection

Silva, Rogério Sousa e 12 September 2007 (has links)
The assignment of tasks to the resources of a workflow system is called task distribution. Task distribution is an important activity in workflow systems, because it is necessary to ensure that each task is performed by the appropriate resource in due time. There are several approaches to task distribution in workflow systems. This work innovates by applying a link analysis technique to task distribution. Link analysis is used to rank the results of a Web query, considering the relevance of the pages. This work applies link analysis in the context of workflow task distribution: we propose a new task distribution algorithm (wf-hits) based on a link analysis algorithm, and compare wf-hits against related work in both quantitative and qualitative terms. Experiments showed that using wf-hits to distribute tasks to resources in workflow systems yields quantitative gains of around 25% while maintaining the same quality level as related work. / Master in Computer Science
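The abstract does not spell out wf-hits itself; as background, here is a minimal sketch of the classic HITS algorithm on which it is based, run on a hypothetical three-node link graph:

```python
def hits(graph, iterations=50):
    """Classic HITS: hub and authority scores via power iteration.
    graph: dict mapping each node to the list of nodes it links to."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority score: sum of hub scores of the nodes linking to n.
        auth = {n: sum(hub[u] for u, targets in graph.items() if n in targets)
                for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5
        auth = {n: v / norm for n, v in auth.items()}
        # Hub score: sum of authority scores of the nodes n links to.
        hub = {n: sum(auth[v] for v in graph.get(n, ())) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
hub, auth = hits(graph)
# "c" receives links from both "a" and "b", so it ends with the top authority score.
```

In the thesis's setting, one would interpret authority-like scores over a resource/task graph rather than over Web pages; that adaptation is what wf-hits adds.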
24

Descubrimiento y evaluación de recursos web de calidad mediante Patent Link Analysis

Font Julián, Cristina Isabel 26 July 2021 (has links)
Patents are legal documents that describe the exact operation of an invention, granting the right of economic exploitation to their owners in exchange for disclosing to society the details of how the invention works. For a patent to be granted, it must meet three requirements: be novel (not previously exhibited or published), involve an inventive step, and have industrial application. This makes patents valuable documents, since they contain a large amount of technical information not previously included in any other published or available document. Owing to the particular characteristics of patents, the resources they mention, as well as the resources that mention patents, contain links that can be useful and support various applications (technology watch, development and innovation, Triple-Helix, etc.) by providing complementary information, together with tools and techniques for extracting and analysing those links. The method proposed to achieve the objectives of the thesis is divided into two complementary blocks, Patent Outlink and Patent Inlink, which together make up the Patent Link Analysis technique. The study uses the United States Patent and Trademark Office (USPTO), collecting all patents granted between 2008 and 2018 (both included).

Once the information for each block was extracted, the dataset comprised 3,133,247 patents, 2,745,973 links contained in patents, 2,297,366 distinct web pages linked from patents, 17,001 unique web domains linking to patents, and 990,663 unique patents linked from web documents. The Patent Outlink analysis shows that both the share of patents containing links (20%) and the number of links per patent (median 4-5) are still low, but have grown significantly in recent years, and greater use can be expected in the future. There is a clear difference in the use of links between areas of knowledge (42% belong to Physics, especially Computing and Calculation), as well as between sections within the documents, which explains the results obtained and guides future analyses. The Patent Inlink analysis identifies considerably fewer web domains linking to patents (17,001 versus 256,724), but more links per linking document (the total number of links is similar for both blocks). The data also show high dispersion, with a few domains generating a large number of links. Both blocks show a strong relationship with technology companies and services, with differences in the links to universities and governments (more links in Outlink). Finally, it is verified that the proposed model allows efficient, effective and replicable extraction and analysis of the links contained in and directed to patent documents, and facilitates the discovery and evaluation of quality web resources. It is concluded that cybermetrics, through link analysis, provides valuable information for assessing quality web resources via the links contained in and pointing to patent documents.

The proposed and validated method makes it possible to define, model and characterise Patent Link Analysis as a subgenre of link analysis that can be used to build link-intelligence monitoring, evaluation and/or quality-assessment systems, among others, using the inbound and outbound links of patent documents, applicable to universities, research centres, and public and private companies. / This doctoral thesis was funded by the Government of Spain through the FPI predoctoral contract BES-2017-079741, awarded by the Ministerio de Ciencia e Innovación. / Font Julián, CI. (2021). Descubrimiento y evaluación de recursos web de calidad mediante Patent Link Analysis [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/170640
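The Patent Outlink side of such an analysis starts by extracting URLs from patent full text and aggregating them by domain. A minimal sketch; the URL pattern and the sample patent texts are illustrative assumptions, not the thesis's actual pipeline:

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s)\">]+")

def outlink_domains(patent_texts):
    """Count the web domains referenced across a collection of patent texts."""
    counts = Counter()
    for text in patent_texts:
        for url in URL_RE.findall(text):
            counts[urlparse(url).netloc.lower()] += 1
    return counts

patents = [
    "See https://example.edu/dataset for the training data.",
    "Described at https://example.edu/paper and https://github.com/acme/tool.",
]
print(outlink_domains(patents))
# Counter({'example.edu': 2, 'github.com': 1})
```

Domain-level counts like these are what reveal the dispersion the thesis reports, with a few domains concentrating most links.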
26

巨量資料環境下之新聞主題暨輿情與股價關係之研究 / A Study of the Relevance between News Topics & Public Opinion and Stock Prices in Big Data

張良杰, Chang, Liang Chieh Unknown Date (has links)
In recent years, as technology, networks, and storage media have developed, the amount of data generated has grown explosively, declaring a new era of big data. Having big data means we no longer rely on traditional sampling to collect data, and no longer face the issue of samples too small to represent the population. Once these limitations are overcome, the main challenge of big data is finding the valuable information within it. For example, social network sites (SNS) carry a wealth of public opinion and interpersonal information, and scholars have found that emotions expressed on SNS correlate positively with stock prices. This thesis therefore focuses on news, which shares the characteristics of big data, using a web crawler to collect a total of 30,879 economics news articles published by the Central News Agency between July 2013 and May 2014, and applying Topic Detection and Tracking and sentiment analysis to them. Finally, based on the similarity between news articles, links between events are aggregated into networks to analyse the relationship between news sentiment and the stock index. The results show that news events can be linked into specific news topics, different news topics can be identified within a large network, and a news topic context can be formed by linking topics together. This provides a new way to quickly digest a huge volume of news, and to backtrack news topics and news events effectively.

Regarding news sentiment and stock prices, the results show that news sentiment affects fluctuations of the stock index, with a correlation coefficient of 0.733562. Comparing the sentiment signal with the psychological line and trading-willingness indicators shows that news sentiment can, to a certain degree, serve as a reference for stock price judgements.
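The reported 0.733562 is a standard correlation coefficient between two time series. A minimal Pearson implementation; the daily sentiment scores and index closes below are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily aggregate news sentiment vs. stock index closes.
sentiment = [0.1, 0.4, 0.3, 0.8, 0.6]
index = [100.0, 103.0, 101.0, 108.0, 105.0]
print(round(pearson(sentiment, index), 3))  # → 0.986
```

A value near 0.73, as in the thesis, would indicate a strong but not deterministic relationship between sentiment and index movements.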
27

Recommending best answer in a collaborative question answering system

Chen, Lin January 2009 (has links)
The World Wide Web has become a medium for people to share information. People use Web-based collaborative tools such as question answering (QA) portals, blogs/forums, email and instant messaging to acquire information and to form online communities. In an online QA portal, a user asks a question and other users provide answers based on their knowledge, with a question usually being answered by many users. It can become overwhelming, and time- and resource-consuming, for a user to read all of the answers provided for a given question. Thus, there exists a need for a mechanism to rank the provided answers so users can focus on reading only good-quality answers. The majority of online QA systems use user feedback to rank users' answers, and the user who asked the question can decide on the best answer. Other users who did not participate in answering the question can also vote to determine the best answer. However, ranking the best answer via this collaborative method is time-consuming and requires the ongoing, continuous involvement of users to provide the needed feedback. The objective of this research is to discover a way to recommend the best answer, as part of a ranked list of answers for a posted question, automatically and without the need for user feedback. The proposed approach combines a non-content-based reputation method and a content-based method to solve the problem of recommending the best answer to the user who posted the question. The non-content method assigns a score to each user which reflects the user's reputation level in using the QA portal. Each user is assigned two types of non-content-based reputation scores: a local reputation score and a global reputation score. The local reputation score plays an important role in deciding the reputation level of a user for the category in which the question is asked. The global reputation score indicates the prestige of a user across all of the categories in the QA system.
Due to the possibility of user cheating, such as awarding the best answer to a friend regardless of the answer quality, a content-based method for determining the quality of a given answer is proposed, alongside the non-content-based reputation method. Answers for a question from different users are compared with an ideal (or expert) answer using traditional Information Retrieval and Natural Language Processing techniques. Each answer provided for a question is assigned a content score according to how well it matched the ideal answer. To evaluate the performance of the proposed methods, each recommended best answer is compared with the best answer determined by one of the most popular link analysis methods, Hyperlink-Induced Topic Search (HITS). The proposed methods are able to yield high accuracy, as shown by correlation scores: Kendall correlation and Spearman correlation. The reputation method outperforms the HITS method in terms of recommending the best answer. The inclusion of the reputation score with the content score improves the overall performance, which is measured through the use of Top-n match scores.
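The combination of a reputation score with a content score can be sketched as a weighted mix. The mixing weight `alpha` and the sample answers below are hypothetical, not taken from the thesis:

```python
def recommend_best_answer(answers, alpha=0.5):
    """Rank answers by a weighted mix of user reputation and content score.
    answers: list of (answer_id, reputation, content_score) tuples;
    alpha is a hypothetical mixing weight between the two signals."""
    ranked = sorted(
        answers,
        key=lambda a: alpha * a[1] + (1 - alpha) * a[2],
        reverse=True,
    )
    return [a[0] for a in ranked]

answers = [
    ("a1", 0.9, 0.20),  # high-reputation user, answer far from the expert one
    ("a2", 0.4, 0.95),  # low-reputation user, answer close to the expert one
    ("a3", 0.6, 0.60),
]
print(recommend_best_answer(answers))  # → ['a2', 'a3', 'a1']
```

With the two signals combined, a strong answer from a low-reputation user can outrank a weak answer from a high-reputation one, which is the cheating-resistance argument made above.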
28

Consensus opinion model in online social networks based on the impact of influential users / Modèle d'avis de consensus dans les réseaux sociaux en ligne basé sur l'impact des utilisateurs influents

Mohammadinejad, Amir 04 December 2018 (has links)
Online social networks are expanding and permeating our lives to the point that almost every person in the world is a member of at least one of them. Among well-known social networks are online shopping websites such as Amazon and eBay, which have members and to which the concepts of social networks apply. This thesis is particularly interested in online shopping websites and their networks. According to the statistics, people increasingly turn to these websites because of their reliability. Consumers refer to them for their needs (a product, a place to stay, or home appliances) and become their customers. One challenging issue is providing useful information to help customers with their shopping. Thus, an underlying question the thesis seeks to answer is how to provide comprehensive information to customers in order to help them shop. This matters to online shopping websites because such information satisfies customers and, as a result, increases both their customer base and the benefits to both sides. To address the problem, three connected studies are considered: (1) finding the influential users, (2) opinion propagation, and (3) opinion aggregation. In the first part, the thesis proposes a methodology for finding the influential users in the network, who are essential for accurate opinion propagation; users are ranked according to two scores, optimist and pessimist. In the second part, a novel opinion propagation methodology is presented to reach agreement and maintain consistency among users, which subsequently makes aggregation feasible. Propagation is conducted considering the impacts of the influential users and the neighbours.

Ultimately, in the third part, opinion aggregation is proposed to gather the existing opinions and present them as valuable information to customers for each product on the online shopping website. To this end, a weighted averaging operator and fuzzy techniques are used. The thesis presents a consensus opinion model for signed and unsigned networks; the solution can be applied to any group that needs to derive a plenary opinion from the opinions of its members. Consequently, the model proposed in the thesis provides an accurate and appropriate rating for each product on online shopping websites, giving customers precious information and a better insight into the products.
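The weighted-averaging aggregation step can be sketched as follows. The influence weights here stand in for the reputation scores the thesis derives from the network, and the ratings are hypothetical:

```python
def aggregate_opinions(opinions, influence):
    """Weighted-average aggregation of user opinions on one product.
    opinions: dict user -> rating; influence: dict user -> weight.
    Influential users' opinions count proportionally more."""
    total_weight = sum(influence[u] for u in opinions)
    return sum(influence[u] * r for u, r in opinions.items()) / total_weight

ratings = {"ann": 4.0, "bob": 2.0, "cho": 5.0}
weights = {"ann": 0.5, "bob": 0.25, "cho": 0.25}  # ann is most influential
print(aggregate_opinions(ratings, weights))  # → 3.75
```

An unweighted mean of these ratings would be about 3.67; the influence weighting pulls the consensus toward the most influential user's opinion, which is the core idea of the model.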
