1. Centralizace a správa distribuovaných informací / Centralization and maintenance of distributed information. Valčák, Richard. January 2010.
The master's thesis deals with web mining, information sources, methods for unattended access to these sources, and a summary of the available methods and tools. Web data mining is a very useful way to acquire required information for further processing. The work focuses on the design of a system for gathering required information from given sources. The thesis consists of three parts, each of which employs the developed library: an API intended for programmers, a server application that gathers information over time (such as exchange rates), and an example AWT application for processing tables available on the internet.
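The thesis library itself is not publicly available, but a minimal sketch of the kind of timed gathering server it describes might look as follows; the source URL, the regex, and the CSV log are hypothetical stand-ins.

```python
import re
import time
import urllib.request

RATE_URL = "https://example.com/rates"  # hypothetical source page

def fetch_eur_czk_rate() -> float:
    """Download the page and pull one exchange rate out with a regex."""
    with urllib.request.urlopen(RATE_URL, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r"EUR/CZK[^0-9]*([0-9]+[.,][0-9]+)", html)
    if match is None:
        raise ValueError("rate not found on page")
    return float(match.group(1).replace(",", "."))

def run_collector(interval_s: int = 3600) -> None:
    """Poll the source periodically and append each sample to a CSV log."""
    while True:
        try:
            with open("rates.csv", "a", encoding="utf-8") as log:
                log.write(f"{time.time()},{fetch_eur_czk_rate()}\n")
        except Exception as exc:  # keep the collector alive on bad fetches
            print("fetch failed:", exc)
        time.sleep(interval_s)
```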
2. Aggregating product reviews for the Chinese market. Wu, Yongliang. January 2009.
As of December 2007, the number of Internet users in China had increased to 210 million people. The annual growth rate reached 53.3 percent in 2008, with the number of Internet users increasing by an average of 200,000 people every day. Currently, China's Internet population is slightly lower than the 215 million Internet users in the United States. [1] Despite the rapid growth of the Chinese economy in the global Internet market, China's e-commerce does not follow the traditional pattern of commerce, but has instead developed based on user demand. This growth has extended into every area of the Internet. In the West, expert product reviews have been shown to be an important element in a user's purchase decision: the higher the quality of the product reviews that customers receive, the more products they buy from online shops. As the number of products and options increases, Chinese customers need impersonal, impartial, and detailed product reviews. This thesis focuses on online product reviews and how they affect Chinese customers' purchase decisions. E-commerce is a complex system. As a typical model of e-commerce, we examine a Business to Consumer (B2C) online retail site and consider a number of factors, including some seemingly subtle factors that may influence a customer's eventual decision to shop on a website. Specifically, this thesis project examines aggregated product reviews from different online sources by analyzing some existing western companies. Following this, the thesis demonstrates how to aggregate product reviews for an e-business website. During this thesis project we found that existing data mining techniques make it straightforward to collect reviews. These reviews were stored in a database, and web applications can query this database to provide a user with a set of relevant product reviews. One of the important issues, just as with search engines, is providing relevant product reviews and determining the order in which they should be presented. In our work we selected reviews by matching the product (although in some cases there are ambiguities concerning whether two products are actually identical) and ordered the matching reviews by date, with the most recent reviews presented first. Some of the open questions that remain for the future are: (1) improving the matching, to avoid ambiguity concerning whether two reviews are about the same product, and (2) determining whether the availability of product reviews actually affects a Chinese user's decision to purchase a product.
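A minimal sketch of the aggregation back-end described here, assuming a SQLite store and an already-normalized product identifier (the product-matching ambiguity the author mentions is out of scope), with matching reviews returned most recent first:

```python
import sqlite3

conn = sqlite3.connect("reviews.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS review (
        product_id TEXT,   -- normalized product identifier
        source     TEXT,   -- site the review was collected from
        posted_on  TEXT,   -- ISO date, e.g. '2009-03-01'
        body       TEXT
    )
""")

def add_review(product_id: str, source: str, posted_on: str, body: str) -> None:
    """Store one collected review."""
    conn.execute("INSERT INTO review VALUES (?, ?, ?, ?)",
                 (product_id, source, posted_on, body))
    conn.commit()

def reviews_for(product_id: str) -> list:
    """Return all reviews matching a product, most recent first."""
    cur = conn.execute(
        "SELECT source, posted_on, body FROM review "
        "WHERE product_id = ? ORDER BY posted_on DESC",
        (product_id,))
    return cur.fetchall()
```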
3. Une approche générique pour l'analyse croisant contenu et usage des sites Web par des méthodes de bipartitionnement / A generic approach to combining web content and usage analysis using biclustering algorithms. Charrad, Malika. 22 March 2010.
In this thesis, we propose WCUM (Web Content and Usage Mining based approach), a new approach for linking the content analysis to the usage analysis of a website, in order to better understand the general behavior of the site's visitors. This work is based on the CROKI2 block clustering algorithm, implemented with two different optimization strategies that we compare through experiments on artificially generated data. To mitigate the problem of determining the number of clusters on rows and columns, we propose generalizing some indices, originally introduced to evaluate the partitions produced by simple clustering algorithms, to the bipartitions produced by simultaneous clustering algorithms. To evaluate the performance of these indices on data with a bicluster structure, we propose an algorithm for generating artificial data, which we use to perform simulations and validate the results. Experiments on artificial data, as well as an application to real data, were carried out to assess the effectiveness of the proposed approach.
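CROKI2 itself is not publicly packaged; as a stand-in, the same simultaneous row-and-column clustering idea can be tried with scikit-learn's SpectralCoclustering. The planted-block toy data and the fixed number of clusters are illustrative only; determining that number is precisely the problem the thesis addresses.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy page-by-term matrix standing in for content/usage data.
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(40, 30)).astype(float)
X[:20, :15] += 5.0   # plant one block of co-occurring rows/columns
X[20:, 15:] += 5.0   # and a second block

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)
print("row labels:   ", model.row_labels_)
print("column labels:", model.column_labels_)
```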
4. DRFS: Detecting Risk Factor of Stroke Disease from Social Media Using Machine Learning Techniques. Pradeepa, S., Manjula, K. R., Vimal, S., Khan, Mohammad S., Chilamkurti, Naveen, Luhach, Ashish Kr. 01 January 2020.
Humans are generally said to be social animals. On the vast internet, it is difficult to detect and extract useful information about a medical illness. Pending more definitive studies of a causal association between stroke risk and social networks, it would be useful to help individuals on social media detect their risk of stroke. In this work, a DRFS methodology is proposed to find the various symptoms associated with stroke disease, and preventive measures against it, from social media content. We define an architecture for clustering tweets based on their content using Spectral Clustering in an iterative fashion. Class labels are detected using the words with the highest TF-IDF values. The clusters obtained as the output of spectral clustering are passed as input to a Probabilistic Neural Network (PNN) to obtain the appropriate class labels and their probabilities. Frequent word sets are then found in the clusters using a support-count measure in order to identify the risk factors of stroke. We found that the proposed approach is able to recognize new symptoms and causes that are not listed by the World Health Organization (WHO), the Mayo Clinic, and the National Health Survey (NHS), and that the associated outcomes reflect real statistics. Such experiments will enable health organizations, doctors, and government agencies to keep track of stroke disease. Experimental results show the causes, preventive measures, and high and low risk factors of stroke disease.
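A hedged sketch of the first two stages of this pipeline (spectral clustering of tweets plus highest-TF-IDF cluster labels) using scikit-learn; the toy tweets are invented, and the PNN and frequent-word-set stages are omitted since no standard scikit-learn implementation exists for them.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [  # invented examples standing in for collected tweets
    "high blood pressure is a major stroke risk",
    "smoking and alcohol raise stroke risk",
    "numbness in the face can be a stroke symptom",
    "sudden trouble speaking is a warning sign of stroke",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

# Cosine-similarity affinity between tweets, clustered spectrally.
affinity = (X @ X.T).toarray()
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)

# Label each cluster with its highest-TF-IDF terms.
terms = np.array(vec.get_feature_names_out())
for c in range(2):
    centroid = X[labels == c].mean(axis=0).A1
    print(c, terms[centroid.argsort()[-3:][::-1]])
```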
5. Predictive models for online human activities. Yang, Shuang-Hong. 04 April 2012.
The availability and scale of user-generated data in online systems raise tremendous challenges and opportunities for the analytical study of human activities. Effective modeling of online human activities is not only fundamental to the understanding of human behavior, but also important to the online industry. This thesis focuses on developing models and algorithms to predict human activities in online systems and to improve the algorithmic design of personalized/socialized systems (e.g., recommendation, advertising, and Web search systems). We are particularly interested in three types of online user activities, i.e., decision making, social interactions, and user-generated content. Centered around these activities, the thesis focuses on three challenging topics:
1. Behavior prediction, i.e., predicting users' online decisions. We present Collaborative-Competitive Filtering, a novel game-theoretic framework for predicting users' online decision-making behavior and leveraging that knowledge to optimize the design of online systems (e.g., recommendation systems) with respect to certain strategic goals (e.g., sales revenue, consumption diversity).
2. Social contagion, i.e., modeling the interplay between social interactions and individual decision-making behavior. We establish the joint Friendship-Interest Propagation model and the Behavior-Relation Interplay model, a series of statistical approaches that characterize an individual user's decision-making behavior, the interactions among socially connected users, and the interplay between these two activities. These techniques are demonstrated by applications to social behavior targeting.
3. Content mining, i.e., understanding user-generated content. We propose the Topic-Adapted Latent Dirichlet Allocation model, a probabilistic model for identifying a user's hidden cognitive aspects (e.g., knowledgeability) from the texts created by the user. The model is successfully applied to address the challenge of the "language gap" in medical information retrieval; a rough illustrative sketch follows this list.
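The Topic-Adapted LDA model is specific to the thesis; as a rough illustration of the underlying machinery only, standard LDA can be run with scikit-learn on a toy lay-versus-expert corpus (the "language gap" the author mentions). The documents are invented.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny corpus of "user-generated" health questions; the thesis uses far
# larger data and a topic-adapted variant of LDA, not this vanilla form.
docs = [
    "my head hurts and I feel dizzy",
    "patient presents with cephalalgia and vertigo",
    "what pills help with a cold",
    "recommended antiviral treatment for influenza",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
print(doc_topics.round(2))  # per-document topic mixtures
```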
6. Extrakce strukturovaných dat z českého webu s využitím extrakčních ontologií / Extracting Structured Data from Czech Web Using Extraction Ontologies. Pouzar, Aleš. January 2012.
The presented thesis deals with automatic information extraction from HTML documents in two selected domains: laptop offers are extracted from e-shops, and freely published job offers are extracted from company sites. The extraction process outputs structured data of high granularity, grouped into data records in which a corresponding semantic label is assigned to each data item. The task was performed using the extraction system Ex, which combines two approaches: manually written rules and supervised machine learning algorithms. Thanks to expert knowledge in the form of extraction rules, the lack of training data could be overcome. The rules are independent of the specific formatting structure, so one extraction model can be used for a heterogeneous set of documents. The success achieved by the extraction process in the case of laptop offers showed that an extraction ontology describing one or a few product types can be combined with wrapper induction methods to automatically extract offers for all product types at web scale with minimal human effort.
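The Ex system's rule language is not reproduced here; the following sketch only illustrates the general idea of hand-written extraction rules that attach semantic labels to matched data items, with invented attribute names and patterns.

```python
import re

# Each rule pairs a semantic label with a pattern; attribute names and
# patterns are illustrative, not those used by the Ex system.
RULES = {
    "ram":     re.compile(r"(\d+)\s*GB\s*RAM", re.I),
    "display": re.compile(r'(\d{2}(?:\.\d)?)["”]'),
    "price":   re.compile(r"(\d[\d\s]*)\s*(?:CZK|Kč)", re.I),
}

def extract_record(text: str) -> dict:
    """Return one data record with a semantic label per matched item."""
    record = {}
    for label, pattern in RULES.items():
        match = pattern.search(text)
        if match:
            record[label] = match.group(1).strip()
    return record

print(extract_record('Laptop 15.6" display, 8 GB RAM, 18 990 CZK'))
# -> {'ram': '8', 'display': '15.6', 'price': '18 990'}
```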
7. Information Retrieval Based on DOM Trees. Alarte Aleixandre, Julián. 14 September 2023.
For several years, the amount of information available on the Web has been growing exponentially. Every day, a huge amount of data is generated and made immediately available on the Web. Indexers and crawlers browse the Web daily to find the new information that has been added, and they make it available to answer users' search queries. However, the amount of information is so huge that it must be preprocessed. Given that users are only interested in the relevant information, there is no need for indexers and crawlers to process the boilerplate, redundant, or useless elements of web pages. Processing such irrelevant elements leads to an unnecessary waste of resources, such as storage space, runtime, bandwidth, etc. Different studies have shown that between 40% and 50% of the data on the Web are noisy elements. For this reason, several techniques focused on the detection of both relevant and irrelevant data have been developed over the last 20 years. The problems of identifying the relevant content of a web page, its template, its menu, etc. can be faced in various ways, and for this reason there exist completely different techniques to address them. This thesis focuses on the development of information retrieval techniques based on DOM trees. Its goal is to detect different parts of a web page, such as the main content, the template, and the main menu.
Most of the existing techniques focus on detecting text inside the main content of web pages, mainly by removing the template of the web page or by inferring the main content. The techniques proposed in this thesis do not only extract text by eliminating the template or inferring the main content; they also extract any other relevant information from web pages, such as images, animations, videos, etc. Our techniques are not only useful for indexers and crawlers but also for users browsing the Web. For instance, for users with functional diversity (such as blindness), removing noisy elements can make web pages easier to read (or listen to). To make the techniques broadly accessible, we have implemented them as browser extensions, compatible with Mozilla-based and Chromium-based browsers. In addition, these tools are publicly available, so any interested person can access them and continue the research if they wish to do so. / Alarte Aleixandre, J. (2023). Information Retrieval Based on DOM Trees [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/196679
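None of the thesis's algorithms are reproduced here; as a rough illustration of DOM-tree-based content detection, a common text-density heuristic can be sketched with BeautifulSoup. The HTML sample and the scoring are invented.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def text_density(node) -> float:
    """Characters of text per descendant tag; a common relevance cue."""
    text = node.get_text(" ", strip=True)
    tags = max(len(node.find_all(True)), 1)
    return len(text) / tags

def main_content(html: str):
    """Pick the DOM subtree with the highest text density.

    A simple heuristic in the spirit of the thesis, not its algorithm."""
    soup = BeautifulSoup(html, "html.parser")
    candidates = soup.find_all(["div", "article", "main", "section"])
    return max(candidates, key=text_density, default=soup.body)

html = """<html><body>
  <div id="menu"><a href="/">Home</a><a href="/x">X</a></div>
  <div id="content"><p>The actual article text lives here, and it is
  considerably longer than the navigation around it.</p></div>
</body></html>"""
print(main_content(html).get("id"))  # -> content
```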