151

Proposta de sistema de busca de jogos eletrônicos pautada em ontologia e semântica / Proposal of an electronic game search system based on ontology and semantics

Lopes, Rodrigo Arthur de Souza Pereira 10 August 2011 (has links)
Universidade Presbiteriana Mackenzie / With the constant growth in the number of websites, and the consequent increase in content available throughout the Internet, developing search mechanisms that enable access to reliable information has become a complex activity. This work presents a review of how search mechanisms behave and how they map information, covering ontologies, knowledge bases, and forms of knowledge representation on the Internet. These models make up the Semantic Web, a proposal for the organization of information. Based on these elements, a search mechanism was developed for a specific domain: videogames. The mechanism draws on the classification of electronic games by specialized review websites, from which information about selected titles can be extracted. The work is divided into four stages. First, a webcrawler extracts data on a predetermined list of titles from the aforementioned websites. Second, the collected data is analyzed on two fronts, using natural computing and power-law concepts. Third, an ontology for videogames is constructed and published in a knowledge base accessible to the software. Lastly, the search mechanism itself is implemented; it makes use of the knowledge base to bring the user suggestions pertinent to the search, such as related titles or characteristics intrinsic to the games. The work also proposes a model that may be applied to other domains, such as movies, travel destinations, electronic appliances and software, among others.
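As a minimal sketch of the final stage described above, assuming a hypothetical game ontology published in Turtle with illustrative class and property names (not the thesis's actual vocabulary), a SPARQL query over the knowledge base could surface related titles:

```python
from rdflib import Graph

# Load a (hypothetical) game ontology serialized as Turtle; the property
# names below are illustrative, not the ones defined in the thesis.
g = Graph()
g.parse("games.ttl", format="turtle")

# Suggest titles that share a genre with the game the user searched for,
# mirroring the "related characteristics" suggestions described above.
query = """
PREFIX ex: <http://example.org/games#>
SELECT DISTINCT ?suggestion WHERE {
    ?game  ex:title "Chrono Trigger" ;
           ex:genre ?genre .
    ?other ex:genre ?genre ;
           ex:title ?suggestion .
    FILTER (?other != ?game)
}
"""
for row in g.query(query):
    print(row.suggestion)
```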
152

On-line marketing - princip aukce mění svět reklamy. / On-line marketing - Principle of an auction changes the world of advertising

Jankovič, Zdeněk January 2009 (has links)
The diploma thesis 'On-line marketing - Principle of an auction changes the world of advertising' deals with the internet as a medium, online commercial communication, search engines and performance marketing. The thesis is divided into three parts. The first part covers the internet as a medium and the current situation in the Czech internet market. The second part describes the marketing communication mix on the internet, the forms of online advertising, and the pros and cons of advertising online. The third part deals with advertising within search engines, including PPC systems and SEM (Search Engine Marketing). The contribution of this thesis lies in showing how the internet can be used for promotion.
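The auction principle behind classic PPC systems is typically a generalized second-price rule: each advertiser pays roughly the bid of the advertiser ranked just below them. A minimal sketch with invented bids (not data from the thesis):

```python
# Generalized second-price (GSP) auction: each advertiser pays just enough
# to keep their slot, i.e. the next-ranked bid plus a minimum increment.
bids = {"alpha": 1.20, "beta": 0.90, "gamma": 0.55}  # illustrative CZK/click

ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
increment = 0.01
for (winner, _), (_, next_bid) in zip(ranked, ranked[1:]):
    print(f"{winner} pays {next_bid + increment:.2f} per click")
# The lowest-ranked advertiser pays the reserve price, omitted here.
```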
153

Entertainics

Garza, Jesus Mario Torres 01 January 2003 (has links)
Entertainics is a web-based software application that gathers information about DVD players from several websites on the internet. The purpose of this software is to help users search for DVD players faster and more easily, avoiding the need to navigate every website that carries this product.
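A sketch of the aggregation idea, assuming hypothetical retailer URLs and CSS selectors (real sites differ, and Entertainics's actual implementation is not reproduced here):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical product pages; an Entertainics-style aggregator visits each
# retailer once so the user does not have to.
SITES = [
    ("https://example-store-a.com/dvd-players", "div.product"),
    ("https://example-store-b.com/catalog/dvd", "li.item"),
]

def gather_players():
    results = []
    for url, selector in SITES:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for node in soup.select(selector):
            name = node.select_one(".name")
            price = node.select_one(".price")
            if name and price:
                results.append((name.get_text(strip=True),
                                price.get_text(strip=True)))
    return results

if __name__ == "__main__":
    for name, price in gather_players():
        print(name, price)
```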
154

Att skapa en upplevelse av god svarskvalitet i VAPA / Creating an experience of good answer quality in VAPA

Börjesson, Tim, Kumlin, Filip January 2021 (has links)
Voice-activated personal assistants (VAPA) have recently become more common in everyday use. Because VAPAs are used as search engines, it is important that they can deliver an answer that the user feels is of good quality. Previous studies have conducted quantitative tests to examine response quality in VAPA without taking the user's experience into account. We present a study intended to fill this knowledge gap. Through a literature study, five base themes were identified: relevance, credibility, readability, timeliness and richness of content, which together form the basis for an experience of good response quality. Through an interview study with nine respondents, their experience of VAPA response quality was investigated based on these themes. The results of the study are: (1) The study showed a complex relationship between the themes, where some themes were dependent on others. (2) Richness of content was shown to have the opposite effect in a VAPA compared to traditional search tools, as users wanted VAPA answers to be short and concise rather than rich in content. (3) Answers in VAPA should be in the correct form for the question asked; thus the answers should be simple, clear and free of unnecessary information, such as advertisements, that could disturb the user in the search for an answer. (4) The credibility of the answers depends on the sources' reputations and the user's knowledge of the source, with some users placing blind trust in certain sources.
155

Framtagning av en konceptuell kostnadsmodell för sökmotoroptimerade webbapplikationer : Ett förslag på kostnadsmodell som beskriver uppkomna kostnader utifrån centrala aktiviteter / Development of a conceptual cost model for search engine optimized web applications : An alternative cost model that describes expenses based on key activities

Rosvall, Oliver January 2021 (has links)
The rise of technology has changed the way people communicate and live their lives. Nowadays, people can book a meeting, order food, or buy a trip online. The change in consumption habits makes it vital for companies to establish a digital presence. As a result, more companies are choosing to develop web applications to sell and market their products. Many marketing strategies exist for gaining visibility, but in recent years search engines have proven popular. Search engine marketing can be done with search engine optimization (SEO) and search engine advertising. Both methods address different areas that make a web application visible on search engines such as Google, Yahoo and Bing, and they are therefore usually combined to generate a higher number of visitors. Calculating the price of search engine advertising is easy because the web owner pays a price for each ad click. Calculating the price of SEO is somewhat more complicated, as the cost depends on the optimization work performed. The problem is that there is no known cost model that presents the cost categories arising during development and maintenance of a search engine optimized web application. The purpose of this report is therefore to develop and present a conceptual cost model that clarifies the costs incurred in key activities. The goal is a model that can be used by companies, organizations, research groups, and individuals to identify and categorize the costs incurred in creating and maintaining a search engine optimized web application. The conceptual cost model was developed through a qualitative study, which means that the results are based on observations, experiences, and sensory impressions. Data collection was done using an exploration model consisting of two research criteria: the work began by exploring (1) the key activities that affect cost, and then studied (2) initial and running costs. A case study and four interviews served as the report's research instruments. The collected data was analyzed with a thematic analysis in which similarities and differences were identified, and a primary cost model was created from the findings. The primary cost model was then evaluated using an evaluation model with three research criteria. The evaluation was conducted with one of the interviewees and focused on how well the model reflects reality; the model's overall (1) structure, (2) activities, and (3) cost categories were assessed. The evaluation resulted in a final cost model called SEOCM (Search Engine Optimization Cost Model), which captures and describes the key activities that drive development and maintenance costs for a search engine optimized web application.
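As a rough illustration of what an activity-based model like SEOCM implies, the sketch below sums one-off and running costs per activity. The activity names and figures are invented for illustration and are not taken from the thesis:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    initial_cost: float   # one-off development cost
    monthly_cost: float   # recurring maintenance/optimization cost

# Illustrative activities; SEOCM's actual activity list and cost
# categories are defined in the thesis, not reproduced here.
activities = [
    Activity("keyword research", 8_000, 1_000),
    Activity("technical on-page optimization", 20_000, 2_500),
    Activity("content production", 15_000, 6_000),
]

def total_cost(months: int) -> float:
    """Initial costs plus running costs over a planning horizon."""
    return sum(a.initial_cost + a.monthly_cost * months for a in activities)

print(f"First-year cost: {total_cost(12):,.0f} SEK")
```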
156

”Jag trivs ändå i min lilla bubbla” – En studie om studenters attityder till personalisering / ”I'm still comfortable in my little bubble” – A study of students' attitudes towards personalization

Hedin, Alice January 2016 (has links)
This essay studies students' attitudes towards the development of web personalization and explores where those attitudes differ and converge. The empirical material was collected through five qualitative interviews and a quantitative survey with 72 respondents. The study discusses the pros and cons of web personalization, the possibilities to constrain it, and its possible effects and outcomes. The majority of the students have a positive attitude towards web personalization. The students were most positive towards personalization of streaming services and least positive towards personalization of news services. There was a clear difference in the students' knowledge of extensions and tools that can be used to prevent or constrain web personalization: students in more technical programs showed greater knowledge of such tools. The results also showed that the students overall desire more control over web personalization and want a function for turning personalization of web services off. The study found that web personalization has become a part of users' everyday lives and that the students do not have enough knowledge of it, which has led to a passive attitude towards personalization and its possible consequences.
157

Semantic Web Identity of academic organizations / search engine entity recognition and the sources that influence Knowledge Graph Cards in search results

Arlitsch, Kenning 11 January 2017 (has links)
Semantic Web Identity (SWI) characterizes an entity that has been recognized as such by search engines. The display of a Knowledge Graph Card in Google search results for an academic organization is proposed as an indicator of SWI, as it demonstrates that Google has gathered enough verifiable facts to establish the organization as an entity. This recognition may in turn improve the accuracy and relevancy of its referrals to that organization. This dissertation presents findings from an in-depth survey of the 125 member libraries of the Association of Research Libraries (ARL). The findings show that these academic libraries are poorly represented in the structured data records that are a crucial underpinning of the Semantic Web and a significant factor in achieving SWI. Lack of SWI extends to other academic organizations, particularly those at the lower hierarchical levels of academic institutions, including colleges, departments, centers, and research institutes. A lack of SWI may affect other factors of interest to academic organizations, including the ability to attract research funding, increase student enrollment, and improve institutional reputation and ranking. This study hypothesizes that the poor state of SWI is in part the result of a failure by these organizations to populate appropriate Linked Open Data (LOD) and proprietary Semantic Web knowledge bases. The situation represents an opportunity for academic libraries to develop the skills and knowledge to establish and maintain their own SWI, and to offer SWI services to other academic organizations in their institutions. The research examines the current state of SWI for ARL libraries and some other academic organizations, and describes case studies that validate the effectiveness of proposed techniques to correct the situation. It also explains new services being developed at the Montana State University Library to address SWI needs on its campus, which could be adapted by other academic libraries.
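One concrete way an organization populates the structured data the dissertation describes is schema.org markup embedded in its pages as JSON-LD. A hedged sketch with illustrative values (not the dissertation's recommended profile):

```python
import json

# schema.org description of a library, the kind of structured data a
# search engine can use to establish the entity behind a Knowledge
# Graph Card. All values are illustrative placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Library",
    "name": "Example University Library",
    "url": "https://www.lib.example.edu/",
    "parentOrganization": {
        "@type": "CollegeOrUniversity",
        "name": "Example University",
    },
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0",  # placeholder entity ID
    ],
}

# Embed in a page as <script type="application/ld+json">...</script>
print(json.dumps(org, indent=2))
```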
158

The liability of internet intermediaries

Riordan, Jaani January 2013 (has links)
Internet intermediaries facilitate a wide range of conduct using services supplied over the layered architecture of modern communications networks. Members of this class include search engines, social networks, internet service providers, website operators, hosts, and payment gateways, which together exert a critical and growing influence upon national and global economies, governments and cultures. This research examines who should face legal responsibility when wrongdoers utilise these services tortiously to cause harm to others. It has three parts. Part 1 seeks to understand the nature of an intermediary and how its liability differs from the liability of primary defendants. It classifies intermediaries according to a new layered, functional taxonomy and argues that many instances of secondary liability in English private law reflect shared features and underlying policies, including optimal loss-avoidance and derivative liability premised on an assumption of responsibility. Part 2 analyses intermediaries’ monetary liability for secondary wrongdoing in two areas of English law: defamation and copyright. It traces the historical evolution of these doctrines at successive junctures in communications technology, before identifying and defending limits on that liability which derive from three main sources: (i) in-built limits contained in definitions of secondary wrongdoing; (ii) European safe harbours and general limits on remedies; and (iii) statutory defences and exceptions. Part 3 examines intermediaries’ non-monetary liability, in particular their obligations to disclose information about alleged primary wrongdoers and to cease facilitating wrongdoing where it is necessary and proportionate to do so. It proposes a new suite of non-facilitation remedies designed to restrict access to tortious internet materials, remove such materials from search engines, and reduce the profitability of wrongdoing. It concludes with several recommendations to improve the effectiveness and proportionality of remedies by reference to considerations of architecture, anonymity, efficient procedures, and fundamental rights.
159

Semantiska webben och sökmotorer / Semantic web and search engines

Haj-Bolouri, Amir January 2010 (has links)
This report deals with the definitions and terms that relate to the semantic web. The main purpose has been to investigate how the semantic web affects search engines on the web. This has been done through an investigation of ten different search engines, nine of which are semantic search engines, while the tenth is the most used search engine on the web today. The study is conducted as both a descriptive and a quantitative study. A literature review of relevant sources on the semantic web and search engines has also been carried out. The conclusions drawn are that the semantic web is multifaceted in its definitions, and that how concrete search engines apply semantic web principles can vary depending on which search engine one interacts with. Keywords: Semantic web, Semantiska webben, Semantik, Informatik, Web 2.0, Internet, Search engines, Sökmotorer
160

Multi-Agent User-Centric Specialization and Collaboration for Information Retrieval

Mooman, Abdelniser January 2012 (has links)
The amount of information on the World Wide Web (WWW) is growing rapidly in volume and topic diversity. This has made it increasingly difficult, and often frustrating, for information seekers to retrieve the content they are looking for, as information retrieval systems (e.g., search engines) are unable to decipher the relevance of the retrieved information as it pertains to what they are searching for. This issue can be decomposed into two aspects: 1) Variability of information relevance as it pertains to an information seeker. Different information seekers may enter the same search text, or keywords, but expect completely different results. It is therefore imperative that information retrieval systems incorporate a model of the information seeker in order to estimate the relevance and context of use of information before presenting results. In this context, a model means the capture of trends in the information seeker's search behaviour. This is what many researchers refer to as personalized search. 2) Information diversity. Information available on the World Wide Web today spans multitudes of inherently overlapping topics, and it is difficult for any information retrieval system to decide effectively on the relevance of the information retrieved in response to an information seeker's query. For example, an information seeker who wishes to use the WWW to learn about a cure for a certain illness would receive a more relevant answer if the search engine were optimized for such topic domains. This is what is referred to in WWW nomenclature as a 'specialized search'. This thesis maintains that an information seeker's search is not completely random and therefore tends to display consistent patterns of behaviour. Nonetheless, this behaviour, despite being consistent, can be quite complex to capture. To accomplish this goal, the thesis proposes a Multi-Agent Personalized Information Retrieval with Specialization Ontology (MAPIRSO). MAPIRSO offers a complete learning framework that is able to model the end user's search behaviour and interests and to organize information into categorized domains so as to ensure maximum relevance of its responses to end user queries. Specialization and personalization are accomplished using a group of collaborative agents. Each agent employs a Reinforcement Learning (RL) strategy to capture the end user's behaviour and interests. Reinforcement learning allows the agents to evolve their knowledge of the end user's behaviour and interests as they serve him or her, and to adapt to changes in that behaviour and those interests. Specialization is the process by which new information domains are created based on existing information topics, allowing new kinds of content to be built exclusively for information seekers. One of the key characteristics of specialization domains is that they are seeker-centric: intelligent agents create new information based on the information seekers' feedback and behaviours. Specialized domains are created by intelligent agents that collect information from a specific domain topic. The task of these specialized agents is to map the user's query to a repository of specific domains in order to present users with relevant information.
As a result, mapping users' queries to only relevant information is one of the fundamental challenges in Artificial Intelligence (AI) and machine learning research. Our approach employs intelligent cooperative agents that specialize in building personalized ontology information domains pertaining to each information seeker's specific needs. Specializing and categorizing information into unique domains has been addressed before, and various proposed solutions have been evaluated and adopted to cope with growing information. However, categorizing information into unique domains does not satisfy each individual information seeker: seekers may search for similar topics but hold different interests. For example, medical information in a specific medical domain has different importance to a doctor than to a patient. The thesis presents a novel solution that addresses growing and diverse information by building seeker-centric specialized information domains that are personalized through the information seekers' feedback and behaviours. To address this challenge, the research examines the fundamental components that constitute the specialized agent: an intelligent machine learning system, user input queries, an intelligent agent, and information resources constructed through specialized domains. Experimental work is reported to demonstrate the efficiency of the proposed solution in addressing overlapping information growth. The experiments utilize extensive user-centric specialized domain topics and employ personalized, collaborative multi-agent learning and ontology techniques, thereby enriching the user's queries and domains. The experiments and results have shown that building specialized ontology domains pertinent to information seekers' needs is more precise and efficient than other information retrieval applications and existing search engines.
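As a toy illustration of the RL loop described above (not MAPIRSO's actual formulation), an epsilon-greedy agent can learn which specialized domain best serves a seeker from click feedback:

```python
import random

# Toy epsilon-greedy agent: learns which specialized domain best satisfies
# a given seeker, using click feedback as the reward signal. MAPIRSO's
# formulation is far richer; this only illustrates the RL loop.
DOMAINS = ["medicine", "gaming", "travel"]
values = {d: 0.0 for d in DOMAINS}   # estimated reward per domain
counts = {d: 0 for d in DOMAINS}
EPSILON = 0.1

def choose_domain() -> str:
    if random.random() < EPSILON:          # explore
        return random.choice(DOMAINS)
    return max(values, key=values.get)     # exploit

def update(domain: str, reward: float) -> None:
    counts[domain] += 1
    # Incremental mean keeps the estimate adapting to behaviour drift.
    values[domain] += (reward - values[domain]) / counts[domain]

for _ in range(1000):
    d = choose_domain()
    clicked = random.random() < (0.8 if d == "medicine" else 0.3)  # simulated seeker
    update(d, 1.0 if clicked else 0.0)

print(max(values, key=values.get))  # converges to the seeker's preferred domain
```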
