571 |
A multi-dimensional entropy model of jazz improvisation for music information retrieval. Simon, Scott J. 12 1900 (has links)
Jazz improvisation provides a case context for examining information in music; entropy provides a means of representing music for retrieval. Entropy measures are shown to distinguish between different improvisations on the same theme, thus demonstrating their potential for representing jazz information for analysis and retrieval. The calculated entropy measures are calibrated against human representation by means of a case study of an advanced jazz improvisation course, in which synonyms for "entropy" are frequently used by the instructor. The data sets are examined for insights into music information retrieval, music information behavior, and music representation.
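As a rough illustration of how an entropy measure can separate two improvisations on the same theme, here is a minimal sketch; the pitch sequences and the single pitch dimension are hypothetical simplifications of the multi-dimensional model described above.

```python
from collections import Counter
from math import log2

def shannon_entropy(events):
    """Shannon entropy (bits) of a sequence of discrete musical events."""
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical pitch sequences: two improvisations over the same theme.
take_1 = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "A4"]
take_2 = ["C4", "Eb4", "F#4", "A4", "D5", "Bb3", "E4", "G#4"]

for name, take in [("take 1", take_1), ("take 2", take_2)]:
    print(name, round(shannon_entropy(take), 3), "bits")
# The more chromatic second take has higher pitch entropy, illustrating how entropy
# can distinguish improvisations on the same theme. A multi-dimensional model would
# compute such measures over several dimensions (pitch, rhythm, interval, dynamics)
# rather than pitch alone.
```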
|
572 |
A Generic Approach to Component-Level Evaluation in Information Retrieval. Kürsten, Jens 19 November 2012 (has links)
Research in information retrieval deals with the theories and models that constitute the foundations for any kind of service that provides access or pointers to particular elements of a collection of documents in response to a submitted information need. The specific field of information retrieval evaluation is concerned with the critical assessment of the quality of search systems. Empirical evaluation based on the Cranfield paradigm using a specific collection of test queries in combination with relevance assessments in a laboratory environment is the classic approach to compare the impact of retrieval systems and their underlying models on retrieval effectiveness.
In the past two decades, international campaigns like the Text Retrieval Conference have led to huge advances in the design of experimental information retrieval evaluations. In general, however, the focus of this system-driven paradigm has remained on the comparison of system results, i.e. retrieval systems are treated as black boxes. This black-box treatment has been criticised, and recent work on the subject has proposed studying system configurations and their individual components instead. This thesis proposes a generic approach to the evaluation of retrieval systems at the component level.
This thesis focuses on the key components needed to address typical ad-hoc search tasks, such as finding books on a particular topic in a large set of library records. A central approach in this work is the further development of the Xtrieval framework through the integration of widely used IR toolkits, in order to eliminate the limitations of the individual tools. Strong empirical results at international campaigns that provided various types of evaluation tasks confirm both the validity of this approach and the flexibility of the Xtrieval framework.
Modern information retrieval systems contain various components that are important for solving particular subtasks of the retrieval process. This thesis presents a detailed analysis of the key system components needed to address ad-hoc retrieval tasks. Here, the design and implementation of the Xtrieval framework offers a variety of approaches for flexible system configuration. Xtrieval has been designed as an open system and allows the integration of further components and tools, as well as the handling of search tasks other than ad-hoc retrieval. This design makes automated component-level evaluation of retrieval approaches possible.
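The component-level evaluation described here can be pictured as a sweep over all combinations of interchangeable components, each scored on a test collection. The following sketch is illustrative only, with made-up component names and a stubbed evaluate() function rather than the actual Xtrieval interfaces.

```python
from itertools import product

# Hypothetical component choices; a real study combines far more component
# instances, which is how experiments reach thousands of configurations.
stemmers = ["none", "porter", "krovetz"]
stopword_lists = ["none", "smart"]
ranking_models = ["bm25", "lm_dirichlet", "tfidf"]
collections = ["collection_A", "collection_B"]

def evaluate(stemmer, stopwords, model, collection):
    """Stand-in for indexing the collection, running the test topics through the
    ranking model, and scoring the result lists against relevance assessments."""
    return 0.0  # placeholder effectiveness score (e.g. MAP)

results = {}
for stemmer, stop, model, coll in product(stemmers, stopword_lists,
                                          ranking_models, collections):
    results[((stemmer, stop, model), coll)] = evaluate(stemmer, stop, model, coll)

# With every component instance varied independently, the number of runs grows
# multiplicatively across collections.
print(len(results), "runs:",
      len(stemmers) * len(stopword_lists) * len(ranking_models),
      "configurations on", len(collections), "collections")
```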
Both the scale and impact of these possibilities for the evaluation of retrieval systems are demonstrated by the design of an empirical experiment that covers more than 13,000 individual system configurations. This experimental set-up is tested on four test collections for ad-hoc search. The results of this experiment are manifold. For instance, particular implementations of ranking models fail systematically on all tested collections. The exploratory analysis of the ranking models empirically confirms the relationships between different implementations of models that share theoretical foundations. The obtained results also suggest that the impact of most instances of IR system components on retrieval effectiveness depends on the test collections used for evaluation. Due to the scale of the designed component-level evaluation experiment, not all possible interactions of the system components under examination could be analysed in this work. For this reason the resulting data set will be made publicly available to the entire research community. / The research field of information retrieval is concerned with theories and models that form the basis for any service which, in response to a formulated information need, provides access to, or a reference to, the corresponding elements of a document collection. The quality of search algorithms is investigated in the subfield of information retrieval evaluation. The classic approach to the empirical comparison of retrieval systems is based on the Cranfield paradigm and uses a specific corpus together with a set of example queries and their associated relevance assessments.
Over the past two decades, international evaluation campaigns such as the Text Retrieval Conference have led to major advances in the methodology of the empirical evaluation of search methods. However, the general focus of this system-based approach still lies on the comparison of complete systems, that is, the systems are treated as black boxes. Recently, this evaluation method has come under criticism, above all because of the black-box character of the object under study. Current work calls for a more differentiated look into the individual system properties and their components. This thesis presents a generic approach to the component-based evaluation of retrieval systems and investigates it empirically.
The focus of this dissertation is therefore on central components that are important for handling classic ad-hoc search problems, such as finding books on a particular topic in a set of library records. A central approach of the work is the further development of the Xtrieval framework through the integration of widely used retrieval systems, with the goal of mutually eliminating system-specific weaknesses. Outstanding results in international comparisons, across a wide range of search problems, demonstrate both the potential of the approach and the flexibility of the Xtrieval framework.
Modern retrieval systems contain numerous components that are important for solving specific subtasks within the overall retrieval process. The work presented here enables a close examination of the individual components of ad-hoc retrieval. To this end, the Xtrieval framework is presented, which allows a broad spectrum of methods to be combined flexibly with one another. The system is designed to be open and supports the integration of further methods as well as the handling of retrieval tasks beyond ad-hoc retrieval. This enables the component-based evaluation of retrieval methods that has repeatedly been called for in research but has not previously been realised successfully.
The power and significance of these evaluation possibilities are demonstrated with selected instances of the components in an empirical analysis of more than 13,000 system configurations. The results on the four ad-hoc test collections examined are manifold. For example, systematic failures of particular ranking models were identified, and the theoretical relationships between specific classes of these models were confirmed on the basis of empirical results. The scale of the experiment makes an analysis of all possible influences and interactions between the examined components impossible. The resulting empirical data are therefore made publicly available for further studies.
|
573 |
Recommendation in Enterprise 2.0 Social Media Streams. Lunze, Torsten 17 September 2014 (has links)
A social media stream allows users to share user-generated content as well as to aggregate different external sources into one single stream. In Enterprise 2.0, such social media streams empower co-workers to share their information and to work together efficiently and effectively while replacing email communication. As more users share information, it becomes impossible to read the complete stream, leading to information overload. Therefore, it is crucial to provide users with a personalized stream that suggests important and unread messages. The main characteristic of an Enterprise 2.0 social media stream is that co-workers work together on projects represented by topics: the stream is topic-centered and not user-centered as in public streams such as Facebook or Twitter.
A lot of work has been done on recommendation in streams and on news recommendation. However, none of the current research approaches address the characteristics of an Enterprise 2.0 social media stream when recommending messages: the existing systems described in the literature mainly target news recommendation for public streams and are not directly applicable to Enterprise 2.0 social media streams.
In this thesis, a recommender concept is developed that allows the recommendation of messages in an Enterprise 2.0 social media stream. The basic idea is to extract features from a new message and use those features to compute a relevance score for a user. Additionally, those features are used to learn a user model, and the user model is then used to score new messages. This idea works without explicit user feedback and ensures high user acceptance because no intensive rating of messages is necessary. Based on this idea, a content-based and a collaborative approach are developed. To reflect the topic-centered streams, a topic-specific user model is introduced which learns a user model independently for each topic.
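A minimal sketch of this idea, assuming messages arrive as plain text tagged with a topic; the term-frequency features and cosine scoring used here are illustrative simplifications rather than the exact model of the thesis.

```python
from collections import Counter, defaultdict
from math import sqrt

def extract_features(message_text):
    """Very simple feature extraction: lower-cased term frequencies."""
    return Counter(message_text.lower().split())

class TopicSpecificUserModel:
    """Keeps one term-weight profile per topic, learned implicitly from messages the user read."""
    def __init__(self):
        self.profiles = defaultdict(Counter)   # topic -> term weights

    def learn(self, topic, message_text):
        self.profiles[topic].update(extract_features(message_text))

    def score(self, topic, message_text):
        """Relevance of a new message for this user within the given topic (cosine)."""
        profile, features = self.profiles[topic], extract_features(message_text)
        dot = sum(profile[t] * w for t, w in features.items())
        norm = sqrt(sum(v * v for v in profile.values())) * sqrt(sum(v * v for v in features.values()))
        return dot / norm if norm else 0.0

# Hypothetical usage: no explicit ratings are needed, the profile grows from read messages.
user = TopicSpecificUserModel()
user.learn("project_alpha", "sprint planning for the search backend")
print(user.score("project_alpha", "updated sprint backlog for backend search"))
print(user.score("project_beta", "cafeteria menu for next week"))
```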
New terms constantly occur in the stream of messages. To improve the quality of the recommendation (by finding more relevant messages), the recommender should be able to handle these new terms. Therefore, an approach is developed which adapts a user model when unknown terms occur by using the terms of similar users or topics. In addition, a short- and long-term approach is developed which tries to detect the short-term interests of users. Only if a user's interest occurs repeatedly over a certain time span are the corresponding terms transferred to the long-term user model.
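The short-term/long-term mechanism might look roughly like the following sketch, where a term is promoted to the long-term profile once it recurs in enough distinct time windows; the window granularity and the threshold are made-up parameters.

```python
from collections import defaultdict

class ShortLongTermModel:
    """Promote a term to the long-term profile after it recurs in enough time windows."""
    def __init__(self, promotion_threshold=3):
        self.windows_seen = defaultdict(set)   # term -> set of window ids
        self.long_term = set()
        self.promotion_threshold = promotion_threshold

    def observe(self, term, window_id):
        self.windows_seen[term].add(window_id)
        if len(self.windows_seen[term]) >= self.promotion_threshold:
            self.long_term.add(term)

model = ShortLongTermModel()
for week, terms in enumerate([["migration"], ["migration", "offsite"], ["migration"]]):
    for term in terms:
        model.observe(term, week)
print(model.long_term)   # {'migration'}: recurring interest; 'offsite' stays short-term
```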
The approaches are evaluated against a dataset obtained from an Enterprise 2.0 social media stream application. The evaluation shows the overall applicability of the concept. Specifically, it shows that a topic-specific user model outperforms a global user model and that adapting the user model according to similar users increases the quality of the recommendation. Interestingly, the collaborative approach does not reach the quality of the content-based approach.
|
574 |
Attribute Exploration on the Web. Jäschke, Robert, Rudolph, Sebastian 28 May 2013 (has links)
We propose an approach for supporting attribute exploration by web information retrieval, in particular by posing appropriate queries to search engines, crowd sourcing systems, and the linked open data cloud. We discuss underlying general assumptions for this to work and the degree to which these can be taken for granted.
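One way to picture the proposed use of web retrieval within attribute exploration is to search for counterexamples to a candidate implication; the query syntax and the web_search stub below are hypothetical placeholders for a real search engine, crowdsourcing task, or linked-data query.

```python
def web_counterexample(premise_terms, conclusion_terms, web_search):
    """Look for a counterexample to the implication 'premise -> conclusion'
    by querying for objects that match the premise attributes but explicitly
    lack one of the conclusion attributes."""
    for missing in conclusion_terms:
        # Hypothetical query syntax; a real system would tailor this to the
        # search engine, crowdsourcing system, or SPARQL endpoint being used.
        query = " ".join(f'"{t}"' for t in premise_terms) + f' -"{missing}"'
        hits = web_search(query)
        if hits:
            return hits[0]          # candidate counterexample to show the expert
    return None                     # no evidence found: tentatively accept the implication

# Usage with a stubbed search function standing in for a real search engine API:
fake_index = {'"flows into sea" -"contains fresh water"': ["Rio Salado (salt river)"]}
result = web_counterexample(["flows into sea"], ["contains fresh water"],
                            lambda q: fake_index.get(q, []))
print(result)
```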
|
575 |
Similarity measures for scientific workflows. Starlinger, Johannes 08 January 2016 (has links)
Over the last ten years, scientific workflows, into which local scripts and applications as well as web services can be integrated, have gained attention as a tool for creating reproducible, data-processing in-silico experiments. Such workflows can be published and reused via specialised online libraries, so-called repositories. As these repositories grow, similarity measures for scientific workflows become necessary, for example for duplicate detection, similarity search, or clustering of functionally similar workflows. This thesis investigates such similarity measures for scientific workflows. First, we investigate the similarity-relevant properties of scientific workflows and identify characteristics of the re-use of their components. Second, we analyse and re-implement existing solutions for the comparison of scientific workflows along defined subtasks of the comparison process. We create a large gold-standard corpus of workflow similarities containing more than 2400 ratings for 485 workflow pairs, contributed by 15 experts from 6 institutions. For the first time, this groundwork allows a comprehensive, comparative evaluation of different similarity measures for scientific workflows, in which we confirm some previous results but revise others. Third, we present a new method for comparing scientific workflows. Our evaluation shows that this new method delivers better and more consistent results and can easily be combined with other approaches to achieve a further increase in quality. Fourth, we show how the results of the preceding steps can be used to implement, from standard components, a search engine for fast, high-quality similarity search at repository scale. / Over the last decade, scientific workflows have gained attention as a valuable tool to create reproducible in-silico experiments. Specialized online repositories have emerged which allow such workflows to be shared and reused by the scientific community. With the increasing size of these repositories, methods to compare scientific workflows regarding their functional similarity become a necessity. To allow duplicate detection, similarity search, or clustering, similarity measures for scientific workflows are an essential prerequisite. This thesis investigates similarity measures for scientific workflows. We carry out four consecutive research tasks: First, we closely investigate the relevant properties of scientific workflows regarding their similarity and identify characteristics of the re-use of their components. Second, we review and dissect existing approaches to scientific workflow comparison into a defined set of subtasks necessary in the process of workflow comparison, and re-implement previous approaches to each subtask. We create a large gold-standard corpus of expert ratings on workflow similarity, with more than 2400 ratings provided for 485 pairs of workflows by 15 workflow experts from 6 institutions. For the first time, this allows a comprehensive, comparative evaluation of different scientific workflow similarity measures, confirming some previous findings but rejecting others. Third, we propose and evaluate a novel method for scientific workflow comparison. We show that this novel method provides results of both higher quality and higher consistency than previous approaches, and can easily be stacked and ensembled with other approaches for still better performance and higher speed. Fourth, we show how our findings can be leveraged to implement a search engine using off-the-shelf tools that performs fast, high-quality similarity search for scientific workflows at repository scale, a premier area of application for similarity measures for scientific workflows.
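As a simple illustration of the general task (not the specific measure proposed in the thesis), workflow similarity can be approximated by comparing the sets of modules two workflows re-use; the workflows below are hypothetical lists of module identifiers.

```python
def module_set_similarity(workflow_a, workflow_b):
    """Jaccard similarity over the sets of modules (services/scripts) two workflows use."""
    a, b = set(workflow_a), set(workflow_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical workflows given as lists of module identifiers.
wf_blast_annotation = ["fetch_sequence", "blast_search", "parse_hits", "annotate"]
wf_blast_report = ["fetch_sequence", "blast_search", "parse_hits", "render_report"]
wf_image_pipeline = ["load_image", "segment", "measure", "render_report"]

print(module_set_similarity(wf_blast_annotation, wf_blast_report))   # high: shared components
print(module_set_similarity(wf_blast_annotation, wf_image_pipeline)) # low: little re-use
```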
|
576 |
Fairness in Rankings. Zehlike, Meike 26 April 2022 (has links)
Artificial intelligence and self-learning systems, which adapt their behaviour on the basis of past decisions and historical data, play an ever-larger role in our everyday lives. We are surrounded by a large number of algorithmic decision aids, as well as a steadily growing number of algorithmic decision-making systems. Rankings and sorted lists of search results are the essential instrument of our online search for content, products, leisure activities, and relevant people. For this reason, the order of the search results determines not only the satisfaction of those searching, but also the chances of those being ranked for educational, economic, and even social success. Science and policy makers are therefore increasingly concerned about systematic discrimination and bias arising from self-learning systems.
To counter discrimination in the context of rankings and sorted search results, the following three problems must be addressed: First, we have to work out the ethical properties and moral goals of the various situations in which rankings are used. These should be consistent with the ethical values of the algorithms that are applied to avoid discriminatory rankings. Second, it is necessary to translate ethical value systems into mathematics and algorithms in order to be able to serve all moral goals. Third, these methods should be accessible to a broad audience that includes programmers as well as lawyers and politicians. / Artificial intelligence and adaptive systems that learn patterns from past behavior and historic data play an increasing role in our day-to-day lives. We are surrounded by a vast amount of algorithmic decision aids, and more and more by algorithmic decision-making systems, too. As a subcategory, ranked search results have become the main mechanism by which we find content, products, places, and people online. Thus their ordering contributes not only to the satisfaction of the searcher, but also to career and business opportunities, educational placement, and even the social success of those being ranked. Therefore, researchers have become increasingly concerned with systematic biases and discrimination in data-driven ranking models.
To address the problem of discrimination and fairness in the context of rankings, three main problems have to be solved: First, we have to understand the philosophical properties of different ranking situations and all important fairness definitions, to be able to decide which method would be the most appropriate for a given context. Second, we have to make sure that, for any fairness requirement in a ranking context, a formal definition that meets such requirements exists. More concretely, if a ranking context requires, for example, group fairness to be met, we need an actual definition of group fairness in rankings in the first place. Third, the methods, together with their underlying fairness concepts and properties, need to be available to a wide range of audiences, from programmers to policy makers and politicians.
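A simplified sketch of what a formal group-fairness definition for rankings can look like, in the spirit of requiring a minimum share of protected candidates in every top-k prefix; the threshold and candidate data are illustrative assumptions, not the exact constraint studied in the thesis.

```python
from math import floor

def satisfies_prefix_fairness(ranking, is_protected, min_share=0.4):
    """Check that every top-k prefix contains at least floor(min_share * k)
    protected candidates (a simplified ranked group-fairness test)."""
    protected_so_far = 0
    for k, candidate in enumerate(ranking, start=1):
        protected_so_far += int(is_protected(candidate))
        if protected_so_far < floor(min_share * k):
            return False, k          # first prefix where the constraint is violated
    return True, None

# Hypothetical candidate ids; protected-group members are marked with a 'p' prefix.
ranking_ok = ["a1", "p7", "a2", "p9", "a3", "a4"]
ranking_bad = ["a1", "a2", "a3", "a4", "a5", "p7"]
for ranking in (ranking_ok, ranking_bad):
    print(satisfies_prefix_fairness(ranking, lambda c: c.startswith("p")))
```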
|
577 |
Exploring Knowledge Vaults with ChatGPT: A Domain-Driven Natural Language Approach to Document-Based Answer Retrieval. Hammarström, Mathias January 2023 (has links)
Problem solving is an important aspect of many professions, including factory environments, where problems can lead to reduced production or even production stoppages. This study focuses on a specific domain: a pulp mill, in collaboration with SCA Massa. The purpose of the study is to examine the potential of a question-answering system to improve workers' ability to solve problems by providing them with possible solutions based on a natural-language description of the problem. This is achieved by giving the workers a natural-language interface to a large collection of domain-specific documents. More specifically, the system works by augmenting ChatGPT with domain-specific documents as context for a question. The relevant documents are found using a retriever, which uses vector representations for each document and then compares the document vectors with the question vector. The results show that the system generated a correct answer 92% of the time, an incorrect answer 5% of the time, and no answer 3% of the time. The conclusion of this study is that the implemented question-answering system is promising, especially when used by an expert or skilled worker who is less likely to be misled by incorrect answers. However, because of the study's limited scope, further studies are required to determine whether the system is ready to be deployed in real-world environments. / Problem solving is a key aspect of many professions, including factory settings, where problems can cause production to slow down or even halt completely. The specific domain for this project is a pulp factory setting, in collaboration with SCA Pulp. This study explores the potential of a question-answering system to enhance workers' ability to solve a problem by providing possible solutions from a natural-language description of the problem. This is accomplished by giving workers a natural-language interface to a large corpus of domain-specific documents. More specifically, the system works by augmenting ChatGPT with domain-specific documents as context for a question. The relevant documents are found using a retriever, which uses vector representations for each document and then compares the document vectors with the question vector. The results show that the system generated a correct answer 92% of the time, an incorrect answer 5% of the time, and no answer was given 3% of the time. The conclusion drawn from this study is that the implemented question-answering system is promising, especially when used by an expert or skilled worker who is less likely to be misled by the incorrect answers. However, due to the study's small scale, further study is required to conclude whether this system is ready to be deployed in real-world scenarios.
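A minimal sketch of the retriever-plus-ChatGPT pattern described above. The embed() and ask_llm() parameters stand in for the embedding model and chat API actually used in the study, which are not specified here; the toy stubs only keep the example runnable.

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def answer(question, documents, embed, ask_llm, top_k=3):
    """Retrieve the most similar documents and pass them to the chat model as context."""
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = (f"Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return ask_llm(prompt)

# Toy stand-ins: a character-frequency "embedding" and a stub "LLM" that just
# echoes the top-ranked context line; real components would replace both.
toy_embed = lambda text: [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]
toy_llm = lambda prompt: "See top-ranked context: " + prompt.splitlines()[3]
docs = ["Flush the digester valve when pressure alarms repeat.",
        "Order spare felts from the maintenance portal."]
print(answer("What should I do when the pressure alarm keeps going off?",
             docs, toy_embed, toy_llm))
```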
|
578 |
Transfer Learning in Deep Structured Semantic Models for Information Retrieval / Kunskapsöverföring mellan datamängder i djupa arkitekturer för informationssökning. Zarrinkoub, Sahand January 2020 (has links)
Recent approaches to IR include neural networks that generate query and document vector representations. The representations are used as the basis for document retrieval and are able to encode semantic features if trained on large datasets, an ability that sets them apart from classical IR approaches such as TF-IDF. However, the datasets necessary to train these networks are not available to the owners of most search services used today, since those services are not used by enough users. Thus, methods for enabling the use of neural IR models in data-poor environments are of interest. In this work, a bag-of-trigrams neural IR architecture is used in a transfer-learning procedure in an attempt to increase performance on a target dataset by pre-training on external datasets. The target dataset used is WikiQA, and the external datasets are Quora's Question Pairs, Reuters' RCV1 and SQuAD. When considering individual model performance, pre-training on Question Pairs and fine-tuning on WikiQA gives the best individual models. However, when considering average performance, pre-training on the chosen external datasets results in lower performance on the target dataset, both when all datasets are used together and when they are used individually, with different average performance depending on the external dataset used. On average, pre-training on RCV1 and Question Pairs gives the lowest and highest average performance respectively, when considering only the pre-trained networks. Surprisingly, the performance of an untrained, randomly generated network is high, and beats the performance of all pre-trained networks on average. The best-performing model on average is a neural IR model trained on the target dataset without prior pre-training. / Recent models in information retrieval include neural networks that generate vector representations for queries and documents. These vector representations are used together with a similarity measure to determine the relevance of a given document with respect to a query. Semantic features of queries and documents can be encoded into the vector representations. This makes information retrieval based on semantic units possible, which is not possible with the classical methods of information retrieval, which instead rely on the mutual occurrence of keywords in queries and documents. Training neural retrieval models requires large datasets. Most of today's search services are used too little to make it possible to produce datasets large enough to train a neural retrieval model. It is therefore desirable to find methods that make it possible to use neural retrieval models in domains with small available datasets. In this thesis, a neural retrieval model has been implemented and used in a method intended to improve its performance on a target dataset by pre-training it on external datasets. The target dataset used is WikiQA, and the external datasets are Quora's Question Pairs, Reuters' RCV1 and SQuAD. In the experiments, the best individual models are obtained by pre-training on Question Pairs and fine-tuning on WikiQA. The average performance over several trained models is negatively affected by our method. This holds both when all external datasets are used together and when they are used individually, with varying performance depending on which dataset is used. Pre-training on RCV1 and Question Pairs gives the largest and smallest negative impact on average performance, respectively. The performance of a randomly generated, untrained model is surprisingly high, on average higher than that of all pre-trained models, and on par with BM25. The best average performance is obtained by training on the target dataset WikiQA without prior pre-training.
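A rough sketch of the pre-train/fine-tune procedure with a bag-of-trigrams encoder, assuming a simple two-tower cosine ranking model in PyTorch; the architecture sizes, the loss, and the miniature stand-in datasets are illustrative assumptions, not the exact setup of the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

TRIGRAM_DIM = 5000   # bag-of-trigrams hashing dimension (illustrative size)

def trigram_vector(text):
    """Hash character trigrams into a fixed-size bag-of-trigrams vector."""
    vec = torch.zeros(TRIGRAM_DIM)
    padded = f"#{text.lower()}#"
    for i in range(len(padded) - 2):
        vec[hash(padded[i:i + 3]) % TRIGRAM_DIM] += 1.0
    return vec

class TrigramTower(nn.Module):
    """Shared encoder mapping trigram vectors to a small semantic space."""
    def __init__(self, hidden=128, out=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TRIGRAM_DIM, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out))

    def forward(self, x):
        return self.net(x)

def train(model, pairs, labels, epochs=3, lr=1e-3):
    """One shared routine used for both pre-training and fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    q = torch.stack([trigram_vector(a) for a, _ in pairs])
    d = torch.stack([trigram_vector(b) for _, b in pairs])
    y = torch.tensor(labels, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        score = torch.sigmoid(F.cosine_similarity(model(q), model(d)))
        loss = F.binary_cross_entropy(score, y)
        loss.backward()
        opt.step()
    return model

# Hypothetical miniature datasets standing in for Question Pairs (external) and WikiQA (target).
external_pairs = [("how tall is the eiffel tower", "eiffel tower height"),
                  ("best pasta recipe", "stock market news")]
target_pairs = [("who wrote hamlet", "hamlet was written by shakespeare"),
                ("who wrote hamlet", "denmark is in europe")]
model = TrigramTower()
model = train(model, external_pairs, [1.0, 0.0])   # pre-training on the external dataset
model = train(model, target_pairs, [1.0, 0.0])     # fine-tuning on the target dataset
```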
|
579 |
Towards a comprehensive functional layered architecture for the Semantic Web. Gerber, Aurona J. 30 November 2006 (has links)
The Semantic Web, as the foreseen successor of the current Web, is envisioned to be a semantically enriched information space usable by machines or agents that perform sophisticated tasks on behalf of their users. The realisation of the Semantic Web requires the development of a comprehensive and functional layered architecture for the increasingly semantically expressive languages that it comprises. A functional architecture is a model specified at an appropriate level of abstraction, identifying system components based on required system functionality, whilst a comprehensive architecture is an architecture founded on established design principles within Software Engineering.
Within this study, an argument is formulated for the development of a comprehensive and functional layered architecture through the development of a Semantic Web status model, the extraction of the function of established Semantic Web technologies, as well as the development of an evaluation mechanism for layered architectures compiled from design principles and fundamental features of layered architectures. In addition, an initial version of such a comprehensive and functional layered architecture for the Semantic Web is constructed based on the building blocks described above, and this architecture is applied to several scenarios to establish its usefulness.
In conclusion, based on the evidence collected as a result of the research in this study, it is possible to justify the development of an architectural model, or more specifically, a comprehensive and functional layered architecture for the languages of the Semantic Web. / Computing / PHD (Computer Science)
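One fundamental feature of layered architectures that such an evaluation mechanism might check is strict layering, i.e. that every language builds only on layers beneath it. The layer names and dependencies below are a rough, hypothetical reading of the Semantic Web language stack, not the architecture proposed in the thesis.

```python
# Hypothetical dependency map: each Semantic Web language/layer lists the layers it builds on.
layers = ["URI/Unicode", "XML", "RDF", "RDFS", "OWL", "Rules/Logic"]
depends_on = {
    "XML": ["URI/Unicode"],
    "RDF": ["URI/Unicode", "XML"],
    "RDFS": ["RDF"],
    "OWL": ["RDFS"],
    "Rules/Logic": ["OWL", "RDF"],
}

def violates_strict_layering(layers, depends_on):
    """Return dependencies that point upward, breaking the layered-architecture
    principle that a layer may only use functionality of layers beneath it."""
    order = {layer: i for i, layer in enumerate(layers)}
    return [(layer, dep) for layer, deps in depends_on.items()
            for dep in deps if order[dep] >= order[layer]]

print(violates_strict_layering(layers, depends_on))   # [] means the layering constraint holds
```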
|
580 |
Multi Domain Semantic Information Retrieval Based on Topic Model. Lee, Sanghoon 07 May 2016 (has links)
Over the last decades, there have been remarkable shifts in the area of Information Retrieval (IR) as a huge amount of information has accumulated on the Web. This information explosion increases the need for new tools that retrieve meaningful knowledge from various complex information sources. Thus, techniques for searching and extracting important information from numerous database sources have become a key challenge for current IR systems.
Topic modeling is one of the most recent techniques for discovering hidden thematic structures in large data collections without human supervision. Several topic models have been proposed in various fields of study and have been utilized extensively for many applications. Latent Dirichlet Allocation (LDA) is the most well-known topic model; it generates topics from large corpora of resources such as text, images, and audio. It has been widely used in many areas of information retrieval and data mining, providing an efficient way of identifying latent topics among document collections. However, LDA has a drawback: topic cohesion within a concept is attenuated when estimating infrequently occurring words. Moreover, LDA does not consider the meaning of words, but rather infers hidden topics based on a statistical approach. As a result, LDA can cause either a reduction in the quality of topic words or an increase in loose relations between topics.
To solve these problems, we propose a domain-specific topic model that combines domain concepts with LDA. Two domain-specific algorithms are suggested for addressing the difficulties associated with LDA. The main strength of our proposed model comes from the fact that it narrows semantic concepts from broad domain knowledge to a specific domain, which solves the unknown-domain problem. Our proposed model is extensively tested on various applications (query expansion, classification, and summarization) to demonstrate its effectiveness. Experimental results show that the proposed model significantly increases the performance of these applications.
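A sketch of the baseline step this builds on: fitting standard LDA with scikit-learn and using a topic's top words for query expansion, one of the applications mentioned above. The toy corpus is made up, and the domain-concept constraints of the proposed model are only hinted at in the closing comment.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for a domain-specific document collection.
docs = [
    "gene expression microarray cancer tumor",
    "tumor suppressor gene mutation cancer",
    "retrieval ranking query document relevance",
    "query expansion relevance feedback retrieval",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

def expand_query(query, top_n=3):
    """Expand a query with the top words of its most probable LDA topic."""
    topic_dist = lda.transform(vectorizer.transform([query]))[0]
    best_topic = topic_dist.argmax()
    top_terms = [terms[i] for i in lda.components_[best_topic].argsort()[::-1][:top_n]]
    return query.split() + [t for t in top_terms if t not in query.split()]

print(expand_query("cancer gene"))
# A domain-specific variant, as proposed above, would additionally bias or filter
# these topic words using concepts drawn from domain knowledge.
```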
|