91

Lageravstämning i oljebranschen : Datalagring

Fridlund, Karl, Ali, Akhlad January 2015 (has links)
Denna undersökning handlar om hur data för ett lagerhanteringssystem, inom oljebranschen, kan lagras på ett mer effektivt sätt. Med hjälp av en databasmodell bidrar den här rapporten genom att visa nya lagringsmöjligheter som finns för företaget Preem. Förutom det ger undersökningen, bland annat, svar på hur skillnader mellan uppgiven (enligt leverantören) och uppmätt drivmedelsvolym för utförda leveranser kan beräknas, med hjälp av att begära information från en databas. Genom en bättre utförd datalagring kommer det vara lättare att hantera ett lager och kunna upptäcka när ett fel uppstår, som gör att informationen om volymen i lagret ändras felaktigt. Ett vanligt existerande problem är att två källor anger två olika resultat som berör lagermängden. Rapporten innehåller två framtagna databasmodeller samt inkluderar de teoretiska fakta som krävs för att förstå sådana modeller, och att kunna få en uppfattning för hur en sådan modell implementeras som en databas. Exempel på sådan information är vad en databas är, hur modellering av en sådan går till, bland mycket annat. Samtidigt framhåller denna undersökning idéer på vad det finns för fortsatta utvecklingsmöjligheter för Preem. / The purpose of this research is to study how data for a stock management system within the oil industry can be stored in a more efficient way. With the use of a database model, this study will show new data storage possibilities for Preem (one of the largest fuel companies in Sweden). The research will determine the difference between the fuel volume stated (by the supplier) and the measured volume (in the tank), by requesting information from a newly designed database. With more effective data storage it will be easier to manage stock and to detect errors that corrupt the information about the volume in stock. These errors arise when two different sources of information give two different results for the stock quantity.
The report contains two developed database models as well as the theoretical facts that are necessary for understanding such models and their implementation as a database. For example, it gives a definition of a database and a description of the modelling process. The research concludes with some recommendations for the future development of the system at Preem.
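The discrepancy calculation described above — comparing the supplier-stated volume with the measured tank volume via a database query — can be sketched as follows. The table and column names are illustrative assumptions, not Preem's actual schema.

```python
# Sketch of a delivery-discrepancy query over an in-memory database.
# Schema and figures are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE delivery (
    id INTEGER PRIMARY KEY,
    stated_volume REAL,   -- litres according to the supplier
    measured_volume REAL  -- litres according to the tank gauge
);
INSERT INTO delivery VALUES (1, 10000.0, 9950.0), (2, 8000.0, 8010.0);
""")

# One row per delivery with the signed discrepancy between the two sources.
rows = conn.execute(
    "SELECT id, stated_volume - measured_volume AS diff "
    "FROM delivery ORDER BY id"
).fetchall()
print(rows)  # → [(1, 50.0), (2, -10.0)]
```

A nonzero `diff` flags exactly the situation the abstract describes: two sources reporting different stock quantities.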
92

Predicción de la demanda para un general sales service agent (GSSA) mediante regresión lineal simple / Demand forecasting for a general sales service agent through simple linear regression

Rojas García, Freddy Wiliam 09 December 2020 (has links)
Pacific Feeder Services (PFS) es un agente general de venta de espacios aéreos de distintas aerolíneas; por ejemplo, Korean Air, Aeroméxico, Alitalia, Aerolíneas Argentinas y Gol. Estas aerolíneas no cuentan con infraestructura propia en el Perú, de modo que PFS actúa como representante de estas aerolíneas ante sus clientes. En el presente trabajo de investigación se utilizará la metodología iterativa de la ciencia de datos para abordar el problema relacionado a la demanda, puesto que esta es incierta en algunos meses del año. Para ello, se plantea la siguiente hipótesis: ¿Será una regresión lineal simple el modelo adecuado para realizar el pronóstico de los volúmenes de la demanda que tendrá PFS en los próximos meses? El objetivo por alcanzar será proyectar la demanda mediante una regresión lineal simple, para lo cual se está tomando como base los datos de los kilos exportados por PFS en el año 2019. Asimismo, el presente trabajo de investigación académico presenta una arquitectura de datos funcional y una arquitectura de datos tecnológica que da soporte al modelo de regresión lineal simple. La primera explica cuáles son los insumos, almacenamiento y consumo que se requieren para implementar el mencionado modelo, mientras que la segunda expone las herramientas del modelo. Finalmente, el trabajo acaba con las conclusiones y recomendaciones asociadas a la correcta implementación del modelo de regresión lineal simple en el caso específico de PFS. / Pacific Feeder Services (PFS) is a general sales service agent (GSSA) whose main duty is to commercialize air freight capacity of different airlines; for example, Korean Air, Aeroméxico, Alitalia, Aerolineas Argentinas and Gol. These airlines do not have their own infrastructure in the country, so PFS acts as a representative of these airlines to their customers. 
In this research paper, the iterative methodology of data science will be used to address the problem related to demand, inasmuch as this is uncertain in some months of the year. To do this, the following hypothesis is proposed: Will a simple linear regression be the appropriate model to forecast the volumes of demand that PFS will have in the coming months? The objective to be achieved will be to project the demand through a simple linear regression, taking the kilos exported by PFS in 2019 as a basis. Likewise, this academic research paper presents a functional data architecture and a technological data architecture that support the simple linear regression model. The first explains the inputs, storage and consumption required to implement the model, while the second presents the tools that support it. Finally, the research paper ends with the conclusions and recommendations associated with the correct implementation of the simple linear regression model in the specific case of PFS. / Trabajo de investigación
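The forecasting step described above can be sketched with ordinary least squares on a month index. The monthly figures below are invented for illustration, not PFS data.

```python
# Minimal sketch of simple linear regression for demand forecasting:
# fit kilos ~ month on illustrative 2019 figures, then project month 13.
import numpy as np

months = np.arange(1, 13)                      # Jan..Dec 2019
kilos = 1000.0 + 50.0 * months + np.array(     # invented exported kilos
    [12, -8, 5, 0, -3, 7, -10, 4, 6, -2, 1, -5], dtype=float)

# np.polyfit returns the highest-degree coefficient first: [slope, intercept].
b1, b0 = np.polyfit(months, kilos, 1)
forecast_jan_2020 = b0 + b1 * 13
print(round(forecast_jan_2020, 1))  # → 1647.8
```

In practice the residuals of such a fit should be inspected before trusting the projection, since air-freight demand often has seasonality that a single straight line cannot capture.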
93

Aplicación de ciencia de datos para incrementar la efectividad del número de operaciones de la base de clientes tácticos de Mibanco - Agencia Zárate

Bravo España, Ana María, Chacón Chávez, Verónica Magaly, Flores Chumpitaz, María Isabel, Mamani Gutiérrez, Miguel Hilarión, Toranzo Pellanne, María Pía 15 July 2021 (has links)
El presente trabajo de investigación busca analizar nuevas estrategias para incrementar el nivel de efectividad del número de operaciones de la base de tácticos de los clientes de la agencia Zárate de Mibanco ubicada en el distrito de San Juan de Lurigancho, basado en la segmentación comercial del cliente. La metodología de investigación de ciencia de datos consta de 10 etapas, desde la comprensión de datos hasta la retroalimentación, se aplicará un modelo analítico de carácter predictivo, se analiza la información histórica de Mibanco y con ello se identifica la problemática de la baja efectividad de la base de tácticos de la agencia Zárate. Seguidamente se exponen las posibles soluciones basadas en el modelo de ciencia de datos y la hipótesis. También, se realiza el análisis EDA para la comprensión y preparación de los datos a través de visualizaciones y se describen las herramientas que se utilizarán para el proyecto. Se establece una arquitectura de los datos en base a la funcionalidad y estructura actual de Mibanco. Asimismo, se emplea la técnica de ciencia de datos de Aprendizaje Supervisado, modelo de Clasificación basado en el algoritmo de Árbol de Decisiones. Adicionalmente, se muestran los resultados del modelo de ciencia de datos, basado en encontrar la fórmula del éxito para encontrar los perfiles idóneos de los clientes pre-aprobados de la base de clientes. Finalmente, se establecieron las estrategias para la implementación del modelo de ciencia de datos en la empresa Mibanco. / The present research work seeks to analyze new strategies to increase the level of effectiveness of the number of operations of the tactical base of the clients of the Zarate agency of Mibanco located in the district of San Juan de Lurigancho, based on the commercial segmentation of the client.
The data science research methodology consists of 10 stages, from data comprehension to feedback. A predictive analytical model is applied: the historical information of Mibanco is analyzed and, from it, the problem of the low effectiveness of the tactical base of the Zarate agency is identified. Next, the possible solutions based on the data science model and the hypothesis are presented. Also, the EDA analysis is performed for the understanding and preparation of the data through visualizations, and the tools that will be used for the project are described. A data architecture is established based on the current functionality and structure of Mibanco. Likewise, the Supervised Learning data science technique, a Classification model based on the Decision Tree algorithm, is used. Additionally, the results of the data science model are shown, based on finding the success formula that identifies the ideal profiles of the pre-approved customers of the customer base. Finally, the strategies for the implementation of the data science model in Mibanco were established. / Trabajo de investigación
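The classification step described above — a decision tree predicting which pre-approved clients are likely to complete an operation — can be sketched as below. The features, thresholds and data are invented for illustration; they are not Mibanco's variables.

```python
# Hedged sketch of a Decision Tree classifier for client effectiveness.
# Columns: [monthly_income_soles, operations_last_year] (invented features).
from sklearn.tree import DecisionTreeClassifier

X = [[1200, 0], [3500, 4], [900, 1], [4200, 6], [2800, 3], [700, 0]]
y = [0, 1, 0, 1, 1, 0]  # 1 = client completed an operation

# A shallow tree keeps the learned rules interpretable for the sales team.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[3000, 4], [800, 0]]))  # → [1 0]
```

On real data, the tree's depth and splitting criteria would be tuned on a held-out set, and the learned rules used to prioritize which clients of the tactical base to contact.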
94

In Situ Summarization and Visual Exploration of Large-scale Simulation Data Sets

Dutta, Soumya 17 September 2018 (has links)
No description available.
95

Topic change in robot-moderated group discussions : Investigating machine learning approaches for topic change in robot-moderated discussions using non-verbal features / Ämnesbyte i robotmodererade gruppdiskussioner : Undersöka maskininlärningsmetoder för ämnesändring i robotmodererad diskussion med hjälp av icke-verbala egenskaper

Hadjiantonis, Georgios January 2024 (has links)
Moderating group discussions among humans can often be challenging and require certain skills, particularly in deciding when to ask other participants to elaborate or change the current topic of the discussion. Recent research on Human-Robot Interaction in groups has demonstrated the positive impact of robot behavior on the quality and effectiveness of the interaction and their ability to shape the dynamics of the group and promote social behavior. In light of this, there is potential for using social robots as discussion moderators to facilitate engaging and productive discussions among humans. Previous work on topic management in conversational agents was predominantly based on human engagement and topic personalization, with the agent having an active/central role in the conversation. This thesis focuses exclusively on the moderation of group discussions; instead of moderating the topic based on evaluated human engagement, the thesis builds upon previous research on non-verbal cues related to discussion topic structure and turn-taking to determine, in a content-free manner, whether participants intend to continue discussing the current topic. This thesis investigates the suitability of machine-learning models and the contribution of different audiovisual non-verbal features in predicting appropriate topic changes. For this purpose, we utilized pre-recorded interactions between a robot moderator and human participants, which we annotated and from which we extracted acoustic and body language-related features. We provide an analysis of the performance of sequential and non-sequential machine learning approaches using different sets of features, as well as a comparison with rule-based heuristics.
The results indicate promising performance in classifying cases in which a topic change was inappropriate versus those in which a topic change could or should occur, outperforming rule-based approaches and demonstrating the feasibility of using machine learning models for topic moderation. Regarding the type of models, the results suggest no distinct advantage of sequential over non-sequential modeling approaches, indicating the effectiveness of simpler non-sequential data models. Acoustic features exhibited comparable and, in some cases, improved overall performance and robustness compared to using only body language-related features or a combination of both types. In summary, this thesis provides a foundation for future research in robot-mediated topic moderation in groups using non-verbal cues, presenting opportunities to further improve social robots with topic moderation capabilities. / Att moderera gruppdiskussioner mellan människor kan ofta vara utmanande och kräver vissa färdigheter, särskilt när det gäller att bestämma när man ska be andra deltagare att utveckla eller ändra det aktuella ämnet för diskussionen. Ny forskning om människa-robotinteraktion i grupper har visat den positiva effekten av robotbeteende på interaktionens kvalitet och effektivitet och deras förmåga att forma gruppens dynamik och främja socialt beteende. I ljuset av detta finns det potential att använda sociala robotar som diskussionsmoderatorer för att underlätta engagerande och produktiva diskussioner bland människor. Tidigare arbete med ämneshantering hos konversationsagenter baserades till övervägande del på mänskligt engagemang och ämnesanpassning, där agenten hade en aktiv/central roll i samtalet.
Denna avhandling fokuserar uteslutande på moderering av gruppdiskussioner; istället för att moderera ämnet baserat på utvärderat mänskligt engagemang, bygger avhandlingen på tidigare forskning om icke-verbala ledtrådar relaterade till diskussionsämnesstruktur och turtagning för att avgöra om deltagarna avser att fortsätta diskutera det aktuella ämnet på ett innehållsfritt sätt. Denna avhandling undersöker lämpligheten av maskininlärningsmodeller och bidraget från olika audiovisuella icke-verbala funktioner för att förutsäga lämpliga ämnesändringar. För detta ändamål använde vi förinspelade interaktioner mellan en robotmoderator och mänskliga deltagare, som vi kommenterade och från vilka vi extraherade akustiska och kroppsspråksrelaterade funktioner. Vi tillhandahåller en analys av prestandan för sekventiella och icke-sekventiella maskininlärningsmetoder med olika uppsättningar funktioner, samt en jämförelse med regelbaserad heuristik. Resultaten indikerar lovande prestanda när det gäller att klassificera mellan fall när ett ämnesbyte var olämpligt kontra när ett ämnesbyte kunde eller borde ske, vilket överträffar regelbaserade tillvägagångssätt och demonstrerar genomförbarheten av att använda maskininlärningsmodeller för ämnesmoderering. När det gäller typen av modeller tyder resultaten inte på någon tydlig fördel med sekventiella metoder framför icke-sekventiella modelleringsmetoder, vilket indikerar effektiviteten hos enklare icke-sekventiella datamodeller. Akustiska funktioner uppvisade jämförbara och, i vissa fall, förbättrade övergripande prestanda och robusthet jämfört med att endast använda kroppsspråksrelaterade funktioner eller en kombination av båda typerna. Sammanfattningsvis ger denna avhandling en grund för framtida forskning inom robotmedierad ämnesmoderering i grupper som använder icke-verbala ledtrådar, och presenterar möjligheter att förbättra sociala robotar ytterligare med ämnesmodererande förmåga.
96

Describing data patterns / a general deconstruction of metadata standards

Voß, Jakob 07 August 2013 (has links)
Diese Arbeit behandelt die Frage, wie Daten grundsätzlich strukturiert und beschrieben sind. Im Gegensatz zu vorhandenen Auseinandersetzungen mit Daten im Sinne von gespeicherten Beobachtungen oder Sachverhalten, werden Daten hierbei semiotisch als Zeichen aufgefasst. Diese Zeichen werden in Form von digitalen Dokumenten kommuniziert und sind mittels zahlreicher Standards, Formate, Sprachen, Kodierungen, Schemata, Techniken etc. strukturiert und beschrieben. Diese Vielfalt von Mitteln wird erstmals in ihrer Gesamtheit mit Hilfe der phänomenologischen Forschungsmethode analysiert. Ziel ist es dabei, durch eine genaue Erfahrung und Beschreibung von Mitteln zur Strukturierung und Beschreibung von Daten zum allgemeinen Wesen der Datenstrukturierung und -beschreibung vorzudringen. Die Ergebnisse dieser Arbeit bestehen aus drei Teilen. Erstens ergeben sich sechs Prototypen, die die beschriebenen Mittel nach ihrem Hauptanwendungszweck kategorisieren. Zweitens gibt es fünf Paradigmen, die das Verständnis und die Anwendung von Mitteln zur Strukturierung und Beschreibung von Daten grundlegend beeinflussen. Drittens legt diese Arbeit eine Mustersprache der Datenstrukturierung vor. In zwanzig Mustern werden typische Probleme und Lösungen dokumentiert, die bei der Strukturierung und Beschreibung von Daten unabhängig von konkreten Techniken immer wieder auftreten. Die Ergebnisse dieser Arbeit können dazu beitragen, das Verständnis von Daten --- das heißt von digitalen Dokumenten und ihren Metadaten in allen ihren Formen --- zu verbessern. Spezielle Anwendungsgebiete liegen unter anderem in den Bereichen Datenarchäologie und Daten-Literacy. / Many methods, technologies, standards, and languages exist to structure and describe data. The aim of this thesis is to find common features in these methods to determine how data is actually structured and described.
Existing studies are limited to notions of data as recorded observations and facts, or they require given structures to build on, such as the concept of a record or the concept of a schema. These presumed concepts are deconstructed in this thesis from a semiotic point of view, by analysing data as signs communicated in the form of digital documents. The study was conducted using a phenomenological research method. Conceptual properties of data structuring and description were first collected and experienced critically. Examples of such properties include encodings, identifiers, formats, schemas, and models. The analysis resulted in six prototypes that categorize data methods by their primary purpose. The study further revealed five basic paradigms that deeply shape how data is structured and described in practice. The third result consists of a pattern language of data structuring. The patterns show problems and solutions which occur over and over again in data, independent of particular technologies. Twenty general patterns were identified and described, each with its benefits, consequences, pitfalls, and relations to other patterns. The results can help to better understand data and its actual forms, both for consumption and creation of data. Particular domains of application include data archaeology and data literacy.
97

itSIMPLE: ambiente integrado de modelagem e análise de domínios de planejamento automático. / itSIMPLE: integrated environment for modeling and analysis of automated planning domains.

Vaquero, Tiago Stegun 14 March 2007 (has links)
O grande avanço das técnicas de Planejamento em Inteligência Artificial fez com que a Engenharia de Requisitos e a Engenharia do Conhecimento ganhassem extrema importância entre as disciplinas relacionadas a projeto de engenharia (Engineering Design). A especificação, modelagem e análise dos domínios de planejamento automático se tornam etapas fundamentais para melhor entender e classificar os domínios de planejamento, servindo também de guia na busca de soluções. Neste trabalho, é apresentada uma proposta de um ambiente integrado de modelagem e análise de domínios de planejamento, que leva em consideração o ciclo de vida de projeto, representado por uma ferramenta gráfica de modelagem que utiliza diferentes representações: a UML para modelar e analisar as características estáticas dos domínios; XML para armazenar, integrar, e exportar informação para outras linguagens (ex.: PDDL); as Redes de Petri para fazer a análise dinâmica; e a PDDL para testes com planejadores. / The great development of Artificial Intelligence Planning has emphasized the role of Requirements Engineering and Knowledge Engineering among the disciplines that contribute to Engineering Design. The modeling and specification of automated planning domains turn out to be fundamental tasks in order to understand and classify planning domains and to guide the application of problem-solving techniques. This work presents the proposed integrated environment for modeling and analyzing automated planning domains, which considers the project life cycle and is realized as a graphical tool that uses several language representations: UML to model and perform static analyses of planning domains; XML to hold, integrate, share and export information to other language representations (e.g. PDDL); Petri Nets, with which dynamic analyses are made; and PDDL for testing models with planners.
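The XML-to-PDDL export role described above can be illustrated with a toy transformation: a minimal XML domain model is turned into a PDDL domain fragment. The XML schema here is invented for the sketch, not itSIMPLE's actual interchange format.

```python
# Toy illustration of exporting an XML domain model to PDDL text.
import xml.etree.ElementTree as ET

xml_model = """
<domain name="logistics">
  <predicate name="at" args="?obj ?loc"/>
  <predicate name="in" args="?obj ?truck"/>
</domain>
"""

root = ET.fromstring(xml_model)
preds = " ".join(
    "({} {})".format(p.get("name"), p.get("args"))
    for p in root.iter("predicate")
)
pddl = "(define (domain {})\n  (:predicates {}))".format(root.get("name"), preds)
print(pddl)
```

A real exporter would of course also emit types, actions with preconditions and effects, and problem files, but the principle — one canonical XML model feeding several target languages — is the same.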
98

Control and Analysis of Pulse-Modulated Systems

Almér, Stefan January 2008 (has links)
The thesis consists of an introduction and four appended papers. In the introduction we give an overview of pulse-modulated systems and provide a few examples of such systems. Furthermore, we introduce the so-called dynamic phasor model which is used as a basis for analysis in two of the appended papers. We also introduce the harmonic transfer function, and finally we provide a summary of the appended papers. The first paper considers stability analysis of a class of pulse-width modulated systems based on a discrete time model. The systems considered typically have periodic solutions. Stability of a periodic solution is equivalent to stability of a fixed point of a discrete time model of the system dynamics. Conditions for global and local exponential stability of the discrete time model are derived using quadratic and piecewise quadratic Lyapunov functions. A gridding procedure is used to develop a systematic method to search for the Lyapunov functions. The second paper considers the dynamic phasor model as a tool for stability analysis of a general class of pulse-modulated systems. The analysis covers both linear time periodic systems and systems where the pulse modulation is controlled by feedback. The dynamic phasor model provides an L_2-equivalent description of the system dynamics in terms of an infinite dimensional dynamic system. The infinite dimensional phasor system is approximated via a skew truncation. The truncated system is used to derive a systematic method to compute time periodic quadratic Lyapunov functions. The third paper considers the dynamic phasor model as a tool for harmonic analysis of a class of pulse-width modulated systems. The analysis covers both linear time periodic systems and non-periodic systems where the switching is controlled by feedback. As in the second paper of the thesis, we represent the switching system using the L_2-equivalent infinite dimensional system provided by the phasor model.
It is shown that there is a connection between the dynamic phasor model and the harmonic transfer function of a linear time periodic system and this connection is used to extend the notion of harmonic transfer function to describe periodic solutions of non-periodic systems. The infinite dimensional phasor system is approximated via a square truncation. We assume that the response of the truncated system to a periodic disturbance is also periodic and we consider the corresponding harmonic balance equations. An approximate solution of these equations is stated in terms of a harmonic transfer function which is analogous to the harmonic transfer function of a linear time periodic system. The aforementioned assumption is proved to hold for small disturbances by proving the existence of a solution to a fixed point equation. The proof implies that for small disturbances, the approximation is good. Finally, the fourth paper considers control synthesis for switched mode DC-DC converters. The synthesis is based on a sampled data model of the system dynamics. The sampled data model gives an exact description of the converter state at the switching instances, but also includes a lifted signal which represents the inter-sampling behavior. Within the sampled data framework we consider H-infinity control design to achieve robustness to disturbances and load variations. The suggested controller is applied to two benchmark examples; a step-down and a step-up converter. Performance is verified in both simulations and in experiments. / QC 20100628
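The first paper's core reduction — a periodic solution is stable iff the corresponding fixed point of the discrete-time (switch-to-switch) map is stable — can be illustrated numerically: for a local test, it suffices that the linearization of the map at the fixed point has all eigenvalues inside the unit circle. The matrix below is an invented stand-in for such a linearized map, not taken from the thesis.

```python
# Local stability check for a fixed point of a discrete-time map x+ = f(x):
# the spectral radius of the Jacobian at the fixed point must be below 1.
import numpy as np

A = np.array([[0.5, 0.2],
              [-0.1, 0.7]])  # invented Jacobian of the switch-to-switch map

spectral_radius = max(abs(np.linalg.eigvals(A)))
print(spectral_radius < 1.0)  # → True, so the periodic solution is locally stable
```

The thesis goes further than this eigenvalue test: quadratic and piecewise quadratic Lyapunov functions give *global* exponential stability conditions, which a linearization alone cannot provide.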
99

Modélisation et construction des bases de données géographiques floues et maintien de la cohérence de modèles pour les SGBD SQL et NoSQL / Modeling and construction of fuzzy geographic databases with supporting models consistency for SQL and NoSQL database systems

Soumri Khalfi, Besma 12 June 2017 (has links)
Aujourd’hui, les recherches autour du stockage et de l’intégration des données spatiales constituent un maillon important qui redynamise les recherches sur la qualité des données. La prise en compte de l’imperfection des données géographiques, particulièrement l’imprécision, ajoute une réelle complexification. Parallèlement à l’augmentation des exigences de qualité centrées sur les données (précision, exhaustivité, actualité), les besoins en information intelligible ne cessent d’augmenter. Sous cet angle, nous sommes intéressés aux bases de données géographiques imprécises (BDGI) et leur cohérence. Ce travail de thèse présente des solutions pour la modélisation et la construction des BDGI et cohérentes pour les SGBD SQL et NoSQL.Les méthodes de modélisation conceptuelle de données géographiques imprécises proposées ne permettent pas de répondre de façon satisfaisante aux besoins de modélisation du monde réel. Nous présentons une version étendue de l’approche F-Perceptory pour la conception de BDGI. Afin de construire la BDGI dans un système relationnel, nous présentons un ensemble de règles de transformation automatique de modèles pour générer à partir du modèle conceptuel flou le modèle physique. Nous implémentons ces solutions sous forme d’un prototype baptisé FPMDSG.Pour les systèmes NoSQL type document. Nous présentons un modèle logique baptisé Fuzzy GeoJSON afin de mieux cerner la structure des données géographiques imprécises. En plus, ces systèmes manquent de pertinence pour la cohérence des données ; nous présentons une méthodologie de validation pour un stockage cohérent. Les solutions proposées sont implémentées sous forme d'un processus de validation. / Today, research on the storage and the integration of spatial data is an important element that revitalizes the research on data quality. Taking into account the imperfection of geographic data, particularly its imprecision, adds real complexity.
Along with the increase in quality requirements centered on data (accuracy, completeness, topicality), the need for intelligible (logically consistent) information is constantly increasing. From this point of view, we are interested in Imprecise Geographic Databases (IGDBs) and their logical coherence. This work proposes solutions to build consistent IGDBs for SQL and NoSQL database systems. The design methods proposed for imprecise geographic data modeling do not satisfactorily meet the modeling needs of the real world. We present an extension to the F-Perceptory approach for IGDB design. To generate a coherent definition of imprecise geographic objects and build the IGDB in a relational system, we present a set of rules for automatic model transformation. Based on these rules, we develop a process to generate the physical model from the fuzzy conceptual model. We implement these solutions as a prototype called FPMDSG. For NoSQL document-oriented databases, we present a logical model called Fuzzy GeoJSON to better express the structure of imprecise geographic data. In addition, these systems lack support for data consistency; therefore, we present a validation methodology for consistent storage. The proposed solutions are implemented as a schema-driven pipeline based on a Fuzzy GeoJSON schema and semantic constraints.
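To make the Fuzzy GeoJSON idea concrete, here is a guess at what such a document could look like: a fuzzy region described by a certain "core" boundary and a maximal "support" boundary, following common fuzzy-region models. The field names are our assumptions for illustration, not the thesis's actual schema.

```python
# Hypothetical Fuzzy GeoJSON-style document for an imprecise region.
import json

fuzzy_feature = {
    "type": "FuzzyFeature",
    "geometry": {
        "type": "FuzzyPolygon",
        # area that certainly belongs to the region (membership = 1)
        "core": [[[0, 0], [2, 0], [2, 2], [0, 2], [0, 0]]],
        # largest area that could belong to it (membership > 0)
        "support": [[[-1, -1], [3, -1], [3, 3], [-1, 3], [-1, -1]]],
    },
    "properties": {"name": "flood_zone"},
}

encoded = json.dumps(fuzzy_feature)
decoded = json.loads(encoded)
print(decoded["geometry"]["type"])  # → FuzzyPolygon
```

A validation pipeline of the kind the abstract describes would then check such documents against a schema plus semantic constraints, e.g. that the core ring lies inside the support ring.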
100

L'espace documentaire en restructuration : l'évolution des services des bibliothèques universitaires. / The information space structured by the digital approach : development of the service offering in academic libraries

Bourdenet, Philippe 05 December 2013 (has links)
Le catalogue occupe une place privilégiée dans l’offre de service des bibliothèques universitaires, pivot de l’intermédiation. Depuis 10 ans, il traverse une crise grave, voyant les usagers le délaisser à la faveur des moteurs de recherche généralistes. Le web, plus qu’un sérieux concurrent, devance aujourd’hui les systèmes d’information documentaires, et devient le point d’entrée principal pour la recherche d’information. Les bibliothèques tentent de structurer un espace documentaire qui soit habité par les usagers, au sein duquel se développe l’offre de service, mais celle-ci se présente encore comme une série de silos inertes, sans grande possibilité de navigation, malgré de considérables efforts d’ingénierie et des pistes d’évolution vers les outils de découverte. La profession, consciente de cette crise profonde, après avoir accusé les remous occasionnés par la dimension disruptive du numérique, cherche des moyens pour adapter et diversifier son offre, fluidifier la diffusion de l’information, et se réinvente un rôle d’intermédiation en cherchant à tirer profit des nouvelles pratiques des usagers, de leurs nouvelles attentes, et de nouvelles perspectives. Les bibliothèques placent leur espoir dans de nouveaux modèles de données, tentent d’y ajouter un niveau d’abstraction favorisant les liaisons avec l’univers de la connaissance. L’évolution vers le web sémantique semble une opportunité à saisir pour valoriser les collections et les rendre exploitables dans un autre contexte, au prix d’importants efforts que cette analyse tente de mesurer. Une approche constructiviste fondée sur l’observation participante et le recueil de données offre une vision issue de l’intérieur de la communauté des bibliothèques sur l’évolution des catalogues et des outils d’intermédiation, et ouvre des perspectives sur leurs enjeux. 
/ The catalog takes up a special position in the supply of services of academic libraries, as a pivot of intermediation between users and the information professionals who carry the responsibility for building up collections. For 10 years it has been going through a serious crisis, with libraries seeing their patrons prefer general or commercial search engines. The Web is more than a serious competitor today: it is ahead of document information systems and has become the main access point for information retrieval. Libraries are trying to structure an information space that is temporarily or permanently inhabited by users, in which the service offering is developed, but it is still presented as a series of silos, with few opportunities for navigation between them despite considerable engineering efforts and a perspective of evolution towards discovery tools. The profession, having become aware of this deep crisis after weathering the turbulence caused by the disruptive shift to digital, is looking for ways to adapt and diversify its offering, to streamline the dissemination of information, and to reinvent its intermediary role, trying to take advantage of users' new practices, new expectations and new prospects. Libraries put their hope in new data models, trying to add a level of abstraction promoting links with the world of knowledge. The evolution towards the Semantic Web seems to be a valuable opportunity to enhance the collections and make them usable in another context, at the cost of significant efforts that this analysis attempts to measure. A constructivist approach based on participant observation and data collection offers a view from inside the library community on the development of catalogs and intermediation tools, and an outlook on the issues at stake.
