  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Évaluer l'apport du binaural dans une application mobile audiovisuelle / Assessing the quality of experience of audiovisual services in a context of mobility : contribution of sound immersion

Moreira, Julian 10 July 2019 (has links)
In recent years, smartphone and tablet performance has increased significantly (CPU, screen resolution, cameras, etc.). This is particularly visible in the video quality of mobile media services, whether for video streaming applications or for interactive applications such as video games. However, these evolutions are barely matched by the integration of high-quality sound reproduction systems. Alongside them, new technologies for spatialised sound over headphones have been developed, notably binaural rendering based on HRTF (Head Related Transfer Functions) filters. In this thesis, we assess the potential contribution of binaural technology to the quality of experience of an audiovisual mobile application. Part of our work was dedicated to defining what an “audiovisual mobile application” is, which kinds of application could be fruitfully experienced with binaural sound, and which of those could support a comparative experiment with and without binaural rendering. First, coupling binaural sound with a visual rendered on a mobile device raises a perceptual question: how can a virtual scene be arranged spatially when its sound can spread all around the user while its visual is confined to a very small screen? We propose an experiment in these conditions to study how far a sound and a visual can be moved apart without breaking their perceptual fusion. The results reveal a strong tolerance of subjects to spatial discrepancies between the two modalities. Notably, neither the absence of HRTF individualisation nor a large separation in elevation between sound and visual seems to affect perception. Moreover, subjects consider the virtual scene as if they were projected inside it, at the camera’s position, regardless of their distance to the phone. All these results suggest that an association between binaural sound and a visual on a smartphone could work under conditions close to general-public use. In the second part, we address the main question of the thesis, the contribution of binaural sound, with an experiment in a realistic context of use. Thirty subjects play an Infinite Runner video game in their daily lives. The game was developed for the occasion in two versions, one monophonic and one binaural. The experiment lasts five weeks, at a rate of two sessions per day, following a protocol known as the Experience Sampling Method. At each session we collect scores of immersion, memorisation and performance, and compare them between the monophonic and binaural sessions. The results indicate significantly better immersion in the binaural sessions; no effect of sound rendering was found for memorisation or performance. Beyond the contribution of binaural sound, we discuss the protocol and the validity of the collected data, weighing theoretical soundness against practical feasibility.
412

Feature selection in short-term load forecasting / Val av attribut vid kortvarig lastprognos för energiförbrukning

Söderberg, Max Joel, Meurling, Axel January 2019 (has links)
This paper investigates the correlation between energy consumption 24 hours ahead and the features used to predict it. The features come from three categories: weather, time and previous energy consumption. The correlations are computed using Pearson correlation and mutual information. The most highly correlated features were those representing previous energy consumption, followed by temperature and month. Two identical feature sets containing all attributes (in this report, the words “attribute” and “feature” are used interchangeably) were obtained by ranking the features according to correlation. Three further feature sets were created manually: the first contained seven attributes representing previous energy consumption over the seven days prior to the day of prediction; the second consisted of weather and time attributes; the third consisted of all attributes from the first and second sets. These sets were then compared on different machine learning models. The set containing all attributes and the set containing previous energy attributes yielded the best performance for each machine learning model.
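The ranking step described in this abstract can be illustrated with a short, hedged sketch: the column names and the 24-hour-ahead target below are hypothetical stand-ins for the report's actual dataset, and only the Pearson and mutual-information ranking is shown.

```python
# Sketch of ranking candidate features against next-day consumption.
# Column names (load_next_24h, temperature, month, load_lag_*) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

def rank_features(df: pd.DataFrame, target: str = "load_next_24h") -> pd.DataFrame:
    X = df.drop(columns=[target])
    y = df[target].to_numpy()
    # Absolute Pearson correlation: strength of linear association with the target.
    pearson = {c: abs(np.corrcoef(X[c].to_numpy(), y)[0, 1]) for c in X.columns}
    # Mutual information: also captures non-linear dependencies.
    mi = dict(zip(X.columns, mutual_info_regression(X.to_numpy(), y, random_state=0)))
    ranking = pd.DataFrame({"pearson_abs": pearson, "mutual_info": mi})
    return ranking.sort_values("pearson_abs", ascending=False)
```

In the paper's terms, the lagged-consumption columns would be expected to dominate both rankings, followed by temperature and month.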
413

Multimodal Data Management in Open-world Environment

K M A Solaiman (16678431) 02 August 2023 (has links)
The availability of abundant multimodal data, including textual, visual, and sensor-based information, holds the potential to improve decision-making in diverse domains. Extracting data-driven decision-making information from heterogeneous and changing datasets in real-world data-centric applications requires achieving complementary functionalities of multimodal data integration, knowledge extraction and mining, situationally-aware data recommendation to different users, and uncertainty management in the open-world setting. To achieve a system that encompasses all of these functionalities, several challenges need to be effectively addressed: (1) How to represent and analyze heterogeneous source contents and application context for multimodal data recommendation? (2) How to predict and fulfill current and future needs as new information streams in without user intervention? (3) How to integrate disconnected data sources and learn relevant information to specific mission needs? (4) How to scale from processing petabytes of data to exabytes? (5) How to deal with uncertainties in open-world that stem from changes in data sources and user requirements?

This dissertation tackles these challenges by proposing novel frameworks, learning-based data integration and retrieval models, and algorithms to empower decision-makers to extract valuable insights from diverse multimodal data sources. The contributions of this dissertation can be summarized as follows: (1) We developed SKOD, a novel multimodal knowledge querying framework that overcomes the data representation, scalability, and data completeness issues while utilizing streaming brokers and RDBMS capabilities with entity-centric semantic features as an effective representation of content and context. Additionally, as part of the framework, a novel text attribute recognition model called HART was developed, which leveraged language models and syntactic properties of large unstructured texts. (2) In the SKOD framework, we incrementally proposed three different approaches for data integration of the disconnected sources from their semantic features to build a common knowledge base with the user information need: (i) EARS: a mediator approach using schema mapping of the semantic features and SQL joins was proposed to address scalability challenges in data integration; (ii) FemmIR: a data integration approach for more susceptible and flexible applications that utilizes neural network-based graph matching techniques to learn coordinated graph representations of the data. It introduces a novel graph creation approach from the features and a novel similarity metric among data sources; (iii) WeSJem: this approach allows zero-shot similarity matching and data discovery by using contrastive learning to embed data samples and query examples in a high-dimensional space using features as a novel source of supervision instead of relevance labels. (3) Finally, to manage uncertainties in multimodal data management for open-world environments, we characterized novelties in multimodal information retrieval based on data drift. Moreover, we proposed a novelty detection and adaptation technique as an augmentation to WeSJem.

The effectiveness of the proposed frameworks, models, and algorithms was demonstrated through real-world system prototypes that solved open problems requiring large-scale human endeavors and computational resources. Specifically, these prototypes assisted law enforcement officers in automating investigations and finding missing persons.
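To make the zero-shot matching idea behind WeSJem more tangible, here is a small, hypothetical sketch. It only illustrates ranking data samples by embedding similarity to a query example; the learned multimodal encoder and the contrastive training on feature supervision described in the dissertation are not reproduced.

```python
# Toy sketch of zero-shot similarity matching in a shared embedding space.
# embed() is a hypothetical stand-in for a learned multimodal encoder.
import numpy as np

def embed(features: dict, vocabulary: list) -> np.ndarray:
    """Project a bag of extracted semantic features onto a fixed vocabulary and normalise."""
    v = np.array([float(features.get(term, 0.0)) for term in vocabulary])
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def rank_by_similarity(query: dict, samples: list, vocabulary: list) -> list:
    """Return sample indices ordered by cosine similarity to the query example."""
    q = embed(query, vocabulary)
    sims = [float(q @ embed(s, vocabulary)) for s in samples]
    return sorted(range(len(samples)), key=lambda i: sims[i], reverse=True)

vocabulary = ["red", "vehicle", "sedan", "person", "backpack"]
query = {"vehicle": 1.0, "red": 1.0}
samples = [{"person": 1.0, "backpack": 0.5}, {"vehicle": 1.0, "sedan": 0.8, "red": 0.6}]
print(rank_by_similarity(query, samples, vocabulary))  # [1, 0]: the red sedan matches best
```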
414

Fronto-parietal neural activity during multi-attribute decision-making

Nakahashi, Ayuno 01 1900 (has links)
This thesis examines two alternative models of action decisions through human behavioural data and monkey electrophysiological data obtained during a multi-attribute decision task. Classic psychological theories suggest that decision-making is a function of the Central Executive (CE). In line with this, many studies have shown neural correlates of decision variables in the prefrontal cortex (PFC), strengthening the notion that decisions are made at an abstract level in the brain’s central executive, the PFC. However, such neural correlates are also found in sensorimotor areas, which were traditionally considered outside the CE. This has led to an alternative to the CE model, involving multiple brain areas including both executive and sensorimotor areas. This second model suggests that a decision is made when competitions within and across brain areas come to a resolution, so that a Distributed Consensus (DC) is achieved. The main objective of this thesis is to test the predictions made by these two models. To do so, we designed a multi-attribute value-based reaching task and created a situation in which the two models make distinct neural predictions. In this task, two independent visual attributes indicated the amount of reward associated with each reach target. One was a “bottom-up” (BU) brightness cue, targeting the saliency network through the dorsal visual pathway. The other was a “top-down” (TD) line-orientation cue, targeting the knowledge-based categorization network through the ventral visual pathway. We recorded from the monkey parietal reach region (PRR) and dorsal premotor cortex (PMd), whose activities have previously been implicated as modulated by similar BU and TD attributes. In most trials, the two attributes were congruent, both favoring the same target. However, a subset of trials placed the two attributes in conflict (BU and TD features favoring opposite targets) while the targets had the same total reward value. Here, the CE model predicts that the earliest choice-predictive activity should appear in an executive region, with sensorimotor regions receiving this decision broadcast; the latency difference between PRR and PMd should therefore be constant, regardless of how the decision is made. In contrast, the DC model predicts that choice latency should reflect a region’s role in the ongoing decision. Specifically, if both PRR and PMd are part of the distributed decision network and play a role in evaluating the BU and TD attributes, a choice in favor of the BU attribute should appear first in PRR and then in PMd, whereas a choice in favor of the TD attribute should appear in the opposite order. We report that human participants’ reaction time (RT) was faster in congruent trials and when using the BU information compared to the TD information. The RT distribution did not linearly reflect attribute complexity and instead suggested an incomplete integration of the available information; the result was thus not fully explainable by a pure CE model. RT was also faster when choosing between two high-valued options than between low-valued options, suggesting that the Weber-Fechner law does not apply to visual attributes that indicate value. Our first monkey’s RT distribution was similar to that of the human participants. Neurally, the choice latency of PMd was almost always shorter than that of PRR, PRR never preceded PMd, and the latency difference between these regions was not constant. PMd showed a pre-stimulus baseline bias in free-choice trials, whereas PRR did not. The distribution of choice latency in PMd also varied with trial conditions, whereas that of PRR only discriminated single from multiple targets. A similar trend was seen in preliminary analyses of local field potentials. Finally, preliminary results suggest more consistent effects of microstimulation in PMd than in PRR. Our results support a causal role for PMd but not for PRR. They are consistent with previous reports of choice-related neural activity in parietal regions, as PRR activity did reflect the monkey’s choice in our task, and also with studies finding no evidence for a causal parietal role in decision-making, as the relative order of choice-predictive activity in PRR and PMd did not vary between conditions. In light of the two models, our results suggest a third alternative, which potentially includes PMd, but not PRR, as part of the decision network.
415

Multicriteria Techniques for Sustainable Supply Chain Management

Barrera Jimenez, Ivan Felipe 30 January 2025 (has links)
Thesis by compendium of publications. Multicriteria methods provide an analytical and structured approach to decision making in supply chain management. These techniques allow multicriteria evaluations, which are essential for choosing and managing sustainable business partners. The aim of this thesis is to contribute to sustainable supply chain management by developing new multicriteria models and techniques to assess suppliers and customers. Models have been designed that incorporate business preferences in order to make collaborative decisions in the transparent selection and ranking of alternatives based on sustainability criteria. New methods have also been developed to classify alternatives into ordered groups and to assess the quality of that classification. Both the models and the methods have been validated on empirical cases and compared with alternative approaches. The methodology is based on an in-depth literature review as well as the expertise of supply chain professionals. The proposed multicriteria models integrate techniques such as the Analytic Hierarchy Process (AHP), Multi-Attribute Utility Theory (MAUT) and the PROMETHEE method. Three new algorithms have also been developed for classifying alternatives into nominal and ordered groups (the sorting problem). Firstly, a hybrid multicriteria model has been proposed and validated with real data for technology supplier qualification, selection and ranking. This model integrates compensatory (AHP, MAUT) and non-compensatory (PROMETHEE, FlowSort) methods in a hierarchy with sustainability criteria to evaluate products, suppliers and manufacturers. Validation of the model in a real context and its comparison with an alternative model demonstrated its ability to provide relevant and transparent information for decision making in the sustainable evaluation of technology suppliers in the banking sector. Secondly, a new algorithm has been designed, called Global Local Net Flow sorting (GLNF sorting), which classifies alternatives into ordered groups based on the net flows generated in global and local searches with PROMETHEE. In addition, the SILhouette for Sorting (SILS) algorithm has been designed to calculate a quality index for the classifications. Both algorithms have been empirically validated on supplier segmentation and their results compared with other published methods. On the one hand, GLNF sorting excels at discriminating between suppliers close to the limiting profiles of the groups by exploiting the level of preference similarity between alternatives. On the other hand, SILS improves the quality of the assignment of alternatives to groups, allows for a detailed analysis of suppliers and facilitates decision making. Thirdly, a customer segmentation model based on transactions and collaboration has been proposed for the business-to-business context, applying AHP and GLNF sorting. Validated with 8,157 customers of a multinational company, it has been assessed with SILS and descriptive statistics. This model generates more homogeneous and robust groups than the K-means clustering method. The tool enables companies to automate decisions and perform detailed analyses to improve customer relationships, in line with their collaboration strategies and market approaches. Fourthly, the global and local searches have been used to propose a two-dimensional nominal classification algorithm, which provides a strategic matrix that is very useful for supply chain managers. Finally, the PrometheeTools software package has been developed in R, which automates the application of PROMETHEE, GLNF sorting and SILS to solve multicriteria ranking and classification problems. This package has been successfully validated and stands out for its efficiency, especially when solving PROMETHEE problems with thousands of alternatives. It is available as open access in the CRAN repository for use by researchers and practitioners interested in multicriteria decision making. / Barrera Jimenez, IF. (2024). Multicriteria Techniques for Sustainable Supply Chain Management [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202879
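Because several of these contributions build on PROMETHEE net flows, a minimal sketch of the PROMETHEE II flow computation may help to fix ideas. It uses the simple "usual" preference function and hypothetical supplier scores; it is not the GLNF sorting or SILS algorithm itself, only the underlying net-flow step they start from.

```python
# Minimal PROMETHEE II sketch: net flows from pairwise preference comparisons.
# The "usual" preference function (1 if strictly better, else 0) is used for simplicity.
import numpy as np

def net_flows(scores: np.ndarray, weights: np.ndarray, maximize: np.ndarray) -> np.ndarray:
    """scores: (alternatives x criteria); weights sum to 1; maximize flags benefit criteria."""
    n = scores.shape[0]
    signed = np.where(maximize, scores, -scores)        # treat cost criteria as negated benefits
    pi = np.zeros((n, n))                               # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a != b:
                pref = (signed[a] > signed[b]).astype(float)   # usual preference function
                pi[a, b] = float(weights @ pref)
    phi_plus = pi.sum(axis=1) / (n - 1)                 # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)                # negative (entering) flow
    return phi_plus - phi_minus                         # net flow: higher means preferred

scores = np.array([[0.9, 200.0], [0.7, 120.0], [0.8, 150.0]])   # hypothetical suppliers
weights = np.array([0.6, 0.4])                                   # quality, cost
maximize = np.array([True, False])                                # maximise quality, minimise cost
print(net_flows(scores, weights, maximize))
```

GLNF sorting, as described in the abstract, would then combine such net flows from global and local searches to assign alternatives to ordered groups, and SILS would score the quality of that assignment.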
416

Iconographie de Fortune au Moyen Âge et à la Renaissance (XIe-XVIe siècle) / Iconography of Fortune in the Middle Ages and the Renaissance (XIth-XVIth c.)

Vassilieva-Codognet, Olga 16 May 2017 (has links)
This work on the iconography of Fortune in the Middle Ages and the Renaissance is based on a corpus of more than one thousand images, of which only a third is reproduced in the document. The study begins with a necessary lexicographical enquiry aimed at better understanding the various meanings of the Latin word “fortuna” and the French word “fortune” from the 11th to the 16th century. The second section addresses the iconographical motif of the mediaeval Wheel of Fortune, that wheel on whose rim four human beings occupy different positions: the first ascends, the second is enthroned, the third falls and the fourth lies on the ground. It follows the motif from its inception in a Beneventan manuscript of the 1060s-1070s to its diffusion across various media (mural painting, monumental sculpture, mosaic, etc.) as well as its different uses (didactic, emblematic, divinatory). The third section identifies the numerous variants that this fertile motif generated over the centuries: the Wheel of Life, satirical animal Wheels, the Wheel of the Vicissitudes of Humanity, etc. The fourth section studies the personification of Fortune, which appears in the 12th century in both images and texts before becoming a star of late mediaeval iconography, her figure gracing innumerable manuscripts of Boethius, Jean de Meun, Giovanni Boccaccio and Christine de Pizan. The fifth and final section is devoted to Fortune’s mutation during the Renaissance: changing both form and function, the blind and treacherous goddess of fate abandons her didactic function, and the wheel of examples that comes with it, and becomes a comely naked young woman with a forelock floating in the wind, whose function is propitiatory and whose use is emblematic.
417

Well-Formed and Scalable Invasive Software Composition / Wohlgeformte und Skalierbare Invasive Softwarekomposition

Karol, Sven 26 June 2015 (has links) (PDF)
Software components provide essential means to structure and organize software effectively. However, required component abstractions are frequently not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition and composes programs and other software artifacts at the level of syntax trees. To this end, a unifying fragment component model is related to the context-free grammar of a language in order to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that the composition always yields results that are valid with respect to the underlying context-free grammar. However, given a language’s context-free grammar, the composition result may still be incorrect: context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled and/or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC, an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps. Developing ISC systems for complex languages such as programming languages is a demanding undertaking: composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine, and the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages; hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC, a variant of ISC that uses island component models as a means to define component models for partially specified languages while the whole language is still supported. Additionally, a scalable workflow for agile composition-system development is proposed which supports the development of ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT. It supports “classic”, well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C-preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications such as a library of parallel algorithmic skeletons.
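As a rough illustration of what composition at the syntax-tree level with a guarding check looks like, consider the following toy sketch. It is hypothetical and does not reflect the SkAT framework's actual API; it merely shows a fragment being bound to a declared slot and a simple context-sensitive check playing the role of a fragment contract.

```python
# Toy sketch: bind a fragment into a named slot of a syntax tree, then run a
# context-sensitive check (a stand-in for a fragment contract in well-formed ISC).
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                                   # e.g. "method", "slot", "call"
    name: str = ""
    children: list = field(default_factory=list)

def bind(tree: Node, slot_name: str, fragment: Node) -> Node:
    """Replace every slot with the given name by the fragment (an extension point)."""
    if tree.kind == "slot" and tree.name == slot_name:
        return fragment
    tree.children = [bind(child, slot_name, fragment) for child in tree.children]
    return tree

def contract_ok(tree: Node, declared: set) -> bool:
    """Context-sensitive constraint: every call must refer to a declared method."""
    if tree.kind == "call" and tree.name not in declared:
        return False
    return all(contract_ok(child, declared) for child in tree.children)

template = Node("method", "run", [Node("slot", "body")])
composed = bind(template, "body", Node("call", "log"))
print(contract_ok(composed, declared={"log"}))   # True: the composed tree satisfies the contract
```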
418

Well-Formed and Scalable Invasive Software Composition

Karol, Sven 18 May 2015 (has links)
Software components provide essential means to structure and organize software effectively. However, required component abstractions are frequently not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition and composes programs and other software artifacts at the level of syntax trees. To this end, a unifying fragment component model is related to the context-free grammar of a language in order to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that the composition always yields results that are valid with respect to the underlying context-free grammar. However, given a language’s context-free grammar, the composition result may still be incorrect: context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled and/or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC, an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps. Developing ISC systems for complex languages such as programming languages is a demanding undertaking: composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine, and the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages; hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC, a variant of ISC that uses island component models as a means to define component models for partially specified languages while the whole language is still supported. Additionally, a scalable workflow for agile composition-system development is proposed which supports the development of ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT. It supports “classic”, well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C-preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications such as a library of parallel algorithmic skeletons.
419

Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings / Ontologie-getriebene, geführte Visualisierung mit expliziten und komponierbaren Abbildungen

Polowinski, Jan 08 November 2017 (has links) (PDF)
Data masses on the World Wide Web can hardly be managed by humans or machines. One option is the formal description and linking of data sources using Semantic Web and Linked Data technologies. Ontologies written in standardised languages foster the sharing and linking of data as they provide a means to formally define concepts and relations between these concepts. A second option is visualisation. The visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively few efforts have been made at combining both options, although the formality and rich semantics of ontological data make them an ideal candidate for visualisation. Advanced visualisation design systems support the visualisation of tabular, typically statistical data. However, visualisations of ontological data still have to be created manually, since automated solutions are often limited to generic lists or node-link diagrams. Also, the semantics of ontological data are not exploited for guiding users through visualisation tasks. Finally, once a good visualisation setting has been created, it cannot easily be reused and shared. Trying to tackle these problems, we had to answer how to define composable and shareable mappings from ontological data to visual means and how to guide the visual mapping of ontological data. We present an approach that allows for the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, we aim at tailor-made graphics, produced using the whole palette of visual means in a flexible, bottom-up approach. It not only allows for visualising ontologies, but uses ontologies to guide users when visualising data and to drive the visualisation process at various places: first, as a rich source of information on data characteristics; second, as a means to formally describe the vocabulary for building abstract graphics; and third, as a knowledge base of facts on visualisation. This is why we call our approach ontology-driven. We suggest generating an Abstract Visual Model (AVM) to represent and »synthesise« a graphic following a role-based approach, inspired by the one used by J. v. Engelhardt for the analysis of graphics. It consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mappings model, based on the declarative RDFS/OWL Visualisation Language (RVL), determines a set of transformations from the domain data to the AVM. RVL allows for composable visual mappings that can be shared and reused across platforms. To guide the user, for example, we discourage the construction of mappings that are suboptimal according to an effectiveness ranking formalised in the fact base and suggest more effective mappings instead. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are additional contributions of this thesis. Further, we initially analysed the state of the art in visualisation and RDF presentation, comparing 10 approaches against 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes covering the essential parts of our approach to show its feasibility. We show how the mapping process can be supported by tools displaying warning messages for non-optimal visual mappings, e.g., by considering relation characteristics such as »symmetry«. In a constructive evaluation, we challenge both the RVL language and the latest prototype by trying to regenerate sketches of graphics we created manually during the analysis. We demonstrate how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified, and half of them can be almost or completely implemented.
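The idea of explicit, composable mappings from ontological data to visual means can be sketched informally as follows. This is a hypothetical illustration in plain Python over toy triples, not RVL syntax and not the actual RVL processing pipeline.

```python
# Toy sketch: declarative mappings from RDF-like triples to visual attributes,
# loosely echoing RVL's idea of explicit, composable visual mappings.
triples = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:acme", "rdf:type", "ex:Organisation"),
    ("ex:alice", "ex:worksFor", "ex:acme"),
]

# Each mapping sends a data property (and optionally a value) to graphic attributes.
mappings = [
    {"on": ("rdf:type", "ex:Person"), "set": {"shape": "circle"}},
    {"on": ("rdf:type", "ex:Organisation"), "set": {"shape": "square"}},
    {"on": ("ex:worksFor", None), "set": {"edge": "arrow"}},
]

def apply_mappings(triples, mappings):
    """Compose all matching mappings into per-subject visual attribute dictionaries."""
    visual = {}
    for s, p, o in triples:
        for m in mappings:
            prop, value = m["on"]
            if p == prop and (value is None or o == value):
                visual.setdefault(s, {}).update(m["set"])
    return visual

print(apply_mappings(triples, mappings))
# {'ex:alice': {'shape': 'circle', 'edge': 'arrow'}, 'ex:acme': {'shape': 'square'}}
```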
420

Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings

Polowinski, Jan 20 January 2017 (has links)
Data masses on the World Wide Web can hardly be managed by humans or machines. One option is the formal description and linking of data sources using Semantic Web and Linked Data technologies. Ontologies written in standardised languages foster the sharing and linking of data as they provide a means to formally define concepts and relations between these concepts. A second option is visualisation. The visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively few efforts have been made on combining both options, although the formality and rich semantics of ontological data make it an ideal candidate for visualisation. Advanced visualisation design systems support the visualisation of tabular, typically statistical data. However, visualisations of ontological data still have to be created manually, since automated solutions are often limited to generic lists or node-link diagrams. Also, the semantics of ontological data are not exploited for guiding users through visualisation tasks. Finally, once a good visualisation setting has been created, it cannot easily be reused and shared. Trying to tackle these problems, we had to answer how to define composable and shareable mappings from ontological data to visual means and how to guide the visual mapping of ontological data. We present an approach that allows for the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, we aim at tailor-made graphics, produced using the whole palette of visual means in a flexible, bottom-up approach. It not only allows for visualising ontologies, but uses ontologies to guide users when visualising data and to drive the visualisation process at various places: First, as a rich source of information on data characteristics, second, as a means to formally describe the vocabulary for building abstract graphics, and third, as a knowledge base of facts on visualisation. This is why we call our approach ontology-driven. We suggest generating an Abstract Visual Model (AVM) to represent and »synthesise« a graphic following a role-based approach, inspired by the one used by J. v. Engelhardt for the analysis of graphics. It consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mappings model, based on the declarative RDFS/OWL Visualisation Language (RVL), determines a set of transformations from the domain data to the AVM. RVL allows for composable visual mappings that can be shared and reused across platforms. To guide the user, for example, we discourage the construction of mappings that are suboptimal according to an effectiveness ranking formalised in the fact base and suggest more effective mappings instead. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are additional contributions of this thesis. Further, we initially analysed the state of the art in visualisation and RDF-presentation comparing 10 approaches by 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes covering the essential parts of our approach to show its feasibility. We show how the mapping process can be supported by tools displaying warning messages for non-optimal visual mappings, e.g., by considering relation characteristics such as »symmetry«. 
In a constructive evaluation, we challenge both the RVL language and the latest prototype by trying to regenerate sketches of graphics that we created manually during the analysis. We demonstrate how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified, and half of them can be almost or completely implemented.

Contents:
Legend and Overview of Prefixes
1 Introduction
2 Background: 2.1 Visualisation; 2.1.1 What is Visualisation?; 2.1.2 What are the Benefits of Visualisation?; 2.1.3 Visualisation Related Terms Used in this Thesis; 2.1.4 Visualisation Models and Architectural Patterns; 2.1.5 Visualisation Design Systems; 2.1.6 What is the Difference between Visual Mapping and Styling?; 2.1.7 Lessons Learned from Style Sheet Languages; 2.2 Data; 2.2.1 Data – Information – Knowledge; 2.2.2 Structured Data; 2.2.3 Ontologies in Computer Science; 2.2.4 The Semantic Web and its Languages; 2.2.5 Linked Data and Open Data; 2.2.6 The Metamodelling Technological Space; 2.2.7 SPIN; 2.3 Guidance; 2.3.1 Guidance in Visualisation
3 Problem Analysis: 3.1 Problems of Ontology Visualisation Approaches; 3.2 Research Questions; 3.3 Set up of the Case Studies; 3.3.1 Case Studies in the Life Sciences Domain; 3.3.2 Case Studies in the Publishing Domain; 3.3.3 Case Studies in the Software Technology Domain; 3.4 Analysis of the Case Studies’ Ontologies; 3.5 Manual Sketching of Graphics; 3.6 Analysis of the Graphics for Typical Visualisation Cases; 3.7 Requirements; 3.7.1 Requirements for Visualisation and Interaction; 3.7.2 Requirements for Data Awareness; 3.7.3 Requirements for Reuse and Composition; 3.7.4 Requirements for Variability; 3.7.5 Requirements for Tooling Support and Guidance; 3.7.6 Optional Features and Limitations
4 Analysis of the State of the Art: 4.1 Related Visualisation Approaches; 4.1.1 Short Overview of the Approaches; 4.1.2 Detailed Comparison by Criteria; 4.1.3 Conclusion – What Is Still Missing?; 4.2 Visualisation Languages; 4.2.1 Short Overview of the Compared Languages; 4.2.2 Detailed Comparison by Language Criteria; 4.2.3 Conclusion – What Is Still Missing?; 4.3 RDF Presentation Languages; 4.3.1 Short Overview of the Compared Languages; 4.3.2 Detailed Comparison by Language Criteria; 4.3.3 Additional Criteria for RDF Display Languages; 4.3.4 Conclusion – What Is Still Missing?; 4.4 Model-Driven Interfaces; 4.4.1 Metamodel-Driven Interfaces; 4.4.2 Ontology-Driven Interfaces; 4.4.3 Combined Usage of the Metamodelling and Ontology Technological Space
5 A Visualisation Ontology – VISO: 5.1 Methodology Used for Ontology Creation; 5.2 Requirements for a Visualisation Ontology; 5.3 Existing Approaches to Modelling in the Field of Visualisation; 5.3.1 Terminologies and Taxonomies; 5.3.2 Existing Visualisation Ontologies; 5.3.3 Other Visualisation Models and Approaches to Formalisation; 5.3.4 Summary; 5.4 Technical Aspects of VISO; 5.5 VISO/graphic Module – Graphic Vocabulary; 5.5.1 Graphic Representations and Graphic Objects; 5.5.2 Graphic Relations and Syntactic Structures; 5.6 VISO/data Module – Characterising Data; 5.6.1 Data Structure and Characteristics of Relations; 5.6.2 The Scale of Measurement and Units; 5.6.3 Properties for Characterising Data Variables in Statistical Data; 5.7 VISO/facts Module – Facts for Vis. Constraints and Rules; 5.7.1 Expressiveness of Graphic Relations; 5.7.2 Effectiveness Ranking of Graphic Relations; 5.7.3 Rules for Composing Graphics; 5.7.4 Other Rules to Consider for Visual Mapping; 5.7.5 Providing Named Value Collections; 5.7.6 Existing Approaches to the Formalisation of Visualisation Knowledge; 5.7.7 The VISO/facts/empiric Example Knowledge Base; 5.8 Other VISO Modules; 5.9 Conclusions and Future Work; 5.10 Further Use Cases for VISO; 5.11 VISO on the Web – Sharing the Vocabulary to Build a Community
6 A VISO-Based Abstract Visual Model – AVM: 6.1 Graphical Notation Used in this Chapter; 6.2 Elementary Graphic Objects and Graphic Attributes; 6.3 N-Ary Relations; 6.4 Binary Relations; 6.5 Composition of Graphic Objects Using Roles; 6.6 Composition of Graphic Relations Using Roles; 6.7 Composition of Visual Mappings Using the AVM; 6.8 Tracing; 6.9 Is it Worth Having an Abstract Visual Model?; 6.10 Discussion of Fresnel as a Related Language; 6.11 Related Work; 6.12 Limitations; 6.13 Conclusions
7 A Language for RDFS/OWL Visualisation – RVL: 7.1 Language Requirements; 7.2 Main RVL Constructs; 7.2.1 Mapping; 7.2.2 Property Mapping; 7.2.3 Identity Mapping; 7.2.4 Value Mapping; 7.2.5 Inheriting RVL Settings; 7.2.6 Resource Mapping; 7.2.7 Simplifications; 7.3 Calculating Value Mappings; 7.4 Defining Scale of Measurement; 7.4.1 Determining the Scale of Measurement; 7.5 Addressing Values in Value Mappings; 7.5.1 Determining the Set of Addressed Source Values; 7.5.2 Determining the Set of Addressed Target Values; 7.6 Overlapping Value Mappings; 7.7 Default Value Mapping; 7.8 Default Labelling; 7.9 Defining Interaction; 7.10 Mapping Composition and Submappings; 7.11 A Schema Language for RVL; 7.11.1 Concrete Examples of the RVL Schema; 7.12 Conclusions and Future Work
8 The OGVIC Approach: 8.1 Ontology-Driven, Guided Editing of Visual Mappings; 8.1.1 Classification of Constraints; 8.1.2 Levels of Guidance; 8.1.3 Implementing Constraint-Based Guidance; 8.2 Support of Explicit and Composable Visual Mappings; 8.2.1 Mapping Composition Cases; 8.2.2 Selecting a Context; 8.2.3 Using the Same Graphic Relation Multiple Times; 8.3 Prototype P1 (TopBraid-Composer-based); 8.4 Prototype P2 (OntoWiki-based); 8.5 Prototype P3 (Java Implementation of RVL); 8.6 Lessons Learned from Prototypes & Future Work; 8.6.1 Checking RVL Constraints and Visualisation Rules; 8.6.2 A User Interface for Editing RVL Mappings; 8.6.3 Graph Transformations with SPIN and SPARQL 1.1 Update; 8.6.4 Selection and Filtering of Data; 8.6.5 Interactivity and Incremental Processing; 8.6.6 Rendering the Final Platform-Specific Code
9 Application: 9.1 Coverage of Case Study Sketches and Necessary Features; 9.2 Coverage of Visualisation Cases; 9.3 Coverage of Requirements; 9.4 Full Example
10 Conclusions: 10.1 Contributions; 10.2 Constructive Evaluation; 10.3 Research Questions; 10.4 Transfer to Other Models and Constraint Languages; 10.5 Limitations; 10.6 Future Work
Appendices: A Case Study Sketches; B VISO – Comparison of Visualisation Literature; C RVL; D RVL Example Mappings and Application; D.1 Listings of RVL Example Mappings as Required by Prototype P3; D.2 Features Required for Implementing all Sketches; D.3 JSON Format for Processing the AVM with D3 – Hierarchical Variant
Bibliography; List of Figures; List of Tables; List of Listings

/ Data masses on the World Wide Web can hardly be grasped by humans or machines. One option is the formal description and linking of data sources with Semantic Web and Linked Data technologies. Ontologies, written in standardised languages, foster the sharing and linking of data, since they provide a means to formally define concepts and the relations between these concepts. A second option is visualisation. Visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively little effort has been made to combine both options, although the formality and rich semantics of ontological data make it an ideal candidate for visualisation. Visualisation design systems support users in visualising tabular, typically statistical data. Visualisations of ontological data, however, still have to be created manually, since automated solutions are often limited to generic list views or node-link diagrams. Nor are the semantics of ontological data exploited to guide users through visualisation tasks. Visualisation settings, once created, cannot easily be reused and shared. To solve these problems, we had to find an answer to how composable and reusable mappings from ontological data to visual means could be defined, and how users could be guided in this mapping. We present an approach that enables the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, the approach aims at tailor-made graphics, built with the whole palette of visual means in a flexible, bottom-up fashion. It not only allows ontologies to be visualised, but also uses ontologies to guide users when visualising data and to drive the visualisation process at various points: first, as a rich source of information on data characteristics; second, as a means to formally describe the vocabulary for building abstract graphics; and third, as a knowledge base of visualisation facts. This is why we call our approach ontology-driven. We propose generating an Abstract Visual Model (AVM) to synthesise a graphic in a role-based fashion, following an approach used by J. v. Engelhardt to analyse graphics. The AVM consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mapping model based on the declarative RDFS/OWL Visualisation Language (RVL) determines a set of transformations from the source data to the AVM. RVL enables composable »mappings«, visual mappings that can be shared and reused across platforms. To guide the user, we rate mappings against an effectiveness ranking formalised in the fact base and, where applicable, suggest more effective mappings. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are further contributions of this thesis.
In addition, we first analyse the state of the art in visualisation and RDF presentation by comparing 10 approaches against 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes implementing the essential parts of our approach in order to show its feasibility. We show how the mapping process can be supported by tools that display warning messages for non-optimal visual mappings, e.g. by taking into account relation characteristics such as »symmetry«. In a constructive evaluation, we challenge both the RVL language and the latest prototype by attempting to implement sketches of graphics that we created manually during the analysis. We show how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified, and half of them can be almost or completely implemented.
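The composition of complex mappings from simple ones, highlighted in the abstract above, can also be illustrated with a minimal, hypothetical sketch. Actual RVL mappings are declarative RDFS/OWL documents; the Python classes and example names below (ex:Person, ex:name, ex:role, circle) are invented for illustration only.

```python
# Toy sketch of composing a visualisation setting from small, reusable mappings.
# All class and property names are hypothetical.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class PropertyMapping:
    """Maps one data property to one graphic attribute or relation."""
    source_property: str
    graphic_relation: str

@dataclass
class ResourceMapping:
    """Maps a class of resources to a graphic object type, plus submappings."""
    source_class: str
    graphic_object: str
    submappings: list = field(default_factory=list)

    def with_submapping(self, sub: PropertyMapping) -> "ResourceMapping":
        # Composition step: return a new mapping extended by one reusable part.
        return ResourceMapping(self.source_class, self.graphic_object,
                               self.submappings + [sub])

# Simple, shareable building blocks ...
label_by_name = PropertyMapping("ex:name", "text_label")
tint_by_role = PropertyMapping("ex:role", "color_hue")

# ... composed into a more complex mapping for one visualisation setting.
person_map = (ResourceMapping("ex:Person", "circle")
              .with_submapping(label_by_name)
              .with_submapping(tint_by_role))

print(person_map)
```

The point of the sketch is only the composition pattern itself: small, shareable building blocks combined into a larger visualisation setting that could be reused or varied.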
