301 |
OLAP query optimization and result visualization / Optimisation de requêtes OLAP et visualisation des résultats. Simonenko, Ekaterina, 16 September 2011.
Nous explorons différents aspects des entrepôts de données et d'OLAP, le point commun de nos recherches étant le modèle fonctionnel pour l'analyse de données. Notre objectif principal est d'utiliser ce modèle dans l'étude de trois aspects différents, mais liés : l'optimisation de requêtes par réécriture et la gestion du cache, la visualisation du résultat d'une requête OLAP, et le mapping d'un schéma relationnel en BCNF vers un schéma fonctionnel. L'optimisation de requêtes et la gestion de cache sont des problèmes cruciaux dans l'évaluation de requêtes en général, et dans les entrepôts de données en particulier ; et la réécriture de requêtes est une des techniques de base pour l'optimisation de requêtes. Nous établissons des conditions d'implication de requêtes analytiques, en utilisant le pré-ordre partiel sur l'ensemble de requêtes, et nous définissons un algorithme sain et complet de réécriture ainsi qu'une stratégie de gestion de cache optimisée, tous deux basés sur le modèle fonctionnel. Le deuxième aspect important que nous explorons dans cette thèse est celui de la visualisation du résultat. Nous démontrons l'importance, pour la visualisation, de reproduire des propriétés essentielles des données que sont les dépendances fonctionnelles. Nous montrons que la connexion existant entre les données et leur visualisation est précisément la connexion entre leurs représentations fonctionnelles. Nous dérivons alors un cadre technique ayant pour objectif d'établir une telle connexion pour un ensemble de données et un ensemble de visualisations. En plus de l'analyse du processus de visualisation, nous utilisons le modèle fonctionnel comme guide pour la visualisation interactive, et définissons ce qu'on appelle la visualisation paramétrique. Le troisième aspect important de notre travail est l'expérimentation des résultats obtenus dans cette thèse.
Les résultats de cette thèse peuvent être utilisés afin d'analyser les données contenues dans une table en Boyce-Codd Normal Form (BCNF), étant donné que le schéma de la table peut être transformé aisément en un schéma fonctionnel. Nous présentons une telle transformation (mapping) dans cette thèse. Une fois le schéma relationnel transformé en un schéma fonctionnel, nous pouvons profiter des résultats sur l'optimisation et la visualisation de requêtes. Nous avons utilisé cette transformation dans l'implémentation de deux prototypes dans le cadre de deux projets différents. / In this thesis, we explore different aspects of Data Warehousing and OLAP, the common point of our proposals being the functional model for data analysis. Our main objective is to use that model in studying three different, but related aspects: query optimization through rewriting and cache management, query result visualization, and the mapping of a relational BCNF schema to a functional schema. Query optimization and cache management are crucial issues in query processing in general, and in data warehousing in particular; and query rewriting is one of the basic techniques for query optimization. We establish derivability conditions for analytic functional queries, using a partial pre-order over the set of queries. Then we provide a sound and complete rewriting algorithm, as well as an optimized cache management strategy, both based on the underlying functional model. A second important aspect that we explore in the thesis is that of query result visualization. We show the importance for the visualization to reflect such essential features of the dataset as functional dependencies. We show that the connection existing between data and visualization is precisely the connection between their functional representations. We then define a framework whose objective is to establish such a connection for a given dataset and a set of visualizations.
In addition to the analysis of the visualization process, we use the functional data model as a guide for interactive visualization, and define what we call a parametric visualization. A third important aspect of our work is experimentation with the results obtained in the thesis. In order to be able to analyze the data contained in a Boyce-Codd Normal Form (BCNF) table, one can use the results obtained in this thesis, provided that the schema of the table can be mapped to a functional schema. We present such a mapping in this thesis. Once the relational schema has been transformed into a functional schema, we can take advantage of the query optimization and result visualization techniques presented in the thesis. We have used this transformation in the implementation of two prototypes in the context of two different projects.
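The thesis's functional-model rewriting algorithm is not reproduced in the abstract, but the underlying cache-reuse idea can be illustrated for distributive measures such as SUM: a coarser GROUP BY can be answered by re-aggregating a cached finer one, without touching the base table. The data and function below are invented for illustration only:

```python
from collections import defaultdict

# Cached result: SUM(sales) grouped by (month, city) -- the finer query.
cached = {
    ("Jan", "Paris"): 10, ("Jan", "Lyon"): 5,
    ("Feb", "Paris"): 7,  ("Feb", "Lyon"): 3,
}

def rewrite_coarser(cached_result, keep):
    """Answer a coarser GROUP BY from a cached finer one.

    This works because SUM is distributive: re-aggregating the cached
    partial sums yields the same answer as scanning the base table.
    `keep` lists the positions of the grouping attributes to retain.
    """
    out = defaultdict(int)
    for key, measure in cached_result.items():
        coarser_key = tuple(key[i] for i in keep)
        out[coarser_key] += measure
    return dict(out)

# SUM(sales) GROUP BY month, derived entirely from the cache.
by_month = rewrite_coarser(cached, keep=[0])
print(by_month)  # {('Jan',): 15, ('Feb',): 10}
```

The same re-aggregation works for any distributive measure (COUNT, MIN, MAX); averages would additionally require cached counts.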
|
302 |
兩位華語老師談二語學習歷程：視覺化分析敘事流程、學習自主與觀眾反應之互動關係 / Two Chinese Teachers Narrating Their L2 Learning Journeys: Visual Analysis on the Interaction of Narrating Flow, Language Learning Agency, and Audience Response. 李晏禎 (Li, Yan Zhen), Unknown Date.
本研究延續Coffey (2013)的研究,將社會學的理論「自主概念」應用至華語教學研究的領域當中,採用的是敘事資料、個案研究方法。本研究中,有兩位敘事風格與學習行動迥異的華語老師,一位平穩的將自己的故事娓娓道來;另一位則關注觀眾的感受,使得說故事的過程趣味橫生。本研究以兩位老師回顧二語學習過程的音檔及書面資料為分析樣本,並依照他們敘事的時間繪製曲線圖,這兩張曲線圖包含了敘事場域中的三個不同面向,包括:兩位老師學習過程中的各個事件和人生轉捩點(二語學習自主)、說話者是否在乎觀眾反應而改變原先的說話方式或內容(說話者展現之言談自主)以及觀眾的反應。研究人員依話語給予相應不同分數,並利用Holistic-Form(整體形式)的方式繪出三面向之間的交會與變化,從中分析學習者如何敘說自己生命歷程以及自我概念的變化。
本研究之貢獻在於敘事研究的分析方法上，使用三條曲線視覺化學習者自我與時空連結與敘事現場氣氛的流動，更全面的檢視團體互動細節。並發現這種「依附/不依附」觀眾為中心的敘事模式，顯示說故事者的自我揭露程度不同。以此提出建議，當說話者展現之言談自主高、情節多且短時，要注意對方可能避重就輕、迴避了感受的真實性。同時，言談自主高起的區段可能是當事人比較痛苦或負面的經驗，可以多加留意。研究的場域與在場的觀眾會對說故事者的敘事產生影響，如採一對一訪談，應能降低前述情況的發生。並建議將之納入日後研究方法的範疇中。 / Drawing on theories of identity and agency (particularly, Coffey, 2013), this qualitative case study scrutinizes how language learner identities and agencies are performed in group storytelling sessions. The participants are two Chinese teachers with distinct narrative styles: one tells her stories quite uneventfully, while the other intentionally shapes her stories according to audience response, making the storytelling sessions full of laughter. Both oral and written narratives were gathered for holistic-form analysis, which resulted in two matrix displays of running time, response intensity, and levels of learner agency as the participants narrated critical events in their language learning trajectories. These matrices helped reveal how the three intersected and changed, and how the participants narrated the changing identities and agency in the L2 stories that they lived through.
This study contributes to the approach of data analysis in narrative study by utilizing graphical presentations to facilitate visualization and analysis of the interaction between and among the storytelling time/context, the participants' language learner agency, and audience response. It also pays close attention to how the learner story is told and within what kind of group dynamics, and it reveals different possible levels of self-exposure, that is, audience-centered and non-audience-centered narrative styles. In addition, the study alerts narrative researchers to the possibility that true emotions may be hidden and important details avoided when the narrator performs high agency in discourse, particularly with many short plots in their stories. It is also clear that segments of high intensity in changing discourse often involve emotionally charged negative episodes that deserve careful scrutiny. Since context and audience can affect the form, flow, and content of narratives, one-on-one interviews are suggested for future studies to avoid the limitations introduced by group storytelling sessions.
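The scoring-and-flagging idea behind the study's three-curve matrix displays can be sketched minimally. The per-episode scores below are invented for illustration; the study's actual coding scheme is richer, and the threshold rule only mirrors its suggestion that spikes in discourse agency deserve closer scrutiny:

```python
# Hypothetical per-episode scores along the three analyzed dimensions:
# learner agency, discourse agency (audience-directed reshaping of the
# telling), and audience response intensity.
episodes = [
    {"t": 1, "learner": 2, "discourse": 1, "audience": 0},
    {"t": 2, "learner": 3, "discourse": 4, "audience": 3},
    {"t": 3, "learner": 1, "discourse": 1, "audience": 1},
    {"t": 4, "learner": 2, "discourse": 5, "audience": 4},
]

def flag_high_discourse(episodes, threshold=3):
    """Return episode times where discourse agency spikes -- segments
    the study suggests may mask emotionally charged experiences."""
    return [e["t"] for e in episodes if e["discourse"] >= threshold]

print(flag_high_discourse(episodes))  # [2, 4]
```

Plotting the three series against narrative time (e.g., with matplotlib) would reproduce the curve-chart form the study uses for holistic-form analysis.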
|
303 |
Método de evaluación de variables e indicadores para el proceso de Bloque de Cirugía utilizando Process Mining y Data Visualization / Evaluation method of variables and indicators for Surgery Block process using Process Mining and Data Visualization. Rojas Candio, Piero Gilmar; Villantoy Pasapera, Arturo Alonso, 06 June 2020.
El presente trabajo consiste en proponer un método que permita formular y evaluar indicadores de Process Mining a través de preguntas relacionadas al funcionamiento de un proceso y permita comprender de manera sencilla las variables del proceso a través de técnicas de Data Visualization. Esta propuesta identifica cuellos de botella y violaciones de políticas de un proceso crítico en una organización de salud, ya que resulta complicado realizar mediciones y análisis para mejorar la calidad y transformación de los procesos en instituciones de atención en el sector salud. Este resultado contribuye a la mejora y optimización de la toma de decisiones por parte del equipo médico del Bloque de Cirugía. Este método está conformado por ocho actividades: 1. Definición de objetivos y preguntas, 2. Extracción de datos, 3. Preprocesamiento de datos, 4. Inspección de registro y patrón, 5. Análisis de Minería de Procesos, 6. Técnicas de Visualización de Datos, 7. Evaluación de resultados y 8. Propuestas de mejora de procesos. / In this work, we propose a method that allows us to formulate and evaluate Process Mining indicators through questions related to process traceability, and to bring about a clear understanding of the process variables through Data Visualization techniques. This proposal identifies bottlenecks and violations of policies that arise due to the difficulty of carrying out measurements and analysis for the improvement of process quality assurance and process transformation. The result contributes to the optimization of decision-making by the medical staff involved in the Surgery Block process. This method is divided into eight fundamental activities: 1. Objectives and question definition, 2. Data extraction, 3. Data preprocessing, 4. Registration and pattern inspection, 5. Process mining analysis, 6. Data visualization techniques, 7. Outcome evaluation, and 8. Process improvement approaches. / Trabajo de investigación
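A typical indicator in activity 5 (process mining analysis) for spotting bottlenecks is the mean waiting time between consecutive activities per case. The event log below is entirely hypothetical and only illustrates the kind of computation involved:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical surgery-block event log: (case_id, activity, timestamp).
log = [
    ("c1", "Admission",  "2020-01-01 08:00"),
    ("c1", "Anesthesia", "2020-01-01 09:30"),
    ("c1", "Surgery",    "2020-01-01 10:00"),
    ("c2", "Admission",  "2020-01-01 08:15"),
    ("c2", "Anesthesia", "2020-01-01 10:15"),
    ("c2", "Surgery",    "2020-01-01 10:45"),
]

def mean_transition_minutes(log):
    """Mean waiting time (minutes) between consecutive activities,
    averaged over cases -- a simple bottleneck indicator."""
    by_case = defaultdict(list)
    for case, activity, ts in log:
        by_case[case].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))
    waits = defaultdict(list)
    for events in by_case.values():
        events.sort()  # order each case's events by timestamp
        for (t0, a0), (t1, a1) in zip(events, events[1:]):
            waits[(a0, a1)].append((t1 - t0).total_seconds() / 60)
    return {pair: sum(v) / len(v) for pair, v in waits.items()}

print(mean_transition_minutes(log))
```

Here the Admission-to-Anesthesia transition averages 105 minutes versus 30 for Anesthesia-to-Surgery, flagging the former as the candidate bottleneck; dedicated tools (e.g., pm4py or Disco) compute richer variants of the same statistic.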
|
304 |
Visualización de datos y data storytelling en la toma de decisiones. Cancino Quispe, Christopher Manuel; Carrasco Cubillas, Ruth Silvana, 01 June 2021.
La mayoría de las organizaciones toman decisiones a diario de manera intuitiva, pero solo el uso de los datos, visualización y data storytelling, les asegura una ventaja competitiva, ya que, al comprender el mensaje oculto de los datos, les genera valor agregado a las organizaciones. Por ello, el propósito de la investigación es contrastar las diversas posturas y valoración de los autores para responder a la interrogante central de este estudio: ¿cuáles son las principales posturas sobre el uso de la visualización de datos y el data storytelling en la toma de decisiones en las organizaciones?
El método de investigación fue del tipo cualitativo que identificó los principales enfoques de distintos autores. El estudio se desprende de una nueva revolución industrial a la que están sometidos los tomadores de decisiones: la revolución digital, y quienes no tengan la capacidad de decidir de forma ágil y efectiva perecerán; es decir, no importa cuán increíble sea su análisis o valiosa sea su información, no generará ningún cambio en las partes interesadas si no logran comprender lo que han hecho. Por ello, la visualización de datos y el data storytelling hace la gran diferencia en la toma de decisiones y accionar de forma oportuna. / Most organizations make daily decisions intuitively, but only the use of data, visualization and data storytelling, ensures them a competitive advantage, since, by understanding the hidden message of the data, it generates added value to the organizations. Therefore, the purpose of the research is to contrast the various positions and assessment of the authors to answer the central question of this study: what are the main positions on the use of data visualization and data storytelling in decision making in organizations?
The research method was qualitative, identifying the main approaches of different authors. The study shows that decision makers are facing a new industrial revolution: the digital one, and those who do not have the ability to decide in an agile and effective way will perish; that is, no matter how incredible their analysis is or how valuable their information is, it will not generate any change in the stakeholders if they do not manage to understand what has been done. Therefore, data visualization and data storytelling make a big difference in making decisions and acting in a timely manner. / Trabajo de Suficiencia Profesional
|
305 |
Zobrazení 3D scény ve webovém prohlížeči / Displaying 3D Graphics in Web Browser. Sychra, Tomáš, January 2013.
This thesis discusses possibilities of accelerated 3D scene displaying in a Web browser. In more detail, it deals with WebGL standard and its use in real applications. An application for visualization of volumetric medical data based on JavaScript, WebGL and Three.js library was designed and implemented. Image data are loaded from Google Drive cloud storage. An important part of the application is 3D visualization of the volumetric data based on volume rendering technique called Ray-casting.
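The sampling idea behind volume ray-casting can be illustrated independently of WebGL with a maximum-intensity projection, its simplest compositing mode. This NumPy sketch with synthetic data is only an illustration; the thesis's application instead casts rays in shader code over real medical volumes and accumulates color and opacity:

```python
import numpy as np

def mip_render(volume, axis=2):
    """Maximum-intensity projection: send a parallel ray per pixel
    along one axis and keep the brightest sample on each ray. Full
    ray-casting replaces max() with front-to-back compositing via a
    transfer function, but samples the volume in the same way."""
    return volume.max(axis=axis)

# Hypothetical 3D "scan": one bright voxel inside a dark volume.
vol = np.zeros((4, 4, 4))
vol[1, 2, 3] = 0.9
image = mip_render(vol)
print(image.shape)  # (4, 4)
print(image[1, 2])  # 0.9
```

In the WebGL setting the same loop runs per fragment in GLSL, stepping through a 3D texture from the ray's entry point to its exit point.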
|
306 |
Feedback-Driven Data Clustering. Hahmann, Martin, 28 October 2013.
The acquisition of data and its analysis has become a common yet critical task in many areas of modern economy and research. Unfortunately, the ever-increasing scale of datasets has long outgrown the capacities and abilities humans can muster to extract information from them and gain new knowledge. For this reason, research areas like data mining and knowledge discovery steadily gain importance. The algorithms they provide for the extraction of knowledge are mandatory prerequisites that enable people to analyze large amounts of information. Among the approaches offered by these areas, clustering is one of the most fundamental. By finding groups of similar objects inside the data, it aims to identify meaningful structures that constitute new knowledge. Clustering results are also often used as input for other analysis techniques like classification or forecasting.
As clustering extracts new and unknown knowledge, it obviously has no access to any form of ground truth. For this reason, clustering results have a hypothetical character and must be interpreted with respect to the application domain. This makes clustering very challenging and leads to an extensive and diverse landscape of available algorithms. Most of these are expert tools that are tailored to a single narrowly defined application scenario. Over the years, this specialization has become a major trend that arose to counter the inherent uncertainty of clustering by including as much domain specifics as possible into algorithms. While customized methods often improve result quality, they become more and more complicated to handle and lose versatility. This creates a dilemma especially for amateur users whose numbers are increasing as clustering is applied in more and more domains. While an abundance of tools is offered, guidance is severely lacking and users are left alone with critical tasks like algorithm selection, parameter configuration and the interpretation and adjustment of results.
This thesis aims to solve this dilemma by structuring and integrating the necessary steps of clustering into a guided and feedback-driven process. In doing so, users are provided with a default modus operandi for the application of clustering. Two main components constitute the core of said process: the algorithm management and the visual-interactive interface. Algorithm management handles all aspects of actual clustering creation and the involved methods. It employs a modular approach for algorithm description that allows users to understand, design, and compare clustering techniques with the help of building blocks. In addition, algorithm management offers facilities for the integration of multiple clusterings of the same dataset into an improved solution. New approaches based on ensemble clustering not only allow the utilization of different clustering techniques, but also ease their application by acting as an abstraction layer that unifies individual parameters. Finally, this component provides a multi-level interface that structures all available control options and provides the docking points for user interaction.
The visual-interactive interface supports users during result interpretation and adjustment. For this, the defining characteristics of a clustering are communicated via a hybrid visualization. In contrast to traditional data-driven visualizations that tend to become overloaded and unusable with increasing volume/dimensionality of data, this novel approach communicates the abstract aspects of cluster composition and relations between clusters. This aspect orientation allows the use of easy-to-understand visual components and makes the visualization immune to scale related effects of the underlying data. This visual communication is attuned to a compact and universally valid set of high-level feedback that allows the modification of clustering results. Instead of technical parameters that indirectly cause changes in the whole clustering by influencing its creation process, users can employ simple commands like merge or split to directly adjust clusters.
The orchestrated cooperation of these two main components creates a modus operandi, in which clusterings are no longer created and disposed as a whole until a satisfying result is obtained. Instead, users apply the feedback-driven process to iteratively refine an initial solution. Performance and usability of the proposed approach were evaluated with a user study. Its results show that the feedback-driven process enabled amateur users to easily create satisfying clustering results even from different and not optimal starting situations.
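The direct-adjustment commands described above (merge, split) can be sketched as simple operations on cluster labels. This is a deliberately reduced illustration of the interaction style, not the thesis's actual feedback operators; in particular, a real split would re-cluster the affected members rather than threshold a single attribute:

```python
def merge_clusters(labels, a, b):
    """Feedback command 'merge': fold cluster b into cluster a."""
    return [a if l == b else l for l in labels]

def split_cluster(labels, values, c, pivot):
    """Feedback command 'split': divide cluster c at a 1-D pivot.
    The thresholding here is only a stand-in for re-clustering."""
    new = max(labels) + 1
    return [new if (l == c and v >= pivot) else l
            for l, v in zip(labels, values)]

labels = [0, 0, 1, 1, 2]
labels = merge_clusters(labels, 0, 2)                   # [0, 0, 1, 1, 0]
labels = split_cluster(labels, [1, 2, 5, 6, 9], 0, pivot=8)
print(labels)                                           # [0, 0, 1, 1, 2]
```

The point of such high-level commands is exactly what the thesis argues: the user edits the result directly instead of guessing which technical parameter would indirectly produce the desired change.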
|
307 |
Lineamientos para la integración de minería de procesos y visualización de datos / Guidelines for the integration of process mining and data visualization. Chise Teran, Bryhan; Hurtado Bravo, Jimmy Manuel, 04 December 2020.
Process mining es una disciplina que ha tomado mayor relevancia en los últimos años; prueba de ello es un estudio realizado por la consultora italiana HSPI en el 2018, donde se indica un crecimiento del 72% de casos de estudio aplicados sobre process mining con respecto al año 2017. Así mismo, un reporte publicado en el mismo año por BPTrends, firma especializada en procesos de negocio, afirma que las organizaciones tienen como prioridad en sus proyectos estratégicos el rediseño y automatización de sus principales procesos de negocio. La evolución de esta disciplina ha permitido superar varios de los retos que se identificaron en un manifiesto [1] realizado por los miembros de la IEEE Task Force on Process Mining en el 2012. En este sentido, y apoyados en el desafío número 11 de este manifiesto, el objetivo de este proyecto es integrar las disciplinas de process mining y data visualization a través de un modelo de interacción de lineamientos que permitan mejorar el entendimiento de los usuarios no expertos en los resultados gráficos de proyectos de process mining, a fin de optimizar los procesos de negocio en las organizaciones.
Nuestro aporte tiene como objetivo mejorar el entendimiento de los usuarios no expertos en el campo de process mining. Por ello, nos apoyamos en las técnicas de data visualization y en la psicología del color para proponer un modelo de interacción de lineamientos que permita guiar a los especialistas en process mining a diseñar gráficos que transmitan la información de forma clara y comprensible. Con ello, se busca comprender de mejor forma los resultados de los proyectos de process mining, permitiéndonos tomar mejores decisiones sobre el desempeño de los procesos de negocio en las organizaciones.
El modelo de interacción generado en nuestra investigación se validó con un grupo de usuarios relacionados a procesos críticos de diversas organizaciones del país. Esta validación se realizó a través de una encuesta donde se muestran casos a dichos usuarios a fin de constatar las 5 variables que se definieron para medir de forma cualitativa el nivel de mejora en la compresión de los gráficos al aplicar los lineamientos del modelo de interacción. Los resultados obtenidos demostraron que 4 de las 5 variables tuvieron un impacto positivo en la percepción de los usuarios según el caso que se propuso en forma de pregunta. / Process mining is a discipline that has become more relevant in recent years; proof of this is a study carried out by the Italian consultancy HSPI in 2018, where a growth of 72% of case studies applied on process mining is indicated compared to 2017. Likewise, a report published in the same year by BPTrends, a firm specialized in business processes, affirms that organizations have as a priority in their strategic projects the redesign and automation of their main business processes. The evolution of this discipline has made it possible to overcome several of the challenges that were identified in a manifesto [1] made by the members of the IEEE Task Force on Process Mining in 2012. In this sense, and supported by challenge number 11 of this manifesto, the objective of this project is to integrate the disciplines of process mining and data visualization through an interaction model of guidelines that allow to improve the understanding of non-expert users in the graphical results of process mining projects, in order to optimize the business processes in organizations.
Our contribution aims to improve the understanding of non-expert users in the field of process mining. For this reason, we rely on data visualization techniques and color psychology to propose an interaction model of guidelines that guides process mining specialists in designing graphics that convey information clearly and understandably. The aim is to make the results of process mining projects easier to understand, allowing better decisions about the performance of business processes in organizations.
The interaction model generated in our research was validated with a group of users related to critical processes from various organizations in the country. This validation was carried out through a survey where cases are shown to these users in order to verify the 5 variables that were defined to qualitatively measure the level of improvement in the compression of the graphs when applying the guidelines of the interaction model. The results obtained showed that 4 of the 5 variables had a positive impact on the perception of users according to the case that was proposed in the form of a question. / Tesis
|
308 |
Framgångsfaktorer som påverkar skapandet av en användarvänlig försäljningsdashboard: En fallstudie på Two / Success Factors Influencing the Creation of a User-friendly Sales Dashboard: A Case Study at Two. Wehlin, Rebecka, January 2020.
Companies today hold large and constantly growing amounts of data that must be managed, and they need to be able to turn raw data into valuable information that enables better decision-making. Business Intelligence (BI) makes it easier for companies to collect, store, and analyze their data, and provides a better basis for decision support. One well-known tool that can serve as a support function for decision-makers is the dashboard. A dashboard is a visual presentation on a single screen that shows the most important information required to achieve one or more goals. The purpose of a dashboard is to communicate significant information in a precise, effective, and clear way. The purpose of this bachelor's thesis is to identify and describe success factors that influence the creation of a user-friendly sales dashboard from a customer-consultant perspective. The method used in this thesis is the case study method. Five personal interviews, based on semi-structured interview guides, were conducted for this study: three with Power BI consultants from the case company Two, and two with representatives of the case company's customers Huzells and Pictura. The main conclusions of the study on creating a user-friendly sales dashboard are: the consultant must be attentive and get to know the customer; the customer must have a clear purpose and goal for the sales dashboard; communication between consultant and customer is essential; the sales dashboard must be clear and easy to understand; it must show relevant information and key performance indicators (KPIs) for the specific target group; and it must be pleasant to look at. It is important that the consultant is attentive, listens to the customer's wishes for the dashboard, and designs accordingly.
It also helps if the consultant gets to know the customer, gaining better insight into the customer's business in order to propose more suitable ideas for how the dashboard can be designed. The customer needs a clear purpose for what the dashboard should look like, what it should contain, and where it should be placed. Communication between both parties is a key precondition for maintaining a shared vision throughout the project. A user-friendly sales dashboard needs to be clear and to contain relevant information and KPIs for the specific target group.
|
309 |
Kompendium der Online-Forschung (DGOF). Deutsche Gesellschaft für Online-Forschung e. V. (DGOF), 24 November 2021.
The DGOF publishes digital compendia here on current topics in online research, with contributions from experts in the field.
|
310 |
Visualization of E-commerce Transaction Data: Using Business Intelligence Tools. Safari, Arash, January 2015.
Customer Value (CV) is a data analytics company experiencing problems presenting the results of their analytics in a satisfactory manner. As a result, they considered the use of data visualization and business intelligence software. The purpose of such software is, amongst other things, to visually represent data in an interactive and perceptible manner to the viewer. There is, however, a large number of these types of applications on the market, making it hard for companies to find the one that best suits their purposes. CV is one such company, and this report was done on their behalf with the purpose of identifying the software best fitting their specific needs. This was done by conducting case studies on specifically chosen software packages and comparing the results of the studies. The software selection process was based largely on the Magic Quadrant report by Gartner, which contains a general overview of a subset of business intelligence software available on the market. The selected software packages were QlikView, Qlik Sense, GoodData, Panorama Necto, Datawatch, Tableau and SiSense. The case studies focused mainly on the aspects of the software that were of interest to CV, namely the software's data importation capabilities, data visualization options, the possibility of updating the model based on underlying data changes, the options available for sharing the created presentations, and the amount of support offered by the software vendor. Based on the results of the case studies, it was concluded that SiSense was the software that best satisfied the requirements set by CV. / Customer Value (CV) är ett företag som upplever svårigheter med att presentera resultaten av sin dataanalys på ett tillfredsställande sätt. De överväger nu att använda sig av datavisualiserings- och Business Intelligence-program för att visuellt representera data på ett interaktivt sätt.
Det finns däremot ett stort antal olika typer av sådana applikationer på marknaden, vilket leder till svårigheter för företag att hitta den som passar dem bäst. CV är ett sådant företag, och denna rapport skrevs på deras begäran med syftet att identifiera den datavisualiserings- eller Business Intelligence-programvara som bäst passar deras specifika behov. Detta gjordes med hjälp av en serie fallstudier som utfördes på specifikt valda mjukvaror, för att sedan jämföra resultaten av dessa studier. Valprocessen av dessa mjukvaror var i stora drag baserad på rapporten "Magic Quadrant 2015" av Gartner, som innehåller en generell och överskådlig bild av marknaden för business intelligence-applikationer. De applikationer som valdes för utvärdering var QlikView, Qlik Sense, GoodData, Panorama Necto, Datawatch, Tableau och SiSense. Fallstudierna fokuserade främst på aspekter av mjukvaran som var av intresse för CV, nämligen dataimportförmåga, datavisualiseringsmöjligheter, möjligheter att uppdatera modellen baserat på ändringar i den underliggande datastrukturen, exportmöjligheter för de skapade presentationerna samt den mängd dokumentation och support som erbjöds av mjukvaruutgivaren. Baserat på resultaten av fallstudierna drogs slutsatsen att SiSense var den applikation som bäst täckte CV:s behov.
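Comparing tools against weighted criteria, as the case studies do informally, can be summarized with a standard decision matrix. The weights and scores below are invented for illustration and are not the report's findings:

```python
# Hypothetical criteria weights (summing to 1) and 1-5 scores per tool.
weights = {"import": 0.3, "visualization": 0.3,
           "refresh": 0.2, "sharing": 0.1, "support": 0.1}
scores = {
    "ToolA": {"import": 4, "visualization": 3, "refresh": 4,
              "sharing": 3, "support": 2},
    "ToolB": {"import": 5, "visualization": 4, "refresh": 4,
              "sharing": 4, "support": 5},
}

def rank(scores, weights):
    """Rank tools by weighted total score, best first."""
    totals = {tool: sum(weights[c] * s[c] for c in weights)
              for tool, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: -kv[1])

print(rank(scores, weights))  # ToolB first
```

Making the weights explicit also documents why a tool won: changing a single weight (e.g., valuing vendor support more) can flip the ranking, which is worth reporting alongside the conclusion.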
|