351

P©, une approche collaborative d'analyse des besoins et des exigences dirigée par les problèmes : le cas de développement d'une application Analytics RH / P©, A Collaborative Problem-Driven Requirements Engineering Approach to Design An HR Analytics Application

Atif, Lynda (07 July 2017)
The design of digital information systems, and in particular of interactive data-driven decision support systems (DSS, here an Analytics application), often misses its target. Most studies show that DSS design failures are rooted in the analysis of the needs and requirements a system has to meet: the requirements are insufficiently derived from the real needs of the end users. From a theoretical point of view, the analysis of the state of the art, combined with the analysis of a specific industrial context, leads to a focus on this critical step and, consequently, to the development of a collaborative problem-driven requirements engineering approach. A DSS is, first and foremost, a problem-solving support system, which implies that such an artefact cannot be developed without adequately identifying the end users' decision problems upstream, prior to defining the decision makers' requirements and the appropriate type of DSS. Characterized by a reversal of the implicit primacy of the technical solution over the typology of decision problems, this approach was made explicit and implemented to design an Analytics application. As a result, it reached the expected objective: an effective system that satisfies its different end users from a technical, functional, and ergonomic standpoint.
352

Close and Distant Reading Visualizations for the Comparative Analysis of Digital Humanities Data

Jänicke, Stefan (06 July 2016)
Traditionally, humanities scholars carrying out research on one or more literary works are interested in the analysis of related texts or text passages. The digital age has opened up possibilities for scholars to enhance these traditional workflows. Thanks to digitization projects, humanities scholars can nowadays reach a large number of digitized texts through web portals such as Google Books or the Internet Archive. Digital editions also exist for ancient texts; notable examples are PHI Latin Texts and the Perseus Digital Library. This shift from reading a single book “on paper” to being able to browse many digital texts is one of the origins and principal pillars of the digital humanities, a domain that develops solutions for handling vast amounts of cultural heritage data, text being the main data type. In contrast to traditional methods, the digital humanities make it possible to pose new research questions about cultural heritage datasets. Some of these questions can be answered with existing algorithms and tools from computer science, but for other questions scholars need to formulate new methods in collaboration with computer scientists. Emerging in the late 1980s, the digital humanities initially focused on designing standards to represent cultural heritage data, such as the Text Encoding Initiative (TEI) for texts, and on aggregating, digitizing, and delivering data. In recent years, visualization techniques have gained more and more importance for analyzing such data. For example, Saito introduced her 2010 digital humanities conference paper with: “In recent years, people have tended to be overwhelmed by a vast amount of information in various contexts. Therefore, arguments about ‘Information Visualization’ as a method to make information easy to comprehend are more than understandable.” A major impulse for this trend was given by Franco Moretti.
In 2005 he published the book “Graphs, Maps, Trees”, in which he proposes so-called distant reading approaches that steer the traditional way of approaching literature in a completely new direction. Instead of reading texts in the traditional way, so-called close reading, he invites scholars to count, to graph, and to map them; in other words, to visualize them. This dissertation presents novel close and distant reading visualization techniques for hitherto unsolved problems. Visualization techniques have previously been applied to support basic tasks, e.g., visualizing geospatial metadata to analyze the geographical distribution of cultural heritage items or using tag clouds to illustrate textual statistics of a historical corpus. In contrast, this dissertation focuses on developing information visualization and visual analytics methods that support research questions requiring the comparative analysis of various digital humanities datasets. We first survey the state of the art of close and distant reading visualizations developed to support humanities scholars working with literary texts, and we provide a taxonomy of the visualization methods applied to show various aspects of the underlying data. We point out open challenges, and we present our visualizations designed to support humanities scholars in comparatively analyzing historical datasets. In short, we present (1) GeoTemCo for the comparative visualization of geospatial-temporal data, (2) the two tag cloud designs TagPies and TagSpheres, which comparatively visualize faceted textual summaries, (3) TextReuseGrid and TextReuseBrowser for exploring re-used text passages among the texts of a corpus, (4) TRAViz for visualizing textual variation between multiple text editions, and (5) the visual analytics system MusikerProfiling for detecting musicians similar to a given musician of interest.
Finally, we summarize our own collaboration experiences and those of other visualization researchers to highlight the ingredients of a successful digital humanities project, and we look at future challenges in this research field.
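Distant reading as described above starts from simple textual statistics; for instance, the relative term frequencies that drive a tag cloud can be computed in a few lines. The following is a minimal illustrative sketch of that general idea, not the actual TagPies or TagSpheres implementation:

```python
from collections import Counter
import re

def term_weights(text, top_n=5):
    """Relative term frequencies; in a tag cloud, a term's weight
    would typically be mapped to its font size."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.most_common(top_n)}
```

A faceted design such as TagPies would compute such weights separately per facet (e.g., per author or per time period) and arrange the resulting tags so that the facets can be compared visually.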
353

Wikis in higher education

Kummer, Christian (14 March 2014)
For many years, universities have communicated generic graduate attributes (e.g., global citizenship) that their students are expected to have acquired by the time they graduate. Graduate attributes are skills and competencies that are relevant both for employability and for other aspects of life (Barrie, 2004). Over the past years, and due to the Bologna Process, this focus on competencies has also found its way into universities' curricula. As a consequence, curricula were adapted in order to provide students with both in-depth knowledge of a particular area and generic competencies (Bologna Working Group on Qualifications Framework, 2005, Appendix 8). For example, students with a Master's degree should be able to “communicate their conclusions, and the knowledge and rationale underpinning these, to specialist and non-specialist audiences clearly and unambiguously” (p. 196). This shift has been reinforced by the labour market's demand for graduates who have achieved social and personal competencies in addition to in-depth knowledge (Heidenreich, 2011). At the course level, this placed emphasis on collaborative learning, which has led to “greater autonomy for the learner, but also to greater emphasis on active learning, with creation, communication and participation” (Downes, 2005). The shift to collaborative learning is supported by existing learning theories and models (Brown et al., 1989; Lave and Wenger, 1991; Vygotsky, 1978), which can explain its educational advantages. For example, collaborative learning has been shown to promote critical thinking and communication skills (Johnson and Johnson, 1994; Laal and Ghodsi, 2012). As Haythornthwaite (2006) puts it: “collaborative learning holds the promise of active construction of knowledge, enhanced problem articulation, and benefits exploring and sharing information and knowledge gained from peer-to-peer communication” (p. 10). The term collaboration defies clear definition (Dillenbourg, 1999).
In this thesis, cooperation is understood as the division of labour into tasks, which allows group members to work independently, whereas collaboration requires continuous synchronisation and coordination of labour (Dillenbourg et al., 1996; Haythornthwaite, 2006). Cooperation thus allows students to subdivide task assignments, work relatively independently, and piece the results together into one final product. In contrast, collaboration is a synchronous and coordinated effort of all students to accomplish their task assignment, resulting in a final product in which “no single hand is visible” (Haythornthwaite, 2006, p. 12). Prompted by the debate about digital natives (Prensky, 2001) and “students' heavy use of technology” in private life (Luo, 2010, p. 32), teachers have started to explore possible applications of modern technology in teaching and learning. Wikis in particular have become popular and have gained considerable attention in higher education. Wikis have been used to support collaborative learning (e.g. Cress and Kimmerle, 2008), collaborative writing (e.g. Naismith et al., 2011), and student engagement (e.g. Neumann and Hood, 2009). A wiki is a “freely expandable collection of interlinked Web ‘pages’, a hypertext system for storing and modifying information - a database, where each page is easily editable by any user” (Leuf and Cunningham, 2001, p. 14; italics in original). Wikis thereby enable the collaborative construction of knowledge (Alexander, 2006). With the intention of taking advantage of the benefits of collaborative learning, this doctoral thesis focuses on facilitating collaboration in wikis in order to leverage collaborative learning. The thesis is founded on a constructivist understanding of reality, and the research touches three different research areas: adoption of IT, computer-supported collaborative learning, and learning analytics.
After reviewing the existing literature, three focal points were identified that correspond to research gaps in these areas: factors influencing students' use of wikis, assessment of collaborative learning, and monitoring of collaboration. The aims of this doctoral thesis were (1) to investigate students' intentions to adopt, and barriers to using, wikis in higher education, (2) to develop and evaluate a method for assessing computer-supported collaborative learning, and (3) to map educational objectives onto learning-related data in order to establish indicators for collaboration. Based on these aims, four studies were carried out, each raising unique research questions that were addressed with different methods. Thereby, this doctoral thesis presents findings covering the complete process of using wikis to support collaboration and thus provides a holistic view of the use of wikis in higher education. Contents: Introduction; Theoretical foundation; Research areas and focal points; Research aims and questions; Methods; Findings; Conclusions; References; Essay 1: Factors influencing wiki collaboration in higher education; Essay 2: Students' intentions to use wikis in higher education; Essay 3: Facilitating collaboration in wikis; Essay 4: Using fsQCA to identify indicators for wiki collaboration.
354

Towards Prescriptive Analytics in Cyber-Physical Systems

Siksnys, Laurynas (14 May 2014)
More and more of our physical world today is being monitored and controlled by so-called cyber-physical systems (CPSs). These are compositions of networked autonomous cyber and physical agents such as sensors, actuators, computational elements, and humans in the loop. Today's CPSs are still relatively small-scale and very limited compared to the CPSs to be expected in the future. Future CPSs will be far more complex, large-scale, widespread, and mission-critical, and will be found in a variety of domains such as transportation, medicine, manufacturing, and energy, where they will bring many advantages such as increased efficiency, sustainability, reliability, and security. To unleash their full potential, CPSs need to be equipped with, among other features, support for automated planning and control, where computing agents collaboratively and continuously plan and control their actions in an intelligent and well-coordinated manner to secure and optimize a physical process, e.g., electricity flow in the power grid. In today's CPSs, control is typically automated, but planning is performed solely by humans. Unfortunately, it is intractable and infeasible for humans to plan every action in a future CPS, given the complexity, scale, and volatility of the physical process. For these reasons, control and planning have to be continuous and automated in future CPSs. Humans should only need to analyse and tweak the system's operation using a set of prescriptive analytics tools that allow them (1) to make predictions, (2) to receive suggestions for the most promising set of actions (decisions) to take, and (3) to analyse the implications as if such actions were taken. This thesis considers planning and control in the context of a large-scale multi-agent CPS.
Based on a smart-grid use case, it presents the so-called PrescriptiveCPS, (the conceptual model of) a multi-agent, multi-role, and multi-level CPS that automatically and continuously takes and realizes decisions in near real-time and provides (human) users with prescriptive analytics tools to analyse and manage the performance of the underlying physical system (or process). Acknowledging the complexity of CPSs, the thesis contributes at three levels of scale: (1) the level of a (full) PrescriptiveCPS, (2) the level of a single PrescriptiveCPS agent, and (3) the level of a component of a CPS agent software system. At the CPS level, the contributions include the definition of a PrescriptiveCPS as a system of interacting physical and cyber (sub-)systems. Here, the cyber system consists of hierarchically organized, inter-connected agents that collectively manage instances of so-called flexibility, decision, and prescription models, which are short-lived, focus on the future, and represent, respectively, a capability, a (user's) intention, and actions to change the behaviour (state) of a physical system. At the agent level, the contributions include a three-layer architecture of an agent software system, integrating a number of components specially designed or enhanced to support the functionality of a PrescriptiveCPS. Most of the thesis's contributions are made at the component level.
These contributions include the description, design, and experimental evaluation of (1) a unified multi-dimensional schema for storing flexibility and prescription models (and related data), (2) techniques to incrementally aggregate flexibility model instances and disaggregate prescription model instances, (3) a database management system (DBMS) with a built-in optimization problem solving capability, which allows formulating optimization problems as SQL-like queries and solving them “inside the database”, (4) a real-time data management architecture for processing instances of flexibility and prescription models under (soft or hard) timing constraints, and (5) a graphical user interface (GUI) for visually analysing flexibility and prescription model instances. Additionally, the thesis discusses and exemplifies (but provides no evaluations of) (1) domain-specific and generic in-DBMS forecasting techniques for forecasting instances of flexibility models from historical data, and (2) powerful ways to analyse the past, the present, and the future based on so-called hypothetical what-if scenarios and the flexibility and prescription model instances stored in a database. Most of the contributions at this level are based on the smart-grid use case. In summary, the thesis provides (1) the model of a CPS with planning capabilities, (2) the design and experimental evaluation of prescriptive analytics techniques for effectively forecasting, aggregating, disaggregating, visualizing, and analysing complex models of the physical world, and (3) a use case from the energy domain showing how the introduced concepts apply in the real world. We believe that these contributions constitute a significant step towards the planning-capable CPSs of the future.
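To make the incremental aggregation and disaggregation of flexibility and prescription model instances more concrete, here is a heavily simplified hypothetical sketch. The class and function names, the per-time-slot energy-bound representation, and the proportional disaggregation rule are illustrative assumptions, not the thesis's actual design:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FlexObject:
    """Hypothetical flexibility model instance: per-time-slot energy bounds."""
    min_energy: List[float]
    max_energy: List[float]

def aggregate(objects: List[FlexObject]) -> FlexObject:
    """Sum per-slot bounds across agents into one aggregate flexibility."""
    slots = range(len(objects[0].min_energy))
    return FlexObject(
        [sum(o.min_energy[t] for o in objects) for t in slots],
        [sum(o.max_energy[t] for o in objects) for t in slots],
    )

def disaggregate(prescribed: List[float],
                 objects: List[FlexObject]) -> List[List[float]]:
    """Split an aggregate prescription back onto the agents, proportionally
    to where the prescribed value sits within the aggregate bounds."""
    agg = aggregate(objects)
    plans = []
    for o in objects:
        plan = []
        for t, target in enumerate(prescribed):
            span = agg.max_energy[t] - agg.min_energy[t]
            ratio = (target - agg.min_energy[t]) / span if span else 0.0
            plan.append(o.min_energy[t] + ratio * (o.max_energy[t] - o.min_energy[t]))
        plans.append(plan)
    return plans
```

With this rule, every agent's plan stays within its own flexibility bounds, and the per-slot sums of the agents' plans reproduce the prescribed aggregate exactly.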
355

Visualisering som bromsmedicin för returer inom E-handel : En kvalitativ studie om användarnas behov för utformningen av Visual Analytics inom beslutsstödsystem

Björner, Olivia (January 2022)
Visual Analytics is a powerful tool that lets decision makers gather new insights from data. Since Visual Analytics tools can be hard to get into at first, previous studies have tried to bridge the gap between industry experts and these tools. However, few studies have examined users' needs regarding how Visual Analytics can generate these valuable insights. To examine these needs, product returns in e-commerce were chosen as the application area, since returns are costly both to companies and to society, and companies collect a lot of data as goods are returned, which can be visualized.
To identify e-tailers' needs for visualization tools for their return data, a qualitative empirical study was conducted. Because Visual Analytics is inherently visual, a prototype was developed to support the semi-structured interviews. Six e-tailers were interviewed and tested the prototype, in order to analyze their needs for visualization tools. The results show that some graphic elements performed better than others, and that return data needs to be presented alongside sales data to be relevant. Most of the e-tailers were entirely new to Visual Analytics, and the study's findings suggest that predefined graphs helped them get into the Visual Analytics mindset and may serve as a way to introduce more users to the world of Visual Analytics.
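The finding that return data only becomes meaningful when set against sales data can be illustrated with a small hypothetical computation; the product names and figures below are invented for illustration:

```python
def return_rates(sales, returns):
    """Return rate per product: raw return counts are hard to interpret,
    but dividing by units sold makes products comparable."""
    return {product: returns.get(product, 0) / sold
            for product, sold in sales.items() if sold > 0}

rates = return_rates({"shoes": 200, "jackets": 50}, {"shoes": 30})
```

Here 30 returned pairs of shoes look alarming in isolation but correspond to a 15% return rate, while the jackets' rate is 0%; a graph that plots the rate rather than the raw count conveys this comparison directly.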
356

Evaluation of development methods for mobile applications : Soundhailer’s site and iOS application

Rezai, Arash January 2016 (has links)
To remain competitive and successful in today's globalized market, companies need a strategy to ensure that they are constantly at the leading edge in terms of products and services. Implementing a mobile application is one approach to fulfilling this requirement. This report gives an overview of the topic by briefly introducing today's development tools for mobile application development and then focusing on the Soundhailer application, which was developed by the author. The problem in focus is to find out whether a native or a web-based application is preferable as an iOS application production strategy for a start-up company. The report also offers insight into a well-structured method that works well for setting up measuring points for a website, including Soundhailer's, and into the actual realization of a development tool for iOS development. This insight draws on extensive help from a former student of the Royal Institute of Technology with previous experience in the area. To show prospective similarities and differences between theory and reality, the experiences are then compared with the theoretical part. Finally, the results are critically discussed. Two versions of the application were developed, a native version and a web-based version, and the results show that both native and web-based applications can be convenient solutions for companies to implement and use. The results also provide a foundation upon which others can build and better understand how an iOS application is used and developed. / To remain competitive and successful in today's globalized market, companies need a strategy to ensure they are constantly at the leading edge in terms of products and services. Producing a mobile application is one of many ways to meet this requirement.
This report provides an overview of the topic by first reviewing today's development tools for mobile applications and then focusing on the company Soundhailer's mobile application, since it was developed by the author. The problem in focus is to determine whether a native or a web-based application is preferable for the production strategy of an iOS application for a start-up company. In addition, the report provides insight into a well-structured method that works well for setting up measuring points for a website, with a focus on Soundhailer's website, as well as the actual implementation of a development tool for iOS development. This insight draws on extensive help from a former student at the Royal Institute of Technology with previous experience in the area. To show potential similarities and differences between theory and reality, the experiences are then compared with the theoretical part. Finally, the results are discussed critically. Two versions of the application were developed, a native version and a web-based version, and the results show that both native and web-based applications can be practical solutions for companies to implement and use. The results also provide a foundation on which others can build, as well as a better understanding of how an iOS application can be used and developed.
357

Web Analytics: Best Practices for an Organization’s Successful Performance; A Preliminary Analysis

Dahbi, Salma 01 May 2020 (has links)
This research presents an exploratory study of organizations' best practices in Web analytics for successful performance, and of the factors influencing companies' successful adoption of Web analytics. A qualitative research methodology was used, framing the understanding of Web analytics adoption with the Diffusion of Innovation theory (Rogers, 1995) and the theory-building approach (Eisenhardt, 1989). Interviews with five companies from different industries were conducted. Findings suggest that for successful performance, companies should consider:
• Data for better decision making.
• Web analytics barriers.
• Selecting the right KPIs and metrics based on the company's goals.
• Web analytics trends.
Future work should adopt a mixed-method approach comprising other extensive methods of data collection. Investigation of the use of specific metrics and KPIs within companies from different industries, as well as strategies for working past the barriers that impede companies from adopting Web analytics, should also be considered.
358

Attraktivität von Visualisierungsformen in Online-Lernumgebungen

Brandenburger, Jessica, Janneck, Monique 29 April 2019 (has links)
The visualization of learner data plays a major role in online-supported higher education. Learning-analytics approaches make it possible to diagnose problematic group and individual behavior at an early stage. By mirroring learning-relevant data and information back to them, students in online study programs can be supported (Krämer et al., 2017; Diziol et al., 2010) and the performance of learning groups can be compared (Gaaw et al., 2017, p. 151). To make these often complex and multi-layered data sets accessible, comprehensible, and communicable for learners, suitable forms of visualization are required. In this contribution, different forms of visualization were examined in an online study with respect to user experience (UX), aesthetics, and overall impression. [From the introduction.]
359

Investigating the Role of Student Ownership in the Design of Student-facing Learning Analytics Dashboards (SFLADs) in Relation to Student Perceptions of SFLADs

January 2019 (has links)
abstract: Learning analytics applications are evolving into student-facing solutions. Student-facing learning analytics dashboards (SFLADs), as one popular application, occupy a pivotal position in online learning. However, the application of SFLADs faces challenges due to teacher-centered and researcher-centered approaches. The majority of SFLADs report student learning data to teachers, administrators, and researchers without direct student involvement in the design of the SFLADs. The primary design criterion of SFLADs has been developing interactive and user-friendly interfaces or sophisticated algorithms that analyze the collected data about students' learning activities in various online environments. However, if students are not using these tools, then analytics about students are not useful. In response to this challenge, this study investigates student perceptions regarding the design of SFLADs aimed at providing ownership over learning. The study adopts an approach to design-based research (DBR; Barab, 2014) called the Integrative Learning Design Framework (ILDF; Bannan-Ritland, 2003). The theoretical conjectures and the definition of student ownership are both framed by Self-Determination Theory (SDT), including four concepts of academic motivation. The design in this study has two parts: prototype design and intervention design. Both are guided by a general theory-based inference, namely that student ownership will improve student perceptions of learning in an autonomy-supportive SFLAD context. A semi-structured interview is used to gather student perceptions regarding the design of SFLADs aimed at providing ownership over learning. / Dissertation/Thesis / Masters Thesis Educational Psychology 2019
360

Der Einfluss von Analytics Tools auf das Controlling: Erste Ergebnisse

Günther, Thomas, Boerner, Xenia, Mischer, Melanie 24 January 2022 (has links)
This evaluation report summarizes the results of a TU Dresden study on the influence of analytics tools on management accounting (Controlling) at the 3,000 largest companies in Germany in 2021. The report gives an overview of the state of the design and use of analytics tools in Controlling. The heads of Controlling, commercial managing directors, and CFOs responsible in the companies were surveyed using a structured questionnaire. The return of 322 usable questionnaires, a response rate of 10.78%, underlines the strong practical interest in the research topic.
Contents:
List of figures
1 Introduction
1.1 Objectives and aspects examined
1.2 Contents of the evaluation report and next steps in the research project
2 Basic concepts of the study: a theoretical overview
2.1 The concept of digitalization
2.1.1 Big data as the basis for business analytics
2.1.2 Business analytics
2.1.3 Distinguishing business analytics from other technologies
2.1.4 Business analytics in Controlling
2.2 Psychological effects of digitalization (role stress)
3 Data collection and evaluation methodology
3.1 Characterization of the population
3.2 Course of the data collection
3.3 Summary of questionnaire returns
3.4 Evaluation methodology
4 Empirical results on the use and design of analytics tools in Controlling
4.1 Demographics of the respondents
4.2 Part 1: General questions about the company
4.2.1 Organizational embedding of Controlling
4.2.2 State of digitalization of Controlling in the company
4.2.3 Contribution of the Controlling department to the company
4.2.4 Influence of the corona pandemic
4.2.5 Changes in the corporate environment
4.3 Part 2: Questions about the Controlling department and the use of analytics tools in Controlling
4.3.1 Activities of Controlling staff (role understanding)
4.3.2 Analytics tools used
4.3.3 Effects of the analytics tools
4.3.4 Type of use of analytics tools
4.3.5 Resources for analytics initiatives
4.3.6 Data orientation and data culture
4.3.7 Big-data characteristics of the data
4.3.8 Properties of the data used in analytics tools
4.3.9 Technological characteristics of the analytics tools
4.3.10 Support from the top management team
4.3.11 Skills of Controlling executives
4.3.12 Technical skills of Controlling staff
4.3.13 Analytical skills of Controlling staff
4.3.14 Knowledge access and use
4.4 Part 3: Questions about the influence of analytics tools on the work and work environment of heads of Controlling
4.4.1 Effects of information from analytics tools
4.4.2 Work-relevant information for the role of head of Controlling
4.4.3 Circumstances of the work of heads of Controlling (role overload)
4.4.4 Perceptions of the work of heads of Controlling (role ambiguity and role conflict)
4.4.5 Attitudes toward the company
4.5 Other comments from study participants
5 Management summary
6 Bibliography
