51. Learning und Academic Analytics in Lernmanagementsystemen (LMS): Herausforderungen und Handlungsfelder im nationalen Hochschulkontext. Gaaw, Stephanie; Stützer, Cathleen M. January 2017.
The use of digital media has a long tradition in national higher education teaching. Learning management systems (LMS), e-learning, blended learning, etc. are buzzwords of everyday university life. However, the question arises of what LMS and blended learning can, or should, deliver in an age of digital networking and a now grown-up generation of "digital natives". The spread of new technologies connected with new teaching and learning concepts such as OER, MOOCs, etc. also makes the development of analytics instruments necessary. This is of great interest in the national discourse as well and opens up new fields of action for universities. Yet the question remains why learning analytics (LA) and academic analytics (AA) have so far been used successfully only to a limited extent at German universities, and why their use, particularly within LMS such as OPAL, does not appear readily feasible. To this end, factors that inhibit the implementation of LA and AA instruments are identified and discussed. Building on this, initial fields of action are presented whose consideration is intended to enable a stronger embedding of LA and AA instruments in LMS.
52. Visual Analytics Methodologies on Causality Analysis. January 2019.
Causality analysis is the process of identifying cause-effect relationships among variables. This process is challenging because causal relationships cannot be tested solely on the basis of statistical indicators; additional information is always needed to reduce the ambiguity introduced by factors beyond those covered by the statistical test. Traditionally, controlled experiments are carried out to identify causal relationships, but recently there has been growing interest in causality analysis with observational data due to the increasing availability of data and tools. This type of analysis often involves automatic algorithms that extract causal relations from large amounts of data, and it relies on expert judgment to scrutinize and verify those relations. Over-reliance on these automatic algorithms is dangerous because models trained on observational data are susceptible to bias that can be difficult to spot even with expert oversight. Visualization has proven effective at bridging the gap between human experts and statistical models by enabling interactive exploration and manipulation of the data and models. This thesis develops a visual analytics framework to support the interaction between human experts and automatic models in causality analysis. Three case studies were conducted to demonstrate the application of the visual analytics framework, showcasing feature engineering, insight generation, correlation analysis, and causality inspections. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
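The point that statistical association alone cannot establish causation can be made concrete with a small simulation; the sketch below is illustrative only and is not taken from the dissertation. A confounder drives two otherwise unrelated variables, producing a strong correlation that disappears once the confounder is adjusted for.

```python
# Illustrative sketch (not from the dissertation): a confounder Z drives both X and Y,
# so X and Y are strongly correlated even though X has no causal effect on Y.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)              # confounder (an unobserved common cause)
x = 2.0 * z + rng.normal(size=n)    # X depends on Z only
y = 3.0 * z + rng.normal(size=n)    # Y depends on Z only; there is no X -> Y edge

print("corr(X, Y) ignoring Z:", round(np.corrcoef(x, y)[0, 1], 3))

# Crude adjustment: regress Z out of both X and Y, then re-check the correlation.
x_res = x - np.polyval(np.polyfit(z, x, 1), z)
y_res = y - np.polyval(np.polyfit(z, y, 1), z)
print("corr(X, Y) after adjusting for Z:", round(np.corrcoef(x_res, y_res)[0, 1], 3))
```

The first correlation is large and the second is near zero, which is exactly the kind of ambiguity that the framework asks human experts to inspect rather than take automated output at face value.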
53. Three Essays on Continuity of Care in Canada: From Predictions to Decisions. Ghazalbash, Somayeh. January 2022.
Continuity of care (COC) refers to the delivery of seamless services, continuous caring relationships, and information sharing across care providers. A disruption in COC, that is, care fragmentation (CF), is an important cause of inefficiency in the Canadian healthcare system; such disruption leads to increased healthcare costs and reduced quality of care. Addressing this issue is particularly challenging among older adults, who often have medically complex needs; such patients can require many care transitions across multiple care settings. An effective strategy for COC improvement is to optimize discharge planning among older adults. However, this is hampered by an imperfect understanding of older patients' needs, which are associated with their health complexity. Therefore, making early predictions about patients' health complexity and incorporating this information into discharge planning decisions can potentially improve COC. In this thesis, I develop data-driven predictive-prescriptive analytics frameworks that leverage machine learning (ML) approaches and a rich, massive set of longitudinal data collected over a decade. The first essay studies the early prediction of older patients' complexity in hospital pathways using ML. It also examines whether accurate prognostics can be made with currently available information on patient complexity. The second study examines how two common measures of patient complexity, multimorbidity and frailty, concurrently affect post-discharge readmission and mortality among older patients. It also investigates the dependence of these outcomes on other essential socio-demographic factors. Finally, the third study examines the feasibility of predicting patients at risk of fragmented readmission, that is, readmission to a different hospital than the initial one. It uses this predictive information to derive optimal policies for preventing CF while addressing disparities in the decision-making process. The findings highlight the feasibility, utility, and performance of predicting patient complexity and the important adverse outcomes that can undermine COC. This thesis shows that obtaining this knowledge in advance and utilizing it explicitly could support decision-making and resource planning toward targeted allocation at the system level; moreover, it informs actions that affect patient-centered care transitions at the service level to optimize patient outcomes and facilitate upstream discharge processes, thereby improving COC. / Thesis / Doctor of Philosophy (PhD) / The aging population in Canada is growing significantly relative to the population as a whole, and several challenges are involved in providing aging people with proper healthcare services. One of these challenges is disruptions in continuity of care. Older adults are often medically complex or frail; they may have multiple diseases and require many care transitions across healthcare settings. Poor continuity of care among these patients leads to health deterioration during care trajectories, resulting in reduced quality of care and increased healthcare costs and inefficiencies. This thesis includes three essays that provide practical insights and solutions regarding continuity of care disruptions, spanning from predicting the issue to data-driven strategies for preventing it.
54. Privacy Preserving Network Security Data Analytics. DeYoung, Mark E. 24 April 2018.
The problem of revealing accurate statistics about a population while maintaining privacy of individuals is extensively studied in several related disciplines. Statisticians, information security experts, and computational theory researchers, to name a few, have produced extensive bodies of work regarding privacy preservation.
Still, the need to improve our ability to control the dissemination of potentially private information is driven home by an incessant rhythm of data breaches, data leaks, and privacy exposure. History has shown that both public and private sector organizations are not immune to loss of control over data due to lax handling, incidental leakage, or adversarial breaches. Prudent organizations should consider the sensitive nature of network security data and network operations performance data recorded as logged events. These logged events often contain data elements that are directly correlated with sensitive information about people and their activities -- often at the same level of detail as sensor data.
Privacy preserving data publication has the potential to support reproducibility and exploration of new analytic techniques for network security. Providing sanitized data sets de-couples privacy protection efforts from analytic research. De-coupling privacy protections from analytical capabilities enables specialists to tease out the information and knowledge hidden in high dimensional data, while, at the same time, providing some degree of assurance that people's private information is not exposed unnecessarily.
In this research, we propose methods that support a risk-based approach to privacy preserving data publication for network security data. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so it can be utilized to develop new analytic methods in an ethical manner. Our intent is to produce a database which holds network security data representative of a contextualized network and people's interaction with the network mid-points and end-points without the problems of identifiability. / Ph. D. / Network security data is produced when people interact with devices (e.g., computers, printers, mobile telephones) and networks (e.g., a campus wireless network). The network security data can contain identifiers, like user names, that strongly correlate with real world people. In this work we develop methods to protect network security data from privacy-invasive misuse by 'honest-but-curious' authorized data users and unauthorized malicious attackers. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so it can be utilized to develop new analytic methods in an ethical manner. Our intent is to produce a data set which holds network security data representative of people's interaction with a contextualized network without the problems of identifiability.
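One simple building block for this kind of sanitization is keyed pseudonymization, sketched below purely for illustration; it is not the dissertation's method, and the field names and events are invented. Direct identifiers in logged events are replaced with stable pseudonyms so analysts can still correlate events per user without seeing real usernames.

```python
# Illustrative sketch only: keyed pseudonymization of usernames in logged events.
# Hypothetical field names; a real pipeline would also treat IPs, hostnames, etc.
import hmac, hashlib

SECRET_KEY = b"rotate-and-store-this-key-separately"  # assumption: key managed out of band

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable pseudonym, consistent within one key epoch."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return "user_" + digest[:12]

events = [
    {"user": "alice", "action": "login", "host": "vpn-gw-01"},
    {"user": "alice", "action": "file_read", "host": "fs-02"},
    {"user": "bob", "action": "login", "host": "vpn-gw-01"},
]

sanitized = [{**e, "user": pseudonymize(e["user"])} for e in events]
for e in sanitized:
    print(e)  # both "alice" events map to the same pseudonym, so per-user analytics still work
```

Pseudonymization alone does not remove re-identification risk from quasi-identifiers, which is why a risk-based publication approach is needed on top of such primitives.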
55. Analytics for Software Product Planning. Saha, Shishir Kumar; Mohymen, Mirza. January 2013.
Context. Software product planning involves product lifecycle management, roadmapping, release planning, and requirements engineering. Requirements are collected and used together with criteria to define short-term plans (release plans) and long-term plans (roadmaps). The different stages of the product lifecycle determine whether a product is mainly evolved, extended, or simply maintained. When eliciting requirements and identifying criteria for software product planning, the product manager is confronted with statements about customer interests that do not correspond to customers' actual needs. Analytics summarize, filter, and transform measurements to obtain insights about what happened, how it happened, and why it happened. Analytics have been used for improving the usability of software solutions, for monitoring the reliability of networks, and for performance engineering. However, the concept of using analytics to determine the evolution of a software solution is unexplored. In a context where misunderstanding users' needs can easily lead product design to failure, analytics support for software product planning can help reveal which features of the product are useful for users or customers. Objective. Given the lack of primary studies, the first step is to apply the concept of analytics for software product planning to the evolution of software solutions by understanding how product usage is measured. This research therefore aims to understand relevant analytics of users' interaction with SaaS applications, and to identify an effective way to collect the right analytics and measure feature usage with page-based and feature-based analytics in order to provide decision support for software product planning. Methods. The research combines a literature review of the state of the art to understand the research gap and related work and to identify relevant analytics for software product planning. Market research is conducted to compare the features of different analytics tools and identify an effective way to collect relevant analytics. A prototype analytics tool is then developed to explore how feature usage of a SaaS website can be measured to provide decision support for software product planning. Finally, a software simulation is performed to understand the impact of page clutter, erroneous page presentation, and feature spread on page-based and feature-based analytics. Results. The literature review identifies studies that describe related work on the categories of software analytics relevant for measuring software usage. A software-supported approach, developed from the feature comparison of different analytics tools, provides product planners with an effective way of collecting analytics. Moreover, the results clarify the impact of page clutter, erroneous page presentation, and feature spread on page-based versus feature-based analytics: all three exaggerate feature usage measurements with page-based analytics, but not with feature-based analytics. Conclusions. The research provides a broad body of evidence that fosters the understanding of relevant analytics for software product planning. The results show SaaS product managers how feature usage can be measured and help them understand how page clutter, erroneous page presentation, and feature spread affect page-based and feature-based analytics differently. Further case studies can be performed to evaluate the solution proposals, tailored to company needs.
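The contrast between the two measurement approaches can be sketched in a few lines of code; this is a hedged illustration with invented page layouts and events, not the prototype tool built in the thesis. Page-based analytics credits every feature hosted on a viewed page, so cluttered pages inflate usage counts, whereas feature-based analytics credits a feature only when it is actually used.

```python
# Minimal sketch of page-based vs feature-based usage counting (field names invented).
from collections import Counter

# Assumed page layout: a cluttered page hosts several features at once.
page_features = {
    "/dashboard": ["report_export", "search", "notifications"],
    "/settings": ["profile_edit"],
}

events = [
    {"type": "page_view", "page": "/dashboard"},
    {"type": "page_view", "page": "/dashboard"},
    {"type": "feature_click", "feature": "search"},
    {"type": "page_view", "page": "/settings"},
    {"type": "feature_click", "feature": "profile_edit"},
]

page_based = Counter()
feature_based = Counter()
for e in events:
    if e["type"] == "page_view":
        page_based.update(page_features[e["page"]])   # every hosted feature gets credit
    elif e["type"] == "feature_click":
        feature_based[e["feature"]] += 1               # only the used feature gets credit

print("page-based   :", dict(page_based))     # report_export/notifications inflated by clutter
print("feature-based:", dict(feature_based))  # reflects actual interaction
```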
56. Analytics inom spel : Hur data/analytics kan hjälpa spelare att utvecklas / Analytics in gaming : How data/analytics can help gamers to improve. Nayef, Badi. January 2022.
A combination of individually generated data and a predictive analytics system that collects players' historical data is examined to gain more knowledge about how measurements and metrics can help players make more data-driven decisions, in the hope of making them more skilled players in the game in which the system is used. Fundamentals of game analytics are highlighted, with the focus primarily on game metrics and telemetry, to clarify how data in the gaming industry is captured and analyzed. A qualitative method is applied in which individuals are interviewed about a proposed system. The emphasis is on how players would experience such a system, what they consider important if such a system were implemented in their daily gaming, and what need for it currently exists.
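To make the notion of game metrics derived from telemetry concrete, the sketch below uses an invented event schema (it is not the system discussed in the study) to aggregate raw telemetry events into a simple per-match accuracy metric of the kind a player could use to track improvement.

```python
# Illustrative sketch (invented event schema): deriving a simple game metric from telemetry.
from collections import defaultdict

telemetry = [
    {"player": "p1", "match": 1, "event": "shot_fired"},
    {"player": "p1", "match": 1, "event": "shot_hit"},
    {"player": "p1", "match": 1, "event": "shot_fired"},
    {"player": "p1", "match": 2, "event": "shot_fired"},
    {"player": "p1", "match": 2, "event": "shot_hit"},
]

counts = defaultdict(lambda: {"fired": 0, "hit": 0})
for e in telemetry:
    key = (e["player"], e["match"])
    if e["event"] == "shot_fired":
        counts[key]["fired"] += 1
    elif e["event"] == "shot_hit":
        counts[key]["hit"] += 1

for (player, match), c in sorted(counts.items()):
    accuracy = c["hit"] / c["fired"] if c["fired"] else 0.0
    print(f"{player} match {match}: accuracy {accuracy:.0%}")  # a per-match, data-driven indicator
```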
57. Academic Analytics: Zur Bedeutung von (Big) Data Analytics in der Evaluation. Stützer, Cathleen M. 03 September 2020.
In the context of higher education and educational research, evaluation as a whole is used as a steering and controlling instrument, among other things to provide statements about the quality of teaching, research, and administration. Even though the concept of quality is still understood very differently across universities, the actors involved pursue a common goal: integrating evaluation into everyday university practice as a reliable (internal) instrument of prevention and prediction. That this overarching goal comes with a number of hurdles is obvious and has already been widely discussed in the literature (Benneworth & Zomer 2011; Kromrey 2001; Stockmann & Meyer 2014; Wittmann 2013). Evaluation research offers an interdisciplinary research approach. Instruments and methods from different (social science) disciplines, which may be qualitative as well as quantitative in nature, are used. Mixed-method/multi-data approaches are considered particularly compelling in their explanatory power, despite the indisputably higher effort for data collection and utilization (Döring 2016; Hewson 2007). However, (big) data analytics and real-time and interaction analyses are only slowly finding their way into the national higher education and educational system. This contribution addresses the significance of (big) data analytics in evaluation. On the one hand, challenges and potentials are outlined; on the other hand, the question is pursued of how (social) data can be collected and analyzed, in an automated fashion, at different levels of aggregation. Using the evaluation of e-learning in higher education teaching as a case example, suitable data collection methods, analysis instruments, and fields of action are presented. The case study is placed in the context of computational social science (CSS) in order to contribute to the development of evaluation research in the age of big data and social networks.
58. Infrastructure Asset Management Analytics Strategies for Systemic Risk Mitigation and Resilience Enhancement. Goforth, Eric. January 2022.
The effective implementation of infrastructure asset management systems within organizations that own, operate, and manage infrastructure assets is critical to addressing the main challenges facing the infrastructure industry (e.g., infrastructure ageing and deterioration, maintenance backlogs, strict regulatory operating conditions, limited financial resources, and the loss of valuable experience through retirements). Infrastructure asset management systems contain connectivity between major operational components, and such connectivity can lead to systemic risks (i.e., dependence-induced failures). This thesis analyzes the asset management system as a network of connected components (i.e., nodes and links) to identify critical components exposed to systemic risks induced by information asymmetry and information overload. It applies descriptive and prescriptive analytics strategies to address information asymmetry and information overload, and employs predictive analytics to enhance resilience. Specifically, descriptive analytics is employed to visualize the key performance indicators of infrastructure assets, ensuring that all asset management stakeholders make decisions using consistent information sources and are not overwhelmed by having access to the entire database. Predictive analytics is employed to classify the resilience key performance indicator pertaining to the forced outage rapidity of power infrastructure components, enabling power infrastructure owners to estimate the rapidity of an outage soon after it occurs and thus to allocate the appropriate resources to return the infrastructure to operation. Using predictive analytics allows decision-makers to use consistent and clear information when responding to forced outage occurrences. Finally, prescriptive analytics is applied to optimize the asset management system network by increasing the connectivity of the network and, in turn, decreasing the exposure of the asset management system to systemic risk from information asymmetry and information overload. By analyzing an asset management system as a network and applying descriptive, predictive, and prescriptive analytics strategies, this dissertation illustrates how systemic risk exposure due to information asymmetry and information overload could be mitigated and how power infrastructure resilience could be enhanced in response to forced outage occurrences. / Thesis / Doctor of Science (PhD) / Effective infrastructure asset management systems are critical for organizations that own, manage, and operate infrastructure assets. Infrastructure asset management systems contain main components (e.g., engineering, project management, resourcing strategy) that are dependent on information and data. Inherent within this system is the potential for failures to cascade throughout the entire system, instigated by such dependence. Within asset management, such cascading failures, known as systemic risks, are typically caused by stakeholders not using the same information for decision making or being overwhelmed by too much information.
This thesis employs analytics strategies including: i) descriptive analytics to present only relevant and meaningful information necessary for respective stakeholders, ii) predictive analytics to forecast the resilience key performance indicator, rapidity, enabling all stakeholders to make future decisions using consistent projections, and iii) prescriptive analytics to optimize the asset management system by introducing additional information connections between main components. Such analytics strategies are shown to mitigate the systemic risks within the asset management system and enhance the resilience of infrastructure in response to an unplanned disruption.
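As a rough illustration of the predictive step described above, the sketch below trains a standard classifier to place a forced outage into a rapidity class from attributes assumed to be known shortly after the outage begins; the features, labels, and data are synthetic and are not the thesis's actual model or dataset.

```python
# Hedged sketch: classify forced-outage rapidity ("short" vs "prolonged") from attributes
# assumed to be known soon after the outage starts. Features, labels, and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2_000
X = np.column_stack([
    rng.integers(0, 50, n),        # component age (years)
    rng.integers(0, 4, n),         # failure-mode code
    rng.random(n),                 # loading level at failure (fraction of rating)
])
# Synthetic rule standing in for historical labels: older, heavily loaded units recover slower.
y = ((X[:, 0] > 30) & (X[:, 2] > 0.6)).astype(int)   # 1 = prolonged outage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), target_names=["short", "prolonged"]))
```

The value of such a classification lies less in the model itself than in giving all stakeholders the same early, consistent estimate on which to base resource allocation.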
59. Datadriven HR : HR analytics och dess framväxt / Data driven HR : HR analytics and its advancement. Lund, Theodor; Erlandsson, Anna. January 2020.
Background and purpose: The implementation of HR analytics is very low even though research shows that the use of HR analytics leads to better decisions in organizations. The purpose of the study was to examine HR analysts' perceptions of the obstacles behind this limited advancement. Method: Semi-structured interviews with six HR analysts were analyzed through a thematic analysis. The analysis was inductive with elements of deduction. Results: There is a shortage of competence in the field. Obstacles to advancement turned out to be HR analysts' doubts about their own ability to work in a data-based way, lack of management support, shortcomings in software, a lack of competence and training, and a lack of information. HR analytics lends greater legitimacy to the profession, which also points toward increased future use. HR analytics has also resulted in greater influence with management and executives. Conclusions: The study argues for an investment in training in which the focus should not lie solely on 'hard' competences but also on 'soft' ones such as change management, storytelling, and communication.
60. Vysoce výkonné analýzy / High Performance Analytics. Kalický, Andrej. January 2013.
This thesis explains the Big Data phenomenon, which is characterised by the rapid growth of the volume, variety, and velocity of data (information assets) and drives a paradigm shift in analytical data processing. The thesis aims to provide a complete and consistent overview of the area of High Performance Analytics (HPA), including the problems and challenges at the pioneering state of the art of advanced analytics. The overview of HPA introduces the classification, characteristics, and advantages of specific HPA methods that utilise various combinations of system resources. In the practical part of the thesis, an experimental assignment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of in-memory analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
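The performance argument behind in-memory analytics can be illustrated outside the SAS platform used in the thesis; the toy comparison below is an assumption-laden sketch, not the thesis's experiment, and contrasts aggregating a table held in memory with streaming the same data from disk in chunks.

```python
# Toy illustration only (not the thesis's SAS-based experiment): compare a group-by
# aggregation on an in-memory table with the same aggregation streamed from disk in chunks.
import time
import numpy as np
import pandas as pd

n = 2_000_000
df = pd.DataFrame({
    "key": np.random.randint(0, 1_000, n),
    "value": np.random.random(n),
})
df.to_csv("toy_dataset.csv", index=False)

t0 = time.perf_counter()
in_memory = df.groupby("key")["value"].sum()           # data already resident in memory
t_mem = time.perf_counter() - t0

t0 = time.perf_counter()
partials = [chunk.groupby("key")["value"].sum()         # re-read and aggregate chunk by chunk
            for chunk in pd.read_csv("toy_dataset.csv", chunksize=100_000)]
from_disk = pd.concat(partials).groupby(level=0).sum()
t_disk = time.perf_counter() - t0

print(f"in-memory: {t_mem:.2f}s, chunked from disk: {t_disk:.2f}s")
assert np.allclose(in_memory.sort_index(), from_disk.sort_index())  # same result either way
```

Actual timings depend heavily on hardware and data layout; the point of the sketch is only that both paths produce the same answer while the I/O pattern differs.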