221
Data-driven Strategies for Systemic Risk Mitigation and Resilience Management of Infrastructure Projects
Gondia, Ahmed (January 2021)
Public infrastructure systems are crucial components of modern urban communities, as they play major roles in elevating countries' socio-economic development. However, the inherent complexity and systemic interdependence of infrastructure construction/renewal projects have left sites hindered by multiple forms of performance disruption (e.g., schedule delays, cost overruns, workplace injuries) that result in long-term consequences such as claims, disputes, and stakeholder dissatisfaction. The evolution of advanced data-driven tools (e.g., machine learning and complex network analytics) can play a pivotal role in driving improvements in the management strategies of complex projects due to such tools' usefulness in applications related to interdependent systems. In this respect, the research presented in this dissertation is aimed at developing data-driven strategies geared towards a resilience-based approach to managing complex infrastructure projects. Such strategies can support project managers and stakeholders with data-informed decision-making to mitigate the impacts of systemic interdependence-induced risks at different levels of their projects. Specifically, the developed data-driven resilience-based strategies can empower decision-makers with the ability to: i) predict potential performance disruptions based on real-time and dynamic project conditions, such that proactive response/mitigation strategies and/or contingencies can be deployed ahead of time; and ii) develop adaptive solutions against potential interdependence-induced cascading project disruptions, such that the most important set of performance targets can be rapidly restored. It is important to note that data-driven strategies and other analytics-based approaches are proposed herein not to replace but rather to complement the expertise and sensible judgment of project managers and the capabilities of available analysis tools. Specifically, the enriched predictive and analytical insights, together with the proactive and rapid adaptation capabilities facilitated by the developed strategies, can empower the new paradigm of resilience-guided management of complex, dynamic infrastructure projects. / Thesis / Doctor of Philosophy (PhD)
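As a hedged illustration of capability (i), predicting disruptions from live project conditions: the sketch below trains a gradient-boosted classifier on a toy table of activity-level features. The feature names and data are invented placeholders, and the model is a generic stand-in, not the dissertation's actual pipeline.

```python
# Illustrative sketch only: predict schedule-delay risk from live project
# features. All feature names and values are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "crew_size":         [8, 12, 6, 10, 9, 14],
    "predecessor_delay": [0.0, 2.5, 1.0, 0.0, 3.0, 0.5],  # days of upstream delay
    "weather_downtime":  [0, 1, 0, 2, 1, 0],              # days lost on site
    "delayed":           [0, 1, 0, 1, 1, 0],              # observed outcome
})
X, y = df.drop(columns="delayed"), df["delayed"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
# Delay-risk probabilities for unseen activities, to trigger mitigation early.
print(model.predict_proba(X_te)[:, 1])
```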
222
Node Centric Community Detection and Evolutional Prediction in Dynamic Networks
Oluwafolake A Ayano (27 July 2022)
Advances in technology have led to the availability of data from different platforms such as the web and social media. Much of this data can be represented in the form of a network consisting of a set of nodes connected by edges. The nodes represent the items in the network, while the edges represent the interactions between the nodes. Community detection methods have been used extensively in analyzing these networks. However, community detection in evolving networks has been a significant challenge because of the frequent changes to the networks and the need for real-time analysis. Static community detection methods are not appropriate for analyzing dynamic networks because they do not retain a network's history and cannot provide real-time information about the communities in the network.

Existing incremental methods treat changes to the network as a sequence of edge additions and/or removals; however, in many real-world networks, changes occur when a node is added with all its edges attached simultaneously.

For efficient, timely processing of such large networks, there is a need for an adaptive analytical method that can process large networks without recomputing the entire network after its evolution and that treats all the edges involved with a node equally.

We propose a node-centric community detection method that incrementally updates the community structure using the already known structure of the network, avoiding recomputation of the entire network from scratch and consequently achieving a high-quality community structure. The results from our experiments suggest that our approach is efficient for incremental community detection in node-centric evolving networks.
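As a hedged illustration of the node-centric idea, the sketch below seeds communities once with a static method, then assigns each arriving node, together with all its edges, to the neighboring community it connects to most. This local-majority heuristic is a simplified stand-in for the proposed method, not the method itself.

```python
# Simplified sketch of node-centric incremental community assignment: a new
# node arrives with all its edges at once and joins the neighboring community
# it connects to most, avoiding a full recomputation.
import networkx as nx
from collections import Counter

G = nx.karate_club_graph()
# Seed communities from a static method once, then update incrementally.
communities = {n: i
               for i, c in enumerate(nx.community.louvain_communities(G, seed=1))
               for n in c}

def add_node(G, communities, node, neighbors):
    """Insert `node` with all its edges and assign it a community locally."""
    G.add_edges_from((node, v) for v in neighbors)
    votes = Counter(communities[v] for v in neighbors if v in communities)
    # Join the majority neighboring community, or start a new one if isolated.
    communities[node] = (votes.most_common(1)[0][0] if votes
                         else max(communities.values()) + 1)

add_node(G, communities, node=99, neighbors=[0, 1, 2, 33])
print(communities[99])
```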
223
Data governance in big data: How to improve data quality in a decentralized organization
Landelius, Cecilia (January 2021)
The use of the internet has increased the amount of data available and gathered. Companies are investing in big data analytics to gain insights from this data. However, the value of the analysis, and of decisions made based on it, depends on the quality of the underlying data. For this reason, data quality has become a prevalent issue for organizations. Additionally, failures in data quality management are often due to organizational aspects. Given the growing popularity of decentralized organizational structures, there is a need to understand how a decentralized organization can improve data quality. This thesis conducts a qualitative single case study of an organization in the logistics industry that is currently shifting towards becoming data driven and struggling with maintaining data quality. The purpose of the thesis is to answer the questions:
• RQ1: What is data quality in the context of logistics data?
• RQ2: What are the obstacles to improving data quality in a decentralized organization?
• RQ3: How can these obstacles be overcome?
Several data quality dimensions were identified and categorized as critical issues, issues, and non-issues. From the gathered data, the dimensions completeness, accuracy, and consistency were found to be critical data quality issues. The three most prevalent obstacles to improving data quality were data ownership, data standardization, and understanding the importance of data quality. To overcome these obstacles, the most important measures are creating data ownership structures, implementing data quality practices, and changing the mindset of the employees to a data-driven mindset. The generalizability of a single case study is low. However, there are insights and trends that can be derived from the results of this thesis and used for further studies and by companies undergoing similar transformations.
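As a hedged illustration of how the three critical dimensions can be made measurable, the sketch below scores completeness, accuracy, and consistency on a toy logistics table. The column names, plausibility range, and rules are invented; real definitions would come from the organization's data governance policies.

```python
# Illustrative sketch: scoring the three data quality dimensions the thesis
# flags as critical. Columns and rules are hypothetical examples.
import pandas as pd

shipments = pd.DataFrame({
    "order_id":  ["A1", "A2", "A2", "A4"],
    "weight_kg": [12.0, None, 850.0, 7.5],
    "status":    ["delivered", "delivered", "in_transit", "delivered"],
})

# Completeness: share of non-missing values across the table.
completeness = shipments.notna().mean().mean()
# Accuracy (proxy): share of weights inside a plausible range.
accuracy = shipments["weight_kg"].between(0.1, 500).mean()
# Consistency: penalize order_ids that appear with conflicting rows.
consistency = 1 - shipments.duplicated("order_id", keep=False).mean()

print(f"completeness={completeness:.2f} "
      f"accuracy={accuracy:.2f} consistency={consistency:.2f}")
```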
224
Big Data Analytics for Assessing Surface Transportation Systems
Jairaj Chetas Desai (25 April 2022)
Most new vehicles manufactured in the last two years are connected vehicles (CVs) that transmit data back to the original equipment manufacturer at near real-time fidelity. These CVs generate billions of data points on an hourly basis, which can provide valuable data to agencies to improve the overall mobility experience for users. However, with this growing scale of CV big data, stakeholders need efficient and scalable methodologies that allow agencies to draw actionable insights from this large-scale data for daily operational use. This dissertation presents a suite of applications, illustrated through case studies, that use CV data for assessing and managing mobility and safety on surface transportation systems.
A systematic review of construction zone CV data and crashes on Indiana's interstates for the calendar year 2019 found a strong correlation between crashes and hard-braking event data reported by CVs. Trajectory-level CV data analyzed for a construction zone on Interstate 70 provided valuable insights into travel time and traffic signal performance impacts on the surrounding road network. An 11-state analysis of electric and hybrid vehicle usage in proximity to public charging stations highlighted regions underserved and overserved by charging infrastructure, providing quantitative support for infrastructure investment allocations informed by real-world usage trends. CV data were further leveraged to document route choice behavior during active freeway incidents, providing stakeholders with a historical record of observed routing patterns to inform future alternate route planning strategies. CV trajectory data analysis facilitated the identification of trip-chaining activities, resulting in improved outlier curation and realistic estimation of travel time metrics.
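A hedged sketch of the kind of screening such a correlation analysis implies: aggregate CV hard-braking events and crashes by road segment, then test their association. The segment table and counts are synthetic, and the thesis's actual trajectory processing is far more involved.

```python
# Hedged sketch: correlate CV hard-braking counts with crash counts by mile
# marker to flag segments for review. Data here is synthetic.
import pandas as pd
from scipy.stats import pearsonr

segments = pd.DataFrame({
    "mile_marker": range(10),
    "hard_brakes": [120, 45, 300, 80, 15, 500, 60, 220, 90, 30],
    "crashes":     [3, 1, 7, 2, 0, 11, 1, 5, 2, 1],
})
r, p = pearsonr(segments["hard_brakes"], segments["crashes"])
print(f"Pearson r={r:.2f} (p={p:.3f})")
# Segments with the most hard-braking events, candidates for countermeasures.
print(segments.nlargest(3, "hard_brakes"))
```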
The overall contribution of this dissertation is the development of analytical big data procedures to process billions of CV data records to inform engineering and public policy investments in infrastructure capacity, highway safety improvements, and new EV infrastructure. The scalable and efficient analysis techniques proposed in this dissertation will help agencies at the federal, state, and local levels, in addition to private sector stakeholders, in assessing transportation system performance at scale and will enable informed, data-driven decision making.
225
Predicting the Effects of Sedative Infusion on Acute Traumatic Brain Injury Patients
McCullen, Jeffrey Reynolds (09 April 2020)
Healthcare analytics has traditionally relied upon linear and logistic regression models to address clinical research questions, mostly because they produce highly interpretable results [1, 2]. These results contain valuable statistics such as p-values, coefficients, and odds ratios that provide healthcare professionals with knowledge about the significance of each covariate and exposure for predicting the outcome of interest [1]. Thus, they are often favored over newer deep learning models that are generally more accurate but less interpretable and scalable. However, the statistical power of linear and logistic regression is contingent upon satisfying modeling assumptions, which usually requires altering or transforming the data, thereby hindering interpretability. Generalized additive models are therefore useful for overcoming this limitation while still preserving interpretability and accuracy.
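As a hedged illustration of the GAM approach described above, the sketch below fits a logistic GAM with smooth terms for continuous covariates and a factor term for a categorical one, using the pyGAM library. The data is simulated and the covariates are stand-ins for those in the study, not the actual cohort.

```python
# Hedged sketch: an interpretable logistic GAM with pyGAM. Data is simulated;
# covariates loosely mimic age, GCS, and sedative choice as placeholders.
import numpy as np
from pygam import LogisticGAM, s, f

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(18, 90, n)
gcs = rng.integers(3, 16, n).astype(float)   # Glasgow Coma Scale
sedative = rng.integers(0, 5, n)             # categorical sedative choice
logit = -3 + 0.02 * age + 0.15 * (15 - gcs) + 0.3 * (sedative == 3)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, gcs, sedative])
# Smooth terms for continuous covariates, a factor term for sedative choice.
gam = LogisticGAM(s(0) + s(1) + f(2)).fit(X, y)
gam.summary()   # per-term significance, analogous to LR coefficient tables
```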
The major research question in this work involves investigating whether particular sedative agents (fentanyl, propofol, versed, ativan, and precedex) are associated with different discharge dispositions for patients with acute traumatic brain injury (TBI). To address this, we compare the effectiveness of various models (traditional linear regression (LR), generalized additive models (GAMs), and deep learning) in providing guidance for sedative choice. We evaluated the performance of each model using metrics for accuracy, interpretability, scalability, and generalizability. Our results show that the deep learning models were the most accurate, while the traditional LR and GAM models maintained better interpretability and scalability. The GAMs provided enhanced interpretability through pairwise interaction heat maps and generalized well to other domains and class distributions, since they do not require satisfying the modeling assumptions used in LR. By evaluating the model results, we found that versed was associated with better discharge dispositions while ativan was associated with worse discharge dispositions. We also identified other significant covariates, including age, the Northeast region, the Acute Physiology and Chronic Health Evaluation (APACHE) score, the Glasgow Coma Scale (GCS), and ethanol level. The versatility of versed may account for its association with better discharge dispositions, while ativan may have negative effects when used to facilitate intubation. Additionally, most of the significant covariates pertain to the clinical state of the patient (APACHE, GCS, etc.), whereas most non-significant covariates were demographic (gender, ethnicity, etc.). Though we found that deep learning slightly improved over LR and GAMs after fine-tuning the hyperparameters, the deep learning results were less interpretable and therefore not ideal for making the aforementioned clinical insights. However, deep learning may be preferable in cases with greater complexity and more data, particularly in situations where interpretability is not as critical. Further research is necessary to validate our findings, investigate alternative modeling approaches, and examine other outcomes and exposures of interest. / Master of Science / Patients with traumatic brain injury (TBI) often require sedative agents to facilitate intubation and prevent further brain injury by reducing anxiety and decreasing level of consciousness. It is important for clinicians to choose the sedative that is most conducive to optimizing patient outcomes. Hence, the purpose of our research is to provide guidance to aid this decision. Additionally, we compare different modeling approaches to provide insights into their relative strengths and weaknesses.
To achieve this goal, we investigated whether exposure to particular sedatives (fentanyl, propofol, versed, ativan, and precedex) was associated with different hospital discharge locations for patients with TBI. From best to worst, these discharge locations are home, rehabilitation, nursing home, remains hospitalized, and death. Our results show that versed was associated with better discharge locations and ativan with worse discharge locations. The fact that versed is often used for alternative purposes may account for its association with better discharge locations. Further research is necessary to investigate this, along with the possible negative effects of using ativan to facilitate intubation. We also found that other variables influencing discharge disposition are age, the Northeast region, and other variables pertaining to the clinical state of the patient (severity-of-illness metrics, etc.). By comparing the different modeling approaches, we found that the new deep learning methods were difficult to interpret but provided a slight improvement in performance after optimization. Traditional methods such as linear regression allowed us to interpret the model output and make the aforementioned clinical insights. However, generalized additive models (GAMs) are often more practical because they can better accommodate other class distributions and domains.
226
Enhancing Big Data Analytics Capabilities: The Influence of Organisational Culture and Data-Driven Orientation
Orero Blat, Maria (13 February 2023)
A partir de les conclusions es deriven algunes implicacions pràctiques com la necessitat d'inversió en formació per a les persones en competències i capacitats digitals i analítiques, el rol clau del directiu per a aconseguir una transformació digital exitosa i aprofitar la inversió tecnològica. Finalment es destaca la importància del diagnòstic cultural i elaboració d'un pla de canvi cultural alineat amb els objectius envers la transformació digital. / [EN] The general objective of the research carried out in this doctoral thesis is to analyse the importance of the transformative power of big data analytics through big data analytical capabilities in the Spanish context of small and medium-sized enterprises.
The context of digital transformation has reshaped the way of doing business in organisations due to the complexity and uncertainty of the environment, the emergence of digital native companies, the introduction of new technologies and the increased competitiveness of markets. Whilst the implementation of technology and digital infrastructures has been covered in the academic literature in recent years, there are significant challenges at the human level due to the changing context of work, leadership and the skills needed to compete successfully today. People, not technology, are at the heart of digital transformation and any resulting organisational change must priorise them.
Many companies that invest in technologies such as big data are unable to extract the value that big data can offer through data analytics, and therefore do not use it to make valuable decisions for the organisation that lead to increased performance. They have underdeveloped big data analytical capabilities, which are necessary to take advantage of digital transformation and promote the effective implementation of new technologies. From this problem the importance of knowing the background of big data analytical capabilities and their effect on them is derived, in order to achieve a real impact on organisational performance.
On the one hand, organisational culture has been identified as a barrier or booster of change for an effective digital transformation. This requires the implementation of new ways of working and the acquisition of appropriate skills and knowledge to enable data-driven decision making. It is therefore essential that the organisational culture promotes and encourages the promotion of big data analytical capabilities and digital transformation.
On the other hand, the role of the CEO of the organisation, and his or her data-driven strategic vision to incentivise, lead and motivate change towards digital transformation is highlighted. The role of the top management is crucial to motivate a cultural change that allows to see digital transformation and big data analytics capabilities as instruments to achieve superior outcomes (i.e., improved competitiveness, performance, value creation and increased reputation and people satisfaction). Therefore, the CEO must have a strong commitment to digital transformation and a data-driven orientation to make decisions and settle the strategy for the entire organisation.
Among the conclusions of the study, the positive and significant relationship of big data analytics capabilities with digital transformation and organisational performance through innovation are highlighted. This thesis points out the importance of organisational culture and data orientation, as well as an appropriate level of digital maturity, as antecedents to big data analytics capabilities. Finally, the various cultural archetypes are analysed to highlight that a digital, hierarchical or adhocratic culture favours the creation of analytical capabilities and therefore enhances the digital transformation process.
From the conclusions, some practical implications are derived, such as the need to invest in training people in digital and analytical skills and capabilities, the key role of the manager in achieving a successful digital transformation and leveraging technological investment. Finally, the importance of cultural diagnosis and the development of a cultural change plan aligned with the strategic objectives for digital transformation is highlighted, and practical recommendations are settled. / Tesis elaborada gracias al apoyo de las Ayudas de Formación del Profesorado Universitario
(FPU) otorgadas por el Ministerio de Educación, Cultura y Universidades, del Gobierno de
España, y a la Cátedra de Empresa y Humanismo de la Universitat de València. / Orero Blat, M. (2023). Enhancing Big Data Analytics Capabilities: The Influence of Organisational Culture and Data-Driven Orientation [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/191788
227
Intelligent Clinical Information Platform for Assisting Heart Disease Care Pathway using Machine Learning
Walter-tscharf, Franz Frederik Walter Viktor (29 November 2024)
An average of 3 million deaths occurs each year in high-income countries due to unsafe care, with causes including diagnostic and communication failures. These failures are related to clinical information overload, the difficulty of extracting essential unstructured data, and the complexity of the health data analytics required to derive insights. The use case of this dissertation focuses on emergency room (ER) physicians, as they are the initial point of contact for patients and time-sensitive situations occur frequently in the ER. The goal is to develop an intelligent clinical information platform (ICIP) for ER physicians that assists patients' care pathways using machine learning (ML). This platform provides a new, multidimensional view for representing patients' medical conditions, focused on heart disease. To achieve the platform's implementation, three technical components are developed and published within this dissertation: first, a component for data extraction from remote video consultations via WebRTC; second, a data classification component using a Faster Region-Based Convolutional Neural Network (R-CNN) model together with active learning (AL); and third, a data search component with an implemented Elasticsearch pipeline and data storage unified in the FHIR standard.
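A minimal sketch of the uncertainty-sampling loop that active learning typically implies for the second component. A simple scikit-learn classifier stands in for the dissertation's Faster R-CNN model, and the data is synthetic; only the query-the-least-confident pattern is the point.

```python
# Hedged sketch of an uncertainty-sampling active-learning loop. The stand-in
# classifier replaces the thesis's Faster R-CNN; data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
labeled = list(range(20))                      # small seed of labeled items
pool = [i for i in range(len(y)) if i not in labeled]

for round_no in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)        # least-confident sampling
    query = [pool[i] for i in np.argsort(uncertainty)[-10:]]
    labeled += query                           # an oracle labels the queries
    pool = [i for i in pool if i not in query]
    print(f"round {round_no}: {len(labeled)} labeled examples")
```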
The research for the newly developed clinical platform is practically and industrially grounded in building a future clinical product. For this product, ML models are developed to analyze data from past clinical treatments using an R-CNN model for text classification, and verbal audio data is accessed through a speech-to-text (STT) engine employing an RNN TensorFlow model and a large language model (LLM) from NLP.js. Additionally, rule-based reasoning over FHIR JSON objects is used. It has been demonstrated that a three-tier architecture (AngularJS, Java Spring Boot, and PostgreSQL), consisting of components involving neural networks such as the R-CNN, RNN (recurrent neural network), and LLM, can be implemented as a data platform for assisting heart disease care pathways. This allows physicians to interpret patients' vital parameters, pathways, and timelines via diagrams presented in widgets on the AngularJS frontend.
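To illustrate what rule-based reasoning over FHIR JSON can look like, here is a hedged sketch against a simplified Observation resource. The threshold rule and routing message are invented for illustration; only the resource shape and the LOINC heart-rate code (8867-4) follow the standard.

```python
# Hedged sketch: a simple rule fired over a FHIR-style Observation JSON
# object. The rule and threshold are illustrative, not the platform's.
observation = {
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "valueQuantity": {"value": 128, "unit": "beats/minute"},
}

def flag_tachycardia(obs: dict, threshold: float = 100.0) -> bool:
    """Fire if a heart-rate Observation exceeds the threshold."""
    codes = {c["code"] for c in obs.get("code", {}).get("coding", [])}
    value = obs.get("valueQuantity", {}).get("value")
    return "8867-4" in codes and value is not None and value > threshold

if flag_tachycardia(observation):
    print("Rule fired: tachycardia, surface in ER physician widget")
```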
228
La estrategia del dato en una situación de crisis. Análisis de las comparecencias del presidente Pedro Sánchez y la percepción de los usuarios de la red social Twitter durante la crisis de la COVID-19
Pérez Gómez, Antonio (02 September 2024)
Leadership in times of crisis requires coherent and reliable communication, where the transmission of truth is essential. Social media has revolutionized political communication by allowing political actors to reach citizens instantly and in a multidirectional way. However, its use raises ethical and practical challenges, especially in crisis management and with the spread of fake news. During the COVID-19 pandemic, press conferences were a key tool for informing the public about the situation and answering questions from the media and the public at large. In this context, Twitter became an important communication platform and, through the analysis of the comments on this social network, we can obtain valuable information about the public's perception of how the crisis was being managed.
In this research work, we analyze the language used by President Pedro Sánchez in the press conferences he held to inform the public, in order to determine whether there was a prior communication strategy; by analyzing the comments on his posts on Twitter, we then examine how these messages were perceived and to what extent they influenced the communication strategy itself. / Pérez Gómez, A. (2024). La estrategia del dato en una situación de crisis. Análisis de las comparecencias del presidente Pedro Sánchez y la percepción de los usuarios de la red social Twitter durante la crisis de la COVID-19 [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/207551
229
Scalable time series similarity search for data analytics
Schäfer, Patrick (26 October 2015)
A time series is a collection of values sequentially recorded from sensors or live observations over time. Sensors for recording time series have become cheap and omnipresent. While data volumes explode, research in the field of time series data analytics has focused in recent decades on the availability of (a) pre-processed and (b) moderately sized time series datasets. The analysis of real-world datasets raises two major problems. Firstly, state-of-the-art similarity models require the time series to be pre-processed. Pre-processing aims at extracting approximately aligned characteristic subsequences and reducing noise. It is typically performed by a domain expert, may be more time-consuming than the data mining task itself, and simply does not scale to large data volumes. Secondly, time series research has been driven by accuracy metrics rather than by reasonable execution times for large data volumes. This results in quadratic to biquadratic computational complexities for state-of-the-art similarity models. This dissertation addresses both issues by introducing a symbolic time series representation and three different similarity models built on it. These contribute to the state of the art by being pre-processing-free, noise-robust, and scalable. Our experimental evaluation on 91 real-world and benchmark datasets shows that our methods provide higher accuracy than 15 state-of-the-art similarity models on most datasets. Meanwhile, they are up to three orders of magnitude faster, require less pre-processing for noise or alignment, and scale to large data volumes.
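To convey the general idea of a symbolic representation, the sketch below z-normalizes a series, applies piecewise aggregate approximation, and bins segment means into letters. This is a simplified SAX-style scheme for illustration only, not the dissertation's actual representation.

```python
# Simplified illustration of a symbolic time series representation: PAA
# smoothing followed by quantile binning turns a noisy series into a short,
# comparable word. The thesis's own representation differs in detail.
import numpy as np

def symbolize(series: np.ndarray, n_segments: int = 8,
              alphabet: str = "abcd") -> str:
    z = (series - series.mean()) / (series.std() + 1e-12)   # z-normalize
    segments = np.array_split(z, n_segments)
    means = np.array([seg.mean() for seg in segments])      # PAA: denoise
    # Bin segment means into letters via quantile breakpoints.
    bins = np.quantile(means, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[i] for i in np.digitize(means, bins))

rng = np.random.default_rng(42)
t = np.linspace(0, 2 * np.pi, 256)
word_a = symbolize(np.sin(t) + 0.2 * rng.standard_normal(256))
word_b = symbolize(np.sin(t) + 0.2 * rng.standard_normal(256))
print(word_a, word_b)  # similar series map to similar words
```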
230
Deep graphs: represent and analyze heterogeneous complex systems across scales
Traxl, Dominik (17 May 2017)
Network theory has proven to be a powerful instrument in the representation of complex systems. Yet, even in its latest and most general form (i.e., multilayer networks), it still lacks essential qualities to serve as a general data analysis framework. These include, most importantly, an explicit association of information with the nodes and edges of a network and a conclusive representation of groups of nodes and their interrelations on different scales. The implementation of these qualities into a generalized framework is the primary contribution of this dissertation. By doing so, I show how my framework, deep graphs, is capable of acting as a go-between, joining a unified and generalized network representation of systems with the tools and methods developed in statistics and machine learning. A software package accompanies this dissertation; see https://github.com/deepgraph/deepgraph.

A number of applications of my framework are demonstrated. I construct a rainfall deep graph and conduct an analysis of spatio-temporal extreme rainfall clusters. Based on the constructed deep graph, I provide statistical evidence that the size distribution of these clusters is best approximated by an exponentially truncated power law. By means of a generative storm-track model, I argue that the exponential truncation of the observed distribution could be caused by the presence of land masses. Then, I combine two high-resolution satellite products to identify spatio-temporal clusters of fire-affected areas in the Brazilian Amazon and characterize their land-use-specific burning conditions. Finally, I investigate the effects of white noise and global coupling strength on the maximum degree of synchronization for a variety of oscillator models coupled according to a broad spectrum of network topologies. I find a general sigmoidal scaling and validate it with a suitable regression model.
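The accompanying software is distributed as the Python package deepgraph; below is a hedged sketch of its node-table/connector/selector workflow as documented at the linked repository. The node properties, threshold, and functions here are invented, and API details may differ between versions, so treat this as orientation rather than authoritative usage.

```python
# Hedged sketch of the deepgraph workflow: nodes as a pandas DataFrame with
# explicit properties, edges created by connector/selector functions.
# Columns and the 1.0 distance threshold are made-up examples.
import pandas as pd
import deepgraph as dg

# Node table: hypothetical rainfall measurements.
v = pd.DataFrame({
    "time": [0, 0, 1, 1, 2],
    "x":    [0.0, 5.0, 0.5, 5.2, 0.7],
    "rain": [12.0, 3.0, 14.0, 2.5, 9.0],
})
g = dg.DeepGraph(v)

def dist(x_s, x_t):
    # Connector: computed for node pairs; _s/_t suffixes refer to endpoints.
    dx = abs(x_t - x_s)
    return dx

def near(dx, sources, targets):
    # Selector: keep only pairs close enough to belong to one cluster.
    sources = sources[dx < 1.0]
    targets = targets[dx < 1.0]
    return sources, targets

g.create_edges(connectors=dist, selectors=near)
print(g.e)  # edge table carrying the computed dx property
```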