661

Satellite Imagery Big Data for the Sustainable Development of Energy Access

O'Mahony, Patrick January 2023 (has links)
One of the many challenges humanity faces is developing energy access in a sustainable manner that does not further contribute to the burning of fossil fuels, rising greenhouse gas emissions, and global warming. In 2020, 2.4 billion people relied on inefficient and polluting cooking systems due to a lack of energy access, while 25% of schools lacked access to electricity, drinking water, and basic sanitation. This thesis investigates this challenge by using satellite imagery big data to improve energy access in a sustainable manner. The theoretical framework explains the key concepts and grounds the research in transformative social innovation theory and behavioural theory, which guide the analysis; the link between this research and existing work is also explained in this section. The methodology is to survey review articles and select the most appropriate and credible texts to answer two research questions. The first question concerns identifying promising applications of satellite imagery big data for improving energy access, and the second concerns how the development of energy access from satellite imagery can be conducted in a sustainable manner. The primary findings are that a number of credible review articles identify real opportunities for improved energy access, including identifying optimal photovoltaic investment locations, identifying optimal small hydropower plant sites, locating areas suitable for CAM plant cultivation, an indicator that directly addresses sustainable energy investment and rural electricity access needs, and mapping remote off-grid homes to improve energy access. The findings also indicate three factors crucial to the sustainable development of energy access: communication, collaboration, and community. The varied applications of satellite imagery big data discovered each exhibit significant value for improving energy access, and the value that can be gained is closely related to the ability of the research community to engage with local actors and to build a collaborative environment where knowledge is shared and community is built.
662

Modest Reductions in Kidney Function and Adverse Outcomes in Younger Adults

Hussain, Junayd 22 June 2023 (has links)
Chronic kidney disease (CKD) is a complex and progressive condition with limited curative therapies and is associated with physical comorbidity, impaired health-related quality of life, and financial strain on the healthcare system. Currently, CKD is defined by a fixed threshold of an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m², corresponding to approximately 60% of healthy kidney function, sustained for ≥3 months regardless of age. However, this definition does not account for the natural decline in kidney function with advancing age, so older individuals (ages >65 years) with naturally lower eGFR and without significant kidney damage may be over-diagnosed with CKD. Conversely, there is also concern about underdiagnosis of CKD in younger adults (ages <40 years) with “modest” eGFR reductions (eGFR levels well above 60, but below age-expected values). Indeed, kidney impairment is not detected in younger adults until close to 50% of healthy kidney function has been lost, precluding timely prevention of CKD progression and its associated complications (premature mortality, cardiovascular events, etc.). Whether these “modest” eGFR reductions are associated with elevated clinical risk in younger adults is unknown. This thesis is based on a retrospective cohort study using linked healthcare administrative databases to examine the association of index eGFR categories with time to adverse outcomes, relative to age-specific referents. In the first manuscript, we compared associations with key adverse outcomes (all-cause mortality, cardiovascular events, and kidney failure) and patterns of healthcare utilization between younger (ages 18-39), middle-aged (40-49), and older adults (50-65 years). In the second manuscript, we examined associations with major cardiovascular events (cardiovascular mortality, acute coronary syndrome, ischemic stroke, heart failure) by age group. In both manuscripts, we noted significant elevations in risk of adverse outcomes at higher eGFR levels relative to age-specific referents in younger adults compared with middle-aged and older adults. Despite this age-related disparity in clinical risk with modestly reduced eGFR, younger adults were the least likely to obtain a repeated eGFR measure or to be referred to a specialist during follow-up. Notably, these findings persisted for individual adverse events and in clinically important subgroups, as well as across various sensitivity analyses (adjusting for additional comorbidities, defining index eGFR using repeated measures, using common referents, and excluding individuals with different underlying mechanisms for reduced eGFR, such as pregnancy or acute kidney injury). This thesis presents evidence of elevated clinical risk with modest reductions in kidney function in younger adults, emphasizing the importance of risk-based eGFR thresholds that vary with age and of treating modestly reduced eGFR as a cardiovascular risk factor worth monitoring in routine clinical practice.
663

Big Data Analytics Using Apache Flink for Cybercrime Forensics on X (formerly known as Twitter)

Kakkepalya Puttaswamy, Manjunath January 2023 (has links)
The exponential growth of social media usage has led to massive data sharing, posing challenges for traditional systems in managing and analyzing such vast amounts of data. This surge in data exchange has also resulted in an increase in cyber threats from individuals and criminal groups. Traditional forensic methods, such as evidence collection and data backup, become impractical when dealing with petabytes or terabytes of data. To address this, Big Data Analytics has emerged as a powerful solution for handling and analyzing structured and unstructured data. This thesis explores the use of Apache Flink, an open-source tool from the Apache Software Foundation, to enhance cybercrime forensic research. Unlike engines such as Apache Spark, whose streaming support is built on micro-batching, Apache Flink offers native real-time stream processing, making it well suited for analyzing dynamic and time-sensitive data streams. The study compares Apache Flink's performance against Apache Spark in handling various workloads on a single node. The literature review reveals a growing interest in utilizing Big Data Analytics, including platforms like Apache Flink, for cybercrime detection and investigation, especially on social media platforms like X (formerly known as Twitter). Sentiment analysis is a vital technique here, but challenges arise due to the unique nature of social data. X, as a valuable source for cybercrime forensics, enables the study of fraudulent, extremist, and other criminal activities. This research explores various data mining techniques and emphasizes the need for real-time analytics to combat cybercrime effectively. The methodology involves data collection from X, preprocessing to remove noise, and sentiment analysis to identify cybercrime-related tweets. The comparative analysis between Apache Flink and Apache Spark demonstrates Flink's efficiency in handling larger datasets and real-time processing. Parallelism and scalability are evaluated to optimize performance. The results indicate that Apache Flink outperforms Apache Spark in response time, making it a valuable tool for cybercrime forensics. Despite this progress, challenges such as data privacy, accuracy improvement, and cross-platform analysis remain. Future research should focus on refining algorithms, enhancing scalability, and addressing these challenges to further advance cybercrime forensics using Big Data Analytics and platforms like Apache Flink.
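As an illustration of the kind of stream job described above, the following is a minimal sketch rather than the thesis's actual pipeline: it assumes PyFlink is installed, replaces the live X feed with a hard-coded list of example posts, and uses an invented keyword list in place of a real sentiment or cybercrime classifier.

```python
# Minimal PyFlink sketch: flag posts that mention cybercrime-related keywords.
# The posts and keyword list are illustrative assumptions, not thesis data.
from pyflink.datastream import StreamExecutionEnvironment

CYBERCRIME_KEYWORDS = {"phishing", "ransomware", "scam", "fraud"}

def flag_post(text: str):
    """Return (text, matched_keywords) for a single post."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return text, sorted(tokens & CYBERCRIME_KEYWORDS)

def main():
    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(2)  # parallelism is one of the settings the study evaluates

    # Stand-in for a streaming connector that would read posts from X in real time.
    posts = env.from_collection([
        "New phishing campaign targets bank customers",
        "Lovely weather in Karlskrona today",
        "Ransomware gang leaks stolen data",
    ])

    flagged = posts.map(flag_post).filter(lambda pair: len(pair[1]) > 0)
    flagged.print()  # in a real job this would go to a durable sink, not stdout
    env.execute("cybercrime_keyword_flagging_sketch")

if __name__ == "__main__":
    main()
```

In a real deployment the `from_collection` source would be replaced by a streaming connector and the keyword match by a trained sentiment model; the sketch only shows the map-filter-sink shape of such a Flink job.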
664

"Datacentrerat värdeskapande för sakens skull, det blir man bara fattig på" : Datadrivet värdeskapande genom BDA för kundupplevelsehantering för online-magasin / "Data-centric value creation for the sake of it, that only makes you poor"

Andersson, Linus, Hvirf, Rebecka, Norman, Kajsa January 2023 (has links)
The objective of this study is to explore how data-driven value can be created for online magazines through big data analytics for customer experience management. Big data analytics can help online magazines gain insights into customer behavior, preferences, and needs, which can be used to create a better customer experience and to support decision making. The study uses a mixed-methods approach to collect and analyze data. It began with a literature review to identify the key concepts related to data-driven value creation, big data analytics, and customer experience management. The study then carried out interviews and web analytics on data provided by the examined organization. The data were analyzed through a thematic analysis of two interviews, one with the editor-in-chief of the magazine and one with an expert on data analytics. Data collected by the examined organization were also analyzed, but required further expertise in order to reveal their value. The study provides insights into how online magazines, and organizations similar to the one studied, can leverage big data analytics to create value for their customers, to gain better insight into their customers' preferences, and to support decision making.
665

Enabling cyber-physical system for manufacturing systems using augmented reality

Beigveder Durante, Pablo January 2023 (has links)
This project focuses on addressing challenges faced by manufacturing lines, such as complexity and flexibility, through the integration of Augmented Reality (AR), Internet of Things (IoT), and Big Data technologies. The objective is to develop a framework that enhances the efficiency, flexibility, and sustainability of manufacturing processes in the context of Industry 4.0. The project involves the design and implementation of an artifact solution using the UNITY platform. The solution enables users to remotely control and monitor a manufacturing line in real time through an AR interface. By leveraging IoT devices and sensors, real-time data is collected from the production line, providing valuable insights into performance, maintenance needs, and resource optimization. The collected data is processed and analyzed using Big Data techniques, enabling predictive maintenance, quality control, and optimization of manufacturing processes. The outcomes of this project provide valuable insights into the potential of AR, IoT, and Big Data technologies in revolutionizing the manufacturing industry. The artifact solution serves as a proof of concept, demonstrating the feasibility and benefits of adopting these technologies for sustainable manufacturing in the context of Industry 4.0. Future research and development can build on this work to further refine and scale the solution for broader industrial applications.
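As a rough illustration of the kind of real-time check such a framework could apply to production-line telemetry, here is a minimal, standard-library-only sketch; the sensor values, window size, and threshold are invented for the example and are not parameters from the project.

```python
# Toy rolling-mean deviation check over a simulated sensor stream,
# standing in for the predictive-maintenance analysis described above.
from collections import deque
from statistics import mean, pstdev

def maintenance_alerts(readings, window=10, z_threshold=3.0):
    """Yield (index, value) for readings that deviate strongly from the
    rolling statistics of the previous `window` samples."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(value - mu) > z_threshold * sigma:
                yield i, value  # candidate for a maintenance inspection
        history.append(value)

if __name__ == "__main__":
    # Simulated vibration readings with one abnormal spike at index 12.
    vibration = [0.50, 0.52, 0.49, 0.51, 0.50, 0.53, 0.48, 0.52, 0.51, 0.50,
                 0.49, 0.51, 1.40, 0.52, 0.50]
    for idx, val in maintenance_alerts(vibration):
        print(f"reading {idx}: {val} deviates from recent behaviour")
```

In the project's setting, the same kind of alert would be surfaced through the AR interface rather than printed to a console.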
666

The emergence of Big Data and Auditors' Perception : A comparative study on India and Bangladesh

Rahnuma, Zenat January 2023 (has links)
Aim: The aim of the study is to explore and compare the perceptions of auditors in India and Bangladesh towards the implementation of big data analytics in auditing.
Method: A qualitative method was applied, using semi-structured interviews. The study is exploratory and the material was analysed thematically.
Results and conclusions: Employing the Technology Acceptance Model (TAM) as a conceptual framework, this study conducted a comparative analysis of auditors' perceptions, emphasizing the components of perceived usefulness, perceived ease of use, intention to adopt, and their interactions. The results show that the intention to adopt big data analytics tools emerges as a shared aspiration among auditors from both India and Bangladesh.
667

”…att biblioteket är en plats där inget sparas” : en intervjustudie om dataveillance och personlig integritet på folkbibliotek / ”…that the library is a place where nothing is stored” : an interview study about dataveillance and privacy at public libraries

Carlander Borgström, Anton January 2023 (has links)
The purpose of this study is to investigate how relevant personnel view dataveillance and user privacy in the context of providing internet access via desktop computers at public libraries. Two research questions are put forth: 1. ”In what ways do relevant personnel engage with user privacy in relation to dataveillance?” and 2. ”What obstacles and opportunities do relevant personnel experience in this line of work?”. Purposeful sampling was employed and semi-structured interviews were conducted with seven individuals employed either by a public library or by the municipality at large, in supporting or executive roles. The interviews were analyzed using Tavani's (2007; 2008) and related theories of privacy, as well as previous research on the topic. Among the findings was that personnel were limited, due to several factors, in the extent to which they were able to customize desktop access to guard against the privacy intrusions of dataveillance. Furthermore, awareness of the particulars of dataveillance was limited among personnel as well as among the most frequent desktop users. Obstacles discerned from the analysis included a lack of personnel training, the vertical integration of dataveillance companies, and the potential of privacy-protection technologies to mask illegal activities. Opportunities discerned from the analysis included the potential for an expanded approach to raising awareness among users about dataveillance, as well as a firm dedication across the institution to tackling the issue. Limitations of the study's results include the narrow scope of its sampling.
668

Un método iterativo y alineado con los planes estratégicos para modelar, integrar y analizar datos en escenarios Big Data / An iterative method, aligned with strategic plans, for modeling, integrating, and analyzing data in Big Data scenarios

Tardío, Roberto 19 July 2021 (has links)
This doctoral thesis analyzes the characteristics of Big Data sources, as well as the existing approaches for processing them and using them in Business Intelligence applications. As the main result of this research, a methodology for the management, analysis, and visualization of Big Data is presented. This methodology is based on the analysis of the requirements of Business Intelligence applications, and it systematically guides the application of the other techniques presented: (i) a method for generating the design and validating the Big Data architecture, (ii) techniques for the efficient integration of data sources, (iii) design of optimal data models and performance comparison in Big Data OLAP (On-Line Analytical Processing) systems, and (iv) design of collaborative Business Intelligence applications. The proposed methodology and methods help reduce the high failure rate in the implementation of Big Data strategies in organizations. Furthermore, the benchmarking proposal presented for Big Data OLAP systems is the first known approach for this type of system, enabling their study and comparison. Big Data OLAP systems allow analytical queries, reports, and dashboards to run with sub-second response times over data models whose tables contain up to tens of billions of rows.
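For readers unfamiliar with Big Data OLAP workloads, the following toy sketch shows the shape of an analytical query over a star schema (a fact table joined to a dimension and rolled up with GROUP BY). It uses Python's built-in sqlite3 with invented tables and data as a stand-in for the distributed OLAP engines the thesis actually benchmarks at the scale of billions of rows.

```python
# Toy star-schema rollup: a sales fact table aggregated along a date dimension.
# Schema and data are invented for illustration only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_sales (date_id INTEGER, store TEXT, amount REAL);
    INSERT INTO dim_date VALUES (1, 2021, 1), (2, 2021, 2), (3, 2021, 3);
    INSERT INTO fact_sales VALUES
        (1, 'north', 120.0), (1, 'south', 80.0),
        (2, 'north', 150.0), (3, 'south', 95.0);
""")

# Roll up the fact table by year and month, as a dashboard query would.
rows = con.execute("""
    SELECT d.year, d.month, SUM(f.amount) AS total_sales
    FROM fact_sales AS f
    JOIN dim_date AS d USING (date_id)
    GROUP BY d.year, d.month
    ORDER BY d.year, d.month
""").fetchall()

for year, month, total in rows:
    print(year, month, total)
```

The thesis's contribution lies in designing and benchmarking data models so that queries of this shape stay below one second at far larger scale, not in the query syntax itself.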
669

Predicting the Stock Market Using News Sentiment Analysis

Memari, Majid 01 May 2018 (has links) (PDF)
ABSTRACT: MAJID MEMARI, for the Master of Science degree in Computer Science, presented on November 3rd, 2017 at Southern Illinois University, Carbondale, IL. Title: PREDICTING THE STOCK MARKET USING NEWS SENTIMENT ANALYSIS. Major Professor: Dr. Norman Carver. Big data is a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them. GDELT is the largest, most comprehensive, and highest-resolution open database ever created. It is a platform that monitors the world's news media from nearly every corner of every country, in print, broadcast, and web formats, in over 100 languages, every moment of every day, stretching all the way back to January 1st, 1979, and updated daily [1]. Stock market prediction is the act of trying to determine the future value of a company stock or other financial instrument traded on an exchange; the successful prediction of a stock's future price could yield significant profit. The efficient-market hypothesis suggests that stock prices reflect all currently available information, and that any price changes not based on newly revealed information are thus inherently unpredictable [2]. Other studies, however, show that prices are predictable to some degree. Stock market prediction has long been an attractive topic and is extensively studied by researchers in different fields, with numerous studies of the correlation between stock market fluctuations and different data sources, whether historical data of major world stock indices or external information from social media and news [6]. The main objective of this research is to investigate the accuracy of predicting unseen prices of the Dow Jones Industrial Average using information derived from the GDELT database. The Dow Jones Industrial Average (DJIA) is a stock market index, one of several indices created by Wall Street Journal editor and Dow Jones & Company co-founder Charles Dow. This research is based on data sets of events from the GDELT database and daily prices of the DJIA from Yahoo Finance, all from March 2015 to October 2017. First, multiple classification machine learning models are applied to the generated datasets, and then several ensemble methods are applied as well. In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Afterwards, performance is evaluated for each model using the optimized parameters. Finally, the experimental results show that using ensemble methods has a significant positive impact on prediction accuracy. Keywords: Big Data, GDELT, Stock Market, Prediction, Dow Jones Index, Machine Learning, Ensemble Methods
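The following minimal sketch mirrors the single-model-versus-ensemble comparison described above, but with synthetic features generated by scikit-learn standing in for the GDELT-derived event data and DJIA direction labels; the chosen models, split, and metric are illustrative assumptions rather than the thesis's configuration.

```python
# Compare a single classifier against a soft-voting ensemble on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for daily event features labelled with the direction
# of the next day's index move (up = 1, down = 0).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

single = LogisticRegression(max_iter=1000)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft",
)

for name, model in [("single logistic regression", single),
                    ("soft-voting ensemble", ensemble)]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

On real GDELT-derived features the relative gain of the ensemble is an empirical question; the sketch only shows the fit-and-compare structure of such an experiment.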
670

URBAN CLIMATE RESILIENCE AND THE PROMISE OF BIG DATA SOLUTIONS : ASSESSING BIG DATA APPLICATION INTO MADRID’S URBAN CLIMATE CHANGE RESILIENCE SCENARIO / URBAN KLIMAT ELASTICITET OCH LÖFTET OM BIG DATA-LÖSNINGAR : BEDÖMER BIG DATA-APPLIKATIONEN I MADRID KLIMATFÖRÄNDRINGSSCENARIOT

Rojo, Juan January 2020 (has links)
In the midst of a climate crisis like the one the world is facing right now, it is essential to find new tools that allow better decision-making, both to mitigate climate change and to adapt to it. To this day, data science has yet to develop the knowledge necessary to tackle climate change, even though large databases of climate data are available. With the technological revolution that society is experiencing, and the large amounts of data generated at every moment, it is inevitable to think that the necessary responses will require greater collection and use of data, along with the tools, knowledge, and infrastructure needed. Cities, as great centers of knowledge, population density, and innovation, must take the lead in promoting data science and Big Data and in incorporating them into building urban resilience. For the combination to be productive, both concepts, resilience and Big Data, must be understood in a holistic and complementary way. Both are dynamic and relatively new concepts that must find a point of union, and scientists investigating adaptation must reach out to data scientists for the skills necessary to clean the data as well as to organize, analyze, and manage it. Pairing Big Data insights with a well-established and localized urban resilience context can reveal a deeper understanding of climate vulnerability, leading to better early-warning systems, more rigorous monitoring and evaluation, and ultimately more robust adaptation responses based on more accurately defined problems. This study analyzes both concepts, setting out what Big Data is and studying urban climate resilience in a specific setting: the city of Madrid. The results of the study identify the varied applications of Big Data for a given environment of climate change threats, such as heatwaves, loss of biodiversity, and flooding, describing their main data sources, methods, and assessment criteria. In addition, the major characteristics of the process of using Big Data in decision-making are explained, describing the barriers to and key drivers of data access, assessment, and application. Such considerations include the correct integration of the different stakeholders in the data collection, cleaning, and application processes; ethical considerations of privacy, use, and ownership; and good-governance issues such as fostering citizen participation, encouraging innovation, and urging the creation of a solid and robust management infrastructure that supports the proper operation of the data conditions. The use of Big Data can be a fundamental tool for the development of more robust, flexible, and reflexive resilience strategies that keep projections of climate threats updated, allowing adaptation measures to be more relevant and better suited to a system's shocks and stresses. This study broadens the knowledge of which data sources are appropriate, the relevance of these data for application in urban climate resilience, and specific Big Data considerations for the city of Madrid.
