  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
651

Urban Expressway Safety and Efficiency Evaluation and Improvement using Big Data

Shi, Qi 01 January 2014 (has links)
In an age of data explosion, almost every aspect of social activity is impacted by the abundance of information. This information, characterized by its alarming volume, velocity and variety, is often referred to as "Big Data". As a fundamental element of human life, transportation also confronts the promises and challenges brought about by the Big Data era. Big Data in the transportation arena, enabled by the rapid popularization of Intelligent Transportation Systems (ITS) in the past few decades, are often collected continuously from different sources over a vast geographical scale. Huge in size and rich in information, these seemingly disorganized data could considerably enhance experts' understanding of their systems. In addition, proactive traffic management for better system performance is made possible by the real-time nature of Big Data in transportation. Operational efficiency and traffic safety have long been deemed priorities in highway system performance measurement. While efficiency can be evaluated in terms of traffic congestion, safety is studied through crash analysis. Extensive work has been conducted to identify the contributing factors and remedies of traffic congestion and crashes. These studies have led to a growing consensus that operation and safety are two sides of the same coin: ameliorating either would have a positive effect on the other. With the advancement of Big Data, proactive, real-time monitoring and improvement of both operation and safety have become an urgent need. In this study, the traffic safety and efficiency of the urban expressway network operated by the Central Florida Expressway Authority (CFX) were investigated. The expressway system is equipped with multiple ITS deployments. CFX utilizes an Automatic Vehicle Identification (AVI) system for Electronic Toll Collection (ETC) as well as for the provision of real-time information. Recently, the authority introduced a Microwave Vehicle Detection System (MVDS) on its expressways for more precise traffic monitoring. These traffic detection systems collect different types of traffic data continuously on the 109-mile expressway network, making them one of the sources of Big Data. In addition, multiple Dynamic Message Signs (DMS) are currently in use to communicate between CFX and motorists. Due to their dynamic nature, they serve as an ideal tool for efficiency and safety improvement. Careful examination of the Big Data from the ITS traffic detection systems was carried out. Based on the characteristics of the data, three types of congestion measures based on the AVI and MVDS systems were proposed for efficiency evaluation. The MVDS-based congestion measures were found to be better at capturing subtle changes in congestion in real time than the AVI-based congestion measure. Moreover, considering the high deployment density of the MVDS system, the whole expressway network is well covered, so congestion could be evaluated at the microscopic level in both spatial and temporal dimensions. According to the proposed congestion measures, both congested mainline segments and ramps experiencing congestion were identified. For congestion alleviation, the existing DMS that could be utilized for queue warning were located; where no DMS was available upstream of a congested area, potential locations for future DMS were suggested. Substantial efforts have also been dedicated to Big Data applications in safety evaluation and improvement.
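As a purely illustrative aside, the sketch below shows one way a speed-based congestion index of the kind described above could be computed from MVDS-style detector readings. The free-flow speed, threshold, detector IDs and readings are all assumptions and are not taken from the thesis.

```python
# Hedged illustration of a speed-reduction congestion index per detector and
# 5-minute interval: CI = max(0, (v_free - v) / v_free). The exact measure
# definitions used in the thesis are not reproduced; all values are invented.
import pandas as pd

free_flow_speed = 65.0  # assumed free-flow speed in mph

readings = pd.DataFrame({
    "detector": ["MVDS_101", "MVDS_101", "MVDS_102", "MVDS_102"],
    "interval": ["07:00", "07:05", "07:00", "07:05"],
    "speed_mph": [62.0, 38.5, 55.0, 24.0],
})

readings["congestion_index"] = (
    (free_flow_speed - readings["speed_mph"]).clip(lower=0) / free_flow_speed
)

# Flag intervals where a segment would be considered congested (threshold assumed)
readings["congested"] = readings["congestion_index"] > 0.4
print(readings)
```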
Both aggregate crash frequency modeling and disaggregate real-time crash prediction were constructed to explore the use of ITS detection data for urban expressway safety analyses. The safety analyses placed an emphasis on congestion's effects on expressway traffic safety. In the aggregate analysis, the three congestion measures developed in this research were tested in the context of safety modeling and their performances were compared. Multi-level Bayesian ridge regression was utilized to deal with the multicollinearity issue in the modeling process. While all of the congestion measures indicated that congestion was a contributing factor to crash occurrence in the peak hours, they suggested that off-peak-hour crashes might be caused by factors other than congestion. Geometric elements such as horizontal curves and the existence of auxiliary lanes were also found to significantly affect crash frequencies on the studied expressways. In the disaggregate analysis, rear-end crashes were specifically studied since their occurrence was believed to be significantly related to traffic flow conditions. The analysis was conducted in a Bayesian logistic regression framework, which achieved relatively good classification performance. The conclusions confirmed the significant effects of peak-hour congestion on crash likelihood. Moreover, a further step was taken to incorporate reliability analysis into the safety evaluation. With the developed logistic model as a system function indicating the safety state under specific traffic conditions, this method has the advantage of quantitatively determining the traffic states that should trigger safety warnings to motorists. Results from the reliability analysis also identify the peak hours as a high-risk period for rear-end crashes. Again, DMS would be an essential tool for carrying these messages to drivers for potential safety benefits. In existing safety studies, ITS traffic data were normally used in aggregated form, or only the pre-crash traffic data were used for real-time prediction. To more fully realize their applications, this research also explored their use from a post-crash perspective. The real-time traffic states immediately before and after crash occurrence were extracted to identify whether the crash caused traffic deterioration. Elements regarding spatial, temporal, weather and crash characteristics from individual crash reports were adopted to analyze under what conditions a crash could significantly worsen traffic conditions on urban expressways. A multinomial logit model and two separate binomial models were adopted to identify each element's effects. The expected contribution of this work is to shorten the reaction and clearance times for crashes that might cause delay on expressways, thus reducing congestion and the probability of secondary crashes simultaneously. Finally, potentially relevant applications beyond the scope of this research but worth investigating in the future were proposed.
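For illustration only, the following sketch sets up a disaggregate rear-end crash prediction as a logistic regression on pre-crash detector aggregates. The thesis describes a Bayesian logistic regression framework; this stand-in uses a frequentist fit with statsmodels, and every variable name and data value below is a fabricated assumption rather than material from the study.

```python
# Sketch: real-time crash risk as a logistic regression on 5-minute MVDS-style
# aggregates (speed, volume, occupancy, peak-hour indicator). Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000  # synthetic observation windows

df = pd.DataFrame({
    "avg_speed_mph": rng.normal(55, 12, n).clip(5, 80),
    "volume_vph": rng.normal(1400, 450, n).clip(50, 2400),
    "occupancy_pct": rng.normal(12, 7, n).clip(0, 60),
    "peak_hour": rng.integers(0, 2, n),
})

# Synthetic outcome: crash odds rise with occupancy and peak-hour congestion
logit = (-4.0 + 0.08 * df["occupancy_pct"] + 0.9 * df["peak_hour"]
         - 0.02 * (df["avg_speed_mph"] - 55))
df["crash"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["avg_speed_mph", "volume_vph", "occupancy_pct", "peak_hour"]])
model = sm.Logit(df["crash"], X).fit(disp=False)
print(model.summary())
```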
652

A study assessing the characteristics of big data environments that predict high research impact: application of qualitative and quantitative methods

Ameli, Omid 24 December 2019 (has links)
BACKGROUND: Big data offers new opportunities to enhance healthcare practice. While researchers have shown increasing interest in using it, little is known about what drives research impact. We explored predictors of research impact across three major sources of healthcare big data derived from the government and the private sector. METHODS: This study was based on a mixed-methods approach. Using quantitative analysis, we first clustered peer-reviewed original research that used data from a government source, the Veterans Health Administration (VHA), and private sources of data, IBM MarketScan and Optum, using social network analysis. We analyzed a battery of research impact measures as a function of the data sources. Other main predictors were topic clusters and authors' social influence. Additionally, we conducted key informant interviews (KII) with a purposive sample of high-impact researchers who have knowledge of the data. We then compiled the findings of the KIIs into two case studies to provide a rich understanding of the drivers of research impact. RESULTS: Analysis of 1,907 peer-reviewed publications using VHA, IBM MarketScan and Optum found that the overall research enterprise was highly dynamic and growing over time. With less than 4 years of observation, research productivity, use of machine learning (ML), natural language processing (NLP), and the Journal Impact Factor showed substantial growth. Studies that used ML and NLP, however, showed limited visibility. After adjustments, VHA studies had generally higher impact (10% and 27% higher annualized Google citation rates) compared to MarketScan and Optum (p<0.001 for both). Analysis of co-authorship networks showed that no single social actor, whether a community of scientists or an institution, was dominant. Other key opportunities to achieve high impact, based on the KIIs, include methodological innovations, under-studied populations and predictive modeling based on rich clinical data. CONCLUSIONS: Big data for purposes of research analytics grew within the three data sources studied between 2013 and 2016. Despite important challenges, the research community is reacting favorably to the opportunities offered both by big data and by advanced analytic methods. Big data may be a logical and cost-efficient choice for emulating research initiatives where RCTs are not possible.
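As a loose, hedged illustration of the co-authorship analysis mentioned above, the sketch below builds a small co-authorship graph with networkx and ranks authors by degree centrality as a rough indicator of dominance. The publication list is entirely invented, and the study's actual clustering and impact modelling are not reproduced.

```python
# Build a weighted co-authorship network from publication records and check
# whether any single author dominates (toy data, illustration only).
import itertools
import networkx as nx

publications = [
    {"title": "P1", "authors": ["Lee", "Garcia", "Chen"]},
    {"title": "P2", "authors": ["Garcia", "Patel"]},
    {"title": "P3", "authors": ["Chen", "Patel", "Lee"]},
    {"title": "P4", "authors": ["Okafor", "Garcia"]},
]

G = nx.Graph()
for pub in publications:
    # Connect every pair of co-authors on the same paper
    for a, b in itertools.combinations(pub["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality as a rough proxy for an author's prominence in the network
centrality = nx.degree_centrality(G)
for author, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {score:.2f}")
```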
653

Analytics-as-a-Service in a Multi-Cloud Environment through Semantically-enabled Hierarchical Data Processing

Jayaraman, P.P., Perera, C., Georgakopoulos, D., Dustdar, S., Thakker, Dhaval, Ranjan, R. 16 August 2016 (has links)
yes / A large number of cloud middleware platforms and tools are deployed to support a variety of Internet of Things (IoT) data analytics tasks. It is common practice that such cloud platforms are used only by their owners to achieve their primary and predefined objectives, where raw and processed data are consumed only by them. However, allowing third parties to access processed data to achieve their own objectives significantly increases integration and cooperation, and can also lead to innovative uses of the data. Multi-cloud, privacy-aware environments facilitate such data access, allowing different parties to share processed data to collectively reduce computation resource consumption. However, there are interoperability issues in such environments that involve heterogeneous data and analytics-as-a-service providers. There is a lack of both architectural blueprints that can support such diverse, multi-cloud environments and corresponding empirical studies that show the feasibility of such architectures. In this paper, we have outlined an innovative hierarchical data processing architecture that utilises semantics at all levels of the IoT stack in multi-cloud environments. We demonstrate the feasibility of this architecture by building a system based on it, using OpenIoT as middleware and Google Cloud and Microsoft Azure as cloud environments. The evaluation shows that the system is scalable and has no significant limitations or overheads.
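A minimal sketch, assuming Python with rdflib and the W3C SOSA vocabulary, of the general semantic-annotation idea behind such an architecture: a raw sensor observation is described with shared terms so that other cloud-based consumers can interpret it without a private schema. The sensor IDs and values are invented, and the paper's actual OpenIoT-based pipeline is considerably more involved.

```python
# Annotate a single IoT observation with SOSA terms and serialise it to Turtle,
# so any downstream analytics service can parse it (illustrative only).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/iot/")  # hypothetical namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

obs = EX["observation/42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/temp-01"]))
g.add((obs, SOSA.observedProperty, EX["property/airTemperature"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.7, datatype=XSD.double)))
g.add((obs, SOSA.resultTime,
       Literal("2016-08-16T10:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```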
654

Towards design and implementation of Industry 4.0 for food manufacturing

Konur, Savas, Lan, Yang, Thakker, Dhaval, Mokryani, Geev, Polovina, N., Sharp, J. 25 January 2021 (has links)
Yes / Today's factories are considered smart ecosystems in which humans, machines and devices interact with each other for efficient manufacturing of products. Industry 4.0 is a suite of enabling technologies for such smart ecosystems that allow the transformation of industrial processes. When implemented, Industry 4.0 technologies have a huge impact on the efficiency, productivity and profitability of businesses. The adoption and implementation of Industry 4.0, however, require overcoming a number of practical challenges, in most cases due to the lack of modernisation and automation in place at traditional manufacturers. This paper presents a first-of-its-kind case study of moving a traditional food manufacturer, still using machinery more than one hundred years old (a common occurrence for small- and medium-sized businesses), towards adoption of Industry 4.0 technologies. The paper reports the challenges we encountered during the transformation process and in the development stage. The paper also presents a smart production control system that we developed by utilising AI, machine learning, Internet of Things, big data analytics, cyber-physical systems and cloud computing technologies. The system provides novel data collection, information extraction and intelligent monitoring services, enabling improved efficiency and consistency as well as reduced operational cost. The platform has been developed in real-world settings offered by an Innovate UK-funded project and has been integrated into the company's existing production facilities. In this way, the company has not been required to replace old machinery outright, but has instead adapted the existing machinery to an entirely new way of operating. The proposed approach and the lessons outlined can benefit similar food manufacturers and other SMEs. / Innovate UK—Knowledge Transfer Partnerships (KTP010551)
655

High-performance and Scalable Bayesian Group Testing and Real-time fMRI Data Analysis

Chen, Weicong 27 January 2023 (has links)
No description available.
656

Big Data and the Integrated Sciences of the Mind

Faries, Frank January 2022 (has links)
No description available.
657

An exploratory paper of the privacy paradox in the age of big data and emerging technologies

Serra, Michelle January 2018 (has links)
Technological innovations and advancements are helping people gain an increasingly comfortable life, as well as expand their social capital through online networks, by offering individuals new opportunities to share personal information. By collecting vast amounts of data, a whole new range of services can be offered, information can be collected and compared, and a new level of individualization can be reached. However, with these new technical capacities come the omnipresence of various data-gathering devices, potential threats to privacy, and individuals' increasing concern over data privacy. This paper aims to shed light on the 'privacy paradox' phenomenon, the dichotomy between privacy attitude, concern, and behavior, by examining previous literature as well as using an online survey (N=463). The findings indicate that there is a difference between attitude, concern, and actual behavior. While individuals value their data privacy and are concerned about the information collected on them, few take action to protect it, and actions rarely align with expressed concerns. However, the 'privacy paradox' is a complex phenomenon and requires further research, especially given the implications of a data-driven society and the introduction of emerging technologies such as Artificial Intelligence and the Internet of Things.
658

A Knowledge Based Approach of Toxicity Prediction for Drug Formulation. Modelling Drug Vehicle Relationships Using Soft Computing Techniques

Mistry, Pritesh January 2015 (has links)
This multidisciplinary thesis is concerned with the prediction of drug formulations for the reduction of drug toxicity. Both scientific and computational approaches are utilised to make original contributions to the field of predictive toxicology. The first part of this thesis provides a detailed scientific discussion of all aspects of drug formulation and toxicity. Discussions are focused on the principal mechanisms of drug toxicity and how drug toxicity is studied and reported in the literature. Furthermore, a review of the current technologies available for formulating drugs for toxicity reduction is provided. Examples of studies reported in the literature that have used these technologies to reduce drug toxicity are also given. The thesis also provides an overview of the computational approaches currently employed in the field of in silico predictive toxicology. This overview focuses on the machine learning approaches used to build predictive QSAR classification models, with examples drawn from the literature. Two methodologies have been developed as part of the main work of this thesis. The first is focused on the use of directed bipartite graphs and Venn diagrams for the visualisation and extraction, from large un-curated datasets, of drug-vehicle relationships that show changes in the patterns of toxicity. These relationships can be rapidly extracted and visualised using the methodology proposed in chapter 4. The second methodology involves mining large datasets for the extraction of drug-vehicle toxicity data. It uses an area-under-the-curve principle to make pairwise comparisons of vehicles, which are classified according to the toxicity protection they offer, and from these comparisons predictive classification models based on random forests and decision trees are built. The results of this methodology are reported in chapter 6.
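A hedged sketch, in Python with scikit-learn, of the kind of classification model described in the final sentences: a random forest that labels drug-vehicle pairs by whether the vehicle is protective. The descriptors, labels and data are fabricated for illustration; the thesis derives its labels from area-under-the-curve comparisons rather than the rule used here.

```python
# Toy random forest over invented drug-vehicle descriptors; cross-validated AUC
# is reported as an example of model assessment (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Synthetic descriptors for drug-vehicle pairs
X = np.column_stack([
    rng.normal(350, 80, n),   # drug molecular weight (assumed descriptor)
    rng.normal(2.5, 1.2, n),  # drug logP (assumed descriptor)
    rng.uniform(0, 1, n),     # vehicle lipid fraction (assumed descriptor)
    rng.integers(0, 2, n),    # vehicle is aqueous, 0/1 (assumed descriptor)
])

# Synthetic label: 1 = vehicle protective, 0 = not protective
y = (0.6 * X[:, 2] - 0.3 * X[:, 3] + rng.normal(0, 0.2, n) > 0.2).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Mean cross-validated AUC:", round(float(scores.mean()), 3))
```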
659

Signos Vitales: body, subjectivity and digital technologies

Miyagusuku Nakamoto, Adriana Cristina 07 July 2021 (has links)
Guided by the logic of platform capitalism and its way of operating through data collection devices, Signos Vitales centers specifically on the process of datafication, in which the body and its movements become sources of information. This thesis has been approached as a theoretical-artistic investigation, the result of an inquiry into the apparent dematerialization linked to the Big Data phenomenon, the experience of the virtual, and the modes of subjectivity that emerge from representation through data. The objective of the thesis is to reflect on our relationship with digital technologies and the mechanisms behind the current technological orientation by means of interactive kinetic sculptures. This investigation summarizes an attempt to navigate a material understanding of abstract processes, in order to potentially redirect them in the future.
660

Studies of poverty, network structures and online big data for the city of Bahía Blanca

Gutiérrez, Emiliano Martín 18 September 2023 (has links)
This doctoral thesis presents three essays that focus on the use of digital tools and their potential applications in the study of poverty in Bahía Blanca (Argentina). The central motivation of these investigations is to contribute to the understanding of poverty and of how digital technologies can be a key element in the adoption of public policies that address deprivations in people's living conditions. The first essay presents a study of the impact of digital access and social capital formation on multidimensional poverty. An in-person survey conducted in the city is used as the data source, and ordinal regressions are performed to determine how access to, knowledge of, and use of digital tools, as well as social capital, affect the degree of multidimensional poverty experienced by an individual. The second manuscript introduces Social Network Analysis (SNA) for the segment of the population that interacts in Pentecostal churches. The justification for focusing on this group is the historical linkage of Pentecostal religious communities with poor neighborhoods. Using information collected from Facebook, the interactions of these users are evaluated. Finally, in a third investigation, a weekly Basic Food Basket (BFB) is estimated with prices collected online. Its importance lies in the fact that this indicator would allow quantifying the minimum income an individual requires to meet their food needs. Econometric estimations are also performed to detect the impact of the exchange rate, fuel prices, and seasonality on the valuation of this basket.
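By way of a hedged illustration of the first essay's modelling idea, the sketch below fits an ordered logit with statsmodels, relating invented digital-access and social-capital variables to an ordinal poverty level. Only the use of ordinal regression comes from the abstract; every variable, threshold and data value here is an assumption.

```python
# Ordered logit sketch: digital access, digital skills and social capital as
# predictors of an ordinal multidimensional-poverty level (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(7)
n = 800

df = pd.DataFrame({
    "internet_access": rng.integers(0, 2, n),  # household has internet (0/1)
    "digital_skills": rng.integers(0, 4, n),   # 0-3 self-reported scale
    "social_capital": rng.normal(0, 1, n),     # standardised index
})

# Latent propensity: better access, skills and ties -> lower deprivation
latent = (-0.8 * df["internet_access"] - 0.4 * df["digital_skills"]
          - 0.5 * df["social_capital"] + rng.logistic(0, 1, n))
poverty = pd.cut(latent, bins=[-np.inf, -1.5, 0.5, np.inf],
                 labels=["none", "moderate", "severe"])

model = OrderedModel(poverty, df, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```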
