  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Big Data in Predictive Toxicology

Neagu, Daniel, Richarz, A-N. 15 January 2020
Toxicological data are being generated at an ever-increasing rate, and the volume of data produced is growing dramatically. This is due in part to advances in software solutions and cheminformatics approaches which increase the availability of open data from chemical, biological, toxicological and high-throughput screening resources. However, the amplified pace and capacity of data generation achieved by these novel techniques present challenges for organising and analysing the data output. Big Data in Predictive Toxicology discusses these challenges as well as the opportunities offered by new techniques in data science. It addresses the nature of toxicological big data, their storage, analysis and interpretation. It also details how these data can be applied in toxicity prediction, modelling and risk assessment.
212

Cascading permissions policy model for token-based access control in the web of things

Amir, Mohammad, Pillai, Prashant, Hu, Yim Fun January 2014
The merger of the Internet of Things (IoT) with cloud computing has given birth to a Web of Things (WoT) which hosts heterogeneous and rapidly varying data. Traditional access control mechanisms such as Role-Based Access Control schemes are no longer suitable for modelling access control on such a large and dynamic scale, as the actors may also change all the time. For such a dynamic mix of applications, data and actors, a more distributed and flexible model is required. Token-Based Access Control is one such scheme which can easily model and comfortably handle interactions with big data in the cloud and enable provisioning of access at fine levels of granularity. However, simple token access models quickly become hard to manage in the face of a rapidly growing repository. This paper proposes a novel token access model based on a cascading permissions policy model which can easily control interactivity with big data without becoming a burden to manage and administer.
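The paper's abstract does not give implementation details, but the cascading idea it describes can be illustrated with a minimal sketch (all names, paths, and permission levels here are hypothetical, not the authors' model): a token granted on a parent resource path cascades down to its descendants, and a more specific token overrides a broader one.

```python
from dataclasses import dataclass

@dataclass
class Token:
    """An access token granting a permission level at a resource path.

    Permissions cascade: a grant on "/sensors" also covers
    "/sensors/temp/1" unless a more specific token overrides it.
    """
    path: str
    level: str  # e.g. "read" or "write"

def effective_level(tokens, resource):
    """Resolve the permission for `resource` by taking the grant on
    the most specific (longest) matching ancestor path."""
    best = None
    for t in tokens:
        if resource == t.path or resource.startswith(t.path.rstrip("/") + "/"):
            if best is None or len(t.path) > len(best.path):
                best = t
    return best.level if best else "none"

tokens = [Token("/sensors", "read"), Token("/sensors/temp", "write")]
print(effective_level(tokens, "/sensors/temp/1"))   # most specific grant wins
print(effective_level(tokens, "/sensors/humidity")) # inherits the parent grant
print(effective_level(tokens, "/actuators"))        # no matching token
```

A cascade like this keeps the token repository small: one token per subtree rather than one per resource, which is the manageability gain the abstract points to.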
213

Transformative role of big data through enabling capability recognition in construction

Atuahene, Bernard T., Kanjanabootra, S., Gajendran, T. 10 August 2023
Big data application is a significant transformative driver of change in the retail, health, engineering, and advanced manufacturing sectors. Big data studies in construction are still somewhat limited, although there is increasing interest in what big data application could achieve. Through interviews with construction professionals, this paper identifies the capabilities needed in construction firms to enable the accrual of the potentially transformative benefits of big data application in construction. Based on previous studies, big data application capabilities needed to transform construction processes focussed on data, people, technology, and organisation. However, the findings of this research suggest a critical modification to that focus to include knowledge and the organisational environment along with people, data, and technology. The research findings show that construction firms use big data with a combination strategy to enable transformation by (a) driving an in-house data management policy to roll out the big data capabilities; (b) fostering collaborative capabilities with external firms for resource development; and (c) outsourcing big data services to address the capability deficits impacting digital transformation.
214

Applications of big data approaches to topics in infectious disease epidemiology

Benedum, Corey Michael 04 June 2019
The availability of big data (i.e., a large number of observations and variables per observation) and advancements in statistical methods present numerous exciting opportunities and challenges in infectious disease epidemiology. The studies in this dissertation address questions regarding the epidemiology of dengue and sepsis by applying big data and traditional epidemiologic approaches. In doing so, we aim to advance our understanding of both diseases and to critically evaluate traditional and novel methods to understand how these approaches can be leveraged to improve epidemiologic research. In the first study, we examined the ability of machine learning and regression modeling approaches to predict dengue occurrence in three endemic locations. When we utilized models with historical surveillance, population, and weather data, machine learning models predicted weekly case counts more accurately than regression models. When we removed surveillance data, regression models were more accurate. Furthermore, machine learning models were able to accurately forecast the onset and duration of dengue outbreaks up to 12 weeks in advance without using surveillance data. This study highlighted potential benefits that machine learning models could bring to a dengue early warning system. The second study utilized machine learning approaches to identify the rainfall conditions which lead to mosquito larvae being washed away from breeding sites occurring in roadside storm drains in Singapore. We then used conventional epidemiologic approaches to evaluate how the occurrence of these washout events affects dengue occurrence in subsequent weeks. This study demonstrated an inverse relationship between washout events and dengue outbreak risk. The third study compared algorithm-based and conventional epidemiologic approaches used to evaluate variables for statistical adjustment.
We used these approaches to identify what variables to adjust for when estimating the effect of autoimmune disease on 30-day mortality among ICU patients with sepsis. In this study, autoimmune disease presence was associated with an approximate 10-20% reduction in mortality risk. Risk estimates identified with algorithm-based approaches were compatible with conventional approaches and did not differ by more than 9%. This study revealed that algorithm-based approaches can approximate conventional selection methods, and may be useful when the appropriate set of variables to adjust for is unknown.
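As a rough illustration of the kind of surveillance-based forecasting baseline the first study compares machine learning models against (the dissertation's actual models and data are not reproduced here; the data below are synthetic and the setup is hypothetical), a minimal autoregressive sketch predicts a week's case count from the preceding weeks:

```python
import numpy as np

def fit_ar_baseline(cases, lags=4):
    """Fit a least-squares autoregressive baseline: predict this week's
    count from the previous `lags` weekly counts plus an intercept.
    Returns the fitted coefficient vector."""
    X, y = [], []
    for t in range(lags, len(cases)):
        X.append([1.0] + list(cases[t - lags:t]))
        y.append(cases[t])
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return coef

def predict_next(cases, coef, lags=4):
    """One-step-ahead forecast from the last `lags` observed weeks."""
    x = np.array([1.0] + list(cases[-lags:]))
    return float(x @ coef)

# Synthetic weekly counts with a linear trend; the AR fit tracks it exactly.
weeks = np.arange(30, dtype=float)
cases = 10 + 2 * weeks
coef = fit_ar_baseline(cases)
print(predict_next(cases, coef))
```

The study's finding, paraphrased in these terms, is that once such lagged surveillance inputs are removed, simpler regression models regain the advantage over machine learning models.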
215

Big Data Phylogenomics: Methods and Applications

Sharma, Sudip, 0000-0002-0469-1211 08 1900
Phylogenomics, the study of genome-scale data containing many genes and species, has advanced our understanding of patterns of evolutionary relationships and processes throughout the Tree of Life. Recent research studies frequently use such large-scale datasets with the expectation of recovering historical species relationships with high statistical confidence. At the same time, the computational complexity and resource requirements for analyzing such large-scale data increase with the number of genomic loci and sites. Therefore, different crucial steps of phylogenomic studies, like model selection and estimating bootstrap confidence limits on inferred phylogenetic trees, are often not feasible on regular desktop computers and generally time-consuming on high-performance computing systems. Moreover, increasing the number of genes in the data increases the chance of including genomic loci that may cause biased and fragile species relationships that spuriously receive high statistical support. Such data errors in phylogenomic datasets are major impediments to building a robust tree of life. Contemporary approaches to detect such data errors require alternative tree hypotheses for the fragile clades, which may be unavailable a priori or too numerous to evaluate. In addition, finding causal genomic loci under these contemporary statistical frameworks is also computationally expensive, and the cost increases with the number of alternatives to be compared. In my Ph.D. dissertation, I have pursued three major research projects: (1) Introduction and advancement of the bag of little bootstraps approach for placing confidence limits on species relationships in genome-scale phylogenetic trees. (2) Development of a novel site-subsampling approach to select the best-fit substitution model for genome-scale phylogenomic datasets. Both of these approaches analyze data subsamples containing a small fraction of sites from the full phylogenomic alignment.
Before analysis, sites in a subsample are repeatedly chosen at random to build a new alignment that contains as many sites as the original dataset, which is shown to retain the statistical properties of the full dataset. Analyses of simulated and empirical datasets showed that these approaches are fast and require a minuscule amount of computer memory while retaining accuracy similar to that achieved by full-dataset analysis. (3) Development of a supervised machine learning approach based on the Evolutionary Sparse Learning framework for detecting fragile clades and associated gene-species combinations. This approach first builds a genetic model for a monophyletic clade of interest, along with a clade probability and gene-species concordance scores. The clade model and these novel metrics expose fragile clades and highly influential as well as disruptive gene-species candidates underlying the fragile clades. The efficiency and usefulness of this approach are demonstrated by analyzing a set of simulated and empirical datasets and comparing its performance with state-of-the-art approaches. Furthermore, I have actively contributed to projects exploring applications of these newly developed approaches to a variety of research questions. / Biology
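The subsample-then-inflate step described above can be sketched as a toy (parameter choices and names here are illustrative assumptions, not the dissertation's implementation): draw a small subsample of sites, then resample it with replacement up to the full alignment length. Tracking per-site weights instead of materialising the inflated alignment is what keeps memory proportional to the subsample size.

```python
import random

def blb_replicate(subsample_sites, full_length, rng):
    """Inflate a small subsample back to the full alignment length by
    sampling its sites with replacement. Returns a site -> weight map,
    so memory stays proportional to the subsample, not the full data."""
    counts = {}
    for _ in range(full_length):
        s = rng.choice(subsample_sites)
        counts[s] = counts.get(s, 0) + 1
    return counts

rng = random.Random(42)
n = 100_000                 # sites in the full alignment
b = int(n ** 0.6)           # small subsample of roughly n^0.6 sites
subsample = rng.sample(range(n), b)
weights = blb_replicate(subsample, n, rng)
# Total weight equals the full alignment length; distinct sites stay <= b.
print(sum(weights.values()), len(weights))
```

A likelihood or model-selection score computed on this weighted subsample then stands in for one computed on a full-size bootstrap replicate, which is the source of the speed and memory savings the abstract reports.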
216

Benchmarking Performance for Migrating a Relational Application to a Parallel Implementation

Gadiraju, Krishna Karthik 13 October 2014
No description available.
217

Predicting Diffusion of Contagious Diseases Using Social Media Big Data

Elkin, Lauren S. 06 February 2015
No description available.
218

Conditional Correlation Analysis

Bhatta, Sanjeev 05 June 2017
No description available.
219

Using the Architectural Tradeoff Analysis Method to Evaluate the Software Architecture of a Semantic Search Engine: A Case Study

Chatra Raveesh, Sandeep January 2013
No description available.
220

Performance Characterization and Improvements of SQL-On-Hadoop Systems

Kulkarni, Kunal Vikas 28 December 2016
No description available.
