  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Parallel Inverted Indices for Large-Scale, Dynamic Digital Libraries

Sornil, Ohm 09 February 2001 (has links)
The dramatic increase in the amount of content available in digital forms gives rise to large-scale digital libraries, targeted to support millions of users and terabytes of data. Retrieving information from a system of this scale in an efficient manner is a challenging task due to the size of the collection as well as the index. This research deals with the design and implementation of an inverted index that supports searching for information in a large-scale digital library, implemented atop a massively parallel storage system. Inverted index partitioning is studied in a simulation environment, targeting a terabyte of text. As a result, a high-performance partitioning scheme is proposed. It combines the best qualities of the term and document partitioning approaches in a new Hybrid Partitioning Scheme. Simulation experiments show that this organization provides good performance over a wide range of conditions. Further, the issues of creation and incremental updates of the index are considered. A disk-based inversion algorithm and an extensible inverted index architecture are described, and experimental results with actual collections are presented. Finally, distributed algorithms to create a parallel inverted index partitioned according to the hybrid scheme are proposed, and performance is measured on a portion of the equipment that normally makes up the 100-node Virginia Tech PetaPlex™ system. NOTE: (02/2007) An updated copy of this ETD was added after there were patron reports of problems with the file. / Ph. D.
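The term- vs. document-partitioning trade-off this abstract addresses can be sketched in a few lines of Python. This is an illustrative toy, not the thesis's hybrid scheme or its PetaPlex implementation; the hybrid approach combines aspects of both layouts shown here, and all function names are ours:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {t: sorted(ids) for t, ids in index.items()}

def document_partition(docs, n_nodes):
    """Document partitioning: each node indexes a disjoint subset of the
    documents, so every node holds a (short) posting list for many terms."""
    shards = [{} for _ in range(n_nodes)]
    for doc_id, text in docs.items():
        shards[doc_id % n_nodes][doc_id] = text
    return [build_inverted_index(s) for s in shards]

def term_partition(docs, n_nodes):
    """Term partitioning: the global index is split by hashing each term to
    a node, so every node holds the full posting list for some terms."""
    full = build_inverted_index(docs)
    shards = [{} for _ in range(n_nodes)]
    for term, postings in full.items():
        shards[hash(term) % n_nodes][term] = postings
    return shards
```

Document partitioning spreads every query across all nodes; term partitioning sends a query only to the nodes owning its terms but concentrates load for frequent terms — which is why a hybrid can outperform either extreme.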
302

Development of Fragility Curve Database for Multi-Hazard Performance Based Design

Tahir, Haseeb 14 July 2016 (has links)
There is a need to develop efficient multi-hazard performance based design (PBD) tools to analyze and optimize buildings at a preliminary stage of design. The first step was to develop a database, supported by five major contributions: 1) development of a nomenclature of variables in PBD; 2) creation of a mathematical model to fit data; 3) collection of data; 4) identification of gaps and methods for filling data in PBD; 5) screening of soil, foundation, structure, and envelope (SFSE) combinations. A unified nomenclature was developed with the collaboration of a multi-disciplinary team to navigate the PBD process. A mathematical model for incremental dynamic analysis was developed to fit the existing data in the database in a manageable way. Three sets of data were collected to initialize the database: 1) responses of structures subjected to hazard; 2) fragility curves; 3) consequence functions. Fragility curves were critically analyzed to determine the source and the process of development of the curves, but structural analysis results and consequence functions were not critically analyzed due to a lack of similarities between the data and background information, respectively. Gaps in the data and the methods to fill them were identified to lay out the path for the completion of the database. A list of SFSE systems applicable to typical midrise office buildings was developed. Since the database did not have enough data to conduct PBD calculations, engineering judgement was used to screen SFSE combinations to identify the potential combinations for detailed analysis. Through these five contributions this thesis lays the foundation for the development of a database for multi-hazard PBD and identifies potential future work in this area. / Master of Science
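Fragility curves like those collected in such a database are conventionally expressed as lognormal CDFs of an intensity measure. The sketch below is a generic illustration of that standard form, not the thesis's own model or schema; parameter names are ours:

```python
from math import log, sqrt, erf

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability that a component or structure
    reaches or exceeds a damage state at intensity-measure level `im`,
    given median capacity `theta` and lognormal dispersion `beta`.
    Evaluates Phi(ln(im/theta) / beta) via the error function."""
    z = log(im / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

By construction the curve passes through probability 0.5 at the median capacity `im == theta`, and `beta` controls how sharply it rises around that point.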
303

Procedures to Perform Dam Rehabilitation Analysis in Aging Dams

Bliss, Michael A. 11 July 2006 (has links)
There are hundreds of existing dams within the State of Virginia, and thousands more across the United States. A large portion of these dams do not meet the current safety standard of passing the Probable Maximum Flood. Likewise, many of the dams have reached or surpassed their original design lives and are in need of rehabilitation. A standard protocol will assist dam owners in completing a dam rehabilitation analysis. The protocol provides the methods to complete the hydrologic, hydraulic, and economic analyses. Additionally, alternative augmentation techniques are discussed, including the integration of GIS applications and linear programming optimization techniques. The standard protocol and alternative techniques are applied to a case study. The case study includes a set of flood control dams located in the headwaters of the South River watershed in Augusta County, VA. The downstream impacts of the flood control dams on the city of Waynesboro are demonstrated through the hydrologic and hydraulic analysis. / Master of Science
304

Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design

Dash, Sajal 20 August 2020 (has links)
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. High volume and velocity of the data warrant a large amount of storage, memory, and compute power while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental arrival of data through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from the stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching and the Johnson-Lindenstrauss lemma to reduce the complexity of WMDS from O(f(n)d) to O(f(n) log d).
In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to the weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hit combinations, which require orders of magnitude larger computational power, we have scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer. In demonstrating the third guideline, we developed a tool, iBLAST, to perform an incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigate catastrophic forgetting in incremental training of deep learning models. / Doctor of Philosophy / Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. Here volume represents the data's size, variety represents various sources and formats of the data, and velocity represents the data arrival rate. High volume and velocity of the data warrant a large amount of storage, memory, and computational power. In contrast, a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design.
We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric (pair-wise distance and distribution-related) and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental data arrival through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret. Claret is a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that can reduce the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well-known computational problem, the weighted set cover (WSC) problem. We have scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool, iBLAST, to perform an incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigate the catastrophic forgetting of deep learning models, where a model forgets to perform machine learning tasks efficiently on older data in a streaming setting.
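The dimension-reduction idea behind Claret's use of the Johnson-Lindenstrauss lemma can be illustrated with a toy random projection. This is a generic sketch of the lemma, not Claret's stretching mapping or its parallel implementation; function names are ours:

```python
import random
from math import sqrt

def jl_project(points, k, seed=0):
    """Project d-dimensional points down to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k); by the Johnson-Lindenstrauss
    lemma, pairwise Euclidean distances are approximately preserved
    with high probability when k is around O(log n / eps^2)."""
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0) / sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in R] for p in points]

def dist(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Because distance computations after projection cost O(k) instead of O(d), replacing d with roughly log-sized k is what turns an O(f(n)d) scheme into O(f(n) log d)-style behavior.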
305

Ecohydrologic Indicators of Low-flow Habitat Availability in Eleven Virginia Rivers

Hoffman, Kinsey H. 26 October 2015 (has links)
Increasing demand and competition for freshwater are threatening instream uses, including ecosystem services and aquatic habitat. A standard method of evaluating impacts of alternative water management scenarios on instream habitat is the Instream Flow Incremental Methodology (IFIM). The primary outputs of IFIM studies are: 1) habitat rating curves that relate habitat availability to streamflow for every species, lifestage, or recreational use modelled; and 2) habitat time series under alternative water management scenarios. We compiled 428 habitat rating curves from previous IFIM studies across 11 rivers in Virginia and tested the ability to reduce this number based on similarities in flow preferences and responses to flow alteration. Individual site-species combinations were reduced from 428 objects to four groups with similar seasonal habitat availability patterns using a hierarchical, agglomerative cluster analysis. A seasonal habitat availability (SHA) ratio was proposed as a future indicator of seasonal flow preferences. Four parameters calculated from the magnitude and shape of habitat rating curves were proposed as response metrics that indicate how a lifestage responds to flow alteration. Univariate and multivariate analyses of variance and post-hoc tests identified significantly different means for the SHA ratio, QP (F=63.2, p<2e-16) and SK (F=65.6, p<2e-16). A reduced number of instream flow users can simplify the incorporation of aquatic habitat assessment in statewide water resources management. / Master of Science
306

An Incremental Approach to Development at Gesundheit! Institute

Segal, Martin Daniel 10 January 2003 (has links)
This thesis is an evaluation and proposal for development for an alternative health care center in West Virginia. The Gesundheit Institute is based on the work of Dr. Hunter "Patch" Adams and his desire to create an alternative to the current model of health care. The Institute would not charge for services and would offer non-traditional as well as traditional methods of healing. By evaluating what is currently happening at the center and what the resources are, I propose to use an incremental approach to growth. The ideas would result in a series of smaller buildings developed over time as opposed to a single larger building. The thesis includes the design for the next major building, a community center/dining hall, and a basic design for a series of sleeping quarters. It also includes the reworking of the master plan to better integrate incremental growth and sustainable development. / Master of Architecture
307

Human-Machine Alignment for Context Recognition in the Wild

Bontempelli, Andrea 30 April 2024 (has links)
The premise for AI systems like personal assistants to provide guidance and suggestions to an end-user is to understand, at any moment in time, the personal context that the user is in. The context – where the user is, what she is doing and with whom – allows the machine to represent the world in the user's terms. The context must be inferred from a stream of sensor readings generated by smart wearables such as smartphones and smartwatches, and the labels are acquired from the user directly. To perform robust context prediction in this real-world scenario, the machine must handle the egocentric nature of the context, adapt to the changing world and user, and maintain a bidirectional interaction with the user to ensure the user-machine alignment of world representations. To this end, the machine must learn incrementally on the input stream of sensor readings and user supervision. In this work, we: (i) introduce interactive classification in the wild and present knowledge drift (KD), a special form of concept drift, occurring due to world and user changes; (ii) develop simple and robust ML methods to tackle these scenarios; (iii) showcase the advantages of each of these methods in empirical evaluations on controlled synthetic and real-world data sets; (iv) design a flexible and modular architecture that combines the methods above to support context recognition in the wild; (v) present an evaluation with real users in a concrete social science use case.
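The kind of incremental, drift-tolerant learner this abstract describes can be caricatured with a sliding-window classifier: old label associations fade as the window slides, so the model tracks a changing world. This is a toy illustration, not one of the thesis's methods; the class and parameter names are invented:

```python
from collections import deque, defaultdict

class WindowedCentroidClassifier:
    """Toy incremental classifier: keeps per-class centroids over a sliding
    window of recent labelled examples. When the meaning of a context
    changes (knowledge drift), stale examples age out of the window and
    predictions follow the new labels."""

    def __init__(self, window=100):
        self.window = deque(maxlen=window)  # (features, label) pairs

    def update(self, x, label):
        """Incorporate one labelled example from the stream."""
        self.window.append((tuple(x), label))

    def predict(self, x):
        """Return the label whose windowed centroid is nearest to x."""
        sums, counts = {}, defaultdict(int)
        for xs, label in self.window:
            if label not in sums:
                sums[label] = [0.0] * len(xs)
            for i, v in enumerate(xs):
                sums[label][i] += v
            counts[label] += 1
        best, best_d = None, float("inf")
        for label, s in sums.items():
            centroid = [v / counts[label] for v in s]
            d = sum((a - b) ** 2 for a, b in zip(x, centroid))
            if d < best_d:
                best, best_d = label, d
        return best
```

Real KD-handling methods are more selective than blanket forgetting (they must distinguish drift from noise and query the user when unsure), but the windowed learner shows the basic adapt-over-time mechanism.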
308

A study on big data analytics and innovation: From technological and business cycle perspectives

Sivarajah, Uthayasankar, Kumar, S., Kumar, V., Chatterjee, S., Li, Jing 10 March 2024 (has links)
Yes / In today’s rapidly changing business landscape, organizations increasingly invest in different technologies to enhance their innovation capabilities. Among these technological investments, a notable development is the application of big data analytics (BDA), which plays a pivotal role in supporting firms’ decision-making processes. Big data technologies are important factors that could help both exploratory and exploitative innovation, which could affect the efforts to combat climate change and ease the shift to green energy. However, studies that comprehensively examine BDA’s impact on innovation capability and the technological cycle remain scarce. This study therefore investigates the impact of BDA on innovation capability, technological cycle, and firm performance. It develops a conceptual model, validated using CB-SEM, through responses from 356 firms. It is found that both innovation capability and firm performance are significantly influenced by big data technology. This study highlights that BDA helps to address the pressing challenges of climate change mitigation and the transition to cleaner and more sustainable energy sources. However, our results are based on managerial perceptions in a single country. To enhance generalizability, future studies could employ a more objective approach and explore different contexts. Multidimensional constructs, moderating factors, and rival models could also be considered in future studies.
309

Oral Histories: a simple method of assigning chronological age to isotopic values from human dentine collagen

Beaumont, Julia, Montgomery, Janet 07 1900 (has links)
Yes / Background: Stable isotope ratios of carbon (δ13C) and nitrogen (δ15N) in bone and dentine collagen have been used for over 30 years to estimate palaeodiet, subsistence strategy, breastfeeding duration and migration within burial populations. Recent developments in dentine microsampling allow improved temporal resolution for dietary patterns. Aim: We propose a simple method which could be applied to human teeth to estimate the chronological age represented by dentine microsamples in the direction of tooth growth, allowing comparison of dietary patterns between individuals and populations. The method is tested using profiles from permanent and deciduous teeth of two individuals. Subjects and methods: Using a diagrammatic representation of dentine development by approximate age for each human tooth (based on the Queen Mary University of London Atlas; AlQahtani et al., 2010), we estimate the age represented by each dentine section. Two case studies are shown: comparison of M1 and M2 from a 19th century individual from London, England, and identification of an unknown tooth from an Iron Age female adult from Scotland. Results and conclusions: The isotopic profiles demonstrate that variations in consecutively-forming teeth can be aligned using this method to extend the dietary history of an individual, or identify an unknown tooth by matching the profiles.
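The age-assignment step can be approximated by linear interpolation along the growth axis. This is a deliberate simplification for illustration: the published method reads formation ages per tooth from the London Atlas stages rather than assuming uniform growth, and the function and parameter names below are hypothetical:

```python
def section_ages(age_start, age_end, n_sections):
    """Assign an approximate chronological age to each of n equal dentine
    sections cut in the direction of tooth growth, by linear interpolation
    between the ages (in years) at which dentine formation starts and
    ends for that tooth. Returns the midpoint age of each section."""
    span = (age_end - age_start) / n_sections
    return [age_start + span * (i + 0.5) for i in range(n_sections)]
```

With per-section ages in hand, δ13C and δ15N profiles from consecutively-forming teeth (say, an M1 and an M2 with overlapping formation periods) can be plotted on a common age axis and aligned.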
310

The Great Irish Famine: identifying starvation in the tissues of victims using stable isotope analysis of bone and incremental dentine collagen

Beaumont, Julia, Montgomery, Janet 13 July 2016 (has links)
Yes / The major components of human diet both past and present may be estimated by measuring the carbon and nitrogen isotope ratios (δ13C and δ15N) of the collagenous proteins in bone and tooth dentine. However, the results from these two tissues differ substantially: bone collagen records a multi-year average whilst primary dentine records and retains time-bound isotope ratios deriving from the period of tooth development. Recent studies harnessing a sub-annual temporal sampling resolution have shed new light on the individual dietary histories of our ancestors by identifying unexpected radical short-term dietary changes, the duration of breastfeeding and migration where dietary change occurs, and by raising questions regarding factors other than diet that may impact on δ13C and δ15N values. Here we show that the dentine δ13C and δ15N profiles of workhouse inmates dating from the Great Irish Famine of the 19th century not only record the expected dietary change from C3 potatoes to C4 maize, but when used together they also document prolonged nutritional and other physiological stress resulting from insufficient sustenance. In the adults, the influence of the maize-based diet is seen in the δ13C difference between dentine (formed in childhood) and rib (representing an average from the last few years of life). The demonstrated effects of stress on the δ13C and δ15N values will have an impact on the interpretations of diet in past populations even in slow-turnover tissues such as compact bone. This technique also has applicability in the investigation of modern children subject to nutritional distress where hair and nails are unavailable or do not record an adequate period of time. / This study was supported by an Arts and Humanities Research Council grant funding to JB under AHRC Studentship AH/I503307/1.
