401

MULTI-TEMPORAL MULTI-MODAL PREDICTIVE MODELLING OF PLANT PHENOTYPES

Ali Masjedi (8789954) 01 May 2020 (has links)
High-throughput phenotyping using high spatial, spectral, and temporal resolution remote sensing (RS) data has become a critical part of the plant breeding chain, focused on reducing the time and cost of selecting the "best" genotypes with respect to the trait(s) of interest. In this study, the potential of accurate and reliable sorghum biomass prediction using hyperspectral and LiDAR data acquired by sensors mounted on UAV platforms is investigated. Experiments comprised multiple varieties of grain and forage sorghum, including some photoperiod-sensitive varieties, providing an opportunity to evaluate a wide range of genotypes and phenotypes.

Feature extraction is investigated, where various novel features, as well as traditional features, are extracted directly from the hyperspectral imagery and LiDAR point cloud data and input to classical machine learning (ML) regression-based models. Predictive models are developed for multiple experiments conducted during the 2017, 2018, and 2019 growing seasons at the Agronomy Center for Research and Education (ACRE) at Purdue University. The impact of the regression method, data source, timing of RS and field-based biomass reference data acquisition, and number of samples on the prediction results is investigated. R2 values for end-of-season biomass ranged from 0.64 to 0.89 across experiments when features from all data sources were included. Geometry-based features derived from the LiDAR point cloud and chemistry-based features extracted from the hyperspectral data provided the most accurate predictions. An analysis of variance (ANOVA) of the accuracies of the predictive models showed that both the data source and the regression method are important factors for reliable prediction; however, the data source was the more important, with 69% significance versus 28% for the regression method. The characteristics of the experiments, including the number of samples and the type of sorghum genotypes, also impacted prediction accuracy.

Including genomic information and weather data in "multi-year" predictive models is also investigated for prediction of end-of-season biomass. Models based on one and two years of data are used to predict the biomass yield for future years. The results show the high potential of the models for biomass and biomass-rank prediction. While models developed using one year of data are able to predict biomass rank, using two years of data resulted in more accurate models, especially when RS data, which encode the environmental variation, are included. The possibility of developing predictive models using RS data collected only until mid-season, rather than the full season, is also investigated. The results show that models using RS data up to 60 days after sowing (DAS) can predict the rank of biomass with R2 values of around 0.65-0.70. This not only reduces the time required for phenotyping by avoiding the manual sampling process, but also decreases the time and cost of RS data collection and the associated challenges of processing and analyzing large data sets, particularly for hyperspectral imaging data.

In addition to extracting features from the hyperspectral and LiDAR data and developing classical ML-based predictive models, supervised and unsupervised feature learning based on fully connected, convolutional, and recurrent neural networks is also investigated. For hyperspectral data, supervised feature extraction provides more accurate predictions, while features extracted from the LiDAR data via unsupervised training yield more accurate predictions. Predictive models based on Recurrent Neural Networks (RNNs) are designed and implemented to accommodate high-dimensional, multi-modal, multi-temporal data; RS data and weather data are incorporated in the RNN models. Results from multiple experiments focused on high-throughput phenotyping of sorghum for biomass prediction are provided and evaluated. Using the proposed RNNs to train on one experiment and predict biomass for other experiments with different types of sorghum varieties illustrates the potential of the network for biomass prediction, as well as the challenges posed by small sample sizes, weather variation, and sensitivity to the associated ground reference information.
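As a minimal sketch of the kind of recurrent model the abstract describes (an LSTM over a growing-season sequence of per-flight feature vectors regressing end-of-season biomass), one might write the following; all names, shapes, and hyperparameters are illustrative assumptions, not taken from the thesis:

```python
import torch
import torch.nn as nn

class BiomassRNN(nn.Module):
    """LSTM over a growing-season sequence of per-date feature vectors
    (e.g., hyperspectral indices, LiDAR canopy metrics, weather)."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # end-of-season biomass (scalar)

    def forward(self, x):                 # x: (batch, n_dates, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)

# Illustrative shapes: 32 plots, 8 flight dates, 20 features per date.
model = BiomassRNN(n_features=20)
x = torch.randn(32, 8, 20)
pred = model(x)                           # (32,) predicted biomass values
```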
402

Network & Cloud Track

Fitzek, Frank H.P. 15 November 2016 (has links)
No description available.
403

Exploring sources of variability in metal organic frameworks through high throughput adsorption and calorimetric methods

Iacomi, Paul Adrian 15 November 2018 (has links)
Metal organic frameworks (MOFs) are novel adsorption materials with unique and desirable properties. However, structural defects, processing, and structural compliance can lead to irreproducibility in adsorption measurements. In this thesis, the creation of an open-source codebase intended to standardize the processing of isotherms is detailed. Using this framework, high-throughput processing of over 18 000 isotherms is used to explore the scale of uncertainty present in published adsorption data. Then, direct measurement of the differential enthalpy of adsorption using microcalorimetry is shown to be an excellent avenue for obtaining further insight into the contribution of guest-host and host-host interactions to the overall energetics of adsorption. Together, these methods are used to study some of the sources of the variability of MOFs and quantify their effect. First, the impact of structural defects is investigated through an alternative post-synthetic method of missing linker/cluster generation in the prototypical UiO-66(Zr) MOF. The processing of materials for their use in an industrial environment through shaping is another potential source of performance modification, studied here as the effect of wet granulation on three topical MOFs (UiO-66(Zr), MIL-100(Fe) and MIL-127(Fe)). Finally, counterintuitive behaviours intrinsic to "soft" porous crystals are investigated, where the structure itself is responsible for fluctuation in adsorption isotherms. A fundamental study on a copper paddlewheel-based material, DUT-49(Cu), yields know-how on the source of adsorption-induced compliance and its tunability through structural modification.
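The abstract does not name the codebase or its API. As a sketch of the kind of standardized isotherm processing it describes, here estimating a Henry's-law constant from the low-pressure region of an adsorption isotherm, one might write (the function, cutoff, and data are illustrative assumptions):

```python
import numpy as np

def henry_constant(pressure, loading, p_max=0.05):
    """Estimate the Henry's-law constant K_H from the low-pressure,
    near-linear region of an adsorption isotherm (n ~ K_H * p).
    Units follow the input (e.g., mmol/g per bar)."""
    pressure = np.asarray(pressure, dtype=float)
    loading = np.asarray(loading, dtype=float)
    mask = pressure <= p_max
    # Least-squares slope through the origin over the selected points.
    p, n = pressure[mask], loading[mask]
    return float(np.dot(p, n) / np.dot(p, p))

# Illustrative data: pressure in bar, loading in mmol/g.
p = np.array([0.01, 0.02, 0.03, 0.05, 0.10, 0.50])
n = np.array([0.10, 0.19, 0.31, 0.52, 0.95, 3.10])
print(henry_constant(p, n))  # ~10 mmol/(g·bar)
```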
404

KTHFS – A HIGHLY AVAILABLE AND SCALABLE FILE SYSTEM

D'Souza, Jude Clement January 2013 (has links)
KTHFS is a highly available and scalable file system built from version 0.24 of the Hadoop Distributed File System. It provides a platform to overcome the limitations of existing distributed file systems, including the metadata server's scalability in terms of memory usage and throughput, and its availability. This document describes the KTHFS architecture and how it addresses these problems by providing a well-coordinated, distributed, stateless metadata server (in our case, Namenode) architecture, backed by a persistence layer such as an NDB cluster. Its primary focus is high availability of the Namenode. KTHFS achieves scalability and recovery by persisting the metadata to an NDB cluster; all namenodes are connected to this cluster and hence are aware of the state of the file system at any point in time. For high availability, KTHFS provides a multi-Namenode architecture. Since these namenodes are stateless and have a consistent view of the metadata, clients can issue requests to any of them; if one of these servers goes down, clients can retry the operation on the next available namenode. We next discuss the evaluation of KTHFS in terms of its metadata capacity for medium and large clusters, throughput and high availability of the Namenode, and an analysis of the underlying NDB cluster. Finally, we conclude with a few words on ongoing and future work in KTHFS.
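The retry behaviour described above follows directly from the namenodes being stateless. A minimal client-side sketch, not KTHFS's actual client code and with illustrative names throughout, might look like:

```python
import random

class FailoverClient:
    """Retry a metadata operation across stateless namenodes: because
    every namenode reads the same NDB-backed state, any of them can
    serve any request, so failover reduces to a simple retry loop."""
    def __init__(self, namenodes):
        self.namenodes = list(namenodes)

    def call(self, op, *args):
        last_error = None
        # Try each namenode once, in random order, before giving up.
        for nn in random.sample(self.namenodes, len(self.namenodes)):
            try:
                return op(nn, *args)
            except ConnectionError as e:
                last_error = e          # namenode down: try the next one
        raise RuntimeError("all namenodes unavailable") from last_error

# Usage sketch (the operation is a hypothetical metadata lookup):
# client.call(lambda nn, path: nn.get_block_locations(path), "/some/file")
```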
405

Pico Cell Densification Study in LTE Heterogeneous Networks

Cong, Guanglei January 2012 (has links)
Heterogeneous Network (HetNet) deployment has been considered the main approach to boosting capacity and coverage in Long Term Evolution (LTE) networks, in order to meet the huge future demand for mobile broadband usage. To study the improvement in network performance (capacity, coverage, and user throughput) from pico cell densification in LTE HetNets, a network densification algorithm that determines the placement locations of the pico sites based on pathloss was designed and applied to build several network models with different pico cell densities. The study was conducted on a real radio network in a limited urban area using an advanced Matlab-based radio network simulator. The simulation results show that network performance is generally enhanced by introducing more pico cells into the network.
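The placement algorithm itself is not given in the abstract; a greedy sketch of pathloss-based placement, in which the grid, separation radius, and selection rule are all assumptions for illustration, could look like:

```python
import numpy as np

def place_picos(pathloss_map, n_picos, min_sep=5):
    """Greedy sketch: repeatedly place a pico cell at the grid point
    with the worst (highest) macro pathloss, then suppress points
    within `min_sep` pixels so sites are spread out."""
    pl = np.array(pathloss_map, dtype=float)
    sites = []
    for _ in range(n_picos):
        i, j = np.unravel_index(np.argmax(pl), pl.shape)
        sites.append((i, j))
        ii, jj = np.ogrid[:pl.shape[0], :pl.shape[1]]
        pl[(ii - i) ** 2 + (jj - j) ** 2 <= min_sep ** 2] = -np.inf
    return sites

# Illustrative 100x100 pathloss grid in dB.
grid = np.random.default_rng(0).uniform(80, 140, (100, 100))
print(place_picos(grid, n_picos=4))
```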
406

A performance comparison of database management systems across different workloads

Jakobsson, Alfred, Le Duy, Mário January 2022 (has links)
This study conducted an experiment on NoSQL and NewSQL database management systems in which the average throughput of Cassandra, CockroachDB, MongoDB, and VoltDB was compared using five workloads composed of different proportions of read and update queries. How much these workload compositions affect throughput for each individual database management system was also investigated. The results showed that VoltDB had the highest throughput overall, and its throughput was the least affected by workload composition. MongoDB showed similarly consistent throughput across workloads, but at a much lower level, and its throughput was affected much more by workload composition than VoltDB's. Cassandra had extremely high throughput for 100 percent update workloads, even beating VoltDB in certain cases, but showed underwhelming results for all other workloads. CockroachDB's throughput was by far the worst on workloads containing any update queries, but was comparable to, and sometimes better than, Cassandra and MongoDB on 100 percent read workloads. CockroachDB's throughput proved to be the most affected by the query composition of the workloads.
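The abstract does not state the benchmark tooling. A dict-backed, YCSB-style harness sketch illustrates the read/update workload-mix methodology; the five ratios below are assumed for illustration, not quoted from the study:

```python
import random
import time

def run_workload(db, n_ops=10_000, read_ratio=0.5, n_keys=1_000):
    """YCSB-style micro-benchmark sketch: issue a mix of read and
    update operations against a dict-like store and report the
    average throughput in operations per second."""
    keys = [f"user{i}" for i in range(n_keys)]
    for k in keys:
        db[k] = "init"                  # preload the store
    start = time.perf_counter()
    for _ in range(n_ops):
        k = random.choice(keys)
        if random.random() < read_ratio:
            _ = db.get(k)               # read
        else:
            db[k] = "updated"           # update
    return n_ops / (time.perf_counter() - start)

# Assumed workload mixes, from all-read to all-update.
for ratio in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f"{ratio:.0%} reads: {run_workload({}, read_ratio=ratio):,.0f} ops/s")
```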
407

Development of a Novel Selection Method for Protease Engineering : A high-throughput fluorescent reporter-based method for characterization and selection of proteases

Hendrikse, Natalie January 2016 (has links)
Proteases are crucial to many biological processes and have become an important field of biomedical and biotechnological research. Engineering of proteases towards therapeutic applications has been limited by the lack of high-throughput methods for characterization and selection. We have developed a novel high-throughput method for quantitative assessment of proteolytic activity in the cytoplasm of Escherichia coli cells. The method is based on co-expression of a protease of interest and a reporter complex consisting of an aggregation-prone protein fused to a fluorescent reporter. Cleavage of a substrate sequence situated between the two reporter complex proteins results in increased whole-cell fluorescence proportional to proteolytic activity, which can be monitored using flow cytometry. We have demonstrated that the method can distinguish the efficiencies with which Tobacco Etch Virus (TEV) protease processes different substrates. We believe that this is the first method in the field of protease engineering that enables simultaneous measurement of proteolytic activity and protease expression levels; it can therefore be applied to substrate profiling, as well as screening and selection of libraries of engineered proteases.
408

Advances in DNA Detection on Paper Chips

Song, Yajing January 2013 (has links)
DNA detection has an increasing importance in our everyday lives, with applications ranging from microbial diagnostics to forensic analysis. Currently, as the associated costs decrease, DNA diagnostic techniques are routinely used not only in research laboratories, but also in clinical and forensic practice. The present thesis aims to unravel the potential of cellulose filter paper as a viable candidate for DNA array support. The thesis comprises two papers. In Paper I, we studied a method for functionalizing the surface of filter paper and the possibility of detecting DNA on active paper using fluorescence. In Paper II, we investigated the visualization and throughput of DNA detection with magnetic beads on active filter papers, an assay which requires no instrumentation (scanner). The findings in Paper I show that XG-NH2 and PDITC can functionalize the cellulose filter paper and that the activated filter papers can covalently bind oligonucleotides modified with amino groups to detect DNA. The detection limit of the assay is approximately 0.2 pmol. In Paper II, visualization of DNA detection on active paper is achieved without instrumentation, based on the natural color of the magnetic beads. Furthermore, successful multiplex detection supports the potential to increase the throughput of DNA detection on active papers. In summary, these studies show that active cellulose filter paper is a good DNA array support candidate, as it provides a user-friendly and cost-efficient DNA detection assay. The methods described in Papers I and II are possible starting points for developing a point-of-care device for on-site analysis of the DNA contents of a sample.
409

HIGH-THROUGHPUT EXPERIMENTATION OF THE BUCHWALD-HARTWIG AMINATION FOR REACTION SCOUTING AND GUIDED SYNTHESIS

Damien Edward Dobson (12790118) 16 June 2022 (has links)
Aromatic C-N bond formation is critical for synthetic chemistry in pharmaceutical, agrochemical, and natural product synthesis. Due to the prevalence of this bond class, many synthetic routes have been developed over time to meet the demand. The most recent and robust C-N bond formation reaction is the palladium-catalyzed Buchwald-Hartwig amination. Given its importance, a high-throughput experimentation (HTE) campaign was devised to create a library that chemists can consult for optimal reaction conditions and ligand/catalyst choice based on the nature of the substrates to be coupled. This study showed trends for the appropriate choice of ligand and catalyst, along with which bases, temperatures, stoichiometries, and solvents are appropriate for a given substrate combination.
410

A High Throughput Low Power Soft-Output Viterbi Decoder

Ouyang, Gan January 2011 (has links)
A high-throughput, low-power Soft-Output Viterbi decoder designed for the convolutional codes used in the ECMA-368 UWB standard is presented in this thesis. Ultra wide band (UWB) wireless communication technology is intended for the physical layer of the wireless personal area network (WPAN) and next-generation Bluetooth. MB-OFDM is a very popular scheme for implementing the UWB system and is adopted as the ECMA-368 standard. To make high-speed data transferred over the channel reappear reliably at the receiver, error correcting codes (ECC) are widely utilized in modern communication systems. The ECMA-368 standard uses concatenated convolutional codes and Reed-Solomon (RS) codes to encode the PLCP header, and only convolutional codes to encode the PPDU Payload. The Viterbi algorithm (VA) is a popular method of decoding convolutional codes for its fairly low hardware implementation complexity and relatively good performance. The Soft-Output Viterbi Algorithm (SOVA), proposed by J. Hagenauer in 1989, is a modified Viterbi algorithm. A SOVA decoder can not only take in soft quantized samples but also provide soft outputs by estimating the reliability of the individual symbol decisions. These reliabilities can be provided to the subsequent decoder to improve the decoding performance of the concatenated decoder. The SOVA decoder is designed to decode the convolutional codes defined in the ECMA-368 standard; its code rate and constraint length are R=1/3 and K=7, respectively. Additional code rates derived from the "mother" rate R=1/3 code by employing "puncturing", including 1/2, 3/4, and 5/8, can also be decoded. To speed up the add-compare-select unit (ACSU), which is always the speed bottleneck of the decoder, the modified CSA structure proposed by E. Yeo is adopted to replace the conventional ACS structure. Besides, seven-level quantization instead of the traditional eight-level quantization is proposed for this decoder to further speed up the ACSU and reduce its hardware implementation overhead. In the SOVA decoder, the delay line storing the path metric difference of every state accounts for the major portion of the overall required memory. A novel hybrid survivor path management architecture using a modified trace-forward method is proposed; it reduces the overall required memory and achieves high throughput without consuming much power. This thesis also describes how to optimize the other modules of the SOVA decoder: for example, the first K-1 stages in the Path Comparison Unit (PCU) and Reliability Measurement Unit (RMU) are removed without affecting the decoding results. The attractiveness of the SOVA decoder motivates finding a way to deliver its soft output to the RS decoder. Bit reliability must be converted into symbol reliability, because the soft output of the SOVA decoder is bit-oriented while the RS decoder requires reliability per byte; no optimal transformation strategy exists, because the SOVA output is correlated. This thesis compares two sub-optimal transformation strategies and proposes an easy-to-implement scheme to concatenate the SOVA decoder and RS decoder under various convolutional code rates. Simulation results show that, using this scheme, the concatenated SOVA-RS decoder can achieve about 0.35 dB of decoding performance gain compared to the conventional Viterbi-RS decoder.
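A minimal sketch of the add-compare-select step at the heart of any Viterbi/SOVA decoder follows, including the per-state path-metric difference that SOVA retains to derive bit reliabilities. A radix-2 trellis with two predecessors per state is assumed; the thesis's actual CSA-based hardware structure differs:

```python
import numpy as np

def acs_step(path_metrics, branch_metrics, prev_states):
    """One add-compare-select (ACS) step of a Viterbi/SOVA decoder
    over a radix-2 trellis. For each state, add branch metrics to the
    two predecessor path metrics, keep the survivor (min-sum), and
    record the absolute metric difference, which SOVA stores to
    derive per-bit reliabilities."""
    n_states = len(path_metrics)
    new_metrics = np.empty(n_states)
    survivors = np.empty(n_states, dtype=int)
    deltas = np.empty(n_states)          # SOVA reliability input
    for s in range(n_states):
        p0, p1 = prev_states[s]          # the two predecessors of state s
        m0 = path_metrics[p0] + branch_metrics[p0, s]
        m1 = path_metrics[p1] + branch_metrics[p1, s]
        survivors[s] = p0 if m0 <= m1 else p1
        new_metrics[s] = min(m0, m1)
        deltas[s] = abs(m0 - m1)
    return new_metrics, survivors, deltas
```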
