31

Dating Divergence Times in Phylogenies

Anderson, Cajsa Lisa January 2007 (has links)
This thesis concerns different aspects of dating divergence times in phylogenetic trees, using molecular data and multiple fossil age constraints. Datings of phylogenetically basal eudicots, monocots and modern birds (Neoaves) are presented. Large phylograms and multiple fossil constraints were used in all these studies. Eudicots and monocots are suggested to be part of a rapid divergence of angiosperms in the Early Cretaceous, with most families present at the Cretaceous/Tertiary boundary. Stem lineages of Neoaves were present in the Late Cretaceous, but the main divergence of extant families took place around the Cretaceous/Tertiary boundary. A novel method and computer software for dating large phylogenetic trees, PATHd8, is presented. PATHd8 is a nonparametric smoothing method that smoothes one pair of sister groups at a time, by taking the mean of the added branch lengths from a terminal taxon to a node. Because of the local smoothing, the algorithm is simple, hence providing stable and very fast analyses, allowing for thousands of taxa and an arbitrary number of age constraints. The importance of fossil constraints and their placement is discussed; they are concluded to be the most important factor for obtaining reasonable age estimates. Different dating methods are compared, and it is concluded that different age estimates are obtained from penalized likelihood, PATHd8, and the Bayesian autocorrelation method implemented in the multidivtime program. In the Bayesian method, prior assumptions about the evolutionary rate at the root, the rate variance and the level of rate smoothing between internal edges are suggested to influence the results.
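
To make the local smoothing idea concrete, the sketch below (Python, illustrative only) dates a node by scaling its mean node-to-tip path length with the substitution rate implied by a single fossil-calibrated node. The toy tree, the single calibration and the single implied rate are all hypothetical simplifications; this is not the PATHd8 algorithm or its software, which handles many constraints and smooths sister groups locally.

```python
# Minimal sketch of mean-path-length dating with one fossil calibration.
# Not PATHd8 itself; a simplified illustration of the underlying idea.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    branch_length: float = 0.0            # substitutions/site on the branch above
    children: List["Node"] = field(default_factory=list)

def mean_path_length(node: Node) -> float:
    """Mean summed branch length from this node down to each descendant tip."""
    if not node.children:
        return 0.0
    paths: List[float] = []
    def collect(n: Node, acc: float) -> None:
        if not n.children:
            paths.append(acc)
        for c in n.children:
            collect(c, acc + c.branch_length)
    collect(node, 0.0)
    return sum(paths) / len(paths)

def date_node(node: Node, calib_node: Node, calib_age: float) -> float:
    """Scale a node's mean path length by the rate implied by a calibrated node."""
    rate = mean_path_length(calib_node) / calib_age    # substitutions per unit time
    return mean_path_length(node) / rate

# Toy tree: ((A:0.10, B:0.12)X:0.05, C:0.20)root
A, B, C = Node("A", 0.10), Node("B", 0.12), Node("C", 0.20)
X = Node("X", 0.05, [A, B])
root = Node("root", 0.0, [X, C])

# Suppose a fossil fixes the root at 100 Myr; estimate the age of node X.
print(round(date_node(X, root, 100.0), 1), "Myr")
```
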
32

Random Iterations of Subhyperbolic Relaxed Newton's Methods / Zufällige Iterationen subhyperbolischer Eulerscher Verfahren

Arghanoun, Ghazaleh 14 April 2004 (has links)
No description available.
33

Computational studies of biomolecules

Chen, Sih-Yu January 2017 (has links)
In modern drug discovery, lead discovery is a term used to describe the overall process from hit discovery to lead optimisation, with the goal being to identify drug candidates. This can be greatly facilitated by the use of computer-aided (or in silico) techniques, which can reduce experimentation costs along the drug discovery pipeline. The range of relevant techniques includes: molecular modelling to obtain structural information; molecular dynamics (covered in Chapter 2); activity or property prediction by means of quantitative structure activity/property models (QSAR/QSPR), where machine learning techniques are introduced (covered in Chapter 1); and quantum chemistry, used to explain chemical structure, properties and reactivity. This thesis is divided into five parts. Chapter 1 starts with an outline of the early stages of drug discovery, introducing the use of virtual screening for hit and lead identification. Such approaches may roughly be divided into structure-based (docking, by far the most commonly referred to) and ligand-based, leading to a set of promising compounds for further evaluation. The use of machine learning techniques, an issue that is encountered frequently, is then introduced, followed by a brief review of the "no free lunch" theorem, which states that no learning algorithm can perform optimally on all problems. This implies that validation of predictive accuracy across multiple models is required for optimal model selection. As the dimensionality of the feature space increases, the issue referred to as "the curse of dimensionality" becomes a challenge. In closing, the last sections focus on supervised classification with Random Forests. Computer-based analyses are an integral part of drug discovery. Chapter 2 begins with a discussion of molecular docking, including strategies incorporating protein flexibility at global and local levels, followed by a specific focus on an automated docking program, AutoDock, which uses a Lamarckian genetic algorithm and an empirical binding free energy function. In the second part of the chapter, a brief introduction to molecular dynamics is given. Chapter 3 describes how we constructed a dataset of known binding sites with co-crystallised ligands, used to extract features characterising the structural and chemical properties of the binding pocket. A machine learning algorithm was adopted to create a three-way predictive model, capable of assigning each case to one of three classes (regular, orthosteric and allosteric) for in silico selection of allosteric sites, with a feature selection algorithm (Gini importance) used to rationalise the selection of the descriptors most influential in classifying the binding pockets. In Chapter 4, we made use of structure-based virtual screening, focusing on docking a fluorescent sensor to a non-canonical DNA quadruplex structure. The preferred binding poses, binding site and interactions are scored, followed by application of an ONIOM model to re-score the binding poses of some DNA-ligand complexes, focusing only on the best pose (with the lowest binding energy) from AutoDock. The use of a conformational ensemble pre-generated with MD to account for the receptor's flexibility, followed by docking, is termed a "relaxed complex" scheme. Chapter 5 concerns the BLUF domain photocycle. We focus on the conformational preferences of some critical residues in the flavin binding site after a charge redistribution has been introduced. This work provides another activation model to address controversial features of the BLUF domain.
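
As a rough sketch of the three-way pocket classification with Gini-based descriptor ranking described for Chapter 3, the snippet below trains a Random Forest on synthetic data. The descriptor names, labels and data are placeholders, not the thesis dataset; cross-validation is included only to echo the model-validation point raised by the "no free lunch" theorem.

```python
# Hedged sketch: three-class Random Forest with Gini importance ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["pocket_volume", "hydrophobicity", "polar_fraction", "depth"]
X = rng.normal(size=(300, len(feature_names)))              # toy pocket descriptors
# Toy labels loosely tied to two descriptors, so the importances are non-trivial:
y = (X[:, 0] > 0.3).astype(int) + (X[:, 1] > 0.3).astype(int)   # 0, 1 or 2

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validation guards against over-optimistic accuracy estimates:
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")    # Gini importance ranking of descriptors
```
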
34

Air emissions measurements at cattle feedlots

Baum, Kristen A. January 1900 (has links)
Master of Science / Department of Agronomy / Jay M. Ham / The potential environmental impact of animal feeding operations on air quality has created the need for accurate air emissions measurements. Of particular concern are ammonia emissions from cattle feedlots, operations that contribute a large portion of the agricultural ammonia emissions inventory. Micrometeorological methods are ideal for emissions measurements from large, open-source areas like feedlot pens; however, theoretical assumptions about the boundary layer must be made, which may not hold true above the heterogeneous, fetch-limited surface of the feedlot. Thus, the first objective of this work was to characterize the surface boundary layer of an open-air cattle feedlot and provide insight into how micrometeorological techniques might be applied to these non-ideal sites. Eddy covariance was used to measure fluxes of momentum, heat, water, and carbon dioxide from a commercial cattle feedlot in central Kansas. Data supported the use of eddy covariance and similar methods (i.e., relaxed eddy accumulation) for flux measurements from both cattle and pen surfaces. The modeled cumulative source area contributing to eddy covariance measurements at a 6 m sample height was dominated by just a few pens near the tower, making the characteristics of those pens especially important when interpreting results. The second objective was to develop a system for measuring ammonia fluxes from feedlots. A new type of relaxed eddy accumulation system was designed, fabricated, and tested that used honeycomb denuders to independently sample ammonia in up-moving and down-moving eddies. Field testing of the relaxed eddy accumulation system at a feedlot near Manhattan, KS showed fluxes of ammonia ranged between 60 and 130 μg m⁻² s⁻¹ during the summer of 2007. Even in the high ammonia environment (e.g., 300-600 μg m⁻³), the honeycomb denuders had enough capacity for the 4-hour sampling duration and could be used to measure other chemical species that the denuders could be configured to capture. Results provide a foundation for emissions measurements of ammonia and other gases at cattle feedlots and help address some of the challenges that micrometeorologists face with any non-ideal source area.
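
For readers unfamiliar with relaxed eddy accumulation, the flux is commonly computed as F = β·σ_w·(C_up − C_down), where σ_w is the standard deviation of the vertical wind speed and β is an empirical coefficient (about 0.56 in many studies). The sketch below illustrates that calculation with invented NH3 concentrations; it is not the thesis data-processing code, and the numbers are for scale only.

```python
# Hedged sketch of the relaxed eddy accumulation (REA) flux calculation.
import statistics

def rea_flux(c_up, c_down, sigma_w, beta=0.56):
    """Flux (ug m^-2 s^-1) from mean up/down-draft concentrations (ug m^-3)
    and the standard deviation of vertical wind speed sigma_w (m s^-1)."""
    return beta * sigma_w * (c_up - c_down)

# Example: NH3 concentrations sampled by up- and down-draft denuders (made up).
c_up   = [520.0, 505.0, 515.0]    # ug m^-3, up-moving eddies
c_down = [360.0, 355.0, 370.0]    # ug m^-3, down-moving eddies
sigma_w = 0.45                    # m s^-1

print(rea_flux(statistics.mean(c_up), statistics.mean(c_down), sigma_w))
```
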
35

Využití přibližné ekvivalence při návrhu přibližných obvodů / Employing Approximate Equivalence for Design of Approximate Circuits

Matyáš, Jiří January 2017 (has links)
This thesis is concerned with the utilization of formal verification techniques in the design of functional approximations of combinational circuits. We thoroughly study existing formal approaches to approximate equivalence checking and their utilization in approximate circuit development. We present a new method that integrates formal techniques into Cartesian Genetic Programming. The key idea of our approach is to employ a new search strategy that drives the evolution towards promptly verifiable candidate solutions. The proposed method was implemented within the ABC synthesis tool. Various parameters of the search strategy were examined and the algorithm's performance was evaluated on functional approximations of multipliers and adders with operand widths up to 32 and 128 bits, respectively. The achieved results show the unprecedented scalability of our approach.
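
The search-strategy idea of rewarding candidates whose error bound can be proven quickly can be sketched on a toy scale as below (Python). A candidate here is just a truth table approximating an exact 2-bit adder, the "verifier" is an exhaustive check with a step budget standing in for a SAT/ABC resource limit, and the cost function is a crude stand-in for circuit area; none of this is the thesis implementation inside ABC.

```python
# Hedged sketch of verifiability-driven evolutionary approximation (toy scale).
import random

GOLDEN = [a + b for a in range(4) for b in range(4)]   # exact 2-bit adder outputs

def verified(candidate, bound, budget=16):
    """True only if the error bound is proven within the step budget."""
    for steps, (got, want) in enumerate(zip(candidate, GOLDEN), start=1):
        if steps > budget:
            return False                       # verifier ran out of resources
        if abs(got - want) > bound:
            return False                       # worst-case error bound violated
    return True

def cost(candidate):
    """Toy 'area': number of distinct output values (fewer = simpler logic)."""
    return len(set(candidate))

def evolve(bound=1, generations=5000, lam=4, seed=0):
    random.seed(seed)
    best = list(GOLDEN)                        # start from the exact circuit
    for _ in range(generations):
        for _ in range(lam):                   # (1 + lambda)-style offspring
            cand = list(best)
            cand[random.randrange(len(cand))] = random.randrange(7)  # point mutation
            if verified(cand, bound) and cost(cand) <= cost(best):
                best = cand                    # keep verified, cheaper candidates
    return best

approx = evolve()
print(approx, "max error:", max(abs(g - w) for g, w in zip(approx, GOLDEN)))
```
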
36

(Relaxed) Product Structures of Graphs and Hypergraphs

Ostermeier, Lydia 13 May 2015 (has links)
In this thesis, we investigate graphs and hypergraphs that have (relaxed) product structures. In the class of graphs, we discuss in detail RSP-relations, a relaxation of relations fulfilling the square property and therefore of the product relation $\sigma$, which identifies the copies of the prime factors of a graph w.r.t. the Cartesian product. For $K_{2,3}$-free graphs, finest RSP-relations can be computed in polynomial time. In general, however, they are not unique and their number may even grow exponentially. Explicit constructions of such relations in complete and complete bipartite graphs are given. Furthermore, we establish the close connection of (well-behaved) RSP-relations to (quasi-)covers of graphs and equitable partitions. Thereby, we characterize the existence of non-trivial RSP-relations by means of the existence of spanning subgraphs that yield quasi-covers of the graph under investigation. We show how equitable partitions on the vertex set of a graph $G$ arise in a natural way from well-behaved RSP-relations on $E(G)$. These partitions in turn give rise to quotient graphs that have rich product structure even if $G$ itself is prime. This product structure of the quotient graph is retained even for RSP-relations that are not well-behaved. Furthermore, we will see that a (finest) RSP-relation of a product graph can be obtained easily from (finest) RSP-relations on the prime factors w.r.t. certain products, and in what manner the quotient graphs of the product w.r.t. such an RSP-relation result from the quotient graphs of the factors and the respective product. In addition, we examine relations on the edge sets of hypergraphs that satisfy the grid property, the hypergraph analog of the square property. We introduce the strong and the relaxed grid property as variations of the grid property, the latter generalizing the relaxed square property. We thereby show that many, although not all, results for graphs and the (relaxed) square property can be transferred to hypergraphs. Similar to the graph case, any equivalence relation $R$ on the edge set of a hypergraph $H$ that satisfies the relaxed grid property induces a partition of the vertex set of $H$, which in turn determines quotient hypergraphs that have non-trivial product structures. Besides, we introduce the notion of (Cartesian) hypergraph bundles, the analog of (Cartesian) graph bundles, and point out the connection between the grid property and hypergraph bundles. Finally, we show that every connected thin hypergraph $H$ has a unique prime factorization with respect to the normal and strong (hypergraph) product. Both products coincide with the usual strong graph product whenever $H$ is a graph. We introduce the notion of the Cartesian skeleton of hypergraphs as a natural generalization of the Cartesian skeleton of graphs and prove that it is uniquely defined for thin hypergraphs. Moreover, we show that the Cartesian skeleton of thin hypergraphs and its prime factor decomposition (PFD) w.r.t. the strong and the normal product can be computed in polynomial time.
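
For orientation, the two notions the abstract builds on can be stated as follows; this is a sketch of the standard definitions from the product-graph literature (not text from the thesis), and the exact formulation used there may differ in detail.

```latex
% Cartesian product of graphs and the square property (standard definitions).
\[
  V(G_1 \Box G_2) = V(G_1) \times V(G_2), \qquad
  (g,h)(g',h') \in E(G_1 \Box G_2) \;\Longleftrightarrow\;
  \begin{cases}
    g = g' \ \text{and}\ hh' \in E(G_2), \ \text{or} \\
    h = h' \ \text{and}\ gg' \in E(G_1).
  \end{cases}
\]
A relation $R$ on $E(G)$ satisfies the \emph{square property} if any two
adjacent edges $e,f$ from distinct $R$-classes span exactly one square
(chordless $4$-cycle), and the edges opposite $e$ and $f$ in that square lie
in the same $R$-classes as $e$ and $f$, respectively. The product relation
$\sigma$ places two edges in the same class precisely when they belong to
copies of the same prime factor of $G$ with respect to $\Box$; RSP-relations
relax these requirements.
```
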
37

Robust, fault-tolerant majority based key-value data store supporting multiple data consistency

Khan, Tareq Jamal January 2011 (has links)
Web 2.0 has significantly transformed the way modern society works nowadays. In today's Web, information not only flows top down from web sites to readers, but also flows bottom up, contributed by a mass of users. Hugely popular Web 2.0 applications like wikis, social applications (e.g. Facebook, MySpace), media sharing applications (e.g. YouTube, Flickr), blogging and numerous others generate lots of user-generated content and make heavy use of the underlying storage. The data storage system is the heart of these applications, as all user activities are translated to read and write requests and directed to the database for further action. Hence the focus is on the storage that serves data to support the applications; its reliable and efficient design is instrumental for applications to perform in line with expectations. Large scale storage systems are used by popular social networking services like Facebook and MySpace, where millions of users' data are stored and fully accessible to these companies. However, from the users' point of view there has been justified concern about data ownership and the lack of control over personal data. For example, on more than one occasion Facebook has exercised its control over users' data without respecting users' rights to ownership of their own content, and has manipulated data for its own business interest without users' knowledge or consent. The thesis proposes, designs and implements a large scale, robust and fault-tolerant key-value data storage prototype that is peer-to-peer based and intends to back away from the client-server paradigm, with a view to relieving companies from data storage and management responsibilities and letting users control their own personal data. Several read and write APIs (similar to Yahoo!'s PNUTS but different in terms of underlying design and the environment they are targeted for) with various data consistency guarantees are provided, from which a wide range of web applications would be able to choose according to their data consistency, performance and availability requirements. An analytical comparison is also made against the PNUTS system, which targets a more stable environment. For evaluation, simulations were carried out to test the system's availability, scalability and fault tolerance in a dynamic environment. The results are then analyzed and the conclusion is drawn that the system is scalable, available and shows acceptable performance.
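
To illustrate the kind of majority-based read/write APIs with different consistency guarantees referred to above, here is a minimal sketch that uses an in-process list of replicas in place of real peers: a write returns once a majority of replicas accept it, a "strong" read consults a majority and returns the highest-versioned value, and a "weak" read may return a stale value from a single replica. The class and method names are hypothetical, not the prototype's API.

```python
# Hedged sketch of majority-quorum reads and writes over versioned replicas.
from dataclasses import dataclass, field

@dataclass
class Replica:
    store: dict = field(default_factory=dict)       # key -> (version, value)

class MajorityKV:
    def __init__(self, n=5):
        self.replicas = [Replica() for _ in range(n)]
        self.majority = n // 2 + 1

    def put(self, key, value):
        version = 1 + max(r.store.get(key, (0, None))[0] for r in self.replicas)
        acks = 0
        for r in self.replicas:                     # in reality: RPCs that may fail
            r.store[key] = (version, value)
            acks += 1
            if acks >= self.majority:
                return True                         # durable once a majority acks
        return False

    def get_strong(self, key):
        votes = [r.store.get(key, (0, None)) for r in self.replicas[:self.majority]]
        return max(votes, key=lambda v: v[0])[1]    # newest version among a majority

    def get_weak(self, key):
        return self.replicas[0].store.get(key, (0, None))[1]   # fast, possibly stale

kv = MajorityKV()
kv.put("profile:alice", "Stockholm")
print(kv.get_strong("profile:alice"), kv.get_weak("profile:alice"))
```

In a real peer-to-peer deployment the replicas not covered by the write quorum would be brought up to date asynchronously, which is the usual price paid for the availability/consistency trade-off such APIs expose.
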
38

Cost-Sensitive Learning-based Methods for Imbalanced Classification Problems with Applications

Razzaghi, Talayeh 01 January 2014 (has links)
Analysis and predictive modeling of massive datasets is an extremely significant problem that arises in many practical applications. The task of predictive modeling becomes even more challenging when data are imperfect or uncertain. The real data are frequently affected by outliers, uncertain labels, and uneven distribution of classes (imbalanced data). Such uncertainties create bias and make predictive modeling an even more difficult task. In the present work, we introduce a cost-sensitive learning method (CSL) to deal with the classification of imperfect data. Typically, most traditional approaches for classification demonstrate poor performance in an environment with imperfect data. We propose the use of CSL with Support Vector Machine, which is a well-known data mining algorithm. The results reveal that the proposed algorithm produces more accurate classifiers and is more robust with respect to imperfect data. Furthermore, we explore the best performance measures to tackle imperfect data along with addressing real problems in quality control and business analytics.
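
As a minimal sketch of cost-sensitive learning with a Support Vector Machine on imbalanced data, the snippet below compares an unweighted SVM with one whose per-class penalties are reweighted inversely to class frequency. The synthetic data and the "balanced" weighting scheme are illustrative, not the method or datasets of the thesis.

```python
# Hedged sketch: class-weighted (cost-sensitive) SVM vs. a plain SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(1)
X_maj = rng.normal(loc=0.0, size=(950, 2))          # majority class
X_min = rng.normal(loc=2.0, size=(50, 2))           # rare class, shifted mean
X = np.vstack([X_maj, X_min])
y = np.array([0] * 950 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for weights in (None, "balanced"):                   # plain vs. cost-sensitive
    clf = SVC(kernel="rbf", class_weight=weights).fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"class_weight={weights}: balanced accuracy = {score:.2f}")
```
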
39

Flux Measurements of Volatile Organic Compounds from an Urban Tower Platform

Park, Chang Hyoun 2010 May 1900 (has links)
A tall tower flux measurement setup was established in metropolitan Houston, Texas, to measure trace gas fluxes from both anthropogenic and biogenic emission sources in the urban surface layer. We describe a new relaxed eddy accumulation system combined with dual-channel gas chromatography with flame ionization detection, used for volatile organic compound (VOC) flux measurements in the urban area, focusing on the results for selected anthropogenic VOCs, including benzene, toluene, ethylbenzene and xylenes (BTEX), and biogenic VOCs including isoprene and its oxidation products, methacrolein (MACR) and methyl vinyl ketone (MVK). We present diurnal variations of concentrations and fluxes of BTEX, and of isoprene and its oxidation products, during summer time (May 22 - July 22, 2008) and winter time (January 1 - February 28). The measured BTEX values exhibited diurnal cycles with a morning peak during weekdays related to rush-hour traffic and additional workday daytime flux maxima for toluene and xylenes in summer time. However, in winter time there were no additional workday daytime peaks, due mainly to the different flux footprints between the two seasons. A comparison of different EPA National Emission Inventories (NEI) with our summer time flux data suggests potential underestimates in the NEI by a factor of 3 to 5. The mixing ratios and fluxes of isoprene, MACR and MVK were measured during the same time period in summer 2008. The presented results show that isoprene was affected by both tail-pipe emission sources during the morning rush hours and biogenic emission sources in daytime. The observed daytime mixing ratios of isoprene were much lower than over forested areas, caused by a comparatively low density of isoprene emitters in the tower's footprint area. The average daytime isoprene flux agreed well with emission rates predicted by a temperature- and light-only emission model (Guenther et al., 1993). Our investigation of isoprene's oxidation products MACR and MVK showed that both anthropogenic and biogenic emission sources exist for MACR, while MVK was strongly dominated by a biogenic source, likely the isoprene oxidation between the emission and sampling points.
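
The temperature- and light-only model cited above (Guenther et al., 1993) scales a standard-condition emission capacity E_s by a light factor C_L and a temperature factor C_T, i.e. E = E_s·C_L·C_T. The sketch below uses the commonly cited G93 constants, which should be checked against the original paper; the example inputs are illustrative, not measurements from the thesis.

```python
# Hedged sketch of the G93 isoprene emission algorithm: E = E_s * C_L * C_T.
import math

R = 8.314                              # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066            # light-response constants
C_T1, C_T2 = 95_000.0, 230_000.0       # J mol-1, temperature-response constants
T_S, T_M = 303.0, 314.0                # K, standard and "optimum" temperatures

def isoprene_emission(E_s, par, temp_k):
    """Emission rate scaled from the capacity E_s at 303 K and PAR = 1000 umol m-2 s-1."""
    c_l = ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)
    c_t = (math.exp(C_T1 * (temp_k - T_S) / (R * T_S * temp_k))
           / (1.0 + math.exp(C_T2 * (temp_k - T_M) / (R * T_S * temp_k))))
    return E_s * c_l * c_t

# Example: warm, sunny midday conditions (values illustrative only).
print(isoprene_emission(E_s=1.0, par=1500.0, temp_k=305.0))
```
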
40

Optimization of memory management on distributed machine

Ha, Viet Hai 05 October 2012 (has links) (PDF)
In order to further explore the capabilities of parallel computing architectures such as grids, clusters, multi-processors and, more recently, clouds and multi-cores, an easy-to-use parallel language is an important and challenging issue. From the programmer's point of view, OpenMP is very easy to use, with its ability to support incremental parallelization and its features for dynamically setting the number of threads and scheduling strategies. However, as it was initially designed for shared memory systems, OpenMP is usually limited on distributed memory systems to intra-node computations. Many attempts have been made to port OpenMP to distributed systems. The most prominent approaches mainly focus on exploiting the capabilities of a special network architecture and therefore cannot provide an open solution. Others are based on an already available software solution such as DMS, MPI or Global Array and, as a consequence, they meet difficulties in becoming a fully compliant and high-performance implementation of OpenMP. As yet another attempt to build an OpenMP-compliant implementation for distributed memory systems, CAPE, which stands for Checkpointing Aided Parallel Execution, has been developed with the following idea: when reaching a parallel section, the master thread is dumped and its image is sent to the slaves; then, each slave executes a different thread; at the end of the parallel section, slave threads extract and return to the master thread the list of all modifications that have been performed locally; the master includes these modifications and resumes its execution. In order to prove the feasibility of this paradigm, the first version of CAPE was implemented using complete checkpoints. However, preliminary analysis showed that the large amount of data transferred between threads and the extraction of the list of modifications from complete checkpoints lead to weak performance. Furthermore, this version was restricted to parallel problems satisfying Bernstein's conditions, i.e. it did not address the requirements of shared data. This thesis presents the approaches we proposed to improve CAPE's performance and to overcome the restrictions on shared data. First, we developed DICKPT, which stands for Discontinuous Incremental Checkpointing, an incremental checkpointing technique that supports the ability to save incremental checkpoints discontinuously during the execution of a process. Based on DICKPT, the execution speed of the new version of CAPE was significantly increased. For example, the time to compute a large matrix-matrix product on a desktop cluster became very similar to the execution time of the same optimized MPI program. Moreover, the speedup associated with this new version for various numbers of threads is quite linear for different problem sizes. On the side of shared data, we proposed UHLRC, which stands for Updated Home-based Lazy Release Consistency, a modified version of the Home-based Lazy Release Consistency (HLRC) memory model, to make it more appropriate to the characteristics of CAPE. Prototypes and algorithms to implement the synchronization and OpenMP data-sharing clauses and directives are also specified. These two contributions ensure the ability of CAPE to respect shared-data behavior.
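
As a schematic analogy of the execution model just described, the sketch below mimics CAPE with Python dictionaries standing in for process images: the master state is snapshotted, each "slave" runs its share of the iterations on a copy, only the locally made modifications (a diff, echoing the incremental-checkpoint idea) are sent back, and the master merges them before resuming. The real CAPE operates on checkpoints of compiled OpenMP programs; all names here are illustrative.

```python
# Hedged, toy analogy of the CAPE parallel-section protocol using dict "images".
import copy

def diff(before, after):
    """Modifications made locally by a slave (incremental-checkpoint analogy)."""
    return {k: v for k, v in after.items() if before.get(k) != v}

def run_parallel_section(master_state, work_items, n_slaves, body):
    snapshot = copy.deepcopy(master_state)        # "dump" the master image
    updates = {}
    for rank in range(n_slaves):                  # each "slave" takes a chunk
        local = copy.deepcopy(snapshot)           # slave starts from the image
        for item in work_items[rank::n_slaves]:
            body(local, item)
        updates.update(diff(snapshot, local))     # ship back only the changes
    master_state.update(updates)                  # master merges and resumes
    return master_state

# Toy parallel loop: result[i] = i * i, split across 4 "slaves".
def body(local, i):
    local[i] = i * i                              # each iteration writes one entry

state = {}
run_parallel_section(state, work_items=list(range(8)), n_slaves=4, body=body)
print(state)                                      # all eight squares, merged
```
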
