371

RNA 3D Structure Analysis and Validation, and Design Algorithms for Proteins and RNA

Jain, Swati January 2015 (has links)
RNA, or ribonucleic acid, is one of the three types of biological macromolecule essential for all known life forms, and it is a critical part of a variety of cellular processes. The well-known functions of RNA molecules include carrying genetic information in the form of mRNAs and assisting in the translation of that information into protein molecules as tRNAs and rRNAs. In recent years, many other kinds of non-coding RNAs have been found, such as miRNAs and siRNAs, that are important for gene regulation. Some RNA molecules, called ribozymes, are also known to catalyze biochemical reactions. The functions carried out by these recently discovered RNAs, coupled with the traditionally known functions of tRNAs, mRNAs, and rRNAs, make RNA molecules even more crucial and essential components in biology.

Most of the functions mentioned above are carried out by RNA molecules associating with proteins to form ribonucleoprotein (RNP) complexes, e.g. the ribosome or the spliceosome. RNA molecules also bind a variety of small molecules, such as metabolites, and their binding can turn gene expression on or off. These RNP complexes and small-molecule-binding RNAs are increasingly being recognized as potential therapeutic targets for drug design. The technique of computational structure-based rational design has been used successfully to design drugs and inhibitors of protein function, but its potential has not been tapped for the design of RNA or RNP complexes. For computational structure-based design to succeed, it is important both to understand the features of RNA three-dimensional structure and to develop new and improved algorithms for protein and RNA design.

This document details my thesis work, which covers both of the areas mentioned above. The first part of my thesis work characterizes and analyzes RNA three-dimensional structure in order to develop new methods for RNA validation and refinement, and new tools for correcting modeling errors in already solved RNA structures. I collaborated to assemble non-redundant and quality-conscious datasets of RNA crystal structures (RNA09 and RNA11), and I analyzed the range of values occupied by the RNA backbone and base dihedral angles to improve methods for RNA structure correction, validation, and refinement in MolProbity and PHENIX. I rebuilt and corrected the pre-cleaved structure of the HDV ribozyme and parts of the 50S ribosomal subunit to demonstrate the potential of new tools and techniques to improve RNA structures and help crystallographers make correct biological interpretations. I also extended the RNA Ontology Consortium's (ROC) previous work on characterizing RNA backbone conformers, defining new conformers using data from the larger RNA11 dataset, supplemented by ERRASER runs that optimize data points to add new conformers or improve cluster separation.

The second part of my thesis work develops novel algorithms for structure-based protein redesign when interactions between distant residue pairs are neglected and the design problem is represented by a sparse residue interaction graph. I analyzed the sequence and energy differences caused by using sparse residue interaction graphs (using the protein redesign package OSPREY), and I proposed a novel use of ensemble-based provable design algorithms to mitigate the effects caused by sparse residue interaction graphs. I collaborated to develop a novel branch-decomposition-based dynamic programming algorithm, called BWM*, that returns the Global Minimum Energy Conformation (GMEC) for sparse residue interaction graphs much faster than the traditional A* search algorithm. As the final step, I used the results of my analysis of the RNA base dihedral angles to implement RNA design and RNA structural flexibility in OSPREY. My work enables OSPREY not only to design RNA, but also to design both the RNA and the protein chains of an RNA-protein interface simultaneously. / Dissertation
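Since backbone and base dihedral angles are the central quantities in the structural analysis above, here is a minimal sketch (assuming NumPy; illustrative only, not the MolProbity/PHENIX or OSPREY code) of how one such angle is computed from four consecutive atom positions:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle in degrees defined by four 3D points, e.g. four
    consecutive backbone atoms along an RNA chain."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    n1 = np.cross(b0, b1)                      # normal to the plane (p0, p1, p2)
    n2 = np.cross(b1, b2)                      # normal to the plane (p1, p2, p3)
    m1 = np.cross(n1, b1 / np.linalg.norm(b1))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Hypothetical coordinates for four atoms (angstroms); prints 90.0
print(dihedral(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
               np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])))
```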
372

Distortion Cancellation in Time Interleaved ADCs

Sambasivan Mruthyunjaya, Naga Thejus January 2015 (has links)
Time-Interleaved Analog-to-Digital Converters (TI ADCs) consist of several individual sub-converters operating in parallel at a lower sampling rate, in a circular (round-robin) fashion. They thereby increase the overall sampling rate without compromising resolution, which is the main requirement in radio-frequency sampling. However, they suffer from mismatches caused by the differing characteristics of the sub-converters and by the time-interleaved structure itself. The output of the TI ADC under consideration contains many harmonics and spurious tones due to the nonlinearity mismatch between the sub-converters. Previously, extensive frequency planning was therefore performed to keep the input signal from coinciding with these harmonic bins. In recent years, more importance has been given to digital calibration, where algorithms are developed and implemented outside the ADC in a digital signal processor (DSP) and the compensation is done in real time. In this work, we model the distortions and harmonics present in the TI ADC output to obtain a clear understanding of the TI ADC. A post-correction block is developed to cancel the characterized harmonics. The suggested method is tested on TI ADCs working at radio frequencies, but it is also valid for other types of ADCs, such as pipeline and sigma-delta ADCs.
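To illustrate the kind of mismatch-induced spurs discussed above, here is a toy simulation (NumPy; the gain and offset values are hypothetical, and this is not the thesis model or any measured converter) of a four-way time-interleaved ADC sampling a sine wave:

```python
import numpy as np

M, N = 4, 4096                       # sub-converters, total samples
fs, fin = 1.0, 0.137                 # normalized sampling and input frequency
n = np.arange(N)
x = np.sin(2 * np.pi * fin / fs * n)

gains   = 1.0 + np.array([0.00, 0.01, -0.005, 0.008])   # hypothetical gain mismatch
offsets = np.array([0.0, 0.002, -0.001, 0.0015])        # hypothetical offset mismatch
y = gains[n % M] * x + offsets[n % M]                   # interleaved output

# Mismatch shows up as spurious tones at k*fs/M +/- fin in the spectrum
spectrum = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(N))) + 1e-12)
print("largest spectral bins:", np.argsort(spectrum)[-6:])
```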
373

738 years of global climate model simulated streamflow in the Nelson-Churchill River Basin

Vieira, Michael John Fernandes 02 February 2016 (has links)
Uncertainty surrounds the understanding of natural variability in hydrologic extremes such as droughts and floods, and of how these events are projected to change in the future. This thesis leverages Global Climate Model (GCM) data to analyse 738-year streamflow scenarios in the Nelson-Churchill River Basin. The streamflow scenarios include a 500-year stationary period and future projections under two forcing scenarios. Fifty-three GCM simulations are evaluated for their performance in reproducing observed runoff characteristics. Runoff from a subset of nine simulations is routed to generate naturalized streamflow scenarios. Quantile mapping is then applied to reduce volume bias while maintaining each GCM's sequencing of events. Results show evidence of future increases in mean annual streamflow, and evidence that mean monthly streamflow variability has decreased relative to stationary conditions and is projected to decrease further in the future. There is less evidence of systematic change in droughts and floods. / May 2016
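As a rough illustration of the bias-correction step described above, here is a minimal empirical quantile-mapping sketch (NumPy; a generic version of the technique, not the thesis implementation):

```python
import numpy as np

def quantile_map(sim, obs, sim_future):
    """Empirical quantile mapping: adjust simulated flows so their
    distribution matches observations, while keeping the simulation's
    sequencing of events. Simplified sketch only."""
    quantiles = np.linspace(0.01, 0.99, 99)
    sim_q = np.quantile(sim, quantiles)   # quantiles of the historical simulation
    obs_q = np.quantile(obs, quantiles)   # corresponding observed quantiles
    # Rank each future value against the historical simulation, then map it
    # to the corresponding observed quantile.
    return np.interp(sim_future, sim_q, obs_q)
```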
374

Single manager hedge funds - aspects of classification and diversification

Bohlandt, Florian Martin 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2013. / A persistent problem for hedge fund researchers presents itself in the form of inconsistent and diverse style classifications within and across database providers. For this paper, single-manager hedge funds from the Hedge Fund Research (HFR) and Hedgefund.Net (HFN) databases were classified on the basis of a common factor, extracted using the factor axis methodology. It was assumed that the returns of all sample hedge funds are attributable to a common factor that is shared across hedge funds within one classification, and a specific factor that is unique to a particular hedge fund. In contrast to earlier research and the application of principal component analysis, factor axis has sought to determine how much of the covariance in the dataset is due to common factors (communality). Factor axis largely ignores the diagonal elements of the covariance matrix and orthogonal factor rotation maximises the covariance between hedge fund return series. In an iterative framework, common factors were extracted until all return series were described by one common and one specific factor. Prior to factor extraction, the series was tested for autoregressive moving-average processes and the residuals of such models were used in further analysis to improve upon squared correlations as initial factor estimates. The methodology was applied to 120 ten-year rolling estimation windows in the July 1990 to June 2010 timeframe. The results indicate that the number of distinct style classifications is reduced in comparison to the arbitrary self-selected classifications of the databases. Single manager hedge funds were grouped in portfolios on the basis of the common factor they share. In contrast to other classification methodologies, these common factor portfolios (CFPs) assume that some unspecified individual component of the hedge fund constituents’ returns is diversified away and that single manager hedge funds should be classified according to their common return components. From the CFPs of single manager hedge funds, pure style indices were created to be entered in a multivariate autoregressive framework. For each style index, a Vector Error Correction model (VECM) was estimated to determine the short-term as well as co-integrating relationship of the hedge fund series with the index level series of a stock, bond and commodity proxy. It was postulated that a) in a well-diversified portfolio, the current level of the hedge fund index is independent of the lagged observations from the other asset indices; and b) if the assumptions of the Efficient Market Hypothesis (EMH) hold, it is expected that the predictive power of the model will be low. The analysis was conducted for the July 2000 - June 2010 period. Impulse response tests and variance decomposition revealed that changes in hedge fund index levels are partially induced by changes in the stock, bond and currency markets. Investors are therefore cautioned not to overemphasise the diversification benefits of hedge fund investments. Commodity trading advisors (CTAs) / managed futures, on the other hand, deliver diversification benefits when integrated with an existing portfolio. The results indicated that single manager hedge funds can be reliably classified using the principal factor axis methodology. Continuously re-balanced pure style index representations of these classifications could be used in further analysis. 
Extensive multivariate analysis revealed that CTAs and macro hedge funds offer superior diversification benefits in the context of existing portfolios. The empirical results are of interest not only to academic researchers, but also practitioners seeking to replicate the methodologies presented.
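For readers unfamiliar with the extraction step, here is a simplified sketch of iterated principal factor axis extraction on a matrix of fund return series (NumPy; a generic illustration under standard assumptions, not the paper's exact procedure):

```python
import numpy as np

def principal_axis_factor(returns, n_factors=1, n_iter=50):
    """Iterated principal factor axis extraction.
    returns: array with one column per hedge fund return series."""
    R = np.corrcoef(returns, rowvar=False)
    # Initial communalities: squared multiple correlations (assumes R invertible)
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)          # replace unities with communalities
        vals, vecs = np.linalg.eigh(R_reduced)
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        h2 = np.sum(loadings ** 2, axis=1)       # updated communalities
    return loadings, h2
```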
375

Turbo Equalization for OFDM over the Doubly-Spread Channel using Nonlinear Programming

Iltis, Ronald A. 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / OFDM has become the preferred modulation format for a wide range of wireless networks including 802.11g, 802.16e (WiMAX) and 4G LTE. For multipath channels which are time-invariant during an OFDM symbol duration, near-optimal demodulation is achieved using the FFT followed by scalar equalization. However, demodulating OFDM on the doubly-spread channel remains a challenging problem, as time-variations within a symbol generate intercarrier interference. Furthermore, demodulation and channel estimation must be effectively combined with decoding of the LDPC code in the 4G-type system considered here. This paper presents a new Turbo Equalization (TEQ) decoder, detector and channel estimator for OFDM on the doubly-spread channel based on nonlinear programming. We combine the Penalty Gradient Projection TEQ with a MMSE-type channel estimator (PGP-TEQ) that is shown to yield a convergent algorithm. Simulation results are presented comparing conventional MMSE TEQ using the Sum Product Algorithm (MMSE-SPA-TEQ) with the new PGP-TEQ for doubly-spread channels.
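For context, the baseline that the paper builds on (FFT demodulation followed by per-subcarrier scalar equalization over a time-invariant channel) can be sketched in a few lines. This uses NumPy, BPSK, and a hypothetical three-tap channel, and is purely illustrative rather than the PGP-TEQ itself:

```python
import numpy as np

N, cp = 64, 16                              # subcarriers, cyclic prefix length
bits = np.random.randint(0, 2, N)
X = 2 * bits - 1.0                          # BPSK symbols on each subcarrier
x = np.fft.ifft(X)                          # OFDM modulation
tx = np.concatenate([x[-cp:], x])           # add cyclic prefix

h = np.array([1.0, 0.4, 0.2])               # hypothetical static multipath channel
rx = np.convolve(tx, h)[:cp + N]

y = np.fft.fft(rx[cp:cp + N])               # remove CP, demodulate with FFT
H = np.fft.fft(h, N)                        # channel frequency response
X_hat = y / H                               # scalar (zero-forcing) equalizer
print("bit errors:", np.sum((X_hat.real > 0) != bits))
```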
376

Fast and accurate lithography simulation and optical proximity correction for nanometer design for manufacturing

Yu, Peng 23 October 2009 (has links)
As semiconductor manufacturing feature sizes scale into the nanometer dimension, circuit layout printability is significantly reduced due to the fundamental limits of lithography systems. This dissertation studies related research topics in lithography simulation and optical proximity correction. A recursive integration method is used to reduce the errors in the transmission cross coefficient (TCC), an important factor in the Hopkins equation for aerial image simulation. The runtime is further reduced, without increasing the errors, by exploiting the fact that the TCC is usually computed on uniform grids. A flexible software framework, ELIAS, is also provided, which can be used to compute the TCC for various lithography settings, such as different illuminations. Optimal coherent approximations (OCAs), which are used for full-chip image simulation, can be sped up by considering the symmetry properties of lithography systems. The runtime improvement can be doubled without loss of accuracy, and it is applicable to vectorial imaging models as well. Even in cases where the symmetry properties do not hold strictly, the new method can be generalized such that it is still faster than the old method. Besides new numerical image simulation algorithms, variations in lithography systems are also modeled. A Variational Lithography Model (VLIM) and its calibration method are provided. The Variational Edge Placement Error (V-EPE) metric, an improvement on the original Edge Placement Error (EPE) metric, is introduced based on the model. A true process-variation-aware OPC (PV-OPC) framework is proposed using the V-EPE metric. Due to the analytical nature of VLIM, our PV-OPC is only about 2-3× slower than conventional OPC, but it explicitly considers the two main sources of process variation (exposure dose and focus variations) during OPC. The EPE metric has been used in conventional OPC algorithms, but it requires many intensity simulations, which take the majority of the OPC runtime. By making the OPC algorithm intensity based (IB-OPC) rather than EPE based, we can reduce the number of intensity simulations and hence the OPC runtime. An efficient intensity derivative computation method is also provided, which makes the new algorithm converge faster than the EPE-based algorithm. Our experimental results show a runtime speedup of more than 10× with comparable result quality relative to EPE-based OPC. The above-mentioned OPC algorithms are vector based; other categories of OPC algorithms are pixel based. Vector-based algorithms in general generate less complex masks than pixel-based ones, but pixel-based algorithms produce much better results in terms of contour fidelity. Observing that vector-based algorithms preserve mask shape topologies, which leads to lower mask complexity, we combine the strengths of both categories: the topology-invariant property and the pixel-based mask representation. A topological invariant pixel-based OPC (TIP-OPC) algorithm is proposed, with lithography-friendly mask topological invariant operations and an efficient Fast Fourier Transform (FFT) based cost function sensitivity computation. The experimental results show that TIP-OPC can achieve much better post-OPC contours than vector-based OPC while maintaining the mask shape topologies. / text
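To make the sum-of-coherent-systems idea behind the OCA concrete, here is a minimal sketch (NumPy FFTs; the kernels and weights are assumed inputs from an SOCS decomposition, and this is not the ELIAS code):

```python
import numpy as np

def aerial_image(mask, kernels, weights):
    """Aerial image via a sum-of-coherent-systems (optimal coherent
    approximation) form of the Hopkins model: I = sum_k w_k |mask * kernel_k|^2,
    where * denotes 2D convolution. Conceptual sketch only."""
    image = np.zeros(mask.shape)
    M = np.fft.fft2(mask)
    for w, ker in zip(weights, kernels):
        K = np.fft.fft2(ker, s=mask.shape)
        field = np.fft.ifft2(M * K)          # coherent field for this kernel
        image += w * np.abs(field) ** 2      # incoherent sum over kernels
    return image
```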
377

THE LABOR MARKET, POLITICAL CAPITAL, AND OWNERSHIP SECTOR IN URBAN CHINA

Pan, Xi 01 January 2010 (has links)
Over the past three decades, economic reforms have brought about dramatic changes in China. The wave of structural and economic reforms in the State-owned Sector (SOS), and the surge of the Non-State-owned Sector (NSOS), have influenced returns in the labor market, such as the returns to human capital and political capital in urban China. Presumably, the NSOS is more market-oriented than the SOS and offers different returns to political capital, as represented by Chinese Communist Party (CCP) membership, because the NSOS would not value Party membership as much as the SOS does. How Party membership is rewarded in the two sectors may also change as the two ownership sectors develop and more time passes since the start of the economic reforms. I examine whether CCP members display any earnings advantage in these two sectors, and I also explore how such an advantage might have changed over time. Unlike most previous studies of earnings in urban China, I treat Party membership affiliation and ownership sector selection as endogenous. I apply the Mlogit-OLS two-stage selection correction estimation proposed by Lee (1983) and find evidence suggesting that Party membership serves as a proxy for both political and productive skills. A flat Party premium in the SOS and a decreasing Party premium in the NSOS suggest that the Party card served a similar function in the SOS payment scheme over this three-year span, whereas the NSOS valued political capital by a decreasing amount over time. The evidence presented in my dissertation indicates that economic reforms tend to mitigate the earnings advantage of Party members that arises from unequal treatment based on Party membership. It suggests that CCP membership is losing its earning power, at least in the NSOS, and that CCP members are sacrificing benefits they previously possessed as they adapt to the transformed economic environment in urban China. However, the returns to other forms of human capital have increased over time.
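A schematic of the Lee (1983) Mlogit-OLS two-stage estimator mentioned above, with illustrative variable names (statsmodels and SciPy assumed; a generic sketch, not the dissertation's actual specification or data):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def lee_two_stage(Z, sector, X, wage, target_sector):
    """Mlogit-OLS selection correction in the spirit of Lee (1983).
    Z: covariates driving sector choice; sector: integer-coded sector choice;
    X: wage-equation covariates; target_sector: sector being estimated."""
    # Stage 1: multinomial logit of sector choice
    mlogit = sm.MNLogit(sector, sm.add_constant(Z)).fit(disp=False)
    probs = np.asarray(mlogit.predict(sm.add_constant(Z)))
    p = probs[:, target_sector]                    # predicted P(chosen sector)
    lam = norm.pdf(norm.ppf(p)) / p                # Lee's selection-correction term
    # Stage 2: OLS wage equation on the selected subsample, augmented with lambda
    mask = sector == target_sector
    X2 = sm.add_constant(np.column_stack([X[mask], lam[mask]]))
    return sm.OLS(wage[mask], X2).fit()
```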
378

Slice ribbon conjecture, pretzel knots and mutation

Long, Ligang 06 November 2014 (has links)
In this paper we explore the slice-ribbon conjecture for some families of pretzel knots. Donaldson's diagonalization theorem provides a powerful obstruction to sliceness via the union of the double branched cover W of B⁴ over a slicing disk and a plumbing manifold P(Γ). Donaldson's theorem classifies all slice 4-strand pretzel knots up to mutation. The correction term is another 3-manifold invariant, defined by Ozsváth and Szabó. For a slice knot K, the number of vanishing correction terms of Y_K is at least the square root of the order of H₁(Y_K; Z). Donaldson's theorem and the correction term argument together give a strong condition for 5-strand pretzel knots to be slice. However, neither Donaldson's theorem nor the correction terms can distinguish 4-strand and 5-strand slice pretzel knots from their mutants. A version of the twisted Alexander polynomial proposed by Paul Kirk and Charles Livingston provides a feasible way to distinguish those 5-strand slice pretzel knots from their mutants; however, the twisted Alexander polynomial fails on 4-strand slice pretzel knots. / text
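In symbols, the correction-term obstruction invoked above reads roughly as follows (my paraphrase of the standard statement, with Y_K the double branched cover and d the Ozsváth-Szabó correction term):

```latex
K \text{ slice} \;\Longrightarrow\;
\#\bigl\{\mathfrak{s}\in \mathrm{Spin}^c(Y_K) \;:\; d(Y_K,\mathfrak{s})=0\bigr\}
\;\ge\; \sqrt{\,\bigl|H_1(Y_K;\mathbb{Z})\bigr|\,}.
```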
379

Characterization and quantification of surfaces by stereo-correlation for mechanical tests from quasi-static to ultra-high-speed dynamics

Besnard, Gilles 24 March 2010 (has links) (PDF)
For a number of years, optical methods have been gaining ground in the field of measurement. In particular, the stereo-correlation (stereo digital image correlation) technique is often used because it is easy to implement, non-intrusive, and offers good spatial resolution coupled with temporal tracking. In this thesis, we apply the stereo-correlation technique at loading rates ranging from quasi-static to ultra-high-speed dynamics, the latter in the context of detonics experiments. This broad range of experiments requires us to develop specific tools in order to obtain quantified results. We therefore devised methods for correcting optical distortions and calibration errors. We also developed a new matching technique capable of handling images of small size. To improve the matching, methods for correcting large displacements, based either on the image itself or on a predictive hydrodynamic computation, were developed. To validate the overall approach, we apply it to various tests: quasi-static compression of a composite material, slow tension on a tantalum plate, and fast tension on aluminium specimens. Finally, we turn to our main ultra-high-speed applications: cylinder expansion and cylinder lift. For the latter, particular care is taken in constructing a synthetic test case representative of our experiments, in order to characterize the performance of our approach under severe experimental conditions. In all cases, we give the reconstruction uncertainties so as to best quantify the surface roughness and, consequently, the necking phenomenon.
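At the core of the stereo-correlation matching described above is a patch-similarity score; here is a minimal zero-mean normalized cross-correlation sketch (NumPy; illustrative only, not the thesis pipeline):

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two image patches,
    the similarity measure underlying (stereo-)image correlation."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```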
380

Oral Feedback : Students' Reactions and Opinions

Hulterström, Terése January 2006 (has links)
In Sweden we come into contact with the English language almost daily: in television shows, radio commercials, and at work. English is also mandatory in the Swedish curriculum; it is therefore important that students learn as much as possible in school so that they can use English in their daily lives. Teachers use different methods to help students acquire the tools needed to learn English, or any other subject for that matter. One method is oral feedback, which is used to immediately encourage students or correct them when they make an error. My aim in this study is therefore to investigate whether students find oral feedback in the classroom valuable and, if not, how they would like it to be changed. To investigate this I handed out a questionnaire to five classes. The questions were divided into three categories: whether the students had noticed oral feedback being given to them, what their experiences of oral feedback were, and how they would like the feedback to be delivered. I also made observations and recorded three classes. The results of this investigation showed that the students were positive about oral feedback in the classroom. Most of the students had noticed oral feedback being given to them, and the teachers had mostly corrected the students' grammar and pronunciation. These were also the areas where the students felt they had developed the most from oral feedback. In the questionnaire the students pointed out that they wanted the feedback to be delivered privately and that teachers have to be careful about how they give feedback; they should always remember to give positive feedback as well as corrective feedback.
