  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Ultrawideband Time Domain Radar for Time Reversal Applications

Lopez-Castellanos, Victor 31 March 2011 (has links)
No description available.
252

Caractérisation des aérosols par inversion des données combinées des photomètres et lidars au sol / Aerosol characterization by inversion of combined data from ground-based sunphotometers and lidars

Nassif Moussa Daou, David January 2012 (has links)
Aerosols are small, micrometer-sized particles whose optical effects, coupled with their impact on cloud properties, are a source of large uncertainty in climate models. While their radiative forcing is largely of a cooling nature, the degree of their impact can vary significantly depending on the size and nature of the aerosols. The radiative and optical impact of aerosols depends first and foremost on their concentration or number density (an extensive parameter) and secondly on the size and nature of the aerosols (intensive, per-particle parameters). We employed passive (sunphotometry) and active (backscatter lidar) measurements to retrieve extensive optical signals (aerosol optical depth, or AOD, and backscatter coefficient, respectively) and semi-intensive optical signals (fine- and coarse-mode OD and fine- and coarse-mode backscatter coefficient, respectively) and compared the optical coherency of these retrievals over a variety of aerosol and thin-cloud events (dominated by pollution, dust, volcanic ash, smoke, or thin cloud). The retrievals were performed using an existing spectral deconvolution method applied to the sunphotometry data (SDA) and a new retrieval technique for the lidar based on colour-ratio thresholding. The lidar retrieval was validated by comparing the vertical integrals of the fine-mode, coarse-mode and total backscatter coefficients of the lidar with their sunphotometry analogues, where the lidar ratios (the intensive parameter required to transform backscatter coefficients into extinction coefficients) were (a) computed independently using the SDA retrievals for fine-mode aerosols, or prescribed for coarse-mode aerosols and clouds, or (b) computed by forcing the computed (fine, coarse and total) lidar ODs to equal their analogous sunphotometry ODs.
Comparisons between cases (a) and (b), as well as semi-qualitative verification of the derived fine- and coarse-mode vertical profiles against the expected backscatter behaviour of fine- and coarse-mode aerosols, yielded satisfactory agreement (notably, the fine, coarse and total OD errors were less than or comparable to the sunphotometry instrument errors). Comparisons between cases (a) and (b) also showed a degree of optical coherency between the fine-mode lidar ratios.
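The lidar-ratio transformation at the heart of this validation (extinction = lidar ratio × backscatter, OD = vertical integral of extinction) can be sketched as follows. The exponential profile, the grid spacing and the fine-mode lidar ratio of 60 sr are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def optical_depth(backscatter, lidar_ratio, dz):
    """Optical depth from a profile of backscatter coefficients
    [1/(m*sr)]: extinction = lidar_ratio * backscatter, then a vertical
    integral (here a simple Riemann sum with grid spacing dz [m])."""
    return float(lidar_ratio * np.sum(np.asarray(backscatter)) * dz)

def forced_lidar_ratio(backscatter, dz, tau_target):
    """Case (b) above: the single lidar ratio that forces the lidar OD
    to equal a target (e.g. sunphotometer) OD."""
    return tau_target / (np.sum(np.asarray(backscatter)) * dz)

# Illustrative exponential backscatter profile on a 10 m grid up to 3 km,
# with an assumed, typical fine-mode lidar ratio of 60 sr.
z = np.arange(0.0, 3000.0, 10.0)
beta = 2e-6 * np.exp(-z / 1000.0)
tau = optical_depth(beta, 60.0, 10.0)
S_forced = forced_lidar_ratio(beta, 10.0, tau)
```

By construction, feeding the case (a) optical depth back into the case (b) forcing recovers the same lidar ratio; with real data the two differ, and that difference is what the coherency comparison quantifies.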
253

Predicting the sound field from aeroacoustic sources on moving vehicles: Towards an improved urban environment

Pignier, Nicolas January 2017 (has links)
In a society where environmental noise is becoming a major health and economic concern, sound emissions are an increasingly critical design factor for vehicle manufacturers. With about a quarter of the European population living close to roads with heavy traffic, traffic noise in urban landscapes has to be addressed first. The current introduction of electric vehicles on the market, and the need for sound systems to alert others to their presence, is causing a shift in mentalities, requiring engineering methods that treat noise-management problems from a broader perspective, one in which noise emissions are considered not merely a by-product of the design but an integrated part of it. Developing more sustainable ground transportation will require a better understanding of the sound field emitted in various realistic operating conditions, beyond the current requirements set by the standard pass-by test, which is performed in free-field conditions. A key aspect of improving this understanding is the development of efficient numerical tools to predict the generation and propagation of sound from moving vehicles. In the present thesis, a methodology is proposed for evaluating the pass-by sound field generated by vehicle acoustic sources in a simplified urban environment, with a focus on flow sound sources. Although aerodynamic noise is still arguably a minor component of the total emitted noise in urban driving conditions, its share will certainly increase in the near future with the introduction of quiet electric engines and more noise-efficient tyres. This work presents a complete modelling of the problem, from sound generation to sound propagation and pass-by analysis, in three steps.
Firstly, computation of the flow around the geometry of interest; secondly, extraction of the sound sources generated by the flow; and thirdly, propagation of the sound generated by the moving sources to observers, including reflections and scattering by nearby surfaces. In the first step, the flow is solved using compressible detached-eddy simulations. The identification of the sound sources in the second step is performed using direct numerical beamforming with linear-programming deconvolution, with the phased-array pressure data extracted from the flow simulations. The outcome of this step is a set of uncorrelated monopole sources. Step three uses this set as input to a propagation method based on a point-to-point moving-source Green's function and a modified Kirchhoff integral, under the Kirchhoff approximation, to compute reflections from built surfaces. The methodology is demonstrated on the example of the aeroacoustic noise generated by a NACA air inlet moving in a simplified urban setting. Using this methodology gives insight into the sound-generating mechanisms, the source characteristics, and the sound field generated by the sources when moving in a simplified urban environment.
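The moving-source Green's function in step three hinges on evaluating the field at the retarded (emission) time. Below is a minimal sketch for a monopole in uniform rectilinear subsonic motion, with the closed-form retarded-time solution and the resulting Doppler factor 1/(1 - Mr); the geometry and speeds are illustrative assumptions, not values from the thesis:

```python
import numpy as np

C0 = 343.0  # speed of sound in air [m/s]

def retarded_time(t, x_obs, v_src):
    """Emission time tau for a monopole moving along the x-axis at
    constant subsonic speed v_src, heard at time t by a static observer
    at x_obs: solves |x_obs - x_src(tau)| = C0*(t - tau) in closed form
    (the causal root of the resulting quadratic in tau)."""
    x, y, z = x_obs
    a = C0 ** 2 - v_src ** 2
    b = -2.0 * (C0 ** 2 * t - v_src * x)
    c = C0 ** 2 * t ** 2 - (x ** 2 + y ** 2 + z ** 2)
    return (-b - np.sqrt(b ** 2 - 4.0 * a * c)) / (2.0 * a)

def doppler_factor(t, x_obs, v_src):
    """Instantaneous Doppler factor 1/(1 - Mr), with Mr the source Mach
    number projected on the source-to-observer direction, evaluated at
    the retarded time."""
    tau = retarded_time(t, x_obs, v_src)
    r = np.array(x_obs, dtype=float) - np.array([v_src * tau, 0.0, 0.0])
    mr = v_src * r[0] / (C0 * np.linalg.norm(r))
    return 1.0 / (1.0 - mr)
```

For an observer directly ahead of the source, the factor reduces to the familiar 1/(1 - M), which the second assertion below checks; the same retarded-time machinery, with the additional 1/(4πR|1 - Mr|) amplitude, underlies the moving-source Green's function.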
254

Méthodes modernes d'analyse de données en biophysique analytique : résolution des problèmes inverses en RMN DOSY et SM / New methods of data analysis in analytical biophysics : solving the inverse ill-posed problems in DOSY NMR and MS

Cherni, Afef 20 September 2018 (has links)
This thesis proposes new approaches to solving inverse problems in biophysics. First, we study the DOSY NMR experiment: a new hybrid regularization approach is proposed together with the novel PALMA algorithm (http://palma.labo.igbmc.fr/), which ensures efficient, high-precision analysis of real DOSY data of any type. Second, we turn to mass spectrometry. We propose a new dictionary-based approach dedicated to proteomic analysis, using the averagine model and a constrained-minimization strategy with a sparsity-inducing penalty. To further improve the accuracy of the recovered information, we propose the SPOQ method, based on a new penalty function and solved by a new Forward-Backward algorithm with a locally adjusted variable metric. All our algorithms benefit from sound theoretical convergence guarantees and have been validated experimentally on synthetic spectra and real data.
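In their simplest form, the ingredients named above (a sparsity-inducing penalty minimized by a Forward-Backward scheme) reduce to the textbook ISTA iteration. The sketch below is that classic special case on a toy deconvolution problem; it is not the PALMA or SPOQ algorithm itself, and the kernel, spike positions and penalty weight are arbitrary:

```python
import numpy as np

def ista(A, y, lam, n_iter=2000):
    """Forward-Backward (proximal gradient / ISTA) iteration for the
    sparse least-squares problem min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient (forward) step on the data term, then the proximity
    operator of the l1 penalty, i.e. soft thresholding (backward step)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward step
    return x

# Toy sparse deconvolution: two spikes blurred by a Gaussian kernel.
n = 60
h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
A = np.array([np.convolve(np.eye(n)[i], h, mode="same") for i in range(n)]).T
x_true = np.zeros(n)
x_true[15], x_true[40] = 1.0, -0.7
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
```

The recovered vector fits the data while staying sparse, with its dominant coefficients at the true spike locations; variable-metric variants accelerate exactly this iteration by replacing the scalar step 1/L with a locally adapted metric.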
255

Correction des effets de volume partiel en tomographie d'émission / Partial volume effect correction in emission tomography

Le Pogam, Adrien 29 April 2010 (has links)
Partial Volume Effects (PVE) designate the blur commonly found in nuclear medicine images, and this PhD work is dedicated to their correction, with the objective of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues of size similar to the Full Width at Half Maximum (FWHM) of the point spread function (PSF) of the imaging device. In addition, PVE induce activity cross-contamination between adjacent structures with different tracer uptakes, which can lead to under- or over-estimation of the real activity of the analyzed regions. Various methodologies currently exist to compensate or even correct for PVE, and they may be classified by their place in the processing chain (before, during or after the image reconstruction process) and by their dependency on co-registered anatomical images of higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). The voxel-based, post-reconstruction approach was chosen for this work, to avoid both regions-of-interest definition and dependency on the proprietary reconstruction developed by each manufacturer, and was exploited and improved to better correct for PVE. Two different contributions were carried out. The first is based on a multi-resolution methodology in the wavelet domain, using the higher-resolution details of a co-registered anatomical image associated with the functional dataset to be corrected. The second is the improvement of iterative-deconvolution-based methodologies using tools such as directional wavelets and their curvelet extensions, which add the notion of direction to the analysis. These approaches were applied and validated using synthetic, simulated and clinical images, with neurology and oncology applications in mind. Finally, as currently available PET/CT scanners incorporate more and more spatial-resolution corrections in their reconstruction algorithms, we compared such approaches in SPECT and PET to the iterative deconvolution methodology developed in this work.
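As an illustration of the voxel-based, post-reconstruction deconvolution family this work builds on, here is a minimal 1-D Richardson-Lucy iteration, a standard choice for this task. The phantom, PSF width and iteration count are arbitrary, and the thesis's own contributions add wavelet- and curvelet-based regularization that this sketch omits:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    """Richardson-Lucy iterative deconvolution in 1-D: multiplicative
    updates that sharpen structures blurred by a known PSF while keeping
    the estimate non-negative."""
    psf = psf / psf.sum()
    mirrored = psf[::-1]                      # adjoint of the blur operator
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, mirrored, mode="same")
    return estimate

# Toy phantom: a 7-voxel hot structure narrower than the PSF FWHM, so the
# partial volume effect depresses its apparent peak activity.
x = np.zeros(100)
x[45:52] = 1.0
psf = np.exp(-0.5 * (np.arange(-9, 10) / 3.0) ** 2)
y = np.convolve(x, psf / psf.sum(), mode="same")
restored = richardson_lucy(y, psf)
```

The blurred peak sits well below the true activity of 1.0; the deconvolved estimate recovers most of that lost contrast, which is precisely the PVE-induced signal loss described above.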
256

Localisation et contribution de sources acoustiques de navire au passage par traitement d’antenne réduite / Array processing for the localization and the contribution of acoustic sources of a passing-by ship using a short antenna

Oudompheng, Benoit 03 November 2015 (has links)
Since surface-ship radiated noise is the main source of underwater acoustic noise in coastal waters, the Marine Strategy Framework Directive of the European Commission promotes the development of methods to monitor and reduce the impact of shipping noise. The need for an industrial system for mapping the noise radiated by surface ships motivated this study; such a system will allow the naval industry to identify which parts of a ship radiate the most noise. In this context, this research work deals with the development of passive noise-mapping methods for a surface ship passing above a static linear array with a reduced number of hydrophones. Two aspects of noise mapping are considered: the localization of acoustic sources, and the identification of the relative contribution of each source to the ship's acoustic signature. First, a bibliographical study of the acoustic radiation of a passing surface ship is conducted in order to list the main acoustic sources and then to simulate representative ship sources. The acoustic propagation is simulated using ray theory and takes source motion into account. This simulator of the acoustic radiation of a passing ship is used to validate the proposed noise-mapping methods and to design an experimental set-up. A study of the influence of source motion on the noise-mapping methods led to the use of a beamforming method for moving sources for source localization, and a deconvolution method for identifying the source contributions. The performance of both methods is assessed in the presence of measurement noise and uncertainties in the propagation model, in order to establish their limitations. A first improvement of the beamforming method is a passive synthetic-aperture array algorithm, which exploits the relative motion between the ship and the array to improve the spatial resolution at low frequencies. An algorithm is then proposed to acoustically correct trajectography mismatches of a passing surface ship. Finally, the last part of this thesis concerns a pass-by experiment with a towed ship model in a lake; these measurements allowed us to validate the proposed noise-mapping methods and their improvements in a real, controlled environment.
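The localization step rests on beamforming. Below is a minimal frequency-domain delay-and-sum sketch for a static monopole and a line array; the array geometry, frequency and water sound speed are illustrative assumptions, and the moving-source beamformer actually used in the thesis replaces these static steering delays with trajectory-dependent ones:

```python
import numpy as np

C0 = 1500.0  # assumed nominal sound speed in water [m/s]

def steering(array_x, xs, depth, k):
    """Free-field steering vector from a scan point (xs, depth) to a
    line of hydrophones lying along the x-axis."""
    r = np.sqrt((array_x - xs) ** 2 + depth ** 2)
    return np.exp(-1j * k * r) / r

def delay_and_sum_map(csm, freq, array_x, scan_x, depth):
    """Conventional (delay-and-sum) beamformer output over a line of
    scan points at fixed depth, given the cross-spectral matrix (CSM)."""
    k = 2.0 * np.pi * freq / C0
    out = []
    for xs in scan_x:
        w = steering(array_x, xs, depth, k)
        w = w / np.linalg.norm(w)
        out.append(np.real(np.conj(w) @ csm @ w))
    return np.array(out)

# Synthetic test case: one monopole at x = 3 m, 10 m above an
# 8-hydrophone array with 1 m spacing.
array_x = np.arange(8.0)
p = steering(array_x, 3.0, 10.0, 2.0 * np.pi * 500.0 / C0)  # received spectra
csm = np.outer(p, np.conj(p))                               # rank-1 CSM
scan_x = np.linspace(0.0, 7.0, 15)
bf_map = delay_and_sum_map(csm, 500.0, array_x, scan_x, 10.0)
```

The map peaks at the true source position; the deconvolution step mentioned in the abstract then removes the array's point-spread function from such maps to estimate actual source contributions.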
257

Mass Spectrometric Deconvolution of Libraries of Natural Peptide Toxins

Gupta, Kallol January 2013 (has links) (PDF)
This thesis deals with the analysis of natural peptide libraries using mass spectrometry. In the course of the study, both ribosomal and non-ribosomal classes of peptides were investigated. Microheterogeneity, post-translational modifications (PTMs), isobaric amino acids and disulfide crosslinks present critical challenges in routine mass-spectral structure determination of natural peptides; these problems form the core of this thesis. Chapter 2 describes an approach in which chemical derivatization, in unison with high-resolution LC-MSn experiments, resulted in deconvolution of a microheterogeneous peptide library of B. subtilis K1. Chapter 3 describes an approach for distinguishing between isobaric amino acids (Leu/Ile/Hyp) by the use of combined ETD-CID fragmentation, through characteristic side-chain losses. Chapters 4-6 address a long-standing problem in the structure elucidation of peptide toxins: the determination of disulfide connectivity. Through the use of direct mass-spectral CID fragmentation, a methodology has been proposed for determining the S-S pairing schemes in polypeptides. Further, an algorithm, DisConnect, has been developed for a rapid and robust solution to the problem. This general approach is applicable to both peptides and proteins, irrespective of the size and the number of disulfide bonds present. The method has been successfully applied to a large number of peptide toxins from marine cone snails (conotoxins), synthetic foldamers and proteins. Chapter 7 describes an attempt to integrate next-generation sequencing (NGS) data with mass-spectrometric analysis of the crude venom. This approach couples rapidly generated cDNA sequences with high-throughput LC-ESI-MS/MS analysis, which provides mass-spectral fragmentation information. An algorithm has been developed that allows the construction of a putative Conus peptide database from the NGS data, followed by a protocol that permits rapid annotation of tandem MS data.
The approach is exemplified by an analysis of the peptide components present in the venom of Conus amadis, yielding 225 chemically unique sequences and identifying more than 150 sites of PTMs. In summary, this thesis presents methodologies that address the existing limitations of de novo mass-spectral structure determination of natural peptides, and new methodologies that permit rapid and efficient analysis of complex mixtures.
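The combinatorial core of the disulfide-connectivity problem is easy to state: 2n cysteines admit (2n-1)(2n-3)...(1) distinct S-S pairing schemes, and this is the space an algorithm like DisConnect must discriminate among using mass-spectral fragments. Below is a small enumeration sketch of that search space only; DisConnect's actual scoring against CID data is not reproduced here:

```python
def disulfide_pairings(cys):
    """Enumerate every possible S-S pairing scheme (perfect matching)
    for an even-length list of cysteine positions: pair the first
    cysteine with each remaining one, then recurse on the rest, giving
    (2n-1)*(2n-3)*...*1 schemes in total."""
    cys = list(cys)
    if not cys:
        return [[]]
    first, rest = cys[0], cys[1:]
    schemes = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in disulfide_pairings(remaining):
            schemes.append([(first, partner)] + sub)
    return schemes

# A 6-cysteine peptide (a common conotoxin framework) has 5*3*1 = 15
# candidate connectivities; 8 cysteines already have 105.
schemes = disulfide_pairings([1, 2, 3, 4, 5, 6])
```

The double-factorial growth is why fragment-based pruning matters: exhaustive chemical verification of every scheme quickly becomes impractical as the number of disulfide bonds rises.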
258

Implementace rekonstrukčních metod pro čtení čárového kódu / Implementation of reconstruction methods for bar code reading

Kadlčík, Libor January 2013 (has links)
A bar code stores information as a series of bars and gaps of various widths, and can therefore be considered an example of a bilevel (square-wave) signal. Magnetic bar codes are created by applying a slightly ferromagnetic material to a substrate. Sensing is done by a reading oscillator whose frequency is modulated by the presence of this ferromagnetic material; the signal from the oscillator is then subjected to frequency demodulation. Due to the temperature drift of the reading oscillator, the demodulated signal is accompanied by DC drift. A method for removing this drift is introduced, along with drift-insensitive detection of the presence of a bar code. Reading bar codes is further complicated by convolutional distortion, which results from the spatially dispersed sensitivity of the sensor. The effect of convolutional distortion is analogous to low-pass filtering: edges are smoothed and overlap, making their detection difficult. The characteristics of the convolutional distortion can be summarized in a point-spread function (PSF). In the case of magnetic bar codes, the shape of the PSF can be known in advance, but not its width or DC transfer; methods for estimating these parameters are discussed. The signal must be reconstructed into its original bilevel form before decoding can take place. Variational methods provide an effective way to do this: their core idea is to reformulate reconstruction as an optimization problem of functional minimization. The functional can be extended by additional functionals (regularizations) that considerably improve the reconstruction. The principle of variational methods is shown, including examples of various regularizations. All algorithms and methods (including frequency demodulation of the signal from the reading oscillator) are digital. They are implemented as a program for a microcontroller from the PIC32 family, which offers enough computing power that even blind deconvolution (where the actual PSF must also be estimated) finishes in a few seconds. The microcontroller is part of a magnetic bar code reader whose hardware can transfer the read information to a personal computer via the PS/2 interface or USB (by emulating key presses on a virtual keyboard), or show it on a display.
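A minimal example of the variational idea described above: reconstruct a bilevel signal by gradient descent on a least-squares data term plus a smoothed total-variation regularization, a common choice for piecewise-constant signals. The PSF, bar widths and parameter values below are illustrative assumptions; the thesis's PIC32 implementation and its specific regularizations are not reproduced:

```python
import numpy as np

def tv_deconvolve(y, psf, lam=0.05, step=0.1, n_iter=1500, eps=0.05):
    """Gradient descent on the smoothed variational functional
    J(x) = 0.5*||h*x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps^2),
    i.e. least-squares data fidelity plus a smoothed total-variation
    penalty that favours piecewise-constant (bilevel) reconstructions."""
    psf = psf / psf.sum()
    adj = psf[::-1]                           # adjoint of the blur
    x = y.astype(float).copy()
    for _ in range(n_iter):
        resid = np.convolve(x, psf, mode="same") - y
        grad_data = np.convolve(resid, adj, mode="same")
        d = np.diff(x)
        u = d / np.sqrt(d * d + eps * eps)    # derivative of the smoothed |.|
        grad_tv = np.concatenate(([0.0], u)) - np.concatenate((u, [0.0]))
        x = x - step * (grad_data + lam * grad_tv)
    return x

# Toy bar code: bilevel bars of width 6 samples, blurred by a Gaussian PSF.
x_true = np.repeat(np.array([0, 1, 0, 1, 1, 0, 1, 0], dtype=float), 6)
psf = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
y = np.convolve(x_true, psf / psf.sum(), mode="same")
x_hat = tv_deconvolve(y, psf)
```

Thresholding the reconstruction at 0.5 then recovers the bar/gap pattern; in the blind-deconvolution case mentioned above, an outer loop would additionally estimate the PSF parameters.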
259

Spektrometrické metody pro výzkum huminových látek / Spectrometric Methods for Research of Humic Substances

Enev, Vojtěch January 2016 (has links)
The main aim of this doctoral thesis is the study of the physicochemical properties of humic substances (HS) by modern instrumental techniques. The subjects of the study were HS isolated from South Moravian lignite, South Bohemian peat, a forest soil (Humic Podzol) and an extract from the brown sea alga Ascophyllum nodosum. To support the determination of the structure and reactivity of these unique “biocolloids”, standard samples (Leonardite HA, Elliott Soil HS and Pahokee Peat HS) obtained from the International Humic Substances Society (IHSS) were also studied. All of these substances were characterized by elemental analysis (EA), molecular absorption spectroscopy in the ultraviolet and visible region (UV/Vis), Fourier-transform infrared spectroscopy (FTIR), 13C nuclear magnetic resonance spectroscopy (LS 13C NMR), and steady-state and time-resolved fluorescence spectroscopy. The fluorescence, UV/Vis and 13C NMR spectra were used to calculate fluorescence and absorption indexes, specific absorbance values and structural parameters, respectively, which served for the fundamental characterization of these “biocolloidal” compounds. FTIR spectroscopy was used to identify functional groups and structural units of HS. Evaluation of infrared spectra is complicated by overlapping absorption bands, especially in the fingerprint region; this problem was overcome by Fourier self-deconvolution (FSD). Steady-state fluorescence spectroscopy was used for a deeper characterization of HS with respect to origin, structural units, the amount of substituents with electron-donor and electron-acceptor effects, the content of reactive functional groups, “molecular” heterogeneity, the degree of humification, etc. Complexation parameters of the Elliott Soil samples with heavy-metal ions (Cu2+, Pb2+ and Hg2+) were obtained using the modified Stern-Volmer equation. These ions were chosen purposefully, because the interaction of HS with them is one of the fundamental criteria for assessing the reactivity of HS. A key part of the thesis is time-resolved fluorescence spectroscopy: the origin of HS emission was determined by the Time-Resolved Area-Normalized Emission Spectra (TRANES) method, and the viscosity of the micro-medium around excited HS fluorophores was determined from Time-Resolved Emission Spectra (TRES).
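The modified Stern-Volmer analysis used for the complexation parameters linearizes as F0/(F0 - F) = 1/fa + 1/(fa*K*[Q]), so regressing F0/dF against 1/[Q] yields the accessible fluorophore fraction fa from the intercept and the association constant K from intercept/slope. A sketch with synthetic data follows; the fa and K values are invented for illustration, not results from the thesis:

```python
import numpy as np

def modified_stern_volmer_fit(conc, intensity):
    """Linear fit of the modified Stern-Volmer model
    F0/(F0 - F) = 1/fa + 1/(fa*K*[Q]):
    a straight-line regression of F0/dF on 1/[Q] gives fa from the
    intercept and K from intercept/slope."""
    F0 = intensity[0]                  # conc[0] must be 0 (no quencher)
    q = np.asarray(conc[1:], dtype=float)
    dF = F0 - np.asarray(intensity[1:], dtype=float)
    slope, intercept = np.polyfit(1.0 / q, F0 / dF, 1)
    fa = 1.0 / intercept
    K = intercept / slope
    return fa, K

# Synthetic quenching series generated with fa = 0.6 and K = 2.0e4 (assumed
# illustrative values); the fit should recover both exactly.
fa_true, K_true, F0 = 0.6, 2.0e4, 100.0
q = np.array([0.0, 1e-5, 2e-5, 5e-5, 1e-4, 2e-4])
F = F0 * (1.0 - fa_true * K_true * q / (1.0 + K_true * q))
fa, K = modified_stern_volmer_fit(q, F)
```

With real quenching data the same regression is applied to the measured titration series, and the quality of the linearization itself is a check on whether the modified model applies.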
260

Downhill folders in slow motion: Lambda repressor variants probed by optical tweezers

Mukhortava, Ann 26 September 2017 (has links)
Protein folding is a process of molecular self-assembly in which a linear chain of amino acids assembles into a defined, functional three-dimensional structure.
Folding is a thermally driven diffusive search on a free-energy landscape in conformational space for the minimum-energy structure. During that process, the free energy of the system does not always decrease monotonically; instead, sub-optimal compensation of the enthalpy and entropy changes during each folding step leads to the formation of free-energy folding barriers. However, these barriers and the associated high-energy transition states, which contain key information about the mechanisms of protein folding, are kinetically inaccessible. To reveal the barrier-formation process and the structural characteristics of transition states, proteins are employed that fold via barrierless paths – so-called downhill folders. Because the folding barriers are low, the key folding interactions become accessible, yielding insights into the rate-limiting folding events. Here, I compared the folding dynamics of three variants of a lambda repressor fragment comprising amino acids 6 to 85: a two-state folder λWT (Y22W) and two downhill-like folding variants, λYA (Y22W/Q33Y/G46,48A) and λHA (Y22W/Q33H/G46,48A). To access the kinetics and structural dynamics, single-molecule optical tweezers with sub-millisecond and nanometer resolution were used. I found that force perturbation slowed the microsecond kinetics of the downhill folders to a millisecond time scale, making the system accessible to single-molecule studies. Interestingly, under load the downhill-like variants of lambda repressor behaved as cooperative two-state folders with significantly different folding kinetics and force dependence. All three protein variants displayed highly compliant behaviour under load. Model-free reconstruction of free-energy landscapes allowed us to directly resolve the fine details of the transformation of the two-state folding path into a downhill-like path.
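The force-induced slowdown noted here is commonly captured by a Bell-type tilting of the energy landscape along the pulling coordinate. This is a textbook approximation given only for illustration (the thesis's own rate analysis uses the Berkemeier-Schlierf model listed in the contents); the symbols below are generic, not taken from this work:

```latex
k_{\mathrm{u}}(F) = k_{\mathrm{u}}^{0}\,\exp\!\left(+\frac{F\,\Delta x_{\mathrm{u}}}{k_{\mathrm{B}}T}\right),
\qquad
k_{\mathrm{f}}(F) = k_{\mathrm{f}}^{0}\,\exp\!\left(-\frac{F\,\Delta x_{\mathrm{f}}}{k_{\mathrm{B}}T}\right)
```

Here $k_{\mathrm{u}}^{0}$ and $k_{\mathrm{f}}^{0}$ are the zero-load unfolding and refolding rate constants and $\Delta x_{\mathrm{u}}$, $\Delta x_{\mathrm{f}}$ the distances to the transition state. With a transition-state distance of a few nanometers, a load of about 10 pN gives $F\,\Delta x \approx 30\ \mathrm{pN\,nm} \approx 7\,k_{\mathrm{B}}T$, suppressing the refolding rate by roughly three orders of magnitude – which is how microsecond folding becomes observable on the millisecond scale.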
The effect of single mutations on protein stability, transition-state formation and the conformational heterogeneity of folding and unfolding states was observed. Notably, our results demonstrate that, despite ultrafast folding times in the range of 2 µs, the studied variants fold and unfold in a cooperative process via residual barriers, suggesting that much faster folding rate constants are required to reach the full downhill limit.

Table of contents:
I Theoretical background
1 Introduction
2 Protein folding: the downhill scenario
2.1 Protein folding as a diffusion on a multidimensional energy landscape
2.2 Downhill folding proteins
2.2.1 Thermodynamic description of downhill folders
2.2.2 Identification criteria for downhill folders
2.3 Lambda repressor as a model system for studying downhill folding
2.3.1 Wild-type lambda repressor fragment λ{6-85}
2.3.2 Acceleration of λ{6-85} folding by specific point mutations
2.3.3 The incipient-downhill λYA and downhill λHA variants
2.4 Single-molecule techniques as a promising tool for probing downhill folding dynamics
3 Single-molecule protein folding with optical tweezers
3.1 Optical tweezers
3.1.1 Working principle of optical tweezers
3.1.2 The optical tweezers setup
3.2 The dumbbell assay
3.3 Measurement protocols
3.3.1 Constant-velocity experiments
3.3.2 Constant-trap-distance experiments (equilibrium experiments)
4 Theory and analysis of single-molecule trajectories
4.1 Polymer elasticity models
4.2 Equilibrium free energies of protein folding in optical tweezers
4.3 Signal-pair correlation analysis
4.4 Force dependence of transition rate constants
4.4.1 Zero-load extrapolation of rates: the Berkemeier-Schlierf model
4.4.2 Detailed balance for unfolding and refolding data
4.5 Direct measurement of the energy landscape via deconvolution
II Results
5 Efficient strategy for protein-DNA hybrid formation
5.1 Currently available strategies for protein-DNA hybrid formation
5.2 Novel assembly of protein-DNA hybrids based on copper-free click chemistry
5.3 Click-chemistry-based assembly preserves the native protein structure
5.4 Summary
6 Non-equilibrium mechanical unfolding and refolding of lambda repressor variants
6.1 Non-equilibrium unfolding and refolding of lambda repressor λWT
6.2 Non-equilibrium unfolding and refolding of incipient-downhill λYA and downhill λHA variants of lambda repressor
6.3 Summary
7 Equilibrium unfolding and refolding of lambda repressor variants
7.1 Importance of the trap stiffness to resolve low-force nanometer transitions
7.2 Signal pair-correlation analysis to achieve millisecond transitions
7.3 Force-dependent equilibrium kinetics of λWT
7.4 Equilibrium folding of incipient-downhill λYA and downhill λHA variants of lambda repressor
7.5 Summary
8 Model-free energy landscape reconstruction for λWT, incipient-downhill λYA and downhill λHA variants
8.1 Direct observation of the effect of a single mutation on the conformational heterogeneity and protein stability
8.2 Artifacts of barrier-height determination during deconvolution
8.3 Summary
9 Conclusions and Outlook
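The abstract mentions model-free reconstruction of free-energy landscapes from measured extension distributions. A minimal sketch of the underlying idea – Boltzmann inversion of an equilibrium extension histogram, G(x) = -kBT ln P(x) – is shown below on synthetic, hypothetical data. The names and parameters are illustrative only; the actual analysis in the thesis additionally deconvolves bead and linker fluctuations, which is omitted here:

```python
import numpy as np

KBT = 4.11  # thermal energy in pN·nm at ~298 K

def boltzmann_inversion(extensions, bins=60):
    """Turn an equilibrium extension trajectory into a free-energy
    profile via G(x) = -kBT * ln P(x) (no deconvolution applied)."""
    hist, edges = np.histogram(extensions, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    occupied = hist > 0            # avoid log(0) in empty bins
    profile = -KBT * np.log(hist[occupied])
    return centers[occupied], profile - profile.min()

# Hypothetical two-state trajectory: hops between 0 nm (folded)
# and 10 nm (unfolded), blurred by Gaussian bead/trap noise.
rng = np.random.default_rng(0)
hops = rng.integers(0, 2, size=20000) * 10.0
trajectory = hops + rng.normal(0.0, 1.5, size=20000)

x, G = boltzmann_inversion(trajectory)  # two wells separated by a barrier
```

With enough equilibrium transitions, the two wells and the intervening barrier emerge directly from the histogram; in practice the measured distribution is a convolution of the protein coordinate with the instrument response, which is why the thesis performs an explicit deconvolution step first.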
