261 |
Spektrometrické metody pro výzkum huminových látek / Spectrometric Methods for Research of Humic Substances. Enev, Vojtěch (January 2016)
The main aim of this doctoral thesis is the study of the physicochemical properties of humic substances (HS) by modern instrumental techniques. The subjects of the study were HS isolated from South Moravian lignite, South Bohemian peat, a forest soil (Humic Podzol) and an extract of the brown sea alga Ascophyllum nodosum. With respect to the determination of the structure and reactivity of these unique “biocolloids”, standard samples (Leonardite HA, Elliott Soil HS and Pahokee Peat HS) obtained from the International Humic Substances Society (IHSS) were also studied. All of these substances were characterized by elemental analysis (EA), molecular absorption spectroscopy in the ultraviolet and visible region (UV/Vis), Fourier-transform infrared spectroscopy (FTIR), liquid-state 13C nuclear magnetic resonance spectroscopy (LS 13C NMR), and steady-state and time-resolved fluorescence spectroscopy. The fluorescence, UV/Vis and 13C NMR spectra obtained were used to calculate fluorescence and absorption indices, specific absorbance values and structural parameters, respectively, which served for the fundamental characterization of these “biocolloidal” compounds. FTIR spectroscopy was used to identify the functional groups and structural units of HS. Evaluation of infrared spectra is complicated by overlapping absorption bands, especially in the fingerprint region; this problem was overcome by Fourier self-deconvolution (FSD). Steady-state fluorescence spectroscopy was used for a deeper characterization of HS with respect to origin, structural units, the amount of substituents with electron-donor and electron-acceptor effects, the content of reactive functional groups, “molecular” heterogeneity, the degree of humification, etc. The complexation parameters of the Elliott Soil samples with heavy-metal ions (Cu2+, Pb2+ and Hg2+) were obtained using the modified Stern-Volmer equation. These ions were chosen purposefully, because the interaction of HS with them is one of the fundamental criteria for assessing the reactivity of HS. A key part of the thesis is time-resolved fluorescence spectroscopy: the origin of HS emission was determined by the Time-Resolved Area-Normalized Emission Spectra (TRANES) method, and the viscosity of the microenvironment around the excited fluorophores of HS was determined from Time-Resolved Emission Spectra (TRES).
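As an illustration of the quenching analysis described above, the sketch below fits the modified Stern-Volmer relation, F0/(F0 - F) = 1/(fa*Ka*[Q]) + 1/fa, to a synthetic titration series; the concentrations, intensities and the resulting fa (accessible fluorophore fraction) and Ka (association constant) are invented stand-ins, not values from the thesis.

```python
# Minimal sketch: complexation parameters from fluorescence quenching via
# the modified Stern-Volmer equation,
#   F0 / (F0 - F) = 1/(fa*Ka*[Q]) + 1/fa.
# Plotting F0/(F0-F) against 1/[Q] gives slope 1/(fa*Ka), intercept 1/fa.
import numpy as np

# hypothetical quencher (e.g. Cu2+) concentrations [mol/L] and intensities
Q  = np.array([2e-5, 5e-5, 1e-4, 2e-4, 5e-4])
F0 = 100.0                                      # intensity without quencher
F  = np.array([27.9, 44.1, 58.9, 72.4, 85.2][::-1])  # quenched intensities

y = F0 / (F0 - F)            # modified Stern-Volmer ordinate
x = 1.0 / Q                  # abscissa 1/[Q]

slope, intercept = np.polyfit(x, y, 1)   # linear regression
fa = 1.0 / intercept                     # accessible fluorophore fraction
Ka = intercept / slope                   # association constant [L/mol]
print(f"fa = {fa:.2f}, Ka = {Ka:.3g} L/mol")
```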
|
262 |
Downhill folders in slow motion: Lambda repressor variants probed by optical tweezers. Mukhortava, Ann (26 September 2017)
Protein folding is a process of molecular self-assembly in which a linear chain of amino acids assembles into a defined, functional three-dimensional structure. The process of folding is a thermally driven diffusive search on a free-energy landscape in conformational space for the minimal-energy structure. During that process, the free energy of the system does not always decrease monotonically; instead, sub-optimal compensation of the enthalpy and entropy changes during each folding step leads to the formation of folding free-energy barriers. These barriers, and the associated high-energy transition states that contain key information about the mechanisms of protein folding, are, however, kinetically inaccessible. To reveal the barrier-formation process and the structural characteristics of transition states, proteins that fold via barrierless paths, so-called downhill folders, are employed. Due to the low folding barriers, the key folding interactions become accessible, yielding insights into the rate-limiting folding events.
Here, I compared the folding dynamics of three different variants of a lambda repressor fragment, containing amino acids 6 to 85: a two-state folder λWT (Y22W) and two downhill-like folding variants, λYA (Y22W/Q33Y/G46,48A) and λHA (Y22W/Q33H/G46,48A). To access the kinetics and structural dynamics, single-molecule optical tweezers with sub-millisecond and nanometer resolution were used. I found that force perturbation slowed down the microsecond kinetics of downhill folders to a millisecond timescale, making them accessible to single-molecule studies.
Interestingly, under load, the downhill-like variants of lambda repressor appeared as cooperative two-state folders with significantly different folding kinetics and force dependence. The three protein variants displayed a highly compliant behaviour under load. Model-free reconstruction of free-energy landscapes allowed us to directly resolve the fine details of the transformation of the two-state folding path into a downhill-like path. The effect of single mutations on protein stability, transition state formation and conformational heterogeneity of folding and unfolding states was observed.
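A minimal sketch of the simplest form of such a landscape reconstruction, plain Boltzmann inversion G(x) = -kBT ln P(x) of an equilibrium extension histogram, is given below on a synthetic two-state trajectory. The thesis's model-free reconstruction additionally deconvolves bead and linker fluctuations from the measured distribution; that step is omitted here.

```python
# Minimal sketch of energy-landscape estimation by Boltzmann inversion,
#   G(x) = -kB*T * ln P(x),
# from an equilibrium extension trajectory. The trajectory is synthetic:
# samples around 0 nm (folded) and 15 nm (unfolded).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
folded = rng.random(n) < 0.6                     # ~60 % folded occupancy
x = np.where(folded,
             rng.normal(0.0, 2.0, n),            # folded state near 0 nm
             rng.normal(15.0, 2.5, n))           # unfolded state near 15 nm

hist, edges = np.histogram(x, bins=120, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
G = -np.log(hist[mask])      # free energy in units of kB*T
G -= G.min()                 # reference the global minimum to zero
# (centers[mask], G) is the apparent landscape along the pulling coordinate
```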
Notably, our results demonstrate that, despite ultrafast folding times in the range of 2 µs, the studied variants fold and unfold in a cooperative process via residual barriers, suggesting that much faster folding rate constants are required to reach the full-downhill limit.
I Theoretical background 1
1 Introduction 3
2 Protein folding: the downhill scenario 5
2.1 Protein folding as a diffusion on a multidimensional energy landscape 5
2.2 Downhill folding proteins 7
2.2.1 Thermodynamic description of downhill folders 7
2.2.2 Identification criteria for downhill folders 8
2.3 Lambda repressor as a model system for studying downhill folding 9
2.3.1 Wild-type lambda repressor fragment λ{6-85} 10
2.3.2 Acceleration of λ{6-85} folding by specific point mutations 11
2.3.3 The incipient-downhill λYA and downhill λHA variants 14
2.4 Single-molecule techniques as a promising tool for probing downhill folding dynamics 17
3 Single-molecule protein folding with optical tweezers 19
3.1 Optical tweezers 19
3.1.1 Working principle of optical tweezers 19
3.1.2 The optical tweezers setup 21
3.2 The dumbbell assay 22
3.3 Measurement protocols 23
3.3.1 Constant-velocity experiments 23
3.3.2 Constant-trap-distance experiments (equilibrium experiments) 24
4 Theory and analysis of single-molecule trajectories 27
4.1 Polymer elasticity models 27
4.2 Equilibrium free energies of protein folding in optical tweezers 28
4.3 Signal-pair correlation analysis 29
4.4 Force dependence of transition rate constants 29
4.4.1 Zero-load extrapolation of rates: the Berkemeier-Schlierf model 30
4.4.2 Detailed balance for unfolding and refolding data 31
4.5 Direct measurement of the energy landscape via deconvolution 32
II Results 33
5 Efficient strategy for protein-DNA hybrid formation 35
5.1 Currently available strategies for protein-DNA hybrid formation 35
5.2 Novel assembly of protein-DNA hybrids based on copper-free click chemistry 37
5.3 Click-chemistry based assembly preserves the native protein structure 40
5.4 Summary 42
6 Non-equilibrium mechanical unfolding and refolding of lambda repressor variants 45
6.1 Non-equilibrium unfolding and refolding of lambda repressor λWT 45
6.2 Non-equilibrium unfolding and refolding of incipient-downhill λYA and downhill λHA variants of lambda repressor 48
6.3 Summary 52
7 Equilibrium unfolding and refolding of lambda repressor variants 53
7.1 Importance of the trap stiffness to resolve low-force nanometer transitions 54
7.2 Signal pair-correlation analysis to achieve millisecond transitions 56
7.3 Force-dependent equilibrium kinetics of λWT 59
7.4 Equilibrium folding of incipient-downhill λYA and downhill λHA variants of lambda repressor 61
7.5 Summary 65
8 Model-free energy landscape reconstruction for λWT, incipient-downhill λYA and downhill λHA variants 69
8.1 Direct observation of the effect of a single mutation on the conformational heterogeneity and protein stability 71
8.2 Artifacts of barrier-height determination during deconvolution 75
8.3 Summary 76
9 Conclusions and Outlook 79
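As a hedged companion to the zero-load extrapolation listed in Section 4.4 above, the sketch below fits the one-parameter Bell model, k(F) = k0*exp(F*dx/kBT), to illustrative unfolding rates. The thesis itself uses the more elaborate Berkemeier-Schlierf model, which also accounts for linker compliance; Bell serves here only as the simplest stand-in, and the rates are not measured values.

```python
# Minimal sketch: zero-force extrapolation of unfolding rates with the
# Bell model, k(F) = k0 * exp(F*dx/kBT). A straight-line fit of ln k
# against F gives dx (slope * kBT) and k0 (exp of intercept).
import numpy as np

kBT   = 4.1                                   # pN*nm at room temperature
F     = np.array([6.0, 8.0, 10.0, 12.0])      # forces [pN], illustrative
k_unf = np.array([0.4, 1.5, 6.1, 23.0])       # unfolding rates [1/s]

slope, intercept = np.polyfit(F, np.log(k_unf), 1)
dx = slope * kBT          # distance to the transition state [nm]
k0 = np.exp(intercept)    # extrapolated zero-force rate [1/s]
print(f"dx = {dx:.2f} nm, k0 = {k0:.2e} 1/s")
```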
|
263 |
Compression et inférence des opérateurs intégraux : applications à la restauration d’images dégradées par des flous variables / Approximation and estimation of integral operators: applications to the restoration of images degraded by spatially varying blurs. Escande, Paul (26 September 2016)
The restoration of images degraded by spatially varying blurs is a problem of increasing importance, encountered in many applications such as astronomy, computer vision and light-sheet fluorescence microscopy, where images can be of size one billion pixels. Spatially varying blurs can be modelled by linear integral operators H that map a sharp image u to its blurred version Hu. After discretization on a grid of N pixels, H can be viewed as a matrix of size N x N; for the targeted applications, storing this matrix explicitly would require on the order of an exabyte of memory. This simple observation illustrates the difficulties associated with the restoration problem: i) the storage of a huge volume of data, and ii) the prohibitive computational cost of matrix-vector products. The problem suffers from the curse of dimensionality. Moreover, in many applications the blur operator is unknown or only partially known. There are therefore two complementary and closely related problems: the approximation and the estimation of blurring operators. This thesis is devoted to developing new models and computational methods to address them.
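To make the storage and cost argument concrete, the following NumPy/SciPy sketch first counts the memory a dense H would need for even a modest 512 x 512 image, then applies a spatially varying blur as a weighted sum of a few FFT convolutions with local PSFs, in the spirit of product-convolution approximations (e.g. Nagy and O'Leary). The PSFs, interpolation weights and image are invented stand-ins, not the operators studied in the thesis.

```python
# Minimal sketch: instead of one dense N x N matrix, apply K local PSFs
# and blend the results with smooth interpolation weights.
import numpy as np
from scipy.signal import fftconvolve

n = 512                                   # image is n x n, so N = n*n
print(f"dense H would need {(n**2)**2 * 8 / 1e12:.2f} TB")  # float64

u = np.random.rand(n, n)                  # stand-in sharp image

def gaussian_psf(sigma, size=25):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return g / g.sum()

psfs    = [gaussian_psf(1.0), gaussian_psf(4.0)]      # local blurs
ramp    = np.linspace(0.0, 1.0, n)[None, :] * np.ones((n, n))
weights = [1.0 - ramp, ramp]                          # partition of unity

# Hu ~ sum_k w_k .* (psf_k * u): K cheap FFT convolutions instead of a
# dense matrix-vector product
Hu = sum(w * fftconvolve(u, p, mode="same") for w, p in zip(weights, psfs))
```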
|
264 |
Improved in silico methods for target deconvolution in phenotypic screens. Mervin, Lewis (January 2018)
Target-based screening projects for bioactive (orphan) compounds have been shown in many cases to be insufficiently predictive for in vivo efficacy, leading to attrition in clinical trials. Partly for this reason, phenotypic screening has undergone a renaissance in both academia and the pharmaceutical industry. One key shortcoming of this paradigm shift is that the protein targets modulated must subsequently be elucidated, which is often a costly and time-consuming procedure. In this work, we have explored both improved methods and real-world case studies of how computational methods can help in the target elucidation of phenotypic screens. One limitation of previous methods has been the ability to assess the applicability domain of the models, that is, when the assumptions made by a model are fulfilled and for which input chemicals the models are reliably appropriate. Hence, a major focus of this work was to explore methods for calibrating machine-learning algorithms using Platt scaling, isotonic regression scaling and Venn-Abers predictors, since the probabilities from well-calibrated classifiers can be interpreted at a confidence level and predictions specified at an acceptable error rate. Additionally, many current protocols only offer probabilities for affinity, so another key area for development was to expand the target-prediction models with functional prediction (activation or inhibition). This extra level of annotation is important since the activation or inhibition of a target may positively or negatively impact the phenotypic response in a biological system. Furthermore, many existing methods do not utilize the wealth of bioactivity information held for orthologue species. We therefore also focused on an in-depth analysis of orthologue bioactivity data and its relevance and applicability to expanding compound and target bioactivity space for predictive studies. The realized protocol was trained with 13,918,879 compound-target pairs, comprises 1,651 targets, and has been made available for public use on GitHub. The methodology was then applied to aid the target deconvolution of AstraZeneca phenotypic readouts, in particular the rationalization of cytotoxicity and cytostaticity in the High-Throughput Screening (HTS) collection. Results from this work highlighted which targets are frequently linked to the cytotoxicity and cytostaticity of chemical structures, and provided insight into which compounds to select or remove from the collection for future screening projects. Overall, this project has furthered the field of in silico target deconvolution by improving the performance and applicability of current protocols and by rationalizing cytotoxicity, which has been shown to influence attrition in clinical trials.
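A minimal sketch of the calibration step is given below using scikit-learn's CalibratedClassifierCV, which implements Platt scaling ('sigmoid') and isotonic regression; Venn-Abers predictors, also explored in the thesis, require a separate implementation. The fingerprints and activity labels are random stand-ins, not ChEMBL-style bioactivity data.

```python
# Minimal sketch: calibrating a target-prediction classifier so that its
# output probabilities can be read at a chosen confidence level.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.random.randint(0, 2, size=(2000, 1024))  # mock 1024-bit fingerprints
y = np.random.randint(0, 2, size=2000)          # mock activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for method in ("sigmoid", "isotonic"):          # Platt vs isotonic scaling
    clf = CalibratedClassifierCV(
        RandomForestClassifier(n_estimators=100, random_state=0),
        method=method, cv=3)
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]           # calibrated P(active)
    print(method, p[:3])
```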
|
265 |
Data-driven goodness-of-fit tests / Datagesteuerte Verträglichkeitskriteriumtests. Langovoy, Mikhail Anatolievich (9 July 2007)
No description available.
|
266 |
Sur des méthodes préservant les structures d'une classe de matrices structurées / On structure-preserving methods of a class of structured matrices. Ben Kahla, Haithem (14 December 2017)
The classical linear algebra methods for computing the eigenvalues and eigenvectors of a matrix, low-rank approximations of a solution, etc., do not take matrix structures into account. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are the subject of considerable interest in the community, and this thesis is a contribution to that field. The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we proposed two algorithms, RSGSi and RMSGSi, in which the reorthogonalization of the current set of vectors against the previously computed set is performed twice; the loss of J-orthogonality improves very significantly. A direct rounding-error analysis of the symplectic Gram-Schmidt algorithm is very hard to carry out. We managed to work around this difficulty and give bounds on the loss of J-orthogonality and on the factorization error. Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of the free parameters leads to the algorithm SROSH. However, the latter may be subject to numerical instability. We proposed a new modified version, SRMSH, which has the advantage of being as numerically stable as possible; a detailed study presents the variants SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, an appropriate reduction of the matrix to a condensed form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be handled via the JHESS algorithm. We showed that it is possible to reduce a general matrix to upper J-Hessenberg form using symplectic Householder transformations exclusively. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We derived two new variants, JHMSH and JHMSH2, which are significantly more stable numerically. We found that these algorithms behave quite similarly to the JHESS algorithm. An important feature of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter a fatal breakdown, or suffer from a severe form of near-breakdown, bringing the computation to a brutal stop or leading to serious numerical instability that deprives the final result of any meaning. This phenomenon has no equivalent in the Euclidean case. We devised a very efficient strategy for curing fatal breakdowns and treating near-breakdowns. The new algorithms incorporating this strategy are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm, allowing it to overcome the difficulties encountered at a fatal breakdown or near-breakdown; we recall that, without them, the SR algorithm stops.
Finally, in another setting of structured matrices, we presented a robust algorithm, via the FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCD) of polynomials, for solving the problem of blind image deconvolution. Specifically, we designed an algorithm for computing the GCD of two bivariate polynomials. The new approach is based on the fast GCD algorithm for univariate polynomials, of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n²log(n)), where the size of the blurred images is n x n. Experimental results with synthetically blurred images illustrate the effectiveness of our approach.
|
267 |
Monitorování dynamických soustav s využitím piezoelektrických senzorů vibrací / Monitoring of dynamic systems with piezoelectric sensors. Svoboda, Lukáš (January 2020)
The aim of this diploma thesis is to describe the localization and load identification of dynamic systems using piezoelectric sensors. Finding methods that allow loads on simple systems to be evaluated is the key to their application in the structural health monitoring of more complex systems. The first part of the thesis presents the theory necessary for understanding the presented methods and their application in aerospace, civil engineering, the automobile industry, and train traffic. In these applications, wave-propagation methods and various types of neural-network methods are used for load identification. Loads can be evaluated using a time-reversal method, a method based on signal deconvolution, and a method based on the voltage amplitude ratio of the piezoelectric sensor. The next part describes these methods, the suitable placement for gluing a sensor, and the number of sensors each method requires. The methods were verified and compared on a simple experimental system. The following part presents a model of the piezoelectric sensor, which can be used to calculate the voltage output from the strain. To verify the methods, the passage of a train at a specific place on the railway was chosen as the test problem; the speed of the train and its load on the railway were calculated using these methods.
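As a hedged illustration of the deconvolution-based load identification mentioned above: if the sensor voltage is modeled as v = h * f, the convolution of an impulse response h with the force history f, then f can be estimated in the frequency domain with a regularized (Wiener-type) inverse. The impulse response, force pulse and noise level below are synthetic assumptions, not the thesis's experimental data.

```python
# Minimal sketch: recovering a force history from a sensor signal by
# regularized frequency-domain deconvolution.
import numpy as np

n = 1024
t = np.arange(n)
h = np.exp(-t / 40.0) * np.sin(2 * np.pi * t / 25.0)   # mock impulse response
f = np.zeros(n); f[100:110] = 1.0                      # mock force pulse
v = np.convolve(h, f)[:n] + 0.01 * np.random.randn(n)  # noisy measurement

H, V  = np.fft.rfft(h, n), np.fft.rfft(v, n)
lam   = 1e-2                                 # regularization vs. noise
F_hat = V * np.conj(H) / (np.abs(H)**2 + lam)
f_hat = np.fft.irfft(F_hat, n)               # estimated force history
```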
|
268 |
Multikanálová dekonvoluce obrazů / Multichannel Image Deconvolution. Bradáč, Pavel (January 2009)
This Master's Thesis deals with image restoration using deconvolution. The first part of the thesis explains the terms underlying deconvolution theory, such as the two-dimensional signal, the distortion model, noise and convolution. The second part deals with deconvolution methods based on the Bayesian approach, which rests on probability principles. The third part focuses on the Alternating Minimization Algorithm for Multichannel Blind Deconvolution. Finally, this algorithm is implemented in Matlab using the NAG C Library, followed by a comparison of different optimization methods (simplex, steepest descent, quasi-Newton), regularization forms (Tikhonov, Total Variation) and other parameters used by the deconvolution algorithm.
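A single-channel skeleton of the alternating idea is sketched below: with quadratic (Tikhonov) terms, both the image step and the kernel step have closed-form solutions in the Fourier domain, and a support/nonnegativity projection keeps the kernel estimate from collapsing. The thesis's multichannel algorithm uses stronger regularizers (Total Variation) and cross-channel information; this sketch shows only the alternating structure, under assumed circular convolution.

```python
# Minimal sketch: alternating minimization for blind deconvolution with
# Tikhonov terms. Each step solves a quadratic subproblem exactly via FFT.
import numpy as np

def am_blind_deconv(y, ksize=8, iters=20, lam=1e-3, mu=1e-3):
    Y = np.fft.fft2(y)
    h = np.zeros(y.shape); h[0, 0] = 1.0           # delta kernel at origin
    for _ in range(iters):
        H = np.fft.fft2(h)
        X = np.conj(H) * Y / (np.abs(H)**2 + lam)  # image step (x given h)
        H = np.conj(X) * Y / (np.abs(X)**2 + mu)   # kernel step (h given x)
        h = np.real(np.fft.ifft2(H))
        # project: small support around the origin (wrapped corners),
        # nonnegative entries, unit sum -- without such constraints plain
        # alternation drifts toward the trivial no-blur solution
        keep = np.zeros_like(h)
        for sl in [np.s_[:ksize], np.s_[-ksize:]]:
            for sl2 in [np.s_[:ksize], np.s_[-ksize:]]:
                keep[sl, sl2] = h[sl, sl2]
        h = np.clip(keep, 0, None)
        h /= h.sum() + 1e-12
    H = np.fft.fft2(h)
    X = np.conj(H) * Y / (np.abs(H)**2 + lam)      # final image estimate
    return np.real(np.fft.ifft2(X)), h

y = np.random.rand(64, 64)          # stand-in blurred observation
x_hat, h_hat = am_blind_deconv(y)
```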
|
269 |
Nestandardní úlohy v odstranění rozmazání obrazu / Image Deblurring in Demanding Conditions. Kotera, Jan (January 2020)
Title: Image Deblurring in Demanding Conditions
Author: Jan Kotera
Department: Institute of Information Theory and Automation, Czech Academy of Sciences
Supervisor: Doc. Ing. Filip Šroubek, Ph.D., DSc., Institute of Information Theory and Automation, Czech Academy of Sciences
Abstract: Image deblurring is a computer-vision task consisting of removing blur from an image; the objective is to recover the sharp image corresponding to the blurred input. If the nature and shape of the blur are unknown and must be estimated from the input image, image deblurring is called blind and naturally presents a more difficult problem. This thesis focuses on two primary topics related to blind image deblurring. In the first part we work with standard image deblurring based on the common convolution blur model and present a method of increasing the robustness of the deblurring to phenomena violating the linear acquisition model, such as intensity clipping caused by sensor saturation in overexposed pixels. If not properly taken care of, these effects significantly decrease the accuracy of the blur estimation and the visual quality of the restored image. Rather than tailoring the deblurring method explicitly for each particular type of acquisition-model violation, we present a general approach based on flexible automatic...
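One common way to tolerate the saturation violation described above, shown here as a hedged sketch rather than the thesis's actual method, is to exclude clipped pixels from the data term, for example by weighting a Richardson-Lucy iteration with a validity mask:

```python
# Minimal sketch: non-blind Richardson-Lucy deconvolution in which
# saturated pixels are masked out of the data term, so they no longer
# bias the estimate. Assumes intensities in [0, 1] and a known PSF h.
import numpy as np
from scipy.signal import fftconvolve

def masked_rl(y, h, iters=50, clip=0.98):
    mask = (y < clip).astype(float)      # 1 where the pixel is not clipped
    h_flip = h[::-1, ::-1]
    u = np.full_like(y, y.mean())        # flat initial estimate
    denom = fftconvolve(mask, h_flip, mode="same") + 1e-12
    for _ in range(iters):
        est = fftconvolve(u, h, mode="same") + 1e-12
        ratio = mask * y / est           # clipped pixels contribute nothing
        u *= fftconvolve(ratio, h_flip, mode="same") / denom
    return u
```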
|
270 |
Channel Probing for an Indoor Wireless Communications Channel. Hunter, Brandon (13 March 2003)
The statistics of the amplitude, time and angle of arrival of multipath components in an indoor environment are all necessary ingredients of the multipath models used to simulate the performance of spatial diversity in receive-antenna configurations. The model presented by Saleh and Valenzuela, extended by Spencer et al., includes all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel; adding this parameter allows spatial diversity at the transmitter, along with the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete-arrival processing are discussed, along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
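A minimal sketch of drawing arrivals from the Saleh-Valenzuela model underlying this work is given below: cluster arrivals follow a Poisson process with rate Λ, rays within a cluster a Poisson process with rate λ, and mean ray power decays exponentially with both cluster delay and ray delay. The parameter values are illustrative placeholders, not the estimates measured at 2.4 GHz.

```python
# Minimal sketch: generating (delay, amplitude) multipath arrivals from
# the Saleh-Valenzuela model. Ray powers are drawn as exponential
# (Rayleigh amplitude) around the doubly exponential mean-power profile.
import numpy as np

rng = np.random.default_rng(1)
LAMBDA, lam  = 1 / 300.0, 1 / 5.0   # cluster / ray arrival rates [1/ns]
GAMMA, gamma = 60.0, 20.0           # cluster / ray power-decay constants [ns]

arrivals = []
T = 0.0
for _ in range(8):                               # 8 clusters
    T += rng.exponential(1 / LAMBDA)             # cluster arrival time
    tau = 0.0
    for _ in range(20):                          # 20 rays per cluster
        tau += rng.exponential(1 / lam)          # ray delay within cluster
        mean_p = np.exp(-T / GAMMA) * np.exp(-tau / gamma)
        beta = np.sqrt(rng.exponential(mean_p))  # Rayleigh amplitude
        arrivals.append((T + tau, beta))
```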
|