191 |
Investigating the structure and dynamics of DNA with fluorescence and computational techniques. Smith, Darren Andrew, January 2015 (has links)
Nucleic acids, such as DNA, play an essential role in all known forms of life; however, despite their fundamental importance, there is still a significant lack of understanding surrounding their functional behaviour. This thesis explores the structure and dynamics of DNA by employing methods based on fluorescence and through the use of computational calculations. Time-resolved fluorescence experiments have been performed on dinucleotides containing 2-aminopurine (2AP) in various alcohol-water mixtures. 2AP, a fluorescent analogue of the nucleobase adenine, has been used extensively to investigate nucleic acids because of its ability to be incorporated into their structures with minimal perturbation and its high sensitivity to its local environment. Direct solvent effects on 2AP were established through measurements on the free fluorophore. Analysis of the complex fluorescence decays associated with the dinucleotides was challenging but has provided insight into their conformational dynamics. Solvent polarity was found to play a significant role in determining both photophysical and conformational properties in these systems. The complicated fluorescence decay of 2AP in nucleic acids highlights the need for accurate and unbiased analysis methods. Various time-resolved fluorescence analysis methods, including iterative reconvolution and the exponential series method, have been investigated with real and simulated data to obtain an overview of their benefits and limitations. The main outcome of the evaluation is that no single method is preferred in all situations and there is likely to be value in using a combination when there is ambiguity in the interpretation of the results. Regardless of the analysis technique used, the parameterised description of the observed fluorescence decay is meaningless if the underlying physical model is unrealistic. The advance of computational methods has provided a new means to rigorously test the viability of proposed models. 
Calculations have been performed at the M06-2X/6-31+G(d) level of theory to investigate the stability of 2AP-containing dinucleotides in conformations similar to those observed in the double-helical structure of DNA. The results help to explain the similarity of the time-resolved fluorescence behaviour of 2AP in dinucleotide and DNA systems but also bring to light subtle differences that could perhaps account for experimental discrepancies. The recent emergence of advanced optical microscopy techniques has offered the prospect of being able to directly visualise nucleic acid structure at the nanoscale but, unfortunately, limitations of existing labelling methods have hindered delivery of this potential. To address this issue, a novel strategy has been used to introduce reversible fluorescence photoswitching into DNA at high label density. Photophysical studies have implicated aggregation and energy-transfer as possible quenching mechanisms in this system, which could be detrimental to its future application. The reliability of fluorescence photoswitching was investigated at ensemble and single-molecule level and by performing optical lock-in detection imaging. These developments lay the foundations for improved and sequence-specific super-resolution microscopy of DNA, which could offer new insights into the 3D nanoscale structure of this remarkable biopolymer. In summary, the work presented in this thesis outlines important observations and developments that have been made in the study of the structure and dynamics of nucleic acids.
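The multi-exponential decay analysis discussed above can be illustrated with a minimal sketch (not the thesis's actual analysis code; the lifetimes, amplitudes, and grid are hypothetical, and the instrument response and photon noise are omitted for clarity): a biexponential decay is fitted by scanning a lifetime grid and solving the amplitudes linearly at each candidate pair.

```python
import numpy as np

# Synthetic biexponential fluorescence decay (hypothetical lifetimes in ns)
t = np.linspace(0.0, 20.0, 400)
tau_true, a_true = (0.5, 6.0), (0.7, 0.3)
decay = a_true[0] * np.exp(-t / tau_true[0]) + a_true[1] * np.exp(-t / tau_true[1])

# Scan pairs of lifetimes on a grid; for fixed lifetimes the amplitudes enter
# linearly, so they are recovered by ordinary least squares.
taus = np.arange(0.25, 10.01, 0.25)
best = None
for i, t1 in enumerate(taus):
    for t2 in taus[i + 1:]:
        basis = np.column_stack([np.exp(-t / t1), np.exp(-t / t2)])
        amps, *_ = np.linalg.lstsq(basis, decay, rcond=None)
        sse = float(np.sum((basis @ amps - decay) ** 2))
        if best is None or sse < best[0]:
            best = (sse, t1, t2, amps)

sse, tau1, tau2, amps = best
```

With real photon-counting data one would convolve the model with the measured instrument response (iterative reconvolution) and weight the residuals by Poisson statistics; the grid scan generalizes to the exponential series method when many fixed lifetimes are fitted with non-negative amplitudes.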
|
192 |
Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography. Sampson, Andrew, 30 April 2013 (has links)
This dissertation describes the application of two principled variance reduction strategies to increase the efficiency of two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied to two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference between highly correlated dose calculations in homogeneous and heterogeneous geometries. Particle transport in the heterogeneous geometry assumed a purely homogeneous environment, with altered particle weights accounting for the bias. Average efficiency gains of 37 and 60, relative to un-correlated Monte Carlo (UMC) calculations, are reported for the prostate and breast CTVs, respectively. To increase the efficiency further, up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC performs CMC on a low-resolution (LR) spatial grid and then interpolates the result onto a high-resolution (HR) voxel grid. The interpolated HR dose difference is then summed with a pre-computed HR homogeneous dose map. ICMC thus computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 sec and 0.39 sec, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution of the adjoint Boltzmann transport equation computed either with the discrete ordinates method (DOM) or with a MC-implemented forward-adjoint importance generator (FAIG).
ABFMC, implemented via either DOM or FAIG, was tested on a single elliptical water cylinder using a primary point source (PPS) and a phase-space source (PSS). The best performance was found using the PSS, which yielded average efficiency gains of 250 relative to non-weight-windowed MC using the PPS. Furthermore, computing 360 projections on a 40-by-30 pixel grid requires only 48 min on a single CPU core, allowing clinical use via parallel processing techniques.
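The efficiency gain from correlated sampling can be demonstrated with a toy one-dimensional attenuation model (an illustration only, with hypothetical response functions; nothing here reproduces the dissertation's transport code): the same particle histories are scored in both geometries, so the difference estimator has far lower variance than differencing two independent runs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

def dose_hom(x):            # toy homogeneous dose response
    return np.exp(-x)

def dose_het(x):            # toy heterogeneous response (slightly different attenuation)
    return np.exp(-1.05 * x)

x = rng.exponential(1.0, n)              # common histories for both geometries
cmc = dose_het(x) - dose_hom(x)          # correlated MC difference scores

x_indep = rng.exponential(1.0, n)        # fresh histories for the second run
umc = dose_het(x) - dose_hom(x_indep)    # uncorrelated difference

gain = umc.var(ddof=1) / cmc.var(ddof=1)   # efficiency gain per history
```

The same pattern underlies CMC: transporting in the homogeneous geometry and reweighting keeps the two dose estimates almost perfectly correlated, so their difference converges orders of magnitude faster than either dose estimate alone.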
|
193 |
Time dependent cone-beam CT reconstruction via a motion model optimized with forward iterative projection matching. Staub, David, 29 April 2013 (has links)
The purpose of this work is to present the development and validation of a novel method for reconstructing time-dependent, or 4D, cone-beam CT (4DCBCT) images. 4DCBCT can have a variety of applications in the radiotherapy of moving targets, such as lung tumors, including treatment planning, dose verification, and real time treatment adaptation. However, in its current incarnation it suffers from poor reconstruction quality and limited temporal resolution that may restrict its efficacy. Our algorithm remedies these issues by deforming a previously acquired high quality reference fan-beam CT (FBCT) to match the projection data in the 4DCBCT data-set, essentially creating a 3D animation of the moving patient anatomy. This approach combines the high image quality of the FBCT with the fine temporal resolution of the raw 4DCBCT projection data-set. Deformation of the reference CT is accomplished via a patient specific motion model. The motion model is constrained spatially using eigenvectors generated by a principal component analysis (PCA) of patient motion data, and is regularized in time using parametric functions of a patient breathing surrogate recorded simultaneously with 4DCBCT acquisition. The parametric motion model is constrained using forward iterative projection matching (FIPM), a scheme which iteratively alters model parameters until digitally reconstructed radiographs (DRRs) cast through the deforming CT optimally match the projections in the raw 4DCBCT data-set. We term our method FIPM-PCA 4DCBCT. Our algorithm was developed in three stages. In the first, we establish the mathematical groundwork for the algorithm and perform proof of concept testing on simulated data. In the second, we tune the algorithm for real world use; specifically, we improve our DRR algorithm to achieve maximal realism by incorporating physical principles of image formation combined with empirical measurements of system properties.
In the third stage we test our algorithm on actual patient data and evaluate its performance against gold standard and ground truth data-sets. In this phase we use our method to track the motion of an implanted fiducial marker and observe agreement with our gold standard data that is typically within a millimeter.
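The PCA step of such a motion model can be sketched as follows (synthetic displacement data with two invented motion modes; the real model is built from measured patient motion): the phase-by-voxel displacement matrix is decomposed by SVD, and a small number of eigenvectors suffices to describe the deformation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_phases = 500, 10

# Two hypothetical spatial motion modes (e.g. cranio-caudal and AP components)
mode1, mode2 = rng.standard_normal((2, n_vox))
phases = np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False)

# Displacement of every voxel at every breathing phase
D = 5.0 * np.outer(np.sin(phases), mode1) + 1.5 * np.outer(np.cos(2 * phases), mode2)

Dc = D - D.mean(axis=0)                  # centre over phases
U, S, Vt = np.linalg.svd(Dc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance captured per eigenvector
```

In FIPM-PCA 4DCBCT the eigenvector weights are then tied to the breathing surrogate through parametric functions, and those parameters are what the projection-matching optimization adjusts.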
|
194 |
Modelování závislostí v rezervování škod / Modeling dependencies in claims reserving. Kaderjáková, Zuzana, January 2014 (has links)
Generalized linear models (GLM) have lately received a lot of attention in the modelling of insurance data. However, violation of the assumption that the underlying data are independent often causes problems and misinterpretation of the results. The need for more flexible instruments has been voiced and, consequently, various proposals have been made. This thesis deals with GLM-based techniques that can handle correlated data sets. Use has been made of generalized linear mixed models (GLMM) and generalized estimating equations (GEE). The main aim of this thesis is to provide a solid statistical background and to perform a practical application in order to demonstrate and compare the features of the various models.
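Why independence violations matter can be seen in a small simulation (all figures hypothetical; this illustrates the problem GLMM and GEE address, not an actual reserving model): claim counts sharing a latent cluster effect make the naive i.i.d. standard error badly optimistic compared with a cluster-robust one.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, m = 200, 10                      # e.g. 200 origin periods x 10 cells

u = rng.normal(0.0, 0.5, n_clusters)         # shared latent effect per cluster
lam = np.exp(1.0 + u)[:, None] * np.ones(m)  # common within-cluster intensity
y = rng.poisson(lam)                         # correlated claim counts

# Naive SE pretends all n_clusters * m counts are independent
naive_se = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-robust SE acknowledges that cells within a cluster move together
cluster_means = y.mean(axis=1)
robust_se = cluster_means.std(ddof=1) / np.sqrt(n_clusters)
```

GEE with an exchangeable working correlation, or a GLMM with a cluster random effect, builds exactly this dependence into the estimation instead of patching the standard errors afterwards.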
|
195 |
Modélisation sémantique du cloud computing : vers une composition de services DaaS à sémantique incertaine / Semantic modeling for cloud computing: toward DaaS service composition with uncertain semantics. Malki, Abdelhamid, 23 April 2015 (has links)
Avec l'émergence du mouvement Open Data, des centaines de milliers de sources de données provenant de divers domaines (e.g., santé, gouvernementale, statistique, etc.) sont maintenant disponibles sur Internet. Ces sources de données sont accessibles et interrogées via des services cloud DaaS, et cela afin de bénéficier de la flexibilité, l'interopérabilité et la scalabilité que les paradigmes SOA et Cloud Computing peuvent apporter à l'intégration des données. Dans ce contexte, les requêtes sont résolues par la composition de plusieurs services DaaS. Définir la sémantique des services cloud DaaS est la première étape vers l'automatisation de leur composition. Une approche intéressante pour définir la sémantique des services DaaS est de les décrire comme étant des vues sémantiques à travers une ontologie de domaine. Cependant, la définition de ces vues sémantiques ne peut pas être toujours faite avec certitude, surtout lorsque les données retournées par un service sont trop complexes. Dans cette thèse, nous proposons une approche probabiliste pour représenter les services DaaS à sémantique incertaine. Dans notre approche, un service DaaS dont la sémantique est incertaine est décrit par plusieurs vues sémantiques possibles, chacune avec une probabilité. Les services ainsi que leurs vues sémantiques possibles sont représentées dans un registre de services probabiliste (PSR). Selon les dépendances qui existent entre les services, les corrélations dans PSR peuvent être représentées par deux modèles différents : le modèle Bloc-indépendant-disjoint (BID), et le modèle à base des réseaux bayésiens. En se basant sur nos modèles probabilistes, nous étudions le problème de l'interprétation d'une composition existante impliquant des services à sémantique incertaine. 
Nous étudions aussi le problème de la réécriture de requêtes à travers les services DaaS incertains, et nous proposons des algorithmes efficaces permettant de calculer les différentes compositions possibles ainsi que leurs probabilités. Nous menons une série d'expérimentations pour évaluer la performance de nos différents algorithmes de composition. Les résultats obtenus montrent l'efficacité et la scalabilité de nos solutions proposées. / With the emergence of the Open Data movement, hundreds of thousands of datasets from various concerns (e.g., healthcare, governmental, statistics, etc.) are now freely available on the Internet. A good portion of these datasets are accessed and queried through Cloud DaaS services to benefit from the flexibility, interoperability and scalability that the SOA and Cloud Computing paradigms bring to data integration. In this context, users' queries often require the composition of multiple Cloud DaaS services to be answered. Defining the semantics of DaaS services is the first step towards automating their composition. An interesting approach to defining the semantics of DaaS services is to describe them as semantic views over a domain ontology. However, defining such semantic views cannot always be done with certainty, especially when the data returned by a service are too complex. In this dissertation, we propose a probabilistic approach to model the semantic uncertainty of data services. In our approach, a DaaS service with uncertain semantics is described by several possible semantic views, each associated with a probability. Services along with their possible semantic views are represented in a probabilistic service registry (PSR). According to the dependencies between services, the correlations in the PSR can be represented by two different models: the Block-Independent-Disjoint (BID) model, and a directed probabilistic graphical model (a Bayesian network).
Based on our modeling, we study the problem of interpreting an existing composition involving services with uncertain semantics. We also study the problem of composing uncertain DaaS services to answer a user query, and propose efficient methods to compute the different possible compositions and their probabilities. We conduct a series of experiments to evaluate the performance of our composition algorithms. The obtained results show the efficiency and the scalability of our proposed solutions.
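Under the BID model the core computation is simple enough to sketch (a hypothetical two-service registry; service and view names are invented for illustration): each service is a block of disjoint candidate views, blocks are independent, so the probability of a composition interpretation is the product of the chosen views' probabilities.

```python
from itertools import product

# Hypothetical probabilistic service registry (PSR): candidate semantic
# views per DaaS service, with probabilities (disjoint within a block)
psr = {
    "S1": {"patients_by_disease": 0.7, "patients_by_treatment": 0.3},
    "S2": {"treatment_costs": 0.6, "drug_costs": 0.4},
}

# Enumerate every interpretation: one view per service, probability = product
interpretations = {}
for combo in product(*(psr[s].items() for s in psr)):
    views = tuple(view for view, _ in combo)
    prob = 1.0
    for _, p in combo:
        prob *= p
    interpretations[views] = prob

best = max(interpretations, key=interpretations.get)
```

When services are correlated, the Bayesian-network variant replaces this plain product with the network's joint distribution over views.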
|
196 |
De la frustration et du désordre dans les chaînes et les échelles de spins quantiques / Frustration and disorder in quantum spin chains and ladders. Lavarelo, Arthur, 19 July 2013 (has links)
Dans les systèmes de spins quantiques, la frustration et la basse dimensionnalité génèrent des fluctuations quantiques et donnent lieu à des phases exotiques. Cette thèse étudie un modèle d'échelle de spins avec des couplages frustrants le long des montants, motivé par les expériences sur le cuprate BiCu$_2$PO$_6$. Dans un premier temps, on présente une méthode variationnelle originale pour décrire les excitations de basse énergie d'une seule chaîne frustrée. Le diagramme de phase de deux chaînes couplées est ensuite établi à l'aide de méthodes numériques. Le modèle exhibe une transition de phase quantique entre une phase dimérisée et une phase à liens de valence résonnants (RVB). La physique de la phase RVB et en particulier l'apparition de l'incommensurabilité sont étudiées numériquement et par un traitement en champ moyen. On étudie ensuite les effets d'impuretés non-magnétiques sur la courbe d'aimantation et la loi de Curie à basse température. Ces propriétés magnétiques sont tout d'abord discutées à température nulle à partir d'arguments probabilistes. Puis un modèle effectif de basse énergie est dérivé dans la théorie de la réponse linéaire et permet de rendre compte des propriétés magnétiques à température finie. Enfin, on étudie l'effet d'un désordre dans les liens, sur une seule chaîne frustrée. La méthode variationnelle, introduite dans le cas non-désordonné, donne une image à faible désordre de l'instabilité de la phase dimérisée, qui consiste en la formation de domaines d'Imry-Ma délimités par des spinons localisés. Ce résultat est finalement discuté à la lumière de la renormalisation dans l'espace réel à fort désordre. / In quantum spin systems, frustration and low-dimensionality generate quantum fluctuations and give rise to exotic quantum phases. This thesis studies a spin ladder model with frustrating couplings along the legs, motivated by experiments on the cuprate BiCu$_2$PO$_6$.
First, we present an original variational method to describe the low-energy excitations of a single frustrated chain. Then, the phase diagram of two coupled chains is computed with numerical methods. The model exhibits a quantum phase transition between a dimerized phase and a resonating valence bond (RVB) phase. The physics of the RVB phase, and in particular the onset of incommensurability, are studied numerically and by a mean-field treatment. Afterwards, we study the effects of non-magnetic impurities on the magnetization curve and the Curie law at low temperature. These magnetic properties are first discussed at zero temperature with probabilistic arguments. Then a low-energy effective model is derived within linear response theory and is used to explain the magnetic properties at finite temperature. Eventually, we study the effect of bond disorder on a single frustrated chain. The variational method introduced in the non-disordered case gives a low-disorder picture of the dimerized-phase instability, which consists in the formation of Imry-Ma domains delimited by localized spinons. This result is finally discussed in the light of strong-disorder real-space renormalization.
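The dimerized phase of the frustrated chain has an exactly solvable landmark that makes a compact numerical check (a generic exact-diagonalization sketch, not the thesis's variational or numerical code): at J2 = J1/2, the Majumdar-Ghosh point, the ground states of the periodic chain are products of nearest-neighbour singlets with energy -3NJ1/8.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]], complex) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = op
    return reduce(np.kron, mats)

def j1j2_hamiltonian(n, j1, j2):
    """Periodic J1-J2 Heisenberg chain (J2 is the frustrating coupling)."""
    dim = 2**n
    H = np.zeros((dim, dim), complex)
    for dist, J in ((1, j1), (2, j2)):
        for i in range(n):
            j = (i + dist) % n
            for s in (sx, sy, sz):
                H += J * site_op(s, i, n) @ site_op(s, j, n)
    return H

# Majumdar-Ghosh point for N = 8: exact ground-state energy is -3*N/8 = -3
E0 = np.linalg.eigvalsh(j1j2_hamiltonian(8, 1.0, 0.5))[0]
```

Away from this special point the same construction, pushed to larger sizes with sparse methods, is how phase diagrams like the one above are mapped numerically.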
|
197 |
O equilíbrio correlacionado de Aumann e as convenções sociais. Santos, Rodrigo Prates dos, January 2008 (has links)
O principal objetivo deste trabalho é mostrar que uma convenção social está fortemente relacionada com o conceito de equilíbrio correlacionado. Através da interação de longo prazo e do aprendizado, os agentes podem chegar a um acordo, mesmo com suposições pouco restritivas e que possibilitem uma interpretação mais natural e realista do conceito de equilíbrio em Teoria dos Jogos. Inicialmente a suposição de conhecimento comum é apresentada de maneira formal e informal. O conceito de equilíbrio correlacionado é apresentado com exemplos. Finalmente, a relação entre o equilíbrio correlacionado e a convenção social é analisada. / The main purpose of this dissertation is to show that a social convention can be related to a correlated equilibrium. Through long-run interaction and learning, the players can reach an agreement, even if we relax the traditional assumptions of Game Theory, and we can find a more natural and plausible interpretation of equilibrium. Initially, the common knowledge assumption is presented in both formal and informal ways. The correlated equilibrium concept is presented with examples. Finally, the relation between correlated equilibrium and social convention is analyzed.
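Aumann's notion can be checked mechanically (a standard textbook example, the game of Chicken with hypothetical payoffs, not drawn from this dissertation): a distribution over action profiles is a correlated equilibrium if every recommended action is a best reply to the conditional distribution it induces over the opponent's play.

```python
import numpy as np

# Chicken: actions (Dare, Chicken); u1[i, j] is player 1's payoff
u1 = np.array([[0.0, 7.0],
               [2.0, 6.0]])
u2 = u1.T                                    # symmetric game

def is_correlated_eq(p, u1, u2, tol=1e-12):
    """Check the correlated-equilibrium incentive constraints for a 2x2 game."""
    for a in range(2):                       # player 1 is told to play row a
        cond = p[a]                          # (unnormalized) dist over columns
        for b in range(2):                   # candidate deviation
            if cond @ u1[b] > cond @ u1[a] + tol:
                return False
    for a in range(2):                       # player 2 is told to play column a
        cond = p[:, a]
        for b in range(2):
            if cond @ u2[:, b] > cond @ u2[:, a] + tol:
                return False
    return True

# Mediator ("traffic light"): never (Dare, Dare), the rest equally likely
p_device = np.array([[0.0, 1/3],
                     [1/3, 1/3]])
p_all_chicken = np.array([[0.0, 0.0],
                          [0.0, 1.0]])      # not an equilibrium: Dare pays
```

Under the mediated distribution each player expects payoff 5, above the symmetric mixed Nash payoff of 14/3, which is the sense in which a long-run convention can sustain correlated play.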
|
198 |
Comparing the Structural Components Variance Estimator and U-Statistics Variance Estimator When Assessing the Difference Between Correlated AUCs with Finite Samples. Bosse, Anna L, 01 January 2017 (has links)
Introduction: The structural components variance estimator proposed by DeLong et al. (1988) is a popular approach used when comparing two correlated AUCs. However, this variance estimator is biased and could be problematic with small sample sizes.
Methods: A U-statistics based variance estimator approach is presented and compared with the structural components variance estimator through a large-scale simulation study under different finite-sample size configurations.
Results: The U-statistics variance estimator was unbiased for the true variance of the difference between correlated AUCs regardless of the sample size and had lower RMSE than the structural components variance estimator, providing better type I error control and greater power. The structural components variance estimator produced increasingly biased variance estimates as the correlation between biomarkers increased.
Discussion: When comparing two correlated AUCs, it is recommended that the U-Statistics variance estimator be used whenever possible, especially for finite sample sizes and highly correlated biomarkers.
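The structural-components construction itself is short (a single-marker sketch in the usual DeLong setup; the data are hypothetical): each case and each control receives a placement-value component, and the AUC variance combines the two empirical component variances.

```python
import numpy as np

def delong_auc_var(cases, controls):
    """AUC and DeLong (1988) structural-components variance for one marker."""
    x = np.asarray(cases, float)[:, None]
    y = np.asarray(controls, float)[None, :]
    psi = (x > y).astype(float) + 0.5 * (x == y)   # Mann-Whitney kernel
    auc = psi.mean()
    v10 = psi.mean(axis=1)       # structural components of the cases
    v01 = psi.mean(axis=0)       # structural components of the controls
    var = v10.var(ddof=1) / len(v10) + v01.var(ddof=1) / len(v01)
    return auc, var

# Tiny hypothetical biomarker: case scores vs control scores
auc, var = delong_auc_var([1.0, 2.0, 3.0], [0.0, 1.5])
```

For the difference between two correlated AUCs the construction extends with the covariance between the two markers' components; the U-statistics estimator studied in the thesis differs in how these component variances are estimated, which is where the finite-sample behaviour diverges.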
|
199 |
離散型態配對資料模型建立探討 / A study of model building for discrete matched data. 吳東霖, Wu, Dong-Lin, Unknown Date (has links)
在實務上,複選題分析一直處於觀察樣本情形的階段;至於進行檢定以推估母體情形的過程,則幾乎沒有人考慮到。就算曾經試圖想作類似檢定,卻也常常找不到可供參考的文獻或是使用了不適當的分析方法。
本研究的主要目的在於探討各式各樣離散型態相關資料的分析方法，其中亦包含許多複選題的分析方法。幾乎每個方法皆附上範例來說明程式撰寫及分析過程，希望對有此需求的人能有所幫助。 / Problems with multiple responses are usually analyzed by observing only the sample proportions. People seldom make inferences based on the sample information, mostly because they do not know how to do so. Even those who do go beyond the stage of descriptive statistics might not work it out correctly.
In the study, we review statistical methods for analyzing dependent proportions, including multiple responses. Almost every method is supplemented with an example which explains the way a related SAS program is written and the way the output is analyzed and explained. We hope that the results presented here will be helpful to those who are engaged in any analysis of multiple responses.
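A minimal example of testing dependent proportions, the situation described above, is McNemar's test for two items answered by the same respondents (the counts are hypothetical; the thesis's worked examples use SAS, while this sketch uses Python):

```python
from math import erfc, sqrt

def mcnemar(table):
    """McNemar chi-square for two correlated (paired) binary responses.

    table[i][j] = number of respondents with item A = i and item B = j
    (1 = selected, 0 = not). Only the discordant cells carry information
    about the difference between the two marginal proportions.
    """
    b, c = table[1][0], table[0][1]          # discordant counts
    stat = (b - c) ** 2 / (b + c)
    p = erfc(sqrt(stat / 2.0))               # chi-square (1 df) tail probability
    return stat, p

# Hypothetical survey: 15 people chose A but not B, 5 chose B but not A
stat, p = mcnemar([[30, 5], [15, 50]])
```

Extending marginal-homogeneity tests of this kind to full multiple-response tables is precisely where the methods for dependent proportions reviewed in the thesis are needed.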
|
200 |
Weak-coupling instabilities of two-dimensional lattice electrons. Binz, Benedikt, 15 April 2002 (links) (PDF)
Two-dimensional electronic systems have attracted great interest, particularly since the discovery of high-temperature superconductivity. Here, we restrict ourselves to the study of an extended Hubbard model in the weak-coupling limit. In general, the electron gas undergoes a superconducting instability even without phonons. However, in the special case of a half-filled band, the Fermi surface is nested and lies at a Van Hove singularity. This situation leads to a competition between six different instabilities. Besides $s$- and $d$-wave superconductivity, one finds spin and charge density waves as well as two phases characterized by circulating charge and spin currents, respectively. The renormalization-group formalism is presented by relating the idea of the "parquet summation" to the more modern concept of the Wilsonian effective action. The result is a rich phase diagram as a function of the model's interaction. This phase diagram is exact in the limit of infinitely weak interaction, since in this case the transition lines are fixed by symmetries of the model. The low-temperature behaviour of the spin susceptibility and of the charge compressibility completes the physical picture of these instabilities. It turns out that the Fermi surface has a general tendency to deform spontaneously, but the nesting is not destroyed. In summary, the weak-coupling Hubbard model reproduces two essential properties of the cuprates: an antiferromagnetic phase at half filling and $d$-wave superconductivity in the doped case. However, it does not explain the unusual properties of the metallic state in the underdoped regime. A systematic extension of the perturbative approach could help in understanding these properties better, but remains difficult since the necessary techniques are not yet fully developed.
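Both special features of the half-filled band, perfect nesting and the Van Hove singularity, can be seen in a few lines for the square-lattice tight-binding dispersion (a generic illustration of the band structure, not the author's renormalization-group code):

```python
import numpy as np

t = 1.0
nk = 800
k = -np.pi + 2 * np.pi * np.arange(nk) / nk          # Brillouin-zone grid
kx, ky = np.meshgrid(k, k)
eps = -2 * t * (np.cos(kx) + np.cos(ky))             # square-lattice band

# Perfect nesting at half filling: eps(k + Q) = -eps(k) for Q = (pi, pi)
nested = np.roll(eps, (nk // 2, nk // 2), axis=(0, 1))

# Density of states: the Van Hove singularity sits at the band centre
dos, edges = np.histogram(eps.ravel(), bins=80, range=(-4 * t, 4 * t), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
dos_centre = dos[np.argmin(np.abs(centers))]         # near E = 0
dos_mid = dos[np.argmin(np.abs(centers - 2 * t))]    # generic band energy
```

These two coinciding features are what force the six instabilities to compete at weak coupling; away from half filling the nesting condition fails and the generic superconducting instability wins.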
|