11
Nachweis von phosphoryliertem und trunkiertem Alpha-Synuclein in Hautbiopsien von Patienten in frühen Stadien des idiopathischen M. Parkinson / Phospho-alpha-synuclein in dermal nerve fibres of patients with early stages of Parkinson's disease Schulmeyer, Lena January 2020 (has links) (PDF)
The aim of the study was to investigate phosphorylated and truncated alpha-synuclein in dermal nerve fibres and to determine whether the post-translational modifications phosphorylation and truncation of alpha-synuclein are suitable as potential biomarkers for the diagnosis of Parkinson's disease. The study is distinctive in two respects: only patients in early disease stages (Hoehn and Yahr stages I and II) of idiopathic Parkinson's disease were examined, and serial sections were used in an attempt to increase the detection rate.
In summary, phosphorylated alpha-synuclein has great potential as a biomarker for the diagnosis and differential diagnosis of Parkinson's disease, and serial sections can significantly increase the detection rate.
In immunofluorescence double staining with the anti-phospho-alpha-synuclein antibody from BioLegend® (San Diego, USA), the protein was detected in nearly 80% of patients (detection rate in Hoehn and Yahr stage I: 58.3%; stage II: 93.8%); with the anti-phospho-alpha-synuclein antibody from Prothena Biosciences Inc (San Francisco, USA), it was detected in only slightly more than 46% of patients (detection rate in Hoehn and Yahr stage I: 41.7%; stage II: 50%).
In Hoehn and Yahr stage I, however, the sensitivity is not yet sufficiently high. Since differentiating between atypical parkinsonian syndromes and idiopathic Parkinson's disease is clinically very difficult, especially in the early stages of the disease, the early diagnostic potential of a biomarker is particularly important. The detection rate in Hoehn and Yahr stage I would have to be increased further before phospho-alpha-synuclein could be used meaningfully as a biomarker in the clinic.
12
Solving strongly coupled quantum field theory using Lightcone Conformal Truncation Xin, Yuan 03 December 2020 (has links)
Quantum Field Theory (QFT) is the language that describes a wide spectrum of physics. However, it is notoriously hard to solve in the strongly coupled regime. We approach this problem with an old quantum-mechanical method: keep a finite number of states and diagonalize the Hamiltonian as a finite-size matrix. To study a QFT, we take the Hamiltonian to be that of the conformal field theory at the ultraviolet fixed point of the theory's renormalization group flow, deformed by a relevant operator. We use a recent framework known as Lightcone Conformal Truncation (LCT), which combines a conformal basis with lightcone quantization. As an application of the method, we study the two-dimensional supersymmetric (SUSY) Gross-Neveu-Yukawa model. The model is expected to have a critical point in the universality class of the tricritical Ising model, a massive phase, and a massless SUSY-breaking phase. We use LCT to compute the spectrum and the spectral density of the theory at all couplings and map the entire phase diagram.
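The numerical core of the method described above (truncate to a finite basis and diagonalize the Hamiltonian as a finite matrix) can be illustrated with a generic toy sketch. The example below deforms a harmonic oscillator by a quartic term in a truncated occupation-number basis; the cutoff and coupling are illustrative assumptions, and this is not the LCT conformal basis or the Gross-Neveu-Yukawa model itself.

```python
import numpy as np

def truncated_hamiltonian(n_max, g):
    """H = H0 + g*x^4 for a unit-frequency oscillator, truncated to n_max basis states."""
    n = np.arange(n_max)
    h0 = np.diag(n + 0.5)                 # free (UV) Hamiltonian, diagonal in this basis
    a = np.diag(np.sqrt(n[1:]), k=1)      # annihilation operator in the truncated basis
    x = (a + a.T) / np.sqrt(2.0)          # position operator
    return h0 + g * np.linalg.matrix_power(x, 4)

# Convergence of the low-lying spectrum as the truncation is raised
for n_max in (20, 40, 80):
    evals = np.linalg.eigvalsh(truncated_hamiltonian(n_max, g=0.5))
    print(n_max, evals[:4])
```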
13
Hardware Implementation of Post-Compression Rate-Distortion Optimization for EBCOT in JPEG2000 Kordik, Andrew Michael 22 August 2011 (has links)
No description available.
14
Revised Correlations of the Ordovician (Katian, Richmondian) Waynesville Formation of Ohio, Indiana and Kentucky Aucoin, Christopher D. January 2014 (has links)
No description available.
15
Normalization of Complex Mode Shapes by Truncation of the Alpha-Polynomial Niranjan, Adityanarayan C. January 2015 (has links)
No description available.
16
On the Tightness of the Balanced Truncation Error Bound with an Application to Arrowhead Systems Reiter, Sean Joseph 28 January 2022 (has links)
Balanced truncation model reduction for linear systems yields reduced-order models that satisfy a well-known error bound in terms of a system's Hankel singular values. This bound is known to hold with equality under certain conditions, such as when the full-order system is state-space symmetric.
In this work, we derive more general conditions in which the balanced truncation error bound holds with equality. We show that this holds for single-input, single-output systems that exhibit a generalized type of state-space symmetry based on the sign parameters corresponding to a system's Hankel singular values. We prove an additional result that shows how to determine this state-space symmetry from the arrowhead realization of a system, if available. In particular, we provide a formula for the sign parameters of an arrowhead system in terms of the off-diagonal entries of its arrowhead realization.
We then illustrate these results with an example of an arrowhead system arising naturally in power systems modeling that motivated our study. / Master of Science / Mathematical modeling of dynamical systems provides a powerful means for studying physical phenomena. Due to the complexities of real-world problems, many mathematical models face computational difficulties arising from the cost of accurate modeling. Model-order reduction of large-scale dynamical systems circumvents this by approximating the large-scale model with a "smaller" one that still accurately describes the problem of interest. Balanced truncation model reduction for linear systems is one such example, yielding reduced-order models that satisfy a tractable upper bound on the approximation error. This work investigates conditions under which this bound is known to hold with equality, becoming an exact formula for the error in reduction. We additionally show how to determine these conditions for a special class of linear dynamical systems known as arrowhead systems, which arise in certain applications of network modeling. We provide an example of one such system from power systems modeling that motivated our study.
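As a hedged illustration of the bound discussed in this abstract, the sketch below computes the Hankel singular values of a small stable state-space system from its controllability and observability Gramians and evaluates the standard balanced truncation error bound, twice the sum of the neglected Hankel singular values. The matrices A, B, C are arbitrary illustrative values, not the arrowhead systems studied here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A small stable SISO system (illustrative values only)
A = np.array([[-1.0, 0.2, 0.0],
              [0.0, -2.0, 0.3],
              [0.0, 0.0, -3.0]])
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[0.7, 0.1, 0.4]])

# Gramians: A P + P A^T + B B^T = 0  and  A^T Q + Q A + C^T C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values are the square roots of the eigenvalues of P Q
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]

# Balanced truncation to order r satisfies ||G - G_r||_Hinf <= 2 * (sum of neglected HSVs)
r = 1
bound = 2.0 * hsv[r:].sum()
print("Hankel singular values:", hsv)
print("error bound for order", r, ":", bound)
```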
17
Truncation Error Based Mesh Adaptation and its Application to Multi-Mesh CFD Jackson, Charles Wilson, V 18 July 2019 (has links)
One of the largest sources of error in a CFD simulation is the discretization error. One of the least computationally expensive ways of reducing the discretization error in a simulation is by performing mesh adaptation. In this work, the mesh adaptation processes are driven by the truncation error, which is the local source of the discretization error. Because this work is focused on methods for structured grids, r-adaptation is used as opposed to h-adaptation.
A new method for performing r-adaptation based on an optimization process is developed and presented here. This optimization process was applied to simple 1D and 2D Euler problems to test the approach. The mesh optimization approach is compared to the more common equidistribution approach to determine which produces more accurate results and what costs are associated with each. It is found that the optimization process is able to reduce the truncation error more than equidistribution. However, in the 2D cases optimization does not reduce the discretization error sufficiently to warrant the significant cost of the approach. This indicates that the much cheaper equidistribution process provides a cost-effective means of reducing the discretization error in the solution. Further, equidistribution is able to achieve the bulk of the potential reductions in discretization error possible through r-adaptation.
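A minimal one-dimensional sketch of the equidistribution approach mentioned above: nodes are relocated so that each cell carries an equal share of an error indicator. The indicator used here is a generic stand-in weight, an assumption for illustration, not the truncation error estimate used in this work.

```python
import numpy as np

def equidistribute(x, w, n_new=None):
    """Relocate 1D mesh nodes so each cell holds an equal share of the weight w(x).

    x: current node locations (monotone increasing); w: positive error indicator at the nodes.
    """
    n_new = len(x) if n_new is None else n_new
    # Cumulative "error mass" along the mesh (trapezoidal rule per cell)
    cell_mass = 0.5 * (w[1:] + w[:-1]) * np.diff(x)
    M = np.concatenate(([0.0], np.cumsum(cell_mass)))
    # Place new nodes at equal increments of the cumulative mass
    targets = np.linspace(0.0, M[-1], n_new)
    return np.interp(targets, M, x)

# Example: cluster nodes where a stand-in error indicator is large (near x = 0.3)
x = np.linspace(0.0, 1.0, 41)
w = 1.0 + 50.0 * np.exp(-((x - 0.3) / 0.05) ** 2)
print(np.round(equidistribute(x, w), 3))
```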
This work also develops a new framework for reducing the cost of performing truncation error based r-adaptation, which also addresses some of the issues associated with r-adaptation. In this framework, the adaptation is performed on a coarse mesh, where it is faster, to create a mapping function for that mesh; the mapping is then evaluated on a mesh fine enough to meet the error target. The framework is applied to 2D Euler and 2D laminar Navier-Stokes problems and shown to be the most cost-effective way to meet a desired error target.
Finally, the multi-mesh CFD method is introduced and applied to a wide variety of problems, from a quasi-1D nozzle to 2D laminar and turbulent boundary layers. The multi-mesh method allows the system of equations to be solved on a system of meshes. With this method, each equation is solved on a mesh that is adapted specifically for it, meaning that more accurate solutions for each equation can be obtained. This work shows that, for certain problems, the multi-mesh approach is able to achieve more accurate results in less time compared to using a single mesh. / Doctor of Philosophy / Computational fluid dynamics (CFD) describes a method of numerically solving equations that attempt to model the behavior of a fluid. As computers have become cheaper and more powerful and the software has become more capable, CFD has become an integral part of the engineering process. One of the goals of the field is to be able to bring these higher-fidelity simulations into the design loop earlier. Ideally, using CFD earlier in the design process would allow design engineers to create new innovative designs with less programmatic risk. Likewise, it is also becoming necessary to use these CFD tools later in the final design process to replace some physical experiments, which can be expensive, unsafe, or infeasible to run. Both of these goals require the CFD codes to meet the accuracy requirements for the results as fast as possible. This work discusses several different methods for improving the accuracy of the simulations as well as ways of obtaining these more accurate results at the lowest cost. In CFD, the governing equations modeling the flow behavior are solved on a computer. As a result, these continuous differential equations must be approximated as a system of discrete equations so that they can be solved on a computer. These approximations result in discretization error, the difference between the exact solutions to the discrete and continuous equations, which is typically the largest type of numerical error in a CFD solution. The source of the discretization error is the truncation error, which is composed of the terms left out of the approximations made when discretizing the continuous equations. Thus, if the truncation error can be reduced, the discretization error in the solution should also be reduced. In this work, several different ways of reducing this truncation error through mesh adaptation are discussed, including the use of optimization methods. These mesh optimization methods are compared to a more common way of performing adaptation, namely equidistribution. It is determined that equidistribution is able to reduce the discretization error by a similar amount while being significantly faster than mesh optimization. This work also presents a framework for making the adaptation process faster overall by performing the adaptation on a coarse mesh and then refining the mesh enough to meet the error tolerance for the application. This framework was the cheapest method investigated for meeting a given error target. This work also introduces a new technique called multi-mesh CFD, which allows each equation (conservation of mass, momentum, energy, etc.) to be solved on a separate mesh. This allows each equation to be solved on a mesh that is specifically adapted for it, resulting in a more accurate solution. Here, it is shown that, for certain problems, the multi-mesh technique is able to obtain a solution with lower error than using only a single mesh. This work also shows that these more accurate results can be obtained in less time using multiple meshes than on a single mesh.
18
Application of r-Adaptation Techniques for Discretization Error Improvement in CFD Tyson, William Conrad 29 January 2016 (has links)
Computational fluid dynamics (CFD) has proven to be an invaluable tool for both engineering design and analysis. As the performance of engineering devices becomes more reliant upon the accuracy of CFD simulations, it is necessary not only to quantify but also to reduce the numerical error present in a solution. Discretization error is often the primary source of numerical error. Discretization error is introduced locally into the solution by truncation error. Truncation error represents the higher-order terms in an infinite series which are truncated during the discretization of the continuous governing equations of a model. Discretization error can be reduced through uniform grid refinement, but this is often impractical for typical engineering problems. Grid adaptation provides an efficient means for improving solution accuracy without the exponential increase in computational time associated with uniform grid refinement. Solution accuracy can be improved through local grid refinement, often referred to as h-adaptation, or by node relocation in the computational domain, often referred to as r-adaptation. The goal of this work is to examine the effectiveness of several r-adaptation techniques for reducing discretization error. A framework for geometry preservation is presented, and truncation error is used to drive adaptation. Sample problems include both subsonic and supersonic inviscid flows. Discretization error reductions of up to an order of magnitude are achieved on adapted grids. / Master of Science
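Since truncation error is the quantity driving the adaptation described above, a simple order-of-accuracy check illustrates what that error looks like in practice. The sketch below verifies the second-order truncation error of a central difference by halving the spacing; it is a generic illustration, not tied to the Euler solvers used in this work.

```python
import numpy as np

# Central difference f'(x) ~ (f(x+h) - f(x-h)) / (2h) has a leading truncation error
# term on the order of (h^2 / 6) * f'''(x), so the error should drop ~4x per halving of h.
f, df = np.sin, np.cos
x0 = 1.0

for h in (0.1, 0.05, 0.025, 0.0125):
    approx = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    err = abs(approx - df(x0))
    print(f"h = {h:7.4f}   error = {err:.3e}")   # second-order convergence
```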
19
Truncated Data Problems In Helical Cone-Beam Tomography Anoop, K P 06 1900 (has links)
This report delves into two of the major truncated data problems in helical cone-beam tomography: axial truncation and lateral truncation. The problem of axial truncation, classically known as the long object problem, was a major challenge in the development of helical scan tomography. Generalization of the Feldkamp (FDK) method for the circular scan to the helical scan trajectory was known to give reasonable solutions to the problem. The FDK methods are approximate in nature and hence provide only an approximate solution to the long object problem. Recently, many methods which provide an exact solution to this problem have been developed, the major breakthrough being Katsevich's algorithm, which is exact, efficient, and requires a smaller detector area compared to the Feldkamp methods. The first part of the report deals with implementation strategies for methods capable of handling axial truncation. Here, we specifically look at Katsevich's exact and efficient solution to the long object problem and the class of approximate solutions provided by the generalized FDK formulae.
The latter half of the report looks at the lateral truncation problem and suggests new methods to handle such truncation in helical scan CT. Simulation results for reconstruction with laterally truncated projection data, assuming it to be complete, show severe artifacts which even penetrate into the field of view (FOV). A row-by-row data completion approach using Linear Prediction is introduced for truncated helical scan data. An extension of this technique, known as the Windowed Linear Prediction approach, is also introduced. The efficacy of both techniques is shown using simulations with standard phantoms. Various image quality measures for the resulting reconstructed images are used to evaluate the performance of the proposed methods against an existing technique.
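A minimal sketch of the row-by-row linear prediction completion described above: autoregressive coefficients are fitted to the available part of a truncated detector row by least squares and used to extrapolate the missing lateral samples. The model order and the synthetic row are illustrative assumptions, not the settings used in this work.

```python
import numpy as np

def lp_extend(row, n_missing, order=8):
    """Extrapolate a laterally truncated projection row with an AR(order) linear predictor."""
    # Least-squares fit of row[k] ~ sum_j a[j] * row[k - 1 - j]
    X = np.column_stack([row[order - 1 - j:len(row) - 1 - j] for j in range(order)])
    a, *_ = np.linalg.lstsq(X, row[order:], rcond=None)
    out = list(row)
    for _ in range(n_missing):
        out.append(np.dot(a, out[-1:-order - 1:-1]))   # predict from the last `order` samples
    return np.asarray(out)

# Synthetic truncated row (a smooth profile cut off on the right), for illustration only
t = np.linspace(0, np.pi, 200)
full = np.sin(t) ** 2
completed = lp_extend(full[:150], n_missing=50)
print(np.max(np.abs(completed[150:] - full[150:])))   # rough extrapolation error
```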
Motivated by a study of the autocorrelation and partial autocorrelation functions of the projection data, the use of a non-stationary linear model, the ARIMA model, is proposed for data completion. The new model is first validated in the 2D truncated data situation. A method of incorporating the parallel-beam data consistency condition into this new method is also considered. Performance evaluation of the new method with the consistency condition shows that it can outperform the existing techniques. Simulation experiments show the efficacy of the ARIMA model for data completion in both the 2D and 3D truncated data scenarios. The model is shown to work well for the laterally truncated helical cone-beam case.
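A hedged sketch of the ARIMA-based completion, using the statsmodels library: a non-stationary ARIMA model is fitted to the observed part of a projection row and the truncated samples are forecast. The (p, d, q) order and the synthetic row are illustrative assumptions, and the data consistency condition discussed above is not included.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic laterally truncated projection row (illustrative only)
t = np.linspace(0, np.pi, 200)
full = np.sin(t) ** 2
observed, n_missing = full[:150], 50

# Fit a non-stationary ARIMA model to the observed samples and forecast the missing ones
result = ARIMA(observed, order=(4, 1, 1)).fit()   # (p, d, q) chosen for illustration
forecast = result.forecast(steps=n_missing)

print(np.max(np.abs(forecast - full[150:])))       # rough completion error
```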
20
Use of microcomputers in mathematics in Hong Kong higher education Pong, Tak-Yun G. January 1988 (has links)
Since the invention of computers some 40 years ago and the introduction of microcomputers in 1975, computers have come to play an active role in educational processes, altering the pattern of interaction between teacher and student in the classroom. Computer-assisted learning has been seen as a revolution in education. In this research, the author has studied the impact of using microcomputers on mathematical education, particularly at the Hong Kong tertiary level, from different perspectives. Two computer software packages have been developed on the microcomputer. The choice of topic for the computer-assisted learning material was informed by earlier surveys of students, who thought that computers could give very accurate solutions to calculations. The two software packages, which demonstrate on the spot the error that would be incurred by the computer, have been used by the students. Both are interactive and exploit the advantages of the microcomputer over other teaching media, such as its graphics facility and random number generator, to draw the students' attention to the errors that may arise when computers are used for numerical solutions. Much emphasis is put on the significance and effectiveness of using computer packages in learning and teaching. Measurements are based on questionnaires, conversations with students, and tests on content material after the packages have been used. Feedback and subjective opinions on using computers in mathematical education have also been obtained from both students and other teachers. The research then attempts to examine the suitability of applying computer-assisted learning in the Hong Kong education sector. Some studies of the comments made by students who participated in the learning process are undertaken. The successes and failures, in terms of student accomplishment and interest in the subject area as a result of using a software package, are described. Suggestions and recommendations are given in the concluding chapter.
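The teaching packages described above centre on showing students that computer arithmetic is not exact. A minimal modern sketch of the same idea is shown below (the original packages ran on 1980s microcomputers and were not written in Python): accumulated rounding error and catastrophic cancellation both contradict the expectation that computers give very accurate solutions.

```python
import math

# Rounding error accumulates: summing 0.1 ten times does not give exactly 1.0
total = sum(0.1 for _ in range(10))
print(total, total == 1.0)            # 0.9999999999999999 False

# Catastrophic cancellation: subtracting nearly equal numbers loses precision
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2    # analytically ~0.5, numerically far off
stable = 2.0 * math.sin(x / 2.0)**2 / x**2
print(naive, stable)
```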