1 |
XTREND: A computer program for estimating trends in the occurrence rate of extreme weather and climate events. Mudelsee, Manfred. 05 January 2017
XTREND consists of the following methodological parts: time interval extraction (Part 1), to analyse different portions of a time series; extreme event detection (Part 2), with robust smoothing; magnitude classification (Part 3), by hand; occurrence rate estimation (Part 4), with kernel functions; and bootstrap simulations (Part 5), to estimate confidence bands around the occurrence rate. You work interactively with XTREND (parameter adjustment, calculation, graphics) to build intuition for your data. Although computing time is usually acceptable (less than a few minutes) for typical data sizes (fewer than, say, 1000 points) on modern machines, parameters should be adjusted carefully to avoid spurious results on the one hand and excessive computing times on the other. This report helps you to achieve that. It explains the statistical concepts used, but generally with limited detail; consult the given references (which include some textbooks) for a deeper understanding.
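XTREND itself is a standalone program, but the logic of Parts 4 and 5 can be illustrated compactly. The following Python sketch estimates the occurrence rate with a Gaussian kernel and wraps it in a pointwise percentile bootstrap confidence band; the function names, the choice of a Gaussian kernel without boundary correction, and the example event dates are illustrative assumptions, not taken from the XTREND documentation. The bandwidth h plays the role of the smoothing parameter adjusted interactively: too small a value produces spurious wiggles, too large a value oversmooths the trend.

```python
import numpy as np

def occurrence_rate(t_grid, event_times, h):
    """Gaussian-kernel estimate of the occurrence rate lambda(t), in events per time unit."""
    z = (t_grid[:, None] - event_times[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

def bootstrap_band(t_grid, event_times, h, n_boot=2000, alpha=0.1, seed=None):
    """Pointwise percentile confidence band from bootstrap resampling of the event times."""
    rng = np.random.default_rng(seed)
    rates = np.empty((n_boot, t_grid.size))
    for b in range(n_boot):
        resample = rng.choice(event_times, size=event_times.size, replace=True)
        rates[b] = occurrence_rate(t_grid, resample, h)
    lower, upper = np.percentile(rates, [50 * alpha, 100 - 50 * alpha], axis=0)
    return lower, upper

# Usage with made-up extreme-event dates (e.g., flood years):
events = np.array([1852.0, 1861.0, 1876.0, 1888.0, 1910.0, 1925.0, 1997.0, 2002.0])
grid = np.linspace(1850.0, 2010.0, 161)
rate = occurrence_rate(grid, events, h=25.0)
lower, upper = bootstrap_band(grid, events, h=25.0)
```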
|
2 |
Optimierung und Objektivierung der DNA-Biegewinkelmessung zur Untersuchung der initialen Schadenserkennung von Glykosylasen im Rahmen der Basen-Exzisions-Reparatur / Optimisation and standardisation of DNA bend angle measurements: application of automated DNA bend angle measurements to initial damage detection by base excision repair glycosylases. Mehringer, Christian Felix. January 2021
The focus of this thesis was to test the general applicability of a model for initial lesion detection by base excision repair (BER) glycosylases. The work built on previous results from the Tessmer laboratory on the human BER glycosylases hTDG and hOGG1 (Büchner et al. [1]). Based on those data, a model for initial lesion detection by glycosylases had been proposed that describes damage recognition as a necessary match between the passive bending at the site of damage and the active bending induced by the damage-specific glycosylase. An essential component of this work was also the establishment of automated measurement software for objective bend angle measurements on DNA strands in atomic force microscopy (AFM) images. This was achieved with various image processing programs and custom-written MATLAB software, and the procedure was extended to the measurement of DNA bend angles in protein-DNA complexes. In particular, the automated bend angle analysis was applied to AFM images of the glycosylase MutY bound to non-specific DNA and to MutY target lesions (oxoG:A and G:A), as well as to other DNA lesions (oxoG:C and ethenoA:T). In these analyses, the DNA bending induced by MutY in undamaged DNA was measured and compared with the bending at the respective target lesions. The agreement between the conformations of target lesions and repair complexes also for this additional glycosylase (as already shown for hTDG and hOGG1 in the work cited above) improves our understanding of DNA glycosylase damage search and recognition by supporting the general validity of bending-energy-based initial damage detection by DNA glycosylases. The established measurement software can also be used to measure DNA bending by other protein systems in an unbiased manner and on a high-throughput scale, and thus contributes to the effective acquisition of objective data.
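The thesis software is implemented in MATLAB and includes tracing of the DNA backbone from AFM topographs; the geometric core of a bend angle measurement can nevertheless be sketched in a few lines. The following Python snippet is a hypothetical illustration, assuming the backbone has already been traced to (x, y) coordinates and using the common convention that the bend angle is the deviation from a straight (180 degree) configuration.

```python
import numpy as np

def bend_angle(trace, apex_idx, arm_len):
    """Bend angle in degrees at trace[apex_idx] of a traced DNA backbone.

    trace is an (N, 2) array of x/y positions along the strand; arm_len is the
    number of trace points to step along each arm. A straight molecule yields
    ~0 degrees; stronger bending yields larger values.
    """
    a1 = trace[apex_idx - arm_len] - trace[apex_idx]  # vector along the first arm
    a2 = trace[apex_idx + arm_len] - trace[apex_idx]  # vector along the second arm
    cos_theta = np.dot(a1, a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return 180.0 - theta  # deviation from the straight (180 degree) configuration
```

Applied to every traced molecule, such a measurement yields bend angle distributions that can be compared between damaged and undamaged DNA, or between bare lesions and protein-DNA complexes.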
|
3 |
Hardening High-Assurance Security Systems with Trusted Computing. Ozga, Wojciech. 12 August 2022
We are living in the time of the digital revolution, in which the world we know changes beyond recognition every decade. The positive aspect is that these changes also drive progress in the quality and availability of digital assets crucial for our societies. To name a few examples: broadly available communication channels allowing quick exchange of knowledge over long distances, systems controlling the automatic sharing and distribution of renewable energy in international power grid networks, easily accessible applications for early disease detection enabling self-examination without burdening the health service, and governmental systems assisting citizens in settling official matters without leaving their homes. Unfortunately, digitalization also opens opportunities for malicious actors to threaten our societies if they gain control over these assets after successfully exploiting vulnerabilities in the complex computing systems that underpin them. Protecting these systems, which are called high-assurance security systems, is therefore of utmost importance.
For decades, humanity has struggled to find methods to protect high-assurance security systems. Advancements in the computing systems security domain led to the popularization of hardware-assisted security techniques, nowadays available in commodity computers, which opened perspectives for building more sophisticated defense mechanisms at lower cost. However, none of these techniques is a silver bullet: each one targets particular use cases, suffers from limitations, and is vulnerable to specific attacks. I argue that some of these techniques are synergistic and, when used together, help overcome limitations and mitigate specific attacks. My reasoning is supported by regulations that legally bind the owners of high-assurance security systems to provide strong security guarantees. These requirements can be fulfilled with the help of diverse technologies that have been standardized in recent years.
In this thesis, I introduce new techniques for hardening high-assurance security systems that execute in remote execution environments, such as public and hybrid clouds. I implemented these techniques as part of a framework that provides technical assurance that high-assurance security systems execute in a specific data center, on top of a trustworthy operating system, in a virtual machine controlled by a trustworthy hypervisor, or in strong isolation from other software. I demonstrated the practicality of my approach by leveraging the framework to harden real-world applications, such as machine learning applications in the eHealth domain. The evaluation shows that the framework is practical: it induces low performance overhead (<6%), supports software updates, requires no changes to the legacy application's source code, and can be tailored to individual trust boundaries with the help of security policies.
The framework consists of a decentralized monitoring system that offers better scalability than traditional centralized monitoring systems. Each monitored machine runs a piece of code that verifies that the machine's integrity and geolocation conform to the given security policy. This piece of code, which serves as a trusted anchor on that machine, executes inside a trusted execution environment (Intel SGX) to protect itself from the untrusted host, and uses trusted computing techniques, such as the trusted platform module (TPM), secure boot, and the integrity measurement architecture, to attest to the load-time and runtime integrity of the surrounding operating system running on a bare-metal machine or inside a virtual machine. The trusted anchor implements my novel, formally proven protocol, which enables detection of the TPM cuckoo attack.
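As a rough illustration of the trusted anchor's policy check, the following Python sketch verifies file digests and a reported data-center identifier against a security policy. It is a simulation only: the policy format and function names are assumptions for illustration, and the files are hashed directly, whereas in the framework the measurements come from TPM-backed attestation and the check runs inside an SGX enclave.

```python
import hashlib

def sha256_file(path):
    """SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def conforms_to_policy(policy, site_id):
    """Return (ok, reason) for this machine against a security policy.

    policy = {"allowed_sites": {...}, "file_digests": {path: sha256-hex, ...}}.
    In the framework the digests come from TPM-backed load-time and runtime
    measurements; here they are simulated by hashing the files directly.
    """
    if site_id not in policy["allowed_sites"]:
        return False, f"machine reports untrusted location {site_id!r}"
    for path, expected in policy["file_digests"].items():
        if sha256_file(path) != expected:
            return False, f"integrity violation detected in {path}"
    return True, "machine conforms to the security policy"
```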
The framework also implements a key distribution protocol that, depending on the individual security requirements, shares cryptographic keys only with high-assurance security systems executing in predefined security settings, i.e., inside trusted execution environments or inside an integrity-enforced operating system. Such an approach is particularly appealing in the context of machine learning systems, where some algorithms, like machine learning model training, require temporary access to large computing power. These algorithms can execute inside a dedicated, trusted data center at higher performance because they are not limited by the security features required in the shared execution environment. The evaluation of the framework showed that training a machine learning model on real-world datasets achieved 0.96x native execution performance on the GPU and a speedup of up to 1560x compared to the state-of-the-art SGX-based system.
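The key-release decision can be sketched as a toy model in Python, in which remote attestation is reduced to comparing an attested setting string against the setting required for the key. The class and method names are illustrative, not the framework's API.

```python
import secrets

class KeyBroker:
    """Toy key-distribution service: a key is released only to a system whose
    attested environment matches the security setting required for that key."""

    def __init__(self):
        self._keys = {}  # key_id -> (key bytes, required security setting)

    def register_key(self, key_id, required_setting):
        # e.g. required_setting = "sgx-enclave" or "integrity-enforced-os"
        self._keys[key_id] = (secrets.token_bytes(32), required_setting)

    def request_key(self, key_id, attested_setting):
        key, required = self._keys[key_id]
        if attested_setting != required:
            raise PermissionError(
                f"{key_id} requires {required!r}, but system attested {attested_setting!r}")
        return key

# Usage: the training-data key is handed only to the integrity-enforced data center.
broker = KeyBroker()
broker.register_key("model-training-data", "integrity-enforced-os")
key = broker.request_key("model-training-data", "integrity-enforced-os")
```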
Finally, I tackled the problem of software updates, which make the operating system's integrity monitoring unreliable due to false positives: a software update moves the updated system to an unknown (untrusted) state, which is then reported as an integrity violation. I solved this problem by introducing a proxy to the software repository that sanitizes software packages so that they can be safely installed. The sanitization consists of predicting and certifying the operating system's future state, i.e., its state after the specific updates are installed. The evaluation of this approach showed that it supports 99.76% of the packages available in the Alpine Linux main and community repositories.
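A minimal sketch of the sanitization idea, assuming the proxy can extract a manifest of (path, digest) pairs from each package: the predicted post-update whitelist is the current whitelist with the package's files substituted in, and its digest stands in for the certification a real proxy would sign. Function names and data formats are assumptions for illustration.

```python
import hashlib
import json

def predict_post_update_whitelist(current_whitelist, package_manifest):
    """Predict the OS integrity whitelist after a package is installed.

    current_whitelist: {path: sha256-hex} of currently certified files.
    package_manifest:  {path: sha256-hex} of files the package installs or
                       replaces, extracted from the package by the proxy.
    """
    predicted = dict(current_whitelist)
    predicted.update(package_manifest)  # updated files take the package digests
    return predicted

def certify_state(whitelist):
    """Digest over the canonical whitelist; a real proxy would sign this value
    so that monitors accept the predicted state instead of raising false positives."""
    blob = json.dumps(whitelist, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()
```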
The framework proposed in this thesis is a step forward in verifying and enforcing that high-assurance security systems execute in an environment compliant with regulations. I anticipate that the framework might be further integrated with industry-standard security information and event management tools, as well as other security monitoring mechanisms, to provide a comprehensive solution for hardening high-assurance security systems.
|