81 |
Bearbeitung als Schule interpretatorischer Gestaltungsmöglichkeiten
Hamer, Jens, 22 October 2023 (has links)
No description available.
|
82 |
Development of a method to tune endogenous gene expression and its application to study dose-sensitivity in transcriptional regulation and random X-chromosome inactivation
Noviello, Gemma, 16 September 2024 (has links)
Certain biological processes are dose-dependent, depending not only on the presence or absence of given gene products but also on their specific quantities. The importance of quantitative regulation of gene expression is illustrated by the need for dosage compensation of the sex chromosomes and by the existence of genes whose decreased expression is linked to disease. The mechanism by which mammals achieve X-dosage compensation, X-chromosome inactivation, is itself dose-dependent: it is restricted to females because it senses the two-fold higher dose of X-linked genes in females compared to males. Dose-dependency has also been described in the differentiation of pluripotent stem cells into different lineages: small variations in the quantity of the pluripotency factor OCT4 (POU5F1) can determine whether mouse embryonic stem cells (mESCs) differentiate into the trophectoderm or meso-endodermal lineages. Similarly, the amount of the pluripotency factor NANOG is critical for the control of the naïve and primed pluripotent states. Understanding the principles underlying the dose-dependent regulation of biological processes is crucial, but also technically challenging, since it requires the ability to quantitatively modulate protein abundance. Here, I developed a degron- and CRISPR/Cas-based toolkit, CasTuner, for analogue tuning of endogenous gene expression. CasTuner relies on Cas-derived repressors fused to a degron domain, which can be tuned by titrating the concentration of a ligand. I demonstrate homogeneous (analogue) tuning of gene expression across cells, as opposed to the KRAB-based CRISPRi system, which exhibits bimodal (digital) repression. I employ CasTuner to measure the dose-response relationships of NANOG and OCT4 with their target genes and with the cellular phenotype. Finally, I apply CasTuner to study the dose-dependent role of the X-linked Xist activator RNF12 and of the newly discovered factor ZIC3, and propose a modified model for random X-chromosome inactivation.
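The dose-response measurements described above lend themselves to a simple quantitative summary. The following is a minimal sketch, not taken from the thesis, of how a dose-response curve between a titrated regulator and a target gene could be fitted with a Hill function using SciPy; all measurement values and parameter names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, basal, vmax, k, n):
    """Hill-type dose-response: target expression as a function of regulator level."""
    return basal + vmax * x**n / (k**n + x**n)

# Hypothetical measurements: regulator level (fraction of wild type) vs. target expression
regulator = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 1.00])
target    = np.array([0.06, 0.08, 0.20, 0.55, 0.80, 0.95])

# Fit the curve; p0 gives rough starting guesses for basal, vmax, k (EC50), and n
params, _ = curve_fit(hill, regulator, target, p0=[0.05, 1.0, 0.5, 2.0])
basal, vmax, k, n = params
print(f"EC50 ~ {k:.2f} of wild-type level, Hill coefficient ~ {n:.1f}")
```

Such a fit summarises how sharply a target responds to the titrated regulator, which is the kind of relationship the abstract describes for NANOG and OCT4 targets.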
|
83 |
Indexing RDF data using materialized SPARQL queries
Espinola, Roger Humberto Castillo, 10 September 2012 (has links)
In this thesis, we propose to use materialized queries as a special index structure for RDF data. We strive to reduce query processing time by minimizing the number of comparisons between the query and the RDF dataset. We also emphasize the role of cost models in the selection of execution plans, as well as of index sets for a given workload. We give an overview of the materialized view selection problem in relational databases and discuss its application to the optimization of query processing. We introduce RDFMatView, a framework for answering SPARQL queries using materialized views as indexes. We provide algorithms to discover those indexes that can be used to process a given query, and we develop different strategies to integrate these views into query execution plans. The selection of an efficient execution plan is the topic of our second major contribution. We introduce three different cost models designed for SPARQL query processing with materialized views. A detailed comparison of these models reveals that a model based on index and predicate statistics provides the most accurate cost estimation. We show that selecting an execution plan using this cost model reduces processing time by several orders of magnitude compared to standard SPARQL query processing. Finally, we propose a simple yet effective strategy for the materialized view selection problem applied to RDF data. Given a workload of SPARQL queries, we provide algorithms for selecting a set of indexes that minimizes the workload processing time. We create candidate indexes by retrieving all connected components from the query patterns. Our evaluation shows that the suggested index set usually achieves larger runtime savings on the given workload than alternative index sets.
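As a rough illustration of the candidate-generation step mentioned above (retrieving connected components from query patterns), the sketch below groups a SPARQL query's triple patterns into connected components of its join graph, treating two patterns as connected when they share a variable. This is an assumed simplification for illustration, not the RDFMatView implementation; the example query and function names are hypothetical.

```python
from collections import defaultdict

def connected_components(triple_patterns):
    """Group triple patterns into connected components of the query's join graph.
    Two patterns are connected if they share at least one variable ('?'-prefixed term)."""
    def variables(tp):
        return {t for t in tp if isinstance(t, str) and t.startswith("?")}

    # Map each variable to the indices of the patterns that mention it
    var_to_patterns = defaultdict(set)
    for i, tp in enumerate(triple_patterns):
        for v in variables(tp):
            var_to_patterns[v].add(i)

    # Patterns sharing a variable become neighbours in the join graph
    adjacency = defaultdict(set)
    for patterns in var_to_patterns.values():
        for i in patterns:
            adjacency[i] |= patterns - {i}

    # Depth-first search to collect the connected components
    seen, components = set(), []
    for start in range(len(triple_patterns)):
        if start in seen:
            continue
        stack, component = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(triple_patterns[node])
            stack.extend(adjacency[node] - seen)
        components.append(component)
    return components

# Hypothetical query patterns: each component is one candidate materialized query (index)
query = [("?person", "foaf:name", "?name"),
         ("?person", "foaf:knows", "?friend"),
         ("?city", "ex:population", "?pop")]
for candidate in connected_components(query):
    print(candidate)
```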
|
84 |
fAST Refresh using Mass Query Optimization
Lehner, Wolfgang; Cochrane, Bobbie; Pirahesh, Hamid; Zaharioudakis, Markos, 02 June 2022 (has links)
Automatic summary tables (ASTs), more commonly known as materialized views, are widely used to enhance query performance, particularly for aggregate queries. Such queries access a huge number of rows to retrieve aggregated summary data while performing multiple joins in the context of a typical data warehouse star schema. To keep ASTs consistent with their underlying base data, the ASTs are either immediately synchronized or fully recomputed. This paper proposes an optimization strategy for simultaneously refreshing multiple ASTs, thus avoiding multiple scans of a large fact table (one pass for AST computation). A query stacking strategy detects common sub-expressions using the available query matching technology of DB2. Since exact common sub-expressions are rare, the novel query sharing approach systematically generates common sub-expressions for a given set of 'related' queries, considering different predicates, grouping expressions, and sets of base tables. The theoretical framework, a prototype implementation of both strategies in the IBM DB2 UDB/UWO database system, and performance evaluations based on the TPC/R data schema are presented in this paper.
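The one-pass idea behind stacking multiple AST refreshes can be illustrated with a small, purely illustrative sketch: aggregate the fact table once on a shared, finer-grained grouping (acting as a generated common sub-expression) and derive each AST from that intermediate result instead of rescanning the facts. The tables and columns are hypothetical, and this is not DB2's implementation.

```python
from collections import defaultdict

# Hypothetical fact rows: (store, product, month, revenue)
facts = [
    ("s1", "p1", "2024-01", 100.0),
    ("s1", "p2", "2024-01", 250.0),
    ("s2", "p1", "2024-02", 300.0),
    ("s2", "p2", "2024-02", 150.0),
]

# One pass over the fact table: aggregate on the union of the ASTs' grouping columns
shared = defaultdict(float)
for store, product, month, revenue in facts:
    shared[(store, product, month)] += revenue

# Derive each AST from the shared sub-expression instead of rescanning the fact table
ast_by_store = defaultdict(float)
ast_by_month = defaultdict(float)
for (store, product, month), revenue in shared.items():
    ast_by_store[store] += revenue
    ast_by_month[month] += revenue

print(dict(ast_by_store))   # {'s1': 350.0, 's2': 450.0}
print(dict(ast_by_month))   # {'2024-01': 350.0, '2024-02': 450.0}
```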
|
85 |
Prediction of designer-recombinases for DNA editing with generative deep learning
Schmitt, Lukas Theo, 17 January 2024 (has links)
Site-specific tyrosine-type recombinases are effective tools for genome engineering, and the first engineered variants have demonstrated therapeutic potential. So far, adaptation of designer-recombinases to new DNA target-site selectivity has been achieved mostly through iterative cycles of directed molecular evolution. While effective, directed molecular evolution is laborious and time-consuming. To accelerate the development of designer-recombinases, I evaluated two sequencing approaches and gathered the sequence information of over two million Cre-like recombinase sequences evolved for 89 different target sites. With this information I first investigated the sequence composition and residue changes of the recombinases to further our understanding of their target-site selectivity. The complexity of the data led me to a generative deep learning approach. Using the sequence data, I trained a conditional variational autoencoder called RecGen (Recombinase Generator) that is capable of generating novel recombinases for a given target site. Computational evaluation of the sequences revealed that known recombinases functional on the desired target site are generally more similar to the RecGen-predicted recombinases than other recombinase libraries. Additionally, I experimentally showed that predicted recombinases for known target sites are at least as active as the evolved recombinases. Finally, I also experimentally showed that 4 out of 10 recombinases predicted for novel target sites are capable of excising their respective target sites. In addition to RecGen, I developed a new method capable of accurately sequencing recombinases with nanopore sequencing while simultaneously counting DNA editing events. The data from this method should enable the next development iteration of RecGen.
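For readers unfamiliar with conditional variational autoencoders, the sketch below shows the general shape of such a model: a recombinase sequence is encoded together with its target site, and new candidate sequences are generated for a chosen site by sampling the latent space. The architecture, dimensions, and names are illustrative assumptions and do not reproduce RecGen.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

SEQ_LEN, AA = 343, 21   # assumed Cre-like protein length and amino-acid alphabet size
SITE_LEN, NT = 34, 4    # assumed loxP-like target-site length and nucleotide alphabet size
LATENT = 32

class ConditionalVAE(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = SEQ_LEN * AA + SITE_LEN * NT
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.to_mu, self.to_logvar = nn.Linear(512, LATENT), nn.Linear(512, LATENT)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT + SITE_LEN * NT, 512), nn.ReLU(),
            nn.Linear(512, SEQ_LEN * AA),
        )

    def forward(self, seq_onehot, site_onehot):
        # Encode the flattened one-hot recombinase sequence together with its target site
        h = self.encoder(torch.cat([seq_onehot, site_onehot], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        logits = self.decoder(torch.cat([z, site_onehot], dim=-1))
        return logits.view(-1, SEQ_LEN, AA), mu, logvar

def loss_fn(logits, seq_labels, mu, logvar, beta=1.0):
    # Reconstruction loss per residue plus KL divergence to the standard-normal prior
    recon = F.cross_entropy(logits.transpose(1, 2), seq_labels, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kld

def generate(model, site_onehot, n=10):
    # site_onehot: shape (1, SITE_LEN * NT); sample z from the prior and decode with the site
    with torch.no_grad():
        z = torch.randn(n, LATENT)
        site = site_onehot.expand(n, -1)
        logits = model.decoder(torch.cat([z, site], dim=-1)).view(n, SEQ_LEN, AA)
        return logits.argmax(dim=-1)   # predicted amino-acid indices per position
```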
|
86 |
Extending the Cutting Stock Problem for Consolidating Services with Stochastic Workloads
Hähnel, Markus; Martinovic, John; Scheithauer, Guntram; Fischer, Andreas; Schill, Alexander; Dargie, Waltenegus, 16 May 2023 (has links)
Data centres and similar server clusters consume a large amount of energy. However, not all of the consumed energy produces useful work: servers consume a disproportionate amount of energy when they are idle, underutilised, or overloaded. The effect of these conditions can be minimised by balancing the demand for and the supply of resources through a careful prediction of future workloads and their efficient consolidation. In this paper we extend the cutting stock problem to consolidate workloads with stochastic characteristics. Hence, we employ the aggregate probability density function of co-located and simultaneously executing services to establish valid patterns; a valid pattern is one yielding an overall resource utilisation below a set threshold. We tested the scope and usefulness of our approach on a 16-core server with 29 different benchmarks. The workloads of these benchmarks were generated based on the CPU utilisation traces of 100 real-world virtual machines obtained from a Google data centre hosting more than 32,000 virtual machines. Altogether, we considered 600 different consolidation scenarios in our experiments. We compared the performance of our approach (in terms of system overload probability, job completion time, and energy consumption) with four existing and proposed scheduling strategies. In each category, our approach incurred a modest penalty with respect to the best-performing approach in that category, but overall it performed remarkably well, clearly demonstrating its capacity to achieve the best trade-off between resource consumption and performance.
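The notion of a valid pattern can be made concrete with a small Monte Carlo sketch: a set of co-located services is accepted only if the estimated probability that their combined utilisation exceeds the server's capacity stays below a tolerance. The distributions, capacities, and thresholds below are hypothetical assumptions for illustration, not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def overload_probability(mean_loads, std_loads, capacity=16.0, samples=100_000):
    """Estimate P(sum of co-located CPU demands > capacity) by Monte Carlo,
    assuming independent, non-negative normal per-service utilisation."""
    draws = rng.normal(mean_loads, std_loads, size=(samples, len(mean_loads)))
    draws = np.clip(draws, 0.0, None)          # utilisation cannot be negative
    return float(np.mean(draws.sum(axis=1) > capacity))

def is_valid_pattern(mean_loads, std_loads, capacity=16.0, epsilon=0.05):
    """A pattern (set of co-located services) is valid if its estimated
    overload probability stays below the tolerance epsilon."""
    return overload_probability(mean_loads, std_loads, capacity) <= epsilon

# Hypothetical services: mean CPU cores demanded and their standard deviations
means = [4.0, 3.5, 5.0, 2.0]
stds  = [1.0, 0.8, 1.5, 0.5]
print(overload_probability(means, stds))
print(is_valid_pattern(means, stds))
```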
|