About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Proteomic consequences of TDA1 deficiency in Saccharomyces cerevisiae: protein kinase Tda1 is essential for Hxk1 and Hxk2 serine 15 phosphorylation

Müller, Henry, Lesur, Antoine, Dittmar, Gunnar, Gentzel, Marc, Kettner, Karina 27 February 2024 (has links)
Hexokinase 2 (Hxk2) of Saccharomyces cerevisiae is a dual-function hexokinase, acting as a glycolytic enzyme and participating in the transcriptional regulation of glucose-repressible genes. Relief from glucose repression is accompanied by phosphorylation of Hxk2 at serine 15, which has been attributed to the protein kinase Tda1. To explore the role of Tda1 beyond Hxk2 phosphorylation, the proteomic consequences of TDA1 deficiency were investigated by difference gel electrophoresis (2D-DIGE), comparing a wild type and a Δtda1 deletion mutant. To additionally address possible consequences of glucose repression/derepression, both strains were grown at 2% and 0.1% (w/v) glucose. A total of eight protein spots exhibiting at least twofold enhanced or reduced fluorescence upon TDA1 deficiency were detected and identified by mass spectrometry. Besides the expected Hxk2, the spot identities include two proteoforms of hexokinase 1 (Hxk1). Targeted proteomics analyses in conjunction with 2D-DIGE demonstrated that TDA1 is indispensable for Hxk2 and Hxk1 phosphorylation at serine 15. Thirty-six glucose-concentration-dependent protein spots were identified. A simple method to improve spot quantification, approximating spots as rotationally symmetric solids, is presented along with new data on the quantities of Hxk1 and Hxk2 and their serine 15-phosphorylated forms under high- and low-glucose growth conditions. The Δtda1 deletion mutant exhibited no altered growth under high- or low-glucose conditions or on alternative carbon sources. Invertase activity, serving as a reporter for glucose derepression, was also not significantly altered. Instead, an involvement of Tda1 in the oxidative stress response is suggested.
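The spot-quantification idea mentioned above can be sketched in a few lines: if a gel spot is approximated as a rotationally symmetric solid, its volume follows from treating the radial intensity profile as the height of stacked annular shells. The function below is a hypothetical illustration of that geometry, not the authors' implementation; the profile values and sampling step are assumptions.

```python
import math

def spot_volume(radial_profile, dr=1.0):
    """Estimate the volume of a rotationally symmetric spot.

    radial_profile: intensity (height) sampled at radii 0, dr, 2*dr, ...
    Each sample contributes an annulus of area 2*pi*r*dr times its height
    (shell integration of a solid of revolution).
    """
    volume = 0.0
    for i, height in enumerate(radial_profile):
        r = i * dr
        volume += 2.0 * math.pi * r * dr * height
    return volume

# Example: a cone-like spot whose intensity falls off linearly with radius
profile = [max(0.0, 10.0 - r) for r in range(11)]
v = spot_volume(profile)
```

For this cone-shaped profile the discrete sum lands within about 1% of the analytic cone volume pi * r^2 * h / 3, illustrating that a coarse radial sampling already gives a usable spot quantity.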
222

Decomposing compounds enables reconstruction of interaction fingerprints for structure‑based drug screening

Adasme, Melissa F., Bolz, Sarah Naomi, Al‑Fatlawi, Ali, Schroeder, Michael 22 January 2024 (has links)
Background: Structure-based drug repositioning has emerged as a promising alternative to conventional drug development. Despite the many success stories reported over the past years and recent breakthroughs such as the AI-based structure-prediction system AlphaFold, the availability of structural data for protein–drug complexes remains very limited. Whereas chemical libraries contain millions of drug compounds, the vast majority lack structures in complex with crystallized targets, and it is therefore impossible to characterize their binding to targets from a structural point of view. However, the concept of building blocks offers a novel perspective on this structural problem. A drug compound can be considered a composite of small chemical blocks, or fragments, which confer the relevant properties to the drug and have a high proportion of functional groups involved in protein binding. Based on this, we propose a novel approach to expand the scope of structure-based repositioning approaches by transferring structural knowledge from the fragment to the compound level.
Results: We fragmented over 100,000 compounds in the Protein Data Bank (PDB) and characterized the structural binding modes of 153,000 fragments to their crystallized targets. Using the fragment data, we were able to artificially reconstruct the binding modes of over 7,800 complexes between ChEMBL compounds and their known targets for which no structural data are available. We showed that the conserved binding tendency of fragments, when binding to the same targets, strongly influences a drug's binding specificity and carries the key information needed to reconstruct a full drug's binding mode. Furthermore, our approach was able to reconstruct multiple compound–target pairs at optimal thresholds and with high similarity to the actual binding mode.
Conclusions: Such reconstructions are of great value and benefit structure-based drug repositioning, since they automatically enlarge the technique's scope and allow exploring so far 'unexplored' compounds from a structural perspective. In general, the transfer of structural information is a promising technique that could be applied to any chemical library, to any compound that has no crystal structure available in the PDB, and even to transfer any other feature that is relevant for the drug discovery process but not yet fully available due to data limitations. In that sense, the results of this work document the full potential of structure-based screening even beyond the PDB.
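The core transfer idea can be sketched minimally: a compound's interaction fingerprint against a target is approximated as the union of the fingerprints of its fragments observed bound to that target. Fingerprints are modeled here as plain sets of interaction labels; the fragment names, residues, and functions are illustrative assumptions, not the authors' code or data.

```python
# Interaction fingerprints modeled as sets of (residue, interaction-type) labels.
# Hypothetical fragment-level observations for one target:
fragment_fps = {
    "frag_A": {("ASP86", "hbond"), ("PHE82", "pi-stacking")},
    "frag_B": {("LYS89", "salt-bridge"), ("ASP86", "hbond")},
}

def reconstruct_fingerprint(fragments, fragment_fps):
    """Approximate a compound's fingerprint as the union of its fragments' fingerprints."""
    fp = set()
    for frag in fragments:
        fp |= fragment_fps.get(frag, set())
    return fp

def jaccard(fp1, fp2):
    """Similarity between a reconstructed and an experimentally observed fingerprint."""
    if not fp1 and not fp2:
        return 1.0
    return len(fp1 & fp2) / len(fp1 | fp2)

reconstructed = reconstruct_fingerprint(["frag_A", "frag_B"], fragment_fps)
observed = {("ASP86", "hbond"), ("PHE82", "pi-stacking"), ("LYS89", "salt-bridge")}
similarity = jaccard(reconstructed, observed)  # 1.0: the union recovers all contacts
```

In this toy case the fragment union recovers the observed fingerprint exactly; in practice, the similarity between reconstructed and actual binding modes is what the thresholds in the abstract evaluate.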
223

SCINTRA: A Model for Quantifying Inconsistencies in Grid-Organized Sensor Database Systems

Schlesinger, Lutz, Lehner, Wolfgang 12 January 2023 (has links)
Sensor data sets are usually collected in a centralized sensor database system or replicated and cached in a distributed system to speed up query evaluation. However, a high data refresh rate precludes the use of traditional replication approaches with their strong consistency guarantees. Instead, we propose a combination of grid computing technology with sensor database systems. Each node holds cached data of other grid members. Since cached information may become stale quickly, access to outdated data may still be acceptable, provided the user knows the degree of inconsistency incurred when unsynchronized data are combined. The contribution of this paper is the presentation and discussion of a model for describing inconsistencies in grid-organized sensor database systems.
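The kind of inconsistency quantification the abstract alludes to can be sketched with a toy staleness bound: given the last-synchronization timestamps of the caches being combined and an assumed sensor refresh rate, the worst-case number of missed updates is the age of the oldest cache times that rate. This is an illustrative stand-in, not the SCINTRA model itself; the timestamps and rate are invented.

```python
def staleness_bound(cache_sync_times, now, updates_per_second):
    """Upper bound on the number of updates a combined query result may have missed.

    cache_sync_times: last synchronization time (seconds) of each cache combined
    now: current time (seconds)
    updates_per_second: assumed refresh rate of the underlying sensors
    """
    oldest_age = max(now - t for t in cache_sync_times)
    # In the worst case, every refresh since the oldest sync changed the data.
    return oldest_age * updates_per_second

# Combining three caches synchronized 2, 5, and 9 seconds ago,
# with sensors refreshing 10 times per second:
bound = staleness_bound([98.0, 95.0, 91.0], now=100.0, updates_per_second=10)
```

A user could compare such a bound against a query-specific tolerance to decide whether combining the unsynchronized caches is acceptable.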
224

Teaching hydrological modelling: illustrating model structure uncertainty with a ready-to-use computational exercise

Knoben, Wouter J. M., Spieler, Diana 06 June 2024 (has links)
Estimating the impact of different sources of uncertainty along the modelling chain is an important skill graduates are expected to have. Broadly speaking, educators can cover uncertainty in hydrological modelling by distinguishing uncertainty in data, in model parameters, and in model structure. This provides students with insights into the impact of uncertainties on modelling results and thus on the usability of the resulting model simulations for decision making. A survey among teachers in the Earth and environmental sciences showed that model structure uncertainty is the least represented uncertainty type in teaching. This paper presents a computational exercise that introduces students to the basics of model structure uncertainty through two ready-to-use modelling experiments. These experiments require either Matlab or Octave and use the open-source Modular Assessment of Rainfall-Runoff Models Toolbox (MARRMoT) and the open-source Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) data set. The exercise is short and can easily be integrated into an existing hydrological curriculum, with only a limited time investment needed to introduce the topic of model structure uncertainty and run the exercise. Two trial applications at the Technische Universität Dresden (Germany) showed that the exercise can be completed in two afternoons or four 90 min sessions and that the provided setup effectively conveys the intended insights about model structure uncertainty.
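The notion of model structure uncertainty the exercise targets can be illustrated in a few lines: two conceptual rainfall-runoff structures, driven by identical forcing, produce different discharge series, and the spread between them is structural uncertainty. The two toy bucket models below are illustrative stand-ins, not MARRMoT models; parameters and forcing are invented.

```python
def linear_bucket(precip, k=0.3):
    """Single linear reservoir: outflow proportional to storage."""
    storage, flows = 0.0, []
    for p in precip:
        storage += p
        q = k * storage
        storage -= q
        flows.append(q)
    return flows

def threshold_bucket(precip, capacity=5.0, k=0.3):
    """Reservoir that only spills once storage exceeds a capacity threshold."""
    storage, flows = 0.0, []
    for p in precip:
        storage += p
        q = k * max(0.0, storage - capacity)
        storage -= q
        flows.append(q)
    return flows

precip = [0, 4, 8, 2, 0, 0, 6, 0]          # invented forcing (mm per time step)
qa, qb = linear_bucket(precip), threshold_bucket(precip)
# Structural uncertainty: same input, two different simulated hydrographs
spread = [abs(a - b) for a, b in zip(qa, qb)]
```

Plotting `qa` and `qb` side by side, as the MARRMoT-based exercise does with real model structures and CAMELS forcing, makes the structural spread immediately visible to students.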
225

Investigating adult age differences in real-life empathy, prosociality, and well-being using experience sampling

Pollerhoff, Lena, Stietz, Julia, Depow, Gregory John, Inzlicht, Michael, Kanske, Philipp, Li, Shu-Chen, Reiter, Andrea M. F. 04 June 2024 (has links)
While the importance of social affect and cognition is indisputable throughout the adult lifespan, findings on how empathy and prosociality develop and interact across adulthood are mixed, and real-life data are scarce. Research using ecological momentary assessment recently demonstrated that adults commonly experience empathy in daily life, and that experiencing empathy was linked to higher prosocial behavior and subjective well-being. However, to date, it is not clear whether there are adult age differences in daily empathy and daily prosociality, and whether age moderates the relationship between empathy and prosociality across adulthood. Here we analyzed experience-sampling data collected from participants across the adult lifespan to study age effects on empathy, prosocial behavior, and well-being under real-life circumstances. Linear and quadratic age effects were found for the experience of empathy, with increased empathy across the three younger age groups (18 to 45 years) and a slight decrease in the oldest group (55 years and older). Neither prosocial behavior nor well-being showed significant age-related differences. We discuss these findings with respect to (partially discrepant) results derived from lab-based and traditional survey studies. We conclude that studies linking in-lab experiments with real-life experience sampling may be a promising avenue for future lifespan studies.
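The linear and quadratic age effects reported above correspond to regressing empathy on age and age squared; a minimal sketch of such a fit using ordinary least squares is shown below. The data points are invented for illustration (an inverted-U shape) and are not the study's data.

```python
import numpy as np

# Invented illustration: empathy ratings rising through mid-adulthood,
# then declining slightly in the oldest group (inverted-U shape).
age = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 65], dtype=float)
empathy = np.array([3.0, 3.3, 3.6, 3.8, 3.9, 4.0, 3.95, 3.9, 3.8, 3.7])

# Fit empathy ~ b0 + b1*age + b2*age^2 (polyfit returns highest degree first)
b2, b1, b0 = np.polyfit(age, empathy, deg=2)

# A negative quadratic coefficient indicates the inverted-U pattern;
# the fitted curve peaks where the derivative b1 + 2*b2*age is zero.
peak_age = -b1 / (2 * b2)
```

In the actual study, such fixed effects would be estimated within a multilevel model over repeated experience-sampling observations rather than a simple pooled regression.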
226

Sachsenforst ...: Jahresbericht

04 December 2024 (has links)
No description available.
227

How low working memory demands and reduced anticipatory attentional gating contribute to impaired inhibition during acute alcohol intoxication

Stock, Ann-Kathrin, Yu, Shijing, Ghin, Filippo, Beste, Christian 08 April 2024 (has links)
High-dose alcohol intoxication is commonly associated with impaired inhibition, but the boundary conditions, as well as the associated neurocognitive and neuroanatomical changes, have remained rather unclear. This study was motivated by the counterintuitive finding that high-dose alcohol intoxication compromises response inhibition performance when working memory demands were low, but not when they were high. To investigate whether this is more likely caused by deficits in cognitive control processes or in attentional processes, we examined event-related (de)synchronization in theta- and alpha-band activity and performed beamforming analyses on the EEG data underlying previously published behavioral findings. This yielded two possible explanations: there may be a selective decrease of working memory engagement when demands are relatively low, which boosts response automatization and ultimately puts more strain on the remaining inhibitory resources. Alternatively, there may be a decrease in proactive preparatory and anticipatory attentional gating processes when demands are relatively low, hindering attentional sampling of upcoming stimuli. Crucially, both of these interrelated mechanisms reflect differential alcohol effects beyond the actual motor inhibition process, and both are processes that serve to anticipate future response inhibition affordances. This provides new insights into how high-dose alcohol intoxication can impair inhibitory control.
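Event-related (de)synchronization of the kind analyzed here is conventionally computed as the percentage change of band power relative to a pre-stimulus baseline. The sketch below illustrates that computation on synthetic numbers; it is not the study's pipeline, and the band, epoch boundaries, and power values are assumptions.

```python
import numpy as np

def erd_ers_percent(power, baseline_idx, test_idx):
    """ERD/ERS as percent power change relative to a reference interval.

    power: band power per time sample (e.g. alpha power from a time-frequency
    decomposition). Negative result = desynchronization (ERD),
    positive = synchronization (ERS).
    """
    ref = power[baseline_idx].mean()
    return (power[test_idx].mean() - ref) / ref * 100.0

# Synthetic alpha-band power: baseline level 10, dropping to 6 after the stimulus
power = np.array([10.0] * 50 + [6.0] * 50)
erd = erd_ers_percent(power, baseline_idx=slice(0, 50), test_idx=slice(50, 100))
# erd == -40.0: a 40% alpha power decrease, i.e. desynchronization
```

Comparing such values between sober and intoxicated conditions, separately for theta and alpha bands, is the kind of contrast the reported analyses rest on.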
228

Anonymization Techniques for Privacy-preserving Process Mining

Fahrenkrog-Petersen, Stephan A. 30 August 2023 (has links)
Process mining analyzes business processes using event logs. Each activity execution is recorded as an event in a trace, which represents the behavior of one process instance. Traces often hold sensitive information such as patient data. This thesis addresses the privacy concerns arising from trace data and process mining. A re-identification risk study on public event logs reveals a high risk, but other threats exist as well. Anonymization is vital to address these issues, yet challenging, because the behavioral aspects of the event log must be preserved for analysis; this leads to a privacy-utility trade-off. Two new algorithms, SaCoFa and SaPa, are introduced for trace anonymization; they use noise to guarantee differential privacy while maintaining utility. PRIPEL supplements the anonymized control flows with contextual trace information, enabling the publication of complete, protected logs. For k-anonymity, the PRETSA algorithm family merges privacy-violating traces based on a prefix representation of the event log, keeping the anonymized log as syntactically similar to the original as possible. Empirical evaluations demonstrate improved utility preservation over existing techniques.
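The differential-privacy side of this trade-off can be sketched with the standard Laplace mechanism applied to trace-variant counts: each variant's frequency is released with Laplace noise scaled to the query sensitivity. This is a generic textbook sketch, not the SaCoFa or SaPa algorithms; the example log and epsilon are assumptions.

```python
import random

def dp_variant_counts(variant_counts, epsilon, sensitivity=1.0, rng=None):
    """Laplace mechanism over an event log's trace-variant frequencies.

    Releasing each count with Laplace(sensitivity/epsilon) noise makes the
    released histogram epsilon-differentially private (adding or removing
    one trace changes each count by at most `sensitivity`).
    """
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    noisy = {}
    for variant, count in variant_counts.items():
        # random.Random has no Laplace sampler; the difference of two
        # independent exponentials is Laplace-distributed.
        noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
        noisy[variant] = max(0, round(count + noise))
    return noisy

log = {("register", "check", "approve"): 120,
       ("register", "check", "reject"): 35}
private_log = dp_variant_counts(log, epsilon=1.0, rng=random.Random(7))
```

Smaller epsilon means larger noise and stronger privacy but lower utility, which is exactly the trade-off the thesis's algorithms aim to improve.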
229

Describing data patterns / a general deconstruction of metadata standards

Voß, Jakob 07 August 2013 (has links)
Many methods, technologies, standards, and languages exist to structure and describe data. The aim of this thesis is to find common features in these methods and to determine how data is actually structured and described. Existing studies are limited to notions of data as recorded observations and facts, or they require given structures to build on, such as the concept of a record or the concept of a schema. In this thesis, these presumed concepts are deconstructed from a semiotic point of view: data is analyzed as signs, communicated in the form of digital documents. The study was conducted using a phenomenological research method: conceptual properties of data structuring and description, such as encodings, identifiers, formats, schemas, and models, were first collected and critically examined. The analysis resulted in six prototypes that categorize data methods by their primary purpose. The study further revealed five basic paradigms that deeply shape how data is structured and described in practice. The third result is a pattern language of data structuring: twenty general patterns were identified and described, each with its benefits, consequences, pitfalls, and relations to other patterns, documenting problems and solutions that occur over and over again in data, independent of particular technologies. The results can help to better understand data, that is, digital documents and their metadata in all their forms, both for the consumption and the creation of data. Particular domains of application include data archaeology and data literacy.
230

Parallelizing Set Similarity Joins

Fier, Fabian 24 January 2022 (has links)
One of today's major challenges in data science is to compare and relate data of similar nature. The join operation known from relational databases can help solve this problem: given a collection of records, it finds all pairs of records that fulfill a user-chosen predicate. Real-world problems often require complex predicates such as similarity. A common way to measure similarity is via set similarity functions; in order to use them as join predicates, we assume records to be represented by sets of tokens. This thesis focuses on the set similarity join (SSJ) operation. The amount of data to be processed today is large and grows continually, while the SSJ is a compute-intensive operation; to cope with the increasing size of input data, additional means are needed to develop scalable SSJ implementations. This thesis focuses on parallelization and makes three major contributions. First, it elaborates on the state of the art in parallelizing SSJ by comparing ten MapReduce-based approaches from the literature, both analytically and experimentally. Their main limitation is, surprisingly, low scalability due to excessive and/or skewed data replication; none of the approaches could compute the join on large datasets. Second, it leverages the abundant CPU parallelism of modern commodity hardware, which had not yet been considered for scaling SSJ, by proposing a novel data-parallel multi-threaded SSJ that provides significant speedups over single-threaded execution. Third, it proposes a novel, highly scalable distributed SSJ approach: with a cost-based heuristic that assigns similar shares of compute cost to each node and a data-independent scaling mechanism, it avoids data replication and recomputation, significantly scales up the join execution, and processes much larger datasets than all previously designed parallel approaches.
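The core SSJ computation can be sketched with the classic prefix-filtering idea: order tokens by global frequency and only verify record pairs that share a token in their prefixes, since a pair exceeding a Jaccard threshold t must share at least one of the first |r| - floor(t*|r|) + 1 tokens. The single-threaded sketch below illustrates the principle; it is not one of the thesis's parallel algorithms, and the sample records are invented.

```python
from collections import defaultdict

def ssjoin(records, threshold):
    """All-pairs set similarity join with Jaccard >= threshold via prefix filtering."""
    # Global token order: rare tokens first keeps candidate lists short
    freq = defaultdict(int)
    for r in records:
        for tok in r:
            freq[tok] += 1
    sorted_records = [sorted(r, key=lambda t: (freq[t], t)) for r in records]

    index = defaultdict(list)           # token -> ids of records with it in their prefix
    results = []
    for i, r in enumerate(sorted_records):
        prefix_len = len(r) - int(threshold * len(r)) + 1
        candidates = set()
        for tok in r[:prefix_len]:
            candidates.update(index[tok])
            index[tok].append(i)
        for j in candidates:            # verify surviving candidates exactly
            a, b = set(r), set(sorted_records[j])
            jac = len(a & b) / len(a | b)
            if jac >= threshold:
                results.append((j, i, jac))
    return results

recs = [{"a", "b", "c", "d"}, {"a", "b", "c", "e"}, {"x", "y", "z"}]
pairs = ssjoin(recs, threshold=0.6)     # records 0 and 1: Jaccard 3/5 = 0.6
```

The parallel approaches studied in the thesis distribute exactly this candidate generation and verification work; the skew and replication problems arise when popular prefix tokens concentrate candidates on few nodes.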
