11

Genome-Wide Significant, Replicated and Functional Risk Variants for Alzheimer’s Disease

Guo, Xiaoyun, Qiu, Wenying, Garcia-Milian, Rolando, Lin, Xiandong, Zhang, Yong, Cao, Yuping, Tan, Yunlong, Wang, Zhiren, Shi, Jing, Wang, Jijun, Liu, Dengtang, Song, Lisheng, Xu, Yifeng, Wang, Xiaoping, Liu, Na, Sun, Tao, Zheng, Jianming, Luo, Justine, Zhang, Huihao, Xu, Jianying, Kang, Longli, Ma, Chao, Wang, Kesheng, Luo, Xingguang 01 November 2017 (has links)
Genome-wide association studies (GWASs) have reported numerous associations between risk variants and Alzheimer's disease (AD). However, these associations do not necessarily indicate a causal relationship; if the risk variants can be demonstrated to be biologically functional, the possibility of a causal relationship increases. In this article, we reviewed all published GWASs to extract the genome-wide significant (p < 5×10⁻⁸) and replicated associations between risk variants and AD or AD biomarkers. To investigate the potential biological functions of the risk variants and explore the potential mechanisms underlying the SNP-AD associations, we analyzed the regulatory effects of these risk variants on the expression of a novel class of non-coding RNAs (piRNAs) and of protein-coding RNAs (mRNAs); the protein alterations caused by these variants; the associations between AD and these variants in our own sample; the expression in human brain of the piRNAs, mRNAs and proteins targeted by these variants; the expression correlations between the risk genes and APOE; the pathways and networks to which the risk genes belong; and the long non-coding RNAs (LncRNAs) that might regulate the risk genes. Surprisingly, we found replicated and significant associations for AD or AD biomarkers at only 17 SNPs, located in 11 genes/snRNAs/LncRNAs in eight genomic regions. Most of these 17 SNPs were enriched in AD-related pathways or networks and were potentially functional in regulating piRNAs and mRNAs; some were associated with AD in our sample, and some altered protein structures. Most of the protein-coding genes regulated by the risk SNPs were expressed in human brain and correlated with APOE expression. We conclude that these variants are the most robust risk markers for AD and that their contributions to AD risk are likely to be causal. As expected, APOE and the lipoprotein metabolism pathway carry the highest weight among these contributions.
12

The analysis and co-design of weakly-consistent applications

Najafzadeh, Mahsa 22 April 2016 (has links)
To ensure availability and responsiveness, many distributed systems rely on replicated databases that maintain copies (replicas) of the data on different servers. Consistency is a major challenge in implementing replicated databases. Designers of replicated databases must make a difficult choice between strong consistency, which guarantees a wide range of application invariants but is slow and fragile, and asynchronous replication, which ensures a good level of availability and responsiveness but leaves the programmer exposed to possible concurrency anomalies. To resolve this dilemma, commercial and research databases provide hybrid consistency, which allows the programmer to require strong consistency for certain operations and thereby enable synchronisation. This thesis studies the analysis and co-design of an application and its associated consistency, so as to ensure the application's invariants with a minimum of consistency requirements. The three main contributions of this thesis are: 1) we propose the first static analysis tool for proving the validity of invariants of database applications under a hybrid consistency model; 2) we present the application of our analysis tool to the design of a file system whose semantics allows POSIX-like behaviour at a reasonable cost; 3) we propose a set of useful patterns that can help application developers implement the most common invariants. / Distributed databases take advantage of replication to bring data close to the client, and to always be available. The primary challenge for such databases is to ensure consistency. Recent research provides hybrid consistency models in which the database supports asynchronous updates by default, but synchronisation is available upon request. To help programmers exploit the hybrid consistency model, we propose a set of useful patterns, proof rules, and a tool for proving integrity invariants of applications. In the first part, we study a sound proof rule that enables programmers to check whether the operations of a given application semantics maintain the application invariants under a given amount of parallelism. We have developed an SMT-based tool that automates this proof, and we have verified several example applications using the tool. In the second part, we apply the above methodology to the design of a replicated file system. The main invariant is that the directory structure forms a tree. We study three alternative semantics for the file system; each exposes a different amount of parallelism and different anomalies. Using our tool-assisted rules, we check whether a specific file system semantics maintains the tree invariant, and we derive an appropriate consistency protocol. In the third part of this thesis, we present three classes of invariants: equivalence, partial order, and single-item generic. Each places some constraints over the state, and each class maps to a different storage-layer consistency property: respectively, atomicity, causal ordering, or total ordering.
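To make the proof obligation concrete, the following minimal sketch (ours, not the thesis's SMT-based tool or its exact proof rule) brute-forces the same question over a toy domain: do two operations, each validated against the same origin state, still preserve an application invariant once both effects apply? All names and the bank-account example are hypothetical.

```python
from itertools import product

# Under asynchronous replication, two replicas may validate their operations
# against the same origin state before either effect propagates. An invariant
# is stable only if every such concurrent pair leaves it intact. The real
# tool discharges this obligation symbolically with an SMT solver; this
# exhaustive check over a tiny integer domain is purely illustrative.

def check_invariant(ops, states, invariant):
    """Return a counterexample (op, op, state) or None if the invariant holds."""
    for (na, (guard_a, delta_a)), (nb, (guard_b, delta_b)) in product(ops.items(), repeat=2):
        for s in states:
            if invariant(s) and guard_a(s) and guard_b(s):
                if not invariant(s + delta_a + delta_b):
                    return (na, nb, s)
    return None

# Example: a balance that must stay non-negative. Each withdrawal checks
# funds locally, but two concurrent withdrawals both see the same balance.
states = range(0, 5)
invariant = lambda s: s >= 0
ops = {
    "deposit(1)":  (lambda s: True,   +1),
    "withdraw(2)": (lambda s: s >= 2, -2),  # guard is not stable under concurrency
}

print(check_invariant(ops, states, invariant))
# ('withdraw(2)', 'withdraw(2)', 2): both replicas see balance 2, both
# withdraw, and the merged state is -2, so withdraw(2) needs synchronisation.
```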
13

Adaptive Consistency Protocols for Replicated Data in Modern Storage Systems with a High Degree of Elasticity

Kumar, Sathiya Prabhu 15 March 2016 (has links)
This thesis makes three main contributions. The first part concerns the development of a new replication protocol, named LibRe, that limits the number of stale reads in a distributed storage system. LibRe is an acronym for "Library for Replication". The main goal of LibRe is to ensure data consistency while contacting a minimum number of replicas during read and write operations. In this protocol, during a write operation, each replica asynchronously updates a registry (the "library") with the version identifier of the modified data item. During read operations, the request is forwarded to the most appropriate replica according to the information in the registry. This mechanism limits the number of stale reads. Evaluating the consistency of a system remains a difficult problem, whether by simulation or by evaluation under real conditions. We therefore developed a simulator, called Simizer, that makes it possible to evaluate and compare the performance of different consistency protocols. The YCSB database benchmarking system was also extended to evaluate the trade-off between consistency and latency in modern storage systems. The code of the simulator and the modifications made to YCSB are available under a free licence. Although modern database systems adapt consistency guarantees on demand, anticipating the consistency level required for each operation remains difficult for an application developer. The second contribution of this thesis addresses this problem by allowing the database to override the default consistency level with other rules defined from external information, which can be supplied by the administrator or by an external service. In this thesis, we validate this model with an implementation inside the Cassandra distributed database system. The third contribution of this thesis concerns the resolution of update conflicts. Resolving this type of conflict requires retaining all possible values of an object so that the conflict can be resolved using domain-specific knowledge on the client side, which entails additional costs in terms of throughput and latency. In this thesis we discuss the need for, and the design of, a new kind of distributed object, the priority register, which uses a domain-specific conflict detection and resolution strategy implemented on the server side. Our approach uses the notion of a specific replacement ordering, and we show that a data type parameterized by such an ordering can provide an efficient solution for applications requiring domain-specific conflict resolution. We also describe a proof-of-concept implementation inside Cassandra. / The main contributions of this thesis are threefold. The first contribution focuses on an efficient way to control stale reads in modern database systems with the help of a new consistency protocol called LibRe, an acronym for Library for Replication. The main goal of the LibRe protocol is to ensure data consistency by contacting a minimum number of replica nodes during read and write operations with the help of registry information. According to the protocol, during write operations each replica node asynchronously updates a registry (library) with the most recent version identifier of the updated data. Forwarding read requests to the right replica node, as indicated by the registry information, helps to control stale reads during read operations. Evaluation of data consistency remains challenging both via simulation and in a real-world setup. Hence, we implemented a new simulation toolkit called Simizer that helps to evaluate the performance of different consistency policies in a fast and efficient way. We also extended an existing benchmark tool, YCSB, to evaluate the consistency-latency trade-off offered by modern database systems. The codebases of the simulator and the extended YCSB are open source for public access. The performance of the LibRe protocol is validated both via simulation and in a real setup with the help of the extended YCSB. Although modern database systems adapt the consistency guarantees of the system on a per-query basis, anticipating the consistency level of an application query in advance, at application development time, remains challenging for application developers. To overcome this limitation, the second contribution of the thesis enables the database system to override the application-defined consistency options at run time with the help of an external input, given by a data administrator or by an external service. The thesis validates the proposed model with the help of a prototype implementation inside the Cassandra distributed storage system. The third contribution of the thesis focuses on resolving update conflicts. Resolving update conflicts often involves maintaining all possible values and performing the resolution via domain-specific knowledge at the client side, which incurs additional cost in terms of network bandwidth and latency, and considerable complexity. In this thesis, we discuss the motivation and design of a novel data type called the priority register, which implements a domain-specific conflict detection and resolution scheme directly at the database side, while leaving open the option of additional reconciliation at the application level. Our approach uses the notion of an application-defined replacement ordering, and we show that a data type parameterized by such an ordering can provide an efficient solution for applications that demand domain-specific conflict resolution. We also describe a proof-of-concept implementation of the priority register inside Cassandra. The conclusion and perspectives of the thesis are summarized at the end.
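The registry mechanism described above lends itself to a compact illustration. The sketch below is ours; class and method names are hypothetical, not LibRe's actual interface. It shows how a write records the newest version identifier in the registry and how a read is then routed to a replica known to hold that version:

```python
# On a write, each replica asynchronously records the newest version
# identifier of the key in a shared registry (the "library"); on a read,
# the coordinator consults the registry and forwards the request to a
# replica that holds that version, so a single well-chosen replica suffices.

class LibReCluster:
    def __init__(self, replica_names):
        self.replicas = {r: {} for r in replica_names}  # key -> (version, value)
        self.registry = {}                              # key -> (version, holders)

    def write(self, key, value, version, acked_replicas):
        # Only the acknowledging replicas are updated synchronously;
        # the rest catch up in the background.
        for r in acked_replicas:
            self.replicas[r][key] = (version, value)
        latest, holders = self.registry.get(key, (0, set()))
        if version > latest:
            self.registry[key] = (version, set(acked_replicas))
        elif version == latest:
            holders.update(acked_replicas)

    def read(self, key):
        version, holders = self.registry[key]
        replica = next(iter(holders))       # route to an up-to-date replica
        return self.replicas[replica][key]

cluster = LibReCluster(["r1", "r2", "r3"])
cluster.write("k", "v1", version=1, acked_replicas=["r1", "r2", "r3"])
cluster.write("k", "v2", version=2, acked_replicas=["r2"])  # r1, r3 lag behind
print(cluster.read("k"))  # (2, 'v2'): the registry steers reads away from stale r1/r3
```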
14

Handling of mobile applications state using Conflict-Free Replicated Data Types

Tranquillini, Anna January 2022 (has links)
Mobile applications often must synchronize their local state with a backend to maintain an up-to-date view of the application state. Nevertheless, in some cases, the application’s ability to work offline or with poor network connectivity may be more significant than guaranteeing strong consistency. We present a method to structure the application state in a portable way using the Redux pattern and the properties of strongly typed languages. This method allows employing Conflict-free Replicated Data Types to create a custom converging state: this way, each replica can edit its local state autonomously and merge conflicts with other replicas when possible. Furthermore, we propose to keep a server as the communication channel and analyze how this architecture impacts design choices and optimizations related to CRDTs. Finally, we evaluate our method on a note-taking application using a few well-known CRDT designs and quantitatively justify our design choices.
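As a rough illustration of the approach, the sketch below combines a Redux-style reducer with the simplest convergent register, last-writer-wins; the thesis evaluates richer CRDT designs, and every name here is ours rather than its API:

```python
import time
from dataclasses import dataclass

# Application state kept as a map of last-writer-wins (LWW) registers,
# edited locally through a Redux-style reducer and merged with remote
# replicas when connectivity returns.

@dataclass(frozen=True)
class LWW:
    value: str
    ts: float        # logical or wall-clock timestamp
    replica: str     # tie-breaker so concurrent writes merge deterministically

def merge(a: LWW, b: LWW) -> LWW:
    # Convergence: both replicas pick the same winner regardless of order.
    return max(a, b, key=lambda r: (r.ts, r.replica))

def reducer(state: dict, action: dict) -> dict:
    if action["type"] == "EDIT_NOTE":          # local, offline-capable edit
        reg = LWW(action["text"], time.time(), action["replica"])
        return {**state, action["id"]: reg}
    if action["type"] == "MERGE_REMOTE":       # state received via the server
        remote = action["state"]
        keys = state.keys() | remote.keys()
        return {k: merge(state[k], remote[k]) if k in state and k in remote
                   else state.get(k, remote.get(k))
                for k in keys}
    return state

# Two replicas edit the same note while offline, then exchange state.
a = reducer({}, {"type": "EDIT_NOTE", "id": "n1", "text": "milk", "replica": "A"})
b = reducer({}, {"type": "EDIT_NOTE", "id": "n1", "text": "eggs", "replica": "B"})
print(reducer(a, {"type": "MERGE_REMOTE", "state": b}) ==
      reducer(b, {"type": "MERGE_REMOTE", "state": a}))  # True: states converge
```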
15

Shelfaware: Accelerating Collaborative Awareness with Shelf CRDT

Waidhofer, John C 01 March 2023 (has links) (PDF)
Collaboration has become a key feature of modern software, allowing teams to work together effectively in real-time while in different locations. In order for a user to communicate their intention to several distributed peers, computing devices must exchange high-frequency updates with transient metadata like mouse position, text range highlights, and temporary comments. Current peer-to-peer awareness solutions have high time and space complexity due to the ever-expanding logs that each client must maintain in order to ensure robust collaboration in eventually consistent environments. This paper proposes an awareness Conflict-Free Replicated Data Type (CRDT) library that provides the tooling to support an eventually consistent, decentralized, and robust multi-user collaborative environment. Our library is tuned for rapid iterative updates that communicate fine-grained user actions across a network of collaborators. Our approach holds memory constant for subsequent writes to an existing key on a shared resource and completely prunes stale data from shared documents. These features allow us to keep the CRDT's memory footprint small, making it a feasible solution for memory constrained applications. Results show that our CRDT implementation is comparable to or exceeds the performance of similar data structures in high-frequency read/write scenarios.
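A rough sketch of the design as we read it (ours, not the library's actual API): each key holds a single versioned entry, repeated writes overwrite in place, and merge keeps the higher version, so stale awareness data is pruned rather than logged:

```python
# Each key holds one (version, value) pair; a local write bumps the version
# in place, so memory stays constant for repeated writes to the same key,
# unlike log-based CRDTs. Merge keeps the higher version and drops stale
# entries. Ties are broken on the value's repr for determinism.

class ShelfMap:
    def __init__(self):
        self.entries = {}                 # key -> (version, value)

    def set(self, key, value):
        version, _ = self.entries.get(key, (0, None))
        self.entries[key] = (version + 1, value)   # overwrite, no history kept

    def merge(self, other):
        for key, (v2, val2) in other.entries.items():
            v1, val1 = self.entries.get(key, (0, None))
            # Higher version wins; equal versions fall back to value order.
            if (v2, repr(val2)) > (v1, repr(val1)):
                self.entries[key] = (v2, val2)

# High-frequency presence updates: only the latest cursor position survives.
alice, bob = ShelfMap(), ShelfMap()
for x in range(100):
    alice.set("alice/cursor", (x, 0))     # 100 writes, one entry retained
bob.merge(alice)
print(bob.entries["alice/cursor"])        # (100, (99, 0))
```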
16

Designs and methods for the identification of active location and dispersion effects

Dingus, Cheryl Ann Venard 02 December 2005 (has links)
No description available.
17

Crash recovery with partial amnesia failure model issues

De Juan Marín, Rubén 30 September 2008 (has links)
Replicated systems are a kind of distributed system whose main goal is to ensure that computer systems are highly available, fault tolerant and provide high performance. One of the latest trends in replication techniques managed by replication protocols is to make use of a Group Communication System, and more specifically of the atomic broadcast communication primitive, for developing more efficient replication protocols. An important aspect of these systems is how they manage the disconnection of nodes (which degrades their service) and the connection/reconnection of nodes for maintaining their original support. In replicated systems this task is delegated to recovery protocols, and how they work depends especially on the failure model adopted. A model commonly used for systems managing a large state is crash-recovery with partial amnesia, because it implies short recovery periods. However, assuming it raises several problems. Most of them have already been solved in the literature: view management, the abort of local transactions started in crashed nodes (when referring to transactional environments) or, for example, the reinclusion of new nodes into the replicated system. Still, there is one problem related to the assumption of this second failure model that has not been completely considered: the amnesia phenomenon, which can lead to inconsistencies if it is not correctly managed. This work presents this inconsistency problem due to amnesia and formalizes it, defining the properties that must be fulfilled to avoid it and defining possible solutions. Besides, it also presents and formalizes an inconsistency problem, due to amnesia, which appears under a specific sequence of events allowed by the majority partition progress condition and which implies stopping the system; it proposes the properties for overcoming it and different solutions. As a consequence it proposes a new majority partition progress condition. In the sequel there is de / De Juan Marín, R. (2008). Crash recovery with partial amnesia failure model issues [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3302
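The amnesia phenomenon admits a compact illustration. The toy below (ours, not the thesis's formalization) shows a replica acknowledging an update held only in volatile memory, crashing, and recovering from an older durable checkpoint, after which the replicas silently diverge:

```python
# Crash-recovery with partial amnesia: on recovery, a node remembers only
# its last durable checkpoint and forgets everything acknowledged since.

class Replica:
    def __init__(self, name):
        self.name = name
        self.volatile = {}        # in-memory state, lost on crash
        self.durable = {}         # checkpointed state, survives crashes

    def apply(self, key, value):
        self.volatile[key] = value   # update applied and acknowledged

    def checkpoint(self):
        self.durable = dict(self.volatile)

    def crash_and_recover(self):
        # Partial amnesia: everything since the last checkpoint is forgotten.
        self.volatile = dict(self.durable)

r1, r2 = Replica("r1"), Replica("r2")
for r in (r1, r2):
    r.apply("x", 1)
    r.checkpoint()

# Both replicas apply (and acknowledge) x=2, but r2 crashes before
# checkpointing it.
r1.apply("x", 2)
r2.apply("x", 2)
r2.crash_and_recover()

print(r1.volatile["x"], r2.volatile["x"])  # 2 1 -> inconsistent replicas
# unless the recovery protocol explicitly accounts for the forgotten update
```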
18

Multivariate Models and Algorithms for Systems Biology

Acharya, Lipi Rani 17 December 2011 (has links)
Rapid advances in high-throughput data acquisition technologies, such as microarrays and next-generation sequencing, have enabled scientists to interrogate the expression levels of tens of thousands of genes simultaneously. However, challenges remain in developing effective computational methods for analyzing data generated from such platforms. In this dissertation, we address some of these challenges. We divide our work into two parts. In the first part, we present a suite of multivariate approaches for a reliable discovery of gene clusters, often interpreted as pathway components, from molecular profiling data with replicated measurements. We translate our goal into learning an optimal correlation structure from replicated complete and incomplete measurements. In the second part, we focus on the reconstruction of signal transduction mechanisms in the signaling pathway components. We propose gene set based approaches for inferring the structure of a signaling pathway. First, we present a constrained multivariate Gaussian model, referred to as the informed-case model, for estimating the correlation structure from replicated and complete molecular profiling data. The informed-case model generalizes the previously known blind-case model by accommodating prior knowledge of replication mechanisms. Second, we generalize the blind-case model by designing a two-component mixture model. Our idea is to strike an optimal balance between a fully constrained correlation structure and an unconstrained one. Third, we develop an Expectation-Maximization algorithm to infer the underlying correlation structure from replicated molecular profiling data with missing (incomplete) measurements. We utilize our correlation estimators for clustering real-world replicated complete and incomplete molecular profiling data sets. The above three components constitute the first part of the dissertation. For the structural inference of signaling pathways, we hypothesize a directed signal pathway structure as an ensemble of overlapping and linear signal transduction events. We then propose two algorithms to reverse engineer the underlying signaling pathway structure using unordered gene sets corresponding to signal transduction events. Throughout we treat gene sets as variables and the associated gene orderings as random. The first algorithm has been developed under the Gibbs sampling framework and the second algorithm utilizes the framework of simulated annealing. Finally, we summarize our findings and discuss possible future directions.
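To frame the correlation-structure problem, here is a deliberately simplified caricature (ours, not the dissertation's estimators): a "blind" estimate that concatenates replicates as if they were independent samples versus an "informed" baseline that averages replicates first; the dissertation's constrained Gaussian and mixture models balance such extremes in a principled way.

```python
import numpy as np

# Each gene is measured in r replicates per condition; the question is how
# the replication mechanism should enter the correlation estimate. This
# snippet only frames the trade-off on synthetic data.

rng = np.random.default_rng(0)
n_genes, n_conditions, n_reps = 5, 30, 3

signal = rng.normal(size=(n_genes, n_conditions))        # true expression
data = signal[:, :, None] + 0.8 * rng.normal(             # replicated, noisy
    size=(n_genes, n_conditions, n_reps))

blind = np.corrcoef(data.reshape(n_genes, -1))            # replicates as samples
informed = np.corrcoef(data.mean(axis=2))                 # average replicates

truth = np.corrcoef(signal)
print("blind error:   ", np.abs(blind - truth).mean())
print("informed error:", np.abs(informed - truth).mean())
# Averaging replicates reduces the noise variance, so the informed estimate
# is typically closer to the true correlation structure than the blind one.
```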
19

Visualization of replication-dependent DNA double-strand break repair in Escherichia coli

Amarh, Vincent January 2017 (has links)
Chromosomal replication is a source of spontaneous DNA double-strand breaks (DSBs). In E. coli, DSBs are repaired by homologous recombination using an undamaged sister template. During repair, the RecA protein polymerizes on single-stranded DNA generated at the site of the DSB and catalyses the search for sequence homologies on the undamaged sister template. This study utilized fluorescence microscopy to investigate the spatial and temporal dynamics of the RecA protein at the site of a replication-dependent DSB generated at the lacZ locus of the E. coli chromosome. The DSB was generated by SbcCD-mediated cleavage of a hairpin DNA structure formed on the lagging strand template of the replication fork by a long palindromic sequence. The tandem insertion of a recA-mCherry gene with the endogenous recA gene at the natural chromosomal locus produced no detectable effect on cell viability in the presence of DSB formation. During repair, the fluorescently-labelled RecA protein formed a transient focus, which was inferred to be the RecA nucleoprotein filament at the site of the replication-dependent DSB. The duration of the RecA focus at the site of the DSB was modestly reduced in a ΔdinI mutant and modestly increased in a ΔuvrD or ΔrecX mutant. Most cells underwent a period of extended cohesion of the sister lacZ loci after disappearance of the RecA focus. Segregation of the sister lacZ loci was followed by cell division, with each daughter cell obtaining a copy of the fluorescently-labelled lacZ locus. The RecA focus at the site of the DSB was observed predominantly between the mid-cell and the 1⁄4 position. In the absence of DSB formation, the lacZ locus exhibited dynamic movement between the mid-cell and the 1⁄4 position until the onset of segregation. Formation of the DSB and initiation of repair occurred at the spatial localization for replication of the lacZ locus while the downstream repair events occurred very close to the mid-cell. Genomic analysis of RecA-DNA interactions by ChIP-seq was used to demonstrate that the RecA focus at the lacZ locus was generated by the repair of the palindrome-induced DSB and not the repair of one-ended DSBs emanating from stalled replication forks at the repressor-bound operator arrays. This study has shown that the repair of a replication-dependent DSB occurs exclusively during the period of cohesion of the sister loci and the repair is efficiently completed prior to segregation of the two sister loci.
20

Live updates in High-availability (HA) clouds

Sanagari, Vivek January 2018 (has links)
Background. High availability (HA) is a cloud's ability to keep functioning after one or more hardware or software components fail. Its purpose is to minimize system downtime and data loss. Many service providers guarantee a Service Level Agreement that includes an uptime percentage for the computing service, calculated from the available time and the system downtime, excluding planned outage time. The aim of the thesis is to update virtual machines running in the cloud without causing any interruption to the user, by redirecting the resources/services running on them to an alternative virtual machine before the original VM is updated.
Objectives. The objectives for the above aim are:
• To investigate existing solutions for high availability and, if possible, adapt them to our aim; the alternative is to design our own solution.
• To implement the solution in an OpenStack environment; as an alternative, a smaller-scale implementation under a virtualization platform such as VirtualBox.
• To run experiments that quantify the effectiveness of our solution in terms of overhead and the degree of seamlessness to the users.
Methods. An environment with multiple virtual machines is created to represent multiple virtual servers in the cloud. The state of the service provided by the primary virtual machine is saved to persistent storage and the client is redirected to an alternate virtual machine. At that point the primary virtual machine may reboot for an update or any other reason.
Results. For CPU utilization, the mean utilization on the server and the host in scenario 1 is 0.34% and 3.2%, respectively. During the failover cycle in scenario 2, the mean CPU utilization is 2.0% on the primary server and 9.7% on its host, and 0.99% on the secondary server and 8.0% on its host. For memory utilization, the mean usage on the server in scenario 1 is 16%, while during the failover cycle in scenario 2 it is 37% on the primary server and 48% on the secondary server. The failover time of the high-availability environment is 6.8 seconds, and the off-line node takes 1.5 seconds to rejoin the cluster as on-line when told to. Network traffic is measured in kilobits per second: 1.2 Kb/s on port 80 in scenario 2 and 1.4 Kb/s between the client and the server in scenario 1. In addition, traffic is captured on ports 5405, 2224 and 7788, where port 5405 (Pacemaker/Corosync) carries UDP traffic, port 2224 (pcsd) carries TCP traffic and port 7788 (DRBD) carries TCP traffic; the traffic on these ports represents the network overhead due to HA. During a failover cycle, an additional 45 Kb/s, 1.2 Kb/s and 7.0 Kb/s flow on ports 5405, 2224 and 7788, respectively.
Conclusions. From our experimental results, the CPU overhead of handling live updates in the high-availability environment is approximately 1.1-1.7% higher than when a stand-alone server is used. The memory-utilization overhead is around 21-32% higher for live updates on the HA system than for the standard server. The network traffic overhead induced by the HA ports (5405, 2224, 7788) is approximately 53 Kb/s, while the minimum overhead is approximately 16 Kb/s. The final and most important metric is the failover time, which determines the seamlessness of the service, since the environment needs to provide services to users without interruption. The failover time of the HA model is only about 6.8 seconds, leaving the environment highly available; however, users may notice a slight interruption for requests made during this span.
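As an illustrative back-of-the-envelope check (ours, using the measured numbers above), the 6.8-second failover translates into availability as follows:

```python
# If each live update triggers one failover, the downtime seen by clients
# is bounded by updates_per_year * 6.8 s.

FAILOVER_S = 6.8                 # measured failover time from the thesis
SECONDS_PER_YEAR = 365 * 24 * 3600

def availability(updates_per_year: int) -> float:
    downtime = updates_per_year * FAILOVER_S
    return 100 * (1 - downtime / SECONDS_PER_YEAR)

for n in (12, 52, 365):          # monthly, weekly, daily update cadence
    print(f"{n:4d} updates/year -> {availability(n):.5f}% available")
# Even daily updates cost under 42 minutes of failover time per year,
# i.e. roughly 99.992% availability, assuming clients stall only during failover.
```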
