21

A Broker based Web Service Allocation Mechanism

Alwagait, Esam Abdullah K 02 November 2011 (has links)
Web services are considered by industry and research the de facto means of providing functionality in a distributed form that is usable in heterogeneous environments. In short, web services are packaged functionality based on a set of standards that facilitate the definition of web service methods, the number and format of their inputs, and the number and format of their outputs. In combination with replication, web services can provide performance-optimization solutions for an unlimited number of real-life business applications. However, when discussing replication, one must consider how to allocate or choose replicas so as to provide the best possible performance. This thesis is dedicated to answering that question. The thesis is titled "A Broker based Web Service Allocation Mechanism"; an execution paradigm (Proteus) is presented, and several allocation algorithms are presented, examined and analyzed to identify the optimal one. The thesis focuses on the broker component of the execution paradigm, which embeds the allocation algorithm. In particular, it focuses on the least response time (LRT) algorithm, which works by allocating the web service replicas that provide the fastest response time. The thesis provides all the necessary background as well as all the related research. It contains a simulation part as well as a description of a real-life system (Proteus). It also contains a section dedicated to analyzing the results of execution logs across different variations of environment (homogeneous and heterogeneous), number of replicas and level of parallelism. The logs are examined and conclusions are presented. / Alwagait, EAK. (2011). A Broker based Web Service Allocation Mechanism [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12500 / Palancia
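The least-response-time allocation idea above can be sketched compactly. The following is a minimal illustration, not the thesis's Proteus implementation; the moving-average estimator, class names and parameters are all assumptions made for the example.

```python
import random

class Broker:
    """Toy broker that routes each request to the web service replica
    with the lowest estimated response time (illustrative only)."""

    def __init__(self, replicas, alpha=0.3):
        self.estimates = {r: 0.0 for r in replicas}  # EMA of observed times
        self.alpha = alpha                           # smoothing factor

    def pick_replica(self):
        # Least-response-time allocation: lowest current estimate wins.
        return min(self.estimates, key=self.estimates.get)

    def record(self, replica, observed_time):
        # Fold the newly observed response time into the estimate.
        old = self.estimates[replica]
        self.estimates[replica] = (1 - self.alpha) * old + self.alpha * observed_time

broker = Broker(["replica-1", "replica-2", "replica-3"])
for _ in range(10):
    r = broker.pick_replica()
    latency = random.uniform(0.05, 0.30)  # stand-in for a real invocation
    broker.record(r, latency)
print(broker.estimates)
```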
22

Cohérence à terme fiable avec des types de données répliquées / Dependable eventual consistency with replicated data types

Zawirski, Marek 14 January 2015 (has links)
Eventually consistent replicated databases offer excellent responsiveness and fault-tolerance, but expose applications to the complexity of concurrency and failures. Recent databases encapsulate these problems behind a stronger interface, supporting causal consistency, which protects the application from ordering anomalies, and/or Replicated Data Types (RDTs), which ensure convergent semantics of concurrent updates using an object interface. However, dependable algorithms for RDTs and causal consistency come at a cost in metadata size. This thesis studies the design of such algorithms with minimized metadata, and the limits of the design space. Our first contribution is a study of the metadata complexity of RDTs. RDTs use metadata to provide rich semantics; many existing RDT implementations incur high overhead in storage space. We design optimized set and register RDTs with metadata overhead reduced to the number of replicas. We also demonstrate metadata lower bounds for six RDTs, thereby proving the optimality of four implementations. Our second contribution is the design of SwiftCloud, a replicated causally-consistent RDT object database for client-side applications. We devise algorithms to support high numbers of client-side partial replicas backed by the cloud, in a fault-tolerant manner, with small metadata. We demonstrate how to support availability and consistency at the expense of some slight data staleness; i.e., our approach trades freshness for scalability (small metadata, parallelism) and availability (the ability to fail over between data centers). We validate our approach with experiments involving thousands of client replicas.
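To make the metadata-size point concrete, here is a hedged sketch of a classic state-based multi-value register whose metadata is a version vector with one entry per replica, so the overhead scales with the number of replicas rather than with the number of updates. This is a simplified textbook construction, not the thesis's optimized design; real multi-value registers track a vector per surviving value.

```python
class MVRegister:
    """State-based multi-value register (simplified textbook sketch).
    Metadata is a version vector with one counter per replica, so the
    overhead grows with the number of replicas, not with updates."""

    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.vv = [0] * n_replicas   # version vector: one entry per replica
        self.values = set()          # currently visible value(s)

    def assign(self, value):
        self.vv[self.id] += 1        # advance our own component
        self.values = {value}

    def merge(self, other):
        if all(a >= b for a, b in zip(self.vv, other.vv)):
            pass                               # other is dominated: keep ours
        elif all(b >= a for a, b in zip(self.vv, other.vv)):
            self.values = set(other.values)    # we are dominated: take theirs
        else:
            self.values |= other.values        # concurrent: keep both values
        self.vv = [max(a, b) for a, b in zip(self.vv, other.vv)]

a, b = MVRegister(0, 2), MVRegister(1, 2)
a.assign("x"); b.assign("y")   # concurrent assignments at two replicas
a.merge(b)
print(a.values)                # {'x', 'y'}: neither assignment is lost
```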
23

Structure and restoration of natural secondary forests in the Central Highlands, Vietnam

Bui, Manh Hung 15 December 2016 (has links) (PDF)
Introduction and objectives
In Vietnam, forest resources have been declining and degrading severely in recent years. The degradation has decreased the natural forest area, seriously changed the forest structure, and reduced timber volume and biodiversity. From 1999 to 2005, the rich forest area decreased by 10.2%, whereas the poor secondary forest increased dramatically by 20.7%. Forest structure plays an important role in forestry research: understanding forest structure will unlock an understanding of the history, function and future of a forest ecosystem (Spies, 1998), and it is an excellent basis for restoration measures. This research is therefore necessary to contribute to improving forest area and quality and to reducing difficulties in forest management. The study also enhances the grasp of forest structure and of structural changes after harvesting, and fills serious gaps in knowledge. In addition, the results will contribute to improving and rescuing the poor secondary forest and restoring it toward the old-growth state in Vietnam.

Material and methods
The study was conducted in Kon Ka Kinh national park. The park is located in the northeastern region of Gia Lai province, 50 km northeast of Pleiku city center, and is distributed over seven communes in three districts: K'Bang, Mang Yang and Đăk Đoa. Data were collected from 10 plots of secondary forest (Type IIb) and 10 plots of primeval forest (Type IV); stratified random sampling was applied to select plot locations. Plots of 1 ha were used to investigate gaps; 2000 m2 plots were used to measure overstorey tree attributes such as diameter at breast height, total height, crown width and species name; and 500 m2 subplots were used to record tree positions. For regeneration, 25 systematic 4 m2 subplots were established inside the 1 ha plots. After field collection, data analyses were conducted using R and Excel. First, stand information such as density and volume was calculated, and descriptive statistics were computed for the diameter and height variables. Linear mixed-effects models were applied to analyze differences in diameter and height and to check the effect of the random factor between the two forest types. Diameter and height frequency distributions were also generated and compared using permutational analysis of variance (PERMANOVA). Non-linear regression models were fitted for the diameter and height variables, and similar analyses were implemented for gaps. For the spatial point patterns of overstorey trees, replicated point pattern analysis techniques were applied. For biodiversity, richness and biodiversity indices were calculated and compared using linear mixed models, and biodiversity differences between the two forest types were tested again by PERMANOVA. For regeneration, the analyses included height frequency distribution generation, frequency difference testing, biodiversity indices, and spatial distribution checking using a nonrandomness index.

Results and discussion
After analyzing the data, the essential findings were as follows. Hypothesis H1, "The overstorey structure of secondary forests is more homogeneous and uniform than old-growth forests", is accepted. The secondary forest density is about 1.8 times that of the old-growth forest, but the volume is only 0.56 times as large.
The average diameter and height of the secondary forest are smaller than those of the old-growth forest by 5.71 cm and 3.73 m, respectively. Linear mixed-effects model results indicate that this difference is statistically significant and that the effect of the random factor (Section) is not important. Type IIb has many small trees and a quite homogeneous diameter frequency distribution, while the old-growth forest has more big trees. For both forest stages, the height frequency distribution is positively skewed. PERMANOVA results illustrate that the frequency distributions are statistically different between the two forest types. Regression functions are also more variable and diverse in the old-growth forest, because all standard deviations of the parameters are greater there. Gap analysis indicates that the number of gaps in the young forest is slightly higher, while the average gap size is much smaller; the gap frequency distribution is statistically different between the two types. In terms of the spatial point pattern of overstorey trees, the G-test and the pair correlation function (pcf) results show that trees are distributed randomly in the secondary forest, whereas the spatial point patterns are more regular and diverse in the old-growth forest. The spatial point pattern difference is not significant, as shown by a permutational t-test for the pcf. Envelope function results indicate that the variation of the pcf in young forests is much lower than in the primary forests.

Hypothesis H2, "The overstorey species biodiversity of the secondary forest is less than in the old-growth forest", is rejected. The number of species in the secondary forest is much greater than in the old-growth forest; in particular, its richness is 1.16 times higher. The Simpson and Shannon indices are slightly smaller in the secondary forest: the average Simpson indices for the two forest stages are 0.898 and 0.920, respectively, but the difference is not significant. Species accumulation curves become relatively flat on the right, meaning a reasonable number of plots was observed; the estimated numbers of species from the accumulation curves are 105 and 95 per ha for the two forest types. PERMANOVA results show that the number of species and the proportion of individuals in each species differ significantly between forest types.

Hypothesis H3, "The number of regenerating species of the secondary forest is less and they distribute more regularly, compared to the old-growth forest", is rejected. There are both similarities and differences between the two types. The regeneration density of stage IIb is 22,930 seedlings/ha, greater than that of the old forest by 9,030 seedlings/ha. The height frequency distribution shows a decreasing trend. Similar to the overstorey, the richness of the secondary forest is 141 species, higher than the old-growth forest by 9 species. Biodiversity indices are not statistically different between the two types, and PERMANOVA results indicate that the number of species and the proportion of individuals for each species are also not significantly different between the observed forest types. Nonrandomness index results show that the regeneration is distributed regularly; up to 95% of the plots reflect this trend.

Hypothesis H4, "Restoration measures (with and without human intervention) could be implemented in the regenerating forest", is accepted.
The results show that the secondary forest still has mother trees and enough seedlings for restoration, so restoration solutions with and without human intervention can be implemented. Firstly, forest protection should be applied; this measure is consistent with national park regulations in Vietnam. Rangers and other related organizations will be responsible for carrying out protection activities, which will protect forest resources from illegal logging, grazing and tourist activities. Environmental education and awareness-raising activities for indigenous people are also important. Another measure is additional and enrichment planting, which should focus on species exclusive to the overstorey of Type IIb or exclusive to the primary forest; selecting these species will increase species biodiversity in the future and also meets the purpose of the maximum-biodiversity solution.

Conclusion
Forest resources play a very important role in human life as well as in maintaining the sustainability of ecosystems. At present, however, they are under serious threat, particularly in Vietnam. The Central Highlands of Vietnam, where forest resources are still relatively good, are also threatened by illegal logging, lack of local knowledge, and other pressures, so the efforts of many people, especially foresters and researchers, are needed. Through research, scientists can provide knowledge and understanding of the forest, including its structure and restoration. This study has obtained important findings: the secondary forest is more homogeneous and uniform, while the old-growth forest is very diverse; overstorey biodiversity in the secondary forest is higher than in the primary forest; the number of regenerating species in the secondary forest is higher, but other indices are not statistically different between the two types; the regeneration is distributed regularly on the ground; and the secondary forest still has mother trees and sufficient regeneration, so some restoration measures can be applied. The findings contribute to improving people's understanding of the structure and of the structural changes after harvesting in Kon Ka Kinh national park, Gia Lai, which is a key to a better understanding of the history and value of these forests. These findings and the proposed restoration measures address the rescue of degraded forests in the Central Highlands in particular and Vietnam in general. Furthermore, they are a promising basis for the management and sustainable use of forest resources in the future.
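For reference, the Simpson and Shannon indices cited above are conventionally defined as follows, where p_i is the proportion of individuals belonging to species i out of S species. These are the standard forms; the thesis may use a variant, such as the complement form shown here for Simpson.

\[
D' \;=\; 1 - \sum_{i=1}^{S} p_i^{2},
\qquad
H' \;=\; -\sum_{i=1}^{S} p_i \ln p_i
\]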
24

Implication de l’ADN polymérase spécialisée zêta au cours de la réplication de l’hétérochromatine dans les cellules de mammifères / Involvement of the specialized DNA polymerase zeta during heterochromatin replication in mammalian cells

Ahmed-Seghir, Sana 24 September 2015 (has links)
Translesion synthesis (TLS) is an important process for bypassing DNA lesions during genome duplication in human cells. The "polymerase exchange" model suggests that the replicative polymerase is transiently replaced by a specialized polymerase, which bypasses the damage and allows DNA synthesis to continue. These specialized DNA polymerases, called Pol eta (η), iota (ι), kappa (κ), zeta (ζ) and Rev1, have been well characterized for their ability to bypass different types of lesions in vitro. An emerging concept is that these enzymes could also be required to replicate specific regions of the genome that are "difficult to replicate". Polζ consists of at least two subunits: Rev3, the catalytic subunit, and Rev7, a subunit that enhances the activity of Rev3L. Until now, the best-characterized function of Polζ was its ability to catalyze the extension of a mismatch opposite a DNA lesion. However, it has been shown that the yeast and human catalytic subunits Rev3 interact with the two accessory subunits of Polδ: pol31 and pol32 in yeast, and p50 and p66 in human. It has also been shown that Rev3L is important for the replication of common fragile sites (CFSs) in human cells, regions known to be a source of genetic instability and to be replicated late (in G2/M). All of this suggests that Polζ could play a role in the replication of the undamaged genome, more specifically when natural barriers (e.g. non-B DNA) hinder the normal progression of replication forks. In the yeast S. cerevisiae, inactivation of the rev3 gene is viable and leads to a decrease in spontaneous or genotoxin-induced mutagenesis, suggesting that Polζ is involved in the mutagenic bypass of endogenous or induced lesions. In contrast, inactivation of the Rev3L gene in the mouse is embryonic lethal, whereas most other specialized DNA polymerases are not essential, suggesting that Polζ has acquired essential functions during evolution that remain unknown to date. Rev3L-/- mouse embryonic fibroblasts (MEFs) display high spontaneous genetic instability associated with a strong increase in chromosomal breaks and translocations, indicating that Polζ is directly involved in maintaining genome stability. To clarify the role of this specialized polymerase during genome replication, we undertook a study of the structure/function/localization relationships of the Rev3 protein. Our study shows that S phase progression of Rev3L-/- cells is strongly perturbed, with a delay in mid and late S phase. In these Rev3L-deficient cells, we observe changes in the temporal replication program, particularly in timing transition regions (TTRs) replicated from mid S phase onwards. We also show a global enrichment in repressive epigenetic marks (heterochromatin-associated histone modifications and DNA methylation, including hypermethylation of major satellite DNA repeats), suggesting that slowed replication fork progression at particular loci can promote heterochromatinization when Rev3L is inactivated. Interestingly, we observe decreased expression of several genes involved in growth and development, which might explain the embryonic lethality observed in the absence of Rev3L. Finally, we demonstrate a direct interaction between Rev3L and the heterochromatin organization protein HP1α via a PxVxL motif, with HP1α recruiting Rev3L to pericentromeric heterochromatin. Altogether, this strongly suggests that Polζ has been co-opted by evolution to assist the replicative DNA polymerases Polδ and Polε in duplicating condensed chromatin domains during mid and late S phase.
25

Méthodologie d'évaluation pour les types de données répliqués / Evaluation methodology for replicated data types

Ahmed-Nacer, Mehdi 05 May 2015 (has links)
To provide high data availability from anywhere, at any time, and with low latency, data-sharing systems rely on optimistic replication. In this model, several copies of the shared object, called replicas, are stored on different sites. Each replica can be modified freely and at any time; modifications are executed locally and then propagated to the other sites, where they are applied. In this way, all replicas eventually apply all updates, possibly in different orders. Optimistic replication algorithms are responsible for managing the concurrent modifications and ensuring the consistency of the shared object. In this thesis, we present an evaluation methodology for optimistic replication algorithms. The context of our study is collaborative editing. We designed a tool that implements our methodology; it integrates a mechanism to generate a corpus and a simulator for sessions of collaborative editing. Using this tool, we ran several experiments on two different kinds of corpus: synchronous and asynchronous. In synchronous collaboration, we evaluate the performance of optimistic replication algorithms on several criteria, such as execution time, memory occupation and message size, and propose some improvements. In asynchronous collaboration, when replicas synchronize their modifications, more conflicts can appear in the document, and the system cannot merge the modifications until a user resolves them.
To reduce the conflicts and the user's effort, we propose an evaluation metric and evaluate the different algorithms on it. We analyze the quality of the merge to understand the behavior of the users and the collaboration cases that create conflicts, and then propose algorithms for resolving the most important conflicts, thereby reducing the user's effort. Finally, we propose a new hybrid architecture for cloud-based collaborative editing systems, based on two types of optimistic replication algorithms. Unlike current architectures, the proposed one avoids the problems of centralization and consensus between data centers, is simple and accessible to developers, and limits the resources required on client devices.
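As a rough illustration of the kind of harness such a methodology implies, the sketch below replays a synthetic corpus of operations against a replication algorithm and records execution time and average message size. The harness, the trivial last-writer-wins stand-in, and all names are hypothetical; the authors' actual tool and corpus generator are not shown here.

```python
import json
import time

class LWWMap:
    """Trivial last-writer-wins position->character map standing in for a
    real replication algorithm (hypothetical stub for the harness demo)."""

    def __init__(self, n_sites):
        self.n_sites = n_sites
        self.state = [dict() for _ in range(n_sites)]  # one replica per site
        self.clock = 0                                 # global stand-in clock

    def sites(self):
        return range(self.n_sites)

    def apply_local(self, site, op):
        pos, char = op
        self.clock += 1
        self.state[site][pos] = (self.clock, char)
        return json.dumps({"pos": pos, "char": char, "ts": self.clock})

    def apply_remote(self, site, msg):
        m = json.loads(msg)
        current = self.state[site].get(m["pos"])
        if current is None or m["ts"] > current[0]:
            self.state[site][m["pos"]] = (m["ts"], m["char"])

def evaluate(algorithm, corpus):
    """Replay (site, operation) pairs and collect simple metrics."""
    start = time.perf_counter()
    total_bytes = 0
    for site, op in corpus:
        msg = algorithm.apply_local(site, op)   # integrate locally
        total_bytes += len(msg)
        for other in algorithm.sites():         # broadcast to peers
            if other != site:
                algorithm.apply_remote(other, msg)
    return {"execution_time_s": time.perf_counter() - start,
            "avg_message_bytes": total_bytes / max(len(corpus), 1)}

corpus = [(i % 3, (i, "a")) for i in range(100)]  # synthetic corpus
print(evaluate(LWWMap(3), corpus))
```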
26

Collaborative Editing of Graphical Network using Eventual Consistency

Hedkvist, Pierre January 2019 (has links)
This thesis compares different approaches to building a collaborative editing application, using methods such as OT, CRDT and locking. After a comparison of these methods, an implementation based on CRDT was developed. The implementation of a collaboratively edited graphical network was made such that consistency is guaranteed. It uses the 2P2P-Graph, which was extended to support moving nodes, and relies on the client-server communication model. The implementation was evaluated through a time-complexity and space-complexity analysis. The results of the thesis include the comparison of the different methods and the evaluation of the extended 2P2P-Graph.
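For context, the 2P2P-Graph composes two two-phase sets (2P-Sets), one for vertices and one for edges, so once an element is removed it can never be re-added. Below is a minimal sketch of that base structure, assuming standard 2P-Set semantics; the thesis's extension for moving nodes is not reproduced here.

```python
class TwoPSet:
    """Two-phase set: an add-set plus a tombstone set; once removed,
    an element can never be re-added."""

    def __init__(self):
        self.added = set()
        self.removed = set()   # tombstones

    def add(self, x):
        self.added.add(x)

    def remove(self, x):
        if x in self.added:
            self.removed.add(x)

    def __contains__(self, x):
        return x in self.added and x not in self.removed

    def merge(self, other):
        self.added |= other.added
        self.removed |= other.removed

class TwoP2PGraph:
    """Graph CRDT composed of two 2P-Sets, one for vertices and one
    for edges; an edge is visible only while both endpoints are."""

    def __init__(self):
        self.vertices = TwoPSet()
        self.edges = TwoPSet()

    def add_vertex(self, v):
        self.vertices.add(v)

    def add_edge(self, u, v):
        if u in self.vertices and v in self.vertices:
            self.edges.add((u, v))

    def lookup_edge(self, u, v):
        return (u, v) in self.edges and u in self.vertices and v in self.vertices

    def merge(self, other):
        self.vertices.merge(other.vertices)
        self.edges.merge(other.edges)

g = TwoP2PGraph()
g.add_vertex("a"); g.add_vertex("b"); g.add_edge("a", "b")
print(g.lookup_edge("a", "b"))   # True
```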
27

Construction of Minimal Partially Replicated Orthogonal Main-Effect Plans with 3 Factors

朱正中, Chu, Cheng-Chung Unknown Date (has links)
Orthogonal main-effect plans (OMEPs), being able to estimate the main effects without correlation, are often employed in industrial situations for screening purposes. But experiments are expensive and time consuming, so when an economical and efficient design is desired, a minimal orthogonal main-effect plan is a good choice. Jacroux (1992) derived a sufficient condition for OMEPs to have a minimal number of runs and provided a table of minimal OMEP run numbers; Chang (1998) corrected and supplemented the table. In this paper, we complete the table. A minimal OMEP with replicated runs is appreciated even more, since the pure error can then be estimated and the goodness-of-fit of the model can be tested. Jacroux (1993) and Chang (1998) gave some partially replicated orthogonal main-effect plans (PROMEPs) with a maximal number of replicated points. Here, we discuss minimal PROMEPs with 3 factors in detail: we provide methods of constructing minimal PROMEPs with replicated runs, where the number of replicated runs is maximal in most cases.
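As a point of reference for what such a plan looks like, the textbook 4-run orthogonal main-effect plan for three two-level factors (the 2^{3-1} half-replicate with defining relation C = AB) estimates all three main effects without correlation: any pair of columns contains each level combination equally often. This is a standard example, not one of the constructions from the paper.

\[
\begin{array}{c|ccc}
\text{Run} & A & B & C \\ \hline
1 & - & - & + \\
2 & - & + & - \\
3 & + & - & - \\
4 & + & + & + \\
\end{array}
\]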
28

Conditioning of unobserved period-specific abundances to improve estimation of dynamic populations

Dail, David (David Andrew) 28 February 2012 (has links)
Obtaining accurate estimates of animal abundance is made difficult by the fact that most animal species are detected imperfectly. However, early attempts at building likelihood models that account for unknown detection probability impose a simplifying assumption unrealistic for many populations: no births, deaths, immigration or emigration can occur in the population throughout the study (i.e., population closure). In this dissertation, I develop likelihood models that account for unknown detection and do not require assuming population closure; in fact, the proposed models yield a statistical test for population closure. The basic idea utilizes a procedure in three steps: (1) condition the probability of the observed data on the (unobserved) period-specific abundances; (2) multiply this conditional probability by the (prior) likelihood for the period abundances; and (3) remove (via summation) the period-specific abundances from the joint likelihood, leaving the marginal likelihood of the observed data. The utility of this procedure is two-fold: step (1) allows detection probability to be more accurately estimated, and step (2) allows population dynamics such as the entering migration rate and survival probability to be modeled. The main difficulty of this procedure arises in the summation in step (3), although it is greatly simplified by assuming abundances in one period depend only on those in the most recent previous period (i.e., abundances have the Markov property). I apply this procedure to form abundance and site occupancy rate estimators both for the setting where observed point counts are available and for the setting where only the presence or absence of an animal species is observed. Although the two settings yield very different likelihood models and estimators, the basic procedure forming these estimators is the same in both. / Graduation date: 2012
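In symbols (a standard rendering of the three steps, with notation introduced here since the abstract gives none): writing y_t for the observed data and N_t for the unobserved abundance in period t = 1, ..., T,

\[
L(\theta \mid y) \;=\; \sum_{N_1} \cdots \sum_{N_T}
\underbrace{\prod_{t=1}^{T} p(y_t \mid N_t, \theta)}_{\text{step (1)}}\;
\underbrace{p(N_1 \mid \theta) \prod_{t=2}^{T} p(N_t \mid N_{t-1}, \theta)}_{\text{step (2)}}
\]

where the outer summations are step (3); the Markov assumption lets them be evaluated one period at a time rather than over all T-fold combinations of abundances.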
29

Performance of Disk I/O operations during the Live Migration of a Virtual Machine over WAN

Vemulapalli, Revanth, Mada, Ravi Kumar January 2014 (has links)
Virtualization is a technique that allows several virtual machines (VMs) to run on a single physical machine (PM) by adding a virtualization layer above the physical host's hardware. Many virtualization products allow a VM to be migrated from one PM to another without interrupting the services running on the VM. This is called live migration and offers many potential advantages like server consolidation, reduced energy consumption, disaster recovery, reliability, and efficient workflows such as "Follow-the-Sun". At present, the advantages of VM live migration are limited to Local Area Networks (LANs), as migrations over Wide Area Networks (WANs) offer lower performance due to IP address changes in the migrating VMs and large network latency. For scenarios which require migration, shared storage solutions like iSCSI (block storage) and NFS (file storage) are used to store the VM's disk, avoiding the high latencies associated with disk state migration when private storage is used. When using iSCSI or NFS, all the disk I/O operations generated by the VM are encapsulated and carried to the shared storage over the IP network, so the underlying WAN latency affects the performance of applications requesting disk I/O from the VM. In this thesis our objective was to determine the performance of shared and private storage when VMs are live migrated in networks with high latency, with WANs as the typical case. To achieve this objective, we used Iometer, a disk benchmarking tool, to investigate the I/O performance of iSCSI and NFS when used as shared storage for live migrating Xen VMs over emulated WANs. In addition, we configured the Distributed Replicated Block Device (DRBD) system to provide private storage for our VMs through incremental disk replication. We then studied the I/O performance of this private storage solution in the context of live disk migration and compared it to the performance of shared storage based on iSCSI and NFS. The results from our testbed indicate that the DRBD-based solution should be preferred over the considered shared storage solutions because DRBD consumed less network bandwidth and had a lower maximum I/O response time.
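The response-time argument can be caricatured in a few lines: a write to shared storage across the WAN pays the WAN round trip on every I/O before acknowledgement, while a write to local private storage with background replication (as in DRBD's asynchronous mode) does not. This is a deliberately crude model with made-up numbers, not the thesis's Iometer measurements.

```python
def total_write_time_ms(n_writes, local_ms, wan_rtt_ms, remote_sync):
    """Deliberately crude model: each write pays the local disk latency,
    plus a WAN round trip only if it must complete remotely before
    acknowledgement (shared storage over WAN). Asynchronous replication
    (e.g. DRBD protocol A) acknowledges locally and ships changes later."""
    per_write = local_ms + (wan_rtt_ms if remote_sync else 0.0)
    return n_writes * per_write

WAN_RTT_MS = 100.0   # assumed WAN round-trip time (made-up figure)
print("shared storage over WAN:", total_write_time_ms(1000, 1.0, WAN_RTT_MS, True), "ms")
print("async private storage:  ", total_write_time_ms(1000, 1.0, WAN_RTT_MS, False), "ms")
```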
30

Using prior information on the intraclass correlation coefficient to analyze data from unreplicated and under-replicated experiments

Perrett, Jamis J. January 1900 (has links)
Doctor of Philosophy / Department of Statistics / James Higgins / Many studies are performed on units that cannot be replicated due to cost or other restrictions. There is often an abundance of subsampling to estimate the within-unit component of variance, but what is needed for statistical tests is an estimate of the between-unit component of variance. There is evidence to suggest that the ratio of the between-unit component of variance to the total variance remains relatively constant over a range of studies of similar types. Moreover, in many cases this intraclass correlation, which is the ratio of the between-unit variance to the total variance, will be relatively small, often 0.1 or less. Such situations exist in education, agriculture, and medicine, to name a few. The present study discusses how to use such prior information on the intraclass correlation coefficient (ICC) to obtain inferences about differences among treatments in the face of no replication. Several strategies that use the ICC are recommended for different situations and various designs, and their properties are investigated. The work is extended to under-replicated experiments. It has a Bayesian flavor but avoids the full Bayesian analysis, which has computational complexities and the potential for lack of acceptance among many applied researchers. This study compares the prior-information ICC methods with traditional methods, and suggests situations in which the prior-information ICC methods are preferable to traditional methods and those in which the traditional methods are preferable.
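The intraclass correlation referred to here has the standard variance-components form, and a prior value ρ together with a subsampling-based estimate of the within-unit variance determines the between-unit variance:

\[
\rho \;=\; \frac{\sigma^{2}_{b}}{\sigma^{2}_{b} + \sigma^{2}_{w}},
\qquad
\sigma^{2}_{b} \;=\; \frac{\rho}{1-\rho}\,\sigma^{2}_{w}
\]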
