81 |
Exploring the effects of aperture size, aperture variability and matrix properties on biocolloid transport and retention in a single saturated fracture
Burke, Margaret G. 04 1900
<p>To improve the understanding of contaminant transport, specifically biocolloid transport in fractured media, a series of experiments was conducted on single saturated fractures. Hydraulic and solute tracer tests were used to characterize three separate fractures: one natural fracture and two synthetic fractures. Zeta potentials are reported, showing the highly negative electric charge of the synthetic fractures relative to the natural fracture in the phosphate buffer solution (PBS) used during the biocolloid tracer tests.</p> <p><em>E. coli</em> RS2-GFP tracer tests were conducted on all three fractures at specific discharges of 5 m/d, 10 m/d and 30 m/d. Lower <em>E. coli</em> recovery was consistently observed in the natural fracture, due to 1) attachment, because of the less negative charge of the natural fracture relative to the synthetic fractures; and 2) the presence of dead-end fractures within the fracture matrix. In the synthetic fractures, where surface charges were equal, the larger, more variable aperture unexpectedly yielded lower recoveries than the smaller, less variable aperture. This indicates that aperture variability plays a larger role than aperture size in the retention of biocolloids in fractures.</p> <p>Differential transport was consistently observed in all three fractures, but was more prominent in the synthetic fractures. This indicates that charge exclusion plays a more dominant role than size exclusion in the differential transport of colloids, though size exclusion cannot be ruled out as a retention mechanism on the basis of these experiments. Differential transport was also heavily influenced by specific discharge, as the difference in arrival times between the bromide and <em>E. coli</em> tracers increased in all three fractures as the specific discharge decreased.</p> <p>Visualization tests were completed on the synthetic fractures, showing the location of multiple preferential flow paths as well as areas of low flow.</p>
|
82 |
On the Consistency, Characterization, Adaptability and Integrity of Database Replication Systems
Ruiz Fuertes, María Idoia 30 September 2011
From the appearance of the first distributed databases up to today's modern replication systems, the research community has proposed multiple protocols to manage the distribution and replication of data, together with concurrency control algorithms to handle the transactions running on all the nodes of the system. Many protocols are therefore available, each with different features and performance, and each guaranteeing a different level of consistency. To determine which replication protocol is the most appropriate, two aspects must be considered: the required level of consistency and isolation (i.e., the correctness criterion), and the properties of the system (i.e., the scenario), which determines the achievable performance.
Regarding correctness criteria, one-copy serializability is widely accepted as the highest level of correctness. Its definition, however, admits different interpretations with regard to replica consistency. This thesis establishes a correspondence between memory consistency models, as defined in the field of distributed shared memory, and the possible levels of replica consistency, thereby defining new correctness criteria that correspond to the different interpretations identified for one-copy serializability.
Once the correctness criterion has been selected, the performance achievable by a system depends largely on the scenario, i.e., on the combination of the system environment and the applications running on it. For an administrator to select an appropriate replication protocol, the available protocols must be fully and deeply understood. A good description of each candidate is essential, but a common framework is imperative for comparing the different options and estimating their performance in a given scenario. The results presented in this thesis fulfill the stated objectives and constitute a contribution to the state of the art of database replication at the time the respective works were begun. These results are also relevant because they open the door to possible future contributions. / Ruiz Fuertes, MI. (2011). On the Consistency, Characterization, Adaptability and Integrity of Database Replication Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11800
|
83 |
The Petz (lite) recovery map for scrambling channel / スクランブリングなチャンネルに対するペッツ(ライト)復元写像
Nakayama, Yasuaki 25 March 2024
京都大学 / 新制・課程博士 / 博士(理学) / 甲第25109号 / 理博第5016号 / 新制||理||1715(附属図書館) / 京都大学大学院理学研究科物理学・宇宙物理学専攻 / (主査)教授 橋本 幸士, 教授 杉本 茂樹, 教授 田島 治 / 学位規則第4条第1項該当 / Doctor of Science / Kyoto University / DFAM
|
84 |
Études par dynamique moléculaire de l’interaction de Récepteurs Couplés aux Protéines-G avec leurs partenaires extra et intra-cellulaires / Molecular dynamics studies of the interaction between G-Protein-Coupled Receptors and their extra and intra-cellular partners
Delort, Bartholomé 19 November 2018
Les Récepteurs Couplés aux Protéines-G forment la plus importante famille de protéines membranaires chez l’homme et sont impliqués dans de nombreux processus de signalisation cellulaire. Aussi, ils forment un vivier très important de cibles thérapeutiques, déjà identifiées ou potentielles. L’activation d’un RCPG est amorcée par la liaison d’un ligand dans sa partie extra-cellulaire, modifiant ainsi ses propriétés dynamiques intrinsèques. Ces changements structuraux vont alors se répercuter le long des domaines trans-membranaires et promouvoir la dissociation de la Protéine-G hétéro-trimérique, de l’autre côté de la membrane, propageant ainsi le signal au compartiment intra-cellulaire. Ce processus peut être modulé par la liaison de nombreux autres partenaires des RCPGs. Malgré de nombreuses données structurales existantes, ces mécanismes restent encore mal connus à l’échelle moléculaire. Ainsi, la dynamique moléculaire s’est révélée être un outil formidable pour mieux comprendre ces mécanismes. Toutefois, les échelles de taille et de temps requises pour discuter de la dynamique de ces systèmes membranaires limitent ces études aux laboratoires ayant accès à une très grande puissance de calcul. L’objectif des travaux présentés dans ce manuscrit a été de prédire et de mieux comprendre la dynamique d’interaction de différents récepteurs de cette famille avec leurs partenaires, en développant un protocole de dynamique moléculaire, peu coûteux en ressources de calcul, combinant le champ de forces gros-grains MARTINI à un protocole de dynamique moléculaire « Replica-Exchange ».Dans un premier temps, nous présentons la validation de notre protocole pour la prédiction de la liaison de peptides à leur récepteur avec l’étude des peptides Neurotensine, agoniste du Récepteur de la Neurotensine-1, et CVX15, antagoniste du Récepteur Chemokine C-X-C de type-4. 
Nous montrons également que notre protocole est capable de prédire la sélectivité de plusieurs peptides dérivés de la Neurotensine envers plusieurs récepteurs sauvages et mutés, ne présentant qu’un résidu de différence. Dans un second temps, nous nous sommes intéressés à la dynamique de formation d’un hétéro-dimère de RCPGs impliquant le Récepteur de la Ghréline et le récepteur de la Dopamine D2, couplés aux protéines Gq et Gi respectivement. Ce modèle validé au laboratoire par des mesures LRET montre une interface impliquant une forte complémentarité entre les protéines-G. En se basant sur notre modèle, nous avons conçu et synthétisé des peptides inhibiteurs de la formation de cet hétéro-dimère de protéines-G. Enfin, nous présentons d’autres exemples d’applications de notre protocole et comment il peut être utilisé de concert avec l’expérience avec : la prédiction de la liaison de toxines de serpents aux Récepteurs de la Vasopressine-1a et V2 ; la prédiction de la liaison des peptides Ghréline et Leap2 au Récepteur GHSR-1a et la prédiction de la sélectivité de couplage de différents récepteurs aux peptides C-terminaux de la sous-unité α des protéines-G. / G-Protein-Coupled Receptors form the largest family of human membrane proteins and are involved in many cellular signaling processes. They thus constitute a pool of already identified or potential pharmacological targets. The activation of a GPCR starts with the binding of a ligand in its extra-cellular part, which modifies the receptor's intrinsic dynamical properties. These structural rearrangements are then transmitted along the transmembrane domains and promote the dissociation of the G-protein on the other side of the bilayer, thus propagating the signal into the intra-cellular compartment. This activation process can be modulated by the binding of many other partners of GPCRs. Despite the many structural data now available, these mechanisms are still poorly understood at the molecular scale.
Accordingly, molecular dynamics simulations appear to be a method of choice for obtaining a better description of these mechanisms. Nevertheless, the size and time scales required for the simulation of these membrane systems limit such studies to laboratories with access to large computational facilities. The objective of this work was to predict and obtain a dynamical view of the interactions of several GPCRs with their partners, by developing an affordable molecular dynamics protocol that combines the coarse-grained MARTINI force field with Replica-Exchange MD simulations. In a first step, we validated our protocol by showing its ability to predict the dynamical binding of peptides to their receptors, through the study of Neurotensin, an agonist of the Neurotensin-1 receptor, and CVX15, an antagonist of the CXCR4 chemokine receptor. We also show that the same protocol is able to predict the selectivity of several Neurotensin-derived peptides against several wild-type and mutated receptors differing by a single residue. In a second step, we turned to the dynamical assembly of a GPCR heterodimer involving the Ghrelin and Dopamine D2 receptors, respectively coupled to Gq and Gi proteins. Our model was validated by LRET measurements confirming a large protein:protein interface and a high complementarity between the G-proteins. Based on this model, we designed and synthesized peptides that inhibit the assembly of this G-protein heterodimer. Finally, we describe other applications of our protocol and how it can be used in concert with experiments to: predict the dynamical binding of snake venom toxins to the Vasopressin-1a and Vasopressin-2 receptors; predict the binding of the Ghrelin and Leap2 peptides to their GHSR-1a receptor; and predict the coupling selectivity of several receptors to peptides mimicking the C-terminus of the α subunit of G-proteins.
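The Replica-Exchange ingredient of the protocol can be sketched with the standard Metropolis swap criterion between two replicas simulated at different temperatures. This is a generic textbook rule with invented energies and temperatures, not the implementation used in the thesis:

```python
import math
import random

def swap_accepted(energy_i, energy_j, temp_i, temp_j, rng=random.random):
    """Metropolis criterion for exchanging configurations between two
    replicas running at temperatures temp_i and temp_j (k_B = 1)."""
    beta_i, beta_j = 1.0 / temp_i, 1.0 / temp_j
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    # A swap that moves the lower-energy state to the colder replica
    # (delta >= 0) is always accepted; otherwise accept stochastically.
    if delta >= 0.0:
        return True
    return rng() < math.exp(delta)

# Illustrative values: the cold replica (300 K) currently holds the
# higher-energy state, so exchanging with the hot replica (340 K) is
# always accepted.
print(swap_accepted(energy_i=-90.0, energy_j=-120.0,
                    temp_i=300.0, temp_j=340.0))  # → True
```

In practice each replica runs its own MD trajectory and swaps are attempted periodically between neighbors in the temperature ladder, letting conformations trapped at low temperature escape via the hotter replicas.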
|
85 |
Statistical physics of constraint satisfaction problems
Lamouchi, Elyes 10 1900
La technique des répliques est une technique puissante qui trouve ses origines dans la physique statistique, comme moyen de calculer l'espérance du logarithme de la constante de normalisation d'une distribution de probabilité à haute dimension. Dans le jargon de la physique, cette quantité est connue sous le nom d'énergie libre, et toutes sortes de quantités utiles, telles que l'entropie, peuvent en être obtenues par des dérivées. Cependant, ceci est un problème NP-difficile, qu'une bonne partie de la statistique computationnelle essaye de résoudre, et qui apparaît partout : de la théorie des codes à la statistique en hautes dimensions, en passant par les problèmes de satisfaction de contraintes. Dans chaque cas, la méthode des répliques, et son extension par (Parisi et al., 1987), se sont révélées fort utiles pour éclairer certains aspects concernant la corrélation des variables de la distribution de Gibbs et la nature fortement non convexe de son logarithme négatif. Algorithmiquement, il existe deux principales méthodologies abordant la difficulté de calcul que pose la constante de normalisation :
a) Le point de vue statique : dans cette approche, on reformule le problème en tant que graphe dont les nœuds correspondent aux variables individuelles de la distribution de Gibbs, et dont les arêtes reflètent les dépendances entre celles-ci. Quand le graphe en question est localement un arbre, les procédures de passage de messages sont garanties d'approximer arbitrairement bien les probabilités marginales de la distribution de Gibbs et, de manière équivalente, d'approximer la constante de normalisation. Les prédictions de la physique concernant la disparition des corrélations à longue portée se traduisent donc par le fait que le graphe soit localement un arbre, permettant ainsi l'utilisation d'algorithmes locaux de passage de messages. Ceci sera le sujet du chapitre 4.
b) Le point de vue dynamique : dans une direction orthogonale, on peut contourner le problème que pose le calcul de la constante de normalisation en définissant une chaîne de Markov le long de laquelle l'échantillonnage converge vers la distribution de Gibbs, de sorte qu'après un certain nombre d'itérations (appelé temps de relaxation), les échantillons sont garantis d'être approximativement générés selon elle. Afin de discuter des conditions dans lesquelles chacune de ces approches échoue, il est très utile d'être familier avec la méthode de replica symmetry breaking de Parisi.
Cependant, les calculs nécessaires sont assez compliqués et requièrent des notions qui sont typiquement étrangères à ceux sans formation en physique statistique.
Ce mémoire a principalement deux objectifs : i) fournir une introduction à la théorie des répliques, à ses prédictions et à ses conséquences algorithmiques pour les problèmes de satisfaction de contraintes, et ii) donner un survol des méthodes les plus récentes abordant la transition de phase, prédite par la méthode des répliques, dans le cas du problème k−SAT, du point de vue statique et dynamique, pour finir en proposant un nouvel algorithme qui prend en considération la transition de phase en question. / The replica trick is a powerful analytic technique originating from statistical physics as a means of computing the expectation of the logarithm of the normalization constant of a high-dimensional probability distribution known as the Gibbs measure. In physics jargon this quantity is known as the free energy, and all kinds of useful quantities, such as the entropy, can be obtained from it by simple derivatives. The computation of this normalization constant is, however, an NP-hard problem that a large part of computational statistics attempts to deal with, and which shows up everywhere from coding theory, to high-dimensional statistics, compressed sensing, protein folding analysis and constraint satisfaction problems. In each of these cases, the replica trick, and its extension by (Parisi et al., 1987), have proven incredibly successful at shedding light on key aspects of the correlation structure of the Gibbs measure and the highly non-convex nature of the negative logarithm of the Gibbs measure. Algorithmically speaking, there exist two main methodologies addressing the intractability of the normalization constant:
a) Statics: in this approach, one casts the system as a graphical model whose vertices represent individual variables, and whose edges reflect the dependencies between them. When the underlying graph is locally tree-like, local message-passing procedures are guaranteed to yield near-exact marginal probabilities or, equivalently, to compute the normalization constant Z. The physics prediction of vanishing long-range correlations in the Gibbs measure then translates into the associated graph being locally tree-like, hence permitting the use of message-passing procedures. This will be the focus of chapter 4.
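The exactness of message passing on trees can be seen on a minimal example, not taken from the thesis: a three-variable chain of ±1 spins with pairwise couplings and a field on the middle node, where the sum-product (belief propagation) marginal of the middle variable matches brute-force enumeration. The coupling and field values are invented for illustration:

```python
import itertools
import math

# Toy tree: a chain x1 - x2 - x3 of ±1 variables with pairwise factors
# exp(J*xi*xj) and a local field exp(h*x2) on the middle node.
J, h = 0.5, 0.3
states = (-1, 1)

def pair(xi, xj):
    return math.exp(J * xi * xj)

def field(x2):
    return math.exp(h * x2)

def leaf_message(x2):
    # Sum-product message sent from a leaf variable into x2.
    return sum(pair(x, x2) for x in states)

# BP marginal of x2: local field times the product of incoming messages.
bp = {x2: field(x2) * leaf_message(x2) ** 2 for x2 in states}
z_bp = sum(bp.values())
bp = {k: v / z_bp for k, v in bp.items()}

# Brute-force marginal over all 2^3 configurations, for comparison.
bf = {s: 0.0 for s in states}
for x1, x2, x3 in itertools.product(states, repeat=3):
    bf[x2] += field(x2) * pair(x1, x2) * pair(x2, x3)
z_bf = sum(bf.values())
bf = {k: v / z_bf for k, v in bf.items()}

# On a tree, BP is exact: the two marginals coincide.
print(round(bp[1], 6), round(bf[1], 6))
```

On loopy graphs the same message updates are only approximate, which is where the locally tree-like structure discussed above becomes essential.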
b) Dynamics: in an orthogonal direction, we can altogether bypass the issue of computing the normalization constant by defining a Markov chain along which sampling converges to the Gibbs measure, such that after a number of iterations known as the relaxation time, samples are guaranteed to be approximately distributed according to the Gibbs measure. To understand the conditions in which each of the two approaches is likely to fail (strong long-range correlations, high energy barriers, etc.), it is very helpful to be familiar with the so-called replica symmetry breaking picture of Parisi. The computations involved are, however, quite intricate, and come with a number of prescriptions and prerequisite notions (such as large deviation principles and saddle-point approximations) that are typically foreign to those without a statistical physics background. The purpose of this thesis is then twofold: i) to provide a self-contained introduction to replica theory, its predictions, and its algorithmic implications for constraint satisfaction problems, and ii) to give an account of state-of-the-art methods addressing the predicted phase transitions in the case of k−SAT, from both the statics and dynamics points of view, and to propose a new algorithm that takes these into consideration.
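The replica identity referred to above can be stated compactly in its standard textbook form (β is the inverse temperature, H the Hamiltonian, and the expectation is over the disorder; notation may differ from the thesis):

```latex
Z = \sum_{\sigma} e^{-\beta H(\sigma)},
\qquad
F = -\frac{1}{\beta}\,\mathbb{E}\left[\ln Z\right],
\qquad
\mathbb{E}\left[\ln Z\right]
  = \lim_{n \to 0} \frac{\mathbb{E}\left[Z^{n}\right] - 1}{n}
```

One computes E[Z^n] for integer n, i.e., for n coupled copies of the system (the replicas), and then analytically continues the result to n → 0.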
|
86 |
Transport électronique et Verres de Spins
Paulin, Guillaume 22 June 2010
The results reported in this thesis contribute to the understanding of disordered systems: to mesoscopic physics on the one hand, and to the physics of spin glasses on the other. The first part of this thesis numerically studies coherent electronic transport in a non-magnetic metal doped with frozen magnetic impurities (a spin glass at low temperature). Thanks to a recursive code that calculates the two-terminal conductance of the system, we study in detail the metallic regime of conduction (large conductance) as well as the insulating regime (small conductance). In both regimes, we highlight a universal behavior of the system. Moreover, a study of the correlations between the conductances of different spin configurations of the impurities allows us to relate these conductance correlations to the correlations between the spin configurations themselves. This study opens the route to the first experimental determination of the overlap via transport measurements. A second part of this thesis deals with the mean-field Sherrington-Kirkpatrick model, which describes the low-temperature phase of an Ising spin glass. We are interested here in the generalization to quantum spins (i.e., spins that can flip by quantum tunneling) of this classical model, which has been well studied over the past thirty years. We analytically derive the equations of motion at the semi-classical level, where the influence of quantum tunneling is weak, and compare them with the classical case. Finally, we solve these equations numerically using a pseudo-spectral method.
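As background for the overlap mentioned above, the Sherrington-Kirkpatrick Hamiltonian and the overlap between two spin configurations a and b take the following standard form (reproduced from the general literature, not from the thesis; normalization conventions vary):

```latex
H_{\mathrm{SK}} = -\frac{1}{\sqrt{N}} \sum_{i<j} J_{ij}\, S_i S_j,
\qquad
J_{ij} \sim \mathcal{N}\!\left(0, J^{2}\right),
\qquad
q_{ab} = \frac{1}{N} \sum_{i=1}^{N} S_i^{a} S_i^{b}
```

The proposed transport experiment would access q_{ab} indirectly, through the correlations between the conductances measured for spin configurations a and b.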
|
87 |
Estudo das propriedades termodinâmicas do modelo de Ashkin-Teller na presença de campo magnético aleatório. / Study of the thermodynamic properties of the Ashkin-Teller model in a random magnetic field
Bernardes, Luiz Antonio Bastos 27 October 1995
A teoria de campo médio para o modelo de Ashkin-Teller com interações ferromagnéticas de longo alcance na presença de campos magnéticos aleatórios foi desenvolvida. Isso foi conseguido através do uso do truque de réplicas para a obtenção da energia livre e do estudo analítico das equações integrais acopladas dos parâmetros de ordem, da estabilidade de suas soluções e das suas expansões para T ≤ Tc. Inicialmente, foram determinadas as expressões gerais das funções termodinâmicas do modelo no caso em que existiam três campos magnéticos aleatórios com distribuições gaussianas. Em seguida, foi examinado o caso particular do modelo com um só campo magnético aleatório na direção de Z = ‹ δ S ›. A estratégia adotada se mostrou poderosa pois possibilitou a caracterização detalhada do diagrama de fases com várias superfícies de coexistência e das linhas de pontos críticos. As equações integrais das funções termodinâmicas desse caso particular foram discutidas e resolvidas numericamente para valores especiais das constantes de interação e da variância. Para o caso particular do modelo na presença de campos magnéticos aleatórios nas direções ‹ S › e ‹ δ ›, foram determinadas e discutidas as expressões das funções termodinâmicas. Foram também obtidas as equações das superfícies de instabilidade da solução paramagnética. Foi provado que a transição entre as fases paramagnética e de Baxter é sempre de primeira ordem. Outro resultado original da tese foi a verificação da existência da simetria de dilatação e contração do modelo de Potts na presença de campos magnéticos aleatórios. Essa simetria permite que o estudo da energia livre no intervalo q∈ (1,2) forneça o comportamento termodinâmico do sistema para todo q>2. / The mean-field theory of the long-range Ashkin-Teller model in random fields was developed.
This was achieved by using the replica trick to obtain the free energy, and by the analytical study of the coupled integral equations for the order parameters, the stability of their solutions, and their expansions for T ≤ Tc. Initially, the general expressions of the thermodynamic functions of the model were determined for the case of three random fields with Gaussian distributions. Next, the particular case of the model with a single random field in the Z = ‹ δ S › direction was examined. The strategy proved powerful, allowing a detailed characterization of the phase diagram with several coexistence surfaces and lines of critical points. The integral equations of the thermodynamic functions for this particular case were discussed and numerically solved for special values of the interaction constants and of the field-distribution variance. For the particular case of the model with random fields in the ‹ S › and ‹ δ › directions, the expressions of the thermodynamic functions were also determined and discussed. The equations of the instability surfaces for the paramagnetic solution were obtained, and it was proved that the transition between the paramagnetic and Baxter phases is always of first order. Another original result of the thesis is the verification of the existence of the dilation and contraction symmetry of the Potts model in random fields. This symmetry implies that the study of the free energy in the interval q ∈ (1,2) yields the thermodynamic behavior of the system for all q > 2.
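For orientation, a standard form of the Ashkin-Teller model with two Ising variables S_i and δ_i per site can be written as follows. This is a generic sketch using the abstract's ‹S›, ‹δ›, ‹δS› notation; the names of the coupling constants and the assignment of the three random fields are assumptions of this illustration, and in the long-range mean-field version studied here the pair sum runs over all pairs of sites:

```latex
H = -\sum_{i<j} \Bigl[ J \left( S_i S_j + \delta_i \delta_j \right)
      + K\, S_i S_j\, \delta_i \delta_j \Bigr]
    - \sum_i \left( h_i S_i + g_i \delta_i + L_i\, \delta_i S_i \right)
```

Here the Gaussian random fields h_i, g_i and L_i couple to the three order parameters ‹S›, ‹δ› and Z = ‹δS› discussed in the abstract.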
|
88 |
SUN PIECE : actions of cutting
Gram, Greta January 2013
This work explores how to use Event scores as a design method. In the search for what is real, or what reality is, already existing things are explored. The work started by investigating suitable ways of working with the moving body in the design process, with the aim of finding a method that gave control but also left some parameters undecided and ambiguous. Convinced that this would lead to something new, some parts of the process were highlighted and re-formulated. / Program: Modedesignutbildningen
|
89 |
Adaptivitätssensitive Platzierung von Replikaten in Adaptiven Content Distribution Networks / Adaptation-aware Replica Placement in Adaptive Content Distribution Networks
Buchholz, Sven 14 June 2005
Adaptive Content Distribution Networks (A-CDNs) sind anwendungsübergreifende, verteilte Infrastrukturen, die auf Grundlage verteilter Replikation von Inhalten und Inhaltsadaption eine skalierbare Auslieferung von adaptierbaren multimedialen Inhalten an heterogene Clients ermöglichen. Die Platzierung der Replikate in den Surrogaten eines A-CDN wird durch den Platzierungsmechanismus des A-CDN gesteuert. Anders als in herkömmlichen CDNs, die keine Inhaltsadaption berücksichtigen, muss ein Platzierungsmechanismus in einem A-CDN nicht nur entscheiden, welches Inhaltsobjekt in welchem Surrogat repliziert werden soll, sondern darüber hinaus, in welcher Repräsentation bzw. in welchen Repräsentationen das Inhaltsobjekt zu replizieren ist. Herkömmliche Platzierungsmechanismen sind nicht in der Lage, verschiedene Repräsentationen eines Inhaltsobjektes zu berücksichtigen. Beim Einsatz herkömmlicher Platzierungsmechanismen in A-CDNs können deshalb entweder nur statisch voradaptierte Repräsentationen oder ausschließlich generische Repräsentationen repliziert werden. Während bei der Replikation von statisch voradaptierten Repräsentationen die Wiederverwendbarkeit der Replikate eingeschränkt ist, führt die Replikation der generischen Repräsentationen zu erhöhten Kosten und Verzögerungen für die dynamische Adaption der Inhalte bei jeder Anfrage. Deshalb werden in der Arbeit adaptivitätssensitive Platzierungsmechanismen zur Platzierung von Replikaten in A-CDNs vorgeschlagen. Durch die Berücksichtigung der Adaptierbarkeit der Inhalte bei der Ermittlung einer Platzierung von Replikaten in den Surrogaten des A-CDNs können adaptivitätssensitive Platzierungsmechanismen sowohl generische und statisch voradaptierte als auch teilweise adaptierte Repräsentationen replizieren. Somit sind sie in der Lage statische und dynamische Inhaltsadaption flexibel miteinander zu kombinieren. 
Das Ziel der vorliegenden Arbeit ist zu evaluieren, welche Vorteile sich durch die Berücksichtigung der Inhaltsadaption bei Platzierung von adaptierbaren Inhalten in A-CDNs realisieren lassen. Hierzu wird das Problem der adaptivitätssensitiven Platzierung von Replikaten in A-CDNs als Optimierungsproblem formalisiert, Algorithmen zur Lösung des Optimierungsproblems vorgeschlagen und diese in einem Simulator implementiert. Das zugrunde liegende Simulationsmodell beschreibt ein im Internet verteiltes A-CDN, welches zur Auslieferung von JPEG-Bildern an heterogene mobile und stationäre Clients verwendet wird. Anhand dieses Simulationsmodells wird die Leistungsfähigkeit der adaptivitätssensitiven Platzierungsmechanismen evaluiert und mit der von herkömmlichen Platzierungsmechanismen verglichen. Die Simulationen zeigen, dass der adaptivitätssensitive Ansatz in Abhängigkeit vom System- und Lastmodell sowie von der Speicherkapazität der Surrogate im A-CDN in vielen Fällen Vorteile gegenüber dem Einsatz herkömmlicher Platzierungsmechanismen mit sich bringt. Wenn sich die Anfragelasten verschiedener Typen von Clients jedoch nur wenig oder gar nicht überlappen oder bei hinreichend großer Speicherkapazität der Surrogate hat der adaptivitätssensitive Ansatz keine signifikanten Vorteile gegenüber dem Einsatz eines herkömmlichen Platzierungsmechanismus. / Adaptive Content Distribution Networks (A-CDNs) are application independent, distributed infrastructures using content adaptation and distributed replication of contents to allow the scalable delivery of adaptable multimedia contents to heterogeneous clients. The replica placement in an A-CDN is controlled by the placement mechanisms of the A-CDN. As opposed to traditional CDNs, which do not take content adaptation into consideration, a replica placement mechanism in an A-CDN has to decide not only which object shall be stored in which surrogate but also which representation or which representations of the object to replicate. 
Traditional replica placement mechanisms are incapable of taking different representations of the same object into consideration. That is why A-CDNs that use traditional replica placement mechanisms may only replicate generic or statically adapted representations. The replication of statically adapted representations reduces the sharing of the replicas, while the replication of generic representations incurs adaptation costs and delays with every request. That is why this dissertation proposes the application of adaptation-aware replica placement mechanisms. By taking the adaptability of the contents into account, adaptation-aware replica placement mechanisms may replicate generic, statically adapted and even partially adapted representations of an object. Thus, they are able to balance static and dynamic content adaptation. The dissertation evaluates the performance advantages of taking knowledge about the adaptability of the contents into account when computing a placement of replicas in an A-CDN. To this end, the problem of adaptation-aware replica placement is formalized as an optimization problem; algorithms for solving the optimization problem are proposed and implemented in a simulator. The underlying simulation model describes an Internet-wide distributed A-CDN that is used for the delivery of JPEG images to heterogeneous mobile and stationary clients. Based on the simulation model, the performance of the adaptation-aware replica placement mechanisms is evaluated and compared to that of traditional replica placement mechanisms. The simulations show that, depending on the system and load model as well as on the storage capacity of the surrogates of the A-CDN, the adaptation-aware approach is superior to the traditional replica placement mechanisms in many cases.
However, if the loads of the different types of clients hardly overlap, or if the surrogates have sufficient storage capacity, the adaptation-aware approach has no significant advantage over traditional replica placement mechanisms.
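As a toy illustration of the kind of trade-off an adaptation-aware placement mechanism optimizes: each candidate replica is an (object, representation) pair with a storage size and an expected benefit, and a surrogate fills its capacity by benefit density. All names, sizes and benefit values below are invented, and the greedy heuristic is a generic knapsack-style sketch, not one of the algorithms proposed in the thesis:

```python
def greedy_placement(candidates, capacity):
    """Pick (object, representation) replicas for one surrogate.

    candidates: list of (object_id, representation, size, benefit) tuples
    capacity:   storage capacity of the surrogate

    Greedy knapsack heuristic: rank by benefit per unit size, take what
    fits, and keep at most one representation per object.
    """
    chosen, used, seen = [], 0, set()
    ranked = sorted(candidates, key=lambda c: c[3] / c[2], reverse=True)
    for obj, rep, size, benefit in ranked:
        if obj in seen or used + size > capacity:
            continue
        chosen.append((obj, rep))
        used += size
        seen.add(obj)
    return chosen

# Hypothetical candidates: a generic representation is larger but serves
# every client type; a pre-adapted one is smaller but less reusable.
candidates = [
    ("img1", "generic", 100, 90),
    ("img1", "mobile", 30, 60),
    ("img2", "generic", 80, 40),
    ("img2", "mobile", 25, 35),
]
print(greedy_placement(candidates, capacity=60))
# → [('img1', 'mobile'), ('img2', 'mobile')]
```

With a tight capacity the pre-adapted representations win on density; with ample capacity or a partially adapted middle ground in the candidate set, the same ranking can favor more reusable representations, which is the balance the adaptation-aware mechanisms aim to strike.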
|
90 |
Enhanced Conformational Sampling of Proteins Using TEE-REX / Verbessertes Sampling von Proteinkonformationen durch TEE-REX
Kubitzki, Marcus 11 December 2007
No description available.
|