711

Service Aware Traffic Distribution in Heterogeneous A2G Networks

Tomic, David January 2019 (has links)
Airplanes can connect to the ground in several ways, including satellite air-to-ground communication (SA2GC) and direct air-to-ground communication (DA2GC). Each link offers a different, time-varying amount of transmission capacity over the course of a flight. The traffic generated in the airplane must be forwarded to the ground over the available links, but it is not clear how the traffic should be forwarded so that its quality of service (QoS) requirements are met. This thesis considers this question and implements an algorithm that handles the forwarding decision using three different forwarding schemes. Each scheme assigns a value to every traffic flow, computed from a combination of the flow's priority, its delay requirement, and the number of times it has been dropped. The forwarding algorithm relies on proposed in-flight broadband connectivity (IFBC) traffic and air-to-ground (A2G) link models, which aim to approximate the network environment of future IFBC networks. It is shown that the QoS requirements of traffic flows, in terms of packet loss and delay, cannot be satisfied with the capacities offered by current DA2GC and SA2GC technology. In a future scenario with higher assumed link capacities, the QoS requirements are met to a greater extent, reflected in lower packet loss and delay for the respective traffic flows. Further, performance can be improved through the choice of forwarding scheme used by the forwarding algorithm. It is also investigated how a web cache can be used as a fallback technology; for this, a required web cache hit rate is determined, which must be high enough to offload the network by serving content from the cache. Overall, the thesis aims to propose an efficient traffic forwarding technique and to give insight into an alternative should this technique fall short.
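A minimal sketch of the kind of per-flow scoring such a forwarding scheme might use is shown below. The field names, weights, and the linear combination are illustrative assumptions, not the thesis's actual formulation; only the three inputs (priority, delay requirement, drop count) come from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    priority: int        # higher means more important (assumed 1..10 scale)
    delay_req_ms: float  # maximum tolerable delay for this flow
    drops: int = 0       # times this flow has been dropped so far

def flow_score(f: Flow, w_prio: float = 1.0, w_delay: float = 100.0,
               w_drop: float = 0.5) -> float:
    """Combine priority, delay requirement and drop history into a single
    forwarding value; the weights are illustrative, not the thesis's."""
    urgency = 1.0 / f.delay_req_ms   # tighter deadline -> higher urgency
    return w_prio * f.priority + w_delay * urgency + w_drop * f.drops

# Flows with the highest value get mapped to A2G link capacity first.
flows = [Flow(5, 100.0, 2), Flow(8, 400.0), Flow(3, 50.0, 1)]
for f in sorted(flows, key=flow_score, reverse=True):
    print(f"{f} -> {flow_score(f):.2f}")
```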
712

Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing

Pons Escat, Lucía 01 September 2023 (has links)
One of the main concerns of today's data centers is to maximize server utilization. In each server processor, multiple applications are executed concurrently, increasing resource efficiency. However, performance and fairness highly depend on the share of resources that each application receives, leading to performance unpredictability. The rising number of cores (and running applications) with every new generation of processors is leading to a growing concern for interference at the shared resources. This thesis focuses on addressing resource interference when different applications are consolidated on the same server processor, from two main perspectives: high-performance computing (HPC) and cloud computing. In the context of HPC, resource management approaches are proposed to reduce inter-application interference at two major critical resources: the last-level cache (LLC) and the processor cores. The LLC plays a key role in the performance of current multi-cores by reducing the number of long-latency main memory accesses. LLC partitioning approaches are proposed for both inclusive and non-inclusive LLCs, as both designs are present in current server processors. In both cases, newly problematic LLC behaviors are identified and efficiently detected, granting a larger cache share to the applications that make the best use of the LLC space. As for processor cores, many parallel applications, like graph applications, do not scale well with an increasing number of cores. Moreover, the default Linux time-sharing scheduler performs poorly when running graph applications, which process vast amounts of data. To maximize system utilization, this thesis proposes to co-locate multiple graph applications on the same server processor, assigning the optimal number of cores to each one and dynamically adapting the number of threads spawned by the running applications. When studying the impact of shared system resources on cloud computing, this thesis addresses three major challenges: the complex infrastructure of cloud systems, the nature of cloud applications, and the impact of inter-VM interference. Firstly, it presents the experimental platform developed to perform representative cloud studies with the main cloud system components (hardware and software). Secondly, an extensive characterization study is presented on a set of representative latency-critical workloads which must meet strict quality of service (QoS) requirements. The aim of these studies is to outline issues cloud providers should consider to improve performance and resource utilization. Finally, we propose an online approach that detects and accurately estimates inter-VM interference when co-locating multiple latency-critical VMs. The approach relies on metrics that can be easily monitored in the public cloud, as VMs must be handled as "black boxes". The research is carried out under the restrictions and requirements needed to be applicable to public cloud production systems. In summary, this thesis addresses contention in the main shared system resources in the context of server consolidation, both in HPC and cloud computing. Experimental results show that important gains are obtained over the Linux OS scheduler by reducing interference. In inclusive LLCs, turnaround time (TT) is reduced by over 40% while improving IPC by more than 3%. In non-inclusive LLCs, fairness and TT are improved by 44% and 24%, respectively, while improving performance by up to 3.5%. By distributing core resources efficiently, almost perfect fairness can be obtained (94%), and TT can be reduced by up to 80%. In cloud computing, performance degradation due to resource contention can be estimated with an overall prediction error of 5%. All the approaches proposed in this thesis have been designed to be applied in commercial server processors without requiring any prior information, making decisions dynamically with data collected from hardware performance counters. / Pons Escat, L. (2023). Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/195840
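The partitioning idea, granting more LLC space to the applications that use it best, can be sketched as a greedy, utility-based way allocation in the spirit of utility-based cache partitioning. The hit-rate curves and the greedy policy below are assumptions for illustration, not the thesis's actual algorithm.

```python
def allocate_ways(curves, total_ways):
    """Greedy way partitioning: start with one way per application, then
    repeatedly grant the next way to the application with the largest
    marginal hit-rate gain. curves[a][w] is the assumed hit rate of app
    `a` when it owns `w` LLC ways (e.g. obtained by profiling)."""
    alloc = {a: 1 for a in curves}
    for _ in range(total_ways - len(alloc)):
        gain = lambda a: curves[a][alloc[a] + 1] - curves[a][alloc[a]]
        best = max(curves, key=gain)
        alloc[best] += 1
    return alloc

# Hypothetical hit-rate curves for three co-running applications.
curves = {
    "cache-friendly": [0, .30, .50, .60, .65, .68, .70, .71, .72],
    "cache-fitting":  [0, .20, .35, .45, .52, .57, .60, .62, .63],
    "streaming":      [0, .05, .06, .07, .07, .07, .07, .07, .07],
}
print(allocate_ways(curves, total_ways=8))
```

The streaming-like application, which gains almost nothing from extra space, is confined to a single way, leaving the capacity to the applications whose hit rate actually grows with it.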
713

Une approche fréquentielle pratique pour l'échantillonnage adaptatif en espace image / A practical frequency-based approach to adaptive image-space sampling

Dubouchet, Renaud Adrien 10 1900 (has links)
In realistic image synthesis, a pixel's final intensity is computed by estimating a multi-dimensional shading integral. A large part of the research in this domain is thus aimed at finding new techniques to reduce the computational cost of rendering while preserving the fidelity and correctness of the resulting images. When trying to reduce rendering costs to approach real-time computation, complex realistic effects are often left aside or replaced by clever but mathematically incorrect tricks. To accelerate rendering, previous lines of work have either addressed the computation of individual pixels by improving the underlying numerical integration routines, or have sought to amortize the computation across regions of an image using adaptive methods based on predictive models of light transport. The objective of this thesis, and of the resulting paper, is to build upon the latter class of methods [Durand2005] and to advance fast adaptive rendering techniques that use frequency-based light transport analysis to efficiently guide and prioritize ray tracing. We thus propose an adaptive sampling and reconstruction approach to render animated scenes lit by environment lighting, faithfully reconstructing all-frequency shading effects such as shadows and reflections while preserving temporal coherency.
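As a rough illustration of how a frequency analysis can drive sampling, the sketch below distributes a fixed ray budget over image regions in proportion to a predicted local frequency bound. The bandwidth values and the proportional rule are assumptions, not the paper's actual predictor.

```python
import numpy as np

def sample_budget(bandwidth: np.ndarray, total_samples: int) -> np.ndarray:
    """Distribute a ray budget across image regions in proportion to a
    predicted local frequency bound, with at least one sample each."""
    weights = bandwidth / bandwidth.sum()
    return np.maximum(1, np.round(weights * total_samples)).astype(int)

# Regions near sharp shadows get most of the samples (values assumed).
local_bandwidth = np.array([0.5, 4.0, 9.0, 1.5])
print(sample_budget(local_bandwidth, total_samples=256))
```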
714

Le Sylvicole inférieur au Méganticois : le cas du site Nepress (BiEr-21) / The Early Woodland in the Méganticois: the case of the Nepress site (BiEr-21)

Provençal, Julie 01 1900 (has links)
The discovery of the Nepress site (BiEr-21) in 2004 and the subsequent excavation seasons have revealed many archaeological remains. This thesis seeks to determine the cultural identity of the site's occupants, taking into account their ritual activities and their lithic procurement strategy. To achieve this, a morpho-metric analysis of the lithic assemblage was undertaken. The intra-site artifact distribution was also taken into account. A chronological sequence for Northeastern North America going back to the Early Woodland is presented. The Early Woodland appears to dominate the occupation of the Nepress site. This occupation is characterised by the presence of diagnostic artifacts of the Meadowood culture: a triangular bifacial Meadowood scraper, as well as an imitation of a box-base projectile point.
715

Amélioration de la qualité d'expérience vidéo en combinant streaming adaptif, caching réseau et multipath / Combining in-network caching, HTTP adaptive streaming and multipath to improve video quality of experience

Poliakov, Vitalii 11 December 2018 (has links)
Video traffic has grown considerably in recent years and is forecast to reach 82% of total Internet traffic by 2021, doubling its volume compared to today. Such growth overloads Internet Service Providers' (ISPs') networks, which negatively impacts users' Quality of Experience (QoE). This thesis tackles the problem of improving users' video QoE without relying on network upgrades, by combining in-network caching, HTTP Adaptive Streaming (HAS), and multipath data transport. We start by exploring the interaction between HAS and caching: we confirm the need for cache-awareness in quality adaptation algorithms and propose such an extension to a state-of-the-art optimization-based algorithm. Concluding on the difficulty of achieving cache-awareness, we take a step back to study a video delivery system at large scale, where in-network caches are represented by Content Delivery Networks (CDNs). CDNs deploy caches inside ISPs and also operate their own video servers outside the ISPs. As a novelty, we consider users with simultaneous connectivity to several ISP networks. This allows video clients either to access the outside servers over multiple paths with aggregated bandwidth (which may increase their QoE but brings more traffic into the ISPs), or to stream their content from a closer cache over a single path (bringing less traffic into the ISPs). This disagreement between ISP and CDN objectives leads to suboptimal system performance. In response, we develop a collaboration scheme between the two actors whose performance approaches the optimal bound in certain settings, and discuss its practical implementation.
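A minimal sketch of what cache-aware quality adaptation could look like follows. The bitrate ladder, the throughput bonus applied to cached representations, and the safety margin are all assumptions for illustration, not the algorithm proposed in the thesis.

```python
def pick_quality(bitrates_kbps, throughput_kbps, cached, cache_bonus=1.25):
    """Cache-aware rate selection sketch: a representation present in a
    nearby cache can be fetched faster than via the origin path, so it is
    treated as sustainable at `cache_bonus` times the measured throughput.
    All numeric values are assumed."""
    best = min(bitrates_kbps)
    for b in sorted(bitrates_kbps):
        budget = throughput_kbps * (cache_bonus if b in cached else 1.0)
        if b <= 0.9 * budget:   # keep a 10% safety margin
            best = b
    return best

# The 3000 kbps rung is cached, so it wins despite ~3200 kbps throughput.
print(pick_quality([700, 1500, 3000, 6000], throughput_kbps=3200, cached={3000}))
```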
716

Mobility Metrics for Routing in MANETs

Xu, Sanlin January 2007 (has links)
A Mobile Ad hoc Network (MANET) is a collection of wireless mobile nodes forming a temporary network without the need for base stations or any other pre-existing network infrastructure. In a peer-to-peer fashion, mobile nodes can communicate with each other using wireless multihop communication. Due to its low cost, high flexibility, fast network establishment and self-reconfiguration, ad hoc networking has received much interest during the last ten years. However, without a fixed infrastructure, frequent path changes cause significant numbers of routing packets to be spent discovering new paths, leading to increased network congestion and transmission latency compared with fixed networks. Many on-demand routing protocols have been developed, using various routing mobility metrics to choose the most reliable routes while dealing with the primary obstacle caused by node mobility.

In the first part, we develop an analysis framework for mobility metrics under a random mobility model. Unlike previous research, where mobility metrics were mostly studied by simulation, we derive analytical expressions for the mobility metrics, including link persistence, link duration, link availability, link residual time, link change rate and their path equivalents. We also show relationships between the different metrics, where they exist. These exact expressions constitute precise mathematical relationships between network connectivity and node mobility.

We further validate our analysis framework with the Random Walk Mobility Model (RWMM). For both constant and randomly varying node velocity, we construct the transition matrix of a Markov chain model through analysis of the PDF of node separation after one epoch. In addition, we present intuitive and simple expressions for the link residual time and link duration under the RWMM, which relate them directly to the ratio between transmission range and node speed. We also illustrate the relationship between link change rate and link duration. Finally, simulation results for all the mentioned mobility metrics are reported, and they match the proposed analytical framework well.

In the second part, we investigate applications of the mobility metrics to caching strategies and hierarchical routing algorithms. When on-demand routing is employed, stale route cache information and frequent new-route discovery processes in MANETs generate considerable routing delay and overhead. This thesis proposes a practical route caching strategy that minimizes routing delay and/or overhead by setting the route cache timeout to a mobility metric, the expected path residual time. The strategy is independent of network traffic load and adapts to various non-identical link duration distributions, so it is feasible to implement in a real-time route caching scheme. Calculated results show that the routing delay achieved by the route caching scheme is only marginally more than the theoretically determined minimum. Simulation in NS-2 demonstrates that the end-to-end delay of DSR routing can be remarkably reduced by our caching scheme. Using an overhead analysis model, we demonstrate that the minimum routing overhead can be achieved by increasing the timeout to around twice the expected path residual time, without a significant increase in routing delay.

Apart from the route cache, this thesis also addresses a link cache strategy, which has the potential to utilize route information more efficiently than a route cache scheme. Unlike previous link cache schemes, which delete links at some fixed time after they enter the cache, we propose using either the expected path duration or the link residual time as the link cache timeout. Simulation results in NS-2 show that both of the proposed link caching schemes can improve network performance in DSR by reducing dropped data packets, latency and routing overhead, with the link residual time scheme outperforming the path duration scheme.

To deal with large-scale MANETs, this thesis presents an adaptive k-hop clustering algorithm (AdpKHop), which selects clusterheads (CHs) by our CH selection metrics. The proposed CH selection criteria ensure that the chosen CHs are closer to the cluster centroid and more stable than other cluster members with respect to node mobility. Using a merging threshold based on the CH selection metric, 1-hop clusters can merge into k-hop clusters, where the size of each k-hop cluster adapts to the node mobility of the chosen CH. Moreover, we propose a routing overhead analysis model for the k-hop clustering algorithm, which is determined by a range of network parameters, such as link change rate (related to node mobility), node degree and cluster density. Through the overhead analysis, we show that an optimal k-hop cluster density does exist and is independent of node mobility. The corresponding optimal cluster merging threshold can therefore be employed to organise k-hop clusters efficiently and achieve minimum routing overhead, which is highly desirable in large-scale networks.

The work presented in this thesis provides a sound basis for future research on mobility analysis for mobile ad hoc networks, in aspects such as mobility metrics, caching strategies and k-hop clustering routing protocols.
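A toy sketch of the route-cache timeout idea follows. Only the structure comes from the abstract (link residual time tied to the range/speed ratio, a path living only as long as its weakest link, and a timeout of roughly twice the expected path residual time to minimize overhead); the model constant, node speeds, and the use of a minimum over per-link expectations are simplifying assumptions.

```python
def expected_link_residual_time(tx_range_m, speed_mps, c=1.0):
    """The abstract relates link residual time directly to the ratio of
    transmission range to node speed; c is an assumed model constant."""
    return c * tx_range_m / speed_mps

def expected_path_residual_time(link_times):
    """A multihop path survives only as long as its shortest-lived link.
    Taking the min of per-link expectations is a rough approximation."""
    return min(link_times)

# Hypothetical 3-hop path: 250 m radio range, three relay speeds.
links = [expected_link_residual_time(250.0, v) for v in (5.0, 12.0, 8.0)]
t_path = expected_path_residual_time(links)
route_cache_timeout = t_path           # aims at minimal routing delay
overhead_optimal_timeout = 2 * t_path  # ~minimum overhead, per the abstract
print(f"path residual ~ {t_path:.1f} s, "
      f"overhead-optimal timeout ~ {overhead_optimal_timeout:.1f} s")
```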
717

Adéquation Algorithme Architecture pour la reconstruction 3D en imagerie médicale TEP / Algorithm-architecture matching for 3D reconstruction in PET medical imaging

Gac, Nicolas 17 July 2008 (has links) (PDF)
The steady improvement in the dynamic and temporal resolution of scanners and of reconstruction methods in medical imaging comes with a growing need for computing power. Software, algorithmic and hardware accelerations are thus called upon to close the technological gap between acquisition systems and reconstruction systems. In this context, a hardware architecture for 3D backprojection in Positron Emission Tomography (PET) is proposed. To overcome the technological bottleneck posed by the high latency of external SDRAM memories, the best algorithm-architecture matching was sought. This architecture was implemented on a SoPC (System on Programmable Chip) and its performance compared with that of a PC, a compute server and a graphics card. Combined with a hardware 3D projection module, this architecture defines a hardware projection/backprojection pair and thus forms a complete reconstruction system.
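To make the memory-access problem concrete, here is a naive, software-only backprojection sketch, reduced to 2D for brevity (the thesis targets 3D PET in hardware). The per-voxel gather from a data-dependent sinogram address is the kind of access pattern that high-latency external SDRAM penalizes.

```python
import numpy as np

def backproject(sino, angles, n):
    """Naive voxel-driven backprojection: each voxel accumulates the
    sinogram sample its projection falls on. The scattered, data-dependent
    reads of `row[k]` illustrate why SDRAM latency becomes the bottleneck."""
    img = np.zeros((n, n))
    coords = np.arange(n) - n / 2.0
    for row, theta in zip(sino, angles):
        for i, y in enumerate(coords):
            for j, x in enumerate(coords):
                s = x * np.cos(theta) + y * np.sin(theta) + n / 2.0
                k = int(s)
                if 0 <= k < row.size:
                    img[i, j] += row[k]
    return img

angles = np.linspace(0.0, np.pi, 8, endpoint=False)
sino = np.ones((8, 16))          # toy sinogram: 8 angles x 16 bins
image = backproject(sino, angles, n=16)
print(image.shape, image.max())
```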
718

Fédération de données semi-structurées avec XML / Federation of semi-structured data with XML

Dang Ngoc, Tuyet Tram 10 June 2003 (has links) (PDF)
Unlike traditional data, semi-structured data are irregular: data may be missing, similar concepts may be represented by different data types, and the structures themselves may be poorly known. This absence of a predefined schema, which makes it possible to accommodate any data from the outside world, has the drawback of complicating the algorithms for integrating data from different sources.

We propose a mediation architecture based entirely on XML. The goal of this mediation architecture is to federate distributed data sources of different types. It relies on XQuery, a functional language designed for querying XML documents. The mediator analyzes queries expressed in XQuery, distributes their execution across the various sources, and then recomposes the results.

Query evaluation must exploit the specific characteristics of the data as much as possible and allow effective optimization. We describe XAlgebra, an algebra based on operators designed for XML. Its purpose is to build execution plans for evaluating XQuery queries and to process tuples of XML trees.

These execution plans must be describable by a cost model, and the minimum-cost plan is selected for execution. In this thesis, we define a cost model for semi-structured data suited to our algebra.

Data sources (DBMSs, Web servers, search engines) can be very heterogeneous: they can have very different data-processing capabilities and more or less well-defined cost models. To integrate this information into the mediation architecture, we must determine how to communicate it between the mediator and the sources, and how to integrate it. For this we use XML-based languages such as XML-Schema and MathML to export metadata, cost formulas and source capabilities. The exported information is communicated through an application interface named XML/DBC.

Finally, various optimizations specific to the mediation architecture must be considered. For this we introduce a semantic cache based on a prototype DBMS that stores XML data efficiently in native form.
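As a small illustration of cost-based mediation, the sketch below picks the cheapest of several candidate execution plans from per-source cost estimates. The linear cost shape, plan names and numbers are invented for illustration; in the architecture described above, the actual cost formulas would be exported by the sources (e.g. as MathML) rather than hard-coded.

```python
def plan_cost(plan: dict) -> float:
    """Estimate a plan's cost from per-source cost formulas (assumed
    linear in cardinality here: startup cost plus per-tuple cost)."""
    return sum(src["startup"] + src["per_tuple"] * src["tuples"]
               for src in plan["sources"])

# Two hypothetical plans for the same XQuery: push the join to a capable
# source, or ship tuples from two sources and join at the mediator.
plans = [
    {"name": "push-join-to-rdbms",
     "sources": [{"startup": 5.0, "per_tuple": 0.01, "tuples": 10_000}]},
    {"name": "mediator-side-join",
     "sources": [{"startup": 1.0, "per_tuple": 0.05, "tuples": 10_000},
                 {"startup": 1.0, "per_tuple": 0.05, "tuples": 2_000}]},
]
best = min(plans, key=plan_cost)
print(best["name"], plan_cost(best))
```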
719

Caching Techniques For Dynamic Web Servers

Suresha, * 07 1900 (has links)
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive and personalized experiences to their users. However, dynamic content generation comes at a cost: each request requires computation as well as communication across multiple components within the website and across the Internet. Dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load and bandwidth consumption compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites. A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code. We start by presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches without suffering from their individual limitations; this technique concentrates on reducing the bandwidth consumed by dynamic web pages. We then present mechanisms for reducing dynamic page construction times: during normal loading, through a hybrid technique of fragment caching and page pre-generation that utilizes the excess capacity with which web servers are typically provisioned to handle peak loads; and during peak loading, by integrating fragment caching and code caching, optionally augmented with page pre-generation. In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of achieving reduced bandwidth consumption from the web infrastructure perspective and reduced page construction times from the user perspective.
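A minimal sketch of the fragment caching idea follows: cacheable page pieces are stored with a TTL, so only the truly dynamic fragments are rendered per request. The fragment names, TTLs and markup are illustrative assumptions, not the thesis's implementation.

```python
import time

_cache = {}  # fragment id -> (rendered html, expiry timestamp)

def get_fragment(fid, render, ttl=60.0):
    """Serve a page fragment from cache if still fresh; otherwise render
    it and cache the result for `ttl` seconds."""
    now = time.time()
    entry = _cache.get(fid)
    if entry is not None and entry[1] > now:
        return entry[0]                 # cache hit: no recomputation
    html = render()                     # cache miss: build the fragment
    _cache[fid] = (html, now + ttl)
    return html

def render_page(user):
    header = get_fragment("header", lambda: "<header>Shop</header>", ttl=3600)
    top10 = get_fragment("top10", lambda: "<ul><li>bestsellers</li></ul>", ttl=60)
    greeting = f"<p>Hello, {user}!</p>"  # personalized: rebuilt every request
    return header + greeting + top10

print(render_page("alice"))
```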
720

Adaptivitätssensitive Platzierung von Replikaten in Adaptiven Content Distribution Networks / Adaptation-aware Replica Placement in Adaptive Content Distribution Networks

Buchholz, Sven 14 June 2005 (has links) (PDF)
Adaptive Content Distribution Networks (A-CDNs) are application-independent, distributed infrastructures that use content adaptation and distributed replication of contents to allow the scalable delivery of adaptable multimedia contents to heterogeneous clients. The replica placement in an A-CDN is controlled by the placement mechanisms of the A-CDN. As opposed to traditional CDNs, which do not take content adaptation into consideration, a replica placement mechanism in an A-CDN has to decide not only which object shall be stored in which surrogate but also which representation or representations of the object to replicate. Traditional replica placement mechanisms are incapable of taking different representations of the same object into consideration. That is why A-CDNs that use traditional replica placement mechanisms may only replicate generic or statically adapted representations. The replication of statically adapted representations reduces the sharing of the replicas, while the replication of generic representations incurs adaptation costs and delays with every request. This dissertation therefore proposes adaptation-aware replica placement mechanisms. By taking the adaptability of the contents into account, adaptation-aware replica placement mechanisms may replicate generic, statically adapted, and even partially adapted representations of an object, and are thus able to balance static and dynamic content adaptation. The dissertation evaluates the performance advantages of taking knowledge about the adaptability of contents into consideration when calculating a placement of replicas in an A-CDN. To this end, the problem of adaptation-aware replica placement is formalized as an optimization problem; algorithms for solving the optimization problem are proposed and implemented in a simulator. The underlying simulation model describes an Internet-wide distributed A-CDN that is used to deliver JPEG images to heterogeneous mobile and stationary clients. Based on this simulation model, the performance of the adaptation-aware replica placement mechanisms is evaluated and compared to traditional replica placement mechanisms. The simulations show that the adaptation-aware approach is superior to traditional replica placement mechanisms in many cases, depending on the system and load model as well as the storage capacity of the surrogates of the A-CDN. However, if the loads of different types of clients hardly overlap, or with sufficient storage capacity within the surrogates, the adaptation-aware approach has no significant advantages over traditional replica placement mechanisms.
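A toy sketch of the optimization flavor is shown below: a greedy benefit-density heuristic choosing which representations (generic, statically adapted, or partially adapted) to replicate at a surrogate under a storage budget. The candidate list, sizes and benefit values are invented for illustration, and the dissertation's actual formalization and algorithms may differ.

```python
def place_replicas(candidates, capacity):
    """Adaptation-aware placement sketch: each candidate is one
    representation of an object at a surrogate, with a size and an assumed
    benefit (saved transfer plus saved per-request adaptation cost,
    weighted by the requests it can serve). Greedy by benefit density."""
    chosen, used = [], 0
    for c in sorted(candidates, key=lambda c: c["benefit"] / c["size"],
                    reverse=True):
        if used + c["size"] <= capacity:
            chosen.append(c["repr"])
            used += c["size"]
    return chosen

# The generic representation serves every client but is big; the adapted
# ones are small but shareable only by matching clients (numbers assumed).
candidates = [
    {"repr": ("img42", "generic"),      "size": 10, "benefit": 8.0},
    {"repr": ("img42", "mobile-jpeg"),  "size": 3,  "benefit": 6.0},
    {"repr": ("img42", "desktop-jpeg"), "size": 6,  "benefit": 5.0},
]
print(place_replicas(candidates, capacity=12))
```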
