221

Rede auto-organizada utilizando chaveamento de pacotes ópticos. / Self-organized network architecture deployed by the utilization of optical packet switching technology.

Sachs, Antonio de Campos 27 April 2011 (has links)
A tecnologia de chaveamento de pacotes ópticos comumente utiliza componentes muito complexos, relegando sua viabilidade para o futuro. A utilização de pacotes ópticos, entretanto, é uma boa opção para melhorar a granularidade dos enlaces ópticos, bem como para tornar os processos de distribuição de banda muito mais eficientes e flexíveis. Esta tese propõe simplificações nas chaves ópticas que além de tornarem o pacote óptico viável para um futuro mais próximo, permitem montar redes ópticas complexas, com muitos nós, que operam de maneira auto-organizada. A rede proposta nesta tese não possui sinalização para reserva ou estabelecimento de caminho. As rotas são definidas pacote a pacote, em tempo real, durante o seu percurso, utilizando roteamento por deflexão. Com funções muito simples realizadas localmente, a rede ganha características desejáveis como: alta escalabilidade e eficiente sistema de proteção de enlace. Estas características desejáveis são tratadas como funções da rede que emergem de funções realizadas em cada um dos nós de rede individualmente. A tese apresenta um modelo analítico estatístico, validado por simulação, para caracterização da rede. No sistema de proteção contra falhas, os cálculos realizados para redes com até 256 nós mostram que o aumento do número médio de saltos ocorre apenas para destinos localizados no entorno da falha. Para demonstrar a viabilidade de construção de chave óptica rápida simplificada utilizando somente componentes já disponíveis no mercado foi montado um protótipo, que mostrou ter um tempo de chaveamento inferior a dois nanossegundos, sendo compatível com as operações de chaveamento de pacotes ópticos. / Optical Packet Switching (OPS) technology usually involves complex and expensive components, relegating its viability to the future. Nevertheless, optical packets are a good option for improving the granularity of high-bit-rate optical links, as well as for making bandwidth distribution much more flexible and efficient. This thesis proposes simplifications of the optical switches that, besides bringing their viability closer, enable the deployment of complex, highly scalable, self-organized optical networks with many nodes. The proposed network operates without resource reservation or prior path establishment. Routes are defined packet by packet, in real time, using deflection routing. With very simple functions performed locally, the network exhibits desirable characteristics such as high scalability and an automatic link-protection system. These desirable characteristics are treated as network functions that emerge from functions performed at each individual node. A statistical analytical model, validated by simulation, is presented to characterize the network. In the investigation of the protection functions, calculations for networks with up to 256 nodes show that the increase in the mean number of hops occurs only for destinations in the neighborhood of the failure. To demonstrate the viability of a simplified fast optical switch built only from commercially available components, a prototype was assembled; its switching time was below two nanoseconds, compatible with optical packet switching operations.
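The deflection-routing behaviour described in the abstract - routes chosen packet by packet, with no reservation signalling - can be illustrated with a minimal sketch. The model below is hypothetical (it is not the thesis's actual switch design): a node forwards each packet on its preferred shortest-path link when that link is free in the current time slot, and otherwise deflects it onto any other free link instead of buffering it.

    # Minimal deflection-routing sketch (hypothetical node model, not the thesis's switch design).
    # A packet is forwarded on the preferred (shortest-path) link if it is free in the current
    # time slot; otherwise the packet is deflected onto any other free link.
    import random

    def route_packet(preferred_link, links_busy):
        """Return the output link chosen for this packet, or None if all links are busy."""
        if not links_busy[preferred_link]:
            return preferred_link                      # forward along the shortest path
        free = [i for i, busy in enumerate(links_busy) if not busy]
        return random.choice(free) if free else None   # deflect (or drop if nothing is free)

    # Example: 4 output links, preferred link 2 already carries a packet in this slot.
    busy = [False, True, True, False]
    print(route_packet(2, busy))   # deflects to link 0 or 3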
222

Contributions à l'amélioration de l'extensibilité de simulations parallèles de plasmas turbulents / Towards highly scalable parallel simulations for turbulent plasma physics

Rozar, Fabien 05 November 2015 (has links)
Les besoins en énergie dans le monde sont croissants alors que les ressources nécessaires pour la production d'énergie fossile s'épuisent d'année en année. Un des moyens alternatifs pour produire de l'énergie est la fusion nucléaire par confinement magnétique. La maîtrise de cette réaction est un défi et constitue un domaine actif de recherche. Pour améliorer notre connaissance des phénomènes qui interviennent lors de la réaction de fusion, deux approches sont mises en oeuvre : l'expérience et la simulation. Les expérience réalisées grâce aux Tokamaks permettent de prendre des mesures. Ceci nécessite l'utilisation des technologiques les plus avancées. Actuellement, ces mesures ne permettent pas d'accéder à toutes échelles de temps et d'espace des phénomènes physiques. La simulation numérique permet d'explorer ces échelles encore inaccessibles par l'expérience. Les ressources matérielles qui permettent d'effectuer des simulations réalistes sont conséquentes. L'usage du calcul haute performance (High Performance Computing HPC) est nécessaire pour avoir accès à ces simulations. Ceci se traduit par l'exploitation de grandes machines de calcul aussi appelées supercalculateurs. Les travaux réalisés dans cette thèse portent sur l'optimisation de l'application Gysela qui est un code de simulation de turbulence de plasma. L'optimisation d'un code de calcul scientifique vise classiquement l'un des trois points suivants : (i ) la simulation de plus grand domaine de calcul, (ii ) la réduction du temps de calcul et (iii ) l'amélioration de la précision des calculs. La première partie de ce manuscrit présente les contributions concernant la simulation de plus grand domaine. Comme beaucoup de codes de simulation, l'amélioration de la précision de la simulation est souvent synonyme de raffinement du maillage. Plus un maillage est fin, plus la consommation mémoire est grande. De plus, durant ces dernières années, les supercalculateurs ont eu tendance à disposer de moins en moins de mémoire par coeur de calcul. Pour ces raisons, nous avons développé une bibliothèque, la libMTM (Modeling and Tracing Memory), dédiée à l'étude précise de la consommation mémoire d'applications parallèles. Les outils de la libMTM ont permis de réduire la consommation mémoire de Gysela et d'étudier sa scalabilité. À l'heure actuelle, nous ne connaissons pas d'autre outil qui propose de fonctionnalités équivalentes permettant une étude précise de la scalabilité mémoire. La deuxième partie de ce manuscrit présente les travaux concernant l'optimisation du temps d'exécution et l'amélioration de la précision de l'opérateur de gyromoyenne. Cet opérateur est fondamental dans le modèle gyromagnétique qui est utilisé par l'application Gysela. L'amélioration de la précision vient d'un changement de la méthode de calcul : un schéma basé sur une interpolation de type Hermite vient remplacer l'approximation de Padé. Il s'avère que cette nouvelle version de l'opérateur est plus précise mais aussi plus coûteuse en terme de temps de calcul que l'opérateur existant. Afin que les temps de simulation restent raisonnables, différentes optimisations ont été réalisées sur la nouvelle méthode de calcul pour la rendre très compétitive. Nous avons aussi développé une version parallélisée en MPI du nouvel opérateur de gyromoyenne. La bonne scalabilité de cet opérateur de gyromoyenne permettra, à terme, de réduire des coûts en communication qui sont pénalisants dans une application parallèle comme Gysela. 
/ Energy needs around the world keep increasing while the resources needed to produce fossil energy are drained year after year. An alternative way to produce energy is nuclear fusion through magnetic confinement. Mastering this reaction is a challenge and represents an active field of current research. In order to improve our understanding of the phenomena which occur during a fusion reaction, experiment and simulation are both put to use. Experiments performed in Tokamaks provide experimental measurements. This process is of great complexity and requires the use of the most advanced available technologies. Currently, these measurements do not give access to all scales of time and space of the physical phenomena. Numerical simulation permits the exploration of the scales which are still unreachable through experiment. Extreme computing power is mandatory to perform realistic simulations. The use of High Performance Computing (HPC) is necessary to access simulations of realistic cases. This requirement means the use of large computers, also known as supercomputers. The work carried out in this thesis focuses on the optimization of the Gysela code, which simulates plasma turbulence. Optimization of a scientific application mainly concerns one of the three following points: (i) the simulation of larger meshes, (ii) the reduction of computing time and (iii) the enhancement of computation accuracy. The first part of this manuscript presents the contributions relative to the simulation of larger meshes. As in many simulation codes, getting more realistic simulations often amounts to refining the mesh. The finer the mesh, the larger the memory consumption. Moreover, in recent years supercomputers have tended to provide less and less memory per compute core. For these reasons, we have developed a library, the libMTM (Modeling and Tracing Memory), dedicated to the precise study of the memory consumption of parallel software. The libMTM tools allowed us to reduce the memory consumption of Gysela and to study its scalability. As far as we know, there is no other tool that provides equivalent features for studying memory scalability. The second part of the manuscript presents the work relative to the optimization of the computation time and the improvement of accuracy of the gyroaverage operator. This operator is a cornerstone of the gyrokinetic model used by the Gysela application. The improvement of accuracy comes from a change in the computing method: a scheme based on 2D Hermite interpolation replaces the Padé approximation. Although the new version of the gyroaverage operator is more accurate, it is also more expensive in computation time than the former one. In order to keep simulation times reasonable, different optimizations have been performed on the new computing method to make it competitive. Finally, we have developed an MPI-parallelized version of the new gyroaverage operator. The good scalability of this new gyroaverage operator will eventually allow a reduction of MPI communication costs, which are penalizing in a parallel application such as Gysela.
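The abstract contrasts the Padé approximation of the gyroaverage operator with an interpolation-based scheme. As a rough illustration of the underlying idea - simplified, not the Gysela/Hermite implementation - the gyroaverage of a field f at a point can be approximated by averaging f over N points of the Larmor circle of radius rho around that point:

    # Quadrature-style gyroaverage sketch (simplified; not the Gysela/Hermite implementation).
    # The gyroaverage of f at (x, y) is approximated by the mean of f over N points
    # uniformly spread on the Larmor circle of radius rho around (x, y).
    import numpy as np

    def gyroaverage(f, x, y, rho, n_points=8):
        theta = 2.0 * np.pi * np.arange(n_points) / n_points
        samples = f(x + rho * np.cos(theta), y + rho * np.sin(theta))
        return samples.mean()

    # Example with an analytic field. A Padé approximant such as (1 - rho^2/4 * Laplacian)^-1
    # approximates the same operator in Fourier space.
    f = lambda x, y: np.cos(x) * np.sin(y)
    print(gyroaverage(f, x=0.3, y=1.2, rho=0.1))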
224

Towards SDN/NFV-based Mobile Packet Core : Benefits, Challenges, and Potential Solutions

Nguyen, Van-Giang January 2018 (has links)
In mobile networks, the mobile core plays a crucial role in providing connectivity between mobile user devices and external packet data networks such as the Internet. Over the years, along with the dramatic changes in radio access networks, the mobile core has also evolved from a circuit-based analog telephony system in its first generation (1G) into a purely packet-based network called the Evolved Packet Core (EPC) in the current generation (4G). In recent years, the explosion of mobile data traffic and devices and the advent of new services have led to the investigation of the next generation of mobile networks, i.e., 5G. A wide range of technologies has been proposed as candidates for the development of 5G. Among other technology candidates, Software Defined Networking (SDN) and Network Function Virtualization (NFV) have been widely considered to be key enablers for the network architecture of 5G, especially the mobile packet core (MPC) network. This thesis aims to identify the benefits and challenges of introducing SDN and NFV to re-architect the current MPC network architecture towards 5G, and to address some of those challenges. To this end, we conduct a comprehensive literature review of the state-of-the-art work leveraging SDN and NFV to re-design the 4G EPC architecture. Through this survey work, several research questions for future work have been identified and we contribute to addressing two of them in this thesis. Firstly, since most of the current works focus on unicast services, we propose an SDN/NFV-based MPC architecture for providing multicast and broadcast services. Our numerical results show that the proposed architecture can reduce the total signaling cost compared to the traditional architecture. Secondly, we address the question regarding the scalability of the control plane. We take the Mobility Management Entity (MME) - one of the key EPC control plane entities - as a case study. In our work, the MME is deployed as a cluster of multiple virtual instances (vMMEs) and a front-end load balancer. We focus on investigating different approaches to achieve better load balancing among these vMMEs, which in turn improves scalability. Our experimental results suggest that carefully selected load balancing algorithms can significantly reduce the control plane latency. / In mobile networks, the mobile core plays a crucial role in providing connectivity between mobile user devices and external packet data networks such as the Internet. After more than three decades, the mobile core has gradually evolved through four generations and is called the Evolved Packet Core (EPC) in the current generation (4G). In recent years, the explosion of mobile data traffic and devices and the advent of new services have led to the investigation of the next generation of mobile networks, i.e., 5G. Among other technology candidates, Software Defined Networking (SDN) and Network Function Virtualization (NFV) have been widely considered to be key enablers for the network architecture of 5G, especially the mobile packet core (MPC) network. This thesis aims to identify the benefits and challenges of introducing SDN and NFV to re-architect the current MPC architecture towards 5G, and to address some of those challenges. To this end, we conduct a comprehensive survey of the existing SDN/NFV-based MPC architectures. Through this survey work, several research questions for future work have been identified and we contribute to addressing two of the research questions.
Firstly, we propose an SDN/NFV-based MPC architecture for providing multicast and broadcast services. Secondly, we tackle the scalability problem of the Mobility Management Entity (MME) - one of the EPC key control plane entities. In particular, we investigate different approaches to achieve better load balancing among virtual MMEs in a virtual and distributed MME design, which in turn improves scalability. / HITS, 4707
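The load-balancing study described above spreads control-plane requests over a pool of virtual MME instances behind a front-end balancer. A minimal least-loaded dispatcher is sketched below; it only illustrates one possible policy, not the specific algorithms evaluated in the thesis.

    # Minimal least-loaded dispatcher for a pool of virtual MME instances (illustrative only;
    # not the algorithms evaluated in the thesis). Each incoming control-plane request is sent
    # to the vMME currently handling the fewest outstanding requests.
    class Dispatcher:
        def __init__(self, vmme_ids):
            self.load = {vmme: 0 for vmme in vmme_ids}

        def assign(self):
            vmme = min(self.load, key=self.load.get)   # pick the least-loaded instance
            self.load[vmme] += 1
            return vmme

        def finish(self, vmme):
            self.load[vmme] -= 1                        # request completed

    lb = Dispatcher(["vmme-1", "vmme-2", "vmme-3"])
    print([lb.assign() for _ in range(5)])              # requests spread across the pool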
225

Gestion de Mobilité Supportée par le Réseau dans les Réseaux Sans Fil Hétérogènes / Network-based Mobility Management in Heterogeneous Wireless Networks

Nguyen, Huu-Nghia 07 July 2009 (has links) (PDF)
In this thesis, we focus on the deployment of Proxy Mobile IPv6 (PMIPv6) in heterogeneous wireless networks, whose topology may be arbitrary and spontaneous. We first propose the concept of an autonomous group, or cluster, which allows the network to scale. We then propose extensions to PMIPv6, called Scalable Proxy Mobile IPv6 (SPMIPv6), that take the cluster architecture into account through the interaction of multiple Local Mobility Anchors (LMAs). We evaluate the scalability of SPMIPv6 in a wireless mesh network context by varying the network size and the average speed and density of the mobile terminals. In addition, we propose routing optimization methods for SPMIPv6 to reduce communication latency. We also introduce a movement detection mechanism for mobile terminals that takes the heterogeneity of access technologies into account. We implement all of these proposals on Linux in a virtualized environment. We experiment with different scenarios, both in emulation mode and at full scale, to evaluate metrics such as signaling cost, handover latency, packet loss, round-trip time (RTT), and throughput variation. Finally, we address the multihoming context by proposing a concept called virtual Stream Control Transmission Protocol (vSCTP) and applying it to the PMIPv6 architecture. Initial Ns-2 simulations suggest benefits for bandwidth aggregation and load balancing scenarios.
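A toy illustration of the cluster idea behind SPMIPv6 (schematic only, not the actual protocol): mobility bindings are kept per cluster by a Local Mobility Anchor, and locating a mobile node first checks the local LMA before querying the LMAs of other clusters.

    # Toy cluster-based binding lookup (schematic; not the SPMIPv6 protocol itself).
    # Each cluster of access gateways is served by its own LMA; a lookup for a mobile
    # node tries the local LMA first and then falls back to the other LMAs.
    class LMA:
        def __init__(self, name):
            self.name = name
            self.bindings = {}            # mobile node id -> attached access gateway

        def register(self, mn, mag):
            self.bindings[mn] = mag

    def locate(mn, local_lma, all_lmas):
        if mn in local_lma.bindings:      # intra-cluster lookup
            return local_lma.name, local_lma.bindings[mn]
        for lma in all_lmas:              # inter-LMA lookup across clusters
            if mn in lma.bindings:
                return lma.name, lma.bindings[mn]
        return None

    lma1, lma2 = LMA("lma-1"), LMA("lma-2")
    lma2.register("mn-42", "mag-7")
    print(locate("mn-42", lma1, [lma1, lma2]))   # found via the remote LMA: ('lma-2', 'mag-7')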
226

A Study of Scalability and Performance of Solaris Zones

Xu, Yuan January 2007 (has links)
<p>This thesis presents a quantitative evaluation of an operating system virtualization technology known as Solaris Containers or Solaris Zones, with a special emphasis on measuring the influence of a security technology known as Solaris Trusted Extensions. Solaris Zones is an operating system-level (OS-level) virtualization technology embedded in the Solaris OS that primarily provides containment of processes within the abstraction of a complete operating system environment. Solaris Trusted Extensions presents a specific configuration of the Solaris operating system that is designed to offer multi-level security functionality.</p><p>Firstly, we examine the scalability of the OS with respect to an increasing number of zones. Secondly, we evaluate the performance of zones in three scenarios. In the first scenario we measure - as a baseline - the performance of Solaris Zones on a 2-CPU core machine in the standard configuration that is distributed as part of the Solaris OS. In the second scenario we investigate the influence of the number of CPU cores. In the third scenario we evaluate the performance in the presence of a security configuration known as Solaris Trusted Extensions. To evaluate performance, we calculate a number of metrics using the AIM benchmark. We calculate these benchmarks for the global zone, a non-global zone, and increasing numbers of concurrently running non-global zones. We aggregate the results of the latter to compare aggregate system performance against single zone performance.</p><p>The results of this study demonstrate the scalability and performance impact of Solaris Zones in the Solaris OS. On our chosen hardware platform, Solaris Zones scales to about 110 zones within a short creation time (i.e., less than 13 minutes per zone for installation, configuration, and boot.) As the number of zones increases, the measured overhead of virtualization shows less than 2% of performance decrease for most measured benchmarks, with one exception: the benchmarks for memory and process management show that performance decreases of 5-12% (depending on the sub-benchmark) are typical. When evaluating the Trusted Extensions-based security configuration, additional small performance penalties were measured in the areas of Disk/Filesystem I/O and Inter Process Communication. Most benchmarks show that aggregate system performance is higher when distributing system load across multiple zones compared to running the same load in a single zone.</p>
227

Virtual Full Replication for Scalable Distributed Real-Time Databases

Mathiason, Gunnar January 2009 (has links)
A fully replicated distributed real-time database provides high availability and predictable access times, independent of user location, since all the data is available at each node. However, full replication requires that all updates be replicated to every node, resulting in exponential growth of bandwidth and processing demands with the number of nodes and objects added. To eliminate this scalability problem, while retaining the advantages of full replication, this thesis explores Virtual Full Replication (ViFuR), a technique that gives database users a perception of using a fully replicated database while only replicating a subset of the data. We use ViFuR in a distributed main memory real-time database where timely transaction execution is required. ViFuR enables scalability by replicating only data used at the local nodes. Also, ViFuR enables flexibility by adaptively replicating the currently used data, effectively providing logical availability of all data objects. Hence, ViFuR substantially reduces the problem of non-scalable resource usage of full replication, while allowing timely execution and access to arbitrary data objects. In the thesis we pursue ViFuR by exploring the use of database segmentation. We give a scheme (ViFuR-S) for static segmentation of the database prior to execution, where access patterns are known a priori. We also give an adaptive scheme (ViFuR-A) that changes segmentation during execution to meet the evolving needs of database users. Further, we apply an extended approach of adaptive segmentation (ViFuR-ASN) in a wireless sensor network - a typical dynamic, large-scale and resource-constrained environment. We use up to several hundred nodes and thousands of objects per node, and apply a typical periodic transaction workload with operation modes where the used data set changes dynamically. We show that when replacing full replication with ViFuR, resource usage scales linearly with the required number of concurrent replicas, rather than exponentially with the system size.
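The core idea of ViFuR - replicate an object only where it is actually used - can be sketched with a toy replica directory. The sketch is schematic and does not reproduce the ViFuR-S/ViFuR-A protocols: each access allocates a local replica, and updates are later propagated only to the nodes holding one.

    # Toy replica directory for virtual full replication (schematic; not the thesis's
    # ViFuR-S/ViFuR-A protocols). An object is replicated only at the nodes that have
    # accessed it, so update propagation cost grows with actual usage rather than with
    # the total number of nodes, as under full replication.
    from collections import defaultdict

    class ReplicaDirectory:
        def __init__(self):
            self.replicas = defaultdict(set)   # object id -> nodes holding a replica

        def on_access(self, node, obj):
            self.replicas[obj].add(node)       # allocate a local replica on first use

        def nodes_to_update(self, obj):
            return self.replicas[obj]          # an update is sent only to these nodes

    d = ReplicaDirectory()
    d.on_access("node-3", "sensor-17")
    d.on_access("node-8", "sensor-17")
    print(d.nodes_to_update("sensor-17"))      # only the two accessing nodes, not all nodes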
228

Optimization of Segmentation-Based Video Sequence Coding Techniques. Application to content based functionalities

Morros Rubio, Josep Ramon 23 December 2004 (has links)
En aquest treball s'estudia el problema de la compressió de video utilitzant funcionalitats basades en el contingut en el marc teòric dels sistemes de codificació de seqüències de video basats en regions. Es tracten bàsicament dos problemes: El primer està relacionat amb com es pot aconseguir una codificació òptima en sistemes de codificació de video basats en regions. En concret, es mostra com es pot utilitzar un metodologia de 'rate-distortion' en aquest tipus de problemes. El segon problema que es tracta és com introduir funcionalitats basades en el contingut en un d'aquests sistemes de codificació de video.La teoria de 'rate-distortion' defineix l'optimalitat en la codificació com la representació d'un senyal que, per una taxa de bits donada, resulta en una distorsió mínima al reconstruir el senyal. En el cas de sistemes de codificació basats en regions, això implica obtenir una partició òptima i al mateix temps, un repartiment òptim dels bits entre les diferents regions d'aquesta partició. Aquest problema es formalitza per sistemes de codificació no escalables i es proposa un algorisme per solucionar-lo. Aquest algorisme s'aplica a un sistema de codificació concret anomenat SESAME. En el SESAME, cada quadre de la seqüència de video es segmenta en un conjunt de regions que es codifiquen de forma independent. La segmentació es fa seguint criteris d'homogeneitat espaial i temporal. Per eliminar la redundància temporal, s'utilitza un sistema predictiu basat en la informació de moviment tant per la partició com per la textura. El sistema permet seguir l'evolució temporal de cada regió per tota la seqüència. Els resultats de la codificació són òptims (o quasi-òptims) pel marc donat en un sentit de 'rate-distortion'. El procés de codificació inclou trobar una partició òptima i també trobar la tècnica de codificació i nivell de qualitat més adient per cada regió. Més endavant s'investiga el problema de codificació de video en sistemes amb escalabilitat i que suporten funcionalitats basades en el contingut. El problema es generalitza incloent en l'esquema de codificació les dependències espaials i temporals entre els diferents quadres o entre les diferents capes d'escalabilitat. En aquest cas, la solució requereix trobar la partició òptima i les tècniques de codificació de textura òptimes tant per la capa base com per la capa de millora. A causa de les dependències que hi ha entre aquestes capes, la partició i el conjunt de tècniques de codificació per la capa de millora dependran de les decisions preses en la capa base. Donat que aquest tipus de solucions generalment són molt costoses computacionalment, també es proposa una solució que no té en compte aquestes dependències.Els algorismes obtinguts s'apliquen per extendre SESAME. El sistema de codificació extès, anomenat XSESAME suporta diferents tipus d'escalabilitat (PSNR, espaial i temporal) així com funcionalitats basades en el contingut i la possibilitat de seguiment d'objectes a través de la seqüència de video. El sistema de codificació permet utilitzar dos modes diferents pel que fa a la selecció de les regions de la partició de la capa de millora: El primer mode (supervisat) està pensat per utilitzar funcionalitats basades en el contingut. El segon mode (no supervisat) no suporta funcionalitats basades en el contingut i el seu objectiu és simplement obtenir una codificació òptima a la capa de millora.Un altre tema que s'ha investigat és la integració d'un mètode de seguiment d'objectes en el sistema de codificació. 
En el cas general, el seguiment d'objectes en seqüències de video és un problema molt complex. Si a més aquest seguiment es vol integrar en un sistema de codificació apareixen problemes addicionals degut a que els requisits necessaris per obtenir eficiència en la codificació poden entrar en conflicte amb els requisits per una bona precisió en el seguiment d'objectes. Aquesta aparent incompatibilitat es soluciona utilitzant un enfocament basat en una doble partició de cada quadre de la seqüència. La partició que s'utilitza per la codificació es resegmenta utilitzant criteris purament espaials. Al projectar aquesta segona partició permet una millor adaptació dels contorns de l'objecte a seguir. L'excés de regions que implicaria aquesta re-segmentació s'elimina amb una etapa de fusió de regions realitzada a posteriori. / En este trabajo se estudia el problema de la compresión de vídeo utilizando funcionalidades basadas en el contenido en el marco teórico de los sistemas de codificación de secuencias de vídeo basados en regiones. Se tratan básicamente dos problemas: El primero está relacionado con la obtención de una codificación óptima en sistemas de codificación de vídeo basados en regiones. En concreto, se muestra como se puede utilizar un metodología de 'rate-distortion' para este tipo de problemas. El segundo problema tratado es como introducir funcionalidades basadas en el contenido en uno de estos sistemas de codificación de vídeo.La teoría de 'rate-distortion' define la optimalidad en la codificación como la representación de una señal que, para un tasa de bits dada, resulta en una distorsión mínima al reconstruir la señal. En el caso de sistemas de codificación basados en regiones, esto implica obtener una partición óptima y al mismo tiempo, un reparto óptimo de los bits entre las diferentes regiones de esta partición. Este problema se formaliza para sistemas de codificación no escalables y se propone un algoritmo para solucionar este problema. Este algoritmo se aplica a un sistema de codificación concreto llamado SESAME. En SESAME, cada cuadro de la secuencia de vídeo se segmenta en un conjunto de regiones que se codifican de forma independiente. La segmentación se hace siguiendo criterios de homogeneidad espacial y temporal. Para eliminar la redundancia temporal, se utiliza un sistema predictivo basado en la información de movimiento tanto para la partición como para la textura. El sistema permite seguir la evolución temporal de cada región a lo largo de la secuencia. Los resultados de la codificación son óptimos (o casi-óptimos) para el marco dado en un sentido de 'rate-distortion'. El proceso de codificación incluye encontrar una partición óptima y también encontrar la técnica de codificación y nivel de calidad más adecuados para cada región.Más adelante se investiga el problema de la codificación de vídeo en sistemas con escalabilidad y que suporten funcionalidades basadas en el contenido. El problema se generaliza incluyendo en el esquema de codificación las dependencias espaciales y temporales entre los diferentes cuadros o entre las diferentes capas de escalabilidad. En este caso, la solución requiere encontrar la partición óptima y las técnicas de codificación de textura óptimas tanto para la capa base como para la capa de mejora. A causa de les dependencias que hay entre estas capas, la partición y el conjunto de técnicas de codificación para la capa de mejora dependerán de las decisiones tomadas en la capa base. 
Dado que este tipo de soluciones generalmente son muy costosas computacionalmente, también se propone una solución que no tiene en cuenta estas dependencias. Los algoritmos obtenidos se usan en la extensión de SESAME. El sistema de codificación extendido, llamado XSESAME, soporta diferentes tipos de escalabilidad (PSNR, espacial y temporal) así como funcionalidades basadas en el contenido y la posibilidad de seguimiento de objetos a través de la secuencia de vídeo. El sistema de codificación permite utilizar dos modos diferentes por lo que hace referencia a la selección de las regiones de la partición de la capa de mejora: El primer modo (supervisado) está pensado para utilizar funcionalidades basadas en el contenido. El segundo modo (no supervisado) no soporta funcionalidades basadas en el contenido y su objetivo es simplemente obtener una codificación óptima en la capa de mejora. Otro tema investigado es la integración de un método de seguimiento de objetos en el sistema de codificación. En el caso general, el seguimiento de objetos en secuencias de vídeo es un problema muy complejo. Si este seguimiento se quiere integrar en un sistema de codificación aparecen problemas adicionales debido a que los requisitos necesarios para obtener eficiencia en la codificación pueden entrar en conflicto con los requisitos para obtener una buena precisión en el seguimiento de objetos. Esta aparente incompatibilidad se soluciona usando un enfoque basado en una doble partición de cada cuadro de la secuencia. La partición que se usa para codificar se resegmenta usando criterios puramente espaciales. Proyectando esta segunda partición se obtiene una mejor adaptación de los contornos al objeto a seguir. El exceso de regiones que implicaría esta resegmentación se elimina con una etapa de fusión de regiones realizada a posteriori. / This work addresses the problem of video compression with content-based functionalities in the framework of segmentation-based video coding systems. Two major problems are considered. The first one is related to coding optimality in segmentation-based coding systems. Regarding this subject, the feasibility of a rate-distortion approach for a complete region-based coding system is shown. The second one is how to address content-based functionalities in the coding system proposed as a solution to the first problem. Optimality, as defined in the framework of rate-distortion theory, deals with obtaining a representation of the video sequence that leads to a minimum distortion of the coded signal for a given bit budget. In the case of segmentation-based coding systems this means obtaining an 'optimal' partition together with the best coding technique for each region of this partition, so that the result is optimal in an operational rate-distortion sense. The problem is formalized for independent, non-scalable coding. An algorithm to solve this problem is provided as well. This algorithm is applied to a specific segmentation-based coding system, the so-called SESAME. In SESAME, each frame is segmented into a set of regions that are coded independently. Segmentation involves both spatial and motion homogeneity criteria. To exploit temporal redundancy, a prediction for both the partition and the texture of the current frame is created by using motion information. The time evolution of each region is defined along the sequence (time tracking). The results are optimal (or near-optimal) for the given framework in a rate-distortion sense.
The definition of the coding strategy involves a global optimization of the partition as well as of the coding technique/quality level for each region. Later, the investigation is also extended to the problem of video coding optimization in the framework of a scalable video coding system that can address content-based functionalities. The focus is set on the various types of content-based scalability and on object tracking. The generality of the problem has also been extended by including the spatial and temporal dependencies between frames and scalability layers in the optimization scheme. In this case the solution implies finding the optimal partition and set of quantizers for both the base and the enhancement layers. Due to the coding dependencies of the enhancement layer with respect to the base layer, the partition and the set of quantizers of the enhancement layer depend on the decisions made on the base layer. Also, a solution for the independent optimization problem (i.e. without taking into account dependencies between different frames or scalability layers) has been proposed to reduce the computational complexity. These solutions are used to extend the SESAME coding system. The extended coding system, named XSESAME, supports different types of scalability (PSNR, spatial and temporal) as well as content-based functionalities, such as content-based scalability and object tracking. Two different operating modes for region selection in the enhancement layer have been presented: one (supervised) aimed at providing content-based functionalities at the enhancement layer, and the other (unsupervised) aimed at coding efficiency, without content-based functionalities. Integration of object tracking into the segmentation-based coding system is also investigated. In the general case, tracking is a very complex problem. If this capability has to be integrated into a coding system, additional problems arise due to conflicting requirements between coding efficiency and tracking accuracy. This is solved by using a double partition approach, where pure spatial criteria are used to re-segment the partition used for coding. The projection of the re-segmented partition results in a more precise adaptation to object contours. A merging step is performed a posteriori to eliminate the excess of regions created by the re-segmentation.
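The operational rate-distortion optimization described here - choosing, for each region, a coding technique and quality level so that distortion is minimized for a given bit budget - is commonly solved with a Lagrangian formulation. The sketch below is a generic illustration of that idea, not the exact SESAME/XSESAME algorithm: for a fixed multiplier lam, each region independently picks the option minimizing D + lam*R, and lam is adjusted by bisection until the total rate fits the budget.

    # Generic operational rate-distortion sketch (not the exact SESAME/XSESAME algorithm).
    def select_options(regions, lam):
        """regions: list of lists of (rate, distortion) options, one list per region."""
        choices = [min(opts, key=lambda rd: rd[1] + lam * rd[0]) for opts in regions]
        total_rate = sum(r for r, _ in choices)
        total_dist = sum(d for _, d in choices)
        return choices, total_rate, total_dist

    def fit_budget(regions, budget, lo=0.0, hi=1e6, iters=50):
        """Bisect on lam until the total rate falls just under the bit budget."""
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            _, rate, _ = select_options(regions, lam)
            lo, hi = (lo, lam) if rate <= budget else (lam, hi)
        return select_options(regions, hi)

    # Example: two regions, each with three (rate, distortion) coding options and a 40-bit budget.
    regions = [[(10, 50.0), (20, 20.0), (40, 5.0)],
               [(5, 30.0), (15, 12.0), (30, 3.0)]]
    print(fit_budget(regions, budget=40))   # picks (20, 20.0) and (15, 12.0): rate 35, distortion 32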
229

Security Architecture and Protocols for Overlay Network Services

Srivatsa, Mudhakar 16 May 2007 (has links)
Conventional wisdom suggests that in order to build a secure system, security must be an integral component in the system design. However, cost considerations drive most system designers to channel their efforts into the system's performance, scalability and usability. With little or no emphasis on security, such systems are vulnerable to a wide range of attacks that can potentially compromise confidentiality, integrity and availability of sensitive data. It is often cumbersome to redesign and implement massive systems with security as one of the primary design goals. This thesis advocates a proactive approach that cleanly retrofits security solutions into existing system architectures. The first step in this approach is to identify security threats, vulnerabilities and potential attacks on a system or an application. The second step is to develop security tools in the form of customizable and configurable plug-ins that address these security issues and minimally modify existing system code, while preserving its performance and scalability metrics. This thesis uses overlay network applications to work through and address the challenges involved in supporting security in large-scale distributed systems. In particular, the focus is on two popular applications: publish/subscribe networks and VoIP networks. Our work on VoIP networks has for the first time identified and formalized caller identification attacks on VoIP networks. We have identified two attacks: a triangulation-based timing attack on the VoIP network's route setup protocol and a flow analysis attack on the VoIP network's voice session protocol. These attacks allow an external observer (adversary) to (nearly) uniquely identify the true caller (and receiver) with high probability. Our work on publish/subscribe networks has resulted in the development of a unified framework for handling event confidentiality, integrity, access control and DoS attacks, while incurring small overhead on the system. We have proposed a key isomorphism paradigm to preserve the confidentiality of events on publish/subscribe networks while permitting scalable content-based matching and routing. Our work on overlay network security has resulted in a novel information-hiding technique for overlay networks. Our solution represents the first attempt to transparently hide the location of data items on an overlay network.
230

Providing Scalability For An Automated Web Service Composition Framework

Kaya, Ertay 01 June 2010 (has links) (PDF)
In this thesis, some enhancements to an existing automatic web service composition and execution system are described which give the existing framework practical significance through scalability, i.e. the ability to operate on large service sets in reasonable time. In addition, the service storage mechanism utilized in the enhanced system presents an effective method for maintaining large service sets. The enhanced system provides scalability by implementing a pre-processing phase that extracts service chains and dependencies on the problem's initial and goal states from service descriptions. The service storage mechanism is used to store this extracted information together with the descriptions of available services. The extracted information is used in a forward chaining algorithm which selects the potentially useful services for a given composition problem and eliminates the irrelevant ones according to the given problem's initial and goal states. Only the selected services are used during the AI planning and execution phases, which generate the composition and execute the services, respectively.
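The pre-processing phase described above can be illustrated with a small forward-chaining sketch. The service format used here is hypothetical (not the thesis's storage mechanism): starting from the facts of the initial state, any service whose inputs are all available is kept and its outputs are added to the available set; services never reached this way are discarded before planning.

    # Forward-chaining pre-selection sketch (hypothetical service format, not the thesis's
    # storage mechanism). A service is kept if all its inputs become available starting from
    # the initial state; its outputs then enlarge the available set. Unreached services are
    # irrelevant to the composition problem and can be discarded before AI planning.
    def select_services(services, initial_state):
        """services: dict name -> (set of inputs, set of outputs)."""
        available = set(initial_state)
        selected = set()
        changed = True
        while changed:
            changed = False
            for name, (inputs, outputs) in services.items():
                if name not in selected and inputs <= available:
                    selected.add(name)
                    available |= outputs
                    changed = True
        return selected

    services = {
        "geocode":   ({"address"}, {"coordinates"}),
        "forecast":  ({"coordinates"}, {"weather"}),
        "translate": ({"german_text"}, {"english_text"}),   # unreachable from the initial state
    }
    print(select_services(services, {"address"}))   # {'geocode', 'forecast'}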
