11

NAT Free Open Source 3D Video Conferencing using SAMTK and Application Layer Router

Muramoto, Eiichi, Jinmei, Tatsuya, Kurosawa, Takahiro, Abade, Odira Elisha, Nishiura, Shuntaro, Kawaguchi, Nobuo 10 January 2009 (has links)
No description available.
12

A study of application layer protocols within the Internet of Things

Sohlman, Patrik January 2018 (has links)
The Internet of Things market grows at an extreme rate each passing year. Devices will gather more data, which puts a lot of pressure on the communication between the devices and the cloud. The protocols used need to be fast, secure, reliable and able to carry any type of content. This thesis investigates the three most popular application layer protocols, MQTT, HTTP and AMQP, to examine which is best suited to an Internet of Things environment. The project was carried out with Axians AB to provide insight into the protocols, so that the company can decide which protocol is best suited for its projects. A theoretical study of the performance was made, followed by case studies on different aspects of the protocols. The case studies were performed using a Dell gateway and a 4G connection to mimic a real-world project. Scripts were developed to measure different performance attributes of the protocols. The analysis and discussion of the results showed that MQTT or AMQP is the best protocol, depending on the project.
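As a hedged illustration of the kind of measurement script the abstract describes, the sketch below times HTTP POST round-trips (HTTP being one of the three protocols compared); the endpoint and payload are assumptions, not details taken from the thesis:

    import time
    import requests  # HTTP is one of the three application layer protocols compared

    ENDPOINT = "https://iot.example.com/telemetry"   # hypothetical cloud endpoint
    PAYLOAD = {"device": "gw-01", "temp": 21.5}      # hypothetical sensor reading

    latencies = []
    for _ in range(100):
        start = time.perf_counter()
        resp = requests.post(ENDPOINT, json=PAYLOAD, timeout=5)
        resp.raise_for_status()                      # fail fast on server errors
        latencies.append(time.perf_counter() - start)

    print(f"mean HTTP POST latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")

A comparable loop against an MQTT broker and an AMQP queue would yield the per-protocol latency figures the thesis analyses.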
13

Speak-up as a Resource Based Defence against Application Layer Distributed Denial-of-Service Attacks

Jawad, Dina, Rosell, Felicia January 2015 (has links)
In recent years, the Internet has seen an increase in application layer DDoS attacks; this is a growing problem that needs to be addressed. This paper presents a number of existing detection and protection methods used to mitigate application layer DDoS attacks. Anomaly detection is a widely explored area of defence, and many findings show positive results in mitigating attacks. However, anomaly detection has a number of flaws, such as producing false positives and false negatives. Another method that has yet to be thoroughly examined is resource-based defence. This defence method has great potential because it exploits clear differences between legitimate users and attackers during a DDoS attack. One such defence method is called Speak-up and is the focus of this paper. The advantages and limitations of Speak-up have been explored, and the findings suggest that Speak-up has the potential to become a strong tool in defending against DDoS attacks. However, Speak-up has its limitations and may not be the best alternative during certain types of application layer DDoS attacks.
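To make the resource-based idea concrete, here is a minimal sketch of how a Speak-up-style admission step could be approximated (an illustrative assumption, not the thesis implementation): clients that have "spoken up" with more upload bandwidth win service when the server is saturated.

    import heapq

    def admit_requests(pending, capacity):
        """Speak-up-style admission: when the server can only serve `capacity`
        requests, keep the ones whose clients spent the most upload bandwidth.
        `pending` is a list of (client_id, padding_bytes_sent) tuples."""
        # Bots sharing a saturated uplink cannot outbid ordinary broadband clients.
        return heapq.nlargest(capacity, pending, key=lambda r: r[1])

    # Hypothetical snapshot: two bots share one uplink, legitimate clients do not.
    pending = [("legit-1", 800_000), ("legit-2", 750_000),
               ("bot-1", 90_000), ("bot-2", 85_000), ("legit-3", 640_000)]
    print(admit_requests(pending, capacity=3))   # the legitimate clients are served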
14

Estratégias para tratamento de ataques de negação de serviço na camada de aplicação em redes IP

Dantas, Yuri Gil 14 July 2015 (has links)
Distributed Denial of Service (DDoS) attacks remain among the most dangerous and noticeable attacks on the Internet. Unlike earlier attacks, many recent DDoS attacks have been carried out not over the transport layer but over the application layer. The main difference is that, in the latter, an attacker can target a particular application on the server while leaving the other applications available, thus generating less traffic and being harder to detect. Such attacks are made possible by exploiting the application layer protocols used by the target application. This work proposes a novel defence for application layer DDoS attacks (ADDoS), called SeVen, based on the Adaptive Selective Verification (ASV) defence used against transport layer DDoS attacks. Two approaches were used to validate SeVen: 1) Simulation: the entire defence mechanism was formalized in the Maude tool and simulated using the statistical model checker PVeStA. 2) Real-world experiments: analysis of the efficiency of SeVen, implemented in C++, in experiments on a real network. We investigate its resilience in mitigating three attacks that use the HTTP protocol: HTTP POST, Slowloris, and HTTP GET. The defence is effective, with high levels of availability, for all three types of attacks, despite their different attack profiles, and even for a relatively large number of attackers.
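As a rough sketch of the selective-verification idea behind ASV-style defences such as SeVen (the buffer size, replacement probability and drop policy below are illustrative assumptions, not taken from the thesis): the defence keeps a bounded buffer of pending requests and, when the buffer is full, a newcomer displaces a randomly chosen buffered request with some probability, so no single flood can monopolize the queue.

    import random

    class SelectiveBuffer:
        """Bounded request buffer with probabilistic replacement (ASV-style)."""
        def __init__(self, capacity, replace_prob=0.5):
            self.capacity = capacity
            self.replace_prob = replace_prob
            self.requests = []

        def offer(self, request):
            # Accept directly while there is room.
            if len(self.requests) < self.capacity:
                self.requests.append(request)
                return True
            # When saturated, the newcomer evicts a random victim with some
            # probability, keeping the buffer a fair sample of recent traffic.
            if random.random() < self.replace_prob:
                victim = random.randrange(len(self.requests))
                self.requests[victim] = request
                return True
            return False

    buf = SelectiveBuffer(capacity=100)
    for i in range(10_000):                 # hypothetical request flood
        buf.offer(f"req-{i}")
    print(len(buf.requests), "requests kept for service")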
15

CoEP: uma camada para comunicação segura para dispositivos computacionais de grão fino em redes de sensores sem fio. / CoEP: a layer for secure communication for fine grain computational devices in wireless sensor networks.

Manini, Matheus Barros 20 May 2019 (has links)
The work developed in this dissertation is the design of a security layer named CoEP (Constrained Extensible Protocol), a name similar to that of the CoAP (Constrained Application Protocol) application layer because of its compatibility in use, as detailed in the text. The layer addresses an identified gap: providing adequate and compact security technology for constrained devices, that is, devices with low processing, communication and storage capacity that are deployed in high volume and that, as justified later, we call fine-grain devices. These volumes, from tens to hundreds of devices, occur in systems known as wireless sensor networks, where parallel and distributed processing happens autonomously so as to form a communication network.
While several enabling technologies have made wireless sensor networks increasingly feasible technically and economically (advances in batteries, supercapacitors, ultra-low-power circuits, and cheaper sensors with new measurement techniques, among others), similar progress has been made in software, protocols and standards so that this hardware advance can be used intelligently and efficiently. This was done by restructuring and adapting protocols for these devices, yet few contributions address implementing security efficiently; as identified in the dissertation, these devices have used partially implemented versions of the DTLS protocol, an extensive protocol developed for devices with abundant resources. In this context, CoEP is a security layer that would replace DTLS on fine-grain devices and perform security tasks for application protocols intelligently and efficiently. Besides performing its security task, the layer was architected to use resources efficiently and to provide security as a service for protocols, removing the need to implement security individually in each protocol and the associated development work. As a result, the layer was implemented on a constrained system and can considerably reduce resource usage, and its use obliges all devices connected to the same network to use security, which is essential for wireless sensor networks, which have the potential to have the same impact on society that the Internet itself had, as detailed in the text.
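A minimal sketch of the kind of lightweight, authenticated-encryption protection such a layer could provide, assuming an AES-CCM AEAD (the key size, nonce handling and header layout here are illustrative assumptions, not the CoEP design):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    key = AESCCM.generate_key(bit_length=128)   # pre-shared key; provisioning not shown
    aead = AESCCM(key, tag_length=8)            # short tag saves bytes on constrained links

    def protect(seq, payload, header):
        # A 13-byte nonce derived from a message sequence number (illustrative).
        nonce = seq.to_bytes(13, "big")
        return nonce + aead.encrypt(nonce, payload, header)

    def unprotect(message, header):
        nonce, ciphertext = message[:13], message[13:]
        return aead.decrypt(nonce, ciphertext, header)   # raises InvalidTag if tampered

    frame = protect(1, b'{"temp": 21.5}', header=b"coap-like-header")
    print(unprotect(frame, header=b"coap-like-header"))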
16

Conservação de energia em redes de sensores sem fio. / Energy conservation in wireless sensor networks.

Felipe da Rocha Henriques 16 July 2010 (has links)
This dissertation proposes algorithms for energy conservation in a wireless sensor network (WSN) applied to monitoring a smooth process f(x, y, t), which depends on the x and y coordinates of the sensor nodes and on the time t, so as to increase the autonomy of the network. The algorithms run in the application layer of each node and save node energy by managing the need for transmissions: after the first transmitted sample, only samples whose percentage variation exceeds a given threshold are transmitted. Furthermore, each node can remain idle (saving energy) between these transmissions. For single-hop WSNs, two algorithms are proposed: one based on the source, where each node is responsible for all processing and decision making, and another based on the sink, where all processing and decision making are performed by the sink. In addition, an extension of the source-based algorithm is proposed for multi-hop WSNs. The results show that the algorithms achieve a significant reduction in the number of transmissions, which increases the network lifetime; the reconstruction error of the process is also reported. In this way, one can balance the trade-off between maximum lifetime and minimum reconstruction error.
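A minimal sketch of the source-based decision rule described above; the 5% threshold and the sample readings are invented for illustration:

    def should_transmit(sample, last_sent, threshold=0.05):
        """Send only when the sample deviates from the last transmitted value
        by more than `threshold` (5% here, an assumed value)."""
        if last_sent is None:            # always send the first sample
            return True
        return abs(sample - last_sent) / abs(last_sent) > threshold

    last_sent = None
    readings = [20.0, 20.1, 20.2, 21.5, 21.6, 23.0]   # hypothetical sensor readings
    for r in readings:
        if should_transmit(r, last_sent):
            print("transmit", r)          # the radio stays off for the other samples
            last_sent = r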
18

Um protocolo de comunicação multicast na camada de aplicação com Consciência de Localização / An application-layer multicast communication protocol with location awareness

Oliveira, Marlos André Marques Simões de 15 January 2010 (has links)
Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This leads to the need for multicast communication, but group communication support based on IP multicast has not been widely adopted due to a combination of technical and non-technical problems. Therefore, a number of application-layer multicast schemes have been proposed in the recent literature to overcome these drawbacks. In addition, these applications often behave as both providers and clients of services, being called peer-to-peer applications, and their participants come and go very dynamically. Thus, server-centric architectures for membership management have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. The idea of location awareness is to distribute the participants in the overlay network according to their proximity in the underlying network, allowing better performance. In this context, this thesis proposes an application layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bi-directional tree. The algorithm also has a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways. First, through a simulator developed in this work, where the quality of the distribution tree was evaluated by metrics such as out-degree and path length. Second, realistic scenarios were built in the ns-3 network simulator, where the protocol's performance was evaluated by metrics such as stress, stretch, time to first packet and group reconfiguration time.
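As an illustrative sketch of location-aware parent selection in an overlay tree (the RTT-based proximity score and the out-degree limit below are assumptions standing in for the thesis's IPXY metric):

    def choose_parent(candidates, max_out_degree=4):
        """Pick the overlay parent that is closest in the underlying network
        among nodes that still have room for another child.
        `candidates` is a list of dicts: {"id", "rtt_ms", "children"}."""
        eligible = [c for c in candidates if c["children"] < max_out_degree]
        return min(eligible, key=lambda c: c["rtt_ms"]) if eligible else None

    # Hypothetical overlay snapshot seen by a joining node.
    candidates = [
        {"id": "peer-A", "rtt_ms": 12.0, "children": 4},   # close but already full
        {"id": "peer-B", "rtt_ms": 35.0, "children": 1},
        {"id": "peer-C", "rtt_ms": 80.0, "children": 0},
    ]
    print(choose_parent(candidates))   # the joining node attaches under peer-B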
19

PALMS+: protocolo ALM baseado em desigualdade triangular para distribuição de streaming de vídeo / PALMS+: an ALM protocol based on the triangular inequality for video streaming distribution

Castro, Bianca Portes de 25 August 2014 (has links)
Multimedia applications are very popular on the Internet, and many of them need multicast to scale. However, network layer multicast has not been deployed on the Internet, so application layer multicast (ALM) protocols are the practical alternative. Despite their popularity, many existing ALM protocols and mechanisms are expensive and impose a large control overhead on the network. In the present work, a new tree-based streaming protocol, PALMS+, is proposed; it uses the triangular inequality among every three peers for dynamic topology management. The new protocol is simple and has low overhead, yet its performance is as good as the state of the art. Experiments conducted on the OverSim platform (OMNeT++) show that PALMS+ performs as well as a state-of-the-art ALM protocol (NICE), even under high churn in a heterogeneous network. In fact, the per-peer control overhead of PALMS+ is less than 10% of that generated by NICE, and PALMS+ delivers chunks in under 1.5 s. The new protocol is therefore well suited to live video streaming, scaling even in realistic, high-churn scenarios.
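A hedged sketch of how a triangular-inequality test over measured latencies could drive topology decisions (the RTT values and the rewiring rule are illustrative, not the PALMS+ algorithm itself):

    def better_via_relay(d_ab, d_bc, d_ac, margin=0.9):
        """Return True when the direct overlay link A->C is slower than relaying
        through B, i.e. the triangular inequality is violated in latency space.
        `margin` (assumed) avoids rewiring on marginal differences."""
        return d_ab + d_bc < margin * d_ac

    # Hypothetical RTT measurements (ms) among three peers.
    d_ab, d_bc, d_ac = 20.0, 25.0, 90.0
    if better_via_relay(d_ab, d_bc, d_ac):
        print("rewire: deliver the stream A -> B -> C instead of A -> C")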
20

Distributed cross-layer scalable multimedia services over next generation convergent networks : architectures and performances

Le, Tien Anh 15 June 2012 (has links) (PDF)
Multimedia services are the killer applications on next generation convergent networks, and video content is the most resource-consuming part of a multimedia flow. Video transmission, video multicast and video conferencing are the most popular types of video communication, in increasing order of difficulty. Four main parts of distributed cross-layer scalable multimedia services over next generation convergent networks are considered in this research work, from both the architecture and the performance points of view. Firstly, we evaluate the performance of scalable multimedia transmission over an overlay network. For that, we evaluate scalable video end-to-end transmission with EvalSVC, which is capable of evaluating the end-to-end transmission of SVC bit-streams and outputs both objective and subjective metrics of the video transmission. Through interfaces with real networks and an overlay simulation platform, the transmission performance of different types of SVC scalability and of AVC bit-streams is evaluated on a bottleneck link and on an overlay network. This evaluation is new because it is conducted on the end-to-end transmission of SVC content rather than on coding performance. Secondly, we tackle the problem of distributed cross-layer scalable multimedia multicast over next generation convergent networks. For that, we propose a new application-network cross-layer, multi-variable cost function for application layer multicast of multimedia delivery over convergent networks. It optimizes the variable requirements and available resources of both the application and the network layers, and it can dynamically update the available resources required to reach a particular node on the ALM's media distribution tree. Mathematical derivation and theoretical analysis are provided for the newly proposed cost function so that it can be applied in more general cases and different contexts. An evaluation platform is constructed in which an overlay network is built over a convergent underlay network comprising a simulated Internet topology and a real 4G mobile WiMAX IEEE 802.16e wireless network. While multicast is the one-to-many mechanism for distributing multimedia content, the many-to-many mechanism is studied in the next part of the thesis through a new architecture for video conferencing services. Thirdly, we study distributed cross-layer scalable video conferencing services over the overlay network. For that, an enriched human-perception-based distributed architecture for scalable video conferencing services is proposed, with theoretical models and performance analysis. Theoretical models of the three architectures (the proposed perception-based distributed architecture, the conventional centralized architecture and the perception-based centralized architecture) are constructed using queueing theory to reflect the traffic generated, transmitted and processed at the perception-based distributed leaders, the perception-based centralized top leader, and the centralized server. The performance of these three architectures is considered from four different aspects.
While the distributed architecture is better than the centralized architecture for a scalable multimedia conferencing service, it brings many problems to users who participate in the conference over a wireless network, so a dedicated solution for mobile users is needed. Lastly, distributed cross-layer scalable video conferencing services over the next generation convergent network are enabled. For that, an IMS-based distributed multimedia conferencing service for next generation convergent networks is proposed. [...]
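A minimal sketch of a multi-variable, cross-layer cost function for picking the attachment point on an ALM distribution tree (the weights and the chosen variables are assumptions for illustration, not the function derived in the thesis):

    def node_cost(delay_ms, free_bandwidth_mbps, cpu_load, tree_depth,
                  w_delay=0.4, w_bw=0.3, w_cpu=0.2, w_depth=0.1):
        """Combine network-layer and application-layer variables into one cost.
        Lower is better; the weights are illustrative and would normally be tuned."""
        return (w_delay * delay_ms / 100.0                       # network delay
                + w_bw * (1.0 / max(free_bandwidth_mbps, 0.1))   # scarce uplink is costly
                + w_cpu * cpu_load                               # application processing load
                + w_depth * tree_depth)                          # deeper attachment adds latency

    # Hypothetical candidate parents on the media distribution tree.
    candidates = {
        "node-1": node_cost(30, 8.0, 0.2, 2),
        "node-2": node_cost(80, 20.0, 0.7, 1),
    }
    print(min(candidates, key=candidates.get))   # the joining peer attaches here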
