1

Schedulability of mode changes in flexible real-time distributed systems

Pedro, Paulo Sergio Martins January 2000
No description available.
2

Hybrid ARQ Using Serially Concatenated Block Codes for Real-Time Communication : An Iterative Decoding Approach

Uhlemann, Elisabeth January 2001
The ongoing wireless communication evolution offers improvements for industrial applications where traditional wireline solutions cause prohibitive problems in terms of cost and feasibility. Many of these new wireless applications are packet oriented and time-critical. The deadline dependent coding (DDC) communication protocol presented here is explicitly intended for wireless real-time applications. The objective of the work described in this thesis is therefore to develop the foundation for an efficient and reliable real-time communication protocol for critical deadline-dependent communication over unreliable wireless channels. Since the communication is packet oriented, block codes are suitable for error control. Reed-Solomon codes are chosen and incorporated in a concatenated coding scheme using iterative detection with trellis-based decoding algorithms. Performance bounds are given for parallel and serially concatenated Reed-Solomon codes using BPSK. The convergence behavior of the iterative decoding process for serially concatenated block codes is examined, and two different stopping criteria are employed based on the log-likelihood ratio of the information bits. The stopping criteria are also used as a retransmission criterion, incorporating the serially concatenated block codes in a type-I hybrid ARQ (HARQ) protocol. Different packet combining techniques specifically adapted to the concatenated HARQ (CHARQ) scheme are used. The extrinsic information used in the iterative decoding process is saved and reused when decoding after a retransmission. This technique can be seen as turbo code combining or concatenated code combining and is shown to improve performance. Saving the extrinsic information may also be seen as a doping criterion yielding faster convergence. As such, the extrinsic information can be used in conjunction with traditional diversity combining schemes. The performance in terms of bit error rate and convergence speed is improved with only negligible additional complexity. Consequently, CHARQ based on serially concatenated block codes using iterative detection creates a flexible and reliable scheme capable of meeting specified real-time constraints.
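A minimal control-loop sketch of how an LLR-based stopping criterion can double as the HARQ retransmission criterion is given below. The stub decoder, the threshold value and the packet model are illustrative assumptions, not the thesis's implementation; only the structure (stop on converged LLRs, otherwise request a retransmission) follows the abstract.

```python
import random

def decode_iteration(llrs):
    # Stand-in for one pass of the iterative (trellis-based) decoder: a real
    # decoder would update the log-likelihood ratios (LLRs) of the information
    # bits; here we only mimic LLR magnitudes growing as decoding converges.
    return [llr + random.gauss(0.7, 0.5) * (1 if llr >= 0 else -1) for llr in llrs]

def decode_with_stopping(llrs, threshold=5.0, max_iters=8):
    """Iterative decoding with an LLR-based stopping criterion: stop early once
    the mean |LLR| of the information bits exceeds the threshold."""
    for _ in range(max_iters):
        llrs = decode_iteration(llrs)
        if sum(abs(l) for l in llrs) / len(llrs) >= threshold:
            return [0 if l >= 0 else 1 for l in llrs], True
    return [0 if l >= 0 else 1 for l in llrs], False

def charq_receive(channel_llrs, max_transmissions=3):
    """Type-I hybrid ARQ control loop: the stopping criterion doubles as the
    retransmission criterion -- if decoding did not converge, request the
    packet again (a real CHARQ receiver would additionally combine the
    retransmission with the saved extrinsic information)."""
    for attempt in range(1, max_transmissions + 1):
        bits, converged = decode_with_stopping(list(channel_llrs))
        if converged:
            return bits, attempt
    return None, max_transmissions

if __name__ == "__main__":
    random.seed(2)
    noisy_llrs = [random.gauss(1.0, 1.5) for _ in range(64)]  # toy channel observation
    bits, attempts = charq_receive(noisy_llrs)
    if bits is None:
        print("packet dropped after", attempts, "transmissions")
    else:
        print("decoded after", attempts, "transmission(s)")
```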
3

Evaluating quality of experience and real-time performance of industrial internet of things

Zhohov, Roman January 2018
The Industrial Internet of Things (IIoT) is one of the key technologies of Industry 4.0 that will be an integral part of future smart and sustainable production. The currently established models for estimating Quality of Experience (QoE) mainly target multimedia systems. Present models for evaluating QoE, in particular those leveraging expensive subjective tests, are not applicable to IIoT applications. This work triggers the discussion on defining the QoE domain for IIoT services and applications. Industry-specific KPIs are proposed to assure QoE by linking the business and technology domains. Tele-remote mining machines are considered as a case study for developing the QoE model, taking into account key challenges in the QoE domain. As a result, a layered QoE model is proposed which predicts the QoE of IIoT services and applications in the form of pre-defined industrial KPIs. Moreover, a software tool and an analytical model are proposed as an evaluation method for certain traffic types in the developed model.
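To make the idea of reporting QoE as industrial KPIs concrete, here is a minimal sketch. The KPI names, limits and measurement fields are hypothetical placeholders, not values from the thesis; the sketch only illustrates mapping raw network measurements onto pass/fail industrial KPIs instead of a subjective score.

```python
from dataclasses import dataclass

@dataclass
class LinkMeasurement:
    latency_ms: float      # one-way network latency
    jitter_ms: float       # latency variation
    loss_pct: float        # packet loss percentage

# Hypothetical KPI limits for a tele-remote machine control link
# (illustrative numbers, not taken from the thesis).
KPI_LIMITS = {
    "control_loop_latency_ok":  lambda m: m.latency_ms <= 50.0,
    "video_feedback_jitter_ok": lambda m: m.jitter_ms <= 10.0,
    "command_loss_ok":          lambda m: m.loss_pct <= 0.1,
}

def evaluate_kpis(measurement: LinkMeasurement) -> dict:
    """Map raw network measurements onto pass/fail industrial KPIs,
    i.e. report QoE in operational terms rather than a mean opinion score."""
    return {name: check(measurement) for name, check in KPI_LIMITS.items()}

if __name__ == "__main__":
    sample = LinkMeasurement(latency_ms=42.0, jitter_ms=14.5, loss_pct=0.02)
    for kpi, passed in evaluate_kpis(sample).items():
        print(f"{kpi}: {'PASS' if passed else 'FAIL'}")
```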
4

Android IP kamera / Android IP Camera

Chvála, Jan January 2015
The goal of this thesis is to design a system that allows video data streaming from a mobile device and real-time playback in a standard web browser. The technological background and the implementation platform are both part of this thesis. Web Real-Time Communications (WebRTC) technology was used for acquiring multimedia data on the mobile device. This technology is natively supported in the latest major web browsers and in the WebView component (Android version 5.0 and above). Sending push notifications from a server to a mobile device to start the streaming is done with Google Cloud Messaging technology. The resulting system allows a user to start the application on a mobile device and access it easily from a web browser. This starts the multimedia stream from the device, which can be parametrized and secured by a password. The benefit of this thesis is the overview of WebRTC technology and its demonstration. The IP camera implementation shows how easy it is to use WebRTC in real applications.
5

Distribuované multimediální služby / Distributed multimedia service

Jaroš, Martin January 2016
This master's thesis deals with the design and implementation of a distributed real-time communication service. Its goal is to build a basic framework for digital audio and video transmission applications within a client-to-client network topology. The thesis defines a communication protocol and describes its implementation. Emphasis is also placed on security using modern cryptographic methods. The implementation builds on existing solutions and relies primarily on free software.
6

Resource, Data and Application Management for Cloud Federations and Multi-Clouds

Xhagjika, Vamis January 2017
Distributed real-time media processing refers to classes of highly distributed, delay-intolerant applications that account for the majority of the data traffic generated in the world today. Real-time audio/video conferencing and live content streaming are of particular research interest, as technology forecasts predict video traffic surpassing every other type of data traffic in the world in the near future. Live streaming refers to applications in which audio/video streams from a source need to be delivered to a set of geo-distributed destinations while maintaining low latency of stream delivery. Real-time conferencing platforms are application platforms that implement many-to-many audio/video real-time communications. Both of these categories exhibit high sensitivity to network state (latency, jitter, packet loss, bit rate) as well as to stream-processing back-end load profiles (latency and jitter introduced by Cloud processing of media packets). This thesis addresses enhancing real-time media processing both at the level of network parameters and through Cloud optimisations. We provide a novel bandwidth management algorithm for cloud services sharing the same network infrastructure, which yields a 2x improvement in system stability. Further examining the network impact on cloud services, we provide a novel hybrid Cloud-Network distributed Cloud architecture to enable locality-aware application enhancements. This architecture led to a multi-cloud management overlay algorithm that maintains low management overhead on large-scale cloud deployments. On the application level we provide a study of media quality parameters for a WebRTC-enabled Media Cloud back-end, and identify patterns of quality metrics with respect to back-end stream load and network parameters. Additionally, we empirically show that a "minimal load" algorithm for stream allocation outperforms rotational or static-threshold-based algorithms.
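The reported advantage of "minimal load" stream allocation over rotational or static-threshold policies can be sketched with a toy forwarder model. The number of forwarders, stream costs, lifetimes and the threshold value below are assumptions for illustration; only the three policy definitions follow the abstract.

```python
import random

def simulate(policy, n_servers=5, n_arrivals=500, seed=7):
    """Streams arrive one per tick, stay for a random lifetime, then leave.
    Returns the peak per-forwarder load observed during the run."""
    random.seed(seed)
    active = []                    # (server index, load cost, departure tick)
    rr = 0                         # round-robin pointer for the rotational policy
    peak = 0.0
    for t in range(n_arrivals):
        active = [s for s in active if s[2] > t]        # expire finished streams
        loads = [0.0] * n_servers
        for srv, cost, _ in active:
            loads[srv] += cost
        cost, life = random.uniform(0.5, 3.0), random.randint(20, 120)
        if policy == "minimal load":                    # least-loaded forwarder
            srv = min(range(n_servers), key=loads.__getitem__)
        elif policy == "rotational":                    # ignore load, just rotate
            srv, rr = rr % n_servers, rr + 1
        else:                                           # static threshold: fill up
            srv = next((i for i, l in enumerate(loads) if l < 40.0), 0)
        active.append((srv, cost, t + life))
        loads[srv] += cost
        peak = max(peak, max(loads))
    return peak

if __name__ == "__main__":
    for policy in ("minimal load", "rotational", "static threshold"):
        print(f"{policy:16} peak forwarder load: {simulate(policy):6.1f}")
```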
7

Predictable and Scalable Medium Access Control for Vehicular Ad Hoc Networks

Sjöberg Bilstrup, Katrin January 2009
This licentiate thesis investigates two medium access control (MAC) methods when used in traffic safety applications over vehicular ad hoc networks (VANETs). The MAC methods are carrier sense multiple access (CSMA), as specified by the leading VANET standard IEEE 802.11p, and self-organizing time-division multiple access (STDMA), as used by the leading standard for transponders on ships. All vehicles in traffic safety applications periodically broadcast cooperative awareness messages (CAMs). The CAM-based data traffic implies requirements on a predictable, fair and scalable medium access mechanism. The investigated performance measures are channel access delay, number of consecutive packet drops and the distance between concurrently transmitting nodes. Performance is evaluated by computer simulations of a highway scenario in which all vehicles broadcast CAMs with different update rates and packet lengths. The obtained results show that nodes in a CSMA system can experience unbounded channel access delays, and further that there is a significant difference between the best-case and worst-case channel access delay that a node could experience. In addition, with CSMA there is a very high probability that several concurrently transmitting nodes are located close to each other. This occurs when nodes start their listening periods at the same time or when nodes choose the same backoff value, which results in nodes starting to transmit at the same time instant. The CSMA algorithm is therefore both unpredictable and unfair, besides the fact that it scales badly for broadcasted CAMs. STDMA, on the other hand, will always grant channel access for all packets before a predetermined time, regardless of the number of competing nodes. Therefore, the STDMA algorithm is predictable and fair. STDMA, using parameter settings that have been adapted to the vehicular environment, is shown to outperform CSMA when considering the performance measure distance between concurrently transmitting nodes. In CSMA the distance between concurrent transmissions is random, whereas STDMA uses the side information from the CAMs to properly schedule concurrent transmissions in space. The price paid for the superior performance of STDMA is the required network synchronization through a global navigation satellite system, e.g., GPS. That aside, since STDMA was shown to be scalable, predictable and fair, it is an excellent candidate for use in VANETs when complex communication requirements from traffic safety applications should be met.
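The contrast between bounded STDMA access delay and potentially unbounded CSMA backoff can be illustrated with a toy model. The frame length, contention window and channel-busy probability are assumed values, and the CSMA model is heavily simplified (no contention-window doubling, no capture effects); it is a sketch of the qualitative argument, not the thesis's simulator.

```python
import random

SLOTS_PER_FRAME = 100     # assumed STDMA frame length; not a value from the thesis

def stdma_access_delay(own_slot, current_slot):
    """STDMA: every node owns at least one slot per frame, so the wait for the
    next transmission opportunity is bounded by one frame length."""
    return (own_slot - current_slot) % SLOTS_PER_FRAME

def csma_access_delay(busy_prob, cw=15):
    """Simplified CSMA/CA access: draw a random backoff and count it down only
    while the channel is sensed idle.  Under heavy load (high busy_prob) the
    waiting time has no deterministic upper bound."""
    delay, backoff = 0, random.randint(0, cw)
    while backoff > 0:
        delay += 1
        if random.random() < busy_prob:   # channel seized by another node:
            continue                      # countdown freezes, keep waiting
        backoff -= 1
    return delay

if __name__ == "__main__":
    random.seed(5)
    worst_stdma = max(stdma_access_delay(s, 0) for s in range(SLOTS_PER_FRAME))
    samples = [csma_access_delay(busy_prob=0.8) for _ in range(10_000)]
    print("STDMA worst-case access delay:", worst_stdma, "slots")
    print("CSMA mean / max access delay :",
          round(sum(samples) / len(samples), 1), "/", max(samples), "slots")
```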
8

Providing quality of service for realtime traffic in heterogeneous wireless infrastructure networks

Teh, Anselm January 2009
In recent years, there has been a rapid growth in deployment and usage of realtime network applications, such as Voice-over-IP, video calls/video conferencing, live network seminars, and networked gaming. The continued increase in the popularity of realtime applications requires a more intense focus on the provision of strict guarantees for Quality of Service (QoS) parameters such as delay, jitter and packet loss in access networks. At the same time, wireless networking technologies have become increasingly popular with a wide array of devices such as laptop computers, Personal Digital Assistants (PDAs), and cellular phones being sold with built-in WiFi and WiMAX interfaces. For realtime applications to be popular over wireless networks, simple, robust and effective QoS mechanisms suited for a variety of heterogeneous wireless networks must be devised. Implementing the same QoS mechanisms across multiple neighbouring networks aids seamless handover by ensuring that a flow will be treated in the same way, both before and after handover. To provide guaranteed QoS, an access network should limit load using an admission control algorithm. In this research, we propose a method to provide effective admission control for variable bit rate realtime flows, based on the Central Limit Theorem. Our objective is to estimate the percentage of packets that will be delayed beyond a predefined delay threshold, based on the mean and variance of all the flows in the system. Any flow that will increase the percentage of delayed packets beyond an acceptable threshold can then be rejected. Using simulations we have shown that the proposed method provides a very effective control of the total system load, guaranteeing the QoS for a set of accepted flows with negligible reductions in the system throughput. To ensure that flow data is transmitted according to the QoS requirements of a flow, a scheduling algorithm must handle data intelligently. We propose methods to allow more efficient scheduling by utilising existing Medium Access Control mechanisms to exchange flow information. We also propose a method to determine the delay-dependent "value" of a packet based on the QoS requirements of the flow. Using this value in scheduling is shown to increase the number of packets sent before a predetermined deadline. We propose a measure of fairness in scheduling that is calculated according to how well each flow's QoS requirements are met. We then introduce a novel scheduling paradigm, Delay Loss Controlled-Earliest Deadline First (DLC-EDF), which is shown to provide better QoS for all flows compared to other scheduling mechanisms studied. We then study the performance of our admission control and scheduling methods working together, and propose a feedback mechanism that allows the admission control threshold to be tuned to maximise the efficient usage of available bandwidth in the network, while ensuring that the QoS requirements of all realtime flows are met. We also examine heterogeneous/vertical handover, providing an overview of the technologies supporting seamless handover. The issues studied in this area include a method of using the Signal to Noise Ratio to trigger handover in heterogeneous networks and QoS Mapping between heterogeneous networks. Our proposed method of QoS mapping establishes the minimum set of QoS parameters applicable to individual flows, and then maps these parameters into system parameter formats for both 802.11e and 802.16e networks.
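One plausible reading of the Central Limit Theorem based admission rule is sketched below: treat the aggregate of the accepted flows' delay contributions as approximately Gaussian, estimate the tail probability beyond the delay threshold, and reject any new flow that would push that estimate over the acceptable limit. The statistic being aggregated, the units and the numeric limits are assumptions for illustration, not the thesis's exact formulation.

```python
import math

def q_function(x):
    """Gaussian tail probability P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def delayed_fraction(flow_means, flow_vars, delay_threshold):
    """Central-limit approximation: model the aggregate delay as Gaussian with
    the summed per-flow means and variances, and return the estimated fraction
    of packets delayed beyond the threshold."""
    mu = sum(flow_means)
    sigma = math.sqrt(sum(flow_vars))
    if sigma == 0.0:
        return 0.0 if mu <= delay_threshold else 1.0
    return q_function((delay_threshold - mu) / sigma)

def admit(flow_means, flow_vars, new_mean, new_var,
          delay_threshold=20.0, max_delayed=0.01):
    """Admit the new flow only if the estimated fraction of packets exceeding
    the delay threshold stays below the acceptable limit."""
    frac = delayed_fraction(flow_means + [new_mean], flow_vars + [new_var],
                            delay_threshold)
    return frac <= max_delayed, frac

if __name__ == "__main__":
    means, variances = [2.0] * 5, [1.5] * 5        # five flows already accepted
    ok, frac = admit(means, variances, new_mean=2.0, new_var=1.5)
    print(f"estimated delayed fraction {frac:.4f} ->", "admit" if ok else "reject")
```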
9

Group Sequential Communication (GSC): Especificação e Análise de Desempenho de um Mecanismo de Comunicação de Tempo Real Compatível ao Padrão IEEE 802.11/11e Aplicado à Automação Industrial / Group Sequential Communication (GSC): Specification and Performance Analysis of a Real-Time Communication Mechanism Compatible with the IEEE 802.11/11e Standard Applied to Industrial Automation

Viégas Junior, Raimundo 11 February 2010
This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standard, called Group Sequential Communication (GSC). The GSC achieves better performance than the HCCA mechanism when dealing with small data packets, by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead caused by the Polling, ACK and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm through a Virtual Token Passing procedure among the members of the real-time group, to whom high-priority, sequential access to the communication medium is granted. In order to improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy in which missed real-time messages can be retransmitted. Thus, the GSC mechanism maintains the real-time traffic across many IEEE 802.11/11e devices with optimized bandwidth usage and minimal delay variation for data packets in the wireless network. For validation purposes, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++ and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
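A minimal sketch of one GSC round is given below, assuming a single consumer and an abstract loss model. The station names, loss probabilities and timing are illustrative; the real mechanism operates within IEEE 802.11/11e medium access timing, which the sketch ignores. It only shows the structure described in the abstract: sequential transmission via virtual token passing (no Polling frames), a block acknowledgment, and one retransmission opportunity.

```python
import random

def gsc_round(group, loss_prob=0.2, seed=None):
    """One Group Sequential Communication round, sketched: members transmit in a
    fixed sequence (virtual token passing), a block acknowledgment identifies
    missed messages, and those messages get exactly one retransmission
    opportunity (the 'second chance')."""
    rng = random.Random(seed)
    delivered, missed = set(), []

    # First pass: the virtual token visits every member of the real-time group.
    for station in group:
        if rng.random() >= loss_prob:          # message received by the consumer
            delivered.add(station)
        else:
            missed.append(station)

    # Block ACK: a single acknowledgment reports which messages are missing,
    # instead of one ACK exchange per station as in polled HCCA.
    block_ack = {station: (station in delivered) for station in group}

    # Second chance: one retransmission round for the messages marked missing.
    for station in missed:
        if rng.random() >= loss_prob:
            delivered.add(station)

    return delivered, block_ack

if __name__ == "__main__":
    group = ["PLC-1", "sensor-A", "sensor-B", "actuator-7"]   # hypothetical stations
    delivered, _ = gsc_round(group, loss_prob=0.3, seed=42)
    lost = [s for s in group if s not in delivered]
    print("delivered:", sorted(delivered), "| lost after second chance:", lost)
```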
