311

On test principles for a QoE evaluation using real services

Kurze, Albrecht, Eibl, Maximilian 16 January 2017 (has links) (PDF)
We report on our experiences from two user studies (lab experiments) with nearly 300 participants for QoE evaluation using real mobile services and devices in our WiFi network emulation testbed. We briefly introduce our principles for integrating real services in these studies: how we selected relevant services, how we investigated their testability and how we tested them with high efficiency.
312

On test principles for a QoE evaluation using real services: overview on methodology and challenges for defining test principles

Kurze, Albrecht, Eibl, Maximilian 16 January 2017 (has links)
We report on our experiences from two user studies (lab experiments) with nearly 300 participants for QoE evaluation using real mobile services and devices in our WiFi network emulation testbed. We briefly introduce our principles for integrating real services in these studies: how we selected relevant services, how we investigated their testability and how we tested them with high efficiency.
313

Modellierung des QoS-QoE-Zusammenhangs für mobile Dienste und empirische Bestimmung in einem Netzemulations-Testbed / Modelling the QoS-QoE relationship for mobile services and its empirical determination in a network emulation testbed

Kurze, Albrecht 03 June 2016 (has links)
This thesis examines the relationship between Quality of Service (QoS) and Quality of Experience (QoE) for mobile Internet services. QoS covers the technical view of the telecommunications network, characterized by performance-related parameter values (e.g. throughput and latency), whereas QoE refers to the assessment of the user experience (e.g. satisfaction and acceptability) when using the services. The theoretical analysis reveals QoS and QoE as highly complex and interrelated concepts: explaining them jointly requires a multi- and interdisciplinary approach between engineering and the human sciences, since besides the technology the human user is also part of the QoS-QoE relationship. A multilayered model captures the relevant influencing factors and the internal relationships between QoS and QoE from both the network and the user perspective. To quantify the relationship between concrete values in an empirical QoE evaluation, an extensive psychophysical laboratory experiment with real users, devices and services was designed. The network emulation testbed developed for this purpose makes it possible to combine typical mobile network situations and usage situations in a controlled test course. The formulated principles for test relevance, test suitability and test efficiency take into account the particularities of a test setup and test design with real devices and services. Results from more than 200 participants confirm the predicted QoS-QoE characteristics of the six investigated services as either continuously elastic or step-like (non-elastic). For each service, the required values of the QoS network parameters can be derived from a desired degree of user satisfaction, which yields a QoS-QoE satisfaction corridor between a lower and an upper threshold. In some cases, QoS-independent factors, e.g. the way the stimuli are presented in the app on the device, turn out to be as relevant for QoE as the QoS network parameters themselves.
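The satisfaction-corridor idea lends itself to a small illustration. The sketch below assumes a hypothetical logistic mapping from downlink throughput to a MOS-style QoE score for an elastic service and inverts it for a target satisfaction range; the model form, parameter values and MOS bounds are illustrative assumptions, not the thesis' fitted results.

```python
# Illustrative sketch (not the thesis' actual model): an "elastic" service whose QoE
# rises smoothly with throughput, modelled with a logistic curve. Inverting the curve
# for a target satisfaction range yields the lower/upper QoS thresholds that bound a
# QoS-QoE satisfaction corridor.
import math

def qoe_elastic(throughput_kbps: float, q50: float = 2000.0, k: float = 0.002) -> float:
    """Hypothetical MOS (1..5) as a logistic function of downlink throughput."""
    return 1.0 + 4.0 / (1.0 + math.exp(-k * (throughput_kbps - q50)))

def throughput_for_mos(target_mos: float, q50: float = 2000.0, k: float = 0.002) -> float:
    """Invert the logistic model: throughput needed to reach a target MOS."""
    return q50 - math.log(4.0 / (target_mos - 1.0) - 1.0) / k

# Corridor between "just acceptable" (MOS 3.5) and "saturated" (MOS 4.5):
lower = throughput_for_mos(3.5)
upper = throughput_for_mos(4.5)
print(f"satisfaction corridor: {lower:.0f} .. {upper:.0f} kbit/s")
```

Above the upper bound, extra throughput buys almost no additional satisfaction; below the lower bound, satisfaction collapses. A step-like (non-elastic) service would instead show a single hard threshold.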
314

Ποιότητα ιατρικής πληροφορίας στο συνεργατικό διαδίκτυο (Web 2.0) / Quality of medical information on the collaborative web (Web 2.0)

Μπάκαβος, Ιωάννης 26 July 2013 (has links)
It is commonly accepted that health-care systems face ever-increasing pressure to improve the quality of the services they provide to patients. This is happening at a time when, on the one hand, health-care costs must be managed more rationally because of the economic crisis, while on the other hand the penetration of information technology into both health care and everyday life is changing the relationship between doctor and patient. This thesis gives a detailed description of the people who use the Internet for health issues and of the services they seek through it. It describes the creation of websites for health-related information, services and data, the particularities of this domain, and the potential risks for the citizen. It then analyses the measures promoted by the European Union and the efforts being made internationally to ensure the quality of the information and services provided through such websites. The evolution in the way health-care providers, and patients themselves, interact with each other forms the basis of the collaborative web and frames the transition from Web 1.0 to Web 2.0. The main efforts and trends in the Web 2.0 field are presented, including government policies that set the framework and pioneering private initiatives. In this context, we investigate the extent to which tertiary health-care institutions in both Greece and the USA incorporate Web 2.0 technologies into their websites. Based on this evaluation we propose a model website that incorporates these innovations while offering safe, high-quality information to the user. In addition, we developed a tool for evaluating health-related websites, aimed both at the patient user who wants to check the validity of a website and at the administrator who wants to improve his website.
315

WiMAX有服務品質保證的公平資源分配機制 / Fairness of Resource Allocation with QoS Guarantee in WiMAX

羅啟文, Lo, Chi Wen Unknown Date (has links)
Over the past decade, the spread of wireless networks and growing demand for real-time services have raised users' expectations of quality of service, and WiMAX is one of the most promising wireless transmission technologies. However, the WiMAX standard does not specify mechanisms for connection admission control (CAC), bandwidth request (BR), bandwidth allocation or scheduling. In this thesis we design these mechanisms and implement them in the MAC layer. We first discuss the parameters and issues involved in designing connection admission control, bandwidth request, bandwidth allocation and scheduling. We then propose an efficient method that improves the fairness of bandwidth allocation and the poor efficiency of contention-based bandwidth request found in most current designs. We design a MAC-layer co-function called the Dynamic Polling Interval function (DPI function): it supports a contention-free bandwidth request that improves on the traditional contention-based request, and its properties are used to improve the fairness of bandwidth allocation and scheduling. Finally, we use the network simulator NS-2 (Network Simulator version 2) together with an experimental test setup to compare performance and evaluate the effectiveness of the proposed methods.
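As a rough illustration of why polling-based (contention-free) bandwidth requests can help, the sketch below adapts a station's polling interval to its reported backlog. The update rule, bounds and parameter values are hypothetical; the abstract does not specify how the DPI function actually computes the interval.

```python
# Hedged sketch: a generic adaptive polling interval, illustrating the idea behind a
# contention-free (polling-based) bandwidth request. The update rule and parameters
# are assumptions for illustration, not the thesis' DPI function.
def next_polling_interval(current_ms: float, backlog_bytes: int,
                          min_ms: float = 5.0, max_ms: float = 100.0) -> float:
    """Shorten the poll interval when a station has a backlog, lengthen it when idle."""
    if backlog_bytes > 0:
        current_ms /= 2.0   # poll more often: pending data is reported without contention
    else:
        current_ms *= 1.5   # back off: free uplink opportunities for other stations
    return max(min_ms, min(max_ms, current_ms))

# Example: a station that just reported a 12 kB backlog gets polled sooner next frame.
interval = 40.0
interval = next_polling_interval(interval, backlog_bytes=12_000)
print(interval)  # 20.0 ms
```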
316

Allocation optimale des ressources pour les applications et services de grille de calcul / Optimal resource allocation for grid computing applications and services

Abdelhanine, Filali January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
317

New quality of service routing algorithms based on local state information : the development and performance evaluation of new bandwidth-constrained and delay-constrained quality of service routing algorithms based on localized routing strategies

Aldosari, Fahd M. January 2011 (has links)
The exponential growth of Internet applications has created new challenges for the control and administration of large-scale networks, which consist of heterogeneous elements under dynamically changing traffic conditions. These emerging applications need guaranteed service levels, beyond those supported by best-effort networks, to deliver the intended services to the end user. Several models have been proposed for a Quality of Service (QoS) framework that can provide the means to transport these services. It is desirable to find efficient routing strategies that can meet the strict routing requirements of these applications. QoS routing is considered as one of the major components of the QoS framework in communication networks. In QoS routing, paths are selected based upon the knowledge of resource availability at network nodes and the QoS requirements of traffic. Several QoS routing schemes have been proposed that differ in the way they gather information about the network state and the way they select paths based on this information. The biggest downside of current QoS routing schemes is the frequent maintenance and distribution of global state information across the network, which imposes huge communication and processing overheads. Consequently, scalability is a major issue in designing efficient QoS routing algorithms, due to the high costs of the associated overheads. Moreover, inaccuracy and staleness of global state information is another problem that is caused by relatively long update intervals, which can significantly deteriorate routing performance. Localized QoS routing, where source nodes take routing decisions based solely on statistics collected locally, was proposed relatively recently as a viable alternative to global QoS routing. It has shown promising results in achieving good routing performance, while at the same time eliminating many scalability related problems. In localized QoS routing each source-destination pair needs to determine a set of candidate paths from which a path will be selected to route incoming flows. The goal of this thesis is to enhance the scalability of QoS routing by investigating and developing new models and algorithms based on the localized QoS routing approach. For this thesis, we have extensively studied the localized QoS routing approach and demonstrated that it can achieve a higher routing performance with lower overheads than global QoS routing schemes. Existing localized routing algorithms, Proportional Sticky Routing (PSR) and Credit-Based Routing (CBR), use the blocking probability of candidate paths as the criterion for selecting routing paths based on either flow proportions or a crediting mechanism, respectively. Routing based on the blocking probability of candidate paths may not always reflect the most accurate state of the network. This has motivated the search for alternative localized routing algorithms and to this end we have made the following contributions. First, three localized bandwidth-constrained QoS routing algorithms have been proposed, two are based on a source routing strategy and the third is based on a distributed routing strategy. All algorithms utilize the quality of links rather than the quality of paths in order to make routing decisions. Second, a dynamic precautionary mechanism was used with the proposed algorithms to prevent candidate paths from reaching critical quality levels. Third, a localized delay-constrained QoS routing algorithm was proposed to provide routing with an end-to-end delay guarantee. 
We compared the performance of the proposed localized QoS routing algorithms with other localized and global QoS routing algorithms under different network topologies and different traffic conditions. Simulation results show that the proposed algorithms outperform the other algorithms in terms of routing performance, resource balancing and have superior computational complexity and scalability features.
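To make the localized-routing idea concrete, the sketch below keeps a small set of candidate paths per destination and selects among them using only statistics gathered at the source. The scoring rule is an illustrative assumption; it does not reproduce PSR, CBR or the thesis' link-quality-based algorithms.

```python
# Hedged sketch of localized QoS routing: each source node maintains a few candidate
# paths per destination and chooses among them using locally collected feedback
# (here, an exponentially weighted acceptance ratio), with no global state exchange.
from collections import defaultdict
import random

class LocalizedRouter:
    def __init__(self, candidate_paths, alpha=0.1):
        self.paths = candidate_paths                 # e.g. {"dst": ["pathA", "pathB"]}
        self.score = defaultdict(lambda: 0.5)        # locally maintained quality estimate
        self.alpha = alpha

    def pick_path(self, dst):
        # Prefer the candidate with the best local score; break ties randomly.
        best = max(self.score[p] for p in self.paths[dst])
        return random.choice([p for p in self.paths[dst] if self.score[p] == best])

    def feedback(self, path, accepted: bool):
        # Update the local estimate from the outcome of the last flow setup attempt.
        self.score[path] = (1 - self.alpha) * self.score[path] + self.alpha * (1.0 if accepted else 0.0)

# Example: route a flow to destination "D" and report the admission outcome.
r = LocalizedRouter({"D": ["P1", "P2"]})
p = r.pick_path("D")
r.feedback(p, accepted=True)
```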
318

Design and quality of service of mixed criticality systems in embedded architectures based on Network-on-Chip (NoC) / Dimensionnement et Qualité de Service pour les systèmes à criticité mixte dans les architectures embarquées à base de Network on Chip (NoC)

Papastefanakis, Ermis 28 November 2017 (has links)
The evolution of Systems-on-Chip (SoCs) is rapid and the number of processors has increased, moving platforms from multi-core to manycore. In such platforms the interconnect architecture has also shifted from traditional buses to Networks-on-Chip (NoCs) in order to cope with scalability. NoCs allow the processors to exchange information with memory and peripherals during task execution and enable multiple communications in parallel. NoC-based platforms are also present in embedded systems, which are characterized by requirements such as predictability, security and mixed criticality. Enabling these features on existing commercial platforms requires taking the NoC into account, since it is a key element with an important impact on a SoC's performance. A task exchanges information through the NoC and, as a result, its execution time depends on the transmission time of the flows it generates. Calculating the Worst-Case Transmission Time (WCTT) of flows in the NoC is therefore a step towards calculating the Worst-Case Execution Time (WCET) of a task, which contributes to the overall predictability of the system. Similarly, by leveraging arbitration and traffic policies in the NoC it is possible to provide security guarantees against compromised tasks that might try to saturate the system's resources (DoS attacks). In safety-critical systems, distinguishing tasks by criticality level allows tasks of mixed criticality to coexist and execute in harmony, and allows critical tasks to maintain their execution times at the cost of lower-criticality tasks, which are either slowed down or stopped. This thesis provides methods and mechanisms that contribute to the axes of predictability, security and mixed criticality in NoC-based manycore architectures. The intent is to address the challenges of these three axes jointly, taking their mutual impact into account: each axis has been researched individually, but very little work considers their interdependence. This fusion of aspects is becoming increasingly intrinsic in fields such as the Internet of Things, Cyber-Physical Systems (CPSs), and connected and autonomous vehicles, which are gaining momentum; their high degree of connectivity creates a large attack surface, and their growing presence makes attacks severe and visible. The contributions of this thesis are a method to provide predictability to a set of flows in the NoC, a mechanism to provide security properties to the NoC, and a toolkit for traffic generation used for benchmarking. The first contribution is an adaptation of the trajectory approach traditionally used in avionics networks (AFDX) to calculate WCET: we identify the differences and similarities of the NoC architecture and modify the trajectory approach in order to calculate the WCTT of NoC flows. The second contribution is a mechanism that detects DoS attacks and mitigates their impact on a mixed-criticality set of flows: a monitor detects abnormal behaviour and activates a mitigation mechanism that applies traffic shaping at the source and restricts the rate at which the NoC is occupied, limiting the impact of the attack and guaranteeing resource availability for high-criticality tasks. Finally, NTGEN is a toolkit that automatically generates random sets of flows producing a predetermined NoC occupancy; these sets are injected into the NoC and latency information is collected.
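A minimal sketch of the monitor-and-mitigate pattern described above follows, assuming a per-source injection budget and token-bucket shaping at the source; thresholds, window sizes and the shaping policy are illustrative and not taken from the thesis.

```python
# Hedged sketch: watch per-source NoC injection rates and, when a source exceeds its
# expected budget for the current window, throttle it with a token bucket at the
# source. All parameters here are illustrative assumptions.
class InjectionMonitor:
    def __init__(self, budget_flits_per_window: int):
        self.budget = budget_flits_per_window
        self.count = 0

    def record(self, flits: int) -> bool:
        """Return True if this source's injections in the current window look abnormal."""
        self.count += flits
        return self.count > self.budget

    def end_window(self):
        self.count = 0

class TokenBucketShaper:
    def __init__(self, rate_flits_per_cycle: float, burst_flits: int):
        self.rate, self.burst = rate_flits_per_cycle, burst_flits
        self.tokens = float(burst_flits)

    def allow(self, flits: int, elapsed_cycles: int) -> bool:
        """Admit a packet of `flits` only if enough tokens have accumulated since the last call."""
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_cycles)
        if self.tokens >= flits:
            self.tokens -= flits
            return True
        return False
```

The monitor would run per source (e.g. in the network interface); once it flags a source, that source's traffic is admitted only through the shaper, so high-criticality flows keep their share of the NoC.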
319

Utilização da álgebra de caminhos para realizar o mapeamento de requisições virtuais sobre redes de substrato. / Path algebra to make the mapping of virtual network requests over substrate networks.

Molina, Miguel Angelo Tancredi 13 July 2012 (has links)
Network virtualization is a new networking paradigm that allows multiple virtual networks (VNs) to share the same infrastructure, called the substrate network (SN), in an efficient and effective way. The implementation and development of new protocols and the testing of new solutions and architectures for the current and future Internet can be addressed through network virtualization. Network virtualization gives rise to the virtual network embedding (VNE) problem: mapping the virtual nodes and virtual links of each request onto a substrate network. The problem is NP-hard, and it is usually solved with heuristic and approximation algorithms that perform node mapping and link mapping in two stages, either independently or in a coordinated way. This thesis addresses the virtual link mapping stage of the VNE problem using paths algebra. The proposed solution provides the best performance when compared with other network virtualization solutions found in the literature. The simulation results for the VNE problem were evaluated and analysed using the algorithm developed in this thesis, called Path Algebra for Virtual Link Mapping (PAViLiM), which uses paths algebra to map virtual links onto paths in the substrate network. Paths algebra is powerful and flexible; this flexibility allows a detailed exploration of the solution space and the identification of the best criterion and policy to be used for network virtualization.
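For readers unfamiliar with the link-mapping stage, the sketch below maps a single virtual link onto a substrate path whose edges all have sufficient residual bandwidth, using plain BFS, and then reserves that bandwidth. It illustrates the constrained-path idea only and does not implement PAViLiM's paths-algebra ranking of metrics.

```python
# Hedged sketch of the VNE link-mapping step: find a substrate path that can carry the
# virtual link's bandwidth demand and reserve capacity along it. A plain BFS is used
# for clarity; PAViLiM's paths-algebra formulation is not reproduced here.
from collections import deque

def map_virtual_link(substrate, src, dst, demand):
    """substrate: {node: {neighbor: residual_bandwidth}}. Returns a path or None."""
    parent, queue, seen = {src: None}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            path.reverse()
            for a, b in zip(path, path[1:]):      # reserve bandwidth along the chosen path
                substrate[a][b] -= demand
                substrate[b][a] -= demand
            return path
        for v, bw in substrate[u].items():
            if v not in seen and bw >= demand:    # only traverse links with enough capacity
                seen.add(v)
                parent[v] = u
                queue.append(v)
    return None  # embedding of this virtual link fails

# Example: map a 10-unit virtual link between A and C.
net = {"A": {"B": 20, "C": 5}, "B": {"A": 20, "C": 15}, "C": {"A": 5, "B": 15}}
print(map_virtual_link(net, "A", "C", 10))  # ['A', 'B', 'C']
```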
320

Fatores de equivalência de veículos pesados em rodovias de pista dupla / Passenger-car equivalents for heavy vehicles on expressways

Piva, Fernando José 19 June 2015 (has links)
The objective of this study is to evaluate the impact of heavy vehicles on the quality of service of Brazilian expressways (freeways and divided multilane highways) using passenger-car equivalents (PCEs) for heavy vehicles (trucks and buses). PCE estimates for expressways with three or more traffic lanes in each direction were obtained from traffic data collected over short time intervals (5 or 6 minutes) on expressways in the state of São Paulo. A total of 53,655 speed-flow observations, made at eight permanent traffic sensor installations during 2010 and 2011, were used in this study. A PCE estimate was calculated for each time interval using an equation derived from Huber's method, based on the assumption that the quality of service is the same across all traffic lanes during the time interval over which the traffic data are collected. Basic flow (passenger cars only) was taken to be the observed traffic flow on the lane closest to the median, whereas mixed flow (passenger cars and heavy vehicles) was taken to be the observed traffic flow on the lane closest to the shoulder. The results indicate that: (1) in a significant portion of the time (52% of the observations) the quality of service is not the same across all traffic lanes; (2) the marginal impact of heavy vehicles decreases as the fraction of heavy vehicles in the traffic stream increases; and (3) the variations in PCE estimates due to the level of service are less evident on steeper grades, where the effect of heavy vehicles' poorer performance is greater. PCE estimates obtained in this study were compared with PCEs from two other studies that used data generated by traffic simulators; the PCEs obtained from empirical data are consistently higher than those estimated from simulation results.
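Huber's relation, on which the per-interval PCE calculation is based, is commonly presented as below; the notation and the worked numbers are illustrative and may differ from the exact form used in the thesis.

```latex
% Huber's passenger-car-equivalent relation, in its usual presentation:
% q_B = car-only (basic) flow on the median lane, q_M = mixed flow on the shoulder
% lane, p_T = proportion of heavy vehicles in q_M, with both flows observed under
% the same quality of service.
\[
  E_T \;=\; \frac{1}{p_T}\left(\frac{q_B}{q_M} - 1\right) + 1
\]
% Illustrative numbers (not from the thesis): q_B = 1800 veh/h, q_M = 1500 veh/h,
% p_T = 0.25  =>  E_T = 4 (1.2 - 1) + 1 = 1.8, i.e. each heavy vehicle counts as
% roughly 1.8 passenger cars in that interval.
```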
