181

Addressing the innovation lag of port congestion in Durban, South Africa

Misra, Trishna January 2021 (has links)
One of the key indicators of port performance is a port's efficiency in minimising congestion. However, the Port of Durban, like many other ports in Africa and around the world, faces a congestion challenge. This study aimed to identify the causes of congestion and proffer solutions to alleviate it. By understanding the causes of congestion, adopting incremental solutions can achieve the desired outcome. A qualitative, exploratory study was conducted with 14 participants from the maritime sector who have experienced port congestion. Data were analysed thematically: all data collected were transcribed, and the researcher identified emerging themes to attach meaning to the respondents' interpretations and perceptions of their lived experience of what causes port congestion in Durban and possible solutions thereto. The key findings confirmed that wind, labour issues, and equipment are the main causes of congestion in the Port of Durban. Further research is needed to determine the impact of climate change on congestion. The incremental and radical solutions proffered by the participants were compared to the causes of congestion. This study contributes to the field of maritime studies by identifying the causes of congestion in the port, and to the field of innovation studies by contributing to innovation theory. / Mini Dissertation (MBA)--University of Pretoria, 2021. / Gordon Institute of Business Science (GIBS) / MBA / Unrestricted
182

A Clean-Slate Architecture for Reliable Data Delivery in Wireless Mesh Networks

ElRakabawy, Sherif M., Lindemann, Christoph 17 December 2018 (has links)
In this paper, we introduce a clean-slate architecture for improving the delivery of data packets in IEEE 802.11 wireless mesh networks. In contrast to the rigid TCP/IP layer architecture, which exhibits serious deficiencies in such networks, we propose a unitary layer approach that combines both routing and transport functionalities in a single layer. The new Mesh Transmission Layer (MTL) incorporates cross-interacting routing and transport modules for reliable data delivery based on the loss probabilities of wireless links. Due to the significant drawbacks of standard TCP over IEEE 802.11, we particularly focus on the transport module, proposing a pure rate-based approach for transmitting data packets according to the current contention in the network. By considering the IEEE 802.11 spatial reuse constraint and employing a novel acknowledgment scheme, the new transport module improves both goodput and fairness in wireless mesh networks. In a comparative performance study, we show that MTL achieves up to 48% more goodput and up to 100% fewer packet drops than TCP/IP, while maintaining excellent fairness results.
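The paper's transport module is rate-based rather than window-based: packets are paced at an interval derived from per-hop contention rather than released in bursts. The following sketch is an illustrative reading of that idea, not the MTL implementation; the four-hop spatial-reuse factor is an assumption drawn from the usual analysis of 802.11 chain topologies.

```python
# Hypothetical sketch of rate-based pacing in the spirit of the MTL transport
# module: instead of a congestion window, the sender spaces packets by an
# interval derived from the measured per-hop transmission delay and the
# IEEE 802.11 spatial-reuse constraint (roughly 4 consecutive hops contend).

SPATIAL_REUSE_FACTOR = 4  # assumed: ~4 consecutive hops share the channel

def pacing_interval(avg_hop_tx_delay: float, path_hops: int) -> float:
    """Return the inter-packet send interval in seconds.

    A new packet is injected only after the previous one is assumed to have
    cleared the spatial-reuse region, i.e. min(hops, 4) hop transmissions.
    """
    contending_hops = min(path_hops, SPATIAL_REUSE_FACTOR)
    return avg_hop_tx_delay * contending_hops

# Example: 2 ms per-hop delay on a 6-hop path -> pace one packet every 8 ms.
interval = pacing_interval(0.002, 6)
```

Pacing at this rate keeps at most one packet inside the contention region at a time, which is where the goodput and fairness gains over bursty TCP come from.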
183

Towards more scalable mutual exclusion for multicore architectures

Lozi, Jean-Pierre 16 July 2014 (has links)
The scalability of multithreaded applications on current multicore systems is hampered by the performance of lock algorithms, due to the costs of access contention and cache misses. The main contribution presented in this thesis is a new lock algorithm, Remote Core Locking (RCL), that aims to improve the performance of critical sections in legacy applications on multicore architectures. The idea of RCL is to replace lock acquisitions by optimized remote procedure calls to a dedicated hardware thread, which is referred to as the server. RCL limits the performance collapse observed with other lock algorithms when many threads try to acquire a lock concurrently, and removes the need to transfer lock-protected shared data to the hardware thread acquiring the lock, because such data can typically remain in the server's cache. Other contributions presented in this thesis include a profiler that identifies the locks that are bottlenecks in multithreaded applications and that can thus benefit from RCL, and a reengineering tool developed with Julia Lawall that transforms POSIX locks into RCL locks. Eighteen applications were used to evaluate RCL: the nine applications of the SPLASH-2 benchmark suite, the seven applications of the Phoenix 2 benchmark suite, Memcached, and Berkeley DB with a TPC-C client. Eight of these applications are unable to scale because of locks and benefit from RCL on an x86 machine with four AMD Opteron processors and 48 hardware threads. Using RCL locks, performance is improved by up to 2.5 times with respect to POSIX locks on Memcached, and up to 11.6 times with respect to Berkeley DB with the TPC-C client.
On a SPARC machine with two Sun UltraSPARC T2+ processors and 128 hardware threads, three applications benefit from RCL: performance is improved by up to 1.3 times with respect to POSIX locks on Memcached, and up to 7.9 times with respect to Berkeley DB with the TPC-C client.
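The core RCL mechanism, replacing lock acquisition with a request to a dedicated server thread that executes the critical section on the clients' behalf, can be illustrated in a few lines. This is a didactic sketch, not the thesis's C implementation: real RCL uses busy-waiting on cache-aligned request slots rather than a queue, and a hardware thread rather than a Python thread.

```python
# Illustrative sketch (not the authors' implementation) of the RCL idea:
# clients submit critical sections as closures to a dedicated server thread,
# which executes them serially, so lock-protected data stays with one thread.

import threading
import queue

class RemoteCoreLock:
    def __init__(self):
        self._requests = queue.Queue()
        self._server = threading.Thread(target=self._serve, daemon=True)
        self._server.start()

    def _serve(self):
        while True:
            func, args, done = self._requests.get()
            done["result"] = func(*args)   # critical section runs on the server
            done["event"].set()

    def execute(self, func, *args):
        """Stand-in for the optimized RPC: block until the server has run func."""
        done = {"event": threading.Event()}
        self._requests.put((func, args, done))
        done["event"].wait()
        return done["result"]

counter = [0]          # "lock-protected" shared state, touched only by the server
rcl = RemoteCoreLock()

def increment():
    counter[0] += 1
    return counter[0]

for _ in range(1000):
    rcl.execute(increment)
# All 1000 increments executed serially on the server thread.
```

Because only the server thread ever touches `counter`, the shared data never migrates between caches, which is the source of RCL's advantage under contention.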
184

A Hybrid (Active-Passive) VANET Clustering Technique

Moore, Garrett Lee 01 January 2019 (has links)
Clustering serves a vital role in the operation of Vehicular Ad hoc Networks (VANETs) by continually grouping highly mobile vehicles into logical hierarchical structures. These moving clusters support Intelligent Transport Systems (ITS) applications and message routing by establishing a more stable global topology. Clustering increases the scalability of the VANET by eliminating broadcast storms caused by packet flooding, and facilitates multi-channel operation. In the literature, clustering techniques are partitioned into two categories: active and passive. Active techniques rely on periodic beacon messages from all vehicles containing location, velocity, and direction information. However, in areas of high vehicle density, congestion may occur on the long-range channel used for beacon messages, limiting the scale of the VANET. Passive techniques use information embedded in the packet headers of existing traffic to perform clustering; in this method, vehicles that are not transmitting traffic may leave cluster heads with stale and malformed clusters. This dissertation presents a hybrid active/passive clustering technique, in which the passive technique is used as a congestion control strategy for areas where congestion is detected in the network. In this case, cluster members halt their periodic beacon messages and use position information embedded in the header to update the cluster head with their position. This work demonstrated through simulation that the hybrid technique reduced or eliminated the delays caused by congestion in the modified Distributed Coordination Function (DCF) process, thus increasing the scalability of VANETs in urban environments. Packet loss and delays caused by the hidden terminal problem were limited to distant, non-clustered vehicles. This dissertation report presents a literature review, methodology, results, analysis, and conclusion.
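The hybrid switch described above, beacon actively until congestion is detected, then piggyback position updates on existing traffic, can be sketched as a small state machine. The threshold value and method names below are illustrative assumptions, not taken from the dissertation.

```python
# Hedged sketch of the hybrid active/passive mode switch: a cluster member
# beacons periodically until congestion is reported on the long-range channel,
# then falls back to piggybacking its position in the headers of data traffic.
# The threshold and field names are assumptions for this illustration.

from dataclasses import dataclass

CONGESTION_THRESHOLD = 0.7  # assumed channel-busy ratio that triggers passive mode

@dataclass
class ClusterMember:
    mode: str = "active"

    def on_channel_report(self, busy_ratio: float) -> None:
        """Switch modes based on the observed busy ratio of the beacon channel."""
        self.mode = "passive" if busy_ratio >= CONGESTION_THRESHOLD else "active"

    def outgoing(self, position, has_data_packet: bool):
        if self.mode == "active":
            return ("beacon", position)        # periodic long-range beacon
        if has_data_packet:
            return ("piggyback", position)     # position rides in a data header
        return None                            # silent: no beacon-channel load
```

The key property is the `None` branch: in passive mode a vehicle with no data traffic adds zero load to the congested beacon channel, at the cost of the cluster head possibly holding stale state for it.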
185

VANETomo: A Congestion Identification and Control Scheme in Connected Vehicles Using Network Tomography

Paranjothi, Anirudh, Khan, Mohammad S., Patan, Rizwan, Parizi, Reza M., Atiquzzaman, Mohammed 01 February 2020 (has links)
The Internet of Things (IoT) is a vision for an internetwork of intelligent, communicating objects, which is on the cusp of transforming human lives. Smart transportation is one of the critical application domains of IoT and has benefitted from using state-of-the-art technology to combat urban issues such as traffic congestion, while promoting communication between vehicles, increasing driver safety and traffic efficiency, and ultimately paving the way for autonomous vehicles. Connected Vehicle (CV) technology, enabled by Dedicated Short Range Communication (DSRC), has attracted significant attention from industry, academia, and government due to its potential for improving driver comfort and safety. These vehicular communications have stringent transmission requirements. To assure the effectiveness and reliability of DSRC, efficient algorithms are needed to ensure adequate quality of service in the event of network congestion. Previously proposed congestion control methods require high levels of cooperation among Vehicular Ad-Hoc Network (VANET) nodes. This paper proposes a new approach, VANETomo, which uses statistical Network Tomography (NT) to infer transmission delays on links between vehicles with no cooperation from connected nodes. Our proposed method combines open- and closed-loop congestion control in a VANET environment. Simulation results show VANETomo outperforming other congestion control strategies.
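The tomography step, inferring per-link delays from end-to-end measurements without any per-node cooperation, reduces to solving a linear system relating observed path delays to the links each path traverses. The 3-link topology below is invented for illustration; VANETomo's actual estimator may differ.

```python
# Minimal sketch of statistical network tomography: given end-to-end path
# delays and a routing matrix (which links each path crosses), infer per-link
# delays. With three paths over three links the system solves exactly; larger
# cases would use least squares (e.g. numpy.linalg.lstsq).

# Rows: paths, columns: links A, B, C (1 = path uses the link)
routing = [[1, 1, 0],   # path 1 uses links A and B
           [0, 1, 1],   # path 2 uses links B and C
           [1, 0, 1]]   # path 3 uses links A and C
path_delays = [30.0, 50.0, 40.0]  # ms, observed end to end

# A+B=30, B+C=50, A+C=40. Summing all three gives 2(A+B+C)=120, so A+B+C=60,
# and each link delay is that total minus the path that avoids it.
total = sum(path_delays) / 2
link_delays = [total - path_delays[1],  # A = 60 - (B+C) = 10
               total - path_delays[2],  # B = 60 - (A+C) = 20
               total - path_delays[0]]  # C = 60 - (A+B) = 30
```

A congested link then shows up as an anomalously large inferred delay, which is what drives the scheme's congestion identification.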
186

Congestion Avoidance And Fairness In Wireless Sensor Networks

Ahmad, Mohammad 01 January 2007 (has links)
Sensor network congestion avoidance and control primarily aims to reduce packet drops while maintaining fair bandwidth allocation to existing network flows. The design of a congestion control algorithm suited to all types of applications in sensor networks is a challenging task due to the application-specific nature of these networks. With numerous sensors transmitting data simultaneously to one or more base stations (also called sinks), sensor nodes located near the base station will most likely experience congestion and packet loss. In this thesis, we propose a novel distributed congestion avoidance algorithm that calculates the ratio of the number of downstream to upstream nodes. This ratio (named the characteristic ratio) is used to make routing decisions and incorporate load balancing, while also serving as a pointer to the congestion state of the network. Available queue sizes of the downstream nodes are used to detect incipient congestion. Queue characteristics of candidate downstream nodes are used collectively to implement both congestion avoidance and fairness by adjusting a node's forwarding rate and next-hop destination. Such an approach helps minimize packet drops and improve energy efficiency and load balancing. In cases of severe congestion, the source is signaled to reduce its sending rate and enable the network recovery process. This is essentially a transport layer algorithm and would work best with a multi-path routing protocol and almost any MAC layer standard. We present the design and implementation of the proposed protocol and compare it with existing congestion avoidance protocols such as global rate control and lightweight buffering. Our simulation results show a higher packet delivery ratio with greater node buffer utilization for our protocol in comparison with the conventional mechanisms.
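The two decision inputs the abstract names, the characteristic ratio and the downstream queue sizes, can be sketched as follows. Function names and the tie between ratio and routing are simplified assumptions for illustration, not the thesis's exact algorithm.

```python
# Illustrative sketch of the forwarding decision: a node computes its
# characteristic ratio (downstream/upstream neighbours) as a congestion hint,
# and picks as next hop the candidate downstream node with the most free
# queue space. Names and structure are assumptions for this sketch.

def characteristic_ratio(num_downstream: int, num_upstream: int) -> float:
    """> 1 means more outgoing capacity than incoming load at this node."""
    return num_downstream / max(num_upstream, 1)

def choose_next_hop(candidates: dict) -> str:
    """candidates maps node id -> available queue slots (larger = less congested)."""
    return max(candidates, key=candidates.get)

# A node with 4 downstream and 2 upstream neighbours has ratio 2.0;
# among three candidates, the one with the emptiest queue is chosen.
ratio = characteristic_ratio(4, 2)
hop = choose_next_hop({"n1": 3, "n2": 9, "n3": 5})
```

Spreading traffic toward the emptiest downstream queue is what implements both the load balancing and the incipient-congestion avoidance in one decision.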
187

Link Adaptation Algorithm and Metric for IEEE Standard 802.16

Ramachandran, Shyamal 26 March 2004 (has links)
Broadband wireless access (BWA) is a promising emerging technology. In the past, most BWA systems were based on proprietary implementations. The Institute of Electrical and Electronics Engineers (IEEE) 802.16 task group recently standardized the physical (PHY) and medium access control (MAC) layers for BWA systems. To operate in a wide range of physical channel conditions, the standard defines a robust and flexible PHY with a wide range of modulation and coding schemes. While the standard provides a framework for implementing link adaptation, it does not define how adaptation algorithms should be developed. This thesis develops a link adaptation algorithm for the IEEE 802.16 standard's WirelessMAN air interface. The algorithm attempts to minimize the end-to-end delay in the system by selecting the optimal PHY burst profile on the air interface. The IEEE 802.16 standard recommends measuring C/(N+I) at the receiver to initiate a change in the burst profile, based on a comparison of the instantaneous C/(N+I) with preset C/(N+I) thresholds. This research determines the C/(N+I) thresholds for the standard-specified channel Type 1. To determine precise C/(N+I) thresholds, the end-to-end (ETE) delay performance of IEEE 802.16 is studied for different PHY burst profiles at varying signal-to-noise ratio (SNR) values. Based on these performance results, we demonstrate that link-layer ETE delay does not reflect the physical channel condition and is therefore not suitable as the criterion for determining the C/(N+I) thresholds. The IEEE 802.16 standard specifies that ARQ should not be implemented at the MAC layer; our results demonstrate that this design decision renders link-layer metrics unusable in the link adaptation algorithm. Transmission Control Protocol (TCP) delay is identified as a suitable metric to serve as the link quality indicator.
Our results show that buffering and retransmissions at the transport layer cause ETE TCP delay to rise exponentially below certain SNR values. We use TCP delay as the criterion to determine the SNR entry and exit thresholds for each of the PHY burst profiles. We present a simple link adaptation algorithm that attempts to minimize end-to-end TCP delay based on the measured SNR. The effects of Internet latency, TCP's performance enhancement features, and network traffic on the adaptation algorithm are also studied. Our results show that delay in the Internet can considerably affect the C/(N+I) thresholds used in the LA algorithm, and that network load also impacts the thresholds significantly. We demonstrate that it is essential to characterize Internet delays and network load correctly while developing the LA algorithm. We also demonstrate that TCP's performance enhancement features do not have a significant impact on TCP delays over lossy wireless links. / Master of Science
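The entry/exit threshold pairs the thesis derives exist to add hysteresis: a link must clear a higher SNR to move to a faster burst profile than the SNR at which it abandons it, so the profile does not oscillate near a boundary. The sketch below shows the mechanism; the dB values are invented placeholders, not the thresholds derived in the thesis.

```python
# Sketch of threshold-based link adaptation with hysteresis for 802.16-style
# burst profiles: each profile has an entry and a (lower) exit SNR threshold.
# The dB values are illustrative assumptions, not the thesis's results.

# (profile, entry_snr_dB, exit_snr_dB), most robust first
PROFILES = [("BPSK-1/2",   0.0,  -1.0),
            ("QPSK-1/2",   6.0,   5.0),
            ("16QAM-1/2", 12.0,  11.0),
            ("64QAM-2/3", 18.0,  17.0)]

def adapt(current_idx: int, snr_db: float) -> int:
    """Move up one profile if SNR clears the next profile's entry threshold,
    down one if it falls below the current profile's exit threshold."""
    if current_idx + 1 < len(PROFILES) and snr_db >= PROFILES[current_idx + 1][1]:
        return current_idx + 1
    if current_idx > 0 and snr_db < PROFILES[current_idx][2]:
        return current_idx - 1
    return current_idx
```

With entry at 12 dB and exit at 11 dB for 16QAM-1/2, an SNR hovering at 11.5 dB neither promotes nor demotes the link, which is exactly the oscillation the gap prevents.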
188

Congestion control based on cross-layer game optimization in wireless mesh networks

Ma, X., Xu, L., Min, Geyong January 2013 (has links)
Due to the attractive characteristics of high capacity, high speed, wide coverage, and low transmission power, Wireless Mesh Networks (WMNs) have become an ideal choice for next-generation wireless communication systems. However, network congestion in WMNs deteriorates the quality of service provided to end users. Game theory is a modeling tool for studying multiple entities and the interactions between them; cross-layer design, in turn, has been shown to be practical for optimizing the performance of network communications. Therefore, a combination of game theory and cross-layer optimization, named cross-layer game optimization, is proposed to reduce network congestion in WMNs. In this paper, network congestion control in the transport layer and multi-path flow assignment in the network layer of WMNs are investigated. The proposed cross-layer game optimization algorithm is then employed to enable source nodes to change their set of paths and adjust their congestion windows according to the round-trip time to achieve a Nash equilibrium. Finally, evaluation results show that the proposed cross-layer game optimization scheme achieves high throughput with low transmission delay.
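The per-source window adjustment hinted at above can be sketched as a best-response rule driven by round-trip time: grow while the RTT stays near the uncongested baseline, back off as queueing delay appears. The update rule below is an illustrative stand-in, not the paper's utility function.

```python
# Hedged sketch of an RTT-driven best response for the congestion game: each
# source compares the observed round-trip time with its uncongested baseline
# and adjusts its congestion window. The specific rule is an assumption.

def best_response_window(cwnd: float, rtt: float, base_rtt: float) -> float:
    queueing = rtt - base_rtt
    if queueing < 0.1 * base_rtt:          # path looks uncongested: probe upward
        return cwnd + 1.0
    return max(1.0, cwnd * base_rtt / rtt)  # queueing delay: scale back

# Iterating this rule at every source, window sizes settle at a point where
# no source gains by deviating unilaterally, i.e. a Nash equilibrium of the
# congestion game (under the usual convexity assumptions).
```

The multiplicative back-off `cwnd * base_rtt / rtt` shrinks the window in proportion to how much the RTT has inflated, which drains queues faster the heavier the congestion.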
189

Improving TCP performance over heterogeneous networks: The investigation and design of end-to-end techniques for improving TCP performance for transmission errors over heterogeneous data networks.

Alnuem, M.A. January 2009 (has links)
Transmission Control Protocol (TCP) is considered one of the most important protocols in the Internet. An important mechanism in TCP is the congestion control mechanism, which controls the TCP sending rate and makes TCP react to congestion signals. In today's heterogeneous networks, TCP may operate over links with a lossy nature (wireless links, for example). TCP treats all packet losses as if they were due to congestion. Consequently, when used in networks that have lossy links, TCP aggressively reduces its sending rate when transmission (non-congestion) errors occur in an uncongested network. One solution to the problem is to discriminate between errors: to deal with congestion errors by reducing the TCP sending rate, and to take other actions for transmission errors. In this work we investigate the problem and propose a solution using an end-to-end error discriminator. The error discriminator improves the current congestion window mechanism in TCP and decides when to cut the congestion window and by how much. We have identified three areas where TCP interacts with drops: the congestion window update mechanism, the retransmission mechanism, and the timeout mechanism. All three are part of the TCP congestion control mechanism, and we propose changes to each of them to allow TCP to cope with transmission errors. We propose a new TCP congestion window action (CWA) for transmission errors that delays the window-cut decision until TCP has received all duplicate acknowledgments for a given window of data (packets in flight). This gives TCP a clear picture of the number of drops from that window, and the congestion window is then reduced only by the number of dropped packets. We also propose a safety mechanism that prevents this algorithm from causing congestion in the network, using an extra congestion window threshold (tthresh) to preserve the safe region where there are no drops of any kind.
The second algorithm is a new retransmission action to deal with multiple drops from the same window. This multiple drops action (MDA) prevents TCP from falling into consecutive timeout events by resending all dropped packets from the same window. A third algorithm calculates a new back-off policy for the TCP retransmission timeout based on the network's available bandwidth. This new retransmission timeout action (RTA) helps relate the length of the timeout event to current network conditions, especially under heavy transmission error rates. The three algorithms have been combined and incorporated into a delay-based error discriminator. The improvement of the new algorithm is measured along with its impact on the network in terms of congestion drop rate, end-to-end delay, average queue size, and fairness in sharing the bottleneck bandwidth. The results show that the proposed error discriminator, together with the new actions for transmission errors, increases the performance of TCP while reducing the load on the network compared to existing error discriminators, and delivers excellent fairness values for sharing the bottleneck bandwidth. Finally, improvements to the basic error discriminator are proposed by applying the multiple drops action (MDA) to both transmission and congestion errors; the results show improved performance as well as reduced congestion loss rates compared to a similar error discriminator. / Ministry of Higher Education and King Saud University in Saudi Arabia.
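The congestion window action (CWA) described above, wait for the full window's feedback, then shrink the window only by the number of packets actually lost, can be sketched as follows. This is a simplified reading of the thesis text (drop count approximated from duplicate ACKs), not its exact algorithm.

```python
# Sketch of the proposed congestion window action (CWA): defer the cut until
# all duplicate ACKs for the in-flight window have arrived, then reduce the
# window only by the number of dropped packets, instead of halving it.
# Simplified interpretation of the thesis text, for illustration only.

def cwa_new_window(cwnd: int, packets_in_flight: int, dup_acks: int) -> int:
    """Called once the window's feedback is complete: packets never
    acknowledged (approximated here from dup ACK count) are the drops."""
    dropped = packets_in_flight - dup_acks
    return max(1, cwnd - dropped)

# A window of 20 with 20 packets in flight and 17 duplicate ACKs implies
# 3 losses, so the window shrinks to 17 rather than the standard cut to 10.
```

Against transmission errors this is the desired behaviour (a few random losses barely dent the rate); the `tthresh` safety threshold in the text exists precisely because this gentle cut would be too aggressive if the losses were in fact congestion.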
190

An Analysis of Changes In Work Trip Travel Behaviour

Lo, Pui-Chin 12 1900 (has links)
<p> This work trip study is part of the studies on the King Street closure. The objectives are to examine the effect of changed traffic conditions on changes in travel behaviour, and to identify variables for choice modelling. Some behavioural changes are observed, but none is related to the increased road congestion. The household survey data show that people did not perceive a difference in travel times before and during the closure, so the reliability of reported times for modelling is suspect. However, modelling time of day in a multinomial logit framework using measured travel data does not help to explain the behavioural changes with either travel time or a congestion factor. It is concluded that the changes observed in this study represent random occurrences and that the change in congestion is too moderate to effect behavioural changes. </p> / Thesis / Master of Arts (MA)
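The multinomial logit framework mentioned in the abstract assigns each alternative (here, a time-of-day slot) a choice probability from its utility. A minimal sketch, with made-up utilities standing in for the study's travel-time and congestion terms:

```python
# Multinomial logit choice probabilities: P(i) = exp(U_i) / sum_j exp(U_j).
# The utilities below are invented for illustration; in the study they would
# be functions of travel time and a congestion factor.

import math

def logit_probabilities(utilities):
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three time-of-day alternatives with equal (dis)utility of -1.0 each:
probs = logit_probabilities([-1.0, -1.0, -1.0])
# Equal utilities give equal choice shares of 1/3 each, so a congestion change
# too small to shift the utilities leaves the predicted behaviour unchanged,
# consistent with the study's conclusion.
```

This also shows why a "too moderate" congestion change is undetectable in such a model: probabilities respond only to differences in utility between alternatives, not to a level shift common to all of them.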
