611

Preparing a Surpassing Moral Force: The Dynamics of the Brigham Young University Singers

Burton, David Ray 26 March 2007 (has links)
This is a qualitative study that takes a close look at an exemplary performing group, the Brigham Young University Singers. Using the methods of phenomenology and naturalistic inquiry, the author presents a rich, thick description of the daily activities and unique culture of the choir. Both strengths and weaknesses of the group are identified so that others can have an authentic, vicarious experience through reading the Singers' story. The author also identifies seven principles that contribute to the success of the group so that other choral conductors can adapt them to their own unique situations. Educators in all disciplines can benefit from a deeper understanding of this model community of learners.
612

Modbus/TCP server for automatic identification and data collection (AIDC) systems

Teixeira, José Filipe Alves January 2009 (has links)
Integrated Master's thesis. Electrical and Computer Engineering (Major: Automation). Faculdade de Engenharia, Universidade do Porto, 2009.
613

Routing of traffic in an IP-network using combined routing patterns

Lindblad, Andreas January 2015 (has links)
In IP networks using the OSPF principle together with the ECMP principle, traffic is routed over all shortest paths. Weights on links are set by an administrator, without knowing what the resulting routing pattern will be. In this thesis, I give a heuristic solution to the problem of changing a set of desired routing patterns in an ordered way to make them compatible with each other. An implementation of the algorithm has been made, and some testing of its performance with provided data is also presented.
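As background to the routing model this thesis works in, the sketch below shows how OSPF-style shortest-path computation with ECMP keeps every equal-cost predecessor, so traffic to a destination is split over all shortest paths implied by the administrator-set weights. It is illustrative only: the topology, node names, and weights are invented, and it is not the thesis's heuristic.

```python
import heapq
from collections import defaultdict

def ecmp_shortest_paths(graph, source):
    """Dijkstra that records *all* equal-cost predecessors, mirroring how
    OSPF/ECMP routes traffic over every shortest path."""
    dist = {source: 0}
    preds = defaultdict(set)          # node -> predecessors on shortest paths
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                preds[v] = {u}
                heapq.heappush(heap, (nd, v))
            elif nd == dist[v]:
                preds[v].add(u)       # equal-cost path: ECMP keeps it as well
    return dist, preds

# Invented four-node topology with administrator-set link weights.
graph = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 1},
    "D": {"B": 1, "C": 1},
}
dist, preds = ecmp_shortest_paths(graph, "A")
print(dist["D"], preds["D"])   # cost 2, reached equally via B and C
```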
614

Study of FPGA Implementation of Entropy Norm Computation for IP Data Streams

Nagalakshmi, Subramanya 18 April 2008 (has links)
Recent literature has reported the use of entropy measurements for anomaly detection in IP data streams. Space-efficient randomized algorithms for estimating the entropy of data streams are available in the literature; however, no hardware implementation of these algorithms exists. The main challenge for software implementations on IP data streams has been storing large volumes of data along with the high speed at which they have to be analyzed. In this thesis, a recent randomized algorithm from the literature is analyzed for hardware implementation. Software/hardware simulations indicate it is possible to implement a large portion of the algorithm on a low-cost Xilinx Virtex-II Pro FPGA, with trade-offs for real-time operation. The thesis reports on the feasibility of this algorithm's FPGA implementation and the corresponding trade-offs and limitations.
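For orientation, the snippet below computes the exact empirical entropy of a packet stream in the straightforward, space-hungry way. It is only a reference point one might check a streaming estimator against, not the randomized, space-efficient algorithm analyzed in the thesis, and the flow keys are invented.

```python
import math
from collections import Counter

def empirical_entropy(stream):
    """Exact empirical entropy H = -sum(p_i * log2(p_i)) over item frequencies.
    A streaming/randomized estimator approximates this using far less memory."""
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented flow keys (source IP, destination IP) standing in for packet headers.
packets = [("10.0.0.1", "192.0.2.9")] * 50 + [("10.0.0.2", "192.0.2.9")] * 5
print(round(empirical_entropy(packets), 4))   # low entropy: one flow dominates
```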
615

INTER PROCESS COMMUNICATION BETWEEN TWO SERVERS USING MPICH

Narla, Nagabhavana 01 June 2018 (has links)
The main aim of the project is to launch multiple processes and have those processes communicate with each other using peer-to-peer communication, eliminating the problems of multiple processes running on a single server and of multiple processes running on inhomogeneous servers, as well as problems of scalability. This is done using MPICH, a high-performance and portable implementation of the Message Passing Interface (MPI) standard. The project involves setting up passwordless authentication between two local servers with the help of an SSH connection. By establishing peer-to-peer communication, and by using a shell script written with MPICH and its derivatives, I demonstrate inter-process communication between the servers.
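The thesis drives MPICH from a shell script; purely as an illustration of the message-passing model involved, here is a minimal point-to-point exchange written with mpi4py (an assumed Python binding, not something the thesis states it uses) that would be launched with mpiexec across two SSH-reachable hosts.

```python
# Launch across two hosts (placeholder names), e.g.:
#   mpiexec -n 2 -hosts server1,server2 python ping.py
# Passwordless SSH between the hosts is assumed, as in the project setup.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("ping from rank 0", dest=1, tag=11)   # blocking send to peer
    print(comm.recv(source=1, tag=22))              # wait for the reply
elif rank == 1:
    msg = comm.recv(source=0, tag=11)
    comm.send(f"rank 1 received: {msg!r}", dest=0, tag=22)
```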
616

SF-SACK: A Smooth Friendly TCP Protocol for Streaming Multimedia Applications

Bakthavachalu, Sivakumar 16 April 2004 (has links)
Voice over IP and video applications continue to increase the amount of traffic over the Internet. These applications utilize the UDP protocol because TCP is not suitable for streaming: the flow and congestion control mechanisms of TCP can change the connection's transmission rate too drastically, affecting the user-perceived quality of the transmission. Also, TCP provides a level of reliability that may waste network resources, retransmitting packets that have no value. On the other hand, the use of end-to-end flow and congestion control mechanisms for streaming applications has been acknowledged as an important measure to ease or eliminate the unfairness problem that exists when TCP and UDP share the same congested bottleneck link. Both router-based and end-to-end solutions have been proposed to solve this problem. This thesis introduces a new end-to-end protocol based on TCP SACK, called SF-SACK, that promises to be smooth enough for streaming applications while implementing the flow and congestion control mechanisms of TCP. Through simulations, it is shown that in terms of smoothness SF-SACK is considerably better than TCP SACK and only slightly worse than TFRC. Regarding friendliness, SF-SACK is not completely fair to TCP but is considerably fairer than UDP. Furthermore, if SF-SACK is used by both streaming and data-oriented applications, complete fairness is achieved. In addition, SF-SACK only needs sender-side modifications and is simpler than TFRC.
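The smoothness comparison in the thesis comes from simulations; as a rough illustration of what "smoothness" can mean quantitatively, the sketch below scores a throughput trace by its coefficient of variation, a common choice though not necessarily the thesis's exact metric, using invented per-interval numbers.

```python
import statistics

def smoothness_cov(samples):
    """Coefficient of variation of per-interval throughput samples:
    lower values indicate a smoother sending rate."""
    return statistics.pstdev(samples) / statistics.mean(samples)

# Invented per-second throughput traces in Mbit/s.
tcp_sack_like = [1.0, 2.0, 0.5, 2.2, 0.6, 2.1]   # sawtooth from rate halving on loss
smoother      = [1.3, 1.4, 1.2, 1.5, 1.3, 1.4]   # the behaviour SF-SACK/TFRC aim for
print(round(smoothness_cov(tcp_sack_like), 3), round(smoothness_cov(smoother), 3))
```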
617

Studies in agent based IP traffic congestion management in diffserv networks

Sankaranarayanan, Suresh January 2006 (has links)
The motivation for this research was to develop a rule-based traffic management scheme for DiffServ networks with a view to introducing QoS (Quality of Service). This required defining rules for congestion management/control based on the type and nature of the IP traffic encountered, and then constructing and storing these rules so they can later be accessed for application and enforcement. We first developed the required rule base and then developed software-based mobile agents, using Java RMI, to access these rules for application and enforcement. These mobile agents act as smart traffic managers at nodes/routers in the computer-based communication network and manage congestion.
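The thesis does not reproduce its rule base here, so the following is only a hypothetical shape such rules could take: a per-class table (the DSCP class names are standard, but the thresholds and actions are invented) that an agent at a router might consult when a queue builds up.

```python
# Hypothetical per-class congestion rules; thresholds and actions are
# invented for illustration only.
RULES = {
    "EF":   {"max_queue_pkts": 20,  "on_breach": "drop_newest"},
    "AF11": {"max_queue_pkts": 100, "on_breach": "mark_ecn"},
    "BE":   {"max_queue_pkts": 200, "on_breach": "drop_tail"},
}

def congestion_action(dscp_class, queue_len):
    """Return the action an agent would enforce for this class and queue depth."""
    rule = RULES.get(dscp_class, RULES["BE"])
    return rule["on_breach"] if queue_len > rule["max_queue_pkts"] else "enqueue"

print(congestion_action("EF", 35), congestion_action("AF11", 35))
```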
618

RPX - a system for extending the IPv4 address range

Rattananon, Sanchai, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
In recent times, the imminent lack of public IPv4 addresses has attracted the attention of both the research community and industry. The cellular industry has decided to combat this problem by using IPv6 for all new terminals. However, the success of 3G network deployment will depend on the services offered to end users. Currently, almost all services reside in the IPv4 address space, making them inaccessible to users in IPv6 networks. Thus, an intermediate translation mechanism is required. Previous studies on network address translation methods have shown that Realm Base Kluge Address Heuristic IP (REBEKAH-IP) supports all types of services that can be offered to IPv6 hosts from the public IPv4-based Internet, and provides excellent scalability. However, the method suffers from an ambiguity problem which may lead to call blocking. This thesis presents an improvement to the REBEKAH-IP scheme in which this side effect is removed, creating a robust and fully scalable system. The improvement can be divided into two major tasks: a full investigation of the scalability of addressing, and improvements to the REBEKAH-IP scheme that allow it to support important features such as ICMP and IP mobility. To address the first task, a method called REBEKAH-IP with Port Extension (RPX) is introduced. RPX extends the original REBEKAH-IP scheme to incorporate centralised management of both IP addresses and port numbers. This method overcomes the ambiguity problem and improves scalability. We propose a priority queue algorithm to further increase scalability. Finally, we present extensive simulation results on the practical scalability of RPX with different traffic compositions, to provide a guideline of the expected scalability in large-scale networks. The second task concerns enabling IP-based communication. Firstly, we propose an ICMP translation mechanism which allows the RPX server to support important end-to-end control functions. Secondly, we extend the RPX scheme with a mobility support scheme based on Mobile IP. In addition, we have augmented Mobile IP with a new tunneling mechanism called IP-in-FQDN tunneling. The mechanism allows for unique mapping despite the sharing of IP addresses, while maintaining the scalability of RPX. We examine the viability of our design through our experimental implementation.
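To make the address-plus-port idea concrete, here is a toy allocator in the spirit of RPX (illustrative only; addresses use documentation prefixes, and the thesis's actual data structures are not reproduced): because the server hands out an (IPv4 address, port) pair per session, one public address can serve many IPv6 hosts without the ambiguity of plain address sharing.

```python
class RpxStylePool:
    """Toy central pool that assigns (public IPv4 address, port) pairs to
    IPv6 hosts, so addresses are shared but every binding stays unambiguous."""

    def __init__(self, public_addrs, ports):
        self.free = [(a, p) for a in public_addrs for p in ports]
        self.by_host = {}                      # IPv6 host -> (addr, port)

    def allocate(self, ipv6_host):
        if not self.free:
            raise RuntimeError("pool exhausted: the session would be blocked")
        self.by_host[ipv6_host] = self.free.pop()
        return self.by_host[ipv6_host]

    def release(self, ipv6_host):
        self.free.append(self.by_host.pop(ipv6_host))

pool = RpxStylePool(["203.0.113.1"], ports=range(40000, 40003))
print(pool.allocate("2001:db8::a"))    # same public address ...
print(pool.allocate("2001:db8::b"))    # ... distinguished by port
```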
619

Fuzzy logic based robust control of queue management and optimal treatment of traffic over TCP/IP networks

Li, Zhi January 2005 (has links)
Improving network performance in terms of efficiency, fairness in bandwidth allocation, and system stability has been a research issue for decades. Current Internet traffic control maintains sophistication in end TCPs but simplicity in routers. In each router, incoming packets queue up in a buffer for transmission until the buffer is full, after which arriving packets are dropped. This router queue management strategy is referred to as Drop Tail. End TCPs eventually detect packet losses and slow down their sending rates to ease congestion in the network; this way, the aggregate sending rate converges to the network capacity. In the past, Drop Tail has been adopted in most Internet routers due to its simplicity of implementation and its practicability under light traffic loads. Under heavy traffic loads, however, Drop Tail causes not only high loss rates and low network throughput but also long packet delays and lengthy congestion conditions. To address these problems, active queue management (AQM) has been proposed, with the idea of proactively and selectively dropping packets before an output buffer is full. The essence of AQM is to drop packets in such a way that the congestion avoidance strategy of TCP works most effectively. Significant efforts in developing AQM have been made since random early detection (RED), the first prominent AQM scheme other than Drop Tail, was introduced in 1993. Although various AQM schemes also tend to improve fairness in bandwidth among flows, the vulnerability of short-lived flows persists due to the conservative nature of TCP. It has been revealed that short-lived flows account for a relatively small percentage of bytes but a large number of flows, and from the user's point of view there is an expectation of timely delivery of short-lived flows.

Our approach is to apply artificial intelligence technologies, particularly fuzzy logic (FL), to address these two issues: an effective AQM scheme, and preferential treatment for short-lived flows. Inspired by the success of FL in the robust control of nonlinear complex systems, our hypothesis is that the Internet is one of the most complex systems and that FL can be applied to it.

First, state-of-the-art AQM schemes outperform Drop Tail, but their performance is not consistent under different network scenarios. Research reveals that this inconsistency is due to the selection of congestion indicators. Most existing AQM schemes rely on queue length, input rate, and extreme events occurring in the routers, such as a full or empty queue. This drawback might be overcome by introducing an indicator which takes account of not only input traffic but also queue occupancy for early congestion notification. The congestion indicator chosen in this research is the traffic load factor. The traffic load factor is dimensionless and thus independent of link capacity, and it is easy to use in more complex networks where different traffic classes coexist. It is a descriptive measure of the complex communication network and is well suited for use in FL control theory. Based on this indicator, AQM using FL, or FLAQM, is explored and two FLAQM algorithms are proposed.

Second, a mice and elephants (ME) strategy is proposed to address the vulnerability of short-lived flows. The idea behind ME is to treat short-lived flows preferentially over bulk flows.
ME's operational location is chosen at user premises gateways, where surplus processing resources are available compared with other places. By giving absolute priority to short-lived flows, both short- and long-lived flows can benefit. One problem with ME is starvation of elephants, or long-lived flows. This issue is addressed by dynamically adjusting the threshold that distinguishes between mice and elephants, with the guarantee that a minimum capacity is maintained for elephants. The method used to dynamically adjust the threshold is to apply FL: FLAQM is deployed to control the elephant queue while taking into account the capacity used by mice packets. In addition, flow states in an ME router are periodically updated to maintain the stored data. The application of the traffic load factor for early congestion notification and the ME strategy have been evaluated via extensive experimental simulations under a range of traffic load conditions. The results show that the two proposed FLAQM algorithms outperform some well-known AQM schemes in all the investigated network circumstances, in terms of both user-centric and network-centric measures. The ME strategy, with FLAQM used to control long-lived flow queues, improves not only the performance of short-lived flows but also the overall performance of the network without disadvantaging long-lived flows.
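As a much-simplified illustration of a load-factor-driven AQM (the thesis's fuzzy membership functions and rule base are not reproduced; the formulas, thresholds, and numbers below are invented), the sketch combines input rate and queue occupancy into a dimensionless load value and maps it to a drop probability with a crisp ramp where FLAQM would use fuzzy inference.

```python
def traffic_load_factor(arrival_bps, queue_bytes, capacity_bps, interval_s):
    """Dimensionless load indicator combining input rate with queue backlog
    (a simplified stand-in for the traffic load factor used as the
    congestion indicator in this research)."""
    backlog_bps = queue_bytes * 8 / interval_s   # rate needed to drain the backlog
    return (arrival_bps + backlog_bps) / capacity_bps

def drop_probability(load, knee=0.9, cliff=1.2):
    """Crisp ramp standing in for fuzzy inference: no early drops below the
    knee, certain drop at or above the cliff, linear in between."""
    if load <= knee:
        return 0.0
    if load >= cliff:
        return 1.0
    return (load - knee) / (cliff - knee)

z = traffic_load_factor(9.5e6, queue_bytes=30_000, capacity_bps=10e6, interval_s=0.1)
print(round(z, 2), round(drop_probability(z), 2))   # ~1.19 -> drop with p ~ 0.97
```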
620

Congestion Removal in the Next Generation Internet

Suryasaputra, Robert, rsuryasaputra@gmail.com January 2007 (has links)
The ongoing development of new and demanding Internet applications requires the Internet to deliver service levels that are significantly better than the best-effort service it currently provides and was built for. These improved service levels include guaranteed delays, jitter and bandwidth. Through extensive research into Quality of Service and Differentiated Services (DiffServ), it has become possible to provide guaranteed services; however, this turns out to be inadequate without the application of Traffic Engineering methodologies and principles. Traffic Engineering is an integral part of network operation. Its major goal is to deliver the best performance from an existing service provider's network resources and, at the same time, to enhance customers' view of network performance. In this thesis, several different traffic engineering methods for optimising the operation of native IP networks and of IP networks employing MPLS are proposed. A feature of these new methods is their fast run times, which opens the way to their application in an online traffic engineering environment. For native IP networks running shortest-path-based routing protocols, we show that an LP-based optimisation built on the well-known multi-commodity flow problem can be effective in removing network congestion. Having realised that Internet service providers are now moving towards migrating their networks to MPLS, we have also formulated optimisation methods to traffic-engineer MPLS networks by selecting suitable routing paths and utilising the explicit routing feature of MPLS. Although MPLS is capable of delivering traffic engineering across different classes of traffic, network operators still prefer to rely on the proven and simple IP-based routing protocols for best-effort traffic and only use MPLS to route traffic requiring special forwarding treatment. Based on this fact, we propose a method that optimises the routing patterns applicable to different classes of traffic based on their bandwidth requirements. A traffic engineering comparison study that evaluates the performance of a neural network-based method for MPLS networks and an LP-based weight-setting approach for shortest-path-based networks has been performed using the well-known open-source network simulator ns2. The comparative evaluation is based upon the packet loss probability. The final chapter of the thesis describes the software development of a network management application called OptiFlow, which integrates techniques described in earlier chapters, including the LP-based weight-setting optimisation methodology; it also uses traffic matrix estimation techniques that provide the input required by the weight-setting models that have been devised. The motivation for developing OptiFlow was to provide a prototype set of tools that meet the congestion management needs of networking industries (ISPs and telecommunications companies - telcos).
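To give a flavour of the LP view of congestion removal (a toy instance only: the three-node topology, capacities and single demand are invented, and the thesis's formulations, which also tie flows back to shortest-path weights, are richer), the sketch below minimises the maximum link utilisation for one demand split over two candidate paths using scipy.

```python
from scipy.optimize import linprog

# Variables: x1 = flow on direct path A-B, x2 = flow on path A-C-B, t = max utilisation.
# Invented capacities: A-B = 20, A-C = 15, C-B = 10; demand A->B = 10 units.
c = [0.0, 0.0, 1.0]                      # minimise t
A_ub = [
    [1 / 20, 0.0, -1.0],                 # utilisation of link A-B <= t
    [0.0, 1 / 15, -1.0],                 # utilisation of link A-C <= t
    [0.0, 1 / 10, -1.0],                 # utilisation of link C-B <= t
]
b_ub = [0.0, 0.0, 0.0]
A_eq = [[1.0, 1.0, 0.0]]                 # the two path flows carry the full demand
b_eq = [10.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
x1, x2, t = res.x
print(round(x1, 2), round(x2, 2), round(t, 3))   # ~6.67, ~3.33, max utilisation ~0.333
```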
