  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
391

Multiple Scan Trees Synthesis for Test Time/Data and Routing Length Reduction under Output Constraint

Hung, Yu-Chen 29 July 2009 (has links)
A synthesis methodology for multiple scan trees that simultaneously considers output pin limitation, scan chain routing length, test application time, and test data compression rate is proposed in this thesis. Multiple scan trees, also known as a scan forest, greatly reduce test data volume and test application time in SOC testing. However, previous research on scan tree synthesis rarely considered issues such as routing length and output port limitation, and hence created scan trees with a large number of scan output ports and excessively long routing paths. The proposed algorithm effectively reduces test time, test data volume, and routing length under the output port constraint. As a result, no output compressors are required, which significantly reduces the hardware overhead.
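As a rough, hedged illustration of the general idea behind scan-tree synthesis (not the algorithm proposed in this thesis): scan cells whose test values never conflict across the pattern set can share one tree branch and therefore one stimulus bit per pattern. The Python sketch below groups cells greedily by pairwise compatibility over a hypothetical test-pattern matrix with don't-cares ('X'); names and data are illustrative only.

# Illustrative sketch: greedy grouping of scan cells into compatibility classes.
# Two cells are compatible if, in every test pattern, their specified bits agree
# (a don't-care 'X' is compatible with anything). Each class can then be fed by
# one scan-tree branch, so a single serial input bit drives the whole group.

def compatible(col_a, col_b):
    """Columns (one value per pattern) never require conflicting bits."""
    return all(a == 'X' or b == 'X' or a == b for a, b in zip(col_a, col_b))

def group_scan_cells(patterns):
    """patterns: list of strings, one per test pattern, one char ('0','1','X') per cell."""
    num_cells = len(patterns[0])
    columns = [[p[i] for p in patterns] for i in range(num_cells)]
    groups = []          # each group is a list of cell indices sharing a branch
    for cell in range(num_cells):
        for group in groups:
            if all(compatible(columns[cell], columns[other]) for other in group):
                group.append(cell)
                break
        else:
            groups.append([cell])
    return groups

# Hypothetical 4-pattern, 6-cell example: fewer groups means fewer distinct
# stimulus bits per pattern, hence less test data and shorter test time.
example = ["10XX01", "X10X01", "1X0101", "10X10X"]
print(group_scan_cells(example))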
392

Lifenet: a flexible ad hoc networking solution for transient environments

Mehendale, Hrushikesh Sanjay 18 November 2011 (has links)
In the wake of major disasters, the failure of existing communications infrastructure and the subsequent lack of an effective communication solution result in increased risks, inefficiencies, damage and casualties. Currently available options such as satellite communication are expensive and have limited functionality. A robust communication solution should be affordable, easy to deploy, require little infrastructure, consume little power and facilitate Internet access. Researchers have long proposed the use of ad hoc wireless networks for such scenarios. However, such networks have so far failed to create any impact, primarily because they are unable to handle network transience and have usability constraints such as static topologies and dependence on specific platforms. LifeNet is a WiFi-based ad hoc data communication solution designed for use in highly transient environments. After presenting the motivation, design principles and key insights from prior literature, the dissertation introduces a new routing metric called Reachability and a new routing protocol based on it, called Flexible Routing. Roughly speaking, reachability measures the end-to-end multi-path probability that a packet transmitted by a source reaches its final destination. Using experimental results, it is shown that even with high transience, the reachability metric (1) accurately captures the effects of transience, (2) provides a compact and eventually consistent global network view at individual nodes, (3) is easy to calculate and maintain, and (4) captures availability. Flexible Routing trades throughput for availability and fault-tolerance and ensures successful packet delivery under varying degrees of transience. With the intent of deploying LifeNet in the field, we have been continuously interacting with field partners, one of which is the Tata Institute of Social Sciences, India, and have iteratively refined LifeNet based on their feedback. I conclude the thesis with lessons learned from our field trips so far and deployment plans for the near future.
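The abstract does not specify how reachability is computed; as a hedged illustration of the underlying quantity (the multi-path probability that a packet from a source reaches a destination), the Python sketch below estimates it by Monte Carlo sampling over a graph whose per-link delivery probabilities are assumed inputs. The topology and names are hypothetical, and this is not LifeNet's actual algorithm.

# Illustrative sketch: Monte Carlo estimate of the multi-path probability that a
# packet from src reaches dst, given per-link delivery probabilities. Each trial
# samples which links are up, then checks whether any path connects src to dst.
import random

def estimate_reachability(links, src, dst, trials=10_000):
    """links: dict mapping (node_a, node_b) -> delivery probability in [0, 1]."""
    successes = 0
    for _ in range(trials):
        # Sample the links that "work" in this trial.
        up = [(a, b) for (a, b), p in links.items() if random.random() < p]
        # Graph search over the sampled links (treated as undirected).
        adjacency = {}
        for a, b in up:
            adjacency.setdefault(a, set()).add(b)
            adjacency.setdefault(b, set()).add(a)
        frontier, seen = [src], {src}
        while frontier:
            node = frontier.pop()
            if node == dst:
                successes += 1
                break
            for nxt in adjacency.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return successes / trials

# Hypothetical 4-node topology with lossy links.
topology = {("A", "B"): 0.9, ("B", "D"): 0.6, ("A", "C"): 0.7, ("C", "D"): 0.8}
print(round(estimate_reachability(topology, "A", "D"), 3))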
393

Mobility-based Routing Overhead Management in Reconfigurable Wireless Ad hoc Networks / Ein mobilitätsbasiertes Routing-Overhead-Management für rekonfigurierbar drahtlose ad-hoc-netzwerke

Gikaru, Wilfred Githuka 30 October 2004 (has links) (PDF)
Routing overheads are the non-data packets whose role is the establishment and maintenance of routes for data packets, as well as neighbourhood discovery and maintenance. They have to be broadcast in the network, either through flooding or other techniques that ensure a path exists before data packets can be sent to various destinations. They can be sent reactively or periodically to neighbours so as to keep nodes updated on their neighbourhoods. While we cannot do without these overhead packets, they occupy much of the limited bandwidth available in wireless networks. In a reconfigurable wireless ad hoc network scenario, these packets have an even greater negative effect, as links need to be confirmed more frequently than in traditional networks, mainly because of the unpredictable behaviour of ad hoc networks. We therefore need suitable algorithms that manage these overheads so as to give data packets more access to the wireless medium, save node energy for a longer network lifetime, and increase efficiency and scalability. Various protocols have been suggested in this research area, but they mostly address routing overheads for particular protocols, leading to a lack of standardisation and inapplicability to other protocol classes. In this dissertation, ways of keeping the routing overheads low are investigated. The issue is addressed both at node and network levels, with the common goal of improving the efficiency and performance of ad hoc networks without committing to a particular class of routing protocol. At the node level, a method hereby referred to as "link availability forecast", which minimises the routing overheads used for neighbourhood maintenance, is derived. The targeted packets are those broadcast periodically (e.g. hello messages). The basic idea of this method is the collection of mobility parameters from the neighbours and the forecasting of these parameters into the future. Using these parameters in simple calculations helps in identifying link availabilities between nodes participating in the maintenance of the network backbone. At the network level, various approaches have been suggested. The first approach is the cone flooding method, which broadcasts route request messages through a predetermined cone-shaped region. This region is determined through computation using the last known mobility parameters of the destination. Another approach is what is hereby referred to as the "destination search reverse zone method". In this method, a node keeps routes to destinations for a long time and uses these routes for tracing the destination. The destination then initiates a route search in a reverse manner, whereby the source selects the best route for the next delivery. A modification to this method is for the source node to determine the zone of route search and define the boundaries within which the packet should be broadcast. The latter method has been used for simulation purposes. The protocol used for verification of the improvements offered by the schemes was AODV. The link availability forecast scheme implemented on AODV was labelled AODV_LA, the network-level implementation was labelled AODV_RO, and a combination of the two schemes was labelled AODV_LARO.
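As a hedged sketch of the "link availability forecast" idea (collect neighbours' mobility parameters and predict whether a link will still exist), the Python snippet below extrapolates two nodes' positions from their last reported position and velocity and estimates how long they remain within radio range. The constant-velocity model, parameter names, and range value are assumptions for illustration, not the dissertation's exact formulation.

# Illustrative sketch (assumed constant-velocity model): predict how long a link
# between two moving nodes stays available, i.e. how long their distance remains
# below the radio range R. Hello messages could then be scheduled just before the
# predicted expiry instead of at a fixed period, cutting maintenance overhead.
import math

def link_availability(p1, v1, p2, v2, radio_range):
    """p*, v*: (x, y) position and velocity of each node. Returns the remaining
    link lifetime in the velocities' time unit, 0.0 if already out of range,
    or math.inf if the nodes never drift apart under this model."""
    px, py = p1[0] - p2[0], p1[1] - p2[1]          # relative position
    vx, vy = v1[0] - v2[0], v1[1] - v2[1]          # relative velocity
    if px * px + py * py > radio_range ** 2:
        return 0.0                                  # link is already broken
    a = vx * vx + vy * vy
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - radio_range ** 2
    if a == 0:
        return math.inf                             # no relative motion
    disc = b * b - 4 * a * c
    t = (-b + math.sqrt(disc)) / (2 * a)            # time at which distance == R
    return max(t, 0.0)

# Hypothetical example: two nodes 50 m apart, moving away from each other.
print(round(link_availability((0, 0), (1, 0), (50, 0), (-1, 0), 250), 1))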
394

Dienstgüte in Wireless Mesh Networks / Quality of Service in Wireless Mesh Networks

Herms, André January 2009 (has links)
Also presented as doctoral dissertation, University of Magdeburg, 2009.
395

Improving geographic routing with neighbor sectoring

Jin, Jingren. Lim, Alvin S. January 2007 (has links)
Thesis--Auburn University, 2007. / Abstract. Includes bibliographical references (p. 44-46).
396

The Lusus protocol

Morton, Daniel H. January 2005 (has links)
Thesis (M.S.)--University of Hawaii at Manoa, 2005. / Includes bibliographical references (leaves 84-86). Also available via World Wide Web.
397

Connectionless approach--a localized scheme to mobile ad hoc networks

Ho, Yao Hua. January 2009 (has links)
Thesis (Ph.D.)--University of Central Florida, 2009. / Adviser: Kien A. Hua. Includes bibliographical references (p. 131-138).
398

Interdomain Traffic Engineering and Faster Restoration in Optical Networks

Muchanga, Americo Francisco January 2006 (has links)
Internet traffic has surpassed voice traffic and now dominates transmission networks. The Internet Protocol (IP) is being used to encapsulate various kinds of services. These new services have different requirements than the initial type of traffic carried by the Internet and IP. Interactive services such as voice and video require paths that can guarantee some level of bandwidth, minimum delay and jitter. In addition, service providers need to be able to improve the performance of their networks by steering traffic along less congested links or paths, thus balancing the load in a uniform way as a mechanism to provide differentiated service quality. This needs to be provided not only within their domains but also along paths that might traverse more than one domain. For this to be possible, changes have been proposed, and some are being applied, to provide quality of service (QoS) and traffic engineering (TE) within and between domains. Because data networks now carry critical data, and new technologies enable providers to carry huge amounts of traffic, it is important to have mechanisms that safeguard against failures that can render the network unavailable. In this thesis we propose and develop mechanisms to enable interdomain traffic engineering as well as to speed up the restoration time in optical transport networks. We propose a mechanism, called abstracted path information, that enables peering entities to exchange just enough information to engage in QoS and TE operations without divulging all the information about the internal design of the network. We also extend BGP to carry the abstracted information. Our simulations show that BGP could still deliver the same performance with the abstracted information. In this thesis we also develop a method of classifying failures of links or paths. To improve the restoration time, we propose that common failures be classified and assigned error type numbers, and we develop a mechanism for interlayer communication and faster processing of the signalling messages that carry notification signals. Additionally, we develop a mechanism for exchanging the failure information between layers through the use of service primitives; in this way we can speed up the restoration process. Finally, we simulate the developed mechanism for a 24-node Pan-American optical transport network.
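As a hedged illustration of what "abstracted path information" could look like (the abstract does not give the exact encoding), the sketch below condenses an intra-domain path into a small QoS summary — available bandwidth as the minimum over its links and delay as the sum — which is the kind of aggregate a domain could advertise to a peer, e.g. in an extended BGP attribute, without revealing its internal topology. The field names and values are assumptions, not the thesis' actual scheme.

# Illustrative sketch: summarise an internal path as a compact QoS tuple that can
# be advertised to a neighbouring domain without exposing topology details.
from dataclasses import dataclass

@dataclass
class Link:
    delay_ms: float
    available_mbps: float

@dataclass
class AbstractPathInfo:
    delay_ms: float          # additive metric: sum over links
    available_mbps: float    # concave metric: bottleneck (minimum) over links
    hop_count: int

def abstract_path(links):
    """Collapse an ordered list of internal links into one advertisable summary."""
    return AbstractPathInfo(
        delay_ms=sum(l.delay_ms for l in links),
        available_mbps=min(l.available_mbps for l in links),
        hop_count=len(links),
    )

# Hypothetical internal path of three links: the peer only ever sees the summary.
internal_path = [Link(2.0, 1000.0), Link(5.5, 400.0), Link(1.5, 2500.0)]
print(abstract_path(internal_path))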
399

Physical design methodologies for monolithic 3D ICs

Panth, Shreepad Amar 08 June 2015 (has links)
The objective of this research is to develop physical design methodologies for monolithic 3D ICs and use them to evaluate the improvements in the power-performance envelope offered over 2D ICs. In addition, design-for-test (DfT) techniques essential for the adoption of shorter-term through-silicon-via (TSV) based 3D ICs are explored. Testing of TSV-based 3D ICs is one of the last challenges facing their commercialization. First, a pre-bond testable 3D scan chain construction technique is developed. Next, a transition-delay-fault test architecture is presented, along with a study on how to mitigate IR-drop. Finally, to facilitate partitioning, a quick and accurate framework for test-TSV estimation is developed. Block-level monolithic 3D ICs will be the first to emerge, as significant IP can be reused. However, no physical design flows exist for them, and hence a monolithic 3D floorplanning framework is developed. Next, inter-tier performance differences that arise from the not-yet-mature fabrication process are investigated and modeled. Finally, an inter-tier performance-difference aware floorplanner is presented, and it is demonstrated that high-quality 3D floorplans are achievable even under these inter-tier differences. Monolithic 3D offers sufficient integration density to place individual gates in three dimensions and connect them together. However, no tools or techniques exist that can take advantage of this integration density. Therefore, a gate-level framework that leverages existing 2D IC tools is presented. This framework also models routing congestion and produces results that minimize it. Next, the framework is extended to commercial 2D IC tools, so that steps such as timing optimization and clock tree synthesis can be applied. Finally, a voltage-drop-aware partitioning technique is presented that can alleviate IR-drop issues without any impact on the performance or maximum operating temperature of the chip.
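As a rough, hedged illustration of the tier-partitioning step that gate-level 3D flows revolve around (not the framework developed in this thesis), the Python sketch below assigns gates to two tiers with a naive greedy rule: keep tier areas within an assumed balance bound while preferring the tier that already holds most of a gate's neighbours, which tends to limit inter-tier connections. The gate names, netlist, and balance parameter are hypothetical.

# Illustrative sketch (naive heuristic): assign gates to two tiers of a monolithic
# 3D stack, balancing area while pulling each gate toward the tier that already
# holds most of its neighbours, so fewer nets have to cross between tiers.

def partition_two_tiers(areas, nets, balance=0.6):
    """areas: {gate: area}; nets: list of gate-name lists; returns ({gate: tier}, cut)."""
    total = sum(areas.values())
    tier_of, tier_area = {}, [0.0, 0.0]
    # neighbours[g] = gates sharing at least one net with g
    neighbours = {g: set() for g in areas}
    for net in nets:
        for g in net:
            neighbours[g].update(x for x in net if x != g)
    # Place larger gates first so area balancing has room to correct itself.
    for gate in sorted(areas, key=areas.get, reverse=True):
        pull = [sum(1 for n in neighbours[gate] if tier_of.get(n) == t) for t in (0, 1)]
        prefer = 0 if pull[0] >= pull[1] else 1
        # Respect the area-balance bound; otherwise fall back to the other tier.
        if tier_area[prefer] + areas[gate] > balance * total:
            prefer = 1 - prefer
        tier_of[gate] = prefer
        tier_area[prefer] += areas[gate]
    cut = sum(1 for net in nets if len({tier_of[g] for g in net}) > 1)
    return tier_of, cut

gate_areas = {"g1": 4.0, "g2": 3.0, "g3": 2.0, "g4": 2.0, "g5": 1.0}
netlist = [["g1", "g2"], ["g2", "g3", "g4"], ["g4", "g5"], ["g1", "g5"]]
print(partition_two_tiers(gate_areas, netlist))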
400

Optimization of kitting process : A case study of Dynapac Compaction Equipment AB

Hantoft, Jonas January 2015 (has links)
A case study has been done at Dynapac Compaction Equipment AB in Karlskrona in order to improve the internal flow of the production. The "Supermarket Storage", an adjoining storage area that feeds material to the lean production in the "Z-line" assembly line with the help of kitting, was chosen as the focus of the optimization of the internal flow. Also, due to the limited academic research about kitting, it was decided to focus the research on the kitting process and identify how to optimize it. The purpose of the research is to determine optimization methods for a kitting process and fill the gap in the subject field about kitting optimization. Given the research time limit, the focus was only on the kitting process in the Supermarket Storage, and no optimization could change the storage's layout. This resulted in three research questions that are investigated in the thesis: (1) Which common approaches exist when it comes to optimizing a kitting process? (2) What is the result of each optimization method in terms of time? (3) When should an optimization method be used, compared to the other methods tested in this research? In order to answer these questions, a needfinding process was used to identify the kitting process's current problems and the needs of the employees. From this, three optimization methods were identified and selected to optimize the kitting process: optimization of routing, optimization of family grouping, and optimization through an electronic system. The optimization of routing focused on the route that the kitters travel, and the optimization of family grouping focused on the article distribution in the Supermarket Storage, where each kitting operation's articles should be stored in the same zone. Finally, the optimization through an electronic system investigated the possibility of utilizing a pick-to-scan system in the kitting process. Each optimization was implemented in a separate field experiment in order to identify how it affected the kitting process. The results showed that each optimization improved the time efficiency of the kitting process, with the electronic system having the biggest impact. Some other results were also observed during the experiments. The route optimization improved the learning curve of the kitting process, and the family grouping optimization decreased the bottlenecks in the kitting process. The electronic system optimization also brought new benefits that resulted in a profit of 2.5 times the cost of the system; these benefits include the removal of unneeded processes, quality control of the kitting process, and the gathering of statistics that can be used to improve the process in the future. These results imply that all three optimization methods can be used to improve the time efficiency of a kitting process in a similar storage layout. The routing optimization should be used in a kitting operation with a high rotation of new kitters. The family grouping should be used in a kitting process with bottlenecks and a poorly organized article distribution. Ultimately, the electronic system optimization should be used in a kitting process that has unneeded processes and a need for the new tools that the electronic system can provide.
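As a hedged sketch of what the routing optimization amounts to in practice (the abstract does not spell out an algorithm), the Python snippet below orders a kit's pick locations with a simple nearest-neighbour heuristic, measuring travel with rectilinear, aisle-like distances. The coordinates, article names, and distance model are assumptions for illustration, not Dynapac's actual method.

# Illustrative sketch (nearest-neighbour heuristic): order the pick locations of
# one kit so the kitter's walking distance is reduced. Distances are rectilinear,
# as a rough stand-in for aisle travel in the Supermarket Storage.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def kitting_route(start, pick_locations):
    """start: (x, y) of the kitting station; pick_locations: {article: (x, y)}.
    Returns the visiting order and the total travel distance back to the start."""
    remaining = dict(pick_locations)
    route, here, travelled = [], start, 0.0
    while remaining:
        article = min(remaining, key=lambda art: manhattan(here, remaining[art]))
        travelled += manhattan(here, remaining[article])
        here = remaining.pop(article)
        route.append(article)
    travelled += manhattan(here, start)      # return to the kitting station
    return route, travelled

# Hypothetical kit with five articles spread over the storage grid.
picks = {"bolt": (2, 8), "hose": (6, 1), "valve": (6, 7), "seal": (1, 3), "clip": (4, 4)}
print(kitting_route((0, 0), picks))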
