21

Built-in proactive tuning for circuit aging and process variation resilience

Shah, Nimay Shamik 15 May 2009 (has links)
Circuits in nanometer VLSI technology experience significant variations: intrinsic process variations and variations brought about by transistor degradation, or aging. These variations generally manifest as yield loss or performance degradation over the operating lifetime. Although the degradation can be compensated by worst-case over-design, that approach incurs a considerable power overhead, which is undesirable in tightly power-constrained designs. Dynamic voltage scaling (DVS) is more power-efficient, but its coarse granularity makes it difficult to handle fine-grained variations. These factors have contributed to the growing interest in power-aware robust circuit design. In this thesis, we propose a Built-In Proactive Tuning (BIPT) system, a low-power typical-case design methodology based on dynamic prediction and prevention of possible circuit timing errors. BIPT uses a canary circuit to predict variation-induced performance degradation. The approach allows each circuit block to autonomously tune its performance according to its own degree of variation. The tuning is conducted offline, either at power-on or periodically. A test pattern generator is included to reduce the uncertainty in the aging prediction caused by differing input vectors. The BIPT system is validated through SPICE simulations on benchmark circuits with consideration of process variations and NBTI, a static-stress-based PMOS aging effect. The experimental results indicate that, to achieve the same variation resilience, the proposed BIPT system yields 33% power savings over the over-design approach in the case of process variations. For aging resilience, the proposed approach uses 40% less power than over-design and 30% less power than DVS with NBTI effect modeling.
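As a rough illustration of the kind of control loop such a canary-based scheme implies (not the thesis's actual implementation), the sketch below tunes a block's supply voltage offline against a toy canary model. All class and function names, the delay model, the clock budget, and the voltage steps are invented for the example.

```python
import random

V_STEPS = [0.90, 0.95, 1.00, 1.05, 1.10]     # candidate supply voltages (V), illustrative

class ToyBlock:
    """Toy stand-in for a circuit block with a canary (replica) critical path.
    The canary delay grows with aging/process variation and shrinks as Vdd rises."""
    def __init__(self, variation_ps):
        self.variation_ps = variation_ps     # extra delay from variation/aging (ps)
        self.vdd = V_STEPS[0]
    def set_supply(self, vdd):
        self.vdd = vdd
    def canary_meets_timing(self, pattern):
        base_delay = 1000 / self.vdd         # toy delay model (ps)
        jitter = pattern % 7                 # crude input-vector dependence
        return base_delay + self.variation_ps + jitter < 1080   # clock budget (ps)

def tune_block(block, patterns):
    """Offline tuning: pick the lowest Vdd at which the canary passes every pattern."""
    for vdd in V_STEPS:
        block.set_supply(vdd)
        if all(block.canary_meets_timing(p) for p in patterns):
            return vdd
    return V_STEPS[-1]                       # worst case: fall back to the highest voltage

if __name__ == "__main__":
    patterns = [random.randrange(1 << 16) for _ in range(64)]   # pseudo-random test vectors
    aged_block = ToyBlock(variation_ps=40)
    print("selected Vdd:", tune_block(aged_block, patterns))
```

Run at power-on or periodically, such a loop lets each block settle on its own operating point instead of a chip-wide worst-case margin, which is the intuition behind the reported power savings.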
22

Arm-P : Almost Reliable Multicast protocol

Jonsson, Fredrik January 2008 (has links)
Distribution of information across IP-based networks is today part of our everyday life. IP is the backbone of the Internet and most office networks. We use IP to access web pages, listen to radio, and create computation clusters. All these examples use bandwidth, and bandwidth is a limited resource. Many applications distribute the same information to multiple receivers, but in many cases the same information is sent to one receiver at a time, so multiple copies of the same information are sent, consuming bandwidth. What if the information could be broadcast to all the clients at the same time, similar to a television broadcast? TCP/IP provides some means to do that: UDP, for example, supports broadcasting. The problem with UDP is that it is not reliable; there is no guarantee that the information actually reaches the clients. This Bachelor thesis in Computer Science investigates the problems and solutions involved in achieving reliable distribution of fixed-size data sets over an unreliable multicast communication channel, such as UDP, in a LAN environment. The thesis defines a protocol (Almost Reliable Multicast Protocol, Arm-P) that provides maximum scalability for the delivery of versioned data sets and is designed to work in a LAN environment. A proof-of-concept application is implemented for testing purposes.
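For readers unfamiliar with the underlying primitive, the minimal sketch below shows plain UDP multicast send and receive on a LAN using Python's standard socket API. The group address and port are arbitrary examples, and nothing here reflects Arm-P's own packet format, which the abstract does not describe; the point is simply that delivery is best-effort, which is the gap Arm-P sets out to close.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5007   # example administratively scoped multicast group

def send(payload: bytes):
    """Send one datagram to every listener in the LAN group (no delivery guarantee)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep traffic on the local network
    s.sendto(payload, (GROUP, PORT))
    s.close()

def receive() -> bytes:
    """Join the group and block until one datagram arrives (it may never arrive if lost)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, _sender = s.recvfrom(65535)
    return data
```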
23

Active networks implementation issues

Ακρίδα, Κατερίνα 16 May 2007 (has links)
We call active those networks that process the contents (and not only the header) of the packets they forward. We focus on encapsulation-based active networks, where the code to be executed is carried inside the forwarded packets, as opposed to programmable switches. The Active Networks Encapsulation Protocol (ANEP) is presented in detail. Network applications are presented in which active networks improve application performance while reducing the demand for network resources. We then focus on "Active Reliable Multicast", a reliable multicast protocol that uses active switching to handle packet-loss recovery inside the network (NACK suppression, caching of repair packets, limited-scope multicast). Simulation results are provided that support the claim that, even with a small percentage of active nodes, an active network can substantially improve application performance while reducing bandwidth usage. We close with some final remarks and draw conclusions about the high installation and maintenance cost of active networks, and how this is weighed against their advantages with respect to application performance and network resource usage. / Active Networks are networks consisting (at least partially) of active nodes. A node is active if it does not only process a packet's header in order to route it, but is also able to evaluate and process the packet's payload. There are two kinds of active networks, depending on whether they are based on programmable switches or on capsules that bundle code together with the data. This dissertation focuses on the latter. The operational model of an active network of this kind comprises code execution models, network node management models and resource allocation policies. The Active Networks Encapsulation Protocol (ANEP) sets out the mechanism for declaring the platform required to evaluate the code encapsulated in the packet, as well as the nodes' behaviour when they do not support the required platform (drop the packet or simply forward it). This mechanism gives active networks the flexibility to operate even when only a very small percentage of the network's nodes is actually active. There are various situations where active networks can make better use of network resources. There are, for example, applications where different users might make similar, but not identical, requests, resulting in unnecessary bandwidth consumption when supported by conventional caching mechanisms. Active networks can provide smart caches that dynamically synthesize pages from data cached by previous requests. Another situation where active networks can improve network performance is network applications, such as tele-conferencing, that depend heavily on new network services. Active networks allow for the faster deployment of new network services that enhance network speed and security and rationalise bandwidth usage through, for example, multicast. Furthermore, active networks can support specialised applications, such as on-line auctions, with custom-made network services. It is important to note that when measuring network performance, one should focus on the network application's performance rather than per-packet metrics like throughput and latency.
In other words, intra-network processing might increase both packet size and latency, but it will improve the application's end-to-end performance and reduce total network load. The protocols for three innovative network applications are presented: active reliable multicast, auctions over the network, and remote sensor merging. For each of these we present network services that can be easily implemented and deployed in active networks to improve application performance. Finally, a more detailed analysis (by means of simulation) of an active reliable multicast protocol is presented. Active networks achieve two ends: on the one hand they push the idea of a network proxy to its logical conclusion by effectively turning all network elements into smart proxies that provide caching, filtering, NACK suppression and other services. On the other hand they carry out part of the computation inside the network, bringing it closer to the data sources. When the computation is, for example, data merging, this is beneficial to both the application and the network resources. This, however, can only be achieved at a cost. First of all in hardware, since network elements have to be upgraded from simple routers to full-blown computers capable of supporting Java and scripting languages. But also in latency, since packets have to undergo much more complex processing along the way than simple routing. In the applications presented here, the costs associated with active transport are counter-balanced by the advantages the latter has to offer to the application as well as to the network. The bet that active networks have to win in order to be widely accepted is to have enough active application protocols developed that their installation and maintenance cost can be justified.
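As an illustration of the capsule idea, the sketch below packs a payload behind a minimal ANEP-style header. The field layout and the forward-or-drop flag semantics follow my reading of the ANEP draft and should be treated as assumptions rather than a normative encoding.

```python
import struct

def build_anep_packet(type_id: int, payload: bytes, forward_if_unknown: bool = True) -> bytes:
    """Pack a minimal ANEP capsule: an 8-byte fixed header (no options) plus payload.

    Assumed layout (per the ANEP draft): Version (8 bits), Flags (8 bits, with the
    most significant bit telling a node that does not recognise the Type ID whether
    to forward or discard the packet), Type ID (16 bits), Header Length in 32-bit
    words (16 bits), Packet Length in octets (16 bits).
    """
    version = 1
    flags = 0x00 if forward_if_unknown else 0x80   # assumption: MSB set -> discard if unsupported
    header_words = 2                               # 8-byte header = two 32-bit words
    packet_len = 8 + len(payload)
    header = struct.pack("!BBHHH", version, flags, type_id, header_words, packet_len)
    return header + payload

# Example capsule whose payload would be evaluated by nodes supporting platform type 42:
capsule = build_anep_packet(42, b"mobile code or parameters go here")
```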
25

A Reliability Study on the Self-Report Behavioral Measure for Evaluating Therapeutic Outcome

Anderson, Sharon B. 01 May 1990 (has links)
Because the original reliability study of the Self-Report Behavioral Measure for Evaluating Therapeutic Outcomes (Behavioral Checklist) used college students as subjects, while the instrument's target population is a client population, a reliability study using clients in treatment as subjects was needed. The objective of this study was to assess the reliability of the Behavioral Checklist using a client population. A secondary objective was to revise the Behavioral Checklist, if necessary, so that it meets the standards of reliability for testing instruments. Three reliability measures were implemented in order to evaluate and revise the Behavioral Checklist. An item analysis and a split-half reliability analysis were conducted after one administration of Elliott's Behavioral Checklist to a client population in treatment at a mental health center. Since these methods measure internal consistency, the statistical analyses were used to revise the instrument, eliminating unnecessary items and simplifying instructions. The revised Behavioral Checklist was then administered to two subject populations (clients at a mental health center and people on probation) using the test-retest model for evaluating reliability. The test-retest analysis resulted in correlations of .889 for the subject population drawn from the mental health center and .899 for the subject population drawn from probationers. The current study did, in fact, improve the Behavioral Checklist, making it easier to administer, and demonstrated that it is a reliable instrument for use with a client population.
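For context, the two statistics the study relies on can be computed as follows: Pearson's r for test-retest reliability and the Spearman-Brown correction for split-half reliability. The numbers in the sketch are purely illustrative, not data from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation, the statistic behind the test-retest figures reported above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(odd_half, even_half):
    """Correlate the two test halves, then apply the Spearman-Brown correction
    to estimate the reliability of the full-length instrument."""
    r_half = pearson_r(odd_half, even_half)
    return 2 * r_half / (1 + r_half)

if __name__ == "__main__":
    test   = [12, 18, 25, 30, 22, 15, 28, 20]   # illustrative first-administration scores
    retest = [13, 17, 27, 29, 21, 16, 27, 22]   # illustrative second-administration scores
    print("test-retest r =", round(pearson_r(test, retest), 3))
```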
26

Scalable and Reliable File Transfer for Clusters Using Multicast.

Shukla, Hardik Dikpal 01 August 2002 (has links) (PDF)
A cluster is a group of computing resources that are connected by a single computer network and managed as a single system. Clusters potentially have three key advantages over workstations operated in isolation: fault tolerance, load balancing and support for distributed computing. Information sharing among the cluster's resources affects all phases of cluster administration. The thesis describes a new tool for distributing files within clusters. This tool, the Scalable and Reliable File Transfer Tool (SRFTT), uses Forward Error Correction (FEC) and multiple multicast channels to achieve efficient reliable file transfer on heterogeneous clusters. SRFTT achieves scalability by avoiding feedback from the receivers. Tests show that, for large files, retransmitting recovery information on multiple multicast channels gives significant performance gains compared to a single retransmission channel.
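The abstract does not specify which FEC code SRFTT uses, but the idea of recovery without receiver feedback can be illustrated with the simplest possible scheme, a single XOR parity packet per block. Everything below is an invented example, not SRFTT's encoding.

```python
def xor_parity(packets):
    """Compute one XOR parity packet over a block of equal-sized data packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_missing(received, parity):
    """Recover a single missing packet in the block by XOR-ing the survivors with the parity."""
    missing = bytearray(parity)
    for pkt in received:
        for i, byte in enumerate(pkt):
            missing[i] ^= byte
    return bytes(missing)

if __name__ == "__main__":
    block = [b"pktA....", b"pktB....", b"pktC...."]   # toy fixed-size packets
    parity = xor_parity(block)
    # Pretend the second packet was lost on the multicast channel:
    print(recover_missing([block[0], block[2]], parity))   # -> b'pktB....'
```

Because any receiver that lost at most one packet per block can repair itself locally, the sender never needs per-receiver acknowledgements, which is the scalability property the abstract highlights.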
27

Variance Validation for Post-Silicon Debugging in Network on Chip

Liu, Jiayong 21 October 2013 (has links)
No description available.
28

Expressed emotion in parents of children with early-onset mood disorders

Sisson, Dorothy Phillips 14 July 2005 (has links)
No description available.
29

Reliability Analysis and Robust Design of Metal Forming Process

Li, Bing 07 1900 (has links)
Metal forming processes have been widely applied in many industries. With severe competition in the market, a reliable and robust metal forming process becomes crucial for manufacturers to reduce product development time and cost. To supply engineers with an effective tool for the reliable and robust design of metal forming processes, this research investigates the application of traditional reliability theory and robust design methods in metal forming, with the ultimate goal of increasing quality and reducing cost in manufacturing.

A method to assess the probability of failure of the process, based on traditional reliability theory and the forming limit diagram (FLD), is presented. The forming limit of the material is chosen as the failure criterion for the reliability analysis.

A study of the prediction of forming limit diagrams using finite element simulation, without pre-defined geometrical or material imperfections, is presented. A 3D model of the dome test is used to predict the FLD for AA 5182-0. The FE-predicted forming limit diagram is in good agreement with the experimental one. The sources of uncertainty behind the scatter of forming limits are categorized and investigated to determine their effects on the shape of the FLD.

A novel method of improving the reliability of a forming process using the Taguchi method at the design stage is presented. The thickness-thinning ratio is chosen as the failure criterion for the reliability analysis of the process. A Taguchi orthogonal array is constructed to evaluate the effects of design parameters on the thinning ratio. A series of finite element simulations is conducted according to the established orthogonal array. Based on the simulation results, Taguchi S/N analysis and ANOVA are applied to identify the optimal combination of design parameters for minimum thinning ratio, minimum variance of the thinning ratio, and maximum expected process reliability.

A multi-objective optimization approach is presented that simultaneously maximizes the bulge ratio and minimizes the thinning ratio for a tube hydroforming process. The Taguchi method and finite element simulations are used to eliminate the parameters insignificant to the process quality performance. The significant parameters are then optimized to achieve the multiple optimization objectives. The optimization problem is solved using a goal attainment method. An illustrative case study shows the practicability of this approach and its ease of use by product designers and process engineers. / Thesis / Doctor of Philosophy (PhD)
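As a pointer to the arithmetic behind the Taguchi S/N analysis mentioned above, the sketch below computes the smaller-the-better signal-to-noise ratio that would apply to a response such as the thinning ratio. The response values are invented, not results from the thesis.

```python
from math import log10

def sn_smaller_the_better(values):
    """Taguchi signal-to-noise ratio for a 'smaller-the-better' response such as
    the thinning ratio: S/N = -10 * log10(mean(y^2)). A larger S/N is better."""
    return -10 * log10(sum(y * y for y in values) / len(values))

if __name__ == "__main__":
    # Illustrative thinning ratios from repeated FE runs of two hypothetical
    # design-parameter combinations in an orthogonal array.
    run_a = [0.18, 0.20, 0.19]
    run_b = [0.15, 0.16, 0.22]
    print("S/N for combination A:", round(sn_smaller_the_better(run_a), 2))
    print("S/N for combination B:", round(sn_smaller_the_better(run_b), 2))
```

Comparing S/N values across the rows of the orthogonal array is how a Taguchi study ranks parameter combinations by both mean response and variability at once.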
30

Towards Interpretable and Reliable Deep Neural Networks for Visual Intelligence

Xie, Ning 06 August 2020 (has links)
No description available.
