251

Analysis and verification of routing effects on signal integrity for high-speed digital stripline interconnects in multi-layer PCB designs / Analys och verifiering av ledardragningens betydelse för signalintegriteten hos digitala höghastighetsanslutningar på flerlagermönsterkort

Frejd, Andreas January 2010 (has links)
The way printed circuit board interconnects for high-speed digital signals are designed ultimately determines the performance that can be achieved for a given interface, and thus has a profound impact on whether the complete communication channel will comply with the desired standard specification. A good understanding of this behaviour, and methods for anticipating and verifying it through computer simulations and practical measurements, are therefore essential. Characterization of an interconnect can be performed either in the time domain or in the frequency domain. Regardless of the domain chosen, a method for unobtrusively connecting to the test object is required. After several different attempts, it was concluded that frequency-domain measurements using a vector network analyzer together with microwave probes provide the best measurement fidelity and ease of use; in turn, this method requires the test object to be prepared for the measurement. Advanced computer simulation software is available, but comes with the drawback that improved accuracy dramatically increases the requirements on computational resources. In general, these simulators can be configured to show good agreement with measurements at frequencies as high as ten gigahertz. For ideal interconnects, the simplest and thus fastest methods provide good enough accuracy; these simple methods should be complemented with results from more accurate simulations in cases where the physical structure is complex or otherwise deviates from the ideal. Several practical routing situations were found to introduce severe signal integrity issues. Through appropriate use of the methods developed in this thesis, these can be identified in the design process and thereby avoided.
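As a rough illustration of the frequency-domain characterization described above (not taken from the thesis), the sketch below converts a measured S21 response to insertion loss in dB and checks it against a purely hypothetical limit at the Nyquist frequency of a 5 Gb/s link; the data, the data rate, and the limit are all illustrative assumptions.

```python
import numpy as np

# Hypothetical VNA data: frequency points (Hz) and complex S21 of the interconnect.
# In practice these would be read from a Touchstone (.s2p) file.
freq = np.linspace(10e6, 10e9, 1000)
s21 = 0.98 * np.exp(-freq / 40e9) * np.exp(-1j * 2 * np.pi * freq * 1e-9)

insertion_loss_db = -20 * np.log10(np.abs(s21))   # positive dB = loss

# Example compliance check at the Nyquist frequency of a 5 Gb/s NRZ signal (2.5 GHz).
nyquist = 2.5e9
loss_at_nyquist = np.interp(nyquist, freq, insertion_loss_db)
limit_db = 3.0   # hypothetical mask value, not from any specific standard
print(f"IL at {nyquist / 1e9:.1f} GHz: {loss_at_nyquist:.2f} dB "
      f"({'PASS' if loss_at_nyquist <= limit_db else 'FAIL'})")
```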
252

Implementation and Evaluation of Single Filter Frequency Masking Narrow-Band High-Speed Recursive Digital Filters / Implementering och utvärdering av smalbandiga rekursiva digitala frekvensmaskningsfilter för hög hastighet med identiska subfilter

Mohsén, Mikael January 2003 (has links)
In this thesis, two versions of a single-filter frequency-masking narrow-band high-speed recursive digital filter structure, proposed in [1], have been implemented and evaluated with respect to maximal clock frequency, maximal sample frequency and power consumption. The structures were compared to a conventional filter structure, which was also implemented. The aim was to see whether the proposed structure offers benefits when implemented and synthesized, not only in theory. For the synthesis, standard cells from the AMS csx 0.35 μm CMOS technology were used.
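As a hedged illustration of the frequency-masking idea only: the thesis concerns recursive structures with identical subfilters, but the narrow-band principle can be sketched with FIR filters, where a model filter F(z) has every delay replaced by L delays to give F(z^L), and a masking filter removes the unwanted passband images.

```python
import numpy as np
from scipy import signal

L = 4                                                  # interpolation factor for the model filter
# Model (prototype) lowpass filter with a relatively wide transition band.
f_model = signal.firwin(numtaps=41, cutoff=0.20)       # cutoff in units of Nyquist
# Replace each delay by L delays: insert L-1 zeros between taps -> F(z^L).
f_periodic = np.zeros((len(f_model) - 1) * L + 1)
f_periodic[::L] = f_model
# Masking filter keeps the image nearest DC and suppresses the other images.
f_mask = signal.firwin(numtaps=41, cutoff=0.20 / L + 0.05)
# Overall narrow-band filter: cascade of the periodic model filter and the mask.
g = np.convolve(f_periodic, f_mask)

w, h = signal.freqz(g, worN=2048)
print("Approximate passband edge:", 0.20 / L, "x Nyquist; overall filter length:", len(g))
```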
253

Design and Implementation of a High Speed Cable-Based Planar Parallel Manipulator

Chan, Edmon January 2005 (has links)
Robotic automation has been a major driving force in modern industrial development, and high-speed pick-and-place operations find their place in many manufacturing applications. The goal of this project is to develop a class of high-speed robots with a planar workspace. The presented robots are intended for pick-and-place applications that have a relatively large workspace. In order to achieve this goal, the robots must be both stiff and light. The design strategies adopted in this study were expanded from the research work of Prof. Khajepour and Dr. Behzadipour. The fundamental principles are to utilize a parallel mechanism to enhance robot stiffness and cable construction to reduce moving inertia. A required condition for using cable construction is the ability to hold all cables under tension, which can only be achieved under certain conditions. The design phase of the study includes a static analysis of the robot manipulator that ensures certain mechanical components are always held under tension. This idea is extended to address dynamic situations where the manipulator velocity and acceleration are bounded. Two concept robot configurations, the 2D-Deltabot and the 2D-Betabot, are presented. Through a series of analyses based on the robot's inverse kinematic model, the dynamic properties of a robot can be computed in an effective manner. It was determined that the presented robots can achieve 4 g acceleration and a 4 m/s maximum speed within their 700 mm by 100 mm workspace, with a pair of 890 W rotary actuators controlling two degrees of freedom. The 2D-Deltabot was chosen for prototype development, and a kinematic calibration algorithm was developed to enhance the robot's accuracy. Experimental results showed that the 2D-Deltabot was capable of running at 81 cycles per minute on a 730 mm long pick-and-place path. Further experiments showed that the robot had a position accuracy of 0.62 mm and a position repeatability of 0.15 mm, despite a few manufacturing errors from the prototype fabrication.
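A rough back-of-envelope check of the quoted figures (not the thesis's trajectory planner): assuming a symmetric trapezoidal velocity profile limited to 4 g and 4 m/s over the 730 mm path, and ignoring pick/place dwell and settling time, the kinematic ceiling comes out somewhat above the measured 81 cycles per minute, which is consistent once dwell is added back in.

```python
# Kinematic ceiling for the 730 mm pick-and-place path under a symmetric
# trapezoidal velocity profile (illustrative check only).
G = 9.81
a_max = 4 * G          # 4 g acceleration limit
v_max = 4.0            # 4 m/s speed limit
path = 0.730           # 730 mm pick-and-place path

t_acc = v_max / a_max                  # time to reach top speed
d_acc = 0.5 * a_max * t_acc ** 2       # distance covered while accelerating
if 2 * d_acc >= path:                  # triangular profile: never reaches v_max
    t_travel = 2 * (path / a_max) ** 0.5
else:                                  # trapezoidal profile with a cruise segment
    t_travel = 2 * t_acc + (path - 2 * d_acc) / v_max

cycle_time = 2 * t_travel              # out and back, ignoring dwell at each end
print(f"one-way travel: {t_travel * 1000:.0f} ms, "
      f"ideal ceiling: {60 / cycle_time:.0f} cycles/min")
```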
254

High-Speed Clocking Deskewing Architecture

Li, David January 2007 (has links)
As CMOS technology continues to scale into the deep sub-micron regime, the demand for higher frequencies and higher levels of integration poses a significant challenge for the clock generation and distribution design of microprocessors. Hence, skew optimization schemes are necessary to limit clock inaccuracies to a small fraction of the clock period. In this thesis, a crude deskew buffer (CDB) is designed to facilitate an adaptive deskewing scheme that reduces the clock skew in an ASIC clock network under manufacturing process, supply voltage, and temperature (PVT) variations. The crude deskew buffer adopts a DLL structure and operates at a 1 GHz nominal clock frequency with an operating range of 800 MHz to 1.2 GHz. A phase resolution of approximately 91.6 ps is achieved for all simulation conditions, including various process corners and temperature variations. When the crude deskew buffer is applied to seven ASIC clock networks, each under various PVT variations, a maximum reduction of 67.1% in absolute maximum clock skew is achieved. Furthermore, the maximum phase difference between all the clock signals in the seven networks is reduced from 957.1 ps to 311.9 ps, a reduction of 67.4%. Overall, the CDB serves two important purposes in the proposed deskewing methodology: it reduces the absolute maximum clock skew and it synchronizes all the clock signals to within a certain limit for the fine deskewing scheme. By generating various clock phases, the CDB can also be useful in high-speed debugging and testing, where the clock duty cycle can be adjusted accordingly. Various positive and negative duty cycle values can be generated based on the phase resolution and the number of clock phases being "hot swapped". For a 500 ps duty cycle, the following values can be achieved for both the positive and negative duty cycle: 224 ps, 316 ps, 408 ps, 592 ps, 684 ps, and 776 ps.
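The quoted duty-cycle values follow directly from the phase resolution: stepping one pulse edge by one, two or three ~92 ps phase steps around the 500 ps nominal width reproduces the six numbers listed above (a back-of-envelope check, assuming each hot-swapped phase shifts the edge by one resolution step).

```python
# Back-of-envelope check (assumption: each "hot swapped" clock phase shifts one
# pulse edge by roughly one 92 ps resolution step around the 500 ps nominal width).
nominal_ps = 500
step_ps = 92          # approximate phase resolution reported for the CDB (91.6 ps)

widths = sorted(nominal_ps + k * step_ps for k in (-3, -2, -1, 1, 2, 3))
print(widths)   # [224, 316, 408, 592, 684, 776] -- matches the values quoted above
```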
255

Modelling of energy requirements by a narrow tillage tool

Ashrafi Zadeh, Seyed Reza 04 July 2006 (has links)
The amount of energy consumed during a tillage operation depends on three categories of parameters: (1) soil parameters, (2) tool parameters and (3) operating parameters. Although much research has been reported on the effects of these parameters on tillage energy, the exact number of contributing parameters and the contribution of each parameter to the total energy requirement have not been specified. A study with the objectives of identifying the energy-consuming components and determining the magnitude of each component for a vertical narrow tool, particularly at high operating speeds, was conducted in the soil bin facilities of the Department of Agricultural and Bioresource Engineering, University of Saskatchewan. Based on studies by Blumel (1986) and Kushwaha and Linke (1996), four main energy-consuming components were assumed: (1) energy requirements associated with soil-tool interactions; (2) energy requirements associated with interactions between tilled and fixed soil masses; (3) energy requirements associated with soil deformation; and (4) energy requirements associated with the acceleration of the tilled soil. The energy requirement of the vertical narrow tool was calculated from the draft requirement of the tool measured in the soil bin. The effects of three variables, moisture content, operating depth and forward speed, were studied at different levels: (1) moisture content at 14% and 20%; (2) depth at 40, 80, 120 and 160 mm; and (3) speed at 1, 8, 16 and 24 km h-1. The total energy requirement was divided into the four components using the procedure developed in the research. Regression equations for the different energy components were developed from the experimental data of two replicates and then validated with additional soil bin experiments conducted with the same soil and tool but under different operating conditions. The fitted energy-component models showed good correlation with the available experimental data for all four components. The coefficients of the regression equations showed that a first-order energy-moisture content relationship was best applicable to the energy components. For the acceleration component, the energy-depth relationship across all speed levels resulted in an equation containing both first- and second-order depth terms; in contrast, when only the two higher speed levels were used in the regression model, the relationship between acceleration energy and depth was second order in depth. When experimental data for the acceleration energy at the 8, 16 and 24 km h-1 speeds were used in the regression equation, the acceleration energy-speed relationship contained both linear and quadratic terms. It was concluded that, for the tool and soil conditions used in the experiments, the 8 km h-1 speed produced only a linear relationship, whereas the 16 and 24 km h-1 speeds produced a quadratic relationship; therefore, for all three speeds used in the experiments, both linear and quadratic relationships were obtained. Considering that the tool was operated at high speeds, this research is expected to contribute valuable experimental data to researchers working in the field of soil dynamics.
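A minimal sketch of the kind of regression described above, using made-up acceleration-energy values (the thesis's measurements are not reproduced here): the energy component is fitted against forward speed with both linear and quadratic terms.

```python
import numpy as np

# Illustrative only: speeds used in the experiments (km/h) and made-up acceleration
# energy values -- not the thesis's actual measurements.
speed = np.array([1.0, 8.0, 16.0, 24.0])
energy = np.array([0.4, 3.1, 9.8, 21.5])

# Least-squares fit of E = b0 + b1*v + b2*v^2 (linear and quadratic speed terms).
X = np.column_stack([np.ones_like(speed), speed, speed ** 2])
coeffs, *_ = np.linalg.lstsq(X, energy, rcond=None)
b0, b1, b2 = coeffs
print(f"E(v) ≈ {b0:.2f} + {b1:.2f}·v + {b2:.3f}·v²")
```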
256

High-speed coordination in groupware

Barjawi, Mutasem 18 November 2009 (has links)
Coordination is important in groupware because it helps users collaborate efficiently. However, groupware systems in which activities occur at a faster pace need faster coordination in order to keep up with the speed of the activity. Faster coordination is especially needed when actions are dependent on one another (i.e., they are tightly coupled) and when each user can see and interact with other users' actions as they occur (i.e., in real time). There is little information available about this type of fast coordination (also called high-speed coordination, or HSC) in groupware. In this thesis, I addressed this problem by providing a body of principles and information about high-speed coordination. This was achieved by creating a groupware game called RTChess and then conducting an exploratory evaluation in which high-speed coordination was investigated. The results of this evaluation show that there were small amounts of high-speed coordination in the game and that high-speed coordination was difficult to achieve. In addition, HSC was affected by five main characteristics of the groupware environment: user experience, level of awareness of the partners' interactions, communication between partners, number of dependencies that affect the users' interactions, and pace of activities in the system.
257

Network Data Streaming: Algorithms for Network Measurement and Monitoring

Kumar, Abhishek 18 November 2005 (has links)
With the emergence of computer networks as one of the primary modes of communication, and with their adoption for an increasingly wide range of applications, there is a growing need to understand and characterize the traffic they carry. The rise of large scale network attacks adds urgency to this need. However, the large size, high speed and increasing complexity of these networks imply that tracking and characterizing the traffic they carry is an increasingly difficult problem. Dealing with higher level aggregates, such as flows instead of packets, does not solve the problem because these aggregates tend to be quite numerous and exhibit dynamics of their own. In this thesis, we investigate a novel approach to deal with the immense amounts of data associated with problems in network measurement and monitoring. Building upon the paradigm of Data Streaming, which processes a large stream of data using a small working memory to answer a class of queries, we develop an architecture for Network Data Streaming that can accommodate additional constraints imposed in the context of network monitoring. Using this architecture, we design algorithms for monitoring properties of network traffic that have traditionally been considered too difficult to monitor at high speed network links and routers. Our first algorithm provides the ability to accurately estimate the size of individual flows. A second algorithm to estimate the distribution of flow sizes enables network operators to monitor anomalies in the traffic. Incorporating the use of packet sampling, we can extend the latter algorithm to estimate the flow size distribution of arbitrary subpopulations. Finally, we apply the tools of Network Data Streaming to the operation of packet sampling itself. Using the ability to efficiently estimate flow-statistics such as approximate per-flow size, we design a family of mechanisms where the sampling decision is guided by this knowledge. The individual solutions developed in this thesis share a common architectural theme, supporting the monitoring of highly dynamic populations. Integrating this with the traditional sampling based framework for network monitoring will enable a broad range of applications for accurate and comprehensive monitoring of network traffic.
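The abstract does not spell out the estimation algorithms; as a generic stand-in, a count-min sketch solves the same problem of approximating per-flow sizes in a small, fixed amount of memory, and gives the flavor of the streaming approach (this is not the thesis's specific algorithm).

```python
import hashlib

class CountMinSketch:
    """Small-memory estimator of per-flow packet counts (generic sketch,
    not the specific algorithm developed in the thesis)."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, flow_id, row):
        h = hashlib.blake2b(f"{row}:{flow_id}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def update(self, flow_id, count=1):
        for row in range(self.depth):
            self.table[row][self._index(flow_id, row)] += count

    def estimate(self, flow_id):
        # Never underestimates; hash collisions can only inflate the count.
        return min(self.table[row][self._index(flow_id, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for pkt_flow in ["10.0.0.1->10.0.0.2:80"] * 1000 + ["10.0.0.3->10.0.0.4:443"] * 5:
    cms.update(pkt_flow)
print(cms.estimate("10.0.0.1->10.0.0.2:80"))   # close to 1000
```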
258

Effects of Adaptive Discretization on Numerical Computation using Meshless Method with Live-object Handling Applications

Li, Qiang 07 March 2007 (has links)
The finite element method (FEM) has difficulty solving certain problems where an adaptive mesh is needed. Motivated by two engineering problems in a live-object handling project, this research focuses on a new computational method called the meshless method (MLM). This method is built upon the same theoretical framework as FEM but needs no mesh. Consequently, the computation becomes more stable and an adaptive computational scheme becomes easier to develop. In this research, we investigate practical issues related to the MLM and develop an adaptive algorithm to automatically insert additional nodes and improve computational accuracy. The study was conducted in the context of two engineering problems: magnetic field computation and large-deformation contact. First, we investigate the effect of two discretization methods (strong-form and weak-form) in MLM for solving linear magnetic field problems. Special techniques for handling the discontinuity boundary condition at material interfaces are proposed in both discretization methods to improve the computational accuracy. Next, we develop an adaptive computational scheme in MLM that comprises an error estimation algorithm, a nodal insertion scheme and a numerical integration scheme. As a more general approach, this method can automatically locate the large-error region around the material interface and insert nodes accordingly to reduce the error. We further extend the adaptive method to solve nonlinear large-deformation contact problems. With the ability to adaptively insert nodes during the computation, the developed method can start from fewer nodes in the initial computation and thus effectively improves computational efficiency. Engineering applications of the developed methods have been demonstrated on two practical problems. In the first, the MLM was used to simulate the dynamic response of a non-contact mechanical-magnetic actuator in order to optimize the actuator design. In the second, the contact between a flexible finger and a live poultry product was analyzed using MLM. These applications show that the developed method can be applied to a broad spectrum of engineering applications where an adaptive mesh is needed.
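A toy one-dimensional sketch of the adaptive idea (not the thesis's error estimator or integration scheme): estimate where the field changes sharply, for example near a material interface, and insert additional nodes only there.

```python
import numpy as np

def refine(nodes, values, tol):
    """Insert a midpoint node in every interval whose jump in the field value
    exceeds tol -- a crude stand-in for a proper meshless error estimator."""
    new_nodes = list(nodes)
    for a, b, ua, ub in zip(nodes[:-1], nodes[1:], values[:-1], values[1:]):
        if abs(ub - ua) > tol:
            new_nodes.append(0.5 * (a + b))
    return np.sort(np.array(new_nodes))

# Toy field with a sharp gradient near x = 0.5 (e.g. a material interface).
nodes = np.linspace(0.0, 1.0, 11)
field = lambda x: np.tanh(50 * (x - 0.5))

for _ in range(3):                       # a few adaptive passes
    nodes = refine(nodes, field(nodes), tol=0.2)
print(f"{len(nodes)} nodes after refinement, clustered around x = 0.5")
```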
259

High-Speed Imaging of Polymer Induced Fiber Flocculation

Hartley, William H. 22 March 2007 (has links)
This study presents quantitative results on how individual fiber length changes during fiber flocculation. Flocculation was induced by a cationic polyacrylamide (cPAM). A high-speed camera recorded 25-second video clips, which were image-analyzed to measure the fiber length and the amount of fiber in each sample. Prior to the flocculation process, fibers were fractionated into short and long fibers. Trials were conducted using the unfractionated fiber, the short fiber and the long fiber, and the short and long fibers were mixed in several trials to study the effect of fiber length. The concentration of cPAM was varied, as was the motor speed of the impeller (RPM). It was found that the average fiber length decreased more rapidly with increasing motor speed. Increasing the concentration of cPAM also led to a greater decrease in average fiber length. A key finding was that a plateau was reached where further increasing the amount of cPAM had no effect; hence, fibers below a critical length resisted flocculation even if the chemical dose or shear was increased. This critical length was related to the initial length of the fiber.
260

Design of One-Time Implantable SCS System SOC and Inter-chip Capacitance Coupling Circuit

Tseng, Shao-Bin 15 August 2011 (has links)
The thesis is composed of two topics: an SOC design for a one-time implantable spinal cord stimulation (SCS) system, and the design of an inter-chip capacitance coupling circuit. In the first topic, an SOC design using wireless power and data transmission techniques for the SCS system is presented. The proposed SOC can control 4 electrodes to generate different patterns of stimulation waves, and it has multiple modes to drive the whole SCS system. Notably, the SOC contains a novel ASK demodulator which reliably converts the ASK signals into digital signals. The SOC is implemented in a typical 0.18-μm 1P6M CMOS process, and the chip area is only 1.71 × 1.41 mm². Besides, the volume of the implantable SCS pulse generator utilizing this SOC is less than 24 cm³, and the power consumption is only 59.4 mW. In the second topic, a high-speed inter-chip capacitive coupling circuit is presented, through which digital signals can be transmitted and received between two chips. Notably, the transceivers are designed below the capacitors to reduce area. This is an advanced application for high-speed wafer testing and 3D IC communication. A prototype chip achieving 2 Gbps on silicon is presented, fabricated in a typical 0.18 μm 1P6M CMOS process; the chip area is 1045 × 894 μm² and the power consumption is only 21.47 mW. This capacitive coupling technique for high-speed digital circuits has great potential for the future.
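A software-level illustration of the ASK demodulation principle only (envelope detection followed by a comparator); it is not a model of the SOC's demodulator circuit, and all signal parameters are made up.

```python
import numpy as np

# Illustrative ASK demodulation: rectify, average over one carrier period, threshold.
fs, fc, bit_rate = 1_000_000, 100_000, 10_000          # Hz (made-up values)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
samples_per_bit = fs // bit_rate
t = np.arange(len(bits) * samples_per_bit) / fs
carrier = np.sin(2 * np.pi * fc * t)
ask = np.repeat(0.3 + 0.7 * bits, samples_per_bit) * carrier   # modulated signal

# Envelope detection: rectify, then average over one carrier period.
kernel = np.ones(fs // fc) / (fs // fc)
envelope = np.convolve(np.abs(ask), kernel, mode="same")
threshold = 0.5 * envelope.max()
recovered = envelope.reshape(len(bits), samples_per_bit).mean(axis=1) > threshold
print(recovered.astype(int))   # should reproduce the transmitted bits
```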
