About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

An investigation of techniques for partial scan and parity testable BIST design

Park, Sung Ju, 01 January 1992
One method of reducing the difficulty of test generation for sequential circuits is the use of full scan design. To overcome the large hardware overhead attendant on full scan design, the concept of partial scan design has emerged. With the sequential circuit modeled as a directed graph, much effort has been expended on removing a subset of the arcs or vertices representing flip-flops (FFs). First we describe an efficient algorithm for finding a minimum feedback arc set that breaks all cycles in directed graphs; a sequential ordering technique based on depth-first search and an efficient cut algorithm are discussed. We also introduce an efficient algorithm for finding a Minimum Feedback Vertex Set (MFVS) in directed graphs, based on the removal of essential cycles, in which no proper subset of vertices constitutes a cycle. Cycles whose length is greater than K are then removed, following the observation that the complexity of test generation for sequential circuits is often caused by lengthy cycles (long synchronization sequences). A new structure called a totally combinationalized structure is developed, which reduces the problems of test generation and fault simulation for sequential circuits to those for combinational circuits. This structure requires fewer scan FFs than full scan design while totally combinationalizing the sequential problem. In Built-In Self-Test (BIST), the FFs in the sequential circuit serve as dedicated test pattern generators and test response compressors. Most of the benchmark circuits are known to be parity-even; however, it is parity-odd circuits that are likely to detect most faults when a parity-bit checker is used as the test response compressor. After investigating parity-testable faults, a novel technique is described that imposes linear constraints among the primary inputs, changing most of the primary outputs to parity-odd while also compacting the test signals. It is shown that high fault coverage can be obtained by combining a multiple-input signature register (MISR) with the parity-bit checker.
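To make the cycle-breaking idea concrete, here is a minimal sketch of MFVS-style scan selection: greedily remove the flip-flop that lies on the most cycles until the circuit graph is acyclic. This is a generic illustration assuming the networkx library, not Park's algorithm, which works from essential cycles and cycles longer than K.

```python
# Greedy cycle-breaking for scan flip-flop selection (illustrative only).
import networkx as nx

def greedy_scan_selection(ff_graph: nx.DiGraph) -> set:
    """Return a set of flip-flops whose removal breaks all cycles."""
    g = ff_graph.copy()
    scan_ffs = set()
    while True:
        try:
            cycle = nx.find_cycle(g)          # one remaining cycle, as arcs
        except nx.NetworkXNoCycle:
            return scan_ffs                   # graph is acyclic: done
        # Pick the vertex on this cycle with the highest degree, a cheap
        # proxy for "appears in many cycles".
        victim = max({u for u, v in cycle},
                     key=lambda n: g.in_degree(n) + g.out_degree(n))
        scan_ffs.add(victim)
        g.remove_node(victim)

# Example: three flip-flops in a feedback loop plus one self-loop.
g = nx.DiGraph([("ff1", "ff2"), ("ff2", "ff3"), ("ff3", "ff1"), ("ff2", "ff2")])
print(greedy_scan_selection(g))               # {'ff2'} breaks both cycles
```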
22

Issues in large ultra-reliable system design

Suri, Neeraj, 01 January 1992
A primary design basis of ultra-reliable systems is the system synchronization framework. It provides time synchronization within the system and also forms the basis of the distributed agreement protocols that direct fault tolerance, redundancy management, scheduling, and error handling within the system. Traditional designs have focused on developing the theory and techniques for fully connected systems, and these basic principles do not directly extrapolate to large, non-fully connected designs. The designs have also ranged between two extremes: models in which all system faults are considered benign, and models in which all faults are considered malicious, whether for system operation, algorithm design, or reliability assessment. This dissertation considers the synchronization problem with a two-fold objective. First, it addresses the problem of fault classification based on fault manifestations and develops the theory for convergence-based synchronization functions. Next, a large, cluster-based, non-fully connected architectural model is presented and shown to be physically realizable without the dominating graph complexities associated with a similarly sized fully connected approach. For this model, and for general networks, an initial synchronization algorithm and a unique variation of the steady-state synchronization algorithm applicable to non-fully connected systems are presented. An important design consideration is accurate assessment of the system design; a novel reliability assessment approach is presented for the architecture models, in conjunction with the fault classification model, to obtain a precise and realistic fault-coverage reliability model. This work was funded in part under ONR N00014-91-C-0014, AFOSR 88-0205, and grants from Allied-Signal.
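As background for what a convergence-based synchronization function looks like, here is a sketch of one classic example, the fault-tolerant midpoint (as in the Welch-Lynch algorithm). It is shown as a well-known representative of the class, not as the dissertation's specific function.

```python
# Fault-tolerant midpoint convergence function: discard the f largest and
# f smallest remote clock readings and average the extremes of the rest.
# With n >= 3f + 1 clocks this bounds the influence of up to f arbitrary
# (malicious) faults on the corrected clock value.

def fault_tolerant_midpoint(readings: list[float], f: int) -> float:
    """Convergence function tolerating up to f faulty clock readings."""
    if len(readings) < 3 * f + 1:
        raise ValueError("need at least 3f + 1 readings to tolerate f faults")
    trimmed = sorted(readings)[f : len(readings) - f]
    return (trimmed[0] + trimmed[-1]) / 2.0

# Five clocks, one (reading 99.0) behaving maliciously; f = 1.
print(fault_tolerant_midpoint([10.1, 10.3, 9.9, 99.0, 10.0], f=1))  # 10.15
```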
23

Scheduling algorithms for mobile packet radio networks

Ahn, Hyeyeon, 01 January 1994
This dissertation presents new scheduling algorithms for sharing the common radio channel in mobile packet radio networks. The idea of sharing a common multiple-access channel among many users has long served as the basis of many communication systems; however, the unique characteristics of packet radio networks preclude the straightforward application of existing protocols tuned to other types of networks. For single-hop packet radio networks with arbitrary multiple-reception capacity, the first nontrivial scheduling algorithm with guaranteed, almost-optimal efficiency is developed, obtained without requiring simultaneous transmission/reception capability. We also propose a robust scheduling protocol for multihop networks that is unique in providing a topology-transparent solution to scheduled access in mobile radio networks with guaranteed packet delivery.
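For intuition about what "topology transparent" scheduling means, here is a sketch of one well-known construction from the literature (in the style of Chlamtac and Faragó), in which slot assignments come from polynomials over a finite field; it is offered as an illustration of the idea, not as the dissertation's protocol.

```python
# Topology-transparent slot assignment: node i gets a degree-k polynomial
# f_i over GF(p) and transmits in slot f_i(j) mod p of frame j. Two
# distinct polynomials agree on at most k of the p frames, so any pair of
# nodes collides in at most k frames per cycle regardless of the (unknown,
# changing) topology, which is what guarantees eventual packet delivery.

def slot_schedule(coeffs: list[int], p: int) -> list[int]:
    """Transmission slot in each of the p frames for one node's polynomial."""
    def poly(x: int) -> int:
        return sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p
    return [poly(frame) for frame in range(p)]

p = 5                                   # p slots per frame, p frames per cycle
node_a = slot_schedule([1, 2], p)       # f_a(x) = 1 + 2x  (degree k = 1)
node_b = slot_schedule([3, 2], p)       # f_b(x) = 3 + 2x
collisions = [j for j in range(p) if node_a[j] == node_b[j]]
print(node_a, node_b, collisions)       # parallel lines mod p: no collisions
```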
24

The evaluation of massively parallel array architectures

Herbordt, Martin Christopher, 01 January 1994
Although massively parallel arrays have been proposed since the 1950s and built since the 1960s, they have undergone very few systematic studies, and these have covered only a small fraction of the design space. The major problems limiting previous studies are the computational cost of detailed and accurate simulations; the programming cost of creating a test suite that compiles to the various target architectures and runs on them with comparable efficiency; and the diversity of the architectural design space, especially of the communication networks. These issues are addressed in the construction of ENPASSANT, an evaluation environment for massively parallel array architectures that obtains performance measures of candidate designs with respect to real program executions. We address the computational cost problem with a novel approach to trace-based simulation: code is run on an abstract virtual machine to generate a coarse-grained trace, which is then refined through a series of transformations (a process we call trace compilation) wherein greater resolution is obtained with respect to the details of the target architecture. We have found this technique to be one to two orders of magnitude faster than detailed simulation, while still retaining much of the model's accuracy. Furthermore, abstract machine traces must be regenerated for only a small fraction of the possible architectural parameter combinations. Using virtual machine emulation and trace compilation also addresses program portability by allowing the user to code in a single language with a single compiler, regardless of the target architecture; fairness and programmability are obtained with architecture-dependent application libraries for a small set of critical functions. The diverse design space is covered by parameterized models of the architectural components, which direct ENPASSANT in evaluating target machines on the basis of user specifications. ENPASSANT has already generated significant results, including the effects of varying the number of dimensions in k-ary n-cubes, trade-offs in register and cache design, and the usefulness of certain ALU features. Some surprising results are that bidirectional links provide a large advantage for k-ary n-cubes (where n = 2) in an essential application, and that smaller rather than larger cache block sizes are favored for most applications studied.
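The following toy sketch conveys the shape of trace compilation: a coarse abstract-machine trace is re-costed for a particular target. The event names ("GRID_SEND", "ALU_OP") and the cost model are hypothetical stand-ins for illustration, not ENPASSANT's actual trace format.

```python
# A coarse trace is generated once, then "compiled" (re-costed) cheaply
# for each candidate target architecture.

def compile_trace(trace, hop_cost: int, alu_cost: int) -> int:
    """Refine (event, arg) pairs into a total cycle count for one target."""
    total = 0
    for event, arg in trace:
        if event == "GRID_SEND":     # arg = hops travelled on the network
            total += arg * hop_cost
        elif event == "ALU_OP":      # arg = number of word operations
            total += arg * alu_cost
    return total

coarse_trace = [("ALU_OP", 4), ("GRID_SEND", 3), ("ALU_OP", 2)]
# The same trace is re-costed for two targets: the expensive trace
# generation on the abstract virtual machine happens only once.
print(compile_trace(coarse_trace, hop_cost=2, alu_cost=1))   # 12
print(compile_trace(coarse_trace, hop_cost=8, alu_cost=1))   # 30
```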
25

Space and time scheduling in multicomputers

Das Sharma, Debendra, 01 January 1995
Multicomputers are expensive resources that must be shared among multiple users to achieve the desired levels of throughput, utilization, and price-performance ratio. A multi-user environment may be provided by space partitioning, by time-sharing, or by a combination of the two. This dissertation presents fast and efficient techniques for improving the performance of multi-user multicomputers using both space partitioning and 'time-sharing on space partitions'. The techniques target mesh and hypercube multicomputers, the two popular topologies for commercial multicomputers. Space partitioning executes independent tasks on independent partitions and comprises two components: the processor allocator and the job sequencer. This dissertation presents fast and efficient strategies for processor allocation in meshes and hypercubes; simulation results demonstrate that the proposed strategies outperform all existing methods in response time while incurring the least time and space overhead. The strategies proposed for job sequencing are independent of the topology and significantly improve job turn-around times. They are also shown to make efficient use of the proposed processor allocation strategies, resulting in improved performance with low space and time overheads. Time-sharing on space partitions offers a promising way to improve user response times while providing interactive service. A subcube-level time-sharing strategy is proposed for hypercube multicomputers, which tries to allocate overlapping subcubes to incoming tasks, with each processor time-shared among the processes running on it. The strategy was implemented on a 64-node nCUBE 2E system running real applications and is shown to outperform both the pure space-partitioning and the pure time-sharing approaches.
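As background for the processor allocation problem, here is a sketch of the classic buddy strategy for subcube allocation in a hypercube, against which newer strategies like those in the dissertation are typically compared; it is not the dissertation's proposed strategy, and for brevity it handles allocation only.

```python
# Buddy-style subcube allocation in an n-dimensional hypercube: a request
# for a k-dimensional subcube (2**k processors) receives an aligned block
# of 2**k node ids, splitting larger free subcubes into buddies as needed.

class BuddySubcubeAllocator:
    def __init__(self, n: int):
        self.free = {n: [0]}             # dimension -> list of base addresses

    def allocate(self, k: int):
        """Return base address of a free k-dimensional subcube, or None."""
        for d in range(k, max(self.free) + 1):
            if self.free.get(d):
                base = self.free[d].pop()
                while d > k:             # split into buddies until size fits
                    d -= 1
                    self.free.setdefault(d, []).append(base + 2 ** d)
                return base
        return None

cube = BuddySubcubeAllocator(n=6)        # 64-node hypercube
print(cube.allocate(3))                  # 8-node subcube at base 0
print(cube.allocate(4))                  # 16-node subcube at base 16
```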
26

Model dependent inference of three-dimensional information from a sequence of two-dimensional images

Kumar, Rakesh, 01 January 1992
In order to navigate autonomously through a complex environment, a mobile robot requires sensory feedback, typically including the 3D motion and location of the robot and the 3D structure and motion of obstacles and other environmental features. The general problem considered in this thesis is how this 3D information may be obtained from a sequence of images generated by a camera mounted on a mobile robot. The first set of algorithms developed is for robust determination of the 3D pose of the robot from a matched set of model and image landmark features. Least-squares techniques for point and line tokens that minimize over rotation and translation simultaneously are developed and shown to be far superior to earlier techniques that solved for rotation first and then translation. However, least-squares techniques fail catastrophically when outliers (gross errors) are present in the match data; outliers arise frequently from incorrect correspondences or gross errors in the 3D model. Robust techniques for pose determination are therefore developed that handle data contaminated by fewer than 50% outliers. To make the model-based approach widely applicable, it must be possible to build the landmark models automatically. The approach adopted in this thesis is one of model extension and refinement: a partial model of the environment is assumed to exist, and this model is extended over a sequence of frames. As the experiments show, prior knowledge of the small partial model greatly enhances the robustness of the 3D structure computations, and errors in the initial 3D model are likewise refined over the sequence of frames. Finally, the sensitivity of pose determination and model extension to incorrect estimates of the camera parameters is analyzed. It is shown that for small-field-of-view systems, offsets in the image center do not significantly affect the computed location of the camera or of new 3D points in the world coordinate system, and that errors in the focal length significantly affect only the component of translation along the optical axis in the pose computation.
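The following sketch illustrates the "solve rotation and translation jointly, then discard gross outliers and re-solve" pattern the abstract describes. For brevity it aligns matched 3D point sets via the standard SVD (Kabsch) solution rather than the thesis's 3D-model-to-2D-image formulation, and the simple threshold loop stands in for its robust estimators.

```python
# Joint least-squares pose (R, t) for matched 3D points, with crude
# outlier rejection by residual thresholding. Illustrative only.
import numpy as np

def pose_lsq(model: np.ndarray, data: np.ndarray):
    """Best-fit R, t minimizing ||R @ model_i + t - data_i|| jointly."""
    mc, dc = model.mean(axis=0), data.mean(axis=0)
    u, _, vt = np.linalg.svd((data - dc).T @ (model - mc))
    d = np.sign(np.linalg.det(u @ vt))             # keep a proper rotation
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    return r, dc - r @ mc

def robust_pose(model, data, threshold=0.5, rounds=3):
    keep = np.ones(len(model), dtype=bool)
    for _ in range(rounds):
        r, t = pose_lsq(model[keep], data[keep])
        residuals = np.linalg.norm(model @ r.T + t - data, axis=1)
        keep = residuals < threshold               # drop gross outliers
    return r, t

rng = np.random.default_rng(0)
model = rng.normal(size=(30, 3))
data = model + np.array([1.0, 0.0, 0.0])   # true pose: pure translation
data[0] += 10.0                            # one gross outlier (bad match)
r, t = robust_pose(model, data)
print(np.round(t, 3))                      # approximately [1, 0, 0]
```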
27

Connectionist modeling and control of finite state environments

Bachrach, Jonathan Richard, 01 January 1992
A robot wanders around an unfamiliar environment, performing actions and observing their perceptual consequences. Its task is to construct a model of the environment that allows it to predict the outcome of its actions and to determine what action sequences take it to particular goal states. In any reasonably complex situation, a robot that aims to manipulate its environment toward some desired end requires an internal representation of the environment, because it can directly perceive only a small fraction of the global environmental state at any time; some portion of the rest must be stored internally if the robot is to act effectively. Rivest and Schapire (72, 74, 87) have studied this problem and designed a symbolic algorithm to strategically explore and infer the structure of finite-state environments. At the heart of this algorithm is a clever representation of the environment called an update graph. This dissertation presents a connectionist implementation of the update graph, using a highly specialized network architecture, together with a technique for using the connectionist update graph to guide the robot from an arbitrary starting state to a goal state. The technique requires a critic that associates the update graph's current state with the expected time to reach the goal state; at each time step, the robot selects the action that minimizes the output of the critic. The basic control-acquisition technique is demonstrated on several environments and then generalized to a navigation task in a more realistic setting: a high-dimensional continuous state space with real-valued actions and sensations, in which a simulated cylindrical robot with a sensor belt operates in a planar environment. The task is short-range homing in the presence of obstacles. Unlike many approaches to robot navigation, ours assumes no prior map of the environment; instead, the robot must use its limited sensory information to construct a model of it. A connectionist architecture is presented for building such a model, incorporating a large amount of a priori knowledge in the form of hard-wired networks, architectural constraints, and initial weights. This navigation example demonstrates the use of a large modular architecture on a difficult task.
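The control step the abstract describes reduces to a greedy minimization over actions of the critic's time-to-goal estimate. The sketch below shows that step with a toy one-dimensional world; `next_state` and `critic` are hypothetical stand-ins for the connectionist update graph and critic network.

```python
# Greedy critic-based action selection: take the action whose predicted
# next state the critic scores as closest (in time) to the goal.

def select_action(state, actions, next_state, critic):
    """Pick the action minimizing the critic's expected time-to-goal."""
    return min(actions, key=lambda a: critic(next_state(state, a)))

# Toy 1-D world: state is a position, the goal is 0, and the critic's
# estimate of time-to-goal is simply the distance to 0.
actions = (-1, +1)
next_state = lambda s, a: s + a
critic = lambda s: abs(s)                 # a perfect critic for this toy task
state = 3
while critic(state) > 0:
    state = next_state(state, select_action(state, actions, next_state, critic))
print(state)                              # 0: greedy descent reached the goal
```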
28

Direct adaptive control using artificial neural networks with parameter projection

Tzanzalian, Svetlozara Krasteva, 01 January 1994
This research is focused on the development of a stable nonlinear direct adaptive control algorithm. The nonlinearity is realized through a one-hidden-layer feedforward artificial neural network (ANN) with sigmoidal basis functions. The control scheme incorporates a linear adaptive controller acting in parallel with the ANN, so that if all nonlinear elements are set to zero, a linear controller results. The scheme is based on inverse identification, and an inherent problem with that scheme is the existence of multiple steady states of the controller. This issue is addressed, and sufficient conditions for stability and convergence of the algorithm are derived. In particular, it is shown that if (1) the identification algorithm converges so that the prediction error tends to zero, (2) the plant is stably invertible, and (3) parameter projection is applied to prevent singularities, then the tracking error converges to zero as well. The one-step-ahead nonlinear controller with the proposed parameter projection has been tested in simulation studies of a continuously stirred tank reactor (CSTR) and in a pilot distillation column experiment. In both studies the nonlinear adaptive controller showed performance superior to linear adaptive control, and applying parameter projection proved crucial to the successful operation of the control system. The validity of the approach was investigated further through a theoretical robustness analysis, and the method was generalized to non-invertible systems by applying ideas from predictive control; in particular, it is shown that if the prediction horizon is increased, the nonlinear adaptive controller can be applied to non-minimum-phase systems.
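Here is a minimal sketch of the parameter-projection idea: after each identification update, the parameter estimate is projected back onto an admissible set so the controller never hits a singularity. The projection set (keeping a leading gain estimate away from zero) and the normalized-gradient update law are generic illustrations, not the dissertation's exact equations.

```python
# Parameter projection in an adaptive identification loop (illustrative).
import numpy as np

def project(theta: np.ndarray, b_min: float = 0.1) -> np.ndarray:
    """Clip the leading 'gain' parameter away from zero (the singularity)."""
    theta = theta.copy()
    if abs(theta[0]) < b_min:
        theta[0] = b_min if theta[0] >= 0 else -b_min
    return theta

def adapt_step(theta, phi, error, gain=0.05):
    """One normalized-gradient identification step, followed by projection."""
    theta = theta + gain * error * phi / (1.0 + phi @ phi)
    return project(theta)

theta = np.array([0.02, 0.5])        # near-singular initial gain estimate
theta = adapt_step(theta, phi=np.array([1.0, -0.3]), error=-0.4)
print(theta)                          # leading parameter held at +/-0.1 or beyond
```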
29

Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

Matsypura, Dmytro, 01 January 2006
In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains comprising power generators, suppliers, transmission service providers, and consumer demands. In particular, I advocate applying finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the literature on the modeling, analysis, and solution of supply chain networks, including global supply chains in general and electric power supply chains in particular, in the following ways. It develops a theoretical framework for the modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale of supply chain analysis; the models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of projected dynamical systems theory by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior, in the supply chain modeling and analysis framework. This dissertation has strong practical implications: in an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management and pricing of electric power systems become increasingly pressing topics, with relevance not only for economic prosperity but also for national security. The dissertation addresses these topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. It is based heavily on the following coauthored papers: Nagurney, Cruz, and Matsypura (2003), Nagurney and Matsypura (2004, 2005, 2006), Matsypura and Nagurney (2005), and Matsypura, Nagurney, and Liu (2006).
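To illustrate the computational side of this methodology, the sketch below shows the standard Euler-type projection method for computing an equilibrium of a variational inequality (equivalently, a stationary point of a projected dynamical system): x_{k+1} = P_K(x_k - a_k F(x_k)). The box feasible set and toy cost function here are illustrative assumptions, not the dissertation's electric power model.

```python
# Projection method for a variational inequality VI(F, K) over a box K.
import numpy as np

def solve_vi(F, lower, upper, x0, step=0.1, iters=500):
    """Find x* in K with F(x*)'(x - x*) >= 0 for all x in K (box-shaped K)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.clip(x - step * F(x), lower, upper)   # project onto K
    return x

# Toy market: F(x) = x - demand pushes each flow toward its ideal level,
# but a capacity bound binds on the first link.
demand = np.array([3.0, 1.0])
F = lambda x: x - demand
x_star = solve_vi(F, lower=0.0, upper=np.array([2.0, 5.0]), x0=[0.0, 0.0])
print(x_star)                        # [2. 1.]: capped at capacity, else ideal
```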
30

On some modeling issues in high speed networks

Yan, Anlu, 01 January 1998
Communication networks have experienced tremendous growth in recent years, and it has become ever more challenging to design, control, and manage systems of such speed, size, and complexity. Traditional performance modeling tools include analysis, discrete-event simulation, and network emulation. In this dissertation, we propose a new approach to performance modeling that we call time-driven fluid simulation. This technique models the traffic going through the network as continuous fluid flows and the network nodes as fluid servers; time is discretized into fixed-length intervals, and the system is simulated by recursively computing the system state and advancing the simulation clock. When the interval length is large, each chunk of fluid processed within one interval may represent thousands of packets/cells. In addition, since the simulation is synchronized by the fixed time intervals, the simulator is easy to parallelize. These two factors together yield tremendous simulation speedups. For single-class fluid with probabilistic routing, we prove that the error introduced by discretizing a fluid model lies within a deterministic bound proportional to the discretization interval length and independent of the network size. For multi-class traffic passing through FIFO servers with class-based routing, we prove that the worst-case discretization error for any fluid flow may grow linearly with the number of hops the flow traverses, but is unaffected by the overall network size and by the discretization error of other classes. We further show via simulation that certain performance measures are in fact quite robust with respect to the discretization interval length and the path length of the flow (in hops), and that the discretization error is much smaller than the worst-case bound suggests. These results show that fluid simulation can be a useful performance modeling tool, filling the gap between discrete-event simulation and analysis. In this dissertation, we also apply another technique, rational approximation, to estimate cell loss probabilities for an ATM multiplexer fed by a self-similar process; this method complements the analysis and simulation techniques.
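The recursion at the heart of time-driven fluid simulation is simple enough to sketch directly: per interval, update each server's fluid buffer from inflow and service rates, recording any overflow as loss. The single-server step below is a minimal sketch of that discretized model; the rates, buffer size, and interval length are arbitrary illustrations.

```python
# One time-driven step for a single-class FIFO fluid server: over an
# interval of length dt, fluid arrives at rate `inflow`, drains at rate
# `service`, and anything exceeding the buffer capacity is lost.

def fluid_step(buffer: float, inflow: float, service: float,
               capacity: float, dt: float):
    """Return (new_buffer, fluid_lost) after one discretized interval."""
    buffer = buffer + (inflow - service) * dt
    lost = max(0.0, buffer - capacity)      # overflow during this interval
    return min(max(buffer, 0.0), capacity), lost

# A bursty source against a server of rate 1.0 with buffer capacity 5.0.
buffer, total_lost = 0.0, 0.0
for rate in [3.0, 3.0, 0.2, 4.0, 0.2, 3.5]:     # one inflow rate per interval
    buffer, lost = fluid_step(buffer, rate, service=1.0, capacity=5.0, dt=1.0)
    total_lost += lost
print(buffer, total_lost)                        # final backlog and total loss
```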
