191
Probabilistic Error Analysis Models for Nano-Domain VLSI Circuits. Lingasubramanian, Karthikeyan. 03 March 2010.
Technology scaling to nanometer levels has paved the way to realize multi-dimensional applications in a single product by increasing the density of electronic devices on integrated chips. This has naturally attracted a wide variety of industries, including medicine, communication, automotive, defense, and even household appliances, to use high-speed multi-functional computing machines. Apart from the advantages of these nano-domain computing devices, their use in safety-centric applications like implantable biomedical chips and automobile safety has immensely increased the need for comprehensive error analysis to enhance reliability. Moreover, these nano-electronic devices have an increased propensity for transient errors due to their extremely small device dimensions and low switching energy. The nature of these transient errors is more probabilistic than deterministic, and so requires probabilistic models for estimation and analysis. In this dissertation, we present comprehensive analytic studies of error behavior in nano-level digital logic circuits using probabilistic reliability models. The work comprises the design of exact probabilistic error models to compute the maximum error over the entire input space in a circuit-specific manner, the study of transient error behavior in sequential circuits, and error mitigation through redundancy techniques. The model for computing maximum error also provides the worst-case input vector, the one with the highest probability of generating an erroneous output, for any given logic circuit. The model for sequential logic measures the expected output error probability for a given probabilistic input space and accounts for both spatial dependencies and temporal correlations across the logic using a time-evolving causal network. For comprehensive error reduction in logic circuits, temporal, spatial, and hybrid redundancy models are implemented. The temporal redundancy model uses the triple temporal redundancy technique, which applies redundancy in the input space; the spatial redundancy model uses the cascaded triple modular redundancy technique, which applies redundancy in the intermediate signal space; and the hybrid redundancy technique encapsulates both schemes. All of these studies are performed on standard benchmark circuits from the ISCAS and MCNC suites. The experimental results cover the various aspects of error behavior in nano VLSI circuits and demonstrate the efficiency and versatility of the probabilistic error models.
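As a rough illustration of the kind of computation such a model performs, the sketch below (not the dissertation's exact model) enumerates the input space of a tiny circuit in which each gate output flips with a fixed probability and reports the worst-case input vector; the circuit y = (a AND b) OR c and the per-gate error probability EPS are assumptions made for the example.

```python
from itertools import product

EPS = 0.05  # assumed per-gate transient error probability

def flip(p_one, eps):
    """P(observed = 1) when P(ideal = 1) = p_one and the gate output flips with prob. eps."""
    return p_one * (1 - eps) + (1 - p_one) * eps

def output_error_probability(a, b, c, eps=EPS):
    correct = (a & b) | c                       # fault-free output of y = (a AND b) OR c
    p_and = flip(float(a & b), eps)             # AND gate output may flip
    p_or_ideal = 1.0 if c else p_and            # ideal OR value given its (random) inputs
    p_y = flip(p_or_ideal, eps)                 # OR gate output may flip as well
    return p_y if correct == 0 else 1.0 - p_y   # P(observed output != correct output)

worst = max(product((0, 1), repeat=3), key=lambda v: output_error_probability(*v))
print("worst-case input vector (a, b, c):", worst)
print("maximum output error probability :", round(output_error_probability(*worst), 4))
```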
192
Partial Circuit Replication for Masking and Detecting Soft Errors in SRAM-Based FPGAs. Keller, Andrew Mark. 08 December 2021.
Partial circuit replication is a soft error mitigation technique that uses redundant copies of a circuit to mask or detect the effects of soft errors. By masking or detecting the effects of soft errors on SRAM-based FPGAs, implemented circuits can be made more reliable. The technique is applied selectively, to only a portion of the components within a circuit, which lowers the cost of implementation. The objective of partial circuit replication is to provide maximal benefit at limited or minimized cost. Its greatest challenge is selecting which components within a circuit to replicate. This dissertation advances the state of the art in the effective use of partial circuit replication for masking and detecting soft errors in SRAM-based FPGAs. It provides a theoretical foundation in which the expected benefits and challenges of partial circuit replication can be understood. It proposes several new selection approaches for identifying the most beneficial areas of a circuit to replicate. These approaches are applied to two complex FPGA-based computer networking systems and another FPGA design. The effectiveness of the selection approaches is evaluated through fault injection and accelerated radiation testing. More benefit than expected is obtained through partial circuit replication when it is applied to critical components and sub-regions of the designs. In one example, an open-source computer networking design, partial circuit replication masks and detects approximately 70% of failures while replicating only 5% of circuit components, a benefit-cost ratio of 14.0.
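A hedged sketch of the benefit-cost reasoning behind such a selection approach is shown below; the component names, resource fractions, and failure shares are hypothetical and are not taken from the dissertation. It greedily replicates the components with the highest failure contribution per unit of resources and reports the resulting benefit-cost ratio (masked-failure fraction over replicated-resource fraction, e.g. 0.70 / 0.05 = 14.0 in the cited example).

```python
# Each tuple: (component, fraction of design resources, fraction of failures attributed to it).
components = [("ctrl_fsm", 0.02, 0.40), ("pkt_parser", 0.03, 0.30),
              ("dma_engine", 0.10, 0.15), ("datapath", 0.85, 0.15)]
budget = 0.05  # replicate at most 5% of the design's resources

selected, cost, benefit = [], 0.0, 0.0
# Greedily replicate the components with the highest failure share per unit of resources.
for name, area, failure_share in sorted(components, key=lambda c: c[2] / c[1], reverse=True):
    if cost + area <= budget:
        selected.append(name)
        cost += area
        benefit += failure_share

print("replicated components:", selected)
print("cost %.0f%% of design, failures masked/detected %.0f%%, benefit-cost ratio %.1f"
      % (100 * cost, 100 * benefit, benefit / cost))
```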
193
Spatio-Temporal Evolution of Rocky Desertification and Its Driving Forces in Karst Areas of Northwestern Guangxi, China. Yang, Qingqing; Wang, Kelin; Zhang, Chunhua; Yue, Yuemin; Tian, Richang; Fan, Feide. 01 September 2011.
Rocky desertification (RD) is a process of land degradation that often results in extensive soil erosion, bedrock exposure, and a considerable decrease in land productivity. The spatio-temporal evolution of RD not only reflects regional ecological and environmental changes but also directly impacts regional economic and social development. The study area, Hechi, is a typical karst peak-cluster depression area in southwest China. Remote sensing, geographic information systems (GIS), and statistical techniques were employed to examine the evolution of karst RD in northwestern Guangxi, including the identification of its driving forces. The results indicate that RD became most apparent between 1990 and 2005, when the areas of the various types of RD increased. Within the karst RD landscape, slight RD was identified as the matrix of the landscape, while potential RD had the largest patch sizes. Extremely strong RD, with the simplest shape, was the most influenced by human activities. Overall, the landscape evolved from fragmented to agglomerated within the 15-year timeframe. Land condition changes were categorized into five types: desertified, recovered, unchanged, worsened, and alleviated land. The largest turnover within the RD landscape was between slight and moderate RD. With regard to the driving forces, all RD types had been increasingly influenced by human activities (i.e., the stronger the RD, the stronger the intensity of human disturbance). The dominant impact factors of the RD landscape shifted from town influence and bare rock land in 1990 to bare rock and grassland in 2005. Moreover, the impacts of stony soil, mountainous proportion, and river density on RD increased over time, while those of the other factors decreased. The significant factors included human activities, land use, soil types, environmental geology, and topography; however, only the anthropogenic factors (human activities and land use) were identified as leading factors, whereas the others acted simply as constraining factors.
194
USING N-MODULAR REDUNDANCY WITH KALMAN FILTERS FOR UNDERWATER VEHICLE POSITION ESTIMATION. Enquist, Axel. January 2022.
Underwater navigation faces many problems with accurately estimating the absolute position of an underwater vehicle. Neither the Global Positioning System (GPS) nor Long Baseline (LBL) or Short Baseline (SBL) positioning can be used by a military vehicle operating under stealth, since these techniques require the vehicle to be in the vicinity of a nearby ship or to surface and raise its antenna. It must therefore rely on sensors such as a Doppler Velocity Log (DVL) and a compass to estimate its absolute position using dead reckoning or an Inertial Navigation System (INS). This thesis presents an alternative multiple-model Kalman filter (KF) to the existing Multiple Model Adaptive Estimator (MMAE) algorithm, using N-Modular Redundancy (NMR) to obtain a more accurate result than with a single KF. By analyzing how different numbers of filters and voter types affect the accuracy and precision of the velocity and heading estimates, the potential benefits and drawbacks of each solution can be identified. These benefits and drawbacks were also visually evaluated in a Matlab script that calculated coordinates from the velocity and heading provided by the speed sensors and compass, without the need to run the filtered states on the vehicle's navigation system. The results demonstrate the potential of using a multiple-model KF in the form of an NMR, shown both by the amount of noise reduction in the velocity states and by how the filters were used in a virtual navigation system created in Matlab.
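The sketch below illustrates the general NMR-with-Kalman-filters idea in a minimal form; the constant-velocity signal, noise levels, filter tunings, and median voter are assumptions made for the example rather than the thesis's implementation.

```python
import random
import statistics

class Kalman1D:
    """Scalar Kalman filter with a constant-state process model."""
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0
    def update(self, z):
        self.p += self.q                 # predict: state assumed constant, uncertainty grows by q
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

random.seed(1)
true_velocity = 2.0                                          # m/s, held constant for the sketch
filters = [Kalman1D(q, r=0.25) for q in (1e-4, 1e-3, 1e-2)]  # triple-modular redundancy of filters

for _ in range(50):
    z = true_velocity + random.gauss(0.0, 0.5)   # noisy DVL-like velocity measurement
    estimates = [f.update(z) for f in filters]
    voted = statistics.median(estimates)         # simple median voter over the redundant filters

print("individual estimates:", [round(e, 3) for e in estimates])
print("voted estimate      :", round(voted, 3))
```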
195
Task Oriented Simulation And Control Of A Wheelchair Mounted Robotic Arm. Farelo, Fabian. 05 November 2009.
The main objective of my research is to improve the control structure of the new Wheelchair Mounted Robotic Arm (WMRA) to include new algorithms for optimized task execution; that is, to make the WMRA a modular, task-oriented mobile manipulator. The main criteria to be optimized are the manner in which the wheelchair approaches a final target and the starting and final orientations of the wheelchair. This is a novel approach for non-holonomic wheeled manipulators that will help in autonomously executing complex activities of daily living (ADL) tasks.
The WMRA is a 9-degree-of-freedom system, which provides 3 degrees of kinematic redundancy. A single control structure is used to control the WMRA system, which gives the system much more flexibility. Combining mobility and manipulation expands the workspace available to the manipulator through the mobile base. This approach opens a broad field of applications, from maintenance and storage to rehabilitation robotics. The control structure is based on optimization algorithms that resolve redundancy according to several subtasks: maximizing the manipulability measure, minimizing the joint velocities (and hence the energy), and avoiding joint limits. This work utilizes redundancy to control two separate trajectories: a primary trajectory for the end-effector and an optimized secondary trajectory for the wheelchair. Even though this work presents results and an implementation on the WMRA system, the approach is extensible to many wheeled-base mobile manipulators in different types of applications.
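A minimal sketch of this style of redundancy resolution is given below; the planar 3R arm, gains, and joint-limit-avoidance criterion are illustrative assumptions and not the WMRA's 9-DoF model. The primary task drives the end-effector velocity through the Jacobian pseudoinverse, while the secondary criterion is projected into the null space so it does not disturb the primary motion.

```python
import numpy as np

def jacobian(q, l=(1.0, 1.0, 1.0)):
    """2x3 position Jacobian of a planar 3R arm (one redundant degree of freedom)."""
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(l[j] * np.sin(s[j]) for j in range(i, 3))
        J[1, i] = sum(l[j] * np.cos(s[j]) for j in range(i, 3))
    return J

q = np.array([0.3, -0.5, 0.8])                   # current joint angles [rad]
q_mid, q_range = np.zeros(3), np.full(3, np.pi)  # assumed joint-limit midpoints and ranges
x_dot = np.array([0.05, 0.02])                   # desired end-effector velocity (primary task)

J = jacobian(q)
J_pinv = np.linalg.pinv(J)
grad_w = -(q - q_mid) / q_range**2               # gradient of a joint-limit-avoidance criterion
k0 = 0.5                                         # secondary-task gain
# Primary task through the pseudoinverse; secondary task projected into the Jacobian null space.
q_dot = J_pinv @ x_dot + (np.eye(3) - J_pinv @ J) @ (k0 * grad_w)
print("joint velocity command:", np.round(q_dot, 4))
```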
WMRA usage was simulated in a virtual environment by developing a test setting for sensors and task performance. The different trajectories and tasks can be shown in a virtual world created not only for illustration purposes but also to provide training to users once the system is ready for use.
196
Strategies to Recover from Satellite Communication Failures. Lomotey, Charles. 01 January 2019.
In natural and manmade disasters, inadequate strategies to recover from satellite communication (SATCOM) failures can affect the ability of humanitarian organizations to provide timely assistance to the affected populations. This single case study explored strategies used by network administrators (NAs) to recover from SATCOM failures in humanitarian operations. The study population was NAs in Asia, the Middle East, Central Africa, East Africa, and West Africa. Data were collected from semistructured interviews with 9 NAs and an analysis of network statistics for their locations. The resource-based view was used as the conceptual framework for the study. Using inductive analysis, 3 themes emerged from coding and triangulation: redundancy of equipment, knowledge transfer, and the use of spare parts to service the SATCOM infrastructure. The findings showed that the organization's use of knowledge and collaboration among NAs and nontechnical staff improved its ability to recover from SATCOM failures. The implication of this study for social change is the reduced cost of satellite services due to more efficient use of bandwidth. These savings can be channeled into the purchase of vaccines and shelter and into improvements in the quality of water and sanitation for displaced persons in humanitarian disasters, improving the organization's delivery of humanitarian services to the affected populations.
197
Exploration of Information Processing Outcomes in 360-Degree Video. Holmes, Christine Margaret. January 2018.
No description available.
198
Fracture Critical Analysis Procedure for Pony Truss Bridges. Butler, Martin A. January 2018.
No description available.
199
On Fault Resilient Network-on-Chip for Many Core Systems. Moriam, Sadia. 24 May 2019.
Rapid scaling of transistor gate sizes has increased the density of on-chip integration and paved the way for heterogeneous many-core systems-on-chip, significantly improving the speed of on-chip processing. The design of the interconnection network of these complex systems is challenging, and the network-on-chip (NoC) is now the accepted scalable and bandwidth-efficient interconnect for multi-processor systems-on-chip (MPSoCs). However, the performance enhancements of technology scaling come at the cost of reliability, as on-chip components, particularly the network-on-chip, become increasingly prone to faults. In this thesis, we focus on approaches to deal with the errors caused by such faults. The results of these approaches are obtained not only via time-consuming cycle-accurate simulations but also through analytical approaches, allowing for faster yet accurate evaluations, especially for larger networks.
Redundancy is the general approach to dealing with faults, and its mode varies according to the type of fault. For the NoC, faults are classified as transient, intermittent, or permanent. Transient faults appear randomly for a few cycles and may be caused by particle radiation. Intermittent faults are similar to transient faults but differ in that they occur repeatedly at the same location, eventually leading to a permanent fault. Permanent faults, by definition, are caused by wires and transistors being permanently short or open. Generally, spatial redundancy, or the use of redundant components, is used for dealing with permanent faults. Temporal redundancy deals with failures by re-execution or by retransmission of data, while information redundancy adds redundant information to the data packets, allowing for error detection and correction. Temporal and information redundancy methods are useful when dealing with transient and intermittent faults.
In this dissertation, we begin with permanent faults in the NoC in the form of faulty links and routers. Our approach to spatial redundancy adds redundant links in the diagonal direction to the standard rectangular mesh topology, resulting in hexagonal and octagonal NoCs. In addition to redundant links, adaptive routing must be used to bypass faulty components. We develop novel fault-tolerant, deadlock-free adaptive routing algorithms for these topologies based on the turn model, without the use of virtual channels. Our results show that the hexagonal and octagonal NoCs can tolerate all 2-router and 3-router faults, respectively, while the mesh has been shown to tolerate all 1-router faults. To simplify the restricted-turn selection process for achieving deadlock freedom, we devised an approach based on the channel dependency matrix instead of Duato's state-of-the-art method of inspecting the channel dependency graph for cycles. The approach is general and can be used for the turn selection process of any regular topology.
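To make the channel-dependency idea concrete, the sketch below checks deadlock freedom by testing whether a Boolean channel dependency matrix is acyclic (no diagonal entry of any matrix power is nonzero); the 4-channel example matrix is made up for illustration and is not one of the thesis's topologies.

```python
import numpy as np

def deadlock_free(D):
    """True if the Boolean channel dependency matrix D (n x n) describes an acyclic graph."""
    n = D.shape[0]
    power = np.array(D, dtype=np.int64)
    for _ in range(n):
        if power.trace() > 0:   # a nonzero diagonal entry in D^k means a cycle of length k
            return False
        power = (power @ D > 0).astype(np.int64)
    return True

# 4 channels: dependencies 0 -> 1 -> 2 -> 3, with no turn closing the chain back to channel 0.
acyclic = np.array([[0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
cyclic = acyclic.copy()
cyclic[3, 0] = 1                # permitting the 3 -> 0 turn closes a dependency cycle

print(deadlock_free(acyclic))   # True: the turn restrictions rule out deadlock
print(deadlock_free(cyclic))    # False: a cyclic dependency exists, so deadlock is possible
```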
We further use algebraic manipulations of the channel dependency matrix to analytically assess the fault resilience of the adaptive routing algorithms under permanent faults. We present and validate this method for the 2D mesh and hexagonal NoC topologies, achieving very high accuracy with a maximum error of 1%. The approach is very general and allows for faster evaluations compared to the commonly used cycle-accurate simulations, whereas existing works usually assume a limited number of faults in order to assess network reliability analytically. We apply the approach to evaluate the fault resilience of larger NoCs, demonstrating its usefulness, especially in comparison with cycle-accurate simulations.
Finally, we concentrate on temporal and information redundancy techniques to deal with transient and intermittent faults in the router, which result in dropped and hence lost packets. Temporal redundancy is applied in the form of automatic repeat request (ARQ) and retransmission of lost packets. Information redundancy is applied through the generation and transmission of redundant linear combinations of packets, known as random linear network coding. We develop an analytic model for flexible evaluation of these approaches to determine network performance parameters such as residual error rates and the increased network load. The analytic model allows larger NoCs and different topologies to be evaluated and the advantage of network coding over uncoded transmission to be investigated.
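The sketch below gives a simplified flavor of such an analytic evaluation; the per-packet drop probability and the coding parameters are assumptions, and the random linear code is idealized as one in which any k of the n coded packets suffice to decode (a good approximation for large field sizes).

```python
from math import comb

def residual_loss(n, k, p_drop):
    """P(decoding fails) = P(fewer than k of the n coded packets arrive independently)."""
    p_arrive = 1.0 - p_drop
    return sum(comb(n, i) * p_arrive**i * p_drop**(n - i) for i in range(k))

p_drop = 0.05   # assumed per-packet drop probability caused by faulty routers
k, n = 4, 6     # 4 source packets encoded into 6 coded packets (50% added load)

print("uncoded, per-packet loss probability: %.3e" % p_drop)
print("coded (k=%d, n=%d) residual loss     : %.3e" % (k, n, residual_loss(n, k, p_drop)))
print("added network load                  : %.0f%%" % (100.0 * (n - k) / k))
```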
We further extend the work with a brief look at the problem of secure communication over the NoC. Assuming large heterogeneous MPSoCs with components from third parties, the communication is subject to active attacks in the form of packet modification and packet drops in the NoC routers. We devise approaches to resolve these issues and again formulate analytic models for their flexible and accurate evaluation, with a maximum estimation error of 7%.
200
Variable Speed Limits Control for Freeway Work Zone with Sensor Faults. Du, Shuming. January 2020.
Freeway work zones with lane closures can adversely affect mobility, safety, and sustainability. Capacity drop phenomena near work zone areas can further decrease work zone capacity and exacerbate traffic congestion. To mitigate the negative impacts caused by freeway work zones, many variable speed limits (VSL) control methods have been proposed to proactively regulate the traffic flow. However, a simple yet robust VSL controller that considers the nonlinearity induced by the associated capacity drop is still needed. Also, most existing studies of VSL control neglected the impacts of traffic sensor failures that commonly occur in transportation systems. Large deviations of traffic measurements caused by sensor faults can greatly affect the reliability of VSL controllers.
To address the aforementioned challenges, this research proposes a fault-tolerant VSL controller for a freeway work zone that takes sensor faults into account. A traffic flow model was developed to understand and describe the traffic dynamics near work zone areas. A VSL controller based on sliding mode control was then designed to generate dynamic speed limits in real time from traffic measurements. To achieve fault tolerance in the VSL control, analytical redundancy was exploited to develop an observer-based method for permanent sensor faults and an interacting multiple model with a pseudo-model set (IMMP) based method for recurrent sensor faults. The proposed system was evaluated under realistic freeway work zone conditions using the traffic simulator SUMO.
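As a simplified illustration of the observer-based analytical-redundancy idea, the sketch below predicts segment density from flow measurements with a conservation-law model and flags a density-sensor fault when the residual stays above a threshold for several consecutive steps; the model, thresholds, and injected bias are assumptions made for the example, not the thesis's design.

```python
dt, length = 10.0 / 3600.0, 0.5      # control step [h] and work-zone segment length [km]
threshold, persist = 8.0, 3          # residual threshold [veh/km] and required consecutive steps

rho_hat, bad_steps = 30.0, 0         # model-predicted density [veh/km] and fault counter
for step in range(60):
    q_in, q_out = 1500.0, 1450.0                 # measured inflow/outflow [veh/h], constant here
    rho_hat += dt / length * (q_in - q_out)      # conservation-law prediction of segment density
    bias = 25.0 if step >= 40 else 0.0           # injected sensor bias: a permanent fault appears
    rho_meas = rho_hat + bias                    # reading of the (possibly faulty) density sensor
    residual = abs(rho_meas - rho_hat)           # analytical redundancy: measurement vs. model
    bad_steps = bad_steps + 1 if residual > threshold else 0
    if bad_steps >= persist:
        print("permanent density-sensor fault flagged at step", step)
        break
```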
This research contributes to the body of knowledge by developing fault-tolerant VSL control for freeway work zones with reliable performance under permanent and recurrent sensor faults. With reliable sensor fault diagnosis, the fault-tolerant VSL controller can consistently reduce travel time, safety risks, emissions, and fuel consumption. Therefore, with a growing number of work zones due to aging road infrastructure and increasing demand, the proposed system offers broader impacts through congestion mitigation and consistent improvements in mobility, safety, and sustainability near work zones.
Thesis / Doctor of Philosophy (PhD)
Freeway work zones can increase congestion, with higher travel time, safety risk, emissions, and fuel consumption. This research aims to improve traffic conditions near work zones using a variable speed limits control system. By exploiting redundant traffic information, a variable speed limits control system that is insensitive to traffic sensor failures is presented. The proposed system was evaluated under realistic freeway work zone conditions in a simulation environment. The results show that the proposed system can reliably detect sensor failures and consistently provide improvements in mobility, safety, and sustainability despite the presence of traffic sensor failures.