161 |
Redundancy Resolution of Cable-Driven Parallel Manipulators. Agahi, Maryam, 27 September 2012
In this thesis, the redundancy resolution and failure analysis of Cable-Driven Parallel Manipulators (CDPMs) are investigated. A CDPM consists mainly of a Mobile Platform (MP) actuated by cables. Since cables can apply force only in tension, a fully controllable CDPM must be redundantly actuated (e.g., by using redundant cables, external force/moment, or gravity). In this research, the redundancy resolution of planar CDPMs is investigated at the kinematic and dynamic levels in order to improve manipulator safety, reliability and performance, e.g., by avoiding large cable tensions that may result in high impact forces and large MP velocities that may cause instability in the manipulator, or, conversely, by increasing the cable tensions and the stiffness for high-precision applications. The proposed approaches are utilized in trajectory planning, controller design, and safe dynamic workspace analysis where collision is imminent and the safety of humans, objects and the manipulator itself is at risk. The kinematic and dynamic models of the manipulator required in the design and control of manipulators are examined and simulated under various operating conditions and manufacturing automation tasks to predict the behaviour of the CDPM.
In the presented research, several of the challenges associated with redundancy resolution are addressed, including the positive tension requirement in each cable, the infinity of inverse dynamic solutions, slow computation when using optimization techniques, failure of the manipulator, and cable elasticity, which plays a significant role in the dynamics of a heavily loaded manipulator with a large workspace. Both optimization-based and non-optimization-based techniques are employed to resolve the redundancy of the CDPM. Suitable optimization-based and non-optimization-based routines are selected depending on the advantages and disadvantages of each method, the task requirements, the redundancy resolution technique used, and the objective function. Methodologies that combine redundancy resolution techniques at various levels (e.g., position, velocity, acceleration, and torque levels) are proposed. / Thesis (Ph.D, Mechanical and Materials Engineering) -- Queen's University, 2012-09-26 22:39:34.35
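One of the challenges above, the positive tension requirement, can be illustrated with a small sketch (my own hypothetical example, not code from the thesis): for a planar point-mass platform with three cables, the structure matrix maps cable tensions to the balanced wrench, and non-negative least squares enforces the cables-pull-only constraint. The geometry and load below are made-up placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical planar point-mass platform at the origin, suspended by three
# cables anchored at the points below (illustrative geometry, not from the thesis).
anchors = np.array([[-1.0, 1.0], [1.0, 1.0], [0.0, -1.5]])
u = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
A = u.T                         # 2x3 structure matrix: A @ t = w
w = np.array([0.0, 9.81])       # wrench to balance (1 kg under gravity)

# Non-negative least squares enforces the "cables pull only" constraint,
# one of the challenges of redundancy resolution for CDPMs.
t, resid = nnls(A, w)
print("cable tensions:", t, " residual:", resid)
```

Because the platform has two degrees of freedom and three cables, one degree of actuation redundancy remains; the solver picks a tension set that balances the wrench with all tensions non-negative.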
|
162 |
Optimal Control and Multibody Dynamic Modelling of Human Musculoskeletal Systems. Sharif Shourijeh, Mohammad, January 2013
Musculoskeletal dynamics is a branch of biomechanics that takes advantage of interdisciplinary
models to describe the relation between muscle actuators and the corresponding
motions of the human body. Muscle forces play a principal role in musculoskeletal
dynamics. Unfortunately, these forces cannot be measured non-invasively. Surface EMG measurement is recognized as a non-invasive surrogate for invasive muscle force measurement; however, these signals do not reflect muscle forces accurately. As an alternative to measurement, mathematical modelling of musculoskeletal dynamics is a well-established tool to simulate, predict and analyse human movements. Computer simulations
have been used to estimate a variety of variables that are difficult or impossible to measure
directly, such as joint reaction forces, muscle forces, metabolic energy consumption, and
muscle recruitment patterns.
Musculoskeletal dynamic simulations can be divided into two branches: inverse and
forward dynamics. Inverse dynamics is the approach in which net joint moments and/or
muscle forces are calculated given the measured or specified kinematics. It is the most
popular simulation technique used to study human musculoskeletal dynamics. The major
disadvantage of inverse dynamics is that it is not predictive and can rarely be used for cause-effect interpretations. In contrast to inverse dynamics, forward dynamics can be
used to determine the human body movement when it is driven by known muscle forces.
The musculoskeletal system (MSS) is dynamically under-determined, i.e., the number of muscles exceeds the number of degrees of freedom (dof) of the system. This redundancy leads to infinitely many muscle force solutions, which implies that there are infinitely many ways of recruiting different muscles for a specific motion. Therefore, an extra criterion is needed to resolve this issue. Optimization has been widely used to solve the redundancy of the force-sharing problem. Optimization can be seen as the missing ingredient in the dynamics of the MSS: once appended to the under-determined problem, "human-like" movements are obtained. "Human-like" implies that the human body tends to minimize a criterion during a movement, e.g., muscle fatigue or metabolic energy. It is commonly accepted that using such criteria, within the optimization necessary in forward dynamic simulations, leads to a reasonable representation of real human motions.
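The force-sharing optimization described above can be sketched with a hypothetical single-joint example of my own (not a model from the thesis): three muscles share a required joint moment, and the redundancy is resolved by minimizing a fatigue-like cost subject to the moment-balance constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-joint example: three muscles share a required elbow
# moment. Moment arms, maximum forces and the moment are made-up numbers.
r = np.array([0.03, 0.05, 0.02])          # moment arms [m]
F_max = np.array([600.0, 400.0, 900.0])   # maximum isometric forces [N]
M_req = 20.0                              # required joint moment [N m]

# Resolve the redundancy by minimizing a fatigue-like cost (sum of cubed
# activations) subject to moment balance, with activations 0 <= a_i <= 1.
cost = lambda a: np.sum(a ** 3)
moment_balance = {"type": "eq", "fun": lambda a: r @ (F_max * a) - M_req}
res = minimize(cost, x0=np.full(3, 0.5), bounds=[(0.0, 1.0)] * 3,
               constraints=[moment_balance])
print("activations:", res.x, " achieved moment:", r @ (F_max * res.x))
```

The cubed-activation cost is one of several criteria used in the literature; swapping in a different criterion changes only the `cost` function.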
In this thesis, optimal control and forward dynamic simulation of human musculoskeletal
systems are targeted. Forward dynamics requires integration of the differential equations
of motion of the system, which takes a considerable time, especially within an optimization
framework. Therefore, computationally efficient models are required. Musculoskeletal
models in this thesis are implemented in the symbolic multibody package MapleSim, which is built on Maple. MapleSim automatically generates the equations of motion governing a multibody system using linear graph theory. These equations are then simplified and highly optimized for subsequent simulations using symbolic techniques in Maple. The resulting code is well suited to optimization-based simulation, such as the research area of this thesis.
The specific objectives of this thesis were to develop frameworks for such predictive
simulations and validate the estimations. Simulating human gait motion is set as the end
goal of this research. To successfully achieve that, several intermediate steps are taken prior
to gait modelling. One big step was to choose an efficient strategy to solve the optimal
control and muscle redundancy problems. The optimal control techniques are benchmarked
on simpler models, such as forearm flexion/extension, to study the efficacy of the proposed
approaches more easily. Another major step to modelling gait is to create a high-fidelity
foot-ground contact model. The foot contact model in this thesis is based on a nonlinear volumetric approach, which reproduces the experimental ground reaction forces more faithfully than previously used models.
Although the proposed models and approaches showed strong potential and capability, there is still room for improvement in both the modelling and validation aspects. These future directions can be pursued by any researcher working in the optimal control and forward dynamic modelling of human musculoskeletal systems.
|
163 |
Exploiting the implicit error correcting ability of networks that use random network coding. Von Solms, Suné, January 2009
In this dissertation, we developed a method that uses the redundant information implicitly
generated inside a random network coding network to apply error correction to the transmitted
message. The obtained results show that the developed implicit error correcting method can
reduce the effect of errors in a random network coding network without the addition of
redundant information at the source node. This method presents numerous advantages
compared to the documented concatenated error correction methods.
We found that various error correction schemes can be implemented without adding
redundancy at the source nodes. The decoding ability of this method is dependent on the
network characteristics. We found that large networks with a high level of interconnectivity
yield more redundant information allowing more advanced error correction schemes to be
implemented.
Network coding networks are prone to error propagation. We present the results of the
effect of link error probability on our scheme and show that our scheme outperforms
concatenated error correction schemes for low link error probability. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2010.
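The decoding step described above can be sketched as follows (an illustrative GF(2) example of my own; the dissertation's networks, field sizes and coding schemes may differ). The sink collects more random linear combinations than there are source packets; the extra rows play the role of the redundancy generated implicitly inside the network, and Gaussian elimination over GF(2) recovers the message.

```python
import numpy as np

def gf2_solve(C, Y):
    """Gaussian elimination over GF(2): recover X from C @ X = Y (mod 2).
    C is (m x k) with m >= k; returns the k source packets, or None if the
    received combinations do not have full rank."""
    C, Y = C.copy() % 2, Y.copy() % 2
    m, k = C.shape
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, m) if C[r, col]), None)
        if piv is None:
            return None                    # rank deficient: cannot decode
        C[[row, piv]] = C[[piv, row]]      # swap pivot row into place
        Y[[row, piv]] = Y[[piv, row]]
        for r in range(m):
            if r != row and C[r, col]:     # eliminate above and below
                C[r] ^= C[row]
                Y[r] ^= Y[row]
        row += 1
    return Y[:k]

rng = np.random.default_rng(7)
k, n_bits = 4, 8
X = rng.integers(0, 2, size=(k, n_bits))   # source packets (bit vectors)
# The sink receives 6 GF(2) combinations of the 4 packets; the 2 extra rows
# stand in for the redundant information generated inside the network.
C = np.vstack([np.eye(k, dtype=np.int64),
               rng.integers(0, 2, size=(2, k))])
Y = C @ X % 2
X_hat = gf2_solve(C, Y)
print("decoded correctly:", X_hat is not None and bool((X_hat == X).all()))
```

With more received combinations than source packets, the extra rank can also be spent on detecting or correcting corrupted combinations, which is the effect the dissertation exploits.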
|
165 |
System-level Structural Reliability of Bridges. Elhami Khorasani, Negar, 30 November 2011
The purpose of this thesis is to demonstrate that two-girder or two-web structural systems can be employed to design efficient bridges with an adequate level of redundancy. The issue of redundancy in two-girder bridges is a constraint for bridge designers in North America who want to take advantage of the efficiency of this type of structural system. Therefore, the behavior of two-girder or two-web structural systems after failure of one main load-carrying component is evaluated to validate their safety. A procedure is developed to perform system-level reliability analysis of bridges. This procedure is applied to two bridge concepts: a twin steel girder with composite deck slab, and a concrete double-T girder with unbonded external tendons. The results show that twin steel girder bridges can be designed to fulfill the requirements of a redundant structure, and that the double-T girder with external unbonded tendons can be employed to develop a robust structural system.
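The damaged-state check described above (reliability of the survivor after one girder fails) can be sketched with a small Monte Carlo computation. All distributions and numbers below are illustrative placeholders of my own, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000

# Damaged state: one girder has already failed and the survivor carries the
# full redistributed load. Capacity R (lognormal) and load effect S (normal)
# are normalized to the nominal load; all numbers are made up.
R = rng.lognormal(mean=np.log(1.6), sigma=0.12, size=n)  # survivor capacity
S = rng.normal(loc=1.0, scale=0.15, size=n)              # redistributed load

pf = np.mean(R < S)               # conditional probability of collapse
beta = -norm.ppf(pf)              # corresponding reliability index
print("P(collapse | one girder lost) ~", pf, " beta ~", beta)
```

A system-level analysis would compare this conditional reliability index against a target for the damaged condition, which is lower than the intact-structure target.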
|
167 |
Multi-State Reliability Analysis of Nuclear Power Plant Systems. Veeramany, Arun, January 2012
The probabilistic safety assessment of engineering systems involving high-consequence, low-probability events is stochastic in nature because of the uncertainty inherent in the time to an event. The event could be a failure, repair, maintenance action, or degradation associated with system ageing. Accurate reliability prediction that accounts for these uncertainties is a precursor to a sound risk assessment model.
Stochastic Markov reliability models have been constructed to quantify basic events in a static fault tree analysis as part of the safety assessment process. These models assume that a system transits through various states and that the time spent in a state is statistically random. The system failure probability estimates of these models, assuming constant transition rates, are extensively used in industry to obtain the failure frequency of catastrophic events; an example is the core damage frequency in a nuclear power plant where the initiating event is loss of the cooling system. However, the assumption of constant state transition rates in the analysis of safety-critical systems is debatable, because these rates do not properly account for variability in the time to an event. One ill consequence of this assumption is overly conservative reliability prediction, leading to unnecessary redundancies in modified versions of prototype designs, excess spare inventory, and an expensive maintenance policy with shorter maintenance intervals. The reason for this discrepancy is that a constant transition rate always implies an exponential distribution for the time spent in a state.
The subject matter of this thesis is to develop more sophisticated mathematical models that improve predictive capability and accurately represent the reliability of an engineering system. The semi-Markov process, a generalization of the Markov process, is a well-known stochastic process, yet it is not well explored in the reliability analysis of nuclear power plant systems. The continuous-time, discrete-state semi-Markov process model describes state transitions through a system of integral equations that can be solved using the trapezoidal rule. The primary objective is to determine the probability of being in each state. This process model allows the time spent in each state to follow a suitable non-exponential distribution, thus capturing the variability in the time to an event. When an exponential distribution is assumed for all state transitions, the model reduces to the standard Markov model.
This thesis illustrates the proposed concepts using basic examples and then develops advanced case studies for nuclear cooling systems, piping systems, digital instrumentation and control (I&C) systems, fire modelling and system maintenance. The first case study on nuclear component cooling water system (NCCW) shows that the proposed technique can be used to solve a fault tree involving redundant repairable components to yield initiating event probability quantifying the loss of cooling system. The time-to-failure of the pump train is assumed to be a Weibull distribution and the resulting system failure probability is validated using a Monte Carlo simulation of the corresponding reliability block diagram.
Nuclear piping systems develop flaws, leaks and ruptures due to various underlying damage mechanisms. This thesis presents a general model for evaluating rupture frequencies of such repairable piping systems. The proposed model is able to incorporate the effect of aging related degradation of piping systems. Time dependent rupture frequencies are computed and the influence of inspection intervals on the piping rupture probability is investigated.
There is an increasing interest worldwide in the installation of digital instrumentation and control systems in nuclear power plants. The main feedwater valve (MFV) controller system is used for regulating the water level in a steam generator. An existing Markov model in the literature is extended to a semi-Markov model to accurately predict the controller system reliability. The proposed model considers variability in the time to output from the computer to the controller with intrinsic software and mechanical failures.
State-of-the-art time-to-flashover fire models used in the nuclear industry are either based on conservative analytical equations or computationally intensive simulation models. The proposed semi-Markov based case study describes an innovative fire growth model that allows prediction of fire development and containment including time to flashover. The model considers variability in time when transiting from one stage of the fire to the other. The proposed model is a reusable framework that can be of importance to product design engineers and fire safety regulators.
Operational unavailability is at risk of being over-estimated because of assuming a constant degradation rate in a slowly ageing system. In the last case study, it is justified that variability in time to degradation has a remarkable effect on the choice of an effective maintenance policy. The proposed model is able to accurately predict the optimal maintenance interval assuming a non-exponential time to degradation. Further, the model reduces to a binary state Markov model equivalent to a classic probabilistic risk assessment model if the degradation and maintenance states are eliminated.
In summary, variability in time to an event is not properly captured in existing Markov type reliability models though they are stochastic and account for uncertainties. The proposed semi-Markov process models are easy to implement, faster than intensive simulations and accurately model the reliability of engineering systems.
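As a minimal sketch of the proposed numerical approach (my own two-state example, not a model from the thesis), the semi-Markov integral equations for an up/down system can be solved with the trapezoidal rule; with exponential sojourn times the result should reduce to the standard Markov availability, as stated above.

```python
import numpy as np

def availability(f_up, S_up, f_dn, t_max, h):
    """Two-state (up/down) semi-Markov model: solve the Markov-renewal
    integral equations for P00(t) = P(up at t | up at 0) with the
    trapezoidal rule. f_up/f_dn are sojourn-time densities and S_up is
    the survival function of the up-time."""
    n = int(round(t_max / h))
    t = np.linspace(0.0, n * h, n + 1)
    fu, fd, Su = f_up(t), f_dn(t), S_up(t)
    P00, P10 = np.zeros(n + 1), np.zeros(n + 1)
    P00[0] = 1.0
    a, b = 0.5 * h * fu[0], 0.5 * h * fd[0]   # weights of unknown endpoints
    for k in range(1, n + 1):
        # trapezoidal convolutions, excluding the unknown endpoint terms
        # (the m=k term of the first sum vanishes because P10[0] = 0)
        A = Su[k] + h * np.dot(fu[1:k], P10[k - 1:0:-1])
        B = h * np.dot(fd[1:k], P00[k - 1:0:-1]) + 0.5 * h * fd[k] * P00[0]
        P00[k] = (A + a * B) / (1.0 - a * b)   # solve the coupled 2x2 pair
        P10[k] = B + b * P00[k]
    return t, P00

# Sanity check: with exponential sojourns the model must reduce to the
# standard Markov result, steady-state availability mu / (lam + mu).
lam, mu = 1.0, 2.0
t, P00 = availability(lambda s: lam * np.exp(-lam * s),
                      lambda s: np.exp(-lam * s),
                      lambda s: mu * np.exp(-mu * s), t_max=10.0, h=0.01)
print("P00(10) =", P00[-1], " Markov limit =", mu / (lam + mu))
```

Replacing `f_up`/`S_up` with a Weibull density and survival function gives a non-exponential up-time without any change to the solver, which is the flexibility the thesis exploits.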
|
168 |
A Dependable Computing Application. Gungor, Ugur, 01 April 2005
ABSTRACT
A DEPENDABLE COMPUTING APPLICATION
Güngör, Ugur
M.S., Department of Electrical and Electronics Engineering
Supervisor: Prof. Dr. Hasan Cengiz Güran
April 2005, 129 pages
This thesis focuses on fault tolerance, which is a form of dependable computing. It deals with the advantages of fault tolerance techniques against Single Event Upsets (SEUs) occurring in a Field Programmable Gate Array (FPGA). Two fault-tolerant methods are applied to a floating-point multiplier. The most common SEU mitigation method is Triple Modular Redundancy (TMR), so two fault tolerance methods that use TMR are tested.
The setup consists of three printed circuit boards (PCBs) and one user interface software application. Through the user interface software running on a computer, the user can inject one or more faults into the selected part of the system, which implements TMR with a voting circuit (TMRV) or TMR with voting and correction circuits (TMRVC) on the floating-point multiplier. After injecting faults, the user can view the results of the fault injection test in the user interface software. The first printed circuit board is called the Test Pattern Generator; it is responsible for communication between the Fault Tolerant Systems board and the user interface software. The Fault Tolerant Systems board, the second PCB in the setup, implements the fault-tolerant methods on a fifteen-bit floating-point multiplier in the FPGA. The first of these methods is TMR with a voter circuit (TMRV) and the second is TMR with voter and correction circuits (TMRVC). The last PCB in the setup is the Display PCB, which displays the fault tolerance test results and the floating-point multiplication result. All the functions on the Test Pattern Generator and Fault Tolerant Systems boards are implemented in a Field Programmable Gate Array (FPGA) programmed in the VHSIC Hardware Description Language (VHDL).
The implementation results of these methods in the FPGA are evaluated to assess the performance of the applied methods in tolerating SEUs.
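The TMR voting used in both methods can be illustrated in a few lines (a software sketch of the hardware voter, not the thesis's VHDL; the 15-bit value below is a made-up multiplier output): a bitwise 2-of-3 majority masks a single upset bit, and the TMRVC variant additionally writes the voted value back to resynchronize the faulty replica.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant module outputs."""
    return (a & b) | (a & c) | (b & c)

# A single event upset flips one bit in one of the three replicas of the
# multiplier output (a made-up 15-bit value); the voter masks the error.
golden = 0b101101101101010
faulty = golden ^ (1 << 7)             # SEU: bit 7 upset in one module
voted = tmr_vote(golden, golden, faulty)
print("voted output correct:", voted == golden)

# TMRVC additionally writes the voted value back into the replicas so that
# upsets do not accumulate across modules over time (the correction step).
corrected_replicas = [voted, voted, voted]
```

Plain TMRV only masks the error at the output; without the correction step a second upset in another replica could later defeat the majority, which is why the TMRVC variant is tested as well.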
|
169 |
The role of heterogenic spinal reflexes in coordinating and stabilizing a model feline hindlimb. Bunderson, Nathan Eric, 01 April 2008
In addition to its intrinsic importance during quiet standing, posture also serves as
the background for a wide variety of other critical motor tasks. The hierarchical nature of
the motor control system suggests that the different layers may be responsible for
different aspects of posture. I tested the hypothesis that spinal reflexes are organized
according to optimal principles of stability, control accuracy, and energy. I found that
there were no globally stable muscle activation patterns for muscles operating near
optimal fiber length, suggesting that the intrinsic viscoelastic properties of muscle are
insufficient to provide limb stability. However, for stiffer muscles a stable limb could be
created by selectively activating muscles based on their moment-arm joint angle
relationships. The optimal organization of length and velocity feedback to control and
stabilize the endpoint position of a limb could not be produced from a purely muscle
controller, but required neural feedback to improve endpoint performance, reduce
energetic cost, and produce greater coordination among joints. I found that while muscles at near-optimal fiber length were insufficient to provide limb stability, the length feedback provided by the autogenic stretch reflex was sufficient to stabilize the limb. Length feedback was also sufficient to produce the directional tuning of muscle activity and the constrained ground reaction forces observed in experiments. These results have implications for controlling powered prosthetic devices, suggesting that subdividing the responsibility for stability among hierarchical control structures will simultaneously improve the stability and maneuverability of the devices.
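The stabilizing role of length feedback can be sketched with a toy linearized limb model (my own illustrative parameters, not the feline hindlimb model of the thesis): intrinsic muscle stiffness alone is too low to counter the destabilizing effect of gravity, but adding a stretch-reflex-like feedback gain stabilizes the limb.

```python
def simulate(k_muscle, k_reflex, theta0=0.05, T=5.0, h=1e-3):
    """Linearized inverted-pendulum 'limb' held by a muscle: intrinsic
    stiffness k_muscle plus a stretch-reflex-like length-feedback gain
    k_reflex [N m/rad]. Returns the final joint angle. All parameters
    are made-up placeholders."""
    m, g, L, d = 5.0, 9.81, 0.3, 1.0      # mass, gravity, length, damping
    I = m * L ** 2                         # point-mass inertia
    theta, omega = theta0, 0.0
    for _ in range(int(T / h)):
        # gravity destabilizes; muscle + reflex stiffness and damping restore
        alpha = (m * g * L * theta - (k_muscle + k_reflex) * theta
                 - d * omega) / I
        theta += h * omega                 # explicit Euler integration
        omega += h * alpha
    return theta

# m*g*L ~ 14.7 N m/rad of destabilizing gravitational stiffness: 10 N m/rad
# of intrinsic muscle stiffness alone diverges, but adding 20 N m/rad of
# reflex gain stabilizes the limb.
print("muscle only :", simulate(k_muscle=10.0, k_reflex=0.0))
print("with reflex :", simulate(k_muscle=10.0, k_reflex=20.0))
```

This mirrors the qualitative finding above: stability hinges on whether total (intrinsic plus reflex) stiffness exceeds the gravitational destabilizing stiffness.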
|
170 |
Formal specification of requirements for analytical redundancy based fault tolerant flight control systems. Del Gobbo, Diego, January 2000
Thesis (Ph. D.)--West Virginia University, 2000. / Title from document title page. Document formatted into pages; contains ix, 185 p. : ill. Includes abstract. Includes bibliographical references (p. 87-91).
|