241

Intuitive Teleoperation of an Intelligent Robotic System Using Low-Cost 6-DOF Motion Capture

Gagne, Jonathan January 2011 (has links)
A wide variety of six degree-of-freedom (6-DOF) motion capture technologies is currently available; however, these systems tend to be prohibitively expensive. A software system was developed to provide 6-DOF motion capture using the Nintendo Wii remote’s (wiimote) sensors, an infrared beacon, and a novel hierarchical linear-quaternion Kalman filter. The software is freely available, and the hardware costs less than one hundred dollars. Using this motion capture software, a robotic control system was developed to teleoperate a 6-DOF robotic manipulator via the operator’s natural hand movements. The teleoperation system requires calibration of the wiimote’s infrared camera to estimate the wiimote’s 6-DOF pose. Since the raw images from the wiimote’s infrared camera are not available, a novel camera-calibration method was developed to obtain the camera’s intrinsic parameters, which yield a low-accuracy estimate of the 6-DOF pose. Fusing this low-accuracy pose estimate with accelerometer and gyroscope measurements produces an accurate 6-DOF pose estimate for teleoperation. Preliminary testing suggests that the motion capture system is accurate to within a millimetre in position and one degree in attitude. Furthermore, whole-system tests demonstrate that the teleoperation system can control the end effector of a robotic manipulator to match the pose of the wiimote. Since this system provides 6-DOF motion capture at a fraction of the cost of traditional methods, it has wide applicability in robotics and as a 6-DOF human input device for controlling 3D virtual computer environments.
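As an illustration of the sensor-fusion step, the sketch below blends a gyro-propagated attitude quaternion with the camera-derived attitude estimate. It is a minimal complementary-filter stand-in for the thesis's hierarchical linear-quaternion Kalman filter, not the actual implementation; the blend gain and rates are hypothetical, and position would be fused analogously from the camera position and accelerometer.

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton product of two quaternions [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    # First-order propagation from the body angular rate (rad/s):
    # q_dot = 0.5 * q (x) [0, omega].
    q = q + 0.5 * dt * quat_mult(q, np.array([0.0, *omega]))
    return q / np.linalg.norm(q)

def fuse_attitude(q_gyro, q_cam, alpha=0.02):
    # Pull the drift-prone gyro estimate slowly toward the noisy but
    # drift-free camera estimate (normalized linear interpolation).
    if np.dot(q_gyro, q_cam) < 0.0:    # keep both in the same hemisphere
        q_cam = -q_cam
    q = (1.0 - alpha) * q_gyro + alpha * q_cam
    return q / np.linalg.norm(q)
```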
242

Micromechanics of Fiber Networks Including Nonlinear Hysteresis and its Application to Multibody Dynamic Modeling of Piano Mechanisms

Masoudi, Ramin 09 April 2012 (has links)
Many engineering applications make use of fiber assemblies under compression. Unfortunately, this compression behavior is difficult to predict, due to nonlinear compliance, hysteresis, and anelasticity. The main objective of this research is to develop an algorithm capable of incorporating the microscale features of the fiber network into macroscopic-scale applications, particularly the modeling of contact mechanics in multibody systems. In micromechanical approaches, the response of a fiber assembly to an external force is related to the response of basic fiber units as well as the interactions between these units, i.e., the mechanical properties of the constituent fibers and the architecture of the assembly both have a significant influence on the overall response of the assembly to compressive load schemes. Probabilistic and statistical principles are used to construct the structure of the uniformly distributed random network. Different micromechanical approaches to modeling felt, a nonwoven fiber assembly with unique mechanical properties, are explored to gain insight into the key mechanisms that influence its compressive response. Based on the deformation processes and the techniques for estimating the number of fiber contacts, three micromechanical models are introduced: (1) constitutive equations for the micromechanics of three-dimensional fiberwebs under small strains, in which elongation of the fibers is the key deformation mechanism, adapted here for large deformation ranges; (2) a micromechanical model based on the rate theory of granular media, in which bending and torsion of fibers are the predominant elemental deformations used to calculate the compliances of a particular contact; and (3) a mechanistic model developed using the general deformation theory of fiber networks, with fiber bending at the micro level and a binomial distribution of fiber contacts. A well-established mechanistic model, based on fiber-to-fiber friction at the micro level, is presented for predicting the hysteresis in the compression behavior of wool fiberwebs. A novel algorithm is introduced to incorporate a hysteretic micromechanical model - a combination of the mechanistic model with microstructural fiber bending, which uses a binomial distribution of the number of fiber-to-fiber contacts, and the friction-based hysteresis idea - into the contact mechanics of multibody simulations with felt-lined interacting bodies. Considering the realistic case in which a portion of the fibers slides, the fiber network can be treated as two subnetworks: one formed by the fibers with non-sliding contact points, responsible for the elastic response of the network, and the other consisting of fibers that slide, generating the irreversible hysteresis observed in fiberweb compression. A parameter identification is performed to minimize the error between the micromechanical model and the elastic part of the loading-unloading experimental data for felt; the contribution of friction is then added to the resulting mechanistic compression-recovery curves. The theoretical framework for constructing a mechanistic multibody dynamic model of a vertical piano action is developed, and its general validity is established using a prototype model. Dynamic equations of motion are derived symbolically for the piano action using a graph-theoretic formulation.
The model fidelity is increased by including hammer-string interaction, backcheck wire and hammer shank flexibility, a sophisticated key pivot model, nonlinear models of the bridle strap and butt spring, and a novel mathematical contact model. The developed nonlinear hysteretic micromechanical model is used for the hammer-string interaction to affirm the reliability and applicability of the model in general multibody dynamic simulations. In addition, dynamic modeling of a flexible hub-beam system with an eccentric tip mass, including nonlinear hysteretic contact, is studied. The model represents the mechanical finger of an actuator for a piano key. Achieving a desired finger-key contact force profile that replicates that of a real pianist's finger requires dynamic and vibration analysis of the actuator device. The governing differential equations for the dynamic behavior of the system are derived using Euler-Bernoulli beam theory along with Lagrange's method. The finite element method is used to discretize the distributed-parameter flexible beam in the model. Excessive vibration due to arm flexibility, together with the rigid-body oscillations of the arm, especially during the period of key-felt contact, is eliminated using a simple grounded rotational dashpot and a grounded rotational dashpot with a one-sided relation. The effect of these additional components on vibration behavior is demonstrated using the simulated model.
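The flavour of the hysteretic contact model can be sketched in a few lines: an elastic power-law backbone from the non-sliding fiber subnetwork, plus a friction term from the sliding subnetwork that switches sign between loading and unloading, opening the hysteresis loop. The exponent and coefficients below are hypothetical placeholders, not the identified felt parameters.

```python
import numpy as np

def felt_contact_force(x, x_dot, k=1.0e9, p=2.5, mu=0.15):
    """Toy hysteretic felt model (k, p, mu are hypothetical values)."""
    x = max(x, 0.0)                                 # compression only
    f_elastic = k * x**p                            # non-sliding subnetwork
    f_friction = mu * f_elastic * np.sign(x_dot)    # sliding subnetwork:
    # raises the force on loading (x_dot > 0), lowers it on unloading,
    # so the loading and unloading paths enclose a hysteresis loop.
    return max(f_elastic + f_friction, 0.0)
```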
243

Cognitive Work Analysis to Support Collaboration in Teamwork Environments

Ashoori, Maryam January 2012 (has links)
Cognitive Work Analysis (CWA), an analytical approach for examining complex socio-technical systems, has shown success in modeling the work of single operators. The CWA approach allows room for social and team interactions, but a more explicit analysis of team aspects can reveal more information for systems design. CWA techniques and models do not yet provide sufficient guidance on identifying shared constraints, team strategies, or the social competencies of team players. In this thesis, I explore whether a team approach to CWA can yield more information than a typical CWA. Team CWA techniques and models emerge and extend from theories and models of teamwork, past attempts to model teams with CWA, and the results of two sets of observational studies. The potential benefits of using Team CWA models in domains with strong team collaboration are demonstrated through the results of a two-week observation at the Labour and Delivery Department of The Ottawa Hospital and a fifteen-week observation at the IBM Ottawa Software Group.
244

Metamodel-Based Probabilistic Design for Dynamic Systems with Degrading Components

Seecharan, Turuna Saraswati January 2012 (has links)
The probabilistic design of dynamic systems with degrading components is difficult. Design of dynamic systems typically involves the optimization of a time-invariant performance measure, such as energy, that is estimated using a dynamic response, such as angular speed. The mechanistic models developed to approximate this performance measure are too complicated to be used in simple design calculations and lead to lengthy simulations. When degradation of the components is assumed, estimating the failure probability over the product lifetime is required in order to determine suitable service times. Again, complex mechanistic models lead to lengthy lifetime simulations when the Monte Carlo method is used to evaluate probability. To address these problems, an efficient methodology is presented for the probabilistic design of dynamic systems and for estimating the cumulative distribution function of the time to failure of a performance measure when degradation of the components is assumed. The four main steps are: 1) transforming the dynamic response into a set of static responses at discrete cycle-time steps and using Singular Value Decomposition to efficiently estimate a time-invariant performance measure that is based upon a dynamic response; 2) replacing the mechanistic model with an approximating function, known as a “metamodel”; 3) searching for the best design parameters using fast integration methods such as the First-Order Reliability Method; and 4) building the cumulative distribution function by summing the incremental failure probabilities, estimated using the set-theory method, over the planned lifetime. The first step of the methodology uses design of experiments or sampling techniques to select a sample of training sets of the design variables. These training sets are then input to the computer-based simulation of the mechanistic model to produce a matrix of corresponding responses at discrete cycle-times. Although metamodels can be built at each time-specific column of this matrix, this approach is slow, especially if the number of time steps is large. An efficient alternative uses Singular Value Decomposition to split the response matrix into two matrices containing only design-variable-specific and time-specific information. The second step of the methodology fits metamodels only for the significant columns of the matrix containing the design-variable-specific information. Using the time-specific matrix, a metamodel can then be developed quickly at any cycle-time step or for any time-invariant performance measure, such as the energy consumed over the cycle lifetime. In the third step, design variables are treated as random variables and the First-Order Reliability Method is used to search for the best design parameters. Finally, the components most likely to degrade are modelled using either a degradation path or a marginal distribution model and, using the First-Order Reliability Method or Monte Carlo simulation to estimate probability, the cumulative failure probability is plotted. The speed and accuracy of the methodology are investigated using three metamodels: the Regression Model, Kriging, and the Radial Basis Function. This thesis shows that metamodels offer a significantly faster, yet accurate, alternative to mechanistic models for both probabilistic design optimization and estimating the cumulative distribution function.
For design using the First-Order Reliability Method to estimate probability, the Regression Model is the fastest and the Radial Basis Function is the slowest. Kriging is shown to be accurate and faster than the Radial Basis Function, but it is still slower than the Regression Model. When estimating the cumulative distribution function, metamodels are more than 100 times faster than the mechanistic model, with an error of less than ten percent relative to the mechanistic model. Kriging and the Radial Basis Function are more accurate than the Regression Model, and computation is faster when using Monte Carlo simulation to estimate probability than when using the First-Order Reliability Method.
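To make steps 1) and 2) concrete, the sketch below splits a response matrix with an SVD and fits one metamodel per significant mode. The training data are synthetic stand-ins, and plain linear regression substitutes for the Regression, Kriging, or Radial Basis Function metamodels of the thesis; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: X holds 40 training designs (3 design variables);
# R[i, t] is the simulated dynamic response of design i at cycle-time t.
X = rng.uniform(0.5, 1.5, size=(40, 3))
t = np.linspace(0.0, 1.0, 200)
R = np.array([x[0] * np.sin(2*np.pi*t / x[1]) * np.exp(-x[2]*t) for x in X])

# Step 1: split the response matrix into design-variable-specific (U * s)
# and time-specific (Vt) information.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1

# Step 2: fit a metamodel only for each of the k significant modes.
scores = U[:, :k] * s[:k]
A = np.c_[np.ones(len(X)), X]
coef, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_response(x_new):
    """Metamodel estimate of the full response history at a new design."""
    return np.r_[1.0, x_new] @ coef @ Vt[:k]

y_hat = predict_response(np.array([1.0, 1.0, 1.0]))   # 200-step history
```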
245

In Search of Lost Time

Wu, Yan January 2012 (has links)
In Marcel Proust's most famous novel, In Search of Lost Time, a madeleine cake elicits in the narrator a nostalgic memory of Combray. Here we present a computational hypothesis of how such an episodic memory is represented in a brain area called the hippocampus, and how the dynamics of the hippocampus allow the storage and recall of such past events. Using the Neural Engineering Framework (NEF), we show how different aspects of an event, after compression, are represented together by hippocampal neurons as a vector in a high-dimensional memory space. Single-neuron simulation results using this representation scheme agree well with the observation that hippocampal neurons are tuned to both spatial and non-spatial inputs. We then show that sequences of events represented by high-dimensional vectors can be stored as episodic memories in a recurrent neural network (RNN) that is structurally similar to the hippocampus. We use a state-of-the-art Hessian-Free optimization algorithm to train this RNN efficiently. At the behavioural level, we also show that, consistent with T-maze experiments on rodents, the storage and retrieval of past experiences facilitate subsequent decision-making tasks.
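The storage-and-recall idea can be illustrated with a toy model: events become high-dimensional unit vectors, transitions between successive events are learned with Hebbian outer products, and recall iterates the transition map with a clean-up to the nearest stored event. This is a simplified stand-in for the NEF spiking representation and the Hessian-Free-trained RNN, not the thesis's model; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_events = 512, 6
# Each event is a compressed, high-dimensional feature vector (random
# unit vectors here stand in for compressed sensory representations).
events = rng.standard_normal((n_events, d))
events /= np.linalg.norm(events, axis=1, keepdims=True)

# Hebbian outer-product learning of the transitions x_t -> x_{t+1}.
W = sum(np.outer(events[i + 1], events[i]) for i in range(n_events - 1))

def recall(cue, steps):
    """Replay the stored episode by iterating the transition map and
    cleaning up each state to the nearest stored event vector."""
    sequence = [cue]
    for _ in range(steps):
        y = W @ sequence[-1]
        sequence.append(events[np.argmax(events @ y)])
    return sequence

replay = recall(events[0], n_events - 1)
print(all(np.allclose(replay[i], events[i]) for i in range(n_events)))  # True
```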
246

Investigating the Impact of Table Size on External Cognition in Collaborative Problem-Solving Tabletop Activities

Hajizadehgashti, Sepinood 23 August 2012 (has links)
Tables have long been used for working and studying, and people continue to use them to work with digital artifacts. Collaborative tabletop activities such as planning, designing, and scheduling are common on traditional tables, but digital tables still face a variety of design issues in supporting the same tasks. For example, given the high cost of digital tables, it is unclear how large a digital table must be to support collaborative problem solving. This thesis examines the impact of physical features, in particular table size, on collaborative tasks. The research leverages findings from previous studies of traditional and digital tables, and focuses on the interaction of table size and users’ seating arrangement in collaborative problem solving. An experimental study is used to observe the behaviour of two-member groups during problem-solving tasks. Two tasks, storytelling and travel planning, were selected for this study, and the experiments were performed on two traditional tables, one small and one large. Although working on digital and traditional tables differs, investigating the impact of physical features on traditional tables can help us better understand how these features interact with workspace awareness and external cognition factors during taskwork. In the empirical study, the external cognitive behaviours of participants were analyzed in depth to understand how the physical setting of the table and the seating arrangement affect the way people manipulate artifacts in the table workspace. Collaborators passed through different stages of problem solving using varied strategies, and the data analysis revealed that they manipulated material on the tabletop for understanding, organizing, and solution making through visual separation, cognitive tracing, and piling. Table size, task type, and seating arrangement showed strong effects on the external cognition of collaborators. In particular, the availability of sufficient space on the table influenced how widely users could distribute their materials to improve workspace awareness and cognitive tracing. Conversely, a lack of space, or inaccessible space, forced people to use the space above the table (by holding materials in their hands) or to pile materials to compensate for the space limitation. The insights gained from this research inform design decisions regarding size and seating arrangement for tabletop workspaces. For cases in which space is insufficient, design alternatives are recommended to improve access to artifacts. These solutions aim to enhance the external cognition of users when space is insufficient for working with artifacts in problem-solving tasks.
247

Design and Hardware-in-the-Loop Testing of Optimal Controllers for Hybrid Electric Powertrains

Sharif Razavian, Reza January 2012 (has links)
The main objective of this research is the development of a flexible test bench for evaluating hybrid electric powertrain controllers. As a case study, a real-time near-optimal powertrain controller for a series hybrid electric vehicle (HEV) has been designed and tested. The designed controller, like many other optimal controllers, is based on a simple model. This control-oriented model aims to be as simple as possible in order to minimize the controller's computational effort. However, a simple model may not capture the vehicle's dynamics accurately, and the designed controller may fail to deliver the anticipated behavior. Therefore, it is crucial that the controller be tested in a realistic environment. To evaluate the performance of the designed model-based controller, it is first applied to a high-fidelity series HEV model that includes physics-based component models and low-level controllers. After successfully passing this model-in-the-loop test, the controller is programmed into a rapid-prototyping controller unit for hardware-in-the-loop simulation. This type of simulation is intended mainly to assess the controller's computational resource demands, as well as communication issues between the controller and the plant (model solver). As the battery pack is one of the most critical components in a hybrid electric powertrain, a component-in-the-loop setup is used to include a physical battery in the simulation loop, further enhancing simulation accuracy. Finally, a driver-in-the-loop setup receives inputs from a human driver instead of a fixed drive cycle, which allows us to study the effects of unpredictable driver behavior. The developed powertrain controller itself is a real-time, drive-cycle-independent controller for a series HEV, designed using a control-oriented model and Pontryagin's Minimum Principle. Like other controllers proposed in the literature, this controller still requires some information about future driving conditions; however, the amount of information required is reduced. Although the controller design procedure is based on a series HEV with a NiMH battery as the electric energy storage, the same procedure can be used to obtain a supervisory controller for a series HEV with an ultracapacitor. By testing the designed optimal controller with the prescribed simulation setups, it is shown that the controller ensures near-optimal behavior of the powertrain, as the dominant system behavior is very close to what is predicted by the control-oriented model. It is also shown that the controller can handle small uncertainties in driver behavior.
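The core of a Pontryagin-style supervisory controller is a pointwise minimization of the Hamiltonian at each instant. The sketch below shows that step for a series HEV with a hypothetical genset fuel map and a constant costate; it illustrates the principle only, and in practice the costate would be tuned from the reduced future-driving information the abstract mentions.

```python
import numpy as np

def fuel_rate(p_eng):
    # Hypothetical genset fuel-rate map (g/s) versus engine electrical
    # output power in watts, with an idle penalty when the engine is on.
    return 1e-4 * p_eng + 2e-9 * p_eng**2 + 0.15 * (p_eng > 0)

def pmp_power_split(p_demand, lam=1.5e-4):
    """Pointwise Hamiltonian minimization: choose the engine power that
    minimizes instantaneous fuel use plus the costate-weighted battery
    discharge power (lam is a hypothetical, charge-sustaining costate)."""
    p_eng_grid = np.linspace(0.0, 50e3, 501)   # candidate engine powers
    p_batt = p_demand - p_eng_grid             # battery covers the rest
    H = fuel_rate(p_eng_grid) + lam * p_batt   # Hamiltonian at this instant
    i = int(np.argmin(H))
    return p_eng_grid[i], p_batt[i]

# e.g. a 20 kW traction demand:
p_eng, p_batt = pmp_power_split(20e3)
```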
248

A Volumetric Contact Model for Planetary Rover Wheel/Soil Interaction

Petersen, Willem January 2012 (has links)
The main objective of this research is the development of a volumetric wheel-soil ground contact model suitable for mobile robotics applications, with a focus on efficient simulation of planetary rover wheels operating on compliant and irregular terrains. To model the interaction between a rover wheel and soft soil for use in multibody dynamic simulations, the terrain material is commonly represented by a soil continuum that deforms substantially when in contact with the locomotion system of the rover. Due to this extensive deformation and the large size of the contact patch, a distributed representation of the contact forces is necessary. This requires time-consuming integration to solve for the contact forces and moments during simulation. In this work, a novel approach is used to represent these contact reactions based on the properties of the hypervolume of penetration, which is defined by the intersection of the wheel and the terrain. The approach is based on a foundation of springs, for which the normal contact force can be calculated by integrating the spring deflections over the contact patch. In the case of an elastic foundation, this integration results in a linear relationship between the normal force and the penetration volume, with the foundation stiffness as the proportionality factor. However, due to the highly nonlinear material properties of the soft terrain, a hyperelastic foundation must be considered, and the normal contact force becomes proportional to a volume with a fractional dimension: a hypervolume. The continuous soil models commonly used in terramechanics simulations can be used in the derivation of the hypervolumetric contact forces. The result is a closed-form solution for the contact forces between a planetary rover wheel and soft soil, in which all the information provided by a distributed load is stored in the hypervolume of interpenetration. The proposed approach is applied to simulations of rigid and flexible planetary rover wheels. In both cases, the plastic behaviour of the terrain material is the main source of energy loss during the operation of planetary rovers. For the rigid wheel model, a penetration geometry is proposed to capture the nonlinear dissipative properties of the soil. The centroid of the hypervolume based on this geometry then allows for the calculation of the contact normal that defines the compaction resistance of the soil. For the flexible wheel model, the deformed state of the tire has to be determined before the hypervolumetric contact model is applied. The tire deformation is represented by a distributed-parameter model based on the Euler-Bernoulli beam equations. Several geometric and soil parameters are required to fully define the normal contact force. While the geometric parameters can be measured, the soil parameters must be obtained experimentally. The results of a drawbar pull experiment with the Juno rover from the Canadian Space Agency were used to identify the soil parameters. These parameters were then used in a forward-dynamics simulation of the rover on an irregular three-dimensional terrain. Comparison of the simulation results with the experimental data validated the planetary rover wheel model developed in this work.
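As a sketch of the idea, the snippet below numerically evaluates the hypervolume of interpenetration for a rigid cylindrical wheel pressed into flat terrain and converts it into a normal contact force. The soil parameters are hypothetical, and the numerical quadrature is a stand-in for the closed-form expressions derived in the thesis; with n = 1 the sketch reduces to the linear elastic-foundation case.

```python
import numpy as np

def hypervolume(r, w, delta, n=1.25, m=400):
    """Hypervolume of interpenetration for a rigid cylindrical wheel
    (radius r, width w) at penetration depth delta on flat terrain:
    the integral over the contact patch of the local penetration depth
    raised to a fractional power n."""
    half = np.sqrt(max(2*r*delta - delta**2, 0.0))   # half contact length
    x = np.linspace(-half, half, m)
    depth = delta - (r - np.sqrt(r**2 - x**2))       # local penetration
    return w * np.trapz(np.maximum(depth, 0.0)**n, x)

def normal_force(r, w, delta, k_v=2.0e6, n=1.25):
    # Hyperelastic foundation: force proportional to the hypervolume;
    # k_v and n are hypothetical soil parameters (identified from
    # experiments such as the drawbar pull test in the thesis).
    return k_v * hypervolume(r, w, delta, n)
```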
249

Bridging Private and Shared Interaction Surfaces in Collocated Groupware

McClelland, Phillip James January 2013 (has links)
Multi-display environments (such as the pairing of a digital tabletop computer with a set of handheld tablet computers) can support collocated group interaction by providing individuals with private workspaces that can be used alongside shared interaction surfaces. However, such a configuration requires intuitive and seamless interactions for moving digital objects between displays. While existing research has suggested numerous methods for bridging devices in this manner, these methods often require highly specialized equipment and are seldom examined using real-world tasks. This thesis investigates two cross-device object transfer methods, adapted for commonly available hardware and applied to a realistic task: a familiar tabletop card game. A digital tabletop and tablet implementation of the card game Dominion is developed to support each of the two cross-device object transfer methods (as well as two different turn-taking methods to support user identification). An observational user study is then performed to examine the effect of the transfer methods on group behaviour, examining player preferences and the strategies players applied to pursue their varied goals within the game. The study reveals that players’ choices and use of the methods are shaped greatly by the way each player personally defines the Dominion task, not simply by the objectives outlined in its rulebook. Design considerations for cross-device object transfer methods, along with lessons learned for system and experimental design in the gaming domain, are also offered.
250

Data-guided statistical sparse measurements modeling for compressive sensing

Schwartz, Tal Shimon January 2013 (has links)
Digital image acquisition can be a time-consuming process in situations where high spatial resolution is required. As such, optimizing the acquisition mechanism is of great importance for many measurement applications. Acquiring data through a dynamically chosen small subset of measurement locations can address this problem. In such a case, the measured information can be regarded as incomplete, which necessitates special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation. Recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented, and evaluated. This method significantly improves image reconstruction from sparse measurements. In the data-guided statistical sparse measurements approach, the signal sampling distribution is optimized to improve image reconstruction performance. The sampling distribution is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling probability distribution is obtained through a learning process using two methods: direct and indirect. The direct method learns a nonparametric probability density function directly from the dataset. The indirect method is used for cases where a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency domain and the spatial domain. Experiments were performed for multiple applications, including optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements, and fluorescence microscopy. Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction: it achieves a much higher reconstruction signal-to-noise ratio at the same compression rate, or, alternatively, a similar reconstruction signal-to-noise ratio with significantly fewer samples.
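A minimal sketch of the direct variant: learn a pixel-wise sampling density from training images, draw the measurement locations from it instead of uniformly at random, and reconstruct with a generic sparse solver (plain ISTA in a 2-D DCT basis here). The function names, parameters, and the energy-based density are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(a):  return dct(dct(a, norm='ortho', axis=0), norm='ortho', axis=1)
def idct2(a): return idct(idct(a, norm='ortho', axis=0), norm='ortho', axis=1)

def learn_sampling_pdf(training_images, floor=1e-3):
    """Direct (nonparametric) learning: weight each pixel location by the
    average signal energy observed there in the training set."""
    energy = np.mean([np.asarray(im, float)**2 for im in training_images], axis=0)
    pdf = energy + floor * energy.max()     # keep every location reachable
    return pdf / pdf.sum()

def draw_mask(pdf, m, rng):
    # Data-guided sampling pattern: m distinct locations drawn from the
    # learned density instead of a uniform random distribution.
    idx = rng.choice(pdf.size, size=m, replace=False, p=pdf.ravel())
    mask = np.zeros(pdf.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(pdf.shape)

def ista_reconstruct(y, mask, lam=0.02, n_iter=300):
    """Plain ISTA recovery of a DCT-sparse image from its masked pixels."""
    c = np.zeros(mask.shape)                      # DCT coefficients
    for _ in range(n_iter):
        resid = mask * (idct2(c) - y)             # misfit on sampled pixels
        c -= dct2(resid)                          # gradient step (L = 1)
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft threshold
    return idct2(c)

# Usage with a hypothetical training set and test image:
# rng = np.random.default_rng(0)
# mask = draw_mask(learn_sampling_pdf(train_set), m=image.size // 4, rng=rng)
# recon = ista_reconstruct(image * mask, mask)
```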
