41

Perception of Compliance in Laparoscopic Surgery

Xin, Hao January 2009 (has links)
Laparoscopic surgery provides major benefits to patients in terms of decreased pain and shorter post-operative hospital stays, but it also increases the risk of intra-operative injury because of the reduction in tactile and visual feedback compared to open surgery. Although the limitations of laparoscopy have been studied, the specific role of force feedback in laparoscopic surgery performance is not well understood. The purpose of this thesis is to determine the effect of force feedback on the ability to accurately discriminate tissue compliance by comparing subjective tissue softness assessment, force output, and subjective force assessment in conventional and laparoscopic setups. The experimental trials involved eleven participants providing evaluations for a range of compliant samples, and analyzed their force output as well as their subjective evaluation of force output. The results of this investigation show that the accuracy of compliance discrimination is worse when using indirect probing compared to direct probing, and that the force used in direct probing is lower than in the indirect scenario. Further, the subjective assessment of force output in direct probing is not significantly different from that in indirect probing. Further research involving more replication, experts in laparoscopy, and a focus on grip force is recommended to better understand our awareness of subjective force output.
42

VEmap: A Visualization Tool for Evaluating Emotional Responses in Virtual Environments

Zhu, Hong January 2009 (has links)
VEmap (virtual emotion map) is an application of virtual environment (VE) technology that supports design activities in architecture and urban planning by helping designers understand users’ opinions. The aim of this research and development work is to create a software application that allows designers to evaluate a user’s emotional response to virtual representations of architectural or urban planning environments. In this project, a galvanic skin response (GSR) test is adopted as an objective measurement for collecting skin conductance data representing emotional arousal. At the same time, the user’s self-reports are used as a subjective measurement for identifying emotional valence (i.e. positive, neutral, or negative). Finally, all of the information collected from both GSR readings (objective measurement) and self-reports (subjective measurement) is converted into coloured dots on the base map of the corresponding VE. The beta-testing and evaluation procedure confirmed that VEmap can interpret most users’ emotional changes as evoked by the VE. From a usability perspective, participants had no obvious difficulty with any of the controls. Moreover, according to participants’ comments, VEmap may increase users’ interest and promote their involvement if it is applied in architectural design and urban planning. However, gender might influence the self-report component, and virtual reality usage or 3D game experience might affect navigation in the VE.
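To make the measurement-fusion step concrete, here is a minimal sketch of how one GSR reading and one self-report might be converted into a coloured dot on the base map. The `Sample` structure, the colour scheme, the per-user normalisation, and all thresholds are hypothetical illustrations, not details taken from the thesis.

```python
from dataclasses import dataclass

# Hypothetical sample: one GSR reading paired with a self-report
# at a known (x, y) position on the VE base map.
@dataclass
class Sample:
    x: float          # map coordinate (assumed units)
    y: float
    gsr: float        # skin conductance in microsiemens (arousal proxy)
    valence: str      # self-report: "positive", "neutral", or "negative"

# Assumed colour scheme: hue encodes valence, dot size encodes arousal.
VALENCE_HUE = {"positive": "green", "neutral": "yellow", "negative": "red"}

def to_dot(sample: Sample, gsr_baseline: float, gsr_max: float) -> dict:
    """Convert one fused measurement into a coloured-dot descriptor."""
    # Normalise arousal against a per-user baseline (an assumption made
    # here for illustration; the thesis's exact normalisation may differ).
    arousal = (sample.gsr - gsr_baseline) / max(gsr_max - gsr_baseline, 1e-9)
    arousal = min(max(arousal, 0.0), 1.0)
    return {
        "pos": (sample.x, sample.y),
        "colour": VALENCE_HUE[sample.valence],
        "radius": 2.0 + 8.0 * arousal,   # larger dot = stronger arousal
    }

dots = [to_dot(s, gsr_baseline=2.0, gsr_max=12.0)
        for s in [Sample(10.5, 3.2, 7.4, "negative")]]
print(dots)   # a renderer would draw these onto the VE base map
```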
43

Matrix Representations and Extension of the Graph Model for Conflict Resolution

Xu, Haiyan January 2009 (has links)
The graph model for conflict resolution (GMCR) provides a convenient and effective means to model and analyze a strategic conflict. Standard practice is to carry out a stability analysis of a graph model, and then to follow up with a post-stability analysis, two critical components of which are status quo analysis and coalition analysis. In stability analysis, an equilibrium is a state that is stable for all decision makers (DMs) under appropriate stability definitions or solution concepts. Status quo analysis aims to determine whether a particular equilibrium is reachable from a status quo (or an initial state) and, if so, how to reach it. A coalition is any subset of a set of DMs. The coalition stability analysis within the graph model is focused on the status quo states that are equilibria and assesses whether states that are stable from individual viewpoints may be unstable for coalitions. Stability analysis began with a simple preference structure, which includes a relative preference relationship and an indifference relation. Subsequently, preference uncertainty and strength of preference were introduced into GMCR but not formally integrated. In this thesis, two new preference frameworks, hybrid preference and multiple-level preference, and an integrated algebraic approach are developed for GMCR. Hybrid preference extends existing preference structures to combine preference uncertainty and strength of preference into GMCR. A multiple-level preference framework expands GMCR to handle a more general and flexible structure than any existing system representing strength of preference. An integrated algebraic approach reveals a link among traditional stability analysis, status quo analysis, and coalition stability analysis by using matrix representation of the graph model for conflict resolution. To integrate the three existing preference structures into a hybrid system, a new preference framework is proposed for graph models using a quadruple relation to express strong or mild preference of one state or scenario over another, equal preference, and an uncertain preference. In addition, a multiple-level preference framework is introduced into the graph model methodology to handle multiple-level preference information, which lies between relative and cardinal preferences in information content. The existing structure with strength of preference takes into account that if a state is stable, it may be either strongly stable or weakly stable in the context of three levels of strength. However, the three-level structure is limited in its ability to depict the intensity of relative preference. In this research, four basic solution concepts, consisting of Nash stability, general metarationality, symmetric metarationality, and sequential stability, are defined at each level of preference for the graph model with the extended multiple-level preference. The development of the two new preference frameworks expands the realm of applicability of the graph model and provides new insights into strategic conflicts so that more practical and complicated problems can be analyzed at greater depth. Because a graph model of a conflict consists of several interrelated graphs, it is natural to ask whether well-known results of Algebraic Graph Theory can help analyze a graph model. Analysis of a graph model involves searching paths in a graph, but an important restriction of a graph model is that no DM can move twice in succession along any path.
(If a DM can move consecutively, then this DM's graph is effectively transitive. Prohibiting consecutive moves thus allows for graph models with intransitive graphs, which are sometimes useful in practice.) Therefore, a graph model must be treated as an edge-weighted, colored multidigraph in which each arc represents a legal unilateral move and distinct colors refer to different DMs. The weight of an arc could represent some preference attribute. Tracing the evolution of a conflict in status quo analysis is converted to searching all colored paths from a status quo to a particular outcome in an edge-weighted, colored multidigraph. Generally, an adjacency matrix can determine a simple digraph and all state-by-state paths between any two vertices. However, if a graph model contains multiple arcs between the same two states controlled by different DMs, the adjacency matrix would be unable to track all aspects of conflict evolution from the status quo. To bridge the gap, a conversion function using the matrix representation is designed to transform the original problem of searching edge-weighted, colored paths in a colored multidigraph to a standard problem of finding paths in a simple digraph with no color constraints. As well, several unexpected and useful links among status quo analysis, stability analysis, and coalition analysis are revealed using the conversion function. The key input of stability analysis is the reachable list of a DM, or a coalition, by a legal move (in one step) or by a legal sequence of unilateral moves, from a status quo in 2-DM or $n$-DM ($n > 2$) models. A weighted reachability matrix for a DM or a coalition along weighted colored paths is designed to construct the reachable list using the aforementioned conversion function. The weight of each edge in a graph model is defined according to the preference structure, for example, simple preference, preference with uncertainty, or preference with strength. Furthermore, a graph model and the four basic graph model solution concepts are formulated explicitly using the weighted reachability matrix for the three preference structures. The explicit matrix representation for conflict resolution (MRCR) facilitates stability calculations in both 2-DM and $n$-DM ($n > 2$) models for the three existing preference structures. In addition, the weighted reachability matrix by a coalition is used to produce matrix representations of coalition stabilities in multiple-decision-maker conflicts for the three preference frameworks. Solution concepts in the graph model were traditionally defined logically, in terms of the underlying graphs and preference relations. When status quo analysis algorithms were developed, this line of thinking was retained and pseudo-code was developed following a similar logical structure. However, as was noted in the development of the decision support system (DSS) GMCR II, the nature of logical representations makes coding difficult. The DSS GMCR II is available for basic stability analysis and status quo analysis within simple preference, but is difficult to modify or adapt to other preference structures. Compared with existing graphical or logical representations, matrix representation for conflict resolution (MRCR) is more effective and convenient for computer implementation and for adapting to new analysis techniques.
Moreover, because of the inherent link between stability analysis and post-stability analysis presented here, the proposed algebraic approach establishes an integrated paradigm of matrix representation for the graph model for conflict resolution.
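To illustrate the path-search restriction just described, here is a minimal sketch of computing a reachable list under the rule that no DM moves twice in succession, using one 0–1 adjacency matrix per DM. The four-state, two-DM conflict is invented for illustration; the thesis's weighted, matrix-algebraic formulation is more general.

```python
import numpy as np

# Hypothetical 4-state, 2-DM graph model. J[d][i, j] = 1 iff DM d
# has a legal unilateral move from state i to state j.
J = {
    1: np.array([[0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]]),
    2: np.array([[0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]]),
}

def reachable(status_quo: int, dms) -> set:
    """States reachable by legal sequences of unilateral moves,
    where no DM may move twice in succession (the graph model rule)."""
    # Search over (state, last mover) pairs rather than states alone,
    # which is one simple way to respect the colour constraint.
    frontier = [(status_quo, None)]
    seen, result = {(status_quo, None)}, set()
    while frontier:
        state, last = frontier.pop()
        for d in dms:
            if d == last:        # a DM may not move twice in a row
                continue
            for nxt in np.nonzero(J[d][state])[0]:
                result.add(int(nxt))
                if (int(nxt), d) not in seen:
                    seen.add((int(nxt), d))
                    frontier.append((int(nxt), d))
    return result

print(reachable(0, dms=(1, 2)))   # {1, 2, 3} for this toy model
```

Tracking (state, last mover) pairs is one simple way to honour the colour constraint; the thesis's conversion function achieves the same effect algebraically, by transforming the coloured multidigraph into a simple digraph.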
44

Modeling and Analysis of Location Service Management in Vehicular Ad Hoc Networks

Saleet, Hanan January 2010 (has links)
Recent technological advances in wireless communication and the pervasiveness of various wireless communication devices have offered novel and promising solutions to enable vehicles to communicate with each other, establishing a decentralized communication system. An emerging solution in this area is the Vehicular Ad Hoc Network (VANET), in which vehicles cooperate in receiving and delivering messages to each other. VANETs can provide a viable alternative in situations where existing infrastructure communication systems become overloaded, fail (due, for instance, to natural disaster), or become inconvenient to use. Nevertheless, the success of VANETs revolves around a number of key elements, an important one of which is the way messages are routed between sources and destinations. Without an effective message routing strategy, VANETs' success will continue to be limited. In order for messages to be routed to a destination effectively, the location of the destination must be determined. Since vehicles move quickly and in a random manner, determining the location of the destination vehicle, and hence the optimal message routing path to it, constitutes a major challenge. Recent approaches for tackling this challenge have resulted in a number of Location Service Management Protocols. Though these protocols have demonstrated good potential, they still suffer from a number of impediments, including signalling volume (particularly in large-scale VANETs), inability to deal with network voids, and inability to leverage locality for communication between the network nodes. In this thesis, a Region-based Location Service Management Protocol (RLSMP) is proposed. The protocol is a self-organizing framework that uses message aggregation and geographical clustering to minimize the volume of signalling overhead. To the best of my knowledge, RLSMP is the first protocol that uses message aggregation in both updating and querying, and as such it promises scalability, locality awareness, and fault tolerance. Location service management further addresses the issue of routing location updating and querying messages. Updating and querying messages should be exchanged between the network nodes and the location servers with minimum delay. This necessity introduces a pressing need to support Quality of Service (QoS) routing in VANETs. To mitigate the QoS routing challenge in VANETs, the thesis proposes an Adaptive Message Routing (AMR) protocol that utilizes the network's local topology information to find the route with minimum end-to-end delay, while maintaining the required thresholds for connectivity probability and hop count. The QoS routing problem is formulated as a constrained optimization problem for which a genetic algorithm is proposed. The thesis presents experiments to validate the proposed protocol and test its performance under various network conditions.
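As an illustration of how such a QoS-constrained routing problem can be handed to a genetic algorithm, here is a minimal sketch: chromosomes are loop-free paths, and fitness is end-to-end delay plus penalties for violating hop-count and connectivity-probability thresholds. The toy road graph, the thresholds, and the simplified crossover are all invented for illustration and are not the thesis's AMR design.

```python
import random

# Toy road-network graph: node -> {neighbour: (delay, link connectivity prob)}.
# All values are invented for illustration.
G = {
    'A': {'B': (2.0, 0.9), 'C': (1.0, 0.7)},
    'B': {'A': (2.0, 0.9), 'D': (2.5, 0.95)},
    'C': {'A': (1.0, 0.7), 'D': (4.0, 0.8)},
    'D': {},
}
SRC, DST = 'A', 'D'
MAX_HOPS, MIN_CONN = 3, 0.6          # assumed QoS thresholds

def rebuild_tail(prefix):
    """Extend a loop-free path prefix by a random walk until DST."""
    path, node = list(prefix), prefix[-1]
    while node != DST:
        choices = [n for n in G[node] if n not in path]
        if not choices:
            return None              # dead end: discard this chromosome
        node = random.choice(choices)
        path.append(node)
    return path

def fitness(path):
    """Lower is better: end-to-end delay plus penalties for violated
    hop-count and connectivity-probability constraints."""
    delay, conn = 0.0, 1.0
    for u, v in zip(path, path[1:]):
        d, p = G[u][v]
        delay += d
        conn *= p
    penalty = 100.0 * ((len(path) - 1 > MAX_HOPS) + (conn < MIN_CONN))
    return delay + penalty

def evolve(pop_size=20, generations=30):
    pop = [p for p in (rebuild_tail([SRC]) for _ in range(pop_size * 3)) if p]
    pop = pop[:pop_size]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            # "Crossover" here simply regenerates the tail of a parent from
            # a random cut point -- a simplification of the thesis's GA.
            parent = random.choice(survivors)
            child = rebuild_tail(parent[:random.randrange(1, len(parent))])
            children.append(child if child else random.choice(survivors))
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())   # typically ['A', 'B', 'D'] under these numbers
```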
45

Developing Effective Rule-Based Training Using the Cognitive Work Analysis Framework

Robinson, Thomas Alan January 2013 (has links)
Cognitive Work Analysis (CWA) is a framework for the analysis of complex systems that involve technological tools and human operators who must operate the systems in sometimes-unexpected situations. CWA consists of five phases that cover analysis of ecological aspects of systems, such as the tools in the system and the environment, and cognitive aspects, related to the human operators in the systems. Human operators often need to be trained to use complex systems, and training research has been conducted related to the ecological aspects of CWA; however, there is a gap in the training research related to the cognitive aspects of CWA. The Skills-Rules-Models (SRM) framework is currently the main method of cognitive analysis for CWA; therefore, to begin to fill the gap in the training literature at the cognitive end of CWA, this dissertation examines the need for training as related to SRM. In this dissertation, the current research related to training and CWA is reviewed and literature on the nature of expertise as related to SRM is examined. From this review, the need for training rules that can be used in unexpected situations, as a means of reducing cognitive demand, is identified. In order to assist operators with developing the knowledge to use such rule-based behaviour, training needs and methods to meet those needs must be identified. The reuse of knowledge in new situations is the essence of training transfer, so ideas from training transfer are used to guide the development of three guidelines for determining rules that might be transferable to new situations. Then, methods of developing rules that fit those three guidelines, using information from the Work Domain Analysis (WDA) and Control Task Analysis (CTA) phases of CWA, are presented. In addition to methods for identifying training needs, methods for meeting those training needs are required. The review of training transfer also identified the need to use contextual examples in training while still avoiding examples that place too high a cognitive demand on the operator, possibly reducing learning. Therefore, as a basis for designing training material that imposes a reduced cognitive demand, Cognitive Load Theory (CLT) is reviewed, and methods for reducing cognitive demand are discussed. To demonstrate the methods of identifying and meeting training needs, two examples are presented. First, two sets of training rules are created based on the results of the application of the WDA and CTA phases of CWA for two different work domains: Computer Algebra Systems and the programming language Logo. Then, from these sets of rules, instructional materials are developed using methods based on CLT to manage the cognitive demands of the instructions. Finally, two experiments are presented that test how well operators learn from the instructional materials. The experiments provide support for the effectiveness of applying CLT to the design of instructional materials based on the sets of rules developed. This combined work represents a new framework for identifying and meeting training needs related to the cognitive aspects of CWA.
46

Design of a Propulsion System for Swimming Under Low Reynolds Flow Conditions

Wybenga, Michael William January 2007 (has links)
This work focuses on the propulsion of swimming micro-robots through accessible, quasi-static, fluid-filled environments of the human body. The operating environment dictates that the system must function under low Reynolds number flow conditions. In this fluidic regime, viscous forces dominate. Inspiration is drawn from biological examples of propulsion systems that exploit the dominance of viscous forces. A system based on the prokaryotic flagellum is chosen due to its simplicity; it is essentially a rigid helix that rotates about its base. To eliminate the piercing threat posed by a rigid helix, a propulsion system utilizing a flexible filament is proposed. The filament is designed such that under rotational load, and the resulting viscous drag, it contorts into a helix and provides propulsive force. Four mathematical models are created to investigate the behaviour of the proposed flexible filament. An experimental prototype of the flexible tail is built for similar purposes. An experimental rigid tail is also built to serve as a benchmark. The experimental results for propulsive force generated by the rigid tail match the Resistive-Force Theory (RFT) model. An analysis of the system concludes that experimental error is likely minor. An ADAMS model of the rigid tail, as a result of modelling error, under-predicts the propulsive force. The experimental flexible filament shows that the proposed propulsion system is feasible. When actuated, the tail contorts into a 'helix-like' shape and generates propulsive force. An ADAMS model of an ideal flexible filament shows that, if a complete helix is formed, there is no loss in performance when compared to a rigid counterpart. The experimental filament is too stiff to form a complete helix and, accordingly, the ADAMS model does not simulate the filament well. To reduce this discrepancy, a second ADAMS model is created that attempts to simulate the experimental filament directly, rather than an ideal one. Nevertheless, the second ADAMS model gives confidence that a multi-body dynamic model using lumped-parameter drag forces, after further modifications, can simulate the experimental flexible filament well.
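For reference, the classical resistive-force-theory estimate behind the rigid-tail benchmark can be written as follows. These are the textbook Gray–Hancock drag coefficients and the leading-order thrust of a rotating, non-translating helix; the thesis's RFT model may differ in detail.

```latex
% Gray--Hancock resistive-force coefficients for a slender filament
% of radius r and helical wavelength \lambda in fluid of viscosity \mu:
\begin{align}
  \xi_{\parallel} \approx \frac{2\pi\mu}{\ln(2\lambda/r) - 1/2},
  \qquad
  \xi_{\perp} \approx \frac{4\pi\mu}{\ln(2\lambda/r) + 1/2}.
\end{align}
% Because \xi_{\perp} > \xi_{\parallel}, a helix of pitch angle \theta and
% helical radius R rotating at angular speed \omega produces, to leading
% order and at zero swimming speed, an axial thrust over contour length L of
\begin{align}
  F_{\text{thrust}} \approx
  (\xi_{\perp} - \xi_{\parallel})\,\sin\theta\cos\theta\,(\omega R)\,L.
\end{align}
```

The drag anisotropy between the normal and tangential directions is precisely the viscous effect the abstract says these propulsion systems exploit.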
47

Dynamics and Control of a Piano Action Mechanism

Izadbakhsh, Adel January 2006 (has links)
The piano action is the mechanism that transforms the finger force applied to a key into the motion of a hammer that strikes a piano string. This thesis focuses on improving the fidelity of the dynamic model of a grand piano action that was previously developed by Hirschkorn et al. at the University of Waterloo. This model is the state-of-the-art dynamic model of the piano action in the literature and is based on the real components of the piano action mechanism (key, whippen, jack, repetition lever, and hammer). Two main areas for improving the fidelity of the dynamic model are the hammer shank and the connection point between the key and the ground. The hammer shank is a long narrow wooden rod, and its flexibility has been confirmed by observation with a high-speed video camera. In previous work, the piano hammer had been modelled as a rigid body. In this work, a Rayleigh beam model is used to model the flexible behaviour of the hammer shank. By comparing the experimental and analytical results, it turns out that the flexibility of the hammer shank does not significantly affect the rotation of the other parts of the piano mechanism, compared with the case in which the hammer shank is modelled as a rigid part. However, the flexibility of the hammer shank changes the impact velocity of the hammer head, and also causes a greater scuffing motion for the hammer head during contact with the string. The connection of the piano key to the ground had been simply modelled with a revolute joint, but the physical form of the connection at that point suggests that a revolute-prismatic joint with a contact force underneath better represents this connection. By comparing the experimental and analytical results, it is concluded that incorporating this new model significantly increases the fidelity of the model for the key blows. In order to test the accuracy of the dynamic model, an experimental setup, including a servo motor, a load cell, a strain gauge, and three optical encoders, is built. The servo motor is used to actuate the piano key. Since the purpose of the motor is to consistently mimic the finger force of the pianist, the output torque of the motor is controlled. To overcome the problem associated with the motor torque control method used in previous work, a new torque control method is implemented on a real-time PC and better control of the motor torque output is established. Adding a more realistic model of the piano string to the current piano action model, and finding a better contact model for the contacts that occur between surfaces made of felt (or leather), are two main areas for future research. Work in these two areas will further increase the fidelity of the present piano action model.
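For context, the Rayleigh beam model mentioned above is the Euler–Bernoulli bending equation augmented with a rotary-inertia term. The generic form below uses locally defined symbols and is not transcribed from the thesis.

```latex
% w(x,t): transverse deflection of the hammer shank; \rho: density;
% A: cross-sectional area; I: area moment of inertia; E: Young's
% modulus; f(x,t): distributed transverse load. The second term is
% the rotary-inertia correction that distinguishes a Rayleigh beam
% from an Euler--Bernoulli beam.
\begin{equation}
  \rho A \frac{\partial^{2} w}{\partial t^{2}}
  - \rho I \frac{\partial^{4} w}{\partial x^{2}\,\partial t^{2}}
  + E I \frac{\partial^{4} w}{\partial x^{4}} = f(x,t)
\end{equation}
```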
48

Automated Multiple Point Stimulation Technique for Motor Unit Number Estimation

Marzieh, Abdollahi 28 September 2007 (has links)
Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to estimate the number of motor units (MUs) in a muscle. In this thesis, a new MUNE technique, called Automated MPS, has been developed to overcome the shortcomings of two current techniques, namely multiple point stimulation (MPS) and MUESA. This method can be summarized as follows. First, a muscle is stimulated with a train of constant-intensity current pulses. Depending on various factors, one to three MUs activate probabilistically after each pulse, and several responses are collected. These collected responses should be divided into up to 2^n clusters, such that each cluster represents one possible combination of n Surface-detected Motor Unit Potentials (SMUPs). After clustering the collected responses, the average response of each cluster is calculated, the outliers are excluded, and similar groups are merged together. Then, depending on the number of response set groups, a decomposition technique is applied to the response clusters to obtain the n constituent SMUPs. To estimate the number of MUs, the aforementioned process is repeated several times until enough SMUPs have been acquired to calculate a reliable mean-SMUP. The number of MUs can then be determined by dividing the maximal compound muscle action potential (CMAP) size by the mean-SMUP size. The focus of this thesis was on using pattern recognition techniques to detect n SMUPs from a collected set of waveforms. Several experiments were performed using both simulated and real data to evaluate the ability of Automated MPS to find the constituent SMUPs of a response set. Our experiments showed that Automated MPS requires less operator experience than MPS. Moreover, it can deal with more difficult situations and estimate SMUPs more accurately than MUESA.
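The following sketch illustrates the cluster-then-decompose idea on an invented n = 2 example: responses fall into up to 2^n combinations, cluster averages are formed, and SMUPs are recovered by subtraction. The waveform shapes, firing probabilities, and the k-means step are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

# Invented n = 2 example: each stimulus activates each MU independently,
# so every collected response is one of the 2^n = 4 combinations
# {baseline, B, A, A+B}, plus recording noise.
t = np.linspace(0, 1, 64)
smup_a = 1.0 * np.exp(-((t - 0.3) / 0.05) ** 2)   # hypothetical SMUP shapes
smup_b = 0.6 * np.exp(-((t - 0.5) / 0.08) ** 2)

responses = np.array([
    (rng.random() < 0.5) * smup_a + (rng.random() < 0.5) * smup_b
    + rng.normal(0, 0.02, t.size)
    for _ in range(200)
])

# Cluster into up to 2^n groups, then recover the constituent SMUPs
# from the cluster averages by subtraction. Sorting the cluster means
# by energy puts the noise-only baseline first and A+B last.
centroids, _ = kmeans2(responses, k=4, minit='++', seed=1)
baseline, low, high, both = centroids[np.argsort((centroids ** 2).sum(axis=1))]
est_a, est_b = high - baseline, low - baseline

# Toy MUNE: maximal CMAP size divided by mean-SMUP size (peak-to-peak
# amplitudes here); the true answer for this example is 2.
mean_smup = (est_a + est_b) / 2
print("MUNE estimate:", np.ptp(smup_a + smup_b) / np.ptp(mean_smup))
```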
49

Structure from Infrared Stereo Images

Hajebi, Kiana January 2007 (has links)
With the rapid growth in infrared sensor technology and its drastic cost reduction, the potential for applying these imaging technologies in computer vision systems has increased. One potential application for IR imaging is depth from stereo. Discerning depth from stereopsis is difficult because the quality of un-cooled sensors is not sufficient for generating dense depth maps. In this thesis, we investigate the production of sparse disparity maps from un-calibrated infrared stereo images and argue that, while a dense depth field may not be attained directly from IR stereo images, a sparse depth field may be obtained that can be interpolated to produce a dense/semi-dense depth field. In our proposed technique, the sparse disparity map is produced by a robust feature-based stereo matching method capable of dealing with the problems of infrared images, such as low resolution and high noise. Initially, a set of stable features is extracted from the stereo pairs using the phase congruency model, which, contrary to gradient-based feature detectors, provides features that are invariant to geometric transformations. Then, a set of Log-Gabor wavelet coefficients at different orientations and frequencies is used to analyze and describe the extracted features for matching. The resulting sparse disparity map is then refined by triangular and epipolar geometric constraints. To densify the sparse map, a watershed transformation is applied to divide the image into several segments, where the disparity inside each segment is assumed to vary smoothly. The surface of each segment is then reconstructed independently by fitting a spline to its known disparities. Experiments on a set of indoor and outdoor IR stereo pairs lend credibility to the robustness of our IR stereo matching and surface reconstruction techniques and hold promise for low-resolution stereo images that lack strong texture and local detail.
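As a concrete reference for the descriptor stage, here is a minimal numpy construction of a single-scale, single-orientation Log-Gabor filter applied in the frequency domain. The centre frequency, bandwidth ratio, and orientation spread are illustrative defaults, not the values used in the thesis.

```python
import numpy as np

def log_gabor(shape, f0=0.1, sigma_ratio=0.65, theta0=0.0, sigma_theta=np.pi / 8):
    """2-D Log-Gabor filter in the frequency domain.

    Radial part: exp(-(ln(f/f0))^2 / (2 ln(sigma_ratio)^2)) -- zero at DC.
    Angular part: Gaussian in orientation around theta0.
    Parameter defaults are illustrative, not from the thesis.
    """
    rows, cols = shape
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    f = np.sqrt(U ** 2 + V ** 2)
    f[0, 0] = 1.0                       # avoid log(0); DC is zeroed below
    radial = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                  # a Log-Gabor has no DC component

    theta = np.arctan2(V, U)
    dtheta = np.angle(np.exp(1j * (theta - theta0)))   # wrapped difference
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

# Filter an image at one scale/orientation; the complex response's
# magnitude and phase would feed the feature descriptors.
img = np.random.rand(128, 128)          # stand-in for an IR frame
response = np.fft.ifft2(np.fft.fft2(img) * log_gabor(img.shape))
print(response.shape, np.abs(response).mean())
```

A bank of such filters over several scales (f0) and orientations (theta0) yields the coefficient set described above.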
50

Contact Dynamics Modelling for Robotic Task Simulation

Gonthier, Yves 09 October 2007 (has links)
This thesis presents the theoretical derivations and the implementation of a contact dynamics modelling system based on compliant contact models. The system was designed to be used as a general-purpose modelling tool to support the task planning process for space-based robot manipulator systems. This operational context imposes additional requirements on the contact dynamics modelling system beyond the usual ones of fidelity and accuracy. The system must not only be able to generate accurate and reliable simulation results, but it must do so in a reasonably short period of time, such that an operations engineer can investigate multiple scenarios within a few hours. The system must also be easy to interface with existing simulation facilities. All physical parameters of the contact model can be identified experimentally or can be obtained by other means through analysis or theoretical derivations based on the material properties. Similarly, the numerical parameters can be selected automatically or by using heuristic rules that give an indication of the range of values that would ensure that the simulation results are qualitatively correct. The contact dynamics modelling system comprises two contact models. First, a point contact model is proposed to tackle simulations involving bodies with non-conformal surfaces. Since it is based on Hertz theory, the contacting surfaces must be smooth and without discontinuity, i.e., no corners or sharp edges. The point contact model includes normal damping and tangential friction and assumes the contact surface is very small, so that the contact force can be taken to act through a point. An expression to set the normal damping as a function of the effective coefficient of restitution is given. A new seven-parameter friction model is introduced. The friction model is based on a bristle friction model, and is adapted to the context of 3-dimensional frictional impact modelling with the introduction of load-dependent bristle stiffness and damping terms, and with the expression of the bristle deformation in vectorial form. The model features a dwell-time stiction force dependency and is shown to be able to reproduce the dynamic nature of the friction phenomenon. A second contact model, based on the Winkler elastic foundation model, is then proposed to deal with a more general class of geometries. This so-called volumetric contact model is suitable for a broad range of contact geometries, as long as the contact surface can be approximated as being flat. A method to deal with objects where this latter approximation is not reasonable is also presented. The effect of the contact pressure distribution across the contact surface is accounted for in the form of the rolling resistance torque and spinning friction torque. It is shown that the contact forces and moments can be expressed in terms of the volumetric properties of the volume of interference between the two bodies, defined as the volume spanned by the intersection of the two undeformed geometries of the colliding bodies. The properties of interest are: the volume of the volume of interference, the position of its centroid, and its inertia tensor taken about the centroid. The analysis also introduces a new way of defining the contact normal; it is shown that the contact normal must correspond to one of the eigenvectors of the inertia tensor. The investigation also examines how the Coulomb friction is affected by the relative motion of the objects. The concept of average surface velocity is introduced.
It accounts for both the relative translational and angular motions of the contacting surfaces. The average surface velocity is then used to find dimensionless factors that relate the friction force and the spinning torque caused by the Coulomb friction. These latter factors are labelled the Contensou factors. Also, the radius of gyration of the moment of inertia of the volume of interference about the contact normal is shown to correlate the spinning Coulomb friction torque with the translational Coulomb friction force. A volumetric version of the seven-parameter bristle friction model is then presented. The friction model includes both the tangential friction force and the spinning friction torque. The Contensou factors are used to control the behaviour of the Coulomb friction. For both contact models, the equations are derived from first principles, and the behaviour of each contact model characteristic was studied and simulated. When available, the simulation results were compared with benchmark results from the literature. Experiments were performed to validate the point contact model using a six-degrees-of-freedom manipulator holding a half-spherical payload and coming into contact with a flat plate. Good correspondence between the simulated and experimental results was obtained.
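To ground the idea of a compliant normal-contact law with restitution-based damping, here is a minimal sketch in the Hunt–Crossley family: Hertzian stiffness plus a hysteresis damping term whose gain is set from a target coefficient of restitution. The damping expression, the constants, and the bounce test are illustrative assumptions; the thesis derives its own damping expression.

```python
def hunt_crossley_force(delta, delta_dot, k, e, v_impact):
    """Compliant normal contact force of the Hunt-Crossley family:
    Hertzian stiffness k * delta^1.5 with hysteresis damping whose gain
    is set from a target coefficient of restitution e.

    The gain lam = 3*(1 - e)*k / (2*v_impact) is one widely used
    approximation (best for e near 1), not the thesis's expression.
    """
    if delta <= 0.0:
        return 0.0                      # bodies are separated
    lam = 3.0 * (1.0 - e) * k / (2.0 * v_impact)
    # Clamp at zero: a compliant contact can push but never pull.
    return max(k * delta ** 1.5 + lam * delta ** 1.5 * delta_dot, 0.0)

# Toy bounce test: a 1 kg point mass hits a plane at x = 0 moving 1 m/s
# downward; the rebound/impact speed ratio should come out near e.
# All numbers are invented; gravity is omitted to keep the check clean.
m, k, e = 1.0, 1e7, 0.9
x, v, dt = 0.001, -1.0, 1e-6            # start 1 mm above the plane
for _ in range(200_000):
    f = hunt_crossley_force(-x, -v, k, e, v_impact=1.0)
    v += (f / m) * dt
    x += v * dt
    if x > 0.0 and v > 0.0:             # separated and moving away: done
        break
print("rebound/impact speed ratio ~", round(v, 2))   # close to e
```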
