81

Application of Data Mining in Medical Applications

Eapen, Arun George January 2004 (has links)
Data mining is a relatively new field of research whose major objective is to acquire knowledge from large amounts of data. In medical and health care areas, regulations and the widespread use of computers have made large amounts of data available. Practitioners are expected to use all of this data in their work, yet such volumes cannot be processed by humans quickly enough to support diagnosis, prognosis, and treatment scheduling. A major objective of this thesis is to evaluate data mining tools in medical and health care applications and to develop a tool that can help make timely and accurate decisions. Two medical databases are considered: one for describing the various tools and the other as the case study. The first database relates to breast cancer and consists of 10 attributes; the second is the minimum data set for mental health (MDS-MH), with 455 attributes. Because many data mining algorithms and tools are available, only a few are evaluated on these applications to develop classification rules that can be used for prediction. Our results indicate that for the major case study, the mental health problem, results with 70 to 80% accuracy are possible. A further extension of this work is to make the classification rules available on mobile devices such as PDAs: patient information is entered directly on the PDA and classified against the rules stored on the device, providing real-time assistance to practitioners.
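The thesis evaluates existing data mining tools on its two databases; as a rough, hypothetical illustration of the kind of rule-producing classification it describes, the sketch below fits a shallow decision tree to the Wisconsin breast cancer data bundled with scikit-learn (a stand-in for the 10-attribute breast cancer database, not the actual tools or data used in the thesis) and prints the resulting rules.

```python
# Hypothetical sketch of rule-based classification; not the thesis's tooling or data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A shallow tree keeps the rule set small enough to store on a handheld device.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
print(export_text(clf, feature_names=list(load_breast_cancer().feature_names)))
```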
82

Examining the Process of Automation Development and Deployment

Barsalou, Edward January 2005 (has links)
In order to develop a better understanding of how automated systems are developed and deployed, this thesis examines aspects of project execution and knowledge transfer in the context of a large automation project.

Background issues of project execution are examined, including the challenges of knowledge sharing during development and a brief discussion of measures of project success. The lifecycle of a large automation project is presented, covering the development team and the design challenges inherent in delivering a successful automation project, which consisted of approximately 11,000 hours of combined effort by vendor and customer development teams.

Human factors aspects of large automation projects are then explored, including the workings of a large project team, its cognitive aspects, and ecological aspects of the automation development process.

Using an interview methodology that can be termed the "echo method", project team members were interviewed to elicit helpful and unhelpful behaviours exhibited by other team members throughout the project. The interview comments are categorized, common themes are identified, and both are examined in the context of knowledge management and social networks.

Results indicate that team member experience and availability affect overall team performance. However, overlapping capabilities within a team allow it to adapt to changing circumstances and to overcome gaps in team member availability. A better understanding of team interactions and capabilities supports improvements in project performance, ultimately delivering higher quality automation and streamlining the development process.
83

Dynamic Model of a Piano Action Mechanism

Hirschkorn, Martin C. January 2004 (has links)
While some attempts have been made to model the behaviour of the grand piano action (the mechanism that translates a key press into a hammer striking a string), most researchers have reduced the system to a simple model with little relation to the components of a real action. While such models are useful for certain applications, they are not appropriate as design tools for piano makers, since the model parameters have little physical meaning and must be calibrated from the behaviour of a real action.

A new model for a piano action is proposed in this thesis. The model treats each of the five main action components (key, whippen, jack, repetition lever, and hammer) as a rigid body. The action model also incorporates a contact model to determine the normal and friction forces at 13 locations between each of the contacting bodies. All parameters in the model are directly measured from the physical properties of individual action components, allowing the model to be used as a prototyping tool for actions that have not yet been built.

To test whether the model can accurately predict the behaviour of a piano action, an experimental apparatus was built. Based around a keyboard from a Boston grand piano, the apparatus uses an electric motor to actuate the key, a load cell to measure applied force, and optical encoders and a high speed video camera to measure the positions of the bodies. The apparatus was found to produce highly repeatable, reliable measurements of the action.

The behaviour of the action model was compared to the measurements from the experimental apparatus for several types of key blows from a pianist. A qualitative comparison showed that the model could very accurately reproduce the behaviour of a real action for high force blows. When the forces were lower, the behaviour of the action model was still reasonable, but some discrepancy from the experimental results could be seen.

In order to reduce the discrepancy, it was recommended that certain improvements could be made to the action model. Rigid bodies, most importantly the key and hammer, should be replaced with flexible bodies. The normal contact model should be modified to account for the speed-independent behaviour of felt compression. Felt bushings that are modelled as perfect revolute joints should instead be modelled as flexible contact surfaces.
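The abstract does not spell out the contact law used at the 13 contact locations; purely as a hedged illustration of a compliant normal-contact force of the sort often used for felt interfaces, the sketch below uses a power-law stiffness with a rate-dependent term and placeholder constants, not the thesis's measured parameters or its actual contact model.

```python
def felt_contact_force(compression, rate, k=5.0e5, p=2.5, c=50.0):
    """Illustrative compliant contact law F = k*x**p + c*x**p*xdot (placeholder constants).

    `compression` is the felt compression in metres and `rate` its time derivative;
    the force is zero when the bodies are separated. This is a generic sketch, not
    the contact model identified in the thesis.
    """
    if compression <= 0.0:
        return 0.0
    return k * compression**p + c * compression**p * rate

# Example: force during a 1 mm compression closing at 0.2 m/s.
print(felt_contact_force(1.0e-3, 0.2))
```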
84

Unsupervised Clustering and Automatic Language Model Generation for ASR

Podder, Sushil January 2004 (has links)
The goal of an automatic speech recognition system is to enable the computer to understand human speech and act accordingly. Language modeling plays an important role in realizing this goal: it works as a knowledge source, mimicking the human comprehension mechanism for understanding language. Among many approaches, statistical language modeling is widely used in automatic speech recognition systems. However, generating a reliable and robust statistical model is a very difficult task, especially for a large-vocabulary system, where the performance of the language model degrades as the vocabulary size increases. The performance of the speech recognition system degrades in turn because of the increased complexity and mutual confusion among candidate words in the language model. Solving these problems requires reducing the language model size as well as minimizing the mutual confusion between words. In this work, clustering techniques based on a self-organizing map are employed to build topical language models. Moreover, to capture the inherent semantics of sentences, the lexical dictionary WordNet is used in the clustering process. This thesis focuses on various aspects of clustering, language model generation, and the extraction of task-dependent acoustic parameters, and on their implementation within the framework of the CMU Sphinx3 speech decoder. The preliminary results presented in this thesis show the effectiveness of the topical language models.
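As a hedged sketch of the clustering step described above, the code below trains a small self-organizing map on toy document vectors and assigns a document to a grid cell (its topical cluster). The grid size, learning schedule, and toy data are assumptions for illustration; the WordNet-based preprocessing and Sphinx3 integration from the thesis are not reproduced.

```python
# Minimal self-organizing map for grouping document vectors into topical clusters.
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit for this sample
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # neighbourhood-weighted pull of nearby units toward the sample
            dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            nbh = np.exp(-dist2 / (2 * sigma**2))[..., None]
            weights += lr * nbh * (x - weights)
    return weights

def assign_cluster(weights, x):
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Toy "document" vectors (rows of a TF-IDF matrix in a real system).
docs = np.random.default_rng(1).random((30, 20))
som = train_som(docs)
print(assign_cluster(som, docs[0]))   # grid cell = topical cluster of document 0
```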
85

Vision-Inertial SLAM using Natural Features in Outdoor Environments

Asmar, Daniel January 2006 (has links)
Simultaneous Localization and Mapping (SLAM) is a recursive probabilistic inferencing process used for robot navigation when Global Positioning Systems (GPS) are unavailable. SLAM operates by building a map of the robot environment while concurrently localizing the robot within this map. The ultimate goal of SLAM is to operate anywhere using the environment's natural features as landmarks. Such a goal is difficult to achieve for several reasons. Firstly, different environments contain different types of natural features, each exhibiting large variance in shape and appearance. Secondly, objects look different from different viewpoints, so it is difficult to always recognize them. Thirdly, in most outdoor environments it is not possible to predict the motion of a vehicle using wheel encoders because of errors caused by slippage. Finally, the design of a SLAM system that operates in a large-scale outdoor setting is in itself a challenge.

The above issues are addressed as follows. Firstly, a camera is used to recognize the environmental context (e.g., indoor office, outdoor park) by analyzing the holistic spectral content of images of the robot's surroundings. A type of feature (e.g., trees for a park) is then chosen for SLAM that is likely to be observable in the recognized setting. A novel tree detection system is introduced, based on perceptually organizing the content of images into quasi-vertical structures and marking those structures that intersect ground level as tree trunks. Secondly, a new tree recognition system is proposed, based on extracting Scale Invariant Feature Transform (SIFT) features on each tree trunk region and matching trees in feature space. Thirdly, dead reckoning is performed via an Inertial Navigation System (INS), bounded by non-holonomic constraints; an INS is insensitive to slippage and varying ground conditions. Finally, the developed computer vision and inertial systems are integrated within the framework of an Extended Kalman Filter into a working Vision-INS SLAM system, named VisSLAM.

VisSLAM is tested on data collected during a real test run in an outdoor unstructured environment. Three test scenarios are proposed, ranging from semi-automatic detection, recognition, and initialization to a fully automated SLAM system. The first two scenarios are used to verify the presented inertial and computer vision algorithms in the context of localization, where results indicate accurate vehicle pose estimation for the majority of the journey. The final scenario evaluates the application of the proposed systems for SLAM, where results indicate successful operation for a long portion of the vehicle's journey. Although the scope of this thesis is an outdoor park setting using tree trunks as landmarks, the developed techniques lend themselves to other environments using different natural objects as landmarks.
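As a hedged illustration of the Extended Kalman Filter machinery that VisSLAM builds on, the sketch below performs a single EKF update for a range-bearing observation of one point landmark (e.g. a detected tree trunk). The state layout, measurement model, and noise values are generic textbook assumptions, not the VisSLAM formulation.

```python
import numpy as np

def ekf_landmark_update(x, P, z, lm_idx, R):
    """One EKF update; x = [px, py, heading, lx_0, ly_0, ...], z = [range, bearing]."""
    px, py, th = x[0], x[1], x[2]
    j = 3 + 2 * lm_idx
    lx, ly = x[j], x[j + 1]
    dx, dy = lx - px, ly - py
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - th])

    # Jacobian of the measurement w.r.t. the robot pose and this landmark.
    H = np.zeros((2, x.size))
    H[:, 0:3] = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                          [dy / q,           -dx / q,         -1.0]])
    H[:, j:j + 2] = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                              [-dy / q,          dx / q]])

    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(x.size) - K @ H) @ P

# Toy usage: robot at the origin, one landmark near (5, 2).
x0 = np.array([0.0, 0.0, 0.0, 5.0, 2.0])
P0 = np.eye(5) * 0.1
x1, P1 = ekf_landmark_update(x0, P0, z=np.array([5.3, 0.4]), lm_idx=0,
                             R=np.diag([0.1, 0.01]))
print(x1)
```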
86

A Study of Segmentation and Normalization for Iris Recognition Systems

Mohammadi Arvacheh, Ehsan January 2006 (has links)
Iris recognition systems capture an image of an individual's eye. The iris in the image is then segmented and normalized for the feature extraction process. The performance of iris recognition systems depends heavily on segmentation and normalization: even an effective feature extraction method cannot obtain useful information from an iris image that is not segmented or normalized properly. This thesis aims to enhance the segmentation and normalization processes in iris recognition systems to increase overall accuracy.

Previous iris segmentation approaches assume that the boundary of the pupil is a circle. According to our observations, however, a circle cannot model this boundary accurately. To improve the quality of segmentation, a novel active contour is proposed to detect the irregular boundary of the pupil. The method successfully detects all the pupil boundaries in the CASIA database and increases recognition accuracy.

Most previous normalization approaches employ a polar coordinate system to transform the iris. Transforming the iris into polar coordinates requires a reference point as the polar origin. Since the pupil and limbus are generally non-concentric, there are two natural choices: the pupil center and the limbus center. However, their performance differences have not been investigated so far. We also propose a new reference point, the virtual center of a pupil with radius equal to zero, which we refer to as the linearly-guessed center. The experiments demonstrate that the linearly-guessed center provides much better recognition accuracy.

In addition to evaluating the pupil and limbus centers and proposing a new reference point for normalization, we reformulate the normalization problem as a minimization problem. The advantage of this formulation is that it is not restricted by the circular assumption used in the reference point approaches. The experimental results demonstrate that the proposed method performs better than the reference point approaches.

Finally, previous normalization approaches transform the iris texture into a fixed-size rectangular block, and the shape and size of the normalized iris have not been investigated in detail. In this thesis, we study the size parameter of the traditional approaches and propose a dynamic normalization scheme, which transforms the iris based on the radii of the pupil and limbus. The experimental results demonstrate that the dynamic normalization scheme performs better than the previous approaches.
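As a hedged sketch of the fixed-size polar normalization that the thesis takes as its baseline, the code below samples the annulus between a pupil circle and a limbus circle, around a chosen reference point, into a rectangular radial-by-angular block. The circular boundaries, block size, and nearest-neighbour sampling are simplifying assumptions; the active-contour segmentation, linearly-guessed center, and dynamic normalization proposed in the thesis are not reproduced.

```python
import numpy as np

def normalize_iris(image, center, r_pupil, r_limbus, out_shape=(64, 256)):
    """Map the annulus around `center` in a 2-D grayscale `image` to a
    (radial, angular) block using nearest-pixel sampling."""
    n_r, n_theta = out_shape
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_pupil, r_limbus, n_r)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

# Toy usage on a synthetic image with placeholder circle parameters.
img = np.random.default_rng(0).random((240, 320))
block = normalize_iris(img, center=(120, 160), r_pupil=30, r_limbus=90)
print(block.shape)   # (64, 256)
```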
87

Implementation of a Variable Duty Factor Controller on a Six-Legged Axi-Symmetric Walking Robot

Cutler, Steven January 2006 (has links)
Hexplorer is a six-legged walking robot developed at the University of Waterloo. The robot is controlled by a network of seven digital signal processors, six of which control three motors each, for a total of 18 motors. New custom electronics were designed to house the digital signal processors and associated circuitry. A variable duty factor wave gait, developed by Yoneda et al., was simulated and implemented on the robot. Simulation required an in-depth kinematic analysis, complicated by the parallel mechanism comprising each leg; these complications were handled in both simulation and implementation. However, due to mechanical issues, Hexplorer walked for only one or two steps at a time.
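As a hedged sketch of the scheduling idea behind a variable duty factor wave gait, the code below marks each of six legs as supporting or transferring according to whether the global gait phase falls inside that leg's support window of width equal to the duty factor. The phase offsets follow one common wave-gait convention and are placeholders, not Yoneda et al.'s exact formulation or the controller implemented on Hexplorer.

```python
def leg_states(t, period, beta, offsets=(0.0, 0.5, 1.0/6, 2.0/3, 1.0/3, 5.0/6)):
    """Return True (support) / False (transfer) for each of six legs at time t.

    `beta` is the duty factor: the fraction of the gait cycle each leg spends
    on the ground. Offsets are illustrative relative phases, one per leg.
    """
    phase = (t / period) % 1.0
    return [((phase - off) % 1.0) < beta for off in offsets]

# Example: duty factor 0.75 (a slow, statically stable walk), 2 s gait period.
for t in (0.0, 0.5, 1.0, 1.5):
    print(t, leg_states(t, period=2.0, beta=0.75))
```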
88

Decision Making Strategies for Probabilistic Aerospace Systems Design

Borer, Nicholas Keith 24 March 2006 (has links)
Modern aerospace systems design problems are often characterized by the necessity to identify and enable multiple tradeoffs. This can be accomplished by transformation of the design problem to a multiple objective optimization formulation. However, existing multiple criteria techniques can lead to unattractive solutions due to their basic assumptions; namely that of monotonically increasing utility and independent decision criteria. Further, it can be difficult to quantify the relative importance of each decision metric, and it is very difficult to view the pertinent tradeoffs for large-scale problems. This thesis presents a discussion and application of Multiple Criteria Decision Making (MCDM) to aerospace systems design and quantifies the complications associated with switching from single to multiple objectives. It then presents a procedure to tackle these problems by utilizing a two-part relative importance model for each criterion. This model contains a static and dynamic portion with respect to the current value of the decision metric. The static portion is selected based on an entropy analogy of each metric within the decision space to alleviate the problems associated with quantifying basic (monotonic) relative importance. This static value is further modified by examination of the interdependence of the decision metrics. The dynamic contribution uses a penalty function approach for any constraints and further reduces the importance of any metric approaching a user-specified threshold level. This reduces the impact of the assumption of monotonically increasing utility by constantly updating the relative importance of a given metric based on its current value. A method is also developed to determine a linearly independent subset of the original requirements, resulting in compact visualization techniques for large-scale problems.
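As a hedged illustration of the entropy-based "static" relative importance described above, together with a toy "dynamic" scaling near a user-specified threshold, the sketch below computes Shannon-entropy weights over a small criteria matrix. The interdependence correction and the precise penalty form used in the thesis are not reproduced; all numbers and the taper function are illustrative placeholders.

```python
import numpy as np

def entropy_weights(scores):
    """scores: (alternatives x criteria) matrix of non-negative metric values."""
    p = scores / scores.sum(axis=0, keepdims=True)
    p = np.where(p > 0, p, 1e-12)                       # avoid log(0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(scores.shape[0])
    d = 1.0 - e                                         # more spread -> more weight
    return d / d.sum()

def dynamic_scale(value, threshold, sharpness=10.0):
    """Toy factor that tapers a metric's importance as it approaches its threshold."""
    return 1.0 / (1.0 + np.exp(-sharpness * (threshold - value) / threshold))

# Three candidate designs scored on three placeholder metrics.
scores = np.array([[0.8, 120.0, 3.2],
                   [0.6, 150.0, 2.9],
                   [0.9, 110.0, 3.5]])
print(entropy_weights(scores), dynamic_scale(value=115.0, threshold=130.0))
```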
89

Adaptive output feedback controllers for a class of nonlinear mechanical systems

Miwa, Hideaki 28 August 2008 (has links)
Abstract not available.
90

Design and performance analysis of data broadcasting systems

蕭潤明, Siu, Yun-ming. January 1995 (has links)
Doctoral thesis (Doctor of Philosophy), Electrical and Electronic Engineering. Published or final version.
