301

A Predictive Control Method for Human Upper-Limb Motion: Graph-Theoretic Modelling, Dynamic Optimization, and Experimental Investigations

Seth, Ajay January 2000 (has links)
Optimal control methods are applied to mechanical models in order to predict the control strategies in human arm movements. Optimality criteria are used to determine unique controls for a biomechanical model of the human upper-limb with redundant actuators. The motivation for this thesis is to provide a non-task-specific method of motion prediction as a tool for movement researchers and for controlling human models within virtual prototyping environments. The current strategy is based on determining the muscle activation levels (control signals) necessary to perform a task that optimizes several physical determinants of the model such as muscular and joint stresses, as well as performance timing. Currently, the initial and final location, orientation, and velocity of the hand define the desired task. Several models of the human arm were generated using a graph-theoretical method in order to take advantage of similar system topology through the evolution of arm models. Within this framework, muscles were modelled as non-linear actuator components acting between origin and insertion points on rigid body segments. Activation levels of the muscle actuators are considered the control inputs to the arm model. Optimization of the activation levels is performed via a hybrid genetic algorithm (GA) and a sequential quadratic programming (SQP) technique, which provides a globally optimal solution without sacrificing numerical precision, unlike traditional genetic algorithms. Advantages of the underlying genetic algorithm approach are that it does not require any prior knowledge of what might be a 'good' approximation in order for the method to converge, and it enables several objectives to be included in the evaluation of the fitness function. Results indicate that this approach can predict optimal strategies when compared to benchmark minimum-time maneuvers of a robot manipulator. 
The formulation and integration of the aforementioned components into a working model, together with the simulation of reaching and lifting tasks, represent the bulk of the thesis. Results are compared to motion data collected in the laboratory from a test subject performing the same tasks. Discrepancies in the results are primarily due to model fidelity. However, more complex models are not evaluated due to the additional computational time required. The theoretical approach provides an excellent foundation, but further work is required to increase the computational efficiency of the numerical implementation before proceeding to more complex models.
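The hybrid global/local optimization described above can be sketched in miniature: a genetic algorithm explores the space, and its best individual seeds a local polishing stage. This is a toy illustration under stated assumptions — a simple quadratic fitness replaces the musculoskeletal simulation, and crude coordinate descent stands in for the SQP stage; none of the names below come from the thesis.

```python
import random

random.seed(0)

def fitness(x):
    # Toy stand-in for the thesis's cost (muscle/joint stress, timing);
    # the real model evaluates a musculoskeletal simulation.
    return sum((xi - 0.3) ** 2 for xi in x)

def ga(pop_size=30, dim=4, gens=40):
    # Activation levels are bounded in [0, 1], like muscle activations.
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(dim)           # small Gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

def local_refine(x, step=0.01, iters=200):
    # Crude coordinate descent standing in for the SQP polishing stage.
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            for cand in (x[i] - step, x[i] + step):
                trial = x[:i] + [min(1.0, max(0.0, cand))] + x[i + 1:]
                if fitness(trial) < fitness(x):
                    x = trial
    return x

best = local_refine(ga())
```

The GA supplies a good starting point without any prior guess, and the local stage recovers the numerical precision the GA alone lacks — the division of labour the abstract describes.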
302

Wavelet Shrinkage Based Image Denoising using Soft Computing

Bai, Rong 08 August 2008 (has links)
Noise reduction is an open problem that has received considerable attention in the literature for several decades. Over the last two decades, wavelet-based methods have been applied to noise reduction and have been shown to outperform the traditional Wiener filter, median filter, and modified Lee filter in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR), and other evaluation measures. In this research, two approaches to the development of high-performance de-noising algorithms are proposed, both based on soft computing tools such as fuzzy logic, neural networks, and genetic algorithms. First, an improved additive noise reduction method for digital grey-scale natural images, which uses an interval type-2 fuzzy logic system to shrink wavelet coefficients, is proposed. This method extends a recently published approach to additive noise reduction based on wavelet shrinkage with a type-1 fuzzy logic system. Unlike that method, the proposed approach employs a thresholding filter that adjusts the wavelet coefficients according to the linguistic uncertainty in neighborhood values, inter-scale dependencies, and intra-scale correlations of wavelet coefficients at different resolutions, by exploiting interval type-2 fuzzy set theory. Experimental results show that the proposed approach can efficiently and rapidly remove additive noise from digital grey-scale images. Objective analysis and visual observation show that it outperforms current fuzzy non-wavelet and fuzzy wavelet-based methods, and is comparable with some recent but more complex wavelet methods, such as the Hidden Markov Model based de-noising method. The main differences between the proposed approach and other wavelet shrinkage based approaches, and its main improvements over them, are also illustrated in this thesis. 
Second, another improved additive noise reduction method is proposed, based on fusing the outputs of different filters with a Fuzzy Neural Network (FNN). The proposed method combines the advantages of these filters and has an outstanding ability to smooth out additive noise while effectively preserving image details such as edges and lines. A Genetic Algorithm (GA) is applied to choose the optimal parameters of the FNN. The experimental results show that the proposed method is powerful for removing noise from natural images: its MSE is lower, and its PSNR higher, than those of any of the individual filters used for fusion. Finally, the two proposed approaches are compared with each other from different points of view: objective analysis in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR), the image quality index (IQI) based on quality assessment of distorted images, and an Information Theoretic Criterion (ITC) based on a human vision model; computational cost; universality; and human observation. The results show that the GA-optimized FNN-based algorithm has the best performance among all tested approaches. Important considerations for the proposed approaches and future work are discussed.
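The wavelet-shrinkage idea underlying the first approach can be illustrated with its classical fixed-threshold ancestor: transform, soft-threshold the detail coefficients, invert. The sketch below is a minimal 1-D example under stated assumptions (a single-level Haar transform, a hand-picked threshold, a synthetic sine signal); the thesis's actual contribution, the interval type-2 fuzzy shrinkage rule, is not reproduced here.

```python
import math
import random

def haar(x):
    # One-level Haar analysis: average (approximation) and difference (detail) bands.
    s = math.sqrt(2.0)
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def inv_haar(a, d):
    # Exact inverse of the one-level Haar analysis above.
    s = math.sqrt(2.0)
    out = []
    for ai, di in zip(a, d):
        out.extend([(ai + di) / s, (ai - di) / s])
    return out

def soft(coeffs, t):
    # Classic fixed-threshold soft shrinkage; the thesis replaces this rule with
    # an interval type-2 fuzzy decision built from neighbourhood, inter-scale
    # and intra-scale information.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

random.seed(0)
n = 256
clean = [math.sin(i / 16.0) for i in range(n)]          # smooth test signal
noisy = [c + random.gauss(0.0, 0.3) for c in clean]     # additive Gaussian noise

a, d = haar(noisy)
denoised = inv_haar(a, soft(d, 0.3))

def mse(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v)) / len(u)
```

Because a smooth signal concentrates its energy in the approximation band, shrinking the detail band removes mostly noise — the same intuition the fuzzy variants refine by deciding, coefficient by coefficient, how much to shrink.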
303

Stochastic Stepwise Ensembles for Variable Selection

Xin, Lu 30 April 2009 (has links)
Ensemble methods such as AdaBoost, Bagging and Random Forest have attracted much attention in the statistical learning community in the last 15 years. Zhu and Chipman (2006) proposed the idea of using ensembles for variable selection. Their implementation used a parallel genetic algorithm (PGA). In this thesis, I propose a stochastic stepwise ensemble for variable selection, which improves upon PGA. Traditional stepwise regression (Efroymson 1960) combines forward and backward selection. One step of forward selection is followed by one step of backward selection. In the forward step, each variable other than those already included is added to the current model, one at a time, and the one that can best improve the objective function is retained. In the backward step, each variable already included is deleted from the current model, one at a time, and the one that can best improve the objective function is discarded. The algorithm continues until no improvement can be made by either the forward or the backward step. Instead of adding or deleting one variable at a time, the Stochastic Stepwise Algorithm (STST) adds or deletes a group of variables at a time, where the group size is randomly decided. In traditional stepwise regression, the group size is one and each candidate variable is assessed. When the group size is larger than one, as is often the case for STST, the total number of variable groups can be quite large. Instead of evaluating all possible groups, only a few randomly selected groups are assessed and the best one is chosen. From a methodological point of view, the improvement of the STST ensemble over PGA is due to the use of a more structured way to construct the ensemble; this allows better control of the strength-diversity tradeoff established by Breiman (2001). In fact, there is no mechanism to control this fundamental tradeoff in PGA. 
Empirically, the improvement is most prominent when a true variable in the model has a relatively small coefficient (relative to other true variables). I show empirically that PGA has a much higher probability of missing that variable.
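The STST move described above — add or delete a randomly sized group, assessing only a few random groups rather than all of them — can be sketched on a toy problem. The assumptions here are mine: a synthetic scoring function stands in for the regression objective, and only a single ensemble member is run rather than an ensemble.

```python
import random

random.seed(1)
P = 12                 # number of candidate variables
TRUE = {0, 3, 7}       # variables in the hypothetical "true" model

def score(model):
    # Toy stand-in for a model-selection criterion (e.g. AIC on real data):
    # missing a true variable costs 2, including a spurious one costs 1.
    return 2 * len(TRUE - model) + len(model - TRUE)

def stst_search(n_iter=50, n_groups=5):
    model, best = set(), score(set())
    for _ in range(n_iter):
        # Alternate a forward (add) and a backward (delete) move.
        for pool, adding in ((set(range(P)) - model, True), (set(model), False)):
            if not pool:
                continue
            size = random.randint(1, len(pool))      # random group size
            # Assess only a few random groups, not all possible groups.
            for _ in range(n_groups):
                group = set(random.sample(sorted(pool), size))
                trial = model | group if adding else model - group
                if score(trial) < best:
                    model, best = trial, score(trial)
    return model

selected = stst_search()
```

With group size fixed at one and the candidate loop exhaustive, this reduces to traditional stepwise selection; the random group size and the small random sample of groups are what inject the diversity an ensemble of such searches exploits.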
304

A Genetic Algorithm for Solar Boat

Ma, Jiya January 2008 (has links)
Genetic algorithms have been widely used in many areas of optimization. In this thesis, the technique is combined with the renewable energy domain, specifically a photovoltaic system. To participate in and win the solar boat race, a control program is needed, and C++ was chosen for the implementation. A mathematical model was built to implement the program, and the approaches used to calculate the condition-dependent boundaries are explained. The processing of the prediction and the real-time control function are then presented. The program was simulated, and the results show that the genetic algorithm helps to obtain good results, but it does not improve them greatly, owing to particularities of the solar-driven boat project such as the limited energy production.
305

ANALYSIS & STUDY OF AI TECHNIQUES FOR AUTOMATIC CONDITION MONITORING OF RAILWAY TRACK INFRASTRUCTURE : Artificial Intelligence Techniques

Podder, Tanmay January 2010 (has links)
Over the last decade, the problem of surface inspection has received great attention from the scientific community, since quality control and product maintenance are key concerns in several industrial applications. Railway associations spend considerable money checking railway infrastructure, a field in which periodic surface inspection can help operators prevent critical situations; the maintenance and monitoring of this infrastructure is an important concern for railway associations. Surface inspection is therefore also important to railroad authorities for investigating track components, identifying problems, and finding ways to solve them. In the railway industry, problems are usually found in sleepers, overhead equipment, fasteners, rail heads, switches and crossings, and the ballast section. In this thesis, I review research papers on AI techniques combined with non-destructive testing (NDT) techniques, which collect data from the test object without causing any damage. The reviewed work demonstrates that AI-based systems can address most of these problems and are reliable and efficient for diagnosing faults in this transportation domain. I also review solutions and products offered by different companies based on AI techniques, along with white papers provided by some of those companies. AI-based techniques such as machine vision, stereo vision, laser-based methods, and neural networks are used in most cases to solve problems otherwise handled by railway engineers. These techniques are applied within the NDT approach, a broad, interdisciplinary field that plays a critical role in assuring that structural components and systems perform their functions in a reliable and cost-effective fashion. 
The NDT approach verifies the uniformity, quality, and serviceability of materials without damaging the material being tested. Its methods include visual and optical testing, radiography, magnetic particle testing, ultrasonic testing, penetrant testing, electromechanical testing, and acoustic emission testing. Inspection is carried out periodically to support better maintenance, and is performed by railway engineers with the aid of AI-based techniques. The main idea of this thesis is to demonstrate how problems in this transportation area can be reduced, based on the work of different researchers and companies; I also offer comments on that work and propose better inspection methods where needed. The scope of the thesis is the automatic interpretation of NDT data, with the goal of detecting flaws accurately and efficiently; AI techniques such as neural networks, machine vision, knowledge-based systems, and fuzzy logic have been applied to a wide spectrum of problems in this area. A further aim is to provide insight into possible research methods concerning railway sleeper, fastener, ballast, and overhead inspection through automatic interpretation of data. I discuss problems arising in railway sleepers, fasteners, overhead equipment, and ballasted track; review research papers related to these areas; and demonstrate how the proposed systems work and what results they achieve, highlighting the advantages of AI techniques over the earlier manual systems. This work thus summarizes the findings of a large number of research papers deploying artificial intelligence (AI) techniques for the automatic interpretation of data from non-destructive testing (NDT). 
Problems in the rail transport domain are the main focus of this work, which overall addresses the inspection of railway sleepers, fasteners, ballast, and overhead equipment.
306

Developing Box-Pushing Behaviours Using Evolutionary Robotics

Van Lierde, Boris January 2011 (has links)
The context of this report and the IRIDIA laboratory are described in the preface. Evolutionary Robotics and the box-pushing task are presented in the introduction. The building of a test system supporting Evolutionary Robotics experiments is then detailed. This system is made of a robot simulator and a Genetic Algorithm. It is used to explore the possibility of evolving box-pushing behaviours. The bootstrapping problem is explained, and a novel approach for dealing with it is proposed, with results presented. Finally, ideas for extending this approach are presented in the conclusion.
310

Understanding the role of shaft stiffness in the golf swing

MacKenzie, Sasho James 22 December 2005 (has links)
The purpose of this thesis was to determine how shaft stiffness affects clubhead speed and how it alters clubhead orientation at impact. For the first time, a 3D, six-segment forward dynamics model of a golfer and club was developed and optimized to answer these questions. A range of shaft stiffness levels from flexible to stiff was evaluated at three levels of swing speed (38, 45 and 53 m/s). At any level of swing speed, the difference in clubhead speed did not exceed 0.1 m/s across levels of shaft stiffness. Therefore, it was concluded that customizing the stiffness of a golf club shaft to perfectly suit a particular swing will not increase clubhead speed sufficiently to have any meaningful effect on performance. The magnitude of lead deflection at impact increased as shaft stiffness decreased. The magnitude of lead deflection at impact also increased as swing speed increased. For an optimized swing that generated a clubhead speed of 45 m/s, with a shaft of regular stiffness, lead deflection of the shaft at impact was 6.25 cm. The same simulation resulted in a toe-down shaft deflection of 2.27 cm at impact. Using the model, it was estimated that for each centimeter of lead deflection of the shaft, dynamic loft increased by approximately 0.8 degrees. Toe-down shaft deflection had practically no influence on dynamic loft. For every centimeter increase in lead deflection of the shaft, dynamic closing of the clubface increased by approximately 0.7 degrees. For every centimeter increase in toe-down shaft deflection, dynamic closing of the clubface decreased by approximately 0.5 degrees. The results from this thesis indicate that improvements in driving distance brought about by altering shaft stiffness are the result of altered clubhead orientation at impact and not increased clubhead speed.
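The sensitivities quoted above combine into a quick back-of-envelope estimate of clubface changes at impact. The snippet below is only an illustrative linearization of the abstract's reported numbers; the constant and function names are mine, not the thesis's.

```python
# Sensitivities reported in the abstract, in degrees per centimetre of
# shaft deflection at impact (toe-down deflection reduces closing, hence
# the negative sign).
LOFT_PER_CM_LEAD = 0.8
CLOSING_PER_CM_LEAD = 0.7
CLOSING_PER_CM_TOE_DOWN = -0.5

def impact_changes(lead_cm, toe_down_cm):
    """Approximate change in dynamic loft and clubface closing, in degrees."""
    loft = LOFT_PER_CM_LEAD * lead_cm
    closing = (CLOSING_PER_CM_LEAD * lead_cm
               + CLOSING_PER_CM_TOE_DOWN * toe_down_cm)
    return loft, closing

# The abstract's optimized 45 m/s swing with a regular shaft:
# 6.25 cm lead deflection and 2.27 cm toe-down deflection.
loft_change, closing_change = impact_changes(6.25, 2.27)
```

For that swing the linearized estimate gives roughly 5 degrees of added dynamic loft and about 3.2 degrees of net clubface closing — consistent with the thesis's conclusion that shaft stiffness acts through orientation at impact rather than clubhead speed.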
