651

Applying Data Mining Techniques on Continuous Sensed Data : For daily living activity recognition

Li, Yunjie January 2014 (has links)
Nowadays, with the rapid development of the Internet of Things, the application field of wearable sensors has continuously expanded, especially in areas such as remote electronic medical treatment and smart homes. Recognizing human daily activities from the sensed data is one of the open challenges. With a variety of data mining techniques, the activities can be recognized automatically, but due to the diversity and complexity of the sensor data, not every data mining technique performs well without systematic analysis and improvement. In this thesis, several data mining techniques were applied to a continuously sensed dataset with the objective of recognizing human daily activities. The work studies several data mining techniques and focuses on three of them: Decision Tree, Naive Bayes, and neural network classifiers, which are analyzed and compared according to their classification results. The thesis also proposes improvements to these techniques tailored to the specific dataset. The comparison of the three classification results showed that each classifier has its own limitations and advantages. The proposed idea of combining the Decision Tree model with the neural network model significantly increased the classification accuracy in this experiment.
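The abstract does not specify how the Decision Tree and neural network models were combined; as a rough, hypothetical sketch of one common combination scheme (soft voting over class probabilities), with placeholder data standing in for the sensor features:

```python
# Hypothetical sketch only: a soft-voting combination of a Decision Tree
# and an MLP for activity classification. The features, labels, and the
# ensemble scheme are illustrative assumptions, not the thesis's method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                         # e.g. accelerometer features
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], [-1, 0, 1])   # four activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

combo = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
    ],
    voting="soft",                                     # average class probabilities
)
combo.fit(X_tr, y_tr)
print("combined accuracy:", combo.score(X_te, y_te))
```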
652

Node Localization using Fractal Signal Preprocessing and Artificial Neural Network

Kaiser, Tashniba January 2012 (has links)
This thesis proposes an integrated artificial neural network based approach to classify the position of a wireless device in an indoor protected area. Our experiments are conducted in two different types of interference-affected indoor locations. We found that the environment greatly influences the received signal strength, and we realized the need to incorporate a complexity measure of the Wi-Fi signal as additional information in our localization algorithm. The inputs to the integrated artificial neural network comprised an integer-dimension representation and a fractional-dimension representation of the Wi-Fi signal. The integer-dimension representation consisted of the raw signal strength, whereas the fractional-dimension representation consisted of a variance fractal dimension of the Wi-Fi signal. The results show that the proposed approach achieved an 8.7% better classification rate than the "one-dimensional input" ANN approach, reaching an 86% correct classification rate. The conventional trilateration method achieved only a 47.97% correct classification rate.
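A hedged sketch of the two-part input described above: the raw received signal strength as the integer-dimension feature and a variance fractal dimension of the signal as the fractional-dimension feature, fed to a small MLP classifier. The VFD estimator, the network size, and the synthetic data are all illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def variance_fractal_dimension(sig, lags=(1, 2, 4, 8, 16)):
    # Assumed estimator: Hurst exponent H from the slope of
    # 0.5*log var(increments) versus log lag; for a 1-D signal D = 2 - H.
    pts = [(np.log(k), 0.5 * np.log(np.var(sig[k:] - sig[:-k])))
           for k in lags]
    H = np.polyfit(*zip(*pts), 1)[0]
    return 2.0 - H

rng = np.random.default_rng(0)
X, y = [], []
for label in range(4):                        # four hypothetical positions
    for _ in range(50):
        # placeholder RSS trace: per-position offset plus a random walk
        rss = -60 - 5 * label + np.cumsum(rng.normal(0, 0.5, 128))
        X.append([rss.mean(), variance_fractal_dimension(rss)])
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```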
653

Improved quantitative estimation of rainfall by radar

Islam, Md Rashedul 06 January 2006 (has links)
Although high correlations between gauge and radar measurements are reported at hourly or daily accumulations, they are rarely observed at higher time resolutions (e.g., 10-minute). This study investigates six major rainfall events in the year 2000 in the greater Winnipeg area, with durations varying from four to nine hours. The correlation between gauge and radar measurements of precipitation is found to be only 0.3 at 10-minute resolution and 0.55 at hourly resolution using the Marshall-Palmer Z-R relationship (Z = 200R^1.6). The rainfalls are classified into convective and stratiform regions using the algorithm of Steiner et al. (1995), and two different Z-R relationships are tested to minimize the error associated with the variability of the drop size distribution; however, no improvement is observed. The performance of the artificial neural network is explored as a reflectivity-rainfall mapping function. Three different types of neural networks are explored: the back-propagation network, the radial basis function network, and the generalized regression neural network. It is observed that the neural network performs better than the Z-R relationship on the rainfall events used for training and validation (correlation 0.67). When the network is tested on a new rainfall event, its performance is quite similar to that obtained from the Z-R relationship (correlation 0.33). Based on this observation, the neural network may be recommended as a post-processing tool but may not be very useful for operational purposes, at least as used in this study. Variability in weather and precipitation scenarios affects the radar measurements, which apparently makes it impossible for the neural network or the Z-R relationship to show consistent performance on every rainfall event. To account for this variability, conventional correction schemes for attenuation and hail contamination are applied, and a trajectory model is developed to account for rainfall advection due to wind drift. The trajectory model uses velocities obtained from single-Doppler observations. A space-time interpolation technique is applied to generate reflectivity maps at one-minute resolution based on the direction obtained from a correlation-based tracking algorithm; these one-minute maps help account for the travel time needed for the rainfall mass to reach the ground. It was found that the attenuation correction algorithm adversely increases the reflectivity. This study assumes that the higher reflectivity caused by hail-contaminated regions is one reason for the overestimation in the attenuation correction process, and it was observed that applying a hail-capping method prior to the attenuation correction algorithm helps to improve the situation. A statistical expression to account for radome attenuation is also developed. After applying the various algorithms, the correlation between the gauge and the radar measurements is 0.81. Although the Marshall-Palmer relationship is recommended for stratiform precipitation only, this study found it suitable for both convective and stratiform precipitation when attenuation is properly taken into account. The precipitation processing model developed in this study generates more accurate rainfall estimates at the surface from radar observations and may be a better choice for rainfall-runoff modellers.
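For reference, the Marshall-Palmer inversion used above is a one-line calculation once reflectivity is converted from dBZ to linear units; a minimal sketch (the sample dBZ values are arbitrary illustrations):

```python
import numpy as np

def marshall_palmer_rain_rate(dbz):
    """Rain rate R (mm/h) from radar reflectivity via Z = 200 * R**1.6."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear mm^6/m^3
    return (z_linear / 200.0) ** (1.0 / 1.6)

print(marshall_palmer_rain_rate([20.0, 35.0, 50.0]))  # light to heavy rain
```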
654

Neural Network Approach for Predicting the Failure of Turbine Components

Bano, Nafisa 24 July 2013 (has links)
Turbine components operate under severe loading conditions and at high and varying temperatures, which result in thermal stresses in the presence of temperature gradients created by hot gases and cooling air. Moreover, static and cyclic loads as well as the motion of rotating components create mechanical stresses. The combined effect of these complex thermo-mechanical stresses promotes the nucleation and propagation of cracks that give rise to fatigue and creep failure of turbine components. Therefore, the relationship between thermo-mechanical stresses, chemical composition, heat treatment, resulting microstructure, operating temperature, material damage, and potential failure modes, i.e. fatigue and/or creep, needs to be well understood and studied. Artificial neural networks are promising candidate tools for such studies: they are fast, flexible, efficient, and accurate tools for modelling highly non-linear multi-dimensional relationships, and they reduce the need for experimental work and time-consuming regression analysis. Therefore, separate neural network models for γ’ precipitate-strengthened Ni-based superalloys have been developed for predicting the γ’ precipitate size, thermal expansion coefficient, fatigue life, and hysteresis energy. The accumulated fatigue damage is then estimated as the product of hysteresis energy and fatigue life. The models for γ’ precipitate size, thermal expansion coefficient, and hysteresis energy converge very well and match experimental data accurately. The fatigue life proved to be the most challenging quantity to predict, and fracture mechanics proved to be a potentially necessary supplement to neural networks: the model for fatigue life converges well, but relatively large errors are observed, partly due to the generally large statistical variations inherent to fatigue life. The deformation mechanism map for 1.23Cr-1.2Mo-0.26V rotor steel has been constructed using dislocation glide, grain boundary sliding, and power-law creep rate equations. The constructed map is verified against experimental data points and neural network results. Although the existing set of experimental data points for neural network modelling is limited, there is an excellent match with the boundaries constructed using the rate equations, which validates the deformation mechanism map.
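As a minimal illustration of the damage estimate described above (accumulated fatigue damage as the product of hysteresis energy and fatigue life), with placeholder values rather than thesis data:

```python
# Placeholder numbers for illustration only; the units are assumptions.
hysteresis_energy = 2.4e-3   # energy dissipated per cycle, MJ/m^3 (assumed)
fatigue_life = 1.8e5         # cycles to failure (assumed)
accumulated_damage = hysteresis_energy * fatigue_life
print(f"accumulated fatigue damage ~ {accumulated_damage:.0f} MJ/m^3")
```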
655

Data Collection, Analysis, and Classification for the Development of a Sailing Performance Evaluation System

Sammon, Ryan 28 August 2013 (has links)
The work described in this thesis contributes to the development of a system to evaluate sailing performance and was motivated by the lack of tools available for this purpose. The goal of the work presented is to detect and classify the turns of a sailing yacht. Data were collected using a BlackBerry PlayBook affixed to a J/24 sailing yacht and manually annotated with three types of turn: tack, gybe, and mark rounding. The annotated data were used to train classification methods. The classification methods tested were multi-layer perceptrons (MLPs) of two sizes in various committees and nearest-neighbour search. The pre-processing algorithms tested were Kalman filtering, categorization using quantiles, and residual normalization. The best solution was found to be an averaged-answer committee of small MLPs, with Kalman filtering and residual normalization performed on the input as pre-processing.
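A hedged sketch of what an averaged-answer committee can look like: several small MLPs trained from different random initializations, with class probabilities averaged before the decision. The data are placeholders, and the Kalman filtering and residual normalization pre-processing steps are omitted:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5))             # placeholder motion features
y = np.digitize(X[:, 0], [-0.5, 0.5])     # three turn classes (illustrative)

# committee of small MLPs differing only in their random initialization
committee = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=400,
                           random_state=s).fit(X, y) for s in range(5)]

# averaged-answer decision: mean of the members' class probabilities
proba = np.mean([m.predict_proba(X) for m in committee], axis=0)
pred = proba.argmax(axis=1)
print("committee agreement with labels:", (pred == y).mean())
```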
656

Defect and thickness inspection system for cast thin films using machine vision and full-field transmission densitometry

Johnson, Jay Tillay 12 1900 (has links)
Quick mass production of homogeneous thin-film material is required in the paper, plastic, fabric, and thin-film industries. Due to the high feed rates and small thicknesses, machine vision and other nondestructive evaluation techniques are used to ensure consistent, defect-free material by continuously assessing post-production quality. One of the fastest-growing inspection areas is 0.5-500 micrometer thick films, which are used for semiconductor wafers, amorphous photovoltaics, optical films, plastics, and organic and inorganic membranes. As a demonstration application, a prototype roll-feed imaging system has been designed to inspect high-temperature polymer electrolyte membrane (PEM), used for fuel cells, after it is die-cast onto a moving transparent substrate. The inspection system continuously detects thin-film defects and classifies them with a neural network into categories of holes, bubbles, thinning, and gels, with a 1.2% false alarm rate, a 7.1% escape rate, and a classification accuracy of 96.1%. In slot die casting processes, defect types are indicative of an imbalance between the mass flow rate and the web speed, so, based on the classified defects, the inspection system informs the operator of corrective adjustments to these manufacturing parameters. Thickness uniformity is also critical to membrane functionality, so a real-time, full-field transmission densitometer has been created to measure the bi-directional thickness profile of the semi-transparent PEM between 25-400 micrometers. The local thickness of the 75 mm x 100 mm imaged area is determined by converting the optical density of the sample to thickness with the Beer-Lambert law. The PEM extinction coefficient is determined to be 1.4 D/mm, and the average thickness error is found to be 4.7%. Finally, the defect inspection and thickness profilometry systems are integrated into a specially designed graphical user interface for intuitive real-time operation and visualization.
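Using the figures reported above (extinction coefficient 1.4 D/mm, thickness range 25-400 micrometers), the Beer-Lambert conversion from optical density to local thickness reduces to a single division; the sample OD values below are illustrative:

```python
import numpy as np

def thickness_mm(optical_density, extinction=1.4):
    """Film thickness from optical density via Beer-Lambert: OD = k * t,
    with k = 1.4 D/mm as reported for the PEM material."""
    return np.asarray(optical_density) / extinction

# e.g. OD 0.035 -> 25 um and OD 0.56 -> 400 um, the reported thickness range
print(thickness_mm([0.035, 0.14, 0.56]) * 1000, "micrometres")
```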
657

Solving Partial Differential Equations Using Artificial Neural Networks

Rudd, Keith January 2013 (has links)
This thesis presents a method for solving partial differential equations (PDEs) using artificial neural networks. The method uses a constrained backpropagation (CPROP) approach for preserving prior knowledge during incremental training, allowing nonlinear elliptic and parabolic PDEs to be solved adaptively in non-stationary environments. Compared to previous methods that use penalty functions or Lagrange multipliers, CPROP reduces the dimensionality of the optimization problem by direct elimination, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and non-homogeneous terms. A computational complexity analysis shows that CPROP compares favorably to existing solution methods and leads to considerable computational savings in non-stationary environments. The CPROP-based approach is extended to a constrained integration (CINT) method for solving initial boundary value partial differential equations. The CINT method combines classical Galerkin methods with CPROP in order to constrain the ANN to approximately satisfy the boundary condition at each stage of integration. The advantage of the CINT method is that it is readily applicable to PDEs in irregular domains and requires no special modification for domains with complex geometries; furthermore, it provides a semi-analytical solution that is infinitely differentiable. The CINT method is demonstrated on two hyperbolic and one parabolic initial boundary value problems (IBVPs) that are widely used and have known analytical solutions. When compared with Matlab's finite element (FE) method, the CINT method is shown to achieve significant improvements in both computational time and accuracy. The CINT method is then applied to a distributed optimal control (DOC) problem: computing optimal state and control trajectories for a multiscale dynamical system comprised of many interacting dynamical systems, or agents. A generalized reduced gradient (GRG) approach is presented in which the agent dynamics are described by a small system of stochastic differential equations (SDEs). A set of optimality conditions is derived using the calculus of variations and used to compute the optimal macroscopic state and microscopic control laws; an indirect GRG approach is used to solve the optimality conditions numerically for large systems of agents. By assuming a parametric control law obtained from the superposition of linear basis functions, the agent control laws can be determined via set-point regulation, such that the macroscopic behavior of the agents is optimized over time based on multiple, interactive navigation objectives. Lastly, the CINT method is used to identify optimal root profiles in water-limited ecosystems. Knowledge of root depths and distributions is vital for accurately modelling and predicting hydrological ecosystem dynamics, so there is interest in accurately predicting these distributions for various vegetation types, soils, and climates. Numerical experiments were performed to identify root profiles that maximize transpiration over a 10-year period across a transect of the Kalahari, with storm types varied to show the dependence of the optimal profile on storm frequency and intensity. It is shown that more deeply distributed roots are optimal in regions where storms are more intense and less frequent, whereas shallower roots are advantageous in regions where storms are less intense and more frequent.
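A heavily hedged sketch of the general trial-solution idea behind constraint-preserving PDE networks (not the thesis's CPROP or CINT algorithms): multiply the network output by a factor that vanishes on the boundary, so the boundary conditions hold exactly at every iteration. Here a random-feature network and a linear least-squares fit stand in for the trained ANN, on a 1-D Poisson problem with known solution sin(pi x):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
f = -np.pi ** 2 * np.sin(np.pi * x)       # right-hand side of u'' = f

# random-feature "network": phi_j(x) = tanh(w_j * x + b_j)
w = rng.normal(size=20) * 4.0
b = rng.normal(size=20) * 2.0
phi = np.tanh(np.outer(x, w) + b)         # (101, 20) hidden-unit outputs

# multiply by x(1 - x) so every basis function satisfies u(0) = u(1) = 0
basis = (x * (1.0 - x))[:, None] * phi

# second derivatives of the basis functions via central differences
h = x[1] - x[0]
d2 = (basis[2:] - 2.0 * basis[1:-1] + basis[:-2]) / h ** 2

# least-squares fit of u'' = f at the interior collocation points
c, *_ = np.linalg.lstsq(d2, f[1:-1], rcond=None)
u = basis @ c
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```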
658

Coagulation Optimization to Minimize and Predict the Formation of Disinfection By-products

Wassink, Justin 04 January 2012 (has links)
The formation of disinfection by-products (DBPs) in drinking water has become an issue of greater concern in recent years. Bench-scale jar tests were conducted on a surface water to evaluate the impact of enhanced coagulation on the removal of organic DBP precursors and the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). The results of this testing indicate that enhanced coagulation practices can improve treated water quality without increasing coagulant dosage. The data generated were also used to develop artificial neural networks (ANNs) to predict THM and HAA formation. Testing of these models showed high correlations between the actual and predicted data. In addition, an experimental plan was developed to use ANNs for treatment optimization at the Peterborough pilot plant.
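A hypothetical sketch of an ANN of the kind described: a small regressor mapping jar-test water-quality parameters to THM formation. The feature set (coagulant dose, pH, DOC, UV254) and the synthetic data are illustrative assumptions, not the thesis's measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: coagulant dose (mg/L), pH, DOC (mg/L), UV254 (1/cm) -- assumed
X = rng.uniform([10, 5.5, 2, 0.02], [60, 8.5, 8, 0.30], size=(200, 4))
# synthetic THM response (ug/L) for demonstration only
y = 5 + 8 * X[:, 2] + 50 * X[:, 3] + rng.normal(0, 2, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```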
660

Intelligent Recognition of Texture and Material Properties of Fabrics

Wang, Xin 02 November 2011 (has links)
Fabrics are unique materials with various properties that affect their performance and end-uses. A computerized fabric property evaluation and analysis method plays a crucial role not only in the textile industry but also in scientific research: an accurate analysis and measurement of fabric properties provides a powerful tool for gauging product quality, assuring regulatory compliance, and assessing the performance of textile materials. This thesis investigates solutions for applying computerized methods to evaluate and intelligently interpret the texture and material properties of fabric in an inexpensive and efficient way. Firstly, a method is proposed that allows automatic recognition of the basic weave pattern and precise measurement of the yarn count. The yarn crossed-areas are segmented by a spatial-domain integral projection approach. Combining fuzzy c-means (FCM) and principal component analysis (PCA) on grey level co-occurrence matrix (GLCM) feature vectors extracted from the segments makes it possible to classify the detected segments into two clusters. Based on an analysis of texture orientation features, the yarn crossed-area states are determined automatically. An autocorrelation method is used to find weave repeats and correct detection errors. The method was validated using computer-simulated woven samples and real woven fabric images; the test samples have various yarn counts, appearances, and weave types. All weave patterns of the tested fabric samples are successfully recognized, and the computed yarn counts are consistent with the manual counts. Secondly, a methodology is presented for using high-resolution 3D surface data of fabric samples to measure surface roughness in a nondestructive and accurate way. A parameter FDFFT, the fractal dimension estimated from the 2D FFT of the 3D surface scan, is proposed as the indicator of surface roughness. The robustness of FDFFT, comprising rotation invariance and scale invariance, is validated on a number of computer-simulated fractional Brownian images. In order to evaluate the usefulness of FDFFT, a novel method of calculating standard roughness parameters from the 3D surface scan is introduced. According to the test results, FDFFT is demonstrated to be a fast and reliable parameter for measuring fabric roughness from 3D surface data. A neural network model using the back-propagation algorithm and FDFFT is also developed for predicting the standard roughness parameters, and it shows good performance experimentally. Finally, an intelligent approach for the interpretation of fabric objective measurements is proposed using support vector machine (SVM) techniques. Human expert assessments of fabric samples are used during the training phase in order to adjust the general system into an applicable model. Since the target output of the system is clear, the uncertainty that affects current subjective fabric evaluation does not degrade the performance of the proposed model. The support vector machine is one of the best solutions for handling high-dimensional data classification, and the complexity of the fabric property problem is dealt with effectively. The generalization ability of SVMs allows the user to implement and design the components separately. Sufficient cross-validation is performed to demonstrate the performance of the system.
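A hedged sketch in the spirit of the FDFFT parameter: radially average the 2-D FFT power spectrum of a height map, fit the log-log slope beta, and convert to a fractal dimension via the standard fBm-surface relation D = (8 - beta) / 2. The estimator details and the synthetic test surface are assumptions and may differ from the thesis's implementation:

```python
import numpy as np

def fdfft(height_map):
    """Spectral fractal-dimension estimate: fit the slope beta of the
    radially averaged 2-D power spectrum, return D = (8 - beta) / 2."""
    n = min(height_map.shape)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(height_map))) ** 2
    cy, cx = np.array(spec.shape) // 2
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), spec.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, n // 2)                  # skip DC, stay in-band
    beta = -np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
    return (8.0 - beta) / 2.0

def synth_surface(n=256, beta=2.5, seed=0):
    """Spectral synthesis of a rough test surface with known slope beta."""
    rng = np.random.default_rng(seed)
    f = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
    f[0, 0] = 1.0                                 # avoid division by zero at DC
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    return np.real(np.fft.ifft2(f ** (-beta / 2) * phase))

# expect D near (8 - 2.5) / 2 = 2.75 for the synthesized surface
print("estimated D:", fdfft(synth_surface(beta=2.5)))
```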
