201

Systematic Digitized Treatment of Engineering Line-Diagrams

Sui, T.Z., Qi, Hong Sheng, Qi, Q., Wang, L., Sun, J.W. 05 1900 (has links)
Yes / In engineering design, many functional relationships are difficult to express as a simple, exact mathematical formula. Instead, they are documented as line graphs (also called plot charts or curve diagrams) in engineering handbooks and textbooks. Because information in this form cannot be used directly in the modern computer-aided design (CAD) process, a way to represent it numerically is needed. In this paper, a data processing system for the numerical representation of line graphs in mechanical design is developed, covering the full process cycle from initial data acquisition to the final output of the required information. As well as providing curve fitting through cubic spline and neural network techniques, the system adopts a methodology novel to this application: Grey models. Grey theory has been used in various applications, normally involving time-series data, and is characterized by its ability to handle sparse data sets and to forecast. Two case studies were used to investigate the feasibility of Grey models for curve fitting. Comparisons with the two established techniques show that the accuracy was better than the cubic spline method but slightly lower than the neural network method. These results are highly encouraging, and future work to fully investigate the capability of Grey theory, and to exploit its sparse-data handling, is recommended.
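As an illustration of the GM(1,1) grey model this abstract applies to curve fitting, a minimal sketch follows. The data values are invented for illustration, not taken from the paper's case studies; the fitting steps (accumulated generation, background sequence, least-squares estimate of the development coefficient) are the standard GM(1,1) construction.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a short positive series x0.

    Returns the fitted series x0_hat of the same length.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                # background (mean) sequence
    B = np.column_stack([-z, np.ones_like(z)])  # design matrix
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0] # development coeff. a, grey input b
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty_like(x0)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                # inverse accumulation
    return x0_hat

# A near-exponential curve segment, as might be read off a design chart
data = [2.0, 2.2, 2.42, 2.662]
fit = gm11_fit(data)
```

Because GM(1,1) assumes an underlying exponential trend, a geometric-like series such as this one is fitted almost exactly even though only four points are given, which is the sparse-data strength the abstract highlights.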
202

Artificial Neural Networks based Modeling and Analysis of Semi-Active Damper System

Bhanot, Nishant 30 June 2017 (has links)
The suspension system is one of the most sensitive systems of a vehicle, as even minor changes affect the vehicle's dynamic behavior. These systems are designed to carry out multiple tasks, such as isolating the vehicle body from road/tire vibrations and achieving the desired ride and handling performance in both steady-state and limit handling conditions. The damping coefficient of the damper plays a crucial role in determining the overall frequency response of the suspension system. Considerable research has been carried out on semi-active damper systems because their damping coefficient can be varied without significant external power, giving them advantages over both passive and fully active suspension systems. Dampers behave as nonlinear systems at higher frequencies, so it has been difficult to develop accurate models covering their full range of motion. This study aims to develop a velocity-sensitive damper model using artificial neural networks, providing a 'black-box' model that encapsulates the nonlinear behavior of the damper. A feed-forward neural network was developed by testing a semi-active damper on a shock dynamometer at CenTiRe over multiple frequencies and damping ratios. This data was used for supervised training of the network with the MATLAB Neural Network Toolbox. The developed NN model was evaluated for its prediction accuracy. Further, the damper model was analyzed for feasibility in simulation and control by integrating it into a Simulink-based quarter-car model and applying the well-known skyhook control strategy. Finally, the effects on ride and handling dynamics were evaluated in CarSim by replacing the default damper model with the proposed model. It was established that this damper modeling technique can help evaluate damper behavior at both component and vehicle level without needing to develop a complex physics-based model.
This can be especially beneficial in the earlier stages of vehicle development. / Master of Science / The suspension system is one of the most sensitive systems of a vehicle, as even minor changes affect the vehicle's dynamic behavior. These systems are designed to carry out multiple tasks, such as absorbing shocks from the road and improving the handling of the vehicle for a smoother and safer drive. The firmness of the shock absorber/damper plays a crucial role in determining the overall behavior of the suspension system. Considerable research has been carried out on semi-active damper systems because the damper stiffness can be varied quickly and easily compared with passive and fully active damper systems. Dampers are complex systems to model, especially at high-speed operation, so it has been difficult to develop accurate mathematical models covering their full range of motion. This study aims to develop an accurate mathematical model for a semi-active damper using artificial neural networks. A semi-active damper was fabricated and tested on a shock dynamometer at CenTiRe at multiple speeds and stiffness values. This test data was used to train the mathematical model using the computer software MATLAB. The developed model was evaluated for its accuracy and further analyzed for feasibility of use in computer simulations. It was established that this damper modeling technique can evaluate damper behavior with high accuracy while still running simulations relatively quickly, whereas current simulations must compromise on either model accuracy or simulation speed. This can be especially beneficial in the earlier stages of vehicle development.
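A minimal NumPy sketch of the black-box idea described above: a one-hidden-layer tanh network trained by gradient descent to reproduce a nonlinear force-velocity curve. The damper curve here is a made-up saturating function with asymmetric compression/rebound slopes, standing in for the dynamometer data; the thesis uses the MATLAB Neural Network Toolbox rather than hand-written backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semi-active damper curve: force saturates with velocity,
# with different slopes in compression and rebound (not the thesis data).
v = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)          # velocity, m/s
force = np.where(v > 0, 1500 * np.tanh(1.5 * v), 900 * np.tanh(1.5 * v))
f_scaled = force / 1500.0                                # normalize targets

# One-hidden-layer tanh network trained by full-batch gradient descent
n_hidden = 8
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(v)
loss_before = np.mean((y0 - f_scaled) ** 2)

lr = 0.05
for _ in range(2000):
    h, y = forward(v)
    err = (y - f_scaled) / len(v)                        # dL/dy for mean-squared error
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h ** 2)                       # backprop through tanh
    gW1 = v.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y1 = forward(v)
loss_after = np.mean((y1 - f_scaled) ** 2)
```

The trained network is then a cheap drop-in function of velocity, which is what makes integrating such a model into a quarter-car Simulink loop attractive compared with a physics-based damper model.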
203

Modelling the chlorophenol removal from wastewater via reverse osmosis process using a multilayer artificial neural network with genetic algorithm

Mohammad, A.T., Al-Obaidi, Mudhar A.A.R., Hameed, E.M., Basheer, B.N., Mujtaba, Iqbal 04 July 2022 (has links)
Yes / Reverse Osmosis (RO) is one of the most widely used technologies for removing highly toxic compounds from wastewater. In this paper, a multilayer artificial neural network (MLANN) with a Genetic Algorithm (GA) has been used to build a comprehensive mathematical model that predicts the performance of an individual RO process in terms of chlorophenol removal from wastewater. The MLANN model has been validated against 70 experimental datasets collected from the open literature. Its predictions outperform those of several structures developed for the same chlorophenol removal by RO, as measured by the coefficient of correlation, the coefficient of determination (R2) and the average error (AVE). In this respect, two structures (4-2-2-1) and (4-8-8-1) were used to study the effect of the number of neurons in the hidden layers, based on the difference between the measured and ANN-predicted values. The model responses clearly confirm the success of the 4-8-8-1 network structure in estimating chlorophenol rejection over a wide range of the control variables, with high consistency between the ANN predictions and the experimental data.
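The three comparison metrics named in the abstract (correlation coefficient, R2, average error) can be computed as below. The measured/predicted rejection values are invented placeholders, not the paper's 70 experimental datasets.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Correlation coefficient R, coefficient of determination R2,
    and average absolute error, as used to compare ANN structures."""
    y_true = np.asarray(y_true, float); y_pred = np.asarray(y_pred, float)
    r = np.corrcoef(y_true, y_pred)[0, 1]
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    ave = np.mean(np.abs(y_true - y_pred))
    return r, r2, ave

# Illustrative chlorophenol rejection fractions (not the paper's data)
measured = [0.90, 0.85, 0.92, 0.78, 0.88]
predicted = [0.89, 0.86, 0.91, 0.79, 0.88]
r, r2, ave = regression_metrics(measured, predicted)
```

Comparing candidate structures such as 4-2-2-1 and 4-8-8-1 then amounts to computing these three numbers for each and preferring the structure with higher R and R2 and lower AVE.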
204

Artificial neural network based modelling and optimization of refined palm oil process

Tehlah, N., Kaewpradit, P., Mujtaba, Iqbal 28 July 2016 (has links)
Yes / The content and concentration of beta-carotene, tocopherol and free fatty acid are important parameters that affect the quality of edible oil. In simulation-based studies of the refined palm oil process, three variables are usually used as input parameters: feed flow rate (F), column temperature (T) and pressure (P). These parameters influence the output concentrations of beta-carotene, tocopherol and free fatty acid. In this work, we develop two different artificial neural network (ANN) models: the first based on three inputs (F, T, P) and the second on two inputs (T and P). Feed-forward back-propagation neural networks are designed with different architectures in the MATLAB toolbox, and the effects of the number of neurons and layers are examined. The correlation coefficient in this study is greater than 0.99, showing good agreement during both training and testing of the models. Moreover, the ANN is found to model the process accurately, predicting outputs very close to those of the ASPEN HYSYS simulator for the refined palm oil process. Optimization of the refined palm oil process is then performed using the ANN-based model to maximize the concentrations of beta-carotene and tocopherol in the residue and of free fatty acid in the distillate.
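Once a fast surrogate model replaces the simulator, the optimization step can be as simple as an exhaustive search over the operating window. The sketch below uses a made-up smooth function in place of the trained 2-input (T, P) ANN; the function, its optimum, and the variable ranges are all illustrative assumptions, not the paper's HYSYS model.

```python
import numpy as np

# Stand-in surrogate for the trained 2-input (T, P) ANN: a smooth function
# with a single interior optimum (illustrative only, not the HYSYS model).
def surrogate_beta_carotene(T, P):
    return 1.0 - 0.01 * (T - 120.0) ** 2 - 0.5 * (P - 0.3) ** 2

# Grid search over the operating window, mimicking ANN-based optimization
T_grid = np.linspace(100.0, 140.0, 81)   # column temperature, deg C (assumed range)
P_grid = np.linspace(0.1, 0.5, 41)       # pressure, arbitrary units (assumed range)
TT, PP = np.meshgrid(T_grid, P_grid, indexing="ij")
obj = surrogate_beta_carotene(TT, PP)
i, j = np.unravel_index(np.argmax(obj), obj.shape)
best_T, best_P = T_grid[i], P_grid[j]
```

The point of the surrogate is that evaluating it on thousands of grid points costs milliseconds, whereas each ASPEN HYSYS run would be far more expensive.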
205

A Wavelet-Based Rail Surface Defect Prediction and Detection Algorithm

Hopkins, Brad Michael 16 April 2012 (has links)
Early detection of rail defects is necessary for preventing derailments and costly damage to the train and railway infrastructure. A rail surface flaw can quickly propagate from a small fracture to a broken rail after only a few train cars have passed over it. Rail defect detection is typically performed using an instrumented car or a separate railway monitoring vehicle. Rail surface irregularities can be measured with accelerometers mounted to the bogie side frames or wheel axles. Typical signal processing algorithms for detecting defects in a vertical acceleration signal use a simple thresholding routine that considers only the amplitude of the signal. As a result, rail surface defects that produce low-amplitude acceleration signatures may not be detected, while special track components that produce high-amplitude signatures may be flagged as defects. The focus of this research is to develop an intelligent signal processing algorithm capable of detecting and classifying various rail surface irregularities, including defects and special track components. Three algorithms are proposed and validated using data collected from an instrumented freight car. Of the first two, one uses a windowed Fourier transform and the other the wavelet transform for feature extraction; both use an artificial neural network for feature classification. The third algorithm uses the wavelet transform to perform a regularity analysis on the signal. The algorithms are validated with the collected data and shown to outperform the threshold-based algorithm on the same data set. Proper training of the defect detection algorithm requires a large data set spanning operating conditions and physical parameters. To generate this training data, a dynamic wheel-rail interaction model was developed that relates defect geometry to the side-frame vertical acceleration signature.
The model was generated using combined system dynamics modeling, and was solved with a combined lumped- and distributed-parameter numerical approximation. The broken rail model was validated against real data collected from an instrumented freight car. The model was then used to train and validate the defect detection methodologies across various train and rail physical parameters and operating conditions. / Ph. D.
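To illustrate why a wavelet view beats raw amplitude thresholding for the kind of low-amplitude irregularity the abstract describes, here is a one-level Haar detail transform locating a small step in a synthetic acceleration trace. The signal and amplitude values are invented; the dissertation's algorithms use full wavelet regularity analysis and neural-network classification, not this single-level sketch.

```python
import numpy as np

def haar_detail(x):
    """One-level Haar wavelet detail coefficients of an even-length signal."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

# Synthetic side-frame acceleration: flat track with a small step change
# (illustrative, not the instrumented-car data). A fixed amplitude threshold
# on the raw signal would not flag this low-amplitude irregularity.
signal = np.concatenate([np.zeros(9), 0.2 * np.ones(7)])  # step between samples 8 and 9
d = haar_detail(signal)
defect_pair = int(np.argmax(np.abs(d)))   # index of the sample pair containing the step
```

The detail coefficients are zero everywhere the signal is locally constant and spike only at the irregularity, so detection depends on local regularity rather than on the signal exceeding a global amplitude threshold.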
206

An artificial neural network approach to transformer fault diagnosis

Zhang, Yuwen 22 August 2008 (has links)
This thesis presents an artificial neural network (ANN) approach to diagnosing and detecting faults in oil-filled power transformers based on dissolved gas-in-oil analysis. The goal of the research is to investigate the available transformer incipient-fault diagnosis methods and then develop an ANN approach for this purpose. The ANN classifier should not only detect the fault type but also judge cellulosic material breakdown, and it should accommodate more than one type of fault. This thesis describes a two-step ANN method that detects faults with or without cellulose involvement. Using a feedforward artificial neural network, the classifier was trained with back-propagation on samples collected from different failed transformers. It is shown that such a neural-network-based approach can yield high diagnosis accuracy. Several design alternatives and comparisons are also addressed. The final system has been successfully tested, exhibiting a classification accuracy of 95% for major fault type and 90% for cellulose breakdown. / Master of Science
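The two-step structure (fault type first, cellulose involvement second) can be sketched as a pipeline. The rule thresholds below are crude placeholders standing in for the two trained ANN stages; they are not the thesis's decision boundaries, and real dissolved-gas diagnosis uses ratio schemes far more careful than this.

```python
def diagnose_transformer(gases):
    """Two-step diagnosis scaffold: (1) fault type from hydrocarbon gases,
    (2) cellulose involvement from the CO2/CO ratio.

    All thresholds are illustrative placeholders, not the trained ANN
    decision boundaries from the thesis.
    """
    # Step 1: crude fault-type rules standing in for the first ANN
    if gases["C2H2"] > 50:
        fault = "arcing"
    elif gases["C2H4"] > 100:
        fault = "thermal"
    elif gases["H2"] > 150:
        fault = "partial discharge"
    else:
        fault = "normal"

    # Step 2: cellulose-breakdown check standing in for the second ANN.
    # A low CO2/CO ratio is commonly read as paper degradation.
    cellulose = (gases["CO2"] / gases["CO"]) < 3.0 if gases["CO"] > 0 else False
    return fault, cellulose

# Gas concentrations in ppm (made-up sample)
result = diagnose_transformer(
    {"H2": 60, "C2H2": 120, "C2H4": 40, "CO": 400, "CO2": 900})
```

In the thesis each step is a trained feedforward network rather than a rule table, which is what lets the classifier handle overlapping gas signatures and multiple simultaneous faults.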
207

Control Power Optimization using Artificial Intelligence for Hybrid Wing Body Aircraft

Chhabra, Rupanshi 15 September 2015 (has links)
Traditional methods of control allocation optimization have had difficulty exploiting the full potential of a large array of control surfaces. This research investigates employing artificial intelligence methods such as neurocomputing for the control allocation optimization problem of Hybrid Wing Body (HWB) aircraft concepts, minimizing control power, hinge moments, and actuator forces while keeping system weights within acceptable limits. The main objective is to develop a proof-of-concept process demonstrating the potential of neurocomputing for optimizing actuation power in aircraft featuring multiple independently actuated control surfaces and wing flexibility. An aeroelastic Open Rotor Engine Integration and Optimization (OREIO) model was used to generate a database of hinge moment and actuation power characteristics for an array of control surface deflections. An artificial neural network incorporating a genetic algorithm then performs control allocation optimization for an example aircraft. For the half-span model, the optimization improved the sum of the required hinge moments by more than 11% over the best MSC Nastran solution; for the full-span model, the same approach improved the result by nearly 14%. The results were improved by 23% and 27% over the elevator-only case for the half-span and full-span models, respectively. The methods developed and applied here can be used for a wide variety of aircraft configurations. / Master of Science
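A toy version of the allocation problem: several surfaces must jointly produce a required pitching moment while the total hinge-moment cost is minimized. The effectiveness and cost numbers are invented, and a minimal truncation-selection evolutionary loop stands in for the GA/neural-network machinery of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy allocation problem: n surfaces must jointly produce a required pitching
# moment; minimize total hinge-moment cost (all numbers illustrative).
n = 6
effectiveness = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])   # moment per degree
hinge_cost = np.array([0.2, 0.3, 0.5, 0.7, 1.0, 1.4])      # cost per degree^2
required_moment = 8.0

def cost(defl):
    moment = effectiveness @ defl
    # quadratic hinge cost plus a penalty for missing the required moment
    return hinge_cost @ defl**2 + 50.0 * (moment - required_moment) ** 2

# Minimal elitist evolutionary loop standing in for the GA
pop = rng.uniform(-10, 10, (40, n))
best_before = min(cost(ind) for ind in pop)
for _ in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]                 # truncation selection
    children = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.3, (30, n))
    pop = np.vstack([parents, children])                    # elitism: parents survive
best_after = min(cost(ind) for ind in pop)
```

Because the best parents always survive, the best cost is monotonically non-increasing, and the search concentrates deflection on the surfaces with high effectiveness-to-cost ratio, which is the qualitative behavior the optimization in the thesis exploits across many surfaces.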
208

Understanding Fixed Object Crashes with SHRP2 Naturalistic Driving Study Data

Hao, Haiyan 30 August 2018 (has links)
Fixed-object crashes have long been considered major roadway safety concerns. While previous studies tended to address such crashes in the context of roadway departures and relied heavily on police-reported accident data, this study integrated SHRP2 NDS and RID data for analysis, fully depicting the scenarios before, during, and after a crash. A total of 1,639 crash and near-crash events and 1,050 baseline events were acquired. Three analysis methods, logistic regression, support vector machine (SVM) and artificial neural network (ANN), were employed for two responses: crash occurrence and severity level. The logistic regression analyses identified 16 and 10 significant variables at the 0.1 significance level, relating to the driver, roadway, environment, etc., for the two responses respectively, and led to a series of findings regarding the effects of explanatory variables on fixed-object event occurrence and severity level. SVM classifiers and ANN models were also constructed to predict the two responses, and sensitivity analyses were performed for the SVM classifiers to infer the contributing effects of input variables. All three methods obtained satisfactory prediction performance, around 88% for fixed-object event occurrence and 75% for event severity level, indicating the effectiveness of NDS event data for depicting crash scenarios and for roadway safety analyses. / Master of Science / Fixed-object crashes happen when a single vehicle strikes a roadway feature such as a curb or a median, or runs off the road and hits a roadside feature such as a tree or utility pole. They have long been considered major highway safety concerns due to their high frequency, fatality rate, and associated property cost. Previous studies of fixed-object crashes tended to address them in the context of roadway departures and relied heavily on police-reported accident data.
However, many fixed-object crashes involve objects in the roadway such as traffic control devices and roadway debris. Police-reported accident data are weak at depicting the scenarios before and during crashes, and many minor crashes often go unreported. The Second Strategic Highway Research Program (SHRP2) Naturalistic Driving Study (NDS) is the largest NDS project launched to date, aimed at studying driver behavior and performance-related safety problems under real-world conditions. The data acquisition systems (DASs) installed on participating vehicles continuously collect vehicle kinematics, roadway, traffic, environment, and driver behavior data, enabling researchers to examine crash scenarios closely. This study integrated SHRP2 NDS and Roadway Information Database (RID) data to conduct a comprehensive analysis of fixed-object crashes. A total of 1,639 crash and near-crash events involving fixed objects and animals, and 1,050 baseline events, were used. Three analysis methods, logistic regression, support vector machine (SVM) and artificial neural network (ANN), were employed for two responses: crash occurrence and severity level. The logistic regression analyses identified 16 and 10 variables at the 0.1 significance level for the fixed-object event occurrence and severity level models respectively, and the influence of the explanatory variables is discussed in detail. SVM classifiers and ANN models were also constructed to predict fixed-object crash occurrence and severity level, and sensitivity analyses were performed for the SVM classifiers to infer the contributing effects of input variables. All three methods achieved satisfactory prediction accuracies of around 88% for crash occurrence and 75% for crash severity level, suggesting the effectiveness of NDS event data for depicting crash scenarios and for roadway safety analyses.
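A minimal sketch of the logistic-regression step for the binary crash/baseline response: fit by gradient descent on the log-loss and score by classification accuracy. The two predictors and the synthetic outcome model are invented stand-ins for the NDS event variables, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the NDS event data: two predictors (e.g. a normalized
# speed measure and a distraction flag) and a binary crash/baseline outcome.
n = 400
X = np.column_stack([rng.normal(0, 1, n), rng.integers(0, 2, n).astype(float)])
logits_true = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.5       # assumed true model
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits_true))).astype(float)

# Logistic regression fitted by gradient descent on the mean log-loss
Xb = np.column_stack([np.ones(n), X])                   # add intercept column
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-(Xb @ w)))
    w -= 0.1 * Xb.T @ (p - y) / n                       # gradient of mean log-loss

accuracy = np.mean((1 / (1 + np.exp(-(Xb @ w))) > 0.5) == (y == 1))
```

The fitted coefficients play the role of the study's significance findings: a positive coefficient on a predictor means that predictor raises the odds of a fixed-object event, and the in-sample accuracy is the analogue of the roughly 88% occurrence-prediction performance reported.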
209

<b>Benchmarking tool development for commercial buildings' energy consumption using machine learning</b>

Paniz Hosseini (18087004) 03 June 2024 (has links)
<p dir="ltr">This thesis investigates approaches to classify and predict the energy consumption of commercial office buildings using external and performance benchmarking, with the goal of reducing energy consumption. External benchmarking in this context accounts for the climate zone, which significantly impacts a building's energy needs. Performance benchmarking recognizes that different types of commercial buildings have distinct energy consumption patterns, so benchmarks are established separately for each building type to provide relevant comparisons.</p><p dir="ltr">The first part of this thesis establishes a benchmarking baseline against which buildings' consumption levels can be compared. This involves simulating buildings according to standards and developing a model based on the results. The software tools OpenStudio and EnergyPlus were used to simulate buildings representative of different-sized structures and to organize the benchmark energy consumption baseline. The simulations covered two opposing climate zones, one cool and humid and one hot and dry. To ensure authenticity, the building envelope, operational hours, and HVAC systems were matched to ASHRAE standards.</p><p dir="ltr">Secondly, a neural network model is developed to predict building consumption from the simulation results, trained on a comprehensive set of environmental characteristics, including ambient temperature, relative humidity, solar radiation, wind speed, and the building's HVAC (Heating, Ventilation, and Air Conditioning) heating and cooling load data. The model attains an overall accuracy of 99.54%, with training, validation, and test accuracies of about 99.6%, 99.12%, and 99.42%, respectively, for the predicted energy consumption.
The validation check confirms that this accuracy represents the model's optimal performance. A parametric study of the model's outputs shows the dependency of energy consumption on the inputs, including the weather data and the size of the building, supporting the reliability of the trained model. A graphical user interface (GUI) was built to enhance accessibility and interaction for users. In this thesis, we have successfully developed a tool that predicts the energy consumption of office buildings with an accuracy of 99.54%. Our investigation shows that temperature, humidity, solar radiation, wind speed, and building size have varying impacts on energy use; wind speed is the least influential component for low-rise buildings but can have a more substantial effect on high-rise structures.</p>
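The one-at-a-time parametric study described above can be sketched as follows. The linear surrogate and its coefficients are invented for illustration (chosen so that wind speed is the least influential input, matching the low-rise finding); the thesis's model is a trained neural network, not this weighted sum.

```python
# Stand-in for the trained consumption model: a fixed linear surrogate over
# normalized inputs (coefficients are illustrative, not the thesis model).
weights = {
    "temperature": 0.45,
    "humidity": 0.20,
    "solar_radiation": 0.25,
    "wind_speed": 0.05,
    "building_size": 0.60,
}

def predicted_load(x):
    return sum(weights[k] * x[k] for k in weights)

# One-at-a-time parametric study: bump each normalized input by +0.1
# from a common baseline and record the change in predicted load.
base = {k: 0.5 for k in weights}
base_load = predicted_load(base)
sensitivity = {}
for k in weights:
    bumped = dict(base, **{k: 0.6})
    sensitivity[k] = predicted_load(bumped) - base_load

least_influential = min(sensitivity, key=sensitivity.get)
```

Ranking the inputs by these sensitivities is exactly the kind of comparison that supports statements like "wind speed is the least influential component for low-rise buildings."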
210

Understanding Human Imagination Through Diffusion Model

Pham, Minh Nguyen 22 December 2023 (has links)
This paper develops a possible explanation for a facet of visual processing inspired by the biological brain's mechanisms for information gathering. The primary focus is on how humans observe elements in their environment and reconstruct visual information within the brain. Drawing on insights from diverse studies, personal research, and biological evidence, the study posits that the human brain captures high-level feature information from objects rather than replicating exact visual details, as is the case in digital systems. Subsequently, the brain can either reconstruct the original object using its specific features or generate an entirely new object by combining features from different objects, a process referred to as "Imagination." Central to this process is the "Imagination Core," a dedicated unit housing a modified diffusion model. This model allows high-level features of an object to be employed for tasks like recreating the original object or forming entirely new objects from existing features. The experimental simulation, conducted with an Artificial Neural Network (ANN) incorporating a Convolutional Neural Network (CNN) for high-level feature extraction within the Information Processing Network and a Diffusion Network for generating new information in the Imagination Core, demonstrated the ability to create novel images based solely on high-level features extracted from previously learned images. This experimental outcome substantiates the theory that human learning and storage of visual information occur through high-level features, enabling us to recall events accurately, and these details are instrumental in our imaginative processes. / Master of Science / This study takes inspiration from how our brains process visual information to explore how we see and imagine things. Think of it like a digital camera, but instead of saving every tiny detail, our brains capture the main features of what we see. 
These features are then used to recreate images or even form entirely new ones through a process called "Imagination." It is like when you remember something from the past – your brain does not store every little detail but retains enough to help you recall events and create new ideas. In our study, we created a special unit called the "Imagination Core," using a modified diffusion model, to simulate how this process works. We trained an Artificial Neural Network (ANN) with a Convolutional Neural Network (CNN) to extract the main features of objects and a Diffusion Network to generate new information in the Imagination Core. The exciting part? We were able to make the computer generate new images it had never seen before, only using details it learned from previous images. This supports the idea that, like our brains, focusing on important details helps us remember things and fuels our ability to imagine new things.
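The diffusion machinery inside the "Imagination Core" rests on the standard forward (noising) process, which can be written in closed form for any timestep. The sketch below shows only that forward step with an illustrative variance schedule; the thesis's generative direction (the learned reverse process conditioned on CNN features) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward (noising) step of a diffusion model: a clean feature vector x0 is
# blended with Gaussian noise according to a variance schedule. The schedule
# values here are illustrative, not the thesis's configuration.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)          # cumulative signal-retention factor

def q_sample(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=8)                      # a mock high-level feature vector
eps = rng.normal(size=8)
x_mid = q_sample(x0, 50, eps)                # partially noised
x_late = q_sample(x0, T - 1, eps)            # almost pure noise
```

Training teaches a network to invert this process; generation then starts from noise and denoises step by step, optionally conditioned on high-level features, which is the mechanism the "Imagination Core" uses to recombine features into novel images.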
