61

Development of a Water Cloud Radiance Model for Use in Training an Artificial Neural Network to Recover Cloud Properties from Sun Photometer Observations

Meehan, Patrick James 09 June 2021
As the planetary climate continues to evolve, it is important to build an accurate long-term climate record. State-of-the-art atmospheric science requires a variety of approaches to measuring atmospheric structure and composition. This thesis supports the possibility of inferring cloud properties from sun photometer observations of the cloud solar aureole using an artificial neural network (ANN). Training an ANN requires a large number of input and output parameter sets. A cloud radiance model is derived that takes into consideration the cloud depth, the mean size of the cloud water particles, and the cloud liquid water content. The model can also treat the wavelength of the incident sunlight and the cloud lateral dimensions as parameters; however, only one wavelength (550 nm) and one lateral dimension (500 m) are considered here to demonstrate its performance. The cloud radiance model is then used to generate solar aureole profiles corresponding to the cloud parameters as they would be observed using a sun photometer. Coefficients representing the solar aureole profiles may then be used as inputs to a trained ANN to infer the parameters used to generate the profile. This process is demonstrated through examples. A manuscript based on an early version of the cloud radiance model, submitted for possible publication, was deemed naïve by reviewers, ultimately leading to the improvements documented here. / Master of Science / The Earth's climate is driven by heat from the sun and the exchange of heat between the Earth and space. The role of clouds in this process is paramount. One aspect of "cloud forcing" is cloud structure and composition. The required measurements may be obtained by satellite or surface-based observations. Described here is the creation of a numerical model that calculates the disposition of individual bundles of light within water clouds. The clouds created in the model are all described by the mean size of the cloud water droplets, the amount of water in the cloud, and the cloud depth. Changing these factors relative to each other changes the amount of light that traverses the cloud and the angle at which the individual bundles of light leave the cloud, as measured using a device called a sun photometer. The measured amount and angle of the light bundles leaving the cloud are used to recover the parameters that characterize the cloud, i.e., the size of the cloud water droplets, the amount of water in the cloud, and the cloud depth. Two versions of the cloud radiance model are described.
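A minimal sketch of the photon-bundle (Monte Carlo) transport the general-audience abstract describes, for a plane-parallel cloud slab. The Henyey-Greenstein phase function, the extinction formula, and all parameter values are illustrative assumptions, not the thesis model:

```python
import numpy as np

def mc_cloud_bundles(depth_m=500.0, r_eff_um=10.0, lwc_gm3=0.3,
                     g=0.85, n_photons=2000, seed=0):
    """Trace photon bundles through a plane-parallel water cloud slab.

    Extinction follows the geometric-optics estimate
    beta = 3*LWC / (2*rho_w*r_eff); scattering directions are drawn from
    a Henyey-Greenstein phase function with asymmetry parameter g.
    Returns the transmitted fraction and exit direction cosines.
    """
    rng = np.random.default_rng(seed)
    rho_w = 1.0e6                                     # g/m^3, liquid water
    beta = 3.0 * lwc_gm3 / (2.0 * rho_w * r_eff_um * 1e-6)  # 1/m
    exit_mu = []
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                              # enter cloud top, heading down
        while True:
            z += mu * (-np.log(rng.random()) / beta)  # sample free path
            if z > depth_m:                           # transmitted through base
                exit_mu.append(mu)
                break
            if z < 0.0:                               # reflected out of the top
                break
            # Henyey-Greenstein sample of the scattering angle
            s = (1.0 - g*g) / (1.0 - g + 2.0*g*rng.random())
            cos_t = (1.0 + g*g - s*s) / (2.0*g)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t*cos_t))
            phi = 2.0 * np.pi * rng.random()
            mu = mu*cos_t + np.sqrt(max(0.0, 1.0 - mu*mu))*sin_t*np.cos(phi)
    return len(exit_mu) / n_photons, np.asarray(exit_mu)

frac, mus = mc_cloud_bundles()
print(f"transmitted fraction: {frac:.3f}")
```

Binning the exit direction cosines of the transmitted bundles by angle would yield the kind of solar aureole profile that the thesis summarizes with coefficients for ANN input.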
62

Machine-Learning based tool to predict Tire Noise using both Tire and Pavement Parameters

Spies, Lucas Daniel 10 July 2019
Tire-Pavement Interaction Noise (TPIN) is the main noise contributor for passenger vehicles traveling at speeds above 40 kph, making it one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive study since the 1970s, yet almost 50 years later there is still no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved and their highly complex nature. It is acknowledged that the main mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN is the only vehicle noise source strongly affected by an external factor: pavement roughness. Over the last decade, new machine learning algorithms to model TPIN have been implemented. However, their development relies on experimental data, and they do not provide strong physical insight into the problem. This research studied the correct configuration of such tools. More specifically, Artificial Neural Network (ANN) configurations were studied, with implementations based on the problem requirements (acoustic sound pressure prediction). A customized neuron configuration improved the ANN's TPIN prediction capabilities. During the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART road. The experimental data were used to develop an approach to account for the pavement profile when predicting TPIN. Finally, the new ANN configuration, along with the approach to account for pavement roughness, was combined with previous work to obtain the first reasonably accurate and complete tool for predicting tire noise. The tool takes as inputs 1) tire parameters, 2) pavement parameters, and 3) vehicle speed, and produces tire noise narrowband spectra over the 400-1600 Hz frequency range. / Master of Science / Tire-Pavement Interaction Noise (TPIN) is the main noise contributor for passenger vehicles traveling at speeds above 40 kph, making it one of the main contributors to environmental noise pollution in residential areas near highways. TPIN has been the subject of exhaustive study since the 1970s, yet almost 50 years later there is still no accurate way to model it. This is a consequence of the large number of noise generation mechanisms involved and their highly complex nature. It is acknowledged that the main mechanisms involve tire vibration and air pumping within the tire tread and pavement surface. Moreover, TPIN is the only vehicle noise source strongly affected by an external factor: pavement roughness. Over the last decade, machine learning algorithms inspired by the structure of the human brain have been implemented to model TPIN. However, their development relies on experimental data, and they do not provide strong physical insight into the problem. This research focused on the correct configuration of such machine learning algorithms for the specific task of TPIN prediction, and a customized configuration improved their TPIN prediction capabilities. During the second stage of this thesis, tire noise tests were undertaken for different tires on different pavement surfaces on the Virginia Tech SMART road. The experimental data were used to develop an approach to account for pavement roughness when predicting TPIN. Finally, the new machine learning configuration, along with the approach to account for pavement roughness, was combined with previous work to obtain the first reasonably accurate and complete computational tool for predicting tire noise. The tool takes as inputs 1) tire parameters, 2) pavement parameters, and 3) vehicle speed.
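A minimal sketch of the kind of multi-output ANN regression such a tool performs, mapping tire parameters, a pavement descriptor, and vehicle speed to a narrowband spectrum. The input list, the synthetic data, and the network size are illustrative assumptions, and scikit-learn stands in for the thesis's customized network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: real training would use measured SMART-road spectra.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(0.18, 0.30, n),   # tire width, m (illustrative parameter)
    rng.uniform(6.0, 12.0, n),    # tread depth, mm (illustrative parameter)
    rng.uniform(0.5, 3.0, n),     # pavement roughness index (assumed scale)
    rng.uniform(40.0, 100.0, n),  # vehicle speed, kph
])
freqs = np.arange(400, 1601, 25)  # narrowband bins, Hz
# Toy SPL surface: louder with speed, peaking near 800 Hz.
Y = 60 + 10*np.log10(X[:, [3]]/40) - 0.01*np.abs(freqs - 800)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(scaler.transform(X), Y)

spectrum = model.predict(scaler.transform([[0.22, 9.0, 1.5, 80.0]]))[0]
print(spectrum.shape)  # one predicted SPL value per frequency bin
```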
63

New Computational Methodologies for Microstructure Quantification

Catania, Richard Knight 26 May 2022
This work explores physics-based and data-driven methods for predicting the material properties of metallic microstructures, indicating the context and benefit for microstructure-sensitive design. From this, the use of shape moment invariants is offered as a solution for quantifying microstructure topology numerically from images. Converting image data to numeric values offers a substantial saving in computational time. The goal of quantifying the image data is to help index grains based on their crystallographic orientation. Additionally, individual grains are isolated in order to investigate the effect of their shapes. After the microstructures are quantified, two methods for identifying the grain boundaries are proposed to make the approach to material property prediction more comprehensive. The grain boundaries, as well as the grains of the quantified image, are used to train artificial neural networks capable of predicting the material's properties. This prediction technique can be used as a tool for a microstructure-sensitive approach to designing subtractively manufactured and Laser Engineered Net Shaping (LENS)-produced metallic materials. / Master of Science / Material properties depend on the underlying microstructural features. This work proposes numerical methods to quantify the topology and grain boundaries of metallic microstructures by developing physics-based and data-driven techniques for subtractively manufactured and Laser Engineered Net Shaping (LENS)-produced materials.
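A sketch of the shape moment invariants idea, computing the first four Hu invariants of a binary grain mask with plain NumPy. The thesis's exact invariant set and image pipeline are not specified here, so treat this as illustrative:

```python
import numpy as np

def hu_moments(img):
    """Compute the first four Hu shape moment invariants of a binary grain image.

    Central moments are normalized for scale, and the resulting invariants
    are also translation- and rotation-invariant, giving a compact numeric
    signature of grain shape.
    """
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x*img).sum()/m00, (y*img).sum()/m00

    def eta(p, q):  # scale-normalized central moment
        return ((x-xc)**p * (y-yc)**q * img).sum() / m00**(1 + (p+q)/2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02)**2 + 4*n11**2,
        (n30 - 3*n12)**2 + (3*n21 - n03)**2,
        (n30 + n12)**2 + (n21 + n03)**2,
        # (the three higher-order invariants are omitted for brevity)
    ])

# Example: an elliptical "grain" mask
yy, xx = np.mgrid[:64, :64]
grain = (((xx-32)/20.0)**2 + ((yy-32)/10.0)**2 <= 1).astype(float)
print(hu_moments(grain))
```

Because each grain collapses to a handful of numbers, a whole micrograph becomes a compact feature table rather than raw pixels, which is the computational saving the abstract points to.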
64

Modeling and simulation of VMD desalination process by ANN

Cao, W., Liu, Q., Wang, Y., Mujtaba, Iqbal 21 August 2015
In this work, an artificial neural network (ANN) model based on experimental data was developed to study the performance of the vacuum membrane distillation (VMD) desalination process under different operating parameters: the feed inlet temperature, the vacuum pressure, the feed flow rate, and the feed salt concentration. The proposed model was found to be capable of accurately predicting unseen data from the VMD desalination process. The correlation coefficient of the overall agreement between the ANN predictions and experimental data was more than 0.994. The calculated coefficient of variation (CV) was 0.02622, and the target and output data overlapped closely in the 3D generalization diagrams. The optimal operating conditions of the VMD process, giving maximum permeate flux at an acceptable CV value, can be obtained from a performance analysis of the ANN model based on the experiments.
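A hedged sketch of this modeling workflow: an MLP trained on the four operating parameters to predict permeate flux, then scored with a correlation coefficient and a CV. The synthetic data and the CV definition (RMSE over mean) are assumptions, not the paper's data set:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the experimental VMD data set (illustrative physics).
rng = np.random.default_rng(2)
n = 300
T = rng.uniform(55, 80, n)   # feed inlet temperature, deg C
P = rng.uniform(3, 9, n)     # vacuum (absolute) pressure, kPa
F = rng.uniform(20, 60, n)   # feed flow rate, L/h
C = rng.uniform(10, 40, n)   # feed salt concentration, g/L
flux = 0.5*(T-50)*(10-P)/10 * (1+0.004*F) * (1-0.005*C) + rng.normal(0, 0.3, n)

X = np.column_stack([T, P, F, C])
Xtr, Xte, ytr, yte = train_test_split(X, flux, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(Xtr, ytr)

pred = ann.predict(Xte)
r = np.corrcoef(yte, pred)[0, 1]                   # correlation coefficient
cv = np.sqrt(np.mean((yte - pred)**2)) / yte.mean()  # coefficient of variation
print(f"R = {r:.3f}, CV = {cv:.4f}")
```

Once trained, sweeping the four inputs over their ranges and taking the flux maximum is the kind of performance analysis the abstract uses to locate optimal operating conditions.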
65

Calibration of an Artificial Neural Network for Predicting Development in Montgomery County, Virginia: 1992-2001

Thekkudan, Travis Francis 18 July 2008
This study evaluates the effectiveness of an artificial neural network (ANN) at predicting locations of urban change at a countywide level by testing various calibrations of the Land Transformation Model (LTM). It utilizes the Stuttgart Neural Network Simulator (SNNS), a common medium through which ANNs run a back-propagation algorithm, to execute neural net training. This research explores the dynamics of socioeconomic and biophysical variables (derived from the 1990 Comprehensive Plan) and how they affect model calibration for Montgomery County, Virginia. Using NLCD Retrofit Land Use data for 1992 and 2001 as base layers for urban change, we assess the sensitivity of the model with policy-influenced variables from data layers representing road accessibility, proximity to urban lands, distance from urban expansion areas, slopes, and soils. Aerial imagery from 1991 and 2002 was used to visually assess changes at site-specific locations. Results show a percent correct metric (PCM) of 32.843% and a Kappa value of 0.319. A relative operating characteristic (ROC) value of 0.660 showed that the model predicted locations of change better than chance (0.50). The model performs consistently when compared with the PCM of a logistic regression model (31.752%) and with LTMs run in the absence of each driving variable (PCMs ranging from 27.971% to 33.494%). These figures are similar to results from other land use and land cover change (LUCC) studies sharing comparable landscape characteristics. Prediction maps from LTM forecasts driven by the six variables tested provide a satisfactory means of forecasting change inside dense urban areas and urban fringes for countywide urban planning. / Master of Science
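A short sketch of how the three reported metrics can be computed from an observed change map and a predicted change-probability surface. The equal-count thresholding convention is an assumption common in LTM studies, and the data here are synthetic:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# obs: observed change (0/1) per cell; prob: the model's change probability.
rng = np.random.default_rng(3)
obs = rng.integers(0, 2, 10000)
prob = np.clip(0.4*obs + rng.uniform(0, 0.6, obs.size), 0, 1)

# Threshold so the predicted-change count equals the observed count,
# the usual convention when reporting LTM percent-correct metrics.
k = obs.sum()
pred = np.zeros_like(obs)
pred[np.argsort(prob)[-k:]] = 1

pcm = 100 * (pred & obs).sum() / obs.sum()  # percent of change cells hit
kappa = cohen_kappa_score(obs, pred)
roc = roc_auc_score(obs, prob)
print(f"PCM = {pcm:.1f}%, Kappa = {kappa:.3f}, ROC = {roc:.3f}")
```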
66

A combined soft computing-mechanics approach to damage evaluation and detection in reinforced concrete beams

Al-Rahmani, Ahmed Hamid Abdulrahman January 1900
Master of Science / Department of Civil Engineering / Hayder A. Rasheed / Damage detection and structural health monitoring are topics that have been receiving increased attention from researchers around the world. A structure can accumulate damage during its service life, which in turn can impair the structure's safety. Currently, visual inspection is performed by experienced personnel in order to evaluate damage in structures. This approach is constrained by time and by the availability of qualified personnel. This study aims to facilitate damage evaluation and detection in concrete bridge girders without the need for visual inspection while minimizing field measurements. Simply supported beams with different geometric, material, and cracking parameters (crack depth, width, and location) were modeled in three phases using Abaqus finite element analysis software in order to obtain stiffness values at specified nodes. In the first two phases, beams were modeled using beam elements. Phase I included beams with a single crack, while Phase II included beams with up to two cracks. For Phase III, beams with a single crack were modeled using plane stress elements. The resulting damage databases from the three phases were then used to train two types of Artificial Neural Networks (ANNs). The first network type (ANNf) solves the forward problem of providing a health index parameter based on the predicted stiffness values. The second network type (ANNi) solves the inverse problem of predicting the most probable cracking pattern, for which a unique analytical solution is not attainable. In Phase I, beams with 3, 5, 7 and 9 stiffness nodes and a single crack were modeled. For the forward problem, ANNIf had the geometric, material, and cracking parameters as inputs and stiffness values as outputs. This network provided excellent prediction accuracy measures (R² > 99%). For the inverse problem, ANNIi had the geometric and material parameters as well as stiffness values as inputs and the cracking parameters as outputs. Better prediction accuracy was achieved when more stiffness nodes were utilized in the ANN modeling process. It was also observed that decreasing the number of required outputs immensely improved the quality of the ANN's predictions. This network provided less accurate predictions (R² = 68%) than ANNIf; however, ANNIi still provided reasonable results, considering the non-uniqueness of this problem's solution. In Phase II, beams with 9 stiffness nodes and two cracks were modeled following the same procedure. ANNIIf provided excellent results (R² > 99%) while ANNIIi had less accurate (R² = 65%) but still reasonable predictions. Finally, in Phase III, simple-span beams with 3, 5, 7 and 9 stiffness nodes and a single crack were modeled using plane stress elements. ANNIIIf provided excellent results (R² > 99%) while ANNIIIi had less accurate (R² = 65%) but still reasonable predictions. Predictions in this phase were very accurate for the crack depth and location parameters (R² = 97% and 99%, respectively). Further inspection showed that ANNIIIi provided more accurate predictions than ANNIi. Overall, the obtained results were reasonable and showed good agreement with the actual values, indicating that ANNs are an excellent approach to damage evaluation and a viable approach to obtaining the analytically unattainable solution of the inverse damage detection problem.
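A toy sketch of the forward/inverse ANN pairing described above, with a synthetic stand-in for the Abaqus damage database. The stiffness function, parameter ranges, and network sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the FE damage database: crack (depth ratio, location)
# -> stiffness values at 5 nodes of a simply supported beam.
rng = np.random.default_rng(4)
n = 2000
depth = rng.uniform(0.05, 0.5, n)   # crack depth / section height
loc = rng.uniform(0.1, 0.9, n)      # crack position / span
nodes = np.linspace(0.1, 0.9, 5)
# Stiffness drops near the crack, more for deeper cracks (illustrative).
K = 1.0 - depth[:, None] * np.exp(-30*(nodes - loc[:, None])**2)

params = np.column_stack([depth, loc])
ann_f = MLPRegressor((32, 32), max_iter=3000, random_state=0).fit(params, K)
ann_i = MLPRegressor((32, 32), max_iter=3000, random_state=0).fit(K, params)

k_meas = ann_f.predict([[0.3, 0.4]])              # forward: params -> stiffness
print("recovered crack:", ann_i.predict(k_meas))  # inverse: stiffness -> params
```

The inverse network is the weaker of the pair here too: several cracking patterns can produce nearly identical stiffness profiles, which mirrors the non-uniqueness the abstract notes.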
67

Predicting reliability in multidisciplinary engineering systems under uncertainty

Hwang, Sungkun 27 May 2016
The proposed study develops a framework that can accurately capture and model input and output variables for multidisciplinary systems, mitigating the computational cost when uncertainties are involved. The dimension of the random input variables is reduced according to the degree of correlation, calculated using relative entropy. Feature extraction methods, namely Principal Component Analysis (PCA) and the Auto-Encoder (AE) algorithm, are developed for the case where the input variables are highly correlated. When the correlation is low, the Independent Features Test (IndFeaT) is implemented as the feature selection method to select a critical subset of model features. Moreover, Artificial Neural Networks (ANNs), including the Probabilistic Neural Network (PNN), are integrated into the framework to correctly capture the complex response behavior of the multidisciplinary system at low computational cost. The efficacy of the proposed method is demonstrated with electro-mechanical engineering examples, including a solder joint and a stretchable patch antenna.
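A small sketch of the correlated-input branch of such a framework: PCA compressing correlated random inputs before a surrogate model is trained. The data and the 99%-variance cutoff are illustrative assumptions; the AE and IndFeaT branches are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA

# Correlated random inputs for a multidisciplinary model (illustrative:
# 10 observed variables driven by 3 underlying factors plus noise).
rng = np.random.default_rng(5)
latent = rng.normal(size=(1000, 3))
X = latent @ rng.normal(size=(3, 10)) + 0.05*rng.normal(size=(1000, 10))

# When inputs are highly correlated, PCA compresses them before the
# surrogate ANN is trained, cutting the cost of uncertainty propagation.
pca = PCA(n_components=0.99).fit(X)  # keep 99% of the variance
Z = pca.transform(X)
print(f"reduced {X.shape[1]} inputs to {Z.shape[1]} features")
```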
68

Predicting corporate credit ratings using neural network models

Frank, Simon James 12 1900
Thesis (MBA (Business Management))--University of Stellenbosch, 2009. / ENGLISH ABSTRACT: For many organisations who wish to sell their debt, or investors who are looking to invest in an organisation, company credit ratings are an important surrogate measure for the marketability or risk associated with a particular issue. Credit ratings are issued by a limited number of authorised companies – the predominant ones being Standard & Poor's, Moody's and Fitch – who have the necessary experience, skills and motive to calculate an objective credit rating. In the wake of some high-profile bankruptcies, there has been recent conjecture about the accuracy and reliability of current ratings. Issues relating specifically to the lack of competition in the rating market have been identified as possible causes of the poor timeliness of rating updates. Furthermore, the cost of obtaining (or updating) a rating from one of the predominant agencies has also been identified as a contributing factor. The high costs can lead to a conflict of interest, where rating agencies are obliged to issue more favourable ratings to ensure continued patronage. Based on these issues, there is sufficient motive to create more cost-effective alternatives for predicting corporate credit ratings. The intention of these alternatives is not to replace the existing rating agencies, but rather to make the information more accessible, increase competition, and hold the agencies more accountable for their ratings through better transparency. The alternative method investigated in this report is the use of a backpropagation artificial neural network to predict corporate credit ratings for companies in the manufacturing sector of the United States of America. Past research has shown that backpropagation neural networks are effective machine learning techniques for predicting credit ratings because no prior subjective or expert knowledge, or assumptions on model structure, are required to create a representative model. For the purposes of this study only public information and data are used to develop a cost-effective and accessible model. The basis of the research is the assumption that all information (both quantitative and qualitative) required to calculate a credit rating for a company is contained within the financial data from income statements, balance sheets and cash flow statements: any qualitative or subjective assessment of company creditworthiness will ultimately be reflected in financial performance. The results show that a backpropagation neural network, using 10 input variables on a data set of 153 companies, can classify 75% of the ratings accurately. The results also show that including collinear inputs in the model can affect its classification accuracy and prediction variance. It is also shown that latent projection techniques, such as partial least squares, can be used to reduce the dimensionality of the model without making any assumption about data relevancy; the output of these models, however, does not improve on the classification accuracy achieved using selected uncorrelated inputs. / 121 pages.
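A minimal sketch of this classification setup: a small backpropagation network mapping financial-ratio inputs to ordinal rating buckets. The synthetic ratios, bucket boundaries, and network size are assumptions; only the 10-input, 153-company shape mirrors the study:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Stand-in financial-statement ratios (the thesis used 10 public inputs
# for 153 US manufacturers; names and values here are illustrative).
rng = np.random.default_rng(6)
n = 153
X = rng.normal(size=(n, 10))  # e.g. leverage, coverage, margins, turnover...
y = np.digitize(X[:, 0] + 0.5*X[:, 1] + rng.normal(0, 0.5, n),
                [-1.0, 0.0, 1.0])  # 4 ordinal rating buckets

Xs = StandardScaler().fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                    random_state=0).fit(Xs, y)
print(f"training accuracy: {clf.score(Xs, y):.2f}")
```

Standardizing the inputs and pruning collinear ratios before training matters here: the abstract reports that collinear inputs degrade both classification accuracy and prediction variance.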
69

An artificially-intelligent biomeasurement system for total hip arthroplasty patient rehabilitation

Law, Ewan James January 2012
This study concerned the development and validation of a hardware and software biomeasurement system designed for use by physiotherapists, general practitioners and other healthcare professionals. The purpose of the system is to detect and assess gait deviation, in the form of reduced post-operative range of movement (ROM) of the replacement hip joint, in total hip arthroplasty (THA) patients. In so doing, the following original work is presented: production of a wearable, microcontroller-equipped system able to wirelessly relay accelerometer sensor data on the subject's key hip-position parameters to a host computer, which logs the data for later analysis; and development of an artificial neural network that processes the sensor data and outputs an assessment of the subject's hip ROM in the flexion/extension and abduction/adduction rotations (forward and backward swing, and outward and inward movement of the hip, respectively). A review of the literature on biomeasurement devices is also presented. A major data collection was carried out with twenty-one THA patients, in which the device output was compared with that of a Vicon motion analysis system, considered the 'gold standard' in clinical gait analysis. The Vicon system was used to show that the device did not itself affect the patient's hip, knee or ankle gait cycle parameters when in use, and that it produced measurements of hip flexion/extension and abduction/adduction closely approximating those of the Vicon system. For patients whose gait deviations manifested as reduced ROM of these hip parameters, the device was shown to detect and assess the severity of these excursions accurately. The results of the study substantiate that the system could be used as an aid for healthcare professionals in the following ways: to objectively assess gait deviation in the form of reduced flexion/extension and abduction/adduction of the replaced human hip; to monitor patient hip ROM post-operatively; and to assist in planning gait rehabilitation strategies related to these hip parameters.
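A sketch of the quasi-static inclination estimate that a thigh-mounted accelerometer permits, one plausible pre-processing step ahead of an ANN like the one described. The axis convention and the synthetic signal are illustrative assumptions, not the system's firmware:

```python
import numpy as np

def hip_flexion_angle(ax, ay, az):
    """Estimate thigh inclination (a flexion/extension proxy) from a
    thigh-mounted triaxial accelerometer, assuming quasi-static motion
    so that gravity dominates the measured acceleration (x along thigh).
    """
    return np.degrees(np.arctan2(ax, np.sqrt(ay**2 + az**2)))

# One gait cycle's worth of synthetic samples (in g units)
t = np.linspace(0, 1, 100)
ax, ay, az = np.sin(0.5*t), 0.05*np.cos(3*t), np.cos(0.5*t)
angles = hip_flexion_angle(ax, ay, az)
print(f"ROM estimate: {angles.max() - angles.min():.1f} degrees")
```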
70

A New Islanding Detection Method Based On Wavelet-transform and ANN for Inverter Assisted Distributed Generator

Guan, Zhengyuan 01 January 2015
Islanding has become a significant issue with the increasing use of distributed generators (DGs) in power systems. To detect islanding effectively after a DG disconnects from the main source, two passive islanding detection methods are first studied in this thesis: the THD&VU (total harmonic distortion and voltage unbalance) method and the wavelet-transform method. Compared with other passive methods, each has a small non-detection zone, but both depend on threshold limits that are very hard to set. Moreover, when these two methods were applied to practical signals distorted with noise, they performed worse than anticipated. Thus, a new composite intelligent method is presented in this thesis to overcome these drawbacks. The proposed method first uses the wavelet transform to detect the occurrence of events (both islanding and non-islanding), exploiting its sensitivity to sudden changes. The approach then utilizes an artificial neural network (ANN) to classify islanding and non-islanding events; in this process, three features based on THD&VU are extracted as the inputs of the ANN classifier. The performance of the proposed method was tested on two typical distribution networks, and the results for both cases indicate that the developed method can effectively detect islanding with a low misclassification rate.
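A sketch of the feature-extraction stage such a composite detector combines: wavelet detail energy for event detection plus a crude THD estimate for the ANN classifier. This assumes the PyWavelets package; the signal model, wavelet choice, and feature definitions are illustrative, not the thesis's exact features:

```python
import numpy as np
import pywt

def islanding_features(v, fs=10000):
    """Extract event-detection and classification features from a sampled
    point-of-common-coupling voltage: finest-scale wavelet detail energy
    (sensitive to sudden changes) and a rough THD estimate.
    """
    # Level-3 db4 decomposition; coeffs[-1] is the finest detail band.
    coeffs = pywt.wavedec(v, 'db4', level=3)
    d1_energy = np.sum(coeffs[-1]**2)

    # Crude THD estimate from the FFT magnitude spectrum.
    spec = np.abs(np.fft.rfft(v))
    f0 = np.argmax(spec[1:]) + 1  # fundamental bin
    harm = [spec[k*f0] for k in range(2, 6) if k*f0 < len(spec)]
    thd = np.sqrt(np.sum(np.square(harm))) / spec[f0]
    return d1_energy, thd

t = np.arange(0, 0.2, 1/10000)
v = np.sin(2*np.pi*60*t) + 0.05*np.sin(2*np.pi*180*t)  # 60 Hz + 3rd harmonic
print(islanding_features(v))
```

Features like these, computed over a sliding window after the wavelet stage flags an event, would form the input vector the ANN classifies as islanding or non-islanding.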
