  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Decision making in engineering prediction systems

Yasarer, Hakan January 1900 (has links)
Doctor of Philosophy / Department of Civil Engineering / Yacoub M. Najjar / Access to databases has become easier since the digital revolution, as large databases are increasingly available. Knowledge discovery in these databases via intelligent data analysis is a relatively young, interdisciplinary field. In engineering applications, there is a demand for turning low-level data-based knowledge into high-level knowledge via various data analysis methods, chiefly because collecting and analyzing databases can be expensive and time consuming. Where experimental or empirical data are already available, prediction models can characterize the desired engineering phenomena and/or eliminate unnecessary future experiments and their associated costs. Phenomenon characterization based on available databases has been carried out with Artificial Neural Networks (ANNs) for more than two decades. However, new paradigms are needed to improve the reliability of available ANN models and to optimize their predictions through a hybrid decision system. In this study, a new set of ANN modeling approaches/paradigms, along with a new method to tackle partially missing data (the Query method), are introduced for this purpose. Their potential use in a hybrid decision making system is examined on seven available databases from civil engineering applications. Overall, the proposed approaches showed notable improvements in prediction accuracy on the seven databases in terms of quantified statistical accuracy measures. The proposed methods can effectively characterize the general behavior of a specific engineering/scientific phenomenon and can be used collectively to optimize predictions with a reasonable degree of accuracy.
The proposed hybrid decision making system (HDMS), implemented in an Excel-based environment, can easily be applied by the end user to any available data-rich database without the need for extensive training.
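The hybrid-decision idea can be illustrated with a minimal sketch: several candidate models predict the same quantity, and their outputs are combined with weights based on each model's known validation error. The weighting scheme and all numbers below are hypothetical illustrations, not the HDMS algorithm itself.

```python
def hybrid_predict(predictions, val_errors):
    """Combine candidate model predictions, weighting each model
    by the inverse of its validation error (illustrative scheme)."""
    weights = [1.0 / e for e in val_errors]
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Three hypothetical ANN paradigms predicting the same engineering quantity;
# the most accurate model (smallest validation error) dominates the blend.
combined = hybrid_predict([10.0, 12.0, 11.0], [0.5, 1.0, 0.25])
print(combined)
```

A more faithful system would also gate out models whose inputs fall outside their training range, but inverse-error weighting conveys the core idea of letting quantified accuracy measures drive the combination.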
172

Analysing the behaviour of neural networks

Breutel, Stephan Werner January 2004 (has links)
A new method is developed to determine a set of informative and refined interface assertions satisfied by functions that are represented by feed-forward neural networks. Neural networks have often been criticized for their low degree of comprehensibility. It is difficult to have confidence in software components that have no clear and valid interface description. Precise and understandable interface assertions for a neural-network-based software component are required for safety-critical applications and for integration into larger software systems. The interface assertions we consider are of the form "if the input x of the neural network is in a region α of the input space, then the output f(x) of the neural network will be in the region β of the output space", and vice versa. We are interested in computing refined interface assertions, which can be viewed as computing the strongest pre- and postconditions a feed-forward neural network fulfills. Unions of polyhedra (polyhedra are the generalization of convex polygons to higher-dimensional spaces) are well suited for describing arbitrary regions of higher-dimensional vector spaces. Additionally, polyhedra are closed under affine transformations. Given a feed-forward neural network, our method produces an annotated neural network, where each layer is annotated with a set of valid linear inequality predicates. The main challenges for the computation of these assertions are solving a non-linear optimization problem and projecting a polyhedron onto a lower-dimensional subspace.
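Propagating general polyhedra exactly through a network is the hard part of the method described above; a much-simplified illustration of the same idea is to propagate an axis-aligned box (a special polyhedron) through a single affine layer, which yields a valid output region of the form "if x is in region α then W x + b is in region β". The weights below are hypothetical.

```python
def propagate_box(lo, hi, W, b):
    """Propagate interval bounds lo <= x <= hi through y = W x + b.
    For each output coordinate, the extreme value is attained at a
    corner of the box, chosen per weight sign."""
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        lo_i = bi + sum(w * (l if w >= 0 else h) for w, l, h in zip(row, lo, hi))
        hi_i = bi + sum(w * (h if w >= 0 else l) for w, l, h in zip(row, lo, hi))
        out_lo.append(lo_i)
        out_hi.append(hi_i)
    return out_lo, out_hi

# If x lies in the box [0,1] x [0,1], then W x + b lies in the returned box:
lo, hi = propagate_box([0.0, 0.0], [1.0, 1.0],
                       [[1.0, -2.0], [0.5, 0.5]], [0.0, 1.0])
print(lo, hi)  # → [-2.0, 1.0] [1.0, 2.0]
```

The thesis's approach is strictly more expressive, since unions of polyhedra can describe non-box regions and survive the non-linear activations; interval bounds are only the coarsest instance of such layer-wise annotation.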
173

A Comparison of Various Interpolation Techniques for Modeling and Estimation of Radon Concentrations in Ohio

Gummadi, Jayaram January 2013 (has links)
No description available.
174

A combined soft computing-mechanics approach to damage evaluation and detection in reinforced concrete beams

Al-Rahmani, Ahmed Hamid Abdulrahman January 1900 (has links)
Master of Science / Department of Civil Engineering / Hayder A. Rasheed / Damage detection and structural health monitoring are topics that have been receiving increased attention from researchers around the world. A structure can accumulate damage during its service life, which in turn can impair the structure’s safety. Currently, visual inspection is performed by experienced personnel in order to evaluate damage in structures. This approach is affected by the constraints of time and availability of qualified personnel. This study aims to facilitate damage evaluation and detection in concrete bridge girders without the need for visual inspection while minimizing field measurements. Simply-supported beams with different geometric, material and cracking parameters (cracks’ depth, width and location) were modeled in three phases using Abaqus finite element analysis software in order to obtain stiffness values at specified nodes. In the first two phases, beams were modeled using beam elements. Phase I included beams with a single crack, while phase II included beams with up to two cracks. For phase III, beams with a single crack were modeled using plane stress elements. The resulting damage databases from the three phases were then used to train two types of Artificial Neural Networks (ANNs). The first network type (ANNf) solves the forward problem of providing a health index parameter based on the predicted stiffness values. The second network type (ANNi) solves the inverse problem of predicting the most probable cracking pattern, where a unique analytical solution is not attainable. In phase I, beams with 3, 5, 7 and 9 stiffness nodes and a single crack were modeled. For the forward problem, ANNIf had the geometric, material and cracking parameters as inputs and stiffness values as outputs. This network provided excellent prediction accuracy measures (R2 > 99%). 
For the inverse problem, ANNIi had the geometric and material parameters, as well as the stiffness values, as inputs and the cracking parameters as outputs. Better prediction accuracy measures were achieved when more stiffness nodes were utilized in the ANN modeling process. It was also observed that decreasing the number of required outputs immensely improved the quality of predictions provided by the ANN. This network provided less accurate predictions (R2 = 68%) compared to ANNIf; however, ANNIi still provided reasonable results, considering the non-uniqueness of this problem's solution. In phase II, beams with 9 stiffness nodes and two cracks were modeled following the same procedure. ANNIIf provided excellent results (R2 > 99%), while ANNIIi had less accurate (R2 = 65%) but still reasonable predictions. Finally, in phase III, simple-span beams with 3, 5, 7 and 9 stiffness nodes and a single crack were modeled using plane stress elements. ANNIIIf provided excellent results (R2 > 99%), while ANNIIIi had less accurate (R2 = 65%) but still reasonable predictions; predictions in this phase were very accurate for the crack depth and location parameters (R2 = 97% and 99%, respectively). Further inspection showed that ANNIIIi provided more accurate predictions than ANNIi. Overall, the obtained results were reasonable and showed good agreement with the actual values, indicating that ANNs are an excellent approach to damage evaluation and a viable approach to obtaining the analytically unattainable solution of the inverse damage detection problem.
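The inverse problem described above maps measured stiffness back to the most probable cracking pattern. A minimal way to see the idea (not the thesis's ANNi networks) is a nearest-neighbour lookup in a forward-simulation database: find the simulated crack parameters whose stiffness profile best matches the measurement. All numbers are hypothetical.

```python
def inverse_lookup(measured_stiffness, database):
    """Return the cracking parameters whose simulated stiffness profile is
    closest (least squares) to the measured profile. The database maps
    (crack depth ratio, crack location ratio) -> stiffness at monitored nodes."""
    def dist(profile):
        return sum((m - s) ** 2 for m, s in zip(measured_stiffness, profile))
    return min(database, key=lambda params: dist(database[params]))

# Hypothetical forward database built from finite element runs:
db = {
    (0.2, 0.50): [0.95, 0.80, 0.95],
    (0.4, 0.50): [0.90, 0.60, 0.90],
    (0.2, 0.25): [0.80, 0.95, 0.97],
}
best = inverse_lookup([0.91, 0.62, 0.89], db)
print(best)  # → (0.4, 0.5)
```

Because two different cracking patterns can produce nearly identical stiffness profiles, such a lookup (like the inverse ANN) can only return the most probable pattern, which is exactly the non-uniqueness the abstract notes.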
175

Prediction of muscle activity during loaded movements of the upper limb

Tibold, R., Fuglevand, A. J. January 2015 (has links)
BACKGROUND: Accurate prediction of electromyographic (EMG) signals associated with a variety of motor behaviors could, in theory, serve as activity templates needed to evoke movements in paralyzed individuals using functional electrical stimulation. Such predictions should encompass complex multi-joint movements and include interactions with objects in the environment. METHODS: Here we tested the ability of different artificial neural networks (ANNs) to predict EMG activities of 12 arm muscles while human subjects made free movements of the arm or grasped and moved objects of different weights and dimensions. Inputs to the trained ANNs included hand position, hand orientation, and thumb grip force. RESULTS: The ANNs predicted EMG as well for tasks involving interactions with external loads as for unloaded movements. The ANN that yielded the best predictions was a feed-forward network consisting of a single hidden layer of 30 neural elements. For this network, the average coefficient of determination (R2 value) between predicted and actual EMG signals across all nine subjects and 12 muscles during movements that involved episodes of moving objects was 0.43. CONCLUSION: This reasonable accuracy suggests that ANNs could be used to provide an initial estimate of the complex patterns of muscle stimulation needed to produce a wide array of movements, including those involving object interaction, in paralyzed individuals.
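The coefficient of determination used to score the predicted EMG traces above is a standard quantity; a minimal sketch of its computation (with made-up signal values) is:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot,
    i.e. the fraction of signal variance explained by the prediction."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

# Hypothetical rectified-EMG samples and a close model prediction:
score = r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(score)  # → 0.98
```

An R2 of 0.43, as reported across subjects and muscles, means the network accounts for a bit under half of the EMG variance, which supports the paper's framing of the predictions as an initial estimate rather than a finished stimulation template.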
176

Predicting reliability in multidisciplinary engineering systems under uncertainty

Hwang, Sungkun 27 May 2016 (has links)
The proposed study develops a framework that can accurately capture and model input and output variables for multidisciplinary systems, mitigating the computational cost when uncertainties are involved. The dimension of the random input variables is reduced depending on the degree of correlation, calculated by relative entropy. Feature extraction methods, namely Principal Component Analysis (PCA) and the Auto-Encoder (AE) algorithm, are developed for cases where the input variables are highly correlated. When the correlation is low, the Independent Features Test (IndFeaT) is implemented as the feature selection method to select a critical subset of model features. Moreover, Artificial Neural Networks (ANNs), including the Probabilistic Neural Network (PNN), are integrated into the framework to correctly capture the complex response behavior of the multidisciplinary system with low computational cost. The efficacy of the proposed method is demonstrated with electro-mechanical engineering examples, including a solder joint and a stretchable patch antenna.
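The PCA step in the framework above reduces correlated random inputs to a few dominant directions. A minimal SVD-based sketch (synthetic data, not the study's solder-joint or antenna inputs) looks like this:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)                       # centre each input variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in the reduced space

# Three strongly correlated input variables that really vary along one direction:
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(100, 3))
Z = pca_reduce(X, 1)                              # 3-D inputs -> 1-D score
```

Downstream surrogate models (the ANN/PNN stage) then take the low-dimensional scores Z instead of the raw correlated inputs, which is where the computational savings under uncertainty come from.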
177

Data driven modelling for environmental water management

Syed, Mofazzal January 2007 (has links)
Management of water quality is generally based on physically-based equations or hypotheses describing the behaviour of water bodies. In recent years, models built on larger amounts of collected data have been gaining popularity; this modelling approach can be called data driven modelling. Observational data represent specific knowledge, whereas a hypothesis represents a generalization of this knowledge that implies and characterizes all such observational data. Traditionally, deterministic numerical models have been used for predicting flow and water quality processes in inland and coastal basins. These models generally take a long time to run and cannot be used as on-line decision support tools, which prevents imminent threats to public health, flooding, etc. from being predicted in time. Data driven models, in contrast, are data intensive, and there are some limitations to this approach: the extrapolation capability of data driven methods is a matter of conjecture, and the extensive data required for building a data driven model can be time and resource consuming to collect or, in the case of predicting the impact of a future development, unlikely to exist. The main objective of the study was to develop an integrated approach for rapid prediction of bathing water quality in estuarine and coastal waters. Faecal Coliforms (FC) were used as a water quality indicator, and two of the most popular data mining techniques, namely Genetic Programming (GP) and Artificial Neural Networks (ANNs), were used to predict the FC levels in a pilot basin. In order to provide enough data for training and testing the neural networks, a calibrated hydrodynamic and water quality model was used to generate input data for them. A novel non-linear data analysis technique, called the Gamma Test, was used to determine the data noise level and the number of data points required for developing smooth neural network models.
Details are given of the data driven models, the numerical models and the Gamma Test. Details are also given of a series of experiments undertaken to test data driven model performance for different numbers of input parameters and time lags. The response time of the receiving water quality to the input boundary conditions obtained from the hydrodynamic model has been shown to be useful knowledge for developing accurate and efficient neural networks. A natural phenomenon such as bacterial decay is affected by a whole host of parameters that cannot be captured accurately using deterministic models alone. Therefore, the data-driven approach was investigated using field survey data collected in Cardiff Bay to examine the relationship between bacterial decay and other parameters. Both the GP and ANN models gave similar, if not better, predictions of the field data in comparison with the deterministic model, with the added benefit of almost instant prediction of the bacterial levels for this recreational water body. The models were also investigated using idealised and controlled laboratory data for the velocity distributions along compound channel reaches, with idealised rods located on the floodplain to replicate large vegetation (such as mangrove trees).
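The Gamma Test mentioned above estimates the variance of the noise on an output before any model is fitted: for each point, half the squared output difference to its k-th nearest input-space neighbour is regressed against the squared input distance, and the intercept of that regression line is the noise estimate. The sketch below is a simplified illustration on synthetic data, not the thesis's implementation.

```python
import numpy as np

def gamma_test(X, y, p=10):
    """Simplified Gamma Test: regress half the squared output differences of
    the k-th nearest neighbours (k = 1..p) against their squared input
    distances and return the intercept, an estimate of output noise variance."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                    # exclude self-matches
    order = np.argsort(D, axis=1)[:, :p]           # p nearest neighbours
    idx = np.arange(len(X))
    deltas, gammas = [], []
    for k in range(p):
        nn = order[:, k]
        deltas.append(np.mean(D[idx, nn] ** 2))
        gammas.append(np.mean((y[nn] - y) ** 2) / 2.0)
    slope, intercept = np.polyfit(deltas, gammas, 1)
    return intercept

# Smooth function of two inputs plus Gaussian noise of standard deviation 0.05:
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0.0, 0.05, 500)
gamma = gamma_test(X, y)
print(gamma)  # intercept ~ noise variance (0.05 ** 2)
```

A small Gamma value relative to the output variance signals that a smooth model (such as an ANN) can in principle fit the data well, which is how the thesis uses the statistic to size the training set.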
178

Distributed online machine learning for mobile care systems

Prueller, Hans January 2014 (has links)
Telecare, and especially Mobile Care Systems, are becoming more and more popular. They have two major benefits: first, they drastically improve living standards and even health outcomes for patients; in addition, they allow significant cost savings in adult care by reducing the need for medical staff. A common drawback of current Mobile Care Systems is that most are rather stationary, firmly installed in patients' houses or flats, which keeps patients very near to or even inside their homes. There is also an emerging second category of Mobile Care Systems that are portable without restricting the moving space of the patients, but with the major drawback that they either have very limited computational abilities and only rather low classification quality or, most frequently, have a very short runtime on battery and therefore indirectly restrict the patients' freedom of movement once again. These drawbacks are inherently caused by the restricted computational resources and, mainly, the limitations of the battery-based power supply of mobile computer systems. This research investigates the application of novel Artificial Intelligence (AI) and Machine Learning (ML) techniques to improve the operation of Mobile Care Systems. As a result, based on the Evolving Connectionist Systems (ECoS) paradigm, an innovative approach for a highly efficient and self-optimising distributed online machine learning algorithm called MECoS - Moving ECoS - is presented. It balances the conflicting needs of providing a highly responsive, complex and distributed online learning classification algorithm while requiring only limited resources in the form of computational power and energy. This approach overcomes the drawbacks of current mobile systems and combines them with the advantages of powerful stationary approaches.
The research concludes that the practical application of the presented MECoS algorithm offers substantial improvements to the problems as highlighted within this thesis.
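ECoS-style learners grow their structure as data arrive instead of being trained in a fixed batch. The sketch below is a heavily simplified evolving prototype classifier in that spirit; the thresholds, update rule and labels are illustrative inventions, not the MECoS algorithm.

```python
class EvolvingClassifier:
    """Minimal ECoS-flavoured online learner: keep labelled prototypes,
    add a new one when no existing prototype is close enough (or the
    nearest one disagrees), otherwise nudge the winner toward the sample."""
    def __init__(self, radius=1.0, lr=0.2):
        self.radius, self.lr = radius, lr
        self.protos = []                           # list of ([features], label)

    def _nearest(self, x):
        best, best_d = None, float("inf")
        for i, (p, _) in enumerate(self.protos):
            d = sum((a - b) ** 2 for a, b in zip(p, x)) ** 0.5
            if d < best_d:
                best, best_d = i, d
        return best, best_d

    def learn(self, x, label):
        i, d = self._nearest(x)
        if i is None or d > self.radius or self.protos[i][1] != label:
            self.protos.append((list(x), label))   # grow a new node
        else:
            p, _ = self.protos[i]
            for j in range(len(p)):                # adapt the winning node
                p[j] += self.lr * (x[j] - p[j])

    def predict(self, x):
        i, _ = self._nearest(x)
        return self.protos[i][1]

# Hypothetical two-feature activity samples streamed one at a time:
clf = EvolvingClassifier(radius=1.0)
for x, label in [([0.0, 0.0], "rest"), ([5.0, 5.0], "walk"), ([0.2, 0.1], "rest")]:
    clf.learn(x, label)
print(clf.predict([4.8, 5.1]))  # → walk
```

Because each sample is processed once and only the touched prototype is updated, the per-step cost stays small and bounded, which is the property that makes this family of algorithms attractive on battery-powered mobile hardware.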
179

Predicting corporate credit ratings using neural network models

Frank, Simon James 12 1900 (has links)
Thesis (MBA (Business Management))--University of Stellenbosch, 2009. / ENGLISH ABSTRACT: For many organisations who wish to sell their debt, or investors looking to invest in an organisation, company credit ratings are an important surrogate measure for the marketability of, or risk associated with, a particular issue. Credit ratings are issued by a limited number of authorised companies – the predominant ones being Standard & Poor’s, Moody’s and Fitch – who have the necessary experience, skills and motive to calculate an objective credit rating. In the wake of some high-profile bankruptcies, there has been recent conjecture about the accuracy and reliability of current ratings. Issues relating specifically to the lack of competition in the rating market have been identified as possible causes of the poor timeliness of rating updates. Furthermore, the cost of obtaining (or updating) a rating from one of the predominant agencies has also been identified as a contributing factor. The high costs can lead to a conflict of interest, where rating agencies are obliged to issue more favourable ratings to ensure continued patronage. Based on these issues, there is sufficient motive to create more cost-effective alternatives for predicting corporate credit ratings. The intention of these alternatives is not to replace the relevancy of existing rating agencies, but rather to make the information more accessible, increase competition, and hold the agencies more accountable for their ratings through better transparency. The alternative method investigated in this report is the use of a backpropagation artificial neural network to predict corporate credit ratings for companies in the manufacturing sector of the United States of America.
Past research has shown that backpropagation neural networks are effective machine learning techniques for predicting credit ratings because no prior subjective or expert knowledge, or assumptions on model structure, are required to create a representative model. For the purposes of this study, only public information and data are used to develop a cost-effective and accessible model. The basis of the research is the assumption that all information (both quantitative and qualitative) required to calculate a credit rating for a company is contained within financial data from income statements, balance sheets and cash flow statements. The premise of the assumption is that any qualitative or subjective assessment of company creditworthiness will ultimately be reflected through financial performance. The results show that a backpropagation neural network, using 10 input variables on a data set of 153 companies, can classify 75% of the ratings accurately. The results also show that including collinear inputs in the model can affect its classification accuracy and prediction variance. It is also shown that latent projection techniques, such as partial least squares, can be used to reduce the dimensionality of the model without making any assumption about data relevancy. The output of these models, however, does not improve the classification accuracy achieved using selected uncorrelated inputs. / AFRIKAANSE OPSOMMING: For many organisations that want to sell debt instruments, or investors who want to invest in an enterprise, a company credit rating is an important surrogate measure for the marketability of, or the risk associated with, a particular issue. Credit ratings are issued by a limited number of approved companies – the most important being Standard & Poor’s, Moody’s and Fitch – all of which have the necessary experience, expertise and motive to calculate objective credit ratings.
In the aftermath of a number of high-profile bankruptcies, there has recently been speculation about the accuracy and reliability of current ratings. Issues relating specifically to the lack of competition in the rating market have been identified as a possible cause of the poor timeliness of rating updates. The cost of obtaining a rating (or an update of a rating) from one of the dominant agencies has also been identified as a further contributing factor. The high costs can lead to a conflict of interest when rating agencies come under pressure to issue favourable ratings in order to retain clients. Because of these issues, there is sufficient motivation to investigate more cost-effective alternatives for estimating corporate credit ratings. The aim of these alternatives is not to replace the relevance of existing rating agencies, but rather to make the information more accessible, to increase competition, and to hold the agencies more accountable for their ratings through better transparency. The alternative investigated in this report is the use of an artificial neural network to estimate the credit ratings of manufacturing companies in the USA. Previous research has shown that neural networks are effective machine learning techniques for estimating credit ratings because no prior knowledge, expert judgment or assumptions about the model structure are needed to build a representative model. For the purposes of this research report, only public information and data are used to build a cost-effective and accessible model. The basis of this research is the assumption that all the information (both quantitative and qualitative) needed to calculate a credit rating for an enterprise is contained in the financial data in the income statements, balance sheets and cash flow statements.
The assumption is thus that any qualitative or subjective assessment of a company’s creditworthiness will ultimately be reflected in its financial performance. The results show that a neural network with 10 input variables, on a data set of 153 companies, classifies 75% of the ratings accurately. The results also show that including collinear inputs in the model can affect the classification accuracy and the prediction variance of the model. It is further shown that latent projection techniques, such as partial least squares, can reduce the dimensionality of the model without making any assumptions about data relevance. The output of these models, however, does not improve on the classification accuracy achieved with the selected uncorrelated inputs. 121 pages.
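The finding that collinear inputs degrade classification accuracy motivates an input-selection pre-filter. A minimal sketch of one common approach (not the thesis's procedure, and with made-up financial-ratio columns) is to greedily keep only inputs whose absolute Pearson correlation with every already-kept input stays below a threshold:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def drop_collinear(columns, threshold=0.9):
    """Greedily keep the indices of columns whose |r| with every
    previously kept column is below the threshold."""
    kept = []
    for i, col in enumerate(columns):
        if all(abs(pearson(col, columns[j])) < threshold for j in kept):
            kept.append(i)
    return kept

ratios = [
    [1.0, 2.0, 3.0, 4.0],     # hypothetical ratio A
    [2.1, 3.9, 6.2, 7.9],     # nearly 2x ratio A -> collinear, dropped
    [4.0, 1.0, 3.0, 2.0],     # unrelated ratio -> kept
]
print(drop_collinear(ratios))  # → [0, 2]
```

Unlike partial least squares, which projects all inputs onto latent components, this filter keeps the surviving variables directly interpretable, matching the report's observation that selected uncorrelated inputs performed at least as well as the latent-projection models.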
180

An artificially-intelligent biomeasurement system for total hip arthroplasty patient rehabilitation

Law, Ewan James January 2012 (has links)
This study concerned the development and validation of a hardware and software biomeasurement system, designed to be used by physiotherapists, general practitioners and other healthcare professionals. The purpose of the system is to detect and assess gait deviation in the form of reduced post-operative range of movement (ROM) of the replacement hip joint in total hip arthroplasty (THA) patients. In so doing, the following original work is presented: the production of a wearable, microcontroller-equipped system that wirelessly relays accelerometer data on the subject’s key hip-position parameters to a host computer, which logs the data for later analysis; the development of an artificial neural network that processes the sensor data and outputs an assessment of the subject’s hip ROM in the flexion/extension and abduction/adduction rotations (forward and backward swing and outward and inward movement of the hip, respectively); and a review of the literature in the area of biomeasurement devices. A major data collection was carried out with twenty-one THA patients, in which the device output was compared to that of a Vicon motion analysis system, considered the ‘gold standard’ in clinical gait analysis. The Vicon system was used to show that the device developed did not itself affect the patient’s hip, knee or ankle gait-cycle parameters when in use, and that it produced measurements of hip flexion/extension and abduction/adduction closely approximating those of the Vicon system. In patients whose gait deviations manifested as reduced ROM of these hip parameters, it was demonstrated that the device was able to detect and assess the severity of these excursions accurately.
The results of the study substantiate that the system developed could be used as an aid for healthcare professionals in the following ways:
· To objectively assess gait deviation in the form of reduced flexion/extension and abduction/adduction in the human hip after replacement,
· To monitor patient hip ROM post-operatively,
· To assist in planning gait rehabilitation strategies related to these hip parameters.
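A basic building block of accelerometer-based joint measurement like that described above is recovering static tilt from the gravity components of the sensor reading. The sketch below is a minimal illustration with a hypothetical axis convention; a real system such as the one in this thesis also needs filtering (and here, a neural network) to handle dynamic gait data.

```python
import math

def tilt_angles(ax, ay, az):
    """Static tilt from accelerometer gravity components (units of g).
    Returns (flexion_extension, abduction_adduction) in degrees, using an
    illustrative convention: x forward, y lateral, z along the thigh."""
    flexion = math.degrees(math.atan2(ax, az))
    abduction = math.degrees(math.atan2(ay, az))
    return flexion, abduction

# Sensor tilted 30 degrees forward in the sagittal plane, no lateral tilt:
f, a = tilt_angles(math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
print(round(f, 1), round(a, 1))  # → 30.0 0.0
```

Range of movement can then be summarised as the difference between the largest and smallest flexion angles observed over a gait cycle, which is the kind of derived parameter the system reports to the clinician.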
