  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Environmental site characterization via artificial neural network approach

Mryyan, Mahmoud January 1900 (has links)
Doctor of Philosophy / Department of Civil Engineering / Yacoub M. Najjar / This study explored the potential use of ANNs for profiling and characterization of various environmental sites. A static ANN with a back-propagation algorithm was used to model the environmental contamination at a hypothetical data-rich contaminated site. The performance of the ANN profiling model was then compared with eight known profiling methods. The comparison showed that the ANN-based models yielded the lowest error values in the 2-D and 3-D comparison cases. The ANN-based profiling models also produced the best contaminant distribution contour maps when compared to the actual maps. Along with the fact that ANN is the only profiling methodology that allows for efficient 3-D profiling, this study clearly demonstrates that the ANN-based methodology, when properly used, has the potential to provide the most accurate predictions and site profiling contour maps for a contaminated site. An ANN with a back-propagation learning algorithm was also utilized in the site characterization of contaminants at the Kansas City landfill. The use of ANN profiling models made it possible to obtain reliable predictions about the location and concentration of lead and copper contamination at the landfill site. The resulting profiles can be used to determine additional sampling locations, if needed, for both groundwater and soil in any contaminated zones. Back-propagation networks were also used to characterize the MMR Demo 1 site. The purpose of the developed ANN models was to predict the concentrations of perchlorate at the MMR from appropriate input parameters. To determine the most appropriate input parameters for this model, three different cases were investigated using nine potential input parameters. The ANN modeling used in this case demonstrates the neural network’s ability to accurately predict perchlorate contamination using multiple variables.
When comparing the trends observed using the ANN-generated data and the actual trends identified in the MMR 2006 System Performance Monitoring Report, both agree that perchlorate levels are decreasing due to the use of the Extraction, Treatment, and Recharge (ETR) systems. This research demonstrates the advantages of ANN site characterization modeling in contrast with traditional modeling schemes. Accordingly, characterization task-related uncertainties of site contaminations were curtailed by the use of ANN-based models.
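The profiling approach described above can be sketched in miniature: a small back-propagation network learns contaminant concentration as a function of 2-D sampling coordinates on synthetic data. The plume shape, network size, and learning rate below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "site": contaminant concentration as a smooth plume over (x, y).
def plume(x, y):
    return np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.1)

X = rng.uniform(0, 1, size=(200, 2))       # sampled borehole locations
t = plume(X[:, 0], X[:, 1])[:, None]       # measured concentrations

# One-hidden-layer network trained with plain back-propagation on MSE loss.
H, lr = 16, 0.1
W1 = rng.normal(0, 1.0, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1.0, (H, 1)); b2 = np.zeros(1)
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    y = h @ W2 + b2
    err = y - t                            # backward pass (chain rule)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2      # profile at the sampled locations
rmse = float(np.sqrt(np.mean((pred - t) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```

Once trained, the same forward pass can be evaluated on a dense grid of coordinates to draw the contour maps the abstract refers to.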
162

The art of forecasting – an analysis of predictive precision of machine learning models

Kalmár, Marcus, Nilsson, Joel January 2016 (has links)
Forecasting is used for decision making, and unreliable predictions can instill a false sense of confidence. Traditional time series modelling is a statistical art form rather than a science, and errors can occur due to limitations of human judgment. In minimizing the risk of falsely specifying a process, the practitioner can make use of machine learning models. In an effort to find out if there's a benefit in using models that require less human judgment, the machine learning models Random Forest and Neural Network have been used to model a VAR(1) time series. In addition, the classical time series models AR(1), AR(2), VAR(1) and VAR(2) have been used as a comparative foundation. The Random Forest and Neural Network are trained and ultimately the models are used to make predictions evaluated by RMSE. All models yield scattered forecast results except for the Random Forest, which steadily yields comparatively precise predictions. The study shows that there is definitive benefit in using Random Forests to eliminate the risk of falsely specifying a process, and they do in fact provide better results than a correctly specified model.
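The evaluation setup described above can be sketched as follows: simulate a VAR(1) process, fit the classical model by least squares, and score one-step-ahead forecasts by RMSE. The coefficient matrix and noise level are invented for illustration, and the Random Forest and Neural Network comparisons are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a stable bivariate VAR(1): x_t = A x_{t-1} + e_t.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
T = 500
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(0, 0.1, 2)

train, test = x[:400], x[400:]

# Fit VAR(1) by least squares: regress x_t on x_{t-1}.
Y, Z = train[1:], train[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T

# One-step-ahead forecasts on the hold-out set, scored by RMSE.
pred = test[:-1] @ A_hat.T
rmse = float(np.sqrt(np.mean((test[1:] - pred) ** 2)))
print(f"estimated coefficients:\n{np.round(A_hat, 2)}\nout-of-sample RMSE: {rmse:.3f}")
```

Any competing model (a Random Forest, a neural network, or a mis-specified AR(1)) would be scored on the same hold-out RMSE for a fair comparison.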
163

Towards the discrimination of milk (origin) applied in cheddar cheese manufacturing through the application of an artificial neural network approach on Lactococcus lactis profiles

Venter, P., Venter, T., Luwes, N., De Smidt, O., Lues, J.F.R. January 2013 (has links)
Published Article / An artificial neural network (ANN) that is able to distinguish between Cheddar cheese produced with milk from mixed and single breed sources was designed. Samples of each batch (4 pure Ayrshire/4 mixed with no Ayrshire milk) were ripened for 92 days and analysed every 14 days. A novel ANN was designed and applied which, based only on Lactococcus lactis counts, provided an acceptable classification of the cheeses. The ANN consisted of a multi-layered network with supervised training arranged in an ordered hierarchy of layers, in which connections were allowed only between nodes in immediately adjacent layers.
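A minimal sketch of such a feedforward classifier, trained on synthetic Lactococcus lactis count trajectories; the count levels, decline rates, and network size are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Lactococcus lactis counts (log CFU/g) at 7 ripening sampling
# points for two classes: pure-Ayrshire (slow decline) vs mixed-milk cheese.
days = 7
def batch(n, slope):
    t = np.arange(days)
    return 8.0 + rng.normal(0, 0.2, (n, 1)) - slope * t + rng.normal(0, 0.3, (n, days))

X = np.vstack([batch(40, 0.15), batch(40, 0.45)])
X = (X - X.mean(0)) / X.std(0)                 # standardise features
y = np.array([0] * 40 + [1] * 40)[:, None]     # 0 = pure, 1 = mixed

# Multi-layer net with connections only between immediately adjacent layers.
W1 = rng.normal(0, 0.5, (days, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(3000):                          # supervised training (backprop)
    h = np.tanh(X @ W1 + b1)
    p = sig(h @ W2 + b2)
    d = (p - y) / len(X)                       # cross-entropy output gradient
    dh = (d @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * (h.T @ d); b2 -= 0.5 * d.sum(0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(0)

acc = float(np.mean((sig(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5) == y))
print(f"training accuracy: {acc:.2f}")
```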
164

Relative-fuzzy: a novel approach for handling complex ambiguity for software engineering of data mining models

Imam, Ayad Tareq January 2010 (has links)
Two main classes of uncertainty are commonly defined, namely fuzziness and ambiguity, where ambiguity is a ‘one-to-many’ relationship between the syntax and semantics of a proposition. This definition appears to ignore the ‘many-to-many’ type of ambiguity. In this thesis, the term complex-uncertainty is used for this many-to-many type of ambiguity. This research proposes a new approach for handling the complex ambiguity type of uncertainty that may exist in data, for the software engineering of predictive Data Mining (DM) classification models. The proposed approach is based on Relative-Fuzzy Logic (RFL), a novel type of fuzzy logic. RFL gives a new formulation of the problem of the ambiguity type of uncertainty in terms of States Of Proposition (SOP), and describes its membership (semantic) value using a new definition of the Domain of Proposition (DOP), which is based on the relativity principle as defined by possible-worlds logic. To propose RFL, one question must be answered: how can these two approaches, fuzzy logic and possible-worlds logic, be combined to produce a new membership value set (and later a logic) able to handle fuzziness and multiple viewpoints at the same time? This is achieved by giving possible-worlds logic the ability to quantify multiple viewpoints, modelling fuzziness in each of these viewpoints, and expressing the result as a new set of membership values. Furthermore, a new architecture of Hierarchical Neural Network (HNN) called ML/RFL-Based Net has been developed in this research, along with a new learning algorithm and a new recalling algorithm. The architecture, learning algorithm and recalling algorithm of ML/RFL-Based Net follow the principles of RFL. This new type of HNN is considered to be an RFL computation machine.
The ability of the Relative-Fuzzy-based DM prediction model to tackle the problem of the complex ambiguity type of uncertainty has been tested. Special-purpose Integrated Development Environment (IDE) software, called RFL4ASR, which generates a DM prediction model for speech recognition, has also been developed in this research. This special-purpose IDE is an extension of the definition of the traditional IDE. Using multiple sets of TIMIT speech data, the prediction model of type ML/RFL-Based Net achieved a classification accuracy of 69.2308%. This accuracy is higher than the best achieved by WEKA data mining machines given the same speech data.
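To give a rough intuition for multiple-viewpoint membership, here is a loose numerical sketch, not the thesis's formal RFL: each viewpoint ("possible world") carries its own fuzzy membership function for the same proposition, and the worlds themselves are weighted. All names and parameters here are hypothetical.

```python
import numpy as np

# Each viewpoint (possible world) assigns the proposition "the value is
# 'high'" its own sigmoid membership function: (midpoint, steepness).
worlds = {
    "annotator_A": (0.4, 10.0),
    "annotator_B": (0.6, 6.0),
    "annotator_C": (0.5, 14.0),
}
weights = {"annotator_A": 0.5, "annotator_B": 0.3, "annotator_C": 0.2}

def membership(x, mid, k):
    # Fuzzy membership of x in one world.
    return 1.0 / (1.0 + np.exp(-k * (x - mid)))

def relative_membership(x):
    # The "relative" view: the set of per-world membership values,
    # plus a weighted aggregate over worlds.
    per_world = {w: float(membership(x, *p)) for w, p in worlds.items()}
    agg = sum(weights[w] * m for w, m in per_world.items())
    return per_world, agg

per_world, agg = relative_membership(0.55)
print(per_world, round(agg, 3))
```

The point of the sketch is only that a single scalar membership is replaced by a viewpoint-indexed set; the thesis's RFL develops this formally via SOP and DOP.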
165

Towards an effective automated interpretation method for modern hydrocarbon borehole geophysical images

Thomas, Angeleena January 2012 (has links)
Borehole imaging is one of the fastest and most precise methods for collecting subsurface data. It provides high-resolution information on layering, texture and dips, permitting a core-like description of the subsurface. Although the range of information recoverable from this technology is widely acknowledged, image logs are still used in a strictly qualitative manner. Interpreting image logs manually is cumbersome, time-consuming, and subjective, depending on the experience of the interpreter. This thesis outlines new methods that automate image log interpretation and extract subsurface lithofacies information in a quantitative manner. We developed two methodologies based on advanced image analysis techniques successfully employed in remote sensing and medical imaging. The first one is a pixel-based pattern recognition technique applying textural analysis to quantify image textural properties. These properties, together with standard logs and core-derived lithofacies information, are used to train a back-propagation Neural Network. In principle the trained and tested Neural Network is applicable for automated borehole image interpretation from similar geological settings. However, this pixel-based approach fails to make explicit use of the spatial characteristics of a high-resolution image. A second methodology is therefore introduced which groups identical neighbouring pixels into objects. The resultant spectrally and spatially consistent objects are then related to geologically meaningful groups such as lithofacies by employing fuzzy classifiers. This method showed better results and is applied to outcrop photos, core photos and image logs, including a ‘difficult’ data set from a deviated well. The latter image log did not distinguish some of the conductive and resistive regions, as observed from standard logs and core photos. This is overcome by marking bed boundaries using standard logs. 
Bed orientations were estimated using an automated sinusoid-fitting algorithm within a formal uncertainty framework in order to distinguish dipping beds from horizontal stratification. Integration of these derived logs in the methodology yields a complete automated lithofacies identification, even from the difficult dataset. The results were validated through the interpretation of cored intervals by a geologist. This is a supervised classification method which incorporates the expertise of one or several geologists, and hence includes human logic, reasoning, and current knowledge of the field heterogeneity. By including multiple geologists in the training, the results become less dependent on each individual’s subjectivity and prior experience. The method is also easily adaptable to other geological settings. In addition, it is applicable to several kinds of borehole images, for example wireline electrical borehole wall images, core photographs, and logging-while-drilling (LWD) images. Thus, the theme of this dissertation is the development of methodologies which make image log interpretation simpler, faster, less subjective, and efficient enough to be applied to large quantities of data.
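The pixel-based textural analysis step can be illustrated with grey-level co-occurrence matrix (GLCM) features, a standard texture measure; the patches and grey-level count below are synthetic stand-ins for image-log data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic 8-level "image log" patches: a smooth (laminated) facies
# and a noisy (heterogeneous) one.
smooth = np.tile(np.repeat(np.arange(8), 8), (64, 1))
noisy = rng.integers(0, 8, (64, 64))

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for horizontal neighbours,
    reduced to contrast and homogeneity (Haralick-style features)."""
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = float(((i - j) ** 2 * glcm).sum())
    homogeneity = float((glcm / (1 + np.abs(i - j))).sum())
    return contrast, homogeneity

print("smooth:", glcm_features(smooth))
print("noisy: ", glcm_features(noisy))
```

Features like these, computed in a moving window over the borehole image, are what a back-propagation network would consume alongside standard logs.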
166

MODELING DEMENTIA RISK, COGNITIVE CHANGE, PREDICTIVE RULES IN LONGITUDINAL STUDIES

Ding, Xiuhua 01 January 2016 (has links)
Dementia is increasingly recognized as a major public health problem worldwide. Prevention and treatment strategies are critically needed. Dementia research nowadays usually involves complex longitudinal studies, which provide extensive information but also pose challenges to statistical methodology. The purpose of this dissertation research was to apply statistical methodology in the field of dementia to strengthen the understanding of dementia from three perspectives: 1) application of statistical methodology to investigate the association between potential risk factors and incident dementia; 2) application of statistical methodology to analyze changes over time, or trajectories, in cognitive tests and symptoms; 3) application of statistical learning methods to predict development of dementia in the future. The Prevention of Alzheimer’s Disease with Vitamin E and Selenium (PREADViSE) trial (7547 subjects included) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (591 participants included) were used in this dissertation. The first study, “Self-reported sleep apnea and dementia risk: Findings from the PREADViSE Alzheimer’s disease prevention trial”, shows that self-reported baseline history of sleep apnea was borderline significantly associated with risk of dementia after adjustment for confounding. Stratified analysis by APOE ε4 carrier status showed that baseline history of sleep apnea was associated with significantly increased risk of dementia in APOE ε4 non-carriers. The second study, “Comparison of trajectories of episodic memory over 10 years between baseline normal and MCI ADNI subjects,” shows that an estimated 30% of baseline-normal subjects, assigned to Groups 3 and 6, stayed stable for over 9 years, while baseline-normal subjects assigned to Group 1 (18.18%) and Group 5 (16.67%) were more likely to progress to dementia. 
In contrast to the groups identified for normal subjects, all trajectory groups for MCI subjects at baseline showed a tendency to decline. The third study, “Comparison between neural network and logistic regression in the PREADViSE trial,” demonstrates that the neural network has slightly better predictive performance than logistic regression, and that it can reveal complex relationships among covariates. In the third study, the effect of years of education on the response variable depends on age, APOE ɛ4 allele status, and memory change.
167

Performance based diagnostics of a twin shaft aeroderivative gas turbine: water wash scheduling

Baudin Lastra, Tomas 05 1900 (has links)
Aeroderivative gas turbines are used all over the world in applications such as Combined Heat and Power (CHP), Oil and Gas, ship propulsion and others. They combine flexibility with high efficiencies, low weight and a small footprint, making them attractive where power density is paramount, as in offshore Oil and Gas or ship propulsion. In Western Europe they are widely used in small and medium CHP applications thanks to their maintainability and efficiency. Reliability, Availability and Performance are key parameters when considering plant operation and maintenance. Accurate diagnosis of performance is fundamental for plant economics and maintenance planning. There has been a lot of work around units like the LM2500®, a gas generator with an aerodynamically coupled gas turbine, but nothing has been found by the author for the LM6000®. Water wash, both on-line and off-line, is an important maintenance practice impacting Reliability, Availability and Performance. This Thesis aims to select and apply a suitable diagnostic technique to help establish the schedule for off-line water wash on a specific model of this engine type. After a review of diagnostic methods, the Artificial Neural Network (ANN) was chosen as the diagnostic tool. No WebEngine model of the unit under study was available, so the first step in setting up the tool was to create one. The last step was testing the ANN as a diagnostic tool: several networks were configured, trained and tested, and one was chosen based on its slightly better response. Finally, conclusions are discussed and recommendations for further work laid out.
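As a minimal stand-in for the scheduling decision (the thesis uses ANN-based diagnostics; here a plain trend fit is used instead), one can extrapolate a measured performance-loss trend to the day it crosses a wash threshold. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical measured performance loss (%) drifting upward as the
# compressor fouls; degradation rate, noise, and threshold are invented.
days = np.arange(40.0)
loss = 0.07 * days + rng.normal(0, 0.2, days.size)

slope, intercept = np.polyfit(days, loss, 1)   # linear degradation trend
threshold = 5.0                                # % loss that triggers a wash
wash_day = (threshold - intercept) / slope
print(f"fitted degradation {slope:.3f} %/day -> schedule wash near day {wash_day:.0f}")
```

In the diagnostic framework the performance-loss signal itself would come from an ANN that separates fouling from other degradation modes, but the thresholding logic is the same.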
168

Localising imbalance faults in rotating machinery

Walker, Ryan January 2013 (has links)
This thesis presents a novel method of locating imbalance faults in rotating machinery through the study of bearing nonlinearities. Localisation in this work is presented as determining which discs/segments of a complex machine are affected by an imbalance fault. The novel method enables accurate localisation to be achieved using a single accelerometer, and is valid for both sub- and super-critical machine operations in the presence of misalignment and rub faults. The development of the novel system for imbalance localisation has been driven by the desire for improved maintenance procedures, along with the increased requirement for Integrated Vehicle Health Management (IVHM) systems for rotating machinery in industry. Imbalance faults are of particular interest to aircraft engine manufacturers such as Rolls-Royce plc, where such faults still result in undesired downtime of machinery. Existing methods of imbalance localisation have yet to see widespread implementation in IVHM and Engine Health Monitoring (EHM) systems, providing the motivation for undertaking this project. The imbalance localisation system described has been developed primarily for a lab-based Machine Fault Simulator (MFS), with validation and verification performed on two additional test rigs. Physics-based simulations have been used in order to develop and validate the system. An Artificial Neural Network (ANN) has been applied for the purposes of reasoning, using nonlinear features in the frequency domain originating from bearing nonlinearities. The system has been widely tested in a range of situations, including in the presence of misalignment and rub faults and on a full-scale aircraft engine model. The novel system for imbalance localisation has been used as the basis for a methodology aimed at localising common faults in future IVHM systems, with the aim of communicating the results and findings of this research for the benefit of future research. 
The works contained herein therefore contribute to scientific knowledge in the field of IVHM for rotating machinery.
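The frequency-domain features mentioned above can be sketched as follows: a synthetic vibration signal with a dominant 1X imbalance component and smaller harmonics (standing in for bearing-nonlinearity effects) is reduced to amplitudes at multiples of the shaft speed. Signal composition and amplitudes are assumptions for illustration.

```python
import numpy as np

fs, f_rot = 1000.0, 25.0                 # sample rate and shaft speed (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(5)

# Imbalance forces a strong 1X component; nonlinearity adds 2X/3X harmonics
# whose relative pattern (here invented) could encode the fault location.
signal = (1.0 * np.sin(2 * np.pi * f_rot * t)
          + 0.3 * np.sin(2 * np.pi * 2 * f_rot * t)
          + 0.1 * np.sin(2 * np.pi * 3 * f_rot * t)
          + rng.normal(0, 0.2, t.size))

spec = np.abs(np.fft.rfft(signal)) / (t.size / 2)   # scaled to sine amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    # Amplitude at the FFT bin nearest frequency f.
    return float(spec[np.argmin(np.abs(freqs - f))])

features = [amp_at(k * f_rot) for k in (1, 2, 3)]
print("1X/2X/3X amplitudes:", [round(a, 2) for a in features])
```

A feature vector like this, gathered from a single accelerometer, is the kind of input an ANN reasoner would classify into affected disc/segment labels.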
169

Bioprocess Software Sensors Development Facing Modelling and Model uncertainties

Hulhoven, Xavier 07 December 2006 (has links)
The exponential development of biotechnology has led to a quasi-unlimited number of potential products, ranging from biopolymers to vaccines. Cell culture has therefore evolved from simply growing cells outside their natural environment to using them to produce molecules that they do not naturally produce. This rapid development could not continue without new control and supervision tools as well as a good process understanding. This requirement, however, involves a larger diversity and a better accessibility of process measurements. In this framework, software sensors show numerous potentialities. The objective of a software sensor is indeed to provide an estimation of the system state variables, particularly those which are not obtained through in-situ hardware sensors or through laborious and expensive analyses. In this context, this work attempts to reconcile the increasing complexity and diversity of bioprocesses with the time scale of process development, and favours a systematic modelling methodology, its flexibility and its speed of development. In the field of state observation, an important modelling constraint is the one induced by the selection of the states to estimate and the available measurements. Another important constraint is the model quality. The central axis of this work is to provide solutions that reduce the weight of these constraints on software sensor development. To this end, we propose four solutions to four main questions that may arise. The first two concern modelling uncertainties. 1. "How to develop a software sensor using measurements easily available on a pilot-scale bioreactor?" The proposed solution is a static software sensor using an artificial neural network. Following this modelling methodology we developed static software sensors for the biomass and ethanol concentrations in a pilot-scale S. cerevisiae cell culture using the measurement of titrating base quantity, agitation rate and CO₂ concentration in the exhaust gas. 
2. "How to obtain a reaction scheme and a kinetic model to develop a dynamic observation model?" The proposed solution is to combine three elements: a systematic methodology to generate, identify and select the possible reaction schemes, a general kinetic model, and a systematic identification procedure whose last step is particularly dedicated to the identification of observation models. Combining these methodologies allowed us to develop a software sensor for the concentration of an allergen produced by an animal cell culture, using the discrete measurements of glucose, glutamine and ammonium concentrations (which are also estimated in continuous time by the software sensor). The two other questions deal with kinetic model uncertainty. 3. "How to correct kinetic model parameters while preserving the observability of the system?" We consider the possibility of correcting some model parameters during process observation. We propose an adaptive observer based on the theory of the most-likely-initial-conditions observer and exploiting the information from the asymptotic observer. This algorithm allows joint estimation of the state and some kinetic model parameters. 4. "How to avoid a state observer selection that requires a priori knowledge of the model quality?" Answering this question led us to the development of hybrid state observers. The general principle of a hybrid observer is to automatically evaluate the model quality and to select the appropriate state observer. In this work we focus on kinetic model quality and propose hybrid observers that evolve between the state observation from an exponential observer (freely tunable convergence rate but sensitivity to model errors) and the one provided by an asymptotic observer (no kinetic model required but a convergence rate depending on the dilution rate). Two strategies are investigated in order to evaluate the model quality and to drive the evolution of the state observation. 
Each of them has been validated on two simulated cultures (microbial and animal cells) and one real industrial culture (B. subtilis). ∙ In the first strategy, the hybrid observer is based on the determination of a parameter that drives the state estimation from the one obtained with an exponential observer (exponential observation) when the model is of good quality to the one provided by an asymptotic observer (asymptotic observation) when a kinetic model error is detected. This driving parameter is evaluated either with an a priori defined function or jointly with the identification of the initial conditions in a most-likely-initial-conditions observer. ∙ In the second strategy, the hybrid observer is based on a statistical test that compares the state estimates provided by an exponential and an asymptotic observer and corrects the state estimation accordingly.
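The asymptotic observer referred to above can be sketched for a simple chemostat: the auxiliary state z = X + Y·S evolves independently of the unknown growth kinetics, so the unmeasured biomass can be reconstructed from the measured substrate alone. The model structure is the classical mass-balance chemostat; all parameter values are illustrative.

```python
# Chemostat model:  dX/dt = mu*X - D*X,   dS/dt = -mu*X/Y + D*(S_in - S).
# With z = X + Y*S one gets dz/dt = D*(Y*S_in - z): no kinetic model needed.
Y, D, S_in, dt = 0.5, 0.1, 10.0, 0.01
mu_max, Ks = 0.3, 1.0                      # "true" Monod kinetics, unknown to observer

X, S = 0.5, 5.0                            # true initial state
z_hat = 0.0 + Y * S                        # observer started with wrong X(0) = 0
for _ in range(int(200 / dt)):             # simulate 200 h (Euler steps)
    mu = mu_max * S / (Ks + S)
    X += dt * (mu * X - D * X)
    S += dt * (-mu * X / Y + D * (S_in - S))
    z_hat += dt * D * (Y * S_in - z_hat)   # observer uses only D, S_in
X_hat = z_hat - Y * S                      # reconstruct biomass from measured S
print(f"true X = {X:.3f}, estimated X = {X_hat:.3f}")
```

The initial estimation error decays at rate D, which is exactly the dilution-rate-limited convergence the abstract attributes to the asymptotic observer; the exponential observer in the hybrid scheme trades that limitation for kinetic-model sensitivity.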
170

A Novel Method for Water irrigation System for paddy fields using ANN

Prisilla, L., Rooban, P. Simon Vasantha, Arockiam, L. 01 April 2012 (has links)
In our country, farmers face many difficulties because of poor irrigation systems. During floods, excess water stagnates in paddy fields, producing great loss and pain for farmers. A proper irrigation mechanism is therefore an essential component of paddy production. Poor irrigation methods and crop management are rapidly depleting the country’s water table. Most smallholder farmers cannot afford new wells or loans, and they are looking for alternative methods to reduce their water consumption. A proper irrigation mechanism not only leads to high crop production but also paves the way for water-saving techniques. Automation of the irrigation system has the potential to provide maximum water usage efficiency by monitoring soil moisture. A control unit based on an Artificial Neural Network is the pivotal block of the entire irrigation system. Using this control unit, factors such as temperature, soil and crop type, air humidity, and ground radiation are estimated, which helps control the flow of water to achieve optimized results. / Water is an essential resource on Earth. It is also essential for irrigation, so irrigation technique is essential for agriculture. Irrigating large areas of land is a tedious process. In our country, farmers do not follow proper irrigation techniques. Currently, most irrigation scheduling systems and their corresponding automated tools operate at a fixed rate. Variable-rate irrigation is essential not only to improve the irrigation system but also to save water resources for the future. Most irrigation controllers are of the ON/OFF type. These controllers cannot give optimal results for varying time delays and system parameters. An Artificial Neural Network (ANN)-based intelligent control system is used for effective irrigation scheduling in paddy fields. Input parameters such as air temperature, soil moisture, radiation and humidity are modelled. 
Using an appropriate method, ecological conditions, evapotranspiration, and the various growing stages of the crop are considered, and based on these the amount of water required for irrigation is estimated. Using this ANN-based intelligent control system, water saving in paddy fields can be achieved. This model helps avoid flooding of paddy fields during the rainy season and saves that water for future use.
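The contrast between fixed-rate ON/OFF control and variable-rate control can be sketched on a toy soil-moisture balance. The ANN controller itself is not reproduced here; the variable-rate rule is a simple proportional stand-in, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

target, dt = 0.35, 1.0                     # target volumetric moisture, 1 h step
et = lambda: 0.004 + rng.uniform(0, 0.004) # hourly evapotranspiration loss

def simulate(controller, hours=24 * 30):
    # Simple soil-moisture balance: moisture + dose - evapotranspiration.
    m, water = 0.30, 0.0
    for _ in range(hours):
        dose = controller(m)
        water += dose
        m = min(0.45, m + dose - et())     # saturation cap
    return m, water

onoff = lambda m: 0.01 if m < target else 0.0      # fixed-rate pulses
variable = lambda m: max(0.0, 1.2 * (target - m))  # rate follows the deficit

m1, w1 = simulate(onoff)
m2, w2 = simulate(variable)
print(f"on/off:   final moisture {m1:.3f}, water applied {w1:.2f}")
print(f"variable: final moisture {m2:.3f}, water applied {w2:.2f}")
```

An ANN controller would replace the proportional rule with a learned mapping from soil moisture, temperature, humidity, and crop stage to the irrigation dose, but the closed-loop structure is the same.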
