591

Crop Condition and Yield Prediction at the Field Scale with Geospatial and Artificial Neural Network Applications

Hollinger, David L. 13 July 2011 (has links)
No description available.
592

Development of a Neural Based Biomarker Forecasting Tool to Classify Recreational Water Quality

Motamarri, Srinivas January 2010 (has links)
No description available.
593

A Study of Nutrient Dynamics in Old Woman Creek Using Artificial Neural Networks and Bayesian Belief Networks

Anderson, Jerone S. 05 August 2009 (has links)
No description available.
594

Modeling Lime Consumption of a Flue Gas Desulfurization Process at a Heat Recovery Coke Production Plant Using Artificial Neural Networks

FREDERICO MADUREIRA MATOS ALMEIDA 26 February 2021 (has links)
The production of metallurgical coke in heat recovery plants converts all the heat generated from the combustion of gases distilled during coking into steam and electricity, eliminating the need to process hazardous chemical by-products. After inertization inside the ovens, the gases are directed to the flue gas desulfurization (FGD) plant, which uses lime slurry to remove SOx compounds (SO2 and SO3) and bag filters to remove the generated residue, lime ash, before discharge to the atmosphere. Because lime is expensive, it is important to model the process and determine which variables most affect its outcome, allowing interventions that make the process more competitive and environmentally sustainable. The purpose of this work was to develop a mathematical model using artificial neural networks to determine the main variables that affect specific lime consumption in the desulfurization process. The literature indicates that the main parameters influencing sulfur removal efficiency, and hence specific lime consumption, are the approach to adiabatic saturation temperature and the Ca/S ratio in the process. A sensitivity analysis of a feedforward backpropagation neural network, with an MLP 14-19-2 architecture, a hyperbolic tangent transfer function in the hidden layer and a logistic transfer function in the output layer, indicated that consumption is related mainly to the gas temperatures at the SDA inlet and outlet, the oxygen concentration in the main stack and the density of the lime slurry. The evaluation reinforced the effect of increased gas outlet temperature on specific lime consumption reported in the literature, and added new relevant parameters: gas inlet temperature, O2(g) concentration at the stack outlet and lime slurry density.
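The MLP 14-19-2 notation above maps directly onto a three-layer network: 14 inputs, 19 hidden units, 2 outputs. A minimal sketch follows, assuming Keras as the framework (the thesis does not state its tooling); it is illustrative only.

```python
# Minimal sketch of an MLP 14-19-2 feedforward network: 14 process
# inputs, 19 hidden units with tanh activation, 2 outputs with the
# logistic (sigmoid) activation. Framework choice (Keras) is assumed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(14,)),                     # 14 process variables
    tf.keras.layers.Dense(19, activation="tanh"),    # hidden layer: hyperbolic tangent
    tf.keras.layers.Dense(2, activation="sigmoid"),  # output layer: logistic
])
model.compile(optimizer="adam", loss="mse")
```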
595

Neural Network-based Optimization of Solid- and Fluid Mechanical Simulations

Jeken Rico, Pablo January 2021 (has links)
This project deals with the optimization of simulation parameters, such as the injection location and pitch angle, of polyurethane foaming simulations using artificial neural networks. The model's target is to predict quality variables from the process parameters and geometry features. Through repeated evaluations of the model, good parameter combinations can be found, which in turn serve as good initial guesses for high-fidelity optimization tools. To handle different mould geometries, a meshing tool was programmed that transforms variable-sized surface meshes into voxel meshes. Cross-section images of the meshes are then passed, together with a series of simulation settings, to the neural network, which processes the two data streams into one set of predictions. The model was implemented with TensorFlow and trained on a custom-generated data set of roughly 10,000 samples. The results show well-matching prediction and simulation profiles for the validation cases. The magnitudes of the quality parameters often differ, but the especially relevant areas of optimal injection points are well covered. The good results, together with the small model size, provide evidence that an extension to a full 3D application is feasible.
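The two-stream design described above (geometry images plus scalar simulation settings) is a standard multi-input pattern in TensorFlow's functional API. The sketch below is a hedged illustration: the image resolution, the number of settings and the number of quality variables are assumptions, not values from the thesis.

```python
import tensorflow as tf

# Image branch: cross-section slices of the voxelized mould geometry.
# The 64x64 resolution is an assumption for illustration.
img_in = tf.keras.Input(shape=(64, 64, 1), name="cross_section")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(img_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Flatten()(x)

# Scalar branch: simulation settings such as injection location and
# pitch angle (4 is a placeholder count).
par_in = tf.keras.Input(shape=(4,), name="sim_settings")

# Merge both streams and predict the quality variables (3 assumed).
merged = tf.keras.layers.Concatenate()([x, par_in])
h = tf.keras.layers.Dense(64, activation="relu")(merged)
quality = tf.keras.layers.Dense(3, name="quality_vars")(h)

model = tf.keras.Model(inputs=[img_in, par_in], outputs=quality)
model.compile(optimizer="adam", loss="mse")
```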
596

Biologically Inspired Modular Neural Networks

Azam, Farooq 19 June 2000 (has links)
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The modularization approaches presented for neural network design and learning draw on engineering, complexity, psychological and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can then be used to design artificial neural networks. Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity, with localized regions of neurons specialized in performing specific tasks. In contrast, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is their major performance-limiting characteristic. The non-explicit structure and monolithic nature of current mainstream artificial neural networks prevent the systematic incorporation of functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail, and remedial solutions driven by the functioning of the brain and its structural organization are presented. The dissertation also presents an in-depth study of currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models that overcome them. The resulting modular neural network models offer greater accuracy, better generalization, a comprehensible simplified neural structure, ease of training and greater user confidence. These benefits are readily apparent for certain problems, depending on the availability and use of a priori knowledge about the problem. The modular neural network models presented in this dissertation exploit the principle of divide and conquer in design and learning: a complex computational problem is solved by dividing it into simpler sub-problems and then combining the individual solutions into a solution to the original problem. The divisions of a task considered here are the automatic decomposition of the mappings to be learned, decomposition of the artificial neural networks to minimize harmful interaction during the learning process, and explicit decomposition of the application task into sub-tasks that are learned separately. The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results, and a comparison with current modular neural network design techniques is presented for reference. The results lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, and areas of future research are identified. / Ph. D.
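One common concrete realization of the divide-and-conquer principle this abstract describes is a gated mixture of expert modules, where a gating network learns which module should handle which region of the input space. The sketch below is a generic modern illustration of that idea, not a reconstruction of the dissertation's own architectures (which predate this API); all sizes are placeholders.

```python
import tensorflow as tf

# Three small expert modules, each a candidate solver for a sub-problem.
inp = tf.keras.Input(shape=(10,))
experts = [
    tf.keras.layers.Dense(1)(tf.keras.layers.Dense(16, activation="tanh")(inp))
    for _ in range(3)
]
stacked = tf.keras.layers.Concatenate()(experts)             # (batch, 3)

# Gating network: softmax weights decide how much each expert contributes.
gate = tf.keras.layers.Dense(3, activation="softmax")(inp)

# Output is the gate-weighted combination of the expert outputs.
out = tf.keras.layers.Dot(axes=1)([stacked, gate])
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
```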
597

Forecasting Short-Term Returns on Tennis Betting Exchange Markets Using Deep Learning

Alm, David, Markai, Edward January 2024 (has links)
In this work, we propose a regression framework, built on "Deep Order Flow Imbalance: Extracting Alpha at Multiple Horizons from the Limit Order Book" by Kolm et al. (2023), for predicting short-term returns of odds on binary betting exchange markets. Within this framework, we apply five deep learning models that leverage order book data from tennis betting exchanges during the calendar month of July 2023, with the purpose of examining the predictive capabilities of deep learning models in this setting. We train each model on either raw limit order book states or order flow. The models predict the returns of the best available odds at five short-term horizons on the four order book sides: back and lay for each of the two players in a given tennis match. Applying windowing, each vector prediction uses the 100 latest market messages, consisting of 81 features (odds and volumes for the first ten levels in the order book, plus the time delta between market messages) in the case of raw limit order book states, and 41 features (order flow for the first ten levels, plus the time delta) in the case of order flow. All code is written in Python and run on Google Colab, leveraging cloud computing, off-the-shelf models and the popular TensorFlow and Keras libraries for data processing and pipelining, model implementation, training and testing. The models are evaluated against a benchmark in the form of a naive predictor based on the average odds returns on the training set. The models do not converge towards an optimal parameter composition during training, indicating low predictive capability in the input data. Despite this, we generally find all models to outperform the benchmark on the lay order book sides, and while some perform better than others, we see similar relative performance distributions within each model across horizon and order book side combinations. To enrich the discussion and suggest directions for future research, we examine relationships between key game characteristics, such as the variation of odds returns, and the accuracy of predictions on a given market.
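The shapes stated in the abstract pin down the interface of such a model precisely: a window of 100 messages with 81 features in, 5 horizons times 4 book sides out. The sketch below assumes a Keras LSTM as the recurrent backbone; the thesis compares five unnamed models, so this is one plausible instance, not the authors' architecture.

```python
import tensorflow as tf

# Input: the 100 most recent market messages, 81 features each
# (odds and volumes for the first ten book levels plus a time delta).
# Output: returns at 5 horizons x 4 order book sides = 20 targets.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 81)),
    tf.keras.layers.LSTM(64),      # recurrent backbone (assumed choice)
    tf.keras.layers.Dense(20),     # linear regression head
])
model.compile(optimizer="adam", loss="mse")
```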
598

Autonomous auscultation of the human heart

Botha, J. S. F. 03 1900 (has links)
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2010. / The research presented in this thesis provides a tool to autonomously screen for cardiovascular disease in the rural areas of Africa. Vital information obtained from patients can then be communicated to advanced medical centres by telemedicine, so that cardiovascular disease is detected in its initial stages, which is essential for effective treatment. The system developed in this study uses recorded heart sounds and electrocardiogram signals to distinguish between normal and abnormal heart conditions. It improves on standard diagnostic tools in that it requires neither cumbersome and expensive imaging equipment nor a highly trained operator. Heart sound and electrocardiogram signals from 62 volunteers were recorded with the prototype Precordialcardiogram device as part of a clinical study, both to aid the development of the autonomous auscultation software and to screen the volunteers for cardiovascular disease. The volunteers comprised 28 patients of Tygerberg Hospital with cardiovascular disease and, as a control group, 34 persons with normal heart conditions. The autonomous auscultation system interprets data obtained with the Precordialcardiogram device to arrive autonomously at a normal or abnormal diagnosis. The system employs wavelet soft thresholding to denoise the recorded signals, followed by segmentation of the heart sounds by identifying peaks in the electrocardiogram. Novel frequency-spectral features were extracted from the heart sounds by means of ensemble empirical mode decomposition and autoregressive modelling; these features proved particularly significant and played a major role in the screening capability of the system. New time-domain features were also identified, based on the specific characteristics of the cardiovascular diseases encountered during the study, and were extracted via the energy ratios between different parts of ventricular systole and diastole of each recorded cardiac cycle. The features were classified with an ensemble artificial neural network, in which the decisions of all members were combined to obtain a final diagnosis, to characterise typical heart diseases as well as healthy hearts. The performance of the autonomous auscultation system used together with the Precordialcardiogram prototype, as determined through leave-one-out cross-validation, was a sensitivity of 82% and a specificity of 88%. These results demonstrate the potential benefit of the Precordialcardiogram device and the autonomous auscultation software in a telemedicine environment.
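Wavelet soft thresholding, the denoising step named in the abstract, is compact to illustrate. The sketch below assumes the PyWavelets library, a db4 mother wavelet and the universal threshold, none of which the thesis specifies; it shows the general shape of the technique only.

```python
import numpy as np
import pywt

def denoise(signal, wavelet="db4", level=4):
    """Wavelet soft-threshold denoising of a 1-D recording.

    The wavelet choice and threshold rule are illustrative assumptions.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate from the finest detail coefficients, then the
    # universal threshold sigma * sqrt(2 ln N).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```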
599

Quantification of the normal patellofemoral shape and its clinical applications

Cho, Kyung Jin 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2013. / The shape of the knee's trochlear groove is a very important factor in the overall stability of the knee. However, a quantitative description of the normal three-dimensional geometry of the trochlea is not available in the literature, which is reflected in the poor outcomes of patellofemoral arthroplasty (PFA). In this study, a standardised method for measuring femoral parameters on three-dimensional femur models was established. Using software tools, virtual femur models were aligned with the mechanical and posterior condylar planes, and this framework was used to measure the femoral parameters in a repeatable way. An artificial neural network (ANN), incorporating the femoral parameter measurements and classifications made by experienced surgeons, was used to classify knees into normal and abnormal categories; 15 knees in the database were classified by the ANN as normal. The geometry of the normal knees was then analysed by fitting B-spline curves and circular arcs to their sagittal surface curves, confirming that the groove has a circular shape in the sagittal plane. Self-organising maps (SOMs), a type of ANN, were trained with the acquired data of the normal knees, allowing the normal trochlear geometry to be predicted. The predictions of the anterior-posterior (AP) distance and the trochlear heights showed an average agreement of 97% between the actual and predicted normal geometries. A case study was conducted on four types of trochlear dysplasia to determine a normal geometry for these knees, and a virtual surface reconstruction was performed on them. The study showed that the trochlea was deepened after the surface reconstruction, with an average trochlear depth of 5.5 mm compared to the original average of 2.9 mm. In summary, this research proposes a quantitative method for describing and predicting the normal geometry of a knee by using ANNs together with femoral parameters that are unaffected by trochlear dysplasia.
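The SOM step, learning the distribution of normal femoral parameters and reading a predicted "normal" profile off the map, can be sketched in a few lines. The snippet below assumes the MiniSom library, a 5x5 map and placeholder data; the thesis does not state its implementation, so everything here beyond the technique itself is hypothetical.

```python
import numpy as np
from minisom import MiniSom

# Placeholder: 15 normal knees x 8 femoral parameters (values invented).
normal_params = np.random.rand(15, 8)

som = MiniSom(5, 5, normal_params.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(normal_params)
som.train_random(normal_params, num_iteration=1000)

# For a new knee, the best-matching unit's weight vector serves as a
# predicted "normal" parameter profile.
new_knee = np.random.rand(8)
bmu = som.winner(new_knee)
predicted_profile = som.get_weights()[bmu]
```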
600

Improvement of meteorological forecasts using artificial neural networks for the optimization of a building energy management system

Θραμπουλίδης, Εμμανουήλ 27 January 2014 (has links)
An important consideration in the design of modern buildings is the rational use of energy, which is achieved by designing appropriate energy systems. The efficient design of these systems must take meteorological data into account, both current and forecast. Numerical weather prediction models provide estimates of various meteorological parameters at given points in space, near the surface and at various heights. These estimates can differ considerably from the actual data, which leaves significant room for improving the forecasts. In this work we propose a method for improving forecasts of meteorological data so that they can be exploited to optimize the energy consumption of building management systems. The method was developed using wind speed measurements from the meteorological station of the Laboratory of Atmospheric Physics of the Department of Physics of the University of Patras (LAPUP), together with LAPUP forecasts produced by the WRF (Weather Research and Forecasting) numerical weather prediction model at the closest available grid point. The proposed method uses artificial neural networks and, being independent of the nature of its inputs, can be applied to improve forecasts of other meteorological parameters. Furthermore, we studied the contribution of the method to the more accurate calculation of the air flow of an experimental test chamber, which has been adopted by the European Commission for the harmonised study of building energy systems under real conditions.
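The core of the proposed method, learning a mapping from WRF model output to station observations (a form of model output statistics), can be sketched compactly. The snippet below assumes scikit-learn and invented predictor choices; it illustrates the correction step only, not the authors' actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: WRF-forecast predictors at the nearest
# grid point (e.g., forecast wind speed plus two time encodings) and
# the observed station wind speed they should be corrected towards.
X = np.random.rand(500, 3)
y = np.random.rand(500)

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X, y)
corrected = mlp.predict(X[:5])  # bias-corrected wind speed forecasts
```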
