891

Modélisation spatio-temporelle de la pollution atmosphérique urbaine à partir d'un réseau de surveillance de la qualité de l'air / Spatio-temporal modelling of atmospheric pollution based on observations provided by an air quality monitoring network at a regional scale

Coman, Adriana 26 September 2008 (has links)
This study is devoted to the spatio-temporal modelling of urban air pollution at a regional scale, using a set of statistical methods to exploit the measurements of pollutant concentrations (NO2, O3) provided by an air quality monitoring network (AIRPARIF). The main objective is to improve the mapping of pollutant concentration fields over the Île-de-France region, using, on the one hand, interpolation methods based on the spatial or spatio-temporal structure of the observations (spatial or spatio-temporal kriging) and, on the other hand, algorithms that use the observations to correct the concentrations simulated by a deterministic model (Ensemble Kalman Filter). The results show that for nitrogen dioxide a map based on spatial interpolation (kriging) alone is satisfactory, because the spatial distribution of the monitoring stations is good. For ozone, it is sequential data assimilation applied to the model (CHIMERE) that yields a better reconstruction of the shape and position of the plume during the high-pollution episodes analysed. Complementary to the mapping, another objective of this work is to predict local ozone levels over a 24-hour horizon; this task was carried out with artificial neural networks. The results obtained with two types of neural architecture indicate fair accuracy, especially for the first 8 hours of the prediction horizon.
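To make the interpolation step concrete, here is a minimal ordinary-kriging sketch in Python. The station coordinates, NO2 values and exponential-variogram parameters are invented for the example (they are not AIRPARIF data), and the kriging actually used in the thesis, including its spatio-temporal extension, is considerably more elaborate.

```python
import numpy as np

# Toy ordinary kriging of a pollutant field (e.g. NO2) from a few stations.
# Variogram model and all numbers below are illustrative assumptions.

def exp_variogram(h, sill=150.0, rng_km=20.0):
    """Exponential semivariogram, gamma(0) = 0."""
    return sill * (1.0 - np.exp(-h / rng_km))

def ordinary_kriging(stations, values, target):
    """Estimate the field at `target` as a weighted sum of station values."""
    n = len(values)
    d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
    # Ordinary-kriging system: [Gamma 1; 1' 0] [w; mu] = [gamma_0; 1]
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_variogram(d)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = exp_variogram(np.linalg.norm(stations - target, axis=-1))
    weights = np.linalg.solve(K, rhs)[:n]
    return weights @ values

# Hypothetical monitoring stations (coordinates in km) and NO2 levels (ug/m3)
stations = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0], [12.0, 11.0]])
no2 = np.array([55.0, 41.0, 62.0, 38.0])
print(ordinary_kriging(stations, no2, target=np.array([6.0, 5.0])))
```

A spatio-temporal variant follows the same pattern with a time lag added to the distance in the variogram; the Ensemble Kalman Filter step instead uses the same observations to correct model-simulated fields.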
892

Utilizing Diversity and Performance Measures for Ensemble Creation

Löfström, Tuve January 2009 (has links)
An ensemble is a composite model, aggregating multiple base models into one predictive model. An ensemble prediction, consequently, is a function of all included base models. Both theory and a wealth of empirical studies have established that ensembles are generally more accurate than single predictive models. The main motivation for using ensembles is the fact that combining several models will eliminate uncorrelated base classifier errors. This reasoning, however, requires the base classifiers to commit their errors on different instances – clearly there is no point in combining identical models. Informally, the key term diversity means that the base classifiers commit their errors independently of each other. The problem addressed in this thesis is how to maximize ensemble performance by analyzing how diversity can be utilized when creating ensembles. A series of studies, addressing different facets of the question, is presented. The results show that ensemble accuracy and the diversity measure difficulty are the two individually best measures to use as optimization criterion when selecting ensemble members. However, the results further suggest that combinations of several measures are most often better as optimization criteria than single measures. A novel method to find a useful combination of measures is proposed in the end. Furthermore, the results show that it is very difficult to estimate predictive performance on unseen data based on results achieved with available data. Finally, it is also shown that implicit diversity achieved by varied ANN architecture or by using resampling of features is beneficial for ensemble performance. / Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.
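As a rough illustration of measure-based member selection, the sketch below greedily picks base classifiers by scoring candidate subsets with ensemble accuracy combined with the "difficulty" diversity measure (the variance, over instances, of the proportion of base classifiers that are correct). The greedy search, the 0.5 weighting and the fake validation predictions are assumptions made for the example, not the selection procedures evaluated in the thesis.

```python
import numpy as np

def difficulty(correct):                      # correct: (n_models, n_instances) of 0/1
    return np.var(correct.mean(axis=0))       # lower value = more diverse ensemble

def ensemble_accuracy(preds, y):              # majority vote of 0/1 base predictions
    vote = (preds.mean(axis=0) >= 0.5).astype(int)
    return (vote == y).mean()

def greedy_select(preds, y, size, alpha=0.5):
    """Greedily add the member that most improves accuracy - alpha * difficulty."""
    chosen = []
    for _ in range(size):
        best, best_score = None, -np.inf
        for i in range(len(preds)):
            if i in chosen:
                continue
            idx = chosen + [i]
            correct = (preds[idx] == y).astype(int)
            score = ensemble_accuracy(preds[idx], y) - alpha * difficulty(correct)
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Fake validation predictions from 10 base classifiers on 200 instances
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 200)
base_preds = (rng.random((10, 200)) < 0.7) * y_val + (rng.random((10, 200)) >= 0.7) * (1 - y_val)
print(greedy_select(base_preds, y_val, size=5))
```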
893

Comparative Choice Analysis using Artificial Intelligence and Discrete Choice Models in A Transport Context

Sehmisch, Sebastian 23 November 2021 (has links)
Artificial Intelligence in the form of Machine Learning classifiers is increasingly applied to travel choice modeling and therefore constitutes a promising, competitive alternative to conventional discrete choice models such as the Logit approach. Compared with traditional theory-based models, data-driven Machine Learning generally shows powerful predictive performance but often lacks model interpretability, i.e., the provision of comprehensible explanations of individual decision behavior. Consequently, the question of which approach is superior remains unanswered. This paper therefore performs an in-depth comparison between benchmark Logit models and two popular Artificial Intelligence algorithms, Artificial Neural Networks and Decision Trees. The primary focus of the analysis is on the models' prediction performance and their ability to provide reasonable economic behavioral information such as the value of travel time and demand elasticities. For this purpose, I use cross-validation and extract behavioral indicators numerically from the Machine Learning models by means of post-hoc sensitivity analysis. All models are specified and estimated on synthetic and empirical data. The results show that Neural Networks provide plausible aggregate value-of-time and elasticity measures, even though their values differ from those of the Logit models. The simple Classification Tree algorithm, however, appears unsuitable for the applied computation procedure of these indicators, although it provides reasonable, interpretable decision rules for travel choice behavior. Consistent with the literature, both Machine Learning methods achieve strong overall predictive performance and therefore outperform the Logit models in this regard. Finally, there is no clear indication of which approach is superior. Rather, there seems to be a methodological trade-off between Artificial Intelligence and discrete choice models depending on the underlying modeling objective.
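The post-hoc sensitivity idea can be sketched as follows: fit a neural network on choice data, perturb one input by 1 % and read off the relative change in the mean predicted choice probability as an aggregate elasticity. The synthetic mode-choice data, the feature layout (car/public-transport time and cost) and the perturbation size are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic binary mode-choice data (1 = car chosen); all coefficients invented.
rng = np.random.default_rng(1)
n = 2000
time_car, cost_car = rng.uniform(10, 60, n), rng.uniform(2, 10, n)
time_pt, cost_pt = rng.uniform(15, 80, n), rng.uniform(1, 5, n)
utility_car = -0.08 * time_car - 0.4 * cost_car
utility_pt = -0.08 * time_pt - 0.4 * cost_pt
choice = (rng.random(n) < 1 / (1 + np.exp(-(utility_car - utility_pt)))).astype(int)

X = np.column_stack([time_car, cost_car, time_pt, cost_pt])
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X, choice)

# Aggregate direct elasticity of car choice probability w.r.t. car travel time:
# relative change in the mean predicted probability for a 1% increase in the input.
X_up = X.copy()
X_up[:, 0] *= 1.01
p0 = model.predict_proba(X)[:, 1].mean()
p1 = model.predict_proba(X_up)[:, 1].mean()
print("car travel-time elasticity approx.", (p1 - p0) / p0 / 0.01)
```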
894

Využití modelů neuronových sítí pro hodnocení kvality vody ve vodovodních sítích / Using Artificial Neural Network Models to Assess Water Quality in Water Distribution Networks

Cuesta Cordoba, Gustavo Andres January 2013 (has links)
A water distribution system (WDS) is based on a network of interconnected hydraulic components that transport water directly to customers. Water must be treated in a Water Treatment Plant (WTP) to provide drinking water that is safe and free from pathogenic and other undesirable organisms. Disinfection is an important step in achieving safe drinking water and preventing the spread of waterborne diseases. Chlorine is the most commonly used disinfectant in conventional water treatment because of its low cost, its capacity to deactivate bacteria, and because it ensures residual concentrations in the WDS that prevent microbiological contamination. The residual chlorine concentration is affected by a phenomenon known as chlorine decay: chlorine reacts with other components along the system and its concentration decreases. Chlorine is measured at the outlet of the WTP and at several selected points within the WDS to control water quality in the system. Simulation and modeling methods help to predict the chlorine concentration in the WDS effectively. The purpose of the thesis is to assess the chlorine concentration at some strategic points within the WDS using historical measurements of water quality parameters that influence chlorine decay. Recent investigations of water quality have shown the need for non-linear models of chlorine decay. Chlorine decay in a pipeline is a complex phenomenon, so it requires techniques that can represent this behaviour reliably and efficiently. Statistical models based on Artificial Neural Networks (ANN) have been found appropriate for investigating and solving problems related to the non-linearity of chlorine decay, offering advantages over more conventional modeling techniques. In this sense, the thesis uses a specific neural network application to solve the problem of forecasting the residual chlorine concentration.
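A minimal sketch of the kind of ANN regression described above: predict residual chlorine at a downstream point from upstream water-quality measurements. The chosen predictors (inlet chlorine, temperature, flow) and the first-order decay relation used to generate the fake data are assumptions for illustration, not the thesis's data set or network design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Fabricated water-quality data; the decay law below is only used to make labels.
rng = np.random.default_rng(2)
n = 1500
cl_in = rng.uniform(0.5, 1.5, n)        # mg/l at the WTP outlet
temp = rng.uniform(5, 25, n)            # water temperature, deg C
flow = rng.uniform(20, 120, n)          # l/s, sets residence time
residence_h = 50.0 / flow               # crude travel-time proxy, hours
k = 0.05 * np.exp(0.06 * (temp - 20))   # first-order decay coefficient, 1/h
cl_out = cl_in * np.exp(-k * residence_h) + rng.normal(0, 0.02, n)

X = np.column_stack([cl_in, temp, flow])
X_tr, X_te, y_tr, y_te = train_test_split(X, cl_out, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```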
895

Vibration-based Cable Tension Estimation in Cable-Stayed Bridges

Haji Agha Mohammad Zarbaf, Seyed Ehsan 11 October 2018 (has links)
No description available.
896

Strategisk förnyelseplanering av spillvattenledningar : Med ett artificiellt neuralt nätverk som analysverktyg / Strategic sewage pipe renewal process with the help of artificial neural networks

Rehn, David January 2017 (has links)
Sweden's municipal sewer networks face an enormous challenge: deferred maintenance combined with climate change will require large future investments and time-consuming analyses. This degree project was carried out with the aim of simplifying the coming renewal work and helping decision makers invest properly in the networks and optimise the pipe renewal process. As a method, a survey was designed and answered by 84 representatives of municipalities and water and wastewater organisations, in order to present a picture of the current situation in Sweden. Furthermore, an artificial neural network was developed and trained on data from Täby municipality, with the purpose of predicting which pipes in a sewer network need to be renewed.

The results show a great need for improvement in strategic renewal planning. The greatest need, and potential, lies in the collection and processing of data, where artificial neural networks can be applied as an efficient and intelligent tool. The network developed for this project reached a high precision (93 %) and estimated that Täby municipality has roughly 10-20 sewer pipes with an undetected renewal need. These figures should, however, be treated with some caution because of the limited quality of the available data.

It is nevertheless reasonable to conclude that artificial intelligence, and more specifically artificial neural networks, can play an important role in tackling future challenges related to strategic asset management and renewal planning for underground sewer infrastructure. The key lies in the ability to efficiently and intelligently collect, structure and analyse data about the networks, and this is a field where artificial neural networks, as this degree project indicates, can contribute to savings in financial, temporal and human resources.
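For illustration, a small sketch of the classification task: flag pipes with a renewal need from basic pipe attributes. The features (age, diameter, recorded blockages) and the rule used to fabricate labels are assumptions; they do not reflect the Täby data or the network that reached 93 % precision in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Fabricated pipe register; the risk score below only serves to create labels.
rng = np.random.default_rng(5)
n = 3000
age = rng.uniform(0, 80, n)                 # years in service
diameter = rng.choice([225, 300, 400], n)   # mm
blockages = rng.poisson(age / 40)           # recorded operational problems
risk = 0.02 * age + 0.8 * blockages - 0.002 * diameter
needs_renewal = (risk + rng.normal(0, 0.5, n) > 1.5).astype(int)

X = np.column_stack([age, diameter, blockages])
X_tr, X_te, y_tr, y_te = train_test_split(X, needs_renewal, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("precision on held-out pipes:", round(precision_score(y_te, clf.predict(X_te)), 2))
```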
897

[en] INFERENCE OF THE QUALITY OF DISTILLATION PRODUCTS USING ARTIFICIAL NEURAL NETWORKS AND AN EXTENDED KALMAN FILTER / [pt] INFERÊNCIA DA QUALIDADE DE PRODUTOS DE DESTILAÇÃO UTILIZANDO REDES NEURAIS ARTIFICIAIS E FILTRO DE KALMAN ESTENDIDO

LEONARDO GUILHERME CAETANO CORREA 19 December 2005 (has links)
Scientific and industrial interest in non-linear control methods is growing. Before such models become reliable, however, they must go through a difficult and expensive implementation process. Studies on decision-support methods therefore seek to develop low-cost intelligent applications capable of performing advanced industrial control with excellent results, as in the petrochemical industry. In the distillation of oil derivatives, for example, laboratory analysis of samples is commonly used to check whether a substance's physico-chemical characteristics comply with international production standards. The analysis results also allow the production-plant instruments to be adjusted so that the process is controlled more accurately and, consequently, the final product has higher quality. Although laboratory analysis evaluates the quality of the final product more accurately, it can take many hours, delaying the adjustment of the production equipment, reducing process efficiency and lengthening the production of certain products, whose composition must later be corrected with other reagents. Another disadvantage concerns the maintenance and calibration costs of the field instruments: because they are installed in hostile environments, they degrade quickly, which can produce inaccurate field readings and hamper the operators' actions. Among the intelligent methods most applied to chemical industrial processes, artificial neural networks stand out. Their structure is inspired by biological neurons and the parallel processing of the human brain, so they are able to store and use the experimental knowledge presented to them. Despite the good results that neural network structures provide, they have the disadvantage of requiring retraining whenever the process changes its operating point, for example when the raw material changes its physico-chemical characteristics. As a solution to this problem, a hybrid method was developed that combines the advantages of a neural network structure with the abilities of a stochastic filter known as the extended Kalman filter. In practical terms, the filter acts on the synaptic weights of the neural network, updating them in real time and thus allowing the system to adapt continuously to process changes. The system also uses specific pre-processing to eliminate instrument noise, scale errors and incompatibilities between the input and output signals, which were recorded at different sampling frequencies (the former in minutes, the latter in hours). In addition, variable-selection techniques were applied to improve the neural network's performance in terms of inference error and processing time.

The performance of the method was evaluated at each stage through different test groups used to verify what each stage contributed to the final result. The most important test, carried out to compare the proposed methodology against a simple neural network, simulated a process change: the network was fed a test set whose output signals had a ramp signal added to them. The experiments showed that the system using a simple neural network produced MAPE errors of about 1.66%, whereas the neural network combined with the extended Kalman filter halved the error to about 0.8%. This confirms that the Kalman filter does not degrade the quality of the original neural network and adapts it to process changes, allowing the output variable to be inferred adequately without retraining the network.
898

Distributed Optimisation in Multi-Agent Systems Through Deep Reinforcement Learning

Eriksson, Andreas, Hansson, Jonas January 2019 (has links)
The increased availability of computing power has made reinforcement learning a popular field of science in recent years. Recently, reinforcement learning has been used in applications such as decreasing energy consumption in data centers, diagnosing patients in medical care and text-to-speech software. This project investigates how well two different reinforcement learning algorithms, Q-learning and deep Q-learning, can be used as a high-level planner for controlling robots inside a warehouse. A virtual warehouse was created, and the two algorithms were tested. The reliability of both algorithms was found to be insufficient for real-world applications, but the deep Q-learning algorithm showed great potential and further research is encouraged.
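A toy tabular Q-learning sketch of the high-level planning task: an agent learns to reach a pick location in a small grid "warehouse". The 5x5 grid, reward scheme and hyperparameters are illustrative assumptions, not the virtual warehouse used in the project; the deep Q-learning variant would replace the table with a neural network.

```python
import numpy as np

rng = np.random.default_rng(4)
size, goal = 5, (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
Q = np.zeros((size, size, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, a):
    r, c = state
    dr, dc = actions[a]
    nr = min(max(r + dr, 0), size - 1)            # stay inside the grid
    nc = min(max(c + dc, 0), size - 1)
    reward = 10.0 if (nr, nc) == goal else -1.0   # -1 per move encourages short paths
    return (nr, nc), reward, (nr, nc) == goal

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        target = reward + (0.0 if done else gamma * np.max(Q[nxt]))
        Q[state][a] += alpha * (target - Q[state][a])   # Q-learning update
        state = nxt

print("greedy action from the start cell:", int(np.argmax(Q[(0, 0)])))
```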
899

Exploring backward stochastic differential equations and deep learning for high-dimensional partial differential equations and European option pricing

Leung, Jonathan January 2023 (has links)
Many phenomena in our world can be described as differential equations in high dimensions. However, they are notoriously challenging to solve numerically due to the exponential growth in computational cost with increasing dimensions. This thesis explores an algorithm, known as deep BSDE, for solving high-dimensional partial differential equations and applies it to finance, namely European option pricing. In addition, an implementation of the method is provided that seemingly shortens the runtime by a factor of two compared with the results in previous studies. From the results, we can conclude that the deep BSDE method handles high-dimensional problems well. Lastly, the thesis provides the prerequisites needed to follow the theory from an undergraduate level.
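The deep BSDE solver itself trains a network at each time step and is too long to sketch here; shown instead is the ingredient it discretises, an Euler-Maruyama simulation of the underlying geometric Brownian motion, together with the Monte Carlo price of a European call that a deep BSDE solution can be checked against. All market parameters are arbitrary example values, not figures from the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)
s0, strike, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0   # assumed market parameters
n_steps, n_paths = 50, 200_000
dt = T / n_steps

s = np.full(n_paths, s0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)      # Brownian increments
    s = s + r * s * dt + sigma * s * dw             # Euler-Maruyama step

payoff = np.maximum(s - strike, 0.0)
price = np.exp(-r * T) * payoff.mean()              # discounted expected payoff
print("Monte Carlo European call price:", round(price, 3))
```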
900

Användning av artificiella neurala nätverk (ANNs) för att upptäcka cyberattacker: En systematisk litteraturgenomgång av hur ANN kan användas för att identifiera cyberattacker / Using artificial neural networks (ANNs) to detect cyberattacks: A systematic literature review of how ANNs can be used to identify cyberattacks

Wongkam, Nathalie, Shameel, Ahmed Abdulkareem Shameel January 2023 (has links)
This research study investigates the application of machine learning (ML), specifically artificial neural networks (ANN), in network intrusion detection to identify and prevent cyber-attacks. The study employs a systematic literature review to compile and analyse relevant research, aiming to offer insights and guidance for future studies. The research questions explore the effectiveness of machine learning algorithms in detecting and mitigating network attacks, as well as the challenges associated with using ANN. The methodology involves conducting a structured search, selection, and review of scientific articles. The findings demonstrate the effective utilization of machine learning algorithms, particularly ANN, in combating cyber-attacks. The study also highlights challenges related to ANN's sensitivity to network traffic disturbances and the increased requirements for substantial data and computational power. The study provides valuable guidance for developing reliable and cost-effective ANN-based solutions for network intrusion detection. By synthesizing and analysing existing research, the study contributes to a deeper understanding of the practical application of machine learning algorithms, specifically ANN, in the realm of cybersecurity. This contributes to knowledge development and provides a foundation for future research in the field. The significance of the study lies in promoting the development of effective solutions for detecting and preventing network attacks.
