611 |
Precise Mapping for Retinal Photocoagulation in SLIM (Slit-Lamp Image Mosaicing) / Cartographie précise pour la photocoagulation rétinienne dans SLIM (Mosaïque de l’image de la lampe à fente). Prokopetc, Kristina, 10 November 2017
Cette thèse est issue d’un accord CIFRE entre le groupe de recherche EnCoV de l’Université Clermont Auvergne et la société Quantel Medical (www.quantel-medical.fr). Quantel Medical est une entreprise spécialisée dans le développement innovant des ultrasons et des produits laser en ophtalmologie. Cette thèse présente un travail de recherche visant à l’application du diagnostic assisté par ordinateur et du traitement des maladies de la rétine avec une utilisation du prototype industriel TrackScan développé par Quantel Medical. Plus précisément, elle contribue au problème du mosaicing précis de l’image de la lampe à fente (SLIM) et du recalage automatique et multimodal des images SLIM avec l’angiographie par fluorescence (FA) pour aider à la photocoagulation panrétinienne naviguée. Nous abordons trois problèmes différents. Le premier problème est lié à l’accumulation des erreurs de recalage en SLIM, à savoir la dérive de la mosaïque. Une approche commune pour obtenir la mosaïque consiste à calculer des transformations uniquement entre les images temporellement consécutives dans une séquence, puis à les combiner pour obtenir la transformation entre les vues non consécutives temporellement. De nombreux algorithmes existants suivent cette approche. Malgré le faible coût de calcul et la simplicité de cette méthode, en raison de sa nature de ‘chaînage’, les erreurs d’alignement s’accumulent, ce qui entraîne une dérive des images dans la mosaïque. Nous proposons donc d’utiliser les récents progrès réalisés dans les méthodes d’ajustement de faisceaux et de présenter un cadre de réduction de la dérive spécialement conçu pour SLIM. Nous présentons aussi une nouvelle procédure de raffinement local. Deuxièmement, nous abordons le problème induit par divers types d’artefacts liés à la lumière, communs à l’imagerie SLIM, qui dégradent considérablement la qualité géométrique et photométrique de la mosaïque. Les solutions existantes permettent de faire face aux éblouissements forts qui corrompent entièrement le rendu de la rétine dans l’image, tout en laissant de côté la correction des reflets spéculaires semi-transparents et des reflets de lentille, ce qui introduit des images fantômes et des pertes d’information. En outre, les méthodes génériques ne produisent pas de résultats satisfaisants dans SLIM. Par conséquent, nous proposons une meilleure alternative en concevant une méthode basée sur une technique rapide à une seule image pour éliminer les éblouissements, ainsi que sur la notion de reflets spéculaires semi-transparents et sur des indices de mouvement pour la correction intelligente des reflets de lentille. Finalement, nous résolvons le problème du recalage multimodal automatique avec SLIM. Il existe une quantité importante de travaux sur le recalage multimodal de diverses modalités d’image rétinienne. Cependant, la majorité des méthodes existantes nécessitent une détection de points clés dans les deux modalités d’image, ce qui est une tâche très difficile. Dans le cas de SLIM et FA, elles ne tiennent pas compte du recalage précis dans la zone maculaire, le repère prioritaire. En outre, personne n’a développé une solution entièrement automatique pour SLIM et FA. Dans cette thèse, nous proposons la première méthode capable de recaler ces deux modalités sans saisie manuelle, en détectant les repères anatomiques uniquement sur une seule image pour assurer un recalage précis dans la zone maculaire. (...) 
/ This thesis arises from an agreement Convention Industrielle de Formation par la REcherche (CIFRE) between the Endoscopy and Computer Vision (EnCoV) research group at Université Clermont Auvergne and the company Quantel Medical (www.quantel-medical.fr), which specializes in the development of innovative ultrasound and laser products in ophthalmology. It presents research directed at the application of computer-aided diagnosis and treatment of retinal diseases using the TrackScan industrial prototype developed at Quantel Medical. More specifically, it contributes to the problem of precise Slit-Lamp Image Mosaicing (SLIM) and automatic multi-modal registration of SLIM with Fluorescein Angiography (FA) to assist navigated pan-retinal photocoagulation. We address three different problems. The first is the problem of accumulated registration errors in SLIM, namely the mosaicing drift. A common approach to image mosaicing is to compute transformations only between temporally consecutive images in a sequence and then to combine them to obtain the transformation between non-temporally consecutive views. Many existing algorithms follow this approach. Despite the low computational cost and the simplicity of such methods, alignment errors tend to accumulate because of their ‘chaining’ nature, causing images to drift in the mosaic. We propose to use recent advances in key-frame Bundle Adjustment methods and present a drift reduction framework that is specifically designed for SLIM. We also introduce a new local refinement procedure. Secondly, we tackle the problem of various types of light-related imaging artifacts common in SLIM, which significantly degrade the geometric and photometric quality of the mosaic. Existing solutions manage to deal with strong glares that corrupt the retinal content entirely, while leaving aside the correction of semi-transparent specular highlights and lens flare, which introduces ghosting and information loss. Moreover, related generic methods do not produce satisfactory results in SLIM. We therefore propose a better alternative by designing a method based on a fast single-image technique to remove glares, combined with the notion of semi-transparent specular highlights and motion cues for intelligent correction of lens flare. Finally, we solve the problem of automatic multi-modal registration of FA and SLIM. A number of related works exist on multi-modal registration of various retinal image modalities. However, the majority of existing methods require a detection of feature points in both image modalities, which is a very difficult task for SLIM and FA, and they do not account for accurate registration in the macula area, the priority landmark. Moreover, none of them offers a fully automatic solution for SLIM and FA. In this thesis, we propose the first method that is able to register these two modalities without manual input, by detecting retinal features on only one image, and that ensures accurate registration in the macula area. The description of the extensive experiments that were used to demonstrate the effectiveness of each of the proposed methods is also provided. 
Our results show that (i) our new local refinement procedure contributes significantly to drift reduction, allowing us to achieve an improvement in precision over the current solution employed in the TrackScan; (ii) the proposed methodology for the correction of light-related artifacts is highly effective, significantly outperforming related works in SLIM; and (iii) although our solution for multi-modal registration builds on existing methods, with the various specific modifications made it is fully automatic, effective, and improves on the baseline registration method currently used in the TrackScan.
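As an aside on the ‘chaining’ problem described above, the following short sketch (not from the thesis; all numbers are synthetic) illustrates how small per-pair registration errors compound into mosaicing drift when pairwise transforms are composed; a key-frame bundle adjustment jointly re-estimates the transforms to keep this error bounded.

# A minimal sketch (not the thesis code) of why 'chained' pairwise registration drifts:
# small errors in consecutive transforms compound when composed into mosaic space.
# All numbers below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def similarity(dx, dy, theta):
    """2D similarity transform as a 3x3 homogeneous matrix (rotation + translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0., 0., 1.]])

n_frames = 200
true_step = similarity(5.0, 0.0, 0.0)          # camera pans 5 px per frame
H_true = np.eye(3)
H_chained = np.eye(3)
drift = []
for i in range(n_frames):
    H_true = H_true @ true_step
    # each pairwise estimate carries a tiny error (noise on translation/rotation)
    noisy_step = similarity(5.0 + rng.normal(0, 0.1),
                            rng.normal(0, 0.1),
                            rng.normal(0, 1e-3))
    H_chained = H_chained @ noisy_step
    # drift = position error of the image origin mapped into mosaic coordinates
    p_true = H_true @ np.array([0., 0., 1.])
    p_est = H_chained @ np.array([0., 0., 1.])
    drift.append(np.linalg.norm((p_true - p_est)[:2]))

print(f"drift after {n_frames} frames: {drift[-1]:.1f} px (grows with sequence length)")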
|
612 |
Técnicas de inteligência artificial aplicadas ao método de monitoramento de integridade estrutural baseado na impedância eletromecânica para monitoramento de danos em estruturas aeronáuticas / Artificial intelligence techniques applied to the impedance-based structural health monitoring technique for monitoring damage in aircraft structures. Palomino, Lizeth Vargas, 03 July 2012
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The basic concept of impedance-based structural health monitoring is measuring the
variation of the electromechanical impedance of the structure as caused by the presence of
damage by using patches of piezoelectric material bonded on the surface of the structure (or
embedded into it). The measured electrical impedance of the PZT patch is directly related to
the mechanical impedance of the structure. That is why the presence of damage can be
detected by monitoring the variation of the impedance signal. In order to quantify damage, a
metric is specially defined, which allows a characteristic scalar value to be assigned to the fault.
This study initially evaluates the influence of environmental conditions on the impedance
measurement, such as temperature, magnetic fields and ionic environment. The results show
that the magnetic field does not influence the impedance measurement and that the ionic
environment influences the results. However, when the sensor is shielded, the effect of the
ionic environment is significantly reduced. The influence of the sensor geometry has also
been studied. It has been established that the shape of the PZT patch (rectangular or
circular) has no influence on the impedance measurement. However, the position of the
sensor is important for correct damage detection. This work presents the development
of a low-cost, portable impedance measurement system that automatically measures and stores
data from 16 PZT patches, without human intervention. One fundamental aspect in the
context of this work is to characterize the damage type from the various impedance signals
collected. In this sense, the techniques of artificial intelligence known as neural networks and
fuzzy cluster analysis were tested for classifying damage in aircraft structures, obtaining
satisfactory results. One last contribution of the present work is the study of the performance
of the electromechanical impedance-based structural health monitoring technique to detect
damage in structures under dynamic loading. Encouraging results were obtained for this aim. / O conceito básico da técnica de integridade estrutural baseada na impedância tem a ver com o
monitoramento da variação da impedância eletromecânica da estrutura, causada pela presença
de alterações estruturais, através de pastilhas de material piezelétrico coladas na superfície da
estrutura ou nela incorporadas. A impedância medida se relaciona com a impedância mecânica
da estrutura. A partir da variação dos sinais de impedância pode-se concluir pela existência ou
não de uma falha. Para quantificar esta falha, métricas de dano são especialmente definidas,
permitindo atribuir-lhe um valor escalar característico. Este trabalho pretende inicialmente avaliar
a influência de algumas condições ambientais, tais como os campos magnéticos e os meios
iônicos na medição de impedância. Os resultados obtidos mostram que os campos magnéticos
não têm influência na medição de impedância e que os meios iônicos influenciam os resultados;
entretanto, ao blindar o sensor, este efeito se reduz consideravelmente. Também foi estudada a
influência da geometria, ou seja, do formato do PZT e da posição do sensor com respeito ao
dano. Verificou-se que o formato do PZT não tem nenhuma influência na medição e que a
posição do sensor é importante para detectar corretamente o dano. Neste trabalho se apresenta
o desenvolvimento de um sistema de medição de impedância de baixo custo e portátil que tem a
capacidade de medir e armazenar a medição de 16 PZTs sem a necessidade de intervenção
humana. Um aspecto de fundamental importância no contexto deste trabalho é a caracterização
do dano a partir dos sinais de impedância coletados. Neste sentido, as técnicas de inteligência
artificial conhecidas como redes neurais e análises de cluster fuzzy, foram testadas para
classificar danos em estruturas aeronáuticas, obtendo resultados satisfatórios para esta tarefa.
Uma última contribuição deste trabalho é o estudo do comportamento da técnica de
monitoramento de integridade estrutural baseado na impedância eletromecânica na detecção de
danos em estruturas submetidas a carregamento dinâmico. Os resultados obtidos mostram que
a técnica funciona adequadamente nestes casos. / Doutor em Engenharia Mecânica
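For illustration of the scalar damage-metric idea mentioned in the abstract, the sketch below computes two metrics commonly used with electromechanical impedance signatures (RMSD and a correlation-based deviation) on synthetic curves; the thesis may define its metric differently, so treat this only as an assumed example.

# Illustrative sketch of scalar damage metrics commonly used with electromechanical
# impedance signatures (e.g. RMSD); not necessarily the metric adopted in the thesis.
import numpy as np

def rmsd(z_baseline, z_measured):
    """Root-mean-square deviation between baseline and current impedance signatures."""
    z_baseline = np.asarray(z_baseline, dtype=float)
    z_measured = np.asarray(z_measured, dtype=float)
    return np.sqrt(np.sum((z_measured - z_baseline) ** 2) / np.sum(z_baseline ** 2))

def ccd(z_baseline, z_measured):
    """Correlation-coefficient deviation metric: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(z_baseline, z_measured)[0, 1]

# synthetic real-part impedance curves over a frequency band (purely illustrative)
freq = np.linspace(20e3, 40e3, 400)                      # Hz
baseline = 100 + 10 * np.sin(freq / 1e3)                 # pristine signature
damaged = baseline + 3 * np.sin(freq / 0.7e3 + 0.5)      # shifted/amplified peaks

print(f"RMSD = {rmsd(baseline, damaged):.4f}")
print(f"1-CC = {ccd(baseline, damaged):.4f}")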
|
613 |
MP-Draughts - Um Sistema Multiagente de Aprendizagem Automática para Damas Baseado em Redes Neurais de Kohonen e Perceptron Multicamadas / MP-Draughts: A Multiagent Machine Learning System for Draughts Based on Kohonen Neural Networks and Multilayer Perceptron. Duarte, Valquíria Aparecida Rosa, 17 July 2009
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The goal of this work is to present MP-Draughts (MultiPhase- Draughts), that is
a multiagent environment for Draughts, where one agent - named IIGA- is built and
trained such as to be specialized for the initial and the intermediate phases of the games
and the remaining ones for the final phases of them. Each agent of MP-Draughts is a
neural network which learns almost without human supervision (distinctly from the world
champion agent Chinook). MP-Draughts issues from a continuous activity of research
whose previous product was the efficient agent VisionDraughts. Despite its good general
performance, VisionDraughts frequently does not succeed in final phases of a game, even
being in advantageous situation compared to its opponent (for instance, getting into
endgame loops). In order to try to reduce this misbehavior of the agent during endgames,
MP-Draughts counts on 25 agents specialized for endgame phases, each one trained such
as to be able to deal with a determined cluster of endgame boardstates. These 25 clusters
are mined by a Kohonen-SOM Network from a Data Base containing a large quantity of
endgame boardstates. After trained, MP-Draughts operates in the following way: first,
an optimized version of VisionDraughts is used as IIGA; next, the endgame agent that
represents the cluster which better fits the current endgame board-state will replace it up
to the end of the game. This work shows that such a strategy significantly improves the
general performance of the player agents. / O objetivo deste trabalho é propor um sistema de aprendizagem de Damas, o MPDraughts
(MultiPhase-Draughts): um sistema multiagentes, em que um deles - conhecido
como IIGA (Initial/Intermediate Game Agent)- é desenvolvido e treinado para ser especializado
em fases iniciais e intermediárias de jogo e os outros 25 agentes, em fases finais.
Cada um dos agentes que compõe o MP-Draughts é uma rede neural que aprende a jogar
com o mínimo possível de intervenção humana (distintamente do agente campeão do
mundo Chinook). O MP-Draughts é fruto de uma contínua atividade de pesquisa que
teve como produto anterior o VisionDraughts. Apesar de sua eficiência geral, o
VisionDraughts, muitas vezes, tem seu bom desempenho comprometido na fase de finalização
de partidas, mesmo estando em vantagem no jogo em comparação com o seu oponente
(por exemplo, entrando em loop de final de jogo). No sentido de reduzir o comportamento
indesejado do jogador, o MP-Draughts conta com 25 agentes especializados em final de
jogo, sendo que cada um é treinado para lidar com um determinado tipo de cluster de
tabuleiros de final de jogo. Esses 25 clusters são minerados por redes de Kohonen-SOM
de uma base de dados que contém uma grande quantidade de estados de tabuleiro de final
de jogo. Depois de treinado, o MP-Draughts atua da seguinte maneira: primeiro, uma
versão aprimorada do VisionDraughts é usada como o IIGA; depois, um agente de final
de jogo que representa o cluster que mais se aproxima do estado corrente do tabuleiro do
jogo deverá substituir o IIGA e conduzir o jogo até o final. Este trabalho mostra que essa
estratégia melhorou, significativamente, o desempenho geral do agente jogador. / Mestre em Ciência da Computação
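The cluster-then-dispatch scheme described above can be illustrated with a small self-contained sketch: a 5x5 Kohonen SOM (25 units, one per endgame agent) is trained on vector-encoded endgame board states, and at play time the best-matching unit selects the specialized agent. The board encoding, map size and training schedule below are illustrative assumptions, not the MP-Draughts implementation.

# Hedged sketch: a tiny Kohonen SOM with 25 units (one per endgame agent) trained on
# vector-encoded endgame board states; the best-matching unit picks the endgame agent.
import numpy as np

rng = np.random.default_rng(42)

class TinySOM:
    def __init__(self, rows, cols, dim, lr=0.5, sigma=1.5):
        self.rows, self.cols = rows, cols
        self.w = rng.normal(0, 0.1, size=(rows * cols, dim))
        self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        self.lr0, self.sigma0 = lr, sigma

    def winner(self, x):
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, epochs=10):
        n, t = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                lr = self.lr0 * (1 - t / n)
                sigma = self.sigma0 * (1 - t / n) + 1e-3
                bmu = self.grid[self.winner(x)]
                d2 = np.sum((self.grid - bmu) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))          # neighbourhood function
                self.w += lr * h[:, None] * (x - self.w)
                t += 1

# toy stand-in for a database of endgame board states: 32 playable squares,
# values in {-2,-1,0,1,2} for opponent king/man, empty, own man/king
boards = rng.integers(-2, 3, size=(1000, 32)).astype(float)

som = TinySOM(5, 5, dim=32)          # 5x5 map -> 25 endgame clusters/agents
som.train(boards, epochs=10)

current_endgame_board = boards[0]
agent_id = som.winner(current_endgame_board)
print(f"dispatching endgame position to specialized agent #{agent_id}")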
|
614 |
Classificação de dados cinéticos da inicialização da marcha utilizando redes neurais artificiais e máquinas de vetores de suporte / Classification of kinetic data from gait initiation using artificial neural networks and support vector machines. Takáo, Thales Baliero, 01 July 2015
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / The aim of this work was to assess the performance of computational methods for classifying ground reaction force (GRF) data in order to identify on which surface gait initiation was performed. Twenty-five subjects were evaluated while performing the gait initiation task under two experimental conditions: barefoot on a hard surface and barefoot on a soft surface (foam). The center of pressure (COP) variables were calculated from the GRF, and principal component analysis was used to retain the main features of the medial-lateral, anterior-posterior and vertical force components. The principal components representing each force component were retained using the broken stick test. Then, support vector machines and multilayer neural networks were trained with the Backpropagation and Levenberg-Marquardt algorithms to perform the GRF classification. The classifier models were evaluated based on the area under the ROC curve and accuracy criteria, obtained by bootstrap cross-validation with 500 resampled databases. The support vector machine with linear kernel and margin parameter equal to 100 produced the best result using the medial-lateral force as input, with an area under the ROC curve of 0.7712 and an accuracy of 0.7974. These results showed a significant difference from those obtained with the vertical and anterior-posterior forces. We may therefore conclude that the choice of GRF component and of the classifier model directly influences the performance of the classification. / O objetivo deste trabalho foi avaliar o desempenho de ferramentas de inteligência computacional para a classificação da força de reação do solo (FRS) identificando em que tipo de superfície foi realizada a inicialização da marcha. A base de dados foi composta pela força de reação do solo de 25 indivíduos, adquirida por duas plataformas de força, durante a inicialização da marcha sobre uma superfície macia (SM - colchão), e depois sobre uma superfície dura (SD). A partir da FRS foram calculadas as variáveis que descrevem o comportamento do centro de pressão (COP) e também foram extraídas as características relevantes das forças mediolateral (Fx), anteroposterior (Fy) e vertical (Fz) por meio da análise de componentes principais (ACP). A seleção das componentes principais que descrevem cada uma das forças foi feita por meio do teste broken stick. Em seguida, máquinas de vetores de suporte (MVS) e redes neurais artificiais multicamada (MLP) foram treinadas com os algoritmos Backpropagation e de Levenberg-Marquardt (LMA) para realizar a classificação da FRS. Para a avaliação dos modelos implementados a partir das ferramentas de inteligência computacional foram utilizados os índices de acurácia (ACC) e área abaixo da curva ROC (AUC). Estes índices foram obtidos na validação cruzada utilizando a técnica bootstrap com 500 bases de dados de amostras. O melhor resultado foi obtido para a máquina de vetores de suporte com kernel linear e parâmetro de margem igual a 100, utilizando a Fx como entrada para a classificação das amostras. Os índices AUC e ACC foram 0.7712 e 0.7974, respectivamente. Estes resultados apresentaram diferença estatística em relação aos modelos que utilizaram as componentes principais da Fy e Fz, permitindo concluir que a escolha da componente da FRS, assim como do modelo a ser implementado, influencia diretamente no desempenho dos índices que avaliam a classificação.
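A hedged sketch of the pipeline described in the abstract is given below, with synthetic data standing in for the GRF curves: PCA with the broken-stick rule to choose how many components to keep, a linear SVM with margin parameter C = 100, and bootstrap estimation of AUC and accuracy over 500 resamples. Any parameter not stated in the abstract is an assumption for illustration.

# Hedged sketch: broken-stick component selection + linear SVM (C=100) with bootstrap
# evaluation; the GRF data below are synthetic stand-ins, not the study's measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.utils import resample

rng = np.random.default_rng(0)

# stand-in for medial-lateral GRF curves: 50 trials x 200 time samples, 2 surfaces
X = rng.normal(size=(50, 200))
y = rng.integers(0, 2, size=50)          # 0 = hard surface, 1 = soft surface (foam)
X[y == 1] += 0.3                          # inject a small class difference

def broken_stick_components(explained_ratio):
    """Keep leading components whose explained variance exceeds the broken-stick expectation."""
    p = len(explained_ratio)
    expectation = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
    n = 0
    while n < p and explained_ratio[n] > expectation[n]:
        n += 1
    return max(n, 1)

pca = PCA().fit(X)
n_keep = broken_stick_components(pca.explained_variance_ratio_)
Z = PCA(n_components=n_keep).fit_transform(X)

aucs, accs = [], []
for _ in range(500):                      # 500 bootstrap resamples, as in the abstract
    idx = resample(np.arange(len(y)))
    oob = np.setdiff1d(np.arange(len(y)), idx)    # out-of-bag samples for evaluation
    if oob.size == 0 or len(np.unique(y[idx])) < 2:
        continue
    clf = SVC(kernel="linear", C=100).fit(Z[idx], y[idx])
    if len(np.unique(y[oob])) == 2:
        aucs.append(roc_auc_score(y[oob], clf.decision_function(Z[oob])))
    accs.append(accuracy_score(y[oob], clf.predict(Z[oob])))

print(f"kept {n_keep} principal components")
print(f"bootstrap AUC ~ {np.mean(aucs):.3f}, accuracy ~ {np.mean(accs):.3f}")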
|
615 |
Sistemas inteligentes aplicados às redes ópticas passivas com acesso múltiplo por divisão de código OCDMA-PON / The application of intelligent systems in passive optical networks based on optical code division multiple access OCDMA-PON. José Valdemir dos Reis Júnior, 14 May 2015
As redes ópticas passivas (PON), em virtude da oferta de maior largura de banda a custos relativamente baixos, vêm se destacando como possíveis candidatas para suprir a demanda de novos serviços, como tráfego de voz, vídeo, dados e serviços móveis, exigidos pelos usuários finais. Uma importante candidata para realizar o controle de acesso nas PONs é a técnica de acesso múltiplo por divisão de código óptico (OCDMA), por apresentar características relevantes, como maior segurança e capacidade flexível sob demanda. No entanto, agentes físicos externos, como as variações de temperatura ambiental no enlace, exercem uma influência considerável sobre as condições de operação das redes ópticas. Especificamente, nas OCDMA-PONs, os efeitos da variação de temperatura ambiental no enlace de transmissão afetam o valor do pico da autocorrelação do código OCDMA a ser detectado, degradando a qualidade de serviço (QoS), além do aumento da taxa de erro de bit (BER) do sistema. O presente trabalho apresenta duas novas propostas de técnicas, utilizando sistemas inteligentes, mais precisamente, controladores lógicos fuzzy (FLC) aplicados nos transmissores e nos receptores das OCDMA-PONs, com o objetivo de mitigar os efeitos de variação de temperatura. Os resultados das simulações mostram que o desempenho da rede é melhorado quando as abordagens propostas são empregadas. Por exemplo, para a distância de propagação de 10 km e variações de temperatura de 20°C, o sistema com FLC suporta 40 usuários simultâneos com BER = 10⁻⁹, enquanto que, sem FLC, acomoda apenas 10. Ainda neste trabalho, é proposta uma nova técnica de classificação de códigos OCDMA, com o uso de redes neurais artificiais, mais precisamente, mapas auto-organizáveis de Kohonen (SOM), importante para que o sistema de gerenciamento da rede possa oferecer uma maior segurança para os usuários. Por fim, sem o uso de técnica inteligente, é apresentada uma nova proposta de código OCDMA, cujo formalismo desenvolvido permite generalizar a obtenção de códigos com propriedades distintas, como diversas ponderações e comprimentos de códigos. / Passive optical networks (PON), due to the provision of higher bandwidth at relatively low cost, have stood out as possible candidates to meet the demand for new services, such as voice traffic, video, data and mobile services, required by end users. An important candidate to perform access control in PONs is the Optical Code-Division Multiple-Access (OCDMA) technique, due to relevant characteristics such as improved security and flexible capacity on demand. However, external physical agents, such as variations in environmental temperature on the fiber optic link, have a considerable influence on the operating conditions of optical networks. Specifically, in OCDMA-PONs, the effects of environmental temperature variation in the transmission link affect the peak value of the autocorrelation of the OCDMA code to be detected, degrading the quality of service (QoS), in addition to increasing the Bit Error Rate (BER) of the system. This thesis presents two new techniques using intelligent systems, more precisely, Fuzzy Logic Controllers (FLC) applied at the transmitters and receivers of OCDMA-PONs, in order to mitigate the effects of temperature variation. The simulation results show that the network performance is improved when the proposed approaches are employed. 
For example, for a propagation distance of 10 kilometers and temperature variations of 20°C, the system with FLC supports 40 simultaneous users at BER = 10⁻⁹, whereas without the FLC it can accommodate only 10. Furthermore, this work proposes a new technique for classifying OCDMA codes using Artificial Neural Networks (ANN), more precisely Kohonen Self-Organizing Maps (SOM), which is important for the network management system to provide increased security for users. Finally, without the use of intelligent techniques, a new OCDMA code is proposed, whose formalism makes it possible to generate codes with distinct properties, such as different weights and code lengths.
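To make the fuzzy-control idea concrete, the following minimal sketch (an assumption for illustration, not the thesis design) shows a Mamdani-style controller that maps the temperature deviation on the link to a normalized compensation action, such as a decoder-threshold or wavelength correction; the membership functions and rules are invented for the example.

# Hedged sketch of a Mamdani-style fuzzy logic controller: input is the temperature
# deviation on the link (°C), output is a normalized compensation command in [-1, 1].
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_compensation(delta_t):
    """Map a temperature deviation (°C) to a compensation command in [-1, 1]."""
    # fuzzify the input
    neg = tri(delta_t, -30, -20, 0)
    zero = tri(delta_t, -10, 0, 10)
    pos = tri(delta_t, 0, 20, 30)

    # rule base: IF deviation is negative THEN compensation is positive, etc.
    u = np.linspace(-1, 1, 201)                      # output universe
    agg = np.maximum.reduce([
        np.minimum(neg,  tri(u, 0.2, 0.7, 1.2)),     # push compensation up
        np.minimum(zero, tri(u, -0.3, 0.0, 0.3)),    # do nothing
        np.minimum(pos,  tri(u, -1.2, -0.7, -0.2)),  # push compensation down
    ])
    if agg.sum() == 0:
        return 0.0
    return float(np.sum(u * agg) / np.sum(agg))      # centroid defuzzification

for dt in (-20, -5, 0, 5, 20):
    print(f"deltaT = {dt:+3d} °C  ->  compensation = {fuzzy_compensation(dt):+.2f}")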
|
616 |
Dynamics of isolated quantum many-body systems far from equilibrium. Schmitt, Markus, 11 January 2018
No description available.
|
617 |
應用機器學習於標準普爾指數期貨 / An application of machine learning to Standard & Poor's 500 index future. 林雋鈜, Lin, Jyun-Hong, Unknown Date
本系統係藉由分析歷史交易資料來預測S&P500期貨市場之漲幅。 我們改進了Tsaih et al. (1998)提出的混和式AI系統。 該系統結合了Rule Base 系統以及類神經網路作為其預測之機制。我們針對該系統在以下幾點進行改善:(1) 將原本的日期資料改為使用分鐘資料作為輸入。(2) 本研究採用了“移動視窗”的技術,在移動視窗的概念下,每一個視窗我們希望能夠在60分鐘內訓練完成。(3)在擴增了額外的變數 – VIX價格做為系統的輸入。(4) 由於運算量上升,因此本研究利用TensorFlow 以及GPU運算來改進系統之運作效能。
我們發現VIX變數確實可以改善系統之預測精準度，但訓練的時間雖然平均低於60分鐘，但仍有部分視窗的時間會小幅超過60分鐘。 / The system is designed to predict the futures' trend by analyzing past transaction data, and it gives advice to investors who are hesitating to make decisions. We improved the hybrid AI system proposed by Tsaih et al. (1998), which combines a rule-based system with an artificial neural network system to give suggestions based on past data. We improved the hybrid system in the following aspects: (1) the index data are changed from a daily basis to a minute basis in this study; (2) a "moving-window" mechanism is adopted, and for each window we aim to finish training within 60 minutes; (3) one extra variable, the VIX price, is added as an input to the system; (4) due to the higher computational demand, TensorFlow and GPU computing are applied in our system.
We find that the VIX variable indeed improves the predictive accuracy of the proposed system. The average training time is below 60 minutes; however, some windows still take slightly more than 60 minutes to train.
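The moving-window scheme with a 60-minute training budget can be sketched as below; the data, window sizes and features are illustrative assumptions (synthetic minute bars with a VIX-like input), not the thesis configuration, and a scikit-learn MLP stands in for the hybrid rule-based/neural system.

# Hedged sketch: per-window retraining on minute bars (price returns + a VIX input)
# under a wall-clock budget, then evaluation on the next window. All data synthetic.
import time
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# synthetic minute-level series: futures price and a VIX-like series
n_minutes = 3000
price = 2000 + np.cumsum(rng.normal(0, 0.5, n_minutes))
vix = 15 + np.cumsum(rng.normal(0, 0.05, n_minutes))

def make_features(prices, vix_vals, lookback=30):
    """Features: past returns and VIX level; label: direction of the next minute."""
    X, y = [], []
    rets = np.diff(prices) / prices[:-1]
    for t in range(lookback, len(rets) - 1):
        X.append(np.concatenate([rets[t - lookback:t], [vix_vals[t]]]))
        y.append(1 if rets[t + 1] > 0 else 0)
    return np.array(X), np.array(y)

X, y = make_features(price, vix)

train_size, test_size, budget_s = 2000, 200, 60 * 60   # 60-minute budget per window
for start in range(0, len(X) - train_size - test_size, test_size):
    t0 = time.time()
    tr = slice(start, start + train_size)
    te = slice(start + train_size, start + train_size + test_size)
    scaler = StandardScaler().fit(X[tr])
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=200, random_state=0)
    clf.fit(scaler.transform(X[tr]), y[tr])
    elapsed = time.time() - t0
    acc = clf.score(scaler.transform(X[te]), y[te])
    print(f"window @ {start:5d}: trained in {elapsed:6.1f}s "
          f"(budget {budget_s}s), directional accuracy = {acc:.3f}")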
|
618 |
The use of Inverse Neural Networks in the Fast Design of Printed Lens Antennas. Gosal, Gurpreet Singh, January 2015
The major objective of this thesis is the implementation of the inverse neural network concept in the design of printed lens (transmitarray) antennas. As it is computationally expensive to perform full-wave simulations of the entire transmitarray structure and thereafter perform optimization, the idea is to generate a design database by assuming that a unit cell of the transmitarray is situated inside a 2D infinite periodic structure. In this way we generate a design database of the transmission coefficient by varying the unit-cell parameters. For the actual design, however, we need the dimensions of each cell on the transmitarray aperture, and to obtain them we need to invert the design database.
The major contribution of this thesis is the proposal and implementation of a database inversion methodology, namely inverse neural network modelling. We provide the algorithms for carrying out the inversion process, as well as check results to demonstrate the reliability of the proposed methodology. Finally, we apply this approach to design a transmitarray antenna and measure its performance.
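A minimal sketch of the database-inversion idea follows: a forward database maps a unit-cell dimension to a transmission phase (in the thesis this comes from full-wave simulations of the cell in an infinite periodic environment), and an inverse neural network is trained to return the cell dimension that realizes a requested phase. The analytic phase curve below is a toy stand-in, not simulated data.

# Hedged sketch of database inversion with a neural network; the forward "database"
# here is a toy monotonic phase curve, not full-wave unit-cell simulation results.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# forward database: patch size (mm) -> transmission phase (deg), monotonic toy model
size_mm = np.linspace(2.0, 9.0, 400)
phase_deg = -40.0 * (size_mm - 2.0) + rng.normal(0, 1.0, size_mm.size)  # ~0 to -280 deg

# inverse model: phase in, size out (only valid here because the forward curve is
# monotonic; non-monotonic databases need the branch handling a full method addresses)
inv_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
inv_net.fit(phase_deg.reshape(-1, 1), size_mm)

# use: required phase distribution over the transmitarray aperture -> cell dimensions
required_phases = np.array([-30.0, -120.0, -210.0])
predicted_sizes = inv_net.predict(required_phases.reshape(-1, 1))
for p, s in zip(required_phases, predicted_sizes):
    print(f"required phase {p:7.1f} deg  ->  predicted cell size {s:.2f} mm")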
|
619 |
Preprocesserings påverkan på prediktiva modeller : En experimentell analys av tidsserier från fjärrvärme / Impact of preprocessing on predictive models : An experimental analysis of time series from district heating. Andersson, Linda, Laurila, Alex, Lindström, Johannes, January 2021
Värme står för det största energibehovet inom hushåll och andra byggnader i samhället och olika tekniker används för att kunna reducera mängden energi som går åt för att spara på både miljö och pengar. Ett angreppssätt på detta problem är genom informatiken, där maskininlärning kan användas för att analysera och förutspå värmebehovet. I denna studie används maskininlärning för att prognostisera framtida energiförbrukning för fjärrvärme utifrån historisk fjärrvärmedata från ett fjärrvärmebolag tillsammans med exogena variabler i form av väderdata från Sveriges meteorologiska och hydrologiska institut. Studien är skriven på svenska och utforskar effekter av preprocessering hos prediktionsmodeller som använder tidsseriedata för att prognostisera framtida datapunkter. Stegen som utförs i studien är normalisering, interpolering, hantering av numeric outliers och missing values, datetime feature engineering, säsongsmässighet, feature selection, samt korsvalidering. Maskininlärningsmodellen som används i studien är Multilayer Perceptron som är en subkategori av artificiellt neuralt nätverk. Forskningsfrågan som besvaras fokuserar på effekter av preprocessering och feature selection för prediktiva modellers prestanda inom olika datamängder och kombinationer av preprocesseringsmetoder. Modellerna delades upp i tre olika datamängder utifrån datumintervall: 2009, 2007–2011, samt 2007–2017, där de olika kombinationerna utgörs av preprocesseringssteg som kombineras inom en iterativ process. Procentuella ökningar på R2-värden för dessa olika intervall har uppnått 47,45% för ett år, 9,97% för fem år och 32,44% för 11 år. I stora drag bekräftar och förstärker resultatet befintlig teori som menar på att preprocessering kan förbättra prediktionsmodeller. Ett antal mindre observationer kring enskilda preprocesseringsmetoders effekter har identifierats och diskuterats i studien, såsom DateTime Feature Engineerings negativa effekter på modeller som tränats med ett mindre antal iterationer. / Heat accounts for the greatest energy needs in households and other buildings in society. Effective production and distribution of heat energy require techniques for minimising economic and environmental costs. One approach to this problem is through informatics where machine learning is used to analyze and predict the heating needs with the help of historical data from a district heating company and exogenous variables in the form of weather data from Sweden's Meteorological and Hydrological Institute (SMHI). This study is written in Swedish and explores the importance of preprocessing practices before training and using prediction models which utilizes time-series data to predict future energy consumption. The preprocessing steps explored in this study consists of normalization, interpolation, identification and management of numerical outliers and missing values, datetime feature engineering, seasonality, feature selection and cross-validation. The machine learning model used in this study is Multilayer Perceptron which is a subcategory of artificial neural network. The research question focuses on the effects of preprocessing and feature selection for predictive model performance within different datasets and combinations of preprocessing methods. The models were divided into three different data sets based on date ranges: 2009, 2007–2011, and 2007–2017, where the different combinations consist of preprocessing steps that are combined within an iterative process. 
Percentage increases in R2 values for these different ranges have reached 47.45% for one year, 9.97% for five years and 32.44% for 11 years. The results broadly confirm and reinforce the existing theory that preprocessing can improve prediction models. A few minor observations about the effects of individual preprocessing methods have been identified and discussed in the study, such as DateTime Feature Engineering having a detrimental effect on models with very few training iterations.
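As an illustration of the preprocessing chain examined in the study, the sketch below applies interpolation, outlier clipping, datetime feature engineering, normalization and an MLP regressor to a synthetic hourly heat-load series standing in for the district-heating data; the exact steps and parameters in the study differ per experiment.

# Hedged sketch of a preprocessing + MLP pipeline on a synthetic hourly heat-load series.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

idx = pd.date_range("2009-01-01", periods=24 * 180, freq="h")
temp = 5 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 2, len(idx))
load = 50 - 2.0 * temp + 5 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 2, len(idx))
df = pd.DataFrame({"load": load, "temp": temp}, index=idx)
df.iloc[rng.choice(len(df), 200), 0] = np.nan          # simulate missing values

# 1) interpolation of missing values, 2) clipping of numeric outliers
df["load"] = df["load"].interpolate(limit_direction="both")
lo, hi = df["load"].quantile([0.01, 0.99])
df["load"] = df["load"].clip(lo, hi)

# 3) datetime feature engineering: cyclic encodings of hour and day of year (seasonality)
df["hour_sin"] = np.sin(2 * np.pi * df.index.hour / 24)
df["hour_cos"] = np.cos(2 * np.pi * df.index.hour / 24)
df["doy_sin"] = np.sin(2 * np.pi * df.index.dayofyear / 365)
df["doy_cos"] = np.cos(2 * np.pi * df.index.dayofyear / 365)
df["load_lag24"] = df["load"].shift(24)                 # previous-day load as predictor
df = df.dropna()

X = df[["temp", "hour_sin", "hour_cos", "doy_sin", "doy_cos", "load_lag24"]].to_numpy()
y = df["load"].to_numpy()
split = int(0.8 * len(df))                              # chronological train/test split

# 4) normalization fitted on the training window only
scaler = MinMaxScaler().fit(X[:split])
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(scaler.transform(X[:split]), y[:split])
pred = mlp.predict(scaler.transform(X[split:]))
print(f"R2 on the held-out period: {r2_score(y[split:], pred):.3f}")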
|
620 |
Neuronové modelování elektromagnetických polí uvnitř automobilů / Neural Modeling of Electromagnetic Fields in Cars. Kotol, Martin, January 2018
Disertační práce se věnuje využití umělých neuronových sítí pro modelování elektromagnetických polí uvnitř automobilů. První část práce je zaměřena na analytický popis šíření elektromagnetické vlny interiérem pomocí Nortonovy povrchové vlny. Následující část práce se věnuje praktickému měření a ověření analytických modelů. Praktická měření byla zdrojem trénovacích a verifikačních dat pro neuronové sítě. Práce se zaměřuje na kmitočtová pásma 3 až 11 GHz a 55 až 65 GHz. / This dissertation deals with the use of artificial neural networks for modelling electromagnetic fields inside cars. The first part focuses on an analytical description of electromagnetic wave propagation through the interior by means of the Norton surface wave. The following part deals with practical measurements and the verification of the analytical models. The practical measurements served as the source of training and verification data for the neural networks. The work focuses on the frequency bands of 3 to 11 GHz and 55 to 65 GHz.
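As an illustration of the neural-modelling step, the sketch below trains an MLP to predict received power at a position inside the cabin for a given frequency; the synthetic data merely stand in for the in-vehicle measurements in the 3-11 GHz and 55-65 GHz bands used in the dissertation.

# Hedged sketch: MLP regression from (receiver position, frequency) to received power,
# trained on synthetic samples standing in for the in-vehicle measurement data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# inputs: receiver position in the cabin (x, y in m) and frequency (GHz)
n = 3000
x = rng.uniform(0.0, 2.0, n)
y = rng.uniform(0.0, 1.5, n)
f = np.where(rng.random(n) < 0.5, rng.uniform(3, 11, n), rng.uniform(55, 65, n))
dist = np.hypot(x - 0.2, y - 0.75) + 0.1                 # distance from an assumed Tx
# toy path-loss-like target (dB) with frequency dependence and measurement noise
p_rx = -20 * np.log10(dist) - 20 * np.log10(f) + rng.normal(0, 1.5, n)

X = np.column_stack([x, y, f])
split = int(0.8 * n)
scaler = StandardScaler().fit(X[:split])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(scaler.transform(X[:split]), p_rx[:split])
print(f"test R2 of the field model: {net.score(scaler.transform(X[split:]), p_rx[split:]):.3f}")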
|