  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1141

Sequential Machine Learning Approaches for Portfolio Management

Chapados, Nicolas 11 1900 (has links)
Cette thèse envisage un ensemble de méthodes permettant aux algorithmes d'apprentissage statistique de mieux traiter la nature séquentielle des problèmes de gestion de portefeuilles financiers. Nous débutons par une considération du problème général de la composition d'algorithmes d'apprentissage devant gérer des tâches séquentielles, en particulier celui de la mise-à-jour efficace des ensembles d'apprentissage dans un cadre de validation séquentielle. Nous énumérons les desiderata que des primitives de composition doivent satisfaire, et faisons ressortir la difficulté de les atteindre de façon rigoureuse et efficace. Nous poursuivons en présentant un ensemble d'algorithmes qui atteignent ces objectifs et présentons une étude de cas d'un système complexe de prise de décision financière utilisant ces techniques. Nous décrivons ensuite une méthode générale permettant de transformer un problème de décision séquentielle non-Markovien en un problème d'apprentissage supervisé en employant un algorithme de recherche basé sur les K meilleurs chemins. Nous traitons d'une application en gestion de portefeuille où nous entraînons un algorithme d'apprentissage à optimiser directement un ratio de Sharpe (ou autre critère non-additif incorporant une aversion au risque). Nous illustrons l'approche par une étude expérimentale approfondie, proposant une architecture de réseaux de neurones spécialisée à la gestion de portefeuille et la comparant à plusieurs alternatives. Finalement, nous introduisons une représentation fonctionnelle de séries chronologiques permettant à des prévisions d'être effectuées sur un horizon variable, tout en utilisant un ensemble informationnel révélé de manière progressive. L'approche est basée sur l'utilisation des processus Gaussiens, lesquels fournissent une matrice de covariance complète entre tous les points pour lesquels une prévision est demandée. 
Cette information est utilisée à bon escient par un algorithme qui transige activement des écarts de cours (price spreads) entre des contrats à terme sur commodités. L'approche proposée produit, hors échantillon, un rendement ajusté pour le risque significatif, après frais de transactions, sur un portefeuille de 30 actifs. / This thesis considers a number of approaches to make machine learning algorithms better suited to the sequential nature of financial portfolio management tasks. We start by considering the problem of the general composition of learning algorithms that must handle temporal learning tasks, in particular that of creating and efficiently updating the training sets in a sequential simulation framework. We enumerate the desiderata that composition primitives should satisfy, and underscore the difficulty of reaching them rigorously and efficiently. We follow by introducing a set of algorithms that accomplish the desired objectives, and present a case study of a real-world complex learning system for financial decision-making that uses those techniques. We then describe a general method to transform a non-Markovian sequential decision problem into a supervised learning problem using a K-best paths search algorithm. We consider an application in financial portfolio management where we train a learning algorithm to directly optimize a Sharpe ratio (or other non-additive, risk-averse) utility function. We illustrate the approach with extensive experimental results, using a neural network architecture specialized for portfolio management, and compare it against well-known alternatives. Finally, we introduce a functional representation of time series which allows forecasts to be performed over an unspecified horizon with progressively revealed information sets. By virtue of using Gaussian processes, a complete covariance matrix between forecasts at several time steps is available.
This information is put to use in an application to actively trade price spreads between commodity futures contracts. The approach delivers impressive out-of-sample risk-adjusted returns after transaction costs on a portfolio of 30 spreads.
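The Sharpe ratio objective above is non-additive: it depends on the dispersion of the entire return path, not on a sum of per-period rewards, which is what motivates reformulating the decision problem rather than using an additive reward. A minimal sketch (in Python; the thesis does not specify an implementation language, and the two return paths are invented for illustration):

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a periodic return series (risk-free rate omitted)."""
    r = np.asarray(returns, dtype=float)
    sd = r.std(ddof=1)
    if sd == 0.0:
        return 0.0  # degenerate riskless series
    return np.sqrt(periods_per_year) * r.mean() / sd

# Two hypothetical daily return paths with the same cumulative return:
calm = np.tile([0.0010, 0.0002], 126)   # low dispersion
wild = np.tile([0.0112, -0.0100], 126)  # same sum, high dispersion

# Equal total return, very different Sharpe ratios: the criterion cannot
# be decomposed into per-step contributions.
print(sharpe_ratio(calm), sharpe_ratio(wild))
```

Both paths sum to the same cumulative return, yet only the low-dispersion one scores well, illustrating why the criterion must be optimized over whole paths.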
1142

Mathematical and Statistical Investigation of Steamflooding in Naturally Fractured Carbonate Heavy Oil Reservoirs

Shafiei, Ali 25 March 2013 (has links)
A significant amount of Viscous Oil (VO) (e.g., heavy oil, extra heavy oil, and bitumen) is trapped in Naturally Fractured Carbonate Reservoirs, also known as NFCRs. The world VO endowment in NFCRs is estimated at ~2 trillion barrels, mostly reported in Canada, the USA, Russia, and the Middle East. To date, contributions to the world's daily oil production from this immense energy resource remain negligible, mainly due to the lack of appropriate production technologies. Implementation of a VO production technology such as steam injection is expensive (high capital investment), time-consuming, and people-intensive. Hence, before selecting a production technology for detailed economic analysis, cursory or broad screening tools or guides are a convenient means of gaining a quick overview of the technical feasibility of the various possible production technologies applied to a particular reservoir. Technical screening tools are available only for evaluating reservoir performance parameters in oil sands for various thermal VO exploitation technologies such as Steam Assisted Gravity Drainage (SAGD), Cyclic Steam Stimulation (CSS), Horizontal well Cyclic steam Stimulation (HCS), and so on. Nevertheless, such tools are not applicable to the assessment of VO NFCRs without considerable modification, owing to the different nature of these two reservoir types (e.g., the presence and effects of the fracture network on reservoir behavior, wettability, lithology, fabric, pore structure, and so on) and to different mechanisms of energy and mass transport. Given the lack of robust and rapid technical screening tools for quick assessment and performance prediction of VO NFCRs under thermal stimulation (e.g., steamflooding), developing such fast and precise tools is both necessary and desirable.
In this dissertation, an attempt was made to develop new screening tools for reservoir performance prediction in VO NFCRs using all available field and laboratory data on a particular thermal technology (vertical well steamflooding). Given the complex and heterogeneous nature of NFCRs, great uncertainty is associated with their geology, such as the fracture and porosity distribution in the reservoir, which affects any attempt to model the processes involved in thermal VO production from these technically difficult and economically unattractive reservoirs. Therefore, several modeling and analysis techniques were used to understand the main parameters controlling the steamflooding process in NFCRs and to cope with the uncertainties associated with the geologic, reservoir, and fluid property data. Thermal geomechanics effects are well known in VO production from oil sands using thermal technologies such as SAGD and cyclic steam processes. Hence, the possible impacts of thermal processes on VO NFCR performance were studied despite the lack of adequate field data. This dissertation makes the following contributions to the literature and the oil industry: Two new statistical correlations were developed, introduced, and examined for estimating the Cumulative Steam to Oil Ratio (CSOR) and Recovery Factor (RF) as measures of process performance and technical viability during vertical well steamflooding in VO Naturally Fractured Carbonate Reservoirs (NFCRs). The proposed correlations include vital parameters such as in situ fluid and reservoir properties. The data used are taken from experimental studies and field trials of vertical well steamflooding pilots in viscous oil NFCRs reported in the literature.
The error percentage for the proposed correlations is < 10% in the worst case, and the correlations contain fewer empirical constants than existing correlations for oil sands. The interactions between the parameters were also considered. The initial oil saturation and oil viscosity are the most important predictive factors. The proposed correlations successfully predicted steam/oil ratios and recovery factors in two heavy oil NFCRs. These correlations are the first reported in the literature for this type of VO reservoir. A 3-D mathematical model was developed, presented, and examined in this research work, investigating the various parameters and mechanisms affecting VO recovery from NFCRs using vertical well steamflooding. The governing equations are written for the matrix and the fractured medium separately. Uncertainties associated with the shape factor for matrix-fracture communication are eliminated by setting a continuity boundary condition at the interface. With this boundary condition, the solution method employed differs from most of the modeling simulations reported in the literature. A Newton-Raphson approach was used to solve the mass and energy balance equations. RF and CSOR were obtained as functions of steam injection rate and temperature and of characteristics of the fractured medium such as matrix size and permeability. The numerical solution clearly shows that fractures play an important role in conducting heat into the matrix. It was also concluded that the matrix block size and total permeability are the most important parameters affecting the dependent variables involved in steamflooding. A hybrid Artificial Neural Network model optimized by co-implementation of a Particle Swarm Optimization method (PSO-ANN) was developed, presented, and tested in this research work for estimating the CSOR and RF during vertical well steamflooding in VO NFCRs.
The developed PSO-ANN model, conventional ANN models, and the statistical correlations were examined using field data. Comparison of the predictions with field data shows the superiority of the proposed PSO-ANN model, with an absolute average error percentage < 6.5%, a determination coefficient (R2) > 0.98, and a Mean Squared Error (MSE) < 0.06, a substantial improvement over conventional ANN models and empirical correlations for prediction of RF and CSOR. This indicates excellent potential for the application of hybrid PSO-ANN models to screen VO NFCRs for steamflooding. This is the first time the ANN technique has been applied to, and reported in the literature for, performance prediction of steamflooding in VO NFCRs. The predictive PSO-ANN model and statistical correlations have strong potential to be merged with the heavy oil recovery modeling software available for thermal methods. This combination is expected to speed up their performance, reduce their uncertainty, and enhance their prediction and modeling capabilities. An integrated geological-geophysical-geomechanical approach was designed, presented, and applied to an NFCR case for characterizing fractures and in situ stresses in NFCRs. The proposed methodology can be applied to fracture and in situ stress characterization, which benefits various aspects of asset development such as well placement, drilling, production, thermal reservoir modeling incorporating geomechanics effects, technology assessment, and so on. A conceptual study was also conducted on geomechanics effects in VO NFCRs during steamflooding, which are not yet well understood and still require further field, laboratory, and theoretical studies. This can be considered a small step forward in this area, identifying the positive potential of such knowledge for the design of large-scale thermal operations in VO NFCRs.
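The PSO half of the PSO-ANN hybrid can be sketched generically. The following minimal particle swarm optimizer (Python; the constriction coefficients and the toy two-parameter curve fit are illustrative assumptions, not the dissertation's actual setup or correlation form) shows how PSO tunes parameters by tracking personal and global bests:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer; returns (best_position, best_value)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))  # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Clerc-style constriction coefficients, a common default choice
        v = 0.729 * v + 1.494 * r1 * (pbest - x) + 1.494 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())

# Toy least-squares fit: recover (a, b) of a hypothetical two-parameter
# correlation y = a * exp(b * x) from synthetic, noise-free data.
xs = np.linspace(0.0, 1.0, 20)
ys = 2.5 * np.exp(-1.3 * xs)
loss = lambda p: float(np.sum((p[0] * np.exp(p[1] * xs) - ys) ** 2))
best, val = pso_minimize(loss, [(0.0, 5.0), (-3.0, 0.0)])
print(best, val)
```

In the hybrid scheme, the same loop would search over network weights (or correlation constants) with the prediction error on field data as the objective.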
1144

計算智慧在選擇權定價上的發展-人工神經網路、遺傳規劃、遺傳演算法 / The development of computational intelligence in option pricing: artificial neural networks, genetic programming, and genetic algorithms

李沃牆 Unknown Date (has links)
Black-Scholes選擇權定價模型是各種選擇定價的開山始祖,無論在理論或實務上均獲致許多的便利及好評,美中不足的是,這種既定模型下結構化參數的估計問題,在真實體系的結構訊息未知或是不明朗時,或是模式錯誤,亦或政治結構或金融環境不知時,該模型在實證資料的評價上會面臨價格偏誤的窘境。是故,許多的數值演算法(numerical algorithms)便因應而生,這些方法一則源於對此基本模型的修正,一則是屬於逼近的數值解。 評價選擇權的方法雖不一而足,然所有的這些理論或模型可分為二大類即模型驅動的理論(model-drive approach)及資料驅動的理論(data-driven approach)。前者是建構在許多重要的假設,當這些假設成立時,則選擇權的價格可用如Black-Scholes偏微分方程來表示,而後再用數值解法求算出,許多的數值方法即屬於此類的範疇;而資料驅動的理論(data-driven approach),其理論的特色是它的有效性(validity)不像前者是依其假設,職是之故,他在處理現實世界的財務資料時更顯見其具有極大的彈性。這些以計算智慧(computation intelligence)為主的財務計量方法,如人工神經網路(ANNs),遺傳演算法(GAs),遺傳規劃(GP)已在財務工程(financial engineering)領域上萌芽,並有日趨蓬勃的態勢,而將機器學習技術(machine learning techniques)應用在衍生性商品的定價,應是目前財務應用上最複雜及困難,亦是最富挑戰性的問題。 本文除了對現有文獻的整理評析外,在人工神經網路方面,除用於S&P 500的實證外,並用於台灣剛推行不久的認購構證評價之實證研究;而遺傳規劃在計算智慧發展的領域中,算是較年輕的一員,但發展卻相當的快速,雖目前在經濟及財務上已有一些文獻,但就目前所知的二篇文獻選擇權定價理論的文獻中,仍是試圖學習Black-Scholes選擇權定價模型,而本文則提出修正模型,使之成為完全以資料驅動的模型,應用於S&P 500實證,亦證實可行。最後,本文結合計算智慧中的遺傳演算法( genetic algorithms)及數學上的加權殘差法(weight-residual method)來建構一條除二項式定價模型,人工神經網路定價模型,遺傳規劃定價模型等資料驅動模型之外的另一種具適應性學習能力的選擇權定價模式。 / The theory and practice of option pricing have developed rapidly in recent years, a development that can be traced to the pathbreaking paper by Fischer Black and Myron Scholes (1973). In that pioneering paper, they provided the first explicit general-equilibrium solution to the option pricing problem for simple calls and puts, forming a basis for contingent-claim asset pricing and many subsequent academic studies. Although the Black-Scholes option pricing model has enjoyed tremendous success in both practice and research, it nevertheless produces biased price estimates, so many numerical algorithms have been advanced to modify the basic model. I classified these traditional numerical algorithms and computational intelligence methods into two categories: the model-driven approach and the data-driven approach. The model-driven approach is built on several major assumptions.
When these assumptions hold, the option price can usually be described by a partial differential equation, such as the Black-Scholes formula, and solved numerically. Several numerical methods belong to this category, among them the Galerkin method, the finite-difference method, and the Monte Carlo method. The other is the data-driven approach, whose validity does not rest on the assumptions usually made for the model-driven one and which therefore has great flexibility in handling real-world financial data. Artificial neural networks, genetic algorithms, and genetic programming are members of this approach. In my dissertation, I review the option pricing literature. I use artificial neural networks in empirical studies of S&P 500 index options and of the call warrants recently introduced on the Taiwan stock market. On the other hand, genetic programming has developed rapidly in the last three years; I modified the past model and constructed a fully data-driven genetic programming model, then used it in an S&P 500 index option empirical study. Finally, I combined genetic algorithms with the weighted-residual method to develop another option pricing model with adaptive learning capability.
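The model-driven benchmark against which these data-driven methods are compared can be made concrete. A minimal sketch of the Black-Scholes formula for a European call, using only the Python standard library (the parameter values in the usage line are illustrative, not taken from the thesis):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative at-the-money call: S = K = 100, 1 year, r = 5%, sigma = 20%.
print(round(bs_call(100.0, 100.0, 1.0, 0.05, 0.2), 4))  # ≈ 10.4506
```

A data-driven model, by contrast, is fit directly to observed option prices and makes no use of this closed form.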
1145

Inteligência computacional aplicada na geração de respostas impulsivas bi-auriculares e em aurilização de salas / Computational intelligence applied to binaural impulse responses generation and room auralization

José Francisco Lucio Naranjo 19 May 2014 (has links)
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro / Neste trabalho é apresentada uma nova abordagem para obter as respostas impulsivas biauriculares (BIRs) para um sistema de aurilização utilizando um conjunto de redes neurais artificiais (RNAs). O método proposto é capaz de reconstruir as respostas impulsivas associadas à cabeça humana (HRIRs) por meio de modificação espectral e de interpolação espacial. A fim de cobrir todo o espaço auditivo de recepção, sem aumentar a complexidade da arquitetura da rede, uma estrutura com múltiplas RNAs (conjunto) foi adotada, onde cada rede opera uma região específica do espaço (gomo). Os três principais fatores que influenciam na precisão do modelo (arquitetura da rede, ângulos de abertura da área de recepção e atrasos das HRIRs) são investigados e uma configuração ideal é apresentada. O erro de modelagem no domínio da frequência é investigado considerando a natureza logarítmica da audição humana. Mais ainda, são propostos novos parâmetros para avaliação do erro, definidos em analogia com alguns dos bem conhecidos parâmetros de qualidade acústica de salas. Através da metodologia proposta obteve-se um ganho computacional, em redução do tempo de processamento, de aproximadamente 62% em relação ao método tradicional de processamento de sinais utilizado para aurilização. A aplicabilidade do novo método em sistemas de aurilização é reforçada mediante uma análise comparativa dos resultados, que incluem a geração das BIRs e o cálculo dos parâmetros acústicos biauriculares (IACF e IACC), os quais mostram erros de magnitudes reduzidas. / This work presents a new approach to obtain the Binaural Impulse Responses (BIRs) for an auralization system by using a committee of artificial neural networks (ANNs). The proposed method is capable of reconstructing the desired modified Head Related Impulse Responses (HRIRs) by means of spectral modification and spatial interpolation.
In order to cover the entire auditory reception space without increasing the networks' architectural complexity, a structure with multiple ANNs (a committee) was adopted, where each network operates in a specific reception region (bud). The three major parameters that affect the model's accuracy (the network architecture, the reception regions' aperture angles, and the HRIRs' time shifts) are investigated, and an optimal setup is presented. The modeling error in the frequency domain is investigated considering the logarithmic nature of human hearing. Moreover, new parameters are proposed for error evaluation in the time domain, defined in analogy with some of the well-known acoustical quality parameters of rooms. The proposed methodology achieved a computational gain of approximately 62%, in terms of processing-time reduction, compared to the classical signal processing method used to obtain auralizations. The applicability of the new method in auralization systems is validated by comparative analysis, which includes the generation of BIRs and the calculation of the binaural acoustic parameters (IACF and IACC), showing errors of very low magnitude.
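The committee idea above (one network per reception sector, or "bud") can be sketched independently of any HRIR data. In this hypothetical Python illustration each "network" is stood in for by a small polynomial fit so that the routing logic stays visible; the sector count, the toy target function, and all names are assumptions, not the thesis's configuration:

```python
import numpy as np

# Hypothetical setup: the horizontal plane is split into fixed sectors
# and a separate local model is trained per sector.
N_SECTORS = 8

def sector_of(azimuth_deg):
    """Map an azimuth in degrees to the index of its sector."""
    return int(azimuth_deg % 360 // (360 / N_SECTORS))

def train_committee(az, target, deg=3):
    """Fit one local model per sector; returns a list of coefficient arrays."""
    models = []
    for s in range(N_SECTORS):
        mask = np.floor(az % 360 / (360 / N_SECTORS)).astype(int) == s
        models.append(np.polyfit(az[mask], target[mask], deg))
    return models

def predict(committee, azimuth_deg):
    """Route the query to the model responsible for its sector."""
    return np.polyval(committee[sector_of(azimuth_deg)], azimuth_deg)

# Toy target standing in for some HRIR-derived quantity over azimuth:
az = np.arange(0.0, 360.0, 0.5)
target = np.sin(np.radians(az)) * (1 + 0.3 * np.cos(np.radians(3 * az)))
committee = train_committee(az, target)
print(round(predict(committee, 45.0), 3))
```

The point of the structure is that each local model stays small: accuracy over the full sphere comes from the partition, not from a single large network.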
1146

Algoritmos de inteligência computacional em instrumentação: uso de fusão de dados na avaliação de amostras biológicas e químicas / Computational intelligence algorithms for instrumentation: biological and chemical samples evaluation by using data fusion

Negri, Lucas Hermann 24 February 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents computational methods to process data from electrical impedance spectroscopy and fiber Bragg grating interrogation in order to characterize the evaluated samples. Estimation and classification systems were developed, using the signals in isolation or simultaneously. A new method to adjust the parameters of the functions that describe electrical impedance spectra, based on particle swarm optimization, is proposed; the method was also extended to correct distorted spectra. A benchmark of peak detection algorithms for fiber Bragg grating interrogation was performed, covering the algorithms currently used in the literature and evaluating their accuracy, precision, and computational performance. This comparative study was performed with both simulated and experimental data. No single algorithm proved optimal in all respects, but a suitable one can be chosen once the application requirements are known. A novel peak detection algorithm based on an artificial neural network is proposed, recommended when the analyzed spectrum is distorted or asymmetrical. Artificial neural networks and support vector machines were employed with the data processing algorithms to classify samples or estimate their characteristics in experiments with bovine meat, milk, and automotive fuel. The results show that the proposed data processing methods are useful for extracting the main information from the data and that the data fusion schemes employed met their initial classification and estimation objectives.
/ Neste trabalho são apresentados métodos computacionais para o processamento de dados produzidos em sistemas de espectroscopia de impedância elétrica e sensoriamento a redes de Bragg em fibra óptica com o objetivo de inferir características das amostras analisadas. Sistemas de estimação e classificação foram desenvolvidos, utilizando os sinais isoladamente ou de forma conjunta com o objetivo de melhorar as respostas dos sistemas. Propõe-se o ajuste dos parâmetros de funções que modelam espectros de impedância elétrica por meio de um novo algoritmo de otimização por enxame de partículas, incluindo a sua utilização na correção de espectros com determinadas distorções. Um estudo comparativo foi realizado entre os métodos correntes utilizados na detecção de pico de sinais resultantes de sensores em fibras ópticas, onde avaliou-se a exatidão, precisão e desempenho computacional. Esta comparação foi feita utilizando dados simulados e experimentais, onde percebeu-se que não há algoritmo simultaneamente superior em todos os aspectos avaliados, mas que é possível escolher o ideal quando se têm os requisitos da aplicação. Um método de detecção de pico por meio de uma rede neural artificial foi proposto, sendo recomendado em situações onde o espectro analisado possui distorções ou não é simétrico. Redes neurais artificiais e máquinas de vetor de suporte foram utilizadas em conjunto com os algoritmos de processamento com o objetivo de classificar ou estimar alguma característica de amostras em experimentos que envolveram carnes bovinas, leite bovino e misturas de combustível automotivo. Mostra-se neste trabalho que os métodos de processamento propostos são úteis para a extração das características importantes dos dados e que os esquemas utilizados para a fusão destes dados foram úteis dentro dos seus objetivos iniciais de classificação e estimação.
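One of the classical peak detectors typically included in such benchmarks is the threshold-centroid method, to which the abstract's conclusion about distorted spectra applies directly: the centroid is exact for symmetric peaks but biased for asymmetric ones. A hedged sketch (the spectrum parameters are invented; this is not the dissertation's code):

```python
import numpy as np

def centroid_peak(wavelengths, intensities, frac=0.5):
    """Estimate the peak wavelength as the intensity-weighted mean
    of the samples above `frac` of the maximum intensity.

    A classical sub-sample peak detector for FBG reflection spectra.
    """
    wavelengths = np.asarray(wavelengths, float)
    intensities = np.asarray(intensities, float)
    mask = intensities >= frac * intensities.max()
    w = intensities[mask]
    return float(np.sum(wavelengths[mask] * w) / np.sum(w))

# Simulated symmetric FBG reflection peak centred at 1550.12 nm:
wl = np.linspace(1549.5, 1550.7, 601)
spec = np.exp(-(((wl - 1550.12) / 0.08) ** 2))

print(centroid_peak(wl, spec))  # close to 1550.12
```

On a distorted or asymmetric spectrum the same estimator drifts away from the true Bragg wavelength, which is the situation where the proposed ANN-based detector is recommended.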
1147

Modelo estimativo de movimento de pedestres baseado em sintaxe espacial, medidas de desempenho e redes neurais artificiais / An estimation model of pedestrian movement based on space syntax, performance measures, and artificial neural networks

Zampieri, Fabio Lúcio Lopes January 2006 (has links)
O movimento de pedestres está associado ao espaço em que ele acontece, de maneira local, onde cada calçada oferece vantagens físicas e de maneira global ao determinar rotas através dos caminhos da cidade. Entender como os pedestres escolhem as calçadas por onde se locomovem é essencial para determinar as características do ambiente necessárias aos espaços. Uma maneira de entender essas relações é através da criação de modelos urbanos, um modo de associar diretamente os atributos aos fenômenos. Buscou-se analisar metodologias utilizadas em modelos de pedestres, bem como novas tecnologias incorporadas a eles, para avaliar a movimentação peatonal urbana em áreas centrais de tecido tradicional. Dentre as metodologias observadas, aquelas que mais se adequaram para entender os fatores contidos no espaço urbano foram a sintaxe espacial e as medidas de desempenho dos passeios. A sintaxe se destaca por relacionar o efeito da malha urbana como indutora do movimento de pedestres, e as medidas de desempenho por criarem maneiras de avaliar a qualidade do passeio. Esta pesquisa procura compatibilizar esses dois métodos de abordar o movimento para descrever e compreender as relações entre o espaço e o fluxo de pedestres na área central da cidade de Santa Maria-RS. As variáveis do espaço urbano foram processadas com as redes neurais artificiais, uma tecnologia inovadora com muito potencial na área de modelagem urbana, por sua aptidão de aprendizado a partir de exemplos - fenômenos que não possuem regras explícitas - e processamento em paralelo dos dados - todas as variáveis se influenciam ao mesmo tempo para resultar no fenômeno estudado. Os resultados obtidos mostraram-se pertinentes às bases teóricas e contribuem para a explicação do movimento natural em cidades. / Pedestrian movement is associated with the space in which it happens: locally, where each sidewalk offers physical advantages, and globally, in the determination of routes through the city's roads.
To understand how the pedestrians choose the sidewalks where they will move around is essential to determine the ambient characteristics that are necessary on the spaces. A way of understanding these relations is by creating urban models, a way of associating directly the attributes to the phenomena. It was tried to analyze methodologies used in pedestrians' models, as well as new technologies incorporated to them, to evaluate the urban pedestrian movement at central areas of the traditional cities. Among the observed methodologies, those which were more appropriated to understand the factors contained in the urban space were the spatial syntax and the measures of sidewalks performance. The syntax stands out by relating the effect of the urban grid as the factor that induces the pedestrians’ movement and the performance measures because they create forms of evaluating the sidewalk’s quality. This research attempts to make compatible those two methods of approaching the movement to describe and to understand the relations between the space and the pedestrians' flow in the central area of Santa Maria-RS The urban space variables were processed with the artificial neural networks, an innovative technology with a lot of potential in the urban modeling area, on account of its learning aptitude starting from examples - phenomena that don't have explicit rules - and the parallel processing of the data - all the variables influence each other at the same time to result in the studied phenomenon. The obtained results were shown pertinent to the theoretical bases and they contribute to the explanation of the natural movement in cities. The results were shown pertinent to the theoretical bases and they contribute to explaining the natural movement in the cities.
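The parallel-processing property the abstract highlights — all sidewalk variables influencing the predicted flow at once — can be sketched with a small feedforward network. This is a minimal illustration, not the thesis's actual model: the three input attributes (spatial integration, sidewalk width, pavement quality), the synthetic target, and the network size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 200 street segments, 3 hypothetical attributes
# (spatial integration, sidewalk width, pavement quality), each in [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Assumed ground truth: flow rises nonlinearly with all three attributes.
y = np.tanh(2.0 * X @ np.array([0.5, 0.3, 0.2]))

# One hidden layer: 3 inputs -> 8 tanh units -> 1 output.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    return H, (H @ W2 + b2).ravel()   # predicted flow score

losses = []
lr = 0.1
for _ in range(500):                  # plain full-batch gradient descent
    H, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss.
    g_out = (2.0 / len(y)) * err[:, None]      # gradient at the output
    gW2 = H.T @ g_out; gb2 = g_out.sum(0)
    g_hid = (g_out @ W2.T) * (1 - H ** 2)      # back through tanh
    gW1 = X.T @ g_hid; gb1 = g_hid.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Training "learns from examples" in exactly the sense the abstract describes: no explicit rule linking the attributes to flow is coded anywhere; the weights absorb the relation from the data.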
1148

Ensaios sobre previsão de inflação e análise de dados em tempo real no Brasil

Cusinato, Rafael Tiecher January 2009 (has links)
This thesis presents three essays on inflation forecasting and real-time data analysis in Brazil. Using a Phillips curve, the first essay proposes an “evolutionary model” to forecast Brazilian inflation. The evolutionary model combines a non-linear model (itself a combination of three artificial neural networks, ANNs) with a linear model (which also serves as the benchmark for comparison). Some parameters of the evolutionary model, including the combination weights, evolve over time according to adjustments defined by three algorithms that evaluate out-of-sample errors. The ANNs were estimated with a hybrid approach based on a genetic algorithm (GA) and a Nelder-Mead simplex algorithm. In an out-of-sample forecasting experiment at horizons of 3, 6, 9 and 12 steps ahead, the evolutionary model was compared with the benchmark linear model under the root mean squared error (RMSE) and mean absolute error (MAE) criteria; it outperformed the linear model at every horizon under both criteria. The second essay is motivated by the recent literature on real-time data analysis, which has shown that several measures of economic activity undergo substantial data revisions over time, imposing important limitations on their use. We built a real-time GDP data set for the Brazilian economy and assessed the extent to which GDP growth and output-gap series are revised over time. We show that revisions to quarter-on-quarter GDP growth are economically relevant, although they lose part of their importance as the aggregation period increases (for example, four-quarter growth). To analyze output-gap revisions, we applied four detrending methods: the Hodrick-Prescott filter, a linear trend, a quadratic trend, and the Harvey-Clark unobserved-components model. All methods produced revisions of economically relevant magnitude. In general, both GDP data revisions and the low precision of end-of-sample estimates of the output trend proved to be relevant sources of output-gap revisions. The third essay is also a real-time data study, focused on industrial production (IP) data and on estimates of the industrial-production gap. We show that revisions to month-on-month IP growth and to its quarterly moving average are economically relevant, although the IP growth revisions become less important as the aggregation period increases (for example, twelve-month growth). To analyze IP-gap revisions, we applied three detrending methods: the Hodrick-Prescott filter, a linear trend, and a quadratic trend. All methods produced revisions of economically relevant magnitude. In general, both IP data revisions and the low precision of end-of-sample IP trend estimates proved to be relevant sources of IP-gap revisions, although the results suggest some predominance of revisions arising from end-of-sample imprecision.
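The Hodrick-Prescott filter used in the second and third essays has a closed-form solution: the trend minimizes the sum of squared deviations from the series plus a penalty on its second differences, which reduces to one linear system. The sketch below is a textbook implementation, not code from the thesis; the series and sample length are invented.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: solve (I + lam * D'D) tau = y, where D is
    the (n-2) x n second-difference operator. lam = 1600 is the
    conventional smoothing parameter for quarterly data."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend          # trend and cyclical "gap" component

# Sanity check: a linear series has zero second differences, so its HP
# trend is the series itself and the estimated gap is zero throughout.
y = np.linspace(100.0, 110.0, 40)
trend, gap = hp_filter(y)
print(float(np.max(np.abs(gap))))
```

The end-of-sample imprecision the essays document can be seen by re-running `hp_filter` on successively longer samples: the trend estimate near the last observation keeps moving as new data arrive, even without any data revision.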
1149

Elimination of systematic faults and maintenance uncertainties on the City of Johannesburg's roads Intelligent Transport Systems

Makhwathana, Phalanndwa Lawrence 02 1900 (has links)
Road transport mobility remains a challenge for the City of Johannesburg (CoJ) economy in general. Traffic signals and their remote monitoring and control systems are the currently implemented Intelligent Transport Systems (ITS), but daily systematic faults and maintenance uncertainties in these systems reduce the effectiveness of traffic engineers' intersection-optimization techniques. Unreliable electrical power supply to the ITS is one challenge: intermittent power cuts and fluctuations create uncertainty about traffic-control-system faults. Another contributing factor is the communication channel, which relies on traditional modems that are not dependable. Fault reporting through customer complaints and these unreliable remote monitoring systems renders maintenance ineffective. This dissertation examines the factors behind the faults and uncertainties. The proposed solution addresses the key ITS concerns: a performance-optimization technique for the electrical power source, road traffic control system compatibility, and communications systems / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
1150

Development of an intelligent analytics-based model for product sales optimisation in retail enterprises

Matobobo, Courage 03 July 2016 (has links)
A retail enterprise is a business organisation that sells goods or services directly to consumers for personal use. Retail enterprises such as supermarkets let customers move around the shop, picking items from the shelves and placing them in their baskets; each customer's basket is captured in transactional systems. In this research study, retail enterprises were classified into two main categories: centralised and distributed. A distributed retail enterprise assigns decision rights to the branches or groups closest to the data collection, while in a centralised retail enterprise the branches' decision rights are concentrated in a single authority. It is difficult for retail enterprises to ascertain customer preferences merely by observing transactions, and this has led to quantifiable losses. Although some enterprises implemented classical business models to address these challenges, they still lacked analytics-based marketing programmes to gain competitive advantage. This research study develops an intelligent analytics-based (ARANN) model for both distributed and centralised retail enterprises in the cross-demographics of a developing country. The ARANN model is built on association rules (AR), complemented by artificial neural networks (ANN) to strengthen the results of the two individual models. The ARANN model was tested on real-life, publicly available transactional datasets for the generation of product-arrangement sets. In centralised retail enterprises, data from the different branches was integrated and pre-processed to remove impurities before being fed into the ARANN model; in distributed retail enterprises, data was collected and cleaned branch by branch, and each branch's cleaned data was fed into the ARANN model. Experimental results show that the ARANN model generates improved product-arrangement sets, raising the confidence of retail-enterprise decision-makers in competitive environments. The ARANN model was also observed to perform faster in distributed than in centralised retail enterprises. This research is beneficial for sustainable businesses, and retail enterprises are therefore encouraged to consider its results. / Computing / M Sc. (Computing)
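The association-rule half of an ARANN-style pipeline rests on two basket statistics: support (how often an itemset occurs) and confidence (how often the consequent accompanies the antecedent). The sketch below illustrates only that half — the neural-network re-ranking step is omitted, and the items, baskets, and thresholds are invented for the example.

```python
from itertools import combinations

# Five hypothetical supermarket baskets.
baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

def support(itemset):
    """Fraction of baskets containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """Estimated P(consequent in basket | antecedent in basket)."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

# Enumerate single-item pairs that clear a minimum-support threshold;
# each surviving pair yields a candidate rule a -> c.
items = sorted(set().union(*baskets))
rules = []
for a, c in combinations(items, 2):
    s = support({a, c})
    if s >= 0.4:                      # invented minimum-support cutoff
        rules.append((a, c, s, confidence({a}, {c})))

for a, c, s, conf in rules:
    print(f"{a} -> {c}: support={s:.2f}, confidence={conf:.2f}")
```

Rules surviving such thresholds are the natural input to a second-stage model (in the ARANN case, an ANN) that scores or re-ranks the candidate product arrangements.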
