361

A simplified approach in FAVAR estimation

Lien Oskarsson, Mathias, Lin, Christopher January 2018
In the field of empirical macroeconomics, factor-augmented vector autoregressive (FAVAR) models have become a popular tool for explaining how economic variables interact over time. The FAVAR approach rests on a data-reduction step in which factors are estimated and then employed in a vector autoregressive model. This paper studies alternative factor estimation methods. More precisely, we compare the widely used principal component method with the simpler common correlated effects estimation. Results show little divergence between the two factor estimation methods, indicating that the two approaches are largely interchangeable.
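A minimal sketch of the comparison described above, assuming synthetic data: a latent factor is estimated from a panel once via principal components and once as a cross-sectional average (the building block of common correlated effects estimation), and each estimate is fed into a VAR alongside an observed variable. All names and dimensions are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, N = 200, 50  # time periods, series in the macro panel

# Synthetic panel driven by one latent factor plus idiosyncratic noise.
factor = np.cumsum(rng.standard_normal(T)) * 0.1
loadings = rng.uniform(0.5, 1.5, N)
panel = np.outer(factor, loadings) + 0.5 * rng.standard_normal((T, N))
policy_rate = 0.8 * factor + 0.2 * rng.standard_normal(T)  # observed FAVAR variable

z = (panel - panel.mean(0)) / panel.std(0)

# Factor estimate 1: first principal component of the standardized panel.
f_pca = PCA(n_components=1).fit_transform(z).ravel()

# Factor estimate 2: cross-sectional average, the proxy used by CCE estimation.
f_cce = z.mean(axis=1)

# Feed each factor estimate, together with the observed variable, into a VAR.
for name, f in [("PCA", f_pca), ("CCE", f_cce)]:
    res = VAR(np.column_stack([f, policy_rate])).fit(maxlags=2)
    print(name, "AIC:", round(res.aic, 3))

# The two estimates are near-perfectly correlated in this one-factor setting,
# mirroring the low divergence reported above.
print("corr(f_pca, f_cce):", round(abs(np.corrcoef(f_pca, f_cce)[0, 1]), 3))
```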
362

Use of multivariate statistical methods for control of chemical batch processes

Lopez Montero, Eduardo January 2016
In order to meet tight product quality specifications for chemical batch processes, it is vital to monitor and control product quality throughout the batch duration. However, the frequent lack of in situ sensors for continuous monitoring of batch product quality complicates the control problem and calls for novel control approaches. This thesis focuses on the study and application of multivariate statistical methods to control product quality in chemical batch processes. These multivariate statistical methods can be used to identify data-driven prediction models that can be integrated within a model predictive control (MPC) framework. The ideal MPC control strategy achieves end-product quality specifications by performing trajectory tracking during the batch operating time. However, due to the lack of in situ sensors, measurements of product quality are usually obtained by laboratory assays and are, therefore, inherently intermittent. This thesis proposes a new approach to realise trajectory tracking control of batch product quality in those situations where only intermittent measurements are available. The scope of this methodology consists of: 1) the identification of a partial least squares (PLS) model that works as an estimator of product quality, 2) the transformation of the PLS model into a recursive formulation utilising a moving window technique, and 3) the incorporation of the recursive PLS model as a predictor into a standard MPC framework for tracking the desired trajectory of batch product quality. The structure of the recursive PLS model allows a straightforward incorporation of process constraints in the optimisation process. Additionally, a method to incorporate a nonlinear inner relation within the proposed recursive PLS model is introduced. This nonlinear inner relation is a combination of feedforward artificial neural networks (ANNs) and linear regression. Nonlinear models based on this method can predict product quality of highly nonlinear batch processes and can, therefore, be used within an MPC framework to control such processes. The use of linear regression in addition to ANNs within the PLS model reduces the risk of overfitting and also reduces the computational effort of the optimisation carried out by the controller. The benefits of the proposed modelling and control methods are demonstrated using a number of simulated batch processes.
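A minimal sketch of the modelling step, assuming fabricated batch data: a PLS model maps a moving window of process measurements to product quality, which is the kind of estimator the MPC framework above would use as its internal predictor. All names, dimensions, and the window length are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_batches, n_samples = 40, 60   # historical batches, samples per batch
window = 5                      # moving-window length (illustrative)

# Fabricated process data: temperature trajectories and a quality variable
# that integrates recent temperature history (a crude stand-in for a reactor).
temp = 350 + np.cumsum(rng.standard_normal((n_batches, n_samples)), axis=1)
quality = np.array([[t[max(0, j - window):j + 1].mean() / 350.0
                     for j in range(n_samples)] for t in temp])

# Build the regressor matrix: each row is a window of past measurements.
X = np.array([t[j - window:j] for t in temp for j in range(window, n_samples)])
y = np.array([q[j] for q in quality for j in range(window, n_samples)])

pls = PLSRegression(n_components=2).fit(X, y)
print("R^2 on training windows:", round(pls.score(X, y), 3))

# In use, the controller would update this model as new lab assays arrive and
# invert it inside the MPC optimisation to track the quality target.
```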
363

Resting state functional connectivity in the default mode network and aerobic exercise in young adults

Goss, Andrew 12 July 2017
Around the world, Alzheimer's Disease (AD) is on the rise. Previous studies have shown that the default mode network (DMN) changes with AD progression as the disease erodes cortical areas. Aerobic exercise that significantly increases cardiorespiratory fitness may produce neuroprotective changes that delay AD. This study explores whether functional connectivity changes in the DMN can be seen in a young adult sample using group independent component analysis through FSL MELODIC. The sample of 19 young adults was selected from a larger study at the Brain Plasticity and Neuroimaging Laboratory at Boston University. The participants engaged in a twelve-week exercise intervention in either a strength training or aerobic training group. They also completed pre-intervention and post-intervention resting-state fMRI scans to evaluate change in functional connectivity in the default mode network. Cardiorespiratory fitness was assessed using a modified Balke protocol, with pre-intervention and post-intervention VO2 max percentiles as the fitness measure. Through two repeated-measures ANOVA analyses, this study found no significant increase in mean functional connectivity or cardiorespiratory fitness in the young adult sample. While improvements in mean VO2 max percentile and functional connectivity might have been detected with a larger sample, this study adds to the literature by suggesting that if fitness does not improve significantly, neither will functional connectivity in the default mode network.
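For readers unfamiliar with the analysis used here, a minimal sketch of a repeated-measures ANOVA on pre/post connectivity values, using statsmodels on fabricated toy data (the study's real data are not reproduced):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
n = 19  # participants, as in the study

# Toy long-format data: one mean DMN connectivity value per subject per session.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "session": np.tile(["pre", "post"], n),
    "connectivity": rng.normal(0.5, 0.1, 2 * n),
})

# The within-subject factor "session" tests for a pre-to-post change.
print(AnovaRM(df, depvar="connectivity", subject="subject",
              within=["session"]).fit())
```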
364

Risk measurement for meat processing companies

Shirassu, Fabio Koiti 06 February 2015
This paper presents a risk measurement and management methodology for meat processing companies. Earnings at Risk (EaR) is used with a top-down approach that expresses income variations as a function of market and idiosyncratic explanatory variables. After eliminating multicollinearity among those variables through Principal Component Analysis (PCA), we analyze how the new EaR model behaves relative to the usual multiple linear regression approach. Dummy variables are included in the estimation of future results for meat processing companies, representing the occurrence of diseases affecting cattle and the withdrawal of economic embargoes by importing countries during the period. We find that the dummy variables do not contribute to determining EaR, and the evidence does not support the conclusion that the PCA-based EaR model performs better with fewer variables while retaining the original variance and statistical significance.
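A minimal sketch of the top-down EaR calculation described above, on fabricated data: earnings are regressed on principal components of candidate risk factors, and EaR is read off a lower quantile of the model-implied earnings distribution. The factor names, quantile level, and all numbers are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
T = 60  # quarters of history (illustrative)

# Fabricated risk factors: FX rate, cattle price, demand index, etc.
factors = rng.standard_normal((T, 5))
earnings = factors @ np.array([2.0, -1.5, 0.8, 0.0, 0.0]) + rng.normal(0, 1.0, T)

# Replace correlated factors with orthogonal principal component scores.
scores = PCA(n_components=3).fit_transform(factors)
model = LinearRegression().fit(scores, earnings)
residual_sd = np.std(earnings - model.predict(scores))

# EaR at 95%: the 5th percentile of next-quarter earnings implied by the model,
# simulated here under neutral (zero) factor scores plus residual noise.
next_q = model.intercept_ + rng.normal(0, residual_sd, 5000)
print("EaR (95%):", round(np.percentile(next_q, 5), 2))
```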
365

Multivariate approaches to variable selection in order to characterize medicines

Yamashita, Gabrielli Harumi January 2015
The investigation of the authenticity of drugs has relied on profile analysis by infrared spectroscopy (ATR-FTIR). However, such analysis typically yields a large number of correlated and noisy variables (wavelengths), which requires techniques for selecting the most informative and relevant variables, making predictive and exploratory models more robust. This thesis tests approaches to variable selection aimed at clustering and classifying drug samples. It first derives three variable importance indices from Principal Component Analysis (PCA) parameters; these indices guide an iterative process of variable elimination, with clustering performance on the reduced sets assessed via the Silhouette Index. Next, the Genetic Algorithm (GA) is combined with the k nearest neighbor (kNN) classification technique to select the subset of variables yielding the highest average accuracy for classifying samples into authentic or counterfeit categories. Finally, the ATR-FTIR data are split into intervals to select the most relevant spectroscopic regions for sample classification via kNN; the GA is then applied to refine the previously retained intervals. The proposed variable selection methods led to more accurate clustering and classification procedures based on a small subset of variables.
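A compact sketch of the GA + kNN selection step described above, on fabricated spectra. The GA operators are deliberately minimal, and every parameter (population size, mutation rate, number of generations) is illustrative rather than taken from the thesis.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p = 120, 100  # samples, wavelengths (real ATR-FTIR data has far more)

# Fabricated spectra: classes differ only in a small band of wavelengths.
y = rng.integers(0, 2, n)                 # 0 = authentic, 1 = counterfeit
X = rng.standard_normal((n, p))
X[:, 40:50] += y[:, None] * 1.5           # informative region

def fitness(mask):
    """CV accuracy of kNN restricted to the selected wavelengths."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=5).mean()

# Minimal GA: random init, tournament selection, uniform crossover, bit flips.
pop = rng.random((30, p)) < 0.2
for gen in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[[max(rng.choice(len(pop), 3), key=lambda i: scores[i])
                   for _ in range(len(pop))]]
    cross = rng.random((len(pop), p)) < 0.5
    children = np.where(cross, parents, parents[::-1])
    pop = children ^ (rng.random((len(pop), p)) < 0.01)   # mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected wavelengths:", np.flatnonzero(best)[:10], "...")
print("CV accuracy:", round(fitness(best), 3))
```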
366

Image-based modeling and prediction of non-stationary ground motions

Dak Hazirbaba, Yildiz 01 May 2015
Nonlinear dynamic analysis is a required step in the seismic performance evaluation of many structures. Performing such an analysis requires input ground motions, which are often obtained through simulations due to the lack of sufficient records representing a given scenario. As seismic ground motions are characterized by time-varying amplitude and frequency content, and the response of nonlinear structures is sensitive to temporal variations in the seismic energy input, ground motion non-stationarities should be taken into account in simulations. This work describes a nonparametric approach for modeling and prediction of non-stationary ground motions. Using Relevance Vector Machines, a regression model is identified that takes as input a set of seismic predictors and produces as output the expected evolutionary power spectral density, conditioned on those predictors. A demonstrative example is presented in which recorded and predicted ground motions are compared in the time, frequency, and time-frequency domains. Analysis results indicate a reasonable match between the recorded and predicted quantities.
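A minimal sketch of the regression step on fabricated data. scikit-learn does not ship a Relevance Vector Machine, so its ARDRegression, a closely related sparse Bayesian regressor, is used here as a stand-in; predictor names, coefficients, and the single-cell target are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
n = 300  # recorded motions in the training set (illustrative)

# Fabricated seismic predictors: magnitude, log distance, site stiffness proxy.
M = rng.uniform(5.0, 7.5, n)
logR = np.log(rng.uniform(5.0, 100.0, n))
vs30 = rng.uniform(200.0, 800.0, n)
X = np.column_stack([M, logR, vs30])

# Target: log spectral power at one (time, frequency) cell of the evolutionary
# PSD; a full model would fit one such regression per cell.
y = 1.2 * M - 1.8 * logR - 0.001 * vs30 + rng.normal(0, 0.3, n)

model = ARDRegression().fit(X, y)
mean, sd = model.predict(np.array([[6.5, np.log(20.0), 400.0]]), return_std=True)
print(f"predicted log-PSD: {mean[0]:.2f} +/- {sd[0]:.2f}")
```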
367

Real-time blind source separation using FPGA

Fratini Filho, Oswaldo January 2017
Advisor: Prof. Dr. Ricardo Suyama / Master's dissertation - Universidade Federal do ABC, Graduate Program in Information Engineering, 2017. / Independent Component Analysis (ICA) is one of the most widely used statistical methods for solving the problem of Blind Source Separation (BSS). Together with other signal processing methods, it faces new challenges as the number of signal sources and samples available for processing grows, and it underlies applications with ever-increasing performance requirements. The aim of this work is to study the FastICA and Joint Approximate Diagonalization of Eigen-matrices (JADE) algorithms, implement them in a Field-Programmable Gate Array (FPGA), and analyze their behavior as the number of mixture samples and the number of iterations or updates vary. Other studies have demonstrated the feasibility of implementing such algorithms in FPGA, but they say little about the methodology used and about implementation details such as the number of samples used, the rationale for the chosen numerical representation, and the throughput obtained. The analysis carried out in this work demonstrates, through simulation, the behavior of the core of each algorithm when implemented with different numerical representations, namely single-precision floating point (32 bits) and fixed point, with different numbers of samples and sources to be estimated. These implementations were verified to solve the BSS problem with good accuracy when compared with implementations of the same algorithms that employ a double-precision floating-point representation (64 bits). Using Simulink and Altera's DSP Builder library to implement the models of each algorithm, it was possible to analyze other important aspects, in order to demonstrate that such implementations can serve real-time, high-performance applications on low-cost FPGAs: the FPGA resources required by each algorithm (chiefly minimizing the use of DSP blocks), the latency, and the maximization of processing throughput.
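A small sketch of the precision comparison described above, using scikit-learn's FastICA on fabricated mixtures: the same separation is run in float64 and float32, a software stand-in for the thesis's floating- vs. fixed-point comparison on FPGA. Sources, mixing matrix, and sample count are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
T = 2000  # samples per mixture (illustrative)

# Two fabricated sources (sinusoid + sawtooth) and a random mixing matrix.
t = np.linspace(0, 8, T)
S = np.column_stack([np.sin(2 * np.pi * t), 2 * (t % 1) - 1])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

for dtype in (np.float64, np.float32):
    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X.astype(dtype))
    # Best absolute correlation of each recovered component with a true source.
    corr = np.abs(np.corrcoef(S_hat.T, S.T)[:2, 2:])
    print(dtype.__name__, "recovery correlations:", corr.max(axis=1).round(4))
```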
368

Distinct Feature Learning and Nonlinear Variation Pattern Discovery Using Regularized Autoencoders

January 2016
Feature learning and the discovery of nonlinear variation patterns in high-dimensional data is an important task in many problem domains, such as imaging, streaming data from sensors, and manufacturing. This dissertation presents several methods for learning and visualizing nonlinear variation in high-dimensional data. First, an automated method for discovering nonlinear variation patterns using deep learning autoencoders is proposed. The approach provides a functional mapping from a low-dimensional representation to the original spatially-dense data that is both interpretable and efficient with respect to preserving information. Experimental results indicate that deep learning autoencoders outperform manifold learning and principal component analysis in reproducing the original data from the learned variation sources. A key issue in using autoencoders for nonlinear variation pattern discovery is to encourage the learning of solutions where each feature represents a unique variation source, which we define as distinct features. This problem of learning distinct features is also referred to as disentangling factors of variation in the representation learning literature. The remainder of this dissertation highlights and provides solutions for this important problem. An alternating autoencoder training method is presented, and a new measure motivated by orthogonal loadings in linear models is proposed to quantify feature distinctness in the nonlinear models. Simulated point cloud data and handwritten digit images illustrate that standard training methods for autoencoders consistently mix the true variation sources in the learned low-dimensional representation, whereas the alternating method produces solutions with more distinct patterns. Finally, a new regularization method for learning distinct nonlinear features using autoencoders is proposed. Motivated in part by the properties of linear solutions, a series of learning constraints are implemented via regularization penalties during stochastic gradient descent training. These include the orthogonality of tangent vectors to the manifold, the correlation between learned features, and the distributions of the learned features. This regularized learning approach yields low-dimensional representations which can be better interpreted and used to identify the true sources of variation impacting a high-dimensional feature space. Experimental results demonstrate the effectiveness of this method for nonlinear variation pattern discovery on both simulated and real data sets. / Doctoral Dissertation, Industrial Engineering, 2016
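A minimal sketch of the regularization idea, assuming PyTorch is available: an autoencoder whose loss adds a penalty on correlation between learned features, one of the constraints listed above. Architecture sizes, the penalty weight, and the data are all illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 100)  # fabricated high-dimensional data

enc = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
dec = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 100))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

lam = 0.1  # weight of the distinctness penalty (illustrative)
for step in range(200):
    z = enc(X)
    recon_loss = ((dec(z) - X) ** 2).mean()
    # Penalize off-diagonal entries of the feature correlation matrix so each
    # learned feature captures a distinct variation source.
    zc = (z - z.mean(0)) / (z.std(0) + 1e-8)
    corr = (zc.T @ zc) / len(zc)
    distinct_loss = (corr - torch.eye(z.shape[1])).pow(2).sum()
    loss = recon_loss + lam * distinct_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final reconstruction MSE:", float(recon_loss))
```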
370

Optical quantification of carbohydrates and ethanol in beer wort

Éverton Sérgio Estracanholli 08 October 2012
This study is fundamentally a proof of concept. By combining three techniques, we aim to develop a new method for monitoring beer wort production and fermentation during brewing. The principle is based on spectral analysis, using Fourier Transform Infrared (FTIR) spectroscopy to collect absorption data from beer wort samples collected during brewing. The data are refined by Principal Component Analysis (PCA), a statistical method that reduces the number of variables, and an Artificial Neural Network (ANN) then quantifies the carbohydrate and ethanol concentrations. Such physical-chemical measurements are expected to allow both a reduction of mistakes during beer processing and the optimization of the intrinsic enzymatic reactions of its main stages, enhancing the brewing process. Optical absorption techniques associated with neural processing offer great advantages, mainly because they are easily adaptable to industrial equipment, providing answers at short intervals with high sensitivity and specificity.
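A minimal sketch of the PCA + ANN quantification chain described above, on fabricated spectra; scikit-learn's MLPRegressor stands in for the neural network, and the peak positions, concentration ranges, and units are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n, p = 200, 400  # samples, FTIR wavenumber channels (illustrative)

# Fabricated spectra: two Gaussian absorption bands whose heights scale with
# maltose and ethanol concentration, plus measurement noise.
wavenumber = np.linspace(0, 1, p)
band = lambda c: np.exp(-((wavenumber - c) / 0.03) ** 2)
maltose = rng.uniform(0, 10, n)   # arbitrary units
ethanol = rng.uniform(0, 5, n)
spectra = (np.outer(maltose, band(0.3)) + np.outer(ethanol, band(0.7))
           + rng.normal(0, 0.5, (n, p)))
targets = np.column_stack([maltose, ethanol])

# PCA compresses the spectra; the ANN maps PC scores to both concentrations.
model = make_pipeline(PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                                   random_state=0))
model.fit(spectra[:150], targets[:150])
print("held-out R^2:", round(model.score(spectra[150:], targets[150:]), 3))
```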
