411

On the nonnegative least squares

Santiago, Claudio Prata. January 2009 (has links)
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Earl Barnes; Committee Member: Arkadi Nemirovski; Committee Member: Faiz Al-Khayyal; Committee Member: Guillermo H. Goldsztein; Committee Member: Joel Sokol. Part of the SMARTech Electronic Thesis and Dissertation Collection.
412

Spatial econometrics models, methods and applications

Tao, Ji, January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains x, 140 p. Includes bibliographical references (p. 137-140). Available online via OhioLINK's ETD Center.
413

Fast Rates for Regularized Least-squares Algorithm

Caponnetto, Andrea, Vito, Ernesto De 14 April 2005 (has links)
We develop a theoretical analysis of the generalization performance of regularized least-squares on reproducing kernel Hilbert spaces for supervised learning. We show that the effective dimension of an integral operator plays a central role in defining a criterion for choosing the regularization parameter as a function of the number of samples. A minimax analysis is performed which shows the asymptotic optimality of this criterion.
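As a rough illustration of the setting (not the thesis's own construction), the sketch below fits regularized least-squares with a Gaussian kernel and shrinks the regularization parameter with the sample size; the Gaussian kernel, the toy data, and the n**-0.5 schedule are assumptions for illustration, not the effective-dimension criterion derived in the work.

```python
import numpy as np

def kernel_ridge_fit(X, y, lam, gamma=1.0):
    """Regularized least-squares in an RKHS with a Gaussian kernel.
    Solves (K + n*lam*I) alpha = y for the coefficient vector alpha."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return alpha, K

# Toy regression problem; lam shrinks with n (illustrative n**-0.5 schedule).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
alpha, K = kernel_ridge_fit(X, y, lam=len(X) ** -0.5)
print("training RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))
```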
414

Application of control, modelling and optimisation to biomaterials manufacturing

Onel, Oliver January 2013 (has links)
This thesis presents the work conducted during a three-year research project in the field of Control Systems and Biomaterials Engineering. The findings are presented over seven chapters, starting with a thorough literature review of existing methods and key technologies, then highlighting the problems with current methods and how they have been overcome. The data is presented in tables, figures and photographs to aid understanding. The research focuses on two relatively new manufacturing methods in the field of Tissue Engineering. Both methods are used to create materials for the regeneration of human and animal tissue, with the aim of replacing current surgical methods. The methods are viewed from a control systems perspective, and improvements have been made through the implementation of new technologies and methods. Additionally, further advancements are presented in the theoretical modelling of control systems, where the shortfalls of existing modelling methods are highlighted and solutions proposed.
415

On the development of control systems technology for fermentation processes

Loftus, John January 2017 (has links)
Fermentation processes play an integral role in the manufacture of pharmaceutical products. The Quality by Design initiative, combined with Process Analytical Technologies, aims to facilitate the consistent production of high quality products in the most efficient and economical way. The ability to estimate and control product quality from these processes is essential in achieving this aim. Large historical datasets are commonplace in the pharmaceutical industry, and multivariate methods based on PCA and PLS have been successfully used in a wide range of applications to extract useful information from such datasets. This thesis has focused on the development and application of novel multivariate methods to the estimation and control of product quality from a number of processes. The document is divided into four main parts. First, the related literature and underlying mathematical techniques are summarised. Following this, the three main technical areas of work are presented. The first of these relates to the development of a novel method for estimating the quality of products from a proprietary process using PCA. The ability to estimate product quality is useful for identifying production steps that are potentially problematic and also increases process efficiency by ensuring that any defective products are detected before they undergo any further processing. The proposed method is simple and robust and has been applied to two separate case studies, the results of which demonstrate the efficacy of the technique. The second area of work concentrates on the development of a novel method of identifying the operational phases of batch fermentation processes and is based on PCA and associated statistics. Knowledge of the operational phases of a process can be beneficial from a monitoring and control perspective and allows a process to be divided into phases that can be approximated by a linear model. The devised methodology is applied to two separate fermentation processes and results show the capability of the proposed method. The third area of work focuses on a performance evaluation of two multivariate algorithms, PLS and EPLS, in controlling the end-point product yield of fermentation processes. Control of end-point product quality is of crucial importance in many manufacturing industries, such as the pharmaceutical industry. Developing a controller based on historical and identification process data is attractive due to the simplicity of modelling and the increasing availability of process data. The methodology is applied to two case studies and its performance is evaluated. From both a prediction and a control perspective, EPLS is seen to outperform PLS, which is important when modelling data is limited.
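As background for this entry, below is a minimal PLS1 regression sketch (NIPALS form) of the kind the thesis evaluates for end-point prediction; the toy data, variable counts and component number are assumptions, and the EPLS variant compared in the thesis is not shown.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via the NIPALS algorithm (illustrative sketch).
    X: (n_samples, n_vars) process measurements; y: (n_samples,) end-point yield."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # scores
        p = Xk.T @ t / (t @ t)            # X loadings
        c = yk @ t / (t @ t)              # y loading
        Xk = Xk - np.outer(t, p)          # deflate X and y
        yk = yk - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)  # regression coefficients

# Toy usage: 30 batches, 5 process variables, yield driven by the first two.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
yld = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.standard_normal(30)
print("coefficient vector:", pls1_fit(X, yld, n_components=2))
```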
416

Characterization of the acoustic properties of cementitious materials

Sun, Ruting (Michelle) January 2017 (has links)
The primary aim of this research was to investigate the fundamental acoustic properties of several cementitious materials, the influence of mix design parameters/constituents, and the effect of the physical and mechanical properties of cementitious concrete/mortar on the acoustic properties of the material. The main objectives were: to understand the mechanism of sound production in musical instruments and the effects of the material(s) employed on the sound generated; to build upon previous research regarding the selection of the tested physical/mechanical and acoustic properties of cementitious materials; to draw conclusions regarding the effect of different constituents, mix designs and material properties on the acoustic properties of the material; and to build a model of the relationship between the acoustic properties of a cementitious material and its mix design via its physical/mechanical properties. To meet this aim, the research employed a semi-experimental (semi-analytical) method: two experimental programmes were performed (I and II), and a mathematical optimization technique (the least squares method) was then used to construct an optimized mathematical model matching the experimental data. In Experimental Programme I, six constituents/factors were investigated with respect to their effect on the physical/mechanical and acoustic properties: cementitious additives (fly ash, silica fume and GGBS), superplasticizer, and basic mix design parameters (w/c ratio and sand grading). Eleven properties were tested for each mortar type: eight physical/mechanical properties (compressive strength, density, hardness, flexural strength, flexural modulus, elastic modulus, dynamic modulus and slump) and three acoustic properties (resonant frequency, speed of sound and quality factor, i.e. internal damping). For each type of mortar, three cubes, three prisms and three cylinders were produced. In Experimental Programme I, 20 mix designs were investigated, 180 specimens produced, and 660 test results recorded. After analysing the results of Experimental Programme I, fly ash (FA), w/b ratio and b/s ratio were selected as the constituents/factors with the greatest influence on the acoustic properties of the material; these were subsequently investigated in detail in Experimental Programme II, where various combinations of FA replacement level, w/b ratio and b/s ratio (three factors) yielded 1122 test results. The relationship between these three factors and the selected 11 properties was then determined. Using regression analysis and the least squares optimization technique, the relationship between the physical/mechanical properties and the acoustic properties was then determined. Across both experimental programmes, 54 mix designs were investigated in total, with 486 specimens produced and tested and 1782 test results recorded. Finally, based on well-known existing relationships (including the model relating compressive strength and elastic modulus, and the model relating elastic and dynamic modulus) and newly regressed models for FA-mortar (the relationship between compressive strength and constituents, which is unique to each mix), an optimized objective function relating the acoustic properties (speed of sound and damping ratio) to the mix design (proportions of constituents) was constructed via the physical/mechanical properties.
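To illustrate the least squares step named above, a minimal sketch: an ordinary least-squares fit of one acoustic property against two physical/mechanical properties. All variable names and values here are hypothetical, not data from the thesis.

```python
import numpy as np

# Hypothetical data: compressive strength (MPa) and density (kg/m^3) as
# predictors; measured speed of sound (m/s) as target. Values illustrative only.
strength = np.array([32.0, 41.5, 38.2, 55.1, 47.3])
density  = np.array([2210., 2290., 2250., 2380., 2330.])
speed    = np.array([3610., 3790., 3700., 3980., 3870.])

# Design matrix with an intercept column; solve min ||A b - speed||^2.
A = np.column_stack([np.ones_like(strength), strength, density])
b, residuals, rank, sv = np.linalg.lstsq(A, speed, rcond=None)
print("intercept, strength coeff, density coeff:", b)
```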
417

Synthetic Aperture Radar Image Formation Via Sparse Decomposition

January 2011 (has links)
abstract: Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms, modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation. / Dissertation/Thesis / M.S. Electrical Engineering 2011
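As a generic illustration of sparse reconstruction from a sparsely sampled aperture (not the two algorithms developed in the thesis), the sketch below runs iterative soft-thresholding (ISTA) on a real-valued toy problem; a real SAR aperture would be complex-valued, and the sensing matrix here is a stand-in.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding (ISTA) for min 0.5*||A x - y||^2 + lam*||x||_1.
    Generic sparse-decomposition sketch, not the thesis's algorithms."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Sparse aperture emulated by a random subsampled measurement operator.
rng = np.random.default_rng(1)
n, m = 128, 48                               # signal length, kept samples
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=0.05)
print("support found:", np.flatnonzero(np.abs(x_hat) > 0.05))  # typically the 5 true indices
```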
418

Multi-Label Dimensionality Reduction

January 2011 (has links)
abstract: Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing the irrelevant, redundant, and noisy information while considering the correlation among different labels in multi-label learning. Specifically, I propose Hypergraph Spectral Learning (HSL) to perform dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The effect of regularization on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is also elucidated in this thesis. The relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is investigated as well. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including canonical correlation analysis, orthonormalized partial least squares, linear discriminant analysis, and hypergraph spectral learning. The first approach is a direct least squares approach which allows the use of different regularization penalties, but is applicable under a certain assumption; the second one is a two-stage approach which can be applied in the regularization setting without any assumption. Furthermore, an online implementation for the same class of dimensionality reduction algorithms is proposed for settings where the data arrives sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully to Drosophila gene expression pattern image annotation. Experimental results on benchmark data sets in multi-label learning also demonstrate the effectiveness and efficiency of the proposed algorithms. / Dissertation/Thesis / Ph.D. Computer Science 2011
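As a pointer to the techniques named in this entry, here is a minimal regularized linear CCA sketch between a feature matrix and a label indicator matrix; the generalized-eigenvalue formulation and the regularization constant are standard textbook choices, not the least-squares formulation developed in the thesis.

```python
import numpy as np

def cca(X, Y, n_components, reg=1e-3):
    """Regularized linear CCA between features X (n, p) and a label
    indicator matrix Y (n, k); returns the projection matrix for X."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Eigenproblem Cxx^{-1} Cxy Cyy^{-1} Cyx w = rho^2 w.
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_components]].real
```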
419

Real-time river level forecasting with an adaptive regression model: application to the Uruguay River basin

Moreira, Giuliana Chaves January 2016 (has links)
This study evaluated the application of the recursive least squares (RLS) technique to the real-time adjustment of the parameters of autoregressive models with exogenous variables (ARX), the exogenous inputs being upstream river levels, in order to improve the performance of real-time forecasts of river levels. Three aspects were studied jointly: the forecast lead time, the proportion of controlled area in upstream basins, and the drainage area of the forecast section. The research was conducted along three main dimensions: a) methodological (without recursion; with recursion; with recursion and a forgetting factor); b) temporal (six lead times: 10, 24, 34, 48, 58 and 72 hours); and c) spatial (variation of the controlled area of the basin and of the area of the basin defined by the forecast section). The study area was the Uruguay River basin, with its outlet at the Uruguaiana river gauge station (190,000 km²), and its nested sub-basins of Itaqui (131,000 km²), Passo São Borja (125,000 km²), Garruchos (116,000 km²), Porto Lucena (95,200 km²), Alto Uruguai (82,300 km²) and Iraí (61,900 km²). River level data, with daily readings at 7:00 and 17:00, were provided by the Companhia de Pesquisa de Recursos Minerais (CPRM), covering the period from 1/1/1991 to 30/6/2015. Model performance was assessed with the Nash-Sutcliffe coefficient (NS) and the 0.95 quantile of the absolute errors (EA(0.95): the error not exceeded with frequency 0.95). The EA(0.95) errors of the best models obtained for each basin always increase as the controlled area is reduced; that is, forecast quality decreases as the control section moves from downstream to upstream. The gain in forecast quality from the adaptive schemes is most evident in the EA(0.95) values, since this statistic is more sensitive than NS, showing larger differences, and it is more representative of the larger errors that occur precisely during flood events. In general, as the basin area decreases, forecasts with shorter lead times become possible, while a larger controlled upstream area improves the performance for smaller basins, especially in terms of EA(0.95). On the other hand, if the proportion of the basin controlled upstream is already very large, as for alternatives 1 and 2 used for forecasting at Itaqui (88.5% and 95.4% controlled, respectively), the adaptive schemes make little difference to the results. However, for basins with smaller controlled upstream areas, such as Porto Lucena under alternative 2 (65% controlled area), the performance gain from the full adaptive scheme (RLS with forgetting factor) becomes relevant.
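To make the adaptive scheme concrete, here is a minimal sketch of recursive least squares with a forgetting factor updating an ARX(1) model with one exogenous input; the simulated series, noise level and forgetting factor 0.98 are assumptions for illustration, not values from the study.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam.
    theta: parameter estimate; P: covariance; phi: regressor; y: new observation."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    e = y - phi @ theta                      # one-step prediction error
    theta = theta + k * e
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update with forgetting
    return theta, P

# ARX(1) with one exogenous input (upstream level): y_t = a*y_{t-1} + b*u_{t-1} + noise
rng = np.random.default_rng(2)
a_true, b_true = 0.8, 0.5
y, u = np.zeros(300), rng.standard_normal(300)
for t in range(1, 300):
    y[t] = a_true * y[t-1] + b_true * u[t-1] + 0.05 * rng.standard_normal()

theta, P = np.zeros(2), 1e3 * np.eye(2)
for t in range(1, 300):
    theta, P = rls_step(theta, P, np.array([y[t-1], u[t-1]]), y[t])
print("estimated (a, b):", theta)            # approaches (0.8, 0.5)
```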
420

Parameter estimation of induction machines through a no-load startup test

Sogari, Paulo Antônio Brudna January 2017 (has links)
In this work, methods are proposed for estimating induction motor parameters by the least squares method, measuring only stator voltages, currents and resistance in a no-load startup test. Procedures for processing the measured signals are detailed, as well as the estimation of the magnetic flux and of the mechanical speed of the motor. For the estimation of the electrical parameters, methods are proposed that differ in their requirements and in whether the parameters are treated as time-invariant or time-varying. For the latter case, a parameter estimation method based on data windows is employed, applying a model with time-invariant parameters locally to several parts of the test. Simulations are carried out to validate the proposed methods, and test data from three motors of different power ratings are used to analyse the scale of parameter variation during startup. The results obtained with and without consideration of parameter variation are compared.
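For the time-varying case described above, a minimal sketch of the data-window idea: ordinary least squares re-solved over successive windows, so that a time-invariant model holds locally. The regressor matrix here is generic; the thesis builds it from the measured voltages, currents and estimated flux, which is not reproduced.

```python
import numpy as np

def windowed_lstsq(Phi, y, window):
    """Ordinary least squares re-solved over successive data windows, fitting a
    time-invariant parameter vector locally to each segment of the test.
    Phi: (n_samples, n_params) regressor matrix; y: (n_samples,) measurements."""
    estimates = []
    for start in range(0, len(y) - window + 1, window):
        theta, *_ = np.linalg.lstsq(Phi[start:start + window],
                                    y[start:start + window], rcond=None)
        estimates.append(theta)
    return np.array(estimates)   # one parameter vector per window
```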
