351 |
Spatial econometrics models, methods and applications / Tao, Ji. January 2005
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains x, 140 p. Includes bibliographical references (p. 137-140). Available online via OhioLINK's ETD Center
|
352 |
Fast Rates for Regularized Least-squares Algorithm / Caponnetto, Andrea; De Vito, Ernesto. 14 April 2005
We develop a theoretical analysis of the generalization performance of regularized least-squares algorithms on reproducing kernel Hilbert spaces for supervised learning. We show that the effective dimension of an integral operator plays a central role in defining a criterion for choosing the regularization parameter as a function of the number of samples. In fact, a minimax analysis is performed which shows the asymptotic optimality of this criterion.
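As an illustration of the quantity driving the analysis, the sketch below computes the effective dimension N(λ) = tr(K(K + nλI)⁻¹) of a Gaussian-kernel Gram matrix and solves the associated regularized least-squares problem. The kernel choice, synthetic data, and λ grid are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian kernel matrix between sample sets X (n x d) and Y (m x d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def effective_dimension(K, lam):
    # N(lambda) = trace(K (K + n*lambda*I)^{-1}): a spectral measure of
    # capacity that governs how lambda should scale with the sample size.
    n = K.shape[0]
    eig = np.linalg.eigvalsh(K)
    return float(np.sum(eig / (eig + n * lam)))

# Regularized least squares in the RKHS: f(x) = sum_i alpha_i k(x_i, x),
# with alpha = (K + n*lambda*I)^{-1} y. Data below is a toy regression task.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
K = rbf_kernel(X, X)

for lam in [1e-1, 1e-2, 1e-3]:
    print(f"lambda={lam:g}  N(lambda)={effective_dimension(K, lam):.1f}")

lam = 1e-2
alpha = np.linalg.solve(K + len(y) * lam * np.eye(len(y)), y)
y_hat = K @ alpha  # fitted values on the training inputs
```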
|
353 |
A matemática por trás do sudoku, um estudo de caso em análise combinatória / The mathematics behind sudoku, a case study in combinatorial analysis / Santos, Ricardo Pessoa dos. 29 November 2017
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / We present the world-famous Sudoku puzzle to a group of high school students in the São Paulo state public school system and carry out several activities with them, offering the puzzle as a didactic aid for learning important mathematical concepts and as an opportunity to sharpen concentration and logical reasoning. We explore the mathematical concepts hidden behind its rows, columns and blocks, starting from one of the first questions that can be asked: how many valid games are there in total? To answer it, we propose several activities, beginning with Shidoku (a 4 × 4 grid), whose complete grids we then count. The reduced size of this grid makes manual calculation feasible, allowing students to visualize and understand the process used, and provides a natural introduction to the fundamental principle of counting. The main discussion of this work concerns a method for determining the number of valid Sudoku grids, for which we follow the proofs of Bertrand Felgenhauer and Frazer Jarvis. We also present a method for generating a complete Sudoku grid from a square matrix of order 3, which is then used to generate an orthogonal Sudoku solution. Finally, we present and explore some variant forms of the Sudoku puzzle, with variations in block shape and grid size, and a variation that uses geometric shapes as clues (Shapedoku). As a challenge for further reading and research, we pose the problem of the minimum number of initial clues needed for a valid puzzle. One of the expected outcomes is that these activities improve concentration and logical reasoning, supporting the tasks proposed in this work and carrying over to other everyday problems.
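As a concrete companion to the Shidoku counting activity, a minimal backtracking counter is sketched below; it enumerates all complete 4 × 4 grids and returns 288, the total the dissertation derives via the fundamental principle of counting. The code is an illustrative sketch, not taken from the dissertation.

```python
def count_shidoku():
    # Count all complete 4x4 Shidoku grids by backtracking.
    # Rows, columns, and the four 2x2 blocks must each contain 1..4.
    grid = [[0] * 4 for _ in range(4)]

    def ok(r, c, v):
        if any(grid[r][j] == v for j in range(4)):   # row constraint
            return False
        if any(grid[i][c] == v for i in range(4)):   # column constraint
            return False
        br, bc = 2 * (r // 2), 2 * (c // 2)          # top-left of the 2x2 block
        return all(grid[br + i][bc + j] != v
                   for i in range(2) for j in range(2))

    def fill(cell):
        if cell == 16:                                # all cells placed: one grid
            return 1
        r, c = divmod(cell, 4)
        total = 0
        for v in range(1, 5):
            if ok(r, c, v):
                grid[r][c] = v
                total += fill(cell + 1)
                grid[r][c] = 0                        # undo and try the next value
        return total

    return fill(0)

print(count_shidoku())  # 288 complete Shidoku grids
```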
|
354 |
Application of control, modelling and optimisation to biomaterials manufacturing / Onel, Oliver. January 2013
This thesis presents work conducted during a three-year research project in the field of Control Systems and Biomaterials Engineering. The findings are presented over seven chapters, starting with a thorough literature review of existing methods and key technologies, then highlighting the problems with current methods and how they have been overcome. Data are presented in tables, figures and photographs to aid understanding. The research focuses on two relatively new manufacturing methods in the field of Tissue Engineering. Both methods are used to create materials for the regeneration of human and animal tissue, with the aim of replacing current surgical methods. The methods are viewed from a control-systems perspective, and improvements have been made through the implementation of new technologies and methods. Additionally, further advances are presented in theoretical control-systems modelling, where the shortfalls of existing modelling methods are highlighted and solutions proposed.
|
355 |
On the development of control systems technology for fermentation processes / Loftus, John. January 2017
Fermentation processes play an integral role in the manufacture of pharmaceutical products. The Quality by Design initiative, combined with Process Analytical Technologies, aims to facilitate the consistent production of high quality products in the most efficient and economical way. The ability to estimate and control product quality from these processes is essential in achieving this aim. Large historical datasets are commonplace in the pharmaceutical industry, and multivariate methods based on PCA and PLS have been successfully used in a wide range of applications to extract useful information from such datasets. This thesis has focused on the development and application of novel multivariate methods to the estimation and control of product quality from a number of processes. The thesis is divided into four main parts. First, the related literature and underlying mathematical techniques are summarised. Following this, the three main technical areas of work are presented. The first of these relates to the development of a novel method for estimating the quality of products from a proprietary process using PCA. The ability to estimate product quality is useful for identifying production steps that are potentially problematic and also increases process efficiency by ensuring that any defective products are detected before they undergo any further processing. The proposed method is simple and robust and has been applied to two separate case studies, the results of which demonstrate the efficacy of the technique. The second area of work concentrates on the development of a novel method of identifying the operational phases of batch fermentation processes, based on PCA and associated statistics. Knowledge of the operational phases of a process is beneficial from a monitoring and control perspective and allows a process to be divided into phases that can be approximated by a linear model. The devised methodology is applied to two separate fermentation processes and the results show the capability of the proposed method. The third area of work focuses on a performance evaluation of two multivariate algorithms, PLS and EPLS, in controlling the end-point product yield of fermentation processes. Control of end-point product quality is of crucial importance in many manufacturing industries, such as the pharmaceutical industry. Developing a controller based on historical and identification process data is attractive due to the simplicity of modelling and the increasing availability of process data. The methodology is applied to two case studies and its performance evaluated. From both a prediction and control perspective, EPLS is seen to outperform PLS, which is important if modelling data is limited.
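As a hedged illustration of how a PCA model can flag defective products, the sketch below fits PCA to historical batch data and uses the squared prediction error (SPE/Q statistic) with an empirical control limit. The synthetic data, component count, and limit are placeholder assumptions, not the thesis's proprietary method.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit a PCA model on historical "good" batch data (rows = batches,
# columns = process measurements), then flag new batches whose
# reconstruction error (SPE/Q statistic) exceeds a control limit.
rng = np.random.default_rng(1)
X_good = rng.standard_normal((100, 12))      # placeholder historical data

mu, sd = X_good.mean(0), X_good.std(0)
Z = (X_good - mu) / sd                        # standardize each variable
pca = PCA(n_components=3).fit(Z)

def spe(X):
    # Squared prediction error: distance from each batch to the PCA subspace.
    Zn = (X - mu) / sd
    residual = Zn - pca.inverse_transform(pca.transform(Zn))
    return (residual ** 2).sum(axis=1)

limit = np.percentile(spe(X_good), 99)        # empirical 99% control limit
X_new = rng.standard_normal((5, 12))          # placeholder incoming batches
print(spe(X_new) > limit)                     # True flags a suspect batch
```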
|
356 |
Sports arenas in Sweden: A study investigating the impact of sports arenas on net migration and amenity premiums / Gambina, Andrew. January 2018
This paper examines the impact of the construction or renovation of a sports arena on net migration and amenity premiums. Swedish municipal data are collected for 289 municipalities over the period 1999 to 2016. The econometric analysis uses fixed effects (FE) and feasible generalised least squares (FGLS) estimation techniques. This study builds on the growing literature on the intangible benefits of sports arenas and is one of the few Swedish studies of its kind. The results show that a sports arena built in year t realises a 3.458% increase in net migration in year t + 5 for arenas used by football and ice hockey teams in the two highest leagues.
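A minimal sketch of the fixed-effects (within) estimator underlying such a panel analysis follows; the variable names, lag structure, and synthetic data are illustrative assumptions, and the paper's FGLS step is not reproduced here.

```python
import numpy as np
import pandas as pd

# Within (fixed-effects) estimator: demean the outcome and regressor inside
# each municipality, then run OLS on the demeaned data.
rng = np.random.default_rng(2)
n_muni, n_years = 289, 18
df = pd.DataFrame({
    "muni": np.repeat(np.arange(n_muni), n_years),
    "year": np.tile(np.arange(1999, 1999 + n_years), n_muni),
})
# Hypothetical regressor: arena built or renovated five years earlier (0/1).
df["arena_lag5"] = rng.integers(0, 2, len(df)).astype(float)
df["net_migration"] = 0.03458 * df["arena_lag5"] + rng.standard_normal(len(df))

def demean_within(col):
    # Subtracting each municipality's own mean removes time-invariant
    # municipal fixed effects.
    return df[col] - df.groupby("muni")[col].transform("mean")

y = demean_within("net_migration").to_numpy()
X = demean_within("arena_lag5").to_numpy()[:, None]
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)  # FE estimate of the arena effect on net migration
```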
|
357 |
Synthetic Aperture Radar Image Formation Via Sparse Decomposition / January 2011
abstract: Spotlight-mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating the acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms and modified to be practical on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude more processing time than classical SAR image formation. / Dissertation/Thesis / M.S. Electrical Engineering 2011
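As a hedged sketch of sparse decomposition over a sparsely sampled linear measurement model, the code below implements plain iterative soft-thresholding (ISTA) for the lasso objective. It is a generic stand-in, not either of the thesis's two modified algorithms, and the measurement matrix is a random placeholder rather than a SAR projection operator.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    # Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1,
    # a standard sparse-decomposition solver.
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L     # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Toy "sparse aperture": keep a random 40% of the measurement rows.
rng = np.random.default_rng(3)
A_full = rng.standard_normal((128, 64))
x_true = np.zeros(64)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]      # sparse scene to recover
keep = rng.choice(128, size=51, replace=False)
A, y = A_full[keep], A_full[keep] @ x_true
x_hat = ista(A, y, lam=0.05)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])   # should recover support {5, 20, 41}
```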
|
358 |
Multi-Label Dimensionality Reduction / January 2011
abstract: Multi-label learning, which deals with data associated with multiple labels simultaneously, is ubiquitous in real-world applications. To overcome the curse of dimensionality in multi-label learning, in this thesis I study multi-label dimensionality reduction, which extracts a small number of features by removing irrelevant, redundant, and noisy information while considering the correlation among different labels. Specifically, I propose Hypergraph Spectral Learning (HSL), which performs dimensionality reduction for multi-label data by exploiting correlations among different labels using a hypergraph. The regularization effect on the classical dimensionality reduction algorithm known as Canonical Correlation Analysis (CCA) is elucidated in this thesis, and the relationship between CCA and Orthonormalized Partial Least Squares (OPLS) is also investigated. To perform dimensionality reduction efficiently for large-scale problems, two efficient implementations are proposed for a class of dimensionality reduction algorithms, including canonical correlation analysis, orthonormalized partial least squares, linear discriminant analysis, and hypergraph spectral learning. The first is a direct least-squares approach that allows the use of different regularization penalties but is applicable only under a certain assumption; the second is a two-stage approach that can be applied in the regularization setting without any assumption. Furthermore, an online implementation of the same class of dimensionality reduction algorithms is proposed for data that arrive sequentially. A Matlab toolbox for multi-label dimensionality reduction has been developed and released. The proposed algorithms have been applied successfully to Drosophila gene expression pattern image annotation, and experimental results on benchmark multi-label data sets also demonstrate their effectiveness and efficiency. / Dissertation/Thesis / Ph.D. Computer Science 2011
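As an illustration of the CCA building block discussed above, the sketch below projects a feature matrix and a multi-label indicator matrix into a shared low-dimensional space using scikit-learn's CCA. The synthetic data and component count are assumptions; this is not the thesis's HSL algorithm or its Matlab toolbox.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# CCA between a feature matrix X and a multi-label indicator matrix Y:
# both are projected into a shared low-dimensional space where their
# correlation is maximized.
rng = np.random.default_rng(4)
n, d, k = 500, 30, 5
latent = rng.standard_normal((n, 3))                       # shared structure
X = latent @ rng.standard_normal((3, d)) + 0.1 * rng.standard_normal((n, d))
Y = (latent @ rng.standard_normal((3, k)) > 0).astype(float)  # 0/1 labels

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)
print(X_c.shape, Y_c.shape)                     # reduced representations
print(np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1])  # first canonical correlation
```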
|
359 |
Previsão de níveis fluviais em tempo atual com modelo de regressão adaptativo: aplicação na bacia do rio Uruguai [Real-time river level forecasting with an adaptive regression model: application to the Uruguay River basin] / Moreira, Giuliana Chaves. January 2016
This study evaluated the recursive least squares (RLS) technique for adjusting, in real time, the parameters of autoregressive models with exogenous variables (ARX), where the exogenous inputs are upstream river levels, in order to improve real-time forecasts of river levels. Three aspects were studied jointly: the lead time of the forecast, the proportion of controlled area in upstream basins, and the basin area at the forecasting section. The research was conducted along three main dimensions: a) methodological (no recursion; recursion; recursion with a forgetting factor); b) temporal (six lead times: 10, 24, 34, 48, 58 and 72 hours); and c) spatial (variation of the controlled area of the basin and of the basin area defined by the forecast section). The study area was the Uruguay River basin with its outlet at the Uruguaiana gauging station (190,000 km²) and its nested sub-basins of Itaqui (131,000 km²), Passo São Borja (125,000 km²), Garruchos (116,000 km²), Porto Lucena (95,200 km²), Alto Uruguai (82,300 km²) and Iraí (61,900 km²). River level data, read daily at 7:00 and 17:00, were provided by the Companhia de Pesquisa de Recursos Minerais (CPRM), covering 1 January 1991 to 30 June 2015. Model performance was assessed with the Nash-Sutcliffe coefficient (NS) and the 0.95 quantile of the absolute errors (AE(0.95): the error not exceeded with frequency 0.95). The AE(0.95) errors of the best models obtained for each basin always increase as the controlled area decreases; that is, forecast quality degrades as the control section moves upstream. The gain in forecast quality from the adaptive schemes is most evident in AE(0.95), which is more sensitive than NS and more representative of the large errors that occur precisely during flood events. In general, as the basin area decreases, forecasts with shorter lead times become feasible, and a larger controlled upstream area improves the performance for smaller basins, especially as measured by AE(0.95). When the proportion of controlled upstream basin is already very large, as for alternatives 1 and 2 used for forecasting at Itaqui (88.5% and 95.4%, respectively), the adaptive schemes make little difference to the results. For basins with smaller controlled upstream areas, however, such as Porto Lucena under alternative 2 (65% controlled area), the performance gain from the full adaptive scheme (RLS with forgetting factor) becomes relevant.
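A minimal sketch of recursive least squares with a forgetting factor applied to an ARX model follows; the ARX orders, forgetting factor, and synthetic level series are illustrative assumptions, not the study's calibrated models.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    # One recursive-least-squares step with forgetting factor lam.
    # theta: parameters, P: covariance, phi: regressor, y: new observation.
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    e = y - phi @ theta                    # one-step prediction error
    theta = theta + k * e
    P = (P - np.outer(k, phi @ P)) / lam   # discount old information
    return theta, P

# Illustrative ARX(2,1): downstream level from its own past and an
# upstream level; the series below are synthetic placeholders.
rng = np.random.default_rng(5)
T = 500
up = np.cumsum(rng.standard_normal(T)) * 0.1
down = np.zeros(T)
for t in range(2, T):
    down[t] = (0.6 * down[t-1] + 0.2 * down[t-2] + 0.3 * up[t-1]
               + 0.05 * rng.standard_normal())

theta = np.zeros(3)
P = np.eye(3) * 1e3                        # large initial uncertainty
for t in range(2, T):
    phi = np.array([down[t-1], down[t-2], up[t-1]])
    theta, P = rls_update(theta, P, phi, down[t])
print(theta)  # should track the true coefficients [0.6, 0.2, 0.3]
```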
|
360 |
Estimação de parâmetros de máquinas de indução através de ensaio de partida em vazio [Parameter estimation of induction machines through a no-load startup test] / Sogari, Paulo Antônio Brudna. January 2017
This work proposes methods for estimating induction motor parameters via the least squares method, measuring only stator voltages, currents and resistance in a no-load startup test. Procedures are detailed for processing the measured signals and for estimating the magnetic flux and the mechanical speed of the motor. For the electrical parameters, the proposed methods differ in their requirements and in whether the parameters are treated as time-invariant or time-varying. For the latter case, parameters are estimated over data windows, applying a model with time-invariant parameters locally to different parts of the test. Simulations are performed to validate the proposed methods, and test data from three motors of different power ratings are used to analyze the scale of parameter variation during startup. Results obtained with and without consideration of parameter variation are compared.
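As a hedged sketch of the windowed least-squares idea, the code below re-estimates a parameter vector on successive data windows, so slow parameter variation shows up across windows. The regressor construction, window sizes, and drifting-parameter toy model are illustrative assumptions, not the motor model used in the dissertation.

```python
import numpy as np

def windowed_least_squares(Phi, y, win=200, step=100):
    # Fit theta = argmin ||Phi @ theta - y||^2 on successive data windows,
    # treating the parameters as locally time-invariant within each window.
    estimates = []
    for start in range(0, len(y) - win + 1, step):
        sl = slice(start, start + win)
        theta, *_ = np.linalg.lstsq(Phi[sl], y[sl], rcond=None)
        estimates.append(theta)
    return np.array(estimates)   # one parameter vector per window

# Toy regression with a slowly drifting first parameter to illustrate
# how the window-by-window estimates reveal the variation.
rng = np.random.default_rng(6)
t = np.arange(2000)
Phi = rng.standard_normal((2000, 2))
true = np.column_stack([1.0 + 0.0005 * t,
                        -0.5 * np.ones_like(t, dtype=float)])
y = (Phi * true).sum(axis=1) + 0.01 * rng.standard_normal(2000)
print(windowed_least_squares(Phi, y)[:3])  # drift visible in column 0
```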
|