171

Data sampling strategies in stochastic algorithms for empirical risk minimization

Csiba, Dominik January 2018 (has links)
Gradient descent methods, and especially their stochastic variants, have become highly popular in the last decade due to their efficiency on big data optimization problems. In this thesis we present the development of data sampling strategies for these methods. In the first four chapters we focus on four views of sampling for convex problems, developing and analyzing new state-of-the-art methods that use non-standard data sampling strategies. Finally, in the last chapter we present a more flexible framework, which generalizes to more problems as well as more sampling rules. In the first chapter we propose an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving the regularized empirical risk minimization (ERM) problem. Our modification consists in allowing the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, it is of a theoretical character, as it is expensive to implement. We also propose AdaSDCA+, a practical variant which in our experiments outperforms existing non-adaptive methods. In the second chapter we extend the dual-free analysis of SDCA to arbitrary mini-batching schemes. Our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of QUARTZ, a primal-dual method that also allows arbitrary mini-batching schemes. The advantage of a dual-free analysis comes from the fact that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. We illustrate through experiments the utility of being able to design arbitrary mini-batching schemes. In the third chapter we study importance sampling of minibatches.
Minibatching is a well-studied and highly popular technique in supervised learning, used by practitioners for its ability to accelerate training through better utilization of parallel processing power and reduction of stochastic variance. Another popular technique is importance sampling, a strategy for preferential sampling of more important examples that is also capable of accelerating the training process. However, despite considerable effort by the community in these areas, and due to the inherent technical difficulty of the problem, there is no existing work combining the power of importance sampling with the strength of minibatching. In this chapter we propose the first importance sampling for minibatches and give a simple and rigorous complexity analysis of its performance. We illustrate on synthetic problems that for training data with certain properties, our sampling can lead to several orders of magnitude improvement in training time. We then test the new sampling on several popular datasets and show that the improvement can reach an order of magnitude. In the fourth chapter we ask whether randomized coordinate descent (RCD) methods should be applied to the ERM problem or rather to its dual. When the number of examples (n) is much larger than the number of features (d), a common strategy is to apply RCD to the dual problem. On the other hand, when the number of features is much larger than the number of examples, it makes sense to apply RCD directly to the primal problem. In this chapter we provide the first joint study of these two approaches when applied to L2-regularized ERM. First, we show through a rigorous analysis that for dense data the above intuition is precisely correct. However, we find that for sparse and structured data, primal RCD can significantly outperform dual RCD even if d ≪ n, and vice versa, dual RCD can be much faster than primal RCD even if n ≫ d.
Moreover, we show that, surprisingly, a single sampling strategy minimizes both the (bound on the) number of iterations and the overall expected complexity of RCD. Note that the latter complexity measure also takes into account the average cost of the iterations, which depends on the structure and sparsity of the data and on the sampling strategy employed. We confirm our theoretical predictions using extensive experiments with both synthetic and real data sets. In the last chapter we introduce two novel generalizations of the theory for gradient descent type methods in the proximal setting. Firstly, we introduce the proportion function, which we further use to analyze all the known block-selection rules for coordinate descent methods under a single framework. This framework includes randomized methods with uniform, non-uniform or even adaptive sampling strategies, as well as deterministic methods with batch, greedy or cyclic selection rules. We additionally introduce a novel block-selection technique called greedy minibatches, for which we provide competitive convergence guarantees. Secondly, the whole theory of strongly convex optimization was recently generalized to a specific class of non-convex functions satisfying the so-called Polyak-Łojasiewicz condition. To mirror this generalization in the weakly convex case, we introduce the Weak Polyak-Łojasiewicz condition, using which we give global convergence guarantees for a class of non-convex functions previously not considered in theory. Additionally, we give local convergence guarantees for an even larger class of non-convex functions satisfying only a certain smoothness assumption. By combining the two above-mentioned generalizations we recover the state-of-the-art convergence guarantees for a large class of previously known methods and setups as special cases of our framework.
Also, we provide new guarantees for many previously not considered combinations of methods and setups, as well as a huge class of novel non-convex objectives. The flexibility of our approach offers a lot of potential for future research, as any new block selection procedure will have a convergence guarantee for all objectives considered in our framework, while any new objective analyzed under our approach will have a whole fleet of block selection rules with convergence guarantees readily available.
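The sampling ideas running through this abstract can be illustrated with a minimal sketch of importance sampling in stochastic gradient descent for an L2-regularized least-squares ERM problem. This is an illustrative toy, not the thesis's AdaSDCA or minibatch samplings; the choice of probabilities proportional to per-example smoothness constants, and all names and constants, are assumptions:

```python
import numpy as np

def importance_sgd(A, b, lam=0.1, steps=400, eta=0.1, seed=0):
    """Minimize the L2-regularized least-squares ERM objective
    (1/n) sum_i (a_i^T w - b_i)^2 + (lam/2) ||w||^2 with SGD under
    importance sampling: example i is drawn with probability proportional
    to a smoothness proxy L_i = ||a_i||^2, and the stochastic gradient is
    rescaled by 1/(n * p_i) so the estimate of the full gradient stays unbiased."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.sum(A * A, axis=1) + 1e-12      # per-example smoothness proxies
    p = L / L.sum()                        # importance-sampling distribution
    w = np.zeros(d)
    for t in range(steps):
        i = rng.choice(n, p=p)
        g_i = 2.0 * (A[i] @ w - b[i]) * A[i]   # gradient of the i-th loss term
        g = g_i / (n * p[i]) + lam * w          # unbiased full-gradient estimate
        w -= (eta / (t + 1)) * g                # decaying step size
    return w

def erm_objective(w, A, b, lam=0.1):
    """The regularized empirical risk being minimized."""
    return np.mean((A @ w - b) ** 2) + 0.5 * lam * (w @ w)
```

Sampling proportionally to L_i and rescaling by 1/(n p_i) keeps the gradient estimate unbiased while reducing its variance relative to uniform sampling when the L_i are far from uniform, which is the basic mechanism behind importance sampling in this setting.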
172

Optimal Design of Experiments for Functional Responses

January 2015 (has links)
abstract: Functional or dynamic responses are prevalent in experiments in engineering, medicine, and the sciences, but proposals for optimal designs are still sparse for this type of response. Experiments with dynamic responses yield multiple responses taken over a spectrum variable, so the design matrix for a dynamic response has a more complicated structure. In the literature, the optimal design problem for some functional responses has been solved using genetic algorithms (GA) and approximate design methods. The goal of this dissertation is to develop fast computer algorithms for calculating exact D-optimal designs. First, we demonstrate how traditional exchange methods can be improved to give a computationally efficient algorithm for finding G-optimal designs. The proposed two-stage algorithm, called the cCEA, uses a clustering-based approach to restrict the set of possible candidates for the point-exchange algorithm (PEA), and then improves the G-efficiency using a coordinate-exchange algorithm (CEA). The second major contribution of this dissertation is the development of fast algorithms for constructing D-optimal designs that determine the optimal sequence of stimuli in fMRI studies. The update formula for the determinant of the information matrix is improved by exploiting the sparseness of the information matrix, leading to faster computation times. The proposed algorithm outperforms the genetic algorithm in both computational efficiency and D-efficiency. The third contribution is a study of optimal experimental designs for more general functional response models. First, a B-spline system is proposed as the non-parametric smoother of the response function, and an algorithm is developed to determine D-optimal sampling points of a spectrum variable. Second, we propose a two-step algorithm for finding the optimal design over both sampling points and experimental settings.
In the first step, the matrix of experimental settings is held fixed while the algorithm optimizes the determinant of the information matrix of a mixed-effects model to find the optimal sampling times. In the second step, the optimal sampling times obtained from the first step are held fixed while the algorithm iterates on the information matrix to find the optimal experimental settings. The designs constructed by this approach outperform other designs found in the literature. / Doctoral Dissertation, Industrial Engineering, 2015
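The exchange-method idea described in this abstract can be sketched with a basic point-exchange search over a finite candidate set. This is a simplified illustration, not the dissertation's cCEA or its fMRI algorithms; the function name and the greedy full-sweep strategy are assumptions:

```python
import numpy as np

def d_optimal_exchange(candidates, n_runs, n_passes=5, seed=0):
    """Greedy point-exchange search for an exact D-optimal design.
    `candidates` is an (m, p) array of candidate model rows; the design is the
    n_runs-row subset X maximizing det(X^T X). Each pass tries to swap every
    design point for every candidate, keeping strictly improving swaps."""
    rng = np.random.default_rng(seed)
    m, p = candidates.shape
    idx = list(rng.choice(m, size=n_runs, replace=False))

    def logdet(ix):
        X = candidates[ix]
        sign, ld = np.linalg.slogdet(X.T @ X)
        return ld if sign > 0 else -np.inf   # singular designs are rejected

    best = logdet(idx)
    for _ in range(n_passes):
        improved = False
        for pos in range(n_runs):
            for j in range(m):
                trial = idx.copy()
                trial[pos] = j
                val = logdet(trial)
                if val > best + 1e-12:
                    idx, best = trial, val
                    improved = True
        if not improved:
            break
    return np.array(idx), best
```

For a quadratic model 1 + x + x^2 on a grid over [-1, 1], a 3-run search of this kind recovers the classical D-optimal support points {-1, 0, 1}.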
173

Aplicação de NURBS em MMCs, com apalpador touch trigger, para escaneamento de superfícies de formas livres e geometrias complexas / Application of NURBS on CMMs with a touch trigger probe for scanning free-form surfaces and complex geometries

Silva, Esly César Marinho da 31 March 2011 (has links)
Supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / Nowadays, the increasing demand for products with tight dimensional and geometric tolerances requires ever more accurate and flexible inspection systems. Additionally, parts with complex geometries and free-form surfaces have become common practice in the automotive, aeronautical and bioengineering industries, among others. Coordinate Measuring Machines (CMMs) are an important tool in the design, fabrication and inspection of manufactured products; engineers use them to produce an accurate digital model in a virtual space for later use in CAD/CAE/CAM/CAI. To identify the many shapes and surface features encountered, several techniques are available, such as Bézier surfaces, splines, B-splines and NURBS (Non-Uniform Rational B-Splines). NURBS offer many advantages, including simplicity and ease of data handling, which tend to minimize the randomness and inaccuracy of the cloud of points obtained by the CMM; they are thus an important tool for free-form modeling, making significant contributions to reverse engineering. The accuracy of the modeling process of a part grows with the number of points collected or measured on its surface (whether by laser, continuous contact scanning, or point-to-point contact). The main objective of this thesis is therefore to develop and implement a new methodology for modeling free-form surfaces and complex geometries using the NURBS technique on a CMM with a touch trigger probe. A measurement strategy for acquiring points on the studied surface was also developed. The methodology was applied experimentally to obtain the involute profile of a spur gear, a reduced physical model of an airplane, and a cyclist's helmet. Simulation results showed the effectiveness of the proposed approach, and experimental results demonstrated that it is practical, not time consuming, and an alternative way to apply CMMs equipped with touch trigger probes in modeling processes. The results obtained both by simulation and experimentally demonstrated the relevance of the methodology.
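The NURBS machinery this abstract relies on can be sketched by direct evaluation of a NURBS curve point via the Cox-de Boor recursion. This is a textbook-style illustration, not the thesis's modeling methodology; the function names are assumptions:

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree k at t.
    Uses the half-open convention, so t must lie in [knots[degree], knots[-degree-1])."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

def nurbs_point(t, ctrl, weights, knots, degree):
    """Evaluate a NURBS curve point: a rational, weighted combination of control points."""
    num = np.zeros(ctrl.shape[1])
    den = 0.0
    for i in range(len(ctrl)):
        b = bspline_basis(i, degree, t, knots) * weights[i]
        num += b * ctrl[i]
        den += b
    return num / den
```

With all weights equal to 1 and a clamped knot vector, the curve reduces to an ordinary B-spline (here a quadratic Bézier segment); raising one weight pulls the curve toward the corresponding control point, which is precisely the extra flexibility NURBS add over plain B-splines.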
174

Sistemática para Garantia da Qualidade na Medição de Peças com Geometria Complexa e Superfície com Forma Livre Utilizando Máquina de Medir por Coordenadas / A Systematic Approach for Quality Assurance in the Measurement of Parts with Complex Geometry and Free-Form Surfaces Using Coordinate Measuring Machines

Soares Júnior, Luiz 13 December 2010 (has links)
Supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / Parts with complex geometry and free-form surfaces are of great interest in many industrial applications, for functional or aesthetic reasons. Their spread is due in part to advances in CAD/CAM systems and in coordinate measuring technology. Despite these technological advances, product design remains a major problem in industry. The problems range from design conception to those inherent in the manufacturing and control processes, and they are often discovered only in the product application phase. Dimensional, form and surface-texture variations are specified in the technical drawing through dimensional and geometric tolerances. For parts with complex geometry, the allowable variations are specified through line-profile and surface-profile tolerances. Their control typically consists of comparing the coordinate points measured on the surface with the available CAD model. This work proposes a systematization of procedures for quality assurance in the measurement of parts with complex geometry and free-form surfaces using coordinate measuring machines. The proposal was based on an extensive study of the subject, on problems revealed during visits to six companies that use coordinate measuring technology, and on the results of case studies carried out at a company in the automotive sector. The systematization focuses on the main sources of error in coordinate measuring and proved easy to apply at the selected company.
175

Modelo reduzido de sintetização de erros para máquinas de medir a três coordenadas / A reduced error synthesization model for three coordinate measuring machines

Renata Belluzzo Zirondi 27 March 2002 (has links)
Since the introduction of coordinate measuring machines by Ferranti about fifty years ago, CMM design and manufacturing technology have developed enormously. Nevertheless, it is still impossible to produce mechanical devices free of errors. To guarantee the accuracy of the measurements performed, such errors must be known and compensation routines implemented. The errors of any piece of equipment are surveyed by means of calibration procedures. However, owing to the complexity of CMMs, there are as yet no procedures internationally accepted by users and manufacturers for evaluating the metrological performance of this kind of equipment. Current standardized techniques, for instance JIS B 7440 (1987), ASME B89.4.1 (1997) and VDI/VDE 2617 (1986), among others, propose performance tests that usually overestimate CMM errors. Furthermore, they hinder traceability for any measurement condition different from the ones under which the test was performed. The main objective of this work is therefore to present a new error synthesization model for CMMs, the Reduced Error Synthesization Model. Its synthesization equations for Ex, Ey and Ez are reduced in comparison with other known models; it requires short calibration times, which reduces the cost of this activity, allows the diagnosis of error sources, and guarantees the traceability of the calculated errors.
176

Um estudo sobre o emprego de funções de base gaussianas geradas pelo método da coordenada geradora em cálculos de propriedades eletrônicas de átomos e moléculas / A study on the application of Gaussian-type basis sets generated with the Generator Coordinate method in ab initio calculations of electronic properties of atoms and molecules

Milena Palhares Maringolo 12 December 2014 (has links)
The Generator Coordinate method is a powerful tool for generating basis functions. Its latest version, the polynomial Generator Coordinate method, allows the generation of more efficient and accurate basis sets at low computational cost. In this thesis, besides the generation of basis sets for the atoms of the first period of the Periodic Table, a strategy is presented and tested in which exponents are selected from the basis set itself and subsequently refined in order to generate polarization and diffuse functions, with applications to calculations of electronic properties of atoms and molecules. / Ab initio electronic structure calculations for atoms, and especially for molecules, are mostly carried out within the finite-basis-set expansion of Roothaan's Hartree-Fock theory. The search for ever more efficient basis sets has been a constant quest, and here we show a new alternative for developing efficient Gaussian-type function (GTF) basis sets for atomic and molecular calculations by employing the polynomial Generator Coordinate Hartree-Fock method.
177

Geração de conjuntos de funções de base Gaussianos para metais de transição do Sc - Zn a partir do método da coordenada geradora polinomial / Generation of Gaussian basis sets for the transition-metal atoms Sc to Zn by means of the polynomial Generator Coordinate method

Ana Cristina Mora Tello 09 September 2016 (has links)
Gaussian basis sets are developed for the first-row transition-metal atoms Sc to Zn. These sets were constructed by means of the Generator Coordinate Hartree-Fock (GCHF) method, based on a polynomial expansion of degree 3 to discretize the Griffin-Wheeler-Hartree-Fock equations. In this procedure the equations are discretized through a flexible mesh of points that is not equally spaced, one for each of the orbital symmetries required to describe the atoms studied; in the original GCHF method, by contrast, the mesh is equally spaced for all symmetries. Initially, a balanced set consisting of 23s17p13d primitive Gaussian functions was generated. From it, a standard set of 7Z quality in the valence was constructed and subsequently enriched with groups of polarization functions, yielding the sets pGCHF-7Z-2f1g, pGCHF-7Z-3f2g and pGCHF-7Z-3f2g1h. Hartree-Fock atomic energies for the two lowest-energy electronic states of the atoms Sc to Zn were calculated with our sets and compared with numerical-energy values. The results showed a maximum error of 1.02 mHartree, demonstrating the capability of this polynomial expansion for developing accurate basis sets for the 3d transition-metal (TM) atoms. Density Functional Theory (DFT) calculations using nine different functionals were then performed with our basis sets. The total electronic energy and properties including optimized geometries, bond lengths and vibrational frequencies were examined for a set of molecular systems (transition-metal hydrides, dichlorides, dimers, trimers and oxides). The results are compared with theoretical values obtained with cc-pVnZ (n = Q or 5) basis sets and with experimental values, when available in the literature. They show that the cc-pV5Z total-energy values can be reached with our basis sets using a smaller number of polarization functions. Other molecular calculations give results close to the experimental values and to the DFT/cc-pV5Z reference values. The most important point to mention is that the generator coordinate basis sets require only a small fraction of the computational time to reach convergence when compared with DFT/cc-pVQZ and DFT/cc-pV5Z calculations.
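As a small illustration of the Gaussian-type functions such basis sets are built from, the overlap of two normalized s-type primitives has a closed form via the Gaussian product rule. This is a standard textbook formula, unrelated to the specific pGCHF sets of the thesis; the function name is an assumption:

```python
import numpy as np

def s_overlap(alpha, beta, RA, RB):
    """Analytic overlap of two normalized s-type Gaussian primitives
    g(r; a, R) = (2a/pi)^(3/4) * exp(-a |r - R|^2),
    computed with the Gaussian product rule:
    S = Na * Nb * (pi/(a+b))^(3/2) * exp(-(a*b/(a+b)) |RA - RB|^2)."""
    RA, RB = np.asarray(RA, float), np.asarray(RB, float)
    Na = (2 * alpha / np.pi) ** 0.75       # normalization of the first primitive
    Nb = (2 * beta / np.pi) ** 0.75        # normalization of the second primitive
    p = alpha + beta                        # total exponent of the product Gaussian
    mu = alpha * beta / p                   # reduced exponent
    r2 = np.dot(RA - RB, RA - RB)
    return Na * Nb * (np.pi / p) ** 1.5 * np.exp(-mu * r2)
```

The self-overlap of a normalized primitive is exactly 1, and the overlap decays as a Gaussian in the distance between centers, which is why diffuse (small-exponent) functions matter for describing long-range behavior.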
178

Geração, contração e polarização de bases gaussianas para cálculos quânticos de átomos e moléculas / Generation, contraction and polarization of Gaussian basis sets for quantum calculations of atoms and molecules

Amanda Ribeiro Guimarães 10 September 2013 (has links)
Many research groups have worked on the development of basis sets in order to obtain better results at reduced computational time and cost. For this purpose, size and accuracy are the factors to be considered, so that the number of functions in the generated set provides a good description of the system under study within a short convergence time. This dissertation presents the basis sets obtained by the Generator Coordinate Method for the atoms Na, Mg, Al, Si, P, S and Cl, and evaluates their quality by comparing the total electronic energy at the atomic and molecular levels. A search was carried out for the best contracted set and for the best set of polarization functions. The quality of the generated sets was evaluated through DFT-B3LYP calculations, whose results were compared with values obtained using basis sets known from the literature, such as Dunning's cc-pVXZ and Jensen's pc-n sets. The results show that the basis sets generated in this work, named MCG-3d2f, can represent atomic and molecular systems well: both the energy values and the computational times are equivalent to, and in some cases better than, those obtained here with the reference basis sets (Dunning's and Jensen's).
179

As coordenadas de Fenchel-Nielsen / Fenchel-Nielsen Coordinates

Angélica Turaça 09 June 2015 (has links)
In this dissertation we define hyperbolic geometry using the Poincaré disk (D2) and the upper half-plane (H2), together with their properties. We also present some important functions and relations of hyperbolic geometry; we introduce Riemann surfaces, analyzing their properties and representations; and we study Teichmüller space with its pants decomposition. These topics are the tools needed to reach the goal of the dissertation: defining the Fenchel-Nielsen coordinates as a local coordinate system on the Teichmüller space Tg.
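The two models of the hyperbolic plane mentioned in this abstract, and the Fenchel-Nielsen coordinate count, can be summarized with the standard formulas (a sketch of textbook definitions, not material taken from the dissertation itself):

```latex
% The upper half-plane H^2 and the Poincare disk D^2 with their hyperbolic metrics:
\[
  \mathbb{H}^2 = \{\, z = x + iy \in \mathbb{C} : y > 0 \,\}, \qquad
  ds^2 = \frac{dx^2 + dy^2}{y^2},
\]
\[
  \mathbb{D}^2 = \{\, z \in \mathbb{C} : |z| < 1 \,\}, \qquad
  ds^2 = \frac{4\,(dx^2 + dy^2)}{\bigl(1 - |z|^2\bigr)^2}.
\]
% A closed hyperbolic surface of genus g >= 2 decomposes into 2g-2 pairs of pants
% along 3g-3 disjoint simple closed geodesics; the lengths l_i > 0 and twists
% tau_i of these curves give the Fenchel-Nielsen coordinates
% (l_1, tau_1, \dots, l_{3g-3}, tau_{3g-3}) on the Teichmuller space T_g,
% which is therefore homeomorphic to R^{6g-6}.
```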
180

A correlação entre os erros de retilineidade e angulares nas máquinas de medir a três coordenadas / The correlation between the straightness errors and angular errors in three coordinate measuring machines

Alessandro Marques 29 March 1999 (has links)
The metrological performance of a Coordinate Measuring Machine (CMM) is related to its capacity to measure workpieces with the required precision. Like every measuring instrument, however, these machines have errors that affect the measurements, generating what have conventionally been called volumetric errors. Such errors can be obtained through mathematical models that describe how the individual errors of all the machine's components combine and propagate to any point within the working volume. Usually, independence among the individual errors is assumed when the machine error model is built. However, if the structural geometry of the machine is analyzed, a dependence between the straightness errors and the angular errors becomes apparent. The objective of this work is to express the angular error as a function of the straightness error, thereby minimizing the number of calibrations needed, and consequently the machine downtime required, to evaluate the machine's metrological behavior. To achieve this, the straightness and angular errors of a Brown & Sharpe moving-bridge coordinate measuring machine were measured. With this data set and knowledge of the machine geometry, the angular errors were modeled, calculated and compared with those obtained experimentally.
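The dependence exploited in this abstract can be illustrated with a minimal numerical sketch, assuming, as a simplification and not as the thesis's exact formulation, that the angular error of a carriage is approximated by the local slope of its measured straightness profile:

```python
import numpy as np

def angular_from_straightness(x, s):
    """Estimate an angular error profile (radians) as the local slope of a
    measured straightness profile s(x), using central differences in the
    interior and one-sided differences at the ends (via np.gradient)."""
    return np.gradient(s, x)
```

For a quadratic straightness profile s(x) = c x^2, the recovered angular profile at interior points is the analytic slope 2 c x, since central differences are exact for quadratics; in practice such an estimate would let one calibration (straightness) stand in for a second one (angular), which is the cost saving the work aims at.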
