  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Fast spectral multiplication for real-time rendering

Waddle, C Allen 02 May 2018 (has links)
In computer graphics, the complex phenomenon of color appearance, involving the interaction of light, matter and the human visual system, is modeled by the multiplication of RGB triplets assigned to lights and materials. This efficient heuristic produces plausible images because the triplets assigned to materials usually function as color specifications. To predict color, spectral rendering is required, but the O(n) cost of computing reflections with n-dimensional point-sampled spectra is prohibitive for real-time rendering. Typical spectra are well approximated by m-dimensional linear models, where m << n, but computing reflections with this representation requires O(m^2) matrix-vector multiplication. A method by Drew and Finlayson [JOSA A 20, 7 (2003), 1181-1193] reduces this cost to O(m) by “sharpening” an n x m orthonormal basis with a linear transformation, so that the new basis vectors are approximately disjoint. If successful, this transformation allows approximated reflections to be computed as the products of coefficients of lights and materials. Finding the m x m change of basis matrix requires solving m eigenvector problems, each needing a choice of wavelengths in which to sharpen the corresponding basis vector. These choices, however, are themselves an optimization problem left unaddressed by the method's authors. Instead, we pose a single problem, expressing the total approximation error incurred across all wavelengths as the sum of dm^2 squares for some number d with m <= d << n, where d depends on the inherent dimensionality of the rendered reflectance spectra but is independent of the number of approximated reflections. This problem may be solved in real time, or nearly, using standard nonlinear optimization algorithms. Results using a variety of reflectance spectra and three standard illuminants yield errors at or close to the best lower bound attained by projection onto the leading m characteristic vectors of the approximated reflections. 
Measured as CIEDE2000 color differences, a heuristic proxy for image difference, these errors can be made small enough to be likely imperceptible using values of 4 <= m <= 9. An examination of this problem reveals a hierarchy of simpler, more quickly solved subproblems whose solutions yield, in the typical case, increasingly inaccurate approximations. Analysis of this hierarchy explains why, in general, the lowest approximation error is not attained by simple spectral sharpening, the smallest of these subproblems, unless the spectral power distributions of all light sources in a scene are sufficiently close to constant functions. Using the methods described in this dissertation, spectra can be rendered in real time as the products of m-dimensional vectors of sharp basis coefficients at a cost that is, in a typical application, a negligible fraction above the cost of RGB rendering. / Graduate
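The coefficient-product shortcut can be illustrated in the idealized case where the basis vectors are exactly disjoint. The sketch below uses a hypothetical band-partition basis, not the sharpened bases computed in the dissertation; for such a basis the componentwise product of coefficients recovers the pointwise spectral product exactly.

```python
import numpy as np

n, m = 64, 8                      # wavelength samples, basis dimension
w = n // m                        # samples per band
B = np.zeros((n, m))
for j in range(m):
    B[j*w:(j+1)*w, j] = 1.0 / np.sqrt(w)   # disjoint orthonormal columns

rng = np.random.default_rng(0)
a = rng.uniform(0.1, 1.0, m)      # light coefficients
b = rng.uniform(0.1, 1.0, m)      # material coefficients
light, refl = B @ a, B @ b        # piecewise-constant spectra

# O(n) reference: pointwise spectral multiplication
full = light * refl
# O(m) shortcut: componentwise product of coefficients (rescaled)
c = a * b / np.sqrt(w)
approx = B @ c
print(np.allclose(full, approx))  # exact here because the bands are disjoint
```

With a merely sharpened (approximately disjoint) basis, the same O(m) product yields an approximation rather than an identity, and the quality of that approximation is what the optimization in the dissertation controls.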
42

Optimal Bidding Strategy for a Strategic Power Producer Using Mixed Integer Programming

Sadat, Sayed Abdullah 14 March 2017 (has links)
The thesis focuses on a mixed integer linear programming (MILP) formulation for a bi-level mathematical program with equilibrium constraints (MPEC) considering chance constraints. The particular MPEC problem relates to a power producer’s bidding strategy: maximize its total benefit by determining its bidding price and bidding power output while accounting for the electricity pool’s operation and a guess of the rival producer’s bidding price. The entire decision-making process can be described by a bi-level optimization problem. The contribution of the thesis is the MILP formulation of this problem, using a chance-constrained mathematical program to handle the uncertainties. First, the lower-level pool operation problem is replaced by its Karush-Kuhn-Tucker (KKT) optimality conditions, which are further converted to an MILP formulation except for a bilinear term in the objective function. Secondly, duality theory is applied to replace the bilinear term with linear terms. Finally, two types of chance constraints are examined and modeled in the MILP formulation. With the MILP formulation, the entire MPEC problem considering randomness in price guessing can be solved using off-the-shelf MIP solvers, e.g., Gurobi. Examples and a case study illustrate the formulation and its results.
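The bi-level structure can be caricatured with a toy merit-order pool in Python. All numbers below are made up, and the upper level is solved by brute-force enumeration over a bid grid; the thesis's actual MILP/KKT reformulation is not reproduced here.

```python
# Upper level: the strategic producer picks a bid price.
# Lower level: a uniform-price pool clears offers by merit order.
def clear_pool(bids, demand):
    """Dispatch cheapest offers first; return dispatch and the uniform
    clearing price (price of the marginal accepted offer)."""
    dispatch, price, left = {}, 0.0, demand
    for name, p, cap in sorted(bids, key=lambda t: t[1]):
        if left <= 0:
            break
        q = min(cap, left)
        dispatch[name], price, left = q, p, left - q
    return dispatch, price

rival = [("rival", 30.0, 60.0)]          # guessed rival bid: price, capacity
cost, cap, demand = 20.0, 50.0, 80.0     # strategic producer's data (toy)
best = max(
    ((b, *clear_pool(rival + [("us", b, cap)], demand)) for b in range(20, 41)),
    key=lambda t: (t[2] - cost) * t[1].get("us", 0.0),
)
bid, dispatch, price = best
print(bid, dispatch["us"], price)
```

Bidding below the rival's guessed price wins full dispatch at the rival-set clearing price, which in this toy instance maximizes profit; the MILP formulation reaches the same kind of decision without enumeration.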
43

Parallel implementation of curve reconstruction from noisy samples

Randrianarivony, Maharavo, Brunnett, Guido 06 April 2006 (has links)
This paper is concerned with approximating noisy samples by non-uniform rational B-spline curves with special emphasis on free knots. We show how to set up the problem such that nonlinear optimization methods can be applied efficiently. This involves the introduction of penalizing terms in order to avoid undesired knot positions. We report on our implementation of the nonlinear optimization and we show a way to implement the program in parallel. Parallel performance results are described. Our experiments show that our program has a linear speedup and an efficiency value close to unity. Runtime results on a parallel computer are displayed.
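A minimal numpy sketch of the fixed-knot least-squares stage is given below; the paper's contribution, treating the knots as free variables with penalty terms, would wrap this linear solve inside a nonlinear optimizer. The knot placement and test function here are invented for illustration.

```python
import numpy as np

def bspline_design(t, knots, k=3):
    """Cox-de Boor recursion: design matrix of degree-k B-splines at t."""
    m = len(knots)
    B = np.array([(knots[i] <= t) & (t < knots[i + 1])
                  for i in range(m - 1)], dtype=float).T
    B[t == knots[-1], m - k - 2] = 1.0        # include the right endpoint
    for d in range(1, k + 1):
        nxt = np.zeros((len(t), m - d - 1))
        for i in range(m - d - 1):
            den1 = knots[i + d] - knots[i]
            den2 = knots[i + d + 1] - knots[i + 1]
            if den1 > 0:
                nxt[:, i] += (t - knots[i]) / den1 * B[:, i]
            if den2 > 0:
                nxt[:, i] += (knots[i + d + 1] - t) / den2 * B[:, i + 1]
        B = nxt
    return B

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

interior = np.linspace(0.2, 0.8, 4)   # knots fixed here; free in the paper
knots = np.r_[np.zeros(4), interior, np.ones(4)]
D = bspline_design(t, knots)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
rms = np.sqrt(np.mean((D @ coef - y) ** 2))
print(round(rms, 3))
```

For fixed knots the fit is a single least-squares solve; making the knots unknowns is what turns the problem nonlinear and motivates both the penalty terms and the parallel implementation.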
44

Parallel implementation of surface reconstruction from noisy samples

Randrianarivony, Maharavo, Brunnett, Guido 06 April 2006 (has links)
We consider the problem of reconstructing a surface from noisy samples by approximating the point set with non-uniform rational B-spline surfaces. We focus on the fact that the knot sequences should also be part of the unknown variables, alongside the control points and the weights, in order to find their optimal positions. We show how to set up the free knot problem such that constrained nonlinear optimization can be applied efficiently. We describe in detail a parallel implementation of our approach that gives almost linear speedup. Finally, we provide numerical results obtained on the Chemnitzer Linux Cluster supercomputer.
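The performance claims can be stated precisely: speedup S_p = T_1 / T_p and efficiency E_p = S_p / p, with linear speedup meaning S_p ≈ p and efficiency near unity. A small sketch with hypothetical timings (not measurements from the paper):

```python
# Hypothetical runtimes in seconds for p processors; illustrates the
# speedup/efficiency metrics only, not the paper's measured results.
timings = {1: 120.0, 2: 61.5, 4: 31.8, 8: 16.4}
t1 = timings[1]
for p, tp in sorted(timings.items()):
    speedup = t1 / tp
    efficiency = speedup / p
    print(p, round(speedup, 2), round(efficiency, 2))
```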
45

Hierarchical Combined Plant and Control Design for Thermal Management Systems

Austin L Nash (8063924) 03 December 2019 (has links)
Over the last few decades, many factors, including increased electrification, have led to a critical need for fast and efficient transient cooling. Thermal management systems (TMSs) are typically designed using steady-state assumptions and to accommodate the most extreme operating conditions that could be encountered, such as maximum expected heat loads. Unfortunately, by designing systems in this manner, closed-loop transient performance is neglected and often constrained. If not constrained, conventional design approaches result in oversized systems that are less efficient under nominal operation. Therefore, it is imperative that transient component modeling and subsystem interactions be considered at the design stage to avoid costly future redesigns. Simply put, as technological advances create the need for rapid transient cooling, a new design paradigm is needed to realize next generation systems to meet these demands.

In this thesis, I develop a new design approach for TMSs called hierarchical control co-design (HCCD). More specifically, I develop an HCCD algorithm aimed at optimizing high-fidelity design and control for a TMS across a system hierarchy. This is accomplished in part by integrating system level (SL) CCD with detailed component level (CL) design optimization. The lower-fidelity SL CCD algorithm incorporates feedback control into the design of a TMS to ensure controllability and robust transient response to exogenous disturbances, and the higher-fidelity CL design optimization algorithms provide a way of designing detailed components to achieve the desired performance needed at the SL. Key specifications are passed back and forth between levels of the hierarchy at each iteration to converge on an optimal design that is responsive to desired objectives at each level. The resulting HCCD algorithm permits the design and control of a TMS that is not only optimized for steady-state efficiency, but that can be designed for robustness to transient disturbances while achieving said disturbance rejection with minimal compromise to system efficiency. Several case studies are used to demonstrate the utility of the algorithm in designing systems with different objectives. Additionally, high-fidelity thermal modeling software is used to validate a solution to the proposed model-based design process.
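The back-and-forth between levels can be caricatured as a fixed-point iteration in which the system level proposes a component specification and the component level reports what it can achieve. The functions below are invented placeholders, not the thesis's TMS models; the point is only the iteration pattern and its convergence check.

```python
# Toy sketch of the HCCD iteration pattern: SL proposes a spec, CL returns
# the achievable value, and the loop repeats until the two levels agree.
def system_level(achievable):
    """SL: balance a fixed demand against what CL reports it can deliver."""
    demand = 10.0
    return 0.5 * (demand + achievable)

def component_level(spec):
    """CL: detailed design meets 90% of the requested spec (toy model)."""
    return 0.9 * spec

spec, achieved = 10.0, 0.0
for it in range(100):
    achieved = component_level(spec)
    new_spec = system_level(achieved)
    if abs(new_spec - spec) < 1e-9:
        break
    spec = new_spec
print(round(spec, 4), it)
```

Because the toy component-level response is a contraction, the loop converges to the consistent spec 100/11 ≈ 9.09; in the actual HCCD algorithm the exchanged "key specifications" play the role of `spec` and `achieved`.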
46

Autonomous Motion Learning for Near Optimal Control

Jennings, Alan Lance 21 August 2012 (has links)
No description available.
47

Globale Optimierungsverfahren, garantiert globale Lösungen und energieeffiziente Fahrzeuggetriebe / Global optimization methods, guaranteed global solutions and energy-efficient vehicle transmissions

Stöcker, Martin 03 June 2015 (has links) (PDF)
The focus of this thesis is on methods for solving nonlinear optimization problems with the requirement of finding every global optimum with a guarantee and approximating it to a precision fixed in advance. Closely tied to this deterministic optimization is the computation of bounds on the range of a function over a given hyperrectangle. Various approaches, e.g. based on interval arithmetic, are presented and analyzed. In particular, methods for generating bounds for multivariate polynomials and rational functions via their representation in the Bernstein basis are further developed. The thesis then describes, step by step, the building blocks of a deterministic optimization method that uses the computed range bounds, and examines particular aspects of optimizing polynomial problems. The analysis and treatment of a problem from the development process for vehicle transmissions shows how the approaches developed for solving nonlinear optimization problems can support the search for energy-efficient transmissions with an optimal structure. Contact the author: [Nachname] [.] [Vorname] [@] gmx [.] de
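The Bernstein-basis bound used here is easy to sketch: writing a polynomial in the Bernstein basis on [0,1], the minimum and maximum of its coefficients enclose the range of the function — a guaranteed, though not always tight, enclosure. A minimal univariate version:

```python
from math import comb

def bernstein_bounds(a):
    """a[i] = coefficient of x**i; returns guaranteed (lower, upper)
    bounds on the polynomial's range over [0, 1] via its Bernstein
    coefficients b_j = sum_i C(j,i)/C(n,i) * a[i]."""
    n = len(a) - 1
    b = [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
         for j in range(n + 1)]
    return min(b), max(b)

lo, hi = bernstein_bounds([0.0, -1.0, 1.0])   # p(x) = x**2 - x
print(lo, hi)   # encloses the true range [-0.25, 0]
```

Subdividing the interval and recomputing the bounds tightens the enclosure, which is what makes this a useful pruning tool inside a deterministic branch-and-bound optimizer.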
48

Carteiras de Black-Litterman com análises baseadas em redes neurais. / A neural network approach for Black-Litterman model investor views.

Bernardes, Diego Guerreiro 26 April 2019 (has links)
Neste trabalho é apresentado um sistema autônomo de gestão de carteiras que utiliza Redes Neurais Artificiais para monitoramento do mercado e o modelo de Black-Litterman para otimização da alocação de patrimônio. O sistema analisa as dez ações mais negociadas do índice Bovespa, com redes neurais dedicadas a cada ação, e prevê estimativas de variações de preços para um dia no futuro a partir de indicadores da análise técnica. As estimativas das redes são então inseridas em um otimizador de carteiras, que utiliza o modelo de Black-Litterman, para compor carteiras diárias que empregam a estratégia Long and Short. Os resultados obtidos são comparados a um segundo sistema de trading autônomo, sem o emprego da otimização de carteiras. Foram observados resultados com ótimo índice de Sharpe em comparação ao Benchmark. Buscou-se, assim, contribuir com evidências a favor da utilização de modelos de inferência bayesiana utilizados junto à técnicas quantitativas para a gestão de patrimônio. / This work presents an autonomous portfolio management system that uses neural networks to monitor the market and the Black-Litterman model for portfolio composition. The ten most traded assets of the Bovespa index are analyzed by dedicated neural networks, which produce one-day-ahead return estimates from technical-analysis indicators. Those estimates are fed into the Black-Litterman model, which proposes daily long and short portfolio compositions. The results are compared to a second autonomous trading system without the Black-Litterman approach and show strong performance against the benchmark, especially in the risk-return relation captured by the Sharpe index. The work thus contributes evidence in favor of using Bayesian inference techniques in quantitative portfolio management.
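The Black-Litterman update itself can be sketched with numpy. The numbers below are hypothetical; in a system like the one described here, the view vector q would come from the networks' one-day-ahead estimates.

```python
import numpy as np

def black_litterman(pi, Sigma, P, q, Omega, tau=0.05):
    """Posterior expected returns combining the equilibrium prior pi
    (covariance Sigma, prior scaling tau) with views P @ mu = q whose
    confidence is encoded in Omega."""
    A = np.linalg.inv(tau * Sigma)
    V = P.T @ np.linalg.inv(Omega)
    return np.linalg.inv(A + V @ P) @ (A @ pi + V @ q)

pi = np.array([0.04, 0.06])                    # equilibrium returns (hypothetical)
Sigma = np.array([[0.04, 0.006], [0.006, 0.09]])
P = np.array([[1.0, 0.0]])                     # one absolute view on asset 1
q = np.array([0.08])                           # e.g. a network's return estimate
Omega = np.array([[0.0004]])                   # view confidence
mu = black_litterman(pi, Sigma, P, q, Omega)
print(mu.round(4))
```

The posterior mean for the viewed asset lands between the prior (4%) and the view (8%), weighted by their respective precisions, and the correlated second asset is pulled along through Sigma.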
49

Estimação de parâmetros de modelos compartimentais para tomografia por emissão de pósitrons. / Parameter estimation of compartmental models for positron emission tomography.

Silva, João Eduardo Maeda Moreira da 23 April 2010 (has links)
O presente trabalho possui como metas o estudo, simulação, identificação de parâmetros e comparação estatística de modelos compartimentais utilizados em tomografia por emissão de pósitrons (PET). Para tanto, propõe-se utilizar a metodologia de equações de sensibilidade e o método de Levenberg-Marquardt para a tarefa de estimação de parâmetros característicos das equações diferenciais descritoras dos referidos sistemas. Para comparação entre modelos, foi empregado o critério de informação de Akaike. São consideradas três estruturas compartimentais compostas, respectivamente, por dois compartimentos e duas constantes características, três compartimentos e quatro constantes características e quatro compartimentos e seis constantes características. Os dados considerados neste texto foram sintetizados preocupando-se em reunir as principais características de um exame de tomografia real, tais como tipo e nível de ruído e morfologia de função de excitação do sistema. Para tanto, foram utilizados exames de pacientes do setor de Medicina Nuclear do Instituto do Coração da Faculdade de Medicina da Universidade de São Paulo. Aplicando-se a metodologia proposta em três níveis de ruído (baixo, médio e alto), obteve-se concordância do melhor modelo em graus forte e considerável (com índices de Kappa iguais a 0.95, 0.93 e 0.63, respectivamente). Observou-se que, com elevado nível de ruído e modelos mais complexos (quatro compartimentos), a classificação se deteriora devido ao pequeno número de dados para a decisão. Foram desenvolvidos programas e uma interface gráfica que podem ser utilizadas na investigação, elaboração, simulação e identificação de parâmetros de modelos compartimentais para apoio e análise de diagnósticos clínicos e práticas científicas. / This work has as its goals the study, simulation, parameter identification and statistical comparison of compartmental models used in positron emission tomography (PET). We propose to use the methodology of sensitivity equations and the Levenberg-Marquardt method for estimating the characteristic parameters of the differential equations describing such systems. For model comparison, Akaike's information criterion is applied. We consider three compartmental structures composed, respectively, of two compartments and two characteristic constants, three compartments and four characteristic constants, and four compartments and six characteristic constants. The data considered in this work were synthesized to reproduce the key features of a real tomography exam, such as the type and level of noise and the morphology of the system's input function. To this end, we used exams of patients from the Nuclear Medicine sector of the Heart Institute of the Faculty of Medicine, University of São Paulo. Applying the proposed methodology at three noise levels (low, medium and high), we obtained agreement on the best model at strong and considerable degrees (with Kappa indexes equal to 0.95, 0.93 and 0.63, respectively). It was observed that, with a high noise level and more complex models (four compartments), the classification deteriorates due to the small amount of data available for the decision. Programs and a graphical interface have been developed that can be used in the investigation, development, simulation and parameter identification of compartmental models, supporting clinical diagnosis and scientific practice.
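The Levenberg-Marquardt step can be sketched on a toy one-tissue impulse response C(t) = K1 * exp(-k2 * t); the thesis couples this kind of solver with sensitivity equations for the full compartmental models and a measured input function, none of which are reproduced here.

```python
import numpy as np

def model(theta, t):
    K1, k2 = theta
    return K1 * np.exp(-k2 * t)

def jacobian(theta, t):
    K1, k2 = theta
    e = np.exp(-k2 * t)
    return np.column_stack([e, -K1 * t * e])   # d/dK1, d/dk2

def levenberg_marquardt(theta, t, y, lam=1e-2, iters=50):
    """Damped Gauss-Newton: solve (J'J + lam*I) step = J'r each iteration,
    shrinking lam on accepted steps and growing it on rejected ones."""
    for _ in range(iters):
        r = y - model(theta, t)
        J = jacobian(theta, t)
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        new = theta + step
        if np.sum((y - model(new, t)) ** 2) < np.sum(r ** 2):
            theta, lam = new, lam * 0.5        # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                         # reject: move toward gradient descent
    return theta

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 60)
true = np.array([0.8, 0.4])                    # hypothetical K1, k2
y = model(true, t) + 0.01 * rng.standard_normal(t.size)
est = levenberg_marquardt(np.array([0.3, 1.0]), t, y)
print(est.round(2))
```

For the larger three- and four-compartment models, the Jacobian columns are supplied by integrating the sensitivity equations rather than written in closed form.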
50

Abordagem do problema de fluxo de potência ótimo por métodos de programação não-linear via penalidade quadrática e Função Lagrangeana Aumentada / An approach to the optimal power flow problem by nonlinear programming methods via quadratic penalty and the Augmented Lagrangian function

Nascimento, Clebea Araújo 25 July 1997 (has links)
Neste trabalho são estudadas três metodologias de otimização não-linear: o Método da Função Lagrangeana, o Método da Função Penalidade e o Método da Função Lagrangeana Aumentada. Com o estudo da Função Lagrangeana e do Método da Função Penalidade, foi possível alcançar a formulação da Função Lagrangeana Aumentada com o objetivo de resolver problemas de programação não-linear não-convexos. Testes numéricos são apresentados para o problema não-convexo de programação não-linear conhecido como Fluxo de Potência Ótimo. / In this dissertation, three nonlinear optimization methodologies are studied: the Lagrangian Function Method, the Penalty Function Method and the Augmented Lagrangian Function Method. Through the study of the Lagrangian function and the Penalty Function Method, it was possible to reach the formulation of the Augmented Lagrangian function, aiming to solve nonlinear nonconvex programming problems. Numerical tests are presented for the nonconvex nonlinear programming problem known as optimal power flow.
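The Augmented Lagrangian iteration can be sketched on a toy equality-constrained problem (not the optimal power flow model itself): minimize x^2 + y^2 subject to x + y = 1, whose optimum is (0.5, 0.5) with multiplier -1.

```python
# Augmented Lagrangian: L_A(x, lam) = f(x) + lam*h(x) + (mu/2)*h(x)^2,
# alternating an inner minimization in x with the update lam <- lam + mu*h.
def augmented_lagrangian(mu=10.0, iters=30):
    lam = 0.0
    for _ in range(iters):
        # Inner minimizer of x^2 + y^2 + lam*h + (mu/2)*h^2 with
        # h = x + y - 1, solved in closed form here (by symmetry x = y).
        x = (mu - lam) / (2.0 + 2.0 * mu)
        h = 2.0 * x - 1.0
        lam += mu * h                 # multiplier update
    return x, x, lam

x, y, lam = augmented_lagrangian()
print(round(x, 4), round(y, 4), round(lam, 4))
```

Unlike the pure quadratic penalty, the multiplier update lets the iteration satisfy the constraint exactly with a finite penalty parameter mu, which is the practical advantage the dissertation exploits for the nonconvex optimal power flow problem.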
