301

Model-independent arbitrage bounds on American put options

Höggerl, Christoph January 2015 (has links)
The standard approach to pricing financial derivatives is to determine the discounted, risk-neutral expected payoff under a model. This model-based approach leaves us prone to model risk, as no model can fully capture the complex behaviour of asset prices in the real world. Alternatively, we could use the prices of some liquidly traded options to deduce no-arbitrage conditions on the contingent claim in question. Since the reference prices are taken from the market, we are not required to postulate a model and thus the conditions found have to hold under any model. In this thesis we are interested in the pricing of American put options using the latter approach. To this end, we will assume that European options on the same underlying and with the same maturity are liquidly traded in the market. We can then use the market information incorporated into these prices to derive a set of no-arbitrage conditions that are valid under any model. Furthermore, we will show that in a market trading only finitely many American and co-terminal European options it is always possible to decide whether the prices are consistent with a model or there has to exist arbitrage in the market.
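As a rough, hedged illustration of the kind of model-independent restriction discussed above — not the thesis' actual bounds, which exploit the full set of co-terminal European prices — the sketch below checks the elementary no-arbitrage bounds an American put must satisfy given the spot price and a single co-terminal European put quote; all inputs are hypothetical.

```python
# Illustrative sketch (not from the thesis): elementary model-free bounds that an
# American put with strike K must satisfy, given the spot price and the market
# price of the co-terminal European put with the same strike.
def american_put_bounds(spot, strike, euro_put_price):
    """Return (lower, upper) model-independent bounds on the American put price."""
    # Early exercise is always available, so the price dominates intrinsic value,
    # and an American put is worth at least the corresponding European put.
    lower = max(strike - spot, euro_put_price, 0.0)
    # Exercising can never pay more than the strike, so the strike caps the price.
    upper = strike
    return lower, upper

if __name__ == "__main__":
    lo, hi = american_put_bounds(spot=95.0, strike=100.0, euro_put_price=6.2)
    print(f"Any arbitrage-free American put price must lie in [{lo:.2f}, {hi:.2f}]")
```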
302

Distributed Statistical Learning under Communication Constraints

El Gamal, Mostafa 21 June 2017 (has links)
"In this thesis, we study distributed statistical learning, in which multiple terminals, connected by links with limited capacity, cooperate to perform a learning task. As the links connecting the terminals have limited capacity, the messages exchanged between the terminals have to be compressed. The goal of this thesis is to investigate how to compress the data observations at multiple terminals and how to use the compressed data for inference. We first focus on the distributed parameter estimation problem, in which terminals send messages related to their local observations using limited rates to a fusion center that will obtain an estimate of a parameter related to the observations of all terminals. It is well known that if the transmission rates are in the Slepian-Wolf region, the fusion center can fully recover all observations and hence can construct an estimator having the same performance as that of the centralized case. One natural question is whether Slepian-Wolf rates are necessary to achieve the same estimation performance as that of the centralized case. In this thesis, we show that the answer to this question is negative. We then examine the optimality of data dimensionality reduction via sufficient statistics compression in distributed parameter estimation problems. The data dimensionality reduction step is often needed especially if the data has a very high dimension and the communication rate is not as high as the one characterized above. We show that reducing the dimensionality by extracting sufficient statistics of the parameter to be estimated does not degrade the overall estimation performance in the presence of communication constraints. We further analyze the optimal estimation performance in the presence of communication constraints and we verify the derived bound using simulations. Finally, we study distributed optimization problems, for which we examine the randomized distributed coordinate descent algorithm with quantized updates. In the literature, the iteration complexity of the randomized distributed coordinate descent algorithm has been characterized under the assumption that machines can exchange updates with an infinite precision. We consider a practical scenario in which the messages exchange occurs over channels with finite capacity, and hence the updates have to be quantized. We derive sufficient conditions on the quantization error such that the algorithm with quantized update still converge."
303

Large Scale Matrix Completion and Recommender Systems

Amadeo, Lily 04 September 2015 (has links)
"The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease. "
304

Programação linear e suas aplicações: definição e métodos de soluções / Linear programming and its applications: definition and methods of solutions

Araújo, Pedro Felippe da Silva 18 March 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Problems involving the idea of optimization are found in various fields of study. In Economics, for example, one seeks cost minimization and profit maximization for a firm or a country, given the available budget; in Nutrition, one seeks to supply the essential daily nutrients at the lowest possible cost, considering the financial capacity of the individual; in Chemistry, one studies the minimum pressure and temperature needed to carry out a specific chemical reaction in the shortest possible time; in Engineering, one seeks the lowest cost for producing an aluminium alloy by mixing various raw materials while obeying minimum and maximum restrictions on the respective elements in the alloy. All of the examples cited, along with a multitude of other situations, can be addressed through Linear Programming. These are problems of minimizing or maximizing a linear function subject to linear inequalities or equalities, with the aim of finding the best solution. This work therefore presents methods for solving Linear Programming problems, with emphasis on geometric solutions and on the Simplex Method, the algebraic form of solution. Various situations that fit such problems are presented, from general cases to more specific ones. Before arriving at the solution of linear programming problems, the working ground of this type of optimization, Convex Sets, is built up. The definitions and theorems essential to the understanding and development of these problems are presented, together with discussions of the efficiency of the methods applied. It is shown that there are cases to which the presented solutions do not apply, but most problems fit them efficiently, or at least admit a good approximation.
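As a rough illustration of the kind of problem the abstract describes — not taken from the dissertation itself — the following sketch solves a small diet-style linear program with SciPy's LP solver rather than a hand-coded Simplex; all numbers are hypothetical.

```python
# Illustrative sketch: minimize cost = 2*x1 + 3*x2 subject to nutrient lower bounds
# (rewritten as <= constraints) and non-negativity, using scipy's linprog.
from scipy.optimize import linprog

c = [2.0, 3.0]                     # cost per unit of each food
A_ub = [[-1.0, -2.0],              # -(protein per unit) <= -(required protein)
        [-3.0, -1.0]]              # -(vitamin per unit) <= -(required vitamin)
b_ub = [-8.0, -9.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)              # optimal quantities and minimum cost
```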
305

ROI: An extensible R Optimization Infrastructure

Theußl, Stefan, Schwendinger, Florian, Hornik, Kurt 01 1900 (has links) (PDF)
Optimization plays an important role in many methods routinely used in statistics, machine learning and data science. Often, implementations of these methods rely on highly specialized optimization algorithms, designed to be only applicable within a specific application. However, in many instances recent advances, in particular in the field of convex optimization, make it possible to conveniently and straightforwardly use modern solvers instead with the advantage of enabling broader usage scenarios and thus promoting reusability. This paper introduces the R Optimization Infrastructure which provides an extensible infrastructure to model linear, quadratic, conic and general nonlinear optimization problems in a consistent way. Furthermore, the infrastructure administers many different solvers, reformulations, problem collections and functions to read and write optimization problems in various formats. / Series: Research Report Series / Department of Statistics and Mathematics
306

Hosting Capacity for Renewable Generations in Distribution Grids

January 2018 (has links)
abstract: Nowadays, the widespread introduction of distributed generators (DGs) brings great challenges to the design, planning, and reliable operation of the power system. Therefore, assessing the capability of a distribution network to accommodate renewable power generation is urgent and necessary. In this respect, the concept of hosting capacity (HC) is generally accepted by engineers to evaluate the reliability and sustainability of a system with high penetration of DGs. For HC calculation, existing research offers simulation-based methods that cannot find the global optimum, as well as OPF (optimal power flow) based methods whose many constraints prevent them from obtaining the solution exactly; these, too, cannot guarantee a globally optimal solution. To overcome these shortcomings, I propose a new methodology. First, I start with an optimization problem formulation and provide a flexible objective function to satisfy different requirements. The power flow equations are the basic constraints, and I transfer them from the commonly used polar coordinates to rectangular coordinates. Several constraints derived from the operating criteria are added incrementally. I aim to preserve convexity as much as possible so that the optimal solution can be obtained. Second, I provide a geometric view of the convex problem model, so that the process of finding the global optimum can be visualized clearly. I then implement a segmental optimization tool to speed up the computation: a large network can be divided into segments and solved in parallel, with the results remaining the same. Finally, the robustness of the methodology is demonstrated by extensive simulations on IEEE distribution networks (e.g., 8-bus, 16-bus, 32-bus, 64-bus, 128-bus). The results show that the proposed method calculates the hosting capacity accurately and reaches the globally optimal solution. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
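For reference, a standard form of the power-flow equations in rectangular coordinates — the general relations the abstract refers to, not necessarily the exact formulation used in the thesis — with bus voltage V_i = e_i + j f_i and nodal admittance matrix Y = G + jB:

```latex
% Standard rectangular-coordinate power injections (sketch, not the thesis' exact model):
P_i = \sum_k \bigl[ e_i (G_{ik} e_k - B_{ik} f_k) + f_i (G_{ik} f_k + B_{ik} e_k) \bigr],
\qquad
Q_i = \sum_k \bigl[ f_i (G_{ik} e_k - B_{ik} f_k) - e_i (G_{ik} f_k + B_{ik} e_k) \bigr].
```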
307

Arqueologia do Noroeste Mineiro: análise de indústria lítica da bacia do Rio Preto - Unaí, Minas Gerais, Brasil / Archaeology of Minas Gerais Northwest: lithic industry analysis from Rio Preto bassin - Unaí, Minas Gerais, Brasil.

Xavier, Leandro Augusto Franco 12 February 2008 (has links)
The objective of this dissertation is to present an analysis of the surface lithic industry of the Corredor de Chumbo site, in the Rio Preto basin, located in the Unaí region, northwest of Minas Gerais state, Brazil. Starting from the information made available by the surveys of the IAB (Instituto de Arqueologia Brasileira) in the 1970s and 1980s, conducted through PRONAPA (Programa Nacional de Pesquisa Arqueológica) and PROPEVALE (Programa de Pesquisas Arqueológicas do Vale do São Paulo), the research sought to answer questions about surface lithic sites, which had not yet been studied systematically in the region (Dias Jr & Carvalho, 1982). The work also addressed the relations between the physical environment, the landscape and the archaeological aspects of the studied site. The methodology sought a dialogue between the typology and the technology of the instruments, in addition to formalizing an operational chain for the analyzed lithic industry. The results indicate that the site constitutes an open-air mine (Pellegrin, 1995), with part of the raw-material treatment identified in situ. However, the more advanced stages of the operational chain are also present among the analyzed remains, showing that a site regarded as one of extraction and treatment was also used to finish a range of instruments. The most frequently observed types, which stand out for their quantity and excellence, are the plano-convex artefacts, the scrapers on flakes (façonnage and debitage) and the expedient tools; the latter indicate a high level of reuse of marginal raw materials even while raw material remained abundant at the site and its surroundings.
308

Optimization Methods for a Reconfigurable OTA Chamber

Arnold, Matthew David 01 April 2018 (has links)
Multiple-input multiple-output (MIMO) technology has enabled increased performance of wireless communication devices. The increased complexity associated with MIMO devices requires more realistic testing environments to ensure device performance. This testing can be accomplished by either very accurate but expensive anechoic chambers, less accurate but inexpensive mode-stirred chambers, or the newly introduced reconfigurable over-the-air chamber (ROTAC) that combines the benefits of both anechoic chambers and reverberation chambers. This work focuses on efficient optimization methods to quantify the performance of the ROTAC. First, an efficient optimization technique that combines convex optimization and a simple gradient descent algorithm is developed that can be applied to different ROTAC performance metrics. Plane wave synthesis is used to benchmark performance versus chamber complexity, where the complexity is defined in terms of chamber size and the number of ports in the chamber. Next, the optimization technique is used to study the spatial channel characteristics (power angular spectrum) of the chamber and the generation of arbitrary fading statistics inside the chamber. Lastly, simulation results are compared with practical hardware measurements to highlight the accuracy of the simulation model for the chamber. Overall, this work provides a comprehensive analysis for optimization of different ROTAC performance metrics.
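As a hedged illustration of the plane-wave-synthesis idea mentioned above — not the thesis' chamber model or optimization method — the sketch below chooses complex port weights by least squares so that the superposition of hypothetical port field patterns approximates a desired plane wave over a set of test points; the field matrix, wavelength and geometry are all assumptions.

```python
# Illustrative sketch: plane-wave synthesis as a least-squares fit of port weights.
import numpy as np

rng = np.random.default_rng(0)
num_points, num_ports = 200, 16
# A[m, p]: (hypothetical) complex field of port p at test point m.
A = rng.normal(size=(num_points, num_ports)) + 1j * rng.normal(size=(num_points, num_ports))

# Desired field: a plane wave exp(-j*k.r) sampled at the same test points.
k = 2 * np.pi / 0.05                                  # wavenumber for an assumed 5 cm wavelength
r = rng.uniform(0, 0.3, size=(num_points, 3))         # test points in a 30 cm test zone
direction = np.array([1.0, 0.0, 0.0])
target = np.exp(-1j * k * r @ direction)

# Least-squares port weights and the resulting synthesis error.
w, *_ = np.linalg.lstsq(A, target, rcond=None)
error = np.linalg.norm(A @ w - target) / np.linalg.norm(target)
print(f"relative synthesis error: {error:.3f}")
```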
309

Multiple surface segmentation using novel deep learning and graph based methods

Shah, Abhay 01 May 2017 (has links)
The task of automatically segmenting 3-D surfaces representing object boundaries is important in quantitative analysis of volumetric images, which plays a vital role in numerous biomedical applications. For the diagnosis and management of disease, segmentation of images of organs and tissues is a crucial step in the quantification of medical images. Segmentation finds the boundaries or, in the 3-D case, the surfaces that separate regions, tissues or areas of an image, and it is essential that these boundaries approximate the true boundaries, typically defined by human experts, as closely as possible. Recently, graph-based methods with a global optimization property have been studied and used for various applications. Specifically, the state-of-the-art graph search (optimal surface segmentation) method has been successfully used for various such biomedical applications. Despite their widespread use for image segmentation, real-world medical image segmentation problems often pose difficult challenges, wherein graph-based segmentation methods in their purest form may not be able to perform the segmentation task successfully. This doctoral work has a twofold objective. 1) To identify medical image segmentation problems which are difficult to solve using existing graph-based methods, and to develop novel methods that employ graph search as a building block to improve segmentation accuracy and efficiency. 2) To develop a novel multiple surface segmentation strategy using deep learning which is more computationally efficient and generic than the existing graph-based methods, while eliminating the need for the human expert intervention required by current surface segmentation methods. The developed method is possibly the first of its kind that does not require any human-expert-designed operations. To accomplish the objectives of this thesis work, a comprehensive framework of graph-based and deep learning methods is proposed to achieve the goal by successfully fulfilling the following three aims. First, an efficient, automated and accurate graph-based method is developed to segment surfaces which have steep changes in surface profile and abrupt distance changes between two adjacent surfaces. The developed method is applied and validated on intra-retinal layer segmentation of Spectral Domain Optical Coherence Tomography (SD-OCT) images of eyes with Glaucoma, Age-Related Macular Degeneration and Pigment Epithelium Detachment. Second, a globally optimal graph-based method is developed to attain subvoxel and super-resolution accuracy for the multiple surface segmentation problem while imposing convex constraints. The developed method was applied to layer segmentation of SD-OCT images of normal eyes and to vessel walls in Intravascular Ultrasound (IVUS) images. Third, a deep learning based multiple surface segmentation method is developed which is more generic, computationally efficient and eliminates the human expert interventions (such as transformation design, feature extraction, parameter tuning, constraint modelling, etc.) required in varying capacities by existing surface segmentation methods. The developed method was applied to SD-OCT images of normal and diseased eyes to validate the superior segmentation performance, computational efficiency and generic nature of the framework compared to the state-of-the-art graph search method.
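A minimal sketch of the dynamic-programming idea underlying optimal surface segmentation, reduced to a single surface in a 2-D cost image with a hard smoothness constraint; this toy example is written for this summary and is far simpler than the 3-D, multi-surface graph-search and deep learning methods the abstract describes.

```python
# Illustrative sketch: find one "surface" (a row per column) of minimum total cost,
# subject to |row[c+1] - row[c]| <= delta, by dynamic programming.
import numpy as np

def segment_surface(cost, delta=1):
    rows, cols = cost.shape
    dp = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - delta), min(rows, r + delta + 1)
            best = lo + int(np.argmin(dp[lo:hi, c - 1]))   # cheapest compatible predecessor
            dp[r, c] = cost[r, c] + dp[best, c - 1]
            back[r, c] = best
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmin(dp[:, -1]))
    for c in range(cols - 1, 0, -1):                        # trace the optimal path back
        surface[c - 1] = back[surface[c], c]
    return surface

cost = np.random.default_rng(0).random((20, 30))
cost[10, :] = 0.0                      # an obviously cheap surface along row 10
print(segment_surface(cost, delta=1))  # stays at (or near) row 10
```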
310

Optimization Techniques for Image Processing

Chapagain, Prerak 01 April 2019 (has links)
This research thesis starts off with a basic introduction to optimization and image processing. Because there are several different tools to apply optimization in image processing applications, we started researching one category of mathematical optimization techniques, namely Convex Optimization. This thesis provides a basic background consisting of mathematical concepts, as well as some challenges of employing Convex Optimization in solving problems. One major issue is to be able to identify the convexity of the problem in a potential application (Boyd). After spending a couple of months researching and learning Convex Optimization, my advisor and I decided to go on a different route. We decided to use Heuristic Optimization techniques instead, and in particular, Genetic Algorithms (GA). We also conjectured that the application of GA in image processing for the purpose of object matching could potentially yield good results. As a first step, we used MATLAB as the programming language, and we wrote the GA code from scratch. Next, we applied the GA algorithm in object matching. More specifically, we constructed specific images to demonstrate the effectiveness of the algorithm in identifying objects of interest. The results presented in this thesis indicate that the technique is capable of identifying objects under noise conditions.
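A toy sketch, written for this summary rather than taken from the thesis (which used MATLAB), of a genetic algorithm searching for the offset at which a template best matches an image; the population size, mutation rate and fitness function are all assumptions.

```python
# Illustrative sketch: GA over (row, col) offsets, fitness = negative sum of squared
# differences between the template and the image patch at that offset.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((100, 100))
true_pos = (37, 62)
template = image[true_pos[0]:true_pos[0] + 10, true_pos[1]:true_pos[1] + 10]

def fitness(pos):
    x, y = int(pos[0]) % 91, int(pos[1]) % 91              # keep the patch inside the image
    patch = image[x:x + 10, y:y + 10]
    return -np.sum((patch - template) ** 2)

pop = rng.integers(0, 91, size=(50, 2)).astype(float)      # initial random population
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]                 # selection: keep the fittest
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        child = np.where(rng.random(2) < 0.5, a, b)         # uniform crossover
        child += rng.normal(0, 2, size=2) * (rng.random() < 0.3)  # occasional mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(p) for p in pop])]
print(best.astype(int), "vs true", true_pos)
```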
