151

Nonlinear Approaches to Periodic Signal Modeling

Abd-Elrady, Emad January 2005 (has links)
Periodic signal modeling plays an important role in different fields. The unifying theme of this thesis is the use of nonlinear techniques to model periodic signals. The suggested techniques utilize the user's prior knowledge of the signal waveform, which gives them an advantage over techniques that do not consider such priors.

The technique of Part I relies on the fact that a sine wave passed through a static nonlinear function produces a harmonic spectrum of overtones. Consequently, the estimated signal model can be parameterized as a known periodic function (with unknown frequency) in cascade with an unknown static nonlinearity. The unknown frequency and the parameters of the static nonlinearity are estimated simultaneously using the recursive prediction error method (RPEM). A treatment of the local convergence properties of the RPEM is provided. Also, an adaptive grid point algorithm is introduced to estimate the unknown frequency and the parameters of the static nonlinearity in a number of adaptively estimated grid points. This gives the RPEM more freedom to select the grid points and hence reduces modeling errors.

Limit cycle oscillation problems are encountered in many applications. Therefore, mathematical modeling of limit cycles becomes an essential topic that helps to better understand and/or to avoid limit cycle oscillations in different fields. In Part II, a second-order nonlinear ODE is used to model the periodic signal as a limit cycle oscillation. The right-hand side of the ODE model is parameterized using a polynomial function in the states, and then discretized to allow for the implementation of different identification algorithms. Hence, it is possible to obtain highly accurate models by estimating only a few parameters.

In Part III, different user aspects of the two nonlinear approaches of the thesis are discussed. Finally, topics for future research are presented.
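To see the overtone mechanism that the Part I model builds on, the minimal sketch below drives a static polynomial nonlinearity with a pure sine and inspects the resulting spectrum. The polynomial coefficients and signal parameters are arbitrary illustrative choices, not the thesis's estimated model, and no RPEM estimation is performed here.

```python
import numpy as np

fs, f0, n = 8000.0, 100.0, 8000          # sample rate, fundamental, samples
t = np.arange(n) / fs
u = np.sin(2 * np.pi * f0 * t)           # pure sine driving the model

# Static polynomial nonlinearity (coefficients chosen arbitrarily here)
y = 1.0 * u + 0.5 * u**2 + 0.25 * u**3

# The spectrum of y contains energy only at integer multiples of f0
spec = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[spec > 1e-3])                # ~ [0, 100, 200, 300] Hz
```

Because each term u^k maps sin(wt) into components at multiples of w, all spectral energy lands on the harmonic grid — which is exactly what allows the signal model to be parameterized as a known periodic function in cascade with an unknown static nonlinearity.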
152

Risk theory under partial information with applications in actuarial science and finance

Courtois, Cindy 19 June 2007 (has links)
This thesis is organized around two main themes: improving the management of insurance risks underwritten by insurance companies, and integrating actuarial and financial techniques. The main interest of our approach is to propose new, modern risk-management methods for insurance companies, providing relevant alternatives to the classical approaches of actuaries. In many actuarial problems, the information available about the risks at hand is only partial, and it can be useful to obtain approximations of quantities of interest (distribution functions, stop-loss premiums, adjustment coefficients, ruin probabilities, etc.) based on the first moments of the risks involved. In all cases, it is obviously very important to be able to assess the quality of these approximations. In this respect, obtaining bounds on these quantities of interest makes it possible to control the error that could affect the approximation. From this perspective, most of the thesis is set in the framework of classes of risks sharing the same first moments (notably mean, variance and skewness coefficient). The existence of extremal risks with respect to certain convex-type stochastic order relations then makes it possible to derive bounds on the quantities of interest under consideration. In some cases, in order to obtain sharper bounds, it may also be worthwhile to restrict attention to other classes of risks. For example, the class of discrete risks, which constitutes a particularly important special case in actuarial science, has received our full attention. This thesis consists of articles (written in English) published in national and international journals.
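As a concrete instance of a moment-based bound of the kind studied here, the sketch below compares an empirical stop-loss premium E[(X − d)+] with the classical two-moment upper bound (of Bowers type) that holds for any risk with given mean and variance. The lognormal test distribution and all parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, d = 100.0, 20.0, 120.0        # illustrative mean, std dev, retention

# Any risk with these two moments obeys the bound below; here we just
# pick a lognormal with matching first two moments as a test distribution.
s2 = np.log(1 + (sigma / mu) ** 2)
x = rng.lognormal(np.log(mu) - s2 / 2, np.sqrt(s2), 1_000_000)

empirical = np.mean(np.maximum(x - d, 0.0))                # E[(X - d)+]
bound = 0.5 * (np.sqrt(sigma**2 + (d - mu) ** 2) - (d - mu))

print(f"stop-loss premium ~ {empirical:.3f}  <=  two-moment bound {bound:.3f}")
```

The gap between the empirical premium and the bound is the price of knowing only two moments; the thesis sharpens such bounds by adding moments (e.g. skewness) or restricting the risk class.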
153

Joint source-channel turbo techniques and variable length codes

Jaspar, Xavier 08 April 2008 (has links)
Efficient multimedia communication over mobile or wireless channels remains a challenging problem. To deal with that problem so far, the industry has mostly followed a divide-and-conquer approach, considering separately the source of data (text, image, video, etc.) and the communication channel (electromagnetic waves across the air, a telephone line, a coaxial cable, etc.). The goal is always the same: to transmit (or store) more data reliably per unit of time, of energy, of physical medium, etc. With today's applications, the divide-and-conquer approach has, in a sense, started to show its limits.

Consider, for example, the digital transmission of an image. At the transmitter, the first main step is data compression, at the source level. The number of bits necessary to represent the image with a given level of quality is reduced, usually by removing details in the image that are invisible (or less visible) to the human eye. The second main step is data protection, at the channel level. The transmission is made ideally resistant to deteriorations caused by the channel, by implementing techniques such as time/frequency/space expansions. In a sense, the two steps are quite antagonistic --- we first compress, then expand the original signal --- and have different goals: compression enables the transfer of more data per unit of time/energy/medium, while protection enables data to be transferred reliably. At the receiver, the "reversed" operations are implemented.

This separation into two steps dates back to Shannon's source and channel coding separation theorem of 1948 and has encouraged the division of the research community into two groups, one focusing on data compression, the other on data protection. The separation has also appealed to industry for the design, thereby supported by theory, of layered communication protocols. But the theorem holds only under asymptotic conditions that are rarely satisfied with today's multimedia content and mobile channels. Therefore, it is usually wise in practice to drop this strict separation and to allow at least some cross-layer cooperation between the source and channel layers. This is what lies behind the words joint source-channel techniques. As the name suggests, these techniques are optimized jointly, without a strict separation. Intuitively, since the optimization is less constrained from a mathematical standpoint, the solution can only be better or equivalent.

In this thesis, we investigate a promising subset of these techniques, based on the turbo principle and on variable length codes. The potential of this subset was illustrated for the first time in 2000, with an example that has since been successfully improved in several directions. Unfortunately, most decoding algorithms have so far been developed on an ad hoc basis, without a unified view and often without specifying the approximations made. Besides, most code-related conclusions are based on simulations or on extrinsic information analysis. A theoretical framework on the error-correcting properties of variable length codes in turbo systems is lacking.

The purpose of this work, in three parts, is to fill these gaps to a certain extent. The first part presents the literature in this field and attempts to give a unified overview. The second part proposes a transmission system that generalizes previous systems from the literature, with the simple addition of a repetition code. While most previous systems are designed for bit streams with a high level of residual redundancy, the proposed system has the interesting flexibility to handle different levels of redundancy easily. Its performance is then analyzed for small levels of redundancy, a case not tackled extensively in the literature. This analysis leads notably to the discovery of surprising interleaving gains with reversible variable length codes. The third part develops the mathematical framework that was motivated during the second part but skipped on purpose for the sake of clarity. We first clarify several issues that arise with non-uniform bits and extrinsic information charts, and propose and discuss two methods to compute these charts. Next, several theoretical results are stated on the robustness of variable length codes concatenated with linear error correcting codes. Notably, an approximate average distance spectrum of the concatenated code is rigorously developed. Together with the union bound, this spectrum provides upper bounds on the symbol and frame/packet error rates. These bounds are then analyzed from an interleaving-gain standpoint, and it is proved that the variable length code improves the interleaving gain if its spectrum is bounded.
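To illustrate why variable length codes need the channel protection studied here, the toy sketch below encodes symbols with a made-up prefix-free VLC and shows how a single channel bit error desynchronizes everything that follows; the codebook is an illustrative assumption, not one from the thesis.

```python
# Hypothetical prefix-free variable length code: a single bit error
# desynchronizes the decoder for the rest of the stream.
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODE.items()}

def vlc_encode(symbols):
    return "".join(CODE[s] for s in symbols)

def vlc_decode(bits):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:          # prefix-free: first full match is a symbol
            out.append(DECODE[buf])
            buf = ""
    return out

tx = vlc_encode("abacad")          # '0' '10' '0' '110' '0' '111'
rx = tx[:3] + ("1" if tx[3] == "0" else "0") + tx[4:]   # flip one channel bit

print(vlc_decode(tx))              # ['a', 'b', 'a', 'c', 'a', 'd']
print(vlc_decode(rx))              # decoder loses sync after the flipped bit
```

This loss of synchronization is what makes the concatenation of VLCs with turbo-style error correcting codes, and the distance-spectrum analysis of Part III, worthwhile.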
154

Reliable Real-Time Optimization of Nonconvex Systems Described by Parametrized Partial Differential Equations

Oliveira, I.B., Patera, Anthony T. 01 1900 (has links)
The solution of a single optimization problem often requires computationally demanding evaluations; this is especially true in optimal design of engineering components and systems described by partial differential equations. We present a technique for the rapid and reliable optimization of systems characterized by linear-functional outputs of partial differential equations with affine parameter dependence. The critical ingredients of the method are: (i) reduced-basis techniques for dimension reduction in computational requirements; (ii) an "off-line/on-line" computational decomposition for the rapid calculation of outputs of interest and respective sensitivities in the limit of many queries; (iii) a posteriori error bounds for rigorous uncertainty and feasibility control; (iv) Interior Point Methods (IPMs) for efficient solution of the optimization problem; and (v) a trust-region Sequential Quadratic Programming (SQP) interpretation of IPMs for treatment of possibly non-convex costs and constraints. / Singapore-MIT Alliance (SMA)
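A minimal sketch of the reduced-basis idea behind ingredients (i) and (ii): project a large parametrized system onto a small snapshot basis offline, pre-assemble the affine pieces once, then solve only a tiny system for each new parameter online. The affine form A(mu) = A0 + mu*A1 and all dimensions below are illustrative assumptions, not the paper's actual problem.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 500, 8                       # full dimension vs. reduced-basis size

# Assumed affine parameter dependence: A(mu) = A0 + mu * A1
A0 = np.diag(2.0 + rng.random(N)); A1 = np.diag(rng.random(N))
f = rng.random(N)

# Offline: snapshot solutions at sample parameters span the reduced basis Z
snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in np.linspace(0.1, 1, n)]
Z, _ = np.linalg.qr(np.array(snapshots).T)          # orthonormal basis, N x n

# Offline: pre-project the affine pieces once
A0r, A1r, fr = Z.T @ A0 @ Z, Z.T @ A1 @ Z, Z.T @ f

# Online: each new parameter query costs only an n x n solve
mu = 0.37
u_rb = Z @ np.linalg.solve(A0r + mu * A1r, fr)
u_full = np.linalg.solve(A0 + mu * A1, f)
print(np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))  # small error
```

In the many-query limit of an optimization loop, the online cost is independent of N, which is what makes embedding such models inside an IPM/SQP iteration tractable.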
155

The market impact of short-sale constraints

Nilsson, Roland January 2005 (has links)
The thesis addresses two areas of research within financial economics: empirical asset pricing and the borderline area between finance and economics, with emphasis on econometric methods. The empirical asset pricing section considers the effects of short-sale constraints on both the stock market and the derivatives market. Many arbitrage relations in the economy are intimately tied to the possibility of going short. One such arbitrage relation is the put-call parity (PCP) relation, which dictates a pricing relation between several derivative instruments and their underlying assets. During the latter part of the 1980s, stock options could be traded in Sweden while shorting was not permitted. The main contribution of the paper is to show that this shorting prohibition indeed implied larger deviations from PCP. Furthermore, this effect is only relevant for firms with stocks that were not shortable abroad, as firms with stocks shortable abroad did not show any deviations from PCP. The second paper investigates the asymmetries found in the momentum effect. Previous studies have found that the momentum effect is mostly due to the fact that a portfolio of loser firms tends to continue performing poorly, rather than because a portfolio of winner firms continues to do well. The explanation for this phenomenon investigated in the paper is based on the theoretical work of Diamond and Verrecchia (1985). In this model they demonstrate that restrictions on the ability to go short have the result that negative news is incorporated into prices more slowly than positive news. The main contribution of my paper is to explore this hypothesis and provide a link to the momentum effect. This has been achieved by considering Sweden during the 1980s, when the rare situation of a complete shorting prohibition was enforced. The second section of the thesis primarily addresses the CCAPM model. The third paper considers the joint effect on the CCAPM of market frictions, different utility specifications, and more stringent econometric analysis, since all these remedies tend to co-exist and should not be considered on a stand-alone basis, as has been the case in the previous literature. The paper also shows how several measures of misspecification available in the literature are implemented when market frictions are present. In particular, the paper presents the Hansen and Jagannathan measure with market frictions. The final paper considers L1-norm-based alternatives to the L2-norm-based Hansen and Jagannathan (1997) measure. It is well known that L1-norm methods may show good properties in the presence of non-normal distributions, for instance with respect to heavy-tailed and/or asymmetric distributions. These methods provide more robust estimators, since they are less easily influenced by outliers or other extreme observations. The basic intuition for this is that L2-norm methods involve squaring errors, which magnifies large deviations, while L1-norm methods are based on absolute deviations. Since financial data are known to frequently display non-normal properties, L1-norm methods have found considerable use in financial economics. / Diss. Stockholm : Handelshögskolan, 2005
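The arbitrage relation in the first paper, European put-call parity, states C − P = S − K·e^(−rT). The snippet below measures the deviation from parity for illustrative quoted prices; all numbers are made up, not data from the thesis.

```python
import math

# Illustrative quotes (hypothetical numbers, not data from the thesis)
S, K, r, T = 100.0, 95.0, 0.04, 0.5      # spot, strike, rate, years to expiry
C, P = 9.10, 2.50                        # observed call and put prices

# European put-call parity: C - P = S - K * exp(-r * T)
parity_rhs = S - K * math.exp(-r * T)
deviation = (C - P) - parity_rhs
print(f"PCP deviation: {deviation:.4f}")

# When shorting the stock is prohibited, the arbitrage trade that would
# correct a negative deviation (short stock, buy call, sell put) cannot be
# executed, so such deviations can persist -- the effect the paper tests.
```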
157

Packing Unit Disks

Lafreniere, Benjamin J. January 2008 (has links)
Given a set of unit disks in the plane with union area A, what fraction of A can be covered by selecting a pairwise disjoint subset of the disks? Richard Rado conjectured 1/4 and proved 1/4.41. In this thesis, we consider a variant of this problem where the disjointness constraint is relaxed: selected disks must be k-colourable with disks of the same colour pairwise-disjoint. Rado's problem is then the case where k = 1, and we focus our investigations on what can be proven for k > 1. Motivated by the problem of channel-assignment for Wi-Fi wireless access points, in which the use of 3 or fewer channels is a standard practice, we show that for k = 3 we can cover at least 1/2.09 and for k = 2 we can cover at least 1/2.82. We present a randomized algorithm to select and colour a subset of n disks to achieve these bounds in O(n) expected time. To achieve the weaker bounds of 1/2.77 for k = 3 and 1/3.37 for k = 2 we present a deterministic O(n^2) time algorithm. We also look at what bounds can be proven for arbitrary k, presenting two different methods of deriving bounds for any given k and comparing their performance. One of our methods is an extension of the method used to prove bounds for k = 2 and k = 3 above, while the other method takes a novel approach. Rado's proof is constructive, and uses a regular lattice positioned over the given set of disks to guide disk selection. Our proofs are also constructive and extend this idea: we use a k-coloured regular lattice to guide both disk selection and colouring. The complexity of implementing many of the constructions used in our proofs is dominated by a lattice positioning step. As such, we discuss the algorithmic issues involved in positioning lattices as required by each of our proofs. In particular, we show that a required lattice positioning step used in the deterministic O(n^2) algorithm mentioned above is 3SUM-hard, providing evidence that this algorithm is optimal among algorithms employing such a lattice positioning approach. We also present evidence that a similar lattice positioning step used in the constructions for our better bounds for k = 2 and k = 3 may not have an efficient exact implementation.
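A small sketch of the quantity in question: given unit disks, select a pairwise-disjoint subset and estimate, by Monte Carlo, the fraction of the union area that the subset covers. The greedy selection rule below is a naive stand-in for the lattice-guided selection and colouring developed in the thesis, and the instance is randomly generated.

```python
import numpy as np

rng = np.random.default_rng(2)
centers = rng.uniform(0, 10, size=(40, 2))      # 40 unit disks, random centers

# Greedy pairwise-disjoint subset: keep a disk if it overlaps no kept disk.
kept = []
for c in centers:
    if all(np.linalg.norm(c - k) >= 2.0 for k in kept):  # unit radius => dist 2
        kept.append(c)
kept = np.array(kept)

# Monte Carlo estimate of area(union of kept) / area(union of all)
pts = rng.uniform(-1, 11, size=(100_000, 2))
in_all = (np.linalg.norm(pts[:, None] - centers[None], axis=2) <= 1).any(1)
in_kept = (np.linalg.norm(pts[:, None] - kept[None], axis=2) <= 1).any(1)
print(in_kept.sum() / in_all.sum())             # covered-fraction estimate
```

Rado's theorem guarantees at least 1/4.41 of the union area for some disjoint subset (k = 1); the thesis's k-coloured lattice constructions certify the larger fractions quoted above for k = 2 and k = 3.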
159

Spectral Image Processing Theory and Methods: Reconstruction, Target Detection, and Fundamental Performance Bounds

Krishnamurthy, Kalyani January 2011 (has links)
This dissertation presents methods and associated performance bounds for spectral image processing tasks such as reconstruction and target detection, which are useful in a variety of applications such as astronomical imaging, biomedical imaging and remote sensing. The key idea behind our spectral image processing methods is the fact that important information in a spectral image can often be captured by low-dimensional manifolds embedded in high-dimensional spectral data. Based on this key idea, our work focuses on the reconstruction of spectral images from photon-limited and distorted observations.

This dissertation presents a partition-based, maximum penalized likelihood method that recovers spectral images from noisy observations and enjoys several useful properties; namely, it (a) adapts to spatial and spectral smoothness of the underlying spectral image, (b) is computationally efficient, (c) is near-minimax optimal over an anisotropic Holder-Besov function class, and (d) can be extended to inverse problem frameworks.

There are many applications where accurate localization of desired targets in a spectral image is more crucial than a complete reconstruction. Our work draws its inspiration from classical detection theory and compressed sensing to develop computationally efficient methods to detect targets from few projection measurements of each spectrum in the spectral image. Assuming the availability of a spectral dictionary of possible targets, the methods discussed in this work detect targets whether or not they come from the spectral dictionary. The theoretical performance bounds offer insight on the performance of our detectors as a function of the number of measurements, signal-to-noise ratio, background contamination and properties of the spectral dictionary.

A related problem is that of level set estimation, where the goal is to detect the regions in an image where the underlying intensity function exceeds a threshold. This dissertation studies the problem of accurately extracting the level set of a function from indirect projection measurements without reconstructing the underlying function. Our partition-based set estimation method extracts the level set of proxy observations constructed from such projection measurements. The theoretical analysis presented in this work illustrates how the projection matrix, proxy construction and signal strength of the underlying function affect the estimation performance. / Dissertation
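A rough sketch of the projection-based detection idea: each pixel's spectrum is observed only through a few projections y = Φx + noise, and a matched-filter statistic against the projected dictionary decides whether a known target spectrum is present. The dictionary, projection matrix, noise level, and threshold below are all illustrative assumptions, not the dissertation's detectors or bounds.

```python
import numpy as np

rng = np.random.default_rng(3)
p, m = 128, 16                       # spectral bands; projections per pixel

# Hypothetical spectral dictionary of candidate targets (unit-norm columns)
D = rng.standard_normal((p, 4))
D /= np.linalg.norm(D, axis=0)

Phi = rng.standard_normal((m, p)) / np.sqrt(m)   # random projection operator

def detect(x, sigma=0.05, tau=0.5):
    """Matched filter on projected data: max correlation against Phi @ D."""
    y = Phi @ x + sigma * rng.standard_normal(m)
    stats = (Phi @ D).T @ y                      # correlation with each target
    j = int(np.argmax(stats))
    return (j, stats[j]) if stats[j] > tau else (None, stats[j])

target = D[:, 2]                                 # pixel containing target 2
background = 0.05 * rng.standard_normal(p)       # target-free pixel
print(detect(target))                            # likely (2, stat > tau)
print(detect(background))                        # likely (None, small stat)
```

Random projections approximately preserve inner products, so the statistic for the true target concentrates near 1 while off-target statistics stay near 0; how fast this separation degrades with m, noise, and dictionary coherence is what the dissertation's bounds quantify.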
160

Multiple-Input Multiple-Output Wireless Systems: Coding, Distributed Detection and Antenna Selection

Bahceci, Israfil 26 August 2005 (has links)
This dissertation studies a number of important issues that arise in multiple-input multiple-output wireless systems. First, wireless systems equipped with multiple transmit and multiple receive antennas are considered, where energy-based antenna selection is performed at the receiver. Three different situations are considered: (i) selection over an i.i.d. MIMO fading channel, (ii) selection over a spatially correlated fading channel, and (iii) selection for space-time coded OFDM systems. In all cases, explicit upper bounds are derived and it is shown that, using the proposed antenna selection, one can achieve the same diversity order as that attained by full-complexity MIMO systems. Next, the joint source-channel coding problem for MIMO antenna systems is studied and a turbo-coded multiple description code for multiple antenna transmission is developed. Simulations indicate that with the proposed iterative joint source-channel decoding, which exchanges extrinsic information between the source code and the channel code, one can achieve better reconstruction quality than can be achieved by single-description codes at the same rate. The rest of the dissertation deals with wireless networks. Two problems are studied: channel coding for cooperative diversity in wireless networks, and distributed detection in wireless sensor networks. First, a turbo-code based channel code for three-terminal full-duplex wireless relay channels is proposed, where both the source and the relay nodes employ turbo codes. An iterative turbo decoding algorithm exploiting the information arriving from both the source and relay nodes is proposed. Simulation results show that the proposed scheme can perform very close to the capacity of a wireless relay channel. Next, the parallel and serial binary distributed detection problem in wireless sensor networks is investigated. Detection strategies based on single-bit and multiple-bit decisions are considered. Expressions for the detection and false alarm rates are derived and used for designing the optimal detection rules at all sensor nodes. Also, an analog approach to distributed detection in wireless sensor networks is proposed, where each sensor node simply amplifies and forwards its sufficient statistic to the fusion center. This method requires very simple processing at the local sensors. Numerical examples indicate that the analog approach is superior to the digital approach in many cases.
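A minimal sketch of energy-based receive antenna selection, the mechanism in case (i): from an i.i.d. Rayleigh-fading channel matrix, keep the L receive antennas whose channel energies (row norms) are largest. The dimensions and the capacity-style figure of merit are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rx, L = 2, 4, 2              # transmit antennas, receive antennas, kept

# i.i.d. Rayleigh-fading MIMO channel: one row per receive antenna
H = (rng.standard_normal((n_rx, n_tx)) +
     1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# Energy-based selection: keep the L rows with the largest energy sum |h|^2
energy = np.sum(np.abs(H) ** 2, axis=1)
chosen = np.argsort(energy)[-L:]
H_sel = H[chosen]

print("per-antenna energy:", np.round(energy, 3))
print("selected antennas:", sorted(chosen.tolist()))
print("capacity-style metric:",
      np.log2(np.linalg.det(np.eye(n_tx) + H_sel.conj().T @ H_sel).real))
```

Selection needs only L RF chains instead of n_rx, and the dissertation's bounds show the full diversity order survives this reduction.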
