91

The spectral properties and singularities of monodromy-free Schrödinger operators

Hemery, Adrian D. January 2012 (has links)
The main object of study is the theory of Schrödinger operators with meromorphic potentials, having trivial monodromy in the complex domain. In the first part we study the spectral properties of a class of such operators related to the classical Whittaker-Hill equation (-d^2/dx^2+Acos2x+Bcos4x)Ψ=λΨ. The equation, for special choices of A and B, is known to have the remarkable property that half of the gaps eventually become closed (semifinite-gap operator). Using the Darboux transformation we construct new trigonometric examples of semifinite-gap operators with real, smooth potentials. A similar technique applied to the Lamé operator gives smooth, real, finite-gap potentials in terms of classical Jacobi elliptic functions. In the second part we study the singular locus of monodromy-free potentials in the complex domain. A particular case is given by the zeros of Wronskians of Hermite polynomials, which are studied in detail. We introduce a class of partitions (doubled partitions) for which we observe a direct qualitative relationship between the pattern of zeros and the shape of the corresponding Young diagram. For the Wronskians W(H_n,H_{n+k}) we give an asymptotic formula for the curve on which zeros lie as n → ∞. We also give some empirical formulas for asymptotic behaviour of zeros of Wronskians of 3 and 4 Hermite polynomials. In the last chapter we apply the theory of monodromy-free operators to produce new vortex equilibria in the periodic case and in the presence of background flow.
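The Wronskians W(H_n, H_{n+k}) whose zeros are studied in the second part are easy to examine numerically. The sketch below is illustrative only (it is not code from the thesis); it uses NumPy's physicists' Hermite polynomials to form W(H_n, H_{n+k}) = H_n H'_{n+k} - H'_n H_{n+k} and compute its complex zeros, the point sets whose patterns are compared with Young diagrams.

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.hermite import Hermite

def hermite_wronskian(n, k):
    """Return W(H_n, H_{n+k}) = H_n*H'_{n+k} - H'_n*H_{n+k} in the power basis."""
    hn = Hermite.basis(n)        # physicists' Hermite polynomial H_n
    hm = Hermite.basis(n + k)    # H_{n+k}
    w = hn * hm.deriv() - hn.deriv() * hm
    return w.convert(kind=Polynomial)

# Complex zeros of W(H_4, H_7); plotting them reveals the patterns that the
# thesis relates to the Young diagram of the underlying (doubled) partition.
zeros = hermite_wronskian(4, 3).roots()
print(np.round(zeros, 3))
```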
92

Algorithms for MARS spectral CT.

Knight, David Warwick January 2015 (has links)
This thesis reports on algorithmic design and software development completed for the Medipix All Resolution System (MARS) multi-energy CT scanner. Two areas of research are presented: the speed and usability improvements made to the post-reconstruction material decomposition software, and the development of two algorithms designed for the implementation of a novel voxel system in the MARS image reconstruction chain. The MARS MD software package is the primary material analysis tool used by members of the MARS group. The photon-processing ability of the MARS scanner is what makes material decomposition possible. MARS MD loads the reconstructed images created after a scan and creates a new set of images, one for every individual material within the object. The software is capable of discriminating at least six different materials, plus air, within the object. A significant speed improvement to this program was attained by moving the code base from GNU Octave to MATLAB and applying well-known optimisation routines, while the creation of a graphical user interface made the software more accessible and easier to use. The changes made to MARS MD represented a significant contribution to the productivity of the entire MARS group. A drawback of the MARS image reconstruction chain is the time required to generate images of a scanned object. Compared to commercially available CT systems, the MARS system takes several orders of magnitude longer to do essentially the same job. With up to eight energy bins' worth of data to consider during reconstruction, compared to a single energy bin in most commercial scanners, it is not surprising that there is a shortfall. A major performance limitation of the reconstruction process lies in the calculation of the small distances travelled by every detected photon within individual portions of the reconstruction volume. This thesis investigates a novel volume geometry, developed by Prof. Phil Butler and Dr. Peter Renaud, that is designed to partially mitigate this time constraint. By treating the volume as a cylinder instead of a traditional cubic structure, the number of individual path-length calculations can be drastically reduced. Two sets of algorithms are prototyped, coded in MATLAB, C++ and CUDA, and finally compared in terms of speed and visual accuracy.
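For orientation, the sketch below illustrates the general idea of post-reconstruction material decomposition: each voxel's per-energy-bin attenuation is modelled as a non-negative mix of known basis-material attenuations and unmixed by least squares. This is a generic, assumed formulation, not the MARS MD algorithm, and the basis matrix values are placeholders rather than calibration data.

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder basis matrix: rows are energy bins, columns are basis materials
# (e.g. water, calcium, iodine). Real values would come from calibration scans.
basis = np.array([
    [0.20, 0.95, 1.80],
    [0.18, 0.60, 1.20],
    [0.17, 0.40, 0.70],
    [0.16, 0.30, 0.45],
])

def decompose_voxel(mu_per_bin):
    """Non-negative material amounts that best explain one voxel's attenuation."""
    amounts, _residual = nnls(basis, mu_per_bin)
    return amounts

# Synthetic voxel: mostly water with small amounts of calcium and iodine.
measured = basis @ np.array([0.90, 0.05, 0.02])
print(decompose_voxel(measured))      # recovers approximately [0.90, 0.05, 0.02]
```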
93

Long cycles : with particular reference to Kondratieffs

Davies, Gaynor Margaret January 1995 (has links)
No description available.
94

Analysis of phonocardiographic signals using advanced signal processing techniques

Haghighi-Mood, Ali January 1996 (has links)
No description available.
95

The regulator, the Bloch group, hyperbolic manifolds, and the η-invariant

Cisneros-Molina, Jose Luis January 1999 (has links)
No description available.
96

Superfluid turbulence

Melotte, David John January 1999 (has links)
No description available.
97

Estimating the fractional differencing parameter, d, of a long memory time series and simulating stationary and invertible time series

Zhou, Yinghui January 2000 (has links)
No description available.
98

Methods for in vivo ¹³C magnetic resonance spectroscopy

Mann, Robert David January 1999 (has links)
No description available.
99

Theory and realization of novel algorithms for random sampling in digital signal processing

Lo, King Chuen January 1996 (has links)
Random sampling is a technique which overcomes the alias problem in regular sampling. The randomization, however, destroys the symmetry property of the transform kernel of the discrete Fourier transform. Hence, when transforming a randomly sampled sequence to its frequency spectrum, the Fast Fourier transform cannot be applied and the computational complexity is N². The objectives of this research project are: (1) To devise sampling methods for random sampling such that computation may be reduced while the anti-alias property of random sampling is maintained. Two methods of inserting limited regularities into the randomized sampling grids are proposed: parallel additive random sampling and hybrid additive random sampling, both of which can save at least 75% of the multiplications required. The algorithms also lend themselves to implementation on a multiprocessor system, which further enhances the speed of the evaluation. (2) To study the auto-correlation sequence of a randomly sampled sequence as an alternative means of confirming its anti-alias property. The anti-alias property of the two proposed methods can be confirmed by using convolution in the frequency domain, but the same conclusion is also reached by analysing in the spatial domain the auto-correlation of such sample sequences. A technique to evaluate the auto-correlation sequence of a randomly sampled sequence with a regular step size is proposed; the technique may also serve as an algorithm to convert a randomly sampled sequence to a regularly spaced sequence having a desired Nyquist frequency. (3) To provide a rapid spectral estimation using a coarse kernel. The approximate method proposed by Mason in 1980, which trades accuracy for speed of computation, is introduced to make random sampling more attractive. (4) To suggest possible applications for random and pseudo-random sampling. To fully exploit its advantages, random sampling has been adopted in measurement instruments where computing a spectrum is either minimal or not required; such applications in instrumentation are easily found in the literature. In this thesis, two applications in digital signal processing are introduced. (5) To suggest an inverse transformation for random sampling so as to complete a two-way process and to broaden its scope of application.
Apart from the above, a case study of realizing the prime factor algorithm with regular sampling in a transputer network is given in Chapter 2, and a rough estimation of the signal-to-noise ratio for a spectrum obtained from random sampling is given in Chapter 3. Although random sampling is alias-free, problems in computational complexity and noise prevent it from being adopted widely in engineering applications. In the conclusions, the criteria for adopting random sampling are put forward and directions for its development are discussed.
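To make the alias-free property concrete, the following sketch (illustrative only, not one of the thesis algorithms) additively random-samples a tone lying above the Nyquist frequency of the mean sampling rate and recovers it by direct evaluation of the nonuniform spectrum; this direct evaluation is exactly the N² computation whose cost the proposed sampling schemes aim to reduce.

```python
import numpy as np

rng = np.random.default_rng(0)
f_sig, mean_dt, N = 7.3, 0.2, 400     # 7.3 Hz tone; mean rate 5 Hz (Nyquist 2.5 Hz)

# Additive random sampling: t_{k+1} = t_k + mean_dt + jitter
t = np.cumsum(mean_dt + rng.uniform(-0.5, 0.5, N) * mean_dt)
x = np.sin(2 * np.pi * f_sig * t)

# Direct nonuniform spectrum estimate on a fine frequency grid: O(N^2) work
freqs = np.linspace(0.0, 12.0, 1200)
spec = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ x) / N

# The peak appears near 7.3 Hz, well above the 2.5 Hz Nyquist limit of the
# mean rate; regular sampling at 5 Hz would alias the tone down to 2.3 Hz.
print("peak at %.2f Hz" % freqs[np.argmax(spec)])
```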
100

IP Multicasting over DVB-T/H and eMBMS : Efficient System Spectral Efficiency Schemes for Wireless TV Distributions

Rahman, S.M. Hasibur January 2012 (has links)
In today’s DVB-T/H (Digital Video Broadcasting-Terrestrial/Handheld) systems, broadcasting is employed, meaning that TV programs are sent over all transmitters, even where there are no viewers. This is an inefficient use of spectrum and transmitter equipment. IP multicasting is increasingly used for IP-TV over fixed broadband access. In this thesis, IP multicasting is proposed also for terrestrial and mobile TV, meaning that TV programs are transmitted only where viewers have sent join messages over an interaction channel. This would substantially improve the system spectral efficiency (SSE) in (bit/s)/Hz/site, allowing reduced spectrum for the same number of TV programs. It would further improve the multiuser system spectral efficiency (MSSE, a measure defined in this study), allowing an increased number of TV programs to be transmitted over a given spectrum. Further efficiency or coverage improvement may be achieved by forming single-frequency networks (SFNs), i.e. groups of adjacent transmitters sending the same signal simultaneously on the same carrier frequency. The combination of multicasting and SFNs is also the principle of eMBMS (evolved Multimedia Broadcast Multicast Service) for cellular mobile TV over 4G LTE. PARPS (packet and resource plan scheduling), an optimized approach to dynamically forming SFNs, is employed in this study. The target applications are DVB-T/H and eMBMS. Combining SFNs with non-continuous transmission (switching transmitters on and off dynamically) may give further gain, and is used in LTE, but is difficult to achieve in DVB-T/H. Seven schemes are suggested and analyzed in order to compare unicasting, multicasting and broadcasting, with or without SFN, with or without PARPS, and with or without continuous transmission. The schemes are evaluated in terms of coverage probability, SSE and MSSE, and are simulated in MATLAB for a system of 4 transmitters with random viewer positions. Zipf-law TV program selection is employed, using both a homogeneous and a heterogeneous user behavior model. The SFN schemes provide substantially better system spectral efficiency than the multi-frequency network (MFN) schemes. For the heterogeneous fading case, IP multicasting over non-continuous-transmission dynamic SFN achieves gains of as much as 905% and 1054% in system spectral efficiency and multiuser system spectral efficiency, respectively, over broadcasting in an MFN, and gains of 425% and 442%, respectively, over IP multicasting in an MFN. Additionally, the SFN schemes give a diversity gain of 3 dB over MFN, which may be utilized to increase the coverage probability by 4.35% for the same data rate, or to increase the data rate by 27% for the same coverage as MFN. Keywords: IP multicasting, broadcasting, coverage probability, system spectral efficiency, multiuser system spectral efficiency, DVB-T/H, eMBMS, mobile TV, IP-TV, SFN, MFN, dynamic SFN, PARPS, homogeneous, heterogeneous, Zipf-law
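The source of the multicast gain can be illustrated with a toy calculation (a minimal sketch under assumed parameters, not the thesis simulator): with Zipf-distributed program popularity, the viewers of a site typically join only a small subset of the program line-up, so multicasting requires far fewer simultaneous streams per site than broadcasting everything.

```python
import numpy as np

rng = np.random.default_rng(1)
n_programs, viewers_per_site, zipf_exponent = 50, 30, 1.0   # assumed parameters

# Zipf-law program popularity, normalised to a probability distribution
popularity = 1.0 / np.arange(1, n_programs + 1) ** zipf_exponent
popularity /= popularity.sum()

# Each viewer joins one program; multicast sends only the joined programs
choices = rng.choice(n_programs, size=viewers_per_site, p=popularity)
streams_multicast = len(np.unique(choices))
streams_broadcast = n_programs            # broadcast sends everything regardless

print(f"broadcast: {streams_broadcast} streams, multicast: {streams_multicast} streams")
```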
