  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Wideband spectrum sensing using sub-Nyquist sampling / Shanu Aziz

Aziz, Shanu January 2014 (has links)
Spectrum sensing is the process of identifying the frequencies of a spectrum in which Signals Of Interest (SOI) are present. For continuous-time signals in a wideband spectrum, the information rate is often much lower than the bandwidth would suggest; such signals are therefore known as sparse signals. A review of the literature in [1] and [2] indicates that two of the many techniques used in wideband spectrum sensing of sparse signals are the Wideband Compressive Radio Receiver (WCRR) for multitone signals and the mixed analog-digital system for multiband signals. In both techniques, even though the signals are sampled at sub-Nyquist rates using Compressive Sampling (CS), the recovery algorithms they employ differ from standard CS reconstruction: the WCRR uses a simple correlation function to detect carrier frequencies, while the mixed analog-digital system uses a simple digital algorithm to identify the frequency support. A literature survey showed that no ModelSim simulation model written in VHDL (VHSIC Hardware Description Language) exists for wideband spectrum sensing of multitone and multiband signals using sub-Nyquist sampling. If such a ModelSim model is developed in VHDL, it can be readily adapted for FPGA implementation, leading to a realistic hardware prototype for use in Cognitive Radio (CR) communication systems. The research work reported in this dissertation deals with the implementation of simulation models of the WCRR and the mixed analog-digital system in ModelSim using VHDL. Algorithms corresponding to the different blocks in the conceptual design of these models were formulated prior to the coding phase. After coding, the models were analysed with test parameter choices to ensure that they meet the design requirements. Different parametric choices were then assigned for a parametric study, and a sufficient number of simulation iterations was carried out to verify and validate the models. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2014
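
The correlation-based detection described in this record can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration (not the dissertation's VHDL implementation): a sparse multitone signal is compressively sampled through a random-demodulator-style projection, and candidate carriers are then ranked by correlating the measurements against tone templates passed through the same front end. All signal parameters and names are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse multitone signal: a few active carriers in a wide band (illustrative values).
N = 4096                         # Nyquist-rate samples in one sensing window
M = 512                          # sub-Nyquist measurements (compression ratio N/M = 8)
true_tones = [450, 1210, 1733]   # active frequency bins (hypothetical)
n = np.arange(N)
x = sum(np.cos(2 * np.pi * k / N * n) for k in true_tones)
x += 0.1 * rng.standard_normal(N)

# Random-demodulator-style measurement: chipping sequence + integrate-and-dump.
chips = rng.choice([-1.0, 1.0], size=N)
y = (x * chips).reshape(M, N // M).sum(axis=1)   # M compressive measurements

# Correlation detector: project each candidate tone through the same front end
# and correlate with the measurements (no full CS reconstruction is performed).
scores = np.empty(N // 2)
for k in range(N // 2):
    template = np.cos(2 * np.pi * k / N * n)
    tmpl_meas = (template * chips).reshape(M, N // M).sum(axis=1)
    scores[k] = abs(y @ tmpl_meas) / (np.linalg.norm(tmpl_meas) + 1e-12)

detected = np.argsort(scores)[-3:]
print("detected tone bins:", sorted(detected.tolist()), "true:", true_tones)
```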
312

On large deviations and design of efficient importance sampling algorithms

Nyquist, Pierre January 2014 (has links)
This thesis consists of four papers, presented in Chapters 2-5, on the topics of large deviations and stochastic simulation, particularly importance sampling. The four papers make theoretical contributions to the development of a new approach for analyzing the efficiency of importance sampling algorithms by means of large deviation theory, and to the design of efficient algorithms using the subsolution approach developed by Dupuis and Wang (2007). In the first two papers of the thesis, the random output of an importance sampling algorithm is viewed as a sequence of weighted empirical measures and weighted empirical processes, respectively. The main theoretical results are a Laplace principle for the weighted empirical measures (Paper 1) and a moderate deviation result for the weighted empirical processes (Paper 2). The Laplace principle for weighted empirical measures is used to propose an alternative measure of efficiency based on the associated rate function. The moderate deviation result for weighted empirical processes is an extension of what can be seen as the empirical process version of Sanov's theorem. Together with a delta method for large deviations, established by Gao and Zhao (2011), we show moderate deviation results for importance sampling estimators of the risk measures Value-at-Risk and Expected Shortfall. The final two papers of the thesis are concerned with the design of efficient importance sampling algorithms using subsolutions of partial differential equations of Hamilton-Jacobi type (the subsolution approach). In Paper 3 we show a min-max representation of viscosity solutions of Hamilton-Jacobi equations. In particular, the representation suggests a general approach for constructing subsolutions to equations associated with terminal value problems and exit problems. Since the design of efficient importance sampling algorithms is connected to such subsolutions, the min-max representation facilitates the construction of efficient algorithms. In Paper 4 we consider the problem of constructing efficient importance sampling algorithms for a certain type of Markovian intensity model for credit risk. The min-max representation of Paper 3 is used to construct subsolutions to the associated Hamilton-Jacobi equation, and the corresponding importance sampling algorithms are investigated both theoretically and numerically. The thesis begins with an informal discussion of stochastic simulation, followed by brief mathematical introductions to large deviations and importance sampling. / QC 20140424
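
For orientation, the importance sampling estimator and the weighted empirical measure that the first two papers study can be written in generic notation; this is standard background, not text from the thesis.

```latex
% Importance sampling of \mu = E_F[h(X)] using a proposal G with F absolutely
% continuous with respect to G: draw X_1,...,X_n ~ G and weight by the
% likelihood ratio W = dF/dG.
\[
  \hat{\mu}_n \;=\; \frac{1}{n}\sum_{i=1}^{n} W(X_i)\, h(X_i),
  \qquad W(x) = \frac{dF}{dG}(x).
\]
% The random output of the algorithm can be summarized by the weighted
% empirical measure, whose large deviation behaviour is the object of study:
\[
  \nu_n \;=\; \frac{1}{n}\sum_{i=1}^{n} W(X_i)\,\delta_{X_i}.
\]
```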
313

On Infinitesimal Inverse Spectral Geometry

dos Santos Lobo Brandao, Eduardo January 2011 (has links)
Spectral geometry is the field of mathematics which concerns relationships between geometric structures of manifolds and the spectra of canonical differential operators. Inverse Spectral Geometry in particular concerns the geometric information that can be recovered from the knowledge of such spectra. A deep link between inverse spectral geometry and sampling theory has recently been proposed. Specifically, it has been shown that the very shape of a Riemannian manifold can be discretely sampled and then reconstructed up to a cutoff scale. In the context of Quantum Gravity, this means that, in the presence of a physically motivated ultraviolet cuttoff, spacetime could be regarded as simultaneously continuous and discrete, in the sense that information can. In this thesis, we look into the properties of the Laplace-Beltrami operator on a compact Riemannian manifold with no boundary. We discuss the behaviour of its spectrum regarding a perturbation of the Riemannian structure. Specifically, we concern ourselves with infinitesimal inverse spectral geometry, the inverse spectral problem of locally determining the shape of a Riemannian manifold. We discuss the recenSpectral geometry is the field of mathematics which concerns relationships between geometric structures of manifolds and the spectra of canonical differential operators. Inverse Spectral Geometry in particular concerns the geometric information that can be recovered from the knowledge of such spectra. A deep link between inverse spectral geometry and sampling theory has recently been proposed. Specifically, it has been shown that the very shape of a Riemannian manifold can be discretely sampled and then reconstructed up to a cutoff scale. In the context of Quantum Gravity, this means that, in the presence of a physically motivated ultraviolet cuttoff, spacetime could be regarded as simultaneously continuous and discrete, in the sense that information can. In this thesis, we look into the properties of the Laplace-Beltrami operator on a compact Riemannian manifold with no boundary. We discuss the behaviour of its spectrum regarding a perturbation of the Riemannian structure. Specifically, we concern ourselves with infinitesimal inverse spectral geometry, the inverse spectral problem of locally determining the shape of a Riemannian manifold. We discuss the recently presented idea that, in the presence of a cutoff, a perturbation of a Riemannian manifold could be uniquely determined by the knowledge of the spectra of natural differential operators. We apply this idea to the specific problem of determining perturbations of the two dimensional flat torus through the knowledge of the spectrum of the Laplace-Beltrami operator.tly presented idea that, in the presence of a cutoff, a perturbation of a Riemannian manifold could be uniquely determined by the knowledge of the spectra of natural differential operators. We apply this idea to the specific problem of determining perturbations of the two dimensional flat torus through the knowledge of the spectrum of the Laplace-Beltrami operator.
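
For concreteness, the unperturbed spectrum referred to in the last sentence is explicitly known; the formula below is the standard Laplace-Beltrami spectrum of a flat 2-torus and is quoted as background, not taken from the thesis.

```latex
% Eigenvalues of the Laplace-Beltrami operator on the flat torus
% T^2 = R^2 / (L_1 Z \times L_2 Z), with plane-wave eigenfunctions:
\[
  -\Delta\, e^{2\pi i \left(\frac{m x}{L_1} + \frac{n y}{L_2}\right)}
  \;=\; 4\pi^2\!\left(\frac{m^2}{L_1^2} + \frac{n^2}{L_2^2}\right)
  e^{2\pi i \left(\frac{m x}{L_1} + \frac{n y}{L_2}\right)},
  \qquad m, n \in \mathbb{Z}.
\]
% Infinitesimal inverse spectral geometry asks how these eigenvalues move,
% to first order, under a small perturbation g -> g + \epsilon h of the metric.
```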
314

Single Complex Image Matting

Shen, Yufeng 06 1900 (has links)
Single image matting refers to the problem of accurately estimating the foreground object given only one input image. It is a fundamental technique in many image editing applications and has been extensively studied in the literature. Various matting techniques and systems have been proposed, and impressive advances have been achieved in efficiently extracting high quality mattes. However, existing matting methods usually perform well only for relatively uniform, smooth images and generate noisy alpha mattes for complex images. The main motivation of this thesis is to develop a new matting approach that can handle complex images. We examine in detail the color sampling and alpha propagation techniques, two popular techniques employed by many state-of-the-art matting methods, to understand why their performance degrades significantly for complex images. The main contribution of this thesis is the development of two novel matting algorithms that can handle images with complex texture patterns. The first proposed matting method is aimed at complex images whose background has a homogeneous texture pattern. A novel texture synthesis scheme is developed to use the known texture information to infer the texture in the unknown region and thus alleviate the problems introduced by a textured background. The second proposed matting algorithm is for complex images with heterogeneous texture patterns. A new foreground and background pixel identification algorithm is used to identify the pure foreground and background pixels in the unknown region and thus effectively handle the large color variation introduced by complex images. Our experimental results, both qualitative and quantitative, show that the proposed matting methods can effectively handle images with complex backgrounds and generate cleaner alpha mattes than existing matting methods.
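
For reference, matting is conventionally posed through the compositing equation below, which is standard in the matting literature rather than specific to this thesis; it is what makes the problem ill-posed, since each pixel supplies three equations in seven unknowns.

```latex
% Compositing equation: each observed pixel I is a convex blend of an
% (unknown) foreground color F and background color B with opacity alpha:
\[
  I_p \;=\; \alpha_p F_p + (1 - \alpha_p) B_p, \qquad \alpha_p \in [0, 1].
\]
% Per pixel this is 3 equations (RGB) in 7 unknowns (F, B in RGB and alpha),
% hence the need for priors such as color sampling or alpha propagation.
```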
315

An evaluation of the method of random action sampling

Magneberg, Rutger January 1995 (has links)
Diss. Stockholm : Handelshögskolan, 1995
316

Something to do with community structure : the influence of sampling and analysis on measures of community structure

Anderson, Barbara J., n/a January 2006 (has links)
Diversity indices confound two components: species richness and evenness. Community structure should therefore be evaluated by employing separate measures of the number of species and their relative abundances. However, the relative abundances of species are dependent on the abundance measure used. Although the use of biomass or productivity is recommended by theory, in practice a surrogate measure is more often used. Frequency (local or relative) and point-quadrat cover provide two objective measures of abundance which are fast, less destructive and avoid problems associated with distinguishing individuals. However, both give discrete bounded data which may further alter the relative abundances of species. These measures have a long history of use and, as the need for objective information on biodiversity becomes more pressing, their use is likely to become more widespread. Consequently, it seems appropriate to investigate the effect of these abundance measures, and the resolution at which they are used, on calculated evenness. Field, artificial and simulated data were used to investigate the effect of abundance measure and resolution on evidence for community structure. The field data consisted of seventeen sites. Sites from four vegetation types (saltmeadow, geothermal, ultramafic and high-altitude meadow) were sampled in three biogeographical regions. Most of the indices of community structure (species richness, diversity and evenness) detected differences between the different vegetation types, and different niche-apportionment models were fitted to the field data from saltmeadow and geothermal vegetation. Estimates of community structure based on local frequency and point-quadrat data differed. Local frequency tended to give higher calculated evenness, whereas point-quadrat data tended to fit niche apportionment models where local frequency data failed. The effect of resolution on the eighteen evenness indices investigated depended on community species richness and the particular index used. The investigated evenness indices were divided into three groups (symmetric, continuous and traditional indices) based on how they ranked real and artificially constructed communities. Contrary to Smith and Wilson's recommendation, the symmetric indices E[VAR] and E[Q] proved unsuitable for use with most types of plant data. In particular, E[Q] tends to assign most communities low values and has a dubious relationship with intrinsic evenness. The continuous indices, E[MS] and E[2,1], were the indices best able to discriminate between field, artificial and simulated communities, and their use should be re-evaluated. Traditional indices used with low resolution tended to elevate the calculated evenness, especially in species-rich communities. The relativized indices, E[Hurlbert] and EO[dis], were an exception, as they were always able to attain the minimum of zero; however, they were more sensitive to changes in resolution, particularly when resolution was low. Overall, traditional indices based on Hill's ratios, including E[1/D] (=E[2,0]), and G[2,1] gave the best performance, while the general criticism of the use of Pielou's J′ as an index of evenness was further substantiated by this study. As a final recommendation, ecologists are implored to investigate their data and the likely effects that sampling and analysis have had on the calculated values of their indices.
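
As context for the evenness indices discussed in this record, the snippet below is a minimal Python sketch, assuming simple abundance counts, of two of the traditional quantities mentioned: Shannon diversity H′ and Pielou's J′ = H′ / ln S. It is illustrative only and does not reproduce the thesis's index comparisons.

```python
import numpy as np

def shannon_diversity(abundances):
    """Shannon diversity H' from a vector of species abundances."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()          # relative abundances, ignoring absent species
    return -(p * np.log(p)).sum()

def pielou_evenness(abundances):
    """Pielou's J' = H' / ln(S), where S is the number of species present."""
    s = np.count_nonzero(abundances)
    return shannon_diversity(abundances) / np.log(s)

counts = [50, 30, 10, 5, 5]          # hypothetical species counts at one site
print(round(shannon_diversity(counts), 3), round(pielou_evenness(counts), 3))
```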
317

A Quality of Service Monitoring System for Service Level Agreement Verification

Ta, Xiaoyuan January 2006 (has links)
Master of Engineering by Research / Service level agreement (SLA) monitoring measures network Quality of Service (QoS) parameters to evaluate whether the service performance complies with the SLAs. It is becoming increasingly important for both Internet service providers (ISPs) and their customers. However, the rapid expansion of the Internet makes SLA monitoring a challenging task. As an efficient method to reduce both complexity and overheads for QoS measurements, sampling techniques have been used in SLA monitoring systems. In this thesis, I conduct a comprehensive study of sampling methods for network QoS measurements. I develop an efficient sampling strategy, which makes the measurements less intrusive and more efficient, and I design network performance monitoring software that monitors QoS parameters such as packet delay, packet loss and jitter for SLA monitoring and verification. The thesis starts with a discussion of the characteristics of QoS metrics relevant to the design of the monitoring system and the challenges in monitoring these metrics. Major measurement methodologies for monitoring these metrics are introduced. Existing monitoring systems can be broadly classified into two categories: active and passive measurements. The advantages and disadvantages of both methodologies are discussed, and an active measurement methodology is chosen to realise the monitoring system. Secondly, the thesis describes the most common sampling techniques, such as systematic sampling, Poisson sampling and stratified random sampling. Theoretical analysis is performed on the fundamental limits of sampling accuracy and on the performance of the sampling techniques, and is validated using simulation with real traffic. Both theoretical analysis and simulation results show that stratified random sampling with optimum allocation achieves the best performance compared with the other sampling methods. However, stratified sampling with optimum allocation requires extra statistics from the parent traffic traces, which cannot be obtained in real applications. To overcome this shortcoming, a novel adaptive stratified sampling strategy is proposed, based on stratified sampling with optimum allocation. A least-mean-square (LMS) linear prediction algorithm is employed to predict the required statistics from past observations. Simulation results show that the proposed adaptive stratified sampling method closely approaches the performance of stratified sampling with optimum allocation. Finally, a detailed introduction to the SLA monitoring software design is presented. Measurement results are presented and the systematic error in the measurements is calibrated. Measurements between various remote sites have demonstrated impressively good QoS provided by Australian ISPs for premium services.
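
The "optimum allocation" mentioned in this record is usually Neyman allocation, which assigns the sampling budget to strata in proportion to stratum size times stratum standard deviation. The sketch below is a generic Python illustration of that rule applied to a delay-measurement scenario; it assumes the per-stratum statistics are known, which, as the abstract notes, is exactly what an adaptive scheme must predict in practice. All variable names and traffic figures are hypothetical.

```python
import numpy as np

def neyman_allocation(stratum_sizes, stratum_stds, total_samples):
    """Allocate a sampling budget across strata: n_h proportional to N_h * sigma_h."""
    weights = np.asarray(stratum_sizes, dtype=float) * np.asarray(stratum_stds, dtype=float)
    alloc = total_samples * weights / weights.sum()
    return np.maximum(1, np.round(alloc)).astype(int)   # at least one sample per stratum

def stratified_mean(strata, allocation, rng):
    """Estimate the overall mean (e.g. packet delay) from stratified random samples."""
    strata = [np.asarray(s, dtype=float) for s in strata]
    total = sum(len(s) for s in strata)
    estimate = 0.0
    for s, n_h in zip(strata, allocation):
        sample = rng.choice(s, size=min(n_h, len(s)), replace=False)
        estimate += (len(s) / total) * sample.mean()
    return estimate

rng = np.random.default_rng(1)
# Hypothetical delay traces split into three strata (e.g. by time of day), in ms.
strata = [rng.normal(20, 2, 1000), rng.normal(35, 10, 1000), rng.normal(80, 25, 500)]
alloc = neyman_allocation([len(s) for s in strata], [s.std() for s in strata], total_samples=100)
print("allocation:", alloc, "estimated mean delay:", round(stratified_mean(strata, alloc, rng), 2))
```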
318

Topics in Bayesian sample size determination and Bayesian model selection

Cheng, Dunlei. Stamey, James D. January 2007 (has links)
Thesis (Ph.D.)--Baylor University, 2007. / Includes bibliographical references (p. 84-87).
319

Intelligent sampling over wireless sensor networks /

Zhuang, Yongzhen. January 2008 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2008. / Includes bibliographical references (leaves 123-129). Also available in electronic version.
320

The application of two-dimensional genomic DNA nylon matrix for environmental samples analysis

Tsai, Yeng-Chieh. January 2009 (has links)
Thesis (M.C.E.)--University of Delaware, 2009. / Principal faculty advisor: Chin-Pao Huang, Dept. of Civil & Environmental Engineering. Includes bibliographical references.
