11

Finding A Subset Of Non-defective Items From A Large Population : Fundamental Limits And Efficient Algorithms

Sharma, Abhay 05 1900 (has links) (PDF)
Consider a large population containing a small number of defective items. A commonly encountered goal is to identify the defective items, for example, in order to isolate them. In the classical non-adaptive group testing (NAGT) approach, one groups the items into subsets, or pools, and runs a test for the presence of a defective item on each pool. Using the outcomes of the tests, a fundamental goal of group testing is to reliably identify the complete set of defective items with as few tests as possible. In contrast, this thesis studies a non-defective subset identification problem, where the primary goal is to identify a subset of non-defective items given the test outcomes. The main contributions of this thesis are as follows.

We derive upper and lower bounds on the number of non-adaptive group tests required to identify a given number of non-defective items with arbitrarily small probability of incorrect identification as the population size goes to infinity. We show that an impressive reduction in the number of tests is achievable compared to the approach of first identifying all the defective items and then picking the required number of non-defective items from the complement set. For example, in the asymptotic regime with population size N → ∞, to identify L non-defective items out of a population containing K defective items, when the tests are reliable, our results show that O((K log K) L / N) tests are sufficient when L ≪ N − K and K is fixed. In contrast, the number of tests required by the conventional approach grows with N as O(K log K log(N/K)). Our results are derived using a general sparse signal model, by virtue of which they are also applicable to other important sparse-signal-based applications such as compressive sensing.

We present a bouquet of computationally efficient and analytically tractable non-defective subset recovery algorithms. By analyzing the probability of error of the algorithms, we obtain bounds on the number of tests required for non-defective subset recovery with arbitrarily small probability of error. By comparing with the information-theoretic lower bounds, we show that the upper bounds on the number of tests are order-wise tight up to a log(K) factor, where K is the number of defective items. Our analysis accounts for the impact of both additive noise (false positives) and dilution noise (false negatives). We also provide extensive simulation results that compare the relative performance of the different algorithms and provide further insight into their practical utility. The proposed algorithms significantly outperform the straightforward approaches of testing items one by one, and of first identifying the defective set and then choosing the non-defective items from the complement set, in terms of the number of measurements required to ensure a given success rate.

We investigate the use of adaptive group testing for finding a spectrum hole of a specified bandwidth within a given wideband of interest. We propose a group-testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent sub-bands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios. Energy-based hypothesis tests provide an occupancy decision over the group of sub-bands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth.
We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme. We illustrate the performance of the proposed algorithms via Monte Carlo simulations.
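For intuition, here is a minimal, hedged sketch (not the thesis's algorithms) of the basic observation that makes non-defective subset recovery cheap when tests are reliable: any item that appears in at least one negative pool is certainly non-defective. The population size, pool count and Bernoulli pooling design below are arbitrary choices for illustration.

```python
# Toy noiseless non-adaptive group testing: items in a negative pool are non-defective.
import numpy as np

rng = np.random.default_rng(0)
N, K, L, T = 1000, 10, 100, 60          # population, defectives, wanted non-defectives, tests

defective = rng.choice(N, size=K, replace=False)
is_defective = np.zeros(N, dtype=bool)
is_defective[defective] = True

# Random Bernoulli pooling matrix: A[t, i] = True if item i is placed in pool t.
A = rng.random((T, N)) < (1.0 / K)

# Noiseless test outcomes: a pool is positive iff it contains a defective item.
outcomes = (A.astype(int) @ is_defective.astype(int)) > 0

# Every item that appears in at least one negative pool must be non-defective.
in_some_negative_pool = A[~outcomes].sum(axis=0) > 0
candidates = np.flatnonzero(in_some_negative_pool)

subset = candidates[:L]                  # pick any L of the certified items
print(len(candidates), "items certified non-defective;",
      "all correct:", not is_defective[subset].any())
```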
12

Modelování vlastností digitálních modulace pro DVB-T v Matlabu / Simulation of the DVB-T digital modulation in Matlab

Málek, Pavel January 2008 (has links)
Digital Video Broadcasting (the DVB standard) is a system for transmitting television signals in digital form. Several types of modulation are used within the system: QPSK modulation in the satellite broadcasting system DVB-S and M-QAM modulations in the cable system DVB-C. This thesis mainly deals with the terrestrial digital video broadcasting system DVB-T, which uses OFDM modulation. This type of signal processing is more resistant to the distortion caused by multipath propagation, which is the main problem in DVB-T. In this thesis, a Matlab application was created that simulates the digital modulation and demodulation of DVB-T transmission signals. Models of the transmission channel are inserted between the modulator and demodulator structures. The user of the application can set the broadcasting parameters (e.g., constellation, OFDM mode, guard interval insertion) and the type of distortion (additive noise, reflected and delayed signals). By calculating the channel bit error rate (BER), the user can study the influence of the broadcasting parameters on the transmission quality.
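As a rough companion to the description above (and not a reproduction of the thesis's Matlab application), the following sketch passes QPSK-mapped OFDM symbols with a cyclic-prefix guard interval through an AWGN channel and estimates the channel BER; the FFT size, guard length and SNR are arbitrary illustrative values.

```python
# Minimal OFDM-over-AWGN simulation with QPSK subcarriers and a cyclic prefix.
import numpy as np

rng = np.random.default_rng(1)
n_fft, n_guard, n_symbols, snr_db = 2048, 256, 20, 8

bits = rng.integers(0, 2, size=(n_symbols, n_fft, 2))
# QPSK mapping: one bit on the in-phase axis, one on the quadrature axis.
qpsk = ((1 - 2 * bits[..., 0]) + 1j * (1 - 2 * bits[..., 1])) / np.sqrt(2)

time_sig = np.fft.ifft(qpsk, axis=1) * np.sqrt(n_fft)            # OFDM modulation
tx = np.concatenate([time_sig[:, -n_guard:], time_sig], axis=1)  # add cyclic prefix

# AWGN channel.
noise_var = 10 ** (-snr_db / 10)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape)
                                  + 1j * rng.standard_normal(tx.shape))
rx = tx + noise

rx_payload = rx[:, n_guard:]                                     # strip the guard interval
rx_freq = np.fft.fft(rx_payload, axis=1) / np.sqrt(n_fft)        # OFDM demodulation

bits_hat = np.stack([rx_freq.real < 0, rx_freq.imag < 0], axis=-1).astype(int)
ber = np.mean(bits_hat != bits)
print(f"channel BER at {snr_db} dB SNR: {ber:.2e}")
```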
13

Adaptive Filters for 2-D and 3-D Digital Images Processing

Martišek, Karel January 2012 (has links)
This thesis deals with adaptive filters for the visualization of high-resolution images. The theoretical part describes the operating principle of a confocal microscope and gives a mathematically rigorous definition of the notion of a digital image. Image processing is approached both in the frequency domain (using the 2-D and 3-D discrete Fourier transform and frequency filters) and via digital geometry (using adaptive histogram equalization with an adaptive neighbourhood). The modifications needed to work with non-ideal images containing additive and impulse noise are also described. The final part of the thesis is devoted to the spatial reconstruction of objects from their optical sections. All procedures and algorithms are also implemented in software developed as part of this work.
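As a small illustration of the frequency-domain approach mentioned above (the thesis's adaptive histogram equalization and 3-D processing are not reproduced), the sketch below suppresses additive noise in a synthetic test image with an ideal low-pass mask built on the 2-D discrete Fourier transform; the image, cutoff radius and noise level are invented.

```python
# Frequency-domain low-pass filtering of a noisy synthetic image via the 2-D DFT.
import numpy as np

rng = np.random.default_rng(2)
n = 256
y, x = np.mgrid[0:n, 0:n]
image = ((x // 32 + y // 32) % 2).astype(float)        # checkerboard test pattern
noisy = image + 0.3 * rng.standard_normal((n, n))      # additive Gaussian noise

spectrum = np.fft.fftshift(np.fft.fft2(noisy))         # centred 2-D spectrum
radius = np.hypot(x - n // 2, y - n // 2)
lowpass = radius <= 40                                 # ideal low-pass mask
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass)).real

print("rms error vs. clean pattern  before:", np.std(noisy - image).round(3),
      " after:", np.std(filtered - image).round(3))
```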
14

Systèmes de numérisation hautes performances – Architectures robustes adaptées à la radio cognitive. / High performance digitization systems - robust architecture adapted to the cognitive radio

Song, Zhiguo 17 December 2010 (has links)
Future cognitive radio applications require digitization systems that can convert, alternately or simultaneously, either a very wide band with low resolution or a narrower band with higher resolution, in a versatile (i.e., software-controlled) manner. Digitization systems based on Hybrid Filter Banks (HFB) are an attractive solution for this purpose. An HFB consists of a bank of analog filters, a bank of analog-to-digital converters and a bank of digital filters. However, HFBs are very sensitive to analog imperfections. The goal of this thesis was to propose and study a calibration method that corrects the analog errors in the digital part and that can be implemented in an embedded system. This work led to a new HFB calibration method based on Multichannel Adaptive Equalization (MCAE), which adjusts the digital filter coefficients to match the real analog filters. The method injects a known test signal at the input of the HFB and adapts the digital part so that it reconstructs the corresponding reference signal. Depending on the desired reconstruction (wideband, subband, or a particular narrow band), several test and reference signals are proposed. These signals were validated by computing the optimal digital filters with the Wiener-Hopf method and evaluating their reconstruction performance in the frequency domain. To approach the optimal digital filters with minimal computational complexity, a stochastic gradient algorithm was implemented. The robustness of the method was evaluated in the presence of noise in the analog part and of quantization in the digital part. A test signal that is more robust to analog noise was proposed, and the numbers of bits needed to code the different data in the digital part were dimensioned to reach the targeted performance (namely a 14-bit resolution). This thesis is a step toward the realization of future digitization systems based on HFBs.
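The following is a hedged toy version of the calibration idea only, not the thesis's HFB design: two short FIR filters stand in for the analog channels, and the digital filters are adapted by an LMS stochastic-gradient update so that their summed output reconstructs a delayed copy of a known test signal. The filter lengths, taps, step size and white-noise test signal are all invented for this illustration.

```python
# Two-channel LMS (stochastic gradient) equalization toward a delayed reference.
import numpy as np

rng = np.random.default_rng(3)
n, taps, delay, mu = 20000, 16, 8, 0.01

test = rng.standard_normal(n)                        # known wideband test signal
analog = [np.array([1.0, 0.5, -0.2]),                # stand-ins for the analog filters
          np.array([0.8, -0.3, 0.1])]
channels = [np.convolve(test, h)[:n] for h in analog]

w = [np.zeros(taps) for _ in channels]               # digital filters to adapt
desired = np.concatenate([np.zeros(delay), test])[:n]

err_hist = []
for k in range(taps, n):
    frames = [c[k - taps:k][::-1] for c in channels]  # most recent samples first
    y = sum(np.dot(wi, fi) for wi, fi in zip(w, frames))
    e = desired[k] - y                                # reconstruction error
    for wi, fi in zip(w, frames):                     # LMS update on every branch
        wi += mu * e * fi
    err_hist.append(e * e)

print("mean squared error, first vs last 1000 samples:",
      np.mean(err_hist[:1000]).round(4), np.mean(err_hist[-1000:]).round(4))
```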
15

Estimation of a class of nonlinear time series models.

Sando, Simon Andrew January 2004 (has links)
The estimation and analysis of signals that have a polynomial phase and constant or time-varying amplitudes in additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and a number of estimation schemes are available. The fundamental difficulty in estimating the parameters of these types of signals is the nonlinear characteristic of the signal, which leads to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem of the unobservability of the true, noise-free phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution is removed via demodulation, and the same procedure is applied to estimate the next parameter, and so on. This is clearly an issue in that errors in the estimation of the high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques, i.e., given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios. Updating is done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the presented schemes, which include likelihood, least squares and Bayesian estimation schemes. Other issues of importance to the full estimation problem, namely error in the time variable, non-constant amplitude, and unknown model order, are also considered.
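For concreteness, the sketch below illustrates the signal model discussed above: a constant-amplitude polynomial-phase signal in additive complex Gaussian noise whose coefficients are recovered, at high SNR, by unwrapping the phase and fitting a polynomial by least squares. The coefficients, SNR and model order are arbitrary, and this is the naive approach rather than the iterative refinement schemes developed in the dissertation.

```python
# Polynomial-phase signal in additive noise: naive unwrap-and-regress estimation.
import numpy as np

rng = np.random.default_rng(4)
n = np.arange(512)
coeffs = np.array([2e-5, 0.02, 0.3])                 # a2, a1, a0 (np.polyfit ordering)
phase = np.polyval(coeffs, n)
signal = np.exp(1j * phase)                          # constant unit amplitude

snr_db = 20
sigma = 10 ** (-snr_db / 20)
noisy = signal + sigma / np.sqrt(2) * (rng.standard_normal(n.size)
                                       + 1j * rng.standard_normal(n.size))

unwrapped = np.unwrap(np.angle(noisy))               # only valid when the SNR is high
estimate = np.polyfit(n, unwrapped, deg=2)           # least-squares phase regression
print("true:", coeffs, "\nestimated:", estimate.round(6))
```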
16

Imaging Reflectometry Measuring Thin Films Optical Properties

Běhounek, Tomáš January 2009 (has links)
This thesis presents an innovative method called Imaging Reflectometry, which is based on the principle of spectroscopic reflectometry and is intended for evaluating the optical properties of thin films. The reflectance spectrum is obtained from intensity maps recorded by a CCD camera. Each record corresponds to a preset wavelength, and the reflectance spectrum can be determined at a chosen point or over a selected region. A theoretical reflectance model is fitted to the measured data using the Levenberg-Marquardt algorithm, whose outputs are the optical properties of the film, their accuracy, and an assessment of the reliability of the results obtained via a sensitivity analysis of changes in the initial settings of the optimization algorithm.
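As a hedged sketch of the fitting step described above (the actual instrument model and software of the thesis are not reproduced), the example below fits the standard normal-incidence Airy reflectance of a single non-absorbing film on a substrate to synthetic noisy spectra with a Levenberg-Marquardt solver; the film thickness, refractive indices and noise level are made up.

```python
# Levenberg-Marquardt fit of a single-layer thin-film reflectance model.
import numpy as np
from scipy.optimize import least_squares

def reflectance(wavelength_nm, thickness_nm, n_film, n_substrate=1.45):
    """Normal-incidence Airy reflectance of a non-absorbing film on a substrate."""
    r01 = (1.0 - n_film) / (1.0 + n_film)
    r12 = (n_film - n_substrate) / (n_film + n_substrate)
    beta = 2.0 * np.pi * n_film * thickness_nm / wavelength_nm
    r = (r01 + r12 * np.exp(-2j * beta)) / (1.0 + r01 * r12 * np.exp(-2j * beta))
    return np.abs(r) ** 2

rng = np.random.default_rng(5)
wavelengths = np.linspace(400.0, 800.0, 200)
true_d, true_n = 310.0, 2.0                        # "unknown" film parameters
measured = reflectance(wavelengths, true_d, true_n) + 0.005 * rng.standard_normal(200)

residuals = lambda p: reflectance(wavelengths, p[0], p[1]) - measured
fit = least_squares(residuals, x0=[300.0, 1.9], method="lm")   # Levenberg-Marquardt
print("fitted thickness and index:", fit.x.round(3))
```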
17

Анализ стохастической модели взаимодействия потребителей : магистерская диссертация / Analysis of the stochastic model of consumer network

Павлецов, М. М., Pavletsov, M. M. January 2023 (has links)
The thesis considers an n-dimensional discrete model that describes the interaction dynamics of n consumers. As part of the deterministic analysis, two- and one-parameter bifurcation diagrams and regime maps were constructed, and bifurcation scenarios were described. Multistability zones of the system were found and investigated, and the basins of attraction of the attractors were plotted. A stochastic version of the model is then studied: the effect of additive and parametric noise on the system is described. Using the stochastic sensitivity function, a comparative analysis of the sensitivity of equilibria and cycles was carried out. Based on the confidence-domain method, the noise intensities at which noise-induced phenomena are observed were obtained.
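A one-dimensional stand-in for the kind of analysis sketched above (the thesis's n-dimensional consumer model is not reproduced here): a logistic map with additive noise, where the empirical spread around the stable equilibrium is compared with the prediction of the stochastic sensitivity function w = 1/(1 − f′(x*)²), so that Var(x) ≈ ε²w for small noise. The map, parameter value and noise intensity are arbitrary.

```python
# Stochastic sensitivity of a stable fixed point of a noisy 1-D discrete map.
import numpy as np

rng = np.random.default_rng(6)
r, eps, steps = 2.8, 0.01, 200_000

f = lambda x: r * x * (1.0 - x)
x_star = 1.0 - 1.0 / r                     # stable fixed point of the logistic map
slope = r * (1.0 - 2.0 * x_star)           # f'(x*)
w_theory = 1.0 / (1.0 - slope ** 2)        # stochastic sensitivity of the equilibrium

x = np.empty(steps)
x[0] = x_star
for t in range(steps - 1):
    x[t + 1] = f(x[t]) + eps * rng.standard_normal()   # additive noise

burn = 1_000
w_empirical = np.var(x[burn:]) / eps ** 2
print(f"stochastic sensitivity: theory {w_theory:.3f}, simulation {w_empirical:.3f}")
```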
18

Etude d'équations aux dérivées partielles stochastiques / Study on stochastic partial differential equations

Bauzet, Caroline 26 June 2013 (has links)
This thesis belongs to the mathematical field of the analysis of nonlinear stochastic partial differential equations (PDEs). We are interested in parabolic and hyperbolic PDEs perturbed stochastically in the Itô sense: randomness is introduced by adding a stochastic integral (an Itô integral), which may or may not depend on the solution; one then speaks of multiplicative or additive noise. The presence of the random variable does not allow us to apply the classical tools of PDE analysis systematically. Our aim is to adapt techniques known in the deterministic setting to nonlinear stochastic PDEs by proposing alternative methods. The results obtained are organized in the five chapters of this thesis. In Chapter I, we study a stochastic perturbation of the Barenblatt equations. Using an implicit time discretization, we establish the existence and uniqueness of the solution in the additive case; thanks to the properties of this solution, we extend the result to the multiplicative case via a fixed-point theorem. In Chapter II, we consider a class of stochastic equations of Barenblatt type in an abstract framework, generalizing the results of Chapter I. In Chapter III, we study the Cauchy problem for a stochastic conservation law. Existence of a solution is shown by an artificial viscosity method, with compactness arguments based on Young measure theory; uniqueness is proved by an adaptation of Kruzhkov's doubling-of-variables technique. In Chapter IV, we consider the Dirichlet problem for the stochastic conservation law studied in Chapter III; the notable point is the use of Kruzhkov semi-entropies to prove uniqueness. In Chapter V, we introduce a splitting method to propose a numerical approach to the problem studied in Chapter IV, followed by some simulations of the stochastic Burgers equation in the one-dimensional case.
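As a rough numerical sketch in the spirit of Chapter V (the thesis's scheme and its analysis are not reproduced), the example below applies a Lie splitting to a one-dimensional stochastic Burgers equation with small viscosity and additive noise, alternating a deterministic finite-volume step (local Lax-Friedrichs flux plus explicit diffusion) with a stochastic step that adds independent Gaussian increments; the grid, viscosity, noise amplitude and time step are arbitrary illustrative choices.

```python
# Lie splitting for a 1-D stochastic Burgers equation with additive noise.
import numpy as np

rng = np.random.default_rng(7)
n_cells, nu, sigma = 200, 0.01, 0.2
dx = 1.0 / n_cells
dt = 5e-4                                   # satisfies the advective and diffusive CFL limits
x = (np.arange(n_cells) + 0.5) * dx
u = np.sin(2 * np.pi * x)                   # smooth periodic initial datum

def deterministic_step(u):
    """One explicit step of Burgers' flux (local Lax-Friedrichs) plus diffusion."""
    up = np.roll(u, -1)                     # right neighbour (periodic)
    um = np.roll(u, 1)                      # left neighbour (periodic)
    a = np.abs(u).max()
    flux_r = 0.5 * (u**2 / 2 + up**2 / 2) - 0.5 * a * (up - u)
    flux_l = np.roll(flux_r, 1)
    return u - dt / dx * (flux_r - flux_l) + nu * dt / dx**2 * (up - 2 * u + um)

for step in range(2000):                    # integrate up to t = 1
    u = deterministic_step(u)               # deterministic half of the splitting
    u = u + sigma * np.sqrt(dt) * rng.standard_normal(n_cells)   # stochastic half

print("t = 1.0:  mean", u.mean().round(4), " max |u|", np.abs(u).max().round(3))
```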
