  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Fingerprinting for Chiplet Architectures Using Power Distribution Network Transients

Burke, Matthew G 09 August 2023 (has links) (PDF)
Chiplets have become an increasingly popular technology for extending Moore's Law and improving the reliability of integrated circuits. They do this by placing several small, interacting chips on an interposer rather than the traditional single chip used for a device. Like any other type of integrated circuit, chiplets need a physical layer of security to defend against hardware Trojans, counterfeiting, probing, and other methods of tampering and physical attack. Power distribution networks (PDNs) are ubiquitous across chiplet and monolithic ICs and are essential to the function of the device. Thus, we propose a method of fingerprinting transient signals within the PDN to identify individual chiplet systems and physical-layer threats against these devices. In this work, we describe a Python-wrapped HSPICE model we have built to automate testing of our proposed PDN fingerprinting methods. We also document the methods of analysis used (wavelet transforms and time-domain measurements) to identify unique characteristics in the voltage responses to transient stimuli. We provide the true positive and false positive rates of these methods for a simulated lineup of chips across varying operating conditions to determine the uniqueness and reliability of our techniques. Our simulations show that, if chips are characterized at varying supply voltage and temperature conditions in the factory, and the sensors used for identification meet the sample rates and voltage resolutions used in our tests, our protocol provides sufficient uniqueness and reliability for enrollment. We recommend that experiments be done to evaluate our methods in hardware and to implement sensing techniques that meet the requirements shown in this work.
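The core idea of the abstract — deriving a fingerprint from multi-scale wavelet features of a PDN voltage transient — can be sketched in a few lines. This is a toy illustration, not the thesis's HSPICE model: the under-damped RLC step response standing in for the PDN transient, the Haar wavelet choice, and all parameter values are hypothetical.

```python
import math

def haar_level(signal):
    """One level of the Haar wavelet transform: approximation and detail
    coefficients (a crude stand-in for the multi-scale analysis described)."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def pdn_transient(r, l, c, n=256, dt=1e-9):
    """Toy under-damped RLC step response standing in for a PDN voltage
    transient; r, l, c are hypothetical per-chip parasitics."""
    alpha = r / (2 * l)
    wd = math.sqrt(max(1.0 / (l * c) - alpha ** 2, 0.0))
    return [math.exp(-alpha * i * dt) * math.cos(wd * i * dt) for i in range(n)]

def fingerprint(signal, levels=4):
    """Concatenate detail-band energies across scales as the fingerprint."""
    feats, a = [], signal
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(sum(x * x for x in d))
    return feats
```

Small variations in the parasitics shift the damping and ringing frequency, which changes the energy distribution across detail bands, so two chips with different parasitics yield different fingerprints.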
162

Development of digital imaging technologies for the segmentation of solar features and the extraction of filling factors from SODISM images

Alasta, Amro F.A. January 2018 (has links)
Solar images are one of the most important sources of information on the current state and behaviour of the Sun, and the PICARD satellite is one of several ground- and space-based observatories dedicated to collecting such data. PICARD hosts the Solar Diameter Imager and Surface Mapper (SODISM), a telescope aimed at continuously monitoring the Sun. It has generated a huge cache of images and other data that can be analysed and interpreted to improve the monitoring of features such as sunspots and the prediction and diagnosis of solar activity. In proportion to the available raw material, the limited published analysis of SODISM data provided the impetus for this study: a novel contribution to the development of a system to enhance, detect and segment sunspots using new hybrid methods. This research aims to yield an improved understanding of SODISM data by providing novel methods to tabulate a sunspot and filling factor (FF) catalogue, which will be useful for future forecasting activities. The technologies developed and the findings achieved in this research will serve as a cornerstone for enhancing the accuracy of sunspot segmentation, creating efficient filling factor catalogue systems, and improving our understanding of SODISM image enhancement. The results achieved can be summarised as follows: i) a novel enhancement method for SODISM images; ii) new efficient methods to segment dark regions and detect sunspots; iii) a novel filling factor catalogue including the number, size and location of sunspots; iv) a novel statistical method to summarise the FF catalogue. Image processing and partitioning techniques are used in this work; these methods have been applied to remove noise and detect sunspots, and provide information such as sunspot number, size and filling factor. The performance of the model is compared with filling factors extracted from other satellites, such as SOHO. The results were also compared with the NOAA catalogue, achieving a precision of 98%. Performance measurement is also introduced and applied to verify the results and evaluate the proposed methods. The algorithms, implementation, results and future work are explained in this thesis.
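At its simplest, the filling-factor computation described above reduces to thresholding on-disc pixels and taking the dark fraction. A minimal sketch (plain intensity thresholding, not the hybrid enhancement and segmentation methods of the thesis; the threshold value and toy image are hypothetical):

```python
def filling_factor(image, disc_mask, threshold):
    """Fraction of on-disc pixels darker than `threshold`.
    `image` is a 2-D list of intensities; `disc_mask` marks pixels on the
    solar disc, so off-disc (background) pixels are excluded."""
    disc_pixels = spot_pixels = 0
    for image_row, mask_row in zip(image, disc_mask):
        for value, on_disc in zip(image_row, mask_row):
            if on_disc:
                disc_pixels += 1
                if value < threshold:  # dark pixel -> candidate sunspot
                    spot_pixels += 1
    return spot_pixels / disc_pixels if disc_pixels else 0.0
```

A real pipeline would first remove noise and correct limb darkening, as the abstract indicates, before any threshold is meaningful.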
163

DYNAMIC HARMONIC DOMAIN MODELING OF FLEXIBLE ALTERNATING CURRENT TRANSMISSION SYSTEM CONTROLLERS

Vyakaranam, Bharat GNVSR January 2011 (has links)
No description available.
164

Optimizing Threads of Computation in Constraint Logic Programs

Pippin, William E., Jr. 29 January 2003 (has links)
No description available.
165

Random projectors with continuous resolutions of the identity in a finite-dimensional Hilbert space

Vourdas, Apostolos 22 October 2019 (has links)
Random sets are used to obtain a continuous partition of the cardinality of the union of many overlapping sets. The formalism uses Möbius transforms and adapts Shapley's methodology from cooperative game theory to the context of set theory. These ideas are then generalised to finite-dimensional Hilbert spaces. Using random projectors onto the subspaces spanned by states from a total set, we construct an infinite number of continuous resolutions of the identity that involve Hermitian, positive semi-definite operators. The simplest is the diagonal continuous resolution of the identity, which is used to expand an arbitrary vector in terms of a continuum of components. It is also used to define a function on the 'probabilistic quadrant', analogous to the Wigner function for the harmonic oscillator on the phase-space plane. Systems with a finite-dimensional Hilbert space (which are naturally described with discrete variables) are thus described with continuous probabilistic variables. / Research Development Fund Publication Prize Award winner, October 2019.
166

Innovative Design and Practice of Principal-Protected Index-Linked Products: An Application of Esscher Transforms

黃昶華 Unknown Date (has links)
This thesis aims to use exotic options to reduce the option premium of the high-risk investment portion of principal-protected funds and thereby raise the participation rate, making such products more attractive to investors. The largest problem facing principal-protected funds in recent years is that "increased market volatility raises the prices of derivatives, eroding the protection rate" (Lee, 2001), since volatility and derivative prices are positively related. After incorporating floating interest rates, we derive more precise closed-form solutions, and we propose "two-sided linkage" to enhance the products' appeal. In actuarial science, the Esscher transform is a long-established tool. Gerber and Shiu (1994) showed that, under certain assumptions, the Esscher transform is an efficient method for valuing derivative securities. This thesis extends the Esscher transform to derive closed-form pricing solutions. The main contributions are to apply the Esscher transform (in the framework of Gerber and Shiu, 1994) as the change of probability measure, to derive closed-form solutions for up-and-out, down-and-out, up-and-in and down-and-in principal-protected index-linked products, and to introduce the new concept of "two-sided linkage". The results of this thesis can be summarised as follows: 1. The Esscher transform is adopted as the valuation model, with explanation and verification. 2. A two-sided principal-protected index-linked product is designed; closed-form solutions are derived, and the feasibility and marketability of the product are discussed. 3. Computer simulation is used to compute the hedging parameters of the pricing formulas; multivariate normal cumulative distribution functions are evaluated so that theoretical prices of multi-asset linked products can be obtained, and a reference table of probability densities for the barrier (up/down) types is compiled. On the software side, Mathematica is used to obtain the hedging parameters without tedious manual calculation, and the statistical package R is used to evaluate multivariate normal cumulative distribution functions, so that the multi-factor analysis is not merely theoretical.
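The Esscher transform at the heart of this pricing approach tilts a density f(x) into e^{hx} f(x) / M(h), where M is the moment generating function. A small numerical sketch for a normal density, under which the tilted law is again normal with the mean shifted by h·sigma² (a textbook property of the transform, not code from the thesis):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def esscher_pdf(x, mu, sigma, h):
    """Esscher transform of the N(mu, sigma^2) density:
    f_h(x) = exp(h*x) * f(x) / M(h), with M(h) = exp(mu*h + sigma^2*h^2/2)."""
    m_h = math.exp(mu * h + 0.5 * sigma ** 2 * h ** 2)
    return math.exp(h * x) * normal_pdf(x, mu, sigma) / m_h
```

For a normal base density the transform simply shifts the mean to mu + h*sigma², which is how a suitable choice of h moves the physical measure to a risk-neutral one in the Gerber-Shiu framework.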
167

Study of sparse representations of observation error statistics for different metrics. Application to image data assimilation

Chabot, Vincent 11 July 2014 (has links)
Recent decades have seen satellite observations grow in both quantity and quality, and over the years they have become increasingly important in numerical weather prediction. These data are now crucial for optimally determining the state of the system under study, notably because they provide dense, high-quality information in areas poorly covered by conventional observing networks. However, the potential of image sequences is still largely under-exploited in data assimilation: they are severely sub-sampled, partly in order to avoid having to account for observation error correlations.

In this thesis we address the problem of extracting information on the system dynamics from sequences of satellite images during the variational data assimilation process. The study is carried out in an idealised setting in order to quantify the impact of observation noise and of occlusions (clouds) on the resulting analysis.

When the noise is spatially correlated, accounting for the correlations while analysing the images at the pixel level is not easy: one must either invert the observation error covariance matrix (which turns out to be very large) or construct easily invertible approximations of it. Changing the analysis space can make it easier to account for part of the correlations. In this work we propose to perform the analysis in wavelet bases or curvelet frames: spatially correlated noise does not affect the different elements of these families in the same way, so in these spaces it is easier to account for part of the correlations present in the error field. The relevance of the proposed approach is demonstrated on several test cases.

When the data are partially occluded, one must also know how to adapt the representation of the correlations. This is not straightforward: working with an observation space that changes over time makes it difficult to use easily invertible approximations of the observation error covariance matrix. We propose a method for adapting, at low cost, the representation of the correlations (in wavelet bases) to the data actually present in each image. The interest of this approach is demonstrated in an idealised case.
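The motivation for changing the analysis space can be illustrated with a toy computation: for spatially correlated noise, the covariance matrix expressed in a wavelet basis concentrates more of its mass on the diagonal than the pixel-space covariance does, so a diagonal approximation is less damaging there. A sketch with a one-level Haar basis and a hypothetical tridiagonal (neighbour-correlated) covariance, far simpler than the curvelet frames and multi-level bases used in the thesis:

```python
import math

def haar_matrix(n):
    """Orthonormal one-level Haar analysis matrix (n even): the first n/2 rows
    average neighbouring pairs, the last n/2 rows difference them."""
    s = 1.0 / math.sqrt(2.0)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n // 2):
        w[i][2 * i], w[i][2 * i + 1] = s, s
        w[n // 2 + i][2 * i], w[n // 2 + i][2 * i + 1] = s, -s
    return w

def congruence(w, c):
    """Covariance of the transformed coefficients: W C W^T."""
    n = len(w)
    wc = [[sum(w[i][k] * c[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(wc[i][k] * w[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

def offdiag_fraction(c):
    """Sum of absolute off-diagonal entries relative to the trace."""
    n = len(c)
    off = sum(abs(c[i][j]) for i in range(n) for j in range(n) if i != j)
    return off / sum(c[i][i] for i in range(n))

# Pixel-space covariance of noise correlated with its immediate neighbours.
n, rho = 8, 0.4
cov_pixel = [[1.0 if i == j else (rho if abs(i - j) == 1 else 0.0)
              for j in range(n)] for i in range(n)]
cov_wavelet = congruence(haar_matrix(n), cov_pixel)
```

Here offdiag_fraction(cov_wavelet) comes out smaller than offdiag_fraction(cov_pixel), while the trace (total variance) is preserved by orthonormality.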
168

Quasi Riesz transforms, Hardy spaces and generalized sub-Gaussian heat kernel estimates

Chen, Li 24 April 2014 (has links)
In this thesis we study Riesz transforms and Hardy spaces associated with an operator on a metric measure space; both subjects are closely related to volume growth and heat kernel estimates for the operator. In Chapters 1, 2 and 4, we study quasi Riesz transforms on Riemannian manifolds and on graphs. In Chapter 1, we prove that on a complete Riemannian manifold the quasi Riesz transform is always bounded on Lp for 1 < p < 2. In Chapter 2, we prove that the quasi Riesz transform is also of weak type (1,1) if the manifold satisfies the doubling volume property and a sub-Gaussian heat kernel estimate. In Chapter 4 we obtain analogous results on graphs. In Chapter 3, we develop a Hardy space theory on metric measure spaces satisfying the doubling volume property and heat kernel estimates that differ locally and globally. We first define Hardy spaces via molecules and via square functions adapted to the heat kernel estimates, and show that the two H1 spaces so defined coincide. We then compare the Hp space defined via square functions with Lp, and show that for p > 1 the two are equivalent; however, we exhibit examples showing that the Hp space defined via square functions with the Gaussian homogeneity t² no longer coincides with Lp in this setting. Finally, as an application of this Hardy space theory, we prove that quasi Riesz transforms are bounded from H1 to L1 on fractal manifolds. In Chapter 5, we prove generalised Poincaré and Sobolev inequalities on Vicsek graphs and show that they are optimal.
169

Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

Zhu, Jihai January 2007 (has links)
Image coding plays a key role in multimedia signal processing and communications. JPEG2000, the latest image coding standard, uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining performance similar to EBCOT has inspired a significant amount of research in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves performance similar to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the wavelet transform is reviewed, and implementation issues for wavelet transforms are discussed. The four main coding methods mentioned above for wavelet-based image compression are studied in detail, and, more importantly, the factors that affect coding efficiency are identified. The main contribution of this research is a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, although a narrow gap remains in lossy coding situations.
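The transform-plus-thresholding pipeline that underlies all four coders discussed above can be sketched in one dimension: transform, zero out small coefficients, reconstruct. This is a generic multi-level Haar transform with hard thresholding, an illustration of the principle rather than the proposed block dividing coding algorithm:

```python
import math

def haar_forward(x):
    """Full multi-level 1-D Haar transform; len(x) must be a power of two.
    Returns [approximation, details coarsest -> finest]."""
    x, out = list(x), []
    s = math.sqrt(2.0)
    while len(x) > 1:
        a = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
        d = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
        out = d + out
        x = a
    return x + out

def haar_inverse(c):
    """Invert haar_forward, doubling the reconstruction level by level."""
    s = math.sqrt(2.0)
    x, pos = c[:1], 1
    while pos < len(c):
        d = c[pos:pos + len(x)]
        pos += len(d)
        x = [v for a_i, d_i in zip(x, d)
             for v in ((a_i + d_i) / s, (a_i - d_i) / s)]
    return x

def compress(signal, threshold):
    """Zero out small wavelet coefficients (hard thresholding)."""
    return [c if abs(c) > threshold else 0.0 for c in haar_forward(signal)]
```

A piecewise-constant signal collapses to a handful of nonzero coefficients; real coders such as EZW, SPIHT and EBCOT then spend their effort entropy-coding the surviving coefficients and their positions efficiently.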
170

A Switching Black-Scholes Model and Option Pricing

Webb, Melanie Ann January 2003 (has links)
Derivative pricing, and in particular the pricing of options, is an important area of current research in financial mathematics. Experts debate the best method of pricing and the most appropriate model of a price process. In this thesis, a 'Switching Black-Scholes' model of a price process is proposed. The model is based on the standard geometric Brownian motion (Black-Scholes) model of a price process; however, the drift and volatility parameters are permitted to vary between a finite number of possible values at known times, according to the state of a hidden Markov chain. This type of model has been found to replicate the Black-Scholes implied volatility smiles observed in the market, and to produce option prices closer to market values than those obtained from the traditional Black-Scholes formula. As the Markov chain introduces a second source of uncertainty into the Black-Scholes model, the Switching Black-Scholes market is incomplete, and no unique option pricing methodology exists. In this thesis, we apply the methods of mean-variance hedging, Esscher transforms and minimum entropy to price options on assets that evolve according to the Switching Black-Scholes model. C programs to compute these prices are given, and some particular numerical examples are examined. Finally, filtering techniques and reference probability methods are applied to estimate the model parameters and the state of the hidden Markov chain. / Thesis (Ph.D.)--Applied Mathematics, 2003.
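The price process described — geometric Brownian motion whose drift and volatility jump between regimes driven by a Markov chain — can be sketched as a simulation. A toy two-state version with hypothetical parameters, and with switching attempted at every step with a fixed probability rather than at the known times the thesis uses:

```python
import math
import random

def simulate_switching_bs(s0, params, switch_prob, n_steps, dt, rng):
    """Simulate one path of a price following geometric Brownian motion whose
    (drift, volatility) pair switches according to a two-state Markov chain.
    params = [(mu0, sigma0), (mu1, sigma1)]; switch_prob[i] is the per-step
    probability of leaving state i. A toy sketch, not the thesis's C code."""
    s, state, path = s0, 0, [s0]
    for _ in range(n_steps):
        mu, sigma = params[state]
        z = rng.gauss(0.0, 1.0)
        # Exact log-normal step for constant (mu, sigma) over one interval.
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        path.append(s)
        if rng.random() < switch_prob[state]:
            state = 1 - state
    return path
```

Pricing an option under this model would average a payoff over many such paths (or use the closed-form and hedging methods the thesis develops); the incompleteness the abstract mentions shows up as the freedom to choose the pricing measure.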
