211 |
New Algorithms for Local and Global Fiber Tractography in Diffusion-Weighted Magnetic Resonance Imaging
Schomburg, Helen, 29 September 2017
No description available.
|
212 |
Interactions between fiscal policy and real economy in the Czech Republic: a quantitative analysis / Kvantitativní analýza interakcí fiskální politiky a reálné ekonomiky v České republice
Valenta, Vilém, January 2004
After many decades, macroeconomic effects of fiscal policy have returned to the centre of the economic policy debate. Both automatic fiscal stabilizers and discretionary fiscal stimuli have been used to support aggregate demand during the recent global economic crisis, with a subsequent need for large-scale fiscal consolidations. In this context, a proper assessment of the size of automatic fiscal stabilizers and fiscal multipliers represents a key input for fiscal policymaking. This dissertation provides a quantitative analysis of the interactions between fiscal policy and the real economy in the Czech Republic. The impact of real economy developments on public finances is assessed based on the methods of the OECD, the European Commission and the ESCB for the identification of general government structural balances, i.e. balances adjusted for effects of the economic cycle and net of one-off and other temporary transactions. I find that the underlying fiscal position, as approximated by the government structural balance, was mostly below the level stabilising the debt-to-GDP ratio since the mid-1990s. A modest improvement in the structural balance can be identified in the period 2004-2007, which was subsequently reversed by the adverse structural impact of the world economic crisis. At the same time, the dynamics of the unadjusted fiscal balance were largely determined by one-off transactions in the past. The effects of fiscal policy on the real economy are analysed using the structural VAR approach. I find that an increase in government spending has a temporary positive effect on output that peaks after one to two years with a multiplier of around 0.6. The tax multiplier appears to be small and, in contrast to standard Keynesian assumptions, positive. Government spending is supportive of private consumption, contradicting the hypothesis of Ricardian equivalence, but it crowds out private investment in the short run. The results should be interpreted with caution, as the analysis is complicated by the rapidly changing economic environment in the period of the economic transition, relatively short available time series and a large number of one-off fiscal transactions.
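To make the mechanics of such an exercise concrete, the sketch below estimates the response of output to a government-spending shock from a small, recursively identified VAR in the spirit of Blanchard and Perotti (2002). Everything here is illustrative: the data are synthetic, and the two-variable system, lag length and Cholesky ordering (spending first) are assumptions rather than the specification used in the dissertation; it only requires numpy, pandas and statsmodels.

```python
# Illustrative sketch of a "fiscal multiplier" from a recursively identified VAR.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
g = np.zeros(n)
y = np.zeros(n)
e_g, e_y = rng.normal(size=(2, n))
for t in range(1, n):
    g[t] = 0.7 * g[t - 1] + 0.01 * e_g[t]                    # toy "spending" series
    y[t] = 0.8 * y[t - 1] + 0.3 * g[t - 1] + 0.01 * e_y[t]   # "output" reacts with a lag

data = pd.DataFrame({"g": g, "y": y})
res = VAR(data).fit(maxlags=4)

# Orthogonalized impulse responses with g ordered first: spending does not
# react to output within the period (the Blanchard-Perotti timing idea).
orth = res.irf(12).orth_irfs           # shape (13, n_vars, n_vars)
resp_y_to_g = orth[:, 1, 0]            # response of y to a one-s.d. g shock
impact_on_g = orth[0, 0, 0]            # size of the spending impulse itself

# A rough "peak multiplier" in these toy units; with real data one would also
# rescale by the sample average ratio of GDP to government spending.
print("peak response ratio:", resp_y_to_g.max() / impact_on_g)
```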
|
213 |
Modélisation et commande des robots : nouvelles approches basées sur les modèles Takagi-Sugeno / Modeling and control of robots : new approaches based on the Takagi-Sugeno models
Allouche, Benyamine, 15 September 2016
Chaque année, plus de 5 millions de personnes à travers le monde deviennent hémiplégiques suite à un accident vasculaire cérébral. Ce soudain déficit neurologique conduit bien souvent à une perte partielle ou totale de la station debout et/ou à la perte de la capacité de déambulation. Dans l’optique de proposer de nouvelles solutions d’assistance situées entre le fauteuil roulant et le déambulateur, cette thèse s’inscrit dans le cadre du projet ANR TECSAN VHIPOD « véhicule individuel de transport en station debout auto-équilibrée pour personnes handicapées avec aide à la verticalisation ». Dans ce contexte, ces travaux de recherche apportent des éléments de réponse à deux problématiques fondamentales du projet : l’assistance au passage assis-debout (PAD) des personnes hémiplégiques et le déplacement à l’aide d’un véhicule auto-équilibré à deux roues. Ces problématiques sont abordées du point de vue de la robotique avec comme question centrale : peut-on utiliser l’approche Takagi-Sugeno (TS) pour la synthèse d’une commande ? Dans un premier temps, la problématique de mobilité des personnes handicapées a été traitée sur la base d’une solution de type gyropode. Des lois de commande basées sur les approches TS standard et descripteur ont été proposées afin d’étudier la stabilisation des gyropodes dans des situations particulières telles que le déplacement sur un terrain en pente ou le franchissement de petites marches. Les résultats obtenus ont non seulement permis d’aboutir à un concept potentiellement capable de franchir des obstacles, mais ils ont également permis de souligner la principale difficulté liée à l’applicabilité de l’approche TS en raison du conservatisme des conditions LMIs (inégalités matricielles linéaires). Dans un second temps, un banc d’assistance au PAD à architecture parallèle a été conçu. Ce type de manipulateur constitué de multiples boucles cinématiques présente un modèle dynamique très complexe (habituellement donné sous forme d’équations différentielles ordinaires). L’application de lois de commande basées sur l’approche TS est souvent vouée à l’échec compte tenu du grand nombre de non-linéarités dans le modèle. Afin de remédier à ce problème, une nouvelle approche de modélisation a été proposée. À partir d’un jeu de coordonnées bien particulier, le principe des puissances virtuelles est utilisé pour générer un modèle dynamique sous forme d’équations algébro-différentielles (DAEs). Cette approche permet d’aboutir à un modèle quasi-LPV où les seuls paramètres variants représentent les multiplicateurs de Lagrange issus de la modélisation DAE. Les résultats obtenus ont été validés en simulation sur un robot parallèle à 2 degrés de liberté (ddl) puis sur un robot parallèle à 3 ddl développé pour l’assistance au PAD. / Every year more than 5 million people worldwide become hemiplegic as a direct consequence of stroke. This neurological deficit often leads to a partial or total loss of standing-up abilities and/or ambulation skills. In order to propose new supporting solutions lying between the wheelchair and the walker, this thesis comes within the ANR TECSAN project named VHIPOD “self-balanced transporter for disabled persons with sit-to-stand function”. In this context, this research provides some answers to two key issues of the project: the sit-to-stand assistance (STS) of hemiplegic people and their mobility through a two-wheeled self-balanced solution.
These issues are addressed from a robotics point of view while focusing on a key question: are we able to extend the use of the Takagi-Sugeno (TS) approach to the control of complex systems? Firstly, the issue of mobility of disabled persons was treated on the basis of a self-balanced solution. Control laws based on the standard and descriptor TS approaches have been proposed for the stabilization of the gyropod in particular situations such as moving along a slope or crossing small steps. The results have led to the design of a two-wheeled transporter which is potentially able to deal with steps. On the other hand, these results have also highlighted the main challenge related to the use of the TS approach, namely the conservatism of the LMI constraints (Linear Matrix Inequalities). Secondly, a test bench for STS assistance based on a parallel kinematic manipulator (PKM) was designed. This kind of manipulator, characterized by several closed kinematic chains, often has a complex dynamical model (given as a set of ordinary differential equations, ODEs). The application of control laws based on the TS approach is often doomed to failure given the large number of non-linear terms in the model. To overcome this problem, a new modeling approach was proposed. From a particular set of coordinates, the principle of virtual power was used to generate a dynamical model based on differential algebraic equations (DAEs). This approach leads to a quasi-LPV model where the only varying parameters are the Lagrange multipliers derived from the constraint equations of the DAE model. The results were validated in simulation on a 2-DOF (degrees of freedom) parallel robot (Biglide) and a 3-DOF manipulator (Triglide) designed for STS assistance.
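As a rough illustration of the kind of LMI feasibility problem underlying TS control design, the sketch below checks a common quadratic Lyapunov function for a two-rule continuous-time TS model using the standard sufficient conditions P > 0 and A_i^T P + P A_i < 0 (in the matrix-definiteness sense). The vertex matrices are hypothetical, strict inequalities are emulated with a small margin, and the descriptor/DAE formulations and relaxed conditions developed in the thesis are not reproduced here; cvxpy with an SDP-capable solver such as SCS is assumed.

```python
# Illustrative common-Lyapunov-function LMI check for a two-rule TS model.
import numpy as np
import cvxpy as cp

# Hypothetical vertex (local) models, e.g. from the sector-nonlinearity approach.
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-4.0, -1.5]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                   # small margin to emulate strict inequalities
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
problem.solve()
print("status:", problem.status, "\nP =\n", P.value)
```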
|
214 |
Contributions au démélange non-supervisé et non-linéaire de données hyperspectrales / Contributions to unsupervised and nonlinear unmixing of hyperspectral data
Ammanouil, Rita, 13 October 2016
Le démélange spectral est l’un des problèmes centraux pour l’exploitation des images hyperspectrales. En raison de la faible résolution spatiale des imageurs hyperspectraux en télédétection, la surface représentée par un pixel peut contenir plusieurs matériaux. Dans ce contexte, le démélange consiste à estimer les spectres purs (les endmembers) ainsi que leurs fractions (les abondances) pour chaque pixel de l’image. Le but de cette thèse est de proposer de nouveaux algorithmes de démélange qui visent à améliorer l’estimation des spectres purs et des abondances. En particulier, les algorithmes de démélange proposés s’inscrivent dans le cadre du démélange non-supervisé et non-linéaire. Dans un premier temps, on propose un algorithme de démélange non-supervisé dans lequel une régularisation favorisant la parcimonie des groupes est utilisée pour identifier les spectres purs parmi les observations. Une extension de ce premier algorithme permet de prendre en compte la présence du bruit parmi les observations choisies comme étant les plus pures. Dans un second temps, les connaissances a priori des ressemblances entre les spectres à l’échelle locale et non-locale ainsi que leurs positions dans l’image sont exploitées pour construire un graphe adapté à l’image. Ce graphe est ensuite incorporé dans le problème de démélange non supervisé par le biais d’une régularisation basée sur le laplacien du graphe. Enfin, deux algorithmes de démélange non-linéaires sont proposés dans le cas supervisé. Les modèles de mélanges non-linéaires correspondants incorporent des fonctions à valeurs vectorielles appartenant à un espace de Hilbert à noyaux reproduisants. L’intérêt de ces fonctions par rapport aux fonctions à valeurs scalaires est qu’elles permettent d’incorporer un a priori sur la ressemblance entre les différentes fonctions. En particulier, un a priori spectral, dans un premier temps, et un a priori spatial, dans un second temps, sont incorporés pour améliorer la caractérisation du mélange non-linéaire. La validation expérimentale des modèles et des algorithmes proposés sur des données synthétiques et réelles montre une amélioration des performances par rapport aux méthodes de l’état de l’art. Cette amélioration se traduit par une meilleure erreur de reconstruction des données. / Spectral unmixing has been an active field of research since the earliest days of hyperspectral remote sensing. It is concerned with the case where various materials are found in the spatial extent of a pixel, resulting in a spectrum that is a mixture of the signatures of those materials. Unmixing then reduces to estimating the pure spectral signatures and their corresponding proportions in every pixel. In the hyperspectral unmixing jargon, the pure signatures are known as the endmembers and their proportions as the abundances. This thesis focuses on spectral unmixing of remotely sensed hyperspectral data. In particular, it is aimed at improving the accuracy of the extraction of compositional information from hyperspectral data. This is done through the development of new unmixing techniques in two main contexts, namely the unsupervised and nonlinear cases. In particular, we propose a new technique for blind unmixing, we incorporate spatial information in (linear and nonlinear) unmixing, and we finally propose a new nonlinear mixing model. More precisely, first, an unsupervised unmixing approach based on collaborative sparse regularization is proposed where the library of endmember candidates is built from the observations themselves.
This approach is then extended in order to take into account the presence of noise among the endmember candidates. Second, within the unsupervised unmixing framework, two graph-based regularizations are used in order to incorporate prior local and nonlocal contextual information. Next, within a supervised nonlinear unmixing framework, a new nonlinear mixing model based on vector-valued functions in a reproducing kernel Hilbert space (RKHS) is proposed. The aforementioned model allows us to consider different nonlinear functions at different bands, regularize the discrepancies between these functions, and account for neighboring nonlinear contributions. Finally, the vector-valued kernel framework is used in order to promote spatial smoothness of the nonlinear part in a kernel-based nonlinear mixing model. Simulations on synthetic and real data show the effectiveness of all the proposed techniques.
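For readers unfamiliar with the linear-unmixing baseline that these methods extend, the sketch below performs per-pixel abundance estimation under non-negativity and an approximate sum-to-one constraint (augmented NNLS) on synthetic data with a known endmember library. It is a simplified supervised baseline under assumed linear mixing, not the collaborative-sparse, graph-regularized or kernel-based algorithms proposed in the thesis; it only requires numpy and scipy.

```python
# Illustrative fully constrained (approximately) linear unmixing baseline.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
L, R, N = 50, 3, 100                          # bands, endmembers, pixels
E = np.abs(rng.normal(size=(L, R)))           # hypothetical endmember signatures
A_true = rng.dirichlet(np.ones(R), size=N).T  # abundances on the simplex
Y = E @ A_true + 0.01 * rng.normal(size=(L, N))

delta = 10.0                                  # weight of the sum-to-one row
E_aug = np.vstack([E, delta * np.ones((1, R))])

A_hat = np.zeros((R, N))
for j in range(N):                            # solve one NNLS problem per pixel
    y_aug = np.append(Y[:, j], delta)
    A_hat[:, j], _ = nnls(E_aug, y_aug)

print("mean abundance RMSE:", np.sqrt(np.mean((A_hat - A_true) ** 2)))
```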
|
215 |
2-microlocal spaces with variable integrability
Ferreira Gonçalves, Helena Daniela, 15 May 2018
In this work we study several important properties of the 2-microlocal Besov and Triebel-Lizorkin spaces with variable integrability. Due to the richness of the weight sequence used to measure smoothness, this scale of function spaces incorporates a wide range of function spaces, of which we mention the spaces with variable smoothness.
Among the existing characterizations of these spaces, the characterization via smooth atoms is undoubtedly one of the most used when it comes to obtaining new results in varied directions. In this work we make use of this characterization to prove several embedding results, such as Sobolev, Franke and Jawerth embeddings, and also to study traces on hyperplanes.
Despite the considerable benefits of resorting to the smooth atomic decomposition, there are still some limitations when one tries to use it to prove certain specific results, such as assertions on pointwise multipliers and diffeomorphisms. The non-smooth atomic characterization proved in this work overcomes these problems, due to the weaker conditions imposed on the (non-smooth) atoms. Moreover, it also allows us to give an intrinsic characterization of the 2-microlocal Besov and Triebel-Lizorkin spaces with variable integrability on the class of regular domains, in which connected bounded Lipschitz domains are included. / In dieser Arbeit untersuchen wir einige wichtige Eigenschaften der 2-mikrolokalen Besov- und Triebel-Lizorkin-Räume mit variabler Integrabilität. Weil die Glattheit hier mit einer reichen Gewichtsfolge gemessen wird, beinhaltet diese Skala von Funktionsräumen eine große Anzahl von Funktionsräumen, von denen wir die Räume mit variabler Glattheit erwähnen.
Innerhalb der vorhandenen Charakterisierungen dieser Räume ist die Charakterisierung mit glatten Atomen zweifellos eine der am häufigsten verwendeten, um neue Ergebnisse in verschiedenen Richtungen zu erhalten. In dieser Arbeit verwenden wir eine solche Charakterisierung, um mehrere Einbettungsergebnisse zu beweisen, wie Sobolev-Einbettungen und Einbettungen vom Franke-Jawerth-Typ, und auch Spurresultate zu untersuchen.
Trotz der beträchtlichen Vorteile des Rückgriffs auf die glatte atomare Zerlegung gibt es immer noch einige Einschränkungen, wenn man versucht, sie zu verwenden, um einige spezifische Ergebnisse zu beweisen, wie beispielsweise Aussagen über punktweise Multiplikatoren und Diffeomorphismen. Die nichtglatte atomare Charakterisierung, die wir in dieser Arbeit beweisen, überwindet diese Probleme aufgrund der schwächeren Bedingungen an die (nichtglatten) Atome. Außerdem erlaubt sie uns, eine intrinsische Charakterisierung der 2-mikrolokalen Besov- und Triebel-Lizorkin-Räume mit variabler Integrabilität auf regulären Gebieten zu geben.
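For orientation, such spaces are usually introduced Fourier-analytically; a sketch of the standard form of the definition is given below. The notation, the admissibility conditions on the weight sequence and the precise handling of the variable exponents are simplified and may differ from the thesis.

```latex
% Sketch of the usual Fourier-analytic definition (illustrative normalization only):
\[
  \big\| f \mid B^{\boldsymbol{w}}_{p(\cdot),q(\cdot)}(\mathbb{R}^n) \big\|
  = \Big\| \big( w_j\,(\varphi_j \widehat{f}\,)^{\vee} \big)_{j \ge 0}
    \mid \ell_{q(\cdot)}\big(L_{p(\cdot)}\big) \Big\|,
\]
where $(\varphi_j)_{j\ge 0}$ is a smooth dyadic resolution of unity on the Fourier side,
$\boldsymbol{w}=(w_j)_{j\ge 0}$ is an admissible 2-microlocal weight sequence, and the
Triebel-Lizorkin scale $F^{\boldsymbol{w}}_{p(\cdot),q(\cdot)}$ is obtained by
interchanging the roles of $\ell_{q(\cdot)}$ and $L_{p(\cdot)}$.
```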
|
216 |
Design, Analysis, and Applications of Approximate Arithmetic Modules
Ullah, Salim, 06 April 2022
From the initial computing machines, Colossus of 1943 and ENIAC of 1945, to modern high-performance data centers and the Internet of Things (IoT), four design goals, i.e., high performance, energy efficiency, resource utilization, and ease of programmability, have remained a beacon of development for the computing industry. During this period, the computing industry has exploited the advantages of technology scaling and microarchitectural enhancements to achieve these goals. However, with the end of Dennard scaling, these techniques offer diminishing energy and performance advantages. Therefore, it is necessary to explore alternative techniques for satisfying the computational and energy requirements of modern applications. Towards this end, one promising technique is analyzing and surrendering the strict notion of correctness in various layers of the computation stack. Most modern applications across the computing spectrum, from data centers to IoT devices, interact with and analyze real-world data and take decisions accordingly. These applications are broadly classified as Recognition, Mining, and Synthesis (RMS) applications. Instead of producing a single golden answer, they produce several feasible answers and possess an inherent error resilience to the inexactness of the processed data and the corresponding operations. Utilizing this inherent error resilience, the paradigm of Approximate Computing relaxes the strict notion of computational correctness to realize high-performance and energy-efficient systems with acceptable output quality.
Prior works on circuit-level approximations have mainly focused on Application-Specific Integrated Circuits (ASICs). However, ASIC-based solutions suffer from long time-to-market and costly development cycles. These limitations can be overcome by utilizing the reconfigurable nature of Field Programmable Gate Arrays (FPGAs). However, due to the architectural differences between ASICs and FPGAs, applying ASIC-based approximation techniques to FPGA-based systems does not result in proportional performance and energy gains. Therefore, to exploit the principles of approximate computing in FPGA-based hardware accelerators for error-resilient applications, FPGA-optimized approximation techniques are required. Further, most state-of-the-art approximate arithmetic operators lack a generic approximation methodology for implementing new approximate designs as an application's accuracy and performance requirements change. These works also lack a methodology in which a machine learning model can be used to correlate an approximate operator with its impact on the output quality of an application. This thesis addresses these research challenges by designing and exploring FPGA-optimized, logic-based approximate arithmetic operators. As multiplication is one of the most computationally complex and frequently used arithmetic operations in various modern applications, such as Artificial Neural Networks (ANNs), it is considered for most of the proposed approximation techniques in this thesis.
The primary focus of the work is to provide a framework for generating FPGA-optimized approximate arithmetic operators and efficient techniques to explore approximate operators for implementing hardware accelerators for error-resilient applications.
Towards this end, we first present various designs of resource-optimized, high-performance, and energy-efficient accurate multipliers. Although modern FPGAs host high-performance DSP blocks to perform multiplication and other arithmetic operations, our analysis and results show that the orthogonal approach of having resource-efficient and high-performance multipliers is necessary for implementing high-performance accelerators. Due to the differences in the type of data processed by various applications, the thesis presents individual designs for unsigned, signed, and constant multipliers. Compared to the multiplier IPs provided by the FPGA synthesis tool, our proposed designs provide significant performance gains. We then explore the designed accurate multipliers and provide a library of approximate unsigned/signed multipliers. The proposed approximations target reductions in the total utilized resources, critical-path delay, and energy consumption of the multipliers. We have explored various statistical error metrics to characterize the approximation-induced accuracy degradation of the approximate multipliers. We have also utilized the designed multipliers in various error-resilient applications to evaluate their impact on the applications' output quality and performance.
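To make this characterization workflow concrete, the sketch below models a generic 8x8 approximate multiplier that simply discards the least significant partial-product columns and evaluates it exhaustively with commonly used statistical error metrics (mean and maximum error distance, mean relative error distance). It is a generic behavioural illustration, not one of the Approx-1/2/3 designs proposed in this thesis.

```python
# Illustrative behavioural model of a column-truncated approximate multiplier.
import itertools

def approx_mul(a: int, b: int, k: int = 4, bits: int = 8) -> int:
    """Sum only the partial-product bits of weight >= 2**k (column truncation)."""
    acc = 0
    for i in range(bits):
        if (a >> i) & 1:
            for j in range(bits):
                if (b >> j) & 1 and i + j >= k:
                    acc += 1 << (i + j)
    return acc

def error_metrics(k: int = 4, bits: int = 8) -> dict:
    eds, reds, max_ed = [], [], 0
    for a, b in itertools.product(range(2 ** bits), repeat=2):  # exhaustive input sweep
        exact = a * b
        ed = abs(exact - approx_mul(a, b, k, bits))
        eds.append(ed)
        max_ed = max(max_ed, ed)
        if exact:
            reds.append(ed / exact)
    return {
        "mean error distance": sum(eds) / len(eds),
        "max error distance": max_ed,
        "mean relative error distance": sum(reds) / len(reds),
    }

print(error_metrics(k=4))
```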
Based on our analysis of the designed approximate multipliers, we identify the need for a framework to design application-specific approximate arithmetic operators. An application-specific approximate arithmetic operator is intended to implement only the logic needed to satisfy the application's overall output accuracy and performance constraints.
Towards this end, we present a generic design methodology for implementing FPGA-based application-specific approximate arithmetic operators from their accurate implementations according to the applications' accuracy and performance requirements. In this regard, we utilize various machine learning models to identify feasible approximate arithmetic configurations for various applications. We also utilize different machine learning models and optimization techniques to efficiently explore the large design space of individual operators and their utilization in various applications. In this thesis, we have used the proposed methodology to design approximate adders and multipliers.
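A toy sketch of such surrogate-assisted exploration is given below: a regression model is trained on already-evaluated operator configurations (here a hypothetical 16-bit configuration encoding with purely synthetic error and LUT figures) and then used to screen a large pool of unseen configurations before any costly synthesis. The encoding, figures of merit and thresholds are illustrative assumptions and do not reproduce the thesis' actual flow.

```python
# Illustrative surrogate-model screening of approximate-operator configurations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_bits = 16                                      # hypothetical configuration length
X_seen = rng.integers(0, 2, size=(300, n_bits))  # configurations already evaluated
ones = X_seen.sum(axis=1)
# Purely synthetic "ground truth": enabling more logic costs LUTs but lowers error.
luts = 40.0 + 3.0 * ones + rng.normal(0.0, 2.0, size=300)
err = 0.8 * (n_bits - ones) + rng.normal(0.0, 0.3, size=300)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_seen, np.column_stack([err, luts]))     # multi-output regression

# Screen a large pool of unseen configurations with the cheap surrogate and keep
# only the promising ones for costly synthesis and application-level evaluation.
pool = rng.integers(0, 2, size=(10_000, n_bits))
pred_err, pred_luts = surrogate.predict(pool).T
feasible = (pred_err < 4.0) & (pred_luts < 80.0)        # assumed constraints
print("candidates forwarded to synthesis:", int(feasible.sum()))
```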
This thesis also explores other layers of the computation stack (cross-layer) for possible approximations to satisfy an application's accuracy and performance requirements. Towards this end, we first present a low bit-width and highly accurate quantization scheme for pre-trained Deep Neural Networks (DNNs). The proposed quantization scheme does not require re-training (fine-tuning the parameters) after quantization. We also present a resource-efficient FPGA-based multiplier that utilizes our proposed quantization scheme. Finally, we present a framework to allow the intelligent exploration and highly accurate identification of the feasible design points in the large design space enabled by cross-layer approximations. The proposed framework utilizes a novel Polynomial Regression (PR)-based method to model approximate arithmetic operators. The PR-based representation enables machine learning models to better correlate an approximate operator's coefficients with their impact on an application's output quality.
1. Introduction
1.1 Inherent Error Resilience of Applications
1.2 Approximate Computing Paradigm
1.2.1 Software Layer Approximation
1.2.2 Architecture Layer Approximation
1.2.3 Circuit Layer Approximation
1.3 Problem Statement
1.4 Focus of the Thesis
1.5 Key Contributions and Thesis Overview
2. Preliminaries
2.1 Xilinx FPGA Slice Structure
2.2 Multiplication Algorithms
2.2.1 Baugh-Wooley’s Multiplication Algorithm
2.2.2 Booth’s Multiplication Algorithm
2.2.3 Sign Extension for Booth’s Multiplier
2.3 Statistical Error Metrics
2.4 Design Space Exploration and Optimization Techniques
2.4.1 Genetic Algorithm
2.4.2 Bayesian Optimization
2.5 Artificial Neural Networks
3. Accurate Multipliers
3.1 Introduction
3.2 Related Work
3.3 Unsigned Multiplier Architecture
3.4 Motivation for Signed Multipliers
3.5 Baugh-Wooley’s Multiplier
3.6 Booth’s Algorithm-based Signed Multipliers
3.6.1 Booth-Mult Design
3.6.2 Booth-Opt Design
3.6.3 Booth-Par Design
3.7 Constant Multipliers
3.8 Results and Discussion
3.8.1 Experimental Setup and Tool Flow
3.8.2 Performance comparison of the proposed accurate unsigned multiplier
3.8.3 Performance comparison of the proposed accurate signed multiplier with the state-of-the-art accurate multipliers
3.8.4 Performance comparison of the proposed constant multiplier with the state-of-the-art accurate multipliers
3.9 Conclusion
4. Approximate Multipliers
4.1 Introduction
4.2 Related Work
4.3 Unsigned Approximate Multipliers
4.3.1 Approximate 4 × 4 Multiplier (Approx-1)
4.3.2 Approximate 4 × 4 Multiplier (Approx-2)
4.3.3 Approximate 4 × 4 Multiplier (Approx-3)
4.4 Designing Higher Order Approximate Unsigned Multipliers
4.4.1 Accurate Adders for Implementing 8 × 8 Approximate Multipliers from 4 × 4 Approximate Multipliers
4.4.2 Approximate Adders for Implementing Higher-order Approximate Multipliers
4.5 Approximate Signed Multipliers (Booth-Approx)
4.6 Results and Discussion
4.6.1 Experimental Setup and Tool Flow
4.6.2 Evaluation of the Proposed Approximate Unsigned Multipliers
4.6.3 Evaluation of the Proposed Approximate Signed Multiplier
4.7 Conclusion
5. Designing Application-specific Approximate Operators
5.1 Introduction
5.2 Related Work
5.3 Modeling Approximate Arithmetic Operators
5.3.1 Accurate Multiplier Design
5.3.2 Approximation Methodology
5.3.3 Approximate Adders
5.4 DSE for FPGA-based Approximate Operators Synthesis
5.4.1 DSE using Bayesian Optimization
5.4.2 MOEA-based Optimization
5.4.3 Machine Learning Models for DSE
5.5 Results and Discussion
5.5.1 Experimental Setup and Tool Flow
5.5.2 Accuracy-Performance Analysis of Approximate Adders
5.5.3 Accuracy-Performance Analysis of Approximate Multipliers
5.5.4 AppAxO MBO
5.5.5 ML Modeling
5.5.6 DSE using ML Models
5.5.7 Proposed Approximate Operators
5.6 Conclusion
6. Quantization of Pre-trained Deep Neural Networks
6.1 Introduction
6.2 Related Work
6.2.1 Commonly Used Quantization Techniques
6.3 Proposed Quantization Techniques
6.3.1 L2L: Log_2_Lead Quantization
6.3.2 ALigN: Adaptive Log_2_Lead Quantization
6.3.3 Quantitative Analysis of the Proposed Quantization Schemes
6.3.4 Proposed Quantization Technique-based Multiplier
6.4 Results and Discussion
6.4.1 Experimental Setup and Tool Flow
6.4.2 Image Classification
6.4.3 Semantic Segmentation
6.4.4 Hardware Implementation Results
6.5 Conclusion
7. A Framework for Cross-layer Approximations
7.1 Introduction
7.2 Related Work
7.3 Error-analysis of approximate arithmetic units
7.3.1 Application Independent Error-analysis of Approximate Multipliers
7.3.2 Application Specific Error Analysis
7.4 Accelerator Performance Estimation
7.5 DSE Methodology
7.6 Results and Discussion
7.6.1 Experimental Setup and Tool Flow
7.6.2 Behavioral Analysis
7.6.3 Accelerator Performance Estimation
7.6.4 DSE Performance
7.7 Conclusion
8. Conclusions and Future Work
|
217 |
On Integral Transforms and Convolution Equations on the Spaces of Tempered Ultradistributions / Prilozi teoriji integralnih transformacija i konvolucionih jednačina na prostorima temperiranih ultradistribucija
Perišić, Dušanka, 03 July 1992
In the thesis, spaces of Beurling and of Roumieu type tempered ultradistributions are introduced and investigated; they are natural generalizations of the space of Schwartz's tempered distributions in the Denjoy-Carleman-Komatsu theory of ultradistributions. It is proved that the introduced spaces preserve all of the good properties the Schwartz space has, among others the remarkable one that the Fourier transform maps the spaces continuously into themselves.
In the first chapter the necessary notation and notions are given.
In the second chapter, the spaces of ultrarapidly decreasing ultradifferentiable functions and their duals, the spaces of Beurling and of Roumieu tempered ultradistributions, are introduced; their topological properties, relations with the known distribution and ultradistribution spaces, and structural properties are investigated; characterizations of the Hermite expansions and boundary value representations of the elements of the spaces are given.
The spaces of multipliers of the spaces of Beurling and of Roumieu type tempered ultradistributions are determined explicitly in the third chapter.
The fourth chapter is devoted to the investigation of the Fourier, Wigner, Bargmann and Hilbert transforms on the spaces of Beurling and of Roumieu type tempered ultradistributions and their test spaces.
In the fifth chapter the equivalence of the classical definitions of the convolution of Beurling type ultradistributions is proved, as well as the equivalence of the newly introduced definitions of ultratempered convolutions of Beurling type ultradistributions.
In the last chapter a necessary and sufficient condition is given for a convolutor of a space of tempered ultradistributions to be hypoelliptic in a space of integrable ultradistributions, and hypoelliptic convolution equations are studied in these spaces.
The bibliography has 70 items. / U ovoj tezi su proučavani prostori temperiranih ultradistribucija Beurlingovog i Roumieovog tipa, koji su prirodna uopštenja prostora Schwarzovih temperiranih distribucija u Denjoy-Carleman-Komatsuovoj teoriji ultradistribucija.
Dokazano je da ovi prostori imaju sva dobra svojstva, koja ima i Schwarzov prostor, izmedju ostalog, značajno svojstvo da Furijeova transformacija preslikava te prostore neprekidno na same sebe.
U prvom poglavlju su uvedene neophodne oznake i pojmovi.
U drugom poglavlju su uvedeni prostori ultrabrzo opadajućih ultradiferencijabilnih funkcija i njihovi duali, prostori Beurlingovih i Roumieuovih temperiranih ultradistribucija; proučavana su njihova topološka svojstva i veze sa poznatim prostorima distribucija i ultradistribucija, kao i strukturne osobine; date su i karakterizacije Ermitskih ekspanzija i graničnih reprezentacija elemenata tih prostora.
Prostori multiplikatora Beurlingovih i Roumieuovih temperiranih ultradistribucija su okarakterisani u trećem poglavlju.
Četvrto poglavlje je posvećeno proučavanju Fourierove, Wignerove, Bargmanove i Hilbertove transformacije na prostorima Beurlingovih i Roumieuovih temperiranih ultradistribucija i njihovim test prostorima.
U petoj glavi je dokazana ekvivalentnost klasičnih definicija konvolucije na Beurlingovim prostorima ultradistribucija, kao i ekvivalentnost novouvedenih definicija ultratemperirane konvolucije ultradistribucija Beurlingovog tipa.
U poslednjoj glavi je dat potreban i dovoljan uslov da konvolutor prostora temperiranih ultradistribucija bude hipoeliptičan u prostoru integrabilnih ultradistribucija i razmatrane su neke konvolucione jednačine u tom prostoru.
Bibliografija ima 70 bibliografskih jedinica.
|
218 |
Essays on Government Growth, Fiscal Policy and Debt Sustainability
Kuckuck, Jan, 29 April 2015
The financial crisis of 2007/8 has triggered a profound debate about the sustainability of public finances, ever-increasing government expenditures and the efficiency of fiscal policy measures. Given this context, the following dissertation provides four contributions that analyze the long-run growth of government spending throughout economic development, discuss potential effects of fiscal policy measures on output, and provide new insights into the assessment of debt sustainability for a variety of industrialized countries.
Since the outbreak of the European debt crisis in 2009/2010, there has been a revival of interest in the long-term growth of government expenditures. In this context, the relationship between the size of the public sector and economic growth - often referred to as Wagner's law - has been the focus of numerous studies, especially with regard to public policy and fiscal sustainability. Using historical data from the mid-19th century, the first chapter analyzes the validity of Wagner's law for five industrialized European countries and links the discussion to different stages of economic development. In line with Wagner's hypothesis, our findings show that the relationship between public spending and economic growth has weakened at an advanced stage of development. Furthermore, all countries under review support the notion that Wagner's law may have lost its economic relevance in recent decades.
As a consequence of the 2007/8 financial crisis, there has been an increasing theoretical and empirical debate about the impact of fiscal policy measures on output. Accordingly, the Structural Vector Autoregression (SVAR) approach to estimating the fiscal multipliers developed by Blanchard and Perotti (2002) has been applied widely in the literature in recent years. In the second chapter, we point out that the fiscal multipliers derived from this approach include the predicted future path of the policy instruments as well as their dynamic interaction. We analyze a data set from the US and document that these interactions are economically and statistically significant. In a counterfactual simulation, we report fiscal multipliers that abstract from these dynamic responses. Furthermore, we use our estimates to analyze the recent fiscal stimulus of the American Recovery and Reinvestment Act (ARRA).
The third chapter contributes to the existing empirical literature on fiscal multipliers by applying a five-variable SVAR approach to a uniform data set for Belgium, France, Germany, and the United Kingdom. Besides studying the effects of expenditure and tax increases on output, we additionally analyze their dynamic effects on inflation and interest rates as well as the dynamic interaction of both policy instruments. By conducting counterfactual simulations, which abstract from the dynamic response of key macroeconomic variables to the initial fiscal shocks, we study the importance of these channels for the transmission of fiscal policy to output. Overall, the results demonstrate that the effects of fiscal shocks are limited and rather different across countries. Further, it is shown that the inflation and interest rate channels are insignificant for the transmission of fiscal policy.
In the field of public finances, governmental budgetary policies are among the most disputed areas of political and scientific debate. The sustainability of public debt is often analyzed by testing the stationarity of government budget deficits.
The fourth chapter shows that this test can be implemented more effectively by means of an asymmetric unit root test. We argue that this approach increases the power of the test and reduces the likelihood of drawing false inferences. We illustrate this in an application to 14 countries of the European Monetary Union as well as in a Monte Carlo simulation. Distinguishing between positive and negative changes in deficits, we find consistency with the intertemporal budget constraint for more countries, i.e. lower persistence of positive changes in some countries, compared to the earlier literature.
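As an illustration of what such an asymmetric test can look like in practice, the sketch below runs a threshold (MTAR-type) regression in which deficit changes are allowed different adjustment speeds depending on the sign of the previous change. The data are placeholders, lag augmentation and deterministic terms are omitted, and the thesis' exact specification and critical values may differ; numpy and statsmodels are assumed.

```python
# Illustrative asymmetric (momentum-threshold) unit-root regression on a deficit series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
deficit = 0.2 * np.cumsum(rng.normal(size=n))      # placeholder deficit series

d = np.diff(deficit)
dy = d[1:]                                         # dependent variable: change in deficit
y_lag = deficit[1:-1]                              # lagged level
M = (d[:-1] >= 0).astype(float)                    # regime set by the previous change
X = sm.add_constant(np.column_stack([M * y_lag, (1.0 - M) * y_lag]))
res = sm.OLS(dy, X).fit()

rho_pos, rho_neg = res.params[1], res.params[2]
print("adjustment after positive / negative changes:", rho_pos, rho_neg)
# Joint null of no mean reversion (rho_pos = rho_neg = 0); in practice this
# F-statistic must be compared with non-standard, simulated critical values.
print(res.f_test("x1 = 0, x2 = 0"))
```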
|
219 |
Full-Scale Lateral-Load Tests of a 3x5 Pile Group in Soft Clays and Silts
Snyder, Jeffrey L., 15 March 2004
A series of static lateral load tests were conducted on a group of fifteen piles arranged in a 3x5 pattern. The piles were placed at a center-to-center spacing of 3.92 pile diameters. A single isolated pile was also tested for comparison to the group response. The subsurface profile consisted of cohesive layers of soft to medium consistency underlain by interbedded layers of sands and fine-grained soils. The piles were instrumented to measure pile-head deflection, rotation, and load, as well as strain versus pile depth.
|
220 |
Performance enhancement techniques for low power digital phase locked loops
Elshazly, Amr, 16 July 2014
The desire for low-power, high-performance computing has been at the core of the symbiotic union between digital circuits and CMOS scaling. While digital circuit performance improves with device scaling, analog circuits have not gained the same benefits. As a result, it has become necessary to leverage increased digital circuit performance to mitigate analog circuit deficiencies in nanometer-scale CMOS in order to realize world-class analog solutions.
In this thesis, both circuit and system enhancement techniques to improve the performance of clock generators are discussed. The following techniques were developed: (1) a digital PLL that employs an adaptive and highly efficient way to cancel the effect of supply noise, (2) a supply-regulated DPLL that uses a low-power regulator and improves supply noise rejection, (3) a digital multiplying DLL that obviates the need for a high-resolution TDC while achieving sub-picosecond jitter and excellent supply noise immunity, and (4) a high-resolution TDC based on a switched ring oscillator. Measured results obtained from the prototype chips are presented to illustrate the proposed design techniques. / Graduation date: 2013 / Access restricted to the OSU Community at author's request from July 16, 2012 - July 16, 2014
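For context, the sketch below is a highly simplified behavioural model of the kind of clock generator these techniques target: a bang-bang digital PLL with a proportional-integral digital loop filter multiplying a 100 MHz reference up to 1.6 GHz. All parameters are illustrative assumptions, and none of the thesis' actual circuits (supply-noise cancellation, regulator, MDLL, switched-ring-oscillator TDC) are modelled.

```python
# Illustrative behavioural model of a bang-bang digital PLL (assumed parameters).
f_ref, n_div = 100e6, 16            # reference frequency and feedback divider
kp, ki = 1.0e6, 2.5e4               # proportional / integral gains, in Hz per decision
f_free = 1.598e9                    # DCO free-running frequency, 2 MHz below target
pe, integ, history = 0.0, 0.0, []

for _ in range(4000):               # one iteration per reference cycle
    bb = 1.0 if pe <= 0 else -1.0   # bang-bang decision: divided clock late -> speed up
    integ += ki * bb                # integral path removes the static frequency error
    f_dco = f_free + integ + kp * bb
    pe += (f_dco / n_div - f_ref) / f_ref   # phase error accumulated in reference UI
    history.append(f_dco)

avg = sum(history[-200:]) / 200     # average over the limit cycle after settling
print(f"average output over the last 200 cycles: {avg / 1e9:.2f} GHz "
      f"(target {n_div * f_ref / 1e9:.1f} GHz)")
```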
|