221

Stability of charged rotating black holes for linear scalar perturbations

Civin, Damon January 2015 (has links)
In this thesis, the stability of the family of subextremal Kerr-Newman spacetimes is studied in the case of linear scalar perturbations. That is, nondegenerate energy bounds (NEB) and integrated local energy decay (ILED) results are proved for solutions of the wave equation on the domain of outer communications. The main obstacles to the proof of these results are superradiance, trapping and their interaction. These difficulties are surmounted by localising solutions of the wave equation in phase space and applying the vector field method. Miraculously, as in the Kerr case, superradiance and trapping occur in disjoint regions of phase space and can be dealt with individually. Trapping is a high frequency obstruction to the proof whereas superradiance occurs at both high and low frequencies. The construction of energy currents for superradiant frequencies gives rise to an unfavourable boundary term. In the high frequency regime, this boundary term is controlled by exploiting the presence of a large parameter. For low superradiant frequencies, no such parameter is available. This difficulty is overcome by proving quantitative versions of mode stability type results. The mode stability result on the real axis is then applied to prove integrated local energy decay for solutions of the wave equation restricted to a bounded frequency regime. The (ILED) statement is necessarily degenerate due to the trapping effect. This implies that a nondegenerate (ILED) statement must lose differentiability. If one uses an (ILED) result that loses differentiability to prove (NEB), this loss is passed onto the (NEB) statement as well. Here, the geometry of the subextremal Kerr-Newman background is exploited to obtain the (NEB) statement directly from the degenerate (ILED) with no loss of differentiability.
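For orientation, an integrated local energy decay estimate for solutions of the wave equation $\Box_g \psi = 0$ has, schematically, the following shape (a generic template, not the precise statement proved in the thesis):

\[
\int_0^T \int_{\Sigma_t \cap \{ r \le R \}} \chi(r)\, |\partial \psi|^2 \, \mathrm{d}V \, \mathrm{d}t \;\lesssim\; \int_{\Sigma_0} |\partial \psi|^2 \, \mathrm{d}V,
\]

where the weight $\chi$ degenerates on the trapped set; removing that degeneracy is exactly what forces the loss of differentiability discussed above.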
222

Design of Ultra-Low-Power Analog-to-Digital Converters

Zhang, Dai January 2012 (has links)
Power consumption is one of the main design constraints in today's integrated circuits. For systems powered by small non-rechargeable batteries over their entire lifetime, such as medical implant devices, ultra-low power consumption is paramount. In these systems, analog-to-digital converters (ADCs) are key components as the interface between the analog world and the digital domain. This thesis addresses the design challenges, strategies, and circuit techniques of ultra-low-power ADCs for medical implant devices. Medical implant devices, such as pacemakers and cardiac defibrillators, typically require low-speed, medium-resolution ADCs. The successive approximation register (SAR) ADC exhibits significantly higher energy efficiency than other prevalent ADC architectures due to its good tradeoffs among power consumption, conversion accuracy, and design complexity. To design an energy-efficient SAR ADC, an understanding of its error sources as well as its power consumption bounds is essential. This thesis analyzes the power consumption bounds of the SAR ADC: 1) at low resolution, the power consumption is bounded by digital switching power; 2) at medium-to-high resolution, the power consumption is bounded by thermal noise if digitally assisted techniques are used to alleviate mismatch issues; otherwise it is bounded by capacitor mismatch. Conversion of low-frequency bioelectric signals does not require high speed, but it does demand ultra-low-power operation. This, combined with the required conversion accuracy, makes the design of such ADCs a major challenge. It is not straightforward to trade the unnecessary speed of the inherently fast components in advanced CMOS technologies for lower power consumption. Moreover, the leakage current degrades the sampling accuracy during the long conversion time, and the leakage power contributes a significant portion of the total power consumption. Two SAR ADCs have been implemented in this thesis. The first ADC, implemented in a 0.13-µm CMOS process, achieves 9.1 ENOB with 53-nW power consumption at 1 kS/s. The second ADC, implemented in a 65-nm CMOS process, achieves the same resolution at 1 kS/s with a substantial (94%) improvement in power consumption, resulting in 3-nW total power consumption. Our work demonstrates that ultra-low-power operation necessitates maximum simplicity in the ADC architecture.
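As a rough illustration of the thermal-noise bound mentioned above, the following back-of-envelope sketch (illustrative values, not the thesis's analysis) estimates the smallest sampling capacitor allowed by kT/C noise at a given resolution, and the corresponding DAC switching power:

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def thermal_noise_limited_cap(n_bits, v_fs, temp=300.0):
        """Smallest sampling capacitor for which kT/C noise stays below
        the quantization noise power (v_fs**2 / 12) / 4**n_bits."""
        q_noise_power = (v_fs ** 2 / 12.0) / (4.0 ** n_bits)
        return k_B * temp / q_noise_power

    def dac_switching_power(c_total, v_ref, f_sample):
        """Crude upper bound: recharge the whole DAC array once per conversion."""
        return c_total * v_ref ** 2 * f_sample

    # Assumed operating point: ~9 bits, 1 V full scale, 1 kS/s
    c_min = thermal_noise_limited_cap(9, 1.0)
    print(f"C_min ~ {c_min * 1e15:.1f} fF")
    print(f"P_dac ~ {dac_switching_power(c_min, 1.0, 1e3) * 1e12:.2f} pW")

At low resolution this capacitor (and hence the switching power) is tiny, so digital switching dominates; each extra bit quadruples the noise-limited capacitor, which is why thermal noise takes over at medium-to-high resolution.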
223

Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing

January 2019 (has links)
abstract: A signal compressed using classical methods can be recovered by brute force, i.e., by searching for the non-zero entries component-wise; however, such sparse solutions require combinatorial searches with high computational cost. In this thesis, two Bayesian approaches are instead considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is therefore a different (i.e., misspecified) model. To estimate the posterior distribution in the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal within a Bayesian framework is one class of algorithms for solving the sparse problem; all such classes aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for this optimization problem and its applications, which include magnetic resonance imaging (MRI), image acquisition in radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function (point estimation) or by recovering a distribution of the sparse vector (Bayesian estimation). Although the Bayesian framework provides an extra degree of freedom to assume a distribution directly suited to the problem of interest, it is hard to find theoretical guarantees of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution. In the simulation study, a general Bayesian Cramér-Rao bound (BCRB) is derived along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated against the mean square error (MSE) performance of the aforementioned algorithms. A quantification of the performance in terms of gains versus losses is also introduced as one main finding of this report. / Dissertation/Thesis / Masters Thesis Computer Engineering 2019
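A minimal sketch of the measurement model described above, a Bernoulli-Gaussian sparse vector observed through underdetermined noisy measurements (dimensions and noise level are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def bernoulli_gaussian(n, p_active=0.1, sigma_x=1.0):
        """x_i = b_i * g_i with b_i ~ Bernoulli(p_active), g_i ~ N(0, sigma_x**2)."""
        support = rng.random(n) < p_active
        return support * rng.normal(0.0, sigma_x, n)

    n, m = 200, 80                                 # ambient dimension > measurements
    x = bernoulli_gaussian(n)                      # sparse ground truth
    A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # i.i.d. Gaussian sensing matrix
    w = rng.normal(0.0, 0.05, m)                   # additive measurement noise
    y = A @ x + w                                  # underdetermined observations

A GAMP-based estimator (correctly specified BG prior) or an SBL estimator (misspecified GN prior) would then recover the posterior of x from (y, A).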
224

Trade openness and economic growth: experience from three SACU countries

Malefane, Malefa Rose 02 1900 (has links)
This study uses annual data for the period 1975-2014 for South Africa and Botswana, and 1979-2013 for Lesotho, to examine empirically the impact of trade openness on economic growth in these three Southern African Customs Union (SACU) countries. The motivation for this study is that SACU countries are governed by a common agreement that oversees the movement of goods entering the SACU area. However, although these countries are in a common union, they have quite different levels of development: Lesotho is a lower-middle-income and least developed country, whereas Botswana and South Africa are upper-middle-income economies. Thus, these disparities in the levels of economic development of SACU countries are expected to have different implications for the extent to which trade openness affects economic growth. It is against this background that the current study seeks to examine what impact trade openness has on economic growth in each of the three selected countries. To check the robustness of the empirical results, this study uses four equations based on four different indicators of trade openness to examine the linkage between trade openness and economic growth. While Equation 1, Equation 2 and Equation 3 employ trade-based indicators of openness, Equation 4 uses a modified version of the UNCTAD (2012a) trade openness index that incorporates differences in country size and geography. Using the autoregressive distributed lag (ARDL) bounds testing approach to cointegration and error-correction modelling, the study found that the impact of trade openness on economic growth varies across the three SACU countries. Based on the results for the first three equations, the study found that trade openness has a positive impact on economic growth in South Africa and Botswana, whereas it has no significant impact on economic growth in Lesotho. Based on the Equation 4 results, the study found that after taking the differences in country size and geography into account, trade openness has a positive impact on economic growth in Botswana, but an insignificant impact in South Africa and Lesotho. For South Africa and Botswana, the main recommendation from this study is that policy makers should pursue policies that promote total trade to increase economic growth in both the short and the long run. For Lesotho, the study recommends, among other things, the adoption of policies aimed at enhancing human capital and infrastructural development, as well as the broadening of exports, so as to enable the economy to grow to the threshold level necessary for realising significant gains from trade. / Economics
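For reference, a generic two-variable ARDL bounds-testing specification of the kind used in the study (the study's actual equations involve additional regressors and country-specific lag orders) reads

\[
\Delta y_t = \alpha_0 + \sum_{i=1}^{p} \beta_i \, \Delta y_{t-i} + \sum_{j=0}^{q} \gamma_j \, \Delta x_{t-j} + \theta_1 y_{t-1} + \theta_2 x_{t-1} + \varepsilon_t,
\]

where the presence of a long-run (cointegrating) relationship is assessed with a bounds F-test of $H_0\colon \theta_1 = \theta_2 = 0$, and the short-run dynamics are captured by the differenced terms.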
225

Contributions to arithmetic complexity and compression

Lagarde, Guillaume 05 July 2018 (has links)
This thesis explores two territories of computer science: complexity and compression. More precisely, in a first part, we investigate the power of non-commutative arithmetic circuits, which compute multivariate non-commutative polynomials. For that, we introduce various models of computation that are restricted in the way they are allowed to compute monomials. These models generalize previous ones that have been widely studied, such as algebraic branching programs. The results are of three different types. First, we give strong lower bounds on the number of arithmetic operations needed to compute some polynomials such as the determinant or the permanent. Second, we design deterministic polynomial-time algorithms to solve the white-box polynomial identity testing problem. Third, we exhibit a link between automata theory and non-commutative arithmetic circuits that allows us to derive some old and new tight lower bounds for some classes of non-commutative circuits, using a measure based on the rank of a so-called Hankel matrix. A second part is concerned with the analysis of the data compression algorithm called Lempel-Ziv. Although this algorithm is widely used in practice, little is known about its stability. Our main result shows that an infinite word compressible by LZ'78 can become incompressible by adding a single bit in front of it, thus closing a question proposed by Jack Lutz in the late 90s under the name "one-bit catastrophe". We also give tight bounds on the maximal possible variation between the compression ratio of a finite word and its perturbation, when one bit is added in front of it.
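A minimal sketch of the LZ78 phrase parsing that the stability analysis concerns (an illustrative helper, not code from the thesis); the compression ratio of a word is governed by the number of phrases in its parsing:

    def lz78_parse(w):
        """Return the LZ78 parsing of w: each new phrase is the shortest
        prefix of the remaining input not yet seen as a phrase."""
        phrases = set()
        parsing = []
        current = ""
        for c in w:
            current += c
            if current not in phrases:
                phrases.add(current)
                parsing.append(current)
                current = ""
        if current:
            parsing.append(current)  # trailing phrase may repeat an earlier one
        return parsing

    print(lz78_parse("ababababab"))  # ['a', 'b', 'ab', 'aba', 'ba', 'b']

The one-bit catastrophe asks how much len(lz78_parse(b + w)) can exceed len(lz78_parse(w)) when a single bit b is prepended to w.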
226

On the Defining Ideals of Rees Rings for Determinantal and Pfaffian Ideals of Generic Height

Edward F Price (9188318) 04 August 2020 (has links)
This dissertation is based on joint work with Monte Cooper and is broken into two main parts, both of which study the defining ideals of the Rees rings of determinantal and Pfaffian ideals of generic height. In both parts, we attempt to place degree bounds on the defining equations.

The first part of the dissertation consists of Chapters 3 to 5. Let $R = K[x_{1},\ldots,x_{d}]$ be a standard graded polynomial ring over a field $K$, and let $I$ be a homogeneous $R$-ideal generated by $s$ elements. Then there exists a polynomial ring $\mathcal{S} = R[T_{1},\ldots,T_{s}] = K[x_{1},\ldots,x_{d},T_{1},\ldots,T_{s}]$ of which the defining ideal of $\mathcal{R}(I)$ is an ideal. The polynomial ring $\mathcal{S}$ comes equipped with a natural bigrading given by $\deg x_{i} = (1,0)$ and $\deg T_{j} = (0,1)$. Here, we attempt to use specialization techniques to place bounds on the $x$-degrees (the first component of the bidegrees) of the defining equations, i.e., of the minimal generators of the defining ideal of $\mathcal{R}(I)$. We obtain degree bounds by using known results in the generic case and specializing. The key tools are the methods developed by Kustin, Polini, and Ulrich to obtain degree bounds from approximate resolutions. We recover known degree bounds for ideals of maximal minors and for submaximal Pfaffians of an alternating matrix. Additionally, we obtain $x$-degree bounds for sufficiently large $T$-degrees in other cases of determinantal ideals of a matrix and Pfaffian ideals of an alternating matrix. We are unable to obtain degree bounds for determinantal ideals of symmetric matrices due to a lack of results in the generic case; however, we develop the tools necessary to obtain degree bounds once similar results are proven for generic symmetric matrices.

The second part of this dissertation is Chapter 6, where we attempt to find a bound on the $T$-degrees of the defining equations of $\mathcal{R}(I)$ when $I$ is a nonlinearly presented homogeneous perfect Gorenstein ideal of grade three having second analytic deviation one that is of linear type on the punctured spectrum. We restrict to the case where $\mathcal{R}(I)$ is not Cohen-Macaulay. This is a natural next step following the work of Morey, Johnson, and Kustin-Polini-Ulrich. Based on extensive computation in Macaulay2, we give a conjecture for the relation type of $I$ and provide some evidence for the conjecture. In an attempt to prove the conjecture, we obtain results about the defining ideals of general fibers of rational maps, which may be of independent interest. We end with some examples where the bidegrees of the defining equations exhibit unusual behavior.
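For orientation, the standard definitions behind the objects above: writing $I = (f_{1},\ldots,f_{s})$, the Rees ring and its defining ideal are

\[
\mathcal{R}(I) = R[It] = \bigoplus_{n \ge 0} I^{n} t^{n} \subseteq R[t],
\qquad
\mathcal{J} = \ker\bigl(\mathcal{S} \to \mathcal{R}(I),\ x_{i} \mapsto x_{i},\ T_{j} \mapsto f_{j} t\bigr),
\]

so $\mathcal{J}$ is bihomogeneous for the bigrading $\deg x_{i} = (1,0)$, $\deg T_{j} = (0,1)$, and the $x$- and $T$-degree bounds sought above are bounds on the bidegrees of its minimal generators.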
227

Directional constraint qualifications and optimality conditions with application to bilevel programs

Bai, Kuang 18 July 2020 (has links)
The main purpose of this dissertation is to investigate directional constraint qualifications and necessary optimality conditions for nonsmooth set-constrained mathematical programs. First, we study sufficient conditions for metric subregularity of the set-constrained system. We introduce the directional version of quasi-/pseudo-normality as a sufficient condition for metric subregularity, which is weaker than classical quasi-/pseudo-normality, respectively. We then apply our results to complementarity and Karush-Kuhn-Tucker systems. Second, we study directional optimality conditions of bilevel programs. It is well known that the value function reformulation of bilevel programs provides equivalent single-level optimization problems, which are nonsmooth and never satisfy the usual constraint qualifications, such as the Mangasarian-Fromovitz constraint qualification (MFCQ). We show that even the first-order sufficient condition for metric subregularity (which is generally weaker than MFCQ) fails at each feasible point of bilevel programs. We introduce the directional Clarke calmness condition and show that, under this condition, the directional necessary optimality condition holds. We perform directional sensitivity analysis of the value function and propose directional quasi-normality as a sufficient condition for directional Clarke calmness. / Graduate / 2021-07-07
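Schematically, the value function reformulation referred to above replaces the bilevel program with the single-level problem (a generic form, with $V$ the lower-level value function):

\[
\min_{x,\,y} \; F(x,y) \quad \text{s.t.} \quad f(x,y) \le V(x), \;\; g(x,y) \le 0,
\qquad
V(x) := \min_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\};
\]

since $f(x,y) \le V(x)$ holds with equality at every feasible point, no feasible point satisfies it strictly, which is why MFCQ and even the first-order sufficient condition for metric subregularity fail, motivating the directional calmness conditions developed here.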
228

Shop-Scheduling Problems with Transportation

Knust, Sigrid 26 September 2000 (has links)
In this thesis scheduling problems with transportation aspects are studied. Classical scheduling models for problems with multiple operations are the so-called shop-scheduling models. In these models, jobs consisting of different operations have to be planned on certain machines in such a way that a given objective function is minimized. Each machine may process at most one operation at a time, and operations belonging to the same job cannot be processed simultaneously. We generalize these classical shop-scheduling problems by assuming that the jobs additionally have to be transported between the machines. This transportation has to be done by robots, each of which can handle at most one job at a time. Besides the transportation times that occur while jobs are moved, we also consider empty moving times, which arise when a robot moves empty from one machine to another. Two types of problems are distinguished: on the one hand, problems without transportation conflicts (i.e. each transportation can be performed without delay), and on the other hand, problems where transportation conflicts may arise due to a limited capacity of transport robots. In the first part of this thesis several new complexity results are derived for flow-shop problems with a single robot. Since even very special cases of these problems are already NP-hard, in the second part of this thesis some techniques are developed for dealing with these hard problems in practice. We concentrate on the job-shop problem with a single robot and the makespan objective. First, we study the subproblem which arises for the robot when some scheduling decisions for the machines have already been made. The resulting single-machine problem can be regarded as a generalization of the traveling salesman problem with time windows, where additionally minimal time-lags between certain jobs have to be respected and the makespan has to be minimized. For this single-machine problem, we adapt immediate selection techniques used for other scheduling problems and calculate lower bounds based on linear programming and the technique of column generation. To determine upper bounds for the single-machine problem, we develop an efficient local search algorithm which finds good solutions in reasonable time. This algorithm is integrated into a local search algorithm for the job-shop problem with a single robot. Finally, the proposed algorithms are tested on different test data and computational results are presented.
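As a toy illustration of the robot's role (data layout assumed for illustration; the single-machine subproblem studied in the thesis additionally handles time windows and minimal time-lags), the finish time of one fixed transport order for a single robot can be evaluated as follows:

    def robot_order_makespan(order, release, transport, empty_move, start_at):
        """Finish time when a single robot performs the transport tasks in
        `order`. order: list of (src, dst) machine pairs; release[i]: earliest
        pickup time of task i; transport[(a, b)]: loaded travel time a -> b;
        empty_move[(a, b)]: empty travel time a -> b; start_at: initial machine."""
        t, at = 0.0, start_at
        for i, (src, dst) in enumerate(order):
            t += empty_move.get((at, src), 0.0)  # drive empty to the pickup machine
            t = max(t, release[i])               # wait until the job is available
            t += transport[(src, dst)]           # loaded transport
            at = dst
        return t

    # Tiny example with three machines
    transport = {("M1", "M2"): 3.0, ("M2", "M3"): 2.0}
    empty = {("M2", "M2"): 0.0}
    print(robot_order_makespan([("M1", "M2"), ("M2", "M3")], [0.0, 5.0],
                               transport, empty, "M1"))  # 7.0

A local search over the task order, as developed in the thesis, would then try to minimize this quantity subject to the scheduling constraints.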
229

PAC-Bayesian estimation of low-rank matrices

MAI, The Tien 23 June 2017 (has links)
The first two parts of the thesis study pseudo-Bayesian estimation for the problems of matrix completion and quantum tomography. A novel low-rank-inducing prior distribution is proposed for each problem. The statistical performance is examined: in each case we provide the rate of convergence of the pseudo-Bayesian estimator. Our analysis relies on PAC-Bayesian oracle inequalities. We also propose an MCMC algorithm to compute our estimator. Its numerical behavior is tested on simulated and real data sets. The last part of the thesis studies the lifelong learning problem, a scenario of transfer learning where information is transferred from one learning task to another. We propose an online formalization of the lifelong learning problem. Then, a meta-algorithm for lifelong learning is proposed, relying on the idea of exponentially weighted aggregation. We provide a regret bound for this strategy. One of the nice points of our analysis is that it makes no assumption on the learning algorithm used within each task. Some applications are studied in detail: a finite subset of relevant predictors, the single index model, and dictionary learning.
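A generic sketch of the exponentially weighted aggregation idea the meta-algorithm builds on (the learning rate and the loss format are assumptions; the transfer mechanism across tasks in the thesis is more involved):

    import numpy as np

    def exponentially_weighted_aggregation(losses, eta=0.5):
        """Online aggregation of K experts over T rounds.
        losses: (T, K) array, losses[t, k] = loss of expert k at round t.
        Returns the (T, K) array of weights used at each round."""
        T, K = losses.shape
        w = np.full(K, 1.0 / K)               # uniform prior over experts
        history = np.empty((T, K))
        for t in range(T):
            history[t] = w
            w = w * np.exp(-eta * losses[t])  # downweight high-loss experts
            w = w / w.sum()                   # renormalize to a distribution
        return history

Predicting with the weighted mixture of experts at each round is what yields a regret bound of the kind discussed above.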
230

Incorporating Functionally Graded Materials and Precipitation Hardening into Microstructure Sensitive Design

Lyon, Mark Edward 07 August 2003 (has links) (PDF)
The methods of microstructure sensitive design (MSD) are applied to the design of functionally graded materials. Analysis models are presented to allow the design of a compliant derailleur as a case study, and constraints are placed on the design. Several methods are presented for relating elements of the microstructure to the properties of the material, including Taylor yield theory, Hill elastic bounds, and precipitation hardening. Applying n-point statistics to the MSD framework is also discussed. Some results are presented on the information content of the 2-point correlation statistics that follow from the methods used to integrate functionally graded materials into MSD. For the compliant beam case study, the best design (98% Al-2% Li) was a 97% improvement over the worst (100% Al). The improvements were primarily due to precipitation hardening, although anisotropy also significantly impacted the design. Under the constraints for the design, allowing the beam to be functionally graded had little effect on the overall design unless significant stiffening occurred along with particulate formation.
