31

Computing Bounds for Linear Functionals of Exact Weak Solutions to Poisson’s Equation

Sauer-Budge, A.M., Huerta, A., Bonet, J., Peraire, Jaime 01 1900
We present a method for Poisson’s equation that computes guaranteed upper and lower bounds for the values of linear functional outputs of the exact weak solution of the infinite dimensional continuum problem using traditional finite element approximations. The guarantee holds uniformly for any level of refinement, not just in the asymptotic limit of refinement. Given a finite element solution and its output adjoint solution, the method can be used to provide a certificate of precision for the output with an asymptotic complexity which is linear in the number of elements in the finite element discretization. / Singapore-MIT Alliance (SMA)
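For orientation, a certificate of this kind brackets the exact output between two computable numbers; a schematic statement (the symbols s^-, s^+, u_h, ψ_h are ours for illustration, not taken from the abstract) is:

```latex
% Schematic form of the output certificate (illustrative notation, not the paper's):
% u is the exact weak solution of Poisson's equation, \ell a bounded linear output,
% and u_h, \psi_h the finite element primal and adjoint solutions used to build the bounds.
s^{-}(u_h, \psi_h) \;\le\; \ell(u) \;\le\; s^{+}(u_h, \psi_h)
\qquad \text{on every mesh, with } s^{+} - s^{-} \to 0 \text{ under refinement.}
```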
32

Finite Element Output Bounds for a Stabilized Discretization of Incompressible Stokes Flow

Peraire, Jaime, Budge, Alexander M. 01 1900
We introduce a new method for computing a posteriori bounds on engineering outputs from finite element discretizations of the incompressible Stokes equations. The method results from recasting the output problem as a minimization statement without resorting to an error formulation. The minimization statement engenders a duality relationship which we solve approximately by Lagrangian relaxation. We demonstrate the method for a stabilized equal-order approximation of Stokes flow, a problem to which previous output bounding methods do not apply. The conceptual framework for the method is quite general and shows promise for application to stabilized nonlinear problems, such as Burgers' equation and the incompressible Navier-Stokes equations, as well as potential for compressible flow problems. / Singapore-MIT Alliance (SMA)
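The bounding mechanism sketched here rests on weak duality: once the output is recast as a constrained minimization, every Lagrange multiplier yields a computable one-sided bound. A generic statement (notation ours, not the paper's) is:

```latex
% Generic weak-duality relation behind a Lagrangian-relaxation bound (illustrative
% notation): the output s is recast as a minimum over the constraint set c(v) = 0,
% and any multiplier \lambda gives a lower bound that can be evaluated approximately.
s \;=\; \min_{c(v) = 0} J(v) \;\ge\; \inf_{v}\Big( J(v) + \lambda\, c(v) \Big) \;=:\; s^{-}(\lambda)
\qquad \text{for every } \lambda .
```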
33

Reduced-Basis Output Bound Methods for Parametrized Partial Differential Equations

Prud'homme, C., Rovas, D.V., Veroy, K., Machiels, L., Maday, Y., Patera, Anthony T., Turinici, G. 01 1900
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced-basis approximations -- Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation -- relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures -- methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage -- in which, given a new parameter value, we calculate the output of interest and associated error bound -- depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control. / Singapore-MIT Alliance (SMA)
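A minimal sketch of the off-line/on-line split for an affinely parametrized problem A(mu) = sum_q theta_q(mu) A_q may help fix ideas. The operators, parameter samples, and output vector below are placeholders, and the a posteriori error estimator of component (ii) is omitted:

```python
import numpy as np

def offline(A_q, f, ell, mu_samples, theta):
    """Solve the full problem at N selected parameters and pre-project each affine term."""
    snapshots = [np.linalg.solve(sum(th(mu) * Aq for th, Aq in zip(theta, A_q)), f)
                 for mu in mu_samples]
    W, _ = np.linalg.qr(np.column_stack(snapshots))       # orthonormal basis of W_N
    AN_q = [W.T @ Aq @ W for Aq in A_q]                    # small N x N matrices
    return AN_q, W.T @ f, W.T @ ell

def online(mu, AN_q, fN, ellN, theta):
    """Cost depends only on N and the number of affine terms, not on the full FE size."""
    AN = sum(th(mu) * ANq for th, ANq in zip(theta, AN_q))
    return ellN @ np.linalg.solve(AN, fN)                  # reduced-basis output s_N(mu)

# Toy two-term affine operator (placeholders, not a real PDE discretization).
n = 50
A_q = [np.eye(n), np.diag(np.linspace(1.0, 2.0, n))]
theta = [lambda mu: 1.0, lambda mu: mu]
f, ell = np.ones(n), np.ones(n) / n
AN_q, fN, ellN = offline(A_q, f, ell, mu_samples=[0.1, 1.0, 10.0], theta=theta)
print(online(5.0, AN_q, fN, ellN, theta))
```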
34

Online Learning of Non-stationary Sequences

Monteleoni, Claire 12 June 2003
We consider an online learning scenario in which the learner can make predictions on the basis of a fixed set of experts. The performance of each expert may change over time in a manner unknown to the learner. We formulate a class of universal learning algorithms for this problem by expressing them as simple Bayesian algorithms operating on models analogous to Hidden Markov Models (HMMs). We derive a new performance bound for such algorithms which is considerably simpler than existing bounds. The bound provides the basis for learning the rate at which the identity of the optimal expert switches over time. We find an analytic expression for the a priori resolution at which we need to learn the rate parameter. We extend our scalar switching-rate result to models of the switching-rate that are governed by a matrix of parameters, i.e. arbitrary homogeneous HMMs. We apply and examine our algorithm in the context of the problem of energy management in wireless networks. We analyze the new results in the framework of Information Theory.
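As a rough illustration of a Bayesian update over a switching-expert model (a Fixed-Share-style algorithm; the losses, the number of experts, and the switching rate alpha below are made-up placeholders, and this is not the thesis's exact algorithm):

```python
import numpy as np

def update_posterior(p, losses, alpha):
    """One round: reweight by expert likelihoods, then apply the switching dynamics."""
    p = p * np.exp(-losses)                    # e.g. log-loss likelihood of each expert
    p /= p.sum()
    n = len(p)
    # Stay with the same expert with prob 1 - alpha, switch uniformly otherwise.
    return (1.0 - alpha) * p + alpha * (1.0 - p) / (n - 1)

rng = np.random.default_rng(0)
p = np.ones(4) / 4                             # uniform prior over 4 experts
for losses in rng.random((10, 4)):             # placeholder loss sequence
    p = update_posterior(p, losses, alpha=0.05)
print(p)                                       # posterior used to weight the experts
```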
35

Modelling queueing networks with blocking using probability mass fitting

Tancrez, Jean-Sébastien 18 March 2009
In this thesis, we are interested in the modelling of queueing networks with finite buffers and with general service time distributions. Queueing network models have proven to be very useful tools for evaluating the performance of complex systems in many application fields (manufacturing, communication networks, traffic flow, etc.). In order to analyze such networks, the original distributions are most often transformed into tractable distributions, so that Markov theory can then be applied. Our main originality lies in this step of the modelling process. We propose to discretize the original distributions by probability mass fitting (PMF). The PMF discretization is simple: the probability masses on regular intervals are computed and aggregated on a single value in the corresponding interval. PMF has the advantages of being simple and refinable and of preserving the shape of the distribution. Moreover, we show that it does not require more phases, and thus more computational effort, than competing methods. From the distributions transformed by PMF, the evolution of the system can then be modelled by a discrete Markov chain, and the performance of the system can be evaluated from the chain. This global modelling method leads to various interesting results. First, we propose two methodologies leading to bounds on the cycle time of the system; in particular, a tight lower bound on the cycle time can be computed. Second, probability mass fitting leads to accurate approximations of the performance measures (cycle time, work-in-progress, flow time, etc.). Together with the bounds, the approximations allow us to get a good grasp on the exact measure with certainty. Third, the cycle time distribution can be computed in discretized time and proves to be a good approximation of the original cycle time distribution. The distribution provides more information on the behavior of the system than the isolated expectation, to which other methods are limited. Finally, in order to analyze larger networks, the decomposition technique can be applied after PMF. We show that the accuracy of the performance evaluation remains good, and that the ability of PMF to accurately estimate the distributions improves the application of the decomposition. In conclusion, we believe that probability mass fitting can be considered a valuable alternative for building tractable distributions for the analytical modelling of queueing networks.
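A minimal sketch of the PMF discretization step follows; the service-time distribution, truncation point, and number of intervals are illustrative choices, not taken from the thesis, and the representative value used here is the conditional mean of each interval:

```python
import numpy as np
from scipy import stats

def pmf_discretize(dist, upper, n_intervals):
    """Cut [0, upper] into regular intervals and place each interval's mass on one point."""
    edges = np.linspace(0.0, upper, n_intervals + 1)
    points, masses = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        mass = dist.cdf(b) - dist.cdf(a)
        if mass > 0:
            xs = np.linspace(a, b, 200)                       # conditional mean by quadrature
            points.append(np.trapz(xs * dist.pdf(xs), xs) / mass)
            masses.append(mass)
    masses = np.asarray(masses)
    return np.asarray(points), masses / masses.sum()          # renormalize the truncated tail

points, masses = pmf_discretize(stats.gamma(a=2.0, scale=1.5), upper=15.0, n_intervals=10)
print(points, masses)
```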
36

Classification of Perfect codes in Hamming Metric

Sabir, Tanveer January 2011
The study of coding theory aims to detect and correct errors during the transmission of data. It enhances the quality of data transmission and provides better control over noisy channels. Perfect codes are collected and analyzed within the setting of the Hamming metric. This classification shows that only a few perfect codes exist. Perfect codes do not guarantee perfection by all means but merely satisfy a certain bound and certain properties. The detection and correction of errors is always very important for better data transmission.
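The bound in question is the sphere-packing (Hamming) bound, which a perfect code meets with equality; a small check (the two codes below are standard examples chosen for illustration):

```python
from math import comb

# A q-ary code of length n with M codewords correcting e errors satisfies
#   M * sum_{i=0}^{e} C(n, i) * (q-1)^i <= q^n,
# with equality exactly when the code is perfect.
def meets_sphere_packing_bound(n, M, e, q=2):
    sphere = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return M * sphere == q ** n

print(meets_sphere_packing_bound(n=7, M=16, e=1))        # [7,4] Hamming code: True
print(meets_sphere_packing_bound(n=23, M=2**12, e=3))    # binary Golay code: True
```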
37

Time Bounds for Shared Objects in Partially Synchronous Systems

Wang, Jiaqi 2011 December 1900
Shared objects are a key component in today's large distributed systems. Linearizability is a popular consistency condition for such shared objects that gives the illusion of sequential execution of operations. The time bound of an operation is the worst-case time complexity from the operation invocation to its response. Some time bounds have been proved for certain operations on linearizable shared objects in partially synchronous systems, but gaps remain between the upper and lower time bounds for each operation. In this work, the goal is to narrow or eliminate the gaps and find optimally fast implementations. To reach this goal, we prove larger lower bounds and show smaller upper bounds (compared to 2d for all operations in previous folklore implementations) by proposing an implementation for a shared object with an arbitrary data type in distributed systems of n processes in which every message delay is bounded within [d-u, d] and the maximum skew between processes' clocks is epsilon. Considering any operation for which there exist two instances such that individually, each instance is legal but in sequence they are not, we prove a lower bound of d + min{epsilon, u, d/3}, improving from d, and show this bound is tight when epsilon < d/3 and epsilon < u. Considering any operation for which there exist k instances such that each instance separately is legal and any sequence of them is legal, but the state of the object is different after different sequences, we prove a lower bound of (1-1/k)u, improving from u/2, and show this bound is tight when k = n. A pure mutator only modifies the object but does not return anything about the object. A pure accessor does not modify the object. For a pure mutator OP1 and a pure accessor OP2, if given a set of instances of OP1, the state of the object reflects the order in which the instances occur and an instance of OP2 can detect whether an instance of OP1 occurs, we prove the sum of the time bounds for OP1 and OP2 is at least d + min{epsilon, u, d/3}, improving from d. Our implementation yields an upper bound of d + 2*epsilon.
38

Parameterized algorithms and computational lower bounds: a structural approach

Xia, Ge 30 October 2006
Many problems of practical significance are known to be NP-hard, and hence, are unlikely to be solved by polynomial-time algorithms. There are several ways to cope with the NP-hardness of a certain problem. The most popular approaches include heuristic algorithms, approximation algorithms, and randomized algorithms. Recently, parameterized computation and complexity have been receiving a lot of attention. By taking advantage of small or moderate parameter values, parameterized algorithms provide new avenues for practically solving problems that are theoretically intractable. In this dissertation, we design efficient parameterized algorithms for several well-known NP-hard problems and prove strong lower bounds for some others. In doing so, we place emphasis on the development of new techniques that take advantage of the structural properties of the problems. We present a simple parameterized algorithm for Vertex Cover that uses polynomial space and runs in time O(1.2738^k + kn). It improves both the previous O(1.286^k + kn)-time polynomial-space algorithm by Chen, Kanj, and Jia, and the very recent O(1.2745^k k^4 + kn)-time exponential-space algorithm by Chandran and Grandoni. This algorithm stands out for both its performance and its simplicity. Essential to the design of this algorithm are several new techniques that use structural information of the underlying graph to bound the search space. For Vertex Cover on graphs with degree bounded by three, we present a still better algorithm that runs in time O(1.194^k + kn), based on an “almost-global” analysis of the search tree. We also show that an important structural property of the underlying graphs – the graph genus – largely dictates the computational complexity of some important graph problems including Vertex Cover, Independent Set and Dominating Set. We present a set of new techniques that allows us to prove almost tight computational lower bounds for some NP-hard problems, such as Clique, Dominating Set, Hitting Set, Set Cover, and Independent Set. The techniques are further extended to derive computational lower bounds on polynomial time approximation schemes for certain NP-hard problems. Our results illustrate a new approach to proving strong computational lower bounds for some NP-hard problems under reasonable conditions.
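To illustrate the bounded-search-tree idea underlying such parameterized algorithms, here is the classical O(2^k) branching for Vertex Cover; it is a simplified sketch, not the dissertation's O(1.2738^k + kn) algorithm, whose structural branching rules are not reproduced here:

```python
def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size at most k."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Any cover must contain u or v: branch on both choices with budget k - 1.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1)
            or has_vertex_cover([e for e in edges if v not in e], k - 1))

print(has_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)], 2))   # 4-cycle: True
print(has_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)], 1))   # False, needs two vertices
```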
39

Information, complexity and structure in convex optimization

Guzman Paredes, Cristobal 08 June 2015
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. Classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of the first-order oracle for smooth functions) for nonsmooth convex minimization, both in the large-scale and low-scale regimes; and the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle. Our work extends the applicability of lower bounds in two directions: Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation for any convex function via infimal convolution. As a consequence, we derive lower bounds for ℓ_p/ℓ_q setups, where 1 ≤ p, q ≤ ∞, and extend them to the matrix analogue: smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm. The major consequences of this result are the near-optimality of the Conditional Gradient method over box-type domains (p = q = ∞), and the near-optimality of Nesterov's accelerated method over the cross-polytope (p = q = 1). Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds for the complexity of nonsmooth convex optimization. We introduce an information-theoretic method to analyze the complexity of oracle-based algorithms solving a random instance, based on the reconstruction principle. Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend to hold with high probability, and for algorithms with bounded probability of error, via Fano's inequality. Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setup.
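A standard instance of the smoothing-by-infimal-convolution idea mentioned above is the Moreau envelope; the thesis works with a general smoothing kernel, of which this is only the simplest example:

```latex
% Moreau envelope: infimal convolution of f with a quadratic kernel. For convex,
% L-Lipschitz f, the envelope f_mu is convex, (1/mu)-smooth, and uniformly close to f,
% which is exactly the mechanism a smoothing kernel exploits.
f_{\mu}(x) \;=\; \inf_{y}\Big\{ f(y) + \tfrac{1}{2\mu}\,\|x - y\|_{2}^{2} \Big\},
\qquad
f(x) - \tfrac{\mu L^{2}}{2} \;\le\; f_{\mu}(x) \;\le\; f(x).
```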
40

Mixed framework for Darcy-Stokes mixtures

Taicher, Abraham Levy 09 February 2015
We consider the system of equations arising from mantle dynamics introduced by McKenzie (J. Petrology, 1985). In this multi-phase model, the fluid melt velocity obeys Darcy's law while the deformable "solid" matrix is governed by a highly viscous Stokes equation. The system is then coupled through mass conservation and compaction relations. Together these equations form a coupled Darcy-Stokes system on a continuous single-domain mixture of fluid and matrix. The porosity φ, representing the relative volume of fluid melt to the bulk volume, is assumed to be much smaller than one. When coupled with solute transport and thermal evolution in a time-dependent problem, the model transitions dynamically from a non-porous single-phase solid to a two-phase porous medium. Such mixture models have an advantage for numerical approximation since the free boundary between the one- and two-phase regions need not be determined explicitly. The equations of mantle dynamics apply to a wide range of applications in deep earth physics such as mid-ocean ridges, subduction zones, and hot-spot volcanism, as well as to glacier dynamics and other two-phase flows in porous media. Mid-ocean ridges form when viscous corner flow of the solid mantle focuses fluid toward a central ridge. Melt is believed to migrate upward until it reaches the lithospheric "tent" where it then moves toward the ridge in a high porosity band. Simulation of this physical phenomenon requires confidence in numerical methods to handle highly heterogeneous porosity as well as the single-phase to two-phase transition. In this work we present a standard mixed finite element method for the equations of mantle dynamics and investigate its limitations for vanishing porosity. While the method is stable and optimally convergent for porosity bounded away from zero, the stability estimates we obtain suggest, and numerical results show, that it becomes unstable as porosity approaches zero. Moreover, the fluid pressure is no longer a physical variable when the fluid phase disappears and thus is not a good variable for numerical methods. Inspired by the stability estimates of the standard method, we develop a novel stable mixed method with uniqueness and existence of solutions by studying a linear degenerate elliptic sub-problem akin to the Darcy part of the full model: [mathematical equation], where a and b satisfy a(0)=b(0)=0 and are otherwise positive, and the porosity φ ≥ 0 may be zero on a set of positive measure. Using scaled variables and mild assumptions on the regularity of φ, we develop a practical mass-conservative method based on lowest-order Raviart-Thomas finite elements. Finally, we adapt the numerical method for the sub-problem to the full system of equations. We show optimal convergence for sufficiently smooth solutions for a compacting column and mid-ocean ridge-like corner flow examples, and investigate accuracy and stability for less regular problems.
