  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

On the Generalizations of Gershgorin's Theorem

Lee, Sang-Gu 01 May 1986 (has links)
This paper deals with generalizations of Gershgorin's theorem. The theorem is investigated and generalized in terms of contour integrals, directed graphs, convex analysis, and block matrices. These results are shown to apply to certain classes of matrices, such as stable and stochastic matrices, and examples illustrate the relationships among the resulting eigenvalue inclusion regions.
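As a concrete illustration of the classical theorem being generalized here (a sketch with a made-up example matrix, not taken from the thesis), the following checks numerically that every eigenvalue lies in the union of the Gershgorin row disks:

```python
import numpy as np

def gershgorin_disks(A):
    """Return (center, radius) pairs for the Gershgorin row disks of A."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# Example matrix (hypothetical, for illustration only).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, -2.0]])

disks = gershgorin_disks(A)
eigvals = np.linalg.eigvals(A)

# Gershgorin's theorem: every eigenvalue lies in the union of the disks.
for lam in eigvals:
    assert any(abs(lam - c) <= r + 1e-9 for c, r in disks)
```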
312

Constrained Motion Particle Swarm Optimization for Non-Linear Time Series Prediction

Sapankevych, Nicholas 13 March 2015 (has links)
Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting, weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications, and the models for these systems are usually not known a priori. Accurate and unbiased estimation of time series data produced by these systems cannot always be achieved using well-known linear techniques, so the estimation process requires more advanced time series prediction algorithms. One type of time series interpolation and prediction algorithm that has proven effective for these various types of applications is Support Vector Regression (SVR) [1], which is based on the Support Vector Machine (SVM) developed by Vapnik et al. [2, 3]. The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary, and not defined a priori. SVMs have also been shown to outperform other non-linear techniques, including neural-network-based methods such as multi-layer perceptrons. As with most time series prediction algorithms, there are challenges in applying a given heuristic to any general problem. One difficult challenge in using SVR to solve these types of problems is the selection of the free parameters associated with the SVR algorithm. There is no given heuristic for selecting the SVR free parameters, and the user is left to adjust them in an ad hoc manner. The focus of this dissertation is to present an alternative to this ad hoc approach of tuning SVR for time series prediction problems by using Particle Swarm Optimization (PSO) to assist in the SVR free parameter selection process.
Developed by Kennedy and Eberhart [4-8], PSO is a technique that emulates the process living creatures (such as birds or insects) use to discover food resources at a given geographic location. PSO has been proven to be an effective technique for many different kinds of optimization problems [9-11].
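To make the idea concrete, here is a minimal sketch of how PSO can drive free-parameter selection. The swarm update follows the canonical Kennedy–Eberhart form; the objective below is a hypothetical stand-in (in the dissertation's setting it would be the SVR validation error as a function of the free parameters, e.g. C, epsilon, and a kernel width):

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizer over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Pull each particle toward its personal best and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = x[np.argmin(vals)].copy()
    return g, g_val

# Hypothetical stand-in for SVR validation error over (C, epsilon, gamma);
# in practice this would train an SVR and return its held-out error.
def surrogate_error(p):
    C, eps, gamma = p
    return (np.log10(C) - 1.0) ** 2 + (eps - 0.1) ** 2 + (gamma - 0.5) ** 2

best, best_val = pso(surrogate_error, bounds=[(1e-1, 1e3), (0.0, 1.0), (0.01, 5.0)])
```

The inertia weight w and the acceleration constants c1, c2 are the usual tunable PSO parameters; values here are common defaults, not the dissertation's.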
313

Mean curvature flow with free boundary on smooth hypersurfaces

Buckland, John A. (John Anthony), 1978- January 2003 (has links)
Abstract not available
314

Introduction and analysis of tolerancing schemes in the early design phase

Socoliuc, Michel 09 July 2010 (has links) (PDF)
Recently I read that "the Roman bridges of antiquity could be considered inefficient by today's standards: they used too much stone, and an enormous amount of labor was needed to build them. Over the years, facing an equivalent problem, we have learned to use less material and to reduce the workload." We find the same concerns in mechanical design, where we continually try to propose ever more efficient systems that must be designed in less time, be cheaper to produce, and deliver performance at least equivalent to what has already been designed. In a classical design process, designers define a geometry free of any defect; then, since the production means cannot yield such final parts, they specify the tolerancing schemes that define the acceptable deviations guaranteeing the correct functioning of the system. However, this is done after the detailed drawings have been produced, which is too late. To address this problem, I present the integration, very early in the design life cycle, of an optimized validation process based on a digital mock-up directly linked to its functional representation (the functional mock-up), making it possible to validate standardized 3D tolerancing schemes. I first describe what is meant by "functional mock-up" and, above all, what this new definition adds to the digital definition.
Once this point is covered, I detail the links that ensure uniqueness of information within the working environment, as well as the processes that connect the functional and digital representations. I then detail the processes built on these concepts, whose purpose is to validate the tolerancing-scheme choices made in the early design phase. To do so, I begin by presenting the worst-case analysis (using deviation-domain models in particular), which guarantees the correct functioning of the mechanical assembly when all deviations lie within their respective zones (defined by the tolerances). Finally, I introduce what a statistical layer, coupled with the worst-case analysis using convex hulls, can bring in an industrial context, notably under time constraints.
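As a toy illustration of the contrast between a worst-case and a statistical treatment of deviations (hypothetical dimensions, and a one-dimensional simplification of the deviation-domain models discussed above):

```python
import math

# Hypothetical 1-D stack of three part dimensions (mm) with symmetric tolerances.
nominals = [20.0, 35.0, 12.5]
tolerances = [0.1, 0.2, 0.05]

nominal_stack = sum(nominals)
worst_case = sum(tolerances)                              # every deviation at its extreme
statistical = math.sqrt(sum(t ** 2 for t in tolerances))  # root-sum-square assumption

print(f"stack = {nominal_stack} mm, +/- {worst_case:.3f} (worst case)")
print(f"stack = {nominal_stack} mm, +/- {statistical:.3f} (statistical, RSS)")
```

The statistical bound is always tighter than the worst-case one, which is why a statistical layer on top of a worst-case analysis can relax tolerances, at the price of accepting a small probability of non-conformance.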
315

Metabolic design of dynamic bioreaction models

Provost, Agnès 06 November 2006 (has links)
This thesis is concerned with the derivation of bioprocess models intended for engineering purposes. In contrast with other techniques, the methodology used to derive a macroscopic model is based on available intracellular information. This information is extracted from the metabolic network describing the intracellular metabolism. Metabolic regulation is modeled by representing the metabolism of cultured cells with several metabolic networks. Here we present a systematic methodology for deriving macroscopic models when such metabolic networks are known. A separate model is derived for each "phase" of the culture. Each of these models relies upon a set of macroscopic bioreactions that summarizes the information contained in the corresponding metabolic network. Such a set of macroscopic bioreactions is obtained by translating the set of Elementary Flux Modes, which are well-known tools in the Systems Biology community. Elementary Flux Modes are described in the theory of Convex Analysis; they represent pathways across metabolic networks. Once the set of Elementary Flux Modes is computed and translated into macroscopic bioreactions, a general model can be obtained for the type of culture under investigation. However, depending on the size and complexity of the metabolic network, such a model may contain hundreds or even thousands of bioreactions. Since the kinetics of each bioreaction are parametrized with at least one parameter that needs to be identified, reducing the general model to a more manageable size is desirable. Convex Analysis provides further results that allow for the selection of a subset of the macroscopic bioreactions. This selection is based on the data collected from the available experiments. The selected bioreactions then allow for the construction of a model for the experiments at hand.
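A minimal sketch of the starting point of this construction, on a hypothetical three-reaction network (not one of the thesis's models): at steady state the admissible flux vectors v satisfy N v = 0, where N is the stoichiometric matrix, and the Elementary Flux Modes are the minimal such vectors that also respect the irreversibility constraints. For this toy network the steady-state space is one-dimensional, so the single mode can be read off directly from a null-space computation:

```python
import numpy as np

# Toy network: v1 imports metabolite A, v2 converts A -> B, v3 exports B.
# Rows = internal metabolites (A, B); columns = reactions (v1, v2, v3).
N = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

# Steady-state fluxes satisfy N v = 0; compute a null-space basis via SVD.
U, s, Vt = np.linalg.svd(N)
rank = int(np.sum(s > 1e-10))
v = Vt[rank]                      # here the null space is one-dimensional
v = v / v[np.argmax(np.abs(v))]   # normalize so the largest entry is 1

# v == (1, 1, 1): the single mode, i.e. the macroscopic bioreaction
# "import A -> export B" carried at equal rate through all three reactions.
```

For realistic networks the modes are not a null-space basis but the extreme rays of the flux cone, and dedicated enumeration tools are used; this example only shows the steady-state constraint the modes live in.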
316

Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi-Definite Program

Nunez, Manuel A., Freund, Robert M. 08 1900 (has links)
We present bounds on various quantities of interest regarding the central trajectory of a semi-definite program (SDP), where the bounds are functions of Renegar's condition number C(d) and other naturally-occurring quantities such as the dimensions n and m. The condition number C(d) is defined in terms of the data instance d = (A, b, C) for SDP; it is the inverse of a relative measure of the distance of the data instance to the set of ill-posed data instances, that is, data instances for which arbitrarily small perturbations can make the corresponding SDP either feasible or infeasible. We provide upper and lower bounds on the solutions along the central trajectory, and upper bounds on changes in solutions and objective function values along the central trajectory when the data instance is perturbed and/or when the path parameter defining the central trajectory is changed. Based on these bounds, we prove that the solutions along the central trajectory grow at most linearly and at a rate proportional to the inverse of the distance to ill-posedness, and grow at least linearly and at a rate proportional to the inverse of C(d)², as the trajectory approaches an optimal solution to the SDP. Furthermore, the change in solutions and in objective function values along the central trajectory is at most linear in the size of the changes in the data. All such bounds involve polynomial functions of C(d), the size of the data, the distance to ill-posedness of the data, and the dimensions n and m of the SDP.
317

On an Extension of Condition Number Theory to Non-Conic Convex Optimization

Freund, Robert M., Ordóñez, Fernando, 1970- 02 1900 (has links)
The purpose of this paper is to extend, as much as possible, the modern theory of condition numbers for conic convex optimization:

    z* := min_x  cᵀx   s.t.  Ax − b ∈ C_Y,  x ∈ C_X,

to the more general non-conic format:

    (GP_d)   z* := min_x  cᵀx   s.t.  Ax − b ∈ C_Y,  x ∈ P,

where P is any closed convex set, not necessarily a cone, which we call the ground-set. Although any convex problem can be transformed to conic form, such transformations are neither unique nor natural given the natural description of many problems, thereby diminishing the relevance of data-based condition number theory. Herein we extend the modern theory of condition numbers to the problem format (GP_d). As a byproduct, we are able to state and prove natural extensions of many theorems from the conic-based theory of condition numbers to this broader problem format.
318

Volume distribution and the geometry of high-dimensional random polytopes

Pivovarov, Peter 11 1900 (has links)
This thesis is based on three papers on selected topics in Asymptotic Geometric Analysis. The first paper is about the volume of high-dimensional random polytopes; in particular, on polytopes generated by Gaussian random vectors. We consider the question of how many random vertices (or facets) should be sampled in order for such a polytope to capture significant volume. Various criteria for what exactly it means to capture significant volume are discussed. We also study similar problems for random polytopes generated by points on the Euclidean sphere. The second paper is about volume distribution in convex bodies. The first main result is about convex bodies that are (i) symmetric with respect to each of the coordinate hyperplanes and (ii) in isotropic position. We prove that most linear functionals acting on such bodies exhibit super-Gaussian tail-decay. Using known facts about the mean-width of such bodies, we then deduce strong lower bounds for the volume of certain caps. We also prove a converse statement. Namely, if an arbitrary isotropic convex body (not necessarily satisfying the symmetry assumption (i)) exhibits similar cap-behavior, then one can bound its mean-width. The third paper is about random polytopes generated by sampling points according to multiple log-concave probability measures. We prove related estimates for random determinants and give applications to several geometric inequalities; these include estimates on the volume-radius of random zonotopes and Hadamard's inequality for random matrices. / Mathematics
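A small numerical illustration of the first paper's theme, restricted to the plane for simplicity (sample sizes and seed are arbitrary choices): the convex hull of n Gaussian points captures more volume as n grows, which can be seen by computing hull areas directly:

```python
import numpy as np

def convex_hull_area(points):
    """Area of the convex hull of 2-D points (Andrew's monotone chain + shoelace)."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]   # lower + upper hull
    x = np.array([p[0] for p in hull])
    y = np.array([p[1] for p in hull])
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(1)
areas = {}
for n in (10, 100, 1000):
    areas[n] = convex_hull_area(rng.standard_normal((n, 2)))
# The captured area grows (roughly like 2*pi*log(n)) as more points are sampled.
```

The thesis works in high dimension, where the sharp thresholds for "capturing significant volume" appear; this planar toy only makes the monotone growth visible.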
319

On the Extension and Wedge Product of Positive Currents

Al Abdulaali, Ahmad Khalid January 2012 (has links)
This dissertation is concerned with extensions and wedge products of positive currents. Our study can be considered as a generalization of classical works done earlier in this field. Paper I deals with the extension of positive currents across different types of sets. For closed complete pluripolar obstacles, we show the existence of such extensions. To do so, further Hausdorff dimension conditions are required. Moreover, we study the case when these obstacles are zero sets of strictly k-convex functions. In Paper II, we discuss the wedge product of a positive pluriharmonic (resp. plurisubharmonic) current of bidimension (p,p) with the Monge-Ampère operator of a plurisubharmonic function. In the first part of the paper, we define this product when the locus points of the plurisubharmonic function are located in a (2p-2)-dimensional closed set (resp. a (2p-4)-dimensional closed set), in the sense of Hartogs. The second part treats the case when these locus points are contained in a compact complete pluripolar set and p≥2 (resp. p≥3). Paper III studies the extendability of a negative S-plurisubharmonic current of bidimension (p,p) across a (2p-2)-dimensional closed set. Using only the positivity of S, we show that such extensions exist when the obstacles are complete pluripolar, as well as zero sets of C²-plurisubharmonic functions. / At the time of doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: Accepted. Paper 2: Manuscript. Paper 3: Manuscript.
320

The Use of Landweber Algorithm in Image Reconstruction

Nikazad, Touraj January 2007 (has links)
Ill-posed sets of linear equations typically arise when discretizing certain types of integral transforms. A well-known example is image reconstruction, which can be modelled using the Radon transform. After expanding the solution into a finite series of basis functions, a large, sparse and ill-conditioned linear system arises. We consider the solution of such systems. In particular, we study a new class of iteration methods named DROP (for Diagonal Relaxed Orthogonal Projections) constructed for solving both linear equations and linear inequalities. This class can also be viewed, when applied to linear equations, as a generalized Landweber iteration. The method is compared with other iteration methods using test data from a medical application and from electron microscopy. Our theoretical analysis includes convergence proofs of the fully simultaneous DROP algorithm for linear equations without consistency assumptions, and of block-iterative algorithms both for linear equations and linear inequalities, for the consistent case. When applying an iterative solver to an ill-posed set of linear equations, the error typically decreases at first, but after some iterations (depending on the amount of noise in the data and the degree of ill-posedness) it starts to increase. This phenomenon is called semi-convergence. It is therefore vital to find good stopping rules for the iteration. We describe a class of stopping rules for Landweber-type iterations for solving linear inverse problems. The class includes, e.g., the well-known discrepancy principle and the monotone error rule. We also unify the error analysis of these two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced for securing robustness. The advantages of using trained rules are demonstrated on examples taken from image reconstruction from projections.
/ We consider the solution of the linear systems of equations that arise when discretizing inverse problems. These problems are characterized by the fact that the sought information cannot be measured directly. A well-known example is computed tomography, in which one measures how much radiation passes through an object illuminated by a radiation source placed at different angles relative to the object. The aim, of course, is to generate images of the object's interior (in medical applications, of the interior of the body). We study a class of iterative methods for solving these systems of equations. The methods are applied to test data from image reconstruction and compared with other proposed iteration methods. We also carry out a convergence analysis for different choices of method parameters. When an iterative method is used, one starts from an initial approximation that is then gradually improved. However, inverse problems are sensitive even to relatively small errors in the measured data; this shows up in the iterates first improving and then deteriorating. This phenomenon, known as semi-convergence, is well understood, but it makes the construction of good stopping rules important: stopping the iteration too early gives poor resolution, while stopping too late gives a blurred and noisy image. The thesis studies a class of stopping rules, which are analyzed theoretically and tested on measured data. In particular, a training procedure is proposed in which the stopping rule is presented with data for which the correct value of the stopping index is known. These data are used to determine an important parameter of the rule, which is then applied to new, unseen data. Such a trained stopping rule is shown to work well on test data from the image reconstruction field.
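A minimal sketch of a Landweber iteration stopped by the discrepancy principle, one of the stopping rules named above (the test problem, noise level, and parameter values are hypothetical choices for illustration, not the thesis's experiments):

```python
import numpy as np

def landweber(A, b, delta, tau=1.02, max_iter=5000):
    """Landweber iteration x <- x + omega * A^T (b - A x), stopped by the
    discrepancy principle ||A x - b|| <= tau * delta (delta = noise level)."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2   # 0 < omega < 2/||A||^2 ensures convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:
            return x, k
        x = x + omega * (A.T @ r)
    return x, max_iter

# Hypothetical test problem: a discretized integration operator (ill-conditioned),
# a smooth true solution, and additive Gaussian noise.
rng = np.random.default_rng(0)
n = 50
A = np.triu(np.ones((n, n))).T / n        # lower-triangular cumulative sum / n
x_true = np.sin(np.linspace(0.0, np.pi, n))
noise = 1e-2 * rng.standard_normal(n)
b = A @ x_true + noise
delta = float(np.linalg.norm(noise))

x_rec, stop_k = landweber(A, b, delta)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

Stopping when the residual first drops to the noise level is exactly the semi-convergence trade-off discussed above: iterating further would start fitting the noise.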
