111

Développement d'un modèle de calcul de la capacité ultime d'éléments de structure (3D) en béton armé, basé sur la théorie du calcul à la rupture / Development of a yield design model (until failure, collapse limit load) for 3D reinforced concrete structures

Vincent, Hugues 21 November 2018 (has links)
To evaluate the load-bearing capacity of structures, civil engineers often resort to empirical, largely manual methods, because the nonlinear finite element methods available in existing civil engineering software are heavy to set up and costly to run. The yield design (or limit analysis) approach, formalized by J. Salençon, is a rigorous method for evaluating the ultimate capacity of structures and directly addresses the question of structural failure. For a long time, however, it could not be exploited systematically in software for lack of efficient numerical methods; recent progress, notably in interior point algorithms, has removed this obstacle. The main objective of this thesis is therefore to develop a numerical model, based on the yield design approach, to evaluate the ultimate capacity of massive (3D) reinforced concrete structural elements. Both the static and the kinematic approaches of yield design are implemented and expressed as optimization problems solved by a mathematical solver in the framework of Semi-Definite Programming (SDP). A large part of the work is devoted to modelling the resistance of the different components of the reinforced concrete composite: the strength criterion chosen for concrete is discussed, as is the method used to account for the reinforcement. A homogenization method is used for periodic reinforcement, and an adaptation of this technique is developed for isolated rebars. A final part illustrates the capabilities and potential of the numerical tool developed during this PhD thesis through application examples on massive structures.
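As a sketch of the static (lower-bound) approach referred to above — written here in its generic yield design form, not as the thesis's specific reinforced concrete model — the ultimate load factor is obtained by maximizing the load multiplier over statically admissible stress fields:

\[
\begin{aligned}
\lambda^{-} \;=\; \max_{\lambda,\;\boldsymbol{\sigma}} \quad & \lambda \\
\text{s.t.} \quad & \operatorname{div}\boldsymbol{\sigma} + \lambda\,\boldsymbol{f} = \boldsymbol{0} \ \text{in } \Omega, \qquad
\boldsymbol{\sigma}\cdot\boldsymbol{n} = \lambda\,\boldsymbol{T} \ \text{on } \partial\Omega_{T}, \\
& \boldsymbol{\sigma}(x) \in G(x) \quad \forall x \in \Omega \quad \text{(strength criterion of the concrete/reinforcement composite)}.
\end{aligned}
\]

After finite element discretization, and provided the strength domain G(x) is representable by semidefinite constraints, this maximization becomes a semidefinite program; the kinematic approach provides the dual upper bound on the same load factor.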
112

Modelos de aprendizado supervisionado usando métodos kernel, conjuntos fuzzy e medidas de probabilidade / Supervised machine learning models using kernel methods, probability measures and fuzzy sets

Guevara Díaz, Jorge Luis 04 May 2015 (has links)
This thesis proposes a methodology based on kernel methods, probability measures and fuzzy sets to analyze datasets whose individual observations are themselves sets of points rather than single points. Fuzzy sets and probability measures are used to model the observations, and kernel methods to analyze the data: fuzzy sets when an observation contains imprecise, vague or linguistic values, and probability measures when an observation is given as a set of multidimensional points in a D-dimensional Euclidean space. Kernels defined on probability measures or on fuzzy sets implicitly map them into reproducing kernel Hilbert spaces, where the analysis can be carried out with any kernel method, so that a wide range of machine learning problems can be addressed for such datasets. In particular, the thesis presents data description models for observations modeled by probability measures, built as minimum enclosing balls in the reproducing kernel Hilbert space; these models can be used as one-class classifiers and are applied to the group anomaly detection task. The thesis also proposes a new class of kernels, the kernels on fuzzy sets, which are reproducing kernels able to map fuzzy sets into a geometric feature space and act as similarity measures between fuzzy sets. It covers these kernels from their basic definitions to their use in machine learning problems such as supervised classification (including classification of interval data), regression, the definition of distances between fuzzy sets, and a kernel two-sample test for data with imprecise attributes. Potential applications include machine learning and pattern recognition over fuzzy data, and any computational task requiring a similarity measure between fuzzy sets.
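As a minimal sketch of how sets of points can be handled through kernels on probability measures, the following illustrates the empirical kernel mean embedding and the resulting MMD two-sample statistic with a Gaussian kernel; it is a generic illustration under these assumptions, not the thesis's own models or code.

import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Gram matrix k(x, y) = exp(-gamma * ||x - y||^2) between two point sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Squared Maximum Mean Discrepancy between the empirical kernel mean
    # embeddings of two observations (each observation = a set of points).
    Kxx = gaussian_kernel(X, X, gamma)
    Kyy = gaussian_kernel(Y, Y, gamma)
    Kxy = gaussian_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(100, 2))  # one observation: a set of 2-D points
B = rng.normal(0.5, 1.0, size=(100, 2))  # another set, from a shifted distribution
print(mmd2(A, B, gamma=0.5))             # larger values indicate more dissimilar sets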
113

Kenngrößen für die Abhängigkeitsstruktur in Extremwertzeitreihen / Characteristics for Dependence in Time Series of Extreme Values

Ehlert, Andree 31 August 2010 (has links)
No description available.
114

Riemannův integrál a jeho aplikace / The Riemann integral and its applications

VOPÁLENSKÁ, Lenka January 2015 (has links)
The main goal of my diploma thesis on the topic "The Riemann integral and its applications" is to create an overview of the individual applications of the Riemann integral. The first three chapters deal with the history and the definition of the Riemann integral and with techniques of integration. The fourth chapter surveys the individual applications (area of a plane region, length of a plane curve, volume of a solid of revolution, and lateral surface area of a solid of revolution), accompanied by solved examples with graphical illustrations for better intuition. The fifth chapter contains a collection of unsolved exercises for practising these applications, together with results so that the reader can check the accuracy of the calculations.
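For reference, the four applications listed for the fourth chapter correspond, for a nonnegative function f with continuous derivative on [a, b] whose graph is revolved about the x-axis, to the standard formulas

\[
A = \int_a^b f(x)\,dx, \qquad
L = \int_a^b \sqrt{1 + f'(x)^2}\,dx, \qquad
V = \pi \int_a^b f(x)^2\,dx, \qquad
S = 2\pi \int_a^b f(x)\,\sqrt{1 + f'(x)^2}\,dx,
\]

giving, respectively, the area of the plane region under the graph, the length of the curve, and the volume and lateral surface area of the solid of revolution.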
115

O efeito grau máximo sobre os domínios: como 'todo' modifica a relação argumento-predicado / The maximal degree effect: how todo modifies the predication

Ana Paula Quadros Gomes 19 February 2009 (has links)
This thesis takes the distribution of todo as a probe into the structure of the nominal, verbal and adjectival domains in Brazilian Portuguese (BP). Todo is a degree modifier (DM) sensitive to scale structure. English DMs (e.g., very) select adjectives by the nature of their standard, whereas BP DMs select adjectives only by their scale structure, yet they still produce phrases with standard specialization: muito + adjective receives a relative-standard reading, while todo + adjective receives an absolute-standard one. Unlike English, BP has no determiners such as much and many to distinguish mass from count nouns, and its progressive aspectual operator can modify statives. We claim that the domains have the same properties in both languages, but that the nature of the scale standard matters at a different level in each. Todo is neither a noun modifier nor a true quantifier like cada; it is a relation modifier, imposing quantitative conditions on the way an argument saturates the predicate. A quantized incremental argument makes the predicate quantized as well, and todo is not the true source of distributivity, since incremental relations occur even in its absence. Definite descriptions are treated as measure phrases: the definite article relates noun predicates to situations and turns a bare noun into a quantized denotation, which todo cannot do. Being quantized is related to being argumental, and being cumulative to being predicative. Finally, each landing site of floating todo corresponds to a different meaning.
116

Power-Aware Protocols for Wireless Sensor Networks / Conception et analyse de protocoles, pour les réseaux de capteurs sans fil, prenant en compte la consommation d'énergie

Xu, Chuan 15 December 2017 (has links)
In this thesis we propose, for the first time in the context of population protocols, a formal energy model that allows an analytical study of energy consumption. Population protocols model a special kind of sensor network in which anonymous sensors with uniformly bounded memory move unpredictably and communicate in pairs. To illustrate the power and usefulness of the proposed energy model, we present formal worst-case and average-case analyses of time and energy for the fundamental task of data collection. Two power-aware population protocols, the deterministic EB-TTFM and the randomized lazy-TTF, are proposed and studied under two different fairness conditions. To obtain the best parameters for lazy-TTF, we adopt optimization techniques and evaluate the resulting performance experimentally. We then continue the study of optimization for power-aware data collection in wireless body area networks (WBAN): a minmax multi-commodity network flow formulation is proposed to route data packets optimally while minimizing the worst-case power consumption, and a variable neighborhood search procedure is developed, with numerical results showing its efficiency. Finally, a stochastic optimization model, namely chance-constrained semidefinite programming, is considered for realistic decision-making problems with random parameters. A novel simulation-based algorithm is proposed and tested on a real control theory problem; we show that it yields a less conservative solution than other approaches within reasonable computing time.
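As a toy illustration of the model class described above — anonymous agents interacting in random pairs, with the number of transmissions as a crude energy proxy for data collection — the following sketch is purely illustrative and does not reproduce the EB-TTFM or lazy-TTF protocols of the thesis.

import random

def collect(n_agents=50, p_transfer=0.5, seed=1):
    # Each agent starts with one data token; on a random pairwise meeting that
    # involves the collector (agent 0), the other agent transfers its tokens
    # with probability p_transfer. Returns (interactions, transmissions) until
    # the collector holds every token; transmissions serve as an energy proxy.
    rng = random.Random(seed)
    tokens = [1] * n_agents
    interactions = transmissions = 0
    while tokens[0] < n_agents:
        a, b = rng.sample(range(n_agents), 2)
        interactions += 1
        if 0 in (a, b):
            other = b if a == 0 else a
            if tokens[other] > 0 and rng.random() < p_transfer:
                tokens[0] += tokens[other]
                tokens[other] = 0
                transmissions += 1
    return interactions, transmissions

print(collect())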
117

Matrices de moments, géométrie algébrique réelle et optimisation polynomiale / Moments matrices, real algebraic geometry and polynomial optimization

Abril Bucero, Marta 12 December 2014 (has links)
The objective of this thesis is to compute the optimum of a polynomial on a closed basic semialgebraic set, together with the points where this optimum is reached. To achieve this goal, we combine the border basis method with Lasserre's hierarchy in order to reduce the size of the moment matrices in the resulting SemiDefinite Programming (SDP) problems. To verify whether the minimum is reached, we describe a new criterion for checking the Curto-Fialkow flat extension condition using border bases. Combining these results, we provide a new algorithm that computes the optimum and the minimizer points. Several experiments and applications in different domains demonstrate the performance of the algorithm. On the theoretical side, we also prove the finite convergence of an SDP hierarchy constructed from a Karush-Kuhn-Tucker ideal and its consequences in particular cases, and we treat the particular case where the minimizers are not KKT points by using the Fritz John variety.
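As background for the moment matrices mentioned above, the order-t Lasserre relaxation of \(\min\{f(x) : g_1(x)\ge 0,\dots,g_m(x)\ge 0\}\) can be written in its standard form (independently of the border basis reduction developed in the thesis) as

\[
\min_{y}\; L_y(f) \quad \text{s.t.} \quad M_t(y) \succeq 0, \qquad M_{t-\lceil \deg g_j/2 \rceil}(g_j\,y) \succeq 0 \ \ (j=1,\dots,m), \qquad y_0 = 1,
\]

where \(y = (y_\alpha)\) is a truncated moment sequence, \(L_y\) the associated Riesz functional, \(M_t(y)\) the moment matrix and \(M_{\cdot}(g_j\,y)\) the localizing matrices. Each relaxation is a semidefinite program whose value bounds the minimum of f from below, and the hierarchy converges to it under standard (Archimedean) assumptions; reducing the size of these matrices is precisely where the border basis method intervenes.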
119

Résolution exacte du problème de l'optimisation des flux de puissance / Global optimization of the Optimal Power Flow problem

Godard, Hadrien 17 December 2019 (has links)
This thesis deals with the exact (globally optimal) solution of the Optimal Power Flow problem (OPF) in an electrical network, in which generation and power flows must be planned so as to cover consumption at the various nodes of the network at minimal cost. Three variants of the OPF are studied: we focus mainly on the exact solution of (OPF − L) and (OPF − Q), and then show how the approach naturally extends to the third variant (OPF − UC). The thesis proposes to solve these problems with a reformulation method called RC-OPF. Alternating Current Optimal Power Flow (ACOPF) is naturally formulated as a non-convex problem, and solving it to global optimality remains a challenge when classic convex relaxations are not exact. We use semidefinite programming to build a quadratic convex relaxation of (ACOPF) and show that it has the same optimal value as the classical semidefinite relaxation of (ACOPF), which is known to be tight. On this basis we build a spatial branch-and-bound algorithm, driven by a quadratic convex programming bound, that solves (ACOPF) to global optimality. RC-OPF also relies on bound-tightening techniques, and we show how these classical techniques can be reinforced using results from our optimal reformulation.
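As a sketch of the kind of semidefinite relaxation mentioned above — the standard rank relaxation of ACOPF, not the specific RC-OPF reformulation of the thesis — one writes the complex bus voltages as a vector v, introduces the Hermitian matrix W = v v^H, in which all power flow quantities become linear, and drops the non-convex rank-one condition:

\[
\begin{aligned}
\min_{W \succeq 0} \quad & \text{generation cost, a function of the injections } \operatorname{tr}(A_k W) \\
\text{s.t.} \quad & P_k^{\min} \le \operatorname{tr}(A_k W) \le P_k^{\max}, \qquad
Q_k^{\min} \le \operatorname{tr}(B_k W) \le Q_k^{\max}, \\
& (V_k^{\min})^2 \le W_{kk} \le (V_k^{\max})^2 \quad \text{for every bus } k, \qquad
\big(\operatorname{rank}(W) = 1 \ \text{dropped}\big),
\end{aligned}
\]

where A_k and B_k are Hermitian matrices built from the network admittance matrix. When the relaxation admits a rank-one optimal solution W = v v^H, the voltage profile v is globally optimal for the original non-convex problem.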
120

A study concerning the positive semi-definite property for similarity matrices and for doubly stochastic matrices with some applications / Une étude concernant la propriété semi-définie positive des matrices de similarité et des matrices doublement stochastiques avec certaines applications

Nader, Rafic 28 June 2019 (has links)
Matrix theory has developed rapidly over the last decades thanks to its wide range of applications and its many connections with different fields such as statistics, machine learning, economics and signal processing. This thesis concerns three main axes related to two fundamental objects of study in matrix theory that arise naturally in many applications: positive semi-definite matrices and doubly stochastic matrices. One concept that stems naturally from machine learning and is related to the positive semi-definite property is that of similarity matrices. Similarity matrices that are positive semi-definite are of particular importance because of their ability to define metric distances. This thesis explores this desirable structure for a list of similarity matrices found in the literature, and presents new results concerning the strictly positive definite and the three positive semi-definite properties of particular similarity matrices, together with a detailed discussion of the many applications of these properties in various fields. On the other hand, an interesting research field in matrix analysis involves the study of roots of stochastic matrices, which is important in Markov chain models in finance and healthcare. We extend the analysis of this problem to positive semi-definite doubly stochastic matrices. Our contributions include geometrical properties of the set of all positive semi-definite doubly stochastic matrices of order n with nonnegative pth roots for a given integer p, as well as methods for finding classes of positive semi-definite doubly stochastic matrices that have doubly stochastic pth roots for all p, using the theory of M-matrices and the symmetric doubly stochastic inverse eigenvalue problem (SDIEP), which is also of independent interest. In the context of the SDIEP, which is the problem of characterising the lists of real numbers that are realisable as the spectrum of some symmetric doubly stochastic matrix, we present some new results along this line. In particular, we propose a recursive method for constructing doubly stochastic matrices from smaller matrices with known spectra, which yields new independent sufficient conditions for the SDIEP. Finally, we focus on the realisability, by a symmetric doubly stochastic matrix, of normalised Suleimanova spectra, a normalised variant of the spectra introduced by Suleimanova. In particular, we prove that such spectra are not always realisable for odd orders, and we construct three families of sufficient conditions that refine previously known sufficient conditions for the SDIEP in the particular case of normalised Suleimanova spectra.
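A minimal numerical sketch of the positive semi-definiteness check discussed above, applied to a Gaussian similarity matrix (a similarity known to be positive semi-definite); the tolerance and the choice of similarity are illustrative assumptions.

import numpy as np

def is_positive_semidefinite(S, tol=1e-10):
    # A symmetric matrix is positive semi-definite iff all its eigenvalues
    # are nonnegative (checked here up to a numerical tolerance).
    S = (S + S.T) / 2.0                    # symmetrize against round-off
    return bool(np.linalg.eigvalsh(S).min() >= -tol)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))               # 30 items described by 4 features
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
S = np.exp(-d2)                            # Gaussian similarity matrix
print(is_positive_semidefinite(S))         # expected: True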
