141 |
The future of trusts as an estate planning tool / Burger, Trinette, January 2011
Estate planning is an important exercise aimed at increasing, preserving and protecting assets during a person's lifetime and providing for the disposition and continued utilisation of those assets after his or her death. The minimisation of estate duty, however, often dominates the motivation behind estate planning, and many of the tools, structures and techniques used as part of the estate planning exercise are aimed at reducing or avoiding estate duty. One of these tools is the trust. In the 2010 Budget Review, National Treasury suggested that taxes upon death should be reviewed. Such a review may result in estate duty being abolished. Should this happen, the motivation behind many estate plans will dissipate, and plans that focused mainly on estate duty will become ineffective. The question that arises is whether trusts have a future as estate planning tools.
Estate planning involves many different objectives, and many of these can be achieved through the use of trusts. Trusts have multiple benefits, and only a trust set up solely to reduce or avoid estate duty will become superfluous. Looking at the use of trusts in countries that do not levy estate duty (such as Australia, Canada and New Zealand), it is clear that trusts remained useful and popular even after estate duty had been abolished. This is a strong indication that trusts have a future in South Africa and that the abolition of estate duty will not affect their usefulness and popularity. / Thesis (M.Com. (South African and International Taxation))--North-West University, Potchefstroom Campus, 2012.
|
143 |
Current waste management and minimisation patterns and practices: an exploratory study on the Ekurhuleni Metropolitan Municipality in South Africa / Gumbi, Sibongile Euphemia, 08 1900
Growing municipal waste mismanagement and its associated environmental impacts are an enormous concern in developing countries such as South Africa. Hence, this study explored current waste management and minimisation patterns and practices in the Ekurhuleni Metropolitan Municipality (EMM), located in the east of Gauteng province. The study was undertaken using a mixed-methods design, specifically a concurrent triangulation design in which the quantitative and qualitative data were collected at the same time. The methods employed were desktop surveys, interviews with participants, and questionnaires designed around the objectives of the study. Separate questionnaires were designed for the different types of participants (namely households, informal reclaimers, municipal officials and landfill officials).
All the data collected were stored in Microsoft Excel (2010) spreadsheets for statistical analysis. The study revealed several patterns, practices and trends regarding waste management and minimisation within the EMM. At household level, there was some environmental awareness of the waste management services provided by the municipality as well as of local recycling options, although numerous challenges must be resolved before these functions can become effective. In informal recycling, a number of waste materials are being reclaimed at various landfill sites. However, current informal waste-picking activities by the so-called scavengers are not sustainable, as waste is not separated prior to disposal at the various point sources. In addition, informal reclaimers have to travel long distances to reach waste sources. Another constraint hampering the effectiveness of informal waste recovery is the reclaimers' daily exposure to several environmental and health risks. Furthermore, the study found that the EMM is predominantly focused on providing better waste management services rather than balancing this activity with waste minimisation through reclaiming and recycling operations; the municipality thus lacks adequate infrastructure to undertake waste minimisation effectively. Waste minimisation and awareness campaigns were also found to be inadequate and in their infancy, unlike those carried out by private companies. In view of these findings, a number of recommendations have been made. / Environmental Sciences / M. Sc. (Environmental Science)
|
144 |
Regret minimisation and system-efficiency in route choice / Minimização de Regret e eficiência do sistema em escala de rotas / Ramos, Gabriel de Oliveira, January 2018
Multiagent reinforcement learning (MARL) is a challenging task in which self-interested agents concurrently learn policies that maximise their utilities. Learning here is difficult because agents must adapt to each other, which makes their objective a moving target. As a side effect, no convergence guarantees exist for the general MARL setting. This thesis exploits a particular MARL problem, namely route choice (where selfish drivers aim at choosing routes that minimise their travel costs), to deliver convergence guarantees. We are particularly interested in guaranteeing convergence to two fundamental solution concepts: the user equilibrium (UE, when no agent benefits from unilaterally changing its route) and the system optimum (SO, when average travel time is minimum). The main goal of this thesis is to show that, in the context of route choice, MARL can be guaranteed to converge to the UE as well as to the SO under certain conditions. Firstly, we introduce a regret-minimising Q-learning algorithm, which we prove converges to the UE. Our algorithm works by estimating the regret associated with the agents' actions and using that information as the reinforcement signal for updating the corresponding Q-values. We also establish a bound on the agents' regret. We then extend this algorithm to deal with non-local information provided by a navigation service; using such information, agents can improve their regret estimates and thus perform better empirically.
Finally, in order to mitigate the effects of selfishness, we also present a generalised marginal-cost tolling scheme in which drivers are charged proportionally to the cost they impose on others. We then devise a toll-based Q-learning algorithm, which we prove converges to the SO and is fairer than existing tolling schemes.
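To make the regret-as-reinforcement idea concrete, the sketch below implements a minimal version of it in Python. The route names, the congestion cost function, and all learning parameters are illustrative assumptions, not the thesis's experimental setup; the actual algorithm, its proofs and its regret bound are more involved.

```python
import random

ROUTES = ["A", "B", "C"]

def travel_cost(route, counts, n_drivers):
    # Assumed affine congestion cost: free-flow time plus a load term.
    base = {"A": 10.0, "B": 12.0, "C": 15.0}
    return base[route] + 20.0 * counts[route] / n_drivers

class RegretQDriver:
    """Q-learning driver whose reinforcement signal is an estimated
    regret: the gap between a route's running average cost and the
    running average cost of the best route observed so far."""

    def __init__(self, alpha=0.1, eps=0.1):
        self.q = {r: 0.0 for r in ROUTES}
        self.avg = {r: 0.0 for r in ROUTES}
        self.n = {r: 0 for r in ROUTES}
        self.alpha, self.eps = alpha, eps

    def choose(self):
        if random.random() < self.eps:
            return random.choice(ROUTES)           # exploration
        return min(self.q, key=self.q.get)         # lowest estimated regret

    def update(self, route, cost):
        self.n[route] += 1
        self.avg[route] += (cost - self.avg[route]) / self.n[route]
        best = min(self.avg[r] for r in ROUTES if self.n[r] > 0)
        regret = self.avg[route] - best            # estimated action regret
        self.q[route] += self.alpha * (regret - self.q[route])

n_drivers = 100
drivers = [RegretQDriver() for _ in range(n_drivers)]
for episode in range(500):
    choices = [d.choose() for d in drivers]
    counts = {r: choices.count(r) for r in ROUTES}
    for d, r in zip(drivers, choices):
        d.update(r, travel_cost(r, counts, n_drivers))
print({r: choices.count(r) for r in ROUTES})       # final route split
```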
|
146 |
Contribution à la modélisation et à la simulation numérique multi-échelle du transport cinétique électronique dans un plasma chaud / Contribution to the multiscale modelling and numerical simulation of electron kinetic transport in a hot plasma / Mallet, Jessy, 01 October 2012
In plasma physics, the transport of electrons can be described from a kinetic point of view or from a hydrodynamic point of view. Classically, in kinetic theory, a Fokker-Planck equation coupled with the Maxwell equations is used to describe the evolution of electrons in a collisional plasma. More precisely, the solution of the kinetic equations is a non-negative distribution function f specifying the density of particles as a function of particle velocity, time and position in space. In order to approximate the solution of such problems, many computational methods have been developed.
Here, a deterministic method is proposed in a planar geometry. This method is based on different high-order numerical schemes, each of which presents fundamental properties such as conservation of the particle flux, preservation of the positivity of the distribution function and conservation of energy. However, this accurate kinetic computation is too expensive to be used in practice, especially in multi-dimensional space. To reduce the computational time, the plasma can be described by a hydrodynamic model. However, for the new high-energy target drivers, kinetic effects are too important to be neglected, so the kinetic computation cannot simply be replaced by the usual macroscopic Euler models. That is why an alternative approach is proposed, based on an intermediate description between the fluid and the kinetic levels. To describe the transport of electrons, the proposed reduced kinetic model M1 is based on a moment approach for the Maxwell-Fokker-Planck equations. This moment model integrates the electron distribution function over the propagation direction and retains only the energy of the particles as a kinetic variable. The velocity variable is written in spherical coordinates, and the model is obtained by considering the system of moments with respect to the angular variable. The closure of the moment system is obtained under the assumption that the distribution function is a minimum-entropy function. This model is proved to satisfy fundamental properties such as the non-negativity of the distribution function, conservation laws for the collision operators and entropy dissipation. Moreover, an entropic discretization in the velocity variable is proposed for the semi-discrete model. The M1 model can also be generalized to the MN model by considering N given moments; the resulting N-moment model likewise preserves conservation laws and entropy dissipation, and the associated semi-discrete scheme preserves the conservation properties and entropy decay.
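As a rough illustration of what such an angular moment model looks like, the schematic below states assumed definitions of the first two angular moments and the minimum-entropy closure; the notation and normalisation conventions are assumptions and may differ from the thesis's exact system.

```latex
% Schematic M1-type angular moment model (assumed notation).
% Angular moments of f(t,x,\mu,\zeta) over the direction cosine
% \mu \in [-1,1], with \zeta the kinetic energy:
%   f_0 = \int_{-1}^{1} f \, d\mu,   f_1 = \int_{-1}^{1} \mu f \, d\mu.
% The closure defines the second moment through the minimum-entropy
% reconstruction f* compatible with (f_0, f_1):
\[
  f_2 \;=\; \int_{-1}^{1} \mu^2 f^{*}\,\mathrm{d}\mu,
  \qquad
  f^{*} \;=\; \underset{g \,\ge\, 0}{\arg\min}
  \Bigl\{ \int_{-1}^{1} g \ln g \,\mathrm{d}\mu \;:\;
     \int_{-1}^{1} g \,\mathrm{d}\mu = f_0,\;
     \int_{-1}^{1} \mu\, g \,\mathrm{d}\mu = f_1 \Bigr\}.
\]
```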
|
147 |
Simulation numérique des fissures et du comportement ductile-fragile de l'aluminium et du fer / Numerical simulation of ductile-brittle behaviour of cracks in aluminium and bcc iron / Zacharopoulos, Marios, 16 May 2017
The principal aim of the present dissertation is to investigate the role of sharp cracks in the mechanical behaviour of crystals under load at the atomic scale. The question of interest is how a pure crystal containing a single crack in mechanical equilibrium deforms. Two metals were considered: aluminium, which is ductile at any temperature below its melting point, and iron, which transforms from ductile to brittle as the temperature decreases below T=77K. Cohesive forces in both metals were modelled via phenomenological n-body potentials. A (010)[001] mode I nano-crack was introduced into the perfect crystalline lattice of each of the studied metals using appropriate displacements prescribed by anisotropic elasticity. At T=0K, equilibrium crack configurations were obtained via energy minimization with a mixed type of boundary conditions. Both models revealed that the crack configurations remained stable over a finite range of applied stresses, owing to the lattice trapping effect. The present thesis proposes a novel approach to interpreting the intrinsic mechanical behaviour of the two metallic systems under loading. In particular, the ductile or brittle response of a crystalline system can be determined by examining whether the lattice trapping barrier of a pre-existing crack is sufficient to cause the glide of pre-existing static dislocations on the available slip systems. Simulation results, along with experimental data, demonstrate that, according to the proposed model, aluminium is ductile and iron brittle at T=0K.
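The workflow described above — fix the outer atoms at elasticity-prescribed positions, let the inner atoms relax, and minimise the total energy — can be illustrated with a toy Python sketch. A Lennard-Jones pair potential stands in for the n-body potentials of the thesis, and the lattice, the crude crack-opening displacement and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def total_energy(flat_free, free_idx, ref_pos):
    """Lennard-Jones energy with free atoms at flat_free and all other
    (boundary) atoms held fixed at their reference positions."""
    pos = ref_pos.copy()
    pos[free_idx] = flat_free.reshape(-1, 2)
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)     # each pair once, no self-pairs
    rr = r[iu]
    return float(np.sum(4.0 * (rr ** -12 - rr ** -6)))

# Small 2D lattice; spacing near the LJ minimum so the perfect lattice
# is close to equilibrium.
a = 2.0 ** (1.0 / 6.0)
nx, ny = 8, 8
ref = np.array([[a * i, a * j] for j in range(ny) for i in range(nx)])

# Open a "crack": pull the two half-planes left of centre apart -- a
# crude stand-in for the anisotropic-elasticity displacement field.
crack = ref[:, 0] < a * nx / 2
ref[crack, 1] += 0.3 * a * np.sign(ref[crack, 1] - a * (ny - 1) / 2)

# Mixed boundary conditions: outer ring fixed, interior atoms free.
free_idx = np.array([k for k, (x, y) in enumerate(ref)
                     if a / 2 < x < a * (nx - 1.5) and a / 2 < y < a * (ny - 1.5)])
res = minimize(total_energy, ref[free_idx].ravel(),
               args=(free_idx, ref), method="L-BFGS-B")
relaxed = ref.copy()
relaxed[free_idx] = res.x.reshape(-1, 2)
print("relaxed energy:", res.fun)
```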
|
148 |
Algorithmes d'optimisation en grande dimension : applications à la résolution de problèmes inverses / Large-scale optimization algorithms: applications to the solution of inverse problems / Repetti, Audrey, 29 June 2015
An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion, which is often split into a sum of simpler functions composed with linear operators. In situations of practical interest, these functions may be neither convex nor smooth. In addition, large-scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attention to computational issues and theoretical convergence properties. A first idea for building fast minimization algorithms is to make use of a preconditioning strategy by adapting, at each iteration, the underlying metric. We incorporate this technique in the forward-backward algorithm and provide an automatic method for choosing the preconditioning matrices, based on a majorization-minimization principle. The convergence proofs rely on the Kurdyka-Łojasiewicz inequality. A second strategy consists of splitting the involved data into different blocks of reduced dimension. This approach allows us to control the number of operations performed at each iteration of the algorithms, as well as the required memory. For this purpose, block alternating methods are developed in the context of both non-convex and convex optimization problems. In the non-convex case, a block alternating version of the preconditioned forward-backward algorithm is proposed, where the blocks are updated according to an acyclic deterministic rule. When additional convexity assumptions can be made, various alternating proximal primal-dual algorithms are obtained by using an arbitrary random sweeping rule. The theoretical analysis of these stochastic convex optimization algorithms is grounded in the theory of monotone operators. A key ingredient in the solution of high-dimensional optimization problems lies in the possibility of performing some of the computation steps in parallel. This parallelization is made possible in the proposed block alternating primal-dual methods, where the primal variables, as well as the dual ones, can be updated in a quite flexible way. Building on these results, new distributed algorithms are derived, in which the computations are spread over a set of agents connected through a general hypergraph topology. Finally, our methodological contributions are validated on a number of applications in signal and image processing. First, we focus on optimization problems involving non-convex criteria, in particular image restoration when the original image is corrupted by signal-dependent Gaussian noise, spectral unmixing, phase reconstruction in tomography, and blind deconvolution for the reconstruction of sparse seismic signals. Then, we address convex minimization problems arising in 3D mesh denoising and in query optimization for database management.
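As a much-simplified illustration of the preconditioning idea described above, the Python sketch below runs a variable-metric forward-backward iteration on an l1-penalized least-squares problem. The diagonal-majorant metric is one simple majorization-minimization choice and is an assumed stand-in for the thesis's automatic rule.

```python
import numpy as np

def precond_forward_backward(A, y, lam, n_iter=300):
    """Preconditioned forward-backward for
    F(x) = 0.5*||Ax - y||^2 + lam*||x||_1, with a fixed diagonal metric."""
    AtA = A.T @ A
    # Row sums of |A^T A| give a diagonal D with D >= A^T A in the
    # Loewner order (by diagonal dominance), so the quadratic term is
    # majorised and step sizes 1/d are safe.
    d = np.abs(AtA).sum(axis=1)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                               # forward step
        z = x - grad / d                                       # step in metric D
        x = np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)  # prox of l1 in D
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[:5] = 3.0
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = precond_forward_backward(A, y, lam=1.0)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```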
|
149 |
Quelques contributions à l'estimation de grandes matrices de précision / Some contributions to large precision matrix estimation / Balmand, Samuel, 27 June 2016
Under the Gaussian assumption, the relationship between conditional independence and sparsity justifies the construction of estimators of the inverse of the covariance matrix -- also called the precision matrix -- from regularized approaches. This thesis, originally motivated by the problem of image classification, aims at developing a method for estimating the precision matrix in high dimension, that is, when the sample size $n$ is small compared to the dimension $p$ of the model. Our approach relies essentially on the connection between the precision matrix and the linear regression model. It consists of estimating the precision matrix in two steps. The off-diagonal elements are first estimated by solving $p$ minimization problems of the $\ell_1$-penalized square-root least-squares type. The diagonal entries are then obtained from the result of the previous step, by residual analysis or likelihood maximization. These various estimators of the diagonal entries are compared in terms of their estimation risk. Moreover, we propose a new estimator, designed to account for the possible contamination of the data by outliers, thanks to the addition of an $\ell_2/\ell_1$ mixed-norm regularization term. The non-asymptotic analysis of the consistency of our estimator underlines the relevance of our method.
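To make the two-step construction concrete, here is a short Python sketch of the column-by-column regression idea. It substitutes an ordinary lasso for the square-root lasso of the thesis (an assumption made only to keep the example brief, losing the pivotal tuning that motivates the square-root variant) and uses the Gaussian identity linking regression coefficients to precision-matrix entries.

```python
import numpy as np
from sklearn.linear_model import Lasso

def precision_two_step(X, lam=0.1):
    """Two-step precision matrix estimate: l1-penalized regression of
    each variable on the others, then diagonals from residual variance."""
    n, p = X.shape
    Omega = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        reg = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others], X[:, j])
        resid = X[:, j] - X[:, others] @ reg.coef_
        omega_jj = n / np.sum(resid ** 2)         # 1 / residual variance
        Omega[j, j] = omega_jj
        # Gaussian identity: beta_j = -Omega_{-j,j} / Omega_{jj}
        Omega[others, j] = -reg.coef_ * omega_jj
    return 0.5 * (Omega + Omega.T)                # symmetrise the estimate

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 20))
Omega_hat = precision_two_step(X)
print(Omega_hat.diagonal()[:5])
```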
|
150 |
Physikalische und numerische Modelle zur Minimierung des Restrisikos für die Stadt Dresden bei einem Extremhochwasser der Weißeritz / Physical and numerical models for minimising the residual risk to the City of Dresden in an extreme flood of the Weißeritz / Aigner, Detlef, 05 March 2007
Full protection from the forces of nature cannot be achieved; a residual risk will always remain. Compromises must be found between the expectations and demands placed on flood protection on the one hand and the technical, economic and ecological possibilities on the other. The calls for complete flood protection made by victims of the 2002 Weißeritz flood and by some politicians cannot be satisfied. On the other hand, the design values prescribed by the currently valid rules for upgrading the Weißeritz are no longer adequate. A more differentiated approach, negotiated between the Saxon Dam Authority and the City of Dresden, led to a much greater design discharge. The Institute for Hydraulic Engineering and Applied Hydromechanics of TU Dresden therefore developed physical and numerical models that serve as decision-support tools for the planned flood protection measures.
|