521

Sus och brus : En GIS-baserad studie om hur buller sprids i stadsnära naturreservat / Soothing swoosh and nagging noise : A GIS study of how noise spread through metropolitan nature reserves

Wall, Christian January 2024 (has links)
In an increasingly urbanised society, research has highlighted the need for access to green and quiet areas to promote human health. Population growth and densification in the municipalities of Stockholm County are causing more traffic and more noise. This study investigates how noise from motorways spreads in peri-urban nature reserves. Its research questions concern the influence of vegetation and topography on noise propagation, the distance from the noise source at which the noise is no longer perceived as disturbing, and whether a mobile phone used as a GNSS receiver is sufficient for positioning the measurement points. The study area was Järvafältet, north of Stockholm, a group of contiguous nature reserves surrounded by densely built municipalities and heavily trafficked motorways. Data were collected at 72 measurement points, including decibel level, coordinates, distance to the nearest motorway, elevation, wind speed, weather, vegetation and a subjective rating of disturbance from traffic noise. Seven of the variables (the subjective disturbance rating excluded) were entered into a principal component analysis, in which the most important extracted component was built from four variables: decibel level, distance to motorway, degree of noise shielding and air temperature. Decibel level and distance to motorway were negatively correlated at -0.78, and the decrease in decibel level levelled off roughly 500 metres from the noise source. Noise shielding was also correlated with decibel level, but it could not be determined whether this relationship was driven by the decibel level itself or by the distance from the motorways. Vegetation showed little or no correlation with decibel level, and the weak correlation between air temperature and decibel level was most likely spurious, reflecting seasonal variation. The measurement points were also grouped into seven distance classes, and correlations between decibel level and three other variables were tested within each class: within about 240 metres of the motorway the degree of noise shielding was strongly negatively correlated with decibel level, whereas farther away the correlation became weak. A sound map was created with Kriging interpolation, and control measurements taken after the modelling deviated from the model by 1.5 dBA on average. Finally, the positioning precision of the mobile phone was compared with that of a dedicated GNSS receiver; the phone deviated by 8.33 metres on average (13.16 metres including one outlier), which was judged adequate for this type of sound study.
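
As a rough illustration of the kind of analysis described above — standardising the measured variables, extracting principal components and checking the decibel–distance correlation — the following Python sketch uses NumPy, pandas and scikit-learn on a hypothetical table of measurement points. The file name and column names are assumptions for the example, not the thesis's actual data or workflow.

```python
# Minimal sketch, assuming a CSV of measurement points with hypothetical
# column names; this is not the thesis's data or its exact workflow.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("noise_points.csv")          # 72 rows, one per measurement point (assumed file)
cols = ["decibel", "dist_to_motorway_m", "noise_shielding",
        "air_temp_c", "wind_speed_ms", "elevation_m", "vegetation_idx"]

X = StandardScaler().fit_transform(df[cols])  # PCA on standardised variables
pca = PCA(n_components=3).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("loadings of first component:",
      dict(zip(cols, pca.components_[0].round(2))))

# Pearson correlation between decibel level and distance to motorway
r = np.corrcoef(df["decibel"], df["dist_to_motorway_m"])[0, 1]
print(f"dB vs distance correlation: {r:.2f}")  # the thesis reports about -0.78
```
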
522

Parametric Dynamical Systems: Transient Analysis and Data Driven Modeling

Grimm, Alexander Rudolf 02 July 2018 (has links)
Dynamical systems are a commonly used and studied tool for simulation, optimization and design. In many applications such as inverse problems, optimal control, shape optimization and uncertainty quantification, those systems typically depend on a parameter. The need for high fidelity in the modeling stage leads to large-scale parametric dynamical systems. Since these models need to be simulated for a variety of parameter values, the computational burden they incur becomes increasingly prohibitive. To address these issues, parametric reduced models have gained increasing popularity in recent years. We are interested in constructing parametric reduced models that represent the full-order system accurately over a range of parameters. First, we define a global joint error measure in the frequency and parameter domain to assess the accuracy of the reduced model. Then, by assuming a rational form for the reduced model with poles both in the frequency and parameter domain, we derive necessary conditions for an optimal parametric reduced model in this joint error measure. Similar to the nonparametric case, Hermite interpolation conditions at the reflected images of the poles characterize the optimal parametric approximant. This result extends the well-known interpolatory H2 optimality conditions by Meier and Luenberger to the parametric case. We also develop a numerical algorithm to construct locally optimal reduced models. The theory and algorithm are data-driven, in the sense that only function evaluations of the parametric transfer function are required, not access to the internal dynamics of the full model. While this first framework operates on the continuous function level, assuming repeated transfer function evaluations are available, in some cases merely frequency samples might be given without an option to re-evaluate the transfer function at desired points; in other words, the function samples in parameter and frequency are fixed. In this case, we construct a parametric reduced model that minimizes a discretized least-squares error over the finite set of measurements. Towards this goal, we extend Vector Fitting (VF) to the parametric case, solving a global least-squares problem in both frequency and parameter. The output of this approach might be a reduced model of moderate size. In that case, we perform a post-processing step that reduces the output of the parametric VF approach using H2-optimal model reduction for a special parametrization. The final model inherits the parametric dependence of the intermediate model, but is of smaller order. A special case of a parameter in a dynamical system is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. Modeling such a delay comes with several challenges for the mathematical formulation, analysis, and solution. We address the issue of transient behavior for scalar delay equations. Besides the choice of an appropriate measure, we analyze the impact of the coefficients of the delay equation on the finite-time growth, which can be arbitrarily large purely due to the influence of the delay. / Ph. D. / Mathematical models play an increasingly important role in the sciences for experimental design, optimization and control. These high-fidelity models are often computationally expensive and may require large resources, especially for repeated evaluation.
Parametric model reduction offers a remedy by constructing models that are accurate over a range of parameters, and yet are much cheaper to evaluate. An appropriate choice of quality measure and form of the reduced model enables us to characterize these high-quality reduced models. Our first contribution is a characterization of optimal parametric reduced models and an efficient implementation to construct them. While this first framework assumes we have access to repeated evaluations of the full model, in some cases merely measurement data might be available. In this case, we construct a parametric model that fits the measurements in a least-squares sense. The output of this approach might be a reduced model of moderate size, which we address with a post-processing step that reduces the model size while maintaining important properties. A special case of a parameter is a delay in the model equation, e.g., arising from a feedback loop, reaction time, delayed response and various other physical phenomena. While asymptotically stable solutions eventually vanish, they might grow large before the asymptotic behavior takes over; this leads to the notion of transient behavior, which is our main focus for a simple class of delay equations. Besides the choice of an appropriate measure, we analyze the impact of the structure of the delay equation on the transient growth, which can be arbitrarily large purely due to the influence of the delay.
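
To make the notion of transient behavior of a scalar delay equation concrete, here is a minimal forward-Euler simulation of x'(t) = a·x(t) + b·x(t − τ) with a constant unit history. The coefficients are illustrative choices assumed for the example (asymptotically stable for these values); the growth measures and examples analyzed in the thesis are not reproduced.

```python
# Minimal sketch: forward-Euler simulation of the scalar delay equation
#   x'(t) = a*x(t) + b*x(t - tau),  with x(t) = 1 for t <= 0.
# Coefficients are illustrative only (asymptotically stable for these values).
import numpy as np

a, b, tau = -0.5, -1.2, 1.0
dt, T = 1e-3, 30.0
n_delay = int(round(tau / dt))
n_steps = int(round(T / dt))

x = np.ones(n_delay + n_steps + 1)       # indices 0..n_delay hold the history on [-tau, 0]
for k in range(n_delay, n_delay + n_steps):
    x[k + 1] = x[k] + dt * (a * x[k] + b * x[k - n_delay])

traj = x[n_delay:]                        # x on [0, T]
print("peak of |x| on [0, T]:", np.abs(traj).max())  # transient size
print("|x(T)|:", abs(traj[-1]))                       # eventual decay
# Depending on a, b, tau and the history function, the transient peak can
# greatly exceed the initial size even though the solution eventually decays.
```
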
523

Development of a nonlinear equations solver with superlinear convergence at regular singularities

Alabdallah, Suleiman 10 October 2014 (has links)
In this thesis we present a new type of line search for Newton's method, based on range-space interpolation as suggested by Wedin et al. [LW84]. The resulting stabilized Newton algorithm is shown, theoretically and practically, to be efficient in the case of nonsingular roots. Moreover, it is observed to maintain a superlinear rate of convergence at simple singularities, whereas Newton's method without a line search is known to converge only linearly from almost all points near a singular root. In view of applications to complementarity problems, we also consider systems whose Jacobian is not differentiable but only semismooth. Again, our stabilized and accelerated Newton method achieves superlinear convergence at simple singularities.
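
The abstract does not spell out the range-space interpolation line search of Wedin et al., so the sketch below uses a plain residual-decrease backtracking step as a stand-in, only to illustrate the general shape of a globalized (damped) Newton iteration for a nonlinear system. The test function, starting point and all parameter values are assumptions for the example.

```python
# Minimal sketch of a damped (line-search) Newton iteration for F(x) = 0.
# The backtracking step is a simple stand-in, not the range-space
# interpolation line search of Wedin et al. described above.
import numpy as np

def newton_linesearch(F, J, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)          # Newton direction
        t = 1.0
        while np.linalg.norm(F(x + t * step)) > (1 - 1e-4 * t) * np.linalg.norm(f):
            t *= 0.5                              # backtrack until the residual decreases
            if t < 1e-12:
                break
        x = x + t * step
    return x

# Example system with a nonsingular root at (1, 1) (illustrative only)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton_linesearch(F, J, [3.0, -1.0]))
```
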
524

SisA3 : Sistema Automatizado de Auditoria de Armazéns de Granéis / SISA3 : AN AUTOMATED AUDIT SYSTEM FOR GRAIN STORES

Al-alam, Wagner Guimarães 15 January 2010 (has links)
Companies working with bulk materials have dedicated locations, known as warehouses or silos, for storage during the production process and for the final product. The recorded stock values need to be validated periodically by comparing the fiscal inventory (the invoice records) with the physical inventory (a survey of the volume actually stored). In this context, the calculation of the physical inventory, i.e. the volume of bulk material present in the warehouses, is usually done manually, has low credibility and is prone to errors. Current procedures for auditing the contents of warehouses involve inaccurate estimates and often require emptying the warehouse. Considering the availability of technologies for the electronic measurement of distances and angles, and of automatic controls on actuators enabling mechanical movements of the supporting structures, we sought to develop a system able to provide both computational and technological solutions to the problem of calculating the volume of irregular reliefs (the products stocked in the warehouses). The Automated Warehouse Auditing System (SisA3) aims to make this process automatic, fast and precise, without the need to empty the warehouses or to touch the products. To achieve this goal, an integrated system was developed, composed of: (i) a scanner device, consolidated in the hybrid hardware/software prototype Dig-SisA3, for measuring points of the non-uniform relief formed by the stored products; and (ii) a volume calculation method, iCone, which combines techniques of scientific visualization, numerical interpolation of points and iterative volume calculation. A parallelization of the iCone prototype was also developed to reduce the time needed to obtain the results of the iCone method in the auditing process. The development for multiprocessor, multi-core and distributed architectures was done on top of the D-GM (Distributed Geometric Machine) environment, which provides the formalisms to guarantee the creation, management and processing of parallel and/or distributed scientific computing applications, with emphasis on the exploitation of data parallelism and on the synchronization steps. The iCone software prototype was functionally validated, including an analysis of the error of the method, and the performance analyses of the p-iCone prototype showed satisfactory results. This work consolidates the SisA3 system, enabling automatic and reliable measurement of inventories, with broad application in the market.
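
The iCone method itself is not detailed in the abstract. As a rough illustration of the general idea — interpolating scattered surface points onto a regular grid and integrating the height field to estimate the stored volume — the sketch below uses SciPy; the synthetic point cloud, warehouse dimensions and grid resolution are made up for the example and are not the thesis's method or data.

```python
# Minimal sketch: estimate the volume under a scanned, non-uniform surface
# from scattered (x, y, height) points. This is a generic interpolate-and-
# integrate approach, not the iCone method described above.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# Synthetic scan: 500 scattered points over a 40 m x 20 m warehouse floor,
# with a heap-like height profile (illustrative only).
xy = rng.uniform([0, 0], [40, 20], size=(500, 2))
h = 6.0 * np.exp(-((xy[:, 0] - 20) ** 2 / 120 + (xy[:, 1] - 10) ** 2 / 40))

# Interpolate onto a regular grid (0.25 m cells) and integrate cell by cell.
dx = dy = 0.25
gx, gy = np.meshgrid(np.arange(0, 40 + dx, dx), np.arange(0, 20 + dy, dy))
gh = griddata(xy, h, (gx, gy), method="linear")
gh = np.nan_to_num(gh)                 # cells outside the convex hull -> 0
volume = gh.sum() * dx * dy            # sum of (height x cell area)
print(f"estimated volume: {volume:.1f} m^3")
```
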
525

Interpolation sur les variétés grassmanniennes et applications à la réduction de modèles en mécanique / Interpolation on Grassmann manifolds and applications to reduced order methods in mechanics

Mosquera Meza, Rolando 26 June 2018 (has links)
This dissertation deals with interpolation on Grassmann manifolds and its applications to reduced-order methods in mechanics, and more generally to systems of evolution partial differential equations. After a description of the POD method, we introduce the theoretical tools of Grassmannian geometry that are used in the rest of the thesis. This chapter gives the dissertation both mathematical rigour, regarding the algorithms developed, their domain of validity and the error estimate with respect to the Grassmannian distance, and a self-contained character. We then present the interpolation method on Grassmann manifolds introduced by David Amsallem and Charbel Farhat, which is the starting point of the interpolation methods developed in this thesis. The Amsallem-Farhat method consists in choosing a reference interpolation point, mapping all interpolation points onto the tangent space at this reference point via the geodesic logarithm, performing a classical interpolation on this tangent space, and mapping the interpolated point back to the Grassmann manifold via the geodesic exponential map. We examine the influence of the reference point on the quality of the results through numerical experiments. In our first contribution, we present a Grassmannian version of the well-known Inverse Distance Weighting (IDW) algorithm. In this method, the interpolant at a given point can be seen as the barycenter of the interpolation points, where the weights are inversely proportional to the distance between the considered point and the interpolation points. In our method, denoted IDW-G, the geodesic distance on the Grassmann manifold replaces the Euclidean distance of the standard Euclidean setting. The advantage of our algorithm, whose convergence we prove under fairly general assumptions, is that it does not require a reference point, unlike the Amsallem-Farhat method. To avoid the iterative (fixed-point) character of this first method, we also propose a direct version based on the notion of generalized barycenter. The IDW-G algorithm nevertheless depends on the choice of the weighting coefficients. Our second contribution therefore deals with an optimal choice of the weighting coefficients that takes into account the spatial autocorrelation of the set of interpolation points, so that each weighting coefficient depends on all interpolation points and not only on the distance between the considered point and one interpolation point. It is a Grassmannian version of the Kriging method widely used in geostatistics; like the Amsallem-Farhat method, it requires a reference point. In our last contribution, we develop a Grassmannian version of Neville's algorithm, which computes the Lagrange interpolation polynomial recursively via linear interpolation between two points. The generalization of this algorithm to the Grassmann manifold is based on the extension of two-point interpolation (geodesic/straight line), which can be carried out explicitly. This algorithm does not require the choice of a reference point, is easy to implement and very fast. Moreover, the numerical results obtained are remarkable and clearly better than those of all the other algorithms described in this dissertation.
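
The Amsallem-Farhat procedure summarized above (geodesic logarithm to the tangent space at a reference subspace, ordinary interpolation there, geodesic exponential back) can be written compactly with thin SVDs. The NumPy sketch below follows the usual formulas for the Grassmann exponential and logarithm; the subspace bases, parameter values and the choice of linear interpolation in the tangent space are illustrative assumptions, not the thesis's test cases.

```python
# Minimal sketch of Amsallem-Farhat-style interpolation of subspaces
# (represented by orthonormal bases) on the Grassmann manifold.
import numpy as np

def grass_log(Y0, Y):
    """Tangent vector at span(Y0) pointing towards span(Y)."""
    # M = (Y - Y0 Y0^T Y) (Y0^T Y)^{-1}, then Gamma = U arctan(S) V^T
    M = np.linalg.solve((Y0.T @ Y).T, (Y - Y0 @ (Y0.T @ Y)).T).T
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def grass_exp(Y0, Gamma):
    """Map a tangent vector at span(Y0) back to an orthonormal basis."""
    U, s, Vt = np.linalg.svd(Gamma, full_matrices=False)
    return Y0 @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt

# Illustrative data: r-dimensional subspaces of R^n sampled at two parameters.
rng = np.random.default_rng(1)
n, r = 50, 4
Y_a = np.linalg.qr(rng.standard_normal((n, r)))[0]   # basis at parameter p_a = 0
Y_b = np.linalg.qr(rng.standard_normal((n, r)))[0]   # basis at parameter p_b = 1
Y0 = Y_a                                             # reference point

# Linear interpolation of the tangent vectors at an intermediate parameter.
p = 0.3
Gammas = [grass_log(Y0, Y_a), grass_log(Y0, Y_b)]    # the first one is zero by construction
Gamma_p = (1 - p) * Gammas[0] + p * Gammas[1]
Y_p = grass_exp(Y0, Gamma_p)
print("interpolated basis is orthonormal:", np.allclose(Y_p.T @ Y_p, np.eye(r)))
```
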
526

Développement de modèles réduits adaptatifs pour le contrôle optimal des écoulements / Development of adaptive reduced order models for optimal flow control

Oulghelou, Mourad 26 June 2018 (has links)
The numerical resolution of adjoint-based optimal control problems requires high computational time and large storage capacities. To get around these requirements, model reduction techniques such as POD (Proper Orthogonal Decomposition) can be used. The drawback of this approach is that the POD basis is valid only for parameters located in a small neighbourhood of the parameters for which it was built; it may therefore not be representative of all the parameters that the optimal control loop may propose along the optimizer's path. To overcome this issue, a reduced optimal control methodology using adaptive reduced-order models, obtained either by the ITSGM (Interpolation on a Tangent Subspace of the Grassmann Manifold) method or by the PGD (Proper Generalized Decomposition) method, is proposed in this work. The robustness of this approach in terms of accuracy and computation time is demonstrated for the adjoint-based optimal control of the 2D reaction-diffusion and Burgers equations. The ITSGM interpolation approach is also applied successfully to the control of the flow around a 2D cylinder. Two non-intrusive reduction methods, which do not require knowledge of the equations of the studied model, are also proposed. These methods, called NIMR (Non-Intrusive Model Reduction) and HNIMR (Hyper Non-Intrusive Model Reduction), are coupled with a genetic algorithm in order to solve an optimal control problem in quasi-real time. The optimal control of the flow around a 2D cylinder is studied and the results show the effectiveness of this approach: the genetic algorithm coupled with the HNIMR method yields accurate solutions in less than 40 seconds.
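
POD, the baseline reduction method named above, amounts to extracting a low-dimensional basis from solution snapshots via the SVD. The sketch below shows that step on synthetic snapshot data; in the setting of the thesis the snapshots would come from solving the full flow or reaction-diffusion model, which is not reproduced here.

```python
# Minimal sketch of a POD (Proper Orthogonal Decomposition) basis computed
# from a snapshot matrix via the SVD. The snapshot data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_snap = 2000, 60
# Synthetic snapshots with a rapidly decaying spectrum (illustrative only).
modes = np.linalg.qr(rng.standard_normal((n_dof, 10)))[0]
amps = rng.standard_normal((10, n_snap)) * (2.0 ** -np.arange(10))[:, None]
S = modes @ amps + 1e-6 * rng.standard_normal((n_dof, n_snap))

U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.9999)) + 1      # keep 99.99% of the energy
Phi = U[:, :r]                                    # POD basis

# Relative error of projecting the snapshots onto the POD subspace
err = np.linalg.norm(S - Phi @ (Phi.T @ S)) / np.linalg.norm(S)
print(f"POD basis size: {r}, relative projection error: {err:.2e}")
```
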
527

Explorative Multivariate Data Analysis of the Klinthagen Limestone Quarry Data / Utforskande multivariat analys av Klinthagentäktens projekteringsdata

Bergfors, Linus January 2010 (has links)
Quarry planning at Klinthagen is today rather coarse, which provides an opportunity to introduce new methods to improve the quarry's yield and efficiency. Nordkalk AB, active at Klinthagen, wishes to start a new quarry at a nearby location. To exploit future quarries efficiently and ensure production quality, multivariate statistics may help gather important information. In this thesis, the possibilities of the multivariate statistical approaches Principal Component Analysis (PCA) and Partial Least Squares (PLS) regression were evaluated on the Klinthagen bore data. PCA data were spatially interpolated by Kriging, which was also evaluated and compared with IDW interpolation. Principal component analysis supplied an overview of the relations between the variables, but also visualised the problems involved in linking geophysical data to geochemical data and the inaccuracy introduced by lacking data quality. The PLS regression further emphasised the geochemical-geophysical problems, but also showed good precision when applied to strictly geochemical data. Spatial interpolation by Kriging did not give significantly better approximations than the less complex control interpolation by IDW. To improve the information content of the data when modelled by PCA, a more discrete sampling method would be advisable. The data quality may cause trouble, though with today's sampling technique this was considered to be of minor consequence. Faced with a single geophysical component to be predicted from chemical variables, further geophysical data are needed to complement the existing data in order to achieve satisfactory PLS models. The stratified rock composition caused trouble when spatially interpolated; further investigations should be performed to develop more suitable interpolation techniques.
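
Since the thesis uses IDW as the control interpolation against Kriging, here is a compact sketch of plain inverse distance weighting in NumPy. The sample coordinates, values, power parameter and grid are placeholders assumed for the example, not the Klinthagen bore data.

```python
# Minimal sketch of inverse distance weighting (IDW) interpolation, the
# control method mentioned above. Sample points and parameters are made up.
import numpy as np

def idw(sample_xy, sample_val, query_xy, power=2.0, eps=1e-12):
    """Interpolate values at query_xy from scattered samples by IDW."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power          # inverse-distance weights
    return (w * sample_val[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(50, 2))                         # borehole locations (synthetic)
val = np.sin(xy[:, 0] / 20) + 0.1 * rng.standard_normal(50)    # e.g. a chemical grade

gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z = idw(xy, val, grid).reshape(gx.shape)                       # interpolated surface
print(z.shape, z.min().round(2), z.max().round(2))
```
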
529

Die Methode von Smolyak bei der multivariaten Interpolation / Smolyak's method for multivariate interpolation

Schreiber, Anja 22 June 2000 (has links)
No description available.
530

Risiken des Klimawandels für den Wasserhaushalt - Variabilität und Trend des zeitlichen Niederschlagsspektrums / Risks for the water budget due to climate change – variability and trend of the temporal spectrum of precipitation

Franke, Johannes 29 December 2009 (has links)
This thesis was written as a cumulative dissertation based on peer-reviewed publications. Its objective was to study the temporal spectrum of precipitation under already changed and possible future climate conditions, in order to derive effects on the water budget that are fraught with risks. Starting from the seasonal precipitation trends established for Saxony and Central Germany for the period 1951-2000, the focus was on the behaviour of heavy precipitation in the catchment of the Weißeritz (eastern Ore Mountains) during the growing season. Using extreme value distributions, the local heavy precipitation behaviour in the reference period 1961-2000 was described statistically for event durations of 1-24 hours and return periods of 5-100 years. Statistical downscaling based on weather patterns was used to project possible changes in the level of the temporally higher resolved precipitation spectrum, relative to the reference spectrum, onto the time slices around 2025 (2011-2040) and 2050 (2036-2065), assuming the climate conditions expected under the IPCC A1B emission scenario. A regionalisation algorithm adapted to the problem made it possible to transform the point information into continuous areal information, taking distribution-relevant orographic effects on precipitation into account in a manner true to scale. The significant decreases in summer and growing-season precipitation in Saxony are combined with an increase and intensification of heavy precipitation. This creates a potential for conflict between flood protection on the one hand and the supply of (drinking) water on the other. For the expected climate conditions of the time slices around 2025 and 2050, increasingly positive, non-linear shifts in the level of the temporally higher resolved spectrum of heavy precipitation were calculated for the catchment of the Weißeritz: for constant return periods, larger rainfall depths result, and for constant rainfall depths, shorter return periods. From the change signal obtained it can be concluded that the continuing general warming trend is accompanied by an intensification of the primarily thermally induced, convective heavy precipitation, which in Saxony is associated with an increasingly frequent occurrence of short-duration heavy rainfall events and with an additional orographic intensification of long-duration events. Using the Ellenberg climate quotient, effects of the recent climate trend on the distribution of the potential natural vegetation in Central Germany were examined by way of example; a modification of the calculation rule made it possible to take the trend-affected climatological conditions, in particular the negative summer precipitation trend, into account. Overall, it was found that the regional effects of global climate change bring about massive changes in the spatio-temporal structure of precipitation in Saxony, which inevitably entails a complex chain of impacts on the regional water budget and is associated with risks.
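
The abstract describes fitting extreme value distributions to heavy-precipitation events and reading off rainfall depths for given return periods. As a generic illustration of that step — not the thesis's data, event durations or chosen distribution family — the sketch below fits a GEV distribution to synthetic annual maxima with SciPy and computes return levels.

```python
# Minimal sketch: fit a GEV distribution to annual maximum precipitation
# and compute return levels for given return periods. The "observations"
# are synthetic; the thesis's actual data and durations are not used.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
annual_max_mm = genextreme.rvs(c=-0.1, loc=35.0, scale=10.0, size=40,
                               random_state=rng)   # 40 synthetic yearly maxima

c, loc, scale = genextreme.fit(annual_max_mm)      # maximum likelihood fit
for T in (5, 20, 50, 100):                         # return periods in years
    level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.1f} mm")
```
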
