891

Orbitography and rendezvous dynamics of a space debris removal mission / Orbitografi och rendezvousdynamik i ett uppdrag för att avlägsna rymdskräp

Quénéa, Hugo January 2024
This paper investigates the feasibility of a rendezvous with an uncooperative space object using only optical sensors and examines the performance of different algorithms used to estimate an object's orbit. The ability to perform a rendezvous with an uncooperative target is critical for a wide variety of future missions, such as space debris removal. The main satellite, referred to as the chaser, must precisely determine the orbit of the space object of interest, referred to as the target. After presenting some elements of mission analysis, the report examines the angles-only method of Initial Orbit Determination developed by Gooding, which is well suited for space-based observations and yields the osculating orbit at the time of the measurements. The estimated orbit is then refined using a Batch Least Squares algorithm. The accuracy of the orbit determination depends on the number and precision of the measurements; an optimal strategy is to distribute the measurements regularly over the whole orbit. The constraints of eclipses and ground-station contacts are taken into account. Finally, the Rendezvous and Proximity Operations are explored in a mission scenario.
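To make the refinement step concrete, here is a small Python sketch of a batch least-squares fit of an orbital state to simulated angles-only measurements. Everything in it (the two-body-only dynamics, the fixed observer position, the noise level, and the initial guess standing in for a coarse IOD solution) is an illustrative assumption, not material from the thesis.

```python
# Hedged sketch: batch least-squares refinement of a target state from
# angles-only (right ascension / declination) measurements. Two-body dynamics
# only; the chaser/observer is held fixed for simplicity, and all numbers are
# invented for the example.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

MU = 3.986004418e14  # Earth's gravitational parameter [m^3 s^-2]

def two_body(t, y):
    r = y[:3]
    return np.hstack((y[3:], -MU * r / np.linalg.norm(r) ** 3))

def predicted_angles(state0, times, observer):
    sol = solve_ivp(two_body, (times[0], times[-1]), state0, t_eval=times,
                    rtol=1e-9, atol=1e-6)
    rel = sol.y[:3].T - observer                      # line-of-sight vectors
    ra = np.arctan2(rel[:, 1], rel[:, 0])
    dec = np.arcsin(rel[:, 2] / np.linalg.norm(rel, axis=1))
    return np.column_stack((ra, dec))

def residuals(state0, times, obs, observer):
    return (predicted_angles(state0, times, observer) - obs).ravel()

# Synthetic measurement arc (kept short to avoid angle wrap-around).
times = np.linspace(0.0, 2000.0, 25)
truth = np.array([7.0e6, 0.0, 0.0, 0.0, 7.55e3, 1.0e3])   # hypothetical LEO state [m, m/s]
observer = np.array([6.9e6, 5.0e4, 0.0])                  # hypothetical chaser position
obs = predicted_angles(truth, times, observer)
obs += np.random.default_rng(0).normal(0.0, 1e-4, obs.shape)   # ~20 arcsec noise

guess = truth + np.array([5e4, -3e4, 2e4, 5.0, -5.0, 2.0])     # e.g. a coarse IOD solution
fit = least_squares(residuals, guess, args=(times, obs, observer))
print("position error after refinement [m]:", np.linalg.norm(fit.x[:3] - truth[:3]))
```

In the workflow described above, the coarse initial guess would come from Gooding's angles-only initial orbit determination, and the measurement epochs would be chosen to respect the eclipse and ground-station constraints.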
892

Unit root, outliers and cointegration analysis with macroeconomic applications

Rodríguez, Gabriel 10 1900
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal. / In this thesis, we deal with three particular issues in the literature on nonstationary time series. The first essay deals with various unit root tests in the context of structural change. The second studies residual-based tests used to identify cointegration. Finally, in the third essay, we analyze several tests used to identify additive outliers in nonstationary time series. The first paper analyzes the hypothesis that some time series can be characterized as stationary with a broken trend. We extend the class of M-tests and the ADF test for a unit root to the case where a change in the trend function is allowed to occur at an unknown time. These tests (MGLS, ADFGLS) adopt the Generalized Least Squares (GLS) detrending approach to eliminate the set of deterministic components present in the model. We consider two models in the context of the structural change literature: the first allows for a change in slope and the other for a change in slope as well as intercept. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal test (PTGLS), which allows us to find the power envelope. The asymptotic critical values of the tests are tabulated, and we compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. Two methods to select the break point are analyzed. The first estimates the break point that yields the minimal value of the statistic; in the second, the break point is selected such that the absolute value of the t-statistic on the change in slope is maximized. We show that the MGLS and PTGLS tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes the size and power of the tests in finite samples under various methods of selecting the truncation lag for the autoregressive spectral density estimator. In an empirical application, we consider two U.S. macroeconomic annual series widely used in the unit root literature: real wages and common stock prices. Our results suggest a rejection of the unit root hypothesis; in other words, these series can be considered trend stationary with a broken trend. Given that the GLS detrending approach yields gains in the power of unit root tests, a natural extension is to apply it to residual-based tests for cointegration. This is the objective of the second paper of the thesis. We propose residual-based tests for cointegration that use local GLS detrending to eliminate separately the deterministic components in the series. We consider two cases, one where only a constant is included and one where a constant and a time trend are included. The limiting distributions of various residual-based tests are derived for a general quasi-differencing parameter, and critical values are tabulated for c = 0 irrespective of the nature of the deterministic components, as well as for other values proposed in the unit root literature. Simulations show that GLS detrending yields tests with higher power; furthermore, using c = -7.0 or c = -13.5 as the quasi-differencing parameter, depending on which of the two cases is analyzed, is preferable. The third paper is an extension of a recently proposed method to detect outliers which explicitly imposes the null hypothesis of a unit root. The method works in an iterative fashion to select multiple outliers in a given series. We show, via simulation, that under the null hypothesis of no outliers it has the right size in finite samples to detect a single outlier, but when applied iteratively to select multiple outliers it exhibits severe size distortions towards finding an excessive number of outliers. We show that this iterative method is incorrect and derive the appropriate limiting distribution of the test at each step of the search. Whether corrected or not, the outliers need to be very large for the method to have any decent power. We propose an alternative method based on first-differenced data that has considerably more power. The issues are illustrated using two US/Finland real exchange rate series.
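As a minimal illustration of the GLS-detrending idea that runs through the first two essays, the sketch below quasi-differences a series and its deterministic terms with a local-to-unity parameter before computing a bare-bones ADF-type statistic. The lag length, the value c̄ = -13.5, the synthetic data, and the omission of the break-point search are simplifying assumptions for the example, not the thesis's procedure.

```python
# Hedged sketch (not the authors' code): GLS/quasi-difference detrending of a
# series prior to a unit-root test, using the local-to-unity parameter
# c_bar = -13.5 for the constant-plus-trend case mentioned in the abstract.
import numpy as np

def gls_detrend(y, c_bar=-13.5):
    """Quasi-difference y and the deterministic terms (constant + trend),
    regress one on the other, and return y minus the fitted trend."""
    T = len(y)
    a = 1.0 + c_bar / T
    z = np.column_stack((np.ones(T), np.arange(1, T + 1)))   # constant and trend
    yq = np.r_[y[0], y[1:] - a * y[:-1]]                      # quasi-differenced data
    zq = np.vstack((z[0], z[1:] - a * z[:-1]))
    beta, *_ = np.linalg.lstsq(zq, yq, rcond=None)
    return y - z @ beta                                        # GLS-detrended series

def adf_stat(y_detrended, lags=4):
    """ADF t-statistic on the detrended series (no deterministics)."""
    dy = np.diff(y_detrended)
    X = [y_detrended[lags:-1]]                                 # lagged level
    X += [dy[lags - i: -i] for i in range(1, lags + 1)]        # lagged differences
    X = np.column_stack(X)
    yv = dy[lags:]
    b, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ b
    s2 = resid @ resid / (len(yv) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return b[0] / se

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=200)) + 0.05 * np.arange(200)    # trending random walk
print("ADF-GLS-type statistic:", adf_stat(gls_detrend(y)))
```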
893

Investigation of a solvent-free continuous process to produce pharmaceutical co-crystals. Understanding and developing solvent-free continuous cocrystallisation (SFCC) through study of co-crystal formation under the application of heat, model shear and twin screw extrusion, including development of a near infrared spectroscopy partial least squares quantification method

Wood, Clive John January 2016
This project utilised a novel solvent-free continuous cocrystallisation (SFCC) method to manufacture pharmaceutical co-crystals. The objectives were to optimise the process towards achieving high co-crystal yields and to understand the behaviour of co-crystals under different conditions. Particular attention was paid to the development of near infrared (NIR) spectroscopy as a process analytical technology (PAT). Twin-screw hot-melt extrusion was the base technique of the SFCC process. Changing parameters such as temperature, screw speed and screw geometry was important for improving the co-crystal yield. The level of mixing and shear was directly influenced by the screw geometry, whilst the screw speed was an important parameter for controlling the residence time of the material during hot-melt extrusion. Ibuprofen–nicotinamide 1:1 co-crystals and carbamazepine–nicotinamide 1:1 co-crystals were successfully manufactured using the SFCC method. Characterisation techniques were important for this project, and NIR spectroscopy proved to be a convenient, accurate analytical technique for identifying the formation of co-crystals along the extruder barrel. Separate thermal and model-shear deformation studies were also carried out to determine the effect of temperature and shear on co-crystal formation for several different pharmaceutical co-crystal pairs. Finally, NIR spectroscopy was used to create two partial least squares regression models for predicting the 1:1 co-crystal yield of ibuprofen–nicotinamide and carbamazepine–nicotinamide when in a powder mixture with the respective pure API. It is believed that the prediction models created in this project can be used to facilitate future in-line PAT studies of pharmaceutical co-crystals during different manufacturing processes. / Engineering and Physical Sciences Research Council (EPSRC)
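To show what such a quantification model can look like in code, here is a hedged sketch of a partial least squares regression predicting co-crystal content from NIR spectra. The spectra, yields and number of latent variables are synthetic stand-ins; the thesis's actual calibration and validation protocol is not reproduced here.

```python
# Hedged sketch (illustrative, not the thesis workflow): a partial least squares
# model that predicts co-crystal content in a binary powder mixture from NIR
# spectra. The spectra below are synthetic stand-ins for measured data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavenumbers = np.linspace(4000, 10000, 300)          # cm^-1, assumed grid
yield_frac = rng.uniform(0.0, 1.0, 60)               # co-crystal mass fraction

# Fake spectra: a mixture of two "component" spectra plus noise and a baseline.
comp_pure = np.exp(-((wavenumbers - 5000) / 300) ** 2)
comp_cocrystal = np.exp(-((wavenumbers - 5150) / 250) ** 2)
X = (np.outer(1 - yield_frac, comp_pure) + np.outer(yield_frac, comp_cocrystal)
     + rng.normal(0, 0.01, (60, 300)) + rng.uniform(0, 0.05, (60, 1)))

pls = PLSRegression(n_components=4)
pred = cross_val_predict(pls, X, yield_frac, cv=5).ravel()
rmsep = np.sqrt(np.mean((pred - yield_frac) ** 2))
print(f"cross-validated RMSEP: {rmsep:.3f}")
```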
894

Monitoring ibuprofen-nicotinamide cocrystal formation during solvent free continuous cocrystallization (SFCC) using near infrared spectroscopy as a PAT tool

Kelly, Adrian L., Gough, Tim, Dhumal, Ravindra S., Halsey, S.A., Paradkar, Anant R. January 2012
The purpose of this work was to explore NIR spectroscopy as a PAT tool to monitor the formation of ibuprofen and nicotinamide cocrystals during extrusion-based solvent-free continuous cocrystallization (SFCC). Drug and co-former were gravimetrically fed into a heated co-rotating twin screw extruder to form cocrystals. Real-time process monitoring was performed using a high-temperature NIR probe in the extruder die to assess cocrystal content, which was subsequently compared to off-line powder X-ray diffraction (PXRD) measurements. The effect of processing variables, such as temperature and mixing intensity, on the extent of cocrystal formation was investigated. NIR spectroscopy was sensitive to cocrystal formation, with the appearance of new peaks and peak shifts, particularly in the 4800–5200 cm⁻¹ wavenumber region. PXRD confirmed increased conversion of the mixture into cocrystal with increasing barrel temperature and screw mixing intensity. A decrease in screw rotation speed also improved cocrystal yield, because the material experiences longer residence times within the process. A partial least squares analysis in this region of the NIR spectrum correlated well with the PXRD data, providing the best fit with cocrystal conversion when a limited range of process conditions was considered, for example a single set temperature. The study suggests that NIR spectroscopy could be used to monitor cocrystal purity on an industrial scale using this continuous, solvent-free process.
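Since the paper highlights the 4800–5200 cm⁻¹ region, the following sketch shows one common way such a band can be isolated and pre-treated (smoothed second derivative) before a PLS fit. The grid, spectra and filter settings are invented; the paper does not state that this particular preprocessing was used.

```python
# Hedged sketch: isolating the 4800-5200 cm^-1 region and taking a smoothed
# second derivative before regression, a common NIR preprocessing pattern.
# The spectra, grid and window settings here are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter

wavenumbers = np.linspace(4000, 10000, 1200)                       # cm^-1, assumed grid
spectra = np.random.default_rng(1).normal(1.0, 0.02, (20, 1200))   # stand-in spectra

region = (wavenumbers >= 4800) & (wavenumbers <= 5200)             # cocrystal-sensitive band
X = savgol_filter(spectra, window_length=15, polyorder=2, deriv=2, axis=1)[:, region]
print("feature matrix for PLS:", X.shape)                          # (20, points in the band)
```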
895

Restauration et séparation de signaux polynômiaux par morceaux. Application à la microscopie de force atomique / Restoration and separation of piecewise polynomial signals. Application to Atomic Force Microscopy

Duan, Junbo 15 November 2010
This thesis addresses several inverse problems arising in sparse signal processing. The main contributions are the design of algorithms dedicated to the restoration and separation of sparse signals, and their application to force-curve approximation in Atomic Force Microscopy (AFM), where the notion of sparsity is related to the number of discontinuity points in the signal (jumps, changes of slope, changes of curvature). From the signal processing viewpoint, we propose sub-optimal algorithms for the sparse signal approximation problem based on the l0 pseudo-norm: the Single Best Replacement (SBR) algorithm is an iterative "forward-backward" algorithm inspired by existing Bernoulli-Gaussian signal restoration algorithms, and the Continuation Single Best Replacement (CSBR) algorithm is an extension providing approximations at various sparsity levels. We also address the problem of sparse source separation from delayed mixtures; the proposed algorithm first applies CSBR to every mixture and then runs a matching procedure that assigns a label to each peak occurring in each mixture. Atomic Force Microscopy is a recent technology enabling the measurement of interaction forces between nano-objects. Force-curve analysis relies on piecewise parametric models. We address the detection of the regions of interest (the pieces) where each model holds and the subsequent estimation of the physical parameters (elasticity, adhesion forces, topography, etc.) in each region by least-squares optimization. We finally propose an alternative approach in which a force curve is modeled as a mixture of delayed sparse sources. The search for the source signals and their delays in a force-volume image relies on a large number of mixtures, since there are as many mixtures as image pixels.
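A compact reading of the Single Best Replacement idea is sketched below: at every iteration the algorithm tests each single insertion into, or removal from, the current support and keeps the one move that most decreases the l0-penalized least-squares cost. It is an illustrative re-implementation under that description, not the authors' code, and it recomputes each least-squares fit from scratch rather than updating it efficiently.

```python
# Hedged sketch of the Single Best Replacement idea for l0-penalized least
# squares: try every single insertion/removal in the current support, keep the
# best move while J(S) = ||y - A_S x_S||^2 + lam * |S| decreases, stop otherwise.
import numpy as np

def cost(A, y, support, lam):
    if not support:
        return y @ y, np.array([])
    As = A[:, support]
    xs, *_ = np.linalg.lstsq(As, y, rcond=None)
    r = y - As @ xs
    return r @ r + lam * len(support), xs

def sbr(A, y, lam):
    support = []
    best, _ = cost(A, y, support, lam)
    while True:
        moves = []
        for j in range(A.shape[1]):
            trial = [k for k in support if k != j] if j in support else support + [j]
            c, _ = cost(A, y, trial, lam)
            moves.append((c, trial))
        c, trial = min(moves, key=lambda t: t[0])
        if c >= best:                      # no single replacement improves the cost
            return support, cost(A, y, support, lam)[1]
        best, support = c, trial

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.05 * rng.normal(size=100)
support, coeffs = sbr(A, y, lam=1.0)
print("recovered support:", sorted(support))
```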
896

變數轉換之穩健迴歸分析 / Robust regression analysis with variable transformation

張嘉璁 Unknown Date
In traditional linear regression analysis, when the basic assumptions are not satisfied, a variable transformation can sometimes be applied so that the data better conform to those assumptions. Among the many transformation methods, the Box-Cox power transformation proposed by Box and Cox (1964) is the most commonly used; it can transform certain complex systems into a linear normal model. However, when outliers are present in the data, the Box-Cox transformation is affected and is therefore not a robust method. In this thesis, we use the forward search algorithm to obtain the least trimmed squares (LTS) estimator and, in the process, estimate a robust transformation parameter.
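The sketch below illustrates the idea of choosing the Box-Cox parameter through a trimmed, outlier-resistant criterion: a least trimmed squares objective is profiled over a grid of transformation parameters. It uses random starts with concentration steps instead of the thesis's forward search, and the data, trimming fraction and grid are invented for the example.

```python
# Hedged sketch: robust selection of a Box-Cox transformation parameter by
# profiling a least trimmed squares (LTS) criterion over a grid of lambdas.
# This is not the forward search used in the thesis, only an illustration.
import numpy as np

def normalized_boxcox(y, lam):
    gm = np.exp(np.mean(np.log(y)))                  # geometric mean (Jacobian scaling)
    if abs(lam) < 1e-8:
        return gm * np.log(y)
    return (y ** lam - 1.0) / (lam * gm ** (lam - 1.0))

def lts_objective(X, z, h, n_starts=50):
    rng = np.random.default_rng(0)
    n, p = X.shape
    best = np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, p, replace=False)        # elemental start
        for _ in range(10):                          # concentration steps
            beta, *_ = np.linalg.lstsq(X[idx], z[idx], rcond=None)
            r2 = (z - X @ beta) ** 2
            idx = np.argsort(r2)[:h]                 # keep the h smallest residuals
        best = min(best, np.sort(r2)[:h].sum())
    return best

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 120)
y = (2.0 + 0.8 * x + rng.normal(0, 0.2, 120)) ** 2   # true lambda is about 0.5
y[:10] *= 6.0                                        # contaminate with outliers
X = np.column_stack((np.ones_like(x), x))
h = int(0.75 * len(y))                               # trimming: keep 75% of the data

lams = np.linspace(-1, 2, 31)
scores = [lts_objective(X, normalized_boxcox(y, l), h) for l in lams]
print("robust lambda estimate:", lams[int(np.argmin(scores))])
```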
897

Χωροχρονικές τεχνικές επεξεργασίας σήματος σε ασύρματα τηλεπικοινωνιακά δίκτυα / Space-Time signal processing techniques for wireless communication networks

Κεκάτος, Βασίλειος 25 October 2007
Over the last decades, dramatic progress in products and services based on wireless communication networks has been observed, and significant research challenges have emerged. Systems employing multiple antennas at the transmitter and the receiver, known as MIMO (multi-input multi-output) systems, and code division multiple access (CDMA) are two of the main technologies driving the evolution of wireless communications. In this PhD thesis, we worked on the design and analysis of signal processing algorithms for these two systems, as described in detail next. Concerning MIMO systems, the pioneering work performed at Bell Labs around 1996, where the BLAST (Bell Labs Layered Space-Time) architecture was developed, proved that using multiple antennas can lead to a significant increase in wireless system capacity. To exploit this potential, sophisticated MIMO receivers must be designed. To this end, a large number of channel equalizers has been proposed. However, most of these methods assume that the wireless channel is: 1) static, 2) frequency flat (no intersymbol interference is introduced), and, mainly, 3) perfectly known at the receiver. Since these assumptions are difficult to meet in high-rate single-carrier systems, we focused our attention on adaptive equalization methods. More specifically, three basic algorithms have been developed. The first is an adaptive decision feedback equalizer (DFE) for frequency-flat MIMO channels. The proposed MIMO DFE implements the BLAST architecture and is updated by the recursive least squares (RLS) algorithm in its square-root form. The new equalizer can track time-varying channels and, to the best of our knowledge, has the lowest computational complexity among the BLAST receivers proposed to date. The second algorithm is an extension of the first to the frequency-selective channel case. By proper modeling of the equalization problem, we arrive at an efficient DFE for wideband MIMO channels. In this case, the equalization process encounters numerical instability problems, which are successfully treated by the square-root RLS implementation employed. To further reduce complexity, we propose an adaptive MIMO DFE that is updated by the least mean squares (LMS) algorithm, implemented entirely in the frequency domain. By using the fast Fourier transform (FFT), the required complexity is considerably reduced. Moreover, the frequency-domain implementation leads to an approximate decoupling of the equalization problem at each frequency bin; an independent update of the filters at each bin then allows faster convergence of the algorithm. The proposed equalizer offers a good performance-complexity tradeoff. Furthermore, we worked on channel estimation for an asynchronous CDMA system. The assumed scenario is that the base station has already acquired all the active users, and the uplink channel parameters of a new user entering the system must be estimated. The problem is described by a least squares cost function that is linear with respect to the channel gains and nonlinear with respect to the delays. We prove that the problem is approximately decoupled and propose a new iterative parameter estimation method. The suggested method does not require any specific pilot sequence and performs well even for a short training interval. It is robust to multiple access interference and more accurate than an existing method, at the expense of an insignificant increase in computational complexity.
898

Propriétés fonctionnelles et spectrales d’espèces végétales de tourbières ombrotrophes le long d’un gradient de déposition d’azote / Functional and spectral properties of ombrotrophic peatland plant species along a nitrogen deposition gradient

Girard, Alizée 12 1900
Bogs, as nutrient-poor ecosystems, are particularly sensitive to atmospheric nitrogen (N) deposition. Nitrogen deposition alters bog plant community composition and can limit their ability to sequester carbon (C). Spectroscopy is a promising approach for studying how N deposition affects bogs because of its ability to remotely detect long-term changes in plant species composition as well as shorter-term changes in foliar chemistry. However, there is limited knowledge of the extent to which bog plants differ in their foliar spectral properties, how N deposition might affect those properties, and whether subtle inter- or intraspecific changes in foliar traits can be detected spectrally. Using an integrating sphere fitted to a field spectrometer, we measured the spectral properties of leaves from the four most common vascular plant species (Chamaedaphne calyculata, Kalmia angustifolia, Rhododendron groenlandicum and Eriophorum vaginatum) in three bogs in southern Québec and Ontario, Canada, exposed to different atmospheric N deposition levels, including one subjected to an 18-year N fertilization experiment. We also measured chemical and morphological properties of those leaves. We found detectable intraspecific changes in leaf structural traits and chemistry (namely chlorophyll b and N concentrations) with increasing N deposition, and identified spectral regions that helped distinguish the site-specific populations within each species. Most of the variation in leaf spectral, chemical and morphological properties was among species. As such, species had distinct foliar spectral signatures, allowing us to identify them with high accuracy using partial least squares discriminant analyses (PLSDA). Predictions of foliar traits from spectra using partial least squares regression (PLSR) were generally accurate, particularly for the concentrations of N and C, soluble C, leaf water, and dry matter content (<10% RMSEP). However, these multi-species PLSR models were not accurate within species, where the range of values was narrow. To improve the detection of short-term intraspecific changes in functional traits, models should be trained with more species-specific data. Our field study, showing clear differences in foliar spectra and traits among species and some within-species differences due to N deposition, suggests that spectroscopy is a promising approach for assessing long-term vegetation changes in bogs subject to atmospheric pollution.
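For readers unfamiliar with PLS-DA, the sketch below shows the usual recipe on synthetic "spectra": one-hot class labels are regressed on the spectra with PLS, and each sample is assigned to the class with the largest predicted score. The data, number of bands and number of latent variables are invented and do not reflect the study's measurements.

```python
# Hedged sketch (synthetic data, invented class structure): partial least squares
# discriminant analysis (PLS-DA) for identifying species from leaf spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_per_class, n_bands, n_classes = 40, 200, 4            # e.g. four vascular species
centers = rng.normal(0.5, 0.1, (n_classes, n_bands))    # fake mean spectrum per species
X = np.vstack([c + rng.normal(0, 0.03, (n_per_class, n_bands)) for c in centers])
labels = np.repeat(np.arange(n_classes), n_per_class)
Y = np.eye(n_classes)[labels]                           # one-hot responses for PLS-DA

X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, labels, random_state=0)
plsda = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = plsda.predict(X_te).argmax(axis=1)               # class with the largest score
print("species classification accuracy:", np.mean(pred == y_te))
```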
899

Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods

Hellwig, Friederike 12 June 2019
The thesis "Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods" proves optimal convergence rates for four lowest-order discontinuous Petrov-Galerkin (dPG) methods for the Poisson model problem, for a sufficiently small initial mesh-size, in two different ways, via equivalences to two other non-standard classes of finite element methods: the reduced mixed method and the weighted least-squares method. The first is a mixed system of equations with first-order conforming Courant and nonconforming Crouzeix-Raviart functions. The second is a generalized least-squares formulation with a midpoint quadrature rule and weight functions. The thesis generalizes a result on the primal discontinuous Petrov-Galerkin method from [Carstensen, Bringmann, Hellwig, Wriggers 2018] and characterizes all four discontinuous Petrov-Galerkin methods simultaneously as particular instances of these methods. It establishes alternative reliable and efficient error estimators for both methods. A main accomplishment of this thesis is the proof of optimal convergence rates of the adaptive schemes in the axiomatic framework of [Carstensen, Feischl, Page, Praetorius 2014]. The optimal convergence rates of the four discontinuous Petrov-Galerkin methods then follow as special cases from this rate-optimality. Numerical experiments verify the optimal convergence rates of both types of methods for different choices of parameters. Moreover, they complement the theory with a thorough comparison of both methods with each other and with their equivalent discontinuous Petrov-Galerkin schemes.
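For orientation, the unweighted first-order-system least-squares functional for the Poisson model problem, which the weighted midpoint-quadrature formulation in the thesis generalizes, can be written as follows; this is the standard textbook form, quoted here as background rather than taken from the thesis.

```latex
% Poisson model problem: -\Delta u = f in \Omega, u = 0 on \partial\Omega,
% recast as the first-order system \sigma = \nabla u, \ \operatorname{div}\sigma + f = 0.
% The least-squares method minimizes, over (\tau, v) \in H(\operatorname{div},\Omega) \times H^1_0(\Omega),
\[
  LS(f;\tau,v) \;=\; \lVert f + \operatorname{div}\tau \rVert_{L^2(\Omega)}^2
  \;+\; \lVert \tau - \nabla v \rVert_{L^2(\Omega)}^2 .
\]
```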
900

The determinants and deterrents of profit shifting : evidence from a sample of South African multinational enterprises

Isaac, Nereen 10 1900
This study assessed the determinants and deterrents of profit shifting, which can occur as a result of corporate income tax competition, with a view to aiding the collection of sufficient tax revenue to meet public spending requirements. The study theoretically and empirically analysed the effectiveness of the introduction of the South African transfer pricing regulations in deterring profit shifting, using annual financial information of South African-parented multinational enterprises for the period 2010–2017. The study established that the implementation of transfer pricing regulations resulted in a reduction in profit shifting that became increasingly prominent as the rules became stricter. Based on the findings, it is recommended that the South African government allocate sufficient resources to ensure that the transfer pricing regulations are adhered to, with the aim of reducing profit shifting from South Africa. / Economics / M. Com. (Economics)
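Purely as an illustration of the kind of panel specification such a study might estimate (not the author's actual model), the sketch below runs a difference-in-differences style regression with firm fixed effects on invented data, where profit shifting is proxied by the sensitivity of reported profitability to a foreign tax-rate gap before and after an assumed regulation date.

```python
# Hedged sketch, not the study's model: firm fixed-effects regression testing
# whether reported profitability became less sensitive to the foreign tax-rate
# gap after a regulation change. All variable names, the 2012 cutoff and the
# data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms, years = 50, range(2010, 2018)
df = pd.DataFrame([(f, y) for f in range(firms) for y in years], columns=["firm", "year"])
df["tax_gap"] = rng.normal(0.05, 0.03, len(df))        # foreign-vs-domestic tax differential
df["post"] = (df["year"] >= 2012).astype(int)          # assumed post-regulation indicator
firm_effect = rng.normal(0, 0.5, firms)[df["firm"]]
# Simulated truth: profitability falls with the tax gap, less so after the rules.
df["log_profit"] = (2 + firm_effect - 3.0 * df["tax_gap"]
                    + 2.0 * df["tax_gap"] * df["post"] + rng.normal(0, 0.3, len(df)))

model = smf.ols("log_profit ~ tax_gap * post + C(firm)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(model.params[["tax_gap", "tax_gap:post"]])       # shifting and its post-rule attenuation
```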
