11

Bayesian Inference for Bivariate Conditional Copula Models with Continuous or Mixed Outcomes

Sabeti, Avideh 12 August 2013 (has links)
The main goal of this thesis is to develop a Bayesian model for studying the influence of covariates on the dependence between random variables. Conditional copula models are flexible tools for modelling complex dependence structures. We construct Bayesian inference for the conditional copula model adapted to regression settings in which the bivariate outcome is continuous or mixed (binary and continuous) and the copula parameter varies with covariate values. The functional relationship between the copula parameter and the covariate is modelled using cubic splines. We also extend our work to additive models, which allow us to handle more than one covariate while keeping the computational burden within reasonable limits. We perform the proposed joint Bayesian inference via adaptive Markov chain Monte Carlo sampling. The deviance information criterion and the cross-validated marginal log-likelihood criterion are employed for three model selection problems: 1) choosing the copula family that best fits the data, 2) selecting the calibration function, i.e., checking whether a parametric form for the copula parameter is suitable, and 3) determining the number of independent variables in the additive model. The performance of the estimation and model selection techniques is investigated via simulations and demonstrated on two data sets: 1) Matched Multiple Birth and 2) Burn Injury. In the former, the interest is in the influence of gestational age and maternal age on twin birth weights, whereas in the latter we investigate how a patient's age affects the severity of burn injury and the probability of death.
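To make the modelling idea concrete, here is a minimal sketch (assuming a Clayton copula and a cubic B-spline calibration function with a log link; the function names, link and spline basis are illustrative choices, not the thesis code) of the conditional-copula log-likelihood whose spline coefficients an adaptive MCMC sampler would explore:

```python
import numpy as np
from scipy.interpolate import BSpline

def clayton_log_density(u, v, theta):
    """Log-density of the Clayton copula, valid for theta > 0."""
    return (np.log1p(theta)
            - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u**(-theta) + v**(-theta) - 1.0))

def conditional_copula_loglik(u, v, x, knots, coefs, degree=3):
    """Joint log-likelihood when the copula parameter varies with a covariate.

    The calibration function eta(x) is a cubic B-spline; theta(x) = exp(eta(x))
    keeps the Clayton parameter positive (illustrative link, not the thesis code).
    """
    eta = BSpline(knots, coefs, degree)(x)
    theta = np.exp(eta)
    return np.sum(clayton_log_density(u, v, theta))

# Toy usage: uniform margins u, v observed together with a covariate x in [0, 1].
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
u, v = rng.uniform(size=(2, 200))
knots = np.r_[np.zeros(4), 0.5, np.ones(4)]   # clamped cubic-spline knots on [0, 1]
coefs = rng.normal(size=5)                     # spline coefficients (MCMC would sample these)
print(conditional_copula_loglik(u, v, x, knots, coefs))
```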
13

Predictors of outcome for severely emotionally disturbed children in treatment

Luiker, Henry George January 2008 (has links)
Doctor of Philosophy (PhD) / Despite general agreement that severely emotionally disturbed children and adolescents are an "at risk" group, and that ongoing evaluation and research into the effectiveness of services provided for them is important, very little outcome evaluation actually takes place. The absence of well-conducted and appropriately interpreted studies is particularly notable for day or residential treatment programs, which cater for the most severely emotionally disturbed youths. This thesis outlines the main areas of conceptual, pragmatic and methodological confusion and neglect which impede progress in research in this area. It argues for a plurality of data-analytic strategies and research designs. It then critically reviews the reported findings about the effectiveness of day and residential treatment in specialist facilities, and the predictors of good outcomes for this treatment type. This review confirms that there is very little to guide practice. Having argued for the legitimacy of its methods and the necessity to address basic questions, the thesis reports the results of a naturalistic study based on data accumulated during a decade-long evaluative research program at the Arndell Child and Adolescent Unit, Sydney. The study addresses the question of what child, family and treatment variables predict outcome for 159 children and adolescents treated at this facility from 1990 to 1999. Statistically significant results with large effect sizes were obtained. Among the most disturbed subgroup of forty-three children, (a) psychodynamic milieu-based treatment was shown to be more effective than the "empirically validated" cognitive-behavioural treatment which superseded it in 1996, and (b) children from step-families showed better outcomes than those from other family structures. Furthermore, it was found for the study sample as a whole that severe school-based problem behaviours were associated with a limited trajectory of improvement in home-based problem behaviour. These results are discussed with regard to implications for treatment, research methodology, policy and further studies.
14

Aspects of interval analysis applied to initial-value problems for ordinary differential equations and hyperbolic partial differential equations

Anguelov, Roumen Anguelov 09 1900 (has links)
Interval analysis is an essential tool in the construction of validated numerical solutions of Initial Value Problems (IVP) for Ordinary (ODE) and Partial (PDE) Differential Equations. A validated solution typically consists of guaranteed lower and upper bounds for the exact solution, or for the set of exact solutions in the case of uncertain data, i.e. it is an interval function (enclosure) containing all solutions of the problem. IVP for ODE: The central point of discussion is the wrapping effect. A new concept of wrapping function is introduced and applied in studying this effect. It is proved that the wrapping function is the limit of the enclosures produced by any method of a certain type (propagate-and-wrap type). The wrapping effect can then be quantified as the difference between the wrapping function and the optimal interval enclosure of the solution set (or some norm of it). The problems with no wrapping effect are characterized as problems for which the wrapping function equals the optimal interval enclosure. A sufficient condition for no wrapping effect is that there exists a linear transformation, preserving the intervals, which reduces the right-hand side of the system of ODE to a quasi-isotone function. This condition is also necessary for linear problems and "near" necessary in the general case. Hyperbolic PDE: The Initial Value Problem with periodic boundary conditions for the wave equation is considered. It is proved that under certain conditions the problem is an operator equation with an operator of monotone type. Using the established monotone properties, an interval (validated) method for numerical solution of the problem is proposed. The solution is obtained step by step in the time dimension as a Fourier series of the space variable and a polynomial of the time variable. The numerical implementation involves computations in Fourier and Taylor functoids. Propagation of discontinuous waves is a serious problem when a Fourier series is used (Gibbs phenomenon, etc.). We propose the combined use of periodic splines and Fourier series for representing discontinuous functions and a method for propagating discontinuous waves. The numerical implementation involves computations in a Fourier hyper functoid. / Mathematical Sciences / D. Phil. (Mathematics)
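A small numerical illustration of the wrapping effect discussed above (a hedged sketch, not the thesis's wrapping-function machinery): enclosing a rotated box in an axis-aligned interval box after each step, as a propagate-and-wrap method would, inflates the enclosure even though composing all 32 rotations of 2π/32 gives the identity map, so the tight enclosure would keep the original width of 0.2.

```python
import numpy as np

def rotate_box(lo, hi, angle):
    """Enclose the image of the box [lo, hi] under a rotation by an axis-aligned box.
    For a linear map this equals the interval-arithmetic evaluation, and the
    overestimation it introduces at each step is the wrapping effect."""
    corners = np.array([[x, y] for x in (lo[0], hi[0]) for y in (lo[1], hi[1])])
    c, s = np.cos(angle), np.sin(angle)
    rotated = corners @ np.array([[c, -s], [s, c]]).T
    return rotated.min(axis=0), rotated.max(axis=0)

lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])   # initial box of width 0.2 x 0.2
n = 32
for _ in range(n):
    lo, hi = rotate_box(lo, hi, 2 * np.pi / n)
print(hi - lo)   # much wider than the exact image's width [0.2, 0.2]
```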
15

Clustering in foreign exchange markets : price, trades and traders / Clustering sur les marchés FX : prix, trades et traders

Lallouache, Mehdi 10 July 2015 (has links)
The aim of this thesis is to study three types of clustering in foreign exchange markets, namely in prices, trade arrivals and investors' decisions. We investigate the statistical properties of the EBS order book for the EUR/USD and USD/JPY currency pairs and the impact of a ten-fold tick size reduction on its dynamics. A large fraction of limit orders are still placed right at or halfway between the old allowed prices. This generates price barriers where the best quotes lie for much of the time, which causes the emergence of distinct peaks in the average shape of the book at round distances. Furthermore, we argue that this clustering is mainly due to manual traders who remained set to the old price resolution. Automatic traders easily take price priority by submitting limit orders one tick ahead of clusters, as shown by the prominence of buy (sell) limit orders posted with rightmost digit one (nine). The clustering of trade arrivals is well known in financial markets, and Hawkes processes are particularly suited to describe this phenomenon. We raise the question of what part of market dynamics Hawkes processes are able to account for exactly. We document the accuracy of such processes as one varies the time interval of calibration and compare the performance of various types of kernels made up of sums of exponentials. Because of their around-the-clock opening times, FX markets are ideally suited to our aim as they allow us to avoid the complications of the long daily overnight closures of equity markets. One can achieve statistical significance according to three simultaneous tests provided that one uses kernels with two exponentials for fitting an hour at a time, and two or three exponentials for full days; longer periods could not be fitted within statistical satisfaction because of the non-stationarity of the endogenous process. Fitted timescales are relatively short, and the endogeneity factor is high but sub-critical, at about 0.8. Most agent-based models of financial markets implicitly assume that the agents interact through asset prices and exchanged volumes. Some of them add an explicit trader-trader interaction network on which rumors propagate, or that encodes groups taking common decisions. Contrary to other types of data, such networks, if they exist, are necessarily implicit, which makes their determination a more challenging task. We analyze transaction data of all the clients of two liquidity providers, encompassing several years of trading. By assuming that the links between agents are determined by systematic simultaneous activity or inactivity, we show that interaction networks do exist. In addition, we find that the (in)activity of some agents systematically triggers the (in)activity of other traders, defining lead-lag relationships between the agents. This implies that the global investment flux is predictable, which we check by using sophisticated machine learning methods.
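As a rough companion to the Hawkes modelling described above, the sketch below (assuming a single exponential kernel and an Ogata-thinning simulator; the parameterisation is illustrative, not the calibration code used in the thesis) shows how the branching ratio, around 0.8 in the results quoted above, controls the endogenous amplification of the event rate:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """lambda(t) = mu + sum_i alpha * beta * exp(-beta (t - t_i)) over past events.
    With this parameterisation the branching ratio (endogeneity factor) is alpha."""
    past = events[events < t]
    return mu + alpha * beta * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata's thinning algorithm for a univariate exponential-kernel Hawkes process."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        # Valid upper bound: intensity decays between events; alpha*beta covers a jump at t.
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha * beta
        t += rng.exponential(1.0 / lam_bar)
        if rng.uniform() * lam_bar <= hawkes_intensity(t, np.array(events), mu, alpha, beta):
            events.append(t)
    return np.array(events[:-1]) if events and events[-1] > horizon else np.array(events)

ev = simulate_hawkes(mu=0.2, alpha=0.8, beta=5.0, horizon=1000.0)
# Stationary rate is mu / (1 - alpha) = 1 event per unit time when alpha = 0.8.
print(len(ev) / 1000.0)
```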
16

Sarcopenia Screening by Registered Dietitian Nutritionists (RDNs) in the United States (U.S.)

Marcom, Madison 01 May 2021 (has links)
Sarcopenia is a disease of muscle wasting primarily seen in older adults. Although the term was first coined over three decades ago, there is a lack of consensus on a definition, screening criteria, and treatment protocol for sarcopenia. The primary purpose of this study is to determine whether registered dietitian nutritionists (RDNs) in the United States (U.S.) screen for sarcopenia. Study participants were recruited through a randomized email list and included RDNs throughout the U.S. Respondents completed a survey assessing their knowledge of sarcopenia, the screening tools and company protocols in place, and the need and desire for sarcopenia education. The data revealed a lack of pre-existing protocols, an inconsistent mix of validated and unvalidated screening tools used in practice, and a substantial need for sarcopenia education.
17

Causalité des marchés financiers : asymétrie temporelle et réseaux multi-échelles de meneurs et suiveurs / Causality in financial markets : time reversal asymmetry and multi-scale lead-lag networks

Cordi, Marcus 07 March 2019 (has links)
This thesis aims to uncover the underlying causality structure of financial markets by focusing on the inference of investor causal networks at multiple timescales in two trader-resolved datasets. The first part of this thesis is devoted to the causal strength of Hawkes processes. These processes describe in a clearly causal way how the activity rate of, e.g., an investor depends on his past activity rate; the multivariate version also makes it possible to include the interactions between the agents, at all timescales. The main result of this part is that the classical MLE estimation of the process parameters does not vary significantly if the arrow of time is reversed, in the univariate and symmetric multivariate cases. This means that blindly trusting univariate and symmetric multivariate Hawkes processes to infer causality from data is problematic. In addition, we find a dependency between the level of causality in the process and its endogeneity. For long time series of synthetic data, one can discriminate between the forward and backward arrows of time by performing rigorous statistical tests on the processes, but for empirical data the situation is much more ambiguous, as it is entirely possible to find a better Hawkes process fit when time runs backwards compared to forwards. Asymmetric Hawkes processes do not suffer from very weak causality. Fitting them to the individual traders' actions found in our datasets is unfortunately not very successful, for two reasons. We carefully checked that traders' actions in both datasets are highly non-stationary, and that local stationarity cannot be assumed to hold, as there is simply not enough data, even if each dataset contains about one million trades. This is also compounded by the fact that Hawkes processes encode the pairwise influence of traders for all timescales simultaneously. In order to alleviate this problem, the second part of this thesis focuses on causality between specific pairs of timescales. Further filtering is achieved by reducing the effective number of investors; Statistically Validated Networks are applied to cluster investors into groups based on the statistically high synchronisation of their actions (buy, sell or neutral) in time intervals of a given timescale. This part then generalizes single-timescale lead-lag SVNs to lead-lag networks between two timescales and introduces three slightly different methods. These methods make it possible to characterize causality in a novel way. We are able to compare the time reversal asymmetry of trader activity and that of price volatility, and conclude that the causal structure of trader activity is considerably more complex than that of the volatility for a given category of traders. Expectedly, institutional traders, whose impact on prices is much larger than that of retail clients, have a causality structure that is closer to that of volatility. This is because volatility, being a macroscopic quantity, aggregates the behaviour of all types of traders, thereby hiding the causality structure of minor players.
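The following toy sketch illustrates the kind of lead-lag synchronisation test that underlies statistically validated networks (a hedged example with a hypergeometric null on a single buy-state pair; the exact construction, state definitions and validation thresholds used in the thesis differ):

```python
import numpy as np
from scipy.stats import hypergeom

def lead_lag_pvalue(leader, follower, lag=1):
    """P-value that the follower's buy state matches the leader's buy state `lag`
    intervals later more often than chance, under the hypergeometric null used
    in statistically validated networks (single state pair shown)."""
    a, b = leader[:-lag], follower[lag:]
    n = len(a)
    k_a = int(np.sum(a == 1))                 # intervals where the leader buys
    k_b = int(np.sum(b == 1))                 # intervals where the follower buys one step later
    k_ab = int(np.sum((a == 1) & (b == 1)))
    # Probability of at least k_ab co-occurrences if the two sets of intervals overlapped at random.
    return hypergeom.sf(k_ab - 1, n, k_a, k_b)

rng = np.random.default_rng(1)
leader = rng.choice([-1, 0, 1], size=500)     # sell / neutral / buy state per interval
follower = np.roll(leader, 1)                 # copies the leader with a one-interval lag
noise = rng.uniform(size=500) < 0.3           # corrupt 30% of the follower's intervals
follower[noise] = rng.choice([-1, 0, 1], size=noise.sum())
print(lead_lag_pvalue(leader, follower))      # tiny p-value -> validated lead-lag link
```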
18

Embedded and validated control algorithms for the spacecraft rendezvous / Algorithmes de commande embarqués et validés pour le rendez-vous spatial

Arantes Gilz, Paulo Ricardo 17 October 2018 (has links)
Autonomy is one of the major concerns during the planning of a space mission, whether its objective is scientific (interplanetary exploration, observations, etc.) or commercial (service in orbit). For space rendezvous, this autonomy depends on the on-board capacity to control the relative motion between two spacecraft. In the context of satellite servicing (troubleshooting, propellant refueling, orbit correction, end-of-life deorbiting, etc.), the feasibility of such missions is also strongly linked to the ability of the guidance and control algorithms to account for all operational constraints (for example, thruster saturation or restrictions on the relative positioning between the vehicles) while maximizing the life of the vehicle (minimizing propellant consumption). The literature shows that this problem has been intensively studied since the early 2000s. However, the proposed algorithms are not entirely satisfactory. Some approaches, for example, relax the constraints in order to base the control algorithm on an efficient optimization problem. Other methods, which account for the whole set of constraints of the problem, are too cumbersome to be embedded on the real computers existing in spacecraft. The main objective of this thesis is the development of new efficient and validated algorithms for the impulsive guidance and control of spacecraft in the context of the so-called "hovering" phases of the orbital rendezvous, i.e. the stages in which a secondary vessel must maintain its position within a bounded zone of space relative to another main vessel. The first contribution presented in this manuscript uses a new mathematical formulation of the space constraints for the relative motion between spacecraft to design control algorithms with more efficient computational processing compared to traditional approaches. The second and main contribution is a predictive control strategy that has been formally demonstrated to ensure the convergence of relative trajectories towards the "hovering" zone, even in the presence of disturbances or saturation of the actuators. [...]
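For intuition about the hovering problem described above, here is a minimal sketch (assuming the linearised Clohessy-Wiltshire relative dynamics and a box-shaped hovering zone; the matrices, numerical values and the box constraint are illustrative, not the thesis's algorithms) of propagating the relative state between impulses and checking the zone constraint:

```python
import numpy as np

def cw_stm(n, t):
    """In-plane Clohessy-Wiltshire state-transition matrix for the state
    [x, y, vx, vy] (x radial, y along-track), mean motion n, elapsed time t."""
    c, s = np.cos(n * t), np.sin(n * t)
    return np.array([
        [4 - 3 * c,       0, s / n,           2 * (1 - c) / n],
        [6 * (s - n * t), 1, 2 * (c - 1) / n, (4 * s - 3 * n * t) / n],
        [3 * n * s,       0, c,               2 * s],
        [6 * n * (c - 1), 0, -2 * s,          4 * c - 3],
    ])

n = 2 * np.pi / 5400.0                        # mean motion of a ~90-minute orbit [rad/s]
state = np.array([50.0, -100.0, 0.0, 0.02])   # relative position [m] and velocity [m/s]
box = np.array([200.0, 200.0])                # half-widths of the hovering zone [m]

# Propagate between impulses and check the hovering constraint on a coarse time grid;
# a guidance scheme would plan the next impulse whenever the box is about to be left.
for t in np.arange(0.0, 3000.0, 300.0):
    pos = (cw_stm(n, t) @ state)[:2]
    print(f"t={t:5.0f} s  pos={pos.round(1)}  inside={bool(np.all(np.abs(pos) <= box))}")
```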
19

Prescription de médicament hors autorisation de mise sur le marché : fondements, limites, nécessités et responsabilités / Off-label drug prescribing : grounds, limits, needs and responsibilities

Debarre, Jean-Michel 30 March 2016 (has links)
Off-label drug prescribing is legitimate when it is based on medical knowledge that is accepted or validated at the time care is proposed, during the singular patient-physician interview. The marketing authorization of a drug represents only a fraction of medical knowledge, which is constantly changing, and cannot be regarded as a suitable reference for drug prescribing, from both a medical and a legal standpoint. Health democracy remains particularly incomplete in the European and national management of drug marketing authorizations.
20

Avaliação e aplicação de metodologia da técnica espectrométrica de fluorescência de raios X por reflexão total (TXRF) na caracterização multielementar em particulados sólidos de amostras ambientais / Evaluation and technical application of the methodology of the total reflection X-ray fluorescence spectrometry (TXRF) in multi-element characterization of particulate solids from environmental samples

Santos, Joelmir dos 29 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This study aimed at the evaluation and application of a methodology for multi-element analysis by total reflection X-ray fluorescence spectrometry (TXRF), in order to provide analyte concentration measurements with accuracy and precision. Using the energy and intensity of the characteristic X-ray spectral lines, a series of chemical elements were identified and quantified, with concentrations ranging from below one part per million down to a few parts per billion, especially for transition metals. To ensure statistical reliability and reproducibility, a methodology was applied based on the deposition of fine particles, smaller than 50 µm, representative of the sample and uniformly spread on a flat quartz reflector that was extremely clean and positioned at the angular condition of maximum reflectivity of monochromatic X-rays. To ensure uniformity on the support surface and the representativeness of the sample, a number of replicas of specific quantities of sample, in the form of a fine powder, were added to a diluted viscous organic solution containing a well-known concentration of an element chosen as internal standard. An aliquot of the viscous solution containing both the suspended sample particulate and the internal standard was deposited at the center of the quartz reflector and dried, yielding a very thin layer of particulates. First, the TXRF instrument was evaluated for its operation, spectral response and recovery of transition metal concentrations in a test sample supplied by the manufacturer. The reproducibility of the elemental concentration measurements was tested with a combination of experimental replicates (five) and analytical replicates (three) for each type of environmental matrix sample, namely certified reference materials (CRMs). Six types of CRMs were tested: river sediment, tomato leaves, rice, fish muscle, bone and bovine liver. The data were statistically processed, and the resulting mean values and uncertainties of the analyte concentrations were compared to the certified reference values reported in the technical certificates of the CRMs. Regarding the multi-element character of the TXRF technique, there was good recovery, about 100%, of most of the certified concentrations of the referenced analytes in the six CRMs, within the margin of variability of the results found by the TXRF technique and reported in the certificates. It was also found that the detection limit depends on the matrix density of the material under study; the transition elements showed the lowest detection limits. Given the good recovery of the major analyte concentrations in the Buffalo River sediment reference material, the same methodology was applied to the analysis of sediment from the Bezerra stream (Cascavel), collected at three locations along the stream over a period of three years. A number of chemical elements were identified through their K and L spectral lines and quantified following the internal standard method. The set of concentration data was statistically processed from the point of view of data normality, and a principal component analysis was performed, revealing a systematic presence of high concentrations of heavy metals such as lead and chromium, above the maximum permissible limits recommended by current environmental legislation (CONAMA 420/2009).
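The internal-standard quantification step described above reduces to a ratio of net peak intensities scaled by relative sensitivities. The sketch below uses illustrative numbers only (the gallium spike and the lead line are assumptions, not values from the study):

```python
def txrf_concentration(n_analyte, n_standard, s_analyte, s_standard, c_standard):
    """Internal-standard quantification commonly used in TXRF:
    C_i = (N_i / N_IS) * (S_IS / S_i) * C_IS,
    where N are net peak intensities, S relative sensitivities and C concentrations."""
    return (n_analyte / n_standard) * (s_standard / s_analyte) * c_standard

# Illustrative numbers only: gallium spiked at 10 mg/L as internal standard,
# lead quantified from its L-alpha line.
c_pb = txrf_concentration(n_analyte=1520, n_standard=48300,
                          s_analyte=1.18, s_standard=1.00,
                          c_standard=10.0)
print(f"Pb concentration ~ {c_pb:.2f} mg/L")
```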
