451

Preditores da argila prontamente dispersa em água em solos tropicais via turbidimetria e VisNIR-SWIR-MidIR / Predictors of readily-dispersible clay in water on tropical soils via turbidimetry and VisNIR-SWIR-MidIR

Isabela Mello da Silva, 12 January 2017
Brazil stands out in intensive agricultural production, yet conservationist practices are often omitted, leading to poor soil structure, which in turn favours clay dispersion when the soil is placed in a liquid medium. Readily-dispersible clay in water (APDA) is the fraction that disperses under slight agitation; it can be quantified with a turbidimeter, which resolves turbidity in solution over a very wide range (10000:1). Spectroscopy has been increasingly used as an effective tool for quantifying soil properties. The aim of this study was therefore to compare the determination of APDA by turbidimetry and by spectroscopy in the VisNIR-SWIR-MidIR spectral regions (400-25000 nm). Sixty-eight samples with wide textural, chemical and mineralogical variability were obtained from the B horizon of Latosols and the subsurface horizon of Quartzarenic Neosols in the states of Goiás, Mato Grosso do Sul and São Paulo, and were characterized chemically and physically. APDA was determined by turbidimetry and related to the total clay (AT) quantified spectroscopically in the VisNIR-SWIR (350-2500 nm) and MidIR (4000-400 cm-1) ranges. From the APDA determinations, models were built by multiple regression analysis and adjusted according to the analysis of variance, while AT was quantified by partial least squares regression (PLSR) with cross-validation; principal component analysis (PCA) and PLSR were applied, and the models were chosen on the basis of the coefficient of determination (R2), the root mean square error (RMSE) and the ratio of performance to interquartile range (RPIQ). The variables AT, Ca, K, Mg, C and Al were negatively related to APDA, while CTC, P and m were positively related to it. Log-transformation brought the variables closer to normality. Of the 9 independent variables tested, 5 were significantly correlated with APDA at the 95% confidence level (log(AT), log(Ca), log(CTC), log(Al) and log(P)); the model obtained by excluding the non-significant variables was equivalent to the one generated by stepwise selection. There was a negative correlation between APDA and AT (R2 = 0.27). The spectral curves showed large differences in reflectance amplitude and curve features, reflecting the physical, chemical and mineralogical differences among the soils studied. For AT, the model performed best in the MidIR spectral range (R2 ≥ 0.6; RMSE ≥ 103; RPIQ ≥ 249). For APDA, the validated models varied little (R2 ≥ 0.5; RMSE ≥ 3.5; RPIQ ≥ 10.2), with the MidIR region again performing best. The turbidimeter proved an effective method and is therefore recommended for future work. Total clay content was the variable that best explained APDA, and reflectance spectroscopy estimated APDA as well as the turbidimeter did.
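As a rough illustration of the PLSR-with-cross-validation workflow described above, the following sketch calibrates total clay against spectra and reports R2, RMSE and RPIQ. Everything in it is synthetic: the spectra, the clay values, the band count and the number of latent variables are assumptions, not the thesis data.

```python
# Minimal PLSR calibration sketch with 10-fold cross-validation.
# Synthetic spectra and clay values; component count is a guess.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_bands = 68, 500                  # 68 soil samples, resampled spectra
clay = 100 + 500 * rng.random(n_samples)      # "total clay" in g/kg (invented)
loading = np.sin(np.linspace(0, 3, n_bands))  # fake absorption pattern
X = np.outer(clay, loading) + rng.normal(scale=20, size=(n_samples, n_bands))
y = clay

pls = PLSRegression(n_components=8)           # latent-variable count is assumed
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

ss_res = np.sum((y - y_cv) ** 2)
r2 = 1 - ss_res / np.sum((y - y.mean()) ** 2)
rmse = np.sqrt(ss_res / n_samples)
q1, q3 = np.percentile(y, [25, 75])
rpiq = (q3 - q1) / rmse                       # ratio of performance to IQ range
print(f"R2={r2:.2f}  RMSE={rmse:.1f} g/kg  RPIQ={rpiq:.1f}")
```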
452

Quantification of Tripeptidyl-peptidase II: Optimisation and evaluation of 3 assays

Gyllenfjärd, Sabina, January 2010
Tripeptidyl-peptidase II (TPPII) is present in most eukaryotic cells. It cuts tripeptides from the N-terminus of peptides and is especially important for degrading peptides longer than 15 amino acids. TPPII also tailors long peptides into suitable substrates for the enzymes which transport and produce the peptides that MHC I presents. Increased levels of TPPII have been found in certain cancer cells, so it is of interest to determine whether TPPII could be used as a tumour marker. The aim of this study was to optimise and evaluate 3 different methods for quantifying TPPII. Western blot, enzyme-linked immunosorbent assay (ELISA) and fluorophore-linked immunosorbent assay (FLISA) protocols were optimised with respect to incubation times and antibody dilutions. Sensitivity and linearity were the most important parameters when evaluating the results. The coefficient of determination of western blot was R2 = 0.98-1 within the range of 1.29-250 ng TPPII/well, and ELISA had a coefficient of determination of R2 = 0.96 within the range of 0.03-250 ng TPPII/well. At present, western blot is the only one of these methods to yield reliable results with impure samples, but ELISA is superior in sensitivity and throughput; further optimisation of ELISA is therefore worth pursuing.
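A minimal sketch of the linearity evaluation reported above: fit a standard curve of assay signal against known TPPII amounts and compute R2. The dilution series, signal values and the log-log model are invented for illustration; they are not the study's data.

```python
# Fit a log-log standard curve and report R2 over the tested range.
import numpy as np

tppii_ng = np.array([0.03, 0.24, 1.95, 15.6, 62.5, 250.0])  # ng TPPII/well
signal = np.array([0.05, 0.11, 0.52, 3.9, 15.2, 60.1])      # e.g. absorbance

# ELISA-type standard curves are often close to linear on a log-log scale.
slope, intercept = np.polyfit(np.log10(tppii_ng), np.log10(signal), 1)
pred = slope * np.log10(tppii_ng) + intercept
resid = np.log10(signal) - pred
ss_tot = np.sum((np.log10(signal) - np.log10(signal).mean()) ** 2)
r2 = 1 - np.sum(resid ** 2) / ss_tot
print(f"R2 = {r2:.3f} over {tppii_ng.min()}-{tppii_ng.max()} ng/well")
```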
453

Conditional quantile estimation through optimal quantization / Estimation de quantiles conditionnels basée sur la quantification optimale

Charlier, Isabelle, 17 December 2015
One of the most common applications of nonparametric techniques is the estimation of a regression function (i.e. a conditional mean). However, it is often of interest to model conditional quantiles, particularly when the conditional mean is not representative of the impact of the covariates on the dependent variable. Moreover, the quantile regression function provides a much more comprehensive picture of the conditional distribution of the dependent variable than the conditional mean function. "Quantization" originated in signal and information theory in the fifties, where it was used to discretize a continuous signal into a finite number of quantizers. In mathematics, the problem of optimal quantization is to find the best approximation of the continuous distribution of a random variable by a discrete law with a fixed number of charged points. First used for one-dimensional signals, the method was later extended to the multidimensional case and is now widely used as a tool for solving problems arising in numerical probability. The goal of this thesis is to study how optimal quantization in Lp-norm can be applied to conditional quantile estimation. Various cases are studied: one-dimensional or multidimensional covariate, univariate or multivariate dependent variable. The convergence of the proposed estimators is studied from a theoretical point of view. The estimators were implemented and an R package, called QuantifQuantile, was developed; their numerical behaviour is evaluated through simulation studies and real-data applications.
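The core idea of the estimator can be sketched in a few lines: quantize the covariate into N cells, then estimate the conditional quantile at x by the empirical quantile of the responses whose covariate falls in the same cell. The sketch below uses k-means as a stand-in for true optimal L2 quantization and a fixed N; the actual method (implemented in the R package QuantifQuantile) builds optimal quantization grids and selects N in a data-driven way.

```python
# Quantization-based conditional quantile estimation, illustrative version.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(-2, 2, n)
Y = np.sin(2 * X) + (0.5 + 0.3 * np.abs(X)) * rng.normal(size=n)  # heteroskedastic

N = 15                                           # number of quantizers (assumed)
centers, labels = kmeans2(X.reshape(-1, 1), N, seed=1, minit='++')

def cond_quantile(x, alpha):
    """Estimate the alpha-quantile of Y given X = x via the nearest cell."""
    cell = np.argmin(np.abs(centers.ravel() - x))
    return np.quantile(Y[labels == cell], alpha)

print(cond_quantile(0.5, 0.1), cond_quantile(0.5, 0.9))
```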
454

Second-order prediction and residue vector quantization for video compression / Prédiction de second ordre et résidu par quantification vectorielle pour la compression vidéo

Huang, Bihong, 08 July 2015
Video compression has become a mandatory step in a wide range of digital video applications. Since the development of the block-based hybrid coding approach in the H.261/MPEG-2 standard, a new coding standard has been ratified roughly every ten years, each achieving approximately 50% bit-rate reduction over its predecessor without sacrificing picture quality. However, given the ever-increasing bit rate required to transmit HD and beyond-HD formats within a limited bandwidth, there is a constant need for video compression technologies with higher coding efficiency than the current HEVC standard. In this thesis, we propose three approaches to improve the intra coding efficiency of HEVC by exploiting the correlation of the intra prediction residue. A first approach, based on the use of previously decoded residues, shows that even though gains are theoretically possible, the extra signaling cost can negate the benefit of residual prediction. A second approach, based on Mode Dependent Vector Quantization (MDVQ) applied before the conventional transform and scalar quantization steps, provides significant coding gains; we show that this approach is realistic because the dictionaries are independent of QP and of reasonable size. Finally, a third approach gradually adapts the dictionaries to the intra prediction residue. Adaptivity brings a substantial gain, especially for atypical video content, without increasing decoding complexity. The result is a complexity-gain trade-off compatible with a submission for standardization.
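To make the MDVQ idea concrete, here is a hedged sketch: a codebook of 4x4 residual blocks is learned with k-means, each block is coded as its nearest codeword, and the remainder is handed to the usual transform and scalar quantization. Block size, codebook size and training data are illustrative assumptions, not the HEVC-integrated scheme of the thesis.

```python
# Toy residue vector quantization: codebook learning plus nearest-codeword coding.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(0)
blocks = rng.normal(scale=4, size=(5000, 16))     # fake 4x4 intra residues

codebook_size = 64                                # small dictionary, as argued above
codebook, _ = kmeans2(blocks, codebook_size, seed=0, minit='++')

def mdvq_encode(block):
    """Return (codeword index, remaining residual) for one 4x4 block."""
    idx, _ = vq(block.reshape(1, -1), codebook)   # nearest codeword
    remainder = block.reshape(-1) - codebook[idx[0]]
    return idx[0], remainder.reshape(4, 4)        # remainder goes to transform/Q

idx, rem = mdvq_encode(rng.normal(scale=4, size=(4, 4)))
print(idx, np.abs(rem).mean())
```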
455

EPIGENETIC MODIFICATIONS TO CYTOSINE AND ALZHEIMER’S DISEASE: A QUANTITATIVE ANALYSIS OF POST-MORTEM TISSUE

Ellison, Elizabeth M., 01 January 2017
Alzheimer’s disease (AD) is the most common form of dementia and the sixth leading cause of death in the United States, with no therapeutic option to slow or halt disease progression. Development of two characteristic pathologic lesions in the brain, amyloid beta plaques and neurofibrillary tangles, is associated with synaptic dysfunction and neuron loss leading to memory impairment and cognitive decline. Although mutations in genes involved in amyloid beta processing are linked to increased plaque formation in the inherited familial form of AD, the more common idiopathic form, termed sporadic AD, develops in the absence of gene mutations. In contrast, alterations in gene expression and transcription occur in plaque- and tangle-susceptible brain regions of sporadic AD subjects, even in the earliest stages of pathologic burden, and may give insight into the pathogenesis of AD. Epigenetic modifications to cytosine are known to alter transcriptional states and gene expression in embryonic development as well as in cancer studies. With the discovery of enzymatically oxidized derivatives of 5-methylcytosine (5-mC), the most common epigenetic cytosine modification, a probable demethylation pathway has been suggested to alter transcriptional states of DNA. The most abundant 5-mC derivative, 5-hydroxymethylcytosine (5-hmC), while expressed at low concentrations throughout the body, is expressed at high concentrations in brain cells. To determine the role cytosine modifications play in AD, this study was directed at the quantification of epigenetic modifications to cytosine in several stages of AD progression using global, genome-wide, and gene-specific studies. To determine global levels of each cytosine derivative in brain regions relevant to AD progression, a gas chromatography/mass spectrometry quantitative analysis was utilized to analyze cytosine, 5-mC, and 5-hmC in tissue specimens from multiple brain regions of AD subjects, including early and late stages of AD progression. To determine the genome-wide impact of 5-hmC on biologically relevant pathways in AD, a single-base resolution sequencing analysis was used to map hydroxymethylation throughout the hippocampus of late stage AD subjects. Finally, to determine gene-specific levels of cytosine, 5-mC, and 5-hmC, a quantitative polymerase chain reaction (qPCR) protocol was paired with specific restriction enzyme digestion to analyze target sequences within exons of genes related to sporadic AD. Results from these studies show epigenetic modifications to cytosine are altered on the global, genome-wide, and gene-specific levels in AD subjects compared to normal aging, particularly in early stages of AD progression, suggesting alterations to the epigenetic landscape may play a role in the dysregulation of transcription and the pathogenesis of AD.
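As a sketch of the global quantification step, the snippet below expresses each cytosine form as a percentage of total cytosine species, the usual way GC/MS results obtained with isotope-labelled internal standards are reported. The amounts are hypothetical placeholders, not measured values.

```python
# Report each cytosine species as a percentage of total cytosine.
# The pmol values stand in for amounts back-calculated from
# analyte-to-internal-standard peak-area ratios.
amounts_pmol = {"C": 980.0, "5-mC": 42.0, "5-hmC": 6.5}  # hypothetical

total = sum(amounts_pmol.values())
for species, pmol in amounts_pmol.items():
    print(f"{species}: {100 * pmol / total:.2f}% of total cytosine")
```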
456

Développement de nouvelles stratégies analytiques pour la caractérisation moléculaire des états d'oxydation à l'échelle protéomique / Development of new analytical strategies for the molecular characterization of oxidation states at a proteomic scale

Shakir, Shakir Mahmood Shakir, 17 December 2015
The analysis of protein post-translational modifications (PTMs) is probably the most important contribution that proteomics can make to the life sciences. Although separative techniques and mass spectrometry have improved tremendously, the quantitative analysis of PTMs remains an analytical challenge: the limited number of PTMs that can be robustly computed in a single study, the necessity of enrichment steps and the reliance of quantitative estimates on a single peptide are just the most evident difficulties. Furthermore, PTM quantification should always be associated with protein expression levels to avoid false positives. We have developed new analysis methods that quantify PTM changes while taking protein expression levels into account. These strategies were applied to the study of cysteine oxidation in the contexts of oxidative stress and redox homeostasis. The first strategy, OcSILAC, is a revision of the biotin switch adapted to Stable Isotope Labelling by Amino acids in Cell culture (SILAC); it was applied to a thioredoxin-reductase-silenced yeast model. OcSILAC brings important technical improvements and data-analysis innovations, and the results obtained agree with the extensive literature on the yeast redox system. OcSILAC was then adapted to a subcellular fractionation kit to extend coverage of the cysteine redoxome. A second strategy, called OxiTMT, was developed based on cysteine-specific tandem mass tags; it was used to study E. coli cells exposed to oxidative treatment. OxiTMT offers a wide range of applications that can be extended to the study of tissues and biopsies.
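The normalization principle stated above — PTM ratios must be corrected by protein expression ratios to avoid false positives — reduces to a one-line calculation. The sketch below is a generic illustration with hypothetical numbers, not the OcSILAC pipeline itself.

```python
# Correct a cysteine-oxidation heavy/light ratio by the protein-level ratio,
# so a protein that simply doubled in abundance is not called "more oxidized".
def corrected_oxidation_ratio(ox_heavy, ox_light, prot_heavy, prot_light):
    """PTM-level heavy/light ratio normalized by the protein-level ratio."""
    ptm_ratio = ox_heavy / ox_light          # oxidized-Cys peptide ratio
    expr_ratio = prot_heavy / prot_light     # whole-protein expression ratio
    return ptm_ratio / expr_ratio

# A 4x PTM ratio on a protein that is itself 2x more abundant -> 2x oxidation
print(corrected_oxidation_ratio(4.0, 1.0, 2.0, 1.0))
```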
457

Approches multivariées innovantes pour le traitement des spectres d'émission de plasmas produits par laser. Application à l'analyse chimique en ligne par LIBS en milieu nucléaire / Multivariate innovative approaches to the treatment of the emission of LIBS plasmas. Application to chemical online analysis in a nuclear environment

El Rakwe, Maria, 26 September 2016
Online and in situ analysis is now a strategic line of development for analytical chemistry. This is especially true in the nuclear field, where the safety constraints related to the radioactivity of samples, and the need to minimize analytical waste, argue for remote measurement techniques without sampling or sample preparation. Laser-induced breakdown spectroscopy (LIBS), a technique for the elemental analysis of materials based on laser ablation and optical emission spectroscopy, has these qualities and is therefore a technique of choice for online analysis. However, the measurement is difficult to control for several reasons. First, LIBS is multiparametric, and the effect of the experimental parameters on analytical performance is not always clearly established. Second, the physical phenomena giving rise to the LIBS signal are nonlinear, coupled and transient. Finally, an online analysis system should be as robust as possible to uncontrolled variations in the measurement conditions. The objective of this thesis is to improve the control and performance of quantitative LIBS analysis using multivariate methods capable of handling the multidimensionality, nonlinearity and coupling of parameters and data. The work is divided into two parts. First, a central composite design was carried out to relate the experimental parameters of laser ablation (pulse energy and beam focusing parameters) and signal detection (delay after the laser shot) to the physical characteristics of the plasma (ablated mass, temperature) and to the analytical performance (signal intensity and repeatability). The resulting parameter optimization is interpreted as the best compromise, for quantitative analysis, between laser ablation efficiency and plasma heating. Second, we developed a multivariate methodology based on the MCR-ALS, ICA and PLS techniques to quantify certain elements in different metallic matrices by exploiting, in addition to the usual spectral dimension, the temporal dimension of the LIBS signal; the latter, though essential, is generally neglected in the literature. We discuss the value of this approach compared with the usual univariate and multivariate quantification methods, and its contribution to diagnosing, understanding and possibly compensating for the matrix effects observed in LIBS.
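The first step, the central composite design, is easy to illustrate. The sketch below generates the coded runs of a rotatable CCD for the three parameters named in the abstract; the factor names are taken from the text, while the axial distance and the number of center replicates are standard textbook choices, not the thesis settings.

```python
# Build the coded runs of a rotatable central composite design for 3 factors.
import itertools
import numpy as np

factors = ["pulse_energy", "beam_focus", "detection_delay"]
alpha = 8 ** 0.25                    # rotatable CCD for 3 factors: (2^3)^(1/4)

factorial = list(itertools.product([-1.0, 1.0], repeat=3))        # 8 corners
axial = [tuple(a * np.eye(3)[i]) for i in range(3)
         for a in (-alpha, alpha)]                                # 6 star points
center = [(0.0, 0.0, 0.0)] * 4                                    # replicates

design = np.array(factorial + axial + center)
print(design.shape[0], "coded runs for factors:", factors)        # 18 runs
```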
458

The Role of Numbers in Environmental Policy: The Economics of Ecosystems and Biodiversity (TEEB)

Smith Spash, Tone, 20 September 2017
This dissertation explores the central role of numbers in environmental policy and discourse, with a particular focus on the "economic turn" in nature conservation. The aim has been to understand and explain why, despite the parallel increase in environmental problems and in quantitative information about the environment, the faith in and focus on numbers as a way to do something about the problems seem as strong as ever. The dissertation draws on discourse analysis and on insights from historical and sociological studies of numbers and quantification, combining them within a critical realist methodology. The main empirical case analysed is the UN-backed study of "The Economics of Ecosystems and Biodiversity" (TEEB), supplemented by an historical review of the development of environmental statistics since the 1970s and a review of developments within conservation science with respect to the role of numbers. The historical review demonstrates a change from biophysical numbers to new measures of equivalence (e.g. CO2-equivalents), paralleling the move from central planning and administrative rationality to neoliberalism and market rationality. While monetary valuation has been much criticised in the environmental politics literature for leading to the commercialisation of nature, this study shows a more nuanced picture: the role of monetary valuation has rather been to "bridge" the transition from administrative rationality to market rationality. It is the newly developed measures of equivalence that allow new markets to be set up for financial instruments and compensation schemes for environmental damage. In the case of TEEB, monetary valuation and its related arguments of efficiency, rational decision-making and so on are first and foremost rhetorical, since the main recommendations (economic incentives and markets) are taken for granted. The centrality of numbers in current environmental policy discourse is explained by a combination of structural conditions, the search for business opportunities and actors' perceptions of money as the only possible language of communication. Some structural conditions are of a general kind specific to modernity, while others are specific to the neoliberal era. A main problem with the focus on numbers in environmental policy is that it allows the underlying drivers of the problems to go unaddressed, and hence strengthens the "actualist" perception of reality. The study concludes that numbers have potential as evidence of environmental problems. However, change does not happen through the numbers themselves (contra mainstream economics), but must win political support. Further research is needed to better understand how numerical information can be combined with approaches that move beyond actualism, instrumentalism and relativism.
459

Targeted proteomics methods for protein quantification of human cells, tissues and blood

Edfors, Fredrik, January 2016
The common concept in this thesis was to adapt and develop quantitative mass spectrometric assays, focusing on reagents originating from the Human Protein Atlas project, to quantify proteins in human cell lines, tissues and blood. The work is based around stable isotope labeled protein fragment standards that each represent a small part of a human protein-coding gene. This thesis shows how they can be used in various formats to describe the protein landscape and to standardize mass spectrometry experiments. The first part of the thesis describes the use of antibodies in combination with heavy stable isotope labeled antigens to establish a semi-automated protocol for protein quantification of complex samples with fast analysis time (Paper I). Paper II introduces a semi-automated cloning protocol that can be used to selectively clone variants of recombinant proteins, and highlights the automation process that is necessary for large-scale proteomics endeavors; this paper also describes the technology used to clone all protein standards used in the included papers. The second part of the thesis includes papers that focus on the generation and application of antibody-free targeted mass spectrometry methods. Here, absolute protein copy numbers were determined across human cell lines and tissues (Paper III) and the protein data were correlated against transcriptomics data. Proteins were quantified to validate antibodies in a novel method that evaluates antibodies based on differential protein expression across multiple cell lines (Paper IV). Finally, a large-scale study was performed to generate targeted proteomics assays based on protein fragments (Paper V); assay coordinates were mapped for more than 10,000 human protein-coding genes, and a subset of peptides was thereafter used to determine absolute levels of 49 proteins in human serum. In conclusion, this thesis describes the development of methods for protein quantification by targeted mass spectrometry, with recombinant protein fragment standards as the common denominator.
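The quantification principle running through the thesis — a known amount of heavy-labelled standard is spiked in, and the light/heavy intensity ratio gives the endogenous amount — can be sketched as follows; intensities and spike level are hypothetical.

```python
# Absolute quantification from one light/heavy peptide pair.
def absolute_amount(light_intensity, heavy_intensity, spiked_fmol):
    """Endogenous amount (fmol) from the light/heavy ratio of one peptide."""
    return (light_intensity / heavy_intensity) * spiked_fmol

# e.g. light peak 3e6, heavy peak 1e6, 50 fmol heavy standard spiked in
print(absolute_amount(3e6, 1e6, 50.0))   # -> 150 fmol endogenous
```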
460

Correlação entre os métodos tradicionais de quantificação de fases e o método que utiliza o sistema de análise de imagens em aços ao carbono comum / Correlation between traditional phase-quantification methods and the digitised image analysis method in plain carbon steels

Euclides Castorino da Silva, 03 October 1997
The evaluation of metallographic parameters in metallic materials is carried out by several methods and used extensively in metallographic practice, since these parameters contribute significantly to the mechanical strength of carbon steels. Which method is best, however, has long been debated. With the rapid advance of computing, semi-automatic and fully automatic systems for evaluating these parameters have become available, offering practitioners a new option. In this work, nine samples of annealed plain carbon steel plate, with carbon content varying from 0.05% to 0.56% and exhibiting ferritic or ferritic-pearlitic structures, were selected to study the correlation between the most widely used traditional methods and a method based on digitised image analysis, which exploits the difference in grey levels between the phases present. The results, among the first in this area, explore the particularities of each method and show that the image analysis method, compared with the traditional ones, yields the parameters quickly and precisely, with very good reproducibility.
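A minimal sketch of the grey-level approach described above: threshold a micrograph so the dark and light phases separate, then report area fractions. Otsu's method and the synthetic image are stand-ins; the abstract does not specify the thresholding procedure the original system used.

```python
# Phase-fraction estimation by grey-level thresholding of a micrograph.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
# fake 8-bit micrograph: ~20% dark second phase on a light matrix
img = np.where(rng.random((512, 512)) < 0.2,
               rng.normal(60, 10, (512, 512)),     # dark phase (e.g. pearlite)
               rng.normal(180, 10, (512, 512)))    # light phase (e.g. ferrite)
img = img.astype(np.uint8)

t = threshold_otsu(img)
dark_fraction = np.mean(img < t)                   # area fraction of dark phase
print(f"threshold={t}, dark phase ~ {100 * dark_fraction:.1f}% of area")
```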
