801

Exploring Ensemble Models and GAN-Based Approaches for Automated Detection of Machine-Generated Text

Surbhi Sharma (18437877) 29 April 2024 (has links)
<p dir="ltr">Automated detection of machine-generated text has become increasingly crucial in various fields such as cybersecurity, journalism, and content moderation due to the proliferation of generated content, including fake news, spam, and bot-generated comments. Traditional methods for detecting such content often rely on rule-based systems or supervised learning approaches, which may struggle to adapt to evolving generation techniques and sophisticated manipulations. In this thesis, we explore the use of ensemble models and Generative Adversarial Networks (GANs) for the automated detection of machine-generated text. </p><p dir="ltr">Ensemble models combine the strengths of different approaches, such as utilizing both rule-based systems and machine learning algorithms, to enhance detection accuracy and robustness. We investigate the integration of linguistic features, syntactic patterns, and semantic cues into machine learning pipelines, leveraging the power of Natural Language Processing (NLP) techniques. By combining multiple modalities of information, Ensemble models can effectively capture the subtle characteristics and nuances inherent in machine-generated text, improving detection performance. </p><p dir="ltr">In my latest experiments, I examined the performance of a Random Forest classifier trained on TF-IDF representations in combination with RoBERTa embeddings to calculate probabilities for machine-generated text detection. Test1 results showed promising accuracy rates, indicating the effectiveness of combining TF-IDF with RoBERTa probabilities. Test2 further validated these findings, demonstrating improved detection performance compared to standalone approaches.<br></p><p dir="ltr">These results suggest that leveraging Random Forest TF-IDF representation with RoBERTa embeddings to calculate probabilities can enhance the detection accuracy of machine-generated text.<br></p><p dir="ltr">Furthermore, we delve into the application of GAN-RoBERTa, a class of deep learning models comprising a generator and a discriminator trained adversarially, for generating and detecting machine-generated text. GANs have demonstrated remarkable capabilities in generating realistic text, making them a potential tool for adversaries to produce deceptive content. However, this same adversarial nature can be harnessed for detection purposes,<br>where the discriminator is trained to distinguish between genuine and machine-generated text.<br></p><p dir="ltr">Overall, our findings suggest that the use of Ensemble models and GAN-RoBERTa architectures holds significant promise for the automated detection of machine-generated text. Through a combination of diverse approaches and adversarial training techniques, we have demonstrated improved detection accuracy and robustness, thereby addressing the challenges posed by the proliferation of generated content across various domains. Further research and refinement of these approaches will be essential to stay ahead of evolving generation techniques and ensure the integrity and trustworthiness of textual content in the digital landscape.</p>
802

Metagenomic Data Analysis Using Extremely Randomized Tree Algorithm

Gupta, Suraj 26 June 2018 (has links)
Many antibiotic resistance genes (ARGs) conferring resistance to a broad range of antibiotics have been detected in aquatic environments such as untreated and treated wastewater, rivers, and surface water. ARG proliferation in the aquatic environment can depend on various factors, such as geospatial variation, the type of water body, and the type of wastewater (untreated or treated) discharged into it. Likewise, the strong interconnectivity of aquatic systems may accelerate the spread of ARGs through them. A comparative and holistic study of different aquatic environments is therefore required to properly understand the problem of antibiotic resistance. Many studies approach this issue using molecular techniques such as metagenomic sequencing and metagenomic data analysis. Such analyses compare the broad spectrum of ARGs in water and wastewater samples, but the comparisons are typically limited to similarity/dissimilarity analyses, which do not identify the discriminatory ARGs (the ARGs driving the similarity or dissimilarity measures). Consequently, the drivers of the dissimilarities among samples, and hence of antibiotic resistance proliferation, may not be clearly understood. In this study, a methodology based on the Extremely Randomized Trees (ET) algorithm was formulated and demonstrated to capture such ARG variations and identify discriminatory ARGs among environmentally derived metagenomes. Data were grouped by geographic location (to understand the global spread of ARGs), untreated vs. treated wastewater (to assess the effectiveness of WWTPs in removing ARGs), and aquatic habitat (to understand the impact and spread within aquatic habitats). Certain ARGs were specific to wastewater samples from particular locations, suggesting that site-specific factors help shape ARG profiles. Comparing untreated and treated wastewater samples from different WWTPs revealed that biological treatment has a definite impact on the ARG profile: several ARGs were removed by treatment, while some increased in relative abundance irrespective of location and treatment-plant-specific variables. Comparing different aquatic environments, the algorithm identified ARGs specific to certain environments, including ARGs specific to hospital discharges. The proposed method was efficient at identifying discriminatory ARGs that classify samples according to their groups, and it was also effective at capturing low-level variations that are generally overshadowed by highly abundant genes. These results suggest that the proposed method enables comprehensive analyses and can provide valuable information toward a better understanding of antibiotic resistance. / MS
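A minimal sketch of how Extremely Randomized Trees can surface discriminatory features between sample groups, in the spirit of the methodology above; the data shapes, group labels, and the planted signal are illustrative assumptions.

```python
# Sketch: rank ARGs by how strongly they discriminate two sample groups,
# using Extra-Trees feature importances (illustrative synthetic data).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
n_samples, n_args = 60, 200               # samples x ARG relative abundances
X = rng.random((n_samples, n_args))
y = rng.integers(0, 2, n_samples)         # e.g. 0 = untreated, 1 = treated
X[y == 1, 5] += 1.0                       # plant a discriminatory ARG at index 5

et = ExtraTreesClassifier(n_estimators=500, random_state=0)
et.fit(X, y)

# ARGs with the highest importances are the candidate discriminatory genes;
# lowly abundant but group-specific ARGs can rank highly here even when
# similarity/dissimilarity analyses would overlook them.
top = np.argsort(et.feature_importances_)[::-1][:10]
print("Top discriminatory ARG indices:", top)
```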
803

Total Organic Carbon and Clay Estimation in Shale Reservoirs Using Automatic Machine Learning

Hu, Yue 21 September 2021 (has links)
High total organic carbon (TOC) and low clay content are two criteria for identifying “sweet spots” in shale gas plays. Recently, machine learning has proven effective for estimating TOC and clay from well logs. The remaining questions are which algorithm to choose in the first place and whether already-built models can be improved. Automatic machine learning (AutoML) is a promising tool for these practical questions, as it trains multiple models and compares them automatically. Two wells with conventional well logs and elemental capture spectroscopy were selected from a shale gas play to test AutoML's ability to estimate TOC and clay. TOC and clay content were extracted from Schlumberger's ELAN interpretation and calibrated to cores. Generalizability was demonstrated on a blind test well, with mean absolute test errors of 0.23% for TOC and 3.77% for clay. The final models were generated from 829 data points with a 75:25 train-test split; the mean absolute test errors were 0.26% for TOC and 2.68% for clay, which is very low given TOC ranging from 0-6% and clay from 35-65%. The results demonstrate AutoML's success and efficiency in the estimation. The trained models were interpreted to understand the effects of the variables on the predictions. 235 wells were selected through data quality checking and fed into the models to create TOC and clay distribution maps, which provide guidance on where to drill new wells for higher shale gas production. / Master of Science / Locating “sweet spots”, areas where shale gas production is much higher than average, is critical to a shale reservoir's successful commercial exploitation. Among the properties of shale, total organic carbon (TOC) and clay content are often selected to evaluate gas production potential. Multiple machine learning models have been tested for TOC and clay estimation in recent studies and proven successful; the questions are which algorithm to choose for a specific task and whether already-built models can be improved. Automatic machine learning (AutoML) has the potential to solve these problems by automatically training multiple models and comparing them to achieve the best performance. In our study, AutoML was tested for estimating TOC and clay using data from two gas wells in a shale gas field. First, one well was treated as a blind test well and the other used for training to examine generalizability; the mean absolute errors of 0.23% for TOC and 3.77% for clay content indicate reliable generalization. Final models were built using 829 data points split into train-test sets at a 75:25 ratio, giving mean absolute test errors of 0.26% for TOC and 2.68% for clay, which is very low given TOC ranging from 0-6% and clay from 35-65%. Moreover, AutoML requires very limited human effort and liberates researchers and engineers from the tedious parameter tuning that is a critical part of machine learning. The trained models were interpreted to understand the mechanisms behind them. TOC and clay distribution maps were created by selecting 235 gas wells that passed data quality checking, feeding them into the trained models, and interpolating. The maps provide guidance on where to drill new wells for higher shale gas production.
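A minimal sketch of the AutoML workflow described above, using H2O AutoML as one possible framework; the thesis does not name its tool, so the framework, file path, log-curve names, and settings are all assumptions.

```python
# Sketch: AutoML regression of TOC from well logs. The framework (H2O),
# column names, and file path are illustrative assumptions.
import h2o
from h2o.automl import H2OAutoML

h2o.init()
logs = h2o.import_file("well_logs.csv")                 # hypothetical logging data
train, test = logs.split_frame(ratios=[0.75], seed=42)  # 75:25 split, as in the study

features = ["GR", "RHOB", "NPHI", "DT", "RT"]           # assumed conventional log curves
aml = H2OAutoML(max_models=20, seed=42, sort_metric="MAE")
aml.train(x=features, y="TOC", training_frame=train)

print(aml.leaderboard.head())                           # candidate models ranked automatically
print("Test MAE:", aml.leader.model_performance(test).mae())
```

The appeal of AutoML here is exactly what the abstract notes: algorithm selection and parameter tuning are folded into the automated model comparison.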
804

Deep Learning for Spatiotemporal Nowcasting

Franch, Gabriele 08 March 2021 (has links)
Nowcasting, short-term forecasting using current observations, is a key challenge that human activities face on a daily basis. We rely heavily on short-term meteorological predictions in domains such as aviation, agriculture, mobility, and energy production. One of the most important and challenging tasks in meteorology is the nowcasting of extreme events, whose anticipation is needed to mitigate risk in terms of social and economic costs and human safety. The goal of this thesis is to contribute new machine learning methods that improve the spatio-temporal precision of nowcasting of extreme precipitation events. This work builds on recent advances in deep learning for nowcasting, adding methods that improve nowcasting through ensembles and that are trained on novel, original data resources. A new curated multi-year radar scan dataset (TAASRAD19) is introduced, containing more than 350,000 labelled precipitation records over 10 years, to provide a baseline benchmark and to foster reproducibility of machine learning modeling. A TrajGRU model is applied to TAASRAD19 and implemented in an operational prototype. The thesis also introduces a novel method for fast analog search based on manifold learning: the tool searches the entire dataset history in less than 5 seconds and demonstrates the feasibility of predictive ensembles. In the final part of the thesis, a new deep learning architecture, ConvSG, based on stacked generalization is presented, introducing novel concepts for deep learning in precipitation nowcasting: ConvSG is specifically designed to improve predictions of extreme precipitation regimes over published methods, and shows a 117% skill improvement on extreme rain regimes over a single member. Moreover, ConvSG shows skill superior or equal to Lagrangian extrapolation models for all rain rates, achieving a 49% average improvement in predictive skill over extrapolation on the higher precipitation regimes.
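A minimal sketch of stacked generalization, the idea behind ConvSG: a meta-learner is trained on the base members' out-of-fold predictions rather than on the raw inputs. Plain scikit-learn regressors stand in for the deep nowcasting members, which is an illustrative simplification.

```python
# Sketch: stacked generalization (stacking). Base learners' out-of-fold
# predictions feed a meta-learner; toy regressors stand in for deep members.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 8))                       # toy predictors
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.1, 500)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("knn", KNeighborsRegressor())],
    final_estimator=Ridge(),                   # meta-learner over members' outputs
    cv=5)                                      # out-of-fold predictions avoid leakage
stack.fit(X, y)
print("Stacked R^2:", round(stack.score(X, y), 3))
```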
805

La pratique et l'idéologie coopérative à Québec 1935-1955

Ouellet, Line 25 April 2018 (has links)
This research as a whole attempts to answer the following question: what are the interests and stakes of cooperative practice, and of the ideology underlying it, for the various social groups that adhere to it? A first sketch of an explanation of this phenomenon is provided by relating cooperative practice to its social base, i.e., the social characteristics of the different groups involved in this practice. The theoretical framework of this study rests on operational definitions of the concepts of ideology and social class. Ideologies are defined here not only as systems of thought but also as a set of practices necessary to the functioning of society. The cooperative discourse is thus not innocent; it expresses, as subtext, interests and stakes, hence the importance of linking these discourses to those who hold them. The concept of social class allows us to situate the cooperators within the overall social dynamic, since each class is determined by the place it occupies in the relations of production and reproduction of a social formation. A study of cooperative practice in Quebec City brings out these characteristics. Between 1935 and 1955, a greater number of cooperatives were founded; these, often precarious, belonged to the tertiary sector; the majority of founding cooperators belonged either to the group of subordinate employees or to the petite bourgeoisie; and in 95% of cases it was the founding cooperators from the petite bourgeoisie who contributed to the journal Ensemble!, the official organ of the Conseil supérieur de la coopération. Finally, we demonstrate the following hypothesis: at a time when the State did not yet figure as a possible path of development, and when the ever-greater concentration of capital threatened the existence of the family enterprise, cooperation offered solutions for creating an economy "on our own scale". This is why we believe that the petite bourgeoisie, a group particularly threatened by the economic transformations then under way, responded most broadly to the cooperative discourse of this period. / Québec Université Laval, Bibliothèque 2013
806

Real talk

Wilcher, Marcus 21 July 2014 (has links)
This dissertation is intended as a supportive document for the five-part suite for ten-piece jazz ensemble entitled Real Talk. It is divided into six chapters, four of which are analytical and cover the following topics: Form, Melody, Harmony, and Other Compositional Techniques. Subcategories are used within these chapters to draw attention to specific compositional components relevant to the construction of the piece; illustrative tables and examples have been provided to assist in describing these components. The ultimate purpose of this document is to describe in detail my technical approach to the composition. / text
807

Autour les relations entre SLE, CLE, champ libre Gaussien, et les conséquences / On the relations between SLE, CLE, GFF and the consequences

Wu, Hao 26 June 2013 (has links)
This thesis focuses on relations between SLE processes, CLE ensembles, and the Gaussian free field (GFF). In Chapter 2, we give a construction of SLE(k,r) processes from CLE(k) loops and chordal restriction samples. Sheffield and Werner proved that CLE(k) can be constructed from symmetric SLE(k,k-6) exploration processes; in Chapter 3 we prove that the loop configuration constructed from asymmetric SLE(k,k-6) exploration processes has the same CLE(k) law. SLE(4) can be viewed as a level line of the GFF, and CLE(4) as the collection of level lines of the GFF. In the second part of Chapter 3, we define a conformally invariant time parameter for each loop in CLE(4), and in Chapter 4 we give a coupling between the GFF and CLE(4) equipped with this time parameter. SLE(k) processes can also be viewed as flow lines of the GFF. We derive the Hausdorff dimension of the intersection of two flow lines of the GFF, and from this obtain the dimension of the cut-point and double-point sets of the SLE curve in Chapter 5. In Chapter 6, we define radial restriction measures, prove a characterization of these measures, and give a necessary and sufficient condition for the existence of radial restriction measures.
808

Use of social media data in flood monitoring / Uso de dados das mídias sociais no monitoramento de enchentes

Restrepo Estrada, Camilo Ernesto 05 November 2018 (has links)
Floods are one of the most devastating types of disasters worldwide in terms of human, economic, and social losses. When authoritative data are scarce or unavailable for some periods, other sources of information are required to improve streamflow estimation and early flood warnings. Georeferenced social media messages are increasingly regarded as an alternative source of information for coping with flood risks. However, existing studies have mostly concentrated on the links between geo-social media activity and flooded areas. This thesis presents a novel methodology that helps close the research gap regarding the use of social networks as a proxy for precipitation-runoff and flood forecast estimates. It proposes a transformation function that creates a proxy variable for rainfall by analysing messages from geo-social media together with precipitation measurements from authoritative sources; the proxy is then incorporated into a hydrological model for flow estimation. The proxy and authoritative rainfall data are then merged in a data assimilation scheme using the Ensemble Kalman Filter (EnKF). The combined use of authoritative rainfall values with the social media proxy variable as input to the Probability Distributed Model (PDM) is found to improve flow simulations for flood monitoring. In addition, when these models are run under a fusion-assimilation scheme, the results improve even further, yielding a tool that can help monitor "ungauged" or "poorly gauged" catchments. The main contribution of this thesis is the creation of a completely original source of rain monitoring, which had not previously been explored in the literature in a quantitative way. It also shows how the joint use of this source and data assimilation methodologies helps detect flood events.
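A minimal sketch of the EnKF analysis step at the heart of such an assimilation scheme: an ensemble of model states is nudged toward an observation in proportion to the ensemble variance. The scalar state, ensemble size, and noise levels are illustrative assumptions.

```python
# Sketch: one Ensemble Kalman Filter analysis step for a scalar state
# (e.g. streamflow). Ensemble size, values, and noise are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_ens = 50
ensemble = rng.normal(120.0, 15.0, n_ens)   # forecast streamflow ensemble (m^3/s)
obs, obs_var = 100.0, 25.0                  # gauge observation and its error variance

P = np.var(ensemble, ddof=1)                # forecast variance from the ensemble
K = P / (P + obs_var)                       # Kalman gain

# Perturbed-observation update preserves the analysis ensemble spread.
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
analysis = ensemble + K * (perturbed_obs - ensemble)

print(f"forecast mean {ensemble.mean():.1f} -> analysis mean {analysis.mean():.1f}")
```

In the thesis's setting the forecast ensemble would come from PDM runs driven by the merged authoritative-plus-proxy rainfall, and the observation from a stream gauge.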
809

Exploring the genesis and specificity of serum antibody binding

Greiff, Victor 17 January 2013 (has links)
Humoral immune responses are associated with changes in both the composition and the concentration of serum antibodies. Signal intensity-based antibody binding profiles (ABP), measured with random-sequence peptide microarrays, attempt to capture these changes in order to render them applicable to serological diagnostics. In this work, the antibody repertoire's impact on ABP is studied by means of a mathematical model of antibody-peptide binding. The model is based on the law of mass action and incorporates as parameters (i) antibody and peptide sequences and (ii) antibody concentrations. The binding affinity of simulated monoclonal antibodies depends non-linearly on the amino acid positions in the peptide sequences. The model was both mathematically analyzed and implemented in silico. Mathematical analysis and simulations predicted that the ABP of mixtures of highly diverse random antibodies that are not dominated, concentration-wise, by a few antibodies, termed unbiased mixtures, can be linearly predicted based only on the amino acid composition of the peptide library used. This linear relationship led to the formulation of a linear regression model from which amino acid-associated weights (AAWS) emerge as a compact, lossless representation of unbiased mixtures' ABP. For lowly diverse antibody mixtures, this linear regression model breaks down. To test the in vitro relevance of the mathematically predicted ensemble properties of antibody mixtures, monoclonal and serum antibodies were incubated with the same peptide library. In conclusion, this work shows that serum antibody ensemble properties shape the genesis of ABP measured with random-sequence peptide microarrays. It indicates that knowledge of both a polyclonal mixture's diversity and its composition is essential for interpreting ABP with respect to serological diagnostics and B-cell epitope mapping. The specificity, and thus classifiability, of serum ABP is a function of both the investigated antibody mixture and technological features.
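A minimal sketch of the linear model described above: each peptide's predicted binding signal is the dot product of its amino acid composition with one learned weight per amino acid (the AAWS). Synthetic data stand in for real microarray measurements, purely as an illustrative assumption.

```python
# Sketch: regress peptide binding signals on amino acid composition; the
# fitted coefficients play the role of amino-acid-associated weights (AAWS).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
AA = "ACDEFGHIKLMNPQRSTVWY"                   # the 20 amino acids
peptides = ["".join(rng.choice(list(AA), 15)) for _ in range(1000)]

X = np.array([[p.count(a) for a in AA] for p in peptides])  # composition counts

true_w = rng.normal(size=20)                  # hidden per-amino-acid weights
signal = X @ true_w + rng.normal(0, 0.1, len(peptides))     # toy "unbiased" ABP

model = LinearRegression().fit(X, signal)
aaws = dict(zip(AA, model.coef_.round(2)))    # recovered AAWS
print(aaws)
```

For a biased (lowly diverse) mixture, the signal would depend on peptide sequence beyond composition, and this purely compositional regression would break down, as the abstract notes.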