311

Data-based Explanations of Random Forest using Machine Unlearning

Tanmay Laxman Surve (17537112) 03 December 2023 (has links)
Tree-based machine learning models, such as decision trees and random forests, are among the most widely used machine learning models, primarily because of their predictive power in supervised learning tasks and their ease of interpretation. Despite their popularity and power, these models have been found to produce unexpected or discriminatory behavior. Given their overwhelming success on most tasks, it is of interest to identify the root causes of the unexpected and discriminatory behavior of tree-based models. However, there has been little work on understanding and debugging tree-based classifiers in the context of fairness. We introduce FairDebugger, a system that utilizes recent advances in machine unlearning research to determine the training data subsets responsible for model unfairness. Given a tree-based model learned on a training dataset, FairDebugger identifies the top-k training data subsets responsible for model unfairness, or bias, by measuring the change in model parameters when parts of the underlying training data are removed. We describe the architecture of FairDebugger and walk through real-world use cases to demonstrate how FairDebugger detects these patterns and their explanations.
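The underlying idea — scoring training subsets by how much their removal changes a fairness measure — can be illustrated with a minimal leave-subset-out sketch. This is not the FairDebugger implementation (which relies on machine unlearning rather than retraining); the retraining loop, the demographic-parity metric, and the input layout below are illustrative assumptions.

```python
# Minimal sketch: rank training subsets by how much their removal reduces a
# fairness gap for a random forest. Illustrative only; FairDebugger itself
# uses machine unlearning instead of retraining. X_train, y_train, X_test,
# s_test are assumed to be numpy arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def demographic_parity_gap(model, X, sensitive):
    """|P(yhat=1 | s=1) - P(yhat=1 | s=0)| on held-out data."""
    yhat = model.predict(X)
    return abs(yhat[sensitive == 1].mean() - yhat[sensitive == 0].mean())

def rank_subsets(X_train, y_train, X_test, s_test, subsets, top_k=3):
    """Score each candidate subset by the drop in the fairness gap after removing it."""
    base = RandomForestClassifier(n_estimators=100, random_state=0)
    base.fit(X_train, y_train)
    base_gap = demographic_parity_gap(base, X_test, s_test)

    scores = []
    for idx in subsets:                       # idx: indices of one candidate subset
        keep = np.setdiff1d(np.arange(len(y_train)), idx)
        m = RandomForestClassifier(n_estimators=100, random_state=0)
        m.fit(X_train[keep], y_train[keep])
        gap = demographic_parity_gap(m, X_test, s_test)
        scores.append((base_gap - gap, idx))  # larger drop => subset more responsible
    return sorted(scores, key=lambda t: -t[0])[:top_k]
```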
312

Estimating Per-pixel Classification Confidence of Remote Sensing Images

Jiang, Shiguo 19 December 2012 (has links)
No description available.
313

Quality of Data in Scope 3 Sustainability Reporting for the Mining and Extractive Industry

Bratan, Dastan, Jacob, Steve Anthony January 2024 (has links)
Climate change is the most pressing environmental challenge of our time, with global temperatures rising and extreme weather events becoming more frequent. The corporate sector, particularly large industrial entities, faces increasing scrutiny due to its significant contributions to global emissions. This thesis examines Scope 3 emissions reporting within the mining and extractive industries, focusing on data quality and comparing Nordic and international practices. Using a self-developed theoretical model of logic, concepts, metrics, and tools, this research investigates how these industries report their Scope 3 emissions and identifies gaps in current practices. The study takes an abductive approach, combining inductive and deductive methods, supported by a mixed-method design that combines empirical data from publicly available sustainability reports with an online questionnaire targeting mining and extractive companies and sustainability managers. Key findings reveal considerable variability in how companies report Scope 3 emissions, with Nordic companies often lagging behind their international counterparts despite strong sustainability credentials. Data quality concerns, including issues of accuracy, completeness, and timeliness, undermine stakeholders' ability to make informed decisions. Additionally, the research highlights the diverse tools and methodologies employed by companies, noting that the lack of clear guidelines often hinders their effectiveness. This thesis contributes to a deeper understanding of Scope 3 emissions management, emphasizing the need for standardized and effective reporting practices.
314

Uso de propriedades visuais-interativas na avaliação da qualidade de dados / Using visual-interactive properties in the data quality assessment

Josko, João Marcelo Borovina 29 April 2016 (has links)
The effects of poor data quality on the reliability of the outcomes of analytical processes are notorious. Improving data quality requires choosing among alternatives that combine procedures, methods, techniques and technologies. The Data Quality Assessment process (DQAp) provides relevant and practical inputs for choosing the most suitable alternative through a mapping of data defects. Relevant computational approaches support this process. Such approaches apply quantitative or assertion-based methods that usually limit the human role to interpreting their outcomes. However, the DQAp strongly depends on knowledge of the data context, since it is impossible to confirm or refute a defect based on the data alone. Hence, human supervision is essential throughout this process. Visualization systems belong to a class of supervised approaches that can make data defect structures visible. Despite the considerable body of knowledge on the design of such systems, little of it addresses visual data quality assessment. This work therefore reports two contributions. The first is a taxonomy that organizes a detailed description of defects in structured, non-temporal data related to the quality criteria of accuracy, completeness and consistency. The taxonomy followed a methodology that enabled systematic coverage of data defects and an improved description of them with respect to state-of-the-art taxonomies. The second contribution is a set of property-defect relationships establishing that certain visual-interactive properties are more suitable for the visual assessment of certain data defects at given data resolutions. Revealed by an exploratory multiple-case study, these relationships provide implications that reduce the subjectivity in designing visualization systems that support visual data quality assessment.
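As an illustration of the three quality criteria the taxonomy covers (accuracy, completeness, consistency), the sketch below runs simple programmatic checks over a structured table; the rules, column names, and thresholds are illustrative assumptions, not part of the thesis.

```python
# Minimal sketch of accuracy, completeness and consistency checks over a
# structured table. Column names and rules are illustrative assumptions.
import pandas as pd

def assess_quality(df: pd.DataFrame) -> dict:
    report = {}
    # Completeness: fraction of non-missing cells per column.
    report["completeness"] = (1 - df.isna().mean()).to_dict()
    # Accuracy (syntactic): values must fall inside a plausible domain.
    report["age_out_of_range"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
    # Consistency: related columns must agree with each other.
    report["end_before_start"] = int((df["end_date"] < df["start_date"]).sum())
    return report

df = pd.DataFrame({
    "age": [34, -2, None, 150],
    "start_date": pd.to_datetime(["2020-01-01"] * 4),
    "end_date": pd.to_datetime(["2020-02-01", "2019-12-01", "2020-03-01", "2020-01-15"]),
})
print(assess_quality(df))
```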
315

工商及服務業普查資料品質之研究 / Data quality research of industry and commerce census

邱詠翔 Unknown Date (has links)
Whether data quality is good or bad affects the quality of decisions and the outcomes of the actions based on them, so data quality has received increasing attention in recent years. This study involves two databases: an industrial innovation survey database and the Industry and Commerce Census database for ROC year 95 (2006). Data quality is also a very important issue for a database: databases often contain erroneous records, and erroneous records bias analysis results, so data cleaning and consolidation are necessary preprocessing steps before any analysis. From the population and sample distributions we find that, before cleaning and consolidation, the average number of employees was 92.08 in the innovation survey and 135.54 in the industry and commerce data. After cleaning and consolidation, we compare the correlation, similarity and distance of the employee counts in the two databases; the results show that the two databases are highly consistent, with average employee counts of 39.01 (innovation survey) and 42.12 (industry and commerce), much closer to the population average of 7.05, which also demonstrates the importance of data cleaning. The method used in this study is post-stratified sampling, and the main objective is to use the industrial innovation survey sample to assess the accuracy of the population data of the ROC year 95 (2006) Industry and Commerce Census. Estimates of the number of employees and of operating revenue based on the innovation survey sample are both overestimated; the likely reason is that the population frame of the innovation survey is the directory of the five thousand largest enterprises published by China Credit Information, whereas the census covers general enterprises. We therefore validate using the census subsample corresponding to the innovation survey sample and find that the ROC year 95 (2006) census sample and the innovation survey sample are highly consistent.
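A post-stratified estimator of the kind used here re-weights sample means by known population stratum sizes. The sketch below shows the basic computation; the strata, population counts, and sample values are made-up illustrative numbers, not figures from the census.

```python
# Minimal sketch of a post-stratified estimate of a population total.
# Stratum labels, population counts and sample values are made-up numbers.
from collections import defaultdict

population_sizes = {"small": 80000, "medium": 15000, "large": 5000}  # N_h from the census frame

sample = [  # (stratum, employees) pairs observed in the survey sample
    ("small", 6), ("small", 9), ("medium", 40), ("medium", 55), ("large", 320),
]

by_stratum = defaultdict(list)
for stratum, value in sample:
    by_stratum[stratum].append(value)

# Post-stratified total: sum over strata of N_h * (sample mean in stratum h).
estimate = sum(
    population_sizes[h] * (sum(vals) / len(vals))
    for h, vals in by_stratum.items()
)
print(f"post-stratified estimate of total employees: {estimate:,.0f}")
```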
316

Skoner en kleiner vertaalgeheues / Cleaner and smaller translation memories

Wolff, Friedel 10 1900 (has links)
Computers can play a useful role in translation. Two approaches are translation memory systems and machine translation systems. Both technologies use a translation memory: a bilingual collection of previous translations. This thesis presents methods to improve the quality of a translation memory. A machine learning approach is followed to identify incorrect entries in a translation memory. A variety of learning features in three categories are presented: features associated with text length, features calculated by quality checkers such as translation checkers, a spell checker and a grammar checker, as well as statistical features computed with the help of external data. The evaluation of translation memory systems is not yet standardised. This thesis points out a number of problems with existing evaluation methods, and an improved evaluation method is developed. By removing the incorrect entries from a translation memory, a smaller, cleaner translation memory is available to applications. Experiments demonstrate that such a translation memory results in better performance in a translation memory system. As supporting evidence for the value of a cleaner translation memory, an improvement is also achieved in training a machine translation system. / School of Computing / Ph. D. (Computer Science)
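The feature-based filtering described here can be sketched as a small classifier over per-entry features. The three feature groups mirror the thesis (length, quality-checker-like signals, statistics), but the concrete features, the labelled examples, and the use of logistic regression are illustrative assumptions.

```python
# Minimal sketch: flag suspect translation-memory entries with a classifier
# over simple per-entry features. Features and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(source: str, target: str) -> list:
    src_len, tgt_len = len(source.split()), len(target.split())
    return [
        src_len,
        tgt_len,
        tgt_len / max(src_len, 1),                 # length ratio (length-based feature)
        float(source.strip() == target.strip()),   # untranslated copy (checker-like signal)
        float(any(c.isdigit() for c in source) != any(c.isdigit() for c in target)),  # number mismatch
    ]

# Tiny labelled set: 1 = incorrect entry, 0 = correct entry (made-up examples).
tm = [("open the file", "maak die lêer oop", 0),
      ("open the file", "open the file", 1),
      ("save 3 copies", "stoor kopieë", 1),
      ("save 3 copies", "stoor 3 kopieë", 0)]

X = np.array([features(s, t) for s, t, _ in tm])
y = np.array([label for _, _, label in tm])
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))   # entries predicted as incorrect can be removed from the TM
```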
317

Une approche automatisée basée sur des contraintes d’intégrité définies en UML et OCL pour la vérification de la cohérence logique dans les systèmes SOLAP : applications dans le domaine agri-environnemental / An automated approach based on integrity constraints defined in UML and OCL for the verification of logical consistency in SOLAP systems : applications in the agri-environmental field

Boulil, Kamal 26 October 2012 (has links)
Spatial Data Warehouse (SDW) and Spatial OLAP (SOLAP) systems are Business Intelligence (BI) technologies allowing for interactive multidimensional analysis of huge volumes of spatial data. In such systems the quality of analysis mainly depends on three components: the quality of the warehoused data, the quality of data aggregation, and the quality of data exploration. The warehoused data quality depends on elements such as accuracy, completeness and logical consistency. The data aggregation quality is affected by structural problems (e.g., non-strict dimension hierarchies that may cause double-counting of measure values) and semantic problems (e.g., summing temperature values does not make sense in many applications). The data exploration quality is mainly affected by inconsistent user queries (e.g., what are the temperature values in the USSR in 2010?) leading to possibly meaningless interpretations of query results. This thesis addresses the problems of logical inconsistency that may affect the data, aggregation and exploration qualities in SOLAP systems.
Logical inconsistency is usually defined as the presence of incoherencies (contradictions) in data; it is typically controlled by means of Integrity Constraints (IC). In this thesis, we extend the notion of IC (in the SOLAP domain) in order to take into account aggregation and query incoherencies. To overcome the limitations of existing approaches concerning the definition of SOLAP IC, we propose a framework based on the standard languages UML and OCL. Our framework permits a platform-independent conceptual design and an automatic implementation of SOLAP IC; it consists of three parts: (1) a SOLAP IC classification; (2) a UML profile implemented in the CASE tool MagicDraw, allowing for a conceptual design of SOLAP models and their IC; (3) an automatic implementation based on the code generators Spatial OCL2SQL and UML2MDX, which transforms the conceptual specifications into code at the SDW and SOLAP server layers. Finally, the contributions of this thesis have been experimented with and validated in the context of French national projects aiming at developing (S)OLAP applications for agriculture and environment.
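One of the aggregation problems named above, double counting caused by a non-strict hierarchy, can be detected with a simple data-level check. The sketch below does this over an illustrative dimension table; the rows and column names are assumptions, and the thesis itself expresses such constraints in UML/OCL and generates SQL/MDX code from them rather than hand-writing checks.

```python
# Minimal sketch: detect a non-strict dimension hierarchy (a child mapped to
# more than one parent), which can cause double-counting during roll-up.
# The dimension rows and level names are illustrative assumptions.
from collections import defaultdict

# (city, region) pairs of a spatial dimension; a strict hierarchy requires
# that every city belong to exactly one region.
city_to_region = [
    ("Lyon", "Auvergne-Rhone-Alpes"),
    ("Clermont-Ferrand", "Auvergne-Rhone-Alpes"),
    ("Lyon", "Rhone-Alpes"),            # same city under a second parent -> non-strict
]

parents = defaultdict(set)
for city, region in city_to_region:
    parents[city].add(region)

violations = {city: regs for city, regs in parents.items() if len(regs) > 1}
if violations:
    print("non-strict hierarchy, measures may be double-counted:", violations)
```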
318

Dashboardy - jejich analýza a implementace v prostředí SAP Business Objects / An analysis and implementation of Dashboards within SAP Business Objects 4.0/4.1

Kratochvíl, Tomáš January 2013 (has links)
The diploma thesis focuses on the analysis and categorisation of dashboards and their subsequent implementation in the SAP Dashboards and Web Intelligence tools. The main goal of the thesis is an analysis of dashboards for different areas of company management according to the chosen architecture solution. Another goal is to take into account the principles of dashboard design within the company, and the thesis deals with indicator comparison as well. In the theoretical part, the author further defines the data life cycle within Business Intelligence and decomposes the particular dashboard types. The theoretical part closes with a chapter on data quality, the data quality process and data quality improvement, and on the use of SAP Best Practices and KBAs for the BI tools published by SAP. The implementation of dashboards builds on the theoretical part. It is divided into three chapters according to the selected architecture: using multi-source systems, using SAP InfoSets/Query, and using a Data Warehouse or Data Mart as the architecture solution for reporting purposes. The detailed implementation sections should help the reader form an opinion on the different architectures, and especially on the differences between the BI tools within SAP BusinessObjects. At the end of each architecture section, its pros and cons are summarised.
320

Standards for exchanging digital geo-referenced information

Cooper, Antony Kyle 12 March 2011 (has links)
The purpose of this dissertation is to assess digital geo-referenced information and standards for exchanging such information, especially the South African National Exchange Standard (NES). The process of setting up a standard is exacting. On the one hand, the process demands a thorough scrutiny and analysis of the objects to be standardised and of all related concepts. This is a prerequisite for ensuring that there is unanimity about their meaning and inter-relationships. On the other hand, the process requires that the standard itself be enunciated as succinctly, comprehensibly and precisely as possible. This dissertation addresses both these facets of the standards process in the context of standards for exchanging digital geo-referenced information. The dissertation begins with an analysis of geo-referenced information in general, including digital geo-referenced information. In chapters 2 and 3, the various aspects of such information are scrutinised and evaluated in more detail. The examination of concepts is backed up by a comprehensive glossary of terms in the domain under discussion. Chapter 4 examines the nature of standards. It also proposes a novel way to approach a standard for the exchange of digital geo-referenced information: namely, that it can be viewed as a language and can accordingly be specified by a grammar. To illustrate the proposal, NES is fully specified, using the Extended Backus-Naur Form notation, in an appendix. Apart from the advantages of being a succinct and precise formal specification, the approach also lends itself to deploying standard tools such as Lex and yacc for conformance testing and for developing interfaces to NES, as illustrated in a second appendix. As a final theme of the dissertation, an evaluation of such standards is provided. Other standards that have been proposed elsewhere for purposes similar to that of NES are surveyed in chapter 5. In chapter 6, features of NES are highlighted, including the fact that it takes a relational approach. Chapter 7 concludes the dissertation, summarising the work to date and looking ahead to future work. / Dissertation (MSc)--University of Pretoria, 1993. / Computer Science / unrestricted
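The "standard as a grammar" idea, a format defined by EBNF-style production rules against which records can be checked for conformance, can be sketched with a tiny validator. The record layout below is a made-up illustration and is not the actual NES syntax.

```python
# Minimal sketch of "standard as a grammar": a toy exchange record checked
# against EBNF-like rules. The record layout is made up and is not NES.
#
#   record     ::= ident ";" latitude ";" longitude
#   ident      ::= letter { letter | digit }
#   latitude   ::= number   (-90.0 .. 90.0)
#   longitude  ::= number   (-180.0 .. 180.0)
import re

IDENT = re.compile(r"[A-Za-z][A-Za-z0-9]*$")

def conforms(record: str) -> bool:
    parts = record.split(";")
    if len(parts) != 3:
        return False
    ident, lat, lon = (p.strip() for p in parts)
    if not IDENT.match(ident):
        return False
    try:
        return -90.0 <= float(lat) <= 90.0 and -180.0 <= float(lon) <= 180.0
    except ValueError:
        return False

print(conforms("Pretoria01; -25.7461; 28.1881"))   # True
print(conforms("??; 95.0; 28.1881"))               # False: bad identifier and latitude
```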
