211

Towards Fairness-Aware Online Machine Learning from Imbalanced Data Streams

Sadeghi, Farnaz 10 August 2023
Online supervised learning from fast-evolving imbalanced data streams has applications in many areas. The development of techniques able to handle highly skewed class distributions (or 'class imbalance') is thus an important area of research in domains such as manufacturing, the environment, and health. Solutions should be able to analyze large repositories in near real time and provide accurate models to describe rare classes that may appear infrequently or in bursts, while continuously accommodating new instances. Although numerous online learning methods have been proposed to handle binary class imbalance, solutions suitable for evolving multi-class streams with varying degrees of imbalance have received limited attention. To address this knowledge gap, the first contribution of this thesis introduces the Online Learning from Imbalanced Multi-Class Streams through Dynamic Sampling (DynaQ) algorithm for learning in such multi-class imbalanced settings. Our approach uses a queue-based learning method that dynamically creates an instance queue for each class. The number of instances is balanced by maintaining a queue threshold and removing older samples during training. In addition, new and rare classes are dynamically added to the training process as they appear. Our experimental results confirm a noticeable improvement in minority-class detection and classification performance, and a comparative evaluation shows that the DynaQ algorithm outperforms state-of-the-art approaches.

Our second contribution focuses on fairness-aware learning from imbalanced streams. This work is motivated by the observation that the decisions made by online learning algorithms may negatively impact individuals or communities, and the development of approaches to handle these concerns is an active area of research in the machine learning community. However, most existing methods process the data in offline settings and are not directly suitable for online learning from evolving data streams. Further, these techniques fail to take the effects of class imbalance on fairness-aware supervised learning into account. In addition, recent fairness-aware online supervised learning approaches focus on only one sensitive attribute, which may lead to subgroup discrimination; in fair classification, equality of fairness metrics across multiple overlapping groups must be considered simultaneously. In our second contribution, we therefore address the combined problem of fairness-aware online learning from imbalanced evolving streams while considering multiple sensitive attributes. To this end, we introduce the Multi-Sensitive Queue-based Online Fair Learning (MQ-OFL) algorithm, an online fairness-aware approach that maintains valid and fair models over evolving streams. MQ-OFL changes the training distribution in an online fashion based on both the stream imbalance and the discriminatory behavior of the model evaluated over the historical stream. We compare MQ-OFL with state-of-the-art methods on real-world datasets and present comparative insights on performance.

Our final contribution focuses on explainability and interpretability in fairness-aware online learning. This research is guided by concerns about the black-box nature of models, which conceals their internal logic from users. This lack of transparency poses practical and ethical challenges, particularly when these algorithms make decisions in finance, healthcare, and marketing domains.
These systems may introduce biases and prejudices during the learning phase by utilizing complex machine learning algorithms and sensitive data. Consequently, decision models trained on such data may make unfair decisions, and it is important to identify such issues before deploying the models. To address this issue, we introduce techniques for interpreting the outcomes of fairness-aware online learning. Through a case study predicting income based on features such as ethnicity, biological sex, age, and education level, we demonstrate how our fairness-aware learning process (MQ-OFL) balances the trade-off between accuracy and discrimination using global and local surrogate models.
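To illustrate the queue-based dynamic sampling idea described in this abstract, here is a minimal, hypothetical sketch (not the thesis implementation): one bounded queue per class, eviction of the oldest samples once a threshold is reached, new classes added on the fly, and a classifier fit on the roughly balanced queue contents. The threshold value and choice of learner are assumptions.

```python
# Minimal sketch of queue-based dynamic sampling for an imbalanced
# multi-class stream (illustrative only; not the DynaQ reference code).
from collections import defaultdict, deque

import numpy as np
from sklearn.linear_model import SGDClassifier

QUEUE_THRESHOLD = 200  # max instances kept per class (assumed value)
queues = defaultdict(lambda: deque(maxlen=QUEUE_THRESHOLD))  # one bounded queue per class


def update(x, y):
    """Consume one labelled stream instance (x: feature vector, y: class label)."""
    # A new or rare class gets its own queue the first time it appears;
    # the bounded deque evicts the oldest samples once the threshold is reached.
    queues[y].append(np.asarray(x, dtype=float))


def train_snapshot():
    """Fit a classifier on the current, roughly class-balanced queue contents."""
    X = np.vstack([np.vstack(list(q)) for q in queues.values() if q])
    y = np.concatenate([[label] * len(q) for label, q in queues.items() if q])
    return SGDClassifier(loss="log_loss").fit(X, y)
```

In a streaming loop one would call `update` on every arriving instance and periodically refit (or incrementally update) the model on the queued, rebalanced data.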
212

The acquisition of Social License to Operate: Create trust through dialogue and receive acceptance

Grimsvik, Tor, Tornberg, Viktor January 2021
Today, the concept of a social license to operate (SLO) has been gaining traction among companies. If a company wants to establish a business in a new area or keep its current one, it must cooperate with stakeholders to acquire and maintain an SLO or be forced to shut down. In the mining and extractive (M&E) sector, the negative effects on the environment and the local area are so apparent that the industry has come to focus on the SLO. Previous research draws different conclusions about which factors impact the SLO the most, and this presents a research gap to analyze. The purpose of this research is to investigate how companies within the M&E sector need to interact with local communities to build trust and acquire an SLO. This is done by exploring how individuals living in or near mining operations perceive those operations, asking how they feel about distributional fairness, procedural fairness, confidence in governance, dialogue, trust towards the mining industry, and the level of acceptance of mining. This research is of a quantitative, explanatory character, and primary data was gathered from an online questionnaire distributed in two Facebook groups connected to Kiruna and Gällivare. A total of 190 people responded, and their answers were analyzed using statistical techniques. The results indicate that dialogue is an effective way for companies to communicate with communities. Procedural fairness and confidence in governance led to trust, while distributional fairness did not, and a company that is trusted will receive an SLO.
213

Perceptions of Distributive and Procedural Justice in AI and Hybrid Decision-Making: Exploring the Impact of Task Complexity

Börresen, Henrik, Mykhalevych, Kateryna January 2024
Artificial intelligence (AI) is increasingly used in organizational decision-making, optimizing performance and cutting operational costs. While AI can potentially improve the efficiency and reliability of decision-making processes, empirical research highlights that AI adoption may cause people to question the fairness of algorithmic decisions. Thus, the present study investigates whether distributive and procedural fairness perceptions are influenced by human, algorithmic, and hybrid decision-makers in high versus low task complexity conditions. Participants (N = 391) assessed perceived distributive and procedural fairness in a pre-registered, scenario-based experiment. Decision-maker type (human vs. hybrid vs. AI) and task complexity (low vs. high) were manipulated using a 3×2 between-subjects design. It was hypothesized that the human decision-maker would be perceived as fairer than the AI, especially in high-complexity conditions. Furthermore, hybrid decision-makers were hypothesized to be perceived as fairer than AI and human decision-makers in both low- and high-complexity tasks. The results indicate that people tend to perceive human decision-makers as fairer than AI in situations of high complexity. Additionally, in the high-complexity condition, the hybrid decision-maker was perceived as more distributively fair than the AI and less procedurally fair than the human decision-maker. In low-complexity tasks, the hybrid decision-maker did not show superiority in perceived fairness over AI or humans. Hence, the results support the first hypothesis and contradict the second hypothesis that hybrid decision-makers would be perceived as more distributively and procedurally fair than AI and human decision-makers. Implications regarding the consequences of implementing AI in organizational decision-making are discussed, and suggestions for further research are included.
214

Identifying Induced Bias in Machine Learning

Chowdhury Mohammad Rakin Haider 22 April 2024
<p dir="ltr">The last decade has witnessed an unprecedented rise in the application of machine learning in high-stake automated decision-making systems such as hiring, policing, bail sentencing, medical screening, etc. The long-lasting impact of these intelligent systems on human life has drawn attention to their fairness implications. A majority of subsequent studies targeted the existing historically unfair decision labels in the training data as the primary source of bias and strived toward either removing them from the dataset (de-biasing) or avoiding learning discriminatory patterns from them during training. In this thesis, we show label bias is not a necessary condition for unfair outcomes from a machine learning model. We develop theoretical and empirical evidence showing that biased model outcomes can be introduced by a range of different data properties and components of the machine learning development pipeline.</p><p dir="ltr">In this thesis, we first prove that machine learning models are expected to introduce bias even when the training data doesn’t include label bias. We use the proof-by-construction technique in our formal analysis. We demonstrate that machine learning models, trained to optimize for joint accuracy, introduce bias even when the underlying training data is free from label bias but might include other forms of disparity. We identify two data properties that led to the introduction of bias in machine learning. They are the group-wise disparity in the feature predictivity and the group-wise disparity in the rates of missing values. The experimental results suggest that a wide range of classifiers trained on synthetic or real-world datasets are prone to introducing bias under feature disparity and missing value disparity independently from or in conjunction with the label bias. We further analyze the trade-off between fairness and established techniques to improve the generalization of machine learning models such as adversarial training, increasing model complexity, etc. We report that adversarial training sacrifices fairness to achieve robustness against noisy (typically adversarial) samples. We propose a fair re-weighted adversarial training method to improve the fairness of the adversarially trained models while sacrificing minimal adversarial robustness. Finally, we observe that although increasing model complexity typically improves generalization accuracy, it doesn’t linearly improve the disparities in the prediction rates.</p><p dir="ltr">This thesis unveils a vital limitation of machine learning that has yet to receive significant attention in FairML literature. Conventional FairML literature reduces the ML fairness task to as simple as de-biasing or avoiding learning discriminatory patterns. However, the reality is far away from it. Starting from deciding on which features collect up to algorithmic choices such as optimizing robustness can act as a source of bias in model predictions. It calls for detailed investigations on the fairness implications of machine learning development practices. In addition, identifying sources of bias can facilitate pre-deployment fairness audits of machine learning driven automated decision-making systems.</p>
215

A Study on the Impact of Preprocessing Steps on Machine Learning Model Fairness

Sathvika Kotha 17 April 2024
<p dir="ltr">The success of machine learning techniques in widespread applications has taught us that with respect to accuracy, the more data, the better the model. However, for fairness, data quality is perhaps more important than quantity. Existing studies have considered the impact of data preprocessing on the accuracy of ML model tasks. However, the impact of preprocessing on the fairness of the downstream model has neither been studied nor well understood. Throughout this thesis, we conduct a systematic study of how data quality issues and data preprocessing steps impact model fairness. Our study evaluates several preprocessing techniques for several machine learning models trained over datasets with different characteristics and evaluated using several fairness metrics. It examines different data preparation techniques, such as changing categories into numbers, filling in missing information, and smoothing out unusual data points. The study measures fairness using standards that check if the model treats all groups equally, predicts outcomes fairly, and gives similar chances to everyone. By testing these methods on various types of data, the thesis identifies which combinations of techniques can make the models both accurate and fair.The empirical analysis demonstrated that preprocessing steps like one-hot encoding, imputation of missing values, and outlier treatment significantly influence fairness metrics. Specifically, models preprocessed with median imputation and robust scaling exhibited the most balanced performance across fairness and accuracy metrics, suggesting a potential best practice guideline for equitable ML model preparation. Thus, this work sheds light on the importance of data preparation in ML and emphasizes the need for careful handling of data to support fair and ethical use of ML in society.</p>
216

Data-based Explanations of Random Forest using Machine Unlearning

Tanmay Laxman Surve 03 December 2023
<p dir="ltr">Tree-based machine learning models, such as decision trees and random forests, are one of the most widely used machine learning models primarily because of their predictive power in supervised learning tasks and ease of interpretation. Despite their popularity and power, these models have been found to produce unexpected or discriminatory behavior. Given their overwhelming success for most tasks, it is of interest to identify root causes of the unexpected and discriminatory behavior of tree-based models. However, there has not been much work on understanding and debugging tree-based classifiers in the context of fairness. We introduce FairDebugger, a system that utilizes recent advances in machine unlearning research to determine training data subsets responsible for model unfairness. Given a tree-based model learned on a training dataset, FairDebugger identifies the top-k training data subsets responsible for model unfairness, or bias, by measuring the change in model parameters when parts of the underlying training data are removed. We describe the architecture of FairDebugger and walk through real-world use cases to demonstrate how FairDebugger detects these patterns and their explanations.</p>
217

Swedish SMEs' Perception of the Corporate Income Taxation System's Treatment of Online Data Collection

Kramer, Arnold, Dobreva, Gentrit January 2023
Purpose – The paper aims to analyse the perception SMEs in Sweden have of the corporate income tax system's (CITS) treatment of online data collection (ODC).
Methodology – This study employs a qualitative research approach in which the authors implemented a deductive phenomenological research approach. The paper incorporates both exploratory and descriptive research methodologies as its primary research approaches, which the authors deemed most suited to collect both primary and secondary data tailored to the research objectives. The primary data source consists of semi-structured interviews with six Swedish SMEs, selected through a judgment-based approach. An in-depth investigation of the current literature formed the foundation of the secondary data collection.
Findings – The findings suggest that the SMEs studied in this paper address their perceptions of the CITS's treatment of ODC through (I) Online Data Privacy, (II) Distributional Tax Fairness, (III) Retributive Tax Fairness, (IV) Procedural Tax Fairness, (V) Complexity, (VI) Trust, and (VII) Growth Obstruction.
Practical implications – The practical implications of this study are valuable for policymakers, SMEs, and any stakeholders interested in the corporate income tax system's treatment of online data collection. This research can help improve the CITS's effectiveness and reduce the compliance burden on SMEs in Sweden. Policymakers can leverage the insights and perceptions of Swedish SMEs to modernize the CITS for the 21st century and implement ODC practices that are most suitable according to the SMEs' preferences. SMEs, in turn, can leverage the insights of this study to gain a better understanding of the CITS and its treatment of the components of value creation, including ODC practices. External stakeholders can use the findings to gain an understanding of the field of research and apply it according to their needs, for instance by assisting SMEs in their ODC practices with respect to the CITS.
Originality/value – The originality and value of this paper lie in the novel focus on Swedish SMEs' perception of the CITS's treatment of ODC. To the authors' knowledge, this study is the first to explore this topic in Sweden, contributing to the literature on the CITS, ODC practices, and the treatment of ODC through the CITS.
Keywords – Corporate Income Taxation, Corporate Income Taxation System, Tax Perceptions, SMEs, The Slippery Slope Framework, Complexity, Distributional Fairness, Retributive Fairness, Procedural Fairness, Growth, Tax Benefits, Privacy, Punishment, Trust
Paper type – Research Paper
218

Benchmarking bias mitigation algorithms in representation learning through fairness metrics

Reddy, Charan 07 1900
The rapid adoption and success of deep learning models in various application domains have raised significant questions about the fairness of these models when used in the real world. Recent research has shown the biases incorporated within representation learning algorithms, raising doubts about the dependability of such decision-making systems. As a result, there is a growing interest in identifying the sources of bias in learning algorithms and developing bias-mitigation techniques. Bias-mitigation algorithms aim to reduce the impact of sensitive data aspects on eligibility choices. Sensitive features are private and protected features of a dataset, such as gender or race, that should not influence output eligibility decisions, i.e., the criteria that determine whether or not an individual is qualified for a particular activity, such as lending or hiring. Bias-mitigation models are designed to make eligibility choices on dataset samples without bias toward sensitive input data properties. The dataset distribution, which is a function of the potential label and feature imbalance, the correlation of potentially sensitive features with other features in the data, the distribution shift from training to the development phase, and other factors, determines the difficulty of bias-mitigation tasks. Without evaluating bias-mitigation models in various challenging setups, the merits of deep learning approaches to these tasks remain unclear. As a result, a systematic analysis is required to compare different bias-mitigation procedures using various fairness criteria and to ensure that the final results can be replicated. To this end, this thesis offers a unified framework for comparing bias-mitigation methods. To better understand how these methods work, we compare alternative fairness algorithms trained with deep neural networks on a common synthetic dataset and a real-world dataset. We train around 3000 distinct models in various setups, including imbalanced and correlated data configurations, to probe the limits of current models and better understand which setups are prone to failure. Our findings show that as datasets become more imbalanced or dataset attributes become more correlated, model bias increases; the level of dominance of correlated sensitive dataset features influences bias; and sensitive information remains in the latent representation even after bias-mitigation algorithms are applied. In summary, we present a dataset, propose multiple challenging assessment scenarios, rigorously analyse recent promising bias-mitigation techniques in a common framework, and openly release this benchmark, hoping the research community will treat it as a common entry point for fair deep learning.
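The kind of controlled configuration such a benchmark varies, label imbalance and correlation between a sensitive attribute and a predictive feature, can be sketched with a small synthetic-data generator. This is a hypothetical illustration, not the released benchmark: the parameter names, data-generating rule, and the demographic-parity check are assumptions.

```python
# Sketch: synthetic data with tunable label imbalance (pos_rate) and
# correlation between a predictive feature and the sensitive attribute (corr),
# plus a demographic-parity check. Illustrative only, not the released benchmark.
import numpy as np
from sklearn.linear_model import LogisticRegression


def make_data(n=10_000, pos_rate=0.2, corr=0.7, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, n)                              # sensitive attribute
    x1 = corr * (2 * s - 1) + rng.normal(size=n)           # predictive feature that tracks s
    x2 = rng.normal(size=n)                                # predictive feature independent of s
    score = x1 + x2
    y = (score > np.quantile(score, 1 - pos_rate)).astype(int)  # imbalanced labels
    return np.column_stack([x1, x2]), y, s


def demographic_parity_gap(model, X, s):
    """Absolute gap in positive prediction rates between the two groups."""
    pred = model.predict(X)
    return abs(pred[s == 0].mean() - pred[s == 1].mean())


for corr in (0.0, 0.5, 0.9):
    X, y, s = make_data(corr=corr)
    gap = demographic_parity_gap(LogisticRegression().fit(X, y), X, s)
    print(f"corr={corr:.1f}  demographic parity gap={gap:.3f}")
```

As the assumed correlation grows, the model's positive prediction rates drift apart across groups, mirroring the qualitative finding that stronger correlation between sensitive and predictive features increases model bias.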
219

Onbillike ontslag in die Suid-Afrikaanse arbeidsreg met spesiale verwysing na Prosessuele aspekte

Botha, Gerhard 11 1900
Text in Afrikaans / Prior to dismissal, employees are always entitled to substantive and procedural fairness, be it in an individual or a collective context, subject to highly exceptional circumstances. Procedural fairness in particular has an inherent value, inter alia because the outcome of a process cannot be predicted. The employer thereby also establishes the facts, and by conducting a process, labour peace is promoted. Also of importance for procedural fairness are adherence to the employer's own or agreed procedures, providing the employee with sufficient information, prior notification, and bona fide conduct by the employer. The primary remedy in the case of an unfair dismissal is reinstatement, though reinstatement should not follow in the case of a dismissal which is (only) procedurally unfair. The guidelines developed by the courts and arbitrators have largely been codified in the Draft Labour Relations Bill, as subsequently confirmed in the Labour Relations Act, 1995. / Mercantile Law / LL. M.
220

The Social Framework of Individual Decisions

Gerlach, Philipp 19 January 2018
When and why do people engage in (un)ethical behavior? This dissertation summarizes general theories and synthesizes experimental findings on (non)cooperation, (un)fairness, and (dis)honesty. To this end, Chapter 1 introduces experimental games as a rigorous tool for studying (un)ethical behavior. Chapter 2 demonstrates that small changes in the framing of context (e.g., referring to a social dilemma as a competition vs. a team endeavor) can have long-lasting effects on the participants' propensity to cooperate. Context framing also shapes beliefs about the cooperative behavior of interaction partners and donations in non-strategic allocation decisions. Taken together, the results suggest that social norm theories provide a plausible explanation for cooperation, including its sensitivity to context framing. Chapter 3 investigates why experimental games regularly find that economics students behave more selfishly than their peers. The concept of social norms is thereby extended to include the enforcement of compliance via sanctions. The results indicate that economics students and students of other majors are about equally concerned with fairness and have similar notions of fairness in the situation. However, economics students make lower allocations, expect others to make lower allocations, and are less willing to sanction allocations seen as unfair. Skepticism mediated their lower allocations, suggesting that economics students behave more selfishly because they expect others not to comply with a shared fairness norm.
Chapter 4 shows that intrinsic sanctions (e.g., shame and guilt) can be sufficient for ethical behavior to emerge. The chapter provides answers to many of the ongoing debates on who behaves dishonestly and under what circumstances. The findings suggest that dishonest behavior depends on situational factors (e.g., reward magnitude and externalities), personal factors (e.g., gender and age) as well as on the experimental paradigm itself.
