41

Canonical Correlation and Clustering for High Dimensional Data

Ouyang, Qing January 2019
Multi-view datasets arise naturally in statistical genetics when the genetic and trait profile of an individual is portrayed by two feature vectors. A motivating problem concerning the Skin Intrinsic Fluorescence (SIF) study on the Diabetes Control and Complications Trial (DCCT) subjects is presented. A widely applied quantitative method to explore the correlation structure between the two domains of a multi-view dataset is Canonical Correlation Analysis (CCA), which seeks canonical loading vectors such that the transformed canonical covariates are maximally correlated. In the high dimensional case, regularization of the dataset is required before CCA can be applied. Furthermore, the nature of genetic research suggests that sparse output is more desirable. In this thesis, two regularized CCA (rCCA) methods and a sparse CCA (sCCA) method are presented. When correlation sub-structure exists, a stand-alone CCA method will not perform well. To tackle this limitation, a mixture of local CCA models can be employed. In this thesis, I review a correlation clustering algorithm proposed by Fern, Brodley and Friedl (2005), which seeks to group subjects into clusters such that features are identically correlated within each cluster. An evaluation study is performed to assess the effectiveness of the CCA and correlation clustering algorithms using artificial multi-view datasets. Both sCCA and sCCA-based correlation clustering exhibited superior performance compared to rCCA and rCCA-based correlation clustering. The sCCA and sCCA-clustering methods are applied to the multi-view dataset consisting of PrediXcan-imputed gene expression and SIF measurements of DCCT subjects. The stand-alone sparse CCA method identified 193 of 11,538 genes as correlated with SIF#7. Further investigation of these 193 genes with simple linear regression and t-tests revealed that only two genes, ENSG00000100281.9 and ENSG00000112787.8, were significant in association with SIF#7. No plausible clustering scheme was detected by the sCCA-based correlation clustering method. / Thesis / Master of Science (MSc)
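To make the CCA step concrete, here is a minimal sketch (not code from the thesis) using scikit-learn's classical CCA on synthetic data; the matrices `X` and `Y` and their sizes are hypothetical stand-ins for the genetic and trait profiles, and a sparse CCA would additionally penalize the l1 norm of the loading vectors:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200                              # subjects (hypothetical)
X = rng.normal(size=(n, 50))         # view 1: e.g., genetic features
# view 2: traits correlated with the first few features of X, plus noise
Y = X[:, :5] @ rng.normal(size=(5, 10)) + 0.5 * rng.normal(size=(n, 10))

cca = CCA(n_components=2)
cca.fit(X, Y)
U, V = cca.transform(X, Y)           # canonical covariates for both views

# Canonical correlation of the first component pair:
r1 = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
print(f"first canonical correlation: {r1:.3f}")
```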
42

Offline Reinforcement Learning for Downlink Link Adaptation: A study on dataset and algorithm requirements for offline reinforcement learning

Dalman, Gabriella January 2024
This thesis studies offline reinforcement learning as an optimization technique for downlink link adaptation, one of the many control loops in radio access networks. The work studies the impact of the quality of pre-collected datasets, in terms of how well the data covers the state-action space and whether it was collected by an expert policy. Data quality is evaluated by training three different algorithms: Deep Q-networks, Critic regularized regression, and Monotonic advantage re-weighted imitation learning. Performance is measured for each combination of algorithm and dataset, and the algorithms' need for hyperparameter tuning and their sample efficiency are studied. The results showed Critic regularized regression to be the most robust: it learned well from any of the datasets used in the study and did not require extensive hyperparameter tuning. Deep Q-networks required careful hyperparameter tuning, but paired with the expert data it reached rewards as high as the agents trained with Critic regularized regression. Monotonic advantage re-weighted imitation learning needed data from an expert policy to reach a high reward. In summary, offline reinforcement learning can succeed in a telecommunication use case such as downlink link adaptation. Critic regularized regression was the preferred algorithm because it performed well with all three datasets presented in the thesis.
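As an illustration of the policy update behind Critic regularized regression (its binary-advantage variant), here is a simplified PyTorch sketch for discrete actions; the network shapes, batch, and data are hypothetical, and this is a schematic rather than the thesis's implementation:

```python
import torch
import torch.nn as nn

n_states, n_actions = 8, 4                 # hypothetical dimensions
q_net = nn.Linear(n_states, n_actions)     # critic: Q(s, .) for all actions
pi_net = nn.Linear(n_states, n_actions)    # policy logits

def crr_policy_loss(states, actions):
    """CRR, binary variant: imitate a dataset action only when the critic
    judges it better than the policy's average action in that state."""
    with torch.no_grad():
        q = q_net(states)                          # Q(s, a), shape (B, A)
        pi = torch.softmax(pi_net(states), dim=-1)
        v = (pi * q).sum(dim=-1, keepdim=True)     # V(s) = E_{a~pi} Q(s, a)
        adv = q.gather(1, actions) - v             # advantage of dataset action
        weight = (adv > 0).float()                 # binary advantage filter
    log_pi = torch.log_softmax(pi_net(states), dim=-1).gather(1, actions)
    return -(weight * log_pi).mean()

# One gradient step on a fake offline batch (the critic would be trained too):
states = torch.randn(32, n_states)
actions = torch.randint(n_actions, (32, 1))
loss = crr_policy_loss(states, actions)
loss.backward()
```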
43

A regularized arithmetic Riemann-Roch theorem via metric degeneration

De Gaetano, Giovanni 14 June 2018
The main result of this dissertation is a regularized arithmetic Riemann-Roch theorem for a hermitian line bundle, isometric to the line bundle of cusp forms of given even weight, on an arithmetic surface whose complex fiber is isometric to a hyperbolic Riemann surface without elliptic points. The proof proceeds by metric degeneration: we regularize the metrics under consideration in a neighborhood of the singularities, apply the arithmetic Riemann-Roch theorem of Gillet and Soulé, and finally let the parameter go to zero. Both sides of the formula blow up through metric degeneration. On one side, the exact asymptotic expansion is computed directly from the definition of the smooth arithmetic intersection numbers. The divergent term on the other side is the zeta-regularized determinant of the Laplacian associated to the regularized metrics, acting on 1-forms with values in the chosen hermitian line bundle. We first define and compute a regularization of the determinant of the corresponding Laplacian associated to the singular metrics, which later occurs in the regularized arithmetic Riemann-Roch theorem. To do so we adapt and generalize ideas of Jorgenson-Lundelius, D'Hoker-Phong, and Sarnak. We then prove a formula for the on-diagonal heat kernel associated to the chosen hermitian line bundle on a model cusp, from which its behavior close to a cusp is transparent. This expression is related to an expansion in terms of eigenfunctions associated to the Whittaker equation, which we prove in an appendix. Further estimates on the heat kernel associated to the chosen hermitian line bundle on the complex fiber of the arithmetic surface complete the proof of the main theorem.
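For reference, the zeta-regularized determinant appearing on the analytic side is the standard spectral definition (not specific to this thesis):

```latex
\zeta_{\Delta}(s) \;=\; \sum_{\lambda_j > 0} \lambda_j^{-s}
\quad (\operatorname{Re} s \gg 0),
\qquad
{\det}_{\zeta}(\Delta) \;:=\; \exp\!\bigl(-\zeta_{\Delta}'(0)\bigr),
```

where the sum runs over the nonzero eigenvalues of the Laplacian and the zeta function is meromorphically continued to s = 0.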
44

Nonnegative matrix and tensor factorizations, least squares problems, and applications

Kim, Jingu 14 November 2011
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite these successes, NMF and NTF have been actively developed only in the past decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection between the NMF and NTF problems and nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm that exploits this structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
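To fix ideas, a naive alternating nonnegative-least-squares loop for X ≈ WH looks as follows (a textbook sketch solving each NLS subproblem column by column, not the accelerated block principal pivoting solver proposed in the thesis):

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(X, k, iters=50, seed=0):
    """Alternating NLS for X ~= W @ H with W, H >= 0 (naive per-column solver)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = np.empty((k, n))
    for _ in range(iters):
        # Fix W, solve one NLS problem per column of H.
        for j in range(n):
            H[:, j], _ = nnls(W, X[:, j])
        # Fix H, solve one NLS problem per row of W.
        for i in range(m):
            W[i, :], _ = nnls(H.T, X[i, :])
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
W, H = nmf_anls(X, k=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```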
45

Multiple prediction from incomplete data with the focused curvelet transform

Herrmann, Felix J., Wang, Deli, Hennenfent, Gilles January 2007
Incomplete data represents a major challenge for the successful prediction and subsequent removal of multiples. In this paper, a new method is presented that tackles this challenge in a two-step approach. During the first step, the recently developed curvelet-based recovery by sparsity-promoting inversion (CRSI) is applied to the data, followed by a prediction of the primaries. During the second, high-resolution step, the estimated primaries are used to improve the frequency content of the recovered data by combining the focal transform, defined in terms of the estimated primaries, with the curvelet transform. This focused curvelet transform leads to an improved recovery, which can subsequently be used as input for a second stage of multiple prediction and primary-multiple separation.
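The sparsity-promoting inversion step can be sketched with generic iterative soft thresholding (ISTA); in this toy sketch a DCT stands in for the curvelet transform, and the mask, signal, and parameters are hypothetical:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256
# Signal that is sparse in the transform domain (5% nonzero coefficients).
x_true = idct(np.where(rng.random(n) < 0.05, rng.normal(size=n), 0.0), norm="ortho")
mask = rng.random(n) < 0.5          # incomplete acquisition: half the traces kept
y = mask * x_true                   # observed, incomplete data

# ISTA on: min_c 0.5*||mask * idct(c) - y||^2 + lam * ||c||_1
lam, step = 0.01, 1.0
c = np.zeros(n)
for _ in range(200):
    r = mask * idct(c, norm="ortho") - y
    g = dct(mask * r, norm="ortho")              # gradient of the data term
    c = c - step * g
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold

x_rec = idct(c, norm="ortho")
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```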
46

Optimization of regularized portfolios employing information on asset groups for the Brazilian market

Martins, Diego de Carvalho 06 February 2015
This work analyzes the performance of regularized mean-variance portfolios built from financial assets available in Brazilian markets. In particular, regularized portfolios are obtained by restricting the norm of the portfolio-weight vector, following DeMiguel et al. (2009). Additionally, we analyze the performance of portfolios that take into account information about the group structure of assets with similar characteristics, as proposed by Fernandes, Rocha and Souza (2011). While the covariance matrix employed is the sample one, the expected returns are obtained by reverse optimization of the market equilibrium portfolio proposed by Black and Litterman (1992). The out-of-sample empirical analysis for the period between January 2010 and October 2014 indicates that, in line with previous studies, penalizing the norm of the weights can (depending on the chosen norm and the intensity of the restriction) lead to portfolios with better performance in terms of return and Sharpe ratio than portfolios obtained via the traditional Markowitz model. In addition, including group information can also be beneficial when computing optimal portfolios, relative both to traditional Markowitz portfolios and to the same models without the group structure.
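The norm-constrained mean-variance program described above can be written in a few lines with cvxpy; this is a generic sketch with made-up inputs (`mu`, `Sigma`, `gamma`, and the l1 budget `c` are hypothetical), whereas the thesis uses Black-Litterman expected returns and the sample covariance:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 20
mu = rng.normal(0.08, 0.04, n)           # hypothetical expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.01 * np.eye(n)   # hypothetical covariance matrix
Sigma = (Sigma + Sigma.T) / 2            # enforce exact symmetry

w = cp.Variable(n)
gamma, c = 5.0, 1.5   # risk aversion; l1 budget (c = 1 would forbid shorting)
objective = cp.Maximize(mu @ w - gamma * cp.quad_form(w, Sigma))
constraints = [cp.sum(w) == 1, cp.norm1(w) <= c]
cp.Problem(objective, constraints).solve()
print(w.value.round(3))
```

Note that with the budget constraint sum(w) = 1, tightening the l1 bound to c = 1 forces all weights to be nonnegative, which is how norm regularization interpolates toward a no-short-selling portfolio.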
47

Modeling of the hydro-geomechanical behavior of a fault network under stress-state variations

Faivre, Maxime 06 July 2016
In the present work, we address the issue of groundwater flow in fractured porous media submitted to local or regional stress-state variations. Due to increasing pore fluid pressure, the length and aperture distribution of the fractures are modified, resulting in the formation of preferential flow channels within the geological formation. The numerical approach proposed is a fully coupled hydro-poro-mechanical model in saturated conditions involving single-phase flow both in the fractures and in the porous matrix. The extended finite element method (XFEM) is employed for modeling fracture dynamics and flow computation for fractures that do not lie on the mesh but cross through the elements, avoiding mesh dependence. In this study: (i) we consider the pressure build-up generated by fluid flow inside and through the fracture, (ii) fracture dynamics is handled with a cohesive zone model (CZM) on a pre-existing propagation path, and (iii) fluid exchanges may occur between the fractures and the porous medium. The last specification of the HM-XFEM model is taken into account through the introduction of a Lagrange multiplier field along the fracture path. This field results from the dualized condition of pressure continuity between the pore pressure and the fluid pressure inside the fracture. Depending on the value of the Lagrange multiplier, both permeable and impervious fractures can be considered. The cohesive law employed is a non-regularized Talon-Curnier-type law, which handles both propagation and eventual closure of the fracture. The HM-XFEM model was validated against analytical solutions of the well-known 2D KGD fracture model under different propagation regimes. We then applied the HM-XFEM model to a multi-stage fracture network stimulated by the injection of incompressible fluid at constant rate, where fractures are not connected to each other and evolve on pre-existing propagation paths, in order to analyze how the fractures of a network influence one another under flow. In particular, a parametric study assesses the influence of the fluid viscosity, the injection rate, and the spacing between fractures on fracture propagation. Particular attention is paid to the stress-shadowing effect (i.e., the modification of the stress state due to interaction between fractures).
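As a generic illustration of how such a pressure-continuity condition is dualized with a Lagrange multiplier (a schematic, not the thesis's exact weak form):

```latex
p\big|_{\Gamma} = p_f \ \text{ on the fracture } \Gamma
\quad\leadsto\quad
\text{find } \lambda \text{ such that }
\int_{\Gamma} \mu \,\bigl(p - p_f\bigr)\, \mathrm{d}\Gamma = 0
\quad \text{for all test functions } \mu,
```

with the multiplier field λ entering the pore-pressure and fracture-flow equations as the exchange term coupling the two pressure fields.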
48

Advanced dMRI signal modeling for tissue microstructure characterization

Fick, Rutger 10 March 2017
This thesis is dedicated to furthering neuroscientific understanding of the human brain using diffusion-sensitized Magnetic Resonance Imaging (dMRI). Within dMRI, we focus on the estimation and interpretation of microstructure-related markers, often referred to as "Microstructure Imaging". The thesis is organized in three parts. Part I focuses on understanding the state of the art in Microstructure Imaging. We start with the basics of diffusion MRI and a brief overview of diffusion anisotropy. We then review and compare most state-of-the-art microstructure models in PGSE-based Microstructure Imaging, emphasizing model assumptions and limitations, and validate them using spinal cord data with registered ground-truth histology. In Part II we present our contributions to 3D q-space imaging and microstructure recovery. We propose closed-form Laplacian regularization for the recent MAP functional basis, allowing robust estimation of tissue-related q-space indices. We also apply this approach to Human Connectome Project data, where we use it as a preprocessing step for other microstructure models. Finally, we compare tissue biomarkers in an ex-vivo study of Alzheimer rats at different ages. In Part III, we present our contributions to representing the qt-space, varying over 3D q-space and diffusion time. We present an initial approach that focuses on 3D axon diameter estimation from the qt-space. We end with our final approach, in which we propose a novel, regularized functional basis to represent the qt-signal, which we call qt-dMRI. Our approach allows the estimation of time-dependent q-space indices, which quantify the time dependence of the diffusion signal.
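The Laplacian-regularized fit of a functional basis reduces to penalized linear least squares; here is a generic numpy sketch in which the design matrix `Phi`, penalty matrix `L`, and data are hypothetical stand-ins for the MAP basis and the dMRI signal:

```python
import numpy as np

def fit_regularized(Phi, y, L, lam):
    """Solve min_c ||Phi c - y||^2 + lam * c^T L c  (L: quadratic smoothness penalty)."""
    return np.linalg.solve(Phi.T @ Phi + lam * L, Phi.T @ y)

# Toy 1D example: smooth fit of noisy samples with a polynomial basis.
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 40)
y = np.exp(-4 * t**2) + 0.05 * rng.normal(size=t.size)
Phi = np.vander(t, 10, increasing=True)   # basis functions (hypothetical)
L = np.diag(np.arange(10) ** 2)           # crude penalty on high-order terms (hypothetical)
c = fit_regularized(Phi, y, L, lam=1e-3)
print("fit residual:", np.linalg.norm(Phi @ c - y))
```

In the dMRI setting the penalty matrix encodes the integral of the squared Laplacian of the reconstructed signal, which is likewise a quadratic form in the basis coefficients, so the fit stays closed-form.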
49

ADVANCED PRIOR MODELS FOR ULTRA SPARSE VIEW TOMOGRAPHY

Maliha Hossain (17014278) 26 September 2023
There is a growing need to reconstruct high-quality tomographic images from sparse view measurements to accommodate time and space constraints as well as patient well-being in medical CT. Analytical methods perform poorly with sub-Nyquist acquisition rates. In extreme cases with 4 or fewer views, effective reconstruction approaches must be able to incorporate side information to constrain the solution space of an otherwise under-determined problem. This thesis presents two sparse view tomography problems that are solved using techniques that exploit knowledge of the structural and physical properties of the scanned objects.

First, we reconstruct four-view CT datasets obtained from an in-situ imaging system used to observe Kolsky bar impact experiments. Test subjects are typically 3D-printed out of homogeneous materials into shapes with circular cross sections. Two advanced prior models are formulated to incorporate these assumptions in a modular fashion into the iterative radiographic inversion framework. The first is Multi-Slice Fusion and the latter is Total Variation regularization that operates in cylindrical coordinates.

In the second problem, artificial neural networks (NN) are used to directly invert a temporal sequence of four radiographic images of discontinuities propagating through an imploding steel shell. The NN is fed radiographic features that are robust to scatter and is trained using density simulations synthesized as solutions to hydrodynamic equations of state. The proposed reconstruction pipeline learns and enforces physics-based assumptions of hydrodynamics and shock physics to constrain the final reconstruction to a space of physically admissible solutions.
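As a sketch of the Total Variation prior at work, here is a minimal gradient-descent reconstruction on a Cartesian grid with a smoothed TV penalty (the thesis's variant operates in cylindrical coordinates; the projection matrix, sizes, and parameters below are hypothetical toys):

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of smoothed (Charbonnier) total variation of image x."""
    dx = np.diff(x, axis=0, append=x[-1:, :])     # forward differences, zero at edge
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    # Negative divergence; periodic-boundary roll used for brevity.
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

rng = np.random.default_rng(0)
n, m = 32, 48                        # image side; number of ray measurements
A = (rng.random((m, n * n)) < 0.01).astype(float)  # toy "projection" matrix
x_true = np.zeros((n, n)); x_true[8:24, 8:24] = 1.0
y = A @ x_true.ravel()               # heavily under-determined measurements

x = np.zeros((n, n))
lam, step = 0.1, 1e-3
for _ in range(500):
    grad_data = (A.T @ (A @ x.ravel() - y)).reshape(n, n)
    x -= step * (grad_data + lam * tv_grad(x))
print("data misfit:", np.linalg.norm(A @ x.ravel() - y))
```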
50

A Regularized Extended Finite Element Method for Modeling the Coupled Cracking and Delamination of Composite Materials

Swindeman, Michael James January 2011
No description available.
