291 |
Delayed Transfer Entropy applied to Big Data / Delayed Transfer Entropy aplicado a Big Data. Jonas Rossi Dourado, 30 November 2018 (has links)
The recent popularization of technologies such as smartphones, wearables, the Internet of Things, social networks and video streaming has increased data creation. Dealing with extensive data sets led to the creation of the term Big Data, often defined as data whose volume, acquisition rate or representation demands nontraditional approaches to analysis or requires horizontal scaling for processing. Analysis is the most important Big Data phase, with the objective of extracting meaningful and often hidden information. One example of hidden information in Big Data is causality, which can be inferred with Delayed Transfer Entropy (DTE). Despite its wide applicability, DTE demands considerable processing power, a problem aggravated by large datasets such as those found in Big Data. This research optimized DTE performance and modified existing code to enable DTE execution on a computer cluster. With the Big Data trend in sight, these results may enable the analysis of bigger datasets or provide better statistical evidence. / A recente popularização de tecnologias como Smartphones, Wearables, Internet das Coisas, Redes Sociais e streaming de Video aumentou a criação de dados. A manipulação de grande quantidade de dados levou a criação do termo Big Data, muitas vezes definido como quando o volume, a taxa de aquisição ou a representação dos dados demanda abordagens não tradicionais para analisar ou requer uma escala horizontal para o processamento de dados. A análise é a etapa de Big Data mais importante, tendo como objetivo extrair informações relevantes e às vezes escondidas. Um exemplo de informação escondida é a causalidade, que pode ser inferida utilizando Delayed Transfer Entropy (DTE). Apesar do DTE ter uma grande aplicabilidade, ele possui uma grande demanda computacional, esta última, é agravada devido a grandes bases de dados como as encontradas em Big Data. Essa pesquisa otimizou e modificou o código existente para permitir a execução de DTE em um cluster de computadores. Com a tendência de Big Data em vista, esse resultado pode permitir bancos de dados maiores ou melhores evidências estatísticas.
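For reference, transfer entropy with an explicit delay parameter, the quantity that DTE evaluates over a range of candidate delays, is commonly written as

TE_{X \to Y}(\delta) = \sum_{y_{t+1},\, y_t,\, x_{t-\delta}} p(y_{t+1}, y_t, x_{t-\delta}) \, \log \frac{p(y_{t+1} \mid y_t, x_{t-\delta})}{p(y_{t+1} \mid y_t)}

where \delta is the candidate delay of the source series X with respect to the target series Y. This is the standard textbook form, not necessarily the exact estimator implemented in this work; a directed influence from X to Y is typically reported at the delay that maximizes this quantity, subject to a statistical significance test, which is one reason the computation becomes expensive on long time series.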
|
292 |
Algorithmes Parallèles Efficaces Appliqués aux Calculs sur Maillages Non Structurés / Scalable and Efficient Algorithms for Unstructured Mesh Computations. Thebault, Loïc, 14 October 2016 (has links)
Le besoin croissant en simulation a conduit à l’élaboration de supercalculateurs complexes et d’un nombre croissant de logiciels hautement parallèles. Ces supercalculateurs requièrent un rendement énergétique et une puissance de calcul de plus en plus importants. Les récentes évolutions matérielles consistent à augmenter le nombre de noeuds de calcul et de coeurs par noeud. Certaines ressources n’évoluent cependant pas à la même vitesse. La multiplication des coeurs de calcul implique une diminution de la mémoire par coeur, plus de trafic de données, un protocole de cohérence plus coûteux et requiert d’avantage de parallélisme. De nombreuses applications et modèles actuels peinent ainsi à s’adapter à ces nouvelles tendances. En particulier, générer du parallélisme massif dans des méthodes d’éléments finis utilisant des maillages non structurés, et ce avec un nombre minimal de synchronisations et des charges de travail équilibrées, s’avèrent particulièrement difficile. Afin d’exploiter efficacement les multiples niveaux de parallélisme des architectures actuelles, différentes approches parallèles doivent être combinées. Cette thèse propose plusieurs contributions destinées à paralléliser les codes et les structures irrégulières de manière efficace. Nous avons développé une approche parallèle hybride par tâches à grain fin combinant les formes de parallélisme distribuée, partagée et vectorielle sur des structures irrégulières. Notre approche a été portée sur plusieurs applications industrielles développées par Dassault Aviation et a permis d’importants gains de performance à la fois sur les multicoeurs classiques ainsi que sur le récent Intel Xeon Phi. / The growing need for numerical simulations results in larger and more complex computing centers and more HPC software. Current HPC system architectures have increasing requirements for energy efficiency and performance. Recent advances in hardware design result in an increasing number of nodes and an increasing number of cores per node. However, some resources do not scale at the same rate. The increasing number of cores and parallel units implies lower memory per core, a higher requirement for concurrency, higher coherency traffic, and a higher cost for the coherency protocol. Most of the applications and runtimes currently in use struggle to scale with the present trend. In the context of finite element methods, exposing massive parallelism on unstructured mesh computations with efficient load balancing and minimal synchronization is challenging. To make efficient use of these architectures, several parallelization strategies have to be combined to exploit the multiple levels of parallelism. This Ph.D. thesis proposes several contributions aimed at overcoming this limitation by addressing irregular codes and data structures in an efficient way. We developed a hybrid parallelization approach combining the distributed, shared, and vectorial forms of parallelism in a fine-grain task-based approach applied to irregular structures. Our approach has been ported to several industrial applications developed by Dassault Aviation and has led to significant speedups using standard multicores and the Intel Xeon Phi manycore.
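As a concrete illustration of one standard way to expose shared-memory parallelism on unstructured meshes with few synchronizations, the sketch below colors mesh elements so that no two elements sharing a node receive the same color; all elements of one color can then be assembled concurrently without atomics or locks. This is a generic, assumed technique shown for context, not the specific fine-grain task-based strategy developed in the thesis, and the toy mesh is hypothetical.

```python
from collections import defaultdict

def color_elements(elements):
    """Greedy coloring: elements sharing a node get different colors.

    elements: list of node-index tuples, one tuple per element
    returns:  list of color ids, one per element
    """
    # Build node -> incident elements adjacency.
    node_to_elems = defaultdict(list)
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_to_elems[n].append(e)

    colors = [-1] * len(elements)
    for e, nodes in enumerate(elements):
        # Colors already used by elements that share a node with e.
        taken = {colors[other]
                 for n in nodes
                 for other in node_to_elems[n]
                 if colors[other] != -1}
        c = 0
        while c in taken:
            c += 1
        colors[e] = c
    return colors

# Toy triangle mesh: the two triangles sharing an edge get different colors.
elements = [(0, 1, 2), (1, 2, 3), (3, 4, 5)]
print(color_elements(elements))  # [0, 1, 0]
```

Elements within one color class touch disjoint nodes, so their contributions to a finite element assembly can be computed in parallel; the color classes are then processed one after another.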
|
293 |
Parallelism and modular proof in differential dynamic logic / Parallélisme et preuve modulaire en logique dynamique différentielle. Lunel, Simon, 28 January 2019 (has links)
Les systèmes cyber-physiques mélangent des comportements physiques continus, tel la vitesse d'un véhicule, et des comportement discrets, tel que le régulateur de vitesse d'un véhicule. Ils sont désormais omniprésents dans notre société. Un grand nombre de ces systèmes sont dits critiques, i.e. une mauvaise conception entraînant un comportement non prévu, un bug, peut mettre en danger des êtres humains. Il est nécessaire de développer des méthodes pour garantir le bon fonctionnement de tels systèmes. Les méthodes formelles regroupent des procédés mathématiques pour garantir qu'un système se comporte comme attendu, par exemple que le régulateur de vitesse n'autorise pas de dépasser la vitesse maximale autorisée. De récents travaux ont permis des progrès significatifs dans ce domaine, mais l'approche adoptée est encore monolithique, i.e. que le système est modélisé d'un seul tenant et est ensuite soumis à la preuve. Notre problématique est comment modéliser efficacement des systèmes cyber-physiques dont la complexité réside dans une répétition de morceaux élémentaires. Et une fois que l'on a obtenu une modélisation, comment garantir le bon fonctionnement de tels systèmes. Notre approche consiste à modéliser le système de manière compositionnelle. Plutôt que de vouloir le modéliser d'un seul tenant, il faut le faire morceaux par morceaux, appelés composants. Chaque composant correspond à un sous-système du système final qu'il est simple de modéliser. On obtient le système complet en assemblant les composants ensembles. Ainsi une usine de traitement des eaux est obtenue en assemblant différentes cuves. L'intérêt de cette méthode est qu'elle correspond à l'approche des ingénieurs dans l'industrie : considérer des éléments séparés que l'on compose ensuite. Mais cette approche seule ne résout pas le problème de la preuve de bon fonctionnement du système. Il faut aussi rendre la preuve compositionnelle. Pour cela, on associe à chaque composant des propriétés sur ses entrées et sortie, et on prouve qu'elles sont respectées. Cette preuve peut être effectué par un expert, mais aussi par un ordinateur si les composants sont de tailles raisonnables. Il faut ensuite nous assurer que lors de l'assemblage des composants, les propriétés continuent à être respectées. Ainsi, la charge de la preuve est reportée sur les composants élémentaires, l'assurance du respect des propriétés désirées est conservée lors des étapes de composition. On peut alors obtenir une preuve du bon fonctionnement de systèmes industriels avec un coût de preuve réduit. Notre contribution majeure est de proposer une telle approche compositionnelle à la fois pour modéliser des systèmes cyber-physiques, mais aussi pour prouver qu'ils respectent les propriétés voulues. Ainsi, à chaque étape de la conception, on s'assure que les propriétés sont conservées, si possible à l'aide d'un ordinateur. Le système résultant est correct par construction. De ce résultat, nous avons proposé plusieurs outils pour aider à la conception de systèmes cyber-physiques de manière modulaire. On peut raisonner sur les propriétés temporelles de tels systèmes, par exemple est-ce que le temps de réaction d'un contrôleur est suffisamment court pour garantir le bon fonctionnement. On peut aussi raisonner sur des systèmes où un mode nominal cohabite avec un mode d'urgence. / Cyber-physical systems mix continuous physical behaviors, e.g. the velocity of a vehicle, and discrete behaviors, e.g. the cruise-controller of the vehicle. They are pervasive in our society. 
Many such systems are safety-critical, i.e. a design error that leads to an unexpected behavior can harm humans. It is mandatory to develop methods to ensure the correct functioning of such systems. Formal methods are mathematical techniques used to guarantee that a system behaves as expected, e.g. that the cruise controller does not allow the vehicle to exceed the speed limit. Recent work has enabled significant progress in the verification of cyber-physical systems, but the approach is still monolithic: the system under consideration is modeled in one block. The question we address is how to efficiently model cyber-physical systems whose complexity lies in a repetition of elementary blocks, and, once this modeling is done, how to guarantee the correct functioning of such systems. Our approach is to model the system in a compositional manner. Rather than modeling it in one block, we model it piece by piece, as components. Each component corresponds to a subsystem of the final system and is easier to model due to its reasonable size. We obtain the complete system by assembling the different components; a water plant is thus obtained by composing several water tanks. The main advantage of this method is that it corresponds to the workflow in industry: consider each element separately and compose them later. But this approach alone does not solve the problem of proving the correct functioning of the system; we have to make the proof compositional too. To achieve this, we associate with each component properties on its inputs and outputs, then prove that they are satisfied. This step can be done by a domain expert, but also by a computer program if the component is of a reasonable size. We then have to ensure that the properties are preserved through the composition. Thus, the proof effort is shifted to the elementary components, and a proof of the correct functioning of industrial systems can be obtained with a reduced proof effort. Our main contribution is the development of such an approach in Differential Dynamic Logic. We are able not only to model cyber-physical systems modularly, but also to prove their correct functioning. Then, at each stage of the design, we can verify that the desired properties are still guaranteed. The resulting system is correct by construction. From this result, we have developed several tools to support modular reasoning on cyber-physical systems. We have proposed a methodology to reason about temporal properties, e.g. whether the execution period of a controller is small enough to effectively regulate the continuous behavior. We have also shown how to reason about operating modes, such as a nominal mode coexisting with an emergency mode, in our framework.
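Schematically, the assume-guarantee flavor of this compositional reasoning can be pictured as follows; this is only an illustrative shape, not the precise composition theorem or its side conditions as established in the thesis:

\frac{A_1 \rightarrow [\alpha_1]\, G_1 \qquad A_2 \rightarrow [\alpha_2]\, G_2 \qquad \text{(compatible contracts on the shared interface)}}{A_1 \wedge A_2 \rightarrow [\alpha_1 \parallel \alpha_2]\, (G_1 \wedge G_2)}

Each component \alpha_i is verified once against its own contract (A_i, G_i); a property of the composed system then follows without re-opening the proofs of the individual components, which is what keeps the proof effort proportional to the elementary components rather than to the assembled system.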
|
294 |
Massively Parallel Cartesian Discrete Ordinates Method for Neutron Transport Simulation / SN cartésien massivement parallèle pour la simulation neutronique. Moustafa, Salli, 15 December 2015 (has links)
La simulation haute-fidélité des coeurs de réacteurs nucléaires nécessite une évaluation précise du flux neutronique dans le coeur du réacteur. Ce flux est modélisé par l’équation de Boltzmann ou équation du transport neutronique. Dans cette thèse, on s’intéresse à la résolution de cette équation par la méthode des ordonnées discrètes (SN) sur des géométries cartésiennes. Cette méthode fait intervenir un schéma d’itérations à source, incluant un algorithme de balayage sur le domaine spatial qui regroupe l’essentiel des calculs effectués. Compte tenu du très grand volume de calcul requis par la résolution de l’équation de Boltzmann, de nombreux travaux antérieurs ont été consacrés à l’utilisation du calcul parallèle pour la résolution de cette équation. Jusqu’ici, ces algorithmes de résolution parallèles de l’équation du transport neutronique ont été conçus en considérant la machine cible comme une collection de processeurs mono-coeurs indépendants, et ne tirent donc pas explicitement profit de la hiérarchie mémoire et du parallélisme multi-niveaux présents sur les super-calculateurs modernes. Ainsi, la première contribution de cette thèse concerne l’étude et la mise en oeuvre de l’algorithme de balayage sur les super-calculateurs massivement parallèles modernes. Notre approche combine à la fois la vectorisation par des techniques de la programmation générique en C++, et la programmation hybride par l’utilisation d’un support d’exécution à base de tâches: PaRSEC. Nous avons démontré l’intérêt de cette approche grâce à des modèles de performances théoriques, permettant également de prédire le partitionnement optimal. Par ailleurs, dans le cas de la simulation des milieux très diffusifs tels que le coeur d’un REP, la convergence du schéma d’itérations à source est très lente. Afin d’accélérer sa convergence, nous avons implémenté un nouvel algorithme (PDSA), adapté à notre implémentation hybride. La combinaison de ces techniques nous a permis de concevoir une version massivement parallèle du solveur SN Domino. Les performances de la partie Sweep du solveur atteignent 33.9% de la performance crête théorique d’un super-calculateur à 768 cores. De plus, un calcul critique d’un réacteur de type REP 900MW à 26 groupes d’énergie mettant en jeu 10^12 DDLs a été résolu en 46 minutes sur 1536 coeurs. / High-fidelity nuclear reactor core simulations require precise knowledge of the neutron flux inside the reactor core. This flux is modeled by the linear Boltzmann equation, also called the neutron transport equation. In this thesis, we focus on solving this equation using the discrete ordinates method (SN) on Cartesian meshes. This method involves a source iteration scheme that includes a sweep over the spatial mesh and gathers the vast majority of the computations in the SN method. Due to the large amount of computation required to solve the Boltzmann equation, much prior research has focused on reducing the time to solution by developing parallel algorithms for the transport equation. However, these algorithms were designed by considering a super-computer as a collection of independent cores, and therefore do not explicitly take into account the memory hierarchy and multi-level parallelism available inside modern super-computers.
Therefore, we first proposed a strategy for designing an efficient parallel implementation of the sweep operation on modern architectures, combining the SIMD paradigm, through C++ generic programming techniques, with an emerging task-based runtime system: PaRSEC. We demonstrated the need for such an approach using theoretical performance models, which also predict optimal partitionings. Then we studied the challenge of converging the source iteration scheme in highly diffusive media such as PWR cores. We have implemented and studied the convergence of a new acceleration scheme (PDSA) that naturally suits our hybrid parallel implementation. The combination of all these techniques has enabled us to develop a massively parallel version of the SN Domino solver. It is capable of tackling the challenges posed by neutron transport simulations and compares favorably with state-of-the-art solvers such as Denovo. The performance of the PaRSEC implementation of the sweep operation reaches 6.1 Tflop/s on 768 cores, corresponding to 33.9% of the theoretical peak performance of this set of computational resources. For a typical 26-group PWR calculation involving 1.02×10^12 DoFs, the time to solution required by the Domino solver is 46 minutes using 1536 cores.
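To make the source iteration and sweep structure concrete, here is a minimal sequential sketch of a one-dimensional, one-group SN solver with step differencing. It is a toy illustration under assumed settings (vacuum boundaries, isotropic scattering, arbitrary cross sections), not the Domino solver or its 3D Cartesian sweep.

```python
import numpy as np

def source_iteration(nx=100, n_angles=8, sigma_t=1.0, sigma_s=0.5,
                     q_ext=1.0, length=10.0, tol=1e-8, max_iters=500):
    """1D, one-group discrete ordinates (SN) with step differencing."""
    dx = length / nx
    mu, w = np.polynomial.legendre.leggauss(n_angles)  # angles and weights on [-1, 1]
    phi = np.zeros(nx)                                 # scalar flux

    for it in range(max_iters):
        q = 0.5 * (sigma_s * phi + q_ext)              # isotropic emission density
        phi_new = np.zeros(nx)
        for n in range(n_angles):                      # loop over discrete directions
            psi_in = 0.0                               # vacuum boundary
            cells = range(nx) if mu[n] > 0 else range(nx - 1, -1, -1)
            a = abs(mu[n]) / dx
            for i in cells:                            # the spatial sweep
                psi = (q[i] + a * psi_in) / (sigma_t + a)
                phi_new[i] += w[n] * psi
                psi_in = psi                           # outflow feeds the next cell
        if np.max(np.abs(phi_new - phi)) < tol * np.max(np.abs(phi_new)):
            return phi_new, it + 1
        phi = phi_new
    return phi, max_iters

phi, iters = source_iteration()
print(f"converged in {iters} source iterations, max scalar flux = {phi.max():.4f}")
```

In 3D Cartesian sweeps, cells along a diagonal wavefront depend only on their upstream neighbours, which is the kind of dependency structure a task-based runtime such as PaRSEC can exploit; this toy keeps everything sequential.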
|
295 |
Computação paralela em GPU para resolução de sistemas de equações algébricas resultantes da aplicação do método de elementos finitos em eletromagnetismo. / Parallel computing on GPU for solving systems of algebraic equations resulting from application of finite element method in electromagnetism. Ana Flávia Peixoto de Camargos, 04 August 2014 (has links)
Este trabalho apresenta a aplicação de técnicas de processamento paralelo na resolução de equações algébricas oriundas do Método de Elementos Finitos aplicado ao Eletromagnetismo, nos regimes estático e harmônico. As técnicas de programação paralelas utilizadas foram OpenMP, CUDA e GPUDirect, sendo esta última para as plataformas do tipo Multi-GPU. Os métodos iterativos abordados incluem aqueles do subespaço Krylov: Gradientes Conjugados, Gradientes Biconjugados, Conjugado Residual, Gradientes Biconjugados Estabilizados, Gradientes Conjugados para equações normais (CGNE e CGNR) e Gradientes Conjugados ao Quadrado. Todas as implementações fizeram uso das bibliotecas CUSP, CUSPARSE e CUBLAS. Para problemas estáticos, os seguintes pré-condicionadores foram adotados, todos eles com implementações paralelizadas e executadas na GPU: Decomposições Incompletas LU e de Cholesky, Multigrid Algébrico, Diagonal e Inversa Aproximada. Para os problemas harmônicos, apenas os dois primeiros pré-condicionadores foram utilizados, porém na sua versão sequencial, com execução na CPU, resultando em uma implementação híbrida CPU-GPU. As ferramentas computacionais desenvolvidas foram testadas na simulação de problemas de aterramento elétrico. No caso do regime harmônico, em que o fenômeno é regido pela Equação de Onda completa com perdas e não homogênea, a formulação adotada foi aquela em dois potenciais, A-V aresta-nodal. Em todas as situações, os aplicativos desenvolvidos para GPU apresentaram speedups apreciáveis, demonstrando a potencialidade dessa tecnologia para a simulação de problemas de larga escala na Engenharia Elétrica, com excelente relação custo-benefício. / This work presents the use of parallel processing techniques on Graphics Processing Units (GPUs) for the solution of algebraic equations arising from the finite element modeling of electromagnetic phenomena, in both the static and time-harmonic regimes. The parallel programming techniques used were OpenMP, CUDA and GPUDirect, the last of these for multi-GPU platforms. The iterative methods discussed include those of the Krylov subspace family: Conjugate Gradients, Bi-conjugate Gradients, Conjugate Residual, Bi-conjugate Gradients Stabilized, Conjugate Gradients for Normal Equations (CGNE and CGNR) and Conjugate Gradients Squared. All implementations made use of the CUSP, CUSPARSE and CUBLAS libraries. For the static problems, the following preconditioners were adopted, all with parallelized implementations executed on the GPU: incomplete LU and Cholesky decompositions, algebraic multigrid, diagonal, and approximate inverse. For the time-harmonic problems, only the first two preconditioners were used, but in their sequential version running on the CPU, which yielded a hybrid CPU-GPU implementation. The developed computational tools were tested in the simulation of electrical grounding problems. In the case of the harmonic regime, in which the phenomenon is governed by the driven, lossy wave equation, the formulation adopted was the two-potential, ungauged edge-nodal A-V formulation. In all cases, the developed GPU-based tools showed considerable speedups, demonstrating that this is a promising technology for the simulation of large-scale electrical engineering problems, with an excellent cost-benefit ratio.
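As an illustration of the Krylov family listed above, the following NumPy sketch implements Conjugate Gradients with the diagonal (Jacobi) preconditioner. It is a sequential stand-in meant only to show the kernel structure (matrix-vector products, dot products, vector updates) that the thesis offloads to GPUs through CUSP, CUSPARSE and CUBLAS; the test matrix is hypothetical.

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iters=1000):
    """Conjugate Gradients with Jacobi (diagonal) preconditioning.

    A must be symmetric positive definite. Returns (x, iterations).
    """
    x = np.zeros_like(b)
    r = b - A @ x
    m_inv = 1.0 / np.diag(A)            # diagonal preconditioner
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iters

# Hypothetical SPD test system: a 1D Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```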
|
296 |
Integrated Optimal Code Generation for Digital Signal Processors. Bednarski, Andrzej, January 2006 (has links)
In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs). Code generation consists mainly of three interrelated optimization tasks: instruction selection (with resource allocation), instruction scheduling and register allocation. These tasks have been shown to be NP-hard for most architectures and most situations. A common approach to code generation consists in solving each task separately, i.e. in a decoupled manner, which is easier from a software engineering point of view. Phase-decoupled compilers produce good code quality for regular architectures, but when applied to DSPs the resulting code is of significantly lower performance due to strong interdependences between the different tasks. We developed a novel method for fully integrated code generation at the basic block level, based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small yet nontrivial problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality with optimal scheduling of data transfers on irregular processor architectures into account. For larger problem instances we have developed heuristic relaxations. In order to obtain a retargetable framework we developed a structured architecture specification language, xADML, which is based on XML. We implemented such a framework, called OPTIMIST, which is parameterized by an xADML architecture specification. The thesis further provides an Integer Linear Programming formulation of fully integrated optimal code generation for VLIW architectures with a homogeneous register file. Where it terminates successfully, the ILP-based optimizer mostly works faster than the dynamic programming approach; on the other hand, it fails for several larger examples where dynamic programming still provides a solution. Hence, the two approaches complement each other. In particular, we show how the dynamic programming approach can be used to precondition the ILP formulation. As far as we know from the literature, this is the first time that the main tasks of code generation are solved optimally in a single and fully integrated optimization step that additionally considers data placement in register sets and optimal scheduling of data transfers between different register sets.
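As a toy illustration of dynamic programming over partial schedules, the sketch below computes an optimal schedule length for a tiny basic block. It is an assumed, drastically simplified example (unit latencies, a fixed issue width, no instruction selection, register allocation or data transfers, and a hypothetical dependence graph), so it only hints at why the integrated approach is restricted to small basic blocks: the state space grows exponentially with the number of instructions.

```python
from itertools import combinations
from functools import lru_cache

def optimal_schedule_length(n, deps, issue_width):
    """Minimal number of cycles to schedule a basic block of n instructions.

    deps: set of (a, b) pairs meaning instruction a must complete before b
          (unit latencies assumed); issue_width: instructions issued per cycle.
    States are sets of already-issued instructions, i.e. dynamic programming
    over partial schedules; the state space is exponential in n.
    """
    preds = {i: {a for (a, b) in deps if b == i} for i in range(n)}
    full = frozenset(range(n))

    @lru_cache(maxsize=None)
    def solve(done):
        if done == full:
            return 0
        ready = [i for i in range(n) if i not in done and preds[i] <= done]
        best = float("inf")
        # Try every bundle of ready instructions that fits into one cycle.
        for k in range(1, min(issue_width, len(ready)) + 1):
            for bundle in combinations(ready, k):
                best = min(best, 1 + solve(done | frozenset(bundle)))
        return best

    return solve(frozenset())

# Hypothetical 5-instruction block: a diamond dependence pattern plus one leaf.
deps = {(0, 1), (0, 2), (1, 3), (2, 3)}
print(optimal_schedule_length(5, deps, issue_width=2))  # -> 3
```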
|
297 |
ALiCE: A Java-based Grid Computing System. Teo, Yong Meng, 01 1900 (has links)
A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. This talk is divided into three parts. Firstly, we give an overview of the main issues in grid computing. Next, we introduce ALiCE (Adaptive and Scalable Internet-based Computing Engine), a platform-independent and lightweight grid. ALiCE exploits object-level parallelism using our Object Network Transport Architecture (ONTA). Grid applications are written using the ALiCE Object Programming Template, which hides the complexities of the underlying grid fabric. Lastly, we present some performance results of ALiCE applications, including the geo-rectification of satellite images and the progressive multiple sequence alignment problem. / Singapore-MIT Alliance (SMA)
|
298 |
Integrated Optimal Code Generation for Digital Signal Processors. Bednarski, Andrzej, January 2006 (has links)
In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs). Code generation consists mainly of three interrelated optimization tasks: instruction selection (with resource allocation), instruction scheduling and register allocation. These tasks have been shown to be NP-hard for most architectures and most situations. A common approach to code generation consists in solving each task separately, i.e. in a decoupled manner, which is easier from a software engineering point of view. Phase-decoupled compilers produce good code quality for regular architectures, but when applied to DSPs the resulting code is of significantly lower performance due to strong interdependences between the different tasks. We developed a novel method for fully integrated code generation at the basic block level, based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small yet nontrivial problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality with optimal scheduling of data transfers on irregular processor architectures into account. For larger problem instances we have developed heuristic relaxations. In order to obtain a retargetable framework we developed a structured architecture specification language, xADML, which is based on XML. We implemented such a framework, called OPTIMIST, which is parameterized by an xADML architecture specification. The thesis further provides an Integer Linear Programming formulation of fully integrated optimal code generation for VLIW architectures with a homogeneous register file. Where it terminates successfully, the ILP-based optimizer mostly works faster than the dynamic programming approach; on the other hand, it fails for several larger examples where dynamic programming still provides a solution. Hence, the two approaches complement each other. In particular, we show how the dynamic programming approach can be used to precondition the ILP formulation. As far as we know from the literature, this is the first time that the main tasks of code generation are solved optimally in a single and fully integrated optimization step that additionally considers data placement in register sets and optimal scheduling of data transfers between different register sets.
|
299 |
Den kvantandliga diskursen : En undersökning om nyandlighetens möte med kvantfysiken / The quantum-spiritual discourse : A study of the new-age movement's encounter with quantum physics. Sporrong, Elin, January 2012 (has links)
This paper aims to describe and elaborate on a recent discursive change within the new-age movement. Since the seventies and the publishing of speculative popular science books like The Tao of Physics by Fritjof Capra and The Self-Aware Universe by Amit Goswami, the idea that quantum physics resonates with spirituality has become the topic of hundreds of books and movies. The quantum-spiritual discourse has three distinct ways of approaching quantum physics in its discussion of spirituality: the parallelistic approach, which emphasizes the similarities between Eastern philosophies and modern physics; the monistic-idealistic approach, which holds that mind is the foundation of matter; and the scientific-spiritual approach, which tries to explain spiritual claims scientifically. In the quantum-spiritual discourse, quantum-physical phenomena (e.g. non-locality and entanglement) are called upon to validate metaphysical statements. The primary assumption of the discourse is that the paradigm shift brought about by the establishment of modern physics is also a paradigm shift in spirituality. In order to examine the common claims made in the discourse, spiritual arguments are cross-referenced against the facts of quantum physics. A discussion is also held about the probable influence of the historical context, with particular focus on the development of monism during the late nineteenth century.
|
300 |
A case study of the relationship between journalism and politics in Sri Lanka. Westerberg, Isabella, January 2012 (has links)
This bachelor thesis is conducted as a Minor Field Study (MFS) in Colombo, Sri Lanka. The aim of the study is to investigate the relationship between journalism and politics through three questions at issue: 1) What is the role of media according to the journalists? 2) How do journalists work with political reporting in the Sri Lankan print media? 3) How do print media and politics correspond to each other in Sri Lanka? The theoretical framework consists of theories on media systems, democracy models, the notion of the public sphere, media during elections and types of regulations. Semi-structured interviews were conducted with 17 informants, both editors and journalists, at eight different editorial offices. The newspapers at which the informants were employed were either state-owned or privately owned. The qualitative material was transcribed and analysed using thematisation and meaning concentration to reveal patterns, attitudes and opinions. The analysis is divided into two major sections: 'Media's Role in the Society' and 'Media and Politics'. The first section investigates the first question at issue. Informing and educating people are valued as important responsibilities amongst the informants. Media is considered to be powerful in terms of affecting both people and politicians, although some reservations are made. The second section examines the second and third questions at issue. The ideal execution of political reportage includes notions of neutrality, fairness, balance and unbiased reporting. In reality this is not necessarily accomplished. The state newspapers seem to report on behalf of the government in a positive and uncritical way. Private newspapers consider themselves to be more independent, but political ties and restrictions can undermine their independence. Tendencies towards clientelism, political parallelism and instrumentalization are noted in the media environment. Sensitive political news is often self-censored by journalists due to fear of consequences. In 'Conclusions and Discussion' the questions at issue are connected to each other in an attempt to discuss the complex relationship between journalism and politics in Sri Lanka.
|