About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

ENABLING MULTI-PARTY COLLABORATIVE DATA ACCESS

Athamnah, Malek January 2018 (has links)
Cloud computing has made services available at unprecedented scale, but data accessibility becomes more complex when multiple parties provide the infrastructure. In this thesis, we discuss the problem of enabling cooperative data access in a multi-cloud environment where the data is owned and managed by multiple enterprises. We consider a multi-party collaboration scheme whereby a set of parties collectively decides accessibility to data from individual parties using different data models, such as relational and graph databases. In order to implement desired business services, parties need to share a selected portion of their information with one another. We consider a model with a set of authorization rules over the joins of basic relations, defined by the cooperating parties; the accessible information is constrained by these rules. Specifically, the following critical issues were examined: (1) combining rule enforcement and query planning in an algorithm that simultaneously checks the enforceability of each rule and, whenever enforcement is possible, generates a minimum-cost execution plan under a given cost metric; (2) limiting access to the shared data in other forms, via safety properties and selection conditions, with algorithms for both forms that remove conflicts or violations between the limited accesses and model queries; (3) using graph databases with our authorization rules and query-planning model to conduct similarity search between tuples, representing relational tuples as a graph with weighted edges to enable queries involving "similarity" across tuples, together with an algorithm that exploits correlations between attributes to create virtual attributes that capture much of the data variance and speed up similarity search; and (4) a framework for defining test functionalities, their composition, and their access control, with an algorithm that determines how to realize a given test via valid compositions of individual functionalities while minimizing the number of parties involved. The significance of this research resides in solving real-world issues that arise when enterprises use cloud services. Extensive evaluations revealed that the collaborative data access model improves security during cooperative data processing; that systematically and efficiently resolving access-rule conflicts minimizes possible data leakage; and that a systematic approach to diagnosing control failures reduces troubleshooting time, all of which improves availability and resiliency. The study contributes to knowledge, literature, and practice, and opens up space for further studies in various aspects of secure data cooperation in large-scale cyber and cyber-physical infrastructures. / Computer and Information Science
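To make the rule model concrete, here is a minimal sketch of the kind of check such a scheme implies: a query over a join of relations is permitted only if some authorization rule covers both the joined relations and the requested attributes. The function name, the set-based encoding, and the coverage test are illustrative assumptions; the thesis's actual algorithm additionally generates a minimum-cost execution plan for the authorized join.

```python
def is_authorized(query_relations, query_attrs, rules):
    """Check a query against cooperative authorization rules.

    Each rule is a pair (relations, attributes): it authorizes access to
    those attributes over the join of those relations. A query passes if
    some rule joins at least the query's relations and exposes every
    attribute the query requests. (Illustrative only: the real check
    also plans how the authorized join is executed.)
    """
    return any(query_relations <= rels and query_attrs <= attrs
               for rels, attrs in rules)

# Example: two parties jointly authorize name/balance over Customers JOIN Accounts.
rules = [({"Customers", "Accounts"}, {"name", "balance"})]
print(is_authorized({"Customers"}, {"name"}, rules))            # True
print(is_authorized({"Customers", "Loans"}, {"name"}, rules))   # False
```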
22

Abnormal Pattern Recognition in Spatial Data

Kou, Yufeng 26 January 2007 (has links)
In recent years, abnormal spatial pattern recognition has received a great deal of attention from both industry and academia, and has become an important branch of data mining. Abnormal spatial patterns, or spatial outliers, are those observations whose characteristics are markedly different from their spatial neighbors. The identification of spatial outliers can be used to reveal hidden but valuable knowledge in many applications. For example, it can help locate extreme meteorological events such as tornadoes and hurricanes, identify aberrant genes or tumor cells, discover highway traffic congestion points, pinpoint military targets in satellite images, determine possible locations of oil reservoirs, and detect water pollution incidents. Numerous traditional outlier detection methods have been developed, but they cannot be directly applied to spatial data in order to extract abnormal patterns. Traditional outlier detection mainly focuses on "global comparison" and identifies deviations from the remainder of the entire data set. In contrast, spatial outlier detection concentrates on discovering neighborhood instabilities that break the spatial continuity. In recent years, a number of techniques have been proposed for spatial outlier detection. However, they have the following limitations. First, most of them focus primarily on single-attribute outlier detection. Second, they may not accurately locate outliers when multiple outliers exist in a cluster and correlate with each other. Third, the existing algorithms tend to abstract spatial objects as isolated points and do not consider their geometrical and topological properties, which may lead to inexact results. This dissertation reports a study of the problem of abnormal spatial pattern recognition and proposes a suite of novel algorithms. Contributions include: (1) formal definitions of various spatial outliers, including single-attribute outliers, multi-attribute outliers, and region outliers; (2) a set of algorithms for the accurate detection of single-attribute spatial outliers; (3) a systematic approach to identifying and tracking region outliers in continuous meteorological data sequences; (4) a novel Mahalanobis-distance-based algorithm to detect outliers with multiple attributes; (5) a set of graph-based algorithms to identify point outliers and region outliers; and (6) extensive analysis of experiments on several spatial data sets (e.g., West Nile virus data and NOAA meteorological data) to evaluate the effectiveness and efficiency of the proposed algorithms. / Ph. D.
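As an illustration of the "local versus global" distinction drawn above, the sketch below flags single-attribute spatial outliers by comparing each observation to the average of its spatial neighbors rather than to the whole data set. It is a minimal standardized-difference detector; the function name and the z-score normalization are illustrative assumptions, not the dissertation's exact algorithms.

```python
import numpy as np

def spatial_outliers(values, neighbors, threshold=2.0):
    """Flag locations whose attribute deviates from their spatial neighborhood.

    values: 1-D array with one attribute value per location.
    neighbors: neighbors[i] is the list of indices spatially adjacent to i.
    A location is an outlier if its difference from the neighborhood mean,
    standardized over all locations, exceeds the threshold.
    """
    diffs = np.array([values[i] - np.mean([values[j] for j in nbrs])
                      for i, nbrs in enumerate(neighbors)])
    z = (diffs - diffs.mean()) / diffs.std()
    return np.where(np.abs(z) > threshold)[0]
```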
23

Modifikace metody Pivot Tables pro perzistentní metrické indexování / Modification of Pivot Tables method for persistent metric indexing

Moško, Juraj January 2011 (has links)
Pivot tables is one of the most effective metric access methods, optimized to minimize the number of distance computations in similarity search. This work proposes a new modification of the pivot tables method that is optimized not only for the number of distance computations but also for the number of I/O operations. The proposed Clustered Pivot Tables method indexes clusters of similar objects created by another metric access method, the M-tree. Indexing clustered objects benefits search within the indexed database: because the clusters are paged in secondary memory, a page containing a cluster that cannot satisfy a particular query is never accessed at all. Non-relevant objects outside the query range are not loaded into memory, which decreases the number of I/O operations and the total volume of transferred data. The correctness of the proposed approach was experimentally verified, and the experimental results of the proposed method were compared to selected metric access methods.
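For readers unfamiliar with pivot tables, the sketch below shows the core filtering principle the method relies on: precomputed object-to-pivot distances give a triangle-inequality lower bound that discards most objects without computing their distance to the query. This is a plain, unclustered, in-memory sketch under assumed names; the clustering and paging layer that the proposed Clustered Pivot Tables adds is not shown.

```python
def pivot_range_query(query, radius, objects, pivots, dist, pivot_dists):
    """Range query with pivot-based filtering.

    pivot_dists[i][j] = dist(objects[i], pivots[j]), precomputed at build time.
    By the triangle inequality, |d(q,p) - d(o,p)| <= d(q,o) for any pivot p,
    so the maximum over pivots is a lower bound on d(q,o).
    """
    q_to_p = [dist(query, p) for p in pivots]   # one distance per pivot
    hits = []
    for i, obj in enumerate(objects):
        lower_bound = max(abs(qp - op)
                          for qp, op in zip(q_to_p, pivot_dists[i]))
        if lower_bound <= radius and dist(query, obj) <= radius:
            hits.append(obj)                    # verified with a real distance
    return hits
```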
24

Ontological Reasoning with Taxonomies in RDF Database

Hoferek, Ondřej January 2013 (has links)
As the technologies for realising the idea of the Semantic Web have evolved rapidly over the past few years, it is now possible to use them in a variety of applications. Because they are designed to process and analyze the semantic information found in data, they are particularly suitable for the task of enhancing the relevance of document retrieval. In this work, we discuss the possibilities of identifying a suitable subset of the expressive capabilities of the SPARQL query language and create a component that encapsulates the technical details of its usage.
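As an example of the kind of taxonomy-aware querying such a component would encapsulate, the following sketch uses rdflib to run a SPARQL query that follows rdfs:subClassOf links transitively. The file name and the example namespace are hypothetical; only the SPARQL 1.1 property-path idiom is the point.

```python
from rdflib import Graph

g = Graph()
g.parse("taxonomy.ttl", format="turtle")  # hypothetical ontology file

# Find every resource typed as ex:Vehicle or any transitive subclass of it.
results = g.query("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/>
    SELECT ?item WHERE {
        ?item a ?cls .
        ?cls rdfs:subClassOf* ex:Vehicle .
    }
""")
for row in results:
    print(row.item)
```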
25

Seleção de características por meio de algoritmos genéticos para aprimoramento de rankings e de modelos de classificação / Feature selection by genetic algorithms to improve ranking and classification models

Silva, Sérgio Francisco da 25 April 2011 (has links)
Content-based image retrieval (CBIR) and classification systems rely on feature vectors extracted from images according to specific visual criteria. The size of a feature vector is commonly on the order of hundreds of elements. As the size (dimensionality) of the feature vector increases, so do the degrees of redundancy and irrelevancy, leading to the "curse of dimensionality" problem. The selection of relevant features is thus a key step for CBIR and classification systems. This thesis presents new feature selection methods based on genetic algorithms (GA), aimed at improving similarity queries and classification models. The proposed Fc ("Fitness coach") family of fitness functions takes advantage of single-valued ranking evaluation functions to develop a new GA-based feature selection approach tailored to improve the accuracy of CBIR systems. Using the proposed evaluation criteria (the Fc family), the GA search improves the precision of query answers by up to 22% on the analyzed databases when compared to traditional wrapper feature selection methods based on decision trees (C4.5), naive Bayes, support vector machines, 1-nearest neighbor, and association rule mining. 
Other contributions of this thesis are two filter-based feature selection algorithms for classification purposes, which use the supervised simplified silhouette statistic as the evaluation function: the silhouette-based greedy search (SiGS) and the silhouette-based genetic algorithm search (SiGAS). The proposed algorithms outperform the state-of-the-art methods (CFS, FCBF, and ReliefF, among others). It is important to stress that the gain in accuracy obtained by the Fc family and by the proposed SiGS and SiGAS methods is allied to a significant decrease in the feature vector size, which can reach up to 90%.
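A generic skeleton of the GA-based selection described above is sketched below: a genetic algorithm evolves bit masks over the features, scoring each mask with a fitness callable. The Fc ranking functions and the silhouette statistic are thesis-specific and stand in here as an abstract `fitness` parameter; the population size, truncation selection, and mutation rate are arbitrary illustrative choices.

```python
import random

def ga_feature_selection(fitness, n_features, pop_size=30, generations=50,
                         p_mut=0.02):
    """Evolve a 0/1 feature mask maximizing a user-supplied fitness.

    fitness: callable taking a tuple of 0/1 flags and returning a score,
             e.g. retrieval precision of a CBIR system on a validation set.
    """
    pop = [tuple(random.randint(0, 1) for _ in range(n_features))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)     # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(n_features):               # bit-flip mutation
                if random.random() < p_mut:
                    child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)
```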
26

Algoritmos de bulk-loading para o método de acesso métrico Onion-tree / Bulk-loading algorithms to the metric access method onion-tree

Carosia, Arthur Emanuel de Oliveira 27 May 2013 (has links)
The main-memory Onion-tree [Carélo et al., 2009] is the most efficient metric access method for similarity search to date. It indexes complex data by dividing the metric space into disjoint regions (i.e., subspaces), using two pivots per node. To provide a good division of the metric space, the Onion-tree introduces the following characteristics: (i) an expansion procedure, with a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique, which can replace the pivots of a leaf node during insert operations based on a replacement policy that ensures a better division of the metric space, regardless of the insertion order of the elements; and (iii) algorithms for processing range and k-NN queries, so that these query types can efficiently use the partitioning method of the Onion-tree. However, the Onion-tree only supports element-by-element insertion into its structure. 
Another important issue is the mass loading technique, called bulk-loading, which builds the index considering all elements of the dataset at once. This technique is very useful for reconstructing an index or inserting a large number of elements simultaneously, and its main advantage is analyzing the data in advance to guarantee the best possible partitioning of the metric space. Despite its importance, to the best of our knowledge, there are no bulk-loading algorithms for the Onion-tree in the literature. This master's thesis fills this gap by proposing three bulk-loading algorithms for Onion-trees, GreedyBL, SampleBL, and HeightBL, which are based respectively on greedy, sampling, and index-height-estimation approaches. Performance tests with real-world data of varying volume (from 2,536 to 102,240 images) and dimensionality (from 32 to 117 dimensions) showed that the indices produced by the proposed algorithms are very compact. Compared with element-by-element insertion, the index size was reduced by 9% to 88%. The proposed algorithms also greatly improved query processing: they required 16% to 99% fewer distance calculations and were 9% to 99% faster for range queries, and they required 13% to 86% fewer distance calculations and were 9% to 63% faster for k-NN queries.
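The abstract does not spell out the three algorithms, but the flavor of a sampling-based bulk load can be sketched: analyze a sample of the data up front to choose pivots that divide the metric space well, instead of accepting whatever the insertion order dictates. Everything below (the two-step far-point heuristic, the sample size) is an illustrative assumption, not SampleBL itself.

```python
import random

def sample_pivots(data, dist, sample_size=100):
    """Choose two far-apart pivots from a random sample of the data set.

    A crude stand-in for the analysis step of a sampling-based bulk load:
    pivot pairs that lie far apart tend to split the metric space into
    better-balanced disjoint regions.
    """
    sample = random.sample(data, min(sample_size, len(data)))
    anchor = sample[0]
    p1 = max(sample, key=lambda x: dist(x, anchor))  # far from an arbitrary anchor
    p2 = max(sample, key=lambda x: dist(x, p1))      # far from the first pivot
    return p1, p2
```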
27

Tratamento de tempo e dinamicidade em dados representados em espaços métricos / Treatment of time and dynamics in data represented in metric spaces

Bueno, Renato 15 December 2009 (has links)
Database Management Systems must nowadays be able to manage complex data, such as multimedia data, genetic sequences, and time series, in addition to traditional data. In queries over large collections of complex data, similarity between elements is the most relevant concept, and it can be adequately expressed when data are represented in metric spaces. Regardless of the data domain, there are applications that must track the evolution of data over time. However, the existing Metric Access Methods assume that data elements are immutable. Aiming at both treating time and allowing changes in metric data, the work presented in this thesis consists of two main parts. The first part addresses the inclusion of removal and update operations in metric access methods. These operations serve application domains in which metric data changes over time, regardless of whether temporal information needs to be managed. A new method for metric-tree optimization, based on the proposed removal algorithm, was also developed in this part of the work. The second part addresses the inclusion of the concept of temporal evolution in data represented in metric spaces. The Metric-Temporal Space is proposed: a representation model that allows comparing elements consisting of metric data with associated temporal information. 
The model includes a method to identify the relative contributions of the temporal and the metric components in the final similarity calculation. Strategies for analyzing trajectories of metric data over time were also presented, through the immersion of metric-temporal spaces in dimensional spaces. Finally, a new method for weighting multiple image descriptors was presented, derived from modifications to the method proposed to identify the contributions of the components that can form a metric-temporal space.
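A worked miniature of the metric-temporal idea, assuming a simple convex combination: the final distance mixes a metric-space component and a temporal component, with a weight expressing their relative contributions. The thesis's method for identifying that weight is not reproduced here; the linear form and parameter names are assumptions.

```python
def metric_temporal_distance(d_metric, d_time, w=0.5):
    """Blend a metric distance with a temporal distance.

    w in [0, 1] is the relative contribution of the metric component;
    w=1 ignores time, w=0 compares timestamps only.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must lie in [0, 1]")
    return w * d_metric + (1.0 - w) * d_time

# Example: two images with feature distance 0.8, taken 3 time units apart.
print(metric_temporal_distance(0.8, 3.0, w=0.7))  # 0.7*0.8 + 0.3*3.0 = 1.46
```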
28

Effective and efficient similarity search in databases

Lange, Dustin January 2013 (has links)
Given a large set of records in a database and a query record, similarity search aims to find all records sufficiently similar to the query record. To solve this problem, two main aspects need to be considered. First, to perform effective search, the set of relevant records is defined using a similarity measure. Second, an efficient access method must be found that performs only few database accesses and comparisons under that similarity measure. This thesis addresses both aspects, with an emphasis on the latter. In the first part of this thesis, a frequency-aware similarity measure is introduced: compared record pairs are partitioned according to the frequencies of their attribute values, and for each partition a different similarity measure is created by using machine learning techniques to combine a set of base similarity measures into an overall similarity measure. After that, a similarity index for string attributes is proposed, the State Set Index (SSI), which is based on a trie (prefix tree) interpreted as a nondeterministic finite automaton. For processing range queries, the notion of query plans is introduced to describe which similarity indexes to access and which thresholds to apply; the query result should be as complete as possible under a given cost threshold. Two query planning variants are introduced: (1) static planning selects a plan at compile time that is used for all queries; (2) query-specific planning selects a different plan for each query. For answering top-k queries, the Bulk Sorted Access Algorithm (BSA) is introduced, which retrieves large chunks of records from the similarity indexes using fixed thresholds and focuses its efforts on records that rank high in more than one attribute and are thus promising candidates. The described components form a complete similarity search system. Based on prototypical implementations, this thesis shows comparative evaluation results for all proposed approaches on different real-world data sets, one of which is a large person data set from a German credit rating agency.
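Of the components above, the frequency-aware measure is the easiest to miniaturize: record pairs whose attribute values are rare land in a different partition, and each partition applies its own similarity measure. The attribute handling, binning scheme, and measures below are hypothetical stand-ins; in the thesis the per-partition measures are learned combinations of base measures.

```python
def frequency_aware_similarity(a, b, value_freq, bounds, measures):
    """Pick a similarity measure based on attribute-value frequency.

    value_freq: occurrence count of each attribute value in the database.
    bounds: ascending frequency upper bounds defining the partitions.
    measures: one similarity function per partition (len(bounds) + 1 total).
    """
    f = min(value_freq.get(a, 0), value_freq.get(b, 0))
    for bound, measure in zip(bounds, measures):
        if f <= bound:
            return measure(a, b)
    return measures[-1](a, b)  # highest-frequency partition
```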
29

Similarity Search And Analysis Of Protein Sequences And Structures: A Residue Contacts Based Approach

Sacan, Ahmet 01 August 2008 (has links)
The advent of high-throughput sequencing and structure-determination techniques has had a tremendous impact on our quest to crack the language of life. Genomic and protein data are now accumulating at a phenomenal rate, with the motivation of deriving insights into the function, mechanism, and evolution of biomolecules through analysis of their similarities, differences, and interactions. The rapid increase in the size of biomolecular databases, however, calls for the development of new computational methods for sensitive and efficient management and analysis of this information. In this thesis, we propose and implement several approaches for accurate and highly efficient comparison and retrieval of protein sequences and structures. The observation that corresponding residues in related proteins share similar inter-residue contacts is exploited to derive a new set of biologically sensitive, metric amino acid substitution matrices, yielding accurate alignment and comparison of proteins. The metricity of these matrices allows efficient indexing and retrieval of both protein sequences and structures. A landmark-guided embedding of protein sequences is developed to represent subsequences in a vector space for approximate, but extremely fast, spatial indexing and similarity search. Whereas protein structure comparison and search tasks have hitherto been handled separately, we propose an integrated approach that serves both tasks and performs comparably to or better than other available methods. Our approach hinges on the identification of similar residue contacts using distance-based indexing and provides the best of both worlds: the accuracy of detailed structure alignment algorithms at a speed comparable to that of structure retrieval algorithms. We expect that the methods and tools developed in this study will find use in a wide range of application areas, including annotation of new proteins, discovery of functional motifs, discerning evolutionary relationships among genes and species, and drug design and targeting.
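The metricity requirement mentioned above, which is what makes distance-based indexing applicable, can be checked directly. The sketch below verifies identity, positivity, symmetry, and the triangle inequality on a square distance matrix, such as one derived from a substitution matrix; the brute-force O(n^3) loop is cheap for a 20x20 amino-acid alphabet. The function name is an assumption for illustration.

```python
from itertools import product

def is_metric(d):
    """Check whether a square distance matrix satisfies the metric axioms."""
    n = len(d)
    identity  = all(d[i][i] == 0 for i in range(n))
    positive  = all(d[i][j] > 0 for i in range(n) for j in range(n) if i != j)
    symmetric = all(d[i][j] == d[j][i] for i in range(n) for j in range(n))
    triangle  = all(d[i][j] <= d[i][k] + d[k][j]
                    for i, j, k in product(range(n), repeat=3))
    return identity and positive and symmetric and triangle
```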
30

New paradigms for approximate nearest-neighbor search

Ram, Parikshit 20 September 2013 (has links)
Nearest-neighbor search is a very natural and universal problem in computer science. Often, the problem size necessitates approximation. In this thesis, I present new paradigms for nearest-neighbor search (along with new algorithms and theory in these paradigms) that make nearest-neighbor search more usable and accurate. First, I consider a new notion of search error, the rank error, for an approximate neighbor candidate. Rank error corresponds to the number of possible candidates that are better than the approximate neighbor candidate. I motivate this notion of error and present new efficient algorithms that return approximate neighbors with rank error no more than a user-specified amount. Then I focus on approximate search in a scenario where the user does not specify the tolerable search error (error constraint); instead, the user specifies the amount of time available for search (time constraint). After differentiating between these two scenarios, I present some simple algorithms for time-constrained search with provable performance guarantees. I use this theory to motivate a new space-partitioning data structure, the max-margin tree, for improved search performance in the time-constrained setting. Finally, I consider the scenario where we do not require our objects to have an explicit fixed-length representation (vector data). This allows us to search over a large class of objects, including images, documents, graphs, strings, time series, and natural language. For nearest-neighbor search in this general setting, I present a novel, provably fast exact search algorithm. I also discuss the empirical performance of all the presented algorithms on real data.
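The rank-error notion is simple to state in code: it counts how many database points are strictly closer to the query than the returned candidate, so an exact nearest neighbor has rank error 0. A brute-force check like the sketch below (names assumed for illustration) is how one would evaluate an approximate algorithm's output, not how the fast algorithms themselves work.

```python
def rank_error(query, candidate, data, dist):
    """Number of database points strictly closer to the query than the
    candidate; 0 means the candidate is a true nearest neighbor."""
    d_cand = dist(query, candidate)
    return sum(1 for x in data if dist(query, x) < d_cand)
```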
