1211

ON THE CONVERGENCE AND APPLICATIONS OF MEAN SHIFT TYPE ALGORITHMS

Aliyari Ghassabeh, Youness 01 October 2013 (has links)
Mean shift (MS) and subspace constrained mean shift (SCMS) algorithms are non-parametric, iterative methods for finding a representation of a high-dimensional data set on a principal curve or surface embedded in a high-dimensional space. The representation of high-dimensional data on a principal curve or surface, the class of mean shift type algorithms and their properties, and applications of these algorithms are the main focus of this dissertation. Although MS and SCMS algorithms have been used in many applications, a rigorous study of their convergence is still missing. This dissertation aims to fill some of the gaps between theory and practice by investigating some convergence properties of these algorithms. In particular, we propose a sufficient condition for a kernel density estimate with a Gaussian kernel to have isolated stationary points, which guarantees the convergence of the MS algorithm. We also show that the SCMS algorithm inherits some of the important convergence properties of the MS algorithm; in particular, the monotonicity and convergence of the density estimate values along the sequence of output values of the algorithm are shown. We also show that the distance between consecutive points of the output sequence converges to zero, as does the projection of the gradient vector onto the subspace spanned by the D-d eigenvectors corresponding to the D-d largest eigenvalues of the local inverse covariance matrix. Furthermore, three new variations of the SCMS algorithm are proposed, and the running times and performance of the resulting algorithms are compared with the original SCMS algorithm. We also propose an adaptive version of the SCMS algorithm that accounts for the effect of new incoming samples without running the algorithm on the whole data set. Finally, we develop some new potential applications of the MS and SCMS algorithms. These applications involve finding straight lines in digital images; pre-processing data before applying locally linear embedding (LLE) and ISOMAP for dimensionality reduction; noisy source vector quantization, where the clean data need to be estimated before the quantization step; improving the performance of kernel regression in certain situations; and skeletonization of digitally stored handwritten characters. / Thesis (Ph.D, Mathematics & Statistics) -- Queen's University, 2013-09-30 18:01:12.959
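
For reference, a minimal sketch of the MS iteration with a Gaussian kernel follows (a generic illustration, not code from the thesis; the bandwidth, the uniform sample weighting, and the stopping rule are assumptions):

```python
import numpy as np

def mean_shift(data, x0, bandwidth=1.0, tol=1e-8, max_iter=1000):
    """Iterate the mean shift update with a Gaussian kernel, stopping when
    the distance between consecutive points falls below tol (the quantity
    shown above to converge to zero)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        diff = data - x
        w = np.exp(-0.5 * np.sum(diff ** 2, axis=1) / bandwidth ** 2)
        x_new = w @ data / w.sum()          # kernel-weighted sample mean
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

Running the iteration from each data point and grouping the modes it converges to gives the usual MS clustering.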
1212

A New Segmentation Algorithm for Prostate Boundary Detection in 2D Ultrasound Images

Chiu, Bernard January 2003 (has links)
Prostate segmentation is a required step in determining the volume of a prostate, which is very important in the diagnosis and treatment of prostate cancer. In the past, radiologists manually segmented the two-dimensional cross-sectional ultrasound images. Typically, they must outline at least a hundred cross-sectional images to get an accurate estimate of the prostate's volume, an approach that is very time-consuming. To accomplish this task more efficiently, an automated procedure has to be developed. However, because of the low quality of ultrasound images, it is very difficult to develop a computerized method for defining the boundary of an object in an ultrasound image. The goal of this thesis is to find an automated segmentation algorithm for detecting the boundary of the prostate in ultrasound images. As the first step in this endeavour, a semi-automatic segmentation method is designed. This method is only semi-automatic because it requires the user to enter four initialization points, the data required to define the initial contour. The discrete dynamic contour (DDC) algorithm is then used to automatically update the contour. The DDC model is made up of a set of connected vertices. When provided with an energy field that describes the features of the ultrasound image, the model automatically adjusts the vertices of the contour to attain a maximum energy. In the proposed algorithm, Mallat's dyadic wavelet transform is used to determine the energy field. Using the dyadic wavelet transform, approximation coefficients and detail coefficients at different scales can be generated. In particular, the two sets of detail coefficients represent the gradient of the smoothed ultrasound image. Since the gradient modulus is high at the locations where edge features appear, it is assigned to be the energy field used to drive the DDC model. The ultimate goal of this work is to develop a fully automatic segmentation algorithm. Since only the initialization stage requires human supervision in the proposed semi-automatic algorithm, the task of developing a fully automatic segmentation algorithm reduces to designing a fully automatic initialization process. Such a process is introduced in this thesis. In this work, the contours defined by the semi-automatic and the fully automatic segmentation algorithms are compared with the boundary outlined by an expert observer. Tested on 8 sample images, the mean absolute difference between the semi-automatically defined and the manually outlined boundary is less than 2.5 pixels, and that between the fully automatically defined and the manually outlined boundary is less than 4 pixels. Automated segmentation tools that achieve this level of accuracy would be very useful in helping radiologists segment the prostate boundary much more efficiently.
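
A sketch of the energy-field computation described above; Gaussian smoothing plus a finite-difference gradient stands in for the detail coefficients of Mallat's dyadic wavelet transform (which likewise encode the gradient of a smoothed image), and the sigma value is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def energy_field(image, sigma=2.0):
    """Gradient modulus of the smoothed image: high where edge features
    appear, so it can drive the DDC vertices toward the boundary."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)          # gradients along rows, columns
    return np.hypot(gx, gy)
```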
1213

DEUM : a framework for an estimation of distribution algorithm based on Markov random fields

Shakya, Siddhartha January 2006 (has links)
Estimation of Distribution Algorithms (EDAs) belong to the class of population-based optimisation algorithms. They are motivated by the idea of discovering and exploiting the interaction between variables in the solution. They estimate a probability distribution from a population of solutions and sample it to generate the next population. Many EDAs use probabilistic graphical modelling techniques for this purpose. In particular, directed graphical models (Bayesian networks) have been widely used in EDAs. This thesis proposes an undirected graphical model (Markov Random Field, MRF) approach to estimating and sampling the distribution in EDAs. The interaction between variables in the solution is modelled as an undirected graph, and the joint probability of a solution is factorised as a Gibbs distribution. The thesis describes a model of the fitness function that approximates the energy in the Gibbs distribution, and shows how this model can be fitted to a population of solutions to estimate the parameters of the MRF. The estimated MRF is then sampled to generate the next population. This approach is applied to the estimation of distribution in a general EDA framework called Distribution Estimation using Markov Random Fields (DEUM). The thesis then proposes several variants of DEUM using different sampling techniques and tests their performance on a range of optimisation problems. The results show that, for most of the tested problems, the DEUM algorithms significantly outperform other EDAs, both in the number of fitness evaluations and in the quality of the solutions found. There are two main explanations for the success of DEUM algorithms. Firstly, DEUM builds a model of the fitness function to approximate the MRF. This contrasts with other EDAs, which build a model of selected solutions, and allows DEUM to use fitness in the variation part of evolution. Secondly, DEUM exploits the temperature coefficient in the Gibbs distribution to regulate the behaviour of the algorithm: with higher temperature the distribution is closer to uniform, and with lower temperature it concentrates near some global optima. This gives DEUM explicit control over the convergence of the algorithm, resulting in better optimisation.
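
A minimal sketch of a univariate DEUM-style loop on bit strings may help fix ideas (the truncation selection, the ±1 encoding, and the fixed temperature coefficient beta are illustrative assumptions; the thesis's MRF captures richer interactions than this independent model):

```python
import numpy as np

def deum_d(fitness, n_bits, pop_size=100, sel_size=30, beta=2.0, gens=50):
    """Sketch of a univariate DEUM-style EDA: fit a linear model of fitness
    to the selected solutions, turn each coefficient into a Gibbs marginal,
    and sample the next population from those marginals."""
    rng = np.random.default_rng(0)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(gens):
        fit = np.array([fitness(x) for x in pop])
        idx = np.argsort(fit)[-sel_size:]            # truncation selection
        # Least-squares fit of f(x) ~ a0 + sum_i a_i * s_i, with s_i in {-1, +1}
        S = np.hstack([np.ones((sel_size, 1)), 2 * pop[idx] - 1])
        a, *_ = np.linalg.lstsq(S, fit[idx], rcond=None)
        # Gibbs marginals: lower beta (higher temperature) -> closer to uniform
        p = 1.0 / (1.0 + np.exp(-2.0 * beta * a[1:]))
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    fit = np.array([fitness(x) for x in pop])
    return pop[fit.argmax()]

best = deum_d(lambda x: x.sum(), n_bits=20)          # toy usage: OneMax
```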
1214

Designing optical multi-band networks : polyhedral analysis and algorithms

Benhamiche, Amal 12 December 2013 (has links)
In this thesis we consider two capacitated network design (CND) problems arising from OFDM multi-band technology. The first problem concerns single-layer network design with specific requirements. We give an ILP formulation for this problem and study the polyhedron associated with its arc-set restriction. We describe two families of facet-defining inequalities and devise a Branch-and-Cut algorithm for the problem. Next, we investigate the multilayer version of the problem. We propose several ILP formulations and study the polyhedron associated with the first (cut-based) formulation. We identify several classes of facets and discuss the related separation problems. We devise a Branch-and-Cut algorithm embedding the valid inequalities identified for both the single-layer and multilayer problems. The second formulation is compact, holding a polynomial number of constraints and variables. Finally, two path-based formulations are given, which yield two efficient Branch-and-Price algorithms for the problem.
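
The cutting-plane core of a Branch-and-Cut method can be sketched generically: solve the LP relaxation, ask a separation oracle for a violated valid inequality, add it, and resolve. The sketch below is only that generic loop, with SciPy's linprog, 0-1 variable bounds, and the separate oracle all assumed; the thesis's separation routines are specific to its OFDM inequalities:

```python
import numpy as np
from scipy.optimize import linprog

def cutting_plane(c, A_ub, b_ub, separate, max_rounds=100):
    """Repeatedly solve min c@x s.t. A_ub@x <= b_ub, 0 <= x <= 1, then add
    the violated inequality (alpha, beta) meaning alpha@x <= beta returned
    by the oracle, until the oracle reports no violation (returns None)."""
    A_ub, b_ub = [list(row) for row in A_ub], list(b_ub)
    res = None
    for _ in range(max_rounds):
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, 1))
        cut = separate(res.x)
        if cut is None:                      # fractional point is feasible
            break
        alpha, beta = cut
        A_ub.append(list(alpha))
        b_ub.append(beta)
    return res
```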
1215

Interpretation and improvement of an iterative demodulation procedure

Naja, Ziad 01 April 2011 (has links)
Information geometry is the mathematical theory that applies methods of differential geometry to statistics and information theory. It is a very promising technique for analyzing and illustrating the iterative algorithms used in digital communications. This thesis applies this technique, together with another well-known optimization technique, the proximal point algorithm, to iterative algorithms in general. We first found interesting geometric interpretations (based on information geometry) and proximal interpretations (based on the proximal point algorithm) for the Blahut-Arimoto algorithm, an iterative algorithm that computes the capacity of discrete memoryless channels. These results motivated us to extend the application to a larger class of iterative algorithms. We then studied in detail the iterative decoding algorithm for bit-interleaved coded modulation (BICM) in order to find similar interpretations, establish links with the optimal maximum-likelihood criterion and other well-known algorithms, and propose improvements over the classical version of this algorithm, in particular regarding its convergence. Keywords: information geometry, proximal point algorithm, Blahut-Arimoto algorithm, iterative decoding, bit-interleaved coded modulation, maximum likelihood.
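
Since the Blahut-Arimoto algorithm is the running example above, a textbook implementation for a discrete memoryless channel is sketched below (the standard alternating-update form, not the thesis's geometric or proximal formulation):

```python
import numpy as np

def blahut_arimoto(P, tol=1e-10, max_iter=10000):
    """Capacity (in bits) of a DMC with transition matrix P[x, y] = p(y|x),
    by alternating updates of the input distribution r and the posterior q."""
    n_in, n_out = P.shape
    r = np.full(n_in, 1.0 / n_in)
    for _ in range(max_iter):
        q = r[:, None] * P
        q /= q.sum(axis=0, keepdims=True)     # posterior q(x|y)
        r_new = np.prod(q ** P, axis=1)       # r(x) proportional to prod_y q(x|y)^p(y|x)
        r_new /= r_new.sum()
        if np.abs(r_new - r).max() < tol:
            r = r_new
            break
        r = r_new
    capacity = 0.0
    for x in range(n_in):
        for y in range(n_out):
            if P[x, y] > 0 and r[x] > 0:      # skip zero-probability terms
                capacity += r[x] * P[x, y] * np.log2(q[x, y] / r[x])
    return capacity, r

# Binary symmetric channel with crossover 0.1: capacity is about 0.531 bits
C, r = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))
```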
1216

Feature selection for biometric recognition based on electrocardiogram signals

Teodoro, Felipe Gustavo Silva 22 June 2016 (has links)
The field of biometrics includes a variety of technologies used to identify and verify the identity of a person by measuring and analyzing various physical and/or behavioral aspects of the human being. Several biometric modalities have been proposed for the recognition of people, such as fingerprints, iris, face, and speech. These modalities have distinct characteristics in terms of performance, measurability, and acceptability. One issue to be considered in the real-world application of biometric systems is their robustness to attacks by circumvention, spoofing, and obfuscation. These attacks are becoming more frequent, and questions are being raised about the levels of security this technology can offer. Recently, biomedical signals such as the electrocardiogram (ECG), electroencephalogram (EEG), and electromyogram (EMG) have been studied for use in biometric recognition. The ECG signal is a function of the structural and functional anatomy of the heart and its surrounding tissues. The ECG of an individual therefore exhibits a unique cardiac pattern that cannot be easily forged or duplicated, which has motivated its use in identification systems. However, the number of features that can be extracted from this signal is very large. Feature selection has become the focus of much research in areas where databases with tens or hundreds of thousands of features are available. Feature selection helps in understanding the data, reducing computational cost, mitigating the curse of dimensionality, and improving predictor performance. Its goal is to select a subset of features that efficiently describes the input data while reducing the effects of noise and irrelevant features, and that still provides good prediction results. The aim of this dissertation is to analyze the impact of feature selection techniques such as Greedy Search, Backward Selection, Genetic Algorithms, Memetic Algorithms, and Particle Swarm Optimization on the performance achieved by ECG-based biometric systems. The classifiers used were k-Nearest Neighbors, Support Vector Machines, Optimum-Path Forest, and a minimum-distance classifier. The results demonstrate that there is a subset of features extracted from the ECG signal capable of providing high recognition rates.
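
As a concrete illustration of wrapper-style feature selection, here is a sketch of one of the listed techniques, greedy (forward) search, wrapped around a 1-nearest-neighbor classifier; the leave-one-out scoring and the stopping rule are assumptions, not the dissertation's exact protocol:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # a sample may not vote for itself
    return np.mean(y[d.argmin(axis=1)] == y)

def greedy_forward_selection(X, y, max_feats=10):
    """Greedy wrapper selection: repeatedly add the single feature that
    most improves the classifier's accuracy, stopping when none helps."""
    selected, best_acc = [], 0.0
    while len(selected) < max_feats:
        scores = {j: loo_accuracy(X[:, selected + [j]], y)
                  for j in range(X.shape[1]) if j not in selected}
        if not scores:
            break
        j, acc = max(scores.items(), key=lambda kv: kv[1])
        if acc <= best_acc:                     # no remaining feature helps
            break
        selected.append(j)
        best_acc = acc
    return selected, best_acc
```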
1217

Exact inference algorithms for first-order models

Takiyama, Felipe Iwao 27 February 2014 (has links)
This work describes the implementation of inference algorithms for first-order models. Three algorithms were implemented: VE, C-FOVE, and AC-FOVE. The latter is the state of the art in probability computation for relational Bayesian networks and had no implementation available. Development followed an agile methodology, resulting in a software package that can be reused in other implementations. We show that the resulting software has the performance expected from the theory, although with some limitations. This work also contributes new theoretical topics that complement the algorithm.
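
For reference, the propositional variable elimination step that VE performs, and that lifted algorithms such as C-FOVE and AC-FOVE generalize to groups of interchangeable random variables, can be sketched as follows (binary variables and a user-supplied elimination order are assumptions):

```python
import numpy as np

class Factor:
    """A discrete factor over named binary variables, stored as an
    n-dimensional table with one axis per variable."""
    def __init__(self, variables, table):
        self.vars = list(variables)
        self.table = np.asarray(table, dtype=float)

    def __mul__(self, other):
        # Align both tables on the union of their variables, then multiply.
        all_vars = self.vars + [v for v in other.vars if v not in self.vars]
        def expand(f):
            missing = [v for v in all_vars if v not in f.vars]
            t = f.table.reshape(f.table.shape + (1,) * len(missing))
            order = [(f.vars + missing).index(v) for v in all_vars]
            return np.transpose(t, order)
        return Factor(all_vars, expand(self) * expand(other))

    def marginalize(self, var):
        axis = self.vars.index(var)
        return Factor([v for v in self.vars if v != var], self.table.sum(axis=axis))

def variable_elimination(factors, elim_order):
    """Sum out each variable in turn, multiplying only the factors that
    mention it (assumes every eliminated variable appears in some factor)."""
    factors = list(factors)
    for var in elim_order:
        touching = [f for f in factors if var in f.vars]
        rest = [f for f in factors if var not in f.vars]
        prod = touching[0]
        for f in touching[1:]:
            prod = prod * f
        factors = rest + [prod.marginalize(var)]
    result = factors[0]
    for f in factors[1:]:
        result = result * f
    return result

# toy usage: P(A) * P(B|A), eliminate A -> unnormalized marginal over B
pa = Factor(['A'], [0.6, 0.4])
pba = Factor(['A', 'B'], [[0.9, 0.1], [0.2, 0.8]])
pb = variable_elimination([pa, pba], ['A'])   # table [0.62, 0.38]
```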
1218

Fault diagnosis in isotropic structures using artificial immune systems with negative and clonal selection

Oliveira, Daniela Cabral de January 2019 (has links)
Advisor: Fábio Roberto Chavarette / Abstract: This work develops a methodology for structural health monitoring of aircraft based on intelligent computing techniques, with the aim of detecting, localizing, and quantifying structural damage using artificial immune systems (AIS). This concept allows the diagnostic system to learn continuously, covering different damage situations without restarting the learning process. Two artificial immune algorithms were employed: the negative selection algorithm, responsible for pattern recognition, and the clonal selection algorithm, responsible for continuous learning. It was also possible to quantify the degree of influence of the damage in each of the five damage situations. To evaluate the methodology, an experimental bench was assembled with piezoelectric transducers acting as sensors and actuators, attached to an aluminum plate (representing the aircraft wing) to produce and collect waves; signals were collected in the undamaged condition and in five distinct damage situations. The results demonstrated the robustness and accuracy of the proposed methodology. / Doctorate
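
A sketch of the negative selection step described above, using the common real-valued variant with Euclidean matching; the detector radius, detector count, and [0, 1] feature scaling are assumptions, and the clonal selection (continuous learning) stage is omitted:

```python
import numpy as np

def train_detectors(normal, n_detectors=100, radius=0.1, max_tries=100000):
    """Negative selection: propose random detectors and keep only those
    that do NOT match any sample of the normal (self) set."""
    rng = np.random.default_rng(0)
    detectors = []
    for _ in range(max_tries):
        d = rng.random(normal.shape[1])          # features scaled to [0, 1]
        if np.linalg.norm(normal - d, axis=1).min() > radius:
            detectors.append(d)
            if len(detectors) == n_detectors:
                break
    return np.array(detectors)

def is_damaged(signal, detectors, radius=0.1):
    """Flag damage when any detector matches the measured signal."""
    return bool(np.linalg.norm(detectors - signal, axis=1).min() <= radius)
```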
1219

Efficient structure optimization methods for large systems and their applications to problems of heterogeneous catalysis

Niedziela, Andrzej 28 April 2016 (has links)
The present work concentrates on the development of the Rigid Body Genetic Algorithm (RBGA) and its application to the investigation of hydrocarbon adsorption on the MgO(001) surface. The RBGA is a modified hybrid genetic algorithm with rigid-body optimization at the local optimization step. This modification greatly simplifies the optimization problem and, in turn, makes it possible to search a large number of candidate configurations. The key assumption of the method is that the individual parts of the system (rigid bodies) do not change their internal configuration throughout the global optimization. The method is therefore well suited to studying phenomena such as adsorption, where all the subsystems, the surface and the individual molecules, preserve their internal structure. The algorithm finds global minima for the rigid bodies, which can then be fully optimized ("relaxed") to account for deformations due to the relaxation of the surface and the adsorbate molecules.
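
A minimal sketch of the rigid-body idea: the genome is a pose (a translation plus a rotation about the surface normal), so the molecule's internal geometry never changes during the global search. The pose bounds, mutation scale, and energy interface are assumptions, not the RBGA's actual parameters:

```python
import numpy as np

def rigid_transform(coords, pose):
    """Apply a rigid-body pose (x, y, z translation and rotation angle
    theta about the surface normal) to a molecule's coordinates."""
    tx, ty, tz, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return coords @ R.T + np.array([tx, ty, tz])

def rigid_body_ga(coords, energy, pop_size=30, gens=100):
    """GA over rigid-body poses: only the pose above the surface evolves;
    the internal configuration of the molecule is frozen."""
    rng = np.random.default_rng(0)
    pop = rng.uniform([-5, -5, 2, 0], [5, 5, 6, 2 * np.pi], size=(pop_size, 4))
    for _ in range(gens):
        fit = np.array([energy(rigid_transform(coords, p)) for p in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # lower energy is better
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = children + rng.normal(0, 0.1, children.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    fit = np.array([energy(rigid_transform(coords, p)) for p in pop])
    return pop[fit.argmin()]
```

The returned pose would then seed a full ("relaxed") optimization in which the frozen internal coordinates are released.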
1220

Instance-based anytime algorithms for data stream classification

Lemes, Cristiano Inácio 09 March 2016 (has links)
Data stream learning is an important research field that has received much attention from the scientific community. In many real-world applications, data are generated as potentially infinite temporal sequences. The main characteristic of stream processing is the need to provide answers under stringent time and memory restrictions. For example, a data stream classifier must provide an answer for each event before the next one arrives; otherwise, some events of the stream may be left unclassified. Many streams generate events at a highly variable arrival rate, i.e., the time interval between two consecutive events may vary greatly. For a learning system to be successful, two properties must be satisfied: (i) it must be able to provide a classification for a new example in a short time, and (ii) it must be able to adapt the classification model to handle concept change, since the data may not follow a stationary distribution. Batch machine learning algorithms do not satisfy these properties because they assume stationary distributions and are not prepared to operate under severe memory and processing constraints. To meet these requirements, such algorithms must be adapted to the data stream context. One possible adaptation is to turn the algorithm into an anytime classifier: anytime algorithms may be interrupted and still provide an approximate answer (classification) at any time. Another adaptation is to make the algorithm incremental, so that its model can be updated with new examples from the data stream. In this work, two approaches for data stream learning are evaluated. The first is based on a state-of-the-art anytime k-nearest-neighbor classifier, for which a new tiebreak approach is proposed; experiments show consistently better performance on many benchmark data sets. The second approach adapts the anytime algorithm for concept change. This approach, called the Incremental Anytime Algorithm, was designed in two versions, one based on the Space-Saving algorithm and the other on a sliding window. Experiments show that each version has its advantages and disadvantages depending on the stream, but overall both versions outperform the baseline approaches.
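
A sketch of the anytime property for the simplest instance-based case, 1-NN: the scan over training samples can be interrupted at any moment and still return the best label found so far. The explicit time budget below stands in for the arrival of the next stream event:

```python
import time
import numpy as np

def anytime_1nn(train_X, train_y, query, budget_s):
    """Anytime 1-NN: scan training samples until the time budget runs out,
    always keeping the closest label seen so far, so an approximate answer
    is available whenever the stream interrupts the computation."""
    deadline = time.perf_counter() + budget_s
    best_d, best_label = np.inf, None
    for x, label in zip(train_X, train_y):
        d = np.linalg.norm(x - query)
        if d < best_d:
            best_d, best_label = d, label
        if time.perf_counter() >= deadline:   # next event has arrived
            break
    return best_label
```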
