141 |
Implicit Runge-Kutta methods to simulate unsteady incompressible flows. Ijaz, Muhammad, 15 May 2009.
A numerical method (the SIMPLE DIRK method) for simulating unsteady incompressible viscous flow is presented. The proposed method can achieve arbitrarily high order of accuracy in the time discretization, which is otherwise limited to second order in the majority of currently used simulation techniques. A special class of implicit Runge-Kutta methods is used for the time discretization, in conjunction with a finite-volume-based SIMPLE algorithm. The algorithm was tested by solving for the velocity field in a lid-driven square cavity. In the test-case calculations, the power-law scheme was used for the spatial discretization, and the time discretization was performed with a second-order implicit Runge-Kutta method. The time evolution of the velocity profile along the cavity centerline was obtained with the proposed method and compared with that obtained from a commercial computational fluid dynamics program, FLUENT 6.2.16. The steady-state solution from the present method was also compared with the numerical solutions of Ghia, Ghia, and Shin and of Erturk, Corke, and Gökçöl. Good agreement with the solutions of FLUENT, of Ghia, Ghia, and Shin, and of Erturk, Corke, and Gökçöl establishes the feasibility of the proposed method.
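The abstract does not give the Butcher tableau used, so as a hedged illustration of the time-discretization ingredient, the sketch below applies one standard second-order, L-stable SDIRK scheme (Alexander's method) to a scalar model ODE. In the thesis the implicit stages are resolved inside the SIMPLE pressure-velocity iteration; here a generic root finder stands in for that step.

```python
# Hedged sketch: a two-stage, second-order, L-stable SDIRK scheme
# (Alexander's method, gamma = 1 - sqrt(2)/2) on a scalar model ODE.
# The stage equations are solved with a Newton-type root finder,
# standing in for the SIMPLE iteration used in the thesis.
import numpy as np
from scipy.optimize import fsolve

GAMMA = 1.0 - np.sqrt(2.0) / 2.0          # SDIRK(2,2) diagonal coefficient

def sdirk2_step(f, t, y, dt):
    """Advance y' = f(t, y) by one step of second-order SDIRK."""
    # Stage 1: k1 = f(t + gamma*dt, y + gamma*dt*k1)   (implicit)
    k1 = fsolve(lambda k: k - f(t + GAMMA * dt, y + GAMMA * dt * k), f(t, y))[0]
    # Stage 2: k2 = f(t + dt, y + (1-gamma)*dt*k1 + gamma*dt*k2)
    k2 = fsolve(lambda k: k - f(t + dt, y + (1 - GAMMA) * dt * k1 + GAMMA * dt * k), k1)[0]
    # Stiffly accurate update: b = (1-gamma, gamma)
    return y + dt * ((1 - GAMMA) * k1 + GAMMA * k2)

# Usage: stiff decay y' = -10 y, exact solution exp(-10 t).
f = lambda t, y: -10.0 * y
t, y, dt = 0.0, 1.0, 0.05
for _ in range(20):
    y = sdirk2_step(f, t, y, dt)
    t += dt
print(y, np.exp(-10.0 * t))   # numerical vs exact value at t = 1
```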
|
142 |
A Mathematical Contribution of Statistical Learning and Continuous Optimization Using Infinite and Semi-Infinite Programming to Computational Statistics. Ozogur-Akyuz, Sureyya, 01 February 2009.
A subfield of artificial intelligence, machine learning (ML) is concerned with the development of algorithms that allow computers to "learn". ML is the process of training a system with a large number of examples, extracting rules, and finding patterns in order to make predictions on new data points (examples). The most common machine learning schemes are supervised, semi-supervised, unsupervised, and reinforcement learning. These schemes apply to natural language processing, search engines, medical diagnosis, bioinformatics, credit-fraud detection, stock market analysis, classification of DNA sequences, and speech and handwriting recognition in computer vision, to name just a few. In this thesis, we focus on Support Vector Machines (SVMs), one of the most powerful methods currently used in machine learning.
As a first motivation, we develop a model selection tool built into the SVM in order to solve a particular problem of computational biology, the prediction of eukaryotic pro-peptide cleavage sites, applied to real data collected from the NCBI data bank. Based on this biological example, a generalized model selection method is employed as a generalization for all kinds of learning problems. In ML algorithms, one of the crucial issues is the representation of the data. Discrete geometric structures and, especially, linear separability of the data play an important role in ML. If the data are not linearly separable, a kernel function transforms the nonlinear data into a higher-dimensional space in which they become linearly separable. As data become heterogeneous and large-scale, single-kernel methods become insufficient to classify nonlinear data. Convex combinations of kernels were developed to classify this kind of data [8]. Nevertheless, such combinations are limited to a finite choice of kernels. To overcome this limitation, we propose a novel method of "infinite" kernel combinations for learning problems, with the help of infinite and semi-infinite programming over all elements of the kernel space. This makes it possible to study variations of kernel combinations when considering heterogeneous data in real-world applications. Kernels can be combined, e.g., along a homotopy parameter or a more specific parameter. Looking at all infinitesimally fine convex combinations of the kernels from the infinite kernel set, the margin is maximized subject to an infinite number of constraints with a compact index set and an additional (Riemann-Stieltjes) integral constraint due to the combinations. After a parametrization in the space of probability measures, the problem becomes semi-infinite. We analyze the regularity conditions under which the Reduction Ansatz holds, and we discuss the types of distribution functions arising in the structure of the constraints and of our bilevel optimization problem. Finally, we adapt well-known numerical methods of semi-infinite programming to our new kernel machine: we improve the discretization method for our specific model, propose two new algorithms, prove the convergence of the numerical methods, and analyze the optimality and convergence assumptions underlying these theorems.
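As a hedged illustration of the starting point that the thesis generalizes, the sketch below classifies nonlinearly separable data with a finite convex combination of RBF kernels. The kernel widths and weights are arbitrary placeholders; the infinite kernel machine of the thesis instead optimizes a probability measure over the whole (compact) kernel parameter set.

```python
# Hedged sketch: the finite convex-combination-of-kernels baseline [8]
# that the thesis extends to infinitely many kernels.  Weights are
# fixed by hand here, not learned.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(X, Y, gammas, weights):
    """Convex combination sum_i w_i * K_{gamma_i}(X, Y), with sum w_i = 1."""
    assert np.isclose(sum(weights), 1.0) and all(w >= 0 for w in weights)
    return sum(w * rbf_kernel(X, Y, gamma=g) for g, w in zip(gammas, weights))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # not linearly separable

gammas, weights = [0.1, 1.0, 10.0], [0.2, 0.5, 0.3]   # placeholder choices
K = combined_kernel(X, X, gammas, weights)
clf = SVC(kernel="precomputed").fit(K, y)             # SVM on the mixed kernel
print("train accuracy:", clf.score(K, y))
```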
|
143 |
Study on the Development of New BWR Core Analysis Scheme Based on the Continuous Energy Monte Carlo Burn-up Calculation Method. Tojo, Masashi (東條 匡志), 28 September 2007.
Nagoya University doctoral dissertation. Degree: Doctor of Engineering. Date conferred: 28 September 2007.
|
145 |
Mathematical and algorithmic analysis of modified Langevin dynamics. Trstanova, Zofia, 25 November 2016.
In statistical physics, the macroscopic information of interest for the systems under consideration can be inferred from averages over microscopic configurations distributed according to probability measures µ characterizing the thermodynamic state of the system. Due to the high dimensionality of the system (which is proportional to the number of particles), these configurations are most often sampled using trajectories of stochastic differential equations or Markov chains that are ergodic for the probability measure µ, which describes a system at constant temperature. One popular stochastic process for sampling this measure is the Langevin dynamics. In practice, the Langevin dynamics cannot be integrated analytically, so its solution is approximated with a numerical scheme. The numerical analysis of such discretization schemes is by now well understood when the kinetic energy is the standard quadratic one. One important limitation of estimators of ergodic averages is their possibly large statistical error. Under certain assumptions on the potential and kinetic energies, a central limit theorem can be shown to hold; the asymptotic variance may nonetheless be large due to the metastability of the Langevin process, which occurs as soon as the probability measure µ is multimodal.
In this thesis, we consider the discretization of modified Langevin dynamics which improve the sampling of the Boltzmann-Gibbs distribution by introducing a more general kinetic energy function U in place of the standard quadratic one. We have two situations in mind: (a) Adaptively Restrained (AR) Langevin dynamics, where the kinetic energy vanishes for small momenta and agrees with the standard kinetic energy for large momenta. The interest of this dynamics is that particles with low energy are restrained; the computational gain follows from the fact that interactions between restrained particles need not be updated. Due to the separability of the position and momentum marginals of the distribution, averages of observables which depend on the position variable are equal to those computed with the standard Langevin dynamics. The efficiency of the method lies in the trade-off between the computational gain and the asymptotic variance of ergodic averages, which may increase compared with the standard dynamics since there are a priori more correlations in time due to restrained particles. Moreover, since the kinetic energy vanishes on an open set, the associated Langevin dynamics fails to be hypoelliptic; a first task of this thesis is therefore to prove that the Langevin dynamics with such a modified kinetic energy is ergodic. The next step is a mathematical analysis of the asymptotic variance of the AR-Langevin dynamics. To complement the analysis of this method, we estimate the algorithmic speed-up of the cost of a single iteration as a function of the parameters of the dynamics. (b) We also consider Langevin dynamics with kinetic energies growing more than quadratically at infinity, in an attempt to reduce metastability. The extra freedom provided by the choice of the kinetic energy should be used to reduce the metastability of the dynamics; we explore this choice and demonstrate, on a simple low-dimensional example, an improved convergence of ergodic averages. An issue with the situations we consider is the stability of the discretized schemes. In order to obtain a weakly consistent method of order 2 (which is no longer trivial for a general kinetic energy), we rely on recently developed Metropolis schemes.
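A minimal sketch of situation (a), assuming a particular C^1 polynomial blend for the Adaptively Restrained kinetic energy (the abstract does not fix one) and using a first-order Euler-Maruyama discretization rather than the second-order Metropolized schemes analyzed in the thesis:

```python
# Hedged sketch: Euler-Maruyama discretization of 1-D Langevin dynamics
# with an AR kinetic energy U(p): zero below P_MIN, the standard p^2/2
# above P_MAX, smoothly blended in between.  The blend and the
# integrator are illustrative choices only.
import numpy as np

P_MIN, P_MAX, GAMMA, BETA, DT = 0.5, 1.0, 1.0, 1.0, 0.01

def dU(p):
    """Derivative of the AR kinetic energy (restrained below P_MIN)."""
    a = abs(p)
    if a <= P_MIN:
        return 0.0                      # restrained: position does not move
    if a >= P_MAX:
        return p                        # standard quadratic kinetic energy
    s = (a - P_MIN) / (P_MAX - P_MIN)   # smooth blend on [P_MIN, P_MAX]
    return p * (3 * s**2 - 2 * s**3)

dV = lambda q: q                        # harmonic potential V(q) = q^2 / 2

rng = np.random.default_rng(1)
q, p = 0.0, 2.0
for _ in range(100_000):
    q += DT * dU(p)                                   # dq/dt = U'(p)
    p += -DT * dV(q) - DT * GAMMA * dU(p) \
         + np.sqrt(2 * GAMMA * DT / BETA) * rng.normal()
print("sample (q, p):", q, p)
```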
|
146 |
Dynamics of one-dimensional flexible structures. Almeida, Priscilla Oliveira de, 01 November 2006.
In this work, the dynamics of one-dimensional continuous systems is computed. Problems of bars and beams with different boundary and intermediate conditions are treated in the context of the weak formulation, so that the Finite Element Method (FEM) can be applied and approximations of the natural frequencies and vibration modes of the system can be calculated. Once the modes are known (exactly or approximately), a reduced model of ordinary differential equations is constructed and the dynamics of the system is computed. This dissertation proposes didactic material to be used in the Vibrations course, with the purpose of helping undergraduate students in the study of continuous systems through the development of the weak formulation and the application of the FEM.
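As a hedged illustration of the workflow described above (weak formulation, FEM approximation of natural frequencies), the sketch below assembles linear bar elements for a fixed-free bar with unit properties and compares the first discrete frequencies with the exact values. The element matrices are the standard textbook ones, not necessarily those used in the dissertation.

```python
# Hedged sketch: linear finite elements for the longitudinal vibration
# of a fixed-free bar (unit E, A, rho, L).  The exact natural
# frequencies are omega_k = (2k - 1) * pi / 2.
import numpy as np
from scipy.linalg import eigh

n = 50                                   # number of elements
h = 1.0 / n
# stiffness and consistent-mass matrices of a 2-node bar element
ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])

K = np.zeros((n + 1, n + 1))
M = np.zeros((n + 1, n + 1))
for e in range(n):                       # standard assembly loop
    K[e:e + 2, e:e + 2] += ke
    M[e:e + 2, e:e + 2] += me

K, M = K[1:, 1:], M[1:, 1:]              # impose u(0) = 0 (fixed end)
lam = eigh(K, M, eigvals_only=True)      # solve K v = lam M v
omega = np.sqrt(lam[:3])
exact = np.array([(2 * k - 1) * np.pi / 2 for k in (1, 2, 3)])
print("FEM:  ", omega)
print("exact:", exact)
```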
|
147 |
Towards a hybrid model combining decision trees and Galois lattices for image indexing and classification. Girard, Nathalie, 05 July 2013.
Image classification is generally organized around two steps: the extraction of image signatures, followed by the analysis of the extracted data, which are generally quantitative. Many classification models have been proposed in the literature, and the most suitable choice is often guided by classification performance as well as by model readability. Decision trees and Galois lattices are two symbolic models known for their readability. In her thesis [Guillas 2007], Guillas used Galois lattices efficiently for image classification, and strong structural links with decision trees were highlighted. The work presented in this manuscript follows on from those results and aims to define a hybrid model between the two that combines their advantages: their respective readability, the robustness of the lattice, and the low memory requirements of the tree. To this end, a study of the links between the two models brings out their differences. The first is the type of discretization: decision trees generally use a local discretization, whereas Galois lattices, originally defined for binary data, use a global one. From a study of the properties of dichotomic lattices (lattices defined after discretization), we propose a local discretization for lattices that improves classification performance and reduces structural complexity. Second, the post-pruning process implemented in most decision trees aims both to reduce their complexity and to improve their generalization performance, whereas simplifications of the lattice structure (exponential in the size of the data in the worst case) are motivated solely by a reduction of structural complexity. By combining these two kinds of simplification, we propose a simplification of the lattice structure built after our local discretization, leading to a hybrid classification model that benefits from the readability of both models while being less complex than the lattice and just as efficient.
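A hedged sketch of the global-versus-local discretization contrast discussed above, using a standard class-entropy cut criterion (the thesis's actual criterion may differ): a global discretization computes one cut on the whole table, while a local one recomputes the cut on each subset, as a decision tree does.

```python
# Hedged sketch: global vs local discretization of one numeric
# attribute, with an entropy-based cut criterion (illustrative choice).
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_cut(x, y):
    """Cut point of attribute x minimizing weighted class entropy."""
    best, cut = np.inf, None
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if h < best:
            best, cut = h, t
    return cut

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 300)
y = (x > 0.35).astype(int) ^ (rng.uniform(size=300) < 0.05)  # noisy labels

g = best_cut(x, y)                    # global: one cut for the whole table
left = x <= g
l = best_cut(x[left], y[left])        # local: recomputed on the left subset
print("global cut:", g, " local cut on left subset:", l)
```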
|
148 |
Generation of basis sets for some atoms of the 5th and 6th periods of the periodic table using the Hartree-Fock Polynomial Generator Coordinate Method. Alves, Júlia Maria Aragon, 20 February 2017.
The Schrödinger equation has an exact solution for one-electron atoms, but the complexity of exact solutions for many-electron systems leads to the use of approximate methods that satisfactorily describe the properties of interest. The Hartree-Fock Generator Coordinate Method (MCG-HF) is a variational method that allows the development of accurate basis sets for many-electron atoms. However, using only the integral discretization, the basis sets generated with MCG-HF require a large number of exponents. Barbosa and da Silva (2009) introduced the concept of polynomial integral discretization, which reduces the number of exponents required to properly describe an atom, generating smaller basis sets. The sets of primitives generated with p-MCG-HF were contracted using the general contraction method to 5Z quality and then polarized using two different polarization methods (pinched and optimized) to better describe chemical properties such as chemical bonding, excited states, and others. The energies obtained with these polarization methods are similar, but the computational cost is smaller for the pinched polarization in most cases. Comparing the results obtained in this work with those of the literature, the results obtained with p-MCG-HF are of high quality, with the great advantage of much reduced computational cost. This method is already being applied in our research group to generate basis sets for atoms of the 1st and 2nd periods of the periodic table and for the transition metals, with satisfactory results. In order to study all the atoms of the periodic table up to barium, the atoms gallium, germanium, arsenic, selenium, bromine, krypton, rubidium, strontium, indium, tin, antimony, tellurium, iodine, xenon, cesium, and barium are the focus of this work.
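As a hedged illustration of the discretization idea, the sketch below generates Gaussian-primitive exponents from a generator-coordinate mesh: equally spaced in the relabelled coordinate for the classical integral discretization, and given by a low-order polynomial in the index for the polynomial version. All numerical coefficients are hypothetical placeholders, not values from the thesis.

```python
# Hedged sketch: exponents alpha_k = exp(Omega_k) from a generator-
# coordinate mesh.  The polynomial mesh (in the spirit of the p-MCG-HF
# of Barbosa and da Silva, 2009) replaces the linear mesh; coefficients
# below are hypothetical placeholders.
import numpy as np

def exponents_linear(omega0, delta, n):
    """Equally spaced mesh: Omega_k = omega0 + k * delta."""
    k = np.arange(n)
    return np.exp(omega0 + k * delta)

def exponents_polynomial(coeffs, n):
    """Polynomial mesh: Omega_k = c0 + c1*k + c2*k^2 + ..."""
    k = np.arange(n)
    omega = sum(c * k**j for j, c in enumerate(coeffs))
    return np.exp(omega)

print(exponents_linear(-2.0, 0.6, 8))
print(exponents_polynomial([-2.0, 0.45, 0.03, -0.001], 8))
```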
|
149 |
Infinite Dimensional Control and Input-to-State Stability of the Safety Factor Profile in a Tokamak Plasma. Bribiesca Argomedo, Federico, 12 September 2012.
In this thesis, we are interested in the control of the safety factor profile, or q-profile, in a tokamak plasma. This physical quantity is related to several phenomena in the plasma, in particular magnetohydrodynamic (MHD) instabilities. An adequate safety factor profile is particularly important to achieve advanced tokamak operation, providing high confinement and MHD stability. To achieve this, we focus on controlling the gradient of the poloidal magnetic flux profile, whose evolution is governed by a diffusion equation with distributed, time-varying coefficients. Based on Lyapunov techniques and on the input-to-state stability properties of the system, we propose a robust control law that takes into account the nonlinear constraints on the control action imposed by the physical actuators.
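A hedged sketch of the plant described above: a 1-D diffusion equation with a distributed, time-varying coefficient, discretized by finite differences. The proportional damping used as control input here is purely illustrative; the thesis derives a Lyapunov-based law that respects the actuators' nonlinear constraints.

```python
# Hedged sketch: explicit finite-difference simulation of
# dz/dt = d/dx( eta(x, t) dz/dx ) + u(x, t) on [0, 1] with z = 0 at
# both ends.  The diffusivity profile and the damping "control" are
# illustrative placeholders, not the thesis's model or law.
import numpy as np

nx, dt, steps = 51, 1e-4, 2000
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
z = np.sin(np.pi * x)                     # initial profile

def eta(t):
    """Distributed, slowly time-varying diffusivity (illustrative)."""
    return 1.0 + 0.5 * x * np.cos(t)

t = 0.0
for _ in range(steps):
    d = eta(t)
    flux = d[:-1] * np.diff(z) / dx       # eta * dz/dx at cell interfaces
    u = -5.0 * z                          # illustrative distributed damping
    z[1:-1] += dt * (np.diff(flux) / dx + u[1:-1])
    t += dt
print("L2 norm of z after control:", np.sqrt(dx * np.sum(z**2)))
```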
|
150 |
Dimensionality reduction using clustering and weighted discretization for content-based image retrieval. Pirolla, Francisco Rocha, Universidade Federal de São Carlos, 19 November 2012.
This work proposes two new techniques of feature-vector pre-processing to improve CBIR and image classification systems: a method of feature transformation based on k-means clustering (Feature Transformation based on K-means, FTK) and a method of Weighted Feature Discretization (WFD). The FTK method employs the clustering principle of k-means to compact the feature-vector space. The WFD method performs a weighted feature discretization, privileging the feature ranges that are most important for distinguishing images. The proposed methods were employed to pre-process the feature vectors in CBIR and in classification, comparing the results with pre-processing performed by PCA (a well-known feature transformation method) and with the original feature vectors. FTK reduced the feature-vector size while improving query precision and classification accuracy; WFD improved query precision and classification accuracy; and the combination of WFD and FTK likewise improved both. These results are especially significant when compared with those of PCA, which yields a smaller reduction in feature-vector size, a smaller increase in query precision, and a smaller increase in classification accuracy. Moreover, the proposed approaches have linear computational cost, whereas PCA has cubic computational cost. The results indicate that the proposed approaches are well suited to pre-processing image feature vectors, improving the overall quality of CBIR and classification systems.
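A hedged sketch of a k-means-based feature compaction in the spirit of FTK, replacing each d-dimensional signature by its k distances to the cluster centers, with k < d. This is one plausible reading of the idea; the exact FTK mapping is defined in the dissertation.

```python
# Hedged sketch: compacting image signatures with k-means, FTK-style.
# The transform maps each 64-dim signature to its 8 distances to the
# learned cluster centers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 64))            # 500 images, 64-dim signatures

k = 8                                     # compressed dimensionality
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
X_ftk = km.transform(X)                   # (500, 8): distances to centers

print(X.shape, "->", X_ftk.shape)
```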
|