41

Tolerance analysis of complex mechanisms - Manufacturing imperfections modeling for a realistic and robust geometrical behavior modeling of the mechanisms

Goka, Edoh 12 June 2019 (has links)
Tolerance analysis verifies, at the design stage, the impact of individual tolerances on the assembly and functional requirements of a mechanical system. Manufactured products involve several types of contacts and their geometry is imperfect, which can lead to assembly and functional failures. Traditional tolerance-analysis methods do not consider form defects. This thesis proposes a new tolerance-analysis procedure that takes form defects and the different types of contact into account when modeling geometrical behavior. A method is first proposed to model form defects so that the simulations become more realistic. These form defects are then integrated into the geometrical behavior model of an overconstrained mechanical system, considering the different types of contacts; indeed, the contacts behave differently once such imperfections are taken into account. Monte Carlo simulation coupled with an optimization technique is chosen to perform the tolerance analysis. This method is, however, computationally expensive. To overcome this problem, probabilistic models obtained by kernel density estimation are proposed, which reduce the computation time significantly.
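To give a flavour of the approach, here is a minimal Python sketch of Monte Carlo tolerance analysis combined with a kernel density estimate of the functional characteristic. The assembly response, the tolerance values, and the failure threshold below are invented for illustration and do not come from the thesis.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def functional_gap(deviations):
    """Toy assembly response: the functional characteristic is a linear
    combination of part deviations (a stand-in for the optimisation-based
    gap computation used in the actual tolerance analysis)."""
    return 0.05 + deviations @ np.array([1.0, -1.0, 0.5, 0.5])

# Monte Carlo sampling of manufacturing deviations (hypothetical tolerances)
samples = rng.normal(loc=0.0, scale=[0.01, 0.01, 0.02, 0.02], size=(20_000, 4))
gaps = np.array([functional_gap(d) for d in samples])

# Kernel density estimate of the functional characteristic's distribution
kde = gaussian_kde(gaps)

# Probability of non-functionality: gap falling below zero (illustrative threshold)
p_fail = kde.integrate_box_1d(-np.inf, 0.0)
print(f"Estimated non-functionality probability: {p_fail:.4f}")
```

In the thesis the kernel density estimates act as probabilistic surrogates that avoid re-running the expensive optimisation-based Monte Carlo loop; in this sketch the KDE is simply fitted to the toy Monte Carlo output.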
42

Formal Methods for Probabilistic Energy Models

Daum, Marcus 11 April 2019 (has links)
The energy consumed by information-processing systems contributes significantly to environmental pollution and accounts for a large share of operating costs, so ways must be found to reduce it. When saving energy, it is important that the utility of a system (e.g., the user experience) is not unnecessarily degraded, which requires a careful trade-off analysis between the energy consumed and the resulting utility. Research on energy efficiency has therefore become a very active topic that spans many scientific areas and is also of interest to industry. The concept of quantiles is well known in mathematical statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. With the help of quantiles, for instance, one can reason about the minimal energy required to obtain a desired system behaviour in a satisfactory manner, e.g., a required user experience achieved with sufficient probability. Quantiles also allow determining the maximal utility that can be achieved with a reasonable probability while staying within a given energy budget. Since these are exactly the kinds of measures of interest when analysing energy-aware systems, it is clearly beneficial to extend formal analysis methods with the ability to compute quantiles. This monograph shows how quantiles can be used as an instrument for analysing the trade-off between energy and utility in the field of probabilistic model checking. It presents algorithms for their computation over Markovian models and investigates techniques for improving the computational performance of implementations of those algorithms; the main improvement exploits the specific characteristics of the linear programs that must be solved when computing quantiles. The improved algorithms have been implemented and integrated into the well-known probabilistic model checker PRISM, and the performance of this implementation is demonstrated on several protocols, with an emphasis on the trade-off between consumed energy and resulting utility.
Since the introduced methods are not restricted to energy-utility analysis, the proposed framework can be used to analyse the interplay of cost and resulting benefit in general.
Contents: 1 Introduction (related work; contribution and outline); 2 Preliminaries; 3 Reward-bounded reachability properties and quantiles (upper- and lower-reward-bounded quantiles, energy-utility quantiles, quantiles under side conditions, reachability quantiles and continuous time); 4 Expectation quantiles; 5 Implementation (computation optimisations and integration in PRISM); 6 Analysed protocols (PRISM Benchmark Suite; energy-aware protocols including the energy-aware job-scheduling protocol, eBond, and the HAECubie demonstrator); 7 Conclusion.
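To make the notion concrete, the following is a schematic of the kind of quantile the thesis computes, in simplified and partly assumed notation (M is a Markovian model, "goal" the set of states where the utility requirement is met, and energy an accumulated reward); it is not taken verbatim from the thesis.

```latex
% Minimal energy budget that guarantees the utility goal with probability at least p
\[
  \mathrm{qu}^{\min}_{p}(M) \;=\;
  \min \Bigl\{\, e \in \mathbb{N} \;\Bigm|\;
  \Pr^{\max}_{M}\bigl(\Diamond^{\,\text{energy} \le e}\, \text{goal}\bigr) \;\ge\; p \,\Bigr\}
\]
```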
43

A Study Of Localization And Latency Reduction For Action Recognition

Masood, Syed Zain 01 January 2012 (has links)
The success of recognizing periodic actions in single-person, simple-background datasets such as Weizmann and KTH has created a need for more complex datasets to push the performance of action recognition systems. In this work, we create a new synthetic action dataset and use it to highlight weaknesses in current recognition systems. Experiments show that introducing background complexity to action video sequences causes a significant degradation in recognition performance, and that this degradation cannot be fixed by fine-tuning system parameters or by selecting better feature points. Instead, we show that the problem lies in the spatio-temporal cuboid volume extracted from the interest point locations. Having identified the problem, we show how improved results can be achieved by simple modifications to the cuboids. This method, however, requires near-perfect localization of the action within a video sequence. To achieve this objective, we present a two-stage weakly supervised probabilistic model for simultaneous localization and recognition of actions in videos. Unlike previous approaches, our method (1) eliminates the need for manual annotations in the training procedure and (2) does not require any human detection or tracking in the classification stage. The first stage of our framework is a probabilistic action localization model which extracts the most promising sub-windows in a video sequence where an action can take place. We use a non-linear classifier in the second stage of our framework for the final classification task. We show the effectiveness of our proposed model on two well-known real-world datasets: UCF Sports and UCF11. Another application of the weakly supervised probabilistic model proposed above is in the gaming environment. An important aspect in designing interactive, action-based interfaces is reliably recognizing actions with minimal latency: high latency causes the system's feedback to lag behind and thus significantly degrades the interactivity of the user experience. With a slight modification to the weakly supervised probabilistic model we proposed for action localization, we show how it can be used to reduce latency when recognizing actions in Human-Computer Interaction (HCI) environments. This latency-aware learning formulation trains a logistic regression-based classifier that automatically determines distinctive canonical poses from the data and uses these to robustly recognize actions in the presence of ambiguous poses. We introduce a novel (publicly released) dataset for the purpose of our experiments. Comparisons of our method against both a Bag-of-Words and a Conditional Random Field (CRF) classifier show improved recognition performance for both pre-segmented and online classification tasks.
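The abstract does not spell out the latency-aware formulation; the short Python sketch below only illustrates the general idea of emitting a label as soon as a per-frame classifier is confident enough, rather than waiting for the whole sequence. The features, labels, thresholds, and the use of scikit-learn's logistic regression are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-frame pose descriptors and action labels for training
X_train = rng.normal(size=(500, 16))        # 500 frames, 16-D pose features
y_train = rng.integers(0, 3, size=500)      # 3 action classes

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def classify_online(frames, threshold=0.9):
    """Return (label, frame_index) as soon as some class posterior exceeds
    `threshold`; fall back to the last frame's prediction otherwise."""
    for t, frame in enumerate(frames):
        probs = clf.predict_proba(frame.reshape(1, -1))[0]
        if probs.max() >= threshold:
            return int(probs.argmax()), t          # early, low-latency decision
    return int(probs.argmax()), len(frames) - 1    # decided only at the end

label, decided_at = classify_online(rng.normal(size=(60, 16)))
print(f"action {label} recognised at frame {decided_at}")
```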
44

Ontology-Mediated Probabilistic Model Checking: Extended Version

Dubslaff, Clemens, Koopmann, Patrick, Turhan, Anni-Yasmin 20 June 2022 (has links)
Probabilistic model checking (PMC) is a well-established method for the quantitative analysis of dynamic systems. On the other hand, description logics (DLs) provide a well-suited formalism to describe and reason about static knowledge, used in many areas to specify domain knowledge in an ontology. We investigate how such knowledge can be integrated into the PMC process, introducing ontology-mediated PMC. Specifically, we propose a formalism that links ontologies to dynamic behaviors specified by guarded commands, the de-facto standard input formalism for PMC tools such as Prism. Further, we present and implement a technique for their analysis relying on existing DL-reasoning and PMC tools. This way, we enable the application of standard PMC techniques to analyze knowledge-intensive systems. Our approach is implemented and evaluated on a multi-server system case study, where different DL-ontologies are used to provide specifications of different server platforms and situations the system is executed in.
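The actual formalism links DL ontologies to PRISM-style guarded commands; the toy Python sketch below only mimics the flavour, with the "ontology" reduced to a fixed set of entailed facts that guards may query, and every name, fact, and probability invented for illustration.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class GuardedCommand:
    # guard over the current state *and* facts entailed by the ontology
    guard: Callable[[Dict[str, int], set], bool]
    # probabilistic updates: list of (probability, update function)
    updates: List[Tuple[float, Callable[[Dict[str, int]], Dict[str, int]]]]

# Stand-in for DL reasoning: facts a reasoner would entail about the platform
entailed = {"LowPowerServer", "SupportsSleepMode"}

commands = [
    GuardedCommand(
        guard=lambda s, kb: s["queue"] > 0 and "LowPowerServer" in kb,
        updates=[(0.9, lambda s: {**s, "queue": s["queue"] - 1}),
                 (0.1, lambda s: dict(s))],        # request retried
    ),
]

def step(state: Dict[str, int]) -> Dict[str, int]:
    """One simulation step: pick an enabled command and sample an update."""
    enabled = [c for c in commands if c.guard(state, entailed)]
    if not enabled:
        return state
    cmd = random.choice(enabled)
    r, acc = random.random(), 0.0
    for p, upd in cmd.updates:
        acc += p
        if r <= acc:
            return upd(state)
    return state

print(step({"queue": 3}))
```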
45

Syngas ash deposition for a three row film cooled leading edge turbine vane

Sreedhran, Sai Shrinivas 10 August 2010 (has links)
Coal gasification and combustion can introduce contaminants in the solid or molten state, depending on the gas clean-up procedures used, the coal composition, and the operating conditions. Combined with high temperatures and high gas-stream velocities, these byproducts can cause deposition, erosion, and corrosion (DEC) of turbine components downstream of the combustor section. The objective of this dissertation is to use computational techniques to investigate the dynamics of ash deposition on a leading edge vane geometry with film cooling. Large Eddy Simulation (LES) is used to model the flow field of the coolant jet-mainstream interaction, and the deposition of syngas ash in the leading edge region of a turbine vane is modeled using a Lagrangian framework. The three-row leading edge vane geometry is modeled as a symmetric semi-cylinder with a flat afterbody. One row of coolant holes is located along the stagnation line and the other two rows are located at ±21.3° from the stagnation line. The coolant is injected at 45° to the vane surface with 90° compound-angle injection. The coolant-to-mainstream density ratio is set to unity and the freestream Reynolds number based on leading edge diameter is 32,000. Coolant-to-mainstream blowing ratios (B.R.) of 0.5, 1.0, 1.5, and 2.0 are investigated. It is found that the stagnation cooling jets penetrate much further into the mainstream, both in the normal and lateral directions, than the off-stagnation jets for all blowing ratios. Jet dilution is characterized by turbulent diffusion and entrainment, and the strength of both mechanisms increases with blowing ratio. The adiabatic effectiveness in the stagnation region initially increases with blowing ratio but then generally decreases as the blowing ratio increases further. Immediately downstream of off-stagnation injection, the adiabatic effectiveness is highest at B.R. = 0.5. However, in spite of the larger jet penetration and dilution at higher blowing ratios, the larger mass of coolant injected increases the effectiveness with blowing ratio further downstream of the injection location. A novel deposition model, which integrates different sources of published experimental data into a holistic numerical model, is developed to predict ash deposition. The deposition model computes the ash sticking probabilities as a function of particle temperature and ash composition. It is validated against available experimental results on a flat plate inclined at 45° and then used to study ash deposition on the leading edge vane geometry with film cooling for coolant-to-mainstream blowing ratios of 0.5, 1.0, 1.5, and 2.0. Ash particle sizes of 5, 7, and 10 μm are considered. Under the conditions of the current simulations, the ash particles have Stokes numbers of order one or less and hence are strongly affected by the flow and thermal fields generated by the coolant interaction with the mainstream. Because of this, the stagnation coolant jets succeed in pushing and/or cooling the particles away from the surface, minimizing deposition and erosion in the stagnation region. Capture efficiencies for eight different ash compositions are investigated. Among all the ash samples, the ND ash sample shows the highest capture efficiency because of its low softening temperature. A trend common to all particle sizes is that the percentage capture efficiency is lowest at a blowing ratio of 1.5, where the coolant is successful in pushing the particles away from the surface. When the blowing ratio is further increased to 2.0, however, the percentage capture efficiency increases again because more particles are transported to the surface by the strong mainstream entrainment induced by the coolant jets. / Ph. D.
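The actual deposition model integrates several published experimental datasets; the sketch below only illustrates the general shape of a temperature-dependent sticking probability and a Monte Carlo capture-efficiency estimate. All temperatures, the logistic curve, and its width are invented placeholders, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sticking_probability(particle_temp, softening_temp, width=150.0):
    """Toy sticking model: probability rises smoothly from 0 to 1 as the
    particle temperature approaches and exceeds the ash softening
    temperature (the dissertation builds this relation from experiments)."""
    return 1.0 / (1.0 + np.exp(-(particle_temp - softening_temp) / width))

# Hypothetical temperatures (K) of particles reaching the vane surface
impacting_temps = rng.normal(loc=1500.0, scale=100.0, size=10_000)
softening_temp = 1560.0   # illustrative value for one ash composition

stick = rng.random(impacting_temps.size) < sticking_probability(impacting_temps, softening_temp)
capture_efficiency = stick.mean()
print(f"Capture efficiency (deposited / impacting): {capture_efficiency:.2%}")
```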
46

Tools and Techniques for Evaluating Reliability Trade-offs for Nano-Architectures

Bhaduri, Debayan 20 May 2004 (has links)
It is expected that nano-scale devices and interconnections will introduce unprecedented levels of defects in the substrates, and architectural designs need to accommodate the uncertainty inherent at such scales. This consideration motivates the search for new architectural paradigms based on redundancy-based defect-tolerant designs. However, redundancy is not always a solution to the reliability problem, and often too much or too little redundancy may cause degradation in reliability. The key challenge is in determining the granularity at which defect tolerance is designed, and the level of redundancy needed to achieve a specific level of reliability. Analytical probabilistic models for evaluating such reliability-redundancy trade-offs are error-prone and cumbersome, and do not scale well for complex networks of gates. In this thesis we develop different tools and techniques that can evaluate the reliability measures of combinational circuits and can be used to analyze reliability-redundancy trade-offs for different defect-tolerant architectural configurations. In particular, we have developed two tools: one based on probabilistic model checking, named NANOPRISM, and a MATLAB-based tool called NANOLAB. We also illustrate the effectiveness of our reliability analysis tools by pointing out certain anomalies that are counter-intuitive but can be easily discovered by these tools, thereby providing better insight into defect-tolerant design decisions. We believe that these tools will help further research and pedagogical interests in this area, expedite the reliability analysis process, and enhance the accuracy of establishing reliability-redundancy trade-off points. / Master of Science
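NANOPRISM and NANOLAB themselves are not reproduced here; the short Python sketch below only illustrates the kind of reliability-redundancy trade-off they evaluate, using the textbook N-modular-redundancy formula with an imperfect majority voter. The specific failure probabilities, and the assumption that the voter's unreliability grows with the number of replicas it compares, are made up for illustration.

```python
from math import comb

def majority_ok(n: int, r_module: float) -> float:
    """Probability that a strict majority of n replicated modules work."""
    return sum(comb(n, k) * r_module**k * (1 - r_module)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def nmr_reliability(n: int, r_module: float, r_gate: float) -> float:
    """N-modular redundancy with an imperfect majority voter whose own
    failure probability grows with the number of inputs it compares
    (a crude stand-in for a voter built from unreliable gates)."""
    r_voter = r_gate ** n            # assumption: one unreliable gate per input
    return r_voter * majority_ok(n, r_module)

for n in (1, 3, 5, 7, 9, 11):
    print(n, round(nmr_reliability(n, r_module=0.85, r_gate=0.97), 4))
# Reliability first improves with redundancy and then degrades once the
# voter overhead dominates: the trade-off point such tools aim to locate.
```

Running the loop shows the non-monotone behaviour mentioned in the abstract: with these illustrative numbers, reliability peaks at triple-modular redundancy and declines as more replicas are added.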
47

Computing Quantiles in Markov Reward Models

Ummels, Michael, Baier, Christel 10 July 2014 (has links) (PDF)
Probabilistic model checking mainly concentrates on techniques for reasoning about the probabilities of certain path properties or expected values of certain random variables. For the quantitative system analysis, however, there is also another type of interesting performance measure, namely quantiles. A typical quantile query takes as input a lower probability bound p ∈ ]0,1] and a reachability property. The task is then to compute the minimal reward bound r such that with probability at least p the target set will be reached before the accumulated reward exceeds r. Quantiles are well-known from mathematical statistics, but to the best of our knowledge they have not been addressed by the model checking community so far. In this paper, we study the complexity of quantile queries for until properties in discrete-time finite-state Markov decision processes with nonnegative rewards on states. We show that qualitative quantile queries can be evaluated in polynomial time and present an exponential algorithm for the evaluation of quantitative quantile queries. For the special case of Markov chains, we show that quantitative quantile queries can be evaluated in pseudo-polynomial time.
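For a small discrete-time Markov chain, the pseudo-polynomial idea can be sketched in a few lines of Python: tabulate P(reach the target with accumulated reward at most r) for growing r until the probability bound p is met. The chain below is invented, rewards are placed on transitions and assumed to be at least 1 to keep the recursion simple, and nondeterminism (the MDP case treated in the paper) is ignored.

```python
# DTMC: transitions[s] = list of (probability, successor, reward >= 1)
transitions = {
    "init": [(0.7, "work", 1), (0.3, "idle", 1)],
    "idle": [(1.0, "init", 1)],
    "work": [(0.5, "done", 2), (0.5, "init", 1)],
    "done": [(1.0, "done", 1)],
}
target = {"done"}

def prob_within(r_max):
    """p[r][s] = probability of reaching the target from s with
    accumulated (transition) reward at most r."""
    states = list(transitions)
    p = [{s: (1.0 if s in target else 0.0) for s in states}]
    for r in range(1, r_max + 1):
        row = {}
        for s in states:
            if s in target:
                row[s] = 1.0
            else:
                row[s] = sum(pr * p[r - rew][succ]
                             for pr, succ, rew in transitions[s] if rew <= r)
        p.append(row)
    return p

def quantile(state, p_bound, r_cap=1000):
    """Minimal reward bound r with P(reach target within r) >= p_bound."""
    for r, row in enumerate(prob_within(r_cap)):
        if row[state] >= p_bound:
            return r
    return None   # bound not met within r_cap

print(quantile("init", 0.9))
```

The table has one row per reward bound, which is why such computations are pseudo-polynomial; the paper's contribution is the complexity analysis of exactly this kind of query, including the polynomial-time qualitative case.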
48

Dependency analysis of object-oriented programs using probabilistic models of the inputs

Bouchoucha, Arbi 09 1900 (has links)
Maintaining and understanding object-oriented (OO) programs is becoming increasingly costly. Dependency analysis can help with these engineering tasks, but it is itself both important and difficult. We propose a framework for studying the internal dependencies of OO programs in a probabilistic setting, where the program inputs are modeled either as a random vector or as a Markov chain. In that setting, coupling metrics become random variables whose probability distributions can be studied via Monte Carlo simulation. The obtained distributions provide an entry point for understanding the internal dependencies between program elements, as well as their general behaviour. This framework is appropriate for the (common) situation where the value taken by the metric depends on the program inputs and where those inputs are not fixed a priori. We provide a concrete illustration with two case studies.
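A minimal sketch of the idea, assuming a simple input model: sample program inputs from an assumed distribution, run an instrumented program that reports a coupling measure per execution, and study the empirical distribution of that measure. The "program", its metric, and the uniform input model below are stand-ins, not the thesis's case studies.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_instrumented_program(x):
    """Stand-in for an instrumented OO program that returns the number of
    inter-class calls (a dynamic coupling measure) observed on input x."""
    calls = 2                      # base collaboration between two classes
    if x[0] > 0.5:                 # feature triggering an extra collaborator
        calls += 3
    calls += int(x[1] * 10)        # input-size-dependent interactions
    return calls

# Inputs modeled as a random vector (here: two independent uniforms)
inputs = rng.uniform(size=(50_000, 2))
coupling = np.array([run_instrumented_program(x) for x in inputs])

# The coupling metric is now a random variable; summarise its distribution
values, counts = np.unique(coupling, return_counts=True)
for v, c in zip(values, counts):
    print(f"coupling={v}: {c / coupling.size:.3f}")
```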
49

On learning assumptions for compositional verification of probabilistic systems

Feng, Lu January 2014 (has links)
Probabilistic model checking is a powerful formal verification method that can ensure the correctness of real-life systems that exhibit stochastic behaviour. The work presented in this thesis aims to solve the scalability challenge of probabilistic model checking, by developing, for the first time, fully-automated compositional verification techniques for probabilistic systems. The contributions are novel approaches for automatically learning probabilistic assumptions for three different compositional verification frameworks. The first framework considers systems that are modelled as Segala probabilistic automata, with assumptions captured by probabilistic safety properties. A fully-automated approach is developed to learn assumptions for various assume-guarantee rules, including an asymmetric rule Asym for two-component systems, an asymmetric rule Asym-N for n-component systems, and a circular rule Circ. This approach uses the L* and NL* algorithms for automata learning. The second framework considers systems where the components are modelled as probabilistic I/O systems (PIOSs), with assumptions represented by Rabin probabilistic automata (RPAs). A new (complete) assume-guarantee rule Asym-Pios is proposed for this framework. In order to develop a fully-automated approach for learning assumptions and performing compositional verification based on the rule Asym-Pios, a (semi-)algorithm to check language inclusion of RPAs and an L*-style learning method for RPAs are also proposed. The third framework considers the compositional verification of discrete-time Markov chains (DTMCs) encoded in Boolean formulae, with assumptions represented as Interval DTMCs (IDTMCs). A new parallel operator for composing an IDTMC and a DTMC is defined, and a new (complete) assume-guarantee rule Asym-Idtmc that uses this operator is proposed. A fully-automated approach is formulated to learn assumptions for rule Asym-Idtmc, using the CDNF learning algorithm and a new symbolic reachability analysis algorithm for IDTMCs. All approaches proposed in this thesis have been implemented as prototype tools and applied to a range of benchmark case studies. Experimental results show that these approaches are helpful for automating the compositional verification of probabilistic systems through learning small assumptions, but may suffer from high computational complexity or even undecidability. The techniques developed in this thesis can assist in developing scalable verification frameworks for probabilistic models.
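To make the shape of such rules concrete, the following is the standard asymmetric probabilistic assume-guarantee rule that this line of work builds on, in simplified notation and omitting side conditions; A and G are probabilistic safety properties and the subscript gives the required probability bound. This is a schematic rendering, not a quotation from the thesis.

```latex
% Asymmetric rule (Asym), simplified: premises on the components,
% conclusion on the composition M1 || M2
\[
  \frac{\langle \mathit{true} \rangle\; M_1 \;\langle A \rangle_{\ge p_A}
        \qquad
        \langle A \rangle_{\ge p_A}\; M_2 \;\langle G \rangle_{\ge p_G}}
       {\langle \mathit{true} \rangle\; M_1 \,\|\, M_2 \;\langle G \rangle_{\ge p_G}}
\]
```

The learning task is then to find an assumption A (and bound p_A) small enough to be checked cheaply against M_1 yet strong enough for the second premise, which is what the L*, NL*, and CDNF-based approaches automate.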
50

Estimation of distribution algorithms for ab initio protein structure prediction

Bonetti, Daniel Rodrigo Ferraz 05 March 2015 (has links)
Proteins are molecules that perform critical roles in living organisms and are essential to life. To understand the function of a protein, its 3D structure must be known. However, determining a protein structure experimentally is an expensive and time-consuming task that requires highly skilled professionals. To overcome this limitation, computational methods for protein structure prediction (PSP) have been investigated, aiming to predict the protein structure from its amino acid sequence. Most computational methods rely on knowledge of structures already determined experimentally in order to predict an unknown protein. Although methods such as Rosetta, I-Tasser and Quark have shown success in their predictions, they can only predict structures quite similar to those already determined experimentally, and the use of such prior knowledge may bias their predictions. In order to develop a bias-free computational algorithm for PSP, we developed an Estimation of Distribution Algorithm (EDA) for PSP with a full-atom representation and an ab initio model. An ab initio algorithm is mainly of interest for proteins with low similarity to known structures. Three probabilistic models were developed: univariate, bivariate and hierarchical. The univariate model handles the multi-modality of the distribution of a single variable; the bivariate model treats the dihedral angles (Φ, Ψ) within an amino acid as correlated variables; and the hierarchical approach splits the original problem into subproblems and treats them separately. The experiments show that better results can indeed be obtained when the (Φ, Ψ) correlation is modeled. The hierarchical model also improved the quality of the results, mainly for proteins with more than 50 residues. In addition, the proposed techniques were compared with other metaheuristics from the literature: Random Walk, Monte Carlo, Genetic Algorithm and Differential Evolution. The results show that even a less efficient metaheuristic such as Random Walk managed to find the correct structure, but only by using a large amount of prior knowledge (a prediction that may be biased). In contrast, the proposed EDA was able to find the expected structure without any prior knowledge, characterizing a purely ab initio (bias-free) prediction.
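The real algorithm uses full-atom energies and the univariate/bivariate/hierarchical models described above; the toy Python sketch below shows only the core EDA loop with a bivariate Gaussian model over (Φ, Ψ) pairs and a made-up objective, so the "native" target, population sizes, and energy function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_residues, pop_size, n_best, n_gens = 10, 200, 40, 50

def energy(angles):
    """Stand-in objective: squared distance of each (phi, psi) pair to an
    arbitrary 'native' conformation (a real EDA would evaluate a
    full-atom energy function here)."""
    native = np.tile([-60.0, -45.0], (n_residues, 1))    # alpha-helix-like
    return np.sum((angles - native) ** 2)

# Population of conformations: one (phi, psi) pair per residue, in degrees
pop = rng.uniform(-180.0, 180.0, size=(pop_size, n_residues, 2))

for _ in range(n_gens):
    scores = np.array([energy(ind) for ind in pop])
    elite = pop[np.argsort(scores)[:n_best]]             # selection

    # Bivariate model: per residue, mean and 2x2 covariance of (phi, psi)
    mean = elite.mean(axis=0)                            # (n_residues, 2)
    cov = np.array([np.cov(elite[:, i, :], rowvar=False) + 1e-3 * np.eye(2)
                    for i in range(n_residues)])

    # Sample a new population from the estimated distribution
    pop = np.stack([
        np.stack([rng.multivariate_normal(mean[i], cov[i])
                  for i in range(n_residues)])
        for _ in range(pop_size)
    ])

best = pop[np.argmin([energy(ind) for ind in pop])]
print("best energy:", round(energy(best), 2))
```

Modeling each residue's (Φ, Ψ) pair with its own covariance matrix is the simplest way to capture the bivariate correlation the abstract highlights; the hierarchical variant would additionally partition the residues and fit such models per fragment.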
