91
Universal Algorithms as an Alternative for Generating Non-Uniform Continuous Random Variates. Leydold, Josef; Hörmann, Wolfgang. January 2000 (PDF)
This paper presents an overview of the most powerful universal methods. These are based on acceptance/rejection techniques in which hat functions and squeezes are constructed automatically. Although originally motivated by the need to sample from non-standard distributions, these methods have advantages that make them attractive even for sampling from standard distributions, and thus they are an alternative to special generators tailored to particular distributions. Most importantly, the marginal generation time is fast and does not depend on the distribution; the methods can be used for variance reduction techniques; and they produce random numbers of predictable quality. These algorithms are implemented in a library, called UNURAN, which is available by anonymous ftp. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
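As a hedged illustration of the rejection idea that these universal methods automate, the Python sketch below samples a half-normal density under an exponential hat; the hat, squeeze and distributions are illustrative choices of this editor, not UNURAN's automatically constructed ones.

```python
import numpy as np

def rejection_sample(density, hat_pdf, hat_sampler, squeeze, n, rng=None):
    """Acceptance/rejection with a hat and a squeeze.

    Requires hat_pdf(x) >= density(x) and squeeze(x) <= density(x) on
    the support; the squeeze lets most proposals be accepted without
    evaluating the (possibly expensive) density.
    """
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n:
        x = hat_sampler(rng)                       # propose from the hat
        u = rng.uniform() * hat_pdf(x)
        if u <= squeeze(x) or u <= density(x):     # quick accept, then full test
            out.append(x)
    return np.array(out)

# Example: half-normal density with an exponential hat on x >= 0.
dens = lambda x: np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x * x)
c = np.sqrt(2.0 * np.e / np.pi)                    # dens(x) <= c * exp(-x)
hat = lambda x: c * np.exp(-x)
sampler = lambda rng: rng.exponential()
squeeze = lambda x: 0.0                            # trivial squeeze for brevity
xs = rejection_sample(dens, hat, sampler, squeeze, 10_000)
```

A nontrivial squeeze (a cheap lower bound on the density) would let most proposals be accepted without evaluating the density at all, which is what makes the marginal generation time nearly distribution-independent.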
92
Accuracy of Software Reliability Prediction from Different Approaches. Vasudev, R. Sashin; Vanga, Ashok Reddy. January 2008
Many models have been proposed for software reliability prediction, but none of them captures a sufficient range of software characteristics. We propose a mixed approach, using both analytical and data-driven models, for assessing the accuracy of reliability prediction by means of a case study. The report follows a qualitative research strategy. Data were collected from a case study conducted at three different companies. Based on the case study, an analysis is made of the approaches used by the companies, supplemented by other data related to each organization's Software Quality Assurance (SQA) team. Of the three organizations, the first two are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data were collected by interviewing an employee of each organization who leads a team and has held a managing position for at least the last two years. / svra06@student.bth.se
93
System identification using neural networks: a balanced accuracy, complexity and computational cost approach. Romero Ugalde, Héctor Manuel. 16 January 2013
This thesis addresses black-box identification of nonlinear systems. Among the many and varied techniques developed in this field of research over the last decades, the neural network approach remains an interesting one for estimating models of complex systems. Even though accurate models have been derived, the main drawbacks of these techniques remain the large number of parameters required and, as a consequence, the substantial computational cost needed to reach the desired level of model accuracy. Motivated by these drawbacks, we develop a complete and efficient system identification methodology providing models that balance accuracy, complexity and cost by proposing, first, new neural network structures particularly suited to a very wide range of practical nonlinear system modeling tasks; second, a simple and efficient model reduction technique; and, third, a computational cost reduction procedure. It is important to note that these last two reduction techniques can be applied to a very large range of neural network architectures under two simple assumptions that are not at all restrictive. Finally, the last important contribution of this work is to show that the estimation phase can be carried out in a robust framework when the quality of the identification data demands it. To validate the proposed system identification procedure, application examples carried out in simulation and on a real process satisfactorily validate all the contributions of this thesis, confirming the interest of this work.
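The thesis proposes its own network structures and reduction techniques, which are not reproduced here; as a rough, assumed illustration of the neural black-box identification setting it builds on, a minimal NARX-style model in PyTorch might look as follows (model orders, sizes and the training data are placeholders).

```python
import torch
import torch.nn as nn

class NARXNet(nn.Module):
    """One-hidden-layer NARX model: y(k) is predicted from the last
    na outputs and nb inputs (a generic black-box structure, not the
    specific architectures proposed in the thesis)."""
    def __init__(self, na=2, nb=2, hidden=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(na + nb, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, regressors):        # regressors: (N, na+nb)
        return self.net(regressors).squeeze(-1)

def make_regressors(u, y, na, nb):
    """Stack delayed outputs and inputs into a regressor matrix."""
    k0 = max(na, nb)
    rows = [torch.cat([y[k - na:k].flip(0), u[k - nb:k].flip(0)])
            for k in range(k0, len(y))]
    return torch.stack(rows), y[k0:]

# One-step-ahead training on recorded input/output data u, y (1-D tensors).
u, y = torch.randn(500), torch.randn(500)   # stand-in data
X, target = make_regressors(u, y, na=2, nb=2)
model = NARXNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), target)
    loss.backward()
    opt.step()
```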
94
Black-Box Modeling of the Air Mass-Flow Through the Compressor in a Scania Diesel Engine. Törnqvist, Oskar. January 2009
Stricter emission legislation for heavy trucks, in combination with customers' demand for low fuel consumption, has resulted in intensive technical development of engines and their control systems. To control all these new solutions it is desirable to have reliable models of the important control variables. One of them is the air mass-flow, which matters when controlling the amount of recirculated exhaust gas in the EGR system and when ensuring that the air-to-fuel ratio in the cylinders is correct. The purpose of this thesis was to use system identification theory to develop a model of the air mass-flow through the compressor. First, linear black-box models were developed without any knowledge of the underlying physics. The collected data were preprocessed to suit the modeling procedure, and models with one or more inputs were then built according to the ARX model structure. To further improve performance, non-linear regressors were derived from physical relations for the air mass-flow and used to form grey-box models of the air mass-flow. Finally, the performance was evaluated by comparing the air mass-flow estimated by the best model with the estimate produced by an extended Kalman filter combined with a physical model.
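For illustration, a least-squares fit of an ARX structure like the ones described above can be sketched as follows; the model orders and the simulated data are stand-ins, not the thesis's engine measurements.

```python
import numpy as np

def fit_arx(u, y, na, nb):
    """Least-squares fit of an ARX(na, nb) model
        y(k) = -a1*y(k-1) - ... - a_na*y(k-na)
               + b1*u(k-1) + ... + b_nb*u(k-nb) + e(k).
    Returns the (a, b) coefficient vectors."""
    k0 = max(na, nb)
    Phi = np.column_stack(
        [-y[k0 - i:len(y) - i] for i in range(1, na + 1)]
        + [u[k0 - i:len(u) - i] for i in range(1, nb + 1)]
    )
    theta, *_ = np.linalg.lstsq(Phi, y[k0:], rcond=None)
    return theta[:na], theta[na:]

# Stand-in data: a known second-order system driven by random input.
rng = np.random.default_rng(0)
u = rng.standard_normal(1000)
y = np.zeros(1000)
for k in range(2, 1000):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] \
           + 0.01 * rng.standard_normal()
a, b = fit_arx(u, y, na=2, nb=1)   # expect a ~ [-1.5, 0.7], b ~ [0.5]
```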
95
Universal Induction and Optimisation: No Free Lunch. Everitt, Tom. January 2013
No description available.
96
Random Variate Generation by Numerical Inversion When Only the Density Is Known. Derflinger, Gerhard; Hörmann, Wolfgang; Leydold, Josef. January 2009 (PDF)
We present a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision, which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma and t-distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that our algorithm always reaches the required precision. The setup time is moderate, and the marginal execution time is very fast and nearly the same for all distributions. Thus, when large samples with fixed parameters are required, the proposed algorithm is the fastest known inversion method. Speed-up factors of up to 1000 are obtained when compared to inversion algorithms developed for specific distributions. This makes our algorithm especially attractive for the simulation of copulas and for quasi-Monte Carlo applications. This paper is the revised final version of Working Paper No. 78 of this research report series. / Series: Research Report Series / Department of Statistics and Mathematics
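A much-simplified sketch of the inversion idea (not the paper's algorithm, which uses Gauss-Lobatto integration and polynomial interpolation with user-controlled precision) might look like this in Python:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import PchipInterpolator

def build_inverse_cdf(density, lo, hi, n_nodes=200):
    """Tabulate the CDF of `density` on [lo, hi] by numerical quadrature
    and return a monotone interpolant of the quantile function.
    (Far cruder than the paper's adaptive, precision-controlled setup.)"""
    xs = np.linspace(lo, hi, n_nodes)
    cdf = np.array([quad(density, lo, x)[0] for x in xs])
    cdf /= cdf[-1]                                  # normalize
    cdf, idx = np.unique(cdf, return_index=True)    # strictly increasing
    return PchipInterpolator(cdf, xs[idx])

# Example: half-normal on [0, 8]; inversion maps U(0,1) to the target.
dens = lambda x: np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x * x)
inv_cdf = build_inverse_cdf(dens, 0.0, 8.0)
u = np.random.default_rng(1).uniform(size=10_000)
samples = inv_cdf(u)            # also works with quasi-random u
```

Because sampling reduces to evaluating a fitted quantile function, the same table serves uniform, antithetic or quasi-Monte Carlo input streams, which is why inversion suits copulas and QMC applications so well.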
97
Online Supplement to "Random Variate Generation by Numerical Inversion When Only the Density Is Known". Derflinger, Gerhard; Hörmann, Wolfgang; Leydold, Josef. January 2009 (PDF)
This online supplement summarizes our computational experience with Algorithm NINIGL, presented in our paper "Random Variate Generation by Numerical Inversion When Only the Density Is Known" (Report No. 90). It is a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision, which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma and t-distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that our algorithm always reaches the required precision. The setup time is moderate, and the marginal execution time is very fast and nearly the same for all these distributions. Thus, when large samples with fixed parameters are required, the proposed algorithm is the fastest known inversion method. Speed-up factors of up to 1000 are obtained when compared to inversion algorithms developed for specific distributions. Thus our algorithm is especially attractive for the simulation of copulas and for quasi-Monte Carlo applications. / Series: Research Report Series / Department of Statistics and Mathematics
98
Efficient "black-box" multigrid solvers for convection-dominated problems. Rees, Glyn Owen. January 2011
The main objective of this project is to develop a "black-box" multigrid preconditioner for the iterative solution of finite element discretisations of the convection-diffusion equation with dominant convection. This equation can be treated as a stand-alone scalar problem or as part of a more complex system of partial differential equations, such as the Navier-Stokes equations; the project focuses on the stand-alone scalar problem. Multigrid is considered an optimal preconditioner for scalar elliptic problems. The same strategy can be used for convection-diffusion problems, but an appropriately robust smoother needs to be developed to achieve mesh-independent convergence, and the focus of the thesis is on the development of such a smoother. In this context a novel smoother, referred to as the truncated incomplete factorisation (tILU) smoother, is developed. In terms of computational complexity and memory requirements, it is considerably less expensive than the standard ILU(0) smoother, while exhibiting the same robustness with respect to the problem and discretisation parameters. The new smoother significantly outperforms the standard damped Jacobi smoother and is a competitor to the Gauss-Seidel smoother (in a number of important cases tILU outperforms Gauss-Seidel). The smoother depends on a single parameter, the truncation ratio; the project obtains a default value for this parameter and demonstrates the robust performance of the smoother on a broad range of problems, so the new smoothing method can be regarded as "black-box". Furthermore, the new smoother does not require any particular ordering of the nodes, which is a prerequisite for many robust smoothers developed for convection-dominated convection-diffusion problems. To test the effectiveness of the preconditioning methodology, we consider a number of model problems (in both 2D and 3D) including uniform and complex (recirculating) convection fields discretised on uniform, stretched and adaptively refined grids. The new multigrid preconditioner was also tested within block preconditioning of the Navier-Stokes equations. The numerical results gained during the investigation confirm that tILU is a scalable, robust smoother for both geometric and algebraic multigrid, and comprehensive tests show that it is a competitive method.
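As an assumption-laden sketch of the tILU idea, the code below drops small off-diagonal entries before computing an incomplete factorisation; the truncation criterion used here (relative to each row's largest off-diagonal magnitude) is a guess at the thesis's definition, and SciPy's spilu stands in for a true ILU(0).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

def tilu_smoother(A, truncation_ratio=0.1):
    """Build a truncated-ILU smoother: drop small off-diagonal entries
    of A, then factorise the truncated matrix. Returns a callable that
    applies the approximate inverse (hypothetical criterion, see above)."""
    A = A.tocsr()
    At = A.tolil()
    for i in range(A.shape[0]):
        row = A.getrow(i)
        off = [(j, v) for j, v in zip(row.indices, row.data) if j != i]
        if not off:
            continue
        cut = truncation_ratio * max(abs(v) for _, v in off)
        for j, v in off:
            if abs(v) < cut:
                At[i, j] = 0.0          # truncate small couplings
    ilu = spilu(At.tocsc(), drop_tol=0.0, fill_factor=1.0)
    return ilu.solve

# One smoothing sweep on a 1-D convection-diffusion-like stencil.
n = 50
A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
solve = tilu_smoother(A, truncation_ratio=0.1)
b = np.ones(n)
x = np.zeros(n)
x = x + solve(b - A @ x)    # Richardson step with the tILU preconditioner
```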
99
Studies on Stochastic Optimisation and Applications to the Real World. Berthier, Vincent. 29 September 2017
A lot of research is being done on stochastic optimisation in general and genetic algorithms in particular. Most new developments are then tested on well-known testbeds such as BBOB, CEC, etc., conceived to exhibit as many pitfalls as possible: non-separability, multi-modality, valleys with an almost null gradient, and so on. Most studies done on such testbeds are fairly straightforward, optimising a given number of variables to reach a given criterion on the testbed. The first contribution made here is to study the impact of changing those assumptions, namely the effect of supernumerary variables that change nothing in a function evaluation on the one hand, and the effect of a change of the studied criterion on the other. A second contribution is a modification of the mutation design of the CMA-ES algorithm, where quasi-random mutations are used instead of purely random ones; this almost always results in a very clear improvement of the observed results. This research also introduces to stochastic optimisers the Sieves Method, well known in statistics: by first optimising a small subset of the variables and gradually increasing the number of variables during the optimisation process, we observe a very clear improvement on some problems. While artificial testbeds are of course really useful, they can only be a first step: in almost every case, a testbed is a collection of purely mathematical functions, from the simplest, like the sphere, to some really complex ones. The goal of designing a new optimiser, or of improving an existing one, is however, in fine, to answer real-world questions: designing a more efficient engine, finding the correct parameters of a physical model, or organising data into clusters. Stochastic optimisers are used on such problems, in research and industry, but in most instances an optimiser is chosen almost arbitrarily. We know how optimisers compare on artificial functions, but almost nothing is known about their performance on real-world problems. One of the main aspects of the research exposed here is to compare some of the most used optimisers in the literature on problems inspired by, or coming directly from, the real world. On those problems, we additionally test the efficiency of quasi-random mutations in CMA-ES and of the Sieves Method.
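As an illustration of the quasi-random mutation idea, the sketch below replaces i.i.d. Gaussian draws with scrambled Sobol points mapped through the normal quantile function; how the thesis wires these draws into CMA-ES may differ.

```python
import numpy as np
from scipy.stats import norm, qmc

def quasi_random_gaussians(n, dim, seed=0):
    """Generate n quasi-random standard-normal vectors by mapping a
    scrambled Sobol sequence through the normal quantile function;
    this shows only the low-discrepancy replacement for i.i.d. draws."""
    sobol = qmc.Sobol(d=dim, scramble=True, seed=seed)
    u = sobol.random(n)                       # points in (0, 1)^dim
    u = np.clip(u, 1e-12, 1 - 1e-12)          # guard against ppf(0) = -inf
    return norm.ppf(u)

# In a CMA-ES-like step, candidates are mean + sigma * C^{1/2} z with
# z drawn quasi-randomly instead of from np.random.
mean, sigma = np.zeros(5), 0.5
C_sqrt = np.eye(5)                            # identity covariance for the sketch
z = quasi_random_gaussians(8, 5)
candidates = mean + sigma * z @ C_sqrt.T
```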
100
Black-box optimization of simulated light extraction efficiency from quantum dots in pyramidal gallium nitride structures. Olofsson, Karl-Johan. January 2019
Micro-sized hexagonal gallium nitride pyramids show promise for next-generation light-emitting diodes (LEDs) due to certain quantum properties within the pyramids. One metric for evaluating the efficiency of an LED device is its light extraction efficiency (LEE). The LEE for different pyramid designs can be calculated with simulations using the FDTD method. Maximizing the LEE is treated as a black-box optimization problem, using an interpolation method based on radial basis functions. A simple heuristic is implemented and tested for various pyramid parameters. The LEE is shown to depend strongly on pyramid size, source position and polarization. Under certain circumstances, an LEE of over 17% is found above the pyramid. In some situations, however, the results are very sensitive to the simulation parameters and do not converge properly; establishing convergence for all simulation evaluations requires further care. The results imply that a high LEE for the pyramids is possible, which motivates further research.
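A bare-bones sketch of RBF-surrogate black-box optimization might look as follows; the thesis's heuristic, initial design and exploration strategy are not reproduced, and the toy objective stands in for the expensive FDTD simulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def rbf_surrogate_optimize(expensive_f, bounds, n_init=8, n_iter=20, seed=0):
    """Generic surrogate loop: fit an RBF interpolant to all evaluated
    points, minimize the cheap surrogate, evaluate the true function at
    the surrogate minimizer, and refit. (Bare skeleton only.)"""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([expensive_f(x) for x in X])
    for _ in range(n_iter):
        surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
        x0 = rng.uniform(lo, hi)              # random restart each round
        res = minimize(lambda x: surrogate(x[None])[0], x0,
                       bounds=list(zip(lo, hi)))
        X = np.vstack([X, res.x])
        y = np.append(y, expensive_f(res.x))
    return X[np.argmin(y)], y.min()

# Toy stand-in for the FDTD simulation: minimize a negative "LEE".
toy = lambda x: -np.exp(-np.sum((x - 0.3) ** 2))
best_x, best_y = rbf_surrogate_optimize(toy, bounds=[(0, 1), (0, 1)])
```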