About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Inductive Program Synthesis with a Type System

Torres Padilla, Juan Pablo January 2019
No description available.
82

Analyse et optimisation d'algorithmes pour l'inférence de modèles de composants logiciels / Analysis and optimization of software model inference algorithms

Irfan, Muhammad Naeem 19 September 2012
Components-Off-The-Shelf (COTS) are used for rapid and cost-effective development of software systems. It is important to test that such components function correctly in their new environment. For third-party software components, source code, complete specifications and models are not available; in the literature, such systems are referred to as black-box software components. Their proper functioning in a new environment can be tested with black-box techniques such as comparison testing, fuzz testing and model-based testing. Model-based testing requires software models that represent the desired behavior of a system under test (SUT). A software model describes which sets of inputs are applicable to the SUT and how it behaves when these inputs are applied under different circumstances. For black-box systems, models can be learned from behavioral traces, available specifications, expert knowledge and similar sources, and these models then steer the testing of the system. Model inference algorithms extract structural and design information of a software system and present it as a formal model, so that the learned abstract model is consistent with the behavior of the particular software system. However, learned models are rarely complete, and it is difficult to calculate the number of tests required to learn a precise and complete model of a software system.

This thesis provides an analysis of, and improvements to, the Mealy adaptation of the model inference algorithm L* [Angluin 87], with the goal of reducing the number of tests required to learn models of software systems. The Mealy adaptation of L* learns models by asking two types of tests: output queries, used to construct models, and counterexamples, used to test the correctness of the conjectured models. The algorithm records the answers to output queries in an observation table.

Processing a counterexample may require a large number of output queries. The thesis addresses this problem and proposes a technique that processes counterexamples efficiently. We also observe that learning a model does not require asking output queries for every row and column of the observation table, and we propose a learning algorithm that avoids such superfluous queries. In some cases, searching for counterexamples can be very expensive, so we present a method that learns models without asking for or processing counterexamples. This may add many columns to the observation table, but in practice output queries need not be asked for all of the table cells, and the technique asks only the necessary ones.

These contributions reduce the number of tests required to learn software models, thus improving the worst-case learning complexity. We present the extensions made to the RALT tool to implement these algorithms, and validate them on examples such as buffers, vending machines, mutual exclusion protocols and schedulers.
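To make the observation-table machinery concrete, here is a minimal Python sketch of the bookkeeping behind Mealy L*-style learning. It is an illustration under simplifying assumptions, not the thesis's algorithm: `sut` is a hypothetical callable that answers one output query, and the prefixes, extensions and suffixes are sequences of abstract inputs.

```python
def fill_table(prefixes, suffixes, sut, table):
    """Ask output queries only for cells that are still empty."""
    for p in prefixes:
        for s in suffixes:
            if (p, s) not in table:
                table[(p, s)] = sut(p + s)   # one output query per empty cell
    return table

def row(prefix, suffixes, table):
    """Rows with equal answers suggest the same state of the inferred machine."""
    return tuple(table[(prefix, s)] for s in suffixes)

def is_closed(prefixes, extensions, suffixes, table):
    """Closed: every one-input extension behaves like some known prefix;
    an extension with a fresh row becomes a new state of the conjecture."""
    known = {row(p, suffixes, table) for p in prefixes}
    return all(row(e, suffixes, table) in known for e in extensions)
```

Skipping queries for cells whose answers are already known, as `fill_table` does, is the kind of saving the thesis generalizes to whole rows and columns of the table.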
83

Black-Box Modeling of the Air Mass-Flow Through the Compressor in A Scania Diesel Engine / Svartboxmodellering av luftmassflödet förbi kompressorn i en Scania dieselmotor

Törnqvist, Oskar January 2009
Stricter emission legislation for heavy trucks, combined with customers' demand for low fuel consumption, has resulted in intensive technical development of engines and their control systems. Controlling all these new solutions requires reliable models of important control variables. One of them is the air mass-flow, which matters when controlling the amount of recirculated exhaust gases in the EGR system and when ensuring that the air-to-fuel ratio in the cylinders is correct. The purpose of this thesis was to use system identification theory to develop a model of the air mass-flow through the compressor. First, linear black-box models were developed without any knowledge of the underlying physics. The collected data was preprocessed for the modeling procedure, and models with one or more inputs were then built according to the ARX model structure. To further improve the models' performance, non-linear regressors were derived from physical relations for the air mass-flow and used to form grey-box models of the air mass-flow. Finally, the performance was evaluated by comparing the estimated air mass-flow from the best model with the estimate produced by an extended Kalman filter together with a physical model.
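As an illustration of the linear black-box step, the following is a minimal sketch of ARX identification by least squares. The signal names `u` and `y` are hypothetical stand-ins for the preprocessed input and the measured air mass-flow; the thesis's actual model orders and signals are not reproduced here.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of the ARX(na, nb) structure:
    y[t] = -a1*y[t-1] - ... - a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]."""
    n = max(na, nb)
    rows = []
    for t in range(n, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]  # autoregressive part
        past_u = [u[t - i] for i in range(1, nb + 1)]   # exogenous input part
        rows.append(past_y + past_u)
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(y[n:]), rcond=None)
    return theta   # [a1..a_na, b1..b_nb]; validate on held-out data
```

The fitted parameters would then be judged by comparing one-step-ahead predictions against validation data, in the spirit of the evaluation described above.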
84

Black-Box Model Development of the JAS 39 Gripen Fuel Tank Pressurization System : Intended for a Model-Based Diagnosis System / Black-box-modellering av tanktrycksättning hos bränslesystemet i JAS 39 Gripen : Avsedd för ett modellbaserat diagnossystem

Kensing, Vibeke January 2002
The objective of this thesis is to build a black-box model of the tank pressurization system in JAS 39 Gripen. The model is intended for use in an existing diagnosis system for the security control of the tank pressurization system. The tank pressurization system is a MIMO system, which complicates the identification process when the best model is to be chosen; this thesis walks through the identification procedure for a MIMO system. Testing the diagnosis system with the created black-box model shows that the model is good enough: the diagnosis system takes the right decisions in the performed simulations. This suggests that system identification can be a good alternative to physical modelling for a real-time model. The disadvantage of the black-box model is that it is less accurate in steady state than the previously used physical model; the advantage is that it is faster. The diagnosis system and the model developed in this thesis are not directly applicable to the real system today: the model has to be redesigned on the real system, and the diagnosis system has to be redesigned so that general flight cases, not only the security control, can be supervised. However, experiences and choices such as the input and output signals and the choice of sample interval can be reused from this thesis when a new model is developed.
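A hedged sketch of the residual test at the heart of such a model-based diagnosis system: compare the model's prediction with the measurement and flag a fault on a persistent deviation. The function, threshold and horizon are illustrative assumptions, not the thesis's implementation.

```python
def diagnose(measured, predicted, threshold=0.05, horizon=10):
    """Signal a fault if the residual exceeds `threshold` for `horizon`
    consecutive samples, which filters out isolated noise spikes."""
    over = 0
    for y, y_hat in zip(measured, predicted):
        over = over + 1 if abs(y - y_hat) > threshold else 0
        if over >= horizon:
            return True    # persistent deviation: report a fault
    return False           # residuals consistent with the no-fault model
```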
85

Random Variate Generation by Numerical Inversion when only the Density Is Known

Derflinger, Gerhard, Hörmann, Wolfgang, Leydold, Josef January 2008
We present a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision, which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma and t distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that the algorithm always reaches the required precision. The setup time is moderate, and the marginal execution time is very fast and the same for all distributions. Thus, when large samples with fixed parameters are required, the proposed algorithm is the fastest inversion method known. Speed-up factors of up to 1000 are obtained compared to inversion algorithms developed for specific distributions. This makes our algorithm especially attractive for the simulation of copulas and for quasi-Monte Carlo applications. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
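The following Python sketch illustrates the underlying idea with deliberately crude numerics: the density is integrated to tabulate the CDF, and the inverse CDF is evaluated at uniform variates by interpolation. The published algorithm instead uses Gauss-Lobatto integration and polynomial interpolation of the inverse CDF with user-selected precision; the trapezoid rule and linear interpolation here are simplifying assumptions.

```python
import numpy as np

def make_sampler(pdf, lo, hi, n=4096):
    """Tabulate the CDF of `pdf` on [lo, hi] by the trapezoid rule and return
    a sampler that inverts it by linear interpolation."""
    x = np.linspace(lo, hi, n)
    f = pdf(x)
    cdf = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * np.diff(x) / 2)))
    cdf /= cdf[-1]                       # normalizing also absorbs an
                                         # unnormalized density
    def sample(size, rng=np.random.default_rng()):
        u = rng.uniform(size=size)
        return np.interp(u, cdf, x)      # evaluate the tabulated inverse CDF
    return sample

# Example: draws from the standard normal, given only its unnormalized density.
sampler = make_sampler(lambda x: np.exp(-x * x / 2), -8.0, 8.0)
draws = sampler(10_000)
```

Because the per-draw work is a single table lookup and interpolation, the marginal cost is the same regardless of the distribution, which is the property that makes inversion methods of this kind attractive for large fixed-parameter samples.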
86

Towards a comprehensive framework for co-simulation of dynamic models with an emphasis on time stepping

Hoepfer, Matthias 08 July 2011
Over the last two decades, computer modeling and simulation have evolved into the tools of choice for the design and engineering of dynamic systems. With increased system complexity, modeling and simulation become essential enablers for the design of new systems. Among the advantages of modeling- and simulation-based system design are the replacement of physical tests to ensure product performance, reliability and quality; the shortening of design cycles due to the reduced need for physical prototyping; design for mission scenarios; the exploration of technologies that do not yet exist; and the reduction of technological and financial risks.

Traditionally, dynamic systems are modeled in a monolithic way: a monolithic model includes all the data, relations and equations necessary to represent the underlying system. As these models grow more complex, the monolithic approach reaches limits regarding, for example, model handling and maintenance. Furthermore, while available computer power has been steadily increasing according to Moore's Law (roughly a doubling in computational power every two years), the ever-increasing complexity of new models has negated the increased resources. Lastly, modern systems and design processes are interdisciplinary, requiring models flexible enough to incorporate different modeling and design approaches.

The solution to bypassing the shortcomings of monolithic models is co-simulation. In a very general sense, co-simulation addresses the issue of linking different dynamic sub-models into a model that represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications arise when sub-models display behaviors such as algebraic loops, singularities, or constraints.

This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, gives a comprehensive overview of what co-simulation is and what it is not, creates a taxonomy of the requirements and limits of co-simulation and of the issues that arise when co-simulating sub-models, and investigates possible solutions to the stated problems. A particular focus is given to time stepping: it is shown that for dynamic models the selection of the simulation time step is crucial with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
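As an illustration of the time stepping problem, here is a hedged sketch of an adaptive macro-step loop for two coupled sub-models. The `save`/`restore`/`step` interface and the scalar coupling outputs are assumptions made for the sketch; the thesis's proposed algorithm for unknown dynamic sub-models is not reproduced here. Since the sub-models are black boxes, the error is estimated by step doubling: one full macro step is compared against two half steps.

```python
# Hypothetical sub-model interface: save() -> state, restore(state),
# step(u, t, dt) -> scalar coupling output at time t + dt.

def macro_step(a, b, ya, yb, t, dt, tol, dt_min=1e-6):
    """One adaptive macro step: accept if the step-doubling error estimate
    is below `tol`, otherwise halve the step and retry."""
    sa, sb = a.save(), b.save()
    ya1 = a.step(yb, t, dt)                        # one full macro step
    yb1 = b.step(ya, t, dt)
    a.restore(sa); b.restore(sb)
    ya_h = a.step(yb, t, dt / 2)                   # two half steps, exchanging
    yb_h = b.step(ya, t, dt / 2)                   # coupling data in between
    ya2 = a.step(yb_h, t + dt / 2, dt / 2)
    yb2 = b.step(ya_h, t + dt / 2, dt / 2)
    err = max(abs(ya1 - ya2), abs(yb1 - yb2))      # step-doubling estimate
    if err > tol and dt > dt_min:                  # reject and retry smaller
        a.restore(sa); b.restore(sb)
        return macro_step(a, b, ya, yb, t, dt / 2, tol, dt_min)
    dt_next = min(2 * dt, dt * (tol / max(err, 1e-12)) ** 0.5)
    return ya2, yb2, t + dt, dt_next               # accept; suggest next step
```

The rollback requirement is exactly what makes step-size control hard in practice: many real sub-models cannot be restored to an earlier state, which is one reason time stepping deserves the emphasis it receives here.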
88

Approaches for Automated Software Security Evaluations

Poller, Andreas 23 October 2006
As a consequence of the rapidly increasing interconnection of computer systems in networks, the ability to access programs running on these machines is becoming more and more independent of the ability to physically access them. Formerly sufficient physical access controls must therefore be replaced by logical access controls, which ensure that computer systems are used only for their intended purpose and that stored data are handled securely and confidentially. The effectiveness of such logical protection mechanisms is verified by software security tests, which probe whether security functions can be bypassed, in particular by exploiting software errors. This diploma thesis examines approaches for the automation of software security tests with respect to their effectiveness and applicability. The results are used to introduce a requirement and evaluation model for the qualitative analysis of such security-evaluation automation approaches. The thesis further argues that a highly automated software security evaluation is not a sensible development goal, given the cost-benefit ratio to be expected from pursuing it. Based on this assertion, the thesis discusses how to combine the capabilities of a human tester and a software evaluation assistance system in an efficient test process. Building on these considerations, the design and implementation of a software security evaluation system, developed as a prototype for this thesis, is described. The system involves the human tester significantly in the evaluation process but provides automation where possible. This proof-of-concept prototype is also evaluated with regard to its practical applicability.
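As one example of a building block that does lend itself to automation, the sketch below shows a minimal random-mutation fuzzer. The `target` function is a hypothetical routine under test; in the division of labor argued for above, a human tester would choose the seeds and triage the findings.

```python
import random
import string

def mutate(seed: str, rng: random.Random) -> str:
    """Randomly replace a few characters of a (non-empty) seed input."""
    chars = list(seed)
    for _ in range(rng.randint(1, 4)):
        chars[rng.randrange(len(chars))] = rng.choice(string.printable)
    return "".join(chars)

def fuzz(target, seed: str, iterations: int = 1000):
    """Run mutated inputs against `target`; collect crashes for human triage."""
    rng = random.Random(42)                  # fixed seed: reproducible runs
    failures = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception as exc:             # a crash is a finding, not a
            failures.append((case, exc))     # verdict; a human triages it
    return failures
```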
89

Variants of Transformed Density Rejection and Correlation Induction

Leydold, Josef, Janka, Erich, Hörmann, Wolfgang January 2001
In this paper we present some variants of transformed density rejection (TDR) that provide more flexibility (including the possibility to halve the expected number of uniform random numbers) at the expense of slightly higher memory requirements. Using a synchronized first stream of uniform variates and a second auxiliary stream (as suggested by Schmeiser and Kachitvichyanukul (1990)), TDR is well suited for correlation induction. Thus, high positive and negative correlation between two streams of random variates with the same or different distributions can be induced. The software can be downloaded from the UNURAN project page. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
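A small sketch of correlation induction with a synchronized uniform stream, using inversion in place of TDR (inversion makes the synchronization trivial and suffices to show the effect): feeding the same uniforms through two inverse CDFs induces positive correlation, while using u and 1-u (antithetic variates) induces negative correlation.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)        # one synchronized stream of uniforms

x = norm.ppf(u)                      # standard normal variates
y_pos = gamma.ppf(u, a=2.0)          # gamma variates from the same uniforms
y_neg = gamma.ppf(1.0 - u, a=2.0)    # antithetic uniforms for the gamma stream

print(np.corrcoef(x, y_pos)[0, 1])   # strong positive correlation
print(np.corrcoef(x, y_neg)[0, 1])   # strong negative correlation
```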
90

Model-Based Test Case Generation for Real-Time Systems

Hessel, Anders January 2007
Testing is the dominant verification technique used in the software industry today. The use of automatic test case execution is increasing, but the creation of test cases remains manual and is thus error-prone and expensive. To automate the generation and selection of test cases, model-based testing techniques have been suggested. This thesis addresses two central problems in model-based testing: how to formally specify coverage criteria, and how to generate a test suite from a formal timed system model such that the test suite satisfies a given coverage criterion. We use model checking techniques to explore the state space of a model until a set of traces is found that together satisfy the coverage criterion. A key observation is that a coverage criterion can be viewed as consisting of a set of items, which we call coverage items, and that each coverage item can be treated as a separate reachability problem. Based on this view of coverage items we define a language, in the form of parameterized observer automata, to formally describe coverage criteria, and we show that the language is expressive enough to describe a variety of common coverage criteria from the literature. Two algorithms for test case generation with observer automata are presented. The first returns a trace that satisfies all coverage items at minimum cost; we use it to generate a test suite with minimal execution time. The second explores only states that may increase the already found set of coverage items, and works well together with observer automata. The developed techniques have been implemented in the tool CoVer, which has been used in a case study with Ericsson in which a WAP gateway was tested. The case study shows that the techniques are of industrial strength.
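A toy sketch of the coverage-item view: each item, here a model transition to be covered, is a separate reachability question, and a trace is checked for the items it witnesses. The thesis's parameterized observer automata are far more expressive; this merely illustrates the underlying idea.

```python
def covered_items(trace, items):
    """`trace` is a sequence of (state, action, next_state) steps; `items`
    is the set of transitions the coverage criterion asks for."""
    seen = set()
    for step in trace:
        if step in items:
            seen.add(step)               # this trace witnesses the item
    return seen

def suite_satisfies(traces, items):
    """A suite satisfies the criterion when its traces jointly cover every
    item, mirroring 'a set of traces that together satisfy' above."""
    witnessed = set()
    for trace in traces:
        witnessed |= covered_items(trace, items)
    return witnessed == set(items)
```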
