1

Evaluating Automatic Model Selection

PENG, SISI January 2011
In this paper, we briefly describe the automatic model selection provided by Autometrics in the PcGive program. The modeler only needs to specify the initial model and the significance level at which to reduce it; the algorithm does the rest. The properties of Autometrics are discussed, its background concepts are explained, and we examine whether the models it selects perform well. For a given data set, we use Autometrics to find a “new” model and compare it with a model previously selected by another modeler, asking whether Autometrics can also find models that fit the given data better. Three examples are chosen as illustrations. Autometrics is clearly labor saving and always yields a parsimonious model, which makes it a valuable instrument for social science. However, the few examples in this paper are far from enough to establish firmly that Autometrics finds better-fitting models; more evidence is needed.
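The general-to-specific reduction that Autometrics automates can be illustrated, in a much simplified form, by backward elimination on an ordinary least squares regression: starting from the full initial model, the least significant regressor is dropped and the model refitted until every remaining coefficient is significant at the chosen level. The sketch below (Python with numpy/scipy) shows only this stopping rule; the simulated data and the 5% level are illustrative assumptions, and Autometrics itself performs a considerably more elaborate multi-path search with diagnostic testing.

```python
import numpy as np
from scipy import stats

def backward_eliminate(X, y, names, alpha=0.05):
    """Simplified general-to-specific reduction: repeatedly drop the least
    significant regressor until every remaining one passes its t-test."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        n, k = Xk.shape
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (n - k)
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xk.T @ Xk)))
        pvals = 2 * stats.t.sf(np.abs(beta / se), df=n - k)
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:                 # every regressor significant: stop
            return {names[i]: (beta[j], pvals[j]) for j, i in enumerate(keep)}
        del keep[worst]                           # reduce the model and refit
    return {}

# Illustrative data: y depends on x1 and x2 only; x3 and x4 are irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200)
print(backward_eliminate(X, y, ["x1", "x2", "x3", "x4"]))
```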
2

Reference Model Based High Fidelity Simulation Modeling for Manufacturing Systems

Kim, Hansoo 12 April 2004
Today, discrete event simulation is the only reliable tool for detailed analysis of the complex behavior of modern manufacturing systems. However, building high fidelity simulation models is expensive, so it is important to improve simulation modeling productivity. In this research, we explore two approaches to that end. The first is the Virtual Factory Approach, which uses a single general-purpose, high fidelity model of a system and derives, through abstraction, the models needed for various simulation objectives. The second is the Reference Model Approach, which builds fundamental building blocks, with formal descriptions and domain knowledge, for simulation models of any system in a domain. For the Virtual Factory Approach, the challenge is to show the validity of the methodology: we develop a formal framework for the relationships between higher fidelity and lower fidelity models, and justify that models abstracted from a higher fidelity model are interchangeable with various abstract simulation models of a target system. For the Reference Model Approach, we attempt to overcome the weak points of ad-hoc modeling and develop a formal reference model and a model generation procedure for discrete part manufacturing systems, a class that covers most modern manufacturing systems.
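As a concrete illustration of what even a low fidelity building block looks like, the sketch below simulates a single workstation with an arrival queue as a discrete event model in plain Python. The arrival and service times are illustrative assumptions; a high fidelity factory model composes many such blocks and adds routing, failures, setups and operators.

```python
import heapq
import random

def simulate_station(horizon=1000.0, mean_interarrival=5.0, mean_service=4.0, seed=1):
    """Single workstation: jobs arrive, wait in a FIFO queue, get processed one at a time."""
    random.seed(seed)
    events = [(random.expovariate(1 / mean_interarrival), "arrival")]
    queue, busy = [], False
    done, started, wait_sum = 0, 0, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            queue.append(t)                       # remember the arrival time of the waiting job
            heapq.heappush(events, (t + random.expovariate(1 / mean_interarrival), "arrival"))
        else:                                     # "departure": the job in service finished
            busy = False
            done += 1
        if not busy and queue:                    # machine free and work waiting: start the next job
            arrival_time = queue.pop(0)
            wait_sum += t - arrival_time
            started += 1
            busy = True
            heapq.heappush(events, (t + random.expovariate(1 / mean_service), "departure"))
    return done, wait_sum / max(started, 1)

jobs_done, avg_wait = simulate_station()
print(f"jobs completed: {jobs_done}, average wait before service: {avg_wait:.2f}")
```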
3

Dynamic Abstraction for Interleaved Task Planning and Execution

Nyblom, Per January 2008
It is often beneficial for an autonomous agent operating in a complex environment to use different types of mathematical models to keep track of unobservable parts of the world or to perform prediction, planning and other types of reasoning. Since a model is always a simplification of something else, there is a tradeoff between a model’s accuracy and its feasibility within a given application, due to the limited computational resources available. Currently, this tradeoff is to a large extent balanced by humans, for model construction in general and for autonomous agents in particular. This thesis investigates solutions in which the agent itself takes more responsibility for balancing the tradeoff, in the context of interleaved task planning and plan execution. The components needed by an autonomous agent that performs its abstractions and constructs planning models dynamically during task planning and execution are investigated, and a method called DARE is developed as a template for handling the situations that can occur, such as unsuitable abstractions and the need to construct abstraction levels dynamically. Implementations of DARE are presented in two case studies, one with a fully observable and one with a partially observable stochastic domain, motivated by research with Unmanned Aircraft Systems. The case studies also demonstrate possible ways to perform dynamic abstraction and problem model construction in practice. / Report code: LiU-Tek-Lic-2008:21.
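The interleaving described above can be pictured as a loop in which the agent abstracts its current problem, plans on the abstract model, executes the plan, and constructs a new abstraction when the current one proves unsuitable. The sketch below is only a schematic of that loop, not the DARE method itself; make_abstraction, plan and execute_step are hypothetical placeholders.

```python
def plan_and_execute(world, make_abstraction, plan, execute_step, max_level=5):
    """Schematic loop for interleaved planning and execution with dynamic abstraction.
    make_abstraction, plan and execute_step are hypothetical placeholder callbacks."""
    level = 0                                      # start with the coarsest abstraction
    while level <= max_level:
        model = make_abstraction(world, level)     # build a planning model at this abstraction level
        steps = plan(model)
        if steps is None:                          # abstraction unsuitable: construct a finer one
            level += 1
            continue
        for step in steps:
            ok, world = execute_step(world, step)
            if not ok:                             # execution diverged from the model: re-plan
                break
        else:
            return world                           # the whole plan executed successfully
    raise RuntimeError("no suitable abstraction level found")
```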
4

A data driven approach for automating vehicle activated signs

Jomaa, Diala January 2016
Vehicle activated signs (VAS) display a warning message when drivers exceed a particular threshold. VAS are often installed on local roads to display a warning message depending on the speed of the approaching vehicles. VAS are usually powered by electricity; however, battery and solar powered VAS are also commonplace. This thesis investigated the development of an automatic trigger speed for vehicle activated signs in order to influence driver behaviour, the effect of which was measured in terms of reduced mean speed and lower standard deviation. A comprehensive understanding of the effectiveness of the trigger speed of the VAS on driver behaviour was established by systematically collecting data. Specifically, data on time of day, speed, length and direction of the vehicles were collected using Doppler radar installed at the road. A data driven calibration method for the radar used in the experiment was also developed and evaluated. Results indicate that the trigger speed of the VAS had a variable effect on drivers’ speed at different sites and at different times of the day. It is evident that the optimal trigger speed should be set near the 85th percentile speed in order to lower the standard deviation. In the case of battery and solar powered VAS, trigger speeds between the 50th and 85th percentile offered the best compromise between safety and power consumption. Results also indicate that different classes of vehicles differ in mean speed and standard deviation; on a highway, the mean speed of cars differs only slightly from the mean speed of trucks, whereas a significant difference was observed between vehicle classes on local roads. A differential trigger speed was therefore investigated for the sake of completeness. A data driven approach using Random forest was found to be appropriate for predicting trigger speeds with respect to vehicle type and traffic conditions. The fact that the predicted trigger speed was consistently around the 85th percentile speed justifies the choice of the automatic model.
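The percentile rules reported above can be reproduced directly from logged radar data: compute the 85th percentile of observed speeds per site, or per vehicle class and time of day, and use it as the trigger. The sketch below (numpy and scikit-learn) does this on simulated data; the log layout, the length threshold separating cars from trucks and the feature choice are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assumed log format: one row per vehicle = hour of day, vehicle length (m), speed (km/h).
rng = np.random.default_rng(42)
hours = rng.integers(0, 24, 5000)
lengths = rng.uniform(3.5, 18.0, 5000)                      # length as a rough proxy for class
speeds = rng.normal(52, 8, 5000) - 3.0 * (lengths > 7.5)    # trucks slightly slower

# Fixed trigger: the site-wide 85th percentile speed.
site_trigger = np.percentile(speeds, 85)

# Differential trigger: 85th percentile per (hour, class) cell, used as a
# training target so the model can interpolate to unseen conditions.
is_truck = (lengths > 7.5).astype(int)
targets = np.empty_like(speeds)
for h in range(24):
    for c in (0, 1):
        mask = (hours == h) & (is_truck == c)
        targets[mask] = np.percentile(speeds[mask], 85) if mask.any() else site_trigger

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(np.column_stack([hours, is_truck]), targets)
print(f"site trigger: {site_trigger:.1f} km/h,",
      f"predicted truck trigger at 08h: {model.predict([[8, 1]])[0]:.1f} km/h")
```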
5

Investigation of similarity-based test case selection for specification-based regression testing.

OLIVEIRA NETO, Francisco Gomes de. 10 April 2018
During software maintenance, several modifications can be performed on a specification model in order to satisfy new requirements. Performing regression testing on modified software is known to be a costly and laborious task. Test case selection, test case prioritization and test suite minimisation, among other methods, aim to reduce these costs by selecting or prioritizing a subset of test cases so that less time, effort and thus money are involved in performing regression testing. In this doctorate research, we explore the general problem of automatically selecting test cases in a model-based testing (MBT) process where specification models have been modified. Our technique, named Similarity Approach for Regression Testing (SART), selects a subset of test cases traversing modified regions of a software system’s specification model. That strategy relies on similarity-based test case selection, in which similarities between test cases from different software versions are analysed to identify modified elements of a model. In addition, we propose an evaluation approach named Search Based Model Generation for Technology Evaluation (SBMTE) that is based on stochastic model generation and search-based techniques to generate large samples of realistic models, allowing experiments with model-based techniques. Based on SBMTE, researchers can develop model generator tools that create a space of models based on statistics from real industrial models, and then generate samples from that space in order to perform experiments. Here we developed a generator that creates instances of Annotated Labelled Transition Systems (ALTS), used as input for our MBT process, and then performed an experiment with SART. In this experiment, we concluded that SART’s test suite size reduction is robust, selecting subsets with on average 92% fewer test cases while ensuring coverage of all model modifications and revealing defects linked to them. Both SART and our experiment are executable through the LTS-BT tool, enabling researchers to use our selection strategy and reproduce our experiment.
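The core idea of similarity-based selection can be illustrated in a few lines of Python: represent each test case by the set of model transitions it traverses, score each new-version test case against the old suite with a set similarity, and keep the tests that are least similar to anything already covered, since they are the ones most likely to exercise modified regions. This is a generic sketch of the idea with made-up test cases, not the SART algorithm as defined in the thesis.

```python
def jaccard(a, b):
    """Set similarity between two transition sequences."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def select_for_regression(old_suite, new_suite, keep_ratio=0.1):
    """Keep the new-version test cases least similar to the old suite."""
    scored = []
    for name, transitions in new_suite.items():
        best_match = max(jaccard(transitions, old) for old in old_suite.values())
        scored.append((best_match, name))            # low score = likely covers modified parts
    scored.sort()
    k = max(1, int(len(scored) * keep_ratio))
    return [name for _, name in scored[:k]]

# Toy example: transition labels traversed by three old and three new test cases.
old = {"t1": ["a", "b", "c"], "t2": ["a", "d"], "t3": ["b", "e"]}
new = {"t1'": ["a", "b", "c"], "t2'": ["a", "d", "f"], "t3'": ["g", "h"]}
print(select_for_regression(old, new, keep_ratio=0.34))   # -> ["t3'"], the most dissimilar test
```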
6

Approche intégrée pour l'analyse de risques et l'évaluation des performances : application aux services de stérilisation hospitalière / Integrated approach for risk analysis and performance evaluation : application to hospital sterilization services

Negrichi, Khalil 08 December 2015
Sterilization services are vulnerable to risks, due to the contagious nature of their environment and to the degradation that risks can cause to their performance and to the safety of patients and staff. The risks in these facilities range from equipment failure to the transmission of nosocomial infections and diseases. In this kind of high risk environment, these services are also required to maintain an adequate level of performance to ensure continuity of care in operating theaters. This research focuses on the development of an integrated approach for risk analysis and performance assessment. The work is part of a collaboration between the G-SCOP laboratory and the sterilization service of the University Hospital of Grenoble, which was the case study chosen to implement the proposed approach. The approach we propose is conducted in several steps. First, following a comparison of risk analysis methods, we chose a model driven approach called FIS (Function Interaction Structure). Based on FIS, we developed a risk model of the Grenoble University Hospital sterilization service; this model describes the functions, the resources needed to achieve these functions, and the various risks that may be encountered. Secondly, we introduced a new view of the FIS model dedicated to describing the dynamic behaviour of the resulting risk model. This dynamic model can simulate the behaviour of the sterilization service in both normal operating situations and risk situations. To do this, we introduced a new Petri net class, PTPS (Predicate-Transition, Prioritized, Synchronous) Petri nets, to represent and simulate the dynamic behaviour of the FIS model. Subsequently, we automated the transition from the risk model to the dynamic model; this automation is performed by a set of translation algorithms capable of automatically converting the FIS model into a PTPS Petri net simulation model. This approach resulted in SIM-RISK, a tool for modelling and simulation in degraded mode. We also showed the usefulness of this tool through examples based on the different risks encountered in the sterilization service.
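A plain place/transition Petri net with transition priorities, the backbone on which the PTPS class adds predicates and synchronous firing, can be simulated in a few lines. The net below is a toy washer/sterilizer cycle with made-up places and token counts; the PTPS-specific features (predicates on tokens, synchronous steps) are not modelled here.

```python
# Minimal prioritized place/transition Petri net simulator.
# Each transition: (name, priority, {input place: tokens}, {output place: tokens}).
transitions = [
    ("load_washer",   2, {"dirty": 1, "washer_free": 1},      {"washing": 1}),
    ("unload_washer", 1, {"washing": 1},                      {"clean": 1, "washer_free": 1}),
    ("sterilize",     1, {"clean": 1, "sterilizer_free": 1},  {"sterile": 1, "sterilizer_free": 1}),
]
marking = {"dirty": 3, "washer_free": 1, "sterilizer_free": 1,
           "washing": 0, "clean": 0, "sterile": 0}

def enabled(t, m):
    return all(m.get(p, 0) >= n for p, n in t[2].items())

def fire(t, m):
    for p, n in t[2].items():
        m[p] -= n
    for p, n in t[3].items():
        m[p] = m.get(p, 0) + n

step = 0
while True:
    candidates = [t for t in transitions if enabled(t, marking)]
    if not candidates:
        break                                      # no transition enabled: the net is done
    t = max(candidates, key=lambda t: t[1])        # highest priority transition fires first
    fire(t, marking)
    step += 1
    print(f"step {step}: fired {t[0]:<14} marking={marking}")
```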
7

Genetické programování - Java implementace / Genetic programming - Java implementation

Tomaštík, Marek January 2013
This Master's thesis implements a computer program in Java for automatic model generation, especially for symbolic regression problems. The thesis includes a short description of genetic programming (GP) and an own implementation with advanced GP techniques (non-destructive operations, elitism, expression reduction). A mathematical model is generated by symbolic regression specifically for the chosen data set. Test tasks are used to verify correct functioning, and optimal settings are found for the chosen GP parameters.
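The thesis's program is written in Java; the compact Python sketch below shows the mechanics of a basic GP loop for symbolic regression (random expression trees, subtree mutation, truncation selection with elitism). The function set, depth limit and target data are illustrative assumptions, and features such as non-destructive operations and expression reduction mentioned in the abstract are not reproduced here.

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree as nested tuples (op, left, right)."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=3):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def fitness(tree, data):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

# Target data sampled from an assumed ground truth y = x*x + 2x on [-2, 2].
data = [(x / 4, (x / 4) ** 2 + 2 * (x / 4)) for x in range(-8, 9)]
random.seed(3)
population = [random_tree() for _ in range(200)]
for _ in range(40):
    population.sort(key=lambda t: fitness(t, data))
    parents = population[:50]                      # truncation selection keeps the elite
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]
best = min(population, key=lambda t: fitness(t, data))
print("best expression:", best, "error:", round(fitness(best, data), 4))
```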
8

Fatores determinantes do nível do risco Brasil / Determinants of the Brazil country risk level

Costa, Marisa Gomes da 01 February 2016
This study aims to identify the determinants of the Brazil country risk level during the period from February 1995 to August 2015, based on deviations from the covered interest rate parity condition. These deviations represent a measure of the risk assumed by an investor who chooses to invest in a Brazilian security in Brazil rather than abroad. Using Autometrics, an algorithm for automatic model selection developed by Doornik (2009), thirty-nine explanatory variables were selected from previous studies. The Brazil country risk level is susceptible to changes in the balance of payments, imports relative to GDP, the previous period's deviation from covered interest rate parity, the inflation rate, the change in exports (in value and in volume), total debt relative to GDP, and external debt relative to exports.
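The risk measure referred to here is the deviation from covered interest rate parity: the return on the domestic asset minus the return on the foreign asset fully hedged through the forward exchange market, so that a positive deviation reflects the premium for holding the Brazilian security onshore. A small sketch with made-up rates and exchange rates, not figures from the study:

```python
def cip_deviation(i_domestic, i_foreign, spot, forward):
    """Deviation from covered interest rate parity for one period:
    domestic return minus the fully hedged foreign return."""
    hedged_foreign_return = (forward / spot) * (1 + i_foreign) - 1
    return i_domestic - hedged_foreign_return

# Illustrative (made-up) monthly figures: Brazilian rate 1.1%, US rate 0.2%,
# spot BRL/USD 3.90, one-month forward 3.93.
dev = cip_deviation(i_domestic=0.011, i_foreign=0.002, spot=3.90, forward=3.93)
print(f"covered parity deviation: {dev:.4%} per month")
```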
