21.
MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels in Stacking Ensemble Learning. Ploshchik, Ilya. January 2023.
Stacking, also known as stacked generalization, is an ensemble learning method in which multiple base models are trained on the same dataset and their predictions are used as input for one or more metamodels in an extra layer. This technique can improve performance compared to single-layer ensembles, but it often requires a time-consuming trial-and-error process. The previously developed Visual Analytics system StackGenVis was therefore designed to help users select the most effective and diverse set of models and measure their predictive performance. However, StackGenVis was developed with only one metamodel: Logistic Regression. The focus of this Bachelor's thesis is to examine how alternative metamodels affect the performance of stacked ensembles, using a visualization tool called MetaStackVis. Our interactive tool facilitates visual examination of individual metamodels and pairs of metamodels based on their predictive probabilities (or confidence), various supported validation metrics, and their accuracy in predicting specific problematic data instances. The efficiency and effectiveness of MetaStackVis are demonstrated with an example based on a real healthcare dataset. The tool has also been evaluated through semi-structured interview sessions with Machine Learning and Visual Analytics experts. In addition to this thesis, we have written a short research paper explaining the design and implementation of MetaStackVis. This thesis, however, provides further insights into the topic explored in the paper by offering additional findings and in-depth analysis, and can thus be considered a supplementary source for readers interested in diving deeper into the subject.
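As a rough illustration of what varying the metamodel in a stacked ensemble means in practice, the sketch below swaps the final estimator of a stacking classifier and compares cross-validated accuracy. It uses scikit-learn and a bundled dataset as stand-ins; it is not the StackGenVis or MetaStackVis implementation, and the base models, candidate metamodels, and dataset are illustrative assumptions.

```python
# Minimal sketch: same base models, different metamodels in the extra stacking layer.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in healthcare-style dataset

base_models = [("rf", RandomForestClassifier(random_state=0)),
               ("svc", SVC(probability=True, random_state=0))]

# Candidate metamodels for the extra layer.
metamodels = {"logistic regression": LogisticRegression(max_iter=1000),
              "gradient boosting": GradientBoostingClassifier(random_state=0)}

for name, meta in metamodels.items():
    stack = StackingClassifier(estimators=base_models, final_estimator=meta, cv=5)
    score = cross_val_score(stack, X, y, cv=5).mean()
    print(f"metamodel = {name}: accuracy = {score:.3f}")
```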
22.
Development of origami crash box through metamodels. Silva, José Eduardo Corrêa Santana e. 04 April 2019.
This research begins with historical background and motivation, followed by a literature review of the topics discussed: vehicle safety, crash boxes, crashworthiness, energy absorbers, impact tubes, metamodels, genetic algorithms, Design of Experiments (DoE), origami and engineering, and optimization methods in engineering. The author then proposes a simulation-based experiment that evaluates several origami-shaped crash boxes created by varying their dimensional parameters. Using a metamodel-based algorithm, the analysis seeks to maximize the Specific Energy Absorption (SEA) and the Load Uniformity (LU). The resulting Pareto frontier for the two objectives is analyzed against example decision criteria, and the chosen design is compared to a crash box from industry. The chosen design has a mass four times lower than the industrial crash box and a similar load uniformity. The work concludes with proposals for future research involving other available optimization methods.
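To make the surrogate-based, two-objective workflow concrete, here is a minimal sketch of the general pattern: sample designs with a Latin hypercube, fit one metamodel per objective, and filter the surrogate predictions for the Pareto frontier. The geometry parameters, response functions, and Gaussian-process surrogates are illustrative assumptions, not the finite-element models or the specific algorithm used in the thesis.

```python
# Surrogate-assisted bi-objective design sketch: fit metamodels to a small
# Design-of-Experiments sample, then screen a dense candidate set for the
# Pareto frontier of SEA (specific energy absorption) and LU (load uniformity).
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def fake_simulation(x):
    """Placeholder for a crash simulation: returns (SEA, LU) for a design x,
    where x = (wall thickness, fold angle). Responses are purely illustrative."""
    t, a = x
    sea = 30 * t * np.sin(a) + 5 * np.cos(3 * a)        # kJ/kg, made up
    lu = 1.0 / (1.0 + 2 * t) + 0.3 * np.sin(a) ** 2     # dimensionless, made up
    return sea, lu

# DoE: Latin hypercube over the two dimensional parameters (assumed bounds).
lower, upper = [1.0, 0.2], [3.0, 1.4]                   # mm, rad
doe = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(40), lower, upper)
Y = np.array([fake_simulation(x) for x in doe])

# One metamodel per objective.
gp_sea = GaussianProcessRegressor(normalize_y=True).fit(doe, Y[:, 0])
gp_lu = GaussianProcessRegressor(normalize_y=True).fit(doe, Y[:, 1])

# Cheap surrogate predictions on a dense candidate set.
grid = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(2000), lower, upper)
pred = np.column_stack([gp_sea.predict(grid), gp_lu.predict(grid)])

def pareto_mask(points):
    """Non-dominated filter, both objectives maximized."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if mask[i]:
            dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
            mask &= ~dominated
            mask[i] = True
    return mask

front = grid[pareto_mask(pred)]
print(f"{len(front)} non-dominated candidate designs")
```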
23.
Integration of mechatronic systems in information systems. Abid, Houssem. 12 January 2015.
Industrial innovation is moving toward increasingly complex mechatronic products that combine multiple disciplines. The design process for these products draws on the skills of actors from different specialties, and creating the different facets of the components requires specialized tools. Yet there is no true global integration within the information system that would allow integrated management of the various fields of know-how and expertise, despite the capabilities of systems such as PLM. This work presents a generic resolution method. Its purpose is to define a global approach for integrating mechatronic system data into a PLM system, using a specific model based on lifecycle characterization and the use of SysML. Initial implementation tests within the Windchill PLM system showed that it is possible to integrate, with a semantic structure, links between multidisciplinary business objects.
24.
Application of bridge-specific fragility analysis in the seismic design process of bridges in California. Dukes, Jazalyn Denise. 08 April 2013.
The California Department of Transportation (Caltrans) seismic bridge design process for an Ordinary Bridge, described in the Seismic Design Criteria (SDC), directs the design engineer to meet minimum requirements resulting in the design of a bridge that should remain standing in the event of a Design Seismic Hazard. A bridge can be designed to sustain significant damage; however, it should avoid the collapse limit state, in which the bridge is unable to resist loads due to its self-weight. Seismic hazards, in the form of a design spectrum or ground motion time histories, are used to determine the demands on the bridge components and the bridge system. These demands are compared to the capacity of the components to ensure that the bridge meets key performance criteria. The SDC also specifies design detailing of various components, including abutments, foundations, hinge seats, and bent caps. The expectation in following the SDC guidelines during the design process is that the resulting bridge design will avoid collapse under anticipated seismic loads. While the code provisions prescribe analyses to follow and component detailing to adhere to in order to ensure a proper bridge design, the SDC does not provide a way to quantitatively determine whether the bridge design has met the requirement of no collapse.
The objectives of this research are to introduce probabilistic fragility analysis into the Caltrans design process and address the gap of information in the current design process, namely the determination of whether the bridge design meets the performance criteria of no-collapse at the design hazard level. The motivation for this project is to improve the designer's understanding of the probabilistic performance of their bridge design as a function of important design details. To accomplish these goals, a new bridge fragility method is presented as well as a design support tool that provides design engineers with instant access to fragility information during the design process. These products were developed for one specific bridge type that is common in California, the two-span concrete box girder bridge. The end product, the design support tool, is a bridge-specific fragility generator that provides probabilistic performance information on the bridge design. With this tool, a designer can check the bridge design, after going through the SDC design process, to determine the performance of the bridge and its components at any hazard level. The design support tool can provide the user with the probability of failure or collapse for the specific bridge design, which will give insight to the user about whether the bridge design has achieved the performance objective set out in the SDC. The designer would also be able to determine the effect of a change in various design details on the performance and therefore make more informed design decisions.
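For readers unfamiliar with fragility output, a bridge-specific fragility result is typically expressed as a lognormal curve giving the probability of exceeding a damage state (for example, collapse) as a function of an intensity measure. The sketch below evaluates such a curve at an assumed design hazard level; the median and dispersion values are illustrative assumptions, not results from this work.

```python
# Lognormal fragility curve: P(damage state exceeded | IM = x) = Phi((ln x - ln m) / beta),
# the standard form used to express probability of collapse at a given hazard level.
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    """Probability of exceeding the damage state at intensity measure `im`."""
    return norm.cdf((np.log(im) - np.log(median)) / beta)

# Illustrative parameters for a collapse damage state (NOT values from the thesis).
median_sa, beta = 1.8, 0.5        # median spectral acceleration (g) and dispersion
design_level_sa = 1.0             # assumed design hazard level, in g

print(f"P(collapse | Sa = {design_level_sa} g) = "
      f"{fragility(design_level_sa, median_sa, beta):.3f}")
```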
25.
A Systematic Process for Adaptive Concept Exploration. Nixon, Janel Nicole. 29 November 2006.
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data for the purpose of creating a low-fidelity modeling and simulation environment. By providing a more efficient means of obtaining such information, the method makes quantitative analyses far more practical for decision-making in the very early stages of design, where traditionally they are viewed as too expensive and cumbersome for concept evaluation.
The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information.
The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources to generate that information. Compared with more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system using fewer resources. For this reason, the SPACE method acts as an enabler for decision-making in the very early design stages, where the desire is to base design decisions on quantitative information without wasting valuable resources obtaining unnecessarily high-fidelity information about all the candidate solutions. Thus, the approach enables concept selection to be based on parametric, quantitative data so that informed, unbiased decisions can be made.
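A minimal sketch of sequential, adaptive sampling in the spirit described above (not the SPACE method itself): start from a small sample, fit a cheap surrogate, and repeatedly add the candidate point where the surrogate is least certain. The one-dimensional test function, surrogate choice, and refinement budget are illustrative assumptions.

```python
# Sequential adaptive sampling sketch: enrich the sample where the surrogate is
# least certain, instead of committing to one large up-front design of experiments.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_analysis(x):
    """Stand-in for a costly analysis (illustrative 1-D response)."""
    return np.sin(3 * x) + 0.3 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(4, 1))            # small initial sample
y = expensive_analysis(X).ravel()

candidates = np.linspace(0, 3, 200).reshape(-1, 1)
for _ in range(8):                            # sequential refinement budget
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]       # sample where uncertainty is highest
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_analysis(x_next[0]))

print(f"final sample size: {len(X)}")
```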
26.
System-level health assessment of complex engineered processes. Abbas, Manzar. 18 November 2010.
Condition-Based Maintenance (CBM) and Prognostics and Health Management (PHM) technologies aim to improve the availability, reliability, maintainability, and safety of systems through the development of fault diagnostic and failure prognostic algorithms. In complex engineering systems, such as aircraft and power plants, prognostic activities have been limited to the component level, primarily due to the complexity of large-scale engineering systems. However, the output of these prognostic algorithms is practically useful to system managers, operators, or maintenance personnel only if it helps them make decisions based on system-level parameters. There is therefore an emerging need to build health assessment methodologies at the system level. This research employs techniques from the field of design of experiments to build system-level response surface metamodels on the foundations provided by component-level damage models.
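As a minimal sketch of the response-surface idea, the code below regresses an assumed system-level health metric on component-level damage indicators sampled from a simple design of experiments. The indicators, the synthetic response, and the quadratic model are illustrative assumptions, not the systems or models studied in this research.

```python
# Response surface metamodel sketch: regress a system-level health metric on
# component-level damage indicators from a design-of-experiments sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
# Assumed component-level damage indicators (two degradation indices in [0, 1]).
X = rng.uniform(0, 1, size=(60, 2))
# Stand-in "true" system response with interaction and curvature, plus noise.
y = (1.0 - 0.6 * X[:, 0] - 0.3 * X[:, 1]
     - 0.4 * X[:, 0] * X[:, 1] + 0.05 * rng.normal(size=60))

# Second-order response surface (quadratic polynomial regression).
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

new_component_state = np.array([[0.7, 0.2]])
print(f"predicted system health: {surface.predict(new_component_state)[0]:.3f}")
```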
27.
Progressive Validity Metamodel Trust Region Optimization. Thomson, Quinn Parker. 26 February 2009.
The goal of this work was to develop metamodels for the MDO framework piMDO and to contribute new research in metamodeling strategies. The theory of existing metamodels is presented and implementation details are given. A new trust region scheme, metamodel trust region optimization (MTRO), was developed. This method uses a progressively increasing level of minimum validity in order to reduce the number of sample points required for the optimization process. Higher levels of validity require denser point distributions, but the shrinking size of the region during the optimization process mitigates the increase in the number of points required. New metamodeling strategies include inherited optimal Latin hypercube sampling, hybrid Latin hypercube sampling, and kriging with BFGS. MTRO performs better than traditional trust region methods for single-discipline problems and is competitive with other MDO architectures when used with a CSSO algorithm. Advanced metamodeling methods proved to be inefficient in trust region methods.
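A minimal sketch of a generic metamodel trust-region loop may help fix ideas: sample the current region with a Latin hypercube, fit a local surrogate, step to the surrogate optimum, and expand or shrink the region according to how well the surrogate predicted the actual improvement. This is the textbook pattern, not the MTRO algorithm or the piMDO implementation, and the test function, surrogate, and update constants are illustrative assumptions.

```python
# Generic metamodel trust-region loop sketch (quadratic surrogate, 2-D problem).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def expensive_f(x):
    """Stand-in for a costly discipline analysis (Rosenbrock function)."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

center, radius = np.array([-1.0, 1.5]), 1.0
f_center = expensive_f(center)

for it in range(20):
    # Sample the current trust region with a Latin hypercube; fit a local surrogate.
    doe = qmc.scale(qmc.LatinHypercube(d=2, seed=it).random(12),
                    center - radius, center + radius)
    surrogate = make_pipeline(PolynomialFeatures(2), LinearRegression())
    surrogate.fit(doe, [expensive_f(x) for x in doe])

    # Minimize the surrogate inside the region (box-constrained).
    res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0], center,
                   bounds=list(zip(center - radius, center + radius)))
    f_new = expensive_f(res.x)

    # Accept or reject the step and resize the region: actual vs. predicted gain.
    m_center = surrogate.predict(center.reshape(1, -1))[0]
    predicted = m_center - res.fun
    actual = f_center - f_new
    rho = actual / predicted if predicted > 1e-12 else 0.0
    if rho > 0.1:
        center, f_center = res.x, f_new           # accept the step
        radius *= 2.0 if rho > 0.75 else 1.0      # expand on good agreement
    else:
        radius *= 0.5                             # shrink on poor agreement

print(f"best point found: {center}, f = {f_center:.4f}")
```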
29.
Towards Attribute Grammars for Metamodel Semantics. Bürger, Christoff; Karol, Sven. 15 August 2011.
Of key importance for metamodelling are appropriate modelling formalisms. Most metamodelling languages permit the development of metamodels that specify tree-structured models enriched with semantics such as constraints, references, and operations, which extend the models to graphs. However, the semantics of these semantic constructs is often not part of the metamodel, i.e., it is unspecified. Therefore, we propose to reuse well-known compiler construction techniques to specify metamodel semantics. More precisely, we present the application of reference attribute grammars (RAGs) for metamodel semantics and analyse commonalities and differences. Our focus is to pave the way for such a combination, by exemplifying why and how the metamodelling and attribute grammar (AG) worlds can be combined and by investigating a concrete example: the combination of the Eclipse Modelling Framework (EMF) and JastAdd, an AG evaluator generator.
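As a toy illustration of the core idea (a reference attribute that extends a tree-structured model into a graph by resolving a use to its declaration on demand), here is a small Python sketch. It is not JastAdd or EMF code; the node classes and lookup rule are illustrative assumptions.

```python
# Toy illustration of a reference attribute over a tree-structured model:
# a 'decl' attribute on each VarUse resolves, on demand, to the VarDecl with the
# same name found by walking up the tree. This is the kind of graph-forming
# semantics (references over a tree) that RAGs let one specify declaratively.
from dataclasses import dataclass, field
from functools import cached_property

@dataclass
class Node:
    children: list = field(default_factory=list)
    parent: "Node | None" = None

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

@dataclass
class VarDecl(Node):
    name: str = ""

@dataclass
class VarUse(Node):
    name: str = ""

    @cached_property                # computed once, on demand (attribute evaluation)
    def decl(self):
        scope = self.parent
        while scope is not None:    # simplistic scoping: search enclosing nodes
            for child in scope.children:
                if isinstance(child, VarDecl) and child.name == self.name:
                    return child
            scope = scope.parent
        return None                 # unresolved reference

root = Node()
d = root.add(VarDecl(name="x"))
u = root.add(VarUse(name="x"))
print(u.decl is d)                  # True: the tree is extended to a graph
```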
30.
A UML metamodel for requirements modeling in multi-agent systems projects (Um metamodelo UML para a modelagem de requisitos em projetos de sistemas multiagentes). Guedes, Gilleanes Thorwald Araujo. January 2012.
This PhD thesis falls within the context of AOSE (Agent-Oriented Software Engineering), a recently emerged area concerned with the software engineering of multi-agent systems that blends concepts from Artificial Intelligence and Software Engineering. The area arose from the new challenges software engineers face when designing multi-agent systems, since this kind of system has characteristics that set it apart from other types of software, namely the presence of software agents: autonomous, proactive entities that perform functions in the system, have their own goals, and can perceive and act upon the surrounding environment without the intervention of external users. This work describes a UML metamodel developed for modeling functional requirements specific to multi-agent systems projects. Its development was motivated by the observation that, although UML-derived languages for designing multi-agent systems already exist, none of the languages studied provides mechanisms for modeling the requirements of this kind of software, which led us to create a UML metamodel for this purpose. The thesis describes the UML-derived languages studied for application to multi-agent systems design, the metamodel developed, its adaptation to Vicari's (2007) design principles, and three case studies in which the metamodel was applied, together with a proposal for mapping the concepts defined in the metamodel to the concepts of the MAS-ML and AML languages and a proposal for validating the metamodel and the diagrams created with it.