  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Co-évolution des contraintes OCL suite à l'évolution des métamodèles / Co-evolution of OCL constraints with evolution of metamodels

Khelladi, Djamel Eddine 30 September 2016 (has links)
The Model-Driven Engineering (MDE) paradigm promotes the use of models and modeling languages during the development process, aiming at better quality and productivity. Metamodels are core components of any modeling language ecosystem, defining the structural aspects of a business domain. As a complement, the Object Constraint Language (OCL) is used to specify detailed aspects of the business domain; for example, more than 750 constraints come with the UML metamodel. Unfortunately, metamodels are subject to constant change and evolution, which impacts the defined OCL constraints, which may then need to be co-evolved as well. Although several approaches have been proposed to detect metamodel changes during evolution and to use them to co-evolve OCL constraints, they still cannot detect a complete and correct evolution trace of the metamodel, and they propose a unique resolution per impacted OCL constraint whereas multiple, alternative resolutions may be applicable. In this thesis, we propose an approach to detect metamodel changes during evolution while aiming at completeness and high precision. Our detection approach considers both atomic and complex changes. In addition, we propose a dedicated approach to co-evolve OCL constraints that considers alternative resolutions and ensures that only the appropriate resolutions are proposed to the user for each impacted OCL constraint. 
Our validation shows, on the one hand, that 100% recall is always reached in our case studies, with an average precision of 70.75%, which is further improved by our heuristics up to 91% and 100% in some cases. On the other hand, on average 92% of the proposed OCL co-evolutions are syntactically correct and 93% semantically correct in our case studies. Both approaches are implemented as plugins for the Eclipse IDE, a widespread development environment for software developers. The plugins are under test by our industrial partners in the ANR MoNoGe project. This PhD resulted in eight published papers; three more are currently under submission or revision.
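The change-detection and impact analysis described in the abstract can be illustrated with a deliberately tiny sketch. This is not the thesis's actual tooling: the metamodel representation, the class and property names, and the string-based OCL matching are all invented for the example; real detection also handles complex (compound) changes.

```python
# Toy sketch: detect atomic metamodel changes between two versions and
# flag OCL-like constraints that mention a removed class or property.

def detect_changes(old, new):
    """Compare two {class: set(properties)} metamodels.

    Returns a list of (kind, class, detail) atomic changes.
    """
    changes = []
    for cls in old:
        if cls not in new:
            changes.append(("remove_class", cls, None))
            continue
        for prop in old[cls] - new[cls]:
            changes.append(("remove_property", cls, prop))
        for prop in new[cls] - old[cls]:
            changes.append(("add_property", cls, prop))
    for cls in new.keys() - old.keys():
        changes.append(("add_class", cls, None))
    return changes

def impacted_constraints(constraints, changes):
    """A constraint is impacted if it mentions a removed class/property."""
    impacted = []
    for expr in constraints:
        for kind, cls, prop in changes:
            if kind == "remove_class" and cls in expr:
                impacted.append(expr)
                break
            if kind == "remove_property" and cls in expr and f".{prop}" in expr:
                impacted.append(expr)
                break
    return impacted

old = {"Person": {"name", "age"}, "Company": {"name"}}
new = {"Person": {"name", "birthDate"}, "Company": {"name", "vat"}}
constraints = ["context Person inv: self.age >= 0",
               "context Company inv: self.name <> ''"]

changes = detect_changes(old, new)
hits = impacted_constraints(constraints, changes)
```

The thesis's contribution goes well beyond this: an impacted constraint would be offered several alternative resolutions (e.g. rename the reference to `birthDate`, or drop the invariant) rather than a single fixed one.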
52

Optimalizace materiálového toku v hromadné výrobě simulačními metodami / Optimization of Material Flow in Mass Production by Means of Simulation Methods

Hloska, Jiří January 2015 (has links)
The aim of the PhD thesis is to design a methodology for generating a material flow using a simulation meta-model of a mass production process. This methodology is in principle based on the relationship between selected material flow characteristics. Simulation of production and logistics processes is increasingly used in the planning, commissioning, and subsequent operational management and optimization of the respective technological operations, in particular in mass production. The first part of the thesis summarizes the state of the art in discrete-event simulation of material flow and the related statistical and mathematical disciplines, as well as the information technology that enables effective realization of simulation studies. Attention is also paid to significant domestic and international conferences, symposia, and interest associations related to the simulation of manufacturing processes. The next part presents the methodology for reconstructing and generating material flow using simulation meta-models developed for this purpose. The principles of the algorithms used by these meta-models, and their possible range of use, are demonstrated through simulation experiments, whose setup and results are described and discussed. Special focus is put on the selection of significant material flow characteristics and their mutual relationship. To evaluate this relationship, a series of simulation experiments was conducted using a simulation model of a closed queuing system with variable parameters. The revealed interdependence between the selected material flow characteristics is then experimentally verified using a detailed simulation model of a selected mass production system. The conclusion summarizes the findings and, with regard to the designed methodology for reconstructing and generating material flow, outlines possible further steps in both research and practical application.
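A closed queuing system of the kind used in the thesis's experiments can be sketched in a few lines. This is an illustration only: the two stations, their deterministic service times, and the carrier count are invented, whereas the thesis works with variable parameters and far richer meta-models.

```python
# Toy discrete-event simulation of a closed loop: n_carriers circulate
# between stations A and B; each station serves one carrier at a time,
# FIFO. The makespan of a given number of B-completions is one simple
# material-flow characteristic.
import heapq

def simulate(n_carriers, t_a, t_b, n_jobs):
    """Return the time at which the n_jobs-th pass through station B completes."""
    service = {"A": t_a, "B": t_b}
    free = {"A": 0.0, "B": 0.0}   # when each station next becomes idle
    events = [(0.0, i, "A") for i in range(n_carriers)]  # (arrival, id, station)
    heapq.heapify(events)
    done = 0
    while events:
        arrival, cid, station = heapq.heappop(events)
        start = max(arrival, free[station])
        finish = start + service[station]
        free[station] = finish
        if station == "B":
            done += 1
            if done == n_jobs:
                return finish
            heapq.heappush(events, (finish, cid, "A"))   # loop back
        else:
            heapq.heappush(events, (finish, cid, "B"))
    return None

# With 3 carriers, station B (3.0 time units) is the bottleneck, so in
# steady state one loop completes every 3.0 time units.
makespan = simulate(3, 2.0, 3.0, 10)
```

Varying `n_carriers` against the observed throughput is exactly the kind of relationship between material-flow characteristics that the thesis studies systematically.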
53

Redundancy and Robustness Quantification of Bridge Systems based on Reliability and Risk Approaches

Sarmiento, Silvia January 2023 (has links)
Over the last few decades, evaluating the performance of existing structures has become increasingly important, particularly as the number of bridges reaching their design life continues to rise. As a result, there is a growing need for effective and accurate procedures to guide the assessment of existing structures' capacity and safety levels so that appropriate maintenance and rehabilitation strategies can be implemented. Evaluating a structure's performance involves assessing its ability to carry loads, resist external forces, and maintain its functionality over time. This is a complex process that requires a deep understanding of the structure's behavior, as well as knowledge of the environmental conditions it is subjected to. In recent years, technological advances and an increased understanding of reliability concepts have allowed the development of more sophisticated tools and methods for structural evaluation. Engineers and researchers can thus obtain more accurate and reliable data about a structure's performance, which can inform decision-making related to maintenance, repair, and replacement. This study presents a methodology that guides the assessment of existing structures' performance effectively and accurately. Specifically, performance is measured in terms of redundancy and robustness. A comparison of existing reliability- and risk-based indicators is performed through an example application presented in one of the appended papers. The comparison gives an overview of the differences between the available measures and the type of information provided by each of them. In addition, one of the appended papers proposes a new algorithm for evaluating the failure probability. The algorithm is based on metamodel strategies and integrates the advantages of kriging, learning functions, and copula functions. 
The proposed algorithm aims to reduce the number of performance function evaluations, so that the number of model runs remains feasible when using finite element modeling (FEM). Comparing the available redundancy and robustness indicators showed that each measure provides different insights into these two structural properties. Direct comparison between them is challenging, since their units can differ, and the lack of target or standard values makes their interpretation difficult. Therefore, when using a specific indicator, the adopted definition must be clearly stated. Furthermore, the validation examples and the case study showed that the proposed algorithm obtains the failure probability accurately and efficiently. Its application resulted in a methodology that is more economical, in terms of computational cost, than other existing reliability methods.
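The core idea of reducing performance-function evaluations can be shown with a minimal sketch. This is not the thesis's kriging/learning-function/copula algorithm: the limit state, the input distribution, and the surrogate (a plain quadratic interpolant) are all invented for illustration.

```python
# Estimate a failure probability P[g(X) < 0] by running Monte Carlo on
# a cheap surrogate fitted to only three evaluations of an "expensive"
# performance function.
import random

def g_expensive(x):
    # Stand-in for a costly FEM-based performance function;
    # failure when g < 0, i.e. |x| > 2.
    return 4.0 - x * x

def quadratic_surrogate(x0, x1, x2, f):
    """Lagrange interpolation through three sampled designs."""
    y0, y1, y2 = f(x0), f(x1), f(x2)
    def s(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return s

def failure_probability(limit_state, n=100_000, seed=0):
    """Crude Monte Carlo with a single standard normal input variable."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(rng.gauss(0.0, 1.0)) < 0)
    return failures / n

s = quadratic_surrogate(-3.0, 0.0, 3.0, g_expensive)  # only 3 expensive calls
pf = failure_probability(s)   # ~2 * (1 - Phi(2)), since g is quadratic
```

The 100,000 Monte Carlo samples hit only the surrogate; the expensive function is called three times. An adaptive method as in the thesis would add samples where the surrogate is least trustworthy near the limit state.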
54

Metalearning by Exploiting Granular Machine Learning Pipeline Metadata

Schoenfeld, Brandon J. 08 December 2020 (has links)
Automatic machine learning (AutoML) systems have been shown to perform better when they use metamodels trained offline. Existing offline metalearning approaches treat ML models as black boxes. However, modern ML models often compose multiple ML algorithms into ML pipelines. We expand previous metalearning work on estimating the performance and ranking of ML models by exploiting the metadata about which ML algorithms are used in a given pipeline. We propose a dynamically assembled neural network with the potential to model arbitrary DAG structures. We compare our proposed metamodel against reasonable baselines that exploit varying amounts of pipeline metadata, including metamodels used in existing AutoML systems. We observe that metamodels that fully exploit pipeline metadata are better estimators of pipeline performance. We also find that ranking pipelines based on dataset metafeature similarity outperforms ranking based on performance estimates.
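The last finding, ranking pipelines by dataset metafeature similarity, can be sketched as a nearest-neighbour lookup. The metafeatures, dataset names, and pipeline rankings below are invented toy data, not the paper's; real systems would also normalise metafeatures before computing distances.

```python
# Recommend a pipeline ranking for a new dataset by copying the ranking
# observed on the most similar previously-seen dataset.
import math

def nearest(metafeatures, history):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(history, key=lambda d: dist(metafeatures, d["metafeatures"]))

history = [
    {"name": "d1", "metafeatures": [100.0, 5.0, 0.2],
     "ranking": ["svm", "random_forest", "knn"]},
    {"name": "d2", "metafeatures": [10000.0, 50.0, 0.8],
     "ranking": ["random_forest", "knn", "svm"]},
]

new_metafeatures = [9000.0, 40.0, 0.7]   # e.g. rows, features, class balance
best = nearest(new_metafeatures, history)
recommended = best["ranking"]
```

The thesis's proposed metamodel goes further, exploiting which algorithms compose each pipeline (its DAG structure) rather than treating pipelines as black boxes.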
55

An Empirical Study on Software Modeling Curricula

Lila, Redion, Delishi, Alban January 2022 (has links)
Background: Software Engineering is a high-demand field that is constantly changing, with new languages, tools, and frameworks. The use of software modeling in Software Engineering is essential for solving complex problems, since it helps developers understand abstraction and develop high-level code. This research presents an overview of software modeling curricula and how students benefit from attending these courses. Method: First, to assess the survey's clarity and eliminate any potential for bias, we conducted a pilot study with five teachers at Mälardalen University. We then surveyed 23 participants from 23 institutions across ten countries to answer Research Questions 2, 3, and 4. To address Research Question 1, we collected program data from the top five universities in each of ten countries. Results: We used our findings to examine how the knowledge gained in academia is used in industry once students make the transition. Through this thesis, we present the current view of software modeling curricula and their contribution to preparing students for the industry. The following significant findings were reached: (i) software modeling aids in better understanding of abstraction concepts and the development of high-level applications; (ii) UML is one of the most popular languages; and (iii) some limitations of software modeling tools include license type, outdated tools, or poorly documented tools.
56

Representation and Assisted Negotiation of Textual Agreements

Ayeleso, Emmanuel Celestine 13 November 2023 (has links)
Research into negotiation systems has primarily focused on those for e-commerce and electronic markets, where quantitative values such as prices are key to what is being negotiated. However, there is a lack of research into tool support for complex real-life negotiations of documents that contain large numbers of textual (qualitative) clauses. Examples of such text-based agreements include international trade and climate-change treaties, as well as labor-management collective agreements. Our goal is to improve the state of the art in textual negotiation technology, so it can be applied to such agreements and their negotiations. In particular, we want to develop technology that can facilitate the delicate give-and-take involving proposed changes, positions, rationale exchange, partial resolutions to disagreements, and tracking of notes taken by the negotiators, as well as the ability to search and compare all of the above to facilitate negotiations. We posit that there would be significant societal benefit, from the hyper-local to the international level, if better technology were available. We performed literature reviews of existing negotiation systems and of systems for representing legal documents to study what has been done in this domain. We also performed a grounded theory study based on interviews with people who have participated in real-life negotiations. An end-user survey of negotiation systems was also conducted and analyzed. We used the results from the literature reviews, the grounded theory study, and the survey analysis as the basis for a subsequent phase of design-science research, in which we developed use cases, requirements, and a comprehensive metamodel for qualitative negotiation tools, as well as a prototype negotiation tool.
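The kind of metamodel the abstract describes can be hinted at with a minimal, hypothetical data model. Every class and field name below is invented for illustration; the thesis's actual metamodel is far more comprehensive (notes, rationale exchange, search, comparison).

```python
# Toy model: clauses, proposed textual changes with rationale, and
# per-party positions that determine whether a disagreement is resolved.
from dataclasses import dataclass, field

@dataclass
class Clause:
    number: str
    text: str

@dataclass
class Proposal:
    clause: Clause
    proposed_text: str
    rationale: str
    positions: dict = field(default_factory=dict)  # party -> accepts?

    def resolved(self):
        """Resolved once at least one party has spoken and all accept."""
        return bool(self.positions) and all(self.positions.values())

clause = Clause("4.2", "Employees receive 20 vacation days per year.")
prop = Proposal(clause,
                "Employees receive 25 vacation days per year.",
                "Align with industry norms.")
prop.positions["union"] = True
prop.positions["employer"] = False    # still under negotiation
```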
57

Metamodel-Based Design Optimization : A Multidisciplinary Approach for Automotive Structures

Ryberg, Ann-Britt January 2013 (has links)
Automotive companies are exposed to tough competition and therefore strive to design better products in a cheaper and faster manner. This challenge requires continuous improvements of methods and tools, and simulation models are therefore used to evaluate every possible aspect of the product. Optimization has become increasingly popular, but its full potential is not yet utilized. The increased demand for accurate simulation results has led to detailed simulation models that are often computationally expensive to evaluate. Metamodel-based design optimization (MBDO) is an attractive approach to relieve the computational burden during optimization studies. Metamodels are approximations of the detailed simulation models that take little time to evaluate, and they are therefore especially attractive when many evaluations are needed, as e.g. in multidisciplinary design optimization (MDO). In this thesis, state-of-the-art methods for metamodel-based design optimization are covered and different multidisciplinary design optimization methods are presented. An efficient MDO process for large-scale automotive structural applications is developed, where aspects related to its implementation are considered. The process is described and demonstrated in a simple application example. It is found that the process is efficient, flexible, and suitable for common structural MDO applications within the automotive industry. Furthermore, it fits easily into an existing organization and product development process, and improved designs can be obtained even when using metamodels with limited accuracy. It is therefore concluded that by incorporating the described metamodel-based MDO process into the product development, there is a potential for designing better products in a shorter time.
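The MBDO idea the abstract describes can be sketched in miniature. The objective function, sample points, and surrogate type below are invented for the example (real MBDO uses crash or stiffness simulations and richer metamodels such as kriging or radial basis functions).

```python
# Sample the "expensive" simulation at a few designs, build a cheap
# piecewise-linear metamodel, then let the optimizer's many evaluations
# hit the metamodel instead of the simulation.
import bisect

def expensive_mass(t):
    # Stand-in for a costly structural simulation; true optimum at t = 1.3.
    return (t - 1.3) ** 2 + 0.5

knots = [0.5, 0.9, 1.3, 1.7, 2.1]          # five sampled designs
values = [expensive_mass(t) for t in knots]

def surrogate_mass(t):
    """Piecewise-linear interpolation between the sampled designs."""
    i = bisect.bisect_right(knots, t)
    i = min(max(i, 1), len(knots) - 1)
    x0, x1 = knots[i - 1], knots[i]
    y0, y1 = values[i - 1], values[i]
    return y0 + (y1 - y0) * (t - x0) / (x1 - x0)

# Dense search on the metamodel: 1601 cheap calls, zero extra simulations.
candidates = [0.5 + 0.001 * i for i in range(1601)]
best = min(candidates, key=surrogate_mass)
```

This also illustrates the abstract's closing observation: even a surrogate with limited accuracy (here, piecewise linear through five points) can steer the search toward an improved design.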
58

A systematic mapping study of mapping modelling languages

Popovic, Marija, Cizmic, Amila January 2022 (has links)
Context - Various research teams, as well as individual researchers, have investigated mapping modelling languages. However, systematic studies that provide a structured overview of research on this topic have not been conducted, which leaves a notable gap in the understanding of mapping modelling languages. Conducting such studies could lead to a better understanding of the characteristics of mapping modelling languages, which would be of great importance for the future development of this research area. Objective - The aim of the study is to assess the state of knowledge about mapping modelling languages and assist stakeholders in making informed decisions. This is carried out by identifying existing mapping modelling languages and their characteristics. Another objective of this thesis is to identify potential mapping modelling languages that can support the generation of a blended modelling environment. Method - To achieve this goal, we conducted a systematic mapping study of mapping modelling languages. Our search yielded 2913 potentially relevant studies; after the selection process, the final set comprised 29 primary papers. The relevant information was extracted according to the categories of a well-defined classification framework. Results - The analysis of the extracted data showed the following main findings: (i) most of the primary studies focused on providing solution proposals, (ii) the largest number of publications was in 2010, (iii) most papers mention mapping modelling languages that allow the definition of unidirectional mapping rules, (iv) the most common cardinality was 1:N, (v) graphical syntax has been proposed in many primary studies, (vi) most studies suggest mapping modelling languages that can be used to define relationships between RDF models, (vii) documentation and implementation were available for only a very small number of mapping modelling languages. 
Conclusion - These results can help the research community identify research gaps in mapping modelling languages as well as possible directions for future research.
59

Digital Libraries with Superimposed Information: Supporting Scholarly Tasks that Involve Fine Grain Information

Murthy, Uma 02 May 2011 (has links)
Many scholarly tasks involve working with contextualized fine-grain information, such as a music professor creating a multimedia lecture on a musical style while bringing together several snippets of compositions of that style. We refer to such contextualized parts of a larger unit of information (or whole documents) as subdocuments. Current approaches to working with subdocuments involve a mix of paper-based and digital techniques. With the increase in the volume and heterogeneity of information sources, the management, organization, access, retrieval, and reuse of subdocuments become challenging, leading to inefficient and ineffective task execution. A digital library (DL) facilitates management, access, retrieval, and use of collections of data and metadata through services. However, most DLs do not provide infrastructure or services to support working with subdocuments. Superimposed information (SI) refers to new information that is created to reference subdocuments in existing information resources. We combine this idea of SI with traditional DL services to define and develop a DL with SI (an SI-DL). Our research questions are centered on one main question: how can we extend the notion of a DL to include SI, in order to support scholarly tasks that involve working with subdocuments? We pursued this question from a theoretical as well as a practical/user perspective. From a theoretical perspective, we developed a formal metamodel that precisely defines the components of an SI-DL, building upon related work in DLs, SI, annotations, and hypertext. From the practical/user perspective, we developed prototype superimposed applications and conducted user studies to explore the use of SI in scholarly tasks. We developed SuperIDR, a prototype SI-DL, which enables users to mark up subimages, annotate them, and retrieve information in multiple ways, including browsing, and text- and content-based image retrieval. 
We explored the use of subimages and evaluated the use of SuperIDR in fish species identification, a scholarly task that involves working with subimages. Findings from the user studies and other work in our research led to theory- and experiment-based enhancements that can guide the design of digital libraries with superimposed information. / Ph. D.
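The central notion of superimposed information, new information that references a subdocument rather than a whole document, can be sketched with a toy model. The classes, the base-document store, and the character-offset addressing are invented for illustration and are not SuperIDR's actual model (which also covers subimages and content-based retrieval).

```python
# A mark addresses a subdocument (here, a character span in a base
# document); annotations attach to the mark, not to the whole document.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mark:
    doc_id: str
    start: int      # character offsets into the base document
    end: int

@dataclass
class Annotation:
    mark: Mark
    note: str

BASE = {"score42": "Allegro con brio. The opening motif spans four notes."}

def excerpt(mark):
    """Resolve a mark back to the subdocument it superimposes on."""
    return BASE[mark.doc_id][mark.start:mark.end]

m = Mark("score42", 18, 52)
a = Annotation(m, "Characteristic of the style")
```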
60

Metamodeling Driven IP Reuse for System-on-chip Integration and Microprocessor Design

Mathaikutty, Deepak Abraham 02 December 2007 (has links)
This dissertation addresses two important problems in reusing intellectual properties (IPs) in the form of reusable design or verification components. The first problem is associated with fast and effective integration of reusable design components into a System-on-chip (SoC), so faster design turn-around time can be achieved, leading to faster time-to-market. The second problem has the same goal of a faster product design cycle, but emphasizes verification model reuse rather than design component reuse. It specifically addresses reuse of verification IPs to enable a "write once, use many times" verification strategy. This dissertation is accordingly divided into part I and part II, which describe the two problems and our solutions to them. These two related but distinctive problems faced by system design companies have been tackled through an approach which had heretofore been used only in the software engineering domain. This approach is called metamodeling, which allows creating customized meta-languages to describe the syntax and semantics of a modeling domain. It provides a way to create, transform, and analyze domain-specific languages, which are themselves described by metamodels; the transformation and processing of models in such languages are also described by metamodels. This makes machine-based interpretation of, and translation from, these models an easier and more formal task. In part I, we consider the problem of rapid system-level model integration of existing reusable components such that (i) the required architecture of the SoC can be expressed formally, (ii) components can be selected automatically from an IP library to match the needs of the system being integrated, (iii) integrability of the components is provable, or checkable automatically, and (iv) structural and behavioral type systems for each component can be utilized through inference and matching techniques to ensure their compatibility. 
Our solutions include a component composition language, algorithms for component selection, type matching and inference algorithms, temporal-property-based behavioral typing, and finally a software system on top of an existing metamodeling environment. In part II, we use the same metamodeling environment to create a framework for modeling generative verification IPs. Our main contributions relate to Intel's microprocessor verification environment, and our solution spans multiple abstraction levels (system, architectural, and microarchitectural). We provide a unified language that can be used to model verification IPs at all abstraction levels, and verification collateral such as testbenches, simulators, and coverage monitors can be generated from these models, thereby enhancing reuse in verification. / Ph. D.
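The flavor of structural type matching during component selection (points ii and iv above) can be shown with a deliberately small sketch. The port model, the subtype chain, and the component library are invented; the dissertation's component composition language and inference algorithms are far richer.

```python
# Pick components from an IP library whose ports structurally satisfy
# the required ports, allowing subtype widening on data types.

def compatible(required, provided):
    """A provided port matches if names agree and the provided data
    type is the required type or a declared subtype of it."""
    subtypes = {"int8": "int16", "int16": "int32"}  # toy subtype chain
    def is_subtype(t, of):
        while t is not None:
            if t == of:
                return True
            t = subtypes.get(t)
        return False
    return (required["name"] == provided["name"]
            and is_subtype(provided["type"], required["type"]))

def select(library, required_ports):
    """Return names of components whose ports cover every required port."""
    chosen = []
    for comp in library:
        if all(any(compatible(r, p) for p in comp["ports"])
               for r in required_ports):
            chosen.append(comp["name"])
    return chosen

library = [
    {"name": "uart_v1", "ports": [{"name": "data_in", "type": "int8"}]},
    {"name": "uart_v2", "ports": [{"name": "data_in", "type": "int16"},
                                  {"name": "irq", "type": "int8"}]},
]
need = [{"name": "data_in", "type": "int32"}]
matches = select(library, need)   # both match via subtype widening
```

Behavioral typing, as described in the dissertation, would additionally check temporal properties of the components, not just structural port compatibility.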
