61

ASAP : approche orientée services pour un support agile et flexible des processus de conception de produit dans les systèmes PLM / Deployment of business functions in PLM solutions

Hachani, Safa 16 April 2013 (has links)
La dynamique de l’offre et de la demande des produits manufacturiers ainsi que le raccourcissement de leurs cycles de vie obligent les entreprises industrielles à se doter de processus de développement produit dynamiques et agiles. Nos travaux se positionnent sur le support informatisé de ces processus de développement, qui sont actuellement gérés par les systèmes PLM. L’objectif d’un support informatisé est d’accélérer le processus en automatisant la notification et la diffusion des informations. Il permet également de garder trace des opérations et décisions effectuées et d’accroître la standardisation des processus. Face à la rigidité des solutions actuellement proposées pour gérer les processus vis-à-vis des modifications survenant dans le processus, notre objectif est de proposer une approche permettant de modifier un processus en cours d’exécution sans devoir le redéfinir et le relancer dans son ensemble. Pour y parvenir, nous avons proposé une approche qui décline une orientation services inspirée des architectures orientées services (SOA). Ces architectures permettent de définir des applications modulaires, en utilisant des services faiblement couplés. Notre objectif est de décliner une telle architecture, exploitée essentiellement pour les systèmes logiciels et le Web, au niveau métier de l’entreprise afin de modéliser et d’exécuter de manière flexible des processus de conception de produits par composition de services réutilisables. Nous proposons une démarche d’identification des services du domaine métier des processus de conception produit et du domaine fonctionnel du PLM. Ces services sont organisés dans deux catalogues de services métiers et fonctionnels. Notre approche s’inscrit dans le cadre de l’Ingénierie Dirigée par les Modèles (IDM), avec une architecture de référence à trois niveaux et des mécanismes d’alignement entre les niveaux métier, fonctionnel et logiciel. Ces mécanismes d’alignement permettent d’intégrer l’évolution et d’automatiser le déploiement d’un processus de conception du niveau métier aux niveaux fonctionnel et logiciel. / To cope with market dynamics and a shortened time to market, industrial companies need to implement effective management of their design processes (DPs) and product information. Unfortunately, Product Lifecycle Management (PLM) systems, which are dedicated to supporting design activities, are not as efficient as might be expected. Indeed, DPs are changing, emergent and non-deterministic, due to the business environment in which they are carried out. The aim of this work is to propose an alternative approach to flexible process support within PLM systems that facilitates coupling with the reality of this environment. The purpose of a support system is to accelerate the process by automating the notification and dispatching of information and activities between actors. It also makes it possible to keep track of the transactions and decisions made and to increase process standardization. Our goal is to propose a solution that allows a process to be changed at run-time without having to redefine and restart the whole set of process activities. To achieve this, we propose an approach based on service-oriented architectures (SOA). These architectures allow modular applications to be defined using loosely coupled services. They are mainly exploited for software systems and Web development. Our goal is to transpose such architectures to the business level of a company in order to deploy flexible DPs based on service reuse and composition. We propose an approach for identifying business-level services (product design services) and functional PLM services. These services are organized in two catalogs of business and functional services. Our approach follows a Model-Driven Engineering approach with three levels and alignment mechanisms between the business, functional and technical levels. These alignment mechanisms allow change to be integrated and design process deployment to be automated.
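The run-time flexibility claimed in this abstract can be pictured with a small sketch: a design process is held as a composition of loosely coupled service calls, and the not-yet-executed part of the composition can be rearranged while the process runs, without restarting the completed steps. This is only an illustration of the idea under assumed, hypothetical service names (check_geometry, update_bom, notify_team); it is not the ASAP implementation.

```python
# Minimal sketch (not the ASAP implementation): a design process as a
# composition of loosely coupled services that can be edited at run-time.
from typing import Callable, List

class FlexibleProcess:
    def __init__(self, steps: List[Callable[[dict], None]]):
        self.steps = list(steps)   # ordered service invocations
        self.cursor = 0            # index of the next step to execute

    def run_next(self, context: dict) -> bool:
        """Execute the next service; return False when the process is done."""
        if self.cursor >= len(self.steps):
            return False
        self.steps[self.cursor](context)
        self.cursor += 1
        return True

    def insert_after_current(self, service: Callable[[dict], None]) -> None:
        """Modify the running process: add a step without restarting it."""
        self.steps.insert(self.cursor, service)

# Hypothetical business services reused by composition.
def check_geometry(ctx): ctx["geometry_ok"] = True
def update_bom(ctx):     ctx["bom_version"] = ctx.get("bom_version", 0) + 1
def notify_team(ctx):    ctx.setdefault("notified", []).append("design-team")

process = FlexibleProcess([check_geometry, update_bom])
context = {}
process.run_next(context)                    # check_geometry has run
process.insert_after_current(notify_team)    # change the process at run-time
while process.run_next(context):
    pass
print(context)
```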
62

Monitoramento da morfologia costeira em setores da bacia potiguar sob influência da indústria petrolífera utilizando geodésia de alta precisão e laser escâner terrestre

Santos, André Luis Silva dos 04 April 2014 (has links)
The objective of this doctoral thesis was to monitor, on a quarterly scale, the coastal morphology of sections of the northeastern coast of Rio Grande do Norte State, Brazil, an area of the Potiguar Basin influenced by oil industry activities. The studied sections comprise coastal areas with intense sedimentary erosion and high environmental sensitivity to oil spills. To achieve the general objective of this study, the work was organized in four steps. The first refers to the evaluation of the geomorphological data acquisition methodologies used for Digital Elevation Models (DEMs) of sandy beaches, based on data obtained at Soledade beach, located on the northeastern coast of Rio Grande do Norte. The second step centered on expanding the reference geodetic infrastructure needed for the geodetic surveys of the study area, by installing a station on the Corta Cachorro barrier island and by conducting monitoring geodetic surveys to understand the beach system through multitemporal analysis of the coastline (CL) and of DEMs. The third step applied the methodology developed by Santos and Amaro (2011) and Santos et al. (2012) for the surveying, processing, representation, integration and analysis of coastlines of sandy coasts obtained through geodetic positioning techniques, analysis of morphological changes and sediment transport. The fourth step represents an innovation in coastal surveying: the use of Terrestrial Laser Scanning (TLS), based on Light Detection and Ranging (LiDAR), to evaluate a highly eroded section of Soledade beach where oil industry structures are located, through high-precision, high-accuracy DEMs of the changing coastal morphology. The analysis of the results of this integrated study of the spatial and temporal interrelations of the intense coastal processes acting over cycles of beach construction and destruction made it possible to identify the causes and consequences of the intense coastal erosion in exposed beach sections and barrier islands. / O objetivo da Tese de Doutorado foi o monitoramento da morfologia costeira em escala trimestral de trechos do Litoral Setentrional do Estado do Rio Grande do Norte, área da Bacia Potiguar sob a influência das atividades petrolíferas. Trata-se de setores costeiros marcados por intensa erosão sedimentar e de alta sensibilidade ambiental ao derramamento de óleo. Para atingir o objetivo geral deste estudo, o trabalho foi sistematizado em quatro etapas. A primeira etapa apresenta a avaliação das metodologias de aquisição de dados geomorfológicos utilizadas na modelagem digital de elevação de praias arenosas a partir de dados obtidos na praia de Soledade, localizada no Litoral Setentrional do Rio Grande do Norte. A segunda etapa foi a ampliação da infraestrutura geodésica de referência para a realização dos levantamentos geodésicos da área de estudo, através da implantação de uma estação na ilha barreira de Corta Cachorro e de levantamentos geodésicos de monitoramento para o entendimento do sistema praial com uso de análises multitemporais de LC e MDE.
A terceira etapa consistiu na utilização da metodologia geodésica para o levantamento, processamento, representação, integração e análises de Linhas de Costa (LC) de litorais arenosos obtidos por técnicas geodésicas de posicionamento, análise das alterações morfológicas e transporte de sedimentos. A quarta etapa foi definida pela inovação de levantamentos em ambientes costeiros com a utilização do Laser Escâner Terrestre (LiDAR) para avaliação de um trecho submetido a intensa erosão na praia de Soledade, onde estão instaladas infraestruturas da indústria petrolífera, por meio de MDE de alta precisão e acurácia no modelamento das modificações na morfologia costeira. As análises dos resultados do estudo integrado das interrelações espaciais e temporais dos intensos processos costeiros atuantes na área ao longo de ciclos de construção e destruição das praias permitiram identificar as causas e consequências da intensa erosão costeira em setores de praias expostas e ilhas barreiras.
63

Variações na extensão da cobertura de gelo do Nevado Cololo, Bolívia

Oliveira, Ana Maria Sanches Dorneles Ferreira de January 2013 (has links)
Este estudo apresenta padrões de flutuações das geleiras do Nevado Cololo, Bolívia, no período 1975–2011, determinados a partir de dados orbitais, cartográficos e climáticos. As massas de gelo do Nevado Cololo são representativas das geleiras tropicais andinas que estão sujeitas à alternância entre condições atmosféricas úmidas (novembro-abril) e secas (maio-outubro) (outer tropics). Essa sazonalidade é determinada pela oscilação latitudinal da Zona de Convergência Intertropical (ZCIT) e perturbada pelos eventos não sazonais do fenômeno ENOS. A fase positiva, o El Niño, contribui negativamente para o balanço de massa dessas geleiras e foi frequente no intervalo investigado. Esse trabalho usou imagens TM/Landsat-5 para determinar a cobertura de gelo em 1989, 1997, 2008 e 2011. Aplicando o Normalized Difference Snow Index (NDSI), que utiliza as características espectrais opostas das massas de gelo no visível e no infravermelho próximo, este trabalho delimitou as geleiras do Nevado Cololo. Utilizando as informações de carta topográfica foi obtido um Modelo Digital de Elevação (MDE), elaborado pela interpolação de pontos de elevação usando o método geoestatístico krigagem ordinária. As informações obtidas do sensoriamento remoto e da cartografia foram incorporadas a um Sistema de Informação Geográfica (SIG) para se obter parâmetros das geleiras. A análise das séries temporais de precipitação e temperatura usou dados do Global Precipitation Climatology Centre (GPCC)/NOAA, do Climate Research Unit Time Series (CRUTS)/University of East Anglia e de duas estações meteorológicas. Os dados climáticos não apresentam tendências estatisticamente significativas, mas há uma fraca redução da precipitação durante os meses de novembro, dezembro e abril, condições essas que podem indicar menor nebulosidade durante o verão. Em 2011 só restavam 48 das 122 geleiras identificadas em 1975. Geleiras pequenas (< 0,1 km²) com cotas máximas baixas foram as mais afetadas e atualmente não existem geleiras abaixo de 4.626 m a.n.m. A cobertura de gelo era de 24,77 ±0,00032 km² em 2011, 42,02% menor do que em 1975. A perda superficial ocorreu em todas as vertentes, independente de orientação, mas as geleiras voltadas a leste foram mais afetadas. Mesmo a maior geleira do Nevado Cololo, face SW, perdeu 21,6% de sua área total e sua frente retraiu cerca de 1 km durante o intervalo de 36 anos. Proporcionalmente, houve o aumento do número de geleiras cuja declividade média está entre 30° e 40°. A redução da espessura do gelo é atestada pela fragmentação de geleiras e afloramentos do embasamento em suas partes internas. A perda de massa dessas geleiras estudadas foi provavelmente causada pela intensificação dos processos de ablação. / This study presents fluctuation patterns for the Nevado Cololo glaciers, Bolivia, in the period 1975–2011, as determined from orbital, cartographic and climatic data. The Nevado Cololo ice masses are representative of Andean glaciers of the outer tropics, subjected to alternating humid (November to April) and dry (May to October) atmospheric conditions. This seasonality is determined by the latitudinal oscillation of the Intertropical Convergence Zone (ITCZ) and disturbed by the non-seasonal ENSO phenomenon. Its positive phase, El Niño, contributes negatively to the mass balance of these glaciers and was frequent during the investigated period. This work used TM/Landsat-5 imagery to determine the ice cover in 1989, 1997, 2008 and 2011.
Applying the Normalized Difference Snow Index (NDSI), which exploits the opposite spectral characteristics of ice masses in the visible and near-infrared regions, this work delimited the Nevado Cololo glaciers. Based on information from a topographic chart, a Digital Elevation Model (DEM) was obtained by interpolating elevation points with the ordinary kriging geostatistical method. Information derived from remote sensing and cartographic sources was incorporated into a Geographic Information System (GIS) to obtain glacier parameters. The analyses of the precipitation and temperature time series used data from the Global Precipitation Climatology Centre (GPCC)/NOAA, the Climate Research Unit Time Series (CRUTS)/University of East Anglia and two meteorological stations. The climatic data show no statistically significant trend, but there was a weak reduction of precipitation during November, December and April, a condition that may indicate lower cloudiness during the summer. By 2011, only 48 of the 122 glaciers identified in 1975 remained. Small glaciers (< 0.1 km²) with low maximum elevations were the most affected, and currently there are no glaciers below 4,626 m asl. The ice cover was 24.77 km² in 2011, 42.02% less than in 1975. Surface loss occurred on all slopes, regardless of orientation, but east-facing glaciers were the most affected. Even the largest glacier of Nevado Cololo, on the SW face, lost 21.6% of its total area, and its front retreated about 1 km during the 36-year period. Proportionally, there was an increase in the number of glaciers whose average slope is between 30° and 40°. The reduction in ice thickness is attested by glacier break-up and by bedrock outcrops in their inner parts. The mass loss of these glaciers was probably caused by the intensification of ablation processes.
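As an aside, the NDSI delimitation step described above can be reproduced with a few lines of array arithmetic. The sketch below assumes two already-calibrated Landsat 5 TM reflectance bands loaded as NumPy arrays (band 2, green, and band 5, shortwave infrared, which are the bands conventionally used for NDSI) and a commonly used snow/ice threshold of 0.4; the actual band choice, calibration and threshold would have to match the thesis' processing chain.

```python
# Minimal NDSI sketch (illustrative only): delimit ice cover from two
# reflectance bands of a Landsat 5 TM scene loaded as NumPy arrays.
import numpy as np

def ndsi(green: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """NDSI = (green - swir) / (green + swir), with division by zero guarded."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    denom = green + swir
    out = np.zeros_like(denom)
    np.divide(green - swir, denom, out=out, where=denom != 0)
    return out

def ice_mask(green: np.ndarray, swir: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Boolean mask of pixels classified as snow/ice (assumed threshold)."""
    return ndsi(green, swir) > threshold

def ice_area_km2(mask: np.ndarray, pixel_size_m: float = 30.0) -> float:
    """Ice-covered area in km², given the sensor's pixel size (30 m for TM)."""
    return mask.sum() * (pixel_size_m ** 2) / 1e6

# Usage with random stand-in data (real inputs would come from the scene).
rng = np.random.default_rng(0)
green_band = rng.uniform(0.0, 0.8, size=(500, 500))
swir_band = rng.uniform(0.0, 0.4, size=(500, 500))
print(ice_area_km2(ice_mask(green_band, swir_band)))
```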
64

Geração automática de código VHDL a partir de modelos UML para sistemas embarcados de tempo-real / Automatic VHDL code generation from UML models for real-time embedded systems

Moreira, Tomás Garcia January 2012 (has links)
A crescente demanda da indústria exige a produção de dispositivos embarcados em menos tempo e com mais funcionalidades diferentes. Isso implica diretamente no processo de desenvolvimento destes produtos, requerendo novas técnicas para absorver a complexidade crescente dos projetos e para acelerar suas etapas de desenvolvimento. A linguagem UML vem sendo utilizada para absorver a complexidade do projeto de sistemas embarcados através de sua representação gráfica, que torna o processo mais simples e intuitivo. Para acelerar o desenvolvimento, surgiram processos que permitem, diretamente a partir de modelos UML, a geração de código para linguagens de descrição de software embarcado (C, C++, Java) e para linguagens tradicionais de descrição de hardware (VHDL, Verilog). Diversos trabalhos e ferramentas comerciais foram desenvolvidos para automatizar o processo de geração de código convencional a partir de modelos UML (software). No entanto, pela complexidade da transformação, existem apenas poucos trabalhos e nenhuma ferramenta comercial direcionados à geração de HDL a partir de UML, tornando este processo ainda pouco difundido. Nossa proposta é focada na geração de descrições de hardware na linguagem VHDL a partir de modelos UML de sistemas tempo-real embarcados (STRE), surgindo como alternativa ao processo de desenvolvimento de hardware. Apresenta uma metodologia completa para geração automática de código VHDL, permitindo que o comportamento descrito para o sistema modelado seja testado e validado antes de ser desenvolvido, acelerando o processo de produção de hardware e diminuindo as chances de erros de projeto. É proposto como um processo de engenharia dirigido por modelos (MDE) que cobre desde as fases de análise de requisitos e modelagem UML até a geração de código fonte na linguagem VHDL, onde o foco é gerar, na forma de descrições de hardware, todas aquelas funções lógicas de um sistema embarcado que normalmente são desenvolvidas em software. Para atingir este objetivo, foi desenvolvido neste trabalho um conjunto de regras de mapeamento que estende a funcionalidade da ferramenta GenERTiCA, utilizada como suporte ao processo. Adicionalmente, foram pesquisados e desenvolvidos conceitos que serviram como base para o desenvolvimento das regras utilizadas pela ferramenta de suporte para guiar o processo de mapeamento entre as linguagens. Os conceitos e as regras propostas foram validados por meio de um estudo de caso, cujos resultados estão demonstrados nesta dissertação. / The growing market demand requires the production of embedded devices in less time and with more distinct features. This directly impacts the development process of these products, requiring new techniques to absorb the growing complexity of the designs and to accelerate their development stages. UML has been used to handle the complexity of embedded systems design through its graphical representation, which makes the process simpler and more intuitive. To speed up the development cycle, processes have emerged that allow generating code directly from UML models into embedded software languages (C, C++, Java) and into traditional hardware description languages (VHDL, Verilog). Several research works and commercial tools have been developed to automate code generation from UML models to conventional (software) languages. However, due to the complexity of the transformation, there are only a few studies and no commercial tool addressing HDL generation from UML models, so this process remains little known.
Our proposal focuses on generating hardware descriptions as VHDL code from UML models of real-time embedded systems (RTES), emerging as an alternative hardware development flow. It presents a complete methodology for automatic VHDL code generation, allowing the behavior described for the modeled system to be tested and validated before being implemented, accelerating hardware production and decreasing the chances of design errors. It is proposed as a model-driven engineering (MDE) process that covers the phases of requirements analysis, UML modeling, model transformations, and source code generation in the VHDL language, where the focus is to generate, as hardware descriptions, all the logic functions of an embedded system that are usually developed as software. To achieve this goal, this work developed a set of mapping rules that extends the functionality of the GenERTiCA tool, which supports the process. Additionally, concepts were researched and developed that form the basis of the rules used by the supporting tool to guide the mapping between the languages. The concepts and proposed rules have been validated through a case study, whose results are shown in this dissertation.
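To make the idea of a UML-to-VHDL mapping rule concrete, the sketch below turns a very small, hand-written description of a UML class (name plus ports with directions and types) into a VHDL entity skeleton using a text template. This is a toy illustration in Python, not the rule format actually used by GenERTiCA, and the element names (uml_class, ports, speed_controller) are invented for the example.

```python
# Toy sketch of a template-based mapping rule: a tiny UML class description
# (invented structure) is mapped to a VHDL entity declaration skeleton.
from textwrap import indent

def uml_class_to_vhdl_entity(uml_class: dict) -> str:
    """Map {'name': ..., 'ports': [(name, 'in'|'out', vhdl_type), ...]} to VHDL text."""
    port_lines = ";\n".join(
        f"{name} : {direction} {vhdl_type}"
        for name, direction, vhdl_type in uml_class["ports"]
    )
    return (
        "library ieee;\n"
        "use ieee.std_logic_1164.all;\n\n"
        f"entity {uml_class['name']} is\n"
        "  port (\n"
        f"{indent(port_lines, '    ')}\n"
        "  );\n"
        f"end entity {uml_class['name']};\n"
    )

# Hypothetical model element, e.g. extracted from a UML class diagram.
speed_controller = {
    "name": "speed_controller",
    "ports": [
        ("clk", "in", "std_logic"),
        ("reset", "in", "std_logic"),
        ("speed_setpoint", "in", "std_logic_vector(7 downto 0)"),
        ("pwm_out", "out", "std_logic"),
    ],
}
print(uml_class_to_vhdl_entity(speed_controller))
```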
66

Efficient persistence, query, and transformation of large models / Persistance, requêtage, et transformation efficaces de grands modèles

Daniel, Gwendal 14 November 2017 (has links)
L’Ingénierie Dirigée par les Modèles (IDM) est une méthode de développement logiciel ayant pour but d’améliorer la productivité et la qualité logicielle en utilisant les modèles comme artefacts de premier plan durant le processus de développement. Dans cette approche, les modèles sont typiquement utilisés pour représenter des vues abstraites d’un système, manipuler des données, valider des propriétés, et sont finalement transformés en ressources applicatives (code, documentation, tests, etc.). Bien que les techniques d’IDM aient montré des résultats positifs lors de leur intégration dans des processus industriels, les études montrent que la mise à l’échelle des solutions existantes est un des freins majeurs à l’adoption de l’IDM dans l’industrie. Ces problématiques sont particulièrement importantes dans le cadre d’approches génératives, qui nécessitent des techniques efficaces de stockage, requêtage et transformation de grands modèles typiquement construits dans un contexte mono-utilisateur. Plusieurs solutions de persistance, requêtage et transformation basées sur des bases de données relationnelles ou NoSQL ont été proposées pour améliorer le passage à l’échelle, mais ces dernières sont souvent basées sur une seule sérialisation modèle/base de données, adaptée à une activité de modélisation particulière mais peu efficace pour d’autres cas d’utilisation. Par exemple, une sérialisation en graphe est optimisée pour calculer des chemins de navigation complexes, mais n’est pas adaptée pour accéder à des valeurs atomiques de manière répétée. De plus, les frameworks de modélisation existants ont été initialement développés pour gérer des activités simples, et leurs APIs n’ont pas évolué pour gérer les modèles de grande taille, limitant les performances des outils actuels. Dans cette thèse, nous présentons une nouvelle infrastructure de modélisation ayant pour but de résoudre les problèmes de passage à l’échelle en proposant (i) un framework de persistance permettant de choisir la représentation bas niveau la plus adaptée à un cas d’utilisation, (ii) une solution de requêtage efficace qui délègue les navigations complexes à la base de données stockant le modèle, bénéficiant de ses optimisations bas niveau et améliorant significativement les performances en termes de temps d’exécution et de consommation mémoire, et (iii) une approche de transformation de modèles qui calcule directement les transformations au niveau de la base de données. Nos solutions sont construites en utilisant des standards OMG tels que UML et OCL, et sont intégrées dans les solutions de modélisation majeures telles que ATL ou EMF. / The Model-Driven Engineering (MDE) paradigm is a software development method that aims to improve productivity and software quality by using models as primary artifacts in all aspects of software engineering processes. In this approach, models are typically used to represent abstract views of a system, manipulate data, validate properties, and are finally transformed into application artifacts (code, documentation, tests, etc.). Among other MDE-based approaches, automatic model generation processes such as Model Driven Reverse Engineering are a family of approaches that rely on existing modeling techniques and languages to automatically create and validate models representing existing artifacts. Model extraction tasks are typically performed by a modeler, and produce a set of views that ease the understanding of the system under study.
While MDE techniques have shown positive results when integrated into industrial processes, existing studies also report that the scalability of current solutions is one of the key issues preventing a wider adoption of MDE techniques in industry. This is particularly true in the context of generative approaches, which require efficient techniques to store, query, and transform very large models typically built in a single-user context. Several persistence, query, and transformation solutions based on relational and NoSQL databases have been proposed to achieve scalability, but they often rely on a single model-to-database mapping, which suits a specific modeling activity but may not be optimized for other use cases. For example, a graph-based representation is optimized to compute complex navigation paths, but may not be the best solution for repeated atomic accesses. In addition, low-level modeling frameworks were originally developed to handle simple modeling activities (such as manual model edition), and their APIs have not evolved to handle large models, limiting the benefits of advanced storage mechanisms. In this thesis we present a novel modeling infrastructure that aims to tackle scalability issues by providing (i) a new persistence framework that allows choosing the appropriate model-to-database mapping according to a given modeling scenario, (ii) an efficient query approach that delegates complex computation to the underlying database, benefiting from its native optimizations and drastically reducing memory consumption and execution time, and (iii) a model transformation solution that computes transformations directly in the database. Our solutions are built on top of OMG standards such as UML and OCL, and are integrated with de facto standard modeling solutions such as EMF and ATL.
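The first contribution described above, choosing a model-to-database mapping per modeling scenario, can be sketched as a small backend abstraction: the model-access API stays the same while the underlying storage strategy changes. The sketch below is a self-contained, in-memory illustration with invented class names (ModelBackend, GraphLikeBackend, KeyValueBackend); it is not the persistence framework developed in the thesis.

```python
# Illustrative sketch (invented API): one model-access interface, two
# interchangeable storage strategies chosen per modeling scenario.
from abc import ABC, abstractmethod
from collections import defaultdict

class ModelBackend(ABC):
    @abstractmethod
    def set_attribute(self, element: str, name: str, value): ...
    @abstractmethod
    def get_attribute(self, element: str, name: str): ...
    @abstractmethod
    def add_reference(self, source: str, name: str, target: str): ...
    @abstractmethod
    def navigate(self, element: str, name: str) -> list: ...

class GraphLikeBackend(ModelBackend):
    """Adjacency-based storage: suited to long navigation paths."""
    def __init__(self):
        self.attrs = defaultdict(dict)
        self.edges = defaultdict(lambda: defaultdict(list))
    def set_attribute(self, element, name, value): self.attrs[element][name] = value
    def get_attribute(self, element, name): return self.attrs[element].get(name)
    def add_reference(self, source, name, target): self.edges[source][name].append(target)
    def navigate(self, element, name): return list(self.edges[element][name])

class KeyValueBackend(ModelBackend):
    """Flat key/value storage: suited to repeated atomic accesses."""
    def __init__(self):
        self.store = {}
    def set_attribute(self, element, name, value): self.store[(element, "a", name)] = value
    def get_attribute(self, element, name): return self.store.get((element, "a", name))
    def add_reference(self, source, name, target):
        self.store.setdefault((source, "r", name), []).append(target)
    def navigate(self, element, name): return list(self.store.get((element, "r", name), []))

def build_sample_model(backend: ModelBackend) -> ModelBackend:
    backend.set_attribute("pkg1", "name", "core")
    backend.add_reference("pkg1", "classes", "cls1")
    backend.set_attribute("cls1", "name", "Invoice")
    return backend

for chosen in (GraphLikeBackend(), KeyValueBackend()):
    model = build_sample_model(chosen)   # same client code, different mapping
    print([model.get_attribute(c, "name") for c in model.navigate("pkg1", "classes")])
```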
67

Berechnung und Anwendung von Modelldifferenzen im Geschäftsprozessmanagement

Hillner, Stanley 12 February 2018 (has links)
Software development has become steadily more efficient since the early days of computer science. Today, for example, new software is developed almost fully automatically. Alongside the strongly promoted reuse of system components and other more or less widespread methods intended to improve productivity or software quality, model-driven software development is a very efficient and widely used concept for developing high-quality software systems. In model-driven software development, the models of the portions of reality to be represented play a more important role than in classical software development. Here, the models that have been created are used to generate code, documentation or other artifacts by means of transformations. In this way, for example, models in other modeling languages can also be generated from the existing models.
68

Optimisation multi-objectifs d'architectures par composition de transformation de modèles / Multiple-objectives architecture optimization by composition of model transformations

Rahmoun, Smail 07 February 2017 (has links)
Nous proposons dans cette thèse une nouvelle approche pour l’exploration d’espaces de conception. Plus précisément, nous utilisons la composition de transformations de modèles pour automatiser la production d’alternatives architecturales, et les algorithmes génétiques pour explorer et identifier des alternatives architecturales quasi optimales. Les transformations de modèles sont des solutions réutilisables et peuvent être intégrées dans des algorithmes génétiques et ainsi être combinées avec des opérateurs génétiques tels que la mutation et le croisement. Grâce à cela, nous pouvons utiliser (ou réutiliser) différentes transformations de modèles implémentant différents patrons de conception sans pour autant modifier l’environnement d’optimisation. En plus de cela, les transformations de modèles peuvent être validées (par rapport aux contraintes structurelles) en amont, et ainsi rejeter avant l’exploration les transformations générant des alternatives architecturales incorrectes. Enfin, les transformations de modèles peuvent être chaînées entre elles afin de faciliter leur maintenance et leur réutilisabilité, et ainsi concevoir des modèles plus détaillés et plus complexes se rapprochant des systèmes industriels. À noter que l’exploration de chaînes de transformations de modèles a été intégrée dans l’environnement d’optimisation. / In this thesis, we propose a new exploration approach to tackle design space exploration problems involving multiple conflicting non-functional properties. More precisely, we propose the use of model transformation compositions to automate the production of architectural alternatives, and multiple-objective evolutionary algorithms to identify near-optimal architectural alternatives. Model transformation alternatives are mapped into evolutionary algorithms and combined with genetic operators such as mutation and crossover. Taking advantage of this contribution, we can (re)use different model transformations, and thus solve different multiple-objective optimization problems. In addition, model transformations can be chained together in order to ease their maintainability and reusability, and thus conceive more detailed and complex systems.
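The mapping of transformation alternatives onto genetic operators can be illustrated with a compact sketch: each genome is a sequence of transformation identifiers, crossover and mutation recombine those sequences, and candidates are ranked by Pareto dominance over two conflicting objectives. Everything here (the transformation names, the two cost functions, the selection scheme) is invented for illustration and does not reproduce the thesis' actual encoding or objectives.

```python
# Toy multi-objective GA sketch: genomes are sequences of model-transformation
# identifiers; mutation/crossover act on the sequence; selection keeps the
# Pareto front of two invented, conflicting objectives.
import random

TRANSFORMATIONS = ["replicate_task", "add_watchdog", "merge_components", "split_pipeline"]

def evaluate(genome):
    # Invented objectives: replication reduces latency but adds resource cost.
    latency = 10.0 - 1.2 * genome.count("replicate_task") - 0.8 * genome.count("split_pipeline")
    cost = 1.0 * genome.count("replicate_task") + 0.6 * genome.count("add_watchdog")
    return latency, cost

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crossover(p1, p2):
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(genome, rate=0.2):
    return [random.choice(TRANSFORMATIONS) if random.random() < rate else g for g in genome]

def pareto_front(population):
    scored = [(g, evaluate(g)) for g in population]
    return [g for g, s in scored if not any(dominates(t, s) for _, t in scored)]

random.seed(1)
population = [[random.choice(TRANSFORMATIONS) for _ in range(6)] for _ in range(30)]
for _ in range(40):                                   # generations
    parents = pareto_front(population) or population
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population))]
    population = (parents + children)[:30]            # elitist survival of the front
print([(g, evaluate(g)) for g in pareto_front(population)][:3])
```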
69

Nueva metodología para la obtención de distancias de visibilidad disponibles en carreteras existentes basada en datos LiDAR terrestre

Campoy Ungria, Jose Manuel 21 December 2015 (has links)
[EN] The existence of visibility appropriate to the actual operating conditions is a sine qua non for achieving a safe geometric design. The sight distances required for driving tasks, such as decision-making, stopping, overtaking or crossing, represent an essential parameter in the geometric design of new roads, and they play a key role in all international design guidelines. Nevertheless, once the road has been built and is in operation, many other surrounding circumstances determine the sight distance actually available over time. Moreover, since geometric design guidelines define available-visibility measurements with the observer and the obstacle located on the roadway, systematic and periodic measurements prove difficult and tedious as well as risky and traffic-disruptive. In engineering practice, it is common to use digital elevation models and specific geometric design programs to establish the visibility conditions on roads; however, the development of new remote sensing technologies expands the possibilities for a better estimate of the visibility actually available. LiDAR technology has been enjoying a boost internationally in recent years. It is an important source of information consisting of millions of georeferenced points belonging to all kinds of objects, which represent not only the geometry of the road itself but also its immediate surroundings. It is precisely this ability to include all sorts of potential obstacles to vision in the analysis that raised our interest. This PhD thesis presents a newly developed and tested methodology for the systematic assessment of the visibility available on roads that relies on sight lines drawn directly against the LiDAR point cloud. For this purpose, the concepts of Visual Prism (VP) and Rectangular Prismatic Unit (RPU) have been defined as key elements of this new way of treating vision; they represent an alternative to the traditional straight line drawn between the observer and the object. During the research, the impact of point cloud density on the results has been analyzed, and the methodology has been compared with the visibility results yielded by known techniques based on digital terrain models, digital surface models and project profiles on two existing road sections. In general, conventional methods overestimate sight distance compared with the new methodology based on LiDAR data, and in many cases the overestimation is significant. The tool, which displays the sight lines together with the three-dimensional point cloud, also makes it possible to identify the cause of each visual obstruction. This improvement is practice-ready and could be used when assessing roads and improving their sight distance and road safety conditions. / [ES] La existencia de una visibilidad adecuada a las condiciones reales de operación, es condición indispensable para alcanzar un diseño geométrico seguro. Las distancias de visibilidad requeridas para tareas inherentes a la conducción, tales como la decisión, la parada, el adelantamiento o el cruce, constituyen un parámetro esencial en el diseño geométrico de nuevas carreteras, formando parte importante de todas las guías de diseño a nivel internacional. Sin embargo, una vez construida la carretera y durante el tiempo en que esta se encuentra en servicio, muchas otras circunstancias de su entorno condicionan la visibilidad realmente disponible a lo largo del tiempo.
Por otro lado, dado que las guías de diseño geométrico contemplan las mediciones de visibilidad disponible con el observador y el obstáculo situados sobre la calzada, su medición sistemática y periódica es una complicada y tediosa labor no exenta de riesgos y de perturbaciones al tráfico. En la práctica ingenieril, es habitual el empleo de modelos digitales de elevaciones y de programas específicos de diseño geométrico para establecer las condiciones de visibilidad en carreteras; no obstante, el desarrollo de nuevas tecnologías de teledetección amplían las posibilidades a una mejor estimación de la visibilidad realmente disponible. La tecnología LiDAR está gozando de un importante impulso a nivel internacional en los últimos años y constituye una importante fuente de información consistente en millones de puntos georreferenciados pertenecientes a todo tipo de objetos que representan no solo la geometría de la propia carretera, sino también su entorno más inmediato. Precisamente por su capacidad de incluir en el análisis todo tipo de obstáculos potenciales a la visión, en la presente Tesis Doctoral se ha desarrollado y analizado una nueva metodología de evaluación sistemática de visibilidades disponibles en carreteras a partir de visuales trazadas directamente contra la nube de puntos LiDAR. Para ello se han definido por primera vez los conceptos de Prisma Visual (PV) y de Unidad Prismática Rectangular (UPR) como elementos básicos constitutivos de esta nueva forma de concebir la visión, alternativos a la tradicional línea recta visual trazada entre el observador y el objetivo. Durante la investigación se ha analizado el efecto de la densidad de la nube de puntos en los resultados y se ha sometido esta metodología a comparación con los resultados de visibilidad obtenidos por técnicas conocidas a partir de modelos digitales del terreno, modelos digitales de superficies y perfiles de proyecto en dos tramos de carretera existentes. En general, se obtiene una sobreestimación generalizada y en muchos casos significativa de las visibilidades realmente disponibles si se emplean metodologías convencionales en comparación con las obtenidas a partir de la nueva metodología basada en datos LiDAR. El desarrollo, preparado para la visualización conjunta de resultados de visuales y nube de puntos en tres dimensiones, permite asimismo interpretar el motivo de la obstrucción a la visión, lo que constituye un avance puesto al servicio de los ingenieros en la evaluación de la carretera y en la mejora de sus condiciones de visibilidad y de seguridad vial. / [CA] L'existència d'una visibilitat adequada a les condicions reials d'operació, es condició indispensable per a aconseguir un disseny geomètric segur. Les distàncies de visibilitat requerides per a tasques inherents a la conducció, tals com la decisió, la parada, l'avançament, o l'encreuament, constitueixen un paràmetre essencial en el disseny geomètric de noves carreteres, formant part important de totes les guies de disseny a nivell internacional. No obstant, una volta construïda la carretera i durant el temps en què es troba en servici, moltes altres circumstancies del seu entorn condicionen la visibilitat realment disponible. D'altra banda, donat que les guies de disseny geomètric contemplen les mesures de visibilitat disponible en l'observador i el obstacle situats sobre la calçada, la seua medició es una complicada i tediosa llavor no exempta de riscs i de molèsties al trànsit. 
En la pràctica, és habitual l'ús de models digitals d'elevacions i de programes específics de disseny geomètric per a establir les condicions de visibilitat en carreteres; no obstant, el desenvolupament de noves tecnologies de tele-detecció amplia les possibilitats a una millor estima de la visibilitat realment disponible. La tecnologia LIDAR està gojant d'un important impuls a nivell internacional en els últims anys i constitueix una important font d'informació consistent en milions de punts geo-referenciats de tot tipus d'objectes que representen no només la geometria de la pròpia carretera, sinó també el seu entorn més immediat. Precisament per la seua capacitat d'incloure en l'anàlisi tot tipus d'obstacles potencials a la visió, en la present tesi doctoral s'ha analitzat una nova metodologia d'avaluació sistemàtica de visibilitats disponibles en carreteres a partir de visuals traçades directament contra el núvol de punts LIDAR. Per tal motiu s'han definit per primera vegada els conceptes de Prisma Visual (PV) i d'Unitat Prismàtica Rectangular (UPR) com a elements bàsics constitutius d'aquesta nova forma de concebre la visió, alternatius a la tradicional línia recta visual traçada entre l'observador i l'objectiu. Durant la investigació s'ha analitzat l'efecte de la densitat del núvol de punts en els resultats i s'ha sotmès aquesta metodologia a comparació amb els resultats de visibilitat obtinguts per tècniques conegudes a partir de models digitals del terreny, models digitals de superfícies i perfils de projecte en dos trams de carretera existents. En general, s'obté una sobreestimació generalitzada i en molts casos significativa de les visibilitats realment disponibles si s'empren metodologies convencionals en comparació amb les obtingudes a partir de la nova metodologia basada en dades LiDAR. El desenvolupament, preparat per a la visualització conjunta de resultats de visuals i núvol de punts en tres dimensions, permet així mateix interpretar el motiu de l'obstrucció a la visió, el que constitueix un avanç posat al servei dels enginyers en l'avaluació de la carretera i en la millora de les seves condicions de visibilitat i de seguretat viària. / Campoy Ungria, JM. (2015). Nueva metodología para la obtención de distancias de visibilidad disponibles en carreteras existentes basada en datos LiDAR terrestre [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59062
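The core idea of this entry, checking sight lines against the raw point cloud rather than against a surface model, can be approximated with a short geometric test: sample the line between observer and target, and flag the sight line as obstructed if any LiDAR point lies within a small rectangular tolerance box (a crude stand-in for the thesis' Rectangular Prismatic Unit) around a sample. The box size, sampling step and coordinates below are arbitrary assumptions, not the parameters used in the thesis.

```python
# Rough sketch: test whether a sight line from observer to target is blocked
# by any LiDAR point falling inside small boxes placed along the line
# (a crude stand-in for the Rectangular Prismatic Units of the thesis).
import numpy as np

def sight_line_clear(observer, target, cloud, box_half=0.25, step=0.5):
    """observer/target: (3,) arrays in metres; cloud: (N, 3) LiDAR points."""
    observer, target = np.asarray(observer, float), np.asarray(target, float)
    direction = target - observer
    length = np.linalg.norm(direction)
    n_samples = max(int(length / step), 1)
    for t in np.linspace(0.0, 1.0, n_samples + 1)[1:-1]:   # skip the endpoints
        centre = observer + t * direction
        inside = np.all(np.abs(cloud - centre) <= box_half, axis=1)
        if inside.any():                                    # a point blocks this unit
            return False
    return True

def available_sight_distance(observer, targets, cloud):
    """Largest observer-target distance (targets ordered along the road) still visible."""
    visible = 0.0
    for target in targets:
        if not sight_line_clear(observer, target, cloud):
            break
        visible = float(np.linalg.norm(np.asarray(target) - np.asarray(observer)))
    return visible

# Synthetic example: a flat road with one obstacle-like cluster of points.
rng = np.random.default_rng(3)
cloud = rng.normal(loc=[40.0, 0.0, 0.9], scale=0.3, size=(400, 3))   # obstacle at x = 40 m
observer = np.array([0.0, 0.0, 1.1])                                  # driver eye height
targets = [np.array([x, 0.0, 0.5]) for x in range(10, 120, 10)]       # points along the lane
print(available_sight_distance(observer, targets, cloud))
```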
70

Etude de l'approche de l'interopérabilité par médiation dans le cadre d'une dynamique de collaboration appliquée à la gestion de crise / Mediation information system to support interoperability and collaborative behaviors in a context of crisis management

Truptil, Sébastien 24 January 2011 (has links)
Les collaborations inter-organisationnelles relèvent généralement de circonstances opportunistes et s’avèrent par conséquent éphémères. Les organisations doivent alors être disposées à s’intégrer dans ce type de collaboration tout en gardant leur identité propre. Ce constat est le point de départ du projet MISE (Mediation Information System Engineering), qui aborde cette notion de collaboration d’organisations selon l’angle du système d’information, en proposant une démarche de conception d’un SIM (système d’information de médiation). Ce SIM constitue un système tiers, médiateur des SI des diverses organisations, destiné à prendre en charge, d’une part, la coordination des actions des partenaires (orchestration de la dynamique collective) et, d’autre part, la gestion de la circulation de l’information au sein de la collaboration (acheminement et traduction des données). La conception du SIM repose sur une démarche d’ingénierie dirigée par les modèles (IDM). Par ailleurs, la notion de crise, reposant par définition sur la sollicitation d’acteurs hétérogènes concernés par une collaboration opportuniste (qui plus est dans le cadre d’un phénomène évolutif d’une durée indéterminée), fait du domaine de la gestion de crise un parfait cas d’étude pour le projet MISE. Ces travaux de thèse, liés au projet ANR-CSOSG ISyCri, présentent cette démarche de conception du SIM appliquée au domaine de la gestion de crise. Le manuscrit parcourt la démarche MISE appliquée au domaine de la gestion de crise depuis la définition conceptuelle jusqu’à la réalisation technique selon les trois étapes de cette démarche IDM : (i) au niveau « métier » : l’utilisation d’une base de connaissance, représentée par une ontologie, permet, à partir des caractéristiques de la situation de crise et du savoir-faire des partenaires de la collaboration, de définir le processus collaboratif représentatif de la succession des activités à exécuter dans le cadre de la réponse à la crise. (ii) au niveau « logique » : une transformation de modèle permet de construire, à partir du modèle de processus collaboratif obtenu au niveau « métier », une architecture logique du SIM (orientée service, selon les préceptes SOA). (iii) au niveau « technique » : une deuxième transformation de modèles permet de générer les éléments nécessaires à la configuration du SIM, notamment le fichier BPEL. L’agilité du SIM ainsi déployé constitue une exigence incontournable. Les travaux présentés dans ce manuscrit proposent donc d’intégrer ces différentes étapes de conception du SIM sous la forme de composants logiciels indépendants, sollicités à loisir au sein d’une architecture orientée service. Cette solution apporte une grande flexibilité structurelle à la démarche, en autorisant la reconfiguration partielle du SIM à partir du niveau adapté à la situation. / Organizations should be able to take part in opportunistic and short-lived collaborative networks. However, they should also keep control of their own identity. The MISE project (Mediation Information System Engineering) aims at dealing with that issue from the information system point of view. The main principle is to design a specific third-party mediation information system (MIS) in charge of, first, orchestrating the collaborative workflow of the collaborative network and, second, managing information (carrying and translating data). Designing such a MIS is based on a model-driven engineering (MDE) approach.
Considering the crisis management field, it is obvious that such a domain requires opportunistic collaboration of heterogeneous partners involved in the crisis response (furthermore, crisis management is a very dynamic process where agility is a crucial point). Directly linked to the French-funded ISyCri project, this PhD research work presents the overall approach for MIS design in a crisis management context. That MDE approach is based on three steps: (i) "Business" level: a collaborative process model is deduced from a knowledge base represented through an ontology. (ii) "Logical" level: an abstract service-oriented architecture of the MIS is built, based on a model transformation from the previously obtained collaborative process model. (iii) "Technical" level: all the required deployment files (including the BPEL file) are generated from the logical architecture, based on another model transformation. Besides, agility is a strong requirement for such a MIS. Therefore, these three steps are integrated, as independent software components, into a service-oriented architecture of a MIS-design tool. This solution brings structural flexibility to the overall approach by allowing partial redesign of the MIS (at the expected step).
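The three-step chain described above (business process, then logical service architecture, then deployment descriptor) can be mimicked by composing small functions, each consuming the output of the previous one. The sketch below uses plain dictionaries and emits a generic XML skeleton; it only illustrates the idea of chaining model transformations, is not the MISE tooling, and every element name in it is invented.

```python
# Illustrative chain of model transformations (invented structures):
# business-level process -> logical service architecture -> deployment skeleton.
from xml.etree.ElementTree import Element, SubElement, tostring

def business_to_logical(process: dict) -> dict:
    """Map each business activity to an abstract service of the mediation system."""
    return {
        "services": [
            {"name": f"{activity}_service", "operation": activity, "partner": partner}
            for activity, partner in process["activities"]
        ]
    }

def logical_to_technical(architecture: dict) -> str:
    """Generate a generic XML deployment skeleton (not real BPEL) from the architecture."""
    root = Element("process", name="crisis_response")
    for service in architecture["services"]:
        invoke = SubElement(root, "invoke",
                            service=service["name"],
                            operation=service["operation"],
                            partner=service["partner"])
        SubElement(invoke, "onFailure", action="notify_coordinator")
    return tostring(root, encoding="unicode")

# Hypothetical collaborative process deduced at the "business" level.
collaborative_process = {
    "activities": [
        ("evacuate_area", "civil_protection"),
        ("restore_power", "utility_company"),
        ("provide_shelter", "red_cross"),
    ]
}

logical_architecture = business_to_logical(collaborative_process)   # step (ii)
deployment_descriptor = logical_to_technical(logical_architecture)  # step (iii)
print(deployment_descriptor)
```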
