31

Arbetsflöde för automatgenerering av objektinformation för järnvägsanläggningar / Workflow for automatic generation of object information for railway facilities

Yeldico, Mansour, Yeldico, Fadi January 2019 (has links)
In Sweden, the Swedish Transport Administration (Trafikverket) sets requirements for how the documentation produced when designing railway installations must be presented, including how the information about the objects on railway drawings is to be reported. One area that affects the working methodology in railway projects is Building Information Modeling (BIM), which among other things addresses how object information on a drawing can be presented efficiently. Although the client, the consulting firm Sweco Rail, already uses BIM, many tables are still filled in manually. This manual work introduces a risk of errors and places a burden on the design engineers, who must keep track of all drawing objects and of the changes that occur during an ongoing project. The need for a more efficient and quality-assured workflow therefore increases. This study investigated the possibility of automating the manual production of two tables for two drawing types already used within the company: sign lists (tavellistor) for geographical 2D drawings and bills of material for framework drawings. A sign list specifies, for example, the placement of speed signs, while a bill of material can list the components of an electrical cabinet. Framework drawings and geographical 2D drawings are two vital parts of railway design: framework drawings are reused across projects with minor modifications depending on the application, and a geographical 2D drawing is an overview that describes how a railway line is laid out. The investigation resulted in two workflows, one for automatically generating sign lists for geographical 2D drawings and one for automatically generating bills of material for framework drawings, both built on functions available in the CAD software MicroStation Connect Edition. The workflows were evaluated on the basis of interviews with design engineers at Sweco Rail. Even though the automation was not complete, they considered the automated workflows more efficient than the existing manual approach, and the workflows also contributed to a more quality-assured way of working. At the same time, production costs were reduced by approximately 47 % per sign when generating the sign list and by approximately 58 % per component when generating the bill of material. In addition, the four interviewed design engineers reported that the work became less strenuous, which contributed to a better work environment.
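The core of the automated workflow is tabulating object attributes from the drawing into a deliverable table. The sketch below illustrates that idea generically in Python; the object fields, the speed-sign filter and the CSV output are hypothetical, and the thesis workflows instead rely on functionality built into MicroStation Connect Edition.

```python
import csv

# Generic illustration of turning drawing-object attributes into a sign list
# (tavellista). The object fields and the CSV output are hypothetical; the
# workflows in the thesis use built-in MicroStation Connect Edition
# functionality rather than an external script.

drawing_objects = [
    {"type": "speed_sign", "value": "80 km/h", "km": 12.450, "track": "U1"},
    {"type": "speed_sign", "value": "40 km/h", "km": 13.120, "track": "U1"},
    {"type": "signal",     "value": "Hsi 21",  "km": 12.800, "track": "N1"},
]

def generate_sign_list(objects, path="tavellista.csv"):
    """Filter the sign objects and write them as a table sorted by chainage."""
    signs = sorted(
        (o for o in objects if o["type"] == "speed_sign"),
        key=lambda o: o["km"],
    )
    with open(path, "w", newline="", encoding="utf-8") as handle:
        writer = csv.DictWriter(handle, fieldnames=["km", "track", "value"])
        writer.writeheader()
        for sign in signs:
            writer.writerow({k: sign[k] for k in ("km", "track", "value")})
    return len(signs)

print(generate_sign_list(drawing_objects), "signs written")
```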
32

Modélisation du squelette pour la génération réaliste de postures de la langue des signes française / Skeleton modelling for the realistic generation of French sign language postures

Delorme, Maxime 07 December 2011 (has links)
Recent advances in animation have allowed virtual characters to be deployed for many purposes. Virtual signers (signing avatars) are three-dimensional characters that express themselves in sign language, allowing messages to be delivered to deaf and hard-of-hearing signers in an anonymous and modular way. However, the automatic generation of animations for such characters depends on a lexical description of the signs, a linguistic model that in turn depends on the generation system. Signs described through these models are usually perfect, geometric realizations that lead to robotic and unnatural movements of the signer. This thesis focuses on adding anatomical information to the control skeleton of the virtual character so that it signs in a more human and realistic way. This additional information is grouped under the name "anatomical model" and is divided into five main contributions: a new computer representation of the skeleton, an anthropometric study of the hand, the unification of articulatory dependencies, a new model of the carpometacarpal complex allowing easier opposition of the thumb, and a model that computes the comfort of a posture. These contributions are integrated into a generation platform using techniques adapted to the constraints imposed by the linguistic models. The work concludes with an evaluation of the system and a discussion of future work that could build on this thesis.
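To give a concrete feel for what a posture-comfort score can look like, here is a minimal sketch that rates joints by their distance from a neutral angle relative to the joint limits. The joint list, the limits and the simple averaging are assumptions made for the example; they are not the anatomical comfort model defined in the thesis.

```python
# Illustrative posture-comfort score: joints close to a neutral angle and far
# from their limits are considered comfortable. Joint names, limits and the
# plain average are assumptions for this example only.

JOINTS = {
    # name: (min_deg, neutral_deg, max_deg)
    "shoulder_flexion": (-40.0, 20.0, 170.0),
    "elbow_flexion":    (0.0,   70.0, 145.0),
    "wrist_extension":  (-60.0, 0.0,  60.0),
}

def comfort(posture):
    """Return a score in [0, 1]; 1 means every joint is at its neutral angle."""
    scores = []
    for name, angle in posture.items():
        lo, neutral, hi = JOINTS[name]
        span = (hi - neutral) if angle >= neutral else (neutral - lo)
        deviation = min(abs(angle - neutral) / span, 1.0)
        scores.append(1.0 - deviation)
    return sum(scores) / len(scores)

relaxed = {"shoulder_flexion": 25.0, "elbow_flexion": 75.0, "wrist_extension": 5.0}
strained = {"shoulder_flexion": 160.0, "elbow_flexion": 5.0, "wrist_extension": -55.0}
print(f"relaxed:  {comfort(relaxed):.2f}")   # close to 1
print(f"strained: {comfort(strained):.2f}")  # close to 0
```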
33

Formalisation et automatisation de YAO, générateur de code pour l’assimilation variationnelle de données / Formalisation and automation of YAO, code generator for variational data assimilation

Nardi, Luigi 08 March 2011 (has links)
Variational data assimilation (4D-Var) is a widely used technique in geophysics, in particular in meteorology and oceanography. It consists in estimating the control parameters of a direct numerical model by minimizing a cost function that measures the misfit between the model outputs and the observed measurements. The minimization, which is based on a gradient method, requires the computation of the adjoint model (the product of the transpose of the Jacobian matrix with the vector of derivatives of the cost function at the observation points). Implementing 4D-Var raises complex programming issues, in particular concerning the adjoint model, the parallelization of the code and efficient memory management. To address these difficulties and facilitate the development of 4D-Var applications, the YAO framework developed at LOCEAN represents the direct model as a computation flow graph called a modular graph: modules represent computation units and edges describe the data transfers between them. Description directives allow a user to describe the direct model and then generate the modular graph associated with it. YAO contains two core algorithms: a forward propagation on the graph that computes the outputs of the direct model, and a backward propagation on the graph that computes the adjoint model. YAO then generates the code of the direct model and of its adjoint, and also supports various scenarios for running data assimilation sessions. This thesis presents computer science research carried out within the YAO framework. We first formalized the existing YAO specifications in a more general way. We then proposed algorithms that automate important tasks, such as the automatic generation of an "optimal" ordering of the computations and the automatic shared-memory parallelization of the generated code using OpenMP directives. In the medium term, the results of this thesis lay the foundations for evolving YAO into a general and operational platform for 4D-Var data assimilation, capable of handling large, real-world applications.
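The forward/backward propagation on the modular graph corresponds to reverse-mode differentiation: the adjoint is accumulated by multiplying transposed local Jacobians along the reversed graph. The sketch below illustrates this on a toy two-module chain in Python/NumPy; the Module interface and the function names are assumptions made for the example and do not reflect the code that YAO actually generates.

```python
import numpy as np

# Minimal sketch of forward/adjoint propagation on a modular graph.
# Each module computes its output and the local Jacobians of that output
# with respect to its inputs (illustrative only).

class Module:
    def __init__(self, fn, jac):
        self.fn = fn      # output = fn(inputs)
        self.jac = jac    # list of local Jacobians d(output)/d(input_k)

def forward(order, modules, inputs, values):
    """Propagate values through the graph in topological order."""
    for name in order:
        args = [values[src] for src in inputs[name]]
        values[name] = modules[name].fn(*args)
    return values

def adjoint(order, modules, inputs, values, seed):
    """Back-propagate the adjoint (transpose-Jacobian products)."""
    adj = {name: np.zeros_like(values[name]) for name in values}
    adj.update(seed)                      # dJ/d(output) at observed nodes
    for name in reversed(order):
        args = [values[src] for src in inputs[name]]
        for src, J in zip(inputs[name], modules[name].jac(*args)):
            adj[src] = adj[src] + J.T @ adj[name]
    return adj

# Toy two-module chain: y = A x, z = sin(y); cost gradient seeded at z.
A = np.array([[1.0, 2.0], [0.5, -1.0]])
modules = {
    "y": Module(lambda x: A @ x,     lambda x: [A]),
    "z": Module(lambda y: np.sin(y), lambda y: [np.diag(np.cos(y))]),
}
inputs = {"y": ["x"], "z": ["y"]}
values = forward(["y", "z"], modules, inputs, {"x": np.array([0.3, 0.7])})
grad = adjoint(["y", "z"], modules, inputs, values, {"z": np.ones(2)})
print(grad["x"])   # gradient of sum(z) with respect to x
```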
34

Aplicação do software ScicosLab para análise do controle automático de geração de sistemas elétricos de potência / Application of ScicosLab software for analysis of automatic generation control of electric power systems

Oda, George 22 June 2012 (has links)
The purpose of this work is to show that the ScicosLab software can be used as an interesting and effective computational tool for analyzing the automatic generation control (AGC) of electric power systems. The software is first presented, followed by the concepts and definitions of rotational motion needed to develop mathematical models for generators equipped with steam or hydraulic turbines and speed governors, and for their electrical loads. The studies use a system consisting of two distinct interconnected areas in which a load increase in one area is simulated with and without the tie-line, first ignoring and then considering the primary and supplementary controls. Finally, a more realistic three-area system extracted from the Brazilian power system is analyzed. The computational results show graphically the variations of the two main quantities of interest: the frequency of each area and the tie-line power. These quantities allow the behavior of the system to be evaluated after a disturbance that affects the generation-load balance. In this context, the ScicosLab package is shown to model and simulate the load-frequency control of power systems effectively, making it an excellent alternative to similar programs that require a paid license. / Master of Science (Mestre em Ciências)
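For reference, the per-area relations that typically underlie such load-frequency (AGC) studies are the textbook ones below, written in standard notation; the exact block diagrams and parameter values used in the thesis are not reproduced here.

```latex
% Standard per-area load-frequency relations (textbook notation)
\begin{align}
  \frac{2H_i}{f_0}\,\frac{\mathrm{d}\,\Delta f_i}{\mathrm{d}t}
    &= \Delta P_{m,i} - \Delta P_{L,i} - \Delta P_{\text{tie},i} - D_i\,\Delta f_i \\
  \Delta P_{\text{tie},1}(s) &= \frac{2\pi T_{12}}{s}\,\bigl(\Delta f_1(s) - \Delta f_2(s)\bigr) \\
  \text{ACE}_i &= \Delta P_{\text{tie},i} + B_i\,\Delta f_i
\end{align}
```

Here Δf_i is the frequency deviation of area i, ΔP_tie,i the tie-line power deviation, H_i the inertia constant, D_i the load damping, T_12 the synchronizing coefficient and B_i the frequency bias; the supplementary control drives ACE_i to zero.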
35

Geração semi-automática de extratores de dados da web considerando contextos fracos / Semi-automatic generation of web data extractors considering weak contexts

Oliveira, Daniel Pereira de 03 March 2006 (has links)
Today the Web is the largest information repository available. However, this huge variety of information is mostly represented in textual form and requires human interpretation to be used effectively. On the other hand, a large set of Web pages are in fact composed of collections of implicit data objects, for instance in on-line catalogs, digital libraries and e-commerce sites in general. Extracting the contents of these pages and identifying the structure of the data objects they contain allows more sophisticated forms of processing beyond hyperlink browsing and keyword-based search. The task of extracting data from Web pages is usually performed by specialized programs called extractors or wrappers. In this work we propose and evaluate a new approach to wrapper development in which the user is only responsible for providing training examples of the atomic items (attributes) that constitute the objects of interest. Based on these examples, our method automatically generates patterns for extracting other atomic items appearing in contexts similar to those of the examples and infers a plausible, meaningful structure to organize them. The method for generating extraction patterns uses techniques inherited from solutions to the multiple string alignment problem and produces patterns that can easily be encoded as regular expressions. Inferring a structure for the extracted objects is the task of the HotCycles algorithm, previously proposed and revised and extended in this work: it assembles an adjacency graph over the extracted atomic values and performs a structural analysis of this graph, looking for patterns that indicate structural constructs such as tuples and lists, from which a complex (nested) object type is assigned to the extracted data. Experiments carried out on 21 collections of real Web pages demonstrated the feasibility of the extraction method, achieving over 94% effectiveness using no more than 10 training examples per attribute. The HotCycles algorithm was able to infer a meaningful structure for the objects in all collections; combined with the atom extraction method, it reached 97% of structures correctly inferred, also with at most 10 examples per attribute. The high number of correctly inferred structures, together with the high precision and recall of the extraction process, shows that this is indeed a promising approach.
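As a toy illustration of example-driven pattern generation, the sketch below generalizes only the context shared by the training occurrences and turns it into a regular expression. The thesis method is based on multiple string alignment and is considerably more general, so the context-window approach and the window sizes here are assumptions made for brevity.

```python
import re

# Toy example-based extraction-pattern generation: generalize the left/right
# context shared by the training occurrences of the attribute values.
LEFT, RIGHT = 20, 5   # illustrative context-window sizes

def common_prefix(strings):
    prefix = ""
    for chars in zip(*strings):
        if len(set(chars)) != 1:
            break
        prefix += chars[0]
    return prefix

def common_suffix(strings):
    return common_prefix([s[::-1] for s in strings])[::-1]

def build_pattern(page, examples):
    """Build a regex from the contexts surrounding the example values."""
    lefts, rights = [], []
    for value in examples:
        i = page.index(value)
        lefts.append(page[max(0, i - LEFT):i])
        rights.append(page[i + len(value):i + len(value) + RIGHT])
    return re.escape(common_suffix(lefts)) + r"(.+?)" + re.escape(common_prefix(rights))

page = "<li>Price: $10.50</li><li>Price: $7.99</li><li>Price: $123.00</li>"
pattern = build_pattern(page, ["10.50", "7.99"])
print(re.findall(pattern, page))   # ['10.50', '7.99', '123.00']
```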
36

Geração automática de padrões de navegação para web sites de conteúdo dinâmico / Automatic generation of search patterns on dynamic contents web sites

Vidal, Márcio Luiz Assis 22 March 2006 (has links)
A growing number of Web applications need to process collections of similar pages obtained from Web sites, with the ultimate goal of exploiting the valuable information these pages implicitly contain to perform tasks such as querying, searching, data extraction, data mining and the analysis of usage and popularity characteristics. For some of these applications, the criteria that determine whether a page belongs in the collection relate to features of the page content. In many other important situations, however, the inherent structure of the pages, rather than their content, provides a better criterion for guiding the crawl. Motivated by this problem, we propose a new approach for generating structure-driven crawlers that requires minimal effort from the user: only an example of the pages to be collected and an entry point to the Web site. Another important feature of the approach is that it can handle Web sites in which the pages to be collected are generated dynamically by filling in forms. Contrary to existing methods in the literature, it requires neither a sample database to support the form filling nor extensive interaction with the user. Experiments with the approach achieved 100% precision on crawls of 17 real Web sites with static and dynamic content, and at least 95% recall on the 11 static Web sites used in the experiments. / Fundação de Amparo à Pesquisa do Estado do Amazonas
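One simple way to picture a structure-driven criterion is to compare pages by their sets of root-to-node tag paths instead of their text, as in the sketch below. The path representation, the Jaccard measure and the threshold are illustrative assumptions, not the structural criterion actually used in the dissertation.

```python
from html.parser import HTMLParser

# Toy structure-driven page filter: pages are compared by the set of
# root-to-node tag paths rather than by their textual content.

class TagPathCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.paths = [], set()

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        self.paths.add("/".join(self.stack))

    def handle_endtag(self, tag):
        if tag in self.stack:
            # pop up to and including the matching open tag
            while self.stack and self.stack.pop() != tag:
                pass

def tag_paths(html):
    collector = TagPathCollector()
    collector.feed(html)
    return collector.paths

def structurally_similar(html_a, html_b, threshold=0.8):
    a, b = tag_paths(html_a), tag_paths(html_b)
    jaccard = len(a & b) / len(a | b) if a | b else 1.0
    return jaccard >= threshold

example = "<html><body><div class='item'><h2>A</h2><p>x</p></div></body></html>"
candidate = "<html><body><div class='item'><h2>B</h2><p>y</p></div></body></html>"
unrelated = "<html><body><table><tr><td>z</td></tr></table></body></html>"
print(structurally_similar(example, candidate))   # True
print(structurally_similar(example, unrelated))   # False
```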
37

Automated Configuration of Time-Critical Multi-Configuration AUTOSAR Systems

Chandmare, Kunal 28 September 2017 (has links)
The vision of automated driving demands a highly available system, especially for safety-critical functionality. In automated driving, where the driver is not required to be part of the control loop, the system must remain operational even after the failure of a critical component, until the driver regains control of the vehicle. To achieve such fail-operational behavior, the developed design process relies on software redundancy instead of a conventional dedicated backup and therefore requires an automatic configurator for the scheduling-relevant parameters that ensure the real-time behavior of the system. Several implementation methods are introduced to provide such an automatic service, which also considers task criticality before assigning tasks to processors. In addition, a generic method is developed to automatically generate adaptation plans for an existing monitoring and reconfiguration service, so that the system can cope with environments in which faults occur.
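A criticality-aware configurator must, at minimum, decide on which processor each task runs while respecting a schedulability budget. The sketch below shows one such heuristic (highest criticality first, worst-fit by utilization); the task attributes, the utilization bound and the placement strategy are assumptions for illustration, not the AUTOSAR configuration method developed in the thesis.

```python
from dataclasses import dataclass

# Toy criticality-aware task-to-core assignment: tasks are placed highest
# criticality first, on the least-loaded core (worst fit), subject to a
# simple per-core utilization budget.

@dataclass
class Task:
    name: str
    wcet_ms: float      # worst-case execution time
    period_ms: float
    criticality: int    # higher value = more critical

    @property
    def utilization(self) -> float:
        return self.wcet_ms / self.period_ms

def assign(tasks, n_cores, bound=0.69):
    """Return a core -> list-of-task-names mapping, or raise if infeasible."""
    load = [0.0] * n_cores
    mapping = {core: [] for core in range(n_cores)}
    # most critical first, larger utilization first as a tie-breaker
    for task in sorted(tasks, key=lambda t: (-t.criticality, -t.utilization)):
        core = min(range(n_cores), key=lambda c: load[c])   # worst fit
        if load[core] + task.utilization > bound:
            raise RuntimeError(f"no feasible core for {task.name}")
        load[core] += task.utilization
        mapping[core].append(task.name)
    return mapping

tasks = [
    Task("brake_monitor", 2.0, 10.0, criticality=4),
    Task("lane_keeping",  5.0, 20.0, criticality=3),
    Task("logging",       4.0, 50.0, criticality=1),
    Task("diagnostics",   3.0, 40.0, criticality=2),
]
print(assign(tasks, n_cores=2))
```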
39

Automatic Generation Of Supply Chain Simulation Models From SCOR-Based Ontologies

Cope, Dayana 01 January 2008 (has links)
In today's economy of global markets, supply chain networks, supplier/customer relationship management and intense competition, decision makers must make decisions with tools that do not accommodate the nature of the changing market. This research focuses on developing a methodology that addresses this need. The developed methodology provides supply chain decision makers with a tool for efficient decision making in stochastic, dynamic and distributed supply chain environments, and allows informed decisions in a fast, sharable and easy-to-use format. The methodology was implemented as a stand-alone tool that allows users to define a supply chain simulation model using SCOR-based ontologies. The ontology includes both the supply chain knowledge and the knowledge required to build a simulation model of the supply chain system. A simulation model is generated automatically from the ontology, providing the flexibility to model at various levels of detail and to change the model structure on the fly. The implementation is demonstrated and evaluated through a retail-oriented case study. Compared with a "traditional" simulation methodology, a significant reduction in model definition and execution time was observed.
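The generation step can be pictured as walking a process ontology and emitting executable simulation elements. The sketch below does this for a hypothetical three-step SCOR-like fragment (Source, Make, Deliver); the ontology layout, the attribute names and the sequential lead-time simulation are assumptions for illustration, far simpler than the models the actual tool generates.

```python
import random

# Toy generation of a simulation model from a SCOR-like process ontology.
# The ontology fragment and the sequential entity-flow simulation are
# illustrative assumptions only.

ontology = {
    "processes": [
        {"id": "S1", "type": "Source",  "mean_time_h": 4.0},
        {"id": "M1", "type": "Make",    "mean_time_h": 6.0},
        {"id": "D1", "type": "Deliver", "mean_time_h": 2.0},
    ],
    "flows": [("S1", "M1"), ("M1", "D1")],
}

def build_model(onto):
    """Order the process steps by following the flow relations."""
    steps = {p["id"]: p for p in onto["processes"]}
    successors = dict(onto["flows"])
    start = (set(steps) - {dst for _, dst in onto["flows"]}).pop()
    chain, current = [], start
    while current is not None:
        chain.append(steps[current])
        current = successors.get(current)
    return chain

def simulate(chain, n_orders=1000, seed=42):
    """Average end-to-end lead time with exponential processing times."""
    rng = random.Random(seed)
    total = sum(
        sum(rng.expovariate(1.0 / step["mean_time_h"]) for step in chain)
        for _ in range(n_orders)
    )
    return total / n_orders

model = build_model(ontology)
print([step["id"] for step in model])             # ['S1', 'M1', 'D1']
print(f"avg lead time: {simulate(model):.1f} h")  # roughly 12 h
```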
40

Étude en réacteur auto-agité par jets gazeux de l'oxydation d'hydrocarbures naphténiques et aromatiques présents dans les gazoles / Oxidation study in a jet-stirred reactor of aromatic and naphthenic compounds contained in Diesel fuels

Husson, Benoît 23 May 2013 (has links)
The oxidation of naphthenic (ethyl-cyclohexane, n-butyl-cyclohexane) and aromatic (ethyl-benzene, n-butyl-benzene, n-hexyl-benzene) hydrocarbons was studied in a jet-stirred reactor (pressure from 1 to 10 bar, temperature from 500 to 1100 K, equivalence ratios of 0.25, 1 and 2, residence time of 2 s). The reaction products were quantified by gas chromatography and identified by coupling with mass spectrometry. The influence of the equivalence ratio, the pressure and the length of the alkyl side chain attached to the aromatic or naphthenic ring on the reactivity and the product selectivity was determined. The reactivity of ethyl-cyclohexane was also compared with that obtained for two other compounds containing 8 carbon atoms (n-octane and 1-octene). The experimental results for ethyl-cyclohexane and n-butyl-benzene compare satisfactorily with predictions made using detailed kinetic mechanisms from the literature, except for the naphthenic compound at temperatures below 800 K. A detailed kinetic mechanism for the oxidation of ethyl-benzene was developed (1411 reactions, 205 species) and validated against the results obtained in this work as well as results available in the literature. This mechanism constitutes the "aromatic base" implemented in the new EXGAS Alkyl-aromatics software, developed in parallel with this thesis, which allows the automatic generation of oxidation kinetic mechanisms for alkyl-aromatic compounds. A study of the generic rules for the decomposition of primary species in the secondary mechanism of this software was also carried out in this thesis.
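For context, the rate constants of the elementary reactions in detailed oxidation mechanisms of this kind are commonly expressed in the modified Arrhenius form shown below (standard notation; the parameters of the thesis's 1411-reaction mechanism are not reproduced here).

```latex
% Modified Arrhenius rate constant and elementary reaction rate (standard form)
\begin{equation}
  k_j(T) = A_j\,T^{n_j}\exp\!\left(-\frac{E_{a,j}}{RT}\right),
  \qquad
  r_j = k_j(T)\prod_i [X_i]^{\nu'_{ij}}
\end{equation}
```

Here A_j, n_j and E_{a,j} are the pre-exponential factor, temperature exponent and activation energy of reaction j, r_j is its rate and [X_i] are the reactant concentrations.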
