About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Integrating design and construction to improve constructability through an effective usage of IT

Underwood, Jason January 1995 (has links)
No description available.
2

A knowledge-based electronic prototype system (KEPS) for building and services design integration

Hew, Ken Ping January 1998 (has links)
No description available.
3

Aplinkos apsaugos inspekcijos informacinės sistemos posistemių projektavimas ir programinė realizacija / Design and realization of Information System of Environmental Inspection

Černiauskaitė, Agnė 27 May 2005 (has links)
Nowadays the complexity and size of business systems is growing steadily and, as practice shows, this situation is unlikely to change in the near future. This creates difficulties not only for the software development lifecycle but also for maintenance and extensibility. Addressing them requires a broader perspective that views the system as consisting of other systems. This work examines which methods can be applied in this context. An analysis of modern technologies (the Java programming language) and modelling methods (functional decomposition and object-oriented system engineering) is presented. The suggested system decomposition approach is a synthesis of existing ones. The new method has been applied to the development of an experimental information system for the Territorial Departments of the Environmental Inspection. A prototype of this system has been implemented using J2EE (Java 2, Enterprise Edition) technologies. The developed software meets the usage and maintenance requirements of large distributed systems (a large number of users, security rights, compatibility, centralised administration, etc.). The suggested approach may be used when extending existing software or developing similar applications.
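To make the decomposition idea concrete, here is a minimal sketch in plain Java (not the thesis's actual J2EE code; all class, method, and region names are hypothetical) of a system viewed as consisting of subsystems behind a common interface, with access rights enforced by a central facade:

```java
// Sketch: "system of systems" decomposition. Each territorial subsystem sits
// behind an interface; a central facade composes them and applies security
// rights centrally, mirroring the abstract's centralised-administration
// requirement. All names are hypothetical.
import java.util.List;
import java.util.Map;

interface InspectionSubsystem {                  // one unit of the decomposition
    List<String> findReports(String region);     // subsystem-local query
}

class TerritorialDepartment implements InspectionSubsystem {
    private final Map<String, List<String>> reportsByRegion;
    TerritorialDepartment(Map<String, List<String>> reportsByRegion) {
        this.reportsByRegion = reportsByRegion;
    }
    public List<String> findReports(String region) {
        return reportsByRegion.getOrDefault(region, List.of());
    }
}

// Central facade: composes subsystems and checks access before delegating.
class InspectionSystem {
    private final List<InspectionSubsystem> subsystems;
    InspectionSystem(List<InspectionSubsystem> subsystems) { this.subsystems = subsystems; }

    List<String> findReports(String user, String region) {
        if (!hasRight(user, region))
            throw new SecurityException(user + " may not read " + region);
        return subsystems.stream()
                .flatMap(s -> s.findReports(region).stream())
                .toList();
    }
    private boolean hasRight(String user, String region) {
        return user.startsWith("inspector-");    // placeholder access rule
    }
}

public class Demo {
    public static void main(String[] args) {
        var dept = new TerritorialDepartment(Map.of("Kaunas", List.of("air-quality-2005-04")));
        var system = new InspectionSystem(List.of(dept));
        System.out.println(system.findReports("inspector-a", "Kaunas")); // [air-quality-2005-04]
    }
}
```

In a J2EE deployment the same shape would typically distribute: each InspectionSubsystem would be a remote component rather than a local object, with the facade providing the single centrally administered entry point.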
4

Spatio-temporal Analysis of Urban Heat Island and Heat Wave Evolution using Time-series Remote Sensing Images: Method and Applications

Yang, Bo 11 June 2019 (has links)
No description available.
5

Computational workflow management for conceptual design of complex systems: an air-vehicle design perspective

Balachandran, Libish Kalathil January 2007 (has links)
The decisions taken during the aircraft conceptual design stage are of paramount importance, since they commit up to eighty percent of the product life-cycle costs. Thus, in order to obtain a sound baseline which can then be passed on to the subsequent design phases, various studies ought to be carried out during this stage. These include trade-off analysis and multidisciplinary optimisation performed on computational processes assembled from hundreds of relatively simple mathematical models describing the underlying physics and other relevant characteristics of the aircraft. However, the growing complexity of aircraft design in recent years has prompted engineers to substitute the conventional algebraic equations with compiled software programs (referred to as models in this thesis) which still retain the mathematical models, but allow for a controlled expansion and manipulation of the computational system. This tendency has posed the research question of how to dynamically assemble and solve a system of non-linear models. In this context, the objective of the present research has been to develop methods which significantly increase the flexibility and efficiency with which the designer is able to operate on large-scale multidisciplinary computational systems at the conceptual design stage. In order to achieve this objective, a novel computational process modelling method has been developed for generating computational plans for a system of non-linear models. The computational process modelling was subdivided into variable flow modelling, decomposition and sequencing. A novel method named the Incidence Matrix Method (IMM) was developed for variable flow modelling, the process of identifying the data flow between the models based on a given set of input variables. This method has the advantage of rapidly producing feasible variable flow models for a system of models with multiple outputs. In addition, criteria were derived for choosing the optimal variable flow model that would lead to faster convergence of the system. Cont/d.
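As an illustration of the variable-flow and sequencing problem the abstract describes, the sketch below derives an execution order for a set of models from their input and output variables. It is a generic forward-chaining sequencer under assumed data, not the thesis's actual Incidence Matrix Method; all model and variable names are hypothetical:

```java
// Sketch: sequencing models by data flow. A model is ready to run once all
// its input variables are known; running it makes its outputs known. Models
// left over at the end are mutually coupled and would need iterative solving.
import java.util.*;

record Model(String name, Set<String> inputs, Set<String> outputs) {}

public class Sequencer {
    static List<Model> sequence(List<Model> models, Set<String> knownInputs) {
        Set<String> known = new HashSet<>(knownInputs);
        List<Model> order = new ArrayList<>();
        List<Model> pending = new ArrayList<>(models);
        boolean progress = true;
        while (progress && !pending.isEmpty()) {
            progress = false;
            for (Iterator<Model> it = pending.iterator(); it.hasNext(); ) {
                Model m = it.next();
                if (known.containsAll(m.inputs())) {   // all inputs resolved
                    order.add(m);
                    known.addAll(m.outputs());         // outputs become available
                    it.remove();
                    progress = true;
                }
            }
        }
        if (!pending.isEmpty())
            System.out.println("Coupled models needing iteration: " + pending);
        return order;
    }

    public static void main(String[] args) {
        List<Model> models = List.of(
            new Model("aero",       Set.of("wingArea", "speed"), Set.of("lift", "drag")),
            new Model("propulsion", Set.of("drag"),              Set.of("thrust", "fuelFlow")),
            new Model("weights",    Set.of("lift", "fuelFlow"),  Set.of("grossWeight")));
        System.out.println(sequence(models, Set.of("wingArea", "speed"))
                .stream().map(Model::name).toList());  // -> [aero, propulsion, weights]
    }
}
```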
6

Interprétation automatique de données hétérogènes pour la modélisation de situations collaboratives : application à la gestion de crise / Automatic interpretation of heterogeneous data to model collaborative situations: application to crisis management

Fertier, Audrey 29 November 2018 (has links)
The present work applies to the field of French crisis management, and specifically to the response phase that follows a major event, such as a flood or an industrial accident. In the aftermath of the event, crisis cells are activated to prevent and deal with the consequences of the crisis. Operating under time pressure, they face many difficulties: the stakeholders are numerous, autonomous and heterogeneous; the coexistence of contingency plans favours contradictions; and the interconnection of networks promotes cascading effects. These observations arise while the volume of data available on computer networks keeps growing, coming, for example, from sensors, social media, or volunteers in the crisis theatre. This is an opportunity to design an information system able to collect the available data and interpret it into formalised information suited to the crisis cells. To succeed, such a system must manage the 4Vs of Big Data: limiting the Volume, unifying the Variety, and improving the Veracity of the data and information it handles, while following the dynamics (Velocity) of the ongoing crisis.
Our literature review on the different parts of this architecture enabled us to define such an information system, which is able to (i) receive different types of events emitted from data sources both known and unknown, (ii) use interpretation rules deduced directly from official business rules, and (iii) structure the information that will be used by the stakeholders. Its architecture is event-driven and coexists with the service-oriented architecture of the software developed by the Centre de Génie Industriel (CGI) laboratory. The implemented system has been tested on the scenario of a hundred-year flood of the middle Loire elaborated by two French flood-forecasting centres (Services de Prévision des Crues, SPC). The model describing the current crisis situation, deduced by the proposed information system, can be used to (i) deduce a crisis response process, (ii) detect unexpected situations, and (iii) update a common operational picture (COP) suited to the decision-makers.
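As an illustration of the event-driven interpretation style described above, the sketch below routes raw events through simple interpretation rules (a predicate plus a mapping, standing in for the deduced business rules) to produce formalised facts. It is not the laboratory's actual system; all rule, source, and field names are hypothetical:

```java
// Sketch: event-driven interpretation. Incoming events from heterogeneous
// sources are matched against interpretation rules; matching rules enrich a
// situation model with formalised facts usable by a crisis cell.
import java.util.*;
import java.util.function.Function;
import java.util.function.Predicate;

record Event(String source, String type, double value) {}
record Fact(String statement) {}
record Rule(Predicate<Event> matches, Function<Event, Fact> interpret) {}

public class Interpreter {
    private final List<Rule> rules = new ArrayList<>();
    private final List<Fact> situationModel = new ArrayList<>();  // current picture

    void addRule(Rule r) { rules.add(r); }

    // Event-driven entry point: each event is checked against every rule.
    void onEvent(Event e) {
        for (Rule r : rules)
            if (r.matches().test(e))
                situationModel.add(r.interpret().apply(e));
    }

    public static void main(String[] args) {
        Interpreter cell = new Interpreter();
        // Hypothetical rule deduced from a flood-forecasting business rule:
        cell.addRule(new Rule(
            e -> e.type().equals("waterLevel") && e.value() > 5.0,
            e -> new Fact("Flood risk at " + e.source() + ": level " + e.value() + " m")));
        cell.onEvent(new Event("gauge-orleans", "waterLevel", 5.8));
        cell.onEvent(new Event("gauge-tours", "waterLevel", 2.1)); // filtered out
        cell.situationModel.forEach(f -> System.out.println(f.statement()));
    }
}
```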
