291

Passive interoperability testing for communication protocols / Le test d'interopérabilité passif pour les protocoles de communication

Chen, Nanxing 24 June 2013 (has links)
Dans le domaine des réseaux, le test de protocoles de communication est une activité importante afin de valider les protocoles applications avant de les mettre en service. Généralement, les services qu'un protocole doit fournir sont décrits dans sa spécification. Cette spécification est une norme ou un standard défini par des organismes de normalisation tels que l'ISO (International Standards Organisation), l'IETF (Internet Engineering Task Force), l'ITU (International Telecommunication Union), etc. Le but du test est de vérifier que les implémentations du protocole fonctionnent correctement et rendent bien les services prévus. Pour atteindre cet objectif, différentes méthodes de tests peuvent être utilisées. Parmi eux, le test de conformité vérifie qu'un produit est conforme à sa spécification. Le test de robustesse vérifie les comportements de l'implémentation de protocole face à des événements imprévus. Dans cette thèse, nous nous intéressons plus particulièrement au test d'interopérabilité, qui vise à vérifier que plusieurs composants réseaux interagissent correctement et fournissent les services prévus. L'architecture générale de test d'interopérabilité fait intervenir un système sous test (SUT) composé de plusieurs implémentations sous test (IUT). Les objectifs du test d'interopérabilité sont à la fois de vérifier que plusieurs implémentations (basées sur des protocoles conçus pour fonctionner ensemble) sont capables d'interagir et que, lors de leur interaction, elles rendent les services prévus dans leurs spécifications respectives. En général, les méthodes de test d'interopérabilité peuvent être classées en deux grandes approches: le test actif et le test passif. Le test actif est une technique de validation très populaire, dont l'objectif est essentiellement de tester les implémentations (IUT), en pratiquant une suite de contrôles et d'observations sur celles-ci. Cependant, une caractéristique fondamentale du test actif est que le testeur possède la capacité de contrôler les IUTs. Cela implique que le testeur perturbe le fonctionnement normal du système testé. De ce fait, le test actif n'est pas une technique appropriée pour le test d'interopérabilité, qui est souvent effectué dans les réseaux opérationnels, où il est difficile d'insérer des entrées arbitraires sans affecter les services ou les fonctionnements normaux des réseaux. A l'inverse, le test passif est une technique se basant uniquement sur les observations. Le testeur n'a pas besoin d'agir sur le SUT notamment en lui envoyant des stimuli. Cela permet au test d'être effectué sans perturber l'environnement normal du système sous test. Le test passif possède également d'autres avantages comme par exemple, pour les systèmes embarqués où le testeur n'a pas d'accès direct, de pourvoir effectuer le test en collectant des traces d'exécution du système, puis de détecter les éventuelles erreurs ou déviations de ces traces vis-à-vis du comportement du système. / In the field of networking, testing of communication protocols is an important activity to validate protocol applications before commercialisation. Generally, the services that must be provided by a protocol are described in its specification(s). A specification is generally a standard defined by standards bodies such as ISO (International Standards Organization), IETF (Internet Engineering Task Force), ITU (International Telecommunication Union), etc. 
The purpose of testing is to verify that protocol implementations work correctly and guarantee the quality of the services, so as to meet customers' expectations. To achieve this goal, a variety of testing methods have been developed. Conformance testing verifies that a product conforms to its specification. Robustness testing determines the degree to which a system operates correctly in the presence of exceptional inputs or stressful environmental conditions. Interoperability testing verifies that several network components cooperate correctly and provide the expected services. In this thesis, we focus on interoperability testing. The general architecture of interoperability testing involves a system under test (SUT), which consists of at least two implementations under test (IUT). The objectives of interoperability testing are to ensure that interconnected protocol implementations are able to interact correctly and, during their interaction, provide the services predefined in their respective specifications. In general, interoperability testing methods can be classified into two approaches: active and passive testing. Active testing is the most conventionally used technique; it aims to test the implementations (IUTs) by injecting a series of test messages (stimuli) and observing the corresponding outputs. However, active testing is intrusive by nature: the tester has the ability to control the IUTs and therefore inevitably disturbs the normal operation of the system under test. In this sense, active testing is not a suitable technique for interoperability testing, which is often carried out in operational networks; in such a context, it is difficult to insert arbitrary test messages without affecting the normal behavior and the services of the system. On the contrary, passive testing is a technique based only on observation. The tester does not need to interact with the SUT, which allows the test to be carried out without disturbing the normal operation of the system under test. Passive testing also has other advantages: for embedded systems to which the tester does not have direct access, testing can still be performed by collecting the execution traces of the system and then detecting errors by comparing the traces with the behavior of the system described in its specification. In addition, passive testing makes it possible to monitor a system over a long period and to report anomalies at any time.
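As a rough illustration of the passive approach described in this abstract (not the author's actual algorithm), the sketch below replays an observed message trace against a toy specification expressed as a finite-state machine; the state names, message labels and the `check_trace` helper are invented for the example.

```python
# Minimal sketch of a passive interoperability check: an observed trace of
# protocol messages is replayed against a specification modelled as a
# finite-state machine. Any message with no matching transition is reported
# as a deviation. States and messages are purely illustrative.

SPEC = {
    ("idle", "CONNECT_REQ"): "waiting",
    ("waiting", "CONNECT_ACK"): "connected",
    ("connected", "DATA"): "connected",
    ("connected", "DISCONNECT"): "idle",
}

def check_trace(trace, start="idle"):
    """Return (verdict, deviations) for an observed message trace."""
    state, deviations = start, []
    for i, msg in enumerate(trace):
        nxt = SPEC.get((state, msg))
        if nxt is None:
            deviations.append((i, state, msg))  # unexpected message in this state
        else:
            state = nxt
    return ("pass" if not deviations else "fail"), deviations

# Example: a trace captured without stimulating the system under test.
verdict, issues = check_trace(["CONNECT_REQ", "DATA", "CONNECT_ACK", "DATA"])
print(verdict, issues)  # "fail" because DATA is observed before CONNECT_ACK
```

Because the tester only consumes the trace, the check can run on captures from an operational network without injecting any stimuli.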
292

Fatores críticos de sucesso para integração com sistemas legados: um estudo de caso no SERPRO / Critical success factors for integration with legacy systems: A SERPRO case study

Wellington Montefusco Anastácio 18 March 2014 (has links)
Para o cidadão solicitar serviços de governo através de um portal que centralize o atendimento e não requeira conhecimento prévio da estrutura administrativa que organiza esses serviços, é necessário que o governo se atente a uma questão desafiadora: Quais são as poucas áreas na integração com sistemas legados cujo tratamento inadequado implicará necessariamente no fracasso na implementação do portal? Uma vez que sistemas de informação estão fortemente ligados aos processos de negócio da organização que atendem, a resposta dessa pergunta não tem natureza unicamente tecnológica. Buscou-se, nesse trabalho, a identificação das poucas áreas críticas para o sucesso na integração com sistemas legados no governo e explicações que ajudassem a entender por que elas assim são. Conhecer e entender os resultados obtidos contribui para a superação de barreiras que residem no desafio de implantar o portal centralizador que facilitará o autoatendimento dos cidadãos e contribuirá para o avanço do governo eletrônico. A estratégia utilizada para atingir os objetivos foi a de estudo de caso da organização pública Serviço Federal de Processamento de Dados (SERPRO). Este estudo teve uma natureza exploratória e descritiva e a organização é relevante porque atende demandas de sistemas de informação para o governo nas esferas municipal, federal e estadual há quase cinco décadas. O estudo de caso se desenvolveu em duas fases: (1) survey para identificar os fatores críticos de sucesso, incluindo análise fatorial para identificar as dimensões críticas que resumem os fatores encontrados e; (2) entrevistas semiestruturadas aplicadas a dez profissionais escolhidos pelo critério de representarem percepções extremas e opostas em relação à percepção média sobre cada dimensões crítica encontrada. Todas as entrevistas foram transcritas e categorizadas por análise temática. Foram obtidos 106 respondentes do survey e mais de 12 horas de conteúdo transcrito para as análises. Os resultados obtidos foram que o sucesso da integração de sistemas de informação com sistemas legados no governo está fortemente ligado a quatro dimensões: (1) efetividade dos recursos tecnológicos e humanos, porque a complexidade do sistema legado foi identificada como a causa de fracasso mais relevante em projetos dessa natureza; (2) processo minimizador de incertezas, porque as distorções na comunicação e os imprevistos que surgem ao longo do projeto requisitam altíssima qualidade de comunicação; (3) poder de prioridade, porque conflitos de interesse são frequentes e é crítico que se tenha poder e recursos para resolver questões como falta de prioridade de uma equipe externa; e (4) clareza da necessidade do negócio, porque essa clareza dá à equipe de desenvolvimento a segurança necessária sobre a consistência da solução de integração de sistemas. / For the citizen requesting government services through a centralized portal service which does not require prior knowledge of the government administrative structure organizing these services, it is necessary that the government pay attention to a challenging question: What are the few areas in integration with legacy systems in which inadequate treatment results necessarily in failure to the portal implementation? Since information systems have a strong link to the business processes, the question has not only a technological nature answer. In this work, we sought to identify the few critical areas for successful integration with legacy systems in government environment. 
We also looked for possible explanations that help to understand why these areas are critical. Knowing and understanding the results obtained in this work contributes to overcoming the barriers involved in implementing the centralized portal that will facilitate self-service for citizens and advance electronic government. The strategy used to achieve these objectives was a case study of the public organization Serviço Federal de Processamento de Dados (SERPRO). The study was exploratory and descriptive in nature, and the organization is relevant because it has met the demands for government information systems at the municipal, state and federal levels for nearly five decades. The case study had two phases. First, we identified critical success factors with a survey and, through factor analysis, found dimensions that summarize the identified factors. Second, we conducted ten semi-structured interviews with experienced professionals, selected because their perceptions of the importance of each critical dimension were extreme and opposite relative to the average perception for that dimension. We transcribed and categorized all the interviews using thematic analysis. We obtained 106 survey respondents and more than 12 hours of transcribed interviews. We found four dimensions critical to the successful integration of information systems with legacy systems in government. The first is the effectiveness of technological and human resources, because the complexity of the legacy system was identified as the most important cause of failure in system integration projects. The second is a process that minimizes uncertainty, because of the risk of distortions in communication and the contingencies that may arise during the project. The third is priority power, because conflicts of interest are common and it is critical to have the power and the resources needed to resolve them. The last is the clarity of the business need, because this clarity gives the development team the necessary confidence in the consistency of the systems integration solution.
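Purely as an illustration of the first, quantitative phase (survey followed by factor analysis), and not of SERPRO's actual instrument or data, a minimal sketch with scikit-learn; the synthetic responses, item count and choice of four factors are assumptions for the example.

```python
# Sketch: reduce Likert-scale survey items to a few latent "critical
# dimensions" via factor analysis. Synthetic data stands in for the real
# questionnaire responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents, n_items = 106, 20          # 106 matches the sample size reported above
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=4, random_state=0)  # four dimensions, as in the study
scores = fa.fit_transform(responses)                 # respondent scores per factor
loadings = fa.components_                            # item loadings per factor

# Items loading strongly on the same factor suggest one critical dimension.
for k, row in enumerate(loadings):
    top_items = np.argsort(np.abs(row))[::-1][:3]
    print(f"factor {k}: strongest items -> {top_items}")
```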
293

Atendimento para composição de serviços justo e transacional com origem em múltiplos domínios. / Service composition attendance with fair policy and transactional support from multiple domains.

Fernando Ryoji Kakugawa 18 May 2016 (has links)
O uso de Web Services tem aberto novas possibilidades de desenvolvimento de software, entre elas a composição de serviços. A composição de serviços apresenta novas questões no ambiente computacional, entre elas a execução integral, garantindo consistência e contemplando o controle de concorrência. O workflow é um conjunto de tarefas e interações organizadas de modo que forneça uma funcionalidade ao sistema, provendo a automatização de processos complexos, através da composição de serviços. Tal composição deve ser executada de forma transacional, processando as operações com consistência. A execução de workflows oriundos de domínios diferentes, faz com que os serviços que estão sendo utilizados, não possuam ciência do contexto da execução, podendo gerar atendimentos que não sejam justos, causando situações de deadlock e de starvation. Este trabalho apresenta estratégias para a execução de workflows em domínios distintos, que requisitam múltiplos serviços de um mesmo conjunto, sem a necessidade de um coordenador central, de forma transacional. O atendimento a requisição contempla uma política justa na utilização do recurso que impede a ocorrência de deadlock ou de starvation para os workflows em execução. Os experimentos realizados neste trabalho mostram que o sistema desenvolvido, aplicando as estratégias propostas, executa as composições de serviços de maneira transacional, atendendo as requisições com justiça, livre de deadlock e starvation, mantendo o sistema independente e autônomo. / Web Services have opened new possibilities for software development, among them service composition. Service composition introduces new issues in the computational environment, such as executing the whole composition, ensuring consistency, and controlling concurrency. A workflow is a set of organized tasks and interactions that provides functionality to the system, automating complex processes through service composition. Such a composition must be executed with transactional support, performing its operations consistently. When workflows from different domains share the same composition, the services being used are unaware of the execution context, which may lead to unfair servicing of requests and to deadlock or starvation. This work presents strategies for executing workflows from different domains that request multiple services from the same set, in a transactional way and without a central coordinator. Request handling follows a fair resource-usage policy that prevents deadlock and starvation for the running workflows. The experiments performed in this work show that the developed system, applying the proposed strategies, executes service compositions transactionally, serves requests fairly, remains free of deadlock and starvation, and keeps the system autonomous and independent.
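The thesis's distributed coordination protocol is not reproduced here; as a classic point of comparison for the deadlock problem it addresses, the sketch below avoids deadlock by acquiring every service lock in a single global order. The service names and workflows are invented, and FIFO fairness across waiting workflows is not modelled.

```python
# Sketch: two workflows from different domains each need services A and B.
# Acquiring the per-service locks in one global (alphabetical) order prevents
# the circular wait that causes deadlock.
import threading
import time

services = {"A": threading.Lock(), "B": threading.Lock()}

def run_workflow(name, needed):
    # Sort the needed services so every workflow locks them in the same order.
    ordered = sorted(needed)
    for s in ordered:
        services[s].acquire()
    try:
        print(f"{name}: executing composition on {ordered}")
        time.sleep(0.1)  # stand-in for invoking the composed services
    finally:
        for s in reversed(ordered):
            services[s].release()

w1 = threading.Thread(target=run_workflow, args=("workflow-1", ["B", "A"]))
w2 = threading.Thread(target=run_workflow, args=("workflow-2", ["A", "B"]))
w1.start(); w2.start(); w1.join(); w2.join()
```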
294

Interoperabilidade entre o modelo de dados do Taxonomic Data Working Group (TDWG) e tags do OpenStreetMap para a espécie Sotalia Guianensis / Interoperability between the data model of the Taxonomic Data Working Group (TDWG) and OpenStreetMap tags for the species Sotalia guianensis

Cyntia Virolli Cid Molina 23 March 2016 (has links)
A falta de padronização de dados pode resultar em perda de informações de suma importância nas diversas áreas do conhecimento, impossibilitando a integração de dados entre diferentes sistemas ou de diferentes bancos de dados, ou seja, os dados podem não ser interoperáveis. A solução para a integração de dados pode ser chamada de interoperabilidade, que são convenções e normas de formatos (extensões) e ontologias (padrões comuns) instituídos para que os sistemas possam dialogar. Um banco de dados de biodiversidade é um instrumento muito importante para as iniciativas de sua conservação, sendo útil para o seu conhecimento, registro histórico entre outros. Este trabalho desenvolveu uma metodologia para interoperar dados modelados no padrão Taxonomic Data Working Group (TDWG) e tags do OpenStreetMap (OSM) sobre a espécie Sotalia guianensis, conhecida como Boto Cinza. Dentro deste escopo, este trabalho se justifica pelo cenário de ameaça de extinção do Boto Cinza, pela necessidade no desenvolvimento de metodologias para a disponibilização de dados de ocorrência de Boto Cinza em bancos de dados de biodiversidade e pela necessidade de se desenvolver metodologia que permita a interoperabilidade entre bancos de dados de biodiversidade e outros Sistemas de Informação Geográfica (SIG). Este estudo propõe uma metodologia de baixo custo, com a utilização de plataformas livres, para que dados espaciais de Biodiversidade sejam modelados de maneira a evitar problemas taxonômicos, além de serem disponibilizados para conhecimento geral da população. O trabalho se mostra inovador por integrar dados do Global Biodiversity Information Facility (GBIF) com as Tags do OSM, possibilitando o cadastro padronizado e gratuito em uma plataforma livre e de alcance mundial através da criação de uma etiqueta interoperável de equivalência entre o padrão TDWG e as etiquetas do OSM. O resultado deste trabalho é a metodologia para a modelagem e publicação de dados de Boto Cinza no GBIF e OSM de forma interoperável, que foi implementada, testada e cujos resultados são positivos. / The absence of data standardization may result in the loss of highly important information across several areas of knowledge, hindering data integration among different information systems or databases; that is, the data may not be interoperable. The solution to data integration can be called interoperability, which comprises conventions, data format standards (file extensions) and ontologies (common standards) established so that systems can communicate with one another. A biodiversity database is a very important tool for biodiversity conservation initiatives, being useful for knowledge, historical records and other purposes. This work developed a methodology to interoperate data between the Taxonomic Data Working Group (TDWG) standard and OpenStreetMap (OSM) tags for the species Sotalia guianensis, also known as the Guiana dolphin. The work is motivated by the fact that the Guiana dolphin is under threat of extinction, by the need for methodologies to publish the locations where the Guiana dolphin is spotted in biodiversity databases, and by the need for a methodology that enables interoperability between biodiversity databases and Geographic Information Systems (GIS).
This study proposes a low-cost methodology that uses free platforms and focuses on two main goals: avoiding taxonomic problems in the modelling of biodiversity spatial data, and making that data available to the general public. The work is innovative in integrating Global Biodiversity Information Facility (GBIF) data with OSM tags, allowing free, standardized recording of data in an open, worldwide platform through the creation of an interoperable equivalence tag between the TDWG standard and OSM tags. The result of this study is a methodology for modelling and publishing Guiana dolphin data on GBIF and OSM in an interoperable way, which was implemented and tested, with positive results.
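As a toy illustration of the kind of equivalence the methodology establishes, the sketch below maps a Darwin Core-style (TDWG) occurrence record to OSM-style tags with a simple dictionary. The specific OSM keys used here are hypothetical placeholders, not the thesis's published tag set.

```python
# Sketch: translate a Darwin Core (TDWG) occurrence record into OSM-style
# tags. The OSM keys are illustrative stand-ins for the equivalence tag
# proposed in the thesis.
DWC_TO_OSM = {
    "scientificName": "species",
    "vernacularName": "species:pt",
    "eventDate": "survey:date",
    "basisOfRecord": "source:record",
}

occurrence = {
    "scientificName": "Sotalia guianensis",
    "vernacularName": "Boto Cinza",
    "eventDate": "2015-08-12",
    "basisOfRecord": "HumanObservation",
    "decimalLatitude": -24.01,
    "decimalLongitude": -46.40,
}

def to_osm_node(rec):
    """Build an OSM-like node dict (lat, lon, tags) from a Darwin Core record."""
    tags = {osm: rec[dwc] for dwc, osm in DWC_TO_OSM.items() if dwc in rec}
    return {"lat": rec["decimalLatitude"], "lon": rec["decimalLongitude"], "tags": tags}

print(to_osm_node(occurrence))
```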
295

Proposta de uma arquitetura interoperável para um sistema de informação em saúde / Study of an Interoperable Architecture for a Health Information System

Adriano de Jesus Holanda 01 June 2005 (has links)
A interoperabilidade entre sistemas de informação em saúde está se tornando fundamental para o compartilhamento da informação num ambiente de saúde, onde normalmente as diversas especialidades que atuam no atendimento ao paciente armazenam seus dados, em sistemas computacionais distintos e em regiões geograficamente distribuídas. Devido à diversidade existente entre estes sistemas, a integração as vezes torna-se difícil. Os problemas de interoperabilidade podem ser técnicos, onde os componentes de computação dos sistemas não permitem a cooperação devido às diferenças nos protocolos de comunicação ou semânticos, ocasionados devido à diversidade de representação da informação transmitida. Este trabalho propõe uma arquitetura para facilitar ambos os aspectos de interoperabilidade, sendo que a interoperabilidade técnica é proporcionada pela utilização de um middleware e a semântica, pela utilização de sistemas de terminologia adotados internacionalmente. Para a implementação de referência foi utilizada como middleware a arquitetura CORBA e suas especificações para o domínio da saúde, sendo que uma das especificações CORBA para o domínio da saúde foi adotada para padronizar a comunicação com os sistemas de terminologia. Para validar a implementação, foi construído um aplicativo cliente baseado na análise de requisitos de uma UTI neonatal. O cliente foi utilizado também para acessar os componentes implementados e verificar dificuldades e ajustes que podem ser feitos na implementação. / Interoperability among health information systems is becoming fundamental for sharing information in a health care environment, where the various specialties involved in patient care commonly store their data in distinct computational systems and in geographically distributed regions. Because of the diversity among these information systems, integration can be a difficult task. Interoperability problems can be either technical, when the computing components of the systems cannot cooperate due to differences in communication protocols, or semantic, caused by the diversity of representation of the transmitted information. This work proposes an architecture that addresses both interoperability aspects: technical interoperability is achieved by the use of a middleware, and semantic interoperability by the use of internationally adopted terminological systems. The CORBA middleware architecture and its specifications for the health care domain were used for the reference implementation, and one of the CORBA health care specifications was adopted to standardize the communication with the terminological systems. To validate the implementation, a client application was developed based on the requirements analysis of a neonatal ICU. The client application was also used to access the implemented components and to identify difficulties and adjustments that could be made to the implementation.
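The CORBA interfaces themselves are not reproduced here. As a language-neutral sketch of the semantic side of the architecture, the example below resolves site-specific codes against a shared terminology using an in-memory table; the local systems and codes are invented, and the standard entries are shown only for illustration.

```python
# Sketch: semantic interoperability by resolving local codes to a shared
# terminology. In a real deployment the lookup would go through a terminology
# service behind the middleware rather than a local dict.
TERMINOLOGY = {
    # (local system, local code) -> (standard system, standard code, preferred term)
    ("lab-legacy", "GLU"): ("LOINC", "2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    ("icu-legacy", "HR"):  ("SNOMED CT", "364075005", "Heart rate"),
}

def resolve(local_system, local_code):
    """Map a site-specific code to its standard terminology entry, if known."""
    entry = TERMINOLOGY.get((local_system, local_code))
    if entry is None:
        raise KeyError(f"no mapping for {local_system}/{local_code}")
    system, code, term = entry
    return {"system": system, "code": code, "display": term}

print(resolve("icu-legacy", "HR"))
```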
296

Strategies to Mitigate Information Technology Discrepancies in Health Care Organizations

Oluokun, Oluwatosin Tolulope 01 January 2018 (has links)
Medication errors increased 64.4% from 2015 to 2018 in the United States due to the use of computerized physician order entry (CPOE) systems and the inability to exchange information among health care facilities. Discrepancies in health care information exchange (HIE) resulted in significant medical errors because health care information could not be exchanged across technology software. The purpose of this qualitative multiple case study was to explore the strategies health care business managers used to manage computerized physician order entry systems within health care facilities to reduce medication errors and increase profitability. The population of the study was 8 clinical business managers in 2 successful small health care clinics located in the mid-Atlantic region of the United States. Data were collected from semistructured interviews with health care leaders and from documents provided by the health care organizations. Inductive analysis was guided by the Donabedian theory and sociotechnical system theory, and the trustworthiness of interpretations was confirmed through member checking. Three themes emerged: standardizing data formats reduced medication errors and increased profits, adopting user-friendly HIE reduced medication errors and increased profits, and efficient communication reduced medication errors and increased profits. The findings of this study contribute to positive change through improved health care delivery to patients, resulting in healthier communities.
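As a small, hypothetical illustration of the first theme (standardizing data formats), not taken from the clinics studied, the sketch below normalizes medication orders arriving in two different export formats into one common record so they can be reconciled rather than flagged as conflicting; all field names and sample data are invented.

```python
# Sketch: normalize medication orders from two heterogeneous CPOE exports into
# a single schema, so exchanged records compare cleanly across facilities.
def from_system_a(row):
    return {"patient_id": row["pid"], "drug": row["med_name"].strip().lower(),
            "dose_mg": float(row["dose"]), "route": row["route"].upper()}

def from_system_b(row):
    dose_value, dose_unit = row["dose_str"].split()
    dose_mg = float(dose_value) * (1000 if dose_unit == "g" else 1)
    return {"patient_id": row["patient"], "drug": row["drug"].strip().lower(),
            "dose_mg": dose_mg, "route": row["admin_route"].upper()}

orders = [
    from_system_a({"pid": "P001", "med_name": "Amoxicillin ", "dose": "500", "route": "po"}),
    from_system_b({"patient": "P001", "drug": "amoxicillin", "dose_str": "0.5 g", "admin_route": "PO"}),
]
# After normalization the two records describe the same order.
print(orders[0] == orders[1])  # True
```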
297

Proposition d'un modèle pour la représentation de contexte d'exécution de simulations informatiques à des fins de reproductibilité / Proposing a representation model of computational simulations’ execution context for reproducibility purposes

Congo, Faïçal Yannick Palingwendé 19 December 2018 (has links)
La reproductibilité en informatique est un concept incontournable au 21ème siècle. Les évolutions matérielles des calculateurs font que le concept de reproductibilité connaît un intérêt croissant au sein de la communauté scientifique. Pour les experts en simulation, ce concept est indissociable de celui de vérification, de confirmation et de validation, que ce soit pour la crédibilité des résultats de recherches ou pour l’établissement de nouvelles connaissances. La reproductibilité est un domaine très vaste. Dans le secteur computationnel et numérique, nous nous attacherons, d’une part, à la vérification de la provenance et de la consistance des données de recherches. D’autre part, nous nous intéressons à la détermination précise des paramètres des systèmes d’exploitation, des options de compilation et de paramétrage des modèles de simulation permettant l’obtention de résultats fiables et reproductibles sur des architectures modernes de calcul. Pour qu’un programme puisse être reproduit de manière consistante il faut un certain nombre d’information de base. On peut citer entre autres le système d’exploitation, l’environnement de virtualisation, les diverses librairies utilisées ainsi que leurs versions, les ressources matérielles utilisées (CPU, GPU, accélérateurs de calcul multi coeurs tel que le précédent Intel Xeon Phi, Mémoires, ...), le niveau de parallélisme et éventuellement les identifiants des threads, le statut du ou des générateurs pseudo-aléatoires et le matériel auxquels ils accèdent, etc. Dans un contexte de calcul scientifique, même évident, il n’est actuellement pas possible d’avoir de manière cohérente toutes ces informations du fait de l’absence d’un modèle standard commun permettant de définir ce que nous appellerons ici contexte d'exécution. Un programme de simulation s'exécutant sur un ordinateur ou sur un noeud de calcul, que ce soit un noeud de ferme de calcul (cluster), un noeud de grille de calcul ou de supercalculateur, possède un état et un contexte d'exécution qui lui sont propres. Le contexte d'exécution doit être suffisamment complet pour qu’à partir de celui-ci, hypothétiquement,l'exécution d’un programme puisse être faite de telle sorte que l’on puisse converger au mieux vers un contexte d’exécution identique à l’original dans une certaine mesure. Cela, en prenant en compte l'architecture de l’environnement d’exécution ainsi que le mode d'exécution du programme. Nous nous efforçons, dans ce travail, de faciliter l'accès aux méthodes de reproductibilité et de fournir une méthode qui permettra d’atteindre une reproductibilité numérique au sens strict. En effet, de manière plus précise, notre aventure s’articule autour de trois aspects majeurs. Le premier aspect englobe les efforts de collaboration, qui favorisent l'éveil des consciences vis à vis du problème de la reproductibilité, et qui aident à implémenter des méthodes pour améliorer la reproductibilité dans les projets de recherche. Le deuxième aspect se focalise sur la recherche d’un modèle unifiant de contexte d'exécution et un mécanisme de fédération d’outils supportant la reproductibilité derrière une plateforme web pour une accessibilité mondiale. Aussi, nous veillons à l’application de ce deuxième aspect sur des projets de recherche. Finalement, le troisième aspect se focalise sur une approche qui garantit une reproductibilité numérique exacte des résultats de recherche. / Computational reproducibility is an unavoidable concept in the 21st century. 
Computer hardware evolutions have driven a growing interest in the concept of reproducibility within the scientific community. Simulation experts stress that this concept is strongly correlated with those of verification, confirmation and validation, be it for the credibility of research results or for the establishment of new knowledge. Reproducibility is a very large domain. Within the area of numerical and computational science, we aim to ensure the verification of research data provenance and integrity. Furthermore, we are interested in the precise identification of operating system parameters, compilation options and simulation model parameterization, with the goal of obtaining reliable and reproducible results on modern computer architectures. To be able to consistently reproduce a piece of software, some basic information must be collected. Among it we can cite the operating system, the virtualization environment, the software packages used with their versions, the hardware used (CPU, GPU, many-core architectures such as the former Intel Xeon Phi, memory, ...), the level of parallelism and possibly the thread identifiers, the status of pseudo-random number generators, etc. In the context of scientific computing, even if this seems obvious, it is currently not possible to consistently gather all this information, due to the lack of a common model and standard to define what we call here the execution context. A scientific program that runs on a computer or a computing node, be it a cluster node, a grid node or a supercomputer node, possesses a unique state and execution context. The information gathered about the latter must be complete enough that it can hypothetically be used to reconstruct an execution context that is, at best, identical to the original, while of course considering the execution environment and the execution mode of the software. Our effort during this journey can be summarized as seeking an optimal way to both ease scientists' access to reproducibility methods and deliver a method that provides strict scientific numerical reproducibility. Our journey can be laid out around three aspects. The first aspect involves collaborative efforts, either to raise awareness or to implement approaches for better reproducibility of research projects. The second aspect focuses on delivering a unifying execution context model and a mechanism to federate existing reproducibility tools behind a web platform for worldwide access; we also investigate applying the outcome of this aspect to research projects. Finally, the third aspect focuses on completing the previous one with an approach that guarantees exact numerical reproducibility of research results.
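This is not the thesis's execution-context model itself, but a minimal sketch of the kind of information it argues must be captured; the field names below are invented and the snapshot is far from exhaustive (no GPU, accelerator or per-library RNG state).

```python
# Sketch: capture a (partial) execution context as JSON so a run can later be
# compared or re-created. Real coverage would also need detailed hardware,
# parallelism and RNG state for every library involved.
import json
import os
import platform
import random
import sys
from importlib import metadata

def execution_context(seed=12345):
    random.seed(seed)  # record and fix the pseudo-random generator seed
    return {
        "os": platform.platform(),
        "python": sys.version,
        "machine": platform.machine(),
        "cpu_count": os.cpu_count(),
        "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
        "env": {k: os.environ[k] for k in ("OMP_NUM_THREADS", "PYTHONHASHSEED")
                if k in os.environ},
        "rng_seed": seed,
    }

print(json.dumps(execution_context(), indent=2)[:500])  # truncated preview
```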
298

Modelagem paramétrica para análise termoenergética de edificações nas fases iniciais de projeto. / Parametric modeling for thermoenergetic analysis in early design stages of buildings.

Tamanini Junior, Tiago 18 June 2019 (has links)
O trabalho na arquitetura sempre se baseou em processos e raciocínios lógicos, seguindo um fluxo de informações para solucionar questões referentes ao habitat humano. A partir da década de 1960 iniciou-se o desenvolvimento de métodos de incorporação da computação no trabalho do arquiteto, buscando tornar o processo de projeto mais eficiente. Entretanto, a influência do uso de ferramentas computacionais nas fases iniciais de projeto ainda é pouco explorada. A grande maioria dos arquitetos continua utilizando métodos tradicionais para a geração da forma, utilizando o computador simplesmente como suporte, sem aproveitar seu grande potencial para a realização de tarefas repetitivas na geração de alternativas. Os novos sistemas de modelagem paramétrica têm revolucionado essa fase do trabalho, mas ainda obrigam o arquiteto a se adaptar aos métodos e metáforas escolhidos por seus programadores, reduzindo sua liberdade de criação. Somado a esse fator, o surgimento de certificações ambientais e etiquetas de eficiência energética tem envolvido esforços para o desenvolvimento de métodos quantitativos para análise de projetos de edificações. Desse modo, projetar um edifício sustentável é sinônimo de quantificar seu impacto. A simulação computacional permite avaliar a quantidade desses impactos nas edificações, tornando possível analisar esses danos ainda em fase de projeto. Em atenção à necessidade do uso de simuladores nas etapas iniciais de projeto e à integração destes aos programas de modelagem paramétrica, desenvolvedores vêm realizando esforços para suprir essa lacuna. O progresso nesse campo de estudo tem sido realizado em integrar os motores de simulação termoenergética computacional existentes aos programas BIM (Building Information Modeling). Portanto, o objetivo deste trabalho é desenvolver um fluxo de trabalho para geração de um modelo paramétrico a partir de design algorítmico em estudos de viabilidade de edificações para análise termoenergética. O trabalho utiliza o Dynamo do Revit como ferramenta de design algorítmico para gerar a volumetria 3D automatizada para edifícios de escritórios e compara esse modelo à interoperabilidade BIM-CAD-BEM e BIM-BEM. O primeiro processo testa arquivos STL e DWG do sistema CAD exportados ao SketchUp e convertidos no Euclid para simulação computacional, sendo verificados posteriormente no EnergyPlus. O segundo processo exporta o modelo BIM gerado por massa conceitual e por elementos construtivos gerados no Dynamo e Revit diretamente para o Insight 360 e depois os exporta para o EnergyPlus. É realizada então uma análise comparativa aos modelos gerados em CAD e BIM. Os resultados validam para uma interoperabildiade mais confiável na proposta entre os modelos BIM e BEM, pois os arquivos CAD não suportam configurações de energia. A proposta de automatização de design algorítmico para geração de volumes 3D para o BIM e simulação se mostra viável, mas ainda é limitada pela integração entre os softwares. / The work in architecture has always been based on processes and logical thinking, following a flow of information to solve questions concerning human habitat. From the 1960s onwards, the development of methods of incorporating computing into the architect\'s work began, making the design process more efficient. However, the influence of the use of computational tools in the early design stages is still little explored. 
The vast majority of architects continue to use traditional methods for form generation, using the computer only as support, without taking advantage of its great potential for performing the repetitive tasks involved in generating alternatives. The new parametric modeling systems have revolutionized this stage of the work, but still compel the architect to adapt to the methods and metaphors chosen by their programmers, reducing the freedom of design. Added to this, the emergence of environmental certifications and energy efficiency labels has driven efforts to develop quantitative methods for the analysis of building projects. In this way, designing a sustainable building is synonymous with quantifying its impact. Computational simulation allows these impacts on buildings to be evaluated, making it possible to analyze them while still at the design stage. Given the need to use simulators in the early design stages and to integrate them with parametric modeling programs, developers have been making efforts to fill this gap. Progress in this field of study has focused on integrating existing computational thermoenergetic simulation engines with BIM (Building Information Modeling) programs. Therefore, the objective of this work is to develop a workflow for generating a parametric model from algorithmic design in building feasibility studies for thermoenergetic analysis. The work uses Revit's Dynamo as an algorithmic design tool to generate automated 3D massing for office buildings and compares this model across BIM-CAD-BEM and BIM-BEM interoperability. The first process tests STL and DWG files from the CAD system, exported to SketchUp and converted with Euclid for energy simulation, later verified in EnergyPlus. The second process exports the BIM model, generated as a conceptual mass and as building elements created in Dynamo and Revit, directly to Insight 360 and then to EnergyPlus. A comparative analysis of the models generated in CAD and BIM is then carried out. The results point to more reliable interoperability between the BIM and BEM models, because CAD files do not carry energy settings. The proposed algorithmic design automation for generating 3D volumes for BIM and simulation proves feasible, but is still limited by the integration between the programs.
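Outside Dynamo/Revit, the parametric idea can be sketched in plain code; the box-massing generator below is an invented stand-in for the thesis's workflow, not its actual Dynamo graph, and the parameter ranges and 4 m floor height are assumptions for the example.

```python
# Sketch: enumerate simple box massings for an office building from a few
# parameters and report the quantities an early-stage energy model would need.
from itertools import product

def massing(width, depth, floors, floor_height=4.0):
    footprint = width * depth
    return {
        "width_m": width, "depth_m": depth, "floors": floors,
        "gross_area_m2": footprint * floors,
        "envelope_area_m2": 2 * (width + depth) * floors * floor_height,
        "height_m": floors * floor_height,
    }

# Generate alternatives for a feasibility study and rank by envelope area per
# floor area, a rough proxy for thermal exposure at this stage.
options = [massing(w, d, f) for w, d, f in product((20, 30, 40), (15, 20), (5, 10, 15))]
options.sort(key=lambda m: m["envelope_area_m2"] / m["gross_area_m2"])
for m in options[:3]:
    print(m)
```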
299

L'aménagement forestier à la croisée des chemins : éléments de réponse au défi posé par les nouvelles attentes d'une société en mutation / Forest management at a crossroads: elements of response to the challenge posed by the new expectations of a changing society

Farcy, Christine 20 September 2005 (has links)
Forest management, a discipline whose emergence dates back to the end of the 18th century, rests on principles inherited from centuries of forest planning aimed at producing timber. For two decades, as a service-oriented society has developed, the forest has been called upon to satisfy increasingly varied aspirations. As long as the "wake effect" thesis prevailed, the forest manager could make do with the prevailing theoretical principles; its questioning implies that new principles must be developed. This empirical research aims to provide elements of an answer to this problem; it is carried out through cross-analyses from the perspectives of history, law, the social sciences and forest policy. It looks more closely at the case of the Walloon Region, which, by its size, geographical situation and fragmented land ownership, constitutes an interesting laboratory of a multifunctional and urbanized rural area. The study begins with a review of the question and a presentation of emerging concepts, tools and methods. A historical study of the evolution of the relationship between man and nature then makes it possible to grasp the scope of the ongoing transformation and to understand its philosophical, economic and social foundations. The study then examines the revision of the spatio-temporal framework for nature and forest management by the Walloon public service, taking into account, among other things, the constraints imposed by the establishment of the European network of protected sites Natura 2000. The study continues with the description of a technological solution developed to meet the requirements of flexibility and interoperability that the implementation of this framework demands. After a general contextualization, the conditions for integrating nature and forest management, a discipline more than ever at the interface of the natural sciences and the humanities, into the conceptual framework of sustainable development are discussed.
