181

Towards distributed control of manufacturing systems: a component-based approach for taking the communication architecture into account in modeling

Masri, Aladdin 10 July 2009 (has links)
Manufacturing systems are a class of discrete event systems. Their size requires distributing the control software over an architecture of several networked industrial computers. In this context, it becomes essential to be able to evaluate the impact of a specific network architecture on manufacturing system services, in terms of both performance and quality; the performance of the underlying network can notably hurt the productivity of the system. In the traditional methodologies proposed in the literature, this aspect is not taken into account at the design stage, yet modeling such systems is important for verifying certain properties at that stage. In this thesis, we propose a component-based modeling approach using high-level Petri nets to model some network protocols, in order to evaluate manufacturing systems as distributed systems. The choice of Petri nets is justified by their expressive power for modeling distributed and concurrent systems. The component-based approach reduces modeling complexity and encourages genericity, modularity and the reusability of ready-to-use components. This makes it easy to build new models and reduces system development costs. Moreover, it helps to better manage services and protocols, and to easily change or modify an element of the system. Finally, our modeling enables these systems to be evaluated by means of centralized simulations.
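As a concrete illustration of the formalism this thesis builds on, the sketch below is a minimal place/transition Petri net simulator for a two-machine production line. It is an assumption-laden toy (plain Petri nets rather than the thesis's high-level component models; all names are invented), meant only to show how places, transitions and tokens express concurrency.

```python
# Minimal place/transition Petri net -- an illustrative toy, not the
# thesis's high-level component models. Python 3.8+.
import random

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)          # place -> token count
        self.transitions = transitions        # name -> (inputs, outputs)

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) >= n for p, n in ins.items())]

    def fire(self, t):
        ins, outs = self.transitions[t]
        for p, n in ins.items():              # consume input tokens
            self.marking[p] -= n
        for p, n in outs.items():             # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + n

# Two-machine line: queue -> machine 1 -> buffer -> machine 2 -> done
net = PetriNet(
    marking={"queue": 3, "m1_free": 1, "m2_free": 1},
    transitions={
        "start_m1": ({"queue": 1, "m1_free": 1},  {"m1_busy": 1}),
        "end_m1":   ({"m1_busy": 1},              {"buffer": 1, "m1_free": 1}),
        "start_m2": ({"buffer": 1, "m2_free": 1}, {"m2_busy": 1}),
        "end_m2":   ({"m2_busy": 1},              {"done": 1, "m2_free": 1}),
    })

while (choices := net.enabled()):
    net.fire(random.choice(choices))          # one random interleaving
print(net.marking)                            # all three parts reach "done"
```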
182

Partitioning of actinides and fission products from high-level radioactive liquid waste (Partição de actinídeos e de produtos de fissão de rejeito líquido de alta atividade)

YAMAURA, MITIKO 09 October 2014 (has links)
Doctoral thesis (Tese de Doutoramento), Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP)
183

Hardware implementation of homomorphic encryption

Mkhinini, Asma 14 December 2017 (has links)
One of the most notable advances in cryptography in recent years is certainly the introduction of the first fully homomorphic encryption scheme by Craig Gentry. This type of cryptosystem makes it possible to perform arbitrary computations on encrypted data without decrypting it, which helps meet security and data-protection requirements, for example in the fast-developing contexts of cloud computing and the Internet of Things. The algorithms involved are currently very costly in computation time and are generally implemented in software. This thesis deals with the hardware acceleration of homomorphic encryption schemes. A study of the primitives used by these schemes, and of the possibility of implementing them in hardware, is presented. A new approach for implementing the two most expensive functions is then proposed. Our approach exploits the capabilities offered by high-level synthesis; it has the particularity of being very flexible and generic, and makes it possible to process arbitrarily large operands. This allows it to target a wide range of applications and to apply optimizations such as batching. The performance of our co-design architecture was evaluated on one of the most recent and most efficient homomorphic cryptosystems; the approach can be adapted to other homomorphic schemes or, more generally, to lattice-based cryptography.
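To make the homomorphic property concrete, here is a deliberately insecure miniature of the Paillier cryptosystem (additively homomorphic only, tiny fixed primes; a generic textbook scheme, not the recent cryptosystem evaluated in the thesis). Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, i.e. a computation performed without ever decrypting.

```python
# Toy Paillier cryptosystem -- insecure by design (tiny primes), shown
# only to illustrate computing on encrypted data. Python 3.9+.
import math, random

def keygen(p=293, q=433):                 # small primes, illustration only
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda(n)
    g = n + 1                             # standard choice of generator
    mu = pow(lam, -1, n)                  # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n        # L(x) = (x - 1) / n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
c_sum = (c1 * c2) % (pk[0] ** 2)          # multiply ciphertexts ...
assert decrypt(pk, sk, c_sum) == 42       # ... to add the plaintexts
```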
184

High level waste system impacts from acid dissolution of sludge

Ketusky, Edward Thomas 31 March 2008 (has links)
Currently at the Savannah River Site (SRS), there are fifteen single-shell, 3.6-million-liter tanks containing High Level Waste. To close the tanks, the sludge must be removed. Mechanical methods have had limited success, and oxalic acid cleaning is now being considered as a new technology. This research uses sample results and chemical equilibrium software to develop a preferred flowsheet and evaluate the acceptability of the system impacts. Based on modeling and testing, between 246,000 and 511,000 l of 8 wt% oxalic acid were required to dissolve a 9,000 liter Purex sludge heel; for SRS H-Area modified sludge, 322,000 to 511,000 l were required. To restore the pH of the treatment tank slurries, approximately 140,000 to 190,000 l of 50 wt% NaOH or 260,000 to 340,000 l of supernate were required. When developing the flowsheet, there were two primary goals to minimize downstream impacts: first, to ensure that the resultant oxalate solids were transferred to DWPF without being washed; second, to transfer the remaining soluble sodium oxalates to the evaporator drop tank, so that they neither pass through nor precipitate in the evaporator pot. Adiabatic modeling determined the maximum possible temperature to be 73.5°C and the maximum expected temperature to be 64.6°C. At one atmosphere and 73.5°C, a maximum of 770 l of water vapor was generated, while at 64.6°C a maximum of 254 l of carbon dioxide was generated. Although tank wall corrosion was not a concern, because of the large cooling coil surface area the corrosion-induced hydrogen generation rate was calculated to be as high as 10,250 l/hr. Since the minimum tank purge exhaust was assumed to be 5,600 l/hr, the corrosion-induced hydrogen generation rate was identified as a potential concern. Excluding corrosion-induced hydrogen, trending the behavior of the spiked constituents of concern and considering the conditions necessary for ignition, energetic compounds were shown not to represent an increased risk. Based on modeling, about 56,800 l of resultant oxalates could be added to a washed sludge batch with minimal impact on the number of additional glass canisters produced. For each sludge batch with 1 to 3 heel dissolutions, about 60,000 kg of sodium oxalate entered the evaporator system, most of it collecting in the drop tank, where it will remain until eventual salt heel removal. For each 6,000 kg of sodium oxalate in the drop tank, about 189,000 l of Saltstone feed would eventually be produced. Overall, except for corrosion-induced hydrogen, there were no significant process impacts that would forbid the use of oxalic acid in cleaning High Level Waste tanks. / Mathematical Sciences / M. Tech. (Chemical Engineering)
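A back-of-envelope check shows why the hydrogen rate is flagged as a concern. The sketch below is a simplified well-mixed dilution estimate using only the figures quoted in the abstract plus the commonly cited ~4 vol% lower flammability limit of hydrogen in air; a real tank ventilation analysis is far more involved.

```python
# Simplified well-mixed estimate; figures are those quoted above.
h2_rate = 10_250          # l/hr, peak corrosion-induced H2 generation
purge_rate = 5_600        # l/hr, assumed minimum tank purge exhaust
lfl = 0.04                # ~4 vol%, lower flammability limit of H2 in air

h2_fraction = h2_rate / (h2_rate + purge_rate)
print(f"steady-state H2 fraction ~ {h2_fraction:.0%} vs LFL {lfl:.0%}")
# -> ~65% vs 4%: the minimum purge alone cannot keep the vapor space
#    below the flammability limit at the peak generation rate
```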
185

Measurement of the differential cross section of Z boson production in association with jets at the LHC

Wang, Qun 13 June 2018 (has links)
This thesis presents the measurement of the differential cross section of Z boson production in association with jets (Z+jets) in proton-proton collisions at a center-of-mass energy of 13 TeV. The data were recorded with the CMS detector at the LHC during the year 2015 and correspond to an integrated luminosity of 2.19 fb⁻¹. A study of the CMS muon High Level Trigger (HLT) with the data collected in 2016 is also presented. The goal of the analysis is to perform a first measurement at 13 TeV of the Z+jets cross sections as a function of the jet multiplicity, its dependence on the transverse momentum of the Z boson, the jet kinematic variables (transverse momentum and rapidity), the scalar sum of the jet momenta, and the balance in transverse momentum between the reconstructed jet recoil and the Z boson. The results are corrected for detector effects and unfolded to particle level. The measurements are compared to four predictions using different approximations, at leading-order (LO), next-to-leading-order (NLO) and next-to-next-to-leading-order (NNLO) accuracy. The first two calculations use MadGraph5_aMC@NLO interfaced with PYTHIA 8 for parton showering and hadronisation; one includes matrix elements (MEs) at LO, the other includes one-loop corrections (NLO). The third is a fixed-order calculation with NNLO accuracy for Z+1 jet using the N-jettiness subtraction scheme (Njetti). The fourth uses the GENEVA program, with an NNLO calculation combined with higher-order resummation. A series of studies on the HLT double muon trigger is also included: since 2015 the LHC has reached higher luminosity, so more events are produced inside the CMS detector per second, which poses more challenges for the trigger system. The work presented includes the monitoring, validation and calibration of the muon trigger paths since 2016. / Doctorat en Sciences
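"Unfolded to particle level" refers to statistically inverting the detector response. The sketch below shows one standard recipe, iterative (D'Agostini) Bayesian unfolding, on a toy four-bin spectrum; the response matrix and counts are invented for illustration and are not this analysis's actual inputs.

```python
# Iterative (D'Agostini) unfolding on a toy spectrum -- an illustrative
# sketch of detector-effect correction, not the analysis's actual setup.
import numpy as np

def unfold(response, measured, n_iter=4):
    """response[i, j] = P(reco bin i | true bin j); returns unfolded counts."""
    eff = response.sum(axis=0)                    # per-true-bin efficiency
    t = np.full(response.shape[1], measured.sum() / response.shape[1])
    for _ in range(n_iter):
        folded = response @ t                     # expected reco spectrum
        t = t * (response.T @ (measured / folded)) / eff
    return t

rng = np.random.default_rng(0)
true = np.array([1000.0, 600.0, 300.0, 120.0])    # steeply falling spectrum
# 80% of events stay in their bin, 10% migrate to each neighbour
R = np.array([[0.8, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.8]])
measured = rng.poisson(R @ true).astype(float)    # smeared + fluctuated
print(unfold(R, measured).round(1))               # close to `true`
```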
186

High-level synthesis for fast generation of hardware accelerators under resource constraints

Prost-Boucle, Adrien 08 January 2014 (has links)
In the field of high-performance computing, FPGA circuits are very attractive for their performance and low power consumption. However, their presence remains marginal, mainly because of the limitations of current development tools, which force users to master numerous technical concepts and to manually steer the synthesis process in order to obtain solutions that are both fast and compliant with the hardware constraints of the targeted platforms. A novel generation methodology based on high-level synthesis is proposed to push these limits back. The design-space exploration consists of the iterative application of transformations to an initial circuit, which progressively increases its speed and its resource consumption; the rapidity of this process, along with its convergence under resource constraints, is thus guaranteed. The exploration is also guided towards the most pertinent solutions by detecting the sections of the applications most critical for the targeted execution context; this information can be refined with an execution scenario specified by the user. A demonstration tool for this methodology, AUGH, has been built. Experiments conducted on several applications well known in the high-level synthesis field, of very different sizes, confirm the pertinence of the proposed methodology for the fast and automatic generation of complex hardware accelerators under strict resource constraints. The proposed methodology is very close to the compilation flow for microprocessors, which enables it to be used even by users who are not experts in digital circuit design. This work therefore constitutes a significant step towards a broader adoption of FPGAs as general-purpose hardware accelerators, making computing machines both faster and more energy-efficient.
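The exploration loop described above can be pictured as a greedy search under an area budget. The following sketch is only a schematic reconstruction (invented transformation names and cost/gain numbers; not AUGH's actual algorithm or API): each step applies the feasible transformation with the best latency gain per unit of extra resource, so the process necessarily converges once nothing fits the budget.

```python
# Greedy design-space exploration under a resource budget -- a schematic
# sketch of the iterative-transformation idea, not AUGH's algorithm.
def explore(circuit, candidates, area_budget):
    circuit = dict(circuit)
    applied = []
    while True:
        feasible = [(name, d_area, factor)
                    for name, d_area, factor in candidates
                    if name not in applied
                    and circuit["area"] + d_area <= area_budget]
        if not feasible:
            return circuit, applied           # converged under the budget
        # pick the best cycles-saved per unit of extra area
        name, d_area, factor = max(
            feasible, key=lambda c: circuit["latency"] * (1 - c[2]) / c[1])
        circuit["area"] += d_area             # pay the resource cost
        circuit["latency"] *= factor          # collect the speedup
        applied.append(name)

initial = {"area": 100, "latency": 1_000}     # abstract units
candidates = [("unroll_loop_x4", 300, 0.30),  # big gain, big area
              ("pipeline_loop",   80, 0.50),
              ("inline_fn",       40, 0.85)]
print(explore(initial, candidates, area_budget=400))
# -> pipelining then inlining are applied; x4 unrolling never fits the budget
```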
187

Efficient Static Analyses for Concurrent Programs

Mukherjee, Suvam January 2017 (has links) (PDF)
Concurrent programs are pervasive owing to the increasing adoption of multi-core systems across the entire computing spectrum. However, the large set of possible program behaviors makes it difficult to write correct and efficient concurrent programs, and it also makes the formal and automated analysis of such programs a hard problem. Thus, concurrent programs provide fertile ground for a large class of insidious defects. Static analysis techniques infer semantic properties of programs without executing them. They are attractive because they are sound (they can guarantee the absence of bugs), can execute with a fair degree of automation, and do not depend on test cases. However, current static analysis techniques for concurrent programs are either precise and prohibitively slow, or fast but imprecise. In this thesis, we partially address this problem by designing efficient static analyses for concurrent programs. In the first part of the thesis, we provide a framework for designing data flow analyses for race-free multi-threaded programs and proving them correct. The resulting analyses are in the same spirit as the "sync-CFG" analysis originally proposed in De et al., 2011. Using novel thread-local semantics as starting points, we devise abstract analyses which treat a concurrent program as if it were sequential. We instantiate these abstractions to devise efficient relational analyses for race-free programs, which we have implemented in a prototype tool called RATCOP. On the benchmarks, RATCOP was fairly precise and fast; in a comparative study with a recent concurrent static analyzer, it was up to 5 orders of magnitude faster. In the second part of the thesis, we propose a technique for detecting all high-level data races in a system library, such as the kernel API of a real-time operating system (RTOS) that relies on flag-based scheduling and synchronization. Such races are good indicators of atomicity violations. Using our technique, a user is able to soundly disregard 99.8% of an estimated 41,000 potential high-level races. Our tool detected 38 high-level data races in FreeRTOS (a popular OS in the embedded systems domain), of which 16 were harmful.
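The sort of defect these analyses target is easy to reproduce. The snippet below (a generic toy, unrelated to RATCOP's benchmarks) exhibits a classic lost-update race: the read and the write of a shared counter are separate steps, so interleaved threads can overwrite each other's increments.

```python
# A classic lost-update data race: read-modify-write on shared state is
# not atomic, so concurrent increments can be lost. Toy example only.
import sys
import threading

sys.setswitchinterval(1e-6)   # switch threads very often to expose the race
counter = 0

def work():
    global counter
    for _ in range(100_000):
        tmp = counter         # read ...
        counter = tmp + 1     # ... write: another thread may run in between

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # frequently far less than the expected 400000
```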
188

A support architecture for modeling training simulations based on the HLA (High Level Architecture) (Uma arquitetura de suporte à modelagem de simulações de treinamento baseada na arquitetura HLA)

Rocha, Rafaela Vilela da 30 July 2009 (has links)
The use of computer-generated virtual environments allows training simulations to be run in a safe setting in which human behaviour and responses to dangerous situations can be investigated. However, building generic virtual environment simulations is a challenge still to be researched: existing virtual environment simulations are focused on specific applications and have their supporting architecture tightly coupled to the application, making extensions or modifications difficult and dependent on computing specialists. It is therefore important to create environments that ease the building of these simulations. This work specifies an architecture to support the development, execution, management, control and analysis of collaborative virtual environment simulations, in conformance with the High Level Architecture, a reference architecture for simulation interoperability and reuse. In this work, simulation construction is driven by non-linear interactive storytelling and instantiated from ontologies; integrating story and ontologies facilitates the creation of different training simulations without the need for programmers. The collaborative virtual environment simulations to be created initially target the emergency preparedness and response application domain, but new application domains can be supported by integrating new ontologies. The ontologies were built from fire-protection norms and simulated-exercise protocols in force in São Paulo State, with expert advice from the São Carlos fire brigade. A simulation-creation environment is being developed as part of this work, along with the whole process of developing and executing a simulation using the proposed architecture; a use case (fire and explosion occurrences) was devised to instantiate a simulation. The main results of this work are a novel architecture for building complex training simulations and seven ontologies in the emergency management domain, which can serve as powerful tools for creating training simulations for emergency preparedness and response teams. / Universidade Federal de São Carlos
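At its core, an HLA federation is a publish/subscribe exchange of object attributes among federates through a Run-Time Infrastructure (RTI). The toy below mimics that pattern with invented names; it is not the HLA API and omits time management, ownership transfer and the federation object model.

```python
# Toy publish/subscribe broker in the spirit of an HLA RTI -- invented
# names, illustration only; not the real HLA/RTI interface.
class TinyRTI:
    def __init__(self):
        self.subscribers = {}                     # attribute -> [federates]

    def subscribe(self, federate, attribute):
        self.subscribers.setdefault(attribute, []).append(federate)

    def update_attribute(self, sender, attribute, value):
        for fed in self.subscribers.get(attribute, []):
            if fed is not sender:
                fed.reflect(attribute, value)     # deliver to interested peers

class Federate:
    def __init__(self, name, rti):
        self.name, self.rti = name, rti

    def reflect(self, attribute, value):
        print(f"{self.name} sees {attribute} = {value}")

rti = TinyRTI()
trainee = Federate("trainee_view", rti)
instructor = Federate("instructor_console", rti)
fire_sim = Federate("fire_model", rti)
for fed in (trainee, instructor):
    rti.subscribe(fed, "fire.intensity")
rti.update_attribute(fire_sim, "fire.intensity", 0.8)   # both views notified
```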
189

SoC implementation of Bayesian networks for health management and decision making in autonomous vehicle missions

Zermani, Sara 21 November 2017 (has links)
Autonomous vehicles, such as drones, are used in different application areas to perform simple or complex missions. On the one hand, they generally operate in uncertain environmental conditions that can lead to disastrous consequences for humans and the environment; it is therefore necessary to continuously monitor the health of the system in order to detect and locate failures, and to make decisions in real time. These decisions must maximize the ability to meet the mission objectives while maintaining the safety requirements. On the other hand, such vehicles must perform tasks with large computational demands under performance constraints, so dedicated hardware accelerators are needed to offload the processor and meet the required computational speed-up. This is what this dual-objective thesis sets out to demonstrate. The first objective is to define a model for health management and decision making. To this end, we use Bayesian networks, which are efficient probabilistic graphical models for diagnosis and decision making under uncertainty. We propose a generic model based on an FMEA (Failure Modes and Effects Analysis); this analysis takes into account the different observations on the monitoring sensors and the contexts in which errors appear. The second objective is the design and realization of hardware accelerators for Bayesian networks in general, and for our health-management and decision models in particular. As no tool existed for the embedded implementation of Bayesian network computation, we propose a complete software workbench, going from a graphical or textual Bayesian network to the generation of a bitstream ready for software or hardware implementation on an FPGA. Finally, we test and validate our implementations on the Xilinx ZedBoard, which incorporates an ARM Cortex-A9 processor and an FPGA.
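The diagnosis step of such a health-management model boils down to computing the posterior probability of a fault given monitor observations. The sketch below does this by direct enumeration on a single-fault network with two monitors; all probabilities and names are invented for illustration, and a real FMEA-derived network would have many more nodes and context variables.

```python
# Posterior fault probability in a tiny Bayesian network -- hypothetical
# numbers, shown only to illustrate diagnosis under uncertainty.
P_FAULT = 0.01                                   # prior fault probability

# P(monitor fires | fault state): conditional probability tables
CPT = {
    "vibration_high": {True: 0.90, False: 0.05},
    "current_spike":  {True: 0.80, False: 0.10},
}

def posterior_fault(observations):
    """P(fault | observations), monitors independent given the fault state."""
    joint = {}
    for fault in (True, False):
        p = P_FAULT if fault else 1 - P_FAULT
        for monitor, fired in observations.items():
            like = CPT[monitor][fault]
            p *= like if fired else 1 - like     # multiply in each likelihood
        joint[fault] = p
    return joint[True] / (joint[True] + joint[False])

print(posterior_fault({"vibration_high": True, "current_spike": True}))
# ~0.59: two concordant monitors raise a 1% prior to a credible fault
```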
