921

Evaluating immersive approaches to multidimensional information visualization

Wagner Filho, Jorge Alberto January 2018 (has links)
The use of novel display and interaction resources to support immersive data visualization and improve analytical reasoning is a research trend in Information Visualization. In this work, we evaluate the use of HMD-based environments for the exploration of multidimensional data, represented in 3D scatterplots as a result of dimensionality reduction. We present a new modelling of the evaluation problem in this context, accounting for the two factors whose interplay determines the impact on overall task performance: the difference in errors introduced by performing dimensionality reduction to 2D or 3D, and the difference in human perception errors under different visualization conditions. This two-step framework offers a simple approach for estimating the benefits of using an immersive 3D setup for a particular dataset. As a use case, the dimensionality reduction errors for a series of roll-call datasets from the Brazilian Chamber of Deputies, when using two or three dimensions, are evaluated through an empirical task-based approach. The perception error and overall task performance are assessed through controlled comparative user studies. Comparing desktop-based (2D and 3D) and HMD-based (3D) visualizations, initial results indicated that perception errors were low and similar across all approaches, resulting in overall performance benefits for both 3D techniques. The immersive condition, however, was found to require less effort to find information and less navigation, besides providing much higher subjective perceptions of accuracy and engagement. Nonetheless, free-flight navigation resulted in inefficient times and frequent user discomfort. In a second stage, we implemented and evaluated an alternative data exploration approach in which the user remains seated and viewpoint changes are possible only through physical movements. All manipulation is done directly by natural mid-air gestures, with the data rendered within arm's reach. The virtual reproduction of an exact copy of the analyst's desk aims to increase immersion and enable tangible interaction with controls and associated two-dimensional information. A second user study compared this scenario to a desktop-based equivalent, exploring a set of 9 representative perception and interaction tasks based on previous literature. We demonstrate that our prototype, named VirtualDesk, yields excellent results regarding user comfort and immersion, and performs equally well or better in all analytical tasks, while adding minimal or no time overhead and amplifying data exploration.
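A minimal sketch of the framework's first step, assuming PCA as the reduction technique and lost variance as the error metric; the thesis uses a task-based empirical measure, so the metric, dataset and function names here are illustrative assumptions:

```python
import numpy as np

def pca_reduction_error(X, k):
    """Fraction of variance lost when projecting X onto its top-k
    principal components -- one simple stand-in for the 'dimensionality
    reduction error' of the framework's first step."""
    Xc = X - X.mean(axis=0)
    # Singular values give the variance captured by each component.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return 1.0 - var[:k].sum() / var.sum()

# Toy multidimensional dataset standing in for roll-call data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))

err2d, err3d = pca_reduction_error(X, 2), pca_reduction_error(X, 3)
# First factor of the two-step framework: how much is gained by a third axis.
print(f"DR error 2D: {err2d:.3f}  3D: {err3d:.3f}  gain: {err2d - err3d:.3f}")
```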
922

Stereoscopy and the Cinematographer

Gunaratna, Vidu January 2013 (has links)
The thesis is a study of stereoscopy from a cinematographer's point of view. It surveys stereoscopy across the entire spectrum of today's use - from theme park rides, through commercial narrative films, to documentaries of cultural value. It briefly reviews early theories of binocular vision and the early stages of the discovery of stereoscopy and stereophotography. Next it briefly covers the history of stereoscopy in cinema from its beginnings, through its renaissance in the 1950s, up to today. The thesis then describes the principles of depth perception, with key terms such as disparity, parallax, convergence and accommodation, and notes the importance of non-stereoscopic depth cues to the process of stereopsis. The next part concentrates on the stereoscopic window, which is analogous to the frame and its borders in conventional cinema. It examines screen size, the distance limit of the far plane and stereoscopic window violations. It goes on to break down the stereoscopic image and the factors that influence the perceived size of objects and figures in its space, and analyzes the perception of the whole image depending on the viewer's position relative to the screen. Attention is given to a special case of stereoscopy - orthostereoscopy. Native parallax and terms such as stereo comfort zone, depth budget and depth bracket are defined to build an understanding of the importance of the depth chart. The aesthetics of depth is demonstrated on a few recent stereoscopic films. A general overview of stereoscopic equipment follows - rigs, 3D monitors and software. The thesis then discusses the cinematographer's tools, divided into two groups: non-stereoscopic and stereoscopic. The non-stereoscopic tools are linear and colour perspective, atmospheric blur, focus and camera movement, while the stereoscopic tools are interaxial distance and convergence. Hypostereoscopy and hyperstereoscopy are treated as tools of the trade as well. The next chapter is dedicated to the specifics and limitations of stereoscopic imaging - the necessity of perfectly paired cameras and lenses, the specifics of framing for stereoscopy, issues related to the mirror in the rigs, lighting, and the use of optical camera filters. The next part briefly covers related topics that also have an impact on the image, namely set design, post-production (with emphasis on colour grading) and various stereoscopic projection systems, including autostereoscopy. The last part of the thesis is an interview with cinematographer Andrew Lesnie, ACS, ASC, who shares his experience shooting The Hobbit. The conclusion reflects on whether stereoscopy is merely a tool and attempts to estimate the future of stereoscopy in film and television.
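To make the parallax vocabulary concrete, here is a small sketch of the standard viewing geometry behind terms such as native parallax and comfort zone: perceived depth follows from similar triangles between the two eyes and the fused screen points. The eye separation and viewing distance are assumed example values, not figures from the thesis:

```python
import numpy as np

EYE_SEP_MM = 65.0  # average adult interocular distance (assumption)

def perceived_depth(parallax_mm, viewing_dist_mm):
    """Perceived distance of a fused point, from similar triangles:
    positive parallax -> behind the screen, negative -> in front,
    parallax equal to the eye separation -> infinity (divergence limit)."""
    return viewing_dist_mm * EYE_SEP_MM / (EYE_SEP_MM - parallax_mm)

# On a large cinema screen even a small percentage of image width can
# exceed 65 mm of physical parallax, forcing divergence -- the reason
# the 'native parallax' of the screen bounds the usable depth budget.
for p in (10.0, 30.0, 65.0 * 0.99):
    print(f"parallax {p:5.1f} mm -> perceived at {perceived_depth(p, 15000):9.0f} mm")
```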
923

Tectonic Framework of the Barra de São João Graben, Campos Basin, Brazil

Leandro Barros Adriano 21 May 2014 (has links)
The Barra de São João Graben, in the shallow waters of the Campos Basin, is part of the Cenozoic rift system that runs parallel to the Brazilian continental margin. This system was formed in an event that reactivated the main Precambrian shear zones of southeastern Brazil in the Paleocene. I propose a new structural framework for the Barra de São João Graben based on gravity data interpretation. Airborne magnetic data, an available 2D seismic line, and a density log from a nearby well were used as constraints on the interpretation. To estimate the top-of-basement structure, we first separated the gravity effects of deep sources from those of the shallow basement (residual anomaly). We then performed a 2D forward modelling exercise based on the seismic interpretation, in which the basement topography and the sediment densities were kept fixed, to estimate the densities of the basement rocks. Next, we performed a 3D structural inversion of the residual gravity anomaly to recover the depth to the top of the basement. This interpretation workflow allowed the identification of a complex structural framework with three well-defined fault systems: a NE-SW normal fault system and NW-SE and E-W transfer fault systems. These trends divide the graben into several internal highs and lows. The airborne magnetic data corroborate this interpretation. Ultra-dense and strongly magnetized elongated bodies in the basement were interpreted as ophiolites, probably emplaced by obduction during the closure of an ocean in the Proterozoic.
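A minimal sketch of the regional/residual separation step, assuming a least-squares low-order polynomial trend as the model of the deep-source (regional) field; the thesis does not specify its separation technique here, so this is one common approach with toy values:

```python
import numpy as np

def residual_anomaly(x, y, g, degree=2):
    """Separate a gravity field into a regional part (deep sources,
    smooth low-order trend) and a residual part (shallow basement
    relief) by least-squares fitting of a 2D polynomial surface."""
    terms = [x**i * y**j for i in range(degree + 1)
                         for j in range(degree + 1 - i)]
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    regional = A @ coeffs
    return g - regional

# Synthetic stations: smooth deep trend + short-wavelength basement signal.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 50, 300), rng.uniform(0, 50, 300)
g = 0.02 * x**2 - 0.5 * y + 3.0 * np.sin(x)   # mGal, illustrative only
print(residual_anomaly(x, y, g)[:5])          # residual at first stations
```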
924

Kinematic and stochastic fault modeling from sparse data for structural uncertainty analysis

Godefroy, Gabriel 29 March 2018 (has links)
The sparsity and incompleteness of geological data sets lead geologists to rely on prior knowledge while modeling the Earth. Uncertainties in interpretation are an inherent part of geology. In this thesis, I focus on assessing uncertainties related to the modeling of faulted structures from sparse data. Structural uncertainties arise partly from the association of fault evidence explaining the same structure. This occurs especially while interpreting sparse data such as 2D seismic lines or limited outcrop observations. I propose a mathematical formalism to cast the problem of associating fault evidence into graph theory. Each possible scenario is represented by a graph. A combinatorial analysis shows that the number of scenarios is related to the Bell number and increases in a non-polynomial way. I formulate prior geological knowledge as numerical rules (such as the orientations and dimensions of the structures) to reduce the number of scenarios and to make structural interpretation more objective and reproducible. I present a stochastic numerical method to generate several interpretation scenarios. A sensitivity analysis, using synthetic data extracted from a reference model, shows that the choice of interpretation rules strongly impacts the simulated associations. In a second contribution, I integrate a quantitative description of fault-related displacement while interpreting and building 3D subsurface models. I present a parametric fault operator that displaces the structures closely surrounding a fault, which need not be planar, in accordance with a theoretical isolated normal fault model. The displacement field is described using the maximum displacement at the fault centre (Dmax), two profiles describing the throw on the fault surface (TX and TZ), and a third profile (TY) representing the attenuation of displacement in the direction normal to the fault surface. These parameters are determined by numerical optimization from the available structural observations. This kinematic fault operator allows structural interpretations to be validated and ensures the kinematic consistency of structural models built from sparse data and/or in polyphase deformation contexts. These two modeling methodologies are tested and discussed on two data sets. The first contains nine 2D seismic lines imaging a faulted and fractured crystalline basement on the Ifni Margin, offshore Morocco. The interpretation of these lines is guided by knowledge derived from a nearby onshore field analog. However, the association of observations belonging to the same fault, and the chronology of the structures, remain highly uncertain. The second data set is located in the Santos Basin, offshore Brazil, where a high-resolution seismic cube images normal faults cutting a well-stratified sedimentary sequence. I build a reference structural model from this high-quality seismic data; the kinematic and stochastic methodologies developed in this thesis can then be tested and discussed on synthetic sparse data extracted from this known reference model.
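A minimal sketch of such a kinematic fault operator, assuming an elliptical on-fault displacement profile for TX/TZ (zero at the fault tip line, Dmax at the centre) and a linear TY attenuation away from the fault surface; the thesis fits the actual profile shapes by optimization, so the shapes and dimensions below are illustrative assumptions:

```python
import numpy as np

def fault_displacement(u, v, d, Dmax=100.0, R=1000.0, thick=500.0):
    """Displacement magnitude at a point with fault-plane coordinates
    (u, v) and distance d from the fault surface.  The on-fault part
    plays the role of the TX/TZ profiles; the off-fault part plays TY."""
    rho = np.sqrt((u / R) ** 2 + (v / R) ** 2)             # normalized tip distance
    on_fault = Dmax * np.sqrt(np.clip(1.0 - rho**2, 0.0, None))   # TX, TZ
    attenuation = np.clip(1.0 - np.abs(d) / thick, 0.0, 1.0)      # TY
    return on_fault * attenuation

# Displacement at the fault centre, halfway to the tip, and beyond it.
print(fault_displacement(np.array([0.0, 500.0, 1200.0]),
                         np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 100.0, 0.0])))
```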
925

3D modeling techniques applied to paleobathymetric data from the Santos and Campos basins and to the deformational simulation of geological objects

Lavorante, Luca Pallozzi [UNESP] 28 October 2005 (has links) (PDF)
Funding: Agência Nacional do Petróleo, Gás Natural e Biocombustíveis (ANP) / Research in the geosciences increasingly uses large volumes of heterogeneous data, whose integrated interpretation is complex because it involves different parameters as well as temporal and spatial relationships. Computer graphics and scientific visualization techniques are gaining importance as they allow geological data to be represented and manipulated as they actually occur in 3D space. The purpose of this work was to use computational tools to geometrically model and visualize geological data. Using the GOCAD program, mid-Early Cretaceous paleobathymetric surfaces were constructed for the Santos and Campos basins from published data. Their integration with lithological and structural data in a single 3D visualization environment increased the interpretative potential of data originally represented on two-dimensional maps. To give broader context to the paleogeographic evolution of these basins during the opening of the South Atlantic, and to draw analogies with present-day depositional environments, bathymetric surfaces were also modelled for the South Atlantic Ocean, from the mid-Early Cretaceous to the present, and for the present Red Sea. Using open-source 3D modelling and visualization tools (VTK), a computational program (Tensor3D) was developed to simulate the deformation of geological objects - rocks, tectonic structures, salt domes, and whole basins - by modifying the simple and pure shear components contained in strain tensors...
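A minimal sketch of the kind of deformation Tensor3D applies, reduced to 2D for illustration: a homogeneous transform built from a simple shear component (off-diagonal) and an area-preserving pure shear component (diagonal). The parameterization is an assumption for illustration, not the program's actual interface:

```python
import numpy as np

def deform(points, gamma=0.0, k=1.0):
    """Apply a homogeneous 2D deformation combining simple shear
    (off-diagonal component gamma) and pure shear (diagonal stretch k
    with compensating 1/k, so area is preserved)."""
    simple = np.array([[1.0, gamma],
                       [0.0, 1.0]])
    pure = np.array([[k, 0.0],
                     [0.0, 1.0 / k]])
    return points @ (simple @ pure).T

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
print(deform(square, gamma=0.5))   # sheared into a parallelogram
print(deform(square, k=2.0))       # stretched/flattened, same area
```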
926

3D object recognition with points of interest

Shaiek, Ayet 21 March 2013 (has links)
Supported by recent and rapid progress in 3D acquisition techniques, 3D object recognition has attracted considerable research effort in recent years. Several issues, however, remain open in this field, related to the large amount of information, to invariance to scale and viewpoint, to occlusions, and to robustness to noise. In this context, our objective is to recognize an isolated 3D object given in a query view, from a training database containing a few views of this object. Our idea is to propose a local method that combines aspects of existing approaches in order to improve recognition performance. We opted for an interest point (IP) method based on measures of local shape variation. Our selection of salient points is based on the combination of two surface classification spaces: the SC space (shape index - curvedness) and the HK space (mean curvature - Gaussian curvature). In the description phase of the extracted set of points, we propose a histogram-based signature that combines information about the relationship between the reference point's normal and the normals of its neighbors with information about the shape index values of this neighborhood. The experiments carried out allowed us to quantitatively evaluate the stability and robustness of these new detectors and descriptors. Finally, we evaluate, on several public 3D object databases, the recognition rate achieved by our method, which outperforms existing techniques on the same databases.
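The HK and SC descriptors are standard functions of the two principal curvatures; a minimal sketch, using Koenderink's shape index convention with k1 >= k2, is:

```python
import numpy as np

def classify_surface(k1, k2):
    """HK and SC surface descriptors from principal curvatures k1 >= k2:
    mean/Gaussian curvature for the HK space, and Koenderink's shape
    index and curvedness for the SC space used by the detector."""
    H = (k1 + k2) / 2.0                                  # mean curvature
    K = k1 * k2                                          # Gaussian curvature
    S = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)     # shape index in [-1, 1]
    C = np.sqrt((k1**2 + k2**2) / 2.0)                   # curvedness (intensity)
    return H, K, S, C

# Spherical cap (1, 1), symmetric saddle (1, -1) and cylinder (1, 0).
for k1, k2 in [(1.0, 1.0), (1.0, -1.0), (1.0, 0.0)]:
    H, K, S, C = classify_surface(k1, k2)
    print(f"k=({k1:+.0f},{k2:+.0f})  H={H:+.1f}  K={K:+.1f}  S={S:+.2f}  C={C:.2f}")
```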
927

Creating roadmaps from uniformly distributed configuration states

Ughini, Cleber Souza January 2007 (has links)
Generating good real-time motions for bodies with many degrees of freedom (DOFs) remains a challenge. A large number of DOFs increases exponentially the number of different configurations a body can assume. Exploiting this space of possibilities to generate complex motions can be very useful for planning the movements of robots or virtual characters, but it is extremely expensive in computational terms. Many algorithms rely on roadmaps to handle bodies with many degrees of freedom. A roadmap is a collection of interconnected valid body configurations, where each connection represents a possible collision-free transition. Techniques based on roadmaps usually follow either deterministic or random approaches. Deterministic methods explore the configuration space more uniformly, ensuring better coverage and roadmap quality, while random approaches generally yield better performance and, above all, make solutions feasible for bodies with many degrees of freedom. This work proposes an adaptive deterministic method for roadmap generation (ADRM) that provides adequate coverage of the configuration space in a perfectly acceptable time compared to other methods. To achieve this, all DOFs of the model are first classified, and this classification is then used as a parameter to decide how many samples to generate for each DOF; the combination of the samples of all DOFs yields the total set of samples (see the sketch below). To validate the new method, several tests were run in different environments. The tests were evaluated through comparison with existing techniques, using criteria such as roadmap generation time and coverage of the configuration space. The results show that the method achieves very good coverage of the configuration space in an acceptable time.
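A minimal sketch of the sampling scheme this describes: evenly spaced samples per DOF, combined by Cartesian product, with the per-DOF sample counts standing in for the output of the classification step. The counts and joint ranges below are illustrative assumptions:

```python
import itertools
import numpy as np

def deterministic_roadmap_nodes(dof_ranges, samples_per_dof):
    """Deterministic sampling: DOF i gets samples_per_dof[i] evenly
    spaced values in its range, and candidate roadmap nodes are the
    Cartesian product of all per-DOF samples.  An adaptive scheme like
    ADRM would choose the per-DOF counts from the DOF classification."""
    axes = [np.linspace(lo, hi, n) for (lo, hi), n
            in zip(dof_ranges, samples_per_dof)]
    return list(itertools.product(*axes))

# 3-DOF toy body: base joint sampled densely, distal joints coarsely.
nodes = deterministic_roadmap_nodes(
    dof_ranges=[(-3.14, 3.14)] * 3,
    samples_per_dof=[5, 3, 2])
print(len(nodes))   # 5 * 3 * 2 = 30 candidate configurations
```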
928

Magnetism, structure and chemical order in 3d metals and their alloys at extreme pressures

Torchio, Raffaella 23 January 2012 (has links)
This thesis concerns the study of the magnetic and structural transformations that occur in 3d metals when they are compressed to extreme pressures. The investigation was carried out using polarized X-ray absorption (X-ray magnetic circular dichroism, or XMCD) coupled with X-ray diffraction and DFT calculations, applied to the cases of cobalt, nickel and iron-cobalt (FeCo) alloys. In particular, for cobalt we present the first experimental evidence of pressure-induced suppression of ferromagnetism and explore the interplay between structural and magnetic changes. The case of nickel, which is structurally stable over a wide range of pressures, allows us to go deeper into the interpretation of the K-edge XMCD signal, which is still unsettled. Finally, the investigation of the FeCo alloys aims at understanding the role played by chemical order in tuning the high-pressure structural and magnetic properties.
929

Determination, control and reduction of biases and uncertainties in the reactivity of the Jules Horowitz Reactor

Leray, Olivier 25 September 2012 (has links)
The HORUS3D/N neutronics calculation scheme is dedicated to the design and safety studies of the Jules Horowitz Reactor (JHR). Control of all the neutronics parameters of the JHR must be ensured for the facility's safety report. This work contributes to that goal and focuses on the determination, control and reduction of the uncertainties due to nuclear data on the reactivity of the JHR. A rigorous and consistent method was used, leading to the identification and quantification of all sources of uncertainty and to control of the bias and uncertainties due to nuclear data on the reactivity of the case studied: the Jules Horowitz Reactor. Several steps were followed: the set-up of a reliable set of variance-covariance matrices for the nuclear data of the isotopes of interest; sensitivity studies to the nuclear data of the representative experiment and of the reactor; an accurate determination of the technological uncertainties using an innovative method; and a transposition stage estimating the posterior bias and uncertainty due to nuclear data on the target application, based on the representativity method applied to the JHR. These steps were performed using the CEA's reference calculation tools (the Monte Carlo code TRIPOLI4, the deterministic codes APOLLO2 and CRONOS2, and the evaluation code CONRAD), the European JEFF-3.1.1 nuclear data evaluation, and a suitable set of uncertainty propagation, marginalization and transposition techniques. The propagation of the nuclear data uncertainties contained in the variance-covariance matrices yields a prior uncertainty of 637 pcm (1σ) on the JHR reactivity for U3Si2Al fuel enriched to 19.75% in 235U. The interpretation of the sample oscillation measurements of the VALMONT program in the MINERVE reactor allowed the experimental validation of the JEFF-3.1.1 nuclear data for the JHR fuel and highlighted the consistency of their uncertainties. The interpretation of the reactivity of the AMMON/Reference core, performed with the JEFF-3.1.1 evaluation and the Monte Carlo code TRIPOLI4, shows a calculation/experiment discrepancy of +376 pcm. A detailed uncertainty study of the reference configuration gives 340 pcm (1σ) from technological uncertainties and 671 pcm (1σ) from nuclear data uncertainties. Transposition of the bias and prior nuclear data uncertainty of the AMMON/Reference experiment is possible thanks to its excellent representativity of the JHR; it yields a factor-of-2 reduction of the uncertainty to be applied to the beginning-of-life JHR reactivity, equivalent to a gain of about two equivalent full-power days (1σ) on cycle length. The final impact of the JEFF-3.1.1 nuclear data on the beginning-of-life JHR reactivity calculated by HORUS3D/N V4.0, for U3Si2Al fuel enriched to 19.75% in 235U and a critical, unrodded JHR core (bare reflector), is a bias of +266 pcm with an associated posterior uncertainty of 352 pcm (1σ).
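The propagation step follows the standard "sandwich rule", var(R) = SᵀMS, combining the reactivity sensitivities S with the nuclear data variance-covariance matrix M; a minimal sketch with illustrative numbers (not the JHR values) is:

```python
import numpy as np

# Sandwich rule: var(R) = S^T M S, where S holds the sensitivities of the
# reactivity to each nuclear data parameter and M their covariance matrix.
# All numbers below are illustrative placeholders.
S = np.array([300.0, -150.0, 80.0])        # pcm per unit relative variation
M = np.array([[1.0e-4, 2.0e-5, 0.0],
              [2.0e-5, 4.0e-4, 1.0e-5],
              [0.0,    1.0e-5, 9.0e-4]])   # relative variance-covariance

sigma = np.sqrt(S @ M @ S)                 # 1-sigma uncertainty in pcm
print(f"propagated reactivity uncertainty: {sigma:.1f} pcm (1 sigma)")
```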
930

ASIC design methodologies for heterogeneous 3D systems-on-chip based on 3D networks-on-chip

Jabbar, Mohamad 21 March 2013 (has links)
In this thesis, we study 3D NoC architectures through physical design implementations using a real 3D technology employed in industry. Based on the routed netlists, we conduct performance analyses to evaluate the benefit of the 3D architecture relative to its 2D implementation. Building on the proposed 3D design flow - which focuses on timing verification and leverages the negligible delay of the microbump structures used for vertical connections - we applied partitioning techniques to a 3D NoC-based MPSoC architecture, covering both homogeneous and heterogeneous stacking, using the Tezzaron 3D IC technology. The design and implementation trade-offs of the two partitioning methods are investigated to gain better insight into the 3D architecture so that it can be exploited for optimal performance. Using the homogeneous 3D stacking approach, NoC topologies are explored to identify the best topology, between 2D and 3D, for the 3D MPSoC implementation, under the assumption that the critical paths lie on the inter-router links. The architectural explorations also considered different process technologies, highlighting the effect of process technology and wire delay on 3D architecture performance, especially for interconnect-dominated designs. Additionally, we performed heterogeneous 3D stacking of an NoC-based MPSoC implementation with a GALS-style approach and present several related physical design analyses concerning 3D MPSoC design and implementation using 2D EDA tools, including an analysis of the effect of microbump pitch on the performance of face-to-face stacked 3D architectures, identifying issues and limitations to consider during the design process. Finally, we explored the use of 2D EDA tools on different 3D architectures to evaluate their impact on 3D architecture performance. Since no commercial 3D design tool exists to date, this experiment is important in showing that designing a 3D architecture with 2D EDA tools does not strongly or directly degrade 3D architecture performance, even though the tools are dedicated to 2D architecture design.
