41

SUPERNOVAE THEORY: STUDY OF ELECTRO-WEAK PROCESSES DURING GRAVITATIONAL COLLAPSE OF MASSIVE STARS

Fantina, A. F. 18 October 2010 (has links) (PDF)
Supernova physics requires knowledge both of the complex hydrodynamic phenomena in dense matter (such as energy and neutrino transport and the treatment of the shock) and of the microphysics related to nuclei and nuclear matter under hot, dense conditions. Within the theory of type II supernovae, most numerical simulations that follow the collapse of the supernova core through the formation and propagation of the shock wave fail to reproduce the explosion of the outer layers of massive stars. The reason may lie either in hydrodynamic phenomena such as rotation, convection, or general relativity, or in microphysical processes that are poorly known in this regime of densities, temperatures, and asymmetries. The aim of this thesis is to study the effect of certain microphysical processes, in particular the electro-weak processes that play a fundamental role during gravitational collapse, and to analyze their impact with a hydrodynamic simulation. Among the microphysical processes at work during a supernova collapse, the most important electro-weak process is electron capture on free protons and on nuclei. Capture is essential in determining the evolution of the lepton fraction in the core during the neutronization phase; it affects the efficiency of the bounce and, consequently, the energy of the shock wave. Moreover, the equation of state of matter and the electron-capture rates on nuclei are modified by the effective mass of the nucleons in nuclei, which arises from many-body correlations in the dense medium, and by its temperature dependence. After a general introduction that reviews supernova phenomenology and stresses the need for nuclear data in numerical simulations, the first part of the thesis presents the nuclear aspects addressed in this work. Chapter 2 is a short introduction to the main concepts developed in Part I and used in Part II, in particular mean-field theory, pairing in the BCS approximation, and the definition of the effective mass in connection with the level density and the symmetry energy. Chapter 3 presents a nuclear model intended to improve the density of states around the Fermi level in nuclei. A surface-peaked effective mass, simulating certain effects beyond Hartree-Fock, was included in the density-functional approach by adding a term to the Skyrme functional that reproduces the enhancement of the effective mass and of the density of states at the Fermi surface, as expected from experimental data. The impact of this new term was studied on the mean-field properties of the nuclei 40Ca and 208Pb, and on the pairing properties at zero and finite temperature in the nucleus 120Sn. New calculations were also begun to evaluate the temperature dependence of the effective mass within the microscopic RPA framework; preliminary results are shown in Appendix D.
This nuclear part is completed by Appendix B, which gives the details of the Skyrme parametrizations used in the text, and by Appendix C, which analyzes the temperature dependence of the effective mass in connection with the level-density parameter that can be extracted from nuclear physics experiments. The second part of the thesis is devoted to the supernova models I worked on. We present results obtained with a one-zone approach and with two one-dimensional, spherically symmetric models: one Newtonian and one in general relativity. Although a spherically symmetric model cannot capture all the complex aspects of the supernova phenomenon, and observations of neutron-star velocities or ejecta inhomogeneities call for multidimensional effects in the simulations, a one-dimensional model allows a first detailed study of the impact of different microphysical inputs, focusing the analysis on the uncertainties in the nuclear physics data. After a general introduction in Chapter 4, which describes the main ingredients of the various numerical simulations (such as the treatment of the shock and neutrino transport), the codes I worked on are described in detail. Chapter 5 presents a one-zone model in which the supernova core is approximated by a sphere of homogeneous density. Although simple, this model reproduces qualitatively (and quantitatively in order of magnitude) the collapse "trajectory", i.e., the evolution of the thermodynamic quantities during the collapse. In this framework, we evaluated the impact of the temperature dependence of the symmetry energy (via the temperature dependence of the effective mass) on the collapse dynamics, and we showed that including this temperature dependence systematically reduces the deleptonization in the core and has a non-negligible effect on the shock energy. These results led us to perform more realistic simulations with a Newtonian, spherically symmetric, one-dimensional code with neutrino transport. The description of this code, developed by P. Blottiau and Ph. Mellor at CEA,DAM,DIF, is the subject of Chapter 6. In the equation of state derived by Bethe et al. (BBAL), also used in the one-zone code, we included the same parametrization of the effective mass, which acts both on the Q-values of the capture rates and on the equation of state of the system. The results of these simulations confirmed those obtained with the one-zone code, namely the systematic reduction of the deleptonization in the core when the temperature dependence of the symmetry energy is included. In addition, we estimated its impact on the position at which the shock wave forms, which is shifted outward by a non-negligible amount. Work was also carried out to include the more recent Lattimer and Swesty equation of state in the code. Finally, Chapter 7 describes a code, originally developed by the Valencia group, written in general relativity and using a modern shock-capturing treatment of the shock. Although this model does not include neutrino transport, the evolution equation for the neutrino fraction is already written with a multi-group scheme that allows a first spectral analysis of the neutrinos.
We study the effect of the equation of state on the collapse dynamics as well as the impact of electron capture. A Newtonian version was also implemented, and the results obtained agree with the literature. This part is completed by several appendices. Appendix A lists the different units of measure employed in the codes. Appendices E and F are devoted to two equations of state: the first is that of a gas of neutrons, protons, and electrons; the second describes the Lattimer and Swesty equation of state and the modifications we made to correct an error in the definition of the binding energy of the alpha particles and to extend the equation of state to lower densities. Finally, Appendix G details the neutrino processes implemented in the simulations. The numerical codes developed in this thesis for simulating supernova gravitational collapse are well suited to testing the properties of matter and can serve as a tool for future research projects.
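The electro-weak processes at the heart of the thesis can be summarized schematically (a textbook-level sketch for orientation, not equations quoted from the manuscript): electron capture proceeds on free protons and on nuclei,

\[ e^- + p \rightarrow n + \nu_e, \qquad e^- + (A,Z) \rightarrow (A,Z-1) + \nu_e, \]

and, while the neutrinos still escape freely, it drives the deleptonization of the core,

\[ \frac{dY_e}{dt} \simeq -\lambda_{\mathrm{ec}}^{(p)}\,Y_p - \lambda_{\mathrm{ec}}^{(A)}\,Y_A, \qquad Y_L = Y_e + Y_\nu, \]

where the capture rates \(\lambda_{\mathrm{ec}}\) depend on Q-values that the in-medium, temperature-dependent effective nucleon mass shifts; this is precisely the coupling the thesis investigates.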
42

MEASURING NEUTRON STAR RADII VIA PULSE PROFILE MODELING WITH NICER

Özel, Feryal, Psaltis, Dimitrios, Arzoumanian, Zaven, Morsink, Sharon, Bauböck, Michi 18 November 2016 (has links)
The Neutron-star Interior Composition Explorer (NICER) is an X-ray astrophysics payload that will be placed on the International Space Station. Its primary science goal is to measure with high accuracy the pulse profiles that arise from the non-uniform thermal surface emission of rotation-powered pulsars. Modeling general relativistic effects on the profiles will lead to measuring the radii of these neutron stars and to constraining their equation of state. Achieving this goal will depend, among other things, on accurate knowledge of the source, sky, and instrument backgrounds. We use here simple analytic estimates to quantify the level at which these backgrounds need to be known in order for the upcoming measurements to provide significant constraints on the properties of neutron stars. We show that, even in the minimal-information scenario, knowledge of the background at the few-percent level for a background-to-source count-rate ratio of 0.2 allows for a measurement of the neutron star compactness to better than 10% uncertainty over most of the parameter space. These constraints improve further when more realistic assumptions are made about the neutron star emission and spin, and when additional information about the source itself, such as its mass or distance, is incorporated.
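A rough version of the background argument can be written down (our back-of-the-envelope illustration, not the authors' analytic model): the quantity being constrained is the stellar compactness

\[ \beta \equiv \frac{GM}{R c^2}, \]

which sets the observed pulse amplitude through gravitational light bending. For source counts \(S\) and background counts \(B = bS\), mis-estimating the background by a fraction \(\epsilon\) biases the inferred pulsed fraction by roughly

\[ \frac{\delta A}{A} \sim \frac{\epsilon\, b}{1+b}, \]

so with \(b = 0.2\) and \(\epsilon\) at the few-percent level the systematic stays below one percent, comfortably inside the targeted ~10% constraint on the compactness.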
43

Caracterização silvigênica de um trecho de floresta Ombrófila densa do parque estadual Carlos Botelho, Sete Barras - SP / Silvigenic characterization of a Dense Rain Forest in the Parque Estadual Carlos Botelho, Sete Barras - SP

Viecili, Renata Rodrigues Fernandez 05 March 2013 (has links)
This study aimed to carry out the silvigenic characterization of a stretch of Submontane Dense Rain Forest, together with the establishment of possible relationships between spatial changes in the sylvatic mosaic and abiotic factors (soil and topography). The method used was the interception of inventory lines, with identification of the ecounits as described by Torquebiau (1986). Parallel lines were laid out 10 m apart. All dominant individuals (the tallest at a given point) whose horizontal crown projections intercepted the lines were sampled in the silvigenic characterization. For each of these individuals, the horizontal crown projection was measured at a minimum of four points relative to the inventory lines, in an orthogonal axis system (x and y coordinates). Each tree marked in the field was classified by its architecture as a tree of the future, a tree of the present, or a tree of the past (OLDEMAN, 1987). Gap areas intercepting the lines were also sampled, measured, and mapped. The various ecounits in each sampled stretch were established by joining the crowns of trees of the same category. The mosaic was drawn and the ecounit areas computed with the TNTmips software, from the crown coordinates along the inventory lines.
The work resulted in a graphic representation of the vegetation cover of the study area and its correlation with the abiotic factors. To evaluate the role of abiotic factors in the spatial composition of the vegetation mosaic, the various data were combined and analyzed in a Geographic Information System (GIS), with each class of information forming a layer. The silvigenic characterization indicated that the study area represents a forest in a pre-maturity phase, as it shows signs of recent disturbance, reflected in the high observed proportions of 1A and reorganizing ecounits. It is further concluded that a relationship can be established between the distribution of the ecounits and the abiotic factors studied, and that the silvigenic mapping represented the architectural behavior of the species in relation to the soil classification.
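To make the mapping step concrete, here is a minimal sketch (ours, not the author's workflow, which used the TNTmips GIS) of how a crown's horizontal projection, sampled as at least four (x, y) points along the inventory lines, yields an area via the shoelace formula:

```python
def polygon_area(points):
    """Area of a simple polygon from ordered (x, y) vertices (shoelace formula)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical crown projection sampled at four points
# (metres along the inventory line, metres perpendicular to it)
crown = [(0.0, 1.2), (2.5, 2.0), (3.1, -0.8), (0.4, -1.5)]
print(f"crown projection area: {polygon_area(crown):.2f} m^2")
```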
44

Accelerating Dense Linear Algebra for GPUs, Multicores and Hybrid Architectures: an Autotuned and Algorithmic Approach

Nath, Rajib Kumar 01 August 2010 (has links)
Dense linear algebra (DLA) is one of the seven most important kernels in high-performance computing. The introduction of new machines from vendors gives us opportunities to optimize DLA libraries for those machines and thus exploit their power. Unfortunately, the optimization phase is not straightforward. The optimal code for a given Basic Linear Algebra Subprograms (BLAS) kernel, the core of DLA algorithms, can differ between two machines built on different semiconductor processes even if they share the same instruction set architecture, memory hierarchy, and clock speed. It has become a tradition to optimize BLAS for new machines, and vendors maintain highly optimized BLAS libraries targeting their CPUs. Unfortunately, the existing BLAS for GPUs is not highly optimized for DLA algorithms. In my research, I have provided new algorithms for several important BLAS kernels for different generations of GPUs and introduced a pointer-redirecting approach to make BLAS run faster for generic problem sizes. I have also presented an auto-tuning approach to parameterize the developed BLAS algorithms and select the best set of parameters for a given card. Hardware trends have also brought up the need for updates to existing legacy DLA software packages, such as the sequential LAPACK. To take advantage of the new computational environment, successors of LAPACK must incorporate algorithms with three main characteristics: high parallelism, reduced communication, and heterogeneity-awareness. On multicore architectures, Parallel Linear Algebra Software for Multicore Architectures (PLASMA) has been developed to meet the challenges of multicore. At the other extreme, the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library demonstrated a hybridization approach that streamlined the development of high-performance DLA for multicores with GPU accelerators. The performance of these two libraries depends on the right choice of parameters for a given problem size and a given number of cores and/or GPUs. In this work, the issue of automatically tuning these two libraries is presented. A prune-based empirical auto-tuning method is proposed for tuning PLASMA, and part of that method is adapted to tune the hybrid MAGMA library.
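The prune-then-time idea behind empirical auto-tuning can be sketched in a few lines (an illustration under our own simplifications, not the PLASMA/MAGMA tuner itself, which is written in C/CUDA): enumerate candidate tile sizes, prune those that violate a simple hardware constraint, and time the survivors on the actual machine.

```python
import time
import numpy as np

def blocked_gemm(A, B, nb):
    """Naive tiled matrix multiply; nb is the tile size being tuned."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, nb):
        for j in range(0, n, nb):
            for k in range(0, n, nb):
                C[i:i+nb, j:j+nb] += A[i:i+nb, k:k+nb] @ B[k:k+nb, j:j+nb]
    return C

n = 512
A, B = np.random.rand(n, n), np.random.rand(n, n)

# Prune step: keep tile sizes that divide n and whose three working tiles
# fit in a hypothetical 256 KiB cache (8 bytes per double).
candidates = [nb for nb in (16, 32, 64, 128, 256)
              if n % nb == 0 and 3 * nb * nb * 8 <= 256 * 1024]

# Empirical step: time each surviving candidate and keep the fastest.
timings = {}
for nb in candidates:
    t0 = time.perf_counter()
    blocked_gemm(A, B, nb)
    timings[nb] = time.perf_counter() - t0
best = min(timings, key=timings.get)
print(f"best tile size: {best}, time: {timings[best]:.3f} s")
```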
46

A calculus of loop invariants for dense linear algebra optimization

Low, Tze Meng 29 January 2014 (has links)
Loop invariants have traditionally been used in proofs of correctness (e.g., program verification) and in program derivation. Given that a loop invariant is all that is required to derive a provably correct program, the loop invariant can be thought of as the essence of a loop. That being so, we ask the question "What other information is embedded within a loop invariant?" This dissertation provides evidence that, in the domain of dense linear algebra, loop invariants can be used to determine the behavior of loops. It demonstrates that by understanding how the loop invariant describes the behavior of the loop, a goal-oriented approach can be used to derive loops that are not only provably correct, but also have the desired performance behavior.
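As a small illustration of the idea (our example, in the spirit of the FLAME-style derivations this line of work builds on, not code from the dissertation), a loop invariant for a simple dense operation can be stated and even checked at runtime:

```python
import numpy as np

def matvec(A, x):
    """Compute y = A @ x one row at a time.

    Loop invariant: before each iteration, y[:i] already equals A[:i] @ x
    and y[i:] is still zero. This invariant is the "essence" of the loop;
    choosing a different invariant (e.g., partial sums accumulated over
    columns) would derive a loop with a different memory-access pattern
    for the same result, which is the performance lever discussed above.
    """
    m = A.shape[0]
    y = np.zeros(m)
    for i in range(m):
        assert np.allclose(y[:i], A[:i] @ x)  # the invariant, checked
        y[i] = A[i] @ x
    return y

A = np.arange(12.0).reshape(3, 4)
x = np.ones(4)
print(matvec(A, x))  # [ 6. 22. 38.]
```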
47

A study of dispersion and combustion of particle clouds in post-detonation flows

Gottiparthi, Kalyana Chakravarthi 21 September 2015 (has links)
Augmentation of the impact of an explosive is routinely achieved by packing metal particles into the explosive charge. When the charge is detonated, the particles are ejected and dispersed. The ejecta influence the post-detonation combustion processes that bolster the blast wave and determine the total impact of the explosive. It is therefore vital to understand the dispersal and combustion of the particles in the post-detonation flow, and numerical simulations have been indispensable in developing the needed insights. Because of the accuracy of Eulerian-Lagrangian (EL) methods in capturing the particle interaction with the post-detonation mixing zone, EL methods have been preferred over Eulerian-Eulerian (EE) methods. In most cases, however, the number of particles in the flow renders simulations using an EL method infeasible. To overcome this problem, a combined EE-EL approach is developed by coupling a massively parallel EL approach with an EE approach for granular flows. The overall simulation strategy is employed to simulate the interaction of ambient particle clouds with homogeneous explosions and the dispersal of particles after detonation of heterogeneous explosives. Explosives packed with aluminum particles are also considered, and aluminum particle combustion in the post-detonation flow is simulated. The effect of particles, both reactive and inert, on the combustion processes is analyzed. The challenging task of solving for clouds of micron and sub-micron particles in complex post-detonation flows is successfully addressed in this thesis.
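The Lagrangian half of the EL coupling can be illustrated by its simplest element (a one-way-coupled sketch under a Stokes-drag assumption, ours rather than the solver described in the thesis): each particle's velocity relaxes toward the local gas velocity on its response time tau_p.

```python
import numpy as np

def advance_particle(xp, vp, gas_velocity, tau_p, dt):
    """One explicit-Euler step of a drag-coupled Lagrangian particle.

    dv/dt = (u_gas(x) - v) / tau_p   (Stokes drag, one-way coupling)
    """
    u = gas_velocity(xp)
    vp = vp + dt * (u - vp) / tau_p
    xp = xp + dt * vp
    return xp, vp

# Hypothetical decaying post-shock gas velocity profile (m/s vs. m)
gas = lambda x: 100.0 * np.exp(-x / 0.5)

x, v = 0.0, 0.0
for _ in range(1000):
    x, v = advance_particle(x, v, gas, tau_p=1e-3, dt=1e-5)
print(f"particle position: {x:.4f} m, velocity: {v:.2f} m/s")
```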
48

The selective removal of components from gasoline using membrane technology

Robinson, John January 2004 (has links)
Membrane technology is a potential method for upgrading gasoline quality with respect to its tendency to promote fouling of engine inlet systems. This thesis investigates the transport and separation mechanisms of dense polydimethylsiloxane (PDMS) membranes in nanofiltration applications relating to the filtration of gasoline fuels. Simulated fuels were created from representative organic solvents with organometallic and polynuclear aromatic solutes. The flux and separation behaviour of the solvent-solute systems were studied using several apparatuses and a range of operating regimes. Tests were performed with real fuels and refinery components to verify the mechanisms observed with the model solvent-solute systems, and several strategies were developed by which the process could be optimised or improved. In parallel, a project was undertaken to assess the suitability of the technology on an industrial scale and to identify any scale-up issues. The key factors influencing flux were found to be the viscosity and swelling effect of the solvent or solvent mixture. The dense membrane was shown to exhibit many characteristics of a porous structure when swollen with solvents, with the separation of low-polarity solutes governed principally by size exclusion. It is postulated that swelling expands the polymer network such that convective and diffusive flow can take place between polymer chains. In general terms, a higher degree of swelling resulted in a higher flux and a lower solute rejection. The separation potential of the membrane could be partly controlled by changing the swelling effect of the solvent and the degree of membrane crosslinking. The transport of polar/non-polar solvent mixtures through PDMS was influenced by swelling equilibria, with separations occurring upon swelling of the membrane. Separation of the more polar solvent occurred in this manner, and the solute rejection in multicomponent polar/non-polar mixtures deviated significantly from the behaviour in binary mixtures. The results obtained from a pilot-plant-scale apparatus were largely consistent with those from laboratory-scale equipment, and engine tests showed that fuel filtration with PDMS is a technically viable means of upgrading gasoline quality.
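The reported viscosity and swelling dependence of the flux is often summarized by a pore-flow-type relation (a standard textbook form quoted here for illustration; the correlations fitted in the thesis may differ):

\[ J = \frac{K(\phi_s)\,\Delta P}{\mu\, L}, \]

where \(J\) is the volumetric flux, \(\Delta P\) the transmembrane pressure, \(\mu\) the solvent viscosity, \(L\) the membrane thickness, and \(K(\phi_s)\) an effective permeability that grows with the degree of swelling \(\phi_s\). This is consistent with the observation that more strongly swelling solvents give higher flux and lower solute rejection.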
49

Går det att skapa städer som är både förtätade och grönskande? : En studie av Malmö stad / Is it possible to create cities that are both dense and green? : A study of the City of Malmö

Jangefelt Nilsson, Jenny January 2014 (has links)
Creating a city that is both dense and green can give rise to conflicts. The aim of this report is to describe how the municipality of Malmö works with densification while at the same time creating green environments, and to describe how researchers problematize densification and urban greenery. The empirical study showed that creating a dense and green city is a political directive; to achieve it, all of the interviewees refer to the comprehensive plan of Malmö. According to John Lepic, the methods the municipality of Malmö uses to create green environments during densification vary from project to project. The literature survey shows that greenery is often identified as a key ingredient of a sustainable city (Boverket 2002; Jim 2004). In order to systematically create a greener city, spatial planning and the data produced in connection with it are important tools (Nordmalm et al. 1999). To create exciting and interesting environments, both quantity and quality are important (Jim 2004). Greenery can be introduced into cities in many ways: even if the wildest ideas may never become reality, they can provide inspiration and help point toward new solutions.
50

Roles of Sec5 in the Regulation of Dense-Core Vesicle Secretion in PC12 Cells

Jiang, Tiandan T. J. 03 January 2011 (has links)
The exocyst is thought to tether secretory vesicles to specific sites on the plasma membrane. As a member of the exocyst, Sec5 is implicated in cell survival and membrane growth in Drosophila. Little is known of exocyst function in mammals, with previous work suggesting an involvement of the exocyst in GTP-dependent exocytosis. Using RNA interference, we stably down-regulated Sec5 in PC12 cells. We found that these knockdown cells exhibit decreased GTP- and Ca2+-dependent exocytosis of dense-core vesicles (DCVs) and contain a smaller proportion of docked vesicles. Expression of Sec6/8 is also slightly reduced in Sec5 knockdown cells. Our results suggest that Sec5 is involved in both GTP- and Ca2+-dependent exocytosis, possibly through the regulation of DCV docking. We also established a doxycycline-inducible knockdown system for Sec5 in PC12 cells, which may be better suited to studying development-related proteins. Efforts were also made to re-introduce Sec5 into the Sec5 knockdown cells for rescue purposes.
