241

Characterization of amorphous semiconductor superlattices by X-ray diffraction

Elvira Leticia Zeballos Velasquez, 05 July 1995
In this work, two amorphous semiconductor multilayer systems, a-Si:H/a-Si1-xCx:H and a-Si:H/a-Ge:H, grown by plasma-enhanced chemical vapor deposition (PECVD), were investigated. Small-angle X-ray diffraction (SAXRD) was used to study the structural properties of these superlattices. The aims of this work were: a) to determine the structural properties of these systems, including the superlattice periodicity, thickness uniformity and interface sharpness; b) to develop theoretical models to simulate the diffracted intensities; and c) to evaluate the diffusion and crystallization processes of the multilayer components by means of heat treatments. The a-Si:H/a-Si1-xCx:H multilayers were deposited while varying two growth parameters: the methane concentration in the gas mixture and the hydrogen plasma-etching time between consecutive depositions. Auger electron spectroscopy (AES) and SAXRD results were combined to evaluate the interface thickness. The sharpest interfaces were obtained for samples grown on top of a buffer layer, with hydrogen plasma-etching times of at least 2 min, and with a-Si1-xCx:H layers deposited with a higher CH4 concentration in the gas mixture and under low silane flow.
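For reference, the superlattice period is usually extracted from the positions of the small-angle Bragg peaks with the modified Bragg law for multilayers; this is the standard textbook relation, not an equation quoted from the thesis:

```latex
\sin^2\theta_m = \left(\frac{m\,\lambda}{2\Lambda}\right)^2 + 2\bar{\delta}
```

where \theta_m is the position of the m-th order superlattice peak, \lambda the X-ray wavelength, \Lambda the bilayer period, and \bar{\delta} the mean refractive-index decrement that accounts for refraction at grazing angles; a linear fit of \sin^2\theta_m versus m^2 yields \Lambda.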
242

An iterative and scalable image super-resolution method using DCT interpolation and sparse representation

Saulo Roberto Sodré dos Reis, 23 April 2014
In scenarios where image and video acquisition devices have limited resources or the available images lack quality, super-resolution (SR) techniques are an excellent alternative for improving image quality. This thesis proposes a single-image super-resolution method that combines the benefits of interpolation in the DCT domain with the efficiency of reconstruction methods based on sparse signal representation. The proposal seeks to build on the gains in quality and computational efficiency already achieved by the main existing super-resolution algorithms, and introduces improvements in both the training stage and the reconstruction of the final image. In the training stage, a new feature-extraction step based on unsharp masking was added and a new dictionary was built; this strategy extracts more structural information from the low- and high-resolution patches of the training set while reducing the dictionary size. Another important contribution is an iterative and scalable process that reinserts into the training set and into the reconstruction stage a high-resolution image obtained in a first iteration. This solution improves the quality of the final high-resolution image while using few images in the training set. Computational simulations demonstrated the ability of the proposed method to produce high-quality images with reduced computational time.
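As a rough illustration of the sparse-representation reconstruction step described above, the sketch below codes an unsharp-mask feature patch over a low-resolution dictionary and decodes it with a coupled high-resolution dictionary. It is a minimal sketch, not the thesis implementation: the dictionaries `D_lr`/`D_hr`, the ISTA solver and all parameter values are assumptions.

```python
# Minimal sketch (not the author's code) of a sparse-representation SR step
# with unsharp-mask features, assuming coupled dictionaries learned offline.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_features(patch, sigma=1.0, amount=1.0):
    """Unsharp masking: emphasize high-frequency structure of an LR patch."""
    blurred = gaussian_filter(patch, sigma)
    return patch + amount * (patch - blurred)

def sparse_code_ista(y, D, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 with plain ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def reconstruct_hr_patch(lr_patch, D_lr, D_hr):
    """Code the LR feature patch over D_lr, decode with the coupled D_hr."""
    y = unsharp_features(lr_patch).ravel()
    a = sparse_code_ista(y, D_lr)
    return D_hr @ a                         # flattened HR patch estimate

# Toy usage with random (hypothetical) dictionaries
rng = np.random.default_rng(0)
D_lr = rng.standard_normal((25, 64)); D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((100, 64))
hr = reconstruct_hr_patch(rng.standard_normal((5, 5)), D_lr, D_hr)
```

In the thesis's iterative scheme, the high-resolution output of a first pass would then be reinserted into the training set and the reconstruction repeated.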
243

Construction of a new experiment for studying degenerate quantum gases in optical lattices, and study of a super-resolution imaging system

Vasquez Bullon, Hugo Salvador, 29 February 2016
For some time now, theoretical physicists in condensed matter have faced a major problem: the computing power needed to numerically simulate and study some interacting many-body systems is insufficient. As the control and use of ultracold atomic systems has developed significantly in recent years, an alternative is to use cold atoms trapped in optical lattices as a quantum simulator. Indeed, the physics of electrons moving on the crystalline structure of a solid and that of atoms trapped in optical lattices are both described by the same model, the Fermi-Hubbard model, which is a simplified representation of fermions moving on a periodic lattice. Quantum simulators can thus reproduce the electrical properties of materials, such as conductivity or insulating behavior, and potentially also magnetic properties such as antiferromagnetic order. The AUFRONS experiment, on which I worked during my PhD, aims at building a quantum simulator based on ultracold 87Rb and 40K atoms trapped in nanostructured optical potentials generated in the near field. In order to detect the atomic distribution at such small distances, we have developed an innovative imaging technique for getting around the diffraction limit. Once completed, this imaging system could potentially allow us to detect and identify individual sites of the sub-wavelength optical lattice. In this thesis, I describe the work I have done to build the AUFRONS experiment, as well as the feasibility study I carried out for the super-resolution imaging technique.
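For reference, the single-band Fermi-Hubbard model mentioned in the abstract has the standard form below (a textbook expression; the notation is not taken from the thesis):

```latex
H = -t \sum_{\langle i,j\rangle,\sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}
```

where t is the tunneling amplitude between neighboring lattice sites, U the on-site interaction, and n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}; in the cold-atom realization both parameters are tuned through the lattice depth and the interatomic scattering length.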
244

Algorithms for super-resolution of images based on sparse representation and manifolds

Ferreira, Júlio César, 06 July 2016
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms, based on mathematical theories, for single-image super-resolution problems. In order to estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered using algorithms that rely on the Euclidean distance as a dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking the underlying geometry of the data into account. The AGNN and GOC methods outperform spectral clustering, soft clustering and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; aSOB outperforms both the PCA and PGA methods. Finally, we combined all our methods into a single algorithm, named G2SR, which shows better visual and quantitative results than state-of-the-art methods.
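The 2-D structure tensor underlying an SE-ASDS-style edge-aware regularizer can be sketched in a few lines. This is an assumed, generic construction for illustration only, not the thesis code; the Sobel/Gaussian choices and the smoothing scale are arbitrary.

```python
# Minimal sketch (assumed, not the thesis code) of the 2-D structure tensor
# used by edge-aware regularization terms such as SE-ASDS.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(img, sigma=1.5):
    """Return the smoothed structure-tensor fields J11, J12, J22."""
    gx = sobel(img, axis=1)                 # horizontal gradient
    gy = sobel(img, axis=0)                 # vertical gradient
    J11 = gaussian_filter(gx * gx, sigma)
    J12 = gaussian_filter(gx * gy, sigma)
    J22 = gaussian_filter(gy * gy, sigma)
    return J11, J12, J22

def edge_strength(img, sigma=1.5):
    """Anisotropy of the tensor eigenvalues; large near edges, small in flat areas."""
    J11, J12, J22 = structure_tensor(img, sigma)
    trace = J11 + J22
    det = J11 * J22 - J12 ** 2
    disc = np.sqrt(np.maximum((trace / 2) ** 2 - det, 0.0))
    lam1, lam2 = trace / 2 + disc, trace / 2 - disc   # eigenvalues, lam1 >= lam2
    return lam1 - lam2

img = np.random.default_rng(1).random((64, 64))
w = edge_strength(img)   # could modulate a per-pixel regularization weight
```

A per-pixel weight derived from this eigenvalue anisotropy could then modulate the regularization strength so that edges are smoothed less than flat regions, which is the intent of a structure-tensor regularizer.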
245

Discovery and Extraction of Protein Sequence Motif Information that Transcends Protein Family Boundaries

Chen, Bernard, 17 July 2009
Protein sequence motifs are attracting more and more attention in the field of sequence analysis. These recurring patterns have the potential to determine the conformation, function and activities of proteins. In our work, we obtained protein sequence motifs that are universally conserved across protein family boundaries. Therefore, unlike most popular motif-discovery algorithms, our input dataset is extremely large, and an efficient technique is essential. We use two granular computing models, Fuzzy Improved K-means (FIK) and Fuzzy Greedy K-means (FGK), to generate protein motif information efficiently. We then develop an efficient Super Granular SVM Feature Elimination model to further extract the motif information. During the motif search, setting a fixed window size in advance simplifies the computation and increases efficiency; however, because of the fixed size, the model may deliver a number of similar motifs that are simply shifted by a few residues or that include mismatches. We develop a new strategy, named Positional Association Super-Rule, to confront the problem of motifs generated from a fixed window size. It combines super-rule analysis with a novel Positional Association Rule algorithm. We use the super-rule concept to construct a Super-Rule-Tree (SRT) via a modified HHK clustering, which requires no parameter setup, to identify the similarities and dissimilarities between motifs. The positional association rule is created and applied to find similar motifs that are shifted by some residues. By analyzing the motifs generated by our approaches, we find that they are significant not only in terms of sequence, but also in secondary-structure similarity and biochemical properties.
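For orientation only, the fuzzy-membership idea behind FIK and FGK can be illustrated with plain fuzzy c-means; the sketch below is the standard Bezdek update, not the thesis's granular-computing variants, and the encoding of sequence windows as numeric vectors is an assumption.

```python
# Minimal sketch of standard fuzzy c-means; the thesis's FIK/FGK models are
# variants of this family of fuzzy clustering schemes (details simplified).
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=50, eps=1e-9, seed=0):
    """X: (n_samples, n_features); returns memberships U (n_samples, c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)           # each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return U, centers

X = np.random.default_rng(2).random((200, 20))   # e.g. encoded sequence windows
U, centers = fuzzy_c_means(X, c=8)
```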
246

Image Formation from a Large Sequence of RAW Images: performance and accuracy

Briand, Thibaud, 13 November 2018
The aim of this thesis is to build a high-quality color image, containing a low level of noise and aliasing, from a large sequence (e.g. hundreds or thousands) of RAW images taken with a consumer camera. This is a challenging problem requiring demosaicking, denoising and super-resolution to be performed on the fly. Existing algorithms produce high-quality images, but the number of input images is limited by severe computational and memory costs. In this thesis we propose an image fusion algorithm that processes the images sequentially, so that the memory cost depends only on the size of the output image. After a preprocessing step, the mosaicked (or CFA) images are aligned in a common coordinate system using a two-step registration method that we introduce. Then, a color image is computed by accumulating the irregularly sampled data using classical kernel regression. Finally, the blur introduced is removed by applying the inverse of the corresponding asymptotic equivalent filter (which we also introduce). We evaluate the performance and accuracy of each step of our algorithm on synthetic and real data. We find that for a large sequence of RAW images, our method successfully performs super-resolution and the residual noise decreases as expected; the results are similar to those obtained by slower and more memory-intensive methods. Since generating synthetic data requires an interpolation method, we also study in detail trigonometric polynomial and B-spline interpolation, and derive from this study new fine-tuned interpolation methods.
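The sequential accumulation idea can be sketched as below: each frame's registered samples are splatted into running numerator and denominator buffers, so only output-sized arrays persist between frames. This is a minimal sketch under strong assumptions (registration already done, nearest-pixel splatting, a simple Gaussian kernel), not the thesis implementation.

```python
# Minimal sketch of sequential kernel-regression accumulation: memory use
# depends only on the output size, images are streamed one at a time.
import numpy as np

H, W, scale, sigma = 100, 100, 2, 0.7
num = np.zeros((H * scale, W * scale))     # running weighted sum of samples
den = np.zeros((H * scale, W * scale))     # running sum of kernel weights

def accumulate(samples, xs, ys, num, den, sigma):
    """samples: irregular values; (xs, ys): their positions in HR coordinates."""
    xi, yi = np.round(xs).astype(int), np.round(ys).astype(int)
    ok = (xi >= 0) & (xi < num.shape[1]) & (yi >= 0) & (yi < num.shape[0])
    w = np.exp(-((xs - xi) ** 2 + (ys - yi) ** 2) / (2 * sigma ** 2))
    np.add.at(num, (yi[ok], xi[ok]), w[ok] * samples[ok])
    np.add.at(den, (yi[ok], xi[ok]), w[ok])

rng = np.random.default_rng(3)
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
for _ in range(10):                         # stands in for hundreds of RAW frames
    img = rng.random(H * W)                 # hypothetical frame, already demosaicked
    shift = rng.random(2)                   # sub-pixel shift estimated by registration
    accumulate(img, (xs.ravel() + shift[0]) * scale,
               (ys.ravel() + shift[1]) * scale, num, den, sigma)

fused = num / np.maximum(den, 1e-8)         # blurred HR estimate, to be deconvolved
```

Because only `num` and `den` (output-sized arrays) persist between frames, the memory cost is independent of the number of input images, which is the property the abstract emphasizes; the remaining blur is what the inverse asymptotic equivalent filter removes.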
247

Generating super-resolved depth maps using low-cost sensors and RGB images

LEANDRO TAVARES ARAGAO DOS SANTOS, 11 January 2017
There are many applications for the three-dimensional reconstruction of real scenes. The appearance of low-cost depth sensors such as the Kinect suggests the development of reconstruction systems cheaper than the existing ones. However, the data provided by this device are still far poorer than those provided by more sophisticated sensors. In the academic and commercial worlds, some initiatives, such as those of Tong et al. [1] and Cui et al. [2], try to solve this problem. Building on them, this work modifies the super-resolution algorithm described by Mitzel et al. [3] so that it also takes into account the color images provided by the device, following the approach of Cui et al. [2]. This change improved the super-resolved depth maps, mitigating interference caused by sudden movements in the captured scene. The tests confirm the improvement of the generated maps and analyze the impact of implementing the algorithms on CPU and GPU in this super-resolution step. The work is restricted to this step; the subsequent stages of the 3D reconstruction were not implemented.
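To make the idea of color-guided depth super-resolution concrete, the sketch below performs joint-bilateral-style upsampling, weighting low-resolution depth samples by both spatial distance and RGB similarity. It is neither Mitzel et al.'s algorithm nor the thesis code, just a generic hedged illustration; all parameters and the alignment assumption are hypothetical.

```python
# Minimal sketch (not the thesis implementation) of color-guided depth
# upsampling: depth samples are averaged with weights combining spatial
# distance and RGB similarity in the registered high-resolution image.
import numpy as np

def guided_depth_upsample(depth_lr, rgb_hr, scale, r=3, sigma_s=2.0, sigma_c=10.0):
    Hh, Wh = rgb_hr.shape[:2]
    out = np.zeros((Hh, Wh))
    for y in range(Hh):
        for x in range(Wh):
            yl, xl = y // scale, x // scale              # aligned LR coordinate
            w_sum = d_sum = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yn, xn = yl + dy, xl + dx
                    if 0 <= yn < depth_lr.shape[0] and 0 <= xn < depth_lr.shape[1]:
                        c_diff = rgb_hr[y, x] - rgb_hr[min(yn * scale, Hh - 1),
                                                       min(xn * scale, Wh - 1)]
                        w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                                   - np.dot(c_diff, c_diff) / (2 * sigma_c ** 2))
                        w_sum += w
                        d_sum += w * depth_lr[yn, xn]
            out[y, x] = d_sum / max(w_sum, 1e-8)
    return out

rng = np.random.default_rng(4)
depth_lr = rng.random((30, 40))                          # e.g. a Kinect depth tile
rgb_hr = rng.random((60, 80, 3)) * 255                   # registered color image
depth_hr = guided_depth_upsample(depth_lr, rgb_hr, scale=2)
```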
248

Constraining cosmic opacity with supernovae and gamma-ray bursts

COSTA, Felipe Sérvulo Maciel. 07 November 2018
About twenty years ago, two groups of researchers studying the apparent brightness of type Ia supernovae (SNe Ia) independently discovered that the current expansion of the universe is accelerated. This discovery launched astronomy into the dark-energy era, an energy component that, within the theory of general relativity, is responsible for the cosmic acceleration. However, the presence of a cosmic opacity in the SNe Ia data may mimic the behavior of a dark component. Nowadays, although the cosmic acceleration is supported by other astronomical observations, a possible presence of opacity in the SNe Ia data can lead to errors in the cosmological parameter estimates. Thus, several works in the literature have investigated the hypothesis of a transparent universe using measurements of luminosity distances of standard candles, such as SNe Ia and gamma-ray bursts (GRBs), together with distances obtained from the cosmic expansion rate H(z), the latter being independent of the cosmic-transparency hypothesis. In this dissertation, we review these works and place new limits on opacity using the latest GRB and H(z) data in the context of the standard cosmological model. We find that the cosmic-transparency hypothesis is in agreement with the data, but the results from the GRB observations, which reach redshifts z > 9, do not exclude the presence of some source of opacity with a high degree of statistical confidence.
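Tests of this kind usually rely on the standard parametrization relating the observed and true luminosity distances through an optical depth τ(z); the notation below is the conventional one from the opacity literature, not quoted from the dissertation:

```latex
% Observed vs. true luminosity distance in the presence of opacity \tau(z):
D_{L,\mathrm{obs}}(z) = D_{L,\mathrm{true}}(z)\, e^{\tau(z)/2}
% which shifts the distance modulus by
\Delta\mu(z) = \mu_{\mathrm{obs}} - \mu_{\mathrm{true}}
             = \frac{5}{2\ln 10}\,\tau(z) \simeq 1.086\,\tau(z)
```

Since distances derived from H(z) are unaffected by opacity, comparing them with the SNe Ia/GRB luminosity distances constrains τ(z), which is commonly modeled as τ(z) = 2εz at low redshift.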
249

New nanoporous and bio-hybrid materials based on inorganic and/or cellulosic nanoparticles: structure/property relationships

Ben Dahou, Dounia, 18 March 2015
This thesis focuses on the preparation, by freeze-drying, of aerogels made from cellulose and mineral fillers intended for potential use in thermal insulation. The first goal of the thesis was the characterization of the different celluloses (cellulose (PBPD), nanocrystals (NCC) and oxidized nanofibrils (NFC)), of the mineral fillers (mainly zeolite) and of the aerogels resulting from various combinations of these starting materials. The starting materials and the aerogels were characterized using analytical techniques such as X-ray diffraction (XRD), BET, SEM and zeta-potential measurements. We also characterized the mechanical properties of the aerogels by compression tests and their thermal conduction properties in the non-steady state by the hot-wire technique. It was found that the multi-scale structuring of these celluloses promotes the creation of meso- and nanoporosity at the expense of macroporosity. This favors the confinement of air in the bio-aerogel through the Knudsen effect and improves its thermal insulation properties. On the other hand, the nanoparticles (organic and inorganic) yield aerogels with very good mechanical properties. The third objective was to try mineral fillers other than zeolite in combination with the different celluloses and to explore the morphological, structural, thermal and mechanical properties of the corresponding aerogels. This study showed the importance of the morphological and geometrical characteristics of the mineral fillers in controlling the physical and mechanical properties of the bio-hybrid aerogels.
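The Knudsen effect invoked in the abstract is usually expressed by the standard relation below for the gas-phase conductivity in a pore of characteristic size d (a textbook formula; the notation and typical values are not taken from the thesis):

```latex
\lambda_{g}(d) = \frac{\lambda_{g,0}}{1 + 2\beta\,\mathrm{Kn}},
\qquad \mathrm{Kn} = \frac{\ell_{\mathrm{mean}}}{d}
```

where \lambda_{g,0} \approx 0.026 W m^-1 K^-1 is the conductivity of free air, \ell_{\mathrm{mean}} \approx 70 nm the mean free path of air molecules at ambient conditions, and \beta a gas/solid interaction coefficient of order 1.5-2. With pore sizes in the meso/nano range, Kn >> 1 and the gas contribution collapses, which is the confinement effect that improves the thermal insulation of the bio-aerogels.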
250

Atomistic simulations of rare events via the super-symmetric method

Landinez Borda, Edgar Josué, 11 March 2010
Advisor: Maurice de Koning. Master's dissertation, Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin.
This thesis addresses the time-scale problem in atomistic simulations, focusing on rare events. Solving this problem is only possible with the development of special techniques. Specifically, we studied the super-symmetric method for finding reaction pathways. This method does not suffer from the usual limitations of other rare-event methods. We applied it to three standard problems and found that it allows rare transitions to be studied without detailed knowledge of the system; in addition, it allows the transition mechanisms to be observed qualitatively.
