  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Preparation, Characterization And Ionic Conductivity Studies On Certain Fast Ionic Conductors

Borgohain, Madhurjya Modhur 06 1900 (has links)
Fast ionic conductors, i.e. materials in which charge transport occurs mainly through the motion of ions, are an important class of materials with immense scope for industrial applications. There are different classes of fast ionic conductors, e.g. polymer electrolytes, glasses and oxide ion conductors, and they find applications as solid electrolytes in batteries, in fuel cells and in electroactive sensors. There are also mixed conductors, in which both ions and electrons are the conducting species; these are used as electrode materials. Specifically, polymer electrolytes [1-3] have been used in lithium polymer batteries, which have many advantages over other secondary batteries. Polymer electrolyte membranes are used in direct methanol fuel cells (DMFC), where the membrane acts as a proton conductor, allowing the protons produced from the fuel (methanol) to pass through. Oxide ion conductors are used in high-temperature solid oxide fuel cells (SOFC), where conduction proceeds via oxygen ion vacancies. Fuel cells are steadily replacing internal combustion engines because they are more energy efficient and environmentally friendly. The present thesis is concerned with the preparation, characterization and conductivity studies of the following fast ionic conductors: (MPEG)xLiClO4 and (MPEG)xLiCF3SO3, where MPEG is methoxy poly(ethylene glycol); the hydrotalcite [Mg0.66Al0.33(OH)2][(CO3)0.17.mH2O]; and the nanocomposite SPE (PEG)46LiClO4 with dispersed nanoparticles of hydrotalcite. We also present our investigation of spin probe electron spin resonance (SPESR) as a technique to determine the glass transition temperature (Tg) of polymer electrolytes in cases where the conventional technique of Tg determination, differential scanning calorimetry (DSC), is not useful owing to the high crystallinity of the polymers. In the following we summarize the main contents of the thesis.
In Chapter 1 we provide a brief introduction to the phenomenon of fast ionic conduction, together with a description of the experimental techniques used and the relevant theories. In most solid polymer electrolytes (SPE), usability is limited by the low value of the ionic conductivity, and a number of routes to enhance the electrical, thermal and mechanical properties of these materials are presently under investigation. One such route is to irradiate the polymer electrolyte with gamma rays, electron beams, ion beams etc. In Chapter 2, we describe our work on the effect of electron beam (e-beam) irradiation on the solid polymer electrolytes (MPEG)xLiClO4 and (MPEG)xLiCF3SO3. The polymer used is methoxy poly(ethylene glycol), or poly(ethylene glycol) methyl ether, of molecular weight 2000; the salts are LiClO4 and LiCF3SO3. The subscript 'x' is a measure of the salt concentration: it is the ratio of the number of ether oxygens in the polymer chain to the number of Li+ ions. The 'x' values chosen are 100, 46, 30 and 16. Nearly one order of magnitude increase in conductivity is observed for the samples (MPEG)100LiClO4 and (MPEG)16LiCF3SO3 on irradiation. The increase in net ionic conductivity is a function of both the irradiation dose and the salt concentration. The enhanced ionic conductivity remains constant for ∼ 100 hrs, which signifies a near-permanent change in the polymer electrolyte system due to irradiation. The samples were also characterized using DSC and Fourier transform infrared spectroscopy (FTIR). The DSC results could be correlated with the conductivity findings, with low Tg values for samples of high conductivity. A small increase in the crystalline fraction of the samples on irradiation was also found, which agrees with earlier reports on samples irradiated at low doses.
FTIR results suggest decreased cross-linking as the reason for the increased ionic conductivity; however, this aspect needs further confirmation before the findings can be termed conclusive. In Chapter 3, we describe our studies on Li-doped hydrotalcite. We report the details of the preparation and characterization of hydrotalcite as well as NMR and ionic conductivity measurements on both doped (with Li+ ions) and undoped hydrotalcite. Hydrotalcite of composition [Mg0.66Al0.33(OH)2][(CO3)0.17.mH2O] was prepared by the co-precipitation method. Samples were prepared with salt (LiClO4) concentrations of 5 %, 10 %, 15 %, 20 % and 25 %; the highest ionic conductivity occurs for the sample with 20 % doping. The 7Li NMR spectra of all the samples clearly show an overlap of a Gaussian and a Lorentzian lineshape: the Gaussian line arises from a less mobile fraction of the 7Li+ ions, and the Lorentzian line from a more mobile fraction. For the 20 % salt concentration, which gives the highest ionic conductivity, room temperature 7Li NMR shows that the mobile fraction of 7Li ions is also at its maximum. Without salt doping, the conductivity of the sample was too small to be measured. Temperature-dependent 1H and 7Li NMR measurements were also performed, for comparison with the ionic conductivities. Another method to obtain enhanced properties in polymer electrolytes is to form 'nanocomposite' polymer electrolytes, in which nanoparticles of certain materials are dispersed in the polymer electrolyte matrix. To date, the nanoparticles used have mostly been metal oxides, e.g. Al2O3, TiO2, MgO and SiO2, and clays such as montmorillonite, laponite and hydrotalcite.
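The two-component 7Li NMR lineshape analysis described above can be sketched numerically. The following is a minimal illustration (not the thesis code): with the component centres and widths assumed known, the Gaussian and Lorentzian amplitudes, and hence the mobile fraction, follow from a linear least-squares fit. All values are synthetic.

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

def lorentzian(x, gamma):
    return gamma**2 / (x**2 + gamma**2)

# Synthetic lineshape: broad Gaussian (less mobile Li+) plus a narrow
# Lorentzian (mobile Li+); widths and centres are assumed known.
x = np.linspace(-50.0, 50.0, 2001)      # frequency offset (arbitrary units)
sigma, gamma = 12.0, 2.0
y = 0.7 * gaussian(x, sigma) + 0.3 * lorentzian(x, gamma)

# With fixed shapes, the two amplitudes follow from linear least squares
A = np.column_stack([gaussian(x, sigma), lorentzian(x, gamma)])
(aG, aL), *_ = np.linalg.lstsq(A, y, rcond=None)

# Mobile fraction estimated from the component areas
area_G = aG * sigma * np.sqrt(2 * np.pi)
area_L = aL * np.pi * gamma
mobile_fraction = area_L / (area_G + area_L)
print(round(aG, 3), round(aL, 3))   # recovers the amplitudes 0.7 and 0.3
```

In practice the widths would be fitted too (a nonlinear fit), but the linear step above is the core of decomposing the spectrum into a "rigid" and a "mobile" contribution.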
Chapter 4 describes the preparation and characterization of the nanocomposite polymer electrolyte (PEG)46LiClO4 formed with hydrotalcite nanoparticles. The polymer used is poly(ethylene glycol) (PEG) of molecular weight 2000, and the salt is LiClO4; the salt concentration is selected to give the highest ionic conductivity for the solid polymer electrolyte. Hydrotalcite belongs to a class of materials called layered double hydroxides (LDH). The composition selected is [Mg0.66Al0.33(OH)2][(CO3)0.17.mH2O], since this is the most stable composition. These materials are easy to prepare at the nanoscale and are used in a number of applications. They are characterized by layers of positively charged double hydroxides separated by layers of anions and water molecules; the water molecules give stability to the structure. Nanoparticles of hydrotalcite were prepared in our laboratory. XRD data confirm the crystal structure of hydrotalcite, and TEM shows the particle size to be ∼ 50 nm. The polymer electrolyte (PEG)46LiClO4 was doped with these nanoparticles at levels of 1.8 %, 2.1 %, 2.7 %, 3.6 % and 4.5 % by weight. Impedance spectroscopy was used to find the ionic conductivity. The sample with a doping of 3.6 % by weight gives the highest ionic conductivity, and the increase in ionic conductivity is nearly one order of magnitude. DSC was used for the thermal characterization of these nanocomposites; the glass transition temperatures Tg found from DSC corroborate the ionic conductivity data, with the lowest Tg for the sample with the highest conductivity. The temperature variation of the ionic conductivity shows Arrhenius behavior. 7Li NMR was performed on the pristine SPE (PEG)46LiClO4 and on the nanocomposite with 3.6 % filler, and the ionic conductivity was also estimated from the temperature variation of the 7Li NMR line widths.
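The Arrhenius analysis of conductivity data of this kind can be illustrated with a short sketch: ln σ is linear in 1/T, so the activation energy falls out of a straight-line fit. The temperatures, prefactor and activation energy below are assumed values for illustration, not measurements from the thesis.

```python
import numpy as np

kB = 8.617e-5                        # Boltzmann constant in eV/K
Ea_true, sigma0 = 0.35, 10.0         # assumed activation energy (eV) and prefactor
T = np.linspace(280.0, 360.0, 9)     # measurement temperatures (K)
sigma = sigma0 * np.exp(-Ea_true / (kB * T))   # Arrhenius law: sigma0*exp(-Ea/kB.T)

# Fit ln(sigma) against 1/T; the slope is -Ea/kB
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_fit = -slope * kB
print(round(Ea_fit, 4))              # recovers the assumed 0.35 eV
```

With real impedance data, σ at each temperature would come from the bulk resistance of the complex-impedance arc rather than from a closed-form expression.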
Studies of the DSC endotherms of the nanocomposites give the fractional crystallinity of the samples. From these studies it can be concluded that the variation in ionic conductivity can be attributed to the change in fractional crystallinity; the nanocomposite polymer electrolyte with the highest ionic conductivity, i.e. the NCPE with a filler concentration of 3.6 %, also has the lowest fractional crystallinity. Additionally, a possible increase in segmental motion, inferred from a reduction in the glass transition temperature coupled with a lowering of the activation energy, may also contribute to the increased ionic conductivity in the nanocomposite polymer electrolyte. The glass transition temperature Tg plays a very important role in the dynamics of polymer electrolytes. In Chapter 5, we explore the possibility of using spin probe electron spin resonance (SPESR) as a tool to determine the glass transition temperature of polymer electrolytes. When the temperature of the polymer is increased across the glass transition, the viscosity of the sample decreases. This corresponds to a transition from a slow tumbling regime with τc ∼ 10⁻⁶ s to a fast tumbling regime with τc ∼ 10⁻⁹ s, where τc is the correlation time of the probe dynamics. Spin probe ESR can be used to probe this transition in polymers. We have used 4-hydroxy-TEMPO (TEMPOL) as the spin probe, dispersed in the nanocomposite polymer electrolyte based on (PEG)46LiClO4 and hydrotalcite. Below and across the glass transition, this nitroxide probe exhibits a powder pattern showing both Zeeman (g) and hyperfine (hf) anisotropy. When the frequency of the dynamics increases such that the jump frequency f is of the same order of magnitude as the anisotropy of the hf interaction, i.e. ∼ 10⁸ Hz, the anisotropy of the interactions averages out and a spectrum with reduced splitting and increased symmetry in the line shape is observed.
This splitting corresponds to the nonvanishing isotropic value of the hyperfine tensor and is observed at a temperature higher than, but correlated with, Tg. The crossover from the anisotropic to the isotropic spectrum is reflected in a sharp reduction in the separation between the two outermost components of the ESR spectrum, which corresponds to twice the z-principal component of the nitrogen hyperfine tensor, 2Azz, from ∼ 75 G to ∼ 35 G. In our study, we varied the concentration of the nano-fillers. The Tg of each sample was estimated from the measurement of T50G and the known correlation between T50G and Tg [4], where T50G is the temperature at which the extrema separation (2Azz) of the ESR spectrum becomes 50 gauss. The values obtained from this method are compared with the values found from DSC on the same samples; within experimental error, the two techniques give reasonably close values. Tg was also estimated from the crossover in the correlation time (τc) versus temperature plot, with τc values calculated using a spectral simulation program. We conclude that spin probe ESR can be an alternative to DSC for polymers with a high crystalline fraction, for which DSC often gives no glass transition signature. In Appendix I, ionic conductivity studies on quenched and gamma-irradiated polymer electrolytes (PEG)46LiClO4 and (MPEG)16LiClO4 are presented. It is observed that (i) samples quenched to 77 K after melting show enhancement of the ionic conductivity by factors of 3 and 4, and (ii) on irradiation, the ionic conductivity decreases at a dose of 5 kGy but then increases for higher doses of 10 kGy and 15 kGy. In Appendix II, the BASIC program (eq-res.bas) used for impedance data analysis is given.
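The T50G determination can be pictured as a simple interpolation: scan the extrema separation 2Azz(T) for the temperature at which it falls through 50 G. The data points below are invented for illustration; real spectra would supply 2Azz at each measured temperature.

```python
import numpy as np

# Invented extrema-separation data 2Azz(T): ~75 G in the rigid limit,
# collapsing towards ~35 G in the fast-motion regime.
T = np.array([240.0, 260.0, 280.0, 300.0, 320.0, 340.0, 360.0])   # K
sep = np.array([74.0, 72.5, 69.0, 58.0, 41.0, 36.5, 35.2])        # gauss

def t50g(temps, seps, level=50.0):
    """Linearly interpolate the temperature where 2Azz drops through `level` G."""
    for i in range(len(seps) - 1):
        if seps[i] >= level > seps[i + 1]:
            frac = (seps[i] - level) / (seps[i] - seps[i + 1])
            return temps[i] + frac * (temps[i + 1] - temps[i])
    raise ValueError("no crossing of %.0f G in the data" % level)

T50 = t50g(T, sep)
print(round(T50, 1))   # 309.4 K for these invented data
```

The published T50G-to-Tg correlation would then convert this crossover temperature into a Tg estimate.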
272

Δημιουργία περιλήψεων από ακολουθίες βίντεο στο συμπιεσμένο πεδίο / Summary generation from video sequences in the compressed domain

Ρήγας, Ιωάννης 08 December 2008 (has links)
In this thesis a video summarization system is constructed. All the required steps (feature extraction, shot detection, keyframe extraction) are carried out in order to extract a set of frames (keyframes) that capture the semantic content of a video sequence. The video is processed directly in the compressed domain, specifically on MPEG-1/2 compressed files, so that results are obtained in relatively little time and with relatively low demands on storage space and processing power.
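The shot-detection step of such a pipeline is often run on "DC images" (the per-block DC coefficients, which are cheap to read from an MPEG-1/2 stream). A minimal sketch, with synthetic DC frames standing in for real bitstream parsing, flags a cut when the histogram difference between consecutive frames jumps:

```python
import numpy as np

rng = np.random.default_rng(0)
shot_a = rng.integers(40, 80, size=(30, 18, 22))    # 30 DC frames of one shot
shot_b = rng.integers(150, 200, size=(20, 18, 22))  # abrupt cut to a brighter shot
frames = np.concatenate([shot_a, shot_b]).astype(float)

def hist_diff(f, g, bins=32):
    """Normalized histogram-difference distance between two DC frames."""
    h1, _ = np.histogram(f, bins=bins, range=(0, 255))
    h2, _ = np.histogram(g, bins=bins, range=(0, 255))
    return np.abs(h1 - h2).sum() / f.size           # 0 (identical) .. 2 (disjoint)

diffs = [hist_diff(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
cuts = [i + 1 for i, d in enumerate(diffs) if d > 1.0]   # fixed threshold, assumed
print(cuts)   # the single cut, at frame index 30
```

A keyframe extractor would then pick one or more representative frames from each detected shot, e.g. the frame closest to the shot's mean histogram.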
273

Comparação de técnicas para a determinação de semelhança entre imagens digitais / Comparison of techniques for determining similarity between digital images

Tannús, Marco Túlio Faissol 25 May 2008 (has links)
The retrieval of similar images from databases is a wide and complex research field with great demand for well-performing applications. The increasing volume of information available on the Internet and the success of textual search engines motivate the development of tools that make image search by content similarity possible. Many features can be used to determine the similarity between images, such as size, color, shape, color variation, texture, and objects and their spatial distribution, among others. Texture and color are the two most important features for a preliminary analysis of image similarity. This dissertation presents many techniques from the literature that analyze texture and color. Some of them were implemented, their performances were compared, and the results are presented in detail. This comparison allows the best techniques to be determined, makes it possible to analyze the applicability of each one, and can serve as a reference for future work. The quantitative performance analyses were done using the ANMRR metric, defined in the MPEG-7 standard, and confusion matrices are presented for each of the tested techniques. Two groups of quantitative tests were performed: the first on a gray-scale texture database and the second on a color image database. For the gray-scale texture experiment, the techniques PBLIRU16, MCNC and their combination gave the best performances; for the color image experiment, the SCD, HDCIG and CSD techniques performed best.
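The ANMRR metric used for these quantitative comparisons is defined in the MPEG-7 standard; a compact sketch of its computation (0 = perfect retrieval, 1 = complete failure) is given below. The rank lists are invented, and the formula is written from the standard definition rather than from the dissertation's code.

```python
# Sketch of the MPEG-7 ANMRR computation.  For each query q with NG(q)
# ground-truth images, retrieval ranks beyond K = min(4*NG, 2*GTM) are
# penalized with 1.25*K; GTM is the largest NG over all queries.
def anmrr(all_ranks):
    gtm = max(len(r) for r in all_ranks)
    scores = []
    for ranks in all_ranks:
        ng = len(ranks)
        k = min(4 * ng, 2 * gtm)
        counted = [r if r <= k else 1.25 * k for r in ranks]
        avr = sum(counted) / ng                    # average rank
        mrr = avr - 0.5 - ng / 2                   # modified retrieval rank
        scores.append(mrr / (1.25 * k - 0.5 - ng / 2))
    return sum(scores) / len(scores)

# Ground truth retrieved at the very top ranks -> 0; nothing retrieved -> 1
perfect = anmrr([[1, 2, 3], [1, 2]])
worst = anmrr([[float("inf")] * 3, [float("inf")] * 2])
print(perfect, worst)   # 0.0 1.0
```

The normalization makes scores comparable across queries with different ground-truth set sizes, which is why it is the reference metric for descriptor comparisons.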
274

Simulação e análise comparativa dos métodos do mecanismo de policiamento dual leaky bucket em chaves ATM para classe de serviço VBR para tráfegos de vídeo / Simulation and comparative analysis of dual leaky bucket policing mechanism methods in ATM switches for the VBR service class for video traffic

Michelle Miranda Pereira 16 October 2002 (has links)
The guarantee of quality of service (QoS) has proven very important for real-time applications. This work presents a study of policing mechanisms in ATM technology, specifically the operation of the dual leaky bucket mechanism used by the VBR service class in ATM networks. For this study, a software simulator of the dual leaky bucket mechanism was implemented. Two kinds of MPEG-2 compressed video traffic, with little and with much movement, were analyzed. The simulation shows how an error in the definition of the QoS contract parameters, set by the user at connection establishment, can lead to an increased information loss rate and, consequently, to degradation of the quality required by the application.
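The dual leaky bucket policer studied here is equivalent to two cascaded GCRAs, one enforcing the peak cell rate (PCR) with a cell delay variation tolerance and one the sustainable cell rate (SCR) with a burst tolerance. A minimal sketch with illustrative parameters (not those of the thesis experiments):

```python
class Gcra:
    """Generic cell rate algorithm, virtual-scheduling form."""
    def __init__(self, increment, limit):
        self.t = increment    # expected inter-cell time for the policed rate
        self.limit = limit    # tolerance (CDVT or burst tolerance)
        self.tat = 0.0        # theoretical arrival time

    def check(self, arrival):
        return arrival >= self.tat - self.limit   # too-early cells do not conform

    def update(self, arrival):
        self.tat = max(arrival, self.tat) + self.t

pcr = Gcra(increment=1.0, limit=0.5)    # peak rate: 1 cell per slot
scr = Gcra(increment=4.0, limit=10.0)   # sustainable rate: 1 cell per 4 slots

arrivals = [0, 1, 2, 3, 4, 5, 6, 20, 24, 28]   # a burst, then a conforming tail
passed = 0
for a in arrivals:
    if pcr.check(a) and scr.check(a):   # cell must conform to both buckets
        pcr.update(a)
        scr.update(a)
        passed += 1
dropped = len(arrivals) - passed
print(passed, dropped)   # 8 2: two cells of the burst exceed the SCR bucket
```

Mis-declaring SCR or the burst tolerance in the traffic contract directly shifts which cells are tagged or discarded, which is the loss-rate effect the simulations quantify.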
275

Aproximações para DCT via pruning com aplicações em codificação de imagem e vídeo / DCT approximations via pruning with applications in image and video coding

COUTINHO, Vítor de Andrade 23 February 2015 (has links)
This work introduces approximate discrete cosine transforms (DCT) based on the pruning approach. Due to its energy compaction property, the DCT is employed in several data compression applications. Although fast algorithms allow efficient DCT computation, multiplication operations are inevitable, and the growing demand for energy-efficient methods calls for new algorithms with reduced computational cost. In this context, DCT approximations have been proposed in recent years. Such approximations allow multiplication-free algorithms that avoid floating-point operations while maintaining compression performance comparable to that of exact DCT-based methods. A further approach to reducing the computational cost of the DCT is pruning, which consists of discarding input and/or output vector coefficients regarded as less significant in terms of concentrated energy; in the case of the DCT, these are the output coefficients associated with the highest-frequency terms. The application of pruning to DCT approximations is a relatively unexplored field of research, and the objective of this work is to combine approximation and pruning to derive extremely low-complexity DCT approximations. The resulting transforms were applied in the image and video compression scenario, and the results showed performance comparable to that of exact methods at a much lower computational cost. A generalization of the pruning concept is presented, together with an analysis of arithmetic complexity and a qualitative and quantitative comparison with a comprehensive list of existing methods.
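The combination of approximation and pruning can be illustrated with a small sketch: here a multiplierless stand-in for the 8-point DCT (the element-wise sign of the exact DCT-II matrix, in the spirit of the signed-DCT family, not one of the transforms proposed in the dissertation) is pruned to its K lowest-frequency outputs, so only additions and subtractions remain.

```python
import numpy as np

N, K = 8, 4
n = np.arange(N)
# Exact orthonormal 8-point DCT-II matrix, for reference
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

T = np.sign(C)        # multiplierless approximation: entries are +/-1
Tp = T[:K, :]         # pruning: keep only the K lowest-frequency outputs

x = np.array([8.0, 10, 12, 11, 9, 7, 6, 5])   # a smooth input vector
approx = Tp @ x                # K outputs, additions/subtractions only
exact = (C @ x)[:K]            # exact low-frequency coefficients, for comparison
print(approx[0])               # DC term of the approximation = sum of x = 68.0
```

Pruning pays off precisely because natural-image blocks concentrate their energy in these first coefficients, so the discarded high-frequency outputs carry little information.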
276

[en] ELASTIC TIME ALGORITHM FOR VIDEO IN MPEG-2 FLOWS / [pt] ALGORITMO DE AJUSTE ELÁSTICO PARA VÍDEO EM FLUXOS MPEG-2

SERGIO ALVES CAVENDISH 09 August 2006 (has links)
[en] In hypermedia presentations, one of the main tasks of the presentation orchestrator is the synchronization of all component objects, which may be achieved by elastic adjustment of the objects' exhibition time, or timescale adaptation. This technique can be applied at compilation time, in order to preserve the synchronization relationships specified by the author, or at presentation time, to prevent temporal mismatches caused by the transmission and execution environments. This work presents a set of mechanisms to carry out timescale adaptation in MPEG-2 Systems and Video streams, proposing algorithms for compression and expansion of the exhibition period (playback dilation), decoder buffer occupancy control, inter-media synchronization and reference clock reconstruction. So that it can be performed at execution time, the whole adjustment process is realized directly on the compressed MPEG-2 stream, requiring no transcoding.
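The core of elastic time adjustment on a compressed stream can be pictured as rescaling the 90 kHz presentation timestamps around a reference point, without touching the coded picture data. A minimal sketch (the dilation factor and PTS values are invented, and real streams would also need DTS and PCR handled consistently):

```python
CLOCK_HZ = 90_000   # MPEG-2 PTS/DTS tick rate

def dilate_pts(pts_list, factor):
    """Stretch (factor > 1) or shrink (factor < 1) the exhibition timeline
    around the first access unit; only timestamps move, coded data stay put."""
    base = pts_list[0]
    return [base + round((p - base) * factor) for p in pts_list]

# One second of 25 fps video: PTS spaced 3600 ticks (90000 / 25) apart
pts = [1000 + i * 3600 for i in range(26)]
stretched = dilate_pts(pts, 1.1)        # 10 % longer exhibition period
print(stretched[1] - stretched[0])      # 3960 ticks between frames
```

Decoder buffer occupancy is what constrains how far the factor can be pushed: stretching the timeline changes when bits are removed from the buffer, so the adaptation algorithms must keep occupancy within the model's bounds.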
277

Elaboration de nanoparticules fonctionnelles : applications comme agents de contraste en IRM / Elaboration of functionalized nanoparticles : applications as MRI contrast agent

Maurizi, Lionel 03 December 2010 (has links)
Spinel-structured iron oxide nanoparticles open the way to biomedical applications of nanomaterials. The superparamagnetic properties of crystallites of about ten nanometres allow their use in medical diagnosis, notably in Magnetic Resonance Imaging (MRI). The aim of this work was to synthesize colloidal suspensions of magnetite or maghemite nanoparticles (called USPIO, for Ultrasmall SuperParamagnetic Iron Oxide) stable under physiological conditions (pH = 7.4 and [NaCl] = 0.15 M). By the classical co-precipitation method, USPIO were obtained with a mean crystallite size of 8 nm, a specific surface area of 110 m².g-1 and an aggregate size of about 20 nm. To stabilize these nano-objects, two routes were investigated. Electrostatic agents (citric acid and DMSA) modified the net surface charge of the iron oxide. Steric stabilization was also explored by grafting methoxy-PEG coupled with silane functions (mPEG-Si), and the combination of mPEG2000-Si and DMSA likewise gave stable suspensions. Moreover, the thiol functions brought by the DMSA and present on the surface of the aggregates are protected from natural oxidation by the steric hindrance of the polymer chains (formation of disulfide bridges is avoided), so post-functionalization of these nanoparticles through the thiol functions remains possible several weeks after synthesis. This concept was validated by post-grafting a fluorophore (0.48 RITC per nm²) for in vitro detection by fluorescence microscopy. In parallel with this batch study, iron oxide nanoparticles were also synthesized continuously using a hydrothermal process that can be extended to the supercritical water domain. Under classical hydrothermal conditions, citrate-stabilized USPIO were obtained in continuous mode, and thanks to the physico-chemical properties of supercritical water, co-precipitation of magnetite was possible without adding a base. The cytotoxicity and cellular internalization of these USPIO were evaluated on three cellular models (RAW macrophages, HepG2 hepatocytes and cardiomyocytes), and their efficiency as MRI contrast agents was measured on gels and on a mouse model and compared with a commercial iron-oxide-based contrast agent. The nanohybrids studied showed no cytotoxicity and developed contrast power comparable to the commercial agent. The hepatic biodistribution of the mPEG-Si-coupled nanoparticles was delayed by more than 3 hours, opening the way to specific detection.
278

Porovnání možností komprese multimediálních signálů / Comparison of Multimedia Signal Compression Possibilities

Špaček, Milan January 2013 (has links)
This thesis deals with a comparison of multimedia signal compression options, focused on video and advanced codecs. Specifically, it describes the encoding and decoding of video recordings according to the MPEG standard. The theoretical part describes the characteristic properties of the video signal and the reasons why compression is needed for recording and transmission. Methods for eliminating redundancy and irrelevance in the encoded video signal are described, as are ways of measuring video signal quality. A separate chapter is devoted to the characteristics of currently used and promising codecs. In the practical part, functions were created in the Matlab environment and integrated into a graphical user interface that simulates the functional blocks of an encoder and decoder. Based on user-specified input parameters, it encodes and decodes a given picture sequence, composed of images in RGB format, and displays the outputs of the individual functional blocks. Algorithms are implemented for the initial processing of the input sequence, including sub-sampling, as well as DCT, quantization, motion compensation and their inverse operations. Separate chapters are dedicated to the realization of the codec in Matlab and to the outputs of the individual processing steps. Comparisons of the compression algorithms and the impact of parameter changes on the resulting signal are also discussed. The findings are summarized in the conclusion.
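The DCT/quantization stage simulated by those functional blocks can be sketched in a few lines; here a single scalar quantizer stands in for the MPEG quantization matrices, and the block data are random.

```python
import numpy as np

N = 8
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)            # orthonormal 8-point DCT-II matrix

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one luma block

coeffs = C @ block @ C.T              # forward 2-D DCT (separable)
q = 16.0                              # assumed uniform quantizer step
quantized = np.round(coeffs / q)      # the lossy step: larger q -> more zeros
recon = C.T @ (quantized * q) @ C     # dequantization + inverse DCT

err = np.abs(recon - block).max()
print(err < 4 * q)                    # worst-case error is bounded by 4*q here
```

In a real MPEG encoder the quantized coefficients are then zig-zag scanned and entropy coded, and inter frames pass through motion compensation before this transform step.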
279

Visual saliency extraction from compressed streams / Extraction de la saillance visuelle à partir de flux compressés

Ammar, Marwa 15 June 2017 (has links)
The theoretical ground for visual saliency was established some 35 years ago by Treisman, who advanced the feature-integration theory for the human visual system: in any visual content, some regions are salient (appealing) because of the discrepancy between their features (intensity, color, texture, motion) and the features of their surrounding areas. This thesis offers a comprehensive methodological and experimental framework for extracting salient regions directly from compressed video streams (namely MPEG-4 AVC and HEVC), with minimal decoding operations. Saliency extraction from the compressed domain is a priori a conceptual contradiction: on the one hand, as suggested by Treisman, saliency is given by visual singularities in the video content; on the other hand, in order to eliminate visual redundancy, compressed streams are no longer expected to preserve singularities. The thesis also brings to light the practical benefit of compressed-domain saliency extraction. In this respect, the case of robust video watermarking is targeted, and it is demonstrated that the saliency map acts as an optimization tool, allowing transparency to be increased (for a prescribed quantity of inserted information and robustness against attacks) while decreasing the overall computational complexity.
As an overall conclusion, the thesis demonstrates both methodologically and experimentally that although the MPEG-4 AVC and HEVC standards do not explicitly rely on any visual saliency principle, their stream syntax elements preserve this remarkable property, linking the digital representation of the video to sophisticated psycho-cognitive mechanisms.
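The center-surround intuition behind feature-integration saliency can be sketched in a few lines (this is an illustrative pixel-domain toy, not the thesis's compressed-domain method): a pixel is salient when its local feature value differs from the average of a wider surround. The radii `r_center` and `r_surround` are assumed parameters.

```python
import numpy as np

def box_blur(img, r):
    # Mean over a (2r+1) x (2r+1) neighborhood via shift-and-add on an edge-padded copy
    h, w = img.shape
    padded = np.pad(img.astype(float), r, mode='edge')
    acc = np.zeros((h, w), dtype=float)
    n = 2 * r + 1
    for dy in range(n):
        for dx in range(n):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / (n * n)

def center_surround_saliency(intensity, r_center=1, r_surround=7):
    # Feature-integration style: saliency = |center response - surround response|
    center = box_blur(intensity, r_center)
    surround = box_blur(intensity, r_surround)
    s = np.abs(center - surround)
    return s / (s.max() + 1e-9)   # normalize to [0, 1]
```

A small bright patch on a dark background scores near 1 at the patch and near 0 in the uniform background; the compressed-domain approach described above replaces pixel intensities with features read from the stream syntax (e.g. block modes and motion information) to avoid full decoding.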
280

A universal virtual machine for reconfigurable video coding

Gorin, Jérôme 22 November 2011 (has links)
This thesis proposes a new application-representation paradigm for virtual machines, capable of abstracting the architecture of computer systems. Current virtual machines rely on a single application representation that abstracts machine instructions, and on an execution model that maps those instructions onto the target machine. While these two models make applications portable across a wide range of systems, they cannot express concurrency between instructions, which is essential for optimizing application processing as the number of processing units in computer systems grows. We first developed a "universal" application representation for virtual machines based on dataflow-graph modeling: an application is modeled by a directed graph whose vertices are computation units (the actors) and whose edges carry the flow of data between them. Each computation unit can be processed independently of the others on separate resources, making instruction-level concurrency in the application explicit.
Exploiting this new description formalism requires a change in programming rules. To that end, we introduced and defined the concept of a "Minimal and Canonical Representation" of an actor, based both on the actor-oriented programming language CAL and on the instruction-abstraction models of existing virtual machines. Our major contribution, which integrates the two proposed representations, is the development of a "Universal Virtual Machine" (UVM) that manages adaptation, optimization, and scheduling mechanisms on top of the Low-Level Virtual Machine (LLVM) compilation infrastructure. The relevance of this UVM is demonstrated in the normative context of MPEG Reconfigurable Video Coding (RVC): MPEG RVC provides reference decoder applications compliant with the MPEG-4 part 2 Simple Profile in the form of dataflow graphs. One application of this thesis is a dataflow-graph description of a decoder compliant with the MPEG-4 part 10 Constrained Baseline Profile, which is twice as complex as the MPEG RVC reference applications. Experimental results show a roughly twofold performance gain on two-core platforms compared with single-core execution, and the optimizations developed yield a further 25% performance gain while halving compilation times. This work demonstrates the operational and universal character of the standard, whose scope extends beyond video to other signal-processing domains (3D, sound, still pictures, ...).
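The actor/dataflow model the abstract describes can be sketched minimally (an illustrative toy, not the RVC-CAL or UVM implementation): actors fire when enough tokens sit on their inputs, and because each firing is independent, a real runtime could schedule ready actors on separate cores. The `needed` token count and the naive scheduler are assumptions for this sketch.

```python
from collections import deque

class Actor:
    """A computation unit that fires when enough tokens are on its input."""
    def __init__(self, fn, needed=1):
        self.fn = fn            # the actor's computation
        self.needed = needed    # tokens consumed per firing
        self.inbox = deque()    # input edge (token queue)
        self.outputs = []       # downstream token queues (graph edges)

    def can_fire(self):
        return len(self.inbox) >= self.needed

    def fire(self):
        args = [self.inbox.popleft() for _ in range(self.needed)]
        result = self.fn(*args)
        for out in self.outputs:
            out.append(result)

def run(actors):
    # Naive sequential scheduler: fire any ready actor until the graph is quiescent.
    # The firings are data-independent, which is exactly the concurrency the
    # dataflow representation makes explicit.
    fired = True
    while fired:
        fired = False
        for a in actors:
            if a.can_fire():
                a.fire()
                fired = True
```

For example, a `double` actor wired to a sink list turns tokens 1, 2, 3 into 2, 4, 6, and a two-input adder (`needed=2`) pairs up its tokens; chaining such actors is the graph form in which MPEG RVC expresses decoders.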
