151

Probabilistic Models for the Analysis of Gene Expression Profiles

Quon, Gerald 16 August 2013
Gene expression profiles are some of the most abundant sources of data about the cellular state of a collection of cells in an organism. Comparison of the expression profiles of multiple samples allows biologists to find associations between observations at the molecular level and the phenotype of the samples. A key challenge is to distinguish variation in expression due to biological factors of interest from variation due to confounding factors that can arise for unrelated technical or biological reasons. This thesis presents models that can explicitly adjust the comparison of expression profiles to account for specific types of confounding factors. One such confounding factor arises when comparing tissue-specific expression profiles across multiple organisms to identify differences in expression that are indicative of changes in gene function. When the organisms are separated by long evolutionary distances, tissue functions may be re-distributed and introduce expression changes unrelated to changes in gene function. We developed Brownian Factor Phylogenetic Analysis, a model that can account for such re-distribution of function, and demonstrated that removing this confounding factor improves tasks such as predicting gene function. Another confounding factor arises because current protocols for expression profiling require RNA extracts from multiple cells. Biological samples are often heterogeneous mixtures of multiple cell types, so the measured expression profile is an average of the RNA levels of the constituent cells. When the biological sample contains both cells of interest and nuisance cells, the confounding expression from the nuisance cells can mask the expression of the cells of interest. We developed ISOLATE and ISOpure, two models for addressing the heterogeneity of tumor samples. We demonstrated that modeling tumor heterogeneity leads to an improvement in two tasks: identifying the site of origin of metastatic tumors, and predicting the risk of death of lung cancer patients.
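A minimal sketch of the mixing model that motivates ISOLATE and ISOpure (not the published algorithms themselves, which are probabilistic and estimate reference profiles jointly): if reference expression profiles for the constituent cell types are available, the mixing fractions of a bulk sample can be recovered by non-negative least squares. All data below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic toy data: G genes, K cell types with known reference profiles.
rng = np.random.default_rng(0)
G, K = 500, 3
references = rng.gamma(shape=2.0, scale=50.0, size=(G, K))  # per-cell-type profiles
true_fractions = np.array([0.6, 0.3, 0.1])                  # e.g. tumor vs. nuisance cells

# A heterogeneous sample measures a weighted average of its cells' RNA levels.
bulk = references @ true_fractions + rng.normal(0.0, 5.0, size=G)

# Recover the mixing proportions by non-negative least squares and renormalize.
fractions, _ = nnls(references, bulk)
fractions /= fractions.sum()
print(np.round(fractions, 3))   # approximately [0.6, 0.3, 0.1]
```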
153

Desconvolução não-supervisionada baseada em esparsidade / Unsupervised deconvolution based on sparsity

Fernandes, Tales Gouveia January 2016
Advisor: Prof. Dr. Ricardo Suyama / Master's dissertation, Universidade Federal do ABC, Graduate Program in Information Engineering, 2016. / This work analyzes the problem of unsupervised (blind) deconvolution of signals, exploiting the sparse character of the signals involved. Unsupervised deconvolution resembles, in many respects, the problem of blind source separation, which consists essentially of estimating signals from observed mixtures of the original signals, referred to simply as sources. To perform unsupervised deconvolution it is necessary to exploit characteristics of the signals and/or the system to help solve the problem. One such characteristic, used in this work, is sparsity. Sparsity refers to signals and/or systems in which all the information is concentrated in a small number of values, which carry the actual information of interest about the signal or the system. In this context, there are criteria establishing sufficient conditions on the signals and/or systems involved that guarantee their deconvolution, and the recovery algorithms rely on these sparsity-based criteria. This work compares the convergence of such algorithms in specific scenarios that define the signal and the system used. The simulation results give a good picture of the behavior of the different algorithms analyzed and of their viability for the deconvolution of sparse signals.
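As an illustration of how sparsity alone can drive blind deconvolution, here is a minimal sketch (an illustrative construction, not one of the algorithms compared in the dissertation): inverse FIR filter taps are fitted so that the filter output is as sparse as possible under the scale-invariant l1/l2 measure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

# Synthetic toy problem: a sparse source convolved with an unknown channel.
rng = np.random.default_rng(1)
n = 400
source = np.zeros(n)
idx = rng.choice(n, size=12, replace=False)
source[idx] = rng.normal(0.0, 1.0, size=12)        # sparse spike train
channel = np.array([1.0, 0.7, -0.3, 0.1])          # unknown convolving system
observed = np.convolve(source, channel, mode="full")[:n]

# Blind deconvolution: choose inverse-filter taps w so that w * observed is
# as sparse as possible; l1/l2 is scale-invariant, avoiding the trivial w = 0.
def sparsity_cost(w):
    out = lfilter(w, [1.0], observed)
    return np.sum(np.abs(out)) / np.linalg.norm(out)

w0 = np.zeros(16)
w0[0] = 1.0                                        # start from the identity filter
res = minimize(sparsity_cost, w0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
recovered = lfilter(res.x, [1.0], observed)        # approximately sparse estimate
```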
154

Comparação de desempenho da deconvolução preditiva multicanal e da filtragem f-k na atenuação de múltiplas do fundo do mar / Performance comparison of multichannel predictive deconvolution and f-k filtering for attenuating sea-bottom multiples

Luz, Marcos Augusto Lima da 18 December 2012
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Seismic reflection is used on a large scale in oil exploration. In marine acquisition, the high impedance contrast at the water/air interface generates multiple reflection events. Such multiples can mask primary events, so from the interpretational viewpoint it is necessary to attenuate them. In this work we compare, using synthetic and real data, two methods of multiple attenuation: Wiener-Levinson multichannel predictive deconvolution (DPM) and F-K filtering (FKF). DPM is based on the periodicity of the multiples, while FKF is based on separating multiples from primaries in the F-K domain. DPM was applied to common-offset gathers and FKF to CDP gathers. The efficiency of DPM is quite sensitive to correct identification of the prediction period and the filter length, while FKF is quite sensitive to an adequate choice of velocity for separating multiple and primary events in the F-K domain. DPM is designed to act on a specific event: when well parameterized it is very efficient at removing the specified multiple, and it can be optimized by applying it several times, each time with a different parameterization. A deficiency of DPM arises when a multiple is superposed on a primary event, in which case DPM can also attenuate part of the primary. FKF, in turn, performs almost equally on all multiples located in the same sector of the F-K domain. The two methods can be combined to exploit their complementary strengths: DPM is applied first, focused on the sea-bottom multiples, and FKF is then applied to attenuate the remaining multiples.
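For readers unfamiliar with gap (predictive) deconvolution, the sketch below shows the Wiener-Levinson construction in its simplest single-channel form; the gap, filter length, and prewhitening values are illustrative, not taken from the dissertation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def predictive_deconvolution(trace, gap, length, prewhitening=0.01):
    """Single-channel Wiener-Levinson gap deconvolution: a filter predicts the
    part of the trace that repeats with a lag >= `gap` samples (the multiples)
    and the prediction is subtracted, leaving the prediction error (primaries)."""
    acf = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    col = acf[:length].copy()
    col[0] *= 1.0 + prewhitening                 # prewhitening stabilizes the solve
    rhs = acf[gap:gap + length]                  # autocorrelation at the prediction lags
    a = solve_toeplitz((col, col), rhs)          # normal equations via Levinson recursion
    pef = np.zeros(gap + length)                 # prediction-error filter:
    pef[0] = 1.0                                 #   1 at lag 0,
    pef[gap:] = -a                               #   -a at lags gap .. gap+length-1
    return lfilter(pef, [1.0], trace)

# Example: the water-bottom multiple period (in samples) sets the gap.
# deconvolved = predictive_deconvolution(trace, gap=120, length=40)
```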
155

Determinação de um novo valor para a entalpia de fusão do cristal perfeito de acetato de celulose / Determination of a new value for the enthalpy of fusion of the perfect cellulose acetate crystal

Cerqueira, Daniel Alves 17 February 2006
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The enthalpy of fusion of a perfect cellulose acetate crystal was calculated in this dissertation. Cellulose samples from different sources were acetylated by a heterogeneous acetylation method, and the resulting cellulose acetate samples were characterized by differential scanning calorimetry (DSC) and wide-angle X-ray diffraction (WAXD). The X-ray diffractograms were deconvolved into halos and peaks using the Pseudo-Voigt peak function of the Origin® 7.0 program. Two hypotheses were proposed to fit the deconvolution results to the two-phase model. In the first, the amorphous region of the material was taken to be represented by the area of the halo located at 21° and the crystalline region by the sum of the areas of the maxima at 8°, 11°, 13° and 16°. In the second hypothesis, the amorphous region was represented by the areas of the maxima at 11° and 21°, and the crystalline region by the maxima at 8°, 13° and 16°. The WAXD crystallinities of the samples were then calculated from these areas. The first hypothesis was discarded because it gave a very high crystallinity value for a sample that showed no enthalpy of fusion. The second hypothesis was adopted, with the linear regression relating enthalpy of fusion to crystallinity forced through the origin. From this relationship, the enthalpy of fusion of a perfect cellulose acetate crystal was calculated to be 58.8 J/g.
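The extrapolation step is simple enough to show in full. In the two-phase model the measured enthalpy of fusion scales linearly with the crystalline fraction, ΔH = ΔH°·Xc, so a regression forced through the origin has slope ΔH°, the enthalpy of fusion of the perfect crystal. The sample numbers below are made up for illustration; only the 58.8 J/g result comes from the dissertation.

```python
import numpy as np

# Hypothetical sample data (not the thesis measurements): WAXD crystalline
# fraction X_c and measured DSC enthalpy of fusion in J/g for five samples.
crystallinity = np.array([0.15, 0.22, 0.30, 0.41, 0.55])
enthalpy = np.array([9.1, 12.7, 17.9, 23.8, 32.6])

# Least-squares slope of a line forced through the origin: sum(xy) / sum(x^2).
dH_perfect = np.sum(crystallinity * enthalpy) / np.sum(crystallinity**2)
print(f"Enthalpy of fusion of the perfect crystal: {dH_perfect:.1f} J/g")  # ~58.9
```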
156

Estudo de técnicas de deconvolução para reconstrução de energia online no calorímetro hadrônico do ATLAS / Study of deconvolution techniques for online energy reconstruction in the ATLAS hadronic calorimeter

Duarte, João Paulo Bittencourt da Silveira 27 August 2015
This work presents a study of signal deconvolution techniques for online energy reconstruction in the first-level trigger of the ATLAS hadronic calorimeter (TileCal). The high-luminosity environment foreseen for the coming years of LHC operation increases the probability of adjacent collisions, producing signal pile-up, and the algorithm currently used for energy reconstruction is not robust against this effect. Here TileCal is treated as a communication channel whose impulse response must be compensated in order to remove the pile-up effect and recover the energy deposited at each collision. The methods developed require an online implementation; FPGAs, being reconfigurable, high-speed devices, were chosen to implement the algorithms. Two classes of deconvolution techniques were evaluated: a direct one based on FIR filters and another based on iterative methods. The iterative techniques improve reconstruction performance because they can exploit the expert knowledge that the reconstructed energy must always be positive. The results show that, under high luminosity, the proposed methods outperform the currently deployed method. As expected, the iterative methods reconstruct the energy with smaller error than the FIR-based techniques, but they are more complex to implement and use more hardware resources.
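A minimal sketch of the contrast described above, using a made-up pulse shape rather than the real TileCal pulse and generic textbook methods rather than the thesis's implementations: an iterative projected-gradient deconvolution clips negative energies at every step, a positivity constraint that a fixed FIR inverse filter cannot express.

```python
import numpy as np

# Made-up discrete pulse shape; the real TileCal pulse is not reproduced here.
pulse = np.array([0.0, 0.3, 1.0, 0.7, 0.3, 0.1])
n = 200
rng = np.random.default_rng(2)
energies = np.zeros(n)
energies[rng.choice(n, size=20, replace=False)] = rng.uniform(1.0, 10.0, 20)

# Convolution matrix mapping deposited energies to the sampled detector signal.
H = np.zeros((n, n))
for j in range(n):
    for k, p in enumerate(pulse):
        if j + k < n:
            H[j + k, j] = p
samples = H @ energies + rng.normal(0.0, 0.05, n)     # piled-up, noisy readout

# Iterative deconvolution with positivity: gradient step on the least-squares
# misfit, then clip, since deposited energy is physically non-negative.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(H, 2) ** 2                # safe step size
for _ in range(500):
    x = np.clip(x + step * H.T @ (samples - H @ x), 0.0, None)
```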
157

New NMR tools for impurity analysis

Power, Jane Elizabeth January 2016
New NMR Tools for Impurity Analysis was written by Jane Power and submitted for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences at the University of Manchester, on 31st March 2016. NMR spectroscopy is rich in structural information and is a widely used technique for structure elucidation and characterization of organic molecules; however, for impurity analysis it is not generally the tool of choice. While 1H NMR is quite sensitive, its narrow chemical shift range (0-10 ppm) and the high abundance of hydrogen atoms in most drugs mean that its resolution is often poor, with much signal overlap; impurity signals, especially those of chemically cognate species, are therefore frequently obscured. 19F NMR, on the other hand, offers extremely high resolution for pharmaceutical applications. It has a far wider chemical shift range (±300 ppm) than 1H NMR, and typical fluorinated drugs, of which there are many on the market, contain only one or two fluorine atoms. In view of this, 19F NMR is considered as an alternative for low-level impurity analysis and quantification, using a chosen example drug, rosuvastatin. Before 19F NMR can be used effectively for such analysis, the significant technical problem of pulse imperfections, such as sensitivity to B1 inhomogeneity and resonance-offset effects, has to be overcome. At present, owing to the limited power of radiofrequency amplifiers, only a fraction of the very wide frequency ranges encountered with nuclei such as fluorine can be excited uniformly at any one time. In this thesis, some of the limitations imposed by pulse imperfections are addressed and overcome. Two new pulse sequences are developed and presented, CHORUS and CHORUS Oneshot, which use tailored, ultra-broadband swept-frequency chirp pulses to achieve uniform constant-amplitude, constant-phase excitation and refocusing over very wide bandwidths (approximately 250 kHz), with no undue B1 sensitivity and no significant loss in sensitivity. CHORUS, for use in quantitative NMR, is demonstrated to give accuracies better than 0.1%. CHORUS Oneshot, a diffusion-ordered spectroscopic technique, exploits the exquisite sensitivity of the 19F chemical shift to its local environment, giving excellent resolution and allowing accurate discrimination between diffusion coefficients with high dynamic range over very wide bandwidths. Sulfur hexafluoride (SF6) is investigated and shown to be a suitable reference material for 19F NMR: the bandshape of the fluorine signal and its satellites is simple, without complex splitting patterns, and therefore good for reference deconvolution, and it is sufficiently soluble in the solvent of choice, DMSO-d6. To demonstrate the functionality of the CHORUS sequences for low-level impurity analysis, 470 MHz 1H-decoupled 19F spectra were acquired on a 500 MHz Bruker system, using a degraded sample of rosuvastatin, to reveal two low-level impurities. Using a standard Varian probe with a single high-frequency channel, simultaneous 1H irradiation and 19F acquisition was made possible by time-sharing. Simultaneous 19F{1H} and 19F{13C} double decoupling was then performed using degraded and fresh samples of rosuvastatin, to reveal three low-level impurities (in the degraded sample) and low-level 1H and 13C modulation artefacts.
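Of the techniques named here, reference deconvolution is the easiest to show compactly. The sketch below gives the core algebra only (a schematic under stated assumptions, not the thesis workflow): the experimental FID is divided by the FID of the measured reference peak and multiplied by an ideal reference FID, so instrumental lineshape distortions common to both cancel. The function name and parameters are illustrative.

```python
import numpy as np

def reference_deconvolve(fid, ref_fid_measured, t, ideal_width_hz=1.0, eps=1e-8):
    """Correct an FID for instrumental lineshape errors using a reference peak.
    `fid` and `ref_fid_measured` are complex time-domain signals on time grid `t`
    (seconds); the corrected FID is Fourier-transformed afterwards as usual."""
    ideal_ref = np.exp(-np.pi * ideal_width_hz * t)    # ideal Lorentzian decay
    correction = ideal_ref / (ref_fid_measured + eps)  # inverse of the distortion
    return fid * correction
```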
158

Blind inverse imaging with positivity constraints / Inversion aveugle d'images avec contraintes de positivité

Lecharlier, Loïc 09 September 2014
In inverse problems in imaging, the operator or matrix describing the image-formation system is generally assumed known; equivalently, for a linear system, its impulse response is assumed known. This is not a realistic assumption for many practical applications, in which the operator is unknown or only approximately known. One then faces a so-called "blind" inversion problem. For translation-invariant systems one speaks of "blind deconvolution", since both the original image (object) and the impulse response must be estimated from the single observed image, which results from a convolution and is affected by measurement errors. This problem is notoriously difficult, and to overcome the ambiguities and numerical instabilities inherent in this type of inversion one must resort to additional information or constraints, such as positivity, which has proven a powerful stabilizing lever in non-blind imaging problems. The thesis proposes new blind inversion algorithms in a discrete or discretized setting, under the assumption that the unknown image, the matrix to be inverted, and the data are positive. The problem is formulated as a (non-convex) optimization problem in which the data-fidelity term to be minimized, modeling either Poisson data (Kullback-Leibler divergence) or Gaussian noise (least squares), is augmented with penalty terms on the unknowns. The optimization strategy consists of alternating multiplicative updates of the image to be reconstructed and of the matrix to be inverted, derived from the minimization of surrogate cost functions valid in the positive case. The fairly general framework accommodates several types of penalties, including the (smoothed) total variation of the image, and an optional normalization of the impulse response or matrix at each iteration. Convergence results for these algorithms are established in the thesis, both for the decrease of the cost functions and for the convergence of the sequence of iterates to a stationary point. The proposed methodology is successfully validated by numerical simulations in several applications, such as blind deconvolution of astronomical images, nonnegative matrix factorization for hyperspectral imaging, and density deconvolution in statistics. / Doctorate in Sciences
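The alternating multiplicative scheme described here has a well-known unpenalized special case: blind Richardson-Lucy deconvolution for the Kullback-Leibler (Poisson) fit. The sketch below is that generic special case under circular convolution, not the thesis's penalized algorithms.

```python
import numpy as np

def blind_positive_deconv(y, n_iter=200, eps=1e-12):
    """Alternating multiplicative updates for y ~ h (*) x with x, h, y >= 0,
    minimizing the Kullback-Leibler data fit (blind Richardson-Lucy).
    Circular convolution via FFT keeps the two updates symmetric."""
    n = len(y)
    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    corr = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
    x = np.full(n, y.mean())                 # positive initial image
    h = np.full(n, 1.0 / n)                  # positive, normalized kernel
    for _ in range(n_iter):
        x *= corr(y / (conv(h, x) + eps), h) / h.sum()   # image update, keeps x >= 0
        h *= corr(y / (conv(h, x) + eps), x) / x.sum()   # kernel update, same form
        h /= h.sum()                         # fix the scale ambiguity between h and x
    return x, h
```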
159

Využití dekonvoluce v digitální fluorescenční mikroskopii kvasinek / Deconvolution fluorescence microscopy of yeast cells

Štec, Tomáš January 2015
Title: Deconvolution fluorescence microscopy of yeast cells. Author: Tomáš Štec. Department: Institute of Physics of Charles University. Supervisor: prof. RNDr. Jaromír Plášek, CSc., Institute of Physics of Charles University. Abstract: Fluorescence microscopy is a fast and cheap alternative to more advanced imaging methods such as confocal and electron microscopy, even though it is subject to heavy image distortion. Most of the original distortion-free image can be recovered by deconvolution in computer image processing, which allows reconstruction of the 3D structure of the studied objects. The deconvolution procedure of the NIS Elements AR program is subjected to a thorough inspection in this diploma thesis and is then applied to restoring the 3D structure of the calcofluor-stained cell wall of the budding yeast Saccharomyces cerevisiae. Changes in the structure of the cell wall during cell ageing are examined: the cell wall of aged cells shows increased surface roughness and even ruptures at the end of cell life. Keywords: fluorescence, microscopy, deconvolution, NIS Elements AR, calcofluor, yeast, cell wall, ageing
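As a concrete reference point, the classic Richardson-Lucy algorithm is available off the shelf; it is the standard iterative deconvolution behind many microscopy packages, though whether NIS Elements AR uses it internally is proprietary, so this is illustration only. The `num_iter` keyword assumes a recent scikit-image release, and the data are synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# Synthetic ground truth: a bright square standing in for a stained cell wall.
rng = np.random.default_rng(3)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0

# A Gaussian blur as a stand-in point-spread function (PSF).
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = gaussian_filter(psf, sigma=2.0)
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode="same") + 0.01 * rng.random((64, 64))
restored = richardson_lucy(blurred, psf, num_iter=30, clip=False)
```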
160

Estimation of Pareto distribution functions from samples contaminated by measurement errors

Kondlo, Lwando Orbet January 2010
Magister Scientiae - MSc / The intention is to draw more specific connections between certain deconvolution methods and to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher information matrix and by a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrap method is used to calculate the critical values of the K-S test statistic, which are not available in standard tables when the parameters are estimated from the data.
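A parametric bootstrap for K-S critical values, of the kind referred to here, can be sketched compactly. Plain Pareto is used for brevity; the thesis works with a Pareto convolved with a Gaussian error density, and this is not its code.

```python
import numpy as np
from scipy import stats

def ks_critical_value(b_hat, scale_hat, n, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap the alpha-level critical value of the K-S statistic when the
    Pareto parameters are estimated from the data (standard tables then fail)."""
    rng = np.random.default_rng(seed)
    ks = np.empty(n_boot)
    for i in range(n_boot):
        sample = stats.pareto.rvs(b_hat, scale=scale_hat, size=n, random_state=rng)
        b, loc, scale = stats.pareto.fit(sample, floc=0)   # re-fit on each resample
        ks[i] = stats.kstest(sample, "pareto", args=(b, loc, scale)).statistic
    return np.quantile(ks, 1.0 - alpha)
```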
