31

M-ary Runlength Limited Coding and Signal Processing for Optical Data Storage

Licona-Nunez, Jorge Estuardo 12 April 2004 (has links)
Recent attempts to increase the capacity of the compact disc (CD) and digital versatile disc (DVD) have explored the use of multilevel recording instead of binary recording. Systems that achieve an increase in capacity of about three times that of the conventional CD have been proposed for production. Marks in these systems are multilevel and fixed-length, as opposed to the binary, variable-length marks of CD and DVD. The main objective of this work is to evaluate the performance of multilevel ($M$-ary) runlength-limited (RLL) coded sequences in optical data storage. First, the waterfilling capacity of a multilevel optical recording channel ($M$-ary ORC) is derived and evaluated. This provides insight into the achievable user bit densities, as well as a theoretical limit against which simulated systems can be compared. Then, we evaluate the performance of RLL codes on the $M$-ary ORC. A new channel model that includes the runlength constraint in the transmitted signal is used. We compare the performance of specific RLL codes, namely $M$-ary permutation codes, to that of real systems that use multilevel fixed-length marks for recording, and to the theoretical limits. The Viterbi detector is used to estimate the original recorded symbols from the readout signal. Then, error correction is used to reduce the symbol error probability. We use a combined ECC/RLL code for phrase encoding, and we evaluate the use of trellis-coded modulation (TCM) for amplitude encoding. The detection of the readout signal is also studied. A post-processing algorithm for the Viterbi detector is introduced, which ensures that the detected word satisfies the code constraints. Specifying the codes and detector for the $M$-ary ORC gives a complete system whose performance can be compared to that of the recently developed systems found in the literature and to the theoretical limits calculated in this research.
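As a minimal illustration of the runlength constraint referred to above (not code from the thesis), the following Python sketch checks whether an M-ary symbol sequence satisfies a (d, k) constraint under one common convention, in which every run of a repeated symbol must be between d and k symbols long; the function name and the convention are assumptions made here for illustration only.

    def satisfies_rll(symbols, d, k):
        """Check whether an M-ary symbol sequence satisfies a (d, k) runlength
        constraint: every run of a repeated symbol must be at least d and at
        most k symbols long (one common convention; definitions vary)."""
        if not symbols:
            return True
        run = 1
        for prev, cur in zip(symbols, symbols[1:]):
            if cur == prev:
                run += 1
            else:
                if not (d <= run <= k):
                    return False
                run = 1
        return d <= run <= k

    # Example: a quaternary (M = 4) sequence whose runs all have length 2-4
    print(satisfies_rll([0, 0, 2, 2, 2, 1, 1, 3, 3, 3, 3], d=2, k=4))  # True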
32

The Quantitative Investigation of LCModel BASIS Using GAMMA Visual Analysis (GAVA) for in vivo 1H MR Spectroscopy

Huang, Chia-Min 05 August 2010 (has links)
Magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) have been developed and applied in clinical studies because of their non-invasive nature. With the growing clinical interest in MRS, many post-processing tools have been developed, among which LCModel is one of the most popular. LCModel estimates absolute metabolite concentrations in vivo according to a basis file, so basis files play an important role in the accuracy of the estimated concentrations. The default LCModel basis sets were made from phantom experiments; however, some metabolites are difficult to obtain for phantom preparation, so the default basis sets lack them. To work around this limitation, LCModel provides a method called "spectra offering". In this study, we use the GAMMA Visual Analysis (GAVA) software to create basis sets and compare the shape of the LCModel default basis sets with that of the GAVA basis sets. Metabolites that are not included in the LCModel phantom experiments are also generated. Finally, we estimate absolute concentrations in normal subjects and in patients using each of the two kinds of basis sets. The LCModel "spectra offering" method for appending extra metabolites to a basis set is applicable to metabolites with singlet resonances, but not to those with J-coupled resonances. Our results demonstrate that using a GAVA-simulated basis set leads to quantitative results different from those obtained with in vitro basis sets. We believe that using the GAVA simulation as the basis set would provide better consistency among all metabolites and thus more accurate quantification of MRS.
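For context on what a simulated basis component looks like (a hedged sketch, not part of the study above), the Python snippet below generates a unit-area Lorentzian line for a singlet resonance; J-coupled multiplets require a full spin-system simulation of the kind GAMMA/GAVA performs and are not captured here. The field strength, T2, and example chemical shifts are illustrative assumptions.

    import numpy as np

    def singlet_basis(ppm_axis, shift_ppm, t2_ms=80.0, field_mhz=127.7):
        """Unit-area Lorentzian lineshape for a singlet metabolite resonance,
        the kind of component a simulated basis set contributes for uncoupled
        spins. Parameter values (T2, 3T proton frequency) are illustrative."""
        hz = (ppm_axis - shift_ppm) * field_mhz       # offset from resonance in Hz
        fwhm = 1.0 / (np.pi * t2_ms * 1e-3)           # linewidth implied by T2
        return (fwhm / 2) / np.pi / (hz**2 + (fwhm / 2) ** 2)

    ppm = np.linspace(0.5, 4.5, 2048)
    naa = 1.0 * singlet_basis(ppm, 2.01)   # NAA CH3 singlet near 2.01 ppm
    cr  = 0.8 * singlet_basis(ppm, 3.03)   # creatine CH3 singlet near 3.03 ppm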
33

Development of Multi-console Analysis Tool for 2D MR Spectroscopic Imaging with LCModel

Hsueh, Po-Tsung 22 July 2008 (has links)
Magnetic resonance (MR) has been developed and applied in clinical studies because of its non-invasive nature. With the increasing interest in applying magnetic resonance spectroscopic imaging (MRSI) clinically, some post-processing software packages, such as LCModel, provide a graphical user interface for convenient and efficient analysis. However, LCModel does not provide features for combining MR imaging (MRI) with MRS information or for browsing all analyzed results. Our study proposes a method to implement an architecture for processing General Electric (GE) and Siemens MRSI data sets, and it provides features including interactive display, selection, and analysis of full 2D slices. For multi-console analysis, our tool also combines MRS, MRI, and the data sets generated by LCModel, such as the projections of the three planes and the metabolite/spectra maps, so that the three formats of data sets can be obtained from scanners of various manufacturers. Processing GE data sets is particularly complicated, so several mechanisms are proposed for it, such as the coordinate transformation, the detection of the three-plane localizer images, and MRSI detection. Additionally, our tool is designed to accommodate further extensions, making it more flexible and useful for clinical applications.
34

Advanced Real-time Post-Processing using GPGPU techniques

Lönroth, Per, Unger, Mattias January 2008 (has links)
Post-processing techniques are used to change a rendered image as a last step before presentation and include, but are not limited to, operations such as changes of saturation or contrast, as well as more advanced effects like depth-of-field and tone mapping.

Depth-of-field effects are created by changing the focus in an image; the parts close to the focus point are perfectly sharp while the rest of the image has a variable amount of blurriness. The effect is widely used in photography and film as a depth cue and has in recent years also been introduced into computer games.

Today's graphics hardware offers new possibilities in terms of computation capacity. Shaders and GPGPU languages can be used to perform massively parallel operations on graphics hardware and are well suited for game developers.

This thesis presents the theoretical background of some of the most recent and valuable depth-of-field algorithms and describes the implementation of various solutions in the shader domain as well as with GPGPU techniques. The main objective is to analyze various depth-of-field approaches, looking at their visual quality and at how the methods scale performance-wise across the different techniques.
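As a generic illustration of the depth-of-field principle described above (not the thesis implementation), the sketch below computes the thin-lens circle-of-confusion diameter that commonly drives the per-pixel blur radius in a depth-of-field post-process; the function and parameter names are assumptions.

    def circle_of_confusion(depth, focal_length, f_stop, focus_dist):
        """Thin-lens circle-of-confusion diameter (same units as focal_length)
        for an object at `depth`, given the focus distance; in a depth-of-field
        post-process this value is typically mapped to a per-pixel blur radius."""
        aperture = focal_length / f_stop
        return abs(aperture * focal_length * (depth - focus_dist) /
                   (depth * (focus_dist - focal_length)))

    # e.g. an object 3 m away with focus at 2 m, using a 50 mm lens at f/2.8
    # (all distances in mm): returns roughly 0.15 mm of blur diameter.
    print(circle_of_confusion(3000.0, 50.0, 2.8, 2000.0))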
35

Metric Optimized Gating for Fetal Cardiac MRI

Jansz, Michael 01 January 2011 (has links)
Phase-contrast magnetic resonance imaging (PC-MRI) can provide a complement to echocardiography for the evaluation of the fetal heart. Cardiac imaging typically requires gating with peripheral hardware; however, a gating signal is not readily available in utero. In this thesis, I present a technique for reconstructing time-resolved fetal phase-contrast MRI in spite of this limitation. Metric Optimized Gating (MOG) involves acquiring data without gating and retrospectively determining the proper reconstruction by optimizing an image metric, and the research in this thesis describes the theory, implementation, and evaluation of this technique. In particular, results from an experiment with a pulsatile flow phantom, an adult volunteer study, in vivo application in the fetal population, and numerical simulations are presented for validation. MOG enables imaging with conventional PC-MRI sequences in the absence of a gating signal, permitting flow measurements in the great vessels in utero.
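To illustrate the retrospective-gating idea described above (a toy one-dimensional stand-in, not the actual MOG reconstruction or its image metric), the sketch below scores candidate R-R intervals by how much structured pulsatile variation the binned cardiac phases recover; the function names, the averaging "reconstruction", and the variance-based score are all assumptions made for illustration.

    import numpy as np

    def mog_toy(timestamps, samples, rr_candidates, n_frames=16):
        """Toy retrospective gating: for each candidate R-R interval, sort the
        ungated samples into cardiac-phase bins and score the resulting frame
        series; with the correct interval, samples sharing a phase are alike,
        so the binned means recover a pulsatile waveform (high variance), while
        a wrong interval mixes phases and flattens it. Inputs are numpy arrays."""
        best_rr, best_score = None, -np.inf
        for rr in rr_candidates:
            phases = (timestamps % rr) / rr                      # phase in [0, 1)
            bins = np.minimum((phases * n_frames).astype(int), n_frames - 1)
            frames = np.array([samples[bins == b].mean() if np.any(bins == b) else 0.0
                               for b in range(n_frames)])
            score = frames.var()
            if score > best_score:
                best_rr, best_score = rr, score
        return best_rr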
37

Influência do tipo e da técnica de aplicação de agente infiltrante na resistência mecânica de componentes produzidos por manufatura aditiva (3DP) / Influence of the type and the technique of application of infiltrating agent on the mechanical strength of components produced by additive manufacture (3DP)

Mello, Silvia Teixeira de. January 2017 (has links)
Advisor: Ruis Camargo Tokimatsu / Abstract: Over the past two decades, the contribution of additive manufacturing has shifted from producing a mere prototype of a product at the beginning of its development to producing any direct product, present in all industrial sectors. With this advance, different additive manufacturing technologies have emerged with the aim of improving certain production parameters. Among them, three-dimensional printing (3DP) technology, owing to its various intrinsic characteristics, stands out for serving the biomedical sector through the biomodelling technique, which contributes immensely, in a didactic and practical way, to the performance of surgeries. However, the parts obtained by this technology have some final limitations that must be overcome, which calls for an additional treatment of these parts, the post-processing, in order to improve them and thus ensure their successful use. In this work, the 3DP additive manufacturing technology was adopted to study how the addition of different infiltrating agents influences the increase in apparent density and mechanical strength of samples made of gypsum components, consisting of cylindrical and prismatic specimens, in order to simulate the best composition for biomodels. For this, the post-processing was divided into two stages. In the first stage, four types of ethyl cyanoacrylate based adhesives were applied separately to the samples by dripping, and one epoxy-based adhesive by molding with... (Full abstract: follow the electronic access link below) / Master's
38

Statistical methods for post-processing ensemble weather forecasts

Williams, Robin Mark January 2016 (has links)
Until recent times, weather forecasts were deterministic in nature. For example, a forecast might state ``The temperature tomorrow will be $20^\circ$C.'' More recently, however, increasing interest has been paid to the uncertainty associated with such predictions. By quantifying the uncertainty of a forecast, for example with a probability distribution, users can make risk-based decisions. The uncertainty in weather forecasts is typically based upon `ensemble forecasts'. Rather than issuing a single forecast from a numerical weather prediction (NWP) model, ensemble forecasts comprise multiple model runs that differ in either the model physics or initial conditions. Ideally, ensemble forecasts would provide a representative sample of the possible outcomes of the verifying observations. However, due to model biases and inadequate specification of initial conditions, ensemble forecasts are often biased and underdispersed. As a result, estimates of the most likely values of the verifying observations, and the associated forecast uncertainty, are often inaccurate. It is therefore necessary to correct, or post-process, ensemble forecasts, using statistical models known as `ensemble post-processing methods'. To this end, this thesis is concerned with the application of statistical methodology in the field of probabilistic weather forecasting, and in particular ensemble post-processing. Using various datasets, we extend existing work and propose the novel use of statistical methodology to tackle several aspects of ensemble post-processing. Our novel contributions to the field are the following. In chapter~3 we present a comparison study for several post-processing methods, with a focus on probabilistic forecasts for extreme events. We find that the benefits of ensemble post-processing are larger for forecasts of extreme events, compared with forecasts of common events. We show that allowing flexible corrections to the biases in ensemble location is important for the forecasting of extreme events. In chapter~4 we tackle the complicated problem of post-processing ensemble forecasts without making distributional assumptions, to produce recalibrated ensemble forecasts without the intermediate step of specifying a probability forecast distribution. We propose a latent variable model, and make a novel application of measurement error models. We show in three case studies that our distribution-free method is competitive with a popular alternative that makes distributional assumptions. We suggest that our distribution-free method could serve as a useful baseline on which forecasters should seek to improve. In chapter~5 we address the subject of parameter uncertainty in ensemble post-processing. As in all parametric statistical models, the parameter estimates are subject to uncertainty. We approximate the distribution of model parameters by bootstrap resampling, and demonstrate improvements in forecast skill by incorporating this additional source of uncertainty into out-of-sample probability forecasts. In chapter~6 we use model diagnostic tools to determine how specific post-processing models may be improved. We subsequently introduce bias correction schemes that move beyond the standard linear schemes employed in the literature and in practice, particularly in the case of correcting ensemble underdispersion. Finally, we illustrate the complicated problem of assessing the skill of ensemble forecasts whose members are dependent, or correlated. We show that dependent ensemble members can result in surprising conclusions when employing standard measures of forecast skill.
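As a hedged sketch of one widely used ensemble post-processing approach (EMOS/non-homogeneous Gaussian regression, which may differ from the specific methods compared in this thesis), the Python code below fits a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance; all names are illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_emos(ens, obs):
        """Fit N(a + b*ens_mean, c + d*ens_var) by maximum likelihood on a
        training set: ens is (n_days, n_members) of past ensemble forecasts,
        obs is (n_days,) of verifying observations. Returns (a, b, c, d)."""
        m, v = ens.mean(axis=1), ens.var(axis=1)

        def nll(theta):
            a, b, log_c, log_d = theta
            mu = a + b * m
            sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * v)
            return -norm.logpdf(obs, loc=mu, scale=sigma).sum()

        res = minimize(nll, x0=np.array([0.0, 1.0, 0.0, 0.0]), method="Nelder-Mead")
        a, b, log_c, log_d = res.x
        return a, b, np.exp(log_c), np.exp(log_d)

    # The fitted parameters then recalibrate new forecasts: the predictive
    # distribution for a new ensemble is N(a + b*mean, c + d*variance).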
39

Influência do tipo e da técnica de aplicação de agente infiltrante na resistência mecânica de componentes produzidos por manufatura aditiva (3DP) / Influence of the type and the technique of application of infiltrating agent on the mechanical strength of components produced by additive manufacture (3DP)

Mello, Silvia Teixeira de [UNESP] 30 August 2017 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Over the past two decades, the contribution of additive manufacturing has shifted from a mere prototype of a product at the beginning of its development to the production of any direct product, present in all industrial sectors. With this advance, different additive manufacturing technologies appeared with the intention of improving some production parameters. In this environment, three-dimensional printing (3DP) technology, due to its various intrinsic characteristics, stands out for serving the biomedical sector through the biomodelling technique, which contributes immensely, in a didactic and practical way, to the performance of surgeries. However, there are some final limitations in the parts obtained by this technology that must be overcome, which calls for an additional treatment of these parts, the post-processing, in order to upgrade them and thus ensure their successful use. In this study, 3DP additive manufacturing technology was adopted to study how different infiltrating agents influence the increase in apparent density and mechanical strength of samples made of gypsum components, constituted by cylindrical and prismatic specimens, in order to simulate the best composition for biomodels. For this, the post-processing was divided into two stages. In the first stage, four types of ethyl cyanoacrylate based adhesives were applied separately to the samples by dripping, and one epoxy-based adhesive by shovel molding. In the second stage, also separately, four types of ethyl cyanoacrylate based adhesives were applied to the samples by dripping and by dip coating, and the epoxy-based adhesive by shovel molding. Besides the adhesive application methods, the two stages also differ in the binders used to constitute the gypsum-based samples. In both stages, the best result was obtained with the very low viscosity cyanoacrylate adhesive, which produced the largest increases in apparent density and strength in the samples.
40

Contribution à l’amélioration des performances de décodage des turbo codes : algorithmes et architecture / Contribution to the improvement of the decoding performance of turbo codes : algorithms and architecture

Tonnellier, Thibaud 05 July 2017 (has links)
Since their introduction in the 1990s, turbo codes have been considered one of the most powerful classes of error-correcting codes. Thanks to their excellent trade-off between computational complexity and decoding performance, they have been adopted in many communication standards. One way to characterize error-correcting codes is the evolution of the bit error rate as a function of the signal-to-noise ratio (SNR). The error-rate performance of turbo codes is divided into two regions: the waterfall region and the error floor region. In the waterfall region, a slight increase in SNR results in a significant drop in error rate. In the error floor region, the error-rate performance improves only slightly as the SNR grows. This error floor can prevent turbo codes from being used in applications with low error-rate requirements. Therefore, various construction optimizations that lower the error floor of turbo codes have been proposed in recent years by the scientific community. However, these approaches cannot be applied to already standardized turbo codes. This thesis addresses the problem of lowering the error floor of turbo codes without allowing any modification of the digital communication chain at the transmitter side. For this purpose, the state of the art in post-processing decoding methods for turbo codes is reviewed. It appears that the efficient solutions are expensive to implement, because they require a multiplication of the computational resources or strongly impact the overall decoding latency. Firstly, two decoding algorithms based on the monitoring of the decoder's internal metrics are proposed. The first improves the convergence of the turbo decoder; the second, however, only marginally lowers the error floor. The study then shows that, in the error floor region, the frames decoded by the turbo decoder are very close to the codeword originally transmitted. This is demonstrated through an analytical prediction of the distribution of the number of erroneous bits per erroneous frame, which rests on the distance spectrum of the turbo code. Since the error floor is caused by only a few erroneous bits, a metric to identify them is proposed. This leads to the proposal of an algorithm that can correct these residual errors. The algorithm, called Flip-and-Check, rests on the generation of candidate words followed by verification with an error-detecting code. Thanks to this decoding algorithm, the error floor of the turbo codes encountered in different standards (LTE, CCSDS, DVB-RCS and DVB-RCS2) is lowered by one order of magnitude, while keeping the computational complexity reasonable. Finally, a hardware decoding architecture implementing the Flip-and-Check algorithm is presented. A preliminary study of the impact of the different parameters of the algorithm is carried out; it leads to the definition of optimal values for some of these parameters, while others have to be adapted according to the targeted gains in decoding performance. This hardware architecture demonstrates that the algorithm can be integrated alongside existing turbo decoders, thereby lowering the error floor of the turbo codes present in the various telecommunication standards.
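As a simplified sketch of the Flip-and-Check principle described above (not the thesis' algorithm or hardware implementation), the Python code below flips small subsets of the least reliable bits and accepts the first candidate that passes an error-detecting check; the crc_check callable and the parameter defaults are assumptions for illustration.

    from itertools import combinations

    def flip_and_check(hard_bits, llrs, crc_check, q=8, max_flips=2):
        """Toy flip-and-check: take the q least reliable bits (smallest |LLR|)
        of the turbo decoder's hard decision, generate candidate words by
        flipping small subsets of them, and return the first candidate accepted
        by the error-detecting code (crc_check is assumed to be supplied,
        e.g. the standard's CRC)."""
        if crc_check(hard_bits):
            return hard_bits
        order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:q]
        for n in range(1, max_flips + 1):
            for positions in combinations(order, n):
                candidate = list(hard_bits)
                for i in positions:
                    candidate[i] ^= 1          # flip the selected bit
                if crc_check(candidate):
                    return candidate
        return hard_bits                        # no candidate passed the check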
