61

Quelques résultats sur la percolation d'information dans les marchés OTC. / Some results on information percolation in OTC markets.

Bayade, Sophia January 2014 (has links)
Résumé: The main characteristic of OTC (over-the-counter) markets is the absence of a centralized trading mechanism (such as auctions, specialists, or limit-order books). Buyers and sellers are therefore often unaware of the prices currently available from other potential counterparties and have limited knowledge of the size of trades recently negotiated elsewhere in the market. For this reason, OTC markets are described as relatively opaque and are called "Dark Markets" by Duffie (2012) in his recent monograph, to reflect the fact that investors are somewhat in the dark about the best available price and about whom to contact to obtain the best deal. In this work, we are particularly interested in how the transmission of information evolves over time across trading sessions. More precisely, we seek to establish the asymptotic stability of the information-sharing dynamics within a large population of investors characterized by the frequency/intensity of meetings between investors. The optimal search effort exerted by an agent seeking information depends on that agent's current level of information and on the cross-sectional distribution of the search efforts of the other agents. In the framework of Duffie-Malamud-Manso (2009), in equilibrium, agents search maximally until the quality of their information reaches a certain level, which triggers a new phase of minimal search. In the context of information percolation among agents, information can be transmitted perfectly or imperfectly. The first study of this percolation problem was carried out by Duffie-Manso (2007), followed by Duffie-Giroux-Manso (2010). In that second study, the case of information percolation through groups of more than two investors was addressed and solved. The latter study led to the problem of extending Wild sums in Bélanger-Giroux (2013). On the other hand, in Duffie-Malamud-Manso (2009), each agent is endowed with signals about the likely outcome of a random variable of common concern, in a setting of information transmission within a large population of agents. Such a setting leads to systems of nonlinear evolution equations. Their objective is to obtain an equilibrium policy determined by a set of parameters of a trigger policy, reflecting the fact that the search effort must be minimal once an agent holds enough information. In this work, we obtain the existence of the steady state even when the intensity function is not a product. Moreover, we show asymptotic stability for any initial law by means of a change of kernels. Finally, we extend the hypotheses of Bélanger-Giroux (2012) to show exponential stability, via the Routh-Hurwitz criterion, for another example of a system with a finite number of equations. // Abstract: Over-the-counter (OTC) markets have the main characteristic that they do not use a centralized trading mechanism (such as auctions, specialists, or limit-order books) to aggregate bids and offers and to allocate trades. Buyers and sellers often have only limited knowledge of trades recently negotiated elsewhere in the market.
They are also negotiating in potential ignorance of the prices currently available from other counterparties. This is the reason why OTC markets are said to be relatively opaque and are qualified as "Dark Markets" by Duffie (2012) in his recent monograph, to reflect the fact that investors are somewhat in the dark about the most attractive available deals and about whom to contact. In this work, we are particularly interested in the evolution over time of the distribution across investors of information learned from private trade negotiations. Specifically, we aim to establish the asymptotic stability of equilibrium dynamics of information sharing in a large interaction set. An agent's optimal current effort to search for information-sharing opportunities depends on that agent's current level of information and on the cross-sectional distribution of information quality and search efforts of other agents. Under the Duffie-Malamud-Manso (2009) framework, in equilibrium, agents search maximally until their information quality reaches a trigger level and then search minimally. In the context of percolation of information between agents, the information can be transmitted directly or indirectly. The first studies of such a problem were made by Duffie-Manso (2007) and then by Duffie-Giroux-Manso (2010). In that second study, the case of the percolation of information by groups of more than two investors was addressed and solved for a perfect information transmission kernel. That last study led Bélanger-Giroux (2013) to the problem of extending the Wild sums for a general interaction kernel (not only for the kernel which adds the information). On the other hand, in Duffie-Malamud-Manso (2009), the authors explain that, for information sharing in a large population, each agent is endowed with signals regarding the likely outcome of a random variable of common concern, such as the price of an asset of common interest. Such a setting leads to nonlinear systems of evolution equations. The agents' goal is to obtain an equilibrium policy specified by a set of parameters of a trigger policy, more specifically the minimal-search-effort trigger policies. We concentrate our study on those trigger policies in order to provide more intuitive and practical results. Doing so, we are able to obtain the existence of the steady state even when the intensity function is not a product. In our framework, we are also able to show asymptotic stability starting from any initial law. This can be done because we show that, by a change of kernels, the systems of ODEs, which are expressed by a set of kernels (one 1-ary and one 2-ary), are equivalent to systems expressed with a single 2-ary kernel, even with a constant intensity equal to one (by a change of time). We also show that, starting from any distribution, the solution converges to the limit proportions. Furthermore, we are able to show exponential stability using the Routh-Hurwitz criterion for an example of a finite system of differential equations. The solution of such a system of equations describes the cross-sectional distribution of types in the market.
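To make the setting concrete, the following is a minimal sketch of the type of evolution equation that arises in this information-percolation literature, assuming pairwise meetings at a constant intensity λ and the kernel that simply adds the two agents' information; the notation is an illustrative assumption and is not taken from the thesis.

```latex
% Assumed illustrative form: cross-sectional distribution \mu_t of information types,
% pairwise meetings at intensity \lambda, transmission kernel that adds the types.
\frac{d\mu_t}{dt} = \lambda\,\bigl(\mu_t * \mu_t - \mu_t\bigr),
\qquad
\mu_t = \sum_{n \ge 1} e^{-\lambda t}\bigl(1 - e^{-\lambda t}\bigr)^{n-1}\,\mu_0^{*n}.
```

Here * denotes convolution and the series is the Wild sum mentioned above; the general interaction kernels and non-product intensities studied in the thesis replace this plain convolution term.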
62

Sequential estimation in statistics and steady-state simulation

Tang, Peng 22 May 2014 (has links)
At the onset of the "Big Data" age, we are faced with ubiquitous data in various forms and with various characteristics, such as noise, high dimensionality, autocorrelation, and so on. The question of how to obtain accurate and computationally efficient estimates from such data is one that has stoked the interest of many researchers. This dissertation mainly concentrates on two general problem areas: inference for high-dimensional and noisy data, and estimation of the steady-state mean for univariate data generated by computer simulation experiments. We develop and evaluate three separate sequential algorithms for the two topics. One major advantage of sequential algorithms is that they allow for careful experimental adjustments as sampling proceeds. Unlike one-step sampling plans, sequential algorithms adapt to different situations arising from the ongoing sampling; this makes these procedures efficacious as problems become more complicated and more-delicate requirements need to be satisfied. We elaborate on each research topic in the following discussion. Concerning the first topic, our goal is to develop a robust graphical model for noisy data in a high-dimensional setting. Under a Gaussian distributional assumption, the estimation of undirected Gaussian graphs is equivalent to the estimation of inverse covariance matrices. Particular interest has focused upon estimating a sparse inverse covariance matrix to reveal insight on the data as suggested by the principle of parsimony. For estimation with high-dimensional data, the influence of anomalous observations becomes severe as the dimensionality increases. To address this problem, we propose a robust estimation procedure for the Gaussian graphical model based on the Integrated Squared Error (ISE) criterion. The robustness result is obtained by using ISE as a nonparametric criterion for seeking the largest portion of the data that "matches" the model. Moreover, an l₁-type regularization is applied to encourage sparse estimation. To address the non-convexity of the objective function, we develop a sequential algorithm in the spirit of a majorization-minimization scheme. We summarize the results of Monte Carlo experiments supporting the conclusion that our estimator of the inverse covariance matrix converges weakly (i.e., in probability) to the true inverse covariance matrix as the sample size grows large. The performance of the proposed method is compared with that of several existing approaches through numerical simulations. We further demonstrate the strength of our method with applications in genetic network inference and financial portfolio optimization. The second topic consists of two parts, both of which concern the computation of point and confidence interval (CI) estimators for the mean µ of a stationary discrete-time univariate stochastic process X ≡ {X_i : i = 1, 2, ...} generated by a simulation experiment. Point estimation is relatively easy when the underlying system starts in steady state, but the traditional way of calculating CIs usually fails since the data encountered in simulation output are typically serially correlated. We propose two distinct sequential procedures that each yield a CI for µ with user-specified reliability and absolute or relative precision.
The first sequential procedure is based on variance estimators computed from standardized time series applied to nonoverlapping batches of observations, and it is characterized by its simplicity relative to methods based on batch means and by its ability to deliver CIs for the variance parameter of the output process (i.e., the sum of covariances at all lags). The second procedure is the first sequential algorithm that uses overlapping variance estimators to construct asymptotically valid CI estimators for the steady-state mean based on standardized time series. Compared with other popular procedures for steady-state simulation analysis, the second procedure yields a significant reduction both in the variability of its CI estimator and in the sample size needed to satisfy the precision requirement. The effectiveness of both procedures is evaluated via comparisons with state-of-the-art methods based on batch means under a series of experimental settings: the M/M/1 waiting-time process with 90% traffic intensity; the M/H_2/1 waiting-time process with 80% traffic intensity; the M/M/1/LIFO waiting-time process with 80% traffic intensity; and an AR(1)-to-Pareto (ARTOP) process. We find that the new procedures perform comparatively well in terms of their average required sample sizes as well as the coverage and average half-length of their delivered CIs.
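As a rough illustration of the sequential-stopping idea described above, the sketch below uses a plain nonoverlapping batch-means confidence interval as a stand-in for the standardized-time-series variance estimators developed in the dissertation; the function names, batch count, and sample-growth rule are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np
from scipy import stats

def batch_means_ci(data, n_batches=20, alpha=0.05):
    """CI for the steady-state mean from nonoverlapping batch means.
    Simplified stand-in for the STS-based variance estimators described above."""
    m = len(data) // n_batches                      # batch size
    x = np.asarray(data[:m * n_batches]).reshape(n_batches, m)
    batch_means = x.mean(axis=1)
    grand_mean = batch_means.mean()
    s = batch_means.std(ddof=1)                     # sample std. dev. of batch means
    half_len = stats.t.ppf(1 - alpha / 2, n_batches - 1) * s / np.sqrt(n_batches)
    return grand_mean, half_len

def sequential_ci(simulate, n0=4000, rel_precision=0.05, alpha=0.05, max_n=10**7):
    """Enlarge the run until the CI half-length meets the relative-precision target."""
    data = list(simulate(n0))
    while True:
        mean, half_len = batch_means_ci(data, alpha=alpha)
        if half_len <= rel_precision * abs(mean) or len(data) >= max_n:
            return mean, half_len, len(data)
        data.extend(simulate(len(data) // 2 or 1))  # grow the sample by roughly 50%
```

Here simulate(n) would return n consecutive outputs of the simulation, for example successive waiting times of the M/M/1 process mentioned above; the run keeps growing until the requested relative precision is met.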
63

A Mixed Frequency Steady-State Bayesian Vector Autoregression: Forecasting the Macroeconomy

Unosson, Måns January 2016 (has links)
This thesis suggests a Bayesian vector autoregressive (VAR) model which allows for explicit parametrization of the unconditional mean for data measured at different frequencies, without the need to aggregate data to the lowest common frequency. Using a normal prior for the steady-state and a normal-inverse Wishart prior for the dynamics and error covariance, a Gibbs sampler is proposed to sample the posterior distribution. A forecast study is performed using monthly and quarterly data for the US macroeconomy between 1964 and 2008. The proposed model is compared to a steady-state Bayesian VAR model estimated on data aggregated to quarterly frequency and a quarterly least squares VAR with standard parametrization. Forecasts are evaluated using root mean squared errors and the log-determinant of the forecast error covariance matrix. The results indicate that the inclusion of monthly data improves the accuracy of quarterly forecasts of monthly variables for horizons up to a year. For quarterly variables the one and two quarter forecasts are improved when using monthly data.
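For orientation, the steady-state VAR idea can be written in the following common parametrization; the notation is assumed here from the steady-state BVAR literature rather than quoted from the thesis.

```latex
% Assumed parametrization: d_t deterministic terms, \Psi the steady-state parameters.
\Pi(L)\,\bigl(y_t - \Psi d_t\bigr) = \varepsilon_t, \qquad
\Pi(L) = I - \Pi_1 L - \cdots - \Pi_p L^{p}, \qquad
\varepsilon_t \sim N(0, \Sigma),
% so the unconditional (steady-state) mean is an explicit parameter:
\mathbb{E}[y_t] = \Psi\, d_t .
```

Writing the model this way lets the normal prior be placed directly on Ψ, while the dynamics (Π₁, …, Π_p) and the error covariance Σ receive the normal-inverse Wishart prior sampled by the Gibbs sampler described above.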
64

Fluorination Effect on the Conformational Properties of Alkanes

Xu, Wenjian 05 1900 (has links)
A series of fluorophores of the general formulas P(CF2)nP and P(CF2)n-1CF3 has been synthesized. Copper-catalyzed coupling of 1-bromopyrene with the corresponding mono- and di-iodoperfluoroalkanes was used in most cases. For the n=3 dimer, a novel 1,ω-perfluoroalkylation of pyrene via bis-decarboxylation of hexafluoroglutaric acid was utilized. These compounds, along with suitable hydrocarbon analogs, are being used to study the flexibility of fluorocarbon chains using emission spectroscopy. We have found that excimer formation for the fluorinated pyrene monomers is highly dependent on concentration and is less efficient than for pyrene. Excimer formation for the fluorinated pyrene dimers is much more efficient than for the fluorocarbon monomers and is only slightly concentration dependent. Steady-state emission spectra indicate that the hydrocarbon model dimers form excimers more efficiently than the fluorinated dimers, suggesting the fluorinated chains are stiffer than the hydrocarbons. We also conducted temperature-dependent studies and quantified the conformational differences.
65

Perturbation Dynamics on Moving Chains

Zakirova, Ksenia V 01 January 2015 (has links)
Chain dynamics have gained renewed interest recently, following the release of a viral YouTube video showcasing a phenomenon called the chain fountain. Recent work in the field shows that there exists unexplained behavior in newly proposed chain systems. We consider a general system of a chain traveling at constant velocity in an external force field and derive steady state solutions for the time invariant shape of the chain. Perturbing the solution introduces moving waves along the steady state shape with components that propagate along and against the direction of travel of the chain. Furthermore, we develop a numerical model using a discrete approximation of the chain in order to empirically test our results. The behavior of the chain fountain and related chain systems is discussed in the context of these findings.
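A compact way to see the two wave components is the textbook linearization for a uniform chain, written here under simplifying assumptions of my own (uniform tension T and linear density ρ) rather than as the general external-force-field result of the thesis.

```latex
% Small transverse perturbation \eta(s,t) of the steady-state shape, s = arclength,
% chain transported along itself at constant speed v:
\rho\,\bigl(\partial_t + v\,\partial_s\bigr)^{2}\eta = T\,\partial_s^{2}\eta
\quad\Longrightarrow\quad
c_{\pm} = v \pm \sqrt{T/\rho}.
```

Relative to the chain material the disturbances travel at ±√(T/ρ), i.e., one component propagates along and one against the direction of travel, as described above.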
66

B-Spline Boundary Element Method for Ships

Aggarwal, Aditya Mohan 07 August 2008 (has links)
The development of a three-dimensional B-spline based method, suitable for the steady-state potential-flow analysis of free-surface-piercing bodies in hydrodynamics, is presented. The method requires the B-spline or Non-Uniform Rational B-Spline (NURBS) representation of the body as an input. In order to solve for the unknown potential, the source surface, both for the body and for the free surface, is represented by NURBS surfaces. The method does not require the body surface to be discretized into flat panels; therefore, instead of a mere panel approximation, the exact body geometry is used in the computation. The technique does not use a free-surface Green's function, which already satisfies the linear free-surface boundary conditions, but uses a separate source patch for the free surface. By eliminating the free-surface Green's function, the method can be extended to nonlinear free-surface conditions, thus providing the possibility of wave-resistance calculations. The method is first applied to the double-body flow problem around a sphere and a Wigley hull. Some comparisons are made with exact solutions to validate the accuracy of the method. Results with linear free-surface conditions are then presented.
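For reference, the NURBS surface representation that the method takes as input has the standard form below; this is the usual textbook definition, with notation assumed here rather than extracted from the thesis.

```latex
% Control points P_{ij}, weights w_{ij}, B-spline basis functions N_{i,p}, N_{j,q}
% of degrees p and q:
S(u,v) \;=\; \frac{\displaystyle\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\,N_{j,q}(v)\,w_{ij}\,P_{ij}}
                  {\displaystyle\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\,N_{j,q}(v)\,w_{ij}} .
```

The source patches for the body and the free surface are then defined over such exact surfaces rather than over flat panels.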
67

Modelo matemático para avaliação hidrodinâmica de escoamentos em regime não-permanente / Mathematical model for hydrodynamic evaluation in non-steady state reactors

Costa, Daniel Jadyr Leite 20 March 2015 (has links)
The design of reactors for drinking-water and wastewater treatment requires knowledge of the flow hydrodynamics and of the chemical and biochemical reactions that take place inside them. Hydrodynamics can significantly affect the efficiency of the unit, since it directly influences the kinetics of the reactions. Many reactors operate under non-steady (time-varying) flow; however, few studies in the scientific literature propose methods for evaluating the hydrodynamics of this type of flow. Applying conventional models to the hydrodynamic evaluation of such reactors is conceptually wrong, since those models are developed under the assumption of steady flow. In this context, this work presents a mathematical model for the hydrodynamic evaluation of reactors operating under non-steady flow, intended to support the analysis and prediction of their behavior. The RTD technique was used to collect experimental data, and the numerical simulation software Vensim 6.3, from Ventana Systems, assisted in the development of the model. After calibration and validation, and within the appropriate restrictions, the model proved comparatively better suited to evaluating the hydrodynamic behavior of reactors under non-steady conditions with cyclic sinusoidal flow variation, especially for flows with relatively short hydraulic retention times and relatively large flow-variation amplitudes. / Reactor designs for water supply and wastewater treatment require knowledge of the hydrodynamics and of the chemical reactions that occur in the reactor's interior. Hydrodynamics is very important because it affects the efficiency of a treatment unit, since it directly influences the chemical reactions. There are many non-steady state reactors, but there are few studies about their hydrodynamic evaluation in the literature. The use of conventional models is conceptually wrong because they have been developed for steady-state conditions. This work presents a mathematical model for hydrodynamic evaluation of non-steady state reactors to support the analysis of these flows. The RTD technique was used to obtain experimental data, and the numerical simulation software Vensim 6.3, from Ventana Systems, supported the model development. After its calibration and validation, the model proved to be suitable for the experimental conditions, especially for flows that have a relatively low hydraulic retention time and a relatively high amplitude of flow variation.
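Purely as an illustration of the kind of non-steady tracer behavior involved (this is not the model developed in the thesis, and every parameter below is an arbitrary assumption), a tanks-in-series residence-time simulation with cyclic sinusoidal flow can be sketched as follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative assumptions only (not the thesis's model or data):
N      = 5          # number of ideal tanks in series
V      = 2.0        # volume of each tank [m^3]
Q0, A  = 1.0, 0.5   # mean flow rate [m^3/h] and relative amplitude of the variation
period = 6.0        # period of the sinusoidal flow cycle [h]

def Q(t):
    """Cyclic sinusoidal flow rate."""
    return Q0 * (1.0 + A * np.sin(2.0 * np.pi * t / period))

def rhs(t, c):
    """Tracer mass balance for each constant-volume tank: V dc_i/dt = Q(t)(c_{i-1} - c_i)."""
    c_in = np.concatenate(([0.0], c[:-1]))      # clean feed after the initial pulse
    return Q(t) / V * (c_in - c)

c0 = np.zeros(N)
c0[0] = 1.0                                      # instantaneous tracer pulse into tank 1
sol = solve_ivp(rhs, (0.0, 40.0), c0, max_step=0.05)
rtd_curve = sol.y[-1] * Q(sol.t)                 # exit tracer flux (unnormalized RTD curve)
```

Comparing a simulated exit curve of this kind with the measured tracer response is the sort of calibration-and-validation exercise described above, here with the flow amplitude and period chosen arbitrarily.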
68

Efeito de características microestruturais na difusividade do hidrogênio em dois aços grau API X65. / Effect of microstructural features on the H diffusivity in two API X65 steels.

Pereira, Viviam Serra Marques 31 January 2017 (has links)
High-strength low-alloy steels are widely used in pipelines transporting oil and gas, and the development of new alloy designs and the use of highly advanced steelmaking and processing techniques have become essential to obtain structures that resist hydrogen damage, the main cause of oil and gas pipeline failures in H2S-rich environments. In the present work, the effect of microstructural features on hydrogen diffusivity was evaluated in two API X65 grade steels with different Mn contents. One of the plates is still at an experimental stage of development, has a low Mn content, and was produced for application in sour environments. The other plate has a high Mn content, has been in commercial use for some years, and was developed for service in sweet environments. Both materials underwent microstructural characterization in the three sections of the plate: longitudinal and transverse to the rolling direction, and the top of the plate (parallel to the rolling direction). After characterization, samples from each section of the steels were subjected to hydrogen permeation tests; the low-Mn steel was also analyzed by EBSD (electron backscatter diffraction) for texture determination. The low-Mn steel has a homogeneous microstructure through the plate thickness, composed of refined ferrite and small pearlite islands. The high-Mn steel, in turn, shows a heterogeneous microstructure through the thickness, formed by ferrite and pearlite bands, with marked centerline segregation of alloying elements. The hydrogen permeation tests showed that the effective hydrogen diffusion coefficients, Deff, of the low-Mn steel are slightly higher than those of the high-Mn steel. Two other important parameters calculated for both steels are the subsurface hydrogen concentration, C0, and the number of traps per unit volume, Nt. Contrary to expectations, the low-Mn steel showed higher C0 and Nt values than the high-Mn steel. Preliminary thermal desorption tests carried out on both steels showed the same result: the low-Mn steel traps more hydrogen than the high-Mn steel. These contradictory C0 and Nt results were attributed to the presence of nanoprecipitates of microalloying additions in the low-Mn steel, which are not detectable by optical and scanning electron microscopy. Furthermore, for both steels, the Deff values varied with the analyzed section as follows: Deff longitudinal ≈ Deff transverse > Deff top. To better understand the anisotropic behavior of hydrogen diffusion in the two steels, a new diffusion coefficient was calculated, called the steady-state diffusion coefficient, Dss. Dss assumes that all the traps in the steel are saturated, thus allowing only the effect of physical obstacles to hydrogen diffusion to be evaluated. In the high-Mn steel, Dss varied in the same way as Deff: Dss longitudinal ≈ Dss transverse > Dss top; this behavior was attributed to the banding present in the material. In the low-Mn steel, Dss varied differently from Deff: Dss transverse > Dss longitudinal ≥ Dss top, indicating that hydrogen diffusion may be assisted by grain boundaries while the traps are being saturated, and that crystallographic texture may influence diffusion after the steady state is reached.
/ High-strength low-alloy steels are widely applied in pipelines for crude oil and natural gas transportation and, currently, new approaches to alloy design, in addition to the use of advanced steelmaking and processing techniques, have become essential for obtaining structures that resist hydrogen damage, which is the main cause of pipeline failures in H2S-rich environments. The main objective of the present work is to evaluate the influence of microstructural features on hydrogen diffusivity in two API X65 steels with different Mn contents. One of the steel plates has recently been developed for use in sour environments, is at an experimental stage, and has a low Mn content. The other is a commercial plate steel, with a high Mn content, developed for sweet applications. Both steel plates were characterized in their three sections relative to the rolling direction: longitudinal, transverse, and the top surface of the plate (parallel to the rolling direction). After that, samples obtained from each section of the plates were submitted to hydrogen permeation tests; the low-Mn steel was also analyzed with EBSD for texture determination. The low-Mn steel presents a homogeneous microstructure through the plate thickness, composed of refined ferrite and small pearlite islands. The high-Mn steel has a heterogeneous microstructure through the plate thickness, composed of ferrite and pearlite bands, and presents centerline segregation. Hydrogen permeation tests showed that the Deff values obtained for the low-Mn steel sections are slightly higher than those for the high-Mn steel. Two other important parameters calculated for both steels are the subsurface hydrogen concentration, C0, and the number of traps per unit volume, Nt. Contrary to what was expected, the low-Mn steel presented the higher C0 and Nt values. Thermal desorption spectroscopy analysis confirmed that the low-Mn steel traps more H atoms than the high-Mn one. These results, along with the similar Deff values, were related to the presence of nanoprecipitates of microalloying elements that cannot be detected via optical and scanning electron microscopy. Additionally, for both steels, the Deff values varied as a function of the analyzed section as follows: Deff longitudinal ≈ Deff transverse > Deff top. In order to better understand this anisotropic behavior, a new diffusion coefficient, called the diffusion coefficient at the steady state, Dss, was determined. Dss assumes that all the trapping sites are saturated, thus enabling the evaluation of the physical obstacles to H diffusion. For the high-Mn steel, Dss varied in the same manner as Deff: Dss longitudinal ≈ Dss transverse > Dss top; this behavior was associated with the microstructural banding present in the material. For the low-Mn steel, Dss exhibited a different behavior: Dss transverse > Dss longitudinal ≥ Dss top, suggesting that H diffusion can be aided by grain boundaries while the trapping sites are being filled, and that crystallographic texture may play a role after the steady state is reached.
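The parameters Deff, C0, and Nt discussed above are normally extracted from permeation transients with relations of the standard form below (time-lag method); these are textbook expressions given for context, and the exact formulas used in the thesis may differ.

```latex
% L: specimen thickness, t_lag: permeation time lag, J_ss: steady-state hydrogen flux,
% D_L: lattice diffusion coefficient.
D_{\mathrm{eff}} = \frac{L^{2}}{6\,t_{\mathrm{lag}}}, \qquad
C_{0} = \frac{J_{\mathrm{ss}}\,L}{D_{\mathrm{eff}}}, \qquad
N_{t} \approx \frac{C_{0}}{3}\left(\frac{D_{L}}{D_{\mathrm{eff}}} - 1\right).
```

These relations show how the time lag and the steady-state flux of a permeation transient yield the diffusivity, subsurface concentration, and trap density compared for the two steels above.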
69

Estudo da estabilidade direcional de um veículo comercial de 2 eixos em situação de regime permanente / not available

Ferreira, André Luís Francioso 16 December 2002 (has links)
The work presented in this dissertation consists of a study of the directional behavior of a two-axle commercial vehicle, classifying it with respect to its stability (oversteer, understeer, or neutral steer) and predicting under which conditions its instability becomes critical. For this purpose, the vehicle was modeled following a constant-radius curvilinear trajectory under steady-state conditions, and a calculation routine represents its lateral dynamics. The computational resource used (Excel) was deliberately chosen to be as simple as possible, so that the costs and time involved would be minimal. Some measurements were carried out with the vehicle in question and, taking into account all the simplifications implemented, the practical and theoretical results showed satisfactory correlation. The tool developed in this work can therefore be applied as a valuable resource during the initial conceptual phase of the suspension design of a two-axle vehicle, particularly for comparative evaluation against similar vehicles that have already been tested. / The work presented consists of an evaluation of the directional behavior of a two-axle light truck, in which the computational resources applied are very simple and readily at hand (Excel). The steady-state cornering concept was used to classify its stability (oversteer, understeer, or neutral steer) and to show at which point its behavior becomes unstable. Experimental measurements took place, and the practical (measured) and theoretical (model-based) results proved satisfactory, considering all the simplifications. Thus, this procedure may be useful during the suspension development of a two-axle light truck, especially when adopted to compare against another vehicle already known. Costs and time are saved in this way.
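The oversteer/understeer/neutral-steer classification used here is commonly expressed through the textbook steady-state bicycle-model relations below; these are standard vehicle-dynamics formulas given for context, not the calculation routine implemented in the dissertation.

```latex
% \delta: steer angle, L: wheelbase, R: turn radius, V: speed,
% W_f, W_r: front/rear axle loads, C_{\alpha f}, C_{\alpha r}: axle cornering stiffnesses.
\delta = \frac{L}{R} + K\,\frac{V^{2}}{gR}, \qquad
K = \frac{W_f}{C_{\alpha f}} - \frac{W_r}{C_{\alpha r}}.
```

K > 0 corresponds to understeer, K = 0 to neutral steer, and K < 0 to oversteer; an oversteer vehicle becomes directionally unstable above the critical speed V_crit = √(−gL/K).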
70

Técnicas de aquisição rápida em tomografia por ressonância magnética nuclear / Magnetic resonance tomography fast acquisition techniques

Foerster, Bernd Uwe 02 March 1994 (has links)
In this work we present and compare different two-dimensional NMR tomography techniques implemented on an ultra-low magnetic field (0.05 T) tomography system. Starting from the conventional "Spin Echo" (SE) sequence, which is used routinely, we implemented the "Gradient Recalled Echo" (GRE) sequence and two sequences that use the principle of Steady-State Free Precession (SSFP), namely "Fast Low Angle Shot" (FLASH) and "Fast Acquisition Double Echo" (FADE). With the SSFP sequences we drastically reduced the duration of a conventional tomography examination (SE sequence). The FADE sequence also allows two images with clearly different contrasts to be acquired without significantly increasing the duration of the examination. We developed calibration procedures that are indispensable for the SSFP techniques and that also improved the signal-to-noise ratio of the SE technique by 15 percent. We analyzed theoretically and experimentally the contrast behavior of the presented sequences. We also acquired a series of images of a phantom and of a volunteer's head with the different sequences and suggest some combinations of the parameters (protocols), such as repetition time, echo time, and excitation angle. One of these protocols is being tested in clinical cases to compare the usefulness of the presented sequences for medical diagnosis. With this work we gained experience in the use of fast techniques that is indispensable for developing a methodology to obtain three-dimensional images. / In this work we present and compare different techniques for two-dimensional NMR tomography implemented on an ultra-low magnetic field (0.05 T) tomography system. Based on the conventional spin-echo (SE) pulse sequence, which is routinely used, we implemented the gradient recalled echo (GRE) pulse sequence and two sequences that use the principle of Steady-State Free Precession (SSFP), namely "Fast Low Angle Shot" (FLASH) and "Fast Acquisition Double Echo" (FADE). With the SSFP sequences we drastically shortened the duration of the conventional SE tomography sequence. Moreover, the FADE sequence gives two images with clearly different contrast without significantly extending the duration of the experiment. We developed the calibration procedures needed for the SSFP techniques, which also improved the signal-to-noise ratio of the SE sequence by about 15 percent. We analyzed theoretically and experimentally the contrast behavior of the presented sequences. Moreover, we acquired various images of a phantom and of the brain of a normal volunteer using the different sequences and proposed some combinations of the parameters (protocols): repetition time, echo time, and flip angle. One of these protocols is being tested in clinical cases to compare the usefulness of the presented techniques for medical diagnosis. With this work we gained wide experience in using SSFP techniques that will be indispensable in the elaboration of three-dimensional tomography methodologies.
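For context on how the contrast of these fast gradient-echo acquisitions depends on the protocol parameters, the textbook steady-state signal of a spoiled FLASH-type sequence is shown below; this is a standard relation, not a result taken from the dissertation.

```latex
% T_R: repetition time, T_E: echo time, \alpha: flip (excitation) angle.
S \;\propto\; M_0 \,\sin\alpha\;
\frac{1 - e^{-T_R/T_1}}{1 - \cos\alpha\, e^{-T_R/T_1}}\; e^{-T_E/T_2^{*}},
\qquad
\cos\alpha_{E} = e^{-T_R/T_1}.
```

The signal, and hence the image contrast, is governed by the repetition time, echo time, and flip angle chosen in the protocols above, with the maximum signal obtained at the Ernst angle α_E.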
