501 |
Reuso especulativo de traços com instruções de acesso à memória / Speculative trace reuse with memory access instructions. Laurino, Luiz Sequeira. January 2007.
Mesmo com o crescente esforço para a detecção e tratamento de instruções redundantes, as dependências verdadeiras ainda causam um grande atraso na execução dos programas. Mecanismos que utilizam técnicas de reuso e previsão de valores têm sido constantemente estudados como alternativa para estes problemas. Dentro desse contexto destaca-se a arquitetura RST (Reuse through Speculation on Traces), aliando essas duas técnicas e atingindo um aumento significativo no desempenho de microprocessadores superescalares. A arquitetura RST original, no entanto, não considera instruções de acesso à memória como candidatas ao reuso. Desse modo, esse trabalho introduz um novo mecanismo de reuso e previsão de valores chamado RSTm (Reuse through Speculation on Traces with Memory), que estende as funcionalidades do mecanismo original, com a adição de instruções de acesso à memória ao domínio de reuso da arquitetura. Dentre as soluções analisadas, optou-se pela utilização de uma tabela dedicada (Memo_Table_L) para o armazenamento das instruções de carga/escrita. Esta solução garante boa economia de hardware, não limita o número de instruções de acesso à memória por traço e, também, armazena tanto o endereço como seu respectivo valor. Os experimentos, realizados com benchmarks do SPEC2000 integer e floating-point, mostram um crescimento de 2,97% (média harmônica) no desempenho do RSTm sobre o mecanismo original e de 17,42% sobre a arquitetura base. O ganho é resultado de uma combinação de diversos fatores: traços maiores (em média, 7,75 instruções por traço; o RST original apresenta 3,17 em média), com taxa de reuso de aproximadamente 10,88% (inferior ao RST, que apresenta taxa de 15,23%); entretanto, a latência das instruções presentes nos traços do RSTm é maior e compensa a taxa de reuso inferior. / Even with the growing efforts to detect and handle redundant instructions, true dependencies are still one of the bottlenecks of computation. Value reuse and value prediction techniques have been studied as an alternative to this issue. Following this approach, RST (Reuse through Speculation on Traces) combines both mechanisms and has achieved good performance improvements for superscalar processors. However, the original RST mechanism does not consider load/store instructions as reuse candidates. Because of this, our work presents a new value reuse and value prediction technique named RSTm (Reuse through Speculation on Traces with Memory), which extends RST and adds memory-access instructions to the reuse domain of the architecture. Among all studied solutions, we chose the approach of using a dedicated table (Memo_Table_L) to handle the load/store instructions. This solution guarantees low hardware overhead, does not limit the number of memory-access instructions that can be stored for each trace, and stores both the address and its value. In our experiments, performed with SPEC2000 integer and floating-point benchmarks, RSTm achieves average performance improvements (harmonic means) of 2.97% over the original RST and 17.42% over the baseline architecture. These improvements result from a combination of factors: longer traces (on average, 7.75 instructions per trace versus 3.17 for the original RST) with a reuse rate of around 10.88% (lower than RST's 15.23%); the higher latency of the instructions in the RSTm traces compensates for the lower reuse rate.
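To make the Memo_Table_L idea concrete, the following sketch models, in Python, a trace-memoization entry extended with per-trace (address, value) pairs for load/store instructions and the corresponding reuse test. It is a simplified, hypothetical illustration of the concept described in the abstract, not the hardware organization evaluated in the dissertation; all class and field names are invented for the example.

```python
# Illustrative software model of a trace memoization table extended with
# load/store information (the Memo_Table_L idea); names and fields are
# simplifications for exposition, not the dissertation's hardware design.
from dataclasses import dataclass, field

@dataclass
class TraceEntry:
    pc_start: int                     # address of the first instruction of the trace
    input_context: dict               # register -> value required for reuse
    output_context: dict              # register -> value produced by the trace
    mem_accesses: list = field(default_factory=list)  # (address, value) pairs

class MemoTableL:
    """Dedicated table holding the memory accesses of each memoized trace."""
    def __init__(self):
        self.entries = {}

    def record(self, entry: TraceEntry):
        self.entries[entry.pc_start] = entry

    def try_reuse(self, pc, regs, memory):
        """Reuse test: register inputs must match and memoized loads must still hold."""
        entry = self.entries.get(pc)
        if entry is None:
            return None
        if any(regs.get(r) != v for r, v in entry.input_context.items()):
            return None
        # A store to any memorized address since capture invalidates the trace.
        if any(memory.get(a) != v for a, v in entry.mem_accesses):
            return None
        return entry.output_context   # whole trace skipped, results reused

# Example: a trace whose single load from address 0x100 returned 7.
table = MemoTableL()
table.record(TraceEntry(0x40, {"r1": 3}, {"r2": 10}, [(0x100, 7)]))
print(table.try_reuse(0x40, {"r1": 3}, {0x100: 7}))   # {'r2': 10} -> reusable
print(table.try_reuse(0x40, {"r1": 3}, {0x100: 9}))   # None -> memory changed
```

Storing the value together with the address is what lets the reuse test detect an intervening store that would invalidate a memoized load.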
|
502 |
Um mecanismo de busca especulativa de múltiplos fluxos de instruções / A multistreamed speculative instruction fetch mechanism. Santos, Rafael Ramos dos. January 1997.
Este trabalho apresenta um novo modelo de busca especulativa de múltiplos fluxos de instruções em arquiteturas superescalares. A avaliação de desempenho de uma arquitetura superescalar com esta característica é também apresentada como forma de validar o modelo proposto e comparar seu desempenho frente a uma arquitetura superescalar real. O modelo em questão pretende eliminar a latência de busca de instruções introduzida pela ocorrência de comandos de desvio em pipelines superescalares. O desempenho de uma arquitetura superescalar dotada de escalonamento dinâmico de instruções, previsão de desvios e execução especulativa é bastante inferior ao desempenho máximo teórico esperado. Como demonstrado em outros trabalhos, isto ocorre devido às constantes quebras de fluxo, derivadas de instruções de desvio, e ao conseqüente esvaziamento da fila de instruções. O emprego desta técnica permite encadear instruções pertencentes a diferentes fluxos lógicos, logo após a identificação de uma instrução de desvio, disponibilizando um maior número de instruções ao mecanismo de escalonamento dinâmico e diminuindo o número de ciclos com despacho nulo devido às quebras de fluxo. Algumas considerações sobre a implementação do modelo descrito são apresentadas ao final do trabalho, assim como sugestões para trabalhos futuros. / This work presents a new model for fetching instructions along multiple streams in superscalar pipelines. The performance evaluation of a superscalar architecture including this feature is also presented, in order to validate the model and to compare its performance with that of a real superscalar architecture. The proposed technique intends to eliminate the instruction fetch latency introduced by branch instructions in superscalar pipelines. The performance delivered by a superscalar architecture that incorporates dynamic instruction scheduling, branch prediction and speculative execution falls well short of the expected one, which should be at least proportional to the number of functional units. Related works have shown that constant stream breaks, caused by disruptions in the sequential flow of control, reduce the number of instructions in the instruction queue. The proposed technique allows instructions to be fetched along different logical streams as soon as a branch instruction is detected during the fetch. Since the scheduler needs a large instruction window to schedule efficiently, the window should hold as many instructions as possible. The improvement provided by the proposed scheme comes from filling the instruction window with more instructions, avoiding interruptions when branches occur. Some considerations about the implementation of this model are presented at the end, as well as suggestions for future work.
|
503 |
DCE: the dynamic conditional execution in a multipath control independent architecture / DCE: execução dinâmica condicional em uma arquitetura de múltiplos fluxos com independência de controle. Santos, Rafael Ramos dos. January 2003.
Esta tese apresenta DCE, ou Execução Dinâmica Condicional, como uma alternativa para reduzir o custo da previsão incorreta de desvios. A idéia básica do modelo apresentado é buscar e executar todos os caminhos de desvios que obedecem a certas restrições no que diz respeito à complexidade e ao tamanho. Como resultado, tem-se um número menor de desvios sendo previstos e, consequentemente, um número menor de desvios previstos incorretamente. DCE busca todos os caminhos dos desvios selecionados, evitando quebras no fluxo de busca quando estes desvios são buscados. Os caminhos buscados dos desvios selecionados são então executados, mas somente o caminho correto é completado. Nesta tese nós propomos uma arquitetura para executar múltiplos caminhos dos desvios selecionados. A seleção dos desvios ocorre baseada no tamanho do desvio e em outras condições. A seleção de desvios simples e complexos permite a predicação dinâmica destes desvios sem a necessidade da existência de um conjunto específico de instruções nem otimizações especiais por parte do compilador. Além disso, é proposta também uma técnica para reduzir a sobrecarga gerada pela execução dos múltiplos caminhos dos desvios selecionados. O desempenho alcançado atinge níveis de até 12% quando um previsor de desvios Local é usado no DCE e um previsor Global é usado na máquina de referência. Quando ambas as máquinas empregam previsão Local, há um aumento de desempenho da ordem de 3-3.5%. / This thesis presents DCE, or Dynamic Conditional Execution, as an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a result, a smaller number of predictions is performed, and therefore fewer branches are mispredicted. DCE fetches through selected branches, avoiding disruptions in the fetch flow when these branches are fetched. Both paths of selected branches are executed, but only the correct path commits. In this thesis we propose an architecture to execute multiple paths of selected branches. Branches are selected based on their size and other conditions. Simple and complex branches can be dynamically predicated without requiring a special instruction set or special compiler optimizations. Furthermore, a technique to reduce part of the overhead generated by the execution of multiple paths is proposed. The speedup reaches up to 12% when a Local predictor is used in DCE and a Global predictor is used in the reference machine. When both machines use a Local predictor, the average speedup is 3-3.5%.
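As a deliberately toy illustration of the execute-both-paths, commit-one idea described above, the sketch below runs both sides of a selected branch on copies of the register state and keeps only the side chosen by the resolved condition. It ignores fetch, pipelining and the branch-selection heuristics, which are the actual subject of the thesis; all names are invented for the example.

```python
# Toy model of DCE's core idea: both paths of a selected branch are executed
# speculatively and only the correct one is committed; no prediction is made.

def execute_path(state, path):
    """Apply a straight-line sequence of (destination register, update function)."""
    new_state = dict(state)
    for dest, fn in path:
        new_state[dest] = fn(new_state)
    return new_state

def dce_execute(state, condition, taken_path, not_taken_path):
    taken_state = execute_path(state, taken_path)          # speculative copy 1
    not_taken_state = execute_path(state, not_taken_path)  # speculative copy 2
    return taken_state if condition(state) else not_taken_state  # commit one

# Example: if (r1 > 0) r2 = r1 * 2; else r2 = -r1
state = {"r1": 5, "r2": 0}
print(dce_execute(state,
                  condition=lambda s: s["r1"] > 0,
                  taken_path=[("r2", lambda s: s["r1"] * 2)],
                  not_taken_path=[("r2", lambda s: -s["r1"])]))
# {'r1': 5, 'r2': 10}
```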
|
504 |
RST: Reuse through Speculation on Traces / RST: Reuso Especulativo de Traces. Pilla, Mauricio Lima. January 2004.
Na presente tese, apresentamos uma nova abordagem para combinar reuso e previsão de seqüências dinâmicas de instruções, chamada Reuso por Especulação em Traces (RST). Esta técnica permite a identificação dinâmica de traces de instruções redundantes ou previsíveis e o reuso (especulativo ou não) desses traces. RST procura resolver a questão, presente na Memorização Dinâmica de Traces (DTM), de traces que não são reusados porque alguns de seus valores de entrada não estão prontos para o teste de reuso. Em estudos anteriores, esses traces foram contabilizados como sendo cerca de 69% de todos os traces reusáveis. Uma das maiores vantagens de RST sobre a combinação de um mecanismo de previsão com uma técnica de reuso de valores em que os mecanismos não são relacionados é que RST não necessita de tabelas adicionais para o armazenamento dos valores a serem previstos. A aplicação de reuso e previsão de valores pela simples combinação de mecanismos pode necessitar de uma quantidade proibitiva de espaço de armazenamento. No mecanismo RST, os valores já estão presentes na Tabela de Memorização de Traces, não incorrendo em custos adicionais para lê-los se comparado com uma técnica não-especulativa de reuso de traces. O contexto de entrada de cada trace (os valores de entrada de todas as instruções contidas no trace) já armazena os valores para o teste de reuso, os quais podem ser também utilizados para previsão de valores. As principais contribuições de nosso trabalho incluem: (i) um framework de reuso especulativo de traces que pode ser adaptado para diferentes arquiteturas de processadores; (ii) definição das modificações necessárias em um processador superescalar e superpipeline para implementar nosso mecanismo; (iii) estudo de questões de implementação relacionadas a essa arquitetura; (iv) estudo dos limites de desempenho da nossa técnica; (v) estudo de uma implementação de RST limitada por fatores realísticos; e (vi) ferramentas de simulação que podem ser utilizadas em outros estudos, representando em detalhes um processador superescalar e superpipeline. Salientamos que, em uma arquitetura utilizando mecanismos realistas de estimativa de confiança das previsões, nossa técnica RST consegue atingir speedups médios (médias harmônicas) de 1.29 sobre uma arquitetura sem reuso e 1.09 sobre uma técnica não-especulativa de reuso de traces (DTM). / In this thesis, we present a novel approach to combine both reuse and prediction of dynamic sequences of instructions, called Reuse through Speculation on Traces (RST). Our technique allows the dynamic identification of instruction traces that are redundant or predictable, and the reuse (speculative or not) of these traces. RST addresses the issue, present in Dynamic Trace Memoization (DTM), of traces not being reused because some of their inputs are not ready for the reuse test. These traces were measured to be 69% of all reusable traces in previous studies. One of the main advantages of RST over simply combining a value prediction technique with an unrelated reuse technique is that RST does not require extra tables to store the values to be predicted. Applying reuse and value prediction in unrelated mechanisms at the same time may require a prohibitive amount of table storage. In RST, the values are already stored in the Trace Memoization Table, and there is no extra cost in reading them compared with a non-speculative trace reuse technique. The input context of each trace (the input values of all instructions in the trace) already stores the values for the reuse test, and these values may also be used for prediction. Our main contributions include: (i) a speculative trace reuse framework that can be adapted to different processor architectures; (ii) specification of the modifications in a superscalar, superpipelined processor needed to implement our mechanism; (iii) a study of implementation issues related to this architecture; (iv) a study of the performance limits of our technique; (v) a performance study of a realistic, constrained implementation of RST; and (vi) simulation tools that can be used in other studies and that represent a superscalar, superpipelined processor in detail. In a constrained architecture with realistic confidence estimation, our RST technique is able to achieve average speedups (harmonic means) of 1.29 over the baseline architecture without reuse and 1.09 over a non-speculative trace reuse technique (DTM).
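The central mechanism, using the stored input context both for the reuse test and as value predictions, can be sketched as a small decision routine. This is an illustrative model under assumed data structures, not the pipeline logic actually specified in the thesis.

```python
# Sketch of the RST decision at trace lookup time (illustrative only): when all
# inputs are ready the lookup behaves like non-speculative DTM reuse; when some
# inputs are still pending, the memoized input values act as predictions.

def rst_lookup(entry, ready_regs):
    """entry['input_context']: reg -> value captured when the trace was memoized.
    ready_regs: reg -> value for operands already available at lookup time."""
    pending = [r for r in entry["input_context"] if r not in ready_regs]
    if not pending:
        # Non-speculative reuse (as in DTM): compare every input value.
        if all(ready_regs[r] == v for r, v in entry["input_context"].items()):
            return "reuse", entry["output_context"]
        return "execute", None
    # Speculative reuse: predict the pending inputs with the memoized values;
    # the reused results are validated later, when the real inputs arrive.
    predictions = {r: entry["input_context"][r] for r in pending}
    return "speculate", (entry["output_context"], predictions)

entry = {"input_context": {"r1": 3, "r3": 8}, "output_context": {"r5": 24}}
print(rst_lookup(entry, {"r1": 3, "r3": 8}))  # ('reuse', {'r5': 24})
print(rst_lookup(entry, {"r1": 3}))           # ('speculate', ({'r5': 24}, {'r3': 8}))
```

No separate prediction table is needed in this model: the predicted values are read from the same input context already required by the reuse test, which is the storage saving the abstract highlights.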
|
505 |
Reusing values in a dynamic conditional execution architecture / Reusando Valores em uma Arquitetura com Execução Condicional Dinâmica. Santos, Tatiana Gadelha Serra dos. January 2004.
A Execução Condicional Dinâmica (DCE) é uma alternativa para redução dos custos relacionados a desvios previstos incorretamente. A idéia básica é buscar todos os fluxos produzidos por um desvio que obedecem a algumas restrições relativas à complexidade e ao tamanho. Como conseqüência, um número menor de previsões é executado e, assim, um número menor de desvios é incorretamente previsto. Contudo, tal como outras soluções multi-fluxo, o DCE requer uma estrutura de controle mais complexa. Na arquitetura DCE, observa-se que várias réplicas da mesma instrução são despachadas para as unidades funcionais, bloqueando recursos que poderiam ser utilizados por outras instruções. Essas réplicas são geradas após o ponto de convergência dos diversos fluxos em execução e são necessárias para garantir a semântica correta entre instruções dependentes de dados. Além disso, o DCE continua produzindo réplicas até que o desvio que gerou os fluxos seja resolvido. Assim, uma seção completa do código pode ser replicada, reduzindo o desempenho. Uma alternativa natural para esse problema é reusar essas seções (ou traços) que são replicadas. O objetivo desse trabalho é analisar e avaliar a efetividade do reuso de valores na arquitetura DCE. Como será apresentado, o princípio do reuso, em diferentes granularidades, pode reduzir efetivamente o problema das réplicas e levar a aumentos de desempenho. / Dynamic Conditional Execution (DCE) is an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a consequence, a smaller number of predictions is performed, and therefore a lower number of branches is mispredicted. Nevertheless, like other multipath solutions, DCE requires a more complex control engine. In a DCE architecture, one may observe that several replicas of the same instruction are dispatched to the functional units, blocking resources that might be used by other instructions. Those replicas are produced after the join point of the paths and are required to guarantee the correct semantics among data-dependent instructions. Moreover, DCE continues producing replicas until the branch that generated the paths is resolved. Thus, a whole section of code may be replicated, harming performance. A natural alternative to this problem is to reuse those replicated sections, namely the replicated traces. The goal of this work is to analyze and evaluate the effectiveness of value reuse in the DCE architecture. As will be presented, the principle of reuse, at different granularities, can effectively reduce the replica problem and lead to performance improvements.
|
506 |
Digital holography and optical contouring. Li, Yan. January 2009.
Digital holography is a technique for the recording of holograms via CCD/CMOS devices and enables their subsequent numerical reconstruction within computers, thus avoiding the photographic processes that are used in optical holography. This thesis investigates the various techniques which have been developed for digital holography. It develops and successfully demonstrates a number of refinements and additions in order to enhance the performance of the method and extend its applicability. The thesis contributes to both the experimental and numerical analysis aspects of digital holography. Regarding experimental work: the thesis includes a comprehensive review and critique of the experimental arrangements used by other workers and actually implements and investigates a number of these in order to compare performance. Enhancements to these existing methods are proposed, and new methods developed, aimed at addressing some of the perceived shortcomings of the method. Regarding the experimental aspects, the thesis specifically develops:
• Super-resolution methods, introduced in order to restore the spatial frequencies that are lost or degraded during the hologram recording process, a problem which is caused by the limited resolution of CCD/CMOS devices.
• Arrangements for combating problems in digital holography such as: dominance of the zero order term, the twin image problem and excessive speckle noise.
• Fibre-based systems linked to tunable lasers, including a comprehensive analysis of the effects of: signal attenuation, noise and laser instability within such systems.
• Two-source arrangements for contouring, including investigating the limitations on achievable accuracy with such systems.
Regarding the numerical processing, the thesis focuses on three main areas. Firstly, the numerical calculation of the Fresnel-Kirchhoff integral, which is of vital importance in performing the numerical reconstruction of digital holograms. The Fresnel approximation and the convolution approach are the two most common methods used to perform numerical reconstruction. The results produced by these two methods for both simulated holograms and real holograms, created using our experimental systems, are presented and discussed. Secondly, the problems of the zero order term, twin image and speckle noise are tackled from a numerical processing point of view, complementing the experimental attack on these problems. A digital filtering method is proposed for use with reflective macroscopic objects, in order to suppress both the zero-order term and the twin image. Thirdly, for the two-source contouring technique, the following issues have been discussed and thoroughly analysed: the effects of the linear factor, the use of noise reduction filters, different phase unwrapping algorithms, the application of the super-resolution method, and errors in the illumination angle. Practical 3D measurement of a real object, of known geometry, is used as a benchmark for the accuracy improvements achievable via the use of these digital signal processing techniques within the numerical reconstruction stage. The thesis closes by seeking to draw practical conclusions from both the experimental and numerical aspects of the investigation, which it is hoped will be of value to those aiming to use digital holography as a metrology tool.
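For readers unfamiliar with the numerical reconstruction step discussed above, the sketch below shows a single-FFT Fresnel-approximation reconstruction of a digital hologram in its generic textbook form. It is not the thesis's own code; the wavelength, pixel pitch and reconstruction distance are placeholder values, and the zero-order and twin-image terms discussed in the thesis are not suppressed here.

```python
# Minimal Fresnel-approximation reconstruction of a digitally recorded hologram
# (single-FFT formulation). Placeholder parameters; intensity-only output.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pixel_pitch, distance):
    """Return the complex reconstructed field at the given distance."""
    ny, nx = hologram.shape
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pixel_pitch
    y = (np.arange(ny) - ny / 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    # Quadratic (chirp) phase applied in the hologram plane.
    chirp = np.exp(1j * k / (2.0 * distance) * (X**2 + Y**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(hologram * chirp)))
    # Constant multiplicative factor; irrelevant when only intensity is kept.
    field *= np.exp(1j * k * distance) / (1j * wavelength * distance)
    return field

# Example with synthetic data standing in for a recorded hologram.
holo = np.random.rand(512, 512)
intensity = np.abs(fresnel_reconstruct(holo, wavelength=632.8e-9,
                                       pixel_pitch=6.7e-6, distance=0.25)) ** 2
print(intensity.shape)  # (512, 512)
```

The convolution approach mentioned in the abstract differs mainly in that the propagation is expressed as a filtering operation (two forward FFTs and one inverse FFT), which keeps the output sampling equal to the input sampling.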
|
507 |
Storage System for Harvested Energy in IoT Sensors. Alhuttaitawi, Saif. January 2018.
This work presents an energy system design for wireless sensor networks (WSNs); after applying our design, a WSN node should theoretically have an infinite lifetime. Energy harvesting sources can provide suitable energy for WSN nodes and reduce their dependence on batteries. In this project, an efficient energy harvesting and storage system is proposed, using two supercapacitors and four DC/DC converters with step-up/step-down capabilities, all of them controlled by a microcontroller via switches, so that energy is saved in the best way and the WSN node is kept alive as long as possible. The supercapacitors are used as an energy buffer to supply the sensor node components (microcontroller and radio) with the energy they need to work. We control the energy flow according to specific voltage levels in the supercapacitors, to guarantee full functionality of the node while minimizing energy losses, which leads to a long lifetime for the wireless sensor node. Another important finding of our experiments is that the internal leakage of a supercapacitor has a critical effect on how long it can supply the system with energy. This work contains two theoretical parts (Part one and Part two), which are based on literature reviews, and one experimental part (Part three), based on building, coding and testing the prototype.
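To illustrate the kind of voltage-threshold switching policy described above, here is a small, hypothetical Python model of the decision logic. The thresholds, the two-capacitor charge/supply roles and all names are assumptions made for the example rather than values taken from the thesis.

```python
# Hypothetical threshold-based policy: charge the emptier supercapacitor while
# energy is harvested, and power the node from the fuller one if it is usable.

V_MIN_OPERATING = 1.8   # below this the DC/DC converter cannot supply the node
V_FULL = 5.0            # supercapacitor considered fully charged

def select_switches(v_cap1, v_cap2, harvesting):
    """Return (capacitor to charge, capacitor to supply the load), or None for either."""
    charge = None
    if harvesting:
        if v_cap1 <= v_cap2 and v_cap1 < V_FULL:
            charge = "cap1"
        elif v_cap2 < V_FULL:
            charge = "cap2"
    usable = [(v, name) for v, name in ((v_cap1, "cap1"), (v_cap2, "cap2"))
              if v >= V_MIN_OPERATING]
    supply = max(usable)[1] if usable else None   # None -> node enters sleep mode
    return charge, supply

print(select_switches(2.4, 4.1, harvesting=True))    # ('cap1', 'cap2')
print(select_switches(1.2, 1.5, harvesting=False))   # (None, None) -> sleep
```

A real implementation would also have to account for the supercapacitor leakage current highlighted in the abstract, since a capacitor left idle for long periods may drop below the usable threshold on its own.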
|
508 |
Establishing Super- and Sub-Chandrasekhar Limiting Mass White Dwarfs to Explain Peculiar Type Ia Supernovae. Das, Upasana. January 2015.
A white dwarf is most likely the end stage of a low mass star like our Sun, which results when the parent star consumes all the hydrogen in its core, thus bringing fusion to a halt. It is a dense and compact object, where the inward gravitational pull is balanced by the outward pressure arising due to the motion of its constituent degenerate electrons. The theory of non-magnetized and non-rotating white dwarfs was formulated extensively by S. Chandrasekhar in the 1930s, who also proposed a maximum possible mass for these objects, known as the Chandrasekhar limit (Chandrasekhar 1935).
White dwarfs are believed to be the progenitors of extremely bright explosions called type Ia supernovae (SNeIa). SNeIa are extremely important and popular astronomical events, which are hypothesized to be triggered in white dwarfs having mass close to the famous Chandrasekhar limit ∼ 1.44M⊙. The characteristic variation of SNeIa luminosity with time is believed to be powered by the decay of 56Ni to 56Co and, finally, to 56Fe. This feature, along with the consistent mass of the exploding white dwarf, is deeply linked with their utilization as “standard candles” for cosmic distance measurement. In fact, SNeIa measurements were instrumental in establishing the accelerated nature of the current expansion of the universe (Perlmutter et al. 1999).
However, several recently observed peculiar SNeIa do not conform to this traditional explanation. Some of these SNeIa are highly over-luminous, e.g. SN 2003fg, SN 2006gz, SN 2007if, SN 2009dc (Howell et al. 2006; Scalzo et al. 2010), and some others are highly under-luminous, e.g. SN 1991bg, SN 1997cn, SN 1998de, SN 1999by, SN 2005bl (Filippenko et al. 1992; Taubenberger et al. 2008). The luminosity of the former group of SNeIa implies a huge Ni-mass (often itself super-Chandrasekhar), invoking highly super-Chandrasekhar white dwarfs, having mass 2.1 − 2.8M⊙, as their most plausible progenitors (Howell et al. 2006; Scalzo et al. 2010). On the other hand, the latter group produces as little as ∼ 0.1M⊙ of Ni (Stritzinger et al. 2006), which rather seems to favor sub-Chandrasekhar explosion scenarios.
In this thesis, as the title suggests, we have endeavored to establish the existence of exotic, super- and sub-Chandrasekhar limiting mass white dwarfs, in order to explain the aforementioned peculiar SNeIa. This is an extremely important puzzle to solve in order to comprehensively understand the phenomenon of SNeIa, which in turn is essential for the correct interpretation of the evolutionary history of the universe.
Effects of magnetic field:
White dwarfs have been observed to be magnetized, having surface fields as high as 10^5 − 10^9 G (Vanlandingham et al. 2005). The interior field of a white dwarf cannot be probed directly, but it is quite likely that it is several orders of magnitude higher than the surface field. The theory of weakly magnetized white dwarfs has been investigated by a few authors; however, their properties do not starkly contrast with those of the non-magnetized cases (Ostriker & Hartwick 1968).
In our venture to find a fundamental basis behind the formation of super-Chandrasekhar white dwarfs, we have explored in this thesis the impact of stronger magnetic fields on the properties of white dwarfs, which has so far been overlooked. We have progressed from a simplistic to a more rigorous, self-consistent model, by adding complexities step by step, as follows:
• spherically symmetric Newtonian model with constant (central) magnetic field
• spherically symmetric general relativistic model with varying magnetic field
• model with self-consistent departure from spherical symmetry by general relativistic magnetohydrodynamic (GRMHD) numerical modeling.
We have started by exploiting the quantum mechanical effect of Landau quantization due to a maximum allowed equipartition central field greater than a critical value Bc = 4.414 × 10^13 G. To begin with, we have carried out the calculations in a Newtonian framework assuming spherically symmetric white dwarfs. The primary effect of Landau quantization is to stiffen the equation of state (EoS) of the underlying electron degenerate matter in the high density regime, and, hence, yield significantly super-Chandrasekhar white dwarfs having mass much greater than 2M⊙ (Das & Mukhopadhyay 2012a,b). Consequently, we have proposed a new mass limit for magnetized white dwarfs which may establish the aforementioned peculiar, over-luminous SNeIa as new standard candles (Das & Mukhopadhyay 2013a,b). We have furthermore predicted possible evolutionary scenarios by which super-Chandrasekhar white dwarfs could form by accretion on to a commonly observed magnetized white dwarf, by invoking the phenomenon of flux freezing, subsequently ending in over-luminous, super-Chandrasekhar SNeIa (Das et al. 2013). Before moving on to a more complex model, we have justified the assumptions in our simplistic model, in the light of various related physics issues (Das & Mukhopadhyay 2014b), and have also clarified, and, hence, removed some serious misconceptions regarding our work (Das & Mukhopadhyay 2015c).
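For reference, the standard relations behind the Landau-quantization argument are given below in textbook form; only the numerical value of the critical field appears in the abstract itself, so these expressions should be read as background context rather than as the thesis's own derivation.

```latex
% Quantum critical field for electrons and the relativistic Landau levels.
\begin{align}
  B_c &= \frac{m_e^{2} c^{3}}{e\hbar} \simeq 4.414 \times 10^{13}\ \mathrm{G}, \\
  E_{\nu}(p_z) &= \sqrt{p_z^{2} c^{2} + m_e^{2} c^{4}\left(1 + 2\nu\,\frac{B}{B_c}\right)},
  \qquad \nu = 0, 1, 2, \ldots
\end{align}
```

For central fields well above B_c only the lowest Landau levels remain occupied, which stiffens the electron EoS at high density and is what pushes the limiting mass above the Chandrasekhar value.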
Next, we have considered a more self-consistent general relativistic framework. We have obtained stable solutions of magnetostatic equilibrium models for white dwarfs pertaining to various magnetic field profiles, albeit still in spherical symmetry. We have shown that in this framework a maximum stable mass as high as ∼ 3.3M⊙ can be realized (Das & Mukhopadhyay 2014a).
However, it is likely that the anisotropic effect due to a strong magnetic field may cause a deformation in the spherical structure of the white dwarfs. Hence, in order to most self-consistently take into account this departure from spherical symmetry, we have constructed equilibrium models of strongly magnetized, static white dwarfs in a general relativistic framework, for the first time in the literature to the best of our knowledge. In order to achieve this, we have modified the GRMHD code XNS (Pili et al. 2014) to apply it in the context of white dwarfs. Interestingly, we have found that significantly super-Chandrasekhar white dwarfs, in the range ∼ 1.7 − 3.4M⊙, are obtained for many possible field configurations, namely poloidal, toroidal and mixed (Das & Mukhopadhyay 2015a). Furthermore, due to the inclusion of deformation caused by a strong magnetic field, super-Chandrasekhar white dwarfs are obtained for relatively lower central magnetic field strengths (∼ 10^14 G) compared to those in the simplistic model, as correctly speculated in our first work of this series (Das & Mukhopadhyay 2012a). We have also found that although the characteristic deformation induced by a purely toroidal field is prolate, the overall shape remains quasi-spherical, justifying our earlier spherically symmetric assumption while constructing at least some models of strongly magnetized white dwarfs (Das & Mukhopadhyay 2014a). Indeed, more accurate and extensive numerical analysis seems to have validated our analytical findings.
Thus, very interestingly, our investigation has established that magnetized white dwarfs can indeed have mass that significantly exceeds the Chandrasekhar limit, irrespective of the origin of the underlying magnetic effect, a discovery which is not only of theoretical importance, but also has a direct astrophysical implication in explaining the progenitors of the peculiar, over-luminous, super-Chandrasekhar SNeIa.
Effects of modified Einstein’s gravity:
A large array of models has been required to explain the peculiar, over- and under-luminous SNeIa. However, it is unlikely that nature would seek mutually antagonistic scenarios to exhibit sub-classes of apparently the same phenomenon, i.e., the triggering of thermonuclear explosions in white dwarfs. Hence, driven by the aim to establish a unification theory of SNeIa, we have invoked in the last part of this thesis a modification to Einstein’s theory of general relativity in white dwarfs.
The validity of general relativity has been tested mainly in the weak field regime, for example through laboratory experiments and solar system tests. However, the question remains whether general relativity requires modification in the strong gravity regime, such as the expanding universe or the region close to a black hole or a neutron star. For instance, there is evidence from observational cosmology that the universe has undergone two epochs of cosmic acceleration, the theory behind which is not yet well understood. The period of acceleration in the early universe is known as inflation, while the current accelerated expansion is often explained by invoking a mysterious dark energy. An alternative approach to explain the mysteries of inflation and dark energy is to modify the underlying gravitational theory itself, as it conveniently avoids involving any exotic form of matter. Several modified gravity theories have been proposed which are extensions of Einstein’s theory of general relativity. A popular class of such theories is known as f(R) gravity (e.g. see de Felice & Tsujikawa 2010), where the Lagrangian density f of the gravitational field is an arbitrary function of the Ricci scalar R.
In the context of astrophysical compact objects, so far, modified gravity theories have been applied only to neutron stars, which are much more compact than white dwarfs, in order to test the validity of such theories in the strong field regime (e.g. Cooney et al. 2010; Arapoğlu et al. 2011). Moreover, a general relativistic correction itself does not seem to modify the properties of a white dwarf appreciably when compared to Newtonian calculations. Our venture of exploring modified gravity in white dwarfs in this thesis is a first in the literature, to the best of our knowledge. We have exploited the advantage that white dwarfs have over neutron stars, i.e., their EoS is well established. Hence, any change in the properties of white dwarfs can be solely attributed to the modification of the underlying gravity, unlike in neutron stars, where similar effects could be produced by invoking a different EoS.
We have explored a popular, yet simple, model of f(R) gravity, known as the Starobinsky model (Starobinsky 1980) or R-squared model, which was originally proposed to explain inflation. Based on this model, we have first shown that modified gravity reproduces those results which are already explained in the paradigm of general relativity (and the Newtonian framework), namely, low density white dwarfs in this context. This is a very important test of the modified gravity model and is furthermore necessary to constrain the underlying model parameter. Next, depending on the magnitude and sign of a single model parameter, we have not only obtained both highly super-Chandrasekhar and highly sub-Chandrasekhar limiting mass white dwarfs, but we have also established them as progenitors of the peculiar, over- and under-luminous SNeIa, respectively (Das & Mukhopadhyay 2015b). Thus, an effectively single underlying theory unifies the two apparently disjoint sub-classes of SNeIa, which have so far hugely puzzled astronomers.
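For concreteness, the Starobinsky (R-squared) model referred to above has the standard form shown below, where α is the single free parameter whose sign and magnitude the abstract reports as controlling whether the limiting mass moves above or below the Chandrasekhar value; the normalization of the action is the conventional one and is not quoted from the thesis.

```latex
% Standard form of the Starobinsky (R-squared) gravity model.
\begin{equation}
  f(R) = R + \alpha R^{2},
  \qquad
  S = \frac{c^{4}}{16\pi G} \int f(R)\,\sqrt{-g}\;\mathrm{d}^{4}x \;+\; S_{\mathrm{matter}} .
\end{equation}
```

Setting α = 0 recovers ordinary general relativity, which is consistent with the low-density white dwarf check described above.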
To summarize, in the first part of the thesis, we have established the enormous significance of magnetic fields in white dwarfs in revealing the existence of significantly super-Chandrasekhar white dwarfs. These super-Chandrasekhar white dwarfs could be ideal progenitors of the peculiar, over-luminous SNeIa, which can, hence, be used as new standard candles for cosmic distance measurements. In the latter part of the thesis, we have established the importance of a modified theory of Einstein’s gravity in revealing both highly super- and highly sub-Chandrasekhar limiting mass white dwarfs. We have furthermore demonstrated how such a theory can serve as a missing link between the peculiar, super- and sub-Chandrasekhar SNeIa. Thus, the significance of the current thesis lies in the fact that it not only questions the uniqueness of the Chandrasekhar mass limit for white dwarfs, but also argues for the need for a modified theory of Einstein’s gravity to explain astrophysical observations.
|
509 |
Extrapolação espectral na restauração de imagens tridimensionais de microscopia ótica de fluorescência. Ponti Junior, Moacir Pereira. 26 September 2008.
The study of living cells, isolated or in tissues, in several applications, requires the use of microscopy techniques. Fluorescence microscopes are especially important for making possible images with enhancement of specific structures and detection of biological processes. However, microscopes, like other optical systems, corrupt images so that many details are lost after the passage of the image through their optical components. Conventional (wide-field) fluorescence microscopes degrade images mainly in the axial direction, limiting the range of frequencies that passes through the system. As a result, there is an out-of-focus blur, making it difficult to use the images to obtain three-dimensional (3D) images by computational optical sectioning microscopy (COSM). The main contribution of this thesis is the development of computer-based methods that are able to restore acquired images, through spectrum extrapolation algorithms that restore a portion of the lost frequencies, even in noisy images. A non-linear algorithm was proposed, based on the Richardson-Lucy method, with space and frequency domain constraints as in the Gerchberg-Papoulis algorithm. This method defines a unified algorithm to restore and extrapolate images, focusing on the spatial finite support constraint. The proposed method showed improved extrapolation when compared to previously known methods. Besides, other algorithms were developed based on the proposed method. Each variation of the basic algorithm has distinct features to attenuate the noise, define the spatial constraint adaptively, and detect the image background region. The use of an adaptive constraint and the extraction of information directly from the images were shown to contribute to the recovery of lost frequencies. The results are promising, showing the potential of extrapolation in real conditions, improving the three-dimensional visualization of specimens in wide-field (non-confocal) microscopes, helping many important applications in biotechnology, such as the assessment of cell cultures. / O estudo de células, isoladas ou na forma de tecidos, em diversas aplicações biotecnológicas requer a utilização de técnicas de microscopia. O microscópio de fluorescência, em especial, é atualmente uma ferramenta de grande importância por permitir destacar detalhes em células e detectar processos biológicos. Contudo, os microscópios, como outros sistemas óticos, corrompem as imagens de forma que muitos detalhes são perdidos na passagem da imagem pelos componentes óticos deste tipo de equipamento. Os microscópios de fluorescência convencionais degradam a imagem principalmente na direção axial, o que, no domínio da frequência, é visto como um limite de banda nesta direção que inviabiliza a visualização de imagens tridimensionais por microscopia de seccionamento ótico computacional. A principal contribuição deste projeto é o desenvolvimento de métodos computacionais que restaurem estas imagens mediante a utilização de algoritmos de extrapolação que recuperem parte das frequências perdidas além do limite de banda do microscópio, mesmo na presença de ruído. Para tal fim, foi proposto um procedimento não linear com base no algoritmo Richardson-Lucy, com restrições no domínio do espaço e da frequência, conforme o algoritmo de Gerchberg-Papoulis. O método proposto define um algoritmo único para restauração e extrapolação, com foco na restrição de suporte finito espacial. Este método mostrou melhoria na extrapolação quando comparado a métodos conhecidos na literatura. Foram desenvolvidas variantes deste algoritmo, cada qual possuindo características para atenuar o ruído, calcular de forma adaptativa a restrição espacial, e detectar a região de fundo da imagem. Foi mostrado que o uso de restrições adaptativas e a extração de informações a partir da imagem podem contribuir para a recuperação de frequências perdidas. Os resultados obtidos são promissores, pois mostram o potencial de extrapolação dos métodos em condições reais, permitindo a melhoria na visualização tridimensional de espécimes em microscópios wide-field (não-confocais), auxiliando diversas aplicações importantes em biotecnologia, como no caso de acompanhamento de cultivos celulares.
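A schematic version of the kind of unified restoration/extrapolation iteration described in this abstract is sketched below: a Richardson-Lucy multiplicative update followed by Gerchberg-Papoulis-style constraints (non-negativity and finite support in space, the measured passband re-imposed in frequency). It is a generic sketch under assumed inputs, not the thesis's exact algorithm, parameters or stopping rule.

```python
# Generic Richardson-Lucy iteration with Gerchberg-Papoulis-style constraints.
# image, psf, support_mask and band_mask are 2-D arrays of the same shape;
# band_mask is 1 inside the optical passband and 0 where extrapolation occurs.
import numpy as np

def rl_gp_restore(image, psf, support_mask, band_mask, iterations=50):
    otf = np.fft.fft2(np.fft.ifftshift(psf))          # PSF assumed centered
    measured_spectrum = np.fft.fft2(image)
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(iterations):
        # Richardson-Lucy multiplicative update.
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))
        ratio = image / np.maximum(blurred, 1e-12)
        correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
        estimate *= correction
        # Space-domain constraints: non-negativity and finite support.
        estimate = np.clip(estimate, 0.0, None) * support_mask
        # Frequency-domain constraint: keep the measured data inside the band,
        # let the iteration extrapolate only outside it.
        spectrum = np.fft.fft2(estimate)
        spectrum = band_mask * measured_spectrum + (1 - band_mask) * spectrum
        estimate = np.real(np.fft.ifft2(spectrum))
    return estimate
```

In a 3-D wide-field setting the same structure applies with 3-D FFTs, with the band mask describing the axial region where frequencies are missing and need to be extrapolated.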
|
510 |
Aumento de resolução de imagens de ressonância magnética do trato vocal utilizadas em modelos de síntese articulatória. Martins, Ana Luísa Dine. 31 October 2011.
Articulatory synthesis consists of reproducing speech by means of models of the vocal tract and of articulatory processes. Recent advances in Magnetic Resonance Imaging (MRI) allowed important improvements with respect to the comprehension of speech and of the forms taken by the vocal tract. However, one of the main challenges in the field is the fast and, at the same time, high-quality acquisition of image sequences. Since adopting more powerful acquisition devices might be financially inviable, a more feasible solution proposed in the literature is the resolution enhancement of the images by changes introduced in the acquisition model. This dissertation proposes a method for the spatio-temporal resolution enhancement of the acquired sequences using only digital image processing techniques. The approach involves two stages: (1) temporal resolution enhancement by means of a motion-compensated interpolation technique; and (2) spatial resolution enhancement by means of a super-resolution image reconstruction technique. With respect to the temporal resolution enhancement, two interpolation models are compared: linear interpolation considering two adjacent images and cubic spline interpolation considering four contiguous images. Since both models performed equally in the experiments, the linear interpolation was adopted, for its simplicity and lower computational cost. The initial goal of the spatial resolution enhancement was an extension of the candidate's approach proposed in her master's thesis. Adopting a maximum a posteriori (MAP) probability approach, the high-resolution images were modeled using the Markov Random Field (MRF) Generalized Isotropic Multi-Level Logistic (GIMLL) model and the Iterated Conditional Modes (ICM) algorithm. However, even though the approach presented promising results, due to the dimension of the target problem the algorithm had a high computational cost. Considering this limitation, an adaptation of the Wiener filter for the super-resolution reconstruction problem was considered. Inspired by two methods available in the literature, three approaches were proposed: the statistical interpolation, the multi-temporal approach, and the adaptive Wiener filter. In all cases, a separable Markovian model and an isotropic model were compared in the characterization of the spatial correlation structures. These models were used to characterize the correlation and cross-correlation of observations for the statistical interpolation and the multi-temporal approach. On the other hand, for the adaptive Wiener filter, these models were used to characterize the a priori spatial correlation. According to the conducted experiments, the isotropic model outperformed the separable Markovian model. Besides, considering all Wiener filter-based approaches and the initial approach based on the GIMLL model, the adaptive Wiener filter outperformed all other approaches and was also faster than a single iteration of the GIMLL-based approach. / A síntese articulatória procura produzir a fala através de modelos do trato vocal e dos processos articulatórios envolvidos. Os avanços no imageamento por ressonância magnética permitiram que resultados importantes fossem alcançados com relação à fala e à forma do trato vocal. Entretanto, um dos principais desafios ainda é a aquisição rápida e de alta qualidade das sequências de imagens.
Além da opção de se utilizar meios de aquisição cada vez mais potentes, o que pode ser financeiramente inviável, abordagens propostas na literatura procuram aumentar a resolução modificando o processo de aquisição. Este trabalho propõe o aumento de resolução espaço-temporal das sequências adquiridas utilizando apenas técnicas de processamento de imagens digitais. A abordagem proposta é formada por duas etapas: o aumento de resolução temporal por meio de uma técnica de interpolação por compensação de movimento; e o aumento de resolução espacial por meio de uma técnica de reconstrução de imagens por super resolução. Com relação ao aumento de resolução temporal, dois métodos de interpolação são comparados: interpolação linear considerando duas imagens adjacentes e interpolação por splines cúbicas considerando quatro imagens consecutivas. Como, de acordo com os experimentos desenvolvidos, não existe diferença significativa entre esses dois métodos, a interpolação linear foi adotada por ser um procedimento mais simples e, consequentemente, apresentar menor custo computacional. O objetivo inicial para o aumento de resolução espacial das imagens observadas foi a extensão da abordagem proposta pela aluna em seu projeto de mestrado. Adotando uma abordagem de máxima probabilidade a posteriori (MAP), as imagens de alta resolução foram modeladas utilizando o modelo de campos aleatórios de Markov (MRF) Generalized Isotropic Multi-Level Logistic (GIMLL) e o algoritmo Iterated Conditional Modes (ICM) foi utilizado para maximizar as probabilidades condicionais locais sequencialmente. Entretanto, apesar de ter apresentado resultados promissores, devido à dimensão do problema tratado, o algoritmo ICM apresentou alto custo computacional. Considerando as limitações de performance desse algoritmo, decidiu-se adaptar o filtro de Wiener para o problema da reconstrução por super resolução. Utilizando dois trabalhos encontrados na literatura como inspiração, foram desenvolvidas três abordagens denominadas interpolação estatística, abordagem multitemporal e filtro de Wiener adaptativo. Em todos os casos, um modelo Markoviano separável e um modelo isotrópico foram comparados na caracterização das estruturas de correlação espacial. No caso da interpolação estatística e da abordagem multitemporal esses modelos foram utilizados para caracterizar as estruturas de correlação das observações e cruzada. Por outro lado, no caso da abordagem denominada filtro de Wiener adaptativo, esses modelos foram utilizados para caracterizar as estruturas de correlação espaciais a priori. De acordo com os experimentos desenvolvidos, o modelo isotrópico apresentou desempenho superior quando comparado ao modelo Markoviano separável. Além disso, considerando todas as propostas baseadas no filtro de Wiener e a proposta inicial baseada no modelo de Markov GIMLL, o filtro de Wiener adaptativo apresentou os melhores resultados e se mostrou mais rápido do que apenas uma iteração da abordagem baseada no modelo GIMLL.
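As an illustration of the temporal-enhancement step adopted above, the sketch below performs motion-compensated linear interpolation between two adjacent frames. The motion field would in practice come from a registration/motion-estimation step; here it is taken as an input, and the warping uses the simplest nearest-neighbour scheme. It is a generic sketch, not the dissertation's implementation.

```python
# Motion-compensated linear interpolation of an intermediate frame between two
# adjacent frames A (t = 0) and B (t = 1), given the motion field from A to B.
import numpy as np

def warp(frame, flow, scale):
    """Shift each pixel by `scale` times its (dy, dx) motion vector."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy - scale * flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx - scale * flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def interpolate_frame(frame_a, frame_b, flow_ab, t=0.5):
    """Frame at fractional time t, 0 < t < 1, between frame_a and frame_b."""
    from_a = warp(frame_a, flow_ab, t)            # move A forward by t
    from_b = warp(frame_b, -flow_ab, 1.0 - t)     # move B backward by 1 - t
    return (1.0 - t) * from_a + t * from_b        # linear blend

# Tiny example: a bright pixel moving two pixels to the right between frames.
a = np.zeros((4, 4)); a[1, 1] = 1.0
b = np.zeros((4, 4)); b[1, 3] = 1.0
flow = np.zeros((4, 4, 2)); flow[..., 1] = 2.0
print(interpolate_frame(a, b, flow))   # the bright pixel lands at column 2
```

The second interpolation model compared in the dissertation instead fits a cubic spline through four contiguous frames; according to the abstract, both performed equally, so the cheaper linear blend was kept.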
|