481

D=10 Super Yang-Mills, D=11 Supergravity and the Pure Spinor Superfield Formalism / D=10 Super Yang Mills, D=11 Supergravidade e o Formalismo de Supercampo de Espinor Puro

Guillen Quiroz, Luis Max [UNESP] 07 March 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
/ It is well known how to describe the D = 10 Super Yang-Mills (SYM) and D = 11 Supergravity (SG) theories in superspace and in terms of component fields. However, a new version of these models was formulated in the late 2000s, when Martin Cederwall, using the pure spinor superfield formalism, constructed a pure spinor action for these theories which, unlike the previously mentioned approaches, does not require constraints to be imposed by hand, provides a full description of each model (in the BV sense), and yields supersymmetric equations of motion from the corresponding action principle. In this work we will explain all the background required to understand the construction of this action. For this purpose, we will start with the D=10 (abelian) SYM theory in its component and superspace formulations. We will use the action of the on-shell formulation to quantize the theory via the Batalin-Vilkovisky (BV) framework. We will move to D=11 supergravity and study its component and superspace formulations. Then we will show that we can obtain the same physical spectrum of D = 10 SYM (D = 11 SG) by studying the D = 10 (D = 11) superparticle in the light-cone gauge. In order to obtain a covariant quantization of these models, we will introduce the D = 10 (D = 11) pure spinor superparticle, which possesses the usual pure spinor BRST operator (Q = λD). It will turn out that the cohomology of this operator coincides with the linearized D = 10 SYM (D = 11 SG) theory after it is quantized via the BV formalism. This result will naturally introduce the idea of constructing pure spinor actions. Finally, we will explain how the pure spinor superfield framework arises in this context and how we can use it to construct manifestly supersymmetric actions for D = 10 SYM and D = 11 SG.
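For orientation, the pure spinor BRST operator quoted in the abstract, Q = λD, has the standard schematic form below. This is textbook material of the pure spinor formalism rather than anything specific to this dissertation, and normalizations and signs vary between references:

```latex
Q = \lambda^{\alpha} D_{\alpha}, \qquad
\lambda \gamma^{m} \lambda = 0 \quad (m = 0,\dots,9), \qquad
Q^{2} = \tfrac{1}{2}\,\lambda^{\alpha}\lambda^{\beta}\,\{D_{\alpha},D_{\beta}\}
      \;\propto\; (\lambda\gamma^{m}\lambda)\,\partial_{m} = 0 .
```

Nilpotency of Q, and hence a well-defined cohomology of the kind discussed above, follows directly from the pure spinor constraint λγ^mλ = 0.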
482

Vers des métamatériaux thermoélectriques à base de super-réseaux verticaux : principes et verrous technologiques / Towards thermoelectric metamaterials based on vertical superlattices : fabrication and challenges

Parasuraman, Jayalakshmi 28 June 2013 (has links)
Metamaterials offer the possibility of obtaining physical properties markedly improved over those of natural materials. In this work, we explore a new variety of thermoelectric metamaterials based on silicon micro- and nano-structuring, in the form of vertical superlattices, aimed at thermal energy harvesting and cooling applications. Additionally, we focus on a route towards fabricating these materials by simple and low-cost means compared with prior attempts. The first part of this thesis serves as an introduction to the thermal phenomena that form the basis for electrical conduction and heat dissipation at the nanoscale, namely thermionic emission and phonon scattering; these principles form the crux of the device. This part also details the principles and results of thermal characterization using the 3ω and 2ω methods. The second part of this thesis describes both top-down and bottom-up approaches to fabricating nanoscale superlattices from single-crystalline silicon. The novel vertical architecture proposed here raised technological challenges that were tackled through the exploration of original experimental techniques for producing high-aspect-ratio (HAR) submicron structures efficiently and over large surface areas. These techniques include the use of traditional lithography patterning combined with extrusion to produce volumic structures. Additionally, the use of nanofibers and diblock copolymers as templates for further etching of HAR silicon nanostructures is also presented, bringing us closer to the ultimate goal of the project.
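As a quick reminder of what the 3ω measurements mentioned above extract (stated generically, not taken from this thesis): a metal heater line of length l driven at frequency ω dissipates heat at 2ω, and the resulting temperature oscillation ΔT appears as a small voltage component at 3ω. In the commonly quoted slope form of the method,

```latex
\Delta T \;\approx\; \frac{2\,V_{3\omega}}{I_{0}\,R_{0}\,\alpha},
\qquad
\kappa \;\approx\; \frac{P\,\ln(\omega_{2}/\omega_{1})}{2\pi l\,\bigl(\Delta T_{1}-\Delta T_{2}\bigr)},
```

where α is the temperature coefficient of resistance of the heater and P the dissipated power, so the thermal conductivity κ follows from the slope of ΔT against ln ω. Exact prefactors depend on geometry and on amplitude versus RMS conventions.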
483

Fonctions de corrélation en théories supersymétriques / Correlation functions in N=4 super-Yang-Mills theory

Chicherin, Dmitry 13 September 2016 (has links)
In the present thesis we study the multi-point, multi-loop (super)correlation functions of half-BPS multiplets in N = 4 super-Yang-Mills theory. Correlation functions are natural dynamical objects to consider in any Conformal Field Theory. They are finite quantities and their (super)conformal symmetry is not broken by divergences. They contain information about many other interesting dynamical quantities of the theory. The Operator Product Expansion applied to them produces sum rules for three-point functions and anomalous dimensions. In the light-cone limit they coincide with light-like Wilson loops and scattering superamplitudes. This duality holds both at the level of the regularized divergent integrals and at the level of their finite rational integrands. The main part of the thesis is devoted to multi-point, Born-level super-correlators of the stress-tensor supermultiplet. There exist a number of hints that such super-correlators are remarkable dynamical quantities in N = 4 SYM. In studying the supercorrelators it is convenient to use Feynman rules that preserve part of the supersymmetry. We therefore reformulate N = 4 SYM in Lorentz harmonic superspace. We work in Euclidean space and harmonize one half of the Lorentz group SU(2) x SU(2). The theory is formulated in terms of two chiral-analytic semi-superfields, one of which is a scalar and the other a spinor. The action of the theory is a sum of two terms: the Chern-Simons action describing the self-dual N = 4 SYM theory and a non-polynomial action which takes the interactions into account. Since the formulation of the action is chiral, the Q̄-supersymmetry is non-linearly realized on the pair of fields. The action simplifies considerably in the axial gauge. We work out the corresponding propagators and formulate Lorentz harmonic superspace Feynman rules. In order to study non-chiral supercorrelators of the stress-tensor supermultiplet we also formulate the relevant composite operator in terms of the chiral-analytic semi-superfields. At the chiral level we propose the R-vertex construction of the chiral supercorrelator, which turns out to be rational at the Born level by construction. In order to elucidate the structure of the supercorrelator we rearrange the harmonic Feynman rules, introducing a new class of off-shell analytic nilpotent invariants (of Grassmann degree two). They are simple building blocks of the super-correlator. We then proceed to the non-chiral sector and find that the dependence on θ̄ is taken into account by a slight modification of the R-vertices, equivalent to a change of the space-time variables from the chiral to the analytic basis. The non-chiral correlator is thus expressed in terms of a rather special class of non-chiral nilpotent invariants. In the last part of the thesis we study four-point correlation functions of half-BPS operators in the three-loop approximation in the planar limit. This study is motivated by an integrability-based conjecture for the structure constants. At three-loop order all known Feynman-graph approaches are extremely inefficient; the main obstacle is the huge number of relevant Feynman diagrams and the complexity of the corresponding loop integrals. However, the correlator is almost completely fixed by its elementary properties: symmetries, singularities and planarity. The pole structure and the superconformal symmetry specify the rational integrands of the correlators up to a number of numerical coefficients. We fix these coefficients using planarity, crossing symmetry, and the light-cone OPE of the correlator integrands with various weight configurations in the light-like limit with respect to a pair of points.
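For readers unfamiliar with the kinematics, four-point functions of the kind discussed above depend, once conformal symmetry has been used, on two conformal cross-ratios; this is standard CFT background quoted here only for orientation:

```latex
u = \frac{x_{12}^{2}\,x_{34}^{2}}{x_{13}^{2}\,x_{24}^{2}},
\qquad
v = \frac{x_{14}^{2}\,x_{23}^{2}}{x_{13}^{2}\,x_{24}^{2}},
\qquad
x_{ij}^{2} \equiv (x_{i}-x_{j})^{2}.
```

The light-cone OPE limit referred to in the abstract corresponds to sending one of these separations, for example x_{12}^{2}, to zero.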
484

Synthetic Aperture Radar Image Formation Via Sparse Decomposition

January 2011 (has links)
abstract: Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms, modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation. / Dissertation/Thesis / M.S. Electrical Engineering 2011
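The thesis' two modified algorithms are not named in the abstract, so as a purely illustrative sketch, the snippet below shows the generic kind of sparse-decomposition solver (an ISTA-style iteration for an l1-regularized least-squares problem) that image formation from a sparsely sampled aperture builds on; the forward operator A, the random data and all parameters are hypothetical stand-ins.

```python
import numpy as np

def ista_image_formation(y, A, lam=0.5, n_iter=200):
    """Illustrative ISTA solver for min_x 0.5*||A x - y||^2 + lam*||x||_1,
    with complex-valued reflectivities (soft-thresholding applied to the magnitude)."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)       # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - step * (A.conj().T @ (A @ x - y))  # gradient step on the data term
        mag = np.maximum(np.abs(z) - step * lam, 0.0)
        x = mag * np.exp(1j * np.angle(z))         # complex soft-thresholding
    return x

# Toy demo: a sparsely sampled "aperture" (96 samples) observing a scene
# with a handful of bright scatterers among 256 pixels.
rng = np.random.default_rng(0)
n_pixels, n_samples = 256, 96
A = (rng.standard_normal((n_samples, n_pixels))
     + 1j * rng.standard_normal((n_samples, n_pixels)))
x_true = np.zeros(n_pixels, dtype=complex)
x_true[rng.choice(n_pixels, 5, replace=False)] = 1.0
x_hat = ista_image_formation(A @ x_true, A)
```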
485

Análise dos efeitos da superexpressão do componente RNA da telomerase de Leishmania major (LeishTER)

Vassilievitch, Alessandro Cabral. January 2018 (has links)
Orientador: Maria Isabel Nogueira Cano / Abstract: Parasites of the Leishmania genus belong to the Trypanosomatidae family, which presents peculiar and particular characteristics. Among them are the species that cause leishmaniasis, a neglected tropical disease that can be expressed in three different clinical forms: cutaneous, mucocutaneous and visceral. Brazil is one of the most affected countries, due mainly to socioeconomic conditions, climate change and environmental alterations. Research on the biology of Leishmania contributes to the understanding of important physiological mechanisms of the parasite and can thus provide new therapeutic targets against the disease. The study of Leishmania telomeres appears promising since they are related to genome stability. Telomeres are nucleoprotein structures located at the ends of the chromosomes and are responsible for protecting the chromosomes, ensuring that the genetic information is correctly copied during cell duplication. DNA polymerase does not elongate telomeres as it does the rest of the genetic material, and thus they are maintained by the action of a specialized reverse transcriptase named telomerase. Telomerase is a ribonucleoprotein minimally composed of two subunits: a protein with reverse transcriptase function, TERT, and an RNA component (TER) that contains the telomeric repeat template sequence copied by TERT. Recent studies have shown that TER has other functions besides being just a template for telomere elongation. Its secondary structure has domains with control fun... (Complete abstract click electronic access below) / Mestre
486

Efficient Perceptual Super-Resolution

January 2011 (has links)
abstract: Super-Resolution (SR) techniques are widely developed to increase image resolution by fusing several Low-Resolution (LR) images of the same scene to overcome sensor hardware limitations and reduce media impairments in a cost-effective manner. When choosing a solution for the SR problem, there is always a trade-off between computational efficiency and High-Resolution (HR) image quality. Existing SR approaches suffer from extremely high computational requirements due to the high number of unknowns to be estimated in the solution of the SR inverse problem. This thesis proposes efficient iterative SR techniques based on Visual Attention (VA) and perceptual modeling of the human visual system. In the first part of this thesis, an efficient ATtentive-SELective Perceptual-based (AT-SELP) SR framework is presented, where only a subset of perceptually significant active pixels is selected for processing by the SR algorithm based on a local contrast sensitivity threshold model and a proposed low complexity saliency detector. The proposed saliency detector utilizes a probability of detection rule inspired by concepts of luminance masking and visual attention. The second part of this thesis further enhances the efficiency of selective SR approaches by presenting an ATtentive (AT) SR framework that is completely driven by VA region detectors. Additionally, different VA techniques that combine several low-level features, such as center-surround differences in intensity and orientation, patch luminance and contrast, bandpass outputs of patch luminance and contrast, and difference of Gaussians of luminance intensity are integrated and analyzed to illustrate the effectiveness of the proposed selective SR frameworks. The proposed AT-SELP SR and AT-SR frameworks proved to be flexible by integrating a Maximum A Posteriori (MAP)-based SR algorithm as well as a fast two-stage Fusion-Restoration (FR) SR estimator. By adopting the proposed selective SR frameworks, simulation results show a significant average reduction in computational complexity with comparable visual quality in terms of quantitative metrics such as PSNR, SNR or MAE gains, and subjective assessment. The third part of this thesis proposes a Perceptually Weighted (WP) SR technique that incorporates unequal weighting parameters in the cost function of iterative SR problems. The proposed approach is inspired by the unequal treatment that the Human Visual System (HVS) gives to different local features in an image. Simulation results show an enhanced reconstruction quality and faster convergence rates when applied to the MAP-based and FR-based SR schemes. / Dissertation/Thesis / Ph.D. Electrical Engineering 2011
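To make the "perceptually weighted" idea concrete, here is a hypothetical and heavily simplified sketch of a weighted back-projection loop for multi-frame SR; the average-pooling forward model, the weight maps and the step size are assumptions for illustration only and do not reproduce the AT-SELP, MAP or FR estimators described above.

```python
import numpy as np

def decimate(x, f):
    """Toy forward model: average-pool the HR image by factor f."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def decimate_adj(r, f):
    """Adjoint of average pooling: spread each LR residual over its f x f block."""
    return np.kron(r, np.ones((f, f))) / (f * f)

def weighted_sr(lr_frames, weights, f, n_iter=100, mu=0.5):
    """Gradient descent on sum_k ||W_k^(1/2) (D x - y_k)||^2, where the diagonal
    weights W_k emphasise perceptually significant LR pixels (e.g. salient or
    high-contrast regions) and de-emphasise the rest."""
    x = np.kron(lr_frames[0], np.ones((f, f)))          # crude initial HR estimate
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y_k, w_k in zip(lr_frames, weights):
            grad += decimate_adj(w_k * (decimate(x, f) - y_k), f)
        x -= mu * grad                                  # weighted back-projection step
    return x

# Minimal usage with synthetic data (no inter-frame motion, uniform weights).
f = 2
rng = np.random.default_rng(1)
hr = rng.random((32, 32))
frames = [decimate(hr, f) for _ in range(4)]
w_maps = [np.ones_like(fr) for fr in frames]
sr = weighted_sr(frames, w_maps, f)
```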
487

DCE: the dynamic conditional execution in a multipath control independent architecture / DCE: execução dinâmica condicional em uma arquitetura de múltiplos fluxos com independência de controle

Santos, Rafael Ramos dos January 2003 (has links)
This thesis presents DCE, or Dynamic Conditional Execution, as an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a result, a smaller number of predictions is performed, and therefore a smaller number of branches is mispredicted. DCE fetches through selected branches, avoiding disruptions in the fetch flow when these branches are fetched. Both paths of selected branches are executed, but only the correct path commits. In this thesis we propose an architecture to execute multiple paths of selected branches. Branches are selected based on their size and other conditions. Simple and complex branches can be dynamically predicated without requiring a special instruction set or special compiler optimizations. Furthermore, a technique to reduce part of the overhead generated by the execution of multiple paths is proposed. Performance gains reach up to 12% when a Local predictor is used in DCE and a Global predictor in the reference machine. When both machines use a Local predictor, the speedup averages 3-3.5%.
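A back-of-the-envelope way to see why executing both paths of selected branches can pay off is sketched below; all numbers are invented for illustration, and the measured results are the 3-12% figures quoted above.

```python
def expected_branch_cost(mispredict_rate, penalty, dual_path_fraction, dual_path_overhead):
    """Average cycles lost per branch when a fraction of branches is executed
    down both paths (so it never mispredicts, at the price of some wasted issue
    bandwidth) and the remaining branches are predicted as usual."""
    predicted = (1.0 - dual_path_fraction) * mispredict_rate * penalty
    duplicated = dual_path_fraction * dual_path_overhead
    return predicted + duplicated

baseline = expected_branch_cost(0.08, 20, 0.0, 0.0)   # predict every branch
dce_like = expected_branch_cost(0.08, 20, 0.4, 1.0)   # 40% of branches dual-pathed
print(baseline, dce_like)                             # roughly 1.6 vs 1.36 cycles per branch
```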
488

RST: Reuse through Speculation on Traces / RST: Reuso Especulativo de Traces

Pilla, Mauricio Lima January 2004 (has links)
In this thesis, we present a novel approach to combine both reuse and prediction of dynamic sequences of instructions, called Reuse through Speculation on Traces (RST). Our technique allows the dynamic identification of instruction traces that are redundant or predictable, and the reuse (speculative or not) of these traces. RST addresses the issue, present in Dynamic Trace Memoization (DTM), of traces not being reused because some of their inputs are not ready for the reuse test. These traces were measured to be 69% of all reusable traces in previous studies. One of the main advantages of RST over simply combining a value prediction technique with an unrelated reuse technique is that RST does not require extra tables to store the values to be predicted. Applying reuse and value prediction through unrelated mechanisms at the same time may require a prohibitive amount of table storage. In RST, the values are already stored in the Trace Memoization Table, and there is no extra cost in reading them compared with a non-speculative trace reuse technique.
The input context of each trace (the input values of all instructions in the trace) already stores the values for the reuse test, which may also be used for prediction. Our main contributions include: (i) a speculative trace reuse framework that can be adapted to different processor architectures; (ii) specification of the modifications in a superscalar, superpipelined processor in order to implement our mechanism; (iii) study of implementation issues related to this architecture; (iv) study of the performance limits of our technique; (v) a performance study of a realistic, constrained implementation of RST; and (vi) simulation tools that can be used in other studies which represent a superscalar, superpipelined processor in detail. In a constrained architecture with realistic confidence, our RST technique is able to achieve average speedups (harmonic means) of 1.29 over the baseline architecture without reuse and 1.09 over a non-speculative trace reuse technique (DTM).
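As a toy illustration of the memoization idea underlying DTM and RST (not the actual hardware structures), a trace can be keyed by its starting PC and input context; on a hit the stored outputs are reused, and in RST the same stored values also serve as predictions when part of the input context is not yet available.

```python
class TraceMemoTable:
    """Minimal software model of a trace memoization table."""

    def __init__(self):
        self.table = {}                                   # (pc, inputs) -> outputs

    def record(self, pc, inputs, outputs):
        self.table[(pc, tuple(inputs))] = tuple(outputs)

    def lookup(self, pc, inputs):
        return self.table.get((pc, tuple(inputs)))        # None means "execute normally"

memo = TraceMemoTable()
memo.record(pc=0x400, inputs=(3, 5), outputs=(8, 15))     # trace executed once, stored
reused = memo.lookup(pc=0x400, inputs=(3, 5))             # hit: outputs reused, trace skipped
```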
489

Reusing values in a dynamic conditional execution architecture / Reusando Valores em uma Arquitetura com Execução Condicional Dinâmica

Santos, Tatiana Gadelha Serra dos January 2004 (has links)
The Dynamic Conditional Execution (DCE) is an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a consequence, a smaller number of predictions is performed, and therefore a lower number of branches is mispredicted. Nevertheless, like other multipath solutions, DCE requires a more complex control engine. In a DCE architecture, one may observe that several replicas of the same instruction are dispatched to the functional units, blocking resources that might be used by other instructions. Those replicas are produced after the join point of the paths and are required to guarantee the correct semantics among data-dependent instructions. Moreover, DCE continues producing replicas until the branch that generated the paths is resolved. Thus, a whole section of code may be replicated, harming performance. A natural alternative to this problem is to reuse those replicated sections, namely the replicated traces. The goal of this work is to analyze and evaluate the effectiveness of value reuse in the DCE architecture. As will be presented, the principle of reuse, at different granularities, can effectively reduce the replica problem and lead to performance improvements.
490

Development of Nanobodies to Image Synaptic Proteins in Super-Resolution Microscopy

Maidorn, Manuel 15 November 2017 (has links)
No description available.
