  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Functional timing analysis of VLSI circuits containing complex gates / Análise de timing funcional de circuitos VLSI contendo portas complexas

Guntzel, Jose Luis Almada January 2000
The recent advances in CMOS technology have allowed for the fabrication of transistors with submicron dimensions, making it possible to integrate tens of millions of devices on a single chip and to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that incorporate more appropriate physical and computational models. Timing verification aims to determine whether the timing constraints imposed on a design can be satisfied once it is fabricated. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli-dependent: to ensure that the critical situation is taken into account, one must exercise all possible input patterns, which is infeasible given the complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology to estimate delay, and are thus referred to as topological timing analyzers. Such a method may, however, produce overly pessimistic estimates, since the longest paths in the graph may be unable to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three respects: the set of conditions required to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques, and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received due attention. One such topic, and the central concern of this thesis, is the applicability of functional timing analysis to circuits containing complex gates. In addition, as a necessary step to set the scene, a detailed and systematic study of functional timing analysis is also presented.
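The topological analysis described in this abstract amounts to a longest-path computation over the circuit's directed acyclic graph. A minimal sketch in Python, assuming per-gate delays and a fanin list per node (the gate names and delay values below are illustrative, not taken from the thesis):

```python
def topological_delay(gates, inputs):
    """Longest-path (topological) delay estimate for a combinational DAG.

    gates:  dict mapping gate name -> (gate delay, list of fanin nodes)
    inputs: primary input nodes, assumed to arrive at time 0
    """
    arrival = {i: 0.0 for i in inputs}

    def arrive(node):
        # Arrival time = gate delay + latest fanin arrival (memoized).
        if node not in arrival:
            delay, fanins = gates[node]
            arrival[node] = delay + max(arrive(f) for f in fanins)
        return arrival[node]

    return max(arrive(g) for g in gates)

# Tiny example: inputs a, b feed an AND gate, which feeds an OR gate.
gates = {
    "and1": (2.0, ["a", "b"]),
    "or1":  (1.5, ["and1", "b"]),
}
print(topological_delay(gates, ["a", "b"]))  # → 3.5
```

A functional analyzer would go further: before reporting the longest path it would check, e.g. via an ATPG or SAT formulation, whether any input vector can actually sensitize it, discarding false paths.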
302

Evaluation et timing des fusions-acquisitions : une approche par les options réelles / Valuation and timing of mergers and acquisitions: a real options approach

Ben Flah, Inès 09 December 2011
This thesis aims to show the conceptual and empirical value of a real options approach to the valuation and timing of mergers and acquisitions. We first draw on a broad literature on mergers and acquisitions and the real options embedded in them. Noting the lack of empirical contributions in this literature, especially on the pre-closing phases in which the acquirer values the target and chooses the optimal timing to close the deal, we conducted two empirical studies. The first is an exploratory qualitative study based on interviews with mergers and acquisitions professionals. Its results allowed us to examine in depth the particularities of valuing and timing such deals and to bring out new categories of real options present in the successive phases of the valuation process and in the choice of timing; these options were then classified into strategic growth options and flexibility options. Once the options were identified, we moved to our second empirical study, a real-world case study of an acquisition project. Its aim is to expose the valuation and timing problems that arise when the acquirer relies on traditional techniques such as Net Present Value. The limits of these methods lead us to propose the real options approach as better suited to valuing and timing mergers and acquisitions under uncertainty. We first value the acquisition opportunity and study the timing of its conclusion using a simple-option methodology, with three valuation methods: the continuous-time model (Black-Scholes), the discrete-time binomial model, and Monte Carlo simulation. The second proposed solution approaches valuation and timing through a multi-stage compound option methodology; for this we use the binomial model adapted by Mun (2010) and propose a custom Visual Basic model of the sequences of options on options tied to the valuation process and the choice of timing.
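One of the valuation methods adopted above is a discrete-time binomial lattice. A minimal Cox-Ross-Rubinstein sketch for a simple (European-style) option in Python; all parameter values are illustrative, not figures from the case study:

```python
import math

def binomial_call(S, K, r, sigma, T, n):
    """Price a European call on a Cox-Ross-Rubinstein binomial lattice.

    S: current value of the underlying (e.g. the target's cash flows)
    K: strike (e.g. the acquisition cost), r: risk-free rate,
    sigma: volatility, T: maturity in years, n: number of lattice steps.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs at the leaves, then backward induction to the root.
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# At-the-money example; converges toward the Black-Scholes value (~10.45).
print(round(binomial_call(100, 100, 0.05, 0.2, 1.0, 200), 2))
```

The compound (option-on-option) methodology attributed to Mun (2010) in the abstract stacks such lattices, valuing each stage's option with the next stage's lattice value as its underlying.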
303

Implication de l’ADN polymérase spécialisée zêta au cours de la réplication de l’hétérochromatine dans les cellules de mammifères / Involvement of the specialized DNA polymerase zeta during heterochromatin replication in mammalian cells

Ahmed-Seghir, Sana 24 September 2015
DNA polymerase zeta (Polζ) is a key player in translesion DNA synthesis (TLS). Polζ is unique among TLS polymerases in mammalian cells, because inactivation of the gene encoding its catalytic subunit (Rev3L) leads to embryonic lethality in the mouse, whereas most other specialized DNA polymerases are dispensable. Little is known, however, about its biological functions under normal growth conditions. Here we show that S-phase progression is impaired in Rev3L-/- MEFs, with a delay in mid and late S phase. Genome-wide profiling of replication timing revealed that Rev3L inactivation induces changes in the temporal replication program, mainly in particular genomic regions in which the replication machinery propagates at a slower velocity. We also found a global enrichment of repressive histone modifications as well as hypermethylation of major satellite DNA repeats in Rev3L-deficient cells, suggesting that forks slow down or stall at specific locations, and that a delay in restarting them may promote heterochromatin formation in the absence of Rev3L. As a direct or indirect consequence, several genes involved in growth and development are down-regulated in Rev3L-/- MEFs, which might explain the embryonic lethality observed in Rev3L knockout mice. Finally, we discovered that the heterochromatin organizer HP1α interacts directly with Rev3L through a PxVxL motif and recruits it to pericentromeric heterochromatin. We therefore propose that Polζ has been co-opted by evolution to assist the replicative DNA polymerases δ and ε in duplicating condensed chromatin domains during mid and late S phase.
304

Développement, validation clinique et valorisation d'une nouvelle technologie pour la rééducation de la dextérité manuelle / Development, clinical validation and evaluation of a new technological tool for the rehabilitation of manual dexterity

Térémetz, Maxime 27 September 2016
Manual dexterity is essential for our physical interaction with the world. The high degree of dexterity in humans requires sophisticated control of several key components, such as the force, independence, timing and sequencing of finger movements. Manual dexterity is affected in various pathologies, impacting activities of daily living and leading to loss of independence. The main purpose of this thesis is to improve the rehabilitation of dexterity in these patients through better behavioral quantification and a clearer understanding of manual dexterity and its components. We developed the Finger Force Manipulandum (FFM), a new tool for quantifying the main components of dexterity in healthy subjects and in patients. To validate the device, we tested the feasibility of its use with stroke patients suffering from moderate-to-severe deficits of dexterity. In these patients, the FFM allowed quantification of four components of dexterity and identification of deficits in each of them (for example, patients (N=10) made three times more errors than controls (N=10) in force control; P=0.0002). These measures are more sensitive than clinical tests such as the ARAT: patients reaching the maximum ARAT score still showed deficits of dexterity with the FFM. From the four FFM scores, individual profiles of affected dexterity were computed, highlighting each patient's specific deficits and allowing quantitative longitudinal follow-up during recovery. In a disease affecting dexterity mildly, such as schizophrenia, the FFM scores of stabilized patients (N=35) indicated significantly lower performance than control subjects (N=20) in each of the four dexterity components. Some of the FFM measures correlated with clinical scales, such as the PANSS (R=0.53, P=0.0019), and also with some neuropsychological scales. The FFM measures are also sensitive to the evolution of dexterity over time: certain components remained stable after cognitive remediation, while others improved. In conclusion, the FFM is a new tool for quantifying manual dexterity through its underlying components. It is usable even by patients with moderate-to-severe manual deficits, identifies individual profiles of affected dexterity, and detects minor manual deficits in schizophrenic patients. It may thus allow the identification of behavioral markers related to the neurodevelopmental background of schizophrenic patients (early detection) and to the evolution of the disease.
305

A música tímida de João Gilberto / The shy music of João Gilberto

Menezes, Enrique Valarelli 16 October 2012
In this work I examine João Gilberto's relation to samba and to traditional modes of Brazilian singing, particularly the syncopated samba style. Through this key representative of a style that brought Brazilian music to the center of the cultural industry, I search for the continuities and developments this style promotes with respect to the samba made in the outskirts of Brazil's emerging cities. Inverting the frequently biographical orientation of the traditional bibliography, I look for the innovations João Gilberto brought to the musical parameters of timbre, duration, pitch and intensity. Far from dismissing the biographical studies already devoted to the artist, this strategy of inversion intends to foster a new field of debate on equally solid foundations, in which it becomes possible to engage the traditional bibliography from a new angle: that of musicology.
306

Mário de Andrade e a síncopa do Brasil / Mário de Andrade and the syncopation of Brazil

Menezes, Enrique Valarelli 06 April 2017
This work is divided into two parts. In the first, I transcribe an unpublished manuscript by Mário de Andrade titled "Síncopa" ("Syncopation"), belonging to the series "Manuscritos do autor" of the Mário de Andrade Archives, now held at the Institute of Brazilian Studies (IEB/USP). The manuscript consists of a collection of notes on the subject, made over time and gathered in the personal files of the poet and musicologist. To the transcription I added analyses, a contextualization of the notes, and connections to the author's published works. In the second part, I build my thesis on syncopation in Brazil by developing the ideas and methodology set out in Mário de Andrade's manuscript, and seek to support it through various analyses of the rhythmic structure of Brazilian popular music, and of syncopation in particular.
307

Programação de tarefas em um ambiente flow shop com m máquinas para a minimização do desvio absoluto total de uma data de entrega comum / Scheduling in an m-machine flow shop for the minimization of the total absolute deviation from a common due date

Vasquez, Julio Cesar Delgado 28 August 2017
In this work we address the permutation flow shop scheduling problem with more than two machines. We restrict the study to the case in which all jobs share a common, restrictive due date, and the objective is to minimize the total sum of the earliness and tardiness of the jobs relative to that date. A static and deterministic environment is assumed. Among solutions of equal cost, we prefer those involving less waiting time in the buffer between machines. Given the difficulty of solving the problem even for small instances (it belongs to the NP-hard class), we present a heuristic approach based on local search, which uses a linear-time algorithm to assign completion times to the jobs on the last machine. This algorithm relies on analytical properties of optimal solutions. In addition, a mixed integer linear programming (MILP) formulation of the problem was developed, which allows us to validate the effectiveness of the heuristic approach. We also evaluated the performance of the heuristics on standard benchmarks and compared our results with those in the literature.
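The objective evaluated by such a local search — total absolute deviation of last-machine completion times from the common due date — can be computed for a candidate permutation with the usual flow shop recurrence C[j][m] = max(C[j][m-1], C[prev][m]) + p[j][m]. A minimal sketch (the instance data are illustrative, not a thesis benchmark):

```python
def total_deviation(perm, proc, due):
    """Total absolute deviation from a common due date `due` in a
    permutation flow shop. proc[j][m] = time of job j on machine m;
    jobs are processed in the order given by `perm`."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines   # completion time of the previous job, per machine
    total = 0.0
    for j in perm:
        prev = 0.0                # completion of job j on the previous machine
        for m in range(n_machines):
            # Job j starts on machine m when both the machine and the job are free.
            prev = max(prev, finish[m]) + proc[j][m]
            finish[m] = prev
        total += abs(prev - due)  # prev now holds C_j on the last machine
    return total

# Two machines, three jobs, common due date 10.
proc = [[3, 2], [2, 4], [4, 1]]
print(total_deviation([0, 1, 2], proc, 10))  # → 6.0
```

A local search of the kind described above would repeatedly perturb `perm` (e.g. by swaps or insertions) and keep moves that lower this value, breaking ties in favor of schedules with less buffer waiting.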
308

Contribution à la conception d'architecture de calcul auto-adaptative intégrant des nanocomposants neuromorphiques et applications potentielles / Adaptive Computing Architectures Based on Nano-fabricated Components

Bichler, Olivier 14 November 2012
In this thesis, we study the potential applications of emerging memory nano-devices in computing architectures. More precisely, we show that neuro-inspired architectural paradigms could provide the efficiency and adaptability required by complex image/audio processing and classification applications, at a much lower cost in terms of power consumption and silicon area than current Von Neumann-derived architectures, thanks to a synaptic-like usage of these memory nano-devices. This work focuses on memristive nano-devices, recently (re-)introduced with the discovery of the memristor in 2008, and their use as synapses in spiking neural networks. This includes most of the emerging memory technologies: Phase-Change Memory (PCM), Conductive-Bridging RAM (CBRAM), Resistive RAM (RRAM)... 
These devices are particularly well suited to the implementation of unsupervised learning algorithms drawn from neuroscience, such as Spike-Timing-Dependent Plasticity (STDP), which require very little control circuitry. The integration of memristive devices in crossbar arrays could in addition provide the huge density required by this type of implementation (several thousand synapses per neuron), which remains out of reach of a purely Complementary Metal Oxide Semiconductor (CMOS) technology. This is one of the main factors that hindered the rise of CMOS-based neural network computing architectures in the 1990s, along with the relative complexity and inefficiency of the back-propagation learning algorithm, despite all the promising aspects of such neuro-inspired architectures, like adaptability and fault tolerance. In this work, we propose synaptic models for memristive devices and simulation methodologies for architectures exploiting them. Novel neuro-inspired architectures are introduced and simulated for natural data processing. They exploit the synaptic characteristics of memristive nano-devices, along with the latest progress in neuroscience. Finally, we propose hardware implementations suited to several device types. We assess their scalability and power-efficiency potential, as well as their robustness to variability and faults, which are unavoidable at the nanometric scale of these devices. This last point is of prime importance, as it remains today the main difficulty for the integration of these emerging technologies in digital memories.
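The STDP rule the abstract refers to can be sketched as follows. This is a minimal illustration of a pair-based STDP update, not the device-level model proposed in the thesis: all constants (`a_plus`, `a_minus`, the time constants, and the weight bounds) are illustrative assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.025,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes
    the post-synaptic spike, depress otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                       # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:                             # post before pre -> depression
        dw = -a_minus * np.exp(dt / tau_minus)
    # Clip to the device's conductance range, as a memristive synapse would
    return float(np.clip(w + dw, w_min, w_max))

# A causal pairing (pre 5 ms before post) strengthens the synapse
w0 = 0.5
w1 = stdp_update(w0, t_pre=10.0, t_post=15.0)
assert w1 > w0
```

The appeal for memristive crossbars is that this local, two-terminal update needs no global controller: each device adjusts its own conductance from the relative timing of the spikes it sees.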
309

Efeitos do market timing sobre a estrutura de capital de companhias abertas brasileiras / Market timing effects on capital structure of Brazilian public companies

Albanez, Tatiana 16 October 2012 (has links)
According to market timing theory, companies take advantage of windows of opportunity to raise funds, aiming to exploit temporary fluctuations in the cost of alternative sources of financing. Capital structure would thus be determined by past attempts to issue securities at moments considered favorable for issuance. 
This thesis examines market timing behavior in Brazilian public companies, verifying the existence and persistence of opportunistic behavior in the choice among different sources of financing. To this end, two complementary studies were developed. First, market timing behavior is investigated by analyzing the influence of historical market values on the capital structure of Brazilian companies that carried out IPOs between 2001 and 2011. The main result is a negative relation between historical market values and leverage: at moments of high market value, companies reduce indebtedness because issuing equity is more advantageous, and vice versa, which may indicate opportunistic behavior when raising funds. However, this behavior is not persistent throughout the period to the point of determining the capital structure of these companies. It was therefore deemed necessary to examine directly the effects of market timing on capital structure by relating cost-of-capital indicators (equity and debt) to the companies' indebtedness levels. Two samples were used: the first comprised 235 active public companies listed on the BM&FBOVESPA, analyzed over 2000-2011; the second comprised 75 active public companies with credit ratings assigned by the major rating agencies, analyzed over 2005-2011. 
Four proxies were used for the cost of equity, based on the Capital Asset Pricing Model (CAPM), and two proxies for the cost of debt, one based on the average cost of interest-bearing liabilities and the other on the companies' credit ratings, the latter tested only for sample 2. The results obtained with panel data models indicate that the higher the cost of equity, the higher the level of indebtedness, and the higher the cost of debt, the lower the use of debt as a financing source. 
These results agree with what market timing theory predicts, reflecting that companies are attentive to the cost of different sources of funds in the search for the best financing alternatives. This behavior is justified and confirmed by the results obtained: the first study shows that market value, on average, fell after the IPO, making new equity issues undesirable and the use of debt preferable. The second study shows that the cost-of-capital proxies were the most significant variables, exerting strong influence on the companies' capital structure. Taken together, the results confirm the proposed thesis: market timing influences the capital structure of Brazilian public companies, and the companies take advantage of windows of opportunity to raise funds to finance their investment projects.
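The empirical design above (cost-of-capital proxies regressed on leverage) can be illustrated with a toy example. This is a simplification, not the thesis's estimation: a CAPM cost-of-equity proxy and a pooled OLS on synthetic firm-year data, where all numbers are hypothetical and the panel models used in the thesis are richer than a pooled regression.

```python
import numpy as np

def capm_cost_of_equity(rf, beta, market_premium):
    """CAPM proxy: k_e = r_f + beta * (E[r_m] - r_f)."""
    return rf + beta * market_premium

# Hypothetical firm-year observations
rng = np.random.default_rng(0)
n = 200
k_e = capm_cost_of_equity(0.06, rng.uniform(0.5, 1.5, n), 0.05)  # cost of equity
k_d = rng.uniform(0.08, 0.18, n)                                  # cost of debt
leverage = 0.2 + 0.8 * k_e - 0.9 * k_d + rng.normal(0, 0.02, n)   # simulated DGP

# Pooled OLS: leverage on a constant and the two cost-of-capital proxies
X = np.column_stack([np.ones(n), k_e, k_d])
coef, *_ = np.linalg.lstsq(X, leverage, rcond=None)

# Signs expected under market timing: positive on k_e, negative on k_d
assert coef[1] > 0 and coef[2] < 0
```

The sign pattern mirrors the thesis's finding: firms lever up when equity is expensive and avoid debt when debt is expensive.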
310

Utilisation des nano-composants électroniques dans les architectures de traitement associées aux imageurs / Integration of memory nano-devices in image sensors processing architecture

Roclin, David 16 December 2014 (has links)
By using learning mechanisms drawn from recent discoveries in neuroscience, spiking neural networks have demonstrated their ability to efficiently analyze the large amounts of data coming from our environment. Implementing such circuits on conventional processors does not allow their parallelism to be exploited efficiently. Using digital memory to implement the synaptic weights allows neither parallel reading nor parallel programming of the synapses, and is limited by the bandwidth of the connection between the memory and the processing unit. Emerging memristive memory technologies could allow this parallelism to be implemented directly in the heart of the memory. 
In this thesis, we consider the development of an embedded spiking neural network based on emerging memory devices. First, we analyze a spiking network to optimize its different components: the neuron, the synapse, and the STDP learning mechanism, with a view to a digital implementation. Then, we consider implementing the synaptic memory with emerging memristive devices. Finally, we present the development of a neuromorphic chip co-integrating CMOS neurons with CBRAM synapses.
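The kind of hardware-friendly neuron optimization mentioned above can be sketched as an integer leaky integrate-and-fire update, where the leak is a bit shift rather than a multiply. This is an illustrative model only; the threshold, shift amount, and weight are assumed values, not parameters from the thesis.

```python
def lif_step(v, spike_in, weight, leak_shift=4, threshold=1024, v_reset=0):
    """One time step of an integer leaky integrate-and-fire neuron.
    The exponential leak is approximated by v -= v >> leak_shift,
    which costs a shift and a subtraction in digital hardware."""
    v = v - (v >> leak_shift)          # leak via bit shift (no multiplier)
    if spike_in:
        v += weight                    # integrate the synaptic weight
    if v >= threshold:
        return v_reset, True           # fire and reset
    return v, False

# Drive the neuron with a constant input spike train until it fires
v, fired, steps = 0, False, 0
while not fired and steps < 100:
    v, fired = lif_step(v, spike_in=True, weight=80)
    steps += 1
assert fired
```

With `leak_shift=4` the membrane potential settles toward `16 * weight`, so a sustained input of weight 80 eventually crosses the threshold of 1024; integer-only arithmetic like this is what makes thousands of neurons affordable on an embedded digital chip.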
