151

Metamodeling Driven IP Reuse for System-on-chip Integration and Microprocessor Design

Mathaikutty, Deepak Abraham 02 December 2007
This dissertation addresses two important problems in reusing intellectual properties (IPs) in the form of reusable design or verification components. The first problem concerns the fast and effective integration of reusable design components into a System-on-chip (SoC), so that a faster design turn-around time, and hence a faster time-to-market, can be achieved. The second problem has the same goal of a faster product design cycle, but emphasizes verification model reuse rather than design component reuse; specifically, it addresses the reuse of verification IPs to enable a "write once, use many times" verification strategy. The dissertation is accordingly divided into Part I and Part II, which describe these two problems and our solutions to them. These two related but distinct problems faced by system design companies are tackled through an approach that heretofore has been used only in the software engineering domain: metamodeling, which allows creating customized meta-languages to describe the syntax and semantics of a modeling domain. Metamodeling provides a way to create, transform, and analyze domain-specific languages, which are themselves described by metamodels, and the transformation and processing of models in such languages are likewise described by metamodels; this makes machine-based interpretation and translation of these models an easier and more formal task. In Part I, we consider the problem of rapid system-level integration of existing reusable components such that (i) the required architecture of the SoC can be expressed formally, (ii) components can be selected automatically from an IP library to match the needs of the system being integrated, (iii) the integrability of the components is provable, or checkable automatically, and (iv) structural and behavioral type systems for each component can be exploited through inferencing and matching techniques to ensure their compatibility. Our solutions include a component composition language, algorithms for component selection, type matching and inferencing algorithms, temporal-property-based behavioral typing, and finally a software system built on top of an existing metamodeling environment. In Part II, we use the same metamodeling environment to create a framework for modeling generative verification IPs. Our main contributions relate to INTEL's microprocessor verification environment, and our solution spans multiple abstraction levels (system, architecture, and microarchitecture). We provide a unified language that can be used to model verification IPs at all these abstraction levels, from which verification collaterals such as testbenches, simulators, and coverage monitors can be generated, thereby enhancing reuse in verification. / Ph. D.
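
As a rough illustration of the component selection and type-matching idea in Part I, the sketch below matches a required SoC interface against an IP library by structural port compatibility. The `Port` and `Component` classes and the matching rule are hypothetical simplifications, not the dissertation's actual component composition language.

```python
# Illustrative sketch (not the dissertation's actual language): structural
# type matching for selecting a reusable IP component from a library.
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    direction: str  # "in" or "out"
    width: int      # bit width

@dataclass
class Component:
    name: str
    ports: tuple

def structurally_compatible(required: Component, candidate: Component) -> bool:
    """A candidate matches if every required port has a same-direction,
    same-width counterpart (port names may differ)."""
    available = list(candidate.ports)
    for req in required.ports:
        match = next((p for p in available
                      if p.direction == req.direction and p.width == req.width),
                     None)
        if match is None:
            return False
        available.remove(match)
    return True

# A required interface for the SoC being integrated, and a small IP library.
need = Component("dma_slot", (Port("addr", "in", 32), Port("data", "out", 64)))
library = [
    Component("dma_v1", (Port("a", "in", 32), Port("d", "out", 64))),
    Component("uart",   (Port("rx", "in", 1),  Port("tx", "out", 1))),
]
print([c.name for c in library if structurally_compatible(need, c)])  # ['dma_v1']
```
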
152

Gait termination on a declined surface in trans-femoral amputees: Impact of using microprocessor-controlled limb system

Abdulhasan, Zahraa M., Scally, Andy J., Buckley, John 30 May 2018
Walking down ramps is a demanding task for transfemoral amputees, and terminating gait on ramps is even more challenging because of the requirement to maintain a stable limb that can do the necessary negative mechanical work on the centre-of-mass in order to arrest (dissipate) forward/downward velocity. We determined how the use of a microprocessor-controlled limb system (simultaneous control over hydraulic resistances at ankle and knee) affected the negative mechanical work done by each limb when transfemoral amputees terminated gait during ramp descent. Methods: Eight transfemoral amputees completed planned gait terminations (stopping on the prosthesis) on a 5-degree ramp from slow and customary walking speeds, with the limb's microprocessor active or inactive. When active, the limb operated in its 'ramp-descent' mode; when inactive, the knee and ankle devices functioned at constant default levels. Negative limb work, determined as the integral of the negative mechanical (external) limb power during the braking phase, was compared across speeds and microprocessor conditions. Findings: Negative work done by each limb increased with speed (p < 0.001), and on the prosthetic limb it was greater when the microprocessor was active compared to inactive (p = 0.004). There was no change in work done across microprocessor conditions on the intact limb (p = 0.35). Interpretation: Greater involvement of the prosthetic limb when the limb system was active indicates its ramp-descent mode effectively altered the hydraulic resistances at the ankle and knee. The findings highlight that participants became more assured using their prosthetic limb to arrest centre-of-mass velocity. / ZA is funded by the Higher Committee of Education Development in IRAQ (HCED student number D13 626).
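
The outcome measure is straightforward to compute from limb power data. The sketch below shows the calculation on a made-up power trace (not the study's data): negative limb work is the time integral of the negative portion of external limb power over the braking phase.

```python
# A minimal sketch of the outcome measure: negative limb work as the time
# integral of the negative portion of external limb power over the braking
# phase. The power trace below is invented for illustration.
import numpy as np

t = np.linspace(0.0, 0.6, 601)                  # braking phase, seconds
power = -80.0 * np.sin(np.pi * t / 0.6)         # illustrative limb power, watts

negative_power = np.minimum(power, 0.0)         # keep only energy absorption
work = np.trapezoid(negative_power, t)          # np.trapz on NumPy < 2.0
print(f"negative limb work: {work:.1f} J")      # about -30.6 J here
```
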
153

Science of the Small: Nanotechnology Research Laboratory in Washington, D.C.

Porter, Gregory Thomas 13 February 2010
This thesis is an attempt to explore post-industrial society and how modern industry can become part of the urban experience. Through the design of a nanotechnology research laboratory, I was able to discover a connection between modern architecture and nanotechnology revolving around the topics of scale, layering, and revealing. / Master of Architecture
154

Characterization and management of voltage noise in multi-core, multi-threaded processors

Kim, Youngtaek 14 July 2014
Reliability is one of the most important issues in modern microprocessor design. Processors must behave correctly, as users expect, and must not fail at any time. However, unreliable operation can be caused by excessive supply-voltage fluctuations due to the inductive part of a microprocessor's power distribution network. This voltage-fluctuation issue is referred to as inductive or di/dt noise, and it requires thorough analysis and sophisticated design solutions. This dissertation proposes an automated stressmark generation framework to characterize the di/dt noise effect, and suggests a practical solution for managing di/dt effects while achieving performance and energy goals. First, the di/dt noise issue is analyzed from theory through to a practical view. Inductance is a parasitic element of a microprocessor's power distribution network, and its characteristics, such as resonant frequencies, are reviewed. It is then shown that supply-voltage fluctuation from resonant behavior is much more harmful than single-event voltage fluctuations. Voltage fluctuations caused by standard benchmarks such as SPEC CPU2006, PARSEC, and Linpack are studied. Next, an AUtomated DI/dT stressmark generation framework, referred to as AUDIT, is proposed to identify the maximum voltage droop in a microprocessor power distribution network. The di/dt stressmark generated by the AUDIT framework is an instruction sequence that draws periodic high and low current pulses to maximize voltage fluctuations, including voltage droops. AUDIT uses a genetic algorithm to schedule and optimize candidate instruction sequences toward a maximum voltage droop, and it provides both simulation and hardware measurement methods for finding maximum voltage droops at different design and verification stages of a processor. Failure points in hardware due to voltage droops are analyzed. Finally, a hardware technique, floating-point (FP) issue throttling, is examined, which reduces the worst-case voltage droop. This dissertation shows the impact of floating-point throttling on voltage droop, and translates this reduction in voltage droop into an increase in operating frequency, because additional guardband is no longer required to protect against droops resulting from heavy floating-point usage. Two techniques are presented to dynamically determine when to trade off FP throughput for reduced voltage margin and increased frequency; these techniques can work at the software level without any modification of existing hardware. / text
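
To make the stressmark-generation idea concrete, here is a toy sketch of a genetic algorithm evolving a two-class ("HI"/"LO" current) instruction sequence against a second-order power-delivery model; the fittest sequences are those that excite the network's resonance and maximize the simulated droop. All constants are invented, and nothing here reproduces AUDIT's actual cost model or instruction scheduling.

```python
# Toy GA-driven stressmark search: evolve an instruction sequence whose
# current draw excites the resonance of a damped second-order PDN model,
# maximizing the simulated voltage droop. Constants are illustrative.
import random
import math

CYCLES = 200
I_DRAW = {"HI": 1.0, "LO": 0.1}        # per-cycle current per instruction class
F_RES, ZETA, GAIN = 0.02, 0.05, 1.0    # resonance at 1/50 cycles, light damping

def droop(seq):
    """Drive a damped oscillator (one step per cycle) with the sequence's
    current waveform and report the worst voltage excursion."""
    w = 2 * math.pi * F_RES
    x = v = worst = 0.0                 # droop, its derivative, running max
    for op in seq:
        a = GAIN * I_DRAW[op] - 2 * ZETA * w * v - w * w * x
        v += a
        x += v
        worst = max(worst, x)
    return worst

def mutate(seq, rate=0.05):
    return [random.choice(("HI", "LO")) if random.random() < rate else op
            for op in seq]

def crossover(a, b):
    cut = random.randrange(1, CYCLES)
    return a[:cut] + b[cut:]

pop = [[random.choice(("HI", "LO")) for _ in range(CYCLES)] for _ in range(40)]
for gen in range(60):
    pop.sort(key=droop, reverse=True)
    elite = pop[:10]                    # keep the strongest stressmarks
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=droop)
print(f"best simulated droop: {droop(best):.2f} (arbitrary units)")
# Winning sequences tend toward HI/LO bursts near the resonant period.
```
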
155

Meta assembler and emulator for the Intel 8086 microprocessor

Shoaib, Rao Mohammad, 1960 - January 1989
This thesis describes a universal meta cross-assembler and an emulator for the Intel 8086 microprocessor. The utility is designed to be used as an instructional tool for teaching assembly-language programming to students. One implementation allows students to run Intel 8086 programs on the university's VAX mainframe, so that they can test their programs at their convenience; this setup also keeps operating costs low, with no additional equipment requirements. Several options are provided in the emulator for debugging the 8086 assembly-language programs written by students. The assembler, besides generating Intel 8086 machine code, is capable of generating machine code for a number of microprocessors or microcontrollers. The machine-code file generated by the assembler is the input to the emulator. Both the assembler and the emulator are completely portable and can be recompiled to run on any system with a standard C compiler.
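
The retargetability of a meta assembler comes from separating a generic encoding pass from per-target opcode tables. The sketch below illustrates that split with a deliberately tiny, hypothetical table format; the 8086 encodings shown (B8 iw, CD ib, 90, F4) are real, but the thesis's actual table structure is not reproduced here.

```python
# A minimal sketch of the table-driven idea behind a meta assembler: the
# encoder is generic, and per-target opcode tables make it retargetable.
import struct

# Target table: mnemonic -> (opcode bytes, struct format for immediate or None)
I8086 = {
    "NOP": (b"\x90", None),
    "HLT": (b"\xF4", None),
    "MOV_AX_IMM16": (b"\xB8", "<H"),   # B8 iw : MOV AX, imm16 (little endian)
    "INT_IMM8": (b"\xCD", "<B"),       # CD ib : INT imm8
}

def assemble(table, program):
    """Generic pass: look each (mnemonic, operand) up in the target table."""
    code = bytearray()
    for mnemonic, operand in program:
        opcode, imm_fmt = table[mnemonic]
        code += opcode
        if imm_fmt is not None:
            code += struct.pack(imm_fmt, operand)
    return bytes(code)

# MOV AX, 4C00h / INT 21h / NOP / HLT
prog = [("MOV_AX_IMM16", 0x4C00), ("INT_IMM8", 0x21), ("NOP", None), ("HLT", None)]
print(assemble(I8086, prog).hex(" "))  # b8 00 4c cd 21 90 f4
```
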
156

Projeto e construção de um digitalizador e promediador de dois canais para tomografia por ressonância magnética nuclear / Design and construction of a dual channel signal digitizer and averager for nuclear magnetic resonance tomography

Torre Neto, André 09 December 1988
This work describes the design, construction, and evaluation of a microprocessor-controlled signal digitizer and averager developed for use in nuclear magnetic resonance tomography (MRI). The digitizer has two input channels with simultaneous digitization of 256, 512, or 1024 words per channel at a sampling rate of up to 22.7 kHz. Twelve-bit resolution is obtained with successive-approximation A/D conversion. There are no manual controls, so a host computer is needed to adjust the parameters through a parallel communication interface provided for this purpose; optionally, an RS232C-EIA serial interface may be used, operating at speeds up to 9600 baud. The equipment locally computes the cumulative average of the signal, a technique employed to improve the signal-to-noise ratio in the case of random noise. A dedicated monitoring circuit allows both the signal and its average to be displayed on an X-Y monitor; because the average is cumulative, automatic scale adjustment is provided.
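
The averager's benefit can be seen in a few lines: for zero-mean random noise, the cumulative mean of N repeated sweeps improves the signal-to-noise ratio roughly as sqrt(N). The waveform and noise level below are invented for illustration.

```python
# Cumulative signal averaging: SNR of the running mean grows ~ sqrt(N)
# for zero-mean random noise. Signal and noise here are made up.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)                 # one 1024-word acquisition
signal = np.sin(2 * np.pi * 5 * t)              # stand-in for the "true" signal

def snr(estimate):
    noise = estimate - signal                   # residual after averaging
    return signal.std() / noise.std()

acc = np.zeros_like(signal)
for n in range(1, 257):
    acc += signal + rng.normal(0.0, 1.0, signal.size)  # one noisy sweep
    if n in (1, 16, 256):
        print(f"N={n:4d}  SNR ~ {snr(acc / n):5.2f}")  # roughly 0.7, 2.8, 11
```
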
157

Power Laws na modelagem de caches de microprocessadores. / Power Laws on the modeling of caches of microprocessors.

Scoton, Filipe Montefusco 10 June 2011
Power Laws are statistical laws that permeate the most varied fields of human knowledge, such as biology, sociology, geography, linguistics, and astronomy, and whose most important characteristic is the disparity among the causing elements: a few elements are responsible for the great majority of the effects. Famous examples are the Pareto Principle, Zipf's Law, and the Forest Fire model. The Pareto Principle says that 80% of a nation's wealth is in the hands of just 20% of the population; in other words, a cause-and-effect relationship called 80-20. Zipf's Law states that frequency versus rank of occurrence follows a hyperbolic curve with a behavior similar to 1/x. The Forest Fire model describes the growth of trees in a forest between successive fires that destroy clusters of trees. Power Laws demonstrate that a small percentage of a distribution has a high frequency of occurrence while the remaining cases have a low frequency, which yields a decreasing straight line on a logarithmic scale. Based on simulations using the SPEC CPU2000 benchmark suite, this work investigates how these distributions can be used to understand and improve the performance of caches under different cache-line replacement policies. A new replacement policy built around a Pareto cache, as well as a new mechanism for switching the behavior of adaptive cache-line replacement algorithms, called the Forest Fire Switching Mechanism, both based on Power Laws, are proposed in order to obtain performance gains in the execution of applications.
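
The premise is easy to check numerically: if cache-line accesses follow a 1/rank (Zipf-like) distribution, a small fraction of lines receives most of the accesses, which is exactly what a Pareto-style replacement policy would exploit. The line count and access count below are arbitrary, not the thesis's experimental setup.

```python
# Zipf-distributed cache-line accesses: the hottest 20% of lines capture
# roughly 80% of the traffic. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_lines = 1000
ranks = np.arange(1, n_lines + 1)
probs = 1.0 / ranks                              # Zipf: p(rank) ~ 1/rank
probs /= probs.sum()

accesses = rng.choice(n_lines, size=100_000, p=probs)
counts = np.bincount(accesses, minlength=n_lines)
hottest = np.sort(counts)[::-1]
share = hottest[: n_lines // 5].sum() / counts.sum()
print(f"top 20% of lines receive {100 * share:.0f}% of accesses")  # ~78%
```
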
159

Teste integrado de software e hardware : reusando casos de teste de software em teste de microprocessadores / Integrated test of software and hardware: reusing software test cases to test microprocessors

Meirelles, Paulo Roberto Miranda January 2008
Embedded systems are increasingly complex and are more and more used in contexts that demand substantial computational resources. This means the embedded hardware may be composed of several processors, memories, reconfigurable parts, and ASIPs integrated on a single die. Additionally, the embedded software may contain many routines executed under processing and memory constraints. This scenario establishes a strong dependency between the embedded hardware and software; therefore, testing an embedded system comprises testing both the hardware and the software. In this context, reusing test structures and procedures is one way to reduce test development and execution time. This work presents an integrated hardware and software test method in which test cases developed to test the embedded software are also used to test its processor. We compared the costs and fault coverage of the proposed method with functional self-test techniques. The experimental results show that it is possible to reduce test generation and application costs using an integrated software and hardware test method.
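
The core idea, at toy scale: run the software test suite against a processor model with injected faults and count how many faults some test case exposes. The four-bit ALU, the stuck-at fault model, and the three test cases below are invented for illustration, not the thesis's actual environment.

```python
# Software test cases doubling as hardware test patterns: fault coverage is
# the share of injected stuck-at faults that some test case exposes.
def alu(a, b, op, fault=None):
    """Toy 4-bit ALU; 'fault' forces one result bit to 0 or 1 (stuck-at)."""
    result = {"add": (a + b) & 0xF, "and": a & b, "xor": a ^ b}[op]
    if fault is not None:
        bit, value = fault
        result = (result & ~(1 << bit)) | (value << bit)
    return result

# Test cases from the (hypothetical) software test suite: (a, b, op, expected)
test_cases = [(3, 5, "add", 8), (12, 10, "and", 8), (9, 5, "xor", 12)]

faults = [(bit, v) for bit in range(4) for v in (0, 1)]  # stuck-at-0/1 per bit
detected = set()
for fault in faults:
    for a, b, op, expected in test_cases:
        if alu(a, b, op, fault) != expected:   # fault changes observed output
            detected.add(fault)
            break

print(f"fault coverage: {len(detected)}/{len(faults)} "
      f"({100 * len(detected) / len(faults):.0f}%)")   # 5/8 with this toy suite
```
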
160

Prospects of voltage regulators for next generation computer microprocessors

López Julià, Toni 18 June 2010
Multiphase architectures based on the synchronous buck converter are evaluated to determine whether or not the most widespread voltage-regulator topology can meet the power delivery requirements of next-generation computer microprocessors. According to the prognostications, the load current will rise to 200 A while the supply voltage decreases to 0.5 V, with staggeringly tight dynamic and static load-line tolerances. In view of these demands, researchers face serious challenges to bring forth compliant solutions that can further offer acceptable conversion efficiencies and minimum mainboard area occupancy. Among the most prominent investigation fronts are those surveying fundamental technology improvements aimed at making power semiconductor devices more effective at high switching frequency. The latter is of critical importance, as increasing the switching frequency is fundamentally recognized as the way forward to enhance power conversion density. Provided that switching losses must be kept low to enable the miniaturization of the filter components, one primary goal is to develop semiconductor and system integration technologies enabling fast dynamic operation of ultra-low ON-resistance power switches. This justifies the main focus of this thesis work, centered on a comprehensive analysis of MOSFET switching behavior in the synchronous buck converter. The MOSFET's dynamic operation, far from being well described by the traditional clamped inductive hard-switching mode, is strongly influenced by a number of frequently ignored linear and nonlinear parasitic elements that must be taken into account in order to fully predict real switching waveforms, understand their dynamics, and, most importantly, identify and quantify the related mechanisms leading to heat generation. This is revealed through in-depth investigations of the switched converter under fast switching speeds and heavy load. Recognizing the key relevance of appropriate modeling tools to support this task, the second focal point of the thesis is the development of a number of suitable models for the switching analysis of power MOSFETs. Combined with a series of design guidelines and optimization procedures, these models form the basis of a proposed methodological approach in which numerical computations replace the usually enormous experimental effort of elucidating the most effective pathways towards reducing power losses. This gives rise to the concept referred to as the virtual design loop, which is successfully applied to the development of a new power MOSFET technology offering outstanding dynamic and static performance characteristics. From a system perspective, the limits of power conversion density are explored for this and other emerging technologies that promise to open up a new paradigm in power integration capabilities.
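
For scale, a back-of-the-envelope sketch of the design point the thesis describes: an interleaved synchronous buck delivering 0.5 V at 200 A. The input voltage, phase count, inductance, and switching frequency below are illustrative assumptions, not the thesis's values; the very small duty cycle that falls out is one reason the topology is under strain.

```python
# Rough numbers for a multiphase synchronous buck at the projected load.
# All component values are illustrative assumptions.
V_IN, V_OUT = 12.0, 0.5           # input rail and projected core voltage
I_LOAD = 200.0                     # projected load current, amps
N_PHASES = 8                       # interleaved buck phases
F_SW = 1e6                         # per-phase switching frequency, Hz
L = 100e-9                         # per-phase inductor, henries

duty = V_OUT / V_IN                            # ideal (lossless) duty cycle
i_phase = I_LOAD / N_PHASES                    # DC current per phase
ripple = (V_IN - V_OUT) * duty / (L * F_SW)    # peak-to-peak inductor ripple

print(f"duty cycle    : {duty:.3f}")           # ~0.042 -> very narrow on-time
print(f"current/phase : {i_phase:.1f} A")
print(f"ripple/phase  : {ripple:.1f} A pk-pk")
```
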
