151 |
Biomechanics of ramp descent in unilateral trans-tibial amputees: Comparison of a microprocessor controlled foot with conventional ankle–foot mechanisms
Struchkov, Vasily; Buckley, John. 05 December 2015 (has links)
Yes / Background
Walking down slopes and/or over uneven terrain is problematic for unilateral trans-tibial amputees. Accordingly, ‘ankle’ devices have been added to some dynamic-response feet. This study determined whether use of a microprocessor controlled passive-articulating hydraulic ankle–foot device improved the gait biomechanics of ramp descent in comparison to conventional ankle–foot mechanisms.
Methods
Nine active unilateral trans-tibial amputees repeatedly walked down a 5° ramp, using a hydraulic ankle–foot with the microprocessor active or inactive, or using a comparable foot with a rubber ball-joint (elastic) ‘ankle’ device. When inactive the hydraulic unit's resistances were those deemed to be optimum for level-ground walking, and when active, the plantar- and dorsi-flexion resistances switched to a ramp-descent mode. Residual limb kinematics, joint moments/powers and prosthetic foot power absorption/return were compared across ankle types using ANOVA.
Findings
Foot-flat was attained fastest with the elastic foot and second fastest with the active hydraulic foot (P < 0.001). Prosthetic shank single-support mean rotation velocity (P = 0.006), flexion (P < 0.001) and negative work done at the residual knee (P = 0.08) were reduced, and negative work done by the ankle–foot increased (P < 0.001), when using the active hydraulic device compared to the other two ankle types.
Interpretation
The greater negative ‘ankle’ work done when using the active hydraulic device compared to the other two ankle types explains why there was a corresponding reduction in flexion and negative work at the residual knee. These findings suggest that use of a microprocessor-controlled hydraulic foot will reduce the biomechanical compensations used to walk down slopes.
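As an illustration of the statistical comparison named in the Methods (a hedged sketch, not the study's data or analysis code), the following Python fragment runs a one-way repeated-measures ANOVA on an invented per-participant gait outcome across the three ankle-foot conditions; the statsmodels AnovaRM call and all numbers are assumptions for demonstration only.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = {"elastic": -0.02, "hydraulic_inactive": 0.02, "hydraulic_active": 0.0}
rows = []
for subject in range(9):                        # nine participants, as in the study
    base = rng.normal(0.25, 0.03)               # illustrative time-to-foot-flat (s)
    for ankle, shift in conditions.items():
        rows.append({"subject": subject, "ankle": ankle,
                     "foot_flat_time": base + shift + rng.normal(0.0, 0.01)})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="foot_flat_time", subject="subject", within=["ankle"]).fit()
print(result.anova_table)                       # F statistic and P value across ankle types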
|
152 |
Investigation of Simultaneous Effects of Surface Roughness, Porosity, and Magnetic Field of Rough Porous Microfin Under a Convective-Radiative Heat Transfer for Improved Microprocessor Cooling of Consumer Electronics
Oguntala, George A.; Sobamowo, G.; Eya, Nnabuike N.; Abd-Alhameed, Raed. 30 October 2018 (has links)
Yes / The ever-increasing demand for high-performance electronic systems has unequivocally called for improved microprocessor performance. However, increasing microprocessor performance requires increasing power and on-chip power density, both of which are associated with increased heat dissipation. Electronic cooling using fins has been identified as a reliable cooling approach, and an investigation into the thermal behaviour of fins would help in the design of miniaturized, effective heatsinks for reliable microprocessor cooling. The aim of this paper is to investigate the simultaneous effects of surface roughness, porosity and magnetic field on the performance of a porous micro-fin under a convective-radiative heat transfer mechanism. The developed thermal model considers variable thermal properties according to linear, exponential and power laws, and is solved using the Chebyshev spectral collocation method. Parametric studies are carried out using the numerical solutions to establish the influences of porosity, surface roughness and magnetic field on the micro-fin thermal behaviour. The simulation results establish that the thermal efficiency of the micro-fin is significantly affected by the porosity, magnetic field, geometric ratio, nonlinear thermal conductivity parameter, thermo-geometric parameter and surface roughness of the micro-fin, whereas the performance of the micro-fin decreases when it operates only in a convective environment. In addition, the fin efficiency ratio, defined as the ratio of the efficiency of the rough fin to that of the smooth fin, is found to be greater than unity when rough and smooth fins of equal geometrical, physical, thermal and material properties are subjected to the same operating conditions. The investigation establishes that improved thermal management of electronic systems can be achieved using rough-surface fins with porosity under the influence of a magnetic field. / Supported in part by the Tertiary Education Trust Fund of the Federal Government of Nigeria, and the European Union’s Horizon 2020 research and innovation programme under grant agreement H2020-MSCA-ITN-2016 SECRET-722424.
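As a rough, self-contained illustration of the solution method named above (a sketch under simplifying assumptions, not the paper's model), the following Python fragment applies Chebyshev spectral collocation to a plain convective fin with linearly temperature-dependent conductivity, d/dX[(1 + beta*theta) dtheta/dX] - M^2*theta = 0, with the base held at theta = 1 and an insulated tip; the porosity, radiation, roughness and magnetic-field terms of the full model are omitted and the parameter values are invented.

import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import trapezoid

def cheb(n):
    # Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1] (Trefethen).
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    d = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    return d - np.diag(d.sum(axis=1)), x

def residual(theta, d, m2, beta):
    flux = (1.0 + beta * theta) * (d @ theta)   # temperature-dependent conductivity
    r = d @ flux - m2 * theta                   # steady energy balance at collocation points
    r[0] = theta[0] - 1.0                       # prescribed base temperature
    r[-1] = (d @ theta)[-1]                     # insulated (adiabatic) tip
    return r

N = 24
D, x = cheb(N)
D *= 2.0                                        # map x in [-1, 1] to X = (x + 1)/2 in [0, 1]
X = (x + 1.0) / 2.0                             # X runs from 1 (base) down to 0 (tip)
M2, BETA = 1.0, 0.2                             # assumed thermo-geometric and conductivity parameters
theta = fsolve(residual, np.ones(N + 1), args=(D, M2, BETA))
efficiency = trapezoid(theta[::-1], X[::-1])    # fin efficiency ~ average dimensionless temperature
print(f"tip temperature {theta[-1]:.4f}, fin efficiency {efficiency:.4f}")

The same collocation machinery extends to exponential and power-law property variations and to the additional porous, radiative and magnetic source terms treated in the paper.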
|
153 |
Gait termination on a declined surface in trans-femoral amputees: Impact of using microprocessor-controlled limb system
Abdulhasan, Zahraa M.; Scally, Andy J.; Buckley, John. 30 May 2018 (links)
Yes / Walking down ramps is a demanding task for transfemoral amputees, and terminating gait on ramps is even more challenging because of the requirement to maintain a stable limb so that it can do the necessary negative mechanical work on the centre-of-mass in order to arrest (dissipate) forward/downward velocity. We determined how the use of a microprocessor-controlled limb system (simultaneous control over hydraulic resistances at the ankle and knee) affected the negative mechanical work done by each limb when transfemoral amputees terminated gait during ramp descent.
Methods:
Eight transfemoral-amputees completed planned gait terminations (stopping on prosthesis) on a 5-degree ramp from slow and customary walking speeds, with the limb's microprocessor active or inactive. When active the limb operated in its ‘ramp-descent’ mode and when inactive the knee and ankle devices functioned at constant default levels. Negative limb work, determined as the integral of the negative mechanical (external) limb power during the braking phase, was compared across speeds and microprocessor conditions.
Findings:
Negative work done by each limb increased with speed (p < 0.001), and on the prosthetic limb it was greater when the microprocessor was active compared to inactive (p = 0.004). There was no change in work done across microprocessor conditions on the intact limb (p = 0.35).
Interpretation:
Greater involvement of the prosthetic limb when the limb system was active indicates its ramp-descent mode effectively altered the hydraulic resistances at the ankle and knee. Findings highlight that participants became more assured in using their prosthetic limb to arrest centre-of-mass velocity. / ZA is funded by the Higher Committee of Education Development in IRAQ (HCED student number D13 626).
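For readers unfamiliar with the outcome measure, the sketch below (synthetic data, not the authors' code) computes negative limb work exactly as described in the Methods: the time-integral of the negative portion of external limb power over the braking phase.

import numpy as np
from scipy.integrate import trapezoid

fs = 100.0                                        # assumed sampling rate of the limb power signal (Hz)
t = np.arange(0.0, 1.5, 1.0 / fs)                 # braking phase of roughly 1.5 s, illustrative
rng = np.random.default_rng(3)
limb_power = -40.0 * np.sin(np.pi * t / 1.5) + 5.0 * rng.standard_normal(t.size)   # synthetic power (W)

negative_power = np.minimum(limb_power, 0.0)      # keep only the energy-absorbing part
negative_work = trapezoid(negative_power, t)      # joules; more negative means more braking by that limb
print(f"negative limb work over the braking phase: {negative_work:.1f} J")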
|
154 |
Characterization and management of voltage noise in multi-core, multi-threaded processors
Kim, Youngtaek. 14 July 2014 (has links)
Reliability is one of the key concerns in modern microprocessor design. Processors must behave correctly, as users expect, and must not fail at any time. However, unreliable operation can be caused by excessive supply voltage fluctuations arising from the inductive part of a microprocessor power distribution network. This voltage fluctuation issue is referred to as inductive or di/dt noise, and it requires thorough analysis and sophisticated design solutions. This dissertation proposes an automated stressmark generation framework to characterize the di/dt noise effect, and suggests a practical solution for managing di/dt effects while achieving performance and energy goals. First, the di/dt noise issue is analyzed from theory to practice. Inductance is a parasitic element of a microprocessor's power distribution network, and its characteristics, such as resonant frequencies, are reviewed. It is then shown that supply voltage fluctuation driven by resonant behavior is much more harmful than single-event voltage fluctuations. Voltage fluctuations caused by standard benchmarks such as SPEC CPU2006, PARSEC and Linpack are also studied. Next, an AUtomated DI/dT stressmark generation framework, referred to as AUDIT, is proposed to identify the maximum voltage droop in a microprocessor power distribution network. The di/dt stressmark generated by the AUDIT framework is an instruction sequence that draws periodic high and low current pulses maximizing voltage fluctuations, including voltage droops. AUDIT uses a Genetic Algorithm to schedule and optimize candidate instruction sequences so as to create a maximum voltage droop. In addition, AUDIT provides both simulation and hardware measurement methods for finding maximum voltage droops at different design and verification stages of a processor. Failure points in hardware due to voltage droops are analyzed. Finally, a hardware technique, floating-point (FP) issue throttling, is examined, which reduces the worst-case voltage droop. This dissertation shows the impact of floating-point throttling on voltage droop, and translates this reduction in droop into an increase in operating frequency, because additional guardband is no longer required to protect against droops resulting from heavy floating-point usage. Two techniques are presented to dynamically determine when to trade off FP throughput for reduced voltage margin and increased frequency. These techniques work at the software level without any modification of existing hardware.
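A hedged illustration of the search idea behind AUDIT (a toy sketch, not the dissertation's implementation): a genetic algorithm evolves a sequence of high- and low-power 'instructions', modeled only as per-slot current draws, to maximize the voltage droop of a simple lumped R-L-C power delivery network. All component values, current levels and GA settings below are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Simple lumped PDN: VDD source, series R and L, die node with decoupling capacitance C.
VDD, R, L, C = 1.0, 1e-3, 1e-10, 1e-7        # assumed values; resonance near 50 MHz
DT, STEPS_PER_SLOT = 1e-10, 10               # integration step; each sequence slot lasts 1 ns
I_HIGH, I_LOW = 50.0, 5.0                    # current drawn by a high- vs low-power slot (A)

def max_droop(seq):
    # Semi-implicit Euler simulation of the PDN driven by the candidate sequence.
    i_l, v, worst = 0.0, VDD, 0.0
    for bit in seq:
        i_load = I_HIGH if bit else I_LOW
        for _ in range(STEPS_PER_SLOT):
            i_l += DT * (VDD - R * i_l - v) / L
            v += DT * (i_l - i_load) / C
            worst = max(worst, VDD - v)
    return worst

def evolve(seq_len=200, pop=40, gens=60, pmut=0.02):
    population = rng.integers(0, 2, size=(pop, seq_len))
    for _ in range(gens):
        fitness = np.array([max_droop(ind) for ind in population])
        parents = population[np.argsort(-fitness)[: pop // 2]]    # keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, seq_len)
            child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
            child[rng.random(seq_len) < pmut] ^= 1                # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    best = max(population, key=max_droop)
    return best, max_droop(best)

best_seq, droop = evolve()
print(f"worst-case droop of the evolved sequence: {droop * 1000:.1f} mV")

Sequences that alternate bursts of high and low activity near the network's resonant frequency should emerge as the worst cases, which is the behavior AUDIT provokes with real instruction sequences.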
|
155 |
Meta assembler and emulator for the Intel 8086 microprocessor
Shoaib, Rao Mohammad, 1960-. January 1989 (has links)
The thesis describes a universal meta cross-assembler and an emulator for the Intel 8086 microprocessor. The utility is designed to be used as an instructional tool to teach assembly language programming to students. One implementation is available that allows students to run Intel 8086 programs on the university's VAX mainframe, so that they can test their programs at their convenience. This setup also results in low operating costs, with no additional equipment requirements. Several options are provided in the emulator for debugging the 8086 assembly language programs composed by students. The assembler, besides generating Intel 8086 machine code, can generate machine code for a number of other microprocessors or microcontrollers. The machine code file generated by the assembler is the input to the emulator. Both the assembler and the emulator are completely portable and can be recompiled to run on any system with a standard C compiler.
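To give a flavour of what such an emulator does internally, here is a minimal fetch-decode-execute loop in Python (the thesis tool is written in C and covers the full instruction set; this sketch handles only a handful of 8086 opcodes and ignores flags, segmentation, memory operands and the other registers).

def emulate(code):
    # Handles: B8 iw (MOV AX,imm16), 05 iw (ADD AX,imm16), 40 (INC AX), 90 (NOP), F4 (HLT).
    ax, ip = 0, 0
    while ip < len(code):
        op = code[ip]
        if op == 0xB8:                              # MOV AX, imm16 (little-endian immediate)
            ax = code[ip + 1] | (code[ip + 2] << 8)
            ip += 3
        elif op == 0x05:                            # ADD AX, imm16
            ax = (ax + (code[ip + 1] | (code[ip + 2] << 8))) & 0xFFFF
            ip += 3
        elif op == 0x40:                            # INC AX
            ax = (ax + 1) & 0xFFFF
            ip += 1
        elif op == 0x90:                            # NOP
            ip += 1
        elif op == 0xF4:                            # HLT
            break
        else:
            raise ValueError(f"unimplemented opcode {op:#04x} at offset {ip}")
    return ax

# MOV AX,0x0102 ; ADD AX,0x0001 ; INC AX ; HLT  ->  AX = 0x0104
program = bytes([0xB8, 0x02, 0x01, 0x05, 0x01, 0x00, 0x40, 0xF4])
assert emulate(program) == 0x0104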
|
156 |
Projeto e construção de um digitalizador e promediador de dois canais para tomografia por ressonância magnética nuclear / Design and construction of a dual channel signal digitizer and averager for nuclear magnetic resonance tomography
Torre Neto, André. 09 December 1988 (has links)
Este trabalho descreve o projeto, a construção e a avaliação de um digitalizador de sinais controlado por microprocessador, desenvolvido para ser utilizado em Tomografia por Ressonância Magnética Nuclear, TORM. O digitalizador apresenta dois canais de entrada com digitalização simultânea em 256, 512 ou 1024 palavras por canal e com taxa de amostragem máxima de 22,7 kHz. A resolução é de 12 bits com conversão analógico/digital por aproximação sucessiva. Não há controles manuais, o que exige um computador hospedeiro para o ajuste de parâmetros via interface de comunicação paralela destinada para este fim. Opcionalmente pode-se utilizar uma interface serial do tipo RS232C-EIA operando com velocidade máxima de 9600 bauds. O equipamento efetua o processamento local da média acumulativa do sinal, técnica empregada para melhorar a relação sinal/ruído no caso de ruído aleatório. Um circuito dedicado à monitoração permite que se visualize em monitor X-Y tanto o sinal como a sua média. No caso da média, por ela ser acumulativa, há um ajuste automático de escala. / This work describes the design, construction and evaluation of a microprocessor-controlled digitizer developed to be used in Magnetic Resonance Tomography or Imaging, MRI. The digitizer presents two input channels with simultaneous digitization of 256, 512 or 1024 words per channel and a sample rate of up to 22.7 kHz. A resolution of 12 bits is obtained with successive-approximation A/D conversion. There are no manual controls, so a host computer is needed to adjust the parameters through a parallel communication interface provided for this purpose. Optionally, an RS232C-EIA serial interface may be used, operating at speeds of up to 9600 baud. Signal averaging can be processed locally by the equipment. This technique is used to improve the signal-to-noise ratio in the case of random noise. A dedicated circuit permits the visualization of the signal and/or its average on an X-Y monitor. Since the average is cumulative, an automatic scale adjustment is provided.
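A small sketch of the cumulative-averaging technique the equipment implements (illustrative only; the test signal and noise level are invented): repeated noisy acquisitions are folded into a running mean, which improves the signal-to-noise ratio by roughly the square root of the number of sweeps, and the displayed trace is rescaled automatically as the average accumulates.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_sweeps = 1024, 64                     # words per channel and number of repeated acquisitions
t = np.arange(n_samples) / 22.7e3                  # 22.7 kHz maximum sample rate, as in the abstract
signal = np.exp(-t / 5e-3) * np.cos(2 * np.pi * 500.0 * t)      # assumed FID-like test signal

running_avg = np.zeros(n_samples)
for k in range(1, n_sweeps + 1):
    sweep = signal + 0.5 * rng.standard_normal(n_samples)       # each acquisition carries random noise
    running_avg += (sweep - running_avg) / k                     # cumulative mean, updated in place
    scale = np.max(np.abs(running_avg))                          # automatic scaling for the X-Y monitor
    display = running_avg / scale if scale > 0 else running_avg

print(f"expected noise reduction after {n_sweeps} sweeps: about {np.sqrt(n_sweeps):.0f}x")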
|
157 |
Power Laws na modelagem de caches de microprocessadores. / Power Laws on the modeling of caches of microprocessors.
Scoton, Filipe Montefusco. 10 June 2011 (has links)
Power Laws são leis estatísticas que permeiam os mais variados campos do conhecimento humano tais como Biologia, Sociologia, Geografia, Linguística, Astronomia, entre outros, e que têm como característica mais importante a disparidade entre os elementos causadores, ou seja, alguns poucos elementos são responsáveis pela grande maioria dos efeitos. Exemplos famosos são o Princípio de Pareto, a Lei de Zipf e o modelo de Incêndios Florestais. O Princípio de Pareto diz que 80% da riqueza de uma nação está nas mãos de apenas 20% da população; em outras palavras, uma relação causa e efeito chamada 80-20. A Lei de Zipf enuncia que o comportamento da frequência versus o ranking de ocorrência é dado por uma curva hiperbólica com um comportamento semelhante a 1/x. O modelo de Incêndios Florestais modela o comportamento do crescimento de árvores em uma floresta entre sucessivas queimadas que causam destruição de agrupamentos de árvores. As Power Laws demonstram que uma porcentagem pequena de uma distribuição tem uma alta frequência de ocorrência, enquanto o restante dos casos que aparecem tem uma frequência baixa, o que levaria a uma reta decrescente em uma escala logarítmica. A partir de simulações utilizando o conjunto de benchmarks SPEC-CPU2000, este estudo procura investigar como essas leis estatísticas podem ser utilizadas para entender e melhorar o desempenho de caches baseados em diferentes políticas de substituição de linhas de cache. O estudo sobre a possibilidade de uma nova política de substituição composta por um cache Pareto, bem como um novo mecanismo de chaveamento do comportamento de algoritmos adaptativos de substituição de linhas de cache, chamado de Forest Fire Switching Mechanism, ambos baseados em Power Laws, são propostos a fim de se obter ganhos de desempenho na execução de aplicações. / Power Laws are statistical laws that permeate the most varied fields of human knowledge, such as Biology, Sociology, Geography, Linguistics and Astronomy, among others, and whose most important characteristic is the disparity among the causing elements; in other words, a few elements are responsible for most of the effects. Famous examples are the Pareto Principle, Zipf's Law and the Forest Fire model. The Pareto Principle says that 80% of a nation's wealth is in the hands of just 20% of the population; in other words, a cause-and-effect relationship called 80-20. Zipf's Law states that the behavior of frequency versus rank of occurrence is given by a hyperbolic curve with a behavior similar to 1/x. The Forest Fire model represents the behavior of trees growing in a forest between successive fires that destroy clusters of trees. Power Laws show that a small percentage of a distribution has a high frequency of occurrence, while the remaining cases have a low frequency, which leads to a decreasing straight line on a logarithmic scale. Based on simulations using the SPEC-CPU2000 benchmarks, this work investigates how these distributions can be used to understand and improve the performance of caches based on different cache line replacement policies. A new replacement policy built around a Pareto cache, as well as a new mechanism for switching the behavior of adaptive cache line replacement algorithms, called the Forest Fire Switching Mechanism, both based on Power Laws, are proposed in order to obtain performance gains in the execution of applications.
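A toy numerical check of the 80-20 intuition behind the proposed Pareto cache (synthetic trace, not the SPEC-CPU2000 workloads used in the dissertation): cache-line popularity is drawn from a Zipf-like 1/rank law and the share of accesses captured by the most popular 20% of lines is measured.

import numpy as np

rng = np.random.default_rng(2)
n_lines, n_accesses = 4096, 200_000
ranks = np.arange(1, n_lines + 1)
p = 1.0 / ranks
p /= p.sum()                                        # Zipf-like popularity, proportional to 1/rank
trace = rng.choice(n_lines, size=n_accesses, p=p)   # synthetic stream of cache-line accesses

counts = np.bincount(trace, minlength=n_lines)
top20 = int(0.2 * n_lines)
share = np.sort(counts)[::-1][:top20].sum() / n_accesses
print(f"the most popular 20% of lines receive {share:.0%} of all accesses")

Under such a distribution, a replacement policy that protects the small set of heavy hitters, which is what a Pareto cache aims to do, covers the bulk of the accesses with a small fraction of the capacity.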
|
159 |
Teste integrado de software e hardware: reusando casos de teste de software em teste de microprocessadores / Integrated test of software and hardware: reusing software test cases to test microprocessors
Meirelles, Paulo Roberto Miranda. January 2008 (has links)
Sistemas embarcados estão mais complexos e são cada vez mais utilizados em contextos que exigem muitos recursos computacionais. Isso significa que o hardware embarcado pode ser composto por vários processadores, memórias, partes reconfiguráveis e ASIPs integrados em um único silício. Adicionalmente, o software embarcado pode conter muitas rotinas de programação executadas sob restrição de processamento e memória. Esse cenário estabelece uma forte dependência entre o hardware e o software embarcado. Portanto, o teste de um sistema embarcado compreende o teste do hardware e do software. Neste contexto, a reutilização de procedimentos e estruturas de teste é um caminho para se reduzir o tempo de desenvolvimento e execução dos testes. Neste trabalho é apresentado um método de teste integrado de hardware e software. Nesse método, casos de teste desenvolvidos para testar o software embarcado também são usados para testar o seu processador. Compararam-se os custos e a cobertura de falhas do método proposto com técnicas de auto-teste funcional. Os resultados experimentais demonstraram que foi possível reduzir os custos de aplicação e geração do teste do sistema usando um método de teste integrado de software e hardware. / Embedded systems are increasingly complex. Nowadays, they are used in contexts that demand substantial computational resources. This means that the embedded hardware may be composed of several processors, memories, reconfigurable parts and ASICs integrated on a single die. Additionally, the embedded software contains many programming routines that run under processing and memory constraints. This scenario establishes a strong dependency between hardware and software. Therefore, the test of an embedded system is the test of both hardware and software. In this context, reuse of testing structures and procedures is one way to reduce test development and execution time. This work presents an integrated software and hardware test method. In this method, test cases developed to test the embedded software are also used to test its processor. We compared the costs and fault coverage of the proposed method with functional self-test techniques. The experimental results show that it is possible to reduce implementation and test generation costs by using an integrated test of software and hardware.
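One way to picture the reuse idea (a hedged sketch, not the method's actual implementation): the same software test cases are executed on the target processor and every observable result is folded into a signature, so that a mismatch against the golden signature recorded on a known-good processor points to a hardware fault exercised by the software tests.

import zlib

def run_with_signature(test_cases, routine):
    # Run reused software test cases and fold every result into a CRC32 signature;
    # on silicon, a signature mismatch flags a processor fault even when no single
    # software assertion fails in an obvious way.
    signature = 0
    for args in test_cases:
        result = routine(*args)
        signature = zlib.crc32(repr(result).encode(), signature)
    return signature

def saturating_add(a, b, limit=255):
    # Example embedded routine whose unit tests are reused as a processor test.
    return min(a + b, limit)

tests = [(1, 2), (250, 10), (0, 0), (128, 127)]
golden = run_with_signature(tests, saturating_add)          # recorded on a known-good unit
assert run_with_signature(tests, saturating_add) == golden  # re-run on the device under test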
|
160 |
Prospects of voltage regulators for next generation computer microprocessors
López Julià, Toni. 18 June 2010 (has links)
Synchronous buck converter based multiphase architectures are evaluated to determine whether or not the most widespread voltage regulator topology can meet the power delivery requirements of next generation computer microprocessors. According to the prognostications, the load current will rise to 200 A along with a decrease of the supply voltage to 0.5 V and staggeringly tight dynamic and static load-line tolerances. In view of these demands, researchers face serious challenges to bring forth compliant solutions that can further offer acceptable conversion efficiencies and minimum mainboard area occupancy.

Among the most prominent investigation fronts are those surveying fundamental technology improvements aimed at making power semiconductor devices more effective at high switching frequency. The latter is of critical importance, as increasing the switching frequency is fundamentally recognized as the way forward to enhance power conversion density. Provided that switching losses must be kept low to enable the miniaturization of the filter components, one primary goal is to cope with semiconductor and system integration technologies enabling fast dynamic operation of ultra-low ON-resistance power switches.

This justifies the main focus of this thesis work, centered around a comprehensive analysis of the MOSFET switching behavior in the synchronous buck converter.

The MOSFETs' dynamic operation, far from being well describable with the traditional clamped inductive hard-switching mode, is strongly influenced by a number of frequently ignored linear and nonlinear parasitic elements that must be taken into account in order to fully predict real switching waveforms, understand their dynamics and, most importantly, identify and quantify the related mechanisms leading to heat generation. This will be revealed by in-depth investigations of the switched converter under fast switching speeds and heavy load.

Recognizing the key relevance of appropriate modeling tools to support this task, the second focal point of the thesis aims at developing a number of suitable models for the switching analysis of power MOSFETs.

Combined with a series of design guidelines and optimization procedures, these models form the basis of a proposed methodological approach in which numerical computations replace the usually enormous experimental effort to elucidate the most effective pathways towards reducing power losses. This gives rise to the concept referred to as the virtual design loop, which is successfully applied to the development of a new power MOSFET technology offering outstanding dynamic and static performance characteristics. From a system perspective, the limits of power conversion density will be explored for this and other emerging technologies that promise to open up a new paradigm in power integration capabilities.
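To give a feel for the numbers involved, the sketch below applies textbook first-order formulas to one phase of a multiphase synchronous buck regulator sized for the 0.5 V / 200 A scenario mentioned above. All device and design values are assumed, and the loss model is deliberately crude (lumped conduction plus ideal hard-switching overlap and gate-drive terms); the thesis argues precisely that real switching losses depend on parasitic effects such a model ignores.

# Rough per-phase ripple and loss estimates for a multiphase synchronous buck converter.
VIN, VOUT, IOUT, PHASES = 12.0, 0.5, 200.0, 8
FSW, L = 1e6, 100e-9                     # 1 MHz per phase, 100 nH per-phase inductor (assumed)
RDS_ON, QG, VDRV = 1.5e-3, 10e-9, 5.0    # assumed MOSFET on-resistance, gate charge, drive voltage
T_SW = 5e-9                              # assumed effective voltage/current overlap time per edge

d = VOUT / VIN                           # duty cycle
i_phase = IOUT / PHASES                  # average current per phase
ripple = (VIN - VOUT) * d / (L * FSW)    # peak-to-peak inductor ripple current

p_cond = i_phase ** 2 * RDS_ON                  # conduction loss, high and low side lumped
p_sw = 0.5 * VIN * i_phase * T_SW * FSW * 2     # hard-switching overlap loss, both edges
p_gate = QG * VDRV * FSW * 2                    # gate-drive loss for the two devices
p_phase = p_cond + p_sw + p_gate

efficiency = (VOUT * IOUT) / (VOUT * IOUT + PHASES * p_phase)
print(f"duty {d:.3f}, ripple {ripple:.1f} A, per-phase loss {p_phase:.2f} W, efficiency ~{efficiency:.1%}")

Even with these optimistic assumptions the conversion efficiency lands in the low eighties, which illustrates why faster, lower-loss power MOSFETs and better models of their switching behavior are central to the thesis.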
|