1

ModPET: Novel Applications of Scintillation Cameras to Preclinical PET

Moore, Stephen K. January 2011
We have designed, developed, and assessed a novel preclinical positron emission tomography (PET) imaging system named ModPET. The system was developed using modular gamma cameras, originally developed for SPECT applications at the Center for Gamma Ray Imaging (CGRI), but configured for PET imaging by enabling coincidence timing. A pair of cameras is mounted on a flexible system gantry that also allows for acquisition of optical images, such that PET images can be registered to an anatomical reference. Data is acquired in a super list-mode form where raw PMT signals and event times are accumulated in event lists for each camera. Event parameter estimation of position and energy is carried out with maximum-likelihood methods using careful camera calibrations, accomplished with collimated beams of 511-keV photons and a new iterative mean-detector-response-function processing routine. Intrinsic lateral spatial resolution for 511-keV photons was found to be approximately 1.6 mm in each direction. Lists of coincidence pairs are found by comparing event times in the two independent camera lists, using a timing window of 30 nanoseconds. By bringing the 4.5-inch-square cameras into close proximity, with a 32-mm separation for mouse imaging, a solid-angle coverage of ∼75% partially compensates for the relatively low stopping power of the 5-mm-thick NaI crystals to give a measured sensitivity of up to 0.7%. An NECR analysis yields 11,000 pairs per second with 84 μCi of activity. A list-mode MLEM reconstruction algorithm was developed to reconstruct objects in an 88 x 88 x 30 mm field of view. Tomographic resolution tests with a phantom suggest a lateral resolution of 1.5 mm and a slightly degraded resolution of 2.5 mm in the direction normal to the camera faces. The system can also be configured to provide (99m)Tc planar scintigraphy images. Selected biological studies of inflammation, apoptosis, tumor metabolism, and bone osteogenic activity are presented.
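
The coincidence-pairing step is easy to illustrate. Below is a minimal Python sketch of matching events from two independent, time-sorted camera lists using the 30-nanosecond window quoted above; the list format, function name, and example timestamps are illustrative assumptions, not ModPET's actual code.

    # Sketch: list-mode coincidence pairing between two camera event lists,
    # assuming each list holds event timestamps in seconds, sorted ascending.
    # The 30 ns window matches the value quoted above; names are illustrative.
    def find_coincidences(times_a, times_b, window=30e-9):
        """Return index pairs (i, j) whose timestamps differ by <= window."""
        pairs = []
        j = 0
        for i, t_a in enumerate(times_a):
            # Skip events in list B that are too early to match t_a.
            while j < len(times_b) and times_b[j] < t_a - window:
                j += 1
            # Collect every B event inside the window around t_a.
            k = j
            while k < len(times_b) and times_b[k] <= t_a + window:
                pairs.append((i, k))
                k += 1
        return pairs

    print(find_coincidences([1.0e-6, 2.0e-6], [1.00002e-6, 3.0e-6]))
    # [(0, 0)] -- only the first pair falls within the 30 ns window
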
2

Photon Statistics in Scintillation Crystals

Bora, Vaibhav Joga Singh January 2015
Scintillation-based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy, and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semiconductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies; however, they are relatively hard to manufacture and therefore, at this time, not available in as large formats and are much more expensive than scintillation gamma-ray detectors. Scintillation gamma-ray detectors consist of a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean; therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (the Poisson case), but the quantum theory of light allows for Fano factors less than one. We used two methods to look at the correlations between two detectors observing the same scintillation pulse to estimate the Fano factor of the scintillation photons. The relationship between the Fano factor and the correlation between the integrals of the two detected signals was analytically derived, and the Fano factor was estimated using measurements for SrI₂:Eu, YAP:Ce and CsI:Na. We also found an empirical relationship between the Fano factor and the covariance as a function of time between two detectors observing the same scintillation pulse. This empirical model was used to estimate the Fano factor of LaBr₃:Ce and YAP:Ce from the experimentally measured timing covariance. The estimates of the Fano factor from the timing-covariance results were consistent with the estimates from the correlation between the integral signals. We found scintillation light from some scintillators to be sub-Poisson. For the same mean number of total scintillation photons, sub-Poisson light has lower noise. We then conducted a simulation study to investigate whether this low-noise sub-Poisson light can be used to improve spatial resolution. We calculated the Cramér-Rao bound for different detector geometries, positions of interaction, and Fano factors. The Cramér-Rao calculations were verified by generating simulated data and estimating the variance of the maximum-likelihood estimator. We found that the Fano factor has no impact on the spatial resolution in gamma-ray imaging systems.
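
Since the Fano factor is central to this abstract, a quick numerical check makes the definition concrete. The sketch below is an illustration, not the thesis's code: it draws Poisson-distributed photon counts and verifies that their Fano factor is close to one; sub-Poisson light would give a value below one.

    # Fano factor F = Var[N] / E[N] of an integer-valued photon count N.
    # Poisson light gives F ~ 1; the sub-Poisson scintillation light
    # discussed above corresponds to F < 1.
    import numpy as np

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=1000, size=100_000)  # simulated photon counts

    fano = counts.var() / counts.mean()
    print(f"Fano factor: {fano:.3f}")  # ~1.0 for Poisson statistics
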
3

EnergyBox: Tool improvement and GUI

Polis, Rihards January 2014
EnergyBox is a parametrised estimation tool that uses packet traces as input to simulate the energy consumption of communication in mobile devices. The tool models the transmission behaviour of a smartphone by analysing a recorded packet trace from the device. The purpose of the thesis is to reimplement the original EnergyBox energy consumption modelling tool, with support for a graphical user interface (GUI) and a code base that is easier to modify and maintain. The motivation for the reimplementation is to simplify the tool's usage and to structure the code so that new features can be added. Existing features, such as the calculation of the total power consumed over a packet trace and the modelling of a device's energy states, are reimplemented, and new features are developed. Among the new features, a GUI is added to simplify the use of application features such as detection of the recording device's IP address and alteration of the configuration parameters used as input to the energy model. The application is written with a GUI and modularity in mind. This is achieved using Java's newer GUI framework, JavaFX, which provides built-in chart and graph elements that can be easily integrated and supported. The energy modelling engines follow the semantics of the original implementation, and the evaluation shows that the new implementation's results are identical to the original tool in 94.94% of the tested cases.
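
The energy-state modelling such a tool performs for 3G can be pictured as a small state machine: traffic promotes the radio to a high-power state, and inactivity timers demote it again. The states, power draws, and tail times below are illustrative assumptions, not EnergyBox's actual parameters or code.

    # Sketch: tail-time energy model over a packet trace (timestamps in s).
    # State powers and tail durations are assumed values for illustration.
    DCH_POWER, FACH_POWER, IDLE_POWER = 0.6, 0.4, 0.0  # watts (assumed)
    DCH_TAIL, FACH_TAIL = 5.0, 12.0                    # seconds (assumed)

    def estimate_energy(packet_times):
        """Integrate radio power over the gaps between packets."""
        energy, t = 0.0, packet_times[0]
        for t_next in packet_times[1:]:
            gap = t_next - t
            # After each packet: DCH tail, then FACH tail, then IDLE.
            dch = min(gap, DCH_TAIL)
            fach = min(max(gap - DCH_TAIL, 0.0), FACH_TAIL)
            idle = max(gap - DCH_TAIL - FACH_TAIL, 0.0)
            energy += dch * DCH_POWER + fach * FACH_POWER + idle * IDLE_POWER
            t = t_next
        return energy  # joules

    print(estimate_energy([0.0, 1.0, 30.0]))  # 8.4 J for this toy trace
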
4

A Framework for Estimating Energy Consumed by Electric Loads Through Minimally Intrusive Approaches

Giri, Suman 01 April 2015
This dissertation explores the problem of energy estimation in supervised Non-Intrusive Load Monitoring (NILM). NILM refers to a set of techniques used to estimate the electricity consumed by individual loads in a building from measurements of the total electrical consumption. Most commonly, NILM works by first attributing any significant change in the total power consumption (also known as an event) to a specific load and subsequently using these attributions (i.e. the labels for the events) to estimate energy for each load. For this last step, most proposed solutions in the field impart simplifying assumptions to make the problem more tractable, which has severely limited their practicality. To address this knowledge gap, we present a framework for creating appliance models based on classification labels and aggregate power measurements that can help relax many of these assumptions. Within the framework, we model the problem of utilizing a sequence of event labels to generate energy estimates as a broader class of problems with two major components: (i) with the understanding that the labels arise from a process with distinct states and state transitions, we estimate the underlying Finite State Machine (FSM) model that most likely generated the observed sequence; and (ii) we allow for the observed sequence to have errors, and present an error-correction algorithm to detect and correct them. We test the framework on data from 43 appliances collected from 19 houses and find that, compared to the case with no correction, it reduces errors in energy estimates by a factor of 50 for 19 appliances, leaves 17 appliances unchanged, and worsens the estimates for 6 appliances by a factor of 1.4. This approach of utilizing event sequences to estimate energy has implications for virtual metering of appliances as well. In a case study, we utilize this framework to replace plug-level sensors with cheap, easily deployable contactless sensors, and find that for the 6 appliances virtually metered using magnetic field sensors, the inferred energy values have an average error of 10.9%.
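
As a concrete, much-simplified illustration of the last step (turning a labelled event sequence into an energy estimate), the sketch below assumes a two-state appliance whose ON events add a known steady-state power draw until the matching OFF event. The framework's FSM estimation and error correction are deliberately omitted; the names and power value are hypothetical.

    # Sketch: energy for one appliance from labelled ON/OFF events.
    def appliance_energy(events, power_watts):
        """events: list of (timestamp_s, 'ON' | 'OFF') for one appliance."""
        energy_j, on_since = 0.0, None
        for t, label in events:
            if label == "ON" and on_since is None:
                on_since = t
            elif label == "OFF" and on_since is not None:
                energy_j += power_watts * (t - on_since)
                on_since = None
        return energy_j / 3.6e6  # joules -> kWh

    print(appliance_energy([(0, "ON"), (3600, "OFF")], power_watts=1500))
    # 1.5 kWh for one hour at 1.5 kW
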
5

Energy-Efficient Mobile Communication with Cached Signal Maps

Holm, Rasmus January 2016
Data communication over cellular networks is expensive for the mobile device in terms of energy, especially when the received signal strength (RSS) is low: the device must amplify its transmission power to compensate for noise, leading to increased energy consumption. This thesis focuses on developing an RSS map for the third-generation cellular technology (3G) that can be stored locally on the mobile device and used to avoid expensive communication in low-RSS areas. The proposed signal map is created from crowdsourced information collected by several mobile devices. An application collects data on the user's mobile device and periodically sends the information back to the server, which computes the total signal map. The signal map is composed of three levels of information: RSS information, data-rate tests, and estimated energy levels. The energy level categorizes the energy consumption of an area as "High", "Medium" or "Low" based on the RSS, the data-rate test information, and an energy model developed from physical power measurements. This coarse categorization provides an estimate of the energy consumption at each location. It is evaluated by collecting data traces on a smartphone at different locations and comparing the measured energy consumption at each location to the energy-level categories of the map. The RSS prediction is preliminarily evaluated by collecting new data along a path and comparing how well it correlates with the signal map. The evaluation in this thesis shows that the currently collected data does not contain enough observations to properly estimate the RSS. However, we believe that with more observations a more accurate evaluation could be done.
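
The coarse categorization can be pictured as a simple threshold rule on per-cell signal statistics. The dBm cut-offs below are placeholder assumptions; the thesis derives the levels from RSS, data-rate tests, and a measured energy model rather than from fixed thresholds.

    # Sketch: mapping a map cell's mean RSS to a coarse energy level.
    def energy_level(mean_rss_dbm):
        if mean_rss_dbm >= -85:     # strong signal, cheap communication
            return "Low"
        elif mean_rss_dbm >= -100:  # moderate signal
            return "Medium"
        return "High"               # weak signal, costly amplification

    for rss in (-70, -95, -110):
        print(rss, "dBm ->", energy_level(rss))
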
6

Computational Study of Calmodulin’s Ca2+-dependent Conformational Ensembles

Westerlund, Annie M. January 2018
Ca2+ and calmodulin play important roles in many physiologically crucial pathways. The conformational landscape of calmodulin is intriguing: conformational changes allow for binding of target proteins, while Ca2+ binding yields population shifts within the landscape. Thus, target proteins become Ca2+-sensitive upon calmodulin binding. Calmodulin regulates more than 300 target proteins, and mutations are linked to lethal disorders. The mechanisms underlying Ca2+ and target-protein binding are complex and pose interesting questions. Such questions are typically addressed with experiments, which fail to provide simultaneous molecular and dynamic insights. In this thesis, questions on binding mechanisms are probed with molecular dynamics simulations together with tailored unsupervised learning and data analysis. In Paper 1, a free energy landscape estimator based on Gaussian mixture models with cross-validation was developed and used to evaluate the efficiency of regular molecular dynamics compared to temperature-enhanced molecular dynamics. This comparison revealed interesting properties of the free energy landscapes, highlighting different behaviors of the Ca2+-bound and unbound calmodulin conformational ensembles. In Paper 2, spectral clustering was used to shed light on Ca2+ and target-protein binding. With these tools, it was possible to characterize differences in target-protein binding depending on the Ca2+ state as well as on N-terminal or C-terminal lobe binding. This work invites data-driven analysis into the field of biomolecular molecular dynamics, provides further insight into calmodulin’s Ca2+ and target-protein binding, and serves as a stepping-stone towards a complete understanding of calmodulin’s Ca2+-dependent conformational ensembles.
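
The estimator of Paper 1 can be sketched in a few lines: fit Gaussian mixtures to projected simulation data, pick the number of components by cross-validated log-likelihood, and convert the density to a free energy via F(x) = -kT ln p(x). The toy data, component range, and variable names below are illustrative, not the paper's actual pipeline.

    # Sketch: cross-validated GMM free energy landscape estimator.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.model_selection import cross_val_score

    X = np.random.default_rng(1).normal(size=(2000, 2))  # toy projected data

    # Choose the number of components by held-out log-likelihood.
    scores = {k: cross_val_score(GaussianMixture(n_components=k), X).mean()
              for k in range(1, 6)}
    best_k = max(scores, key=scores.get)

    gmm = GaussianMixture(n_components=best_k).fit(X)
    kT = 2.494  # kJ/mol at 300 K
    free_energy = -kT * gmm.score_samples(X)  # F(x) = -kT ln p(x) + const
    print(best_k, free_energy.min())
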
7

Hardware-Software Co-Design for Sensor Nodes in Wireless Networks

Zhang, Jingyao 11 June 2013
Simulators are important tools for analyzing and evaluating different design options for wireless sensor networks (sensornets) and hence have been intensively studied in the past decades. However, existing simulators only support evaluations of protocols and software aspects of sensornet design. They cannot accurately capture the significant impacts of various hardware designs on sensornet performance. As a result, the performance/energy benefits of customized hardware designs are difficult to evaluate in sensornet research. To fill this technical void, in the first section we describe the design and implementation of SUNSHINE, a scalable hardware-software emulator for sensornet applications. SUNSHINE is the first sensornet simulator that effectively supports joint evaluation and design of sensor hardware and software performance in a networked context. SUNSHINE captures the performance of network protocols, software, and hardware up to cycle-level accuracy through its seamless integration of three existing sensornet simulators: the network simulator TOSSIM, the instruction-set simulator SimulAVR, and the hardware simulator GEZEL. SUNSHINE solves several sensornet simulation challenges, including data exchanges and time synchronization across different simulation domains and simulation accuracy levels. SUNSHINE also provides a hardware specification scheme for simulating flexible and customized hardware designs. Several experiments are given to illustrate SUNSHINE's simulation capability, and evaluation results demonstrate that SUNSHINE is an efficient tool for software-hardware co-design in sensornet research. Even though SUNSHINE can simulate flexible sensor nodes (nodes containing FPGA chips as coprocessors) in wireless networks, it does not estimate the power/energy consumption of sensor nodes, and so far no simulators have been developed to evaluate the performance of such flexible nodes in wireless networks. In the second section, we present PowerSUNSHINE, a power- and energy-estimation tool that fills this void. PowerSUNSHINE is the first scalable power/energy estimation tool for WSNs that provides accurate predictions for both fixed and flexible sensor nodes. In this section, we first describe the requirements and challenges of building PowerSUNSHINE. Then we present power/energy models for both fixed and flexible sensor nodes. Two testbeds, a MicaZ platform and a flexible node consisting of a microcontroller, a radio, and an FPGA-based co-processor, are used to demonstrate the simulation fidelity of PowerSUNSHINE. We also discuss several evaluation results based on simulation and testbeds to show that PowerSUNSHINE is a scalable simulation tool that provides accurate estimates of power/energy consumption for both fixed and flexible sensor nodes. Since the main components of sensor nodes are a microcontroller and a wireless transceiver (radio), their real-time performance may be a bottleneck when executing computation-intensive tasks in sensor networks. A coprocessor can relieve the microcontroller of multiple tasks and hence decrease the probability of dropping packets from the wireless channel. Even though adding a coprocessor benefits sensor networks, designing applications for sensor nodes with coprocessors from scratch is challenging because design details in multiple domains, including software, hardware, and network, must be considered.
To solve this problem, we propose a hardware-software co-design framework for network applications that contain multiprocessor sensor nodes. The framework includes a three-layered architecture for multiprocessor sensor nodes and application interfaces under the framework. The layered architecture makes the design of multiprocessor nodes' applications flexible and efficient, and the application interfaces are implemented for deploying reliable applications on multiprocessor sensor nodes. A resource-sharing technique is provided to make the processor, coprocessor, and radio work in coordination over the communication bus. Several testbeds containing multiprocessor sensor nodes are deployed to evaluate the effectiveness of our framework, and network experiments are executed in the SUNSHINE emulator to demonstrate the benefits of using multiprocessor sensor nodes in many network scenarios.
8

SINGLE VIEW RECONSTRUCTION FOR FOOD PORTION ESTIMATION

Shaobo Fang 10 June 2019
3D scene reconstruction based on single-view images is an ill-posed problem, since most 3D information is lost during the projection from 3D world coordinates to 2D pixel coordinates. Estimating the portion of an object from a single view requires either a priori information, such as the geometric shape of the object, or training-based techniques that learn from existing portion-size distributions. In this thesis, we present a single-view based technique for food portion size estimation.

Dietary assessment, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as cancer, diabetes, and heart disease. Measuring accurate dietary intake is considered an open research problem in the nutrition and health fields. We have developed a mobile dietary assessment system, the Technology Assisted Dietary Assessment™ (TADA™) system, to automatically determine the food types and energy consumed by a user using image analysis techniques.

In this thesis we focus on the use of a single image for food portion size estimation, to reduce the user's burden of taking multiple images of a meal. We define portion size estimation as the process of determining how much food (or food energy/nutrient) is present in a food image. In addition to food energy/nutrient, food portion estimation can also target food volumes (in cm³) or weights (in grams), as they are directly related to food energy/nutrient. Food portion estimation is a challenging problem because food preparation and consumption introduce large variations in food shapes and appearances.

As single-view 3D reconstruction is in general ill-posed, we investigate the use of geometric models, such as the shape of a container, that can help partially recover the 3D parameters of food items in the scene. We compare the performance of a portion estimation technique based on 3D geometric models to techniques using depth maps, and show that more accurate estimates can be obtained with geometric models for objects whose 3D shapes are well defined. To further improve accuracy, we investigate the use of food portion co-occurrence patterns, which can be estimated from the food image dataset we collected in dietary studies using the mobile Food Record™ (mFR™) system we developed. Co-occurrence patterns are used as prior knowledge to refine portion estimation, and we show that portion estimation accuracy improves when co-occurrence patterns are incorporated as contextual information.

In addition to food portion estimation techniques based on geometric models, we also investigate a deep learning approach. The geometric-model-based approach focuses on estimating food volumes, but volumes are not the final result that directly shows the food energy/nutrient consumed. Therefore, instead of developing techniques that lead to an intermediate result (food volumes), we present a method to directly estimate food energy (kilocalories) from food images using Generative Adversarial Networks (GANs). We introduce the concept of an "energy distribution" for each food image. To train the GAN, we design a food image dataset based on ground-truth food labels and segmentation masks for each food image, together with the associated energy information. Our goal is to learn the mapping from a food image to its food energy. Based on the estimated energy distribution image, we then use a Convolutional Neural Network (CNN) to estimate the numeric value of the food energy present in the eating scene.
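
The "energy distribution" idea can be made concrete with a toy example: the GAN's output is a per-pixel energy image over the eating scene, from which a total is read out. The plain sum below is a simplification for illustration; as described above, the thesis trains a CNN to regress the numeric energy from the distribution image.

    # Toy stand-in for a GAN-estimated per-pixel energy distribution image.
    import numpy as np

    energy_map = np.zeros((256, 256))   # background pixels carry no energy
    energy_map[100:150, 80:200] = 0.05  # kcal per pixel over a food region

    total_kcal = energy_map.sum()       # simplified readout (thesis: CNN)
    print(f"estimated energy: {total_kcal:.0f} kcal")  # 300 kcal
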
9

Anàlisi de l'energia de transició màxima en circuits combinacionals CMOS (Analysis of the maximum transition energy in CMOS combinational circuits)

Manich Bou, Salvador 17 November 1998
The impact of energy consumption on the design of electronic circuits has grown significantly over the last decade. According to the 1997 report of the Semiconductor Industry Association, this impact is expected to be even greater in the coming decade. Several works in the literature relate high energy consumption to degraded chip performance and reliability. For this reason, energy consumption has been incorporated as another parameter to take into account in integrated circuit design. The transition energy is the energy consumed by a CMOS combinational circuit when its inputs change value. An excessively high transition energy can affect chip reliability through hot spots and electromigration, while other effects such as ground bouncing and signal integrity degradation can reduce circuit performance. Minimizing these degradations requires characterizing the maximum transition energy during the design phase. To this end, this thesis proposes two methodologies for estimating the maximum transition energy of CMOS combinational circuits. Since exact computation of the maximum level is infeasible for circuits of medium size and above, the calculation of two bounds, a lower and an upper one, is proposed to delimit an interval in which the maximum lies. The thesis is organized as follows. Chapter 1 introduces the topic investigated in this thesis and reviews the most relevant prior work. Chapter 2 introduces the transition-energy estimation models most commonly used at logic level, which is the design level considered in this thesis; these models assume that the switching of the circuit's parasitic capacitances is the only mechanism of energy consumption. Chapters 3 and 4 address the estimation of the maximum transition energy through the calculation of two close bounds, one upper and one lower. Chapter 5 analyzes the behaviour of the weighted switching activity under different static delay models. Finally, Chapter 6 presents the general conclusions of the thesis and future work.
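
For reference, the logic-level assumption of Chapter 2 corresponds to the standard switched-capacitance expression; the notation below is generic rather than the thesis's own. With C_i the parasitic capacitance at node i and a_i the number of transitions at that node caused by an input change from vector u to vector v:

    % Switched-capacitance model at logic level (generic notation)
    E(u \to v) = \tfrac{1}{2} V_{DD}^{2} \sum_{i} C_{i} \, a_{i}(u \to v),
    \qquad
    E_{\max} = \max_{(u,v)} E(u \to v)

This also motivates the bounding approach: the maximum ranges over all input-vector pairs, whose number grows as the square of the number of input vectors, itself exponential in the number of inputs, so exhaustive evaluation quickly becomes infeasible.
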
10

Design and analysis of an inertial properties measurement device for manual wheelchairs

Eicholtz, Matthew R. 07 July 2010
The dynamics of rigid body motion depend on the inertial properties of the body, that is, the mass and moment of inertia. For complex systems, it may be necessary to determine these properties empirically. Such is the case for manual wheelchairs, which can be modeled as a rigid body frame connected to four wheels. While 3D modeling software is capable of estimating inertial parameters, modeling inaccuracies and ill-defined material properties may introduce significant errors in this estimation technique and necessitate experimental measurements. To that end, this thesis discusses the design of a device called the iMachine that empirically determines the mass, the location of the center of mass, and the moment of inertia about the vertical (yaw) axis passing through the center of mass of the wheelchair. The iMachine is a spring-loaded rotating platform that oscillates freely about an axis passing through its center after being given an initial angular velocity. The mass and location of the center of mass are determined using a static analysis of a triangular configuration of load cells. An optical encoder records the dynamic angular displacement of the platform, and the natural frequency of free vibration is calculated using several techniques. Finally, the moment of inertia is determined from the natural frequency of the system. In this thesis, test results are presented for the calibration of the load cells and the spring rate. In addition, objects with known mass properties were tested, and comparisons are made between the analytical and empirical inertia results. In general, the mass measurement of the test object had greater than 99% accuracy. The average relative error for the x- and y-coordinates of the center of mass was 0.891% and 1.99%, respectively. For the moment of inertia, a relationship was established between relative error and the ratio of the test object's inertia to the inertia of the system; the results suggest that 95% accuracy can be achieved if the test object accounts for at least 25% of the total inertia of the system. Finally, the moment of inertia of a manual wheelchair is determined using the device (I = 1.213 kg-m²), and conclusions are made regarding the reliability and validity of the results. The results of this project will feed into energy calculations for the Anatomical Model Propulsion System (AMPS), a wheelchair-propelling robot used to measure the mechanical efficiency of manual wheelchairs.
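
The inertia measurement rests on the textbook relation for an undamped torsional oscillator; the symbols below are generic (k: torsional spring rate, omega_n: measured natural frequency), and subtracting the empty platform's inertia is a standard step assumed here for illustration rather than quoted from the thesis.

    % Undamped torsional oscillator (generic notation)
    \omega_{n} = \sqrt{k / I_{\mathrm{total}}}
    \;\Longrightarrow\;
    I_{\mathrm{total}} = \frac{k}{\omega_{n}^{2}},
    \qquad
    I_{\mathrm{object}} = I_{\mathrm{total}} - I_{\mathrm{platform}}

This is why the calibration of the spring rate and the accurate estimation of the natural frequency matter: any error in k or omega_n propagates directly into the inertia estimate.
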
