381 |
An Application of Augmented Reality (AR) in the Teaching of an Arc Welding Robot
Chong, J. W. S., Nee, Andrew Y. C., Youcef-Toumi, Kamal, Ong, S. K. 01 1900 (has links)
Augmented Reality (AR) is an emerging technology that utilizes computer vision methods to overlay virtual objects onto the real world scene so as to make them appear to co-exist with the real objects. Its main objective is to enhance the user’s interaction with the real world by providing the right information needed to perform a certain task. Applications of this technology in manufacturing include maintenance, assembly and telerobotics. In this paper, we explore the potential of teaching a robot to perform an arc welding task in an AR environment. We present the motivation, features of a system using the popular ARToolkit package, and a discussion on the issues and implications of our research. / Singapore-MIT Alliance (SMA)
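To make the idea concrete, here is a minimal sketch of the kind of marker-tracking teach loop such a system implies. The paper uses the ARToolkit package; this sketch substitutes OpenCV's ArUco module (classic opencv-contrib API, pre-4.7), and the camera intrinsics and marker size are placeholder values:

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real system would use calibrated values.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)
waypoints = []                               # recorded weld-path poses
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    pose = None
    if ids is not None:
        # Pose of each detected marker relative to the camera (5 cm side).
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.05, K, dist)
        pose = (rvecs[0], tvecs[0])
    cv2.imshow("AR teach", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('r') and pose is not None:
        waypoints.append(pose)               # record a weld-path waypoint
    elif key == 27:                          # Esc ends the teaching session
        break
cap.release()
cv2.destroyAllWindows()
```

In an actual teach system the recorded camera-frame poses would then be transformed into the robot's base frame and post-processed into a weld path.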
|
382 |
The Effect of Body Mass Index on Pedometer Accuracy in a Free-Living Environment
Tyo, Brian Matthew 01 August 2010 (has links)
The purpose of this dissertation was to determine if the New Lifestyles NL-2000 (NL) and the Digi-Walker SW-200 (DW), waist-mounted devices, yield similar daily step counts as compared to the StepWatch 3 (SW), an ankle-mounted device, worn by adults and children in the free-living environment.
For the first study, fifty-six adults (32.7 ± 14.5 y) wore the devices for seven consecutive days. There were 20 normal weight, 18 overweight, and 18 obese participants. The NL and DW undercounted (pedometer error) to a similar degree in the normal weight and overweight groups (-15.4% to -18.2%). However, the DW undercounted more than the NL in the obese group (-32.8% vs -23.9%, respectively). Stepwise regression revealed that both the NL and DW had more error (undercounted more) as a greater percentage of steps was accumulated while walking slowly. The DW also had more error with greater BMI. Use of the DW in an obese population will result in twice the error found in a normal weight population, and thus the DW should not be used to determine relationships between walking volume and adiposity.
For the second study, 74 children (13 ± 1.1 y) wore the same devices during one weekday. There were 33 normal weight, 21 overweight, and 20 obese participants. Error values for the NL and DW were similar in the normal weight and overweight groups (-10.8% to -15.4%). The DW undercounted more than the NL in the obese group (-27.3% vs -8.4%, respectively). The NL was very consistent regardless of BMI category, recording 89.1% of steps (-10.8% error), 89.1% (-10.9% error), and 91.6% (-8.4% error) for the normal weight, overweight, and obese participants, respectively. Stepwise regression revealed that the DW undercounted more in participants with a high weight. Using the DW in obese children of this age group will result in significantly more undercounting than in normal weight children; the DW should not be used to determine relationships between walking volume and adiposity in this population. The NL undercounted by ~10%, regardless of BMI category.
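Both studies rest on the same percent-error statistic. A minimal sketch, assuming (as the abstracts imply) that the ankle-mounted StepWatch serves as the criterion and that negative values indicate undercounting:

```python
def pedometer_error_pct(device_steps: float, criterion_steps: float) -> float:
    """Percent error relative to the criterion; negative = undercounting."""
    return (device_steps - criterion_steps) / criterion_steps * 100.0

# e.g. a waist-mounted device logging 8,200 steps against 10,500 criterion steps
print(pedometer_error_pct(8200, 10500))   # about -21.9 (% undercounting)
```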
|
383 |
The Influence of Dietary Restraint, Social Desirability, and Food Type on Accuracy of Reported Dietary Intake
Schoch, Ashlee Hirt 01 May 2010 (has links)
Underreporting in dietary assessment has been linked to dietary restraint (DR) and social desirability (SD). Thus, this study investigated accuracy of reporting energy intake (EI) of a laboratory meal during a 24-hour dietary recall (24HR) in 38 healthy, college-aged (20.3 ± 1.7 years), normal-weight women (22.4 ± 1.8 kg/m²), categorized as high or low in DR and SD.
Participants consumed a meal (sandwich wrap, chips, fruit, and ice cream) and completed a telephone 24HR. Accuracy of reported intake = ((reported intake − measured intake) / measured intake) × 100 [positive values = overreporting].
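The accuracy formula translates directly into code; a minimal sketch using group means from the results reported below:

```python
def reporting_accuracy_pct(reported_kcal: float, measured_kcal: float) -> float:
    """Positive values = overreporting, negative = underreporting."""
    return (reported_kcal - measured_kcal) / measured_kcal * 100.0

# e.g. the DR-Low group means below: 818 kcal reported vs 559 kcal measured
print(reporting_accuracy_pct(818, 559))   # about +46.3 (% overreporting)
```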
Overreporting of EI was found in all groups (meal accuracy rate = 43.1 ± 49.9%). An interaction of SD × individual foods (p < 0.05) occurred. SD-High as compared to SD-Low more accurately reported EI of chips (19.8 ± 56.2% vs. 117.1 ± 141.3%, p < 0.05) and ice cream (17.2 ± 78.2% vs. 71.6 ± 82.7%, p < 0.05). An effect of SD occurred, where SD-High as compared to SD-Low more accurately reported meal EI (29.8 ± 48.2% vs. 58.0 ± 48.8%, p < 0.05). For measured meal EI, an effect of DR occurred where DR-High consumed less than DR-Low (437 ± 169 kcal vs. 559 ± 207 kcal, p < 0.05). An interaction of DR × food type (p < 0.05) occurred where DR-High as compared to DR-Low consumed less sandwich wrap (156 ± 63 kcal vs. 210 ± 76 kcal, p < 0.05) and ice cream (126 ± 73 kcal vs. 190 ± 106 kcal, p < 0.05). For reported meal EI, an effect of DR occurred where DR-High reported consuming less than DR-Low (561 ± 200 kcal vs. 818 ± 362 kcal, p < 0.05). An interaction of DR × individual foods (p < 0.05) occurred where DR-High reported consuming less ice cream than DR-Low (145 ± 91 kcal vs. 302 ± 235 kcal, p < 0.05).
Overreporting EI from a laboratory meal was prevalent. However, those high in SD were more accurate in reporting intake, particularly of high-fat foods. Future research is needed to investigate factors that contribute to overreporting.
|
384 |
Tradeoff between Investments in Infrastructure and Forecasting when Facing Natural Disaster Risk
Kim, Seong D. 2009 May 1900 (has links)
Hurricane Katrina of 2005 was responsible for at least 81 billion dollars of property damage. In planning for such emergencies, society must decide whether to invest in the ability to evacuate more speedily or in improved forecasting technology to better predict the timing and intensity of the critical event. To address this need, we use dynamic programming and Markov processes to model the interaction between the emergency response system and the emergency forecasting system. Simulating changes in the speed of evacuation and in the accuracy of forecasting allows the determination of an optimal mix of these two investments. The model shows that improving evacuation and improving forecasting produce different patterns of benefit. In addition, it shows that the optimal investment decision changes with the budget and with the feasible range of improvement.
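The abstract does not reproduce the model itself; as a flavor of the dynamic-programming tradeoff, here is a deliberately toy sketch in which `evac_speed` and `forecast_acc` stand in for the two investments. The structure and all numbers are invented for illustration, not taken from the dissertation:

```python
def optimal_policy(evac_speed, forecast_acc, horizon=5,
                   p_hit=0.3, loss=100.0, base_evac_cost=20.0):
    """Toy finite-horizon DP. Each period: evacuate now (cost falls with
    evacuation capability) or wait one forecast update (with probability
    forecast_acc the forecast resolves, and you evacuate only if needed).
    Invented for illustration -- not the dissertation's actual model."""
    evac_cost = base_evac_cost / evac_speed
    V = p_hit * loss                 # expected loss if caught unevacuated
    plan = []
    for _ in range(horizon):
        wait = forecast_acc * p_hit * evac_cost + (1.0 - forecast_acc) * V
        plan.append("wait" if wait < evac_cost else "evacuate")
        V = min(wait, evac_cost)
    return V, plan[::-1]

# Comparing investment mixes: faster evacuation vs a sharper forecast.
print(optimal_policy(evac_speed=1.0, forecast_acc=0.5))
print(optimal_policy(evac_speed=2.0, forecast_acc=0.9))
```

Sweeping the two parameters against their respective investment costs would give the kind of optimal mix the dissertation solves for.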
|
385 |
Improvement of the predictive character of test results issued from analytical methods life cycle / Amélioration du caractère prédictif des résultats associés au cycle de vie des méthodes analytiques
Rozet, Eric 29 April 2008 (has links)
Results from analytical methods play an essential role in many fields, given the decisions that are made on their basis, such as determining the quality of active pharmaceutical ingredients, drug products, nutrients, or other samples such as those of biological origin involved in pharmacokinetic, bioavailability, or bioequivalence studies. In this context, the reliability of analytical results is paramount, and above all they must meet the needs of their end users. To ensure the reliability of the results that will be delivered in routine analyses, validation is a crucial element of the life cycle of an analytical method. Moreover, an analytical method is often not used only in the laboratory that developed and validated it: it is regularly transferred to another laboratory, for example from a research and development laboratory to a quality control laboratory, or to or from a subcontractor. The transfer of the method must guarantee the reliability of the results that will be produced by the receiving laboratories, since it is they who will use the analytical method from then on.
This thesis is set in that context. Its main objective is to improve the reliability of the decisions made on the basis of results obtained by quantitative analytical methods during these two stages of their life cycle.
To reach this objective, we first restated the objective of any quantitative analytical method and of its validation. Next, the regulatory texts of the pharmaceutical industry relating to method validation were reviewed, highlighting the errors and confusions contained in these documents and, in particular, their practical implications when assessing the validity of a method. In light of these findings, a new approach for assessing the validity of quantitative analytical methods was proposed and detailed from a statistical point of view. It relies on a statistical methodology using a "β-expectation" tolerance interval, transposed into a final decision tool called the accuracy profile. This profile guarantees that a defined proportion of the future results produced by the method in routine use will fall within acceptance limits set a priori according to the users' needs. In this way, the objective of validation is perfectly consistent with that of any quantitative method: to obtain accurate results. This validation approach was successfully applied to different types of analytical methods, such as liquid chromatography, capillary electrophoresis, and UV or near-infrared spectrophotometry, both for the determination of analytes in matrices from drug production (pharmaceutical formulations) and in more complex matrices such as biological fluids (plasma, urine). This demonstrates the universal character of the accuracy profile for deciding on the validity of a method. Then, to increase the objectivity of this decision tool, we introduced desirability indices built around validation criteria, namely trueness, precision, dosing range, and accuracy indices; the last is a global desirability index corresponding to the geometric mean of the other three. These indices make it possible to compare and rank the accuracy profiles obtained during validation and thus to choose, more objectively, the profile that best matches the objective of the method. Finally, we demonstrated for the first time the predictive character of the accuracy profile by verifying that the proportion of results falling within the acceptance limits predicted at the validation stage was indeed achieved during routine application of the various assay methods.
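For illustration, a minimal sketch of the "β-expectation" tolerance interval behind the accuracy profile, for the simple i.i.d. case; the validation literature uses a one-way random-effects version, and the recovery numbers and acceptance limits here are invented:

```python
import numpy as np
from scipy import stats

def beta_expectation_ti(results, beta=0.90):
    """Two-sided beta-expectation tolerance interval: on average, a
    proportion `beta` of future results is expected to fall inside it."""
    x = np.asarray(results, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)
    k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)
    return m - k * s, m + k * s

# Accuracy-profile style decision at one concentration level:
# back-calculated recoveries in %, with a priori limits of 95-105%.
recoveries_pct = [98.1, 101.3, 99.2, 100.8, 97.6, 102.1]
lo, hi = beta_expectation_ti(recoveries_pct, beta=0.90)
print("valid at this level:", 95.0 <= lo and hi <= 105.0)
```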
Unlike validation, the transfer of an analytical method from a sending laboratory, which validated the method, to a receiving laboratory, which will use it routinely, is a step covered by no normative text. The only regulatory requirement is to document the transfer; consequently, any approach is possible. However, the approaches most often encountered do not meet the objective of a transfer, namely to guarantee that the results obtained by the receiving laboratory will be accurate and therefore reliable. We therefore developed an original approach that allows an appropriate decision on the acceptability of a transfer. This approach, based on the total error concept, also uses the tolerance interval as its statistical methodology and simultaneously accounts for the uncertainty in the estimate of the true value provided by the sending laboratory. A "β-expectation" tolerance interval is computed from the receiver's results and then compared with acceptance limits set around the true value and adjusted according to the uncertainty associated with this reference value. In addition, statistical simulations showed the gain in managing the risks associated with a transfer, namely rejecting an acceptable transfer and accepting an unacceptable one. Finally, the suitability and applicability of this new approach were demonstrated by the transfer of a method dedicated to the quality control of a pharmaceutical formulation and of two bioanalytical methods.
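A sketch of the transfer decision rule just described, assuming (one plausible reading of the adjustment) that the acceptance limits around the sender's reference value are tightened by that value's standard uncertainty; all numbers are illustrative:

```python
import numpy as np
from scipy import stats

def transfer_acceptable(receiver_results, ref_value, ref_uncertainty,
                        acceptance_frac=0.05, beta=0.90):
    """Accept the transfer if the receiver's beta-expectation tolerance
    interval lies within acceptance limits set around the sender's
    reference value and tightened by that value's uncertainty."""
    x = np.asarray(receiver_results, dtype=float)
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)
    lo, hi = m - k * s, m + k * s
    half = ref_value * acceptance_frac - ref_uncertainty
    return (ref_value - half) <= lo and hi <= (ref_value + half)

print(transfer_acceptable([99.1, 100.4, 98.7, 101.0, 99.8],
                          ref_value=100.0, ref_uncertainty=0.4))
```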
The improvements in the predictive quality of the methodologies proposed for assessing the validity and the transfer of quantitative analytical methods thus increase the reliability of the results generated by these methods and, consequently, the confidence in the critical decisions that will follow from them.
|
386 |
Simulation of turbocharged SI-engines - with focus on the turbine
Westin, Fredrik January 2005 (has links)
The aim is to share experience gained when simulating, and performing measurements on, the turbocharged SI-engine, and to describe the limits of the current state of the technology. In addition, an overview of current boosting systems is provided. The target readers are engineers in the engine industry and in academia who will come into contact with, or are experienced in, 1D engine performance simulation and/or boosting systems; the text therefore assumes general knowledge of engines.
The papers included in the thesis are, in reverse chronological order:
[8] SAE 2005-XX-XXX: Calculation accuracy of pulsating flow through the turbine of SI-engine turbochargers - Part 2: Measurements, simulation correlations and conclusions. Westin & Ångström. To be submitted to the 2005 SAE Powertrain and Fluid Systems Conference in San Antonio.
[7] SAE 2005-01-2113: Optimization of Turbocharged Engines’ Transient Response with Application on a Formula SAE / Student engine. Westin & Ångström. Approved for publication at the 2005 SAE Spring Fuels and Lubricants Meeting in Rio de Janeiro.
[6] SAE 2005-01-0222: Calculation accuracy of pulsating flow through the turbine of SI-engine turbochargers - Part 1: Calculations for choice of turbines with different flow characteristics. Westin & Ångström. Published at the 2005 SAE World Congress in Detroit, April 11-14, 2005.
[5] SAE 2004-01-0996: Heat Losses from the Turbine of a Turbocharged SI-Engine – Measurements and Simulation. Westin, Rosenqvist & Ångström. Presented at the 2004 SAE World Congress in Detroit, March 8-11, 2004.
[4] SAE 2003-01-3124: Simulation of a turbocharged SI-engine with two software and comparison with measured data. Westin & Ångström. Presented at the 2003 SAE Powertrain and Fluid Systems Conference in Pittsburgh.
[3] SIA C06: Correlation between engine simulations and measured data - experiences gained with 1D-simulations of turbocharged SI-engines. Westin, Elmqvist & Ångström. Presented at the SIA International Congress "SIMULATION, an essential tool for risk management in industrial product development" in Poissy, Paris, September 17-18, 2003.
[2] IMechE C602/029/2002: A method of investigating the on-engine turbine efficiency combining experiments and modelling. Westin & Ångström. Presented at the 7th International Conference on Turbochargers and Turbocharging in London, May 14-15, 2002.
[1] SAE 2000-01-2840: The Influence of Residual Gases on Knock in Turbocharged SI-Engines. Westin, Grandin & Ångström. Presented at the SAE International Fall Fuels and Lubricants Meeting in Baltimore, October 16-19, 2000.
The first step in the investigation of simulation accuracy was to model the engine as accurately as possible and to correlate it against measurements that were themselves as accurate as possible. That work is covered in chapters 3 and 5 and in paper 3 above. The scientific contribution is the isolation of the main inaccuracy to the simulation of turbine efficiency.
In order to have something to compare the simulated turbine efficiency against, a method was developed that enables calculation of the crank-angle-resolved (CA-resolved) on-engine turbine efficiency from measured data, with some support from a few simulated properties. That work was published in papers 2 and 8 and is the main scope of chapter 6. The scientific contributions here are several:
· The application on a running SI-engine is a first.
· It was proven that CA-resolution is absolutely necessary in order to have a physically and mathematically valid expression for the turbine efficiency, and a new definition of the time-varying efficiency is developed.
· It tests an approach to cover possible mass accumulation in the turbine housing.
· It reveals that the common method for incorporating bearing losses, a constant mechanical efficiency, is too crude.
The next step was to investigate whether different commercial codes differ in their results, even though they build on the same theoretical foundation. That work is presented in chapter 4, which corresponds to paper 4, and has given the industry useful input when choosing simulation tools.
The next hypothesis tested was whether heat losses were a major cause of the simulation inaccuracy. The scientific contribution in this part of the work is a model of the heat transport within the turbocharger, which was developed, calibrated and incorporated in the simulations. It was concluded that heat losses contribute only a minor part of the inaccuracy, but that they are a major reason for a common simulation error in the turbine outlet temperature, which is very important when trying to simulate catalyst light-off. This work was published in paper 5 and is covered in chapter 7.
Chapter 8, and papers 6 and 8, cover the last investigation of this work: a broad study of the impact of design changes to both manifolds and turbines on simulation accuracy as well as on engine performance. The scientific contribution is the demonstration that the common theory that the simulation inaccuracy is proportional to the pulsation amplitude of the flow is not valid. It was shown that the degree of reaction was of minor importance for the efficiency of the turbine in the pulsating engine environment. Furthermore, a method is presented for calculating internal flow properties in the turbine using steady-flow design software in a quasi-steady procedure. Of more direct use for industry is information on how to design the manifolds; the study also sheds more light on how the turbine works under unsteady flow, showing for instance that the throat area is the single most important property of the turbine and that the system is far more sensitive to this parameter than to any other design parameter of the turbine. It was also shown that the variation among individual turbines is of minor importance, and that the simulation error was of similar magnitude for turbines from different manufacturers.
Paper 7, and chapter 9, cover a simulation exercise in which the transient performance of turbocharged engines is optimised with the help of factorial designs. It sorts out the relative importance of several design parameters of turbocharged engines and tells the industry where to put the majority of the effort in order to make the optimisation process as efficient as possible.
Overall, the work presented in this thesis has established a method for calibrating models against measured data in a sequence that makes the process efficient and accurate. It has been shown that the use of controllers in this process can save time and effort tenfold or more.
When designing turbocharged engines, the residual gas is a very important factor: it affects both knock sensitivity and volumetric efficiency. The flow in the cylinder is by nature multi-dimensional and is therefore not physically modelled in 1D codes; it is instead modelled through models of perfect mixing or perfect displacement, or a certain mix between the two. Before the actual project started, the amount of residual gas in an engine was measured and its influence on knock was established and quantified. This was the scope of paper 1, and the information has been useful when interpreting model results throughout the entire work.
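As a flavor of what a crank-angle-resolved turbine efficiency involves, a minimal sketch of the instantaneous isentropic efficiency computed from measured traces. This is a quasi-steady simplification, not the thesis's full method, which also treats mass accumulation in the housing and bearing losses; all numbers are invented:

```python
import numpy as np

def turbine_isentropic_efficiency(mdot, T_in, p_in, p_out, power_actual,
                                  cp=1100.0, gamma=1.33):
    """Instantaneous (per crank-angle sample) isentropic efficiency.
    Quasi-steady, no mass storage, no bearing losses. Gas properties
    are typical exhaust-gas values."""
    pr = np.asarray(p_out, dtype=float) / np.asarray(p_in, dtype=float)
    ideal_power = (np.asarray(mdot, dtype=float) * cp
                   * np.asarray(T_in, dtype=float)
                   * (1.0 - pr ** ((gamma - 1.0) / gamma)))
    return np.asarray(power_actual, dtype=float) / np.maximum(ideal_power, 1e-9)

# One exhaust pulse with illustrative numbers: pulsating inlet pressure.
theta = np.linspace(0.0, 180.0, 181)                  # crank angle, deg
p_in = 1.5e5 + 1.0e5 * np.sin(np.radians(theta))      # Pa
eta = turbine_isentropic_efficiency(mdot=0.05, T_in=1100.0, p_in=p_in,
                                    p_out=1.1e5, power_actual=4000.0)
```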
|
387 |
New concepts and techniques for quantification and trace analysis by gas chromatography
Pettersson, Johan January 2004 (has links)
In the first part of this thesis, strategies for avoiding systematic errors when quantifying the main components in mixtures of solvents have been developed (paper I). It is shown that variations in density caused by variations in sample composition and/or volume contraction need to be taken into account; in this way, quantification can be improved. The second part (papers II-V) describes a number of new methods for the analysis of organic trace components present in gaseous or aqueous samples, after an overview of the most commonly applied sample enrichment techniques has been given. For the enrichment of volatile trace compounds from gaseous samples, the concept of open tubular trapping has been further developed. A simplified procedure for preparing ultra-thick-film, sorptive open tubular traps (OTTs) is described (paper III). The traps are coated with an irregular film of PDMS, and it is shown that the performance in terms of breakthrough volume is only marginally affected by the use of such traps. In paper IV, it is shown both experimentally and with a theoretical model that the enrichment capacity of OTTs can be significantly increased by increasing the inner diameter of the traps. A fully automated procedure for high-capacity sorption enrichment of trace organic analytes present in water is also reported (paper II). Time-based non-equilibrium extractions are feasible, enabling fast extractions that still allow sub-ppt limits of detection. The high flexibility of the automated system makes it possible to sample from process streams or off-line sources. Finally, the development of a new two-dimensional precolumn-backflush method for the analysis of polar volatile trace analytes in water is described (paper V). This concept is based on the action of a hygroscopic salt with a strong affinity for water, packed in a precolumn. Organic trace compounds, such as volatile alcohols or ketones, show little retention on the precolumn and are eluted ahead of the bulk of the water onto a capillary column for subsequent high-resolution separation. The residual water is removed from the system by backflushing the precolumn. The procedure allows the direct injection of aqueous sample volumes of at least 100 µl, and the pre-fractionation is accomplished within only a few minutes. Quantification limits for selected polar trace components were in the low ppb region.
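To illustrate why a larger inner diameter can raise the enrichment capacity of a thick-film trap (the claim of paper IV), a back-of-envelope sketch using the idealized relation V_b ≈ V_M(1 + k). It ignores band broadening (real breakthrough occurs earlier), and the PDMS partition coefficient K is an invented placeholder:

```python
import math

def breakthrough_volume_ml(inner_d_um, film_um, length_m, K=1000.0):
    """Idealized breakthrough volume of an open tubular trap: V_M * (1 + k).
    K is an illustrative gas/PDMS partition coefficient for some analyte."""
    r_gas = (inner_d_um / 2.0 - film_um) * 1e-6            # gas channel radius, m
    r_wall = (inner_d_um / 2.0) * 1e-6                     # radius incl. film, m
    V_M = math.pi * r_gas ** 2 * length_m                  # gas hold-up, m^3
    V_S = math.pi * (r_wall ** 2 - r_gas ** 2) * length_m  # sorbent film, m^3
    k = K * V_S / V_M                                      # retention factor
    return V_M * (1.0 + k) * 1e6                           # m^3 -> mL

# Same ultra-thick film, two bores: the wider trap retains more before breakthrough.
print(breakthrough_volume_ml(530, 100, 5.0))   # wide-bore trap
print(breakthrough_volume_ml(320, 100, 5.0))   # narrower trap, same film
```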
|
388 |
Play to promote development and learning in children infected with the Human Immunodeficiency Virus (HIV): Case studies of three children
Symonds, Gene January 2010 (has links)
The aim of this study was to explore the use of play with toddlers who are HIV positive in order to facilitate play, playfulness and sensory-motor development. The objectives were to explore how the therapist facilitated play, how the child responded to the intervention, how playfulness manifested as a facilitatory strategy, and how playfulness manifested as a response. A qualitative approach framed the case study research method, with three participants between the ages of twelve months and three years. The main source of data was a record of the play-based intervention with the three participants. Additional data were obtained from participant observation of the children’s responses to the play-based intervention, and from hospital and occupational therapy record notes. Each case was first analyzed individually, and then an analysis was made across the cases. Qualitative analysis of the data was done manually by coding, seeking categories and eliciting emergent themes, using an analytical strategy of theoretical propositions and an analytical technique of explanation building. Coding was done inductively, using theoretical constructs from Occupation by Design, namely the elements of appeal, intactness and accuracy. Signs of playfulness were coded according to evidence of the elements of playfulness, namely perception of control, intrinsic motivation, suspension of reality, and framing. Findings of the study were reported under two themes: Playful enablement (the therapist) and Engaging, playing and developing (the child).
|
389 |
Effects Of Muscle Fatigue On Shooting Accuracy In Handball Players
Simsek, Beyza 01 September 2012 (has links)
The purpose of the study was to investigate the effects of muscle fatigue on shooting accuracy in male handball players. Sixteen elite young male handball players (age: 17.12 ± 1.74 years; height: 185.26 ± 7.17 cm; body mass: 78.93 ± 11.07 kg) volunteered to participate in the study. The study comprised laboratory and field test sessions. In the laboratory test, maximal oxygen consumption (VO2max) was obtained from a treadmill running test, with blood lactate concentration and heart rate measured at rest and every 3 minutes during running. A running speed equal to 75% of the speed at VO2max was used as the initial velocity for the 30-15 intermittent fitness test (30-15IFT). In the field tests, after the optimum shooting velocity of each participant had been determined, participants shot at each of the targets placed at the four corners of the handball goal 4 times, for a total of 16 shots. Blood lactate concentration was measured from the earlobe of each participant on completing the shooting test session. The 30-15IFT was then applied as the fatigue protocol; at the end of the test, blood lactate concentration was measured again, and the participant repeated the shooting test session immediately after the fatigue protocol. During all shooting procedures, the acceleration of the wrist and the speed of the ball were recorded. Blood lactate concentration over 8 mmol/L, 90% of HRmax, respiratory exchange ratio > 1, and exhaustion of the participant were accepted as ending criteria for the tests. As a result, no significant differences were found between the pre-fatigue and post-fatigue protocols in terms of accurate and inaccurate shots. Shooting consistency, ball speed, response time, and the X, Y, and Z axes of wrist acceleration were highly correlated with each other in terms of shooting accuracy in both pre- and post-fatigue conditions. Shooting consistency had an effect on accurate shots, and ball velocity had an effect on inaccurate shots in the pre-fatigue condition. However, no variable had an effect on accurate or inaccurate shots in the post-fatigue condition. In the pre-fatigue condition, the right-to-left motion of the wrist (X axis) was the most important motion, while the back-to-forward motion (Y axis) became more important in the post-fatigue condition.
|
390 |
A framework for flexible integration in robotics and its applications for calibration and error compensation
To, Minh Hoang 06 1900 (has links)
Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing cost. Many existing robot systems augmented with guidance from a large-volume metrology system have proved to meet the high dimensional-accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly, dedicated systems, which may not be ideal for aerospace manufacturing, with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems.
To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology for composing distributed software components which can be integrated dynamically at runtime. This gives the automation devices controlled by these software components (robots, metrology systems, actuators, etc.) the potential to be assembled on demand for various assembly applications.
To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, thus allowing one expensive metrology system to be shared by a number of robots. Robot calibration is employed in the first stage to remove the majority of the robot's inaccuracy; the metrology then corrects the residual errors. In this work, a new calibration model for serial robots having a parallelogram linkage is developed that takes into account both geometric errors and the joint deflections induced by link masses and the weight of the end-effector.
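A schematic sketch of the two-stage scheme, with hypothetical names and the pose arithmetic simplified to additive 3-D position corrections:

```python
import numpy as np

def corrected_position(joints, nominal_fk, calib_correction, metrology_residual):
    """Stage 1: model-based correction from calibration (geometric errors
    plus gravity-induced deflection). Stage 2: residual offset from the
    last intermittent measurement by the shared metrology system."""
    p = nominal_fk(joints)             # nominal forward kinematics -> xyz (m)
    p = p + calib_correction(joints)   # stage 1: calibrated model correction
    p = p + metrology_residual         # stage 2: last measured residual
    return p

# Toy stand-ins: a 2-link planar arm and an invented deflection model.
nominal_fk = lambda q: np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                                 np.sin(q[0]) + np.sin(q[0] + q[1]), 0.0])
calib_correction = lambda q: np.array([0.0, 0.0, -0.0008 * np.cos(q[1])])
metrology_residual = np.array([0.0002, -0.0001, 0.0003])  # from last survey

print(corrected_position(np.array([0.3, 0.8]),
                         nominal_fk, calib_correction, metrology_residual))
```

Between metrology updates, the calibration model alone carries the accuracy, which is what lets one tracker serve several robots.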
Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is adopted to create a distributed control system that implements calibration and error compensation for a large industrial robot having a parallelogram linkage. The control system is formed by hot-plugging the control applications of the robot and the metrology system used together. Experimental results show that the developed error model improved the positional accuracy of the loaded robot from several millimetres to less than one millimetre, and halved the time previously required to correct the errors using the metrology alone. The experiments also demonstrate the capability of sharing one metrology system among more than one robot.
|