About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Head Acceleration Experienced by Man: Exposure, Tolerance, and Applications

Rowson, Steven 03 May 2011
Between 1.6 and 3.8 million sports-related concussions are sustained by persons living in the United States annually. While sports-related concussion was once considered to result only in immediate neurocognitive impairment and transient symptoms, recent research has correlated long-term neurodegenerative effects with a history of sports-related concussion. Increased awareness and current media attention have contributed to concussions becoming a primary health concern. Although much research has investigated the biomechanics of concussion, little is understood about the biomechanics that cause concussion in humans. The research presented in this dissertation investigates human tolerance to head acceleration using methods that pair biomechanical data collected from human volunteers with clinical data. Head impact exposure and injury risk are quantified and presented. In contrast to the publicly available data on the safety of automobiles, consumers have no analytical mechanism to evaluate the protective performance of football helmets. With this in mind, the Summation of Tests for the Analysis of Risk (STAR) evaluation system was developed to evaluate the impact performance of football helmets and provide consumers with information about helmet safety. The STAR evaluation system was designed using real world data that relate impact exposure to injury risk. / Ph. D.
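To make the exposure-weighted scoring idea concrete, here is a minimal Python sketch of a STAR-style aggregation: each impact condition's annual exposure weight is multiplied by an injury-risk estimate from a logistic risk curve, and the products are summed. The condition list, exposure weights, and risk-curve coefficients below are illustrative assumptions, not the published STAR parameters.

```python
import math

# Hypothetical logistic risk curve: probability of concussion as a
# function of peak linear head acceleration (g). Coefficients are
# illustrative placeholders, not the published values.
def concussion_risk(accel_g, b0=-9.8, b1=0.05):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * accel_g)))

# Hypothetical impact conditions: (annual exposure weight, peak accel in g)
# for a helmet under test. Real STAR testing uses defined drop heights
# and impact locations; these numbers are made up for illustration.
conditions = [
    (25.0, 30.0),   # many low-severity impacts
    (10.0, 60.0),
    (2.0, 100.0),   # few high-severity impacts
]

# STAR-style score: exposure-weighted sum of predicted injury risk,
# i.e. an estimate of predicted concussions per season. Lower is better.
star = sum(exposure * concussion_risk(accel) for exposure, accel in conditions)
print(f"STAR-style score: {star:.3f}")
```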
12

Effect of Belt Usage Reporting Errors on Injury Risk Estimates

Swanseen, Kimberly Dawn 07 January 2010
This thesis presents the results of a research effort investigating the effect of belt usage reporting errors by National Automotive Sampling System / Crashworthiness Data System (NASS-CDS) investigators on injury risk estimates. Current estimates of injury risk are developed under the assumption that NASS-CDS investigators always determine seat belt usage accurately. The primary purpose of this research is to determine the accuracy of NASS-CDS investigators using event data recorders (EDRs) as the baseline for accuracy, and then to recalculate injury risk estimates based on our findings. A dataset of 107 EDRs from vehicle tests conducted by the National Highway Traffic Safety Administration (NHTSA) and the Insurance Institute for Highway Safety (IIHS) was analyzed to determine the accuracy of Chrysler, Ford, GM, and Toyota EDRs. Accuracy was examined by both EDR module type and vehicle make, and was determined for crash delta-V, seat belt buckle status, pre-impact speed, airbag deployment status, and front seat position. From this analysis we concluded that EDRs were accurate within 4.5% when comparing maximum delta-V for EDRs that recorded the entire crash pulse. We also determined that EDRs were 100% accurate in reporting driver seat belt status for EDRs that completely recorded the event and recorded a status for the driver's seat belt. All GM, Ford, and Chrysler EDRs in our database reported pre-impact velocities within 6 mph of the NHTSA- and IIHS-reported values. We also found that all but 2 (101 out of 103) of the GM, Ford, and Toyota EDRs correctly reported airbag deployment status. Lastly, we concluded that seat position status was useful in determining when a smaller occupant was the driver or right front occupant: EDRs reported the seat position of 5% Hybrid III females as "forward" in every case in which seat position was recorded for this smaller occupant size. Based on the analysis of seat belt status accuracy, NASS-CDS investigator reports of driver seat belt status were compared against EDR driver seat belt status to determine the accuracy of the investigators; the same comparison was conducted for driver seat belt status reported by police. We found that NASS-CDS investigators had an overall error of 9.5% when determining driver seat belt status. When the EDR stated that the driver was unbuckled, investigators incorrectly coded the driver as buckled in 29.5% of the cases; when the EDR stated that the driver was buckled, NASS-CDS error was only 1.2%. Police officers were less accurate than NASS-CDS investigators, with an overall error of 21.7%: when the EDR stated that the driver was buckled, police had an error of 2.4%, and when the EDR stated that the driver was unbuckled, police had an error of around 69%. In 2008, NASS-CDS investigators reported that drivers had an overall belt usage rate during accidents of 82%. After correcting for the errors we discovered, we estimate that the actual driver belt usage rate during crashes is around 72.6%. Injury risk estimates and odds ratio point estimates were then calculated for NASS-CDS investigator and EDR buckled versus unbuckled cases. The cases included only frontal collisions in which there was no rollover event or fire, and injury was defined as AIS 2+. The risk ratios and point estimates were then compared between investigators and EDRs. We found that injury risk for unbelted drivers may be overestimated by NASS-CDS investigators.
The unbuckled to buckled risk ratio for EDRs was 8%-12% lower than the risk ratio calculated for NASS-CDS investigators. / Master of Science
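As an illustration of how a reported belt-use rate can be adjusted for investigator misclassification, here is a minimal Python sketch using the standard closed-form correction for a binary classifier. The error rates come from the abstract above; the function name and the assumption that the errors apply uniformly across cases are mine. The thesis's own case-level correction yields its reported 72.6%, so this simplified closed-form answer differs slightly.

```python
def corrected_rate(reported, fp, fn):
    """Invert binary misclassification.
    reported = true * (1 - fn) + (1 - true) * fp, solved for `true`.
    fp: P(coded buckled | actually unbuckled)
    fn: P(coded unbuckled | actually buckled)
    """
    return (reported - fp) / (1.0 - fn - fp)

# Error rates reported in the abstract: 29.5% false "buckled" codes,
# 1.2% false "unbuckled" codes, 82% reported belt usage in 2008.
print(f"{corrected_rate(0.82, fp=0.295, fn=0.012):.1%}")  # ~75.8% under uniform errors
```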
13

The Crash Injury Risk to Rear Seated Passenger Vehicle Occupants

Tatem, Whitney M. 22 January 2020
Historically, rear seat occupants have been at a lower risk of serious injury and fatality in motor vehicle crashes than their front seat counterparts. However, many passive safety advancements of the past few decades, such as advanced airbag and seatbelt technology, primarily benefit occupants of the front seat. Indeed, safety for front seat occupants has improved drastically in the 21st century, but has it improved so much that the front seat is now safer than the rear? Today, rear-seated occupants account for 10% of all passenger vehicle fatalities. In this era focused on achieving zero traffic deaths, the safety of rear-seated occupants must be further addressed. This dissertation analyzed U.S. national crash data to quantify the risk of injury and fatality to rear-seated passenger vehicle occupants while accounting for the influence of associated crash, vehicle, and occupant characteristics such as crash severity, vehicle model year, and occupant age/sex. In rear impacts, the risk of moderate-to-fatal injury was greater for rear-seated occupants than their front-seated counterparts. In high-severity rear impact crashes, catastrophic occupant compartment collapse can occur and carries with it a great fatality risk. In frontal impacts, there is evidence that the rear versus front seat relative risk of fatality has been increasing in vehicle model years 2007 and newer. Rear-seated occupants often sustained serious thoracic, abdominal, and/or head injuries that are generally related to seatbelt use. Seatbelt pretensioners and load limiters – commonplace technology in the front seating positions – aim to mitigate these types of injuries but are rarely provided as standard safety equipment in the rear seats of vehicles today. Finally, in side impacts, injury and fatality risks to rear- and front-seated occupants are more similar than in the other crash modes studied, though disparities in protection remain, especially in near-side vehicle-to-vehicle crashes. This work also projects substantial injury reduction benefits if a rear seat belt reminder system were widely implemented in the U.S. vehicle fleet. This dissertation presents a comprehensive investigation of the factors that contribute to rear-seated occupant injury and/or fatality through retrospective studies on rear, front, and side impacts. The overall goal of this dissertation is to better quantify the current risk of injury to rear-seated occupants under a variety of crash conditions, compare this to the current risk to front-seated occupants, and, when possible, identify how injuries are occurring and ways in which they may be prevented in the future. The findings can benefit automakers who seek to improve the effectiveness of rear seat safety systems as well as regulatory agencies seeking to improve vehicle tests targeting rear seat passenger vehicle safety. / Doctor of Philosophy / Historically, if a passenger vehicle such as a sedan or SUV was in a crash, occupants who were rear-seated were less likely to be hurt than someone who was front-seated. In other words, rear-seated occupants have been at a lower risk of injury than front-seated occupants. Indeed, safety for front seat occupants has improved drastically in the 21st century due to advancements in airbag and seatbelt technologies, among others, but has it improved so much that the front seat is now safer than the rear? Today, of all vehicle occupants who are killed in crashes on U.S. roadways, 10% are rear-seated.
During this time when conversations surrounding vehicle safety are focused on achieving zero traffic deaths, the safety of rear-seated occupants must be further studied. This dissertation looked at national databases of all police-reported crashes that occur each year in the United States. The risk of injury to rear-seated passenger vehicle occupants was quantified and compared to that of front-seated occupants. Factors that may increase or decrease the risk of injury and fatality, such as crash type, vehicle type, and occupant demographics, were further explored and reported. In vehicles that were rear-ended, the risk of injury was greater for rear-seated occupants than their front-seated counterparts. When a vehicle crashes into something front-first (the most common type of impact in a vehicle crash), evidence is presented that the risk of fatality is greater in the rear seats than the front seats in model year 2007 and newer vehicles, which generally are equipped with the most recent airbag and seatbelt technology. When a vehicle is hit on either of its sides, the risk of injury is closer between rear- and front-seated occupants than in the rear-end or frontal crashes previously studied. That said, differences in occupant protection were still observed between the rear and front seats, especially when the occupants studied were seated on the side closest to the impact, the near side, and the vehicle was struck by another vehicle rather than sliding into an object such as a pole. This work also projects substantial injury reduction benefits if a rear seat belt reminder system were widely implemented in the U.S. vehicle fleet. This dissertation presents a comprehensive investigation of the factors that contribute to rear-seated occupant injury and/or fatality through retrospective studies on rear, front, and side impacts. The overall goal of this dissertation is to better quantify the current risk of injury to rear-seated occupants under a variety of crash conditions, compare this to the current risk to front-seated occupants, and, when possible, identify how injuries are occurring and ways in which they may be prevented in the future. The findings can benefit automakers who seek to improve the effectiveness of rear seat safety systems as well as regulatory agencies seeking to improve vehicle tests targeting rear seat passenger vehicle safety.
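To make the rear-versus-front comparison concrete, here is a minimal Python sketch computing a relative risk of injury from crash counts, with the standard log-normal 95% confidence interval. The counts are invented for illustration; the dissertation's actual estimates come from weighted national crash data.

```python
import math

def relative_risk(inj_a, n_a, inj_b, n_b):
    """Risk of injury in group A relative to group B, with a 95% CI."""
    ra, rb = inj_a / n_a, inj_b / n_b
    rr = ra / rb
    # Standard error of log(RR), then a log-normal 95% interval.
    se = math.sqrt((1 - ra) / inj_a + (1 - rb) / inj_b)
    lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return rr, lo, hi

# Hypothetical counts: injured / exposed occupants by seating position.
rr, lo, hi = relative_risk(inj_a=120, n_a=1000, inj_b=90, n_b=1000)
print(f"rear vs front RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An RR above 1 with a confidence interval excluding 1 would indicate that the rear seat carries a significantly higher injury risk than the front under the conditions studied.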
14

Uppfattad skaderisk hos friidrottare på landslagsnivå : En enkätstudie / Perceived injury risk among elite Track & Field athletes : A questionnaire-based study

Mereman, Maria January 2019
Aim Recent studies of Swedish track and field athletes have shown that there is a substantial risk of injury. None of these studies has investigated how athletes perceive their injury risk, or how that perception may play a part in the occurrence and prevention of sport injuries. The aim of the study was to explore how Swedish track and field athletes perceive their injury risk, and to examine the potential correlation with prior injury experience using a quantitative design. Method The sample comprised 69 Swedish junior elite track and field athletes. The athletes filled out a two-part online questionnaire. The first part requested relevant personal information, including gender, track and field event, and injuries in the past 12 months; the second part consisted of "The Perception of Risk of Injury Scale", modified to suit the targeted sport. For the statistical analysis, R version 3.5.2 was used, and the results were analyzed with the Mann-Whitney U-test and Spearman's non-parametric test. Results 52 of the 69 athletes reported at least one injury during the past 12 months, and the most commonly reported period of time lost due to injury was between eight and 28 days. No significant difference (p=0.095) was found between genders in perceived injury risk. Athletes who reported more than one injury in the past 12 months perceived their re-injury risk to be higher (p<0.025). A significant negative correlation (r=-0.32, p<0.006) was found between perceived injury risk and injury severity: as the severity of the injury increased, perceived injury risk tended to decrease. Conclusions A history of previous injury has a small correlation with perceived injury risk. This study comes one step closer to understanding the potential impact of perceived risk of injury on the occurrence of actual injury. With this knowledge, it may be possible to reduce the negative perceptions concerning re-injury in athletes with higher perceptions of injury risk. Finally, awareness of re-injury risk should be increased among athletes with a history of severe injury.
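For readers unfamiliar with the two tests named above, here is a minimal Python sketch (the study itself used R 3.5.2) running a Mann-Whitney U test and a Spearman rank correlation. The scores and variable names are made up for illustration.

```python
from scipy.stats import mannwhitneyu, spearmanr

# Made-up perceived-injury-risk scores (higher = greater perceived risk).
scores_men = [12, 15, 9, 20, 14, 11, 17]
scores_women = [16, 18, 13, 21, 15, 19, 12]

# Mann-Whitney U: do the two gender groups differ in perceived risk?
u_stat, p_gender = mannwhitneyu(scores_men, scores_women, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.1f}, p={p_gender:.3f}")

# Spearman rank correlation: injury severity (days lost) vs. perceived risk.
severity_days = [3, 10, 28, 5, 40, 7, 14, 60, 2, 21, 8, 35, 4, 50]
perceived = [18, 16, 12, 19, 10, 17, 15, 8, 20, 13, 16, 11, 19, 9]
rho, p_sev = spearmanr(severity_days, perceived)
print(f"Spearman rho={rho:.2f}, p={p_sev:.3f}")
```

Both tests are non-parametric, which suits ordinal questionnaire scores that cannot be assumed to be normally distributed.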
15

Adolescent characteristics, neighbourhood social processes and socioeconomic factors and adolescent injury risk

Klemencic, Nora 15 September 2011
Adolescent participants (N=170) completed questionnaires assessing individual characteristics (gender, age, Sensation Seeking, Aggression/Oppositionality, Impulsivity) and characteristics of the neighbourhoods in which they live (neighbourhood social cohesion/informal social control of youth). Postal codes reported by the youth were linked to 2006 Canadian census data to determine area-level Socioeconomic Status (SES) for each adolescent. Adolescents' individual traits and neighbourhood characteristics were examined both as main effects and in individual-by-neighbourhood interactions as predictors of adolescents' risk of injury. Individual traits predicted injury risk; however, neighbourhood social processes and SES did not, whether included alone or together with individual characteristics. Neighbourhood social processes and neighbourhood SES each moderated the relation between certain individual traits and injury risk. The value of examining individual-context interactions in injury risk research is discussed.
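A moderation analysis like the one described can be sketched as a regression with an interaction term. Below is a minimal Python example using statsmodels with invented data and variable names; the abstract does not specify the study's actual model, so this is only one plausible formulation.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented illustrative data: injury counts, an individual trait
# (sensation seeking), and a neighbourhood-level moderator (SES).
df = pd.DataFrame({
    "injuries":          [0, 1, 2, 0, 3, 1, 0, 2, 4, 1, 0, 2],
    "sensation_seeking": [2, 5, 7, 1, 8, 4, 2, 6, 9, 5, 3, 7],
    "ses":               [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0],  # 1 = higher-SES area
})

# Poisson regression with a trait-by-neighbourhood interaction term:
# a significant interaction means SES moderates the trait's effect.
model = smf.poisson("injuries ~ sensation_seeking * ses", data=df).fit()
print(model.summary())
```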
16

The Impact of Anterior Cruciate Ligament Reconstruction, Sex, and Sport-specific, Game-like Factors on Limb Stiffness and Limb Stiffness Asymmetry during Landing

Teater, Michael Anthony 30 June 2023
Non-contact injuries can occur when athletes use poor or inconsistent mechanics during typical sport-related movements like landing from a jump. Anterior cruciate ligament (ACL) injuries are especially devastating, and certain populations like female athletes and athletes with a previous ACL reconstruction (ACLR) are at greater risk of suffering an ACL injury, with altered biomechanical strategies being one proposed reason. Asymmetric landings where one limb experiences greater landing force can decrease joint stability and place the overloaded limb at greater risk for ACL injury. Additionally, a stiff landing, characterized by increased ground reaction force (GRF), extended joints at initial ground contact, and decreased joint flexion throughout the landing, has been proposed to increase ACL injury risk. While load distribution between limbs is a common landing assessment to determine injury risk, it is unclear what role limb stiffness plays in the likelihood of experiencing an ACL injury. Limb stiffness is simply the ratio of the downward force applied to the lower limb during ground contact to the resulting deformation of the limb, and it can be approximated using GRF. Limb stiffness has been commonly used to assess performance in running, hopping, and jumping; however, its relationship with injury risk during landings is relatively unexplored. Past research has revealed that the ACL experiences peak strain prior to initial ground contact, when the knee is at or near full extension. Additionally, expert video analyses have determined that ACL injuries most likely occur within 50 milliseconds of ground contact. It is possible that limb stiffness and limb stiffness asymmetry can be used during the early impact phase of landings to reveal ACLR- and sex-specific landing mechanics differences when the ACL appears to be most vulnerable. Moreover, game-like, sport-specific landing tasks with a greater horizontal component that load the ACL, and tasks that divert attention away from landing strategies, may uncover differences that do not appear in standard, controlled laboratory tasks. The overall goal of this project was to use limb stiffness, limb stiffness asymmetry, and related measures to analyze the early landing phase mechanics of groups at greater risk for ACL injury during game-like, sport-specific landings. First, in an ACLR cohort, greater knee power and knee work asymmetries were found compared to healthy recreational athletes, supporting previous literature finding that athletes with an ACLR land unevenly by offloading their surgical limb. However, limb stiffness asymmetry was not different between groups, implying that the groups may have modulated limb stiffness differently between limbs. Second, minimal sex-by-task interactions were found for landings that varied by horizontal approach prior to initial ground contact. Significant differences were found for most measures across tasks overall; however, male and female athletes displayed similar landing mechanics, indicating that expected sex-specific differences may not exist during the immediate landing phase when ACL injuries are thought to occur. Last, a landing task that mimicked a ball in mid-air and diverted attention away from landing mechanics produced a sex-by-task interaction for peak impact force but no other measure.
When comparing each sex-task pairing, a trend toward greater peak impact force by female athletes during the distracted landing (p=0.098) was found, which may indicate that future tasks with additional external focuses or another game-like component will reveal anticipated sex-specific differences. Increased time between limbs at initial ground contact for female athletes also suggested that a time-synchronized assessment of between-limb coordination may be beneficial for future research. / Doctor of Philosophy / Non-contact injuries can occur when athletes use poor or inconsistent mechanics during typical sport-related movements like landing from a jump. Anterior cruciate ligament (ACL) injuries are especially tough, and certain populations like female athletes and athletes with a previous ACL reconstruction surgery (ACLR) are at greater risk of suffering an ACL injury, with different movement techniques being one proposed reason. Uneven landings where one limb has greater landing forces can reduce joint stability and place the overloaded limb at greater risk for ACL injury. Additionally, a stiff landing, defined by larger ground reaction force (GRF), extended joints at initial ground contact, and decreased joint flexion throughout the landing, is thought to increase ACL injury risk. While landing force distribution between limbs is a common way of evaluating landings to determine injury risk, it is unclear what role limb stiffness plays in the likelihood of experiencing an ACL injury. Limb stiffness is simply how much the limb resists being compressed by the downward force applied to it during ground contact, which can be estimated using GRF. Limb stiffness has been commonly used to assess performance in running, hopping, and jumping; however, research on its relationship with injury risk during landings is limited. Past research has revealed that the ACL experiences maximum stretch prior to initial ground contact, when the knee is straight or nearly straight. Additionally, expert video investigations have determined that ACL injuries most likely occur within 50 milliseconds of ground contact. It is possible that limb stiffness and limb stiffness asymmetry can be used during the early impact phase of landings to reveal sex- and ACLR-specific landing mechanics differences when the ACL appears to be most in danger. Additionally, game-like, sport-specific landing tasks with a greater horizontal element that load the ACL, and tasks that redirect attention away from landing strategies, may show differences that do not appear in basic laboratory tasks. The overall goal of this project was to use limb stiffness, limb stiffness asymmetry, and related measures to examine the early landing phase techniques of groups at greater risk for ACL injury during game-like, sport-specific landings. First, in a group of athletes with a previous ACLR, greater differences between limbs in knee energy absorption were found compared to healthy recreational athletes, supporting previous research studies that found that athletes with an ACLR land unevenly by offloading their surgical limb. However, limb stiffness asymmetry was not different between groups, implying that the groups may have regulated limb stiffness differently between limbs. Second, only a couple of measures were significantly affected by the combined effect of sex and task during landings that differed in their horizontal element.
Significant differences were found for most measures across tasks overall; however, male and female athletes had similar landing techniques, showing that the expected differences between sexes may not appear very early in the landing phase, when ACL injuries are thought to happen. Last, a landing task that imitated a ball in mid-air and redirected attention away from landing mechanics produced a larger sex-specific difference for peak impact force compared to a basic landing task. When comparing each sex-task pairing, a trend toward greater peak impact force by female athletes during the distracted landing (p=0.098) was found, which may show that future tasks with additional distractions or another game-like element will reveal expected differences between sexes. Increased time between limbs at initial ground contact for female athletes also suggested that looking at the coordination of both limbs on the same timescale may be useful for future research.
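To illustrate the stiffness and asymmetry measures described above, here is a minimal Python sketch computing vertical limb stiffness from peak GRF and limb compression, together with one common symmetry index. The numbers and the choice of symmetry index are illustrative assumptions, not the dissertation's exact protocol.

```python
def limb_stiffness(peak_grf_n, compression_m):
    """Vertical limb stiffness (N/m): peak ground reaction force
    divided by the peak compression of the limb during landing."""
    return peak_grf_n / compression_m

def asymmetry_index(left, right):
    """Symmetry index: between-limb difference as a percentage of the
    between-limb mean; 0% means perfectly symmetric."""
    return 100.0 * (left - right) / ((left + right) / 2.0)

# Illustrative values for one landing trial.
k_left = limb_stiffness(peak_grf_n=2400.0, compression_m=0.12)   # 20 kN/m
k_right = limb_stiffness(peak_grf_n=2000.0, compression_m=0.13)  # ~15.4 kN/m
print(f"asymmetry: {asymmetry_index(k_left, k_right):.1f}%")
```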
17

Effectiveness of Intersection Advanced Driver Assistance Systems in Preventing Crashes and Injuries in Left Turn Across Path / Opposite Direction Crashes in the United States

Bareiss, Max January 2019
Intersection crashes represent one-fifth of all police-reported traffic crashes and one-sixth of all fatal crashes in the United States each year. Active safety systems have the potential to reduce crashes and injuries across all crash modes by partially or fully controlling the vehicle when a crash is imminent. The objective of this thesis was to evaluate crash and injury reduction in a future United States fleet equipped with intersection advanced driver assistance systems (I-ADAS). To evaluate this, injury risk modeling was performed. The dataset used to evaluate injury risk was the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS). An injured occupant was defined as a vehicle occupant who experienced an injury of maximum Abbreviated Injury Scale (AIS) 2 or greater, or who was fatally injured; this was referred to as MAIS2+F injury. Cases were selected in which front-row occupants of late-model vehicles were exposed to a frontal, near-side, or far-side crash. Logistic regression was used to develop an injury model with occupant, vehicle, and crash parameters as predictor variables. For the frontal and near-side impact models, New Car Assessment Program (NCAP) test results were used as a predictor variable. This work quantitatively described the injury risk for a wide variety of crash modes, informing effectiveness estimates. This work reconstructed 501 vehicle-to-vehicle left turn across path / opposite direction (LTAP/OD) crashes in the United States which had originally been investigated in the National Motor Vehicle Crash Causation Survey (NMVCCS). The performance of thirty different I-ADAS system variations was evaluated for each crash. These variations were the combinations of five Time to Collision (TTC) activation thresholds, three latency times, and two intervention types (automated braking and driver warning). In addition, two sightline assumptions were modeled for each crash: one where the turning vehicle was visible long before the intersection, and one where the turning vehicle was only visible after entering the intersection. For resimulated crashes which were not avoided by I-ADAS, a new crash delta-V was computed for each vehicle, and the probability of MAIS2+F injury to each front-row occupant was computed. Depending on the system design, sightline assumption, I-ADAS variation, and fleet penetration, an I-ADAS system that automatically applies emergency braking could avoid 18%-84% of all LTAP/OD crashes and could prevent 44%-94% of front-row occupants from receiving MAIS2+F injuries. I-ADAS crash and injured-person reduction effectiveness was higher when both vehicles were equipped with I-ADAS. This study presented the simulated effectiveness of a hypothetical intersection active safety system on real crashes which occurred in the United States, showing strong potential for these systems to reduce crashes and injuries. However, this crash and injury reduction effectiveness made the idealized assumption of full installation in all vehicles of a future fleet. To evaluate I-ADAS effectiveness in the United States fleet, the proportion of new vehicles with I-ADAS was modeled using Highway Loss Data Institute (HLDI) fleet penetration predictions. The number of potential LTAP/OD conflicts was modeled as increasing year over year due to a predicted increase in Vehicle Miles Traveled (VMT). Finally, the combined effect of these changes was used to predict the number of LTAP/OD crashes each year from 2019 to 2060.
In 2060, we predicted that 70,439 NMVCCS-type LTAP/OD crashes would occur, with 3,836 MAIS2+F injured front-row occupants. This analysis shows that even with long-term fleet penetration of intersection active safety systems, many injuries will continue to occur, underscoring the importance of maintaining passive safety performance in future vehicles. / M.S. / Future vehicles will have electronic systems that can avoid crashes in some cases where a human driver is unable, unaware, or reacts insufficiently to avoid the crash without assistance. The objective of this work was to determine, on a national scale, how many crashes and injuries could be avoided due to Intersection Advanced Driver Assistance Systems (I-ADAS), a hypothetical version of one of these up-and-coming systems. This work focused on crashes where one car is turning left at an intersection and the other car is driving through the intersection without turning. The I-ADAS system has sensors which continuously search for other vehicles. When the I-ADAS system determines that a crash may happen, it applies the brakes or alerts the driver to apply the brakes. Rather than conducting actual crash tests, this was simulated on a computer for a large number of variations of the I-ADAS system. The basis for the simulations was real crashes that happened from 2005 to 2007 across the United States. The variations that were simulated changed the time at which the I-ADAS system triggered the brakes (or alert) and the simulated amount of computer time required for the I-ADAS system to make a choice. In some turning crashes, the car cannot see the other vehicle because of obstructions, such as a line of people waiting to turn left across the road. Because of this, simulations were conducted both with and without the visual obstruction. For comparison, we performed a simulation of the original crash as it happened in real life. Finally, since there are two cars in each crash, there are simulations in which either car has the I-ADAS system or both cars have it. Each simulation either ends in a crash or not, and these outcomes are tallied for each system variation. The number of crashes avoided divided by the number of simulations run is the crash effectiveness. Crash effectiveness ranged from 1% to 84% based on the system variation. For each crash that still occurred, there is another simulation of the time immediately after impact to determine how severe the impact was. This is used to determine how many injuries are avoided, because often the crashes which still happened were made less severe by the I-ADAS system. In order to determine how many injuries can be avoided by making a crash less severe, the first chapter focuses on injury modeling. This analysis was based on crashes from 2008 to 2015 which were severe enough that one of the vehicles was towed, filtered down to crashes where the front or sides were damaged. Then, we compared the outcome (injury as reported by the hospital) to the circumstances (crash severity, age, gender, seat belt use, and others) to develop an estimate for how each of these crash circumstances affected the injury experienced by each driver and front-row passenger. A second goal for this chapter was to evaluate whether federal government crash ratings, commonly referred to as "star ratings", are related to whether the driver and passengers are injured or not.
In frontal crashes (where a vehicle hits something going forwards), the star rating does not seem to be related to the injury outcome. In near-side crashes (the side next to the occupant is hit), a higher star rating is better. For frontal crashes, the government test is more extreme than all but a few crashes observed in real life, and this might be why the injury outcomes measured in this study are not related to frontal star rating. Finally, these crash and injury effectiveness values will only ever be achieved if every car has an I-ADAS system. The objective of the third chapter was to evaluate how the crash and injury effectiveness numbers change each year as new cars are purchased and older cars are scrapped. Early on, few cars will have I-ADAS and crashes and injuries will likely still occur at roughly the rate they would without the system. This means that crashes and injuries will continue to increase each year because the United States drives more miles each year. Eventually, as consumers buy new cars and replace older ones, left turn intersection crashes and injuries are predicted to be reduced. Long into the future (around 2050), the increase in crashes caused by miles driven each year outpaces the gains due to new cars with the I-ADAS system, since almost all of the old cars without I-ADAS have been removed from the fleet. In 2025, there will be 173,075 crashes and 15,949 injured persons that could be affected by the I-ADAS system. By 2060, many vehicles will have I-ADAS and there will be 70,439 crashes and 3,836 injuries remaining. Real cars will not have a system identical to the hypothetical I-ADAS system studied here, but systems like it have the potential to significantly reduce crashes and injuries.
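The activation logic described above can be sketched very simply: estimate time-to-collision (TTC) from range and closing speed, and trigger braking when it drops below a threshold after accounting for system latency. The following Python example uses invented numbers; the thesis's simulations swept thirty combinations of TTC threshold, latency, and intervention type.

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing; no collision predicted
    return range_m / closing_speed_mps

def i_adas_should_brake(range_m, closing_speed_mps,
                        ttc_threshold_s=1.5, latency_s=0.2):
    """Trigger automated braking when predicted TTC, minus the system's
    reaction latency, falls at or below the activation threshold.
    Threshold and latency values here are illustrative only."""
    ttc = time_to_collision(range_m, closing_speed_mps)
    return ttc - latency_s <= ttc_threshold_s

# A through-vehicle 25 m away closing at 15 m/s (TTC ~1.67 s):
print(i_adas_should_brake(25.0, 15.0))  # True once latency is accounted for
```

Lower TTC thresholds intervene later (fewer false alarms, fewer avoided crashes); higher thresholds intervene earlier, which is one axis of the trade-off the thesis evaluates.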
18

Upper extremity biomechanics in native and non-native signers

January 2018
Individuals fluent in sign language who have at least one deaf parent are considered native signers, while those with non-signing, hearing parents are non-native signers. Musculoskeletal pain from repetitive motion is more common in non-native signers than in native signers. The goal of this study was twofold: 1) to examine differences in upper extremity (UE) biomechanical measures between natives and non-natives and 2) upon creating a composite measure of injury-risk unique to signers, to compare differences in scores between natives and non-natives. Non-natives were hypothesized to have less favorable biomechanical measures and composite injury-risk scores than natives. Dynamometry was used to measure strength, electromyography for ‘micro’ rest breaks and muscle tension, optical motion capture for ballistic signing, non-neutral joint angle, and work envelope, a numeric pain rating scale for pain, and the modified Strain Index (SI) as a composite measure of injury-risk. There were no differences in UE strength (all p≥0.22). Natives had more rest (natives 76.38%; non-natives 26.86%; p=0.002) and less muscle tension (natives 11.53%; non-natives 48.60%; p=0.008) for the non-dominant upper trapezius across the first minute of the trial. For ballistic signing, no differences were found in resultant linear segment acceleration when producing the sign for ‘again’ (natives 27.59 m/s²; non-natives 21.91 m/s²; p=0.20). For non-neutral joint angle, natives had more wrist flexion-extension motion when producing the sign for ‘principal’ (natives 54.93°; non-natives 46.23°; p=0.04). Work envelope demonstrated the greatest significance in determining injury-risk: natives had a marginally greater work envelope along the z-axis (inferior-superior) across the first minute of the trial (natives 35.80 cm; non-natives 30.84 cm; p=0.051). Natives (30%) presented with a lower pain prevalence than non-natives (40%); however, there was no significant difference in the modified SI scores (natives 4.70 points; non-natives 3.06 points; p=0.144) and no association between presence of pain and the modified SI score (r=0.087; p=0.680). This work offers a comprehensive analysis of the previously identified UE biomechanics unique to signers and helped to inform a composite measure of injury-risk. Use of the modified SI demonstrates promise, although its lack of association with pain suggests that injury-risk encompasses variables beyond a signer's biomechanics. / Dissertation/Thesis / Doctoral Dissertation Exercise and Nutritional Sciences 2018
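As a sketch of how EMG-based rest and tension percentages like those above might be computed, the following Python example thresholds a normalized EMG envelope: samples below a low activation level count as rest, and samples above a higher level count as muscle tension. The threshold values and the synthetic signal are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

def rest_and_tension(emg_pct_mvc, rest_thresh=1.0, tension_thresh=10.0):
    """Percent of samples at rest (below rest_thresh %MVC) and under
    tension (above tension_thresh %MVC). Thresholds are illustrative."""
    emg = np.asarray(emg_pct_mvc)
    rest = 100.0 * np.mean(emg < rest_thresh)
    tension = 100.0 * np.mean(emg > tension_thresh)
    return rest, tension

# Synthetic 1-minute envelope (%MVC): mostly low activation with one
# sustained burst of signing-related muscle tension.
rng = np.random.default_rng(0)
envelope = np.abs(rng.normal(0.5, 0.4, 6000))   # baseline near rest
envelope[1000:1500] += 15.0                     # a burst of tension
rest_pct, tension_pct = rest_and_tension(envelope)
print(f"rest {rest_pct:.1f}%, tension {tension_pct:.1f}%")
```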
19

Factors associated with injuries in road-runners at a local athletic club

Hendricks, Candice January 2011
<p>Across the world, physical inactivity was found to be associated with cardiovascular and chronic diseases of lifestyle which often leads to an increased rate of various physical disabilities andpremature death. To combat these high incidences of chronic diseases of lifestyle, WHO strongly encourages people to become physically active on a daily basis to reduce the risk of&nbsp / premature death. Running has thus become the preferred choice of physical activity by thousands of people to help improve their overall health and wellbeing. Apart from the health benefits&nbsp / that running provides, it can also predispose the runner to potential injury especially when runners follow an inappropriate training programme and have inadequate knowledge about factors causing injury. Therefore, baseline data about the prevalence, incidence of injury and the identification of the aetiological factors associated with running injuries are needed to develop and&nbsp / implement preventative programmes to allow runners to optimally perform in training and races without injury. In South Africa, there is limited research available on the incidence of injury in runners yet there is an annual increase in participation in races such as Two Oceans and Comrades marathon which could lead to an increase in the number of running injuries.Thus, the purpose of this study was to determine the incidence of injuries and identify the various risk factors that are associated with injuries in road runners at a local athletic club. Methods: A prospective cohort study design over a 16 week period using quantitative research methods was used. A sample of 50 runners had consented to participate in the study. The participants had to complete a self-administered questionnaire and clinical measurements of BMI, Q-angle, leglength, muscle strength of lower leg and ROM of hip and knee were recorded. The participants had&nbsp / to complete an injury report form to record any new injuries sustained over the 16 week period of the study. Statistical Package for Social Sciences (SPSS) version 18 and software SAS v9 (SAS Institute Inc., Cary, NC, USA) was used for data capturing and analysis. Descriptive and inferential statistics were done to summarize the data and was expressed as frequencies, percentages, means and standard deviations. Injury prevalence and cumulative incidence was calculated as a proportion rate along with 95% confidence interval. The Poisson regression model was used to analyse the association between running injury and the independent variables of interest such as demographics, anthropometric measurements, training methods, running experience and&nbsp / previous injury. The alpha level was set as p&lt / 0.05. Results: The study found that the majority (92%) of the participants (n=46) sustained running injuries in the past prior to the study. A total of 16 participants sustained a number of 50 new injuries over the 16 week study period. Thus the prevalence rate of injuries was 32%. The incidence rate of injuries for this study was 0.67 per&nbsp / 1000km run at a 95% confidence interval of 0.41, 1.08. Furthermore, the most common location of new injuries reported were the calf (20%) and the second most common location was the&nbsp / knee (18%). PFPS was the most common type of knee injury diagnosed, followed by lumbar joint sprain. 
The results showed that none of the identified factors (running distance, stretching, age, Q-angle, BMI, running experience, leg-length discrepancy and previous running injuries) were directly associated with running injuries. However, a marginal significance was found for&nbsp / running distance (p = 0.08) and leg length discrepancy (p = 0.06). Conclusions: The study found a high prevalence and incidence rate of injury thus the need for preventative programmes have been highlighted. There was no statistical significance found between the identified factors and risk of injury however, there was clinical relevance found between factors identified. One major&nbsp / limitation was the small sample of participants and the short duration of study period. Thus, future research is needed to further determine possible factors associated with running injuries over a longer period and including a larger sample. The results of the study will be made available to all the stakeholders (runners, coaches and medical team) to implement in athletic club. </p>
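The incidence rate and its interval can be reproduced with a short calculation: injuries divided by total kilometres run, scaled to 1000 km, with an exact Poisson confidence interval. The Python sketch below assumes a total exposure (~74,600 km) chosen to match the reported rate, since the abstract does not give the actual total; note also that a simple Poisson interval ignores the clustering of the 50 injuries within 16 runners, so it comes out narrower than the study's reported 0.41-1.08.

```python
from scipy.stats import chi2

def poisson_rate_ci(events, exposure, scale=1000.0, alpha=0.05):
    """Incidence rate per `scale` units of exposure with an exact
    (Garwood) Poisson confidence interval."""
    rate = events / exposure * scale
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 / exposure * scale
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2 / exposure * scale
    return rate, lo, hi

# 50 new injuries; total distance assumed to match the reported
# 0.67 injuries per 1000 km run.
rate, lo, hi = poisson_rate_ci(events=50, exposure=74600.0)
print(f"{rate:.2f} per 1000 km (95% CI {lo:.2f}, {hi:.2f})")
```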
