431

Development of a Performance-Based Procedure for Assessment of Liquefaction-Induced Free-Field Settlements

Peterson, Brian David 01 December 2016 (has links)
Liquefaction-induced settlement can cause significant damage to structures and infrastructure in the wake of a seismic event. Predicting settlement is an essential component of a comprehensive seismic design. The inherent uncertainty associated with seismic events makes the accurate prediction of settlement difficult. While several methods of assessing seismic hazards exist, perhaps the most promising is performance-based earthquake engineering, a framework presented by the Pacific Earthquake Engineering Research (PEER) Center. The PEER framework incorporates probability theory to generate a comprehensive seismic hazard analysis. Two settlement estimation methods are incorporated into the PEER framework to create a fully probabilistic settlement estimation procedure. A seismic hazard analysis tool known as PBLiquefY was updated to include the fully probabilistic method described above. The goal of the additions to PBLiquefY is to facilitate the development of a simplified performance-based procedure for the prediction of liquefaction-induced free-field settlements. Settlement estimates are computed using conventional deterministic methods and the fully probabilistic procedure for five theoretical soil profiles in 10 cities of varying seismicity levels. A comparison of these results suggests that deterministic methods are adequate when considering events of low seismicity but may result in a considerable underestimation of seismic hazard when considering events of moderate to high seismicity.
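For context, the PEER performance-based framework referenced in this abstract is usually stated as a hazard-convolution integral. The general form below is the standard published framing, not an equation quoted from this thesis: IM is the ground-motion intensity measure, EDP the engineering demand parameter (here, free-field settlement), DM the damage measure, and DV the decision variable.

```latex
\lambda_{DV}(dv) \;=\; \iiint G\!\left(dv \mid dm\right)\,
\bigl|\,dG\!\left(dm \mid edp\right)\bigr|\;
\bigl|\,dG\!\left(edp \mid im\right)\bigr|\;
\bigl|\,d\lambda_{IM}(im)\bigr|
```

In a simplified settlement application the inner terms collapse, leaving the mean annual rate of exceeding a given settlement as a sum of conditional exceedance probabilities weighted by increments of the ground-motion hazard curve.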
432

Design and laboratory evaluation of an inexpensive noise sensor

Hallett, Laura Ann 01 August 2017 (has links)
Noise is a pervasive workplace hazard that varies spatially and temporally. Hazard mapping is a useful way to communicate the intensity and distribution of noise sources in the workplace. These maps can be created using a stationary network of sensors, although the cost of noise measurement instruments has prohibited their use in such a network. The objectives of this work were to (1) develop an inexpensive noise sensor (<$100) that measures A-weighted sound pressure levels within ±2 dBA of a Type 2 sound level meter (SLM, ~$1,800); and (2) evaluate 50 noise sensors before field deployment as part of an inexpensive sensor network. The inexpensive noise sensor consists of an electret condenser microphone, an amplifier circuit, and a microcontroller with a small form factor (28 mm by 47 mm by 9 mm) that can be operated as a stand-alone unit. Laboratory tests were conducted to evaluate 50 of the new sensors at five test levels: (1) ambient noise in a quiet office, (2) a pink noise test signal from 65 to 85 dBA in 10 dBA increments, and (3) 94 dBA using an SLM calibrator. The differences between the output of each sensor and the SLM were computed for each level and overall. Ninety-four percent of the noise sensors (n=46) were within ±2 dBA of the SLM for noise levels from 65 dBA to 94 dBA. As noise level increased, bias decreased, ranging from 18.3% in the quiet office to 0.48% at 94 dBA. Overall bias of the sensors was 0.83% across the 75 dBA to 94 dBA range. These sensors are available for a variety of uses and can be customized for many applications, including incorporation into a stationary sensor network for continuously monitoring noise in manufacturing environments.
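As a rough illustration of the evaluation described above, the sketch below computes level-by-level differences and percent bias between a low-cost sensor and a reference SLM. The readings and the ±2 dBA acceptance criterion mirror the abstract, but the numbers themselves are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical paired readings (dBA) at the laboratory test levels:
# quiet office, 65/75/85 dBA pink noise, and a 94 dBA calibrator tone.
slm_dba    = np.array([42.0, 65.0, 75.0, 85.0, 94.0])   # reference Type 2 SLM
sensor_dba = np.array([49.7, 66.1, 75.6, 85.4, 94.5])   # inexpensive sensor

# Difference at each level and the ±2 dBA acceptance check used in the study
diff_dba = sensor_dba - slm_dba
within_2dba = np.abs(diff_dba) <= 2.0

# Percent bias relative to the reference reading at each level
bias_pct = 100.0 * diff_dba / slm_dba

for level, d, b, ok in zip(slm_dba, diff_dba, bias_pct, within_2dba):
    print(f"{level:5.1f} dBA  diff={d:+5.2f} dBA  bias={b:5.2f}%  within ±2 dBA: {ok}")

# Overall bias across the 75-94 dBA range, analogous to the reported summary figure
mask = slm_dba >= 75.0
overall = 100.0 * diff_dba[mask].mean() / slm_dba[mask].mean()
print(f"overall bias (75-94 dBA): {overall:.2f}%")
```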
433

Wintertime factors affecting contaminant distribution in farrowing barns

Reeve, Kelsie Ann 01 July 2012 (has links)
Respirable dust, carbon dioxide, ammonia, hydrogen sulfide, and carbon monoxide concentrations were measured using fixed-area monitoring and contaminant mapping in a 19-crate farrowing room during the winter. Direct-reading instruments were used with fixed-area stations and contaminant mapping to evaluate concentrations during five days over a three-week farrowing cycle. Concentrations were evaluated to determine whether pit ventilation affected contaminant concentrations, whether concentrations changed over the course of a sample day, and whether the three data collection methods produced different daily respirable dust concentrations. Pit ventilation did have a significant effect on contaminant concentration in a farrowing barn during winter. Compared to when the pit fan was on, mean area contaminant concentrations, with the exception of CO, were significantly higher when the pit fan was turned off (p<0.001). Mean respirable dust concentration was 79% higher, CO2 concentration was 35% higher, NH3 increased from 0.03 ppm to 10.8 ppm, and H2S concentrations increased from 0.03 ppm to 0.67 ppm. A significant change in mean area respirable dust (p<0.001) and CO2 (p<0.001) concentrations occurred over the course of a sample day. Mean area respirable dust concentrations were highest at the beginning of the sample day and decreased by 77% (pit fan off) to 87% (pit fan on) over a five-hour sample period. The higher early concentrations were likely attributable to the feeding period that occurred early in the day. When the pit fan was turned off, mean area CO2 concentrations increased by 24% by the end of the sample day due to inefficient ventilation and the constant production of CO2 by the swine. Finally, the three data collection methods produced similar rankings of the daily mean respirable dust concentrations; however, the magnitudes of the daily average respirable dust concentrations differed across the three methods, which might lead to different interpretations of risk. To ensure risk is not underestimated, multiple fixed-area monitors are recommended to characterize room concentrations. Throughout the study, contaminant concentrations did not exceed regulatory or international consensus standards; however, recommended agricultural health limits suggested in the literature were exceeded for respirable dust, CO2, and NH3. These findings indicate the need to consider the personal exposures of those working in farrowing barns and control options to reduce contaminant concentrations in production facilities.
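To make the pit-fan comparison concrete, the following sketch computes the percent increase in a contaminant's mean area concentration with the fan off versus on and tests the difference. The concentration arrays are hypothetical stand-ins, not data from the study, and the study's actual analysis was more involved than a simple two-sample test.

```python
import numpy as np
from scipy import stats

# Hypothetical mean-area CO2 concentrations (ppm) from repeated area samples
co2_fan_on  = np.array([2850., 2910., 2795., 2880., 2930., 2860.])
co2_fan_off = np.array([3840., 3905., 3760., 3890., 3950., 3870.])

# Percent increase of the fan-off mean relative to the fan-on mean
pct_increase = 100.0 * (co2_fan_off.mean() - co2_fan_on.mean()) / co2_fan_on.mean()

# Welch's t-test for a difference in means (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(co2_fan_off, co2_fan_on, equal_var=False)

print(f"percent increase with pit fan off: {pct_increase:.1f}%")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```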
434

Situating the Perception and Communication of Flood Risk: Components and Strategies

Bell, Heather M 02 November 2007 (has links)
Loss prevention and loss distribution must begin at multiple levels, well before a flood event. However, the benchmarks and terminology we use to manage and communicate flood risk may be working against this goal. U.S. flood policy is based upon a flood with a one percent chance of occurring in any year. Commonly called the "hundred year flood," it has been upheld as a policy criterion, but many have questioned the effectiveness of hundred year flood terminology in public communication. This research examined public perceptions of the hundred year flood and evaluated the comparative effectiveness of this term and two other methods used to frame the benchmark flood: a flood with a one percent chance of occurring in any year and a flood with a 26 percent chance of occurring in thirty years. This research also explored how flooding and flood risk messages fit into the larger context of people's lives by modeling the relationships between flood-related understanding, attitude, and behavior and the situational and cognitive contexts in which these factors are embedded. The final goal was to develop locally based suggestions for improving flood risk communication. Data were collected in the Towns of Union and Vestal, New York. Participants were adult residents of single-family homes living in one of two FEMA-designated floodplains. Face-to-face surveys and focus groups were used to gather information on respondents' flood experience and loss mitigation activities; general perception of flood risk and cause; flood information infrastructure; perceptions associated with specific flood risk descriptions; and basic demographic data. Focus groups were also asked to suggest improvements to flood risk communication. Results indicated that experience was the most influential factor in perception and behavior. Additionally, there was little evidence that understanding led to "appropriate" behavior. The 26 percent chance description was the most effective when both understanding and persuasion were included, but interpretations of probabilistic flood risk messages were highly individualized. Finally, regulatory practice likely influences attitude and behavior and may emphasize the likelihood of a particular flood at the expense of the possibility of flooding in general.
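The two alternative framings in the abstract describe the same benchmark flood: for an annual exceedance probability p = 0.01, the chance of at least one such flood in n = 30 years (a typical mortgage term) follows from assuming independence across years. This is a standard calculation, not one quoted from the thesis.

```latex
P(\text{at least one flood in } n \text{ years}) = 1 - (1 - p)^n
                                                 = 1 - (0.99)^{30} \approx 0.26
```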
435

Hazard Recognition and Risk Perception Among Union Electricians

Jazayeri, Elyas 01 January 2019 (has links)
Hazard recognition and risk perception are two important factors that are a focus of most safety training programs. According to previous research, unrecognized hazards can lead to underestimation of risks, which ultimately can lead to injuries and fatalities. The primary objective of this research was to assess hazard recognition and safety risk perception skills among union electricians. Another goal of this study was to find possible correlations between the level of engagement in safety training and hazard recognition and risk perception skills. The research objectives were accomplished by gathering data from sixty-seven apprentices and journeymen across the United States. Each individual was asked to identify hazards and to assess the risk associated with each hazard. Apprentices and journeymen were similar to each other in terms of hazard recognition, and both were significantly different from an expert group. The results also show that apprentices' risk perception did not differ significantly from that of the expert group. The results will help unions understand the impact of the level of engagement in safety training on the hazard recognition and risk perception skills of their workers. The results could also help electrical unions identify performance gaps in their training and ultimately improve safety behaviors among union electricians.
436

Using Repeat Terrestrial Laser Scanning and Photogrammetry to Monitor Reactivation of the Silt Creek Landslide in the Western Cascade Mountains, Linn County, Oregon

McCarley, Justin Craig 10 April 2018 (has links)
Landslides represent a serious hazard to people and property in the Pacific Northwest. Currently, the factors leading to sudden catastrophic failure versus gradual, slow creep are not well understood. Utilizing high-resolution monitoring techniques at a sub-annual temporal scale can help researchers better understand the mechanics of mass wasting processes and possibly lead to better mitigation of their danger. This research used historical imagery analysis, precipitation data, aerial lidar analysis, Structure from Motion (SfM) photogrammetry, terrestrial laser scanning (TLS), and hydrologic measurements to monitor displacement of the Silt Creek Landslide in the western Cascade Mountain Range in Linn County, Oregon. This landslide complex is ~4 km long by ~400 m wide. The lower portion of the landslide reactivated following failure of an internal scarp in June 2014. Precipitation was measured on site, and historical precipitation data were obtained from a nearby SNOTEL site. Analysis of aerial lidar data found that the internal scarp failure deposited around 1.00 × 10^6 m^3 of material over an area of 1.20 × 10^5 m^2 at the uppermost portion of the reactivated slide. Aerial lidar analysis also found that displacement rates on the slide surface were as high as 3 m/yr during the 2015 water year, the year immediately following the failure. At the beginning of the 2016 water year, very low altitude aerial images were collected and used to produce point cloud data, via SfM, of a deformed gravel road which spans a portion of the reactivated slide. The SfM data were complementary to the aerial and TLS scans. The SfM point cloud had an average point density of >7500 points per square meter. The resulting cloud was manipulated in 3D software to produce a model of the road prior to deformation, which was then compared to the original deformed model. Average displacement found in the deformed gravel road was 7.5 m over the 17 months between the scarp failure and the collection of the images, or ~3 m/yr. TLS point clouds were collected quarterly over the course of the 2016 water year at six locations along the eastern margin of the reactivated portion of the landslide. These 3D point cloud models of the landslide surface had an average density of 175 points per square meter. Scans were georeferenced to UTM coordinates, and relative alignment of the scans was accomplished by first using the iterative closest point algorithm to align stable, off-slide terrain and then applying the same rigid-body translation to the entire scan. This was repeated for each scan at each location. Landmarks, such as tree trunks, were then manually selected at each location, and their coordinates were recorded from the initial scan and each successive scan to measure displacement vectors. Average annual displacement for the 2016 water year ranged from a maximum of 0.92 m/yr in the uppermost studied area of the slide to a low of 0.1 m/yr at the toe. The average standard deviation of the vectors of features on stable areas was 0.039 m, corresponding to a minimum detectable displacement of about ±4 cm. Displacement totals decreased with increasing distance downslope from the internal scarp failure. Additionally, displacement tended to increase with increasing distance laterally onto the slide body away from the right margin at all locations except the uppermost, where displacement rates were relatively uniform for all landmarks.
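The displacement-vector step described above reduces to simple vector arithmetic once the repeat scans have been aligned on stable terrain. The sketch below illustrates that step and the detection-threshold idea using made-up landmark coordinates; the ICP alignment itself is assumed to have already been applied.

```python
import numpy as np

# Hypothetical landmark coordinates (UTM easting, northing, elevation in m)
# picked from an initial TLS scan and a repeat scan one water year later,
# after both scans have been aligned on stable, off-slide terrain.
initial = np.array([
    [562431.20, 4925118.40, 612.35],   # tree trunk A
    [562448.75, 4925131.10, 609.80],   # tree trunk B
    [562465.02, 4925102.55, 615.12],   # boulder C
])
repeat = np.array([
    [562430.95, 4925117.55, 612.28],
    [562448.41, 4925130.30, 609.74],
    [562464.80, 4925101.90, 615.05],
])

# Displacement vectors and their magnitudes (m) over the scan interval
vectors = repeat - initial
magnitudes = np.linalg.norm(vectors, axis=1)

# Minimum detectable displacement, on the order of the ~0.04 m scatter of
# stable-area features reported in the abstract
threshold = 0.04

for name, d in zip(["A", "B", "C"], magnitudes):
    status = "moved" if d > threshold else "within noise"
    print(f"landmark {name}: {d:.3f} m  ({status})")
```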
Volumetric discharge measurements were collected for Silt Creek in 2016 using salt dilution gauging; discharge in the upslope portion of the study area was ~1 m^3/s and increased to ~1.6 m^3/s in the downslope portion. Landslide displacement rates were much lower during the 2016 water year than during the 2015 water year, despite higher precipitation, suggesting that the overall displacement trend was decoupled from precipitation. Displacement rates at all locations on the slide decreased with each successive scan period, with some portions of the landslide stopping by autumn of 2016, suggesting the study captured the slide as it returned to a state of stability. The spatial and temporal pattern of displacement is consistent with the interpretation that the landslide reactivation was a response to the undrained load applied by the internal scarp failure. This finding highlights the importance of detailed landslide monitoring for improving hazard estimation and the quantification of landslide mechanics. This study provides new evidence supporting previous research showing that internal processes within landslide complexes can have feedback relationships, combines several existing 3D measurement tools into a detailed landslide monitoring methodology, applies a novel SfM-based approach to measuring landslide surface deformation, and suggests that landslide initiation models which rely heavily on precipitation values may not account for other sources of landslide activation.
437

A Concurrent Mixed Method Study Exploring Iraqi Immigrants' Views of Michigan

Chamberlain, Kerry Luise 01 January 2016 (has links)
Failure of emergency response personnel to communicate effectively with different cultures can have dire consequences during an emergency, including loss of lives and litigation costs. For emergency response personnel to communicate the risk of an emergency, it is important to understand how different groups, especially newly arrived foreign immigrants, perceive warnings and related messages. This study addressed how one of the largest categories of immigrants in Michigan perceived severe tornadoes, influenza pandemics, power outages, severe floods, and snowstorms. The research question examined the degree to which the equation Risk = Hazard + Outrage explained perceptions of these hazards in Michigan among newly arrived Iraqi immigrants. A concurrent mixed-method design was used. In-person interviews were conducted using quantitative and qualitative questions based on the equation and the PEN-3 model with 84 immigrants from Iraq who had lived in the United States for 4 years or less. Respondents' levels of outrage and hazard were compared using ANOVA, and the calculated levels were compared with the qualitative comments made during the interviews. Snowstorms produced the highest outrage and power outages the least, while reported awareness was lowest for snowstorms and highest for power outages. More information needs to reach Iraqi immigrants regarding unfamiliar hazards, and communicators should use Iraqi immigrants' experience with familiar hazards to identify effective ways of responding to this population. The results of this study may promote social change through more effective communication and may save lives should an emergency affecting Iraqi immigrants occur in Michigan.
438

Evaluating U.S. Counterterrorism Policy on Domestic Terrorism Using the Global Terrorism Database

Kennedy, Colleen Michelle 01 January 2019 (has links)
The United States has a long history of domestic terrorism, yet U.S. counterterrorism policy has focused almost entirely on the threat from international terrorism. The gap in the literature was the absence of an empirical evaluation of U.S. counterterrorism policy on domestic terrorism in general. The purpose of this quantitative study was to describe the impact of 21st-century U.S. counterterrorism policy on the incidence, lethality, and cost of domestic terrorism using data from the Global Terrorism Database. The multiple streams framework and power elite theory were used. In this longitudinal trend study using secondary data analysis, data from 749 domestic terrorist attacks were analyzed using descriptive statistics, visual analysis, and the series hazard model to examine changes in the frequency and hazard of domestic terrorism in relation to the following five policies: the USA PATRIOT Act, the USA PATRIOT Improvement and Reauthorization Act, the Animal Enterprise Terrorism Act, the Implementing Recommendations of the 9/11 Commission Act, and the USA FREEDOM Act. The results empirically supported the greater threat of domestic terrorism and showed that domestic terrorism changed in relation to counterterrorism policy. Further, the addition of the series hazard model to the analysis of domestic terrorism following policy implementation added depth to the results. This study contributed to positive social change by providing policy makers and counterterrorism agencies with an empirical, evidence-based method for evaluating U.S. counterterrorism policy and a non-partisan, non-political, evidence-based method for quantitatively assessing terrorist threat.
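For readers unfamiliar with the series hazard model, it adapts Cox-style survival analysis to a single series of events, modeling the time to the next attack as a function of covariates such as indicators for which policy was in force. The sketch below is a minimal Cox proportional hazards fit in that spirit using the lifelines library; the dataframe, column names, and policy indicator are hypothetical, and the thesis's actual series hazard specification involves further adaptations.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical event series: each row is one attack, with the gap (in days)
# until the next attack, an event flag, and covariates describing conditions
# at the time of the attack (e.g., whether a given policy was in force).
attacks = pd.DataFrame({
    "gap_days":    [45, 12, 90, 30, 7, 120, 60, 15, 200, 75],
    "next_event":  [1,  1,  1,  1,  1,  1,   1,  1,  0,   1],  # 0 = censored
    "policy_a":    [0,  0,  0,  1,  1,  1,   1,  1,  1,   1],
    "prior_fatal": [1,  0,  0,  1,  0,  1,   0,  0,  1,   0],
})

# Fit a Cox proportional hazards model of the hazard of the next attack;
# the small ridge penalty keeps this toy fit numerically stable.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(attacks, duration_col="gap_days", event_col="next_event")
cph.print_summary()  # hazard ratios indicate how covariates shift attack risk
```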
439

Essays on dynamic contracts

Shan, Yaping 01 December 2012 (has links)
This dissertation analyzes the contracting problem between a firm and the research employees in its R&D department. The dissertation consists of two chapters. The first chapter addresses a simplified problem in which the R&D unit has only one agent; the second chapter studies a scenario in which the R&D unit consists of a team. In the first chapter, I look at a problem in which a principal hires an agent to do a multi-stage R&D project. The transition from one stage to the next is modeled by a Poisson-type process whose arrival rate depends on the agent's choice of effort. I assume that the effort choice is binary and unobservable by the principal. To overcome the repeated moral-hazard problem, the principal offers the agent a long-term contract which specifies a flow of payments based on his observation of the outcome of the project. The optimal contract combines rewards and punishments: the payment to the agent decreases over time in case of failure and jumps up to a higher level after each success. I also show that the optimal contract can be implemented by using a risky security that has some of the features of the stocks of these firms, thereby providing a theoretical justification for the widespread use of stock-based compensation in firms that rely on R&D. In the second chapter, I look at a scenario in which the R&D unit consists of a team, which I assume, for simplicity, comprises two risk-averse agents. Now, the Poisson arrival rate is jointly determined by the actions of both agents, with the action of each remaining unobservable by both the principal and the other agent. I assume that when success in a phase occurs, the principal can identify the agent who was responsible for it. In this model, incentive compatibility means that each agent is willing to exert effort conditional on his coworker putting in effort, and thus exerting effort continuously is a Nash-equilibrium strategy played by the agents. In this multi-agent problem, each agent's payment depends not only on his own performance but is affected by the other agent's performance as well. Similar to the single-agent case, an agent is rewarded when he succeeds, and his payment decreases over time when both agents fail. Regarding how an agent's payment relates to his coworker's performance, I find that the optimal incentive regime is a function of the way in which the agents' efforts interact with one another: relative-performance evaluation is used when their efforts are substitutes, whereas joint-performance evaluation is used when their efforts are complements. This result sheds new light on the notion of optimal incentive regimes, an issue that has been widely discussed in multi-agent incentive problems.
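In Poisson moral-hazard settings of this kind, the reward-and-punishment structure can be summarized by a stylized incentive-compatibility condition: the jump in the agent's promised value upon success must be large enough to compensate the flow cost of effort. The notation below is generic (mine, not the dissertation's): λ_H and λ_L are the success arrival rates under effort and shirking, h is the flow disutility of effort, and ΔW is the jump in the agent's continuation value when a success arrives.

```latex
\underbrace{\lambda_H \,\Delta W - h}_{\text{exert effort}} \;\ge\; \underbrace{\lambda_L \,\Delta W}_{\text{shirk}}
\quad\Longleftrightarrow\quad
\Delta W \;\ge\; \frac{h}{\lambda_H - \lambda_L}.
```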
440

Spatial and Temporal Landslide Distribution and Hazard Evaluation Analyzed by Photogeologic Mapping and Relative-Dating Techniques, Salt River Range, Wyoming

Rice, John B., Jr. 01 May 1987 (has links)
The distribution of landslide type and age was analyzed to determine the causes and timing of landsliding, and to assess landslide hazards in the study area. 1173 landslides and zones of landsliding were mapped on 1:15,840 scale air photos and designated by their style of movement and age. Slides were assigned to one of four age classes based on their degree of morphologic modification visible on air photos. Relative dating (RD) methods previously applied to glacial deposits were used to refine and calibrate the age classification. Eleven RD parameters were measured on 21 rockslide and 19 glacial deposits. Cluster analyses were run on the RD data set. Slides assigned to Age-Classes 4, 3+, and 2 tend to cluster with probable Pinedale, early Holocene, and Neoglacial-age moraines, respectively. Cluster analyses indicate poor age resolution by the RD method from approximately early Altithermal to early Neoglacial time. Landslide age cannot be resolved in this study to a finer degree by the RD method than by the morphologic (air-photo) method. However, cluster analyses generally confirm age assignments and absolute age estimates of the four landslide age classes, despite limitations of the RD method such as boulder spalling and variations in lithology, deposit type, and elevation/climate between sampled deposits. The temporal distribution of landslides indicates that mass movements may have occurred rather uniformly throughout Holocene time, with slightly higher rates of sliding during post-Altithermal time due to climatic effects associated with Neoglacial advances. Spatial analyses indicate that landslides cover 73% of the Cretaceous section. Development, such as logging and road construction, could trigger landsliding in the Cretaceous section. Landslides account for 15% and 10% of the outcrop areas of the Paleozoic and Triassic-Jurassic sections, respectively. Debris flows and slump-earth flows dominate sliding in both sections, with minor numbers of rockslides present. Debris flows pose the greatest hazard in both sections. Fine-grained stratigraphic units have the highest landslide densities in both sections. The previous event locations define areas most susceptible to future sliding.
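The cluster analyses described above group deposits by their relative-dating parameters so that landslides of unknown age fall in with moraines of roughly known age. A minimal sketch of that kind of analysis is shown below using SciPy's hierarchical clustering; the parameter values and deposit labels are invented for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical relative-dating parameters measured on each deposit,
# e.g. percent weathered boulders, lichen cover, soil development index.
# Rows = deposits (rockslides of unknown age plus moraines of known age).
deposits = ["rockslide_1", "rockslide_2", "rockslide_3",
            "Pinedale_moraine", "early_Holocene_moraine", "Neoglacial_moraine"]
rd_data = np.array([
    [62.0, 0.45, 3.1],
    [20.0, 0.10, 1.2],
    [41.0, 0.30, 2.0],
    [65.0, 0.50, 3.3],   # oldest reference surface
    [43.0, 0.28, 2.1],
    [18.0, 0.08, 1.0],   # youngest reference surface
])

# Standardize each parameter so no single one dominates the distance metric
z = (rd_data - rd_data.mean(axis=0)) / rd_data.std(axis=0)

# Agglomerative (Ward) clustering, cut into three age groups
tree = linkage(z, method="ward")
groups = fcluster(tree, t=3, criterion="maxclust")

for name, g in zip(deposits, groups):
    print(f"{name:24s} -> cluster {g}")
```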
