461

Local Labor Market Scale, Search Duration, and Re-Employment Match Quality for U.S. Displaced Workers

Wilkin, Kelly R 18 December 2012
Geographic space is an important friction preventing the instantaneous matching of unemployed workers to job vacancies. Cities reduce spatial frictions by decreasing the average distance between potential match partners. Owing to these search efficiencies, theories of agglomeration predict that unemployed workers in larger labor markets find employment faster than observationally similar workers in smaller markets. Existing studies rely on cross-sectional variation in aggregate unemployment rates across spatially distinct labor markets to test for scale effects in job search. A major difficulty with these studies is that the unemployment rate reflects, at any given time, both the incidence and the duration of unemployment. Therefore, conclusions about unemployment exits drawn from the unemployment rate are confounded by transitions into unemployment. This dissertation examines the relationship between market scale and unemployment duration for permanently laid-off workers in the U.S. Using a large sample of individual unemployment spells in 259 MSAs, proportional hazard model estimates show a negative relationship between market scale and the hazard of exiting unemployment. This effect is strengthened when space is explicitly controlled for and measured with greater precision. These results are consistent with the hypothesis that search efficiencies lead workers to increase their reservation wages. 2SLS estimates show that re-employment earnings for permanently laid-off workers increase with market scale after controlling for endogenous search duration. These effects are robust to standard controls, as well as controls for local labor market conditions. These results challenge the view that search efficiencies lead to lower unemployment rates through faster job-finding rates.
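A minimal sketch of the kind of proportional hazard estimation this abstract describes, written with the lifelines library, appears below. The covariate names, the data-generating process, and all coefficient values are illustrative assumptions, not the dissertation's data or estimates.

```python
# A minimal sketch, not the author's code: fit a Cox proportional hazard
# model of unemployment exit on synthetic spell data. Covariate names and
# the data-generating process are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
log_msa_size = rng.normal(13.0, 1.2, n)   # log local labor-market size (assumed)
age = rng.uniform(20, 60, n)

# Assumed process: larger markets lower the exit hazard (the abstract's sign).
hazard = 0.10 * np.exp(-0.15 * (log_msa_size - 13.0) - 0.01 * (age - 40))
duration = rng.exponential(1.0 / hazard)  # weeks until re-employment
exited = (duration < 52).astype(int)      # right-censor spells at one year
duration = np.minimum(duration, 52)

spells = pd.DataFrame({"duration": duration, "exited": exited,
                       "log_msa_size": log_msa_size, "age": age})
cph = CoxPHFitter().fit(spells, duration_col="duration", event_col="exited")
cph.print_summary()  # expect a negative coefficient on log_msa_size
```

The sign convention is the point of the sketch: a negative coefficient on the market-scale covariate means a lower hazard of exiting unemployment, i.e., longer search in larger markets.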
462

Analytic and agent-based approaches: mitigating grain handling risks

March 2013
Agriculture is undergoing extreme change. The introduction of new-generation agricultural products has generated an increased need for efficient and accurate product segregation across a number of Canadian agricultural sectors. In particular, monitoring, controlling and preventing commingling of various wheat grades is critical to continued agri-food safety and quality assurance in the Canadian grain handling system. The Canadian grain handling industry is a vast regional supply chain with many participants. Grading of grain for blending had historically been accomplished by the method of Kernel Visual Distinguishability (KVD). KVD allowed a trained grain grader to distinguish the class of a registered variety of wheat solely by visual inspection. While KVD enabled rapid, dependable, and low-cost segregation of wheat into functionally different classes or quality types, it also put constraints on the development of novel traits in wheat. To facilitate the introduction of new classes of wheat and enable additional export sales in new markets, the federal government announced that KVD was to be eliminated from all primary classes of wheat as of August 1, 2008. As an alternative, the Canadian Grain Commission has implemented a system called Variety Eligibility Declaration (VED) to replace KVD. As a system based on self-declaration, the VED system may create moral hazard for misrepresentation: incentives exist for farmers to misrepresent their grain. Similarly, primary elevators have an incentive to commingle wheat classes in a profitable manner. Clearly, the VED system will only work as desired for the grain industry when supported by a credible monitoring system. That is, to ensure the security of the wheat supply chain, sampling and testing at specific critical points along the supply chain is needed. While current technology allows the identification of visually indistinguishable grain varieties with enough precision for most modern segregation requirements, this technology is relatively slow and expensive. Given the potential costs of monitoring VED through the current wheat supply chain, there is a fundamental tradeoff confronting grain handlers, and effective handling strategies will be needed to maintain historical wheat uniformity and consistency while keeping monitoring costs down. There are important operational issues in testing grain efficiently within the supply chain, including the choice of the optimal location to test and how intensively to test. Testing protocols for grain deliveries, as well as maintaining effective responsiveness to information feedback among farmers, will certainly become a strategic emphasis for wheat handlers in the future. In light of this, my research attempts to identify the risks, incentives and costs associated with a functional declaration system. This research tests a series of incentives designed to generate truthful behavior within the new policy environment. In this manner, I examine potential and easy-to-implement testing strategies designed to maintain integrity and efficiency in this agricultural supply chain. This study is developed in the first instance by using an analytic model to explore the economic incentives for motivating farmers' risk control efforts and handlers' optimal handling strategies with respect to testing cost, penalty level, contamination risks and risk control efforts.
We solve for optimal behavior in the supply chain assuming cost minimization among the participants, under several simplifying assumptions. In reality, the Canadian grain supply chain is composed of heterogeneous, boundedly rational and dynamically interacting individuals, and none of these characteristics fit the standard optimization framework used to solve these problems. Given this complex agent behavior, the grain supply chain is characterized by a set of non-linear relationships between individual participants, coupled with out-of-equilibrium dynamics, meaning that analytic solutions will not always identify or validate the set of optimized strategies that would evolve in the real world. To account for this inherent complexity, I develop an agent-based (farmers and elevators) model to simulate behaviour in a more realistic but virtual grain supply chain. After characterizing the basic analytics of the problem, the grain supply chain participants are represented as autonomous economic agents with a certain level of programmed behavioral heterogeneity. The agents interact via a set of heuristics governing their actions and decisions. The operation of a major portion of the Canadian grain handling system is simulated in this manner, moving from the individual farm up through the country elevator level. My simulation results suggest that testing strategies to alleviate misrepresentation (moral hazard) in this supply chain are more efficient for society when they are flexible and can be easily adjusted to react to situational change within the supply chain. While the idea of using software agents for modeling and understanding the dynamics of the supply chain under consideration is somewhat novel, I consider this exercise a first step toward a broader modeling representation of modern agricultural supply chains. The agent-based simulation methodology developed in my dissertation can be extended to other economic systems or chains in order to examine risk management and control costs. These include food safety and quality assurance network systems as well as natural-resource management systems. Furthermore, to my knowledge there are no existing studies that develop and compare both analytic and agent-based simulation approaches for this type of complex economic situation. In the dissertation, I conduct explicit comparisons between the analytic and agent-based simulation solutions where applicable. While the two approaches generated somewhat different solutions, in many respects they led to similar overall conclusions regarding this particular agricultural policy issue.
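The incentive logic at the heart of this abstract (misrepresent only when the expected premium outweighs the expected penalty from testing) can be illustrated with a minimal agent-based sketch. This is not the dissertation's model: the parameters, the decision rule, and the perception noise are all illustrative assumptions.

```python
# A minimal sketch of the misrepresentation/testing incentive described
# above, not the dissertation's model. All parameters (test_prob, penalty,
# premium, n_farmers) and the decision rule are illustrative assumptions.
import random

def simulate(n_farmers=1000, test_prob=0.2, penalty=50.0, premium=10.0, seed=1):
    """Share of farmers who misrepresent a low-grade lot as high-grade when
    the expected premium exceeds the expected penalty from being tested."""
    rng = random.Random(seed)
    misrepresented = 0
    for _ in range(n_farmers):
        # Boundedly rational agent: noisy perception of testing intensity.
        perceived_p = min(1.0, max(0.0, rng.gauss(test_prob, 0.05)))
        if premium * (1 - perceived_p) > penalty * perceived_p:
            misrepresented += 1
    return misrepresented / n_farmers

for p in (0.05, 0.10, 0.20, 0.40):
    print(f"test probability {p:.2f}: share misrepresenting = {simulate(test_prob=p):.2f}")
```

Sweeping the testing probability shows the qualitative result that motivates flexible testing strategies: misrepresentation collapses once perceived testing intensity makes the expected penalty exceed the premium.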
463

Ecotoxicological classification of ash materials

Stiernström, Sara January 2013
Incineration of waste is increasing in the EU. However, the incineration process generates both fly and bottom ash materials as waste that requires further action. A common goal throughout Europe is to find ways to utilize ash materials in an environmentally and economically efficient manner in accordance with current legislation. This legislation is the Waste Framework Directive (WFD), which lists essential properties (H-criteria) for classifying waste as hazardous or not. Of these criteria, ecotoxicity (H-14) should be classified based on the wastes' inherent hazardous properties. The WFD further states that this classification should be based on the Community legislation on chemicals (the CLP Regulation). Today, there are no harmonized quantitative criteria for the H-14 classification in the WFD, but there is an EU proposal for a computing model that sums all the measured elements classified as ecotoxic in the solid material. However, there may be a poor relationship between the theoretical ecotoxicity, based on analysed individual elements, and their actual contribution to the measured total toxicity. Therefore, to reduce the risk of incorrectly assessing the hazard potential, the overall aim of this doctoral Thesis was to develop a scientifically well-founded basis for the choice of leaching methodology and ecotoxicity testing for the H-14 classification of ash materials in Europe. In Paper I, different ash materials were classified, two leaching methods were compared, and the sensitivity as well as the usefulness of a selected number of aquatic ecotoxicity tests were evaluated. Papers III and IV studied different leaching conditions relevant for both hazard classification and risk evaluation of ash. Moreover, all four papers investigated potentially causative ecotoxic elements in the ash leachates. The results from this Thesis show that elements not classified as ecotoxic in the chemical legislation have a significant influence on the overall toxicity of the complex ash materials; these elements are captured when using the approach of ecotoxicity tests on ash leachates, but not when using the computing model. In addition, the approach of comparing chemically analysed elements in the solid ash with literature toxicity data for the same elements systematically over-estimates the hazard potential. This emphasizes the importance of using leaching tests in combination with ecotoxicity tests for the ecotoxicity classification of ash materials, at least if the aim is to fully understand the inherent hazard potential of the ash. To conclude, the recommendation for H-14 classification of ash is that leachates should be prepared using the leaching test and conditions evaluated in Paper III, and that the generated leachates should be tested in a battery of test organisms representing a wide range of biological variation and different routes of exposure. This classification proposal has support in the CLP Regulation and contributes to harmonizing the waste and chemical legislation.
At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript.
464

Model-Based Hazard Analysis of Undesirable Environmental and Components Interaction

Mehrpouyan, Hoda January 2011
Identifying the detrimental effects of environmental factors and subsystem interactions is one of the most challenging aspects of early hazard assessment in the design of complex safety-critical systems. Therefore, a complete understanding of potential failure effects before the catastrophe happens is a very difficult task. The thesis proposes a model-based hazard analysis procedure for early identification of potential safety issues caused by unexpected environmental factors and subsystem interactions within a complex safety-critical system. The proposed methodology maps hazard and vulnerability modes to specific components in the system and analyzes the hazard propagation paths for risk control and protection strategies. The main advantage of the proposed method is the ability to provide designers with means to use low-fidelity, high-level models to identify hazardous interactions. Using this technique, designers can examine the collective impacts of environmental and subsystem risks on the overall system during early stages of design and develop a hazard mitigation strategy.
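Hazard-propagation analysis over a component graph, of the general kind described above, can be sketched as a reachability computation. The component names and connections below are invented for illustration and are not from the thesis.

```python
# A minimal sketch of hazard propagation over a component graph; the
# components and links are hypothetical examples, not the thesis's models.
from collections import deque

# Directed edges: a hazard in the source component can propagate to the target.
links = {
    "battery": ["power_bus"],
    "power_bus": ["controller", "heater"],
    "heater": ["enclosure"],
    "controller": ["actuator"],
}

def propagation_set(source):
    """Breadth-first traversal returning every component reachable from a
    hazardous source, i.e. the candidate hazard propagation paths' endpoints."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {source}

print(propagation_set("battery"))  # components a battery hazard can reach
```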
465

The Occurrence and Behavior of Rainfall-Triggered Landslides in Coastal British Columbia

Guthrie, Richard 05 June 2009
This thesis analyzes the occurrence and behavior of rainfall-triggered landslides in coastal British Columbia. In particular, it focuses on the temporal and spatial distributions of landslide occurrence and their magnitudes, and considers the major factors that influence regional landslide behavior. Implicit in the research is the understanding that the landscape of coastal BC is managed, and that landslides, in addition to occurring naturally, may be caused by, and certainly impact, resources that are important to humankind. Underlying each chapter is the rationale that by better understanding the causes of, and controls on, landslide occurrence and magnitude, we can reduce the impacts and lower the associated risk. Statistical magnitude-frequency relationships are examined in coastal BC. Observations suggest that landslides in coastal British Columbia tend to a larger size until about 10,000 m² in total area; beyond this point, larger landslides are limited by landscape controls according to a power law. Probabilistic regional hazard analysis is one logical outcome of magnitude-frequency analysis, and a regional mass movement hazard map for Vancouver Island is presented. Physiographic controls on statistical magnitude-frequency distributions are examined using a cellular automata based model, and results compare favorably to actual landslide behavior: modeled landslides bifurcate at local elevation highs, deposit mass preferentially where local slopes decrease, find routes in confined valley or channel networks, and, when sufficiently large, overwhelm the local topography. The magnitude-frequency distributions of both the actual landslides and the cellular automata model follow a power law for magnitudes above 10,000–20,000 m² and flatten for smaller magnitudes. The results provide strong corroborative evidence for physiographic limitations related to slope, slope distance and the distribution of mass within landslides. The physiographic controls on landslide magnitude, debris flow mobility and runout behavior are examined using detailed field and air photograph analysis. The role of slope in deposition and scour is investigated, and a practical method for estimating both entrainment and runout in the field, as well as in the GIS environment, is presented. Further controls on landslide mobility, including the role of gullies and stream channels, roads and benches, and intact forests, are considered. The role of landslides in controlling landscape physiography is also examined. In particular, it is determined that moderate-sized landslides do the most work transporting material on hillslopes, defined by a work peak, and that this magnitude varies with local physiography and climate. Landslides that form the work peak are distinct from catastrophic landslides that are themselves formative and system-resetting. The persistence time for debris slides/debris flows and rock slides/rock avalanches is calculated over six orders of magnitude, and an event is considered catastrophic when it persists in the landscape ten times longer than the population of landslides that form the work peak. A detailed case study examines meteorological controls on landslide occurrence, and the role of extreme weather is considered. A critical rainfall intensity for the onset of landslide triggering is determined to be between 80 mm and 100 mm in 24 hours, and wind is determined to result in increased local precipitation.
The role of rain-on-snow is also evaluated and determined to be crucial to landslide occurrence. Finally, a conceptual model of landslide-induced denudation for coastal mountain watersheds spanning 10,000 years of environmental change is presented. Recent human impacts on landslide frequencies are calculated for the 20th century. The impact of logging during the last 100 years is unambiguous: logging-induced landslides almost double the landslide frequency of the wettest millennia in the last 10,000 years. This suggests that the impact of logging outpaces that of climatic change. Debris slides and debris flows are estimated to have resulted in a landscape lowering of 0.7 m across Vancouver Island during the last 10,000 years.
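The magnitude-frequency analysis described above amounts to fitting a power law to the tail of the landslide-area distribution above the roughly 10,000 m² rollover. A sketch on synthetic data (the thesis uses mapped landslide inventories) might look like:

```python
# A sketch of the magnitude-frequency fit on synthetic landslide areas; the
# thesis uses mapped inventories, and the Pareto tail index here is assumed.
import numpy as np

rng = np.random.default_rng(0)
areas = 10_000 * (1 + rng.pareto(1.2, size=5000))  # synthetic areas in m^2

tail = np.sort(areas[areas >= 10_000])[::-1]       # areas above the rollover
rank = np.arange(1, tail.size + 1)                 # exceedance rank N(>= A)
slope, _ = np.polyfit(np.log10(tail), np.log10(rank), 1)
print(f"power-law slope of log N(>=A) vs log A: {slope:.2f}")
# Below the ~10,000-20,000 m^2 rollover the empirical curve flattens,
# consistent with the physiographic limits discussed above.
```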
466

Weather-related geo-hazard assessment model for railway embankment stability

Gitirana Jr., Gilson 01 June 2005
The primary objective of this thesis is to develop a model for the quantification of weather-related railway embankment hazards. The model for quantification of embankment hazards constitutes an essential component of a decision support system that is required for the management of railway embankment hazards. A model for the deterministic and probabilistic assessment of weather-related geo-hazards (W-GHA model) is proposed based on concepts of unsaturated soil mechanics and hydrology. The model combines a system of two-dimensional partial differential equations governing the thermo-hydro-mechanical behaviour of saturated/unsaturated soils with soil-atmosphere coupling equations. A Dynamic Programming algorithm for slope stability analysis (Safe-DP) was developed and incorporated into the W-GHA model. Finally, an efficient probabilistic and sensitivity analysis framework based on an alternative point estimate method was proposed. Under the W-GHA model framework, railway embankment hazards are assessed based on factors of safety and probabilities of failure computed using soil property variability and case scenarios.

A comprehensive study of unsaturated soil property variability is presented. A methodology for the characterization and assessment of unsaturated soil property variability is proposed. Appropriate fitting equations and parameters were selected. Probability density functions adequate for representing the unsaturated soil parameters studied were determined. Typical central tendency measures, variability measures, and correlation coefficients were established for the unsaturated soil parameters. The inherent variability of the unsaturated soil properties can be addressed using the probabilistic analysis framework proposed herein.

A large number of hypothetical railway embankments were analysed using the proposed model. The embankment analyses were undertaken to demonstrate the application of the proposed model and to determine the sensitivity of the factor of safety to uncertainty in several input variables. The conclusions drawn from the sensitivity analysis study resulted in important simplifications of the W-GHA model. It was shown how unsaturated soil mechanics can be applied to the assessment of near-ground-surface stability hazards. The approach proposed in this thesis forms a protocol for the application of unsaturated soil mechanics in geotechnical engineering practice. This protocol is based on predicted unsaturated soil properties and on the use of case scenarios for addressing soil property uncertainty. Other classes of unsaturated soil problems will benefit from the protocol presented in this thesis.
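For orientation, a classical two-point estimate method (Rosenblueth-style) for propagating soil-property uncertainty into a factor of safety is sketched below. Note that the thesis proposes an alternative point estimate method, so this shows only the standard scheme; the slope model and all parameter values are illustrative assumptions.

```python
# For orientation only: a classical two-point estimate method propagating
# soil-property uncertainty into a factor of safety. The thesis proposes an
# alternative point estimate method; the slope model and parameter values
# here are illustrative assumptions.
from itertools import product
import math

def factor_of_safety(c, phi_deg, gamma):
    """Toy infinite-slope factor of safety (illustrative model only)."""
    beta, depth = math.radians(30.0), 2.0          # slope angle, slip depth (m)
    tan_phi = math.tan(math.radians(phi_deg))
    return (c + gamma * depth * math.cos(beta) ** 2 * tan_phi) / (
        gamma * depth * math.sin(beta) * math.cos(beta))

means = {"c": 10.0, "phi_deg": 30.0, "gamma": 18.0}  # kPa, degrees, kN/m^3
sds = {"c": 3.0, "phi_deg": 4.0, "gamma": 1.0}

# Evaluate the model at every mu +/- sigma corner and weight equally.
samples = [factor_of_safety(*(means[k] + s * sds[k] for k, s in zip(means, signs)))
           for signs in product((-1, 1), repeat=3)]
mean_fs = sum(samples) / len(samples)
sd_fs = (sum((x - mean_fs) ** 2 for x in samples) / len(samples)) ** 0.5
print(f"E[FS] = {mean_fs:.2f}, sd[FS] = {sd_fs:.2f}")
```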
467

Tradeoff between Investments in Infrastructure and Forecasting when Facing Natural Disaster Risk

Kim, Seong D. May 2009
Hurricane Katrina of 2005 was responsible for at least 81 billion dollars of property damage. In planning for such emergencies, society must decide whether to invest in the ability to evacuate more speedily or in improved forecasting technology to better predict the timing and intensity of the critical event. To address this need, we use dynamic programming and Markov processes to model the interaction between the emergency response system and the emergency forecasting system. Simulating changes in the speed of evacuation and in the accuracy of forecasting allows the determination of an optimal mix of these two investments. The model shows that improvements in evacuation and in forecasting produce benefits with different patterns of impact. In addition, it shows that the optimal investment decision changes with the budget and the feasible range of improvement.
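The investment tradeoff can be illustrated with a toy budget-allocation sketch (not the paper's Markov model): split a fixed budget between faster evacuation and better forecasting, and pick the mix with the lowest expected loss. All functional forms and parameters are assumptions.

```python
# A toy budget-allocation sketch, not the paper's Markov model. The assumed
# functional forms build in diminishing returns on both investments.
import numpy as np

BUDGET = 10.0
EXPOSED_VALUE = 100.0

def expected_loss(evac_spend, fcst_spend):
    evac_share = 1 - np.exp(-0.3 * evac_spend)   # share evacuated in time
    hit_prob = 0.3 * np.exp(-0.2 * fcst_spend)   # unwarned-hit probability
    return EXPOSED_VALUE * hit_prob * (1 - evac_share)

splits = np.linspace(0.0, BUDGET, 101)
losses = [expected_loss(s, BUDGET - s) for s in splits]
best = splits[int(np.argmin(losses))]
print(f"optimal evacuation spend: {best:.1f} of {BUDGET:.1f}")
```

Because the two investments enter the loss multiplicatively with different curvatures, the optimal mix shifts as the budget grows, which is the qualitative finding the abstract reports.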
468

Control of Vapor Dispersion and Pool Fire of Liquefied Natural Gas (LNG) with Expansion Foam

Yun, Geun Woong August 2010
Liquefied Natural Gas (LNG) is flammable when it forms a 5–15 percent volumetric concentration mixture with air at atmospheric conditions. When LNG vapor comes in contact with an ignition source, it may result in fire and/or explosion. Because of these flammable characteristics and dense-gas behavior, expansion foam has been recommended as one of the safety provisions for mitigating accidental LNG releases. However, the effectiveness of foam in achieving this objective has not been sufficiently reported in outdoor field tests. Thus, this research focused on experimental determination of the effect of expansion foam application on LNG vapor dispersion and pool fire. Specifically, for evaluating the use of foam to control the vapor hazard from spilled LNG, this study aimed to obtain key parameters, such as the temperature changes of methane and foam and the extent of the reduction in vapor concentration. This study also focused on identifying the effectiveness of foam and the thermal exclusion zone by investigating temperature changes of foam and fire, profiles of radiant heat flux, and fire height changes caused by foam. Additionally, a schematic model of the LNG-foam system was developed for theoretical modeling and a better understanding of the underlying mechanism of foam. Results showed that expansion foam was effective in increasing the buoyancy of LNG vapor by raising the temperature of the vapor that permeated through the foam layer, ultimately decreasing methane concentrations in the downwind direction. It was also found that expansion foam has positive effects on reducing fire height and radiant heat fluxes by decreasing fire heat feedback to the LNG pool, thus reducing the safe separation distance. Through extensive data analysis, several key parameters, such as minimum effective foam depth and mass evaporation rate of LNG with foam, were identified. However, caution must be taken because foam application can have initial adverse effects on vapor and fire control. Finally, based on these findings, several recommendations were made for improving foam delivery methods that can be used for controlling the hazard of spilled LNG.
469

Parameter estimation in proportional hazard model with interval censored data

Chang, Shih-hsun 24 June 2006
In this paper, we estimate the parameters $S_0(t)$ and $\beta$ in the Cox proportional hazard model when all data are interval-censored. For this model to apply, data should be either exact or right-censored; therefore, we transform the interval-censored data into exact data by three different methods and then apply the Nelson-Aalen estimate to obtain $S_0(t)$ and $\beta$. The test statistic $\hat{\beta}^2 I(\hat{\beta})$ is not approximately distributed as $\chi^2_{(1)}$ but as $\chi^2_{(1)}$ times a constant $c$.
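One common transformation from interval-censored to exact data is midpoint imputation; the abstract does not name the three methods the thesis compares, so the sketch below shows only this one, followed by a Nelson-Aalen fit with the lifelines library. The interval data are hypothetical.

```python
# A sketch of midpoint imputation (one plausible transformation; not
# necessarily one of the thesis's three methods) followed by a Nelson-Aalen
# cumulative hazard fit. The interval-censored data are hypothetical.
import pandas as pd
from lifelines import NelsonAalenFitter

# Event known only to lie in (left, right] for each subject.
df = pd.DataFrame({"left": [2, 4, 1, 6, 3], "right": [5, 7, 3, 9, 8]})
df["t_exact"] = (df["left"] + df["right"]) / 2   # midpoint imputation

naf = NelsonAalenFitter()
naf.fit(df["t_exact"], event_observed=[1] * len(df))
print(naf.cumulative_hazard_)                    # estimated cumulative hazard
```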
470

Attenuation Relationship For Peak Ground Velocity Based On Strong Ground Motion Data Recorded In Turkey

Altintas, Suleyman Serkan 01 December 2006
Estimation of ground motion parameters is extremely important for engineers to make structures safer and more economical, so it is one of the main issues of earthquake engineering. Peak values of ground motions, obtained either from existing records or with the help of attenuation relationships, have been used as a useful parameter to estimate the effect of an earthquake on a specific location. The Peak Ground Velocity (PGV) of a ground motion has been used extensively in recent years as a measure of intensity and as the primary input for energy-related analysis of structures. Consequently, PGV values are used to construct emergency response systems like ShakeMaps and to determine the deformation demands of structures. Despite the importance of earthquakes for Turkey, there is a lack of suitable attenuation relationships for velocity developed specifically for the country. The aim of this study is to address this deficiency by developing an attenuation relationship for Peak Ground Velocity based on the strong ground motion records of Turkey. A database was processed with established techniques, and a corrected database for the chosen ground motions was formed. Five different forms of equations used in previous studies were selected as models, and by using nonlinear regression analysis, the best-fitting mathematical relation for attenuation was obtained. The result of this study can be used as an effective tool for seismic hazard assessment studies for Turkey. In addition, as a by-product of this study, the corrected database of strong ground motion recordings of Turkey may prove to be a valuable source for future researchers.
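The nonlinear regression step can be sketched with a generic attenuation form, ln(PGV) = a + b*M - c*ln(R + h), fit by least squares. This functional form and the synthetic data are illustrative and are not one of the five models the study tested.

```python
# A sketch of the nonlinear regression step with a generic attenuation form;
# the form and the synthetic magnitude/distance data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def ln_pgv(X, a, b, c, h):
    M, R = X                                   # magnitude, distance (km)
    return a + b * M - c * np.log(R + h)

rng = np.random.default_rng(0)
M = rng.uniform(4.0, 7.5, 200)
R = rng.uniform(5.0, 150.0, 200)
y = ln_pgv((M, R), -1.0, 0.9, 1.1, 10.0) + rng.normal(0, 0.4, 200)  # synthetic

params, _ = curve_fit(ln_pgv, (M, R), y, p0=[0.0, 1.0, 1.0, 5.0])
print("fitted a, b, c, h:", np.round(params, 2))
```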
