
DETERMINING RATES OF LANDSCAPE RESPONSE TO TECTONIC FORCING ACROSS A RANGE OF TEMPORAL SCALES AND EROSIONAL MECHANISMS: TETON RANGE, WY

Swallom, Meredith 01 January 2019 (has links)
Understanding how mountain landscapes respond to variations in tectonic forcing over a range of temporal scales in active mountain belts remains a prominent challenge in tectonic and geomorphological studies. Although a number of empirical and numerical studies have examined this problem, many were complicated by issues of scale and climatic variability. In particular, the relative efficiencies of fluvial and glacial erosion, which are presumably controlled by climate, are difficult to unravel. The Teton Range in Wyoming, which results from motion on the crustal-scale Teton fault, is an ideal natural laboratory for addressing this challenge: the tectonic uplift boundary condition and the variation of uplift along strike are well documented by previous studies, and because of the range's relatively small size, climate can reasonably be expected to vary consistently along strike. Here we present results from a study that examines how the Teton landscape responds across the longest (10⁶–10⁷ yr) and shortest (10²–10⁴ yr) temporal scales. Long-term canyon incision rates determined from apatite (U-Th)/He (AHe) analysis of major drainages are highest (0.24 mm yr⁻¹) where measured uplift rates and duration are highest (near Mount Moran), leading us to propose that tectonic forcing operates as the first-order control on long-term Teton erosion. Short-term denudation rates, derived from the volumes of catchment-derived sediment deposited in Moran Bay since the most recent glacial interval (Pinedale, ~15.5 ka), are 0.00303–0.4672 mm yr⁻¹. We compare these rates to previous work, which found that high rockfall rates (1.13–1.14 mm yr⁻¹) deposit large talus volumes in Avalanche and Moran Canyons. Despite their magnitude, such high rates of mass wasting are not sustained over long periods of time, as the measured lake sediment volumes (0.007 km³) show.
We conclude that the Tetons are transport-limited during interglacials: the large volumes of canyon sediment generated during this time cannot be moved without the advance of valley glaciers. That is, fluvial systems in small mountain ranges are substantially less effective than glaciers in denuding mountain topography.
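The short-term denudation rates above reduce to a simple sediment-volume mass balance; a minimal sketch in Python, where the porosity correction and catchment area are illustrative assumptions rather than the study's values:

```python
# Denudation rate implied by a post-glacial sediment volume (illustrative numbers).
SEDIMENT_VOLUME_KM3 = 0.007   # lake sediment volume, km^3 (value quoted in the abstract)
ROCK_EQUIVALENT = 0.7         # porosity/density correction, assumed
CATCHMENT_AREA_KM2 = 20.0     # contributing catchment area, km^2, hypothetical
ELAPSED_YR = 15_500           # years since Pinedale deglaciation (~15.5 ka)

def denudation_rate_mm_per_yr(vol_km3, area_km2, years, rock_fraction=1.0):
    """Average catchment-lowering rate (mm/yr) implied by a sediment volume."""
    thickness_km = vol_km3 * rock_fraction / area_km2  # uniform lowering, km
    return thickness_km * 1e6 / years                  # km -> mm, then per year

rate = denudation_rate_mm_per_yr(SEDIMENT_VOLUME_KM3, CATCHMENT_AREA_KM2,
                                 ELAPSED_YR, ROCK_EQUIVALENT)
print(f"{rate:.4f} mm/yr")  # falls inside the abstract's 0.003-0.467 mm/yr range
```

With these toy inputs the implied rate sits near the low end of the quoted range; the real calculation would use surveyed bathymetry and dated cores.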

Marijuana and Crime: A Critique and Proposal

Jones, Urban Lynn 12 1900 (has links)
Of the plethora of social problems with which government has had to contend in recent history, few have generated more controversy than the non-therapeutic use of drugs. Many of the drugs currently in common use did not exist fifty years ago; but the most dramatic growth in non-therapeutic use has been experienced with a drug that man has known for centuries: marijuana. Known generically as Cannabis sativa, internationally as Indian hemp, popularly as marijuana, and in American slang as "pot" or "grass," the drug was introduced to the United States as an intoxicant by itinerant Mexican farm workers in the early decades of this century. The acknowledged use of marijuana in the ghettos and communities of ethnic minorities for several decades stimulated no public outcry, with the exception of the sensational press campaigns which led to the passage of the Marihuana Tax Act of 1937.

Dementia Caregiver Module and Pamphlet

Ransby, Shawen Denise 01 January 2016 (has links)
Dementia care is an immediate and growing issue that affects everyone. People are living longer, increasing the likelihood that they may be diagnosed with dementia. Friends and family become caregivers but are often unprepared for the role. The purpose of this project was to develop a 15-minute dementia care module to assist caregivers with the home care of dementia patients. A pamphlet was created to reinforce the module information and to provide a quick reference for dementia support. Self-efficacy theory, along with a review of best-practice guidelines and evidence from the literature, informed the development of the module. The Simple Measure of Gobbledygook (SMOG) and the Flesch Reading Ease scales were used to ensure that the written materials were at an appropriate reading level for the targeted group. A single-group evaluation was used to determine whether caregivers would be able to understand and use the information. A total of 5 lay dementia caregivers volunteered to evaluate the dementia module and related pamphlet, providing feedback using the Appraisal of Guidelines for Research and Evaluation (AGREE) tool. Four of the 5 caregivers strongly agreed or agreed that the module met the designated criteria. All participants stated that the information presented in the module and pamphlet was applicable to their circumstances as dementia caregivers, that the information would help them provide better care for their loved one, and that they would recommend the dementia module to other caregivers. This project will have a positive impact on social change by providing dementia caregivers with strategies and information to deliver quality dementia care for their loved ones.
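The SMOG index mentioned above has a closed-form grade formula based on counts of polysyllabic words; a minimal sketch (the example counts are hypothetical, not drawn from the project's materials):

```python
import math

def smog_grade(polysyllable_count, sentence_count):
    """SMOG reading grade from the number of 3+-syllable words in a sentence sample."""
    return 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count)) + 3.1291

# e.g. 15 polysyllabic words across a 30-sentence sample:
print(round(smog_grade(15, 30), 1))  # roughly a 7th-grade reading level
```

Applying such a formula to draft pamphlet text is one way to check it stays at the target reading level before distribution.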

Exploring the unique water properties of metal-organic nanotubes

Jayasinghe, Ashini Shamindra 01 May 2017 (has links)
Metal-organic nanotubular (MON) materials have garnered significant attention in recent years, not only for their aesthetic architecture but also for the interesting chemical and physical properties reported for these compounds. The number of MONs reported in the literature is limited compared to metal-organic frameworks, owing to synthetic challenges and difficulties in crystal engineering. These materials are of interest given their one-dimensional channels, which lead to potential applications in advanced membrane technologies. In the Forbes group, a uranium-based metal-organic nanotube (UMON) was synthesized using the zwitterion-like iminodiacetic acid (IDA) as the ligand. The IDA ligand chelates the U(VI) metal center in a tridentate fashion, and the doubly protonated IDA linker connects neighboring uranyl moieties to form hexameric macrocycles. These macrocycles stack into a nanotubular array through supramolecular interactions. Single-crystal X-ray diffraction studies showed two crystallographically unique water molecules that can be removed reversibly at 37 °C. UMON is selective for water; this selectivity was analyzed using solvents with different polarities, sizes, and shapes. In the current body of work, dehydrated UMON crystallites were exposed to these solvents (in the liquid and vapor phases) and studied using a TGA-coupled FTIR setup, confirming the highly selective nature of UMON. Kinetic studies conducted using an in-house-built vapor adsorption setup confirmed that the water uptake rate of the nanotube depends on the humidity of the environment. Uptake rates were estimated using a simple kinetic model and indicated enhanced hydration compared to other porous materials. One hypothesis regarding the interesting properties of UMON is that the uranium metal center plays a central role in the selectivity of this material.
To test this hypothesis, a similar uranium-based metal-organic nanotube containing 2,6-pyridinedicarboxylic acid (UPDC) as the ligand was synthesized and its properties were compared to those of the UMON material. UPDC did display some selectivity based upon size exclusion but did not exhibit the same selectivity for water observed in UMON. Different transition metals were also incorporated into the nanotubular structures to determine the influence of dopants on the observable properties. Only small amounts of transition metal dopants were incorporated into the structure, but they increased its stability under highly humid conditions. Attempts to incorporate transition metal dopants into UPDC led to the formation of novel chain structures.
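A "simple kinetic model" for vapor uptake of the kind mentioned is often a first-order (linear-driving-force) law; a hedged sketch with invented numbers, not the thesis data:

```python
import math

def ldf_uptake(t, m_inf, k):
    """Linear-driving-force model: mass of water adsorbed after time t."""
    return m_inf * (1.0 - math.exp(-k * t))

# Back out the rate constant k from a single observation (toy numbers):
m_inf = 10.0               # equilibrium uptake, wt% (assumed)
t_obs, m_obs = 30.0, 6.0   # time (min) and uptake (wt%) observed, hypothetical
k = -math.log(1.0 - m_obs / m_inf) / t_obs
print(f"k = {k:.4f} per minute")
```

Fitting k at several humidities would make the humidity dependence of the uptake rate, noted in the abstract, explicit.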

Bridge Failure Rates, Consequences, and Predictive Trends

Cook, Wesley 01 May 2014 (has links)
A database of United States bridge failures was used to ascertain the failure rate of bridge collapses for a sample population, with associated rates by cause. Using National Bridge Inventory bridge counts, the bridge population from which the collapsed bridges came was determined. The average number of bridge collapses based on the sample population was approximately 1/4,700 annually. The geometric distribution was determined, through multiple methods, to be a valid model for the number of bridge failures per annum. Based on the data extrapolation and a 95% confidence interval, the estimated average annual number of bridge collapses in the United States is between 87 and 222, with an expected value of 128. The database showed the hazards that have historically caused bridges to collapse throughout the United States. Conditional probabilities of collapse with consideration for the features under the structures were constructed. When adjusting for the feature under the structure, the most likely cause of collapse was determined to be hydraulic in nature. The collapse rate from hydraulic causes was unknown from past investigations; here it was determined to be 1.52 × 10⁻⁴ annually. Collapse rates were also quantifiably established for other causes. The consequences, coupled with the rate of failure by cause, were quantitatively evaluated. A benchmark set by the United States Army Corps of Engineers interim guideline for dam safety was used to show that bridge collapses within the United States are within a tolerable range when comparing collapses to life loss. To enhance risk-based and data-driven approaches to bridge management systems in compliance with the Moving Ahead for Progress in the 21st Century Act, efficacious bridge collapse data collection is examined in this investigation.
Trends obtained from statistical analysis of existing data show that 53% of collapsed bridges were structurally deficient prior to collapse, and that the failure rate of structurally deficient bridges is 1/1,100 annually. Age and structural deficiency are related, structural deficiency and collapse are related, and age at collapse is contingent on collapse cause. It was determined that deterioration-caused and overload-caused bridge collapses are age-related, but hydraulic-caused and collision-caused bridge collapses are not. Based on trends seen in existing collapse data, improved collection efforts and data fields of interest are assessed, with recommendations for analytical methods and consequence assessment while maintaining concise data. A national repository of bridge collapses at the federal level is paramount for effective bridge collapse risk analysis. Currently, bridge failure data are incomplete and insufficient to enable in-depth lifetime data analysis for improved bridge preservation. However, collapses occur frequently enough that large amounts of data could be collected in relatively few years.
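The headline figures above follow from combining a per-bridge annual collapse probability with the size of the inventory; a minimal sketch (the NBI inventory count is an assumed round figure):

```python
# Expected annual collapses = inventory size x per-bridge annual collapse probability.
N_BRIDGES = 607_000      # approximate NBI inventory size, assumed round figure
P_COLLAPSE = 1 / 4_700   # per-bridge annual collapse probability (from the study)

expected = N_BRIDGES * P_COLLAPSE
print(f"expected collapses per year: {expected:.0f}")  # ~129, near the study's 128

# Under the geometric model, probability a bridge survives a 75-year design life:
survival_75 = (1 - P_COLLAPSE) ** 75
print(f"75-year collapse-free probability: {survival_75:.3f}")
```

The study's wider 87-222 interval also folds in parameter uncertainty, which this point estimate ignores.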

Modelling and kinetics estimation in gibbsite precipitation from caustic aluminate solutions

Li, Tian Siong January 2000 (has links)
Precipitation of gibbsite from supersaturated caustic aluminate solutions has been investigated extensively due to its central role in the commercial Bayer process for extracting alumina from bauxite. The primary focus of Bayer process simulation and optimisation is to help maximise product recovery and to produce a crystal size distribution (CSD) that meets the product specification and improves downstream process performance. The product CSD is essentially determined by the nucleation, growth and agglomeration kinetics, which occur simultaneously during the precipitation process. These processes are still poorly understood, owing to the high complexity of their mechanisms and of the structure of caustic aluminate solutions. This research focuses on the modelling and kinetics-estimation aspects of simulating gibbsite precipitation. Population balance theory was used to derive different laboratory gibbsite precipitator models, and the discretised population balance models of Hounslow, Ryall & Marshall (1988) and Litster, Smit & Hounslow (1995) were employed to solve the resulting partial integro-differential equations. Gibbsite kinetics rates were determined from literature correlation models and also estimated from the CSD data using the so-called differential method. Modelling of non-stationary gibbsite precipitation systems showed that error propagated with the precipitation time scale. The main contribution to the observed error was found to be the uncertainties in the kinetic parameter estimates, which are estimated from experimental data and used in the simulation. This result showed that care is required when simulating the CSD of non-stationary precipitators over longer time scales, and that methods producing precise estimates of the kinetics rates from the experimental data need to be used.
A kinetics-estimation study using repeated batch gibbsite precipitation data showed that the uncertainty in the experimental data, coupled with the error incurred by the kinetic parameter estimation procedure used, resulted in large uncertainties in the kinetics estimates. The influences of the experimental design and the kinetics-estimation technique on the accuracy and precision of estimates of the nucleation, growth and agglomeration kinetics for the gibbsite precipitation system were investigated. It was found that the operating conditions have a greater impact on the uncertainties in the estimated kinetics than does the precipitator configuration. The kinetics estimates from the integral method, i.e. the non-linear parameter optimisation method, describe the gibbsite precipitation data better than those obtained by the differential method. However, both kinetics-estimation techniques incurred significant uncertainties in the kinetics estimates, particularly toward the end of the precipitation runs where the kinetics rates are slow. The uncertainties in the kinetics estimates are strongly correlated with the magnitude of the kinetics values and depend on the change in total crystal numbers and total crystal volume. Batch gibbsite precipitation data from an inhomogeneously-mixed precipitator were compared to those from a well-mixed precipitation system operated under the same conditions, i.e. supersaturation, seed charge, seed type, mean shear rate and temperature. It was found that the gibbsite agglomeration kinetics estimates, and hence the product CSD, were significantly different, but the gibbsite growth rates were similar. It was also found that a compartmental-model approach cannot fully account for the differences in suspension hydrodynamics, and it resulted in unsatisfactory CSD predictions for the inhomogeneously-mixed precipitator. This is attributed to the coupled effects of the local energy dissipation rate and solids-phase mixing on the agglomeration process.
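The population balance described above can be illustrated with a toy discretisation: a first-order upwind step for size-independent growth (this is a deliberately simplified scheme on a uniform grid, not the Hounslow et al. or Litster et al. geometric-grid discretisations used in the thesis):

```python
import numpy as np

def grow(n, G, dL, dt):
    """One explicit first-order upwind step of dn/dt + G * dn/dL = 0."""
    out = n.copy()
    out[1:] -= G * dt / dL * (n[1:] - n[:-1])
    out[0] -= G * dt / dL * n[0]   # no crystals grow in from below the smallest size
    return out

dL, dt, G = 1.0, 0.1, 1.0          # size step (um), time step (s), growth rate (um/s)
sizes = np.arange(100) * dL
n = np.exp(-0.5 * ((sizes - 20.0) / 3.0) ** 2)   # seed CSD peaked at 20 um
for _ in range(100):               # 10 s of growth shifts the peak by G*t = 10 um
    n = grow(n, G, dL, dt)
print(sizes[np.argmax(n)])         # peak near 30 um (upwind smearing aside)
```

The numerical smearing visible here is exactly why the thesis relies on the more accurate discretised schemes, and nucleation and agglomeration terms would enter the same balance as sources.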

Rational versus anchored traders: exchange rate behaviour in macro models

Marshall, Peter John, 1960- January 2001 (has links)
Abstract not available

Mobile data services adoption in New Zealand: future predictions

Cosgrove, Steve January 2007 (has links)
The fast pace of development in the mobile data services area means innovators have to remain vigilant to stay in the market; there is no time to undertake the usual market development cycles. As a consequence, researchers are looking at various ways to predict the adoption rate of a new product and to better forecast adoption in different niche contexts. Rogers (2003) provides a review of historical trends in innovation and diffusion studies, together with the foundational (1962) model he developed. In the context of the most recent literature, it is found that Rogers' generic model still works well, but variations built on his model need to be considered. In particular, the 'Chasm' model developed by Moore (1999) adapts Rogers's model to cope well with the 21st-century business environment. Gilbert (2005) has taken the work of both Rogers and Moore and applied it to research into adoption rates and characteristics in cross-cultural situations. In New Zealand, consumer behaviour during past introductions of new mobile services has shown a number of distinctive characteristics and specific problems. Vodafone New Zealand provides mobile services only and now claims 54% market share (Vodafone, 2005). An early success was significantly lowering the cost of sending text messages (SMS), followed by promotion of that service to the teenage market sector. In contrast to the popularity of SMS, introduction of the WAP mobile Internet protocol was not successful in New Zealand, as was the case elsewhere. The failure is commonly attributed to a lack of services being offered to use the technology. Near the end of 2004 Telecom New Zealand launched a new product, branded 'T3G', and Vodafone New Zealand released 'Vodafone 3G' during the middle of 2005. The technologies behind these products are generally called '3G Mobile', or Third Generation Mobile, technology. Operators in Singapore also have 3G networks, commissioned during 2004.
Authors such as Salz et al. (2004) find evidence to suggest that US network operators need to speed up the adoption of this technology to meet predicted demand. There are unique factors likely to affect adoption in the New Zealand market. The OECD has repeatedly found evidence that broadband Internet adoption in New Zealand is lower than in other countries, and indicates that pricing is one of the barriers to broadband adoption. The introduction of 3G technology provides another way to access broadband Internet, so telephone companies will have to consider pricing 3G so that it appeals as an alternative to a fixed Internet connection. The key question to be addressed in this research is: do the adoption intentions of New Zealanders match those of Malaysia and Singapore for expected data services use? A related question is: what other factors affect New Zealand's current relatively slow rate of adoption? Product positioning of mobile data products is going to become more critical, given that some telephone operators are 'expecting to get 25% of revenues from mobile data within five years' (Molony, 2001). This thesis will provide information to assist mobile service providers in predicting adoption rates of new services. It will also provide a comparative reference for researchers in other countries to replicate the study, and contribute to an exciting body of international literature. The New Zealand market is characterised by the generally high cost of broadband Internet (OECD, TUANZ, and others), proprietary knowledge capture, and regulation, but these issues do not preclude research into the intentions of potential adopters. This thesis will fill part of that research void by comparing emergent demand for mobile data with existing models that have previously been used to predict future demand. New Zealand has a reputation as an early adopter of new technologies (Min Economic Dev & others).
This thesis will contribute evidence to indicate how New Zealanders plan to adopt mobile data services, and how their adoption intentions compare with parallel studies in Singapore and other countries.
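Diffusion-of-innovation forecasts in the Rogers tradition are often operationalised with the Bass model; a minimal sketch, where the innovation (p) and imitation (q) coefficients are generic illustrative values, not New Zealand estimates:

```python
import math

def bass_cumulative(t, p=0.03, q=0.38, m=1.0):
    """Bass model cumulative adoption F(t); p = innovation, q = imitation, m = market."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Fraction of the market expected to have adopted after 1, 5 and 10 periods:
for t in (1, 5, 10):
    print(t, round(bass_cumulative(t), 3))
```

Fitting p and q to early sales data for a service such as 3G would give country-specific curves that could then be compared across New Zealand, Malaysia and Singapore.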

Exchange Rate Pass-Through in a Small Open Economy: the Case of Australian Export Prices

Swift, Robyn, n/a January 2001 (has links)
Expectations regarding the relationship between exchange rates and the prices of traded goods in small open economies have traditionally been derived from the idea of the relative unimportance of a single small country trading in much larger international markets. This concept has led to the use of distinct 'small-country' or 'dependent-economy' models to analyse the effects of macroeconomic changes. Thus, for small economies like Australia, it is usually assumed that the foreign-currency prices of traded goods are fixed in perfectly competitive international markets. Accordingly, exchange rate movements must be completely absorbed in domestic-currency prices. In other words, the pass-through of exchange rate changes to destination-currency prices must be zero for Australian exports, and complete for Australian imports. Such expectations regarding the degree of exchange rate pass-through contrast sharply with those found in conventional macroeconomic models for large countries, in which pass-through is assumed to be complete for all traded goods. Moreover, they conflict with the results derived from the large theoretical and empirical literature on the microeconomic determinants of pass-through, which suggests that much international trade takes place in imperfectly competitive markets, in which the degree of less-than-complete pass-through depends on industry-specific factors. This study explores these apparent conflicts by re-examining the small-country assumption, with particular emphasis on export prices as the area of greatest divergence. Specifically, it addresses three research questions: 1) What are the theoretical conditions that underlie the small-country assumption? 2) What are the implications for the macroeconomic models of small economies if this assumption is violated? 3) In practice, is the data more consistent with the validity or otherwise of the assumption?
The analysis focuses on Australia as a practical example of a small open economy with a high proportion of commodity exports. In summary, the theoretical and empirical results reported in this study suggest that the small-country assumption is unlikely to hold in practice. That is, exchange rate pass-through is more likely to be determined by industry-specific factors, rather than by the universal conclusion of zero pass-through for all Australian exports that is derived from the small-country assumption. Further, they imply that the movement in internal prices required to restore equilibrium in a small country following an external shock is likely to be both larger and more uncertain than has previously been expected. Under such circumstances, the full flexibility of the exchange rate, as the primary and most rapid source of the required adjustments, becomes particularly significant. An important policy implication for small open economies that are subject to frequent terms of trade shocks, such as Australia, is that attempts to manage the exchange rate in order to reduce apparently excessive movements may in fact result in a longer and more protracted process of adjustment through the labour market.
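Empirically, pass-through of the kind discussed is usually estimated as the slope in a log-log regression of the destination-currency export price on the exchange rate; a minimal sketch on synthetic data (the 0.6 elasticity is arbitrary and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
log_e = rng.normal(0.0, 0.10, 200)     # simulated log exchange-rate movements
TRUE_BETA = 0.6                        # assumed pass-through elasticity (illustrative)
log_p = TRUE_BETA * log_e + rng.normal(0.0, 0.01, 200)  # log destination-currency price

# OLS slope: 1 = complete pass-through, 0 = zero pass-through (small-country case)
X = np.column_stack([np.ones_like(log_e), log_e])
beta = np.linalg.lstsq(X, log_p, rcond=None)[0][1]
print(f"estimated pass-through elasticity: {beta:.2f}")
```

Estimates significantly different from zero for export prices, as in the synthetic case here, are the kind of evidence that would contradict the small-country assumption.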

Topics in human capital and taxation: effective tax rates on education, the heterogeneous human capital model and the impact of nominal rigidities in the tax system

Anderson, Glenn Michael, Economics, Australian School of Business, UNSW January 2007 (has links)
In this thesis I address several neglected issues relating to the theoretical and applied analysis of human capital and the impact of taxation. I begin with the problem of measuring the effective tax rate on human capital accumulation. I develop a forward-looking measure of the effective tax rate that is grounded in human capital theory, allowing for features that differentiate human capital formation from physical capital formation. These features include concavity of the earnings-investment frontier and adjustments in capital utilization through leisure. I argue that the few attempts that have been made to measure the effective tax rate on skill formation are either limited by the fact that they inherit assumptions applicable to the theory of the firm or have dubious theoretical foundations (Chapter Two). The new measure is used to derive the effective tax rate on human capital in 25 OECD countries, including Australia (Chapter Three). While there are numerous general equilibrium models that integrate nominal rigidities of one form or another, little attention has been devoted to nominal rigidities arising from partial indexation of income tax thresholds. No doubt one of the reasons for this gap in the literature is the difficulty associated with introducing a fully specified progressive tax regime into an applied general equilibrium model. I show that this hurdle can be overcome through a zero-profit condition for general equilibrium on the labour market. The condition is integrated into an aggregative model of the economy consisting of two sectors (consumption and education) and two factors of production (skilled and unskilled labour). Since skill formation is endogenous, the model allows us to reopen research into the optimal level of skill formation and the role of government (Chapter Four). An applied general equilibrium version of the model is used to evaluate the impact of recent tax reform proposals on skill formation (Chapter Five).
A concluding chapter draws together these lines of enquiry with suggestions for future research (Chapter Six).
