51

Evolutionary Tax Competition with Formulary Apportionment

Wagener, Andreas 10 1900 (has links) (PDF)
Evolutionary stability is a necessary condition for imitative dynamics of policy learning and innovation to come to rest. We apply this concept to profit tax competition in a regime where a common and consolidated profit tax base for multi-jurisdictional firms is divided among governments by means of formulary apportionment. In evolutionary play, governments exhibit aggregate-taking behavior: when comparing their performance with others, they ignore their impact on the consolidated tax base. Consequently, evolutionarily stable tax rates are less efficient than tax rates in best-response tax competition. / Series: WU International Taxation Research Paper Series
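To fix ideas, here is a minimal sketch of the formulary-apportionment mechanics described above; the jurisdictions, factor weights, and figures are invented for illustration and are not taken from the paper:

```python
# Minimal sketch of formulary apportionment: a consolidated profit tax
# base is divided among jurisdictions by formula weights, then each
# jurisdiction taxes its share at its own rate. Weights and figures are
# illustrative only.

def apportion(base, factors, weights):
    """Split a consolidated tax base across jurisdictions.

    factors: {jurisdiction: {factor_name: value}}
    weights: {factor_name: weight}, weights summing to 1.
    """
    totals = {f: sum(j[f] for j in factors.values()) for f in weights}
    return {
        name: base * sum(w * j[f] / totals[f] for f, w in weights.items())
        for name, j in factors.items()
    }

# Two jurisdictions, an equal-weight sales/payroll formula (hypothetical):
factors = {
    "A": {"sales": 60.0, "payroll": 40.0},
    "B": {"sales": 40.0, "payroll": 60.0},
}
weights = {"sales": 0.5, "payroll": 0.5}
shares = apportion(100.0, factors, weights)   # {'A': 50.0, 'B': 50.0}
rates = {"A": 0.25, "B": 0.30}
revenue = {k: rates[k] * shares[k] for k in shares}
```

Aggregate-taking behavior corresponds to each government treating the consolidated base (`base` above) as fixed when it varies its own rate, ignoring how the rate feeds back into the base.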
52

The EU CCCTB proposal. A critical appraisal.

Zagler, Martin January 2009 (has links) (PDF)
With the ambition to reduce compliance costs for multinational enterprises within the European Union, but also in order to reduce the erosion of the tax base through transfer pricing and harmful tax competition among member states, the European Commission has promised to deliver a proposal for a Common Consolidated Corporate Tax Base (CCCTB) by the end of 2008. A vast literature has since emerged on the advantages and disadvantages of a move towards formulary apportionment (CCCTB). Whilst no official proposal has yet been submitted by the European Union, several documents have since been released. It is the novel contribution of this paper to critically evaluate the proposal itself. We argue that the formula is overly complex and should be simplified to source- and destination-based revenue weights only. (author's abstract) / Series: Discussion Papers SFB International Tax Coordination
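For orientation, the kind of formula at issue can be written schematically. The following is a hedged reconstruction with symbols of our own choosing, not the Commission's: a generic three-factor CCCTB-style share, followed by the two-weight simplification argued for above.

```latex
% Generic three-factor share of the consolidated base B for member state i
% (schematic; weights and symbols are illustrative):
\[
  \mathrm{share}_i = \left(
      \alpha \frac{S_i}{\sum_j S_j}
    + \beta  \frac{P_i}{\sum_j P_j}
    + \gamma \frac{A_i}{\sum_j A_j}
  \right) B, \qquad \alpha + \beta + \gamma = 1,
\]
% where S, P, A denote sales, payroll, and assets. The simplification
% argued for keeps only source- and destination-based revenue weights:
\[
  \mathrm{share}_i = \left(
      \lambda \frac{R^{\mathrm{src}}_i}{\sum_j R^{\mathrm{src}}_j}
    + (1-\lambda) \frac{R^{\mathrm{dst}}_i}{\sum_j R^{\mathrm{dst}}_j}
  \right) B.
\]
```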
53

Proposed redistribution of provincial electoral districts on the basis of nodal regions

Chalk, John Robert January 1966 (has links)
Provincial electoral districts were first created in British Columbia in 1869. At that time the criteria used to determine the ridings on the mainland were the existing mining division boundaries and, on Vancouver Island, the land district boundaries. Since 1869 many different sets of constituency boundaries have been used in the province. At all times the government has attempted to give the more settled areas the greatest number of electoral seats and yet provide each region of the province with legislative representation. Since electoral ridings were initiated, however, there has not been a stated policy by which the legislature has determined new constituency boundaries. In certain instances areal size has been the determining factor in delineation, whereas in other cases electoral numbers were used. In 1965 the ratio of voting numbers between the largest constituency and the smallest was in excess of twenty-five to one. It was therefore believed that a major revision of British Columbia's electoral boundaries was due. There are three major methods by which new political boundaries may be determined: representation by population, by area, and by community of interest. Each method has certain qualities and liabilities. Representation by population is considered the best method of boundary delineation because the votes of all persons are then of equal weight. Since British Columbia contains such an uneven population distribution, however, many constituencies created by employing this principle would be too large in area to be served effectively by one representative, while many urban constituencies would be extremely small. The thesis therefore concluded that this method of boundary determination was not suitable for British Columbia. Representation by area was not considered practical, for many ridings would contain only a few hundred voters while others would contain over one hundred thousand. Representation by community of interest therefore appeared to be the best method of determining legislative constituency boundaries. In this system the under-populated areas of the province would have few electoral representatives. Using this method of delineation, each riding would contain persons affected by similar problems and sharing common interests. Community-of-interest regions were determined by isolating all territory primarily dependent upon a central settlement. Throughout British Columbia large settlements exist which serve the economic and social needs of the surrounding urban and rural population. The thesis recommended that such regions would make good provincial constituencies, since the rural and urban areas would have equal interest in both local affairs and development. To determine the sphere of influence surrounding each large settlement, an examination of the services provided by communities of various sizes was undertaken to establish which services were offered only by the larger nucleations. As this method of analysis was not applicable in the Lower Mainland area, a study of shopping patterns and community activities was used there as the basis for boundary determination. Each of these areas of common interest became the basis for the recommended urban constituencies. As a potential political instrument, the value of a new set of electoral boundaries lies in the result which its employment would achieve.
Applying the 1963 provincial election statistics to the proposed constituencies would have changed the political party representation in the legislature very little. Therefore more equitable districts could be adopted without a shift in political party strength. / Arts, Faculty of / Geography, Department of / Graduate
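For illustration, one classical way to delimit such spheres of influence is a simple gravity rule of the Reilly/Huff type; this is an adjacent technique rather than the service-inventory method the thesis used, and every name, coordinate, and population below is invented:

```python
# Hedged sketch: delimiting settlement spheres of influence with a simple
# gravity rule: each locality is assigned to the central settlement with
# the largest size/distance^2 attraction. All data are hypothetical.
import math

centres = {  # central settlements: (x, y, population)
    "Kamloops": (0.0, 0.0, 10_000),
    "Kelowna": (120.0, -60.0, 20_000),
}
localities = {  # surrounding communities: (x, y)
    "Merritt": (30.0, -40.0),
    "Vernon": (100.0, -30.0),
}

def attraction(pop, cx, cy, x, y):
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    return pop / d2 if d2 > 0 else math.inf

regions = {
    name: max(centres, key=lambda c: attraction(centres[c][2],
                                                centres[c][0], centres[c][1],
                                                x, y))
    for name, (x, y) in localities.items()
}
print(regions)  # {'Merritt': 'Kamloops', 'Vernon': 'Kelowna'}
```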
54

Étude de la composition isotopique moléculaire (δ13C) comme traceur de source qualitatif et quantitatif des hydrocarbures aromatiques polycycliques (HAP) particulaires dans l'atmosphère / Study of molecular isotopic composition (δ13C) as qualitative and quantitative source tracer for particulate polycyclic aromatic hydrocarbons (PAHs) in the atmosphere

Guillon, Amélie 16 December 2011 (has links)
Polycyclic aromatic hydrocarbons (PAHs) are organic compounds present in all environmental compartments. In the atmosphere, their sources are both natural (biomass fires, volcanic eruptions) and anthropogenic (industry, transport, residential heating). Once emitted, in the gas phase or adsorbed on the surface of atmospheric particles, PAHs may be involved in physico-chemical processes such as photodegradation and/or oxidation reactions with various radical species. Because of their proven toxicity, these compounds are subject to various French and European regulations and legislation. For the atmospheric compartment, only benzo(a)pyrene currently has emission thresholds to be met. To inform the evolution of these texts and to put emission-reduction measures in place, various approaches have been developed to differentiate PAH sources in the atmosphere. The molecular approach, based on molecular profiles and concentration ratios, provides information about their origins; it suffers, however, from biases induced by the conditions under which PAHs form (temperature, environmental conditions, etc.) and by the physico-chemical processes in which they are involved. The main objective of this work is to establish a methodology for tracing the sources of particulate PAHs through an isotopic approach. An analytical protocol was developed to determine the compound-specific isotopic composition of particulate PAHs by GC/C/IRMS. It was shown that the reactivity of PAHs under the action of oxidants (O3, NO2, OH) and/or sunlight does not induce significant variation in their compound-specific isotopic composition. The methodology could therefore be applied to natural samples collected at sites characterised by specific sources. The 13C/12C ratios of PAHs, complementing molecular data, were shown to differentiate the origins of these compounds. For example, the molecular and isotopic characteristics of PAHs from the combustion of several Mediterranean wood species were determined by applying the methodology to samples collected directly at the emission source. Finally, as part of a study of pollution and its impacts in the Arcachon Bay, atmospheric PAH inputs were measured using the molecular approach coupled with other tools (back-trajectories, oxidants, wind roses, etc.) to complete the environmental diagnosis. / Polycyclic aromatic hydrocarbons (PAHs) are carcinogenic compounds present in all compartments of the environment. In the atmosphere, their sources are both natural (biomass burning, volcanic emissions...) and anthropogenic (transport, industry, residential heating...). Once emitted into the atmosphere, PAHs are distributed between the gaseous and particulate phases and may be involved in different physico-chemical processes such as photodegradation and radical-initiated oxidation. Due to their carcinogenicity, PAH emissions are nowadays subject to various regulations in France and, more broadly, the European Union. In the atmosphere, benzo(a)pyrene has been selected as representative of the PAHs because of its high toxicity.
In order to improve regulations involving emission reductions, several methodologies have been developed to perform source apportionment. The most commonly used in the literature is the molecular approach, based on molecular profiles and specific concentration ratios. Nevertheless, conditions of PAH formation and physico-chemical processes affect these characteristic values. The main objective of this work was to develop a new methodology of particulate-PAH source tracking based on the molecular isotopic composition. An analytical procedure was developed to determine the 13C/12C ratios of PAHs by GC/C/IRMS. The study of the impact of PAH reactivity in the presence of O3, NO2, OH and/or solar radiation shows that no significant isotopic fractionation is induced in their isotopic compositions. The molecular isotopic approach was applied to natural particles collected at different specific sites: the 13C/12C ratios of PAHs, together with molecular data, allow particulate-PAH sources to be differentiated. Accordingly, the molecular and isotopic characteristics of particulate PAHs emitted during the combustion of fifteen Mediterranean woods were determined by applying this methodology. Finally, the molecular approach coupled with different parameters (back-trajectories, oxidant concentrations, wind roses...) made it possible to measure PAH concentration levels in the atmosphere and to evaluate their impacts as a source of pollution in the Arcachon Bay.
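To make the isotopic source-tracing step concrete, here is a minimal two-source mass-balance sketch; the δ13C end-member and ambient values are hypothetical, not measurements from the thesis:

```python
# Sketch of a two-source isotopic mass balance, the kind of mixing model
# that per-compound delta13C values enable once reactivity is shown not
# to fractionate them. All delta values below are hypothetical.

def source_fraction(delta_mix, delta_a, delta_b):
    """Fraction of source A in a two-source mixture:
    delta_mix = f * delta_a + (1 - f) * delta_b  =>  solve for f."""
    if delta_a == delta_b:
        raise ValueError("end-members must be isotopically distinct")
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Hypothetical per-mil delta13C values for one PAH:
wood_burning = -31.0      # end-member A
vehicle_exhaust = -24.0   # end-member B
ambient = -26.5           # measured in ambient particles

f_wood = source_fraction(ambient, wood_burning, vehicle_exhaust)
print(f"wood-burning share: {f_wood:.0%}")  # ~36%
```

With more than two sources, the same closed form generalizes to a linear system that is solved jointly with molecular data or by Bayesian methods.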
55

The applicability of the Apportionment of Damages Act 34 of 1956 to contractual claims with emphasis on the development of apportionment laws in South Africa and similar foreign jurisdictions

Grimbeek, Mathew 25 July 2013 (has links)
This study will follow the development of the rules pertaining to apportionment of damages, with particular emphasis on the Apportionment of Damages Act 34 of 1956 (“the Act”) and its applicability to contractual claims. It furthermore delves into the current legal position in England, Australia and New Zealand. In Thoroughbred Breeders Association v Price Waterhouse 1999 (4) SA 968 (W), the Court decided that the Act was applicable to contractual claims and apportioned the damages payable by the defendant to the plaintiff. However, the matter was taken on appeal and the decision of the court a quo was overturned. It will be argued that, although the reasoning at first glance seems sound, upon closer examination the application of the Act need not be limited solely to delictual claims. The best manner in which to remedy this lacuna in our law is an amendment to sections 1(1) and 1(3) of the Act, explicitly extending their application to contractual claims. / Dissertation (LLM)--University of Pretoria, 2012. / Private Law / unrestricted
56

PM2.5 Source Apportionment for Cincinnati, OH Using the Chemical Mass Balance with Gas Constraints (CMB-GC) Model

Jathan, Yajna January 2020 (has links)
No description available.
57

Source Apportionment of Wastewater Using Bayesian Analysis of Fluorescence Spectroscopy

Blake, Daniel B. 10 July 2014 (has links) (PDF)
This research uses Bayesian analysis of fluorescence spectroscopy results to determine if wastewater from the Heber Valley Special Service District (HVSSD) lagoons in Midway, UT has seeped into the adjacent Provo River. This flow cannot be directly measured, but it is possible to use fluorescence spectroscopy to determine if there is seepage into the river. Fluorescence spectroscopy results of water samples obtained from HVSSD lagoons and from upstream and downstream in the Provo River were used to conduct this statistical analysis. The fluorescence 'fingerprints' for the upstream and lagoon samples were used to deconvolute the two sources in a downstream sample, in a manner similar to the tools and methods discussed in the literature and used for source apportionment of air pollutants. The Bayesian statistical method employed presents a novel way of conducting source apportionment and identifying the existence of pollution. This research demonstrates that coupling fluorescence spectroscopy with Bayesian statistical methods allows researchers to determine the degree to which a water source has been contaminated by a pollution source. This research has applications in determining the effect that sanitary wastewater lagoons and other lagoons have on an adjacent river due to groundwater seepage. The method used can be applied in scenarios where direct collection of hydrogeologic data is not possible. This research demonstrates that the Bayesian chemical mass balance model presented is a viable method of performing source apportionment.
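A minimal sketch of the unmixing idea, assuming a Gaussian noise model and a uniform prior on the lagoon fraction; the fingerprints, noise level, and mixing fraction below are synthetic, and the thesis's actual likelihood and priors may differ:

```python
# Sketch of Bayesian two-source unmixing: treat the downstream fluorescence
# fingerprint as a mixture of the upstream and lagoon fingerprints plus
# Gaussian noise, and compute a grid posterior over the lagoon fraction f.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # excitation/emission bins (synthetic)
upstream = rng.random(n)                 # river fingerprint upstream
lagoon = rng.random(n)                   # lagoon fingerprint
true_f, sigma = 0.2, 0.05
downstream = true_f * lagoon + (1 - true_f) * upstream + rng.normal(0, sigma, n)

f_grid = np.linspace(0, 1, 1001)         # uniform prior on [0, 1]
resid = downstream - (np.outer(f_grid, lagoon)
                      + np.outer(1 - f_grid, upstream))
log_like = -0.5 * np.sum(resid**2, axis=1) / sigma**2
post = np.exp(log_like - log_like.max())
post /= post.sum()

f_mean = np.sum(f_grid * post)
print(f"posterior mean lagoon fraction: {f_mean:.3f}")  # ~0.2
```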
58

Characterizing Spatiotemporal Variation of Trace Pollutants in Surface Water and Their Driving Forces

Wang, Zhenyu 26 March 2024 (has links)
The expanding urbanisation, growing population, and industrial development are threatening global surface water quality. With increasing concern about surface-water quality, it is crucial to deeply understand the evolution of surface-water quality problems and comprehensively determine its fundamental driving forces. In this Dissertation, systematic work on the mechanisms of water pollution with trace elements has been carried out in three steps: i) to identify the sources contributing to surface water pollution by receptor-based models, ii) to determine the factors dominating the pollution risk transmission from sources to surface water by a source-based model, and iii) to capture the primary driving forces to the spatiotemporal variation in surface water pollution by Bayesian-based approaches. The following specific topics were addressed based on five publications: a) The temporal trends of trace metal pollution in the surface water were characterised by the Mann-Kendall test and the Generalised Additive Model. b) The primary source contributors to the long-term trace metal pollution in a river system were determined by the Self-organised Map, Positive Matrix Factorization receptor model, and Bayesian multivariate receptor model. The distributions of the source contributions to trace metal pollution were estimated. c) The risk transmission of trace pollutants in the surface water was estimated by a source-based dynamic model. The sensitivities of the risk to human activities, characteristics of wastewater treatment plants, and river flow regimes were evaluated. d) The contributions of hydrochemical factors, climate impact, and sampling methods to water pollution and data uncertainty were analysed by the Wavelet Analysis and Bayesian Network. Both the models' accuracy and robustness were evaluated by statistical analysis. The methods and results provided herein could improve the standard of statistical rigour and support the authorities' decision-making.
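As an illustration of the trend testing in step a), here is a minimal Mann-Kendall test under the normal approximation; the series is hypothetical, and the tie-free variance formula is used (tied values would need a correction term):

```python
# Sketch of the Mann-Kendall trend test: a non-parametric test for a
# monotonic trend in a time series. Tie-free variance; normal approximation.
import math

def mann_kendall(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])        # sign of each pairwise step
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)           # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))         # two-sided p-value
    return s, z, p

# Hypothetical annual mean concentrations of one trace metal:
series = [4.1, 4.3, 4.0, 4.6, 4.8, 4.7, 5.1, 5.0, 5.4, 5.6]
s, z, p = mann_kendall(series)
print(f"S={s}, Z={z:.2f}, p={p:.4f}")            # a clear upward trend
```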
59

Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations

Cheng, Haiyan 03 August 2009 (has links)
Modeling and simulations of large-scale systems are used extensively to not only better understand a natural phenomenon, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions, and generates one result without considering uncertainties. It is of great interest to include uncertainty information in the simulation. By “Uncertainty Quantification,” we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist the policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations. This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost for the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III) predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact for environmental protection policy making. “Uncertainty Reduction” describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality. / Ph. D.
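To make the non-sampling PC idea concrete, here is a minimal collocation least-squares sketch for a scalar toy model with a single standard-normal input; the model g and all settings are ours, not the dissertation's STEM-III configuration:

```python
# Sketch of polynomial chaos with a collocation least-squares fit: expand
# y = g(xi), xi ~ N(0,1), in probabilists' Hermite polynomials and read
# the mean/variance directly off the fitted coefficients.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def g(xi):                              # toy model with one uncertain input
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 6
rng = np.random.default_rng(0)
pts = rng.standard_normal(300)          # collocation points from the input law
A = hermevander(pts, order)             # He_0..He_6 evaluated at the points
coeffs, *_ = np.linalg.lstsq(A, g(pts), rcond=None)

# Orthogonality under N(0,1): E[He_j He_k] = k! * delta_jk.
norms = np.array([factorial(k) for k in range(order + 1)], dtype=float)
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * norms[1:])

xi = rng.standard_normal(200_000)       # Monte Carlo cross-check only
print(mean, g(xi).mean())               # both ~1.15
print(var, g(xi).var())
```

The moments come from the coefficients by orthogonality, with no sampling of the model itself; the Monte Carlo draw at the end is only a cross-check.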
60

Air Quality in Mexico City: Spatial and Temporal Variations of Particulate Polycyclic Aromatic Hydrocarbons and Source Apportionment of Gasoline-Versus-Diesel Vehicle Emissions

Thornhill, Dwight Anthony Corey 21 August 2007 (has links)
The Mexico City Metropolitan Area (MCMA) is one of the largest cities in the world, and as with many megacities worldwide, it experiences serious air quality and pollution problems, especially with ozone and particulate matter. Ozone levels exceed the health-based standard, which is equivalent to the U.S. standard, on approximately 80% of all days, and concentrations of particulate matter 10 μm and smaller (PM10) exceed the standard on more than 40% of all days in most years. Particulate polycyclic aromatic hydrocarbons (PAHs) are a class of semi-volatile compounds that are formed during combustion and many of these compounds are known or suspected carcinogens. Recent studies on PAHs in Mexico City indicate that very high concentrations have been observed there and may pose a serious health hazard. The first part of this thesis describes results from the Megacities Initiative: Local and Regional Observations (MILAGRO) study in Mexico City in March 2006. During this field campaign, we measured PAH and aerosol active surface area (AS) concentrations at six different locations throughout the city using the Aerodyne Mobile Laboratory (AML). The different sites encompassed a mix of residential, commercial, industrial, and undeveloped land use. The goals of this research were to describe spatial and temporal patterns in PAH and AS concentrations, to gain insight into sources of PAHs, and to quantify the relationships between PAHs and other pollutants. We observed that the highest measurements were generally found at sites with dense traffic networks. Also, PAH concentrations varied considerably in space. An important implication of this result is that for risk assessment studies, a single monitoring site will not adequately represent an individual's exposure. Source identification and apportionment are essential for developing effective control strategies to improve air quality and therefore reduce the health impacts associated with fine particulate matter and PAHs. However, very few studies have separated gasoline- versus diesel-powered vehicle emissions under a variety of on-road driving conditions. The second part of this thesis focuses on distinguishing between the two types of engine emissions within the MCMA using positive matrix factorization (PMF) receptor modeling. The Aerodyne Mobile Laboratory was driven throughout the MCMA in March 2006 and measured on-road concentrations of a large suite of gaseous and particulate pollutants, including carbon dioxide, carbon monoxide (CO), nitric oxide (NO), benzene (C6H6), formaldehyde (HCHO), ammonia (NH3), fine particulate matter (PM2.5), PAHs, and black carbon (BC). These pollutant species served as the input data for the receptor model. Fuel-based emission factors and annual emissions within Mexico City were then calculated from the source profiles of the PMF model and fuel sales data. We found that gasoline-powered vehicles were responsible for 90% of mobile source CO emissions and 85% of volatile organic compound (VOC) emissions, while diesel-powered vehicles accounted for almost all of NO emissions (99.98%). Furthermore, the annual emissions estimates for CO and VOC were lower than estimated during the MCMA-2003 field campaign. The number of megacities is expected to grow dramatically in the coming decades. As one of the world's largest megacities, Mexico City serves as a model for studying air quality problems in highly populated, extremely polluted environments.
The results of this work can be used by policy makers to improve air quality and reduce related health risks in Mexico City and other megacities. / Master of Science
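A hedged sketch of the factorization step: plain non-negative matrix factorization stands in for PMF here (real PMF additionally weights every observation by its measurement uncertainty), and the source profiles and data below are synthetic, not the study's:

```python
# Sketch of the receptor-modeling step: factor an (observations x species)
# matrix of on-road concentrations into two non-negative source profiles.
# NMF is an unweighted stand-in for PMF; all data are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
species = ["CO", "NO", "C6H6", "HCHO", "NH3", "BC"]
gasoline = np.array([9.0, 0.1, 0.8, 0.3, 0.5, 0.2])   # hypothetical profiles
diesel = np.array([1.0, 2.5, 0.1, 0.4, 0.1, 1.5])
mix = rng.random((200, 2))                            # per-sample strengths
X = mix @ np.vstack([gasoline, diesel])
X += rng.normal(0, 0.01, X.shape).clip(0)             # small non-negative noise

model = NMF(n_components=2, init="nndsvd", max_iter=1000, random_state=0)
G = model.fit_transform(X)        # source contributions per sample
F = model.components_             # recovered source profiles

for row in F:
    print(dict(zip(species, row.round(2))))
```

Scaling the recovered profiles by a CO2-anchored carbon balance and fuel sales data is the kind of step that turns such factor profiles into fuel-based emission factors.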
