871

A pre-implementation analysis of the new South African withholding tax on interest / Bhavesh Shashikant Govan

Govan, Bhavesh Shashikant January 2014
South Africa is in need of foreign direct investment (FDI) to increase economic growth and alleviate unemployment and poverty. To succeed in obtaining this FDI, South Africa must compete with the rest of the world for the available FDI. The global economic outlook is currently still uncertain: the growth of advanced economies is slowing while Asia and Sub-Saharan Africa continue to grow at a steady pace. South Africa, as part of Sub-Saharan Africa, should take advantage of this growth on the African continent as well as internationally. Although studies have been performed to ascertain the tax policies of countries, the role of taxation applied by countries and the effects of taxation on FDI, there have been few studies of tax policies specifically in respect of withholding taxes on interest. The new South African withholding tax on interest, applicable to South African source interest payments to non-residents, has been proposed for inclusion as sections 49A to 49H of the Income Tax Act (58 of 1962) and will become effective from 1 January 2015. These sections have been introduced to align the said withholding tax, together with the section 10(1)(h) interest exemption applicable to normal income tax in respect of non-residents, with the withholding taxes on interest and interest exemptions applied globally. Attention should be focused on whether this global alignment will be achieved with the introduction of the legislation, as South Africa had previously applied similar legislation, called non-residents' tax on interest (NRTI), which appeared to be unsuccessful. Determining whether the new legislation is aligned with global practice provides useful insight into whether it will promote, hinder or have no effect on FDI in South Africa, while at the same time not eroding the tax base with overly generous exemptions. This study reviews and compares the withholding taxes on interest implemented in a selection of countries, namely the developing countries Brazil, Russia, India, China, Mozambique and Namibia and the developed countries Germany and Denmark. Other determinants that also affect the comparison of these withholding taxes, for example normal and withholding tax interest exemptions and repo rates, have been incorporated into the comparative study. Based on the literature reviewed and the comparative analysis, the study concludes that the South African withholding tax on interest is effectively designed to keep attracting foreign lending in order to remain competitive in international markets. It is further shown that the South African legislation in respect of the section 10(1)(h) blanket interest exemption is aligned with global practice. / MCom (South African and International Taxation), North-West University, Potchefstroom Campus, 2014
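As a minimal sketch of the withholding mechanics discussed above, and assuming the 15% statutory rate that sections 49A to 49H introduce, the computation for a single South African source interest payment to a non-resident might look as follows; the treaty-relief parameter and exemption flag are illustrative assumptions, not the study's methodology.

```python
def withholding_on_interest(gross_interest, statutory_rate=0.15,
                            treaty_rate=None, exempt=False):
    """Tax withheld on a South African source interest payment to a
    non-resident (illustrative; relief parameters are assumptions)."""
    if exempt:  # e.g. an interest payment covered by a statutory exemption
        return 0.0
    # a double-tax treaty may cap the rate below the statutory 15%
    rate = statutory_rate if treaty_rate is None else min(statutory_rate, treaty_rate)
    return gross_interest * rate

# Example: R1,000,000 of interest with a hypothetical 10% treaty cap
tax = withholding_on_interest(1_000_000.0, treaty_rate=0.10)  # 100000.0
```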
872

Characterization of discontinuities in massive concrete structures by electrical resistivity logging

Taillet, Elodie January 2014
The aging of concrete structures is a major concern affecting their durability and efficiency. The owner must maintain the serviceability of the structure while keeping its management cost-effective. The goal of this work is to provide information on the overall state of cracking of a structure in order to help the owner meet these commitments. In this context, this thesis develops a new technique for assessing the condition of massive concrete structures. It relies on the non-destructive method of surface electrical resistivity, known for its sensitivity to factors indicative of deterioration. However, because of the trade-off between investigation depth and resolution, the method cannot assess the overall state of a structure. It was therefore decided to measure electrical resistivity through pre-existing boreholes in the structure (electrical logging). The tool used is a normal probe, until now reserved for oil and hydrogeological exploration. In addition to probing at depth along the borehole, this probe can acquire information over a radius of 3.2 m around the borehole. However, as the probed volume of concrete increases, the resolution decreases. The difficulty is thus to exploit the probing capability of the tool while knowing that its resolution is limited; the problem is circumvented by mastering the concepts of logging and its new application environment. The thesis is based on a first numerical approach that makes it possible to correct field data and to determine the sensitivity of the tool to damage with apertures from a few millimetres to a centimetre. This is validated by measurements on a full-size lock of the St. Lawrence Seaway. A numerical study of the tool response as a function of crack parameters, namely the aperture, the resistivity contrast between the discontinuity and the concrete, and the extension, is then carried out. It allows a database to be built in order to develop a method for characterizing the damage. This method relies on the logging responses to recover the sought crack parameters (an inverse problem). We first proceed with a preliminary analysis that cross-references the information provided by the different electrodes of the probe, and then optimize the results with the simulated annealing method. The method thus developed is then applied to a second structure to determine its internal state. This work detects several damaged zones and characterizes one of them by a centimetric aperture and an extension of between 1.6 m and 3.2 m. These promising results constitute a first internal diagnosis of massive concrete structures, an issue that until now lacked a satisfactory answer.
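The inverse step described above, recovering crack parameters from logging responses by simulated annealing, can be sketched as follows. This is a minimal illustration: `forward_model` is a hypothetical stand-in for the thesis's numerically built database of tool responses, and all parameter scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(params, depths):
    # Hypothetical stand-in: maps crack parameters (aperture [m],
    # resistivity contrast [ohm.m], extension [m]) to apparent
    # resistivity along the borehole; sound concrete ~ 1000 ohm.m.
    aperture, contrast, extension = params
    anomaly = contrast * aperture * np.exp(-(depths - 5.0) ** 2 / (2 * extension ** 2))
    return 1000.0 - anomaly

def misfit(params, depths, observed):
    return np.sqrt(np.mean((forward_model(params, depths) - observed) ** 2))

def simulated_annealing(observed, depths, x0, bounds, n_iter=5000, t0=10.0):
    x = np.array(x0, dtype=float)
    fx = misfit(x, depths, observed)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter)  # linear cooling schedule
        cand = np.clip(x + rng.normal(scale=0.05 * (bounds[:, 1] - bounds[:, 0])),
                       bounds[:, 0], bounds[:, 1])
        fc = misfit(cand, depths, observed)
        # always accept improvements; accept degradations with Boltzmann probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(t, 1e-9)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# Synthetic example: recover (aperture, contrast, extension) from noiseless data
depths = np.linspace(0.0, 10.0, 201)
observed = forward_model((0.01, 5.0e4, 2.0), depths)
bounds = np.array([[1e-3, 5e-2], [1e3, 1e5], [0.5, 3.2]])
best, err = simulated_annealing(observed, depths, (2e-2, 1e4, 1.0), bounds)
```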
873

Bucket-soil interaction for wheel loaders: An application of the Discrete Element Method

Henriksson, Felix, Minta, Joanna January 2016
Wheel loaders are fundamental pieces of construction equipment for handling bulk material, e.g. gravel and stones. During digging operations, a wheel loader withstands forces that are both large and very complicated to predict. Moreover, developing wheel loader prototypes for verification is very expensive. Consequently, the Discrete Element Method (DEM) was introduced for gravel modeling a couple of years ago to enable prediction of these forces. The gravel model is connected to a Multibody System (MBS) model of the wheel loader, in this thesis a Volvo L180G. The co-simulation of these two systems is very computationally intensive, and it is therefore important to investigate which parameters have the largest influence on the simulation results. The aim of this thesis is to investigate the simulation sensitivity with respect to the co-simulation communication interval, the collision detection interval and the gravel normal stiffness. The simulation results are verified by comparison with measurement data from previous tests performed by Volvo CE, and the simulations are compared to investigate the relevant parameters. The conclusion of this thesis is that DEM can predict the draft forces during digging operations very well.
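As a minimal sketch of the DEM ingredient the sensitivity study varies, the snippet below shows a linear spring-dashpot normal contact law, a common DEM choice; the thesis's exact contact model and parameter values are not given here, so `k_n`, `c_n` and the stability heuristic are illustrative assumptions.

```python
import numpy as np

def normal_contact_force(overlap, overlap_rate, k_n=1.0e6, c_n=5.0e2):
    """Linear spring-dashpot normal contact between two particles.
    overlap: penetration depth [m]; overlap_rate: its time derivative [m/s]."""
    if overlap <= 0.0:
        return 0.0  # particles are not in contact
    return k_n * overlap + c_n * overlap_rate

def stable_time_step(mass, k_n, safety=0.1):
    """Heuristic explicit-integration step: a fraction of the contact
    oscillation period 2*pi*sqrt(m/k)."""
    return safety * 2.0 * np.pi * np.sqrt(mass / k_n)

print(stable_time_step(mass=0.05, k_n=1.0e6))  # ~1.4e-4 s for a 50 g particle
```

The snippet also hints at why normal stiffness dominates simulation cost: a stiffer gravel model forces a smaller stable time step, and the co-simulation communication interval must then be chosen coarse enough to be affordable yet fine enough not to miss fast force transients exchanged between the DEM and MBS sides.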
874

Calibration and Model Risk in the Pricing of Exotic Options Under Pure-Jump Lévy Dynamics

Mboussa Anga, Gael December 2015
Thesis (MSc)--Stellenbosch University, 2015 / The growing interest in calibration and model risk is a fairly recent development in financial mathematics. This thesis focuses on these issues, particularly in relation to the pricing of vanilla and exotic options, and compares the performance of several Lévy models. A new method to measure model risk is also proposed (Chapter 6). We first calibrate several Lévy models to the log-returns of the S&P500 index. Statistical tests and graphical representations both show that pure-jump models (VG, NIG and CGMY) describe the distribution of the returns better than the Black-Scholes model. We then calibrate these four models to S&P500 index option data, and also to "CGMY-world" data (a simulated world described by the CGMY model), using the root mean square error; the CGMY model outperforms the VG, NIG and Black-Scholes models. We also observe a slight difference between the newly calibrated parameters of the CGMY model and its original parameters, despite the fact that the model is calibrated to the "CGMY-world" data. Barrier and lookback options are then priced using the calibrated parameters of our models. These prices are compared with the "true" prices (computed with the true parameters of the "CGMY world"), and a significant difference between the model prices and the true prices is observed. We end with an attempt to quantify this model risk.
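The calibration criterion mentioned above, fitting parameters by minimizing the root mean square error between model and market option prices, can be sketched as follows; `model_price` is a placeholder for any Lévy option pricer (for example a Carr-Madan FFT pricer under CGMY), and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def rmse(params, strikes, maturities, market_prices, model_price):
    """Root mean square error between model and market option prices."""
    model = np.array([model_price(params, K, T)
                      for K, T in zip(strikes, maturities)])
    return np.sqrt(np.mean((model - market_prices) ** 2))

def calibrate(x0, bounds, strikes, maturities, market_prices, model_price):
    """Minimize the RMSE over the model parameters (e.g. C, G, M, Y)."""
    res = minimize(rmse, x0,
                   args=(strikes, maturities, market_prices, model_price),
                   method="L-BFGS-B", bounds=bounds)
    return res.x, res.fun
```

The same objective is what makes the "CGMY-world" experiment meaningful: calibrating to prices simulated from known CGMY parameters lets the recovered parameters, and the exotic-option prices computed from them, be compared against a known ground truth.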
875

Mechanical models of proteins

Soheilifard, Reza 28 October 2014
In general, this dissertation is concerned with modeling the mechanical behavior of protein molecules. In particular, we focus on coarse-grained models, which bridge the gap in time and length scales between atomistic simulation and biological processes. The dissertation presents three independent studies involving such models. The first study is concerned with a rigorous coarse-graining method for the dynamics of linear systems. In this method, as usual, the conformational space of the original atomistic system is divided into master and slave degrees of freedom. Under the assumption that the characteristic timescales of the masters are slower than those of the slaves, the method results in Langevin-type equations of motion governed by an effective potential of mean force. In addition, coarse-graining introduces hydrodynamic-like coupling among the masters as well as non-trivial inertial effects. Application of our method to the long-timescale part of the relaxation spectra of proteins shows that such dynamic coupling is essential for reproducing their relaxation rates and modes. The second study is concerned with the calibration of elastic network models based on the so-called B-factors obtained from x-ray crystallographic measurements. We show that a proper calibration procedure must account for rigid-body motion and for the constraints imposed by the crystalline environment on the protein. These fundamental aspects of protein dynamics in crystals are often ignored in currently used elastic network models, leading to potentially erroneous network parameters. We develop an elastic network model that properly takes rigid-body motion and crystalline constraints into account. This model reveals that B-factors are dominated by rigid-body motion rather than deformation, and therefore B-factors are poorly suited for identifying the elastic properties of protein molecules. Furthermore, it turns out that the B-factors for a benchmark set of three hundred and thirty protein molecules can be well approximated by assuming that the protein molecules are rigid. The third study is concerned with the polymer-mediated interaction between two planar surfaces. In particular, we consider the case where a thin polymer layer bridges two parallel plates. We consider monodisperse and polydisperse models of the polymer layer and obtain an analytical expression for the force-distance relationship of the two plates.
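As context for the second study, a standard Gaussian network model obtains B-factors from the pseudo-inverse of the network's Kirchhoff (connectivity) matrix. The sketch below, with an assumed distance cutoff and a spring constant folded into arbitrary units, is the textbook baseline that the study's model extends with rigid-body motion and crystalline constraints.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0, gamma=1.0):
    """B-factors from a standard Gaussian network model.
    coords: (n, 3) array of C-alpha positions; units are arbitrary
    because k_B*T/gamma is folded into gamma."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)   # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # node degrees on diagonal
    ginv = np.linalg.pinv(gamma * kirchhoff)  # pseudo-inverse drops the zero mode
    return (8.0 * np.pi ** 2 / 3.0) * np.diag(ginv)
```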
876

Evaluating the Normal Accident Theory in Complex Systems as a Predictive Approach to Mining Haulage Operations Safety

Do, Michael D. January 2012
The Normal Accident Theory (NAT) attempts to understand why accidents occur in systems with high-risk technologies. NAT is characterized by two attributes: complexity and coupling. The combination of these attributes results in unplanned and unintended catastrophic consequences, and high-risk technology systems that are complex and tightly coupled have a high probability of experiencing system failures. The mining industry has experienced significant incidents involving haulage operations, up to and including severe injuries and fatalities. Although the industry has dramatically reduced fatalities and lost-time accidents over the last three decades or more, accidents still persist. For example, for the years 1998-2002, haulage operations in surface mines alone accounted for over 40% of all accidents in the mining industry. Systems thinking was applied as an approach to qualitatively and quantitatively evaluate NAT in mining haulage operations, and a measurement index was developed to measure complexity. The index measurements indicated a higher degree of complexity in haulage transfer systems than in loading and unloading systems. Additionally, several lines of evidence point to the applicability of NAT in mining systems: a strong organizational management or safety system does not guarantee zero accidents; mining systems exhibit complexity; and they are interactive and tightly coupled. Finally, the complexity of these systems was assessed, with results indicating that a large number of accidents occur when 4 or 5 causal factors are present.
877

Sequential and parallel processing in letter-by-letter dyslexia: a case study and a simulation in the normal reader

Fiset, Stéphanie January 2004
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
878

Obesity and Asthma: Adiponectin Receptor 1 (Adipo R1) and Adiponectin Receptor 2 (Adipo R2) are expressed by normal human bronchial epithelial (NHBE) cells at air-liquid interface (ALI) and expression changes with IL-13 stimulation

Bradley, Jennifer L 01 January 2016
Obesity is recognized as an important risk factor for the development of many chronic diseases such as hypertension, Type 2 diabetes mellitus (T2DM), cardiovascular disease, cancer, renal disease, neurologic dysfunction, metabolic syndrome and asthma (3, 4). Circulating serum adiponectin levels in obese asthmatics have been reported to be low. We therefore aimed to investigate the role of adiponectin in a mucus hypersecretion model and hypothesized that adiponectin would decrease IL-13 induced MUC5AC expression in differentiated NHBE cells, and that increasing concentrations of IL-13 would decrease Adipo R1 and Adipo R2 expression. The change in MUC5AC expression with exposure to adiponectin was not significant. However, mRNA expression of Adipo R1 and Adipo R2 was significantly decreased by IL-13 stimulation after both acute (24 hours) and chronic (14 days) exposure. The obese state, and specifically the IL-13 concentration, could therefore play a role in Adipo R1 and Adipo R2 expression in NHBE cells.
879

Validity of a QuasiNURBS model interpolating uncertain geometric data

Zidani-Boumedien, Malika January 2006
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
880

Stochastická dominance vyšších řádů / High-order stochastic dominance

Mikulka, Jakub January 2011
The thesis deals with high-order stochastic dominance of random variables and portfolios. A summary of findings about high-order stochastic dominance and portfolio efficiency is presented. As the main part of the thesis, it is proven that under the assumption of either a normal or a gamma distribution, infinite-order stochastic dominance is equivalent to second-order stochastic dominance. A necessary and sufficient condition for infinite-order stochastic dominance portfolio efficiency is derived under the assumption of normality. The condition is used in the empirical part of the thesis, where the parametric approach to portfolio efficiency is compared to the nonparametric scenario approach. Because the derived necessary and sufficient condition is based on the assumption of normality, we use two data sets: one for which the assumption of normality holds and another for which it was unambiguously rejected. Consequently, the influence of fulfilling the normality assumption on the results of the necessary and sufficient condition for portfolio efficiency is estimated.
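As an illustration of the second-order criterion that infinite-order dominance reduces to, the following is a minimal empirical check of second-order stochastic dominance between two equal-size return samples; it is the textbook sample criterion, not the thesis's parametric efficiency condition.

```python
import numpy as np

def ssd_dominates(x, y):
    """True if sample x second-order stochastically dominates sample y.
    For equal-size, equal-probability samples, x dominates y iff every
    partial sum of the sorted outcomes of x is at least that of y."""
    xs, ys = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys)))
```

Under the normality assumption used in the thesis, this comparison collapses to conditions on the means and standard deviations of the two return distributions, which is what makes the parametric approach tractable.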
