111 |
Rethinking the Law of Letters of Credit. Corne, Charmian Wang, January 2003.
The documentary letter of credit transaction is the most common method of payment for goods in international trade. Its use has been considered so important that it is referred to as the 'lifeblood' of international commerce. The purpose of this thesis is, through analysing the present regime of documentary credit established under the Uniform Customs and Practice for Documentary Credits, 1993 Revision ('UCP'), to identify the rights and duties of all parties in such transactions and the reasons for the frequent occurrence of fraudulent activities associated with the documents required under the credits. It identifies that the present system fails either to encourage or to implement substantial realisation of 'reasonable care' or 'good faith' on the part of the banks, or realisation of the requirement of 'good faith' from beneficiaries. As a result, the independence principle has been left without substance, creating huge opportunities for fraudsters to cheat on the documents and obtain payment without actually performing their duties to banks and buyers. Such issues have become more acute against the background of an underlying shift in the allocation of risk between the respective parties to letters of credit. With the advent of container shipping, there has been a depreciation in the value of the bill of lading, the primary document of title and security held by the issuer. As the letter of credit system is wholly dependent on the integrity of the documents, it is being undermined by these developments. This has represented a shift in the traditional scheme of risk allocation from the seller to the bank. In practice, banks have taken countermeasures by insisting that applicants provide other types of collateral, and by subjecting applicants to rigorous credit checks. Thus, applicants ultimately have had to bear the brunt of the costs associated with this reallocation of risk. It will be demonstrated that the UCP does not impose adequate or sufficiently clear duties on issuers toward applicants, and severely restricts the applicant's right to sue if the issuer has wrongfully honoured. Ultimately, a balance must be struck between the desirability of protecting the applicant from the beneficiary's fraud and the benefits gained by maintaining the letter of credit as a commercial instrument and business device. Obviously, there is public interest in protecting both of these commercial values. This thesis advocates that a mechanism in addition to the fraud exception must be introduced to safeguard the system against the ramifications of these changes, namely increased fraud.

The thesis is structured into five chapters. Chapter 1 sets out to demonstrate the circumstances under which the respective risks are borne by each participant in the letter of credit transaction, and how developments in trade practice have caused the burden of certain of these risks to shift among the parties to a letter of credit transaction.
Chapter 2, after briefly visiting the historical origins of the letter of credit and the birth of the UCP, explores the implications of the dominance of banking interests over the drafting and interpretation of the UCP; how the UCP has in practice excluded the intrusion of other sources of law, and the general reluctance of courts to intervene by applying non-letter-of-credit principles; the implications of the UCP's assumption of the status of law in practice and the resulting marginalisation of local laws; and the inequality in bargaining power between banks and applicants that precludes a choice of law other than the UCP. Chapter 3 explores the independence principle and the question of documentary compliance: why the system is ridden with non-compliant documents, and the lack of incentive and of any meaningful duty for banks to check for 'red flags' that may indicate fraud on the documents or in the transaction. It will be emphasised that documentary validity, rather than mere documentary compliance, should be the focus under the letter of credit. Chapter 4 examines the fraud exception to the independence principle and the typically high thresholds of proof that applicants have had to overcome to estop payment, and explores recent trends toward the gradual lowering of such thresholds. Finally, Chapter 5 considers practical measures and proposals for reform that would help to redress the imbalance in the allocation of risk identified in the thesis.
|
112 |
Novel turbo-equalization techniques for coded digital transmission. Dejonghe, Antoine, 10 December 2004.
Turbo-codes have attracted an explosion of interest since their discovery in 1993: for the first time, the gap to the limits predicted by information and coding theory was on the way to being bridged. The astonishing performance of turbo-codes relies on two major concepts: code concatenation, so as to build a powerful global code, and iterative decoding, in order to efficiently approximate the optimal decoding process.
As a matter of fact, the techniques involved in turbo coding and in the associated iterative decoding strategy can be generalized to other problems frequently encountered in digital communications. This gives rise to the so-called turbo principle. A famous application of this principle is the communication scheme referred to as turbo-equalization: when considering coded transmission over a frequency-selective channel, it enables the equalization and decoding tasks required at the receiver to be performed jointly and efficiently. This leads to significant performance improvements compared with conventional disjoint approaches.
In this context, the purpose of the present thesis is the derivation and performance study of novel digital communication receivers that perform iterative joint detection and decoding by means of the turbo principle. The binary turbo-equalization scheme is taken as a starting point and improved in several ways, which are detailed throughout this work. Emphasis is always put on the performance analysis of the proposed communication systems, so as to gain insight into their behavior. Practical considerations are also taken into account, in order to provide realistic, tractable, and efficient solutions.
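As general background to the turbo principle invoked here (a standard textbook formulation, added for orientation and not quoted from the thesis): each constituent soft-input/soft-output stage, the equalizer or the decoder, computes a posteriori log-likelihood ratios (LLRs) for the coded bits and passes only the extrinsic part to the other stage, so that information the other stage already supplied as a priori knowledge is not fed back to it. In LaTeX form,

\[
L_{\mathrm{post}}(c_k)
  = \ln\frac{P(c_k = 1 \mid \mathbf{r}, L_a)}{P(c_k = 0 \mid \mathbf{r}, L_a)}
  = L_a(c_k) + L_{\mathrm{ext}}(c_k),
\]

where \(\mathbf{r}\) is the received sequence, \(L_a(c_k)\) is the a priori LLR obtained (after interleaving or de-interleaving) from the other stage, and \(L_{\mathrm{ext}}(c_k)\) is the extrinsic LLR that is exchanged. Iterating this exchange between equalizer and decoder is what the turbo-equalization scheme described above refers to.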
|
113 |
Monotone method for nonlocal systems of first order. Liu, Weian, January 2005.
In this paper, the monotone method is extended to initial-boundary value problems for nonlocal first-order PDE systems, in both the quasi-monotone and the non-monotone case. A comparison principle is established, and a monotone scheme is given.
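For orientation, a schematic of the monotone iterative technique in its simplest scalar setting (a generic sketch in the spirit of the classical method, not the nonlocal system treated in the paper): suppose \(\underline{u} \le \overline{u}\) are a lower and an upper solution of a first-order problem \(u_t + c\,u_x = f(x,t,u)\) with given initial-boundary data, and let \(K\) dominate the Lipschitz constant of \(f\) in \(u\). One then iterates the linear problems

\[
\partial_t u^{(m+1)} + c\,\partial_x u^{(m+1)} + K\,u^{(m+1)}
  = f\bigl(x,t,u^{(m)}\bigr) + K\,u^{(m)},
\]

starting from \(u^{(0)} = \underline{u}\) or \(u^{(0)} = \overline{u}\). The comparison principle for the linear operator, together with the monotonicity of \(f(\cdot) + K\,(\cdot)\), yields a nondecreasing sequence from the lower solution and a nonincreasing one from the upper solution, squeezing the true solution between them; the paper establishes the corresponding comparison principle and monotone scheme in the nonlocal, systems setting.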
|
114 |
The Maximum Principle for Cauchy-Riemann Functions and Hypocomplexity. Daghighi, Abtin, January 2012.
This licentiate thesis contains results on the maximum principle for Cauchy–Riemann functions (CR functions) on weakly 1-concave CR manifolds and on hypocomplexity of locally integrable structures. The maximum principle does not hold true in general for smooth CR functions, and basic counterexamples can be constructed in the presence of strictly pseudoconvex points. We prove a maximum principle for continuous CR functions on smooth weakly 1-concave CR submanifolds. Because weak 1-concavity is also necessary for the maximum principle, a consequence is that a smooth generic CR submanifold of C^n obeys the maximum principle for continuous CR functions if and only if it is weakly 1-concave. The proof is then generalized to embedded weakly p-concave CR submanifolds of p-complete complex manifolds. The second part concerns hypocomplexity and hypoanalytic structures. We give a generalization of a known result regarding automatic smoothness of solutions to the homogeneous problem for the tangential CR vector fields, given local holomorphic extension. This generalization ensures that a given locally integrable structure is hypocomplex at the origin if and only if it does not allow solutions near the origin which cannot be represented by a smooth function near the origin. / Research funded by Forskarskolan i Matematik och Beräkningsvetenskap (FMB), the graduate school in mathematics and computational science based in Uppsala.
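For readers unfamiliar with the classical result that the CR statement generalizes, the maximum modulus principle for holomorphic functions reads as follows (background added here, not quoted from the thesis):

\[
f \in \mathcal{O}(\Omega),\ \Omega \subseteq \mathbb{C}^n \ \text{open and connected},\qquad
|f(z_0)| = \sup_{z \in \Omega} |f(z)| \ \text{for some } z_0 \in \Omega
\ \Longrightarrow\ f \ \text{is constant on } \Omega.
\]

The thesis asks when an analogous principle survives for continuous CR functions on CR submanifolds, where, as noted above, strictly pseudoconvex points supply counterexamples.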
|
115 |
First Principle Calculation with Interpolating Scaling Function on Adaptive Gridding. Wang, Jen-chung, 09 August 2007.
A new multiresolution scheme based on interpolating scaling functions (ISFs) on adaptive gridding (AG) shows promise for first-principles calculations. We also use ISFs for solving the Poisson equation (PE), and find good approximations for the expansions of the second derivatives of the ISFs. The scheme is simpler than the wavelet scheme yet fully implements the fast wavelet transformation, so the method is well suited to problems with frequently updated charge densities, such as first-principles calculations of electronic structure in atoms, molecules, and solids.
Although the scheme is similar to the adaptive-gridding scheme in real space, the ISFs represent fields more effectively and require fewer grid points than a pure real-space scheme does. This simple and effective method provides an alternative to both real-space and wavelet methods in first-principles calculations. The method can also be easily parallelized, owing to the block structure of the grid layout.
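As an illustration of the interpolating-scaling-function idea (a generic sketch, assuming NumPy; it is not code from the thesis and does not reproduce its basis, adaptive gridding, or Poisson solver), the cubic Deslauriers-Dubuc interpolating scaling function corresponds to a subdivision rule that keeps existing samples and fills each midpoint by 4-point polynomial interpolation with weights (-1, 9, 9, -1)/16:

import numpy as np

def dd4_refine(values):
    """One level of 4-point Deslauriers-Dubuc interpolating subdivision.

    Existing samples are kept (interpolating property); each new midpoint
    is the value at the half-way point of the cubic through its four
    nearest coarse samples, with weights (-1, 9, 9, -1)/16. Endpoints are
    handled by simple linear interpolation to keep the sketch short.
    """
    v = np.asarray(values, dtype=float)
    n = v.size
    fine = np.empty(2 * n - 1)
    fine[0::2] = v                      # coarse samples survive unchanged
    # interior midpoints: cubic (4-point) interpolation
    fine[3:-3:2] = (-v[:-3] + 9 * v[1:-2] + 9 * v[2:-1] - v[3:]) / 16.0
    # boundary midpoints: fall back to linear interpolation
    fine[1] = 0.5 * (v[0] + v[1])
    fine[-2] = 0.5 * (v[-2] + v[-1])
    return fine

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 9)
    coarse = np.sin(2.0 * np.pi * x)    # sample a smooth field on a coarse grid
    fine = dd4_refine(coarse)
    # the coarse samples reappear unchanged on the fine grid (interpolating
    # property); in the interior, cubic polynomials are reproduced exactly
    print(fine.shape, np.abs(fine[::2] - coarse).max())

Because the rule is interpolating, coarse-grid values are preserved exactly on the refined grid, which is the property that makes local, adaptive refinement of quantities such as charge densities straightforward.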
|
116 |
Edge Detection on Underwater Laser Spot. Tseng, Pin-hsien, 04 September 2007.
none
|
117 |
Principle-based Implementation of Knowledge Building Communities. Reeve, Richard, 01 September 2010.
This thesis investigates issues and challenges surrounding the use of teacher study groups as a means of addressing the gap that must be closed between design principles and classroom practices in order to effectively implement an educational innovation. A multiple-case design was used to examine how teachers' perceived understanding of the Knowledge Building Communities principles changed over time and affected their implementation of the Knowledge Building Communities model, a model that requires student engagement in the collaborative production of ideas that are continually improved by all participants. Knowledge Forum® is an on-line environment designed to support Knowledge Building. Data sources for this study include teacher interviews, transcripts of study group meetings, teachers' ratings of their perceived understanding of Knowledge Building principles, teacher and student activity in Knowledge Forum, and student interviews. In total, this study involved seven teachers and eleven study group meetings across three school sites. Based on work at a site already engaged in Knowledge Building, a tentative proposition was developed: discussing Knowledge Building principles increases teachers' perceived understanding of these principles and contributes to increasingly effective designs for implementing them. This proposition was tested and refined at two additional public elementary schools. Taken together, the findings suggest the importance of, and the difficulties surrounding, study groups focused on principle-based approaches to pedagogical change. In particular, the findings point to discussion and active engagement with the principles as a catalyst for change. A data analysis technique was developed to examine the discourse patterns of selected episodes of study group meetings. The resulting pattern suggests that the principles can frame a study group's work and set the groundwork for change through discussion of the goals underlying the principles, stories relevant to their implementation, and commitment to ongoing experimentation to address obstacles. Detailed accounts of teacher difficulties and change form the basis of a descriptive model developed to convey how teachers address contextual concerns in their study groups, with elaboration of the types of interactions that help them move to deeper understanding of the principles and to more successful implementations of the Knowledge Building Communities model.
|
118 |
A Proposal for Principle-based Securities Regulation for Canada. Margaritis, Kelly, 12 January 2011.
This paper argues in favour of principle-based securities regulation for Canada. The author examines the current state of Canadian securities regulation and why change is needed. The author then examines the characteristics of principle-based regulation and contrasts it with rule-based regulation, exposing the advantages and disadvantages of both regulatory models. In proposing a principle-based model for Canadian securities regulation, the author looks to the use of this type of regulation in the capital markets of certain Canadian provinces, the United States and the United Kingdom, and then examines certain attributes of Canadian capital markets that must be considered in applying principle-based securities regulation to Canada. In supporting principle-based regulation as the modern form of securities regulation, the author discusses lessons learned from the global financial crisis and how those lessons can be applied in promoting principle-based securities regulation for Canada.
|
120 |
Towards better understanding of the Smoothed Particle Hydrodynamic Method. Gourma, Mustapha, 09 1900.
Numerous approaches have been proposed for solving partial differential equations; all these methods have their own advantages and disadvantages depending on the problems being treated. In recent years there has been much development of particle methods for mechanical problems. Among these are the Smoothed Particle Hydrodynamics (SPH), Reproducing Kernel Particle Method (RKPM), Element Free Galerkin (EFG) and Moving Least Squares (MLS) methods. This development is motivated by the extension of their applications to mechanical and engineering problems.

Since numerical experiments are one of the basic tools used in computational mechanics, physics, biology and elsewhere, a robust spatial discretization would be a significant contribution towards the solution of a number of problems. Even a well-defined, stable and convergent formulation of a continuous model does not guarantee a perfect numerical solution to the problem under investigation.

Particle methods, especially SPH and RKPM, have advantages over meshed methods for problems in which large distortions and strong discontinuities occur, such as high-velocity impact, fragmentation and hydrodynamic ram. These methods are also convenient for open problems. Recently, SPH and its family have grown into successful simulation tools, and the extension of these methods to initial boundary value problems requires further research in numerical fields.

In this thesis, several problem areas of the SPH formulation were examined. Firstly, a new approach based on 'Hamilton's variational principle' is used to derive the equations of motion in SPH form. Secondly, the application of a complex Von Neumann analysis to the SPH method reveals the existence of a number of physical mechanisms accountable for the stability of the method. Finally, the notion of the amplification matrix is used to detect how numerical errors propagate, which permits the identification of the mechanisms responsible for delimiting the domain of numerical stability. By doing so, we were able to establish a link between the physics and the numerics that govern the SPH formulation.
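To make the kernel approximation at the heart of SPH concrete (a generic one-dimensional sketch, assuming NumPy, using the standard cubic-spline kernel; it does not reproduce the thesis's variational derivation or its Von Neumann stability analysis), each particle's density is estimated as a kernel-weighted sum over neighbouring particle masses:

import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline (M4) SPH kernel, normalization 2/(3h)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0,
                 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """Density summation rho_i = sum_j m_j W(x_i - x_j, h) over all particles."""
    dx = x[:, None] - x[None, :]           # pairwise separations
    return (m[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)

if __name__ == "__main__":
    # Equal-mass particles on a uniform line: away from the ends the
    # summed density should be close to the nominal value mass / spacing.
    n, spacing = 100, 0.01
    x = np.arange(n) * spacing
    m = np.full(n, 1.0 * spacing)          # target density of 1.0
    rho = sph_density(x, m, h=1.3 * spacing)
    print(rho[n // 2], rho[0])             # interior value ~1.0; edge value is deficient

The same kernel-summation structure underlies the SPH momentum equations, and the choice of kernel and smoothing length h is among the ingredients that a Von Neumann-type stability analysis of SPH has to account for.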
|