81 |
Emissions trading scheme for South Africa : opportunities and challenges
Jooste, Dustin, 03 1900
ENGLISH ABSTRACT: This research report aims to determine whether an emissions trading scheme or carbon tax is the
most suitable market-based emissions reduction mechanism for South Africa, given its multiple
environmental, social and economic objectives. Key factors considered in this comparison include:
environmental effectiveness; economic efficiency; social welfare impacts; public finance
considerations; administrative complexity and costs; and, finally, the relationship to global
greenhouse gas reduction mechanisms. These factors are compared in the short and long term to
determine which mechanism is most likely to deliver South Africa’s emissions reduction targets
within the given time frames. The comparison of these factors involves a non-empirical literature
review, followed by a rating of the mechanisms in order to distil a best fit in terms of the various
aspects of an effective emissions reduction mechanism, taking into account the specific needs and
conditions of South Africa.
The research found that, in the short term, a carbon tax was best suited to the South African
context. This is because of the fiscal certainty inherent in this mechanism, which provides clear
price signals and a stable public income. However, the reasons for these comparative advantages
over an emissions trading scheme relate to the long lead times and structure of the latter
mechanism, which requires years of implementation and favours environmental effectiveness over
economic efficiency. Further reasons include a lack of understanding and buy-in in terms of
market-based mechanisms, a situation that favours familiarity over effectiveness in some
instances. Taking these issues into account, the research shows that an emissions trading scheme
is better suited to the South African context in the long term. Once properly implemented, this
mechanism provides superior results in terms of the above-mentioned factors, and specifically in
terms of environmental effectiveness and the potential for benefit through international integration.
This research report concludes that the South African government has failed to take a long-term
view of the mechanisms available for emissions reduction, choosing instead to implement a carbon
tax, which favours economic growth at the expense of the environment and future generations. A
general lack of understanding of the structures and opportunity costs of the two mechanisms
necessitates an investigation by government of the applicability and structure of an emissions
trading scheme in the South African context before market-based mechanisms can play an
effective part in the future development of the country’s environmental regulatory regime.
|
82 |
DESIGN AND IMPLEMENTATION OF LIFTING BASED DAUBECHIES WAVELET TRANSFORMS USING ALGEBRAIC INTEGERS
2013 April 1900
Over the past few decades, the demand for digital information has increased drastically. This enormous demand places serious strain on the storage capacity and transmission bandwidth of current technologies. One possible solution is to compress the information by discarding its redundancies. In multimedia technology, various lossy compression techniques are used to compress raw image data to facilitate storage and to fit the transmission bandwidth.
In this thesis, we propose a new approach using algebraic integers to reduce the complexity of the Daubechies-4 (D4) and Daubechies-6 (D6) lifting-based Discrete Wavelet Transforms. The resulting architecture is entirely integer based and is therefore free from the round-off error introduced by floating-point calculations. The filter coefficients of the two Daubechies transforms are individually converted to integers by multiplying them by 2^x, where x is a scaling exponent chosen so that the resulting loss is negligible. The wavelet coefficients are then quantized using the proposed iterative individual-subband coding algorithm, which is adapted from the well-known Embedded Zerotree Wavelet (EZW) coding. Simulation results show that the proposed coding algorithm is much faster than its predecessor and, at the same time, produces a good Peak Signal to Noise Ratio (PSNR) at very low bit rates.
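As a small illustration of the scaling step described above (a sketch of the general idea only, not the thesis's exact algebraic-integer encoding or its subband coder), the snippet below converts the standard Daubechies-4 filter taps to integers by multiplying by 2^x and rounding, reports the representation error for a few exponents, and includes the usual PSNR definition used to evaluate reconstruction quality. The chosen exponent values, the error measure and the 8-bit peak value are illustrative assumptions.

```python
import numpy as np

# Standard Daubechies-4 low-pass analysis taps (closed form with sqrt(3)).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

def to_integers(coeffs, x):
    """Scale the coefficients by 2**x and round, giving an integer-only filter."""
    return np.round(coeffs * 2**x).astype(np.int64)

def psnr(reference, approximation, peak=255.0):
    """Peak Signal to Noise Ratio in dB, assuming 8-bit image data."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(approximation, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

for x in (4, 8, 12):
    taps = to_integers(h, x)
    err = np.max(np.abs(taps / 2.0**x - h))        # loss introduced by the rounding
    print(f"x={x:2d}  integer taps={taps.tolist()}  max coefficient error={err:.2e}")
```

Larger exponents shrink the rounding loss at the cost of wider integer arithmetic, which is the trade-off the abstract alludes to when it says x is chosen where the loss is negligible.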
Finally, the two proposed transform architectures are implemented on a Virtex-E Field Programmable Gate Array (FPGA) to evaluate the hardware cost (in terms of multipliers, adders and registers) and the throughput rate. The synthesis results show that the proposed algorithm has a low hardware cost and a high throughput rate.
|
83 |
Snow or rain? - A matter of wet-bulb temperature / Regn eller snö? En fråga om våta temperaturen.
Olsen, Arvid, January 2003
Accurate precipitation-type forecasts are essential in many areas of modern society, and there is therefore a need to develop proper working methods for this purpose. The focus of this work has been to study the important physical processes that determine the temperature of the precipitation particles, and hence their phase, as well as that of the surrounding air. Two major latent heating effects have been emphasized: the melting effect and cooling by evaporation/sublimation. Melting of the snowflakes extracts heat from the surroundings and hence acts as a cooling agent. Phase transformation from the solid/liquid phase into the gas phase also requires heat, which likewise results in a cooling tendency. These two mechanisms may sometimes have a crucial influence in deciding the correct precipitation type. The melting effect is discussed in a paper about a snow event in Tennessee, USA, and the influence of the evaporation/sublimation process in another paper describing an event in Japan. In the latter case the wet-bulb temperature, Tiw, is obtained as a physically correct discriminator between snow and rain. A numerical weather prediction model (HIRLAM) is used to study different condensation schemes during three weather situations occurring in Sweden: the Rasch/Kristjánsson condensation scheme, the original Sundqvist condensation scheme, and a modification of the latter. In the modified Sundqvist condensation scheme, Tiw has been implemented as the limit temperature between snow and rain. The results show differences between the two main schemes in the total precipitation (both snow and rain). Comparisons between the Sundqvist condensation scheme and this modified version, called the Sundqvist scheme with Tiw, show that the latter creates slightly more snow. Differences between them are largest in drier areas. Differences in the snow accumulation increase with the forecast length, which makes them harder to compare with snow analyses from MESAN (mesoscale analysis), because the analyses are partly based on differences in snow depth, which cannot be compared directly with the amount of newly fallen snow, especially when surface air temperatures are above freezing. Deviations from the data analyses are obtained for both the Sundqvist scheme and the Sundqvist scheme with Tiw, but in some regions the latter is in better agreement with measurements. Further work is needed in precipitation-type studies, but the physically correct value of Tiw = 0 ºC as the melting temperature, as used in the Sundqvist scheme with Tiw, is an interesting direction for future work in precipitation-type forecasting. / Summary of "Regn eller snö? En fråga om våta temperaturen": Accurate forecasts of precipitation type are very important in many areas of society, and there is therefore a need to develop good methods for determining whether precipitation falls as rain or snow. Important physical processes determine the temperature of the precipitation and of the surrounding air; the critical influence of these processes on the precipitation phase is the focus here. The two largest latent heat effects, cooling by melting and by evaporation/sublimation, have been emphasized. Melting of the snowflakes extracts heat from the surroundings, lowering the temperature. Evaporation and sublimation require heat for the phase transformation, which is likewise taken from the surroundings, with a cooling effect as a result.
These two latent heat effects are sometimes decisive for the precipitation type at the ground. This is discussed partly in a paper on a weather situation in Tennessee (Kain et al., 2000), where the melting effect was decisive for the precipitation phase at the surface, and partly in studies from Japan where the influence of evaporation and sublimation on the precipitation type is emphasized (Matsuo and Sasyo, 1981). In the latter case the isobaric wet-bulb temperature and its role as a discriminator between rain and snow are made clear. A numerical weather model (HIRLAM) has been used to study different condensation schemes and their effect on precipitation during three weather situations in Sweden: the Rasch/Kristjánsson condensation scheme, the Sundqvist condensation scheme, and a slightly modified variant of the Sundqvist scheme in which a subroutine for computing Tiw has been implemented and replaces the ordinary temperature in the part of the scheme that computes the melting of solid precipitation; the melting temperature is then set to 0 ºC. The results show differences between the Rasch/Kristjánsson scheme and the Sundqvist scheme in the total 12-hour precipitation (rain and snow). In some periods the Sundqvist condensation scheme tends to overproduce precipitation, while in other periods it is the Rasch/Kristjánsson scheme that overproduces, compared with precipitation observations. Comparisons between the Sundqvist scheme and the Sundqvist scheme with Tiw show that the latter produces more accumulated snow, with the largest differences in areas that deviate most from saturation (100 %); there we also find larger differences between the ordinary temperature and Tiw. The differences grow as the accumulation period is extended, but these values then also become harder to verify against snow analyses from MESAN, since the snow analyses are based on differences between the current and previously observed snow depths, which need not equal the actual amount of newly fallen snow, especially during measurement periods with above-freezing temperatures. Deviations from the snow analyses occur in both the Sundqvist scheme and the Sundqvist scheme with Tiw, but in some regions the snow forecast from the latter is somewhat better. The physically correct value of Tiw = 0 ºC as the melting limit between rain and snow, instead of the ordinary temperature, forms the basis for interesting future studies of precipitation and precipitation type.
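To make the wet-bulb discriminator concrete, here is a minimal sketch (not the HIRLAM implementation) that solves the isobaric psychrometric equation for Tiw by bisection, using the Magnus saturation-vapour-pressure formula and a standard psychrometric coefficient, and classifies precipitation as snow when Tiw is at or below 0 °C. The constants, the 0 °C threshold applied directly at the surface, and the example inputs are textbook values assumed for illustration.

```python
import math

def e_sat(T_c):
    """Saturation vapour pressure (hPa) over water, Magnus formula."""
    return 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))

def wet_bulb(T_c, rh, p_hpa=1000.0):
    """Isobaric wet-bulb temperature (deg C) from temperature, relative humidity and pressure.

    Solves e = e_sat(Tw) - A * p * (T - Tw) for Tw by bisection,
    with the psychrometric coefficient A ~ 6.6e-4 K^-1.
    """
    e = rh / 100.0 * e_sat(T_c)
    A = 6.6e-4
    lo, hi = -60.0, T_c                      # Tw never exceeds T
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if e_sat(mid) - A * p_hpa * (T_c - mid) > e:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def precip_type(T_c, rh, p_hpa=1000.0, threshold=0.0):
    """Classify precipitation as 'snow' or 'rain' using Tiw <= threshold."""
    return "snow" if wet_bulb(T_c, rh, p_hpa) <= threshold else "rain"

# Example: +1.5 degC but dry air gives a sub-freezing wet-bulb temperature, so snow survives.
print(precip_type(1.5, 40.0))   # snow
print(precip_type(1.5, 95.0))   # rain
```

The example shows why the wet-bulb temperature, rather than the ordinary temperature, is the physically meaningful limit: in dry air the falling snow cools its surroundings by sublimation and can reach the ground as snow even when the air temperature is above freezing.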
|
84 |
Approaches to transmission reduction protocols in low-frequency Wireless Sensor Networks deployed in the field
Wilkins, R., January 2015
A key barrier to the adoption of Wireless Sensor Networks (WSNs) is achieving long-lived and robust real-life deployments. Issues include: reducing the impact of transmission loss, node failure detection, accommodating multiple sensor modalities, and the energy requirement of the WSN network stack. In systems where radio transmissions are the largest energy consumer on a node, it follows that reducing the number of transmissions will, in turn, extend node lifetime. Research in this area has led to the development of the Dual Prediction Scheme (DPS). However, the designs of specific DPS algorithms in the literature have not typically considered issues arising in real-world deployments. Therefore, this thesis proposes solutions to enable DPSs to function in robust and long-lived real-world WSN deployments. To exemplify the proposed solutions, Cogent-House, an end-to-end open-source home environmental and energy monitoring system, is considered as a case study. Cogent-House was deployed in 37 homes, generating 235 evaluation data traces, each spanning periods of two weeks to a year.
DPSs presented in the literature often lack the ability to handle several aspects of real-world deployments. To address these issues, this thesis proposes a novel generalised framework, named the Generalised Dual Prediction Scheme (G-DPS). G-DPS provides: i) a multi-modal approach, ii) an acknowledgement scheme, iii) heartbeat messages, and iv) a method to calculate reconstructed data yield. G-DPS's multi-modal approach allows multiple sensors' readings to be combined into a single model, in contrast to the single-modal approach, which uses multiple instances of a DPS. For a node sensing temperature, humidity and CO2, the multi-modal approach reduces transmissions by up to 27%, improves signal reconstruction accuracy by up to 65%, and reduces the energy requirement of nodes by 15% compared to a single-modal DPS. In a lossy network, the use of acknowledgements improves signal reconstruction accuracy by up to 2x and increases the data yield of the system by up to 7x compared to an acknowledgement-less scheme, with only up to a 1.13x increase in energy consumption. Heartbeat messages allow the detection of faulty nodes, yet do not significantly impact the energy requirement of functioning nodes. Implementing DPS algorithms within the G-DPS framework enables robust deployments, as well as easier comparison of performance between differing approaches.
DPSs focus on modelling sensed signals, allowing accurate reconstruction of the signal from fewer transmissions. Although transmissions can be reduced in this way, considerable savings are also possible at the application level. Given the information needs of a specific application, raw sensor measurement data is often highly compressible. This thesis proposes the Bare Necessities (BN) algorithm, which exploits on-node analytics by transforming data into information closer to the data source (the sensing device). This approach is evaluated in the context of a household monitoring application that reports the percentage of time a room of the home spends in various environmental conditions. BN can reduce the number of packets transmitted to the sink by 7000x compared to a sense-and-send approach. To support the implementation of the above solutions in achieving long lifetimes, this thesis explores the impact of the network stack on the energy consumption of low-transmission sensor nodes.
Considering a DPS achieving a 20x transmission reduction, the energy reduction of a node is only 1.3x when using the TinyOS network stack. This thesis proposes the Backbone Collection Tree Protocol (B-CTP), a networking approach utilising a persistent backbone network of powered nodes. B-CTP coupled with the Linear Spanish Inquisition Protocol (L-SIP) decreases the energy requirement of sensing nodes by 13.4x compared to sense-and-send nodes using the TinyOS network stack. When B-CTP is coupled with BN, an energy reduction of 14.1x is achieved. Finally, this thesis proposes a quadratic spline reconstruction method which improves signal reconstruction accuracy by 1.3x compared to commonly used linear interpolation or model-prediction-based reconstruction approaches. Incorporating sequence numbers into the quadratic spline method allows up to 5 hours of accurate signal imputation during transmission failure. In summary, the techniques presented in this thesis enable WSNs to be long-lived and robust in real-life deployments. Furthermore, the underlying approaches can be applied to existing techniques and implemented for a wide variety of applications.
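As a minimal illustration of the dual-prediction idea underlying schemes such as L-SIP (a sketch, not the thesis's G-DPS implementation), both the node and the sink run the same predictor; the node transmits only when a measurement deviates from the shared prediction by more than a threshold, and the sink reconstructs the signal from the sparse updates. The linear predictor, threshold value, sampling rate and variable names below are illustrative assumptions.

```python
import numpy as np

def dual_prediction(signal, threshold):
    """Node-side filter: send an (index, value, slope) update only when the shared
    linear prediction drifts from the measurement by more than `threshold`."""
    updates = [(0, float(signal[0]), 0.0)]           # initial state is always sent
    last_i, last_v, slope = updates[0]
    for i, v in enumerate(signal[1:], start=1):
        predicted = last_v + slope * (i - last_i)
        if abs(v - predicted) > threshold:
            slope = (v - last_v) / (i - last_i)      # new slope shared by node and sink
            last_i, last_v = i, float(v)
            updates.append((i, float(v), slope))
    return updates

def reconstruct(updates, length):
    """Sink-side reconstruction from the sparse updates."""
    out = np.empty(length)
    for (i, v, slope), nxt in zip(updates, updates[1:] + [(length, None, None)]):
        idx = np.arange(i, nxt[0])
        out[idx] = v + slope * (idx - i)
    return out

# Example: a slowly varying temperature trace with noise, sampled every 5 minutes.
t = np.linspace(0, 24, 24 * 12)
temp = 18 + 4 * np.sin(2 * np.pi * t / 24) + 0.05 * np.random.randn(t.size)
updates = dual_prediction(temp, threshold=0.25)
recon = reconstruct(updates, temp.size)
print(f"transmissions: {len(updates)} of {temp.size}, "
      f"max error: {np.max(np.abs(recon - temp)):.2f}")
```

By construction the reconstruction error never exceeds the threshold, so the threshold directly trades transmission count against reconstruction accuracy, which is the trade-off the abstract quantifies for G-DPS.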
|
85 |
Tělo a jeho manifestace / Body and its manifestation
Havlanová, Michaela, January 2011
Resumé (En) Key words: body, soma, sarx, pexis, horizon, aesthesis, body schema, body art, body modifications. This thesis deals with the body and its manifestation. The body is conceived as a philosophical, anthropological and psychological phenomenon. The philosophical part defines the body as soma, sarx and pexis. Phenomena such as the body schema, horizon, motion and aesthesis are used for better understanding as well. The next part deals with the body as an anthropological phenomenon. Body modifications, suspensions and their history are presented here. The research clarifies the mindsets and motivations of extremely modified persons. The last part is psychological. Problems related to a distorted body schema are described, as well as the development of body perception.
|
86 |
Investiční společnosti a fondy / Investment companies and funds
Reichelt, Petr, January 2015
This diploma thesis deals with collective investment schemes, with the main focus on investment fund managers and administrators. A collective investment scheme is a form of indirect investment in the capital markets. It is an arrangement that enables a number of investors to pool their assets and have these professionally managed by an independent manager. It is a specific form of business based on raising finance from the public or from a number of investors and then investing it with the goal of making a profit. The investment is spread across a wide range of financial instruments, which creates a diversified portfolio. The first chapter serves as an introduction to the basic principles of collective investment schemes, to their legal framework and to the legal entities that operate within this framework. The purpose of the second chapter is to give a comprehensive overview of the managers of investment funds. It deals with cross-border management of investment funds, both within and outside the EU, operating conditions for AIFMs, and capital and organisational requirements. The chapter concludes by explaining the obligations of AIFMs managing AIFs which acquire control of companies and issuers. The third chapter focuses on the administrators of investment funds; it begins with an analysis of the concept of separation of fund management and...
|
87 |
Die Symmetrisierung des MacCormack-Schemas im Atmosphärenmodell GESIMA / The symmetrization of the MacCormack scheme in the atmospheric model GESIMA
Hinneburg, Detlef, 02 November 2016
The dynamical equations of the non-hydrostatic mesoscale model GESIMA are solved numerically on an Arakawa-C grid. Because of the staggered grid, most of the prognostic variables and their derivatives have identical local positions. The functional connection between the fluxes and velocities defined at different places is handled by the MacCormack scheme by ignoring the local differences. The systematic errors are diminished by alternately shifting the fluxes downwind and upwind after each time step. Because of the shift permutations, a cycle of 8 time steps is necessary to achieve approximately symmetrical conditions. Nevertheless, the systematic errors are not completely removed, and the iterative calculation of the dynamic pressure is retarded by starting values taken from eight time steps earlier (the same permutation of shift directions). Both shortcomings are avoided by a symmetrized MacCormack scheme, without losing its advantages in handling strong gradients. The new method is based on symmetrizing the equations with respect to the passive quantities and on calculating each equation simultaneously for opposite shift directions of the active variables, followed by averaging both increments. The method is tested on a typical example. / The dynamical model equations of the non-hydrostatic mesoscale atmospheric model GESIMA are solved numerically on an Arakawa-C grid. Owing to the staggered arrangement of the quantities on the grid, the difference quotients (on the right-hand sides) and the prognostic quantities (on the left-hand sides) generally have the same local position from the outset, although not in every case. The MacCormack scheme used so far in GESIMA establishes the connection between the fluxes and velocities defined at different grid locations by ignoring the spatial offset between a flux component and the associated velocity component. To reduce the systematic errors, each flux component is assigned alternately (sequentially) to the downwind neighbouring velocity component in one time step and to the upwind neighbouring component in the next. After every 8 time steps, the assignment permutations of the 3 vector components required for an approximate symmetrization of the procedure have been completed. Drawbacks of the previous procedure are (a) the incomplete removal of the systematic assignment errors inherent in every time step and (b) a strongly increased computational effort for the iterative determination of the dynamic pressure, because its starting value dates back 8 time steps (to the same assignment permutation). Both drawbacks are avoided by a new, symmetrized MacCormack scheme, without giving up its advantages in handling strong gradients. The procedure rests (a) on symmetrizing the local assignment of the passive quantities within an equation (i.e. of the quantities not predicted by that equation) and (b) on carrying out the two opposite assignment directions simultaneously for each of the 3 velocity components within one time step, followed by averaging the two increments. The new procedure was tested on an example.
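As background for readers unfamiliar with the scheme, the following sketch shows the textbook MacCormack predictor-corrector for 1D linear advection, together with a symmetrized step that runs the two opposite sweep orderings on the same state and averages the increments. It illustrates only the averaging idea and is not the GESIMA implementation on the staggered Arakawa-C grid; the grid, time step and test profile are assumptions.

```python
import numpy as np

def maccormack_step(u, c, dx, dt):
    """One MacCormack step for du/dt + c du/dx = 0 with periodic boundaries:
    forward-difference predictor, backward-difference corrector."""
    up = u - c * dt / dx * (np.roll(u, -1) - u)                   # predictor
    return 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))   # corrector

def maccormack_step_reversed(u, c, dx, dt):
    """Same scheme with the opposite sweep ordering (backward predictor, forward corrector)."""
    up = u - c * dt / dx * (u - np.roll(u, 1))
    return 0.5 * (u + up - c * dt / dx * (np.roll(up, -1) - up))

def symmetrized_step(u, c, dx, dt):
    """Average the two orderings within one step instead of alternating them over steps.
    (For constant-coefficient linear advection the two orderings coincide; they differ
    for nonlinear fluxes or staggered variables, which is where the averaging matters.)"""
    return 0.5 * (maccormack_step(u, c, dx, dt) + maccormack_step_reversed(u, c, dx, dt))

# Advect a Gaussian pulse once around a periodic domain.
nx, c = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / c                                   # CFL number 0.4
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(int(round(1.0 / (c * dt)))):
    u = symmetrized_step(u, c, dx, dt)
print(f"max after one revolution: {u.max():.3f} (initial 1.000)")
```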
|
88 |
Processo estratégico na criação e implantação da Escola de Artes, Ciências e Humanidades da USP: esquema analítico e evidências empíricas / Strategic process in the creation and implantation of the School of Arts, Sciences and Humanities at the USP: analytical schema and empirical evidences.
Palacios, Fernando Antonio Colares, 06 June 2011
Ao constatar a complexidade e a especificidade dos processos organizacionais em universidades, a tese procurou responder à questão central de como ampliar a capacidade de análise, interpretação e explicação do processo estratégico ocorrido quando da criação e implantação da Escola de Artes, Ciências e Humanidades (EACH), na Universidade de São Paulo (USP). Para tanto, o objetivo geral foi elaborar um esquema analítico capaz de estruturar os elementos intervenientes e propiciar significados aos pesquisadores e aos estrategistas sobre o processo ocorrido na EACH. Propunha-se, de forma específica, responder questões tipo: como caracterizar o processo estratégico, quais os agentes, os recursos organizacionais e as relações contextuais envolvidos e de que forma se combinaram nas ações. A estratégia foi analisada como uma construção social, na qual o ser humano é um potencial agente de transformação social. A tese fundamentou-se nas teorias sociológicas de Giddens (2003) e Sztompka (2005) e nos estudos organizacionais sobre processo. Ainda no campo da estratégia, explorou tipologias de formação e implementação. Utilizando metodologia interpretativa procurou identificar as contradições dialéticas geradas pela interação de agentes em eventos delimitados, imersos em uma realidade histórica, social e cultural. Foram realizadas entrevistas, pesquisa documental e técnicas de observação durante dois anos para a obtenção dos dados da pesquisa. O esquema analítico foi elaborado a partir de três dimensões principais: agentes, contexto e sistemas organizacionais. Na sua aplicação, foram analisados dois eventos constituintes do episódio de criação e implantação da nova unidade: 1) elaboração do projeto da EACH e 2) elaboração do curso de mestrado em sistemas complexos. Os resultados podem ser assim resumidos: a) o esquema analítico mostrou-se capaz de captar e retratar a complexidade do processo na EACH; b) o processo estratégico, nos eventos analisados, caracterizou-se como racional e formal, em determinados momentos, e como um processo negociado e em construção permanente; c) gestores e professores foram agentes principais, sendo influenciados por valores como autonomia e legitimidade; d) fatores organizacionais como cultura da USP, formas de liderança, poder e estruturas formais e não-formais mostraram-se muito influentes, assim como, o contexto histórico e social; e) constatou-se que os modelos organizacionais de análise da universidade conseguem captar apenas partes da complexidade do processo. São apresentadas proposições para a USP no intuito de contribuir para a efetivação do projeto. Espera-se que a tese contribua para ampliar o campo de pesquisa integrando os estudos educacionais aos estudos organizacionais. / After finding the complexity and specificity of organizational processes in Universities, this paper intends to answer the main question of how to increase the capacity of analysis, interpretation and explanation of the strategic process, that took place when the Arts, Science and Humanities School - EACH - was created and implanted in São Paulo University. In order to achieve that purpose, the main goal was to elaborate an analytical schema, capable of structuring the elements involved, and also to give meanings to strategy researchers and to strategists about the process that took place at EACH. 
It also intends to answer questions such as: how to characterize the strategic process, which agents, organizational resources and contextual relations were involved, and how they combined in action. Strategy was analyzed as a social construction in which the human being is a potential agent of social transformation. The thesis is based on the sociological theories of Giddens (2003) and Sztompka (2005), as well as on organizational studies of process (1992; 1995). Still within the field of strategy, it explored typologies of strategy formation and implementation. Using an interpretative methodology, an effort was made to identify the dialectical contradictions created by the agents' interaction in delimited events, embedded in a historical, social and cultural reality. Interviews, documentary research and observation techniques were carried out over a period of two years to obtain the research data. The analytical schema was formulated from three main dimensions: agents, context and organizational systems. In applying the schema, two events that are part of the process of creation and implementation of the new unit were analyzed: 1) building the EACH project and 2) building a master's course on complex systems. The outcomes can be summarized as follows: a) the analytical schema proved able to capture and portray the complexity of the process at EACH; b) the strategic process in the analyzed events was characterized as rational and formal at certain moments, and as negotiated and under permanent construction; c) managers and professors were the main agents, influenced by values such as autonomy and legitimacy; d) organizational factors such as the USP culture, forms of leadership, power and formal and non-formal structures proved to be very influential, as did the historical and social context; e) it was found that the organizational models used to analyse the university capture only parts of the complexity of the strategic process. Propositions are presented to USP as a contribution to the realization of the project. It is hoped that the thesis helps to broaden the research field by integrating educational studies with organizational studies.
|
89 |
Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction
Shutt, John N, 01 September 2010
"Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time, and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times. Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction, but is enacted at run-time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s. This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation. The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps' indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories. The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen's lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church-Rosser and Plotkin's Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop's Regular Combinatory Reduction Systems."
|
90 |
Numerical Methods for European Option Pricing with BSDEs
Min, Ming, 24 April 2018
This paper aims to calculate the all-inclusive European option price, based on an XVA model, numerically. For European-type options, the XVA can be calculated as the solution of a BSDE with a specific driver function. We use the FT scheme to find a linear approximation of the nonlinear BSDE and then use the linear regression Monte Carlo method to calculate the option price.
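As a minimal sketch of the regression Monte Carlo idea (not the thesis's FT scheme or XVA driver), the example below prices a plain European call by simulating geometric Brownian motion forward and stepping a linear BSDE with driver f(y) = -r*y backward, estimating the conditional expectations by polynomial least-squares regression. All model parameters and the choice of regression basis are illustrative assumptions.

```python
import numpy as np

np.random.seed(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps

# Forward simulation of geometric Brownian motion under the risk-neutral measure.
S = np.empty((n_steps + 1, n_paths))
S[0] = S0
for i in range(n_steps):
    dW = np.sqrt(dt) * np.random.randn(n_paths)
    S[i + 1] = S[i] * np.exp((r - 0.5 * sigma**2) * dt + sigma * dW)

# Terminal condition of the BSDE: Y_T = g(S_T) = (S_T - K)^+.
Y = np.maximum(S[-1] - K, 0.0)

# Backward Euler steps: Y_i = E_i[Y_{i+1}] + f(E_i[Y_{i+1}]) * dt with f(y) = -r*y,
# where E_i[.] is approximated by least-squares regression on powers of S_i.
for i in range(n_steps - 1, 0, -1):
    basis = np.vander(S[i] / S0, 4)               # polynomial basis 1, x, x^2, x^3
    coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    cond_exp = basis @ coef
    Y = cond_exp * (1.0 - r * dt)

Y0 = np.mean(Y) * (1.0 - r * dt)                  # at t = 0 all paths share S0
print(f"regression Monte Carlo price: {Y0:.3f}")  # Black-Scholes value is about 10.45
```

With this linear driver the scheme simply reproduces the discounted expectation of the payoff; a nonlinear XVA driver would enter through the backward step in the same way, which is why the regression-based backward induction carries over to the all-inclusive price.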
|