About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
231

Source term treatment of SWEs using surface gradient upwind method

Pu, Jaan H., Cheng, N., Tan, S.K., Shao, Songdong, 16 January 2012
Owing to unpredictable bed topography conditions in natural shallow flows, various numerical methods have been developed to improve the treatment of source terms in the shallow water equations. The surface gradient method is an attractive approach, as it offers a numerically simple way to model flows over topographically varied channels. To further improve the performance of this method, this study addresses the numerical treatment of the shallow-flow source terms. The so-called surface gradient upwind method (SGUM) integrates the source term treatment into the upwind discretization of the inviscid terms. A finite volume model (FVM) with the monotonic upstream scheme for conservation laws (MUSCL) is used, and the Harten-Lax-van Leer contact (HLLC) approximate Riemann solver is used to solve the Riemann problem in the FVM. The proposed method is validated against published analytical, numerical, and experimental data, indicating that the SGUM is robust and treats the source terms well under different flow conditions.
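The SGUM details are in the paper itself; as a self-contained illustration of the underlying idea (reconstructing interface states from the free-surface level so the bed-slope source exactly balances the flux gradient for still water), the following Python sketch implements a first-order well-balanced finite-volume step with an HLL flux and the hydrostatic reconstruction of Audusse et al., a close cousin of the surface gradient treatment. The grid and parameter choices are illustrative assumptions, not those of the paper.

import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def hll_flux(hL, huL, hR, huR):
    # HLL approximate Riemann flux for the 1D shallow water equations
    uL = huL / hL if hL > 1e-12 else 0.0
    uR = huR / hR if hR > 1e-12 else 0.0
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)
    sR = max(uL + cL, uR + cR)
    fL = np.array([huL, huL * uL + 0.5 * g * hL**2])
    fR = np.array([huR, huR * uR + 0.5 * g * hR**2])
    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR
    qL, qR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * fL - sL * fR + sL * sR * (qR - qL)) / (sR - sL)

def step(h, hu, z, dx, dt):
    # One first-order well-balanced update; interface depths are rebuilt
    # from the free-surface level so a lake at rest stays exactly at rest.
    n = len(h)
    u = np.where(h > 1e-12, hu / np.maximum(h, 1e-12), 0.0)
    dh, dhu = np.zeros(n), np.zeros(n)
    for i in range(n - 1):                     # face between cells i and i+1
        zf = max(z[i], z[i + 1])               # interface bed level
        hm = max(0.0, h[i] + z[i] - zf)        # left depth from surface level
        hp = max(0.0, h[i + 1] + z[i + 1] - zf)
        F = hll_flux(hm, hm * u[i], hp, hp * u[i + 1])
        dh[i] -= dt / dx * F[0]
        dhu[i] -= dt / dx * (F[1] + 0.5 * g * (h[i]**2 - hm**2))
        dh[i + 1] += dt / dx * F[0]
        dhu[i + 1] += dt / dx * (F[1] + 0.5 * g * (h[i + 1]**2 - hp**2))
    return h + dh, hu + dhu

# Quick check: still water over a bump should generate no spurious motion.
x = np.linspace(0.0, 10.0, 101)
z = 0.2 * np.exp(-((x - 5.0) ** 2))            # smooth bed bump
h = np.maximum(1.0 - z, 0.0)                   # flat free surface at eta = 1
hu = np.zeros_like(h)
h, hu = step(h, hu, z, dx=x[1] - x[0], dt=0.01)
print(np.abs(hu[1:-1]).max())                  # ~machine precision in the
                                               # interior (boundaries are left
                                               # untreated in this sketch)

A naive scheme that reconstructs the depth h directly instead of the surface level h + z would put a spurious momentum imbalance at every bed gradient, which is exactly the failure mode the surface gradient family of methods is designed to remove.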
232

Legal perspectives on the regulation of trade in (conflict) diamonds in Zimbabwe by means of the Kimberley Process Regulation Scheme / Paidamoyo Bryne Saurombe

Saurombe, Paidamoyo Bryne, January 2014
The Kimberley Process Certification Scheme (KPCS) was born out of international security concerns triggered by rebel groups that were using the proceeds of rough diamonds to fund conflict. Rebel groups used rough diamonds, acquired through gross human rights abuses, to fund conflicts aimed at overthrowing legitimate governments. The situation was particularly calamitous and ruinous in Angola, Sierra Leone, Liberia and the Democratic Republic of the Congo. In response, a unique coalition of governments, civil society groups and stakeholders in the diamond industry came together, with the support of the United Nations, and established a scheme to separate illicitly acquired diamonds from legally traded diamonds. The historical situation at the time allowed the KPCS to define conflict diamonds as "rough diamonds used by rebel movements or their allies to finance conflict aimed at undermining legitimate governments". However, the exploitation of the Marange diamonds in Zimbabwe shows that the use of the proceeds of so-called conflict diamonds is not limited to rebel movements aiming to wield power; such conflict can also be political, economic and military in nature. In Zimbabwe, there was a link between human rights abuses and the ZANU-PF-led government: ZANU-PF financed terror using Marange diamonds. There was international dissatisfaction with the way the KPCS certified Marange diamonds. The USA maintained sanctions on Zimbabwe, and Global Witness withdrew from the scheme in protest over its refusal to evolve. On the other hand, some participant countries applauded the scheme for its work in certifying Marange diamonds. This study evaluates the efficacy of the scheme in curbing conflict diamonds brought into legal trade by legitimate governments. The study concludes that there is a need for reform of the KPCS if it is to successfully separate conflict diamonds from clean diamonds in the face of changing forms of conflict. In meeting its objective, the KPCS applies an exclusion mechanism whereby participants of the scheme do not trade with non-participants. World Trade Organisation (WTO) rules prohibit discrimination amongst members, and the KPCS clearly violates this rule. Scholars have debated human rights exceptions in the General Agreement on Tariffs and Trade (GATT), and there is strong legal support for the idea that the KPCS is justified under GATT Articles XX and XXI. The KPCS presently operates under a waiver granted by the WTO under Article IX(3) and (4). Another challenge the scheme faces is its legal nature: scholars do not agree on whether to classify the scheme as hard law or soft law, and there is a need for clarity on this point. / LLM (Import and Export Law), North-West University, Potchefstroom Campus, 2014
234

Sensitivity analysis and improvement of simulated snow-covered surface albedo in subarctic and humid continental zones of eastern Canada with the CLASS land surface scheme

Thériault, Nathalie, January 2015
Résumé (translated): The Earth's energy balance is strongly influenced by variations in surface albedo (the fraction of solar energy reflected by a surface). These variations are modified by the presence, depth and physical properties of snow. Observed climate warming has a significant impact on the evolution of the snow cover, which greatly influences surface albedo and in turn modifies the climate. Despite the importance of surface albedo, several models compute the albedo empirically, which can lead to significant biases between simulations and observations depending on the surfaces studied. The Canadian Land Surface Scheme, CLASS (used in Canada in the Global Climate Model and the Canadian Regional Climate Model), models the spatial and temporal evolution of snow properties, including the albedo. The CLASS albedo is computed from the depth and age (metamorphism) of the snow on the ground, and from the accumulation of snow on the canopy. The objectives of this work are to analyse the behaviour of the albedo (simulated and measured) and to improve the surface albedo parameterization in winter over regions of eastern Canada. More precisely, the behaviour of the albedo was studied through a sensitivity analysis of CLASS 3.6 to prescribed parameters (parameters used in the model's calculations whose values are fixed and defined empirically), and through an analysis of the temporal variations of the albedo as a function of meteorological conditions over low-vegetation ("grass") and coniferous terrain. Improvement of the parameterization was then attempted by optimizing (for grass and conifers) or modifying (for grass) the calculations involving the prescribed parameters to which the CLASS albedo is sensitive. First, we showed that the sensitivity of the CLASS albedo over grass terrain depends strongly on the precipitation-rate threshold required for the albedo to be refreshed (to its maximum value) in the model. Varying this threshold causes the daily simulated snow-covered surface albedo to spread mostly between 0.62 and 0.8 (a larger spread than normally simulated). The model is also sensitive to the albedo refresh value, variations of which cause the daily snow-covered albedo to range from as low as 0.48 up to 0.9. Over forest (conifers), the model shows little sensitivity to the prescribed parameters studied. Comparison between simulated albedos and ground measurements shows a model underestimation of -0.032 (4.3%) at SIRENE (grass in southern Quebec), -0.027 (3.4%) at Goose Bay (grass at an arctic site) and -0.075 (27.1%) at James Bay (boreal forest). Compared with MODIS (MODerate resolution Imaging Spectroradiometer) data, the model underestimation at James Bay is -0.011 (5.2%). We show that the albedo measured during snowfall at Goose Bay is on average higher than the model's albedo refresh value (0.896 versus 0.84), which may explain the underestimation. In forest, one of the problems comes from the low albedo assigned to snow-covered vegetation (an increase of 0.17 in the visible), whereas the measured surface albedo can increase by 0.37 (relative to snow-free vegetation). Also, the albedo of snow on the canopy does not decrease with time, contrary to what is observed. Second, we attempted to improve the parameterization by optimizing prescribed parameters (no significant improvement was obtained) and by modifying the snow albedo refresh value over grass terrain: this value, normally fixed, was made to vary with temperature and precipitation rate. The results show that these modifications do not bring significant improvements in the RMSE (root mean square error) between simulated and measured albedo. The modifications are nevertheless relevant for adding variability to the high simulated albedo values and for improving our understanding of the behaviour of the albedo simulations. The methodology can also be reproduced in other studies that aim to assess the representativeness of, and improve, a model's simulations. / Abstract: The surface energy balance of northern regions is closely linked to variations in surface albedo (the fraction of solar radiation reflected by a surface). These variations are strongly influenced by the presence, depth and physical properties of the snowpack. Climate change significantly affects snow cover evolution and decreases surface and snow albedo, with a positive feedback to climate. Despite the importance of the albedo, many models compute it empirically, which can induce significant biases relative to albedo observations depending on the surfaces studied. The Canadian Land Surface Scheme, CLASS (used in Canada in the Canadian Regional Climate Model and the Global Climate Model), simulates the spatial and temporal evolution of snow state variables, including the albedo. The albedo is computed according to the depth of snow on the ground as well as the accumulation of snow in trees. The seasonal evolution of the albedo of snow on the ground is estimated in CLASS from an empirical aging expression in time and temperature, together with a "refresh" based on a snowfall-depth threshold. The seasonal evolution of snow on the canopy is estimated from an interception expression depending on tree type and snowfall density, and an empirical expression for the unloading rate over time. The objectives of this project are to analyse albedo behaviour (simulated and measured) and to improve CLASS simulations in winter for eastern Canada. To do so, sensitivity tests were performed on prescribed parameters (parameters used in CLASS computations whose values are fixed and determined empirically). Albedo evolution with time and meteorological conditions was also analysed for grass and coniferous terrain. Finally, we tried to improve the simulations by optimizing sensitive prescribed parameters for grass and coniferous terrain, and by modifying the refresh albedo value for grass terrain. First, we analysed albedo evolution and modelling biases. Grass terrain showed strong sensitivity to the precipitation-rate threshold (for the albedo to refresh to its maximum value) and to the albedo refresh value; both are affected by the input precipitation rate and phase. Modifying the precipitation-rate threshold causes the daily surface albedo to vary mainly (75% of winter data) between 0.62 and 0.8, a greater spread than in a normal simulation over winter. Modifying the albedo refresh value causes the surface albedo to vary mainly (75%) between 0.66 and 0.79, with extreme values (25% of the data) ranging from 0.48 to 0.9.

Coniferous areas showed little sensitivity to the prescribed parameters studied. Comparisons were also made between simulated and measured mean albedo during winter: CLASS underestimates the albedo by -0.032 (4.3%) at SIRENE (grass in southern Quebec), by -0.027 (3.4%) at Goose Bay (grass at an arctic site) and by -0.075 (27.1%) at James Bay (boreal forest) (or -0.011 (5.2%) compared to MODIS (MODerate resolution Imaging Spectroradiometer) data). A modelling issue over grass terrain is the small and constant maximum albedo value (0.84) compared to measured data in arctic conditions (0.896, with variations of the order of 0.09, at Goose Bay; 0.826 at SIRENE with warmer temperatures). In forested areas, a modelling issue is the small albedo increase (+0.17 in the visible range, +0.04 in the NIR) for the part of the vegetation covered by snow (the total surface albedo reaches at most 0.22), compared to observed events of high surface albedo (0.4). Another bias comes from the albedo of snow trapped on the canopy, which does not decrease with time, in contrast to the observed surface albedo, which is lower at the end of winter and suggests that snow metamorphism has occurred. Secondly, we tried to improve the simulations by optimizing prescribed parameters and by modifying the computation of the albedo's maximum value. Optimisations were made on the sensitive prescribed parameters and on those that seemed unsuited; no significant RMSE (root mean square error) improvements were obtained in either grass or coniferous areas. Improvements were then attempted by letting the maximum value (normally fixed) vary with temperature and precipitation rate over grass terrain. The results show that these modifications did not significantly improve the simulations' RMSE, although the latter modification improved the correlation between simulated and measured albedo. These statistics were computed over the whole dataset, which can mask the impact of the modifications (they mainly affected the albedo during precipitation events), but it gives an overview of the new model's performance. The modifications also added variability to the maximum values (closer to observed albedo) and increased our knowledge of surface albedo behaviour (simulated and measured). The methodology is also replicable for other studies aiming to analyse and improve the simulations of a surface model.
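To make the aging/refresh mechanism discussed above concrete, here is a minimal Python sketch of a CLASS-style empirical snow-albedo update: exponential decay toward an aged value, plus a refresh to the maximum when the snowfall rate crosses a threshold. The 0.84 refresh value is the one quoted in the abstract; every other constant (decay times, aged albedo, threshold) is an assumed placeholder, not a CLASS parameter.

import numpy as np

ALB_MAX = 0.84      # refresh value quoted in the abstract
ALB_AGED = 0.50     # lower bound for old snow (assumed)
TAU_COLD = 100.0    # aging e-folding time below freezing, hours (assumed)
TAU_MELT = 24.0     # faster aging for melting snow, hours (assumed)
RATE_MIN = 0.1      # snowfall-rate threshold for a refresh, mm/h (assumed)

def snow_albedo_step(alb, snowfall_mm_h, t_air_c, dt_h):
    # Empirical aging: decay toward the aged value mimics grain metamorphism.
    tau = TAU_MELT if t_air_c >= 0.0 else TAU_COLD
    alb = ALB_AGED + (alb - ALB_AGED) * np.exp(-dt_h / tau)
    # Refresh: a sufficiently strong snowfall resets the surface to fresh snow.
    if snowfall_mm_h >= RATE_MIN:
        alb = ALB_MAX
    return alb

# One synthetic week: snowfall on day 2, then dry aging at -5 degrees C.
alb = 0.75
for hour in range(7 * 24):
    snowfall = 0.5 if 48 <= hour < 54 else 0.0
    alb = snow_albedo_step(alb, snowfall, t_air_c=-5.0, dt_h=1.0)
print(round(alb, 3))

The sensitivity results above follow directly from this structure: the threshold RATE_MIN decides how often the refresh fires, and ALB_MAX caps every post-snowfall value, which is why the thesis experiments with making the refresh value vary with temperature and precipitation rate.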
235

Cubature methods and applications to option pricing

Matchie, Lydienne, 2010
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: In this thesis, higher-order numerical methods for the weak approximation of solutions of stochastic differential equations (SDEs) are presented. They are motivated by option pricing problems in finance, where the price of a given option can be written as the expectation of a functional of a diffusion process. Numerical methods of order at most one have been the most widely used so far; higher-order methods have been difficult to implement because of the unknown density of the iterated integrals of the d-dimensional Brownian motion present in the stochastic Taylor expansion. In 2001, Kusuoka constructed a higher-order approximation scheme based on Malliavin calculus, in which the iterated stochastic integrals are replaced by a family of finitely-valued random variables whose moments up to a certain fixed order are equivalent to the moments of iterated Stratonovich integrals of Brownian motion. This method has been shown to outperform the traditional Euler-Maruyama method. In 2004, it was refined by Lyons and Victoir into cubature on Wiener space: Lyons and Victoir extended the classical cubature method for approximating integrals in finite dimensions to approximating integrals on infinite-dimensional Wiener space. Since then, many authors have applied these ideas intensively, and the topic is today an active area of research. Our work is essentially based on the recently developed higher-order schemes built on the ideas of the Kusuoka approximation and the Lyons-Victoir "cubature on Wiener space", applied mostly to option pricing: the Ninomiya-Victoir (N-V) and Ninomiya-Ninomiya (N-N) approximation schemes. It should be stressed that many other applications of these schemes have been developed, among which are the Alfonsi scheme for the CIR process and the decomposition method presented by Kohatsu and Tanaka for jump-driven SDEs. After sketching the main ideas of numerical approximation methods in Chapter 1, we start Chapter 2 by setting up essential terminology and definitions. A discussion of the stochastic Taylor expansion based on iterated Stratonovich integrals is presented, and we close the chapter by illustrating this expansion with the Euler-Maruyama approximation scheme. Chapter 3 contains the main ideas of the Kusuoka approximation scheme, with a focus on the implementation of the algorithm; the scheme is applied to the pricing of an Asian call option, and numerical results are presented. We start Chapter 4 by taking a look at classical cubature formulas, after which we present in a simple way the general ideas of "cubature on Wiener space", also known as the Lyons-Victoir approximation scheme. This is an extension of the classical cubature method: the aim is to construct cubature formulas for approximating integrals defined on Wiener space and, consequently, to develop higher-order numerical schemes. It is based on the stochastic Stratonovich expansion and can be viewed as an extension of the Kusuoka scheme. Applying the ideas of the Kusuoka and Lyons-Victoir approximation schemes, Ninomiya-Victoir and Ninomiya-Ninomiya developed new numerical schemes of order 2, in which the problem of solving an SDE is transformed into a problem of solving ordinary differential equations (ODEs). In Chapter 5, we begin with a general presentation of the N-V algorithm. We then apply this algorithm to the pricing of an Asian call option, and we also consider the optimal portfolio strategies problem introduced by Fukaya. The implementation and numerical simulation of the algorithm for these problems are performed, and we find that the N-V algorithm runs significantly faster than the traditional Euler-Maruyama method. Finally, the N-N approximation method is introduced. The idea behind this scheme is to construct an ODE-valued random variable whose average approximates the solution of a given SDE. The Runge-Kutta method for ODEs is then applied to the ODE drawn from the random variable and a linear operator is constructed. We derive the general expression for the constructed operator and apply the algorithm to the pricing of an Asian call option under the Heston volatility model. / AFRIKAANSE OPSOMMING (translated): In this thesis, higher-order numerical methods for the weak approximation of solutions of stochastic differential equations (SDEs) are presented. The motivation for this work comes from a problem in finance, namely option pricing, where the price of a given option can be described as the expected value of a functional of a diffusion process. Numerical methods of order at most one have been in general use until now; it is difficult to apply higher-order methods because of the unknown density of iterated integrals of the d-dimensional Brownian motion present in the stochastic Taylor expansion. In 2001, Kusuoka constructed a higher-order approximation scheme based on Malliavin calculus, in which the iterated stochastic integrals are replaced by a family of finitely-valued random variables whose moments exist up to a certain fixed order. It has been demonstrated that this method outperforms the traditional Euler-Maruyama method. In 2004 the method was refined by Lyons and Victoir into cubature on Wiener space: Lyons and Victoir extended the classical cubature method for approximating integrals in finite dimensions to the approximation of integrals on infinite-dimensional Wiener space. Since then, many authors have applied these ideas intensively, and the topic is today an active field of research. Our work is mainly based on the recently developed higher-order schemes built on the ideas of the Kusuoka approximation and the Lyons-Victoir "cubature on Wiener space", applied in particular to option pricing: the Ninomiya-Victoir and Ninomiya-Ninomiya approximation schemes. It should be emphasized that many other applications of these schemes have been developed, among them the Alfonsi scheme for the CIR process and the decomposition method proposed by Kohatsu and Tanaka for jump-driven SDEs. After a sketch of the main ideas behind numerical approximation methods in Chapter 1, Chapter 2 sets down essential terminology and definitions. A discussion of the stochastic Taylor expansion based on iterated Stratonovich integrals is given, after which the chapter closes with an illustration of this expansion via the Euler-Maruyama approximation scheme. Chapter 3 contains the main ideas behind the Kusuoka approximation scheme, with attention to the implementation of the algorithm; the scheme is applied to the pricing of an Asian call option, and numerical results are presented. We begin Chapter 4 by looking at classical cubature formulas, after which we present in a simple way the general ideas of "cubature on Wiener space", also known as the Lyons-Victoir approximation scheme, as an extension of the classical cubature method. The aim of this scheme is to construct cubature formulas for approximating integrals defined on Wiener space and, consequently, to develop higher-order numerical schemes. It is based on the stochastic Stratonovich expansion and can be regarded as an extension of the Kusuoka scheme. By applying Kusuoka's and Lyons-Victoir's ideas on approximation schemes, Ninomiya-Victoir and Ninomiya-Ninomiya developed new numerical schemes of order 2, in which the problem of solving SDEs is converted into one of solving ordinary differential equations (ODEs). These two schemes are set out in Chapter 5. Although the approximations are similar, there is a notable difference in the algorithms themselves. The chapter begins with a general exposition of the Ninomiya-Victoir algorithm, using an arbitrary fixed time horizon T. This is applied to option pricing and optimal portfolio strategy problems. Numerical simulations are also carried out; the performance of the Ninomiya-Victoir algorithm is studied and compared with the Euler-Maruyama method, and we note that the Ninomiya-Victoir algorithm is considerably faster. The most important result of the Ninomiya-Ninomiya approximation scheme is also presented. Using the idea of a Lie algebra, Ninomiya and Ninomiya constructed an ODE-valued random variable whose average approximates the solution of a given SDE. The Runge-Kutta method for ODEs is then applied to the ODE drawn from the random variable and a linear operator is constructed. A generalized expression for the constructed operator is derived and the algorithm is applied to the pricing of an Asian option under the Heston volatility model.
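As a concrete illustration of the flow-composition structure of the N-V scheme, here is a Python sketch for geometric Brownian motion written in Stratonovich form, where both coordinate ODE flows have closed forms. For this particular model the composition reproduces the exact log-normal step, so the example shows the mechanics rather than the order-2 advantage; the market parameters and the Monte Carlo Asian-call wrapper are illustrative assumptions.

import numpy as np

# Stratonovich form of GBM: dX = V0(X) dt + V1(X) o dW, with drift vector
# field V0(x) = (mu - sigma^2/2) x and diffusion vector field V1(x) = sigma x.
mu, sigma = 0.05, 0.2            # assumed drift (= risk-free rate) and volatility
S0, K, T = 100.0, 100.0, 1.0     # assumed spot, strike, maturity

def flow_V0(x, s):
    # exact solution of dx/ds = V0(x) after "time" s
    return x * np.exp((mu - 0.5 * sigma**2) * s)

def flow_V1(x, s):
    # exact solution of dx/ds = V1(x) after "time" s
    return x * np.exp(sigma * s)

def nv_step(x, dt, z):
    # Ninomiya-Victoir composition: half a drift flow, the diffusion flow
    # driven by sqrt(dt)*Z, then the other half of the drift flow
    # (in one dimension the two orderings of the scheme coincide).
    return flow_V0(flow_V1(flow_V0(x, 0.5 * dt), np.sqrt(dt) * z), 0.5 * dt)

def asian_call_nv(n_paths=100_000, n_steps=50, seed=0):
    # Monte Carlo price of an arithmetic-average Asian call under GBM.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, S0)
    avg = np.zeros(n_paths)
    for _ in range(n_steps):
        x = nv_step(x, dt, rng.standard_normal(n_paths))
        avg += x / n_steps
    return np.exp(-mu * T) * np.maximum(avg - K, 0.0).mean()

print(f"N-V Asian call estimate: {asian_call_nv():.4f}")

For SDEs whose coordinate ODEs have no closed-form flows, each flow_V* call is replaced by a few steps of an ODE integrator such as Runge-Kutta, which is exactly the route the N-N scheme formalizes.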
236

Direct numerical simulation of gas transfer at the air-water interface in a buoyant-convective flow environment

Kubrak, Boris, January 2014
The gas transfer process across the air-water interface in a buoyant-convective environment has been investigated by Direct Numerical Simulation (DNS) to gain improved understanding of the mechanisms that control the process. The process is controlled by a combination of molecular diffusion and turbulent transport by natural convection. The convection that arises when a water surface is cooled is a combination of Rayleigh-Bénard convection and the Rayleigh-Taylor instability. It is therefore necessary to accurately resolve the flow field as well as the molecular diffusion and the turbulent transport, which together contribute to the total flux. One of the challenges from a numerical point of view is handling the very different levels of diffusion when solving the convection-diffusion equation: the temperature diffusivity in water is relatively high, whereas the molecular diffusivity of most environmentally important gases is very low. This low molecular diffusivity leads to steep gradients in the gas concentration, especially near the interface, and resolving these steep gradients is the limiting factor for an accurate resolution of the gas concentration field. A detailed study has therefore been carried out to find the limits of an accurate resolution of the transport of a low-diffusivity scalar. This problem of diffusive scalar transport was studied in numerous 1D, 2D and 3D numerical simulations. A fifth-order weighted essentially non-oscillatory (WENO) scheme was deployed to solve the convection of the scalars, in this case gas concentration and temperature; the WENO scheme was modified and tested in 1D scalar transport to work on non-uniform meshes. To obtain the 2D and 3D velocity fields, the incompressible Navier-Stokes equations were solved on a staggered mesh. The convective terms were solved using a fourth-order accurate kinetic-energy-conserving discretization, while the diffusive terms were discretized using a fourth-order central finite difference method for the second derivative. For the time integration of the velocity field a second-order Adams-Bashforth method was employed. The Boussinesq approximation was used to model buoyancy due to temperature differences in the water, assuming a linear relationship between temperature and density. A mesh sensitivity study found that the velocity field is fully resolved on a relatively coarse mesh, as the level of turbulence is relatively low; a finer mesh is, however, required for the gas concentration field to fully capture the steep gradients that occur because of its low diffusivity. A combined dual-mesh approach was therefore used, in which the velocity field was solved on a coarser mesh and the scalar fields (gas concentration and temperature) were solved on an overlying finer sub-mesh, with the velocities interpolated onto the finer sub-mesh by a second-order method. A mesh sensitivity study identified the minimum mesh size required for an accurate solution of the scalar field over a range of Schmidt numbers from Sc = 20 to Sc = 500. Initially, the Rayleigh-Bénard convection leads to very fine plumes of cold, high-gas-concentration liquid that penetrate the deeper regions. High-concentration areas remain in fine tubes that are fed from the surface. The temperature, however, diffuses much more strongly and faster over time, and the results show that temperature alone is not a good identifier of detailed high-concentration areas when gas transfer is investigated experimentally.

Over large timescales the temperature field becomes much more homogeneous, whereas the concentration field stays more heterogeneous. The temperature can, however, be used to estimate the overall transfer velocity KL: if the temperature behaves like a passive scalar, a relation between the Schmidt or Prandtl number and KL is evident. A qualitative comparison of the numerical results from this work with existing experiments was also carried out. Laser-induced fluorescence (LIF) images of the oxygen concentration field and Schlieren photographs were compared with the results of the 3D simulations and found to be in good agreement. A detailed quantitative analysis of the process was carried out: a study of the horizontally averaged convective and diffusive mass fluxes enabled the calculation of the transfer velocity KL at the interface. With KL known, the renewal rate r for the so-called surface renewal model could be determined; the renewal rates were found to be higher than in experiments in a grid-stirred tank. The horizontally averaged mean and fluctuating concentration profiles were analysed, and from these the boundary layer thickness could be accurately monitored over time. Much of the new DNS data obtained in this research would be inaccessible in experiments and reveals previously unknown details of gas transfer at the air-water interface.
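As an illustration of the scalar convection machinery described above, here is the classic fifth-order WENO reconstruction of Jiang and Shu on a uniform mesh in Python (the textbook scheme, not the non-uniform-mesh variant developed in the thesis).

import numpy as np

def weno5_left(v):
    # Fifth-order WENO reconstruction of the left-biased interface value
    # v_{i+1/2} from cell averages (Jiang & Shu 1996 weights); returns one
    # value per interior interface, i.e. len(v) - 4 values.
    eps = 1e-6
    vm2, vm1, v0, vp1, vp2 = v[:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]
    # third-order candidate reconstructions on the three substencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # smoothness indicators penalizing oscillatory substencils
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built from the optimal linear weights 1/10, 6/10, 3/10
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

# Sanity check on a steep but smooth front, like a low-diffusivity scalar.
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(20.0 * (x - 0.5))
print(weno5_left(u)[:3])

For a positive advection velocity, the upwind interface flux is simply the velocity times this reconstructed value; steep scalar fronts like the high-Schmidt-number concentration field above are exactly where the nonlinear weights earn their cost.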
237

Bandwidth and energy-efficient route discovery for noisy Mobile Ad-hoc NETworks

Adarbah, Haitham, January 2015
Broadcasting is used in on-demand routing protocols to discover routes in Mobile Ad-hoc Networks (MANETs). On-demand routing protocols such as Ad-hoc On-demand Distance Vector (AODV) commonly employ pure-flooding-based broadcasting to discover new routes. In pure flooding, a route request (RREQ) packet is broadcast by the source node and each receiving node rebroadcasts it; this continues until the RREQ packet arrives at the destination node. Pure flooding generates excessive redundant routing traffic that may lead to the broadcast storm problem (BSP) and significantly deteriorate the performance of MANETs. A number of probabilistic broadcasting schemes have been proposed in the literature to address the BSP. However, these schemes do not consider the thermal noise and interference that exist in real-life MANETs, and therefore do not perform well in practice: real-life MANETs are noisy and their communication is not error-free. This research argues that a broadcast scheme that simultaneously considers the effects of thermal noise, co-channel interference, and node density in the neighbourhood can reduce the broadcast storm problem and enhance MANET performance. To establish this, three investigations have been carried out: first, the effect of carrier-sensing ranges on on-demand routing protocols such as AODV and their impact on interference; second, the effects of thermal noise on on-demand routing protocols; and third, an evaluation of pure flooding and probabilistic broadcasting schemes under noisy and noiseless conditions. The findings of these investigations are exploited to propose a Channel Adaptive Probabilistic Broadcast (CAPB) scheme to disseminate RREQ packets efficiently. The proposed CAPB scheme determines the probability of rebroadcasting RREQ packets on the fly according to the current Signal to Interference plus Noise Ratio (SINR) and the node density in the neighbourhood. The proposed scheme and two related state-of-the-art schemes from the literature are implemented in standard AODV to replace the pure-flooding-based broadcast scheme. Ns-2 simulation results show that the proposed CAPB scheme outperforms the other schemes in terms of routing overhead, average end-to-end delay, throughput and energy consumption.
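The abstract does not give the CAPB formula itself, so the following Python sketch shows one plausible shape for an SINR- and density-adaptive rebroadcast decision; all thresholds and the functional form are assumptions for illustration, not the thesis's actual scheme.

import random

SINR_LOW, SINR_HIGH = 5.0, 25.0   # assumed dB range from unusable to clean
N_AVG = 8                          # assumed network-average neighbour count
P_FLOOR = 0.2                      # assumed floor keeping sparse nets connected

def rebroadcast_probability(sinr_db, n_neighbours):
    # Channel term: tends to 0 on a noisy/interfered channel, 1 on a clean one.
    ch = min(max((sinr_db - SINR_LOW) / (SINR_HIGH - SINR_LOW), 0.0), 1.0)
    # Density term: dense neighbourhoods need fewer rebroadcasts, sparse ones more.
    dens = min(N_AVG / max(n_neighbours, 1), 1.0)
    return min(P_FLOOR + (1.0 - P_FLOOR) * ch * dens, 1.0)

def on_rreq_received(sinr_db, n_neighbours):
    # Decide on the fly whether this node forwards the RREQ.
    return random.random() < rebroadcast_probability(sinr_db, n_neighbours)

print(rebroadcast_probability(20.0, 4))   # good channel, sparse: forward often
print(rebroadcast_probability(8.0, 20))   # poor channel, dense: forward rarely

The intent matches the thesis's argument: on poor channels a rebroadcast is likely wasted anyway, and in dense neighbourhoods most rebroadcasts are redundant, so suppressing them cuts overhead and energy without hurting reachability.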
238

Implementation of a language with strong mobility

Epardaud, Stephane, 18 February 2008
In order to address the problems raised by the integration of a growing number of programmable devices, we propose a mobile-agent language. These mobile agents are able to migrate from one device or computer to another in order to make the best use of its resources, which makes it possible to exploit the capabilities of each device from a single program. This language is ULM: Un Langage pour la Mobilité. In this thesis we present its features, its particular characteristics, and its implementation. ULM is a derivative of the Scheme language, to which we have added the functionality related to mobility and to interaction between mobile agents. ULM has a set of primitives enabling the creation of agents with strong mobility, with deterministic cooperative scheduling, and control primitives such as suspension and weak preemption. We present the integration of these primitives into the Scheme language, their interaction, and the addition of new primitives such as strong preemption and safe migration. We then present the denotational semantics of the language and its implementation by means of a bytecode compiler and two virtual machines: one written in Bigloo Scheme for execution on traditional computers, the other written in Java ME for mobile phones. We then present the possible use of ULM as a replacement for programs written around event loops, the interface between ULM and external languages, some examples of the use of ULM, and future work, before concluding.
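ULM's primitives are Scheme constructs; purely as a language-neutral sketch of what deterministic cooperative scheduling with suspension looks like, here is a minimal round-robin scheduler in Python using generators as agents. The names and structure are illustrative assumptions, not ULM's API.

from collections import deque

class Scheduler:
    # Deterministic cooperative scheduler: agents run until they yield at an
    # explicit cooperation point, so the interleaving is reproducible.
    def __init__(self):
        self.ready = deque()
        self.suspended = {}

    def spawn(self, name, agent):
        self.ready.append((name, agent))

    def suspend(self, name):
        # Suspension primitive: pull the agent out of the ready queue.
        for i, (n, a) in enumerate(self.ready):
            if n == name:
                del self.ready[i]
                self.suspended[name] = a
                return

    def resume(self, name):
        if name in self.suspended:
            self.ready.append((name, self.suspended.pop(name)))

    def run(self, max_steps=100):
        for _ in range(max_steps):
            if not self.ready:
                return
            name, agent = self.ready.popleft()
            try:
                next(agent)                   # run to the next yield point
                self.ready.append((name, agent))
            except StopIteration:
                pass                          # agent terminated

def agent(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                                  # cooperation point

sched = Scheduler()
sched.spawn("a", agent("a", 3))
sched.spawn("b", agent("b", 3))
sched.run()   # always prints a and b strictly alternating: deterministic

Migration, which this sketch leaves out, roughly amounts to serializing an agent's execution state and re-spawning it in a scheduler on another machine; making that safe and complete is the hard part of strong mobility that the two ULM virtual machines implement.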
239

Numerical Study of Sediment Transport under Unsteady Flow

Zhang, Shiyan, January 2011
Numerical models for simulating sediment transport in unsteady flow are incomplete in several respects: first, numerical schemes that have proved suitable for simulating flow over a rigid bed need to be re-evaluated for unsteady flow over a mobile bed; second, existing non-equilibrium sediment transport models are empirically developed and therefore lack consistency in the evaluation of the non-equilibrium parameters; third, sediment transport in various applications has unique features that need to be considered in the models. Sediment transport in unsteady flows was studied using analytical and numerical methods. A one-dimensional (1D) finite volume method (FVM) model was developed. Five popular numerical schemes were implemented in the model and their performance was evaluated under highly unsteady flow conditions. A novel physically-based non-equilibrium sediment transport model was established to describe the non-equilibrium sediment transport process. Infiltration effects on flow and sediment transport were included to make the model applicable to simulating irrigation-induced soil erosion in furrows. The Laursen (1958) formula was adopted and modified to calculate the erodibility of fine-grained soil, and then verified against laboratory and field datasets. The numerical model was applied to a series of simulations of sediment transport in highly unsteady flow, including dam-break erosional flow, flash floods in natural rivers, and irrigation flows, and proved applicable across these settings. The first-order schemes were able to produce smooth and reasonably accurate results, while spurious oscillations were observed in the results produced by the second-order schemes. The proposed non-equilibrium sediment transport model yielded better results than several other models in the literature, and the modified Laursen (1958) formula was applicable for calculating the erodibility of soil in irrigation. Additionally, it was found that the effects of jet erosion and the structural failure of discontinuous bed topography cannot be properly accounted for due to the limitations of a 1D model. The comparison between simulated and measured sediment discharge hydrographs indicated a potential process associated with the transport of fine-grained soil in irrigation furrows.
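The thesis develops its own physically-based non-equilibrium model; as a generic illustration of the non-equilibrium idea it improves upon, here is a Python sketch of the common adaptation-length closure, in which the sediment transport rate relaxes toward its equilibrium capacity over a characteristic length. The capacity function and all coefficients are illustrative assumptions, not the modified Laursen formula.

import numpy as np

def nonequilibrium_sediment(x, h, u, L_ad, capacity, qs_in=0.0):
    # March the relaxation d(qs)/dx = (qs_eq - qs) / L_ad down a 1D reach.
    # x        : node coordinates (m), ascending
    # h, u     : depth (m) and velocity (m/s) at the nodes
    # L_ad     : adaptation length (m) controlling how fast qs relaxes
    # capacity : function (h, u) -> equilibrium transport rate qs_eq
    qs = np.empty_like(x)
    qs[0] = qs_in
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        qeq = capacity(h[i], u[i])
        # implicit (backward-Euler-in-x) update keeps the relaxation stable
        qs[i] = (qs[i - 1] + dx / L_ad * qeq) / (1.0 + dx / L_ad)
    return qs

# A simple power-law capacity as a stand-in for a Laursen-type formula;
# the coefficient and exponent here are purely illustrative.
capacity = lambda h, u: 1e-4 * u**3

x = np.linspace(0.0, 50.0, 101)
h = np.full_like(x, 0.05)          # shallow furrow-like flow
u = np.full_like(x, 0.3)
print(nonequilibrium_sediment(x, h, u, L_ad=5.0, capacity=capacity)[-1])

The adaptation length L_ad is exactly the kind of empirically chosen non-equilibrium parameter whose inconsistent evaluation the abstract criticizes, which motivates replacing it with a physically derived closure.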
240

A Novel Authenticity of an Image Using Visual Cryptography

Koshta, Prashant Kumar, Thakur, Shailendra Singh, 1 April 2012
Information security is becoming increasingly important in communication and data storage. Data transferred from one party to another over an insecure channel (e.g., the Internet) can be protected by cryptography. The encrypting technologies of traditional and modern cryptography are usually used to keep the message from being disclosed, and public-key cryptography usually uses complex mathematical computations to scramble the message. / A digital signature is an important public-key primitive that performs the function of a conventional handwritten signature for entity authentication, data integrity, and non-repudiation, especially in the electronic commerce environment. Currently, most conventional digital signature schemes are based on hard mathematical problems, and these algorithms require computers to perform heavy and complex computations to generate and verify keys and signatures. In 1995, Naor and Shamir proposed visual cryptography (VC) for binary images; VC has high security and requires only simple computations. The purpose of this thesis is to provide an alternative to current digital signature technology. We introduce a new digital signature scheme based on the concept of non-expansion visual cryptography. A visual digital signature scheme is a method that enables visual verification of the authenticity of an image in an insecure environment without the need to perform any complex computations. The proposed scheme generates visual shares and manipulates them using the simple Boolean OR operation, rather than generating and computing large, long random integer values as in the conventional digital signature schemes currently in use.
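The thesis's signature construction is not reproduced in the abstract; as a minimal Python sketch of the non-expansion visual cryptography it builds on, here is the classic random-grid (2, 2) scheme of Kafri and Keren, in which the shares are the same size as the secret image and "stacking" is the Boolean OR mentioned above.

import numpy as np

def make_shares(secret, rng=None):
    # Random-grid (2, 2) visual cryptography: share1 is uniformly random;
    # share2 copies it where the secret is white and complements it where
    # the secret is black. No pixel expansion: shares match the secret size.
    rng = rng or np.random.default_rng()
    share1 = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)
    share2 = np.where(secret, 1 - share1, share1).astype(np.uint8)
    return share1, share2

def stack(share1, share2):
    # Physically overlaying transparencies corresponds to Boolean OR:
    # black secret pixels come out fully black, white ones ~50% black.
    return share1 | share2

secret = np.zeros((4, 8), dtype=bool)
secret[1:3, 2:6] = True                 # a black rectangle as the "image"
s1, s2 = make_shares(secret)
print(stack(s1, s2))                    # rectangle solid, background noisy

Each share alone is uniformly random and so leaks nothing about the secret; a signature scheme can then, for example, bind one share to the signer and one to the image, with verification done by stacking, which is the kind of computation-free check the abstract describes.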
