381 |
Fast simulation of rare events in Markov level/phase processes
Luo, Jingxiang, 19 July 2004
Methods for the efficient Monte Carlo simulation of rare events have been studied for several decades. Rare events are very important in the context of evaluating high-quality computer and communication systems. Meanwhile, the efficient simulation of systems involving rare events poses great challenges.
A simulation method is said to be efficient if the number of replicas required to get accurate estimates grows slowly, compared to the rate at which the probability of the rare event approaches zero.
Despite the great success of the two mainstream methods, importance sampling (IS) and importance splitting, each of them can become inefficient under certain conditions, as reported in some recent studies.
The purpose of this study is to look for possible enhancements of fast simulation methods. I focus on the "level/phase process", a Markov process in which the level and the phase are two state variables. Furthermore, changes of level and phase are induced by events, whose rates are independent of the level except at a boundary.
For such a system, the event of reaching a high level occurs rarely, provided the system typically stays at lower levels. The states at those high levels constitute the rare event set.
Though simple, this models a variety of applications involving rare events.
In this setting, I have studied the efficiency of two fast simulation methods: the rate tilting method and the adaptive splitting method.
I have compared the efficiency of rate tilting with that of several similar methods used previously. The experiments are done using queues in tandem, an often-used test bench for rare event simulation. The schema of adaptive splitting has not been described in the literature before; for this method, I analyze its efficiency to show its superiority over the conventional splitting method.
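For context on the splitting family of methods, the sketch below implements a conventional fixed-level splitting estimator, in Python, for the probability that a simple birth-death level process reaches a high level before returning to the empty state. The rates, thresholds, and splitting factor are illustrative assumptions; this is the conventional scheme referred to above, not the adaptive splitting schema developed in the thesis.

```python
import random

# A minimal fixed-level splitting sketch for P(level reaches L before returning to 0)
# in a simple birth-death level process; rates are assumed for illustration only.
LAMBDA, MU = 0.3, 1.0            # "up" and "down" event rates (rare event: lambda < mu)
THRESHOLDS = [2, 4, 6, 8, 10]    # intermediate levels; the last one defines the rare set
N0, SPLIT = 1000, 5              # initial trajectories and offspring per survivor

def run_until(level, target):
    """Advance the embedded jump chain until it hits `target` or level 0."""
    p_up = LAMBDA / (LAMBDA + MU)
    while 0 < level < target:
        level += 1 if random.random() < p_up else -1
    return level

prob = 1.0
survivors = [1] * N0             # all trajectories start just above the empty state
for target in THRESHOLDS:
    hits = sum(run_until(s, target) >= target for s in survivors)
    if hits == 0:
        prob = 0.0
        break
    prob *= hits / len(survivors)            # conditional probability for this stage
    survivors = [target] * (hits * SPLIT)    # clone each surviving trajectory

print("splitting estimate:", prob)
```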
The way that a system approaches a designated rare event set is called the system's large deviation behavior. To gain insight into the relation between system behavior and the efficiency of IS simulation, I quantify the large deviation behavior and its complexity.
This work indicates that the system's large deviation behavior has a significant impact on the efficiency of a simulation method.
|
382 |
Design and Application of Discrete Explicit Filters for Large Eddy Simulation of Compressible Turbulent Flows
Deconinck, Willem, 24 February 2009
In the context of Large Eddy Simulation (LES) of turbulent flows, there is a current need to compare and evaluate different proposed subfilter-scale models. In order to carefully compare subfilter-scale models, and to compare LES predictions to Direct Numerical Simulation (DNS) results (the latter would be helpful in the comparison and validation of models), there is a real need for a "grid-independent" LES capability, and explicit filtering methods offer one means by which this may be achieved.
Advantages of explicit filtering are that it provides a means for eliminating aliasing errors, allows for the direct control of commutation errors, and, most importantly, allows a decoupling between the mesh spacing and the filter width; this coupling is the primary reason for the difficulties in comparing LES solutions obtained on different grids. This thesis considers the design and assessment of discrete explicit filters and their application to isotropic turbulence prediction.
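As a minimal illustration of what a discrete explicit filter does, the sketch below applies the classical three-point trapezoidal filter to a 1-D periodic field and shows how near-grid-scale content is attenuated while the filter width stays tied to the stencil. This is a generic textbook filter, not one of the filters designed in the thesis, and the grid and field are assumed for the example.

```python
import numpy as np

def trapezoidal_filter(u):
    """Apply the three-point discrete filter u_bar_i = (u_{i-1} + 2 u_i + u_{i+1}) / 4,
    a common explicit filter with an effective width of roughly two mesh spacings.
    Periodic boundaries are assumed for simplicity."""
    return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

# Demonstrate attenuation of a high-wavenumber mode on a periodic 1-D grid.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(16.0 * x)       # resolved mode + near-grid-scale mode
u_bar = trapezoidal_filter(u)

# The low-wavenumber content passes nearly unchanged; the k = 16 mode is damped.
spec = np.abs(np.fft.rfft(u)) / n
spec_bar = np.abs(np.fft.rfft(u_bar)) / n
print("mode k=1 :", spec[1], "->", spec_bar[1])
print("mode k=16:", spec[16], "->", spec_bar[16])
```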
|
383 |
A novel approach to reduce the computation time for CFD; hybrid LES–RANS modelling on parallel computers
Turnbull, Julian, January 2003
Large Eddy Simulation (LES) is a method of obtaining high-accuracy computational results for modelling fluid flow. Unfortunately, it is computationally expensive, limiting its use to those with access to large parallel machines. However, the use of LES may lead to an over-resolution of the problem, because the bulk of the computational domain could be adequately modelled using the Reynolds-averaged approach.
A study has been undertaken to assess the feasibility, in terms of both accuracy and computational efficiency, of using a parallel computer to solve both LES and RANS-type turbulence models on the same domain, for the problem of flow over a circular cylinder at Reynolds number 3 900. To do this, the domain has been created and then divided into two sub-domains, one for the LES model and one for the k-epsilon turbulence model. The hybrid model has been developed specifically for a parallel computing environment, and the user is able to allocate modelling techniques to processors in a way which enables expansion of the model to any number of processors.
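A minimal sketch of such a model-to-processor allocation using mpi4py is shown below; the partitioning map, field layout, and halo exchange pattern are illustrative assumptions rather than the solver actually developed in the thesis.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Assumed allocation of turbulence models to processors: the first half of the
# ranks own the LES (Smagorinsky) sub-domain around the cylinder, the remaining
# ranks own the downstream k-epsilon sub-domain.
model = "LES" if rank < size // 2 else "RANS"

# Each rank owns a slab of cells plus one halo cell on each side (illustrative).
n_local = 32
field = np.zeros(n_local + 2)     # e.g. a velocity component, including halos

def exchange_halos(f):
    """Swap halo values with neighbouring ranks so the LES and k-epsilon
    sub-domains see each other's boundary data every time step."""
    left, right = rank - 1, rank + 1
    if right < size:
        comm.Sendrecv(sendbuf=f[-2:-1], dest=right, recvbuf=f[-1:], source=right)
    if left >= 0:
        comm.Sendrecv(sendbuf=f[1:2], dest=left, recvbuf=f[0:1], source=left)

for step in range(10):
    exchange_halos(field)
    if model == "LES":
        # advance this slab with the Smagorinsky subgrid model (omitted)
        pass
    else:
        # advance this slab with the k-epsilon model (omitted)
        pass
```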
Computational experimentation has shown that the Smagorinsky model can be used in this combination to capture the vortex shedding from the cylinder, with the information successfully passed to the k-epsilon model for the dissipation of the vortices further downstream. The results have been compared with high-accuracy LES results and with both k-epsilon and Smagorinsky LES computations on the same domain. The hybrid models developed compare well with the Smagorinsky model, capturing the vortex shedding with the correct periodicity.
Suggestions for future work have been made to develop this idea further, and to investigate the possibility of using the technology for the modelling of mixing and fast chemical reactions, based on the more accurate prediction of the turbulence levels in the LES sub-domain.
|
384 |
The Impact of Non-thermal Processes in the Intracluster Medium on Cosmological Cluster Observables
Battaglia, Nicholas Ambrose, 05 January 2012
In this thesis we describe the generation and analysis of hydrodynamical simulations of galaxy clusters and their intracluster medium (ICM), using large cosmological boxes to generate large samples, in conjunction with individual cluster computations. The main focus is the exploration of non-thermal processes in the ICM and the effect they have on the interpretation of observations used for cosmological constraints. We provide an introduction to the cosmological structure formation framework for our computations and an overview of numerical simulations and observations of galaxy clusters. We explore cluster magnetic field observables through radio relics, extended entities in the ICM characterized by their diffuse radio emission. We show that statistical quantities such as radio relic luminosity functions and rotation measure power spectra are sensitive to magnetic field models. The spectral index of the radio relic emission provides information on structure formation shocks, e.g., on their Mach number. We develop a coarse-grained stochastic model of active galactic nucleus (AGN) feedback in clusters and show the impact of such inhomogeneous feedback on the thermal pressure profile. We explore variations in the pressure profile as a function of cluster mass, redshift, and radius and provide a constrained fitting function for this profile. We measure the degree of non-thermal pressure in the gas from internal cluster bulk motions and show it has an impact on the slope and scatter of the Sunyaev-Zel'dovich (SZ) scaling relation. We also find that the gross shape of the ICM, as characterized by scaled moment of inertia tensors, affects the SZ scaling relation. We demonstrate that the shape and amplitude of the SZ angular power spectrum are sensitive to AGN feedback, and that this affects the cosmological parameters determined from high resolution ACT and SPT cosmic microwave background data. We compare analytic, semi-analytic, and simulation-based methods for calculating the SZ power spectrum, and characterize their differences. All the methods must rely, one way or another, on high resolution large-scale hydrodynamical simulations with varying assumptions for modelling the gas, of the sort presented here. We show how our results can be used to interpret the latest ACT and SPT power spectrum results. We provide an outlook for the future, describing follow-up work we are undertaking to further advance the theory of cluster science.
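For reference, the thermal SZ observable mentioned above is the Compton-y parameter, a line-of-sight integral of the electron pressure, and scaled thermal pressure profiles are commonly fit with a generalized-NFW form of the kind sketched below; the exact parameterization and best-fit values adopted in the thesis may differ.

```latex
% Compton-y parameter: line-of-sight integral of the electron pressure
y = \frac{\sigma_T}{m_e c^2} \int P_e \, \mathrm{d}l

% Generalized-NFW style fitting form for the scaled thermal pressure profile,
% with x = r / R_{200}; amplitude P_0, core scale x_c, and slopes are fit parameters
\frac{P(x)}{P_{200}} = P_0 \left(\frac{x}{x_c}\right)^{\gamma}
    \left[ 1 + \left(\frac{x}{x_c}\right)^{\alpha} \right]^{-\beta}
```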
|
385 |
CSR implementation in large enterprises : Comparison between China and Sweden
Xiao, Ziye; Liu, Xingrui, January 2013
Corporate Social Responsibility (CSR) has been widely discussed for decades. CSR is a concept first proposed in Western countries, and it asks business to contribute to sustainable economic development and to improve the quality of life by involving other stakeholders at the same time. CSR has been spreading among countries in Asia in recent years. This thesis takes a closer look at the comparison of CSR between China and Sweden. As a case study with a qualitative strategy, its main aims are to compare the driving forces, barriers, activities and deliverables in the implementation of CSR in a Sweden-China context. Two Swedish enterprises and one Chinese enterprise are used as examples in this study, and both their primary data, obtained by interview, and secondary data, obtained from CSR or Sustainability Reports, are utilized. Theories referring to the implementation of CSR and to CSR in China and Sweden are used to establish the conceptual framework of this study. Empirical findings show that Chinese and Swedish enterprises implement CSR in a similar way, while differences still exist. For instance, the Swedish enterprises stress the work on philanthropic responsibility by participating in local activities, while the Chinese enterprises contribute to society by donations in natural disasters. Their activities, driving forces, barriers and deliverables are each summarized in a model. Based on this, the thesis argues that the difference is due to the influence of cultural and political factors. Consequently, this leads to a situation in which the Swedish enterprises have an advantage in implementation when it comes to caring for employees, while the Chinese enterprises are good at making contributions to the larger society. This thesis can hopefully provide an insightful comparison between the implementation of CSR in Swedish and Chinese enterprises. As a conclusion, the study recommends that future research should focus on the political influences on CSR implementation.
|
386 |
Grand Variations for large orchestra
Zajicek, Daniel, 06 September 2012
Grand Variations is a work for large orchestra built on an original theme and six variations. My primary concerns when composing were communication, continuity, and distortion. To musically communicate an idea, repetition is essential, and the type of repetition presented in a theme and variations provided what I was looking for. In addition, the fact that the theme will be repeated over and over leads to a built-in continuity. The final concern, distortion, may be achieved by pulling away from a more straightforward presentation of the thematic material.
Two additional elements played a large role in the work: cyclic forms and quantum physics. The composition Déserts by Edgard Varèse and the jazz work Nefertiti by Wayne Shorter both contain strong cyclic features. Nefertiti uses the same melody repeated over and over, while Déserts, on the other hand, repeatedly presents the same musical gestures and sound objects, but with slight changes, to achieve its own cyclic sound world. These two works framed the way that I approached variations, yet they are at odds with each other. Through my reading of quantum physics, I found a way to join the two into a working structure, and the book The Grand Design, by Stephen Hawking and Leonard Mlodinow, helped me to do it. Because of this, I decided early on to honor that influence, and the title Grand Variations reflects that.
|
387 |
Scheduling in Large Scale MIMO Downlink Systems
Bayesteh, Alireza, January 2008
This dissertation deals with the problem of scheduling in wireless MIMO (Multiple-Input Multiple-Output) downlink systems. The focus is on large-scale systems in which the number of subscribers is large.
In part one, the problem of user selection in the MIMO broadcast channel is studied. An efficient user selection algorithm is proposed and is shown to achieve the sum-rate capacity of the system asymptotically (in terms of the number of users), while requiring (i) a low-complexity precoding scheme of zero-forcing beamforming at the base station, (ii) a low amount of feedback from the users to the base station, and (iii) a low complexity of search.
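As a hedged illustration of the general idea (not the specific algorithm proposed in the thesis), the sketch below greedily selects up to M users out of K for zero-forcing beamforming, at each step adding the user that most increases the equal-power zero-forcing sum rate; the channel matrix and power budget are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def zf_sum_rate(H, total_power):
    """Sum rate of zero-forcing beamforming with equal power split over the
    selected users (rows of H); unit noise variance assumed."""
    gram_inv = np.linalg.inv(H @ H.conj().T)
    gains = 1.0 / np.real(np.diag(gram_inv))          # effective ZF channel gains
    p = total_power / H.shape[0]
    return float(np.sum(np.log2(1.0 + p * gains)))

def greedy_user_selection(H_all, n_tx, total_power):
    """Greedily add the user that most increases the ZF sum rate, up to n_tx users."""
    selected, best_rate = [], 0.0
    for _ in range(n_tx):
        best_user = None
        for k in range(H_all.shape[0]):
            if k in selected:
                continue
            rate = zf_sum_rate(H_all[selected + [k], :], total_power)
            if rate > best_rate:
                best_rate, best_user = rate, k
        if best_user is None:                 # no user improves the sum rate: stop
            break
        selected.append(best_user)
    return selected, best_rate

# Synthetic example: K = 50 users, M = 4 base-station antennas, i.i.d. Rayleigh fading.
K, M, P = 50, 4, 10.0
H_all = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2.0)
users, rate = greedy_user_selection(H_all, M, P)
print("selected users:", users, "sum rate (bits/s/Hz): %.2f" % rate)
```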
Part two studies the problem of the MIMO broadcast channel with partial Channel State Information (CSI) at the transmitter. The necessary and sufficient conditions on the amount of CSI at the transmitter (which is provided via feedback links from the receivers) in order to achieve the sum-rate capacity of the system are derived. The analysis is performed in various signal-to-noise ratio regimes.
In part three, the problem of sum-rate maximization in a broadcast channel with large number of users, when each user has a stringent delay constraint, is studied. In this part, a new definition of fairness, called short-term fairness is introduced. A scheduling algorithm is proposed that achieves: (i) Maximum sum-rate throughput and (ii) Maximum short-term fairness of the system, simultaneously, while satisfying the delay constraint for each individual user with probability one.
In part four, the sum-rate capacity of MIMO broadcast channel, when the channels are Rician fading, is derived in various scenarios in terms of the value of the Rician factor and the distribution of the specular components of the channel.
|
388 |
Hemlöshet, inte bara ett storstadsfenomen? : En kvalitativ studie av hemlöshet i en mellanstor kommun / Homelessness, not just a big-city phenomenon? : A qualitative study of homelessness in a medium-sized municipality
Olsson, Katarina, January 2011
The purpose of this study is to describe and analyze homelessness in a medium-sized municipality from an organizational perspective. Some of the central questions in this study are: How is the cause of homelessness explained? How do different actors in the community address, prevent and combat homelessness? Who is responsible for the homeless? This study is based on four semi-structured interviews with organizations that work with homelessness. The analysis is based on problem definition theory. The results of this study show that homelessness is a problem in this medium-sized municipality, even if not a large one, and that it is often closely linked with substance abuse. The responsibility for the homeless is placed on the homeless themselves and on the social services, because the causes are, according to this study, regarded as individually based.
|
389 |
Trust dynamics within buyer-supplier relationships : Case of small logistics provider & large customer
Fan, Zixi; Dalzhenka, Hanna, January 2012
No description available.
|
390 |
Har pensionsredovisningsmetoder olika effekter på volatilitet i Eget Kapital? : En studie av Large Cap-bolagen i Nasdaq OMX Nordic Stockholm / Methods of pension accounting and effects on shareholders' equity : A study of Nasdaq OMX Nordic Stockholm Large Cap companies
Grahm, Janette; Akar, Céline Serap, January 2012
Background and purpose: The purpose of this thesis is to examine whether there is a relationship between pension accounting methods and the volatility of companies' shareholders' equity. According to pension accounting research, there is a relationship between pension accounting methods and volatility in total shareholders' equity. According to Amir, recognizing actuarial gains and losses directly in shareholders' equity (comprehensive income) gives rise to volatility in shareholders' equity. The aim of the thesis is to examine whether the research demonstrating this relationship is also supported for the Large Cap companies on Nasdaq OMX Nordic Stockholm. Is it true that companies that recognize pensions directly in shareholders' equity (comprehensive income) show greater volatility in shareholders' equity than companies that use other pension accounting methods? This is the fundamental research question of the thesis. Method: To address this purpose, we use the companies' annual reports, examining which pension accounting methods the companies choose and the volatility in total shareholders' equity over the years 2006-2011. These values are compiled in Excel and tested in the statistics program SPSS. Results and conclusions: The hypothesis tests show that there is no relationship between volatility in shareholders' equity and the different pension accounting methods. Our conclusion is that companies on Nasdaq OMX Nordic Stockholm that recognize actuarial gains and losses in shareholders' equity (comprehensive income) do not show greater volatility in shareholders' equity than companies that use other pension accounting methods. / Purpose: According to research done in pension accounting, there is a connection between pension accounting methods and volatility in shareholders' equity. Amir claims that full recognition of actuarial gains and losses in the balance sheet creates volatility in shareholders' equity. The aim of the paper is to examine whether this connection between pension accounting and volatility can be confirmed among companies on Nasdaq OMX Nordic Stockholm. Method: The volatility in shareholders' equity is examined by researching the annual reports of the companies. Throughout the study, the statistics program SPSS is used. Analysis: Companies that use full recognition of actuarial gains and losses in the balance sheet are compared to firms that use other pension accounting methods, with respect to the level of volatility in shareholders' equity. The aim is to examine whether companies that use full recognition of actuarial gains and losses show more volatility in shareholders' equity. Conclusion: According to our study, there is no relationship between pension accounting methods and volatility in shareholders' equity. Companies on Nasdaq OMX Nordic Stockholm that recognize actuarial gains and losses directly in comprehensive income within shareholders' equity do not show more volatility in shareholders' equity than firms that use other pension accounting methods.
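The thesis runs its tests in SPSS on data compiled from annual reports; purely as an illustrative sketch of the same kind of comparison, the Python snippet below computes equity volatility as the standard deviation of year-over-year relative changes in total shareholders' equity and compares the two accounting-method groups with a non-parametric test. The company data, group sizes, and the choice of test are assumptions for the example, not the thesis's actual data or test.

```python
import numpy as np
from scipy import stats

# Hypothetical total shareholders' equity per company, 2006-2011 (illustrative data only).
equity_oci_method = {                 # actuarial gains/losses taken to equity (OCI)
    "Company A": [12.1, 13.0, 11.2, 12.8, 13.5, 14.0],
    "Company B": [45.0, 47.5, 40.1, 44.0, 46.2, 48.0],
}
equity_other_methods = {              # corridor or profit-and-loss recognition
    "Company C": [30.2, 31.0, 29.5, 30.8, 31.5, 32.0],
    "Company D": [8.0, 8.4, 7.9, 8.3, 8.6, 8.8],
}

def volatility(series):
    """Standard deviation of year-over-year relative changes in equity."""
    e = np.asarray(series, dtype=float)
    return float(np.std(np.diff(e) / e[:-1], ddof=1))

vol_oci = [volatility(v) for v in equity_oci_method.values()]
vol_other = [volatility(v) for v in equity_other_methods.values()]

# Non-parametric two-sample comparison of the volatility measures.
stat, p_value = stats.mannwhitneyu(vol_oci, vol_other, alternative="greater")
print("median volatility (OCI method): %.4f" % np.median(vol_oci))
print("median volatility (other):      %.4f" % np.median(vol_other))
print("Mann-Whitney U p-value:         %.3f" % p_value)
```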
|