371
Ägarstrukturens betydelse för företags sociala ansvarstagande (The Significance of Ownership Structure for Corporate Social Responsibility: A Study of How the Social Responsibility of Large Swedish Companies Is Affected by Ownership Structure). Sundvall, Tomas; Trång, David. January 2013.
Interest in corporate social responsibility and its underlying factors has grown rapidly in recent years, and many different aspects have been studied. In this thesis we examine how companies' ownership concentration, the size of their largest owner, and ownership by financial institutions affect the quality of corporate social responsibility in a Swedish context. The method used is multiple linear regression, testing the relationship between the variables above and the quality of companies' social responsibility. The results show no significant relationship between ownership concentration and the quality of corporate social responsibility, while the size of the largest owner has a negative effect, and ownership by financial institutions a positive effect, on the quality of social responsibility. Since the results regarding the size of the largest owner and ownership concentration differ from findings in other countries, this could imply that Sweden's institutional background influences large owners' attitudes toward social responsibility.
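A minimal sketch in Python of the kind of multiple linear regression the abstract describes, with synthetic data and hypothetical column names standing in for the thesis's actual CSR ratings and ownership variables:

```python
# Sketch only: hypothetical data and column names, not the thesis's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # e.g., firms in a large-cap sample
df = pd.DataFrame({
    "ownership_concentration": rng.uniform(0.1, 0.9, n),  # e.g., Herfindahl of stakes
    "largest_owner_share": rng.uniform(0.05, 0.6, n),
    "institutional_ownership": rng.uniform(0.0, 0.5, n),
})
# Synthetic CSR-quality score mirroring the reported signs:
# negative for the largest owner, positive for institutions.
df["csr_quality"] = (50 - 20 * df["largest_owner_share"]
                     + 15 * df["institutional_ownership"]
                     + rng.normal(0, 5, n))

X = sm.add_constant(df[["ownership_concentration",
                        "largest_owner_share",
                        "institutional_ownership"]])
model = sm.OLS(df["csr_quality"], X).fit()
print(model.summary())  # t-tests on each coefficient mirror the thesis's significance tests
```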
372
Novel opposition-based sampling methods for efficiently solving challenging optimization problems. Esmailzadeh, Ali. 01 April 2011.
In solving noise-free and noisy optimization problems, candidate initialization and sampling play a key role, but they have not been deeply investigated. It is of interest to know whether the entire search space offers candidate solutions of the same quality when solving different types of optimization problems. In this thesis, a comprehensive investigation is conducted to resolve those doubts and to examine the effects of variant sampling methods on solving challenging optimization problems, such as large-scale, noisy, and multi-modal problems. To this end, the search space is segmented using seven segmentation schemes, namely: Center-Point, Center-Based, Modula-Opposite, Quasi-Opposite, Quasi-Reflection, Super-Opposite, and Opposite-Random. The introduced schemes are studied using Monte-Carlo simulation on various types of noise-free optimization problems, and ultimately ranked by their performance in terms of probability of closeness, average distance to the unknown solution, number of solutions found, and diversity. Based on the results of the experiments, high-ranked schemes are selected and utilized in well-known metaheuristic algorithms as case studies. Two categories of case studies are targeted: one for a single-solution-based metaheuristic (S-metaheuristic) and another for a population-based metaheuristic (P-metaheuristic). A high-ranked single-solution-based scheme is utilized to accelerate the Simulated Annealing (SA) algorithm, as a noise-free S-metaheuristic case study. Similarly, for the noise-free P-metaheuristic case study, an effective population-based algorithm, Differential Evolution (DE), is utilized. The experiments confirm that the new algorithms outperform the parent algorithm (DE) on large-scale problems. In the same direction, with regard to solving noisy problems more efficiently, a Shaking-based sampling method is introduced, in which the original noise is tackled by adding additional noise into the search process. As a case study, Shaking-based sampling is utilized in the DE algorithm, from which two variant algorithms are developed; these show impressive performance in comparison to classical DE in tackling noisy large-scale problems. This thesis opens the way for a comprehensive investigation of search-space segmentation schemes and proposes new sampling methods. The study provides a guide to choosing appropriate sampling schemes for given types of problems, such as noisy, large-scale, and multi-modal optimization problems. Furthermore, it questions the effectiveness of the uniform-random sampling method, which is widely used in S-metaheuristic and P-metaheuristic algorithms.
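For illustration, a minimal sketch of the opposition-based sampling idea behind several of the schemes named above; the definitions follow the common opposition-based learning conventions, and the thesis's exact scheme details may differ:

```python
# Sketch of opposite / quasi-opposite / quasi-reflected points in a box [low, high].
import numpy as np

def opposite(x, low, high):
    """Opposite point: x~ = low + high - x, mirrored through the interval center."""
    return low + high - x

def quasi_opposite(x, low, high, rng):
    """Random point between the interval center and the opposite point."""
    center = (low + high) / 2.0
    xo = opposite(x, low, high)
    return rng.uniform(np.minimum(center, xo), np.maximum(center, xo))

def quasi_reflected(x, low, high, rng):
    """Random point between the candidate and the interval center."""
    center = (low + high) / 2.0
    return rng.uniform(np.minimum(x, center), np.maximum(x, center))

rng = np.random.default_rng(1)
low, high = np.zeros(5), np.ones(5)   # a 5-D unit box
x = rng.uniform(low, high)            # uniform-random candidate
print(x, opposite(x, low, high), quasi_opposite(x, low, high, rng))
```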
373
Design and Application of Discrete Explicit Filters for Large Eddy Simulation of Compressible Turbulent Flows. Deconinck, Willem. 24 February 2009.
In the context of Large Eddy Simulation (LES) of turbulent flows, there is a current need to compare and evaluate the various proposed subfilter-scale models. To carefully compare subfilter-scale models against each other, and to compare LES predictions to Direct Numerical Simulation (DNS) results (the latter being helpful in the validation of models), there is a real need for a "grid-independent" LES capability, and explicit filtering methods offer one means by which this may be achieved.
Explicit filtering provides a means for eliminating aliasing errors, allows direct control of commutation errors, and, most importantly, decouples the mesh spacing from the filter width; this coupling is the primary reason for the difficulties in comparing LES solutions obtained on different grids. This thesis considers the design and assessment of discrete explicit filters and their application to the prediction of isotropic turbulence.
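As an illustration of that decoupling, here is a minimal sketch (not the thesis's actual filter design) of a discrete explicit filter: a symmetric stencil whose weights set an effective filter width larger than the grid spacing:

```python
# Sketch: symmetric discrete filters applied to a 1-D periodic field.
import numpy as np

def explicit_filter(u, weights):
    """Apply a symmetric discrete filter with half-stencil weights.
    weights = [w0, w1, ..., wK] gives u_bar_i = w0*u_i + sum_k wk*(u_{i-k} + u_{i+k}),
    using periodic boundaries for simplicity."""
    K = len(weights) - 1
    u_bar = weights[0] * u.copy()
    for k in range(1, K + 1):
        u_bar += weights[k] * (np.roll(u, k) + np.roll(u, -k))
    return u_bar

# Classic trapezoidal ("1-2-1") filter: effective width ~ 2*dx.
trapezoidal = [0.5, 0.25]
# A wider stencil gives a filter width decoupled from the grid spacing
# (illustrative weights summing to 1, not an optimized design).
wide = [0.375, 0.25, 0.0625]

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(20 * x)       # resolved + fine-scale content
print(np.abs(explicit_filter(u, trapezoidal)).max(),
      np.abs(explicit_filter(u, wide)).max())  # wider filter damps more
```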
374
On the Use of Double Auctions in Resource Allocation Problems in Large-scale Distributed Systems. Feng, Yuan. 24 August 2011.
In this thesis, we explore the use of double auction markets as a general approach to resource allocation problems in large-scale distributed systems, which are traditionally solved using optimization techniques. Prevalently adopted in real-world markets, double auctions can arbitrate mappings between participating players and traded commodities in a decentralized fashion, with every player selfishly maximizing her own utility. Through the design of prefetching strategies in peer-assisted video-on-demand systems, we show how the problem of minimizing server bandwidth costs by reallocating media content can be solved gracefully by a double auction market. However, not every resource allocation problem satisfies the requirements of double auctions. We illustrate this limitation with the example of virtual machine migration in container-based datacenters, which we instead model as a Nash bargaining game and solve with a Nash bargaining solution.
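For concreteness, a minimal sketch of a standard double-auction clearing rule (illustrative only, not the thesis's exact mechanism): sort bids descending and asks ascending, trade the first k pairs where bid >= ask, and clear at a price between the k-th bid and k-th ask:

```python
def clear_double_auction(bids, asks):
    """bids/asks: lists of (player_id, price). Returns (trades, clearing_price)."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)
    asks = sorted(asks, key=lambda a: a[1])
    trades, k = [], 0
    while k < min(len(bids), len(asks)) and bids[k][1] >= asks[k][1]:
        trades.append((bids[k][0], asks[k][0]))
        k += 1
    if k == 0:
        return [], None
    price = (bids[k - 1][1] + asks[k - 1][1]) / 2.0  # midpoint clearing price
    return trades, price

buyers = [("b1", 9.0), ("b2", 7.5), ("b3", 4.0)]   # e.g., peers bidding for content
sellers = [("s1", 3.0), ("s2", 6.0), ("s3", 8.0)]  # e.g., peers offering cached blocks
print(clear_double_auction(buyers, sellers))
# -> two trades (b1-s1, b2-s2) at price 6.75, between the 2nd bid and 2nd ask
```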
376
Evolutionary Granular Kernel Machines. Jin, Bo. 03 May 2007.
Kernel machines such as Support Vector Machines (SVMs) have been widely used in various data mining applications and have good generalization properties. The performance of SVMs on nonlinear problems is highly affected by the kernel function, and the complexity of SVM training is mainly related to the size of the training dataset. How to design a powerful kernel, how to speed up SVM training, and how to train SVMs with millions of examples are still challenging problems in SVM research. To address these problems, powerful and flexible kernel trees called Evolutionary Granular Kernel Trees (EGKTs) are designed to incorporate prior domain knowledge. A Granular Kernel Tree Structure Evolving System (GKTSES) is developed to evolve the structures of Granular Kernel Trees (GKTs) without prior knowledge, and a voting scheme is proposed to reduce the prediction deviation of GKTSES. To speed up EGKT optimization, a master-slave parallel model is implemented. To help SVMs tackle large-scale data mining, a Minimum Enclosing Ball (MEB) based data reduction method is presented and a new MEB-SVM algorithm is designed. All these kernel methods are grounded in Granular Computing (GrC). In sum, Evolutionary Granular Kernel Machines (EGKMs) are investigated to optimize kernels effectively, speed up training greatly, and mine huge amounts of data efficiently.
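A minimal sketch of the granular-kernel idea (hypothetical granules, gammas, and weights; an evolutionary search like GKTSES would tune these rather than fixing them): per-granule RBF kernels over feature subsets, combined into one composite kernel an SVM can consume:

```python
# Sketch: a weighted sum of RBF kernels, each over its own feature granule.
import numpy as np
from sklearn.svm import SVC

def granular_kernel(granules, gammas, weights):
    """Return a callable Gram-matrix kernel built from per-granule RBF kernels."""
    def k(X, Y):
        K = np.zeros((X.shape[0], Y.shape[0]))
        for cols, g, w in zip(granules, gammas, weights):
            d2 = ((X[:, None, cols] - Y[None, :, cols]) ** 2).sum(-1)
            K += w * np.exp(-g * d2)   # sum of PSD kernels stays PSD
        return K
    return k

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
y = (X[:, 0] * X[:, 1] + X[:, 4] > 0).astype(int)

# Two granules: features {0,1,2} and {3,4,5} (chosen by hand here).
kernel = granular_kernel([[0, 1, 2], [3, 4, 5]], gammas=[0.5, 0.5], weights=[0.7, 0.3])
clf = SVC(kernel=kernel).fit(X, y)
print(clf.score(X, y))
```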
377
Exploring factors affecting math achievement using large-scale assessment results in Saskatchewan. Lai, Hollis. 16 September 2008.
Current research suggests that a high level of confidence and a low level of anxiety are predictive of higher math achievement. Previous research has found that, compared to students from other provinces, Saskatchewan students have a higher level of confidence and a lower level of anxiety toward learning math, yet still tend to achieve lower math scores. This suggests that there may be unique factors affecting math learning for students in Saskatchewan. The purpose of this study is to identify factors that may affect Saskatchewan students' math achievement. Exploratory factor analyses and regression methods were employed to investigate traits that help students achieve higher math scores, using results from a 2007 math assessment administered to grade 5 students in Saskatchewan. The goal was to provide a better understanding of the factors and trends unique to mathematics achievement among Saskatchewan students.

Using results from a province-wide math assessment and an accompanying questionnaire administered to grade five students across public schools in Saskatchewan (n = 11,279), the present study found statistical significance for three factors that previous studies support as influencing differences in math achievement: (1) confidence in math, (2) parental involvement in math, and (3) extracurricular participation in math. These three factors were found to be related to math achievement as measured by the Assessment for Learning (AFL) program in Saskatchewan, although the findings come with reservations because of the small amount of variance accounted for by the regression model (r2 = .084). Furthermore, a multivariate analysis of variance indicated that gender and school location have effects on students' math achievement scores. Although a high degree of measurement error in the questionnaire (and consequently the low variance accounted for by the regression model) limited the scope and implications of the model, future implications and improvements are discussed.
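A minimal sketch, on synthetic data, of the factor-extraction-then-regression pipeline the abstract describes; the latent factors, item structure, and effect sizes below are invented:

```python
# Sketch: extract questionnaire factors, regress math scores on factor scores.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 1000
latent = rng.normal(size=(n, 3))  # confidence, parental involvement, extracurricular
items = latent.repeat(4, axis=1) + rng.normal(0, 0.8, size=(n, 12))  # 4 items per factor

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(items)            # factor scores per student

math = 500 + 12 * latent[:, 0] + 6 * latent[:, 1] + 4 * latent[:, 2] \
       + rng.normal(0, 40, n)               # weak signal, echoing the low reported r2
ols = sm.OLS(math, sm.add_constant(scores)).fit()
print(ols.rsquared)                         # small, as in the study (r2 = .084 there)
```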
378
Modeling and validation of crop feeding in a large square baler. Remoué, Tyler. 01 November 2007.
This study investigated crop density in a New Holland BB960 (a branch of CNH Global N.V.) large square baler by examining crop trajectory from the precompression room to the bale chamber. It also examined top and bottom plunger pressures and the critical factors affecting final top and bottom bale densities.

The crop trajectories (wads of crop) were measured with a high-speed camera through viewing windows on the side of the baler. The viewing windows were divided into four regions for determining crop displacement, velocity, and acceleration. Crop strain was used to evaluate the potential change in density of the crop before compression by the plunger. In general, the vertical crop strain was higher in the top half of the bale than in the bottom.

Average strain values for the side measurements were 12.8% for the top and 2.1% for the bottom. Plunger pressures were measured to compare peak pressures between the top and bottom halves of each compressed wad of crop, and to develop pressure profiles based on the plunger's position. Comparing the mean peak plunger pressures between the top and bottom locations indicated that the mean pressures were significantly higher at the top location, with the exception of one particular setting. The resulting pressure-profile graphs aided in qualitatively describing the compression process at both locations.

A stepwise regression model was developed to examine, based on bale weights, the difference in material quantity between the top and bottom halves of each bale. The model indicated that flake setting, stuffer ratio, and number of flakes had the greatest effect on maintaining consistent bale density between the top and bottom halves. The R2 (coefficient of determination) of the developed model was 59.9%; this low value could be attributed to the limited number of data points available to the model.
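A minimal sketch of the strain measure involved, with hypothetical tracked marker positions: engineering strain of a wad between two frames, strain = (L0 − L)/L0 under compression:

```python
# Sketch: vertical crop strain from marker positions seen by a high-speed camera.
def vertical_strain(top_y0, bottom_y0, top_y1, bottom_y1):
    """Engineering strain of the wad between two frames (positive = compressed)."""
    L0 = abs(top_y0 - bottom_y0)   # initial wad height, e.g., in pixels or mm
    L1 = abs(top_y1 - bottom_y1)   # height after pre-compression feeding
    return (L0 - L1) / L0

# Example: marker positions chosen to echo the reported averages.
print(f"top:    {vertical_strain(0.0, 100.0, 6.0, 93.2):.1%}")     # 12.8%
print(f"bottom: {vertical_strain(100.0, 200.0, 101.0, 198.9):.1%}")  # 2.1%
```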
379
Fast simulation of rare events in Markov level/phase processes. Luo, Jingxiang. 19 July 2004.
Methods for efficient Monte-Carlo simulation of rare events have been studied for several decades. Rare events are very important in the context of evaluating high-quality computer and communication systems, yet the efficient simulation of systems involving rare events poses great challenges. A simulation method is said to be efficient if the number of replicas required to get accurate estimates grows slowly compared to the rate at which the probability of the rare event approaches zero. Despite the great success of the two mainstream methods, importance sampling (IS) and importance splitting, either can become inefficient under certain conditions, as reported in several recent studies.

The purpose of this study is to look for possible enhancements of fast simulation methods. I focus on the "level/phase process", a Markov process in which the level and the phase are two state variables, and in which changes of level and phase are induced by events whose rates are independent of the level except at a boundary. For such a system, the event of reaching a high level occurs rarely, provided the system typically stays at lower levels; the states at those high levels constitute the rare event set. Though simple, this models a variety of applications involving rare events.

In this setting, I study the efficiency of two fast simulation methods: the rate tilting method and the adaptive splitting method. I compare the efficiency of rate tilting with several previously used similar methods, with experiments on queues in tandem, an often-used test bench for rare event simulation. The schema of adaptive splitting has not previously been described in the literature; for this method, I analyze its efficiency to show its superiority over conventional splitting.

The way a system approaches a designated rare event set is called the system's large deviation behavior. To gain insight into the relation between system behavior and the efficiency of IS simulation, I quantify the large deviation behavior and its complexity. This work indicates that the system's large deviation behavior has a significant impact on the efficiency of a simulation method.
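A minimal sketch of importance sampling by rate tilting on a toy level process: a gambler's-ruin random walk standing in for the Markov level/phase model, where exchanging the up/down rates is the standard tilt for level-crossing problems:

```python
# Sketch: estimate P(reach level N before 0, starting at 1) for a walk with
# up-probability p < 1/2, by simulating under the tilted measure (up-prob = 1-p)
# and reweighting with the likelihood ratio.
import random

def is_estimate(p, N, replicas, seed=0):
    rng = random.Random(seed)
    q = 1.0 - p
    p_tilt = q                      # exchange up/down rates: the standard tilt
    lr_hit = (p / q) ** (N - 1)     # likelihood ratio of any path that reaches N
    hits = 0
    for _ in range(replicas):
        level = 1
        while 0 < level < N:
            level += 1 if rng.random() < p_tilt else -1
        hits += (level == N)
    return lr_hit * hits / replicas

p, N = 0.3, 20                      # stable system: up-rate < down-rate
exact = (1 - (1 - p) / p) / (1 - ((1 - p) / p) ** N)  # gambler's-ruin formula
print(is_estimate(p, N, replicas=10_000), exact)      # both ~ 6e-8
```

With plain Monte-Carlo, an event of probability ~6e-8 would need on the order of a billion replicas for a usable estimate; under the tilt, over half the replicas hit the rare set.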
380
The Impact of Non-thermal Processes in the Intracluster Medium on Cosmological Cluster Observables. Battaglia, Nicholas Ambrose. 05 January 2012.
In this thesis we describe the generation and analysis of hydrodynamical simulations of galaxy clusters and their intracluster medium (ICM), using large cosmological boxes to generate large samples, in conjunction with individual cluster computations. The main focus is the exploration of non-thermal processes in the ICM and the effect they have on the interpretation of observations used for cosmological constraints. We provide an introduction to the cosmological structure formation framework for our computations and an overview of numerical simulations and observations of galaxy clusters.

We explore cluster magnetic field observables through radio relics, extended entities in the ICM characterized by their diffuse radio emission. We show that statistical quantities such as radio relic luminosity functions and rotation measure power spectra are sensitive to magnetic field models. The spectral index of the radio relic emission provides information on structure formation shocks, e.g., on their Mach number.

We develop a coarse-grained stochastic model of active galactic nucleus (AGN) feedback in clusters and show the impact of such inhomogeneous feedback on the thermal pressure profile. We explore variations in the pressure profile as a function of cluster mass, redshift, and radius, and provide a constrained fitting function for this profile. We measure the degree of non-thermal pressure in the gas from internal cluster bulk motions and show that it has an impact on the slope and scatter of the Sunyaev-Zel'dovich (SZ) scaling relation. We also find that the gross shape of the ICM, as characterized by scaled moment-of-inertia tensors, affects the SZ scaling relation.

We demonstrate that the shape and amplitude of the SZ angular power spectrum are sensitive to AGN feedback, and that this affects the cosmological parameters determined from high-resolution ACT and SPT cosmic microwave background data. We compare analytic, semi-analytic, and simulation-based methods for calculating the SZ power spectrum and characterize their differences. All of these methods must rely, one way or another, on high-resolution large-scale hydrodynamical simulations of the sort presented here, with varying assumptions for modelling the gas. We show how our results can be used to interpret the latest ACT and SPT power spectrum results, and we provide an outlook for the future, describing follow-up work we are undertaking to further advance the theory of cluster science.
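A minimal sketch of a constrained pressure-profile fitting function of the kind mentioned above: a generalized-NFW profile whose parameters scale with cluster mass and redshift. The functional form follows the common GNFW convention; the numerical coefficients below are placeholders, not the thesis's fitted values:

```python
# Sketch: GNFW thermal pressure profile with power-law mass/redshift scalings.
import numpy as np

def gnfw_pressure(x, P0, xc, alpha, beta, gamma):
    """Dimensionless pressure P/P_delta at scaled radius x = r/R_200."""
    return P0 * (x / xc) ** gamma * (1.0 + (x / xc) ** alpha) ** (-beta)

def scaled_param(A0, alpha_m, alpha_z, M200, z):
    """Power-law mass/redshift scaling of a fit parameter (placeholder exponents)."""
    return A0 * (M200 / 1e14) ** alpha_m * (1.0 + z) ** alpha_z

M200, z = 3e14, 0.5                       # example cluster
P0 = scaled_param(18.0, 0.15, -0.7, M200, z)
xc = scaled_param(0.5, -0.1, 0.4, M200, z)
beta = scaled_param(4.3, 0.05, 0.5, M200, z)
x = np.logspace(-2, 0.5, 50)              # 0.01 to ~3 R_200
print(gnfw_pressure(x, P0, xc, alpha=1.0, beta=beta, gamma=-0.3)[:5])
```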