441. Real-Time Workload Models: Expressiveness vs. Analysis Efficiency. Stigge, Martin. January 2014.
The requirements for real-time systems in safety-critical applications typically contain strict timing constraints. The design of such a system must be subject to extensive validation to guarantee that critical timing constraints are never violated while the system operates. A mathematically rigorous technique for doing so is schedulability analysis, which formally verifies models of the computational workload. Different workload models describe task activations at different levels of expressiveness, ranging from traditional periodic models to sophisticated graph-based ones. An inherent conflict arises between the expressiveness of a task model and the efficiency of its analysis. The more expressive a task model is, the more accurately it can describe a system design, reducing over-approximations and thus minimizing wasteful over-provisioning of system resources. However, more expressiveness implies higher computational complexity of the corresponding analysis methods. Consequently, an ideal model provides the highest possible expressiveness for which efficient exact analysis methods exist. This thesis investigates the trade-off between expressiveness and analysis efficiency. A new digraph-based task model is introduced, which generalizes all previously proposed models that can be analyzed in pseudo-polynomial time without analysis-specific over-approximations. We develop methods to efficiently analyze variants of the model despite their strictly increased expressiveness. A key contribution is the notion of path abstraction, which enables efficient graph traversal algorithms. We demonstrate tractability borderlines for different classes of schedulers, namely static-priority and earliest-deadline-first schedulers, by establishing hardness results. These hardness proofs provide insight into the inherent complexity of developing efficient analysis methods and indicate fundamental difficulties of the considered schedulability problems. Finally, we develop a novel abstraction refinement scheme to cope with combinatorial explosion and apply it to schedulability and response-time analysis problems. All methods presented in this thesis are extensively evaluated, demonstrating their practical applicability.
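As a concrete illustration of the path-abstraction idea (a sketch based on the abstract's description, not the thesis's own algorithms), the fragment below walks a digraph task model whose vertices carry worst-case execution times and whose edges carry strictly positive minimum inter-release separations. All paths that reach the same vertex after the same elapsed time collapse into one (demand, time) pair, and only the highest demand is kept; this pruning is what keeps the traversal pseudo-polynomial rather than exponential in the horizon. Vertex names and numbers are illustrative.

```python
def demand_pairs(wcet, edges, horizon):
    """wcet: {vertex: execution time}; edges: {vertex: [(successor, separation)]}.
    Returns the best accumulated demand found for each (release time, vertex) pair."""
    frontier = {(0, v): wcet[v] for v in wcet}   # paths of length one
    best = dict(frontier)
    while frontier:
        nxt = {}
        for (t, v), demand in frontier.items():
            for u, sep in edges.get(v, []):
                t2, d2 = t + sep, demand + wcet[u]
                if t2 <= horizon and d2 > best.get((t2, u), -1):
                    best[(t2, u)] = d2           # dominates all earlier paths here
                    nxt[(t2, u)] = d2
        frontier = nxt
    return best

# toy graph: job a (wcet 2) and job b (wcet 3), released at least
# 5 and 8 time units apart respectively
print(demand_pairs({"a": 2, "b": 3}, {"a": [("b", 5)], "b": [("a", 8)]}, 20))
```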
442. Tobin’s Q theory and regional housing investment: Empirical analysis on Swedish data. Sax Kaijser, Per. January 2014.
This thesis investigates the relationship between Tobin’s Q and regional housing investment in Sweden over the period 1998-2012. The relationship is tested by estimating two time-series models, a vector error correction model (VECM) and an autoregressive distributed lag (ARDL) model. Depending on which model is used, I find some evidence of a positive long-run correlation between Tobin’s Q and regional housing investment, while the short-run dynamics of investment do not seem to be explained by Tobin’s Q. By transforming the regional data into a panel data set and running a fixed effects model, I examine the gain in explanatory power of Tobin’s Q from using disaggregated rather than aggregated data. My findings suggest that using disaggregated data improves the explanatory power of Tobin’s Q on investment. However, a Granger causality test indicates two-way causality between Tobin’s Q and investment, causing an endogeneity problem in the estimated equations.
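Purely as orientation, the sketch below fits both specifications with statsmodels on synthetic stand-ins for the two series. Variable names, lag orders and data are illustrative assumptions, not the thesis's data or code.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 60                                         # e.g. quarterly data, 1998-2012
q = np.cumsum(rng.normal(size=n))              # Tobin's Q as a toy random walk
inv = 0.5 * q + rng.normal(scale=0.5, size=n)  # investment cointegrated with Q
data = pd.DataFrame({"investment": inv, "tobins_q": q})

# VECM: a long-run (cointegrating) relation plus short-run adjustment
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm_res.beta)   # cointegrating vector: the long-run Q-investment link

# ARDL: investment on its own lags and on current plus lagged Q
ardl_res = ARDL(data["investment"], lags=2,
                exog=data[["tobins_q"]], order=2).fit()
print(ardl_res.params)
```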
443. Budget Management: The perception and use of budgets within publicly traded companies in Sweden. Johansson, Xenia. January 2014.
Background: The debate over whether to keep or abolish the budget has been going on for 40 years. On one hand, advocates of abolishing the budget have criticised it, arguing for example that it is a waste of resources that only provides an illusion of control. On the other hand, business students are still taught to use the budget, and previous studies show that companies are still holding on to it. Hence, there seems to be a budget paradox.

Purpose: The purpose of this thesis is to examine the use and perception of fixed budgets within larger companies of today and to clarify the reality behind the debate about the usefulness of budgets. A further aim is to understand which purposes are deemed important when using different control measures, and how well these purposes are fulfilled.

Method: This study is predominantly quantitative with a deductive approach; data were collected via a self-administered web survey of 58 publicly traded companies in Sweden. The questionnaire consisted of open- and closed-ended questions, to provide a deeper understanding of the role of the fixed budget.

Conclusion: The overall percentage of companies that have abolished the budget has increased compared to previous studies, but a majority of 81% still use the fixed budget in one way or another. Of the participating companies, 67% stated that they supplement the fixed budget with other control measures, and as the percentage of those considering abolishing the budget has decreased, the overall perception of the fixed budget has improved.
444. Credit Ratings and Firm Litigation Risk. Xie, Huixian. 01 January 2015.
This paper examines whether firms’ credit ratings are negatively affected by litigation risk after controlling for known factors that affect credit ratings. The conventional wisdom is that litigation risk and credit ratings have an inverse relationship. However, my hypothesis is that this inverse relationship will not be stable once the model of credit ratings takes other factors into account. The methodology first constructs a model of litigation risk, and then regresses credit ratings on the litigation risk measure. Previous empirical research on litigation risk measurement uses industry proxies as indicators of litigation risk. In this paper, I include firm characteristics and the Beneish M-score (a determinant of earnings manipulation) in addition to the industry proxy to construct an alternative model measuring litigation risk. I find that supplementing the Francis, Philbrick and Schipper (1994a, b; hereafter FPS) industry proxy with measures of firm characteristics improves predictive ability. In the credit ratings model, I find that the change in litigation risk is negatively correlated with credit ratings. However, the negative coefficient on the change in litigation risk turns positive after controlling for other variables such as firm size, return on assets, and interest coverage ratio. This finding supports the hypothesis that the negative correlation between credit ratings and litigation risk is not stable, and suggests that credit ratings may not specifically incorporate litigation risk, even though litigation can cause firms financial damage and reputational harm. However, the negative coefficient on the change in litigation risk remains unchanged when I control for year fixed effects. I also find a negative correlation between the year 2007 and credit ratings, owing to the financial crisis. The results are not conclusive given the likely simultaneous determination of litigation risk and credit ratings.
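Since the abstract leans on the Beneish M-score, a sketch of the standard eight-variable score is given below. The coefficients follow Beneish (1999) as commonly reported and should be checked against the paper's own specification before reuse.

```python
def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Eight-variable Beneish M-score. Each argument is a year-over-year
    index (e.g. DSRI = days' sales in receivables index), except TATA,
    which is total accruals scaled by total assets."""
    return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# scores above roughly -1.78 are conventionally flagged as likely manipulators
print(beneish_m_score(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.03, 1.0))
```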
445. Robust designs for field experiments with blocks. Mann, Rena Kaur. 28 July 2011.
This thesis focuses on the design of field experiments with blocks to study the effects of a number of treatments. Small field plots are available but located in several blocks, and each plot is assigned to a treatment in the experiment. Due to spatial correlation among the plots, the allocation of treatments to plots influences the analysis of the treatment effects. When the spatial correlation is known, optimal allocations (designs) of treatments to plots have been studied in the literature. However, the spatial correlation is usually unknown in practice, so we propose a robust criterion for studying optimal designs. Neighbourhoods of correlation structures are introduced and a modified generalized least squares estimator is discussed. A simulated annealing algorithm is implemented to compute optimal/robust designs. Various results are obtained for different experimental settings, and some theoretical results are also proved in the thesis.
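To make the search procedure concrete, here is a minimal sketch of simulated annealing over treatment allocations: swap the treatments of two plots and keep the swap if it improves (or, with decaying probability, worsens) a design criterion. The loss function is a stand-in that merely penalizes equal treatments on adjacent plots, not the robust criterion developed in the thesis.

```python
import math
import random

def loss(design):
    # stand-in criterion: count adjacent plots sharing a treatment
    return sum(design[i] == design[i + 1] for i in range(len(design) - 1))

def anneal(design, steps=10_000, t0=1.0, cooling=0.999):
    cur, temp = loss(design), t0
    for _ in range(steps):
        i, j = random.sample(range(len(design)), 2)
        design[i], design[j] = design[j], design[i]        # propose a swap
        new = loss(design)
        if new > cur and random.random() >= math.exp((cur - new) / temp):
            design[i], design[j] = design[j], design[i]    # reject: swap back
        else:
            cur = new                                      # accept the swap
        temp *= cooling
    return design, cur

plots = [t for t in "ABCD" for _ in range(4)]  # 4 treatments, 16 plots
random.shuffle(plots)
print(anneal(plots))
```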
446. Observing a light dark matter beam with neutrino experiments. DeNiverville, Patrick. 18 August 2011.
We consider the sensitivity of high-luminosity neutrino experiments to light stable states, as arise in scenarios of MeV-scale dark matter. To ensure the correct thermal relic abundance, such states must annihilate to the Standard Model via light mediators, providing a portal for access to the dark matter state at colliders or fixed targets. This framework implies that neutrino beams produced at a fixed target will also carry an additional “dark matter beam”, which can mimic neutrino scattering off electrons or nuclei in the detector. We therefore develop a Monte Carlo code to simulate the production of a dark matter beam at two high-luminosity proton fixed-target facilities, LSND and MiniBooNE, and with this simulation determine the existing limits on light dark matter. We find in particular that MeV-scale dark matter scenarios motivated by an explanation of the galactic 511 keV line are strongly constrained.
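By way of illustration only, the toy below sketches the geometric-acceptance step such a simulation must contain: candidates leaving the target are assigned directions, and we count the fraction whose trajectory would cross a small downstream detector. The isotropic production and the geometry are assumptions of the sketch, not the LSND or MiniBooNE setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 1_000_000
detector_radius, baseline = 5.0, 500.0   # metres (assumed for the toy)

# for an isotropic source, cos(theta) is uniform on [-1, 1]
cos_theta = rng.uniform(-1.0, 1.0, n_events)
# accepted if the direction lies inside the cone subtended by the detector
accepted = cos_theta > baseline / np.hypot(baseline, detector_radius)
print(f"geometric acceptance: {accepted.mean():.2e}")
```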
447. COSINE: A tool for constraining spatial neighbourhoods in marine environments. Suarez, Cesar Augusto. 20 September 2013.
Spatial analysis methods used for detecting, interpolating or predicting local patterns require the delineation of a neighbourhood defining the extent of spatial interaction in geographic data. The most common neighbourhood delineation techniques include fixed distance bands, k-nearest neighbours, or spatial adjacency (contiguity) matrices optimized to represent spatial dependency in the data. However, these standard approaches do not take into consideration geographic or environmental constraints such as impassable mountain ranges, road networks or coastline barriers. In particular, complex marine landscapes and coastlines are a common source of problematic neighbourhood definitions for the standard neighbourhood matrices used in the spatial analysis of marine environments. The goal of our research is therefore to present a new approach to constraining spatial neighbourhoods when conducting geographical analysis in marine environments. To meet this goal, we developed methods and software (COnstraining SpatIal NEighbourhoods - COSINE) for modifying spatial neighbourhoods, and demonstrate their utility in two case studies. Our method enables the delineation of neighbourhoods that are constrained by coastlines and the direction of marine currents. Our software calculates and evaluates whether neighbouring features are separated by land, or fall within a user-defined angle that excludes interaction based on directional processes. Using decision rules, a modified spatial weight matrix is created, in either binary or row-standardized format. Within the open source software R, a graphical user interface enables users to modify the standard spatial neighbourhood definitions: distance, inverse distance and k-nearest neighbour. Two case studies demonstrate the usefulness of the new approach for detecting spatial patterns: the first examines marine mammal abundance and the second, oil spill observations. Our results indicate that constraining spatial neighbourhoods in marine environments is particularly important at larger spatial scales. The COSINE tool has many applications for modelling both environmental and human processes.
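The tool itself is R software; purely to illustrate the two decision rules, here is a small Python sketch (using shapely) that drops a neighbour pair when the connecting line crosses a land polygon, or when its direction falls outside an angular window such as the direction of a prevailing current. The island polygon, distances and angles are made-up assumptions.

```python
import math
from shapely.geometry import LineString, Point, Polygon

land = Polygon([(2, -1), (3, -1), (3, 1), (2, 1)])  # a toy island

def direction_deg(a, b):
    """Angle of the segment a->b, in degrees counter-clockwise from east."""
    return math.degrees(math.atan2(b.y - a.y, b.x - a.x)) % 360

def are_neighbours(a, b, max_dist=10.0, sector=(45.0, 135.0)):
    if a.distance(b) > max_dist:
        return False                               # beyond the distance band
    if LineString([a, b]).intersects(land):
        return False                               # separated by land
    lo, hi = sector
    return lo <= direction_deg(a, b) <= hi         # within the current's sector

sites = [Point(0, 0), Point(1, 2), Point(5, 0)]
# binary spatial weight matrix under the constrained neighbourhood definition;
# note it can be asymmetric, because the angular rule is directional
w = [[int(i != j and are_neighbours(p, q)) for j, q in enumerate(sites)]
     for i, p in enumerate(sites)]
print(w)
```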
448. Tax competition among municipalities in the central part of Sweden: An empirical study: Do municipal taxation decisions depend on taxation in neighboring municipalities? Luoma, Alem. January 2014.
The primary task of this paper is to test the interactive relations between tax rates at the municipality level. We include 96 municipalities over the years 2006 to 2013. The relations are estimated with a panel data instrumental variable method with fixed effects, to overcome possible simultaneity bias. In addition, we choose a set of control variables to strengthen our analysis. The main finding of this study suggests that a one percent tax cut in the neighboring municipalities leads to a 0.62 percent decrease in the tax rate of the home municipality, ceteris paribus. This result is in line with theory and is similar to the findings of previous studies such as Edmark and Ågren (2008).
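As a sketch of what such an estimation looks like (synthetic data; all names and numbers are placeholders, not the paper's specification), the fragment below absorbs municipality fixed effects by demeaning and instruments the neighbours' contemporaneous tax rate with its lag, using linearmodels' IV2SLS. Standard errors in this shortcut ignore the degrees-of-freedom correction for the absorbed fixed effects.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(2)
n_mun, n_years = 96, 8                        # 96 municipalities, 2006-2013
n_obs = n_mun * n_years
base = rng.normal(32, 1, n_obs)               # common component of neighbour rates
df = pd.DataFrame({
    "mun": np.repeat(np.arange(n_mun), n_years),
    "neighbour_tax": base + rng.normal(scale=0.2, size=n_obs),
    "neighbour_tax_lag": base + rng.normal(scale=0.2, size=n_obs),  # instrument
    "income": rng.normal(size=n_obs),                               # control
})
df["tax"] = 0.5 * df["neighbour_tax"] + rng.normal(scale=0.5, size=n_obs)

# within transform: subtracting municipality means absorbs the fixed effects
demeaned = df.groupby("mun").transform(lambda s: s - s.mean())
res = IV2SLS(demeaned["tax"], demeaned[["income"]],
             demeaned["neighbour_tax"], demeaned["neighbour_tax_lag"]).fit()
print(res.params)
```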
449. Can the corporate tax rate be explained by underlying factors? Why did the Swedish Parliament lower the corporate tax rate? Hallberg, Amanda. January 2015.
The Swedish corporate tax rate cut took effect on 1 January 2013; the goal was to stimulate Sweden's growth, since a lower corporate tax rate is said to increase the willingness to invest. The cut was also intended to reduce the incentives for companies to move their operations to low-tax countries. The reduction raised the question this thesis examines: which factors drive the corporate tax rate in a small open economy such as Sweden's? The theory is grounded in capital structure and financing decisions. The factors chosen as likely to affect the corporate tax rate are foreign direct investment, openness to capital flows, and GDP per capita. Secondary data were collected and analysed in the statistical program R, using panel data analysis and regression analysis. The short answer is yes: these factors do influence the corporate tax rate. Geographically larger countries with high openness to capital movements tend to have lower corporate tax rates, which suggests that adjusting the corporate tax rate was a sound choice for Sweden.
450. The Long-term Impact of Birth Order on Health and Educational Attainment. Barclay, Kieron. January 2014.
This doctoral thesis examines the long-term impact of birth order on health and educational attainment. Swedish register data are used to link individuals to their siblings, allowing members of the sibling group to be compared to one another. The thesis consists of an introductory chapter summarizing empirical research on the relationship between birth order and educational attainment, intelligence, health, and personality, as well as the theoretical frameworks that have been developed to explain those relationships. The introductory chapter is followed by four original empirical studies. The first two studies show that, relative to first-born siblings, later-borns have lower physical fitness in late adolescence and higher mortality in adulthood. The third study uses the Swedish registers to identify sibling groups consisting entirely of adopted individuals, and shows that the commonly observed negative relationship between birth order and educational attainment persists in these fully adopted sibling groups. These results suggest that birth order effects are likely explained by post-natal, social mechanisms within the family. Finally, the fourth study shows that even though later-born siblings do worse than first-borns in a fully adjusted statistical model, educational expansion in the 20th century has meant that later-born siblings actually tend to have greater educational attainment and are more likely to attend university than older siblings within the same family. (At the time of the doctoral defense, the papers had the following status: Paper 1: manuscript; Paper 2: epub ahead of print; Paper 3: accepted; Paper 4: manuscript.)
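As a minimal illustration of the within-family comparison that such register-based sibling designs rest on (synthetic data and an assumed effect size, not the thesis's registers), demeaning the outcome and birth order within each sibling group removes everything siblings share, so the remaining slope is identified purely from differences between siblings:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_fam = 2000
fam = np.repeat(np.arange(n_fam), 3)                  # 2000 three-child families
order = np.tile([1, 2, 3], n_fam)                     # birth order within family
family_effect = np.repeat(rng.normal(size=n_fam), 3)  # shared family background
edu = 12 + family_effect - 0.2 * order + rng.normal(size=3 * n_fam)

df = pd.DataFrame({"fam": fam, "order": order, "edu": edu})
# within-family (sibling fixed effects) transform
within = df.groupby("fam")[["order", "edu"]].transform(lambda s: s - s.mean())
res = sm.OLS(within["edu"], sm.add_constant(within["order"])).fit()
print(res.params["order"])   # recovers the assumed effect of roughly -0.2
```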