311 |
Impairment test av goodwill: Användning av diskonteringsräntan / Impairment testing of goodwill: Use of the discount rate
Pourmand, Pejhman; Khazeni, Reza. January 2009.
Background: The introduction of IFRS brought several changes relative to the previous accounting rules in Sweden. As of 1 January 2005, all listed companies must apply IFRS 3 Business Combinations. This means that goodwill may no longer be amortized as before; instead, an annual impairment test of goodwill must be performed to determine whether a write-down is needed. The discount rate is used in the impairment test of goodwill and is an important factor when calculating the present value of cash flows. Problem: Listed companies perform an impairment test of goodwill to examine whether a write-down of goodwill is warranted, and the discount rate is an important factor in this test. Despite the test, most companies do not write down goodwill. Why do companies make no write-downs; could it be that a common practice has developed? How do companies determine the discount rate, why, and what are its components? Purpose: The aim of this study is to describe systematically how companies derive the discount rates they use for goodwill impairment testing, and whether this is due to an evolving practice or to other circumstances. We also want to examine which methods companies use to derive the discount rate and its components. Method: The study is mainly based on a quantitative questionnaire survey, complemented by scientific journals and other written sources. In total, 108 companies listed on the Stockholm stock exchange were contacted. Conclusions: Companies use expected cash flows and the WACC model when deriving their discount rates. The impairment test has proved to be of importance, since it gives a truer and fairer view and a better overview of goodwill.
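As a rough illustration of the mechanics the abstract describes, the Python sketch below shows how a WACC-based discount rate feeds the present-value calculation of expected cash flows in an impairment test. The function names and all figures are invented for the example, not taken from the thesis.

```python
# Illustrative sketch with hypothetical figures: a WACC-based discount rate
# applied to expected cash flows, as in a goodwill impairment test.

def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital; debt interest is tax-deductible."""
    total = equity + debt
    return (equity / total) * cost_of_equity + (debt / total) * cost_of_debt * (1 - tax_rate)

def present_value(cash_flows, rate):
    """Discount a list of expected yearly cash flows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = wacc(equity=600.0, debt=400.0, cost_of_equity=0.10, cost_of_debt=0.05, tax_rate=0.26)
recoverable = present_value([120.0, 130.0, 140.0, 150.0, 155.0], rate)
carrying_amount = 580.0  # hypothetical carrying amount of the cash-generating unit

print(f"WACC: {rate:.2%}, value in use: {recoverable:.1f}")
if recoverable < carrying_amount:
    print(f"Impairment indicated: write goodwill down by {carrying_amount - recoverable:.1f}")
else:
    print("No impairment needed")
```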
|
312 |
Combination of Levene-Type Tests and a Finite-Intersection Method for Testing Trends in Variances
Noguchi, Kimihiro. January 2009.
The problem of detecting monotonic increasing/decreasing trends in variances from k samples arises in many applications, e.g. financial data analysis and medical and environmental studies. However, most tests for equality of variances against ordered alternatives rely on the assumption of normality. Such tests are often non-robust to departures from normality, which eventually leads to unreliable conclusions. In this thesis, we propose a combination of a robust Levene-type test and a finite-intersection method, which relaxes the assumption of normality. The new combined procedure yields more accurate estimates of the size of the test and provides competitive power. In addition, we discuss various modifications of the proposed test for unbalanced designs. We present theoretical justifications of the new test and illustrate its application with simulations and case studies.
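The robust building block of the proposed procedure can be sketched with standard tools; the minimal Python example below uses SciPy's Brown-Forsythe variant of the Levene test on simulated heavy-tailed samples. The finite-intersection combination for ordered alternatives is specific to the thesis and is not reproduced here.

```python
# Minimal sketch of a robust Levene-type (Brown-Forsythe) test for equality
# of variances across k samples. The samples are simulated heavy-tailed data
# with truly increasing scale, so the null should tend to be rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples = [stats.t.rvs(df=3, scale=s, size=50, random_state=rng) for s in (1.0, 1.5, 2.0)]

# center='median' gives the Brown-Forsythe variant, robust to non-normality.
stat, p = stats.levene(*samples, center='median')
print(f"Brown-Forsythe statistic = {stat:.3f}, p-value = {p:.4f}")
```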
|
313 |
Överensstämmelse mellan två olika uthållighetstest hos unga handbollsspelare - Cooper Test vs. Yo-Yo Intermittent Recovery Test / Conformity between two different endurance tests in young team handball players - Cooper Test vs. Yo-Yo Intermittent Recovery Test
Stigaeus, Patrik; Soror, Patrik. January 2011.
Introduction: Team handball is an Olympic sport played internationally, but mainly in Europe. The sport places high demands on both aerobic and anaerobic metabolism. Purpose: The purpose of the study was to examine the conformity between the Cooper Test (CT) and the Yo-Yo Intermittent Recovery Test Level 1 (YYIR1) for young team handball players and, if possible, also to study the influence of playing position on the outcome. Method: 56 young team handball players were invited to participate in the study. The participants performed the CT and the YYIR1, and the conformity between the tests was examined using Spearman's rank correlation coefficient (rs). Results: 11 men and 10 women took part in the study. The results showed good conformity between the CT and the YYIR1 for the group as a whole (rs = 0.79, p < 0.001). No conclusion could be drawn regarding the players' positions and the results of the two tests. Split by sex, the conformity differed between men (rs = 0.28, p = 0.4) and women (rs = 0.61, p = 0.06). Conclusion: The conformity between the CT and the YYIR1 was good at the group level, and the tests could therefore be interchangeable. However, since there was a clear difference between the sexes, larger studies are required.
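The conformity measure used in the study is straightforward to reproduce; the Python sketch below computes Spearman's rank correlation for two endurance tests using fabricated example scores (not the thesis data).

```python
# Spearman's rank correlation between two endurance tests.
# The scores below are fabricated for illustration only.
from scipy import stats

cooper_m = [2450, 2600, 2300, 2750, 2500, 2650]  # Cooper Test distance, metres
yoyo_m = [760, 920, 640, 1040, 800, 880]         # YYIR1 distance, metres

rs, p = stats.spearmanr(cooper_m, yoyo_m)
print(f"rs = {rs:.2f}, p = {p:.3f}")             # monotonic agreement between the tests
```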
|
315 |
Tier-Based Multilevel Interconnect Diagnosis for Through-Silicon-Via
Pai, Chih-Yun. 11 August 2010.
This thesis proposes a tier-based multilevel TSV diagnosis scheme for 3D ICs that improves interconnect reliability and yield, targeting interconnect faults under the stuck-at and open fault models. The scheme builds on previous IEEE 1500-compatible interconnect test and diagnosis methods and further develops a TSV detection and diagnosis method for 3D circuits. An interconnect diagnosis scheme based on the oscillation ring (OR) test methodology is proposed for 3D system-on-chip (SOC) designs with heterogeneous cores. The large number of test rings in an SOC design, however, significantly complicates the interconnect diagnosis problem. The diagnosability of an interconnect structure is therefore first analyzed; a fast diagnosability checking algorithm and an efficient diagnosis ring generation algorithm are then proposed. It is shown that both the vertical and horizontal ring generation algorithms achieve the maximum detectability for any interconnect.
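To make the ring-based diagnosis idea concrete, here is a minimal Python sketch of the basic principle only, not the thesis's algorithms: each oscillation ring covers a set of interconnects and fails to oscillate if it contains a fault, so candidate fault sites lie in every failing ring and in no passing ring. All ring and TSV names are hypothetical.

```python
# Basic ring-intersection diagnosis: a ring fails if it contains a fault,
# so candidates = intersection of failing rings minus union of passing rings.

def diagnose(rings, failing):
    """rings: dict ring_id -> set of interconnect ids; failing: set of failing ring ids."""
    candidates = set.intersection(*(rings[r] for r in failing)) if failing else set()
    for r in rings:
        if r not in failing:          # a passing ring exonerates its interconnects
            candidates -= rings[r]
    return candidates

rings = {
    "R1": {"tsv1", "tsv2", "tsv3"},   # hypothetical vertical ring
    "R2": {"tsv2", "tsv4"},           # hypothetical horizontal ring
    "R3": {"tsv3", "tsv4", "tsv5"},
}
print(diagnose(rings, failing={"R1", "R2"}))  # -> {'tsv2'}
```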
|
316 |
Power supply noise in delay testing
Wang, Jing. 15 May 2009.
As technology scales into the Deep Sub-Micron (DSM) regime, circuit designs have become more and more sensitive to power supply noise. Excessive noise can significantly affect the timing performance of DSM designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which eventually results in delay test overkill.

To reduce this overkill, we propose a low-cost, pattern-dependent approach to analyze the noise-induced delay variation for each delay test pattern applied to the design. Two noise models are proposed to address array-bond and wire-bond power supply networks, and they are experimentally validated and compared. A delay model is then applied to calculate path delay under noise. This analysis approach can be integrated into static test compaction or test fill tools to control the supply noise level of delay tests. We also propose an algorithm to predict the transition count of a circuit, which can be applied to control switching activity during dynamic compaction.

Experiments have been performed on the ISCAS89 benchmark circuits. Results show that compacted delay test patterns generated by our compaction tool can meet a moderate noise or delay constraint with only a small increase in compacted test set size. Take the benchmark circuit s38417 for example: a 10% delay increase constraint results in only a 1.6% increase in compacted test set size in our experiments. In addition, different test fill techniques have a significant impact on path delay. In our work, a test fill tool with supply noise analysis has been developed to compare several test fill techniques, and the results show that the test fill strategy significantly affects switching activity, power supply noise, and delay. For instance, patterns with minimum transition fill produce less noise-induced delay than patterns with random fill. Silicon results also show that test patterns filled in different ways can cause as much as 14% delay variation on target paths. In conclusion, we must take noise into consideration when delay test patterns are generated.
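Since the abstract contrasts minimum transition fill with random fill, a minimal Python sketch of minimum transition fill may help; this illustrates the general technique, not the thesis's tool, and the default value chosen for leading X bits is an assumption.

```python
# Minimum transition fill: assign each don't-care (X) bit the value of the
# previous specified bit, so shifting the pattern causes as few transitions
# as possible and hence less power supply noise.

def min_transition_fill(pattern):
    """pattern: string over {'0', '1', 'X'}; returns a fully specified string."""
    filled, last = [], '0'   # assumed default for leading X bits
    for bit in pattern:
        if bit == 'X':
            bit = last       # repeating the previous value adds no transition
        filled.append(bit)
        last = bit
    return ''.join(filled)

def transitions(bits):
    """Number of adjacent 0<->1 changes when the pattern is shifted in."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

p = min_transition_fill("1XX0XXX1X0")
print(p, transitions(p))     # 1110000110 3
```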
|
317 |
Low Cost Power and Supply Noise Estimation and Control in Scan Testing of VLSI Circuits
Jiang, Zhongwei. December 2010.
Test power is an important issue in deep submicron semiconductor testing. Too much power supply noise and too much power dissipation can result in excessive temperature rise, both leading to overkill during delay test. Scan-based test has been widely adopted as one of the most commonly used VLSI testing methods. The test power during scan testing comprises shift power and capture power, and the power consumed in the shift cycle dominates the total power dissipation. It is crucial for IC manufacturing companies to achieve near-constant power consumption over a given timing window in order to keep the chip under test (CUT) at a near-constant temperature, which makes it easier to characterize the circuit behavior and prevents delay test overkill.
To achieve constant test power, we first built a fast and accurate power model that can estimate the shift power without logic simulation of the circuit. We also proposed an efficient, low-power X-bit filling process that can potentially reduce both shift power and capture power. We then introduced an efficient test pattern reordering algorithm that achieves near-constant power between groups of patterns, where the number of patterns in a group is determined by the thermal constant of the chip. Experimental results show that the proposed power model has very good correlation, and that the proposed X-fill process achieves both minimum shift power and minimum capture power. The algorithm supports multiple scan chains and can achieve constant power within different regions of the chip. The greedy test pattern reordering algorithm reduces the power variation from 29-126 percent to 8-10 percent, or even lower if the power variance threshold is reduced.
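A greedy grouping of the kind described above can be sketched in a few lines; the Python example below is a simplified illustration of the idea, not the thesis's implementation, and the per-pattern power estimates are hypothetical.

```python
# Greedy reordering of test patterns into groups of near-constant total power.
# Each step picks the remaining pattern that brings the running group sum
# closest to the ideal per-group power.

def reorder(powers, group_size):
    target = sum(powers) / (len(powers) / group_size)  # ideal power per group
    remaining = list(range(len(powers)))
    groups = []
    while remaining:
        group, total = [], 0.0
        for _ in range(min(group_size, len(remaining))):
            best = min(remaining, key=lambda i: abs(total + powers[i] - target))
            remaining.remove(best)
            group.append(best)
            total += powers[best]
        groups.append((group, total))
    return groups

powers = [5.0, 9.0, 3.0, 7.0, 6.0, 4.0, 8.0, 6.0]      # hypothetical shift power per pattern
for indices, total in reorder(powers, group_size=2):
    print(indices, total)                              # every group sums to 12.0 here
```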
Excessive noise can significantly affect the timing performance of Deep Sub-Micron (DSM) designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise. This can result in delay test overkill. Prior approaches to power supply noise aware delay test compaction are too costly due to many logic simulations, and are limited to static compaction.
We proposed a realistic, low-cost delay test compaction flow that guardbands the delay using a sequence of estimation metrics to keep the supply noise of the circuit under test closer to functional mode. This flow has been implemented in both static compaction and dynamic compaction. We analyzed the relationship between delay and voltage drop, and the relationship between effective weighted switching activity (WSA) and voltage drop, and based on these correlations we introduce the low-cost delay test pattern compaction framework that considers power supply noise. Experimental results on ISCAS89 circuits show that our low-cost framework is up to ten times faster than the prior high-cost framework. Simulation results also verify that the low-cost model correctly guardbands each path's extra noise-induced delay. We discussed the rules for setting different constraints in the levelized framework; the veto process used in the compaction can also be applied to other constraints, such as power and temperature.
|
318 |
Fault modeling, delay evaluation and path selection for delay test under process variation in nano-scale VLSI circuits
Lu, Xiang. 12 April 2006.
Delay test in nano-scale VLSI circuits becomes more difficult with shrinking technology feature sizes and rising clock frequencies. In this dissertation, we study three challenging issues in delay test: fault modeling, variational delay evaluation, and path selection under process variation. Previous research on fault modeling of resistive spot defects, such as resistive opens and bridges in the interconnect and resistive shorts in devices, lacked an accurate fault model; as a result, it was difficult to perform fault simulation and select the best vectors. Conventional methods for computing variational delay under process variation are either slow or inaccurate. For path selection under process variation, previous approaches either chose too many paths or missed paths that needed to be tested.

We present new solutions in this dissertation. A new fault model that clearly and comprehensively expresses the relationship between electrical behaviors and resistive spots is proposed. The effect of process variations on path delays is then modeled with a linear function, and a fast method to compute the coefficients of the linear function is derived. Finally, we present new path pruning algorithms that efficiently prune unimportant paths, so that we select as few paths as possible for test while the required fault coverage is satisfied. The experimental results show that the new solutions are efficient and accurate.
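The linear variational delay model mentioned above can be illustrated briefly; in the Python sketch below, the nominal delay, the parameter names, and the sensitivity coefficients are all hypothetical, and a simple Monte Carlo run shows the resulting delay spread.

```python
# Path delay as a linear function of process-parameter deviations:
# delay = d0 + sum(a_i * dp_i), with dp_i in units of its standard deviation.
import random

d0 = 1.20                     # nominal path delay in ns (hypothetical)
coeffs = [0.05, -0.02, 0.08]  # sensitivities, e.g. to Leff, Vth, Tox (hypothetical)

def path_delay(dp):
    return d0 + sum(a * x for a, x in zip(coeffs, dp))

random.seed(0)
samples = [path_delay([random.gauss(0, 1) for _ in coeffs]) for _ in range(10000)]
mean = sum(samples) / len(samples)
sigma = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
# For independent parameters, sigma should approach sqrt(sum a_i^2), about 0.096 ns.
print(f"mean delay {mean:.3f} ns, sigma {sigma:.3f} ns")
```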
|
319 |
Investigation of Automated Terminal Interoperability Test / Undersökning av automatiserad interoperabilitetstest av mobila terminaler
Brammer, Niklas. January 2008.
In order to develop and secure the functionality of its cellular communications systems, Ericsson carries out numerous R&D and I&V activities. One important aspect is interoperability with mobile terminals from different vendors on the world market, and Ericsson therefore co-operates with mobile platform and user equipment manufacturers. These companies visit the interoperability developmental testing (IoDT) laboratories in Linköping to test their developmental products and prototypes in order to certify compliance with Ericsson's products. The knowledge exchange is mutual: both Ericsson and the user equipment manufacturers benefit from the co-operation.

The goal of this master's thesis, performed at Ericsson AB, is to suggest areas in which IoDT testing can be automated in order to minimize time-consuming and tedious work tasks. The search is primarily aimed at replacing manual tasks in use today.

The thesis suggests a number of IoDT tasks that might be subject to automation, and one of these is chosen for implementation: the network verification performed after a base station controller software upgrade. This is not a core IoDT function, but it involves a lot of work and is performed often.

The automation project is also intended to act as a springboard for future automation within IoDT. The forthcoming LTE standard will require a lot of IoDT testing, and the automation capabilities should therefore be investigated. The thesis shows that automation is possible and that the startup process is straightforward: existing tools are easy to use and well supported, and the automated network verification test has been successful.
|
320 |
Automating a test strategy for a protocol decoder tool
Johansson, Henrik. January 2008.
Within Ericsson AB, integration and verification activities are carried out at the network level in order to secure the functionality of the network. Protocol analysers are used to capture the traffic in the network, which results in many log files that need to be analysed. To do this, a protocol decoder tool called Scapy/LHC is used. Scapy/LHC is a framework that allows users to write their own scripts to retrieve the data they need from the log files. The framework is developed incrementally as open source within Ericsson whenever more functionality is needed, often by the users themselves, outside their normal work tasks. Because of this, almost no testing is done to verify that old and new functionality works as expected, and there is no formal test strategy in use today.

The goal of this master's thesis is to evaluate test strategies that can be used on the Scapy/LHC framework. To keep the time needed for the testing process as short as possible, the test strategy needs to be automated, so possible test automation tools are also evaluated.

Two possible test strategies and two possible test automation tools are evaluated in this thesis. A test strategy based on the scripts written by the users is then selected for implementation, and both test automation tools are implemented. The evaluation shows that it is possible to find defects in the Scapy/LHC framework in a time-efficient way using the implemented test strategy together with either of the implemented test automation tools.
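Since Scapy/LHC is internal to Ericsson, the selected strategy can only be sketched under stated assumptions; in the Python example below, the decode_log entry point, the module name, and all file names are hypothetical stand-ins. Each user script is run on a reference log and its output is compared with a stored golden file, with pytest automating the runs.

```python
# Golden-file regression test for user-written decoder scripts (sketch).
# decode_log and my_decoder_scripts are hypothetical; Scapy/LHC itself is
# Ericsson-internal and not available here.
import json

import pytest

from my_decoder_scripts import decode_log  # hypothetical user script entry point

CASES = [
    ("logs/attach.cap", "golden/attach.json"),      # hypothetical reference data
    ("logs/handover.cap", "golden/handover.json"),
]

@pytest.mark.parametrize("log_file, golden_file", CASES)
def test_decoder_against_golden(log_file, golden_file):
    result = decode_log(log_file)                   # run the user's decoder script
    with open(golden_file) as f:
        expected = json.load(f)
    assert result == expected, f"{log_file}: decoder output changed"
```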
|