541 |
Statistical decisions in optimising grain yield. Norng, Sorn, January 2004.
This thesis concerns Precision Agriculture (PA) technology, which involves methods developed to optimise grain yield by examining data quality and modelling the protein/yield relationship of wheat and sorghum fields in central and southern Queensland. An important part of developing strategies to optimise grain yield is an understanding of PA technology. This covers the major aspects of PA, which include all the components of a Site-Specific Crop Management System (SSCM): 1. spatial referencing, 2. crop, soil and climate monitoring, 3. attribute mapping, 4. decision support systems and 5. differential action. Understanding how all five components fit into PA significantly aids the development of data analysis methods. The development of PA depends on the collection, analysis and interpretation of information. A preliminary data analysis step is described which covers both non-spatial and spatial data analysis methods. The non-spatial analysis involves plotting methods (maps, histograms), standard distributions and statistical summaries (mean, standard deviation). The spatial analysis covers both undirected and directional variogram analyses. In addition to the data analysis, a theoretical investigation into GPS error is given. GPS plays a major role in the development of PA. A number of sources of error affect the GPS and therefore affect the positioning measurements. An understanding of the distribution of the errors, and of how they are related to each other over time, is therefore needed to complement the understanding of the nature of the data. Understanding the error distribution and the data gives useful insights for model assumptions with regard to position measurement errors. A review of filtering methods is given and new methods are developed, namely strip analysis and a double harvesting algorithm. These methods are designed specifically for controlled traffic and normal traffic respectively, but can be applied to all kinds of yield monitoring data. The data resulting from the strip analysis and double harvesting algorithm are used to investigate the relationship between on-the-go yield and protein. The strategy is to use protein and yield to determine decisions with respect to nitrogen management. The agronomic assumption, based on plot trials, is that protein and yield have a significant relationship. We investigate whether there is any significant relationship between protein and yield at the local level to warrant this kind of assumption. Understanding PA technology and being aware of the sources of error that exist in data collection and data analysis are all very important steps in developing management decision strategies.
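The spatial analysis mentioned above rests on the empirical semivariogram, gamma(h) = (1 / 2N(h)) * sum of squared value differences over the N(h) point pairs separated by lag h. The following is a minimal, illustrative omnidirectional variogram sketch for yield-monitor points; the coordinates, lag settings and simulated yield values are assumptions for the example, not data or code from the thesis.

import numpy as np

def empirical_variogram(x, y, z, lag_width=25.0, n_lags=12):
    """Omnidirectional empirical semivariogram for scattered points.
    x, y: coordinates (m); z: attribute (e.g. grain yield); lag_width in metres."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    pts = np.column_stack([x, y])
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))  # pairwise distances
    sq = (z[:, None] - z[None, :]) ** 2                              # squared value differences
    iu = np.triu_indices(len(z), k=1)                                # count each pair once
    d, sq = d[iu], sq[iu]
    lags, gamma = [], []
    for k in range(n_lags):
        lo, hi = k * lag_width, (k + 1) * lag_width
        m = (d >= lo) & (d < hi)
        if m.any():
            lags.append(0.5 * (lo + hi))
            gamma.append(0.5 * sq[m].mean())   # semivariance for this lag bin
    return np.array(lags), np.array(gamma)

# toy usage with simulated yield-monitor points (assumed data)
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 500, 400), rng.uniform(0, 500, 400)
z = 4.0 + 0.002 * x + rng.normal(0, 0.3, 400)   # t/ha with a weak spatial trend
lags, gamma = empirical_variogram(x, y, z)
print(np.round(gamma, 3))

A directional variogram restricts the pairs to those whose separation vector falls within an angular tolerance of a chosen azimuth; the binning step above stays the same.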
|
542 |
Efficient index structures for video databases. Acar, Esra, 01 February 2008.
Content-based retrieval of multimedia data is still an active research area, and the efficient retrieval of video data has proven to be a difficult task for content-based video retrieval systems. This thesis presents a Content-Based Video Retrieval (CBVR) system that adapts two different index structures, namely the Slim-Tree and the BitMatrix, for efficiently retrieving videos based on low-level features such as color, texture, shape and motion. The system represents the low-level features of video data with MPEG-7 descriptors, which are extracted from video shots using the MPEG-7 reference software and stored in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS) and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator aggregates these features in both the Slim-Tree and the BitMatrix to compute the final similarity between any two objects. The system supports three different types of queries: exact match queries, k-NN queries and range queries. The experiments in this study cover index construction, index update, query response time and retrieval efficiency using the ANMRR performance metric and precision/recall scores. The experimental results show that using the BitMatrix together with the Ordered Weighted Averaging method is superior in content-based video retrieval systems.
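The Ordered Weighted Averaging step can be illustrated with a small sketch: the per-descriptor distances (e.g. for CL, DC, EH, RS and MA) are sorted and combined with a fixed weight vector that sums to one. The weights and distances below are placeholders for illustration, not the values used in the thesis.

def owa(distances, weights):
    """Ordered Weighted Averaging: sort the per-descriptor distances in
    descending order and take the weighted sum with a weight vector that
    sums to 1. The result is the aggregate dissimilarity of two objects."""
    assert len(weights) == len(distances) and abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(distances, reverse=True)
    return sum(w * d for w, d in zip(weights, ordered))

# distances of a query shot to a database shot for CL, DC, EH, RS, MA (toy numbers)
d = [0.42, 0.15, 0.30, 0.55, 0.20]
print(owa(d, [0.3, 0.25, 0.2, 0.15, 0.1]))   # emphasises the worst-matching descriptors

Choosing more top-heavy weights makes the aggregate behave like a max (all descriptors must match), while flatter weights approach a plain average.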
|
543 |
Υλοποίηση αριθμητικών μονάδων υπολοίπου 2^n+1 με αριθμητική των n δυαδικών ψηφίων / Implementation of modulo 2^n+1 arithmetic units using n-bit arithmetic. Μαριδάκης, Νικόλαος, 25 February 2010.
The Residue Number System (RNS) is an arithmetic system that offers significant advantages in the speed with which arithmetic operations can be performed; in an RNS, numbers are represented as a set of residues. RNS applications span a wide range of science and technology, so considerable effort has gone into developing high-performance arithmetic units such as adders, multipliers, root-computation circuits and residue generators (RG). RNS bases with three moduli of the form {2^n-1, 2^n, 2^n+1} are used very often, because highly efficient combinational circuits have been built for encoding to and decoding from the binary system; the design of efficient modulo 2^n-1, modulo 2^n and modulo 2^n+1 arithmetic units is therefore vital for applications that use the RNS. Of these moduli, 2^n+1 requires the most demanding circuits, since it is the only one that produces numbers of n+1 bits. In modulo 2^n+1 arithmetic, numbers usually appear in two representations, the weighted representation and the diminished-1 representation, whose differing characteristics make them suitable for different applications. This thesis presents a technique that combines the advantages of the two representations, offering circuits with smaller area that are usually also faster. The technique is applied to multi-operand modulo 2^n+1 adders (MOMA), to fast modulo 2^n+1 adders and to residue generators (RG), and their performance is studied in comparison with the most widespread corresponding architectures.
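A small sketch may help with the diminished-1 representation referred to above: a nonzero operand A in [1, 2^n] is stored as A-1 on n bits, and two diminished-1 values are added with an inverted end-around carry (the carry out of the n-bit addition is complemented and added back in). This is a textbook illustration of the representation, not the architecture proposed in the thesis; zero operands, which need a separate flag, are ignored here.

def dim1_add(a_star, b_star, n):
    """Add two nonzero modulo 2**n + 1 operands given in diminished-1 form
    (A - 1 and B - 1 on n bits); returns (A + B) mod (2**n + 1), diminished by 1.
    Implements the inverted end-around-carry rule: S* = (A* + B* + NOT cout) mod 2**n."""
    mask = (1 << n) - 1
    t = a_star + b_star
    cout = t >> n                      # carry out of the n-bit addition
    return (t + (1 - cout)) & mask     # add the complemented carry back in

# exhaustive check against ordinary modular arithmetic for n = 4 (modulus 17)
n, M = 4, 17
for A in range(1, 2**n + 1):
    for B in range(1, 2**n + 1):
        S = (A + B) % M
        if S != 0:                     # a zero result needs the separate zero flag
            assert dim1_add(A - 1, B - 1, n) == S - 1
print("diminished-1 addition verified for nonzero sums, n = 4")

The appeal of the representation is visible in the code: the datapath stays n bits wide even though the modulus needs n+1 bits in the weighted form.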
|
544 |
Εφαρμογή και αξιολόγηση των μεθόδων Diffusion Weighted Imaging και Diffusion Tensor Imaging σε χωροκατακτητικές νόσους του κεντρικού νευρικού συστήματος / Application and evaluation of Diffusion Weighted Imaging and Diffusion Tensor Imaging in space-occupying diseases of the central nervous system. Διαμαντής, Απόστολος, 07 June 2013.
Diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI) are among the most widely used magnetic resonance imaging (MRI) techniques in brain research. Diffusion (Brownian motion) is a random phenomenon describing the transport of material, such as water molecules, from one spatial position to another over time; in biological tissue, water diffuses inside, outside and around cellular structures as a result of the thermal energy of the molecules. The brain is a highly organised organ with a complex microstructure, and this microstructural organisation constrains the molecular motion of water, so diffusion measurements reflect the structural organisation of the tissue. Each technique is supported by its own algorithm, from which the corresponding parametric maps are derived: DWI yields the apparent diffusion coefficient (ADC), while DTI yields the fractional anisotropy (FA). The ADC expresses how much the diffusion in a region of interest differs from the mean diffusion, while DTI allows rotationally invariant measurement of both the magnitude and the directionality (anisotropy) of water diffusion, with FA quantifying the degree of anisotropy, a value that depends directly on the integrity of the nerve fibres; together, ADC and FA characterise the shape and magnitude of the diffusion ellipsoid and hence the microstructural architecture of the human brain. The two techniques have a broad range of applications (demyelinating diseases, ischaemic stroke, brain tumours, status epilepticus), mainly because the diffusion of water molecules is highly sensitive to alterations in the structure of white-matter fibres, and quantification of diffusion can be especially helpful because it may allow early diagnosis of pathology. The purpose of the present study was to apply the DTI and DWI techniques to three categories of brain tumours (meningiomas, high- and low-grade gliomas, and cerebral metastases), to correlate the changes in FA and ADC between them, and to outline the possibility of presurgical tumour differentiation.
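For reference, the two scalar maps can be written down directly: the mean diffusivity (the directionally averaged ADC) is the mean of the three eigenvalues of the diffusion tensor, and FA is the normalised spread of those eigenvalues; the ADC can also be estimated from two acquisitions as ln(S0/Sb)/b. The sketch below uses these textbook formulas with invented numbers and is not code or data from the study.

import numpy as np

def adc_from_signals(s0, sb, b):
    """Apparent diffusion coefficient from a b = 0 and a b > 0 acquisition:
    ADC = ln(S0 / Sb) / b   (mm^2/s)."""
    return np.log(s0 / sb) / b

def md_fa(eigvals):
    """Mean diffusivity and fractional anisotropy from the diffusion-tensor
    eigenvalues: FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||, in [0, 1]."""
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()
    fa = np.sqrt(1.5) * np.linalg.norm(lam - md) / np.linalg.norm(lam)
    return md, fa

# illustrative values: intact white matter is strongly anisotropic, while many
# tumours show raised diffusivity and reduced FA (the numbers are invented)
print(md_fa([1.7e-3, 0.3e-3, 0.3e-3]))           # high FA (~0.8)
print(md_fa([1.1e-3, 1.0e-3, 0.9e-3]))           # low FA (~0.1)
print(adc_from_signals(s0=1000.0, sb=480.0, b=1000.0))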
|
545 |
Modelling Credit Risk: Estimation of Asset and Default Correlation for an SME Portfolio. Cedeno, Yaxum and Jansson, Rebecca, January 2018.
When banks lend capital to counterparties they take on a risk, known as credit risk, which has traditionally been the largest risk exposure for banks. To be protected against potential default losses when lending capital, banks must hold regulatory capital that is based on a regulatory formula for calculating risk-weighted assets (RWA). This formula is part of the Basel Accords and is implemented in the legal systems of all European Union member states. The key parameters of the RWA formula are the probability of default, the loss given default and the asset correlation. Banks today have the option to estimate the probability of default and the loss given default with internal models; the asset correlation, however, must be determined by a formula provided by the legal framework. This project is a first approach for Handelsbanken to study what would happen if banks were allowed to estimate the asset correlation with internal models. We assess two models for estimating the asset correlation of a portfolio of Small and Medium Enterprises (SME). The estimates are compared with the asset correlation given by the regulatory formula and with estimates of another parameter called default correlation. The models are validated using predicted historical data and Monte Carlo simulations. For the studied SME portfolio, the two models give similar estimates of the asset correlation, and the estimates are lower than those given by the regulatory formula. This would imply a lower capital requirement if banks were allowed to use internal models to estimate the asset correlation used in the RWA formula. Default correlation, if not used synonymously with asset correlation, is shown to be a different measure and should not be used in the RWA formula.
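As a reference point for the comparison described above, the supervisory asset correlation for corporate and SME exposures can be sketched as follows. The sketch follows the Basel II corporate correlation curve with the SME firm-size adjustment as commonly stated; treat it as an illustration under that assumption and consult the exact Basel/CRR text before relying on it.

import math

def regulatory_asset_correlation(pd, sales_eur_m=None):
    """Supervisory asset correlation for corporate exposures, with the
    firm-size adjustment for SMEs (annual sales S in EUR million, capped
    to the interval [5, 50])."""
    w = (1 - math.exp(-50 * pd)) / (1 - math.exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)          # interpolates between 24% (low PD) and 12% (high PD)
    if sales_eur_m is not None:            # SME firm-size adjustment
        s = min(max(sales_eur_m, 5.0), 50.0)
        r -= 0.04 * (1 - (s - 5.0) / 45.0)
    return r

# a borrower with PD = 1%: large corporate vs. a small firm with EUR 10m annual sales
print(round(regulatory_asset_correlation(0.01), 4))                     # ~0.193
print(round(regulatory_asset_correlation(0.01, sales_eur_m=10.0), 4))   # ~0.157

Internally estimated asset correlations would replace the output of this function in the RWA formula, which is what the comparison in the thesis is about.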
|
546 |
Visual topography and perceptual learning in the primate visual system. Tang-Wright, Kimmy, January 2016.
The primate visual system is organised and wired in a topological manner. From the eye well into extrastriate visual cortex, a preserved spatial representation of the visual world is maintained across many levels of processing. Diffusion-weighted imaging (DWI), together with probabilistic tractography, is a non-invasive technique for mapping connectivity within the brain. In this thesis I probed the sensitivity and accuracy of DWI and probabilistic tractography by quantifying its capacity to detect topological connectivity in the post mortem macaque brain, between the lateral geniculate nucleus (LGN) and primary visual cortex (V1). The results were validated against electrophysiological and histological data from previous studies. Using the methodology developed in this thesis, it was possible to segment the LGN reliably into distinct subregions based on its structural connectivity to different parts of the visual field represented in V1. Quantitative differences in connectivity from magno- and parvocellular subcomponents of the LGN to different parts of V1 could be replicated with this method in post mortem brains. The topological corticocortical connectivity between extrastriate visual area V5/MT and V1 could also be mapped in the post mortem macaque. In vivo DWI scans previously obtained from the same brains have lower resolution and signal-to-noise because of the shorter scan times. Nevertheless, in many cases, these yielded topological maps similar to the post mortem maps. These results indicate that the preserved topology of connection between LGN to V1, and V5/MT to V1, can be revealed using non-invasive measures of diffusion-weighted imaging and tractography in vivo. In a preliminary investigation using Human Connectome data obtained in vivo, I was not able to segment the retinotopic map in LGN based on connections to V1. This may be because information about the topological connectivity is not carried in the much lower resolution human diffusion data, or because of other methodological limitations. I also investigated the mechanisms of perceptual learning by developing a novel task-irrelevant perceptual learning paradigm designed to adapt neuronal elements early on in visual processing in a certain region of the visual field. There is evidence, although not clear-cut, to suggest that the paradigm elicits task-irrelevant perceptual learning, but that these effects only emerge when practice-related effects are accounted for. When orientation and location specific effects on perceptual performance are examined, the largest improvement occurs at the trained location, however, there is also significant improvement at one other 'untrained' location, and there is also a significant improvement in performance for a control group that did not receive any training at any location. The work highlights inherent difficulties in investigating perceptual learning, which relate to the fact that learning likely takes place at both lower and higher levels of processing, however, the paradigm provides a good starting point for comprehensively investigating the complex mechanisms underlying perceptual learning.
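Connectivity-based segmentation of the kind described (assigning each LGN voxel to the V1 target with which it connects most strongly) can be sketched generically as below. The array names and the winner-take-all rule are illustrative assumptions, not the exact tractography pipeline used in the thesis.

import numpy as np

def segment_by_connectivity(streamline_counts):
    """streamline_counts: array of shape (n_voxels, n_targets) holding, for each
    seed voxel (e.g. in the LGN), the number of probabilistic streamlines that
    reach each target region (e.g. parts of the V1 retinotopic map).
    Returns a hard label per voxel (-1 where no streamlines reach any target)."""
    counts = np.asarray(streamline_counts)
    labels = counts.argmax(axis=1)
    labels[counts.sum(axis=1) == 0] = -1      # unconnected voxels stay unlabelled
    return labels

# toy example: 4 seed voxels, 3 V1 subregions (foveal, parafoveal, peripheral)
counts = np.array([[120,  10,   0],
                   [ 15, 200,  30],
                   [  0,   5, 180],
                   [  0,   0,   0]])
print(segment_by_connectivity(counts))        # -> [ 0  1  2 -1]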
|
547 |
Towards semantic language processing / Mot semantisk språkbearbetning. Jonsson, Anna, January 2018.
The overall goal of the field of natural language processing is to facilitate the communication between humans and computers, and to help humans with natural language problems such as translation. In this thesis, we focus on semantic language processing. Modelling semantics – the meaning of natural language – requires both a structure to hold the semantic information and a device that can enforce rules on the structure to ensure well-formed semantics while not being too computationally heavy. The devices used in natural language processing are preferably weighted to allow for comparison of the alternative semantic interpretations outputted by a device. The structure employed here is the abstract meaning representation (AMR). We show that AMRs representing well-formed semantics can be generated while leaving out AMRs that are not semantically well-formed. For this purpose, we use a type of graph grammar called contextual hyperedge replacement grammar (CHRG). Moreover, we argue that a more well-known subclass of CHRG – the hyperedge replacement grammar (HRG) – is not powerful enough for AMR generation. This is due to the limitation of HRG when it comes to handling co-references, which in its turn depends on the fact that HRGs only generate graphs of bounded treewidth. Furthermore, we also address the N best problem, which is as follows: Given a weighted device, return the N best (here: smallest-weighted, or more intuitively, smallest-errored) structures. Our goal is to solve the N best problem for devices capable of expressing sophisticated forms of semantic representations such as CHRGs. Here, however, we merely take a first step consisting in developing methods for solving the N best problem for weighted tree automata and some types of weighted acyclic hypergraphs.
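The N best problem itself is easy to state on a simplified device. The sketch below enumerates the N lightest source-to-target paths of a weighted acyclic graph with a priority queue over partial paths; it is meant only as an intuition pump for the problem statement, since the thesis targets the much richer setting of weighted tree automata and weighted acyclic hypergraphs, where the structures are trees rather than paths.

import heapq

def n_best_paths(edges, source, target, n):
    """Return the n lowest-weight source-to-target paths of a weighted DAG.
    edges: dict mapping a node to a list of (next_node, weight) pairs.
    Weights are non-negative and added along a path; smaller is better."""
    heap = [(0.0, [source])]                  # (accumulated weight, path so far)
    best = []
    while heap and len(best) < n:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == target:
            best.append((cost, path))         # popped in order of increasing weight
            continue
        for nxt, w in edges.get(node, []):
            heapq.heappush(heap, (cost + w, path + [nxt]))
    return best

# toy graph: three "derivations" of differing weight from "s" to "t"
g = {"s": [("a", 1.0), ("b", 2.5)], "a": [("t", 1.0), ("b", 0.2)], "b": [("t", 0.5)]}
for cost, path in n_best_paths(g, "s", "t", 3):
    print(cost, "->".join(path))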
|
548 |
Image quality of standard and synthetic diffusion weighted magnetic resonance imaging in prostate cancer. Baker, Adam Timothy, 24 October 2018.
The extension from quantitative magnetic resonance imaging to synthetic imaging has the clear advantage that additional images of the patient can be generated after the examination is complete. MR techniques such as DWI are commonly used but have some clear disadvantages resulting from the use of echo-planar imaging. It should therefore be asked whether one imaging technique is objectively better: if so, its incorporation in clinical settings could improve diagnostic rates and save valuable time. To assess the quality of these techniques quantitatively, the SNR and CNR values of similar tissues were compared. The pre-analysis discussion, which concentrates on spatial resolution and artifacts, supports the view that synthetic images have an advantage over DWI because of their higher resolution and absence of artifacts. The SNR and CNR values were calculated for each patient and image type for the comparison, with the initial assumption that the synthetic images would have a higher mean SNR and CNR. In most cases the differences between scan types were found not to be statistically significant. In conclusion, this analysis could not support the initial hypothesis that the synthetic images had a higher SNR or CNR; the research shows that they are more likely to be comparable. An investigation of the diagnostic power of the synthetic images in comparison to standard DWI would give clinical relevance to these results.
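The two figures of merit used for the comparison follow directly from region-of-interest statistics. A minimal sketch is given below, assuming the simple "mean tissue signal over the standard deviation of a background/noise ROI" definition; exact SNR/CNR definitions vary between studies, and the pixel values here are invented.

import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean intensity of a tissue ROI divided by the
    standard deviation of a noise (background) ROI."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, e.g. a suspected lesion and
    normal prostate tissue: absolute difference of the ROI means divided by the
    noise standard deviation."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

# toy pixel samples (invented numbers, arbitrary units)
lesion = np.array([310.0, 295.0, 320.0, 305.0])
normal = np.array([180.0, 175.0, 190.0, 185.0])
background = np.array([4.0, -3.0, 5.0, -2.0, 1.0])
print(round(snr(lesion, background), 1), round(cnr(lesion, normal, background), 1))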
|
549 |
Utilização de Média Móvel Exponencialmente Ponderada para detectar e corrigir os Estilos de Aprendizagem do estudante / Use of an Exponentially Weighted Moving Average to detect and correct the student's Learning Styles. Ribeiro, Patrick Aurélio Luiz, 28 September 2017.
In distance education, Virtual Learning Environments, or Learning Management Systems (LMS), are fundamental elements of the teaching and learning process: they make content available and provide spaces for discussion and communication between the actors in that process. Most such environments, however, are static and apply the same generic pedagogical methods to all learners, even though students differ in their characteristics and Learning Styles (LS). Taking each student's LS into account is therefore important for making learning more effective. Psychometric questionnaires are most often used to identify a student's learning characteristics, but they do not always give precise results for a given student's LS, so other detection techniques are needed, since a precise identification can improve the learning process through the choice of better pedagogical strategies. This motivates intelligent systems that adapt to the student's learning characteristics, taking as input the experiences the student has had and statistical analyses of those experiences: the LS presented by the student are assessed, and from the results a new student model is defined so that content can be delivered, and pedagogical strategies chosen, according to that model. The present approach aimed to identify and correct the student's LS by using the concept of the Exponentially Weighted Moving Average (EWMA) in the decision about whether to apply a reinforcement when adjusting the Student Model. After the non-parametric Mann-Whitney statistical test was applied, the results obtained proved significantly better than those reported by Dorça (2012), whose work was the reference for the development of this proposal. Dissertation (Professional Master's), Graduate Program in Education, Universidade Federal dos Vales do Jequitinhonha e Mucuri, 2017.
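The core update used for the decision is the standard EWMA recursion, S_t = lambda * x_t + (1 - lambda) * S_{t-1}. The sketch below shows the recursion together with a simple threshold rule for deciding whether a reinforcement of the student model is warranted; the smoothing factor, the performance scale and the decision rule are illustrative assumptions, not the exact scheme evaluated in the dissertation.

def ewma_update(prev, x, lam=0.3):
    """One step of the exponentially weighted moving average."""
    return lam * x + (1 - lam) * prev

def needs_reinforcement(performances, lam=0.3, threshold=0.6):
    """Track an EWMA of normalised learning-object scores in [0, 1] and signal
    that the student model (learning-style estimates) should be reinforced
    whenever the smoothed performance drops below the threshold."""
    s = performances[0]
    decisions = []
    for x in performances[1:]:
        s = ewma_update(s, x, lam)
        decisions.append((round(s, 3), s < threshold))
    return decisions

# toy run: performance degrades over time, so later steps trigger the signal
print(needs_reinforcement([0.9, 0.85, 0.7, 0.5, 0.4, 0.45]))

Compared with a plain average, the EWMA discounts old observations geometrically, so the decision reacts to recent performance without being dominated by a single noisy score.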
|
550 |
Ekonomická přidaná hodnota / Economic Value Added. TAUSCHOVÁ, Petra, January 2015.
This thesis focuses on assessing the financial situation of the analysed company using economic value added. The theoretical part characterises economic value added, describing its origin, its calculation variants and the possibilities for using this indicator; it defines the adjustments of accounting data needed to obtain economic data, the procedures for determining the cost of capital, and the pyramid decomposition of economic value added. The practical part calculates economic value added for the rated company in both the entity and the equity variant, compares the results of these two variants, performs a pyramid decomposition of EVA (equity variant) for 2014 compared with 2013, and considers alternatives to economic value added.
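The two calculation variants mentioned can be written out directly. Below is a small sketch under the usual textbook definitions (NOPAT, WACC and invested capital C for the entity variant; ROE, the cost of equity r_e and equity E for the equity variant); the figures are invented for illustration, not the company's data.

def eva_entity(nopat, wacc, invested_capital):
    """Entity variant: EVA = NOPAT - WACC * C."""
    return nopat - wacc * invested_capital

def eva_equity(roe, cost_of_equity, equity):
    """Equity variant ('spread' form): EVA = (ROE - r_e) * E."""
    return (roe - cost_of_equity) * equity

# toy figures in CZK thousand
print(eva_entity(nopat=12_500, wacc=0.085, invested_capital=110_000))   # 3150.0
print(eva_equity(roe=0.14, cost_of_equity=0.105, equity=70_000))        # 2450.0

A positive value under either variant means the company earned more than its (weighted or equity) cost of capital in the period; the pyramid decomposition then splits that spread into its operating and financing drivers.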
|