1

Weighted layered space-time code with iterative detection and decoding

Karim, Md Anisul January 2006 (has links)
Master of Engineering (Research) / Multiple-antenna systems are an appealing candidate for emerging fourth-generation wireless networks due to their potential to exploit space diversity, increasing throughput without consuming additional bandwidth or power. In particular, the layered space-time (LST) architecture proposed by Foschini achieves a significant fraction of the theoretical capacity with reasonable implementation complexity. Detecting space-time signals poses considerable challenges, especially the design of a low-complexity detector that can efficiently remove multi-layer interference and approach the interference-free bound. Applying the iterative principle to joint detection and decoding has proved a promising approach: an iterative receiver with a parallel interference canceller (PIC) has low, linear complexity and near interference-free performance. Furthermore, it is widely accepted that the performance of digital communication systems can be considerably improved when channel state information (CSI) is used to optimize the transmit signal. This thesis addresses the design of a power allocation strategy for the LST architecture that simultaneously optimizes coding, diversity and weighting gains. A more practical scenario is also considered by assuming imperfect CSI at the receiver, and the effect of channel estimation errors on an LST architecture with an iterative PIC receiver is investigated. It is shown that imperfect channel estimation at an LST receiver produces erroneous decision statistics at the very first iteration; this error propagates to subsequent iterations and ultimately degrades the overall performance severely. We design a transmit power allocation policy that takes the imperfection of the channel estimation process into account. The transmit power of the various layers is optimized by minimizing the average bit error rate (BER) of the LST architecture with a low-complexity iterative PIC detector. At the receiver, the PIC detector performs interference regeneration and cancellation simultaneously for all layers. A convolutional code is used as the constituent code, and the iterative decoding principle passes a posteriori probability estimates between the detector and the decoders. The decoder is based on the maximum a posteriori (MAP) algorithm. A closed-form optimal solution for power allocation in terms of the minimum BER is obtained. Substantial simulation results are provided to validate the effectiveness of the proposed schemes.
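The weighted-layer PIC detection described above can be sketched compactly. The following is a minimal illustration, not the thesis implementation: it assumes BPSK layers, uses hard symbol decisions in place of the decoder's a posteriori probability estimates, and the per-layer weights are hypothetical stand-ins for the optimized power allocation.

```python
# Minimal sketch of weighted layered space-time detection with parallel
# interference cancellation (PIC). NOT the thesis implementation: BPSK layers,
# hard decisions instead of MAP decoder feedback, hypothetical layer weights.
import numpy as np

def pic_detect(y, H, w, n_iters=3):
    """Detect BPSK layers from y = H @ (w * x) + noise via iterative PIC."""
    n_layers = H.shape[1]
    Hw = H * w                    # fold per-layer transmit weights into channel
    x_hat = np.zeros(n_layers)    # initial estimates: no feedback yet
    for _ in range(n_iters):
        x_new = np.empty(n_layers)
        for k in range(n_layers):
            # Regenerate interference from all other layers and cancel it.
            interference = Hw @ x_hat - Hw[:, k] * x_hat[k]
            residual = y - interference
            # Matched-filter the cleaned signal, then take a hard decision.
            x_new[k] = np.sign(Hw[:, k] @ residual)
        x_hat = x_new             # all layers updated simultaneously
    return x_hat

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) / np.sqrt(2)   # i.i.d. Rayleigh-like channel
w = np.array([1.2, 1.1, 0.9, 0.8])         # hypothetical per-layer weights
x = rng.choice([-1.0, 1.0], size=4)        # transmitted BPSK symbols
y = H @ (w * x) + 0.1 * rng.normal(size=4)
print(pic_detect(y, H, w), x)
```

In the thesis the weights come from the closed-form minimum-BER solution and the cancelled estimates from MAP decoders; the loop above only illustrates the simultaneous regenerate-and-cancel structure across layers.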
2

Forecasting the Equity Premium and Optimal Portfolios

Bjurgert, Johan, Edstrand, Marcus January 2008 (has links)
The expected equity premium is an important parameter in many financial models, especially in portfolio optimization, so a good forecast of the future equity premium is of great interest. In this thesis we forecast the equity premium, use it in portfolio optimization, and then show how sensitive the results are to estimation errors and how their impact can be minimized. Linear prediction models are commonly used by practitioners to forecast the expected equity premium, with mixed results. Simply choosing the model that performs best in-sample does not take model uncertainty into account. Our approach is to keep linear prediction models but account for model uncertainty by applying Bayesian model averaging. The predictions are then used in the optimization of a portfolio of risky assets to investigate how sensitive portfolio optimization is to estimation errors in the mean vector and covariance matrix. This is done with a Monte Carlo based heuristic called portfolio resampling. The results show that the predictive ability of linear models is not substantially improved by taking model uncertainty into account, which suggests that the main problem with linear models is not model uncertainty but simply low predictive ability. However, our approach does give better forecasts than using the historical average as an estimate, and we find some predictive ability in GDP, the short-term spread and volatility for the five years to come. Portfolio resampling proves useful when the input parameters of a portfolio optimization problem suffer from substantial uncertainty.
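As a rough illustration of the model-averaging step, the sketch below combines forecasts from all subsets of a few candidate predictors, weighting each linear model by a BIC approximation to its posterior probability. It is a simplified stand-in for the thesis's Bayesian model averaging, and the predictor names in the usage example are illustrative placeholders.

```python
# Minimal sketch of Bayesian model averaging over linear prediction models:
# each candidate OLS regression is weighted by a BIC approximation to its
# marginal likelihood, and the combined forecast is the weighted average.
# NOT the authors' implementation; data and predictors are synthetic.
import numpy as np
from itertools import combinations

def bic_weighted_forecast(X, y, x_new):
    """Average OLS forecasts over all predictor subsets with BIC weights."""
    n, p = X.shape
    bics, preds = [], []
    for k in range(1, p + 1):
        for cols in combinations(range(p), k):
            Xs = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            sigma2 = resid @ resid / n
            bics.append(n * np.log(sigma2) + Xs.shape[1] * np.log(n))
            preds.append(np.concatenate([[1.0], x_new[list(cols)]]) @ beta)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()                 # approximate posterior model probabilities
    return w @ np.array(preds)

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))    # e.g. GDP growth, term spread, volatility
y = 0.4 * X[:, 1] + rng.normal(scale=0.5, size=120)  # synthetic premium series
print(bic_weighted_forecast(X, y, X[-1]))
```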
3

Une méthode d'optimisation hybride pour une évaluation robuste de requêtes / A Hybrid Method for Robust Query Processing

Moumen, Chiraz 29 May 2017 (has links)
The quality of an execution plan generated by a query optimizer depends heavily on the quality of the estimates produced by the cost model. Unfortunately, these estimates are often imprecise. A body of work has sought to improve estimate accuracy, but obtaining accurate estimates remains very challenging, since it requires prior and detailed knowledge of the data properties and run-time characteristics. Motivated by this issue, two main optimization approaches have been proposed. The first relies on single-point estimates to choose an optimal execution plan. At run time, statistics are collected and compared with the estimates; if an estimation error is detected, a re-optimization is triggered for the rest of the plan. At each invocation, the optimizer uses specific values for the parameters required for cost calculations, so this approach can induce several plan re-optimizations, resulting in poor performance. To avoid this, a second approach considers the possibility of estimation errors at optimization time. This is modelled by using multi-point estimates for each error-prone parameter, the aim being to anticipate the reaction to a possible plan sub-optimality. Methods in this approach seek to generate robust plans, which are able to provide good performance under several run-time conditions. These methods often assume that a robust plan exists for all expected run-time conditions, an assumption that remains unjustified. Moreover, most of these methods keep an execution plan unmodified until termination, which can lead to poor performance if robustness is violated at run time. Based on these findings, this thesis proposes a hybrid optimization method with two objectives: producing robust execution plans, particularly when the uncertainty in the estimates is high, and correcting a robustness violation during execution. The method uses intervals of estimates around error-prone parameters to produce execution plans that are likely to perform reasonably well under different run-time conditions, so-called robust plans. Robust plans are then augmented with what we call check-decide operators, which collect statistics at run time and check the robustness of the current plan. If robustness is violated, the check-decide operators can decide on modifications to the rest of the plan without recalling the optimizer. Performance studies indicate that the method provides significant improvements in the robustness of query processing.
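The interval-based plan choice and the check-decide correction can be caricatured in a few lines. The sketch below is not the thesis prototype: the plan names and cost functions are invented placeholders, worst-case cost is read off the interval endpoints (which suffices only for monotone cost functions), and the re-decision simply re-ranks the candidate plans at the observed cardinality.

```python
# Minimal sketch of the check-decide idea: pick the plan with the smallest
# worst-case cost over a cardinality interval, then switch the remaining plan
# at run time if the observed statistic falls outside that interval.
# NOT the thesis prototype; plans and costs are illustrative placeholders.
def robust_choice(plans, card_interval):
    """plans: {name: cost_fn(cardinality)}; minimize worst-case cost.
    Endpoint evaluation assumes the cost functions are monotone."""
    lo, hi = card_interval
    return min(plans, key=lambda p: max(plans[p](lo), plans[p](hi)))

def check_decide(plans, card_interval, observed_card):
    """Re-decide the rest of the plan if the observed cardinality violates
    the interval the initial choice was optimized for."""
    lo, hi = card_interval
    current = robust_choice(plans, card_interval)
    if not lo <= observed_card <= hi:
        # Robustness violated: re-rank locally, without a full re-optimization.
        current = min(plans, key=lambda p: plans[p](observed_card))
    return current

plans = {
    "nested_loop": lambda n: 5 * n * n,     # cheap for small inputs
    "hash_join":   lambda n: 200 + 10 * n,  # flat cost, wins for large inputs
}
print(check_decide(plans, (1, 5), observed_card=500))  # violation -> hash_join
```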
4

Rodízio de auditoria e a qualidade dos lucros: uma análise a partir dos accruals residuais / Audit firm rotation and earnings quality: an analysis based on residual accruals

Silvestre, Adalene Olivia 20 December 2016 (has links)
Independent auditing plays an important role in the relationship between a company and users external to the entity, and the auditor must be independent of the audited company. In Brazil, mandatory audit firm rotation is regulated by CVM Instruction 308/99 in an attempt to help preserve auditor independence and, consequently, the quality of earnings disclosed by companies. This study therefore analyzes the effect of audit firm rotation on the earnings quality of Brazilian public companies listed on BM&FBOVESPA in the period from 2008 to 2015. Residual accruals, which identify the discretionary portion of accruals and serve as an inverse measure of earnings quality, were used as the quality measure. The residual accruals were approached from two different perspectives: earnings management, measured by the Jones (1991) model and the Jones model as modified by Dechow, Sloan and Sweeney (1995), and estimation errors, measured by the Dechow and Dichev (2002) model and the Dechow and Dichev model as modified by McNichols (2002). The results show that audit firm rotation reduces the volume of residual accruals and thus increases earnings quality when these are measured from the earnings-management perspective, through the Jones and modified Jones models. However, the effect of audit firm rotation on earnings quality is not observed when residual accruals are measured from the perspective of accounting estimation errors, through the Dechow and Dichev and McNichols models. On the other hand, the results show that companies that rotate audit firms voluntarily have larger residual accruals and, consequently, lower earnings quality.
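For concreteness, the sketch below estimates residual accruals under the Jones (1991) model, one of the four models used in the study: total accruals scaled by lagged assets are regressed on the change in revenues and on gross PP&E, and the regression residuals proxy for discretionary accruals. The data in the example are synthetic and the variable choices are illustrative.

```python
# Minimal sketch of residual (discretionary) accruals under the Jones (1991)
# model: TA/A = a*(1/A) + b*(dREV/A) + c*(PPE/A) + e, where A is lagged total
# assets and the residuals e are the inverse earnings-quality proxy.
# NOT the study's estimation; the data below are synthetic.
import numpy as np

def jones_residual_accruals(total_accruals, d_revenue, ppe, lagged_assets):
    """Return OLS residuals of the asset-scaled Jones regression."""
    A = lagged_assets
    y = total_accruals / A
    X = np.column_stack([1.0 / A, d_revenue / A, ppe / A])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta          # discretionary (residual) accruals

rng = np.random.default_rng(2)
assets = rng.uniform(100, 1000, size=50)   # lagged total assets
d_rev = rng.normal(scale=50, size=50)      # change in revenues
ppe = rng.uniform(50, 500, size=50)        # gross property, plant & equipment
ta = 0.05 * d_rev - 0.02 * ppe + rng.normal(scale=5, size=50)
print(jones_residual_accruals(ta, d_rev, ppe, assets)[:5])
```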
5

Reliable Communications under Limited Knowledge of the Channel

Yazdani, Raman Unknown Date
No description available.
