31 |
Limited Feedback Information in Wireless Communications: Transmission Schemes and Performance Bounds. Kim, Thanh Tùng. January 2008
This thesis studies some fundamental aspects of wireless systems with partial channel state information at the transmitter (CSIT), with a special emphasis on the high signal-to-noise ratio (SNR) regime. The first contribution is a study on multi-layer variable-rate communication systems with quantized feedback, where the expected rate is chosen as the performance measure. Iterative algorithms exploiting results in the literature of parallel broadcast channels are developed to design the system parameters. Necessary and sufficient conditions for single-layer coding to be optimal are derived. In contrast to the ergodic case, it is shown that a few bits of feedback information can improve the expected rate dramatically. The next part of the thesis is devoted to characterizing the tradeoff between diversity and multiplexing gains (D-M tradeoff) over slow fading channels with partial CSIT. In the multiple-input multiple-output (MIMO) case, we introduce the concept of minimum guaranteed multiplexing gain in the forward link and show that it influences the D-M tradeoff significantly. It is demonstrated that power control based on the feedback is instrumental in achieving the D-M tradeoff, and that rate adaptation is important in obtaining a high diversity gain even at high rates. Extending the D-M tradeoff analysis to decode-and-forward relay channels with quantized channel state feedback, we consider several different scenarios. In the relay-to-source feedback case, it is found that using just one bit of feedback to control the source transmit power is sufficient to achieve the multiantenna upper bound in a range of multiplexing gains. In the destination-to-source-and-relay feedback scenario, if the source-relay channel gain is unknown to the feedback quantizer at the destination, the diversity gain only grows linearly in the number of feedback levels, in sharp contrast to an exponential growth for MIMO channels. 
We also consider the achievable D-M tradeoff of a relay network with the compress-and-forward protocol when the relay is constrained to make use of standard source coding. Under a short-term power constraint at the relay, using source coding without side information results in a significant loss in terms of the D-M tradeoff. For a range of multiplexing gains, this loss can be fully compensated for by using power control at the relay. The final part of the thesis deals with the transmission of an analog Gaussian source over quasi-static fading channels with limited CSIT, taking the SNR exponent of the end-to-end average distortion as performance measure. Building upon results from the D-M tradeoff analysis, we develop novel upper bounds on the distortion exponents achieved with partial CSIT. We show that in order to achieve the optimal scaling, the CSIT feedback resolution must grow logarithmically with the bandwidth ratio for MIMO channels. The achievable distortion exponent of some hybrid schemes with heavily quantized feedback is also derived. As for the half-duplex fading relay channel, combining a simple feedback scheme with separate source and channel coding outperforms the best known no-feedback strategies even with only a few bits of feedback information.
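The D-M tradeoff central to this abstract can be made concrete. As a hedged illustration, the snippet below evaluates the classical optimal tradeoff for an M x N MIMO channel without CSIT (the Zheng-Tse curve, not the partial-CSIT curves derived in the thesis): the piecewise-linear function through the corner points (k, (M-k)(N-k)) for integer k.

```python
# Optimal D-M tradeoff curve d*(r) for an m x n MIMO channel without CSIT
# (Zheng-Tse): piecewise linear through the points (k, (m - k)(n - k)).

def dm_tradeoff(m, n, r):
    """Optimal diversity gain d*(r) at multiplexing gain r, 0 <= r <= min(m, n)."""
    if not 0 <= r <= min(m, n):
        raise ValueError("multiplexing gain r must lie in [0, min(m, n)]")
    k = int(r)                      # corner point at or below r
    d_lo = (m - k) * (n - k)
    if k == r:
        return d_lo
    d_hi = (m - k - 1) * (n - k - 1)
    return d_lo + (r - k) * (d_hi - d_lo)   # linear interpolation

# A 2x2 channel: full diversity 4 at r = 0, down to 0 at r = 2.
print(dm_tradeoff(2, 2, 0))    # 4
print(dm_tradeoff(2, 2, 0.5))  # 2.5
```

Partial CSIT, power control, and rate adaptation shift this frontier upward, which is exactly the effect the thesis quantifies.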
|
32 |
Do Firms Balance Their Operating and Financial Leverage? The Relationship Between Operating and Financial Leverage in Swedish Listed Companies. Löwenthal, Simon; Nyman, Henry. January 2013
Previous research on the tradeoff between operating and financial leverage has produced contradictory results; thus, there is no consensus on van Horne’s tradeoff theory. This study investigates whether there is support for the tradeoff theory in a sample of 347 Swedish listed firms. Unlike previous studies, we employ a method with direct measures, using guidance provided by Penman (2012), rather than the more common degrees of operating and financial leverage as proxies. For the period 2006-2011 we find a statistically significant negative relationship of 0.214, using an OLS regression with financial leverage as the dependent variable, giving support to the tradeoff theory. The explanatory power is, however, rather low: despite the addition of four control variables, the adjusted R² reaches only 7.4%.
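A minimal sketch of the kind of regression described above, with synthetic data standing in for the study's sample (the data-generating process, variable names, and control variable below are invented assumptions for illustration; only the sample size and the negative slope echo the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 347                                    # matches the study's sample size
op_lev = rng.normal(1.0, 0.3, n)           # operating leverage (synthetic)
size = rng.normal(10.0, 1.5, n)            # an assumed control, e.g. log assets
noise = rng.normal(0.0, 0.1, n)
fin_lev = 0.8 - 0.214 * op_lev + 0.02 * size + noise   # assumed "true" model

X = np.column_stack([np.ones(n), op_lev, size])        # intercept + regressors
beta, *_ = np.linalg.lstsq(X, fin_lev, rcond=None)
resid = fin_lev - X @ beta
r2 = 1.0 - resid.var() / fin_lev.var()
print(round(float(beta[1]), 2))   # slope on operating leverage, near -0.214
```

A negative slope on operating leverage is what the tradeoff theory predicts: firms with riskier operations take on less financial leverage.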
|
33 |
Experiments on Redistribution, Trust, and Entitlements. Hall, Daniel T.; Cox, James C. 15 May 2010
This dissertation comprises three essays. The unifying theme is experiments used as the empirical methodology. Each essay is an independent study, but aspects of behavior related to cooperation, trust, and entitlement are present in each essay.
The first essay looks at the efficiency-equality tradeoff of increasing redistribution in a small group setting. Subjects generate a stronger sense of entitlement to their labor earnings by performing a real effort task. Subjects must trust other members of their group to work in order to keep labor a profitable activity under higher levels of redistribution. We find a significant efficiency-equality tradeoff explained by lowered work incentives. Labor supply decisions also show strategic and cooperative behavior similar to behavior found in public goods experiments. The efficiency-equality tradeoff calls for a reconsideration of increasing dependence on the public sector for charity provision.
The second essay investigates how the application of a role-reversal protocol affects behavior in cooperative games. Under a role-reversal protocol, subjects play all possible player roles in the game. We test whether behavioral results from a two-role trust game are robust to applying a role-reversal protocol, and find that paying subjects for one role leads to no significant role-reversal effect, whereas previous studies that paid for both roles found reductions in generosity in both roles.
The third essay is co-authored with Dr. James C. Cox. We test for differences in trust-related behavior under private and common property environments. Subjects participate in a payoff-equivalent 2-person Private Property Trust Game or Common Property Trust Game. We strengthen property-right entitlements by asking subjects to perform a real-effort task to earn their private or common property endowments. Strengthening entitlements leads to behavioral differences between the two trust games not previously found: second-mover generosity in response to first-mover decisions is lower in the Common Property Trust Game. Second movers are relatively less generous because first movers overturn the status quo opportunity set, which is the most generous and signals “full trust.” Many first movers anticipate this and respond optimally by choosing extremes that signal “full trust” or “no trust” in the game.
|
34 |
Delay-Throughput Analysis in Distributed Wireless Networks. Abouei, Jamshid. January 2009
A primary challenge in wireless networks is to use available resources efficiently so that Quality of Service (QoS) requirements are satisfied while the throughput of the network is maximized. Among resource allocation strategies, power and spectrum allocation have long been regarded as efficient tools to mitigate interference and improve network throughput. Achieving a low transmission delay is also an important QoS requirement in buffer-limited networks, particularly for users with real-time services, where excessive delay causes packets to be dropped. The main challenge in networks with real-time services is therefore to use an efficient power allocation scheme that minimizes delay while achieving high throughput. This dissertation addresses these problems in distributed wireless networks.
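The delay-throughput tension the abstract describes can be illustrated with the simplest queueing model. This M/M/1 example is a generic textbook sketch under assumed rates, not the dissertation's network model: pushing the arrival rate toward the service rate raises throughput but makes the mean delay diverge.

```python
# Mean time in system for an M/M/1 queue: W = 1 / (mu - lam). Carried
# throughput rises with lam, but delay blows up as lam approaches mu.

def mm1_delay(lam, mu):
    """Mean sojourn time; the queue is stable only for lam < mu."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

mu = 10.0                       # assumed service rate, packets per second
for lam in (5.0, 9.0, 9.9):     # pushing throughput toward capacity
    print(lam, mm1_delay(lam, mu))
```

At half load the delay is 0.2 s, but at 99% load it exceeds 10 s; this is the qualitative tradeoff that power allocation schemes for real-time services must manage.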
|
36 |
Neural Correlates of Speed-Accuracy Tradeoff: An Electrophysiological Analysis. Heitz, Richard Philip. 29 March 2007
Recent computational models and physiological studies suggest that simple, two-alternative forced-choice decision making can be conceptualized as the gradual accumulation of sensory evidence. Accordingly, information is sampled over time from a sensory stimulus, giving rise to an activation function, and a response is emitted when this function reaches a criterion level of activity. Critically, the phenomenon known as the speed-accuracy tradeoff (SAT) is modeled as a shift in the response boundaries (criterion). As speed stress increases and the criterion is lowered, the information function travels less distance before reaching threshold. This leads to faster overall responses but also an increased error rate, given that less information is accumulated. Psychophysiological data from electroencephalography (EEG) and single-unit recordings in monkey cortex suggest that these accumulator models are biologically plausible. The present work is an effort to strengthen this position. Specifically, it seeks to demonstrate a neural correlate of criterion and its relationship to behavior. To do so, subjects performed a letter discrimination paradigm under three levels of speed stress while EEG was used to derive the lateralized readiness potential (LRP), a measure known to reflect ongoing motor preparation in motor cortex. In Experiment 1, the amplitude of the LRP was related to speed stress: as subjects were forced to respond more quickly, less information was accumulated before a response was made; in other words, the criterion was lowered. These data are qualified by Experiment 2, which found boundary conditions under which this effect obtains.
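The accumulator account above can be sketched with a generic bounded random walk (a hedged illustration: parameter values are arbitrary assumptions, not fits to the experiments). Lowering the criterion shortens response times but lowers accuracy, which is the SAT pattern the abstract describes.

```python
import random

def trial(criterion, drift=0.1, noise=1.0, rng=random):
    """One accumulation-to-bound trial: returns (num. of steps, correct?)."""
    x, t = 0.0, 0
    while abs(x) < criterion:
        x += drift + rng.gauss(0.0, noise)   # noisy evidence favoring 'correct'
        t += 1
    return t, x > 0                          # upper bound = correct response

def summarize(criterion, n=2000, seed=1):
    rng = random.Random(seed)
    data = [trial(criterion, rng=rng) for _ in range(n)]
    mean_rt = sum(t for t, _ in data) / n
    accuracy = sum(c for _, c in data) / n
    return mean_rt, accuracy

fast = summarize(criterion=2.0)   # speed stress: low criterion
slow = summarize(criterion=8.0)   # accuracy stress: high criterion
print(fast, slow)                 # low criterion: fewer steps, more errors
```

The LRP amplitude result maps onto this sketch as the distance the accumulator travels before the bound is hit.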
|
37 |
Functional genomics of a model ecological species, Daphnia pulex. Malcom, Jacob Wesley. 25 February 2014
Determining the molecular basis of heritable variation in complex, quantitative, ecologically important traits will provide insight into the proximate mechanisms driving phenotypic and ecological variation, and into the molecular evolutionary history of these traits. Furthermore, if the study organism is a “keystone species” whose presence or absence shapes ecological communities, then we extend our understanding of the effects of molecular variation to the level of communities. I examined the molecular basis of variation in 32 ecologically important traits in the freshwater pond keystone species Daphnia pulex, and identified thousands of candidate genes whose variation may affect not just Daphnia phenotypes but the structure of communities. I extended these basic results to address two questions: which genes are associated with the offspring size-number trade-off in Daphnia, and can we identify candidate “keystone gene networks” whose variation may have a particularly strong influence on the eco-evolutionary dynamics of limnetic communities? I found that different genes, with different biological functions, are associated with the trade-off in subsequent broods, and propose a model linking evolutionary frameworks to molecular biological functions. Next, I found that quantitative genetic variation in keystone traits appears to co-vary with the selection regimes to which Daphnia is subject, and identified two candidate gene networks that may underpin this genetic variation. Not only do these results provide a host of molecular hypotheses to be tested as Daphnia matures as a model genomic organism, but they also suggest models that link molecular research with broader themes in ecology, evolution, and behavior.
|
38 |
Evaluation and Optimization of Turnaround Time and Cost of HPC Applications on the Cloud. Marathe, Aniruddha Prakash. January 2014
The popularity of Amazon's EC2 cloud platform has increased in the commercial and scientific high-performance computing (HPC) domain in recent years. However, many HPC users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of the latter's significant communication overhead. We find this view to be quite narrow: the proper metrics for comparing high-performance clusters to EC2 are turnaround time and cost. In this work, we first compare an HPC-grade EC2 cluster to top-of-the-line HPC clusters based on turnaround time and total cost of execution. When measuring turnaround time, we include the expected queue wait time on HPC clusters. Our results show that although, as expected, standard HPC clusters are superior in raw performance, they suffer from potentially significant queue wait times; EC2 clusters may therefore produce better turnaround times due to their typically lower queue wait times. To estimate cost, we developed a pricing model, relative to EC2's node-hour prices, to set node-hour prices for (currently free) HPC clusters. We observe that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. However, despite the potentially lower queue wait and turnaround times, the primary barrier to using clouds for many HPC users is cost. Amazon EC2 provides a fixed-cost option (called on-demand) and a variable-cost, auction-based option (called the spot market). The spot market trades lower cost for potential interruptions that necessitate checkpointing: if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete.
Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 spot market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to 7x cheaper than using the on-demand market and up to 44% cheaper than the best non-redundant, spot-market algorithm. Finally, we extend our adaptive algorithm to exploit several opportunities for cost-savings on the EC2 spot market. First, we incorporate application scalability characteristics into our adaptive policy. We show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56% cost-savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale. Second, we demonstrate potential for obtaining considerable free computation time on the spot market enabled by its hour-boundary pricing model.
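A toy sketch of the on-demand versus spot comparison (the prices, the interruption probability, and the one-hour-checkpoint retry model below are invented assumptions; this is not the dissertation's adaptive algorithm): spot is cheaper per billed hour, but interruptions inflate wall-clock time, which matters under a deadline.

```python
# Toy cost model: on-demand is deterministic; each spot hour is interrupted
# with probability p, the interrupted hour is redone from the last hourly
# checkpoint, and (as under classic spot billing) provider-interrupted
# partial hours are not billed. All prices are invented.

def on_demand_cost(hours, price):
    return hours * price

def spot_stats(hours, spot_price, p_interrupt):
    """Expected (cost, wall-clock hours) for a job needing `hours` of work."""
    attempts_per_hour = 1.0 / (1.0 - p_interrupt)   # geometric retries
    cost = hours * spot_price                       # only completed hours billed
    wall_clock = hours * attempts_per_hour          # failed attempts waste time
    return cost, wall_clock

print(on_demand_cost(10, 0.50))      # 5.0
print(spot_stats(10, 0.15, 0.2))     # (1.5, 12.5)
```

Even this crude model shows the shape of the tradeoff the adaptive algorithm navigates: a large cost saving bought with a longer, riskier schedule.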
|
39 |
Three essays on stock market risk estimation and aggregation. Chen, Hai Feng. 27 March 2012
This dissertation consists of three essays. In the first essay, I estimate a high-dimensional covariance matrix of returns for 88 individual stocks from the S&P 100 index, using daily return data for 1995-2005. This study applies the two-step estimator of the dynamic conditional correlation multivariate GARCH model proposed by Engle (2002b) and Engle and Sheppard (2001), along with variations of this model. This is the first study to estimate variances and covariances of returns using a large number of individual stocks (e.g., Engle and Sheppard (2001) use data on various aggregate sub-indexes of stocks), which avoids errors in estimation of GARCH models with contemporaneous aggregation of stocks (e.g., Nijman and Sentana 1996; Komunjer 2001). Second, this is the first multivariate GARCH study to adopt a systematic general-to-specific approach to the specification of lagged returns in the mean equation. Various alternatives to simple GARCH are considered in the step-one univariate estimation, and the econometric results favour an asymmetric EGARCH extension of Engle and Sheppard’s model.
In essay two, I aggregate the variance-covariance matrix of return risk (estimated using DCC-MVGARCH in essay one) into an aggregate index of return risk. This measure of risk is compared with the standard approach of measuring risk with a simple univariate GARCH model of aggregate returns; in principle the standard approach implies errors in estimation due to contemporaneous aggregation of stocks. The two measures are compared in terms of correlation and economic value: the measures are not perfectly correlated, and the economic value of the improved estimate of risk as calculated here is substantial.
Essay three has three parts. The major part is an empirical study of the aggregate risk-return tradeoff for U.S. stocks using daily data. Recent research indicates that past risk-return studies suffer from inadequate sample sizes, which suggests using daily rather than monthly data. Modeling dynamics and lags is critical in daily models, and this is apparently the first such study to model lags correctly using a general-to-specific approach. It is also the first risk-return study to apply Wu tests for possible problems of endogeneity and measurement error in the risk variable. Results indicate a statistically significant positive relation between expected returns and risk, as predicted by capital asset pricing models.
Development of the Wu test leads naturally into a model relating the aggregate risk of returns to economic variables from the risk-return study. This is the first such model to include lags in variables based on a general-to-specific methodology and to include covariances of such variables. I also derive coefficient links between such models and risk-return models, so in theory these models are more closely related than has been recognized in past literature. Empirical results for the daily model are consistent with theory and indicate that the economic and financial variables explain a substantial part of the variation in the daily risk of returns.
The first section of this essay also investigates at a theoretical and empirical level several alternative index number approaches for aggregating multivariate risk over stocks. The empirical results indicate that these indexes are highly correlated for this data set, so only the simplest indexes are used in the remainder of the essay.
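Step one of the two-step DCC procedure fits a univariate GARCH model to each stock's returns. As a hedged sketch of the machinery involved (synthetic data and fixed, assumed parameters rather than estimated ones), the snippet below runs the GARCH(1,1) conditional-variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1} over a simulated return series:

```python
import random

def garch_filter(returns, omega, alpha, beta):
    """Conditional-variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    h = omega / (1.0 - alpha - beta)
    path = [h]
    for r in returns[:-1]:
        h = omega + alpha * r * r + beta * h
        path.append(h)
    return path

# Simulate a GARCH(1,1) return series with assumed parameters, then filter it.
omega, alpha, beta = 0.05, 0.08, 0.90
rng = random.Random(42)
rets, h = [], omega / (1.0 - alpha - beta)
for _ in range(500):
    r = rng.gauss(0.0, 1.0) * h ** 0.5
    rets.append(r)
    h = omega + alpha * r * r + beta * h

path = garch_filter(rets, omega, alpha, beta)
print(len(path), min(path) > 0)   # 500 True
```

In the actual two-step estimator these parameters are fitted per stock by quasi-maximum likelihood, and step two estimates the dynamic correlations from the standardized residuals.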
|