101

Coding For Multi-Antenna Wireless Systems And Wireless Relay Networks

Kiran, T 11 1900 (has links)
Communication over a wireless channel is a challenging task because of the inherent fading effects. Any wireless communication system therefore employs some form of diversity-improving technique to improve the reliability of the channel. This thesis deals with efficient code design for two different spatial diversity techniques: diversity obtained by employing multiple antennas at the transmitter and/or the receiver, and diversity obtained through cooperative communication between users. In other words, this thesis deals with efficient code design for (1) multiple-input multiple-output (MIMO) channels, and (2) wireless relay channels. Codes for the MIMO channel are termed space-time (ST) codes and those for the relay channels are called distributed ST codes.

The first part of the thesis focuses on ST code construction for the MIMO fading channel with perfect channel state information (CSI) at the receiver and no CSI at the transmitter. As measures of performance we use the rate-diversity tradeoff and the diversity-multiplexing gain (D-MG) tradeoff, two different characterizations of the tradeoff between the rate and the reliability achievable by any ST code. We provide two types of code constructions that are optimal with respect to the rate-diversity tradeoff; one is based on rank-distance codes, which are traditionally applied as codes for storage devices, and the second is based on a matrix representation of a Cayley algebra. The second contribution in ST code constructions concerns codes with a certain nonvanishing determinant (NVD) property. The motivation for these constructions is a recent result on the necessary and sufficient conditions for an ST code to achieve the D-MG tradeoff. Explicit code constructions satisfying these conditions are provided for certain numbers of transmit antennas.

The second part of the thesis focuses on distributed ST code construction for the wireless relay channel. The transmission protocol follows a two-hop model wherein the source broadcasts a vector in the first hop, and in the second hop the relays transmit a vector that is a transformation of the received vector by a relay-specific unitary transformation. While the source and relays do not have CSI, at the destination we assume two different scenarios: (a) a destination with complete CSI, and (b) a destination with only the relay-destination CSI. For both scenarios, we derive a Chernoff bound on the pairwise error probability and propose code design criteria. For the first case, we provide explicit constructions of distributed ST codes with lower decoding complexity compared to codes based on some earlier system models. For the latter case, we propose a novel differential encoding and differential decoding technique and also provide explicit code constructions. At the heart of all these constructions is the cyclic division algebra (CDA) and its matrix representations. We translate the problem of code construction in each of the above scenarios to the problem of constructing CDAs satisfying certain properties. Explicit examples are provided to illustrate each of these constructions.
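As a concrete, if classical, point of reference for the rank and determinant criteria that underlie such constructions, the sketch below builds the well-known 2x2 Alamouti space-time block code and checks that a codeword difference matrix has full rank; the Alamouti code and the symbol values are standard textbook illustrations, not the CDA-based constructions of the thesis.

```python
import numpy as np

def alamouti(s1: complex, s2: complex) -> np.ndarray:
    """2x2 Alamouti space-time block codeword.
    Rows index transmit antennas, columns index time slots."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# Rank criterion for full transmit diversity: the difference of any two
# distinct codewords must have full rank; its determinant magnitude
# governs the coding gain (the quantity behind the NVD property).
X1 = alamouti(1 + 1j, -1 + 1j)   # hypothetical QPSK-style symbols
X2 = alamouti(1 - 1j,  1 + 1j)
diff = X1 - X2
print(np.linalg.matrix_rank(diff))   # 2 -> full diversity for this codeword pair
print(abs(np.linalg.det(diff)))      # nonzero determinant magnitude
```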
102

Robustness versus performance tradeoffs in PID tuning

Amiri, Mohammad Sadegh Unknown Date
No description available.
103

Essays on the Liquidity Trap, Oil Shocks, and the Great Moderation

Nakov, Anton 19 November 2007 (has links)
The thesis studies three distinct issues in monetary economics using a common dynamic general equilibrium approach under the assumptions of rational expectations and nominal price rigidity.

The first chapter deals with the so-called "liquidity trap" - an issue raised originally by Keynes in the aftermath of the Great Depression. Because the nominal interest rate cannot fall below zero, the scope for expansionary monetary policy is limited when the interest rate is near its lower bound. The chapter studies the conduct of monetary policy in such an environment in isolation from other possible stabilization tools (such as fiscal or exchange rate policy). In particular, a standard New Keynesian model economy with Calvo staggered price setting is simulated under various alternative monetary policy regimes, including optimal policy. The challenge lies in solving the (otherwise linear) stochastic sticky-price model with an explicit occasionally binding non-negativity constraint on the nominal interest rate. This is achieved by parametrizing expectations and applying a global solution method known as "collocation". The results indicate that the dynamics, and sometimes the unconditional means, of the nominal rate, inflation and the output gap are strongly affected by uncertainty in the presence of the zero lower bound. Commitment to the optimal rule reduces unconditional welfare losses to around one-tenth of those achievable under discretionary policy, while constant price level targeting delivers losses that are only 60% larger than under the optimal rule. On the other hand, conditional on a strong deflationary shock, simple instrument rules perform substantially worse than the optimal policy, even if the unconditional welfare loss from following such rules is not much affected by the zero lower bound per se.

The second thesis chapter (co-authored with Andrea Pescatori) studies the implications of imperfect competition in the oil market, and in particular the existence of a welfare-relevant tradeoff between inflation and output gap volatility. In the standard New Keynesian model, exogenous oil shocks do not generate any such tradeoff: under a strict inflation targeting policy, the output decline is exactly equal to the efficient output contraction in response to the shock. I propose an extension of the standard model in which the existence of a dominant oil supplier (such as OPEC) leads to inefficient fluctuations in the oil price markup, reflecting a dynamic distortion of the economy's production process. As a result, in the face of oil sector shocks, stabilizing inflation does not automatically stabilize the distance of output from first-best, and monetary policymakers face a tradeoff between the two goals. The model is also a step away from discussing the effects of exogenous oil price changes and towards analyzing the implications of the underlying shocks that cause the oil price to change in the first place. This is an advantage over the existing literature, which treats the macroeconomic effects and policy implications of oil price movements as if they were independent of the underlying source of disturbance. In contrast, the analysis in this chapter shows that, conditional on the source of the shock, a central bank confronted with the same oil price change may find it desirable to either raise or lower the interest rate in order to improve welfare.
The third thesis chapter (co-authored with Andrea Pescatori) studies the extent to which the rise in US macroeconomic stability since the mid-1980s can be accounted for by changes in oil shocks and the oil share in GDP. This is done by estimating the model developed in the second chapter with Bayesian methods over two samples - before and after 1984 - and conducting counterfactual simulations. In doing so we nest two other popular explanations for the so-called "Great Moderation": (1) smaller (non-oil) shocks; and (2) better monetary policy. We find that the reduced oil share can account for around one third of the inflation moderation and about 13% of the GDP growth moderation. At the same time, smaller oil shocks can explain approximately 7% of the GDP growth moderation and 11% of the inflation moderation. Thus, the oil share and oil shocks have played a non-trivial role in the moderation, especially of inflation, even if the bulk of the volatility reduction of output growth and inflation is attributed to smaller non-oil shocks and better monetary policy, respectively.
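A minimal sketch of the occasionally binding non-negativity constraint discussed in the first chapter, expressed through a conventional Taylor-type instrument rule; the functional form and coefficient values are textbook illustrations, not the rules or estimates used in the thesis.

```python
def taylor_rule_with_zlb(pi: float, y_gap: float,
                         r_star: float = 2.0,
                         phi_pi: float = 1.5,
                         phi_y: float = 0.5) -> float:
    """Illustrative Taylor-type rule truncated at the zero lower bound.
    pi and y_gap are inflation and the output gap in percent; the
    coefficients are conventional textbook values, not thesis estimates."""
    i_unconstrained = r_star + phi_pi * pi + phi_y * y_gap
    return max(0.0, i_unconstrained)   # the occasionally binding non-negativity constraint

# A strong deflationary shock drives the desired rate below zero,
# but the actual rate is stuck at the bound:
print(taylor_rule_with_zlb(pi=-2.0, y_gap=-3.0))  # 0.0
print(taylor_rule_with_zlb(pi=1.0, y_gap=0.5))    # 3.75
```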
104

On the Rate-Cost Tradeoff of Gaussian Linear Control Systems with Random Communication Delay

Jia Zhang (13176651) 01 August 2022 (has links)
This thesis studies networked Gaussian linear control systems with random delays. Networked control systems have become a popular research topic because of their versatile applications in daily life, such as smart grids and unmanned vehicles. Research in this area has developed in two directions. The first is to derive the inherent rate-cost relationship of such systems, that is, the minimal transmission rate needed to achieve a given stability requirement. The other is to design achievability schemes, which aim to use as little transmission rate as possible to achieve a given stability requirement. In this thesis, we explore both directions. We assume the sensor-to-controller channels experience independent and identically distributed random delays with bounded support. Our work consists of two parts.

In the first part, we consider networked systems with only one sensor. We focus on deriving a lower bound, R_{LB}(D), on the rate-cost tradeoff with the cost constraint E[x^T x] ≤ D, where x is the state to be controlled. We also propose an achievability scheme that yields an upper bound, R_{UB}(D), on the optimal rate-cost tradeoff. The scheme uses lattice quantization, an entropy encoder, and a certainty-equivalence controller. It performs well, requiring roughly 2 bits per time slot more than R_{LB}(D) to achieve the same stability level. We also generalize the cost function to depend on both the state and the control actions, and characterize the minimal joint state-and-control cost a system can achieve.

The second part focuses on covariance-based fusion scheme design for systems with more than one sensor. We observe that in the multi-sensor scenario the outdated arrivals at the controller, which many existing fusion schemes discard, carry additional information. We therefore design an implementable fusion scheme (CQE), which is the MMSE estimator that uses both the freshest and the outdated information at the controller. Our experiments demonstrate that CQE outperforms the MMSE estimator that uses only the freshest information (LQE), achieving a 15% smaller average L2 norm at the same transmission rate. As a benchmark, we also derive the minimal achievable L2 norm, D_min, for multi-sensor systems. The simulations show that CQE approaches D_min significantly more closely than LQE.
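For intuition about covariance-based fusion of fresh and outdated estimates, the sketch below applies textbook inverse-covariance (information-form) fusion of two independent unbiased estimates of the same state; it illustrates the general idea of weighting information by its covariance, not the CQE estimator of the thesis, and the numerical values are hypothetical.

```python
import numpy as np

def information_fusion(estimates, covariances):
    """Textbook inverse-covariance fusion of independent unbiased estimates
    of the same state: weight each estimate by its information matrix.
    Returns the fused estimate and its covariance."""
    info = sum(np.linalg.inv(P) for P in covariances)
    info_state = sum(np.linalg.inv(P) @ x for P, x in zip(covariances, estimates))
    P_fused = np.linalg.inv(info)
    return P_fused @ info_state, P_fused

# Hypothetical example: a fresh estimate and a delayed, noisier one.
x_fresh, P_fresh = np.array([1.0]), np.array([[0.5]])
x_stale, P_stale = np.array([1.6]), np.array([[2.0]])
x_hat, P_hat = information_fusion([x_fresh, x_stale], [P_fresh, P_stale])
print(x_hat, P_hat)   # fused estimate sits closer to the fresh, low-covariance one
```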
105

Integrating Combinatorial Scheduling with Inventory Management and Queueing Theory

Terekhov, Daria 13 August 2013 (has links)
The central thesis of this dissertation is that by combining classical scheduling methodologies with those of inventory management and queueing theory we can better model, understand and solve complex real-world scheduling problems.

In part II of this dissertation, we provide models of a realistic supply chain scheduling problem that capture both its combinatorial nature and its dependence on inventory availability. We present an extensive empirical evaluation of how well implementations of these models in commercially available software solve the problem. We are therefore able to address, within a specific problem, the need for scheduling to take into account related decision-making processes.

In order to simultaneously deal with combinatorial and dynamic properties of real scheduling problems, in part III we propose to integrate queueing theory and deterministic scheduling. Firstly, by reviewing the queueing theory literature that deals with dynamic resource allocation and sequencing and outlining numerous future work directions, we build a strong foundation for the investigation of the integration of queueing theory and scheduling. Subsequently, we demonstrate that integration can take place on three levels: conceptual, theoretical and algorithmic. At the conceptual level, we combine concepts, ideas and problem settings from the two areas, showing that such combinations provide insights into the trade-off between long-run and short-run objectives. Next, we show that theoretical integration of queueing and scheduling can lead to long-run performance guarantees for scheduling algorithms that have previously been proved only for queueing policies. In particular, we are the first to prove, in two flow shop environments, the stability of a scheduling method that is based on the traditional scheduling literature and utilizes processing time information to make sequencing decisions. Finally, to address the algorithmic level of integration, we present, in an extensive future work chapter, one general approach for creating hybrid queueing/scheduling algorithms.

To our knowledge, this dissertation is the first work that builds a framework for integrating queueing theory and scheduling. Motivated by characteristics of real problems, this dissertation takes a step toward extending scheduling research beyond traditional assumptions and addressing more realistic scheduling problems.
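As a small illustration of a sequencing rule that uses processing-time information, the sketch below compares Shortest Processing Time first with a FIFO ordering on a toy instance; the rule and the data are standard textbook material, not the dissertation's flow-shop method or its stability analysis.

```python
from typing import List

def spt_order(processing_times: List[float]) -> List[int]:
    """Shortest Processing Time first: a classical sequencing rule that
    orders jobs by processing time. Returns job indices in service order."""
    return sorted(range(len(processing_times)), key=lambda j: processing_times[j])

def total_completion_time(processing_times: List[float], order: List[int]) -> float:
    """Sum of job completion times under a given single-machine sequence."""
    t, total = 0.0, 0.0
    for j in order:
        t += processing_times[j]
        total += t
    return total

jobs = [4.0, 1.0, 3.0]                                 # hypothetical processing times
print(total_completion_time(jobs, spt_order(jobs)))    # SPT order [1, 2, 0]: 1 + 4 + 8 = 13
print(total_completion_time(jobs, [0, 1, 2]))          # FIFO order: 4 + 5 + 8 = 17
```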
