211 |
Availability Assessment for Secondary Access in TV White Space
Shi, Lei, January 2012
In recent years, the rapid growth in wireless data traffic has posed not only unique opportunities but also great challenges for the wireless industry. In order to meet the growing demand without excessive cost or energy consumption, one feasible option for the operators is to acquire more spectrum for wireless communication. Unlike the lengthy allocation process for exclusive spectrum licenses, secondary spectrum access is seen as a flexible alternative for obtaining additional spectrum at low cost. In particular, the VHF/UHF TV band, the so-called 'TV White Space', is considered the most promising candidate for secondary access thanks to its well-defined primary usage and favorable propagation characteristics for building penetration and wide-area coverage. Secondary access in the TV band has therefore been studied extensively by both academia and industry. Most of the research has focused on the detection of 'spectrum holes' with sensing technologies, while a few other studies have provided high-level analyses of the potential of TV white space using simplistic secondary-interference models for a single secondary user. Only a limited number of studies have investigated aggregate interference from multiple secondary users, and even these have ignored the adjacent-channel interference from secondary users close to the TV receivers.

Thus, in this thesis we first examine the effect of harmful interference on TV reception from short-range devices transmitting on adjacent channels, and model the cumulative effect of the multi-channel interference observed in the measurements. The basic methodology for evaluating the potential of TV white space is then developed in the second part of the thesis, where we propose a new analytical approach for regulating the secondary transmit power that significantly outperforms the existing method in the regulatory framework. Finally, we combine the aggregate interference model and the basic regulation methodology to extend the analysis from the single-user to the multiple-user case, first with users transmitting only on different adjacent TV channels and later also including users transmitting co-channel.

Our performance evaluation has shown that the effect of adjacent-channel interference at close distance is far from negligible, and that the cumulative effect of multi-channel interference has a substantial impact on the scalability of a secondary system. In fact, adjacent-channel interference proves to be the primary limiting factor on TV white space availability for low-power short-range systems. Nonetheless, because the proposed approach can adapt to varying environmental conditions and consequently exploit spectrum reuse opportunities more efficiently than existing frameworks, a considerable amount of TV white space remains available for short-range secondary devices. / QC 20121106 / QUASAR
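A central quantity in this analysis is the aggregate interference that several short-range devices on adjacent channels produce at a TV receiver. The sketch below is a minimal illustration of that accumulation under an assumed log-distance path-loss model; the transmit powers, distances, adjacent-channel attenuation values, and noise floor are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def path_loss_db(distance_m, exponent=3.5, ref_loss_db=30.0):
    """Simple log-distance path loss (illustrative model, assumed parameters)."""
    return ref_loss_db + 10.0 * exponent * np.log10(np.maximum(distance_m, 1.0))

def aggregate_interference_dbm(tx_power_dbm, distances_m, acs_db):
    """Sum the interference from several secondary devices at one TV receiver.

    tx_power_dbm : transmit power of each secondary device
    distances_m  : distance of each device from the TV receiver
    acs_db       : attenuation of each device's emission into the TV channel
                   (larger for channels further away from the TV channel)
    """
    received_dbm = tx_power_dbm - path_loss_db(distances_m) - acs_db
    total_mw = np.sum(10.0 ** (received_dbm / 10.0))  # interference powers add linearly
    return 10.0 * np.log10(total_mw)

# Four hypothetical devices on different adjacent channels:
tx_power = np.array([20.0, 20.0, 20.0, 20.0])   # dBm (assumed)
distance = np.array([10.0, 15.0, 30.0, 50.0])   # metres (assumed)
acs      = np.array([33.0, 43.0, 50.0, 55.0])   # dB (assumed)

i_agg = aggregate_interference_dbm(tx_power, distance, acs)
noise_floor_dbm = -98.0                          # assumed TV receiver noise floor
print(f"aggregate interference: {i_agg:.1f} dBm, "
      f"{i_agg - noise_floor_dbm:.1f} dB above the assumed noise floor")
```

Even when each device is strongly attenuated in the TV channel, the linear addition of the individual contributions is what ultimately limits how many such devices can operate near a receiver.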
|
212 |
Relaying for Timely and Reliable Message Dissemination in Wireless Distributed Control Systems
Hoang, Le-Nam, January 2015
Distributed control applications enabled by wireless networks are becoming increasingly common. The advantages of wireless access are many, as control systems become mobile, autonomous and connected; examples include platooning and automated factories. However, distributed control systems have stringent requirements on both reliability and timeliness, the latter in terms of deadlines. If the deadline is missed, the packet is considered useless, just like a lost or erroneous packet in a system without deadlines. In addition, wireless channels are, by nature, more exposed to noise and interference than their wired counterparts. Fulfilling the deadline requirements with sufficient reliability is therefore a considerable challenge for distributed control applications. By taking advantage of cooperative communications, however, increased reliability can be achieved with little or no additional delay. Reducing the delay until a message is successfully received is a two-fold problem: providing channel access with a predictable maximum delay, and maximizing the reliability of each transmission once access is granted by the medium access method.

To this end, this thesis proposes a framework that provides a bounded channel access delay and handles the co-existence of the time-triggered and event-driven messages encountered in distributed control applications. In addition, the thesis proposes and evaluates an efficient message dissemination technique based on relaying that maximizes the reliability given a certain deadline, or alternatively determines the delay required to achieve a certain reliability threshold, for both unicast and broadcast scenarios. Numerical results, verified by Monte Carlo simulations, show significant improvements with the proposed relaying scheme compared to a conventional scheme without cooperation, providing more reliable message delivery given a fixed number of available time slots. It also becomes clear in which situations relaying is preferable and in which situations pure retransmissions are preferable, as the relay selection algorithm will always pick the best option. The relay selection algorithm has reasonable complexity and can be used by both routing algorithms and relaying scenarios in any time-critical application, as long as it is used together with a framework that enables predictable channel access. In addition, it can be implemented on top of commercially available transceivers. / ACDC
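The comparison between pure retransmission and relaying within a fixed number of time slots can be made concrete with a small probability calculation. The sketch below assumes a two-slot deadline and independent link errors; the link success probabilities are illustrative assumptions, not values from the thesis.

```python
def prob_retransmission(p_sd: float, slots: int) -> float:
    """Destination hears at least one of `slots` independent source transmissions."""
    return 1.0 - (1.0 - p_sd) ** slots

def prob_two_slot_relay(p_sd: float, p_sr: float, p_rd: float) -> float:
    """Two-slot scheme: the source transmits in slot 1, and a relay that
    overheard it forwards in slot 2.  All links are assumed independent."""
    return p_sd + (1.0 - p_sd) * p_sr * p_rd

# Assumed per-slot link success probabilities (source->dest, source->relay, relay->dest):
p_sd, p_sr, p_rd = 0.70, 0.95, 0.95

direct  = prob_retransmission(p_sd, slots=2)
relayed = prob_two_slot_relay(p_sd, p_sr, p_rd)
best = max([("retransmission", direct), ("relaying", relayed)], key=lambda x: x[1])
print(f"retransmission: {direct:.4f}, relaying: {relayed:.4f} -> choose {best[0]}")
```

With a strong relay the cooperative scheme wins in this example; if the direct link were already good and the relay links poor, the same comparison would favour plain retransmission, which is the kind of per-situation choice a relay selection algorithm makes.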
|
213 |
DSP-based Coherent Optical Systems : Receiver Sensitivity and Coding Aspects
Leong, Miu Yoong, January 2015
User demand for faster access to more data is at a historic high and rising. One of the enabling technologies of the information age is fiber-optic communications, where light carries information from one place to another over optical fiber. Since the technology was first shown to be feasible in the 1970s, it has evolved constantly, with each new generation of fiber-optic systems achieving higher data rates than its predecessor. Today, the most promising approach for further increasing data rates is digital signal processing (DSP)-based coherent optical transmission with multi-level modulation. As multi-level modulation formats are very susceptible to noise and distortions, forward error correction (FEC) is typically used in such systems. However, FEC has traditionally been designed for additive white Gaussian noise (AWGN) channels, whereas fiber-optic systems also suffer from other impairments; for example, there is relatively high phase noise (PN) from the transmitter and local oscillator (LO) lasers.

The contributions of this thesis are in two areas. First, we use a unified approach to analyze the theoretical performance limits of coherent optical receivers and microwave receivers in terms of signal-to-noise ratio (SNR) and bit error rate (BER). Using our general framework, we directly compare the performance of ten coherent optical receiver architectures and five microwave receiver architectures. In addition, we put previous publications into context and identify areas of agreement and disagreement between them. Second, we propose straightforward methods for selecting codes for systems with PN. We focus on Bose-Chaudhuri-Hocquenghem (BCH) codes with simple implementations, which correct pre-FEC BERs around 10^-3. Our methods are semi-analytical and need only short pre-FEC simulations to estimate error statistics. We propose statistical models that can be parameterized based on those estimates, and codes can then be selected analytically from our models. / QC 20150528
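As background to the code-selection problem, the sketch below screens a few candidate BCH codes against a pre-FEC BER of 10^-3 using the textbook binomial model with independent bit errors. That independence assumption is exactly what phase noise breaks, which is why the thesis replaces it with estimated error statistics; the candidate code parameters and the post-FEC target below are assumptions for illustration.

```python
from math import comb

def block_error_prob(n: int, t: int, p: float) -> float:
    """Probability that more than t bit errors hit an n-bit codeword,
    assuming independent bit errors with pre-FEC BER p (binomial model)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(t + 1, n + 1))

pre_fec_ber = 1e-3
target = 1e-12                      # assumed post-FEC block error target
candidates = [(1023, 923, 10),      # (n, k, t) -- illustrative BCH parameters
              (1023, 903, 12),
              (1023, 883, 14)]

for n, k, t in candidates:
    pb = block_error_prob(n, t, pre_fec_ber)
    verdict = "meets target" if pb < target else "too weak"
    print(f"BCH({n},{k}), t={t}: block error ~ {pb:.2e}, rate {k/n:.3f} -> {verdict}")
```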
|
214 |
Scalable Self-Organizing Server Clusters with Quality of Service Objectives
Adam, Constantin, January 2005
Advanced architectures for cluster-based services that have been proposed recently allow for service differentiation, server overload control and high utilization of resources. These systems, however, rely on centralized functions, which limit their ability to scale and to tolerate faults. In addition, they do not have built-in architectural support for automatic reconfiguration in case of failures or the addition or removal of system components.

Recent research in peer-to-peer systems and distributed management has demonstrated the potential benefits of decentralized over centralized designs: a decentralized design can reduce the configuration complexity of a system and increase its scalability and fault tolerance.

This research focuses on introducing self-management capabilities into the design of cluster-based services. Its intended benefits are to make service platforms adapt dynamically to the needs of customers and to environmental changes, while giving service providers the ability to adjust operational policies at run-time.

We have developed a decentralized design that efficiently allocates resources among multiple services inside a server cluster. The design combines the advantages of both centralized and decentralized architectures. It allows a set of QoS objectives to be associated with each service. In case of overload or failures, the quality of service degrades in a controllable manner. We have evaluated the performance of our design through extensive simulations, and the results have been compared with the performance characteristics of ideal systems.
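The thesis design is decentralized; the toy below only illustrates the notion of controllable degradation, namely that when a cluster is overloaded, capacity is granted according to per-service QoS priorities so that lower-priority services are throttled first. The service names, priorities, and numbers are assumptions for illustration, not elements of the proposed architecture.

```python
def allocate(capacity: float, demand: dict, priority: dict) -> dict:
    """Grant capacity to services in priority order; under overload,
    lower-priority services are throttled first (controllable degradation)."""
    granted, remaining = {}, capacity
    for service in sorted(demand, key=lambda s: priority[s], reverse=True):
        granted[service] = min(demand[service], remaining)
        remaining -= granted[service]
    return granted

demand   = {"gold": 400.0, "silver": 300.0, "bronze": 300.0}   # requests/s (assumed)
priority = {"gold": 3, "silver": 2, "bronze": 1}
print(allocate(800.0, demand, priority))
# -> {'gold': 400.0, 'silver': 300.0, 'bronze': 100.0}
```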
|
215 |
Source and Channel Coding for Audiovisual Communication Systems
Kim, Moo Young, January 2004
Topics in source and channel coding for audiovisual communication systems are studied. The goal of source coding is to represent a source with the lowest possible rate for a particular distortion, or with the lowest possible distortion at a given rate. Channel coding adds redundancy to the quantized source information to recover from channel errors. This thesis consists of four topics.

Firstly, based on high-rate theory, we propose Karhunen-Loève transform (KLT)-based classified vector quantization (VQ) to efficiently exploit the advantages of optimal VQ over scalar quantization (SQ). Compared with code-excited linear predictive (CELP) speech coding, KLT-based classified VQ provides not only higher SNR and perceptual quality, but also lower computational complexity. Further improvement is obtained by companding.

Secondly, we compare various transmitter-based packet-loss recovery techniques from a rate-distortion viewpoint for real-time audiovisual communication systems over the Internet. We conclude that, in most circumstances, multiple description coding (MDC) is the best packet-loss recovery technique. If the channel conditions are known, channel-optimized MDC yields better performance.

Thirdly, compared with resolution-constrained quantization (RCQ), entropy-constrained quantization (ECQ) produces fewer distortion outliers but is more sensitive to channel errors. We apply a generalized γ-th power distortion measure to design a new RCQ algorithm that has fewer distortion outliers and is more robust against source mismatch than conventional RCQ methods.

Finally, we consider designing quantizers that effectively remove irrelevancy as well as redundancy. Taking into account the just noticeable difference (JND) of human perception, we design a new RCQ method with improved performance in terms of mean distortion and distortion outliers. Based on high-rate theory, the optimal centroid density and its corresponding mean distortion are also accurately predicted.

The latter two quantization methods can be combined with practical source coding systems such as KLT-based classified VQ, and with joint source-channel coding paradigms such as MDC.
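The KLT underlying the classified VQ scheme is simply the eigenbasis of the source covariance matrix, which decorrelates the vector components and compacts their energy. A minimal numpy sketch on synthetic data follows; the correlated source is an assumption for illustration, and the classification and codebook design steps are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated 4-dimensional source (assumed, for illustration only).
n, dim = 10_000, 4
mix = rng.normal(size=(dim, dim))
x = rng.normal(size=(n, dim)) @ mix.T            # correlated source vectors

# Karhunen-Loeve transform: eigenvectors of the source covariance matrix.
cov = np.cov(x, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)             # eigenvalues in ascending order
klt = eigvec[:, np.argsort(eigval)[::-1]]        # columns sorted by decreasing variance

y = x @ klt                                      # decorrelated transform coefficients
print(np.round(np.cov(y, rowvar=False), 3))      # approximately diagonal
```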
|
216 |
Peaking Capacity in Restructured Power Systems
Doorman, Gerard, January 2000
The theme of this thesis is the supply of capacity during peak demand in restructured power systems. There are a number of reasons why there is uncertainty about whether an energy-only electricity market (where generators are paid only for the energy produced) is able to ensure uninterrupted supply during peak load conditions.

Much of the public debate in Europe has been about the present surplus generation capacity. However, in a truly competitive environment, it is hard to believe that seldom-used capacity will be kept operational. This is illustrated by developments in Sweden. For this reason, the large surplus of generation capacity in the European Union may vanish much faster than generally assumed. In the USA, much of the debate has been about California. During the last three summers, California has occasionally experienced involuntary load shedding, and prices have been very high during these periods. To some extent, the Californian situation illustrates the relevance of the subject of this thesis: in a deregulated system, generators may not be willing to invest in peaking capacity that is only needed occasionally, even though prices are very high during these periods.

A good solution to the problem of providing peaking power is pivotal to the success of power market restructuring. Solutions that fail to create the right incentives will result in unacceptable load shedding and can endanger the whole restructuring process. On the other hand, solutions that pay too much for investments in peaking power will lead to generation capacity surpluses and thus represent a societal loss.

Why is peaking capacity a problematic issue in energy-only markets?

Traditionally, probabilistic methods are applied to calculate the generation capacity required to obtain a desired level of reliability. In a centrally planned system, this level of generation capacity is developed in a least-cost manner. A single utility or central authority can thus control the level of reliability directly. This is not possible in a market-based system if suppliers are paid only for the energy produced.

Under the assumption of certainty and continually varying prices, generators fully recover their variable and investment costs under ideal market conditions. When uncertainty is taken into account, generators will cover their expected costs. However, revenues will be extremely volatile, especially for peaking generators. Combined with a risk-averse attitude, it is unlikely that investments will be sufficient to maintain the traditional level of reliability in an energy-only market. Consequently, one would expect reserve margins to decline in such markets. This effect is very clear in Sweden, which deregulated in 1996, and less explicit in a number of other cases such as Norway, California and Alberta.

Pricing and Consumer Preferences

The theory of electricity pricing was originally developed for vertically integrated utilities, but elements from this theory are also valuable in a restructured context. Many authors have agreed on the presence of a capacity element in the optimal price during peak-load conditions, while the price should equal marginal cost during low-load conditions. An important assumption is that prices have to be stable. More recently, spot pricing of electricity has been advocated, and a number of papers have been written about how to efficiently include security considerations in the spot price.

Because the availability of capacity cannot be directly controlled in an energy-only spot market, the probability of occasional capacity shortages increases. It is important to be prepared for this situation. The core of the problem is that demand is de facto inelastic in the short term because of traditional tariff systems. It is shown that considerable economic gains are obtained when demand elasticity can be utilized, even if only minor shares of demand are elastic in the short term. Better utilization of demand elasticity was also profitable in traditional systems, but after restructuring the gain is much larger: the alternative is not expensive generation but random rationing, which is unacceptable in modern society.

It is possible to go one step further. Consumers have different preferences for the use of energy and reliability. Some consumers have a low tolerance for being disconnected, while others are more willing to accept this. This will be reflected by their willingness to pay for reliability. A better solution would emerge if consumers could buy electricity and reliability more or less as separate commodities, based on their preferences.

In the context of pricing it should be pointed out that "profile-based settlement", which allows small consumers to freely choose their supplier without hourly metering, is detrimental with respect to the correct pricing of capacity. It should only be used in the initial phases of opening a market.

Improved utilization of system resources

Even in the short term, demand and the availability of generation and transmission resources are uncertain. Therefore, it is necessary to have reserves available in a power system. When capacity becomes scarce, it is difficult to satisfy the reserve requirements. If these requirements are strict, the only possibility is to resort to what can be called "preventive load shedding" to satisfy the reserve requirements. This is obviously an expensive solution, but there are no obvious ways of balancing the (societal) cost of preventive load shedding against reduced system security. In this thesis, a model is developed for unit commitment and dispatch with a one-hour time horizon, with the objective of minimizing the sum of the operation and disruption costs, including the expected cost of system collapse. The model is run for the IEEE Reliability Test System. It is shown that under conditions where there is not enough capacity available to satisfy the reserve requirements, large cost savings can be obtained by optimizing the sum of the operation and disruption costs instead of using preventive load shedding. In the model, it is also possible to directly target reliability indices such as the Loss of Load Probability or Expected Energy not Served. It is shown that increased reliability (in terms of the values of the indices) can be obtained at a lower cost by targeting the indices directly instead of resorting to reserve requirements. This is especially the case if flexible load-shedding routines are developed, making it possible to disconnect and reconnect the optimal amounts of load efficiently.

The use of alternatives to fixed reserve requirements as a means to maintain system security does not solve the problem of how to ensure the availability of peaking capacity. However, in a situation with occasional capacity shortages, it gives the System Operator a tool to find the optimal balance between preventive load shedding and system security, which can result in significantly lower disruption costs in such cases. More research and development in this area is necessary to develop methods and tools that are suitable for large power systems.

Ancillary Services

Investment in peaking capacity is insufficient in restructured systems because expected revenues are too low or too uncertain. If generator revenues are increased, the situation improves. One way to obtain this is to create markets for ancillary services. In the thesis, a model is developed for a central-dispatch type of pool. In this model, markets for energy and three types of ancillary services are cleared simultaneously for 24 hours ahead. Market prices are such that volumes and prices are consistent with the market participants' self-dispatch decisions, i.e. given these prices, market participants would have chosen the same production of energy and ancillary services as the outcome of the optimization program. With this model, it is shown that markets for ancillary services increase generator revenues, but this effect is partly offset by lower energy prices. This shows that markets for ancillary services can contribute to improving the situation, but given the remaining uncertainty, this is hardly enough to solve the problem.

Capacity Subscription

Because consumers have preferences for two goods, electricity and reliability, they should ideally have the choice of purchasing the preferred amount of each of these. Traditionally this is not possible: reliability is a public good, produced or obtained by a central authority on behalf of all consumers. Technological progress is presently changing this. Capacity subscription is a method that allows consumers to choose their individual level of reliability, at the same time creating a true market for capacity. It is based on the concept of self-rationing. Consumers anticipate (for example on a seasonal basis) their need for capacity at the instant of system-wide peak demand. Based on this anticipation, they procure their desired level of capacity in a market where generators offer their available capacity. Demand is limited to subscribed capacity by a fuse-like device that is activated when total demand exceeds total available generation (see the sketch after this abstract). In this way, the capacity payment only influences the market when demand is close to installed capacity, and does not distort the energy price in other periods. Demand is not limited when there is ample capacity, and demand will never exceed supply, because it can be limited in an acceptable way when this situation occurs. Moreover, both consumers and suppliers can adapt to situations with scarce or ample capacity, and the price of capacity will reflect this. There is one problem with the method: as consumers do not reach their subscribed capacity simultaneously, there will be a capacity surplus at the instant the fuse-devices are activated. Two methods to solve this problem are analysed, and it is shown that the problem can be solved optimally by giving consumers who prefer this the opportunity to buy power in excess of their subscription on the spot market.

Policy evaluation

Six alternative policies to address the peaking power problem are analysed based on the following criteria:

- Static efficiency: the welfare-optimal match of consumption and supply
- Dynamic efficiency: the ability to create incentives for innovation
- Invisibility: with invisible strategies, each market actor pursues his or her own objectives without worrying about anyone else's
- Robustness: a robust policy is less sensitive to deviations from assumptions
- Timeliness: the ability of a policy to be employed at the right time
- Stakeholder equity: the degree to which all the involved parties are treated equitably
- Corrigibility: the extent to which a policy can be corrected once it is employed
- Acceptability: the degree to which the policy is acceptable to all parties
- Simplicity: ceteris paribus, simple strategies are preferable over more complicated ones
- Cost: the cost of implementing the policy
- System security: the policy's ability to obtain an acceptable level of system security

The policies are, in short (an example is given in parentheses):

- Capacity obligation: suppliers are obliged to keep sufficient capacity (PJM)
- Fixed capacity payment: a fixed payment is offered for available capacity (Spain)
- Dynamic capacity payment: the capacity payment is based on the Loss of Load Probability (England and Wales)
- Energy-only: no explicit payments or obligation (Scandinavia, California)
- Proxy prices: very high administrative prices are used as a proxy for the Value of Lost Load when load shedding is necessary (Australia)
- Capacity subscription: cf. the description above (not implemented)

As could be expected, no single policy performs best on all criteria. The obligation and fixed payment methods do not perform well on the market efficiency criteria, as they are essentially not market-based policies. The proxy prices policy is a reasonable policy on most criteria; it is easy, cheap and quick to implement. Because there is little experience with the method so far, there is some uncertainty with respect to whether it is effective. One can anticipate that the threat of having to buy power at rationing prices will motivate market participants to avoid ending up in a buying position in such cases, and that this will stimulate the adoption of innovative solutions, especially on the demand side.

The capacity subscription policy looks very promising on the issues of efficiency, robustness and system security. This is especially true for dynamic efficiency: consumers will weigh the cost of capacity against the cost of innovative load control devices, and if the price of capacity is high, a market for such technology will emerge. However, there is a considerable threshold prior to the introduction of capacity subscription, caused by the implementation costs and complexity.

The conclusion on policies is thus that in an early stage after restructuring it may be appropriate to resort to the capacity obligation or payment method if the capacity balance is tight at the time of transition. For the medium term, or if there is ample capacity initially, it is sensible to introduce proxy market prices to transfer the risk of a capacity deficit to market participants, with due attention paid to the appropriate price level. Capacity subscription can be a long-term objective.
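The fuse-like limitation at the heart of capacity subscription can be expressed as a one-line rule: when total demand exceeds available generation, each consumer is curtailed to its subscribed capacity. The sketch below uses assumed consumer names and MW figures; buy-back of the resulting surplus on the spot market, which the thesis identifies as the optimal remedy, is not modelled.

```python
def curtail(demand: dict, subscription: dict, available: float) -> dict:
    """Fuse-like curtailment: if total demand exceeds available generation,
    every consumer is limited to its subscribed capacity; otherwise demand
    is served in full."""
    if sum(demand.values()) <= available:
        return dict(demand)
    return {c: min(demand[c], subscription[c]) for c in demand}

# Illustrative consumers (MW); all numbers are assumptions.
demand       = {"A": 6.0, "B": 3.0, "C": 5.0}    # 14 MW requested
subscription = {"A": 4.0, "B": 3.0, "C": 3.0}    # 10 MW subscribed in total
print(curtail(demand, subscription, available=12.0))
# -> {'A': 4.0, 'B': 3.0, 'C': 3.0}
```

Note that curtailed demand (10 MW) falls below the 12 MW actually available; this gap is the capacity surplus at the moment the fuses act that the thesis proposes to resell on the spot market.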
|
217 |
Joint Source-channel Coding : Development of Methods and Utilization in Image Communications
Coward, Helge, January 2001
In a traditional communication system, the coding process is divided into source coding and channel coding. Source coding is the process of compressing the source signal, and channel coding is the process of error protection. It can be shown that, with no delay or complexity constraints and with exact knowledge of the source and channel properties, optimal performance can be obtained with separate source and channel coding. However, joint source-channel coding can lead to performance gains under complexity or delay constraints and offer robustness against unknown system parameters.

Multiple description coding is a system for generating two (or more) descriptions of a source, where decoding is possible from either description, but decoding at higher quality is possible if both descriptions are available. This system has been proposed as a means for joint source-channel coding. In this dissertation, multiple description coding is used to protect against the loss of data that occurs when the number of channel errors exceeds the correcting ability of the channel code. This is tried on three channel models: a packet erasure channel, a binary symmetric channel, and a block fading channel, and the results obtained with multiple description coding are compared against traditional single description coding. The results show that if a long-term average mean square error distortion measure is used, multiple description coding is not as good as single description coding, except when the delay or block error rate of the channel code is heavily constrained.

A direct source-channel mapping is a mapping from amplitude-continuous source symbols to amplitude-continuous channel symbols, often involving a dimension change. A hybrid scalar quantizer-linear coder (HSQLC) is a direct source-channel mapping where the memoryless source signal is quantized using a scalar quantizer. The quantized value is transmitted on an analog channel using one symbol, which can take as many levels as the quantizer, and the quantization error is transmitted on the same channel by means of a simple linear coder. Thus, there is a bandwidth expansion: two channel symbols are produced per source symbol. The channel is assumed to have additive white Gaussian noise and a power constraint. The quantizer levels and the distribution of power between the two symbols are optimized for different source distributions. A uniform quantizer with an appropriate step size gives performance close to the optimized quantizer for Gaussian, Laplacian, and uniform memoryless sources. The coder performs well compared to other joint source-channel coders, and it is relatively robust against variations in the channel noise level.

A previous image coder using direct source-channel mappings is improved. This coder is a subband coder where a classification following the decorrelating filter bank assigns mappings of different rates to different subband samples according to their importance. Improvements are made to practically all parts of the coder, but the most important one is that the mappings are changed; in particular, the bandwidth-expanding HSQLC is introduced. The coder shows large improvements compared to the previous version, especially at channel qualities near the design quality. For poor channels or high rates, the HSQLC provides a large portion of the improvement. The coder is compared against a combination of a JPEG 2000 coder and a good channel code, and its performance is competitive with the reference, while the robustness against an unknown channel quality is largely improved. This kind of robustness is very important in broadcasting and mobile communications. / In traditional communication systems, coding can be divided into source coding (compression) and channel coding (error protection). These operations can be considered jointly, and combined source and channel coding can give improvements under limited complexity or delay and increase the robustness against unknown system parameters. Two methods are considered in the thesis. In the first, source and channel coding are still partly separate, but the source code is made robust against decoding errors in the channel code. This is done through multiple description coding, where the source signal is represented by two descriptions. Decoding is possible from each description in isolation, but higher quality can be obtained if both descriptions are available. A comparison with a traditional system shows that, in terms of mean square error, multiple description coding is usually not as good as a traditional system. Direct source-to-channel mappings are mappings from amplitude-continuous source symbols directly to amplitude-continuous channel symbols. Such a method is introduced, in which the source signal, assumed memoryless, is scalar quantized and transmitted with one symbol on an analog channel, while the quantization error is transmitted in analog form on the same channel. The system parameters are optimized for different sources and channel qualities. This coder gives good performance compared with other joint source-channel coders, and it is relatively robust against variations in the channel noise level. Direct source-to-channel mappings are applied in a subband coder for still images. This coder, which is based on earlier work, is compared with a combination of a JPEG 2000 coder and a good channel code; the performance is about as good as the reference, while the robustness against unknown channel quality is greatly improved. This kind of robustness is very important in broadcasting and mobile communications.
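The HSQLC construction lends itself to a compact simulation. The sketch below is a rough illustration of the idea with a fixed uniform quantizer, an arbitrary power split, and an assumed AWGN level; the thesis optimizes the quantizer levels and the power distribution, which is not done here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters (not the optimized values from the thesis):
levels, step = 8, 1.0            # uniform scalar quantizer
p_q, p_e = 0.8, 0.2              # power split between the two channel symbols
noise_std = 0.05                 # AWGN standard deviation on the analog channel

x = rng.normal(size=100_000)                         # memoryless Gaussian source

# Encoder: quantize, then code the quantization error linearly.
idx = np.clip(np.round(x / step), -(levels // 2), levels // 2 - 1)
xq = idx * step
err = x - xq
gain1 = np.sqrt(p_q) / np.std(xq)
gain2 = np.sqrt(p_e) / np.std(err)
s1 = gain1 * xq                                      # discrete-amplitude channel symbol
s2 = gain2 * err                                     # continuous-amplitude channel symbol

# Channel: additive white Gaussian noise on both symbols.
r1 = s1 + noise_std * rng.normal(size=x.size)
r2 = s2 + noise_std * rng.normal(size=x.size)

# Decoder: detect the nearest quantizer level, then add the linearly coded error.
tx_levels = np.unique(s1)
xq_hat = tx_levels[np.argmin(np.abs(r1[:, None] - tx_levels[None, :]), axis=1)] / gain1
x_hat = xq_hat + r2 / gain2

sdr_db = 10 * np.log10(np.var(x) / np.mean((x - x_hat) ** 2))
print(f"signal-to-distortion ratio: {sdr_db:.1f} dB")
```

At this assumed noise level the discrete symbol is detected essentially without error, so the reconstruction error is dominated by the noise on the linearly coded residual; optimizing the quantizer and the power split, as in the thesis, trades these two error sources off against each other.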
|