151

Implementation and Evaluation of P.880 Methodology

Imam, Hasani Syed Hassan January 2009
Continuous Evaluation of Time Varying Speech Quality (CETVSQ) is a method for the subjective assessment of transmitted speech quality in long speech sequences whose quality fluctuates over time. The method is built around two subjective tasks: first, assessing the speech quality continuously while listening, and second, judging the overall speech quality after the sequence has ended. The development of CETVSQ was motivated by the fact that speech quality degradations are often not constant but vary in time. In modern IP telephony and wireless networks, speech quality varies due to impairments such as packet loss, echo, and network handover. Several standard methods already exist for the subjective assessment of short speech sequences; these methods, such as ITU-T Rec. P.800, are well suited only for time-constant speech quality. In this thesis, the CETVSQ methodology was implemented so that long speech sequences can be assessed. An analog hardware slider is used both for continuous quality assessment and for overall quality judgments. Instantaneous and overall quality judgments are saved to an Excel file and analyzed with different statistical measures. In the evaluation part of the thesis, subjects' scores are analyzed statistically to identify several factors originating in the CETVSQ methodology. A subjective test had already been conducted according to the P.800 ACR method, in which the long speech sequences were divided into 8-second segments and each segment assessed individually. In this study, the same long speech sequences are assessed with the CETVSQ methodology and the results are compared with the P.800 ACR results. The comparison reveals that when long speech sequences are divided into short segments and evaluated using P.800 ACR, the results differ from those obtained with CETVSQ, demonstrating the necessity of the CETVSQ methodology.
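The comparison between per-segment P.800 ACR scores and a CETVSQ slider trace can be illustrated with a short script. This is a hypothetical sketch, not the thesis's analysis code: the slider sampling rate, segment length, and all rating values are invented for illustration.

```python
# Hypothetical sketch: compare a CETVSQ slider trace with per-segment
# P.800 ACR scores. All data values here are illustrative only.
import numpy as np

# Slider samples (1..5 scale), e.g. recorded at 1 Hz over a 24 s sequence.
slider = np.array([4.1, 4.0, 3.2, 2.1, 2.0, 2.4, 3.5, 4.2, 4.3, 4.1, 3.9, 4.0,
                   3.8, 3.0, 2.2, 2.0, 2.5, 3.1, 3.9, 4.0, 4.2, 4.1, 4.0, 3.9])
overall_cetvsq = 3.4            # retrospective judgment after listening
acr_segments = [3.8, 2.3, 4.0]  # P.800 ACR MOS for three 8 s segments

# Average the slider trace over the same 8 s windows for comparison.
samples_per_segment = len(slider) // len(acr_segments)
cetvsq_segments = slider.reshape(len(acr_segments),
                                 samples_per_segment).mean(axis=1)

print("CETVSQ per-segment means:", np.round(cetvsq_segments, 2))
print("P.800 ACR per-segment MOS:", acr_segments)
print("Pearson r:", np.round(np.corrcoef(cetvsq_segments, acr_segments)[0, 1], 3))
print("Overall CETVSQ vs. mean of instantaneous:",
      overall_cetvsq, "vs.", round(slider.mean(), 2))
```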
152

Towards Sustainable Cloud Computing: Reducing Electricity Cost and Carbon Footprint for Cloud Data Centers through Geographical and Temporal Shifting of Workloads

Le, Trung 17 July 2012
Cloud Computing presents a novel way for businesses to procure their IT needs. Its elasticity and on-demand provisioning enable a shift from capital expenditures to operating expenses, giving businesses the technological agility they need to respond to an ever-changing marketplace. The rapid adoption of Cloud Computing, however, poses a unique challenge to Cloud providers—their already very large electricity bill and carbon footprint will get larger as they expand; managing both costs is therefore essential to their growth. This thesis squarely addresses the above challenge. Recognizing the presence of Cloud data centers in multiple locations and the differences in electricity price and emission intensity among these locations and over time, we develop an optimization framework that couples workload distribution with time-varying signals on electricity price and emission intensity for financial and environmental benefits. The framework comprises an optimization model, an aggregate cost function, and six scheduling heuristics. To evaluate cost savings, we run simulations with five data centers located across North America over a period of 81 days, using historical data on electricity price, emission intensity, and workload collected from market operators and research data archives. We find that our framework can produce substantial cost savings, especially when workloads are distributed both geographically and temporally—up to 53.35% on electricity cost, or 29.13% on carbon cost, or 51.44% on electricity cost and 13.14% on carbon cost simultaneously.
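The geographical shifting idea can be sketched as a greedy heuristic that fills the cheapest sites first under a weighted electricity-plus-carbon cost. This is an illustrative stand-in, not one of the thesis's six heuristics; the prices, intensities, capacities, and the carbon-price conversion factor are all assumed.

```python
# Illustrative greedy heuristic for geographical workload shifting.
# Prices, intensities, and capacities are made-up numbers.
from typing import List

def place_workload(demand: float,
                   price: List[float],       # $/kWh per data center
                   intensity: List[float],   # kg CO2/kWh per data center
                   capacity: List[float],    # kWh available per data center
                   alpha: float = 0.5) -> List[float]:
    """Fill the cheapest data centers first under the weighted cost
    alpha*price + (1-alpha)*carbon_price*intensity, respecting capacities."""
    carbon_price = 0.02  # $/kg CO2, an assumed conversion factor
    cost = [alpha * p + (1 - alpha) * carbon_price * e
            for p, e in zip(price, intensity)]
    order = sorted(range(len(cost)), key=lambda i: cost[i])
    alloc = [0.0] * len(cost)
    remaining = demand
    for i in order:
        take = min(remaining, capacity[i])
        alloc[i] = take
        remaining -= take
        if remaining <= 0:
            break
    return alloc

# One hour with 100 kWh of deferrable work across three sites.
print(place_workload(100.0,
                     price=[0.08, 0.05, 0.12],
                     intensity=[0.6, 0.9, 0.2],
                     capacity=[50, 60, 80]))
```

Temporal shifting would add a second loop over hours, deferring work from expensive hours to cheaper ones within a deadline; the same weighted-cost ordering applies.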
153

Essays on Mergers and Acquisitions and Event Studies

Irani, Mohammad January 2016
This dissertation consists of three studies on the anticipation of mergers and acquisitions (M&As) and its impact on takeover event studies. Article I investigates whether the market can anticipate both takeovers and their payment forms prior to their announcement dates. This article also proposes a new time-series approach for detecting the ex-ante deal-anticipation and payment-form-anticipation dates. The results indicate that the majority of deals and their payment forms are anticipated much earlier than has been documented in previous takeover studies. Moreover, controlling for the anticipation dates matters for explaining the choice of payment method in M&As. Article II studies how assuming that M&As are unpredictable during the estimation window affects the measurement of abnormal returns. The results show that part of the takeover synergy is indeed incorporated into stock prices during the estimation window of previous studies, around the deal-anticipation dates. This article estimates the parameters of the expected return model from the pre-anticipation period to control for the consequences of ex-ante anticipation on the estimates of abnormal returns. Using the anticipation-adjusted approach significantly improves the estimation of the event-window abnormal returns and provides new insights into some well-documented takeover results. Article III examines how the abnormal returns are affected when a standard event study assumes that the parameters of the expected return model are stable. Using a sample of firm takeovers, the results indicate that the parameters are indeed unstable. This article introduces a time-varying market model to account for the dynamics of merging likelihood when estimating the abnormal returns. The findings show that the stability assumption causes a standard event study to significantly overestimate the abnormal returns to the target and acquirer shareholders. (At the time of the doctoral defense, all three papers were unpublished manuscripts.)
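The anticipation adjustment described for Article II amounts to ending the market-model estimation window before the anticipation date instead of just before the announcement. A minimal sketch on simulated returns follows; the dates, window lengths, and the injected post-anticipation drift are all assumptions, not the dissertation's data or code.

```python
# Hedged sketch: market-model event study with an anticipation-adjusted
# estimation window. Returns are simulated; a small drift is injected
# after an assumed anticipation date to mimic early synergy incorporation.
import numpy as np

rng = np.random.default_rng(0)
T = 300                      # trading days of returns
market = rng.normal(0.0004, 0.01, T)
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.012, T)
stock[260:] += 0.002         # synergy leaking in after the anticipation date

announcement, anticipation = 290, 260

def abnormal_returns(est_end: int, est_len: int = 200) -> np.ndarray:
    """OLS market model on [est_end-est_len, est_end); AR over the event window."""
    sl = slice(est_end - est_len, est_end)
    beta, alpha = np.polyfit(market[sl], stock[sl], 1)
    window = slice(announcement - 5, min(announcement + 5, T))
    return stock[window] - (alpha + beta * market[window])

car_naive = abnormal_returns(announcement).sum()     # window may be contaminated
car_adjusted = abnormal_returns(anticipation).sum()  # pre-anticipation estimation
print(f"CAR, naive estimation window: {car_naive:.4f}")
print(f"CAR, anticipation-adjusted:   {car_adjusted:.4f}")
```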
154

A Study on the DCC-GARCH Model's Forecasting Ability with Value-at-Risk Applications on the Scandinavian Foreign Exchange Market

Andersson-Säll, Tim, Lindskog, Johan January 2019
This thesis treats the DCC-GARCH model's forecasting ability, with Value-at-Risk applications on the Scandinavian foreign exchange market. The estimated models are based on daily opening foreign exchange spot rates over the period 2004–2013, which captures the financial crisis of 2008 and the Eurozone crisis of the early 2010s. The forecasts were performed on a one-day rolling window in 2014. The results show that the DCC-GARCH model accurately predicted the fluctuations in the conditional correlation, although not with the correct magnitude. Furthermore, the DCC-GARCH model shows good Value-at-Risk forecasting performance for different portfolios containing the Scandinavian currencies.
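The Value-at-Risk evaluation step can be sketched independently of the DCC-GARCH estimation itself, which is not reimplemented here. The sketch assumes one-day-ahead conditional covariance forecasts are already available (e.g. from a fitted DCC-GARCH model) and simply counts VaR exceedances; all inputs are simulated and the portfolio weights are assumed.

```python
# Sketch of the VaR backtesting step only: given daily conditional
# covariance forecasts, compute portfolio 99% VaR and count exceedances.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
days, n_assets = 250, 3          # e.g. SEK, NOK, DKK returns over one year
weights = np.full(n_assets, 1 / 3)

# Stand-ins for model output: daily covariance forecasts and realized returns.
cov_forecasts = np.array([np.diag(rng.uniform(0.4, 0.8, n_assets)) * 1e-4
                          for _ in range(days)])
returns = np.array([rng.multivariate_normal(np.zeros(n_assets), c)
                    for c in cov_forecasts])

alpha = 0.01                      # 99% one-day VaR
z = norm.ppf(alpha)
port_ret = returns @ weights
port_sigma = np.sqrt(np.einsum('i,tij,j->t', weights, cov_forecasts, weights))
var_99 = z * port_sigma           # negative number: loss threshold

hits = (port_ret < var_99).sum()
print(f"Exceedances: {hits} observed vs. {alpha * days:.1f} expected "
      f"over {days} days")
```

A hit rate close to the nominal 1% level is the basic criterion behind the good forecasting performance reported above.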
155

Efficient Reconstruction of Two-Periodic Nonuniformly Sampled Signals Applicable to Time-Interleaved ADCs

Vengattaramane, Kameswaran January 2006
Nonuniform sampling occurs in many practical applications, either intentionally or unintentionally. This thesis deals with the reconstruction of two-periodic nonuniformly sampled signals, which is of great importance in two-channel time-interleaved analog-to-digital converters. In a two-channel time-interleaved ADC, aperture delay mismatch between the channels gives rise to a two-periodic nonuniform sampling pattern, resulting in distortion and severely affecting the linearity of the converter. The problem is solved by digitally recovering a uniformly sampled sequence from the two-periodic nonuniformly sampled one. For this purpose, a time-varying FIR filter is employed. If the sampling pattern is known and fixed, this filter can be designed optimally using least-squares or minimax design. When the sampling pattern changes from time to time, as it does during normal operation of a time-interleaved ADC, these filters have to be redesigned. This affects the implementation cost, as general on-line design is cumbersome. To overcome this problem, a novel time-varying FIR filter with a polynomial impulse response is developed and characterized in this thesis. The main advantage of these filters is that on-line design is no longer needed: it suffices to perform a single design before implementation, and in the implementation only one variable parameter needs to be adjusted when the sampling pattern changes. The high implementation cost is thereby decreased substantially.

Filter design and the associated performance metrics have been validated using MATLAB. The design space has been explored up to the limits imposed by machine precision on matrix inversions. Studies of finite-wordlength effects in practical filter realisations have also been carried out. The formulations can be extended to the general M-periodic nonuniform sampling case.
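For intuition, the effect of correcting a known aperture-delay skew can be sketched with a plain Lagrange fractional-delay FIR filter applied to the skewed channel. This is a simpler stand-in for illustration only; the thesis's time-varying filters with polynomial impulse response are a different, more general construction. The test-tone frequency, skew, and filter order below are assumptions.

```python
# Sketch: correct the timing skew of one channel of a two-channel
# time-interleaved ADC with a Lagrange fractional-delay FIR filter.
import numpy as np

def lagrange_fd(delay: float, order: int) -> np.ndarray:
    """Lagrange fractional-delay FIR coefficients approximating x[n - delay]."""
    h = np.ones(order + 1)
    for k in range(order + 1):
        for m in range(order + 1):
            if m != k:
                h[k] *= (delay - m) / (k - m)
    return h

eps = 0.08                        # channel-1 samples arrive eps*T too late
f0 = 0.11                         # test-tone frequency, cycles per sample
t = np.arange(512.0)
x_ideal = np.sin(2 * np.pi * f0 * t)
x = x_ideal.copy()
x[1::2] = np.sin(2 * np.pi * f0 * (t[1::2] + eps))   # skewed odd channel

# Within the odd sub-sequence (period 2T), the skew is eps/2 of a sample.
# np.convolve in 'same' mode absorbs the integer bulk delay of 3 for 8 taps.
h = lagrange_fd(3 + eps / 2, order=7)
y = x.copy()
y[1::2] = np.convolve(x[1::2], h, mode='same')

print("max error before:", np.max(np.abs(x[20:-20] - x_ideal[20:-20])).round(5))
print("max error after: ", np.max(np.abs(y[20:-20] - x_ideal[20:-20])).round(5))
```

A fixed filter like this must be redesigned whenever eps changes, which is exactly the cost the thesis's polynomial-impulse-response filters avoid by exposing the skew as a single adjustable parameter.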
156

Efficient Information Visualization of Multivariate and Time-Varying Data

Johansson, Jimmy January 2008
Data can be found everywhere, for example in the form of the price, size, weight and colour of all products sold by a company, or as time series of daily observations of temperature, precipitation, wind and visibility from thousands of stations. Due to their size and complexity, such data sets are intrinsically hard to overview and understand as a whole. Information visualization aims at overcoming these difficulties by transforming data into representations that can be more easily interpreted. This thesis presents work on the development of methods that enable efficient information visualization of multivariate and time-varying data sets by conveying information in a clear and interpretable way, and in a reasonable time. The work presented is primarily based on a popular multivariate visualization technique called parallel coordinates, but many of the methods can be generalized to other information visualization techniques. A three-dimensional, multi-relational version of parallel coordinates is presented that enables a simultaneous analysis of all pairwise relationships between a single focus variable and all other variables included in the display, permitting a more rapid analysis of highly multivariate data sets. Through a number of user studies, the multi-relational parallel coordinates technique has been evaluated against standard, two-dimensional parallel coordinates and found to better support a number of different types of task. High-precision density maps and transfer functions are presented as a means to reveal structure in large data sets displayed in parallel coordinates. These two approaches make it possible to interactively analyse arbitrary regions of a parallel coordinates display without risking the loss of significant structure. Another focus of this thesis is the visualization of time-varying, multivariate data. This has been studied both in the specific application area of system identification, using volumetric representations, and in the general case, through the introduction of temporal parallel coordinates. The methods described in this thesis have all been implemented using modern computer graphics hardware, which enables the display and manipulation of very large data sets in real time. They have been tested on a wide range of data sets, both synthetically generated and taken from real applications, and are expected to apply efficiently to any data with multivariate properties.
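A minimal two-dimensional parallel coordinates rendering shows the basic idea; simple alpha blending stands in, very crudely, for the high-precision density maps developed in the thesis. The data below are synthetic.

```python
# Minimal parallel coordinates plot; overplotting with low alpha gives
# a crude density cue. Not the thesis's GPU-based technique.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
data = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated variables
labels = [f"var{i}" for i in range(data.shape[1])]

# Normalize each variable to [0, 1] so all axes share one vertical scale.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)

fig, ax = plt.subplots(figsize=(8, 4))
xs = np.arange(norm.shape[1])
for row in norm:
    ax.plot(xs, row, color="steelblue", alpha=0.05)
for x in xs:
    ax.axvline(x, color="black", lw=0.5)   # one vertical axis per variable
ax.set_xticks(xs)
ax.set_xticklabels(labels)
ax.set_yticks([])
plt.tight_layout()
plt.show()
```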
157

Performance of a Cluster that Supports Resource Reservation and On-demand Access

Leung, Gerald January 2009
Next-generation data centres are expected to support both advance resource reservation and on-demand access, but the system performance of such a computing environment has not been well investigated. A reservation request is characterized by a start time, a duration, and a resource requirement. Discrete-event simulation is used to study the performance characteristics of reservation systems. The basic strategy is to accept a request if resources are available and reject it otherwise. The performance metrics considered are resource utilization and blocking probability, and results showing the impact of the input parameters on these metrics are presented. It is found that resource utilization is quite low. Two strategies that can improve the performance of advance reservation are evaluated: the first allows the start time to be delayed up to some maximum value, while the second allows non-uniform resource allocation over the duration of the reservation. Simulation results showing the performance improvements of these two strategies are presented. Resources not used by advance reservation are used to support on-demand access, for which the performance metric of interest is the mean response time. Simulation results showing the impact of resource availability, and its variation over time, on the mean response time are presented. These results provide valuable insights into the performance of systems with time-varying processing capacity. They can also be used to develop guidelines for the non-uniform resource allocation strategy for advance reservation in case the reserved resources are used for interactive access.
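The basic accept/reject policy can be sketched as a toy discrete-event simulation that reports blocking probability and utilization. All workload parameters below are invented, and none of the thesis's improvement strategies (start-time delay, non-uniform allocation) are modeled.

```python
# Toy discrete-event simulation of the basic accept/reject policy for
# reservations that start immediately on arrival. Parameters are assumed.
import heapq
import random

random.seed(3)
CAPACITY, SIM_TIME = 100, 10_000.0
free = CAPACITY
releases = []                 # heap of (end_time, resources) for accepted jobs
accepted = rejected = 0
busy_area = 0.0               # integral of resources-in-use over time
t = prev_t = 0.0

while t < SIM_TIME:
    # Advance through releases before this arrival, integrating piecewise.
    while releases and releases[0][0] <= t:
        end, r = heapq.heappop(releases)
        busy_area += (CAPACITY - free) * (end - prev_t)
        free += r
        prev_t = end
    busy_area += (CAPACITY - free) * (t - prev_t)
    prev_t = t

    demand = random.randint(1, 30)                 # resource requirement
    duration = random.expovariate(1 / 50.0)        # mean holding time 50
    if demand <= free:                             # accept only if it fits
        free -= demand
        heapq.heappush(releases, (t + duration, demand))
        accepted += 1
    else:
        rejected += 1
    t += random.expovariate(1 / 10.0)              # mean inter-arrival 10

# Drain remaining activity up to SIM_TIME for the utilization integral.
while releases and releases[0][0] <= SIM_TIME:
    end, r = heapq.heappop(releases)
    busy_area += (CAPACITY - free) * (end - prev_t)
    free += r
    prev_t = end
busy_area += (CAPACITY - free) * (SIM_TIME - prev_t)

print(f"blocking probability: {rejected / (accepted + rejected):.3f}")
print(f"mean utilization:     {busy_area / (SIM_TIME * CAPACITY):.3f}")
```

Even with offered load below capacity, fragmentation of free resources keeps utilization well under 100% while some requests are still blocked, which is the tension the two improvement strategies address.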
