151

Towards Sustainable Cloud Computing: Reducing Electricity Cost and Carbon Footprint for Cloud Data Centers through Geographical and Temporal Shifting of Workloads

Le, Trung 17 July 2012
Cloud Computing presents a novel way for businesses to procure their IT needs. Its elasticity and on-demand provisioning enable a shift from capital expenditures to operating expenses, giving businesses the technological agility they need to respond to an ever-changing marketplace. The rapid adoption of Cloud Computing, however, poses a unique challenge to Cloud providers—their already very large electricity bill and carbon footprint will get larger as they expand; managing both costs is therefore essential to their growth. This thesis squarely addresses the above challenge. Recognizing the presence of Cloud data centers in multiple locations and the differences in electricity price and emission intensity among these locations and over time, we develop an optimization framework that couples workload distribution with time-varying signals on electricity price and emission intensity for financial and environmental benefits. The framework comprises an optimization model, an aggregate cost function, and six scheduling heuristics. To evaluate cost savings, we run simulations with five data centers located across North America over a period of 81 days. We use historical data on electricity price, emission intensity, and workload collected from market operators and research data archives. We find that our framework can produce substantial cost savings, especially when workloads are distributed both geographically and temporally—up to 53.35% on electricity cost, or 29.13% on carbon cost, or 51.44% on electricity cost and 13.14% on carbon cost simultaneously.
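As an illustrative aside (not taken from the thesis), the sketch below shows the basic idea behind cost-aware geographical and temporal shifting: deferrable work is placed greedily into the (location, hour) slots with the lowest combined electricity-and-carbon cost, subject to per-slot capacity. The prices, emission intensities, capacities, and the `carbon_weight` parameter are made-up placeholders, and the greedy rule is a stand-in for the thesis's own optimization model and heuristics.

```python
# Illustrative sketch only -- not the thesis framework. A greedy heuristic routes
# deferrable work to the (location, hour) slots with the lowest aggregate
# electricity-plus-carbon cost, subject to per-slot capacity. All numbers are
# made-up placeholders.

import numpy as np

def schedule_workload(price, carbon, capacity, demand, carbon_weight=30.0):
    """Greedily place `demand` units of deferrable work into (location, hour) slots.

    price, carbon, capacity : arrays of shape (n_locations, n_hours)
    Returns an allocation array of the same shape.
    """
    cost = price + carbon_weight * carbon            # aggregate cost signal
    alloc = np.zeros_like(cost)
    for flat_idx in np.argsort(cost, axis=None):     # cheapest slots first
        if demand <= 0:
            break
        loc, hour = np.unravel_index(flat_idx, cost.shape)
        placed = min(capacity[loc, hour], demand)
        alloc[loc, hour] = placed
        demand -= placed
    return alloc

# Toy example: 3 data centers, 4 hours of deferrable load.
rng = np.random.default_rng(0)
price = rng.uniform(20, 60, size=(3, 4))       # hypothetical $/MWh
carbon = rng.uniform(0.1, 0.9, size=(3, 4))    # hypothetical tCO2/MWh
capacity = np.full((3, 4), 10.0)               # work units per slot
print(schedule_workload(price, carbon, capacity, demand=25.0))
```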
152

Essays on Mergers and Acquisitions and Event Studies

Irani, Mohammad January 2016
This dissertation consists of three studies on the anticipation of mergers and acquisitions (M&As) and its impact on takeover event studies. Article I investigates whether the market can anticipate both takeovers and their payment forms prior to their announcement dates. The article also proposes a new time-series approach for detecting the ex-ante deal-anticipation and payment-form-anticipation dates. The results indicate that the majority of deals and their payment forms are anticipated much earlier than has been documented in previous takeover studies. Moreover, controlling for the anticipation dates matters for explaining the choice of payment method in M&As. Article II studies how assuming that M&As are unpredictable during the estimation window affects the measurement of abnormal returns. The results show that part of the takeover synergy is indeed incorporated into stock prices during the estimation window of previous studies, around the deal-anticipation dates. This article estimates the parameters of the expected-return model from the pre-anticipation period to control for the consequences of ex-ante anticipation on the estimates of abnormal returns. Using the anticipation-adjusted approach significantly improves the estimation of the event-window abnormal returns and provides new insights into some well-documented takeover results. Article III examines how the abnormal returns are affected when a standard event study assumes that the parameters of the expected-return model are stable. Using a sample of firm takeovers, the results indicate that the parameters are indeed unstable. This article introduces a time-varying market model to account for the dynamics of merging likelihood when estimating abnormal returns. The findings show that the stability assumption causes a standard event study to significantly overestimate the abnormal returns to the target and acquirer shareholders.
At the time of the doctoral defense, all three papers were unpublished manuscripts.
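For readers unfamiliar with the machinery these articles build on, here is a minimal, generic market-model event-study sketch (not the articles' anticipation-adjusted procedure): estimate alpha and beta over an estimation window chosen to end before an assumed anticipation date, then compute event-window abnormal returns as realized minus expected returns. All data below are synthetic.

```python
# Illustrative sketch only -- a generic market-model event study, not the
# articles' anticipation-adjusted methodology. The estimation window is chosen
# to end before an assumed anticipation date; all returns are synthetic.

import numpy as np

def abnormal_returns(stock, market, est_slice, event_slice):
    """OLS market model on the estimation window, abnormal returns on the event window."""
    X = np.column_stack([np.ones(est_slice.stop - est_slice.start), market[est_slice]])
    alpha, beta = np.linalg.lstsq(X, stock[est_slice], rcond=None)[0]
    ar = stock[event_slice] - (alpha + beta * market[event_slice])
    return ar, ar.sum()                      # daily abnormal returns and their sum (CAR)

# Synthetic example: 250 estimation days ending before the (assumed) anticipation
# date, followed by an 11-day event window around the announcement.
rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, 261)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.02, 261)
ar, car = abnormal_returns(stock, market, slice(0, 250), slice(250, 261))
print(f"event-window CAR: {car:.4f}")
```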
153

A Study on the DCC-GARCH Model's Forecasting Ability with Value-at-Risk Applications on the Scandinavian Foreign Exchange Market

Andersson-Säll, Tim, Lindskog, Johan January 2019
This thesis has treated the DCC-GARCH model's forecasting ability and its Value-at-Risk applications on the Scandinavian foreign exchange market. The estimated models were based on daily opening foreign exchange spot rates over the period 2004-2013, which covers the 2008 financial crisis and the Eurozone crisis of the early 2010s. The forecasts were performed on a one-day rolling window in 2014. The results show that the DCC-GARCH model accurately predicted the fluctuations in the conditional correlation, although not with the correct magnitude. Furthermore, the DCC-GARCH model shows good Value-at-Risk forecasting performance for different portfolios containing the Scandinavian currencies.
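As a hedged illustration of the Value-at-Risk step only (the DCC-GARCH estimation itself is not shown), the sketch below computes a parametric one-day portfolio VaR from a one-day-ahead covariance forecast such as a DCC-GARCH model would supply. The covariance matrix, portfolio weights, and confidence level are invented placeholders.

```python
# Illustrative sketch only -- the Value-at-Risk step, assuming a one-day-ahead
# covariance forecast (which a DCC-GARCH model would provide) is already given.
# The forecast matrix and the equal portfolio weights are invented placeholders.

import numpy as np
from statistics import NormalDist

def portfolio_var(weights, cov_forecast, alpha=0.01):
    """One-day parametric (Gaussian) VaR at level alpha, as a fraction of portfolio value."""
    sigma_p = np.sqrt(weights @ cov_forecast @ weights)   # forecast portfolio volatility
    return -NormalDist().inv_cdf(alpha) * sigma_p

weights = np.array([1/3, 1/3, 1/3])                       # e.g. equal SEK/NOK/DKK exposure
cov_forecast = np.array([[1.0e-4, 4.0e-5, 3.0e-5],        # hypothetical forecast of daily-return
                         [4.0e-5, 9.0e-5, 2.5e-5],        # covariances for the three currencies
                         [3.0e-5, 2.5e-5, 8.0e-5]])
print(f"99% one-day VaR: {portfolio_var(weights, cov_forecast):.4%}")
```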
154

Efficient Reconstruction of Two-Periodic Nonuniformly Sampled Signals Applicable to Time-Interleaved ADCs

Vengattaramane, Kameswaran January 2006
Nonuniform sampling occurs in many practical applications, either intentionally or unintentionally. This thesis deals with the reconstruction of two-periodic nonuniformly sampled signals, which is of great importance in two-channel time-interleaved analog-to-digital converters (ADCs). In a two-channel time-interleaved ADC, aperture delay mismatch between the channels gives rise to a two-periodic nonuniform sampling pattern, resulting in distortion and severely affecting the linearity of the converter. The problem is solved by digitally recovering a uniformly sampled sequence from the two-periodic nonuniformly sampled set. For this purpose, a time-varying FIR filter is employed. If the sampling pattern is known and fixed, this filter can be designed optimally using least-squares or minimax design. When the sampling pattern changes from time to time, as it does during the normal operation of a time-interleaved ADC, these filters have to be redesigned; this has implications for the implementation cost, as general on-line design is cumbersome. To overcome this problem, a novel time-varying FIR filter with a polynomial impulse response is developed and characterized in this thesis. The main advantage of these filters is that on-line design is no longer needed: it suffices to perform a single design before implementation, and in the implementation it is enough to adjust only one variable parameter when the sampling pattern changes. Thus the high implementation cost is decreased substantially.
Filter design and the associated performance metrics have been validated using MATLAB. The design space has been explored up to the limits imposed by machine precision on matrix inversions. Studies related to finite wordlength effects in practical filter realisations have also been carried out. These formulations can also be extended to the general M-periodic nonuniform sampling case.
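As a rough, much-simplified illustration (not the thesis's polynomial-impulse-response filter), the sketch below corrects the skewed channel of a two-channel time-interleaved ADC with an ordinary windowed-sinc fractional-delay FIR and re-interleaves the samples. It assumes an oversampled, bandlimited input; the skew value, tone frequency, and filter length are arbitrary choices.

```python
# Illustrative sketch only -- a much simpler stand-in for the thesis's
# polynomial-impulse-response filters. Channel 1 of a two-channel time-interleaved
# ADC samples with a timing skew; a windowed-sinc fractional-delay FIR corrects
# that channel before the two channels are re-interleaved.

import numpy as np

def fractional_delay_fir(delay, taps=41):
    """Windowed-sinc FIR approximating a delay of `delay` samples (group delay removed)."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n - delay) * np.hamming(taps)
    return h / h.sum()

f0, skew = 0.05, 0.2                                  # tone freq (cycles/sample), skew (samples)
n = np.arange(512)
ideal = np.cos(2 * np.pi * f0 * n)                    # uniformly sampled reference
even = ideal[0::2]                                    # channel 0: on-grid samples
odd = np.cos(2 * np.pi * f0 * (n[1::2] + skew))       # channel 1: skewed samples

# The odd channel's own sample spacing is two grid periods, so a skew of +0.2 grid
# periods is corrected by delaying the odd-channel sequence by skew/2 of its samples.
h = fractional_delay_fir(skew / 2)
odd_fixed = np.convolve(odd, h, mode="same")

y = np.empty_like(ideal)
y[0::2], y[1::2] = even, odd_fixed
err = (y - ideal)[50:-50]                             # ignore filter edge effects
print(f"max reconstruction error: {np.abs(err).max():.2e}")
```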
155

Efficient Information Visualization of Multivariate and Time-Varying Data

Johansson, Jimmy January 2008
Data can be found everywhere, for example in the form of the price, size, weight and colour of all products sold by a company, or as time series of daily observations of temperature, precipitation, wind and visibility from thousands of stations. Due to their size and complexity, it is intrinsically hard to form a global overview and understanding of them. Information visualization aims at overcoming these difficulties by transforming data into representations that can be more easily interpreted. This thesis presents work on the development of methods that enable efficient information visualization of multivariate and time-varying data sets by conveying information in a clear and interpretable way, and in a reasonable time. The work is primarily based on a popular multivariate visualization technique called parallel coordinates, but many of the methods can be generalized to other information visualization techniques.
A three-dimensional, multi-relational version of parallel coordinates is presented that enables simultaneous analysis of all pairwise relationships between a single focus variable and all other variables included in the display. This approach permits a more rapid analysis of highly multivariate data sets. Through a number of user studies, the multi-relational parallel coordinates technique has been evaluated against standard, two-dimensional parallel coordinates and found to better support a number of different types of task. High-precision density maps and transfer functions are presented as a means to reveal structure in large data sets displayed in parallel coordinates. These two approaches make it possible to interactively analyse arbitrary regions in a parallel coordinates display without risking the loss of significant structure.
Another focus of this thesis is the visualization of time-varying, multivariate data. This has been studied both in the specific application area of system identification, using volumetric representations, and in the general case, through the introduction of temporal parallel coordinates. The methods described in this thesis have all been implemented using modern computer graphics hardware, which enables the display and manipulation of very large data sets in real time. A wide range of data sets, both synthetically generated and taken from real applications, have been used to test these methods. It is expected that these methods can be employed efficiently on any data with multivariate properties.
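For readers who have not seen the underlying technique, the sketch below draws a basic two-dimensional parallel-coordinates plot of a small synthetic data set using pandas and matplotlib. It illustrates only the standard form of the technique, not the thesis's three-dimensional, multi-relational or density-map extensions, and all column names and values are made up.

```python
# Illustrative sketch only -- a standard 2-D parallel-coordinates plot, not the
# thesis's 3-D multi-relational or high-precision density-map techniques. The
# column names, class labels, and data are all synthetic.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "temperature": rng.normal(10, 5, 200),
    "precipitation": rng.gamma(2.0, 2.0, 200),
    "wind": rng.rayleigh(4.0, 200),
    "visibility": rng.uniform(1, 50, 200),
})
# Class column used only to colour the poly-lines.
df["station"] = np.where(df["temperature"] > 10, "coastal", "inland")

parallel_coordinates(df, "station", alpha=0.3)   # one poly-line per observation
plt.title("Parallel coordinates: 200 synthetic weather observations")
plt.show()
```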
157

Performance of a Cluster that Supports Resource Reservation and On-demand Access

Leung, Gerald January 2009
Next-generation data centres are expected to support both advance resource reservation and on-demand access, but the system performance of such a computing environment has not been well investigated. A reservation request is characterized by a start time, duration, and resource requirement. Discrete event simulation is used to study the performance characteristics of reservation systems. The basic strategy is to accept a request if resources are available and reject the request otherwise. The performance metrics considered are resource utilization and blocking probability. Results showing the impact of input parameters on these performance metrics are presented. It is found that the resource utilization is quite low. Two strategies that can be used to improve the performance of advance reservation are evaluated. The first strategy allows the start time to be delayed up to some maximum value, while the second allows non-uniform resource allocation over the duration of the reservation. Simulation results showing the performance improvements of these two strategies are presented. Resources not used by advance reservation are used to support on-demand access. The performance metric of interest here is the mean response time. Simulation results showing the impact of resource availability, and its variation over time, on the mean response time are presented. These results provide valuable insights into the performance of systems with time-varying processing capacity. They can also be used to develop guidelines for the non-uniform resource allocation strategy for advance reservation in case the reserved resources are used for interactive access.
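As an illustrative toy (not the thesis's simulator), the discrete-event sketch below implements the basic accept-if-available admission policy and reports blocking probability and utilization. For simplicity it lets every reservation start immediately rather than at a future start time, and the arrival rate, resource demands, durations, and capacity are arbitrary.

```python
# Illustrative toy only -- not the thesis's simulator. Discrete-event simulation of
# the accept-if-available policy: a reservation is admitted if enough resources are
# free, otherwise it is blocked. For simplicity every reservation starts immediately
# (no future start times); all parameters are arbitrary.

import heapq
import random

def simulate(capacity=64, n_requests=50_000, rate=2.0, max_size=16, mean_dur=3.0):
    random.seed(42)
    t, in_use, blocked, busy_area = 0.0, 0, 0, 0.0
    releases = []                                    # min-heap of (end_time, size)
    for _ in range(n_requests):
        t_next = t + random.expovariate(rate)        # Poisson arrivals
        while releases and releases[0][0] <= t_next: # release finished reservations first
            end, size = heapq.heappop(releases)
            busy_area += in_use * (end - t)          # integrate resource usage over time
            in_use -= size
            t = end
        busy_area += in_use * (t_next - t)
        t = t_next
        size = random.randint(1, max_size)           # uniform resource demand
        duration = random.expovariate(1.0 / mean_dur)
        if in_use + size <= capacity:                # accept-if-available policy
            in_use += size
            heapq.heappush(releases, (t + duration, size))
        else:
            blocked += 1
    return blocked / n_requests, busy_area / (capacity * t)

p_block, utilization = simulate()
print(f"blocking probability: {p_block:.3f}, utilization: {utilization:.3f}")
```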
160

Localization of Dynamic Acoustic Sources with a Maneuverable Array

Rogers, Jeffrey S. January 2010
This thesis addresses the problem of source localization and time-varying spatial spectrum estimation with maneuverable arrays. Two applications, each with different environmental assumptions and array geometries, are considered: 1) passive broadband source localization with a rigid two-sensor array in a shallow-water, multipath environment, and 2) time-varying spatial spectrum estimation with a large, flexible towed array. Although the two applications differ, the processing scheme associated with each is designed to exploit array maneuverability for improved localization and detection performance.
In the first application, passive broadband source localization is accomplished via time delay estimation (TDE). Conventional TDE methods, such as the generalized cross-correlation (GCC) method, assume a direct-path signal model and thus suffer localization performance loss in shallow-water, multipath environments: correlated multipath returns can produce spurious peaks in the GCC output, resulting in large bearing-estimate errors. A new algorithm that exploits array maneuverability is presented here. The multiple orientation geometric averaging (MOGA) technique geometrically averages cross-correlation outputs to obtain a multipath-robust TDE. A broadband multipath simulation is presented, and the results indicate that MOGA effectively suppresses correlated multipath returns in the TDE.
The second application addresses the problem of field directionality mapping (FDM), or spatial spectrum estimation, in dynamic environments with a maneuverable towed acoustic array. Array processing algorithms for towed arrays are typically designed assuming the array is straight, and are thus degraded during tow-ship maneuvers. In this thesis, maneuvering the array is treated as a feature, allowing for left-right disambiguation as well as improved resolution towards endfire. The Cramér-Rao lower bound is used to motivate the improvement in source localization that can theoretically be achieved by exploiting array maneuverability. Two methods for estimating time-varying field directionality with a maneuvering array are presented: 1) maximum likelihood (ML) estimation solved using the expectation-maximization (EM) algorithm, and 2) a non-negative least squares (NNLS) approach. The NNLS method computes the field directionality from beamformed power outputs, while the ML algorithm uses raw sensor data. A multi-source simulation illustrates both algorithms' ability to suppress ambiguous towed-array backlobes and to resolve closely spaced interferers near endfire, which pose challenges for conventional beamforming approaches, especially during array maneuvers. Receiver operating characteristics (ROCs) are presented to evaluate the algorithms' detection performance versus SNR. The results indicate that both FDM algorithms offer the potential to provide superior detection performance in the presence of noise and interfering backlobes when compared to conventional beamforming with a maneuverable array.
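As background only (not the MOGA or FDM algorithms themselves), the sketch below implements the conventional baseline the thesis improves on: a generalized cross-correlation time-delay estimator with PHAT weighting for a two-sensor pair, run on synthetic broadband data with an invented 23-sample delay.

```python
# Illustrative sketch only -- the conventional GCC baseline (with PHAT weighting)
# that the thesis's MOGA method improves on in multipath; not the thesis code.
# The signals, noise level, and 23-sample true delay are synthetic.

import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate the delay of y relative to x (in seconds) via GCC-PHAT."""
    nfft = 2 * max(len(x), len(y))
    X, Y = np.fft.rfft(x, nfft), np.fft.rfft(y, nfft)
    R = Y * np.conj(X)
    R /= np.abs(R) + 1e-12                            # PHAT: keep phase, drop magnitude
    cc = np.fft.fftshift(np.fft.irfft(R, nfft))
    lags = np.arange(-nfft // 2, nfft // 2) / fs
    if max_tau is not None:                           # restrict to physically feasible delays
        keep = np.abs(lags) <= max_tau
        cc, lags = cc[keep], lags[keep]
    return lags[np.argmax(cc)]

fs = 16_000
rng = np.random.default_rng(3)
s = rng.normal(size=fs)                               # 1 s of broadband source signal
x = s + 0.1 * rng.normal(size=fs)                     # sensor 1
y = np.concatenate([np.zeros(23), s[:-23]]) + 0.1 * rng.normal(size=fs)  # sensor 2, delayed
print(f"estimated delay: {gcc_phat(x, y, fs) * fs:.1f} samples (true: 23)")
```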
