551 |
Distributed Feedback and Feedforward of Discrete-Time Sigma-Delta Modulator
Chiu, Jih-Chin, 23 July 2012 (has links)
This paper presents a discrete-time delta-sigma modulator with distributed feedback and feedforward for radio applications. Because the delta-sigma modulator uses oversampling and noise shaping, the specifications of its building blocks can be relaxed. The paper describes and compares candidate architectures in which the in-band signal is less sensitive to noise interference, improving the resolution of the circuit. The resonator uses a simple structure with a small number of capacitors.
This paper uses TSMC 0.18 µm process parameters for simulation, implementation, and measurement. The fourth-order discrete-time delta-sigma modulator has the following specifications: an input signal frequency of 10.7 MHz, a sampling frequency of 42.8 MHz, a signal bandwidth of 200 kHz, an oversampling ratio of 107, and a one-bit quantizer.
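The stated numbers are internally consistent, and the noise-shaping idea can be sketched in a few lines of Python (my illustration, not the authors' design, and first-order rather than the thesis's fourth-order architecture):

```python
# Oversampling ratio implied by the stated specifications:
# OSR = fs / (2 * bandwidth) = 42.8 MHz / (2 * 200 kHz) = 107.
fs = 42.8e6   # sampling frequency, Hz
bw = 200e3    # signal bandwidth, Hz
osr = fs / (2 * bw)
print(osr)    # 107.0

# A first-order discrete-time delta-sigma loop with a 1-bit quantizer
# (illustrative only; the thesis implements a fourth-order modulator).
def first_order_dsm(samples):
    integ, y, out = 0.0, 0.0, []
    for x in samples:
        integ += x - y                      # integrate quantization error
        y = 1.0 if integ >= 0 else -1.0     # 1-bit quantizer
        out.append(y)
    return out

bits = first_order_dsm([0.5] * 1000)
print(sum(bits) / len(bits))  # bit-stream average tracks the 0.5 input
```

The in-band average of the one-bit stream recovers the input while the quantization error is pushed to high frequencies, which is what permits relaxed component specifications after filtering and decimation.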
|
552 |
Understanding Churn in Decentralized Peer-to-Peer Networks
Yao, Zhongmei, August 2009 (has links)
This dissertation presents a novel modeling framework for understanding the dynamics
of peer-to-peer (P2P) networks under churn (i.e., random user arrival/departure)
and designing systems more resilient against node failure. The proposed models are
applicable to general distributed systems under a variety of conditions on graph construction
and user lifetimes.
The foundation of this work is a new churn model that describes user arrival and
departure as a superposition of many periodic (renewal) processes. It not only allows
general (non-exponential) user lifetime distributions, but also captures heterogeneous
behavior of peers. We utilize this model to analyze link dynamics and the ability
of the system to stay connected under churn. Our results offer exact computation
of user-isolation and graph-partitioning probabilities for any monotone lifetime distribution,
including heavy-tailed cases found in real systems. We also propose an
age-proportional random-walk algorithm for creating links in unstructured P2P networks
that achieves zero isolation probability as system size becomes infinite. We
additionally obtain many insightful results on the transient distribution of in-degree,
edge arrival process, system size, and lifetimes of live users as simple functions of the
aggregate lifetime distribution.
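One consequence of heavy-tailed lifetimes can be shown with a small simulation (made-up parameters, not the dissertation's model): a neighbor observed at a random instant is sampled in a length-biased way, so its lifetime is much longer than the population average; this is the effect that age-based neighbor selection exploits.

```python
import bisect
import random

random.seed(1)

def pareto_lifetime(alpha=3.0, beta=1.0):
    # Shifted Pareto: P(L > x) = (1 + x/beta)^(-alpha); mean beta/(alpha-1).
    return beta * ((1.0 - random.random()) ** (-1.0 / alpha) - 1.0)

# One user's renewal timeline of consecutive lifetimes.
T, t, spans = 50000.0, 0.0, []
while t < T:
    L = pareto_lifetime()
    spans.append((t, t + L))
    t += L

plain_mean = sum(b - a for a, b in spans) / len(spans)

# Lifetime of the interval covering a uniformly random observation instant.
starts = [a for a, _ in spans]
sampled = []
for _ in range(10000):
    u = random.random() * spans[-1][1]
    a, b = spans[bisect.bisect_right(starts, u) - 1]
    sampled.append(b - a)
sampled_mean = sum(sampled) / len(sampled)

print(plain_mean, sampled_mean)  # the observed lifetime is several times longer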
The second half of this work studies churn in structured P2P networks that are
usually built upon distributed hash tables (DHTs). Users in DHTs maintain two types of neighbor sets: routing tables and successor/leaf sets. The former tables determine
link lifetimes and routing performance of the system, while the latter are built for
ensuring DHT consistency and connectivity. Our first result in this area proves that
robustness of DHTs is mainly determined by zone size of selected neighbors, which
leads us to propose a min-zone algorithm that significantly reduces link churn in
DHTs. Our second result uses the Chen-Stein method to understand concurrent
failures among strongly dependent successor sets of many DHTs and finds an optimal
stabilization strategy for keeping Chord connected under churn.
|
553 |
A Study on a Series Grid Interconnection Module for Distributed Energy Resources
Xiau, Ying-Chieh, 13 July 2006 (has links)
This thesis presents the application of a series interconnection scheme to small distributed generation (DG) systems in distribution networks. The concept uses a single voltage source converter (VSC) to control the injected voltage magnitude and phase angle for power injection and voltage sag mitigation. Through an energy storage device and the VSC, DG output varies concurrently with the line loading and provides load-leveling functions. Under voltage sag conditions, it supplies the missing voltage to deal effectively with power quality problems. Its series connection also makes it convenient for preventing islanding operation and well suited to fault current limiting. The concept fits locations where voltage phase shift is not a major concern. Because only one converter is used, it is economical for customer-site distributed energy resource applications, and its control strategy depends on the types of load connected.
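The sag-mitigation role of the series converter can be sketched with phasors (my illustration of the stated principle, with made-up sag numbers): the VSC injects whatever voltage is missing between the grid side and the reference the load should see.

```python
import cmath

def series_injection(v_grid, v_ref=complex(1.0, 0.0)):
    """Per-unit phasor the series converter must inject so the load
    still sees the reference voltage during a sag."""
    return v_ref - v_grid

# Example: a 30% sag with a 10-degree phase jump on the grid side.
v_sag = 0.7 * cmath.exp(1j * cmath.pi * 10 / 180)
v_inj = series_injection(v_sag)
v_load = v_sag + v_inj

print(abs(v_load))  # 1.0: load voltage magnitude restored
print(abs(v_inj))   # magnitude the converter and storage must cover
```

The injected magnitude, drawn from the energy storage device, sizes the converter rating for the worst sag it must ride through.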
|
554 |
An Object-Process Methodology for Implementing a Distribution Information System
Lu, Liang-Yu, 16 July 2001 (has links)
Component-based software development is the most important technological revolution in the software industry of recent years. It has gradually pushed the industry from predominantly manual construction toward automated production supported by assisting tools. Component-based development lets business information systems be assembled flexibly: developers compose software components according to user requirements, and system capability can be adjusted at any time by adding or removing components without affecting the system as a whole, only the components involved.
This thesis proposes an object-process methodology for developing a distributed business information system. The methodology derives business objects from business processes. System analysis is divided into two parts and eight steps: first analyze the user requirements, then design the information system so as to arrive at stable software objects and a stable system framework. The object-process view helps us model complex business systems, mapping real-world activities or abstract concepts into a system model, and lets us analyze and design distributed objects efficiently for distributed operating environments. As a next step, the software model is transformed and encapsulated into Distributed Component Object Model (DCOM) components, which are placed in the system's application layer, making the business information system flexible and well matched to user requirements.
|
555 |
CDPthread: A POSIX-Thread Based Distributed Computing Environment
Tseng, Guo-Fu, 28 July 2009 (has links)
Due to the limits of a single machine's computing power, and for reasons of cost, distributed designs are increasingly popular. The Distributed Shared Memory (DSM) system is one of the hottest topics in this area. Most efforts are dedicated to designing a library or even a new language in order to gain higher performance on DSM systems. As a consequence, programmers are required to learn a new library or language, and must even handle synchronization for the distributed environment themselves. In this paper, we propose a design that is compatible with the POSIX-thread environment: the distributed nature of the system described herein is totally transparent to programmers.
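The promise of transparency can be illustrated with ordinary shared-memory thread code. Python's threading module (itself layered on POSIX threads) stands in here for a C/pthreads program; under a transparent DSM of the kind described, the same source would run unchanged even with threads placed on different machines.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    # Plain mutex-protected update: no DSM-specific API anywhere.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```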
|
556 |
Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions
Alemany, Kristina, 13 November 2009 (has links)
Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times.
Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables - launch date, time of flight, and asteroid stay times (when applicable) - as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition.
This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times.
The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
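The branch-and-bound idea can be shown on a toy sequence problem (costs invented for illustration; the real method wraps a genetic algorithm and a low-thrust trajectory optimizer around this skeleton): partial sequences whose accumulated cost already exceeds the best known complete tour are pruned without expansion.

```python
import itertools

# Toy visiting-order problem: cost of flying from body a to body b.
cost = {(a, b): abs(a - b) + 1 for a in range(6) for b in range(6) if a != b}

best = [float('inf')]

def search(seq, remaining, c):
    if c >= best[0]:
        return                       # prune: bound already exceeded
    if not remaining:
        best[0] = c                  # complete tour improves the incumbent
        return
    for nxt in remaining:
        search(seq + [nxt], remaining - {nxt}, c + cost[(seq[-1], nxt)])

search([0], set(range(1, 6)), 0.0)   # start at body 0, visit bodies 1..5

# Exhaustive enumeration confirms the toy optimum.
brute = min(sum(cost[p] for p in zip((0,) + perm, perm))
            for perm in itertools.permutations(range(1, 6)))
print(best[0], brute)
```

In the full problem the bound comes from physics-based heuristics rather than a lookup table, which is what makes a 6-7 order-of-magnitude reduction in sequences possible.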
|
557 |
Monitoring-as-a-service in the cloud
Meng, Shicong, 03 April 2012 (has links)
State monitoring is a fundamental building block for Cloud services.
The demand for providing state monitoring as a service (MaaS) continues to grow, as evidenced by CloudWatch from Amazon EC2, which allows cloud consumers to pay for monitoring a selection of performance metrics with coarse-grained periodic sampling of runtime states. One of the key challenges for wide deployment of MaaS is to provide a better balance among a set of critical quality and performance parameters, such as accuracy, cost, scalability, and customizability.
This dissertation research is dedicated to innovative research and
development of an elastic framework for providing state monitoring as
a service (MaaS). We analyze limitations of existing techniques, systematically identify the need and the challenges at different layers of a Cloud monitoring service platform, and develop a suite of
distributed monitoring techniques to support flexible monitoring
infrastructure, cost-effective state monitoring and monitoring-enhanced Cloud management. At the monitoring infrastructure layer, we develop techniques to support multi-tenancy of monitoring services by exploring cost sharing between monitoring tasks and safeguarding monitoring resource usage. To provide elasticity in monitoring, we propose techniques to allow the monitoring infrastructure to self-scale with monitoring demand. At the cost-effective state monitoring layer, we devise several new state monitoring functionalities to meet unique functional requirements in Cloud monitoring. Violation likelihood state monitoring explores the benefits of consolidating monitoring workloads by allowing utility-driven monitoring intensity tuning on individual monitoring tasks and identifying correlations between monitoring tasks. Window based state monitoring leverages distributed windows for the best monitoring accuracy and communication efficiency. Reliable state monitoring is robust to both transient and long-lasting communication issues caused by component failures or cross-VM performance interferences. At the monitoring-enhanced Cloud management layer, we devise a novel technique to learn about the performance characteristics of both Cloud infrastructure and Cloud applications from cumulative performance monitoring data to increase the cloud deployment efficiency.
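The flavor of window-based state monitoring can be sketched as follows (a simplified reading with illustrative parameters, not the dissertation's algorithm): a violation is reported only when the metric stays above threshold for a whole window, filtering transient spikes.

```python
from collections import deque

class WindowMonitor:
    """Report a violation only when the metric exceeds the threshold
    for an entire sliding window of w samples."""
    def __init__(self, threshold, w):
        self.threshold = threshold
        self.window = deque(maxlen=w)

    def observe(self, value):
        self.window.append(value > self.threshold)
        return len(self.window) == self.window.maxlen and all(self.window)

m = WindowMonitor(threshold=0.9, w=3)
samples = [0.95, 0.50, 0.92, 0.93, 0.96, 0.91]
alerts = [m.observe(s) for s in samples]
print(alerts)  # [False, False, False, False, True, True]
```

In a distributed setting each node would evaluate its local window and coordinate only when a global violation is possible, which is roughly where the communication savings come from.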
|
558 |
Prediction based load balancing heuristic for a heterogeneous cluster
Saranyan, N, 09 1900 (has links)
Load balancing has been a topic of interest in both academia and industry, mainly
because of the scope for performance enhancement that is available to be exploited in
many parallel and distributed processing environments. Among the many approaches
that have been used to solve the load balancing problem, we find that only very few
use prediction of code execution times. Our reasoning for this is that the field of code prediction
is in its infancy. As of this writing, we are not aware of any prediction-based
load balancing approach that uses prediction of code-execution times, and uses neither
the information provided by the user, nor an off-line step that does the prediction, the
results of which are then used at run-time. In this context, it is important to note that
prior studies have indicated the feasibility of predicting the CPU requirements of general
application programs.
Our motivation in using prediction-based load balancing is to determine the feasibility
of the approach. The reasoning behind that is the following: if prediction-based load
balancing does yield good performance, then it may be worthwhile to develop a predictor
that can give a rough estimate of the length of the next CPU burst of each process. While
high accuracy of the predictor is not essential, the computation overhead of the predictor
must be sufficiently small, so as not to offset the gain of load balancing.
As for the system, we assume a set of autonomous computers, that are connected by
a fast, shared medium. The individual nodes can vary in the additional hardware and
software that may be available in them. Further, we assume that the processes in the
workload are sequential.
The first step is to fix the parameters for our assumed predictor. Then, an algorithm
that takes into account the characteristics of the predictor is proposed. There are many
trade-off decisions in the design of the algorithm, including certain steps in which we
have relied on a trial-and-error method to find suitable values. The next logical step is
to verify the efficiency of the algorithm. To assess its performance, we carry out event
driven simulation. We also evaluate the robustness of the algorithm with respect to the
characteristics of the predictor.
The contribution of the thesis is as follows: It proposes a load-balancing algorithm
for a heterogeneous cluster of workstations connected by a fast network. The simulation
assumes that the heterogeneity is limited to variability in processor clock rates; but
the algorithm can be applied when the nodes have other types of heterogeneity as well.
The algorithm uses prediction of CPU burst lengths as its basic input unit. The performance
of the algorithm is evaluated through event driven simulation using assumed
workload distributions. The results of the simulation show that the algorithm yields a
good improvement in response times over the scenario in which no load redistribution is
done.
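A predictor with the assumed characteristics (rough but cheap) could be the classic exponential average of past CPU bursts; the dispatch rule below is a sketch of clock-rate-scaled balancing, not the thesis's exact algorithm, and all parameters are illustrative.

```python
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    # Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

def pick_node(pending_work, clock_rates):
    # Send the process to the node with the smallest predicted completion
    # time, scaling pending work by each node's clock rate (the only form
    # of heterogeneity assumed in the simulation).
    return min(range(len(clock_rates)),
               key=lambda i: pending_work[i] / clock_rates[i])

tau = predict_next_burst([8.0, 12.0, 10.0])
best = pick_node([50.0 + tau, 30.0 + tau, 80.0 + tau], [1.0, 1.0, 2.0])
print(tau, best)  # 10.25, node 1
```

The predictor is O(1) per burst, consistent with the requirement that its overhead not offset the gain of load balancing.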
|
559 |
Distributed database support for networked real-time multiplayer games
Grimm, Henrik, January 2002 (has links)
The focus of this dissertation is on large-scale, long-running networked real-time multiplayer games. In this type of game, each player controls one or more entities that interact in a shared virtual environment. Three attributes - scalability, security, and fault tolerance - are considered essential for such games. The usual approaches to building them, based on a client/server or peer-to-peer architecture, fail to achieve all three attributes. We propose a server-network architecture that supports them. In this architecture, a cluster of servers collectively manages the game state, with each server managing a separate region of the virtual environment. We discuss how the architecture can be extended using proxies, and we compare it to other similar architectures. Further, we investigate how a distributed database management system can support the proposed architecture. Since efficiency is critical in these games, some properties of traditional database systems must be relaxed. We also show how methods for increasing scalability, such as interest management and dead reckoning, can be implemented in a database system. Finally, we suggest how the proposed architecture can be validated using a simulation of a large-scale game.
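Dead reckoning, one of the scalability methods mentioned, can be sketched generically (thresholds and state are made up for illustration): each node extrapolates remote entities from their last known state, and corrections are sent only when the drift exceeds a threshold.

```python
def dead_reckon(pos, vel, dt):
    # Extrapolate position from the last broadcast state.
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, predicted, threshold=0.5):
    # Send a correction only when the drift is visible to other players.
    err = sum((a - b) ** 2 for a, b in zip(true_pos, predicted)) ** 0.5
    return err > threshold

predicted = dead_reckon((10.0, 4.0), vel=(2.0, 0.0), dt=0.25)
print(predicted)                             # (10.5, 4.0)
print(needs_update((10.6, 4.1), predicted))  # False: drift too small to send
```

Because most updates are suppressed, network traffic scales with how unpredictably entities move rather than with the raw update rate.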
|
560 |
Policy architecture for distributed storage systems
Belaramani, Nalini Moti, 15 October 2009 (has links)
Distributed data storage is a building block for many distributed systems
such as mobile file systems, web service replication systems, enterprise file
systems, etc. New distributed data storage systems are frequently built as new
environments, requirements, or workloads emerge. The goal of this dissertation
is to develop the science of distributed storage systems by making it easier
to build new systems. In order to achieve this goal, it proposes a new policy
architecture, PADS, that is based on two key ideas: first, by providing a set of
common mechanisms in an underlying layer, new systems can be implemented
by defining policies that orchestrate these mechanisms; second, policy can be
separated into routing and blocking policy, each addressing a different part of the
system design. Routing policy specifies how data flow among nodes in order
to meet performance, availability, and resource usage goals, whereas blocking
policy specifies when it is safe to access data in order to meet consistency and
durability goals. This dissertation presents a PADS prototype that defines a set of distributed
storage mechanisms that are sufficiently flexible and general to support
a large range of systems, a small policy API that is easy to use and captures
the right abstractions for distributed storage, and a declarative language
for specifying policy that enables quick, concise implementations of complex
systems.
We demonstrate that PADS is able to significantly reduce development
effort by constructing a dozen significant distributed storage systems spanning
a large portion of the design space over the prototype. We find that each
system required only a couple of weeks of implementation effort and a few dozen lines of policy code.
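The routing/blocking split can be caricatured in a few lines (a hypothetical rendering; PADS's actual primitives and declarative language differ): routing decides where data should flow, blocking decides when an access may proceed.

```python
def routing_policy(obj_id, replicas, local_node):
    # Where should an update to obj_id flow? E.g., push it to every
    # replica except the writer (performance/availability concern).
    return [n for n in replicas if n != local_node]

def blocking_policy(obj_meta):
    # May a read proceed? E.g., block until the local copy is known
    # to be current (consistency/durability concern).
    return obj_meta.get("valid", False)

targets = routing_policy("file.txt", ["a", "b", "c"], local_node="a")
ok = blocking_policy({"valid": True})
print(targets, ok)
```

Keeping the two concerns in separate functions is what lets one set of underlying mechanisms serve many different storage systems.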
|