41 |
Performance Analysis and Implementation of Predictable Streaming Applications on Multiprocessor Systems-on-Chip. Zhu, Jun. January 2010.
Driven by the increasing capacity of integrated circuits, multiprocessor systems-on-chip (MPSoCs) are widely used in modern consumer electronics devices. In this thesis, performance analysis and implementation methodologies are explored for designing predictable streaming applications on MPSoC computing platforms. Application functionality and concurrency are described with synchronous data flow (SDF) computational models, and two state-of-the-art architecture templates are adopted as multiprocessor architectures: network-on-chip (NoC) based MPSoCs and hybrid reconfigurable CPU/FPGA platforms. Based on the author's contributions to simulation and formal analytical methods, performance analysis and design space exploration for embedded MPSoC architectures are addressed. An energy-efficient design space exploration flow is proposed for streaming applications with guaranteed throughput on NoC-based MPSoCs, in which both application throughput analysis and system energy calculation are carried out by simulation on a multi-clocked synchronous modelling framework. In addition, based on event models of data streams, a formal analytical scheduling framework is developed for real-time streaming applications with minimal buffer requirements on hybrid CPU/FPGA architectures. The scheduling problem is formalized declaratively with constraint-based techniques and solved by a public-domain constraint solver. Subsequently, the constraint-based method is extended to problems ranging from global computation/communication scheduling and reconfiguration analysis to Pareto-efficient design. Finally, a prototype stream processing system on an FPGA-based MPSoC is built to substantiate the theoretical results of this thesis.
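A defining property of the SDF models mentioned above is that each actor's token production and consumption rates per firing are fixed, so a repetition vector (how often each actor fires per schedule iteration) follows from the balance equations. As a minimal sketch, independent of the thesis's actual tooling and assuming a connected, consistent graph:

```python
from fractions import Fraction
from functools import reduce
from math import lcm

def sdf_repetition_vector(actors, edges):
    """Smallest integer repetition vector of a connected, consistent SDF graph.
    edges: list of (src, prod, dst, cons) meaning src produces `prod` tokens
    per firing on a channel from which dst consumes `cons` per firing.
    Solves the balance equations q[src] * prod == q[dst] * cons."""
    q = {a: None for a in actors}
    q[actors[0]] = Fraction(1)          # anchor one rate, propagate the rest
    changed = True
    while changed:
        changed = False
        for src, prod, dst, cons in edges:
            if q[src] is not None and q[dst] is None:
                q[dst] = q[src] * prod / cons
                changed = True
            elif q[dst] is not None and q[src] is None:
                q[src] = q[dst] * cons / prod
                changed = True
    # scale the rational solution to the smallest all-integer vector
    denom = reduce(lcm, (f.denominator for f in q.values()))
    return {a: int(f * denom) for a, f in q.items()}
```

For a chain A→B→C with rates (2, 3) and (1, 2), this gives {"A": 3, "B": 2, "C": 1}: firing A three times, B twice, and C once returns every buffer to its initial token count, which is what makes the throughput of such applications statically analyzable.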
|
42 |
Fast simulation of rare events in Markov level/phase processes. Luo, Jingxiang. 19 July 2004.
Methods for efficient Monte Carlo simulation of rare events have been studied for several decades. Rare events are central to the evaluation of high-quality computer and communication systems, yet the efficient simulation of systems involving rare events poses great challenges.
A simulation method is said to be efficient if the number of replicas required to obtain accurate estimates grows slowly compared to the rate at which the probability of the rare event approaches zero.
Despite the great success of the two mainstream methods, importance sampling (IS) and importance splitting, either can become inefficient under certain conditions, as reported in recent studies.
The purpose of this study is to seek possible enhancements of fast simulation methods. I focus on the "level/phase process", a Markov process in which the level and the phase are two state variables. Changes of level and phase are induced by events whose rates are independent of the level, except at a boundary.
For such a system, the event of reaching a high level occurs rarely, provided the system typically stays at lower levels. The states at those high levels constitute the rare event set.
Though simple, this setting models a variety of applications involving rare events.
In this setting, I study two efficient simulation methods, the rate tilting method and the adaptive splitting method, and analyze their efficiency.
I compare the efficiency of rate tilting with several similar, previously used methods. The experiments use queues in tandem, an often-used test bench for rare event simulation. The schema of adaptive splitting has not previously been described in the literature; for this method, I analyze its efficiency to show its superiority over conventional splitting.
The way a system approaches a designated rare event set is called the system's large deviation behavior. To gain insight into the relation between system behavior and the efficiency of IS simulation, I quantify the large deviation behavior and its complexity.
This work indicates that the system's large deviation behavior has a significant impact on the efficiency of a simulation method.
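As a toy illustration of the rate-tilting idea (not the thesis's models or results), consider a single-variable birth-death "level" chain with up-rate lam < down-rate mu, and estimate the small probability of hitting level N before emptying. Importance sampling under the classic swapped-rate tilt keeps every likelihood ratio bounded; all names and parameters below are illustrative:

```python
import random

def hit_prob_is(lam, mu, N, runs=20000, seed=1):
    """Estimate P(level reaches N before 0 | start at level 1) for a
    birth-death chain with up-rate lam < down-rate mu, by importance
    sampling: simulate with lam and mu swapped (the standard tilt for
    this model) and weight each path by its likelihood ratio."""
    p, q = lam / (lam + mu), mu / (lam + mu)   # original up/down probabilities
    pt, qt = q, p                              # tilted: rates swapped
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        level, lr = 1, 1.0                     # lr accumulates the likelihood ratio
        while 0 < level < N:
            if rng.random() < pt:              # step under the tilted measure
                level += 1
                lr *= p / pt
            else:
                level -= 1
                lr *= q / qt
        if level == N:                         # rare event reached
            total += lr
    return total / runs
```

For lam=1, mu=2, N=20 the exact probability is 1/(2^20 - 1) ≈ 9.54e-7, and the estimator above recovers it to within a few percent from 20,000 replicas, whereas crude Monte Carlo would need on the order of 10^8 replicas per observed hit.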
|
44 |
Performance Analysis of Distributed MAC Protocols for Wireless Networks. Ling, Xinhua. 01 May 2007.
How to improve radio resource utilization and provide better quality-of-service (QoS) is an everlasting challenge for the designers of wireless networks. As an indispensable element of the solution, medium access control (MAC) protocols coordinate the stations and resolve channel access contentions so that the scarce radio resources are shared fairly and efficiently among the participating users. For a given physical layer, a properly designed MAC protocol is the key to the desired system performance and directly affects the QoS perceived by end users.
Distributed random access protocols are widely used MAC protocols in both infrastructure-based and infrastructureless wireless networks. To understand the characteristics of these protocols, there have been enormous efforts in the literature to study their performance by means of analytical modeling. However, the existing approaches are inflexible in adapting to different protocol variants and traffic situations, due either to many unrealistic assumptions or to high complexity.
In this thesis, we propose a simple and scalable generic performance analysis framework for a family of distributed MAC protocols based on carrier sense multiple access with collision avoidance (CSMA/CA), regardless of the detailed backoff and channel access policies, and with fewer, more realistic assumptions. It provides a systematic approach to the performance study and comparison of diverse MAC protocols in various situations. Developed from the viewpoint of a tagged station, the proposed framework focuses on modeling the backoff and channel access behavior of an individual station. A set of fixed-point equations is obtained from a novel three-level renewal process concept, which leads to the fundamental MAC performance metric, the average frame service time. From this result, the important network saturation throughput is then obtained straightforwardly. This distinctive approach makes the proposed analytical framework unified for both saturated and unsaturated stations.
The proposed framework is successfully applied to study and compare the performance of three representative distributed MAC protocols in a network with homogeneous service: the legacy p-persistent CSMA/CA protocol, the IEEE 802.15.4 contention access period MAC protocol, and the IEEE 802.11 distributed coordination function. It also extends naturally to studying the effects of three prevalent mechanisms for prioritized channel access in a network with service differentiation. In particular, the novel concepts of "virtual backoff event" and "pre-backoff waiting periods" greatly simplify the analysis of the arbitration interframe space mechanism, which previous works in the literature have shown to be the most challenging of the three. Comparison with comprehensive simulations shows that the proposed analytical framework provides accurate performance predictions over a wide range of station populations. The results obtained provide many helpful insights into how to improve the performance of current protocols and design better new ones.
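The thesis's own fixed-point equations come from its three-level renewal analysis and are not reproduced here. As a rough sketch of the kind of coupled fixed point such frameworks solve, the well-known Bianchi-style saturation model for n IEEE 802.11 DCF stations relates the per-slot transmission probability tau and the conditional collision probability p (W, the minimum contention window, and m, the number of backoff stages, are illustrative parameters):

```python
def dcf_fixed_point(n, W=32, m=5, iters=2000):
    """Damped substitution for the Bianchi-style fixed point:
      p   = 1 - (1 - tau)^(n-1)                     (some other station transmits)
      tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)^m))
    Returns (tau, p) at convergence."""
    tau = 0.05
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        num = 2.0 * (1.0 - 2.0 * p)
        den = (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        tau = 0.5 * tau + 0.5 * num / den     # damping stabilizes the iteration
    return tau, p
```

For n = 10 stations with W = 32 and m = 5 this settles near tau ≈ 0.037 and p ≈ 0.29; the saturation throughput then follows from tau, p, and the PHY's slot-time parameters, which is the same final step the thesis takes from its average frame service time.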
|
45 |
Performance Analysis of Relational Database over Distributed File Systems. Tsai, Ching-Tang. 08 July 2011.
With the growth of the Internet, people use the network frequently. Many PC applications have moved to network-based environments, such as text processing, calendars, and photo management; users can even develop applications on the network. Google is a company providing web services; its popular services, the search engine and Gmail, attract people with short response times and large amounts of data storage, and it charges businesses to place their own advertisements. Facebook, another popular website, is a hot social network that processes huge volumes of instant messages and social relationships between users. The power behind these services comes from a new technique: cloud computing.
Cloud computing maintains high-performance processing and short response times; its kernel components are distributed data storage and distributed data processing. Hadoop is a famous open-source project for building a cloud distributed file system and distributed data analysis. Hadoop is suitable for batch applications and write-once-read-many applications, so currently only a few kinds of applications, such as pattern searching and log file analysis, have been implemented over Hadoop. However, almost all database applications still use relational databases. To port them to a cloud platform, it becomes necessary to run a relational database over HDFS. We therefore test FUSE-DFS, an interface that mounts HDFS into a system so that it can be used like a local filesystem. If FUSE-DFS performance can satisfy users' applications, it becomes easier to persuade people to port their applications to a cloud platform with the least overhead.
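Assuming an HDFS volume is already mounted through FUSE-DFS at some directory, one crude way to judge whether the mount can satisfy an application (the paths and sizes below are placeholders, not from the thesis) is to time sequential I/O through the ordinary filesystem API and compare against a local directory:

```python
import os
import time

def seq_io_seconds(dirpath, size_mb=16, block=1 << 20):
    """Sequential write-then-read timing of size_mb MiB in `dirpath`.
    Run once on a local directory and once inside the FUSE mount;
    the ratio is a first estimate of the mount's overhead."""
    data = b"\0" * block
    fname = os.path.join(dirpath, "fusebench.tmp")
    t0 = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                  # force the data to the backing store
    t_write = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(fname, "rb") as f:
        while f.read(block):
            pass
    t_read = time.perf_counter() - t0
    os.remove(fname)
    return t_write, t_read
```

Because the benchmark uses only the portable filesystem API, any unmodified relational database engine would see the same code path through the mount, which is exactly the transparency FUSE-DFS is meant to provide.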
|
46 |
A Performance Monitoring Tool Suite for Software and SoC On-Chip Bus: Using a 3D Graphics SoC as an Example. Chang, Yi-Hao. 19 March 2012.
Nowadays an SoC involves both software and hardware designs, and a performance bottleneck may occur in software, in hardware, or in both. However, present performance monitoring tools usually evaluate only one of software or hardware performance, which is not sufficient for today's SoC designs. Furthermore, due to the increasing complexity of user requirements, an embedded OS such as Linux is introduced to manage the limited hardware resources for complicated applications. This also makes performance monitoring harder, since the memory addressing space is divided into user space and kernel space with different capabilities to access system resources, making it impossible for a user-space application to retrieve system performance information without kernel or hardware support. In this thesis, we propose a performance monitoring tool suite capable of analyzing the performance of user-space applications, kernel-space device drivers, and the AMBA AHB bus for an SoC running Linux. The Performance Monitoring Tool Suite (PMTS) we develop includes a Program Monitor (PM) to monitor the execution time of software, plus a Bus Utilization Monitor (BUM), Bus Contention Monitor (BCM), and Bus Global Monitor (BGM) to monitor bus utilization, contentions, etc. PMTS helps users find the performance bottlenecks of both software and hardware more easily. We have applied PMTS to an FPGA development board and found the hardware/software performance bottlenecks of the designs. The experimental results show that adding PMTS does not impact the critical path of the SoC.
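PMTS itself needs kernel and hardware support to see below the user/kernel boundary; purely as a user-space sketch of the Program Monitor idea, the POSIX getrusage interface already separates time spent in user space from time spent in the kernel for a single process (illustrative, not the thesis's tool):

```python
import resource
import time

def profile_call(fn, *args):
    """Run fn(*args); report wall-clock time plus the user-space and
    kernel-space CPU time consumed by this process during the call."""
    r0 = resource.getrusage(resource.RUSAGE_SELF)
    t0 = time.perf_counter()
    result = fn(*args)
    wall = time.perf_counter() - t0
    r1 = resource.getrusage(resource.RUSAGE_SELF)
    return result, {
        "wall_s": wall,
        "user_s": r1.ru_utime - r0.ru_utime,   # time executing user code
        "sys_s": r1.ru_stime - r0.ru_stime,    # time inside the kernel (syscalls)
    }
```

On Linux this is the same user/system split the kernel exposes per process; bus-level events such as AHB utilization and contention, by contrast, are visible only to hardware monitors like the thesis's BUM, BCM, and BGM.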
|
47 |
TCP Performance Analysis on the Position of Link Failure in MPLS Traffic Rerouting. Yang, Ping-Chan. 20 August 2004.
Multi-Protocol Label Switching (MPLS), a label swapping and forwarding technology proposed by the IETF, is very suitable for the backbone of the next-generation Internet. MPLS has the advantages of improving the performance of network-layer routing and of increasing network scalability. To provide more reliable delivery in MPLS networks, every label switch router (LSR) must perform a fast recovery mechanism after a link failure, and an LSR must also support the functions of failure detection, failure notification, and protection in each label switched path (LSP). Therefore, different kinds of recovery schemes have been proposed in the literature to enhance the reliability of MPLS networks when a link failure occurs on the primary LSP.
In this thesis, we focus on comparing three well-known recovery mechanisms: the Makam, Haskin, and Hundessa approaches. By investigating different locations of link failure, our major concern is the influence of each of the three approaches on TCP performance, especially under different TCP versions. Finally, we use the MPLS Network Simulator (MNS) to verify our observations; four TCP versions, TCP-Tahoe, TCP-Reno, TCP-NewReno, and TCP-SACK, are employed in our simulator.
From the simulation results, the characteristics of congestion control under the different TCP versions are discussed. Without fast retransmission and fast recovery, the average throughput of TCP-Tahoe is the smallest compared with the other TCP versions. In addition, multiple packet losses during a link failure largely degrade the average throughput, no matter which TCP version (TCP-NewReno or TCP-Reno) is employed. With the Makam approach, we found that the average throughput improves when the location of the link failure is close to the ingress node.
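The Tahoe gap reported above follows from how each variant resets its congestion window on loss. A toy per-RTT sketch (illustrative only, not the MNS model; real TCP also handles timeouts and duplicate ACKs) shows the qualitative difference:

```python
def cwnd_trace(variant, loss_rtts, steps=20, ssthresh=16.0):
    """Toy per-RTT congestion-window trace. On a detected loss, Tahoe
    restarts slow start from 1 segment; Reno halves the window (fast
    recovery). loss_rtts: set of RTT indices at which a loss occurs."""
    cwnd, trace = 1.0, []
    for t in range(steps):
        if t in loss_rtts:
            ssthresh = max(cwnd / 2.0, 2.0)
            cwnd = 1.0 if variant == "tahoe" else ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2.0, ssthresh)   # slow start, capped at ssthresh
        else:
            cwnd += 1.0                        # congestion avoidance (additive)
        trace.append(cwnd)
    return trace
```

With a single loss at RTT 10, the Reno trace resumes at half the pre-loss window while the Tahoe trace restarts from one segment and must climb back through slow start, which is why Tahoe's average throughput is the smallest in the simulations above.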
|
48 |
A Study of Visitors' Experience Satisfaction in Different Types of Leisure Farms. Chu, Chia-Ching. 27 July 2006.
none
|
49 |
An Importance-Performance Analysis of Hotel Selection Factors in Kaohsiung International Hotels: A Study of Mainland Chinese and Japanese Travelers. Cheng, Shih-Jang. 16 July 2006.
International hotel guests are mostly travelers from foreign countries, and among them the Japanese have always ranked highest among visitors to Taiwan, making them the most important market. On the other hand, since Mainland China liberalized outbound sightseeing for its citizens, the travel industry all over the world has opened its arms wide to absorb this big market.
The Taiwanese tourism industry has likewise been preparing for the arrival of Mainland Chinese travelers in recent years. This study therefore focuses on these two major traveler markets and examines whether the selection of an international hotel in Kaohsiung is affected by differences in culture, traveling attitude, and social or economic background, comparing the importance of hotel facilities and service quality as perceived and rated by guests. Importance-Performance Analysis (IPA), along with SWOT analysis, is applied to analyze the selection factors of Mainland Chinese and Japanese travelers at six international hotels in Kaohsiung.
The factors are categorized into six groups: 1. service quality; 2. business facilities; 3. value; 4. room and front desk; 5. food and recreation; 6. security. A questionnaire was formed according to this framework and 700 copies were printed. The survey was conducted from 8 March to 8 April 2006 and returned 161 valid Japanese responses and 116 valid simplified-Chinese responses, a valid return rate of 39.5%. The results of the analyses are as follows:
For both Mainland Chinese and Japanese travelers, the Kaohsiung international hotels retain few strengths: only two items, namely "IDD service is available" and "Hotel provides a comfortable ambience". The Kaohsiung international hotels should seriously consider reinforcing their competitive strengths to improve occupancy rates, i.e., for Mainland Chinese travelers, high-quality in-room temperature control; for Japanese travelers, high-quality in-room temperature control and hotel F&B that is value for money. These improvements will promote travelers' satisfaction.
|
50 |
none. Hung, Wen-chung. 07 February 2007.
none
|