71

Software Design of A Static Memory Synthesis Method

Tseng, Ying-sang 15 July 2004 (has links)
With advances in process technology, we can integrate more and more on-chip memory in an SOC. Memory-intensive applications, such as image processing and digital signal processing, usually access a certain number of data arrays. Memory system design for such systems can therefore critically influence the cost, performance, and power of the resulting SOCs. In this thesis research, we focus on the software design of a memory synthesis method for data stored in arrays. The memory synthesis task considers the access time, power, and cost requirements of array data and exploits the characteristics of the indexing patterns of array accesses. It then derives the allocation of memory organizations and effective organizations of multiple data arrays mapped onto the allocated memory modules. Its design principle lies in matching data array reorganization with the assigned memory module resource allocation so as to enhance data access locality within the same memory rows and to reduce the number of row accesses (bit-line accesses). Hence, we can achieve the required power and performance goals with low memory system cost. Static memory synthesis solves the memory synthesis problem with fixed loop counts and tasks known prior to product design. The memory synthesis task follows the high-level synthesis task. It is then followed by the design of the address generation circuit and the memory access scheduling circuit on the functional module side. These circuit designs can be combined with the high-level synthesis results for logic synthesis. Finally, we can perform physical synthesis of the functional modules, logic circuit modules, and memory modules. The result is expected to be an SOC design that satisfies the overall design requirements. The software design of the static memory synthesis method covers two main topics: memory allocation and module assignment for data arrays, and the estimation method for a memory system design. In this research, we designed experimental software for the memory synthesis method. We also planned experiments based upon real and synthetic design cases to validate the effectiveness of the static memory synthesis method.
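As an illustration of the row-locality idea described in this abstract, the toy sketch below counts how many row activations a stream of array accesses causes when the array is mapped onto a memory module with a given row width. The array layout, row size, and access traces are hypothetical, and the open-row model is a deliberate simplification; this is not the thesis's synthesis algorithm, only a sketch of the metric it tries to reduce.

```python
def count_row_activations(access_indices, elements_per_row):
    """Count row activations (bit-line accesses) for a stream of array accesses.

    access_indices: flat array indices in access order.
    elements_per_row: how many array elements fit in one memory row.
    A new activation is charged whenever the accessed row differs from the
    currently open row (simple open-row model).
    """
    activations = 0
    open_row = None
    for idx in access_indices:
        row = idx // elements_per_row
        if row != open_row:
            activations += 1
            open_row = row
    return activations


if __name__ == "__main__":
    # Stride-8 access pattern over a 32-element array: every access opens a new row.
    strided = [(i % 4) * 8 + (i // 4) for i in range(32)]
    # Sequential traversal of the same array: consecutive accesses share a row.
    sequential = list(range(32))
    print(count_row_activations(strided, elements_per_row=8))     # 32 activations
    print(count_row_activations(sequential, elements_per_row=8))  # 4 activations
```

Reorganizing the data so that consecutively accessed elements share a row, as the abstract describes, is exactly what drives the first count toward the second.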
72

A Gain-sharing Model applied to re-evaluate the stock exchange ratio in the communication industry's M&A – the case of Taiwan Mobile Corporation and Far EasTone Telecommunications Corporation.

Tsai, Tsung-hsien 24 June 2005 (has links)
After the liberalization of the market, there is a surge of competitors and price competition, and in the end the market moves toward M&A. At the beginning, there were seven competitors in Taiwan's communication industry. After the M&A wave, the market consolidated into three large groups: Chunghwa Telecom Co., Ltd., Taiwan Mobile Co., Ltd., and Far EasTone Co., Ltd. The traditional models for evaluating the stock exchange ratio in an M&A evaluate the target company and buy it out. In this paper, we adopt a game-theoretic gain-sharing model to re-evaluate the stock exchange ratio in terms of profit generation and distribution. The result is that net income is the proper variable for evaluating the stock exchange ratio in Taiwan's communication industry.
73

A Probability-based Framework for Dynamic Resource Scheduling in Grid Environment

Lin, Hung-yang 07 July 2007 (has links)
Recent enthusiasm in grid computing has resulted in a tremendous amount of research on resource scheduling techniques for tasks in a workflow. Most of the work on resource scheduling aims at minimizing the total response time of the entire workflow and treats the estimated response time of a task running on a local resource as a constant. However, in a dynamic environment such as grid computing, the behavior of resources simply cannot be guaranteed. In this thesis, we therefore propose a probabilistic framework for resource scheduling in a grid environment that views the task response time as a probability distribution in order to take the uncertain factors into consideration. The goal is to dynamically assign resources to tasks so as to maximize the probability of completing the entire workflow within a desired total response time. We propose three algorithms for dynamic resource scheduling in a grid environment, namely integer programming, the max-max heuristic, and the min-max heuristic. Experimental results using synthetic data derived from a real protein annotation workflow application demonstrate that, compared with existing methods that treat the response time as a constant, the proposed probability-based scheduling strategies achieve similar performance in an environment with homogeneous resources and perform better in an environment with heterogeneous resources. Of the two proposed heuristics, min-max generally yields better performance.
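A minimal sketch of the core idea in this abstract: treat each task's response time as a distribution rather than a constant and score a resource assignment by the probability that the whole workflow finishes before a deadline. The chain-shaped workflow, the normal distributions, and the two candidate assignments are hypothetical; the thesis's actual algorithms (integer programming, max-max, min-max) are not reproduced here.

```python
import random

def completion_probability(task_time_dists, deadline, trials=10_000):
    """Estimate P(total response time <= deadline) by Monte Carlo.

    task_time_dists: list of (mean, stddev) pairs, one per task of a
    sequential (chain) workflow, for the resources currently assigned.
    """
    hits = 0
    for _ in range(trials):
        total = sum(max(0.0, random.gauss(mu, sigma)) for mu, sigma in task_time_dists)
        if total <= deadline:
            hits += 1
    return hits / trials


if __name__ == "__main__":
    # Two hypothetical assignments of three chained tasks to resources.
    fast_but_noisy = [(10, 6), (12, 7), (8, 5)]    # smaller mean, large variance
    slow_but_stable = [(12, 1), (14, 1), (10, 1)]  # larger mean, small variance
    deadline = 40
    print(completion_probability(fast_but_noisy, deadline))
    print(completion_probability(slow_but_stable, deadline))
    # A probability-based scheduler in this spirit picks the assignment with
    # the higher completion probability, not the smaller expected sum.
```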
74

Resource Allocation for Multimedia with QoS Requirement in Wireless Network

Lo, Che-Feng 20 July 2001 (has links)
With the rapid development of Web-based technologies, our daily lives have become deeply involved with the Internet. Combined with the maturity of wireless network technologies, the transmission of multimedia data on mobile communication equipment will surely become the next step of Internet usage. More and more real-time data and massive amounts of information are being transmitted on the Internet, making bandwidth a scarce resource. To resolve Internet congestion, therefore, the efficient management and distribution of this limited and valuable resource is more important than enlarging it. Our research proposes a dynamic resource allocation method that exploits the reward-penalty concept to find the most efficient allocation under the constraint of limited resources. The method enables users who need resources to obtain them together with a guarantee of quality. System resource managers or service providers can make the best arrangement of their constrained resources and gain the highest reward through two essential procedures: admission control and resource allocation. Users, on the other hand, "smoothly" adjust the resources they hold to match the resources they are granted. Our algorithm provides existing users with what they requested while at the same time maximizing the benefit of the system and making the most efficient arrangement of resources for new requests. The results of simulation experiments show that our system, which is based on the reward-penalty model, is clearly superior to one based on a reward-only model. The results also show that the CB method takes users' reward rates as well as their penalty rates into account while maintaining admission control.
75

Code Merge Scheme (CMS): A Dynamic Scheme for Allocating OVSF Codes in WCDMA

Huang, Tien-Tsun 06 August 2001 (has links)
Wideband CDMA (WCDMA) is a third-generation wireless communication system. With its wideband technology it can provide multi-rate services and fast transmission. To address the current shortage of wireless bandwidth, 3G communication systems have recently been researched and developed in several leading countries. WCDMA adopts a new family of spreading codes, Orthogonal Variable Spreading Factor (OVSF) codes, which offer dynamically variable rates while preserving orthogonality. OVSF codes can provide different data transmission rates by assigning codes of different lengths. By building a code tree, we can explore better schemes for assigning the available data rates. In this paper, we propose an efficient channel assignment scheme that decreases the call blocking rate and the complexity of the channel reassignment procedure. Based on the properties of the binary code tree, we use a code merge scheme to decrease the channel reassignment rate and the call blocking rate. This efficiently improves the performance of channel assignment and the spectral efficiency. Simulation results show that the proposed scheme achieves the expected improvements.
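A small sketch of the underlying resource model: an OVSF code tree in which a code can be assigned only if none of its ancestors or descendants is in use, which is what keeps the assigned codes orthogonal. The tree size and the requests are hypothetical, and the sketch shows only blocking-aware assignment, not the code-merge/reassignment scheme proposed in the thesis.

```python
class OVSFTree:
    """Binary OVSF code tree; node 1 is the root (spreading factor 1)."""

    def __init__(self, max_sf=8):
        self.max_sf = max_sf   # leaf codes have spreading factor max_sf
        self.used = set()      # node ids of currently assigned codes

    def _blocked(self, node):
        # Blocked if the node itself or any ancestor is used ...
        n = node
        while n >= 1:
            if n in self.used:
                return True
            n //= 2
        # ... or any descendant is used.
        level = [node]
        while level and level[0] < self.max_sf:
            level = [c for n in level for c in (2 * n, 2 * n + 1)]
            if any(c in self.used for c in level):
                return True
        return False

    def assign(self, sf):
        """Assign a free code with spreading factor sf; return its node id or None."""
        for node in range(sf, 2 * sf):   # nodes at depth log2(sf)
            if not self._blocked(node):
                self.used.add(node)
                return node
        return None                      # call blocked at this rate


if __name__ == "__main__":
    tree = OVSFTree(max_sf=8)
    print(tree.assign(4))   # 4: an SF-4 code
    print(tree.assign(2))   # 3: node 2 is blocked by its used descendant 4
    print(tree.assign(2))   # None: every SF-2 code is now used or blocked
```

The last call illustrates the blocking problem that reassignment (and the code-merge idea) is meant to relieve: capacity is still available in the tree, but no single SF-2 code is free.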
76

Admission Control with Maximizing QoS Satisfactions

Yang, Hung-Chun 10 July 2003 (has links)
Advances in technology have increased network bandwidth, which can now carry the growing volume of data from text to multimedia, and network applications change with every passing day and become more varied. Recently, the rise of wireless networks has attracted public attention. Compared with traditional networks, wireless networks have the advantage of convenience and extensibility, but shortcomings in bandwidth and stability. Because the resources of a wireless network are limited, introducing QoS (Quality of Service) allows those resources to be used more efficiently. QoS guarantees the ability to meet the requirements of particular network applications through network components or technology; it differentiates between classes of network services and allocates system resources more effectively. Our research adopts the reward-penalty model to differentiate between kinds of service requests and maximize the system's earnings. The model is governed by three QoS parameters: reward rate, delay penalty rate, and drop penalty. In the reward-penalty model, the goal of admission control is to maximize the system's benefit. The purpose of our research is to design an efficient and dynamic resource allocation method, including admission control and resource allocation, that finds the most efficient solution under the constraint of limited resources and smoothly adjusts the resources of existing users to keep their QoS promises. Simulation results show that MDI, the algorithm proposed in our research, performs better than other algorithms. Under different network environments, e.g. arrival rate, requested bandwidth, and transmission time, MDI is better and more stable than the other algorithms. MDI can adjust the QoS parameters, e.g. reward rate, delay penalty rate, and drop penalty, to achieve different system goals such as a low delay rate or a low drop rate. It can thus be seen that MDI not only makes efficient use of system resources but also adjusts the QoS parameters in response to changes in the network environment in order to achieve better performance.
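A toy sketch of the reward-penalty bookkeeping described above. The request fields, rates, and admission rule are hypothetical and much simpler than the MDI algorithm: a request is admitted only if its expected reward, net of delay and drop penalties, is positive and enough bandwidth remains.

```python
from dataclasses import dataclass

@dataclass
class Request:
    bandwidth: float            # bandwidth units requested
    duration: float             # expected transmission time
    reward_rate: float          # reward earned per time unit while served
    delay_penalty_rate: float   # penalty per time unit of queueing delay
    drop_penalty: float         # one-off penalty if the request is dropped

def net_benefit(req: Request, expected_delay: float, drop_probability: float) -> float:
    """Expected benefit of admitting a request under the reward-penalty model."""
    reward = req.reward_rate * req.duration
    delay_cost = req.delay_penalty_rate * expected_delay
    drop_cost = req.drop_penalty * drop_probability
    return reward - delay_cost - drop_cost

def admit(req: Request, free_bandwidth: float,
          expected_delay: float, drop_probability: float) -> bool:
    """Admission control: admit only profitable requests that fit."""
    return req.bandwidth <= free_bandwidth and net_benefit(
        req, expected_delay, drop_probability) > 0


if __name__ == "__main__":
    video = Request(bandwidth=4, duration=30, reward_rate=2,
                    delay_penalty_rate=5, drop_penalty=40)
    print(admit(video, free_bandwidth=10, expected_delay=2, drop_probability=0.1))  # True
    print(admit(video, free_bandwidth=3, expected_delay=2, drop_probability=0.1))   # False: no room
```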
77

Optimal meter placement and transaction-based loss allocation in deregulated power system operation

Ding, Qifeng 17 February 2005 (has links)
In this dissertation, the topics of optimal meter placement and transaction-based loss allocation in deregulated power system operation are investigated. Chapter II first introduces the basic idea of candidate measurement identification, which is the selection of candidate measurement sets, each of which will make the system observable under a given contingency (loss of measurements and network topology changes). A new method is then developed for optimal meter placement, which is the choice of the optimal combination out of the selected candidate measurement sets in order to ensure the entire system's observability under any one of the contingencies. A new method, which allows a natural separation of losses among individual transactions in a multiple-transaction setting, is proposed in Chapter III. The proposed method does not use approximations such as a D.C. power flow, avoiding method-induced inaccuracies. The power network losses are expressed in terms of the individual power transactions. A transaction-loss matrix, which shows the breakdown of the losses introduced by each individual transaction and the interactions between any two transactions, is created. The network losses can then be allocated to each transaction based on the transaction-loss matrix entries. The conventional power flow analysis is extended in Chapter IV to incorporate transaction-based loss allocation. A systematic solution procedure is formed in order to adjust generation while simultaneously allocating losses to the generators designated by individual transactions. Furthermore, Chapter V presents an Optimal Power Flow (OPF) algorithm that optimizes the loss compensation when some transactions elect to purchase the loss service from the Independent System Operator (ISO), with the incurred losses fairly allocated back to the individual transactions. IEEE test systems have been used to verify the effectiveness of the proposed methods.
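A small numerical sketch of how a transaction-loss matrix can be turned into per-transaction loss shares. The 3x3 matrix values are invented, and the allocation convention used here (each transaction keeps its own self-loss term and splits each pairwise interaction term equally with the other party) is only one plausible rule, stated as an assumption rather than the dissertation's exact formulation.

```python
def allocate_losses(loss_matrix):
    """Allocate network losses to transactions from a transaction-loss matrix.

    loss_matrix[i][i] is the loss caused by transaction i alone;
    loss_matrix[i][j] (i != j) is the interaction loss between i and j.
    Assumed rule: self terms stay put, interaction terms are split 50/50.
    """
    n = len(loss_matrix)
    shares = []
    for i in range(n):
        share = loss_matrix[i][i]
        for j in range(n):
            if j != i:
                share += 0.5 * (loss_matrix[i][j] + loss_matrix[j][i])
        shares.append(share)
    return shares


if __name__ == "__main__":
    # Hypothetical transaction-loss matrix (MW) for three bilateral transactions.
    T = [[1.20, 0.30, -0.10],
         [0.30, 0.80,  0.25],
         [-0.10, 0.25, 0.50]]
    print(allocate_losses(T))        # per-transaction loss shares
    print(sum(allocate_losses(T)))   # equals the sum of all matrix entries
```

Whatever splitting convention is chosen, the shares reconstruct the total network loss exactly, which is the point of expressing losses through the matrix in the first place.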
78

Assessment of the Effectiveness of the Advanced Programmatic Risk Analysis and Management Model (APRAM) as a Decision Support Tool for Construction Projects

Imbeah, William Kweku Ansah 17 September 2007 (has links)
Construction projects are complicated and fraught with so many risks that many projects are unable to meet pre-defined project objectives. Managers of construction projects require decision support tools that can be used to identify, analyze, and implement measures that mitigate the effects of project risks. Several risk analysis techniques have been developed over the years to enable construction project managers to make decisions that improve the chances of project success. These risk analysis techniques, however, fail to simultaneously address risks relating to cost, schedule, and quality. Also, construction projects may have scarce resources, and construction managers still bear the responsibility of ensuring that project goals are met. Certain projects require trade-offs between technical and managerial risks, and managers need tools that can help them make these trade-offs. This thesis evaluates the usefulness of the Advanced Programmatic Risk Analysis and Management Model (APRAM) as a decision support tool for managing construction projects. The development of a visitor center in Midland, Texas was used as a case study for this research. The case study involved the implementation of APRAM during the concept phase of project development to determine the construction system that minimizes the expected cost of failure. A risk analysis performed using a more standard approach yielded an expected cost of failure almost eight times that yielded by APRAM. This study concludes that APRAM is a risk analysis technique that can minimize the expected cost of failure by integrating the project risks of time, budget, and quality through the allocation of resources. APRAM can also be useful for making construction management decisions. However, all identified component or material configurations for each alternative system should be analyzed, instead of analyzing only the lowest-cost alternative for each system as proposed by the original APRAM model. In addition, it is not possible to use decision trees to determine the optimal allocation of management reserves that would mitigate managerial problems during construction projects. Furthermore, APRAM does not address the issue of safety during construction and assumes that all identifiable risks can be handled with money.
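A minimal numerical sketch of the "expected cost of failure" comparison mentioned above. The failure modes, probabilities, and dollar figures are invented for illustration; APRAM itself also balances technical and managerial risks and budget allocation, which this sketch does not attempt to capture.

```python
def expected_failure_cost(failure_modes):
    """Expected cost of failure: sum over modes of probability * consequence cost."""
    return sum(prob * cost for prob, cost in failure_modes)


if __name__ == "__main__":
    # Hypothetical alternative construction systems with (probability, cost) modes.
    steel_frame = [(0.02, 500_000),    # structural failure
                   (0.10, 40_000)]     # rework after schedule slip
    timber_frame = [(0.05, 450_000),
                    (0.15, 60_000)]
    for name, modes in [("steel", steel_frame), ("timber", timber_frame)]:
        print(name, expected_failure_cost(modes))
    # All else being equal, the alternative with the lower expected failure
    # cost would be preferred; the thesis argues every identified
    # configuration should be scored this way, not only the cheapest one.
```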
79

A Probability-based Framework for Dynamic Resource Scheduling in Data-Intensive Grid Environment

Li, Shih-Yung 23 July 2008 (has links)
Recent enthusiasm in grid computing has resulted in a tremendous amount of research on resource scheduling techniques for tasks in a (scientific) workflow. Many factors may affect the scheduling results, one of which is whether the application is computing-intensive or data-intensive. Most grid scheduling research focuses on a single aspect of the environment. In this thesis, we build on our previous work, a probability-based framework for dynamic resource scheduling, and take data transmission overhead into account in our scheduling algorithms. The goal is to dynamically assign resources to tasks so as to maximize the probability of completing the entire workflow within a desired total response time. We propose two algorithms for dynamic resource scheduling in a grid environment, namely largest deadline completion probability (LDCP) and smallest deadline completion probability (SDCP). Furthermore, considering the data transmission overhead, we propose a suite of push-based scheduling algorithms, which schedule all the immediate descendant tasks when a task is completed. These algorithms are compared with the pull-based scheduling algorithms in our previous work and the workflow-based algorithms proposed by other researchers. We use the GridSim toolkit to model the grid environment and evaluate the performance of the various scheduling algorithms.
80

Joint routing and resource allocation in multihop wireless network

Luo, Lu. January 2009 (has links)
Includes bibliographical references (p. 54-56).
