51

Backlight module industry electronic supply chain management model -- A case study of a company

Hsieh, Ming-jiuan 17 July 2007 (has links)
Supply chain management involves cross-organizational interaction and integration, the sharing of resources and information, and the optimization of the supply chain as a whole. This process is subject to many sources of uncertainty that interfere with performance, and much of that uncertainty arises to the extent that supply chain members cannot obtain the information needed for decision-making. This lack of information transparency raises inventory costs for every member of the supply chain, leaves productive capacity underused, and weakens enterprise competitiveness. This study finds that information sharing, process reengineering, and the establishment of a performance appraisal system can effectively address the uncertainty facing supply chain operations and improve both supply chain performance and business performance. The results show that backlight module manufacturing is a labor-intensive industry with highly customized products, short product life cycles, and frequently changing demand, so its supply chain faces high demand uncertainty. Sharing demand information with customers and coordinating processes can effectively reduce this uncertainty. Implementing order-process improvement through a cross-organizational coordination mechanism, supported by information sharing, significantly reduces the bullwhip effect in the supply chain and allows risks and profits to be shared with customers. In addition, analysis of a collaborative supply chain framework shows that when cooperating enterprises pursue seamless exchange of operating information and synchronized operations, the overall cooperation yields the greatest synergy.
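The bullwhip effect cited in this abstract is easy to reproduce in a few lines. The sketch below is a minimal illustration assuming a moving-average forecast and an order-up-to replenishment policy (neither taken from the thesis); it shows how a retailer's upstream orders become more variable than the customer demand it faces:

```python
import random
import statistics

random.seed(1)

# Illustrative sketch of the bullwhip effect: a retailer forecasts demand
# with a moving average and uses an order-up-to policy; the variance of the
# orders it places upstream exceeds the variance of customer demand.
# All parameters are assumptions for illustration, not from the thesis.

LEAD_TIME = 2      # replenishment lead time in periods
WINDOW = 4         # moving-average forecast window
PERIODS = 500

demand = [max(0.0, random.gauss(100, 10)) for _ in range(PERIODS)]

orders = []
for t in range(WINDOW + 1, PERIODS):
    forecast = statistics.mean(demand[t - WINDOW:t])
    # Order-up-to level covers forecast demand over lead time plus review period.
    target = forecast * (LEAD_TIME + 1)
    prev_target = statistics.mean(demand[t - 1 - WINDOW:t - 1]) * (LEAD_TIME + 1)
    # Order = current demand plus the adjustment of the target level,
    # which is what amplifies swings in the demand signal.
    orders.append(max(0.0, demand[t] + (target - prev_target)))

amplification = statistics.variance(orders) / statistics.variance(demand)
print(f"variance of customer demand : {statistics.variance(demand):8.1f}")
print(f"variance of upstream orders : {statistics.variance(orders):8.1f}")
print(f"bullwhip ratio              : {amplification:.2f}  (>1 means amplification)")
```

Sharing the raw `demand` series with the supplier, instead of letting it see only the amplified `orders` series, is precisely the information-sharing remedy the abstract describes.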
52

New advances in synchronization of digital communication receivers

Wang, Yan 17 February 2005 (has links)
Synchronization is a challenging but very important task in communications. In digital communication systems, a hierarchy of synchronization problems has to be considered: carrier synchronization, symbol timing synchronization, and frame synchronization. For bandwidth efficiency and burst transmission reasons, the former two synchronization steps tend to favor non-data-aided (NDA, or blind) techniques, while the last is usually solved by repetitively inserting known bits or words into the data sequence and is referred to as a data-aided (DA) approach. Over the last two decades, extensive research has been carried out to design non-data-aided timing recovery and carrier synchronization algorithms. Despite their importance and widespread use, most of the existing blind synchronization algorithms are derived in an ad hoc manner without optimally exploiting all of the available statistical information. In most cases their performance is evaluated by computer simulations; rigorous and complete performance analyses have not yet been performed. It turns out that a theoretically oriented approach is indispensable for studying the limits or bounds of algorithms and for comparing different methods. The main goal of this dissertation is to develop several novel signal processing frameworks that make it possible to analyze and improve the performance of existing timing recovery and carrier synchronization algorithms. As byproducts of this analysis, unified methods are developed for designing new computationally and statistically efficient (i.e., minimum-variance) blind feedforward synchronizers. Our work consists of three tightly coupled research directions. First, a general and unified framework is proposed to develop optimal nonlinear least-squares (NLS) carrier recovery schemes for burst transmissions. A family of blind constellation-dependent optimal "matched" NLS carrier estimators is proposed for synchronization of burst transmissions fully modulated by PSK and QAM constellations in additive white Gaussian noise channels. Second, a cyclostationary-statistics-based framework is proposed for designing computationally and statistically efficient, robust blind symbol timing recovery for time-selective flat-fading channels. Lastly, for the problem of frame synchronization, a simple and efficient data-aided approach is proposed for jointly estimating the frame boundary, the frequency-selective channel, and the carrier frequency offset.
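The classical fourth-power (Viterbi-Viterbi) phase estimator is one well-known member of the family of blind feedforward NDA carrier synchronizers the abstract discusses. The sketch below is an illustrative estimator for QPSK, not the dissertation's constellation-matched NLS estimator; all parameters are assumed for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative blind feedforward carrier-phase estimator for QPSK
# (the classical M-th power / Viterbi-Viterbi approach). It is one
# well-known example of the kind of NDA estimator discussed above,
# not the dissertation's constellation-matched NLS estimator.

M = 4                      # QPSK
N = 1000                   # burst length in symbols
true_phase = 0.3           # carrier phase offset (radians), assumed constant
snr_db = 15

symbols = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, N)))   # {1, j, -1, -j}
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
noise = noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = symbols * np.exp(1j * true_phase) + noise

# Raising to the M-th power strips the M-PSK modulation, leaving M * phase.
phase_est = np.angle(np.mean(r ** M)) / M
# The estimate is ambiguous modulo 2*pi/M; we resolve it toward the true
# value here only for display (a real receiver uses differential coding
# or pilot symbols to resolve the ambiguity).
candidates = phase_est + 2 * np.pi * np.arange(M) / M
phase_est = candidates[np.argmin(np.abs(candidates - true_phase))]

print(f"true phase: {true_phase:.4f} rad, estimate: {phase_est:.4f} rad")
```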
53

Performance understanding and tuning of iterative computation using profiling techniques

Ozarde, Sarang Anil 18 May 2010 (has links)
Most applications spend a significant amount of time in the iterative parts of a computation. They typically iterate over the same set of operations with different values, which either depend on the inputs or on values calculated in previous iterations. While loops capture some iterative behavior, in many cases such behavior is spread over the whole program, sometimes through recursion. Understanding the iterative behavior of a computation can be very useful for fine-tuning it. In this thesis, we present a profiling-based framework for understanding and improving the performance of iterative computation. We capture the state of iterations in two aspects: (1) Algorithmic State and (2) Program State. We demonstrate the applicability of our framework for capturing algorithmic state by applying it to SAT solvers, and for capturing program state by applying it to a variety of benchmarks exhibiting completely parallelizable loops. Further, we show that such a performance characterization can be successfully used to improve the performance of the underlying application. Many high-performance combinatorial optimization applications involve SAT solving. A variety of SAT solvers have been developed that employ different data structures and different propagation methods for converging on a satisfying assignment. The performance debugging and tuning of SAT solvers for a given domain is an important problem encountered in practice. Unfortunately, little work has been done to quantify the iterative efficiency of SAT solvers. In this work, we develop quantifiable measures of the convergence efficiency of SAT solvers. Here, we capture the algorithmic state of the application by tracking the assignment of variables at each iteration. A compact representation of the profile data is developed to track the rate of progress and convergence. The novelty of this approach is that it is independent of the specific strategies used in individual solvers, yet it gives key insights into the "progress" and "convergence behavior" of the solver in terms of the specific implementation at hand. An analysis tool is written to interpret the profile data and extract the following metrics: average convergence rate, iteration efficiency, and variable stabilization. Finally, using this system we present a study of four well-known SAT solvers, comparing their iterative efficiency on random as well as industrial benchmarks. Using the framework, iterative inefficiencies that lead to slow convergence are identified, and we show how to fine-tune the solvers by adapting the key steps. We also show that a similar profile data representation can easily be applied to loops in general to capture their program state. One of the key attributes of the program state inside loops is their branch behavior. We demonstrate the applicability of the framework by profiling completely parallelizable loops (with no cross-iteration dependence) and storing the branching behavior of each iteration. The branch behavior across a group of iterations is important in devising thread warps from parallel loops for efficient execution on GPUs. We show how some loops can be effectively parallelized on GPUs using this information.
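As a rough illustration of the "variable stabilization" metric mentioned above, the sketch below takes a per-iteration trace of variable assignments and finds the iteration after which each variable never changes again. The trace format and the metric definition are assumptions for demonstration, not the thesis's actual representation:

```python
# A hedged sketch of the "variable stabilization" idea: given a
# per-iteration trace of variable assignments from a SAT solver, find the
# iteration after which each variable never changes again and summarize
# how quickly the assignment converges.

def stabilization_profile(trace):
    """trace: list of dicts, one per iteration, mapping variable -> bool."""
    last_change = {v: 0 for v in trace[0]}
    for i in range(1, len(trace)):
        for v, val in trace[i].items():
            if trace[i - 1].get(v) != val:
                last_change[v] = i
    return last_change

# Toy trace: variable 'c' flips late, so it stabilizes last.
trace = [
    {"a": True,  "b": False, "c": False},
    {"a": True,  "b": True,  "c": True},
    {"a": True,  "b": True,  "c": False},
    {"a": True,  "b": True,  "c": False},
]

profile = stabilization_profile(trace)
avg_stab = sum(profile.values()) / len(profile)
print(profile)                     # {'a': 0, 'b': 1, 'c': 2}
print(f"average stabilization iteration: {avg_stab:.2f} of {len(trace)}")
```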
54

Feasibility Study of a SLA Driven Transmission Service

Sun, Zhichao January 2015 (has links)
Network-based services are expanding at an unprecedented speed, and as users depend on them ever more heavily, performance issues are becoming increasingly important. A Service Level Agreement (SLA) is a contract negotiated between a service provider and a customer that specifies service quality, priority, responsibility, and so on. In this thesis, we design and implement a prototype of an SLA-driven transmission service that can deliver a file from one host to another using a combination of different transport protocols. The proposed service measures network conditions and, based on these measurements and the user's requirements, dynamically evaluates whether it can meet the user's SLA. Once a transmission has been accepted, the service uses this information to adjust its usage of the different transport-layer protocols in order to meet the agreed SLA. The thesis work is based on network theory and experimental results. We investigate how the SLA-driven transmission service is affected by various factors, including user requirements, network conditions, and service performance. We design and implement an evaluation model for network performance that reveals how performance is influenced by network metrics such as round-trip time (RTT), throughput, and packet loss rate (PLR). We implement the transmission service on a real test bed, a controllable environment in which we can alter the network metrics and the measuring frequency of our evaluation model; we then use the model to evaluate these changes and improve the performance of the transmission service. We also propose a method for calculating the cost of the service and, finally, assess the feasibility of the SLA-driven transmission service as a whole. In the experiments, we measure how the delivery time and packet loss of the transmission service vary with the RTT and PLR of the network, and we analyze the performance of the service when it uses TCP, UDP, and SCTP separately. A suitable measuring frequency, and the cost of using the transmission service at that frequency, are also identified. Statistical analysis of the experimental results shows that such an SLA-driven transmission service is feasible and improves performance with respect to the user's requirements. In addition, we offer some suggestions and directions for future work on the transmission service.
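A minimal sketch of the SLA admission decision described above might look as follows, assuming a crude transfer-time model based on measured RTT, throughput, and PLR; the model, class names, and thresholds are illustrative, not the thesis's evaluation model:

```python
from dataclasses import dataclass

# Hedged sketch of an SLA admission check: given measured network
# conditions and a user's requirement (deliver a file of a given size
# within a deadline, with bounded loss), estimate whether the agreement
# can be met. The model and thresholds are illustrative assumptions.

@dataclass
class NetworkConditions:
    rtt_ms: float           # measured round-trip time
    throughput_mbps: float  # measured available throughput
    plr: float              # measured packet loss rate, 0..1

@dataclass
class UserSLA:
    file_size_mb: float
    deadline_s: float
    max_loss: float         # tolerated residual loss (0 => reliable delivery)

def can_accept(net: NetworkConditions, sla: UserSLA) -> bool:
    # Crude transfer-time estimate: size / effective throughput, where loss
    # (and the retransmissions it implies) shrinks the usable rate, plus one
    # RTT of connection setup.
    effective_mbps = net.throughput_mbps * (1 - net.plr)
    transfer_s = (sla.file_size_mb * 8) / effective_mbps + net.rtt_ms / 1000
    # max_loss == 0 means a reliable protocol (TCP/SCTP) repairs all loss.
    loss_ok = sla.max_loss == 0 or net.plr <= sla.max_loss
    return transfer_s <= sla.deadline_s and loss_ok

net = NetworkConditions(rtt_ms=80, throughput_mbps=20, plr=0.01)
sla = UserSLA(file_size_mb=100, deadline_s=60, max_loss=0)
print("accept transmission:", can_accept(net, sla))
```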
55

A multi-dimensional scale for repositioning public park and recreation services

Kaczynski, Andrew Thomas 30 September 2004 (has links)
The goal of this study was to develop an instrument to assist public park and recreation agencies in successfully repositioning their offerings in order to garner increased allocations of tax dollars. To achieve this, an agency must be perceived as providing public benefits, those that accrue to all members of its constituency. The scale sought to identify the importance of various community issues and perceptions of the agency's performance in contributing to those issues. A valid and reliable 36-item instrument was developed that encompasses nine distinct dimensions: Preventing Youth Crime, Environmental Stewardship, Enhancing Real Estate Values, Attracting and Retaining Businesses, Attracting and Retaining Retirees, Improving Community Health, Stimulating Urban Rejuvenation, Attracting Tourists, and Addressing the Needs of People who are Underemployed. These dimensions represent community issues that a park and recreation agency can contribute towards, and can therefore use as a basis for its repositioning efforts. Using a screening process by expert judges, a pretest sample of undergraduate students, and a sample of municipal residents, each of the importance and performance dimensions in the scale was judged to possess content validity, internal consistency, construct validity, and split-half reliability. A shortened version of the instrument was also demonstrated to possess internal consistency and construct validity. In a practical application, the scale proved useful in identifying repositioning options for the park and recreation department, both in isolation and relative to a public agency 'competitor'. Limitations of the study and suggestions for future research are offered.
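For readers unfamiliar with the internal-consistency checks mentioned above, the sketch below computes Cronbach's alpha for the items of one hypothetical scale dimension; the data are fabricated for illustration and do not reproduce the study's items or sample:

```python
import numpy as np

# Hedged sketch of an internal-consistency check: Cronbach's alpha for the
# items of one scale dimension, computed from a respondents x items matrix.
# The data below are fabricated; the thesis's items are not reproduced here.

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items of one dimension."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(3.5, 1.0, size=(200, 1))             # shared construct
items = latent + rng.normal(0, 0.5, size=(200, 4))       # 4 correlated items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")   # high, since items covary
```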
56

System Performance Analysis Considering Human-related Factors

Kiassat, Ashkan Corey 08 August 2013 (has links)
All individuals are unique in their characteristics. As such, their positive and negative contributions to system performance differ. In any system that is not fully automated, the effect of the human participants has to be considered when one is interested in optimizing the performance of the system. Humans are intelligent, adaptive, and learn over time; at the same time, humans are error-prone. Therefore, in situations where human and hardware have to interact and complement each other, the system faces both advantages and disadvantages from the role the humans play. It is this role and its effect on performance that is the focus of this dissertation. When analyzing the role of people, one can focus on providing resources that enable the human participants to produce more; alternatively, one can strive to make errors less frequent and less impactful. The focus of the analysis in this dissertation is the latter. Our analysis can be categorized into two parts. In the first part, we consider a short-term planning horizon and focus directly on failure risk analysis. What can be done about the risk stemming from the human participant? Any proactive step that can be taken will have the advantage of reducing risk, but will also have a cost associated with it. We develop a cost-benefit analysis to enable a decision-maker to choose the optimal course of action for revenue maximization, and we proceed to use this model to calculate the minimum acceptable level of risk, and the associated skill level, needed to ensure system profitability. The models developed are applied to a case study from a manufacturing company in Ontario, Canada. In the second part of our analysis, we consider a longer planning horizon and focus on output maximization, taking into account human learning and its effect on output. In the first model we develop, we use learning curves and production forecasting models to assign operators optimally, in order to maximize system output. In the second model, we combine failure risk analysis with learning curves to forecast the total production of operators. As in the first part of our analysis, we apply the output maximization models to the aforementioned case study to better demonstrate the concepts.
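As one concrete (and hypothetical) reading of the learning-curve forecasting idea, the sketch below uses Wright's classic learning-curve model to forecast per-shift output for two operators; the parameters and the model choice are illustrative assumptions, not the dissertation's models:

```python
import math

# Hedged sketch of learning-curve production forecasting: under Wright's
# classic model, unit processing time falls by a fixed percentage each time
# cumulative output doubles. Parameters here are illustrative assumptions.

def unit_time(first_unit_time: float, unit_index: int, learning_rate: float) -> float:
    """Wright's model: time for the n-th unit = T1 * n^(log2(learning_rate))."""
    b = math.log2(learning_rate)   # e.g. 0.85 => 15% faster per doubling
    return first_unit_time * unit_index ** b

def units_in_shift(first_unit_time: float, learning_rate: float, shift_minutes: float) -> int:
    """Forecast how many units an operator completes in one shift."""
    elapsed, n = 0.0, 0
    while True:
        t = unit_time(first_unit_time, n + 1, learning_rate)
        if elapsed + t > shift_minutes:
            return n
        elapsed += t
        n += 1

# Two hypothetical operators: a fast starter vs. a fast learner.
print("operator A:", units_in_shift(first_unit_time=10.0, learning_rate=0.95, shift_minutes=480))
print("operator B:", units_in_shift(first_unit_time=12.0, learning_rate=0.80, shift_minutes=480))
```

An assignment model of the kind the abstract describes would then allocate operators across stations so as to maximize the summed forecast output.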
58

Analysis and optimization of MAC protocols for wireless networks

Shu, Feng Unknown Date (has links) (PDF)
Medium access control (MAC) plays a vital role in satisfying the varied quality-of-service (QoS) requirements of wireless networks. Many MAC solutions have been proposed for these networks, and performance evaluation, optimization, and enhancement of these MAC protocols are needed. In this thesis, we focus on the analysis and optimization of MAC protocols for some recently emerged wireless technologies targeted at low-rate and multimedia applications.
59

Performance estimation of wireless networks using traffic generation and monitoring on a mobile device.

Tiemeni, Ghislaine Livie Ngangom January 2015 (has links)
Masters of Science / In this study, a traffic generator software package named MTGawn was developed to run packet generation and evaluation on a mobile device. The traffic-generating software is able to simulate voice-over-IP calls, as well as user datagram protocol (UDP) and transmission control protocol (TCP) traffic, between mobile phones over a wireless network, and to analyse network data much as computer-based network monitoring tools such as Iperf and D-ITG do, while being self-contained on a mobile device. This entailed porting a 'stripped down' version of a packet generation and monitoring system, with functionality as found in open source tools, to a mobile platform. The mobile system is able to generate and monitor traffic over any network interface on a mobile device and to calculate the standard quality-of-service metrics. The tool was compared with a computer-based tool, the distributed Internet traffic generator (D-ITG), in the same environment, and in most cases MTGawn reported results comparable to D-ITG's. The main motivation for this software was to ease feasibility testing and monitoring in the field by using an affordable and rechargeable technology such as a mobile device. The system was tested in a testbed and can be used in rural areas where a mobile device is more suitable than a PC or laptop. The main challenge was to port and adapt an open source packet generator to the Android platform and to provide a suitable touchscreen interface for the tool. ACM Categories and Subject Descriptors: B.8 [Performance and Reliability]; B.8.2 [Performance Analysis and Design Aids]; C.4 [Performance of Systems]: measurement techniques, performance attributes.
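A stripped-down illustration of what such a tool measures: the sketch below sends sequence-numbered, timestamped UDP packets to a local sink that derives loss, jitter, and throughput. The ports, rates, and packet layout are assumptions for demonstration and do not reflect MTGawn's actual design:

```python
import socket
import struct
import threading
import time

# Hedged sketch of UDP traffic generation and QoS measurement: a generator
# sends sequence-numbered, timestamped packets; a sink derives loss, jitter
# (mean deviation of inter-arrival gaps), and throughput. All parameters
# and the packet layout are illustrative assumptions, not MTGawn's format.

ADDR = ("127.0.0.1", 9999)
COUNT, PAYLOAD, INTERVAL = 200, 512, 0.005   # 200 packets, 512 B, 5 ms apart

def sink():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(ADDR)
    s.settimeout(2.0)                         # stop after traffic goes quiet
    seen, arrivals = set(), []
    try:
        while True:
            data, _ = s.recvfrom(2048)
            seq, _ts = struct.unpack("!Id", data[:12])
            seen.add(seq)
            arrivals.append(time.monotonic())
    except socket.timeout:
        pass
    if len(arrivals) >= 2:
        gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
        mean_gap = sum(gaps) / len(gaps)
        jitter = sum(abs(g - mean_gap) for g in gaps) / len(gaps)
        rate = len(seen) * PAYLOAD * 8 / (arrivals[-1] - arrivals[0]) / 1e6
        print(f"loss: {1 - len(seen)/COUNT:.1%}, "
              f"jitter: {jitter*1000:.2f} ms, throughput: {rate:.2f} Mbit/s")

def generate():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(COUNT):
        header = struct.pack("!Id", seq, time.time())
        s.sendto(header + b"\x00" * (PAYLOAD - len(header)), ADDR)
        time.sleep(INTERVAL)

if __name__ == "__main__":
    t = threading.Thread(target=sink)
    t.start()
    time.sleep(0.1)   # let the sink bind before generating traffic
    generate()
    t.join()
```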
60

TFPS: a trace preprocessing system to aid in parallel program visualization

Stringhini, Denise January 1997 (has links)
This study presents the design and development of a tool for the logical visualization of parallel program execution, TFPS (Trace File Preprocessor System), whose goal is the performance analysis of such programs. The project is based on preprocessing the trace files produced by program execution. The basic idea is to make use of the information provided by monitoring: this information, which in general is used only to drive post-mortem animation of the programs, is here also used to build the visualization windows. Thus, the preprocessor and the construction of the visualization windows are both described. The former is mainly responsible for reading and analyzing the information contained in the trace file and for generating an output file with all the information needed to build the windows. The windows themselves were designed around the kind of information that can be obtained from a trace file, which made it possible to keep the content of the visualization windows as close as possible to the parallel program under analysis. To demonstrate this approach, prototypes of both the preprocessor and the visualization tool were built; both prototypes are described in this study.
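The preprocessing step described above can be illustrated with a toy example: parse a flat event trace (timestamp, process, event) and emit, per process, the intervals a timeline view would draw. The trace format is an assumption for illustration; TFPS's actual file layout is not reproduced here:

```python
import csv
from collections import defaultdict
from io import StringIO

# Hedged sketch of trace preprocessing for visualization: turn a flat event
# trace into per-process activity intervals suitable for a timeline view.
# The CSV format and event names are illustrative assumptions.

RAW_TRACE = """\
0.00,p0,compute_begin
0.00,p1,compute_begin
1.20,p0,send
1.25,p1,recv
2.00,p0,compute_end
2.40,p1,compute_end
"""

def preprocess(trace_text):
    timeline = defaultdict(list)   # process -> [(start, end, activity)]
    open_interval = {}             # process -> (start, activity)
    for ts, proc, event in csv.reader(StringIO(trace_text)):
        ts = float(ts)
        if event.endswith("_begin"):
            open_interval[proc] = (ts, event[:-len("_begin")])
        elif event.endswith("_end"):
            start, activity = open_interval.pop(proc)
            timeline[proc].append((start, ts, activity))
        else:
            timeline[proc].append((ts, ts, event))   # instantaneous event
    return timeline

for proc, intervals in sorted(preprocess(RAW_TRACE).items()):
    print(proc, intervals)
```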
