1 |
TCP performance study and enhancements within wireless multi-hop ad hoc network environments / Analyse et amélioration conjointe de la consommation d'énergie et des débits de TCP dans les réseaux ad hoc sans fil / Seddik, Alaa, 30 March 2009 (has links)
Wireless ad hoc networks differ from wired networks in the multitude of packet-loss situations they are subjected to. This is due to the characteristics of the wireless channel, which can obstruct the proper reception of data packets at the destination. In some cases, these vulnerabilities of the wireless channel can result in a complete link failure. Although link failure has a low probability in wired networks, where it is generally attributable to the physical state of intermediate equipment, it is rather common in wireless networks. The volatility of the communication channel is a typical problem with wireless links that does not arise with wired cables. TCP is a transport protocol that aims to ensure high reliability and guarantee the reception of data packets. However, TCP was designed for wired networks, where congestion is the main cause of packet loss. The other types of packet loss encountered in wireless networks are therefore prone to misinterpretation by TCP, leading to degraded TCP performance within the network. To overcome this performance limitation of TCP within ad hoc networks, the aim of this thesis is twofold. First, a complete performance study of TCP over ad hoc networks is carried out. This evaluation concerns two performance metrics: the achievable throughput and the energy consumption of TCP within ad hoc networks. The study identifies where TCP can be improved to make it usable in ad hoc networks. Second, we propose a new TCP variant, TCP-WELCOME, which optimizes TCP performance in ad hoc networks through its ability to distinguish among, and deal efficiently with, the different packet-loss situations that arise in such networks.
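The abstract does not spell out how TCP-WELCOME tells the loss situations apart, so the following is only a minimal sketch of such a classifier, assuming RTT-trend and timeout-pattern heuristics; all names, thresholds, and the reaction policy are illustrative, not the thesis's rules.

```python
# Hypothetical sketch of a loss classifier in the spirit of TCP-WELCOME.
# Thresholds and signals are illustrative assumptions, not the thesis's rules.

from enum import Enum
from statistics import mean

class LossCause(Enum):
    CONGESTION = "congestion"        # buffers filling along the path
    LINK_FAILURE = "link_failure"    # route broke; retransmissions time out
    CHANNEL_ERROR = "channel_error"  # random radio losses, path still up

def classify_loss(rtt_samples, consecutive_timeouts):
    """Guess why a packet was lost from recent transport-level signals."""
    if consecutive_timeouts >= 3:
        # Repeated RTOs with no ACKs at all usually mean the route is gone.
        return LossCause.LINK_FAILURE
    recent, older = rtt_samples[-5:], rtt_samples[:-5] or rtt_samples
    if mean(recent) > 1.25 * mean(older):
        # Steadily growing RTT suggests queues building up: congestion.
        return LossCause.CONGESTION
    # Stable RTT with isolated losses points to radio channel errors.
    return LossCause.CHANNEL_ERROR

def react(cause, cwnd, rto):
    """Adapt cwnd/RTO differently per cause (illustrative policy only)."""
    if cause is LossCause.CONGESTION:
        return cwnd / 2, rto     # classic multiplicative decrease
    if cause is LossCause.LINK_FAILURE:
        return cwnd, rto * 2     # wait out route recovery; don't shrink cwnd
    return cwnd, rto             # channel error: just retransmit
```

The point of the variant, per the abstract, is exactly this separation: only congestion losses should trigger the classic window reduction.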
|
2 |
Resource Allocation and Performance Optimization in Wireless Networks / Guo, Wenxuan, 26 April 2011 (has links)
As wireless networks permeate ever more aspects of our lives, they remain seriously constrained by limited network resources in terms of time, frequency, and power. To enhance the performance of wireless networks, it is of great importance to allocate resources intelligently based on the current network scenario. The focus of this dissertation is radio resource management algorithms that optimize performance for different types of wireless networks.

First, we investigate a joint optimization problem of relay node placement and route assignment for wireless sensor networks. A heuristic binary integer programming algorithm is proposed to maximize the total number of information packets received at the base station during the network lifetime. We then present an optimization algorithm based on binary integer programming for relay node assignment given the current node locations. Subsequently, a heuristic algorithm iteratively moves the relay nodes to locations that better serve their associated edge nodes.

Second, since the traditional goal of maximizing total throughput can result in unbalanced use of network resources, we study a joint power control and channel assignment problem within a wireless mesh network in which the minimum capacity over all links is maximized. This is essentially a fairness problem. We develop an upper bound on the objective by relaxing the integer variables and linearizing. We then put forward a heuristic that approximates the optimal solution by repeatedly tightening a constraint and solving a binary integer program to increase the minimum link capacity. Simulation results show that the solutions obtained by this algorithm are very close to the upper bounds obtained via relaxation, suggesting that they are near-optimal (a schematic form of this max-min program is sketched after this abstract).

Third, we study topology control in disaster-area wireless networks, facilitating communication among mobile nodes by dynamically deploying a minimum number of relay nodes. We first put forward a novel mobility model that describes the movement of first responders within a large disaster area. We then formulate the square disk cover problem and propose three algorithms to solve it: a two-vertex square covering algorithm, a circle covering algorithm, and a binary integer programming algorithm.

Fourth, we explore the joint power control and channel assignment problem of maximizing cognitive radio network throughput. An overlaid cognitive radio network (CRN) is assumed to coexist with a primary network. We model opportunistic spectrum access for the CRN and formulate the cross-layer optimization problem under the interference constraints imposed by the existing primary network. Cross-layer optimization for CRNs is often implemented in a centralized manner to avoid co-channel interference; instead, we propose a distributed greedy algorithm that seeks higher network throughput by coordinating channel assignment with local channel usage information, greatly reducing the computational complexity.

Finally, we study the network throughput optimization problem for a multi-hop wireless network with interference alignment at the physical layer. We first transform the problem of dividing a set of links into multiple maximal concurrent link sets into the problem of finding the maximal cliques of a graph. Each concurrent link set is then further divided into one or several interference channel networks, on which interference alignment is implemented to guarantee simultaneous transmission. The network throughput optimization problem is formulated as a non-convex nonlinear program, which is NP-hard in general, so we resort to a branch-and-bound framework that guarantees an achievable performance bound.
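The max-min fairness problem in the second study can be written schematically as below; the capacity expressions and the exact constraint set are simplifying assumptions, since the abstract does not state them.

```latex
% Schematic max-min (fairness) formulation for joint power control and
% channel assignment; the precise capacity model is in the dissertation.
\begin{align*}
\max_{\lambda,\;\mathbf{p},\;\mathbf{a}} \quad & \lambda \\
\text{s.t.} \quad
& c_\ell(\mathbf{p},\mathbf{a}) \ge \lambda
    && \forall \ell \in \mathcal{L} \quad \text{(every link gets at least } \lambda\text{)} \\
& \sum_{k} a_{\ell k} = 1, \quad a_{\ell k} \in \{0,1\}
    && \forall \ell \quad \text{(one channel per link)} \\
& 0 \le p_\ell \le p_{\max}
    && \forall \ell \quad \text{(power budget)}
\end{align*}
% Relaxing a_{lk} to [0,1] and linearizing the capacity terms gives the
% upper bound; the heuristic then tightens lambda and solves a binary
% integer program at each step.
```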
|
3 |
Analysis and parameter prediction of compiler transformation for graphics processors / Magni, Alberto, January 2016 (has links)
In the last decade, graphics processors (GPUs) have been extensively used to solve computationally intensive problems. A variety of GPU architectures by different hardware manufacturers have shipped within a few years of each other, and OpenCL has been introduced as the standard cross-vendor programming framework for GPU computing. Writing and optimising OpenCL applications is a challenging task: the programmer has to take care of several low-level details. This is even harder when the goal is to improve performance on a wide range of devices, since OpenCL does not guarantee performance portability. In this thesis we focus on the analysis and portability of compiler optimisations. We describe the implementation of a portable compiler transformation, thread-coarsening, which increases the amount of work carried out by each thread running on the GPU, with the goal of reducing the number of redundant instructions executed by the parallel application. The first contribution is a technique to analyse the performance improvements and degradations given by the compiler transformation: we study the changes in hardware performance counters when applying coarsening, and in this way identify the root causes of execution time variations due to coarsening. As a second contribution, we study the relative performance of coarsening over multiple input sizes. We show that the speedups given by coarsening are stable for problem sizes larger than a threshold that we call the saturation point, and we exploit the existence of the saturation point to speed up iterative compilation. The last contribution of the work is a machine learning technique that automatically selects a coarsening configuration that improves performance. The technique is based on an iterative model built using a neural network; the network is trained once per GPU model and used for several programs. To prove the flexibility of our techniques, all our experiments have been deployed on multiple GPU models by different vendors.
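As a rough illustration of the thread-coarsening transformation, the sketch below uses Python loops to stand in for OpenCL work-items; the factor/stride scheme shown is the standard textbook formulation of the transformation, not code from the thesis.

```python
# Illustrative model of thread coarsening. Each loop iteration plays the role
# of one OpenCL work-item; names and the coarsening scheme are assumptions.

def original_kernel(a, b, out):
    # One work-item per element: out[i] = a[i] + b[i].
    for gid in range(len(out)):          # each iteration ~ one GPU thread
        out[gid] = a[gid] + b[gid]

def coarsened_kernel(a, b, out, factor=4):
    # Each work-item now produces `factor` elements, so the launch needs
    # len(out) // factor threads; instructions that were redundant across
    # the merged threads (index math, loads of shared values) run once.
    n_threads = len(out) // factor
    for gid in range(n_threads):
        base = gid * factor              # contiguous-stride coarsening
        for k in range(factor):
            out[base + k] = a[base + k] + b[base + k]

a, b = list(range(8)), list(range(8))
out1, out2 = [0] * 8, [0] * 8
original_kernel(a, b, out1)
coarsened_kernel(a, b, out2, factor=4)
assert out1 == out2                      # same result, fewer "threads"
```

The coarsening factor (and, on a real GPU, the stride) is exactly the kind of configuration the thesis's neural network model selects per program and device.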
|
4 |
Mental readiness in rehabilitation (MR2): simple techniques for mental health integration / Stone, Erin J., 04 January 2024 (has links)
Mental health (MH) concerns are becoming more prevalent, though the current United States adult population remains more inclined to seek care only for physical conditions. Clients with physical dysfunction are especially likely to experience an exacerbation of MH concerns. The skilled, holistic practice scope of Occupational Therapy Practitioners (OTPs) makes them well suited to address both physical and MH needs. This program, Mental Readiness in Rehabilitation (MR2), provides holistic care-planning education to OTPs. MR2 is a one-hour education program that presents the background on MH in physical rehabilitation and offers practical skills training on the Mental Readiness Screening Tool and the corresponding MH toolkit. The program educates OTPs in convenient, evidence-based skills for embedding MH interventions in more holistic, comprehensive treatment plans. Plans for program implementation, funding, and evaluation of the MR2 program are included, as well as the intent to disseminate program findings to advance the base of evidence for OTPs as qualified mental health practitioners.
|
5 |
Two different perspectives on capacitive deionization process: performance optimization and flow visualization / Demirer, Onur Nihat, 19 November 2013 (has links)
In this thesis, two different experimental approaches to the capacitive deionization (CDI) process are presented. In the first approach, transient system characteristics were analyzed to find three different operating points: the first based on minimum outlet concentration, the second on maximum average adsorption rate, and the third on maximum adsorption efficiency. These three operating points were compared in long-term desalination tests. In addition, the effects of inlet stream salinity and CDI system size were characterized to assess the feasibility of a commercial CDI system operating at brackish-water salinity levels. In the second approach, the physical phenomena occurring inside a capacitive deionization system were studied by laser-induced fluorescence visualization of a "pseudo-porous" CDI microstructure. A model CDI cell was fabricated on a silicon-on-insulator (SOI) substrate, and charged fluorophores were used to visualize the simultaneous electromigration of oppositely charged ions and to obtain in situ concentration measurements.
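A minimal sketch of how the three operating points could be extracted from one transient charging curve follows, using common CDI metric definitions (removal rate from the outlet concentration deficit, charge efficiency as salt removed per coulomb); the thesis's exact definitions may differ.

```python
# Generic CDI transient analysis; definitions are standard-practice
# assumptions, not necessarily those used in the thesis.

import numpy as np

def operating_points(t, c_out, current, c_in, Q):
    """t [s], c_out/c_in [mol/m^3], current [A], Q [m^3/s]."""
    removal_rate = Q * (c_in - c_out)                    # mol/s at each instant
    removed = np.cumsum(removal_rate[:-1] * np.diff(t))  # mol removed up to t
    charge = np.cumsum(current[:-1] * np.diff(t))        # coulombs passed
    t_mid = t[1:]
    avg_rate = removed / t_mid                           # mean adsorption rate
    efficiency = removed / np.maximum(charge, 1e-12)     # mol removed per coulomb
    return {
        "min_outlet_conc": t[np.argmin(c_out)],
        "max_avg_adsorption_rate": t_mid[np.argmax(avg_rate)],
        "max_adsorption_efficiency": t_mid[np.argmax(efficiency)],
    }
```

Each entry of the returned dict is a candidate cycle duration, mirroring the three operating points the thesis compares in long-term desalination tests.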
|
6 |
DEVELOPMENT OF AN AUTOMATION AND CONTROL SYSTEM FOR A COAL SPIRAL / Zhang, Baojie, 01 December 2011 (has links)
Coal spirals are widely used in coal preparation plants around the world to clean fine coal, typically in the 1 x 0.15 mm size range. Despite their popularity and the trend toward increased automation in modern coal preparation plants, adjustments to the critical process variable for coal spirals, i.e., product splitter position, continue to be done manually. Since spiral feed in a plant tends to fluctuate on a regular basis, timely manual adjustment of splitter position in tens or hundreds of spirals operating in a plant is nearly impossible. As a result, the clean coal yield from a spiral and also the overall plant suffers on a regular basis. The main goal of this study was to develop a suitable sensor and control system to adjust the product splitter position of a full-scale spiral. Some of the basic properties of coal slurry were thoroughly investigated for their on-line measurability and for their correlations with the density of the constituent solid particles. After experimenting with electrical capacitance- and conductivity- (i.e., reciprocal of resistivity) based sensing techniques, a conductivity-based tube sensor was developed for measuring density of solid particles in the spiral trough. Two sensors were used to establish a density gradient in the critical region across the spiral trough at the discharge end. Based on this continuously monitored density gradient, a PIC24 microcontroller was programmed to send a signal to a DC gear motor that would move the splitter arm in the appropriate direction when sufficient variation in conductivity was detected. Currently, a cycle time of 5 minutes is used for the spiral control system; however, in a commercial application, the cycle time could be lengthened to 30 or 60 minutes. The automation system has been validated by examining the performance of a full-scale spiral while deliberately changing factors like feed solid content, feed washability characteristics, and feed slurry ionic concentration. With a full-scale compound spiral programmed to achieve a specific gravity of separation at 1.65 by an automatic adjustment of the splitter position, the actual D50 values achieved for two separate tests were 1.64 and 1.73. The close proximity of target and actual D50 values is indicative of the effectiveness of the developed system. The next step in near-term commercialization of this proprietary spiral control system will be a longer-term (several months) in-plant demonstration.
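A hypothetical Python paraphrase of the control loop described above is sketched below; the real implementation runs on a PIC24, and the thresholds, sensor readings, and motor interface here are illustrative assumptions, not the actual firmware.

```python
# Illustrative splitter control cycle; names and values are assumptions.

DEADBAND = 0.05          # ignore gradient deviations smaller than this
CYCLE_TIME_S = 5 * 60    # the 5-minute control cycle stated above

def read_conductivity(sensor_id):
    # Stand-in for sampling a tube sensor; returns fixed dummy values here.
    return 1.0 if sensor_id == 0 else 0.3

def move_splitter(direction):
    # Stand-in for the DC gear motor driver.
    print(f"gear motor: step splitter {direction}")

def control_cycle(target_gradient):
    inner = read_conductivity(sensor_id=0)   # sensor nearer the trough centre
    outer = read_conductivity(sensor_id=1)   # sensor nearer the outer wall
    gradient = inner - outer                 # proxy for the density gradient
    error = gradient - target_gradient
    if abs(error) > DEADBAND:
        # Step the splitter toward restoring the gradient that corresponds
        # to the target specific gravity of separation (e.g., 1.65).
        move_splitter("inward" if error > 0 else "outward")

control_cycle(target_gradient=0.8)   # in service: repeat every CYCLE_TIME_S
```

The deadband plays the role of the "sufficient variation in conductivity" test: small fluctuations in feed are ignored, and the splitter only moves when the density gradient drifts meaningfully from its setpoint.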
|
7 |
Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs / Abdelfattah, Ahmad, 15 January 2015 (has links)
High performance computing (HPC) platforms are evolving toward more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphics Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming follows a different model than CPU programming, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in compute-intensive and massively parallel applications with regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may hit bottlenecks due to memory bus latency. They can still take advantage of the GPU's compute capabilities, provided that an efficient architecture-aware design is achieved.

This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering, and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication (MVM) kernels as representative, and among the most important, memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, from large dense matrices to sparse matrices with dense block structures, while maintaining high performance. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures, and multi-GPU acceleration is proposed to scale performance across several devices. The performance experiments show improvements ranging from 10% up to more than a fourfold speedup against competitive GPU MVM approaches. Performance impacts on high-level numerical libraries and on a computational astronomy application are highlighted, since such memory-bound kernels often sit in the innermost levels of the software chain. The excellent performance obtained in this work has led to the adoption of the code in NVIDIA's widely distributed cuBLAS library.
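A generic autotuning harness of the kind such exposed tuning parameters enable is sketched below; this NumPy stand-in illustrates only the parameter search (timing candidate configurations and keeping the best), not the dissertation's GPU kernels, and the parameter name is an assumption.

```python
# Generic parameter-sweep autotuner; a CPU/NumPy stand-in for illustration.

import time
import numpy as np

def blocked_mvm(A, x, block_rows):
    """Matrix-vector multiply in row blocks (a crude model of blocking)."""
    n = A.shape[0]
    y = np.empty(n)
    for r in range(0, n, block_rows):
        y[r:r + block_rows] = A[r:r + block_rows] @ x
    return y

def autotune(A, x, candidates=(32, 64, 128, 256)):
    best, best_t = None, float("inf")
    for br in candidates:
        t0 = time.perf_counter()
        blocked_mvm(A, x, br)
        dt = time.perf_counter() - t0
        if dt < best_t:
            best, best_t = br, dt
    return best          # parameter to reuse for this matrix shape / device

A = np.random.rand(2048, 2048); x = np.random.rand(2048)
print("best block_rows:", autotune(A, x))
```

Re-running such a sweep per GPU architecture is how a fixed kernel design can maintain relative performance across devices, which is the role the dissertation assigns to its tuning parameters.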
|
8 |
CFD as a tool to optimize aeration tank design and operation / Karpinska, A.M., Bridgeman, John, 22 November 2017 (has links)
In a novel development on previous computational fluid dynamics studies, the work reported here used an Eulerian two-fluid model with the shear stress transport k–ω turbulence closure model and bubble interaction models to simulate aeration tank performance at full scale and to identify process performance issues resulting from design parameters and operating conditions. The current operating scenario was found to produce a fully developed spiral flow. Reducing the airflow rates to the average and minimum design values led to a deterioration of the mixing conditions and the formation of extended unaerated fluid regions. The influence of bubble-induced mixing on reactor performance was further assessed via simulations of the residence time distribution of the fluid. Internal flow recirculation ensured long contact times between the phases; however, hindered axial mixing and the presence of dead zones were also identified. Finally, two optimization schemes based on modified design and operating scenarios were evaluated. Adjusting the airflow distribution between the control zones led to improved mixing and a 20% improvement in the mass transfer coefficient. Upgrading the diffuser grid was found to be an expensive and ineffective solution, worsening the mixing conditions and yielding the lowest mass transfer coefficient of the optimization schemes studied. / College of Engineering and Physical Sciences, University of Birmingham, UK
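For reference, the standard post-processing that turns an outlet tracer curve into residence-time statistics is sketched below; this is a generic textbook calculation, not the authors' code.

```python
# Residence-time-distribution moments from an outlet tracer curve.

import numpy as np

def rtd_moments(t, c_out):
    """Mean residence time and variance from outlet tracer concentration c(t)."""
    E = c_out / np.trapz(c_out, t)            # normalise: E(t) integrates to 1
    t_mean = np.trapz(t * E, t)               # first moment: mean residence time
    var = np.trapz((t - t_mean) ** 2 * E, t)  # second central moment (spread)
    return t_mean, var
```

A mean residence time well below the nominal value (tank volume divided by flow rate) is the classic signature of the dead zones identified in the simulations above.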
|
9 |
ROBUST ADAPTIVE BEAMFORMING WITH BROAD NULLS / He, Yudong; Yang, Xianghua; Zhou, Jie; Zhou, Banghua; Shao, Beibei, 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Robust adaptive beamforming using worst-case performance optimization has been developed in recent years. It performs well against array response errors, but it cannot reject strong interferences. In this paper, we propose a scheme for robust adaptive beamforming with broad nulls to reject strong interferences. We add a quadratic constraint to suppress the power of the array response over the spatial region of the interferences. The optimal weighting vector is then obtained by minimizing the power of the array output subject to quadratic constraints on the desired signal and the interferences, respectively. We derive the formulations for the optimization problem and solve it efficiently using a recursive Newton algorithm. Numerical examples compare the performance of robust adaptive beamforming with no null constraints, with sharp nulls, and with broad nulls; the results show the proposed scheme's powerful ability to reject strong interferences.
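The optimization described in the abstract can be written schematically as below; the worst-case signal-protection constraint follows the standard robust beamforming form, and the constraint levels ε and δ are design parameters, so the details are assumptions rather than the paper's exact formulation.

```latex
% Schematic broad-null beamforming design. R_x: sample covariance of the
% array output; a(theta): steering vector; Theta_i: angular sector occupied
% by the strong interferences.
\begin{align*}
\min_{\mathbf{w}} \quad & \mathbf{w}^{H}\mathbf{R}_x\,\mathbf{w}
    && \text{(minimize array output power)} \\
\text{s.t.} \quad
& \lvert \mathbf{w}^{H}\mathbf{a}(\theta_0) \rvert \ge 1
  \;\;\text{for all}\;\; \lVert \Delta\mathbf{a} \rVert \le \varepsilon
    && \text{(worst-case protection of the desired signal)} \\
& \mathbf{w}^{H}\mathbf{C}\,\mathbf{w} \le \delta, \qquad
  \mathbf{C} = \int_{\Theta_i} \mathbf{a}(\theta)\,\mathbf{a}^{H}(\theta)\,d\theta
    && \text{(broad null: response capped over the whole sector)}
\end{align*}
```

Integrating the steering vectors over the sector Θ_i, rather than constraining a single direction, is what widens the null enough to suppress strong interferences whose directions are uncertain or spread out.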
|
10 |
Harnessing Teamwork in Networks: Prediction, Optimization, and Explanation / January 2018 (has links)
abstract: Teams are increasingly indispensable to achievement in any organization. Despite organizations' substantial dependence on teams, fundamental knowledge about the conduct of team-enabled operations is lacking, especially at the social, cognitive, and information levels in relation to team performance and network dynamics. The goal of this dissertation is to create new instruments to predict, optimize, and explain team performance in the context of composite (social-cognitive-information) networks.

Understanding the dynamic mechanisms that drive the success of high-performing teams can provide key insights into building the best teams and hence lift the productivity and profitability of organizations. For this purpose, novel predictive models have been developed to forecast the long-term performance of teams (point prediction) as well as their pathway to impact (trajectory prediction). A joint predictive model that exploits the relationship between team-level and individual-level performance has also been proposed.

For an existing team, it is often desirable to optimize its performance by expanding the team with a new member who has certain expertise, or by finding a candidate to replace an existing under-performing member. I have developed graph-kernel-based performance optimization algorithms that consider both structural matching and skill matching to solve these enhancement scenarios (a toy sketch of the graph-kernel scoring idea follows this entry), and I have also worked toward real-time team optimization by leveraging reinforcement learning techniques.

With the increased complexity of the machine learning models for predicting and optimizing teams, it is critical to acquire a deeper understanding of model behavior. For this purpose, I have investigated explainable prediction, which provides the explanation behind a performance prediction, and explainable optimization, which gives the reasons why the model's recommendations are good candidates for certain enhancement scenarios. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
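A toy sketch of scoring a replacement candidate with a random-walk-style graph kernel combined with skill matching; the kernel choice, the linear combination, and all names are illustrative assumptions, not the dissertation's algorithm.

```python
# Toy candidate scoring: structural match via a damped-walk graph kernel
# on the Kronecker product graph, plus a dot-product skill match.

import numpy as np

def walk_kernel(A1, A2, decay=0.1, steps=4):
    """Count decay-weighted common walks of the two graphs."""
    Ax = np.kron(A1, A2)                       # product-graph adjacency
    total, power = 0.0, np.eye(Ax.shape[0])
    for k in range(1, steps + 1):
        power = power @ Ax
        total += (decay ** k) * power.sum()    # walks of length k, damped
    return total

def candidate_score(team_adj, cand_adj, team_skills, cand_skills, alpha=0.5):
    structural = walk_kernel(team_adj, cand_adj)   # structural matching
    skill = float(team_skills @ cand_skills)       # skill matching
    return alpha * structural + (1 - alpha) * skill

team = np.array([[0., 1.], [1., 0.]])      # toy 2-person team graph
cand = np.array([[0., 1.], [1., 0.]])      # toy candidate ego network
print(candidate_score(team, cand, np.array([1., 0.]), np.array([1., 0.])))
```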
|