481

Stakeholder perceptions of service quality improvement in Ethiopian public higher education institutions

Solomon Lemma Lodesso 12 1900 (has links)
The study identifies how different stakeholders perceive service quality improvement initiatives in public higher education institutions in Ethiopia. For this purpose, a mixed research methodology was employed. Secondary data were collected from a variety of literature, and primary data were collected from academic staff and final-year students at public higher education institutions using the SERVQUAL scale and focus group interviews. The collected data were analysed using both descriptive and inferential statistics. The research findings indicated that all dimensions of the service quality improvement initiatives were perceived by academic staff and students to be very poor. The reasons for these poor perceptions were: the high expectations of the stakeholders; the government's intention to expand; lack of adequate knowledge regarding the implementation of the BPR process; the lack of motivation by service providers; poor management and the lack of good governance by the universities; inexperienced workers; non-empowered and task-specific frontline employees; the low quality of the infrastructure; non-value-adding hierarchical structures and approval systems; ethical problems with some service providers; high staff turnover; and the lack of experienced staff. In addition, construction is underway at all new universities, and as a result there are problems such as the poor state of the dormitories, classes, bathrooms, recreation areas, lounges, TV rooms, sports fields and internet connectivity, while the libraries are not well stocked with books and periodicals either. The study recommends that institutions adopt standardised instruments to measure the status of service quality improvement and delivery periodically and to identify the areas with the highest perceived performance gap scores in order to redeploy some of the resources. It also points out that the service providers lack sufficient knowledge and skills concerning the implementation of BPR, so training is recommended in this regard. It is further recommended that, for effective implementation of the BPR process, the importance of providing guiding documents, continuous monitoring of activities and top management support be kept in mind. / Educational Leadership and Management / D. Ed. (Educational Management)
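As a rough illustration of how SERVQUAL gap scores of the kind recommended here are computed (perception rating minus expectation rating per dimension), consider the following sketch; the dimension names follow the standard SERVQUAL instrument, and the Likert ratings are hypothetical, not data from the study:

```python
# Sketch: computing SERVQUAL gap scores (perception minus expectation).
# Hypothetical data; the five classic SERVQUAL dimensions are assumed.
import statistics

DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def gap_scores(expectations, perceptions):
    """Return the mean perception-expectation gap per dimension.

    expectations/perceptions: dict mapping dimension -> list of 1-7 Likert ratings.
    A negative gap means perceived service falls short of expectations.
    """
    return {
        d: statistics.mean(perceptions[d]) - statistics.mean(expectations[d])
        for d in DIMENSIONS
    }

# Example: stakeholders rate expectations high and perceptions low -> negative gaps.
expect = {d: [6, 7, 6, 7] for d in DIMENSIONS}
perceive = {d: [3, 2, 4, 3] for d in DIMENSIONS}
gaps = gap_scores(expect, perceive)
worst = min(gaps, key=gaps.get)  # dimension with the largest shortfall
print(gaps, "->", worst)
```

The dimension with the most negative gap is where resources would be redeployed first, per the study's recommendation.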
482

Extrakce parametrů pro výzkum interpretačního výkonu / Extraction of parameters for the research of music performance

Laborová, Anna January 2021 (has links)
Different music performances of the same piece may differ significantly from each other. Not only the composer and the score define the listener's musical experience; the performance itself is an integral part of that experience. Four parameter classes can be used to describe a performance objectively: tempo and timing, loudness (dynamics), timbre, and pitch. Each individual parameter, or a combination of them, can produce a uniquely characteristic performance. The extraction of such objective parameters is one of the challenges in the fields of Music Performance Analysis and Music Information Retrieval. The submitted work summarizes knowledge and methods from both fields. The system is applied to extract data from 31 string quartet performances of the second movement (Lento) of the String Quartet No. 12 in F major (1893) by the Czech Romantic composer Antonín Dvořák (1841–1904).
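The thesis does not reproduce its extraction pipeline here; the sketch below only indicates how the four parameter classes might be estimated from a recording using the open-source librosa library (the file name is a placeholder):

```python
# Sketch: extracting the four performance parameter classes from a recording.
# Uses librosa (https://librosa.org); the audio file name is hypothetical.
import librosa
import numpy as np

y, sr = librosa.load("quartet_performance.wav", sr=None, mono=True)

# Tempo and timing: global tempo estimate plus beat positions in seconds.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Loudness (dynamics): frame-wise RMS energy, in dB relative to peak.
rms = librosa.feature.rms(y=y)[0]
loudness_db = librosa.amplitude_to_db(rms, ref=np.max)

# Timbre: spectral centroid as a simple one-dimensional brightness proxy.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Pitch: fundamental-frequency track via the pYIN algorithm.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)

print(f"tempo ~ {float(tempo):.1f} BPM, {len(beat_times)} beats, "
      f"median f0 = {np.nanmedian(f0):.1f} Hz")
```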
483

Nové trendy v oblasti mobility v datových sítích / New Mobility Trends in Data Networks

Skořepa, Michal January 2014 (has links)
The dissertation proposes a new handover management algorithm for the Mobile IPv6 protocol that enables the protocol's deployment in aeronautical data networks. Existing handover management algorithms achieve sufficient performance in conventional terrestrial wireless networks with high bandwidth and low latency, such as WiFi or UMTS, but as this thesis shows, deploying these algorithms in aeronautical data networks does not deliver the expected benefits. The analysis shows that in narrowband aeronautical networks these handover management algorithms suffer from high latency and introduce considerable overhead. The new MIPv6 handover management algorithm proposed in this thesis is based on a simple idea: "I am an aircraft, I know where I am flying!" That is, the movement of an aircraft is not random but highly predictable. This makes it possible to anticipate handovers between access networks along the aircraft's expected trajectory and to perform the operations needed to prepare these handovers while still on the ground, where the aircraft is connected to the airport's broadband network. The dissertation further compares existing handover management algorithms with the proposed one using an analytical handover evaluation method, which makes it possible to quantify the benefits of the new algorithm and to identify the weaknesses of the existing ones.
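As a rough sketch of the core idea, trajectory-based handover pre-planning could look like the following; all data structures and names are hypothetical, not the thesis' actual algorithm:

```python
# Sketch: pre-computing a handover schedule from a known flight trajectory.
# "I am an aircraft, I know where I am flying!" -- since the route is filed in
# advance, the serving access network for each route segment can be chosen on
# the ground, over the airport's broadband link. All names are hypothetical.
from dataclasses import dataclass
from math import hypot

@dataclass
class AccessNetwork:
    name: str
    x: float          # simplified planar ground-station coordinates
    y: float
    radius: float     # nominal coverage radius

def plan_handovers(waypoints, networks):
    """Return a list of (waypoint_index, network_name) handover events."""
    schedule, current = [], None
    for i, (x, y) in enumerate(waypoints):
        # Pick the nearest network that covers the waypoint.
        covering = [n for n in networks if hypot(n.x - x, n.y - y) <= n.radius]
        if not covering:
            continue  # coverage gap: keep the current binding
        best = min(covering, key=lambda n: hypot(n.x - x, n.y - y))
        if best is not current:
            schedule.append((i, best.name))   # pre-register binding update here
            current = best
    return schedule

nets = [AccessNetwork("airport-A", 0, 0, 50),
        AccessNetwork("enroute-1", 120, 10, 90),
        AccessNetwork("airport-B", 300, 0, 50)]
route = [(x, 5) for x in range(0, 301, 20)]
print(plan_handovers(route, nets))
```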
484

Modeling and Performance Evaluation of Spatially-correlated Cellular Networks / Modélisation et évaluation de la performance de réseaux cellulaires à corrélation spatiale

Wang, Shanshan 14 March 2019 (has links)
In the modeling and performance evaluation of wireless cellular communication, stochastic geometry is widely applied in order to provide more efficient and accurate solutions. The homogeneous Poisson point process (H-PPP) is the most widely used point process for modeling the spatial locations of base stations (BSs) due to its mathematical tractability and simplicity. For strong spatial correlations between the locations of BSs, only point processes (PPs) with spatial inhibition or attraction will do. However, long simulation times and weak mathematical tractability make non-Poisson PPs unsuitable for system-level performance evaluation. To overcome these problems, this thesis makes the following contributions. First, we introduce a new methodology for modeling and analyzing downlink cellular networks in which the base stations constitute a motion-invariant point process that exhibits some degree of interaction among the points. The proposed approach is based on the theory of inhomogeneous Poisson PPs (I-PPPs) and is referred to as the inhomogeneous double thinning (IDT) approach. It consists of approximating the original motion-invariant PP with an equivalent PP made of the superposition of two conditionally independent I-PPPs. The inhomogeneities of both PPs are created from the point of view of the typical user; they are mathematically modeled through two distance-dependent thinning functions, and a tractable expression of the coverage probability is obtained. Sufficient conditions on the parameters of the thinning functions that guarantee better or worse coverage compared with the baseline homogeneous PPP model are identified. The accuracy of the IDT approach is substantiated with the aid of empirical data on the spatial distribution of BSs. Then, based on the IDT approach, a new tractable analytical expression for the mean interference-to-signal ratio (MISR) of cellular networks whose BSs exhibit spatial correlations is introduced. For non-Poisson PPs, we apply the proposed IDT approach to approximate their performance. Taking the β-Ginibre point process (β-GPP) as an example, we propose new approximation functions for the key parameters of the IDT approach to model different degrees of spatial inhibition, and we prove that the MISR is constant under network densification with our proposed approximation functions. We prove that the MISR performance in the β-GPP case depends only on the degree of spatial repulsion, i.e., β, regardless of the BS density. We also prove that as β or γ increases (for fixed γ or β, respectively), the corresponding MISR for the β-GPP decreases. The new approximation functions and these trends are validated by numerical simulations. Third, we further study the meta distribution of the SIR with the help of the IDT approach. The meta distribution is the distribution of the conditional success probability given the point process. We derive and compare closed-form expressions for the b-th moment in the H-PPP and non-Poisson PP cases. Since direct computation of the complementary cumulative distribution function (CCDF) of the meta distribution is not available, we propose a simple and accurate numerical method based on the numerical inversion of Laplace transforms. The proposed approach is more efficient and stable than the conventional approach using the Gil-Pelaez theorem. The asymptotic value of the CCDF of the meta distribution is computed under a new definition of the success probability. Furthermore, the proposed method is compared with other approximations and bounds, e.g., the beta approximation, Markov bounds and the Paley-Zygmund bound, and is found to be more accurate than these alternatives.
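The basic operation underlying the IDT approach, thinning a homogeneous PPP with a distance-dependent retention probability to obtain an inhomogeneous PPP, can be sketched as follows; the concrete thinning function is illustrative, not the one derived in the thesis:

```python
# Sketch: simulating an inhomogeneous PPP by distance-dependent thinning of a
# homogeneous PPP. By the thinning theorem, retaining each point of an H-PPP of
# intensity lam with probability p(d) yields an I-PPP of intensity lam * p(d).
import numpy as np

rng = np.random.default_rng(0)

def homogeneous_ppp(lam, radius):
    """Homogeneous PPP of intensity lam on a disc of given radius."""
    n = rng.poisson(lam * np.pi * radius**2)
    r = radius * np.sqrt(rng.uniform(size=n))     # area-uniform radii
    theta = rng.uniform(0, 2 * np.pi, size=n)
    return r * np.cos(theta), r * np.sin(theta)

def thin(x, y, keep_prob):
    """Retain each point independently with probability keep_prob(distance)."""
    d = np.hypot(x, y)            # distance to the typical user at the origin
    keep = rng.uniform(size=d.size) < keep_prob(d)
    return x[keep], y[keep]

x, y = homogeneous_ppp(lam=1.0, radius=20.0)
# Illustrative thinning: suppress points near the origin, mimicking the
# repulsion around the typical user that a PP with inhibition would show.
fx, fy = thin(x, y, lambda d: 1.0 - np.exp(-(d / 3.0) ** 2))
print(f"{x.size} points before thinning, {fx.size} after")
```

The IDT approach superposes two such conditionally independent thinned processes, with thinning functions fitted so the result matches the correlated BS deployment.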
485

Voltage loss analysis of PEM fuel cells

Jayasankar, B., Pohlmann, C., Harvey, D.B. 25 November 2019 (has links)
The assessment of the performance of PEM fuel cells (PEMFC) at the stack, Single Repeating Unit (SRU), and Membrane Electrode Assembly (MEA) level is dominated by the evaluation of polarization curves. However, polarization curves do not provide adequate detail as to the origin of the inefficiencies in fuel cell performance, and information on these origins is critical for understanding and addressing topics such as material selection, optimal operating conditions, and overall robust and reliable cell and stack design. To understand the origin of the inefficiencies underlying the fuel cell polarization curve, a series of additional experimental and analysis techniques must be applied; from the resulting data, the inefficiencies can then be assigned to kinetic, ohmic, and mass-transport loss categories. Further, by combining the diagnostic methods, the analysis can be resolved down to the contribution of individual components to the respective voltage loss categories. In this work, a methodology that achieves and demonstrates this process is presented and discussed.
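As a hedged illustration of such a loss categorization, the following sketch decomposes a polarization curve using a textbook-style empirical model (in the spirit of Kim et al., J. Electrochem. Soc., 1995); all parameter values are invented rather than fitted to any particular MEA:

```python
# Sketch: splitting a PEM fuel cell polarization curve into kinetic, ohmic,
# and mass-transport losses with an empirical model. Parameters illustrative.
import numpy as np

E_rev = 1.19       # V, reversible potential at operating conditions (assumed)
b = 0.065          # V/decade, Tafel slope -> kinetic (activation) loss
i0 = 1e-4          # A/cm^2, exchange current density (assumed)
R = 0.15           # ohm*cm^2, area-specific resistance -> ohmic loss
m, n = 2e-4, 5.0   # V, cm^2/A: empirical mass-transport parameters

def losses(i):
    """Return (kinetic, ohmic, mass_transport) voltage losses at current density i."""
    kinetic = b * np.log10(i / i0)      # Tafel kinetics
    ohmic = R * i                       # membrane + contact resistance
    mass = m * np.exp(n * i)            # diffusion / flooding limitation
    return kinetic, ohmic, mass

i = np.linspace(0.01, 1.2, 6)           # A/cm^2
for ii, (k, o, mt) in zip(i, zip(*losses(i))):
    V = E_rev - k - o - mt
    print(f"i={ii:5.2f} A/cm^2  V={V:5.3f} V  "
          f"kinetic={k:.3f}  ohmic={o:.3f}  mass={mt:.3f}")
```

In practice the individual terms are constrained by the additional diagnostics the abstract refers to, rather than fitted to the polarization curve alone.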
486

Concepts for In-memory Event Tracing: Runtime Event Reduction with Hierarchical Memory Buffers

Wagner, Michael 03 July 2015 (has links)
This thesis contributes to the field of performance analysis in High Performance Computing with new concepts for in-memory event tracing. Event tracing records the runtime events of an application and stores each with a precise time stamp and further relevant metrics. The high resolution and detailed information allow an in-depth analysis of dynamic program behavior, interactions in parallel applications, and potential performance issues. For long-running, large-scale parallel applications, event-based tracing faces three as yet unsolved challenges: the number of resulting trace files limits scalability, the huge amounts of collected data overwhelm file systems and analysis capabilities, and measurement bias, in particular due to intermediate memory buffer flushes, prevents a correct analysis. This thesis proposes concepts for an in-memory event tracing workflow. These concepts include new enhanced encoding techniques to increase memory efficiency and novel strategies for runtime event reduction to dynamically adapt trace size during runtime. An in-memory event tracing workflow based on these concepts meets all three challenges: First, it not only overcomes the scalability limitations due to the number of resulting trace files but eliminates the overhead of file system interaction altogether. Second, the enhanced encoding techniques and event reduction lead to remarkably smaller trace sizes. Finally, an in-memory event tracing workflow completely avoids intermediate memory buffer flushes, which minimizes measurement bias and allows a meaningful performance analysis. The concepts further include the Hierarchical Memory Buffer data structure, which incorporates a multi-dimensional, hierarchical ordering of events by common metrics, such as time stamp, calling context, event class, and function call duration. This hierarchical ordering allows low-overhead event encoding, event reduction and event filtering, as well as new hierarchy-aided analysis requests. An experimental evaluation based on real-life applications and a detailed case study underline the capabilities of the concepts presented in this thesis. The new enhanced encoding techniques reduce memory allocation during runtime by a factor of 3.3 to 7.2 while introducing no additional overhead. Furthermore, the combined concepts, including the enhanced encoding techniques, event reduction, and a new filter based on function duration within the Hierarchical Memory Buffer, reduce the resulting trace size by up to three orders of magnitude and keep an entire measurement within a single fixed-size memory buffer, while still providing a coarse but meaningful analysis of the application. The thesis includes a discussion of the state of the art and related work, a detailed presentation of the enhanced encoding techniques, the event reduction strategies, the Hierarchical Memory Buffer data structure, and an extensive experimental evaluation of all concepts.
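A toy version of duration-based runtime event reduction in a fixed-size buffer might look as follows; the real Hierarchical Memory Buffer orders events along several metric dimensions at once, which this sketch does not attempt:

```python
# Sketch: a fixed-size event buffer with duration-based runtime reduction.
# Once memory is exhausted, only the longest-duration function events survive,
# yielding a coarse but meaningful picture within one fixed-size buffer.
import heapq

class BoundedEventBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []            # min-heap on duration: cheapest event on top
        self._seq = 0              # tie-breaker to keep heap entries comparable

    def record(self, timestamp, context, duration):
        entry = (duration, self._seq, timestamp, context)
        self._seq += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)        # buffer not yet full
        elif duration > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)     # evict shortest event
        # else: drop the new event -- it falls below the current duration filter

    def events(self):
        """Surviving events, ordered by timestamp for analysis."""
        return sorted(self._heap, key=lambda e: e[2])

buf = BoundedEventBuffer(capacity=4)
for t, (ctx, dur) in enumerate([("main", 9.0), ("init", 0.1), ("solve", 7.5),
                                ("io", 0.2), ("solve", 8.1), ("mpi_wait", 3.3)]):
    buf.record(t, ctx, dur)
print(buf.events())   # the short init/io events were filtered out
```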
487

Software Controlled Clock Modulation for Energy Efficiency Optimization on Intel Processors

Schöne, Robert, Ilsche, Thomas, Bielert, Mario, Molka, Daniel, Hackenberg, Daniel 24 October 2017 (has links)
Current Intel processors implement a variety of power saving features like frequency scaling and idle states. These mechanisms limit the power draw and thereby decrease the thermal dissipation of the processors. However, they also have an impact on the achievable performance. The various mechanisms significantly differ regarding the amount of power savings, the latency of mode changes, and the associated overhead. In this paper, we describe and closely examine the so-called software controlled clock modulation mechanism for different processor generations. We present results that imply that the available documentation is not always correct and describe when this feature can be used to improve energy efficiency. We additionally compare it against the more popular feature of dynamic voltage and frequency scaling and develop a model to decide which feature should be used to optimize inter-process synchronizations on Intel Haswell-EP processors.
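On Linux, software controlled clock modulation can be exercised by writing the IA32_CLOCK_MODULATION MSR (0x19A) through the msr kernel module; the sketch below assumes a processor with the extended 6.25%-step duty-cycle range and is not code from the paper. Verify the bit layout against the Intel SDM for the processor generation at hand, which is exactly the point the paper makes about the documentation:

```python
# Sketch: setting software controlled clock modulation by writing the
# IA32_CLOCK_MODULATION MSR (0x19A). Requires root and 'modprobe msr'.
import struct

IA32_CLOCK_MODULATION = 0x19A
ENABLE_BIT = 1 << 4            # on-demand clock modulation enable

def set_clock_modulation(cpu, level):
    """Set duty-cycle level 1..15 (level * 6.25%) on one CPU; 0 disables."""
    value = ENABLE_BIT | (level & 0xF) if level else 0
    with open(f"/dev/cpu/{cpu}/msr", "wb") as f:
        f.seek(IA32_CLOCK_MODULATION)   # the MSR address is the file offset
        f.write(struct.pack("<Q", value))

# Example: throttle CPU 0 to ~50% effective clock, then restore full speed.
set_clock_modulation(0, 8)      # 8 * 6.25% = 50% duty cycle
# ... run the workload region that tolerates the lower clock ...
set_clock_modulation(0, 0)      # disable modulation again
```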
488

Nodale Spektralelemente und unstrukturierte Gitter - Methodische Aspekte und effiziente Algorithmen / Nodal Spectral Elements and Unstructured Grids - Methodological Aspects and Efficient Algorithms

Fladrich, Uwe 15 December 2011 (has links)
The dissertation addresses methodological and algorithmic aspects of the spectral element method for the spatial discretization of partial differential equations. The further development of a symmetry-based factorization enables efficient operators for tetrahedral elements. Based on a comprehensive performance analysis, bottlenecks in the implementation of the operators are identified and eliminated through algorithmic modifications of the method.
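The payoff of factorized element operators is easiest to see for tensor-product (hexahedral) elements, where sum factorization reduces the cost of applying a derivative operator from O(p^6) to O(p^4) per element; the dissertation develops an analogous symmetry-based factorization for tetrahedra, which lack this simple tensor structure and which the hex sketch below does not capture:

```python
# Sketch: factorized vs. dense application of a 1D differentiation matrix on
# one hexahedral spectral element. The factorized form stores and applies only
# the small (n x n) matrix per direction instead of a dense (n^3 x n^3) one.
import numpy as np

p = 7                                  # polynomial degree
n = p + 1
D = np.random.rand(n, n)               # 1D spectral differentiation matrix (dummy)
u = np.random.rand(n, n, n)            # nodal values on one hex element

# Factorized application: one small matrix contraction per coordinate direction.
du_dx = np.einsum("ai,ijk->ajk", D, u)
du_dy = np.einsum("bj,ijk->ibk", D, u)
du_dz = np.einsum("ck,ijk->ijc", D, u)

# Naive equivalent for du/dx: a dense element matrix acting on all n^3 nodes.
Dx_dense = np.kron(D, np.eye(n * n))
assert np.allclose(Dx_dense @ u.ravel(), du_dx.ravel())
print("factorized storage:", D.size, "entries vs dense:", Dx_dense.size)
```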
489

Network-Calculus-based Performance Analysis for Wireless Sensor Networks

She, Huimin January 2009 (has links)
Recently, the wireless sensor network (WSN) has become a promising technology with a wide range of applications such as supply chain monitoring and environment surveillance. It is typically composed of multiple tiny devices equipped with limited sensing, computing and wireless communication capabilities. The design of such networks presents several technical challenges while dealing with various requirements and diverse constraints. Performance analysis techniques are required to provide insight into design parameters and system behaviors. Based on network calculus, we present a deterministic analysis method for evaluating the worst-case delay and buffer cost of sensor networks. To this end, three general traffic flow operators are proposed and their delay and buffer bounds are derived. These operators can be used in combination to model any complex traffic flow scenario. Furthermore, the method integrates a variable duty cycle to allow the sensor nodes to operate at low rates, thus saving power. In an attempt to balance traffic load and improve resource utilization and performance, traffic splitting mechanisms are introduced for mesh sensor networks. Based on network calculus, the delay and buffer bounds are derived in non-splitting and splitting scenarios. In addition, the analysis of traffic splitting mechanisms is extended to sensor networks with general topologies. To provide reliable data delivery in sensor networks, retransmission has been adopted as one of the most popular schemes. We propose an analytical method to evaluate the maximum data transmission delay and energy consumption of two types of retransmission schemes: hop-by-hop retransmission and end-to-end retransmission. We perform a case study of using sensor networks for a fresh food tracking system. Several experiments are carried out in the Omnet++ simulation environment. In order to validate the tightness of the two bounds obtained by the analysis method, the simulation results and analytical results are compared in the chain and mesh scenarios with various input traffic loads. The results show that the analytic bounds are correct and tight. Therefore, network calculus is useful and accurate for the performance analysis of wireless sensor networks. / Ipack VINN Excellence Center
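For concreteness, the classical network-calculus bounds that such an analysis builds on, for a token-bucket arrival curve and a rate-latency service curve, can be computed as follows; the numbers are invented, and the thesis derives analogous bounds for its three traffic flow operators:

```python
# Sketch: standard network-calculus delay and backlog bounds for a token-bucket
# arrival curve alpha(t) = b + r*t served by a rate-latency service curve
# beta(t) = R * max(t - T, 0).

def delay_bound(b, r, R, T):
    """Worst-case delay: horizontal deviation h(alpha, beta) = T + b/R."""
    assert r <= R, "flow is unstable: arrival rate exceeds service rate"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case buffer occupancy: vertical deviation v(alpha, beta) = b + r*T."""
    assert r <= R
    return b + r * T

# A sensor node forwarding a 2 kbit-burst, 1 kbit/s flow over a duty-cycled
# link that offers 4 kbit/s after a 0.5 s wake-up latency:
b, r = 2000.0, 1000.0       # bits, bits/s
R, T = 4000.0, 0.5          # bits/s, s
print(f"delay  <= {delay_bound(b, r, R, T):.2f} s")       # 0.5 + 2000/4000 = 1.00 s
print(f"buffer <= {backlog_bound(b, r, R, T):.0f} bits")  # 2000 + 1000*0.5 = 2500
```

Lowering the duty cycle effectively reduces R and increases T, so both bounds grow, which is exactly the power/latency trade-off the method quantifies.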
490

Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis

Yuankun Fu (10223831) 29 April 2021 (has links)
As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization/analysis are a key factor in scientific campaigns. As one such campaign to study fluid behaviors, computational fluid dynamics (CFD) simulations have progressed rapidly in the past several decades and revolutionized our lives in many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at cheaper computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

I start my research on how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and corresponding data analysis. Detailed performance analysis using visualized tracing then shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grain task parallelism of full asynchrony with pipelining. Meanwhile, I design a novel concurrent data transfer optimization method, which employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system. It significantly reduces the data transfer time by up to 32%, especially when the simulation application is stalled. An investigation of the speedup using OmniPath network tools shows that network congestion has been alleviated by up to 80%. Finally, the scalability of the Zipper system has been verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores. Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turned to accelerating the memory-bound LBM algorithms. We first design novel parallel 2D memory-aware LBM algorithms; I then extend the design to 3D memory-aware LBM, combining single-copy distribution, single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis. Experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
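For reference, a baseline collide-and-stream step of a minimal D2Q9 BGK LBM solver is sketched below; the memory-aware variants in the thesis fuse and reorder exactly these phases (swap, prism traversal, merged time steps), none of which this simple sketch attempts:

```python
# Sketch: one collide-and-stream step of a minimal D2Q9 BGK lattice Boltzmann
# solver on a periodic grid, as an untuned baseline.
import numpy as np

# D2Q9 lattice velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.6                                  # BGK relaxation time

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def step(f):
    # Macroscopic moments.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # Collision (BGK relaxation toward equilibrium), then periodic streaming.
    f += (equilibrium(rho, ux, uy) - f) / tau
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

nx = ny = 64
f = equilibrium(np.ones((nx, ny)), *np.zeros((2, nx, ny)))  # fluid at rest
for _ in range(100):
    f = step(f)
print("mass conserved:", np.isclose(f.sum(), nx * ny))
```

The two passes over the distribution arrays per step are what make the method memory-bound, and merging them is the core of the memory-aware algorithms described above.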
