371

The role and the functions of the Alternative Exchange (AltX) and its contribution to the development of the small and medium-sized enterprises (SMMEs) in South Africa

Mtiki, Xolisa January 2019 (has links)
Magister Commercii - MCom / Motivated by the number of firms that migrate from the Alternative Exchange (AltX) to the JSE main board, this research examines the role and the functions of the AltX and its contribution to the development of small and medium-sized enterprises (SMMEs) in South Africa over the period from January 2004 to December 2015. The study explores the performance of the firms that have migrated from the AltX to the JSE main board, as well as the attributes that contribute to a successful migration. It begins by computing risk, return, risk-adjusted performance and liquidity statistics of the firms that migrated from the AltX to the JSE main board over the research period, measured from their respective listings on the AltX. In the preliminary tests, the excess returns of the sample firms were regressed against the market risk premium using the ALSI as the market proxy. The beta coefficients estimated by these regressions are statistically insignificant, which indicates that the firms listed on the AltX have an insignificant correlation with the firms listed on the JSE main board; the ALSI could therefore not be used as a performance benchmark for the sample firms. Subsequently, the research evaluates the market response before and after both the announcement date and the actual migration date of the firms that migrated from the AltX to the JSE main board. The impact of the announcement and of the actual migration are investigated separately because the period between the announcement date and the migration date is usually more than a month, and investors might react differently to the two events. Moreover, this is the first research to investigate the market reaction on both the migration announcement date and the actual migration date of firms moving from the AltX to the JSE main board. The results reveal significant average abnormal return and average abnormal turnover reactions around both the migration announcement date and the actual migration date, suggesting that both events produced significant abnormal returns. The research also evaluates the performance of the migrated firms against comparable peers. This performance evaluation is twofold: first, the financial position of the AltX sample firms is assessed before their migration to the JSE main board; second, a post-migration evaluation classifies each sample firm as either a success or a failure after its migration. The results reveal that, of the 20 sample firms, only 13 are categorised as successful after their migration from the AltX to the JSE main board, while the remaining 7 are categorised as unsuccessful. Finally, the research investigates the attributes that differentiate the AltX firms that are likely to be successful from those that are unlikely to be successful after migrating to the JSE main board. To achieve this, the Multivariate Discriminant Analysis (MDA) model developed by Altman (1968) is employed. The results reveal that the model correctly classifies 90% of the original cases and 85% of the cross-validated cases. Moreover, the model identifies net profit margin, current ratio and return on capital invested as the most important financial ratios for distinguishing successful from unsuccessful firms after migration from the AltX to the JSE main board. / 2021-04-30
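As an illustration of the event-study logic the abstract describes (abnormal returns around the announcement and migration dates), the sketch below estimates a market model on a pre-event window and computes abnormal and cumulative abnormal returns in an event window. The returns, window lengths and parameter values are invented for the example and are not data or choices from the thesis.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical daily returns for one migrating firm and a market proxy (e.g. the ALSI);
# event day 0 stands for the migration announcement date.
days = pd.RangeIndex(-250, 21, name="event_day")
market = pd.Series(rng.normal(0.0004, 0.01, len(days)), index=days)
firm = 0.0002 + 0.8 * market + pd.Series(rng.normal(0, 0.015, len(days)), index=days)

# 1. Estimate the market model r_firm = alpha + beta * r_market on a pre-event window.
est = slice(-250, -21)
X = np.column_stack([np.ones(len(market.loc[est])), market.loc[est]])
alpha, beta = np.linalg.lstsq(X, firm.loc[est], rcond=None)[0]

# 2. Abnormal return = actual return minus the market-model prediction in the event window.
event = slice(-20, 20)
ar = firm.loc[event] - (alpha + beta * market.loc[event])

# 3. Cumulative abnormal return over the [-20, +20] event window.
car = ar.cumsum()
print(f"beta = {beta:.2f}, CAR[-20, +20] = {car.iloc[-1]:.4f}")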
372

Solving dynamic multi-objective optimisation problems using vector evaluated particle swarm optimisation

Helbig, Marde 24 September 2012 (has links)
Most optimisation problems in everyday life are not static in nature, have multiple objectives, and have at least two objectives that are in conflict with one another. However, most research focusses on either static multi-objective optimisation (MOO) or dynamic single-objective optimisation (DSOO). Furthermore, most research on dynamic multi-objective optimisation (DMOO) focusses on evolutionary algorithms (EAs) and only a few particle swarm optimisation (PSO) algorithms exist. This thesis proposes a multi-swarm PSO algorithm, dynamic Vector Evaluated Particle Swarm Optimisation (DVEPSO), to solve dynamic multi-objective optimisation problems (DMOOPs). In order to determine whether an algorithm solves DMOO efficiently, functions are required that resemble real-world DMOOPs, called benchmark functions, as well as functions that quantify the performance of the algorithm, called performance measures. However, one major problem in the field of DMOO is a lack of standard benchmark functions and performance measures. To address this problem, an overview of the current literature is provided and shortcomings of current DMOO benchmark functions and performance measures are discussed. In addition, new DMOOPs are introduced to address the identified shortcomings of current benchmark functions. The optimisation process of DVEPSO is directed by guides; therefore, various guide update approaches are investigated. Furthermore, a sensitivity analysis of DVEPSO is conducted to determine the influence of various parameters on the performance of DVEPSO. The investigated parameters include approaches to manage boundary constraint violations, approaches to share knowledge between the sub-swarms and responses to changes in the environment that are applied to either the particles of the sub-swarms or the non-dominated solutions stored in the archive. From these experiments the best DVEPSO configuration is determined and compared against four state-of-the-art DMOO algorithms. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
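The vector-evaluated idea behind DVEPSO can be illustrated with a minimal two-objective example: each sub-swarm optimises one objective with standard PSO updates, and the global best of the other sub-swarm is used as the guide, so knowledge is shared between sub-swarms. The test problem (Schaffer), the parameter values, the static setting and the simple ring-style knowledge sharing are simplifications for illustration, not the DVEPSO configuration studied in the thesis.

import numpy as np

rng = np.random.default_rng(1)

# Two objectives of the classic Schaffer problem; each sub-swarm minimises one of them.
objectives = [lambda x: x**2, lambda x: (x - 2.0)**2]

n, iters, w, c1, c2 = 20, 100, 0.5, 1.5, 1.5
pos = [rng.uniform(-5, 5, n) for _ in objectives]        # one sub-swarm per objective
vel = [np.zeros(n) for _ in objectives]
pbest = [p.copy() for p in pos]
gbest = [p[np.argmin(f(p))] for p, f in zip(pos, objectives)]

for _ in range(iters):
    for s, f in enumerate(objectives):
        # Vector-evaluated knowledge sharing: the guide comes from the *other* sub-swarm.
        guide = gbest[(s + 1) % len(objectives)]
        r1, r2 = rng.random(n), rng.random(n)
        vel[s] = w * vel[s] + c1 * r1 * (pbest[s] - pos[s]) + c2 * r2 * (guide - pos[s])
        pos[s] = pos[s] + vel[s]
        better = f(pos[s]) < f(pbest[s])
        pbest[s] = np.where(better, pos[s], pbest[s])
        gbest[s] = pbest[s][np.argmin(f(pbest[s]))]

# The Pareto-optimal set of this problem lies in the interval [0, 2].
print("sub-swarm bests:", [round(float(g), 3) for g in gbest])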
373

Porovnání přístupů k dotazování chemických sloučenin / Comparison of Approaches for Querying of Chemical Compounds

Šípek, Vojtěch January 2019 (has links)
The purpose of this thesis is to perform an analysis of approaches to querying chemical databases and to validate or invalidate their results. Currently, no existing work compares the performance and memory usage of the best-performing approaches on the same data set. In this thesis, we address this lack of information and create an unbiased benchmark of the most popular index-building methods for subgraph querying of chemical databases. We also compare the results of this benchmark with the performance results of an SQL and a graph database.
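As a rough picture of how such a comparison can be set up, the sketch below times index construction and query execution and records peak memory for an arbitrary indexing method. The build_index and run_queries callables and the method name in the usage comment are placeholders, not the actual tools benchmarked in the thesis.

import time
import tracemalloc

def benchmark(name, build_index, run_queries, molecules, queries):
    """Measure index build time, peak memory during the build, and average query latency."""
    tracemalloc.start()
    t0 = time.perf_counter()
    index = build_index(molecules)                 # e.g. a fingerprint or graph index
    build_s = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    t0 = time.perf_counter()
    for q in queries:
        run_queries(index, q)                      # candidate filtering + subgraph verification
    query_ms = 1000 * (time.perf_counter() - t0) / max(len(queries), 1)
    print(f"{name:20s} build {build_s:6.2f} s  peak {peak / 2**20:6.1f} MiB  "
          f"avg query {query_ms:6.2f} ms")

# Usage (hypothetical): benchmark("SomeIndexMethod", build_idx, query_idx, db_molecules, query_set)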
374

Lid driven cavity flow using stencil-based numerical methods

Juujärvi, Hannes, Kinnunen, Isak January 2022 (has links)
In this report the regular finite difference method (FDM) and a least-squares radial basis function-generated finite differences method (RBF-FD-LS) are used to solve the two-dimensional incompressible Navier-Stokes equations for the lid-driven cavity problem. The Navier-Stokes equations are solved using the stream function-vorticity formulation. The purpose of the report is to compare FDM and RBF-FD-LS with respect to accuracy and computational cost. Both methods were implemented in MATLAB and the problem was solved for Reynolds numbers equal to 100, 400 and 1000. In the report we present the solutions obtained as well as the results from the comparison. The results are discussed and conclusions are drawn. We conclude that RBF-FD-LS is more accurate when the step size of the grids is held constant, while RBF-FD-LS costs more than FDM for similar accuracy.
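For readers unfamiliar with the stream function-vorticity formulation, the sketch below is a minimal plain-FDM solver for the lid-driven cavity. It does not implement the RBF-FD-LS method, and the grid size, time step, boundary treatment and iteration counts are illustrative choices rather than those used in the report.

import numpy as np

# Minimal stream function-vorticity solver for the lid-driven cavity (illustrative only).
N, Re, dt, steps = 41, 100.0, 0.001, 5000
h = 1.0 / (N - 1)
psi = np.zeros((N, N))          # stream function, psi = 0 on all walls
w = np.zeros((N, N))            # vorticity; index [i, j] = (x_i, y_j), j increasing upward

for _ in range(steps):
    # Wall vorticity from Thom's formula; the lid (y = 1) moves with speed U = 1.
    w[:, -1] = -2.0 * psi[:, -2] / h**2 - 2.0 / h     # moving lid
    w[:, 0]  = -2.0 * psi[:, 1]  / h**2               # bottom wall
    w[0, :]  = -2.0 * psi[1, :]  / h**2               # left wall
    w[-1, :] = -2.0 * psi[-2, :] / h**2               # right wall

    # Velocities from the stream function (central differences in the interior).
    u = (psi[1:-1, 2:] - psi[1:-1, :-2]) / (2 * h)    # u =  d(psi)/dy
    v = -(psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * h)   # v = -d(psi)/dx

    # Explicit update of the vorticity transport equation.
    lap_w = (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:] + w[1:-1, :-2]
             - 4 * w[1:-1, 1:-1]) / h**2
    dwdx = (w[2:, 1:-1] - w[:-2, 1:-1]) / (2 * h)
    dwdy = (w[1:-1, 2:] - w[1:-1, :-2]) / (2 * h)
    w[1:-1, 1:-1] += dt * (-u * dwdx - v * dwdy + lap_w / Re)

    # A few Jacobi sweeps for the Poisson equation  lap(psi) = -w.
    for _ in range(30):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                  + psi[1:-1, 2:] + psi[1:-1, :-2] + h**2 * w[1:-1, 1:-1])

print("min(psi) =", psi.min())   # primary-vortex strength; roughly -0.10 for Re = 100 on converged grids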
375

Efficient LU Factorization for Texas Instruments Keystone Architecture Digital Signal Processors / Effektiv LU-faktorisering för Texas Instruments digitala signalprocessorer med Keystone-arkitektur

Netzer, Gilbert January 2015 (has links)
The energy consumption of large-scale high-performance computing (HPC) systems has become one of the foremost concerns of both data-center operators and computer manufacturers. This has renewed interest in alternative computer architectures that could offer substantially better energy efficiency. Yet the well-optimized implementations of typical HPC benchmarks that are needed to evaluate the potential of these architectures are often not available for architectures that are novel to the HPC industry. The LU factorization benchmark implementation presented in this work aims to provide such a high-quality tool for the HPC industry-standard high-performance LINPACK benchmark (HPL) on the eight-core Texas Instruments TMS320C6678 digital signal processor (DSP). The implementation performs the LU factorization at up to 30.9 GF/s at a 1.25 GHz core clock frequency using all eight DSP cores of the System-on-Chip (SoC). This is 77% of the attainable peak double-precision floating-point performance of the DSP, a level of efficiency comparable to that expected on traditional x86-based processor architectures. A detailed performance analysis shows that this is largely due to the optimized implementation of the embedded generalized matrix-matrix multiplication (GEMM). For this operation, the on-chip direct memory access (DMA) engines were used to transfer the necessary data from the external DDR3 memory to the core-private and shared scratchpad memories, which allowed the data transfers to be overlapped with computations on the DSP cores. The computations were in turn optimized using software pipelining techniques and were partly implemented in assembly language. With these optimizations, the performance of the matrix multiplication reached up to 95% of the attainable peak performance. A detailed description of these two key optimization techniques and their application to the LU factorization is included. Using a specially instrumented Advantech TMDXEVM6678L evaluation module, described in detail in related work, the SoC's energy efficiency was measured at up to 2.92 GF/J while executing the presented benchmark. Results from the verification of the benchmark execution using standard HPL correctness checks and an uncertainty analysis of the experimentally gathered data are also presented.
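To make the role of the embedded GEMM concrete, here is a high-level sketch of the right-looking blocked LU factorization that HPL is built around, written in Python/NumPy rather than DSP C code. The block size, the omission of partial pivoting (which HPL does apply), and the test matrix are simplifications for illustration; the trailing-matrix update at the end of the loop is the matrix-matrix multiplication whose optimisation the thesis identifies as the key to performance.

import numpy as np

def blocked_lu(A, nb=64):
    """Right-looking blocked LU without pivoting; L (unit lower) and U are returned
    packed into one matrix."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Unblocked LU of the diagonal block (Doolittle, unit lower triangle).
        for j in range(k, e):
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            # Triangular solves for the block row U12 and block column L21.
            L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U11 = np.triu(A[k:e, k:e])
            A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])         # U12
            A[e:, k:e] = np.linalg.solve(U11.T, A[e:, k:e].T).T   # L21
            # Trailing update A22 -= L21 @ U12: the GEMM that dominates the run time.
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A

# Quick check on a random, diagonally dominant matrix (safe without pivoting).
n = 256
M = np.random.rand(n, n) + n * np.eye(n)
LU = blocked_lu(M)
L, U = np.tril(LU, -1) + np.eye(n), np.triu(LU)
print("relative reconstruction error:", np.linalg.norm(L @ U - M) / np.linalg.norm(M))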
376

GIS Processing on the Web

Knutsson, Erik, Rydhe, Manne January 2022 (has links)
Today more and more advanced and demanding applications are finding their way to the web, including applications like video editing, games, and mathematical calculations. Up until a few years ago, JavaScript was the only language present on the web. That was until Mozilla, Google, Microsoft, and Apple decided to develop WebAssembly. WebAssembly is a low-level language, similar to assembly, but running in the browser. WebAssembly was not created to replace JavaScript, but to be used alongside it and complement JavaScript's weaknesses. WebAssembly is still a relatively new language (2017) and is in continuous development. This work is presented as a guideline, and to give a general direction of how WebAssembly performs (in 2022) when operating on GIS data. When comparing the execution speed of WebAssembly running in different environments (NodeJS, Google Chrome, and Mozilla Firefox), NodeJS was the fastest, Mozilla Firefox was the second fastest, and Google Chrome was the slowest. However, when compared to the native implementation in C++, no environment came close to the 10% slowdown relative to native code promised by the developers. The average slowdowns found in this study were: the benchmark with small input files ran 63% slower than native, the benchmark with medium input files ran 62% slower, and the benchmark with large input files ran 68% slower. These results are comparable to those of study [6], which found slowdowns of around 45% when running WebAssembly in Mozilla Firefox and 55% in Google Chrome, with a peak slowdown of 2.5 times compared to native. This study also aimed to measure memory usage in the different environments for operations on GIS data; however, the methods used to measure memory proved too unsophisticated when dealing with JIT compilation and garbage collection. For future work, a more detailed memory-allocated-over-time graph should be used to measure the peak memory allocated to the process instead of looking at the difference in memory before and after.
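The relative-slowdown figures quoted above follow from a simple ratio; the snippet below shows the arithmetic with invented timings. The numbers are placeholders, not measurements from the study.

def slowdown(t_wasm: float, t_native: float) -> float:
    """Relative slowdown of a WebAssembly run versus the native C++ baseline."""
    return (t_wasm - t_native) / t_native

# Hypothetical wall-clock times in seconds, only to illustrate the calculation:
t_native = 10.0
for env, t in {"NodeJS": 15.8, "Mozilla Firefox": 16.3, "Google Chrome": 17.1}.items():
    print(f"{env:16s}: {100 * slowdown(t, t_native):.0f}% slower than native")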
377

Benchmarking educational web portals : an application of the Kano method

MacDonald, Catherine Ann 30 March 2010 (has links)
The Kano method was used in order to determine the benchmark requirements of an educational web portal. A comprehensive list of possible specifications for an educational portal was constructed by examining the characteristics of educational portals globally. This information was used to develop a questionnaire in accordance with the Kano method. A number of hand-picked expert users were asked to answer the questionnaire. The results obtained from these questionnaires were used to categorize the importance of each component of a web portal as a “one-dimensional”, “must-be” or “attractive” requirement. The components categorized as “must-be” requirements were used to generate the benchmark of the minimum specifications of an educational web portal. Copyright / Dissertation (MEd)--University of Pretoria, 2008. / Curriculum Studies / MEd / Unrestricted
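The categorisation step of the Kano method can be sketched as follows: each expert's answers to the functional and dysfunctional question for a component are looked up in the standard Kano evaluation table, and the most frequent category across respondents labels the component as attractive, one-dimensional, must-be, indifferent, reverse or questionable. The answer coding and the sample responses below are illustrative and are not data from the dissertation.

from collections import Counter

# Standard Kano evaluation table (simplified): rows = answer to the functional question,
# columns = answer to the dysfunctional question.  Answers: L(ike), M(ust-be),
# N(eutral), W ("can live with it"), D(islike).
KANO_TABLE = {
    "L": {"L": "Q", "M": "A", "N": "A", "W": "A", "D": "O"},
    "M": {"L": "R", "M": "I", "N": "I", "W": "I", "D": "M"},
    "N": {"L": "R", "M": "I", "N": "I", "W": "I", "D": "M"},
    "W": {"L": "R", "M": "I", "N": "I", "W": "I", "D": "M"},
    "D": {"L": "R", "M": "R", "N": "R", "W": "R", "D": "Q"},
}
# Categories: A = attractive, O = one-dimensional, M = must-be,
#             I = indifferent, R = reverse, Q = questionable.

def classify(responses):
    """responses: (functional_answer, dysfunctional_answer) pairs for one portal component.
    Returns the most frequent Kano category across respondents."""
    counts = Counter(KANO_TABLE[f][d] for f, d in responses)
    return counts.most_common(1)[0][0]

# Hypothetical expert answers for one component, e.g. a single sign-on feature:
print(classify([("L", "D"), ("M", "D"), ("L", "D"), ("M", "D"), ("N", "D")]))  # -> 'M' (must-be)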
378

Continuous coordination as a realistic scenario for lifelong learning

Badrinaaraayanan, Akilesh 04 1900 (has links)
Current deep reinforcement learning (RL) algorithms are still highly task-specific and lack the ability to generalize to new environments. Lifelong learning (LLL), however, aims at solving multiple tasks sequentially by efficiently transferring and using knowledge between tasks. Despite a surge of interest in lifelong RL in recent years, the lack of a realistic testbed makes robust evaluation of lifelong learning algorithms difficult. Multi-agent RL (MARL), on the other hand, can be seen as a natural scenario for lifelong RL due to its inherent non-stationarity, since the agents' policies change over time. In this thesis, we introduce a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings. Our setup is based on Hanabi, a partially observable, fully cooperative multi-agent game that has been shown to be challenging for zero-shot coordination. Its large strategy space makes it a desirable environment for lifelong RL tasks. We evaluate several recent MARL methods, and benchmark state-of-the-art lifelong learning algorithms in limited memory and computation regimes to shed light on their strengths and weaknesses. This continual learning paradigm also provides us with a pragmatic way of going beyond centralized training, which is the most commonly used training protocol in MARL. We empirically show that the agents trained in our setup are able to coordinate well with unknown agents, without any additional assumptions made by previous works. Key words: multi-agent reinforcement learning, lifelong learning.
379

Návrh metod a nástrojů pro zrychlení vývoje softwaru pro vestavěné procesory se zaměřením na aplikace v mechatronice / Design of Methods and Tools Accelerating the Software Design for Embedded Processors Targeted for Mechatronics Applications

Lamberský, Vojtěch January 2015 (has links)
The main focus of this dissertation thesis is on methods and tools that can speed up the software development process for embedded processors used in mechatronics applications. The first part of this work introduces software and hardware tools suitable for rapid development and prototyping of new applications in use today. The work focuses on two main topics from this application field. The first topic is the development of tools for automatic code generation from the Simulink environment for an embedded processor. The second topic is the development of tools enabling execution time prediction based on a Simulink model. The next chapter describes various aspects and properties of the Cerebot blockset, a toolset for fully automatic code generation from the Simulink environment for an embedded processor. The following chapter describes various methods that are suitable for predicting the execution time on an embedded processor based on a Simulink model. The main contribution of this work is the support created for fully automatic code generation from Simulink for the MX7 cK hardware, which also enables code generation for a complex peripheral (a graphic display unit). Another important contribution of this work is the developed method for automatic prediction of software execution time based on a Simulink model.
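One common way to estimate execution time from a model, sketched below, is to sum profiled per-block execution times weighted by how often each block runs relative to the base sample period. This is only a generic illustration of the idea: the block names, cycle counts, and clock frequency are hypothetical, and the sketch is not the prediction method developed in the thesis.

# Hypothetical per-block cycle counts profiled on the target processor.
BLOCK_CYCLES = {
    "AnalogInput": 220,
    "DiscretePID": 540,
    "PWMOutput": 180,
    "GraphicDisplayUpdate": 25_000,
}

def predict_step_time(model_blocks, base_period_s, cpu_hz=80e6):
    """model_blocks: (block_type, sample_period_s) pairs extracted from the Simulink model.
    Returns the estimated (amortised) CPU time consumed per base sample period."""
    cycles = 0.0
    for block_type, period in model_blocks:
        executions_per_base_step = base_period_s / period   # <= 1 for slower-rate blocks
        cycles += BLOCK_CYCLES[block_type] * executions_per_base_step
    return cycles / cpu_hz

model = [("AnalogInput", 1e-3), ("DiscretePID", 1e-3), ("PWMOutput", 1e-3),
         ("GraphicDisplayUpdate", 100e-3)]
t = predict_step_time(model, base_period_s=1e-3)
print(f"estimated CPU load: {100 * t / 1e-3:.1f}% of the 1 ms base period")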
380

Comparative Study of the Inference of an Image Quality Assessment Algorithm : Inference Benchmarking of an Image Quality Assessment Algorithm hosted on Cloud Architectures / En Jämförande Studie av Inferensen av en Bildkvalitetsbedömningsalgoritm : Inferens Benchmark av en Bildkvalitetsbedömingsalgoritm i olika Molnarkitekturer

Petersson, Jesper January 2023 (has links)
Training and serving machine learning models on an instance has become exceedingly time- and resource-consuming. To solve this issue, cloud computing is being used to train and serve the models. However, there is a gap in research where these cloud computing platforms have been evaluated for these tasks. This thesis investigates the inference task of an image quality assessment algorithm on different Machine Learning as a Service architectures. The quantitative metrics used for the comparison are latency, inference time, throughput, carbon footprint, and cost. Machine learning has a wide range of applications, with one of its most popular areas being image recognition or image classification. To effectively classify an image, it is imperative that the image is of high quality. This requirement is not always met, particularly in situations where users capture images with their mobile devices or other equipment. In light of this, there is a need for image quality assessment, which can be achieved through the implementation of an image quality assessment model such as BRISQUE. When hosting BRISQUE in the cloud, there is a plethora of hardware options to choose from. This thesis conducts a benchmark of these hardware options to evaluate the performance and sustainability of BRISQUE's image quality assessment on various cloud hardware. The metrics for evaluation include inference time, hourly cost, effective cost, energy consumption, and emissions. Additionally, this thesis investigates the feasibility of incorporating sustainability metrics, such as energy consumption and emissions, into machine learning benchmarks in cloud environments. The results of the study reveal that an instance type from GCP was generally the best-performing among the 15 tested. The image quality assessment model appeared to benefit more from a higher number of cores than from a high CPU clock speed. In terms of sustainability, all instance types displayed a similar level of energy consumption, but there were variations in emissions. Further analysis revealed that the selection of region played a significant role in determining the level of emissions produced by the cloud environment. However, the availability of such sustainability data is limited in a cloud environment due to restrictions imposed by cloud providers, rendering the inclusion of these metrics in machine learning benchmarks in cloud environments problematic.
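A generic sketch of how the per-instance metrics listed above (latency, throughput, effective cost, energy and emissions) can be gathered for an image quality model is shown below. The assess_quality callable stands in for the BRISQUE scoring step, and the power draw, hourly price and grid carbon intensity are assumed inputs rather than values from the study.

import time
import statistics

def benchmark_inference(assess_quality, images, hourly_usd, watts, grid_gco2_per_kwh):
    """Measure latency, throughput, effective cost and an energy/emission estimate for
    one cloud instance type.  `assess_quality` stands in for the BRISQUE scoring call."""
    latencies = []
    for img in images:
        t0 = time.perf_counter()
        assess_quality(img)
        latencies.append(time.perf_counter() - t0)

    total_s = sum(latencies)
    energy_kwh = watts * total_s / 3_600_000                 # W * s -> kWh
    return {
        "mean_latency_ms": 1000 * statistics.mean(latencies),
        "throughput_img_per_s": len(images) / total_s,
        "usd_per_image": hourly_usd * total_s / 3600 / len(images),
        "emissions_gco2": energy_kwh * grid_gco2_per_kwh,    # depends strongly on region
    }

# Usage (hypothetical values): benchmark_inference(brisque.score, test_images, 0.38, 65, 300)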
