  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Modelling and executing multidimensional data analysis applications over distributed architectures

Pan, Jie 13 December 2010 (has links)
Along with the development of hardware and software, more and more data is generated at a rate much faster than ever.
Processing large volumes of data is becoming a challenge for data analysis software. Additionally, short response times are demanded by interactive operational data analysis tools. To address these issues, people look for solutions based on parallel computing. Traditional approaches rely on expensive high-performance hardware, like supercomputers. Another approach, using commodity hardware, has been less investigated. In this thesis, we aim to utilize commodity hardware to resolve these issues. We propose to utilize a parallel programming model issued from Cloud Computing, MapReduce, to parallelize multidimensional analytical query processing and benefit from its good scalability and fault-tolerance mechanisms. In this work, we first revisit the existing techniques for optimizing multidimensional data analysis queries, including pre-computing, indexing, data partitioning, and query processing parallelism. Then, we study the MapReduce model in detail. The basic idea of MapReduce and the extended MapCombineReduce model are presented. In particular, we analyse the communication cost of a MapReduce procedure. After presenting the data storage that works with MapReduce, we discuss the features of data management applications suitable for Cloud Computing, and the utilization of MapReduce for data analysis applications in existing work. Next, we focus on the MapReduce-based parallelization of the Multiple Group-by query, a typical query used in multidimensional data exploration. We present the MapReduce-based initial implementation and a MapCombineReduce-based optimization. According to the experimental results, our optimized version shows a better speed-up and a better scalability than the initial version. We also give a formal execution time estimation for both the initial implementation and the optimized one. In order to further optimize the processing of the Multiple Group-by query, a data restructuring phase is proposed to optimize individual job execution.
We redesign the organization of data storage, applying data partitioning, inverted indexing, and data compression techniques during the data restructuring phase. We redefine the MapReduce job's calculations and the job scheduling relying on the new data structure. Based on measurements of execution time, we give a formal estimation and identify the factors that impact performance, including query selectivity, the number of mappers running concurrently on one node, the distribution of hit data, intermediate output size, the adopted serialization algorithms, network status, whether a combiner is used, and the data partitioning methods. We give an estimation model for the query processing's execution time, and specifically estimate the values of various parameters for query processing based on horizontal data partitioning. In order to support more flexible distinct-value-wise job scheduling, we design a new compressed data structure that works with vertical partitioning. It allows the aggregations over one particular distinct value to be performed within one continuous process.
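The Multiple Group-by parallelization described above can be sketched in miniature. The following Python toy is illustrative only, not the thesis's actual implementation: map emits one (dimension, value) key per requested group-by, an optional combiner pre-aggregates locally (the MapCombineReduce optimization, which shrinks intermediate output before the shuffle), and reduce produces the final aggregates. All record fields and names are invented for the example.

```python
from collections import defaultdict

# Toy fact records: dimension columns plus one measure ("sales")
records = [
    {"region": "EU", "year": 2009, "sales": 10},
    {"region": "EU", "year": 2010, "sales": 20},
    {"region": "US", "year": 2010, "sales": 5},
]

GROUP_BY_DIMENSIONS = ["region", "year"]  # one Group-by per dimension

def map_phase(record):
    # Emit a ((dimension, value), measure) pair for each requested Group-by
    for dim in GROUP_BY_DIMENSIONS:
        yield (dim, record[dim]), record["sales"]

def combine(pairs):
    # Local pre-aggregation: the MapCombineReduce step that shrinks
    # intermediate output before it crosses the network
    acc = defaultdict(int)
    for key, value in pairs:
        acc[key] += value
    return list(acc.items())

def reduce_phase(shuffled):
    result = defaultdict(int)
    for key, value in shuffled:
        result[key] += value
    return dict(result)

# Simulate: each record acts as one mapper's input, combiners run locally
intermediate = []
for rec in records:
    intermediate.extend(combine(map_phase(rec)))
result = reduce_phase(intermediate)
print(result[("region", "EU")])  # 30
```

The combiner matters most when many mapper outputs share keys: summing locally before the shuffle is exactly the communication-cost reduction analysed in the abstract.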
22

Adaptation of algorithms for underwater sonar data processing to GPU-based systems

Sundin, Patricia January 2013 (has links)
In this master thesis, algorithms for acoustic simulations in underwater environments are ported for GPU processing. The GPU parallel computing platforms used are CUDA, OpenCL and SkePU. The purpose of this master thesis is to adapt and evaluate the ported algorithms' performance on two modern NVIDIA GPUs, Tesla K20 and Quadro K5000. Several optimizations described in the existing literature on GPU processing (e.g. usage of shared memory, coalesced memory accesses) are implemented, and multiple versions of each algorithm are created to study their trade-offs. Evaluation on the two GPUs showed that different versions of the same algorithm have different performance characteristics, and that execution with the best-performing version can give better performance than the original algorithm executing on 8 CPUs. A performance comparison between the CUDA, OpenCL and SkePU versions of one algorithm is also made.
23

Design and implementation of a next generation Web Interaction SaaS prototype

Kolchenko, Mykhailo January 2012 (has links)
Web applications are getting more and more complicated with the extensive growth of the Internet. In order to cope with constantly increasing user demands, special attention should be paid to performance optimizations. While a lot of attention is devoted to back-end optimization, the front-end is often overlooked and is therefore fertile ground for performance bottlenecks. This thesis investigates a set of well-established front-end optimization techniques in order to find out which are the most efficient. It primarily focuses on a limited set of techniques that can be applied to static web resources, among them resource consolidation, minification, compression and caching. The measurements used during the examination are based on four metrics: the page size, the page load time, the page start-render time and the number of requests the page made. The results show which methods impact performance most. In particular, they revealed that the resource compression technique alone brings significant performance improvements: the page size was reduced by 79% and the page load time by 72%. Despite that, it is evident that the best results can be achieved by a combination of different techniques. All optimization techniques combined made a serious difference, helping us reduce the page load time from 24 seconds down to just one second.
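As an illustration of why compression and consolidation pay off on repetitive static resources, here is a small stdlib-only Python sketch. The input is a made-up toy resource; the figures it prints are for that toy, not the 79%/72% improvements reported above.

```python
import gzip

# A toy "static resource": repetitive markup compresses well, like real HTML/CSS/JS
resource = b"<div class='item'>placeholder</div>\n" * 500

compressed = gzip.compress(resource)
ratio = 1 - len(compressed) / len(resource)
print(f"original: {len(resource)} bytes, compressed: {len(compressed)} bytes")
print(f"reduction: {ratio:.0%}")

# Consolidation: compressing one combined file typically beats compressing
# many small files separately (shared dictionary, one header instead of many)
parts = [resource[i:i + 1000] for i in range(0, len(resource), 1000)]
separate = sum(len(gzip.compress(p)) for p in parts)
combined = len(gzip.compress(resource))
assert combined < separate
```

Fewer, larger, compressed resources also reduce the request count, the fourth metric the thesis tracks.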
24

Performance-based adaptation of Scala programs

Kubát, Petr January 2017 (has links)
Dynamic adaptivity of a computer system is its ability to modify its behavior according to the environment in which it is executed. It allows the system to achieve better performance, but usually requires a specialized architecture and brings more complexity. The thesis presents an analysis and design of a framework that allows simple and fluent performance-based adaptive development at the level of functions and methods. It closely examines the API requirements and the possibilities of integrating such a framework into the Scala programming language using its advanced syntactic constructs. On a theoretical level, it deals with the problem of selecting the most appropriate function to execute for a given input, based on measurements of previous executions. In the provided framework implementation, the main emphasis is on modularity and extensibility, and many possible future extensions are outlined. The solution is evaluated on a variety of development scenarios, ranging from input adaptation of algorithms to environment adaptation of complex distributed computations in Apache Spark.
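The core selection problem, choosing the variant that has historically run fastest for a given input, can be sketched without the Scala framework itself. The Python toy below (all class and variant names are invented for the example) explores each variant a few times, then exploits the one with the best mean runtime for the input-size bucket at hand:

```python
import time
import random
from collections import defaultdict

class AdaptiveFunction:
    """Per input-size bucket, pick the variant that ran fastest so far.

    A toy version of performance-based adaptation: explore all variants
    a few times, then exploit the historically best one.
    """

    def __init__(self, variants, explore_runs=3):
        self.variants = variants
        self.explore_runs = explore_runs
        self.history = defaultdict(lambda: defaultdict(list))  # bucket -> name -> times

    def _bucket(self, data):
        return len(data) // 1000  # crude input characterization

    def __call__(self, data):
        stats = self.history[self._bucket(data)]
        unexplored = [n for n in self.variants if len(stats[n]) < self.explore_runs]
        if unexplored:
            name = random.choice(unexplored)  # still exploring
        else:
            name = min(stats, key=lambda n: sum(stats[n]) / len(stats[n]))
        start = time.perf_counter()
        result = self.variants[name](data)
        stats[name].append(time.perf_counter() - start)
        return result

# Two interchangeable implementations of the same task (finding the minimum)
variants = {
    "sorted_builtin": lambda xs: sorted(xs)[0],
    "linear_min": lambda xs: min(xs),
}
adaptive_min = AdaptiveFunction(variants)
data = list(range(5000, 0, -1))
for _ in range(10):
    assert adaptive_min(data) == 1
```

A real framework, as the abstract notes, also has to worry about API fluency and measurement noise; the bucketing here is the simplest possible input characterization.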
25

Aurora: seamless optimization of OpenMP applications

Lorenzon, Arthur Francisco January 2018 (has links)
Efficiently exploiting thread-level parallelism has been challenging for software developers. As many parallel applications do not scale with the number of cores, blindly increasing the number of threads may not produce the best results in performance or energy. However, the task of rightly choosing the ideal number of threads is not straightforward: many variables are involved (e.g. off-chip bus saturation and the overhead of data synchronization), which change according to different aspects of the system at hand (e.g., input set, micro-architecture) and even during execution. To address this complex scenario, this thesis presents Aurora. It is capable of automatically finding, at run time and with minimum overhead, the optimal number of threads for each parallel region of the application, and of re-adapting when the behavior of a region changes during execution. Aurora works with OpenMP and is completely transparent to both designer and end user: given an OpenMP application binary, Aurora optimizes it without any code transformation or recompilation. By executing fifteen well-known benchmarks on four multi-core processors, Aurora improves the trade-off between performance and energy by up to: 98% over the standard OpenMP execution; 86% over the built-in feature of OpenMP that dynamically adjusts the number of threads; and 91% over a feedback-driven threading emulation.
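The search for an ideal thread count can be caricatured with a stdlib-only Python sketch: time a "parallel region" under several thread counts and keep the fastest. This is a deliberate simplification; Aurora itself adapts at run time per region inside an OpenMP binary, which the toy (invented function names throughout) does not attempt.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_region(workload, n_threads):
    """Execute one 'parallel region' with a given thread count; return runtime."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        # Fixed-shape toy region: a handful of tasks per worker
        list(pool.map(workload, range(n_threads * 4)))
    return time.perf_counter() - start

def find_best_threads(workload, max_threads=8):
    """Measure candidate thread counts and keep the fastest setting."""
    candidates = [n for n in (1, 2, 4, 8) if n <= max_threads]
    timings = {n: run_region(workload, n) for n in candidates}
    return min(timings, key=timings.get)

def cpu_light_task(i):
    # Stand-in for a parallel region's per-thread work
    return sum(range(1000))

best = find_best_threads(cpu_light_task)
assert best in (1, 2, 4, 8)
```

The abstract's point is visible even in the toy: the winning thread count depends on the workload and the machine, so it cannot be fixed at compile time.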
26

Performance assessment of cloud applications

Sándor, Gábor January 2020 (has links)
Modern CPS and mobile applications, such as augmented reality or coordinated driving, are envisioned to combine edge-cloud processing with real-time requirements. The real-time requirements, however, create a brand new challenge for cloud processing, which has traditionally been best-effort. A key to guaranteeing real-time requirements is understanding how services sharing resources in the cloud interact on the performance level. The objective of this thesis is to design a mechanism that helps to categorize cloud applications based on the type of their workload. This should result in the specification of a model defining a set of applications that can be deployed on a single node while guaranteeing a certain quality of service. It should also be able to find the optimal node on which an application could be deployed.
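One minimal reading of the placement goal, finding a node whose remaining capacity can host an application of a given workload profile, is a first-fit heuristic. The sketch below is purely illustrative and not the thesis's mechanism; the node names and resource dimensions are invented.

```python
def first_fit_node(nodes, app_demand):
    """Return the first node whose remaining capacity covers the app's demand."""
    for name, capacity in nodes.items():
        if all(capacity[r] >= app_demand[r] for r in app_demand):
            return name
    return None  # no node can host the app without overcommitting

# Hypothetical cluster state: remaining capacity per node
nodes = {
    "node-a": {"cpu": 2.0, "mem_gb": 4.0},
    "node-b": {"cpu": 8.0, "mem_gb": 32.0},
}
app = {"cpu": 4.0, "mem_gb": 8.0}
assert first_fit_node(nodes, app) == "node-b"
```

A workload-type-aware model, as the abstract proposes, would go further: two apps whose profiles interfere (e.g. both memory-bandwidth heavy) might not be co-located even when the raw capacities fit.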
27

Processing and storage of high-frequency sensor data in smart home systems

Krombholz, Manuel 05 October 2020 (has links)
The steadily developing market for intelligent smart home devices and the growing demand for such devices call for intensive scientific engagement with smart home systems. Scientific approaches are needed, in particular, to develop the components required to realize such systems. The main challenge lies in the complexity of these systems: among other things, a growing number of interconnected devices brings growing demands on the data rate to be processed. This thesis addresses the performance of smart home systems at the software level, in particular the processing of an exceptionally high volume of information and the resulting challenge regarding storage efficiency. In addition to surveying work that targets solutions for related problems, this thesis measures the frequency and duration of event processing in two smart home systems by means of a benchmark; its results are explained after a description of how both systems generally work. Furthermore, a concept for a custom smart home system and its handling of the challenges above is developed. A proof of concept is presented by evaluating an implementation of this concept: the benchmark tool is used to measure the same metrics for the implemented software, and the results are compared with those of the two other systems.
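The benchmark metrics mentioned above (frequency and duration of event processing) can be illustrated with a minimal sliding-window rate meter in Python. This is a stand-in sketch, not the thesis's benchmark tool; all names are invented.

```python
import time
from collections import deque

class EventRateMeter:
    """Track event frequency over a sliding time window.

    A minimal stand-in for measuring how many sensor events per second
    a smart home system handles.
    """

    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.stamps = deque()

    def record(self, now=None):
        # Record one processed event; evict stamps older than the window
        now = time.monotonic() if now is None else now
        self.stamps.append(now)
        while self.stamps and now - self.stamps[0] > self.window_s:
            self.stamps.popleft()

    def rate(self):
        return len(self.stamps) / self.window_s

meter = EventRateMeter()
for t in [0.0, 0.1, 0.2, 0.3]:   # four events within one second
    meter.record(now=t)
assert meter.rate() == 4.0
```

Measuring per-event handling duration works the same way: wrap each handler call in a monotonic-clock pair and aggregate, which is the second metric the benchmark compares across systems.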
28

Improving Quality of Experience through Performance Optimization of Server-Client Communication

Albinsson, Mattias, Andersson, Linus January 2016 (has links)
In software engineering it is important to consider how a potential user experiences the system during usage. No software user will have a satisfying experience if they perceive the system as slow, unresponsive, unstable or hiding information. Additionally, if the system restricts the users to only a limited set of actions, their experience will further degrade. In order to evaluate the effect these issues have on a user's perceived experience, a measure called Quality of Experience is applied. In this work the foremost objective was to improve how a user experienced a system suffering from the previously mentioned issues when searching for large amounts of data. To achieve this objective the system was evaluated to identify the issues present and which of them affected the user-perceived Quality of Experience the most. The evaluated system was a warehouse management system developed and maintained by Aptean AB's office in Hässleholm, Sweden. The system consisted of multiple clients and a server, sending data over a network. The evaluation took the form of a case study analyzing the system's performance, together with a survey performed by Aptean staff to gain knowledge of how the system was experienced when searching for large amounts of data. From the results, the three issues impacting Quality of Experience the most were identified: (1) interaction: a limited set of actions during a search; (2) transparency: limited representation of search progress and received data; (3) execution time: search completion taking a long time. After the system was analyzed, hypothesized technological solutions were implemented to resolve the identified issues. The first solution divided the data into multiple partitions, the second decreased the data size sent over the network by applying compression, and the third was a combination of the two technologies. Following the implementations, a final set of measurements together with the same survey was performed to compare the solutions based on their performance and the improvement gained in perceived Quality of Experience. The most significant improvement in perceived Quality of Experience was achieved by the data partitioning solution. While the combination of solutions offered a slight further improvement, it was primarily thanks to data partitioning, making that technology a more suitable solution for the identified issues than compression, which only slightly improved perceived Quality of Experience. When the data was partitioned, updates were sent more frequently, allowing the user not only a larger set of actions during a search but also better information in the client regarding search progress and received data. While data partitioning did not improve the execution time, it offered the user a first set of data quickly, not forcing the user to wait idly, making the user experience the system as fast. The results indicated that to increase the user's perceived Quality of Experience for systems with server-client communication, data partitioning offered several opportunities for improvement.
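The data-partitioning idea, delivering the first batch of search results quickly instead of blocking until the full scan completes, can be sketched as a generator in Python. This is illustrative only, not Aptean's implementation; the batch size and predicate are invented.

```python
def search_all(records, predicate):
    """Baseline: return everything at once; the user waits for the full scan."""
    return [r for r in records if predicate(r)]

def search_partitioned(records, predicate, partition_size=100):
    """Yield matches in partitions; the client can render the first batch
    and stay responsive (cancel, scroll, refine) while the rest streams in."""
    batch = []
    for r in records:
        if predicate(r):
            batch.append(r)
            if len(batch) == partition_size:
                yield batch
                batch = []
    if batch:
        yield batch  # final, possibly short, partition

records = list(range(1000))
is_even = lambda r: r % 2 == 0

# First partition is available after scanning only ~200 records, not all 1000
first_batch = next(search_partitioned(records, is_even))
assert len(first_batch) == 100 and first_batch[0] == 0

# Partitioning changes delivery, not the result set
flattened = [r for batch in search_partitioned(records, is_even) for r in batch]
assert flattened == search_all(records, is_even)
```

This mirrors the abstract's finding: total execution time is unchanged, but time-to-first-data drops, which is what drives the perceived Quality of Experience.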
29

Performance Modelling and Simulation of Service Chains for Telecom Clouds

Gokan Khan, Michel January 2021 (has links)
New services and ever-increasing traffic volumes require the next generation of mobile networks, e.g. 5G, to be much more flexible and scalable. The primary enabler for this flexibility is transforming network functions from proprietary hardware to software using modern virtualization technologies, paving the way for virtual network functions (VNFs). Such VNFs can then be flexibly deployed in cloud data centers while traffic is routed along a chain of VNFs through software-defined networks. However, such flexibility comes with a new challenge: allocating efficient computational resources to each VNF and optimally placing them on a cluster. In this thesis, we argue that, to achieve an autonomous and efficient performance optimization method, a solid understanding of the underlying system, the service chains, and the upcoming traffic is required. We therefore conducted a series of focused studies to address the scalability and performance issues in three stages. We first introduce an automated profiling and benchmarking framework, named NFV-Inspector, to measure and collect system KPIs as well as extract various insights from the system. Then, we propose systematic methods and algorithms for performance modelling and resource recommendation of cloud-native network functions and evaluate them on a real 5G testbed. Finally, we design and implement a bottom-up performance simulator named PerfSim to approximate the performance of service chains based on the nodes' performance models and user-defined scenarios. (Article 5, included in the thesis as a manuscript, has since been published.)
30

Performance Optimization of Ice Sheet Simulation Models: Examining ways to speed up simulations, enabling upscaling with more data

Brink, Fredrika January 2023 (has links)
This study aims to examine how simulation models can be performance-optimized in Python: optimized in the sense of executing faster and enabling upscaling with more data. To meet this aim, two models simulating the Greenland ice sheet are studied. The simulation of ice sheets is an important part of glaciology and climate change research. By following an iterative spiral model of software development and evolution, with focus on the bottlenecks, it is possible to optimize the most time-consuming code sections. Several iterations of implementing tools and techniques suitable for Python code are performed, such as introducing libraries, changing data structures, and improving code hygiene. Once the models are optimized, upscaling with a new dataset called CARRA, created from observations and modelled outcomes combined, is studied. The results indicate that the most effective approach to performance optimization is to use the Numba library to compile critical code sections to machine code and to parallelize the simulations using Joblib. Depending on the data used and the size and granularity of the simulations, speed-ups between 1.5 and 3.2 times are gained. When simulating CARRA data, the optimized code still results in faster simulations. However, the outcome demonstrates that differences exist between the ice sheets simulated with the dataset initially used and with CARRA data. Even though the CARRA dataset yields a different glaciological result, the overall changes in the ice sheet are similar to those shown in the initial dataset simulations. The CARRA dataset could possibly be used for getting an overview of what is happening to the ice sheet, but not for making detailed analyses where exact numbers are needed.
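The bottleneck-driven spiral described above starts from profiling: measure, optimize the hottest section, measure again. Below is a stdlib-only Python sketch of that first step; the simulation function is a made-up stand-in for an ice-sheet model's hotspot, not code from the thesis (which optimizes the real hotspots with Numba and Joblib).

```python
import cProfile
import pstats
import io

def simulate_step(grid):
    # Hypothetical hotspot: naive per-cell update of an ice-thickness grid
    return [[cell * 0.99 + 0.01 for cell in row] for row in grid]

def run_simulation(steps=50, size=50):
    grid = [[1.0] * size for _ in range(size)]
    for _ in range(steps):
        grid = simulate_step(grid)
    return grid

# Profile a full run to locate the bottleneck before touching any code
profiler = cProfile.Profile()
profiler.enable()
run_simulation()
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out).sort_stats("cumulative")
stats.print_stats("simulate_step")  # restrict the report to the suspect function
assert "simulate_step" in out.getvalue()
```

Once profiling confirms where the time goes, the spiral's next iteration would apply a targeted fix (in the thesis: JIT compilation of the hot loop, then parallelizing independent simulations) and re-run the same measurement to verify the gain.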
