211

An Open-Source Framework for Large-Scale ML Model Serving

Sigfridsson, Petter January 2022 (has links)
The machine learning (ML) industry has taken great strides forward and today faces new challenges. Many more models are developed, used and served within the industry, and the datasets that models are trained on are constantly changing. This demands that modern machine learning processes handle a large number of models, extreme load and recurring updates in a scalable manner. These challenges are addressed by a concept called model serving, a relatively new area where further work is required on both conceptual and technical challenges. Existing ML model serving solutions aim to be scalable for the purpose of serving one model at a time, while the industry requires that the whole ML process, the number of served models and the recurring updates all scale. That is why this thesis presents an open-source framework for large-scale ML model serving that aims to meet the requirements of today's ML industry. The presented framework is shown to handle a large-scale ML model serving environment in a scalable way, with some limitations: results indicate that the number of parallel requests the framework can handle could be further optimized, which would improve resource utilization. One avenue for future improvement is to integrate the developed framework as an application into the open-source machine learning platform STACKn.
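
The central requirement described here — serving many independently updated models behind one interface — can be illustrated with a minimal sketch (hypothetical names and API; the thesis framework itself is not reproduced here):

```python
# Minimal sketch of multi-model serving: a registry that routes requests to
# named models and supports recurring (versioned) updates. Names are
# illustrative; this is not the thesis framework's actual API.
import threading
from typing import Any, Callable, Dict, Tuple

class ModelRegistry:
    def __init__(self) -> None:
        self._models: Dict[str, Tuple[int, Callable[[Any], Any]]] = {}
        self._lock = threading.RLock()  # the registry must tolerate concurrent updates

    def deploy(self, name: str, version: int, predict_fn: Callable[[Any], Any]) -> None:
        """Register or hot-swap a model; newer versions replace older ones."""
        with self._lock:
            current = self._models.get(name)
            if current is None or version > current[0]:
                self._models[name] = (version, predict_fn)

    def predict(self, name: str, payload: Any) -> Any:
        """Route a request to the named model's current version."""
        with self._lock:
            version, fn = self._models[name]
        return fn(payload)

registry = ModelRegistry()
registry.deploy("churn", version=1, predict_fn=lambda x: 0.1 * x)
registry.deploy("churn", version=2, predict_fn=lambda x: 0.2 * x)  # recurring update
print(registry.predict("churn", 10))  # 2.0 -- served by version 2
```
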
212

Novel localised quality of service routing algorithms: performance evaluation of some new localised quality of service routing algorithms based on bandwidth and delay as the metrics for candidate path selection

Alghamdi, Turki A. January 2010 (has links)
The growing demand for a wide variety of internet applications requires management of large-scale networks by efficient Quality of Service (QoS) routing, which contributes considerably to the QoS architecture. The biggest contemporary drawback in maintaining and distributing global state is the increase in communication overheads. Network imbalance, where the frequently used links on the shortest paths carry most of the load, is regarded as a major problem for best-effort service. Localised QoS routing, where source nodes use statistics collected locally, is already described in the literature as more advantageous. Scalability, however, remains one of the main concerns of existing localised QoS routing algorithms. The main aim of this thesis is to present and validate new localised algorithms in order to improve the scalability of QoS routing. Existing localised routing schemes, Credit Based Routing (CBR) and Proportional Sticky Routing (PSR), use blocking probability as a factor in selecting routing paths and work with either credits or flow proportions respectively, which makes it impossible to maintain up-to-date information. Our proposed Highest Minimum Bandwidth (HMB) and Highest Average Bottleneck Bandwidth History (HABBH) algorithms therefore utilise bandwidth as the direct QoS criterion for selecting routing paths. We introduce an Integrated Delay Based Routing and Admission Control mechanism. Using this technique, Minimum Total Delay (MTD), Low Fraction Failure (LFF) and Low Path Failure (LPF) were compared against a global QoS routing scheme, Dijkstra, and the localised High Path Credit (HPC) scheme, and showed superior performance. Simulations with non-uniformly distributed traffic reduced the blocking probability of the proposed algorithms. We therefore advocate the algorithms presented in the thesis as a scalable approach to controlling large networks, and we argue that bandwidth and mean delay are feasible QoS constraints for selecting optimal paths from locally collected information. We have demonstrated that a few good candidate paths can be selected to balance the load in the network and minimise communication overhead by applying the disjoint paths method, recalculation of the candidate path set and a dynamic path selection method. Thus, localised QoS routing can be used as a load balancing tool to improve network resource utilization. A delay and bandwidth combination is one of the future prospects of our work, and the positive results presented in the thesis suggest that further development of a distributed approach to candidate path selection may enhance the proposed localised algorithms.
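
As an illustration of the bandwidth criterion, a toy sketch of bottleneck-bandwidth path selection in the spirit of HMB follows (the actual algorithms rely on locally collected statistics and candidate-path maintenance not modeled here):

```python
# Toy sketch of bottleneck-bandwidth candidate path selection.
# Each candidate path is a list of links; each link has a residual bandwidth.
from typing import Dict, List, Optional, Tuple

Link = Tuple[str, str]

def bottleneck(path: List[Link], residual: Dict[Link, float]) -> float:
    """Bottleneck bandwidth of a path = minimum residual bandwidth on its links."""
    return min(residual[link] for link in path)

def select_path(candidates: List[List[Link]],
                residual: Dict[Link, float],
                demand: float) -> Optional[List[Link]]:
    """Pick the candidate with the highest minimum bandwidth that fits the demand."""
    feasible = [p for p in candidates if bottleneck(p, residual) >= demand]
    return max(feasible, key=lambda p: bottleneck(p, residual)) if feasible else None

residual = {("a", "b"): 30.0, ("b", "d"): 10.0, ("a", "c"): 20.0, ("c", "d"): 25.0}
paths = [[("a", "b"), ("b", "d")], [("a", "c"), ("c", "d")]]
print(select_path(paths, residual, demand=5.0))  # [('a','c'),('c','d')], bottleneck 20
```
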
213

Understanding the performance of mutual exclusion algorithms on modern multicore machines

Guiroux, Hugo 17 December 2018 (has links)
A plethora of optimized mutual exclusion lock algorithms have been designed over the past 25 years to mitigate performance bottlenecks related to critical sections and synchronization. Unfortunately, there is currently no broad study of the behavior of these optimized lock algorithms on realistic applications that considers different performance metrics, such as energy efficiency and tail latency. In this thesis, we perform a thorough and practical analysis, with the goal of providing software developers with enough information to achieve fast, scalable and energy-efficient synchronization in their systems. First, we provide a performance study of 28 state-of-the-art mutex lock algorithms, on 40 applications, and four different multicore machines. We not only consider throughput (traditionally the main performance metric), but also energy efficiency and tail latency, which are becoming increasingly important. Second, we present an in-depth analysis in which we summarize our findings for all the studied applications. In particular, we describe nine different lock-related performance bottlenecks, and propose six guidelines helping software developers with their choice of a lock algorithm according to the different lock properties and the application characteristics. From our detailed analysis, we make a number of observations regarding locking algorithms and application behaviors, several of which have not been previously discovered: (i) applications not only stress the lock/unlock interface, but also the full locking API (e.g., trylocks, condition variables), (ii) the memory footprint of a lock can directly affect application performance, (iii) for many applications, the interaction between locks and scheduling is an important application performance factor, (iv) lock tail latencies may or may not affect application tail latency, (v) no single lock is systematically the best, (vi) choosing the best lock is difficult (as it depends on many factors such as the workload and the machine), and (vii) energy efficiency and throughput go hand in hand in the context of lock algorithms. These findings highlight that locking involves more considerations than the simple "lock - unlock" interface and call for further research on designing low-memory-footprint adaptive locks that fully and efficiently support the full lock interface and consider all performance metrics.
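
The two metrics the study adds to the traditional throughput view can be illustrated with a minimal microbenchmark sketch (simplified; the thesis measures real applications, not a synthetic counter loop):

```python
# Sketch: measuring lock throughput and tail (p99) acquisition latency for a
# contended critical section. Illustrative only, not the thesis's harness.
import threading
import time

lock = threading.Lock()
latencies = []  # list.append is atomic in CPython, so workers can share it
counter = 0

def worker(n_iters: int) -> None:
    global counter
    for _ in range(n_iters):
        t0 = time.perf_counter_ns()
        with lock:                       # contended acquire under many threads
            counter += 1                 # the critical section
        latencies.append(time.perf_counter_ns() - t0)

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
start = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.perf_counter() - start

latencies.sort()
print(f"throughput: {counter / elapsed:,.0f} ops/s")
print(f"p99 acquire+CS latency: {latencies[int(0.99 * len(latencies))]} ns")
```
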
214

Towards scalable, multi-view urban modeling using structure priors

Bourki, Amine 21 December 2017 (has links)
In this thesis, we address the problem of 3D reconstruction from a sequence of calibrated street-level photographs with a simultaneous focus on scalability and the use of structure priors in Multi-View Stereo (MVS). While both aspects have been studied broadly, existing scalable MVS approaches do not handle well the ubiquitous, yet simple, structural regularities of man-made environments. On the other hand, structure-aware 3D reconstruction methods are slow, scale poorly with the size of the input sequences and may even require additional restrictive information. The goal of this thesis is to reconcile scalability and structure awareness within common MVS grounds using soft, generic priors which encourage: (i) piecewise planarity, (ii) alignment of object boundaries with image gradients, (iii) alignment with vanishing directions (VDs), and (iv) object co-planarity. To do so, we present the novel “Patchwork Stereo” framework, which integrates photometric stereo from a handful of wide-baseline views and a sparse 3D point cloud, combining robust 3D plane extraction and top-down image partitioning from a unified 2D-3D analysis in a principled Markov Random Field energy minimization. We evaluate our contributions quantitatively and qualitatively on challenging urban datasets and illustrate results which are at least on par with state-of-the-art methods in terms of geometric structure, but achieved several orders of magnitude faster, paving the way for photo-realistic city-scale modeling.
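
In generic form, such prior-regularized labeling problems are posed as a Markov Random Field energy over per-pixel plane labels (a schematic textbook formulation; the thesis's exact energy terms are not reproduced here):

    E(\ell) = \sum_{p} D_p(\ell_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(\ell_p, \ell_q)

where D_p scores the photometric and geometric agreement of assigning pixel p to plane \ell_p, and the pairwise term V_{pq} over neighboring pixels \mathcal{N} encodes the soft priors (piecewise planarity, boundary alignment with gradients and vanishing directions, co-planarity), weighted by \lambda.
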
215

Growth in business consulting and advising: limiting factors and growth factors

Hoffmann, Davi Laskani 25 October 2018 (has links)
The importance of micro and small companies in generating wealth for Brazil is undeniable: Brazilian micro and small enterprises are responsible for a significant share of national GDP and of formal jobs in the country. However, studies show that a large proportion of these companies are shut down after a short period of operation, never reaching the growth and maturity phases, essentially due to failures in their management. The organization studied in this research, of which the researcher is a partner, went through the transition from micro to small company during the research process of this dissertation and faces the challenge of growing. From this, the research question emerged: "How to deal with the limiting and driving factors of growth of a business consulting and advisory firm?". The objective was to propose a systematization of initiatives that would overcome the identified limitations and enable the growth of the business consulting and advisory firm. Initially, the observed causes of the restrictions on this organization's growth were identified by the researcher and his team: limited resources, a low level of standardization and unclear positioning. As for the research method, a bibliographic review was used to meet the secondary objective of identifying the limiting and driving factors of growth in the literature, as well as solutions for growth. The theoretical framework covered the following themes: the particularities of small companies and their growth cycle, entrepreneurial characteristics, best growth practices observed in small and medium enterprises, the specifics of service provision in consulting, and strategy and positioning. In order to investigate best practices for growth in business consultancies and advisory firms, six semi-structured interviews were carried out with owners of service companies related to the studied organization's segment that had gone through a growth process. From the theory and the interviews, 9 limiters of business growth were identified. From the interviews, 23 growth drivers were listed and related to the limiters, generating a relational matrix. Finally, the action-research method was used to elaborate a systematization of the initiatives that would overcome the identified limitations and enable the firm's growth. Based on the identified limiters and drivers, and on the practical examples cited by the interviewees, the researcher listed 17 initiatives that made up an intervention plan, that is, a systematization. Over the course of the research, 5 of the 17 initiatives were implemented in the studied organization and the results were observed. The organization's average gross monthly revenue doubled between the period before and the period after the start of the research, indicating a positive correlation with the implementation of some of the intervention plan's initiatives, though not limited to the influence of these variables alone. The limitations of this research were the lack of time (as it is a master's project) to apply the intervention plan in full and to conduct a quantitative study to validate the plan. As a contribution, it is hoped that entrepreneurs or aspiring entrepreneurs who face situations similar to those faced by the researcher can use the practical evidence in this material, applying it in their own organizations.
216

Scalability study of event-driven servers in multi-processed systems: a complete case study

Cordeiro, Daniel de Angelis 27 October 2006 (has links)
The explosive growth in the number of Internet users has led software architects to reevaluate issues related to the scalability of services deployed on a large scale. It is still challenging to design software architectures that do not suffer performance degradation as concurrent access increases. In this work, we investigate the impact of the operating system on issues related to the performance, parallelization and scalability of interactive multiplayer games. In particular, we study and extend the interactive multiplayer game QuakeWorld, made publicly available by id Software under the GPL license. We created a new parallelization model for QuakeWorld's distributed simulation and implemented that model in the QuakeWorld server, with adaptations that allow the operating system to manage the execution of the generated workload in a more convenient way.
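
The general idea of handing a tick's workload to the OS scheduler can be sketched as follows (illustrative only; the thesis's actual QuakeWorld parallelization model is more involved):

```python
# Sketch: parallelizing one simulation tick of a game server with a worker
# pool, so the OS scheduler places the work across cores. Hypothetical
# structure; not the thesis's actual QuakeWorld server code.
from concurrent.futures import ThreadPoolExecutor

def update_region(region_id: int, dt: float) -> int:
    """Placeholder for physics/game-state updates of one world region."""
    return region_id  # e.g., number of entities updated

def simulation_tick(n_regions: int, dt: float) -> None:
    # Independent world regions are updated in parallel within one tick;
    # thread placement and preemption are left to the operating system.
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda r: update_region(r, dt), range(n_regions)))
    # Cross-region interactions would be reconciled here, after the barrier.

simulation_tick(n_regions=8, dt=0.05)  # e.g., a 50 ms server tick
```
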
217

Network architectures and energy efficiency for high performance data centers

Baccour, Emna 30 June 2017 (has links)
The increasing trend to migrate applications, computation and storage into more robust systems has led to the emergence of mega data centers hosting tens of thousands of servers. As a result, designing a data center network that interconnects this massive number of servers and provides an efficient, fault-tolerant routing service has become an urgent need, and it is the challenge addressed in this thesis. Since this is a hot research topic, many solutions have been proposed, such as adopting new interconnection technologies and new algorithms for data centers. However, many of these solutions suffer from performance problems or can be quite costly; in addition, little effort has been devoted to quality of service and power efficiency in data center networks. In order to provide a novel solution that overcomes the drawbacks of prior research while retaining its advantages, we propose new data center interconnection networks that aim to build a scalable, cost-effective, high-performance and QoS-capable networking infrastructure, together with power-aware algorithms that make the network energy-efficient. Hence, we particularly investigate the following issues: 1) fixing the architectural and topological properties of the newly proposed data centers and evaluating their performance and their capacity to provide robust systems in a faulty environment; 2) proposing routing, load-balancing, fault-tolerance and power-efficient algorithms to apply on our architectures, and examining their complexity and how they satisfy the system requirements; 3) integrating quality of service; 4) comparing our proposed data centers and algorithms to existing solutions in a realistic environment. To that end, we first study existing models, then propose improvements and suggest new methodologies and algorithms.
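
One common flavor of power-aware routing — concentrating flows on already-active links so idle ones can sleep — can be sketched as follows (a generic heuristic under stated assumptions, not the thesis's algorithms, which also address fault tolerance and QoS):

```python
# Hedged sketch of power-aware routing: prefer links that already carry
# traffic, so untouched links can be powered down. Requires networkx.
import networkx as nx

def power_aware_route(g: nx.Graph, flows, idle_penalty: float = 10.0):
    active = set()   # links already carrying traffic
    paths = {}
    for src, dst in flows:
        # Inactive links cost more, steering flows onto the active subset.
        def w(u, v, d):
            return 1.0 if frozenset((u, v)) in active else idle_penalty
        path = nx.shortest_path(g, src, dst, weight=w)
        paths[(src, dst)] = path
        active.update(frozenset(e) for e in zip(path, path[1:]))
    # Links never used by any flow are candidates for sleeping.
    asleep = {frozenset((u, v)) for u, v in g.edges} - active
    return paths, asleep

g = nx.cycle_graph(6)  # toy 6-node ring topology
paths, asleep = power_aware_route(g, [(0, 3), (1, 3)])
print(paths, f"-- {len(asleep)} links can sleep")
```
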
218

Snapshots in large-scale distributed file systems

Stender, Jan 21 January 2013 (has links)
Snapshots are present in many modern file systems, where they allow consistent online backups to be created, corruptions or inadvertent changes of files to be rolled back, and a record of changes to files and directories to be kept. While most previous work on file system snapshots refers to local file systems, modern trends like cloud and cluster computing have shifted the focus towards distributed storage infrastructures. Such infrastructures often comprise large numbers of storage servers, which presents particular challenges in terms of scalability, availability and failure tolerance. This thesis describes a snapshot algorithm for large-scale distributed file systems and its integration in XtreemFS, a scalable object-based file system for grid and cloud computing environments. The two building blocks of the algorithm are a version management scheme, which efficiently records versions of file content and metadata, and a scalable, failure-tolerant mechanism that aggregates specific versions in a snapshot. To overcome the lack of a global time in a distributed system, the algorithm implements a relaxed consistency model for snapshots, based on timestamps assigned by loosely synchronized server clocks. The main contributions of the thesis are: 1) a formal model of snapshots and snapshot consistency in distributed file systems; 2) the description of efficient schemes for the management of metadata and file content versions in object-based file systems; 3) the formal presentation of a scalable, fault-tolerant snapshot algorithm for large-scale object-based file systems; 4) a detailed description of the implementation of the algorithm as part of XtreemFS. An extensive evaluation shows that the proposed algorithm has no severe impact on user I/O, and that it scales to large numbers of snapshots and versions.
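
The timestamp-based consistency model can be sketched in a few lines: each object keeps versions tagged by loosely synchronized server clocks, and a snapshot taken at time T reads, for each object, the newest version not newer than T (a simplification of the algorithm described above, not XtreemFS code):

```python
# Sketch of timestamp-based snapshot reads: versions are tagged with
# loosely synchronized server-clock timestamps; a snapshot at time T sees,
# per object, the newest version with timestamp <= T. Illustrative only.
import bisect
from typing import List, Optional, Tuple

class VersionedObject:
    def __init__(self) -> None:
        self.versions: List[Tuple[float, bytes]] = []  # kept sorted by timestamp

    def write(self, ts: float, data: bytes) -> None:
        bisect.insort(self.versions, (ts, data))

    def read_at(self, snapshot_ts: float) -> Optional[bytes]:
        """Return the newest version not newer than the snapshot timestamp."""
        timestamps = [ts for ts, _ in self.versions]
        i = bisect.bisect_right(timestamps, snapshot_ts)
        return self.versions[i - 1][1] if i else None

obj = VersionedObject()
obj.write(10.0, b"v1")
obj.write(20.0, b"v2")
print(obj.read_at(15.0))  # b'v1' -- writes after T=15 are excluded
```
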
219

Flammability Characteristics at Heat Fluxes up to 200 kW/m² and the Effect of Oxygen on Flame Heat Flux

Beaulieu, Patricia 19 December 2005 (has links)
"This dissertation documents two interrelated studies that were conducted to more fundamentally understand the scalability of flame heat flux. The first study used an applied heat flux in the bench scale horizontal orientation which simulates a large scale flame heat flux. The second study used enhanced ambient oxygen to actually increase the bench scale flame heat flux itself. Understanding the scalability of flame heat flux more fully will allow better ignition and combustion models to be developed as well as improved test methods. The key aspect of the first study was the use of real scale applied heat flux up to 200 kW/m2. An unexpected non-linear trend is observed in the typical plotting methods currently used in fire protection engineering for ignition and mass loss flux data for several materials tested. This non-linearity is a true material response. This study shows that viewing ignition as an inert material process is inaccurate at predicting the surface temperature at higher heat fluxes and suggests that decomposition kinetics at the surface and possibly even in-depth may need to be included in an analysis of the process of ignition. This study also shows that viewing burning strictly as a surface process where the decomposition kinetics is lumped into the heat of gasification may be inaccurate and the energy balance is too simplified to represent the physics occurring. The key aspect of the second study was direct experimental measurements of flame heat flux back to the burning surface for 20.9 to 40 % ambient oxygen concentrations. The total flame heat flux in enhanced ambient oxygen does not simulate large scale flame heat flux in the horizontal orientation. The vertical orientation shows that enhanced ambient oxygen increases the flame heat flux more significantly and also increases the measured flame spread velocity."
220

The influence of dynamic capabilities on the scalability process of social innovation

Pirotti, Tatiane Martins Cruz 13 June 2018 (has links)
Social innovations are important tools for minimizing or solving social problems. However, major management challenges remain in developing social innovations that endure and can expand their social impact, thus generating what is meant by scalability. Dynamic capabilities and their microfoundations, if adapted to the social context, can support the scalability process. Through a single case study of a Brazilian social innovation with more than four decades of existence, the present study analyzes the ways in which dynamic capabilities can influence the scalability process of a social innovation. The research method is qualitative, using a process approach. Data were collected through interviews, documents and observations, which allowed the identification of four methodological phases. For each phase, the events and dynamic-capability microfoundations that most affected the scalability process of the studied social innovation were identified and analyzed. As the main result, it was inferred that dynamic capabilities exert a positive influence on the scalability process of social innovation, aiding in the perception of opportunities and threats, as well as in the seizing and creation of the transformations and adaptations required by environmental changes and scalability goals. In addition, the emergence of new microfoundations is associated with environmental needs and with the engagement of several actors. This research contributes to the management of social innovation by listing practices (microfoundations) capable of influencing the scalability process. As a theoretical and academic contribution, the study advances the understanding of how dynamic capabilities can influence the scalability process, and its use of the process research approach provided a deep understanding of the scalability process along the studied social innovation's trajectory.
