331 |
[en] PRECIPITATION PROCESSES CONTROLLED BY LATTICE AND GRAIN BOUNDARY DIFFUSION IN ALLOY 33 (FE-NI-CR-MO-N) / [pt] PROCESSOS DE PRECIPITAÇÃO INTERGRANULAR E VOLUMÉTRICA NA LIGA 33 (FE-NI-CR-MO-N) VIVIANE DELVAUX CARNEIRO 10 February 2005 (has links)
[pt] This work investigates the microstructure and kinetics of the precipitation phenomena that occur in Alloy 33 (Fe-Ni-Cr-Mo-Cu-N), a metallic system developed by Krupp VDM to withstand high temperatures and corrosive environments. Alloy 33 undergoes continuous and discontinuous precipitation simultaneously as a result of an aging treatment carried out in a temperature range corresponding to that reached by the material when subjected to a welding process. Microstructural characterization was performed by optical microscopy, scanning electron microscopy and transmission electron microscopy, including microanalysis, owing to the nanometric scale of the precipitated phases. Discontinuous precipitation produces a lamellar structure at the grain boundaries, the result of cooperative growth between the lamellae involving substitutional (e.g. Cr) and interstitial (N) atoms. Continuous precipitation occurs inside the grains, generating precipitates with different morphologies. Microanalysis reveals that the products of both reactions grow in competition for Cr. A kinetic-morphological analysis points to the non-stationary nature of the discontinuous reaction, whose transformation rate gradually decreases until it stops completely. / [en] This work is an investigation of the microstructure and
kinetics of the phenomena occurring inside Alloy 33 (Fe-Ni-
Cr-Mo-Cu-N), a metallic system developed by Krupp VDM to
endure high temperatures and corrosive environments. Alloy 33 undergoes continuous and discontinuous precipitation simultaneously, as a result of an aging treatment performed in a temperature range corresponding to that reached during welding of the material. The
microstructural characterization was performed by optical
microscopy, scanning electron microscopy and transmission
electron microscopy, including microanalysis, due to the
nanometric nature of the precipitated phases.
Discontinuous precipitation produces a lamellar structure
along grain boundaries as a result of cooperative growth between the lamellae, involving substitutional and
interstitial atoms, Cr and N respectively. Continuous
precipitation occurs inside grains, generating
precipitates with different morphologies. Microanalysis
reveals that products of both precipitation reactions grow
competing for Cr. A kinetic-morphological analysis points
to the non-stationary characteristic of the discontinuous
precipitation, where the transformation rate diminishes
until it stops completely, as aging occurs.
|
332 |
[en] QUENCHING AND PARTITIONING OF NI-ADDED HIGH STRENGTH STEELS: KINETICS MODELLING, MICROSTRUCTURE AND MECHANICAL PROPERTIES / [pt] TÊMPERA E PARTIÇÃO EM AÇOS DE ALTA RESISTÊNCIA CONTENDO NI: MODELAGEM CINÉTICA, MICROESTRUTURA E PROPRIEDADES MECÂNICAS ANA ROSA FONSECA DE AGUIAR MARTINS 03 December 2007 (has links)
[pt] High strength steels containing significant fractions of retained austenite have attracted great commercial interest, particularly when associated with the TRIP phenomenon during the final forming process. Recently, a new heat treatment concept, called Quenching and Partitioning, has been studied as a further alternative in the development of multiphase steels. In this process, control of the volume fraction of retained austenite is possible because, during the partitioning treatment, the carbon supersaturation of the quenched martensite is used to stabilize the untransformed austenite, thus preventing further transformations that could occur at lower temperatures. The thermal processing sequence involves quenching to a temperature between Ms and Mf, followed by partitioning at a temperature equal to or higher than the quench temperature. Partitioning of carbon from the martensite to the austenite is possible provided that competing reactions, such as carbide precipitation, are suppressed by the addition of alloying elements such as Si and/or Al. A basic condition of the model is the restriction of martensite/austenite interface movement, since diffusion at low temperatures is limited to interstitial atoms. This restriction leads to a new equilibrium concept, called Constrained Carbon Equilibrium, characterized by the equality of chemical potential at the austenite-martensite interface for carbon only. In this work four steels were developed, containing different amounts of C and Ni together with Si, Mn, Mo and Cr. These elements were added to lower the Bs temperature, in order to decouple the quenching and partitioning treatment from a possible bainitic transformation. A set of quenching and partitioning conditions was then designed, involving different quench temperatures and different partitioning temperatures and times. Microstructural evaluation was carried out using optical microscopy and scanning and transmission electron microscopy. X-ray diffraction was used to quantify the fraction of retained austenite and its carbon enrichment. The carbon partitioning process was modelled using the DICTRA™ software. The results of these simulations were analysed in terms of the microstructural parameters, time and temperature, and of how this combination influences the kinetics of carbon partitioning. The results obtained for the tensile-tested samples indicated a wide combination of strength and ductility, confirming the potential of the process for optimizing mechanical properties. / [en] High strength steels containing significant fractions of
retained austenite have been developed in recent years and
are the subject of growing commercial interest when
associated with the TRIP phenomenon during deformation. A
new process concept, Quenching and Partitioning, has been
recently proposed for production of steel microstructures
containing carbon-enriched austenite. The heat treatment
sequence involves quenching to a temperature between the
martensite-start (Ms) and martensite-finish (Mf)
temperatures, followed by a partitioning treatment, above
or at the initial quench temperature, designed to enrich
the remaining untransformed austenite with the carbon
escaping from the supersaturated martensite phase, thereby
stabilizing the retained austenite phase during the
subsequent quench to room temperature. To enable the
austenite enrichment, competing reactions, principally
carbide precipitation, must be suppressed by appropriate
alloying elements, such as Si and/or Al. The concept
assumes a stationary martensite/austenite interface and the
absence of short-range movements of iron and substitutional elements. The condition under which partitioning occurs has been called Constrained Carbon Equilibrium (CCE), due to the restriction on the movement of the interface and the assumption that only carbon equilibrates its chemical potential at the interface. In this work, a group of four
alloys was investigated, containing different additions of
C and Ni and containing Si, Mn, Mo and Cr. These alloys were
designed to preclude bainite formation at the partitioning
temperatures of interest. Several heat treatments were performed on these alloys, using the Q&P concept, to
evaluate its effect on the resulting microstructure and
mechanical properties. Each alloy was quenched at selected
temperatures and partitioned from 350 to 450°C for times
ranging from 10 to 1000s. Microstructural characterization
was performed by optical microscopy, scanning and
transmission electron microscopy, while X-ray diffraction
was used to determine both the fraction and the carbon
content of the retained austenite. Partitioning kinetics
were simulated with DICTRA™. The results were analyzed
taking into consideration the scale of the microstructure,
as well as the partitioning temperature. Tensile test
results indicated that very high levels of strength with
moderate toughness can be achieved, confirming the potential of the Q&P process to produce a superior combination of mechanical
properties.
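To make the quench-temperature trade-off described in this abstract concrete, the following minimal Python sketch estimates retained austenite after an idealized Q&P cycle. It assumes the Koistinen-Marburger relation for the martensite fraction, complete carbon partitioning to the untransformed austenite, and a rough Ms depression of about 300 degC per wt.% of added carbon; the alloy values are invented, and none of these relations or numbers are taken from the thesis itself.

    import math

    # Hedged sketch of the Q&P balance: quench between Ms and Mf, let all carbon in the
    # fresh martensite partition to the untransformed austenite, then check how much of
    # that enriched austenite survives the final quench to room temperature.
    # Assumptions (not from the thesis): Koistinen-Marburger kinetics, full partitioning,
    # and an Ms depression of ~300 degC per wt.% carbon added to the austenite.

    def km_fraction(quench_T, Ms, alpha=0.011):
        """Martensite fraction formed on quenching to quench_T (degC)."""
        return 1.0 - math.exp(-alpha * max(Ms - quench_T, 0.0))

    def retained_austenite(quench_T, Ms, c0, room_T=25.0, alpha=0.011):
        f_m1 = km_fraction(quench_T, Ms, alpha)        # primary martensite
        f_a = 1.0 - f_m1                               # austenite available for enrichment
        c_a = c0 / f_a                                 # carbon in austenite after partitioning
        Ms_enriched = Ms - 300.0 * (c_a - c0)          # assumed Ms depression by carbon
        f_fresh = km_fraction(room_T, Ms_enriched, alpha) * f_a
        return f_a - f_fresh                           # austenite retained at room temperature

    # Hypothetical 0.3 wt.% C alloy with Ms = 350 degC: retained austenite peaks at an
    # intermediate quench temperature, which is why the first quench must be controlled.
    for t in (150, 200, 250, 300):
        print(t, round(retained_austenite(t, 350.0, 0.3), 3))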
|
333 |
[en] DIRECTIONALITY FIELDS IN GENERATION AND EVALUATION OF QUADRILATERAL MESHES / [pt] CAMPOS DE DIRECIONALIDADE NA GERAÇÃO E AVALIAÇÃO DE MALHAS DE QUADRILÁTEROS ALICE HERRERA DE FIGUEIREDO 12 December 2017 (has links)
[pt] One of the main challenges in quadrilateral mesh generation is to guarantee the alignment of the elements with respect to the domain constraints. Unaligned meshes introduce numerical problems in simulations that use these meshes as a subdivision of the domain. However, there is no alignment metric for assessing the quality of quadrilateral meshes. A directionality field represents the diffusion of the constraint orientations into the interior of the domain. Kowalski et al. use a directionality field to partition the domain into quadrilateral regions. In this work, we reproduce the partitioning method proposed by Kowalski et al. with some modifications, aiming to reduce the final number of partitions. We then propose a metric to evaluate the quality of quadrilateral meshes with respect to their alignment with the domain constraints. / [en] One of the main challenges in quadrilateral mesh generation is to ensure the alignment of the elements with respect to domain constraints. Unaligned meshes introduce numerical problems in simulations that use these
meshes as a domain discretization. However, there is no alignment metric for evaluating the quality of quadrilateral meshes. A directionality field represents the diffusion of the constraint orientations into the interior of the domain. Kowalski et al. use a directionality field for domain partitioning into quadrilateral regions. In this work, we reproduce their partitioning method with some modifications, aiming to reduce the final number of partitions. We also propose a metric to evaluate the quality of a quadrilateral mesh with respect to its alignment with the domain constraints.
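The abstract does not give the form of the proposed metric; as one plausible reading, the Python sketch below scores a quad mesh by the angular deviation of each element edge from the nearest of the four cross-field directions (deviation modulo 90 degrees), averaged over all edges. The data layout and this specific measure are illustrative assumptions, not the metric defined in the thesis.

    import math

    # Hedged sketch of an alignment score against a directionality (cross) field:
    # each quad edge is compared with the nearest of the four field directions.
    def edge_deviation(p, q, field_angle):
        """Deviation in [0, pi/4] of edge pq from the cross directions {field_angle + k*pi/2}."""
        edge_angle = math.atan2(q[1] - p[1], q[0] - p[0])
        d = (edge_angle - field_angle) % (math.pi / 2.0)
        return min(d, math.pi / 2.0 - d)

    def mesh_alignment(quads, points, field):
        """Mean deviation over all quad edges; `field` maps a midpoint to a field angle."""
        devs = []
        for quad in quads:                               # quad = 4 vertex indices, in order
            for a, b in zip(quad, quad[1:] + quad[:1]):
                p, q = points[a], points[b]
                mid = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
                devs.append(edge_deviation(p, q, field(mid)))
        return sum(devs) / len(devs)                     # 0 = perfectly aligned, pi/4 = worst

    # Toy usage: one unit quad evaluated against an axis-aligned field.
    print(mesh_alignment([(0, 1, 2, 3)], [(0, 0), (1, 0), (1, 1), (0, 1)], lambda m: 0.0))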
|
334 |
Parallel JPEG Processing with a Hardware Accelerated DSP Processor / Parallell JPEG-behandling med en hårdvaruaccelerarad DSP processor Andersson, Mikael, Karlström, Per January 2004 (has links)
This thesis describes the design of fast JPEG processing accelerators for a DSP processor. Certain computation tasks are moved from the DSP processor to hardware accelerators. The accelerators are slave co-processing machines and are controlled via a new instruction set. Clock cycle count and power consumption are reduced by utilizing the custom-built hardware: the hardware can perform the tasks in fewer clock cycles, and several tasks can run in parallel, reducing the total number of clock cycles needed. First, a decoder and an encoder were implemented in DSP assembler. The cycle consumption of their parts was measured, and from this the hardware/software partitioning was done. Behavioral models of the accelerators were then written in C++ and the assembly code was modified to work with the new hardware. Finally, the accelerators were implemented in Verilog. The accelerator instruction-set extension was defined following a custom design flow.
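The abstract states that cycle consumption was measured for each part and the hardware/software partitioning derived from those measurements. The Python sketch below shows one simple way such a decision could be made, greedily accelerating the tasks with the best cycle savings per unit of accelerator area, under an invented profile; the greedy rule, speedups and area costs are illustrative assumptions rather than the method used in the thesis.

    # Hedged sketch of a cycle-count-driven hardware/software partitioning decision.
    # Task figures and the budgeted greedy rule are invented for illustration only.
    def partition(tasks, area_budget):
        """tasks: (name, sw_cycles, hw_cycles, hw_area). Pick accelerators greedily by
        cycle savings per unit area until the area budget is exhausted."""
        chosen, used_area = [], 0.0
        ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
        for name, sw, hw, area in ranked:
            if sw > hw and used_area + area <= area_budget:
                chosen.append(name)
                used_area += area
        total = sum(hw if name in chosen else sw for name, sw, hw, _ in tasks)
        return chosen, total

    profile = [                    # name, cycles in software, cycles with accelerator, area
        ("idct",    420_000,  60_000, 3.0),
        ("huffman", 310_000,  90_000, 2.5),
        ("quant",   150_000,  40_000, 1.0),
        ("colour",  120_000, 110_000, 2.0),
    ]
    print(partition(profile, area_budget=5.0))   # -> (['idct', 'quant'], 530000)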
|
335 |
En optimierande kompilator för SMV till CLP(B) / An optimising SMV to CLP(B) compiler Asplund, Mikael January 2005 (has links)
This thesis describes an optimising compiler for translating from SMV to CLP(B). The optimisation is aimed at reducing the number of required variables in order to decrease the size of the resulting BDDs. A partitioning of the transition relation is also performed. The compiler uses an internal representation of an FSM that is built up from the SMV description. A number of rewrite steps are performed on the problem description, such as encoding to a Boolean domain and performing the optimisations. The variable reduction heuristic is based on finding sub-circuits that are suitable for reduction, and a state-space search is performed on those groups. An evaluation of the results shows that in some cases the compiler is able to greatly reduce the size of the resulting BDDs.
|
336 |
Design of Single Scalar DSP based H.264/AVC Decoder Tiejun Hu, Di Wu January 2005 (has links)
H.264/AVC is a new video compression standard designed for future broadband networks. Compared with former video coding standards such as MPEG-2 and MPEG-4 Part 2, it saves up to 40% in bit rate and provides important characteristics such as error resilience and stream switching. However, the improvement in performance also brings an increase in computational complexity, which requires more powerful hardware. At the same time, several image and video coding standards such as JPEG and MPEG-4 are currently in use. Although an ASIC design meets the performance requirement, it lacks flexibility for heterogeneous standards. Hence a reconfigurable DSP processor is more suitable for media processing, since it provides both real-time performance and flexibility. There are currently several single scalar DSP processors on the market. Compared to media processors, which are generally SIMD or VLIW, a single scalar DSP is cheaper and has a smaller area, while its performance for video processing is limited. In this thesis, a method to improve the performance of a single scalar DSP by attaching hardware accelerators is proposed. The bottleneck for performance improvement is investigated, and the upper limit of acceleration of a given single scalar DSP for H.264/AVC decoding is presented. A behavioral model of the H.264/AVC decoder was first realized in pure software. Although real-time performance cannot be achieved with a pure software implementation, the computational complexity of the different parts was investigated and the critical path in decoding was exposed by analyzing this first software design. Both functional acceleration and addressing acceleration were then investigated and designed to achieve real-time decoding performance at an available clock frequency within 200 MHz.
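The "upper limit of acceleration" mentioned above can be read as an Amdahl's-law bound: whatever cannot be off-loaded to accelerators caps the overall speedup. The short Python sketch below works this out; the 70% accelerable share and the 10x kernel speedup are illustrative assumptions, not figures from the thesis.

    # Hedged sketch: Amdahl's-law upper bound on decoder speedup when only a fraction of
    # the cycles can be absorbed by accelerators. The numbers below are invented.
    def overall_speedup(accelerable_fraction, accel_speedup=float("inf")):
        serial = 1.0 - accelerable_fraction
        return 1.0 / (serial + accelerable_fraction / accel_speedup)

    p = 0.70                         # assumed share of cycles that accelerators can absorb
    print(overall_speedup(p))        # ~3.33x even with infinitely fast accelerators
    print(overall_speedup(p, 10.0))  # ~2.70x with 10x faster accelerated kernels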
|
337 |
Controls on the sources and distribution of chalcophile and lithophile trace elements in arc magmas D'Souza, Rameses Joseph 24 January 2018 (has links)
Volcanic arcs have been the locus of continental growth since at least the Proterozoic eon. In this dissertation, I seek to shed more light on arc processes by inferring the lower crustal mineralogy of an ancient arc through geochemical and structural modelling of its exposed levels. Arcs characteristically have high concentrations of incompatible elements; thus I also experimentally assess the ability of alkaline melts and of fluids associated with sediment melting to carry lithophile and chalcophile elements in the sub-arc.
I measured the chemical composition of 18 plutonic samples from the Bonanza island arc, emplaced between 203 and 164 Ma on the Wrangellia terrane on Vancouver Island, British Columbia. Models using trace elements with Nd and Sr isotopes indicate < 10% assimilation of the Wrangellia basement by the Bonanza arc magmas. The Bonanza arc rare earth element geochemistry is best explained as two lineages, each with two fractionation stages implicating < 15% garnet crystallization. My inference of garnet-bearing cumulates in the unexposed lower crust of the Bonanza arc, an unsuspected similarity with the coeval Talkeetna arc (Alaska), is consistent with estimates from geologic mapping and geobarometry indicating that the arc grew to > 23 km total thickness. The age distribution of the Bonanza arc plutons shows a single peak at 171 Ma whereas the volcanic rock age distribution shows two peaks at 171 and 198 Ma, likely due to sampling and/or preservation bias. Numerous mechanisms may produce the E-W separation of young and old volcanism and this does not constrain Jurassic subduction polarity beneath Wrangellia.
Although a small component of arc magmatism, alkaline arc rocks are associated with economic concentrations of chalcophile elements. The effect of varying alkalinity on S Concentration at Sulfide Saturation (SCSS) has not been previously tested. Thus, I conducted experiments on hydrous basaltic andesite melts with systematically varied alkalinity at 1270°C and 1 GPa using piston-cylinder apparatus. At oxygen fugacity two log units below the fayalite magnetite quartz buffer, I find SCSS is correlated with total alkali concentration, perhaps a result of the increased non-bridging oxygen associated with increased alkalinity. A limit to the effect of alkalis on SCSS in hydrous melts is observed at ~7.5 wt.% total alkalis. Using my results and published data, I retrained earlier SCSS models and developed a new empirical model using the optical basicity compositional parameter, predicting SCSS with slightly better accuracy than previous models.
Sediment melts contribute to the trace element signature of arcs and the chalcophile elements, compatible in redox-sensitive sulfide, are of particular interest. I conducted experiments at 3 GPa, 950 – 1050°C on sediment melts, determined fluid concentrations by mass balance and report the first fluid-melt partition coefficients (Dfluid/melt) for sediment melting. Compared to oxidized, anhydrite-bearing melts, I observe high Dfluid/melt for chalcophile elements and low values for Ce in reduced, pyrrhotite-bearing melts. Vanadium and Sc are unaffected by redox. The contrasting fluid-melt behaviour of Ce and Mo that I report indicates that melt, not fluid, is responsible for elevated Mo in the well-studied Lesser Antilles arc. / Graduate
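Since the abstract notes that fluid concentrations were obtained by mass balance before computing Dfluid/melt, the short Python sketch below illustrates that calculation: the unanalysed fluid concentration follows from the bulk composition, the measured melt concentration and the phase proportions. All numbers are invented, and the simple two- or three-phase balance is an assumption, not the thesis procedure.

    # Hedged sketch of a fluid-melt partition coefficient from mass balance:
    # C_bulk = X_fluid*C_fluid + X_melt*C_melt + X_residue*C_residue, solved for C_fluid.
    def d_fluid_melt(c_bulk, c_melt, x_fluid, x_melt, c_residue=0.0, x_residue=0.0):
        c_fluid = (c_bulk - x_melt * c_melt - x_residue * c_residue) / x_fluid
        return c_fluid / c_melt

    # Invented example: 10 ppm of an element in the bulk, 4 ppm measured in the melt,
    # 20 wt.% fluid and 80 wt.% melt -> D_fluid/melt = 8.5.
    print(d_fluid_melt(c_bulk=10.0, c_melt=4.0, x_fluid=0.2, x_melt=0.8))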
|
338 |
Load Balancing of Multi-physics Simulation by Multi-criteria Graph Partitioning / Equilibrage de charge pour des simulations multi-physiques par partitionnement multicritères de graphes Barat, Remi 18 December 2017 (has links)
So-called multiphysics simulations couple several computation phases. When they are run in parallel on distributed-memory architectures, minimizing the time to solution requires in most cases balancing the load across the processing units for each computation phase. In addition, the data distribution must minimize the communication it induces. This problem can be modelled as a multi-criteria graph partitioning problem. Each vertex of the graph is given a weight vector whose components, called "criteria", model the computational load carried by the vertex in each computation phase. The edges between vertices indicate data dependencies and may carry a weight reflecting the communication volume exchanged between the two vertices. The goal is to find a partition of the vertices that balances the weight of each part for each criterion while minimizing the sum of the weights of the cut edges, called the "cut". The maximum imbalance tolerated between parts is prescribed by the user. We then look for a partition minimizing the cut among all those whose imbalance for each criterion is below this tolerance. Since this problem is NP-hard in the general case, the purpose of this thesis is to design and implement heuristics that compute such partitions efficiently. Indeed, current tools often return partitions whose imbalance exceeds the prescribed tolerance. Our study of the solution space, that is, the set of partitions satisfying the balance constraints, reveals that in practice this space is immense. Moreover, we prove in the mono-criterion case that a bound on the normalized vertex weights guarantees that the solution space is non-empty and connected. Building on these theoretical results, we propose improvements to the multilevel method. Existing tools implement many variations of this method. By studying their source code, we highlight these variations and their consequences in the light of our analysis of the solution space. Furthermore, we define and implement two initial partitioning algorithms that focus on obtaining a solution from a potentially imbalanced partition by means of successive vertex moves. The first algorithm performs a move as soon as it improves the balance, while the second performs the move that reduces the imbalance the most. We present an original data structure that optimizes the choice of the vertices to move and leads to partitions whose imbalance is lower on average than with existing methods. We describe the experimentation platform, called Crack, that we designed in order to compare the different algorithms studied. These comparisons are carried out by partitioning a set of instances comprising an industrial case and several fictitious cases. We propose a method for generating realistic instances of "particle transport"-type simulations. Our results demonstrate the need to restrict the vertex weights during the coarsening phase of the multilevel method. We also highlight the influence of the vertex ordering strategy, which depends on the graph topology, on the efficiency of the "Heavy-Edge Matching" scheme in that same phase. The various algorithms that we study are implemented in a free partitioning tool called Scotch. In our experiments, Scotch and Crack return a balanced partition on every run, whereas MeTiS, currently the most widely used tool, fails a large part of the time. Moreover, the cut of the solutions returned by Scotch and Crack is equivalent to or better than that returned by MeTiS. / Multiphysics simulations couple several computation phases. When they are run in parallel on memory-distributed architectures, minimizing the simulation time requires in most cases balancing the workload across computation units, for each computation phase. Moreover, the data distribution must minimize the induced communication. This problem can be modeled as a multi-criteria graph partitioning problem. We associate with each vertex of the graph a vector of weights, whose components, called “criteria”, model the workload of the vertex for each computation phase. The edges between vertices indicate data dependencies, and can be given a weight representing the communication volume transferred between the two vertices. The goal is to find a partition of the vertices that both balances the weights of each part for each criterion, and minimizes the “edgecut”, that is, the sum of the weights of the edges cut by the partition. The maximum allowed imbalance is provided by the user, and we search for a partition that minimizes the edgecut, among all the partitions whose imbalance for each criterion is smaller than this threshold. This problem being NP-Hard in the general case, this thesis aims at devising and implementing heuristics that allow us to compute such partitions efficiently. Indeed, existing tools often return partitions whose imbalance is higher than the prescribed tolerance. Our study of the solution space, that is, the set of all the partitions respecting the balance constraints, reveals that, in practice, this space is extremely large. Moreover, we prove in the mono-criterion case that a bound on the normalized vertex weights guarantees the existence of a solution, and the connectivity of the solution space. Based on these theoretical results, we propose improvements of the multilevel algorithm. Existing tools implement many variations of this algorithm. By studying their source code, we emphasize these variations and their consequences, in light of our analysis of the solution space. Furthermore, we define and implement two initial partitioning algorithms, focusing on returning a solution. From a potentially imbalanced partition, they successively move vertices from one part to another. The first algorithm performs any move that reduces the imbalance, while the second performs at each step the move that reduces the imbalance the most. We present an original data structure that allows us to optimize the choice of the vertex to move, and leads to partitions whose imbalance is smaller on average than with existing methods. We describe the experimentation framework, named Crack, that we implemented in order to compare the various algorithms at stake. This comparison is performed by partitioning a set of instances including an industrial test case, and several fictitious cases.
We define a method for generating realistic weight distributions corresponding to “Particle-in-Cell”-like simulations. Our results demonstrate the necessity to coerce the vertex weights during the coarsening phase of the multilevel algorithm. Moreover, we highlight the impact of the vertex ordering, which should depend on the graph topology, on the efficiency of the “Heavy-Edge” matching scheme. The various algorithms that we consider are implemented in open-source graph partitioning software called Scotch. In our experiments, Scotch and Crack returned a balanced partition for each execution, whereas MeTiS, currently the most widely used partitioning tool, fails regularly. Additionally, the edgecut of the solutions returned by Scotch and Crack is equivalent to or better than the edgecut of the solutions returned by MeTiS.
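The objective and constraint described in this abstract can be made concrete with a small Python sketch that evaluates a candidate partition: the edgecut to be minimized and the per-criterion imbalance that must stay below the user tolerance. The toy graph and the equal-target imbalance definition are illustrative assumptions, not data or code from the thesis.

    # Hedged sketch: evaluating a candidate multi-criteria partition - edgecut and
    # per-criterion imbalance relative to a perfectly balanced distribution.
    def edgecut(edges, part):
        return sum(w for (u, v, w) in edges if part[u] != part[v])

    def imbalance(weights, part, nparts):
        """For each criterion, max over parts of (part load / ideal load) - 1."""
        ncrit = len(next(iter(weights.values())))
        loads = [[0.0] * ncrit for _ in range(nparts)]
        for v, w in weights.items():
            for c in range(ncrit):
                loads[part[v]][c] += w[c]
        totals = [sum(loads[p][c] for p in range(nparts)) for c in range(ncrit)]
        return [max(loads[p][c] * nparts / totals[c] for p in range(nparts)) - 1.0
                for c in range(ncrit)]

    # Toy instance: 4 vertices, 2 criteria (two computation phases), 2 parts.
    weights = {0: (2.0, 1.0), 1: (1.0, 2.0), 2: (2.0, 1.0), 3: (1.0, 2.0)}
    edges = [(0, 1, 1.0), (1, 2, 3.0), (2, 3, 1.0), (0, 3, 3.0)]
    part = {0: 0, 1: 0, 2: 1, 3: 1}
    print(edgecut(edges, part))                  # 6.0 cut
    print(imbalance(weights, part, nparts=2))    # [0.0, 0.0] - balanced for both criteria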
|
339 |
Distributed Optimization in Electric Power Systems: Partitioning, Communications, and Synchronization Guo, Junyao 01 March 2018 (has links)
To integrate large volumes of renewables and use electricity more efficiently, many industrial trials are on-going around the world that aim to realize decentralized or hierarchical control of renewable and distributed energy resources, flexible loads and monitoring devices. As the cost and complexity involved in the centralized communications and control infrastructure may be prohibitive in controlling millions of these distributed energy resources and devices, distributed optimization methods are expected to become much more prevalent in the operation of future electric power systems, as they have the potential to address this challenge and can be applied to various applications such as optimal power flow, state estimation, voltage control, and many others. While many distributed optimization algorithms are developed mathematically, little effort has been reported so far on how these methods should actually be implemented in real-world large-scale systems. The challenges associated with this include identifying how to decompose the overall optimization problem, what communication infrastructures can support the information exchange among subproblems, and whether to coordinate the updates of the subproblems in a synchronous or asynchronous manner. This research is dedicated to developing mathematical tools to address these issues, particularly for solving the non-convex optimal power flow problem. As the first part of this thesis, we develop a partitioning method that defines the boundaries of regions when applying distributed algorithms to a power system. This partitioning method quantifies the computational couplings among the buses and groups the buses with large couplings into one region. Through numerical experiments, we show that the developed spectral partitioning approach is the key to achieving fast convergence of distributed optimization algorithms on large-scale systems. After the partitioning of the system is defined, one needs to determine whether the communications among neighboring regions are supported. Therefore, as the second part of this thesis, we propose models for centralized and distributed communications infrastructures and study the impact of communication delays on the efficiency of distributed optimization algorithms through network simulations. Our findings suggest that the centralized communications infrastructure can be prohibitive for distributed optimization and cost-effective migration paths to a more distributed communications infrastructure are necessary. As the sizes and complexities of subproblems and communication delays are generally heterogeneous, synchronous distributed algorithms can be inefficient as they require waiting for the slowest region in the system. Hence, as the third part of this thesis, we develop an asynchronous distributed optimization method and show its convergence for the considered optimal power flow problem. We further study the impact of parameter tuning, system partitioning and communication delays on the proposed asynchronous method and compare its practical performance with its synchronous counterpart. Simulation results indicate that the asynchronous approach can be more efficient with proper partitioning and parameter settings on large-scale systems.
The outcome of this research provides important insights into how existing hardware and software solutions for Energy Management Systems in the power grid can be used or need to be extended for deploying distributed optimization methods, which establishes the interconnection between theoretical studies of distributed algorithms and their practical implementation. As the evolution towards a more distributed control architecture is already taking place in many utility networks, the approaches proposed in this thesis provide important tools and a methodology for adopting distributed optimization in power systems.
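To illustrate the spectral partitioning idea described above, grouping buses whose computational couplings are large, the Python sketch below performs a standard spectral bisection (sign/median split of the Fiedler vector of the coupling graph's Laplacian). The symmetric coupling matrix and the plain two-way split are illustrative assumptions; the thesis derives its couplings from the OPF problem and uses its own partitioning criterion.

    import numpy as np

    # Hedged sketch: spectral bisection of a bus-coupling graph via the Fiedler vector.
    # The toy coupling matrix below is invented; the thesis builds couplings from the OPF.
    def spectral_bisection(W):
        """W: symmetric nonnegative coupling matrix. Returns a 0/1 region label per bus."""
        degree = W.sum(axis=1)
        laplacian = np.diag(degree) - W
        _, vecs = np.linalg.eigh(laplacian)          # eigenvalues in ascending order
        fiedler = vecs[:, 1]                         # eigenvector of 2nd-smallest eigenvalue
        return (fiedler >= np.median(fiedler)).astype(int)

    # Two tightly coupled bus groups {0,1,2} and {3,4,5}, weakly tied by one line.
    W = np.array([[0, 5, 5, 0, 0, 0],
                  [5, 0, 5, 0, 0, 0],
                  [5, 5, 0, 1, 0, 0],
                  [0, 0, 1, 0, 5, 5],
                  [0, 0, 0, 5, 0, 5],
                  [0, 0, 0, 5, 5, 0]], dtype=float)
    print(spectral_bisection(W))                     # e.g. [0 0 0 1 1 1]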
|
340 |
Hypergraphs in the Service of Very Large Scale Query Optimization. Application : Data Warehousing / Les hypergraphes au service de l'optimisation de requêtes à très large échelle. Application : Entrepôt de données Boukorca, Ahcène 12 December 2016 (has links)
The emergence of the Big-Data phenomenon has led to new, growing and urgent data-sharing needs, which generate a large number of queries that DBMSs must handle. This problem is aggravated by further needs for query recommendation and exploration. Since data processing remains possible thanks to solutions related to query optimization, physical design and deployment architecture, where these solutions result from query-based combinatorial problems, traditional methods must be revisited to meet the new scalability requirements. This thesis addresses this problem of numerous queries and proposes a scalable approach, implemented in a framework called Big-Queries, based on the hypergraph, a flexible data structure with great modelling power that allows precise formulations of many problems in combinatorial scientific computing. This approach is the result of a collaboration with the company Mentor Graphics. It aims to capture query interaction in a unified query plan and to use partitioning algorithms to ensure scalability and to obtain optimal optimization structures (materialized views and data partitioning). This unified plan is used in the deployment phase of parallel data warehouses, by partitioning the data into fragments and allocating these fragments to the corresponding processing nodes. An intensive experimental study has shown the value of our approach in terms of algorithm scalability and query response time reduction. / The emergence of the Big-Data phenomenon has introduced new, increased and urgent needs to share data between users and communities, which has generated a large number of queries that DBMSs must handle. This problem has been compounded by other needs of query recommendation and exploration. Since data processing is still possible through solutions of query optimization, physical design and deployment architectures, in which these solutions are the results of combinatorial problems based on queries, it is essential to review traditional methods to respond to new needs of scalability. This thesis focuses on the problem of numerous queries and proposes a scalable approach, implemented in a framework called Big-Queries and based on the hypergraph, a flexible data structure which has a large modeling power and allows accurate formulation of many problems of combinatorial scientific computing. This approach is the result of collaboration with the company Mentor Graphics. It aims to capture the query interaction in a unified query plan and to use partitioning algorithms to ensure scalability and to obtain optimal optimization structures (materialized views and data partitioning). The unified plan is also used in the deployment phase of parallel data warehouses, by partitioning data into fragments and allocating these fragments to the corresponding processing nodes. An intensive experimental study showed the interest of our approach in terms of scaling of the algorithms and minimization of query response time.
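As a minimal illustration of modelling query interaction with a hypergraph, where each query is a hyperedge over the relations or attributes it touches, the Python sketch below groups queries that share at least one vertex. The toy workload is invented, and this simple connected-component grouping only hints at the idea; the thesis relies on dedicated hypergraph partitioning algorithms to derive materialized views and data fragments.

    from collections import defaultdict

    # Hedged sketch: queries as hyperedges over the relations/attributes they touch;
    # queries sharing a vertex are candidates for shared optimization structures.
    queries = {
        "Q1": {"SALES", "DATE", "PRODUCT"},
        "Q2": {"SALES", "CUSTOMER"},
        "Q3": {"INVENTORY", "PRODUCT"},
        "Q4": {"SHIPPING", "CARRIER"},
    }

    def interaction_groups(hyperedges):
        """Union-find over queries that share at least one touched vertex."""
        parent = {q: q for q in hyperedges}
        def find(q):
            while parent[q] != q:
                parent[q] = parent[parent[q]]
                q = parent[q]
            return q
        owner = {}
        for q, nodes in hyperedges.items():
            for n in nodes:
                if n in owner:
                    parent[find(q)] = find(owner[n])
                else:
                    owner[n] = q
        groups = defaultdict(list)
        for q in hyperedges:
            groups[find(q)].append(q)
        return list(groups.values())

    print(interaction_groups(queries))   # -> [['Q1', 'Q2', 'Q3'], ['Q4']]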
|
Page generated in 0.4523 seconds