351.
Utilisation des modèles de co-clustering pour l'analyse exploratoire des données (Use of co-clustering models for exploratory data analysis). Guigourès, Romain. 4 December 2013.
Co-clustering is a clustering technique that simultaneously partitions the rows and columns of a data matrix. Among existing approaches, MODL is suitable for processing large data sets with several continuous or categorical variables; we use it as the baseline approach throughout this thesis. We show the diversity of data mining problems it can address, such as graph partitioning, temporal graph segmentation, and curve clustering. MODL tracks very fine patterns in large data sets, which makes its results difficult to interpret directly, so exploratory analysis tools are needed to exploit them. To help the user interpret such results, we define tools that simplify fine-grained results to allow an overall interpretation, detect the most interesting clusters, determine the most representative values of each cluster, and visualize the results. We study the asymptotic behavior of these exploratory analysis tools in order to connect them with existing approaches. Finally, an application to call detail records from the telecom operator Orange, collected in Ivory Coast, demonstrates the value of MODL and the exploratory analysis tools in an industrial context.
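MODL itself is not reproduced in this listing, but the core idea of a simultaneous row/column partition can be illustrated with a toy alternating scheme. This is a hypothetical, simplified stand-in: MODL actually optimizes a Bayesian/MDL criterion, whereas the sketch below just reassigns rows and columns to the nearest block-mean profile.

```python
import numpy as np

def coclustering(X, k_rows, k_cols, n_iter=20, seed=0):
    """Toy alternating co-clustering: rows and columns are repeatedly
    reassigned to the block-mean profile that fits them best.
    NOT the MODL criterion; illustration of the joint partition only."""
    rng = np.random.default_rng(seed)
    r = rng.integers(k_rows, size=X.shape[0])   # row cluster labels
    c = rng.integers(k_cols, size=X.shape[1])   # column cluster labels
    for _ in range(n_iter):
        # mu[i, j] = mean of the (row cluster i, column cluster j) block
        mu = np.array([[X[np.ix_(r == i, c == j)].mean()
                        if ((r == i).any() and (c == j).any()) else 0.0
                        for j in range(k_cols)]
                       for i in range(k_rows)])
        # profile of each row over the column clusters, then reassign rows
        rp = np.array([[X[a, c == j].mean() if (c == j).any() else 0.0
                        for j in range(k_cols)]
                       for a in range(X.shape[0])])
        r = np.argmin(((rp[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # profile of each column over the row clusters, then reassign columns
        cp = np.array([[X[r == i, b].mean() if (r == i).any() else 0.0
                        for i in range(k_rows)]
                       for b in range(X.shape[1])])
        c = np.argmin(((cp[:, None, :] - mu.T[None, :, :]) ** 2).sum(-1), axis=1)
    return r, c

# Block-structured toy matrix: two row groups crossed with two column groups.
X = np.zeros((6, 6))
X[:3, :3] = 1.0
X[3:, 3:] = 1.0
row_labels, col_labels = coclustering(X, 2, 2)
```

On clean block-structured data such as `X` above, the labels typically recover the two row groups and two column groups, although a greedy scheme like this can get stuck in local optima where MODL's regularized criterion would not.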
352.
Studies of Enzyme Mechanism Using Isotopic Probes. Chen, Cheau-Yun. 08 1900.
Isotope partitioning studies of the Ascaris suum NAD-malic enzyme reaction were carried out with five transitory complexes: E:NAD, E:NAD:Mg, E:malate, E:Mg:malate, and E:NAD:malate. Three productive complexes (E:NAD, E:NAD:Mg, and E:Mg:malate) were obtained, suggesting a steady-state random mechanism. Data for trapping with E:14C-NAD indicate a rapid-equilibrium addition of Mg2+ prior to the addition of malate. Trapping with 14C-malate could be obtained only from the E:Mg2+:14C-malate complex, while no trapping from E:14C-malate was obtained under feasible experimental conditions; most likely, E:malate is non-productive, as suggested by the kinetic analysis. The experiment with E:NAD:malate could not be carried out because of turnover by trace amounts of malate dehydrogenase in the pulse solution. Equations were derived for isotope partitioning studies varying two substrates in the chase solution in an ordered terreactant reaction, allowing determination of the rate of substrate dissociation relative to the catalytic reaction for each productive transitory complex. NAD and malate are released from the central complex at an identical rate, equal to the catalytic rate.
353.
Use of Different Ripening Inhibitors to Enhance Antimicrobial Activity of Essential Oil Nanoemulsion. Ryu, Victor. 27 October 2017.
The objective of this research was to study the impact of ripening inhibitor level and type on the formation, stability, and antimicrobial activity of thyme oil nanoemulsions formed by spontaneous emulsification. Oil-in-water antimicrobial nanoemulsions (10 wt%) were formed by titrating a mixture of essential oil, ripening inhibitor, and surfactant (Tween 80) into 5 mM sodium citrate buffer (pH 3.5). Stable nanoemulsions containing small droplets (d < 70 nm) were formed. The antimicrobial activity of the nanoemulsions decreased with increasing ripening inhibitor concentration, which was attributed to a reduction in the amount of hydrophobic antimicrobial constituents transferred to a separate hydrophobic domain mimicking bacterial cell membranes, as determined using dialysis and chromatography. The antimicrobial activity also depended on the type of ripening inhibitor used (palm ≈ corn > canola > coconut), a ranking that likewise reflected their ability to transfer hydrophobic antimicrobial constituents to the separate hydrophobic domain.
354.
Analyzing Metacommunity Models with Statistical Variance Partitioning: A Review and Meta-Analysis. Lamb, Kevin Vieira. 3 August 2020.
The relative importance of deterministic processes versus chance is one of the central questions in science. We analyze the success of variance partitioning (VP) methods used to explain variation in β-diversity by partitioning it into environmental, spatial, and spatially structured environmental components. We test three hypotheses: 1) the number of environmental descriptors in a study is positively correlated with the percentage of β-diversity explained by the environment, and the environment explains more variation in β-diversity than spatial or shared factors in VP analyses; 2) increasing the complexity of environmental descriptors helps account for more of the total variation in β-diversity; and 3) studies based on functional groups account for more of the total variation in β-diversity than studies based on taxonomic data. Results show that the amount of unexplained β-diversity is on average 65.6%. There was no evidence that the number of environmental descriptors, increased complexity of environmental descriptors, or the use of functional diversity allowed researchers to account for more variation in β-diversity. We review the characteristics of studies that account for a large percentage of variation in β-diversity, as well as explanations for studies that accounted for little.
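The variance partitioning described above can be sketched with ordinary least squares: fit β-diversity on environmental predictors, on spatial predictors, and on both, then take differences of the R² values. This is a minimal sketch of the classic pure-environment / shared / pure-space / unexplained fraction scheme; the synthetic data and variable names are illustrative assumptions, not the meta-analysis data (published VP studies typically use adjusted R², which this sketch omits).

```python
import numpy as np

def r2(X, y):
    """R-squared of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def variance_partition(E, S, y):
    """Classic fractions: a = pure environment, b = spatially structured
    environment (shared), c = pure space, d = unexplained."""
    r_e, r_s, r_es = r2(E, y), r2(S, y), r2(np.column_stack([E, S]), y)
    a = r_es - r_s
    c = r_es - r_e
    b = r_e + r_s - r_es
    d = 1.0 - r_es
    return a, b, c, d

rng = np.random.default_rng(1)
S = rng.normal(size=(200, 2))                                  # spatial predictors
E = S @ np.array([[0.5], [0.2]]) + rng.normal(size=(200, 1))   # environment, partly spatial
y = E[:, 0] + rng.normal(scale=2.0, size=200)                  # β-diversity proxy
a, b, c, d = variance_partition(E, S, y)
```

By construction the four fractions sum to one, and with the noisy synthetic response above the unexplained fraction `d` dominates, mirroring the meta-analysis finding that most β-diversity variation goes unexplained.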
355.
Distributed edge partitioning (Partitionnement réparti basé sur les sommets). Mykhailenko, Hlib. 14 June 2017.
In distributed graph computation, graph partitioning is an important preliminary step because computation time can depend significantly on how the graph is split among the different executors. In this thesis we explore the graph partitioning problem. Recently, edge partitioning has been advocated as a better approach for processing graphs with a power-law degree distribution, which are very common in real-world datasets; we therefore focus on edge partitioning. We start with an overview of existing metrics for evaluating the quality of a graph partition. We briefly survey existing graph processing systems (Hadoop, Giraph, Giraph++, Distributed GraphLab, and PowerGraph) and their key features, and compare them to Spark, a popular big-data processing framework, and its graph processing API, GraphX. We provide an overview of existing edge partitioning algorithms and introduce a partitioner classification. We conclude that, based only on published work, it is not possible to draw a clear conclusion about the relative performance of these partitioners; for this reason, we experimentally compared all the edge partitioners currently available for GraphX. Results suggest that the Hybrid-Cut partitioner provides the best performance. We then study how the quality of a partition can be evaluated before running a computation. To this end, we carried out experiments with GraphX and performed an accurate statistical analysis using a linear regression model. Our experimental results show that communication metrics such as vertex-cut and communication cost are effective predictors in most cases. Finally, we propose a framework for distributed edge partitioning based on distributed simulated annealing that can be used to optimize a large family of partitioning metrics. We provide sufficient conditions for convergence to the optimum and discuss which metrics can be efficiently optimized in a distributed way. We implemented our framework with GraphX and compared it with JA-BE-JA-VC, a state-of-the-art partitioner that inspired our approach, showing that our approach provides significant improvements.
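The vertex-cut metric cited above as a good performance predictor can be computed directly from an edge partition: it is the average number of partitions in which each vertex is replicated (1.0 means no vertex is cut). The sketch below assumes edges are given as lists of vertex pairs per partition; it is not GraphX code.

```python
from collections import defaultdict

def replication_factor(edge_partition):
    """Vertex-cut metric for an edge partition: the average number of
    partitions each vertex appears in. Higher values imply more
    replication and therefore more synchronization traffic."""
    parts_of = defaultdict(set)
    for part_id, edges in enumerate(edge_partition):
        for u, v in edges:
            parts_of[u].add(part_id)
            parts_of[v].add(part_id)
    return sum(len(p) for p in parts_of.values()) / len(parts_of)

# A 4-cycle split across two partitions: vertices b and d are replicated.
p0 = [("a", "b"), ("a", "d")]
p1 = [("c", "b"), ("c", "d")]
rf = replication_factor([p0, p1])  # (1 + 2 + 1 + 2) / 4 = 1.5
```

A good edge partitioner (such as the Hybrid-Cut approach favored in the experiments) keeps this replication factor low while keeping the partitions balanced in size.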
356.
Study on Partitioning and Transmutation (P&T) of High-Level Radioactive Waste: Status of Basic and Technological Research (Studie zur Partitionierung und Transmutation (P&T) hochradioaktiver Abfälle: Stand der Grundlagen- und technologischen Forschung). Merk, Bruno; Glivici-Cotruta, Varvara. January 2014.
The overall project underlying this subproject was structured into two modules: module A (funded by the Federal Ministry of Economics, managed by KIT) and module B (funded by the Federal Ministry of Education and Research, managed by acatech). Partners in module A were DBE TECHNOLOGY GmbH, the Gesellschaft für Anlagen- und Reaktorsicherheit mbH (GRS), the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), the Karlsruhe Institute of Technology (KIT), and the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen in cooperation with the Forschungszentrum Jülich (FZJ). Module B was carried out by the Zentrum für Interdisziplinäre Risiko- und Innovationsforschung of the University of Stuttgart (ZIRIUS). Overall coordination of the two modules was provided by the Deutsche Akademie der Technikwissenschaften (acatech). Based on module A's analysis of the scientific and technological aspects, the societal implications were evaluated in module B, and recommendations for communication and action for the future positioning of P&T were derived.

The subproject coordinated by HZDR, "Status of basic and technological research", gives an overview of that field. The topic is introduced with a short description of reactor systems suitable for transmutation. The state of development is then described for the specialist areas of separation chemistry, safety technology, accelerator technology, liquid metal technology, spallation target development, transmutation fuel and structural material development, and waste conditioning. This is complemented by the specifics of transmutation facilities, from physical fundamentals and core designs through the reactor physics of transmutation systems, simulation tools, and the development of safety approaches. The status of existing irradiation facilities with a fast neutron spectrum is then described. Based on the current state of R&D, the open questions and research gaps in the individual areas (reprocessing and conditioning; accelerator and spallation target; reactor) are compiled, and both a strategy and a roadmap for closing the technology gaps are developed.

In addition, the main contributions of HZDR to the overall study are described. These are, in particular, the description of the potential and limits of P&T, the requirements and challenges for transmutation irradiation facilities and their effectiveness, and the safety features of accelerator-driven subcritical systems, including basic accident considerations and safety characteristics.
357.
Theoretical and Numerical Studies on the Graph Partitioning Problem (Études théoriques et numériques du problème de partitionnement dans un graphe). Althoby, Haeder Younis Ghawi. 6 November 2017.
Given a connected undirected graph G = (V, E) and a positive integer β(n), where n is the number of vertices, the vertex separator problem (VSP) is to find a partition of V into three classes A, B, and C such that there is no edge between A and B, max{|A|, |B|} ≤ β(n), and |C| is minimum. In this thesis, we consider an integer programming formulation of this problem, describe some valid inequalities, and use these results to develop algorithms based on a neighborhood scheme. We also study the connected st-vertex separator problem. Let s and t be two non-adjacent vertices of V. A connected st-separator in G is a subset S of V \ {s, t} that induces a connected subgraph and whose removal disconnects s from t. The connected st-vertex separator problem consists in finding such a subset of minimum cardinality. We propose three formulations for this problem, give some valid inequalities for the associated polyhedron, and develop an efficient heuristic to solve it.
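As a concrete (if naive) illustration of the st-separator objective, the sketch below finds a minimum s-t vertex separator by exhaustive search. It ignores the connectivity requirement on S and the thesis's integer programming machinery, and is exponential in the graph size, so it is an illustration only.

```python
from itertools import combinations

def min_st_separator(adj, s, t):
    """Brute-force minimum s-t vertex separator: the smallest subset of
    V minus {s, t} whose removal disconnects s from t.
    (No connectivity constraint on the separator; exponential search.)"""
    def connected(removed):
        # depth-first search from s, skipping removed vertices
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return t in seen

    others = [v for v in adj if v not in (s, t)]
    if not connected(set()):
        return set()  # already disconnected
    for k in range(1, len(others) + 1):
        for cand in combinations(others, k):
            if not connected(set(cand)):
                return set(cand)
    return None  # s and t adjacent: no vertex separator exists

# Small example: two internally disjoint s-t paths, through b and through c.
adj = {"s": ["b", "c"], "b": ["s", "t"], "c": ["s", "t"], "t": ["b", "c"]}
sep = min_st_separator(adj, "s", "t")  # {"b", "c"}
```

In the example, both internal vertices must be removed to disconnect s from t, so the minimum separator has cardinality two; the ILP formulations in the thesis target the same objective at scales where enumeration is hopeless.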
358.
A Generalized Framework for Automatic Code Partitioning and Generation in Distributed Systems. Sairaman, Viswanath. 5 February 2010.
In distributed heterogeneous systems, partitioning application software for distributed execution is a challenge in itself. Code partitioning for distributed processing involves splitting the code into clusters and mapping those clusters to individual processing elements interconnected through a high-speed network. Code generation is the process of converting the code partitions into individually executable code clusters and satisfying code dependencies by adding communication primitives to send and receive data between dependent clusters. In this work, we describe a generalized framework for automatic code partitioning and code generation for distributed heterogeneous systems. A model for system-level design and synthesis using transaction-level models has also been developed and is presented. The application programs, along with the partition primitives, are converted into independently executable concrete implementations. The process consists of two steps: first, translating the primitives of the application program into equivalent code clusters, and then scheduling the implementations of these code clusters according to the inherent data dependencies. Further, the original source code is reverse engineered to create a metadata table describing the program elements and dependency trees. The data gathered is used, along with Parallel Virtual Machine (PVM) primitives, to enable communication between the partitioned programs in the distributed environment. The framework consists of profiling tools, a partitioning methodology, and architectural exploration and cost analysis tools. The partitioning algorithm is based on clustering, in which code clusters are created to minimize communication overhead, represented as data transfers in the task graph for the code. The proposed approach has been implemented and tested on different applications and compared with simulated annealing and tabu search based partitioning algorithms, with the objective of minimizing communication overhead. While the proposed approach performs comparably with simulated annealing and better than tabu search in most cases in terms of communication overhead reduction, simulation results indicate it is faster than both by an order of magnitude. The proposed framework provides an end-to-end rapid prototyping approach to aid architectural exploration and design optimization; the level of abstraction in the design phase can be fine-tuned using transaction-level models.
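The clustering-based partitioning idea, grouping tasks so that heavy data transfers stay inside a cluster, can be sketched with a greedy edge-contraction pass over a weighted task graph. This is an illustrative stand-in, not the thesis algorithm or its PVM code generation; the task graph and weights below are assumptions.

```python
def cluster_tasks(edges, n_tasks, n_clusters):
    """Greedy edge-contraction clustering: repeatedly merge the two
    clusters connected by the largest total data transfer, so heavy
    communication becomes intra-cluster. Returns per-task cluster
    labels and the remaining inter-cluster communication (the cut)."""
    label = list(range(n_tasks))  # cluster id per task

    def weight_between(a, b):
        # total transfer volume between clusters a and b
        return sum(w for u, v, w in edges if {label[u], label[v]} == {a, b})

    while len(set(label)) > n_clusters:
        ids = sorted(set(label))
        a, b = max(((p, q) for i, p in enumerate(ids) for q in ids[i + 1:]),
                   key=lambda pair: weight_between(*pair))
        label = [a if l == b else l for l in label]  # contract b into a

    cut = sum(w for u, v, w in edges if label[u] != label[v])
    return label, cut

# Task graph: tasks 0-1 and 2-3 communicate heavily; the 1-2 link is light.
edges = [(0, 1, 10), (1, 2, 1), (2, 3, 10)]
label, cut = cluster_tasks(edges, n_tasks=4, n_clusters=2)  # cut = 1
```

The two heavy edges are contracted first, leaving only the light 1-2 transfer crossing clusters; simulated annealing or tabu search would explore many more configurations to reach (or beat) the same cut at higher cost.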
359.
Attack-Resilient Adaptive Load-Balancing in Distributed Spatial Data Streaming Systems. Daghistani, Anas Hazim (9143297). 5 August 2020.
The proliferation of GPS-enabled devices has led to the development of numerous location-based services. These services need to process massive amounts of spatial data in real time with high throughput and low response time. The current scale of spatial data cannot be handled using centralized systems, which has led to the development of distributed spatial streaming systems. The performance of distributed streaming systems relies on how evenly the workload is distributed among their machines. However, real-time streamed spatial data and queries follow non-uniform spatial distributions that change continuously over time, so distributed spatial streaming systems need to track changes in the distribution of spatial data and queries and redistribute their workload accordingly. This thesis addresses the challenges of adapting to workload changes in distributed spatial streaming systems to improve performance while preserving system security.

The thesis proposes TrioStat, an online workload estimation technique that relies on a probabilistic model for estimating the cost of partitions and machines in distributed spatial streaming systems. TrioStat uses a decentralized technique to collect and maintain the required statistics in real time with minimal overhead. In addition, the thesis introduces SWARM, a lightweight adaptive load-balancing protocol that continuously monitors the data and query workloads across the distributed processes of spatial data streaming systems and redistributes the workloads as soon as performance bottlenecks are detected; SWARM uses TrioStat to estimate machine workloads. Although adaptive load-balancing techniques significantly improve the performance of distributed streaming systems, they make the system vulnerable to attacks. We introduce a novel attack model that targets the adaptive load-balancing mechanisms of distributed streaming systems: the attack reduces the throughput and availability of the system by keeping it in a continuous state of rebalancing. The thesis proposes Guard, a component that detects and blocks attacks targeting adaptive load balancing. Guard is deployed in SWARM to obtain an attack-resilient adaptive load-balancing mechanism for distributed spatial streaming systems.
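The tension between adaptive rebalancing and the rebalancing-storm attack described above can be sketched with a toy rate-limited trigger: rebalance when load imbalance crosses a threshold, but cap how often rebalances may fire within a time window. This is an assumption-laden simplification of the idea, not the actual SWARM or Guard protocols; all thresholds and names here are invented for illustration.

```python
class RebalanceGuard:
    """Toy guard: allow a rebalance only when the max/mean load ratio
    exceeds a threshold AND recent rebalances are under a rate cap, so an
    attacker oscillating the workload cannot keep the system rebalancing."""
    def __init__(self, threshold=1.5, max_rebalances=2, window=10):
        self.threshold = threshold          # imbalance ratio that triggers action
        self.max_rebalances = max_rebalances  # rate cap inside the window
        self.window = window                # sliding time window length
        self.history = []                   # timestamps of past rebalances

    def should_rebalance(self, loads, now):
        imbalance = max(loads) / (sum(loads) / len(loads))
        self.history = [t for t in self.history if now - t < self.window]
        if imbalance > self.threshold and len(self.history) < self.max_rebalances:
            self.history.append(now)
            return True
        return False

g = RebalanceGuard(threshold=1.5, max_rebalances=2, window=10)
# A persistently skewed stream: legitimate skew is handled, then the
# rate cap blocks further churn within the window.
decisions = [g.should_rebalance([10, 1, 1], now=t) for t in range(6)]
```

Here the first two skewed snapshots each trigger a rebalance, after which the cap suppresses further rebalancing until the window slides forward; a real guard would additionally try to distinguish adversarial oscillation from genuine workload drift.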
360.
Partitioning and Control for Dynamical Systems Evolving on Manifolds. Tan, Xiao. January 2020.
With the development and integration of cyber-physical and safety-critical systems, control systems are expected to achieve tasks that involve logic rules, reactive decision-making, safety constraints, and so forth. For example, in a persistent surveillance application, an unmanned aerial vehicle might be required to "take photos of areas A and B infinitely often, always avoid unsafe region C, and return to the charging point when the battery level goes low." One possible design approach for such complex specifications is automata-based planning using formal verification algorithms. Central to the existing formal verification of continuous-time systems is the notion of abstraction, which consists of partitioning the state space into cells and then formulating a control problem on each cell: find a state feedback that makes all closed-loop trajectories starting in one cell reach and enter a consecutive cell in finite time without entering any other cell. This abstracts the continuous system into a finite-state transition graph, so the complex specifications can be checked against the simple transition model using formal verification tools, yielding a sequence of cells to visit consecutively. While control algorithms have been developed in the literature for linear systems with a polytopic partitioning of the state space, the partitioning and control problem for systems on a curved space is a relatively unexplored research area. In this thesis, we consider $SO(3)$ and $\mathbb{S}^2$, the two most commonly encountered manifolds in mechanical systems, and propose several approaches to the partitioning and control problem that could in principle be generalized to other manifolds. Chapter 2 proposes a discretization scheme that consists of sampling point generation and cell construction.
Each cell is constructed as a ball of identical radius around a sampling point, and uniformity measures for the sampling points are proposed. As a result, the $SO(3)$ manifold is discretized into interconnected cells whose union covers the whole space, and a graph model is built from the cell adjacency relations. This discretization method can in general be extended to any Riemannian manifold. To enable cell transitions, two reference trajectories are constructed corresponding to the cell-level plan. We demonstrate the results by solving a constrained attitude maneuvering problem with arbitrary obstacle shapes, showing that the algorithm finds a feasible trajectory whenever one exists at that discretization level. In Chapter 3, the 2-sphere manifold is discretized into spherical polytopes, an analog of convex polytopes in Euclidean space. Moreover, under the gnomonic projection, the spherical polytopes map naturally to Euclidean polytopes, and the dynamics on the manifold locally transform, via feedback linearization, to a simple linear system. The control problems can then be solved in Euclidean space, where many control schemes with guaranteed safe cell transitions exist. This method is a special case of solving the partition-and-control problem by transforming the states and dynamics on the manifold to Euclidean space in local charts. In Chapter 4, we propose a notion of high-order barrier functions for general control-affine systems that guarantees forward invariance of a set by checking higher-order derivatives. This notion provides a unified framework for constraining the transient behavior of closed-loop trajectories, which is essential in cell-transition control design. Asymptotic stability of the forward invariant set is also proved, which is highly favorable for robustness with respect to model perturbations.
We revisit the cell transition problem of Chapter 2 and show that, even with a simple stabilizing nominal controller, the proposed high-order barrier function framework provides satisfactory transient performance.
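The gnomonic projection underpinning the Chapter 3 construction has a short closed form: scale a unit vector until it hits the tangent plane at the chart center, where great circles become straight lines, which is exactly why spherical polytopes map to Euclidean polytopes. The sketch below works in ambient 3-D coordinates and is valid only on the open hemisphere facing the chart center; it is a minimal illustration, not the thesis code.

```python
import numpy as np

def gnomonic(x, c):
    """Gnomonic projection of unit vector x onto the tangent plane at
    chart center c: centrally project x onto the plane {p : p.c = 1},
    then express it relative to c. Requires x @ c > 0 (same hemisphere)."""
    y = x / (x @ c)          # central projection onto the plane p.c = 1
    return y - c             # tangent-plane coordinates (ambient frame)

def gnomonic_inv(v, c):
    """Inverse map: lift a tangent-plane point back to the unit sphere."""
    p = c + v
    return p / np.linalg.norm(p)

c = np.array([0.0, 0.0, 1.0])                     # chart center (north pole)
x = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)      # point 45 degrees away
v = gnomonic(x, c)                                # -> [1, 0, 0]
```

Round-tripping through `gnomonic_inv` recovers `x`, and under this chart the spherical-polytope boundaries (great-circle arcs) become the flat faces of an ordinary Euclidean polytope, where standard safe-transition controllers apply.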