271 |
Minimização ótima de classes especiais de funções booleanas / On the optimal minimization of special classes of Boolean functions
Callegaro, Vinicius January 2016 (has links)
O problema de fatorar e decompor funções Booleanas é Σ₂-completo para funções gerais. Algoritmos eficientes e exatos podem ser criados para classes de funções existentes como funções read-once, disjoint-support decomposable e read-polarity-once. Uma forma fatorada é chamada de read-once (RO) se cada variável aparece uma única vez. Uma função Booleana é RO se existe uma forma fatorada RO que a representa. Por exemplo, a função representada por f = x1·x2 + x1·x3·x4 + x1·x3·x5 é uma função RO, pois pode ser fatorada em f = x1·(x2 + x3·(x4 + x5)). Uma função Booleana f(X) pode ser decomposta usando funções mais simples g e h de forma que f(X) = h(g(X1), X2), sendo X1, X2 ≠ ∅ e X1 ∪ X2 = X. Uma decomposição disjunta de suporte (disjoint-support decomposition – DSD) é um caso especial de decomposição funcional, onde os conjuntos de entradas X1 e X2 não compartilham elementos, i.e., X1 ∩ X2 = ∅. Por exemplo, a função f = x1·¬x2·x3 + x1·x2·¬x3·¬x4 + x1·¬x2·x4 é DSD, pois existe uma decomposição tal que f = x1·(x2 ⊕ (x3 + x4)). Uma forma read-polarity-once (RPO) é uma forma fatorada onde cada polaridade (positiva ou negativa) de uma variável aparece no máximo uma vez. Uma função Booleana é RPO se existe uma forma fatorada RPO que a representa. Por exemplo, a função f = ¬x1·x2·x4 + x1·x3 + x2·x3 é RPO, pois pode ser fatorada em f = (¬x1·x4 + x3)·(x1 + x2). Esta tese apresenta quatro novos algoritmos para síntese de funções Booleanas. A primeira contribuição é um método de síntese para funções read-once baseado em uma estratégia de divisão-e-conquista. A segunda contribuição é um algoritmo top-down para síntese de funções DSD baseado em soma-de-produtos, produto-de-somas e soma-exclusiva-de-produtos. A terceira contribuição é um método bottom-up para síntese de funções DSD baseado em diferença Booleana e cofatores. A última contribuição é um novo método para síntese de funções RPO que é baseado na análise de transições positivas e negativas. / The problem of factoring and decomposing Boolean functions is Σ₂-complete for general functions. Efficient and exact algorithms can be created for existing classes of functions known as read-once, disjoint-support decomposable and read-polarity-once functions. A factored form is called read-once (RO) if each variable appears only once. A Boolean function is RO if it can be represented by an RO form. For example, the function represented by f = x1·x2 + x1·x3·x4 + x1·x3·x5 is an RO function, since it can be factored into f = x1·(x2 + x3·(x4 + x5)). A Boolean function f(X) can be decomposed using simpler subfunctions g and h, such that f(X) = h(g(X1), X2), where X1, X2 ≠ ∅ and X1 ∪ X2 = X. A disjoint-support decomposition (DSD) is a special case of functional decomposition, where the input sets X1 and X2 do not share any element, i.e., X1 ∩ X2 = ∅. Roughly speaking, DSD functions can be represented by a read-once expression in which the exclusive-or operator (⊕) can also be used as a base operation. For example, f = x1·(x2 ⊕ (x4 + x5)). A read-polarity-once (RPO) form is a factored form where each polarity (positive or negative) of a variable appears at most once. A Boolean function is RPO if it can be represented by an RPO factored form. For example, the function f = ¬x1·x2·x4 + x1·x3 + x2·x3 is RPO, since it can be factored into f = (¬x1·x4 + x3)·(x1 + x2). This dissertation presents four new algorithms for the synthesis of Boolean functions. The first contribution is a synthesis method for read-once functions based on a divide-and-conquer strategy.
The second and third contributions are two algorithms for synthesis of DSD functions: a top-down approach that checks if there is an OR, AND or XOR decomposition based on sum-of-products, product-of-sums and exclusive-sum-of-products inputs, respectively; and a method that runs in a bottom-up fashion and is based on Boolean difference and cofactor analysis. The last contribution is a new method to synthesize RPO functions which is based on the analysis of positive and negative transition sets. Results show the efficacy and efficiency of the four proposed methods.
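As a quick illustration of the read-once example quoted in this abstract (a minimal sketch, not one of the thesis algorithms), the equivalence of the sum-of-products form and its read-once factorization can be verified exhaustively over all 2^5 assignments:

```python
from itertools import product

# Read-once example from the abstract: f = x1·x2 + x1·x3·x4 + x1·x3·x5,
# with read-once factored form f = x1·(x2 + x3·(x4 + x5)).
def f_sop(x1, x2, x3, x4, x5):
    return (x1 and x2) or (x1 and x3 and x4) or (x1 and x3 and x5)

def f_ro(x1, x2, x3, x4, x5):
    return x1 and (x2 or (x3 and (x4 or x5)))

# Exhaustive truth-table check: both forms agree on every assignment.
assert all(f_sop(*bits) == f_ro(*bits) for bits in product([False, True], repeat=5))
```

The same brute-force check works for the DSD and RPO examples, with exclusive-or (`^`) and negation (`not`) added as needed.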
|
272 |
Approches numérique multi-échelle/multi-modèle de la dégradation des matériaux composites / Multiscale / multimodel computational approach to the degradation of composite materials
Touzeau, Josselyn 30 October 2012 (has links)
Nos travaux concernent la mise en oeuvre d’une méthode multiéchelle pour faciliter la simulation numérique de structures complexes, appliquée à la modélisation de composants aéronautiques (notamment pour les pièces tournantes de turboréacteur et des structures composites stratifiées). Ces développements sont basés autour de la méthode Arlequin qui permet d’enrichir des modélisations numériques, à l’aide de patchs, autour de zones d’intérêt où des phénomènes complexes se produisent. Cette méthode est mise en oeuvre dans un cadre général permettant la superposition de maillages incompatibles au sein du code de calcul Z-set/Zébulon, en utilisant une formulation optimale des opérateurs de couplage. La précision et la robustesse de cette approche ont été évaluées sur différents problèmes numériques. Afin d’accroître les performances de la méthode Arlequin, un solveur spécifique basé sur les techniques de décomposition de domaine a été développé pour bénéficier des capacités de calcul offertes par les machines à architectures parallèles. Ces performances ont été évaluées sur différents cas tests académiques et quasi-industriels. Enfin, ces développements ont été appliqués à la simulation de problèmes de structures composites stratifiées. / Our work concerns the implementation of a multiscale method to facilitate the numerical simulation of complex structures, applied to the modeling of aircraft components (notably turbojet rotating parts and laminated composite structures). These developments are based on the Arlequin method, which allows numerical models to be enriched, using patches, around areas of interest where complex phenomena occur. This method is implemented in a general framework allowing the superposition of incompatible meshes in the Z-set/Zébulon finite element code, using an optimal formulation of the coupling operators. The accuracy and robustness of this approach were evaluated on various numerical problems. To increase the performance of the Arlequin method, a specific solver based on domain decomposition techniques has been developed to take advantage of the computing capabilities offered by parallel machine architectures. Its performance has been evaluated on various test cases, from academic to quasi-industrial. Finally, these developments have been applied to the simulation of problems involving laminated composite structures.
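The dedicated solver mentioned above relies on domain decomposition. The following sketch is neither the Arlequin coupling nor the Z-set/Zébulon implementation; it is only a classical overlapping Schwarz iteration on a 1D Poisson problem, included to illustrate the basic idea of replacing one global solve by repeated subdomain solves (the function name and problem sizes are arbitrary):

```python
import numpy as np

def schwarz_poisson(n=99, overlap=5, iters=50):
    """Overlapping (multiplicative) Schwarz iteration for -u'' = 1 on (0,1), u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    # Standard finite-difference Laplacian with Dirichlet boundary conditions.
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    f = np.ones(n)
    u = np.zeros(n)
    mid = n // 2
    dom1 = slice(0, mid + overlap)      # left subdomain (overlaps the right one)
    dom2 = slice(mid - overlap, n)      # right subdomain
    for _ in range(iters):
        for dom in (dom1, dom2):
            r = f - A @ u               # global residual
            # Local solve with homogeneous data on the artificial boundary.
            u[dom] += np.linalg.solve(A[dom, dom], r[dom])
    return u

u = schwarz_poisson()
```

In the additive variant both local solves use the same residual and can run in parallel, which is the property exploited on parallel architectures.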
|
273 |
Métodos computacionais para determinação de pontos de intersecção de n esferas no R^n / Computational methods for determination of points of intersection of n spheres in R^n
Gonçalves, Marcos Roberto da Silva 28 July 2008 (has links)
Orientadores: Carlile Campos Lavor, José Mario Martínez / Dissertação (mestrado profissional) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Resumo: Neste trabalho, abordamos o problema da determinação de pontos de intersecção de n esferas no R^n. Este problema, além de ser importante matematicamente, é um problema com muitas aplicações, que vão desde a localização de pontos no globo, pelo sistema GPS, até o posicionamento de átomos em estruturas moleculares. O problema de encontrar a intersecção de n esferas no R^n é, em geral, formulado como um conjunto de n equações não-lineares, onde se deseja determinar a sua solução através de um método eficiente e confiável. Mostramos que, com exceção de alguns casos, o problema é geralmente resolvido de forma eficaz, empregando técnicas de álgebra linear. Reformulamos o problema de forma a convertê-lo em um problema linear e apresentamos dois métodos baseados na decomposição de matrizes. Testamos os métodos para casos particulares de baixa dimensão, analisando o custo computacional e possíveis dificuldades que podem surgir devido a erros de medição. / Abstract: We consider the problem of determining the points of intersection of n spheres in R^n. This problem has many applications, such as the location of points on the globe by the GPS system and problems related to molecular geometry optimization. The problem of finding the intersection of n spheres in R^n is generally expressed as a set of nonlinear equations, where we want to establish an efficient and reliable method to find their solution. We show that, in general, the problem can be solved effectively employing techniques of linear algebra. We reformulate the problem in order to transform it into a linear problem and present two methods based on the decomposition of matrices. We also test the methods on small instances and analyze the computational cost and possible difficulties that may arise due to errors of measurement. / Mestrado / Mestre em Matemática
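To make the linearization idea concrete (a sketch only; the dissertation's two matrix-decomposition methods are not reproduced here, and the helper name and numerical example are invented), subtracting the first sphere equation from the others yields a linear system whose solution line, intersected with one sphere, gives at most two points:

```python
import numpy as np

def intersect_n_spheres(centers, radii):
    """Up-to-two intersection points of n spheres in R^n (illustrative sketch)."""
    c = np.asarray(centers, dtype=float)   # shape (n, n): one center per row
    r = np.asarray(radii, dtype=float)     # shape (n,)
    # Subtracting the first equation ||x - c_0||^2 = r_0^2 from the others gives
    # the linear system  2 (c_i - c_0) . x = r_0^2 - r_i^2 + ||c_i||^2 - ||c_0||^2.
    A = 2.0 * (c[1:] - c[0])
    b = (r[0]**2 - r[1:]**2) + (np.sum(c[1:]**2, axis=1) - np.sum(c[0]**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)   # particular solution of A x = b
    _, _, Vt = np.linalg.svd(A)
    d = Vt[-1]                                   # direction of the solution line
    # Substitute x = p + t d into the first sphere equation -> quadratic in t.
    u = p - c[0]
    aa, bb, cc = d @ d, 2.0 * (u @ d), u @ u - r[0]**2
    disc = bb**2 - 4.0 * aa * cc
    if disc < 0:
        return []                                # no real intersection (e.g. noisy radii)
    return [p + ((-bb + sign * np.sqrt(disc)) / (2.0 * aa)) * d for sign in (1.0, -1.0)]

# Example in R^3 (GPS-like trilateration with three unit spheres):
pts = intersect_n_spheres([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [1.0, 1.0, 1.0])
```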
|
274 |
Aqueous Solubilities and Water Induced Transformations of Halogenated Benzenes
Kim, In-Young 08 1900 (has links)
Methods of determining the aqueous solubilities of twelve chlorinated benzenes were evaluated in pure water and in different water matrices. In pure water, results were comparable with the calculated values. The more highly chlorinated tetrachlorobenzenes (TeCBs), pentachlorobenzenes (PCBz), and hexachlorobenzenes (HCBs) gave better precision and accuracy than the less chlorinated monochlorobenzenes (MCBs), dichlorobenzenes (DCBs), or trichlorobenzenes (TCBs).
|
275 |
The applicability of accumulated degree-day calculations on enclosed remains in a lotic aquatic environment
Stark, Sally C. 09 November 2019 (has links)
This study examined the differences in decomposition rates and the resulting postmortem submergence interval (PMSI) of stillborn pigs and decapitated adult pig heads enclosed in plastic trash bags. Sixteen neonate pigs were divided into two variable categories: exposed and submerged in water, or enclosed in a plastic trash bag and submerged in water. Upon recovery, each sample was assigned a Total Body Score. Eighteen decapitated adult pig heads were divided into two variable categories: nine heads were enclosed in plastic trash bags, and nine heads were left exposed in the water. Twelve decapitated pig heads were divided into two terrestrial variable categories: six heads were enclosed in plastic trash bags and allowed to decompose on land, and six heads were left exposed on land. Accumulated degree-days (ADD) were calculated following the scoring guides provided in Moffatt et al. (2016), Megyesi et al. (2005) and Heaton et al. (2010). These guides were used to create a baseline decomposition rate established from the control groups' decay rate. This baseline decomposition rate was then used to establish a measurable difference between exposed and enclosed samples. It was hypothesized that head samples submerged (enclosed/exposed) would decompose more slowly than the terrestrial samples (enclosed/exposed). It was further hypothesized that all enclosed/submerged samples would decompose more slowly than the exposed/terrestrial remains. A univariate analysis of variance (ANOVA) test found no statistically significant interactions between submerged, enclosed or exposed remains, indicating that enclosing remains in a plastic trash bag, whether or not followed by submergence, did not affect the decomposition rate of either sample. An additional ANOVA found statistically significant differences between the rate of neonate sample decomposition and adult head sample decomposition. Paired sample t-tests produced statistically significant results indicating that the ADD calculation methods developed by Megyesi et al. (2005) and Heaton et al. (2010) are inaccurate when applied to neonate-sized remains, decapitated heads, submerged (enclosed/exposed) samples, or terrestrial (enclosed/exposed) samples.
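For reference, accumulated degree-days are simply daily mean temperatures summed above a base temperature; the sketch below shows only that generic calculation, with made-up temperature values, and does not reproduce the scoring regressions of Megyesi et al. (2005), Heaton et al. (2010), or Moffatt et al. (2016), whose coefficients come from those papers:

```python
def accumulated_degree_days(daily_mean_temps_c, base_temp_c=0.0):
    """Sum of daily mean temperatures above a base temperature, in degree-days."""
    return sum(max(t - base_temp_c, 0.0) for t in daily_mean_temps_c)

# e.g. one week of water temperatures recorded at the recovery site (synthetic values)
add = accumulated_degree_days([12.5, 13.0, 11.8, 14.2, 15.0, 13.6, 12.9])
```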
|
276 |
Constitutive compatibility based identification of spatially varying elastic parameters distributions
Moussawi, Ali 12 1900 (has links)
The experimental identification of mechanical properties is crucial in mechanics for understanding material behavior and for the development of numerical models. Classical identification procedures employ standard-shaped specimens, assume that the mechanical fields in the object are homogeneous, and recover global properties. Thus, multiple tests are required for full characterization of a heterogeneous object, leading to a time-consuming and costly process. The development of non-contact, full-field measurement techniques from which complex kinematic fields can be recorded has opened the door to a new way of thinking. From the identification point of view, suitable methods can be used to process these complex kinematic fields in order to recover multiple spatially varying parameters through one test or a few tests. The requirement is the development of identification techniques that can process these complex experimental data. This thesis introduces a novel identification technique called the constitutive compatibility method. The key idea is to define stresses as compatible with the observed kinematic field through the chosen class of constitutive equation, making possible the uncoupling of the identification of stress from the identification of the material parameters. This uncoupling leads to parametrized solutions in cases where the solution is non-unique (due to unknown traction boundary conditions), as demonstrated on 2D numerical examples. First, the theory is outlined and the method is demonstrated in 2D applications. Second, the method is implemented within a domain decomposition framework in order to reduce the cost of processing very large problems. Finally, it is extended to 3D numerical examples. Promising results are shown for 2D and 3D problems.
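The uncoupling of stress identification from parameter identification can be illustrated on a deliberately trivial 1D analogue (a toy sketch, not the constitutive compatibility method itself): in a statically determinate bar, equilibrium fixes the axial force regardless of the material, so a measured full-field strain profile determines a spatially varying stiffness pointwise. The force value and strain field below are synthetic:

```python
import numpy as np

N = 1000.0                                        # axial force known from equilibrium alone [N]
x = np.linspace(0.0, 1.0, 200)                    # measurement points along the bar
eps = 1e-3 * (1.0 + 0.5 * np.sin(2 * np.pi * x))  # synthetic "measured" full-field strain
EA = N / eps                                      # recovered spatially varying stiffness E(x)*A [N]
```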
|
277 |
Scaling of Spectra of Cantor-Type Measures and Some Number Theoretic Considerations
Kraus, Isabelle 01 January 2017 (has links)
We investigate some relations between number theory and spectral measures related to the harmonic analysis of a Cantor set. Specifically, we explore ways to determine when an odd natural number m generates a complete or incomplete Fourier basis for a Cantor-type measure with scale g.
|
278 |
Experimental Study of Two-Phase Cavitating Flows and Data Analysis
Ge, Mingming 25 May 2022 (has links)
Cavitation can be defined as the breakdown of a liquid medium (either static or in motion) under very low pressure. Hydrodynamic cavitation occurs in high-speed flows, where the local pressure in the liquid falls below the saturation pressure and the liquid vaporizes to form cavities. During the growth and collapse of cavitation bubbles, extreme physical conditions such as high temperatures, high pressures, shock waves, and high-speed micro-jets can be generated. This phenomenon should be prevented in hydraulic or astronautical machinery because of the erosion and noise it induces, while it can be utilized to intensify some treatment processes in the chemical, food, and pharmaceutical industries, shortening sterilization times and lowering energy consumption. Advances in the understanding of the physical processes of cavitating flows are challenging, mainly due to the lack of quantitative experimental data on the two-phase structures and dynamics inside the opaque cavitation areas. This dissertation aims to identify the physical mechanisms governing cavitation instabilities and to contribute to the control of hydraulic cavitation in engineering applications. In this thesis, cavitation developed in various convergent-divergent (Venturi) channels was studied experimentally using ultra-fast synchrotron X-ray imaging, LIF particle image velocimetry, and high-speed photography techniques, to (1) investigate the internal structures and evolution of bubble dynamics in cavitating flows, with velocity information obtained for both phases; (2) measure the slip velocity between the liquid and the vapor to provide validation data for numerical cavitation models; (3) consider the thermodynamic effects of cavitation, establish the relation between the cavitation extent and the fluid temperature, and optimize the cavitation working condition in water; and (4) seek the coherent structures of the complicated, highly turbulent cavitating flow to reduce its randomness using data-driven methods. / Doctor of Philosophy / When the pressure of a liquid falls below its saturation pressure, the liquid vaporizes into vapor bubbles, a phenomenon called cavitation. In many hydraulic machines such as pumps, propulsion systems, internal combustion engines, and rocket engines, this phenomenon is quite common and can damage the mechanical system. To understand the mechanisms and further control cavitation, investigation of bubble inception, deformation, collapse, and flow regime change is mandatory. Here, we performed fluid mechanics experiments to study the physics underlying the unsteady cavitating flow as it occurs past the throat of a Venturi nozzle. Because this two-phase flow is opaque, an X-ray imaging technique is applied to visualize the internal flow structures at micrometer scales with minor beam scattering. Finally, we provide the latest physical model to explain the different regimes that appear in cavitation. The relationship between the cavitation length and its shedding regimes, and the dominant mechanism governing the transition of regimes, are described. A combined suppression parameter is developed that can be used to enhance or suppress the cavitation intensity, taking into account the influence of temperature.
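One standard data-driven tool for extracting coherent structures from high-speed image or velocimetry sequences is proper orthogonal decomposition (POD) computed with the singular value decomposition; the sketch below is a generic POD of a snapshot matrix and is not claimed to be the specific analysis performed in the dissertation (the data at the end are placeholders):

```python
import numpy as np

def pod_modes(snapshots, n_modes=5):
    """POD of a snapshot matrix of shape (n_points, n_snapshots).

    Each column is one flattened velocity (or void-fraction) field.
    Returns the leading spatial modes, their energy fractions, and time coefficients.
    """
    X = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)                  # fluctuation energy per mode
    modes = U[:, :n_modes]                        # spatial structures
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]  # temporal coefficients
    return modes, energy[:n_modes], coeffs

# Example with synthetic data: 1000 spatial points, 200 snapshots.
X = np.random.rand(1000, 200)
modes, energy, coeffs = pod_modes(X, n_modes=3)
```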
|
279 |
Analysis of a nonhierarchical decomposition algorithm
Shankar, Jayashree 19 September 2009 (has links)
Large-scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs.
Here, the algorithm is carefully analyzed by testing it on simple quadratic programs, thereby identifying its shortcomings. Different modifications are made to improve its robustness, and the best version is tested on a higher-dimensional example. Some of the changes are fundamental, affecting the updating of the various tuning parameters present in the original algorithm.
The algorithm solves a given problem by dividing it into subproblems, followed by a final coordination phase. The results indicate good success with small problems. On testing it with a higher-dimensional example, it was discovered that there is a basic flaw in the coordination phase which needs to be rectified. / Master of Science
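To fix intuition about splitting an optimization problem into subproblems (a generic block Gauss-Seidel sketch on a small quadratic program; this is not Sobieszczanski-Sobieski's nonhierarchical algorithm and has no explicit coordination phase), each block of variables is optimized in turn with the other block held fixed:

```python
import numpy as np

# Toy convex QP: minimize 0.5 x^T Q x - c^T x, with the variables split into
# two blocks that are optimized alternately; coupling enters through the
# off-diagonal blocks of Q (Q is symmetric positive definite).
Q = np.array([[4.0, 1.0, 0.5, 0.0],
              [1.0, 3.0, 0.0, 0.5],
              [0.5, 0.0, 2.0, 1.0],
              [0.0, 0.5, 1.0, 5.0]])
c = np.array([1.0, 2.0, 3.0, 4.0])
blocks = [slice(0, 2), slice(2, 4)]
x = np.zeros(4)
for _ in range(100):
    for blk in blocks:
        # Subproblem: minimize over this block with the other block fixed.
        rhs = c[blk] - Q[blk, :] @ x + Q[blk, blk] @ x[blk]
        x[blk] = np.linalg.solve(Q[blk, blk], rhs)
# x now approximates the global minimizer Q^{-1} c.
```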
|
280 |
Complex Analysis on Planar Cell Complexes
Arnold, Rachel Florence 28 May 2008 (has links)
This paper is an examination of the theory of discrete complex analysis that arises from the framework of a planar cell complex. Construction of this theory is largely integration-based. A combination of two cell complexes, the double and its associated diamond complex, allows for the development of a discrete Cauchy Integral Formula. / Master of Science
|