81
Nouvelles méthodes de traitement de signaux multidimensionnels par décomposition suivant le théorème de Superposition de Kolmogorov / Novel processing methods for multidimensional signals using decompositions by the Kolmogorov superposition theorem
Leni, Pierre-Emmanuel, 23 November 2010 (has links)
Le traitement de signaux multidimensionnels reste un problème délicat lorsqu'il s'agit d'utiliser des méthodes conçues pour traiter des signaux monodimensionnels. Il faut alors étendre les méthodes monodimensionnelles à plusieurs dimensions, ce qui n'est pas toujours possible, ou bien convertir les signaux multidimensionnels en signaux 1D. Dans ce cas, l'objectif est de conserver le maximum des propriétés du signal original. Dans ce contexte, le théorème de superposition de Kolmogorov fournit un cadre théorique prometteur pour la conversion de signaux multidimensionnels. En effet, en 1957, Kolmogorov a démontré que toute fonction multivariée pouvait s'écrire comme sommes et compositions de fonctions monovariées. Notre travail s'est focalisé sur la décomposition d'images suivant le schéma proposé par le théorème de superposition, afin d'étudier les applications possibles de cette décomposition au traitement d'image. Pour cela, nous avons tout d'abord étudié la construction des fonctions monovariées. Ce problème a fait l'objet de nombreuses études, et récemment, deux algorithmes ont été proposés. Sprecher a proposé dans [Sprecher, 1996; Sprecher, 1997] un algorithme dans lequel il décrit explicitement la méthode pour construire exactement les fonctions monovariées, tout en introduisant des notions fondamentales à la compréhension du théorème. Par ailleurs, Igelnik et Parikh ont proposé dans [Igelnik and Parikh, 2003; Igelnik, 2009] un algorithme pour approximer les fonctions monovariées par un réseau de splines. Nous avons appliqué ces deux algorithmes à la décomposition d'images. Nous nous sommes ensuite focalisés sur l'étude de l'algorithme d'Igelnik, qui est plus facilement modifiable et offre une représentation analytique des fonctions, pour proposer deux applications originales répondant à des problématiques classiques de traitement de l'image. Pour la compression : nous avons étudié la qualité de l'image reconstruite par un réseau de splines généré avec seulement une partie des pixels de l'image originale. Pour améliorer cette reconstruction, nous avons proposé d'effectuer cette décomposition sur des images de détails issues d'une transformée en ondelettes. Nous avons ensuite combiné cette méthode à JPEG 2000, et nous montrons que nous améliorons ainsi le schéma de compression JPEG 2000, même à bas bitrates. Pour la transmission progressive : en modifiant la génération du réseau de splines, l'image peut être décomposée en une seule fonction monovariée. Cette fonction peut être transmise progressivement, ce qui permet de reconstruire l'image en augmentant progressivement sa résolution. De plus, nous montrons qu'une telle transmission est résistante à la perte d'information. / The processing of multidimensional signals remains difficult when using monodimensional-based methods. Therefore, it is either required to extend monodimensional methods to several dimensions, which is not always possible, or to convert the multidimensional signals into 1D signals. In this case, the priority is to preserve most of the properties of the original signal. In this context, the Kolmogorov Superposition Theorem offers a promising theoretical framework for multidimensional signal conversion. In 1957, Kolmogorov demonstrated that any multivariate function can be written as sums and compositions of monovariate functions. We have focused on image decomposition according to the superposition theorem scheme, in order to study the possible applications of this decomposition to image processing.
We have first studied the construction of the monovariate functions. Various studies have dealt with this problem, and recently two algorithms have been proposed. Sprecher proposed in [Sprecher, 1996; Sprecher, 1997] an algorithm in which the method to exactly build the monovariate functions is described, along with notions fundamental to the understanding of the theorem. Igelnik and Parikh proposed in [Igelnik and Parikh, 2003; Igelnik, 2009] an algorithm to approximate the monovariate functions by a spline network. We have applied both algorithms to image decomposition. We have chosen to use Igelnik's algorithm, which is easier to modify and provides an analytic representation of the functions, to propose two novel applications for classical problems in image processing. For compression: we have studied the quality of a reconstructed image using a spline network built with only a fraction of the pixels of the original image. To improve this reconstruction, we have proposed to apply this decomposition to detail images obtained by wavelet transform. We have then combined this method with JPEG 2000, and we show that the JPEG 2000 compression scheme is improved, even at low bitrates. For progressive transmission: by modifying the spline network construction, the image can be decomposed into a single monovariate function. This function can be transmitted progressively, which makes it possible to reconstruct the image at progressively increasing resolution. Moreover, we show that such a transmission is resilient to information loss.
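For reference, the decomposition scheme referred to above is the classical statement of the Kolmogorov superposition theorem: every continuous function f on [0,1]^d can be written with 2d+1 outer monovariate functions g_q and inner monovariate functions psi_{q,p}:

```latex
% Kolmogorov superposition theorem (1957): for continuous f on [0,1]^d
f(x_1, \dots, x_d) \;=\; \sum_{q=0}^{2d} g_q\!\left( \sum_{p=1}^{d} \psi_{q,p}(x_p) \right)
```

For an image, d = 2 and the outer sum has five terms; Sprecher's algorithm constructs the inner functions explicitly, while Igelnik's algorithm approximates the monovariate functions by spline networks.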
82
Mexický svátek Día de Muertos: jeho vývoj a percepce pohledem mexické identity / The Mexican Feast of Día de Muertos: Its Evolution and Perception Via the Perspective of Mexican Identity
Kalkusová, Tereza, January 2017 (has links)
The thesis follows a thematic line through selected works dealing with the celebration of the Day of the Dead and with various manifestations of the Mexican attitude toward death. It focuses both on how these have changed over time and on the sources from which the Mexican perspective on death, as part of the national ideological construct, springs (existentialism, rural fatalism, the Mexican Revolution, the "philosophy of the Mexican character", and the tradition of calavera poems and graphics). The writings and genres studied are: Nahua lyric poetry; the tradition of calavera poems and graphics; José Gorostiza's long metaphysical poem Death Without End; selected chapters from the essayistic work of Octavio Paz, Carlos Fuentes and Carlos Monsiváis; Juan Rulfo's novel Pedro Páramo; and Carlos Fuentes's novella Aura.
83
Theory of symmetry and asymmetry in two-dimensional magnetic recording heads
Edress Mohamed, Ammar Isam, January 2016 (has links)
As part of the natural evolution and continued optimisation of their designs, current and future magnetic recording heads, used and proposed in technologies such as perpendicular recording, shingled magnetic recording and two-dimensional magnetic recording, often exhibit asymmetry in their structure. They consist of two semi-infinite poles separated by a gap (where the recording field is produced), with the inner gap faces inclined at an angle. Modelling the fields from asymmetrical structures is complex, and no explicit solutions are currently available (only implicit conformal mapping solutions exist, for rational inclination angles). Moreover, there is limited understanding of the correlation between the gap corner angle and the magnitude, distribution and wavelength response of these head structures. This research therefore set out to investigate approximate analytical and semi-analytical methods for modelling the magnetic potentials and fields of two-dimensional symmetrical and asymmetrical magnetic recording heads, and to deliver a quantitative understanding of the behaviour of the potentials and fields as functions of gap corner angle. The accuracy of the derived expressions (quantified in terms of the normalised root-mean-square deviation) was assessed by comparison to the exact solutions available for limited cases, and to finite-element calculations in Comsol Multiphysics. Two analytical methods were derived to approximately model the fields from two-dimensional heads with tilted gap corners in the presence and absence of a soft magnetic underlayer (SUL). In the first method, the potential near a single, two-dimensional corner held at a constant potential is derived exactly through solution of Laplace's equation for the scalar potential in polar coordinates. Then, through appropriate choice of enclosing boundary conditions, the potentials and fields of two corners at equal and opposite potentials, displaced from each other by a distance equal to the gap length, were superposed to map the potential and field of asymmetrical and symmetrical heads. For asymmetrical heads, the superposition approximation provided good agreement with finite-element calculations over the limited range of exterior corner angles from 0° (right-angled corner) to 45°, due to the mismatch of surface charge densities on the two poles for this geometry. For symmetrical head structures, the superposition approximation was found to yield remarkable agreement with exact solutions for all gap corner orientations from 0° (right-angled head) to 90° ("thin" gap head). The second method derived in this research for modelling asymmetrical heads involved using a rational function approximation with free parameters to model the surface potential of asymmetrical heads. The free parameters and their functional dependence on corner angle were determined through fitting to finite-element calculations, enabling the derivation of analytical expressions for the magnetic fields that are in good agreement with exact solutions for all corner angles (0° to 90°). To complement the two approximate methods for modelling the fields from asymmetrical and symmetrical heads, a new general approach based on the sine integral transform was derived to model the reaction of soft underlayers on the surface potential or field of any two-dimensional head structure, for sufficiently close head-to-underlayer separations.
This method produces an infinite series of correction terms, whose coefficients are functions of the head-to-underlayer separation and the gap corner angle, that are added to the surface potential or field in the absence of an underlayer. This new approach demonstrated good agreement with finite-element calculations for sufficiently close head-to-underlayer separations, and with the classical Green's function solutions at larger separations. Using the analytical methods and explicit expressions derived in this work, an understanding of the nature of the magnetic fields and their spectra as functions of the gap corner angles is gained. This understanding and analytical theory will benefit the modelling, design and optimisation of high-performance magnetic recording heads.
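As a sketch of the starting point of the first method (the standard separation-of-variables result, with the notation below assumed rather than taken from the thesis): in polar coordinates (r, θ) centred on a corner held at constant potential V, with the field region spanning an angle β between the two faces, the solutions of Laplace's equation that are constant on the faces are

```latex
% scalar potential near a single corner at potential V;
% the pole faces lie at theta = 0 and theta = beta
\nabla^2 \phi = 0, \qquad
\phi(r,\theta) \;=\; V \;+\; \sum_{n=1}^{\infty} A_n\, r^{\,n\pi/\beta} \sin\!\left(\frac{n\pi\theta}{\beta}\right)
```

The leading exponent π/β is what couples the near-gap field behaviour to the corner angle; superposing two such corners at equal and opposite potentials, displaced by the gap length, yields the head-field approximation described above, with the coefficients A_n fixed by the chosen enclosing boundary conditions.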
84
HARQ Systems: Resource Allocation, Feedback Error Protection, and Bits-to-Symbol Mappings
Tumula V. K., Chaitanya, January 2013 (has links)
Reliability of data transmission is a fundamental problem in wireless communications. Fading in wireless channels causes the signal strength to vary at the receiver, and this results in loss of data packets. To improve reliability, automatic repeat request (ARQ) schemes were introduced. However, these ARQ schemes suffer from reduced throughput. To address the throughput reduction, conventional ARQ schemes were combined with forward error correction (FEC) schemes to develop hybrid-ARQ (HARQ) schemes. For improving the reliability of data transmission, HARQ schemes are included in present wireless standards like LTE, LTE-Advanced and WiMAX. Conventional HARQ systems use the same transmission power and the same number of channel uses in different ARQ rounds. However, this is not optimal in terms of minimizing the average transmit power or the average energy spent for successful transmission of a data packet. We address this issue in the first part of the dissertation, where we consider optimal resource allocation in HARQ systems with a limit on the maximum number of allowed transmissions for a data packet. Specifically, we consider the problem of minimizing the packet drop probability (PDP) under an average transmit power constraint, or equivalently minimizing the average transmit power under a fixed PDP constraint. We consider both incremental redundancy (IR)-based and Chase combining (CC)-based HARQ systems in our work. For an IR-HARQ system, in the special case of two allowed transmissions for each packet, we provide a solution for the optimal number of channel uses and the optimal power to be used in each ARQ round. For a CC-HARQ system, we solve the problem of optimal power allocation in i.i.d. Rayleigh fading channels as well as correlated Rayleigh fading channels. For the CC-HARQ case, we also provide a low-complexity geometric programming (GP) solution using an approximation of the outage probability expression. HARQ systems conventionally use one-bit acknowledgement (ACK)/negative acknowledgement (NACK) feedback from the receiver to the transmitter. In 3GPP-LTE systems, one method for sending these HARQ acknowledgement bits is to jointly code them with the other control signaling information using a specified Reed-Muller code consisting of 20 coded bits. Even though the resources used for sending this control signaling information can inherently provide a diversity gain, the Reed-Muller code with such a short block size is not good at extracting all of the available diversity. To address this issue, in the second part of this dissertation, we propose two new methods: i) based on complex-field coding (CFC), and ii) using repetition across frequency bands, to extract the inherent diversity available in the channel resources and improve the error protection for the HARQ acknowledgement bits along with the other control signaling information. In the second part of the dissertation, we also propose a new signal space diversity (SSD) scheme, which results in transmit signals having a constant envelope (CE). The proposed CE-SSD scheme results in better overall power efficiency due to the reduced back-off requirements on the radio frequency power amplifier. Moreover, the proposed CE-SSD technique can be useful for application scenarios involving transmission of a small number of information bits, such as control signaling information.
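As a rough illustration of the CC-HARQ setting (a minimal Monte Carlo sketch, not the dissertation's optimization; the power vectors, target rate and unit noise power are assumed for the example, and early ACK termination is ignored):

```python
import numpy as np

rng = np.random.default_rng(0)

def cc_harq_pdp(powers, rate, n_trials=200_000):
    """Monte Carlo estimate of the packet drop probability (PDP) of a
    Chase-combining HARQ system over i.i.d. Rayleigh fading.

    powers : per-round transmit powers (noise power normalized to 1)
    rate   : target spectral efficiency in bits per channel use

    Chase combining accumulates SNR across rounds via maximum-ratio
    combining; the packet is dropped if the fully combined SNR after
    the last allowed round still cannot support the target rate.
    """
    powers = np.asarray(powers, dtype=float)
    # |h|^2 is exponentially distributed with unit mean under Rayleigh fading
    gains = rng.exponential(1.0, size=(n_trials, powers.size))
    combined_snr = gains @ powers
    return np.mean(np.log2(1.0 + combined_snr) < rate)

# two hypothetical three-round allocations with the same total power
print(cc_harq_pdp([1.0, 1.0, 1.0], rate=2.0))  # equal power per round
print(cc_harq_pdp([0.6, 1.0, 1.4], rate=2.0))  # ramped power per round
```

Optimizing such a PDP expression under an average-power constraint, rather than the fixed total assumed here, is where the geometric programming formulation mentioned above enters.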
In conventional HARQ systems, during the retransmission phase, the channel resources are used exclusively for the retransmitted data packet. This is not optimal in terms of efficient resource utilization. For efficient utilization of channel resources during retransmissions, a superposition coding (SPC) based HARQ scheme was proposed in the literature. In an SPC-based HARQ system, an erroneous packet is transmitted together with a new data packet by superposition in the Euclidean space. In the final part of this dissertation, we study the performance of different bits-to-symbol mappings for such an SPC-based HARQ system.
85
Sensitivity of Aeroelastic Properties of an Oscillating LPT Cascade
Glodic, Nenad, January 2013 (has links)
Modern turbomachinery design is characterized by a tendency towards thinner, lighter and highly loaded blades, which in turn gives rise to increased sensitivity to flow-induced vibration such as flutter. Flutter is a self-excited and self-sustained instability phenomenon that may lead to structural failure due to High Cycle Fatigue (HCF) or material overload. In order to predict potential flutter situations, it is necessary to accurately assess the unsteady aerodynamics during flutter and to understand the physics behind its driving mechanisms. Current numerical tools used for predicting the unsteady aerodynamics of vibrating turbomachinery components are capable of modeling the flow field at a high level of detail, but may fail to predict the correct unsteady aerodynamics under certain conditions. Continuous validation of numerical models against experimental data therefore plays a significant role in improving the prediction accuracy and reliability of the models. In flutter investigations, it is common to consider aerodynamically symmetric (tuned) setups. Due to manufacturing tolerances, assembly inaccuracies and in-service wear, the aerodynamic properties in a blade row may become asymmetric. Such asymmetries can be observed in both the steady and the unsteady aerodynamic properties, and it is of great interest to understand the effects this may have on the aeroelastic stability of the system. Under certain conditions the vibratory modes of realistic blade profiles tend to be coupled, i.e. a given mode of vibration includes displacements perpendicular and parallel to the chord as well as torsion of the profile. Current design trends for compressor blades, which result in low-aspect-ratio blades, potentially reduce the frequency spacing between certain modes (e.g. 2F and 1T). Combined modes are also likely to occur in the vibration of a bladed disk with a comparatively soft disk and rigid blades, or when blades are tied together in sectors (e.g. in turbines). The present investigation focuses on two areas that are important for improving the understanding of the aeroelastic behavior of oscillating blade rows. Firstly, the aeroelastic properties of combined mode shapes in an oscillating Low Pressure Turbine (LPT) cascade were studied and the validity of the mode superposition principle was assessed. Secondly, the effects of aerodynamic mistuning on the aeroelastic properties of the cascade were addressed; the aerodynamic mistuning considered here is caused by blade-to-blade stagger angle variations. The work has been carried out as a combined experimental and numerical investigation, where numerical results are validated against test data. On the experimental side, a test facility comprising an annular sector of seven free-standing LPT blades is used. The aeroelastic response phenomena were studied in the influence coefficient domain, where one of the blades is made to oscillate in three-dimensional pure or combined modes, while the unsteady blade surface pressure is acquired on the oscillating blade itself and on the non-oscillating neighbor blades. On the numerical side, a series of numerical simulations was carried out using a commercial CFD code on a full-scale time-marching 3D viscous model. In accordance with the experimental part, the simulations are performed using the influence coefficient approach, with only one blade oscillating.
The results of the combined-mode studies support the validity of combining the aeroelastic properties of two modes over the investigated range of operating parameters. Quality parameters, indicating differences in mean absolute and imaginary values of the unsteady response between combined-mode data and superposed data, take values well below the measurement accuracy of the setup. The findings of the aerodynamic mistuning investigations indicate that the effect of de-staggering a single blade on the steady aerodynamics in the cascade seems to be predominantly an effect of the change in passage throat. The changes in steady aerodynamics carry over to the unsteady aerodynamics, where distinct effects on flow velocity lead to changes in the local unsteady pressure coefficients. In order to assess the overall aeroelastic stability of a randomly mistuned blade row, a Reduced Order Model (ROM) is introduced, allowing for probabilistic analyses. From the analyses, a destabilizing effect of aero-asymmetries was observed; however, the effect was of moderate magnitude.
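For context, the influence coefficient approach rests on a standard linear superposition relation: assuming the unsteady flow responds linearly, the unsteady pressure coefficient on a reference blade in a traveling-wave mode with interblade phase angle σ is assembled from the influence coefficients obtained with only blade n oscillating (the notation here is generic, not the thesis's):

```latex
% traveling-wave response from influence coefficients;
% n = 0 is the oscillating reference blade, n = ±1, ±2, ... its neighbors
\hat{c}_p^{\,\mathrm{tw}}(\sigma) \;=\; \sum_{n=-N}^{N} \hat{c}_p^{\,\mathrm{ic},\,n}\; e^{\,\mathrm{i}\, n \sigma}
```

With a seven-blade sector and the middle blade oscillating, N = 3; the aerodynamic damping at each σ then follows from the imaginary part of the resulting unsteady response.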
86
Modeling the High Strain Rate Tensile Response and Shear Failure of Thermoplastic Composites
Umberger, Pierce David, 25 September 2013 (has links)
The high strain rate fiber-direction tensile response of Ultra High Molecular Weight Polyethylene (UHMWPE) composites is of interest in applications where impact damage may occur. This response varies substantially with strain rate; however, physical testing of these composites is difficult at strain rates above 10^-1/s. A Monte Carlo simulation of composite tensile strength is constructed to estimate the tensile behavior of these composites. Load redistribution in the vicinity of fiber breaks varies according to fiber and matrix properties, which are in turn strain rate dependent. The distribution of fiber strengths is obtained from single-fiber tests at strain rates ranging from 10^-4/s to 10^-1/s and shifted using the time-Temperature Superposition Principle (tTSP) to strain rates of 10^-4/s to 10^6/s. Other fiber properties are obtained from the same tests but are assumed to be deterministic. Matrix properties are also assumed to be deterministic and are obtained from mechanical testing of neat matrix material samples. Simulation results are compared to experimental data for unidirectional lamina at strain rates up to 10^-1/s.
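The sketch below conveys the flavor of such a Monte Carlo strength simulation under simplifying assumptions that differ from the dissertation's model (equal load sharing among surviving fibers instead of local, rate-dependent redistribution; hypothetical Weibull parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

def bundle_strength(n_fibers, weibull_shape, weibull_scale):
    """Strength of one fiber bundle with Weibull fiber strengths and
    equal load sharing: as fibers break, load is shared evenly among
    survivors (Daniels' classical bundle model)."""
    s = np.sort(rng.weibull(weibull_shape, n_fibers) * weibull_scale)
    # when the (k+1)-th weakest fiber is about to break, n - k fibers
    # survive, so the bundle carries a per-fiber stress of s[k]*(n-k)/n
    k = np.arange(n_fibers)
    return np.max(s * (n_fibers - k) / n_fibers)

# distribution of bundle strengths over many random realizations
strengths = [bundle_strength(500, 5.0, 3.0) for _ in range(2000)]
print(np.mean(strengths), np.std(strengths))
```

In the dissertation's setting, the fiber strength distribution itself is shifted with tTSP before sampling, which is how the strain rate dependence enters the simulation.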
Above 10^-1/s, simulation results are compared to experimental data shifted using tTSP. Similarly, the through-thickness shear response of UHMWPE composites is of interest to support computational modeling of impact damage. In this study, punch shear testing of UHMWPE composites is conducted to determine shear properties. Two test fixtures, one allowing and one preventing backplane curvature, are used in conjunction with finite element modeling to investigate the stress state under punch shear loading and the resulting shear strength of the composite. / Ph. D.
87
The Application of Linear Superposition Method on Water Distribution Systems Analysis of Contaminant Intrusion Events
Jia, Xiaoyuan, 18 September 2012 (has links)
No description available.
88
The Role of Penetrant Structure on the Transport and Mechanical Properties of a Thermoset Adhesive
Kwan, Kermit S. Jr., 24 August 1998 (has links)
In this work, the relationships between penetrant structure, its transport properties, and its effects on the mechanical properties of a polymer matrix were investigated. Although there is a vast amount of data on the diffusion of low molecular weight molecules into polymeric materials and on the mechanical properties of various polymer-penetrant systems, no attempts have been made to interrelate the two properties with respect to the chemical structure of the diffusant. Therefore, two series of penetrants - n-alkanes and esters - were examined in this context, with the goal of correlating the molecular size, shape, and chemical nature of the penetrant to its transport properties and its effects on the mechanical properties of the matrix. These correlations have been demonstrated to allow quantitative prediction of one property, given a reasonable set of data on the other parameters.
A series of n-alkanes (C6-C17) and esters (C5-C17) was used to separate the effects of penetrant size and shape from those due to polymer-penetrant interactions in diffusion through a polyamide adhesive. These effects have been taken into account to yield a qualitative relationship that allows prediction of diffusivity based upon penetrant structural information. Transport properties have been analyzed using mass uptake experiments as well as an in-situ FTIR-ATR technique, providing detailed kinetic as well as thermodynamic information on this process.
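For reference, mass-uptake kinetics of this kind are commonly reduced with the classical Fickian solution for a plane sheet (a standard model stated here for orientation, with sheet thickness l and diffusivity D assumed constant):

```latex
% Crank's solution for fractional mass uptake of a plane sheet
\frac{M_t}{M_\infty} \;=\; 1 \;-\; \sum_{n=0}^{\infty} \frac{8}{(2n+1)^2 \pi^2}\,
\exp\!\left(-\frac{(2n+1)^2 \pi^2 D t}{l^2}\right)
```

At short times this reduces to M_t/M_∞ ≈ (4/l)(Dt/π)^{1/2}, so D is typically extracted from the initial slope of uptake plotted against the square root of time.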
The phenomenon of diffusion and its effects on the resulting dynamic mechanical response of a matrix polymeric adhesive have been studied in great detail using the method of reduced variables. The concept of a diffusion-time shift factor (log a_Dt) has been introduced to create doubly-reduced master curves, taking into account the effects of temperature and the variations in the polymer's mechanical response due to the presence of a low molecular weight penetrant. / Ph. D.
89
Exploring the dynamics and dark halos of elliptical galaxies at large radii
Forestell, Amy Dove, 23 October 2009 (has links)
Dark matter is now accepted as an integral part of our universe, and galaxy dynamics have long provided the most convincing observational evidence for dark matter. Spiral galaxies have traditionally been used for these studies because of their simpler kinematics; however, elliptical galaxies need to be understood as well. In this dissertation I present deep long-slit spectroscopy from the University of Texas' Hobby-Eberly Telescope for a sample of elliptical galaxies. For a subsample of galaxies I fit axisymmetric orbit-superposition models with a range of dark halo density profiles. I find that all three galaxies modeled require a significant dark halo to explain their motions. However, the shape of the dark halo is not the expected NFW profile, but rather a profile with a flat central slope. I also discuss the galaxy masses, anisotropies, and stellar mass-to-light ratios.
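For reference, the NFW profile mentioned above is the standard two-parameter form, with scale density ρ_s and scale radius r_s:

```latex
% Navarro-Frenk-White dark halo density profile
\rho_{\mathrm{NFW}}(r) \;=\; \frac{\rho_s}{\left(r/r_s\right)\left(1 + r/r_s\right)^2}
```

Its inner density rises as r^{-1} toward the center; the finding here is that the data instead prefer halos whose central density profile is flat.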
90
Hardness of Constraint Satisfaction and Hypergraph Coloring: Constructions of Probabilistically Checkable Proofs with Perfect Completeness
Huang, Sangxia, January 2015 (has links)
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems. This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation of the goal that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable, where the exact meaning of being far from satisfiable varies depending on the problem. We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some epsilon fraction of the constraints. Intuitively, as the arity increases, the CSP gets more complex and thus the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{~O(k^{1/3})}/2^k-satisfiable. We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices such that all edges or hyperedges are non-monochromatic. The gap problem is to distinguish instances that are colorable with a small number of colors from those that require a large number of colors. For graphs, we prove that there exists a constant K_0 > 0 such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors. In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, but it can also be an essential requirement in many applications. It has always been particularly challenging to construct PCPs with perfect completeness for NP statements due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. Our Boolean CSP result and graph coloring result were proved by adapting the Direct Sum of PCPs idea of Siu On Chan to the perfect completeness setting. Our proof of hypergraph coloring hardness improves and simplifies the recent work of Khot and Saket, in which they proposed the notion of superposition complexity of CSPs.
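To make the gap formulation concrete (a standard definition, instantiated with the parameters of the Boolean CSP result above): writing val(I) for the largest fraction of constraints of instance I satisfiable by any single assignment, the problem shown NP-hard is

```latex
% Gap-CSP with perfect completeness
\mathrm{Gap}_{1,\varepsilon}\text{-}\mathrm{CSP}: \quad
\text{given an arity-}k\text{ Boolean CSP instance } I,\ \text{decide whether }
\mathrm{val}(I) = 1 \ \text{ or } \ \mathrm{val}(I) \le \varepsilon,
\qquad \varepsilon = \frac{2^{\tilde{O}(k^{1/3})}}{2^k}
```

For comparison, a uniformly random assignment satisfies any arity-k constraint that has at least one accepting assignment with probability at least 2^{-k}, so epsilon cannot be pushed below 2^{-k} and the bound above is within a 2^{~O(k^{1/3})} factor of that trivial limit.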
/ Ett probabilistiskt verifierbart bevis (eng: Probabilistically Checkable Proof, PCP) av en matematisk sats är ett bevis skrivet på ett speciellt sätt vilket möjliggör en effektiv probabilistisk verifiering. Den berömda PCP-satsen säger att för varje familj av påståenden i NP finns det en probabilistisk verifierare som kontrollerar om ett PCP-bevis är giltigt genom att läsa endast 3 bitar från det. Denna banbrytande sats, och arbetena som ledde fram till den, lade grunden för många senare arbeten inom komplexitetsteorin, framförallt inom studiet av approximerbarhet av kombinatoriska optimeringsproblem. I denna avhandling fokuserar vi på en bred klass av optimeringsproblem i form av villkorsuppfyllningsproblem (engelska "Constraint Satisfaction Problems", CSPs). En instans av ett CSP av aritet k ges av en mängd variabler som tar värden från någon ändlig domän, och ett antal villkor som vart och ett beror på en delmängd av högst k variabler. Målet är att hitta en tilldelning av variablerna som samtidigt uppfyller så många som möjligt av villkoren. En alternativ formulering av målet som ofta används är Gap-CSP, där målet är att avgöra om en CSP-instans är satisfierbar eller långt ifrån satisfierbar, där den exakta innebörden av att vara "långt ifrån satisfierbar" varierar beroende på problemet. Först studerar vi booleska CSPer, där domänen är {0,1}. Den fråga vi studerar är svårigheten av att särskilja satisfierbara booleska CSP-instanser från instanser där den bästa tilldelningen satisfierar högst en andel epsilon av villkoren. Intuitivt, när ariteten ökar blir CSP mer komplexa och därmed bör svårighetsparametern epsilon avta med ökande aritet. Detta visar sig vara sant och ett första resultat är att för booleska CSP av aritet k är det NP-svårt att särskilja satisfierbara instanser från dem som är högst 2^{~O(k^{1/3})}/2^k-satisfierbara. Vidare studerar vi färgläggning av grafer och hypergrafer. Givet en graf eller en hypergraf är en färgläggning en tilldelning av färger till noderna, så att ingen kant eller hyperkant är monokromatisk. Problemet vi analyserar är att särskilja instanser som är färgbara med ett litet antal färger från dem som behöver många färger. För grafer visar vi att det finns en konstant K_0 > 0, så att för alla K >= K_0 är det NP-svårt att särskilja grafer som är K-färgbara från dem som kräver minst 2^{Omega(K^{1/3})} färger. För hypergrafer visar vi att det är kvasi-NP-svårt att särskilja 2-färgbara 8-likformiga hypergrafer som har N noder från dem som kräver minst 2^{(log N)^{1/4-o(1)}} färger. Samtliga dessa resultat bygger på konstruktioner av PCPer med perfekt fullständighet, det vill säga PCPer där verifieraren alltid accepterar ett korrekt bevis. Inte bara är detta en mycket naturlig egenskap för PCPer, utan det kan också vara ett nödvändigt krav för vissa tillämpningar. Konstruktionen av PCPer med perfekt fullständighet för NP-påståenden ger tekniska komplikationer och kräver delvis utvecklande av nya metoder. Våra resultat för booleska CSPer och för färgläggning bevisas genom att anpassa "Direktsumman-metoden", introducerad av Siu On Chan, till fallet med perfekt fullständighet. Vårt bevis för hypergraffärgningssvårighet förbättrar och förenklar ett färskt resultat av Khot och Saket, där de föreslog begreppet superpositionskomplexitet av CSP.