371
Schémas de formules et de preuves en logique propositionnelle. Aravantinos, Vincent, 23 September 2010.
The field of this thesis is automated deduction, i.e., the development of algorithms whose goal is to prove mathematical conjectures automatically. In this thesis, the conjectures we want to prove belong to an extension of propositional logic called "formula schemata". These objects make it possible to represent an infinite set of propositional formulas in a finite way (just as, e.g., regular languages finitely represent infinite sets of words). Proving a formula schema then amounts to proving, all at once, the infinitely many formulas it represents. We show that the problem of proving formula schemata is undecidable in general. The rest of the thesis is organized around the definition of algorithms that nevertheless try to prove schemata automatically (but, of course, do not terminate in general). These algorithms allow us to identify decidable classes of schemata, i.e., classes for which there exists an algorithm that terminates on any input and answers whether the schema is true or not. One of these algorithms led to the implementation of a prototype. The proof methods presented combine classical proof methods for propositional logic (DPLL or semantic tableaux) with reasoning by induction. The inductive reasoning is carried out through the use of "cyclic proofs", i.e., infinite proofs in which cycles are detected. In that case, the infinite proofs can be reduced to finite objects, which we may call "proof schemata".
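As an illustration (a hypothetical example, not taken from the thesis), a formula schema can be thought of as a propositional formula parameterized by an integer n, so that proving the schema once establishes every instance:

```latex
% Hypothetical formula schema (not taken from the thesis): a finite
% description of infinitely many propositional formulas, one per n.
\[
  \varphi_n \;=\; \Bigl(\bigwedge_{i=1}^{n} (p_i \rightarrow p_{i+1})\Bigr)
  \;\rightarrow\; (p_1 \rightarrow p_{n+1}), \qquad n \geq 1.
\]
% Every instance phi_1, phi_2, ... is an ordinary propositional tautology;
% proving the schema proves all of them at once, e.g. by a DPLL- or
% tableaux-style case analysis combined with induction on n, which appears
% as a cycle in an otherwise infinite proof.
```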
372
Reconstruction 3D d'objets par une représentation fonctionnelle. Fayolle, Pierre-Alain, 17 December 2007.
We have mainly been interested in modeling volumetric objects with scalar distance fields. The Euclidean distance from a point to a point set representing the boundary of a solid is the smallest distance (defined from the Euclidean norm) between that point and any point of the set. Representing a solid by the distance to its surface is a concise yet relatively powerful way to define and manipulate solids. Within this framework, we studied the constructive modeling of solids and how to implement the set-theoretic operations with functions that guarantee a good approximation of the distance as well as certain differentiability properties, which are needed for several classes of operations on, or applications of, solids. We built various types of functions implementing the main set-theoretic operations (union, intersection, difference). These functions can then be applied to primitives, each defined by the distance to its surface, in order to recursively build complex solids, themselves defined by an approximation of the distance to the solid. These functions in fact correspond to a certain class of R-functions, obtained by smoothing the critical points of the min/max functions (which are themselves R-functions). They are called Signed Approximate Real Distance Functions (SARDF). The SARDF framework, consisting of the functions described above and of primitives defined by the distance function, has been used for heterogeneous solid modeling: the distance, or its approximation, to the surface of the solid or of internal materials is used as a parameter to model the distribution of materials inside the solid. The SARDF framework has mainly been implemented as an extension of the HyperFun interpreter and within the HyperFun Java applet. Constructive solid modeling has many advantages that make it a powerful tool for modeling solids. Nevertheless, defining solids constructively can be tedious and repetitive, so we studied several ways to automate it. First, we introduced the notion of template models and proposed different algorithms to fit the shape of a template to different instances given by point clouds on or near the surface of the solid. The idea of templates comes from the observation that solids traditionally modeled on a computer can be grouped into classes sharing common features. For example, different vases may share a common shape. This general shape is modeled only once, and various parameters governing the characteristics of the shape are extracted. These parameters are then optimized using a combination of metaheuristics, such as simulated annealing or genetic algorithms, with direct methods of the Newton or Levenberg-Marquardt type. Using the SARDF framework to define the template model is preferable, as it gives better results with the optimization algorithms. One may then ask how the template model itself is obtained. A first solution is to use the services of an artist. Nevertheless, we can also try to automate this process.
We studied essentially two approaches to this question: the first is the use of genetic programming to build a constructive model from a point cloud. The second consists, starting from a segmented point cloud and a list of primitives fitted to this segmented point cloud, in using a genetic algorithm to determine the order and the type of operations to apply to these primitives. Both solutions have been implemented and their results discussed.
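The following is a rough sketch of set-theoretic operations on distance-field primitives, using the classical min/max and R0-style Rvachev-function formulas; the exact SARDF functions of the thesis are not reproduced here, and the sphere primitive and all function names are illustrative assumptions:

```python
import numpy as np

# Convention: f(p) > 0 inside the solid, f(p) = 0 on its surface,
# f(p) < 0 outside; for a sphere the signed value is an exact distance.
def sphere(center, radius):
    center = np.asarray(center, dtype=float)
    return lambda p: radius - np.linalg.norm(np.asarray(p, dtype=float) - center)

# min/max are the simplest R-functions: exact distances, but with
# gradient discontinuities where the two arguments are equal.
def union_max(f1, f2):
    return lambda p: max(f1(p), f2(p))

def intersection_min(f1, f2):
    return lambda p: min(f1(p), f2(p))

# Classical R0 (Rvachev) functions: smooth almost everywhere, but only an
# approximation of the Euclidean distance away from the surface.
def union_r0(f1, f2):
    return lambda p: f1(p) + f2(p) + np.hypot(f1(p), f2(p))

def intersection_r0(f1, f2):
    return lambda p: f1(p) + f2(p) - np.hypot(f1(p), f2(p))

def difference_r0(f1, f2):
    # A \ B  ==  A intersected with the complement of B.
    return intersection_r0(f1, lambda p: -f2(p))

if __name__ == "__main__":
    a = sphere((0.0, 0.0, 0.0), 1.0)
    b = sphere((0.8, 0.0, 0.0), 1.0)
    solid = difference_r0(union_r0(a, b), sphere((0.4, 0.0, 0.0), 0.3))
    print(solid((0.0, 0.0, 0.0)))   # > 0: the query point lies inside the solid
```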
373
Robustesse des arbres phylogénétiques. Mariadassou, Mahendra, 27 November 2009.
The modern synthesis of evolutionary theory has spread widely across all fields of biology, in particular thanks to phylogenetic trees. While these are obviously useful in comparative genomics, they are also used in many other areas ranging from biodiversity studies to epidemiology and forensic science. Phylogenetic trees are not only an efficient characterization of evolution but also a powerful tool for studying it. However, any use of a tree in a study assumes that the tree has been correctly estimated, in its topology as well as in its other parameters, whereas this estimation is a complicated and still largely open statistical problem. It is generally accepted that no good estimation can be achieved without four prerequisites: (1) the choice of one or more genes relevant to the question under study, (2) a sufficient amount of data to ensure good estimation accuracy, (3) an efficient reconstruction method relying on a fine-grained model of evolution to minimize reconstruction biases, and (4) a good sampling of taxa. In this thesis we are interested in four topics closely related to one or another of these prerequisites. In the first part, we use concentration inequalities to study the link between estimation accuracy and the amount of data. We then propose a method based on Edgeworth expansions to test the phylogenetic congruence of a new gene with its predecessors. In the second part, we propose two methods, inspired by sensitivity analysis, to detect outlier sites and taxa. These outliers can harm the robustness of the estimators, and we show on examples how just a few outlying observations are enough to drastically change the estimates. We discuss the implications of these results and show how to increase the robustness of the tree estimator in the presence of outlying observations.
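As a hedged illustration of this kind of sensitivity analysis (the influence measures actually developed in the thesis are not reproduced here), one can score each alignment site by how much a simple estimator, such as the Jukes-Cantor distance between two sequences, changes when that site is left out; sites with unusually large influence are candidate outliers:

```python
import math

def jukes_cantor(seq1, seq2, skip=None):
    """Jukes-Cantor distance between two aligned sequences,
    optionally ignoring one site (leave-one-out)."""
    pairs = [(a, b) for i, (a, b) in enumerate(zip(seq1, seq2)) if i != skip]
    p = sum(a != b for a, b in pairs) / len(pairs)   # proportion of differing sites
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def site_influence(seq1, seq2):
    """Change in the estimated distance when each site is removed;
    sites with large influence are candidate outliers."""
    full = jukes_cantor(seq1, seq2)
    return [jukes_cantor(seq1, seq2, skip=i) - full for i in range(len(seq1))]

if __name__ == "__main__":
    s1 = "ACGTACGTACGTACGT"
    s2 = "ACGTACGAACGTACGC"
    for i, delta in enumerate(site_influence(s1, s2)):
        print(f"site {i:2d}: influence {delta:+.4f}")
```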
374
Perception and re-synchronization issues for the watermarking of 3D shapes. Rondao Alface, Patrice, 26 October 2006.
Digital watermarking is the art of embedding secret messages in multimedia content in order to protect its intellectual property. While the watermarking of images, audio and video is reaching maturity, the watermarking of 3D virtual objects is still a technology in its infancy.
In this thesis, we focus on two main issues. The first one is the perception of the distortions caused by the watermarking process or by attacks on the surface of a 3D model. The second one concerns the development of techniques able to retrieve a watermark without the availability of the original data and after common manipulations and attacks.
Since imperceptibility is a strong requirement, assessing the visual perception of the distortions that a 3D model undergoes in the watermarking pipeline is a key issue. In this thesis, we propose an image-based metric that relies on the comparison of 2D views with a Mutual Information criterion. A psychovisual experiment has validated the results of this metric for the most common watermarking attacks.
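A minimal sketch of such an image-based measure follows, assuming a generic histogram estimator of mutual information between two same-size grayscale renderings; the actual view sampling, rendering setup, and pooling used in the thesis are not reproduced here:

```python
import numpy as np

def mutual_information(view_a, view_b, bins=64):
    """Histogram-based mutual information between two grayscale renderings
    of the same size (e.g., views of the original and distorted mesh)."""
    joint, _, _ = np.histogram2d(view_a.ravel(), view_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((256, 256))
    distorted = np.clip(original + 0.05 * rng.standard_normal((256, 256)), 0, 1)
    # Higher MI between views of the original and distorted model
    # indicates a less perceptible distortion.
    print(mutual_information(original, original))
    print(mutual_information(original, distorted))
```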
The other issue this thesis deals with is the blind and robust watermarking of 3D shapes. In this context, three different watermarking schemes are proposed. These schemes differ in the classes of 3D watermarking attacks they are able to resist. The first scheme is based on extending spectral decomposition to 3D models. This approach leads to robustness against imperceptible geometric deformations; its main weakness relates to resampling and cropping attacks. The second scheme extends the first to resampling by making use of the automatic multiscale detection of robust umbilical points. The third scheme then addresses the cropping attack by detecting robust prong feature points and locally embedding a watermark in the spatial domain.
375
Landscape, Kitchen, Table: Compressing the Food Axis to Serve a Food Desert. Elliott, Shannon Brooke, 01 December 2010.
In the past, cities and their food system were spatially interwoven. However, rapid urbanization and the creation of industrialized agriculture have physically isolated and psychologically disconnected urban residents from the landscape that sustains them. Cities can no longer feed themselves and must rely on a global hinterland. Vital growing, preserving, and cooking knowledge has been lost, while negative health, economic, and environmental effects continue to develop from this separation. Low-income neighborhoods have been particularly affected, as a lack of income and mobility poses barriers to adequate food access. Architects have addressed food issues individually, but have yet to take an integrative approach that meaningfully engages urban citizens with all processes of the food system. Urban planners have recently taken a holistic design approach to food issues through the development of the community food system concept. By applying this idea to an architectural program, I have designed a Community Food Center for the Five Points Neighborhood in East Knoxville, TN. Spatially compressing and layering food activity spaces preserves the majority of the landscape on site for food production. The kitchen, dining room, market, and garden increase access to healthy food while serving as community gathering spaces, and the business incubator kitchens provide economic opportunities. The whole facility acts to educate and engage people in the growing, harvesting, preserving, cooking, sharing, and composting of food. Cities cannot sustain themselves by only providing spaces for consumption. Architects must challenge the accepted relationships between food system spaces and strive to reincorporate productive landscapes, and spaces dedicated to transforming raw ingredients, into a variety of architectural programs. Although the Five Points Community Food Center is site-specific, the concept of integrating multiple food activities into a single architectural entity can be used as a tool for place-making by expressing a local identity through food culture while improving the social and economic fabric.
376
Integrated Layout Design of Multi-component Systems. Zhu, Jihong, 09 December 2008.
A new integrated layout optimization method is proposed here for the design of multi-component systems. By introducing movable components into the design domain, the component layout and the supporting structural topology are optimized simultaneously. The developed design procedure mainly consists of three parts: (i) Introduction of non-overlap constraints between components. The Finite Circle Method (FCM) is used to avoid overlaps between components and between components and the design domain boundaries. It proceeds by approximating the geometries of the components and the design domain with sets of circles; distance constraints between the circles of different components are then imposed as non-overlap constraints. (ii) Layout optimization of the components and the supporting structure. Locations and orientations of the components are taken as geometric design variables for the optimal placement, while topology design variables of the supporting structure are defined by density points. Meanwhile, embedded meshing techniques are developed to account for the finite element mesh changes caused by component movements. Moreover, to meet the complicated requirements of aerospace structural system design, design-dependent loads related to the inertial load or the structural self-weight, and a design constraint on the position of the system gravity center, are taken into account in the problem formulation. (iii) A consistent material interpolation scheme between element stiffness and inertial load. The common SIMP material interpolation model is improved to avoid the singularity of localized deformation due to the presence of design-dependent loading, when both the element stiffness and the associated inertial load are weakened by element material removal.
Finally, to validate the proposed design procedure, a variety of multi-component system layout design problems are tested and solved, taking into account inertial loads and the gravity center position constraint.
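The following is an illustrative sketch of the Finite Circle Method idea from part (i); the circle approximations, data layout, and constraint form are assumptions made for illustration, not the thesis's implementation:

```python
import numpy as np

# Each component (and the design-domain boundary) is approximated by a set
# of circles: rows of (x, y, radius) covering its geometry.
def non_overlap_constraints(circles_a, circles_b):
    """Constraint values g >= 0 for every circle pair: the two components
    do not overlap if all constraints are satisfied."""
    a = np.asarray(circles_a, dtype=float)
    b = np.asarray(circles_b, dtype=float)
    center_dist = np.linalg.norm(a[:, None, :2] - b[None, :, :2], axis=-1)
    radius_sum = a[:, None, 2] + b[None, :, 2]
    return (center_dist - radius_sum).ravel()   # one value per circle pair

if __name__ == "__main__":
    # Two elongated components, each approximated by three circles.
    comp1 = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5), (2.0, 0.0, 0.5)]
    comp2 = [(0.0, 1.2, 0.5), (1.0, 1.2, 0.5), (2.0, 1.2, 0.5)]
    g = non_overlap_constraints(comp1, comp2)
    print(g)
    print(bool((g >= 0).all()))   # True here: the components do not overlap
    # During layout optimization, these g >= 0 inequalities are imposed on the
    # component positions/orientations alongside the topology variables.
```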
377
Local Hospital's Strategy Management Under National Health Insurance Policy. Lin, Chin-hsing, 20 August 2007.
Since the National Health Insurance Bureau introduced the global budget system in 2003, together with hospital self-management and peer review by specialist physicians, local hospitals have faced critical impacts. Under the floating (fluctuating) point-value reimbursement scheme, not only is the reimbursement received by hospitals constrained, but management also becomes more difficult for local hospitals as the point value shrinks year by year. The number of Western-medicine hospitals is decreasing: there were 575 in 2000, but now only 500 local hospitals operate in Taiwan.
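A small worked illustration of the floating point-value mechanism; the figures below are hypothetical and not taken from the study:

```latex
% Hypothetical figures, for illustration only (not taken from the study).
\[
  \text{point value} = \frac{\text{sector global budget}}{\text{total points claimed}}
  \;\approx\; \frac{\text{NT\$100 billion}}{\text{110 billion points}}
  \;\approx\; \text{NT\$0.91 per point},
\]
% so when the points claimed by all hospitals grow faster than the fixed
% budget, each point is reimbursed at less than NT$1 and every hospital's
% income per service shrinks.
```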
To survive, local hospitals have to establish sound financial systems in order to deal with changing national health insurance policies. They are also encouraged to use strategic management theories to promote their competitiveness, so that they can control their expenditures and create income.
The study analyzed statistical data from the Statistical Office of the Department of Health and the National Health Insurance Bureau, and integrated related literature, to understand the management strategies and responses of local hospitals under national health insurance policy and the surrounding economic, demographic, and political environment; the results are provided for reference. Besides statistical data and literature, strategic management concepts and theories were also adopted to clarify current health insurance policies, the reimbursement system, and the situation of local hospitals, and to probe into their difficulties and possible solutions.
Local hospitals were chosen as the study subjects. Through SWOT analysis, Porter's Five Forces analysis, the Blue Ocean Strategy, and the findings of the literature review, we found that (1) the financial gap of the health insurance system has been transferred to medical organizations, especially local hospitals; (2) under the global budget system, local hospitals have to increase income and decrease expenditures through transformation, self-pay services, or joint outpatient services; (3) cost management is critical for local hospitals to build internal strength; and (4) drawing on the Blue Ocean Strategy to develop distinctive services and differentiated products can create a niche for local hospitals to break through the bottleneck.
378
Computer-aided detection and novel mammography imaging techniques. Bornefalk, Hans, January 2006.
This thesis presents techniques constructed to aid radiologists in detecting breast cancer, the second largest cause of cancer deaths among Western women. In the first part of the thesis, a computer-aided detection (CAD) system constructed for the detection of stellate lesions is presented. Different segmentation methods and an attempt to incorporate contralateral information are evaluated. In the second part, a new method for evaluating such CAD systems is presented, based on constructing credible regions for the number of false positive marks per image at a certain desired target sensitivity. The method shows that the resulting regions are rather wide, which explains some of the difficulties encountered by other researchers when trying to compare CAD algorithms on different data sets. This part also models the clinical use of CAD as a second look, and shows that applying CAD sequentially after the radiologist in a routine manner, without duly altering the radiologist's decision criterion, might very well result in suboptimal operating points. Finally, in the third part, two dual-energy imaging methods optimized for contrast-enhanced imaging of breast tumors are presented. The first is based on applying an electronic threshold to a photon-counting digital detector to discriminate between high- and low-energy photons, which allows simultaneous acquisition of the high- and low-energy images. The second method is based on the geometry of a scanned multi-slit system and also allows single-shot contrast-enhanced dual-energy mammography, by filtering the x-ray beam differently for different detector lines.
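As a hedged sketch of the credible-region idea (the exact statistical model of the thesis is not reproduced here; this assumes per-image false-positive counts are Poisson distributed with an uninformative Gamma prior on the rate):

```python
from scipy import stats

def fp_rate_credible_interval(total_fp_marks, n_images, level=0.95):
    """Bayesian credible interval for the expected number of false-positive
    CAD marks per image at the chosen operating point, assuming the per-image
    counts are Poisson with rate lam and a Jeffreys-type Gamma prior."""
    a = total_fp_marks + 0.5          # Gamma posterior shape
    scale = 1.0 / n_images            # Gamma posterior scale (rate = n_images)
    lo = stats.gamma.ppf((1 - level) / 2, a, scale=scale)
    hi = stats.gamma.ppf(1 - (1 - level) / 2, a, scale=scale)
    return lo, hi

if __name__ == "__main__":
    # 120 false-positive marks observed on 100 images at the target sensitivity.
    print(fp_rate_credible_interval(120, 100))
    # With few images the interval is wide, which is one reason CAD systems
    # evaluated on different data sets are hard to compare.
```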
379
VÄG, VAL OCH VILLKOR: Individer som tidigare begått kriminella handlingar berättar. Skantze, Lina; Zandén, Bianca, January 2008.
This thesis, titled "Change, choice and conditions", is written by Lina Skantze and Bianca Zandén. The study explores the process in which individuals attempt to end their criminal careers, focusing on the interplay between life path, choices, and conditions. The method is qualitative, and the empirical material consists of interviews with four young adults who all have experience of criminality. The material is analyzed within a theoretical framework based on social constructionism, Antonovsky's "Sense of Coherence" (SOC), Giddens' structuration theory, and existential philosophy. The authors suggest a theoretically and empirically grounded model illustrating the change process. The model, developed through abduction, suggests that the process of radically changing one's life includes a number of steps, such as: distancing oneself from everyday life and its habits, existential choices, new conditions, reflection on former situations and experiences, formulating a life story, new habits and routines, new and/or re-established social relationships, orientation towards new goals and a sense of meaning in life, as well as hopes and ideas about the future. The authors conclude that there are no absolute turning points in the lives of the interviewees. Instead, change happens in a complex process best described as incremental, consisting of small, and sometimes incoherent, steps. However, certain situations during the process are crucial and offer the opportunity for fundamental existential choices.
380
Benchmarking Points-to Analysis. Gutzmann, Tobias, January 2013.
Points-to analysis is a static program analysis that, simply put, computes which objects created at certain points of a given program might show up at which other points of the same program. In particular, it computes possible targets of a call and possible objects referenced by a field. Such information is essential input to many client applications in optimizing compilers and software engineering tools. Comparing experimental results with respect to accuracy and performance is required in order to distinguish the promising from the less promising approaches to points-to analysis. Unfortunately, comparing the accuracy of two different points-to analysis implementations is difficult, as there are many pitfalls in the details. In particular, there are no standardized means to perform such a comparison, i.e., no benchmark suite exists: a set of programs with well-defined rules for how to compare different points-to analysis results. Therefore, different researchers use their own means to evaluate their approaches to points-to analysis. To complicate matters, even the same researchers do not stick to the same evaluation methods, which often makes it impossible to take two research publications and reliably tell which one describes the more accurate points-to analysis. In this thesis, we define a methodology for benchmarking points-to analysis. We create a benchmark suite, compare three different points-to analysis implementations with each other based on this methodology, and explain differences in analysis accuracy. We also argue for the need for a Gold Standard, i.e., a set of benchmark programs with exact analysis results. Such a Gold Standard is often required to compare points-to analysis results, and it also makes it possible to assess the exact accuracy of points-to analysis results. Since such a Gold Standard cannot be computed automatically, it needs to be created semi-automatically by the research community. We propose a process for creating a Gold Standard based on under-approximating it through optimistic (dynamic) analysis and over-approximating it through conservative (static) analysis. With the help of improved static and dynamic points-to analyses and expert knowledge about the benchmark programs, we present a first attempt towards a Gold Standard. We also provide a Web-based benchmarking platform, through which researchers can compare their own experimental results with those of other researchers, and can contribute towards the creation of a Gold Standard.
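A minimal sketch of the comparison logic such a benchmark needs; the representation of points-to sets, the allocation-site names, and the accuracy metric below are illustrative assumptions, not the thesis's benchmark format:

```python
# Points-to results as a mapping from program variables (or fields, call
# sites) to the set of allocation sites they may refer to.
PointsTo = dict[str, set[str]]

def is_sound_wrt(analysis: PointsTo, under_approx: PointsTo) -> bool:
    """A conservative (static) result must contain every fact the optimistic
    (dynamic) under-approximation observed at run time."""
    return all(targets <= analysis.get(var, set())
               for var, targets in under_approx.items())

def spurious_targets(analysis: PointsTo, gold: PointsTo) -> int:
    """Accuracy metric: number of targets reported beyond the Gold Standard
    (lower is better; 0 means the analysis is exact on this program)."""
    return sum(len(targets - gold.get(var, set()))
               for var, targets in analysis.items())

if __name__ == "__main__":
    dynamic = {"a": {"alloc#1"}, "b": {"alloc#1"}}                # under-approximation
    static = {"a": {"alloc#1", "alloc#2"}, "b": {"alloc#1"}}      # over-approximation
    gold = {"a": {"alloc#1"}, "b": {"alloc#1"}}                   # exact result
    print(is_sound_wrt(static, dynamic))    # True: static covers the dynamic facts
    print(spurious_targets(static, gold))   # 1 spurious target for variable "a"
```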