31

Análisis experimental de los criterios de evaluación de usabilidad de aplicaciones multimedia en entornos de educación y formación a distancia

Borges de Barros Pereira, Hernane 09 July 2002
This doctoral thesis revolves around the relationship between the usability of educational software and its influence on the design of multimedia content materials in CD-ROM and Web formats. Within multimedia engineering, research on this relationship is becoming increasingly important because multimedia applications are being developed at a growing rate as educational tools that facilitate the teaching and learning process. The goal of the thesis is to present a set of usability evaluation criteria based on experimental analyses and to identify the degree of influence these criteria exert on what people learn when they use multimedia applications for distance education and training. To this end, it has been necessary to draw on the theoretical foundations of education (in particular distance education and training), the new information and communication technologies, software engineering and usability engineering.

The development of the thesis is based on qualitative research, given the need to produce knowledge that helps understand and explain the world and social phenomena. Taking interpretivism as the starting point, the grounded theory and case study methods are used for data collection, classification and analysis.

The thesis makes contributions of both a theoretical and a practical nature. From a theoretical point of view, it contributes to software ergonomics by elaborating a theoretical foundation for the development of multimedia applications based on three principles of interactive multimedia systems design: an early focus on users and their tasks, empirical measurement, and iterative design. These principles are an important reference for usability engineering. In this context, a test model, called the semantic and syntactic test model, is presented; it is composed of a conceptual structure, an application method that covers verification, validation and usability testing, and support tools that automate the testing activities. Regarded as an extension of the theoretical foundations, the practical perspective of the study is characterised by a testing process for multimedia applications that detects not only problems and errors but also defects and failures that can affect user acceptance and satisfaction and, consequently, learning.
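
To make the empirical-measurement principle mentioned above concrete, the following sketch aggregates a few common usability indicators (task completion rate, error count, time on task) from hypothetical test-session records. It is an illustration only; the data fields, metrics and figures are assumptions and are not taken from the thesis.

    # Illustrative sketch: summarising empirical usability measurements from
    # test sessions. All fields and values are hypothetical examples.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class TaskRecord:
        completed: bool      # did the user finish the task?
        errors: int          # number of errors observed
        seconds: float       # time on task

    def usability_summary(records):
        """Return simple empirical usability indicators for one application."""
        completion_rate = mean(1.0 if r.completed else 0.0 for r in records)
        mean_errors = mean(r.errors for r in records)
        mean_time = mean(r.seconds for r in records)
        return {"completion_rate": completion_rate,
                "mean_errors": mean_errors,
                "mean_time_s": mean_time}

    sessions = [TaskRecord(True, 1, 75.0), TaskRecord(True, 0, 62.5),
                TaskRecord(False, 4, 180.0)]
    print(usability_summary(sessions))
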
32

Esquemes per a compartir secrets

Sáez, Germán 30 July 1998
This thesis was awarded the PREMI EXTRAORDINARI DE DOCTORAT (extraordinary doctoral award) in Mathematics for the 1997-98 academic year.

The thesis is devoted mainly to the mathematical study of secret sharing schemes, in particular their information rate and their security against the action of cheaters. As a complementary topic, we study cube roots in the ring of integers modulo m. Both subjects belong to cryptology.

Regarding secret sharing schemes, our goals are to characterize ideal access structures and to bound the optimal information rate of several families of access structures. The first family studied is that of weighted threshold access structures. We show that every such structure can be defined with positive integer weights and threshold. We completely characterize the rank-2 weighted threshold structures, that is, those determined by a graph that we call a k-graph, and we design an algorithm that identifies them from the degrees of the vertices and computes minimum integer weights and threshold. Using the structure of these graphs and complete multipartite covering techniques, we obtain a lower bound on the optimal information rate of order 1/log n (where n is the number of participants), improving the bound of 1/2^{n/2} attained by the only scheme previously proposed for these structures, due to Shamir. After proving that the dual of a weighted threshold structure is again a weighted threshold structure, we extend the characterization and the computation of minimum weights and threshold to dual structures. For weighted threshold structures of higher rank, the study concentrates on upper and lower bounds on the information rate, especially for structures defined by two weights and a threshold; the case of more than two weights is considered as well, and the general study of multipartite access structures is initiated.

The second family studied is that of bipartite access structures. We completely characterize the bipartite structures that can be realized by an ideal secret sharing scheme: they are the quasi-threshold structures. This characterization makes quasi-threshold structures play, within bipartite structures, a role analogous to that of complete multipartite graphs within graph-defined structures. For a bipartite access structure it is thus equivalent to be ideal, to be a quasi-threshold structure, to be realizable by a vector space scheme, and to have optimal information rate greater than 2/3. We describe covering techniques that provide lower bounds on the information rate, present an algorithm that yields an upper bound, and show that these bounds are tight.

The last family studied is that of homogeneous access structures, for which we obtain lower bounds on the information rate. We propose two constructions of secret sharing schemes for homogeneous structures based on covering techniques; to evaluate their information rates we introduce the notion of the k-degree of a participant, which is the key parameter in the study of the bounds and in their comparison with each other and with previously known bounds. The second construction yields a better information rate than the first, but the first uses a secret set of more realistic size. Comparison with previously proposed schemes, both for the optimal information rate and for the optimal average information rate, shows that ours are better in most cases. We also begin the study of upper bounds for rank-3 homogeneous structures, finding a first upper bound for a subfamily of structures that is of the same order as the lower bound obtained with our constructions.

Concerning schemes secure against cheaters, we generalize the notions of secure and robust schemes, previously defined only for threshold structures, to arbitrary access structures, both when the cheaters do not know the secret and when they do. We find an upper bound on the optimal information rate of a scheme in which a coalition of cheaters is detected with a given probability. We then construct a 1/q-secure scheme (for q secrets) for vector space access structures that detects coalitions of cheaters who do not know the secret; its information rate is 1/2, which is asymptotically optimal, and we compare it with the scheme of Ogata and Kurosawa. For threshold structures we propose a robust scheme that detects coalitions of cheaters who do know the secret, realizing an (r,n) threshold structure with cheating probability at most (2r-3)/(q-r); its information rate is 1/3. Finally, we propose the first 1/q-secure scheme for an arbitrary access structure that detects coalitions of cheaters who do not know the secret.

Regarding cube roots in the integers modulo m, we study the existence and number of roots and generalize two of the fastest algorithms for square roots to cube roots in Z_p: Peralta's probabilistic algorithm, based on the properties of an auxiliary ring, whose non-probabilistic part runs in O(log^3 p), and the Tonelli-Shanks probabilistic algorithm, based on Sylow subgroups, which runs in O(log^4 p). Some comments on the cryptographic applications of cube roots are included.
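
As background to the weighted threshold structures discussed above, the sketch below shows the classical way of realizing them with Shamir's scheme (the construction attributed to Shamir in the abstract, whose information rate the thesis improves): a participant of weight w simply receives w ordinary Shamir shares, so a coalition is authorized exactly when its total weight reaches the threshold. The prime and the toy parameters are arbitrary choices for illustration, not taken from the thesis.

    # Minimal sketch of Shamir secret sharing used to realize a weighted
    # threshold access structure by giving each participant as many share
    # points as its weight. Toy parameters only.
    import random

    P = 2**127 - 1  # a prime large enough for a toy example

    def shamir_shares(secret, t, ids):
        """Degree-(t-1) polynomial f with f(0)=secret; share for id x is (x, f(x))."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
        return {x: f(x) for x in ids}

    def weighted_shares(secret, t, weights):
        """weights: dict participant -> weight; each participant gets `weight` points."""
        ids, owner, nxt = [], {}, 1
        for p, w in weights.items():
            for _ in range(w):
                ids.append(nxt); owner[nxt] = p; nxt += 1
        shares = shamir_shares(secret, t, ids)
        return {p: [(x, shares[x]) for x in ids if owner[x] == p] for p in weights}

    def reconstruct(points):
        """Lagrange interpolation at 0; needs at least t distinct points."""
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = weighted_shares(secret=1234567, t=4, weights={"A": 3, "B": 2, "C": 1})
    print(reconstruct(shares["A"] + shares["C"]))  # weight 3+1 >= 4: recovers 1234567

The drawback that motivates better constructions is visible here: a heavy participant must store many field elements, so the information rate degrades as the weights grow.
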
33

Some Digital Signature Schemes with Collective Signers

Herranz Sotoca, Javier 15 April 2005
Digital signatures are one of the most important consequences of the appearance of public key cryptography in 1976. These schemes provide authentication, integrity and non-repudiation to digital communications. Several extensions and variations of the concept of digital signature have been introduced, and many specific realizations of these new types of signature schemes have been proposed.

In this thesis, we deal with the basic definitions and required security properties of traditional signature schemes and two of their extensions: distributed signature schemes and ring signature schemes. We review the state of the art in these two topics and then propose and analyze new specific schemes for different scenarios. Namely, we first study distributed signature schemes for general access structures, based on RSA; then we show that such schemes can be used to construct other cryptographic protocols: distributed key distribution schemes and metering schemes. With respect to ring signatures, we propose schemes both for a scenario where the keys are of the discrete logarithm type and for a scenario where the public keys of users are derived from their personal identities. Finally, we also propose some distributed ring signature schemes, a kind of scheme that combines the concepts of distributed signatures and ring signatures. We formally prove the security of all these proposals, assuming that certain mathematical problems are hard to solve. Specifically, we base the security of our schemes on the hardness of either the RSA problem, the Discrete Logarithm problem or the Computational Diffie-Hellman problem.
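
As a much simplified illustration of the idea behind RSA-based distributed signatures (not the general access structure constructions proposed in the thesis), the sketch below additively splits the RSA private exponent among n signers for an n-out-of-n policy; each signer produces a partial signature and the partials are multiplied to obtain an ordinary RSA signature. Textbook RSA without padding, tiny toy primes and an exact additive split are all simplifying assumptions made only to keep the example self-contained.

    # Toy n-of-n distributed RSA signing: d is split into non-negative parts
    # that sum exactly to d, so the product of the partial signatures equals
    # the ordinary RSA signature. Real schemes share d modulo phi(N) and need
    # extra machinery for threshold or general access structures.
    import hashlib
    import random

    p, q = 10007, 10009                 # toy primes, far too small for real use
    N, phi = p * q, (p - 1) * (q - 1)
    e = 65537
    d = pow(e, -1, phi)                 # e*d = 1 (mod phi)

    def split_exponent(d, n):
        """Exact additive n-of-n split of d into non-negative integers."""
        cuts = sorted(random.randrange(d + 1) for _ in range(n - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [d])]

    def hash_to_int(msg):
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

    def partial_sign(msg, d_i):
        return pow(hash_to_int(msg), d_i, N)

    def combine(partials):
        sig = 1
        for s in partials:
            sig = sig * s % N
        return sig

    msg = b"distributed signing example"
    d_shares = split_exponent(d, 3)
    signature = combine(partial_sign(msg, di) for di in d_shares)
    assert pow(signature, e, N) == hash_to_int(msg)   # standard RSA verification
    print("verified")
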
34

Integración de los modelos de simulación en el diseño de los ensayos clínicos

Abbas, Ismail 26 April 2004
Clinical trials are often very expensive and can take a considerable time. An error in the design of a clinical trial can make it impossible to verify the intended hypothesis because the results lack statistical significance, which entails a loss of resources and a delay in bringing the new pharmaceutical product to market. In this thesis the design of a clinical drug trial is posed as an economic optimisation problem. The specific optimisation criterion may vary with the perspective and the decision framework: from a business perspective of profit maximisation, the clinical trial can be seen as an investment with relatively predictable costs and much more uncertain future benefits.

Objectives and hypothesis. The main objective of this work is the analysis and development of simulation models for optimising the design of clinical trials. To this end, different computer-assisted stochastic model structures are developed that adequately represent the clinical trial and simulate its results under alternative designs and hypotheses before it is carried out. The aim is to determine the design that optimises the cost and duration of a clinical trial by analysing hypothetical results before the trial is actually run. The hypothesis to be tested is that the development and application of the proposed simulation models make it possible to achieve the objectives of clinical trials more efficiently, that is, to obtain the sought results with a given probability of success (which, in the case of new products, is a necessary condition for their commercialisation) in an optimal time and at an optimal cost.

Methods. The general conceptual model proposed in this thesis for optimising a clinical trial consists of two sub-models: a patient recruitment and treatment assignment sub-model, and a follow-up sub-model that describes the evolution of the process variable or variables of the trial and the final response variables. The validation of the conceptual model is interdisciplinary and is necessary to verify that the objectives and hypotheses correspond closely to those established in the protocol.

Research results. The results of this thesis have been applied to a clinical trial on lipodystrophy under antiretroviral (ARV) treatment. A statistical model was built, validated and selected whose results fit those of the lipodystrophy trial as closely as possible; this model was then applied to optimise the design of a future clinical trial.

Discussion and future research. The literature review shows that most of the approaches currently applied to clinical trial design, including those that use simulation models, do not jointly take into account all the aspects relevant to design optimisation. The main contribution of this thesis is precisely to integrate all these aspects in a general simulation model for optimising the design of a clinical trial according to explicit criteria and assumptions. This approach makes it possible to develop a general optimisation model in which the optimal design is posed in terms of maximising the expected net benefit. Depending on the sponsor, the benefits may be either economic benefits or health benefits, which could in turn be expressed in monetary units by means of the willingness-to-pay method or other approaches commonly used in economic evaluation.
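
A minimal sketch of the kind of Monte Carlo evaluation described above, assuming a hypothetical two-arm trial with a binary end-point: a recruitment sub-model (Poisson accrual) and a follow-up sub-model (per-patient response) are simulated for several candidate sample sizes, and each design is scored by estimated power, expected duration and a crude expected net benefit. All rates, costs and effect sizes are invented for the example and do not come from the thesis.

    # Monte Carlo evaluation of candidate trial designs (illustrative only).
    import random
    import math

    def simulate_trial(n_per_arm, accrual_rate=4.0, p_control=0.30, p_treat=0.45):
        """One simulated trial: returns (significant?, duration in weeks)."""
        # Recruitment sub-model: exponential inter-arrival times (Poisson accrual).
        duration = sum(random.expovariate(accrual_rate) for _ in range(2 * n_per_arm))
        # Follow-up sub-model: binary response per patient.
        x_c = sum(random.random() < p_control for _ in range(n_per_arm))
        x_t = sum(random.random() < p_treat for _ in range(n_per_arm))
        # Two-proportion z-test at the 5% level.
        p_pool = (x_c + x_t) / (2 * n_per_arm)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm) or 1e-9
        z = (x_t / n_per_arm - x_c / n_per_arm) / se
        return abs(z) > 1.96, duration

    def evaluate_design(n_per_arm, runs=2000, cost_per_patient=3000.0,
                        reward_if_success=5e6, cost_per_week=10000.0):
        """Expected net benefit of a candidate design, estimated by simulation."""
        successes, total_weeks = 0, 0.0
        for _ in range(runs):
            ok, weeks = simulate_trial(n_per_arm)
            successes += ok
            total_weeks += weeks
        power = successes / runs
        mean_weeks = total_weeks / runs
        net = (power * reward_if_success
               - 2 * n_per_arm * cost_per_patient
               - mean_weeks * cost_per_week)
        return power, mean_weeks, net

    for n in (50, 100, 150, 200):
        print(n, evaluate_design(n))
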
35

Numerical modelling of complex geomechanical problems

Pérez Foguet, Agustí 01 December 2000
This thesis focuses on the development of specific numerical techniques for solving solid mechanics problems, taking as reference those that involve geomaterials (soils, rocks, granular materials, ...). Numerical modelling of problems involving geomaterials (i.e. soils, rocks, concrete and ceramics) has been an area of active research over the past few decades, probably driven by three main causes: the increasing interest in predicting material behaviour in practical engineering situations, the great growth of computer capabilities and resources, and the growing interaction between computational mechanics, applied mathematics and different engineering fields (concrete, soil mechanics, ...). This thesis fits within this multidisciplinary approach: based on constitutive modelling and applied mathematics, and using both languages, the numerical simulation of some complex geomechanical problems is studied.

The state of the art regarding experiments, constitutive modelling and numerical simulations involving geomaterials is very extensive. The thesis focuses on three of the most important ongoing research topics within this framework: 1) the treatment of large boundary displacements by means of Arbitrary Lagrangian-Eulerian (ALE) formulations; 2) the numerical solution of highly nonlinear systems of equations in solid mechanics; and 3) the constitutive modelling of the nonlinear mechanical behaviour of granular materials through elastoplastic laws. The three topics have been analysed and contributions have been developed for each of them. Several applications illustrate the main features of these developments (analysis of the vane test for soft clays, of the triaxial test for sands, of failure under a footing, of the necking of a circular metal bar, and of a cold stamping process), with special attention to the computational aspects of solving these problems. Moreover, a specific chapter is devoted to the modelling and numerical simulation of cold compaction processes of metallic and ceramic powders. This process consists in transforming a loose powder into a compacted sample through a large volume reduction, and it has been chosen as a reference application of the thesis because it involves large boundary displacements, finite deformations and highly nonlinear material behaviour; it is therefore a challenging geomechanical problem from a numerical modelling point of view.

The most relevant contributions of the thesis are the following: 1) with respect to the treatment of large boundary displacements: quasistatic and dynamic analyses of the vane test for soft materials using a fluid-based ALE formulation and different non-Newtonian constitutive laws, and the development of a solid-based ALE formulation for finite strain hyperelastic-plastic models, with applications to isochoric and non-isochoric cases; 2) with respect to the solution of nonlinear systems of equations in solid mechanics: the use of simple and robust numerical differentiation schemes for the computation of tangent operators, including examples with several non-trivial elastoplastic constitutive laws, and the development of consistent tangent operators for different substepping time-integration rules, with application to an adaptive time-integration scheme; and 3) in the field of constitutive modelling of granular materials: the efficient numerical modelling of different problems involving elastoplastic models, including work hardening-softening models for small strain problems and density-dependent hyperelastic-plastic models in a large strain context, and robust and accurate simulations of several powder compaction processes, with detailed analysis of spatial density distributions and verification of the mass conservation principle.
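
As an illustration of one of the contributions listed above, the computation of tangent operators by numerical differentiation, the following sketch approximates the consistent tangent d(sigma)/d(epsilon) of a black-box stress-update routine by central differences, one strain component at a time. The linear-elastic update is only a stand-in so the example runs; in practice the same driver would wrap a nontrivial return-mapping (elastoplastic) update.

    # Numerical-differentiation approximation of a consistent tangent operator.
    import numpy as np

    def stress_update(strain):
        """Stand-in constitutive update (isotropic linear elasticity, 2D Voigt)."""
        E, nu = 200e3, 0.3
        c = E / ((1 + nu) * (1 - 2 * nu))
        C = c * np.array([[1 - nu, nu, 0.0],
                          [nu, 1 - nu, 0.0],
                          [0.0, 0.0, (1 - 2 * nu) / 2]])
        return C @ strain

    def numerical_tangent(update, strain, h=1e-7):
        """Central-difference approximation of d(sigma)/d(epsilon)."""
        n = strain.size
        D = np.zeros((n, n))
        for j in range(n):
            e_plus, e_minus = strain.copy(), strain.copy()
            e_plus[j] += h
            e_minus[j] -= h
            D[:, j] = (update(e_plus) - update(e_minus)) / (2 * h)
        return D

    eps = np.array([1e-3, -2e-4, 5e-4])
    print(numerical_tangent(stress_update, eps))   # recovers the elastic matrix C

The appeal of this approach, as the abstract notes, is robustness: the tangent is obtained from the stress-update routine alone, without deriving and coding the analytical linearization for each constitutive law and integration scheme.
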
36

Mesh-Free Methods for Dynamic Problems. Incompressibility and Large Strain

Vidal Seguí, Yolanda 17 January 2005
This thesis makes two noteworthy contributions in the area of mesh-free methods: a Pseudo-Divergence-Free (PDF) Element Free Galerkin (EFG) method that alleviates volumetric locking, and a stabilized updated Lagrangian formulation that makes it possible to solve fast-transient dynamic problems involving large distortions. The thesis is organized as follows. First, one chapter is devoted to the state of the art of mesh-free methods. The main reason is that many mesh-free methods can be found in the literature, based on different ideas and with different properties; there is a real need to classify, order and compare these methods, since the same or almost the same method can appear in the literature under different names.

Secondly, a novel improved formulation of the EFG method is proposed in order to alleviate volumetric locking. It is based on a pseudo-divergence-free interpolation. Using the concept of diffuse derivatives and a theorem on the convergence of these derivatives to those of the exact solution, the new approximation is obtained by imposing a zero diffuse divergence. In this way it is guaranteed that the method satisfies the incompressibility condition asymptotically, and the imposition can be done a priori; the main difference between standard EFG and the improved method is therefore how the interpolation basis is chosen. Modal analysis and numerical results for two classical benchmark tests in solids corroborate that, as expected, the diffuse derivatives converge to the derivatives of the exact solution when the discretization is refined (for a fixed dilation parameter) and, of course, that the diffuse divergence converges to the exact divergence with the expected theoretical rate. For standard EFG the typical convergence rate degrades as the incompressible limit is approached, whereas with the improved method good results are obtained even for a nearly incompressible case and a moderately fine discretization. The improved method has also been used to solve the Stokes equations. In this case the LBB condition is not explicitly satisfied because the pseudo-divergence-free approximation is employed; nevertheless, reasonable results are obtained in spite of the equal-order interpolation for velocity and pressure.

Finally, several techniques have been developed in the past to remedy the well-known tensile instability of the SPH (Smoothed Particle Hydrodynamics) mesh-free method. It has been proved that a Lagrangian formulation removes the instability completely (although zero-energy modes still exist); in fact, Lagrangian SPH works even better than the finite element method in problems involving distortions. Nevertheless, in problems with very large distortions a Lagrangian formulation needs frequent updates of the reference configuration, and when such updates are incorporated zero-energy modes are more likely to be activated: when few updates are carried out the error is small, but when updates are performed frequently the solution is completely spoilt by the zero-energy modes. In this thesis an updated Lagrangian formulation is developed that allows the reference configuration to be updated without the appearance of spurious modes. To update the Lagrangian formulation an incremental approach is used: an intermediate configuration becomes the new reference configuration for the subsequent time steps. It has been observed that this updated formulation suffers from numerical fracture similar to the Eulerian case, and a modal analysis has shown that zero-energy modes exist. The updated Lagrangian method is presented in detail, a stability analysis is performed, and finally a stabilization technique is incorporated to preclude the spurious modes.
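
For readers unfamiliar with the approximation underlying EFG, the sketch below evaluates a standard one-dimensional moving-least-squares fit with a linear basis and a compactly supported weight. It is generic background only and does not include the pseudo-divergence-free modification or the stabilization proposed in the thesis; the weight function, dilation parameter and sample data are arbitrary choices.

    # 1D moving-least-squares (MLS) approximation, the basis of EFG (background sketch).
    import numpy as np

    def weight(r):
        """Simple compactly supported weight, w(0)=1, w(1)=0."""
        r = np.abs(r)
        return np.where(r < 1.0, (1.0 - r) ** 2 * (1.0 + 2.0 * r), 0.0)

    def mls_fit(x_eval, nodes, values, dm=0.6):
        """Evaluate the MLS approximation u_h(x_eval) with linear basis p = [1, x]."""
        P = np.column_stack([np.ones_like(nodes), nodes])     # basis at the nodes
        w = weight((x_eval - nodes) / dm)                      # nodal weights
        A = P.T @ (w[:, None] * P)                             # moment matrix
        b = P.T @ (w * values)
        coeffs = np.linalg.solve(A, b)
        return coeffs[0] + coeffs[1] * x_eval

    nodes = np.linspace(0.0, 1.0, 11)
    values = np.sin(2 * np.pi * nodes)
    print(mls_fit(0.37, nodes, values), np.sin(2 * np.pi * 0.37))  # fit vs exact

The dilation parameter dm plays the role mentioned in the abstract: it fixes the support of the nodal weights, and the convergence statements above are made for a fixed dilation parameter as the discretization is refined.
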
