151 |
A Study on the Effect of Inhomogeneous Phase of Shape Memory Alloy Wire. Manna, Sukhendu Sekhar. January 2017 (has links) (PDF)
This thesis addresses one of the key aspects of improving the predictability of the macroscopic behavior of Shape Memory Alloy (SMA) wire: the variation of local phase inhomogeneity. Understanding functional fatigue and its relation to the phase distribution and its passivation is key to tailoring the properties and performance of thermal SMA actuators. The work covers two related areas. The first part solves a coupled thermo-mechanical boundary value problem in which initial phase fractions are prescribed at the Gauss points and their subsequent evolution is tracked over the loading cycle. An incremental form of a phenomenological constitutive model is incorporated into the modelling framework. Finite element convergence studies are performed on both homogeneous and inhomogeneous SMA wires, and the effects of phase inhomogeneity are investigated under mechanical and thermo-electric loading. Phase inhomogeneity arises mainly from processing and handling quality. A mechanical boundary condition such as gripping produces a negative residual strain in the macroscopic response. The simulation accurately captures the vanishing of local phase inhomogeneity over multiple cycles of thermo-mechanical loading of an unconstrained straight SMA wire. In the second part, a phase identification and measurement scheme is proposed. It is shown that, by exploiting the variation of electrical resistivity, which changes distinctly with phase transformation, the martensite phase volume fraction can be quantified in an average sense over the length of an SMA wire. This can be achieved with a simple thermo-mechanical characterization setup together with a resistance measurement circuit. Local phase inhomogeneity is created in an experimental sample, which is then subjected to electrical heating under a constant mechanical bias load.
The response shows relaxation of the initial shrinkage strain due to the local phase. The results observed for thermo-electric loading of the inhomogeneous SMA wires complement those obtained from the simulated loading cases. Several interesting features are captured, such as shrinkage of the inhomogeneous SMA wire after the first loading cycle and relaxation of the residual strain over multiple loading cycles due to the presence of inhomogeneity. The model promises useful applications of SMA wire in fatigue studies, SMA-embedded composites and hybrid structures.
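The resistance-based phase quantification described above admits a compact numerical illustration. The linear rule of mixtures and the NiTi-like resistivity values below are illustrative assumptions, not data from the thesis:

```python
def resistivity_from_resistance(R, length, area):
    """rho = R * A / L for a uniform wire."""
    return R * area / length

def martensite_fraction(rho, rho_a, rho_m):
    """Average martensite volume fraction from the measured resistivity,
    assuming a linear rule of mixtures between the pure-phase values."""
    if rho_m == rho_a:
        raise ValueError("phase resistivities must differ")
    xi = (rho - rho_a) / (rho_m - rho_a)
    return min(max(xi, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Illustrative NiTi-like numbers (assumed, not measured in the thesis):
rho_a, rho_m = 0.76e-6, 1.00e-6   # ohm*m, pure austenite / martensite
rho = resistivity_from_resistance(R=5.5, length=0.5, area=7.85e-8)
xi = martensite_fraction(rho, rho_a, rho_m)
print(round(xi, 3))  # 0.431
```

In a real setup the two pure-phase resistivities would be calibrated from fully austenitic and fully martensitic states of the same wire.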
|
152 |
Uma nova estratégia para o cálculo de afinidades eletrônicas / A new approach for electron affinity calculation. Rafael Costa Amaral. 25 February 2015 (has links)
The electron affinity (EA) is an important property of atoms and molecules, defined as the energy difference between the neutral species and its negative ion. Since the EA is a very small fraction of the total electronic energy of the anionic and neutral species, these energies must be determined with high accuracy. The recipe for accurate theoretical calculation of atomic and molecular EAs is based on the choice of an adequate basis set together with high-level electron-correlation methods. In the computation of EAs, the same basis set is used to describe both the neutral and the negatively charged species. In general, basis sets designed to describe anionic properties have their exponents optimized in a neutral environment, and their diffuseness is acquired through the addition of diffuse functions for each angular momentum value, l. The main idea of this work is to develop basis sets optimized exclusively in an anionic environment for accurate electron affinity calculations. The atoms chosen for study were B, C, O and F. The basis sets were generated by the Generator Coordinate Hartree-Fock Method, using the Polynomial Integral Discretization technique to solve the integrals of the problem. The resulting basis sets contain (18s13p) primitives, contracted to [7s6p] via Raffenetti's general contraction scheme. The contracted sets were polarized to 4d3f2g and 4d3f2g1h, with the polarization exponents optimized in a CISD environment using the Simplex algorithm. The quality of the basis sets was evaluated through electron affinity calculations, and the results were compared with those obtained using the aug-cc-pVQZ and aug-cc-pV5Z basis sets.
The results showed that the diffuse basis sets generated in this work reproduce the experimental electron affinities satisfactorily. The diffuse basis sets polarized to 4d3f2g1h proved more efficient than the aug-cc-pVQZ sets and, in some cases, than the considerably larger aug-cc-pV5Z sets.
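The quantity being benchmarked above is just a difference of two total energies. The sketch below uses placeholder energies in hartree, chosen only so the difference lands near the experimental EA of fluorine (about 3.40 eV); they are not values computed in the thesis:

```python
HARTREE_TO_EV = 27.211386  # hartree -> eV conversion factor

def electron_affinity_ev(e_neutral, e_anion):
    """EA = E(neutral) - E(anion), inputs in hartree, result in eV.
    A positive EA means the extra electron is bound."""
    return (e_neutral - e_anion) * HARTREE_TO_EV

# Placeholder total energies (assumed for illustration only):
ea_f = electron_affinity_ev(-99.650, -99.775)
print(round(ea_f, 3))  # 3.401
```

Because the EA is the tiny difference of two large numbers, a small relative error in either total energy dominates the result, which is why balanced (here, anion-optimized) basis sets matter.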
|
153 |
Schémas numériques pour la simulation de l'explosion / Numerical schemes for explosion hazards. Therme, Nicolas. 10 December 2015 (has links)
In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Modeling such phenomena is therefore crucial for safety. The purpose of this thesis is to contribute to the development of efficient numerical schemes to solve these complex models.
The work presented here focuses on two major aspects: first, the development of consistent finite volume schemes for the compressible Euler equations, which model the blast waves; second, the construction of reliable schemes for front propagation, such as the flame front during a deflagration. A staggered spatial discretization is used for all the schemes. The Euler schemes are based on the internal energy formulation of the system, which ensures the positivity of both the internal energy and the density. A discrete kinetic energy balance is derived from the scheme, and a source term is added to the discrete internal energy balance to preserve the exact total energy balance. High-order, MUSCL-like interpolators are used in the discrete momentum operators. The resulting scheme is consistent, in the sense of Lax, with the weak entropic solutions of the continuous problem. We then use the properties of Hamilton-Jacobi equations to build a class of finite volume schemes, compatible with a large variety of meshes, to model the flame front propagation. These schemes satisfy a maximum principle and have important consistency and monotonicity properties, which allow us to derive a convergence result for the schemes based on Cartesian grids.
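A minimal sketch of the discrete maximum principle invoked for the monotone front-propagation schemes, using a generic first-order upwind discretization of linear transport (an elementary example, not the thesis' scheme):

```python
def upwind_step(u, a, dt, dx):
    """One step of the first-order monotone upwind scheme for
    u_t + a u_x = 0 (a > 0) on a periodic 1D grid.  Each new value is a
    convex combination of old values, hence the discrete max principle."""
    c = a * dt / dx  # CFL number, must satisfy 0 <= c <= 1
    assert 0.0 <= c <= 1.0
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

# Transport a bump: the solution stays within its initial bounds.
u = [1.0 if 10 <= i < 20 else 0.0 for i in range(100)]
for _ in range(200):
    u = upwind_step(u, a=1.0, dt=0.005, dx=0.01)
print(min(u) >= 0.0 and max(u) <= 1.0)  # True
```

Monotone schemes like this one trade sharpness (the bump smears out) for the stability and convergence properties mentioned in the abstract.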
|
154 |
Analyse numérique discrète de l'aléa fontis et du foisonnement associés aux cavités souterraines / Discrete numerical analysis of the sinkhole hazard and the bulking associated with underground cavities. Ikezouhene, Yaghkob. 15 September 2017 (has links)
Over time, underground cavities are subject to aging, and several types of degradation can occur. Old underground workings, sometimes comprising one or several levels, were probably not designed to be stable over the long term. They were excavated at a time when there was nothing at stake at the surface, so the induced ground movements were not a concern. They have sometimes been totally or partially backfilled, but not in a systematic way.
The collapse of an underground cavity causes deconsolidation of the upper levels of the overburden. These mechanisms can cause two types of disorder at the surface: subsidence or a sinkhole. Subsidence and sinkholes can cause severe damage to structures and infrastructure at the surface, and also jeopardize the safety of the population. The work of this thesis revolves around the study of rock bulking, sinkholes and their propagation through the overburden. The aims of this thesis are twofold: first, to study the bulking of rock during the collapse of a quarry roof; second, to model the propagation of a sinkhole through the overburden and thus rank the parameters associated with this phenomenon. The first part of this thesis is a bibliographical study which summarizes mining methods, stability analysis methods for underground quarries, methods for predicting the collapse height, and estimation of the bulking factor. Following this review, the study focused on shallow underground quarries worked by the room-and-pillar method. Numerical modeling using the discrete element method (DEM) was therefore chosen to analyze the instability of underground quarry roofs. The second part focuses on the development of a numerical model with two objectives: on the one hand, the development of a Rock Mass Discretization Program (RMDP), which constitutes the preprocessor of the STTAR3D software, together with a code for computing the bulking factor of the collapse rubble; on the other hand, the implementation of the contact behavior laws in STTAR3D. The third part consists of determining, on the one hand, the physical and mechanical characteristics of samples taken from the Brasserie quarry (Paris, France), which was chosen to test the developed model, and, on the other hand, the two parameters of the behavior law used to model the contacts, namely "" and "µ". Finally, the last part of this work consists of numerical simulations in which the experimentally measured parameters of the behavior law are introduced into STTAR3D. The first numerical study investigates the effect of the fall height, the radius of the initial sinkhole opening and the degree of fracturing on the bulking of the rubble, as well as the effect of the variation of the bulking on the collapse height and on the subsidence. In a second step, a model of the Brasserie quarry is built and its behavior computed by numerical simulation in order to obtain the surface subsidence and the collapse height, which are compared with in-situ observations.
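The way bulking limits the upward propagation of a collapse can be sketched numerically. The self-choking estimate H = h / (B - 1) is the classical chimney-caving back-of-the-envelope formula, used here as an assumed illustration rather than the thesis' STTAR3D computation:

```python
def bulking_factor(v_broken, v_intact):
    """Bulking factor B = broken-rock volume / in-place volume (B > 1)."""
    return v_broken / v_intact

def self_choking_height(cavity_height, B):
    """Classical estimate of the chimney height at which the bulked
    rubble fills the void: H = h / (B - 1).  Assumes a prismatic
    chimney and a constant bulking factor (a strong simplification)."""
    if B <= 1.0:
        raise ValueError("bulking factor must exceed 1")
    return cavity_height / (B - 1.0)

B = bulking_factor(575.0, 500.0)   # e.g. 575 m3 of rubble from 500 m3
H = self_choking_height(3.0, B)    # a 3 m high room
print(round(B, 2), round(H, 1))  # 1.15 20.0
```

The formula makes the hazard logic of the abstract explicit: the smaller the bulking factor, the higher the collapse can migrate before choking itself, and the more likely it reaches the surface as a sinkhole.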
|
155 |
Étude et modélisation des équations différentielles stochastiques / High weak order discretization schemes for stochastic differential equations. Rey, Clément. 04 December 2015 (has links)
The development of technology and computer science in recent decades has led to the emergence of numerical methods for the approximation of Stochastic Differential Equations (SDEs) and for the estimation of their parameters. This thesis treats both aspects and, in particular, studies the effectiveness of these methods.
The first part is devoted to the approximation of SDEs by numerical schemes, while the second part deals with the estimation of the parameters of the Wishart process. First, we focus on approximation schemes for SDEs, defined on a time grid of size $n$. We say that the scheme $X^n$ converges weakly to the diffusion $X$ with order $h \in \mathbb{N}$ if, for every $T > 0$, $\vert \mathbb{E}[f(X_T) - f(X_T^n)] \vert \leqslant C_f / n^h$. Until now, except in some particular cases (the Euler and Ninomiya-Victoir schemes), research on this topic has required that $C_f$ depend on the supremum norm of $f$ as well as of its derivatives, i.e. $C_f = C \sum_{\vert \alpha \vert \leqslant q} \Vert \partial_{\alpha} f \Vert_{\infty}$. Our goal is to show that if the scheme converges weakly with order $h$ for such a $C_f$, then, under non-degeneracy and regularity assumptions, the same result holds with $C_f = C \Vert f \Vert_{\infty}$. We are thus able to estimate $\mathbb{E}[f(X_T)]$ for a bounded and measurable function $f$; we then say that the scheme converges in total variation with rate $h$. We also prove that the density of $X_T^n$ and its derivatives converge towards those of $X_T$. The proof of these results relies on a variant of Malliavin calculus based on the noise of the random variables involved in the scheme. The great benefit of our approach is that it is not restricted to a particular scheme: our result applies to both the Euler ($h = 1$) and Ninomiya-Victoir ($h = 2$) schemes, as well as to a generic family of schemes. Furthermore, the random variables used in these schemes need not follow a prescribed distribution but only belong to a set of laws, which leads us to view our result as an invariance principle. Finally, we illustrate this result with a third weak order scheme for one-dimensional SDEs.
The second part of this thesis deals with SDE parameter estimation. More particularly, we study the Maximum Likelihood Estimator (MLE) of the parameters that appear in the Wishart matrix model. This process is the multi-dimensional version of the Cox-Ingersoll-Ross (CIR) process; its specificity lies in the square-root term which appears in the diffusion coefficient. Using these processes, it is possible to generalize the Heston model to the case of a local covariance. This thesis provides the MLE of the parameters of the Wishart process, together with its speed of convergence and limit laws in the ergodic case and in some non-ergodic cases. In order to obtain these results, we use various methods, namely ergodic theorems, time-change methods, and the study of the joint Laplace transform of the Wishart process together with its average process. Moreover, in this last study, we extend the domain of definition of this joint Laplace transform.
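The weak convergence notion defined above can be illustrated with the plain Euler scheme (h = 1): a Monte Carlo estimate of E[f(X_T)] for an Ornstein-Uhlenbeck test case whose mean is known in closed form. This is a generic sketch, not one of the schemes studied in the thesis:

```python
import math
import random

def euler_maruyama_mean(f, x0, b, sigma, T, n, n_paths, seed=0):
    """Monte Carlo estimate of E[f(X_T)] with the Euler scheme on an
    n-step grid, for dX = b(X) dt + sigma(X) dW."""
    rng = random.Random(seed)
    dt = T / n
    sdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n):
            x += b(x) * dt + sigma(x) * sdt * rng.gauss(0.0, 1.0)
        total += f(x)
    return total / n_paths

# Ornstein-Uhlenbeck test case dX = -X dt + dW, X_0 = 1:
# E[X_T] = exp(-T) exactly, so the weak error is directly observable.
est = euler_maruyama_mean(lambda x: x, 1.0, lambda x: -x, lambda x: 1.0,
                          T=1.0, n=50, n_paths=20000)
print(abs(est - math.exp(-1.0)) < 0.05)  # True
```

Doubling n should roughly halve the bias (weak order 1); the thesis' point is that such bounds can hold with a constant depending only on the sup norm of f, so f need not be smooth.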
|
156 |
Využití Bayesovských sítí pro predikci korporátních bankrotů / Corporate Bankruptcy Prediction Using Bayesian Classifiers. Hátle, Lukáš. January 2014 (has links)
The aim of this study is to evaluate the feasibility of using Bayesian classifiers for predicting corporate bankruptcies. The results obtained show that Bayesian classifiers reach results comparable to the more commonly used methods such as logistic regression and decision trees. The comparison has been carried out on Czech and Polish data sets. The overall accuracy of these so-called naive Bayes classifiers, using entropic discretization along with hybrid pre-selection of the explanatory attributes, reaches 77.19 % for the Czech data set and 79.76 % for the Polish one. The AUC values for these data sets are 0.81 and 0.87. The results obtained for the Polish data set have been compared with the already published articles by Tsai (2009) and Wang et al. (2014), who applied different classification algorithms; the method proposed in this study comes out as quite successful in that comparison. The thesis also compares various approaches to the discretization of numerical attributes and to the selection of the relevant explanatory attributes, which are the key issues for increasing the performance of naive Bayes classifiers.
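The discretize-then-classify pipeline described above can be sketched in a few lines. Quantile binning stands in for the entropy-based discretization used in the thesis, and the toy "financial ratio" data are invented for illustration:

```python
from collections import defaultdict

def quantile_bins(values, k):
    """Cut points at empirical quantiles (a simple stand-in for the
    entropy-based discretization used in the thesis)."""
    s = sorted(values)
    return [s[int(len(s) * i / k)] for i in range(1, k)]

def discretize(x, cuts):
    return sum(x > c for c in cuts)

class NaiveBayes:
    """Naive Bayes over discretized attributes, with Laplace smoothing."""
    def fit(self, X, y, n_bins):
        self.classes = sorted(set(y))
        self.n_bins = n_bins
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.counts = defaultdict(int)
        self.class_n = defaultdict(int)
        for row, c in zip(X, y):
            self.class_n[c] += 1
            for j, v in enumerate(row):
                self.counts[(c, j, v)] += 1
        return self

    def predict(self, row):
        def score(c):
            p = self.prior[c]
            for j, v in enumerate(row):
                p *= (self.counts[(c, j, v)] + 1) / (self.class_n[c] + self.n_bins)
            return p
        return max(self.classes, key=score)

# Toy data: one ratio per firm, class 1 = bankrupt (illustrative values).
raw = [(0.1, 1), (0.2, 1), (0.3, 1), (0.7, 0), (0.8, 0), (0.9, 0)]
cuts = quantile_bins([r for r, _ in raw], k=2)
X = [[discretize(r, cuts)] for r, _ in raw]
y = [c for _, c in raw]
model = NaiveBayes().fit(X, y, n_bins=2)
print(model.predict([discretize(0.15, cuts)]))  # 1
```

In the thesis the binning itself is chosen to maximize class-conditional information (entropic discretization), which is exactly where such a pipeline gains or loses accuracy.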
|
157 |
Estudos de eficiência em buscas aleatórias unidimensionais / Efficiency studies in one-dimensional random searches. Lima, Tiago Aécio Grangeiro de Souza Barbosa. 23 July 2010
In this work we study the one-dimensional random walk problem as a model to find which
probability distribution function (pdf) is the best strategy when searching for randomly distributed target sites whose locations are not known, when the searcher has only limited information about its vicinity. Although research on this problem dates back to the 1960s, a new motivation arose in the 1990s when empirical data showed that many animal species, under broad conditions (especially scarcity of food), do not use Brownian search strategies but Lévy distributions instead. The main difference between them is that Lévy distributions decay much more slowly with distance (with a power-law tail in the long-range limit), thereby not obeying the Central Limit Theorem, and present interesting properties such as fractality, superdiffusivity and self-affinity.
These experiments, coupled with evolutionary concepts, led to the suspicion that this choice might have been adopted because it is more advantageous for the searcher, an idea now termed the Lévy Flight Foraging Hypothesis. To study the problem, we define a search efficiency function and obtain its analytical expression for our model. We use computational methods to compare the efficiencies associated with the Lévy distribution and two of the most cited pdfs in the literature, the stretched exponential and the gamma distribution, showing that Lévy is the best search strategy. Finally, we employ variational
extremization methods to obtain the problem's Euler equation.
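The kind of comparison described above can be mimicked with a crude Monte Carlo sketch. The truncated power-law sampler and the restart rule below are simplifying assumptions for illustration, not the thesis' efficiency functional:

```python
import random

def levy_step(mu, lmin, rng):
    """Power-law step length p(l) ~ l**(-mu) for l >= lmin, sampled by
    inverse transform; 1 < mu <= 3 as in Levy-flight foraging models."""
    u = rng.random()
    return lmin * (1.0 - u) ** (-1.0 / (mu - 1.0))

def search_efficiency(mu, spacing=1000.0, lmin=1.0, n_steps=200_000, seed=7):
    """Targets sit at every multiple of `spacing` on a line; a flight is
    truncated as soon as it crosses a target.  After each find the
    searcher restarts halfway between targets (a deliberately crude,
    destructive variant).  Efficiency = targets found / distance."""
    rng = random.Random(seed)
    x, traveled, found = spacing * 0.5, 0.0, 0
    for _ in range(n_steps):
        step = levy_step(mu, lmin, rng)
        direction = rng.choice((-1.0, 1.0))
        d = x if direction < 0 else spacing - x   # distance to next target
        if step >= d:
            traveled += d
            found += 1
            x = spacing * 0.5
        else:
            traveled += step
            x += direction * step
    return found / traveled

# Heavier-tailed flights (mu near 2) beat near-Brownian ones (mu near 3):
eff_levy, eff_brownian = search_efficiency(2.0), search_efficiency(3.0)
print(eff_levy > eff_brownian)  # True
```

Even this crude model reproduces the qualitative message of the Lévy Flight Foraging Hypothesis: when targets are sparse, occasional long flights pay off.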
|
158 |
Développement et implantation d'un modèle de diode par VHDL-AMS : Discrétisation selon la méthode Scharfetter-Gummel / Development and implementation of a diode model using VHDL-AMS: Discretization using the Scharfetter-Gummel Method. Kesserwani, Joseph. 11 September 2015 (has links)
Computer-aided design (CAD) is widely used in the semiconductor industry for the design and analysis of individual components, whose study requires solving the drift-diffusion equation and the Poisson equation. The nonlinear character of these equations calls for iterative numerical solutions. The Scharfetter-Gummel scheme is conventionally used to discretize the non-degenerate drift-diffusion (Shockley) equation in order to simulate the transport of electrons and holes in a semiconductor. Initially this method was applied to a one-dimensional domain; it was subsequently extended to the two-dimensional problem on the basis of a rectangular mesh. The aim of this thesis is therefore to implement a VHDL-AMS diode model based on the Scharfetter-Gummel discretization. VHDL-AMS (Analog and Mixed-Signal) is a behavioral description language for analog and mixed-signal circuits, an extension of its equivalent for logic circuits, VHDL. Since VHDL-AMS is a high-level language, it allows us to model the behavior of physical systems, whether electrical, mechanical or otherwise. VHDL-AMS also makes it possible to create modules, called "entities".
These are defined by their external ports (an interface with other architectures or entities) and by mathematical equations. The ability to use mathematical relationships directly in the description of the model provides great flexibility. Like all analog behavioral description languages, VHDL-AMS was initially dedicated to "high-level" modeling, such as the modeling of complete electronic systems; using it to build a diode model is therefore an alternative use of the language. Due to the large number of nodes, it is necessary to generate the VHDL-AMS code from a Matlab-based interface. The results obtained by this method will be compared with those from various other tools. The model is designed to: match the specifications originally drawn up by the designers, so as to let them highlight the different characteristics of the modules; easily simulate the integration and/or suitability of the component in a given system; and be usable within more complex components. Finally, we plan to conduct experimental measurements in order to verify the accuracy of our model.
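On one edge of the mesh, the Scharfetter-Gummel discretization referred to above reduces to a flux built from the Bernoulli function B(x) = x / (exp(x) - 1). The sketch below uses one common sign convention (conventions vary between texts) and checks the zero-field diffusion limit:

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with a series fallback near x = 0
    to avoid the 0/0 indeterminacy."""
    if abs(x) < 1e-8:
        return 1.0 - x / 2.0
    return x / math.expm1(x)

def sg_electron_flux(n_left, n_right, psi_left, psi_right, D, h, Vt=0.025852):
    """Scharfetter-Gummel electron flux on the edge between two nodes,
    taken positive left-to-right (one sign convention among several).
    D: diffusivity, h: edge length, Vt: thermal voltage at 300 K."""
    dpsi = (psi_right - psi_left) / Vt
    return (D / h) * (bernoulli(-dpsi) * n_left - bernoulli(dpsi) * n_right)

# Zero field: B(0) = 1, so the flux must reduce to pure diffusion
# -D * dn/dx = -(36 / 1e-4) * (2e16 - 1e16) = -3.6e21.
f = sg_electron_flux(1e16, 2e16, 0.0, 0.0, D=36.0, h=1e-4)
print(abs(f + 3.6e21) < 1e7)  # True
```

The exponential weighting is what keeps the scheme stable on coarse meshes where the potential drop per edge far exceeds the thermal voltage, which is the reason the scheme is the workhorse of device simulation.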
|
159 |
Simulace idiofonického nástroje / Simulation of Idiofonic System. Múčka, Martin. January 2018 (has links)
The thesis deals with the dynamic simulation of the behavior of a real bell over time. The model is built according to the principles of physical discretization, as a spring network in the FyDiK3D software. For the model to be considered valid, the behavior of the structures used must first be verified on elementary problems of mechanics. The thesis shows the correlation between the stiffnesses of the normal and diagonal springs, and describes how the software's import tools are used to create the model. The resulting model approaches the behavior of the real bell.
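The elementary building block of such a spring-based (physically discretized) model is a mass on a spring integrated explicitly in time. A minimal sketch, not FyDiK3D code:

```python
import math

def simulate_oscillator(m, k, x0, dt, n_steps):
    """Symplectic-Euler integration of a single mass on a spring,
    the elementary unit of a spring-network (physical discretization)
    model.  Returns the position history."""
    x, v = x0, 0.0
    xs = [x]
    for _ in range(n_steps):
        v += (-k / m) * x * dt   # spring force F = -k x
        x += v * dt              # update position with the new velocity
        xs.append(x)
    return xs

# After one full period T = 2*pi*sqrt(m/k) the mass returns near x0,
# which is the kind of elementary check the thesis uses for validation.
m, k, x0 = 1.0, 4.0, 0.01
T = 2.0 * math.pi * math.sqrt(m / k)
xs = simulate_oscillator(m, k, x0, T / 1000.0, 1000)
print(abs(xs[-1] - x0) < 1e-4)  # True
```

A bell model is then a large network of such masses and springs (normal and diagonal), whose stiffness ratio controls the effective elastic behavior, as the abstract notes.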
|
160 |
Generátor sítě konečných prvků / Mesh Generator for the Finite Element Method. Ščišľak, Tomáš. January 2011 (has links)
The thesis describes the basic principles of the finite element method and the basic properties of triangulations. It focuses primarily on the Delaunay and greedy triangulations. The greedy triangulation is simple to implement but may not produce well-shaped triangles. The Delaunay method is used, thanks to its robustness, in a wide range of fields, especially in computer graphics; it is relatively easy to implement and produces high-quality triangles.
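As a minimal illustration of what gives Delaunay triangulations their triangle quality, the empty-circumcircle predicate below is the test used when flipping edges (a generic sketch, not code from the thesis):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c): the empty-circle predicate
    that characterizes the Delaunay triangulation and drives edge flips."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0

# A point near the hypotenuse lies inside the circumcircle of the
# right triangle (0,0)-(1,0)-(0,1); a distant point does not.
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.9, 0.9)))  # True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2, 2)))      # False
```

A triangulation is Delaunay exactly when no vertex fails this test for any triangle; enforcing it maximizes the minimum angle, which is why Delaunay meshes avoid the slivers a greedy triangulation can produce.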
|