361 |
Estimação de estado em sistemas elétricos de potência: a interpretação geométrica aplicada ao processamento de erros de medidas, de parâmetros e de topologia / Power systems state estimation: the geometrical view applied to the processing of measurement, parameter and topology errors. Breno Elias Bretas de Carvalho, 29 March 2018
This work implements a computational tool that estimates the states (nodal complex voltages) of an electric power system and applies alternative methods, based on the geometrical interpretation of the errors and on the measurement innovation concept, to process topology errors, parameter errors and/or gross measurement errors. The state estimation problem is solved with the weighted least squares method. Through the geometrical interpretation it has been demonstrated mathematically that the measurement error is composed of a detectable and an undetectable component; the methodologies used so far for error processing consider only the detectable component and can therefore fail. To overcome this limitation, and building on the concepts above, an alternative methodology based on the analysis of the measurement error components was studied and implemented. First, the measurement set is tested for errors using the value of the composed normalized measurement error. Next, the method identifies which type of error occurred, or whether more than one type occurred. The correction applied to the erroneous line parameter or measurement is the corresponding composed normalized error. The proposed approach requires only a single measurement set, taken at one instant. The software was validated through simulations on the IEEE 14-bus and 57-bus systems.
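The weighted least squares estimator and the residual test this abstract builds on can be sketched on a toy linear measurement model. This is a minimal illustration, assuming a linear model z = Hx + e with a known measurement covariance R (a real estimator iterates on the nonlinear power-flow equations); the thesis's composed normalized error extends the classical largest-normalized-residual test shown here.

```python
import numpy as np

# Toy linear measurement model z = H x + e (assumed numbers throughout).
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
R = 0.01 * np.eye(4)                   # measurement covariance
x_true = np.array([1.0, 2.0])
z = H @ x_true
z[2] += 5.0                            # inject a gross error in measurement 3

# Weighted least squares estimate: x = (H' R^-1 H)^-1 H' R^-1 z
W = np.linalg.inv(R)
G = H.T @ W @ H                        # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

# Residuals and their covariance  Omega = R - H G^-1 H'
r = z - H @ x_hat
Omega = R - H @ np.linalg.solve(G, H.T)
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))

suspect = int(np.argmax(r_norm))       # largest normalized residual test
print(suspect, r_norm[suspect] > 3.0)  # flags measurement index 2
```

The corrupted measurement is the one with the largest normalized residual; the thesis's contribution is to correct it by the composed normalized error rather than merely discard it.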
|
362 |
Estimação de estado: a interpretação geométrica aplicada ao processamento de erros grosseiros em medidas / State estimation: the geometrical interpretation applied to the processing of gross errors in measurements. Breno Elias Bretas de Carvalho, 22 March 2013
This work implements a computer program that estimates the states (nodal complex voltages) of an electric power system (EPS) and applies alternative methods, based on the geometrical interpretation of the measurement errors and on the innovation concept, to process gross errors (GEs). Through the geometrical interpretation, BRETAS et al. (2009), BRETAS; PIERETI (2010), BRETAS; BRETAS; PIERETI (2011) and BRETAS et al. (2013) proved mathematically that the measurement error is composed of detectable and undetectable components, and that the detectable component is exactly the measurement residual. The methods used so far for processing GEs consider only the detectable component of the error and may consequently fail. To overcome this limitation, and building on the works cited above, two alternative methodologies for processing measurements with GEs were studied and implemented. The first is based on the direct analysis of the components of the measurement errors; the second, like the traditional methods, is based on the analysis of the measurement residuals. The second methodology, however, does not use a fixed threshold for detecting measurements with GEs: a new threshold value (TV), characteristic of each measurement, is adopted, as presented in the work of PIERETI (2011). Furthermore, to reinforce this theory, an alternative way to compute these thresholds is proposed, based on the geometry of the probability density function of the multivariate normal distribution of the measurement residuals.
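The decomposition the cited papers prove can be checked numerically: the residual operator projects the measurement error onto the subspace the estimator can see, while the remainder lies in the range of the Jacobian H and is invisible to any residual test. A minimal sketch with an assumed H and an assumed error vector:

```python
import numpy as np

# Jacobian of an assumed toy model; rows = measurements, cols = states.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])

# Hat matrix K = H (H'H)^-1 H' (identity weights for simplicity).
K = H @ np.linalg.solve(H.T @ H, H.T)

e = np.array([0.0, 0.0, 5.0, 0.0])     # an assumed measurement error
e_det = (np.eye(4) - K) @ e            # detectable part = the residual
e_und = K @ e                          # undetectable part, lies in range(H)

# The error splits exactly into the two components ...
print(np.allclose(e_det + e_und, e))   # True
# ... and the undetectable part is reproduced exactly by a state shift,
# so it leaves the residual untouched:
coef, *_ = np.linalg.lstsq(H, e_und, rcond=None)
print(np.allclose(H @ coef, e_und))    # True
```

Because K is a projection, applying the residual operator to the undetectable part yields zero, which is why residual-only tests can miss part of a gross error.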
|
364 |
Modélisation et simulation 3D de la rupture des polymères amorphes / Modelling and numerical study of 3D effects on glassy polymer fracture. Guo, Shu, 08 July 2013
This work investigates three-dimensional effects on the strain and stress fields near the crack tip of a specimen loaded in mode I, and the influence of specimen thickness on the fracture of glassy polymers. The constitutive law characteristic of amorphous polymers, with a viscoplastic yield stress followed by softening and then progressive hardening as strain increases, is accounted for and implemented in an Abaqus UMAT routine. The fields near the notch are analysed, and the 3D results are compared with 2D plane-strain calculations. The influence of specimen thickness is studied: beyond a ratio of thickness t to notch radius rt of t/rt > 20, the plastic strain fields of the 3D and 2D calculations are qualitatively similar. In contrast, the 2D calculations overestimate the stress distribution, notably the mean stress, compared with the 3D simulations. Failure by crazing is then accounted for, modelled with a cohesive zone model. A parametric study establishes a procedure for identifying the characteristic parameters of the cohesive model. Moreover, the simulations show that for t/rt ratios above 20 a minimum toughness can be estimated: this is an important result for the experimental determination of the toughness of ductile polymers.
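The cohesive description of crazing can be illustrated with a generic bilinear traction-separation law. This is a sketch of the common bilinear form, not the specific cohesive model calibrated in the thesis, and the peak traction and separations are assumed values.

```python
def traction(delta, delta0=1e-6, delta_f=5e-6, t_max=40.0e6):
    """Bilinear traction-separation law (Pa): linear rise to the peak
    traction t_max at separation delta0, then linear softening to zero
    at the critical separation delta_f (complete decohesion)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                          # rising branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                                 # fully open craze

# Fracture energy = area under the curve = 0.5 * t_max * delta_f
G_c = 0.5 * 40.0e6 * 5e-6
print(G_c)   # 100.0 J/m^2 for the assumed parameters
```

A parameter identification procedure, as in the thesis, amounts to choosing t_max and delta_f so that simulated load-displacement curves match experiments.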
|
365 |
Análisis de sistemas radiantes sobre geometrías arbitrarias definidas por superficies paramétricas / Analysis of radiating systems on arbitrary geometries defined by parametric surfaces. Saiz Ipiña, Juan Antonio, 01 December 1995
This thesis presents a method to analyse antennas mounted on complex bodies. Geometrical Optics (GO) and the Uniform Theory of Diffraction (UTD) are used to analyse the effect of the structure on the radiation pattern of the antennas. The bodies are modelled geometrically with NURBS (Non-Uniform Rational B-Spline) surfaces; besides being accurate and efficient, the method is therefore compatible with most modern CAGD (Computer-Aided Geometric Design) programs. The treatment of arbitrary geometries requires a code capable of efficient 3D analysis. To obtain accurate results, the surface description must be close to the real model, which complicates the computation. Here the structure is modelled by a collection of NURBS patches joined to form a complete description of the surface model; the NURBS description handles free-form surfaces with a low number of patches and therefore a small amount of data. The initial NURBS description of the model is accompanied by complementary data such as the topology of the surfaces, the boundary curves and the material types, which are essential for the selection criteria used to accelerate the analysis. The method reads the NURBS description and decomposes the NURBS patches into rational Bézier surfaces; a rational Bézier patch is also a parametric surface, defined as a linear combination of Bernstein polynomials. The antennas are modelled with simple numerical models based on arrays of electric and magnetic infinitesimal dipoles. This antenna model is very advantageous because, from few input data, the source is defined in every direction of space and the radiated field is readily computed. The electromagnetic analysis of the effects contributing to the field scattered by the geometry starts with a rigorous selection of the geometry illuminated by the source: only the illuminated Bézier patches are kept in memory during the analysis, so the parts of the geometry that do not contribute to the scattering are discarded. The total field is the superposition of the following GO and UTD components: the direct field from the source, fields reflected by the Bézier patches, fields diffracted by the model edges defined as Bézier curves, creeping waves, double reflections, and reflected-diffracted and diffracted-reflected fields. The method handles both near-field and far-field cases. The search for specular and diffraction points is the most CPU-intensive task, so before applying the intersection algorithms a set of fast selection criteria, dependent on the observation direction, is applied. Fermat's principle, in conjunction with the Conjugate Gradient Method (CGM), is used to obtain the reflection and diffraction points on the structure efficiently. For each effect the complete ray path is examined for occlusion: if the ray intersects any Bézier patch of the model, its contribution is discarded. Double effects are treated as a generalisation of the single-effect algorithms. The developed method is efficient because it needs only a small number of surfaces to model complex bodies, which translates into low memory requirements and short computation times.
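The Fermat-principle search for specular points can be sketched on the simplest case, a flat patch P(u, v) = (u, v, 0), where the image method gives the exact answer for comparison. Plain gradient descent on the path length stands in here for the conjugate-gradient search used in the thesis, and the source and observer positions are assumed values.

```python
import numpy as np

src = np.array([0.0, 0.0, 1.0])        # assumed source position
obs = np.array([1.0, 0.0, 1.0])        # assumed observation point

def path_length(u, v):
    """Ray length source -> P(u,v) -> observer on the patch z = 0."""
    p = np.array([u, v, 0.0])
    return np.linalg.norm(src - p) + np.linalg.norm(p - obs)

# Minimise the path length over the patch parameters (Fermat's principle),
# using a numerical gradient and fixed-step descent.
uv = np.array([0.9, 0.4])              # arbitrary starting guess
h, step = 1e-6, 0.2
for _ in range(2000):
    grad = np.array([
        (path_length(uv[0] + h, uv[1]) - path_length(uv[0] - h, uv[1])) / (2 * h),
        (path_length(uv[0], uv[1] + h) - path_length(uv[0], uv[1] - h)) / (2 * h),
    ])
    uv -= step * grad

print(uv.round(3))   # close to (0.5, 0.0), the image-method specular point
```

On a curved Bézier patch the same idea applies with P(u, v) evaluated from the Bernstein basis, which is where the conjugate-gradient method pays off.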
|
366 |
FE Analysis of axial-bearing in large fans : FE analys av axialkullager i stora fläktar. Hjalmarsson, Joel; Memic, Anes, January 2010
This thesis project was carried out at Fläktwoods AB in Växjö, a producer of large axial fans for various industrial applications. The purpose is to increase the knowledge of grease-lubricated axial ball bearings through FE analyses. The project was executed in five sub-steps so as to determine the influence of one or a few parameters at a time. The studied parameters are: mesh density, contact stiffness, load, bearing geometry (i.e. osculation), geometric nonlinearity and material nonlinearity (i.e. plasticity). It is concluded that the mesh density should be fine enough to give a smooth result but coarse enough to keep the calculation time reasonable. The contact stiffness has a small but clear impact on the contact pressure and the penetration. Changes of the osculation change the shape of the contact ellipse, whereas different load levels affect the size of the ellipse rather than its shape. When dealing with plasticity, the yield strength is the most important factor to take into consideration.
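The load-versus-shape observation has a classical closed-form analogue in Hertz theory: for a ball pressed against a flat (circular contact), the load changes only the size of the contact, with the radius growing as F to the power 1/3. A sketch with assumed steel-ball values; the elliptical-contact FE results of the thesis generalise this to the osculated raceway.

```python
import math

def hertz_ball_on_flat(F, R, E=210e9, nu=0.3):
    """Hertzian circular contact of an elastic ball (radius R, load F)
    on a flat of the same material: contact radius a and peak pressure."""
    E_star = E / (2.0 * (1.0 - nu**2))              # combined contact modulus
    a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)
    p_max = 3.0 * F / (2.0 * math.pi * a**2)
    return a, p_max

a1, p1 = hertz_ball_on_flat(F=100.0, R=0.01)   # assumed 10 mm ball, 100 N
a2, p2 = hertz_ball_on_flat(F=200.0, R=0.01)   # double the load

print(p1)          # peak pressure on the GPa scale
print(a2 / a1)     # contact size grows as 2**(1/3), shape unchanged
```

Both the contact radius and the peak pressure scale as F to the 1/3, which is why load level changes the size but not the shape of the contact.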
|
367 |
Μεταφυσικές και γνωσιολογικές πλαισιώσεις της ηθικής στον πλατωνικό διάλογο «Μένων» / Metaphysical and epistemological framings of ethics in the Platonic dialogue Meno. Γιακουμή, Ραφαηλία (Giakoumi, Rafailia), 27 August 2014
The main question of the Platonic dialogue Meno falls into two parts. The first appears at the very beginning of the dialogue, when the young Thessalian asks Socrates how virtue is acquired. According to the Socratic position, this question cannot be answered unless a definition of virtue is first formulated, which thus becomes, indirectly, the second part of the theme. The fact that Meno studied under Gorgias is Socrates' occasion to challenge his interlocutor to define virtue, while himself pretending ignorance. Meno attempts to define the concept of virtue three times, without success, since Socrates identifies errors each time. Led to an impasse, Meno then asks: how can someone investigate a subject he does not know, and if he comes to know it, how does he know that this is what he was looking for (Meno's paradox)? Socrates answers this puzzle by invoking the theory of recollection, according to which knowledge is the recall of what already exists in the soul, having presupposed the immortality of the soul. Indeed, he proceeds to a demonstration of this account by conducting a geometrical experiment with one of Meno's slaves. What the interlocutors gain from the experiment is the value of inquiry whose aim is to approach the truth, an inquiry that also requires the acceptance of one's own ignorance. The philosophical definition Socrates seeks lies in just such an inquiry, and he prompts Meno to investigate virtue together with him. This time they follow the hypothetical method, through which they examine how virtue is acquired, since their earlier discussion failed to produce an adequate definition. Virtue is not innate: otherwise, the young who are born virtuous would have to be guarded so as not to be corrupted. Nor is virtue teachable: after an intervening dialogue with Anytus, they find that neither are the sophists the competent teachers, nor did the politicians manage to transmit virtue to their children. A first conclusion they reach, then, is that virtue is not taught. But how can one explain the observation that there are people who perform virtuous actions? At this point Socrates concludes that some parameter has escaped their inquiry. They re-examine what has been discussed and finally attribute the virtuous character of such people to right opinion granted by the gods, thereby introducing the distinction between opinion and knowledge. Nevertheless, the dialogue ends in aporia, since no adequate definition of virtue is formulated.
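The geometrical experiment with the slave concerns doubling the square: the slave first guesses that doubling the side doubles the area, and is led to see that the square built on the diagonal is the one with exactly twice the area. The arithmetic behind the two guesses can be checked directly:

```python
import math

s = 2.0                       # side of the original square (any value works)
area = s * s

# The slave's first guess: doubling the side quadruples the area.
print((2 * s) ** 2 == 4 * area)        # True

# Socrates' construction: the square on the diagonal has exactly twice
# the area, since the diagonal has length s * sqrt(2).
d = s * math.sqrt(2.0)
print(math.isclose(d * d, 2 * area))   # True
```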
|
368 |
Ανάπτυξη μεθοδολογιών για τη μη-γραμμική ανάλυση κατασκευών μεγάλης κλίμακας / Development of methodologies for the non-linear analysis of large-scale structures. Μπέλεσης, Στέφανος (Belesis, Stefanos), 19 May 2011
O σχεδιασμός και η ανάπτυξη οικονομικών προϊόντων, με ταυτόχρονη ικανοποίηση των αναγκών για υψηλές επιδόσεις και ασφάλεια αποτελεί μια από τις μεγαλύτερες προκλήσεις για τους ερευνητές μηχανικούς και τη βιομηχανία. Ειδικότερα στους τομείς της κατασκευαστικής βιομηχανίας (αεροναυπηγική, ναυπηγική, αυτοκινητοβιομηχανία, διαστημική) των οποίων τα προϊόντα παράγονται σύμφωνα με τις τεχνολογίες αιχμής, επιζητείται από το μηχανικό να σχεδιάζει νέα προϊόντα με υψηλότερες επιδόσεις, χωρίς να αγνοεί την απαίτηση για μείωση του κόστους και του χρόνου ανάπτυξης αυτών. Η τάση αυτή βρίσκει εφαρμογή κατά κύριο λόγο στην αεροναυπηγική, όπου η μείωση του αξιοσημείωτου κόστους ανάπτυξης νέων αεροσκαφών, χωρίς υποβάθμιση της ασφαλούς και υψηλής ποιότητας τους, αποτελεί βασικό και μόνιμο στόχο.
Ο κυριότερος παράγοντας που επιβαρύνει σημαντικά την ανάπτυξη νέων αεροσκαφών, τόσο από πλευράς κόστους, όσο και χρονικά, είναι οι πειραματικές δοκιμές πλήρους κλίμακας η μεγάλης κλίμακας σε συνθήκες λειτουργίας, οι οποίες επηρεάζουν σημαντικά το κόστος και το χρόνο ανάπτυξης. Οι συγκεκριμένες δοκιμές συμπεριλαμβάνονται στη διαδικασία του σχεδιασμού, με σκοπό να επαληθεύσουν τα αποτελέσματα των αντίστοιχων δομικών αναλύσεων. Η σημασία των πειραματικών δοκιμών και συγκεκριμένα εκείνων της πλήρους κλίμακας ενισχύεται από το γεγονός ότι επιβάλλονται κατά την πιστοποίηση από τις αρχές Αδειοδότησης, με δεδομένο ότι οι δομικές αναλύσεις της αντίστοιχης κλίμακας (πολύ μεγάλης η πλήρους) δεν παρέχουν ικανοποιητική αξιοπιστία.
Η παραπάνω αδυναμία να εξαχθούν ικανοποιητικά αποτελέσματα από τις δομικές αναλύσεις οφείλεται σε δύο βασικά χαρακτηριστικά της ανάλυσης των κατασκευών μεγάλης κλίμακας. Η πρόβλεψη της αστοχίας στις αεροναυπηγικές και άλλες κατασκευές απαιτεί μη-γραμμική ανάλυση, λόγω αιτιών που σχετίζονται με τη συμπεριφορά υλικού (μη-γραμμική συμπεριφορά λόγω ελαστοπλαστικής συμπεριφοράς μεταλλικών υλικών η λόγω αστοχίας συνθέτων υλικών) ή με τη συμπεριφορά της δομής (γεωμετρική μη-γραμμικότητα, προβλήματα επαφής, κλπ/). Επιπρόσθετα, στις κατασκευές αυτές υπάρχει μεγάλη διαφορά κλίμακας μεταξύ των διαστάσεων της περιοχής έναρξης και αρχικής διάδοσης της τοπικής βλάβης με τις συνολικές διαστάσεις της δομής, οι οποίες σχετίζονται με την τελική αστοχία της κατασκευής. Η προσομοίωση με αριθμητικές μεθόδους, με έμφαση στη μέθοδο των Πεπερασμένων Στοιχείων, της δομικής συμπεριφοράς μεγάλης κλίμακας κατασκευών με τα παραπάνω χαρακτηριστικά, οδηγεί σε αριθμητικά πρότυπα εκατομμυρίων βαθμών ελευθερίας, τα οποία απαιτείται να επιλυθούν με μη-γραμμικές μεθόδους. Ο συνδυασμός του μεγέθους των προτύπων αυτών με το μη-γραμμικό χαρακτήρα τους, καθιστά το πρόβλημα δυσεπίλυτο έως σήμερα με χρήση συμβατικών μεθόδων και ουσιαστικά αποτελεί την αιτία μη-αξιοποίησης των εικονικών δοκιμών (αριθμητικών αναλύσεων), στην ελαχιστοποίηση ή και την ολοκληρωτική αποφυγή των εκτενών και δαπανηρών πειραματικών δοκιμών.
Βάσει των ανωτέρω, σκοπός της παρούσας διατριβής είναι η ανάπτυξη νέων, αξιόπιστων και ολοκληρωμένων μεθοδολογιών για τη μη-γραμμική ανάλυση κατασκευών μεγάλης κλίμακας, με κύριο στόχο την ικανοποιητική πρόβλεψη τοπικών φαινομένων που συνδέονται με την έναρξη της βλάβης, αλλά και την ικανότητα να εκτείνονται έως την κατάλληλη κλίμακα (ίσως και την πλήρη), ώστε να καθίσταται δυνατός ο υπολογισμός της δομικής συμπεριφοράς της κατασκευής μέχρι την τελική αστοχία.
On this basis, new methodologies for structural non-linear analysis are developed and suitable modifications to already established methods are proposed, with a view to applying them to large-scale structures. Owing to the advantages offered by the rapid and continuous evolution of computers (speed, memory, software) and the wide use of commercial packages based on matrix theory (Finite Elements, Boundary Elements, etc.), such methodologies are widely used to predict the structural behaviour of structures under the philosophy of the 'virtual test'. Given that, as noted above, numerical methods for solving non-linear problems in large-scale structures do not provide satisfactory results, this work investigated alternative methodologies and techniques that approach the technological problem from the engineer's point of view, and proposed reliable solutions applicable in an industrial environment.
The procedure followed consists of four basic pillars: linear numerical stress analysis of the entire large-scale structure; a check for the possible onset of local non-linear behaviour; local failure analysis (non-linear analysis); and a series of suitable techniques for determining the contribution of the locally non-linear regions to the structural behaviour of the whole structure. All steps of the procedure were carried out on the basis of the Finite Element method. The linear numerical stress analysis was performed with numerical models simulating the entire structure, divided into parts according to any geometric repeatability the geometry may exhibit. The computed stresses were used to predict the onset of local non-linearity by means of suitably developed criteria, depending on the type of non-linearity that may appear. The detected non-linear regions are ranked by criticality and, according to the critical load level of each, are treated locally with non-linear analyses to simulate the initiation and evolution of the local non-linearity. To compute the contribution of the non-linear sub-regions to the structural behaviour of the whole structure, suitable techniques were developed for describing the local non-linearity and introducing it into the structural characteristics of the numerical model of the overall structure. To simulate the evolution of the local non-linearity, from first detection to its final state, the procedure is executed incrementally and iteratively. It is shown that, under specific assumptions, these methodologies for the structural non-linear analysis of large-scale structures can provide reliable results comparable to those of full-scale experimental tests.
At the same time, they are also efficient, since they have been developed to focus the available computational resources only on the critical regions, through the on-demand application of local non-linear analyses. / The design and development of low-cost products that simultaneously fulfil the requirements for higher performance and safety is one of the biggest challenges for research engineers and for industry. Especially in the structural industries (aeronautics, shipbuilding, automotive, space), where products are produced according to the latest achievements of technology, the engineer is obliged to design new products of higher proficiency without neglecting the need for lower cost and shorter development time. This trend applies above all in the aeronautical industry, where reducing the considerable cost of developing a new aircraft, without downgrading the level of safety and the quality of service, is the main target of current research efforts.
The main factor that weighs down the development of new aircraft, in terms of both cost and time, is the required full- or large-scale experimental testing under service loads, which significantly affects the development cost and the time to market. These tests are included in the design process in order to verify the results of the corresponding structural analyses. The importance of the experimental tests, and specifically those at full scale, is amplified by the fact that they are mandated during the certification process by the Airworthiness Authorities, since structural analyses at the corresponding (full) scale do not provide adequately reliable results.
The above-mentioned inability of the structural analyses to provide adequate results stems from two main characteristics of large-scale structures. First, failure prediction in aeronautical (among other) structures requires non-linear analysis, for reasons related to the material behaviour (non-linear behaviour due to composite material damage, or the elastoplastic behaviour of metallic materials) and to the structural behaviour (geometrical non-linearity, contact problems). Second, in these structures there is a great difference between the dimensions of the region of local damage initiation and the dimensions of the whole structure, the latter being related to total collapse. Simulating the structural behaviour of large-scale structures with the above characteristics by numerical methods, especially Finite Elements, leads to models with millions of DOFs (Degrees Of Freedom), whose solution requires non-linear numerical methods. The combination of the size of these models with their non-linear nature renders the problem intractable with conventional methodologies and is, in fact, the reason why virtual testing (numerical simulation) has not yet been thoroughly exploited to minimise the number of, or even completely avoid, the extensive and costly experimental tests.
Based on the above, the main objective of this Thesis is the development of new, reliable and integrated methodologies for the non-linear analysis of large-scale structures, aiming mainly at the satisfactory prediction of phenomena related to the initiation of local damage, while also being able to extend up to the appropriate scale (possibly full scale), in order to capture the structural behaviour of the whole structure up to total collapse.
On this basis, innovative methodologies for structural non-linear analysis are developed and appropriate modifications to already well-established techniques are proposed, so that they can be applied to large-scale structures. Owing to the advantages offered by the rapid and constant progress of computers (speed, memory, software) and the wide use of commercial tools based on matrix theory (Finite Elements, Boundary Elements), the above-mentioned methodologies were developed following the philosophy of 'virtual testing'. Since the numerical solution methods for non-linear problems in large-scale structures are not able to provide adequate results, as mentioned previously, the present work investigated alternative methodologies and techniques, approaching the technological problem from the engineer's point of view, and proposed reliable solutions applicable in an industrial environment.
The procedure that was followed consists of four basic pillars: the linear numerical stress analysis of the whole structure, the check for possible local non-linear behaviour, the local damage analysis (non-linear analysis), and a series of appropriately configured routines able to determine the contribution of the regions exhibiting local damage to the structural behaviour of the whole structure. All the routines of the proposed methodologies were implemented using the commercial Finite Element code ANSYS. The linear numerical stress analysis of the structure was carried out with numerical models simulating the whole structure, divided into suitable parts based on geometrical repeatability. The calculated stresses were used to predict local damage by means of properly developed damage criteria, depending on the type of non-linearity. The regions detected were classified according to their criticality level (critical load) and treated with local non-linear analyses to simulate the initiation of local damage. To account for the contribution of the local damage to the structural behaviour of the whole structure, appropriate techniques were developed for describing the local damage and incorporating it into the structural features of the numerical model of the structure. To track the damage evolution, from first detection up to final failure, the procedure was performed incrementally and iteratively. It was shown that, under specific assumptions, the proposed methodologies for simulating the non-linear phenomena of large-scale structures are capable of providing accurate results consistent with those of full-scale experimental tests.
At the same time, the proposed methodologies are also efficient, since they have been developed to focus the available computing resources on the non-linearly behaving regions through the on-demand application of local non-linear analyses.
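The four-step strategy summarised above (global linear solve, detection of critical regions, local non-linear treatment, re-insertion of the updated local behaviour, repeated incrementally) can be conveyed with a deliberately simplified toy model. This is a hypothetical illustration only, not the Thesis' ANSYS-based implementation: the 'structure' is a chain of springs in series, stiffness softening stands in for the local damage analysis, and all numerical values are invented.

```python
def global_local_analysis(stiffness, yield_force, softening=0.5,
                          total_load=100.0, increments=10):
    """Toy incremental global/local analysis of springs in series.

    stiffness   -- per-region spring stiffness (the 'global model')
    yield_force -- per-region force at which local non-linearity starts
    softening   -- stiffness knock-down applied by the 'local analysis'
    Returns a list of (load, extension) pairs for the whole chain.
    """
    k = list(stiffness)
    yielded = [False] * len(k)
    history = []
    for step in range(1, increments + 1):
        load = total_load * step / increments
        # Steps 1+2: global linear solve and detection of critical regions.
        # In a series chain every region carries the full applied load, so a
        # region becomes critical once the load exceeds its yield force.
        critical = [i for i, fy in enumerate(yield_force)
                    if not yielded[i] and load > fy]
        # Step 3: local non-linear treatment -- here, simple softening.
        for i in critical:
            yielded[i] = True
            k[i] *= softening
        # Step 4: re-insert the updated local stiffnesses into the global
        # model and evaluate the response (series compliances add).
        extension = load * sum(1.0 / ki for ki in k)
        history.append((load, extension))
    return history

history = global_local_analysis([10.0, 10.0, 10.0], [40.0, 60.0, 80.0])
```

As each region yields, the global load-extension curve softens step by step; at the final load the chain in this toy example extends twice as much as a purely linear model would predict (60 versus 30 units), which is the kind of global effect of local non-linearity the described procedure is designed to capture.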
|
369 |
Identification passive en acoustique : estimateurs et applications au SHM / Passive estimation in acoustics : estimators and applications to SHM
Vincent, Rémy 08 January 2016 (has links)
Ward's identity is a relationship that makes it possible to identify a dissipative linear propagation medium, that is, to estimate the parameters that characterise it. In the work presented, this identity is used to propose new observation models for an estimation context described as passive: the sources that excite the system are not controlled by the user. Estimation and detection theory in this context is studied and performance analyses are carried out on various estimators. The application scope of the proposed methods is Structural Health Monitoring (SHM), i.e. monitoring the state of health of buildings, bridges and the like. The approach is developed for the acoustic modality at audible frequencies, which proves complementary to state-of-the-art SHM techniques and gives access, among other things, to structural and geometrical parameters. Various scenarios are illustrated through experimental implementation of the developed algorithms, adapted to the constraints of embedded computation on an autonomous sensor network. / Ward identity is a relationship that enables damped linear system identification, i.e. the estimation of its characteristic properties. This identity is used to provide new observation models available in an estimation context where the sources are not controlled by the user. An estimation and detection theory is derived from these models and various performance studies are conducted for several estimators. The scope of the proposed methods extends to Structural Health Monitoring (SHM), which aims at measuring and tracking the health of buildings, such as a bridge or a sky-scraper for instance. The acoustic modality is chosen as it provides parameter estimation complementary to the state of the art in SHM, such as the recovery of structural and geometrical parameters.
Some scenarios are experimentally illustrated using the developed algorithms, adapted to fit the constraints set by embedded computation on an autonomous sensor network.
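The abstract does not spell out the estimators themselves, but the flavour of passive (output-only) identification can be sketched with a generic textbook example that is not the Ward-identity method: a resonant system driven by an unobserved white-noise source, whose autoregressive coefficients are recovered from output autocovariances alone via the Yule-Walker equations. All numerical values below are invented for illustration.

```python
import random

def simulate_ar2(a1, a2, n, seed=0):
    """Output of x[t] = a1*x[t-1] + a2*x[t-2] + e[t]; e is never observed."""
    rng = random.Random(seed)
    x = [0.0, 0.0]
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, 1.0))
    return x[2:]

def autocov(x, lag):
    """Biased sample autocovariance at the given lag."""
    m = sum(x) / len(x)
    return sum((x[t] - m) * (x[t - lag] - m)
               for t in range(lag, len(x))) / len(x)

def yule_walker_ar2(x):
    """Passive estimation: recover (a1, a2) from output statistics only."""
    r0, r1, r2 = autocov(x, 0), autocov(x, 1), autocov(x, 2)
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

x = simulate_ar2(1.5, -0.9, 20000)       # lightly damped resonance
a1_hat, a2_hat = yule_walker_ar2(x)
```

With 20 000 output samples the estimates land close to the true (1.5, -0.9) even though the excitation itself was never observed, which is the defining trait of a passive scheme like the one the thesis develops for acoustics.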
|
370 |
Electron beam melting of Alloy 718 : Influence of process parameters on the microstructure
Karimi Neghlani, Paria January 2018 (has links)
Additive manufacturing (AM) is the name given to the technology of building 3D parts by adding material layer by layer, including metals, plastics, concrete, etc. Of the different types of AM techniques, electron beam melting (EBM), a powder bed fusion technology, has been used in this study. EBM builds parts by melting metallic powders with a highly intense electron beam as the energy source. Compared to a conventional process, EBM offers enhanced efficiency for the production of customized and specific parts in the aerospace, space, and medical fields. In addition, the EBM process is used to produce complex parts for which other technologies would be either expensive or difficult to apply. This thesis has been divided into three sections, starting from a wider window and proceeding to a smaller one. The first section reveals how the position-related parameters (distance between samples, height from the build plate, and sample location on the build plate) can affect the microstructural characteristics. It has been found that the gap between the samples and the height from the build plate can have significant effects on the defect content and the niobium-rich phase fraction. In the second section, through a deeper investigation, the behavior of Alloy 718 during the EBM process as a function of different geometry-related parameters is examined by building single tracks adjacent to each other (track by track) and single-wall samples (single tracks on top of each other). In this section, the main focus is to understand the effect of successive thermal cycling on microstructural evolution.
In the final section, the correlations between the main machine-related parameters (scanning speed, beam current, and focus offset) and the geometrical (melt pool width, track height, re-melted depth, and contact angle) and microstructural (grain structure, niobium-rich phase fraction, and primary dendrite arm spacing) characteristics of a single track of Alloy 718 have been investigated. It has been found that the most influential machine-related parameters are scanning speed and beam current, which have significant effects on the geometry and the microstructure of the single-melted tracks.
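The two machine parameters found most influential, scanning speed and beam current, are often combined with the accelerating voltage into a single line energy: the beam power deposited per unit length of scan. A minimal sketch of that bookkeeping follows; the 60 kV accelerating voltage and the parameter values are illustrative assumptions, not values taken from the study.

```python
def line_energy(current_a, speed_m_s, voltage_v=60e3):
    """Line energy in J/m: beam power (V * I) divided by scanning speed."""
    return voltage_v * current_a / speed_m_s

# Hypothetical parameter sweep: raising the beam current or lowering the
# scanning speed both increase the energy input per unit track length,
# with which melt pool width and re-melted depth tend to grow.
settings = [(5e-3, 4.5), (10e-3, 4.5), (10e-3, 2.0)]  # (current A, speed m/s)
energies = [line_energy(i, v) for i, v in settings]
```

For the three invented settings the line energies come out to roughly 67, 133 and 300 J/m, making explicit why scanning speed and beam current dominate the geometry of a single melted track.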
|