71 |
[en] RELIABILITY ANALYSIS OF SATURATED-UNSATURATED SOIL SLOPES USING LIMIT ANALYSIS IN THE CONIC QUADRATIC SPACE / [pt] ANÁLISE DE CONFIABILIDADE DE TALUDES EM CONDIÇÕES SATURADAS-NÃO SATURADAS VIA ANÁLISE LIMITE NO ESPAÇO CÔNICO QUADRÁTICO
MARLENE SUSY TAPIA MORALES, 14 July 2014 (has links)
[pt] Este trabalho tem por objetivo a avaliação da estabilidade de taludes de solo quando submetidos a processos de infiltração de chuva, utilizando conceitos de Análise Limite e Análise de Confiabilidade. Primeiramente, determina-se a variação da sucção no solo; para isto, empregam-se o Método dos Elementos Finitos e o Método das Diferenças Finitas na solução da equação de Richards. O modelo de Van Genuchten (1980) é utilizado para a curva característica. Na solução da não linearidade, emprega-se o método de Picard Modificado. A instabilidade de taludes é estudada mediante o método de Análise Limite Numérica com base no Método dos Elementos Finitos e no critério de Mohr-Coulomb como critério de escoamento. A solução do problema matemático é realizada no espaço cônico quadrático com o objetivo de tornar a solução computacionalmente mais eficiente. Considerando as propriedades do solo como variáveis aleatórias, foi incluída a determinação do Índice de Confiabilidade utilizando as formulações dos métodos de Monte Carlo e FORM (first order reliability method). Inicialmente são introduzidos conceitos básicos associados ao fluxo saturado-não saturado. A seguir são apresentados alguns conceitos sobre Análise Limite e sua formulação pelo Método dos Elementos Finitos. Finalmente são introduzidos os fundamentos da Análise de
Confiabilidade. Análises de confiabilidade das encostas de Coos Bay, no estado de Oregon, nos Estados Unidos, e da Vista Chinesa, no Rio de Janeiro, Brasil, são apresentadas, uma vez que estes taludes sofreram colapso quando submetidos a processos de infiltração de água de chuva. Os resultados deste trabalho mostram que a falha das encostas ocorre quando o índice de confiabilidade atinge um valor próximo de dois. / [en] This thesis aims to perform a reliability analysis of the stability of 2D soil slopes when they are submitted to water infiltration due to rainfall. The time variation of the soil matric suctions is calculated first. The Finite Element Method is used to transform the Richards differential equation into a system of nonlinear first-order equations. The nonlinearity of the problem is due to the use of the characteristic curve proposed by van Genuchten (1980). The Modified Picard Method is applied to solve the time-dependent nonlinear equation system. The responses of the flux problem are transferred to the stability problem at selected instants using the same time interval (normally days). To estimate the stability of the slopes, limit analysis is used. The limit analyses are performed based on the Lower Bound Theorem of plasticity theory. The problem is defined as an optimization problem in which the load factor is maximized. The equilibrium equations are obtained via Finite Element discretization, and the Mohr-Coulomb strength criterion is written in the conic quadratic space. Therefore, a SOCP (Second Order Conic Programming) problem is generated. The problem is solved using an interior point algorithm of the code Mosek. Since the soil properties are random variables, a reliability analysis can be performed at each instant of the time-dependent problem. In order to perform the reliability analyses, Response Surfaces for the failure function of the slope are generated. In this work, the Stochastic Collocation Method is used to generate Response Surfaces.
The Monte Carlo simulation method and FORM (First Order Reliability Method) are used to obtain both the reliability index and the probability of failure of the slopes. Reliability analyses of the Coos Bay slope in the state of Oregon, USA, and of the Vista Chinesa slope in Rio de Janeiro, Brazil, are presented because these slopes collapsed due to rainfall infiltration. The results show that the soil slope fails when the related reliability index is close to two.
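The final step of the pipeline above, turning a Monte Carlo failure count into a reliability index via the standard normal quantile, can be sketched in miniature. The limit-state function and the soil-property distributions below are illustrative assumptions, not the thesis's actual saturated-unsaturated slope model.

```python
import random
from statistics import NormalDist

# Hedged sketch: Monte Carlo estimate of the probability of failure pf and the
# reliability index beta = -Phi^{-1}(pf) for a toy limit-state function g
# (failure when g < 0). The distributions and g itself are made-up stand-ins.
def reliability_index(n_samples=200_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        c = rng.gauss(10.0, 2.0)            # toy cohesion (kPa), assumed
        phi = rng.gauss(30.0, 3.0)          # toy friction angle (deg), assumed
        g = 0.05 * c + 0.02 * phi - 0.9     # assumed limit-state function
        if g < 0:
            failures += 1
    pf = failures / n_samples
    beta = -NormalDist().inv_cdf(pf)        # index from failure probability
    return pf, beta

pf, beta = reliability_index()
```

With these assumed numbers the index comes out around 1.7; the thesis reports slope failure when the index approaches two.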
|
72 |
Un modèle géométrique multi-vues des taches spéculaires basé sur les quadriques avec application en réalité augmentée / A multi-view geometric model of specular spots based on quadrics with augmented reality application
Morgand, Alexandre, 08 November 2018 (has links)
La réalité augmentée (RA) consiste en l’insertion d’éléments virtuels dans une scène réelle, observée à travers un écran ou en utilisant un système de projection sur la scène ou l’objet d’intérêt. Les systèmes de réalité augmentée peuvent prendre différentes formes pour obtenir l’équilibre désiré entre trois critères : précision, latence et robustesse. Il est possible d’identifier trois composants principaux de ces systèmes : localisation, reconstruction et affichage. Les contributions de cette thèse se concentrent essentiellement sur l’affichage et plus particulièrement le rendu des applications de réalité augmentée. À l’opposé des récentes avancées dans les domaines de la localisation et de la reconstruction, l’insertion d’éléments virtuels de façon plausible et esthétique reste une problématique compliquée, mal posée et peu adaptée à un contexte temps réel. En effet, cette insertion requiert une reconstruction de l’illumination de la scène afin d’appliquer les conditions lumineuses adéquates à l’objet inséré. L’illumination de la scène peut être divisée en plusieurs catégories. Nous pouvons modéliser l’environnement de façon à décrire l’interaction de la lumière incidente et réfléchie pour chaque point 3D d’une surface. Il est également possible d’expliciter l’environnement en calculant la position des sources de lumière, leur type (lampe de bureau, néon, ampoule, etc.), leur intensité et leur couleur. Pour insérer un objet de façon cohérente et réaliste, il est primordial d’avoir également une connaissance de la surface recevant l’illumination. Cette interaction lumière/matériaux dépend de la géométrie de la surface, de sa composition chimique (matériau) et de sa couleur. Pour tous ces aspects, le problème de reconstruction de l’illumination est difficile, car il est très complexe d’isoler l’illumination sans connaissance a priori de la géométrie, des matériaux de la scène et de la pose de la caméra observant la scène.
De manière générale, sur une surface, une source de lumière laisse plusieurs traces telles que les ombres, qui sont créées par l’occultation de rayons lumineux par un objet, et les réflexions spéculaires ou spécularités, qui se manifestent par la réflexion partielle ou totale de la lumière. Bien que ces spécularités soient souvent considérées comme des éléments parasites dans les applications de localisation de caméra, de reconstruction ou encore de segmentation, ces éléments donnent des informations cruciales sur la position et la couleur de la source lumineuse, mais également sur la géométrie de la surface et la réflectance du matériau où elles se manifestent. Face à la difficulté du problème de modélisation de la lumière et plus particulièrement du calcul de l’ensemble des paramètres de la lumière, nous nous sommes focalisés, dans cette thèse, sur l’étude des spécularités et sur toutes les informations qu’elles peuvent fournir pour la compréhension de la scène. Plus particulièrement, nous savons qu’une spécularité est définie comme la réflexion d’une source de lumière sur une surface réfléchissante. Partant de cette remarque, nous avons exploré la possibilité de considérer la spécularité comme étant une image issue de la projection d’un objet 3D dans l’espace. Nous sommes partis d’un constat simple, mais peu traité dans la littérature : les spécularités présentent une forme elliptique lorsqu’elles apparaissent sur une surface plane. À partir de cette hypothèse, pouvons-nous considérer un objet 3D fixe dans l’espace tel que sa projection perspective dans l’image corresponde à la forme de la spécularité ? Plus particulièrement, nous savons qu’un ellipsoïde projeté perspectivement donne une ellipse. Considérer le phénomène de spécularité comme un phénomène géométrique a de nombreux avantages. (...)
/ Augmented Reality (AR) consists in inserting virtual elements into a real scene, observed through a screen or a projection system on the scene or the object of interest. Augmented reality systems can take different forms to obtain a balance between three criteria: precision, latency and robustness. It is possible to identify three main components of these systems: localization, reconstruction and display. The contributions of this thesis focus essentially on the display and more particularly the rendering of augmented reality applications. In contrast to the recent advances in the fields of localization and reconstruction, the insertion of virtual elements in a plausible and aesthetic way remains a complicated, ill-posed problem, poorly suited to a real-time context. Indeed, this insertion requires a good understanding of the lighting conditions of the scene. The lighting conditions of the scene can be divided into several categories. First, we can model the environment to describe the interaction between the incident and reflected light for each 3D point of a surface. Secondly, it is also possible to describe the environment explicitly by computing the position of the light sources, their type (desk lamp, fluorescent lamp, light bulb, ...), their intensity and their color. Finally, to insert a virtual object in a coherent and realistic way, it is essential to know the surface’s geometry, its chemical composition (material) and its color. For all of these aspects, the reconstruction of the illumination is difficult because it is very complex to isolate the illumination without prior knowledge of the geometry and materials of the scene and of the pose of the camera observing it. In general, on a surface, a light source leaves several traces such as shadows, created by the occlusion of light rays by an object, and specularities (or specular reflections), which are created by the partial or total reflection of the light.
These specularities are often described as very high intensity elements in the image. Although they are often considered as outliers for applications such as camera localization, reconstruction or segmentation, these elements give crucial information on the position and color of the light source, but also on the surface’s geometry and the reflectance of the material where they appear. To address the light modeling problem, we focused, in this thesis, on the study of specularities and on all the information they can provide for the understanding of the scene. More specifically, we know that a specularity is defined as the reflection of a light source on a shiny surface. From this statement, we explored the possibility of considering the specularity as the image created by the projection of a 3D object in space. We started from the simple observation, little studied in the literature, that specularities present an elliptic shape when they appear on a planar surface. From this hypothesis, can we consider the existence of a 3D object fixed in space such that its perspective projection in the image fits the shape of the specularity? We know that an ellipsoid under perspective projection gives an ellipse. Considering the specularity as a geometric phenomenon presents various advantages. First, the reconstruction of a 3D object, and more specifically of an ellipsoid, has been the subject of many publications in the state of the art. Secondly, this modeling allows great flexibility in tracking the state of the specularity and, more specifically, of the light source. Indeed, if the light is turned off and we know the contour, it is easy to see in the image whether the specularity disappears (and conversely when the light is turned on again). (...)
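The core geometric claim, that an ellipsoid projects to an ellipse, can be checked numerically with the classical dual-quadric projection identity C* = P Q* P^T. The camera matrix and the ellipsoid below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def ellipsoid_quadric(center, semi_axes):
    """4x4 primal quadric matrix of an axis-aligned ellipsoid."""
    M = np.diag(1.0 / np.asarray(semi_axes, float) ** 2)
    c = np.asarray(center, float)
    Q = np.zeros((4, 4))
    Q[:3, :3] = M
    Q[:3, 3] = -M @ c
    Q[3, :3] = -M @ c
    Q[3, 3] = c @ M @ c - 1.0
    return Q

# Assumed pinhole camera (focal length 800 px, principal point (320, 240)).
P = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Assumed ellipsoid sitting 5 units in front of the camera.
Q = ellipsoid_quadric(center=[0.0, 0.0, 5.0], semi_axes=[0.5, 0.3, 0.2])
Q_dual = np.linalg.inv(Q)        # dual quadric (defined up to scale)
C_dual = P @ Q_dual @ P.T        # projected dual conic
C = np.linalg.inv(C_dual)        # primal conic of the image outline

# Ellipse test: the discriminant B^2 - 4AC of the image conic is negative.
A, B, Cc = C[0, 0], 2.0 * C[0, 1], C[1, 1]
discriminant = B ** 2 - 4.0 * A * Cc
```

The sign of the discriminant is invariant under rescaling of the conic matrix, so no normalization of the projected conic is needed for the test.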
|
73 |
On the spectral geometry of manifolds with conic singularities
Suleymanova, Asilya, 29 September 2017 (has links)
Wir beginnen mit der Herleitung der asymptotischen Entwicklung der Spur des Wärmeleitungskernes, $\tr e^{-t\Delta}$, für $t\to0+$, wobei $\Delta$ der Laplace-Beltrami-Operator auf einer Mannigfaltigkeit mit Kegel-Singularitäten ist; dabei folgen wir der Arbeit von Brüning und Seeley. Dann untersuchen wir, wie die Koeffizienten der Entwicklung mit der Geometrie der Mannigfaltigkeit zusammenhängen; insbesondere fragen wir, ob die (mögliche) Singularität der Mannigfaltigkeit aus den Koeffizienten - und damit aus dem Spektrum des Laplace-Beltrami-Operators - abgelesen werden kann. In der Arbeit von Brüning und Seeley wurde gezeigt, dass im zweidimensionalen Fall ein logarithmischer Term und ein nicht lokaler Term im konstanten Glied genau dann verschwinden, wenn die Kegelbasis ein Kreis der Länge $2\pi$ ist, die Mannigfaltigkeit also geschlossen ist. Dann untersuchen wir höhere Dimensionen. Im vierdimensionalen Fall zeigen wir, dass der logarithmische Term genau dann verschwindet, wenn die Kegelbasis eine
sphärische Raumform ist. Wir vermuten, dass das Verschwinden eines nicht lokalen Beitrags zum konstanten Term äquivalent dazu ist, dass die Kegelbasis die runde Sphäre ist; das kann aber bisher nur im zyklischen Fall gezeigt werden. Für geraddimensionale Mannigfaltigkeiten höherer Dimension und mit Kegelbasis von konstanter Krümmung zeigen wir weiter, dass der logarithmische Term ein Polynom in der Krümmung ist, das Wurzeln ungleich 1 haben kann, so dass erst das Verschwinden von mehreren Termen - die derzeit noch nicht explizit behandelt werden können - die Geschlossenheit der Mannigfaltigkeit zur Folge haben könnte. / We derive a detailed asymptotic expansion of the heat trace for the Laplace-Beltrami operator on functions on manifolds with one conic singularity, using the Singular Asymptotics Lemma of Jochen Brüning and Robert T. Seeley. Then we investigate how the terms in the expansion reflect the geometry of the manifold. Since the general expansion contains a logarithmic term, its vanishing is a necessary condition for smoothness of the manifold. It is shown in the paper by Brüning and Seeley that in the two-dimensional case the constant term of the expansion contains a non-local term that determines the length of the (circular) cross section and vanishes precisely when this length equals $2\pi$, that is, in the smooth case. We proceed to the study of higher dimensions. In the four-dimensional case, the logarithmic term in the expansion vanishes precisely when the cross section is a spherical space form, and we expect that the vanishing of a further singular term will again imply smoothness, but this is not yet clear beyond the case of cyclic space forms.
In higher dimensions the situation is naturally more difficult. We illustrate this in the case of cross sections with constant curvature. There the logarithmic term becomes a polynomial in the curvature with roots that are different from 1, so that closedness of the manifold would follow only from the vanishing of several further terms, which have not been isolated so far.
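The expansion discussed in both abstracts has a general shape that can be written schematically as follows. This is a sketch only: the precise coefficients depend on the cone geometry, and the names $a_j$, $b$, $c_0$ are ours, not the thesis's notation.

```latex
\operatorname{tr}\, e^{-t\Delta} \;\sim\; \sum_{j \geq 0} a_j\, t^{(j-n)/2} \;+\; b \log t \;+\; c_0 \;+\; o(1), \qquad t \to 0^+ ,
```

Here $n$ is the dimension of the manifold; the vanishing of $b$, and of the non-local contribution to $c_0$, are the smoothness conditions studied above.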
|
74 |
Méthodes numériques pour le calcul à la rupture des structures de génie civil / Numerical methods for the yield design of civil engineering structures
Bleyer, Jérémy, 17 July 2015 (has links)
Ce travail tente de développer des outils numériques efficaces pour une approche plus rationnelle et moins empirique du dimensionnement à la ruine des ouvrages de génie civil. Contrairement aux approches traditionnelles reposant sur une combinaison de calculs élastiques, l'adoption de coefficients de sécurité et une vérification locale des sections critiques, la théorie du calcul à la rupture nous semble être un outil prometteur pour une évaluation plus rigoureuse de la sécurité des ouvrages. Dans cette thèse, nous proposons de mettre en œuvre numériquement les approches statique par l'intérieur et cinématique par l'extérieur du calcul à la rupture à l'aide d'éléments finis dédiés pour des structures de plaque en flexion et de coque en interaction membrane-flexion. Le problème d'optimisation correspondant est ensuite résolu à l'aide du développement, relativement récent, de solveurs de programmation conique particulièrement efficaces. Les outils développés sont également étendus au contexte de l'homogénéisation périodique en calcul à la rupture, qui constitue un moyen performant de traiter le cas des structures présentant une forte hétérogénéité de matériaux. Des procédures numériques sont spécifiquement développées afin de déterminer, puis d'utiliser dans un calcul de structure, des critères de résistance homogènes équivalents. Enfin, les potentialités de l'approche par le calcul à la rupture sont illustrées sur deux exemples complexes d'ingénierie : l'étude de la stabilité au feu de panneaux en béton armé de grande hauteur ainsi que le calcul de la marquise de la gare d'Austerlitz. / This work aims at developing efficient numerical tools for a more rational and less empirical assessment of the yield design of civil engineering structures.
As opposed to traditional methodologies relying on combinations of elastic computations, safety coefficients and local checking of critical members, the yield design theory seems to be a very promising tool for a more rigorous evaluation of structural safety. Lower bound static and upper bound kinematic approaches of the yield design theory are performed numerically using dedicated finite elements for plates in bending and shells in membrane-bending interaction. The corresponding optimization problems are then solved using very efficient conic programming solvers. The proposed tools are also extended to the framework of periodic homogenization in yield design, which makes it possible to tackle the case of strong material heterogeneities. Numerical procedures are specifically tailored to compute equivalent homogeneous strength criteria and to use them, in a second step, in a computation at the structural level. Finally, the potentialities of the yield design approach are illustrated on two complex engineering problems: the stability assessment of high-rise reinforced concrete panels in fire conditions and the computation of the Paris-Austerlitz railway station canopy.
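The optimization structure described above, maximizing a load factor subject to equilibrium and a conic strength criterion, can be illustrated on the smallest possible instance: equilibrium forces the stress to be proportional to the load, and the strength criterion is a single cone. This is a didactic sketch with made-up numbers, not the thesis's finite-element formulation, and it replaces a conic-programming solver with a one-dimensional bisection.

```python
import numpy as np

# Toy lower-bound problem: maximize lambda subject to s = lambda * f
# (equilibrium) and ||s|| <= k (conic strength criterion). The reference load
# f and strength radius k are illustrative assumptions.
f = np.array([1.0, 0.5, -0.2])   # assumed reference load direction
k = 2.0                          # assumed strength radius

def feasible(lam):
    s = lam * f                  # equilibrium ties the stress to the load
    return np.linalg.norm(s) <= k

lo, hi = 0.0, 100.0
for _ in range(60):              # bisection on the load factor
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)

lam_max = lo
closed_form = k / np.linalg.norm(f)   # analytic optimum of this toy case
```

For this toy case the optimum is k / ||f||, which the bisection recovers; real problems couple many stress unknowns through the equilibrium matrix and require a genuine SOCP solver.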
|
75 |
Sobre seções cônicas / On conic sections
José Adriano dos Santos Oliveira, 18 June 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / O estudo realizado nesta dissertação busca apresentar as seções cônicas, dando ênfase a uma abordagem por meio de uma geometria sintética e elementar, onde o trabalho é desenvolvido da seguinte forma: inicia-se com uma abordagem histórica, assim como a sua relação com o cone circular; em seguida, é feito um estudo sintético sobre as cônicas, exclusivamente, no plano; apresentam-se algumas superfícies quádricas; a equação geral do segundo grau é apresentada como uma representação algébrica de uma cônica; e são mostradas diversas situações onde as cônicas surgem de forma curiosamente natural, além das inúmeras aplicações práticas em diversas áreas do conhecimento. / The study in this dissertation seeks to present the conic sections, emphasizing an approach by means of a synthetic and elementary geometry. The work is carried out as follows: it begins with a historical approach, as well as the conics' relationship with the circular cone; then a synthetic study of the conics is carried out, exclusively in the plane; some quadric surfaces are presented; the general equation of the second degree is presented as an algebraic representation of a conic; and several situations are shown in which conics arise in a curiously natural way, in addition to their numerous practical applications in various fields of knowledge.
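The role of the general second-degree equation mentioned above can be made concrete: for A x² + B xy + C y² + D x + E y + F = 0, the sign of the discriminant B² - 4AC decides the type of nondegenerate conic. A minimal sketch, ignoring degenerate cases:

```python
# Classify a (nondegenerate) conic from the quadratic part of the general
# second-degree equation A x^2 + B x y + C y^2 + D x + E y + F = 0.
# Degenerate conics (a point, line pairs, the empty set) are ignored here.
def classify_conic(A, B, C):
    d = B * B - 4 * A * C
    if d < 0:
        return "ellipse"    # includes the circle as the case B = 0, A = C
    if d == 0:
        return "parabola"
    return "hyperbola"
```

For instance, x² + y² = 1 gives an ellipse (a circle) and xy = 1 a hyperbola.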
|
76 |
Despacho ótimo de geração e controle de potência reativa no sistema elétrico de potência
Yamaguti, Lucas do Carmo, January 2019 (has links)
Orientador: José Roberto Sanches Mantovani / Resumo: Neste trabalho são propostos modelos matemáticos determinístico e estocástico de programação cônica de segunda ordem em coordenadas retangulares para o problema de fluxo de potência ótimo de geração e controle de potência reativa nos sistemas elétricos de potência, considerando a minimização dos custos de geração de energia, das perdas ativas da rede e da emissão de poluentes no meio ambiente. Os modelos contemplam as principais características físicas e econômicas do problema estudado, assim como os limites operacionais do sistema elétrico. Os modelos são programados em linguagem AMPL e suas soluções são obtidas através do solver comercial CPLEX. Os sistemas testes IEEE30, IEEE118 e ACTIVSg200 são utilizados nas simulações computacionais dos modelos propostos. Os resultados obtidos pelo modelo determinístico desenvolvido são validados através de comparações com os resultados fornecidos pelo software MATPOWER, onde ambos consideram apenas a existência de gerações termoelétricas. No modelo estocástico utiliza-se a técnica de geração de cenários, considera-se um período de um ano (8760 horas) e geradores que utilizam fontes de geração renováveis e não renováveis. / Abstract: In this work, deterministic and stochastic mathematical models of second-order conic programming in rectangular coordinates are proposed for the optimal power flow problem of reactive power generation and control in electric power systems, considering the minimization of energy generation costs, network active losses and the emission of pollutants into the environment. The models contemplate the main physical and economic characteristics of the studied problem, as well as the operational limits of the electric system. The models are programmed in the AMPL language and their solutions are obtained through the commercial solver CPLEX. The IEEE30, IEEE118 and ACTIVSg200 test systems are used in the computer simulations of the proposed models.
The results obtained by the deterministic model are validated through comparisons with the results provided by the software MATPOWER, where both consider only the existence of thermoelectric generation. The stochastic model uses the scenario generation technique, considers a period of one year (8760 hours), and includes generators using renewable and non-renewable generation sources. / Mestre
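In rectangular voltage coordinates V_i = e_i + j f_i, one standard route to a second-order cone model (a plausible reading of the formulation above, not necessarily the thesis's exact model) introduces the products c_ii = e_i² + f_i², c_ij = e_i e_j + f_i f_j and s_ij = e_i f_j - e_j f_i, which satisfy c_ij² + s_ij² = c_ii c_jj exactly; the SOC relaxation keeps only the convex inequality c_ij² + s_ij² ≤ c_ii c_jj. The check below verifies the identity numerically on random voltages.

```python
import random

# Verify the Lagrange-type identity behind SOC relaxations of power flow in
# rectangular coordinates: c_ij^2 + s_ij^2 == c_ii * c_jj for any voltages.
rng = random.Random(1)
residuals = []
for _ in range(100):
    ei, fi, ej, fj = (rng.uniform(-1.1, 1.1) for _ in range(4))
    c_ii = ei * ei + fi * fi          # squared voltage magnitude at bus i
    c_jj = ej * ej + fj * fj          # squared voltage magnitude at bus j
    c_ij = ei * ej + fi * fj          # "cosine-like" cross product
    s_ij = ei * fj - ej * fi          # "sine-like" cross product
    residuals.append(abs(c_ij ** 2 + s_ij ** 2 - c_ii * c_jj))
max_residual = max(residuals)
```

Because the equality always holds, relaxing it to the cone-representable inequality loses nothing at any point that is feasible for the original nonconvex model; the relaxation question is whether the solver's optimum satisfies it with equality.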
|
77 |
Optimal Reinsurance Designs: from an Insurer’s Perspective
Weng, Chengguo, 09 1900 (has links)
The research on optimal reinsurance design dates back to the 1960s. For nearly half a century, the quest for optimal reinsurance designs has remained a fascinating subject, drawing significant interest from both academics and practitioners. Its fascination lies in its potential as an effective risk management tool for insurers. There are many ways of formulating the optimal design of reinsurance, depending on the chosen objective and constraints. In this thesis, we address the problem of optimal reinsurance design from an insurer’s perspective. For an insurer, an appropriate use of reinsurance helps to reduce adverse risk exposure and improve the overall viability of the underlying business. On the other hand, reinsurance incurs an additional cost to the insurer in the form of the reinsurance premium. This implies a classical risk-and-reward tradeoff faced by the insurer.
The primary objective of the thesis is to develop theoretically sound and yet practical solutions in the quest for optimal reinsurance designs. In order to achieve this objective, the thesis is divided into two parts. In the first part, a number of reinsurance models are developed and their optimal reinsurance treaties are derived explicitly. This part focuses on risk-measure-minimization reinsurance models and discusses the optimal reinsurance treaties by exploiting two of the most common risk measures, the Value-at-Risk (VaR) and the Conditional Tail Expectation (CTE). Some additional important economic factors, such as the reinsurance premium budget and the insurer’s profitability, are also considered. The second part proposes an innovative method of formulating reinsurance models, which we refer to as the empirical approach since it exploits the insurer’s empirical loss data explicitly. The empirical approach has the advantage that it is practical and intuitively appealing. It is motivated by the difficulty that reinsurance models are often infinite-dimensional optimization problems, so that explicit solutions are achievable only in some special cases. The empirical approach effectively reformulates the optimal reinsurance problem into a finite-dimensional optimization problem. Furthermore, we demonstrate that second-order conic programming can be used to obtain the optimal solutions for a wide range of reinsurance models formulated by the empirical approach.
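The two risk measures named above admit simple empirical estimators, which is what makes an empirical, data-driven formulation finite-dimensional. The order-statistic convention below is one common choice among several and is only a sketch:

```python
# Hedged sketch: empirical Value-at-Risk (VaR) and Conditional Tail
# Expectation (CTE) of a loss sample. The estimators below use one common
# order-statistic convention; others differ slightly at the tail boundary.
def empirical_var(losses, alpha):
    xs = sorted(losses)
    idx = int(alpha * len(xs))          # index of the alpha-level quantile
    return xs[min(idx, len(xs) - 1)]

def empirical_cte(losses, alpha):
    var = empirical_var(losses, alpha)
    tail = [x for x in losses if x >= var]
    return sum(tail) / len(tail)        # average loss in the tail beyond VaR

losses = list(range(1, 101))            # toy loss sample
var_95 = empirical_var(losses, 0.95)
cte_95 = empirical_cte(losses, 0.95)
```

By construction the CTE is never below the VaR, since it averages only the losses in the tail beyond it.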
|
79 |
A New Contribution To Nonlinear Robust Regression And Classification With Mars And Its Applications To Data Mining For Quality Control In Manufacturing
Yerlikaya, Fatma, 01 September 2008 (has links)
Multivariate adaptive regression splines (MARS) denotes a modern methodology from statistical learning which is very important in both classification and regression, with an increasing number of applications in many areas of science, economy and technology. MARS is very useful for high-dimensional problems and shows great promise for fitting nonlinear multivariate functions. The MARS technique does not impose any particular class of relationship between the predictor variables and the outcome variable of interest. In other words, a special advantage of MARS lies in its ability to estimate the contributions of the basis functions so that both the additive and the interaction effects of the predictors are allowed to determine the response variable.
The function fitted by MARS is continuous, whereas the one fitted by classical classification methods (CART) is not. Herewith, MARS becomes an alternative to CART. The MARS algorithm for estimating the model function consists of two complementary algorithms: the forward and the backward stepwise algorithm. In the first step, the model is built by adding basis functions until a maximum level of complexity is reached. In the second step, the backward stepwise algorithm removes the least significant basis functions from the model.
In this study, we propose not to use the backward stepwise algorithm. Instead, we construct a penalized residual sum of squares (PRSS) for MARS as a Tikhonov regularization problem, which is also known as ridge regression. We treat this problem using continuous optimization techniques, which we consider to be an important complementary technology and alternative to the concept of the backward stepwise algorithm. In particular, we apply the elegant framework of conic quadratic programming, an area of convex optimization that is very well structured, thereby resembling linear programming and, hence, permitting the use of interior point methods. The boundaries of this optimization problem are determined by a multiobjective optimization approach, which provides us with many alternative solutions.
Based on these theoretical and algorithmic studies, this MSc thesis also contains applications to the data investigated in a TÜBİTAK project on quality control. In these applications, MARS and our new method are compared.
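The PRSS-as-Tikhonov idea above reduces, in its simplest linear form, to ridge regression, whose closed-form solution makes the regularization explicit. The data below are synthetic, and the sketch omits both the MARS basis functions and the conic quadratic reformulation:

```python
import numpy as np

# Hedged sketch of Tikhonov regularization (ridge regression):
# minimize ||y - X b||^2 + lam * ||b||^2, with closed-form solution
# b = (X^T X + lam I)^{-1} X^T y. Synthetic data, illustrative only.
def ridge(X, y, lam):
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))             # synthetic predictors
b_true = np.array([1.0, -2.0, 0.5])      # assumed ground-truth coefficients
y = X @ b_true + 0.01 * rng.normal(size=50)
b_hat = ridge(X, y, lam=0.1)             # mild penalty recovers b_true closely
```

In the thesis, the analogous penalized problem is instead cast as a conic quadratic program and handed to an interior point method, which scales to the nonlinear MARS setting.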
|
80 |
AFM Untersuchungen an smektischen Flüssigkristallen / Fokalkonische Domänen in smektischen Filmen / AFM Studies of Smectic Liquid Crystals / Focal Conic Domains in Smectic Films
Guo, Wei, 07 July 2009 (has links)
No description available.
|