  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
221

Implementation of Tabular Verification and Refinement

Zhou, Ning 02 1900 (has links)
<p> It has been argued for some time that tabular representations of formal specifications can help in writing them, in understanding them, and in checking them. Recently it has been suggested that tabular representations also help in breaking down large verification and refinement conditions into a number of smaller ones.</p> <p> The article [32] developed the theory, but did not provide the real proof in the form of an implementation. This project is about formalizing tables in a theorem prover, Simplify, defining the theorems of [32] in terms of functions written in the OCaml programming language, and conducting case studies in verifying and refining realistic problems.</p> <p> A parser is designed to ease our job of inputting expressions. Pretty-printing is also provided: all predicates and tables of the examples in our thesis are automatically generated.</p> <p> Our first example is a control system, a luxury sedan car seat. This example gives an overall impression of how to prove correctness from a tabular specification. The second example specifies a visitor information system. The design features of this example involve modeling properties of and operations on sets, relations and functions by building self-defined axioms. The third example illustrates another control system, an elevator. Theorems of algorithmic refinement, stepwise data refinement, and the combination of algorithmic abstraction and data abstraction are applied to the different operations as appropriate.</p> / Thesis / Master of Science (MSc)
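The kind of "healthiness" obligations that tabular verification breaks a proof into can be illustrated with a small sketch. This is a minimal illustration assuming a simple guarded-row representation of a one-dimensional function table; the `check_table`/`lookup` names and the absolute-value example are illustrative, not taken from the thesis:

```python
# Hypothetical sketch of a one-dimensional function table and the two
# checks tabular verification decomposes a proof into: the row guards
# must be disjoint (no overlap) and complete (cover every input).

def check_table(rows, domain):
    """rows: list of (guard, value_fn); domain: sample inputs to test."""
    for x in domain:
        hits = [fn for guard, fn in rows if guard(x)]
        assert len(hits) <= 1, f"guards overlap at {x}"   # disjointness
        assert len(hits) >= 1, f"no guard covers {x}"     # completeness
    return True

def lookup(rows, x):
    """Evaluate the tabular function at x (assumes the checks passed)."""
    for guard, fn in rows:
        if guard(x):
            return fn(x)

# A tiny absolute-value table: two rows partitioned on the sign of x.
abs_table = [
    (lambda x: x >= 0, lambda x: x),
    (lambda x: x < 0,  lambda x: -x),
]

check_table(abs_table, range(-5, 6))
print(lookup(abs_table, -3))  # 3
```

Each row contributes one small verification condition instead of one large monolithic one, which is the decomposition benefit the abstract describes.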
222

Data Structure and Error Estimation for an Adaptive <i>p</i>-Version Finite Element Method in 2-D and 3-D Solids

Promwungkwa, Anucha 13 May 1998 (has links)
Automation of finite element analysis based on a fully adaptive <i>p</i>-refinement procedure can reduce the magnitude of discretization error to the desired accuracy with minimum computational cost and computer resources. This study aims to 1) develop an efficient <i>p</i>-refinement procedure with a non-uniform <i>p</i> analysis capability for solving 2-D and 3-D elastostatic mechanics problems, and 2) introduce a stress error estimate. An element-by-element algorithm that decides the appropriate order for each element, where element orders can range from 1 to 8, is described. Global and element data structures that manage the complex data generated during the refinement process are introduced. These data structures are designed to match the concept of object-oriented programming, where data and functions are managed and organized simultaneously. The stress error indicator introduced is found to be more reliable and to converge faster than the error indicator measured in an energy norm, called the residual method. The use of the stress error indicator results in approximately 20% fewer degrees of freedom than the residual method. Agreement between the calculated stress error values and the stress error indicator values confirms to the analyst that the final stresses have converged. The error order of the stress error estimate is postulated to be one order higher than the error order of the error estimate using the residual method. The mapping of a curved boundary element in the working coordinate system from a square-shape element in the natural coordinate system results in a significant improvement in the accuracy of stress results. Numerical examples demonstrate that refinement using non-uniform <i>p</i> analysis is superior to uniform <i>p</i> analysis in the convergence rates of output stresses or related terms.
Non-uniform <i>p</i> analysis uses approximately 50% to 80% less computational time than uniform <i>p</i> analysis in solving the selected stress concentration and stress intensity problems. More importantly, the non-uniform <i>p</i> refinement procedure scales the number of equations down by 1/2 to 3/4. Therefore, a small scale computer can be used to solve equation systems generated using high order <i>p</i>-elements. In the calculation of the stress intensity factor of a semi-elliptical surface crack in a finite-thickness plate, non-uniform <i>p</i> analysis used fewer degrees of freedom than a conventional <i>h</i>-type element analysis found in the literature. / Ph. D.
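The element-by-element order-selection idea described above can be sketched schematically. This is an illustrative toy only: the error model below (an indicator that roughly halves with each order increase) stands in for the thesis's stress error estimator, and all names are assumptions:

```python
# Toy sketch of non-uniform p-refinement: each element's polynomial
# order (1..8) is raised until its error indicator drops below the
# target tolerance, so high orders appear only where the error demands.

MAX_ORDER = 8

def refine(elements, tol):
    """elements: dict name -> initial error indicator at order 1.
    Returns dict name -> chosen order, using a toy convergence model
    in which the indicator halves with each order increase."""
    orders = {}
    for name, err in elements.items():
        p = 1
        while err > tol and p < MAX_ORDER:
            p += 1
            err /= 2.0        # toy convergence model, not the thesis's
        orders[name] = p
    return orders

orders = refine({"e1": 0.5, "e2": 0.04, "e3": 0.003}, tol=0.01)
print(orders)  # non-uniform: higher p only where the error demands it
```

The resulting order distribution is non-uniform, which is what yields the equation-count and run-time savings reported above.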
223

Understanding and Improving Distal Pointing Interaction

Kopper, Regis Augusto Poli 04 August 2011 (has links)
Distal pointing is the interaction style defined by directly pointing at targets from a distance. It follows a laser pointer metaphor, and the position of the cursor is determined by the intersection of a vector extending the pointing device with the display surface. Distal pointing as a basic interaction style poses several challenges for the user, mainly because of the lack of precision humans have when using it. The focus of this thesis is to understand and improve distal pointing, making it a viable interaction metaphor for a wide variety of applications. We achieve this by proposing and validating a predictive model of distal pointing that is inspired by Fitts' law but contains some unique features. The difficulty of a distal pointing task is best described by the angular size of the target and the angular distance the cursor needs to traverse to reach the target, both taken from the input device's perspective. The practical impact of this is that the user's position relative to the target should be taken into account. Based on the model we derived, we proposed a set of design guidelines for high-precision distal pointing techniques. The main guideline from the model is that increasing the target size is much more important than reducing the distance to the target. In order to improve distal pointing, we followed the model guidelines and designed interaction techniques that aim at improving the precision of distal pointing tasks. Absolute and Relative Mapping (ARM) distal pointing increases precision by offering the user a toggle which changes the control/display (CD) ratio such that a large movement of the input device is mapped to a small movement of the cursor. Dynamic Control Display Ratio (DyCoDiR) automatically increases distal pointing precision as the user needs it.
DyCoDiR takes into account the user's distance to the interaction area and the speed at which the user moves the input device to dynamically calculate an increased CD ratio, making the action more precise the steadier the user tries to be. We performed an evaluation of ARM and DyCoDiR, comparing them to basic distal pointing in a realistic context. In this experiment, we also provided variations of the techniques which increased the visual perception of targets by zooming into the area around the cursor when precision was needed. Results from the study show that ARM and DyCoDiR are significantly faster and more accurate than basic distal pointing for tasks that require very high precision. We analyzed user navigation strategies and found that the high-precision techniques allow users to remain stationary while performing interactions. However, we also found that individual differences have a strong impact on the decision to walk or not, and can at times matter more than the technique's affordances. We provided a validation for the distal pointing model through the analysis of the expected difficulty of distal pointing tasks in light of each technique tested. We propose selection by progressive refinement, a new design concept for distal pointing selection techniques, whose goal is to achieve near-perfect selection accuracy in very cluttered environments. The idea of selection by progressive refinement is to gradually eliminate possible targets from the set of selectable objects until only one object is available for selection. We implemented SQUAD, a selection by progressive refinement distal pointing technique, and performed a controlled experiment comparing it to basic distal pointing. We found that there is a clear tradeoff between immediate selections, which require high precision, and selections by progressive refinement, which always require only low precision.
We validated the model by fitting the distal pointing data and proposed a new model, which has a linear growth in time, for SQUAD selection. / Ph. D.
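The angular, Fitts-like difficulty measure described above can be sketched numerically. This is a hedged illustration: the Shannon-style log form and the coefficients `a`, `b` below are generic assumptions, not the thesis's fitted model:

```python
import math

# Sketch of an angular index of difficulty for distal pointing:
# difficulty grows with the angular distance the cursor must travel
# and shrinks with the target's angular size, both computed from the
# user's position. The log form and a, b values are assumptions.

def angular_size(target_width, distance_to_display):
    """Visual angle (radians) subtended by a target of linear width
    target_width seen from distance_to_display."""
    return 2 * math.atan(target_width / (2 * distance_to_display))

def index_of_difficulty(ang_dist, ang_size):
    """Fitts-like ID using angular quantities (Shannon formulation)."""
    return math.log2(ang_dist / ang_size + 1)

def predicted_time(ang_dist, ang_size, a=0.2, b=0.15):
    """Linear law in the angular ID; a and b are made-up coefficients."""
    return a + b * index_of_difficulty(ang_dist, ang_size)

# Moving closer to the display enlarges the target's angular size and
# lowers the predicted difficulty -- the user's relative position
# matters, which is the model's key departure from plain Fitts' law.
near = index_of_difficulty(0.5, angular_size(0.02, 1.0))
far  = index_of_difficulty(0.5, angular_size(0.02, 3.0))
print(near < far)  # True
```

The same computation also motivates the design guideline above: enlarging the target's angular size reduces the ID faster than shortening the angular distance does.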
224

Abstraction Guided Semi-formal Verification

Parikh, Ankur 28 June 2007 (has links)
Abstraction-guided simulation is a promising semi-formal framework for design validation in which an abstract model of the design is used to guide a logic simulator towards a target property. However, key issues still need to be addressed before this framework can truly deliver on its promise. Concretizing, or finding a real trace from an abstract trace, remains a hard problem. Abstract traces are often spurious, meaning no corresponding real trace exists. This is a direct consequence of the abstraction being an over-approximation of the real design. Further, the way in which the abstract model is constructed is an open-ended problem which has a great impact on the performance of the simulator. In this work, we propose novel approaches to address these issues. First, we present a genetic algorithm to select sets of state variables directly from the gate-level net-list of the design which are highly correlated with the target property. The sets of selected variables are used to build the Partition Navigation Tracks (PNTs). PNTs capture the behavior of expanded portions of the state space as they relate to the target property. Moreover, the computation and storage costs of the PNTs are small, making them scale well to large designs. Our experiments show that we are able to reach many more hard-to-reach states using our proposed techniques, compared to state-of-the-art methods. Next, we propose a novel abstraction strengthening technique, where the abstract design is constrained to make it more closely resemble the concrete design. Abstraction strengthening greatly reduces the need to refine the abstract model for hard-to-reach properties. To achieve this, we efficiently identify sequentially unreachable partial states in the concrete design via intelligent partitioning, resolution and cube enlargement. Then, these partial states are added as constraints in the abstract model.
Our experiments show that the cost to compute these constraints is low and the abstract traces obtained from the strengthened abstract model are far easier to concretize. / Master of Science
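The variable-selection step described above can be illustrated with a toy genetic algorithm. Everything here is a stand-in: the fitness function (overlap with a hidden "relevant" set) replaces the correlation measure the thesis computes from the net-list, and all names and sizes are invented for the demo:

```python
import random

# Toy GA sketch: evolve small sets of state-variable indices whose
# fitness approximates "correlation with the target property".
# RELEVANT is a hidden ground truth used only to score the demo.

random.seed(1)
NUM_VARS = 20
RELEVANT = {2, 5, 7, 11}
SET_SIZE = 4

def fitness(individual):
    return len(set(individual) & RELEVANT)

def mutate(ind):
    """Replace one slot with a random variable; keep sets of SET_SIZE."""
    out = list(ind)
    out[random.randrange(SET_SIZE)] = random.randrange(NUM_VARS)
    return tuple(sorted(set(out))) if len(set(out)) == SET_SIZE else ind

def evolve(generations=200, pop_size=30):
    pop = [tuple(random.sample(range(NUM_VARS), SET_SIZE))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the thesis the fittest variable sets then seed the Partition Navigation Tracks; here they are simply printed.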
225

Isogeometric Finite Element Code Development for Analysis of Composite Structures

Kapoor, Hitesh 23 April 2013 (has links)
This research endeavor develops an isogeometric approach for the analysis of composite structures and takes advantage of the higher-order continuity, smoothness and variation-diminishing property of the Nurbs basis for stress analysis of composite and sandwich beams and plates. This research also computes the stress concentration factor in a composite plate with a hole. An isogeometric nonlinear/linear finite element code is developed for static and dynamic analysis of laminated composite plates. Nurbs linear, quadratic, higher-order and k-refined elements are constructed using various refinement procedures and validated with numerical testing. A Nurbs post-processor for in-plane and interlaminar stress calculation in laminated composite and sandwich plates is developed. The Nurbs post-processor is found to be superior to the regular finite element and in good agreement with the literature. Nurbs isogeometric analysis is used for stress analysis of a laminated composite plate with an open hole. The stress concentration factor is computed along the hole edge, and good agreement is obtained with the literature. The Nurbs isogeometric finite element code for free-vibration and linear dynamic analysis of laminated composite plates also obtains good agreement with the literature. Main highlights of the research are the newly developed 9-control-point linear Nurbs element and the k-refined and higher-order Nurbs elements in the isogeometric framework. Nurbs elements remove shear-locking and hourglass problems in thin plates in the context of first-order shear deformation theory without any additional step, and compute better stresses than the Lagrange finite element and higher-order shear deformation theory for comparatively thick plates, i.e. a/h = 4. Nurbs isogeometric analysis also performs well for vibration and dynamic problems and for straight- and curved-edge problems. / Ph. D.
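The Nurbs elements above are built on B-spline basis functions (NURBS adds rational weights on top). A minimal Cox-de Boor recursion shows the smoothness and partition-of-unity properties the abstract exploits; the knot vector and degree below are illustrative, not taken from the thesis:

```python
# Minimal Cox-de Boor recursion for B-spline basis functions, the
# building block of Nurbs elements. The open knot vector and degree
# here are illustrative examples.

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = ((knots[i + p + 1] - u) / denom
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

# Quadratic basis on an open knot vector: functions are smooth inside
# the patch and sum to one at interior points (partition of unity).
knots = [0, 0, 0, 1, 2, 3, 3, 3]
n_basis = len(knots) - 2 - 1      # number of degree-2 basis functions
total = sum(bspline_basis(i, 2, 1.5, knots) for i in range(n_basis))
print(round(total, 10))  # 1.0
```

The higher interelement continuity of these functions (up to C^{p-1} across knots) is what distinguishes the Nurbs elements from Lagrange elements in the comparisons above.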
226

Finite Element Mesh Generation with Deferred Constraint Insertion

CHRYSTIANO BARBOSA DE SOUZA ARAUJO 26 November 2013 (has links)
The finite element method (FEM) is a widely used tool in the analysis and simulation of physical phenomena. The discretization of a continuous domain into a finite element mesh is an essential FEM step. The accuracy of the solution is highly dependent on the quality of the input mesh, which is mainly evaluated through the shape of its elements. Most related work is based on traditional methods such as Advancing Front, Constrained Delaunay Triangulation and Domain Decomposition. Unlike these methods, this work proposes an iterative mesh generation method with deferred constraint insertion, in which an initially regular mesh is locally deformed to satisfy each new domain constraint. To validate the proposed method, each output mesh is evaluated according to quality criteria commonly used in the literature.
227

Modelling and numerical simulation of erosion by the DDFV method

Lakhlili, Jalal 20 November 2015 (has links)
This study focuses on the numerical modelling of the interfacial erosion occurring at a cohesive soil undergoing an incompressible flow process. The model assumes that the erosion velocity is driven by the fluid shear stress at the water/soil interface. The numerical modelling is based on an Eulerian approach: a penalization procedure is used to compute the Navier-Stokes equations around the soil obstacle, with a fictitious domain method, in order to avoid body-fitted unstructured meshes. The water/soil interface's evolution is described by a Level Set function coupled to a threshold erosion law. Because we use adaptive mesh refinement, we develop a Discrete Duality Finite Volume (DDFV) scheme, which allows non-conforming and non-structured meshes.
The penalization method, used to take into account a free velocity in the soil with a non-body-fitted mesh, introduces an inaccurate shear stress at the interface. We propose two approaches to compute the erosion velocity of this free boundary problem accurately. The ability of the model to predict the interfacial erosion of soils is confirmed by several simulations that provide better evaluation and comprehension of erosion phenomena.
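The threshold erosion law coupled to the Level Set description commonly takes a linear-excess form: the interface recedes at a rate proportional to the excess of the fluid shear stress over a critical stress, and not at all below the threshold. A minimal sketch, with generic coefficient names rather than the thesis's notation:

```python
# Sketch of a threshold erosion law: interface velocity is
# k_er * (tau - tau_c) above the critical shear stress tau_c,
# and zero below it. Coefficient names are generic placeholders.

def erosion_velocity(tau, tau_c, k_er):
    """Interface erosion velocity for fluid shear stress tau."""
    return k_er * max(tau - tau_c, 0.0)

# Below the threshold nothing erodes; above it the rate grows linearly.
print(erosion_velocity(0.5, 1.0, 1e-4))  # 0.0
print(erosion_velocity(3.0, 1.0, 1e-4))  # 0.0002
```

In the coupled scheme this velocity advects the Level Set function, which is why an accurate shear stress at the penalized interface matters so much.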
228

A Refinement-Based Methodology for Verifying Abstract Data Type Implementations

Divakaran, Sumesh January 2015 (has links) (PDF)
This thesis is about techniques for proving the functional correctness of Abstract Data Type (ADT) implementations. We provide a framework for proving the functional correctness of imperative language implementations of ADTs, using a theory of refinement. We develop a theory of refinement to reason about both declarative and imperative language implementations of ADTs. Our theory facilitates compositional reasoning about complex implementations that may use several layers of sub-ADTs. Based on our theory of refinement, we propose a methodology for proving the functional correctness of an existing imperative language implementation of an ADT. We propose a mechanizable translation from an abstract model in the Z language to an abstract implementation in VCC's ghost language. Then we present a technique to carry out the refinement checks completely within the VCC tool. We apply our proposed methodology to the scheduling-related functionality of FreeRTOS, a popular open-source real-time operating system. We found major deviations from the intended behavior and carried out a machine-checked proof of the correctness of the fixed code. We also present an efficient way to phrase the refinement conditions in VCC, which considerably improves VCC's performance. We evaluated this technique on a simplified version of FreeRTOS which we constructed for this verification exercise. Using our efficient approach, VCC always terminates and reduces the total time taken by a naive check by over 90% on this case study.
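The core refinement obligation described above can be shown on a toy instance: an implementation refines an abstract ADT model if an abstraction function maps every concrete state to an abstract one such that each operation commutes with it. The queue-as-two-lists example and all names below are illustrative, not from the thesis:

```python
# Toy refinement check: a two-list queue implementation against a
# declarative sequence model, linked by an abstraction function.

class AbstractQueue:
    """Declarative model: a queue is just a sequence."""
    def __init__(self, items=()): self.items = list(items)
    def enqueue(self, x): return AbstractQueue(self.items + [x])
    def dequeue(self): return self.items[0], AbstractQueue(self.items[1:])

class TwoListQueue:
    """Concrete implementation: amortized O(1) front/back lists."""
    def __init__(self, front=(), back=()):
        self.front, self.back = list(front), list(back)
    def enqueue(self, x):
        return TwoListQueue(self.front, [x] + self.back)
    def dequeue(self):
        f = self.front if self.front else list(reversed(self.back))
        b = self.back if self.front else []
        return f[0], TwoListQueue(f[1:], b)

def abstraction(c):
    """Abstraction function: concrete state -> abstract sequence."""
    return c.front + list(reversed(c.back))

# Refinement check on one trace: abstracting after each concrete
# operation matches running the same operation on the abstract model.
conc = TwoListQueue().enqueue(1).enqueue(2)
abst = AbstractQueue().enqueue(1).enqueue(2)
assert abstraction(conc) == abst.items
cv, conc2 = conc.dequeue()
av, abst2 = abst.dequeue()
assert cv == av and abstraction(conc2) == abst2.items
print("refinement check passed on this trace")
```

A tool like VCC discharges this obligation for all states and operations at once, rather than per trace as this sketch does.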
229

Sequential and parametric refinement by the Rietveld method: application in the characterization of drugs and excipients

Tita, Diego Luiz [UNESP] 20 April 2018 (has links)
Crystal structure refinement by the Rietveld method consists of fitting a structural model to a diffraction measurement. It is an efficient tool for the identification and quantification of polymorphic structures present in drugs and excipients. An advanced use of the method is sequential Rietveld refinement (RSR), which, from a set of diffraction patterns of the same sample, studies the behavior of the material as a function of an external variable (e.g. temperature, pressure, time or chemical environment). In the present work, with the objective of studying the polymorphic transitions and the expansion/contraction of the unit cell parameters (PCU) of the pharmaceutical ingredients spironolactone (SPR), lactose monohydrate (LACMH) and anhydrous lactose (LACA), RSR was applied to measurements obtained at different temperatures. RSR allowed the PCU to be refined up to temperatures close to the melting points of the materials.
After the RSR, from the mathematical analysis of the obtained PCU, functions were proposed that govern the trend of these parameters under temperature variation. With these functions the PCU were modeled in another refinement modality, parametric Rietveld refinement (RPR), in which the PCU follow the models imposed by the equations obtained via RSR. RPR proved more efficient in the analyses, avoiding loss of phases and fitting problems and thus yielding more precise information about the system. Although RSR and RPR are sophisticated methods for the characterization of materials, preparing the refinement programming routines is not trivial, so a spreadsheet (the SP-DLT spreadsheet) was developed in this work to facilitate the use of the methods. The spreadsheet proved efficient and quick for programming all the refinement routines presented in this work. The studies of the pharmaceutical ingredients showed that in the SPR sample form I converts to form II as temperature increases. Alpha-lactose monohydrate undergoes dehydration and converts to alpha-lactose in the LACMH sample and to beta-lactose in the LACA sample. With further temperature increase, beta-lactose undergoes no polymorphic phase change. Thus, the medium can influence the polymorphic transition route.
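The sequential-then-parametric workflow above can be sketched schematically: first each temperature's pattern is refined independently (faked below as per-temperature lattice-parameter values), then a simple thermal-trend model is fitted and used to constrain the parameter in a second, parametric pass. The values, the linear model and the function names are all synthetic assumptions; the real work refines against diffraction data:

```python
# Schematic of sequential -> parametric refinement of one unit cell
# parameter a(T). Stage 1: per-temperature ("sequential") values.
# Stage 2: fit a trend function and reuse it as a parametric model.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = m*x + c (no numpy needed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

temps = [25, 100, 175, 250]                  # deg C (synthetic)
seq_a = [5.4001, 5.4032, 5.4058, 5.4091]     # per-T refined values

m, c = fit_linear(temps, seq_a)

def a_model(T):
    """Parametric constraint: cell parameter generated from the model
    instead of floating freely at each temperature."""
    return m * T + c

print(round(a_model(175), 4))
```

Constraining the parameter this way is what lets the parametric pass avoid the phase losses and fit instabilities the abstract mentions.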
230

Fast Solvers for Integral-Equation Based Electromagnetic Simulations

Das, Arkaprovo January 2016 (has links) (PDF)
With the rapid increase in available compute power and memory, and bolstered by the advent of efficient formulations and algorithms, the role of 3D full-wave computational methods for accurate modelling of complex electromagnetic (EM) structures has gained in significance. The range of problems includes Radar Cross Section (RCS) computation, analysis and design of antennas and passive microwave circuits, bio-medical non-invasive detection and therapeutics, energy harvesting etc. Further, with the rapid advances in technology trends like System-in-Package (SiP) and System-on-Chip (SoC), the fidelity of chip-to-chip communication and package-board electrical performance parameters like signal integrity (SI), power integrity (PI), electromagnetic interference (EMI) are becoming increasingly critical. Rising pin-counts to satisfy functionality requirements and decreasing layer-counts to maintain cost-effectiveness necessitates 3D full wave electromagnetic solution for accurate system modelling. Method of Moments (MoM) is one such widely used computational technique to solve a 3D electromagnetic problem with full-wave accuracy. Due to lesser number of mesh elements or discretization on the geometry, MoM has an advantage of a smaller matrix size. However, due to Green's Function interactions, the MoM matrix is dense and its solution presents a time and memory challenge. The thesis focuses on formulation and development of novel techniques that aid in fast MoM based electromagnetic solutions. With the recent paradigm shift in computer hardware architectures transitioning from single-core microprocessors to multi-core systems, it is of prime importance to parallelize the serial electromagnetic formulations in order to leverage maximum computational benefits. Therefore, the thesis explores the possibilities to expedite an electromagnetic simulation by scalable parallelization of near-linear complexity algorithms like Fast Multipole Method (FMM) on a multi-core platform. 
Secondly, with the best of parallelization strategies in place and near-linear complexity algorithms in use, the solution time of a complex EM problem can still be exceedingly large due to over-meshing of the geometry to achieve a desired level of accuracy. Hence, the thesis focuses on judicious placement of mesh elements on the geometry to capture the physics of the problem without compromising on accuracy- a technique called Adaptive Mesh Refinement. This facilitates a reduction in the number of solution variables or degrees of freedom in the system and hence the solution time. For multi-scale structures as encountered in chip-package-board systems, the MoM formulation breaks down for parts of the geometry having dimensions much smaller as compared to the operating wavelength. This phenomenon is popularly known as low-frequency breakdown or low-frequency instability. It results in an ill-conditioned MoM system matrix, and hence higher iteration count to converge when solved using an iterative solver framework. This consequently increases the solution time of simulation. The thesis thus proposes novel formulations to improve the spectral properties of the system matrix for real-world complex conductor and dielectric structures and hence form well-conditioned systems. This reduces the iteration count considerably for convergence and thus results in faster solution. Finally, minor changes in the geometrical design layouts can adversely affect the time-to-market of a commodity or a product. This is because the intermediate design variants, in spite of having similarities between them are treated as separate entities and therefore have to follow the conventional model-mesh-solve workflow for their analysis. This is a missed opportunity especially for design variant problems involving near-identical characteristics when the information from the previous design variant could have been used to expedite the simulation of the present design iteration. 
A similar problem occurs in the broadband simulation of an electromagnetic structure. The solution at a particular frequency can be expedited manifold if the matrix information from a frequency in its neighbourhood is used, provided the electrical characteristics remain nearly similar. The thesis introduces methods to re-use the subspace or Eigen-space information of a matrix from a previous design or frequency to solve the next incremental problem faster.
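The re-use idea in the last paragraphs can be shown on a toy system: in a frequency sweep, the solution at the previous frequency seeds the iterative solver at the next one, cutting the iteration count when the system changes only slightly. The tiny Jacobi solver and the 2x2 system below are stand-ins for a MoM matrix and a Krylov solver:

```python
# Sketch of solution re-use across a frequency sweep: warm-starting
# an iterative solver with the previous frequency's solution.

def jacobi_solve(A, b, x0, tol=1e-10, max_iter=10000):
    """Plain Jacobi iteration; returns (solution, iterations used)."""
    n = len(b)
    x = list(x0)
    for it in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, it + 1
        x = x_new
    return x, max_iter

def system(freq):
    """Toy diagonally dominant system drifting slowly with frequency."""
    return [[4 + freq, 1], [1, 3 + freq]], [1.0, 2.0]

A1, b1 = system(1.00)
x_prev, _ = jacobi_solve(A1, b1, [0.0, 0.0])

A2, b2 = system(1.01)                        # neighbouring frequency
_, it_cold = jacobi_solve(A2, b2, [0.0, 0.0])
_, it_warm = jacobi_solve(A2, b2, x_prev)    # warm start from previous

print(it_warm < it_cold)  # True: re-use needs fewer iterations
```

The thesis goes further, re-using subspace/eigen-space information of the matrix itself, but the payoff mechanism is the same: the incremental problem starts closer to its solution.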
