241 |
Simulação numérica de uma função indicadora de fluidos tridimensional empregando refinamento adaptativo de malhas / Numerical simulation of a 3D fluid indicator function using adaptive mesh refinement
Daniel Mendes Azeredo, 10 December 2007 (has links)
No presente trabalho, utilizou-se o Método da Fronteira Imersa, o qual utiliza dois tipos de malhas computacionais: euleriana (utilizada para o fluido) e lagrangiana (utilizada para representar a interface de separação de dois fluidos). O software livre GMSH foi utilizado para representar um sólido por meio da sua superfície externa e também para gerar uma malha triangular, bidimensional e não estruturada para discretizar essa superfície. Essa superfície foi utilizada como condição inicial para a malha lagrangiana (fronteira imersa). Os dados da malha lagrangiana são armazenados em uma estrutura de dados chamada Halfedge, a qual é largamente utilizada em Computação Gráfica para armazenar superfícies fechadas e orientáveis. Uma vez que a malha lagrangiana esteja armazenada nesta estrutura de dados, passa-se a estudar uma hipotética interação dinâmica entre a fronteira imersa e o escoamento do fluido. Esta interação é estudada apenas em um sentido: considera-se apenas a condição de não deslizamento, isto é, a fronteira imersa acompanhará passivamente um campo de velocidades pré-estabelecido (imposto), sem exercer qualquer força ou influência sobre ele. Foi utilizado um campo de distância local com sinal (função indicadora de fluidos) para identificar o interior e o exterior da superfície que representa a interface entre os fluidos. Este campo de distância é atualizado a cada passo no tempo utilizando ideias de Geometria Computacional, o que tornou ótimo o custo computacional para calcular esse campo, independentemente da complexidade geométrica da interface. Esta metodologia mostrou-se robusta e produz uma definição nítida das distintas fases dos fluidos em todos os passos no tempo.
Para acompanhar e visualizar de forma mais precisa o comportamento dos fluidos na vizinhança da superfície que representa a interface de separação dos fluidos, foi utilizado um algoritmo chamado de Refinamento Adaptativo de Malhas para fazer um refinamento dinâmico da malha euleriana na vizinhança da malha lagrangiana. / The scientific motivation of the present work is the mathematical modeling and the computational simulation of multiphase flows. Specifically, the equations of a two-phase flow are written by combining the Immersed Boundary Method with a suitable fluid indicator function. It is assumed that the fluid equations are discretized on an Eulerian mesh completely covering the flow domain and that the interface between the fluid phases is discretized by a non-structured Lagrangian mesh formed by triangles. In this context, employing tools commonly found in Computational Geometry, the computation of the fluid indicator function is efficiently performed on a block-structured Eulerian mesh bearing dynamical refinement patches. Formed by a set of triangles, the Lagrangian mesh, which is initially generated employing the free software GMSH, is stored in a Halfedge data structure, a data structure which is widely used in Computer Graphics to represent bounded, orientable closed surfaces. Once the Lagrangian mesh has been generated, one deals with the hypothetical situation of a one-way dynamical interaction between the immersed boundary and the fluid flow; that is, considering the no-slip condition, only the action of the flow on the interface is studied. No forces arising on the interface affect the flow, the interface being passively advected with the flow under a prescribed, imposed velocity field. In particular, the Navier-Stokes equations are not solved. The fluid indicator function is given by a signed distance function in a vicinity of the immersed boundary.
It is employed to identify interior/exterior points with respect to the bounded, closed region which is assumed to contain one of the fluid phases in its interior. The signed distance is updated at every time step employing Computational Geometry methods with optimal cost. Several examples in three dimensions, showing the efficiency and efficacy of the computation of the fluid indicator function, are given which employ the dynamical adaptive properties of the Eulerian mesh for a moving interface.
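The signed-distance indicator described in this abstract can be sketched in a few lines. The sphere below stands in for the triangulated Lagrangian interface (computing the exact signed distance to a triangle mesh stored in a Halfedge structure is the harder, Computational Geometry part of the thesis), and the smoothed Heaviside form is a common choice in immersed-boundary codes, not necessarily the one used in this work; all names are illustrative.

```python
import math

def signed_distance_sphere(p, center, radius):
    """Signed distance to a spherical interface: negative inside, positive outside."""
    return math.dist(p, center) - radius

def fluid_indicator(p, center, radius, eps):
    """Smoothed fluid indicator: 0.0 inside one phase, 1.0 in the other,
    with a smooth transition of half-width eps around the interface."""
    phi = signed_distance_sphere(p, center, radius)
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    # smoothed Heaviside function commonly used with immersed boundaries
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)
```

The indicator is exactly 0.5 on the interface itself, which is what makes it useful for locating the phase boundary on the Eulerian grid.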
|
242 |
Supporting Multi-Criteria Decision Support Queries over Disparate Data Sources
Raghavan, Venkatesh, 17 April 2012 (has links)
In the era of the "big data" revolution, marked by an exponential growth of information, extracting value from data enables analysts and businesses to address challenging problems such as drug discovery, fraud detection, and earthquake prediction. Multi-Criteria Decision Support (MCDS) queries are at the core of big-data analytics, giving rise to several classes of MCDS queries such as OLAP, Top-K, Pareto-optimal, and nearest neighbor queries. The intuitive nature of specifying multi-dimensional preferences has made Pareto-optimal queries, also known as skyline queries, popular. Existing skyline algorithms, however, do not address several crucial issues such as performing skyline evaluation over disparate sources, progressively generating skyline results, or robustly handling workloads with multiple skyline-over-join queries. In this dissertation we thoroughly investigate topics in the area of skyline-aware query evaluation. We first propose a novel execution framework called SKIN that treats skyline-over-join queries as first-class citizens during query processing. This is in contrast to existing techniques that treat skylines as an "add-on," loosely integrated with query processing by being placed on top of the query plan. SKIN is effective in exploiting the skyline characteristics of the tuples within individual data sources as well as across disparate sources. This enables SKIN to significantly reduce two primary costs, namely the cost of generating the join results and the cost of skyline comparisons to compute the final results. Second, we address the crucial business need to report results early, as soon as they are generated, so that users can make competitive decisions in near real-time. On top of SKIN, we built a progressive query evaluation framework, ProgXe, to make the execution of queries involving skyline over joins non-blocking, i.e., to progressively generate results early and often.
By exploiting SKIN's principle of processing queries at multiple levels of abstraction, ProgXe is able to: (1) extract output dependencies by analyzing both the input and output spaces, and (2) exploit this knowledge of abstract-level relationships to guarantee the correctness of early output. Third, real-world applications handle query workloads with diverse Quality of Service (QoS) requirements, also referred to as contracts. Time-sensitive queries, such as fraud detection, require results to be output progressively with minimal delay, while ad-hoc and reporting queries can tolerate delay. Building on the principles of ProgXe, we propose the Contract-Aware Query Execution (CAQE) framework to support the open problem of contract-driven multi-query processing. CAQE employs an adaptive execution strategy to continuously monitor the run-time satisfaction of queries and aggressively take corrective steps whenever the contracts are not being met. Lastly, to demonstrate the portability of the core principle of this dissertation, namely reasoning and query processing at different levels of data abstraction, we apply it to an orthogonal research question: auto-generating recommendation queries that help users explore a complex database system. User queries are often too strict or too broad, requiring a frustrating trial-and-error refinement process to meet the desired result cardinality while preserving the original query semantics. Based on the principles of SKIN, we propose CAPRI to automatically generate refined queries that: (1) attain the desired cardinality and (2) minimize changes to the original query intentions. In our comprehensive experimental study of each part of this dissertation, we demonstrate the superiority of the proposed strategies over state-of-the-art techniques in both efficiency and resource consumption.
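The Pareto-optimal (skyline) queries at the heart of this dissertation can be illustrated with the classic block-nested-loop skyline algorithm. This is a generic textbook sketch, not SKIN itself, which additionally integrates skyline evaluation with join processing across sources.

```python
def dominates(a, b):
    """True if tuple a is no worse than b in every dimension and strictly
    better in at least one (assuming smaller values are preferred)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    """Block-nested-loop skyline: keep every point not dominated by another."""
    window = []
    for p in points:
        if any(dominates(q, p) for q in window):
            continue                                          # p is dominated
        window = [q for q in window if not dominates(p, q)]   # p prunes the window
        window.append(p)
    return window
```

For example, among hotels described as (price, distance) pairs, the skyline is exactly the set of hotels for which no other hotel is both cheaper and closer.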
|
243 |
Refinamento multinível em redes complexas baseado em similaridade de vizinhança / Multilevel refinement in complex networks based on neighborhood similarity
Valejo, Alan Demetrius Baria, 11 November 2014 (has links)
No contexto de Redes Complexas, particularmente das redes sociais, grupos de objetos densamente conectados entre si, esparsamente conectados a outros grupos, são denominados de comunidades. A detecção dessas comunidades tornou-se um campo de crescente interesse científico e possui inúmeras aplicações práticas. Nesse contexto, surgiram várias pesquisas sobre estratégias multinível para particionar redes com elevada quantidade de vértices e arestas. O objetivo dessas estratégias é diminuir o custo do algoritmo de particionamento aplicando-o sobre uma versão reduzida da rede original. Uma possibilidade dessa estratégia, ainda pouco explorada, é utilizar heurísticas de refinamento local para melhorar a solução final. A maioria das abordagens de refinamento explora propriedades gerais de redes complexas, tais como corte mínimo ou modularidade, porém não explora propriedades inerentes de domínios específicos. Por exemplo, redes sociais são caracterizadas por elevado coeficiente de agrupamento e assortatividade significativa; consequentemente, maximizar tais características pode conduzir a uma boa solução e a uma estrutura de comunidades bem definida. Motivado por essa lacuna, neste trabalho é proposto um novo algoritmo de refinamento, denominado RSim, que explora as características de alto grau de transitividade e assortatividade presentes em algumas redes reais, em particular em redes sociais. Para isso, adotaram-se medidas de similaridade híbridas entre pares de vértices, que utilizam os conceitos de vizinhança e informações de comunidades para interpretar a semelhança entre pares de vértices. Uma análise comparativa e sistemática demonstrou que o RSim supera os algoritmos de refinamento habituais em redes com alto coeficiente de agrupamento e assortatividade. Além disso, avaliou-se o RSim em uma aplicação real. Nesse cenário, o RSim supera todos os métodos avaliados quanto à eficiência e à eficácia, considerando todos os conjuntos de dados selecionados.
/ In the context of complex networks, particularly social networks, groups of densely interconnected objects, sparsely linked to other groups, are called communities. Detection of these communities has become a field of increasing scientific interest and has numerous practical applications. In this context, several studies have emerged on multilevel strategies for partitioning networks with a high number of vertices and edges. The goal of these strategies is to reduce the cost of the partitioning algorithm by applying it to a reduced version of the original network. One possibility for this strategy, still little explored, is to apply local refinement heuristics to improve the final solution. Most refinement approaches explore general properties of complex networks, such as minimum cut or modularity, but do not exploit inherent properties of specific domains. For example, social networks are characterized by a high clustering coefficient and significant assortativity; hence, maximizing such characteristics may lead to a good solution and a well-defined community structure. Motivated by this gap, in this thesis we propose a new refinement algorithm, called RSim, which exploits the characteristics of a high degree of transitivity and assortativity present in some real networks, particularly social networks. For this, we adopted hybrid similarity measures between pairs of vertices, using the concepts of neighborhood and community information to interpret the similarity between pairs of vertices. A systematic and comparative analysis showed that RSim statistically outperforms usual refinement algorithms in networks with a high clustering coefficient and assortativity. In addition, we assessed RSim in a real application. In this scenario, RSim surpasses all evaluated methods in efficiency and effectiveness, considering all the selected data sets.
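The neighborhood-similarity idea behind RSim can be sketched with Jaccard similarity between vertex neighborhoods plus one naive label-refinement pass. The actual RSim uses hybrid measures that also incorporate community information; this simplified, purely structural version is illustrative only, and all names are made up.

```python
def jaccard(adj, u, v):
    """Neighborhood similarity of vertices u and v: |N(u) & N(v)| / |N(u) | N(v)|."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def refine(adj, labels):
    """One naive refinement pass: each vertex adopts the community label of its
    most similar neighbor; vertices with no similar neighbor keep their label."""
    new = dict(labels)
    for u in adj:
        best_label, best_s = labels[u], 0.0
        for v in sorted(adj[u]):          # sorted for deterministic tie-breaking
            s = jaccard(adj, u, v)
            if s > best_s:
                best_label, best_s = labels[v], s
        new[u] = best_label
    return new
```

In a network with high transitivity, vertices in the same community share many neighbors, so Jaccard similarity is high inside communities and low across them, which is exactly what such a refinement pass exploits.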
|
244 |
Resolução numérica de equações de advecção-difusão empregando malhas adaptativas / Numerical solution of advection-diffusion equations using adaptive mesh refinement
Oliveira, Alexandre Garcia de, 07 July 2015 (has links)
Este trabalho apresenta um estudo sobre a solução numérica da equação geral de advecção-difusão usando uma metodologia numérica conservativa. Para a discretização espacial, é usado o Método de Volumes Finitos devido à natureza conservativa da equação em questão. O método é configurado de modo a ter suas variáveis centradas em centro de célula e, para as variáveis centradas nas faces, como a velocidade, um método de interpolação de segunda ordem é utilizado para um ajuste numérico ao centro. Embora a implementação computacional tenha sido feita de forma paramétrica, de maneira a acomodar outros esquemas numéricos, a discretização temporal dá ênfase ao Método de Crank-Nicolson. Tal método numérico, sendo ele implícito, dá origem a um sistema linear de equações que, aqui, é resolvido empregando-se o Método Multigrid-Multinível. A corretude do código implementado é verificada a partir de testes por soluções manufaturadas, de modo a checar se a ordem de convergência prevista em teoria é alcançada pelos métodos numéricos. Um jato laminar é simulado, com o acoplamento entre a equação de Navier-Stokes e a equação geral de advecção-difusão, em um domínio computacional tridimensional. O jato é uma forma de verificar se o algoritmo de geração de malhas adaptativas funciona corretamente. O módulo produzido neste trabalho é baseado no código computacional AMR3D-P desenvolvido pelos grupos de pesquisa do IME-USP e o MFLab/FEMEC-UFU (Laboratório de Dinâmica de Fluidos da Universidade Federal de Uberlândia). A linguagem FORTRAN é utilizada para o desenvolvimento da metodologia numérica e as simulações foram executadas nos computadores do LabMAP (Laboratório de Matemática Aplicada do IME-USP) e do MFLab/FEMEC-UFU. / This work presents a study of the numerical solution of the variable-coefficients advection-diffusion equation, or simply the general advection-diffusion equation, using a conservative numerical methodology.
The Finite Volume Method is chosen for the discretisation of the spatial domain because of the conservative nature of the equation in focus. This method is set up to have the scalar variables in a cell-centered scheme, while vector quantities, such as velocity, are face-centered and need a second-order interpolation to be adjusted to the cell center. The computational code is parametric, in which any implicit temporal discretisation can be chosen, but the emphasis relies on the Crank-Nicolson method, a well-known second-order method. The implicit nature of the aforementioned method gives a linear system of equations which is solved here by the Multilevel-Multigrid method. The correctness of the computational code is checked by the manufactured solution method, used to inspect whether the theoretical order of convergence is attained by the numerical methods. A laminar jet is simulated, coupling the Navier-Stokes equations and the general advection-diffusion equation in a 3D computational domain. The jet is a good way to check the correctness of the adaptive mesh refinement algorithm. The module designed here is based on the previously implemented code AMR3D-P developed by IME-USP and MFLab/FEMEC-UFU (Fluid Dynamics Laboratory, Federal University of Uberlândia). The programming language used is FORTRAN and the simulations were run on LabMAP (Applied Mathematics Laboratory at IME-USP) and MFLab/FEMEC-UFU computers.
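The Crank-Nicolson scheme emphasized above can be sketched for the simplest case: 1D pure diffusion with homogeneous Dirichlet ends. The thesis solves the full 3D advection-diffusion equation with a multilevel multigrid solver and FORTRAN; here a direct tridiagonal (Thomas) solve stands in, and the sketch is in Python for readability.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal; a[0], c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_diffusion(u, nu, dx, dt, steps):
    """Crank-Nicolson for u_t = nu * u_xx with u = 0 at both ends:
    the spatial operator is averaged between the old and new time levels."""
    r = nu * dt / (2.0 * dx * dx)
    n = len(u) - 2                      # interior unknowns
    a, b, c = [-r] * n, [1 + 2 * r] * n, [-r] * n
    for _ in range(steps):
        # explicit half: (I + r*L) applied to the old solution
        d = [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1] for i in range(1, n + 1)]
        u = [0.0] + thomas(a, b, c, d) + [0.0]   # implicit half: solve (I - r*L) u_new = d
    return u
```

Being implicit, each step requires a linear solve, which is exactly where the multigrid machinery of the thesis enters for the 3D problem.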
|
245 |
O uso do estimador residual no refinamento adaptativo de malhas em elementos finitos / The use of the residual estimator in adaptive mesh refinement of finite elements
Claudino, Marco Alexandre, 26 March 2015 (has links)
Na obtenção de aproximações numéricas para Equações Diferenciais Parciais Elípticas utilizando o Método dos Elementos Finitos (MEF), alguns problemas apresentam valores maiores para o erro somente em determinadas regiões do domínio como, por exemplo, regiões onde existam singularidades na solução contínua do problema. Uma possível alternativa para reduzir o erro cometido nessas regiões é aumentar o número de elementos nos trechos onde o erro cometido foi considerado grande. A questão principal é como identificar essas regiões, dado que a solução do problema contínuo é desconhecida. Neste trabalho iremos apresentar a chamada estimativa residual, que fornece um estimador do erro cometido na aproximação utilizando apenas os valores conhecidos dos contornos e a aproximação obtida sobre uma dada partição de elementos. Vamos discutir a relação entre a estimativa residual e o erro cometido na aproximação, além de utilizar as estimativas na construção de um algoritmo adaptativo para as malhas em estudo. Utilizando o software FreeFem++ serão obtidas aproximações para a Equação de Poisson e para o sistema de equações associado à Elasticidade Linear e, por meio do estimador residual, será analisado o erro cometido nas aproximações e a necessidade do refinamento adaptativo das malhas. / In obtaining numerical approximations for solutions to Elliptic Partial Differential Equations using the Finite Element Method (FEM), one sees that some problems have higher values for the error only in certain regions of the domain such as, for example, regions where the solution of the continuous problem is singular. A possible alternative to reduce the error in these regions is to increase the number of elements in the partitions where the error was considered large. The main issue is how to identify these regions, since the solution of the continuous problem is unknown.
In this work we present the so-called residual estimate, which provides an error estimation approach using only the known boundary values and the approximation obtained on a given discretization. We will discuss the relationship between the residual estimate and the error, and how to use the estimate for adaptively refining the mesh. Solutions for the Poisson equation and the Linear Elasticity system of equations, and the residual estimates for the analysis of mesh refinement, will be computed using the FreeFem++ software.
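Once per-element residual estimates are available, the adaptive loop must decide which elements to refine. A standard strategy for this step is Dörfler (bulk) marking, sketched below; the thesis's specific estimator and marking rule may differ, so this is a generic illustration.

```python
def dorfler_mark(estimates, theta=0.5):
    """Dörfler (bulk) marking: select the smallest set of elements whose squared
    error estimates account for at least a fraction theta of the total."""
    order = sorted(range(len(estimates)), key=lambda i: -estimates[i])
    total = sum(e * e for e in estimates)
    marked, acc = [], 0.0
    for i in order:
        if acc >= theta * total:
            break
        marked.append(i)
        acc += estimates[i] ** 2
    return marked
```

With theta near 1 almost everything is refined (approaching uniform refinement); with small theta only the elements carrying the bulk of the estimated error, e.g. those near a singularity, are subdivided.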
|
246 |
Automated and interactive approaches for optimal surface finding based segmentation of medical image data
Sun, Shanhui, 01 December 2012 (has links)
Optimal surface finding (OSF), a graph-based optimization approach to image segmentation, represents a powerful framework for medical image segmentation and analysis. In many applications, a pre-segmentation is required to enable OSF graph construction. Also, the cost function design is critical for the success of OSF. In this thesis, two issues in the context of OSF segmentation are addressed. First, a robust model-based segmentation method suitable for OSF initialization is introduced. Second, an OSF-based segmentation refinement approach is presented.
For segmenting complex anatomical structures (e.g., lungs), a rough initial segmentation is required to apply an OSF-based approach. For this purpose, a novel robust active shape model (RASM) is presented. The RASM matching in combination with OSF is investigated in the context of segmenting lungs with large lung cancer masses in 3D CT scans. The robustness and effectiveness of this approach are demonstrated on 30 lung scans containing 20 normal lungs and 40 diseased lungs, where conventional segmentation methods frequently fail to deliver usable results. The developed RASM approach is generally applicable and suitable for large organs/structures.
While providing high levels of performance in most cases, OSF-based approaches may fail in a local region in the presence of pathology or other local challenges. A new (generic) interactive refinement approach for correcting local segmentation errors based on the OSF segmentation framework is proposed. Following the automated segmentation, the user can inspect the result and correct local or regional segmentation inaccuracies by (iteratively) providing clues regarding the location of the correct surface. This expert information is utilized to modify the previously calculated cost function, locally re-optimizing the underlying modified graph without a need to start the new optimization from scratch. For refinement, a hybrid desktop/virtual reality user interface based on stereoscopic visualization technology and advanced interaction techniques is utilized for efficient interaction with the segmentations (surfaces). The proposed generic interactive refinement method is adapted to three applications. First, two refinement tools for 3D lung segmentation are proposed, and the performance is assessed on 30 test cases from 18 CT lung scans. Second, in a feasibility study, the approach is expanded to 4D OSF-based lung segmentation refinement and an assessment of performance is provided. Finally, a dual-surface OSF-based intravascular ultrasound (IVUS) image segmentation framework is introduced, application specific segmentation refinement methods are developed, and an evaluation on 41 test cases is presented. As demonstrated by experiments, OSF-based segmentation refinement is a promising approach to address challenges in medical image segmentation.
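The cost-plus-smoothness formulation underlying OSF can be illustrated with a toy 2D analogue: one surface position per image column, with adjacent columns constrained to differ by at most a few rows, found by dynamic programming. Real OSF solves the 3D problem via a minimum s-t cut on a specially constructed graph; this sketch only conveys the formulation, and the names are made up.

```python
def optimal_surface_2d(cost, smooth=1):
    """Minimum-cost 'surface' through a 2D cost image: one row index per column,
    adjacent columns differing by at most `smooth` rows (hard smoothness)."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * rows for _ in range(cols)]    # dp[c][r]: best cost ending at (r, c)
    back = [[0] * rows for _ in range(cols)]
    for r in range(rows):
        dp[0][r] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for pr in range(max(0, r - smooth), min(rows, r + smooth + 1)):
                cand = dp[c - 1][pr] + cost[r][c]
                if cand < dp[c][r]:
                    dp[c][r], back[c][r] = cand, pr
    r = min(range(rows), key=lambda rr: dp[cols - 1][rr])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[c][r]
        path.append(r)
    return path[::-1]
```

Interactive refinement as described above would amount to locally editing the cost image near a user-indicated location and re-running the optimization, rather than starting from scratch.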
|
247 |
Refactoring-based Requirements Refinement Towards Design
Liu, WenQian, 18 February 2010 (has links)
Building systems that satisfy the given requirements is a main goal of software engineering. The success of this process relies largely on the presence of an adequate architectural design. Traditional paradigms deal with requirements separately from design. Our empirical studies show that crossing the boundary between requirements and design is difficult, that existing tools and methods for bridging the gap are inadequate, and that software architects rely heavily on experience, prior solutions, and creativity.
Current approaches in moving from requirements to design follow two schools. One is architecture-centric, focused on providing assistance to architects in reuse. The other is requirements-centric, and tends to extend established development frameworks and employ mappings to transition from requirements to architecture. Jackson indicates that a clear understanding of requirements (the problem) is crucial to building useful systems, and that to evolve successfully, their design must reflect the problem structure. Taylor et al. argue that design is the central activity in connecting requirements and architecture. Nonetheless, existing approaches overlook either the underlying structure of requirements or design considerations.
This dissertation presents a novel theory enabling requirements structuring and design analysis through requirements refinement and refactoring. The theory introduces a refinement process model operating on four abstraction levels, and a set of refactoring operators and algorithms. The method works in small, well-guided steps with visible progress.
The theory provides a basis for designers to analyze and simplify requirement descriptions, remove redundancy, uncover dependencies, extract lower-level requirements, incorporate design concerns, and produce a system decomposition reflecting the underlying problem structure. A design built on top of this decomposition is better suited for evolution than one created without explicit structural analysis.
The theory is validated on an industrial-sized project, wherein a suitable system decomposition is produced and a comparison made to a conventionally-devised solution. Examples demonstrate that the theory handles changes incrementally. It is explained how the theory addresses the existing challenges in going from requirements to design and supports fundamental software engineering principles. The method is assessed against common validation criteria. The approach is compared with prominent related work.
|
248 |
The Structure of Bovine Mitochondrial ATP Synthase by Single Particle Electron Cryomicroscopy
Baker, Lindsay, 20 August 2012 (links)
Single particle electron cryomicroscopy (cryo-EM) is a method of structure determination that uses many randomly oriented images of the specimen to construct a three-dimensional density map. In this thesis, single particle cryo-EM has been used to determine the structure of intact adenosine triphosphate (ATP) synthase from bovine heart mitochondria, an approximately 550 kDa membrane protein complex. In respiring organisms, ATP synthase is responsible for synthesizing the majority of ATP, a molecule that serves as an energy source for many cellular reactions. In order to understand the mechanism of ATP synthase, knowledge of the arrangement of subunits in the intact complex is necessary. To obtain maps of intact ATP synthase showing internal density distributions by single particle cryo-EM, methodological improvements to image acquisition, map refinement, and data selection were developed. Further, a novel segmentation algorithm was developed to aid in interpretation of maps. The use of these tools allowed for construction and interpretation of two maps of ATP synthase, solubilized in different membrane mimetics, in which the arrangement of subunits could be identified. These maps revealed interactions within the complex important for its function. In addition, evidence was obtained for curvature of membrane mimetics around ATP synthase, suggesting a role for the complex in maintenance of mitochondrial membrane morphology.
|