81 |
Function statistics applied to volume rendering : transfer functions design and computational issues on discrete functions / Estatísticas em funções aplicadas a visualização volumétrica : detalhes computacionais em funções discretas / Bernardon, Fabio Fedrizzi. January 2008
O projeto de funções de transferência é um interessante problema que recebe muita atenção da comunidade de visualização. Diversas pesquisas têm sido conduzidas para criar melhores ferramentas e técnicas que trabalham com dados volumétricos. Existem duas grandes classes de dados: volumes estruturados e volumes não-estruturados. A maioria dos trabalhos anteriores apenas se refere a dados estruturados. Este trabalho possui dois grupos de contribuições. O primeiro diz respeito ao problema clássico de especificação de funções de transferência. Primeiramente é desenvolvido o conceito de Ensembles, que são funções de transferência desenvolvidas a partir da combinação de funções anteriores e mais simples. Também é apresentada uma abordagem de key-framing para manipular dados que variam no tempo. O segundo grupo de contribuições é um estudo aprofundado sobre o comportamento de dados não-estruturados. Problemas críticos foram descobertos e tratados para permitir uma integração quase perfeita de ferramentas usadas para dados estruturados em dados não-estruturados. Os resultados mostram a melhoria de qualidade de histogramas, e também o sistema de desenvolvimento de funções de transferência. Trabalhos futuros são sugeridos para utilizar a versão melhorada do histograma de gradiente-magnitude, assim como a exploração de novos modelos de bordas. / Transfer function design is an important problem that receives much attention from the visualization community. Several research efforts have inspired the creation of better tools and techniques to deal with volumetric datasets. There are two major classes of datasets, namely structured and unstructured grids. Most of the previous work has addressed only structured data. This work presents two groups of contributions of different natures. The first is related to the general problem of transfer function design. It introduces the concept of ensembles, which are complex transfer functions created by combining simpler, standard ones. It also presents a key-frame-based approach to handle time-varying sequences. The second group of contributions is related to a study of several characteristics of unstructured data. Problems were discovered and addressed to allow a seamless integration of classical structured-grid tools with unstructured data. This work includes results that show improvements in the statistical analysis of the data, as well as the developed transfer function design system. Further work is suggested to take advantage of the enhanced version of the gradient-magnitude histogram and to explore different boundary models.
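A minimal sketch of the ensemble idea described above: two simple 1D transfer functions are blended into a combined one. The ramp-shaped component functions and the blending rule used here (opacity-weighted colour average, maximum opacity) are assumptions for illustration only, not the combination scheme defined in the thesis.

    # Hypothetical ensemble of two 1D transfer functions (scalar value -> RGBA).
    import numpy as np

    def ramp_tf(scalars, lo, hi, rgba):
        """Opacity ramps linearly from 0 to rgba[3] between lo and hi; constant colour."""
        t = np.clip((scalars - lo) / (hi - lo), 0.0, 1.0)
        out = np.zeros((scalars.size, 4))
        out[:, :3] = rgba[:3]
        out[:, 3] = t * rgba[3]
        return out

    def ensemble_tf(scalars, tfs):
        """Blend several transfer functions: opacity = max, colour = opacity-weighted mean."""
        samples = np.stack([tf(scalars) for tf in tfs])            # (n_tf, n, 4)
        alpha = samples[..., 3]
        weights = alpha / np.maximum(alpha.sum(axis=0), 1e-9)
        colour = (samples[..., :3] * weights[..., None]).sum(axis=0)
        return np.concatenate([colour, alpha.max(axis=0)[:, None]], axis=1)

    scalars = np.linspace(0.0, 1.0, 5)
    tf_bone = lambda s: ramp_tf(s, 0.6, 0.9, np.array([1.0, 1.0, 0.9, 0.8]))
    tf_soft = lambda s: ramp_tf(s, 0.2, 0.4, np.array([0.8, 0.3, 0.3, 0.3]))
    print(ensemble_tf(scalars, [tf_bone, tf_soft]))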
82 |
Transformações multi-escala para a segmentação de imagens de impressões digitais / Multi-scale transformations for fingerprint image segmentation / Teixeira, Raoni Florentino da Silva, 1987-. 18 August 2018
Orientador: Neucimar Jerônimo Leite / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2011 / Resumo: A identificação baseada em impressões digitais tem recebido considerável atenção nos últimos anos devido à crescente procura pela identificação automática de indivíduos, tanto em aplicações forenses quanto empresariais, por exemplo. Uma importante etapa que deve ser considerada nessas aplicações é a segmentação da imagem que constitui a impressão digital. Nesse contexto, o termo segmentação refere-se à separação da imagem em duas regiões, denominadas área da impressão (foreground) e fundo (background), a fim de evitar que características utilizadas no reconhecimento e/ou classificação das impressões digitais correspondentes sejam extraídas de regiões impróprias. Normalmente, as abordagens de segmentação encontradas na literatura não consideram imagens provenientes de diferentes bases de dados (ou sensores), em virtude da diversidade das propriedades e características encontradas em cada sensor e, em geral, o desempenho dos métodos existentes é baixo quando lidam com bases de dados heterogêneas. Neste sentido, a segmentação de imagens oriundas de diferentes sensores constitui um problema ainda a ser explorado. Este trabalho apresenta um conjunto de transformações de imagens que pode ser utilizado para esse fim, ou seja, para segmentação de imagens de impressões digitais provenientes de diferentes sensores sem que seja necessário, por exemplo, uma pré-classificação ou treinamento. De modo geral, estas transformações são baseadas em operadores morfológicos do tipo toggle que apresentam características interessantes de simplificação de imagens. Os resultados obtidos considerando imagens de diferentes bases de dados mostram que o método proposto supera abordagens bem conhecidas da literatura que representam o estado-da-arte / Abstract: Fingerprint identification has received considerable attention in the last few years, due to an increasing demand for automatic human identification in areas concerning, for example, forensic and business applications. An important step to be considered in such applications is fingerprint image segmentation. In this context, the term segmentation refers to splitting the image into two regions, namely foreground and background, in order to avoid the extraction of features used in automatic classification and recognition from noisy regions. Usually, the segmentation methods found in the literature do not consider images from different databases (or sensors) and, in general, dealing with heterogeneous databases constitutes an open problem not well explored in the literature. This work presents a new set of image transformations for the segmentation of fingerprint images acquired from different sensors without any requirement for pre-classification or training. As we will show, these transformations are based on morphological toggle operators, which present interesting image simplification properties. We evaluate our approach on images from different databases and show its improvements when compared against other well-known state-of-the-art segmentation methods discussed in the literature / Mestrado / Processamento de Imagens / Mestre em Ciência da Computação
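As a rough illustration of the kind of morphological toggle mapping mentioned above, the sketch below applies the classical toggle contrast operator, in which each pixel is replaced by its grey-scale dilation or erosion, whichever is closer in value. The structuring-element size, the synthetic ridge-like test image and the tie-breaking rule are assumptions for illustration; the actual transformations and the segmentation decision used in the dissertation are not reproduced here.

    # Classical toggle contrast operator as a stand-in for "toggle-type" simplification.
    import numpy as np
    from scipy.ndimage import grey_dilation, grey_erosion

    def toggle_contrast(image, size=3):
        """Map each pixel to its dilation or erosion, whichever is closer in value
        (ties default to the erosion here for brevity)."""
        dil = grey_dilation(image, size=(size, size))
        ero = grey_erosion(image, size=(size, size))
        closer_to_dilation = (dil - image) < (image - ero)
        return np.where(closer_to_dilation, dil, ero)

    rng = np.random.default_rng(0)
    ridge_like = np.sin(np.linspace(0, 12 * np.pi, 64))[None, :] * 120 + 128
    # int32 avoids uint8 wrap-around in the differences computed above
    noisy = np.clip(ridge_like + rng.normal(0, 10, (64, 64)), 0, 255).astype(np.int32)
    simplified = toggle_contrast(noisy)
    print(simplified.shape, simplified.min(), simplified.max())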
83 |
Reformatação curvilínea baseada em simplificação e costura de malhas / Curvilinear reformatting based on mesh simplification and zippering / Loos, Wallace Souza, 1989-. 10 October 2014
Orientador: Wu Shin-Ting / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Previous issue date: 2014 / Resumo: A reformatação curvilínea é uma técnica computacional de exploração, portanto não invasiva, que permite realizar cortes curvilíneos sobre as neuroimagens. Ela complementa as reformatações multiplanares na localização de displasia cortical focal, causa comumente associada à epilepsia refratária. Foi desenvolvido pelo nosso grupo de pesquisa um algoritmo de reformatação curvilínea baseado em malhas de offset. Dois problemas foram identificados no algoritmo: artefatos visuais e limitação da região a ser reformatada em áreas visíveis. Neste trabalho apresentamos soluções para contornar estes problemas. Mostramos que a causa dos artefatos são as auto-interseções locais e que um algoritmo de simplificação pode removê-las de forma eficiente sem comprometer a geometria de reformatação. Desenvolvemos um algoritmo de costura para juntar malhas obtidas em diferentes ângulos de vista numa única malha de corte. Desta forma, é possível realizar a reformatação simultânea nos dois hemisférios cerebrais, propiciando um diagnóstico baseado em análise comparativa dos dois lados. Na implementação dos nossos algoritmos procuramos explorar as vantagens de uma estrutura de dados eficiente e do paralelismo das unidades de processamento gráfico (GPUs) para termos resultados em tempo interativo. Experimentos preliminares com as neuroimagens dos pacientes com crises epilépticas apontam que a ferramenta pode colaborar na identificação de lesões sutis / Abstract: Curvilinear reformatting is a computational, and therefore noninvasive, exploration technique that allows making curvilinear slices on 3D neuroimaging data. It complements multiplanar reformatting in finding lesions of focal cortical dysplasia, a common cause of refractory epilepsy. Our research group has developed an algorithm for curvilinear reformatting based on offset meshes. Two problems were identified: visual artifacts and limitation of the area to be reformatted to the visible region. In this work we present two solutions to these problems. We show that the cause of the artifacts is local self-intersections, and that a simplification algorithm can remove them efficiently without compromising the reformatting geometry. We also developed a sewing (zippering) algorithm to join meshes obtained from different viewing angles into a single cutting mesh. In this way, it is possible to perform curvilinear reformatting on both hemispheres of the brain simultaneously, providing a diagnosis based on a comparative analysis of the two sides. In the implementation of our algorithms we exploited the advantages of an efficient data structure and the parallelism of graphics processing units (GPUs) to achieve results at interactive rates. Preliminary results with neuroimages from patients with epileptic seizures indicate that the tool may help in finding subtle lesions / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
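The sketch below illustrates only the offset-mesh idea that the reformatting builds on: every vertex of a surface mesh is pushed a fixed depth along its area-weighted vertex normal. At concave regions such offsets can fold over and create exactly the local self-intersections discussed above; detecting and removing them, and the zippering of meshes from several viewpoints, are not reproduced here. The toy quad and the choice of area-weighted normals are assumptions for illustration.

    # Offset a triangle mesh along per-vertex normals (negative depth = inward).
    import numpy as np

    def vertex_normals(vertices, triangles):
        """Area-weighted vertex normals accumulated from triangle normals."""
        normals = np.zeros_like(vertices)
        v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
        face_n = np.cross(v1 - v0, v2 - v0)          # length proportional to 2 * area
        for i in range(3):
            np.add.at(normals, triangles[:, i], face_n)
        lengths = np.linalg.norm(normals, axis=1, keepdims=True)
        return normals / np.maximum(lengths, 1e-12)

    def offset_mesh(vertices, triangles, depth):
        """Offset every vertex by `depth` along its normal."""
        return vertices + depth * vertex_normals(vertices, triangles)

    # Toy example: one quad split into two triangles, offset 5 mm along +z.
    verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
    tris = np.array([[0, 1, 2], [0, 2, 3]])
    print(offset_mesh(verts, tris, 5.0))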
84 |
Extração de termos de manuais técnicos de produtos tecnológicos: uma aplicação em Sistemas de Adaptação Textual / Term extraction from technological products instruction manuals: an application in textual adaptation systems / Fernando Aurélio Martins Muniz. 28 April 2011
No Brasil, cerca de 68% da população é classificada como leitores com baixos níveis de alfabetização, isto é, possuem o nível de alfabetização rudimentar (21%) ou básico (47%), segundo dados do INAF (2009). O projeto PorSimples utilizou as duas abordagens de Adaptação Textual, a Simplificação e a Elaboração, para ajudar leitores com baixo nível de alfabetização a compreender documentos disponíveis na Web em português do Brasil, principalmente textos jornalísticos. Esta pesquisa de mestrado também se dedicou às duas abordagens acima, mas o foco foi o gênero de textos instrucionais. Em tarefas que exigem o uso de documentação técnica, a qualidade da documentação é um ponto crítico, pois caso a documentação seja imprecisa, incompleta ou muito complexa, o custo da tarefa ou até mesmo o risco de acidentes aumenta muito. Manuais de instrução possuem duas relações procedimentais básicas: a relação gera (generation), quando uma ação α gera automaticamente uma ação β, e a relação habilita (enablement), quando a realização de uma ação α permite a realização da ação β, mas o agente precisa fazer algo a mais para garantir que β irá ocorrer. O projeto aqui descrito, intitulado NorMan, estudou como as relações procedimentais gera e habilita são realizadas em manuais de instruções, dando base para a criação do sistema NorMan Extractor, que implementa um método de extração de termos dedicado ao gênero de textos instrucionais, especificamente aos manuais técnicos. Também foi proposta a adaptação do sistema de autoria de textos simplificados criado no projeto PorSimples - o SIMPLIFICA - para atender o gênero de textos instrucionais. O SIMPLIFICA adaptado usa a lista de candidatos a termo, gerada pelo sistema NorMan Extractor, com duas funções: (a) para auxiliar na identificação de palavras que não devem ser simplificadas pelo método de simplificação léxica baseado em sinônimos, e (b) para gerar uma elaboração léxica para facilitar o entendimento do texto / In Brazil, 68% of the population can be classified as low-literacy readers, i.e., people at the rudimentary (21%) and basic (47%) literacy levels, according to the National Indicator of Functional Literacy (INAF, 2009). The PorSimples project used the two approaches of Textual Adaptation, Simplification and Elaboration, to help readers with low literacy levels understand Brazilian Portuguese documents on the Web, mainly newspaper articles. In this research we also used the two approaches above, but the focus was the genre of instructional texts. In tasks requiring the use of technical documentation, the quality of the documentation is a critical point, because if the documentation is inaccurate, incomplete or too complex, the cost of the task or even the risk of accidents is greatly increased. Instruction manuals have two basic procedural relationships: the relation generation (performing one action, α, makes the other action, β, occur automatically), and the relation enablement (when α enables β, the agent still needs to do something more to guarantee that β will be done). The project presented here, entitled NorMan, investigated how the generation and enablement relations are realized in instruction manuals, providing the basis for the NorMan Extractor system, which implements a term extraction method devoted to the genre of instructional texts, specifically technical manuals. We also proposed an adaptation of the authoring system for simplified texts created in the PorSimples project - the SIMPLIFICA - to deal with the genre of instructional texts. The new SIMPLIFICA uses the list of term candidates, generated by the proposed method, with two functions: (a) to assist in the identification of words that should not be simplified by the synonym-based lexical simplification method, and (b) to generate a lexical elaboration to facilitate the comprehension of the text.
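A tiny hypothetical sketch of how such a term-candidate list can steer text adaptation, in the spirit of functions (a) and (b) above: words recognised as terms are not replaced by the synonym-based simplifier but instead receive a lexical elaboration (a short gloss), while other complex words may be swapped for a simpler synonym. The miniature lexicon, gloss and example sentence are invented for illustration and are not the NorMan Extractor output or the SIMPLIFICA resources.

    # Term-aware lexical adaptation: protect terms, elaborate them, simplify the rest.
    SYNONYMS = {"pressione": "aperte"}                              # complex word -> simpler synonym
    GLOSSES = {"roteador": "aparelho que distribui a internet"}     # term -> short gloss

    def adapt(sentence):
        out = []
        for word in sentence.lower().split():
            if word in GLOSSES:          # (b) elaborate candidate terms instead of replacing them
                out.append(f"{word} ({GLOSSES[word]})")
            else:                        # (a) words outside the term list may be simplified
                out.append(SYNONYMS.get(word, word))
        return " ".join(out)

    print(adapt("Pressione o botão do roteador"))
    # -> "aperte o botão do roteador (aparelho que distribui a internet)"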
85 |
Describing scent : On the translation of hyphenated premodifiers in a text about perfume / Magnusson, Evelina. January 2021
This small-scale study examines the translation of a text about perfumes, focusing on how hyphenated premodifiers in the English source text were translated into Swedish. A quantitative analysis was carried out, in which the premodifying structures present in the source text were identified and categorized according to their individual constituents, and the frequencies of the various categories were calculated. A similar analysis was performed for the corresponding structures found in the Swedish target text, and the results were compared to and contrasted with other recent studies. In the qualitative analysis, individual examples from the text were analysed in more depth and the consequences of the translation choices made were discussed. The results demonstrated that English hyphenated premodifiers show a great deal of structural variety. The most frequent structures were nouns occurring in the left-hand position and ed-participles occurring in the right-hand position. A large majority of the hyphenated premodifiers were short, with only 5.5% consisting of three words or more. The results also showed that the most frequent corresponding structure in the Swedish target text was the compound adjective, which accounted for 48.1% of all examples. The results of the qualitative analysis pointed to a tendency towards explication, especially when hyphenated premodifiers were restructured into postmodifying phrases and clauses. Furthermore, a tendency to simplify the hyphenated modifiers during the translation process was noted, especially when translating longer, phrasal modifiers. Many hyphenated premodifiers in the ST were metaphorical in nature; this was sometimes, but not always, also the case in the corresponding TT phrases.
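A rough sketch of the quantitative step described above: hyphenated premodifiers that immediately precede another word are extracted with a regular expression and tallied, together with a crude check of whether the right-hand constituent looks like an ed-participle. The regex and the toy sentence are invented here; the study's actual categorisation by word class would require part-of-speech tagging rather than surface patterns.

    # Extract and tally hyphenated premodifiers from a short example sentence.
    import re
    from collections import Counter

    text = ("A long-lasting rose-scented perfume with hand-picked sun-dried "
            "jasmine and a two-week maceration period.")

    # Lookahead keeps the following word unconsumed, so adjacent modifiers are all found.
    pattern = re.compile(r"\b(\w+(?:-\w+)+)(?=\s+\w)")
    mods = [m.group(1).lower() for m in pattern.finditer(text)]

    freq = Counter(mods)
    right_is_ed = Counter(m.split("-")[-1].endswith("ed") for m in mods)
    print(freq)            # individual hyphenated premodifiers
    print(right_is_ed)     # how many have an ed-participle-like right-hand element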
86 |
Simulation model simplification in semiconductor manufacturing / Stogniy, Igor. 16 November 2021
Although discrete event simulation has existed for more than 50 years, sufficiently large simulation models covering several enterprises are still virtually unknown. The simulation models that do exist in industry are usually used to test individual scenarios rather than for optimization. The main reason for this is the high computational complexity. A solution could be the use of simplified models; however, this approach has not been sufficiently investigated. This dissertation is devoted to that problem, which can be briefly formulated as the following question:
How can a simulation model be simplified so as to minimize the run time of the simplified model while maximizing its accuracy?
Unfortunately, the answer to this question is not simple and requires many sub-problems to be solved. This thesis formulates these problems and proposes ways to solve them; let us list them briefly. In order to simplify simulation models under conditions close to real ones, it is suggested to use statistical models specially developed for this purpose. Based on experimental data, this thesis analyzes various ways of aggregating process flows. The main simplification method, however, is replacing tool sets with delays. Two approaches to the use of delays are considered: distributed and aggregated delays. The second approach obviously reduces the simulation time of the simplified model more, but distributed delays make it easier to identify meaningful effects arising from the simplification, and comparing the two methods is interesting in itself. A significant problem is the criterion for selecting the tool set to be substituted; ten heuristics are considered for this purpose, each with two variations. Another problem is the calculation of the delay values: three variants are considered and compared in terms of the accuracy of the simplified simulation models. Two dispatching rules are considered: First In First Out and Critical Ratio. The first rule provides a more predictable behavior of the simplified simulation models and makes it easier to understand the various effects that arise; the second allows the applicability of the proposed simplification methods to be tested under conditions close to the real world.
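A minimal SimPy sketch of the central simplification step, replacing a tool set with a delay: the detailed model queues lots at a capacity-constrained tool set, while the simplified model replaces that tool set with a fixed delay. Using the mean flow time observed in the detailed run as the delay value is only one plausible delay calculation and is an assumption here, as are the arrival rate, processing time and lot count.

    # Detailed tool set (queueing resource) vs. simplified model (plain delay).
    import random
    import simpy

    PROCESS_TIME, N_TOOLS, N_LOTS, ARRIVAL_GAP = 4.0, 2, 200, 2.5
    random.seed(1)

    def detailed(env, tools, flow_times):
        def lot(env):
            start = env.now
            with tools.request() as req:              # queue for one of the tools
                yield req
                yield env.timeout(random.expovariate(1.0 / PROCESS_TIME))
            flow_times.append(env.now - start)
        for _ in range(N_LOTS):
            env.process(lot(env))
            yield env.timeout(random.expovariate(1.0 / ARRIVAL_GAP))

    def simplified(env, delay, flow_times):
        def lot(env):
            start = env.now
            yield env.timeout(delay)                  # tool set replaced by a plain delay
            flow_times.append(env.now - start)
        for _ in range(N_LOTS):
            env.process(lot(env))
            yield env.timeout(random.expovariate(1.0 / ARRIVAL_GAP))

    env1, detailed_ft = simpy.Environment(), []
    env1.process(detailed(env1, simpy.Resource(env1, capacity=N_TOOLS), detailed_ft))
    env1.run()
    mean_flow = sum(detailed_ft) / len(detailed_ft)   # delay value taken from the detailed run

    env2, simple_ft = simpy.Environment(), []
    env2.process(simplified(env2, mean_flow, simple_ft))
    env2.run()
    print(f"detailed mean flow time: {mean_flow:.2f}")
    print(f"simplified mean flow time: {sum(simple_ft) / len(simple_ft):.2f}")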
An important problem is the analysis of the obtained results. Although this topic is well studied in the field of simulation, analyzing the accuracy of simplified models has its own nuances. Various recommendations found in the scientific literature were tested experimentally in this thesis. It turned out that not all traditional accuracy measurements can be used adequately for the analysis of simplified models; in addition, it is worth using techniques and methods that, in simulation modeling, are usually applied to the analysis of input data.
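As an illustration of two of the accuracy views referred to above, the sketch below compares lot cycle times of a full and a simplified model with a paired Mean Absolute Error and with a distribution-level, CDF-based distance (the two-sample Kolmogorov-Smirnov statistic). The gamma-distributed toy data stand in for per-lot simulation outputs and are invented here; they are not the thesis's reports or its exact set of measures.

    # Paired MAE and a CDF-based distance between two sets of lot cycle times.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    full_ct = rng.gamma(shape=9.0, scale=2.0, size=500)            # "detailed" cycle times
    simplified_ct = full_ct * 0.97 + rng.normal(0.0, 0.8, 500)     # slightly biased copy

    mae = np.mean(np.abs(full_ct - simplified_ct))                 # paired, lot by lot
    ks_stat, p_value = stats.ks_2samp(full_ct, simplified_ct)      # distribution-level view

    print(f"MAE = {mae:.3f}")
    print(f"KS distance = {ks_stat:.3f} (p = {p_value:.3f})")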
In this thesis, about 500,000 experiments were performed and more than 2,000 reports were generated. Most of the figures presented in this thesis are based on the specific reports of greatest interest.

Contents:
1 Introduction
1.1 Preamble
1.2 Scope of the research
1.3 Problem definition
2 State of the art
2.1 Research object
2.1.1 Tool sets
2.1.2 Down and PM events
2.1.3 Process flows
2.1.4 Products
2.2 Simplification
2.2.1 Simplification approaches
2.2.2 Tool set and process step simplification
2.2.3 Process flow and product nomenclature simplification
2.2.4 Product volume and lot release rule simplification
2.3 Discussion about bottleneck
2.3.1 Bottleneck definitions
2.3.2 Why do we consider bottlenecks?
2.3.3 Bottleneck detection methods
3 Solution
3.1 Design of experiments
3.1.1 α – forecast scenario
3.1.2 β – lot release strategy
3.1.3 γ – process flow aggregation
3.1.4 δ – delay position
3.1.5 ε – dispatching rule
3.1.6 ζ – sieve functions
3.1.7 η – delay type
3.1.8 Experimental environment
3.2 Experiment analysis tools
3.2.1 Errors
3.2.2 Deltas
3.2.3 Correlation coefficients and autocorrelation function
3.2.4 T-test, U-test, and F-test
3.2.5 Accuracy measurements based on the probability density function
3.2.6 Accuracy measurements based on the cumulative distribution function
3.2.7 Simple example
3.2.8 Simulation reports
3.2.9 Process step reports
3.2.10 Model runtime characteristics
4 Evaluation
4.1 Scenario “Present”, static product mix and all process flows (α1, β1, γ1)
4.1.1 Similarity and difference of errors and deltas for static product mix (β1)
4.1.2 “Butterfly effect” of Mean Absolute Error (MAE)
4.1.3 “Strange” behavior of correlation and autocorrelation
4.1.4 “Pathological” behavior of Mean Absolute Error (MAE)
4.1.5 Lot cycle time average shift
4.1.6 Delay type (η) usage analysis
4.1.7 Introduction to sieve function (ζ) analysis
4.1.8 Delay position (δ) analysis
4.1.9 δ2 calibration (improvement of the lot cycle time average shift)
4.1.10 Using t-test, U-test, and F-test as accuracy measurements
4.1.11 Using accuracy measurements based on the probability density function
4.1.12 Using accuracy measurements based on the cumulative distribution function
4.1.13 X-axes of the accuracy measurements
4.1.14 Sieve function (ζ) comparison (accuracy comparison)
4.2 Scenario “Present”, dynamic product mix and all process flows (α1, β2, γ1)
4.2.1 Modeling time gain, autocorrelation, correlation, and MAE in β2 case
4.2.2 Errors and deltas in β2 case
4.2.3 Lot cycle time average shift in β2 case
4.2.4 Accuracy measurement in β2 case and accuracy comparison
4.3 Scenario “Past + Future”, dynamic product mix and all process flows (α2, β2, γ1)
4.3.1 Delays in α2 case
4.3.2 Deltas, correlation, and MAE in α2 case
4.3.3 Accuracy comparison
4.4 Process flow aggregation (α1, β1, γ1) vs. (α1, β1, γ2) vs. (α1, β1, γ3)
4.4.1 Lot cycle time average
4.4.2 Lot cycle time standard deviation
4.4.3 Correlation, Mean Absolute Error, and Hamming distance
4.4.4 Accuracy comparison
4.5 Additional experiments of gradual process step merge (α1, β1, γ3)
4.5.1 Gradual merge experimental results
4.5.2 Theoretical explanation of the gradual merge experimental results
4.6 Process flow aggregation (α1, β2, γ1) vs. (α1, β2, γ2) vs. (α1, β2, γ3)
4.6.1 Correlation, MAE, and deltas
4.6.2 Accuracy comparison
4.7 Process flow aggregation (α2, β2, γ1) vs. (α2, β2, γ2) vs. (α2, β2, γ3)
4.7.1 Delays in {α2, γ2} case
4.7.2 Accuracy comparison
5 Conclusions and outlook
6 Appendices
6.1 Appendix A. Simulation reports overview
6.2 Appendix B. Sieve functions comparison for {α1, β1, γ1, δ2_cal, ε1}
7 References
87 |
Converting simplicity as a military strategy principle to a successful tool for strategy execution in a geographically dispersed organisation / De Wet Barrie, George. 04 April 2011
This research reports a case study conducted to determine whether the application of Simplicity as a military principle can assist a geographically dispersed organisation in executing strategy more effectively. An investigation was conducted into the main reasons why strategy execution is not fully effective in an identified geographically dispersed organisation. A survey and semi-structured interviews were conducted to identify these inhibitors. A comparison with existing literature identified the four main requirements for effective strategy execution in this organisation. A review of the application of Simplicity in the military context was then completed: a comprehensive literature review, integrated with semi-structured interviews with general staff in the South African Army, identified military approaches to Simplicity and their impact on execution success. A conceptual content analysis matched successful military approaches to Simplicity with the main drivers of ineffective strategy execution in the organisation. The output was a set of strategy execution inhibitors in the organisation, each matched with approaches to Simplicity drawn from the interviews with military professionals. On this basis, a specific model and tools for simplification were proposed for the organisation: a model for strategy execution at all levels, with tools and techniques discussed to ensure the simplification of strategic objectives in execution. Copyright / Dissertation (MBA)--University of Pretoria, 2010. / Gordon Institute of Business Science (GIBS) / unrestricted
88 |
Generation and evaluation of collision geometry based on drawings / Bernhardsson, Albin; Björling, Ivar. January 2018
Background. Many video games allow for creative expression. Attractive Interactive AB is developing such a game, in which players draw their own levels using pen and paper. For such a game to work, collision geometry needs to be generated from photos of hand-drawn video game levels. Objectives. The main goal of the thesis is to create an algorithm for generating collision geometry from photos of hand-drawn video game levels and to determine whether the generated geometry can replace handcrafted geometry. Handcrafted geometry is manually created using vector graphics editing tools. Methods. A method for generating collision geometry from photos of drawings is implemented. The quality of the generated geometry is evaluated and compared to handcrafted geometry in terms of vertex count and positional accuracy. Ground truths are used to determine the positional accuracy of collision geometry by calculating the resemblance between the created collision geometry and the respective ground truth. Results. The generated geometry has a higher positional accuracy and, on average, a lower vertex count than the handcrafted geometry. Performance measurements for two different configurations of the algorithm are presented. Conclusions. Collision geometry generated by the presented algorithm has a higher quality than handcrafted geometry. Thus, the generated geometry can replace handcrafted geometry. / Bakgrund. Många datorspel möjliggör kreativa yttringar. Attractive Interactive AB utvecklar ett sådant spel, i vilket spelare ritar sina egna nivåer med hjälp av papper och penna. För att ett sådant spel ska vara möjligt måste kollisionsgeometri genereras från foton av handritade spelnivåer. Syfte. Syftet med examensarbetet är att skapa en algoritm för att generera kollisionsgeometri från foton av handritade datorspelsnivåer och fastställa om den genererade geometrin kan ersätta handgjord geometri. Handgjord kollisionsgeometri skapas manuellt genom användning av redigeringsverktyg för vektorgrafik. Metod. En metod för att generera kollisionsgeometri från foton av ritade datorspelsnivåer implementeras. Kvaliteten av den genererade geometrin evalueras och jämförs med handgjord geometri i fråga om vertexantal och positionsnoggrannhet. Grundreferenser används för att fastställa positionsnoggrannheten av kollisionsgeometri genom att beräkna likheten mellan den skapade kollisionsgeometrin och respektive grundreferens. Resultat. Den genererade geometrin har en högre positionsnoggrannhet och i genomsnitt ett lägre vertexantal än den handgjorda geometrin. Prestandamätningar för två olika konfigurationer av algoritmen presenteras. Slutsatser. Kollisionsgeometrin som genererats av den föreslagna algoritmen har en högre kvalitet än handgjord geometri. Därmed kan den genererade geometrin ersätta handgjord geometri.
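One plausible sketch of the general idea, going from a binarised drawing to a low-vertex collision polygon by extracting outer contours and reducing them with the Douglas-Peucker approximation (OpenCV 4.x assumed). This is an illustration only, not the algorithm evaluated in the thesis, and the synthetic ellipse below merely stands in for a photographed pen stroke.

    # Contour extraction plus polygon simplification as candidate collision geometry.
    import cv2
    import numpy as np

    canvas = np.zeros((200, 300), dtype=np.uint8)
    cv2.ellipse(canvas, (150, 100), (90, 50), 15, 0, 360, 255, -1)   # fake filled stroke

    _, binary = cv2.threshold(canvas, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        epsilon = 0.01 * cv2.arcLength(contour, True)    # tolerance proportional to perimeter
        polygon = cv2.approxPolyDP(contour, epsilon, True)
        print(f"raw contour: {len(contour)} points -> collision polygon: {len(polygon)} vertices")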
89 |
Automatic Text Simplification via Synonym Replacement / Automatiskt textförenkling genom synonymutbyte / Keskisärkkä, Robin. January 2012
In this study, automatic lexical simplification via synonym replacement in Swedish was investigated using three different strategies for choosing the replacement synonym: based on word frequency, based on word length, and based on level of synonymy. These strategies were evaluated in terms of standardized readability metrics for Swedish, average word length, proportion of long words, and in relation to the ratio of errors (type A) and the number of replacements. The effect of replacements on different genres of texts was also examined. The results show that replacement based on word frequency and word length can improve readability in terms of established metrics for Swedish texts across all genres, but that the risk of introducing errors is high. Attempts were made at identifying criterion thresholds that would decrease the ratio of errors, but no general thresholds could be identified. In a final experiment, word frequency and level of synonymy were combined using predefined thresholds; when more than one word passed the thresholds, either word frequency or level of synonymy was prioritized. The combined strategy was significantly better than word frequency alone over all texts when prioritizing level of synonymy, and both prioritizations were significantly better for the newspaper texts. The results indicate that synonym replacement on a one-to-one word level is very likely to produce errors. Automatic lexical simplification should therefore not be regarded as a trivial task, which is too often the case in the research literature. In order to evaluate the true quality of the texts it would be valuable to take the specific reader into account: a simplified text that contains some errors or fails to capture subtle differences in terminology can still be very useful if the original text is too difficult for the unassisted reader to comprehend.
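A hedged sketch of the word-frequency strategy described above, paired with the LIX readability measure commonly used for Swedish (average sentence length plus the percentage of words longer than six characters). The miniature frequency list, synonym lexicon and example sentence are invented for illustration; the study used full-scale lexical resources and additional metrics.

    # Frequency-based synonym replacement with a before/after LIX comparison.
    import re

    FREQ = {"få": 2000, "erhålla": 60, "bil": 800, "automobil": 5}   # invented corpus counts
    SYNONYMS = {"erhålla": ["få"], "automobil": ["bil"]}

    def lix(text):
        """LIX = words per sentence + 100 * (words longer than 6 chars / words)."""
        words = re.findall(r"\w+", text)
        sentences = max(1, len(re.findall(r"[.!?]", text)))
        long_words = sum(1 for w in words if len(w) > 6)
        return len(words) / sentences + 100 * long_words / len(words)

    def simplify_by_frequency(text):
        def repl(match):
            word = match.group(0)
            options = SYNONYMS.get(word.lower(), [])
            best = max(options, key=lambda w: FREQ.get(w, 0), default=None)
            # replace only when the chosen synonym is more frequent than the original word
            if best and FREQ.get(best, 0) > FREQ.get(word.lower(), 0):
                return best
            return word
        return re.sub(r"\w+", repl, text)

    original = "Du kan erhålla rabatt om du har en automobil."
    simplified = simplify_by_frequency(original)
    print(simplified)
    print(f"LIX: {lix(original):.1f} -> {lix(simplified):.1f}")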
90 |
Automated Quadrilateral Coarsening by Ring Collapse / Dewey, Mark William. 20 March 2008
In most finite element analysis, a uniform mesh is not the optimum way to model the problem. Mesh adaptation is the ability to modify a finite element model to include regions of the mesh with higher and lower node density. Mesh adaptation has received extensive study in both computational mechanics and computer graphics to increase the resolution or accuracy of the solution in specific areas. The algorithm developed in this thesis, the Automated Quadrilateral Coarsening by Ring Collapse (AQCRC) algorithm, provides a unique solution to allow conformal coarsening of both structured and unstructured quadrilateral finite element meshes. The algorithm is based on dual chord operations and dual chord removal. The AQCRC algorithm follows six steps: 1) input of a coarsening region and factor, 2) selection of coarsening rings, 3) improvement of mesh quality, 4) removal of coarsening rings, 5) mesh clean-up and 6) coarsening iterations. Examples are presented that show the application of the algorithm.
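For contrast with the conformal, localized coarsening that AQCRC provides, the sketch below performs only naive uniform 2:1 coarsening of a structured quad grid by merging every 2x2 block of elements. Applied to just a sub-region, such merging would leave hanging nodes at the interface, which is exactly the conformity problem that coarsening by ring (dual chord) collapse avoids; the six AQCRC steps themselves are not reproduced here, and the grid sizes are arbitrary.

    # Uniform 2:1 coarsening of a structured quad grid (baseline, not AQCRC).
    import numpy as np

    def structured_quads(nx, ny, spacing=1.0):
        """(nx+1)*(ny+1) nodes on a grid and nx*ny quads as node-index quadruples."""
        xs, ys = np.meshgrid(np.arange(nx + 1), np.arange(ny + 1), indexing="ij")
        nodes = np.stack([xs.ravel(), ys.ravel()], axis=1) * spacing

        def nid(i, j):
            return i * (ny + 1) + j

        quads = [(nid(i, j), nid(i + 1, j), nid(i + 1, j + 1), nid(i, j + 1))
                 for i in range(nx) for j in range(ny)]
        return nodes, quads

    def coarsen_uniform(nx, ny, spacing=1.0):
        """Merge 2x2 element blocks: same domain, half the divisions in each direction."""
        assert nx % 2 == 0 and ny % 2 == 0, "uniform 2:1 coarsening needs even divisions"
        return structured_quads(nx // 2, ny // 2, spacing * 2.0)

    fine_nodes, fine_quads = structured_quads(8, 8)
    coarse_nodes, coarse_quads = coarsen_uniform(8, 8)
    print(f"fine:   {len(fine_nodes)} nodes, {len(fine_quads)} quads")
    print(f"coarse: {len(coarse_nodes)} nodes, {len(coarse_quads)} quads")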