151

Representação compressiva de malhas / Mesh Compressive Representation

Jose Paulo Rodrigues de Lima 17 February 2014 (has links)
Data compression is an area of major interest in computing due to the costs of storing and transmitting data. Mesh compression in particular has attracted wide interest, given the growing use of meshes in three-dimensional games and modeling. In recent years, a new theory of signal acquisition and reconstruction was developed, called Compressive Sensing (CS), based on the concepts of sparsity, L1-norm minimization, and signal incoherence. This theory has some remarkable features, such as random sampling and reconstruction by minimization, so that the acquisition itself considers only the signal's significant coefficients. Any object that can be interpreted as a sparse signal is amenable to the technique; thus, once an object (a sound, an image) is represented sparsely, CS can be applied. This work verifies the viability of applying CS theory to mesh compression, so that the geometry of a mesh can be compressively sensed and represented. In the experiments, the input parameters and the L1-norm minimization techniques were varied. The results show that CS can be used as a strategy for compressing mesh geometry.
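As a rough illustration of the reconstruction step described in the abstract, the sketch below recovers a sparse vector from random linear measurements using iterative soft-thresholding (ISTA), one standard route to the L1-regularized problem. The matrix sizes, sparsity level, and regularization weight are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random (incoherent) sensing matrix
y = A @ x_true                                # compressive measurements
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```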
152

Algoritmos genéticos adaptativos: um estudo comparativo. / Adaptive genetic algorithms: a comparative study.

João Carlos Holland de Barcellos 07 April 2000 (has links)
Genetic Algorithms are today a powerful tool for finding solutions to highly complex problems. This dissertation studies Meta Genetic Algorithms, a particular class of Genetic Algorithms, and compares them with traditional Genetic Algorithms. For this study, a computer program was developed that automatically runs performance tests on several varieties of Genetic Algorithm and analyzes the data they generate. The results show that Meta Genetic Algorithms are more stable with respect to their control parameters than traditional Genetic Algorithms.
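The abstract does not define the Meta Genetic Algorithm precisely; one common reading, sketched below under that assumption, is a GA whose control parameters evolve along with the solutions. Here each individual carries its own mutation rate, which is itself perturbed before use, so sensitivity to control parameters can be probed without hand-tuning.

```python
import random

def fitness(bits):                      # toy objective: OneMax (count of ones)
    return sum(bits)

def make_ind(n):
    return {"bits": [random.randint(0, 1) for _ in range(n)],
            "pm": random.uniform(0.001, 0.2)}   # per-individual mutation rate

def mutate(ind):
    # Self-adaptation: perturb the mutation rate first, then apply it.
    ind["pm"] = min(0.5, max(1e-4, ind["pm"] * random.lognormvariate(0.0, 0.2)))
    ind["bits"] = [b ^ (random.random() < ind["pm"]) for b in ind["bits"]]

def crossover(a, b):
    cut = random.randrange(1, len(a["bits"]))
    return {"bits": a["bits"][:cut] + b["bits"][cut:],
            "pm": (a["pm"] + b["pm"]) / 2}

def meta_ga(n=50, pop_size=40, gens=100):
    pop = [make_ind(n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda i: fitness(i["bits"]), reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            child = crossover(*random.sample(parents, 2))
            mutate(child)
            children.append(child)
        pop = parents + children
    return max(fitness(i["bits"]) for i in pop)

random.seed(1)
print("best fitness:", meta_ga())
```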
153

Mk4: programa para síntese de funções majoritárias com até 4 variáveis de entrada / Mk4: a program for the synthesis of majority functions with up to 4 input variables

Muniz, Jeferson de Lima January 2019 (has links)
Advisor: Alexandre César Rodrigues da Silva / Abstract: With the evolution of technology, ICs (Integrated Circuits) built with CMOS (Complementary Metal-Oxide Semiconductor) technology have become ever smaller and more efficient; however, the technology is reaching its physical limits. To shrink digital circuits further, new technologies have been developed, such as QCA (Quantum-Dot Cellular Automata), which, together with majority logic, has attracted the interest of the academic community in the development of synthesis and optimization tools. In this work a program named MK4 was implemented, whose purpose is to minimize majority functions of up to four input variables using the ideas behind the Karnaugh map. The results obtained by MK4 were compared with those of the exact_mig program: of 65,536 functions compared, 92.60% of the functions generated by MK4 had costs equal to or lower than those of the functions generated by exact_mig. / Master's
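For readers unfamiliar with the majority logic the abstract builds on: a three-input majority gate M(a, b, c) outputs the value held by at least two of its inputs, and AND/OR are the special cases M(a, b, 0) and M(a, b, 1). The snippet below illustrates only this primitive; it is not the MK4 minimization algorithm.

```python
from itertools import product

def maj(a, b, c):
    """Three-input majority gate: 1 when at least two inputs are 1."""
    return (a & b) | (a & c) | (b & c)

# AND and OR fall out of majority by fixing one input to a constant.
for a, b in product([0, 1], repeat=2):
    assert maj(a, b, 0) == (a & b)      # M(a, b, 0) = AND
    assert maj(a, b, 1) == (a | b)      # M(a, b, 1) = OR

# Truth table of a small nested majority expression over four variables.
for x in product([0, 1], repeat=4):
    print(x, "->", maj(x[0], maj(x[1], x[2], 0), x[3]))
```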
154

Analytical logical effort formulation for local sizing / Formulação analítica baseada em logical effort para dimensionamento local

Alegretti, Caio Graco Prates January 2013 (has links)
The microelectronics industry has been relying more and more on cell-based design methodology to face the growing complexity of digital integrated circuit design, since cell-based integrated circuits are designed faster and more cheaply than full-custom circuits. Nevertheless, in spite of advances in Electronic Design Automation, cell-based digital integrated circuits show inferior performance compared with full-custom circuits. It is therefore desirable to bring the performance of cell-based circuits closer to that of full-custom circuits without compromising design costs. With this goal in mind, this thesis presents contributions toward an automatic local-optimization flow for cell-based digital circuits. Local optimization means optimizing the circuit within small context windows while taking the global context into account. It may include detecting and isolating critical regions of the circuit and generating logic and transistor networks of different topologies, which are sized according to the existing design constraints. Since local optimizations act in a reduced context, several solutions may be obtained under the local constraints, out of which the fittest is chosen to replace the original subcircuit (critical region). The specific contribution of this thesis is a subcircuit sizing method capable of obtaining minimum active-area solutions while respecting the maximum input capacitance, the output load to be driven, and the imposed delay constraint. The method is based on the logical effort formulation; its main contribution is to differentiate the area expression analytically to obtain minimum area, instead of differentiating the delay to obtain minimum delay as in the traditional logical effort formulation. Electrical simulations show that the proposed method is very precise for a first-order approach, with average errors of 1.48% in power dissipation, 2.28% in propagation delay, and 6.5% in transistor sizes.
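For context, here is a sketch of the traditional logical effort sizing that the thesis departs from: each stage contributes delay d = g·h + p (logical effort, electrical effort, parasitic), and the total delay of a chain is minimized when every stage bears the same effort f = (GH)^(1/N). The gate parameters below are textbook values, not numbers from the thesis, and branching is ignored for simplicity.

```python
# Textbook logical effort parameters (name, g, p) -- illustrative, not from
# the thesis: inverter, 2-input NAND, 2-input NOR.
gates = [("inv", 1.0, 1.0), ("nand2", 4.0 / 3.0, 2.0), ("nor2", 5.0 / 3.0, 2.0)]

def size_chain(chain, c_in, c_load):
    """Classic minimum-delay sizing: equalize stage effort f = (G*H)^(1/N)."""
    G = 1.0
    for _, g, _ in chain:
        G *= g                         # path logical effort (no branching assumed)
    H = c_load / c_in                  # path electrical effort
    N = len(chain)
    f = (G * H) ** (1.0 / N)           # optimal effort borne by every stage
    caps = [c_load]                    # work backwards from the load
    for _, g, _ in reversed(chain):
        caps.append(caps[-1] * g / f)  # input capacitance of each stage
    caps.reverse()                     # caps[0..N-1] = stage input caps, caps[N] = load
    delay = sum(f + p for _, _, p in chain)
    return caps, delay

caps, delay = size_chain(gates, c_in=1.0, c_load=64.0)
print("input caps per stage:", [round(c, 2) for c in caps[:-1]])
print("minimum path delay (normalized units):", round(delay, 2))
```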
155

Comportamento de dois reatores em batelada seqüenciais aeróbios com diferentes idades do lodo e retorno total do lodo em excesso após desintegração com ultra-som / Behavior of two aerobic sequential batch reactors with different sludge ages and total return of excess sludge after disintegration by ultrasound

Campos, André Luís de Oliva 18 October 2002 (has links)
The purpose of this study was to reduce the sludge generated in aerobic systems by recirculating the excess sludge, after disintegration by ultrasound, to the aeration tank of each reactor. Two aerobic sequencing batch reactors (A and B) were used, operating with different sludge ages (12 and 8 days, respectively) and twelve-hour cycles. The reactors were operated in two stages. In the first stage, called control, the reactors were operated for 130 days without sludge recycling, to characterize their behavior and allow comparison with the subsequent stage, called test. In the control stage, the behavior of COD, solids, and nutrients was analyzed, and ultrasound tests were performed to choose the exposure time and sample volume. Reactor A showed good COD removal (90.9%) but incomplete nitrification, achieving only the conversion of organic nitrogen to ammonia; phosphorus removal was around 60%. Reactor B also showed good COD removal (87.7%) and formed nitrate, though nitrification was incomplete; phosphorus removal was 57%. In the test stage, which lasted approximately 90 days, the reactors were operated with total return of the sludge after ultrasonic disintegration. Influent COD increased due to the return of disintegrated sludge, as did nitrogen and phosphorus levels. Solids concentration in the reactors also increased, with reactor A showing the larger increase. Both reactors maintained COD removal efficiency comparable to the control stage (92.0% for reactor A and 91% for reactor B). Regarding nutrients, both reactors showed a noticeable improvement in nitrification: reactor A achieved a nearly complete reduction of organic nitrogen, though not full nitrification, and reactor B removed organic nitrogen completely. Phosphorus removal decreased in the test stage (42% for reactor A and 44% for reactor B). The analyses indicated that sludge disintegration and its return to the aeration tank caused no operational problems in the reactors; nitrification improved and phosphorus removal did not drop sharply. Compared with the problems of sludge transport, treatment, and final disposal, sludge reduction is a promising alternative that deserves further study.
156

Iterative Methods to Solve Systems of Nonlinear Algebraic Equations

Alam, Md Shafiful 01 April 2018 (has links)
Iterative methods have been an important area of study in numerical analysis since the inception of computational science, with uses ranging from solving algebraic equations to systems of differential equations and beyond. In this thesis we discuss several iterative methods; our main focus, however, is Newton's method. We present a detailed study of Newton's method, its order of convergence, and the asymptotic error constant when solving problems of various types, and we analyze several pitfalls that can affect convergence. We also pose necessary and sufficient conditions on the function f for higher orders of convergence. Different acceleration techniques are discussed, with analysis of the asymptotic behavior of the iterates. Analogies between single-variable and multivariable problems are detailed. We also explore some interesting phenomena when analyzing Newton's method for complex variables.
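A minimal sketch of the method and of one way to estimate its order of convergence numerically, assuming a simple root (where Newton's method is quadratic); the test function is a classic example, not one drawn from the thesis.

```python
import math

def newton(f, fprime, x0, tol=1e-14, max_iter=50):
    """Newton's method, returning the full iterate history."""
    xs = [x0]
    for _ in range(max_iter):
        x = xs[-1]
        step = f(x) / fprime(x)
        xs.append(x - step)
        if abs(step) < tol:
            break
    return xs

# Root of f(x) = x^3 - 2x - 5, Newton's own historical example.
xs = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
root = xs[-1]

# Estimate the order p from successive errors e_{k+1} ~ C * e_k^p;
# the ratio log(e_{k+1}) / log(e_k) approaches 2 for a simple root.
errs = [abs(x - root) for x in xs[:-1] if x != root]
for e0, e1 in zip(errs, errs[1:]):
    if e1 > 0:
        print("estimated order:", math.log(e1) / math.log(e0))
```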
157

Calculation of sensor redundancy degree for linear sensor systems

Govindaraj, Santhosh 01 May 2010 (has links)
Rapid developments in sensors and related technology have made automation possible in many processes across diverse fields, and have enabled sensor-based fault diagnosis and quality improvement. These tasks depend heavily on the sensor network for accurate measurements. The two major problems that affect the reliability of a sensor system/network are sensor failures and sensor anomalies. Using redundant sensors offers some tolerance against both problems, so redundancy analysis of the sensor system is essential to understand the system's robustness against them. The degree of sensor redundancy defined in this thesis is closely tied to the fault tolerance of the sensor network and can be viewed as a parameter reflecting the effectiveness of the sensor system design. In this thesis, an efficient algorithm to determine the degree of sensor redundancy for linear sensor systems is developed. First, the redundancy structure is linked, via matroid theory, with the matroid structure developed from the design matrix. The matroid problem equivalent to the degree of sensor redundancy is then formulated mathematically, and the solution is obtained by solving a series of l1-norm minimization problems. For many problems tested, the proposed algorithm is more efficient than known alternatives such as basic exhaustive search and the bound and decomposition method. The algorithm is tested on problem instances from the literature and on a wide range of simulated problems. The results show that it determines the degree of redundancy more accurately when the design matrix is dense than when it is sparse, and it provided accurate results for most problems in relatively short computation times.
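The thesis's series of l1-norm minimization problems is not spelled out in the abstract, so the sketch below only illustrates the flavor, under an assumption of mine: a redundancy relation in a linear sensor model y = Ax is a sparse vector z with zᵀA = 0 (a consistency check among sensor readings), and the sparsest such relation can be approximated by pinning each coordinate to 1 in turn and solving an l1 minimization cast as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

def sparsest_relation(A):
    """Approximate the sparsest nonzero left-null-space vector of A
    (a hypothetical 'redundancy relation' among the m sensor readings)
    by solving min ||z||_1 s.t. A^T z = 0, z_i = 1, for each pinned i."""
    m, n = A.shape
    best = None
    for i in range(m):
        c = np.ones(2 * m)                    # z = u - v, u, v >= 0
        A_eq = np.hstack([A.T, -A.T])         # A^T (u - v) = 0
        b_eq = np.zeros(n)
        pin = np.zeros(2 * m)
        pin[i], pin[m + i] = 1.0, -1.0        # (u - v)_i = 1
        A_eq = np.vstack([A_eq, pin])
        b_eq = np.append(b_eq, 1.0)
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        if res.success:
            z = res.x[:m] - res.x[m:]
            supp = int(np.sum(np.abs(z) > 1e-8))
            if best is None or supp < best[0]:
                best = (supp, z)
    return best

# Four sensors measuring two states; row 2 equals row 0 + row 1 by design.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
supp, z = sparsest_relation(A)
print("sparsest redundancy relation uses", supp, "sensors:", np.round(z, 3))
```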
158

Zero emission management

Lam, Lai Fong Janna. January 2001 (has links) (PDF)
Author's name appears as Lam Lai Fong Janna on front cover. Bibliography: leaves 117-120.
159

Computational petrology: Subsolidus equilibria in the upper mantle

Sommacal, Silvano, silvano.sommacal@anu.edu.au January 2004 (has links)
Processes that take place in the Earth's mantle are not accessible to direct observation. Natural samples of mantle material transported to the surface as xenoliths provide useful information on phase relations and on the compositions of phases at the pressure and temperature conditions of each rock fragment. In the past, considerable effort has been devoted by petrologists to investigating upper mantle processes experimentally. Results of high-temperature, high-pressure experiments have provided insight into lower crust-upper mantle phase relations as a function of temperature, pressure, and composition. However, the attainment of equilibrium in these experiments, especially in complex systems, may be very difficult to test rigorously, and experimental results may require extrapolation to different pressures, temperatures, or bulk compositions. More recently, thermodynamic modeling has proved a very powerful approach to this problem, allowing the physicochemical conditions at which mantle processes occur to be deciphered. On the other hand, no comprehensive thermodynamic model exists to investigate lower crust-upper mantle phase assemblages in complex systems.

In this study, a new thermodynamic model to describe phase equilibria between silicate and/or oxide crystalline phases has been derived. For every solution phase, the molar Gibbs free energy is given by the sum of contributions from the energy of the end-members, ideal mixing on sites, and excess site-mixing terms. It is argued here that the end-member term of the Gibbs free energy for complex solid-solution phases (e.g. pyroxene, spinel) has not previously been treated in the most appropriate manner. As an example, the correct expression of this term for a pyroxene solution in a general (Na-Ca-Mg-Fe2+-Al-Cr-Fe3+-Si-Ti) system is presented, and the principle underlying its formulation for any complex solution phase is elucidated.

Based on the thermodynamic model, an algorithm to compute lower crust-upper mantle phase equilibria for subsolidus mineral assemblages as a function of composition, temperature, and pressure has been developed. It includes a new way to represent the total Gibbs free energy of any multi-phase complex system. At a given temperature and pressure, a closed multi-phase system is at equilibrium when the chemical compositions of the phases present and the number of moles of each are such that the Gibbs free energy of the system reaches its minimum value; mathematically, the determination of equilibrium phase assemblages is thus a constrained minimization problem. To solve the Gibbs free energy minimization problem, a 'Feasible Iterate Sequential Quadratic Programming' (FSQP) method is employed, minimizing the system's Gibbs free energy under several linear and non-linear constraints. The algorithm, coded as a highly flexible FORTRAN program (named 'Gib'), is currently set up to perform equilibrium calculations in Na2O-CaO-MgO-FeO-Al2O3-Cr2O3-Fe2O3-SiO2-TiO2 systems, but is designed so that any other oxide component can easily be added.

To accurately forward-model phase equilibria with 'Gib', precise estimates are needed of the thermodynamic data for mineral end-members and of the solution parameters adopted in the computation. These parameters therefore had to be derived or refined for every solution phase in the investigated systems. A computer program (called 'GibInv') has been set up, and its implementation is described here in detail, that allows the simultaneous refinement of any of the end-member and mixing parameters; internally consistent thermodynamic data are obtained using the Bayesian technique. After being successfully tested on a synthetic case, the program is applied to pyroxene assemblages in the system CaO-MgO-FeO-Al2O3-SiO2 (CMFAS) and its constituent subsystems, and preliminary results are presented.

The new thermodynamic model is then applied to assemblages of Ca-Mg-Fe olivines and of coexisting pyroxenes (orthopyroxene, low-Ca and high-Ca clinopyroxene; two or three depending on T-P-bulk composition conditions) in the CMFAS system and its subsystems. Olivine and pyroxene solid-solution and end-member parameters are refined, partly using 'GibInv' and partly on a trial-and-error basis, and new parameters are derived where necessary. Olivine/pyroxene phase relations within such systems are calculated over a wide range of temperatures and pressures and compare very favorably with experimental constraints.
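As a toy analogue of the constrained Gibbs minimization described above (not the thesis's FSQP/'Gib' code), the sketch below distributes two components between two ideal-solution phases by minimizing total Gibbs energy under mass-balance constraints, using SciPy's SLSQP, a sequential quadratic programming method. All standard-state potentials and the bulk composition are invented; with these numbers the bulk lies inside the two-phase region, so the minimizer should return two coexisting phases.

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1200.0                  # gas constant (J/(mol K)), temperature (K)
# Standard-state chemical potentials (J/mol) of components A and B in two
# solution phases -- invented numbers chosen so two phases coexist.
mu0 = {("A", "alpha"): -50_000.0, ("B", "alpha"): -30_000.0,
       ("A", "beta"):  -48_000.0, ("B", "beta"):  -36_000.0}
species = [("A", "alpha"), ("B", "alpha"), ("A", "beta"), ("B", "beta")]
n_tot = {"A": 1.5, "B": 0.5}          # bulk composition (moles)

def gibbs(n):
    """Total Gibbs energy of two ideal solutions: G = sum n_i (mu0_i + RT ln x_i)."""
    g = 0.0
    for phase in ("alpha", "beta"):
        idx = [k for k, (_, p) in enumerate(species) if p == phase]
        n_phase = sum(n[k] for k in idx)
        for k in idx:
            g += n[k] * (mu0[species[k]] + R * T * np.log(n[k] / n_phase))
    return g

cons = [{"type": "eq",                # element mass-balance constraints
         "fun": lambda n, e=e: sum(n[k] for k, (c, _) in enumerate(species)
                                   if c == e) - n_tot[e]}
        for e in ("A", "B")]
x0 = np.array([0.75, 0.25, 0.75, 0.25])      # feasible interior starting point
res = minimize(gibbs, x0, method="SLSQP",
               bounds=[(1e-9, None)] * 4, constraints=cons)
for sp, n in zip(species, res.x):
    print(sp, round(float(n), 4))             # equilibrium moles of each species
```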
160

Stochastic Transportation-Inventory Network Design Problem

Shu, Jia, Teo, Chung Piaw, Shen, Zuo-Jun Max 01 1900 (has links)
In this paper, we study the stochastic transportation-inventory network design problem involving one supplier and multiple retailers. Each retailer faces uncertain demand, so some amount of safety stock must be maintained to achieve suitable service levels. However, risk-pooling benefits may be achieved by allowing some retailers to serve as distribution centers (and therefore inventory storage locations) for other retailers. The problem is to determine which retailers should serve as distribution centers and how to allocate the remaining retailers to them. Shen et al. (2000) and Daskin et al. (2001) formulated this problem as a set-covering integer-programming model. The pricing subproblem that arises from the column generation algorithm gives rise to a new class of submodular function minimization problems. They provided efficient algorithms only for two special cases, and resorted to the ellipsoid method to solve the general pricing problem, which runs in O(n⁷ log n) time, where n is the number of retailers. In this paper, we show that by exploiting the special structure of the pricing problem, we can solve it in O(n² log n) time. Our approach implicitly uses the fact that the set of all lines in the 2-D plane has low VC-dimension. Computational results show that moderate-size transportation-inventory network design problems can be solved efficiently via this approach. / Singapore-MIT Alliance (SMA)
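The risk-pooling benefit the abstract mentions has a simple quantitative core, sketched below with invented numbers: under independent normal demands, safety stock scales with the demand standard deviation, and pooling retailers at one distribution center replaces a sum of standard deviations with the root of summed variances.

```python
import math

z = 1.645                                  # service factor for a ~95% level
lead_time = 4.0                            # replenishment lead time (periods)
sigmas = [20.0, 35.0, 15.0, 30.0]          # per-retailer demand std devs

# Decentralized: each retailer holds its own safety stock z * sigma * sqrt(L).
decentralized = sum(z * s * math.sqrt(lead_time) for s in sigmas)

# Pooled at one distribution center: independent demands add in quadrature,
# so the pooled std dev is the root of the summed variances.
pooled_sigma = math.sqrt(sum(s * s for s in sigmas))
pooled = z * pooled_sigma * math.sqrt(lead_time)

print(f"decentralized safety stock: {decentralized:.1f}")
print(f"pooled safety stock:        {pooled:.1f}")
print(f"risk-pooling saving:        {1 - pooled / decentralized:.1%}")
```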
