41

Process Control in High-Noise Environments Using A Limited Number Of Measurements

Barajas, Leandro G. January 2003 (has links)
The topic of this dissertation is the derivation, development, and evaluation of novel hybrid algorithms for process control that use a limited number of measurements and that can operate in the presence of large amounts of process noise. As an initial step, affine and neural-network statistical process models are developed in order to simulate the steady-state system behavior. Such models are vitally important in the evaluation, testing, and improvement of all other process controllers referred to in this work. Afterwards, fuzzy-logic controller rules are assimilated into a mathematical characterization of a model that includes the modes and mode-transition rules defining a hybrid hierarchical process controller. The main processing entity in this framework is a closed-loop control algorithm that performs a global and then a local optimization in order to asymptotically reach minimum bias error; this is done while requiring a minimum number of iterations, so that a desired operational window is reached promptly. The results of this research are applied to yield optimization of surface-mount technology manufacturing lines. This work achieves a practical degree of control over solder-paste volume deposition in the Stencil Printing Process (SPP). Results show that it is possible to change the operating point of the process by modifying certain machine parameters and even to compensate for the difference in deposit height due to a change in print direction.
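The global-then-local search at the heart of such a framework can be sketched as follows. This is a minimal illustration, not the author's algorithm: the process model, operating-window bounds and measurement-averaging depth are all hypothetical, and the fuzzy supervisory layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_process(x):
    """Stand-in steady-state model: quadratic bias plus heavy process noise."""
    return (x - 1.3) ** 2 + 0.05 * rng.standard_normal()

def global_then_local(measure, lo, hi, coarse=11, local_iters=20, step=0.1):
    # Global phase: coarse scan of the operating window, averaging a few
    # measurements per candidate to suppress process noise.
    grid = np.linspace(lo, hi, coarse)
    scores = [np.mean([measure(g) for _ in range(5)]) for g in grid]
    x = grid[int(np.argmin(scores))]
    # Local phase: probe both neighbours, keep the better one, and contract
    # the step so the bias error shrinks within few extra iterations.
    best = np.mean([measure(x) for _ in range(5)])
    for _ in range(local_iters):
        for cand in (x - step, x + step):
            s = np.mean([measure(cand) for _ in range(5)])
            if s < best:
                x, best = cand, s
        step *= 0.7
    return x

print(global_then_local(noisy_process, 0.0, 3.0))  # converges near x = 1.3
```

Averaging repeated measurements inside each phase is one simple way to respect the limited-measurements-in-high-noise constraint: the budget is spent only where it reduces uncertainty about the comparison being made.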
42

Méthode de Galerkin Discontinue et intégrations explicites-implicites en temps basées sur un découplage des degrés de liberté. Applications au système des équations de Navier-Stokes. / Discontinuous Galerkin method and explicit-implicit time integration based on a decoupling of the degrees of freedom. Applications to the Navier-Stokes equations.

Gérald, Sophie 26 November 2013 (has links) (PDF)
In computational fluid dynamics, a key challenge is the development of high-order approximation methods, such as Discontinuous Galerkin (DG) methods. While these methods make it possible to envisage the simulation of complex flows as an alternative to the usual second-order methods, they suffer from a severe time-step restriction when combined with an explicit time discretization. This thesis develops an efficient explicit-implicit time-integration strategy, combined with a high-order DG spatial discretization, for convection-dominated unsteady flows of viscous compressible fluids modeled by the Navier-Stokes equations. The DG spatial discretization uses inviscid and viscous numerical fluxes with a compact stencil. In the presence of curved material boundaries, high order is preserved by discretizing the computational domain with an iso-parametric representation. The time-integration strategy relies on a Strang operator splitting, in which the convection terms are solved explicitly and the diffusion terms implicitly. Its efficiency results from a simplification of the implicit scheme: the implicit matrix is approximated with a Jacobian-free method, and the degrees of freedom of the scheme are decoupled. As a result, the size of the linear system to be solved and the computational time of the solve are significantly reduced. Finally, the numerical scheme is validated and its performance assessed on five well-referenced test cases in two space dimensions.
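The splitting described above can be illustrated on a 1-D advection-diffusion model problem. This is a schematic sketch only, with assumed parameters throughout: first-order upwind convection and backward-Euler diffusion on a periodic grid, whereas the thesis uses high-order DG in space and a Jacobian-free, decoupled implicit solve.

```python
import numpy as np

# Strang splitting for u_t + a u_x = nu u_xx, periodic on [0, 1):
# convection explicit (upwind), diffusion implicit (backward Euler),
# applied as C(dt/2) -> D(dt) -> C(dt/2).
n, a, nu = 200, 1.0, 0.01
dx = 1.0 / n
dt = 0.5 * dx / a                       # dt limited by the convective CFL only
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5) ** 2)       # initial Gaussian pulse

def convect(u, dt):
    return u - a * dt / dx * (u - np.roll(u, 1))   # first-order upwind, a > 0

# Implicit diffusion operator (I - nu*dt*L), assembled once; the implicit
# treatment removes the parabolic dt ~ dx^2 restriction entirely.
I = np.eye(n)
L = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / dx**2
D = I - nu * dt * L

for _ in range(400):
    u = convect(u, dt / 2)
    u = np.linalg.solve(D, u)           # diffusion step, unconditionally stable
    u = convect(u, dt / 2)
```

The point of the thesis's simplifications is that, for the real DG operator, the matrix D is never formed this way: its action is approximated Jacobian-free and the degrees of freedom are decoupled, which shrinks both the linear system and the solve time.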
43

An empirically derived system for high-speed rendering

Rautenbach, Helperus Ritzema 25 September 2012 (has links)
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special-effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for an increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube-mapping computations while the GPU is tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person-perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft’s High Level Shader Language) for effects such as high-dynamic-range rendering (HDR), dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry. Critical analysis was performed via scripted camera movement and object and light-source additions – done not only to ensure consistent testing, but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments. The evaluation criteria (identified to assess the relationship between rendering quality and performance) allow us to effectively cycle algorithms based on empirical results and to distribute specific processing (cube mapping and physics) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions maximises rendering quality and performance, and shows that it is possible to render high-quality visual effects with realism without overburdening scarce computational resources.
Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor-intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions, and through the exploitation of algorithmic strengths, can high-quality real-time special effects and highly accurate calculations become as common as texture mapping. Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Using this system, however, ensures that performance vs. quality is always optimised, not only for the game as a whole but also for the current scene being rendered – some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns; such slowdowns are avoided thanks to our system’s dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
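A toy version of such a selection engine follows. The effect tiers, their benchmarked costs and the frame budget are all invented for illustration; the thesis's engine selects among real shader implementations using its measured performance data.

```python
# Hypothetical quality tiers for one effect family, ordered best-first,
# with per-frame costs as an empirical benchmarking pass might report them.
SHADOW_TIERS = [
    {"name": "cascaded_shadow_maps", "cost_ms": 6.2},
    {"name": "single_shadow_map",    "cost_ms": 2.8},
    {"name": "blob_shadows",         "cost_ms": 0.4},
]

def select_tier(tiers, frame_budget_ms, spent_ms, distance, near_cutoff=20.0):
    """Pick the best algorithm that still fits the remaining frame budget;
    distant objects may be served by a cheaper tier straight away."""
    candidates = tiers if distance < near_cutoff else tiers[1:]
    for tier in candidates:
        if spent_ms + tier["cost_ms"] <= frame_budget_ms:
            return tier
    return tiers[-1]  # degrade gracefully rather than miss the frame

# 60 fps budget (16.6 ms), 12 ms already spent, nearby shadow caster:
# the top tier no longer fits, so the mid tier is chosen.
print(select_tier(SHADOW_TIERS, 16.6, 12.0, distance=5.0)["name"])
```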
44

Problematika šablonového tisku pájecí pasty pro součástky s malou roztečí vývodů / Problems in Solder Paste Stencil Printing for Fine Pitch Components

Šimeček, Ondřej January 2011 (has links)
Despite the indisputable advantages of fine-pitch components, a number of difficulties must be reckoned with during production, above all the increased accuracy requirements for component placement and solder-paste printing. In this work I am concerned with the problems of solder-paste printing for these components and with their evaluation using statistical process control (SPC). For the evaluation I used 3D paste inspection based on laser scanning of the surface. The output of this work is a description of the principles of solder-paste printing, together with GR&R and SPC analyses and histograms of the printing results for selected outputs. In my master's thesis I also focused on changing the stencil pattern design for problematic components and on an economic evaluation of those adjustments.
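As an illustration of the SPC evaluation step, a minimal X-bar control chart over solder-paste deposit volumes might be computed as below. The data here are simulated and the subgroup size is assumed; only the chart constant is standard.

```python
import numpy as np

# Simulated deposit volumes (% of nominal) in 25 rational subgroups of 5
rng = np.random.default_rng(0)
subgroups = rng.normal(loc=100.0, scale=4.0, size=(25, 5))

xbar = subgroups.mean(axis=1)                        # subgroup means
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
A2 = 0.577                                           # Shewhart constant, n = 5

center = xbar.mean()
ucl, lcl = center + A2 * rbar, center - A2 * rbar
flagged = np.where((xbar > ucl) | (xbar < lcl))[0]   # out-of-control subgroups
print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}  flagged={flagged}")
```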
45

This is My Family: An Erasure

Rehman, Sadia 02 August 2017 (has links)
No description available.
46

Development of High-order CENO Finite-volume Schemes with Block-based Adaptive Mesh Refinement (AMR)

Ivan, Lucian 31 August 2011 (has links)
A high-order central essentially non-oscillatory (CENO) finite-volume scheme in combination with a block-based adaptive mesh refinement (AMR) algorithm is proposed for the solution of hyperbolic and elliptic systems of conservation laws on body-fitted multi-block meshes. The spatial discretization of the hyperbolic (inviscid) terms is based on a hybrid solution reconstruction procedure that combines an unlimited high-order k-exact least-squares reconstruction technique, following from a fixed central stencil, with a monotonicity-preserving limited piecewise-linear reconstruction algorithm. The limited reconstruction is applied to computational cells with under-resolved solution content, and the unlimited k-exact reconstruction procedure is used for cells in which the solution is fully resolved. Switching in the hybrid procedure is determined by a solution smoothness indicator. The hybrid approach avoids the complexity associated with other ENO schemes that require reconstruction on multiple stencils and therefore seems very well suited for extension to unstructured meshes. The high-order elliptic (viscous) fluxes are computed based on a k-order accurate average gradient derived from a (k+1)-order accurate reconstruction. A novel h-refinement criterion based on the solution smoothness indicator is used to direct the steady and unsteady refinement of the AMR mesh. The predictive capabilities of the proposed high-order AMR scheme are demonstrated for the Euler and Navier-Stokes equations governing two-dimensional compressible gaseous flows, as well as for advection-diffusion problems characterized by the full range of Péclet numbers, Pe. The ability of the scheme to accurately represent solutions with smooth extrema and yet robustly handle under-resolved and/or non-smooth solution content (i.e., shocks and other discontinuities) is shown for a range of problems. Moreover, the ability to perform mesh refinement in regions of smooth but under-resolved and/or non-smooth solution content, so as to achieve the desired resolution, is also demonstrated.
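The hybrid switch itself is simple to state. The sketch below mimics it in one dimension with a crude smoothness proxy; the thesis defines a specific smoothness indicator and full k-exact reconstruction, neither of which is reproduced here.

```python
import numpy as np

def hybrid_slopes(u, cutoff=0.5):
    """Per-cell CENO-style switch: unlimited central slope where the solution
    looks smooth, monotonicity-preserving minmod slope where it does not."""
    s = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        dl, dr = u[i] - u[i - 1], u[i + 1] - u[i]
        # crude proxy: disagreement between one-sided differences signals
        # under-resolved or non-smooth solution content
        rough = abs(dl - dr) / (abs(dl) + abs(dr) + 1e-12)
        if rough < cutoff:                  # smooth: keep unlimited reconstruction
            s[i] = 0.5 * (dl + dr)
        elif dl * dr > 0:                   # non-smooth: limit with minmod
            s[i] = np.sign(dl) * min(abs(dl), abs(dr))
    return s

u = np.concatenate([np.sin(np.linspace(0, np.pi, 20)), np.ones(5), np.zeros(5)])
print(hybrid_slopes(u))   # limited slopes appear only around the discontinuity
```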
48

Calcul flottant haute performance sur circuits reconfigurables / High-performance floating-point computing on reconfigurable circuits

Pasca, Bogdan Mihai 21 September 2011 (has links)
Due to their potential performance and unmatched flexibility, FPGA-based accelerators are part of more and more high-performance computing systems. However, exploiting this flexibility for accelerating floating-point computations by manually using classical circuit description languages (VHDL or Verilog) is very difficult, and sometimes impossible. This thesis has contributed to the development of the FloPoCo software, a C++ framework for describing flexible FPGA-specific arithmetic operators. This framework explicitly separates the description of the combinatorial functionality of an arithmetic operator from its pipelining for a given precision, operating frequency and target FPGA. In order to use FloPoCo for designing high-performance floating-point operators, we first had to design optimized basic blocks. We first developed pipelined addition architectures exploiting the fast-carry lines present in modern FPGAs. Next, we focused on multiplication architectures: using tiling techniques, we proposed novel architectures for large multipliers, but also truncated multipliers, based on the multipliers found in modern FPGA DSP blocks. The floating-point evaluation of elementary functions often reduces to the fixed-point evaluation of a function; we present a generic FloPoCo operator which inputs the expression of the function to evaluate, together with its input and output precisions, and builds an optimized polynomial evaluator for this function. Using this building block, we designed floating-point operators for the square-root and exponential functions which significantly outperform existing operators. Finally, we also made use of advanced compilation techniques to adapt the execution of a C program to the flexible pipelines of our operators; FloPoCo could thus be used to implement complete applications on FPGAs.
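To make concrete what such a generated polynomial evaluator computes, here is a Python model of fixed-point Horner evaluation. This models the arithmetic only; it is not FloPoCo's C++ API, and the datapath width and coefficients (a truncated Taylor series for exp on [0,1)) are chosen purely for illustration.

```python
import math

FRAC = 24                                   # fractional bits of the datapath

def to_fix(x):   return int(round(x * (1 << FRAC)))
def from_fix(v): return v / (1 << FRAC)

def horner_fixed(coeffs_fix, x_fix):
    """Evaluate a polynomial in Q(FRAC) fixed point, highest degree first;
    each multiply is rescaled by truncation, as a hardware datapath would."""
    acc = coeffs_fix[0]
    for c in coeffs_fix[1:]:
        acc = ((acc * x_fix) >> FRAC) + c
    return acc

coeffs = [to_fix(c) for c in (1/6, 1/2, 1.0, 1.0)]   # x^3/6 + x^2/2 + x + 1
x = 0.375
print(from_fix(horner_fixed(coeffs, to_fix(x))), math.exp(x))
```

In hardware, the interesting questions are precisely the ones this model hides: how wide each intermediate must be for a given output accuracy, and how deep the pipeline must be for a given target frequency; these are the trade-offs the generic operator optimizes.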
