121 | Metody FFD / FFD methods. Novák, Jiří (January 2017).
The diploma thesis deals with the topic of free-form deformations. The main goals of this work were the elaboration of the theoretical background of this topic and the implementation of selected free-form deformation methods. The first part describes the required spline theory, matrix calculus and free-form deformations. The practical part presents three programs. The first program compares the selected free-form deformation methods on the example of a 4x4 grid of control points. The second program generalises this to an arbitrary grid of control points. The last program is based on direct manipulation of an arbitrary surface point and the subsequent recomputation of the control points needed to obtain the desired shape.
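For illustration only, a minimal 2D sketch of a Bezier-type free-form deformation over a 4x4 grid of control points is given below; the abstract does not specify the exact formulation used in the programs, so the function names (bernstein, ffd_2d), the dimensionality, and the sample displacement are assumptions.

    import numpy as np
    from math import comb

    def bernstein(n, i, t):
        # Bernstein basis polynomial B_{i,n}(t)
        return comb(n, i) * t**i * (1.0 - t)**(n - i)

    def ffd_2d(points, control_grid):
        # Map points with lattice coordinates (u, v) in [0,1]^2 through a
        # Bezier-type FFD defined by an (m+1) x (n+1) grid of 2D control points.
        m, n = control_grid.shape[0] - 1, control_grid.shape[1] - 1
        out = np.zeros_like(points, dtype=float)
        for k, (u, v) in enumerate(points):
            p = np.zeros(2)
            for i in range(m + 1):
                for j in range(n + 1):
                    p += bernstein(m, i, u) * bernstein(n, j, v) * control_grid[i, j]
            out[k] = p
        return out

    # A 4x4 control grid spanning the unit square; displacing one control
    # point deforms every embedded point smoothly.
    grid = np.array([[(i / 3.0, j / 3.0) for j in range(4)] for i in range(4)])
    grid[1, 2] += np.array([0.15, 0.10])
    pts = np.array([[0.5, 0.5], [0.25, 0.75]])
    print(ffd_2d(pts, grid))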
122 | Geometric processing of CAD data and meshes as input of integral equation solvers. Randrianarivony, Maharavo (30 September 2006).
Among the presently known numerical solvers of integral equations, two main
categories of approaches can be distinguished: mesh-free approaches and mesh-based approaches.
We will propose some techniques to process geometric data so that they can
be efficiently used in subsequent numerical treatments of integral equations. In
order to prepare geometric information so that the above two approaches can be
automatically applied, we need the following items:
(1) Splitting a given surface into several four-sided patches,
(2) Generating a diffeomorphism from the unit square to a four-sided patch,
(3) Generating a mesh M on a given surface,
(4) Patching of a given triangulation.
To obtain such a splitting, we first approximate the surfaces by polygonal regions. We then apply quadrangulation techniques that repeatedly remove quadrilaterals. We generate the diffeomorphisms by means of transfinite interpolations of Coons and Gordon types.
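For illustration, a bilinearly blended (first-order) Coons patch, one of the transfinite interpolants just mentioned, can be sketched as follows; the function name and the sample boundary curves are assumptions, and the sketch does not verify that the resulting map is a diffeomorphism, which the thesis requires.

    import numpy as np

    def coons_patch(c0, c1, d0, d1):
        # Bilinearly blended Coons patch from four boundary curves with
        # c0(u) = S(u,0), c1(u) = S(u,1), d0(v) = S(0,v), d1(v) = S(1,v);
        # the curves must agree at the four corners.
        p00, p10 = np.asarray(c0(0.0)), np.asarray(c0(1.0))
        p01, p11 = np.asarray(c1(0.0)), np.asarray(c1(1.0))

        def surface(u, v):
            ruled_u = (1 - v) * np.asarray(c0(u)) + v * np.asarray(c1(u))
            ruled_v = (1 - u) * np.asarray(d0(v)) + u * np.asarray(d1(v))
            bilinear = ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
                        + (1 - u) * v * p01 + u * v * p11)
            return ruled_u + ruled_v - bilinear

        return surface

    # Example boundary: a curved bottom edge and three straight edges.
    c0 = lambda u: np.array([u, 0.25 * np.sin(np.pi * u), 0.0])
    c1 = lambda u: np.array([u, 1.0, 0.0])
    d0 = lambda v: np.array([0.0, v, 0.0])
    d1 = lambda v: np.array([1.0, v, 0.0])
    S = coons_patch(c0, c1, d0, d1)
    print(S(0.5, 0.5))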
The generation of a mesh M from a piecewise Riemannian surface will use some
generalized Delaunay techniques in which the mesh size will be determined with
the help of the Laplace-Beltrami operator.
We will describe our experiences with the IGES format for two reasons.
First, most of our implementations have been done with it. Second, some of the
proposed methodologies assume that the curve and surface representations are
similar to those of IGES.
Patching a mesh consists in approximating or interpolating it by a set of practical
surfaces such as B-spline patches. That approach proves useful when we want to
utilize a mesh-free integral equation solver but the input geometry is represented
as a mesh.
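As a simple sketch of patching in this sense, mesh vertices that are already parameterized over a rectangle can be approximated in the least-squares sense by a tensor-product B-spline surface, for example with SciPy's bisplrep/bisplev routines; the synthetic data and the smoothing value below are assumptions, and the thesis's actual patching procedure is more general than this.

    import numpy as np
    from scipy.interpolate import bisplrep, bisplev

    # Hypothetical input: scattered mesh vertices already parameterized over
    # the unit square (a strong assumption; general four-sided patches need
    # the parameterization step described above).
    rng = np.random.default_rng(0)
    u = rng.uniform(0.0, 1.0, 400)
    v = rng.uniform(0.0, 1.0, 400)
    z = np.sin(2 * np.pi * u) * np.cos(np.pi * v) + 0.01 * rng.normal(size=u.size)

    # Least-squares approximation by a bicubic B-spline patch; s balances
    # closeness of fit against smoothness.
    tck = bisplrep(u, v, z, kx=3, ky=3, s=0.5)

    # Evaluate the fitted patch on a regular grid.
    uu = np.linspace(0.0, 1.0, 5)
    vv = np.linspace(0.0, 1.0, 5)
    print(bisplev(uu, vv, tck).shape)   # (5, 5)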
123 | Detecting Rare Haplotype-Environmental Interaction and Nonlinear Effects of Rare Haplotypes using Bayesian LASSO on Quantitative Traits. Zhang, Han (27 October 2017).
No description available.
124 | Vytvoření interaktivních pomůcek z oblasti 2D počítačové grafiky / Teaching aids for 2D computer graphics. Malina, Jakub (January 2013).
In this master’s thesis we focus on the basic properties of curves used in computer graphics and their practical applicability. We explain how a curve can be understood in general, what polynomial curves are and how they can be composed. We then focus on the description of Bezier curves, especially the Bezier cubic, discuss in more detail some of the fundamental algorithms used for modelling these curves on computers, and show their practical interpretation. We then explain non-uniform rational B-spline curves and the De Boor algorithm. Finally, we discuss the rasterization of line segments, thick lines, circles and ellipses. The aim of the thesis is the creation of a set of interactive applets simulating some of the methods and algorithms discussed in the theoretical part. These applets should facilitate understanding and make teaching more effective.
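For reference, De Boor's algorithm in its common non-rational form can be sketched as follows; the function name, the clamped knot vector, and the sample control polygon are illustrative and are not taken from the applets themselves.

    import numpy as np

    def de_boor(k, t, knots, control, degree):
        # Evaluate a B-spline curve at parameter t, where k is the knot span
        # with knots[k] <= t < knots[k+1], using De Boor's recurrence.
        d = [np.array(control[j + k - degree], dtype=float)
             for j in range(degree + 1)]
        for r in range(1, degree + 1):
            for j in range(degree, r - 1, -1):
                i = j + k - degree
                alpha = (t - knots[i]) / (knots[i + degree + 1 - r] - knots[i])
                d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
        return d[degree]

    # A clamped cubic B-spline with four control points coincides with a
    # cubic Bezier segment, so this evaluates the Bezier cubic at t = 0.5.
    knots = [0, 0, 0, 0, 1, 1, 1, 1]
    control = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
    print(de_boor(k=3, t=0.5, knots=knots, control=control, degree=3))   # [0.5 0.75]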
125 | Advanced signal processing techniques for multi-target tracking. Daniyan, Abdullahi (January 2018).
The multi-target tracking problem essentially involves the recursive joint estimation of the states of an unknown and time-varying number of targets present in a tracking scene, given a series of observations. The problem is challenging because the sequence of observations is noisy and can be corrupted by missed detections and false alarms/clutter, and because target-originated observations are indistinguishable from clutter. Furthermore, whether the targets of interest are point or extended (in terms of spatial extent) poses additional technical challenges. The random finite sets approach provides an elegant and rigorous framework for handling the multi-target tracking problem. In a random finite sets formulation, both the multi-target state and the multi-target observation are modelled as finite-set-valued random variables, that is, random variables that are random in both the number of elements and the values of the elements themselves. Compared with other approaches, the random finite sets approach has the desirable characteristic of being free of explicit data association prior to tracking, and a framework known as finite set statistics is available for working with random finite sets. In this thesis, advanced signal processing techniques are employed to enhance existing and develop new random finite sets based multi-target tracking algorithms for both point and extended targets, with the aim of improving tracking performance in cluttered environments.

Firstly, a new and efficient Kalman-gain aided sequential Monte Carlo probability hypothesis density (KG-SMC-PHD) filter and a cardinalised particle probability hypothesis density (KG-SMC-CPHD) filter are proposed. These filters employ the Kalman gain during the weight update to correct predicted particle states by minimising the mean square error between the estimated measurement and the actual measurement received at a given time, in order to arrive at a more accurate posterior. The technique identifies and selects the particles belonging to a particular target from a given PHD for state correction during weight computation. The proposed SMC-CPHD filter provides a better estimate of the number of targets, and besides the improved tracking accuracy, fewer particles are required. Simulation results confirm the improved tracking performance under several evaluation measures.

Secondly, the KG-SMC-(C)PHD filters are particle filter (PF) based and, as with all PFs, they require resampling to avoid degeneracy. This thesis proposes a new resampling scheme to address a weakness of the systematic resampling method, namely its tendency to resample very low-weight particles, especially when a large number of resampled particles is required, which in turn degrades state estimation.

Thirdly, the KG-SMC-(C)PHD filters perform filtering rather than tracking: they provide only point estimates of target states and do not connect those estimates into target trajectories from one time step to the next. A new post-processing step using game theory, named the GTDA method, is proposed as a solution to this filtering-to-tracking problem.
The GTDA method was employed in the KG-SMC-(C)PHD filters as a post-processing technique and was evaluated using both simulated and real data, the latter obtained with the NI-USRP software-defined radio platform in a passive bistatic radar system.

Lastly, a new technique for the joint tracking and labelling of multiple extended targets is proposed. To achieve multiple extended target tracking, models for the target measurement rate, kinematic component and target extension are defined and jointly propagated in time under the generalised labelled multi-Bernoulli (GLMB) filter framework, a random finite sets based filter. In particular, a Poisson mixture variational Bayesian (PMVB) model is developed to simultaneously estimate the measurement rates of multiple extended targets, while the target extension is modelled using B-splines. The proposed method was evaluated with various performance metrics to demonstrate its effectiveness in tracking multiple extended targets.
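For context, the standard systematic resampling step whose weakness is discussed above can be sketched as follows; the proposed new resampling scheme itself is not reproduced here, and the function name and toy weights are illustrative.

    import numpy as np

    def systematic_resample(weights, rng=None):
        # Draw one uniform offset and take N evenly spaced positions through
        # the cumulative weight distribution; return the selected indices.
        rng = np.random.default_rng() if rng is None else rng
        n = len(weights)
        positions = (rng.uniform() + np.arange(n)) / n
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0   # guard against floating-point round-off
        return np.searchsorted(cumulative, positions)

    # Toy example: a few dominant weights and many near-zero ones.
    w = np.array([0.4, 0.35, 0.2] + [0.05 / 7] * 7)
    w /= w.sum()
    print(systematic_resample(w, rng=np.random.default_rng(1)))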
126 | Adaptive Envelope Protection Methods for Aircraft. Unnikrishnan, Suraj (19 May 2006).
Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, the envelope protection methods that have gained the most acceptance translate limit proximity information into its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state and transient response critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection.

Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system that fully and effectively utilizes the operational flight envelope is very difficult, and with the increasing role of UAVs and the demand for extreme maneuverability there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of the limit parameter dynamics and finite-time-horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control/command inputs to track a limit-parameter safe-response profile near the limit boundary. The method is evaluated using software-in-the-loop and flight evaluations on the Georgia Tech unmanned rotorcraft platform, the GTMax. The thesis also develops and evaluates an extension for calculating control margins based on restricting limit parameter response aggressiveness near the limit boundary.
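Purely as an illustration of the finite-time-horizon prediction idea (the thesis's methods use an adaptive estimate of the limit parameter dynamics and an optimal control profile to the limit boundary, neither of which is reproduced here), a crude constant-rate prediction of a limit parameter might look like the following sketch; all names and numbers are hypothetical.

    import numpy as np

    def predict_limit_violation(y, ydot, y_limit, horizon, dt=0.02):
        # Propagate a locally linear model of the limit parameter over a
        # finite horizon and return the first predicted crossing time, if any.
        times = np.arange(0.0, horizon + dt, dt)
        y_pred = y + ydot * times
        crossing = np.nonzero(y_pred >= y_limit)[0]
        return times[crossing[0]] if crossing.size else None

    # Example: a hub-moment-like parameter approaching its limit.
    print(predict_limit_violation(y=0.8, ydot=0.5, y_limit=1.0, horizon=1.0))
    # time to predicted violation, about 0.4 s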
128 | Reducing turbulence- and transition-driven uncertainty in aerothermodynamic heating predictions for blunt-bodied reentry vehicles. Ulerich, Rhys David (24 October 2014).
Turbulent boundary layers approximating those found on the NASA Orion Multi-Purpose Crew Vehicle (MPCV) thermal protection system during atmospheric reentry from the International Space Station have been studied by direct numerical simulation, with the ultimate goal of reducing aerothermodynamic heating prediction uncertainty. Simulations were performed using a new, well-verified, openly available Fourier/B-spline pseudospectral code called Suzerain equipped with a "slow growth" spatiotemporal homogenization approximation recently developed by Topalian et al. A first study aimed to reduce turbulence-driven heating prediction uncertainty by providing high-quality data suitable for calibrating Reynolds-averaged Navier-Stokes turbulence models to address the atypical boundary layer characteristics found in such reentry problems. The two data sets generated were Ma ≈ 0.9 and 1.15 homogenized boundary layers possessing Re_θ ≈ 382 and 531, respectively. Edge-to-wall temperature ratios, T_e/T_w, were close to 4.15 and wall blowing velocities, v_w+ = v_w/u_τ, were about 8 × 10^-3. The favorable pressure gradients had Pohlhausen parameters between 25 and 42. Skin friction coefficients around 6 × 10^-3 and Nusselt numbers under 22 were observed. Near-wall vorticity fluctuations show qualitatively different profiles than observed by Spalart (J. Fluid Mech. 187 (1988)) or Guarini et al. (J. Fluid Mech. 414 (2000)). Small or negative displacement effects are evident. Uncertainty estimates and Favre-averaged equation budgets are provided. A second study aimed to reduce transition-driven uncertainty by determining where on the thermal protection system surface the boundary layer could sustain turbulence. Local boundary layer conditions were extracted from a laminar flow solution over the MPCV which included the bow shock, aerothermochemistry, heat shield surface curvature, and ablation. That information, as a function of leeward distance from the stagnation point, was approximated by Re_θ, Ma_e, [mathematical equation], v_w+, and T_e/T_w along with perfect gas assumptions. Homogenized turbulent boundary layers were initialized at those local conditions and evolved until either stationarity, implying the conditions could sustain turbulence, or relaminarization, implying the conditions could not. Fully turbulent fields relaminarized subject to the conditions 4.134 m and 3.199 m leeward of the stagnation point. However, different initial conditions produced long-lived fluctuations at leeward position 2.299 m. Locations more than 1.389 m leeward of the stagnation point are predicted to sustain turbulence in this scenario.