481

Interpolation and visualization of sparse GPR data / Interpolering och visualisering av gles GPR data

Sjödin, Rickard January 2020 (has links)
Ground-penetrating radar (GPR) is a tool for mapping the subsurface in a non-invasive way. The radar instrument transmits electromagnetic waves and records the resulting scattered field. Unfortunately, the data from a survey can be hard to interpret, and this is especially true for non-experts in the field. The data are also usually in 2.5D, or pseudo-3D, meaning that the vast majority of the scanned volume is missing data. Interpolation algorithms can, however, approximate the missing data, and the result can be visualized in an application to ease interpretation. This report focuses on comparing different interpolation algorithms, with particular attention to their behaviour as the data become sparse. The compared methods were linear interpolation, inverse distance weighting, ordinary kriging, thin-plate splines, and f-k domain zone-pass POCS. All were found to have strengths and weaknesses in different respects, although ordinary kriging was found to be the most accurate and to create the fewest artefacts. Inverse distance weighting performed surprisingly well considering its simplicity and low computational cost. A web-based, easy-to-use visualization application was developed to view the results of the interpolations. The tools implemented include time slices, cropping of the 3D cube, and iso-surfaces.
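As a concrete illustration of the simplest of the compared methods, the sketch below implements inverse distance weighting in Python with numpy; the function name and the synthetic sample points are illustrative, not taken from the thesis.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: each query point receives a weighted
    average of the known values, with weights proportional to 1/d^power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)              # eps guards against d == 0
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: five scattered GPR amplitude samples, two query points.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(5, 2))
vals = rng.normal(size=5)
queries = np.array([[5.0, 5.0], [1.0, 9.0]])
print(idw_interpolate(pts, vals, queries))
```

The low computational cost noted in the abstract is visible here: the method is a single weighted average with no model fitting.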
482

Hodnocení vlivu interpolace při koregistraci radarových snímků / Evaluation of influence of interpolation methods on coregistration of radar images

Slačíková, Jana January 2010 (has links)
SAR interferogram processing requires subpixel coregistration of the SAR image pair for accurate phase differencing; errors in alignment introduce phase noise into the SAR interferogram. The last step in coregistration is resampling one of the SAR images, and this step also introduces errors into the interferogram. The resampling algorithms nearest neighbour, bilinear interpolation, and cubic convolution, as well as advanced methods such as the raised-cosine kernel, the Knab interpolation kernel, and the truncated sinc, were tested on ERS tandem data and compared. The results were compared with the theory and simulations of earlier investigations (Hanssen and Bamler, 1999; Migliaccio and Bruno, 2003; Cho et al., 2005). The main experiment in this work was to examine and compare resampling methods on real data to evaluate their effect on interferometric phase quality and DEM generation. Coregistration performance was evaluated by the coherence (Touzi et al., 1999) and the sum of phase differences (Li et al., 2004). No evidence showed that the computationally intensive algorithms produced interferograms of better quality than cubic convolution. The possibilities of evaluating by means of the accuracy of the final InSAR DEM (Li and Bethel, 2008) were...
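The kernels compared in this thesis all approximate ideal sinc reconstruction when an image is shifted by a fractional sample. Below is a minimal, hedged sketch of the truncated-sinc variant for a 1-D complex signal (real SAR resampling is 2-D, and the Knab and raised-cosine kernels taper the sinc differently); the function name and test signal are illustrative.

```python
import numpy as np

def truncated_sinc_resample(s, frac_shift, half_len=4):
    """Shift a 1-D (possibly complex) signal by a fractional sample with a
    truncated-sinc kernel: s(i + frac) ~= sum_m s[i + m] * sinc(frac - m)."""
    m = np.arange(-half_len + 1, half_len + 1)       # kernel tap offsets
    taps = np.sinc(frac_shift - m)                   # truncated ideal kernel
    padded = np.pad(s, (half_len, half_len), mode='edge')
    out = np.empty(len(s), dtype=complex)
    for i in range(len(s)):
        out[i] = np.dot(padded[i + half_len + m], taps)
    return out

# Quarter-sample shift of a complex exponential (a phase-sensitive test).
x = np.exp(1j * 0.3 * np.arange(64))
print(truncated_sinc_resample(x, 0.25)[:4])
```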
483

Photogrammetric point cloud generation and surface interpolation for change detection / Generering av fotogrammetriska punktmoln och interpolation till förandringsanalys

Bergsjö, Joline January 2016 (has links)
In recent years, the science surrounding image-matching algorithms has seen an upswing, mostly owing to its benefits in computer vision. This has created new opportunities for photogrammetric methods to compete with LiDAR data when it comes to 3D point clouds and the generation of surface models. In Sweden, a project to create a high-resolution national height model started in 2009, and today almost the entirety of Sweden has been scanned with LiDAR sensors. The objective of that project is to achieve a height model with high spatial resolution and high accuracy in height. As of today, no update of this model is planned within the project, so it is up to each municipality or company that needs a recent height model to produce the update itself. This thesis aims to investigate the benefits and shortcomings of using photogrammetric measurements for generating and updating surface models. Two image-matching software packages, ERDAS photogrammetry and Spacemetric Keystone, are used to generate a 3D point cloud of a rural area in Botkyrka municipality. The point clouds are interpolated into surface models using different interpolation percentiles and different resolutions. The photogrammetric point clouds are evaluated on how well they fit a reference point cloud, and the surfaces are evaluated on how they are affected by the different interpolation percentiles and image resolutions. An analysis is also performed to see whether the accuracy improves when the point cloud is interpolated into a surface. The results show that photogrammetric point clouds follow the profile of the ground well but contain a lot of noise in forest-covered areas. A lower image resolution improves the accuracy for the forest features in the surfaces, and the results also show that noise reduction is essential to generate a surface with decent accuracy. Furthermore, the results identify problem areas in dry deciduous forest, where the photogrammetric method fails to capture the forest.
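The interpolation-percentile step can be pictured as rasterizing the cloud cell by cell. The sketch below is my own illustration, not the thesis workflow; the cell size and percentile are arbitrary. It grids a point cloud into a surface by taking a per-cell height percentile.

```python
import numpy as np

def grid_percentile(x, y, z, cell=2.0, pct=90):
    """Rasterize a point cloud into a height grid by taking a per-cell
    height percentile; cells containing no points remain NaN."""
    col = ((x - x.min()) // cell).astype(int)
    row = ((y - y.min()) // cell).astype(int)
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c in {(r, c) for r, c in zip(row, col)}:   # occupied cells only
        in_cell = (row == r) & (col == c)
        grid[r, c] = np.percentile(z[in_cell], pct)
    return grid

# Toy cloud: a gentle slope plus photogrammetric-style noise.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 20, 500), rng.uniform(0, 20, 500)
z = 0.1 * x + rng.normal(scale=0.5, size=500)
print(grid_percentile(x, y, z).shape)
```

A high percentile tends toward the canopy tops, a low one toward the ground, which is one way different interpolation percentiles change the evaluated surface.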
484

Art-directable cloud animation

Yiyun Wang (10703088) 06 May 2021 (has links)
Volumetric cloud generation and rendering algorithms are well developed to meet the need for realistic skies in animation and games. However, it is challenging to create a stylized or designed animation for volumetric clouds using physics-based generation and simulation methods in real time.

The problem raised by this research is that current methods for controlling volumetric cloud animation are not art-directable: making a piece of volumetric cloud move in a specific way can be difficult when using only a physics-based simulation method. The purpose of the study is to implement an animation method for volumetric clouds with art-directable controllers. Using this method, a designer can easily control the cloud's motion in a reliable way. The program achieves interactive performance using parallel processing with CUDA, and users can animate the cloud by inputting a few vectors inside the cloud volume.

After reviewing the literature related to real-time cloud simulation methods, texture advection algorithms, fluid simulation, and other processes needed to achieve the results, the thesis offers a feasible design of the algorithm and experiments to test the hypotheses. The study uses noise textures and fractional Brownian motion (fBm) to generate volumetric clouds and renders them with the ray-marching technique. The program renders user input vectors and a three-dimensional interpolated vector field with OpenGL. By adding or changing input vectors, the user obtains a divergence-minimizing interpolated field. The cloud volume can then be animated with the texture advection technique based on the interpolated vector field in real time. By inputting several vectors, the user can plausibly animate the volumetric cloud in an art-directable way.
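The density field that feeds such a renderer is typically built by summing octaves of noise. A minimal 2-D sketch of fBm follows; the thesis works in 3-D with CUDA/OpenGL, so this Python/numpy version with illustrative parameter values only shows the octave-summing idea.

```python
import numpy as np
from scipy.ndimage import zoom

def fbm(shape=(128, 128), octaves=5, gain=0.5, seed=0):
    """Fractional Brownian motion: sum octaves of smooth lattice noise,
    doubling the frequency and halving the amplitude at each octave."""
    rng = np.random.default_rng(seed)
    out, amp, freq = np.zeros(shape), 1.0, 4
    for _ in range(octaves):
        coarse = rng.random((freq, freq))                    # lattice noise
        out += amp * zoom(coarse, (shape[0] / freq,
                                   shape[1] / freq), order=1)  # upsample
        amp *= gain                                          # halve amplitude
        freq *= 2                                            # double frequency
    return out / out.max()                                   # normalize

density = fbm()
print(density.shape, float(density.min()), float(density.max()))
```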
485

Efficient and High-Performance Clocking Circuits for High-Speed Data Links

Wang, Zhaowen January 2022 (has links)
The increasing demand for high-capacity, high-speed I/Os is pushing wireline and optical transceivers to higher aggregate data rates. Multiple transceiver lanes are monolithically integrated on a single system on chip (SoC), bringing more stringent requirements for the power consumption and area of a single transceiver. Clocking circuits directly determine the transceiver data rate and take a significant portion of the total power consumption, so power-efficient, high-speed data links rely on efficient and high-performance clock generation and distribution. Multi-phase clock generators (MPCGs) and phase interpolators (PIs) are two essential blocks in the local clock generator of each transceiver lane. MPCGs can generate multi-phase sampling clocks to increase the sampling rate at a fixed frequency, or multi-phase input clocks for the PIs to perform phase shifting; their design also affects the schemes for global clock generation and distribution. 8-phase PIs improve interpolation linearity compared to 4-phase PIs, but their 8-phase input clock generation requires either power-hungry multi-phase global clock distribution or a complicated local 8-phase clock generator. Conventional clocking techniques face a tradeoff between jitter, power, and phase accuracy in multi-phase clock generation. Moreover, 8-phase PIs also face linearity and speed bottlenecks due to technology limitations. In this dissertation, we first discuss ring oscillators for multi-phase clock generation. The tradeoff between jitter and phase accuracy in ring oscillators locked by two-phase (0°/180°) injection is presented. This tradeoff is resolved by using a multi-phase injection-locked ring oscillator (MPIL-ROSC) for multi-phase clock generation. A quadrature delay-locked loop (QDLL) provides coarse but low-jitter multi-phase injection signals to the MPIL-ROSC, and also tunes the MPIL-ROSC's self-oscillation frequency against process-voltage-temperature (PVT) variations. The MPCG is designed for 8-phase clock generation and drives an 8-phase PI for clock interpolation. A 65-nm prototype chip shows that the MPIL-ROSC can generate low-jitter, highly accurate 8-phase clocks from 5 GHz to 8 GHz under a 1.1-V to 1.3-V supply variation. Moreover, a 7-bit PI driven by the MPIL-ROSC achieves a peak-to-peak integral nonlinearity (INLpp) of less than 1.90 LSB from 5 GHz to 8 GHz. To further improve the phase interpolation linearity and the operating frequency range, a Twin phase interpolator (Twin PI) and a Delta quadrature delay-locked loop (Delta QDLL) are introduced. The phase nonlinearity of a 4-phase, linear-weight PI stems from approximating sinusoidal-weight summation with linear-weight summation. Consequently, the phase deviations are deterministic, sinusoidal, and repeat across the interpolation quadrants. The Twin PI sums the equalized-amplitude outputs of two 4-phase PIs whose control codes are offset by half of the INL "period"; the INLs of the two PIs have opposite signs, so the summation cancels the majority of the nonlinearity. The Twin PI achieves very high linearity across a wide operating bandwidth while needing only 4-phase (quadrature) input clocks, which eases the design of the preceding multi-phase clock generator and offers flexibility in the global clock generation and distribution scheme.
A Delta quadrature delay-locked loop is further proposed for low-jitter, wideband quadrature clock generation from the delay difference of two parallel delay paths with a background analog quadrature tuning loop. A 65-nm prototype chip demonstrates that the Delta QDLL generates quadrature clocks with an accuracy of 0.9° from 3.5 GHz to 11 GHz. The 7-bit Twin PI achieves an INLpp of less than 1.45 LSB from 3.5 GHz to 11 GHz; at 7 GHz the INLpp is 0.72 LSB and the integrated fractional spur is as low as -41.7 dBc under -1429 ppm clock modulation. To sum up, the proposed multi-phase injection-locked ring oscillator for multi-phase clock generation, together with the combination of the Twin phase interpolator and the Delta quadrature delay-locked loop, breaks the performance limitations of state-of-the-art clocking circuits. The block-level innovations also offer opportunities to reconsider the global clocking scheme to save power and circuit area at the system level.
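The deterministic, sinusoidal INL of a linear-weight PI, and the cancellation the Twin PI exploits, can be reproduced with a few lines of arithmetic. The model below is an idealized illustration of my own, not the dissertation's circuit model: within one quadrant, code k of N mixes the I and Q clocks with weights (1 - k/N, k/N), so the realized phase is atan(w/(1-w)) rather than the ideal linear ramp.

```python
import numpy as np

N = 32                                       # codes per quadrant (7-bit PI)
k = np.arange(N)
w = k / N                                    # linear mixing weight
phase = np.degrees(np.arctan2(w, 1 - w))     # realized phase of (1-w)*I + w*Q
ideal = 90.0 * w                             # ideal linear phase ramp
inl = (phase - ideal) / (90.0 / N)           # deviation in LSB

print("single-PI INL (pp, LSB):", round(inl.max() - inl.min(), 2))

# Twin-PI idea: a second PI offset by half of the INL "period" sees
# deviations of roughly opposite sign, so summing the two equalized
# outputs cancels most of the nonlinearity (approximated here by
# averaging the phase errors of codes k and k + N/2).
residual = (inl[: N // 2] + inl[N // 2:]) / 2
print("paired residual (pp, LSB):", round(residual.max() - residual.min(), 2))
```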
486

Fourier Series Applications in Multitemporal Remote Sensing Analysis using Landsat Data

Brooks, Evan B. 27 June 2013 (has links)
Researchers now have unprecedented access to free Landsat data, enabling detailed monitoring of the Earth's land surface and vegetation. There are gaps in the data, due in part to cloud cover. The gaps are aperiodic and localized, forcing any detailed multitemporal analysis based on Landsat data to compensate. Harmonic regression approximates Landsat data for any point in time with minimal training images and reduced storage requirements. In two study areas in North Carolina, USA, harmonic regression approaches were at least as good at simulating missing data as STAR-FM for images from 2001. Harmonic regression had an R² of at least 0.9 over three quarters of all pixels, and gave the highest predicted R² values on two thirds of the pixels. Applying harmonic regression with the same number of harmonics to consecutive years yielded an improved fit, with R² of at least 0.99 for most pixels. We next demonstrate a change detection method based on exponentially weighted moving average (EWMA) charts of the harmonic residuals. In the process, a data-driven cloud filter is created, enabling the use of partially clouded data. The approach is shown to be capable of detecting thins and subtle forest degradations in Alabama, USA, at scales considerably finer than the Landsat spatial resolution, in an on-the-fly fashion, with new images easily incorporated into the algorithm. EWMA detection accurately showed the location, timing, and magnitude of 85% of known harvests in the study area, verified by aerial imagery. We use harmonic regression to improve the precision of dynamic forest parameter estimates, generating a robust time series of vegetation index values. These values are classified into strata maps in Alabama, USA, depicting regions of similar growth potential. These maps are applied to Forest Service Forest Inventory and Analysis (FIA) plots, generating post-stratified estimates of static and dynamic forest parameters. Improvements to efficiency for all parameters were such that a comparable random sample would require at least 20% more sampling units, with the improvement for the growth parameter requiring a 50% increase. These applications demonstrate the utility of harmonic regression for Landsat data. They suggest further applications in environmental monitoring and improved estimation of landscape parameters, critical to improving large-scale models of ecosystems and climate effects. / Ph. D.
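Harmonic regression here amounts to ordinary least squares on a design matrix of sine/cosine pairs at multiples of the annual frequency. A minimal sketch follows, with illustrative data and parameter choices rather than the dissertation's.

```python
import numpy as np

def harmonic_design(doy, n_harmonics=2, period=365.25):
    """Design matrix for harmonic regression on day-of-year: an intercept
    plus sine/cosine pairs at integer multiples of the annual frequency."""
    cols = [np.ones_like(doy, dtype=float)]
    for h in range(1, n_harmonics + 1):
        w = 2 * np.pi * h * doy / period
        cols += [np.sin(w), np.cos(w)]
    return np.column_stack(cols)

# Toy pixel time series: an annual greenness cycle sampled on cloud-free dates.
rng = np.random.default_rng(2)
doy = np.sort(rng.choice(365, size=40, replace=False)).astype(float)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * (doy - 120) / 365.25) \
       + rng.normal(0, 0.02, 40)

X = harmonic_design(doy)
beta, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((ndvi - fitted) ** 2) / np.sum((ndvi - ndvi.mean()) ** 2)
print(f"R^2 = {r2:.3f}")     # any missing date is predicted via X_new @ beta
```

Only the fitted coefficients need to be stored per pixel, which is the reduced storage requirement the abstract mentions.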
487

Efficient Ensemble Data Assimilation and Forecasting of the Red Sea Circulation

Toye, Habib 23 November 2020 (has links)
This thesis presents our efforts to build an operational ensemble forecasting system for the Red Sea, based on the Data Assimilation Research Testbed (DART) package for ensemble data assimilation and the Massachusetts Institute of Technology general circulation model (MITgcm) for forecasting. The Red Sea DART-MITgcm system efficiently integrates all the ensemble members in parallel, while accommodating different ensemble assimilation schemes. The promising ensemble adjustment Kalman filter (EAKF), designed to avoid manipulating the gigantic covariance matrices involved in the ensemble assimilation process, possesses the relevant features required for an operational setting. The need for more efficient filtering schemes, both to implement a high-resolution assimilation system for the Red Sea and to handle large ensembles for a proper description of the assimilation statistics, prompted the design and implementation of new filtering approaches. Making the most of our world-class supercomputer, Shaheen, we first pushed the system's limits by designing a fault-tolerant scheduler extension that allowed us to test, for the first time, a fully realistic, high-resolution ocean ensemble assimilation system with 1000 ensemble members. In an operational setting, however, timely forecasts are of the essence, and running large ensembles, albeit preferable and desirable, is not sustainable. New schemes aimed at lowering the computational burden while preserving reliable assimilation results were therefore developed. The ensemble optimal interpolation (EnOI) algorithm requires only a single model integration in the forecast step, using a static ensemble of preselected members for assimilation, and is therefore computationally much cheaper than the EAKF. To account for the strong seasonal variability of the Red Sea circulation, an EnOI with seasonally varying ensembles (SEnOI) was first implemented. To better handle intra-seasonal variability and enhance the developed seasonal EnOI system, an automatic procedure to adaptively select the ensemble members through the assimilation cycles was then introduced. Finally, an efficient hybrid scheme combining the dynamical, flow-dependent covariance of the EAKF with the static covariance of the EnOI was proposed and successfully tested in the Red Sea. The developed hybrid ensemble data assimilation system will form the basis of the first operational Red Sea forecasting system, currently being implemented to support Saudi Aramco operations in this basin.
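The computational appeal of EnOI comes from its analysis step: a single forecast is corrected with a covariance built from a static ensemble. Below is a schematic sketch of that update in a textbook formulation with toy dimensions; it is not the thesis code, and the names and scaling factor are illustrative.

```python
import numpy as np

def enoi_update(xf, E_static, H, y, R, alpha=0.5):
    """Ensemble Optimal Interpolation analysis step: one forecast xf is
    corrected using a covariance built from a static, preselected ensemble
    E_static (columns = members); alpha scales the static covariance."""
    A = E_static - E_static.mean(axis=1, keepdims=True)   # ensemble anomalies
    B = alpha * (A @ A.T) / (A.shape[1] - 1)              # static covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)          # Kalman gain
    return xf + K @ (y - H @ xf)                          # analysis state

# Toy setup: a 10-variable state, 3 observed entries, 20 static members.
rng = np.random.default_rng(3)
n, p, m = 10, 3, 20
xf = rng.normal(size=n)
E = rng.normal(size=(n, m))
H = np.zeros((p, n)); H[0, 1] = H[1, 4] = H[2, 8] = 1.0
y = rng.normal(size=p)
xa = enoi_update(xf, E, H, y, 0.1 * np.eye(p))
print(xa.shape)
```

A hybrid scheme of the kind tested in the thesis blends this static B with a flow-dependent covariance estimated from the dynamic EAKF ensemble.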
488

A Spline Framework for Optimal Representation of Semiperiodic Signals

Guilak, Farzin G. 24 July 2015 (has links)
Semiperiodic signals possess an underlying periodicity, but their constituent spectral components include stochastic elements which make it impossible to analytically determine locations of the signal's critical points. Mathematically, a signal's critical points are those at which it is not differentiable or where its derivative is zero. In some domains they represent characteristic points, which are locations indicating important changes in the underlying process reflected by the signal. For many applications in healthcare, knowledge of precise locations of these points provides key insight for analytic, diagnostic, and therapeutic purposes. For example, given an appropriate signal they might indicate the start or end of a breath, numerous electrophysiological states of the heart during the cardiac cycle, or the point in a stride at which the heel impacts the ground. The inherent variability of these signals, the presence of noise, and often, very low signal amplitudes, makes accurate estimation of these points challenging. There has been much effort in automatically estimating characteristic point locations. Approaches include algorithms operating in the time domain, on various transformations of the data, and using different models of the signal. These methods apply a wide variety of techniques ranging from simple thresholds and search windows to sophisticated signal processing and pattern recognition algorithms. Existing approaches do not explicitly use prior knowledge of characteristic point locations in their estimation. This dissertation first develops a framework for an efficient parametric representation of semiperiodic signals using splines. It then implements an instance of that framework to optimally estimate locations of characteristic points, incorporating prior knowledge from manual annotations on training data. Splines represent signals in a piecewise manner by applying an interpolant to constraint points on the signal known as knots. The framework allows choice of interpolant, objective function, knot initialization algorithm, and optimization algorithm. After initialization it iteratively modifies knot locations until the objective function is met. For optimal estimation of characteristic points the framework relies on a Bayesian objective function, the a posteriori probability of knot locations given the observed signal. This objective function fuses prior knowledge, the observed signal, and its spline estimate. With a linear interpolant, knot locations after optimization serve as estimates of the signal's characteristic points. This implementation was used to determine locations of 11 characteristic points on a prospective test set comprising 200 electrocardiograph (ECG) signals from 20 subjects. It achieved a mean error of -0.4 milliseconds, less than one quarter of a sample interval. A low bias is not sufficient, however, and the literature recognizes error variance to be the more important factor in assessing accuracy. Error variances are typically compared to the variance of manual annotations provided by reviewers. The algorithm was within two standard deviations for six of the characteristic points, and within one sample interval of this criterion for another four points. The spline framework described here provides a complementary option to existing methods for parametric modeling of semiperiodic signals, and can be tailored to represent semiperiodic signals with high fidelity or to optimally estimate locations of their characteristic points.
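To make the framework concrete, the sketch below fits a linear-interpolant spline to a noisy semiperiodic signal and optimizes the knot locations. For simplicity it minimizes squared error, whereas the dissertation's objective is the a posteriori probability of knot locations, which additionally fuses prior knowledge from manual annotations; names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def spline_sse(knots_x, t, sig):
    """Squared error of a linear spline whose knots are pinned to the
    signal; a simple data-fidelity stand-in for the Bayesian objective."""
    kx = np.sort(np.clip(knots_x, t[0], t[-1]))
    ky = np.interp(kx, t, sig)              # knot values from the signal
    fit = np.interp(t, kx, ky)              # linear interpolant between knots
    return np.sum((sig - fit) ** 2)

# Toy semiperiodic signal: a repeating pulse with additive noise.
rng = np.random.default_rng(4)
t = np.linspace(0, 4, 400)
sig = np.exp(-50 * (np.mod(t, 1.0) - 0.5) ** 2) + rng.normal(0, 0.02, t.size)

init = np.linspace(0.1, 3.9, 12)            # initial knot placement
res = minimize(spline_sse, init, args=(t, sig), method='Nelder-Mead',
               options={'maxiter': 4000})
print("optimized knots:", np.sort(res.x).round(2))
```

After optimization, the knots migrate toward the pulses' flanks and peaks, which is how, with a linear interpolant, knot locations serve as estimates of the characteristic points.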
489

Full waveform inversion of supershot-gathered data for optimization of turnaround time in seismic reflection survey / 地震反射法探査における複数震源同時発震によるデータ取得及び処理時間最適化の研究

Ehsan, Jamali Hondori 24 November 2016 (has links)
Kyoto University / 0048 / New system, course-based doctorate / Doctor of Engineering / 甲第20061号 / 工博第4249号 / 新制||工||1658 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor 三ケ田 均, Professor 小池 克明, Professor 木村 亮, 高梨 将 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
490

INTERPOLATING HYDROLOGIC DATA USING LAPLACE FORMULATION

Tianle Xu (10802667) 14 May 2021 (has links)
Spatial interpolation techniques play an important role in hydrology, as many point observations need to be interpolated to create continuous surfaces. Despite the availability of several tools and methods for interpolating data, not all of them work consistently for hydrologic applications. One technique, the Laplace equation, which is used in hydrology for creating flownets, has rarely been used for interpolating hydrologic data. The objective of this study is to examine the efficiency of the Laplace formulation (LF) in interpolating hydrologic data and to compare it with other widely used methods such as inverse distance weighting (IDW), natural neighbor, and kriging. The comparison is performed quantitatively using the root mean square error (RMSE), visually by inspecting whether the created surfaces are reasonable, and computationally in terms of ease of operation and speed. Data on surface elevation, river bathymetry, precipitation, temperature, and soil moisture are used for different areas in the United States. The RMSE results show that LF performs better than IDW and is comparable to the other methods in accuracy. LF is easy to use, as it requires fewer input parameters than IDW and kriging. Computationally, LF is comparable to the other methods in terms of speed when the datasets are not large. Overall, LF offers a robust alternative to existing methods for interpolating various hydrologic data. Further work is required to improve its computational efficiency for large data sizes and to determine the effects of different cell sizes.
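Laplace interpolation treats the observed points as fixed internal boundary conditions and relaxes every unknown cell toward the average of its neighbours, the discrete form of the Laplace equation ∇²u = 0. A minimal Jacobi-style sketch with toy data follows; it is my own illustration under those assumptions, not the study's implementation.

```python
import numpy as np

def laplace_interpolate(grid, known_mask, n_iter=5000, tol=1e-6):
    """Fill unknown cells by Jacobi relaxation of Laplace's equation:
    each unknown cell converges to the mean of its four neighbours while
    observed cells stay fixed. (np.roll wraps the edges; acceptable for
    a sketch, a real solver would handle boundaries explicitly.)"""
    u = np.where(known_mask, grid, np.nanmean(grid))   # initial guess
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        new = np.where(known_mask, u, avg)             # keep observations fixed
        if np.max(np.abs(new - u)) < tol:
            break
        u = new
    return u

# Toy example: a 50x50 precipitation-like field observed at 60 random gauges.
rng = np.random.default_rng(5)
yy, xx = np.mgrid[0:50, 0:50]
truth = np.sin(xx / 8.0) + np.cos(yy / 10.0)
mask = np.zeros((50, 50), bool)
mask[rng.integers(0, 50, 60), rng.integers(0, 50, 60)] = True
field = np.where(mask, truth, np.nan)
recon = laplace_interpolate(field, mask)
print("RMSE at unobserved cells:", np.sqrt(np.mean((recon - truth)[~mask] ** 2)))
```

The few inputs visible here, just the grid and the observation mask, reflect why LF needs fewer parameters than IDW or kriging.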
