161 |
JavaScript och web workers : Parallellisering av en beräkningstung webbapplikation / JavaScript and web workers : Parallelization of a computationally heavy web application. Stråhle, Jesper. January 2013.
The web is increasingly used as a true application platform, largely thanks to HTML5. This places higher demands on the client-side performance of web applications, as new technologies enable more advanced applications. Parallelization is a method for increasing application performance that also takes advantage of the parallel architectures common today. Web workers, a new API for JavaScript, allow a simple form of parallelization for web applications. However, web workers have some limitations that reduce the number of possible strategies. This work aims to evaluate how the choice of parallelization strategy affects the performance of a JavaScript implementation of marching squares, an algorithm well suited to parallelization. Three different strategies are implemented and then evaluated through performance measurements. The results show that a strategy using as little and as optimized communication as possible gives better performance than a strategy with more communication. Further work, including an evaluation of the gains from shared memory, is proposed.
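A minimal sketch of the minimal-communication strategy described above, written in Python with worker processes standing in for web workers (the thesis itself targets JavaScript). The marching squares variant here places contour vertices at cell-edge midpoints rather than interpolating, and all names are illustrative, not taken from the thesis: each worker receives one band of the scalar field plus a single overlap row and returns only finished line segments, so data crosses the worker boundary exactly twice.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Midpoint marching squares: segment end points per cell case (edge indices
# 0 = top, 1 = right, 2 = bottom, 3 = left; corner bit i set if corner i >= iso).
CASES = {
    1: [(3, 0)], 2: [(0, 1)], 3: [(3, 1)], 4: [(1, 2)], 5: [(3, 0), (1, 2)],
    6: [(0, 2)], 7: [(3, 2)], 8: [(2, 3)], 9: [(0, 2)], 10: [(0, 1), (2, 3)],
    11: [(1, 2)], 12: [(1, 3)], 13: [(0, 1)], 14: [(0, 3)],
}

def band_segments(args):
    """Work done by one worker: contour segments for one horizontal band."""
    band, iso, y_offset = args
    segs = []
    for y in range(band.shape[0] - 1):
        for x in range(band.shape[1] - 1):
            v = [band[y, x], band[y, x + 1], band[y + 1, x + 1], band[y + 1, x]]
            case = sum(1 << i for i, val in enumerate(v) if val >= iso)
            mids = [(x + 0.5, y), (x + 1, y + 0.5), (x + 0.5, y + 1), (x, y + 0.5)]
            for a, b in CASES.get(case, []):
                (ax, ay), (bx, by) = mids[a], mids[b]
                segs.append(((ax, ay + y_offset), (bx, by + y_offset)))
    return segs

def marching_squares_parallel(field, iso, n_workers=4):
    """Split the field into row bands (with one overlap row), farm the bands
    out to worker processes, and gather only the resulting segments."""
    bounds = np.linspace(0, field.shape[0] - 1, n_workers + 1).astype(int)
    jobs = [(field[lo:hi + 1], iso, lo) for lo, hi in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor(n_workers) as pool:
        return [s for segs in pool.map(band_segments, jobs) for s in segs]

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    field = np.hypot(xx - 32, yy - 32)   # distance field: the contour is a circle
    print(len(marching_squares_parallel(field, 20.0)))
```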
|
162 |
Comparison of Two Vortex-in-cell Schemes Implemented to a Three-dimensional Temporal Mixing Layer. Sadek, Nabel. January 2012.
Numerical simulations are presented for three-dimensional viscous incompressible free shear flows. The numerical method is based on solving the vorticity equation using the Vortex-In-Cell method. In this method, the vorticity field is discretized into a finite set of Lagrangian elements (particles) and the computational domain is covered by an Eulerian mesh. The velocity field is computed on the mesh by solving a Poisson equation. The solution proceeds in time by advecting the particles with the flow. The second-order Adams-Bashforth method is used for time integration. Exchange of information between the Lagrangian particles and the Eulerian grid is carried out using the M'4 interpolation scheme. The classical inviscid scheme is enhanced to account for stretching and viscous effects. For that purpose, two schemes are used. The first uses periodic remeshing of the vortex particles along with fourth-order finite-difference approximations for the partial derivatives of the stretching and viscous terms. In the second scheme, the derivatives are approximated by least-squares polynomials. The novelty of this work lies in using the moving least squares technique within the framework of the Vortex-in-Cell method and applying it to a three-dimensional temporal mixing layer. Comparisons of the mean flow and velocity statistics are made with experimental studies. The results confirm the validity of the present schemes. Both schemes also demonstrate the capability to qualitatively capture the significant flow scales, and allow physical insight into the development of instabilities and the formation of three-dimensional vortex structures. The two schemes show acceptably low numerical diffusion as well.
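As a concrete illustration of two ingredients named in this abstract, the Python sketch below gives Monaghan's M4' interpolation kernel used for particle-mesh transfers and a second-order Adams-Bashforth update for advecting particle positions. The full Vortex-In-Cell solver (Poisson solve, stretching and viscous terms, remeshing) is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def m4_prime(q):
    """Monaghan's M4' interpolation kernel in 1-D (applied per axis in 3-D by
    taking the product of the three directional weights). The weights of the
    four nearest nodes sum to one."""
    q = np.abs(np.asarray(q, dtype=float))
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = 1.0 - 2.5 * q[inner] ** 2 + 1.5 * q[inner] ** 3
    w[outer] = 0.5 * (2.0 - q[outer]) ** 2 * (1.0 - q[outer])
    return w

def advect_ab2(x, u_now, u_prev, dt):
    """Second-order Adams-Bashforth update of particle positions:
    x_{n+1} = x_n + dt * (3/2 u_n - 1/2 u_{n-1})."""
    return x + dt * (1.5 * u_now - 0.5 * u_prev)

# Example: per-axis weights of the four nodes nearest a particle 0.3 cells
# from a node on a unit-spacing mesh (they sum to 1).
print(m4_prime(np.array([1.7, 0.7, 0.3, 1.3])))
```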
|
163 |
Optimization of Sampling Structure Conversion Methods for Color Mosaic Displays. Zheng, Xiang. January 2014.
Although many devices can capture high-resolution images, there is still a need to show these images on displays of low resolution. Existing methods of subpixel-based down-sampling are reviewed in this thesis and their limitations are described. A new approach to optimizing sampling structure conversion for color mosaic displays is developed. Full-color images are filtered by a set of optimal filters before down-sampling, resulting in better image quality according to the SCIELAB measure, a spatial extension of the CIELAB metric for perceptual color difference. The typical RGB stripe display pattern is used to obtain the optimal filters through least-squares filter design. The new approach is also implemented for a widely used two-dimensional display pattern, the Pentile RGBG. Clearer images are produced and color fringing artifacts are reduced. The quality of the down-sampled images is compared using SCIELAB and by visual inspection.
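The pipeline above is "filter, then down-sample". The Python sketch below shows a generic least-squares FIR filter design of the kind referred to, matching an ideal low-pass response on a dense frequency grid. The thesis's actual optimal filters are derived for specific display mosaics and the SCIELAB error measure, so the function and parameter names here are purely illustrative.

```python
import numpy as np

def ls_lowpass(half_len, cutoff, grid_size=512):
    """Least-squares design of a symmetric (linear-phase) FIR low-pass filter of
    length 2*half_len + 1: choose cosine-series coefficients so the realized
    frequency response matches an ideal brick-wall response on a dense grid."""
    w = np.linspace(0.0, np.pi, grid_size)            # frequency grid
    desired = (w <= cutoff * np.pi).astype(float)     # ideal pass/stop response
    k = np.arange(half_len + 1)
    A = np.cos(np.outer(w, k))                        # basis: cos(k*w)
    c, *_ = np.linalg.lstsq(A, desired, rcond=None)
    # Expand the cosine coefficients into symmetric filter taps.
    return np.concatenate([c[:0:-1] / 2.0, [c[0]], c[1:] / 2.0])

# Example: a 15-tap prefilter with cutoff at one third of Nyquist, e.g. before
# down-sampling by a factor of 3 along one axis of an RGB-stripe pattern.
h = ls_lowpass(7, 1.0 / 3.0)
```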
|
164 |
Mínimos quadrados aplicados à super-resolução de vídeo [Least squares applied to video super-resolution]. Brandalise, Rodrigo. 20 February 2014.
In this work, an adaptation of the least-squares method applied to super-resolution image reconstruction is proposed, aiming at real-time video (image sequence) reconstruction. Results demonstrate that the proposed implementation can perform better than the lowest-computational-cost algorithm in the literature, at only a small increase in the number of operations. Finally, the proposed structure lends itself to analysis: given a theoretical model of its behavior, optimal design parameters can be obtained, further improving the algorithm's performance.
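A hedged sketch of the kind of least-squares super-resolution iteration referred to above: steepest-descent steps on the data-fidelity term, with a Gaussian blur and integer decimation as stand-ins for the acquisition model. Motion compensation, the real-time structure, and the specific low-cost algorithm compared against are all omitted; every operator and name below is an assumption for illustration, not the thesis's model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, f):
    return img[::f, ::f]

def upsample_adjoint(img_lr, f, shape):
    """Adjoint of the decimation operator: place LR samples back on the HR grid."""
    out = np.zeros(shape)
    out[::f, ::f] = img_lr
    return out

def sr_least_squares(y_frames, f, sigma, shape_hr, mu=0.5, n_iter=50):
    """Gradient iterations on the least-squares cost sum_k ||D H x - y_k||^2,
    where H is a Gaussian blur and D decimates by f. All frames are assumed
    already registered (no motion) to keep the sketch short."""
    x = np.zeros(shape_hr)
    for _ in range(n_iter):
        grad = np.zeros(shape_hr)
        for y in y_frames:
            resid = downsample(gaussian_filter(x, sigma), f) - y
            # Adjoint of D, then of H (symmetric kernel, so the blur is its
            # own adjoint up to boundary handling).
            grad += gaussian_filter(upsample_adjoint(resid, f, shape_hr), sigma)
        x -= mu * grad / len(y_frames)
    return x

# Toy usage: three identical (already registered) low-resolution frames.
hr = np.random.default_rng(3).random((64, 64))
lr = downsample(gaussian_filter(hr, 1.0), 2)
est = sr_least_squares([lr, lr, lr], f=2, sigma=1.0, shape_hr=hr.shape)
```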
|
165 |
Integrated Approach to Assess Supply Chains: A Comparison to the Process Control at the Firm Level. Karadağ, Mehmet Onur. January 2011.
This study considers whether optimizing process metrics and settings across a supply chain gives significantly different outcomes than consideration at the firm level. While the importance of supply chain integration has been shown in areas such as inventory management, this study appears to be the first empirical test for optimizing process settings. A Partial Least Squares (PLS) procedure is used to determine the crucial components, and the indicators that make up each component, in a supply chain system. PLS allows supply chain members to gain a greater understanding of the critical coordination components in a given supply chain. Results and implications indicate what performance is possible with supply-chain-wide optimization versus local optimization, on simulated and manufacturing data. It was found that pursuing an integrated approach over a traditional independent approach provides an improvement of 2% to 49% in predictive power for the supply chain under study.
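For readers unfamiliar with PLS, the Python snippet below sketches the flavour of such a comparison using scikit-learn's PLSRegression on synthetic data: cross-validated predictive power when only one firm's process metrics are used versus when metrics from the whole chain are pooled. The data, component counts, and variable names are invented for illustration and do not reproduce the study's actual procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical process metrics: five per echelon (supplier, manufacturer, retailer).
X_sup, X_man, X_ret = (rng.normal(size=(200, 5)) for _ in range(3))
X_chain = np.hstack([X_sup, X_man, X_ret])
# Hypothetical end-of-chain performance outcome, driven by all echelons.
y = X_chain @ rng.normal(size=15) + rng.normal(scale=0.5, size=200)

# Cross-validated R^2 with firm-level predictors vs. chain-wide predictors.
r2_local = cross_val_score(PLSRegression(n_components=2), X_man, y, cv=5).mean()
r2_chain = cross_val_score(PLSRegression(n_components=2), X_chain, y, cv=5).mean()
print(f"firm-level R^2: {r2_local:.2f}  integrated R^2: {r2_chain:.2f}")
```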
|
166 |
Linearized inversion frameworks toward high-resolution seismic imaging. Aldawood, Ali. 09 1900.
Seismic exploration utilizes controlled sources, which emit seismic waves that propagate through the earth's subsurface and are reflected off subsurface interfaces and scatterers. The reflected and scattered waves are recorded by stations installed along the earth's surface or down boreholes. Seismic imaging is a powerful tool to map this reflected and scattered energy back to its subsurface scattering or reflection points. Seismic imaging is conventionally based on the single-scattering assumption, where only energy that bounces once off a subsurface scatterer and is recorded by a receiver is projected back to its subsurface position; the internally multiply scattered seismic energy is treated as unwanted noise and is usually suppressed or removed from the recorded data. Conventional seismic imaging techniques yield subsurface images that suffer from low spatial resolution, migration artifacts, and acquisition fingerprint due to the limited acquisition aperture, the number of sources and receivers, and the bandwidth of the source wavelet. Hydrocarbon traps are becoming more challenging, and considerable reserves are trapped in stratigraphic and pinch-out traps, which require highly resolved seismic images to delineate them.

This thesis focuses on developing and implementing new, advanced, cost-effective seismic imaging techniques that aim to enhance the resolution of the migrated images by exploiting the sparseness of the subsurface reflectivity distribution and by utilizing the multiples that are usually neglected when imaging seismic data. I first formulate the seismic imaging problem as a basis pursuit denoise problem, which I solve using an L1-minimization algorithm to obtain the sparsest migrated image consistent with the recorded data. Imaging multiples may illuminate subsurface zones that are not easily illuminated by conventional seismic imaging using primary reflections only. I then develop an L2-norm (i.e. least-squares) inversion technique to image internally multiply scattered seismic waves and obtain highly resolved images delineating vertical faults that are otherwise not easily imaged by primaries.

Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples into surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data.

I further advance the recently developed cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes compared with the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.
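As a toy illustration of the sparsity-promoting formulation mentioned above, the sketch below solves a small basis-pursuit-denoise-style problem with ISTA (iterative soft thresholding), one generic L1-minimization algorithm. The dense random matrix stands in for the de-migration operator, which in practice is applied matrix-free through wave-equation modelling; nothing here reflects the thesis's actual solver or operators.

```python
import numpy as np

def ista(A, b, lam, step, n_iter=200):
    """Iterative soft-thresholding (ISTA) for
    min_x 0.5 * ||A x - b||_2^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the data-fit term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy example: recover a sparse "reflectivity" from noisy linear data.
rng = np.random.default_rng(1)
A = rng.normal(size=(120, 300))
x_true = np.zeros(300)
x_true[rng.choice(300, 8, replace=False)] = rng.normal(size=8)
b = A @ x_true + 0.01 * rng.normal(size=120)
# Step size 1 / ||A||_2^2 guarantees convergence of ISTA.
x_hat = ista(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
```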
|
167 |
Low-Complexity Regularization Algorithms for Image Deblurring. Alanazi, Abdulrahman. 11 1900.
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem.

We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for cases where the point spread function (PSF) is separable, we propose using a Kronecker product to reduce the computations.

Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problem by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square-root regularized total variation (SRTV) formulation. Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images, in terms of PSNR, SSIM, and the restored image quality.

In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we also developed algorithms that work in the blind image deblurring setting. Experimental results show that our proposed methods are robust in blind deblurring and outperform the other benchmark methods in terms of both output PSNR and SSIM values.
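A minimal sketch of non-blind regularized least-squares (Tikhonov) deblurring with a known PSF, solved in the Fourier domain. The parameter lam below is exactly the quantity that the selection and bootstrap methods described above are designed to choose; the closed-form filter itself is standard and is not the thesis's contribution.

```python
import numpy as np

def rls_deblur(blurred, psf, lam):
    """Regularized least-squares deconvolution with a known PSF:
    minimize ||h * x - y||^2 + lam * ||x||^2, whose per-frequency solution is
    X = conj(H) * Y / (|H|^2 + lam) under circular boundary conditions."""
    H = np.fft.fft2(psf, s=blurred.shape)     # PSF transfer function (zero-padded)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Toy usage: blur a random image with a 5x5 box PSF, then deblur.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = rls_deblur(blurred, psf, lam=1e-3)
```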
|
168 |
Kinetika degradace inkjetových barviv / Kinetics of Inkjet Dyes Degradation. Buteková, Silvia. January 2015.
The stability of inkjet prints is influenced by many factors, and the mutual effects of these factors accelerate print degradation. The surrounding environment plays an important role in image stability, with prints degrading especially under light. The degradation of inkjet prints appears as a decrease of one dye or of multiple dyes, and predicting the dye decrease over time requires knowledge of the dye concentration. This dissertation thesis studies the kinetics and the changes in the electronic and molecular structure of digital photography prints after accelerated ageing tests. The resistance of inkjet prints was studied on one type of media using three different ink sets. Changes in the printed colours were measured and evaluated by calibration (PLS calibration and the least-squares method). On the basis of this calibration, the dye decrease in the receiving layer of real samples was predicted. Changes in the electronic and molecular structure were analysed on KBr pellets by FTIR and UV-Vis spectroscopy.
|
169 |
Information and distances. Epstein, Samuel Randall. 23 September 2015.
We prove that all randomized sampling methods produce outliers. Given a computable measure P over natural numbers or infinite binary sequences, there is no method that can produce an arbitrarily large sample such that all its members are typical of P. The second part of this dissertation describes a computationally inexpensive method to approximate Hilbertian distances. This method combines the semi-least squares inverse technique with the canonical modern machine learning technique known as the kernel trick. In the task of distance approximation, our method was shown to be comparable in performance to a solution employing the Nyström method. Using the kernel semi-least squares method, we developed and incorporated the Kernel-Subset-Tracker into the Camera Mouse, a video-based mouse replacement software for people with movement disabilities. The Kernel-Subset-Tracker is an exemplar-based method that uses a training set of representative images to produce online templates for positional tracking. Our experiments with test subjects show that augmenting the Camera Mouse with the Kernel-Subset-Tracker yields a statistically significant improvement in communication bandwidth.
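The abstract does not spell out the semi-least squares inverse, so the Python sketch below uses the Moore-Penrose pseudoinverse as a stand-in to show the general idea: feature-space distances obtained purely through kernel evaluations, and least-squares coordinates of a new point in the span of a representative subset (as an exemplar set of the kind the Kernel-Subset-Tracker uses). All names and choices here are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel; any positive-definite kernel k would do."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def kernel_distance(x, y, k=rbf):
    """Hilbert-space distance via the kernel trick:
    ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)."""
    return np.sqrt(max(k(x, x) - 2.0 * k(x, y) + k(y, y), 0.0))

def subset_coordinates(subset, x, k=rbf):
    """Least-squares coordinates of phi(x) in the span of the subset's feature
    images: c = pinv(K_ss) @ k_s(x), with pinv standing in for the
    dissertation's semi-least squares inverse."""
    K_ss = np.array([[k(a, b) for b in subset] for a in subset])
    k_sx = np.array([k(a, x) for a in subset])
    return np.linalg.pinv(K_ss) @ k_sx
```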
|
170 |
The evaluation and readjustment of the VPI-CE horizontal control network. Rheinhart, Brian K. January 1981.
The main objective of the VPI-CE control network is to contribute to the National Geodetic Survey control network. In order to meet this objective, large amounts of survey data were accumulated at different times from various surveys between the years 1977 and 1980. Each set of survey data was reduced and adjusted by least squares independently, creating various "sub" control networks that were connected to each other piecemeal. When the "sub" control networks were connected, it was found that they did not meet the objective stated above. The purpose of this project is to examine and check all survey data, adjust all data as one set to the NGS control network, and evaluate the adjusted data to determine whether the survey meets second-order, class II traverse specifications as established by the NGS.
Included in this paper are the following: a background on NGS specifications; least-squares theory, including observation equations and error theory; a description of how the data for the project were accumulated and reduced; the adjustment of the reduced survey data; results and analysis of the adjustment; and conclusions and recommendations for the survey. / Master of Engineering
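For reference, a compact Python sketch of the parametric (observation-equation) least-squares adjustment that underlies such a network readjustment, with generic variable names. Building the design matrix from the actual distance and direction observations tied to fixed NGS stations is not shown, so this is only the core algebra.

```python
import numpy as np

def parametric_adjustment(A, l, P):
    """Parametric least-squares adjustment for observation equations v = A x - l
    with weight matrix P. Returns the estimated parameters, residuals,
    a-posteriori variance of unit weight, and parameter covariance."""
    N = A.T @ P @ A                      # normal matrix
    x = np.linalg.solve(N, A.T @ P @ l)  # adjusted parameters
    v = A @ x - l                        # residuals
    dof = A.shape[0] - A.shape[1]        # redundancy: observations minus unknowns
    s0_sq = (v.T @ P @ v) / dof          # a-posteriori reference variance
    Qxx = np.linalg.inv(N) * s0_sq       # covariance of adjusted parameters
    return x, v, s0_sq, Qxx
```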
|