About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Aplicación y comparación de algoritmos de valoración de opciones reales para la evaluación de proyectos de generación eléctrica bajo incertidumbre / Application and Comparison of Real Options Valuation Algorithms for the Evaluation of Power Generation Projects under Uncertainty

Hidalgo Arancibia, Juan Francisco January 2014
Ingeniero Civil Industrial / This thesis develops a project evaluation methodology for use in electricity generation that captures the dynamics of an investment project's value, based on the evolution of the main market variable affecting profitability and using the Least Squares Monte Carlo and trinomial tree valuation techniques. This matters because projects in this sector are prioritized according to the most efficient available technology using a Net Present Value (NPV) analysis that ignores the dynamics to which project values are subject. Since electricity generation facilities are expected to remain in operation for 50% longer than their forecast useful life, a dynamic long-term evaluation that incorporates uncertainty becomes all the more relevant. To this end, an empirical analysis is made of electricity price behavior at a representative node of the Sistema Interconectado Central, the main electricity market in Chile. From this, the parameters of a model that captures these characteristics, known as Mean Reverting Jump Diffusion, are estimated, together with an operational simulation model used to determine long-term electricity price expectations. For the real options analysis, the Least Squares Monte Carlo and trinomial tree procedures, based on the estimated parameters of the price process, are used to design the optimal long-term investment strategy across the generation technologies available in Chile. The results include an analysis of the investment strategies for the various plants available for expanding Chile's energy matrix under current market conditions. In particular, the tree analysis suggests investing immediately in hydroelectric, coal-fired and diesel plants, and not investing in combined-cycle technology. The Least Squares Monte Carlo algorithm, by contrast, suggests waiting in all cases, since it estimates that the value of these projects will increase in the future relative to their current estimated values. The comparative analysis between the models therefore points to a relative upward bias in the Least Squares Monte Carlo algorithm. This bias is analyzed in greater depth, leading to the conclusion that as the number of simulations increases, the relative bias decreases, but at a decelerating rate. As a complement, a sensitivity analysis of the results is included using risk-hedging parameters, colloquially known as "the Greeks", calculated from the trinomial trees. These analyses indicate that the electricity price level, the volatility and the duration of the investment opportunity have a positive impact on the project value, which is consistent with real options theory: the greater the volatility, the more value a project can generate by exploiting its inherent flexibilities.
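The abstract names the two valuation techniques being compared. The Python sketch below shows, under illustrative assumptions only (toy mean-reverting jump-diffusion parameters, a hypothetical npv_if_invested payoff and an arbitrary polynomial regression basis, none of which come from the thesis), how a Least Squares Monte Carlo (Longstaff-Schwartz) valuation of an option to invest can be set up.

```python
# Hedged sketch: Least Squares Monte Carlo valuation of an option to invest,
# with electricity prices following a mean-reverting jump diffusion.
# All parameter values are illustrative assumptions, not calibrated estimates.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative price-process and option parameters (assumptions)
kappa, mu, sigma = 2.0, 60.0, 15.0                  # reversion speed, long-run level, volatility
jump_lambda, jump_mu, jump_sigma = 0.5, 0.0, 10.0   # jump intensity and jump-size law
r, T, n_steps, n_paths = 0.08, 10.0, 120, 20_000
dt = T / n_steps

# Simulate price paths: dP = kappa*(mu - P) dt + sigma dW + J dN
P = np.full((n_paths, n_steps + 1), mu)
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    dN = rng.poisson(jump_lambda * dt, n_paths)
    J = rng.normal(jump_mu, jump_sigma, n_paths) * dN
    P[:, t + 1] = P[:, t] + kappa * (mu - P[:, t]) * dt + sigma * dW + J

def npv_if_invested(price):
    """Toy project value as a function of the spot price (assumption)."""
    capacity, var_cost, capex = 100.0, 30.0, 20_000.0
    return capacity * np.maximum(price - var_cost, 0.0) * 8.0 - capex

# Longstaff-Schwartz backward induction for the option to invest
cashflow = npv_if_invested(P[:, -1]).clip(min=0.0)
for t in range(n_steps - 1, 0, -1):
    exercise = npv_if_invested(P[:, t])
    itm = exercise > 0
    disc = np.exp(-r * dt) * cashflow
    if itm.any():
        # Regress discounted continuation values on a polynomial basis of the price
        X = np.vander(P[itm, t], 4)                       # [P^3, P^2, P, 1]
        coef, *_ = np.linalg.lstsq(X, disc[itm], rcond=None)
        continuation = X @ coef
        exercise_now = exercise[itm] > continuation
        idx = np.where(itm)[0][exercise_now]
        disc[idx] = exercise[itm][exercise_now]
    cashflow = disc

option_value = np.exp(-r * dt) * cashflow.mean()
print(f"Option value to invest (illustrative): {option_value:,.0f}")
```

A trinomial-tree counterpart would replace the simulated paths and regression step with a recombining lattice and explicit backward induction, which is where the relative bias discussed in the abstract can be compared.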
172

Comparison of Two Vortex-in-cell Schemes Implemented to a Three-dimensional Temporal Mixing Layer

Sadek, Nabel January 2012
Numerical simulations are presented for three-dimensional viscous incompressible free shear flows. The numerical method is based on solving the vorticity equation using the Vortex-In-Cell method. In this method, the vorticity field is discretized into a finite set of Lagrangian elements (particles) and the computational domain is covered by an Eulerian mesh. The velocity field is computed on the mesh by solving a Poisson equation. The solution proceeds in time by advecting the particles with the flow. A second-order Adams-Bashforth method is used for time integration. Exchange of information between the Lagrangian particles and the Eulerian grid is carried out using the M4' interpolation scheme. The classical inviscid scheme is enhanced to account for stretching and viscous effects. To that end, two schemes are used. The first uses periodic remeshing of the vortex particles along with fourth-order finite difference approximations for the partial derivatives of the stretching and viscous terms. In the second scheme, derivatives are approximated by least squares polynomials. The novelty of this work lies in using the moving least squares technique within the framework of the Vortex-In-Cell method and implementing it for a three-dimensional temporal mixing layer. Comparisons of the mean flow and velocity statistics are made with experimental studies. The results confirm the validity of the present schemes. Both schemes demonstrate the capability to qualitatively capture the significant flow scales, and allow physical insight to be gained into the development of instabilities and the formation of three-dimensional vortex structures. The two schemes also show acceptably low numerical diffusion.
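As a point of reference for the particle-mesh transfer mentioned above, the sketch below implements the M4' interpolation kernel in 1D with an illustrative particle-to-grid assignment; the thesis applies the kernel in 3D as a tensor product, and the function names here are assumptions for illustration.

```python
# Hedged sketch: the M4' interpolation kernel commonly used in Vortex-In-Cell
# codes to transfer vorticity between Lagrangian particles and the Eulerian
# grid. The 1D grid assignment is illustrative; in 3D the kernel is applied as
# a tensor product over the three coordinate directions.
import numpy as np

def m4_prime(x):
    """M4' kernel: support |x| < 2 grid spacings, third-order accurate."""
    ax = np.abs(x)
    w = np.zeros_like(ax)
    inner = ax < 1.0
    outer = (ax >= 1.0) & (ax < 2.0)
    w[inner] = 1.0 - 2.5 * ax[inner] ** 2 + 1.5 * ax[inner] ** 3
    w[outer] = 0.5 * (2.0 - ax[outer]) ** 2 * (1.0 - ax[outer])
    return w

def particles_to_grid_1d(xp, strength, x_grid, h):
    """Spread particle strengths onto a uniform 1D grid with spacing h."""
    field = np.zeros_like(x_grid)
    for x, s in zip(xp, strength):
        field += s * m4_prime((x_grid - x) / h)
    return field

# Toy usage: one unit-strength particle between grid nodes
x_grid = np.arange(0.0, 10.0, 1.0)
print(particles_to_grid_1d([4.3], [1.0], x_grid, h=1.0))
```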
173

Optimization of Sampling Structure Conversion Methods for Color Mosaic Displays

Zheng, Xiang January 2014
Although many devices can be used to capture images of high resolution, there is still a need to show these images on displays with low resolution. Existing methods of subpixel-based down-sampling are reviewed in this thesis and their limitations are described. A new approach to optimizing sampling structure conversion for color mosaic displays is developed. Full color images are filtered by a set of optimal filters before down-sampling, resulting in better image quality according to the SCIELAB measure, a spatial extension of the CIELAB metric for perceptual color difference. The typical RGB stripe display pattern is used to derive the optimal filters using least-squares filter design. The new approach is also implemented on a widely used two-dimensional display pattern, the PenTile RGBG. Clear images are produced and color fringing artifacts are reduced. The quality of the down-sampled images is compared using SCIELAB and by visual inspection.
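To make the baseline concrete, here is a hedged sketch of direct subpixel-based down-sampling onto an RGB-stripe pattern; the crude vertical box pre-filter and the factor-of-3 geometry are illustrative assumptions, not the optimal least-squares filters designed in the thesis.

```python
# Hedged sketch: direct subpixel-based down-sampling onto an RGB-stripe display,
# the kind of baseline that optimal pre-filters are designed to improve on.
import numpy as np

def downsample_rgb_stripe(img, factor=3):
    """img: H x W x 3 float array. Returns an (H//factor) x (W//factor) x 3
    image whose R, G and B values are sampled from the input columns lying
    under the corresponding subpixels of an RGB-stripe display (factor=3 means
    one input column per subpixel), after a crude vertical box pre-filter."""
    H, W, _ = img.shape
    H2, W2 = H // factor, W // factor
    out = np.zeros((H2, W2, 3))
    for c in range(3):  # R, G, B subpixels sit at different horizontal offsets
        rows = img[:H2 * factor, :, c].reshape(H2, factor, W).mean(axis=1)
        out[:, :, c] = rows[:, c::factor][:, :W2]
    return out

# Toy usage on random data
img = np.random.default_rng(1).random((12, 12, 3))
print(downsample_rgb_stripe(img).shape)  # (4, 4, 3)
```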
174

Mínimos quadrados aplicados à super-resolução de vídeo / Least Squares Applied to Video Super-Resolution

Brandalise, Rodrigo 20 February 2014 (has links)
This work proposes an adaptation of the Least Squares method applied to Super-Resolution image reconstruction, aimed at real-time video (image sequence) reconstruction. Results demonstrate that the proposed implementation can perform better than the lowest-computational-cost algorithm in the literature, at the price of a small increase in the number of operations. Finally, the proposed structure lends itself to analysis: given a theoretical model of its behavior, optimal design parameters can be obtained, further improving the algorithm's performance.
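A minimal sketch of the least-squares idea follows, assuming a simple blur-plus-decimation degradation model with known registration and a fixed gradient-descent step; these assumptions and the function name are illustrative and do not reproduce the specific adaptation proposed in the thesis.

```python
# Hedged sketch: iterative least-squares multi-frame super-resolution,
# minimizing sum_k ||D H x - y_k||^2 by gradient descent, where H is a Gaussian
# blur and D is decimation by `factor`. Degradation model and step size are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def sr_least_squares(frames, factor=2, n_iters=50, step=0.5, sigma=1.0):
    """frames: list of low-resolution images assumed already registered to a
    common high-resolution grid."""
    hr_shape = (frames[0].shape[0] * factor, frames[0].shape[1] * factor)
    x = np.kron(np.mean(frames, axis=0), np.ones((factor, factor)))  # initial guess

    for _ in range(n_iters):
        grad = np.zeros(hr_shape)
        for y in frames:
            sim = gaussian_filter(x, sigma)[::factor, ::factor]  # D H x
            resid = np.zeros(hr_shape)
            resid[::factor, ::factor] = sim - y                  # D^T (D H x - y)
            grad += gaussian_filter(resid, sigma)                # H^T D^T (...)
        x -= step * grad / len(frames)
    return x

# Toy usage: two nearly identical low-resolution frames
rng = np.random.default_rng(0)
lr = rng.random((16, 16))
print(sr_least_squares([lr, lr + 0.01 * rng.standard_normal((16, 16))]).shape)
```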
175

Capital structure and determinants of capital structure, before, during and after the 2008 financial crisis: A South African study

Ntshobane, Gcobisa 15 September 2021 (has links)
This study examines the effects of the 2007/8 financial crisis on the capital structure determinants of Johannesburg Stock Exchange (JSE) listed companies in South Africa. Data extracted from the INET BFA Expert database were analyzed using Ordinary Least Squares regression models relating leverage to company size, growth, profitability, tangibility, liquidity and the non-debt tax shield, based on a sample of JSE-listed companies for the period 2004 to 2013. The study examined two industries, namely the real estate and retail industries. The results show that size, tangibility, profitability and liquidity had a significant impact on capital structure before, during and after the financial crisis. The growth results were inconsistent over the period under review, and the non-debt tax shield was found to be statistically insignificant. The study also shows that the 2007/8 crisis had a statistically significant effect on the capital structure of listed companies in South Africa.
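For illustration, a pooled OLS regression of leverage on the named determinants can be set up as below; the synthetic panel, coefficient values and sample size are assumptions standing in for the INET BFA data used in the study.

```python
# Hedged sketch: pooled OLS regression of leverage on capital structure
# determinants. Data are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # firm-year observations (assumption)
X = rng.standard_normal((n, 6))   # size, growth, profitability, tangibility,
                                  # liquidity, non-debt tax shield
beta_true = np.array([0.3, 0.05, -0.4, 0.25, -0.15, 0.0])
leverage = 0.4 + X @ beta_true + 0.1 * rng.standard_normal(n)

# OLS via least squares, with an intercept column
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, leverage, rcond=None)

names = ["intercept", "size", "growth", "profitability",
         "tangibility", "liquidity", "non-debt tax shield"]
for name, b in zip(names, coef):
    print(f"{name:>20s}: {b:+.3f}")
```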
176

Integrated Approach to Assess Supply Chains: A Comparison to the Process Control at the Firm Level

Karadağ, Mehmet Onur January 2011
This study considers whether or not optimizing process metrics and settings across a supply chain gives significantly different outcomes than optimization at the firm level. While the importance of supply chain integration has been shown in areas such as inventory management, this study appears to be the first empirical test for optimizing process settings. A Partial Least Squares (PLS) procedure is used to determine the crucial components, and the indicators that make up each component, in a supply chain system. PLS allows supply chain members to gain a greater understanding of the critical coordination components in a given supply chain. Results and implications give an indication of what performance is possible with supply-chain-wide optimization versus local optimization, on simulated and manufacturing data. It was found that pursuing an integrated approach over a traditional independent approach provides an improvement of 2% to 49% in predictive power for the supply chain under study.
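A hedged sketch of a PLS regression relating coordination indicators to a performance outcome is shown below; the data, the choice of two components and the variable interpretation are assumptions, not the study's actual measurement model.

```python
# Hedged sketch: Partial Least Squares regression of a performance outcome on
# supply-chain coordination indicators. Data and component count are
# illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 150
X = rng.standard_normal((n, 8))     # e.g. inventory, lead-time, information-sharing indicators
y = X[:, :3] @ np.array([0.6, 0.3, 0.2]) + 0.2 * rng.standard_normal(n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2 on the training data:", pls.score(X, y))
print("Indicator loadings on the first component:", pls.x_loadings_[:, 0])
```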
177

Linearized inversion frameworks toward high-resolution seismic imaging

Aldawood, Ali 09 1900 (has links)
Seismic exploration utilizes controlled sources, which emit seismic waves that propagate through the earth's subsurface and are reflected off subsurface interfaces and scatterers. The reflected and scattered waves are recorded by stations installed along the earth's surface or down boreholes. Seismic imaging is a powerful tool to map this reflected and scattered energy back to its subsurface scattering or reflection points. Seismic imaging is conventionally based on the single-scattering assumption, where only energy that bounces once off a subsurface scatterer before being recorded by a receiver is projected back to its subsurface position. Internally multiply scattered seismic energy is considered unwanted noise and is usually suppressed or removed from the recorded data. Conventional seismic imaging techniques yield subsurface images that suffer from low spatial resolution, migration artifacts, and an acquisition fingerprint due to the limited acquisition aperture, the number of sources and receivers, and the bandwidth of the source wavelet. Hydrocarbon traps are becoming more challenging, and considerable reserves are trapped in stratigraphic and pinch-out traps, which require highly resolved seismic images to delineate them. This thesis focuses on developing and implementing new, cost-effective seismic imaging techniques that aim to enhance the resolution of migrated images by exploiting the sparseness of the subsurface reflectivity distribution and utilizing the multiples that are usually neglected when imaging seismic data. I first formulate the seismic imaging problem as a basis pursuit denoising problem, which I solve using an L1-minimization algorithm to obtain the sparsest migrated image consistent with the recorded data. Imaging multiples may illuminate subsurface zones that are not easily illuminated by conventional seismic imaging using primary reflections only. I then develop an L2-norm (i.e. least-squares) inversion technique to image internally multiply scattered seismic waves and obtain highly resolved images delineating vertical faults that are otherwise not easily imaged by primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples into surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed, cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes better than the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.
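The core linearized inversion can be illustrated generically: solve min_m ||Lm - d||² with an iterative solver such as LSQR. In the sketch below, the dense random matrix is only a stand-in for the Born modelling/migration operator pair, and the sparse "reflectivity" model is an assumption for illustration.

```python
# Hedged sketch: generic least-squares linearized inversion, min_m ||L m - d||^2,
# solved with LSQR. The toy operator stands in for seismic modelling/migration.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
n_model, n_data = 200, 400
A = rng.standard_normal((n_data, n_model)) / np.sqrt(n_data)  # toy "modelling" matrix
m_true = np.zeros(n_model)
m_true[[40, 90, 150]] = [1.0, -0.5, 0.8]                      # sparse reflectivity spikes
d = A @ m_true + 0.01 * rng.standard_normal(n_data)

# Matrix-free operator, mimicking how migration codes expose forward/adjoint actions
L = LinearOperator((n_data, n_model), matvec=lambda m: A @ m,
                   rmatvec=lambda r: A.T @ r)

m_ls = lsqr(L, d, iter_lim=100)[0]   # least-squares "migrated image"
m_adj = A.T @ d                      # plain adjoint, analogous to conventional migration
print("LS inversion error :", np.linalg.norm(m_ls - m_true))
print("Adjoint-only error :", np.linalg.norm(m_adj - m_true))
```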
178

Low-Complexity Regularization Algorithms for Image Deblurring

Alanazi, Abdulrahman 11 1900 (has links)
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for cases where the point spread function (PSF) is separable, we propose using a Kronecker product to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problem by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square-root regularized total variation (SRTV) formulation. Experimental results demonstrate that our proposed method outperforms the benchmark methods when applied to smooth images in terms of PSNR, SSIM and restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, the developed algorithms also work for blind image deblurring. Experimental results show that our proposed methods are robust in the blind deblurring setting and outperform the other benchmark methods in terms of both output PSNR and SSIM values.
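A minimal sketch of the regularized least-squares step follows, assuming a circular-convolution blur model so the closed-form solution can be written in the Fourier domain; the fixed regularization parameter is an assumption, whereas choosing it near-optimally is precisely the thesis's contribution.

```python
# Hedged sketch: Tikhonov-regularized least-squares deblurring under a
# periodic (circular convolution) model, solved in the Fourier domain.
import numpy as np

def rls_deblur(blurred, psf, lam=1e-2):
    """Solve min_x ||h * x - y||^2 + lam * ||x||^2, assuming periodic boundaries."""
    H = np.fft.fft2(psf, s=blurred.shape)          # transfer function of the known blur
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)    # closed-form RLS solution
    return np.real(np.fft.ifft2(X))

# Toy usage: blur a random image with a small box PSF, then deblur it
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(psf, s=img.shape) * np.fft.fft2(img)))
restored = rls_deblur(blurred, psf, lam=1e-3)
print("relative residual after restoration:",
      np.linalg.norm(restored - img) / np.linalg.norm(img))
```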
179

Kinetika degradace inkjetových barviv / Kinetics of Inkjet Dyes Degradation

Buteková, Silvia January 2015
The stability of inkjet prints is influenced by many factors, whose mutual interaction accelerates print degradation. The surrounding environment plays an important role in image stability, as prints degrade especially under light exposure. The degradation of inkjet prints manifests itself as a decrease of one or more dyes. To predict the dye decrease over time, it is necessary to know the dye concentration. This dissertation deals with the kinetics of, and changes in, the electron and molecular structure of digital photography prints after accelerated ageing tests. The resistance of inkjet prints was studied on one type of media using three different sets of inks. Changes in the printed colours were measured and evaluated through calibration (by PLS calibration and the least squares method). On the basis of this calibration, the predicted dye decrease in the receiving layer of real samples was evaluated. Changes in electron and molecular structure were analysed on KBr pellets by FTIR and UV-Vis spectroscopy.
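As an illustration of least-squares calibration for dye concentrations, the sketch below uses a classical Beer-Lambert-style model with two synthetic dye bands; the spectra, noise level and number of dyes are assumptions and stand in for the PLS calibration actually used in the thesis.

```python
# Hedged sketch: classical least-squares calibration predicting dye
# concentrations from measured spectra. Spectra and dye bands are synthetic.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 151)

def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

K = np.column_stack([band(530, 40), band(620, 35)])   # pure spectra of two dyes

# Calibration set: known concentrations and noisy mixture spectra (one per row)
C_cal = rng.uniform(0.0, 1.0, size=(20, 2))
A_cal = C_cal @ K.T + 0.005 * rng.standard_normal((20, len(wavelengths)))

# Estimate the pure-component spectra from the calibration data ...
K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)   # shape (2, n_wavelengths)

# ... and predict the dye concentrations of a new, partially degraded sample
c_true = np.array([0.7, 0.2])
a_new = K @ c_true + 0.005 * rng.standard_normal(len(wavelengths))
c_pred, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)
print("true:", c_true, "predicted:", np.round(c_pred, 3))
```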
180

Information and distances

Epstein, Samuel Randall 23 September 2015 (has links)
We prove that all randomized sampling methods produce outliers. Given a computable measure P over the natural numbers or infinite binary sequences, there is no method that can produce an arbitrarily large sample such that all of its members are typical of P. The second part of this dissertation describes a computationally inexpensive method to approximate Hilbertian distances. This method combines the semi-least squares inverse technique with the canonical modern machine learning technique known as the kernel trick. In the task of distance approximation, our method was shown to be comparable in performance to a solution employing the Nyström method. Using the kernel semi-least squares method, we developed and incorporated the Kernel-Subset-Tracker into the Camera Mouse, a video-based mouse replacement software for people with movement disabilities. The Kernel-Subset-Tracker is an exemplar-based method that uses a training set of representative images to produce online templates for positional tracking. Our experiments with test subjects show that augmenting the Camera Mouse with the Kernel-Subset-Tracker yields a statistically significant improvement in communication bandwidth.
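The Hilbertian distance that the kernel semi-least squares method approximates can be written directly with the kernel trick, as in this small sketch; the RBF kernel and the sample points are illustrative assumptions.

```python
# Hedged sketch: the kernel-trick form of a Hilbertian (feature-space) distance,
# d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y).
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def hilbertian_distance(x, y, kernel=rbf):
    """Distance between the images of x and y in the kernel's feature space."""
    return np.sqrt(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y))

x = np.array([0.0, 1.0])
y = np.array([0.5, 0.2])
print(hilbertian_distance(x, y))
```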
