  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Integrated Approach to Assess Supply Chains: A Comparison to the Process Control at the Firm Level

Karadağ, Mehmet Onur January 2011 (has links)
This study considers whether optimizing process metrics and settings across a supply chain gives significantly different outcomes than optimization at the firm level. While the importance of supply chain integration has been shown in areas such as inventory management, this study appears to be the first empirical test of optimizing process settings. A Partial Least Squares (PLS) procedure is used to determine the crucial components, and the indicators that make up each component, in a supply chain system. PLS allows supply chain members to gain a greater understanding of the critical coordination components in a given supply chain. Results and implications indicate what performance is possible with supply chain optimization versus local optimization on simulated and manufacturing data. It was found that pursuing an integrated approach over a traditional independent approach improves predictive power by 2% to 49% for the supply chain under study.
132

Linearized inversion frameworks toward high-resolution seismic imaging

Aldawood, Ali 09 1900 (has links)
Seismic exploration utilizes controlled sources, which emit seismic waves that propagate through the earth's subsurface and are reflected off subsurface interfaces and scatterers. The reflected and scattered waves are recorded by stations installed along the earth's surface or down boreholes. Seismic imaging is a powerful tool to map this reflected and scattered energy back to its subsurface scattering or reflection points. Seismic imaging is conventionally based on the single-scattering assumption, where only energy that bounces once off a subsurface scatterer and is recorded by a receiver is projected back to its subsurface position. Internally multiply scattered seismic energy is treated as unwanted noise and is usually suppressed or removed from the recorded data. Conventional seismic imaging techniques yield subsurface images that suffer from low spatial resolution, migration artifacts, and acquisition fingerprint due to the limited acquisition aperture, number of sources and receivers, and bandwidth of the source wavelet. Hydrocarbon traps are becoming more challenging, and considerable reserves are trapped in stratigraphic and pinch-out traps, which require highly resolved seismic images to delineate them. This thesis focuses on developing and implementing new, cost-effective seismic imaging techniques that aim to enhance the resolution of migrated images by exploiting the sparseness of the subsurface reflectivity distribution and by utilizing the multiples that are usually neglected when imaging seismic data. I first formulate the seismic imaging problem as a basis pursuit denoising problem, which I solve using an L1-minimization algorithm to obtain the sparsest migrated image consistent with the recorded data. Imaging multiples may illuminate subsurface zones that are not easily illuminated by conventional seismic imaging using primary reflections only. I then develop an L2-norm (i.e. least-squares) inversion technique to image internally multiply scattered seismic waves, obtaining highly resolved images that delineate vertical faults not easily imaged by primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples into surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed, cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes better than the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.
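The sparsity-promoting formulation can be sketched on a toy linear system. The random operator, problem sizes, and the ISTA solver below are illustrative assumptions; the thesis works with seismic migration operators and a basis pursuit denoising solver:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward operator A (a stand-in for the demigration operator)
# and a sparse "reflectivity" x_true observed through noisy data b.
m, n, k = 60, 120, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
b = A @ x_true + 0.01 * rng.normal(size=m)

# ISTA: iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1,
# the Lagrangian form of basis pursuit denoising.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Even though the system is underdetermined (60 equations, 120 unknowns), the L1 penalty recovers the sparse vector, which is the same principle behind seeking the sparsest migrated image.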
133

Low-Complexity Regularization Algorithms for Image Deblurring

Alanazi, Abdulrahman 11 1900 (has links)
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem, proposed to find a near-optimal value of the regularization parameter in RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of output PSNR, structural similarity, and the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product to reduce the computations. Furthermore, in the case where the image is smooth, it is often desirable to replace the regularization term in the RLS problem by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square-root regularized total variation (SRTV) formulation. Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM, and restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we also developed algorithms that work in the blind image deblurring setting, where the kernel is unknown. Experimental results show that our proposed methods are robust in the blind deblurring case and outperform the other benchmark methods in terms of both output PSNR and SSIM values.
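A minimal sketch of the RLS idea on a 1-D non-blind deblurring toy problem. The Gaussian PSF, the circular-convolution model, and the fixed regularization parameter are illustrative assumptions; the thesis's contribution is precisely choosing that parameter near-optimally:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy non-blind deblurring in 1-D: the blur kernel (PSF) is known, and the
# signal is restored by regularized least squares solved in the Fourier domain.
n = 256
x_true = np.zeros(n)
x_true[60:120] = 1.0                                  # a sharp-edged signal
psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.roll(np.pad(psf, (0, n - psf.size)), -8))  # centred kernel
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + 0.01 * rng.normal(size=n)

def rls_deblur(y, H, lam):
    """Closed-form minimizer of ||h * x - y||^2 + lam * ||x||^2 per frequency."""
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

x_hat = rls_deblur(y, H, lam=1e-2)
err_blurred = np.linalg.norm(y - x_true)
err_deblurred = np.linalg.norm(x_hat - x_true)
print(f"error before: {err_blurred:.3f}, after: {err_deblurred:.3f}")
```

The quality of the restoration hinges on `lam`: too small amplifies noise, too large re-blurs the edges, which is why estimating it well (as the abstract describes) matters.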
134

Kinetika degradace inkjetových barviv / Kinetics of Inkjet Dyes Degradation

Buteková, Silvia January 2015 (has links)
The stability of an inkjet print is influenced by many factors, and their combined effects accelerate print degradation. The surrounding environment plays an important role in image stability, as prints degrade especially under light exposure. The degradation of inkjet prints manifests as a decrease of one or more dyes, so predicting dye loss over time requires knowing the dye concentration. This dissertation thesis deals with the study of the kinetics, and of the changes in electronic and molecular structure, of digital photography prints after accelerated ageing tests. The resistance of inkjet prints was studied on one type of media using three different sets of inks. Changes in the printed colours were measured and evaluated by calibration (PLS calibration and the least squares method). On the basis of this calibration, the predicted dye decrease in the receiving layer of real samples was evaluated. Changes in electronic and molecular structure were analysed on KBr pellets by FTIR and UV-Vis spectroscopy.
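Predicting dye decrease over time amounts to fitting a kinetic model to measured concentrations. A minimal sketch, assuming hypothetical data and first-order kinetics (so that log-concentration is linear in exposure time):

```python
import numpy as np

# Hypothetical relative dye concentrations after increasing light exposure;
# first-order kinetics c(t) = c0 * exp(-k*t) is assumed, so ln(c) is linear in t.
t = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # exposure time (h)
c = np.array([1.00, 0.82, 0.67, 0.45, 0.20])  # relative dye concentration

slope, intercept = np.polyfit(t, np.log(c), 1)  # linear least squares
k = -slope                                       # rate constant (1/h)
c0 = np.exp(intercept)
half_life = np.log(2) / k

print(f"k = {k:.4f} 1/h, half-life = {half_life:.1f} h")
```

With the rate constant in hand, the concentration at any future exposure dose follows directly from the fitted model, which is the "dye decrease prediction" role that calibration plays in the abstract.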
135

Information and distances

Epstein, Samuel Randall 23 September 2015 (has links)
We prove that all randomized sampling methods produce outliers: given a computable measure P over natural numbers or infinite binary sequences, there is no method that can produce an arbitrarily large sample such that all its members are typical of P. The second part of this dissertation describes a computationally inexpensive method to approximate Hilbertian distances. This method combines the semi-least-squares inverse technique with the canonical modern machine learning technique known as the kernel trick. In the task of distance approximation, our method was shown to be comparable in performance to a solution employing the Nyström method. Using the kernel semi-least-squares method, we developed and incorporated the Kernel-Subset-Tracker into the Camera Mouse, a video-based mouse-replacement software for people with movement disabilities. The Kernel-Subset-Tracker is an exemplar-based method that uses a training set of representative images to produce online templates for positional tracking. Our experiments with test subjects show that augmenting the Camera Mouse with the Kernel-Subset-Tracker yields a statistically significant improvement in communication bandwidth.
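The flavour of kernel-based Hilbertian distance approximation can be sketched as follows. The RBF kernel, landmark count, and the use of an eigenvalue pseudoinverse standing in for the semi-least-squares inverse are assumptions for illustration, not the dissertation's exact construction:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.normal(size=(100, 2))
landmarks = X[:15]                       # small representative subset

# Low-rank feature map phi(x) ~ K(x, L) @ K(L, L)^{-1/2}; the pseudoinverse of
# the landmark Gram matrix plays the role of a (semi-)least-squares inverse.
K_ll = rbf(landmarks, landmarks)
K_xl = rbf(X, landmarks)
w, V = np.linalg.eigh(K_ll)
w = np.maximum(w, 1e-10)                 # guard against tiny eigenvalues
embed = K_xl @ V @ np.diag(w ** -0.5)    # 100 x 15 approximate feature map

# Exact squared Hilbertian distance for an RBF kernel:
# k(x,x) - 2 k(x,y) + k(y,y) = 2 - 2 k(x,y).
D2_exact = 2.0 - 2.0 * rbf(X, X)
D2_approx = ((embed[:, None, :] - embed[None, :, :]) ** 2).sum(-1)

mean_abs_err = np.abs(D2_exact - D2_approx).mean()
print(f"mean |error| in squared distances: {mean_abs_err:.3f}")
```

The payoff is computational: distances among n points cost O(n * m) kernel evaluations against m landmarks instead of O(n^2), the same trade the dissertation compares against the Nyström method.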
136

The evaluation and readjustment of the VPI-CE horizontal control network

Rheinhart, Brian K. January 1981 (has links)
The main objective of the VPI-CE control network is to contribute to the National Geodetic Survey (NGS) control network. In order to meet this objective, large amounts of survey data were accumulated at different times from various surveys between 1977 and 1980. Each set of survey data was reduced and adjusted by least squares independently, creating various "sub" control networks that were connected to each other piecemeal. When these "sub" control networks were connected, it was found that they did not meet the objective stated above. The purpose of this project is to examine and check all survey data, adjust all data as one set to the NGS control network, and evaluate the adjusted data to see whether the survey meets second-order class II traverse specifications as established by the NGS. Included in this paper are the following: a background on NGS specifications; least squares theory, including observation equations and error theory; a description of how data for the project were accumulated and reduced; the adjustment of the reduced survey data; results and analysis of the adjustment; and conclusions and recommendations for the survey. / Master of Engineering
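The observation-equation form of least squares mentioned above can be sketched on a toy height network. The network, observation values, and weights below are illustrative, not the VPI-CE data:

```python
import numpy as np

# Toy least-squares adjustment: three height differences observed among two
# unknown benchmarks B1, B2 and a fixed datum point at height 0. With
# observation equations l + v = A x, the adjusted parameters solve the
# weighted normal equations x = (A^T P A)^{-1} A^T P l.
A = np.array([[1.0, 0.0],    # datum -> B1
              [-1.0, 1.0],   # B1 -> B2
              [0.0, 1.0]])   # datum -> B2
l = np.array([10.02, 5.11, 15.10])   # observed height differences (m)
P = np.diag([1.0, 2.0, 1.0])         # weights (e.g. inverse variances)

N = A.T @ P @ A                       # normal matrix
x = np.linalg.solve(N, A.T @ P @ l)   # adjusted heights of B1, B2
v = A @ x - l                         # residuals
sigma0_sq = (v @ P @ v) / (len(l) - len(x))  # variance of unit weight

print("adjusted heights:", x, "residuals:", v)
```

The redundant third observation exposes the 3 cm misclosure (10.02 + 5.11 vs 15.10), and the adjustment distributes it according to the weights; the same machinery, at vastly larger scale, underlies adjusting a whole traverse network to NGS control.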
137

Health consciousness, environmental concern and animal welfare as key predictors of consumers' locus of control and attitudes towards meat consumption: a case of the Generation Y cohort, in South Africa

Khan, Mohammed Zayaad January 2019 (has links)
A research report submitted to the Faculty of Commerce, Law and Management, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Commerce (specialising in Marketing) / The twentieth-century dietary evolution has resulted in livestock being used as the primary source of protein in many countries. This has various implications for the wellbeing of humans, animals, and planet earth itself. The choices consumers make regarding food not only affect their personal health but also directly affect the wellbeing of current ecosystems, where modern meat production systems place a worrying burden on the environment. Sustainable consumption practice is often a result of two main driving forces: individual or health (egoistic) motives, and animal welfare and environmental concern (altruistic) motives. The growing demand for meat products worldwide is unsustainable, and there is a clear gap between our responsible intentions as citizens of the world and our hedonic needs as consumers, referred to as the 'Citizen-Consumer' gap. Experts argue that technological innovations and more efficient production methods would serve as a future solution for the environmental and social implications of the livestock industry; however, current scholars emphasise that a technological fix will not be sufficient and that it is imperative for society to undertake a behavioural fix, such as lowering meat intake and discovering more sustainable means of protein consumption. Consequently, the purpose of this study is to assess health consciousness, environmental concern and animal welfare as key predictors of consumers' locus of control and attitudes towards meat consumption among university students in Johannesburg, South Africa. The study used a quantitative design, employing a deductive approach within a positivist method.
Data was collected by means of a survey questionnaire and was used to test the hypotheses. By means of Partial Least Squares – Structural Equation Modeling (PLS-SEM), the significance of the hypotheses statements was determined from a sample of 172 students enrolled at the University of the Witwatersrand, Johannesburg, South Africa. The findings generated by SmartPLS 3 statistical software revealed that health consciousness was the key predictor of external locus of control and that external locus of control had the most significant relationship with consumers’ attitude toward meat consumption, as compared to the second mediator variable – internal locus of control. It is anticipated that the findings of this study will contribute to both theory and practice in modern society, and it is trusted that the findings of this study will greatly inform future research endeavours. / NG (2020)
138

Computer identification and control of a heat exchanger

Munteanu, Corneliu Ioan. January 1975 (has links)
No description available.
139

Permutation recovery in shuffled total least squares regression

Wang, Qian 27 September 2023 (has links)
Shuffled linear regression concerns linear models with an unknown correspondence between the input and the output. This correspondence is usually represented by a permutation matrix Π*. The model we are interested in has one further complication: the design matrix is itself latent and is observed with noise. This is a type of errors-in-variables (EIV) model. Our interest lies in the recovery of the permutation matrix. We propose an estimator for Π* based on the total least squares (TLS) technique, a common method of estimation in EIV models. The estimation problem can be viewed as approximating one matrix by another of lower rank, and the quantity it seeks to minimize is the sum of the squared smallest singular values. Due to an identifiability issue, we evaluate the proposed estimator by the normalized Procrustes quadratic loss, which allows for an orthogonal rotation of the estimated design matrix. Our main result provides an upper bound on this quantity, which shows that the signal-to-noise ratio must go to infinity for the loss to go to zero. On the computational front, since the permutation recovery problem is NP-hard, we propose a simple and efficient algorithm, the alternating LAP/TLS algorithm (ALTA), to approximate the estimator, and we use it to empirically examine the main result. The main idea of the algorithm is to alternate between estimating the unknown coefficient matrix using the TLS method and estimating the latent permutation matrix by solving a linear assignment problem (LAP), which runs in polynomial time. Lastly, we propose a hypothesis testing procedure based on graph matching, which we apply in the field of digital humanities to character social networks constructed from novel series.
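The two alternating steps behind ALTA can be sketched on a toy partially shuffled problem. The pilot set with known correspondence, the sizes, and the noise levels are assumptions made so the sketch is deterministic; the thesis's algorithm alternates the two steps repeatedly from scratch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)

# Toy shuffled EIV regression: responses are a permuted linear transform of a
# design matrix that is itself observed with noise. The first rows are assumed
# to have known correspondence (a pilot set) for this illustration.
n_known, n_shuf, d = 20, 10, 3
A_true = rng.normal(size=(n_known + n_shuf, d))
x_true = rng.normal(size=d)
f = A_true @ x_true
perm = rng.permutation(n_shuf)                      # unknown shuffling
y = np.concatenate([f[:n_known], f[n_known:][perm]])
y += 1e-3 * rng.normal(size=y.size)
A_obs = A_true + 1e-3 * rng.normal(size=A_true.shape)  # errors-in-variables design

def tls(A, b):
    """Total least squares: smallest right singular vector of [A | b]."""
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]
    return -v[:-1] / v[-1]

# TLS step: estimate the coefficients from the pilot set.
x_hat = tls(A_obs[:n_known], y[:n_known])

# LAP step: match each shuffled observation to its closest fitted value.
fit = A_obs[n_known:] @ x_hat
cost = (y[n_known:, None] - fit[None, :]) ** 2      # n_shuf x n_shuf cost matrix
_, col = linear_sum_assignment(cost)                # y[n_known + i] <-> row col[i]

accuracy = np.mean(col == perm)                     # fraction correctly matched
x_refit = tls(np.vstack([A_obs[:n_known], A_obs[n_known:][col]]), y)
print(f"permutation recovery accuracy: {accuracy:.2f}")
```

The LAP step is the polynomial-time ingredient (the Hungarian method inside `linear_sum_assignment`), while the TLS step is the SVD-based low-rank approximation the abstract describes.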
140

Examining the Decision Process and Outcomes of System Development Methodology Adoption

Griffin, Audrey S. 27 April 2008 (has links)
No description available.
