1201

MSE-based Linear Transceiver Designs for Multiuser MIMO Wireless Communications

Tenenbaum, Adam 11 January 2012 (has links)
This dissertation designs linear transceivers for the multiuser downlink in multiple-input multiple-output (MIMO) systems. The designs rely on an uplink/downlink duality for the mean squared error (MSE) of each individual data stream. We first consider the design of transceivers assuming channel state information (CSI) at the transmitter. We consider minimization of the sum-MSE over all users subject to a sum power constraint on each transmission. Using MSE duality, we solve a computationally simpler convex problem in a virtual uplink. The transformation back to the downlink is simplified by our demonstrating the equality of the optimal power allocations in the uplink and downlink. Our second set of designs maximize the sum throughput for all users. We establish a series of relationships linking MSE to the signal-to-interference-plus-noise ratios of individual data streams and the information theoretic channel capacity under linear minimum MSE decoding. We show that minimizing the product of MSE matrix determinants is equivalent to sum-rate maximization, but we demonstrate that this problem does not admit a computationally efficient solution. We simplify the problem by minimizing the product of mean squared errors (PMSE) and propose an iterative algorithm based on alternating optimization with near-optimal performance. The remainder of the thesis considers the more practical case of imperfections in CSI. First, we consider the impact of delay and limited-rate feedback. We propose a system which employs Kalman prediction to mitigate delay; feedback rate is limited by employing adaptive delta modulation. Next, we consider the robust design of the sum-MSE and PMSE minimizing precoders with delay-free but imperfect estimates of the CSI. We extend the MSE duality to the case of imperfect CSI, and consider a new optimization problem which jointly optimizes the energy allocations for training and data stages along with the sum-MSE/PMSE minimizing transceivers. We prove the separability of these two problems when all users have equal estimation error variances, and propose several techniques to address the more challenging case of unequal estimation errors.
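The link between per-stream MSE and SINR that the rate-maximizing designs rely on (MSE_k = 1/(1 + SINR_k) under linear MMSE reception) can be checked numerically. The sketch below is an illustration of that identity with randomly generated channels and precoders; it is not code from the dissertation, and the dimensions, noise level and power normalization are arbitrary assumptions.

```python
# Numerical check of the per-stream identity MSE_k = 1/(1 + SINR_k) under
# linear MMSE reception (single stream per user, random channels/precoders).
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, n_users, sigma2 = 4, 2, 3, 0.1

# Random complex downlink channels H_k and unit-power precoders b_k (columns of B).
H = [rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
     for _ in range(n_users)]
B = rng.standard_normal((n_tx, n_users)) + 1j * rng.standard_normal((n_tx, n_users))
B /= np.linalg.norm(B, axis=0)

for k in range(n_users):
    hk = H[k] @ B[:, k]                                                  # effective channel of stream k
    Ry = H[k] @ B @ B.conj().T @ H[k].conj().T + sigma2 * np.eye(n_rx)   # received covariance at user k
    Ri = Ry - np.outer(hk, hk.conj())                                    # interference-plus-noise covariance
    mse  = np.real(1 - hk.conj() @ np.linalg.solve(Ry, hk))              # MMSE of stream k
    sinr = np.real(hk.conj() @ np.linalg.solve(Ri, hk))                  # SINR of stream k
    print(f"stream {k}: MSE = {mse:.4f}   1/(1+SINR) = {1.0 / (1.0 + sinr):.4f}")
```

Because each per-stream MSE equals 1/(1 + SINR), minimizing a product of MSEs corresponds to maximizing the sum of log(1 + SINR) terms, which is the sense in which the PMSE criterion tracks the sum rate.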
1202

3D imaging using time-correlated single photon counting

Neimert-Andersson, Thomas January 2010 (has links)
This project investigates a laser radar system. The system is based on the principles of time-correlated single photon counting, and by measuring the times-of-flight of reflected photons it can find range profiles and perform three-dimensional imaging of scenes. Because of the photon counting technique, the resolution and precision that the system can achieve are very high compared to analog systems. These properties make the system interesting for many military applications. For example, the system can be used to interrogate non-cooperative targets at a safe distance in order to gather intelligence. However, signal processing is needed in order to extract the information from the data acquired by the system. This project focuses on the analysis of different signal processing methods. The Wiener filter and the Richardson-Lucy algorithm are used to deconvolve the data acquired by the photon counting system. In order to find the positions of potential targets, different non-linear least squares approaches are tested, as well as a more unconventional method called ESPRIT. The methods are evaluated based on their ability to resolve two targets separated by some known distance and the accuracy with which they calculate the position of a single target, as well as their robustness to noise and their computational burden. Results show that fitting a curve made of a linear combination of asymmetric super-Gaussians to the data by non-linear least squares accurately resolves targets separated by 1.75 cm, which is the best result of all the methods tested. The accuracy for finding the position of a single target is similar across the methods, but ESPRIT has a much faster computation time.
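As an aside for readers unfamiliar with the deconvolution step, the sketch below is a minimal 1D Richardson-Lucy iteration applied to a synthetic photon-count histogram with two closely spaced returns. The instrument response, bin counts and iteration count are made-up assumptions for illustration, not the system parameters used in the project.

```python
# Minimal 1D Richardson-Lucy deconvolution of a synthetic time-of-flight histogram.
import numpy as np

def richardson_lucy(counts, psf, n_iter=500):
    """Iteratively estimate the range profile from measured counts and the
    (normalised) instrument response function."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(counts, counts.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = counts / np.maximum(blurred, 1e-12)        # avoid division by zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Synthetic histogram: two surfaces 20 bins apart, Gaussian instrument response, Poisson noise.
truth = np.zeros(400); truth[180] = 1000.0; truth[200] = 600.0
irf = np.exp(-0.5 * (np.arange(-30, 31) / 6.0) ** 2)
measured = np.random.default_rng(1).poisson(
    np.convolve(truth, irf / irf.sum(), mode="same") + 2.0)

profile = richardson_lucy(measured.astype(float), irf)
print("strongest bins in the recovered profile:", np.argsort(profile)[-2:])
```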
1203

Explorative Multivariate Data Analysis of the Klinthagen Limestone Quarry Data / Utforskande multivariat analys av Klinthagentäktens projekteringsdata

Bergfors, Linus January 2010 (has links)
The quarry planning at Klinthagen today is rough, which provides an opportunity to introduce new, exciting methods to improve quarry yield and efficiency. Nordkalk AB, active at Klinthagen, wishes to start a new quarry at a nearby location. To exploit future quarries in an efficient manner and ensure production quality, multivariate statistics may help gather important information. In this thesis the possibilities of the multivariate statistical approaches of Principal Component Analysis (PCA) and Partial Least Squares (PLS) regression were evaluated on the Klinthagen bore data. PCA data were spatially interpolated by Kriging, which was also evaluated and compared to IDW interpolation. Principal component analysis supplied an overview of the variables' relations, but also visualised the problems involved in linking geophysical data to geochemical data and the inaccuracy introduced by poor data quality. The PLS regression further emphasised the geochemical-geophysical problems, but also showed good precision when applied to strictly geochemical data. Spatial interpolation by Kriging did not result in significantly better approximations than the less complex control interpolation by IDW. In order to improve the information content of the data when modelled by PCA, a more discrete sampling method would be advisable. The data quality may cause trouble, though with today's sampling technique it was considered to be of minor consequence. To predict a single geophysical component from chemical variables, further geophysical data are needed to complement the existing data and achieve satisfactory PLS models. The stratified rock composition caused problems when spatially interpolated. Further investigations should be performed to develop more suitable interpolation techniques.
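For orientation, a minimal sketch of the PCA/PLS workflow described above is given below, using scikit-learn on a synthetic table standing in for bore-hole samples; the variables and data are hypothetical, not the Klinthagen data set.

```python
# PCA for an overview of variable relations, then PLS regression of one response
# on the same variables (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 200
X = rng.standard_normal((n_samples, 6))          # hypothetical geochemical variables
y = X @ np.array([0.8, -0.3, 0.1, 0.0, 0.4, -0.2]) + 0.1 * rng.standard_normal(n_samples)

Xs = StandardScaler().fit_transform(X)           # centre and scale before PCA/PLS
pca = PCA(n_components=3).fit(Xs)
print("explained variance ratios:", pca.explained_variance_ratio_)

pls = PLSRegression(n_components=2).fit(Xs, y)
print("PLS R^2 on the training data:", pls.score(Xs, y))
```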
1204

Hyperspectral Image Analysis Algorithm for Characterizing Human Tissue

Wondim, Yonas kassaw January 2011 (has links)
In the field of biomedical optics, measurement of tissue optical properties, like absorption, scattering, and the reduced scattering coefficient, has gained importance for therapeutic and diagnostic applications. Accuracy in determining the optical properties is of vital importance for quantitatively determining chromophores in tissue. There are different techniques used to quantify tissue chromophores. Reflectance spectroscopy is one of the most common methods to rapidly and accurately characterize the blood amount and oxygen saturation in the microcirculation. With a hyperspectral imaging (HSI) device it is possible to capture images with spectral information that depends on both tissue absorption and scattering. To analyze this data, software that accounts for both absorption and scattering events needs to be developed. In this thesis work an HSI algorithm, capable of assessing tissue oxygenation while accounting for both tissue absorption and scattering, is developed. The complete imaging system comprises: a light source, a liquid crystal tunable filter (LCTF), a camera lens, a CCD camera, control units and power supply for the light source and filter, and a computer. This work also presents a graphics processing unit (GPU) implementation of the developed HSI algorithm, which is found to be computationally demanding. The GPU implementation is found to outperform the Matlab "lsqnonneg" function by a factor of 5-7x. Finally, the HSI system and the developed algorithm are evaluated in two experiments. In the first experiment the concentration of chromophores is assessed while occluding the finger tip. In the second experiment the skin is provoked by UV light while checking for erythema development by analyzing the oxyhemoglobin image at different points in time. In this experiment the melanin concentration change is also checked at different points in time from exposure. It is found that the result matches the theory in the time-dependent change of oxyhemoglobin and deoxyhemoglobin. However, the result for melanin does not correspond to the theoretically expected result.
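The Matlab "lsqnonneg" comparison refers to non-negative least squares fitting of chromophore contributions; a minimal CPU-side sketch of that kind of unmixing, using SciPy's equivalent routine on made-up basis spectra, is shown below. The basis spectra are placeholders, not real extinction curves, and this is not the thesis' algorithm.

```python
# Non-negative least squares unmixing of one synthetic reflectance spectrum.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(500, 700, 50)                   # nm
# Hypothetical basis spectra (columns): oxyhaemoglobin-like, deoxy-like, melanin-like.
basis = np.column_stack([
    np.exp(-((wavelengths - 540) / 30.0) ** 2),
    np.exp(-((wavelengths - 580) / 25.0) ** 2),
    np.exp(-(wavelengths - 500) / 120.0),
])
true_conc = np.array([0.7, 0.2, 0.5])
spectrum = basis @ true_conc + 0.01 * np.random.default_rng(2).standard_normal(len(wavelengths))

conc, residual = nnls(basis, spectrum)                    # non-negative fit, per pixel
print("estimated concentrations:", conc, " residual:", residual)
```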
1205

AN EMPIRICAL STUDY OF DIFFERENT BRANCHING STRATEGIES FOR CONSTRAINT SATISFACTION PROBLEMS

Park, Vincent Se-jin January 2004 (has links)
Many real life problems can be formulated as constraint satisfaction problems (CSPs). Backtracking search algorithms are usually employed to solve CSPs, and in backtracking search the choice of branching strategy can be critical since it specifies how a search algorithm can instantiate a variable and how a problem can be reduced into subproblems; that is, it defines a search tree. In spite of the apparent importance of the branching strategy, there have been only a few empirical studies about different branching strategies and they all have been tested exclusively for numerical constraints. In this thesis, we employ the three most commonly used branching strategies in solving finite domain CSPs. These branching strategies are described as follows: first, a branching strategy with strong commitment assigns its variables in the early stage of the search as in k-Way branching; second, 2-Way branching guides a search by branching one side with assigning a variable and the other with eliminating the assigned value; third, the domain splitting strategy, based on the least commitment principle, branches by dividing a variable's domain rather than by assigning a single value to a variable. In our experiments, we compared the efficiency of different branching strategies in terms of their execution times and the number of choice points in solving finite domain CSPs. Interestingly, our experiments provide evidence that the choice of branching strategy for finite domain problems does not matter much in most cases--provided we are using an effective variable ordering heuristic--as domain splitting and 2-Way branching end up simulating k-Way branching. However, for an optimization problem with large domain size, the branching strategy with the least commitment principle can be more efficient than the other strategies. This empirical study will hopefully interest other practitioners to take different branching schemes into consideration in designing heuristics.
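To make the branching terminology concrete, the toy sketch below implements 2-Way branching (one branch assigns x = v, the other refutes it by removing v from the domain) with a smallest-domain-first variable ordering on a small graph-colouring CSP. It is an illustrative toy, not the solver or benchmark suite used in the thesis.

```python
# Toy 2-Way branching backtracking search for a binary-constraint CSP.
def two_way_search(domains, constraints, assignment=None):
    """Pick a variable and a value; branch on 'x = v' and on 'x != v'."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    # Smallest-domain-first variable ordering heuristic.
    var = min((v for v in domains if v not in assignment), key=lambda v: len(domains[v]))
    if not domains[var]:
        return None                                       # dead end: empty domain
    value = domains[var][0]
    # Left branch: assign var = value, if consistent with all constraints.
    if all(check(assignment | {var: value}) for check in constraints):
        result = two_way_search(domains, constraints, assignment | {var: value})
        if result:
            return result
    # Right branch: refute the value (var != value) and keep searching.
    pruned = {**domains, var: domains[var][1:]}
    return two_way_search(pruned, constraints, assignment)

# 3-colour a 4-node cycle: adjacent nodes must receive different colours.
doms = {v: [0, 1, 2] for v in "ABCD"}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
cons = [lambda a, e=e: e[0] not in a or e[1] not in a or a[e[0]] != a[e[1]] for e in edges]
print(two_way_search(doms, cons))
```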
1206

Three Dimensional Laminar Compressible Navier Stokes Solver For Internal Rocket Flow Applications

Coskun, Korhan 01 December 2007 (has links) (PDF)
A three dimensional, Navier-Stokes finite volume flow solver which uses Roe's upwind flux differencing scheme for spatial discretization, and a Runge-Kutta explicit multi-stage time stepping scheme and an implicit Lower-Upper Symmetric Gauss-Seidel (LU-SGS) iteration scheme for temporal discretization, on unstructured and hybrid meshes, is developed for steady rocket internal viscous flow applications. The spatial accuracy of the solver can be selected as first or second order. Second order accuracy is achieved by piecewise linear reconstruction. Gradients of flow variables required for piecewise linear reconstruction are calculated with both Green-Gauss and Least-Squares approaches. The solver developed is first verified against three-dimensional viscous laminar flow over a flat plate. Then the implicit time stepping algorithms are compared on two rocket motor internal flow problems. Although the solver is intended for internal flows, a test case involving flow over an airfoil is also given. As the last test case, supersonic vortex flow between concentric circular arcs is selected.
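For readers unfamiliar with the gradient reconstruction step, the sketch below shows the Green-Gauss approach on a single hypothetical 2D cell: the cell-averaged gradient is the sum of face-midpoint values times outward normals (scaled by edge length), divided by the cell area, and it is exact for linear fields. This is an illustration of the general technique, not code from the solver described here.

```python
# Green-Gauss gradient reconstruction on one 2D polygonal cell.
import numpy as np

verts = np.array([[0.0, 0.0], [1.2, 0.1], [1.3, 1.0], [0.1, 0.9]])   # counter-clockwise quad

def green_gauss_gradient(vertices, phi_of):
    """Cell-average gradient of phi from face-midpoint values (2D polygon)."""
    grad = np.zeros(2)
    area = 0.0
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        mid = 0.5 * (a + b)
        edge = b - a
        normal = np.array([edge[1], -edge[0]])        # outward normal times edge length (CCW)
        grad += phi_of(mid) * normal
        area += 0.5 * (a[0] * b[1] - b[0] * a[1])     # shoelace formula for cell area
    return grad / area

phi = lambda p: 3.0 * p[0] - 2.0 * p[1] + 1.0         # linear test field
print(green_gauss_gradient(verts, phi))               # expect approximately [3, -2]
```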
1207

Identifying Factors Influencing The Acceptance Of Processes: An Empirical Investigation Using The Structural Equation Modeling Approach

Degerli, Mustafa 01 July 2012 (has links) (PDF)
The main aim of this research was to develop an acceptance model for processes, namely the process acceptance model (PAM). For this purpose, a questionnaire comprising three parts and 81 questions was developed to collect quantitative and qualitative data from people having relationships with certain process-focused models and/or standards (CMMI, ISO 15504, ISO 9001, ISO 27001, AQAP-160, AQAP-2110, and/or AS 9100). To revise and refine the questionnaire, expert reviews were obtained, and a pilot study was conducted with 60 usable responses. After the reviews, refinements and piloting, the questionnaire was deployed, and in total 368 usable responses were collected. The collected data were screened for incorrectly entered data, missing data, outliers and normality, and the reliability and validity of the questionnaire were ensured. Partial least squares structural equation modeling (PLS-SEM) was applied to develop the PAM. In this context, exploratory and confirmatory factor analyses were applied, and the initial model was estimated and evaluated. The initial model was modified as required by PLS-SEM, confirmatory factor analysis was repeated, and the modified final model was estimated and evaluated. Consequently, the PAM, with 18 factors and their statistically significant relationships, was developed. Furthermore, descriptive statistics and t-tests were applied to discover some interesting, meaningful, and important points to be taken into account regarding the acceptance of processes. Moreover, the collected quantitative data were analyzed, and three additional factors were discovered regarding the acceptance of processes. Besides, a checklist to test and/or promote the acceptance of processes was established.
1208

Compressed Domain Processing of MPEG Audio

Anantharaman, B 03 1900 (has links)
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high quality digital audio. However, compression complicates the processing of audio in many applications. If a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal and re-encode it. This is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with processing of MPEG compressed audio. The main contributions of this thesis are: a) extracting wavelet coefficients in the MPEG compressed domain; b) wavelet based pitch extraction in the MPEG compressed domain; c) time scale modifications of MPEG audio; d) watermarking of MPEG audio. The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexities of extracting several levels of wavelet coefficients after decoding the compressed signal and directly from the output of the MPEG analysis filter bank are compared. The proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients. Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed. For example, one may be interested in listening to a particular speaker or to male/female audio segments in a multimedia document. For this application, pitch information is one of the very basic and important features required. Pitch is basically the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients. Pitch can be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed domain processing. An algorithm for extracting pitch in the compressed domain is proposed, and its results for synthetic signals and for utterances of words by male/female speakers are reported. In a number of important applications, one needs to modify an audio signal to render it more useful than its original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time scale modifications are obtained in the subband domain such that when the modified subband signals are given to the MPEG synthesis filter bank, the desired time scale modification of the decoded signal is achieved. This is done by making use of sinusoidal modeling [1]. Here, each of the subband signals is modeled in terms of parameters such as amplitude, phase and frequency, and is subsequently synthesised by using these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time scale factor and La is the length of the analysis window. As the PCM version of the time scaled signal is not available, psychoacoustic model based bit allocation cannot be used. Hence a new bit allocation is done by using a subband coding algorithm.
This method has been satisfactorily tested for time scale expansion and compression of speech and music signals. The recent growth of multimedia systems has increased the need for protecting digital media. Digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions. However, the idea behind lossy MPEG encoders is to remove or make insignificant those portions of the signal which do not affect human hearing. This renders the watermark insignificant, and hence proving ownership of the signal becomes difficult when an audio signal is compressed. The existing compressed domain methods merely change the bits or the scale factors according to a key. Though simple, these methods are not robust to attacks. Further, these methods require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on the spread spectrum technique which does not require the original signal during the verification process. It is also shown to be more robust than the existing methods. In our method the watermark is spread across many subband samples. Here two factors need to be considered: a) the watermark is to be embedded only in those subbands where the added noise remains inaudible; b) the watermark should be added to those subbands which have sufficient bit allocation, so that the watermark does not become insignificant due to lack of bit allocation. Embedding the watermark in the lower subbands would cause distortion, and embedding it in the higher subbands would prove futile as the bit allocation in these subbands is practically zero. Considering all these factors, one can introduce noise to samples across many frames corresponding to subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. This method has been satisfactorily tested for robustness to scale factor and LSB changes, and to MPEG decoding and re-encoding.
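A toy illustration of the pitch idea described above is sketched below: glottal closures appear as sharp transients, so the interval between successive maxima of a crude, Haar-like detail signal approximates the pitch period. The synthetic pulse train and thresholds are assumptions for illustration; the thesis' algorithm operates on MPEG subband data instead.

```python
# Pitch estimation from the spacing of transient-induced maxima in a detail signal.
import numpy as np
from scipy.signal import lfilter, find_peaks

fs, f0, n = 8000, 125, 4000
pulses = np.zeros(n)
pulses[:: fs // f0] = 1.0                          # idealised glottal closures every fs/f0 samples
voiced = lfilter([1.0], [1.0, -0.95], pulses)      # each closure excites a decaying response
voiced += 0.01 * np.random.default_rng(3).standard_normal(n)

detail = np.abs(np.diff(voiced))                   # crude Haar-like detail: sharp at closures
peaks, _ = find_peaks(detail, height=0.5 * detail.max(), distance=20)
period = np.median(np.diff(peaks))                 # samples between successive maxima
print(f"estimated pitch: {fs / period:.1f} Hz   (true pitch {f0} Hz)")
```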
1209

Rotationally Invariant Kinetic Upwind Method (KUMARI)

Malagi, Keshav Shrinivas 07 1900 (has links)
In the quest for a high fidelity numerical scheme for CFD it is necessary to satisfy demands on accuracy, conservation, positivity and upwinding. Recently the requirement of rotational invariance has been added to this list. In the present work we are mainly interested in the upwinding and rotational invariance of the Least Squares Kinetic Upwind Method (LSKUM). The standard LSKUM achieves upwinding by stencil division along the co-ordinate axes, which is referred to as the co-ordinate splitting method. This leads to symmetry breaking, and rotational invariance is lost. Thus the numerical solution becomes co-ordinate frame dependent. To overcome this undesirable feature of existing numerical schemes, a new algorithm called KUMARI (Kinetic Upwind Method Avec Rotational Invariance; 'avec' is French for 'with') has been developed. The interesting mathematical relation between the directional derivative, Fourier series and the divergence operator has been used effectively to achieve upwinding as well as rotational invariance, hence making the scheme a truly or genuinely multidimensional upwind scheme. KUMARI has been applied to the standard 2D shock reflection problem, flow past airfoils, the 2D blast wave problem and lastly a 2D Riemann problem (Lax's 3rd test case). The results show that KUMARI is either comparable to, or in some cases better than, the usual LSKUM.
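One identity of the directional-derivative/divergence kind alluded to above is div F = (1/pi) times the integral over all directions theta of the directional derivative of F . e_theta along e_theta, i.e. averaging directional derivatives over all directions recovers the frame-independent divergence. The numerical check below is my own illustration of this type of relation and may differ in detail from the formulation actually used in the thesis.

```python
# Check: (1/pi) * integral_0^{2pi} D_theta( F . e_theta ) dtheta  equals  div F.
import numpy as np

def F(p):                                            # a smooth 2D vector field
    x, y = p
    return np.array([np.sin(x) * np.cos(y), x * y ** 2])

p0 = np.array([0.7, -0.4])
exact_div = np.cos(p0[0]) * np.cos(p0[1]) + 2.0 * p0[0] * p0[1]   # du/dx + dv/dy at p0

h = 1e-5
directions = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
vals = []
for th in directions:
    e = np.array([np.cos(th), np.sin(th)])
    # Central-difference directional derivative of the component F . e along e.
    vals.append((F(p0 + h * e) @ e - F(p0 - h * e) @ e) / (2.0 * h))

angular_avg = 2.0 * np.mean(vals)                    # (1/pi) * (2*pi) * mean over directions
print(f"exact divergence: {exact_div:.6f}   angular average: {angular_avg:.6f}")
```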
1210

On Viscous Flux Discretization Procedures For Finite Volume And Meshless Solvers

Munikrishna, N 06 1900 (has links)
This work deals with discretizing viscous fluxes in the context of unstructured data based finite volume and meshless solvers, two competing methodologies for simulating viscous flows past complex industrial geometries. The two important requirements of a viscous discretization procedure are consistency and positivity. While consistency is a fundamental requirement, positivity is linked to the robustness of the solution methodology. The following advancements are made through this work within the finite volume and meshless frameworks. Finite Volume Method: several viscous discretization procedures available in the literature are reviewed for (1) ability to handle general grid elements, (2) efficiency, particularly for 3D computations, (3) consistency, (4) positivity as applied to a model equation, and (5) global error behavior as applied to a model equation. While some of the popular procedures result in an inconsistent formulation, the consistent procedures are observed to be computationally expensive and also have problems associated with robustness. From a systematic global error study, we have observed that even a formally inconsistent scheme exhibits consistency in terms of global error, i.e., the global error decreases with grid refinement. This observation is important and also encouraging from the viewpoint of devising a suitable discretization scheme for viscous fluxes. This study suggests that one can relax the consistency requirement in order to gain in terms of robustness and computational cost, two key ingredients for any industrial flow solver. Some of the procedures are analysed for positivity as applied to a Laplacian, and it is found that the two requirements of a viscous discretization procedure, consistency (accuracy) and positivity, are essentially conflicting. Based on the review, four representative schemes are selected and used in HIFUN-2D (High resolution Flow Solver on UNstructured Meshes), an unstructured data based cell center finite volume flow solver, to simulate standard laminar and turbulent flow test cases. From the analysis, we advocate the use of the Green-Gauss theorem based diamond path procedure, which can render a high level of robustness to the flow solver for industrial computations. Meshless Method: an Upwind-Least Squares Finite Difference (LSFD-U) meshless solver is developed for simulating viscous flows. Different viscous discretization procedures are proposed and analysed for positivity, and the procedure which is found to be more positive is employed. Obtaining a suitable point distribution, particularly for viscous flow computations, happens to be one of the important components for the success of meshless solvers. In principle, meshless solvers can operate on any point distribution obtained using structured, unstructured and Cartesian meshes, but Cartesian meshing happens to be the most natural candidate for obtaining the point distribution. Therefore, the performance of LSFD-U for simulating viscous flows using point distributions obtained from Cartesian-like grids is evaluated. While we have successfully computed laminar viscous flows, there are difficulties in terms of solving turbulent flows. In this context, we have evolved a strategy to generate a suitable point distribution for simulating turbulent flows using the meshless solver. The strategy involves a hybrid Cartesian point distribution wherein the region of the boundary layer is filled with a high aspect ratio body-fitted structured mesh and the potential flow region with a unit aspect ratio Cartesian mesh.
The main advantage of our solver is in handling the structured and Cartesian grid interface. The interface algorithm is considerably simplified compared to the hybrid Cartesian mesh based finite volume methodology by exploiting the advantages that accrue from the use of a meshless solver. Cheap, simple and robust discretization procedures are evolved for both inviscid and viscous fluxes, exploiting the basic features exhibited by the hybrid point distribution. These procedures are also subjected to positivity analysis and a systematic global error study. It should be remarked that the viscous discretization procedure employed in the structured grid block is positive and, in fact, this feature imparts the required robustness to the solver for computing turbulent flows. We have demonstrated the capability of the meshless solver LSFD-U to solve turbulent flow past complex aerodynamic configurations by solving flow past a multi-element airfoil configuration. In our view, the success shown by this work in computing turbulent flows can be considered a landmark development in the area of meshless solvers, with great potential for industrial applications.
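As a pointer to how LSFD-type meshless discretizations work, the sketch below estimates a gradient at a point by least-squares fitting the differences to scattered neighbours; the point cloud and test field are hypothetical, and for a linear field the estimate is exact, which serves as a sanity check. This illustrates the general least-squares finite difference idea, not the solver's actual implementation.

```python
# Least-squares gradient estimate at a point from scattered neighbour data.
import numpy as np

def least_squares_gradient(p0, neighbours, phi):
    """Fit dphi/dx, dphi/dy at p0 from differences to the scattered neighbours."""
    A = neighbours - p0                                    # rows of (dx_i, dy_i)
    b = np.array([phi(q) for q in neighbours]) - phi(p0)   # corresponding dphi_i
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)           # least-squares solution
    return grad

rng = np.random.default_rng(4)
p0 = np.array([0.5, 0.5])
cloud = p0 + 0.1 * rng.standard_normal((8, 2))             # 8 scattered neighbour points
phi = lambda p: 4.0 * p[0] - 1.5 * p[1] + 2.0              # linear test field
print(least_squares_gradient(p0, cloud, phi))              # expect roughly [4, -1.5]
```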
