
Parameter estimation and model based control design of drive train systems

Tallfors, Mats, January 2005
The main control task in many speed-controlled drives is to eliminate or reduce the load speed error caused by the load torque disturbance and to damp oscillations as quickly as possible. This thesis addresses different aspects of identification and control of such resonant elastic systems. In most industrial applications it is not practical to measure the load speed. Instead, we advocate model based control design that optimizes load speed while using motor speed as the feedback signal. For this to be possible, one needs a mechanical model of the system, and we suggest finding the mechanical parameters by estimation from experimental data. Hence, a method has been developed which finds the mechanical parameters, including backlash, through a series of three dedicated experiments. The procedure is first developed for the case of one manipulated input, the motor torque, and one measured output, the motor speed. For drive systems with a very large motor in comparison to the load, it becomes very difficult to estimate all mechanical parameters from motor speed measurements only. An alternative estimation method has been developed for this purpose, using an additional sensor for the shaft torque. One further, rather specific control problem is also treated in the thesis, namely drive systems with tandem-coupled motors, for which control structures have been developed with and without an extra shaft-torque sensor.
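As a rough, self-contained illustration of the grey-box approach described above (and not code from the thesis), the sketch below fits the shaft stiffness and damping of a simple two-mass drive train model to simulated motor-speed data; all numerical values are assumed for illustration, and backlash is ignored.

```python
# Sketch: estimate shaft stiffness and damping of a two-mass drive train
# from motor speed only (illustrative values; backlash is ignored here).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

J_m, J_l = 0.05, 0.20      # motor and load inertias [kg m^2] (assumed)
b_m, b_l = 0.01, 0.02      # viscous friction coefficients (assumed)

def motor_speed(k, c, t, torque):
    # States: motor speed w_m, load speed w_l, shaft twist phi.
    def rhs(ti, x):
        w_m, w_l, phi = x
        T_s = k * phi + c * (w_m - w_l)          # elastic shaft torque
        return [(torque(ti) - T_s - b_m * w_m) / J_m,
                (T_s - b_l * w_l) / J_l,
                w_m - w_l]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0, 0.0], t_eval=t, max_step=1e-3)
    return sol.y[0]                               # only motor speed is "measured"

t = np.linspace(0.0, 2.0, 2000)
u = lambda ti: 5.0 * (ti > 0.1)                   # torque step used as excitation
true_k, true_c = 120.0, 0.5                       # "unknown" stiffness and damping
y_meas = motor_speed(true_k, true_c, t, u) + 0.01 * np.random.randn(t.size)

fit = least_squares(lambda p: motor_speed(p[0], p[1], t, u) - y_meas,
                    x0=[50.0, 0.1], bounds=([1.0, 0.0], [1e4, 10.0]))
print("estimated stiffness and damping:", fit.x)
```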

Estimation Using Low Rank Signal Models

Mahata, Kaushik, January 2003
Designing estimators based on low rank signal models is a common practice in signal processing. Some of these estimators are designed to use a single low rank snapshot vector, while others employ multiple snapshots. This dissertation deals with both cases in different contexts.

Separable nonlinear least squares is a popular tool to extract parameter estimates from a single snapshot vector. Asymptotic statistical properties of the separable nonlinear least squares estimates are explored in the first part of the thesis. The assumptions imposed on the noise process and the data model are general. Therefore, the results are useful in a wide range of applications. Sufficient conditions are established for consistency, asymptotic normality and statistical efficiency of the estimates. An expression for the asymptotic covariance matrix is derived, and it is shown that the estimates are circular. The analysis is also extended to constrained separable nonlinear least squares problems.

Nonparametric estimation of the material functions from wave propagation experiments is the topic of the second part. This is a typical application where a single snapshot vector is employed. Numerical and statistical properties of the least squares algorithm are explored in this context. Boundary conditions in the experiments are used to achieve superior estimation performance. Subsequently, a subspace based estimation algorithm is proposed. The subspace algorithm is not only computationally efficient, but also equivalent to the least squares method in accuracy.

Estimation of the frequencies of multiple real valued sine waves is the topic of the third part, where multiple snapshots are employed. A new low rank signal model is introduced. Subsequently, an ESPRIT-like method named R-Esprit and a weighted subspace fitting approach are developed based on the proposed model. When compared to ESPRIT, R-Esprit is not only computationally more economical but also equivalent in performance. The weighted subspace fitting approach shows significant improvement in the resolution threshold. It is also robust to additive noise.
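To make the separable (variable projection) idea concrete, here is a minimal sketch, not taken from the dissertation: for each trial value of the nonlinear parameters the linear amplitudes are eliminated by least squares, and only the nonlinear parameters are iterated. The two-exponential model and all values are assumptions of the sketch.

```python
# Sketch: separable nonlinear least squares (variable projection) for a
# model y = A(theta) c, with the linear amplitudes c eliminated.
# Illustrative example: sum of two decaying exponentials.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 200)
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-0.7 * t) + 1.0 * np.exp(-2.5 * t) + 0.02 * rng.standard_normal(t.size)

def basis(theta):
    return np.column_stack([np.exp(-theta[0] * t), np.exp(-theta[1] * t)])

def projected_residual(theta):
    A = basis(theta)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)    # optimal linear parameters
    return y - A @ c                             # residual after projection

fit = least_squares(projected_residual, x0=[0.3, 1.5])
c_hat, *_ = np.linalg.lstsq(basis(fit.x), y, rcond=None)
print("nonlinear parameters:", fit.x, "linear parameters:", c_hat)
```

Projecting out the linear parameters shrinks the search space to the nonlinear parameters only, which is what makes the separable formulation attractive in practice.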

Revenue Maximization in Resource Allocation : Applications in Wireless Communication Networks

Casimiro Ericsson, Nilo, January 2004
Revenue maximization for network operators is considered as a criterion for resource allocation in wireless cellular networks. A business model encompassing service level agreements between network operators and service providers is presented. Admission control, through price model aware admission policing and service level control, is critical for the provisioning of useful services over a general purpose wireless network. A technical solution is presented, consisting of a fast resource scheduler that takes into account service requirements and wireless channel properties, a service level controller that provides the scheduler with a reasonable load, and an admission policy to uphold the service level agreements and maximize revenue.

Two different types of service level controllers are presented and implemented. One is based on a scalar PID controller that adjusts the admitted data rates for all active clients. The other is obtained with linear programming methods that optimally assign data rates to clients, given their channel qualities and price models.

Two new scheduling criteria, and algorithms based on them, are presented and evaluated in a simulated wireless environment. One is based on a quadratic criterion and is implemented through approximate algorithms, encompassing a search based algorithm and two different linearizations of the criterion. The second is based on statistical measures of the service rates and channel states, and is implemented as an approximation of the joint probability of meeting the delay limits while utilizing the available resources efficiently.

Two scheduling algorithms, one based on each criterion, are tested in combination with each of the service level controllers, and evaluated in terms of throughput, delay, and computational complexity, using a target test system. Results show that both schedulers can, when feasible, meet explicit throughput and delay requirements, while at the same time allowing the service level controller to maximize revenue by allocating the surplus resources to less demanding services.
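The linear-programming flavour of service level control described above can be sketched in a few lines; the price model, channel costs, rate caps and resource budget below are invented for illustration and do not come from the thesis.

```python
# Sketch: service level control by linear programming (assumed price and
# channel models). Maximize revenue sum(p_i * r_i) subject to the radio
# resource needed per bit, a total resource budget, and per-client rate caps.
import numpy as np
from scipy.optimize import linprog

price = np.array([1.0, 0.6, 0.8, 0.3])        # revenue per unit rate (assumed)
cost = np.array([0.5, 1.2, 0.8, 2.0])         # resource units per unit rate,
                                              # reflecting channel quality (assumed)
r_max = np.array([10.0, 10.0, 5.0, 8.0])      # per-client rate caps from the SLA
budget = 12.0                                 # total resource budget (assumed)

# linprog minimizes, so negate the revenue vector.
res = linprog(c=-price, A_ub=cost[None, :], b_ub=[budget],
              bounds=[(0.0, rm) for rm in r_max])
print("admitted rates:", res.x, "revenue:", price @ res.x)
```

A PID-based controller, the other variant mentioned above, would instead adjust the admitted rates incrementally from a measured load error rather than solving this optimization at each step.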

Nonlinear Approaches to Periodic Signal Modeling

Abd-Elrady, Emad, January 2005
Periodic signal modeling plays an important role in different fields. The unifying theme of this thesis is the use of nonlinear techniques to model periodic signals. The suggested techniques utilize the user's prior knowledge about the signal waveform. This gives these techniques an advantage over others that do not consider such priors.

The technique of Part I relies on the fact that a sine wave passed through a static nonlinear function produces a harmonic spectrum of overtones. Consequently, the estimated signal model can be parameterized as a known periodic function (with unknown frequency) in cascade with an unknown static nonlinearity. The unknown frequency and the parameters of the static nonlinearity are estimated simultaneously using the recursive prediction error method (RPEM). A treatment of the local convergence properties of the RPEM is provided. Also, an adaptive grid point algorithm is introduced to estimate the unknown frequency and the parameters of the static nonlinearity in a number of adaptively estimated grid points. This gives the RPEM more freedom to select the grid points and hence reduces modeling errors.

Limit cycle oscillations are encountered in many applications. Therefore, mathematical modeling of limit cycles becomes an essential topic that helps to better understand and/or avoid limit cycle oscillations in different fields. In Part II, a second-order nonlinear ODE is used to model the periodic signal as a limit cycle oscillation. The right hand side of the ODE model is parameterized using a polynomial function in the states, and then discretized to allow for the implementation of different identification algorithms. Hence, it is possible to obtain highly accurate models by estimating only a few parameters.

In Part III, different user aspects of the two nonlinear approaches of the thesis are discussed. Finally, topics for future research are presented.
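The Part I idea can be illustrated with a small batch least-squares stand-in (the thesis uses a recursive prediction error method, not reproduced here): for any trial frequency the coefficients of the static nonlinearity enter linearly and can be projected out. The third-order polynomial nonlinearity, the data and the starting point below are assumptions of the sketch.

```python
# Sketch: modeling a periodic signal as an unknown static polynomial
# nonlinearity driven by a sine of unknown frequency (batch least-squares
# stand-in for the recursive scheme described above; illustrative data).
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 2.0, 1000)
f_true = 3.0
s = np.sin(2 * np.pi * f_true * t)
y = 0.5 * s + 0.3 * s**2 + 0.2 * s**3 + 0.01 * np.random.randn(t.size)

def residual(params):
    f = params[0]
    s_hat = np.sin(2 * np.pi * f * t)
    # For a given frequency, the polynomial coefficients enter linearly.
    A = np.column_stack([s_hat, s_hat**2, s_hat**3])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coeffs

fit = least_squares(residual, x0=[2.8])       # start near the true frequency
print("estimated fundamental frequency:", fit.x[0])
```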

Improving Elastography using SURF Imaging for Suppression of Reverberations

Grythe, Jørgen, January 2010
For some of the applications of the Second-order UltRasound Field (SURF) imaging technique, a real-time delay-estimation algorithm has been developed for estimating spatially range-varying delays in RF signals. This algorithm is a phase-based approach for subsample delay estimation and makes no assumption on the local delay variation; any parametric model can be used for modeling the local delay variation. The phase-based delay estimator uses estimates of the instantaneous frequency and the phase difference, and the relationship between the two, to estimate the delay. The estimated delay may be used to calculate an improved estimate of the instantaneous frequency, which in turn may be used to calculate new, updated values for the delay in an iterative scheme. Although the iterative scheme introduces a larger bias, the estimated delay values have a significantly lower standard deviation than with the original method.

The delay estimator, originally developed for estimating propagation delays for SURF imaging, can also be used for elastography. By not being restricted to locally constant delays, the delay estimator can estimate sharp changes in tissue stiffness more robustly and estimate small differences in strain more closely. Two parametric models for the local delay have been tried, one linear and one second-degree polynomial. The two models have been tested on an elastography recording provided by the Ultrasonix company (Ultrasonix Medical Corporation, Vancouver, Canada), and in vitro. Using a second-degree polynomial as the parametric model for the delay is better than a linear model at detecting edges of inclusions located at a depth where the strain is lower than closer to the transducer surface. The differences may be further emphasized by performing spatial filtering with a median filter. The downside of updating the model is an increase in computational time of approximately 50%.

Multiple reflections, also known as reverberations, appear as acoustic noise in ultrasound images and may greatly impair time-delay estimation, particularly in elastography. Today reverberation suppression is achieved by second harmonic imaging, but this method has the disadvantage of low penetration and little or no signal in the near field. The SURF imaging technique has the advantage of reverberation suppression in addition to imaging at the fundamental frequency. A reverberation model has been established, and the effect reverberations have on estimated elastography images is studied. When a layered silicone plate was used as the reverberation model and imaging was done through this plate placed on top of the imaging phantom, elastography images were not obtained, as the quality of the recording was degraded by power loss. When reverberations were instead added by computer simulation to a recording made with a SURF probe with reverberation suppression, a marked difference was observed between elastography estimates made on the image with reverberations and on the image with both reverberations and reverberation suppression. When estimating on a signal with reverberations, the phase-based time-delay algorithm was unable to distinguish any differences in elasticity at all. When estimating time delays on a signal with reverberations and SURF reverberation suppression, however, the algorithm was able to clearly estimate differences in strain and display the presence of an inclusion.
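A toy version of the phase-based delay estimation described above can be written in a few lines using the analytic signal; the pulse parameters below are assumptions, the delay is taken as locally constant, and the iterative refinement and range-varying models of the thesis are left out.

```python
# Sketch: phase-based subsample delay estimation between two RF traces using
# the analytic signal: delay ~ phase difference / instantaneous angular
# frequency (a single, locally constant delay is assumed in this toy example).
import numpy as np
from scipy.signal import hilbert

fs = 50e6                          # sampling rate [Hz] (assumed)
f0 = 5e6                           # pulse centre frequency [Hz] (assumed)
t = np.arange(0.0, 4e-6, 1.0 / fs)
true_delay = 7.3e-9                # well below one sample (one sample = 20 ns)

def pulse(delay):
    env = np.exp(-((t - 2e-6 - delay) / 0.5e-6) ** 2)   # Gaussian envelope
    return env * np.cos(2 * np.pi * f0 * (t - delay))

a1, a2 = hilbert(pulse(0.0)), hilbert(pulse(true_delay))
dphi = np.angle(a1 * np.conj(a2))                        # phase difference [rad]
inst_w = np.gradient(np.unwrap(np.angle(a1)), 1.0 / fs)  # inst. angular frequency [rad/s]

# Energy weighting so that low-amplitude samples do not dominate the estimate.
w = np.abs(a1) ** 2
delay_est = np.sum(w * dphi / inst_w) / np.sum(w)
print("true delay %.2f ns, estimated %.2f ns" % (true_delay * 1e9, delay_est * 1e9))
```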

Calculation of the coverage area of mobile broadband communications. Focus on land

Martínez Gálvez, Antonio, January 2010

On the Efficiency of Data Communication for the Ultramonit Corrosion Monitoring System

Rommetveit, Tarjei, January 2006
Ultramonit is a system under development for permanent installation on critical parts of subsea oil and gas pipelines in order to monitor corrosion continuously using ultrasound. The communication link that connects the Ultramonit units with the outside world is identified as the system's bottleneck, and it is thus of interest to compress the ultrasonic data before transmission. The main goal of this diploma work has been to implement and optimize a lossy compression scheme in C on the available hardware (HW) with respect to a self-defined fidelity measure. Limited resources, such as memory and processing-time constraints, have been a major issue during implementation. The real-time aspect of the problem results in an intricate relation between transfer time, processing time and compression ratio for a given fidelity. The encoder is optimized with respect to two different bit allocation schemes, two different filters and various parameters. Compared to transferring the unprocessed traces, the results demonstrate that the transfer time can be reduced by a factor of 12. This yields acceptable fidelity for the main application of long-term monitoring of subsea pipelines. However, for ultra-high precision applications where the total change in thickness due to corrosion is less than a few micrometers, compression should not be employed.
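The actual codec of the thesis (its filters and bit-allocation schemes) is not reproduced here, but the trade-off it optimizes can be illustrated with a simple transform-coding stand-in; the synthetic trace, the choice of DCT, and the quantization step are assumptions of the sketch.

```python
# Sketch: a simple transform-coding stand-in for lossy compression of
# ultrasonic traces (DCT + uniform quantization), reporting a crude size
# proxy and a normalized RMS error as the fidelity measure.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)
trace = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t) + 0.01 * rng.standard_normal(t.size)

step = 0.02                                    # quantization step (assumed)
coeffs = dct(trace, norm="ortho")
q = np.round(coeffs / step).astype(np.int32)   # quantized integer coefficients
kept = np.count_nonzero(q)                     # nonzero coefficients: crude proxy for coded size

rec = idct(q * step, norm="ortho")             # reconstruction at the receiver
nrmse = np.sqrt(np.mean((trace - rec) ** 2)) / np.sqrt(np.mean(trace ** 2))
print(f"nonzero coefficients: {kept}/{trace.size}, fidelity (NRMSE): {nrmse:.4f}")
```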

Programming graphic card for fast calculation of sound field in marine acoustics

Haugehåtveit, Olav, January 2006
Commodity computer graphics chips are probably today's most powerful computational hardware one can buy for the money. These chips, known generically as Graphics Processing Units or GPUs, have in recent years evolved from afterthought peripherals to modern, powerful programmable processors, largely driven by the movie and game industries. Intel co-founder Gordon E. Moore once observed that the number of transistors on a single integrated chip would double roughly every 18 months. So far this has held for the CPU; for the GPU, however, development has gone much faster, and the number of floating point operations per second has increased enormously. Because of this rapid evolution, many researchers and scientists have discovered that this floating point potential can be exploited, and numerous applications have been tested, such as audio and image algorithms. This has also become interesting in the area of marine acoustics, where the demand for high computational power is increasing.

This master's report investigates how to write a program, capable of running on a GPU, for calculating an underwater sound field. This requires a graphics chip with programmable vertex and fragment processors, a graphics API like OpenGL, a shading language like GLSL, and a general purpose GPU library like Shallows. An existing Matlab program is the basis for the GPU program, and the goal is to reduce the time spent calculating an underwater sound field. The speedup from Matlab to the GPU was found to be around 40-50 times; however, if Matlab had been able to calculate the same maximum number of rays as the GPU, the speedup would probably have been larger. Since this study was done on a laptop with an nVidia GeForce Go 6600 graphics chip, a higher gain should be obtainable with a desktop graphics chip.
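The GPU implementation in the thesis uses OpenGL/GLSL and the Shallows library, which are not reproduced here. As a rough sketch of the per-ray, data-parallel computation that maps naturally onto a fragment program, the following NumPy code steps a fan of rays through an assumed depth-dependent sound-speed profile; the profile, source depth, launch angles and step length are all invented for illustration.

```python
# Sketch: the per-ray, embarrassingly parallel computation that a GPU fragment
# program would evaluate, written as vectorized NumPy for illustration. Simple
# 2-D ray stepping in a depth-dependent sound-speed profile; all values assumed.
import numpy as np

def sound_speed(z):
    # Idealized profile [m/s] with a minimum at moderate depth.
    return 1500.0 - 0.05 * z + 0.0005 * z ** 2

def dcdz(z):
    # Derivative of the profile above.
    return -0.05 + 0.001 * z

n_rays, n_steps, ds = 256, 2000, 1.0                  # rays, steps, step length [m]
theta = np.radians(np.linspace(-10.0, 10.0, n_rays))  # launch angles from horizontal
x = np.zeros(n_rays)                                  # horizontal range [m]
z = np.full(n_rays, 100.0)                            # source depth [m], z positive down

for _ in range(n_steps):
    c, g = sound_speed(z), dcdz(z)
    x += ds * np.cos(theta)
    z += ds * np.sin(theta)
    theta -= ds * np.cos(theta) * g / c               # refraction (ray equation)
    hit = z < 0.0                                     # specular reflection at the surface
    z[hit] = -z[hit]
    theta[hit] = -theta[hit]

print("ray endpoints (range, depth) for every 64th ray:")
print(np.column_stack([x, z])[::64])
```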

Satellite Cluster Consepts : A system evaluation with emphasis on deinterleaving and emitter recognition

Bildøy, Bent Einar Stenersen, January 2006
In a dense and complex emitter environment, a high pulse arrival rate and a large number of interleaved radar pulse sequences are expected, from both agile and stable emitters. This thesis evaluates the combination of interval-only algorithms with different monopulse parameters, in comparison with a neural network, for accurate emitter classification. A selection of TOA deinterleaving algorithms has been evaluated with the intent to clearly discriminate between pulses emitted from agile emitters. The first section presents the different techniques, with emphasis on pinpointing the different algorithmic structures. The second section presents a neural network combinational recognition system, with a main focus on the fuzzy ARTMAP neural network, where some practical implementations are also presented. The final section gives a partial system evaluation based on statistical means, seeking to estimate the information flow from the ESM receiver as a function of both the density and the expected parametric values, i.e. the PW, since this is proportional to the number of processed pulses.
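An interval-only first step of the kind referred to above can be sketched as a histogram of TOA differences (a CDIF/SDIF-style stage); the two emitters, their PRIs and the jitter below are simulated assumptions, and the fuzzy ARTMAP classification stage is not included.

```python
# Sketch: an interval-only first stage of TOA deinterleaving (CDIF/SDIF style):
# candidate pulse repetition intervals appear as peaks in a histogram of TOA
# differences. The two emitters, their PRIs and the jitter are simulated here.
import numpy as np

rng = np.random.default_rng(2)
pri_a, pri_b = 1.30e-3, 0.47e-3                 # assumed PRIs of two stable emitters [s]

toa_a = np.arange(0.0, 0.1, pri_a)
toa_a = toa_a + 5e-6 * rng.standard_normal(toa_a.size)   # small TOA jitter
toa_b = np.arange(0.0, 0.1, pri_b)
toa = np.sort(np.concatenate([toa_a, toa_b]))   # interleaved pulse train

# Differences between pulses up to four positions apart, histogrammed in 10 us bins.
diffs = np.concatenate([toa[k:] - toa[:-k] for k in range(1, 5)])
hist, edges = np.histogram(diffs, bins=np.arange(0.0, 2e-3, 10e-6))

for i in np.argsort(hist)[-5:][::-1]:
    print(f"candidate PRI ~ {edges[i] * 1e6:7.1f} us  (count {hist[i]})")
```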
