111

Återanvändning av information i ABB Robotics beställnings- och konfigureringsdatabas BusinessOnline / Reuse of information in ABB Robotics' ordering and configuration database BusinessOnline

Luthardt, Runa January 2008 (has links)
No description available.
112

Novel turbo-equalization techniques for coded digital transmission

Dejonghe, Antoine 10 December 2004 (has links)
Turbo-codes have attracted an explosion of interest since their discovery in 1993: for the first time, the gap to the limits predicted by information and coding theory was finally being bridged. The astonishing performance of turbo-codes relies on two major concepts: code concatenation, so as to build a powerful global code, and iterative decoding, in order to efficiently approximate the optimal decoding process. The techniques involved in turbo coding and in the associated iterative decoding strategy can be generalized to other problems frequently encountered in digital communications, which leads to the so-called turbo principle. A famous application of this principle is the communication scheme referred to as turbo-equalization: for coded transmission over a frequency-selective channel, it allows the equalization and decoding tasks required at the receiver to be performed jointly and efficiently, which leads to significant performance improvements over conventional disjoint approaches. In this context, the purpose of the present thesis is the derivation and performance study of novel digital communication receivers that perform iterative joint detection and decoding by means of the turbo principle. The binary turbo-equalization scheme is taken as a starting point and improved in several ways, which are detailed throughout this work. Emphasis is always put on the performance analysis of the proposed communication systems, so as to gain insight into their behavior. Practical considerations are also taken into account, in order to provide realistic, tractable, and efficient solutions.
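The turbo principle above is easiest to see in a toy example. The following is a minimal sketch, not the receivers derived in the thesis: it assumes BPSK over a known two-tap ISI channel, uses a rate-1/3 repetition code as a stand-in for a real convolutional code, and lets a soft-interference-cancellation equalizer and the repetition decoder exchange extrinsic LLRs across a random interleaver.

```python
import numpy as np

rng = np.random.default_rng(0)
K, R = 2000, 3                        # info bits, repetition factor
h = np.array([1.0, 0.6])              # assumed 2-tap ISI channel
sigma2 = 10 ** (-6.0 / 10)            # noise variance for 6 dB SNR

bits = rng.integers(0, 2, K)
coded = np.repeat(bits, R)            # rate-1/3 repetition "code"
perm = rng.permutation(K * R)         # random interleaver
x = 1.0 - 2.0 * coded[perm]           # BPSK mapping: 0 -> +1, 1 -> -1
x_prev = np.concatenate(([0.0], x[:-1]))
y = h[0] * x + h[1] * x_prev + rng.normal(0.0, np.sqrt(sigma2), K * R)

La = np.zeros(K * R)                  # prior LLRs fed back by the decoder
for it in range(6):
    # Equalizer: cancel ISI with soft symbols from the decoder, then form
    # extrinsic LLRs from the two observations that involve each symbol.
    xbar = np.tanh(La / 2)
    xbar_prev = np.concatenate(([0.0], xbar[:-1]))
    xbar_next = np.concatenate((xbar[1:], [0.0]))
    y_next = np.concatenate((y[1:], [0.0]))
    Le_eq = (2 * h[0] * (y - h[1] * xbar_prev)
             + 2 * h[1] * (y_next - h[0] * xbar_next)) / sigma2
    # Decoder: for a repetition code the extrinsic LLR of one copy is the
    # sum of the channel LLRs of the other copies (after deinterleaving).
    Lc = np.empty(K * R)
    Lc[perm] = Le_eq                  # deinterleave
    per_bit = Lc.reshape(K, R)
    total = per_bit.sum(axis=1, keepdims=True)
    La = (total - per_bit).reshape(-1)[perm]   # extrinsic, re-interleaved
    ber = np.mean((total[:, 0] < 0).astype(int) != bits)
    print(f"iteration {it + 1}: BER = {ber:.4f}")
```

Each pass refines the soft symbols used for interference cancellation, which is the essence of the turbo-equalization loop described in the abstract.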
113

Iterative model-free controller tuning

Solari, Gabriel 08 August 2005 (has links)
Despite the vast body of theoretical results on controller design, more than 90% of the controllers used in industry (petrochemical, pulp and paper, steel, mining, etc.) are of PID type (P, PI, PID, PD). This shows the importance of developing methods that target restricted-complexity controllers for practical applications and that are computationally simple. Iterative Feedback Tuning (IFT) stands out as a solution that takes both constraints into account. It belongs to the family of model-free controller tuning methods. It was developed at Cesame in the nineties and, since then, many real applications of IFT have been reported. The algorithm minimizes a cost function by means of a stochastic gradient descent scheme. Although the method has enjoyed considerable success in the tuning of real processes, a number of issues had not yet been fully addressed. This thesis focuses on two of these open theoretical points: the convergence rate of the algorithm and a robust estimation of its gradient. Optimal prefilters, left as a degree of freedom for the user in the first formulation of IFT, are computed at each experiment. Their application reduces the covariance of the gradient estimate; depending on which particular aspect the user wants to improve, one optimal prefilter is selected. Monte Carlo simulations have shown an improvement over a constant prefilter. A flexible-arm set-up mounted in our robotics laboratory is used as a test bed to compare a model-based controller design algorithm with a model-free controller tuning method against specifications defined beforehand. The same set-up, augmented with a pair of air jets, is used to test our theoretical results when the rejection of a perturbation is the ultimate objective. Both cases have confirmed the good behaviour predicted for IFT.
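IFT estimates its gradient from dedicated closed-loop "gradient experiments"; the sketch below is not that procedure but a simplified model-free tuning loop in the same spirit: it estimates the gradient of a tracking cost by central differences of repeated noisy experiments on an invented first-order plant and updates the PI gains by gradient descent. The plant, gains, and step sizes are all assumptions for illustration.

```python
import numpy as np

def closed_loop_cost(theta, n_steps=200, noise=0.01, seed=None):
    """Mean squared tracking error of a unit-step experiment on a made-up
    plant y[k+1] = 0.9*y[k] + 0.1*u[k] under a PI controller (kp, ki)."""
    kp, ki = theta
    rng = np.random.default_rng(seed)
    y = integ = cost = 0.0
    for _ in range(n_steps):
        e = 1.0 - y                   # error against a unit step reference
        integ += e
        u = kp * e + ki * integ
        y = 0.9 * y + 0.1 * u + noise * rng.standard_normal()
        cost += e * e
    return cost / n_steps

theta = np.array([0.3, 0.1])          # initial PI gains (assumed)
step, delta = 1.0, 0.02
for it in range(20):
    grad = np.zeros(2)
    for i in range(2):
        up, down = theta.copy(), theta.copy()
        up[i] += delta
        down[i] -= delta
        # central difference of two noisy "experiments"; reusing the seed
        # acts like common disturbances and reduces the estimate variance
        grad[i] = (closed_loop_cost(up, seed=it)
                   - closed_loop_cost(down, seed=it)) / (2 * delta)
    theta = theta - step * grad       # gradient descent on the tuning cost
    print(f"iter {it:2d}: kp={theta[0]:.3f} ki={theta[1]:.3f} "
          f"J={closed_loop_cost(theta, seed=999):.4f}")
```

IFT replaces the finite differences used here with an estimate obtained directly from closed-loop data, without perturbing each parameter separately.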
114

Differential Geometry, Surface Patches and Convergence Methods

Grimson, W.E.L. 01 February 1979 (has links)
The problem of constructing a surface from the information provided by the Marr-Poggio theory of human stereo vision is investigated. It is argued that not only does this theory provide explicit boundary conditions at certain points in the image, but that the imaging process also provides implicit conditions on all other points in the image. This argument is used to derive conditions on possible algorithms for computing the surface. Additional constraining principles are applied to the problem; specifically, that the process be performable by a local-support parallel network. Some mathematical tools relevant to the problem of constructing the surface are outlined: differential geometry, Coons surface patches, and iterative methods of convergence. Specific methods for actually computing the surface are examined.
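The memo's actual construction uses Coons patches and constraints specific to the Marr-Poggio theory; purely as an illustration of the kind of local-support, iterative computation it argues for, the sketch below fills in a height grid from a handful of known depth values by repeated local averaging (Jacobi-style relaxation). The grid size and constraint points are invented for the example.

```python
import numpy as np

H, W = 32, 32
# Explicit "boundary conditions": known depths at a few grid points,
# standing in for stereo matches (values are invented).
known = {(4, 4): 1.0, (4, 27): 0.5, (27, 4): 0.2, (27, 27): 0.8, (16, 16): 1.5}

z = np.zeros((H, W))
for (i, j), d in known.items():
    z[i, j] = d

# Local-support iteration: each interior point is repeatedly replaced by
# the average of its four neighbours, while constrained points stay fixed.
for it in range(5000):
    z_new = z.copy()
    z_new[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                                z[1:-1, :-2] + z[1:-1, 2:])
    for (i, j), d in known.items():
        z_new[i, j] = d               # re-impose the explicit conditions
    change = np.max(np.abs(z_new - z))
    z = z_new
    if change < 1e-6:
        print(f"converged after {it + 1} iterations")
        break

print("interpolated depth at (10, 10):", round(z[10, 10], 4))
```

Every update uses only a point's immediate neighbours, so the whole computation could in principle be carried out by a parallel network of simple local processors, which is the constraint the memo emphasizes.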
115

Design and analysis of iteratively decodable codes for ISI channels

Doan, Dung Ngoc 01 November 2005 (has links)
Recent advancements in iterative processing have allowed communication systems to perform close to capacity limits with manageable complexity. For many channels, such as the AWGN and flat fading channels, codes that perform only a fraction of a dB from the capacity have been designed in the literature. In this dissertation, we will focus on the design and analysis of near-capacity-achieving codes for another important class of channels, namely inter-symbol interference (ISI) channels. We propose various coding schemes such as low-density parity-check (LDPC) codes and parallel and serial concatenations for ISI channels when there is no spectral shaping used at the transmitter. The design and analysis techniques use the idea of extrinsic information transfer (EXIT) function matching and provide insights into the performance of different codes and receiver structures. We then present a coding scheme which is the concatenation of an LDPC code with a spectral shaping block code designed to be matched to the channel's spectrum. We will discuss how to design the shaping code and the outer LDPC code. We will show that spectral shaping matched codes can be used for the parallel concatenation to achieve near-capacity performance. We will also discuss the capacity of multiple antenna ISI channels. We study the effects of transmitter and receiver diversities and noisy channel state information on channel capacity.
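EXIT-function matching, mentioned above, tracks the mutual information between the transmitted bits and the extrinsic LLRs that the component decoders exchange. A common building block of such analyses is the mutual information of a "consistent" Gaussian LLR with standard deviation sigma (often written J(sigma)); the short Monte Carlo sketch below estimates it. This is only an illustration of the quantity plotted on an EXIT chart, not the code-design procedure of the dissertation.

```python
import numpy as np

def j_function(sigma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of I(X; L) for a BPSK bit X and a 'consistent'
    Gaussian LLR L ~ N(+/- sigma^2/2, sigma^2), using the identity
    I = 1 - E[log2(1 + exp(-L)) | X = +1]."""
    rng = np.random.default_rng(seed)
    L = rng.normal(sigma ** 2 / 2, sigma, n_samples)   # condition on X = +1
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

# A coarse table of prior mutual information versus LLR standard deviation,
# the kind of quantity whose transfer between blocks an EXIT chart plots.
for sigma in [0.1, 0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"sigma = {sigma:4.1f}  ->  I = {j_function(sigma):.3f}")
```

In an EXIT analysis, curves of output versus input mutual information are drawn for the detector and the decoder, and the codes are designed so that a decoding "tunnel" stays open between the two curves.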
116

Estimation-based Metaheuristics for Stochastic Combinatorial Optimization: Case Studies in Stochastic Routing Problems

Prasanna, Balaprakash 26 January 2010 (has links)
Stochastic combinatorial optimization problems are combinatorial optimization problems where part of the problem data are probabilistic. The focus of this thesis is on stochastic routing problems, a class of stochastic combinatorial optimization problems that arise in distribution management. Stochastic routing problems involve finding the best solution to distribute goods across a logistic network. In the problems we tackle, we consider a setting in which the cost of a solution is described by a random variable; the goal is to find the solution that minimizes the expected cost. Solving such stochastic routing problems is a challenging task because of two main factors. First, the number of possible solutions grows exponentially with the instance size. Second, computing the expected cost of a solution is computationally very expensive.

To tackle stochastic routing problems, stochastic local search algorithms such as iterative improvement algorithms and metaheuristics are quite promising because they offer effective strategies to tackle the combinatorial nature of these problems. However, a crucial factor that determines the success of these algorithms in stochastic settings is the trade-off between the computation time needed to search for high quality solutions in a large search space and the computation time spent in computing the expected cost of solutions obtained during the search.

To compute the expected cost of solutions in stochastic routing problems, two classes of approaches have been proposed in the literature: analytical computation and empirical estimation. The former exactly computes the expected cost using closed-form expressions; the latter estimates the expected cost through Monte Carlo simulation.

Many previously proposed metaheuristics for stochastic routing problems use the analytical computation approach. However, in a large number of practical stochastic routing problems, due to the presence of complex constraints, the use of the analytical computation approach is difficult, time consuming or even impossible. Even for the prototypical stochastic routing problems that we consider in this thesis, the adoption of the analytical computation approach is computationally expensive. Notwithstanding the fact that the empirical estimation approach can address the issues posed by the analytical computation approach, its adoption in metaheuristics to tackle stochastic routing problems has never been thoroughly investigated.

In this thesis, we study two classical stochastic routing problems: the probabilistic traveling salesman problem (PTSP) and the vehicle routing problem with stochastic demands and customers (VRPSDC). The goal of the thesis is to design, implement, and analyze effective metaheuristics that use the empirical estimation approach to tackle these two problems. The main results of this thesis are: 1) The empirical estimation approach is a viable alternative to the widely-adopted analytical computation approach for the PTSP and the VRPSDC; 2) A principled adoption of the empirical estimation approach in metaheuristics results in high performing algorithms for tackling the PTSP and the VRPSDC. The estimation-based metaheuristics developed in this thesis for these two problems define the new state-of-the-art.
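As a small illustration of the empirical-estimation idea described above, the sketch below estimates the expected cost of a fixed a priori PTSP tour by Monte Carlo sampling of which customers are present and skipping the absent ones. The instance (coordinates and presence probabilities) is invented, and the thesis' estimation-based metaheuristics add sampling strategies and delta evaluation on top of a basic estimator of this kind.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30
coords = rng.uniform(0, 100, size=(n, 2))     # invented customer locations
prob = rng.uniform(0.3, 0.9, size=n)          # per-customer presence probabilities
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

def tour_cost(tour, present):
    """Cost of the a priori tour for one realization: absent customers are
    skipped and the remaining ones are visited in the original order."""
    visited = [c for c in tour if present[c]]
    if len(visited) < 2:
        return 0.0
    legs = [dist[visited[i], visited[i + 1]] for i in range(len(visited) - 1)]
    legs.append(dist[visited[-1], visited[0]])   # close the tour
    return float(sum(legs))

def estimate_expected_cost(tour, n_samples=2000):
    """Empirical estimation: average the realized cost over sampled scenarios."""
    scenarios = rng.random((n_samples, n)) < prob
    costs = [tour_cost(tour, s) for s in scenarios]
    return float(np.mean(costs)), float(np.std(costs) / np.sqrt(n_samples))

tour = list(rng.permutation(n))                  # some a priori tour
mean, stderr = estimate_expected_cost(tour)
print(f"estimated expected tour cost: {mean:.1f} (std. error {stderr:.1f})")
```

Inside a metaheuristic, the same sampled scenarios can be reused across neighbouring solutions so that cost differences, rather than absolute costs, are estimated with low variance.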
117

Estimation-based iterative learning control

Wallén, Johanna January 2011 (has links)
In many applications industrial robots perform the same motion repeatedly. One way of compensating for the repetitive part of the error is to use iterative learning control (ILC). The ILC algorithm makes use of the measured errors and iteratively calculates a correction signal that is applied to the system. The main topic of the thesis is to apply an ILC algorithm to a dynamic system where the controlled variable is not measured. A remedy for handling this difficulty is to use additional sensors in combination with signal processing algorithms to obtain estimates of the controlled variable. A framework for analysis of ILC algorithms is proposed for the situation when an ILC algorithm uses an estimate of the controlled variable. This is a relevant research problem in, for example, industrial robot applications, where normally only the motor angular positions are measured while the control objective is to follow a desired tool path. Additionally, the dynamic model of the flexible robot structure suffers from uncertainties. The behaviour when a system having these difficulties is controlled by an ILC algorithm using the measured variables directly is illustrated experimentally, on both a serial and a parallel robot, and in simulations of a flexible two-mass model. It is shown that the correction of the tool-position error is limited by the accuracy of the robot model. The benefits of estimation-based ILC are illustrated for cases where measurements of the robot motor angular positions are fused with measurements from an additional accelerometer mounted on the robot tool to form a tool-position estimate. Estimation-based ILC is studied in simulations on a flexible two-mass model and on a flexible nonlinear two-link robot model, as well as in experiments on a parallel robot. The results show that it is possible to improve the tool performance when a tool-position estimate is used in the ILC algorithm, compared to when the original measurements available are used directly in the algorithm. Furthermore, the resulting performance relies on the quality of the estimate, as expected. In the last part of the thesis, some implementation aspects of ILC are discussed. Since the ILC algorithm involves filtering of signals over finite-time intervals, often using non-causal filters, it is important that the boundary effects of the filtering operations are appropriately handled when implementing the algorithm. It is illustrated by theoretical analysis and in simulations that the method of implementation can have a large influence on the stability and convergence properties of the algorithm. / [Swedish summary, translated:] This thesis deals with iterative learning control, ILC. The method has its origin in industrial robot applications where a robot performs the same motion over and over again. One way of compensating for the errors is through an ILC algorithm that computes a correction signal, which is applied to the system in the next iteration. The ILC algorithm can be seen as a complement to the existing control system, used to improve performance. The problem studied in particular is when an ILC algorithm is applied to a dynamic system where the controlled variable is not measured. One way of handling these difficulties is to use additional sensors in combination with signal processing algorithms to compute an estimate of the controlled variable that can be used in the ILC algorithm. A framework for analysis of estimation-based ILC is proposed in the thesis.
The problem is relevant and is motivated by experiments on both a serial and a parallel robot. In conventional robot control systems only the individual motor positions are measured, while the tool position is to follow a desired path. Experimental results show that an ILC algorithm based on the motor-position errors can reduce these errors effectively. However, this does not necessarily mean an improved tool position, since the robot motors are then controlled toward incorrect values, because the models used to compute these reference trajectories do not fully describe the true robot dynamics. Estimation-based ILC is studied both in simulations of a flexible two-mass model and of a nonlinear robot model with flexible joints, and in experiments on a parallel robot. In these studies, motor-position measurements are fused with measurements from an accelerometer on the robot tool into an estimate of the tool position, which is used in the ILC algorithm. The results show that it is possible to improve the tool position with estimation-based ILC, compared to when the motor-position measurements are used directly in the ILC algorithm. The result also depends on the quality of the estimate, as expected. Finally, some implementation aspects are discussed. All values of the update signal are applied to the system at once, which makes it possible to use non-causal filtering that exploits future signal values. This means that it matters how the boundary effects (the beginning and end of the signal) are handled when the ILC algorithm is implemented. Theoretical analysis and simulation examples illustrate that the method of implementation can have a large impact on the properties of the ILC algorithm.
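As a minimal illustration of the basic ILC update described above (not the estimation-based scheme developed in the thesis), the sketch below applies the classic first-order learning law u_{k+1}[n] = u_k[n] + gamma * e_k[n+1] to an invented discrete-time plant and prints the tracking error shrinking from one repetition to the next. The plant, reference, and learning gain are assumptions for the example.

```python
import numpy as np

def simulate(u):
    """Toy discrete-time plant y[n] = 0.9*y[n-1] + 0.1*u[n-1], y[0] = 0."""
    y = np.zeros(len(u))
    for n in range(1, len(u)):
        y[n] = 0.9 * y[n - 1] + 0.1 * u[n - 1]
    return y

N = 100
t = np.arange(N)
r = np.sin(2 * np.pi * t / N)          # desired trajectory over the interval
u = np.zeros(N)                        # input applied at iteration 0
gamma = 1.0                            # modest learning gain

for k in range(10):
    y = simulate(u)                    # run one repetition of the motion
    e = r - y                          # measured tracking error
    # First-order ILC update: shift the error one step ahead to compensate
    # for the plant's one-sample delay. This is a non-causal operation that
    # is possible because the whole error trajectory is recorded before the
    # next repetition starts.
    u = u + gamma * np.concatenate((e[1:], [0.0]))
    print(f"ILC iteration {k}: max |error| = {np.max(np.abs(e)):.4f}")
```

In the estimation-based setting of the thesis, the measured error e would be replaced by an estimate of the tool-position error obtained by fusing motor and accelerometer measurements.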
118

Low-complexity iterative receivers for multiuser space-time block coding systems

Yang, Yajun 31 October 2006
Iterative processing has been shown to be very effective in multiuser space-time block coding (STBC) systems. The complexity and efficiency of an iterative receiver depend heavily on how the log-likelihood ratios (LLRs) of the coded bits are computed and exchanged at the receiver among its three major components, namely the multiuser detector, the maximum a posteriori probability (MAP) demodulators and the MAP channel decoders. This thesis first presents a method to quantitatively measure the system complexities in floating-point operations (FLOPS) and a technique to evaluate the iterative receiver's convergence property based on mutual information and extrinsic information transfer (EXIT) charts.

Then, an integrated iterative receiver is developed by applying the sigma mappings for M-ary quadrature amplitude modulation (M-QAM) constellations. Due to the linear relationship between the coded bits and the transmitted channel symbol, the multiuser detector can work on the bit level and hence improves the convergence property of the iterative receiver. It is shown that the integrated iterative receiver is an attractive candidate to replace the conventional receiver when a few receive antennas and a high-order M-QAM constellation are employed.

Finally, a more general two-loop iterative receiver is proposed by introducing an inner iteration loop between the MAP demodulators and the MAP convolutional decoders, in addition to the outer iteration loop that involves the multiuser detection (MUD) as in the conventional iterative receiver. The proposed two-loop iterative receiver greatly improves the iteration efficiency. It is demonstrated that the proposed two-loop iterative receiver can achieve the same asymptotic performance as the conventional iterative receiver, but with far fewer outer-loop iterations.
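The components above exchange LLRs of the coded bits; as a small illustration of what a MAP demodulator computes for a single received symbol, the sketch below evaluates exact bit LLRs for a Gray-labelled 16-QAM symbol given independent per-bit priors. The constellation labelling, priors, and noise level are assumptions for the example, and the sigma mapping and multiuser aspects of the thesis are not modelled.

```python
import numpy as np
from itertools import product

# Gray-labelled 16-QAM built per dimension: two bits select the I level and
# two bits select the Q level (labelling and normalization are assumptions).
gray_levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
labels, points = [], []
for bi in product([0, 1], repeat=2):
    for bq in product([0, 1], repeat=2):
        labels.append(bi + bq)
        points.append(complex(gray_levels[bi], gray_levels[bq]))
labels = np.array(labels)                      # shape (16, 4)
points = np.array(points) / np.sqrt(10)        # unit average symbol energy

def demap_llrs(y, noise_var, prior_llrs):
    """A posteriori bit LLRs L_i = log P(b_i=0 | y) - log P(b_i=1 | y) for one
    received sample y, with independent a priori LLRs on the four bits."""
    # log-prior of each constellation point from the per-bit prior LLRs
    # (convention: L = log P(b=0) / P(b=1))
    log_prior = labels @ (-np.logaddexp(0, prior_llrs)) \
              + (1 - labels) @ (-np.logaddexp(0, -prior_llrs))
    metric = -np.abs(y - points) ** 2 / noise_var + log_prior
    llrs = np.empty(4)
    for i in range(4):
        num = np.logaddexp.reduce(metric[labels[:, i] == 0])
        den = np.logaddexp.reduce(metric[labels[:, i] == 1])
        llrs[i] = num - den
    return llrs

y = 0.25 - 0.70j                               # made-up received sample
print(demap_llrs(y, noise_var=0.2, prior_llrs=np.zeros(4)))
```

The extrinsic values passed on to the decoders are these a posteriori LLRs minus the corresponding priors, which is what keeps the information exchanged in the loop from being fed back on itself.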
119

New numerical methods and analysis for Toeplitz matrices with financial applications

Pang, Hong Kui January 2011 (has links)
University of Macau / Faculty of Science and Technology / Department of Mathematics
120

An iterative reconstruction algorithm for quantitative tissue decomposition using DECT / En iterativ rekonstruktionsalgoritm för kvantitativ vävnadsklassificering via DECT

Grandell, Oscar January 2012 (has links)
The introduction of dual energy CT (DECT) in the field of medical healthcare has made it possible to extract more information about the scanned objects. This in turn has the potential to improve the accuracy of radiation therapy dose planning. One problem that remains before successful material decomposition can be achieved, however, is the presence of beam hardening and scatter artifacts that arise in a scan. Methods currently in clinical use for removal of beam hardening often bias the CT numbers; hence, the possibility of an appropriate tissue decomposition is limited. Here a method for successful decomposition as well as removal of the beam hardening artifact is presented. The method uses effective linear attenuations for the five base materials water, protein, adipose, cortical bone and marrow to perform the decomposition on reconstructed simulated data. This is performed inside an iterative loop, together with the polychromatic x-ray spectra, to remove the beam hardening.
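The thesis performs the decomposition over five base materials inside a full reconstruction loop; as a much smaller illustration of the underlying principle, the sketch below iteratively inverts a polychromatic two-material forward model for a single ray, recovering water and bone thicknesses from two simulated dual-energy measurements. The spectra and attenuation values are invented placeholders rather than clinical data, and Newton iteration stands in for the reconstruction loop of the thesis.

```python
import numpy as np

# Invented discrete spectra (relative weights over four energy bins) for the
# low- and high-energy scans, and made-up linear attenuation coefficients
# (1/cm) of the two base materials at those energies.
energies = np.array([40.0, 60.0, 80.0, 100.0])       # keV (for context only)
spectrum = {"low":  np.array([0.40, 0.35, 0.20, 0.05]),
            "high": np.array([0.10, 0.25, 0.35, 0.30])}
mu_water = np.array([0.27, 0.21, 0.18, 0.17])
mu_bone  = np.array([1.28, 0.57, 0.43, 0.38])

def forward(t, weights):
    """Polychromatic projection -ln(sum_E w(E) exp(-mu_w(E)*t_w - mu_b(E)*t_b))."""
    atten = np.exp(-(mu_water * t[0] + mu_bone * t[1]))
    return -np.log(weights @ atten)

# "Measured" projections for a ground-truth ray: 20 cm water + 2 cm bone.
t_true = np.array([20.0, 2.0])
meas = np.array([forward(t_true, spectrum["low"]),
                 forward(t_true, spectrum["high"])])

# Iterative decomposition: Newton iterations with a numerical Jacobian,
# started from a crude water-only guess.
t = np.array([meas[1] / mu_water[2], 0.0])
eps = 1e-4
for it in range(20):
    f = np.array([forward(t, spectrum["low"]),
                  forward(t, spectrum["high"])]) - meas
    if np.max(np.abs(f)) < 1e-10:
        break
    base = f + meas                              # forward values at current t
    J = np.zeros((2, 2))
    for j in range(2):
        tp = t.copy()
        tp[j] += eps
        J[:, j] = (np.array([forward(tp, spectrum["low"]),
                             forward(tp, spectrum["high"])]) - base) / eps
    t = t - np.linalg.solve(J, f)

print("recovered thicknesses (water, bone) in cm:", np.round(t, 3))
```

Each iteration re-evaluates the polychromatic forward model with the current material estimates, which mirrors the feedback idea the abstract describes for driving the beam hardening artifact out of the reconstruction.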
