271

Flexible Wireless Receivers: On-Chip Testing Techniques and Design for Testability

Ramzan, Rashad January 2009 (has links)
In recent years, interest in the design of low-cost multistandard mobile devices has gone from technical aspiration to commercial reality. Emerging wireless applications usually prompt the conception of new wireless standards, and the end user wants to access voice, data, and streaming media using a single wireless terminal. From an RF perspective, these standards differ in frequency band, sensitivity, data rate, bandwidth, and modulation type. Therefore, a flexible multistandard radio receiver covering most of the cellular, WLAN, and short-range communication standards in the 800 MHz to 6 GHz band is highly desired. To keep the cost low, a high level of integration becomes a necessity for such a flexible multistandard radio. Due to aggressive CMOS scaling, the fT of the transistors has surpassed 200 GHz, and since CMOS has proven to be the technology best suited for monolithic integration, it appears to be the natural choice for the physical implementation of such a flexible receiver. In this thesis, two multiband sampling radio receiver front-ends implemented in 130 nm and 90 nm CMOS, including test circuitry (DfT), are presented as a step in this direction. In modern radio transceivers the estimated cost of testing is a significant portion of the manufacturing cost and is escalating with every new generation of RF chips. To reduce the test cost it is important to identify faulty circuits very early in the design flow, even before packaging. This thesis therefore also presents on-chip testing techniques that reduce test time and cost. For integrated RF transceivers, chip reconfiguration by a loopback setup can be used; variants, including a bypassing technique that improves testability and enables on-chip test when direct loopback is not feasible, are presented. A technique for boosting testability by an elevated symbol error rate (SER) test is also presented; it achieves better sensitivity and shorter test time than the standard SER test. Practical DfT implementation is addressed by circuit-level design of various test blocks, such as a linear attenuator, stimulus generator, and RF detectors, embedded in RF chips without notable performance penalty. The downside of CMOS scaling is the increase in parameter variability due to process variations and mismatch, which affects both the test circuitry (DfT) and the circuit under test (CUT). A new calibration scheme for the test circuitry that compensates for this effect is presented, based on on-chip DC measurements supported by a statistical regression method. Wideband low-reflection PCB transmission lines are needed to enable functional RF testing with external signal generators for RF chips bonded directly on the PCB. Due to the extremely small chip dimensions it is not possible to lay out the transmission line without a width discontinuity; a step change in the substrate thickness is utilized to cancel this effect, resulting in a low-reflection transmission line. In summary, all of these techniques at the system and circuit level pave the way to new opportunities for low-cost transceiver testing, especially in volume production.
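As a rough illustration of why an elevated SER shortens test time (a back-of-the-envelope sketch of my own, not a result from the thesis): the number of transmitted symbols needed to observe a statistically meaningful number of errors scales inversely with the SER, so attenuating the stimulus to raise the SER cuts the required test length roughly in proportion.

```python
# Rough illustration (not from the thesis): to observe a statistically
# meaningful number of symbol errors, roughly target_errors / SER symbols
# must be transmitted, so artificially raising the SER (e.g. by attenuating
# the test stimulus) shortens the test proportionally.

def symbols_needed(ser: float, target_errors: int = 100) -> int:
    """Approximate number of symbols needed to observe `target_errors` errors."""
    return int(target_errors / ser)

nominal_ser = 1e-5    # assumed SER at the standard operating point
elevated_ser = 1e-2   # assumed SER with the stimulus deliberately attenuated

print(symbols_needed(nominal_ser))   # ~10,000,000 symbols
print(symbols_needed(elevated_ser))  # ~10,000 symbols
```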
272

On Evaluation of Design Concepts : Modelling Approaches for Enhancing the Understanding of Design Solutions

Derelöv, Micael January 2009 (has links)
This dissertation embraces the issue of evaluating design concepts. Being able to sort out the potential "best solutions" from a set of solutions is a central and important part of the design process. The subject discussed in this dissertation has its origins in the lack of knowledge about design concepts, something which is characteristic of the initial part of the design process and which frequently causes problems when it comes to evaluation and selection of solutions. The purpose of this dissertation is to develop aids and methods that enhance the understanding of design concepts in the early phases of the design process. From deductive reasoning about the fundamental mechanisms of the evaluation activity, the work has been divided into three different areas: process and system modelling, concept optimisation, and identification of potential failures. The work within the area of process and system modelling has a verifying character. The objective has been to analyse how established design methodology, which has its common applications within traditional engineering industry, may be applied within an area characterised by more multidisciplinary interfaces, such as biotechnology. The result of a number of case studies, in which different types of biotechnical systems were analysed and modelled, shows that the methodology is applicable even for biotechnical products. During the work the methodology has also been further elaborated in order to better suit the distinguishing characteristics exhibited in the development of biotechnical systems. Within the area of concept optimisation, an approach for optimising concept generation has been elaborated. By formalising the steps in both concept generation and evaluation, it has been possible to apply genetic algorithms in order to optimise the process. The work has resulted in a model that automatically creates and sorts out a number of potential solutions from a defined solution space and a defined set of goals. The last area, which deals with identification of potential failures, has resulted in a rather novel way to consider and model the behaviour of a system. The approach is an elaboration of the modelling techniques within system theory, and deduces the system's behaviour from known physical phenomena and the system's ability to effectuate them. The way the different behaviours interact with one another, by affecting the properties of the system, determines the potential for a failure to occur. A "failure", according to the model, is described as an unintended behaviour which obstructs the system's functionality, i.e. which affects the conditions of a desired behaviour. The dissertation has resulted in three different means of approaching the difficulties associated with the evaluation of design concepts. The means are applicable during different parts of the design process, but they all address the same issue, viz. to enhance the understanding of the design solutions.
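To illustrate the kind of concept optimisation described above, the following is a minimal sketch assuming a made-up morphological solution space and an invented additive fitness function; it is not the dissertation's actual model.

```python
# Minimal sketch (not the dissertation's model): a genetic algorithm searching
# a morphological design space.  Each "concept" picks one sub-solution per
# function; the solution space and scores below are invented placeholders.
import random

SOLUTION_SPACE = {
    "energy_source": [("battery", 0.6), ("mains", 0.9), ("solar", 0.4)],
    "actuation":     [("electric", 0.8), ("pneumatic", 0.5)],
    "control":       [("manual", 0.3), ("embedded", 0.9)],
}
FUNCTIONS = list(SOLUTION_SPACE)

def random_concept():
    return [random.randrange(len(SOLUTION_SPACE[f])) for f in FUNCTIONS]

def fitness(concept):
    # Sum of sub-solution scores; a real model would also encode
    # compatibility constraints between sub-solutions.
    return sum(SOLUTION_SPACE[f][i][1] for f, i in zip(FUNCTIONS, concept))

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    population = [random_concept() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]           # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FUNCTIONS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:         # random mutation
                g = random.randrange(len(FUNCTIONS))
                child[g] = random.randrange(len(SOLUTION_SPACE[FUNCTIONS[g]]))
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return [SOLUTION_SPACE[f][i][0] for f, i in zip(FUNCTIONS, best)]

print(evolve())   # e.g. ['mains', 'electric', 'embedded']
```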
273

Designing thiophene-based fluorescent probes for the study of neurodegenerative protein aggregation diseases : From test tube to in vivo experiments

Åslund, Andreas January 2009 (has links)
Protein aggregation is an event related to numerous neurodegenerative diseases, such as Alzheimer's disease and prion diseases. However, little is known about how and why the aggregates form, and furthermore, the toxic species may not be the mature fibril but an on-route or off-route species on the way towards mature aggregates. During this project, molecular probes were synthesized that may shed some light on these questions. The probes are thiophene-based, and the technique used for detection was mainly fluorescence. It was shown that the previously established thiophene-based in vitro staining technique is valid ex vivo and in vivo. This would not have been possible without the synthesis of a variety of functionalized polymeric thiophene-based probes; their in vitro and ex vivo staining properties were taken into consideration when the design of the small oligomeric probes was decided upon. These probes were shown to spectrally distinguish different types of amyloid, pass the blood-brain barrier within minutes, and specifically and selectively stain protein aggregates in the brains of mice.
274

Algorithms and Hardness Results for Some Valued CSPs

Kuivinen, Fredrik January 2009 (has links)
In the Constraint Satisfaction Problem (CSP) one is supposed to find an assignment to a set of variables so that a given set of constraints is satisfied. Many problems, both practical and theoretical, can be modelled as CSPs. As these problems are computationally hard, it is interesting to investigate what kinds of restrictions of the problems imply computational tractability. In this thesis the computational complexity of restrictions of two optimisation problems related to the CSP is studied. In optimisation problems one can also relax the requirements and ask for an approximately good solution, instead of requiring the optimal one. The first problem we investigate is Maximum Solution (Max Sol), where one is looking for a solution which satisfies all constraints and also maximises a linear objective function. The Maximum Solution problem is a generalisation of the well-known integer linear programming problem. In the case when the constraints are equations over an abelian group we obtain tight inapproximability results. We also study Max Sol for so-called maximal constraint languages, and a partial classification theorem is obtained in this case. Finally, Max Sol over the Boolean domain is studied in a setting where each variable only occurs a bounded number of times. The second problem is the Maximum Constraint Satisfaction Problem (Max CSP). In this problem one is looking for an assignment which maximises the number of satisfied constraints. We first show that if the constraints are known to give rise to an NP-hard CSP, then one cannot get arbitrarily good approximate solutions in polynomial time, unless P = NP. We use this result to show a similar hardness result for the case when only one constraint relation is used. We also study the submodular function minimisation problem (SFM) on certain finite lattices. There is a strong link between Max CSP and SFM; new tractability results for SFM imply new tractability results for Max CSP. It is conjectured that SFM is the only reason for Max CSP to be tractable, but no one has yet managed to prove this. We obtain new tractability results for SFM on diamonds and evidence which supports the hypothesis that all modular lattices are tractable. / In a constraint programming problem, the task is to assign values to variables so that a given set of constraints is satisfied. Many practical problems, such as scheduling and planning, can be formulated as constraint programming problems, and it is therefore desirable to have efficient algorithms for finding solutions to this type of problem. The general variants of these problems are NP-hard to solve, which means that there are probably no efficient algorithms for them (unless P = NP, which is considered very unlikely). For this reason we simplify the problem by studying restrictions of it, and sometimes we settle for approximate solutions. This thesis studies two variants of the constraint programming problem in which one should not only find a solution but find as good a solution as possible. In the first variant, the goal is to find an assignment in which all constraints are satisfied and a weighted sum of the variables is maximised. This problem can be seen as a generalisation of the well-known integer linear programming problem. In the second variant, the goal is to find an assignment that satisfies as many constraints as possible.
For the first variant, where one should find a solution that satisfies all constraints and also maximises the sum of the variables, new results are presented for a number of special cases. The so-called maximal constraint languages are studied and the complexity of a number of them is determined. We also study a variant of the problem over the Boolean domain where the number of variable occurrences is bounded; in this case we give a partial classification of which constraint languages are tractable and which cannot be solved efficiently. For the second variant, where one should satisfy as many constraints as possible, some new efficient algorithms are presented for certain restrictions. In these algorithms, the more general problem of minimising submodular functions over certain finite lattices is solved. We also prove a result that describes exactly when efficient algorithms exist if only one type of constraint is available.
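As an illustration of the Max Sol problem itself, the sketch below solves a tiny instance by brute force; the domain, constraints, and weights are invented examples, and the thesis is concerned with complexity classifications rather than such enumeration.

```python
# Illustrative sketch (not from the thesis): brute-force solver for a tiny
# Max Sol instance -- maximise a weighted sum of the variables subject to
# all constraints being satisfied.  Domain, constraints and weights are
# made-up examples.
from itertools import product

DOMAIN = [0, 1, 2]                    # finite domain of values
VARIABLES = ["x", "y", "z"]
WEIGHTS = {"x": 1, "y": 2, "z": 1}    # non-negative weights

# Each constraint is (scope, relation): the relation is the set of allowed
# value tuples for the scope.
CONSTRAINTS = [
    (("x", "y"), {(a, b) for a in DOMAIN for b in DOMAIN if a != b}),  # x != y
    (("y", "z"), {(a, b) for a in DOMAIN for b in DOMAIN if a <= b}),  # y <= z
]

def max_sol():
    best_value, best_assignment = None, None
    for values in product(DOMAIN, repeat=len(VARIABLES)):
        assignment = dict(zip(VARIABLES, values))
        if all(tuple(assignment[v] for v in scope) in rel
               for scope, rel in CONSTRAINTS):
            value = sum(WEIGHTS[v] * assignment[v] for v in VARIABLES)
            if best_value is None or value > best_value:
                best_value, best_assignment = value, assignment
    return best_value, best_assignment

print(max_sol())   # (7, {'x': 1, 'y': 2, 'z': 2}) for this toy instance
```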
275

Scheduling and Optimization of Fault-Tolerant Distributed Embedded Systems

Izosimov, Viacheslav January 2009 (has links)
Safety-critical applications have to function correctly and deliver a high level of quality-of-service even in the presence of faults. This thesis deals with techniques for tolerating the effects of transient and intermittent faults. Re-execution, software replication, and rollback recovery with checkpointing are used to provide the required level of fault tolerance at the software level. Hardening is used to increase the reliability of hardware components. These techniques are considered in the context of distributed real-time systems with static and quasi-static scheduling. Many safety-critical applications also have strict time and cost constraints, which means that not only must faults be tolerated but the constraints must also be satisfied. Hence, efficient system design approaches with careful consideration of fault tolerance are required. This thesis proposes several design optimization strategies and scheduling techniques that take fault tolerance into account. The design optimization tasks addressed include, among others, process mapping, fault tolerance policy assignment, checkpoint distribution, and trading off between hardware hardening and software re-execution. Particular optimization approaches are also proposed to consider the debuggability requirements of fault-tolerant applications. Finally, quality-of-service aspects are addressed for fault-tolerant embedded systems with soft and hard timing constraints. The proposed scheduling and design optimization strategies have been thoroughly evaluated with extensive experiments. The experimental results show that considering fault tolerance during system-level design optimization is essential when designing cost-effective and high-quality fault-tolerant embedded systems.
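The checkpoint-distribution trade-off can be illustrated with a common first-order model (a hedged sketch of my own, not necessarily the thesis's exact formulation): more checkpoints shorten the re-execution needed after a fault but add checkpointing overhead, and a simple search finds the count that minimises the worst-case completion time.

```python
# Hedged illustration (assumed first-order model, not the thesis's exact one):
# pick the number of equidistant checkpoints that minimises the worst-case
# completion time of a process with rollback recovery, tolerating k faults.
import math

def worst_case_time(exec_time: float, overhead: float, k: int, n: int) -> float:
    """Worst-case time with n equidistant checkpoints and up to k faults."""
    # checkpointing overhead paid n times; each fault re-executes at most
    # one segment (exec_time / n) plus one recovery/checkpoint overhead.
    return exec_time + n * overhead + k * (exec_time / n + overhead)

def best_checkpoint_count(exec_time: float, overhead: float, k: int) -> int:
    n_est = max(1, round(math.sqrt(k * exec_time / overhead)))
    candidates = {max(1, n_est - 1), n_est, n_est + 1}   # check neighbours
    return min(candidates, key=lambda n: worst_case_time(exec_time, overhead, k, n))

print(best_checkpoint_count(exec_time=100.0, overhead=5.0, k=2))   # -> 6
```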
276

Aspects of a Constraint Optimisation Problem

Thapper, Johan January 2010 (has links)
In this thesis we study a constraint optimisation problem called the maximum solution problem, henceforth referred to as Max Sol. It is defined as the problem of optimising a linear objective function over a constraint satisfaction problem (Csp) instance on a finite domain. Each variable in the instance is given a non-negative rational weight, and each domain element is also assigned a numerical value, for example taken from the natural numbers. From this point of view, the problem is seen to be a natural extension of integer linear programming over a bounded domain. We study both the time complexity of approximating Max Sol, and the time complexity of obtaining an optimal solution. In the latter case, we also construct some exponential-time algorithms. The algebraic method is a powerful tool for studying Csp-related problems. It was introduced for the decision version of Csp, and has been extended to a number of other problems, including Max Sol. With this technique we establish approximability classifications for certain families of constraint languages, based on algebraic characterisations. We also show how the concept of a core for relational structures can be extended in order to determine when constant unary relations can be added to a constraint language, without changing the computational complexity of finding an optimal solution to Max Sol. Using this result we show that, in a specific sense, when studying the computational complexity of Max Sol, we only need to consider constraint languages with all constant unary relations included. Some optimisation problems are known to be approximable within some constant ratio, but are not believed to be approximable within an arbitrarily small constant ratio. For such problems, it is of interest to find the best ratio within which the problem can be approximated, or at least give some bounds on this constant. We study this aspect of the (weighted) Max Csp problem for graphs. In this optimisation problem the number of satisfied constraints is supposed to be maximised. We introduce a method for studying approximation ratios which is based on a new parameter on the space of all graphs. Informally, we think of this parameter as an approximation distance; knowing the distance between two graphs, we can bound the approximation ratio of one of them, given a bound for the other. We further show how the basic idea can be implemented also for the Max Sol problem.
277

A Tensor Framework for Multidimensional Signal Processing

Westin, Carl-Fredrik January 1994 (has links)
This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "Normalized convolution". The method performs local expansion of a signal in a chosen filter basis which does not necessarily have to be orthonormal. A key feature of the method is that it can deal with uncertain data when additional certainty statements are available for the data and/or the filters. It is shown how false operator responses due to missing or uncertain data can be significantly reduced or eliminated using this technique. Perhaps the most well-known of such effects are the various 'edge effects' which invariably occur at the edges of the input data set. The method is an example of the signal/certainty philosophy, i.e. the separation of both data and operator into a signal part and a certainty part. An estimate of the certainty must accompany the data. Missing data are simply handled by setting the certainty to zero. Localization or windowing of operators is done using an applicability function, the operator equivalent to certainty, not by changing the actual operator coefficients. Spatially or temporally limited operators are handled by setting the applicability function to zero outside the window. The use of tensors in estimation of local structure and orientation using spatiotemporal quadrature filters is reviewed and related to dual tensor bases. The tensor representation conveys the degree and type of local anisotropy. For image sequences, the shape of the tensors describes the local structure of the spatiotemporal neighbourhood and provides information about local velocity. The tensor representation also conveys information for deciding if true flow or only normal flow is present. It is shown how normal flow estimates can be combined into a true flow using averaging of this tensor field description. Important aspects of representation and techniques for grouping local orientation estimates into global line information are discussed. The uniformity of some standard parameter spaces for line segmentation is investigated. The analysis shows that, to avoid discontinuities, great care should be taken when choosing the parameter space for a particular problem. A new parameter mapping well suited for line extraction, the Möbius strip parameterization, is defined. The method has similarities to the Hough Transform. Estimation of local frequency and bandwidth is also discussed. Local frequency is an important concept which provides an indication of the appropriate range of scales for subsequent analysis. One-dimensional and two-dimensional examples of local frequency estimation are given. The local bandwidth estimate is used for defining a certainty measure. The certainty measure enables the use of a normalized averaging process, increasing the robustness and accuracy of the frequency statements.
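A minimal sketch of the simplest instance of normalized convolution, normalized averaging of an uncertain 1-D signal, assuming NumPy and SciPy are available; the signal, certainty, and applicability values are invented for illustration.

```python
# Sketch of normalized averaging, the simplest case of normalized convolution:
# filter the certainty-weighted data and divide by the filtered certainty,
# so missing samples (certainty 0) do not bias the result.
import numpy as np
from scipy.ndimage import convolve1d

signal = np.array([1.0, 2.0, 0.0, 4.0, 5.0, 6.0])
certainty = np.array([1.0, 1.0, 0.0, 1.0, 1.0, 1.0])   # 0 marks missing data
applicability = np.array([1.0, 2.0, 1.0])               # localisation window
applicability /= applicability.sum()

numerator = convolve1d(signal * certainty, applicability, mode="constant")
denominator = convolve1d(certainty, applicability, mode="constant")

# Guard against division by zero where no certain data fall under the window.
result = np.where(denominator > 1e-12,
                  numerator / np.maximum(denominator, 1e-12), 0.0)
print(result)   # the missing sample is interpolated from its certain neighbours
```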
278

Signal Representation and Processing using Operator Groups

Nordberg, Klas January 1994 (has links)
This thesis presents a signal representation in terms of operators. The signal is assumed to be an element of a vector space and subject to transformations by operators. The operators form continuous groups, so-called Lie groups. The representation can be used for signals in general, in particular when spatial relations are undefined, and it does not require a basis of the signal space to be useful. Special attention is given to orthogonal operator groups which are generated by anti-Hermitian operators by means of the exponential mapping. It is shown that the eigensystem of the group generator is strongly related to properties of the corresponding operator group. For one-parameter orthogonal operator groups, a phase concept is introduced. This phase can, for instance, be used to distinguish between spatially even and odd signals and, therefore, corresponds to the usual phase for multi-dimensional signals. Given one operator group that represents the variation of the signal and one operator group that represents the variation of a corresponding feature descriptor, an equivariant mapping maps the signal to the descriptor such that the two operator groups correspond. Sufficient conditions are derived for a general mapping to be equivariant with respect to a pair of operator groups. These conditions are expressed in terms of the generators of the two operator groups. As a special case, second-order homogeneous mappings are considered, and examples of how second-order mappings can be used to obtain different types of feature descriptors are presented, in particular for operator groups that are homomorphic to rotations in two and three dimensions, respectively. A generalization of directed quadrature filters is made. All feature extraction algorithms that are presented are discussed in terms of phase invariance. Simple procedures that estimate group generators which correspond to one-parameter groups are derived and tested on an example. The resulting generator is evaluated by using its eigensystem in implementations of two feature extraction algorithms. It is shown that the resulting feature descriptor has good accuracy with respect to the corresponding feature value, even in the presence of signal noise.
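A small numerical illustration (my own toy example, not from the thesis) of a one-parameter orthogonal operator group generated by an anti-Hermitian generator through the exponential mapping:

```python
# Toy example: a one-parameter orthogonal operator group generated by an
# anti-Hermitian (here real, antisymmetric) generator via the exponential map.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # antisymmetric generator, A^T = -A

def group_element(t: float) -> np.ndarray:
    """exp(tA): a planar rotation by the angle t."""
    return expm(t * A)

R = group_element(np.pi / 4)
print(np.allclose(R.T @ R, np.eye(2)))            # True: R is orthogonal
print(np.allclose(group_element(0.3) @ group_element(0.5),
                  group_element(0.8)))            # True: one-parameter group law
```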
279

Adaptive Multidimensional Filtering

Haglund, Leif January 1991 (has links)
This thesis contains a presentation and an analysis of adaptive filtering strategies for multidimensional data. The size, shape, and orientation of the filter are signal-controlled and thus adapted locally to each neighbourhood according to a predefined model. The filter is constructed as a linear weighting of fixed oriented bandpass filters having the same shape but different orientations. The adaptive filtering methods have been tested, with good results, on both real data and synthesized test data in 2D, e.g. still images, and 3D, e.g. image sequences or volumes. In 4D, e.g. volume sequences, the algorithm is given in its mathematical form. The weighting coefficients are given by the inner products of a tensor representing the local structure of the data and the tensors representing the orientations of the filters. The procedure and filter design for estimating the representation tensor are described. In 2D, the tensor contains information about the local energy, the optimal orientation, and a certainty of the orientation. In 3D, the information in the tensor is the energy, the normal to the best-fitting local plane and the tangent to the best-fitting line, and certainties of these orientations. In the case of time sequences, a quantitative comparison of the proposed method and other (optical flow) algorithms is presented. The estimation of control information is made at different scales. There are two main reasons for this. First, a single filter has a particular limited passband which may or may not be tuned to the differently sized objects to be described. Second, size or scale is a descriptive feature in its own right. All of this requires the integration of measurements from different scales. The increasing interest in wavelet theory supports the idea that a multiresolution approach is necessary. Hence the resulting adaptive filter will also adapt in size and to different orientations at different scales.
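The weighting of the fixed oriented filters by inner products with the local structure tensor can be sketched as follows, assuming a simplified 2-D case with invented tensor values (a sketch of the general idea, not the thesis's actual filter set):

```python
# Sketch (simplified 2-D case with invented values): weight fixed oriented
# filters by the inner product between the local structure tensor T and each
# filter's orientation tensor N_k = n_k n_k^T.
import numpy as np

# Hypothetical structure tensor for a neighbourhood dominated by a structure
# oriented along 45 degrees, plus a small isotropic part.
v = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
T = 2.0 * np.outer(v, v) + 0.2 * np.eye(2)

# Three fixed filter orientations: 0, 60, and 120 degrees.
angles = [0.0, np.pi / 3, 2 * np.pi / 3]
weights = []
for theta in angles:
    n = np.array([np.cos(theta), np.sin(theta)])
    N = np.outer(n, n)                   # filter orientation tensor
    weights.append(np.tensordot(T, N))   # inner product <T, N_k>

print(np.round(weights, 3))   # largest weight for the filter closest to 45 deg
```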
280

Flexible Interleaving Sub–systems for FEC in Baseband Processors

Asghar, Rizwan January 2010 (has links)
Interleaving is always used in combination with error control coding. It spreads burst noise, turning it into white noise so that the noise-induced bit errors can be corrected. With the advancement of communication systems and the substantial increase in bandwidth requirements, the use of coding for forward error correction (FEC) has become an integral part of modern communication systems. Dividing the FEC sub-systems into two categories, i.e. channel coding/decoding and interleaving/de-interleaving, the latter appears to be more varied in permutation functions, block sizes, and throughput requirements. Interleaving/de-interleaving consumes more silicon due to the cost of the permutation tables used in conventional LUT-based approaches. For devices supporting multiple standards, the silicon cost of the permutation tables can grow much higher, resulting in an inefficient solution. Therefore, hardware re-use among different interleaver modules to support a multimode processing platform is of significance. The broadness of the interleaving algorithms gives rise to many challenges when considering a true multimode interleaver implementation. The main challenges include real-time, low-latency computation of different permutation functions, managing a wide range of interleaving block sizes, higher throughput, low cost, fast and dynamic reconfiguration for different standards, and introducing parallelism wherever necessary. It is difficult to merge all currently used interleavers into a single architecture because of different algorithms and throughputs; however, the fact that multimode coverage does not require multiple interleavers to work at the same time provides opportunities for hardware multiplexing. The multimode functionality is then achieved by fast switching between different standards. We used algorithmic-level transformations, such as the 2-D transformation and the realization of recursive computations, which appear to be the key to bringing different interleaving functions to the same level. In general, the work focuses on function-level hardware re-use, but it also utilizes classical data-path level optimizations for efficient hardware multiplexing among different standards. The research has resulted in multiple flexible architectures supporting multiple standards. These architectures target both channel interleaving and turbo-code interleaving, and they can support both types of communication systems, i.e. single-stream and multi-stream systems. Introducing the algorithmic-level transformations and then applying the hardware re-use methodology has resulted in lower silicon cost while supporting sufficient throughput. According to a database search in March 2010, we have the first multimode interleaver core covering WLAN (802.11a/b/g and 802.11n), WiMAX (802.16e), 3GPP-WCDMA, 3GPP-LTE, and DVB-T/H on a single architecture with minimum silicon cost. The research also provides support for parallel interleaver address generation using different architectures: it provides the algorithmic modifications and architectures to generate up to 8 addresses in parallel and handle the memory conflicts on the fly. One of the vital requirements for multimode operation is fast switching between different standards, which is supported by the presented architectures with minimal cycle-cost overheads.
Fast switching between different standards allows the baseband processor to reconfigure the interleaver architecture on the fly and re-use the same hardware for another standard. Lower silicon cost, maximum flexibility, and fast switchability among multiple standards at run time make the proposed research a good choice for radio baseband processing platforms.
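As a generic illustration of on-the-fly interleaver address generation (a simple row-write/column-read block interleaver, not one of the thesis's architectures):

```python
# Simple illustration (not the thesis architecture): the 2-D view of block
# interleaving -- write the symbol stream row-wise into an R x C array and
# read it out column-wise.  A hardware address generator computes the same
# permutation on the fly instead of storing a lookup table.
def block_interleaver_addresses(rows: int, cols: int):
    """Yield read addresses for a row-write / column-read block interleaver."""
    for c in range(cols):
        for r in range(rows):
            yield r * cols + c     # address of the next symbol to read out

data = list("ABCDEFGHIJKL")        # 12 symbols, viewed as a 3 x 4 array
interleaved = [data[a] for a in block_interleaver_addresses(3, 4)]
print("".join(interleaved))        # AEIBFJCGKDHL
```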
