11

Le mouvement des images : la lecture du temps / The movement of images: the reading of time

Lin, Chih-Wei 31 January 2014 (has links)
We have always been drawn to successive images, because our eyes move from one image to the next automatically. These images seem to attract our gaze. Nevertheless, although images of this kind have long been in use, we know little about them. Since these images attract our gaze, and the gaze passes from one to the other, a certain time exists in that action, or rather among those images. This question, however, remains complicated and difficult to untangle. Such questions about successive images form one part of our inquiry. In this thesis we reflect on the ontology of successive images, and we approach these questions through the way the phenomena of these single images are described. We examine successive images from a philosophical standpoint. Through the theories of English empiricism we found a path, namely the reality of this kind of image, and this reflection on the ontology of the movement of successive images constitutes the first part of our research. Extending our thinking on the movement of successive images, we then attempt to resolve questions about time, in other words about duration in successive images. Drawing on the theories of Bergson, Deleuze and Bachelard, our reflections on time, duration in general, duration in successive images, life and so on form the second part, in which we consider the travelling shot and the interval. Artistic examples accompany these philosophical theories to help understand them and their applications. For the third part of the thesis, we produced watercolours to apply and test our theories. Through this artistic practice we also found new ideas, as well as problems for our future research. Combining theory, examples and artistic execution, this study, which crosses several fields, namely philosophy, photography, cinematography and the plastic arts, seeks above all to satisfy our thirst for knowledge about successive images.
12

An Interference Cancellation Scheme for Carrier Frequency Offset Compensation in the Uplink of OFDMA Systems

Wang, Sen-Hung 20 August 2006 (has links)
A successive interference cancellation (SIC) structure is proposed to cancel the multiuser interference (MUI) caused by carrier frequency offsets (CFOs) in the uplink of orthogonal frequency division multiple access (OFDMA) systems. The proposed architecture uses a circular convolution to suppress the impairments caused by CFOs. This work demonstrates that, with two iterations, the SIC outperforms parallel interference cancellation (PIC), while its system complexity is only 1/2K, where K is the number of users in the OFDMA uplink. This study also shows that the system complexity can be significantly reduced if a proper approximation is made.
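For illustration, the following minimal Python sketch shows the generic successive interference cancellation idea: detect the strongest user, rebuild its contribution, subtract it from the received signal, and repeat. It is a toy flat-channel BPSK model, not the thesis's circular-convolution scheme for CFO compensation; the channel gains, noise level and user count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: K users send BPSK symbols over known flat channels; the receiver
# observes their superposition plus noise and detects users one at a time,
# strongest first, cancelling each detected contribution from the residual.
K, N = 3, 64
h = np.array([1.0, 0.5, 0.25])                 # assumed per-user channel gains
bits = rng.integers(0, 2, size=(K, N))
symbols = 2 * bits - 1                         # BPSK mapping
rx = (h[:, None] * symbols).sum(axis=0) + 0.05 * rng.standard_normal(N)

residual = rx.copy()
detected = np.zeros((K, N), dtype=int)
for k in np.argsort(-np.abs(h)):               # strongest user first
    est = np.sign(residual / h[k])             # hard-decision detection of user k
    est[est == 0] = 1
    detected[k] = (est + 1) // 2
    residual -= h[k] * est                     # cancel user k's contribution

print("bit errors per user:", (detected != bits).sum(axis=1))
```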
13

Investigation of 10-bit SAR ADC using flip-flop bypass circuit

Fontaine, Robert Alexander 15 April 2014 (has links)
The Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) is power efficient and operates at moderate resolution. However, its conversion speed is limited by settling time and control-logic constraints. This report investigates a flip-flop bypass technique to reduce the required conversion time. A conventional design and a flip-flop bypass design are simulated in a 0.18 µm CMOS process. The background and design of the control logic, comparator, capacitive array, and switches used to implement the SAR ADCs are presented, with emphasis on optimizing conversion speed.
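As background, the successive approximation itself is a binary search on the DAC code. The behavioral Python sketch below shows one ideal conversion; it models neither the capacitive array nor the flip-flop bypass logic studied in the report.

```python
def sar_adc_convert(vin, vref, n_bits=10):
    """Behavioral model of one ideal SAR conversion (binary search on the DAC code)."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)               # tentatively set the current bit
        vdac = vref * trial / (1 << n_bits)     # ideal DAC output for that code
        if vin >= vdac:                         # comparator decision
            code = trial                        # keep the bit, otherwise clear it
    return code

# Example: converting 0.7 V with a 1.0 V reference gives code 716 (~0.7 * 1024).
print(sar_adc_convert(0.7, 1.0))
```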
14

An Energy Efficient Asynchronous Time-Domain Comparator

Gao, Yang 02 October 2013 (has links)
In energy-limited applications, such as wearable battery-powered systems and implantable circuits for biological applications, ultra-low-power analog-to-digital converters (ADCs) are essential for sustaining long operating times. As a fundamental building block of an ADC, the comparator must fit within a tight power budget, so developing low-power comparator design techniques is becoming increasingly important. As an alternative to the conventional voltage-mode comparator, this thesis proposes an energy-efficient time-domain comparator, which uses digital circuits to process analog signals by representing them as timing information. The proposed time-domain comparator has three main features: comparison on both clock edges (rising and falling), asynchronous comparison, and 2-bit/step comparison. With these features, the power consumption of the comparator can be effectively reduced. For verification, the proposed time-domain comparator is fabricated in IBM 0.18 µm CMOS technology and compared with two conventional time-domain comparators operating at a 100 kS/s sampling rate and 8-bit resolution. The proposed comparator consumes 50 nW, much lower than the 84 nW and 285 nW of the two conventional designs.
15

Improved Subtractive Interference Cancellation for DS-CDMA

Mao, Zhiyong 31 March 2004 (has links)
No description available.
16

Méthodes d'accès basées sur le codage réseau couche physique / Access methods based on physical layer network coding

Bui, Huyen Chi 28 November 2012 (has links)
In the domain of satellite networks, the emergence of low-cost interactive terminals motivates the development and implementation of multiple-access protocols able to support different user profiles. In particular, the European Space Agency (ESA) and the German Aerospace Center (DLR) have recently proposed random access protocols such as Contention Resolution Diversity Slotted ALOHA (CRDSA) and Irregular Repetition Slotted ALOHA (IRSA). These methods are based on physical-layer network coding and successive interference cancellation, and aim to resolve the collision problem on a Slotted ALOHA return channel. This thesis aims to improve existing random access methods. We introduce Multi-Slot Coded Aloha (MuSCA) as a new generalization of CRDSA. Instead of transmitting copies of the same packet, the transmitter sends several parts of a codeword of an error-correcting code; each part is preceded by a header that allows the other parts of the codeword to be located. At the receiver, all parts transmitted by the same user, including those interfered with by other signals, take part in the decoding. The decoded signal is then subtracted from the total signal. The overall interference is thus reduced, and the remaining signals are more likely to be decoded. Several performance analysis methods, based on theoretical tools (capacity computation, density evolution) and on simulations, are proposed. The results show a significant gain in overall throughput compared with existing access methods. This gain can be increased further by varying the rate at which codewords are split. By modifying some of these concepts, we also propose an application of physical-layer network coding based on superposition modulation for deterministic access on the return channel of satellite communications. A throughput improvement is again obtained compared with more conventional time-division multiplexing strategies.
17

Accelerating Successive Approximation Algorithm Via Action Elimination

Jaber, Nasser M. A. Jr. 20 January 2009 (has links)
This research is an effort to improve the performance of the successive approximation algorithm, with the primary aim of solving finite-state, finite-action, infinite-horizon, stationary, discrete, discounted Markov Decision Processes (MDPs). Successive approximation is a simple and commonly used method for solving MDPs, but it often appears intractable for large-scale MDPs because of its computational complexity. Action elimination, one of the techniques used to accelerate the solution of MDPs, reduces the problem size by identifying and eliminating sub-optimal actions; in some cases successive approximation is terminated when all actions but one per state have been eliminated. Bounds on the value functions are the key element in action elimination. New terms (action gain, action relative gain and action cumulative relative gain) are introduced to construct tighter bounds on the value functions and to propose an improved action elimination algorithm. When the span semi-norm is used, we show numerically that the actual convergence of successive approximation is faster than the known theoretical rate. The absence of easy-to-compute bounds on the actual convergence rate motivated this research to try a heuristic action elimination algorithm, which uses an estimated convergence rate in the span semi-norm to speed up action elimination; it demonstrated exceptional performance in terms of solution optimality and savings in computational time. Certain types of structured Markov processes are known to have monotone optimal policies, and two special action elimination algorithms are proposed to accelerate successive approximation for these MDPs. The first algorithm partitions the state space and prioritizes the updating of iterate values in a way that maximizes the temporary elimination of sub-optimal actions based on policy monotonicity. The second algorithm is an improved version that adds permanent action elimination. The performance of the proposed algorithms is assessed and compared with that of other algorithms; they demonstrated outstanding performance in terms of the number of iterations and the computational time needed to converge.
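To make the basic idea concrete, here is a minimal Python sketch of successive approximation (value iteration) with a classic MacQueen-style action-elimination test built from span bounds. It is a generic illustration only; the thesis's tighter bounds, heuristic elimination and monotone-policy variants are not reproduced here.

```python
import numpy as np

def value_iteration_with_action_elimination(P, R, gamma, tol=1e-6, max_iter=10_000):
    """Successive approximation with a MacQueen-style action-elimination test.

    P : list of (S x S) transition matrices, one per action
    R : (S x A) expected immediate rewards
    gamma : discount factor in (0, 1)
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    active = np.ones((n_states, n_actions), dtype=bool)   # actions still in play
    c = gamma / (1.0 - gamma)
    for _ in range(max_iter):
        Q = np.full((n_states, n_actions), -np.inf)
        for a in range(n_actions):
            Q[:, a] = np.where(active[:, a], R[:, a] + gamma * P[a] @ V, -np.inf)
        V_new = Q.max(axis=1)
        lo, hi = (V_new - V).min(), (V_new - V).max()
        # Eliminate (s, a) if even an optimistic bound on Q*(s, a) cannot reach
        # a pessimistic bound on V*(s).
        active &= (Q + c * hi) >= (V_new + c * lo)[:, None]
        V = V_new
        if hi - lo < tol * (1.0 - gamma) / gamma:          # span stopping rule
            break
    return V, Q.argmax(axis=1), active
```

Actions whose optimistic bound cannot reach the pessimistic bound on the optimal value are provably sub-optimal and never need to be evaluated again, which is what shrinks the per-iteration cost.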
18

Advanced Image Processing Using Histogram Equalization and Android Application Implementation

Gaddam, Purna Chandra Srinivas Kumar, Sunkara, Prathik January 2016 (has links)
Nowadays, the conditions under which an image is taken can lead to near-zero visibility for the human eye, usually because of a lack of clarity: atmospheric effects such as haze, fog and other daylight conditions degrade the image. Useful information captured under such conditions should therefore be enhanced so that objects and other details can be recognized. Many image processing algorithms have been developed to deal with such issues caused by low light or by haze affecting the imaging device, and they also provide non-linear contrast enhancement to some extent. We take existing algorithms, namely the Successive Mean Quantization Transform (SMQT), the V transform and histogram equalization, to improve the visual quality of digital pictures of large-range scenes with irregular lighting conditions. These algorithms were applied in two different ways and tested on different images affected by low light and colour change, and succeeded in producing enhanced images; they help with various enhancements, such as colour and contrast, and give accurate results on low-light images. Histogram equalization is implemented by interpreting the image histogram as a probability density function: the cumulative distribution function is applied to obtain accumulated histogram values, and the pixel values are then remapped according to their probability and spread over the histogram. From these algorithms we chose histogram equalization; taking MATLAB code as a reference, we adapted it into an API (Application Program Interface) implementation in Java for Android and confirmed that the application works correctly with reduced execution time.
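The histogram equalization step described above can be summarized in a few lines; this is a plain NumPy sketch for an 8-bit grayscale image, not the thesis's MATLAB or Java/Android implementation.

```python
import numpy as np

def histogram_equalize(img):
    """Equalize an 8-bit grayscale image given as a uint8 NumPy array."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    pdf = hist / hist.sum()                  # histogram as a probability density
    cdf = np.cumsum(pdf)                     # cumulative distribution function
    mapping = np.round(255 * cdf).astype(np.uint8)
    return mapping[img]                      # remap every pixel through the CDF
```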
19

All Digital, Background Calibration for Time-Interleaved and Successive Approximation Register Analog-to-Digital Converters

David, Christopher Leonidas 27 April 2010 (has links)
The growth of digital systems underscores the need to convert analog information to the digital domain at high speed and with great accuracy. Analog-to-Digital Converter (ADC) calibration is often a limiting factor, requiring longer calibration times to achieve higher accuracy. The goal of this dissertation is to perform fully digital background calibration of A/D converters using an arbitrary input signal. The work presented here adapts the cyclic "Split-ADC" calibration method to the time-interleaved (TI) and successive approximation register (SAR) architectures. The TI architecture has three types of linear mismatch error: offset, gain and aperture time delay. By correcting all three mismatch errors in the digital domain, each converter can operate at the fastest speed allowed by the process technology. The total number of correction parameters required for calibration depends on the interleaving ratio, M. To adapt the "Split-ADC" method to a TI system, 2M+1 half-sized converters are required to estimate 3(2M+1) correction parameters. This thesis presents a 4:1 "Split-TI" converter that achieves full convergence in fewer than 400,000 samples. The SAR architecture employs a binary-weighted capacitor array to convert analog inputs into digital output codes. Mismatch in the capacitor weights results in non-linear distortion error. By adding redundant bits and dividing the array into individual unit capacitors, the "Split-SAR" method can estimate the mismatch and correct the digital output code. The results from this work show a reduction in the non-linear distortion, with convergence in fewer than 750,000 samples.
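For context, the digital correction that such a calibration converges to is simple to apply. The sketch below shows per-channel offset and gain correction for a time-interleaved sample stream; aperture-delay correction and the Split-ADC parameter estimation itself are omitted, and all names and values are illustrative.

```python
import numpy as np

def correct_ti_samples(samples, offsets, gains):
    """Apply per-channel offset and gain corrections to an interleaved ADC stream.

    samples : 1-D array of raw interleaved samples
    offsets, gains : one estimated correction pair per sub-ADC channel
    """
    out = np.asarray(samples, dtype=float).copy()
    m = len(offsets)                                   # interleaving ratio
    for ch in range(m):
        out[ch::m] = (out[ch::m] - offsets[ch]) * gains[ch]
    return out

# Illustrative use with a 4:1 time-interleaved converter
raw = np.random.randn(4096)
corrected = correct_ti_samples(raw, offsets=[0.01, -0.02, 0.00, 0.03],
                               gains=[1.00, 1.01, 0.99, 1.02])
```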
20

Successiv Kalkylering: Successivprincipen / Successive Calculation: The Successive Principle

Lövgren, Daniel, Ali Abdi, Mahamed January 2010 (has links)
In this technical report the authors seek to highlight an alternative way of estimating projects in industry. The method is called successive calculation, or the successive principle. Far too many projects are complex, and their estimates often turn out to be wrong. This method does not simplify the complexity of a project; instead it breaks down the uncertainty factors so that the project can then be viewed with neutral eyes. In this way the client is able to take a position on obstacles that could jeopardize the entire project, and can then choose either to proceed fully aware of any uncertainties or to refrain from carrying the project out. If the project goes ahead, action plans can be drawn up at a very early stage and costly mistakes prevented. The report is built on an extensive literature review, which lets the reader absorb the theoretical background of successive calculation. The authors then give some concrete examples based on real situations. In chapter five the authors interview Sweden's leading expert on successive calculation as well as staff at Ringhals. Finally, the authors present their conclusions and the experience they have gained from carrying out this degree project.
