21

Decoupled (SSA-based) register allocators : from theory to practice, coping with just-in-time compilation and embedded processors constraints

Colombet, Quentin 07 December 2012 (has links)
My thesis deals with register allocation. During this phase, the compiler must assign the variables of the source program, arbitrarily many, to the physical registers of the processor, limited in number to k. Recent works, notably the theses of F. Bouchez and S. Hack, have shown that this phase can be split into two fully decoupled steps: spilling (storing variables to memory to free registers) followed by register assignment proper. Those works demonstrated the feasibility of this decoupling within a theoretical framework and under some simplifying assumptions; in particular, it suffices to ensure that, after spilling, the number of variables simultaneously live is at most k. My thesis builds on these works by showing how to apply this kind of approach under real-world constraints: instruction encoding, the ABI (application binary interface), and register banks with aliasing. Different approaches are proposed that either sidestep these problems or take them into account directly in the theoretical model. The assumptions of the models and the proposed solutions are evaluated and validated through a thorough experimental study in the STMicroelectronics compiler. Finally, all this work was carried out with the constraints of modern compilation in mind, namely JIT (just-in-time) compilation, where the compiler's speed and memory consumption are key factors. We strive to offer solutions that satisfy these criteria, or that improve the result as long as a given budget has not been exceeded, exploiting in particular the SSA (static single assignment) form to define tree-scan algorithms that generalize the linear-scan approaches proposed for JIT compilation.
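The decoupling criterion from the abstract can be sketched in a few lines. This is an illustrative sketch only, not code from the thesis: under SSA, spilling only needs to lower Maxlive, the maximum number of simultaneously live variables, to at most k. The function name, the interval representation of live ranges, and the numbers are all assumptions made for the example.

```python
# Illustrative sketch of the decoupled test: after spilling, check that
# Maxlive (the peak number of simultaneously live variables) is <= k,
# the number of physical registers.
def maxlive(live_ranges):
    """live_ranges: list of (start, end) program points, end exclusive."""
    events = []
    for s, e in live_ranges:
        events.append((s, 1))   # variable becomes live
        events.append((e, -1))  # variable dies
    count = peak = 0
    for _, delta in sorted(events):  # a death at point p frees before a birth at p
        count += delta
        peak = max(peak, count)
    return peak

k = 4                                          # assumed register count
ranges = [(0, 5), (1, 3), (2, 6), (2, 4), (5, 8)]  # assumed live ranges
needs_spill = maxlive(ranges) > k              # False: assignment can proceed
```

With Maxlive at most k, register assignment can then be performed without introducing further spills, which is the decoupling property the thesis starts from.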
22

Optimization of sampling quality parameters for Mental Ray

Karlsson, Linus January 2011 (has links)
Photorealistic 3D images are used today across a wide range of industries. Producing this kind of graphics often demands a great deal of computing power. When rendering with engines based on ray-tracing algorithms, aliasing is an inherent problem. The remedy is anti-aliasing, which works to avoid aliasing artifacts such as jagged edges or Moiré effects. Part of the anti-aliasing process is supersampling, which is often very computationally expensive, so optimizing the supersampling parameters is very important: optimization can save a great deal of computing power and thus time. This work presents results from experiments in which participants rated images rendered with different levels of anti-aliasing quality. These results can be used as a reference when optimizing anti-aliasing rendering parameters for rendering with Mental Ray.
23

Image analysis in Machine Vision : Nyquist's sampling theorem in digital photography

Lindström, Mattias January 2017 (has links)
Within Machine Vision it is very important that the camera can detect the details of interest. Aliasing is a problem in all digital photography; it arises when the camera's resolution is too low relative to the details it tries to capture. This work analyzes the camera's limitations and their causes. A simple camera rig used for Machine Vision experiments is redesigned from the ground up for better control and resolution, and a new control system is created for it according to the client's specifications. A test pattern for ISO 12233:2000 is then photographed in this rig. The result is analyzed and compared to the Nyquist sampling theorem as it applies to digital photography. The result shows how the camera's design (registering colors through a filter in front of the image sensor, with algorithms computing the color of each individual pixel) raises the required sampling factor to 3, compared with the factor of 2 of the original theorem on double sampling frequency.
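The factor-of-3 result above can be illustrated with back-of-the-envelope arithmetic. The numbers below are assumptions made for the example, not measurements from the thesis:

```python
# Illustrative arithmetic only: minimum pixels needed across the field of
# view to resolve a line pattern, comparing the classical Nyquist factor
# of 2 with the factor-3 rule reported for Bayer-filter color cameras.
def min_pixels(line_pairs, factor):
    # one line pair = one dark + one bright line = one spatial period
    return line_pairs * factor

lp = 500                      # assumed line pairs across the field of view
mono = min_pixels(lp, 2)      # classical Nyquist: 1000 pixels
bayer = min_pixels(lp, 3)     # Bayer demosaicing rule: 1500 pixels
```

The point of the comparison is that the color filter array effectively coarsens the sampling grid, so a camera needs 50% more pixels than the monochrome Nyquist bound would suggest.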
24

DIGITAL FILTERING OF MULTIPLE ANALOG CHANNELS

Hicks, William T. October 1996 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / The traditional use of active RC-type filters to provide anti-aliasing filters in Pulse Code Modulation (PCM) systems is being replaced by the use of Digital Signal Processing (DSP). This is especially true when performance requirements are stringent and require operation over a wide environmental temperature range. This paper describes the design of a multi-channel digital filtering card that incorporates up to 100 unique digitally implemented cutoff frequencies. Any combination of these frequencies can be independently assigned to any of the input channels.
25

An Analysis of Various Digital Filter Types for Use as Matched Pre-Sample Filters in Data Encoders

Hicks, William T. November 1995 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The need for precise gain and phase matching in multi-channel data sampling systems can result in very strict design requirements for presample or anti-aliasing filters. The traditional use of active RC-type filters is expensive, especially when performance requirements are tight and when operation over a wide environmental temperature range is required. New Digital Signal Processing (DSP) techniques have provided an opportunity for cost reduction and/or performance improvements in these types of applications. This paper summarizes the results of an evaluation of various digital filter types used as matched presample filters in data sampling systems.
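The matching property that motivates replacing analog RC filters with DSP can be sketched in a few lines. This is a toy FIR filter with assumed coefficients, not one of the filters evaluated in the paper: two channels convolved with the same coefficients are gain- and phase-matched by construction, bit for bit.

```python
# Toy direct-form FIR filter: two channels filtered with identical
# coefficients produce identical responses, the matching property that
# component-tolerance-limited analog RC filters can only approximate.
def fir(x, taps):
    out = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]   # y[n] = sum_k h[k] * x[n-k]
        out.append(acc)
    return out

taps = [0.25, 0.5, 0.25]               # assumed low-pass presample filter
ch_a = [0.0, 1.0, 0.0, -1.0, 0.0]      # assumed input samples
ch_b = list(ch_a)                      # same input on a second channel
assert fir(ch_a, taps) == fir(ch_b, taps)  # bit-exact channel matching
```

Digital coefficients do not drift with temperature, which is why the DSP approach holds its matching over the wide environmental range the paper targets.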
26

A Contrario matching of interest points through both geometric and photometric constraints

Noury, Nicolas 13 October 2011 (has links)
The analysis of structure from motion allows one to estimate the shape of 3D objects and the position of the camera from pictures or videos. It usually follows three steps: 1) extracting points of interest; 2) matching points of interest across images using photometric descriptors computed on point neighborhoods; 3) filtering the previous matches so as to retain only those compatible with a given geometric constraint, whose parameters can then be computed. However, in the second step the photometric criterion alone is not enough when several points look alike. The third step is performed by the Ransac robust filtering scheme, which requires setting thresholds, and that can be a delicate task. The starting point of this work is Moisan and Stival's A Contrario Ransac approach, which removes the need for thresholds. Our first contribution is the elaboration of an a contrario model that performs matching with both photometric and geometric criteria, together with robust filtering, in a single step. This method can match scenes containing repeated patterns, which is impossible with the usual approach. Our second contribution extends that result to strong viewpoint changes, improving the ASift method of Morel and Yu. The matches obtained are both more numerous and more densely distributed, in difficult scenes containing repeated patterns seen from very different angles.
27

Implementation and Evaluation of a RF Receiver Architecture Using an Undersampling Track-and-Hold Circuit

Dahlbäck, Magnus January 2003 (has links)
Today's radio frequency receivers for digital wireless communication are getting more and more complex. A single receiver unit should support multiple bands, have a wide bandwidth, be flexible and show good performance. To fulfil these requirements, new receiver architectures have to be developed and used. One possible alternative is the RF undersampling architecture.

This thesis evaluates the RF undersampling architecture, which makes use of an undersampling track-and-hold circuit with very wide bandwidth to perform direct sampling of the RF carrier before the analogue-to-digital converter. The architecture's main advantages and drawbacks are identified and analyzed. Also, techniques and improvements to solve or reduce the main problems of the RF undersampling receiver are proposed.
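Where an undersampled carrier lands after sampling can be computed with the standard frequency-folding relation. The example frequencies below are assumptions for illustration, not the hardware described in the thesis:

```python
# Standard undersampling fold: a carrier at f_carrier sampled at f_sample
# aliases down to f_if = |f_carrier - m * f_sample|, m the nearest
# integer multiple, which is how direct RF sampling yields a usable IF.
def folded_frequency(f_carrier, f_sample):
    m = round(f_carrier / f_sample)
    return abs(f_carrier - m * f_sample)

f_c = 900e6   # assumed 900 MHz RF carrier
f_s = 64e6    # assumed 64 MHz track-and-hold clock
f_if = folded_frequency(f_c, f_s)   # 900 - 14*64 = 4 MHz intermediate frequency
```

The same folding that makes aliasing a hazard elsewhere is exploited deliberately here; the track-and-hold's analog bandwidth, not the clock rate, is what must cover the RF band.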
28

Design of 3D Accelerator for Mobile Platform

Ramachandruni, Radha Krishna January 2006 (has links)
This thesis implements a high-level model of the computationally intensive part of the 3D graphics pipeline. With the increasing popularity of handheld devices and developments in hardware technology, 3D graphics on mobile devices is fast becoming a reality. Graphics processing is inherently complex and computationally demanding. In order to achieve scene realism and the perception of motion, identifying and accelerating bottlenecks is crucial. This thesis is about the OpenGL graphics pipeline in general. Software is built that implements the computationally intensive part of the pipeline: in essence, a rasterization unit that receives triangles with 2D screen coordinates, texture coordinates and color. Triangles go through scan conversion, texturing and a set of other per-fragment operations before being displayed on screen.
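Scan conversion, the core step named above, is commonly done with edge functions. The following is a toy point-sampling sketch, illustrative only and not the thesis implementation:

```python
# Toy scan conversion with edge functions: a pixel is covered when its
# center lies on the inside of all three edges of a CCW-wound triangle.
def edge(ax, ay, bx, by, px, py):
    # signed area term; non-negative when p is left of (or on) edge a->b
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # sample at the pixel center
            if (edge(ax, ay, bx, by, px, py) >= 0 and
                edge(bx, by, cx, cy, px, py) >= 0 and
                edge(cx, cy, ax, ay, px, py) >= 0):
                covered.append((x, y))
    return covered

# assumed right triangle covering the lower-left half of a 4x4 grid
pixels = rasterize([(0, 0), (4, 0), (0, 4)], 4, 4)
```

Texturing and the other per-fragment operations mentioned in the abstract would then run once per covered pixel.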
29

Output space compaction for testing and concurrent checking

Seuring, Markus January 2000 (has links)
The objective of this thesis is to provide new space compaction techniques for testing or concurrent checking of digital circuits. In particular, the work focuses on the design of space compactors that drastically reduce the number of outputs to be tested while achieving minimal loss of testability of the circuits. In the first part, compactors are designed for combinational circuits based on knowledge of the circuit structure. Several algorithms for analyzing circuit structures are introduced and discussed for the first time. The complexity of each design procedure is linear in the number of gates of the circuit, so the procedures are applicable to very large circuits. In the second part, the first such structural approach to output compaction for sequential circuits is introduced; it builds essentially on the first part. For the approach introduced in the third part, it is assumed that the structure of the circuit and the underlying fault model are unknown. The design is based solely on a precomputed test set and the corresponding fault-free test responses. A compactor produced by this method masks none of the faults that are observable at the circuit outputs when testing with the given vectors, i.e., it guarantees zero-aliasing with respect to the precomputed test set.
30

Rendering for Microlithography on GPU Hardware

Iwaniec, Michel January 2008 (has links)
Over the last decades, integrated circuits have changed our everyday lives in a number of ways. Many common devices taken for granted today would not have been possible without this industrial revolution. Central to the manufacturing of integrated circuits is the photomask used to expose the wafers; such photomasks are also used for manufacturing flat-screen displays. Microlithography, the manufacturing technique for such photomasks, requires complex electronic equipment that excels in both speed and fidelity. Building such equipment requires competence in virtually all engineering disciplines, of which the conversion of geometry into pixels is but one. Nevertheless, this single step in the photomask drawing process has a major impact on the throughput and quality of a photomask writer. Current high-end semiconductor writers from Micronic use a cluster of Field-Programmable Gate Array (FPGA) circuits. FPGAs have for many years been able to replace Application-Specific Integrated Circuits due to their flexibility and low initial development cost. For parallel computation, an FPGA can achieve throughput not possible with microprocessors alone. Nevertheless, high-performance FPGAs are expensive devices, and upgrading from one generation to the next often requires a major redesign. During the last decade, the computer games industry has taken the lead in parallel computation with graphics cards for 3D gaming. While essentially designed to render 3D polygons and lacking the flexibility of an FPGA, graphics cards have nevertheless started to rival FPGAs as the main workhorse of many parallel computing applications. This thesis covers an investigation into utilizing graphics cards for the task of rendering geometry into photomask patterns. It describes the different strategies that were tried, the throughput and fidelity achieved with them, and the problems encountered. It also describes the development of a suitable evaluation framework that was critical to the process.
