  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

New Nucleic Acid Probes for Measuring the Enzymatic Activities of DNA Damage Repair with a FRET Assay

Chollat-Namy, Alexia 06 October 2006 (has links) (PDF)
The classical methods available for measuring the enzymatic repair of DNA lesions by DNA N-glycosylases are long and laborious to carry out (gel electrophoresis coupled with radioactive isotope labelling, or high-performance liquid chromatography). In this work we developed a new method for the easy and accurate quantification of repair activities, based on detection using the physical principle of FRET (fluorescence resonance energy transfer). To this end, an original DNA substrate was designed: a self-complementary hairpin structure containing specific lesions in its double-stranded stem, with both ends labelled with chromophores. Excision of the lesion by DNA N-glycosylases leads to separation of the complementary strands, reducing fluorescence quenching; excision is therefore detected and quantified through the increase in the fluorophore's emission intensity. After establishing the linearity of the assay's response, we used this experimental approach to obtain the kinetic parameters characteristic of the repair enzymes. The validity of these parameters was checked against data obtained by polyacrylamide gel electrophoresis (PAGE). We investigated possible applications of the assay as a screening tool for detecting repair activity or enzymatic inhibition, both with purified enzymes and with cell extracts. Finally, a project to miniaturize the read-out format in a lab-on-a-chip microsystem was carried out. Together, the results demonstrate the relevance of our homogeneous-phase analysis method, with a view to extending it to high-throughput parallel analysis for applications in fundamental, biomedical, and pharmaceutical research.
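The kinetic analysis described above can be illustrated with a short sketch. Everything below is hypothetical, not code from the thesis: it assumes initial excision rates are read off the linear rise of the fluorescence signal, and that the Michaelis-Menten parameters are then fitted with the classical Lineweaver-Burk double-reciprocal line.

```python
# Illustrative sketch only -- not code from the thesis. It assumes initial
# excision rates v0 are taken as the slope of the early, linear rise of the
# fluorescence signal, and fits Michaelis-Menten parameters (Km, Vmax) with
# the classical Lineweaver-Burk double-reciprocal line.

def linear_slope_intercept(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x
    )
    return slope, my - slope * mx

def initial_rate(times, fluorescence):
    """v0 = slope of the fluorescence trace while the response is linear."""
    return linear_slope_intercept(times, fluorescence)[0]

def michaelis_menten_fit(substrate_conc, rates):
    """1/v = (Km/Vmax) * (1/[S]) + 1/Vmax  =>  recover (Km, Vmax)."""
    inv_s = [1.0 / s for s in substrate_conc]
    inv_v = [1.0 / v for v in rates]
    slope, intercept = linear_slope_intercept(inv_s, inv_v)
    vmax = 1.0 / intercept
    return slope * vmax, vmax   # (Km, Vmax)
```

In practice a weighted or direct nonlinear fit is preferred over the double-reciprocal line, which amplifies error at low substrate concentrations; the sketch only shows the shape of the calculation.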
252

A Study of the Distribution, on a Large-Scale System, of Numerical Computations on Compressed Sparse Matrices

Hamdi-Larbi, Olfa 27 March 2010 (has links) (PDF)
Many scientific applications perform computations on large sparse matrices. For efficiency in both time and space, these matrices are stored in suitable compressed formats. Moreover, most sparse scientific computations reduce to the two fundamental problems of linear algebra, namely solving linear systems and computing eigenvalues/eigenvectors of matrices. This thesis studies how the computations in iterative methods for solving linear systems and computing eigenpairs can be distributed over a large-scale distributed system (SDGE), in the sparse case. The sparse matrix-vector product (SMVP) is the core kernel of most of these methods, so our problem in fact reduces to studying the distribution of the SMVP over an SDGE. Three steps are generally needed to accomplish this task: (i) pre-processing, (ii) processing, and (iii) post-processing. In the first step, we optimize four versions of the SMVP algorithm, corresponding to four specific compression formats of the matrix, and study their performance on sequential target machines. We further focus on load balancing for the distribution of the processed data (in effect, the rows of the sparse matrix) over the SDGE. The processing step validates this study through a series of experiments on a platform managed by the XtremWeb-CH middleware. The post-processing step analyses and interprets the experimental results of the previous step in order to draw the appropriate conclusions.
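The abstract does not list its four compression formats, so the sketch below uses CSR (Compressed Sparse Row), a common representative, as a hypothetical stand-in. It shows the sequential SMVP kernel; distributing it over a large-scale system amounts to assigning blocks of rows (slices of `row_ptr`) to nodes, which is where the load-balancing question studied in the thesis arises.

```python
# Hypothetical illustration: CSR (Compressed Sparse Row) stands in for the
# thesis's unnamed compression formats. Only stored nonzeros are touched,
# which is the point of compressed storage.

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR form."""
    y = []
    for i in range(len(row_ptr) - 1):           # one pass per matrix row
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]    # nonzero times matching x entry
        y.append(acc)
    return y
```

Because each output row depends only on its own slice of `values` and on `x`, row blocks can be computed independently, but rows with many nonzeros take longer, which is why balancing by nonzero count rather than row count matters.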
253

Recursive Blocked Algorithms, Data Structures, and High-Performance Software for Solving Linear Systems and Matrix Equations

Jonsson, Isak January 2003 (has links)
This thesis deals with the development of efficient and reliable algorithms and library software for factorizing matrices and solving matrix equations on high-performance computer systems. The architectures of today's computers consist of multiple processors, each with multiple functional units. The memory systems are hierarchical, with several levels of different speed and size. The practical peak performance of a system is reached only by considering all of these characteristics. One portable method for achieving good system utilization is to express a linear algebra problem in terms of level 3 BLAS (Basic Linear Algebra Subprograms) operations. The most important operation is GEMM (GEneral Matrix Multiply), which typically defines the practical peak performance of a computer system. Efficient GEMM implementations are available for almost any platform, so an algorithm built on this operation is highly portable.

The dissertation focuses on how recursion can be applied to solve linear algebra problems. Recursive linear algebra algorithms can automatically match the size of subproblems to the different levels of the memory hierarchy, leading to much better utilization of the memory system. Furthermore, recursive algorithms expose level 3 BLAS operations and reveal task parallelism. The first paper handles the Cholesky factorization for matrices stored in packed format. Our algorithm uses a recursive packed matrix data layout that enables the use of high-performance matrix-matrix multiplication, in contrast to the standard packed format. The resulting library routine requires half the memory of full storage, yet its performance is better than that of full-storage routines.

Papers two and three introduce recursive blocked algorithms for solving triangular Sylvester-type matrix equations. For these problems, recursion together with superscalar kernels produces new algorithms that give 10-fold speedups compared to existing routines in the SLICOT and LAPACK libraries. We show that our recursive algorithms also have a significant impact on the execution time of solving unreduced problems and when used in condition estimation. By recursively splitting several problem dimensions simultaneously, parallel algorithms for shared memory systems are obtained. The fourth paper introduces a library, RECSY, consisting of a set of routines implemented in Fortran 90 using the ideas presented in papers two and three. Using performance monitoring tools, the last paper evaluates the possible gain from different matrix blocking layouts and the impact of superscalar kernels in the RECSY library.
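The recursive blocking idea can be sketched for the Cholesky factorization. This is only an illustration of the 2x2 splitting, not the thesis's packed-format algorithm: plain numpy full storage stands in for both the packed layout and the tuned kernels, and the split exposes exactly the level 3 operations (a triangular solve and a GEMM-like Schur update) that the abstract describes.

```python
# Illustration of recursive blocking for Cholesky (not the thesis's
# packed-format algorithm): factor the leading block, do a triangular
# solve, apply a GEMM-rich Schur complement update, then recurse on the
# trailing block.
import numpy as np

def recursive_cholesky(A, base=2):
    """Return lower-triangular L with L @ L.T == A (A symmetric positive definite)."""
    n = A.shape[0]
    if n <= base:
        return np.linalg.cholesky(A)               # small dense base case
    k = n // 2
    L11 = recursive_cholesky(A[:k, :k], base)      # factor leading block
    L21 = np.linalg.solve(L11, A[k:, :k].T).T      # triangular solve: L21 @ L11.T = A21
    S = A[k:, k:] - L21 @ L21.T                    # Schur complement (GEMM-rich update)
    L22 = recursive_cholesky(S, base)              # recurse on trailing block
    L = np.zeros_like(A)
    L[:k, :k], L[k:, :k], L[k:, k:] = L11, L21, L22
    return L
```

Because the same split is applied at every level, subproblem sizes shrink geometrically and eventually fit each cache level, which is the memory-hierarchy matching the dissertation exploits.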
254

INFLOW : Structured Print Job Delivery / INFLOW : strukturerade jobbleverans

Buckwalter, Claes January 2003 (has links)
More and more print jobs are delivered from customer to printer digitally over the Internet. Although Internet-based job delivery can be highly efficient, companies in the graphic arts and printing industry often suffer unnecessary costs related to this type of inflow of print jobs to their production workflows. One reason is the lack of a well-defined infrastructure for delivering print jobs digitally over the Internet.

This thesis presents INFLOW, a prototype print job delivery system for the graphic arts and printing industry. INFLOW is a web-based job delivery system hosted on an Internet-connected server by the organization receiving the print jobs. The focus has been on creating a system that is easy to use, highly customizable, secure, and easy to integrate with existing and future systems from third-party vendors. INFLOW has been implemented using open standards such as XML and JDF (Job Definition Format).

The requirements for ease of use, high customizability, and security are met by choosing a web-based architecture. The client side is implemented using standard web technologies such as HTML, CSS, and JavaScript, while the server side is based on J2EE, Java Servlets, and Java Server Pages (JSP). Using a web browser as the job delivery client provides a highly customizable user interface and built-in support for encrypted file transfers using HTTPS (HTTP over SSL).

Process automation and easy integration with other print production systems are facilitated by CIP4's JDF (Job Definition Format). INFLOW also supports "hot folder workflows" for integration with older preflight software and other hot-folder-based software common in prepress workflows.
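The "hot folder workflow" mentioned above can be sketched generically. This is an illustration of the pattern only, not INFLOW's Java implementation: files dropped into an inbox directory are claimed by moving them to a working directory, then handed to a processing callback.

```python
# Generic hot-folder sketch (illustrative; not INFLOW's code). A real
# deployment would also guard against half-written files, e.g. by requiring
# producers to write to a temporary name and rename when the file is complete.
import os
import shutil
import time

def poll_hot_folder(inbox, workdir, handler, interval=1.0, max_cycles=None):
    """Poll `inbox`; move each new file to `workdir` and call handler(path)."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for name in sorted(os.listdir(inbox)):
            claimed = os.path.join(workdir, name)
            shutil.move(os.path.join(inbox, name), claimed)  # claim the job
            handler(claimed)                                 # process it
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)
```

Moving the file before processing is the usual hot-folder convention: it makes the claim atomic on a single filesystem, so two pollers cannot process the same job twice.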
255

The functionality of a district municipality as a transport authority : the case of the West Rand, Gauteng Province / Herina Hamer

Hamer, Herina January 2006 (has links)
Thesis (M. Development and Management)--North-West University, Potchefstroom Campus, 2006.
257

Fresh and Structured with Attitude: A Degree Project on Newspaper Layout (Fräsch och strukturerad med attityd: Ett examensprojekt om tidningslayout)

Abrahamsson, Emilia, Renhult, Elena January 2012 (has links)
Presentation of the problem: The goal of this graduation project is to produce a new visual concept for the student newspaper Campus, expressed in the form of page templates. The page templates carry a visual concept conveyed through graphical elements. The client's wish was for a student newspaper that is audience-appropriate, attractive, inviting, reliable, modern, and functional to work with.

Theory: To implement these wishes and produce a consistent visual concept, we started from the theory of graphic design. Since this theory takes many of its principles from aesthetic theory, we have also used aesthetics to provide a deeper theoretical perspective. The graphic design principles we used concern how to create unity in a layout using basic elements such as text, images, and white space; we also considered principles of alignment, form, color, contrast, typography, and placement. From aesthetics we applied color repetition, color contrast, balance, visual harmony, and depth.

Method: The empirical studies consisted of a qualitative content analysis of student newspapers, followed by a focus-group study with respondents from the Campus target group. The purpose of the content analysis was to get a general picture of the design language used on the market today; combined with theory, it helped us create a visual expression that stands apart from the general form. This resulted in four dummies, which the focus group then discussed.

Results: The first empirical study showed how student newspapers are designed today. With the support of theory, the working principles were sorted out and applied in the four dummies. The focus-group analysis yielded opinions about the dummies, showing what the respondents wanted in the page templates. Together with the client's wishes and the analysis from the empirical studies, a new concept was developed: one perceived as clean and structured, with attitude.
258

CMOS Readout Electronics for Microbolometer-Type Infrared Detector Arrays

Toprak, Alperen 01 February 2009 (has links) (PDF)
This thesis presents the development of CMOS readout electronics for microbolometer-type infrared detector arrays. A low-power output buffering architecture and a new bias-correction digital-to-analog converter (DAC) structure for resistive microbolometer readouts are developed, and a 384x288 resistive microbolometer FPA readout for 35 µm pixel pitch is designed and fabricated in a standard 0.6 µm CMOS process. A 4-layer PCB is also prepared in order to form an imaging system together with the FPA after detector fabrication. The low-power output buffering architecture employs a new buffering scheme that reduces the capacitive load and hence the power dissipation of the readout channels. Furthermore, a special type of operational amplifier with digitally controllable output current capability is designed in order to use power more efficiently. With the combination of these two methods, the power dissipation of the output buffering structure of a 384x288 microbolometer FPA with 35 µm pixel pitch, operating at 50 fps with two output channels, can be decreased to 8.96% of its initial value. The new bias-correction DAC structure is designed to overcome the power dissipation and noise problems of previous designs at METU. The structure is composed of two resistive-ladder DAC stages, each capable of providing multiple outputs. This feature of the resistive ladders reduces the overall area and power dissipation of the structure and enables the implementation of a dedicated DAC for each readout channel. As a result, the sampling operation required in the previous designs is eliminated. Eliminating sampling prevents the noise from concentrating in the baseband and therefore allows most of it to be filtered out by integration.

The fabricated chip occupies an area of 17.84 mm x 16.23 mm and needs 32 pads for normal operation. The readout employs the low-power output buffering architecture and the new bias-correction DAC structure; it therefore has significantly lower power dissipation than the previous designs at METU. A 4-layer imaging PCB is also designed for the FPA, and initial tests are performed with it. The test results verify proper operation of the readout. When operating at 50 fps, the rms output noise of the imaging system and the power dissipation of the readout are measured as 1.76 mV and 236.9 mW, respectively.
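The quoted array size and frame rate fix the readout timing budget. The arithmetic below is a back-of-the-envelope check of my own, not a calculation from the thesis: a 384x288 array at 50 fps read out through two output channels determines the pixel rate each channel must sustain and the time available per row.

```python
# Back-of-the-envelope readout timing (my arithmetic, not the thesis's),
# using the array size, frame rate, and channel count quoted in the abstract.

def readout_timing(rows, cols, fps, channels):
    """Return (pixel rate per channel in px/s, row time in seconds)."""
    pixels_per_frame = rows * cols
    rate_per_channel = pixels_per_frame * fps / channels
    row_time = 1.0 / (fps * rows)          # time budget to read out one row
    return rate_per_channel, row_time
```

For the 384x288 array at 50 fps with two channels this gives about 2.76 Mpixel/s per channel and roughly 69 µs per row, which is the kind of load the buffered output stage must drive.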
259

Advanced Coding Techniques for Fiber-Optic Communications and Quantum Key Distribution

Zhang, Yequn January 2015 (has links)
Coding is an essential technology for efficient fiber-optic communications and secure quantum communications. In particular, low-density parity-check (LDPC) coding is favoured due to its strong error correction capability and high-throughput implementation feasibility. In fiber-optic communications, it has been realized that advanced high-order modulation formats and soft-decision forward error correction (FEC) such as LDPC codes are the key technologies for the next-generation high-speed optical communications. Therefore, energy-efficient LDPC coding in combination with advanced modulation formats is an important topic that needs to be studied for fiber-optic communications. In secure quantum communications, large-alphabet quantum key distribution (QKD) is becoming attractive recently due to its potential in improving the efficiency of key exchange. To recover the carried information bits, efficient information reconciliation is desirable, for which the use of LDPC coding is essential. In this dissertation, we first explore different efficient LDPC coding schemes for optical transmission of polarization-division multiplexed quadrature-amplitude modulation (QAM) signals. We show that high energy efficiency can be achieved without incurring extra overhead and complexity. We then study the transmission performance of LDPC-coded turbo equalization for QAM signals in a realistic fiber link as well as that of pragmatic turbo equalizers. Further, leveraging the polarization freedom of light, we expand the signal constellation into a four-dimensional (4D) space and evaluate the performance of LDPC-coded 4D signals in terms of transmission reach. Lastly, we study the security of a proposed weak-coherent-state large-alphabet QKD protocol and investigate the information reconciliation efficiency based on LDPC coding.
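The parity-check structure underlying LDPC coding can be shown with a toy example. The matrix below is illustrative only, not one of the dissertation's code designs: an LDPC code is defined by a sparse binary matrix H, a word c is a codeword iff H·c = 0 over GF(2), and decoding drives this syndrome to zero.

```python
# Toy parity-check example (illustrative; not one of the dissertation's LDPC
# designs). Each row of H is one parity check over a few bit positions; real
# LDPC matrices are much larger and very sparse.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, word):
    """s = H @ word over GF(2); an all-zero syndrome means a valid codeword."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]
```

Each nonzero syndrome entry flags an unsatisfied parity check; belief-propagation decoders pass messages between bits and checks until every check is satisfied, which is also the mechanism behind the information reconciliation step in QKD mentioned above.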
260

Design and Realization of Integrated Photonic Sources on InP for High-Bit-Rate Telecom Applications at 1.55 µm

Carrara, David 23 May 2012 (has links) (PDF)
Advanced modulation formats, which encode information on the phase, the polarization, or several amplitude levels of light, are attracting growing interest, because they achieve better spectral efficiency and therefore higher bit rates. These characteristics are highly sought after in telecommunications to meet the constant demand for increased capacity in fiber-optic transmission. Most of this work concerns the generation of such signals in monolithic photonic sources on InP, using a new concept of switching between preset optical phases with electro-absorption modulators. A comparison of our integrated technology with the current technology for generating advanced modulation formats shows new possibilities for reducing size, lowering power consumption, and scaling the modulation speed up to 56 GBaud. After validating, through simulations, specific transmitter architectures for generating advanced modulation formats, we fabricated the photonic integrated circuits in the clean room. Static characterization confirms the operation of all the integrated functions of the circuits and underlines the effectiveness of the technology platform. For a first functional demonstration, we chose a BPSK transmitter capable of generating phase modulation at 12.4 Gb/s. This result represents the smallest integrated BPSK source demonstrated to date. Another circuit, capable of generating more complex modulation formats, has also been characterized.
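The BPSK format used in the first demonstration encodes one bit per symbol in two optical phases, 0 and π. The sketch below is conceptual, not the thesis's InP transmitter: it only shows the mapping between bits and phases and the hard decision a coherent receiver makes on the in-phase component.

```python
# Minimal BPSK sketch (conceptual; not the thesis's InP transmitter).
import cmath

def bpsk_modulate(bits):
    """Map bit b to the unit-circle symbol exp(j * pi * b), i.e. +1 or -1."""
    return [cmath.exp(1j * cmath.pi * b) for b in bits]

def bpsk_demodulate(symbols):
    """Hard decision on the in-phase (real) component."""
    return [0 if s.real >= 0.0 else 1 for s in symbols]
```

The thesis's novelty is generating this phase switching monolithically with electro-absorption modulators selecting between preset optical phases, rather than with the bulkier external modulators of current technology.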
