  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Cálculos neutrônicos, termo-hidráulicos e de segurança de um dispositivo para irradiação de miniplacas (DIM) de elementos combustíveis tipo dispersão / Neutronic, thermal-hydraulic and safety analysis calculations for a miniplate irradiation device (MID) of dispersion fuel elements

DOMINGOS, DOUGLAS B. 09 October 2014 (has links)
Made available in DSpace on 2014-10-09T12:27:28Z (GMT). No. of bitstreams: 0 / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / In this work, neutronic, thermal-hydraulic and safety calculations were developed to evaluate the operational safety of an irradiation device to be placed in the core of the IEA-R1 reactor at IPEN-CNEN/SP. This irradiation device is used to hold miniplates of U3O8-Al and U3Si2-Al dispersion-type fuel, with 19.75 wt% 235U and densities of up to 3.2 gU/cm3 and 4.8 gU/cm3, respectively. These miniplates will be irradiated to burnups above 50% of the 235U, in order to qualify this type of dispersion fuel for use in the Brazilian Multipurpose Reactor (RMB), currently under design. The neutronic calculations were performed with the 2DB and CITATION computer codes. The FLOW code was used to determine the coolant flow in the irradiation device, allowing the maximum temperatures reached in the fuel miniplates to be calculated with the MTRCR-IEA-R1 code. A Loss of Coolant Accident (LOCA) was analysed with the LOSS and TEMPLOCA codes, giving the temperatures in the fuel miniplates after the reactor pool is drained. The calculations showed that the irradiation should proceed without adverse consequences for the IEA-R1 reactor core. / Dissertação (Mestrado) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP / FAPESP:08/55686-6
252

Fusão de imagens médicas para aplicação de sistemas de planejamento de tratamento em radioterapia / Medical image fusion for application in radiotherapy treatment planning systems

ROS, RENATO A. 09 October 2014 (has links)
Made available in DSpace on 2014-10-09T12:51:40Z (GMT). No. of bitstreams: 0 / A medical image fusion program was developed for use in the CAT3D radiotherapy and MNPS radiosurgery treatment planning systems. A mutual information maximization methodology was used to fuse images of different modalities by measuring the statistical dependence between pairs of voxels. Alignment by reference points provides an initial approximation for the nonlinear optimization process, carried out by the downhill simplex method, to generate the joint histogram. The coordinate transformation function uses trilinear interpolation and searches for the global maximum in a 6-dimensional space, with 3 degrees of freedom for translation and 3 for rotation, using a rigid body model. The method was evaluated with CT, MR and PET images from the Vanderbilt University database, verifying its accuracy by comparing the transformation coordinates of each image fusion with reference values. The median image alignment error was 1.6 mm for CT-MR fusion and 3.5 mm for PET-MR, with the accuracy of the reference standards estimated at 0.4 mm for CT-MR and 1.7 mm for PET-MR. The maximum errors were 5.3 mm for CT-MR and 7.4 mm for PET-MR, and 99.1% of the errors were smaller than the image voxel size. The mean processing time for an image fusion was 24 s. The program was successfully completed and incorporated into the routine of 59 radiotherapy services, of which 42 are in Brazil and 17 elsewhere in Latin America. The method has no limitations regarding differing image resolutions, pixel sizes or slice thicknesses. Moreover, alignment can be performed with transverse, coronal or sagittal images. / Tese (Doutoramento) / IPEN/T / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
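The fusion method above maximizes mutual information between modalities, estimated from the joint histogram of paired voxel intensities. A minimal sketch of that similarity measure (pure Python, hypothetical function name, a discrete histogram rather than the thesis's full implementation):

```python
from collections import Counter
from math import log2

def mutual_information(a, b):
    """Mutual information (in bits) between two aligned intensity sequences.

    `a` and `b` are paired voxel intensities from the two images; the joint
    histogram over (a_i, b_i) pairs and the two marginal histograms give the
    probabilities in the standard sum p(x,y) * log2(p(x,y) / (p(x)p(y))).
    """
    n = len(a)
    joint = Counter(zip(a, b))   # joint histogram of intensity pairs
    pa, pb = Counter(a), Counter(b)  # marginal histograms
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * log2(pxy / ((pa[x] / n) * (pb[y] / n)))
    return mi
```

In the full registration loop, the downhill simplex optimizer would perturb the six rigid-body parameters and re-evaluate this score on the trilinearly resampled volume, keeping the transformation that maximizes it.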
253

Coding techniques for insertion/deletion error correction

Cheng, Ling 04 June 2012 (has links)
D. Ing. / In Information Theory, synchronization errors can be modelled as the insertion and deletion of symbols. Error correcting codes are proposed in this research as a method of recovering from a single insertion or deletion error; adjacent multiple deletion errors; or multiple insertion, deletion and substitution errors. A moment balancing template is a single insertion or deletion correcting construction based on number theoretic codes. The implementation of this previously published technique is extended to spectral shaping codes, (d, k) constrained codes and run-length limited sequences. Three new templates are developed. The first one is an adaptation to DC-free codes, and the second one is an adaptation to spectral null codes. The third one is a generalized moment balancing template for both (d, k) constrained codes and run-length limited sequences. Following this, two new coding methods are investigated to protect a binary sequence against adjacent deletion errors. The first class of codes is a binary code derived from the Tenengolts non-binary single insertion or deletion correcting code, with additional selection rules. The second class of codes is designed by using interleaving techniques. The asymptotic cardinality bounds of these new codes are also derived. Compared to the previously published codes, the new codes are more flexible, since they can protect against any given fixed known length of adjacent deletion errors. Based on these two methods, a nested construction is further proposed to guarantee correction of adjacent deletion errors, up to a certain fixed number.
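The moment balancing templates above build on number-theoretic codes; the classic example is the Varshamov–Tenengolts construction, where a position-weighted checksum lets a single deleted bit be recovered. A small sketch (brute-force decoding for illustration, not the thesis's templates):

```python
from itertools import product

def vt_syndrome(word):
    # Varshamov–Tenengolts checksum: sum of i * x_i over positions 1..n, mod n+1
    return sum(i * x for i, x in enumerate(word, start=1)) % (len(word) + 1)

def vt_codebook(n, a=0):
    # All length-n binary words whose VT syndrome equals a
    return [w for w in product((0, 1), repeat=n) if vt_syndrome(w) == a]

def correct_single_deletion(received, n, a=0):
    # Re-insert one bit at every position; the VT property guarantees that
    # exactly one candidate lands back in the codebook
    candidates = set()
    for i in range(n):
        for b in (0, 1):
            trial = received[:i] + (b,) + received[i:]
            if vt_syndrome(trial) == a:
                candidates.add(trial)
    return candidates
```

Deleting any single bit from any VT codeword and running the decoder recovers the transmitted word uniquely, which is the number-theoretic property the moment balancing technique exploits.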
254

Efficient texture-based indexing for interactive image retrieval and cue detection

Levienaise-Obadia, B. January 2001 (has links)
The focus of this thesis is the definition of a complete framework for texture-based annotation and retrieval. This framework is centred on the concept of "texture codes", so called because they encode the relative energy levels of Gabor filter responses. These codes are pixel-based descriptors, robust to illumination variations; they can be generated efficiently and included in a fast retrieval process. They can act as local or global descriptors, and can be used in the representations of regions or objects. Our framework is therefore capable of supporting a wide range of queries and applications. During our research, we have been able to utilise results of psychological studies on the perception of similarity and have explored non-metric similarity scores. As a result, we have found that similarity can be evaluated with simple measures predominantly relying on the information extracted from the query, without a drastic loss in retrieval performance. We have been able to show that the simplest measure possible, counting the number of common codes between the query and a stored image, can for some algorithmic parameters outperform well-proven benchmarks. Importantly also, our measures can all support partial comparisons, so that region-based queries can be answered without the need for segmentation. We have investigated refinements of the framework which endow it with the ability to localise queries in candidate images, and to deal with user relevance feedback. The final framework can generate good and fast retrieval results, as demonstrated with a database of 3723 images, and can therefore be useful as a stand-alone system. The framework has also been applied to the problem of high-level annotation. In particular, it has been used as a cue detector, where a cue is a visual example of a particular concept such as a type of sport. 
The detection results show that the system can predict the correct cue among a small set of cues, and can therefore provide useful information to an engine fusing the outputs of several cue detectors. So an important aspect of this framework is that it is expected to be an asset within a multi-cue annotation and/or retrieval system.
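As a rough illustration of the simplest measure described above — counting texture codes shared with the query — the following sketch (hypothetical names, codes abstracted to integers) ranks stored images by a query-side score; being normalised only by the query's code set, it is asymmetric and non-metric, and naturally supports partial (region-based) comparison:

```python
def common_code_score(query_codes, image_codes):
    # Query-side similarity: fraction of the query's distinct texture codes
    # that also occur in the candidate image (asymmetric, non-metric)
    q = set(query_codes)
    return len(q & set(image_codes)) / len(q)
```

A retrieval pass is then just scoring every stored image's code set against the query and sorting, which is what keeps the process fast.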
255

Résultants de polynômes de Ore et Cryptosystèmes de McEliece sur des Codes Rang faiblement structurés / Resultants of Ore polynomials and McEliece Cryptosystems based on weakly structured Rank Codes

Murat, Gaetan 09 December 2014 (has links)
The encryption techniques most widely used in cryptography are based on problems from number theory. Despite their efficiency, they have weaknesses, notably a vulnerability to attacks carried out with quantum computers. It is therefore relevant to study other families of cryptosystems. Here we focus on cryptosystems based on error-correcting codes, introduced by McEliece in 1978, which rest on hard problems in coding theory and so do not share this vulnerability. These cryptosystems have drawbacks that limit their practical use: depending on the chosen code they can be vulnerable to structural attacks, and above all they require very large keys. Recently a new family of codes called MDPC codes was introduced, together with a cryptosystem based on them. MDPC codes seem to be distinguishable only by finding low-weight words in their dual, which shields them from structural attacks, and by using quasi-cyclic matrices they achieve very compact keys. Our own work is set in the context of the rank metric, introduced by Gabidulin in 1985, which seems well suited to cryptographic use: • We began with the notion of Ore polynomials and the important special case of q-polynomials, which are linear combinations of iterates of the Frobenius automorphism on a finite field. These polynomials are an important object of study in the rank metric, owing to their use in the first cryptosystems in this metric. We present already known results in a new form, together with new algorithms for computing the GCD of two Ore polynomials and for computing resultants and subresultants of Ore polynomials (as well as of ordinary polynomials, by generalizing to subresultants the formula already known for resultants), using a right-multiplication matrix smaller than the Sylvester matrix usually employed. These results can be reused indirectly in the cryptosystem presented afterwards, although it is not based on q-polynomials. • The next part of our work introduces a new family of rank-metric codes called LRPC codes (Low Rank Parity Check codes). These codes have a parity-check matrix of low rank weight, and can therefore be seen as a generalization of LDPC or MDPC codes to the rank metric. We present the LRPC cryptosystem, a McEliece-type cryptosystem in the rank metric based on LRPC codes. These codes have very little structure and are therefore likely to resist structural attacks. The parity-check matrix can be chosen double-circulant (these are the DC-LRPC codes), which considerably reduces the key size. The DC-LRPC cryptosystem thus combines good security, being based on a hard problem in coding theory (like all code-based cryptosystems), weak structure, a fairly small key (a few thousand bits at most) and an efficient decoding algorithm. An attack was found on the DC-LRPC cryptosystem. Based on the notion of folded codes, it significantly lowers the security of the cryptosystem when the polynomial X^(k-1)+X^(k-2)+⋯+1 has a divisor of large degree (k denoting the dimension of the code). This is not the case for the parameters presented, for which the cryptosystem remains valid.
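For readers unfamiliar with the rank metric underlying LRPC codes: the rank weight of a word over GF(2^m) is the GF(2)-rank of the m × n matrix obtained by expanding each coordinate over a basis of the extension field. A minimal sketch (an assumption here is that coordinates are already given as m-bit integers in some fixed basis):

```python
def gf2_rank(vectors):
    # GF(2) rank of bit-vectors packed as integers, by Gaussian elimination:
    # keep one basis vector per leading (highest set) bit position
    pivots = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]  # eliminate the leading bit and continue
    return len(pivots)

def rank_weight(word):
    # Rank weight of a word over GF(2^m): the rank of the m x n expansion
    # matrix equals the GF(2)-rank of the coordinate bit-vectors themselves
    return gf2_rank(word)
```

Note how the word [1, 2, 3] has Hamming weight 3 but rank weight 2, since the third coordinate is the sum of the first two; low rank weight of the parity-check rows is exactly the structure LRPC decoding exploits.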
256

Navigational Complexity Within Building Codes

McLean, James Stephen 01 January 2017 (has links)
The premise that building codes have become too complex has been discussed, commented on, and documented by practicing engineers; however, prior to this research there was little scientific evidence that codes have increased in complexity over time. There are many aspects of building codes that are complicated, and this reflects a combination of the inherent complexity of building design and the dynamic processes that produce the codes. This research focuses on navigational complexity, and specifically the aspects that can be quantified to demonstrate that current codes are more complex than their predecessors. Navigational complexity is defined as the complexity created by document cross referencing and other unintended structural features of a code. A metric for quantifying navigational complexity has been developed based on estimates of the time consumed by an engineer stepping and navigating through codes. The metric can be used to quantify navigational complexity within a given code and between different codes. Although it is unclear to what extent navigational complexity contributes to the overall level of complexity within a code, this research affirms that navigational complexity has increased in various codes over the years and can be used to compare complexity between different codes. The complexity of building codes has been shown to be increasing in several commonly used codes, and it may be necessary to simplify some codes. Additionally, this research postulates that it is possible for codes to become too complex and that there may be instances where the cognitive limit of navigational complexity within any given code is exceeded. However, building codes are complex for several reasons, and attempting to make codes less complex is not trivial. Without a method to reduce complexity, the task of simplification may prove intractable. 
The developed metric for navigational complexity has been coupled with graphical representations to identify areas where navigational complexity can be reduced and areas where it may be beyond the cognitive limit of code users. The combination of numerical data and graphical representations may provide additional significant advantages that are not yet realized. Measuring and understanding navigational complexity within any code opens up the possibility of mitigation through reorganization and developing better navigational tools for future editions.
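A drastically simplified version of the idea — not the thesis's time-based metric — models a code as a directed graph of cross-references and counts how many other sections a reader must visit to fully resolve one provision (all names hypothetical):

```python
def lookups_required(xrefs, start):
    """Count the sections a reader must look up, beyond `start` itself,
    to resolve every cross-reference transitively.

    `xrefs` maps each section to the list of sections it references.
    Cycles are handled by visiting each section at most once.
    """
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(xrefs.get(s, []))
    return len(seen) - 1
```

Averaged over all sections (and weighted by an estimate of per-lookup time), a count like this gives a single number that can be compared across code editions, which is the spirit of the metric developed here.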
257

Constrained sequences and coding for spectral and error control

Botha, Louis 11 February 2014 (has links)
D.Ing. / When digital information is to be transmitted over a communications channel or stored in a data recording system, it is first mapped onto a code sequence by an encoder. The code sequence has certain properties which make it suitable for use on the channel, i.e. the sequence complies with the channel input restrictions. These input restrictions are often described in terms of a required power spectral density of the code sequence. In addition, the code sequence can also be chosen in such a way as to enable the receiver to correct errors which occur in the channel. The set of rules which governs the encoding process is referred to as a line code or a modulation code for the transmission or storage of data, respectively. Before a new line code or modulation code can be developed, the properties that the code sequence should have for compliance with the channel input restrictions and possession of desired error correction capabilities have to be established. A code construction algorithm, which is often time consuming and difficult to apply, is then used to obtain the new code. In this dissertation, new classes of sequences which comply with the input restrictions and error correction requirements of practical channels are defined, and new line codes and recording codes are developed for mapping data onto these sequences. Several theorems which show relations between information-theoretic aspects of different classes of code sequences are presented. Algorithms which can be used to transform an existing line code or modulation code into a new code for use on another channel are introduced. These algorithms are systematic and easy to apply, and obviate the need to apply a code construction algorithm.
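One common channel input restriction of the kind described above is a spectral null at DC: a sequence satisfies it when its running digital sum stays bounded. A small sketch of that check (hypothetical names; the thesis's code constructions are not reproduced here):

```python
def running_digital_sum(bits):
    # Map 1 -> +1 and 0 -> -1 and accumulate; a bounded running digital
    # sum forces a spectral null at DC (a DC-free sequence)
    rds, trace = 0, []
    for b in bits:
        rds += 1 if b else -1
        trace.append(rds)
    return trace

def is_dc_free(bits, bound):
    # DC-free for a given digital sum variation bound: the running
    # digital sum never leaves [-bound, +bound]
    return all(abs(s) <= bound for s in running_digital_sum(bits))
```

A line code designer would verify this property over all sequences the encoder can emit, including across codeword boundaries, rather than for a single sequence as sketched here.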
258

Coding and bounds for correcting insertion/deletion errors

Swart, Theo G. 10 September 2012 (has links)
M.Ing. / Certain properties of codewords after deletions or insertions of bits are investigated. These are used in the enumeration of the number of subwords or superwords after deletions or insertions. Also, new upper bounds for insertion/deletion correcting codes are derived from these properties. A decoding algorithm to correct up to two deletions per word for Helberg's s = 2 codes is proposed. By using subword and superword tables, new s = 2 codebooks with greater cardinalities than before are presented. An insertion/deletion channel model is presented which can be used in evaluating insertion/deletion correcting codes. By changing the parameters, various channel configurations can be attained. Furthermore, a new convolutional coding scheme for correcting insertion/deletion errors is introduced, and its performance is investigated using the presented channel model.
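The subword enumeration mentioned above relates to a classical fact: the number of distinct words obtainable from a word by a single deletion equals its number of runs of identical symbols. A quick sketch verifying this (hypothetical helper names):

```python
def deletion_ball(word):
    # All distinct words obtainable from `word` by deleting exactly one symbol
    return {word[:i] + word[i + 1:] for i in range(len(word))}

def runs(word):
    # Number of maximal runs of identical symbols, e.g. "0011" has 2
    return 1 + sum(a != b for a, b in zip(word, word[1:]))
```

Deleting any bit inside a run yields the same subword, so only the run count matters; counts like this are the starting point for the subword tables and cardinality bounds the dissertation derives.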
259

Dissemination of Teachers' Codes of Ethics

Elms, Arkie 08 1900 (has links)
This thesis examines the awareness of national and state standards established for teachers by teacher associations. Data for this study came from questionnaires filled out by teachers taking courses at North Texas State University.
260

Validation of CFD codes for propulsion system components

Chan, Chun Ngok 26 January 2010 (has links)
This report describes an international effort to investigate the present limitations of some commercially available CFD codes and their models. This investigation involves comparing the predictions from these codes with the experimental results of the two selected test cases. The data collection method is briefly described, followed by a detailed discussion of the graphical approach used by the group of investigators to compare results. In addition, an attempt to investigate the deviation of the collected results from the experimental data is discussed. / Master of Science
