251

Contrôle, synchronisation et chiffrement / Control, synchronization and encryption

Parriaux, Jérémy 03 October 2012 (has links)
This thesis deals with the synchronization of dynamical systems. Synchronization is studied in a master-slave configuration, that is, for systems coupled in a unidirectional way. This configuration is of particular interest because it corresponds to architectures encountered in one-to-one or one-to-many encrypted communications. Special attention is paid to self-synchronization, the behaviour whereby synchronization is achieved through the master-slave coupling alone, without any external control. It plays a central role in communications involving self-synchronizing stream ciphers. The study of self-synchronization in the cryptographic context relies on control theory. An original connection between self-synchronization and the encryption/decryption principle in cryptography is established. It is based on the flatness property of dynamical systems, a concept borrowed from automatic control. It is shown that flat dynamical systems completely define the set of self-synchronizing systems and thus enlarge the existing structures of self-synchronizing stream ciphers.
Flatness is first studied for two classes of nonlinear systems: switched linear systems and linear parameter-varying (LPV) systems. The characterization of flat outputs relies on the concept of nilpotent semigroups, and an efficient algorithm is provided. A constructive approach to self-synchronizing master-slave structures is then proposed, based on flat systems together with the notions of left and right invertibility borrowed from control theory. Self-synchronization is subsequently studied in the Boolean context, which is preferred in cryptography. It is first characterized through the notion of influence. Several matrix representations of Boolean functions are then proposed; these representations prove especially useful for analysing security-related properties. A connection between self-synchronization and the eigenstructures of these matrix representations is established. A graph-oriented approach to the characterization is finally developed, from which new self-synchronizing constructions are deduced and security aspects are discussed. Finally, the FPGA-based test platform that was built is described.
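The canonical self-synchronizing stream cipher structure mentioned above — the keystream depends only on the key and a bounded window of past ciphertext symbols, so the receiver recovers synchronization after a fixed number of correctly received symbols — can be illustrated with a minimal Python sketch. It is not taken from the thesis (which builds such ciphers from flat dynamical systems); the hash-based keystream function and the window size M are illustrative assumptions.

```python
import hashlib

M = 8  # number of past ciphertext bytes the keystream depends on (assumed window size)

def keystream_byte(key: bytes, window: bytes) -> int:
    # Keystream symbol computed only from the key and the last M ciphertext bytes.
    # Any keyed function of the window works for the illustration; a hash is used here.
    return hashlib.sha256(key + window).digest()[0]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    window = bytes(M)          # initial (public) window of M zero bytes
    out = bytearray()
    for m in plaintext:
        c = m ^ keystream_byte(key, window)
        out.append(c)
        window = (window + bytes([c]))[-M:]   # shift in the ciphertext byte just produced
    return bytes(out)

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    window = bytes(M)
    out = bytearray()
    for c in ciphertext:
        out.append(c ^ keystream_byte(key, window))
        window = (window + bytes([c]))[-M:]   # shift in the *received* ciphertext byte
    return bytes(out)

if __name__ == "__main__":
    key = b"demo-key"
    msg = b"self-synchronizing stream cipher demo"
    ct = encrypt(key, msg)
    assert decrypt(key, ct) == msg
    # Self-synchronization: even if the receiver starts mid-stream (wrong initial window),
    # it decrypts correctly after M ciphertext bytes have flushed the window.
    late = decrypt(key, ct[4:])
    assert late[M:] == msg[4 + M:]
```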
252

Binary Decision Diagrams for Random Boolean Functions

Gröpl, Clemens 03 May 1999 (has links)
Binary Decision Diagrams (BDDs) are a data structure for Boolean functions, also known as branching programs. In ordered binary decision diagrams (OBDDs), the tests have to obey a fixed variable ordering. In free binary decision diagrams (FBDDs), each variable may be tested at most once. The efficiency of new variants of the BDD concept is usually demonstrated with spectacular (worst-case) examples. We pursue another approach and compare the representation sizes of almost all Boolean functions. Whereas I. Wegener proved that for `most' values of n the expected OBDD size of a random Boolean function of n variables is equal to the worst-case size up to terms of lower order, we show that this is not the case for n within intervals of constant length around the values n = 2^h + h. Furthermore, there are ranges of n for which minimal FBDDs are almost always smaller than minimal OBDDs by at least a constant factor. Our main theorems have doubly exponentially small probability bounds (in n). We also investigate the evolution of random OBDDs and their worst-case size, revealing an oscillating behaviour that explains why certain results cannot be improved in general.
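For a fixed variable ordering, the minimal OBDD size of a function can be read off its truth table: the nodes labelled with a given variable correspond to the distinct subfunctions, obtained by fixing the earlier variables, that still depend on it. The following sketch computes this level profile for a random Boolean function; it only illustrates the quantity whose expected value the thesis studies and is not taken from it.

```python
import random

def obdd_level_profile(truth_table, n):
    # Number of nodes labelled x_{i+1} in the reduced OBDD with the natural variable
    # order x1 < x2 < ... < xn: the distinct subfunctions obtained by fixing x1..xi
    # that still depend on x_{i+1}.  truth_table has length 2**n and is indexed by the
    # integer whose bits (most significant bit = x1) encode the assignment.
    profile = []
    for i in range(n):
        block = 2 ** (n - i)                     # subfunctions of x_{i+1}, ..., x_n
        subfns = {tuple(truth_table[j:j + block]) for j in range(0, 2 ** n, block)}
        half = block // 2
        depending = {s for s in subfns if s[:half] != s[half:]}   # depend on x_{i+1}
        profile.append(len(depending))
    return profile

if __name__ == "__main__":
    random.seed(0)
    n = 10
    tt = [random.random() < 0.5 for _ in range(2 ** n)]
    profile = obdd_level_profile(tt, n)
    print("nodes per level:", profile)
    print("internal nodes in total:", sum(profile))   # add 2 terminal nodes for the full OBDD size
```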
253

[en] BOOLEAN OPERATIONS WITH COMPOUND SOLIDS REPRESENTED BY BOUNDARY / [pt] OPERAÇÕES BOOLEANAS COM SÓLIDOS COMPOSTOS REPRESENTADOS POR FRONTEIRA

MARCOS CHATAIGNIER DE ARRUDA 13 July 2005 (has links)
[en] In a solid modeler, one of the most powerful tools for creating three-dimensional objects of any level of geometric complexity is the application of Boolean set operations. They are intuitive and popular ways of combining solids, based on the operations applied to sets. The main types of Boolean operations commonly applied to solids are union, intersection and difference. When it is of practical interest to guarantee that the resulting objects have the same dimension as the original ones, without loose or dangling parts, the regularization process is applied. To regularize means to restrict the result so that only fillable volumes may exist. In practice, regularization is performed by classifying the topological elements and removing the lower-dimensional structures. The objective of this work is the development of a generic algorithm that allows the Boolean set operations to be applied in a geometric modeling environment aimed at finite element analysis and that provides the following capabilities: working with an arbitrary number of topological entities (the Group concept), working with objects of different dimensions, working with non-manifold objects, working with objects that are not necessarily planar or polyhedral, and guaranteeing efficiency, robustness and applicability in any modeling environment based on boundary representation (B-Rep). In this context, the implementation of the algorithm in a pre-existing geometric modeler named MG is presented, following the object-oriented programming paradigm and keeping the user interface simple and efficient.
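The effect of regularization — keeping only the closure of the interior of the set-theoretic result, so that lower-dimensional debris such as dangling faces, edges or isolated points disappears — can be illustrated in one dimension, where "solids" are closed intervals. The sketch below only illustrates the definition, not the thesis's B-Rep algorithm.

```python
# Regularized Boolean operations illustrated in one dimension, where solids are
# closed intervals on the real line.  The regularized result is the closure of the
# interior of the set-theoretic result, which discards pieces of lower dimension,
# e.g. the single point shared by two intervals that merely touch.

def regularized_intersection(s1, s2):
    result = []
    for a1, b1 in s1:
        for a2, b2 in s2:
            lo, hi = max(a1, a2), min(b1, b2)
            if lo < hi:              # keep only pieces with a nonempty interior
                result.append((lo, hi))
    return result                    # taking the closure gives closed intervals again

if __name__ == "__main__":
    A = [(0.0, 1.0)]
    B = [(1.0, 2.0)]
    # The set-theoretic intersection of A and B is the single point {1.0};
    # the regularized intersection is empty, as required in a solid modeler.
    print(regularized_intersection(A, B))   # -> []
    C = [(0.5, 3.0)]
    print(regularized_intersection(A, C))   # -> [(0.5, 1.0)]
```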
254

Mapeamento pedológico digital via regressão geograficamente ponderada e lógica booleana: uma estratégia integrada entre dados espectrais terrestres e de satélite / Digital pedological mapping by geographically weighted regression and boolean logic: an integrated strategy between terrestrial and satellite spectral data

Medeiros Neto, Luiz Gonzaga 10 February 2017 (has links)
Soil maps are important sources of information for agriculture, but they are practically nonexistent at adequate scales for Brazil, and surveying them by the conventional method on the scale demanded is not feasible. As an alternative, digital pedological mapping is a field of knowledge that relates field, laboratory and point soil information to quantitative methods based on satellite images and relief attributes in order to predict soil attributes and classes. The literature therefore highlights the importance of the spatial position of sampling points when estimating soil attributes from the spectral values of satellite images; in addition, it is important to cross the estimated and spatialized soil attributes in order to arrive at soil classes. In view of this, the objective is to develop a technique based on satellite images, spectral data and relief attributes, integrated by Boolean logic, to produce soil maps. The work was carried out in the municipality of Rio das Pedras, SP, and surroundings, covering a total area of 47,882 ha. Multitemporal satellite images were processed to obtain spectral information of the exposed soil surface. This information was correlated with laboratory spectra of sample points at subsurface depth (80-100 cm), and spectra simulating satellite bands were estimated for unknown locations. A soil classification key was built by crossing attribute maps via Boolean logic, with the following attributes to be mapped: clay, base saturation (V%) and organic matter (OM) at 0-20 cm depth, and clay, CEC, base saturation (V%), aluminium saturation (m%), Al, total iron, hue, value and chroma at 80-100 cm depth. The subsurface spectra and the soil attributes at both depths were estimated with geographically weighted regression (GWR), a multivariate technique whose predictive performance was evaluated by comparison with multiple linear regression (MRL). The results showed correlation between the spectra of the two depths, with validation R2 above 0.6. Clay (0-20 and 80-100 cm), hue, value and chroma were the soil attributes with the best estimates, with R2 above 0.6. The GWR technique outperformed MRL. Compared with detailed soil maps from conventional surveys, the digital soil map obtained a kappa index of 34.65% and an overall accuracy of 54.46%, which represents a fair level of classification. On the other hand, it must be considered that the region is geologically complex and comprises heterogeneous soils. The technique developed shows potential for digital soil mapping as soil attribute estimates improve and the classification key criteria are refined.
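Crossing attribute maps by Boolean logic amounts to turning per-attribute conditions into masks over a common grid and assigning a class wherever a conjunction of conditions holds. The sketch below illustrates the idea with numpy; the thresholds and class rules are hypothetical and are not the thesis's classification key.

```python
import numpy as np

# Hypothetical attribute rasters on the same grid, e.g. estimated by GWR at 0-20 cm depth.
clay = np.array([[12.0, 38.0], [55.0, 61.0]])   # clay content, %
v_pct = np.array([[62.0, 40.0], [70.0, 35.0]])  # base saturation V%, %

# A toy classification key: each class is a conjunction of per-attribute conditions.
# Thresholds are illustrative only.
eutrophic = v_pct >= 50.0                       # Boolean mask
clayey = clay >= 35.0

soil_class = np.full(clay.shape, "unclassified", dtype=object)
soil_class[eutrophic & clayey] = "class A"      # e.g. clayey eutrophic soils
soil_class[eutrophic & ~clayey] = "class B"
soil_class[~eutrophic & clayey] = "class C"

print(soil_class)
```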
255

Inferência de redes de regulação gênica utilizando o paradigma de crescimento de sementes / Inference of gene regulatory networks using the seed growing paradigm

Higa, Carlos Henrique Aguena 17 February 2012 (has links)
A key problem in Systems Biology is the inference of gene regulatory networks. Scientific and technological advances allow us to analyze the expression of thousands of genes simultaneously. By "gene expression" we refer to the mRNA concentration level inside a cell. Due to this large amount of data, mathematical, statistical and computational methods have been developed in order to elucidate the gene regulatory mechanisms present in living organisms. To this end, mathematical models of gene regulatory networks have been proposed, along with algorithms to infer these networks. In this work, we focus on two aspects: modeling and inference. Regarding modeling, we studied existing models for the yeast (Saccharomyces cerevisiae) cell cycle. We then proposed a model based on context-sensitive probabilistic Boolean networks and, subsequently, an improvement of this model using nonhomogeneous Markov chains. We show the results, comparing our models against the studied models. Regarding inference, we proposed a new algorithm based on the seed growing paradigm. In this context, a seed is a small subset of genes of interest. Our algorithm consists of two main steps: a seed growing step and a sampling step. In the first step, the algorithm adds genes to the seed according to some criterion. In the second step, it samples the space of networks, defining as its output a set of potentially interesting networks. We applied the algorithm to artificial data and to biological HeLa cell data, with satisfactory results.
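The Boolean network formalism underlying these models updates each gene as a Boolean function of its regulators; probabilistic Boolean networks generalize this by selecting among several candidate functions per gene. A minimal synchronous-update sketch follows (the three-gene network is made up, not a model from the thesis).

```python
# A synchronous Boolean network: each gene's next state is a Boolean function of the
# current states of its regulators.  Probabilistic Boolean networks (as used in the
# thesis) generalize this by choosing among several candidate functions per gene.

def step(state):
    x0, x1, x2 = state
    return (
        x1 and not x2,   # gene 0 is activated by gene 1 and repressed by gene 2
        x0,              # gene 1 simply follows gene 0
        x0 or x1,        # gene 2 is activated by either gene 0 or gene 1
    )

def trajectory(state, steps):
    states = [state]
    for _ in range(steps):
        state = step(state)
        states.append(state)
    return states

if __name__ == "__main__":
    for s in trajectory((True, False, False), 6):
        print(tuple(int(b) for b in s))
```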
256

Projeto de um módulo de aquisição e pré-processamento de imagem colorida baseado em computação reconfigurável e aplicado a robôs móveis / A project of a module for acquisition and color image pre-processing based on reconfigurable computation and applied to mobile robots

Bonato, Vanderlei 14 May 2004 (has links)
This work proposes a basic module for color image acquisition and pre-processing for mobile robots, implemented in reconfigurable hardware following the SoC (System-on-a-Chip) concept. The basic module is presented together with more specific image pre-processing functions, which are used to verify the functionality implemented in this work. The main functions of the basic module are: assembling frames from the pixels provided by the CMOS digital camera, controlling the camera's various configuration parameters, and converting between color spaces. The more specific functions cover the segmentation, centering, reduction and interpretation stages of the acquired images. The reconfigurable device used in this work is the FPGA (Field-Programmable Gate Array), which allows the specific functions to be adapted to the needs of each application, always building on the proposed module. The system was applied to gesture recognition and achieved a 99.57% recognition rate at 31.88 frames per second.
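One of the module's functions is color space conversion prior to segmentation. The abstract does not specify the target color space, so the sketch below uses the common RGB-to-YCbCr conversion (ITU-R BT.601) followed by an illustrative chrominance threshold; both the color space and the thresholds are assumptions made only for illustration.

```python
import numpy as np

# Per-pixel RGB -> YCbCr conversion (ITU-R BT.601 full-range coefficients) followed by a
# simple chrominance threshold for segmentation.  The thresholds below resemble a
# skin-tone rule and are illustrative only.

def rgb_to_ycbcr(rgb):
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def segment(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb_range[0] <= cb) & (cb <= cb_range[1]) & (cr_range[0] <= cr) & (cr <= cr_range[1])

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # stand-in for a camera frame
    print(segment(frame).astype(int))
```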
257

Logic Synthesis with High Testability for Cellular Arrays

Sarabi, Andisheh 01 January 1994 (has links)
The new Field Programmable Gate Array (FPGA) technologies and their structures have opened up new approaches to logic design and synthesis. The main feature of an FPGA is an array of logic blocks surrounded by a programmable interconnection structure. Cellular FPGAs are a special class of FPGAs which are distinguished by their fine granularity and their emphasis on local cell interconnects. While these characteristics call for specialized synthesis tools, the availability of logic gates other than Boolean AND, OR and NOT in these architectures opens up new possibilities for synthesis. Among the possible realizations of Boolean functions, XOR logic is shown to be more compact than AND/OR and also highly testable. In this dissertation, the concept of structural regularity and the advantages of XOR logic are used to investigate various synthesis approaches to cellular FPGAs, which up to now have been mostly nonexistent. Universal XOR Canonical Forms, Two-level AND/XOR, restricted factorization, as well as various Directed Acyclic Graph structures are among the proposed approaches. In addition, a new comprehensive methodology for the investigation of all possible XOR canonical forms is introduced. Additionally, a new compact class of XOR-based Decision Diagrams for the representation of Boolean functions, called Kronecker Functional Decision Diagrams (KFDD), is presented. It is shown that for the standard, hard, benchmark examples, KFDDs are on average 35% more compact than Binary Decision Diagrams, with some reductions of up to 75% being observed.
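A concrete instance of the two-level AND/XOR representations mentioned above is the positive-polarity Reed-Muller form, whose coefficients can be computed from the truth table with a GF(2) butterfly transform. The sketch below is a generic illustration of that form, not of the dissertation's synthesis algorithms.

```python
# Two-level AND/XOR (positive-polarity Reed-Muller) form of a Boolean function:
# f is the XOR of the monomials whose coefficient is 1, each monomial being a product
# of variables.  The coefficients are obtained from the truth table with a butterfly
# transform over GF(2).

def reed_muller_coefficients(truth_table):
    a = list(truth_table)           # truth table of length 2**n, index bits = variable values
    n = len(a).bit_length() - 1
    for i in range(n):              # GF(2) butterfly, one pass per variable
        step = 1 << i
        for j in range(len(a)):
            if j & step:
                a[j] ^= a[j ^ step]
    return a                        # a[m] = 1: the monomial whose variables form bitmask m is present

if __name__ == "__main__":
    # f(x2, x1, x0) = x0 XOR (x1 AND x2); truth table indexed by the bits (x2 x1 x0).
    tt = [((i & 1) ^ ((i >> 1) & (i >> 2) & 1)) for i in range(8)]
    coeffs = reed_muller_coefficients(tt)
    monomials = [m for m, c in enumerate(coeffs) if c]
    print(monomials)   # expected: [1, 6]  -> x0 and x1*x2
```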
258

STRAINTRONIC NANOMAGNETIC DEVICES FOR NON-BOOLEAN COMPUTING

Abeed, Md Ahsanul 01 January 2019 (has links)
Nanomagnetic devices have been projected as an alternative to transistor-based switching devices because of their non-volatility and potentially superior energy efficiency. The energy efficiency is enhanced by the use of straintronics, in which a voltage applied to a piezoelectric layer generates a strain that is transferred to an elastically coupled magnetostrictive nanomagnet, causing magnetization rotation. The low energy dissipation and non-volatility make straintronic nanomagnets very attractive for both Boolean and non-Boolean computing applications. There had been relatively little research on straintronic switching in devices built with real nanomagnets, which invariably have defects and imperfections, or on their adaptation to non-Boolean computing; both are studied in this work. The effects of material and fabrication defects and of surface roughness variation (found in real nanomagnets) on the switching process, and ultimately on device performance, have been studied theoretically in detail. The results of these studies place the viability of straintronic Boolean logic and/or memory in question. With a view to analog computing and signal processing, the operation of analog spin-wave-based devices has been evaluated in the presence of defects; it was found that defects impact their performance, which can be a major concern for the spin-wave device community. Additionally, the design challenges that real nanomagnets pose for low-barrier nanomagnets, the building blocks of probabilistic computing devices based on binary stochastic neurons, have been investigated. This study also casts some doubt on the efficacy of probabilistic computing devices. Fortunately, some non-Boolean applications based on the collective action of arrays of nanomagnets are very forgiving of material defects. One example, studied here, is image processing with dipole-coupled nanomagnets, which showed promising results for noise correction and edge enhancement of corrupted pixels in an image. Moreover, a microwave oscillator based on a single magnetic tunnel junction was proposed for the first time, and theoretical simulations showed that it can outperform traditional microwave oscillators. The experimental part of this work dealt with spin wave modes excited by surface acoustic waves, studied with the time-resolved magneto-optic Kerr effect (TR-MOKE). New hybrid spin wave modes were observed for the first time. An experiment was carried out to emulate simulated annealing in a system of dipole-coupled magnetostrictive nanomagnets, with strain serving as the annealing agent. This promising outcome is the first demonstration of a hardware variant of simulated annealing of a many-body system based on magnetostrictive nanomagnets. Finally, a surface acoustic wave antenna actuated by the giant spin Hall effect was demonstrated experimentally. This is the first observation of photon-to-phonon conversion using spin-orbit torque, and although the observed conversion efficiency was low (1%), it opens the pathway to a new kind of acoustic radiator. These studies complement past work in the area of straintronics.
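Strain-induced magnetization rotation in a magnetostrictive nanomagnet is commonly modeled at the macrospin level with the Landau-Lifshitz-Gilbert equation, where the voltage-generated stress enters the effective field as a uniaxial stress-anisotropy term. The equations below state that standard framework for orientation only; the thesis's specific simulation details may differ.

```latex
% Standard macrospin framework for strain-induced switching studies (illustrative).
\[
  \frac{d\mathbf{m}}{dt}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
    + \alpha\, \mathbf{m} \times \frac{d\mathbf{m}}{dt},
  \qquad
  \mathbf{H}_{\mathrm{eff}}
  = \mathbf{H}_{\mathrm{demag}} + \mathbf{H}_{\mathrm{thermal}}
    + \frac{3\lambda_s \sigma}{\mu_0 M_s}\,(\hat{\mathbf{e}}_\sigma \cdot \mathbf{m})\,\hat{\mathbf{e}}_\sigma ,
\]
% where $\mathbf{m}$ is the unit magnetization vector, $\gamma$ the gyromagnetic ratio,
% $\alpha$ the Gilbert damping constant, $\lambda_s$ the saturation magnetostriction,
% $\sigma$ the voltage-generated stress and $\hat{\mathbf{e}}_\sigma$ the stress axis.
```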
259

Cryptological Viewpoint Of Boolean Functions

Sagdicoglu, Serhat 01 January 2003 (has links) (PDF)
Boolean functions are the main building blocks of most cipher systems. Various aspects of their cryptological characteristics are examined and investigated by many researchers from different fields. This thesis does not claim to obtain original results; it is an attempt at giving a unified survey of the main results of the subject. In this thesis, the theory of Boolean functions is presented in detail, emphasizing some important cryptological properties such as balance, nonlinearity, the strict avalanche criterion and the propagation criterion. After presenting many results about these criteria with detailed proofs, two upper bounds and two lower bounds on the nonlinearity of a Boolean function due to Zhang and Zheng are proved. Because of their importance in the theory of Boolean functions, constructions of Sylvester-Hadamard matrices are shown and most of their properties used in cryptography are proved. The Walsh transform is investigated in detail and many of its properties are proved. By using a property of Sylvester-Hadamard matrices, the fast Walsh transform is presented and its application to finding the nonlinearity of a Boolean function is demonstrated. One of the most important classes of Boolean functions, the so-called bent functions, is presented with many properties and several examples, following the paper of Rothaus. By using bent functions, relations between balance, nonlinearity and the propagation criterion are presented, and it is shown that not all of these criteria can be completely satisfied simultaneously. For this reason, several constructions of functions optimizing these criteria, due to Seberry, Zhang and Zheng, are presented.
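The fast Walsh transform and its use for computing nonlinearity, as surveyed in the thesis, amount to a butterfly transform of the ±1-valued truth table followed by NL(f) = 2^(n-1) - max_a |W_f(a)| / 2. A minimal sketch:

```python
# Fast Walsh (Walsh-Hadamard) transform of a Boolean function and the resulting
# nonlinearity, NL(f) = 2^(n-1) - max_a |W_f(a)| / 2.

def walsh_transform(truth_table):
    # Signed representation (-1)^f(x), then an in-place butterfly transform.
    w = [1 - 2 * bit for bit in truth_table]
    n = len(w).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(w)):
            if j & step == 0:
                u, v = w[j], w[j | step]
                w[j], w[j | step] = u + v, u - v
    return w

def nonlinearity(truth_table):
    w = walsh_transform(truth_table)
    return (len(truth_table) - max(abs(c) for c in w)) // 2

if __name__ == "__main__":
    # f(x1..x4) = x1*x2 XOR x3*x4 is a bent function on 4 variables:
    # its nonlinearity attains the maximum 2^(n-1) - 2^(n/2 - 1) = 6.
    tt = [((i >> 3) & (i >> 2) & 1) ^ ((i >> 1) & i & 1) for i in range(16)]
    print(nonlinearity(tt))   # -> 6
```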
260

Grenzen der visuellen Query-Konstruktion mittels Faceted Browsing / Limits of visual query construction using faceted browsing

Koßlitz, Marleen 14 May 2012 (has links) (PDF)
In order to search and filter a set of data for specific information, search engines and database systems use queries. These queries are often defined by a dedicated language that allows complex expressions to be formed. The systems answer a query with a result set, and complex queries make it possible to find precise results. Faceted browsing is a user-interface paradigm for searching and filtering data: queries are constructed visually and refined step by step, without the user having to know the underlying query language. The simple and intuitive usability of the interface is the recipe for its success, which is why faceted browsing is used in many applications, including online shops. So far, such systems have mostly been designed so that only queries consisting of conjunctions of disjunctions can be formed. This raises the question of whether more complex queries can also be constructed by means of faceted browsing and which changes to the interface are necessary for this. Do those changes go so far that the simplicity of the interface must be sacrificed in favour of query complexity, or are there ways to form more complex queries while preserving the interface's simplicity? The goal of this thesis is to determine how complex the queries constructed via faceted browsing can become without losing the simple usability of the faceted-browser interface. To this end, the expressive power of existing faceted-browser interfaces with respect to query construction is analysed. Furthermore, more complex queries are examined for their feasibility with faceted browsing, and it is considered how existing faceted-browser interfaces would have to change in order to allow such queries to be constructed visually. By extending an existing faceted browser with the necessary interface elements as a prototype, it should become possible to form more complex queries than faceted browsing has allowed so far.
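The "conjunctions of disjunctions" that current faceted-browsing interfaces can express correspond to OR-ing the selected values within each facet and AND-ing across facets. A minimal sketch of that mapping (the facet names and records are made up):

```python
# Faceted browsing typically builds a query that is a conjunction over facets of
# disjunctions over the values selected within each facet:
#   (facet1 = v11 OR facet1 = v12) AND (facet2 = v21) AND ...

def matches(record, selections):
    # selections: {facet_name: set of accepted values}; an empty dict matches everything.
    return all(record.get(facet) in values for facet, values in selections.items())

def filter_records(records, selections):
    return [r for r in records if matches(r, selections)]

if __name__ == "__main__":
    records = [
        {"language": "de", "year": 2012, "source": "TU Chemnitz"},
        {"language": "en", "year": 1999, "source": "HU Berlin"},
        {"language": "en", "year": 2012, "source": "VCU"},
    ]
    # "(language = de OR language = en) AND year = 2012"
    selections = {"language": {"de", "en"}, "year": {2012}}
    print(filter_records(records, selections))
```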
