61

Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

Habiby, Sarry Fouad January 1986 (has links)
No description available.
62

Gigahertz-Range Multiplier Architectures Using MOS Current Mode Logic (MCML)

Srinivasan, Venkataramanujam 18 December 2003 (has links)
The tremendous advancement in VLSI technologies in the past decade has fueled the need for intricate tradeoffs among speed, power dissipation and area. With gigahertz-range microprocessors becoming commonplace, it is a typical design requirement to push the speed to its extreme while minimizing power dissipation and die area. Multipliers are critical components of many computationally intensive circuits such as real-time signal processing and arithmetic systems. The increasing demand for speed in floating-point co-processors, graphics processing units, CDMA systems and DSP chips has shaped the need for high-speed multipliers. The focus of our research for modern digital systems is twofold. The first goal is to analyze a relatively unexplored logic style called MOS Current Mode Logic (MCML), which is a promising logic technique for the design of high-performance arithmetic circuits with minimal power dissipation. The second is to design high-speed arithmetic circuits, in particular gigahertz-range multipliers, that exploit the many attractive features of the MCML logic style. A small library of MCML gates forming the core components of the multipliers was designed and optimized for high-speed operation. The three 8-bit MCML multiplier architectures designed and simulated in TSMC 0.18 µm CMOS technology are: a 3-2-tree architecture with ripple carry adder (Architecture I), a 4-2-tree design with ripple carry adder (Architecture II) and a 4-2-tree architecture with carry look-ahead adders (Architecture III). Architecture I operates with a maximum throughput of 4.76 GHz (4.76 billion multiplications per second) and a latency of 3.78 ns. Architecture II has a maximum throughput of 3.3 GHz and a latency of 3 ns, and Architecture III has a maximum throughput of 2 GHz and a latency of 3 ns. Architecture I achieves the highest throughput among the three multipliers, but it incurs the largest area and latency, in terms of clock cycle count as well as absolute delay. Although it is difficult to compare the speed of our multipliers with existing ones, due to the use of different technologies and different optimization goals, we believe our multipliers are among the fastest found in contemporary literature. / Master of Science
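The 3-2 and 4-2 trees named above are the carry-save compressor structures used to reduce the partial-product matrix before the final adder. The abstract does not give the gate-level design, so the sketch below is only a generic behavioural model, with illustrative function names, of a 4-2 compressor built in the usual way from two cascaded full adders.

```python
from itertools import product

def full_adder(a, b, c):
    """3-2 compressor: three input bits -> (sum, carry)."""
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

def compressor_4_2(x1, x2, x3, x4, cin):
    """4-2 compressor built from two cascaded full adders.
    Invariant: x1 + x2 + x3 + x4 + cin == s + 2*(carry + cout)."""
    s1, cout = full_adder(x1, x2, x3)
    s, carry = full_adder(s1, x4, cin)
    return s, carry, cout

# Exhaustive check of the arithmetic invariant over all 32 input patterns.
for bits in product((0, 1), repeat=5):
    s, carry, cout = compressor_4_2(*bits)
    assert sum(bits) == s + 2 * (carry + cout)
```

Each 4-2 stage takes four partial-product rows down to two, against three rows to two for a full-adder (3-2) stage, so a 4-2 tree needs fewer reduction levels for the same operand width, which is consistent with the lower latency reported for Architectures II and III.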
63

The Complete Pick Property and Reproducing Kernel Hilbert Spaces

Marx, Gregory 03 January 2014 (has links)
We present two approaches toward a characterization of the complete Pick property. We first discuss the lurking isometry method used in a paper by J.A. Ball, T.T. Trent, and V. Vinnikov. They show that a nondegenerate, positive kernel $k$ has the complete Pick property if $1/k$ has one positive square. We also look at the one-point extension approach developed by P. Quiggin, which leads to a necessary and sufficient condition for a positive kernel to have the complete Pick property. We conclude by connecting the two characterizations of the complete Pick property. / Master of Science
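A standard example, not worked out in the abstract but the usual illustration of the condition, is the Szegő kernel on the unit disc, for which $1/k$ visibly contains a single positive square:

$$ k(z,w) = \frac{1}{1 - z\bar{w}}, \qquad \frac{1}{k(z,w)} = 1 - z\bar{w} = a(z)\overline{a(w)} - b(z)\overline{b(w)}, \quad a \equiv 1,\; b(z) = z, $$

so the decomposition of $1/k$ has exactly one positive square and the Szegő kernel is the prototypical complete Pick kernel.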
64

Fast prime field arithmetic using novel large integer representation

Alhazmi, Bader Hammad 10 July 2019 (has links)
Large integers are used in several key areas such as the RSA (Rivest-Shamir-Adleman) and elliptic curve public-key cryptographic systems. Achieving higher levels of security requires larger key sizes, and this becomes a limiting factor in prime finite field GF(p) arithmetic because operations on large integers suffer from the long carry-propagation problem. The large integer representation has a direct impact on the efficiency of the calculations and of the hardware and software implementations, and attempts to use alternative representations, such as residue number systems, suffer from their own problems. In this dissertation, we propose a novel and efficient attribute-based large integer representation scheme capable of efficiently representing the large integers commonly used in cryptography, such as the five NIST primes and the Pierpont primes used in supersingular isogeny Diffie-Hellman (SIDH) for post-quantum cryptography. Moreover, we propose algorithms for this new representation to perform arithmetic operations such as conversion to and from binary representation, two's complement, left shift, number comparison, addition/subtraction, modular addition/subtraction, modular reduction, multiplication, and modular multiplication. Extensive numerical simulations and software implementations were carried out to verify the performance of the new number representation. Results show that the attribute-based large integer arithmetic operations are performed faster in our proposed representation than in binary and residue number representations. This makes the proposed representation suitable for cryptographic applications on embedded systems and IoT devices with limited resources, at a better security level. / Graduate / 2020-07-04
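The attribute-based representation itself is not described in the abstract, but the special-form primes it targets (the NIST and Pierpont primes) are chosen precisely so that modular reduction avoids long division. For reference, a minimal sketch of the standard folding reduction for a pseudo-Mersenne prime p = 2^k - c is given below; this is the kind of baseline such representations are measured against, not the thesis's algorithm.

```python
def reduce_pseudo_mersenne(x, k, c):
    """Reduce x modulo p = 2**k - c, using 2**k ≡ c (mod p).

    Each pass folds the bits above position k back into the low part,
    so the reduction needs only shifts, masks and small multiplications.
    """
    p = (1 << k) - c
    mask = (1 << k) - 1
    while x >> k:
        x = (x & mask) + (x >> k) * c   # congruent to x mod p, strictly smaller
    return x - p if x >= p else x

# Example with the Crandall prime 2**255 - 19.
k, c = 255, 19
p = (1 << k) - c
x = 1234567891011 ** 12
assert reduce_pseudo_mersenne(x, k, c) == x % p
```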
65

Méthodes d’optimisation distribuée pour l’exploitation sécurisée des réseaux électriques interconnectés / Distributed optimization methods for the management of the security of interconnected power systems

Velay, Maxime 25 September 2018 (has links)
Our societies are more dependent on electricity than ever, so any disturbance in power transmission and delivery has a major economic and social impact. Beyond minimizing operating cost, maintaining the reliability and security of power systems is therefore crucial for system operators. Moreover, transmission systems are interconnected to decrease the cost of operation and to improve security. One of the main challenges for transmission system operators (TSOs) is thus to coordinate with the neighboring interconnected systems, which raises scalability, interoperability and privacy issues. This thesis is concerned with how TSOs can operate their networks in a decentralized way while coordinating with neighboring TSOs to find a cost-effective schedule that is globally secure. Because the main focus is the security of power system operation, the evolution of the main characteristics of blackouts, which are failures of power system security, is first studied over the period 2005-2016. The approach consists in determining the major characteristics of the incidents of the past 10 years, to identify what should be taken into account to mitigate the risk of such incidents recurring, and in comparing them with the characteristics of blackouts before 2005. The study focuses on the pre-conditions that led to those blackouts and on the cascades, especially the role of cascade speed; the important features extracted are integrated in the rest of the work. An algorithm that solves the preventive Security-Constrained Optimal Power Flow (SCOPF) problem in a fully distributed manner is then developed. The preventive SCOPF problem adds constraints ensuring that, after the loss of any major device of the system, the new steady state reached through primary frequency control does not violate any constraint. The algorithm uses a fine-grained decomposition and is implemented under the multi-agent system paradigm, based on two categories of agents: devices and buses. The agents are coordinated with the Alternating Direction Method of Multipliers (ADMM) in conjunction with a consensus problem. This decomposition provides the autonomy and privacy required by the different actors of the system, as well as good scalability with respect to the size of the problem, and the algorithm is robust to any disturbance of the system, including its separation into several regions. Then, to account for the production uncertainty introduced by wind farm forecast errors, a two-step distributed approach is developed to solve the Chance-Constrained Optimal Power Flow (CCOPF) problem, again in a fully distributed manner. The wind farm forecast errors are modeled by independent Gaussian distributions, and the deviations from the initial schedules are assumed to be compensated by the primary frequency response of the generators. The first step of the algorithm determines the sensitivity factors of the system needed to formulate the problem; its results are inputs to the second step, which solves the CCOPF itself. An extension of this formulation adds flexibility by allowing wind-farm curtailment. This algorithm relies on the same fine-grained decomposition, with the agents again coordinated by ADMM and a consensus problem. In conclusion, this two-step algorithm guarantees the privacy and autonomy of the different system actors, and it is inherently parallel and suited to high-performance computing platforms.
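The coordination mechanism named in the abstract, ADMM applied to a consensus problem, can be illustrated on a toy resource-agreement problem. The sketch below is not the SCOPF or CCOPF formulation of the thesis; it only shows the generic consensus-ADMM update pattern (local minimization, averaging, dual update) with hypothetical quadratic agent costs.

```python
import numpy as np

# Toy consensus problem: each "agent" i holds a private quadratic cost
# f_i(x) = 0.5 * a[i] * (x - b[i])**2 and all agents must agree on one x.
a = np.array([1.0, 2.0, 0.5])
b = np.array([4.0, -1.0, 2.0])
rho = 1.0                      # ADMM penalty parameter

x = np.zeros_like(a)           # local copies, one per agent
z = 0.0                        # consensus (global) variable
u = np.zeros_like(a)           # scaled dual variables

for _ in range(200):
    # Local step, solved independently by each agent (closed form here):
    # argmin_x f_i(x) + (rho/2) * (x - z + u_i)**2
    x = (a * b + rho * (z - u)) / (a + rho)
    # Consensus step: agents only exchange x_i + u_i, not their private costs.
    z = np.mean(x + u)
    # Dual step: penalize remaining disagreement.
    u = u + (x - z)

print(z)   # ~0.857, the a-weighted mean of b, i.e. sum(a*b)/sum(a)
```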
66

Relating Constrained Motion to Force Through Newton's Second Law

Roithmayr, Carlos 06 April 2007 (has links)
When a mechanical system is subject to constraints, its motion is in some way restricted. In accordance with Newton's second law, motion is a direct result of forces acting on a system; hence, constraint is inextricably linked to force. The presence of a constraint implies the application of particular forces needed to compel motion in accordance with the constraint; the absence of a constraint implies the absence of such forces. The objective of this thesis is to formulate a comprehensive, consistent, and concise method for identifying a set of forces needed to constrain the behavior of a mechanical system modeled as a set of particles and rigid bodies. The goal is accomplished in large part by expressing constraint equations in vector form rather than entirely in terms of scalars. The method developed here can be applied whenever constraints can be described at the acceleration level by a set of independent equations that are linear in acceleration. Hence, the range of applicability extends to servo-constraints or program constraints described at the velocity level with relationships that are nonlinear in velocity. All configuration constraints, and an important class of classical motion constraints, can be expressed at the velocity level by using equations that are linear in velocity; therefore, the associated constraint equations are linear in acceleration when written at the acceleration level. Two new approaches are presented for deriving equations governing the motion of a system subject to constraints expressed at the velocity level with equations that are nonlinear in velocity. By using partial accelerations instead of the partial velocities normally employed with Kane's method, it is possible to form dynamical equations that either do or do not contain evidence of the constraint forces, depending on the analyst's interests.
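The class of constraints considered, linear in the accelerations, and the forces they imply can be stated compactly. The display below is a generic Lagrange-multiplier form given for illustration; it is not the partial-acceleration formulation developed in the thesis:

$$ A(q,\dot q,t)\,\ddot q = b(q,\dot q,t), \qquad M(q)\,\ddot q = f(q,\dot q,t) + A^{T}\lambda, $$

where $f$ collects the applied and velocity-dependent generalized forces, the columns of $A^{T}$ give the directions in which the constraint forces $A^{T}\lambda$ act, and the multipliers $\lambda$ follow by substituting $\ddot q$ from the dynamical equations into the constraint equations.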
67

Recalage/Fusion d'images multimodales à l'aide de graphes d'ordres supérieurs / Registration/Fusion of multimodal images using higher order graphs

Fécamp, Vivien 12 January 2016 (has links)
The main objective of this thesis is the exploration of higher-order Markov Random Fields for image registration, and more specifically the encoding of knowledge about global transformations, such as rigid transformations, into the graph structure. Our main framework applies to 2D-2D or 3D-3D registration and uses a hierarchical grid-based Markov Random Field model in which the hidden variables are the displacement vectors of the control points of the grid. We first present the construction of a graph that performs linear registration, meaning that affine, rigid or similarity registration can be sought with the same graph while changing only one potential, which makes the framework flexible with respect to the sought transformation. The choice of the metric is likewise left to the user and does not modify the operation of the algorithm. Inference is performed with dual decomposition, which handles the higher-order hyperedges of the graph and guarantees that the exact minimum of the function is reached provided the slave subproblems agree. A similar graph is also used to perform 2D-3D registration. Second, we fuse this graph with another structure built to perform deformable registration. The resulting graph is more complex, and to obtain a result within reasonable time we use an optimization method called ADMM (Alternating Direction Method of Multipliers), an improvement of dual decomposition that speeds up its convergence. We can then solve affine and deformable registration simultaneously, which removes the potential bias introduced by the classical approach of registering affinely first and then deformably.
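As a rough illustration of the graph just described (the exact potentials of the thesis are not reproduced here), the energy minimized over the displacement labels $d_p$ of the grid control points has the form

$$ E(d) = \sum_{p} U_p(d_p) + \sum_{(p,q)} P_{pq}(d_p, d_q) + H(d_{p_1}, \ldots, d_{p_n}), $$

where $U_p$ is the image-matching (metric) term, $P_{pq}$ regularizes neighbouring control points, and the single higher-order potential $H$ is the hyperedge scoring how well the whole displacement field agrees with a global rigid, similarity or affine transformation; changing only $H$ changes the global transformation that is sought.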
68

Modeling, simulation and control of redundantly actuated parallel manipulators

Ganovski, Latchezar 04 December 2007 (has links)
Redundantly actuated manipulators have only recently aroused significant scientific interest. Their advantages in terms of enlarged workspace, higher payload ratio and better manipulability with respect to non-redundantly actuated systems explain the appearance of numerous applications in various fields: high-precision machining, fault-tolerant manipulators, transport and outer-space applications, surgical operation assistance, etc. The present Ph.D. research proposes a unified approach for modeling and actuation of redundantly actuated parallel manipulators. The approach takes advantage of the actuator redundancy principles and thus allows for following trajectories that contain parallel (force) singularities, and for eliminating the negative effect of the latter. As a first step of the approach, parallel manipulator kinematic and dynamic models are generated and treated in such a way that they do not suffer from kinematic loop closure numeric problems. Using symbolic models based on the multibody formalism and a Newton-Euler recursive computation scheme, faster-than-real-time computer simulations can thus be achieved. Further, an original piecewise actuation strategy is applied to the manipulators in order to eliminate singularity effects during their motion. Depending on the manipulator and the trajectories to be followed, this strategy results in non-redundant or redundant actuation solutions that satisfy actuator performance limits and additional optimality criteria. Finally, a validation of the theoretical results and the redundant actuation benefits is performed on the basis of well-known control algorithms applied on two parallel manipulators of different complexity. This is done both by means of computer simulations and experimental runs on a prototype designed at the Center for Research in Mechatronics of the UCL. The advantages of the actuator redundancy of parallel manipulators with respect to the elimination of singularity effects during motion and the actuator load optimization are thus confirmed (virtually and experimentally) and highlighted thanks to the proposed approach for modeling, simulation and control.
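A common way to state the benefit of actuator redundancy exploited here, given as a generic illustration rather than the thesis's piecewise actuation strategy: with more actuators than degrees of freedom, the torque distribution for a required generalized force $w$ is underdetermined,

$$ J^{T}\tau = w, \qquad \tau = (J^{T})^{+}\,w + N\,\eta, $$

where $(J^{T})^{+}$ is a pseudoinverse, the columns of $N$ span the null space of $J^{T}$, and the free parameter $\eta$ can be chosen to keep every actuator load within its limits; with enough actuators the combined map can remain full rank at configurations where a non-redundant actuation scheme would be force-singular.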
69

Relações hidroquímicas e avaliação de entradas antrópicas na qualidade das águas superficiais do Ribeirão Guaçu e afluentes, São Roque, SP / Hydrochemical relations and evaluation of anthropic inputs in the surface water quality of the Guaçu River and tributaries, São Roque, SP

Santos, Eddy Bruno dos 19 December 2018 (has links)
São Roque is located 60 km from São Paulo, in a region of mountain ranges and hills. The hydrography of the municipality is composed of tributary basins of the Tietê River. The city developed on the banks of the Aracaí and Carambeí streams, whose channelized beds flow until they discharge into the Guaçu River. These streams, as well as the Marmeleiro River, carry waste, debris and the full volume of stormwater to the Guaçu River. In 2017, a sewage treatment plant was installed in the municipality to improve sanitation conditions in the region. The objective of this work was to evaluate, in space and time, the hydrochemical relationships regarding trophic status and other anthropic impacts on the water quality of the Guaçu River, São Roque, SP, through an integrated approach based on environmental multi-tracers, situating the quality of the hydrographic microbasin before and after the installation of a sewage collection and treatment system. To evaluate the quality of the water bodies, bimonthly surface water sampling was carried out, covering the rainy and dry seasons, at seven strategically chosen and georeferenced sites. The analyses were performed following analytical methods based on the Standard Methods for the Examination of Water and Wastewater. Physical, chemical and microbiological parameters were analyzed, and the results were compared with the limits allowed by law. The WQI (water quality index) was used to obtain an overview of the water quality of the microbasin as a function of seasonality and to compare the periods before and after the treatment plant began operating. The Marmeleiro and Guaçu 4 sampling points showed the greatest anthropic influence. All analyzed points were compromised by microbiological contaminants, and several points showed nonconformities in the physical and chemical parameters.
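The abstract does not state which WQI formulation was applied; as a reference point, the multiplicative index commonly used for Brazilian surface waters (derived from the NSF index) combines nine sub-indices $q_i$ with fixed weights $w_i$:

$$ \mathrm{WQI} = \prod_{i=1}^{9} q_i^{\,w_i}, \qquad \sum_{i=1}^{9} w_i = 1, $$

where each $q_i \in [0,100]$ is read from a rating curve for one parameter (dissolved oxygen, thermotolerant coliforms, pH, BOD, temperature, nitrogen, phosphorus, turbidity and total solids) and the final score is banded into quality classes.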
70

Circuitos Multiplicadores Array de Baixo Consumo de Potência Aplicados a Filtros Adaptativos / Low-Power Array Multiplier Circuits for Adaptive Filters

Pieper, Leandro Zafalon 08 August 2008 (has links)
The main goal of this work is the implementation and analysis of new array multiplier architectures, recently presented to the scientific community, that incorporate different power-reduction techniques, such as the use of efficient adder circuits and the optimization of the dedicated multiplication structures that allow the multiplication to be performed in radix 2^m. The new multipliers operate in two's complement and keep the same regularity as a conventional array multiplier. The architectures operate in radix 2^m, where m is the number of bits multiplied at a time; in a conventional array multiplier, where the multiplication is performed bit by bit, m equals 1 (radix-2 operation). In this work, the new multiplier architectures operate in different radices, reducing the number of partial-product lines and thereby enabling higher performance and lower power consumption. The 16-, 32- and 64-bit multipliers were described in textual (gate-level) language, and the multipliers are compared in terms of area, delay and power consumption using the SIS environment (for area and delay results) and the SLS tool (for power-consumption estimation). As case studies, the proposed optimized multipliers were applied to digital filtering algorithms, namely finite impulse response (FIR) filters and a dedicated architecture for the LMS (least mean square) adaptive filtering algorithm.
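The LMS adaptive filter targeted as a case study updates its coefficients with one multiply-accumulate per tap per sample, which is why multiplier cost dominates such architectures. A minimal floating-point reference model of the LMS update is sketched below; it is illustrative only, since the thesis evaluates dedicated gate-level hardware rather than this software model.

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """Least-mean-squares adaptive FIR filter (reference model).

    x: input signal, d: desired signal, mu: step size.
    Returns the filter output y and the final coefficients w.
    """
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-M+1]
        y[n] = np.dot(w, u)                   # num_taps multiplications
        e = d[n] - y[n]                       # instantaneous error
        w = w + mu * e * u                    # num_taps more multiplications
    return y, w

# Toy system identification: the filter should learn the coefficients of h.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
_, w = lms(x, d, num_taps=3, mu=0.01)
print(w)   # expected to approach [0.5, -0.3, 0.1]
```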
