181 |
Absolute position measurement for automated guided vehicles using the Greedy DeBruijn Sequence / Ortiz, John E. (09 1900)
Automated Guided Vehicles (AGVs) use different techniques to help locate their position with respect to a point of origin. This thesis compares two approaches that utilize a binary track laid on the floor for the AGV to follow. Both approaches use equally spaced n-tuples on the track that the AGV can use to compute its position. Both approaches also have the special feature that every n-tuple on the binary track is unique and can be used to designate the position of an AGV. The first approach, developed by E.M. Petriu, uses a Pseudo-Random Binary Sequence (PRBS) as a model for the binary track. In the second approach, we use a Greedy DeBruijn Sequence (GDBS) as a model for the binary track. Unlike the PRBS model, the GDBS model has a natural ordering which can be used to determine the position of the AGV more quickly and efficiently than the PRBS model.
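The key property above, that every n-tuple on the track is unique, can be sketched in a few lines. As an illustrative assumption, the greedy construction is taken to be the classical prefer-one rule (start from n zeros, append a 1 whenever the new length-n window is unseen, otherwise a 0); the position table below stands in for the thesis's natural-ordering decoder, which is not reproduced here.

```python
def greedy_debruijn(n):
    """Prefer-one greedy De Bruijn construction: start from n zeros and
    append a 1 whenever the new length-n window is unseen, else a 0."""
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        for bit in (1, 0):                       # greedily prefer a 1
            window = tuple(seq[len(seq) - n + 1:] + [bit])
            if window not in seen:
                seen.add(window)
                seq.append(bit)
                break
        else:
            return seq   # no extension possible: all 2**n windows used

def position_table(seq, n):
    """Map each n-tuple read off the track to its unique start position."""
    return {tuple(seq[i:i + n]): i for i in range(len(seq) - n + 1)}

track = greedy_debruijn(4)        # linear track of length 2**4 + 4 - 1 = 19
table = position_table(track, 4)  # every 4-bit window appears exactly once
```

An AGV reading any 4 consecutive bits can then look up its absolute position in one step; the thesis's contribution is doing this directly from the GDBS ordering rather than via a stored table.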
|
182 |
Convolution based real-time control strategy for vehicle active suspension systems / Saud, Moudar (January 2009)
A novel real-time control method that minimises the vibrations of a linear system subjected to an arbitrary external excitation is proposed in this study. The work deals with a discrete differential dynamic programming type of problem, in which an external disturbance is controlled over a time horizon by a control force strategy built on the well-known convolution approach. The proposed method states that if a control strategy can be established to recover the system from an impulse external disturbance, then the convolution concept can be used to generate an overall control strategy for the system response when it is subjected to an arbitrary external disturbance. The arbitrary disturbance is divided into impulses, and by simply scaling, shifting and summing the control strategy obtained for the impulse input, once for each impulse of the arbitrary disturbance, the overall control strategy is established. A Genetic Algorithm was adopted to obtain an optimal control force plan to suppress the system vibrations under a shock disturbance, and the convolution concept was then used to enable the system response to be controlled in real time using the obtained control strategy. Numerical tests were carried out on a two-degree-of-freedom quarter-vehicle active suspension model and the results were compared with results generated using the Linear Quadratic Regulator (LQR) method. The method was also applied to control the vibration of a seven-degree-of-freedom full-vehicle active suspension model. In addition, the effect of a time delay on the performance of the proposed approach was studied. To demonstrate the applicability of the proposed method in real-time control, experimental tests were performed on a quarter-vehicle test rig equipped with a pneumatic active suspension. Numerical and experimental results showed the effectiveness of the proposed method in reducing the vehicle vibrations.
One of the main contributions of this work, besides using the convolution concept to provide a real-time control strategy, is the reduction in the number of sensors needed: the disturbance amplitude is the only parameter that must be measured. Finally, having achieved the above, a generic robust control method is accomplished, one that can be applied not only to active suspension systems but also in many other fields.
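The scale-shift-sum step described above is exactly a discrete convolution. A minimal sketch, assuming `impulse_control` is a pre-computed control sequence for a unit impulse (the controller dynamics and GA tuning from the thesis are not reproduced here):

```python
def convolve_control(disturbance, impulse_control):
    """Overall control for an arbitrary disturbance on a linear system,
    built by scaling, shifting and summing the control sequence tuned
    for a unit impulse (i.e. discrete convolution)."""
    T = len(disturbance) + len(impulse_control) - 1
    u = [0.0] * T
    for k, d in enumerate(disturbance):     # each sample is a scaled impulse
        for j, g in enumerate(impulse_control):
            u[k + j] += d * g               # shift by k, scale by d, sum
    return u

# A disturbance of two impulses (amplitudes 1 and 2, two steps apart)
# yields two scaled, shifted copies of the impulse strategy, summed:
u = convolve_control([1.0, 0.0, 2.0], [0.5, -0.25])
```

Because only the disturbance amplitudes enter the computation, this is consistent with the single-measurement claim above.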
|
183 |
Heuristic approaches to solve the frequency assignment problem / Whitford, Angela Tracy (January 1999)
No description available.
|
184 |
Complexity aspects of certain graphical parameters (07 October 2015)
M.Sc. (Mathematics) / Please refer to the full text to view the abstract.
|
185 |
On the security of pairing implementations / Lashermes, Ronan (29 September 2014)
Pairings are cryptographic algorithms that enable new protocols for public-key cryptography. After a decade of research which led to a dramatic improvement in the computation speed of pairings, we focused on the security of pairing implementations. For that purpose, we evaluated their resistance to fault attacks: we sent electromagnetic pulses into the chip computing a pairing at precisely chosen instants, which allowed us to recover the cryptographic secret that the computation is supposed to protect. Our study was both theoretical and practical, including the implementation of actual fault attacks. Finally, we proposed countermeasures to protect the algorithm in the future.
|
186 |
A fully three-dimensional heuristic algorithm for container packing / Aspoas, Sean Graham (January 2016)
A research report submitted to the Faculty of Science, in partial fulfilment of the requirements for the degree of Master of Science, University of the Witwatersrand, Johannesburg, 1996. Degree awarded with distinction on 4 December 1996. / We present a new three-dimensional container-packing algorithm. The algorithm is truly three-dimensional, thus overcoming the limitations of layering algorithms, especially when a large number of parcel types is used. The algorithm sequentially places parcels into the container using localised heuristic information, and makes use of a balanced tree to store potential packing positions. The result is an algorithm with time complexity O(kn log n), where k is the number of parcel types and n the maximum number of parcels that can be placed. Test results, including a comparative test, are very favourable, and show that the algorithm's performance actually increases as the number of parcel types is increased. This is a direct result of the three-dimensional algorithm facilitating the utilisation of all useful packing positions using the variety of parcel sizes available.
|
187 |
Unsupervised asset cluster analysis implemented with parallel genetic algorithms on the NVIDIA CUDA platform / Cieslakiewicz, Dariusz (01 July 2014)
During times of stock market turbulence and crises, monitoring the clustering behaviour of financial instruments allows one to better understand the behaviour of the stock market and the associated systemic risks. In the study undertaken, I apply an effective and performant approach to classifying data clusters in order to better understand correlations between stocks. The novel methods aim to address the lack of effective algorithms for high-performance cluster analysis in the context of large, complex, real-time, low-latency datasets. I apply an efficient and novel data clustering approach, namely the Giada and Marsili log-likelihood function derived from the Noh model, and use a Parallel Genetic Algorithm to isolate residual data clusters. Genetic Algorithms (GAs) are a very versatile methodology for scientific computing, and the application of Parallel Genetic Algorithms (PGAs) further increases computational efficiency; they are an effective vehicle for mining data sets for information and traits. However, the traditional parallel computing environment can be expensive. I therefore adopted NVIDIA's Compute Unified Device Architecture (CUDA) programming model to develop a PGA framework for my computational solution, in which I aim to efficiently filter out residual clusters. The results show that applying the PGA with the novel clustering function on the CUDA platform is effective in improving the computational efficiency of parallel data cluster analysis.
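The thesis pairs the Giada-Marsili likelihood with a CUDA-parallel GA; as a serial, purely illustrative stand-in, a minimal generational GA over bit strings (tournament selection, one-point crossover, per-bit mutation) might look like the following. The toy fitness here is just the number of ones, not the clustering likelihood.

```python
import random

def genetic_search(fitness, length, pop_size=40, generations=60,
                   p_mut=0.02, seed=1):
    """Minimal generational GA over fixed-length bit strings.
    Serial sketch only: the thesis evolves cluster assignments and
    evaluates the Giada-Marsili likelihood in parallel on the GPU."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                      # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_search(sum, 20)   # toy objective: maximise number of ones
```

On the GPU, each individual's fitness evaluation is an independent kernel invocation, which is where the PGA's speedup comes from.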
|
188 |
An online adaptive learning algorithm for optimal trade execution in high-frequency markets / Hendricks, Dieter (January 2016)
A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy
in the Faculty of Science, School of Computer Science and Applied Mathematics
University of the Witwatersrand. October 2016. / Automated algorithmic trade execution is a central problem in modern financial markets; however, finding and navigating optimal trajectories in this system is a non-trivial task. Many authors have developed exact analytical solutions by making simplifying assumptions regarding the governing dynamics; however, for practical feasibility and robustness, a more dynamic approach is needed to capture the spatial and temporal complexity of the system and adapt as intraday regimes change.
This thesis aims to consolidate four key ideas: 1) the financial market as a complex
adaptive system, where purposeful agents with varying system visibility collectively and
simultaneously create and perceive their environment as they interact with it; 2) spin
glass models as a tractable formalism to model phenomena in this complex system; 3) the
multivariate Hawkes process as a candidate governing process for limit order book events;
and 4) reinforcement learning as a framework for online, adaptive learning. Combined
with the data and computational challenges of developing an efficient, machine-scale
trading algorithm, we present a feasible scheme which systematically encodes these ideas.
We first determine the efficacy of the proposed learning framework, under the conjecture
of approximate Markovian dynamics in the equity market. We find that a simple lookup
table Q-learning algorithm, with discrete state attributes and discrete actions, is able
to improve post-trade implementation shortfall by adapting a typical static arrival-price
volume trajectory with respect to prevailing market microstructure features streaming
from the limit order book.
To enumerate a scale-specific state space whilst avoiding the curse of dimensionality, we
propose a novel approach to detect the intraday temporal financial market state at each
decision point in the Q-learning algorithm, inspired by the complex adaptive system
paradigm. A physical analogy to the ferromagnetic Potts model at thermal equilibrium
is used to develop a high-speed maximum likelihood clustering algorithm, appropriate
for measuring critical or near-critical temporal states in the financial system. State
features are studied to extract time-scale-specific state signature vectors, which serve as
low-dimensional state descriptors and enable online state detection.
To assess the impact of agent interactions on the system, a multivariate Hawkes process is
used to measure the resiliency of the limit order book with respect to liquidity-demand
events of varying size. By studying the branching ratios associated with key quote
replenishment intensities following trades, we ensure that the limit order book is expected
to be resilient with respect to the maximum permissible trade executed by the agent.
Finally we present a feasible scheme for unsupervised state discovery, state detection and online learning for high-frequency quantitative trading agents faced with a multi-featured, asynchronous market data feed. We provide a technique for enumerating the state space at the scale at which the agent interacts with the system, incorporating the effects of a live trading agent on limit order book dynamics into the market data feed, and hence the perceived state evolution.
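The lookup-table Q-learning update used in the first experiment is compact. A minimal sketch on a toy three-state chain follows; the states and actions are hypothetical stand-ins (the thesis's states are market microstructure features and its actions are child-order volume adjustments, neither of which is reproduced here).

```python
from collections import defaultdict

def tabular_q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One lookup-table Q-learning step: move Q(s, a) toward the observed
    reward plus the discounted value of the best action in the next state."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Toy three-state chain: action 1 advances the state; reaching state 2 pays 1.
Q = defaultdict(float)
actions = [0, 1]
transitions = {(0, 1): (1, 0.0),   # (state, action) -> (next state, reward)
               (1, 1): (2, 1.0)}
for _ in range(200):
    for (s, a), (s2, r) in transitions.items():
        tabular_q_update(Q, s, a, r, s2, actions)
```

After repeated sweeps, value propagates backward along the chain: the state adjacent to the reward is valued most, and earlier states inherit a discounted share, which is the mechanism that lets the agent adapt its volume trajectory online.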
|
189 |
Algorithms in Elliptic Curve Cryptography (Unknown Date)
Elliptic curves have played a large role in modern cryptography. Most notably, the Elliptic Curve Digital Signature Algorithm (ECDSA) and the Elliptic Curve Diffie-Hellman (ECDH) key exchange algorithm are widely used in practice today for their efficiency and small key sizes. More recently, the Supersingular Isogeny-based Diffie-Hellman (SIDH) algorithm provides a method of exchanging keys which is conjectured to be secure in the post-quantum setting. For ECDSA and ECDH, efficient and secure algorithms for scalar multiplication of points are necessary for modern use of these protocols. Likewise, in SIDH it is necessary to be able to compute an isogeny from a given finite subgroup of an elliptic curve in a fast and secure fashion.
We therefore find strong motivation to study and improve the algorithms used in elliptic curve cryptography, and to develop new algorithms to be deployed within these protocols. In this thesis we design and develop d-MUL, a multidimensional scalar multiplication algorithm which is uniform in its operations and generalizes the well-known 1-dimensional Montgomery ladder addition chain and the 2-dimensional addition chain due to Daniel J. Bernstein. We analyze the construction and derive many optimizations, implement the algorithm in software, and prove many theoretical and practical results. In the final chapter of the thesis we analyze the operations carried out in the construction of an isogeny from a given subgroup, as performed in SIDH. We detail how to efficiently make use of parallel processing when constructing this isogeny. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
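The 1-dimensional Montgomery ladder that d-MUL generalizes performs exactly one addition and one doubling per scalar bit, regardless of the bit's value, which is the uniformity property referenced above. A group-agnostic sketch, using integers mod p as a stand-in group (a real implementation would use elliptic curve point arithmetic, not shown here):

```python
def montgomery_ladder(k, P, add, zero):
    """Uniform 1-D Montgomery ladder computing k*P. Each bit of k costs
    exactly one 'add' and one 'double' (add with itself), independent of
    the bit's value, and the invariant R1 = R0 + P holds throughout."""
    R0, R1 = zero, P
    for bit in bin(k)[2:]:                 # most-significant bit first
        if bit == '1':
            R0, R1 = add(R0, R1), add(R1, R1)
        else:
            R0, R1 = add(R0, R0), add(R0, R1)
    return R0

# Stand-in group: integers under addition mod 97.
mod_add = lambda a, b: (a + b) % 97
result = montgomery_ladder(13, 5, mod_add, 0)   # 13 * 5 mod 97
```

The fixed operation pattern is what makes ladder-style chains attractive against simple side-channel analysis; d-MUL extends the same regularity to d simultaneous scalars.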
|
190 |
A novel approach for the hardware implementation of a PPMC statistical data compressor / Feregrino Uribe, Claudia (January 2001)
This thesis aims to understand how to design high-performance compression algorithms suitable for hardware implementation and to provide hardware support for an efficient compression algorithm. Lossless data compression techniques have been developed to exploit the available bandwidth of applications in data communications and computer systems by reducing the amount of data they transmit or store. As the amount of data to handle is ever increasing, traditional methods for compressing data become insufficient. To overcome this problem, more powerful methods have been developed. Among those are the so-called statistical data compression methods, which compress data based on their statistics. However, their high complexity and space requirements have prevented their hardware implementation and the full exploitation of their potential benefits. This thesis looks into the feasibility of a hardware implementation of one of these statistical data compression methods by exploring the potential for reorganising and restructuring the method for hardware, and by investigating ways of achieving an efficient and cost-effective design.
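Statistical compressors of the PPM family predict each symbol from counts gathered in preceding contexts, escaping to a lower-order model when a symbol is novel in its context. A toy order-1 sketch of the statistics-gathering step follows; real PPMC blends several context orders, uses its own escape-probability estimation, and drives an arithmetic coder, none of which is reproduced here.

```python
from collections import defaultdict, Counter

class Order1Model:
    """Tiny order-1 context model in the spirit of PPM: predict the next
    symbol from counts conditioned on the previous symbol; on a symbol
    never seen in this context, fall back to a uniform distribution
    (a simplified stand-in for PPM's escape mechanism)."""
    def __init__(self, alphabet):
        self.alphabet = alphabet
        self.counts = defaultdict(Counter)   # prev symbol -> next-symbol counts

    def update(self, data):
        for prev, cur in zip(data, data[1:]):
            self.counts[prev][cur] += 1

    def prob(self, prev, sym):
        ctx = self.counts[prev]
        total = sum(ctx.values())
        if total == 0 or sym not in ctx:
            return 1.0 / len(self.alphabet)  # escape: uniform fallback
        return ctx[sym] / total

m = Order1Model("ab")
m.update("abababab")    # after 'a' we always saw 'b', and vice versa
```

High predicted probabilities translate directly into short arithmetic-coding intervals, which is why accurate context statistics drive compression ratio; the hardware challenge the thesis addresses is maintaining these context tables at line rate.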
|