41 |
Efficient Authentication, Node Clone Detection, and Secure Data Aggregation for Sensor Networks. Li, Zhijun, January 2010.
Sensor networks are innovative wireless networks consisting of a large number of low-cost, resource-constrained sensor nodes that collect, process, and transmit data in a distributed and collaborative way. There are numerous applications for wireless sensor networks, and security is vital for many of them. However, sensor nodes suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture, and the lack of infrastructure, all of which impose formidable security challenges and call for innovative approaches. In this thesis, we present our research results on three important aspects of securing sensor networks: lightweight entity authentication, distributed node clone detection, and secure data aggregation.
As the technical core of our lightweight authentication proposals, a special type of circulant matrix named the circulant-P2 matrix is introduced. We prove the linear independence of its matrix vectors, present efficient algorithms for matrix operations, and explore other important properties. By combining the circulant-P2 matrix with the learning parity with noise (LPN) problem, we develop two one-way authentication protocols: the LCMQ protocol, which is provably secure against all probabilistic polynomial-time attacks and performs remarkably well on almost all metrics, apart from a mild requirement on the verifier's computational capacity; and the HB$^C$ protocol, which uses the conventional HB-like authentication structure to preserve the bit-operation-only computation requirement for both participants and requires less key storage than previous HB-like protocols without sacrificing other performance. Moreover, two enhancement mechanisms are provided to protect HB-like protocols from known attacks and to improve performance. For both protocols, practical parameters for different security levels are recommended. In addition, we build a framework to extend enhanced HB-like protocols to mutual authentication in a communication-efficient fashion.
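The LCMQ and HB$^C$ constructions themselves are not reproduced in this abstract. Purely as orientation, the sketch below illustrates the generic LPN-based, HB-style challenge-response round such protocols build on: the prover holds a shared binary key, answers random challenges with a noisy inner product over GF(2), and the verifier accepts when the mismatch rate stays close to the noise rate. The key length, noise rate, round count, and acceptance threshold here are illustrative assumptions.

```python
import secrets

def hb_round(secret, noise_rate):
    """One HB-style round: random verifier challenge, prover's noisy inner product over GF(2)."""
    challenge = [secrets.randbelow(2) for _ in range(len(secret))]
    inner = sum(a & x for a, x in zip(challenge, secret)) & 1
    noise = 1 if secrets.randbelow(10_000) < int(noise_rate * 10_000) else 0
    return challenge, inner ^ noise

def verify(secret, transcripts, noise_rate):
    """Accept when the number of mismatching rounds stays near the noise rate."""
    mismatches = sum(
        ((sum(a & x for a, x in zip(ch, secret)) & 1) != z) for ch, z in transcripts
    )
    threshold = 1.5 * noise_rate * len(transcripts)   # illustrative acceptance bound
    return mismatches <= threshold

# Toy parameters: 64-bit key, 12.5% noise, 256 rounds (all illustrative).
key = [secrets.randbelow(2) for _ in range(64)]
transcripts = [hb_round(key, 0.125) for _ in range(256)]
print(verify(key, transcripts, 0.125))
```

In an HB-like setting both participants only need bitwise AND and XOR operations per round, which is what makes this structure attractive for resource-constrained nodes.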
The node clone attack, that is, an adversary adding one or more nodes to the network by cloning captured nodes, poses a severe threat to wireless sensor networks. To cope with it, we propose two distributed detection protocols with different tradeoffs between network conditions and performance. The first is based on a distributed hash table, by which a fully decentralized, key-based caching and checking system is constructed to deterministically catch cloned nodes in general sensor networks. The protocol's efficient storage consumption and high security level are derived theoretically from a probability model, and the resulting equations, with necessary adjustments for real applications, are supported by simulations. The other is the randomly directed exploration protocol, which achieves notable communication performance and minimal storage consumption through a probabilistic directed forwarding technique combined with random initial directions and border determination. Extensive experimental results support the protocol design and show its efficiency in communication overhead and a satisfactory detection probability.
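The thesis' DHT construction, caching policy, and probability model are not reproduced here. The sketch below, with hypothetical names and structures, only illustrates the key-based witness idea behind such detection: location claims for a given node ID always hash to the same small set of witnesses, so a witness that sees the same ID claimed at two different locations has deterministically caught a clone.

```python
import hashlib

def witness_ids(node_id, num_nodes, replicas=3):
    """Map a claimed node ID to a small set of witness nodes (key-based routing)."""
    return [
        int(hashlib.sha256(f"{node_id}:{r}".encode()).hexdigest(), 16) % num_nodes
        for r in range(replicas)
    ]

class Witness:
    """Caches location claims and flags conflicting claims for the same ID."""
    def __init__(self):
        self.claims = {}  # node_id -> claimed location

    def receive_claim(self, node_id, location):
        seen = self.claims.get(node_id)
        if seen is not None and seen != location:
            return f"clone detected: {node_id} claimed at {seen} and {location}"
        self.claims[node_id] = location
        return None

# Toy usage: the same ID claimed from two locations reaches the same witnesses.
witnesses = {w: Witness() for w in range(100)}
for location in [(2, 3), (40, 41)]:
    for w in witness_ids("node-17", 100):
        alert = witnesses[w].receive_claim("node-17", location)
        if alert:
            print(alert)
```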
Data aggregation is an inherent requirement of many sensor network applications, but designing secure mechanisms for data aggregation is very challenging because the nature of aggregation, which requires intermediate nodes to process and modify messages, largely conflicts with the security objective of preventing malicious manipulation. To address different challenges of secure data aggregation, we present two types of approaches. The first provides cryptographic integrity mechanisms for general data aggregation. Building on recent developments in homomorphic primitives, we propose three integrity schemes: a concrete homomorphic MAC construction, homomorphic hashing plus aggregate MAC, and homomorphic hashing with identity-based aggregate signatures, which offer different tradeoffs between security assumptions, communication payload, and computation cost. The second is a complete data aggregation scheme, suited to a specific and popular class of aggregation applications, with built-in security techniques that effectively defeat outside and inside attacks. Its foundation is a new data structure, the secure Bloom filter, which combines HMAC with a Bloom filter. The secure Bloom filter is naturally compatible with aggregation and has reliable security properties. We systematically analyze the scheme's performance and run extensive simulations over different network scenarios for evaluation. The simulation results demonstrate that the scheme performs well in terms of security, communication cost, and balance.
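The thesis' exact construction is not given in this abstract; the following minimal sketch, with illustrative parameters and a hypothetical class name, only shows the general idea of a keyed ("secure") Bloom filter: bit positions are derived from an HMAC under a shared key rather than from public hash functions, and in-network aggregation reduces to a bitwise OR of filters, which is what makes the structure naturally compatible with aggregation.

```python
import hmac
import hashlib

class SecureBloomFilter:
    """Bloom filter whose bit positions come from HMAC under a shared key,
    so nodes without the key cannot forge or target insertions."""
    def __init__(self, key: bytes, m: int = 256, k: int = 4):
        self.key, self.m, self.k, self.bits = key, m, k, 0

    def _positions(self, item: str):
        for i in range(self.k):
            digest = hmac.new(self.key, f"{i}:{item}".encode(), hashlib.sha256).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: str):
        return all((self.bits >> p) & 1 for p in self._positions(item))

    def merge(self, other: "SecureBloomFilter"):
        self.bits |= other.bits  # in-network aggregation is a bitwise OR

# Toy usage with an assumed shared key.
a, b = SecureBloomFilter(b"shared-key"), SecureBloomFilter(b"shared-key")
a.add("reading-12")
b.add("reading-99")
a.merge(b)
print("reading-99" in a, "reading-7" in a)
```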
|
42 |
The Annotative Practices of Graduate Students: Tensions & Negotiations Fostering an Epistemic Practice. Belanger, Marie-Eve, 14 December 2010.
This research explores the annotation and note-taking practices of graduate students and reports on the sets of activities, habits, objects, tools, and methods that define the practice. In particular, this empirical study focuses on understanding how annotation practices are integrated into larger scholarly processes. The study therefore aims to describe and analyze annotations not only as material externalities of the research process, but also as crucial epistemic practices allowing students to progress from one research activity to the next. Interviews are supplemented by document collection and analyzed using a multi-perspectival framework. The findings describe an annotation lifecycle and suggest a new model of the scholarly process that uses annotation practices as units of analysis. The study further discusses annotation as a primitive epistemic practice and examines the productive tensions fostering the student's progress towards her goals. Finally, this research proposes requirements for future tools supporting scholarly practice.
|
43 |
Consolidation of terrestrial laser scans of built interiors: a probabilistic approach initialized by geolocation [Consolidation de relevés laser d'intérieurs construits : pour une approche probabiliste initialisée par géolocalisation]. Hullo, Jean-Francois, 10 January 2013.
Maintenance operations in industrial facilities are now prepared with study, modelling, and simulation tools that exploit 3D virtual models of the installations. These three-dimensional models are built from point clouds measured from several viewpoints by horizontal and vertical angular sweeps of a laser beam with a terrestrial laser scanner. Expressing all of the acquired data in a common frame is called consolidation (registration), during which the transformation parameters between stations are computed. The goal of this thesis is to improve the laser data acquisition method in industrial environments: it must ultimately guarantee the required precision and accuracy of the data while optimizing on-site acquisition time and protocols, freeing the operator from a number of constraints inherent to conventional topographic surveying. We first review the state of the art of the means and methods used to acquire dense point clouds of complex indoor scenes (Part I). Second, we study and evaluate the data usable for consolidation: terrestrial laser data, primitive reconstruction algorithms, and indoor geolocation systems (Part II). In a third part, we formalize and experiment with a registration algorithm based on matched primitives reconstructed in the point clouds (Part III). Finally, we propose a probabilistic approach to primitive matching that integrates prior information and uncertainties into the system of constraints used to compute the poses (Part IV).
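The probabilistic matching and the constraint system described in the thesis are not reproduced here. As a minimal sketch of the pose-computation step only, the code below aligns two stations from already-matched primitive anchor points (for example reconstructed cylinder-axis or plane-intersection points) with a standard least-squares rigid transform (Kabsch/SVD); all point values are illustrative.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points
    (Kabsch/SVD); src and dst are (N, 3) arrays of matched primitive anchors."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep a proper rotation
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: anchors of primitives matched between two scanner stations.
station_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.5], [1.0, 2.0, 1.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
station_b = station_a @ R_true.T + np.array([4.0, -1.0, 0.2])
R, t = rigid_transform(station_a, station_b)
print(np.allclose(station_a @ R.T + t, station_b))
```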
|
46 |
Hybrid Frustum Culling Using CPU and GPU. Eduardo Telles Carlos, 15 September 2017.
The definition of visibility is a classical problem in computer graphics. Several algorithms have been developed to enable the visualization of huge and complex models. Among them, frustum culling plays an important role: it removes objects that are not visible to the observer. Although very common in applications, this algorithm has been improved over the years to further accelerate its execution. Even though it is treated as a well-solved problem in computer graphics, some points can still be enhanced and new forms of culling devised. For massive models in particular, high-performance algorithms are required, since the amount of computation grows considerably. This work evaluates the frustum culling algorithm and its optimizations, aiming to obtain the best possible CPU implementation, and analyses the influence of each of its steps on massive models. Based on this analysis, new GPU (Graphics Processing Unit) based frustum culling techniques are developed and compared with the CPU-only results. As a result, a hybrid frustum culling approach is proposed that attempts to exploit the best of both the CPU and the GPU.
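The thesis' CPU optimizations and GPU variants are not reproduced in this abstract. Purely as orientation, the sketch below shows the standard CPU-side test at the heart of frustum culling, assuming the six frustum planes have already been extracted (for example from the view-projection matrix) with inward-pointing normals; the scene, plane values, and function names are illustrative.

```python
import numpy as np

def aabb_outside_plane(box_min, box_max, plane):
    """plane = (a, b, c, d) with the inside half-space defined by ax + by + cz + d >= 0.
    Tests the box's positive vertex (p-vertex) against the plane."""
    normal, d = plane[:3], plane[3]
    p_vertex = np.where(normal >= 0.0, box_max, box_min)
    return float(normal @ p_vertex) + d < 0.0

def frustum_cull(boxes, planes):
    """Keep every box that is not entirely outside any of the frustum planes."""
    return [
        (bmin, bmax) for bmin, bmax in boxes
        if not any(aabb_outside_plane(bmin, bmax, p) for p in planes)
    ]

# Toy frustum: the cube [-1, 1]^3 described by six inward-facing planes.
planes = [np.array(p, dtype=float) for p in
          [(1, 0, 0, 1), (-1, 0, 0, 1), (0, 1, 0, 1),
           (0, -1, 0, 1), (0, 0, 1, 1), (0, 0, -1, 1)]]
boxes = [(np.array([-0.5, -0.5, -0.5]), np.array([0.5, 0.5, 0.5])),   # inside, kept
         (np.array([5.0, 0.0, 0.0]), np.array([6.0, 1.0, 1.0]))]      # outside, culled
print(len(frustum_cull(boxes, planes)))   # -> 1
```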
|
47 |
Analysis of forced convection in two-dimensional laminar flow in rectangular ducts via GITT with primitive variables. Fernandes, Thiago Andrade, 28 September 2012.
Supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). In the present study, forced convection in two-dimensional, simultaneously developing laminar flow in ducts of rectangular geometry is analyzed using the Navier-Stokes, Poisson, and energy equations in their primitive-variable form. These equations are solved with the Generalized Integral Transform Technique (GITT), a hybrid analytical-numerical approach. The work first restructures the problem so that it can be solved in Fortran with the DBVPFD subroutine from the IMSL library. Results of practical interest are then obtained, such as the velocity field, the temperature profile, the behavior of the mean temperature, the Nusselt number, and the friction factor, with the goal of providing parameters for better equipment sizing. These parameters are evaluated both longitudinally and horizontally. Finally, the results are validated through convergence tables and comparisons with articles from the specialized literature.
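The specific eigenvalue problems and transformed equations used in the thesis are not given in this abstract. Purely as orientation, and with all symbols illustrative, a generic GITT transform-inverse pair for a potential $T(x,y)$ expanded in normalized eigenfunctions of an auxiliary problem (domain normalized to $y \in [0,1]$) has the form:

```latex
\bar{T}_i(x) = \int_{0}^{1} \tilde{\psi}_i(y)\, T(x,y)\,\mathrm{d}y ,
\qquad
T(x,y) = \sum_{i=1}^{\infty} \tilde{\psi}_i(y)\, \bar{T}_i(x),
\qquad
\tilde{\psi}_i(y) = \frac{\psi_i(y)}{\sqrt{N_i}},
\quad
N_i = \int_{0}^{1} \psi_i^{2}(y)\,\mathrm{d}y .
```

Substituting the inverse into the governing equations and applying the transform turns the partial differential problem into a coupled system of ordinary differential equations in $x$, which is truncated and integrated numerically, here via the IMSL routine DBVPFD mentioned above.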
|
48 |
Security primitives for ultra-low power sensor nodes in wireless sensor networks. Huang, An-Lun, 5 May 2008.
The concept of a wireless sensor network (WSN) is that tiny devices (sensor nodes), positioned fairly close to each other, sense and gather data from their environment and exchange information over wireless connections between the nodes (e.g., sensor nodes distributed throughout a bridge to continuously monitor its mechanical stress level). To make deploying a relatively large number of sensor nodes easy, the nodes are typically designed for low price and small size, which leaves them with very limited resources (e.g., energy and processing power).

Over the years, different security (cryptographic) primitives have been proposed and refined with modern processors in mind, e.g., 32-bit or 64-bit operation and architectural extensions such as MMX (MultiMedia eXtension). In other words, software implementations of security primitives have targeted high-end systems (e.g., desktops or servers). Some hardware-oriented security primitives have also been proposed, but most of them are designed for high-speed hashing of large messages, with no consideration of power consumption or other resources (such as memory space). As a result, security mechanisms for ultra-low power (<500 µW) devices such as wireless sensor nodes must be carefully selected or designed with their limited resources in mind.

The objective of this project is to provide implementations of security primitives (i.e., encryption and authentication) suitable for the WSN environment, where resources are extremely limited. The goal is to provide an efficient building block on which the design of WSN secure routing protocols can be based, relieving protocol designers from having to design everything from scratch. The project makes three main contributions to the WSN field. First, it analyses the tradeoffs between cryptographic security strength and performance, and from this provides security primitives suited to the needs of a WSN environment; these primitives form the link-layer security and act as building blocks for higher-layer protocols, i.e., secure routing protocols. Second, it implements and optimizes several security primitives on a low-power microcontroller (TI MSP430F1232) with very limited resources (256 bytes of RAM, 8 KB of flash program memory). The primitives are compared by the number of CPU cycles required per byte processed, the specific architectural features required (e.g., a multiplier or large bit shifts), and the resources (RAM, ROM/flash) required; these comparisons help evaluate their energy consumption and thus their applicability to wireless sensor nodes. Third, beyond the primitives themselves, various security protocols designed for WSNs were studied in order to optimize the primitives for current protocol-design trends, and a new link-layer security protocol using the optimized primitives is proposed that improves on existing link-layer security protocols. Finally, security primitives providing confidentiality and authenticity are implemented on the TinyMote sensor nodes from the Technical University of Vienna in a wireless sensor network, to demonstrate the practicality of the designs of this thesis in a real-world WSN environment.
This research achieves ultra-low power security primitives for wireless sensor networks, with average power consumption below 3.5 µW (at a 2-second packet transmission interval) and 700 nW (at a 5-second packet transmission interval). The proposed link-layer security protocol also improves on existing protocols in both security and power consumption. Dissertation (MEng (Computer Engineering)), University of Pretoria, 2008. Electrical, Electronic and Computer Engineering.
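The cycle counts, clock frequency, supply current, and packet sizes below are illustrative assumptions rather than the thesis' measured values; the sketch only shows how a cycles-per-byte figure for a primitive translates into an average power budget at a given packet interval, which is the kind of comparison used above to rank primitives for ultra-low power nodes.

```python
def avg_power_uw(cycles_per_byte, payload_bytes, interval_s,
                 f_hz=1_000_000, active_current_a=300e-6, vcc=3.0):
    """Average power (in microwatts) spent on a cryptographic primitive, given its
    cost in CPU cycles per byte and the packet transmission interval.
    All default parameters are illustrative; substitute measured MCU figures."""
    active_time = cycles_per_byte * payload_bytes / f_hz          # seconds of CPU work
    energy_j = active_time * active_current_a * vcc               # E = t * I * V
    return energy_j / interval_s * 1e6                            # average power in µW

# Toy comparison of two hypothetical primitives on a 24-byte payload.
for name, cpb in [("primitive-A", 60), ("primitive-B", 320)]:
    print(name, round(avg_power_uw(cpb, 24, 2.0), 3), "µW at a 2 s interval")
```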
|
49 |
Supervised action segmentation based on high-level features in video streams. Chan-Hon-Tong, Adrien, 29 September 2014.
This thesis addresses the supervised segmentation of video streams in the application context of daily action recognition. The proposed segmentation algorithm is derived from the Implicit Shape Model by optimising the votes used in this voting method. We show that, within a sliding temporal window, this optimisation can be expressed equivalently in the SVM framework, either by adding a temporal-consistency constraint to standard training or by encoding the window through a dense pyramidal decomposition. The algorithm is evaluated on a public supervised-segmentation dataset, where it outperforms other Implicit Shape Model-like methods and the standard linear SVM. It is then integrated into an action segmentation system: dedicated features are extracted from the skeleton of the person of interest, obtained from the video with standard software, then quantised and fed to the voting method. This system, combining our features and our algorithm, obtains the best published performance on a human daily action segmentation dataset, which validates the combination of the features and the voting method.
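The vote optimisation and its SVM equivalence are not reproduced here. The sketch below, with hypothetical names and hand-set weights, only illustrates the ISM-style temporal voting that the method starts from: each quantised feature casts learned per-label votes on nearby frames, and each frame takes the label with the highest accumulated score.

```python
import numpy as np

def ism_temporal_voting(codewords, vote_table, num_frames, num_labels, radius=15):
    """Each quantised feature observed at frame t casts weighted votes for the
    action label of frames within a temporal radius; frames are labelled by the
    class with the highest accumulated score."""
    scores = np.zeros((num_frames, num_labels))
    for t, w in codewords:                       # (frame index, codeword id)
        lo, hi = max(0, t - radius), min(num_frames, t + radius + 1)
        scores[lo:hi] += vote_table[w]           # vote_table[w]: per-label weights
    return scores.argmax(axis=1)

# Toy usage: 3 codewords, 2 labels, hand-set vote weights (learned in practice).
vote_table = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
observed = [(5, 0), (6, 0), (40, 1), (41, 1), (42, 2)]
labels = ism_temporal_voting(observed, vote_table, num_frames=60, num_labels=2)
print(labels[:10], labels[38:45])
```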
|
50 |
Learning-Based Motion Planning and Control of a UGV With Unknown and Changing Dynamics. Johansson, Åke and Wikner, Joel, January 2021.
Research about unmanned ground vehicles (UGVs) has received an increased amount of attention in recent years, partly due to the many applications of UGVs in areas where it is inconvenient or impossible to have human operators, such as in mines or urban search and rescue. Two closely linked problems that arise when developing such vehicles are motion planning and control of the UGV. This thesis explores these subjects for a UGV with an unknown, and possibly time-variant, dynamical model. A framework is developed that includes three components: a machine learning algorithm to estimate the unknown dynamical model of the UGV, a motion planner that plans a feasible path for the vehicle and a controller making the UGV follow the planned path. The motion planner used in the framework is a lattice-based planner based on input sampling. It uses a dynamical model of the UGV together with motion primitives, defined as a sequence of states and control signals, which are concatenated online in order to plan a feasible path between states. Furthermore, the controller that makes the vehicle follow this path is a model predictive control (MPC) controller, capable of taking the time-varying dynamics of the UGV into account as well as imposing constraints on the states and control signals. Since the dynamical model is unknown, the machine learning algorithm Bayesian linear regression (BLR) is used to continuously estimate the model parameters online during a run. The parameter estimates are then used by the MPC controller and the motion planner in order to improve the performance of the UGV. The performance of the proposed motion planning and control framework is evaluated by conducting a series of experiments in a simulation study. Two different simulation environments, containing obstacles, are used in the framework to simulate the UGV, where the performance measures considered are the deviation from the planned path, the average velocity of the UGV and the time to plan the path. The simulations are either performed with a time-invariant model, or a model where the parameters change during the run. The results show that the performance is improved when combining the motion planner and the MPC controller with the estimated model parameters from the BLR algorithm. With an improved model, the vehicle is capable of maintaining a higher average velocity, meaning that the plan can be executed faster. Furthermore, it can also track the path more precisely compared to when using a less accurate model, which is crucial in an environment with many obstacles. Finally, the use of the BLR algorithm to continuously estimate the model parameters allows the vehicle to adapt to changes in its model. This makes it possible for the UGV to stay operational in cases of, e.g., actuator malfunctions.
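The exact dynamical model structure, feature vectors, and MPC formulation are not given in this abstract. The following minimal sketch, with illustrative parameters and a hypothetical scalar velocity model, only shows the kind of recursive Bayesian linear regression update that lets parameter estimates track unknown, and possibly changing, dynamics online.

```python
import numpy as np

class BayesianLinearRegression:
    """Online Bayesian linear regression for y = phi(x) @ w + noise.
    Keeps a Gaussian posterior over w and refines it with each new sample."""
    def __init__(self, dim, prior_var=10.0, noise_prec=25.0):
        self.P = np.eye(dim) / prior_var     # posterior precision S^-1 (zero prior mean)
        self.b = np.zeros(dim)               # S^-1 @ posterior mean
        self.beta = noise_prec               # assumed measurement precision

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        self.P += self.beta * np.outer(phi, phi)
        self.b += self.beta * phi * y
        return self.mean

    @property
    def mean(self):
        return np.linalg.solve(self.P, self.b)

# Toy usage: estimate two parameters of a scalar velocity model
# v[k+1] = a * v[k] + b * u[k] from noisy observations (a = 0.9, b = 0.2 assumed).
rng = np.random.default_rng(0)
blr, v = BayesianLinearRegression(dim=2), 0.0
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    v_next = 0.9 * v + 0.2 * u + rng.normal(scale=0.02)
    blr.update([v, u], v_next)
    v = v_next
print(blr.mean)   # should approach [0.9, 0.2]
```

Because the posterior is refined after every sample, the planner and MPC controller can use increasingly accurate parameter estimates as the run progresses, matching the improvement reported above.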
|