About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Self-describing objects with tangible data structures / Objets intelligents avec des données tangibles

Sinha, Arnab 28 May 2014
Pervasive computing, or ambient computing, aims to integrate information systems into the environment as transparently as possible for users, so that the information systems are tightly coupled with the physical activities taking place in that environment. Everyday objects, along with their surroundings, are made smarter through embedded computing, sensors, and the ability to communicate among themselves. In pervasive computing, it is necessary to sense the real physical world and to perceive its "context", a high-level representation of the physical situation. There are various ways to derive this context. Typically, the approach is a multi-step process that begins with sensing: various sensing technologies capture low-level information about physical activities, which is then aggregated, analyzed, and processed elsewhere in the information system to recognize the context. Deployed applications then react according to the detected context or situation. Among sensors, RFID is an important emerging technology that provides a direct digital link between information systems and physical objects. Besides storing identification data, RFID tags also provide a general-purpose storage space on the objects they are attached to, enabling new architectures for pervasive computing. In this thesis, we defend an original approach that adopts the latter use of RFID, that is, a digital memory integrated into real objects. The approach follows the principle that the objects themselves directly support the information system; this form of integration reduces the communication needed for remote processing. The principle is realized in two ways. First, objects are piggybacked with semantic information about themselves, making them self-describing objects; relevant information associated with a physical entity is thus readily available locally for processing. Second, groups of related objects are digitally linked using dedicated or ad hoc data structures distributed over the objects themselves, which allows direct data processing, such as validating a property involving the objects in proximity. The physical relation among objects can be read directly from the data structure, which justifies the name "tangible data structures". Compared with the conventional method of using identifiers alone, our approach offers benefits in terms of privacy, scalability, autonomy, and reduced dependency on infrastructure. Its main challenge is limited expressivity, due to the small memory space available on the tags. The principles are validated with prototypes in two different application domains. The first prototype, developed for the waste-management domain, helps with efficient sorting and better recycling. The second provides added services, such as assembly assistance and verification for composite objects, using a data structure distributed across the individual parts.
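
A rough illustration of the "tangible data structure" idea above, as a minimal Python sketch: it assumes each RFID tag's user memory carries a short self-description plus references to its sibling parts, so that the completeness of a composite object can be checked locally from whichever tags are read nearby. The names (TagMemory, verify_assembly, the chair example) are hypothetical and not taken from the thesis.

    from dataclasses import dataclass, field

    # Hypothetical in-memory stand-in for the user memory of one RFID tag.
    @dataclass
    class TagMemory:
        kind: str                     # self-description: what the tagged part is
        part_id: int                  # this part's place in the composite object
        expected_parts: set[int] = field(default_factory=set)  # links to sibling parts

    def verify_assembly(nearby_tags: list[TagMemory]) -> bool:
        """Check, from the tags alone, that a composite object is complete.

        The 'tangible data structure' is the set of cross-references the parts
        carry about each other; no back-end lookup by identifier is needed.
        """
        present = {t.part_id for t in nearby_tags}
        required = set().union(*(t.expected_parts for t in nearby_tags)) | present
        return present == required

    # Example: a three-part object with one part missing.
    tags = [
        TagMemory("chair.leg", 1, {1, 2, 3}),
        TagMemory("chair.leg", 2, {1, 2, 3}),
    ]
    print(verify_assembly(tags))  # False: part 3 is expected but was not read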
2

Temporal streams: programming abstractions for distributed live stream analysis applications

Hilley, David B 20 October 2009
Continuous live stream analysis applications are increasingly common. Video-based surveillance, emergency response, disaster recovery, and critical infrastructure monitoring are all examples of such applications. These applications are distributed and typically require significant computing resources (like a cluster of workstations) for analysis. In addition to live data, many such applications also require access to historical data that was streamed in the past and is now archived. While distributed programming support for traditional high-performance computing applications is fairly mature, existing solutions for live stream analysis applications are still in their early stages and, in our view, inadequate. We explore the system-level value of recognizing temporal properties -- a critical aspect of the application domain. We present "temporal streams", a programming model supporting a higher-level, domain-targeted programming abstraction for such applications. It provides a simple but expressive stream abstraction encompassing transport, manipulation and storage of streaming data. The semantics of the programming model are tailored to the application domain by explicitly recognizing the temporal aspects of continuous streams, providing a common interface for both time-based retrieval of current streaming data and data persistence. The unifying trait of time enables access to both current streaming data and archived historical data using the same interface; the communication and storage abstractions are the same -- a unified stream data abstraction, uniformly modeling stream data interactions. "Temporal streams" defines how distributed threads of computation interact implicitly via streams, but does not impose a particular model of computation constraining the interactions between distributed actors, targeting loosely coupled distributed systems with no centralized control. In particular, it targets stream analysis scenarios requiring significant signal processing on heavyweight streams such as audio and video. These unstructured streams are data rich but are not directly interpretable until meaningful features are extracted; consequently, feature detection and subsequent analysis are the major computational requirements. We also use the programming model as a vehicle for exploring systems software design issues, realizing "temporal streams" as a distributed runtime in the tradition of loosely coupled distributed systems with strong communication boundaries. We thoroughly examine the concrete software architecture and elements of implementation. We also describe two generations of system implementations, including the broad development philosophy, specific design principles and salient low-level details. The runtime is designed to be relatively lightweight and suitable as a substrate for higher-level, more domain-specific middleware or application functionality. Even with a relatively simple programming model, a carefully designed system architecture can provide a surprisingly rich and flexible substrate for upper software layers. We evaluate our system implementation in two ways. First, we present a series of quantitative experimental results designed to assess the performance of key primitives in our architecture in isolation. Second, we use motivating applications to evaluate "temporal streams" in the context of realistic application scenarios.
We develop three motivating applications and provide quantitative and qualitative analyses of these applications in the context of "temporal streams." We show that, although it provides needed higher-level functionality to enable live stream analysis applications, our runtime does not add significant overhead to the stream computation at the core of each application. Finally, we also review the relationship of "temporal streams" (both the programming model and architecture) to other approaches, including database-oriented Stream Data Management Systems (SDMS), various stream processing engines, stream programming languages and parallel batch processing systems, as well as traditional distributed programming systems and communication frameworks.
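
As a loose sketch of the unified, time-based stream interface described above -- not the actual "temporal streams" API -- the following Python fragment assumes only that a single time-indexed call serves both recently streamed and archived items; the names TimeStream, put and get_range are illustrative.

    import bisect
    import time
    from typing import Any

    class TimeStream:
        """Minimal time-indexed channel: one interface for live and archived data."""

        def __init__(self) -> None:
            self._timestamps: list[float] = []   # arrival times, kept in order
            self._items: list[Any] = []          # items aligned with timestamps

        def put(self, item: Any, ts: float | None = None) -> None:
            """Append an item with its timestamp (defaults to 'now')."""
            self._timestamps.append(time.time() if ts is None else ts)
            self._items.append(item)

        def get_range(self, start: float, end: float) -> list[Any]:
            """Return every item produced in [start, end], whether it arrived
            moments ago or was archived long before -- the same call covers both."""
            lo = bisect.bisect_left(self._timestamps, start)
            hi = bisect.bisect_right(self._timestamps, end)
            return self._items[lo:hi]

    # Example: frames arriving over time, then a query over a past window.
    stream = TimeStream()
    for i in range(5):
        stream.put(f"frame-{i}", ts=100.0 + i)
    print(stream.get_range(101.0, 103.0))  # ['frame-1', 'frame-2', 'frame-3']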
3

Lh*rs p2p : une nouvelle structure de données distribuée et scalable pour les environnements Pair à Pair / Lh*rsp2p : a new scalable and distributed data structure for Peer-to-Peer environments

Yakouben, Hanafi 14 May 2013
We propose a new scalable and distributed data structure termed LH*RSP2P, designed for peer-to-peer (P2P) environments. Application data form a file of records identified by primary keys. Records are stored in buckets on peers, addressed by distributed linear hashing (LH*). Splits create new buckets dynamically to accommodate inserts. Key-based access to a record requires at most one forwarding hop, and a scan of the file proceeds in at most two rounds; these results are among the best known at present. An LH*RSP2P file is also protected against churn: parity calculation recovers from the unavailability of up to k buckets, where k ≥ 1 is a scalable parameter. A new type of query, qualified as sure, also protects against access to any out-of-date bucket. We prove the properties of our SDDS formally and validate them through a prototype implementation and experiments. LH*RSP2P appears useful for Big Data manipulations, especially over RamClouds.
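
For intuition about the addressing scheme behind the LH* family that LH*RSP2P extends, here is a minimal Python sketch of the classic linear-hashing address computation, assuming a single initial bucket; it does not model the parity-based recovery, churn handling, or "sure" queries specific to LH*RSP2P, and lh_address is a hypothetical helper name.

    def lh_address(key: int, level: int, split_pointer: int) -> int:
        """Classic linear-hashing address rule: h_i(key) = key mod 2**i.

        If the computed bucket lies before the split pointer, that bucket has
        already been split, so the next-level hash function applies instead.
        """
        addr = key % (2 ** level)
        if addr < split_pointer:
            addr = key % (2 ** (level + 1))
        return addr

    # Example: a file at level 2 whose first bucket has already been split.
    for k in (5, 8, 12, 21):
        print(f"key {k} -> bucket {lh_address(k, level=2, split_pointer=1)}")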
