411

BioSENSE: Biologically-inspired Secure Elastic Networked Sensor Environment

Hassan Eltarras, Rami M. 22 September 2011 (has links)
The essence of smart pervasive Cyber-Physical Environments (CPEs) is to enhance the dependability, security and efficiency of their encompassing systems and infrastructures and their services. In CPEs, interactive information resources are integrated and coordinated with physical resources to better serve human users. To bridge the interaction gap between users and the physical environment, a CPE is instrumented with a large number of small devices, called sensors, that are capable of sensing, computing and communicating. Sensors with heterogeneous capabilities should autonomously organize on-demand and interact to furnish real-time, high-fidelity information serving a wide variety of user applications with dynamic and evolving requirements. CPEs with their associated networked sensors promise aware services for smart systems and infrastructures with the potential to improve the quality of numerous application domains, in particular mission-critical infrastructure domains. Examples include healthcare, environment protection, transportation, energy, homeland security, and national defense. To build smart CPEs, Networked Sensor Environments (NSEs) are needed to manage demand-driven sharing of large-scale federated heterogeneous resources among multiple applications and users. We informally define an NSE as a tailorable, application-agnostic, distributed platform with the purpose of managing a massive number of federated resources with heterogeneous computing, communication, and monitoring capabilities. We perceive the need to develop scalable, trustworthy, cost-effective NSEs. An NSE should be endowed with dynamic and adaptable computing and communication services capable of efficiently running diverse applications with evolving QoS requirements on top of federated distributed resources. NSEs should also enable the development of applications independent of the underlying system and device concerns. To our knowledge, an NSE with the aforementioned capabilities does not currently exist. The large scale of NSEs, the heterogeneous node capabilities, the highly dynamic topology, and the likelihood of being deployed in inhospitable environments pose formidable challenges for the construction of resilient shared NSE platforms. Additionally, nodes in an NSE are often resource-challenged, and therefore trustworthy node cooperation is required to provide useful services. Furthermore, the failure of NSE nodes due to malicious or non-malicious conditions represents a major threat to the trustworthiness of NSEs. Applications should be able to survive the failure of nodes and change their runtime structure while preserving their operational integrity. It is also worth noting that the decoupling of application programming concerns from system and device concerns has not received the appropriate attention in most existing wireless sensor network platforms. In this dissertation, we present a Biologically-inspired Secure Elastic Networked Sensor Environment (BioSENSE) that synergistically integrates: (1) a novel bio-inspired construction of adaptable system building components, (2) an associative routing framework with extensible, adaptable criteria-based addressing of resources, and (3) management of multi-dimensional software diversity and trust-based variant hot shuffling. The outcome is that an application using BioSENSE is able to allocate, at runtime, a dynamic taskforce, running over a federated resource pool, that satisfies its evolving mission requirements.
BioSENSE perceives both applications and the NSE itself to be elastic, and allows them to grow or shrink based upon needs and conditions. BioSENSE adopts the Cell-Oriented Architecture (COA), a novel architecture that supports the development, deployment, execution, maintenance, and evolution of NSE software. COA employs mission-oriented application design and inline code distribution to enable adaptability, dynamic re-tasking, and re-programmability. The cell, the basic building block in COA, is the abstraction of a mission-oriented, autonomously active resource. Generic cells are spontaneously created by the middleware, then participate in emerging tasks through a process called specialization. Once specialized, cells exhibit application-specific behavior. Specialized cells have mission objectives that are continuously sought, and sensors that are used to monitor performance parameters, mission objectives, and other phenomena of interest. Due to the inherently anonymous nature of sensor nodes, associative routing enables dynamic, semantically rich, descriptive identification of NSE resources. As such, associative routing presents a clear departure from most current network addressing schemes. Associative routing combines resource discovery and path discovery into a single coherent role, leading to a significant reduction in traffic load and communication latency without any loss of generality. We also propose the Adaptive Multi-Criteria Routing (AMCR) protocol as a realization of associative routing for NSEs. AMCR exploits application-specific message semantics, represented as generic criteria, and adapts its operation according to observed traffic patterns. BioSENSE intrinsically exploits software diversity, runtime implementation shuffling, and fault recovery to achieve the security and resilience required for mission-critical NSEs. BioSENSE makes NSE software a resilient moving target that: (1) confuses the attacker through non-determinism, by shuffling software component implementations; (2) improves the availability of the NSE by providing means to gracefully recover from implementation flaws at runtime; and (3) enhances the software system by survival of the fittest, through trust-based component selection in an online software component marketplace. In summary, BioSENSE touts the following advantages: (1) on-demand, online distribution and adaptive allocation of services and physical resources shared among multiple long-lived applications with dynamic missions and quality-of-service requirements, (2) structural, functional, and performance adaptation to dynamic network scales, contexts and topologies, (3) moving-target defense of system software, and (4) autonomic failure recovery. / Ph. D.
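As a rough illustration of the criteria-based addressing that associative routing and AMCR build on, consider the minimal Java sketch below. The class names, attributes, and matching rule are our own assumptions for exposition, not constructs taken from the dissertation: resources advertise attribute/value pairs, and a message is routed to any node whose attributes satisfy the message's criteria rather than to a fixed network address.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of criteria-based (associative) addressing: messages carry
// predicates over resource attributes instead of node addresses, and any
// node whose advertised attributes satisfy the criteria is a valid target.
public class AssociativeRoutingSketch {

    // A node advertises its capabilities as attribute/value pairs.
    record Node(String id, Map<String, String> attributes) {
        boolean satisfies(Map<String, String> criteria) {
            return criteria.entrySet().stream()
                    .allMatch(c -> c.getValue().equals(attributes.get(c.getKey())));
        }
    }

    public static void main(String[] args) {
        List<Node> pool = List.of(
                new Node("n1", Map.of("sensor", "temperature", "region", "north")),
                new Node("n2", Map.of("sensor", "camera", "region", "north")),
                new Node("n3", Map.of("sensor", "temperature", "region", "south")));

        // Address resources descriptively: "any temperature sensor in the north".
        Map<String, String> criteria = Map.of("sensor", "temperature", "region", "north");

        pool.stream()
                .filter(n -> n.satisfies(criteria))
                .forEach(n -> System.out.println("route to " + n.id()));  // prints n1
    }
}
```

Note how resource discovery ("which nodes match?") and target selection collapse into one matching step, which is the single coherent role the abstract attributes to associative routing.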
412

Methodologies, Techniques, and Tools for Understanding and Managing Sensitive Program Information

Liu, Yin 20 May 2021 (has links)
Exfiltrating or tampering with certain business logic, algorithms, and data can harm the security and privacy of both organizations and end users. Collectively referred to as sensitive program information (SPI), these building blocks are part and parcel of modern software systems in domains ranging from enterprise applications to cyber-physical setups. Hence, protecting SPI has become one of the most salient challenges of modern software development. However, several fundamental obstacles stand in the way of effective SPI protection: (1) understanding and locating the SPI for any realistically sized codebase by hand is hard; (2) manually isolating SPI to protect it is burdensome and error-prone; (3) if SPI is passed across distributed components within and across devices, it becomes vulnerable to security and privacy attacks. To address these problems, this dissertation research innovates in the realm of automated program analysis, code transformation, and novel programming abstractions to improve the state of the art in SPI protection. Specifically, this dissertation comprises three interrelated research thrusts that: (1) design and develop program analysis and programming support for inferring the usage semantics of program constructs, with the goal of helping developers understand and identify SPI; (2) provide powerful programming abstractions and tools that transform code automatically, with the goal of helping developers effectively isolate SPI from the rest of the codebase; (3) provide a programming mechanism for distributed managed execution environments that hides SPI, with the goal of enabling components to exchange SPI safely and securely. The novel methodologies, techniques, and software tools of this dissertation research, supported by programming abstractions, automated program analysis, and code transformation, lay the groundwork for establishing a secure, understandable, and efficient foundation for protecting SPI. This dissertation is based on 4 conference papers, presented at TrustCom'20, GPCE'20, GPCE'18, and ManLang'17, as well as 1 journal paper, published in the Journal of Computer Languages (COLA). / Doctor of Philosophy / Some portions of a computer program can be sensitive; these are referred to as sensitive program information (SPI). By compromising SPI, attackers can hurt user security and privacy. It is hard for developers to identify and protect SPI, particularly in large programs. This dissertation introduces novel methodologies, techniques, and software tools that facilitate software development tasks concerned with locating and protecting SPI.
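To make the idea of marking and locating SPI concrete, here is a minimal, hypothetical Java sketch. The @Sensitive annotation and the reflective scan are illustrative stand-ins for the dissertation's program analysis and programming support, not its actual tools: a developer tags sensitive state, and a simple pass reports where it lives so it can be isolated.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical @Sensitive marker: developers tag SPI explicitly, and a
// reflective pass (standing in for real static analysis) reports where
// sensitive state lives so isolation tooling can act on it.
public class SpiScanSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Sensitive { }

    static class PricingEngine {
        @Sensitive double proprietaryMargin = 0.17;  // business-logic constant (SPI)
        String publicLabel = "retail";               // not sensitive
    }

    public static void main(String[] args) {
        for (Field f : PricingEngine.class.getDeclaredFields()) {
            if (f.isAnnotationPresent(Sensitive.class)) {
                System.out.println("SPI field: " + f.getName());  // proprietaryMargin
            }
        }
    }
}
```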
413

The Client Insourcing Refactoring to Facilitate the Re-engineering of Web-Based Applications

An, Kijin 19 May 2021 (has links)
Developers often need to re-engineer distributed applications to address changes in requirements, made only after deployment. Much of the complexity of inspecting and evolving distributed applications lies in their distributed nature, while the majority of mature program analysis and transformation tools work only with centralized software. Inspired by business process re-engineering, in which remote operations can be insourced back in house to be restructured and outsourced anew, this dissertation brings an analogous approach to the re-engineering of distributed applications. Our approach introduces a novel automatic refactoring---Client Insourcing---that creates a semantically equivalent centralized version of a distributed application. This centralized version is then inspected, modified, and redistributed to meet new requirements. This dissertation demonstrates the utility of Client Insourcing in helping meet changed requirements in performance, reliability, and security. We implemented Client Insourcing in the important domain of full-stack JavaScript applications, in which both the client and server parts are written in JavaScript, and applied our implementation to re-engineer mobile web applications. Client Insourcing reduces the complexity of inspecting and evolving distributed applications, thereby facilitating their re-engineering. This dissertation is based on 4 conference papers and 2 doctoral symposium papers, presented at ICWE 2019, SANER 2020, WWW 2020, and ICWE 2021. / Doctor of Philosophy / Modern web applications are distributed across a browser-based client and a remote server. Software developers need to optimize the performance of web applications as well as correct and modify their functionality. However, the vast majority of mature development tools, used for optimizing, correcting, and modifying applications, work only with non-distributed software, written to run on a single machine. To facilitate the maintenance and evolution of web applications, this dissertation research contributes new automated software transformation techniques. These contributions can be incorporated into the design of software development tools, thereby advancing the engineering of web applications.
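The shape of the Client Insourcing transformation can be sketched as follows. The dissertation targets full-stack JavaScript; for consistency with the other sketches here, this analogue is written in Java, and the service names and business rule are ours. The point is only the structure: a remote invocation is replaced by a direct call to a semantically equivalent local implementation, yielding a centralized program that standard analysis tools can handle.

```java
// Illustrative Java analogue of Client Insourcing (the real work operates on
// full-stack JavaScript): replace a remote call with an equivalent local one.
public class InsourcingSketch {

    interface TaxService { double rate(String region); }

    // Distributed version: the client reaches the server over the network.
    static class RemoteTaxService implements TaxService {
        public double rate(String region) {
            // ... HTTP round trip to the server endpoint elided ...
            throw new UnsupportedOperationException("network call");
        }
    }

    // Insourced version: the server-side logic, moved in-process.
    static class LocalTaxService implements TaxService {
        public double rate(String region) {
            return region.equals("north") ? 0.08 : 0.05;  // same business rule
        }
    }

    public static void main(String[] args) {
        TaxService svc = new LocalTaxService();  // the refactoring swaps this binding
        System.out.println(svc.rate("north"));   // behaves as the distributed app did
    }
}
```

After inspection and modification, the centralized version is redistributed, which is the inverse step the abstract describes.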
414

Effective Fusion and Separation of Distribution, Fault-Tolerance, and Energy-Efficiency Concerns

Kwon, Young Woo 03 July 2014 (has links)
As software applications are becoming increasingly distributed and mobile, their design and implementation are characterized by distributed software architectures, possibility of faults, and the need for energy awareness. Thus, software developers should be able to simultaneously reason about and handle the concerns of distribution, fault-tolerance, and energy-efficiency. Being closely intertwined, these concerns can introduce significant complexity into the design and implementation of modern software. In other words, to develop reliable and energy-efficient applications, software developers must understand how distribution, fault-tolerance, and energy-efficiency interplay with each other and how to implement these concerns while keeping the complexity in check. This dissertation addresses five technical issues that stand in the way of engineering reliable and energy-efficient software: (1) how can developers select and parameterize middleware to achieve the requisite levels of performance, reliability, and energy-efficiency? (2) how can one streamline the process of implementing and reusing fault tolerance functionality in distributed applications? (3) can automated techniques be developed to help transition centralized applications to using cloud-based services efficiently and reliably? (4) how can one leverage cloud-based resources to improve the energy-efficiency of mobile applications? (5) how can middleware be adapted to improve the energy-efficiency of distributed mobile applications operated over heterogeneous mobile networks? To address these issues, this research studies the concerns of distribution, fault-tolerance, and energy-efficiency as well as their interaction. It also develops novel approaches, techniques, and tools that effectively fuse and separate these concerns as required by particular software development scenarios. The specific innovations include (1) a systematic assessment of the performance, conciseness, complexity, reliability, and energy consumption of middleware mechanisms for accessing remote functionality, (2) a declarative approach to hardening distributed applications with resiliency against partial failure, (3) cloud refactoring, a set of automated program transformations for transitioning to using cloud-based services efficiently and reliably, (4) a cloud offloading approach that improves the energy-efficiency of mobile applications without compromising their reliability, (5) a middleware mechanism that optimizes energy consumption by adapting execution patterns dynamically in response to fluctuations in network conditions. / Ph. D.
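A minimal sketch of the kind of offloading decision such middleware makes is shown below. The cost model and all constants are our own illustrative assumptions, not values from the dissertation: the task is offloaded only when the estimated energy to ship data over the current network beats the estimated energy of local execution, and the decision is re-evaluated as network conditions fluctuate.

```java
// Illustrative energy model for cloud offloading; constants are assumptions.
public class OffloadDecisionSketch {

    // Estimated energy (joules) to run the task locally.
    static double localEnergy(double cpuSeconds) {
        return cpuSeconds * 2.0;  // assumed CPU power draw of 2 W
    }

    // Estimated energy to ship input/output over the current network.
    static double remoteEnergy(double payloadMb, double bandwidthMbps) {
        double txSeconds = payloadMb * 8 / bandwidthMbps;  // MB -> Mb
        return txSeconds * 1.2;   // assumed radio power draw of 1.2 W
    }

    public static void main(String[] args) {
        double cpuSeconds = 5.0, payloadMb = 2.0;
        // Re-evaluated as conditions fluctuate: on a fast network, offload wins.
        for (double bw : new double[]{0.5, 10.0}) {
            boolean offload = remoteEnergy(payloadMb, bw) < localEnergy(cpuSeconds);
            System.out.printf("bandwidth=%.1f Mbps -> offload=%b%n", bw, offload);
        }
    }
}
```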
415

A Middleware for Large-scale Simulation Systems & Resource Management

Makkapati, Hemanth 26 May 2013 (has links)
Socially coupled systems are comprised of inter-dependent social, organizational, economic, infrastructure and physical networks. Today's urban regions serve as an excellent example of such systems. People and institutions confront the implications of the increasing scale of information becoming available due to a combination of advances in pervasive computing, data acquisition systems, and high-performance computing. Integrated modeling and decision-making environments are necessary to support planning, analysis and counterfactual experiments to study these complex systems. Here, we describe Simfrastructure, a computational infrastructure that supports high-performance-computing-oriented decision and analytics environments to study socially coupled systems. Simfrastructure provides a middleware with a multiplexing mechanism by which modeling environments with simple and intuitive user interfaces can be plugged in as front-end systems, and high-end computing resources -- such as clusters, grids and clouds -- can be plugged in as back-end systems for execution. This makes several key aspects of simulation systems, such as computational complexity, data management, and resource management and allocation, completely transparent to the users. The decoupling of user interfaces, data repository and computational resources from simulation execution allows users to run simulations and access the results asynchronously, and enables them to add new datasets and simulation models dynamically. Simfrastructure enables implementation of a simple yet powerful modeling environment with a built-in analytics-as-a-service platform, which provides seamless access to high-end computational resources through an intuitive interface for studying socially coupled systems. We illustrate the applicability of Simfrastructure in the context of an integrated modeling environment to study public health epidemiology and network science. / Master of Science
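The multiplexing idea can be sketched with a pair of interfaces; the names below are ours, not Simfrastructure's. Front ends submit against one abstract interface, while interchangeable back ends (cluster, grid, cloud) supply the execution, so where and how a simulation runs stays transparent to the user.

```java
import java.util.Map;

// Illustrative sketch of front-end/back-end decoupling behind a broker.
public class BrokerSketch {

    interface Backend { String submit(String model, Map<String, String> params); }

    static class ClusterBackend implements Backend {
        public String submit(String model, Map<String, String> params) {
            return "cluster-job-42";  // queue on an HPC cluster (elided)
        }
    }

    static class CloudBackend implements Backend {
        public String submit(String model, Map<String, String> params) {
            return "cloud-task-7";    // launch on cloud instances (elided)
        }
    }

    public static void main(String[] args) {
        // Any front end calls the same interface; results are later fetched
        // asynchronously by job id, as the abstract describes.
        Backend backend = new ClusterBackend();
        String jobId = backend.submit("epidemic-sim", Map.of("region", "urban"));
        System.out.println("submitted: " + jobId);
    }
}
```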
416

Comparing Service-Oriented Architecture Frameworks for Use in Programmable Industrial Vehicle Displays

Gällstedt, Axel January 2024 (has links)
Bindings are used to make a software library accessible in languages other than those that the library was originally written for. However, creating and maintaining large amounts of bindings for every library is time-consuming and costly. An alternative approach to bringing functionality to more languages is to use a service-oriented architecture, where functionality is provided as services accessible from another process through message passing. Various middlewares exist to enable message passing between processes. In this thesis, some state-of-the-art messaging middlewares are explored and evaluated in terms of various criteria, with emphasis on their suitability for programmable displays built for industrial vehicles. Three of the most suitable middlewares are used to implement small systems based on a service-oriented architecture for further evaluation. The results indicate that the Data Distribution Service is the most promising candidate, owing to its interface description language, language support, and relatively low RAM and disk space usage.
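As a toy illustration of the service-oriented alternative to bindings, the sketch below uses in-process queues to stand in for a real messaging middleware such as DDS; the request format and service are our own assumptions. A library's functionality is exposed as a service that clients reach via messages, with no direct linking against the library.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy message-passing service: queues stand in for a real middleware.
public class MessagePassingServiceSketch {

    record Request(int value, BlockingQueue<Integer> replyTo) { }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Request> requests = new ArrayBlockingQueue<>(16);

        // Service side: wraps the "library" (here, just squaring a number).
        Thread service = new Thread(() -> {
            try {
                while (true) {
                    Request r = requests.take();
                    r.replyTo().put(r.value() * r.value());
                }
            } catch (InterruptedException e) { /* shutdown */ }
        });
        service.setDaemon(true);
        service.start();

        // Client side: no binding to the library, only a message exchange.
        BlockingQueue<Integer> reply = new ArrayBlockingQueue<>(1);
        requests.put(new Request(7, reply));
        System.out.println("squared: " + reply.take());  // 49
    }
}
```

In a real deployment the queues would be replaced by the middleware's transport, and the message schema by its interface description language, which is one of the criteria the thesis evaluates.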
417

A Configurable Job Submission and Scheduling System for the Grid

Kasarkod, Jeevak 01 September 2003 (has links)
Grid computing provides the necessary infrastructure to pool together diverse and distributed resources interconnected by networks to provide a unified virtual computing resource view to the user. One of the important responsibilities of the grid software is resource management and techniques to allow the user to make optimal use of the resources for executing applications. In addition to the goals of minimizing job completion time and achieving good throughput, there are other minimum requirements such as minimum memory and CPU requirements, choice of operating system, fine-grained file access permissions, etc. Currently such requirements are being fulfilled by resource brokers, which act as mediating agents between users and resource owners. In this thesis we approach the resource brokering architectural issue in a different manner. Instead of a monolithic broker that performs all the superscheduling functions, we propose a Modular Framework-based Architecture for Task Initiation and Scheduling (MFATIC) based on the three main stages in the superscheduling process. There are three major goals of this research. The first is to develop a decoupled architectural model that not only provides a clear distinction in the responsibilities of each of the components but also gives the user the flexibility to replace one component with another functionally equivalent component. Second, each of these components should be configurable and extensible to accommodate user requirements. Finally, the design should enable the user to plug in modules within components across different deployments of the resource broker, thus promoting software reuse. / Master of Science
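A minimal sketch of that decoupled, pluggable structure follows. The stage names track common superscheduling terminology (discovery, selection, dispatch) and are our assumption, not MFATIC's actual component names; each stage is an interface, so any implementation can be swapped for a functionally equivalent one.

```java
import java.util.List;

// Each superscheduling stage is a replaceable interface; the broker is
// just their composition.
public class ModularBrokerSketch {

    interface ResourceDiscovery { List<String> discover(String requirements); }
    interface ResourceSelection { String select(List<String> candidates); }
    interface JobDispatch { void dispatch(String job, String resource); }

    public static void main(String[] args) {
        ResourceDiscovery discovery = req -> List.of("siteA", "siteB");
        ResourceSelection selection = cands -> cands.get(0);   // e.g. least loaded
        JobDispatch dispatch = (job, res) ->
                System.out.println("dispatch " + job + " -> " + res);

        // The three stages compose into a broker; any stage is replaceable.
        List<String> candidates = discovery.discover("os=linux, mem>=2GB");
        dispatch.dispatch("job-17", selection.select(candidates));
    }
}
```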
418

Cognitively-inspired Architecture for Wireless Sensor Networks: A Model Driven Approach for Data Integration in a Traffic Monitoring System

Phalak, Kashmira 08 September 2006 (has links)
We describe CoSMo, a Cognitively Inspired Service and Model Architecture for situational awareness and monitoring of vehicular traffic in urban transportation systems using a network of wireless sensors. The system architecture combines (i) a cognitively inspired internal representation for analyzing and answering queries concerning the observed system and (ii) a service-oriented architecture that facilitates interaction among the individual modules of the internal representation, the observed system, and the user. The cognitively inspired model architecture allows effective deductive as well as inductive reasoning by combining simulation-based dynamic models for planning with traditional relational databases for knowledge and data representation. On the other hand, the service-oriented design of interaction allows one to build flexible, extensible and scalable systems that can be deployed in practical settings. To illustrate our concepts and the novel features of our architecture, we have recently completed a prototype implementation of CoSMo. The prototype illustrates the advantages of our approach over other traditional approaches to designing scalable software for situational awareness in large complex systems. The basic architecture and its prototype implementation are generic and can be applied to monitoring other complex systems. This thesis describes the design of the cognitively inspired model architecture and its corresponding prototype. Two important contributions include the following: • The cognitively inspired architecture: In contrast to earlier work in model-driven architecture, CoSMo contains a number of cognitively inspired features, including perception, memory and learning. Apart from illustrating interesting trade-offs between computational cost (e.g., access time, memory) and the correctness available to a user, it also allows user-specified deductive and inductive queries. • Distributed data integration and fusion: In keeping with the cognitively inspired, model-driven approach, the system allows for efficient data fusion from heterogeneous sensors, simulation-based dynamic models, and databases that are continually updated with real-world and simulated data. It is capable of supporting a rich class of queries. / Master of Science
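As a purely illustrative example of the data-fusion idea, the sketch below combines a live sensor reading with a simulation-based estimate before answering a query, in the spirit of CoSMo's pairing of sensor data with dynamic models. The weights, values, and congestion threshold are all assumed.

```java
import java.util.Map;

// Toy fusion of an observed value with a model estimate before querying.
public class FusionSketch {

    public static void main(String[] args) {
        Map<String, Double> sensorSpeed = Map.of("link-12", 34.0);  // km/h, observed
        Map<String, Double> modelSpeed  = Map.of("link-12", 42.0);  // km/h, simulated

        String link = "link-12";
        // Assumed weighting: trust the live sensor more than the model.
        double fused = 0.7 * sensorSpeed.get(link) + 0.3 * modelSpeed.get(link);

        // Query: is traffic on this link congested (below 40 km/h)?
        System.out.printf("fused speed on %s: %.1f km/h, congested=%b%n",
                link, fused, fused < 40.0);
    }
}
```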
419

An Extensible Framework for Annotation-based Parameter Passing in Distributed Object Systems

Gopal, Sriram 28 July 2008 (has links)
Modern distributed object systems pass remote parameters based on their runtime type. This design choice limits the expressiveness, readability, and maintainability of distributed applications. While a rich body of research is concerned with middleware extensibility, modern distributed object systems do not offer programming facilities to extend their remote parameter passing semantics. Thus, extending these semantics requires understanding and modifying the underlying middleware implementation. This thesis addresses these design shortcomings by presenting (i) a declarative and extensible approach to remote parameter passing that decouples parameter passing from parameter types, and (ii) a plugin-based framework, DeXteR, that enables the programmer to extend the native set of remote parameter passing semantics, without having to understand or modify the underlying middleware implementation. DeXteR treats remote parameter passing as a distributed cross-cutting concern. It uses generative and aspect-oriented techniques, enabling the implementation of different parameter passing semantics as reusable application-level plugins that work with application, system, and third-party library classes. The flexibility and expressiveness of the framework is validated by implementing several non-trivial parameter passing semantics as DeXteR plugins. The material presented in this thesis has been accepted for publication at the ACM/USENIX Middleware 2008 conference. / Master of Science
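The declarative style the thesis argues for can be sketched as follows. The @PassBy annotation is illustrative of the approach (passing semantics chosen per parameter, independent of the parameter's runtime type), not DeXteR's actual API; a framework plugin would interpret such annotations to apply the chosen semantics.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical per-parameter declaration of remote passing semantics.
public class ParameterPassingSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    @interface PassBy { String value(); }  // e.g. "copy", "reference", "copy-restore"

    interface FileServer {
        // The same type can be passed differently at different parameters,
        // which runtime-type-based schemes cannot express.
        void store(@PassBy("copy") byte[] data,
                   @PassBy("reference") StringBuilder log);
    }

    public static void main(String[] args) throws NoSuchMethodException {
        var method = FileServer.class.getMethod("store", byte[].class, StringBuilder.class);
        for (var p : method.getParameters()) {
            PassBy pb = p.getAnnotation(PassBy.class);
            System.out.println(p.getType().getSimpleName() + " passed by " + pb.value());
        }
    }
}
```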
420

An architecture for network centric operations in unconventional crisis: lessons learnt from Singapore's SARS experience

Tay, Chee Bin, Mui, Whye Kee 12 1900 (has links)
Approved for public release, distribution is unlimited / Singapore and many parts of Asia were hit with Severe Acute Respiratory Syndrome (SARS) in March 2003. The spread of SARS led to a rapidly deteriorating and chaotic situation. Because SARS was a new infection, there was no prior knowledge that could be referenced to tackle such a complex, unknown and rapidly changing problem. Fortunately, through sound measures coupled with good leadership, quick action and inter-agency cooperation, the situation was quickly brought under control. This thesis uses the SARS incident as a case study to identify a set of network-centric warfare methodologies and technologies that can be leveraged to facilitate the understanding and management of complex and rapidly changing situations. The same set of methodologies and technologies can also be selectively reused and extended to handle other situations in asymmetric and unconventional warfare. / Office of Force Transformation, DoD US Future Systems Directorate, MINDEF Singapore. / Lieutenant, Republic of Singapore Army / Civilian, Defence Science and Technology Agency, Singapore
