581

Model-driven Fault-Tolerance Provisioning for Component-based Distributed Real-time Embedded Systems

Tambe, Sumant (19 October 2010)
Developing distributed real-time and embedded (DRE) systems requires effective strategies to simultaneously handle the challenges of networked systems, enterprise systems, and embedded systems. The component-based model is gaining prominence for the development of DRE systems because of its emphasis on composability and reuse, its excellent support for separation of concerns, and its explicit staging of development phases. Despite the advances in component technology, developing highly available DRE systems remains challenging for several reasons. First, availability concerns crosscut the functional, deployment, and other QoS concerns of DRE systems, which makes reasoning about simultaneous QoS requirements extremely difficult. Second, fault-tolerance provisioning affects nearly every phase of the system lifecycle, including specification, design, composition, deployment, configuration, and run-time; codifying availability requirements in the system artifacts corresponding to these lifecycle phases remains difficult without a coherent approach. Finally, the multi-tier architecture and non-deterministic behavior of DRE systems, combined with the need to meet end-to-end deadlines even during failures, give rise to unique end-to-end reliability issues, and general-purpose middleware infrastructures often do not support such highly domain-specific end-to-end reliability and failure-recovery requirements. This dissertation presents a model-driven framework that coherently addresses the issues arising during the development of highly available component-based DRE systems. First, a domain-specific modeling language called the Component QoS Modeling Language (CQML) is presented that separates systemic concerns, such as composition, deployment, and QoS, to enhance comprehension and design-time reasoning. Second, a multi-stage model-driven process named GeneRative Aspects for Fault Tolerance (GRAFT) is presented that synthesizes various system artifacts to provision domain-specific end-to-end reliability and recovery semantics using model-to-model, model-to-text, and model-to-code transformations. Finally, the orphan-request problem, which arises as a side effect of replicating non-deterministic stateful components, is addressed: the dissertation presents the Group-failover protocol, which ensures that the data in multi-tier real-time systems remains both consistent and timely even in the presence of failures. Although model-driven engineering (MDE) is used extensively in this dissertation, effective techniques for a key step in MDE, model traversal, are still maturing. In the course of this research, limitations in current model-traversal approaches were addressed with the Language for Embedded Query and Traversal (LEESA), presented here as a language-centric solution for writing succinct, generic, reusable model traversals.
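
LEESA itself is a C++ embedded DSL, and the dissertation should be consulted for its actual syntax; the Python sketch below only illustrates the kind of descendant-axis model traversal such a language makes succinct. All class and function names here are hypothetical.

```python
# Hypothetical sketch of typed model traversal in the spirit of a
# descendant-axis query (not LEESA's actual C++ API).

class ModelElement:
    def __init__(self, kind, name, children=()):
        self.kind = kind          # e.g. "Assembly", "Component", "Port"
        self.name = name
        self.children = list(children)

def descendants_of_kind(root, kind):
    """Yield every descendant of `root` whose kind matches `kind`,
    analogous to a descendant-axis query in a traversal DSL."""
    for child in root.children:
        if child.kind == kind:
            yield child
        yield from descendants_of_kind(child, kind)

# Example: collect all Port elements under an assembly model.
model = ModelElement("Assembly", "Sensors", [
    ModelElement("Component", "GPS", [ModelElement("Port", "fix_out")]),
    ModelElement("Component", "IMU", [ModelElement("Port", "accel_out")]),
])
print([p.name for p in descendants_of_kind(model, "Port")])
# -> ['fix_out', 'accel_out']
```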
582

DESIGN AND RUN-TIME QUALITY OF SERVICE MANAGEMENT TECHNIQUES FOR PUBLISH/SUBSCRIBE DISTRIBUTED REAL-TIME AND EMBEDDED SYSTEMS

Hoffert, Joseph William (17 February 2011)
Quality-of-service (QoS)-enabled publish/subscribe middleware provides many configurable policies that increase flexibility and functionality but also present many challenges. Configuring QoS policies has become complicated due to the number of policies, the number of parameters per policy, and the interactions among policies. Additionally, QoS mechanisms, such as transport protocols, that work in one operating environment might not achieve the desired QoS in a different one. Moreover, unforeseen changes in the environment at run-time can cause the specified QoS not to be met. This thesis describes a research approach to managing the complexity of QoS-enabled distributed real-time and embedded (DRE) middleware. We show how a domain-specific modeling language automates the analysis and development of QoS policy configurations and improves productivity. We also show how integrating pub/sub middleware with our flexible network transport framework and our composite QoS metrics determines the most appropriate QoS mechanisms for a given environment while reducing development complexity. Finally, we show how pub/sub middleware can autonomically adapt in flexible, dynamic environments to support QoS.
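
As one concrete illustration of why policy configuration is tricky, pub/sub standards such as DDS match writers and readers with a request/offer check: a reader's requested policy is satisfied only if the writer's offered policy is at least as strong. The sketch below assumes a simplified two-policy model; the dict representation is illustrative, not any vendor's API.

```python
# Minimal sketch of a DDS-style request/offer compatibility check
# between a publisher (DataWriter) and a subscriber (DataReader).

RELIABILITY_ORDER = {"BEST_EFFORT": 0, "RELIABLE": 1}

def qos_compatible(offered, requested):
    """Return True if the offered QoS satisfies the requested QoS."""
    # Reliability: the writer must offer at least the level the reader requests.
    if RELIABILITY_ORDER[offered["reliability"]] < RELIABILITY_ORDER[requested["reliability"]]:
        return False
    # Deadline: the writer's update period must be no longer than the
    # reader's expected period (a smaller period is a stronger guarantee).
    if offered["deadline_sec"] > requested["deadline_sec"]:
        return False
    return True

writer_qos = {"reliability": "RELIABLE", "deadline_sec": 0.5}
reader_qos = {"reliability": "BEST_EFFORT", "deadline_sec": 1.0}
print(qos_compatible(writer_qos, reader_qos))
# True: RELIABLE >= BEST_EFFORT and 0.5 <= 1.0
```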
583

CONFIGURATION AND DEPLOYMENT DERIVATION STRATEGIES FOR DISTRIBUTED REAL-TIME AND EMBEDDED SYSTEMS

Dougherty, Brian Patrick (21 March 2011)
Distributed real-time and embedded (DRE) systems are constructed by allocating software tasks to hardware. This allocation, called a deployment plan, must ensure that design constraints, such as quality-of-service (QoS) demands and resource requirements, are satisfied. Further, the financial cost and performance of these systems may differ greatly based on software allocation decisions, auto-scaling strategy, and execution schedule. This dissertation describes techniques for addressing the challenges of deriving DRE system configurations and deployments. First, we show how heuristic algorithms can be used to determine system deployments that meet QoS demands and resource requirements. Second, we use metaheuristic algorithms to optimize system-wide deployment properties. Third, we describe a Model-Driven Architecture (MDA)-based methodology for constructing a DRE system configuration modeling tool. Fourth, we demonstrate a methodology for evolving DRE systems as new components become available. Fifth, we provide a technique for configuring virtual machine instances to create greener cloud-computing environments. Finally, we present a metric for assessing and increasing performance gains due to processor caching.
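
A minimal sketch of the first kind of heuristic mentioned above: a first-fit-decreasing packer that derives a deployment plan from task demands and node capacities. The data and names are illustrative; the dissertation's algorithms handle multiple resource dimensions and QoS constraints.

```python
# First-fit-decreasing deployment: tasks, sorted by resource demand,
# are placed on the first node with enough remaining capacity.

def first_fit_decreasing(tasks, nodes):
    """tasks: {name: demand}; nodes: {name: capacity}.
    Returns {task: node} or raises if some task cannot be placed."""
    remaining = dict(nodes)
    plan = {}
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        for node, free in remaining.items():
            if free >= demand:
                plan[task] = node
                remaining[node] = free - demand
                break
        else:
            raise ValueError(f"no node can host {task}")
    return plan

tasks = {"nav": 40, "telemetry": 25, "logging": 10, "vision": 55}
nodes = {"cpu0": 100, "cpu1": 60}
print(first_fit_decreasing(tasks, nodes))
# {'vision': 'cpu0', 'nav': 'cpu0', 'telemetry': 'cpu1', 'logging': 'cpu1'}
```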
584

Maximizing Service Uptime of Smartphone-based Distributed Real-time and Embedded Systems

Shah, Anushi (2 December 2010)
This thesis presents SmartDeploy, a deployment technique for maximizing service uptime in distributed applications running over a network of smartphones. It takes into account the power-consumption rate of software components as a key factor affecting service uptime, in addition to hardware resource constraints such as memory and CPU. The problem becomes more challenging with device heterogeneity and at scales where hundreds of software components are deployed onto hundreds of devices. The work proposes a hybrid deployment optimization technique that intelligently places software components onto the devices where they obtain maximum battery power and sufficient hardware resources. SmartDeploy provides a strategizable framework into which both a desired bin-packing heuristic and a desired evolutionary algorithm can be plugged, so that variations of a hybrid algorithm can be synthesized. To solve the service-uptime maximization problem, SmartDeploy is strategized with a worst-fit bin packer, which load-balances services across the collection of smartphones used in the mission in a way that minimizes battery drain while also delivering the required QoS. The evolutionary algorithm (particle swarm optimization or a genetic algorithm) generates initial and evolved random vectors and evaluates them using a fitness function.
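
As a rough illustration of the worst-fit strategy described above, the sketch below always places the next component on the candidate device with the most battery remaining, which spreads drain across the fleet. The device model, units, and numbers are assumptions, not SmartDeploy's actual interfaces.

```python
# Worst-fit placement by remaining battery, subject to memory headroom.

def worst_fit_by_battery(components, devices):
    """components: list of (name, power_draw, mem); devices: {name: state}."""
    plan = {}
    for name, power, mem in components:
        candidates = [d for d, s in devices.items()
                      if s["mem_free"] >= mem and s["battery"] >= power]
        if not candidates:
            raise ValueError(f"no device can host {name}")
        # Worst fit: pick the device with the MOST battery left,
        # load-balancing drain across the collection of phones.
        best = max(candidates, key=lambda d: devices[d]["battery"])
        devices[best]["battery"] -= power
        devices[best]["mem_free"] -= mem
        plan[name] = best
    return plan

devices = {"phone_a": {"battery": 90, "mem_free": 512},
           "phone_b": {"battery": 70, "mem_free": 512}}
components = [("sensor_feed", 30, 128), ("aggregator", 20, 256), ("uploader", 25, 64)]
print(worst_fit_by_battery(components, devices))
# {'sensor_feed': 'phone_a', 'aggregator': 'phone_b', 'uploader': 'phone_a'}
```

In SmartDeploy, a heuristic pass like this seeds the evolutionary stage, which then searches for better placements under a fitness function that models uptime.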
585

QOS ASSURANCE AND CONTROL OF LARGE SCALE DISTRIBUTED COMPONENT BASED SYSTEMS

Roy, Nilabja (7 December 2010)
Large-scale distributed component-based applications provide a number of different services to their clients. Such applications normally serve huge numbers of concurrent clients and need to provide a reasonable quality of service (QoS). A deployment domain composed of several machines is used to host these applications; the application components are distributed across the machines and communicate among themselves. An important objective of the owner of such a deployment is to handle as many clients as possible at any given time, which maximizes the revenue earned. But this must be done while keeping costs down and while providing every customer a minimum level of QoS. Cost can be reduced by minimizing the number of machines used and by using less power. This thesis works toward such a solution and proposes novel application component placement heuristics that ensure the overall resources of the domain are utilized in the best possible way. The intuition behind this work is that components are the smallest elements of an application from the perspective of resource usage; by distributing the components judiciously across the machines, it is possible to ensure that minimal resources are wasted. The work presented here uses a three-phase strategy. In the first phase, component resource requirements are identified using profiling and workload-modeling techniques. In the second phase, detailed performance estimation of the application is carried out using analytical methods. In the third and final phase, heuristics are proposed that use the component resource requirements and the performance estimates to place the components across the machines, ensuring that such a placement wastes the least resources.

The final part of this work applies these techniques in the context of modern data center planning. The most important challenge in modern data centers is to support large customer bases with high performance expectations. The incoming workload to the application varies widely, with periodic increases and decreases. If resources are allocated for the average workload, performance suffers during peaks, while planning for the peak keeps resources idle the rest of the time. Cloud computing is an emerging trend that allows the elastic configuration of resources, where machines can be acquired and released on the go. This work proposes a dynamic capacity-planning framework for cost minimization based on a look-ahead control algorithm that combines performance modeling, workload forecasting, and cost optimization to plan resource allocation in a dynamic environment. The results show how resources can be allocated just in time as the workload fluctuates. The dissertation also presents the various ways resources are allocated as the cost components change.
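
To make the look-ahead idea concrete, the sketch below assumes machines take time to spin up (at most a fixed number can be added per step), so the controller must begin provisioning before a forecast spike arrives. All constants and workload numbers are illustrative, not the dissertation's models.

```python
# Look-ahead capacity planning under a spin-up (ramp) constraint.

import math

CAPACITY = 100   # requests/sec one machine sustains (assumed)
RAMP = 2         # max machines that can be added per step (assumed)

def look_ahead_plan(forecast):
    """Return machine counts meeting each step's forecast demand while
    respecting the ramp limit, provisioning early for spikes."""
    need = [math.ceil(d / CAPACITY) for d in forecast]
    plan = need[:]
    # Backward pass: if a future step needs more machines than the ramp
    # limit lets us add, earlier steps must already be spinning them up.
    for t in range(len(plan) - 2, -1, -1):
        plan[t] = max(plan[t], plan[t + 1] - RAMP)
    return plan

forecast = [150, 180, 200, 950, 400]   # a sharp predicted spike at t=3
print(look_ahead_plan(forecast))       # [4, 6, 8, 10, 4]
```

A purely reactive controller would allocate [2, 2, 2, 10, 4] and miss the spike; the look-ahead pass front-loads capacity so the constraint is never violated.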
586

Model-driven Performance Analysis of Reconfigurable Conveyor Systems used in Material Handling Applications

An, Kyoungho (7 April 2011)
Reconfigurable conveyors are increasingly being adopted in multiple industrial sectors for their immense flexibility in adapting to new products and product lines. Before modifying the layout of a conveyor system for a new product line, however, engineers and layout planners must be able to answer many questions about the system, such as the maximum sustainable rate of flow of goods, prioritization among goods, and tolerance of failures. Any analysis capability that provides answers to these questions must account for both the physical and the cyber artifacts of the reconfigurable system at once. Moreover, the same system should enable stakeholders to seamlessly change layouts and analyze the pros and cons of each layout. This work addresses these challenges by presenting a model-driven analysis tool that provides three important capabilities. First, a domain-specific modeling language provides stakeholders with intuitive artifacts for modeling conveyor layouts. Second, an analysis engine embedded within the model-driven tool provides an accurate simulation of the modeled conveyor system, accounting for both physical and cyber issues. Third, generative capabilities within the tool help automate the analysis process. The merits of our model-driven analysis tool are evaluated in the context of an example conveyor topology.
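
As a tiny illustration of one analysis question listed above: the maximum sustainable flow along a conveyor path is bounded by its slowest segment. The sketch below uses assumed segment names and rates; the tool's simulation engine is far richer than this bottleneck calculation.

```python
# Bottleneck analysis: the sustainable flow of a path is limited by
# the minimum-rate segment on that path.

def max_sustainable_rate(path, segment_rates):
    """path: ordered segment names; segment_rates: units/min per segment."""
    return min(segment_rates[s] for s in path)

segment_rates = {"infeed": 120, "merge": 80, "scanner": 95, "outfeed": 110}
print(max_sustainable_rate(["infeed", "merge", "scanner", "outfeed"], segment_rates))
# 80 units/min: the merge segment is the bottleneck
```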
587

Robust and Efficient Routing in Wireless Mesh Networks

Wellons, Jonathan Lawrence (8 April 2011)
Wireless mesh networks have proven immensely valuable in extending the reach, speed of deployment, and flexibility of networks. Routing in wireless mesh networks is complicated by channel interference, multi-hop paths, and the highly unpredictable nature of traffic demands, which stems from mobile clients and the diversity of services. The goal of this dissertation is a routing strategy that provides the best possible worst-case performance while balancing it against the average case. We establish a robust worst-case baseline using oblivious routing, which uses no knowledge of traffic demand. We then extend this with a series of demand models of increasing focus and time-awareness, incorporating them into our solution to improve the average case with minimal risk to the worst case. Finally, we accommodate multichannel and multiradio models to provide practical routings for realistic networks.
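
For reference, oblivious routing is conventionally judged by its oblivious performance ratio: the worst case, over all admissible demand matrices, of the fixed routing's cost relative to the optimal demand-aware routing. This is the standard definition from the oblivious-routing literature, not notation taken from the dissertation.

```latex
\[
\operatorname{ratio}(f) \;=\; \max_{D}\; \frac{\mathrm{cost}(f, D)}{\mathrm{OPT}(D)},
\]
% where D ranges over all admissible demand matrices, cost(f, D) is the
% maximum link utilization when demands D are routed according to the
% fixed routing f, and OPT(D) is the utilization achieved by the best
% routing chosen with full knowledge of D. An oblivious routing scheme
% minimizes this worst-case ratio.
```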
588

Smuggling Tunnel Mapping using Slide Image Registration

Okorn, Brian Edward (4 April 2011)
There exist a large number of unmapped tunnels across the US-Mexico border, used primarily for smuggling, which the US Government desires to map using robots. This thesis presents two novel approaches to generating 3D maps of these tunnels. Both algorithms use a frame-invariant point descriptor called the Slide Image, originally developed for underwater SONAR ring localization; the presented algorithms adapt Slide Images to larger, more complex laser scans. Using the Slide Images generated for each 3D laser scan, the first algorithm determines the coordinate transforms needed to fuse the scans. The second algorithm uses the transform generated by the first as an initial mapping, which it fine-tunes using an Iterative Closest Point approach. This fusion algorithm provides the fine-tuned accuracy of the Iterative Closest Point technique while retaining the Slide Image's insensitivity to local minima. Both algorithms are evaluated using a real smuggling tunnel as well as an office environment, and the results are compared with those generated by the existing Iterative Closest Point algorithm. The first algorithm outperformed the Iterative Closest Point algorithm in the smuggling-tunnel environment but encountered difficulty mapping the intersections in the office environment. The fusion algorithm clearly outperformed both the Slide Image algorithm and the Iterative Closest Point algorithm in both environments, because it avoided the local minima the Iterative Closest Point algorithm fell into while retaining fine-grained accuracy not possible with the Slide Image algorithm alone.
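
A minimal sketch of a single Iterative Closest Point refinement step of the kind the fusion stage performs, using the standard SVD-based (Kabsch) rigid alignment. The brute-force nearest-neighbor matching and toy data are simplifications; in practice a k-d tree and a good initial transform (here, the Slide Image stage's output) are essential.

```python
# One ICP iteration: match each source point to its nearest target
# point, then solve for the rigid transform (R, t) via SVD.

import numpy as np

def icp_step(source, target):
    # Brute-force nearest neighbors (a k-d tree would be used in practice).
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(dists, axis=1)]

    # Kabsch: align centroids, then recover rotation from the cross-covariance.
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

source = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
target = source + np.array([0.1, 0.05, 0.0])   # a pure translation
R, t = icp_step(source, target)
print(np.round(t, 3))                           # ~[0.1, 0.05, 0.0]
```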
589

Human Performance and the Perception of Actions in Immersive Virtual Environments

McManus, Erin Adams (3 April 2012)
Developing immersive virtual environments that fully mirror the real world means ensuring that the visual stimuli are convincing and accurate and that human performance is unhindered and natural. This work describes two studies that explore human perception and action in virtual reality to aid this development process. The first is an avatar study investigating the effect of adding human characters to a scene in order to improve human performance on three tasks. We find that adding either another character or a self-avatar to a scene does improve performance on visually driven complex tasks. The second study explores human perception of actions, namely underhand throwing, through making judgments on errors added to the trajectories of a thrown ball. We also explore the role of the endpoint of the ball and the importance of visual and motor feedback when making these judgments. We find that there is no difference between a subject's ability to make judgments about errors introduced to the vertical and horizontal initial velocities of the trajectory and that motor or visual feedback alone is sufficient when performing this task.
590

PRACTICAL K-ANONYMITY ON LARGE DATASETS

Podgursky, Benjamin (15 April 2011)
The implicit contract between an individual and a website is that a viewer will remain anonymous unless they choose to identify themselves. On the other hand, there are many advantages to allowing websites to tailor content to viewers based on hints about a person's likely interests and habits. However, as people spend increasing amounts of time engaged in networked and online activities, the line between a person's online presence and their offline identity has blurred. Ideally, the goal of providing personalized internet content and the implicit contract of net-anonymity can be reconciled. This thesis studies how research from the field of privacy-preserving data publishing can be used to employ offline data anonymously for web personalization.

The anonymity models of k-Anonymity and (k,1)-Anonymity, or k-Unlinkability, turn out to be promising models for this problem, and this work studies how to anonymize insight data using them. Rapleaf, a company that helps websites personalize their content, aims to anonymize its data while still keeping it specific enough to be insightful; Rapleaf's personalization dataset is used as a case study for investigating the challenges associated with anonymizing such a dataset.

It is hoped that through the findings reported here, web data can be anonymized while remaining useful, and that organizations will be encouraged to view anonymity and insight as goals that can be equitably balanced rather than as mutually exclusive.
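
A minimal sketch of the k-anonymity property itself: a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records. The field names and records below are illustrative, not drawn from the Rapleaf dataset.

```python
# Check k-anonymity: group records by their quasi-identifier tuple and
# verify every group has at least k members.

from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """records: list of dicts; quasi_identifiers: fields that could link
    a row back to a person (e.g. ZIP prefix, age bracket)."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"zip": "372**", "age": "20-29", "interest": "cycling"},
    {"zip": "372**", "age": "20-29", "interest": "cooking"},
    {"zip": "372**", "age": "30-39", "interest": "hiking"},
]
print(is_k_anonymous(records, ["zip", "age"], k=2))
# False: the ('372**', '30-39') group contains only one record
```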
