  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
531

A DEFECT-CENTRIC OPEN-SOURCE LIFECYCLE MODEL

Nuttall, Brandon O'Dell 02 March 2006 (has links)
<p>The fact that all software has defects is one of the few things that are true across all software projects. Traditionally, few proprietary software project managers would risk cultivating a reputation for producing "buggy" code by making details about the defects in their products public. However, products such as Linux, Apache, and Mozilla have turned this attitude on its head by laying bare for inspection not only their source code but also the inner workings of their development processes. This thesis takes advantage of this openness by modeling open-source software development with a defect-centric approach. First, a framework for measuring the productivity of contributors is defined. These measurements allow contributors to be divided into groups identified by the activities they perform. Many of these activities are well-known; however, the activity of characterization is unique enough to warrant further attention and it is described along with its artifact, the trace path, in detail. Finally, predictions made by the model are tested using data from the Mozilla project, and the model is corroborated.</p>
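A contributor-productivity framework of the kind this abstract mentions can be pictured as bookkeeping over a project's public defect log. The sketch below is our own illustration, not the thesis's actual framework; the contributor names, activity taxonomy, and event data are all invented:

```python
from collections import Counter

# Hypothetical bug-tracker event log: (contributor, activity) pairs.
# The activity names here are invented for this sketch and are not
# the thesis's actual categories.
events = [
    ("alice", "report"), ("bob", "triage"), ("alice", "report"),
    ("carol", "fix"), ("bob", "characterize"), ("carol", "fix"),
    ("dave", "characterize"), ("alice", "fix"),
]

def productivity(events):
    """Per-contributor counts of each defect-handling activity."""
    counts = {}
    for who, what in events:
        counts.setdefault(who, Counter())[what] += 1
    return counts

def primary_activity(counts):
    """Group contributors by the activity they perform most often."""
    return {who: c.most_common(1)[0][0] for who, c in counts.items()}

counts = productivity(events)
groups = primary_activity(counts)
```

Grouping by most-frequent activity is one simple way such measurements could divide contributors; the real model presumably uses richer criteria.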
532

Cartoon Textures: Re-Using Traditional Animation via Methods for Segmentation, Re-Sequencing, and Inbetweening

de Juan, Christina Nereida 07 April 2006 (has links)
A large body of traditional animation exists, both from television and film, which contains many interesting characters and animation sequences. This dissertation shows how to incorporate that body of animation into motion libraries by making them re-usable. The most desirable qualities of traditional animation are the nuances an artist adds to each character, giving that character personality and style. As such, the focus is on semi-automatic techniques that allow the re-use of traditional animation, yet include the artist at every step of the process. The objective is to provide a method of re-using traditional animation by creating novel animations from a library of existing hand-drawn cartoons. Sequences of similar-looking cartoon data are combined into a user-directed animation. This dissertation first addresses the issue of preparing the cartoon images to be incorporated into a motion library. Three methods are investigated for segmenting the cartoon images from their backgrounds: an ad hoc method, level sets, and support vector machines. We find that support vector machines are robust to artifacts in the cartoon images and are able to segment full-size images in a few seconds. Secondly, a method of nonlinear dimensionality reduction is applied to the cartoon images to discover a lower-dimensional manifold of the data. This manifold is traversed to create new sequences of cartoon animation, maintaining a model-free method, i.e., no a priori knowledge of the drawing or character is required. Finally, a radial basis function implicit surface modeling technique and a fast non-rigid elastic registration algorithm are combined to provide inbetween contours and textures given two key images of traditional animation.
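As a rough illustration of the segmentation step, the sketch below trains a minimal linear SVM (via the Pegasos subgradient method) to label pixels as character or background. The pixel data is synthetic and the setup is far simpler than the dissertation's full-frame classifier:

```python
import numpy as np

# Minimal linear SVM trained with the Pegasos subgradient method.
# Each "pixel" is an (R, G, B) triple; the character is assumed to be
# brighter than the background. All data here is synthetic.
rng = np.random.default_rng(0)
bg = rng.normal(0.2, 0.05, size=(200, 3))    # background pixel colors
fg = rng.normal(0.8, 0.05, size=(200, 3))    # character pixel colors
X = np.hstack([np.vstack([bg, fg]), np.ones((400, 1))])  # bias feature
y = np.array([-1] * 200 + [1] * 200)         # SVM labels in {-1, +1}

def train_svm(X, y, lam=0.01, epochs=30):
    """Pegasos: stochastic subgradient descent on the hinge loss."""
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            if y[i] * (X[i] @ w) < 1:        # margin violated: full step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # otherwise only regularize
                w = (1 - eta * lam) * w
    return w

w = train_svm(X, y)

def segment(pixel):
    """Classify one pixel: +1 = character, -1 = background."""
    return 1 if np.append(pixel, 1.0) @ w > 0 else -1
```

A real pipeline would use richer per-pixel features and a kernelized SVM; this linear toy only conveys the train-then-predict shape of the approach.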
533

Constraint Programming Approach to the TAEMS Scheduling Problem

Datta, Soumita 14 April 2006 (has links)
Planning and scheduling are well-recognized research areas in the field of AI that address goal-directed problem solving. They deal with choosing a course of action to achieve a goal contingent upon some sequencing and temporal constraints. TAEMS (an acronym for Task Analysis, Environment Modeling, and Simulation) is a modeling language for describing the task structures of agents. The TAEMS planning and scheduling problem is a particular case in which the actions that need to be scheduled to accomplish a root task are presented in a graph-like structure. The problem is NP-hard, requiring search through a possibly exponentially sized solution space.<p> This thesis aims at generating the basic initial schedule for a TAEMS-style objective task structure using constraint programming techniques. Solving this initial planning and scheduling problem with constraint programming involves encoding the TAEMS problem as a Constraint Satisfaction Problem (CSP), solving the CSP using the various search techniques of a solver, and decoding the solution into a TAEMS plan and schedule. The advantage of the constraint programming approach is that the built-in search techniques of a solver can be utilized instead of implementing hand-crafted algorithms in a high-level language. The thesis explains the techniques developed and provides the results of an experimental evaluation.
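The encode-solve-decode pipeline can be pictured with a toy constraint-satisfaction problem. The task names, durations, and precedence constraints below are invented, and this is plain backtracking search rather than the TAEMS formalism or an industrial solver:

```python
# Toy CSP encoding of a scheduling problem: each task gets a start-time
# variable, and precedence constraints restrict consistent assignments.
# All task data here is illustrative, not TAEMS itself.
durations = {"gather": 2, "analyze": 3, "report": 1}
precedes = [("gather", "analyze"), ("analyze", "report")]  # A before B
horizon = 8
domains = {m: range(0, horizon - d + 1) for m, d in durations.items()}

def consistent(assign):
    """A partial assignment is consistent if no precedence is violated."""
    return all(
        a not in assign or b not in assign
        or assign[a] + durations[a] <= assign[b]
        for a, b in precedes
    )

def backtrack(assign, tasks):
    """Depth-first search over start times, pruning inconsistent branches."""
    if not tasks:                       # every variable assigned: a schedule
        return dict(assign)
    var, rest = tasks[0], tasks[1:]
    for start in domains[var]:
        assign[var] = start
        if consistent(assign):
            result = backtrack(assign, rest)
            if result:
                return result
        del assign[var]
    return None

schedule = backtrack({}, list(durations))
```

A real CSP solver adds constraint propagation and variable-ordering heuristics on top of this basic search; the point here is only the encode/solve/decode shape.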
534

Optimization Techniques for Enhancing Middleware Quality of Service for Product-line Architectures

Krishna, Arvind 01 December 2005 (has links)
Product-line architectures (PLAs) are an emerging paradigm for developing software families by customizing reusable artifacts, rather than hand-crafting the software from scratch. In this paradigm, each product variant is assembled, configured, and deployed based on specifications of the required features and service-level agreements. To reduce the effort of developing software PLAs and product variants, it is common to leverage general-purpose -- ideally standard -- middleware platforms. These middleware platforms provide reusable services and mechanisms (such as connection management, data transfer protocols, concurrency control, demultiplexing, marshaling/demarshaling, and error-handling) that support a broad range of application requirements (such as efficiency, predictability, and minimizing end-to-end latency). A key challenge faced by developers of software PLAs is how to optimize standards-based -- and thus largely application-independent -- middleware to support the application-specific quality of service (QoS) needs of different product variants created atop a PLA. This dissertation provides four contributions to research on optimizing middleware for PLAs. First, it describes the evolution of optimization techniques for enhancing application-independent middleware to support the application-specific QoS needs of PLAs. Second, it presents a taxonomy that categorizes the evolution of this research in terms of (1) applicability, i.e., are the optimizations applicable across variants or specific to a variant, and (2) binding time, i.e., when are the optimizations applied during the middleware development lifecycle. Third, this taxonomy is applied to identify key challenges that have not been resolved by current research on PLAs, including reducing the complexity of subsetting, configuring, and specializing middleware for PLAs to satisfy the QoS requirements of product variants. 
Finally, the dissertation describes the OPTEML solution approach that synergistically addresses key unresolved research challenges via optimization strategies that encompass pattern-oriented, model-driven development, and specialization techniques to enhance the QoS and flexibility of middleware for PLAs. These optimizations have been prototyped, integrated, and validated in the context of several representative applications using middleware developed with Real-time Java and C++.
535

Learning by Teaching Agents

Katzlberger, Thomas 10 January 2006 (has links)
We present the design and implementation of an intelligent learning environment using an innovative multi-agent architecture scheme derived from the learning by teaching paradigm. Sixth grade students, who are domain novices, take on the challenge of teaching a computer-based software agent how to solve distance-rate-time problems by constructing graphs, and learn about the domain in this process. The system was evaluated in a Metro Nashville 6th grade classroom. Our experiments contrasted the learning, transfer, and motivation of students in two conditions: (1) those who learnt for themselves and (2) those who learnt by teaching agents within our environment. Our results showed that both groups improved in their word problem solving abilities, but there were no significant differences in the performance of the two groups. However, students in the learning by teaching condition demonstrated more motivation to learn, and showed better ability to transfer their knowledge of rate problems to a second domain. The motivation to learn was derived from self-reporting measures that included higher task value, self-regulation, self-efficacy, and critical thinking. The ability to transfer was measured in terms of students' performance on rate problems associated with filling measurement cylinders with water. In addition, a survey conducted at the end of the study established that students who taught reported that they liked the system better and had more fun using it for problem solving tasks. However, we also found that our implementation of the learning by teaching approach resulted in the students having to spend a lot of additional time interacting with the teachable agent using menu-based dialog structures. This is an issue that we will address in future designs of this system. Overall, the results of our study confirmed that interacting with teachable social agents had influenced middle school students positively, which also confirmed the viability of the learning by teaching agents approach and our design of the teachable agent.
536

MODEL-BASED FRAMEWORK TO DESIGN QoS ADAPTIVE DRE APPLICATIONS

Mujumdar, Sujata 02 December 2005 (has links)
Performance-critical distributed systems, especially distributed real-time and embedded (DRE) systems, have been proliferating in the past few decades. Designing and implementing DRE systems is significantly challenging due to factors such as the real-time, reactive nature of the application, the distribution of the application's software components over a set of potentially resource-constrained hosts, and unpredictable network and environmental conditions. An important aspect of these systems is the necessity to guarantee a certain level of performance in terms of Quality of Service (QoS). Failure to meet QoS guarantees may result in severe consequences, including mission failures. The current state of the art uses ad hoc methods to adaptively meet the design requirements for provision of QoS in DRE systems. However, these methods are not systematic enough to ensure system integrity or reusability. In this thesis, we propose a model-based framework called the Dynamic QoS Modeling Environment (DQME) that provides a formal approach, based on control theory, for satisfying QoS requirements. This framework provides design-time methodologies that facilitate an effective representation of QoS design and adaptation strategies along with the functional aspects of the system. These formal design-time adaptations are used to develop run-time adaptations. Code generators are provided for automatic synthesis of code that represents the behavior of the designed controllers and may be used in the low-level implementation frameworks.
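The control-theoretic flavor of such run-time adaptation can be sketched with a proportional controller that steers a measured latency toward a QoS set point. The plant model, gain, and set point below are illustrative assumptions, not DQME itself:

```python
# A proportional controller adjusts an application's offered rate so
# that measured latency tracks a QoS set point. The linear "plant"
# model and the gain are invented for this sketch.
SETPOINT_MS = 50.0   # desired end-to-end latency (assumed)
KP = 0.1             # proportional gain (assumed)

def plant(rate):
    """Stand-in system model: latency grows linearly with offered rate."""
    return 2.0 * rate

def control_step(rate, measured_latency):
    """Shed load when over the latency budget, add load when under it."""
    error = measured_latency - SETPOINT_MS
    return max(1.0, rate - KP * error)

rate = 40.0
for _ in range(100):                 # closed loop: measure, then adjust
    rate = control_step(rate, plant(rate))
```

With this plant and gain the loop converges geometrically to the rate where latency equals the set point; a designed controller would be derived from an identified system model rather than guessed constants.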
537

AIDING THE DEPLOYMENT AND CONFIGURATION OF COMPONENT MIDDLEWARE IN DISTRIBUTED REAL-TIME AND EMBEDDED SYSTEMS

Paunov, Stoyan G. 20 April 2006 (has links)
Thesis under the supervision of Professor Douglas C. Schmidt: <p> Historically, enterprise distributed real-time and embedded (DRE) systems were developed atop operating systems and protocols. These traditional methods were, however, replaced by stacks of middleware technologies in order to reuse existing architectural and design principles and avoid reinventing and reimplementing core distributed infrastructure capabilities and services. The most recent wave of middleware technologies offers higher-level abstractions, such as component models, web services, and model-driven middleware. <p> Although component middleware technologies successfully address many of the problems of previous generations of inflexible, monolithic, functionally designed, and stove-piped enterprise DRE systems, they also introduce new challenges associated with the higher flexibility and configurability of the system, the manageability of the large number of deployment and configuration artifacts, and the ability of the system to evolve in response to an improved understanding of the domain or to feedback from end-to-end quality-of-service performance testing. <p> This thesis first discusses how component repositories can be used to resolve many of the newly arisen deployment and configuration complexities in component-based middleware. Next, it shows how Model-Driven Development (MDD) technologies can be applied to mitigate the complexities associated with configuring component middleware for quality of service.
538

A SEMANTIC ANCHORING INFRASTRUCTURE FOR MODEL-INTEGRATED COMPUTING

Chen, Kai 07 June 2006 (has links)
Model-Integrated Computing (MIC) is an approach for model-based design of embedded software and systems. MIC places strong emphasis on the use of domain-specific modeling languages (DSMLs) and model transformation techniques in design flows. Metamodeling facilitates the rapid, inexpensive development of DSMLs. However, the semantics specification for DSMLs is still a hard problem. In this thesis, we propose a semantic anchoring infrastructure including a set of reusable semantic units that provide reference semantics for basic behavioral categories using the Abstract State Machine framework. A tool suite for the semantic anchoring methodology is developed to facilitate the transformational specification of DSML semantics. If the semantics of a DSML can be directly mapped onto one of the basic behavioral categories, its semantics can be defined by simply specifying the semantic anchoring rules between the DSML and a semantic unit. However, in heterogeneous systems, the semantics is not always fully captured by a predefined semantic unit. Specifying the semantics from scratch is not only expensive but also loses the advantages of anchoring the semantics to a set of common and well-established semantic units. Therefore, we extend the semantic anchoring framework to heterogeneous behaviors by developing an approach for the composition of semantic units. The compositional semantics specification approach reduces the required effort from DSML designers and improves the quality of the specification. This thesis also includes three case studies for different purposes. The FSM domain in Ptolemy is used as a case study to explain the semantic anchoring methodology and to illustrate how the semantic anchoring tool suite is applied to design DSMLs. The Timed Automata Semantic Unit is defined as an example to illustrate how to specify semantic units. 
An industrial-strength modeling language, EFSM, is employed as a case study to explain the compositional semantics specification approach.
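One way to picture a semantic unit is as a generic interpreter for a behavioral category, onto which a DSML model is mapped. The sketch below is our own toy rendering of that idea for the FSM category; the traffic-light model and its events are invented, and real semantic units are specified as Abstract State Machines rather than Python classes:

```python
# A toy "semantic unit" for the finite-state-machine behavioral
# category: a generic interpreter over states and transitions.
# Anchoring a DSML model means translating it into this vocabulary.
class FSMUnit:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions   # (state, event) -> next state

    def step(self, event):
        """Fire the transition for (state, event), if one is defined."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A hypothetical DSML model (a traffic light) anchored onto the unit.
light = FSMUnit("red", {
    ("red", "go"): "green",
    ("green", "caution"): "yellow",
    ("yellow", "stop"): "red",
})
for ev in ["go", "caution", "stop"]:
    light.step(ev)
```

The value of the scheme is that many DSMLs can share one well-studied interpreter like this, so only the mapping rules need to be written per language.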
539

Multi-Robot Coalition Formation

Vig, Lovekesh 17 October 2006 (has links)
As the multi-robot community strives towards greater autonomy, there is a need for systems that allow robots to autonomously form teams and cooperatively complete assigned missions. The corresponding problem with software agents has received considerable attention from the multi-agent community and is also known as the 'coalition formation problem'. Numerous coalition formation algorithms have been proposed that allow software agents to coalesce and perform tasks that would otherwise be too burdensome for a single agent. Coalition formation behaviors have also been discussed in relation to game theory. Despite the plethora of coalition formation algorithms in the literature, to the best of our knowledge none of the proposed algorithms have been demonstrated with an actual multiple robot system. Currently, there exists a divide between the software-agent coalition formation algorithms and their applicability to the multi-robot domain. This dissertation aims to bridge that divide by unearthing the issues that arise while attempting to tailor these algorithms to the multi-robot domain. A well-known multi-agent coalition formation algorithm was studied in order to identify the necessary modifications to facilitate its application to the multi-robot domain. The modified algorithm was then demonstrated on a set of real world robot tasks. The notion of coalition imbalance was introduced and its implications with respect to team performance and fault tolerance were studied both for the multi-robot foraging and soccer domains. Results suggest an interesting correlation between performance and balance across both the foraging and soccer domains. Balance information was also utilized to improve overall team performance in these domains. The balance coefficient metric was devised for quantifying balance in multi-robot teams. 
Finally, this dissertation introduces RACHNA, a market-based coalition formation system that leverages the inherent redundancy in robot sensory capabilities to enable a more tractable formulation of the coalition formation problem. The system allows individual sensors to be valued on the basis of demand and supply, by allowing for competition between the tasks. RACHNA's superiority over simple task allocation techniques was demonstrated in simulation experiments and the idea of preempting complex multi-robot tasks was explored.
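The abstract names a balance coefficient without giving its formula. One plausible formalization (our assumption, not necessarily the dissertation's metric) scores a team by how evenly work or capability is spread among its members:

```python
import math

# Hypothetical balance coefficient based on the coefficient of
# variation of per-robot contributions: 1.0 means a perfectly
# balanced coalition, lower values mean a more skewed one.
def balance_coefficient(loads):
    mean = sum(loads) / len(loads)
    if mean == 0:
        return 1.0                       # an idle team is trivially even
    var = sum((x - mean) ** 2 for x in loads) / len(loads)
    cv = math.sqrt(var) / mean           # spread relative to the mean
    return 1.0 / (1.0 + cv)

balanced = balance_coefficient([3, 3, 3])   # identical contributions
skewed = balance_coefficient([8, 1, 0])     # one robot does everything
```

Any monotone measure of evenness would support the kind of performance-versus-balance comparison the dissertation reports; this one is chosen only for simplicity.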
540

AN EVALUATION OF MACHINE LEARNING TECHNIQUES IN INTRUSION DETECTION

Lee, Christina Mei-Fang 05 March 2007 (has links)
Intrusion detection allows an organization to monitor its network for possible attacks. The ability of an intrusion detection system (IDS) to distinguish correctly between attacks and normal activity is important. The use of machine learning algorithms is an active area of study in intrusion detection. Experiments have been performed with Naive Bayes, Decision Trees, and Artificial Neural Networks (ANNs) using an intrusion detection dataset. Naive Bayes and Decision Tree algorithms programmed in Python are used, as well as the Weka Naive Bayes, J48 Decision Tree, and Multilayer Perceptron algorithms. Several subsets of the 1999 KDD Cup dataset are used to perform these experiments. An evaluation of the results, with special attention to approaches in evaluating false positives and negatives, is discussed. A novel approach to evaluating these results is shown.
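A miniature Gaussian Naive Bayes classifier of the kind evaluated here can be hand-rolled in a few lines. The two-feature connection records below (say, duration and bytes transferred) are synthetic stand-ins, not the KDD Cup data:

```python
import math

# Gaussian Naive Bayes: model each feature per class as an independent
# normal distribution, then pick the class with the higher likelihood.
# The training records are invented for illustration.
normal = [(1.0, 100.0), (1.2, 120.0), (0.9, 90.0)]
attack = [(9.0, 9000.0), (8.5, 8700.0), (9.5, 9300.0)]

def fit(rows):
    """Per-feature mean and variance for one class."""
    stats = []
    for col in zip(*rows):
        m = sum(col) / len(col)
        v = sum((x - m) ** 2 for x in col) / len(col) + 1e-9
        stats.append((m, v))
    return stats

def log_likelihood(x, stats):
    """Sum of per-feature Gaussian log densities (naive independence)."""
    return sum(
        -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        for xi, (m, v) in zip(x, stats)
    )

model = {"normal": fit(normal), "attack": fit(attack)}

def classify(x):
    return max(model, key=lambda c: log_likelihood(x, model[c]))

label = classify((8.8, 8900.0))
```

In an IDS setting the interesting questions begin after this step: a "normal" prediction on an attack record is a false negative, which is exactly the kind of error the evaluation above weighs specially.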
