
An algebraic framework for compositional design of autonomous and adaptive multiagent systems

Oyenan, Walamitien Hervé January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach / Organization-based Multiagent Systems (OMAS) have been viewed as an effective paradigm for addressing the design challenges posed by today’s complex systems. In those systems, the organizational perspective is the main abstraction, providing a clear separation between agents and systems and allowing a reduction in the complexity of the overall system. To ease the development of OMAS, several methodologies have been proposed. Unfortunately, those methodologies typically require the designer to handle system complexity alone, which tends to lead to ad-hoc designs that are not scalable and are difficult to maintain. Moreover, designing organizations for large multiagent systems is a complex and time-consuming task; design models quickly become unwieldy and thus hard to develop. To cope with these issues, a framework for organization-based multiagent system design based on separation of concerns and composition principles is proposed. The framework uses tools from category theory to construct a formal composition framework around core models from the Organization-based Multiagent Software Engineering (O-MaSE) framework. I propose a formalization of these models, which is then used to establish a reusable design approach for OMAS. This approach allows designers to build large multiagent organizations by reusing smaller composable organizations that are developed separately, providing a scalable approach for designing large and complex OMAS. In this dissertation, the process of formalizing and composing multiagent organizations is discussed. In addition, I propose a service-oriented approach for building autonomous, adaptive multiagent systems. Finally, as a proof of concept, I develop two real-world examples from the domains of cooperative robotics and wireless sensor networks.
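For intuition only, here is a minimal Python sketch of gluing two small organization models along a shared interface, loosely in the spirit of the pushout constructions category theory provides; the organizations (`search`, `rescue`), their goals, and their roles are invented examples, and the dissertation's actual models are far richer.

```python
# Reduced sketch: composing two organizations over a shared interface.
# For sets related by inclusions, the pushout is the union with shared
# elements identified once, which is what this helper computes.
def compose(org_a, org_b, shared):
    """Glue two organization models along their shared interface."""
    return {
        "goals": org_a["goals"] | org_b["goals"],
        "roles": org_a["roles"] | org_b["roles"],
        "interface": shared,  # the elements the two organizations agree on
    }

search = {"goals": {"explore", "report"}, "roles": {"scout"}}
rescue = {"goals": {"report", "assist"}, "roles": {"medic"}}
print(compose(search, rescue, shared={"report"}))
```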

Grasping unknown novel objects from single view using octant analysis

Chleborad, Aaron A. January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / Octant analysis, combined with properties of the multivariate central limit theorem and the multivariate normal distribution, makes it possible to find a reasonable grasping point on an unknown novel object. This thesis’s original contribution is the ability to find progressively improving grasp points in a poor and/or sparse point cloud. It is shown how octant analysis was implemented using common consumer-grade electronics, demonstrating its applicability to home and office robotics. Tests were carried out on three novel objects in multiple poses to determine the algorithm’s consistency and effectiveness at finding a grasp point on those objects. Results from the experiments bolster the idea that applying octant analysis to the grasping point problem is promising and deserves further investigation. Other applications of the technique are also briefly considered.
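As a rough illustration of the idea (not the thesis's exact algorithm), the following NumPy sketch partitions a point cloud into octants around its centroid and returns a per-octant sample mean as a grasp candidate; by the multivariate CLT the sample mean is approximately normal and hence stable even in sparse clouds. The random cloud and the densest-octant heuristic are assumptions for illustration.

```python
# Illustrative octant-analysis sketch on a 3-D point cloud (NumPy only).
import numpy as np

def octant_grasp_candidate(points: np.ndarray) -> np.ndarray:
    """Partition a cloud into octants around its centroid and return the
    mean of the most populated octant as a grasp candidate."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Octant index 0..7 from the sign of each centered coordinate.
    signs = (centered > 0).astype(int)
    octant_ids = signs[:, 0] * 4 + signs[:, 1] * 2 + signs[:, 2]
    best, best_count = None, -1
    for oct_id in range(8):
        members = points[octant_ids == oct_id]
        if len(members) > best_count:
            best, best_count = members, len(members)
    # The sample mean of a populated octant is approximately normally
    # distributed (multivariate CLT), so it varies little between scans.
    return best.mean(axis=0)

cloud = np.random.rand(500, 3)  # stand-in for a sensed point cloud
print(octant_grasp_candidate(cloud))
```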

Building and using a model of insurgent behavior to avoid IEDs in an online video game

Rogers-Ostema, Patrick J. January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / IEDs are a prevailing threat to today’s armed forces and civilians. With some IEDs well concealed and planted days or weeks prior to detonation, their presence is extremely difficult to detect. Remotely triggered IEDs do offer an indirect method of detection: an insurgent must monitor the IED’s kill zone and detonate the device once the intended target is in range. Within the safe confines of a video game, we can model the behavior of an insurgent using remotely triggered IEDs. Specifically, we can build a model of the sequence of actions an insurgent goes through immediately before detonating an IED, and use that model to recognize the behavior an insurgent would exhibit beforehand. Once the danger level reaches a certain threshold, we can react by changing our original course to a new one that does not cross the area where we believe an IED to be. We show proof of concept by having human players take on the role of an insurgent in an online video game in which they try to destroy an autonomous agent. Tactics that succeed for the autonomous agent should then be good tactics in the real world as well.
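A minimal sketch of the threshold mechanism follows; the action names, weights, and threshold value are invented for illustration and are not the model built in the thesis.

```python
# Threshold-based reaction to recognized insurgent behavior (sketch).
DANGER_WEIGHTS = {          # evidence contributed by each observed action
    "loiter_near_road": 0.2,
    "watch_kill_zone": 0.3,
    "raise_device": 0.5,
}
THRESHOLD = 0.7             # illustrative trigger level

def danger_level(observed_actions):
    """Accumulate evidence from the observed action sequence."""
    return sum(DANGER_WEIGHTS.get(a, 0.0) for a in observed_actions)

def choose_route(observed_actions, planned, detour):
    # Replan once the accumulated danger crosses the threshold.
    return detour if danger_level(observed_actions) >= THRESHOLD else planned

print(choose_route(["loiter_near_road", "watch_kill_zone", "raise_device"],
                   planned="main_road", detour="side_street"))
```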

Design of a patient monitoring system using 3D accelerometer sensors

Kallem, Devi Shravanthi January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / The Patient Monitoring System is a wireless sensor network application for dynamically tracking a patient’s physical activity using 3D accelerometer sensors on the Sun Small Programmable Object Technology (SPOT) platform. The system can detect different postures of a person and recognize high-level actions performed by a patient by monitoring different patterns of postures. This activity can be monitored remotely from a nurse station or a handheld device, and the system can alert the nurse station in a hospital if a patient performs an abnormal action. In the proposed system, Sun SPOTs are affixed to a person's chest, thigh, leg, and arm. The application determines a person's posture by sensing the acceleration and tilt values of each SPOT along the X, Y, and Z axes. Based on these values, the application can determine postures such as lying down, sitting, standing, walking, bending, and arm moving. We provide mechanisms for users to define high-level actions, such as “attempting to get up from a lying-down position”, in terms of patterns of lower-level posture sequences. The system detects these patterns in the posture sequences reported by the Sun SPOTs and reports them at the desired locations.
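The sketch below illustrates the two layers described above: classifying a posture from tilt values and detecting a high-level action as a pattern in the posture stream. The threshold angles and the pattern itself are assumptions, not the system's actual rules.

```python
# Hedged sketch of posture classification and pattern detection.
def classify_posture(chest_tilt_deg: float, thigh_tilt_deg: float) -> str:
    """Map two tilt angles (degrees from vertical) to a coarse posture."""
    if chest_tilt_deg > 60 and thigh_tilt_deg > 60:
        return "lying_down"
    if chest_tilt_deg < 30 and thigh_tilt_deg > 60:
        return "sitting"
    return "standing"

def detect_action(postures, pattern=("lying_down", "sitting", "standing")):
    """True when the posture stream contains the low-level pattern as a
    subsequence, e.g. 'attempting to get up from a lying-down position'."""
    it = iter(postures)
    return all(any(p == step for p in it) for step in pattern)

stream = ["lying_down", "lying_down", "sitting", "standing"]
print(detect_action(stream))  # True -> alert the nurse station
```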

Applying model-based testing to network monitor user interface

Panday, Ashish January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / This report is a case study of applying a model-based testing approach, using Spec Explorer (a model-based testing tool developed by Microsoft), to test a component of Microsoft Network Monitor. The system under test is the UI of the Network Monitor Parser Profiles Management feature. Model-based testing is a methodology for automated testing that automates not only test execution but also test design and generation. The approach starts by expressing an abstract model of the system: a smaller subset of the product's behavior that retains the essential elements forming the focus of the testing problem. A model-based testing tool creates a finite state machine from the model, which is traversed to produce test cases. This provides more efficient coverage and greater flexibility in developing and maintaining test cases.
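The following toy sketch illustrates the core mechanism: deriving test sequences by traversing a finite state machine built from a model. The states and actions are invented stand-ins for the Parser Profiles UI, and Spec Explorer itself works from C# model programs rather than Python.

```python
# Toy model-based test generation: enumerate paths through an FSM.
from collections import deque

FSM = {  # state -> {action: next_state}
    "NoProfile":    {"create": "ProfileSaved"},
    "ProfileSaved": {"edit": "ProfileSaved", "delete": "NoProfile"},
}

def generate_tests(start, max_depth=3):
    """Breadth-first traversal; every path through the FSM is a test case."""
    tests, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            tests.append(path)
        if len(path) < max_depth:
            for action, nxt in FSM[state].items():
                queue.append((nxt, path + [action]))
    return tests

for case in generate_tests("NoProfile"):
    print(" -> ".join(case))
```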

A visualization framework for patient data and its environment

Ayyagari, Pavani January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / A health care system aims to provide well-monitored patient care. ‘A Visualization Framework for Patient Data and its Environment’ offers a broad portfolio of patient monitoring to help improve patient care: it is an application for visually monitoring and analyzing any patient’s activity over a period of time, in conjunction with aspects of the surroundings such as the room temperature and the status of lights in the room. A set of sensors attached to each patient records data pertaining to the patient’s movement and location, while additional sensors, such as temperature and light sensors, record the room temperature and the status of lights. Given activity time bounds as input, the application retrieves the appropriate values from the database and displays the patient's position as a continuous stream of images, controlled by a slider, along with the temperature values. A floor map of the hospital, similar to a blueprint, is displayed with a graphical depiction of the lights, also driven by the slider. Patient locations are depicted on the map by small icons, each labeled with a patient id for identification. Individual window frames for each patient, displaying that patient's position, let the user focus monitoring on specific patients at any instant and thus track every move of a patient over a considerable period of time. The patient locations on the map, the lights in the rooms on the floor map, the patient positions in the individual windows, and the temperature are all synchronized with the slider, whose movement is a function of time. The application allows monitoring of values that correspond closely to real-time data, maximizing the scope for improvements in the patient's progress. The application is implemented on the Java platform using Swing and is expected to handle data spanning up to two days.
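As a sketch of the slider synchronization described above (the record layout and field names are assumptions), mapping a slider position to a timestamp and fetching the latest record at or before it might look like this:

```python
# Slider-to-data synchronization sketch.
import bisect

records = [  # (timestamp_seconds, patient_id, (x, y) position, room_temp_c)
    (0,  "P1", (2, 3), 21.5),
    (30, "P1", (2, 4), 21.6),
    (60, "P1", (5, 4), 21.8),
]
timestamps = [r[0] for r in records]

def record_at(slider_fraction: float, t_start: float, t_end: float):
    """Map a slider position in [0, 1] to the latest record at that time."""
    t = t_start + slider_fraction * (t_end - t_start)
    i = bisect.bisect_right(timestamps, t) - 1
    return records[max(i, 0)]

print(record_at(0.5, 0, 60))  # record shown when the slider is halfway
```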

Image classification with dense SIFT sampling: an exploration of optimal parameters

Chavez, Aaron J. January 1900 (has links)
Doctor of Philosophy / Department of Computer Science / David A. Gustafson / In this paper we evaluate a general form of image classification algorithm based on dense SIFT sampling. This algorithm is present in some form in most state-of-the-art classification systems. However, in this algorithm, numerous parameters must be tuned, and current research provides little insight into effective parameter tuning. We explore the relationship between various parameters and classification performance. Many of our results suggest that there are basic modifications which would improve state-of-the-art algorithms. Additionally, we develop two novel concepts, sampling redundancy and semantic capacity, to explain our data. These concepts provide additional insight into the limitations and potential improvements of state-of-the-art algorithms.
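A dense-sampling baseline of the kind evaluated here can be sketched with OpenCV; the grid step and patch size below are illustrative stand-ins for the parameters the dissertation tunes, and `cv2.SIFT_create` requires opencv-python >= 4.4.

```python
# Dense SIFT sketch: descriptors on a regular grid, not detected keypoints.
import cv2
import numpy as np

def dense_sift(img_gray, step=8, patch_size=16):
    """Compute SIFT descriptors at fixed grid locations, as in
    dense-sampling classification pipelines."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), float(patch_size))
                 for y in range(patch_size // 2, img_gray.shape[0], step)
                 for x in range(patch_size // 2, img_gray.shape[1], step)]
    keypoints, descriptors = sift.compute(img_gray, keypoints)
    return descriptors  # one 128-D vector per grid point

img = np.random.randint(0, 255, (128, 128), dtype=np.uint8)  # stand-in image
print(dense_sift(img).shape)
```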

SCORM based learning management system for online training

Garg, Anubha January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell Neilsen / The Biosecurity Research Institute (BRI) facility at Kansas State University is a large biocontainment facility for conducting research on infectious diseases that pose a threat to plant, animal, and human health. The BRI Training and Education Program is currently offline; i.e., classroom sessions are held to provide this training and education. The objective of this project was to move the entire BRI training and education module online, instead of holding a classroom session for each training course with a subject matter expert (SME) leading the session. The aim is to develop an online training system synchronized with the information in the BRI Research Project Database. Employees only have to log in to the website, scroll through the list of courses they are enrolled in, take the courses, complete the assignments/quizzes for each course, and submit them. They can also self-enroll in courses if given permission to do so. The SMEs for the courses can create new courses, upload course materials, enroll users, and assign deadlines for course completion. Once a student submits a course quiz or assignment, the SMEs can grade it, assign a final grade, and give feedback on the student's performance; they can even reassign the course in case of poor performance. The administrators of the website can assign roles to different personnel, grant permissions according to need, add or delete courses, and change the appearance of the website. The project is built on Moodle (Modular Object-Oriented Dynamic Learning Environment), an online learning management system designed to allow interaction between teachers and students. The back-end database is SQL Server 2008 R2, and Adobe Presenter (with Microsoft PowerPoint 2010) is used to create the courses in SCORM format.

Using density-based clustering to improve skeleton embedding in the Pinocchio automatic rigging system

Wang, Haolei January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / Automatic rigging is a targeting approach that takes a 3-D character mesh and an adapted skeleton and automatically embeds the skeleton into the mesh. Automating the embedding step provides savings over traditional character rigging approaches, which require manual guidance, at the cost of occasional errors in recognizing parts of the mesh and aligning bones of the skeleton with it. In this thesis, I examine the problem of reducing such errors in an auto-rigging system and apply a density-based clustering algorithm to correct errors in a particular system, Pinocchio (Baran & Popovic, 2007). I show how the density-based clustering algorithm DBSCAN (Ester et al., 1996) can filter out impossible vertices to correct errors at character extremities (hair, hands, and feet) and errors resulting from clothing that hides extremities such as legs.
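A minimal sketch of the filtering step with scikit-learn's DBSCAN follows; the `eps` and `min_samples` values and the synthetic vertices are illustrative, and in actual use the input would be vertex coordinates from the mesh Pinocchio processes.

```python
# Filter sparse stray vertices before skeleton embedding (sketch).
import numpy as np
from sklearn.cluster import DBSCAN

def filter_vertices(vertices: np.ndarray, eps=0.05, min_samples=10):
    """Keep only vertices in dense clusters; DBSCAN labels sparse
    outliers (e.g. stray hair or clothing geometry) as -1."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(vertices)
    return vertices[labels != -1]

rng = np.random.default_rng(0)
body = rng.normal(0.0, 0.02, size=(500, 3))   # dense body-like cluster
stray = rng.uniform(-1, 1, size=(20, 3))      # sparse impossible vertices
verts = np.vstack([body, stray])
print(len(filter_vertices(verts)), "of", len(verts), "vertices kept")
```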

Parallelization of backward deleted distance calculation in graph based features using Hadoop

Pillamari, Jayachandran January 1900 (has links)
Master of Science / Department of Computing & Information Sciences / Daniel Andresen / This project presents an approach to parallelizing the calculation of Backward Deleted Distance (BDD) in Graph-Based Feature (GBF) computation using Hadoop. The issues involved in calculating BDD are identified, and parallel computing technologies such as Hadoop are applied to solve them. The project introduces a new algorithm for parallelizing the APSP problem in BDD calculation using Hadoop's MapReduce feature, and is implemented in Java on Hadoop. The aim is to parallelize the calculation of BDD and thereby reduce GBF computation time. The process of BDD calculation was examined to identify the key places where it could be parallelized. Since BDD calculation involves finding the shortest paths between all pairs of given users, it can be viewed as an All-Pairs Shortest Path (APSP) problem. The internal structure and implementation of the Hadoop MapReduce framework was studied and applied to the APSP problem. GBF features are one of the feature sets used in ontology classifiers; in this project, they are used to predict the friendship relationship between users whose direct link has been deleted. The computation involves calculating the BDD between all pairs of users, where the BDD for a user pair is the shortest path between them when their direct link is deleted; in real terms, it is the shortest distance between them other than the direct path. The project uses training and test data sets consisting of positive and negative instances: positive instances are user pairs with a friendship link between them, whereas negative instances have no direct link. Apache Hadoop is an emerging technology for scalable, distributed computing across clusters of computers; its MapReduce framework supports applications that process large amounts of data in parallel on large clusters. The system was developed and implemented successfully and was tested for reliability and performance on different data sets, varying several factors and typical graph representations. Analysis of the test results shows good speedup, reducing processing time from 10 hours to 20 minutes.
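The relaxation round that such a MapReduce job repeats can be emulated in plain Python as below; the real implementation is Java on a Hadoop cluster, the toy graph is invented, and APSP is obtained by running the computation once per source node.

```python
# Plain-Python emulation of one MapReduce round of shortest-path
# relaxation (parallel Bellman-Ford), the building block parallelized
# with Hadoop.
from collections import defaultdict

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
INF = float("inf")

def map_phase(distances):
    """Each node emits its current distance plus tentative distances
    for its neighbors (one relaxation step)."""
    for node, dist in distances.items():
        yield node, dist
        if dist < INF:
            for nbr, w in graph[node].items():
                yield nbr, dist + w

def reduce_phase(emitted):
    """Keep the minimum distance emitted for each node."""
    best = defaultdict(lambda: INF)
    for node, dist in emitted:
        best[node] = min(best[node], dist)
    return dict(best)

dist = {"A": 0, "B": INF, "C": INF}
for _ in range(len(graph) - 1):  # iterate rounds until distances stabilize
    dist = reduce_phase(map_phase(dist))
print(dist)  # {'A': 0, 'B': 1, 'C': 2}
```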
