121

Deploying multiple sensor applications in a network

Kondam, Sudhir Chander Reddy January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / TinyOS is an open-source, component-based operating system designed for highly memory-constrained wireless embedded sensor networks. TinyOS includes interfaces and components for communication management, routing, and data acquisition that can be refined further for custom applications. This project aims at developing a system that detects overlapping data-collection paths among the different applications in the network and uses that information for efficient data acquisition, avoiding reconfiguration of the entire network of wireless sensor nodes (called motes) for each new application request. The application serving the initial data acquisition request builds a tree over the motes in the network in which each node knows its immediate parent and children. For this initial request the tree is rooted at the base station, and each intermediate node sends data to its parent when a data request is made. Each base station can request light, temperature, and passive infrared sensory data from all or a subset of the motes in the system. When a new base station connects to the network through a mote in the tree, the system reconfigures only those parts of the tree built in the initial phase that do not overlap with the tree required with the new base station as root; all overlapping parts of the tree are left unchanged. We present experimental results to illustrate the efficiency of the approach.
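Hedged illustration only: the project's actual code runs as nesC/TinyOS components on motes, but the tree bookkeeping described above can be sketched in plain Java. The class and method names below (TreeNode, reroot) are hypothetical; the point is that re-rooting touches only the path between the new base station and the old root, leaving the overlapping parts of the tree unchanged.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the collection-tree idea: each mote knows its parent
// and its children, and attaching a new base station re-roots the tree by
// flipping parent links only along the path to the old root.
public class TreeNode {
    final int moteId;
    TreeNode parent;                              // null for the current root
    final List<TreeNode> children = new ArrayList<>();

    TreeNode(int moteId) { this.moteId = moteId; }

    void addChild(TreeNode child) {
        child.parent = this;
        children.add(child);
    }

    // Make this node the root: only the edges between this node and the old
    // root are reversed; the rest of the tree (the overlapping part) stays as is.
    void reroot() {
        if (parent == null) return;               // already the root
        TreeNode oldParent = parent;
        oldParent.reroot();                       // old parent becomes a root first
        oldParent.children.remove(this);          // then reverse the single edge
        this.children.add(oldParent);
        oldParent.parent = this;
        this.parent = null;
    }

    public static void main(String[] args) {
        TreeNode base = new TreeNode(0), a = new TreeNode(1), b = new TreeNode(2);
        base.addChild(a);
        a.addChild(b);
        b.reroot();                               // a new base station attaches at mote 2
        System.out.println("new root: " + b.moteId + ", child: " + b.children.get(0).moteId);
    }
}
```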
122

A compiler front end for GUARDOL -- a domain-specific language for high assurance guards

Hoag, Jonathan January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John M. Hatcliff / Guardol, a domain-specific language (DSL) developed by Rockwell Collins, was designed to streamline the process of specifying, implementing, and verifying Cross Domain Solution (CDS) security policies. Guardol's syntax and intended computational behavior closely resemble the core of many functional programming languages, but a number of features have been added to ease the development of high-assurance cross domain solutions. A significant portion of the formalization and implementation of Guardol's grammar and type system was performed by the SAnToS group at Kansas State University. This report summarizes the key conceptual components of Guardol's grammar and toolchain architecture. The focus of the report is a detailed description of the implementation and formalization of Guardol's type system. A great deal of effort was put into a formalization that provides a high level of assurance that the specification of types and data structures is preserved in the intended implementation.
123

Data logger for medical device coordination framework

Gundimeda, Karthik January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / A software application or a hardware device performs well under favorable conditions, but in practice many factors affect the performance and functioning of a system, so the scenarios in which the system fails or performs well need to be determined. Logging is one of the best methodologies for determining such scenarios, since it can help identify both worst-case and effective performance. Logging levels add flexibility by allowing different kinds of messages to be logged, and determining which messages to log is the key to useful logging: all important events, state changes, and messages should be logged to track the high-level progress of the system. The Medical Device Coordination Framework (MDCF) deals with device connectivity to the MDCF server. In this report, we propose a logging component for the existing MDCF, inspired by the flight data recorder, or "black box", a device that logs every message passing through a flight's systems; this makes it reliable and easy to investigate any failures in the system and also makes it possible to replay scenarios. The important state changes in MDCF include device connection, scenario instantiation, the initial state of the MDCF server, and destination creation. Logging in MDCF is implemented by wrapping the Log4j logging framework, and MDCF logs through the interface provided by the logging component. This implementation facilitates building a more complex logging component for MDCF.
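As a rough sketch of what "wrapping Log4j" can look like, the following facade uses only standard Log4j 1.x calls (Logger.getLogger, info, error); the class and method names (MdcfLogger, deviceConnected, scenarioInstantiated) are hypothetical and are not the actual MDCF API.

```java
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

// Minimal logging facade in the spirit of the MDCF logging component;
// all identifiers here are illustrative, not taken from MDCF itself.
public class MdcfLogger {
    private final Logger logger;

    public MdcfLogger(Class<?> owner) {
        this.logger = Logger.getLogger(owner);
    }

    /** Record a device connecting to the MDCF server. */
    public void deviceConnected(String deviceId) {
        logger.info("device connected: " + deviceId);
    }

    /** Record instantiation of a clinical scenario. */
    public void scenarioInstantiated(String scenarioName) {
        logger.info("scenario instantiated: " + scenarioName);
    }

    /** Record a failure, keeping the stack trace for later investigation/replay. */
    public void failure(String message, Throwable cause) {
        logger.error(message, cause);
    }

    public static void main(String[] args) {
        BasicConfigurator.configure();   // simple console appender for the demo
        MdcfLogger log = new MdcfLogger(MdcfLogger.class);
        log.deviceConnected("pulse-oximeter-01");
        log.scenarioInstantiated("demo-scenario");
    }
}
```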
124

International faculty search

Mudaranthakam, Dinesh pal Indrapal January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / This application enables users to search the database for international faculty members who are currently working in the veterinary department. It also helps users learn more about the faculty members: their specialization, area of expertise, origin, languages spoken, and teaching experience. The main objective of this project is to develop an online application in which faculty members can be searched on three major criteria: the department to which the faculty member belongs, the faculty member's area of expertise, or the country. The application is designed so that a combination of these three drop-down lists also returns results if any exist. The major attraction of this application is that the faculty members are plotted on a world map using the Bing API. A red dot is placed on each country to which faculty members belong, and hovering the mouse pointer over a dot pops up the names of the faculty who hail from that country. These names are hyperlinks that, when clicked, lead to the respective faculty member's profile. This project is implemented in C#.NET on Microsoft Visual Studio 2008, along with XML parsing techniques and XML files that store the faculty member profiles. My primary focus was to become familiar with the .NET framework and to be able to code in C#.NET, and also to learn to use MS Access as the database for storing and retrieving the data.
125

LDA-based dimensionality reduction and domain adaptation with application to DNA sequence classification

Mungre, Surbhi January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Several computational biology and bioinformatics problems involve DNA sequence classification using supervised machine learning algorithms. The performance of these algorithms is largely dependent on the availability of labeled data and on the approach used to represent DNA sequences as feature vectors. For many organisms the labeled DNA data is scarce, while unlabeled data is easily available; however, for a small number of well-studied model organisms, large amounts of labeled data are available. This calls for domain adaptation approaches, which can transfer knowledge from a source domain, for which labeled data is available, to a target domain, for which large amounts of unlabeled data are available. Intuitively, one approach to domain adaptation is to extract and represent the features that the source domain and the target domain sequences share. Latent Dirichlet Allocation (LDA) is an unsupervised dimensionality reduction technique that has been successfully used to generate features for sequence data such as text. In this work, we explore the use of LDA for generating predictive DNA sequence features that can be used in both supervised and domain adaptation frameworks. More precisely, we propose two dimensionality reduction approaches for DNA sequences, LDA Words (LDAW) and LDA Distribution (LDAD). LDA is a generative probabilistic model used to model collections of discrete data such as document collections. For our problem, a sequence is considered to be a "document" and the k-mers obtained from the sequence are its "document words". We use LDA to model our sequence collection; given the LDA model, each document can be represented as a distribution over topics, where a topic can be seen as a distribution over k-mers. In the LDAW method, we use the top k-mers in each topic (i.e., the k-mers with the highest probability) as our features, while in the LDAD method we use the topic distribution to represent a document as a feature vector. We study LDA-based dimensionality reduction for both supervised DNA sequence classification and domain adaptation. We apply the proposed approaches to the splice site prediction problem, an important DNA sequence classification problem in the context of genome annotation. In the supervised learning framework, we study the effectiveness of the LDAW and LDAD methods by comparing them with a traditional dimensionality reduction technique based on the information gain criterion. In the domain adaptation framework, we study the effect of increasing the evolutionary distance between the source and target organisms, and the effect of using different weights when combining labeled data from the source domain with labeled data from the target domain. Experimental results show that LDA-based features can be successfully used to perform dimensionality reduction and domain adaptation for DNA sequence classification problems.
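A minimal sketch of the preprocessing step described above, treating a DNA sequence as a "document" of k-mer "words"; the resulting word lists would then be handed to an LDA implementation to learn topic distributions, which is not shown here. Class and method names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// Turn a DNA sequence into its k-mer "document words" for later LDA modeling.
public class KmerDocument {

    /** Slide a window of length k over the sequence and collect every k-mer. */
    static List<String> toKmers(String sequence, int k) {
        List<String> kmers = new ArrayList<>();
        for (int i = 0; i + k <= sequence.length(); i++) {
            kmers.add(sequence.substring(i, i + k));
        }
        return kmers;
    }

    public static void main(String[] args) {
        // "GATTACAG" with k = 3 -> [GAT, ATT, TTA, TAC, ACA, CAG]
        System.out.println(toKmers("GATTACAG", 3));
    }
}
```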
126

A framework for automatic optimization of MapReduce programs based on job parameter configurations.

Lakkimsetti, Praveen Kumar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / Recently, cost-effective and timely processing of large datasets has been playing an important role in the success of many enterprises and the scientific computing community. Two promising trends ensure that applications will be able to deal with ever-increasing data volumes: first, the emergence of cloud computing, which provides transparent access to a large number of processing, storage, and networking resources; and second, the development of the MapReduce programming model, which provides a high-level abstraction for data-intensive computing. MapReduce has been widely used for large-scale data analysis in the Cloud [5]. The system is well recognized for its elastic scalability and fine-grained fault tolerance. However, even to run a single program in a MapReduce framework, a number of tuning parameters have to be set by users or system administrators to increase the efficiency of the program. Users often run into performance problems because they are unaware of how to set these parameters, or because they don't even know that these parameters exist. With MapReduce being a relatively new technology, it is not easy to find qualified administrators [4]. The major objective of this project is to provide a framework that optimizes MapReduce programs that run on large datasets. This is done by executing the MapReduce program on part of the dataset with each stored parameter combination, configuring the program with the most efficient combination found, and then running the configured program over the full datasets. Many MapReduce programs are used over and over again in applications such as daily weather analysis, log analysis, and daily report generation, so once the parameter combination is set, it can be reused efficiently on a number of datasets. This feature can go a long way towards improving the productivity of users who lack the skills to optimize programs themselves due to unfamiliarity with MapReduce or with the data being processed.
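A small sketch of the selection loop the framework performs, under the assumption that each candidate parameter combination is timed on a data sample and the fastest one is kept; the parameter names and the runJobOnSampleMillis callback are hypothetical stand-ins, not the project's actual API.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToLongFunction;

// Pick the stored parameter combination that runs fastest on a dataset sample.
public class ParameterSearch {

    static Map<String, String> pickBest(List<Map<String, String>> candidates,
                                        ToLongFunction<Map<String, String>> runJobOnSampleMillis) {
        Map<String, String> best = null;
        long bestTime = Long.MAX_VALUE;
        for (Map<String, String> params : candidates) {
            long elapsed = runJobOnSampleMillis.applyAsLong(params);  // time one sample run
            if (elapsed < bestTime) {
                bestTime = elapsed;
                best = params;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> a = new LinkedHashMap<>();
        a.put("reduce.tasks", "4");
        Map<String, String> b = new LinkedHashMap<>();
        b.put("reduce.tasks", "16");
        // A fake timing callback standing in for an actual sample execution.
        Map<String, String> best = pickBest(List.of(a, b),
                params -> params.get("reduce.tasks").equals("16") ? 90L : 150L);
        System.out.println("best combination: " + best);
    }
}
```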
127

Modeling, simulations, and experiments to balance performance and fairness in P2P file-sharing systems

Li, Yunzhao January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don Gruenbacher / Caterina Scoglio / In this dissertation, we investigate research gaps that still exist in P2P file-sharing systems: the necessity of fairness maintenance during the content-information publishing/retrieving process, and the effect of stranger policies on P2P fairness. First, through a wide range of measurements in the KAD network, we present the impact of a poorly designed incentive fairness policy on the performance of looking up content information. The KAD network, designed to help peers publish and retrieve sharing information, adopts distributed hash table (DHT) technology and is integrated into the aMule/eMule P2P file-sharing network. We develop a distributed measurement framework that employs multiple test nodes running on the PlanetLab testbed. During the measurements, the routing tables of around 20,000 peers are crawled and analyzed, and more than 3,000,000 pieces of source-location information from the publishing tables of multiple peers are retrieved and contacted. Based on these measurements, we show that the routing table is well maintained, while the maintenance policy for the source-location-information publishing table is not well designed. Both the current maintenance schedule for the publishing table and the poor incentive policy on publishing peers eventually result in low availability of the publishing table, which in turn causes low lookup performance in the KAD network. Moreover, we propose three possible solutions to address these issues: a self-maintenance scheme with a short renewal interval, a chunk-based publishing/retrieving scheme, and a fairness scheme. Second, using both numerical analyses and agent-based simulations, we evaluate the impact of different stranger policies on system performance and fairness. We find that an extremely restrictive stranger policy yields the best fairness at the cost of performance degradation, and that the trends of performance and fairness under different stranger policies are not consistent: a trade-off exists between controlling free-riding and maintaining system performance. Thus, P2P designers must handle strangers carefully according to their individual design goals. We also show that BitTorrent prefers to maintain fairness with an extremely restrictive stranger policy, while aMule/eMule's fully rewarding stranger policy promotes free-riders' benefit.
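For background on the lookups being measured, a Kademlia-style DHT such as KAD ranks peers by the XOR of node and key identifiers; the sketch below illustrates that metric only and is not code from the dissertation's measurement framework.

```java
import java.math.BigInteger;

// Kademlia XOR distance between two identifiers (shown here as 128-bit values).
public class KadDistance {

    /** Distance between two identifiers is their bitwise XOR. */
    static BigInteger distance(BigInteger idA, BigInteger idB) {
        return idA.xor(idB);
    }

    public static void main(String[] args) {
        BigInteger peer = new BigInteger("00ff00ff00ff00ff00ff00ff00ff00ff", 16);
        BigInteger key  = new BigInteger("00ff00ff00ff00ff0000000000000000", 16);
        // A smaller XOR distance means the peer is "closer" to the key and is
        // a better candidate for storing or answering lookups for it.
        System.out.println("distance = " + distance(peer, key).toString(16));
    }
}
```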
128

Data extraction for scale factor determination used in 3D-photogrammetry for plant analysis

Achanta, Leela Venkata Naga Satish January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / ImageJ and its recent upgrade, Fiji, are image processing tools that provide extensibility via Java plug-ins and recordable macros [2]. The aim of this project is to develop a plug-in compatible with ImageJ/Fiji that extracts length information from images for scale factor determination used in 3-D photogrammetry for plant analysis [5]. When plant images are processed using Agisoft software, they are merged into a single 3-D model. The coordinate system of the generated 3-D model is a relative coordinate system: distances in it are proportional to, but not numerically the same as, real-world distances. To obtain the real-world length of any feature represented in the 3-D model, a scale factor is required; this scale factor, when multiplied by a distance in the relative coordinate system, yields the actual length of that feature in the real coordinate system. To determine the scale factor, we process images of unsharpened yellow pencils that are all the same shape, color, and size. The plug-in treats each pencil as a unique region by assigning a unique value and a unique color to all its pixels, and calculates the distance between the midpoints of the two ends of each pencil. The date and time at which the image file is processed, the name of the image file, the image file's creation and modification dates and times, the total number of valid (complete) pencils processed, the end midpoints of each valid pencil, and the length, i.e., the number of pixels between the two end midpoints, are all written to the output file. The pencil lengths written to the output file are used by the researchers to calculate the scale factor. The plug-in was tested on real images, and the results obtained matched the expected results.
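A sketch of the scale-factor arithmetic described above, assuming a known real-world pencil length; the coordinates and the 19 cm length are made-up example values, and the actual plug-in runs inside ImageJ/Fiji rather than as a standalone class.

```java
// Convert a pixel distance between a pencil's end midpoints into a scale factor.
public class ScaleFactor {

    /** Euclidean distance between two end midpoints, in pixels. */
    static double pixelDistance(double x1, double y1, double x2, double y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }

    public static void main(String[] args) {
        double pixels = pixelDistance(12.0, 40.0, 212.0, 40.0);   // 200 px
        double realLengthCm = 19.0;                                // assumed pencil length
        double scale = realLengthCm / pixels;                      // cm per pixel
        System.out.println("scale factor = " + scale + " cm/pixel");
        // Any other distance d in the relative coordinate system can then be
        // converted to a real-world length as d * scale.
    }
}
```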
129

An android application for the USDA structural design software

Kannikanti, Rajesh January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / People are more inclined to use tablets instead of other computing devices due to their portability and ease of use. A number of desktop applications are now becoming available as tablet applications, with increasing demand in the market. Android is one of the largest and most popular open-source platforms, and it offers developers complete access to framework APIs for developing innovative tablet applications. The objective of this project is to develop an Android application for the U.S. Department of Agriculture (USDA) Structural Design Software. The GUI for this software is developed to run on tablet devices powered by the Android platform. The main features provided by the user interface include:
• Allowing the input to be saved in ASCII text format and displaying the simulation results in PDF format
• Allowing the user to select the type of project or view help contents for the projects
• Allowing the user to build the simulation for the selected type of project
• Allowing the user to send the simulation results by e-mail
The backend for this software replaces the old FORTRAN source files with Java source files. FORTRAN-to-Java translation is performed using the FORTRAN to Java (F2J) translator. F2J is intended to translate old FORTRAN math libraries, and it was not completely successful in translating these FORTRAN programs; to accomplish a successful translation, some features (such as common blocks and I/O operations) were removed from the FORTRAN source files before translation and added back to the translated Java source files afterwards. The simulation results provided by the software are useful to design engineers developing new structural designs.
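As one illustration of the common-block issue mentioned above, a FORTRAN COMMON block can be reintroduced after an F2J-style translation as a class of static fields shared by the translated routines; the block, variable, and method names below are hypothetical and are not taken from the USDA software.

```java
// Hypothetical Java counterpart of a FORTRAN COMMON block after translation:
// the shared variables become static fields that every routine references.
public class DesignCommon {
    // FORTRAN:  COMMON /DESIGN/ SPAN, LOADPSF
    static double span;      // structure span, ft (illustrative variable)
    static double loadPsf;   // design load, lb/ft^2 (illustrative variable)

    /** A translated routine simply reads and writes the shared fields. */
    static double totalLoad(double widthFt) {
        return loadPsf * span * widthFt;
    }

    public static void main(String[] args) {
        span = 24.0;
        loadPsf = 40.0;
        System.out.println("total load = " + totalLoad(10.0) + " lb");
    }
}
```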
130

Recommending recipes based on ingredients and user reviews

Jagithyala, Anirudh January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / In recent years, the content volume and number of users of the Web have increased dramatically. This large amount of data has caused an information overload problem, which hinders the ability of a user to find the relevant data at the right time. Therefore, the primary task of recommendation systems is to analyze data in order to offer users suggestions for similar data. Recommendations that use the core content are known as content-based recommendation or content filtering, and recommendations that directly utilize user feedback are known as collaborative filtering. This thesis presents the design, implementation, testing, and evaluation of a recommender system within the recipe domain, where various approaches for producing recommendations are utilized. More specifically, this thesis discusses approaches derived from basic recommendation algorithms but customized to take advantage of the specific data available in the recipe domain. The proposed approaches for recommending recipes make use of recipe ingredients and reviews. We first build ingredient vectors for both recipes and users (based on recipes they have rated highly), and recommend new recipes to users based on the similarity between user and recipe ingredient vectors. Similarly, we build recipe and user vectors based on recipe review text, and recommend new recipes based on the similarity between user and recipe review vectors. Finally, we study a hybrid approach in which ingredients and reviews are used together. Our proposed approaches are tested on an existing dataset crawled from recipes.com. Experimental results show that the recipe ingredients are more informative than the review text for making recommendations. Furthermore, when using ingredients and reviews together, the results are better than using just the reviews, but worse than using just the ingredients, suggesting that to make use of reviews, the review vocabulary needs better filtering.
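A minimal sketch of ingredient-vector matching, assuming cosine similarity as the similarity measure (the abstract does not specify which measure is used); the ingredient names and weights are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Rank a recipe against a user profile by cosine similarity of sparse
// bag-of-ingredients vectors.
public class IngredientSimilarity {

    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            normA += e.getValue() * e.getValue();
            Double w = b.get(e.getKey());
            if (w != null) dot += e.getValue() * w;   // shared ingredients only
        }
        for (double w : b.values()) normB += w * w;
        if (normA == 0 || normB == 0) return 0.0;
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        Map<String, Double> user = new HashMap<>();
        user.put("tomato", 2.0);   // aggregated from recipes the user rated highly
        user.put("basil", 1.0);
        Map<String, Double> recipe = new HashMap<>();
        recipe.put("tomato", 1.0);
        recipe.put("pasta", 1.0);
        System.out.println("similarity = " + cosine(user, recipe));
    }
}
```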
