1

Data logger for medical device coordination framework

Gundimeda, Karthik January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / A software application or hardware device performs well under favorable conditions, but in practice many factors affect a system's performance and functioning, and the scenarios in which the system fails or performs well need to be determined. Logging is one of the best methodologies for identifying such scenarios, since it can reveal both worst-case and best-case behavior. Log levels add flexibility by allowing different kinds of messages to be logged; deciding which messages to log is the key to effective logging. All important events, state changes, and messages should be logged to capture the high-level progress of the system. The Medical Device Coordination Framework (MDCF) deals with device connectivity to the MDCF server. In this report, we propose a logging component for the existing MDCF, inspired by the flight data recorder, or "black box", a device that logs every message passing through an aircraft's systems, making it reliable and easy to investigate any failure and allowing recorded scenarios to be replayed. The important state changes in MDCF include device connection, scenario instantiation, the initial state of the MDCF server, and destination creation. Logging in MDCF is implemented by wrapping the Log4j logging framework: MDCF logs through the interface provided by the logging component. This implementation facilitates building a more complex logging component for MDCF.
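
A minimal sketch of what wrapping Log4j behind an MDCF-facing interface might look like, assuming Log4j 1.x; the class name, logger category, and event methods below are hypothetical, not MDCF's actual API:

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Hypothetical facade over Log4j: MDCF code calls these event-specific
// methods instead of using Log4j directly, so the underlying framework
// can be reconfigured or replaced without touching MDCF itself.
public final class MdcfLog {
    private static final Logger LOG = Logger.getLogger("mdcf");

    private MdcfLog() {}  // static utility; no instances

    // Each important MDCF state change gets its own logging method.
    public static void deviceConnected(String deviceId) {
        LOG.log(Level.INFO, "device connected: " + deviceId);
    }

    public static void scenarioInstantiated(String scenarioId) {
        LOG.log(Level.INFO, "scenario instantiated: " + scenarioId);
    }

    public static void destinationCreated(String destination) {
        LOG.log(Level.INFO, "destination created: " + destination);
    }

    // Raw message traffic is logged at a finer level, black-box style,
    // so a replay tool could reconstruct the sequence of events later.
    public static void message(String channel, String payload) {
        LOG.log(Level.DEBUG, "msg [" + channel + "]: " + payload);
    }
}
```

MDCF code would then call, e.g., MdcfLog.deviceConnected(id) at the corresponding state change, leaving appenders and level thresholds to Log4j configuration.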
2

International faculty search

Mudaranthakam, Dinesh pal Indrapal January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen / This application enables users to search the database for international faculty members currently working in the veterinary department. It also helps users learn more about the faculty members in detail: their specialization, area of expertise, origin, languages spoken, and teaching experience. The main objective of this project is to develop an online application in which faculty members can be searched by three major criteria: the department to which the faculty member belongs, the faculty member's area of expertise, or the country of origin. The application is designed so that a combination of these three drop-down lists also returns results if any exist, as sketched below. The major attraction of this application is that the faculty members are plotted on a world map using the Bing API. A red dot is placed on each country from which faculty members hail, and hovering over a dot pops up the names of the faculty from that country. These names are hyperlinks that, when clicked, lead to the respective faculty member's profile. The project is implemented in C#.NET on Microsoft Visual Studio 2008, along with XML parsing techniques and XML files that store the faculty members' profiles. My primary focus is to become familiar with the .NET framework, to code in C#.NET, and to learn to use MS Access as the database for storing and retrieving data.
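
A sketch of the combined three-criteria search logic, written in Java here for consistency with the other examples rather than the project's C#.NET; the FacultyMember fields are inferred from the description above, and a null criterion stands for an unselected drop-down:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical record mirroring the profile fields described above.
record FacultyMember(String name, String department, String expertise, String country) {}

class FacultySearch {
    // Each criterion is optional (null = "any"), so any combination of the
    // three drop-down selections narrows the result set.
    static List<FacultyMember> search(List<FacultyMember> all,
                                      String department, String expertise, String country) {
        return all.stream()
                .filter(f -> department == null || f.department().equals(department))
                .filter(f -> expertise == null || f.expertise().equals(expertise))
                .filter(f -> country == null || f.country().equals(country))
                .collect(Collectors.toList());
    }
}
```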
3

Recommending recipes based on ingredients and user reviews

Jagithyala, Anirudh January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / In recent years, the content volume and number of users of the Web have increased dramatically. This large amount of data has caused an information overload problem, which hinders the ability of a user to find the relevant data at the right time. Therefore, the primary task of recommendation systems is to analyze data in order to offer users suggestions for similar data. Recommendations based on the core content are known as content-based recommendation or content filtering, and recommendations that directly utilize user feedback are known as collaborative filtering. This thesis presents the design, implementation, testing, and evaluation of a recommender system within the recipe domain, where various approaches for producing recommendations are utilized. More specifically, this thesis discusses approaches derived from basic recommendation algorithms, but customized to take advantage of specific data available in the recipe domain. The proposed approaches for recommending recipes make use of recipe ingredients and reviews. We first build ingredient vectors for both recipes and users (based on recipes they have rated highly), and recommend new recipes to users based on the similarity between user and recipe ingredient vectors. Similarly, we build recipe and user vectors based on recipe review text, and recommend new recipes based on the similarity between user and recipe review vectors. Finally, we study a hybrid approach, where ingredients and reviews are used together. Our proposed approaches are tested on an existing dataset crawled from recipes.com. Experimental results show that recipe ingredients are more informative than review text for making recommendations. Furthermore, using ingredients and reviews together performs better than using just the reviews, but worse than using just the ingredients, suggesting that to make use of reviews, the review vocabulary needs better filtering.
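
A minimal sketch of the ingredient-based similarity described above; the vector construction and weighting scheme (raw ingredient counts, summed user profiles) are assumptions for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class IngredientSimilarity {

    // Build a bag-of-ingredients vector for one recipe.
    static Map<String, Double> recipeVector(List<String> ingredients) {
        Map<String, Double> v = new HashMap<>();
        for (String ing : ingredients) v.merge(ing, 1.0, Double::sum);
        return v;
    }

    // A user's vector aggregates the vectors of recipes they rated highly.
    static Map<String, Double> userVector(List<Map<String, Double>> likedRecipes) {
        Map<String, Double> u = new HashMap<>();
        for (Map<String, Double> r : likedRecipes)
            r.forEach((ing, w) -> u.merge(ing, w, Double::sum));
        return u;
    }

    // Cosine similarity between a user vector and a candidate recipe vector;
    // candidate recipes are recommended in decreasing order of this score.
    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            Double w = b.get(e.getKey());
            if (w != null) dot += e.getValue() * w;
            na += e.getValue() * e.getValue();
        }
        for (double w : b.values()) nb += w * w;
        return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
    }
}
```

The review-based approach follows the same pattern with review-text term vectors in place of ingredient vectors.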
4

Parallelization of backward deleted distance calculation in graph based features using Hadoop

Pillamari, Jayachandran January 1900 (has links)
Master of Science / Department of Computing & Information Sciences / Daniel Andresen / The current project presents an approach to parallelize the calculation of Backward Deleted Distance (BDD) in Graph Based Features (GBF) computation using Hadoop. In this project the issues concerned with the calculation of BDD are identified and parallel computing technologies like Hadoop are applied to solve them. The project introduces a new algorithm to parallelize the APSP problem in BDD calculation using Hadoop Map Reduce feature. The project is implemented in Java and Hadoop technologies. The aim of this project is to parallelize the calculation of BDD thereby reducing GBF computation time. The process of BDD calculation is examined to identify the key places where it could be parallelized. Since the BDD calculation involves calculating the shortest paths between all pairs of given users, it can viewed as All Pairs Shortest Path (APSP) problem. The internal structure and implementation of Hadoop Map-Reduce framework is studied and applied to the process of APSP problem. The GBF features are one of the features set used in the Ontology classifiers. In the current project, GBF features are used to predict the friendship relationship between the users whose direct link is deleted. The computation involves calculating BDD between all pairs of users. The BDD for a user pair represents the shortest path between them when their direct link is deleted. In real terms, it is the shortest distance between them other than the direct path. The project uses train and test data sets consisting of positive instances and negative instances. The positive instances consist of user pairs having a friendship link between them whereas the negative instances do not have any direct link between them. Apache Hadoop is a latest emerging technology in the market introduced for scalable, distributed computing across clusters of computers. It has a Map Reduce framework used for developing applications which process large amounts of data in parallel on large clusters. The project is developed and implemented successfully and has the best time complexity. The project is tested for its reliability and performance. Different data sets are used in this testing by considering various factors and typical graph representations. The test results were analyzed to predict the behavior of the system. The test results show that the system has best speedup and considerably decreased the processing time from 10 hours to 20 minutes which is rewarding.
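
A sketch of what one MapReduce round of shortest-path relaxation might look like for a single source; the all-pairs computation repeats this per source (or encodes the source in the key), and the input line format (node, current distance, adjacency list) and class names are assumptions for illustration, not the project's actual code:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Input line (assumed): nodeId<TAB>distance<TAB>neighbor:weight,neighbor:weight,...
// Unreached nodes carry Long.MAX_VALUE as their distance.
public class ShortestPathRound {

  public static class RelaxMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String[] parts = value.toString().split("\t");
      String node = parts[0];
      long dist = Long.parseLong(parts[1]);
      String adj = parts.length > 2 ? parts[2] : "";
      // Pass the node's structure through so the reducer can rebuild the record.
      ctx.write(new Text(node), new Text("GRAPH\t" + dist + "\t" + adj));
      if (dist == Long.MAX_VALUE || adj.isEmpty()) return;
      // Relax each outgoing edge: propose dist + weight to the neighbor.
      for (String edge : adj.split(",")) {
        String[] e = edge.split(":");            // neighbor:weight
        long candidate = dist + Long.parseLong(e[1]);
        ctx.write(new Text(e[0]), new Text("DIST\t" + candidate));
      }
    }
  }

  public static class MinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text node, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      long best = Long.MAX_VALUE;
      String adj = "";
      for (Text v : values) {
        String[] parts = v.toString().split("\t");
        if (parts[0].equals("GRAPH")) adj = parts.length > 2 ? parts[2] : "";
        best = Math.min(best, Long.parseLong(parts[1]));
      }
      // Emit the record in the input format so the next round can consume it.
      ctx.write(node, new Text(best + "\t" + adj));
    }
  }
}
```

The driver runs this job iteratively until no distance changes; deleting a user pair's direct link before the run yields that pair's BDD.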
5

Recommender system for recipes

Goda, Sai Bharath January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel A. Anderson / Most e-commerce websites, such as Amazon, eBay, hotel sites, and TripAdvisor, use recommender systems to recommend products to their users. Some use the rating history of all users to infer what kinds of products the current user may like (collaborative filtering), and some use knowledge of the products the user is interested in to make recommendations (content-based filtering); Amazon, for example, uses both kinds of techniques. These recommendation systems can be represented as a graph whose nodes are users and products and whose edges connect users to products. The aim of this project is to build a recommender system for recipes using data from allrecipes.com, a popular website used throughout the world to post, review, and rate recipes. To understand the data set, one needs to know how recipes are posted and rated on allrecipes.com; the details are given in the paper. The network of allrecipes.com consists of users, recipes, and ingredients. The aim of this research project is to study two algorithms in depth, adsorption and matrix factorization, which have been evaluated on homogeneous networks, to apply them to heterogeneous networks, and to analyze their results. The project also studies an algorithm for propagating influence from one network to another: to learn from one network and propagate the same information to another, we compute flow (the influence of one network on another) as described in [7]. The paper introduces a variant of adsorption that takes the flow values into account and makes recommendations in the user-recipe and user-ingredient networks; the results of this variant are analyzed in depth in this paper.
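
For the adsorption algorithm specifically, one propagation step can be sketched as follows, assuming the simplified formulation in which each node's label distribution mixes an injected seed distribution with the weighted average of its neighbors' current distributions; the data layout and mixing weights here are assumptions, not the paper's exact variant:

```java
import java.util.HashMap;
import java.util.Map;

class Adsorption {
    // Distributions map label -> weight; graph maps node -> (neighbor -> edge weight).
    static Map<String, Map<String, Double>> step(
            Map<String, Map<String, Double>> graph,
            Map<String, Map<String, Double>> injected,   // seed labels (may be empty)
            Map<String, Map<String, Double>> current,
            double pInject, double pContinue) {          // pInject + pContinue <= 1
        Map<String, Map<String, Double>> next = new HashMap<>();
        for (String node : graph.keySet()) {
            Map<String, Double> dist = new HashMap<>();
            // Injection term: the node's own seed labels, if any.
            injected.getOrDefault(node, Map.of())
                    .forEach((label, w) -> dist.merge(label, pInject * w, Double::sum));
            // Continuation term: weighted average of neighbor distributions.
            Map<String, Double> nbrs = graph.get(node);
            double total = nbrs.values().stream().mapToDouble(Double::doubleValue).sum();
            if (total > 0) {
                for (Map.Entry<String, Double> nb : nbrs.entrySet()) {
                    double frac = pContinue * nb.getValue() / total;
                    current.getOrDefault(nb.getKey(), Map.of())
                           .forEach((label, w) -> dist.merge(label, frac * w, Double::sum));
                }
            }
            // Remaining mass is "abandoned" (the dummy label), omitted here.
            next.put(node, dist);
        }
        return next;
    }
}
```

Iterating step() to convergence spreads recipe and ingredient labels across the heterogeneous network; the flow-aware variant would reweight edges between networks by the computed flow values.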
6

Internal erosion and simplified breach analysis: (upgraded version 2012)

Sadhu, Vijay January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / In recent years, headline news has been filled with stories about dam and levee failures, including the 2005 levee breaches in New Orleans and the 2006 Kaloko Dam failure in Hawaii that resulted in seven deaths. Since 2000, state and federal agencies have reported 92 dam failures in the United States to the National Performance of Dams Program. Incidents such as these have brought both national and worldwide attention to the need for improved flood warning systems and breach prediction tools for dam embankment and levee failures (G. J. Hanson, 2010). IESIMBA 2012 is an upgraded version of SIMBA, ported from VB6 to C#.NET. The Microsoft Windows-based SIMplified Breach Analysis software (SIMBA) was developed by the USDA Agricultural Research Service in cooperation with Kansas State University, for the purpose of analyzing internal erosion and earth embankment breach test data and extending the understanding of the physical processes underlying the breach of an overtopped earth embankment. It is a research tool that is modified routinely to test the sensitivity of the output to various sub-models and assumptions. This software is a test version for use in validation testing of the simplified breach model based on stress- and mass-failure-driven headcut movement. It runs under Microsoft Windows 98SE, Windows 2000, NT, XP, or Vista. The following input screens guide the user through the development of input data sets: Model Properties, Dam Profile, Structure Table, Spillway Rating, and Hydrograph Data. After an input data set has been entered, the data is saved, and a simulation can be performed on the data stored in memory at any time by selecting the Build option. Input and output files are stored in a fixed ASCII text format. The results of the simulation, which are of interest to researchers at Oklahoma State University, Stillwater, can be viewed in graphical format by selecting the View option.
7

The interrelationships of university student characteristics and the Keller ARCS motivation model in a blended digital literacy course

Schartz, Shane January 1900 (has links)
Doctor of Philosophy / Curriculum and Instruction / Rosemary Talab / The purpose of this study was to examine student motivation in a blended learning digital literacy course and its relation to student characteristics. The study consisted of 136 student participants enrolled in a blended learning digital literacy course at a Midwestern university. The Keller ARCS Motivation Model was the theoretical framework. The Course Interest Survey, designed to measure motivation using the Keller ARCS categories, was used in the study. Data were collected through the Course Interest Survey from voluntary student participants and from data obtained at the research setting. The study examined the following research questions: (1) Do statistically significant relationships exist between non-performance student characteristics and Keller ARCS Course Interest Survey student motivation scores in a blended digital literacy course? (2) Do statistically significant relationships exist between pre-course performance student characteristics and Course Interest Survey scores in a blended digital literacy course? (3) Do statistically significant relationships exist between post-course performance student characteristics and Course Interest Survey student motivation scores in a blended digital literacy course? To examine these relationships, the study used MANOVAs to analyze the student characteristics across the four categories of the Keller ARCS Motivation Model. One significant relationship was found, for Confidence within Academic Rank (p < .05), between Seniors and Freshmen: Seniors reported a 0.4799 higher Confidence score, on average, than Freshmen. Other characteristics showed no significant relationships. The mean change between pretest and posttest digital literacy scores on the ALTSA assessment was 6.64. Recommendations for the research setting included using student focus groups to better understand and increase Freshman confidence and the Freshman experience, reviewing course design and delivery methods, exploring variations of blended learning models, examining current test-out procedures, and adjusting the scale used in this study to provide a wider range of motivation responses. Recommendations for future studies included a qualitative study of student performance characteristics, a mixed-methods study of different learning models for course delivery, and an exploratory study aimed at expanding student characteristics.
8

Web genre classification using feature selection and semi-supervised learning

Chetry, Roshan January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / As web pages continuously change and their number grows exponentially, the need for genre classification of web pages also increases. One simple reason is the need to group web pages into genre categories in order to reduce the complexity of various web tasks (e.g., search). Experts unanimously agree on the huge potential of genre classification of web pages. However, while everybody agrees that genre classification of web pages is necessary, researchers face problems in finding enough labeled data to perform supervised classification of web pages into various genres. The high cost of skilled manual labor, the rapidly changing nature of the web, and the never-ending growth in the number of web pages are the main reasons for the limited amount of labeled data. In contrast, unlabeled data can be acquired relatively inexpensively. This suggests using semi-supervised learning approaches for genre classification instead of supervised approaches. Semi-supervised learning makes use of both labeled and unlabeled data for training, typically a small amount of labeled data and a large amount of unlabeled data, and has been used extensively in text classification problems. Given the link structure of the web, web-page classification can use link features in addition to the content features used for general text classification. Hence, the feature set for web pages divides naturally into two views: content-based and link-based features. Intuitively, the two feature views are conditionally independent given the genre category, and each has the ability to predict the class on its own. The scarcity of labeled data, the availability of large amounts of unlabeled data, and a richer set of features than in conventional text classification tasks (specifically, complementary and sufficient feature views) have encouraged us to use co-training to perform semi-supervised learning. During co-training, labeled examples represented using the two views are used to learn two distinct classifiers, which keep improving at each iteration by sharing their most confident predictions on the unlabeled data. In this work, we use co-training to classify web pages of the .eu domain, consisting of 1,232 labeled hosts and 20,000 unlabeled hosts (provided by the European Archive Foundation [Benczur et al., 2010]), into six different genres. We compare our results with those produced by standard supervised methods and find that co-training can be an effective and cheap alternative to costly supervised learning, mainly due to the two independent and complementary feature sets of the web: content-based features and link-based features.
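
The co-training loop itself is compact; below is a minimal sketch assuming a generic probabilistic classifier interface. The ViewClassifier type and the confidence-based selection rule are illustrative, not the exact setup used in this work:

```java
import java.util.List;

/** Minimal stand-in for any classifier trainable on one feature view (hypothetical). */
interface ViewClassifier {
  void train(List<double[]> x, List<Integer> y);
  /** Returns {predictedLabel, confidence}. */
  double[] predict(double[] x);
}

public class CoTraining {
  /** Two views (content, link), a shared label list, k confident picks per round.
      The labeled lists grow in place; the unlabeled pools shrink in step. */
  static void coTrain(ViewClassifier content, ViewClassifier link,
                      List<double[]> xContent, List<double[]> xLink, List<Integer> y,
                      List<double[]> uContent, List<double[]> uLink,
                      int rounds, int k) {
    for (int r = 0; r < rounds && !uContent.isEmpty(); r++) {
      content.train(xContent, y);
      link.train(xLink, y);
      // Move the k most confident predictions (from either view) into the
      // labeled set, so each classifier can learn from the other's guesses.
      for (int pick = 0; pick < k && !uContent.isEmpty(); pick++) {
        int best = -1; double bestConf = -1; int bestLabel = 0;
        for (int i = 0; i < uContent.size(); i++) {
          double[] pc = content.predict(uContent.get(i));
          double[] pl = link.predict(uLink.get(i));
          double[] top = pc[1] >= pl[1] ? pc : pl;
          if (top[1] > bestConf) { bestConf = top[1]; best = i; bestLabel = (int) top[0]; }
        }
        xContent.add(uContent.remove(best));
        xLink.add(uLink.remove(best));
        y.add(bestLabel);
      }
    }
  }
}
```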
9

The evaluation of software defined networking for communication and control of cyber physical systems

Sydney, Ali January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Don Gruenbacher / Caterina Scoglio / Cyber physical systems emerge when physical systems are integrated with communication networks. In particular, communication networks facilitate the dissemination of data among components of physical systems to meet key requirements, such as efficiency and reliability, in achieving an objective. In this dissertation, we consider one of the most important cyber physical systems: the smart grid. The North American Electric Reliability Corporation (NERC) envisions a smart grid that aggressively explores advanced communication network solutions to facilitate real-time monitoring and dynamic control of the bulk electric power system. At the distribution level, the smart grid integrates renewable generation and energy storage mechanisms to improve the reliability of the grid. Furthermore, dynamic pricing and demand management provide customers an avenue to interact with the power system and determine electricity usage that satisfies their lifestyle. At the transmission level, efficient communication and a highly automated architecture provide visibility into the power system; hence, faults are mitigated faster than they can propagate. However, higher levels of reliability and efficiency rely on the supporting physical communication infrastructure and the network technologies employed. Conventionally, the topology of the communication network tends to be identical to that of the power network. In this dissertation, however, we employ a Demand Response (DR) application to illustrate that a topology that may be ideal for the power network is not necessarily ideal for the communication network. To develop this illustration, we note that communication network issues, such as congestion, are addressed by protocols, middleware, and software mechanisms, and that a network whose physical topology is designed to avoid congestion achieves an even higher level of performance. For this reason, characterizing the communication infrastructure of smart grids provides mechanisms to improve performance while minimizing cost. Recently, algebraic connectivity has been used in ongoing research efforts to characterize the robustness of networks to failures and attacks. Therefore, we first derive analytical methods for increasing algebraic connectivity and validate these methods numerically. Second, we investigate the impact on topology and traffic characteristics as algebraic connectivity is increased. Finally, we construct a DR application to demonstrate how concepts from graph theory can dramatically improve the performance of a communication network: with a hybrid simulation of both the power and communication networks, we illustrate that a topology ideal for the power network may not be ideal for the communication network. To date, utility companies have been embracing network technologies such as Multiprotocol Label Switching (MPLS) because of the available support for legacy devices, traffic engineering, and virtual private networks (VPNs), which are essential to the functioning of the smart grid. Furthermore, this particular network technology meets the requirement of non-routability stipulated by NERC, but these benefits come at a high cost for infrastructure that supports the full MPLS specification. More importantly, with MPLS routing and other switching technologies, innovation is restricted to the features provided by the equipment.
In particular, no practical method exists for utility consultants or researchers to test new ideas, such as alternatives to IP or MPLS, on a realistic scale in order to gain the experience and confidence necessary for real-world deployments; as a result, novel ideas remain untested. In contrast, OpenFlow, which has gained support from network providers such as Microsoft and Google and from equipment vendors such as NEC and Cisco, provides the programmability and flexibility necessary to enable innovation in next-generation communication architectures for the smart grid. This level of flexibility allows OpenFlow to provide all the features of MPLS and allows OpenFlow devices to coexist with existing MPLS devices. Therefore, in this dissertation we explore a low-cost OpenFlow Software Defined Networking solution and compare its performance to that of MPLS. In summary, we develop methods for designing robust networks and evaluate software defined networking for communication and control in cyber physical systems, with the smart grid as the system under consideration.
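
As an illustration of the graph-theoretic quantity the dissertation builds on, the following sketch computes algebraic connectivity, the second-smallest eigenvalue λ₂ of the graph Laplacian L = D − A, for a small network. A plain Jacobi eigenvalue iteration keeps the example dependency-free; production code would use a numerical library, and the example graph is hypothetical:

```java
import java.util.Arrays;

class AlgebraicConnectivity {

    // Laplacian L = D - A for an undirected graph given as an adjacency matrix.
    static double[][] laplacian(double[][] adj) {
        int n = adj.length;
        double[][] lap = new double[n][n];
        for (int i = 0; i < n; i++) {
            double degree = 0;
            for (int j = 0; j < n; j++) { degree += adj[i][j]; lap[i][j] = -adj[i][j]; }
            lap[i][i] = degree;
        }
        return lap;
    }

    // Eigenvalues of a symmetric matrix via cyclic Jacobi rotations.
    static double[] eigenvalues(double[][] sym) {
        int n = sym.length;
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++) m[i] = sym[i].clone();
        for (int sweep = 0; sweep < 100; sweep++) {
            double off = 0;
            for (int p = 0; p < n; p++)
                for (int q = p + 1; q < n; q++) off += m[p][q] * m[p][q];
            if (off < 1e-12) break;                      // effectively diagonal
            for (int p = 0; p < n; p++)
                for (int q = p + 1; q < n; q++) {
                    if (Math.abs(m[p][q]) < 1e-15) continue;
                    // Rotation angle chosen to zero out m[p][q].
                    double theta = (m[q][q] - m[p][p]) / (2 * m[p][q]);
                    double sign = theta >= 0 ? 1 : -1;
                    double t = sign / (Math.abs(theta) + Math.sqrt(theta * theta + 1));
                    double c = 1 / Math.sqrt(t * t + 1), s = t * c;
                    for (int k = 0; k < n; k++) {        // columns p and q
                        double kp = m[k][p], kq = m[k][q];
                        m[k][p] = c * kp - s * kq;
                        m[k][q] = s * kp + c * kq;
                    }
                    for (int k = 0; k < n; k++) {        // rows p and q
                        double pk = m[p][k], qk = m[q][k];
                        m[p][k] = c * pk - s * qk;
                        m[q][k] = s * pk + c * qk;
                    }
                }
        }
        double[] evals = new double[n];
        for (int i = 0; i < n; i++) evals[i] = m[i][i];
        Arrays.sort(evals);
        return evals;
    }

    public static void main(String[] args) {
        // A 4-node path graph; adding a shortcut edge would raise lambda_2,
        // i.e., make the topology more robust to failures and attacks.
        double[][] path = {{0,1,0,0},{1,0,1,0},{0,1,0,1},{0,0,1,0}};
        System.out.println("lambda_2 = " + eigenvalues(laplacian(path))[1]);
    }
}
```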
10

Attitudes toward, and awareness of, online privacy and security: a quantitative comparison of East Africa and U.S. internet users

Ruhwanya, Zainab Said January 1900 (has links)
Master of Science / Computing and Information Sciences / Eugene Vasserman / The increasing penetration of Internet technology throughout the world is bringing a growing volume of user information online, and developing countries, such as those of East Africa, are among the contributors and consumers of this voluminous information. While concerns about user privacy and security have been raised in other parts of the world, very little is known about East African Internet users' concern with their online information exposure. The aim of this study is to compare Internet users' awareness of and concerns regarding online privacy and security between East Africa (EA) and the United States (U.S.) and to identify common attitudes and differences. The study followed a quantitative research approach, with the EA population sampled from the Open University of Tanzania, an open and distance-learning university in East Africa, and the U.S. population sampled from Kansas State University, a public university in the U.S. Online questionnaires were used as the survey instruments. The results show no significant difference in awareness of online privacy between Internet users from East Africa and the U.S. There is, however, a significant difference in concerns about online privacy, which varies with the type of information shared. Moreover, the results show that U.S. Internet users are more aware of online privacy concerns, and more likely to have taken measures to protect their online privacy and conceal their online presence, than East African Internet users. This study also shows that East African Internet users are more likely to be victims of online identity theft, security issues, and reputation damage.
