  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

A tool for implementing distributed algorithms written in PROMELA, using DAJ toolkit

Nuthi, Kranthi Kiran January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / PROMELA stands for Protocol Meta Language. It is a modeling language for developing distributed systems that allows the dynamic creation of concurrent processes which communicate through message channels. DAJ stands for Distributed Algorithms in Java. It is a Java toolkit for designing, implementing, simulating, and visualizing distributed algorithms. The toolkit consists of a Java class library with a simple programming interface that supports development of distributed algorithms based on a message-passing model. It also provides a visualization environment in which protocol execution can be paused, performed step by step, and restarted. This project is a Java application that translates a model written in Promela into a model using the Java class library provided by DAJ and simulates it using DAJ. Although there are similarities between the programming constructs of Promela and DAJ, the programming interface supported by DAJ is smaller, so the input has been confined to a variant that is a subset of Promela. The implementation was performed in three steps. In the first step, an input domain was defined and an ANTLR grammar was written for the input structure. Java code was embedded in this ANTLR grammar so that it parses the input and translates it into an intermediate XML format. In the second step, StringTemplate templates of the output model are used, along with a Java program that traverses the intermediate XML file and generates the output model. In the third step, the output model is compiled, then simulated and visualized using DAJ. The application has been tested on input models with different topologies, process nodes, messages, and variables, covering most of the input domain.
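A minimal sketch of the two-stage pipeline the abstract describes, with plain Python standing in for both the ANTLR front end and the StringTemplate back end. The toy input syntax and the generated class shape are invented for illustration; they are not the project's actual grammar or templates.

```python
import xml.etree.ElementTree as ET

def parse_to_xml(source):
    """Stage 1: parse a toy 'proctype name { stmt; ... }' declaration
    into an intermediate XML tree (stands in for the ANTLR front end)."""
    header, body = source.split("{", 1)
    name = header.split()[1]
    root = ET.Element("model")
    proc = ET.SubElement(root, "process", name=name)
    for stmt in body.rstrip("} \n").split(";"):
        if stmt.strip():
            ET.SubElement(proc, "stmt").text = stmt.strip()
    return root

def generate_java(root):
    """Stage 2: walk the intermediate XML and fill a code template
    (stands in for the StringTemplate back end)."""
    out = []
    for proc in root.iter("process"):
        out.append(f"class {proc.get('name')} extends Program {{")
        out.append("    public void main() {")
        for stmt in proc.iter("stmt"):
            out.append(f"        // translated from: {stmt.text}")
        out.append("    }")
        out.append("}")
    return "\n".join(out)

java = generate_java(parse_to_xml("proctype Sender { x = 1; send(ch) }"))
```

The point of the intermediate XML is that the two stages stay independent: the front end can be regenerated from the grammar without touching the templates, and vice versa.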
152

An empirical approach to modeling uncertainty in intrusion analysis

Sakthivelmurugan, Sakthiyuvaraja January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Xinming (Simon) Ou / A well-known problem with current intrusion detection tools is that they create too many low-level alerts, and system administrators find it hard to cope with the huge volume. Complexity increases dramatically when multiple sources of information must be combined to confirm an attack. Attackers use sophisticated techniques to evade detection, and current system-monitoring tools can only observe the symptoms or effects of malicious activities. When these are mingled with similar effects from normal or non-malicious behavior, intrusion analysis reaches conclusions of varying confidence and suffers high false positive/negative rates. In this thesis we present an empirical approach to the problem of modeling uncertainty, in which the inferred security implications of low-level observations are captured in a simple logical language augmented with uncertainty tags. We have designed an automated reasoning process that enables us to combine multiple sources of system-monitoring data and extract highly confident attack traces from the numerous possible interpretations of low-level observations. We developed our model empirically: the starting point was a true intrusion that happened on a campus network, which we studied to capture the essence of the human reasoning process that led to conclusions about the attack. We then used a Datalog-like language to encode the model and a Prolog system to carry out the reasoning process. Our model and reasoning system reached the same conclusions as the human administrator on the question of which machines were certainly compromised. We then automatically generated the reasoning model needed for handling Snort alerts from the natural-language descriptions in the Snort rule repository, and developed a Snort add-on to analyze Snort alerts.
Keeping the reasoning model unchanged, we applied our reasoning system to two third-party data sets and one production network. Our results showed that the reasoning model is effective on these data sets as well. We believe such an empirical approach has the potential to codify the seemingly ad hoc human reasoning about uncertain events, and can yield useful tools for automated intrusion analysis.
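The flavor of reasoning with uncertainty tags can be sketched as follows. The tag names, the observation-to-implication mapping, and the strengthening rule here are all invented for illustration; the thesis's actual model is encoded in a Datalog-like language and run under Prolog.

```python
# Toy reasoning with uncertainty tags: each observation implies an internal
# condition at some confidence level, and two independent "likely"
# observations about the same host are strengthened to "certain".

LEVELS = {"possible": 0, "likely": 1, "certain": 2}

# observation -> (implied condition, confidence): a hypothetical mapping of
# the kind that could be derived from Snort rule descriptions
IMPLICATIONS = {
    "snort_shellcode_alert": ("compromised", "likely"),
    "netflow_irc_traffic":   ("compromised", "likely"),
    "dns_lookup_burst":      ("compromised", "possible"),
}

def infer(observations):
    """Map each (observation, host) to an implied condition, then strengthen
    conclusions supported by two or more 'likely' observations."""
    support = {}
    for obs, host in observations:
        cond, conf = IMPLICATIONS[obs]
        support.setdefault((host, cond), []).append(conf)
    conclusions = {}
    for key, confs in support.items():
        if sum(1 for c in confs if LEVELS[c] >= LEVELS["likely"]) >= 2:
            conclusions[key] = "certain"
        else:
            conclusions[key] = max(confs, key=lambda c: LEVELS[c])
    return conclusions

result = infer([("snort_shellcode_alert", "10.0.0.5"),
                ("netflow_irc_traffic", "10.0.0.5"),
                ("dns_lookup_burst", "10.0.0.9")])
```

The host with two corroborating alerts is concluded "certainly compromised", while the single weak observation stays merely "possible" — mirroring how corroboration raised confidence in the campus-network case study.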
153

A comparative performance analysis of GENI control framework aggregates

Tare, Nidhi January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Caterina M. Scoglio / Network researchers have long been investigating ways to improve network performance and reliability by devising new protocols, services, and network architectures. For the most part, these innovative ideas are tested through simulation and emulation techniques that, though they yield credible results, fail to account for realistic Internet measurements such as traffic, capacity, noise, variable workloads, and network failures. Overlay networks, on the other hand, have existed for a decade, but they assume the current Internet architecture and are therefore not suitable for clean-slate network architecture research. Recently, the Global Environment for Network Innovations (GENI) project has aimed to address this issue by providing an open platform comprising a suite of highly programmable and shareable network facilities along with its control software. The aim of this report is to introduce GENI's key architectural concepts, its control frameworks, and how they are used for dynamic allocation of computing and networking resources. We mainly discuss the architectural concepts and design goals of two aggregates, namely the BBN Open Resource Control Architecture (BBN ORCA) of the ORCA control framework and the Great Plains Environment for Network Innovations (GpENI), which belongs to the PlanetLab control framework. We then describe the procedure adopted for the hardware and software setup of each aggregate. After giving an overview of the two prototypes, we present an analysis of simple experiments conducted on each aggregate. Based on the study and experimental results, we present a comparative analysis of the control framework architectures, their relative merits and demerits, ease of experimentation, virtualization technology, and suitability for a future GENI prototype. We use metrics such as scalability, leasing overhead, oversubscription of resources, and experiment isolation for comparison.
154

GUI abstraction of a sensor field on mobile device

Chauhan, Gaurav January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Gurdip Singh / A sensor network can be used to observe events performed by physical entities and their physical locations. The growing need for wireless sensor networks to monitor different events can be met by tiny computing platforms called motes. A GUI abstraction of a region provides a heads-up display that can serve several purposes, e.g. tracking events in a hospital during a fire emergency. It can help firefighters entering a large building by giving them prior information about the building layout, which is difficult to see through heavy smoke. This project develops an approach to display, via a GUI, information about a region with the help of motes transferring data wirelessly. It is a 3-tier application comprising a server, a mobile client, and a mote setup. The motes run TinyOS, an operating system specifically designed for sensor networks. The project has been tested in the computer science department building; Crossbow's TelosB motes were used for the mote setup. The programs for the motes are written in nesC (a dialect of C), and the client and server programs are written in Java.
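The middle tier of such a 3-tier design can be sketched as a server that holds the latest reading per mote and serves snapshots to the GUI client. The class and field names below are invented for illustration; the real system uses TelosB motes running TinyOS/nesC and a Java client/server.

```python
# Minimal stand-in for the 3-tier flow: motes report events to a server,
# and the mobile client polls the server for a region snapshot to render.

class RegionServer:
    def __init__(self):
        self.latest = {}          # mote_id -> (location, event)

    def report(self, mote_id, location, event):
        """Called when a mote packet arrives over the wireless bridge."""
        self.latest[mote_id] = (location, event)

    def snapshot(self):
        """What the mobile GUI would poll to redraw the region layout."""
        return dict(self.latest)

server = RegionServer()
server.report(mote_id=3, location=(2, 5), event="temperature_high")
server.report(mote_id=7, location=(4, 1), event="ok")
view = server.snapshot()
```

Keeping only the latest reading per mote keeps the client's display simple: each poll yields one marker per mote at its known location.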
155

Online billboard

Kondapaneni, Vikram Kumar January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / The Online Billboard application offers users listings in several categories, such as education, rentals, real estate, employment opportunities, cinema, and used cars. An administrator adds, modifies, and deletes the different categories of information. The application provides an interactive interface through which a user can easily navigate its different areas. A report-generation feature, built with Crystal Reports, generates reports based on user-specified criteria. Users can search for course vacancies at different colleges, find rental houses in different areas matching their search criteria, browse land for sale in the Real Estate category, look up movie information for booking, view used-car listings, and search job vacancies at different companies. The application is designed to be convenient and easy to use for the end user.
156

Management of Uncertainties in Publish/Subscribe System

Liu, Haifeng 18 February 2010 (has links)
In the publish/subscribe paradigm, information providers disseminate publications to all consumers who have expressed interest by registering subscriptions. This paradigm has found widespread application, ranging from selective information dissemination to network management. However, existing publish/subscribe systems cannot capture the uncertainty inherent in the information in either subscriptions or publications. In many situations the large number of data sources exhibits various kinds of uncertainty. Examples of imprecision include: exact knowledge for specifying subscriptions or publications is not available; the match between a subscription and a publication with uncertain data is approximate; and the constraints that define a match are not only content-based but also take semantic information into consideration. These kinds of uncertainty have not received much attention in the context of publish/subscribe systems. In this thesis, we propose new publish/subscribe models that express uncertainty and semantics in publications and subscriptions, along with matching semantics for each model. We also develop efficient filtering algorithms for our models so that they can be applied to process the rapidly increasing volume of information on the Internet. A thorough experimental evaluation demonstrates that the proposed systems scale to large numbers of subscribers and high publishing rates.
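One simple way to picture approximate matching under uncertainty: let a publication carry a probability distribution over an attribute's possible values, and let a subscription match with the total probability mass that satisfies its predicate. This model and the delivery threshold are illustrative only, not the thesis's exact matching semantics.

```python
# Toy approximate matching: a publication's attribute is a list of
# (value, probability) pairs; a subscription is (attribute, predicate).

def match_probability(publication, attr, predicate):
    """Sum the probability of the attribute values satisfying the predicate."""
    return sum(p for value, p in publication[attr] if predicate(value))

def deliver(publication, subscription, threshold=0.5):
    """Deliver when the match probability clears a confidence threshold."""
    attr, predicate = subscription
    return match_probability(publication, attr, predicate) >= threshold

# A publication unsure of the true temperature reading
pub = {"temperature": [(68, 0.2), (72, 0.5), (75, 0.3)]}
hot = ("temperature", lambda t: t >= 70)
```

Here the subscription matches with probability 0.8, so the publication is delivered; a crisp content-based matcher would have to either commit to one value or reject the uncertain publication outright.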
157

Pattern Recognition Applied to the Computer-aided Detection and Diagnosis of Breast Cancer from Dynamic Contrast-enhanced Magnetic Resonance Breast Images

Levman, Jacob 21 April 2010 (has links)
The goal of this research is to improve the breast cancer screening process based on magnetic resonance imaging (MRI). In a typical MRI breast examination, a radiologist visually examines the MR images acquired during the examination and identifies suspect tissues for biopsy. It is known that if multiple radiologists independently analyze the same examinations, and any lesion flagged as suspicious by any of them is biopsied, the overall screening process becomes more sensitive but less specific. Unfortunately, cost factors prohibit the use of multiple radiologists for the screening of every breast MR examination. Instead of having a second expert radiologist examine each set of images, the second reading of the examination could be performed by a computer-aided detection and diagnosis system. The research presented in this thesis is focused on the development of such a system for breast cancer screening from dynamic contrast-enhanced magnetic resonance imaging examinations. This thesis presents new computational techniques in supervised learning, unsupervised learning, and classifier visualization. The techniques have been applied to breast MR lesion data and shown to outperform existing methods, yielding a computer-aided detection and diagnosis system with a sensitivity of 89% and a specificity of 70%.
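The reported sensitivity and specificity are the standard true-positive and true-negative rates. The confusion-matrix counts below are invented purely to reproduce the reported percentages; the thesis states only the final rates.

```python
# Sensitivity and specificity from a hypothetical confusion matrix.

def sensitivity(tp, fn):
    """Fraction of actual cancers the system flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of benign cases the system correctly clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical counts chosen to match the reported rates
sens = sensitivity(tp=89, fn=11)
spec = specificity(tn=70, fp=30)
```

The trade-off the abstract describes for multiple radiologists follows directly from these definitions: biopsying anything any reader flags raises tp (sensitivity up) but also raises fp (specificity down).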
158

Java bytecode to Pilar translator

Ochani, Vidit January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Robby / Software technology is the pivot around which all modern industries revolve. It is not surprising that industries as diverse as finance, business, engineering, medicine, and defense have assimilated sophisticated software into every step of their operation. As its reach has grown, software has evolved intricately, making thorough testing increasingly difficult. Companies invest millions of dollars in manual and automated testing, yet software bugs persist, and even a trivial bug can ultimately cost a company millions of dollars. We therefore need smarter tools to help eliminate bugs. Sireum is a research project to develop a software analysis platform that incorporates various tools and techniques; symbolic execution, model checking, deductive reasoning, and control-flow graphs are a few examples. The Sireum platform builds on previous projects such as the Indus static analysis framework, the Bogor model checking framework, and the Bandera Java model checker. It uses the Pilar language as its intermediate representation: any language that can be translated to Pilar can be analyzed by Sireum. A translator already exists for SPARK, a verifiable subset of Ada for building high-integrity systems. In this report, we present such a translator for Java bytecode: a front end that generates Pilar from the Java intermediate representation. The translator emulates the working of the Java Virtual Machine (JVM) by simulating a stack-based virtual machine. It will help us analyze JVM-based software, such as mobile applications for Android. We also evaluate and report statistics on the efficiency and speed of translation.
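The core idea of simulating a stack-based virtual machine can be sketched as follows: since JVM bytecode pushes operands and pops them for each operation, a translator can symbolically track the operand stack to recover expression structure. The tiny instruction set below is a stand-in for real JVM opcodes, and the string expressions stand in for Pilar output.

```python
# Symbolic stack-machine simulation: execute stack ops over expression
# strings instead of values, rebuilding the source-level expressions.

def simulate(instructions):
    stack = []
    for op, *args in instructions:
        if op == "push":            # stands in for iconst / iload
            stack.append(str(args[0]))
        elif op == "add":           # stands in for iadd: pop two, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} + {b})")
        elif op == "mul":           # stands in for imul
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} * {b})")
    return stack

# (x + 2) * y, as a stack machine would compute it
result = simulate([("push", "x"), ("push", 2), ("add",),
                   ("push", "y"), ("mul",)])
```

Popping `b` before `a` preserves operand order, which matters once non-commutative operations like subtraction are added.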
159

Parameter study for WinDAM using DAKOTA

Bhat, Ashwin Ramachandra January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Mitchell L. Neilsen / Windows™ Dam Analysis Modules (WinDAM) is a set of modular software components that can be used to analyze overtopped earthen embankments and internal erosion of embankment dams. Sandia National Laboratories' DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides built-in algorithms for iterative analysis, including uncertainty quantification with sampling and parameter study methods. This software integrates the DAKOTA suite with WinDAM: it provides a user interface for entering and manipulating parameters, performs centered and multi-dimensional parameter studies over a wide range of WinDAM's parameters, and gives users detailed output on the changes caused by these variations.
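A centered parameter study of the kind DAKOTA performs can be sketched as: vary one parameter at a time in fixed steps around a baseline, holding the others at their nominal values. The response function below is a toy stand-in for a WinDAM erosion run, and the parameter names are invented for illustration.

```python
# Centered parameter study: for each parameter, evaluate the model at
# baseline + k*delta for k in -steps..steps, others held at baseline.

def centered_study(baseline, steps, deltas, model):
    results = {}
    for name, delta in deltas.items():
        runs = []
        for k in range(-steps, steps + 1):
            point = dict(baseline)
            point[name] = baseline[name] + k * delta
            runs.append((point[name], model(point)))
        results[name] = runs
    return results

# Toy response: pretend erosion depends linearly on inflow and soil cohesion
model = lambda p: 2.0 * p["inflow"] - 0.5 * p["cohesion"]
study = centered_study({"inflow": 10.0, "cohesion": 4.0},
                       steps=1,
                       deltas={"inflow": 1.0, "cohesion": 0.5},
                       model=model)
```

Scanning each parameter independently keeps the run count linear in the number of parameters, which is why centered studies are a cheap first look at sensitivity before a full multi-dimensional sweep.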
160

Analysis of PageRank on Wikipedia

Tadakamala, Anirudh January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel Andresen / With the massive explosion of data in recent times, and with people depending more and more on search engines for all kinds of information, it has become increasingly difficult for search engines to return the most relevant results. PageRank, developed by Google's Larry Page and Sergey Brin, is one algorithm that revolutionized the way search engines work: Google uses it to rank websites and order them in its search results. PageRank is a link analysis algorithm that assigns a weight to each document in a corpus, measuring its relative importance within the corpus. The purpose of this project is to extract all the English Wikipedia data using the MediaWiki API and JWPL (Java Wikipedia Library), implement the PageRank algorithm, and analyze its performance on this data set. Since the data set is too big for a single-node Hadoop cluster, the analysis was done on Beocat, a high-performance computing cluster provided by the Computing and Information Sciences Department at Kansas State University.
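The algorithm itself fits in a few lines as power iteration over a link graph, shown here on a toy graph (the project runs it at scale over Wikipedia's link structure; this sketch omits handling of dangling pages with no outlinks, and 0.85 is the conventional damping factor rather than a value from the report).

```python
# Power-iteration PageRank: each page's rank is split evenly among the
# pages it links to, damped toward a uniform teleport distribution.

def pagerank(links, damping=0.85, iterations=50):
    """links: page -> list of pages it links to. Returns page -> rank."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

toy = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(toy)
```

In this toy graph C accumulates the most rank (it is linked from both A and B), which is exactly the "relative importance" the abstract describes: rank flows along links, so heavily cited pages score higher.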
