71

Extending Service Oriented Architecture Using Generic Service Representatives

Najafi, Mehran 04 1900 (has links)
Service-Oriented Architecture (SOA) focuses on dividing the enterprise application layer of an enterprise system into components (services) that correspond directly to the business functionality of the enterprise. Web services, which are based on message exchanges, are the most widely adopted SOA technology; they provide web-accessible programs and devices that have been widely promoted for cloud computing environments. However, different types of web services are required to model actual services in the business domain. In particular, enterprises (business providers such as banks, health-care companies, and insurance companies) usually send agents or other personnel (e.g., representatives, installers, maintainers, and trainers) to client sites to perform required services. An enterprise agent can be modeled as a software agent: a computer program that cannot be transmitted efficiently by communication messages. The lack of an efficient way to model the transmission of enterprise agents in traditional message-based technologies restricts the application and usage of service-oriented architectures. The central problem addressed in this thesis is the need for an efficient SOA model of enterprise agents that enables service providers to process client data locally at the client side.

To address this problem, the thesis proposes to model enterprise agents in SOA with a generic software agent called the Service Representative, which stays at the client side and can be customized by different service providers to process client data locally. To employ a service representative, the thesis also proposes a new type of web service called the Task Service. Whereas a traditional web service, called a Data Service, processes client data entirely at the server side, a task service can process client data and resources partially or completely at the client side, using a service representative. Each task service assigns the generic service representative a task with three components: task model, task knowledge, and task data. These components map to business components - business process models, business rules and actions, and business data - and can be transmitted efficiently in service messages.

The combination of a service representative and task services gives service providers an executable platform at the client side. The client does not need to reveal its data, so privacy and security are maintained; large volumes of client data are processed locally, reducing network traffic; and real-time, event-triggered web services can be developed based on the proposed approach.

The main contributions and novelty of this research are: i) a domain-independent computational model of enterprise agents in SOA that supports a wide variety of client-processing tasks; ii) client-side web services that are compatible with typical server-side web services and comparable to other client-side processing technologies; iii) extensions of the SOA architecture with novel generic components, including the service representative, the competition desk, and the service composition certifier; iv) a formal model of client-side and server-side web services based on their construction from business components; v) empirical evaluations of the web service model in a number of different applications, using a prototype system; and vi) application of the developed model to several target domains, including healthcare. Furthermore, because client-side and server-side web services are complementary, a decision support model is provided to help service developers choose the best service type for a given web service. / Doctor of Science (PhD)
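The task structure described in this abstract lends itself to a brief illustration. The sketch below is a hypothetical rendering of a task bundling its three components for execution by a service representative; all type and method names are invented for illustration and are not taken from the thesis.

```java
// Hypothetical sketch of the task structure described above: a task
// service ships a task model, task knowledge, and task data to a generic
// service representative, which executes it against local client data.
import java.util.List;
import java.util.Map;

record TaskModel(List<String> processSteps) {}             // business process model
record TaskKnowledge(Map<String, String> businessRules) {} // business rules and actions
record TaskData(Map<String, Object> businessData) {}       // business data

record Task(TaskModel model, TaskKnowledge knowledge, TaskData data) {}

interface ServiceRepresentative {
    // Runs the provider's task locally, so client data never leaves the client.
    Object execute(Task task, Map<String, Object> clientData);
}
```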
72

Fog Computing with Go: A Comparative Study

Butterfield, Ellis H 01 January 2016 (has links)
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things - sensors, actuators, and smart objects - communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never-before-seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to some of these problems by introducing a layer of intermediary nodes within what is called an edge network, separating the local object networks from the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper attempts to evaluate Go, a distributed systems language developed by Google, in the context of the requirements set forth by fog computing. Methodologies from previous literature are replicated and benchmarked against in order to assess the viability of Go in the edge nodes of a fog computing architecture.
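The edge-node role described here - answer lightweight, real-time requests locally and delegate heavy computation to the Cloud - can be sketched briefly. The thesis itself evaluates Go; the Java sketch below only illustrates the delegation pattern under assumed names, not the study's implementation.

```java
// Hypothetical sketch of an edge node: serve small, latency-sensitive
// payloads at the edge and delegate large computations to the cloud.
import java.util.concurrent.CompletableFuture;

interface CloudBackend {
    CompletableFuture<String> process(byte[] payload);
}

class EdgeNode {
    private final CloudBackend cloud;
    private final int localLimitBytes;

    EdgeNode(CloudBackend cloud, int localLimitBytes) {
        this.cloud = cloud;
        this.localLimitBytes = localLimitBytes;
    }

    CompletableFuture<String> handle(byte[] sensorPayload) {
        if (sensorPayload.length <= localLimitBytes) {
            // Real-time interaction: answer from the edge without a round trip.
            return CompletableFuture.completedFuture("edge:" + sensorPayload.length);
        }
        // Computational delegation: route the heavy request to the cloud.
        return cloud.process(sensorPayload);
    }
}
```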
73

Towards Design and Analysis For High-Performance and Reliable SSDs

Xia, Qianbin 01 January 2017 (has links)
NAND Flash-based Solid State Disks (SSDs) have many attractive technical merits, such as low power consumption, light weight, shock resistance, tolerance of hotter operating regimes, and extraordinarily high performance for random read access, which make SSDs immensely popular and widely employed in environments ranging from portable devices and personal computers to large data centers and distributed data systems. However, current SSDs still suffer from several critical inherent limitations, such as the inability to update data in place, asymmetric read and write performance, slow garbage collection, limited endurance, and write performance that degrades with the adoption of MLC and TLC techniques. To alleviate these limitations, we propose optimizations at both the outside application layer and the SSDs' internal layer. Because SSDs are a good compromise between performance and price, they are widely deployed as second-level caches sitting between DRAM and hard disks to boost system performance. Owing to special properties of SSDs, such as internal garbage collection and limited lifetime, optimizations designed for traditional cache devices like DRAM and SRAM might not work consistently for SSD-based caches. Therefore, at the application layer, our work focuses on integrating these special properties of SSDs into the optimization of SSD caches. Our work also addresses the increased flash write latency and ECC complexity introduced by MLC and TLC technologies by analyzing real-world workloads.
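One common way to fold SSD properties into cache optimization, of the kind this abstract alludes to, is a wear-aware admission policy: only admit a block into the SSD cache once it proves warm, so one-touch blocks never cost a flash write. The sketch below is illustrative only and is not the thesis's actual design.

```java
// Hypothetical wear-aware admission filter for an SSD cache: a block is
// written to flash only on its second miss, reducing writes and hence
// garbage-collection pressure and wear.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class SsdAdmissionFilter {
    private final Map<Long, Integer> missCount = new HashMap<>();
    private final Set<Long> cached = new HashSet<>();

    /** Returns true if the block should be written into the SSD cache. */
    boolean onMiss(long blockId) {
        int seen = missCount.merge(blockId, 1, Integer::sum);
        if (seen >= 2 && !cached.contains(blockId)) {
            cached.add(blockId);   // second miss: block is warm, worth a flash write
            return true;
        }
        return false;              // first miss: serve from disk, spare the SSD
    }
}
```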
74

Using Machine Learning to Detect Malicious URLs

Cheng, Aidan 01 January 2017 (has links)
There is a need for a better predictive model to reduce the number of malicious URLs being sent through emails. Such a system should learn from existing metadata about URLs, and ideally it would also learn from its own predictions. For example, if it predicts a URL to be malicious, and that URL is deemed safe by the sandboxing environment, the predictor should refine its model to account for this data. The problem, then, is to construct a model with these characteristics that can make predictions for the vast number of URLs being processed. Given that the current system does not employ machine learning methods, we intend to investigate multiple such models and summarize which of them might be worth pursuing on a large scale.
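The feedback loop described above - predict, then refine once the sandbox verdict arrives - can be sketched with a simple online learner. The perceptron and the three URL features below are illustrative assumptions, not the system's actual model.

```java
// Minimal sketch of an online URL classifier that refines itself from
// sandbox verdicts, using a perceptron over a few hypothetical features.
class UrlClassifier {
    private final double[] w = new double[3]; // one weight per feature
    private double bias = 0.0;

    private static double[] features(String url) {
        return new double[] {
            url.length() / 100.0,                      // very long URLs are suspicious
            url.chars().filter(c -> c == '.').count(), // many subdomain dots
            url.contains("@") ? 1.0 : 0.0              // userinfo obfuscation trick
        };
    }

    boolean predictMalicious(String url) {
        double[] x = features(url);
        double s = bias;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s > 0;
    }

    /** Refine the model once the sandbox verdict for this URL is known. */
    void feedback(String url, boolean sandboxSaysMalicious) {
        if (predictMalicious(url) == sandboxSaysMalicious) return; // no mistake
        double y = sandboxSaysMalicious ? 1.0 : -1.0;
        double[] x = features(url);
        for (int i = 0; i < w.length; i++) w[i] += y * x[i];       // perceptron update
        bias += y;
    }
}
```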
75

Advanced Text Analytics and Machine Learning Approach for Document Classification

Anne, Chaitanya 19 May 2017 (has links)
Text classification is used to extract and retrieve information from a given text, and it has come to be considered an important step in managing the vast and expanding body of records available in digital form. This thesis addresses the problem of classifying patent documents into fifteen different categories or classes, where some classes overlap with others for practical reasons. To develop the classification model using machine learning techniques, useful features were extracted from the given documents; these features are used both to classify patent documents and to generate useful tag words. The overall objective of this work is to systematize NASA’s patent management by developing a set of automated tools that can assist NASA in managing and marketing its portfolio of intellectual property (IP) and enable easier discovery of relevant IP by users. We identified an array of applicable methods, including k-Nearest Neighbors (kNN), two variations of the Support Vector Machine (SVM) algorithm, and two tree-based classification algorithms, Random Forest and J48. The major research steps in this work consist of filtering techniques for variable selection, information gain and feature correlation analysis, and training and testing potential models using effective classifiers. Further, the obstacles associated with imbalanced data were mitigated by adding synthetic data where appropriate, which resulted in a superior SVM-based classifier.
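The information-gain filter mentioned for variable selection has a standard form: the gain of a binary term feature is the entropy of the class labels minus the entropy remaining after splitting the documents on term presence. The sketch below computes that quantity; it illustrates the formula only, not the thesis's exact pipeline.

```java
// Information gain of a binary term feature over class labels:
// IG(term) = H(labels) - p(with)*H(labels|with) - p(without)*H(labels|without)
import java.util.ArrayList;
import java.util.List;

class InfoGain {
    private static double entropy(List<Integer> labels, int numClasses) {
        if (labels.isEmpty()) return 0.0;
        double h = 0.0;
        for (int c = 0; c < numClasses; c++) {
            final int cls = c;
            double p = labels.stream().filter(l -> l == cls).count()
                       / (double) labels.size();
            if (p > 0) h -= p * (Math.log(p) / Math.log(2)); // log base 2
        }
        return h;
    }

    /** labels.get(i): class of document i; hasTerm.get(i): whether it contains the term. */
    static double gain(List<Integer> labels, List<Boolean> hasTerm, int numClasses) {
        List<Integer> with = new ArrayList<>();
        List<Integer> without = new ArrayList<>();
        for (int i = 0; i < labels.size(); i++) {
            (hasTerm.get(i) ? with : without).add(labels.get(i));
        }
        double n = labels.size();
        return entropy(labels, numClasses)
             - (with.size() / n) * entropy(with, numClasses)
             - (without.size() / n) * entropy(without, numClasses);
    }
}
```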
76

Evaluating and Improving the Efficiency of Software and Algorithms for Sequence Data Analysis

Eaves, Hugh L 01 January 2016 (has links)
With the ever-growing size of sequence data sets, data processing and analysis are an increasingly large portion of the time and money spent on nucleic acid sequencing projects. Correspondingly, the performance of the software and algorithms used to perform that analysis has a direct effect on the time and expense involved. Although the analytical methods are widely varied, certain types of software and algorithms are applicable to a number of areas, so targeting improvements to these common elements has the potential for wide-reaching rewards. This dissertation research consisted of several projects to characterize and improve the efficiency of common elements of sequence data analysis software and algorithms. The first project sought to improve the efficiency of short read mapping, as mapping is the most time-consuming step in many data analysis pipelines. The result was a new short read mapping algorithm and software, demonstrated to be more computationally efficient than existing software and enabling more of the raw data to be utilized. While developing this software, it was discovered that a widely used bioinformatics software library introduced a great deal of inefficiency into the application. Given the potential impact of similar libraries on other applications, and because little research had been done to evaluate library efficiency, the second project evaluated the efficiency of seven of the most popular bioinformatics software libraries, written in C++, Java, Python, and Perl. This evaluation showed that two of the libraries, written in the most popular language, Java, were an order of magnitude slower and used more memory than expected based on the language in which they were implemented. The third and final project was therefore the development of a new general-purpose bioinformatics software library for Java. This library, known as BioMojo, incorporates a new design approach that results in vastly improved efficiency. Assessing the performance of this new library with the benchmark methods developed for the second project showed that BioMojo outperformed all of the other libraries across all benchmark tasks, being up to 30 times more CPU-efficient than existing Java libraries.
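The abstract does not detail BioMojo's design, but a classic source of the inefficiency it describes is storing sequences as Strings or one object per base. The sketch below shows one well-known remedy, packing each nucleotide into two bits; it is an illustration of the general technique, not BioMojo's actual implementation.

```java
// Illustrative 2-bit nucleotide packing: four bases per byte, cutting
// memory use roughly fourfold versus char-per-base storage and improving
// cache behavior. (Not BioMojo's actual design.)
class PackedDna {
    private final byte[] packed;   // four bases per byte
    private final int length;

    PackedDna(String seq) {
        length = seq.length();
        packed = new byte[(length + 3) / 4];
        for (int i = 0; i < length; i++) {
            int code = switch (seq.charAt(i)) {
                case 'A', 'a' -> 0;
                case 'C', 'c' -> 1;
                case 'G', 'g' -> 2;
                case 'T', 't' -> 3;
                default -> throw new IllegalArgumentException("non-ACGT base");
            };
            packed[i / 4] |= (byte) (code << ((i % 4) * 2));
        }
    }

    char baseAt(int i) {
        int code = (packed[i / 4] >> ((i % 4) * 2)) & 0b11; // mask out two bits
        return "ACGT".charAt(code);
    }
}
```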
77

Mitigating Interference During Virtual Machine Live Migration through Storage Offloading

Stuart, Morgan S 01 January 2016 (has links)
Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one with the potential to interfere with other applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
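The buffering idea can be sketched in miniature: instead of letting the migration's disk reads compete with co-located workloads on primary storage, stage the VM image from a secondary copy and serve migration reads from that buffer. The names and the "secondary source" below are illustrative assumptions, not the thesis's implementation.

```java
// Hypothetical sketch: migration reads are served from a buffer filled
// sequentially from a secondary copy (e.g., a replica or snapshot), so
// they never touch the primary storage that other tenants are using.
import java.io.IOException;
import java.io.InputStream;

class OffloadedMigrationReader {
    private final InputStream secondarySource; // replica/snapshot stream (assumed)
    private final byte[] buffer;
    private int filled = 0, pos = 0;

    OffloadedMigrationReader(InputStream secondarySource, int bufferBytes) {
        this.secondarySource = secondarySource;
        this.buffer = new byte[bufferBytes];
    }

    /** Serves the next chunk of the VM image without touching primary storage. */
    int read(byte[] dst) throws IOException {
        if (pos == filled) {                      // buffer drained: refill sequentially
            filled = secondarySource.read(buffer);
            pos = 0;
            if (filled < 0) return -1;            // image fully transferred
        }
        int n = Math.min(dst.length, filled - pos);
        System.arraycopy(buffer, pos, dst, 0, n);
        pos += n;
        return n;
    }
}
```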
78

A Software Framework for Augmentative and Alternative Communication

Loup, Adam 18 May 2012 (has links)
By combining context awareness with analytics-based relevance computation, the proposed Augmentative and Alternative Communication (AAC) framework aims to provide a foundation for communication systems that dramatically increase the words available to AAC users. The framework allows the lexicon available to the user to be updated dynamically from varying sources and promotes words based on contextual relevance. This level of customization enables the development of highly customizable AAC devices that evolve with use to become more personal while also broadening the expressiveness of the user. To maximize the efficient creation of conversation for AAC users, the framework provides a lexicon that can obtain words from multiple sources, which are then ranked according to their relevance in the current situational context.
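The multi-source, relevance-ranked lexicon can be illustrated with a small sketch. The scoring formula and all names below are hypothetical, not the framework's actual design.

```java
// Illustrative sketch: candidate words arrive from multiple sources, each
// with a source weight, and are ranked against the current context terms
// before being offered to the AAC user.
import java.util.Comparator;
import java.util.List;
import java.util.Set;

record Candidate(String word, String source, double sourceWeight) {}

class Lexicon {
    /** Ranks candidates so contextually relevant words are promoted first. */
    static List<Candidate> rank(List<Candidate> candidates, Set<String> contextTerms) {
        return candidates.stream()
            .sorted(Comparator.comparingDouble(
                (Candidate c) -> score(c, contextTerms)).reversed())
            .toList();
    }

    private static double score(Candidate c, Set<String> contextTerms) {
        double contextMatch = contextTerms.contains(c.word().toLowerCase()) ? 1.0 : 0.0;
        return c.sourceWeight() + contextMatch;   // hypothetical scoring formula
    }
}
```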
79

Analysis and Detection of Heap-based Malwares Using Introspection in a Virtualized Environment

Javaid, Salman 13 August 2014 (has links)
Malware detection and analysis is a major part of computer security. There is an arms race between security experts and malware developers: one side develops techniques to secure computer systems, the other finds ways to circumvent them. In recent years, process heap-based attacks have increased significantly. These attacks exploit the system under attack via the heap, typically by using a heap spraying attack. The main drawback of existing defenses is that they either consume too many resources or are complicated to implement. Our work in this thesis focuses on new methods that offload process heap analysis for guest virtual machines (VMs) to the privileged domain, using Virtual Machine Introspection (VMI) in a cloud environment. VMI provides a seamless, non-intrusive way of observing the memory and state of VMs that is invisible to malware and does not raise red flags for it.
80

Survey of Autonomic Computing and Experiments on JMX-based Autonomic Features

Azzam, Adel R 13 May 2016 (has links)
Autonomic Computing (AC) aims to solve the problem of managing the rapidly growing complexity of information technology systems by creating self-managing systems. In this thesis, we survey the progress of the AC field and study the requirements, models, and architectures of AC. The commonly recognized AC requirements are four properties: self-configuring, self-healing, self-optimizing, and self-protecting. The recommended software architecture is the MAPE-K model, which contains four modules - monitor, analyze, plan, and execute - as well as a knowledge repository. In the modern software marketplace, Java Management Extensions (JMX) has facilitated one of the AC requirements: monitoring. Using JMX, we implemented a package that assists programming for AC features including socket management, logging, and recovery of distributed computation. In the experiments, we not only realized powerful Java capabilities that are unknown to many educators, but also illustrated the feasibility of learning AC in senior computer science courses.
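The JMX monitoring capability this thesis builds on is easy to demonstrate in a minimal example: a standard MBean exposes an attribute that any JMX client (such as jconsole) can read once it is registered with the platform MBean server. The Heartbeat example below is our own minimal illustration, not code from the thesis.

```java
// Minimal JMX standard MBean: define a management interface, implement it,
// and register the instance with the platform MBean server.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public interface HeartbeatMBean {
    long getUptimeMillis();
}

// In a real project these types would live in separate files.
class Heartbeat implements HeartbeatMBean {
    private final long start = System.currentTimeMillis();
    @Override public long getUptimeMillis() { return System.currentTimeMillis() - start; }
}

class Agent {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Standard MBean naming: the class Heartbeat matches the interface
        // name HeartbeatMBean minus the "MBean" suffix.
        server.registerMBean(new Heartbeat(), new ObjectName("demo:type=Heartbeat"));
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive for JMX clients
    }
}
```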
