  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Secure learning in adversarial environments

Li, Bo 14 July 2016
Machine learning has become ubiquitous in the modern world, from enterprise applications to personal use cases and from image annotation and text recognition to speech captioning and machine translation. Its capability to infer patterns from data has found great success in prediction and decision making, including in security-sensitive applications such as intrusion detection, virus detection, biometric identity recognition, and spam filtering. However, the strength of traditional learning systems rests on an assumption of distributional stationarity, and this assumption becomes a vulnerability when an adversary manipulates the training process (poisoning attacks) or the testing process (evasion attacks). Because traditional learning strategies are potentially vulnerable to such security faults, machine learning techniques are needed that remain secure against sophisticated adversaries, closing the gap between the stationarity assumption and deliberate adversarial manipulation. These techniques are referred to as secure learning throughout this thesis. To study the secure learning problem systematically, my work rests on three components. First, I model different kinds of attacks against learning systems by evaluating the adversaries' capabilities, goals, and cost models. Second, I study secure learning algorithms that counter targeted malicious attacks, accounting theoretically for the learners' specific goals and their resource and capability limitations. Concretely, I model the interactions between the defender (the learning system) and the attackers as different forms of games. Based on this game-theoretic analysis, I evaluate the utilities and constraints of both participants and optimize the secure learning system with respect to adversarial responses.
Third, I design and implement practical algorithms to defend efficiently against multi-adversarial attack strategies. My thesis focuses on examining and answering theoretical questions about the limits of classifier evasion (evasion attacks), adversarial contamination (poisoning attacks), and privacy preservation in adversarial environments, as well as on designing practical, resilient learning algorithms for a wide range of applications, including spam filters, malware detection, network intrusion detection, and recommendation systems. Throughout, I tailor my approaches toward building scalable machine learning systems, as demanded by modern big-data applications.
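To make the evasion-attack setting concrete, here is a minimal sketch (my illustration, not an algorithm from the thesis) of an attacker evading a linear classifier by nudging a sample against the weight vector until the predicted label flips; the classifier, weights, and step size are all invented for the example.

```python
# Hypothetical sketch of an evasion attack against a linear classifier
# sign(w.x + b); +1 = malicious, -1 = benign. The attacker moves the
# sample in the direction -w (steepest descent of the score) until it
# is classified benign. All numbers below are invented for illustration.

def evade(x, w, b, step=0.1, max_iter=100):
    """Perturb feature vector x until w.x + b drops below zero."""
    x = list(x)
    for _ in range(max_iter):
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if score < 0:          # classified benign: evasion succeeded
            return x
        norm = sum(wi * wi for wi in w) ** 0.5
        x = [xi - step * wi / norm for wi, xi in zip(w, x)]
    return x

# A "spam" point on the malicious side of the boundary w=(1,1), b=-1:
adv = evade([2.0, 2.0], [1.0, 1.0], -1.0)
assert sum(adv) - 1.0 < 0      # the perturbed point now scores benign
```

Defenses of the kind the thesis studies would anticipate this best response when training the classifier, rather than assume test data is drawn from the training distribution.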
382

Performance Measurement for the e-Government Initiatives: A Comparative Study

Isaac, Willy C. 01 January 2007
The main objective of performance measurement in public organizations is to support better decision-making by management, leading to improved outcomes for the community, and to meet external accountability requirements. Different performance measurement models exist for measuring e-Government initiatives, and studies differ in the key factors and measurement indicators they identify. Many measurement instruments take too simplistic a view and focus on measuring what is easy to measure. A major challenge faced by existing e-Government studies is understanding what citizens, businesses, and government agencies want and how to measure the return on a government's Internet investment. Government administrations, international organizations, and consultancy firms have conducted many e-Government benchmarking and performance studies. The results of these studies vary because most assess e-Government from only one perspective: that of citizens, businesses, or public officials. The issues analyzed by different evaluations lead to different outcomes and give only a partial answer to the question of a given country's or local community's level of e-Government. The main aim of this research was to evaluate the impact of e-Government and its instruments of measurement in order to develop an e-Government performance measurement framework. A combined research methodology of literature research and case study was chosen to meet this goal. The research analyzed the existing literature on performance measurement models from the private and public sectors, as well as the e-Government performance models proposed by governmental and international organizations. The proposed model was validated against a number of national government strategies using an illustrative case-study approach based on documentary analysis.
Because many of these performance studies serve as the main determinants of public opinion on e-Government and inform e-Government strategy, what is being measured is crucial for the further development of e-Government.
383

Privacy Policies: A Study of Their Use Among Online Canadian Pharmacies

Kuzma, Joanne 01 January 2006
The use of online Canadian pharmacies has grown over the past decade due to lower-cost medications and ease of use. To gain business and marketing information, these firms collect a variety of consumer data, which has raised concerns among consumers about the privacy of the collected data. However, researchers have not effectively examined how online consumers value specific privacy factors when deciding whether to use such sites. Nor have studies determined whether many of these sites have comprehensive privacy policies indicating that they protect consumers' data across a variety of factors. This research included a study of 25 major online Canadian pharmacies to determine the completeness of privacy policy factors in this population. The survey showed that the majority of sites did contain a privacy policy; however, the comprehensiveness of the policies differed vastly among the sites. This dissertation also included an investigation of the privacy policy factors consumers feel are important when deciding to use these pharmacy sites. Results of a survey of 147 users of medical Web sites showed that consumers were concerned about privacy on these sites, with opt-in, security, and consumer/licensing issues of high importance. However, the study also showed that for consumers who had actually used an online pharmacy during the past year, cost savings, rather than privacy issues, were the principal concern. This dissertation created an instrument that online firms can use to evaluate consumers' perceptions of privacy policies, as well as to determine which policies are important to include on a Web site.
384

Low Cost Video For Distance Education

Simpson, Michael J. 01 January 1996
A distance education system was designed for Nova Southeastern University (NSU). The design was based on emerging low-cost video technology. The report presented the design and summarized existing distance education efforts and technologies. The design supported multimedia electronic classrooms and enabled students to participate in multimedia classes using standard telephone networks. Results were presented in three areas: management, courseware, and systems. In the area of management, the report recommended that the University separately establish, fund, and staff the distance education project; supporting rationale was included. In the area of courseware, the importance of quality courseware was highlighted: developing distance education courseware was found to be difficult, yet quality courseware was the key to a successful distance education program. In the area of systems, component-level designs were presented for a student system, a university host, and a support system, and the networks connecting the systems were addressed. The student system was based on widely available multimedia systems. The host system supported up to sixteen participants in a single class. The support system was designed for the development of courseware and the support of future projects in distance education. The report included supporting proof-of-principle demonstrations, which showed that low-cost video systems had utility at speeds as low as 7.2 kbps and that high-quality student images were not crucial to the system. The report included three alternate implementation strategies. The initial capability could be operational in 1997, and a multi-session, 2,000-user system was projected for early in the next century.
385

A dBase III Plus System For Processing And Maintaining Historical Records On Students' Evaluations Of Instructors And Courses

Warner, Douglas W. 01 January 1989
The problem addressed by this study was the development of a reliable methodology for processing and maintaining historical records on students' evaluations of instructors and courses. Two additional factors were critical to the study: first, grounding the study in the merits of students' evaluations of instructors; second, as an outcome of the study, designing and developing a project that would satisfy the problem. The final project, a computerized system written in dBASE III Plus, a commercial programming language by Ashton-Tate, was developed and named "ICES", an acronym standing for Instructor/Course Evaluation System. The database structure and language of dBASE are among the easiest on the market, and many books on dBASE are available to users, easing extension of the system into additional projects.
386

Region-based memory management for expressive GPU programming

Holk, Eric 10 August 2016
Over the last decade, graphics processing units (GPUs) have seen their use broaden from purely graphical tasks to general-purpose computation. The increased programmability required by demanding graphics applications has proven useful for a number of non-graphical problems as well. GPUs' high memory bandwidth and floating-point performance make them attractive for general computation workloads, yet these benefits come at the cost of added complexity. One particular problem is that GPUs and their associated high-performance memory typically lie on discrete cards separated from the host CPU by the PCI-Express bus. This requires programmers to carefully manage the transfer of data between CPU and GPU memory so that the right data is in the right place at the right time. Programmers must design data structures with serialization in mind in order to move data efficiently across the PCI bus. In practice, this limits programmers to simple data structures such as one- or two-dimensional arrays, and to the applications that can be easily expressed in terms of these structures. CPU programmers have long had access to richer data structures, such as trees or first-class procedures, which enable new and simpler approaches to solving certain problems.

This thesis explores the use of region-based memory management (RBMM) to overcome these data movement challenges. RBMM is a technique in which data is assigned to regions, and these regions can then be operated on as a unit. One of the first uses of regions was to amortize the cost of deallocation: many small objects would be allocated in a single region, and the region could be deallocated in a single operation independent of the number of items it contained. In this thesis, regions are used as the unit of data movement between the CPU and GPU. Data structures are assigned to a region, so the runtime system does not have to be aware of a data structure's internal layout.
The runtime system can simply move the entire region from one device to another, keeping the internal layout intact and allowing code running on either device to operate on the data in the same way.

These ideas are explored through a new programming language called Harlan. Harlan is designed to simplify programming GPUs and other data-parallel processors. It provides kernel expressions as its fundamental mechanism for parallelism. Kernels function similarly to a parallel map or zipWith operation from other functional programming languages. For example, the expression (kernel ([x xs] [y ys]) (+ x y)) evaluates to a vector where each element is the sum of the corresponding elements in xs and ys. Kernel bodies can be arbitrary expressions, even including other kernels, thereby supporting nested data parallelism. Harlan uses a region-based memory system to enable higher-level programming features such as trees, algebraic data types (ADTs), and even first-class procedures. Like all data in Harlan, first-class procedures are device-independent, so a procedure created in GPU code can be applied in CPU code and vice versa.

Besides the design and description of the implementation of Harlan, this thesis includes a type safety proof for a small model of Harlan's region system, as well as a number of small application case studies. The type safety proof provides formal support that Harlan ensures programs will have the right data in the right place at the right time. The application case studies show that Harlan, and the ideas embodied within it, are useful both for a number of traditional applications and for problems that are problematic for previous GPU programming languages.

The design and implementation of Harlan, its proof of type safety, and the set of application case studies together show that region-based memory management is an effective way of enabling high-level features in languages targeting CPU/GPU systems and other machines with disjoint memories.
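As an illustrative analogy (not Harlan code), the kernel expression (kernel ([x xs] [y ys]) (+ x y)) behaves like a zipWith over its input vectors. The sketch below models that semantics sequentially in Python; the `kernel` helper is my own invention for the illustration, not part of Harlan or its runtime.

```python
# Sequential model of Harlan's kernel semantics: apply a body
# elementwise across one or more input vectors (a zipWith).
# On a GPU, each element would be computed by a separate thread.

def kernel(body, *vectors):
    """Apply body across corresponding elements of the vectors."""
    return [body(*elems) for elems in zip(*vectors)]

xs = [1, 2, 3]
ys = [10, 20, 30]

# Analogue of (kernel ([x xs] [y ys]) (+ x y)):
assert kernel(lambda x, y: x + y, xs, ys) == [11, 22, 33]

# Bodies are arbitrary, so kernels can nest (nested data parallelism):
rows = [[1, 2], [3, 4]]
assert kernel(lambda row: kernel(lambda v: v * 2, row), rows) == [[2, 4], [6, 8]]
```

The point of the region system is that xs, ys, and even nested structures like rows live in a region that can be shipped between CPU and GPU wholesale, so this same elementwise code can run on either device.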
387

Guidelines for Development of Courses for Delivery Over The Iowa Communications Network

Hasman, Gary F. 01 January 2001
This paper examines the quality of education as it relates to the Iowa Communications Network (ICN). It reviews the literature to arrive at a working definition of quality, which was used to create a list of characteristics desirable in teachers who use technology. A list of such teachers was solicited from three administrative committees of the ICN and from the directors of the state's 15 area education agencies. Four teachers were selected from the list, and their approaches to creating programs for delivery over the ICN were examined. Personal interviews were used to discover commonalities among the four teachers' approaches to distance education that had led to their success. These commonalities, along with the working definition of quality, were used to develop a set of guidelines for developers of future ICN offerings. The guidelines contain information on designing courses for distance education, overcoming obstacles, using collaborative techniques, and distance learning methodology. The guidelines were developed into a small booklet that will be distributed to teachers and administrators across the state of Iowa.
388

A Technique for Visualizing Software Architectures

Inouye, Jon M. 01 January 2002
Software architecture appeared in the early 1990s as a distinct discipline within software engineering. Models based on software architecture attempt to reduce the complexity of software by providing relatively coarse-grained structures for representing different aspects of software development. A software architecture typically consists of various components and connections arranged in a specific topology. Elements of the topology can serve as abstractions of (for example) modules, objects, protocols, or interfaces; the meaning of the topology depends on the viewpoint. Software architectures can be described using an architecture description language (ADL). The key goals of ADLs are to communicate alternate designs to the different individuals involved in software development (referred to as "stakeholders"), to detect reusable structures, and to record design decisions. A major problem in software architecture has been the difficulty of creating different representations of an architecture to accommodate the differing viewpoints of stakeholders. Ideally, different viewpoints would be conveyed in a way that is both comprehensive enough for specialists and consistent enough for generalists. The representation problem has been one of reconciling and integrating different viewpoints. This dissertation provided a solution to the representation problem by creating a tool for three-dimensional visualization of software architectures using the Virtual Reality Modeling Language (VRML). Different architectural viewpoints were first defined in an ADL called the Visually Translatable Architecture Description Language (VT ADL). When VT ADL was translated into VRML, software architectures were embodied within three-dimensional "worlds" through which stakeholders may navigate. Each viewpoint was a separate VRML world.
A viewpoint could be related to other viewpoints, representing different facets of software architectures, to reflect different stakeholder requirements. Traceability from design to requirements was possible through VRML hyperlinks from the visualized architecture. The goal of the dissertation was to develop a prototype for demonstrating the visualization technique. Based on the successful results of two visualization case studies, we concluded that the goal was achieved. Refinement of the prototype into a polished visualization tool was recommended. In future research, the refined version should be used for realistic evaluation of the technique in an actual software development environment.
389

Effect of Electronic Portfolio Assessments On The Motivation And Computer Interest of Fourth And Fifth Grade Students In A Massachusetts Suburban School

Montesino, Paul V. 01 January 1998
A preliminary causal-comparative study was conducted in a suburban elementary school in Massachusetts to investigate the impact of electronic portfolio assessments on students' intrinsic motivation and computer interest. The target population was two groups of fourth-grade and two groups of fifth-grade students, for a total of 77 subjects. They were trained in and introduced to electronic portfolio assessments in a program that lasted the entire school year. The students used HyperStudio, a multimedia software program developed and marketed by Roger Wagner Publishing, Inc. It was the intention of the elementary school program directors and teachers that students would take a proactive, self-administered approach to the management of their portfolios. Participants were tested before initiation of the program and post-tested six months later using the "Children's Academic Intrinsic Motivation Inventory" (CAIMI), a Likert-scale test developed by Adele Eskeles Gottfried, Ph.D., at California State University, Northridge. They were also given pre-test and post-test computer interest Likert-scale inventories adapted from the Moe Computer Educational Survey (MCES), a test developed at South Dakota State University by Daniel J. Moe as part of his graduate research. The MCES was used to determine whether girls' computer interest had changed after participation in the computer-based electronic portfolio assessment program. The motivation and interest pre- and post-test results were analyzed with t-tests (p < .05 for motivation, p < .01 for interest). There were no significant treatment effects. There were score increases at the lowest level of the motivation pre-test scoring range but no increases at the highest pre-test scoring levels. Thirty-four students (48 percent) showed an increase in intrinsic motivation scores, while thirty-seven students (52 percent) showed no change or experienced a decrease in scores.
It was therefore concluded that other factors, including subject maturation and teachers' skill in identifying and working intensely with students who displayed symptoms of initially low motivation, may have contributed to the increases. The study was inconclusive because it did not provide evidence to support the hypothesis that students' intrinsic motivation or interest changed as a result of their participation in the electronic portfolio assessment program in the Massachusetts suburban elementary school. For confidentiality reasons, fictitious names were used for the suburban locality and the experimental school: the locality was named Best borough and the school site Pioneer.
390

Social Media Network Data Mining and Optimization

Jose, Neha Clare 13 June 2016
Many small social aid organizations could benefit from collaborating with other organizations on common causes, but may not have the necessary social relationships. We present a framework for a recommender system for the Louisiana Poverty Initiative that identifies member organizations with common causes and aims to forge connections between these organizations. Our framework employs a combination of graph and text analyses of the organizations' Facebook pages. We use NodeXL, a plugin to Microsoft Excel, to download the Facebook graph and to interface with SNAP, the Stanford Network Analysis Platform, for calculating network measurements. Our framework extends NodeXL with algorithms that analyze the text found on the Facebook pages as well as the connections between organizations and individuals posting on those pages. As a substitute for more complex text data mining, we use a simple keyword analysis for identifying the goals and initiatives of organizations. We present algorithms that combine this keyword analysis with graph analyses that compute connectivity measurements for both organizations and individuals. The results of these analyses can then be used to form a recommender system that suggests new network links between organizations and individuals to let them explore collaboration possibilities. Our experiments on Facebook data from the Louisiana Poverty Initiative show that our framework will be able to collect the information necessary for building such a user-to-user recommender system.
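As a sketch of the two signals the framework combines (all organization names, keywords, and graph data below are invented for illustration, and the code is my simplification, not the NodeXL/SNAP pipeline itself), keyword overlap between two organizations' page text can be paired with a simple degree measure on the posting graph:

```python
# Hypothetical sketch: keyword overlap as a stand-in for text mining,
# plus node degree as a basic connectivity measure on the graph of
# organizations and the individuals posting on their pages.

from collections import defaultdict

def keyword_overlap(text_a, text_b, keywords):
    """Count cause keywords that appear in both pages' text."""
    a = {k for k in keywords if k in text_a.lower()}
    b = {k for k in keywords if k in text_b.lower()}
    return len(a & b)

def degree(edges):
    """Degree of each node in an undirected posting graph."""
    d = defaultdict(int)
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return dict(d)

keywords = ["poverty", "housing", "food", "literacy"]
pages = {"OrgA": "Fighting poverty and food insecurity",
         "OrgB": "Food banks and housing aid against poverty"}
edges = [("OrgA", "volunteer1"), ("OrgB", "volunteer1"), ("OrgA", "OrgB")]

# Shared causes ("poverty", "food") suggest recommending a link:
assert keyword_overlap(pages["OrgA"], pages["OrgB"], keywords) == 2
# A shared individual also connects the two organizations:
assert degree(edges)["volunteer1"] == 2
```

A recommender built on these signals would rank candidate organization pairs by combining their keyword overlap with such connectivity scores, which is the role the framework's graph and text analyses play.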
