  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Computer display of time variant functions

Gómez, Julian E., January 1985
No description available.
362

An automated methodology for the design and implementation of virtual interfaces

Dobbs, Verlynda Smithson, January 1985
No description available.
363

Automated validation of communication protocols

Lu, Ching-sung, January 1986
No description available.
364

A model for supporting multiple software engineering methods in a software environment

Hochstettler, William Henry, January 1986
No description available.
365

Software science and program complexity measures

Baker, Albert L., January 1979
No description available.
366

Hardware and Software Considerations for Improving the Throughput of Scientific Computation Computers

Sullivan, Glenn Allen, 01 January 1977
In this paper, hardware and software techniques are presented for improving the Throughput (defined as computations per dollar) of computing systems oriented toward high-precision floating-point computations. The various improvements are referenced to a baseline of the PDP 11/20, the NOVA 1200, and the TI 960A, all 16-bit minicomputers. The most beneficial hardware improvement is the inclusion of a Floating Point Processor, which yields up to a 200X Throughput increase over a software floating-point package. The inclusion of a high-speed cache memory and the availability of Polish Notation format instructions are each shown to provide less than a 5X increase. The use of 48-bit data paths, numerous registers devoted to various processor functions, instruction look-ahead, a system I/O controller that frees the processor from I/O work, and partitioned main memory results in a combined Throughput increase of 5.9X.
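As a concrete reading of the abstract's metric, here is a minimal Python sketch; the 200X, 5X, and 5.9X multipliers are the ones reported above, while the baseline cost and speed figures are hypothetical, invented purely for illustration:

```python
# Illustrative sketch only: throughput in the paper's sense is computations
# per dollar. The multipliers come from the abstract; the baseline figures
# below are hypothetical.

def throughput(computations_per_second: float, system_cost_dollars: float) -> float:
    """Computations per dollar, the paper's definition of Throughput."""
    return computations_per_second / system_cost_dollars

baseline = throughput(50_000.0, 20_000.0)  # hypothetical 16-bit minicomputer

improvements = {
    "floating point processor (up to 200X)": 200.0,
    "cache memory (< 5X)": 5.0,
    "combined architectural changes (5.9X)": 5.9,
}

for name, factor in improvements.items():
    print(f"{name}: {baseline * factor:.1f} computations/$")
```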
367

Large-Scale, Hybrid Approaches for Recommending Web Pages Based on Content and User's Previous Click Patterns

Sharif, Mohammad Amir, 04 February 2016
The distribution of the amount of preference information across customers is not the same in every domain of recommendation problems, so it is necessary to treat each user differently based on the preference information available for them. In this dissertation, we have proposed three novel recommender-system approaches that depend on each user's preference information and can produce recommendations in a user-specific, parametric way. This parametric approach allows different weights to be assigned to the different kinds of page-page similarity features used in the recommendation process, depending on the user group to which a particular user belongs; this novel incorporation of different kinds of item-item (or page-page) similarities is shown to yield a significant increase in recommendation accuracy. In our first approach, we incorporated content-based and co-occurrence-based page-page similarities parametrically by determining the relative weights of the two component similarities in a user-specific way, and we implemented a MapReduce-based, parametric, hybrid recommendation system to address scalability. Experimental results showed better accuracy for this scalable, user-specific parametric approach than for related prior work. In our second approach, we used a clustering-based hybrid recommendation system, again in a user-specific way, to improve accuracy and further alleviate scalability issues by exploiting pre-computed clusters; this clustering-based approach outperformed the first approach for users with extremely little preference information. Finally, in our third approach, we proposed a graph-based hybrid recommendation system. Two graphs were created, one from content similarity and one from co-occurrence similarity, and features derived from these two graphs were used to make web-page recommendations. For each user-page pair, a combined feature component was first obtained as a weighted summation of the eight feature sets from each graph; using supervised learning to derive the feature weights produced much more promising results than the first two methods. The two feature components from the two graphs were then combined in a user-specific way to train a model and make recommendations. To the best of our knowledge, ours is the first such effort in the recommender-systems context.
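A minimal sketch of the blending idea at the heart of the first approach, assuming a simple linear combination and invented names (the dissertation's exact parametric form and its MapReduce implementation are not reproduced here):

```python
import numpy as np

def hybrid_similarity(content_sim: np.ndarray,
                      cooccur_sim: np.ndarray,
                      alpha: float) -> np.ndarray:
    """Blend content-based and co-occurrence-based page-page similarity.

    alpha would be chosen per user group: a user with few clicks might get a
    high alpha (lean on content), a heavy user a low one (lean on co-occurrence).
    """
    return alpha * content_sim + (1.0 - alpha) * cooccur_sim

def recommend(clicked: list, sim: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Score every page by its summed similarity to the user's clicked pages."""
    scores = sim[clicked].sum(axis=0)
    scores[clicked] = -np.inf          # never re-recommend already-seen pages
    return np.argsort(scores)[::-1][:top_k]

# Toy usage with random similarity matrices over 6 pages.
rng = np.random.default_rng(0)
content = rng.random((6, 6))
cooccur = rng.random((6, 6))
sim = hybrid_similarity(content, cooccur, alpha=0.7)
print(recommend(clicked=[0, 2], sim=sim, top_k=3))
```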
368

Elicitation of a Program's Behaviors

Miles, Craig S., 04 February 2016
Programmers, software testers, and cyber-security analysts need to understand the behaviors that their programs might exhibit. We consider a program's behaviors to be its actions that manifest some effect beyond its own internal state; a program generally exhibits such behaviors by making API calls. One particularly powerful strategy for understanding a program's behaviors is to witness their exhibition as the program runs. However, in order to witness a program's behaviors, one must first be able to elicit the program into exhibiting them. In the present work, a method is presented that automatically and efficiently elicits a program into exhibiting many or all of its potential behaviors. The method works by guiding concolic execution toward the control-flow paths along which a program's behaviors are most likely to be exhibited. First, an upfront interprocedural data-flow analysis computes how API call statements reach one another and various other program points with respect to the program's control flow. The resulting information then guides the path selection of concolic execution, giving preference to paths along which more API call statements can be reached. An evaluation shows that the presented method elicits program behaviors more efficiently than non-guided concolic execution does: the percentage increase in API call statements executed, compared to a common non-guided strategy, ranged from 2% up to 287%, with a median gain of 69.74%.
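A minimal sketch of the guided path-selection heuristic, under the assumption that reachability from each program point to API call sites has already been precomputed by the data-flow analysis; the State type and data layout are invented for illustration:

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass
class State:
    """A pending concolic-execution path, paused at some program point."""
    location: str
    path_condition: list = field(default_factory=list)

_counter = itertools.count()  # tie-breaker so heapq never compares States

def push_state(worklist, state, reachability, executed_calls):
    """Score a state by the API call sites still reachable from it."""
    reachable = len(reachability.get(state.location, set()) - executed_calls)
    heapq.heappush(worklist, (-reachable, next(_counter), state))

def select_next_state(worklist):
    """Max-heap pop: explore the path that can still reach the most API calls."""
    _, _, state = heapq.heappop(worklist)
    return state

# Toy usage: the state reaching more unexecuted API calls is explored first.
worklist = []
reach = {"L10": {"open", "read", "send"}, "L42": {"open"}}
done = {"open"}
push_state(worklist, State("L10"), reach, done)
push_state(worklist, State("L42"), reach, done)
print(select_next_state(worklist).location)  # -> "L10"
```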
369

Pytracks: A Tool for Visualizing Fish Movement Tracks on Different Scales

Fossum, Ross, 08 March 2016
A fundamental problem in conservation biology and fisheries management is making informed decisions based on the data collected. Fish populations and their spatial distributions need to be represented accurately for conservation efforts and management decisions. Methods such as modeling, surveying, and tracking can all be used to collect data on a particular fishery. To include movement patterns in conservation and management, one needs to process fish tracking data or data exported from fish-movement simulation models, and such data can be difficult to work with. Interest in this area is growing, since technology to accurately track and log fish did not exist until recently, and few tools exist to efficiently process all of the data being generated, whether real or simulated. Pytracks attempts to fill this gap by helping programmers who work with simulated and observed tracking data visualize and analyze it more efficiently. Pytracks, as presented in this thesis, is a tool written in Python that wraps raw data files from field observations or simulation models with an easy-to-use API. This allows programmers to spend less time on trivial raw-file processing and more time on data visualization and computation; the code to visualize sample data can also be much shorter and easier to interpret. In this thesis, pytracks was used to help solve a problem related to interpreting different movement algorithms. The work focuses on fish-movement models but is also relevant to other animals whose tracking data is compatible. Many examples are included in this thesis to demonstrate the effectiveness of pytracks, and additional online documentation shows how to further utilize it.
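A hypothetical sketch of the kind of wrapper the abstract describes; the class and method names below are invented for illustration, and the real pytracks API (documented in the thesis and its online documentation) may differ:

```python
# Hypothetical stand-in only, NOT the actual pytracks API: it illustrates
# wrapping a raw track file (expected columns: id, t, x, y) behind a small,
# easy-to-use interface so analysis code stays short.
import csv
import math

class TrackSet:
    """Stand-in for a collection of movement tracks loaded from a raw file."""

    def __init__(self, samples):
        self.samples = samples  # list of (fish_id, t, x, y), sorted by fish and time

    @classmethod
    def from_csv(cls, path):
        with open(path, newline="") as f:
            rows = [(r["id"], float(r["t"]), float(r["x"]), float(r["y"]))
                    for r in csv.DictReader(f)]
        return cls(sorted(rows, key=lambda s: (s[0], s[1])))

    def step_lengths(self, fish_id):
        """Distances between consecutive positions of one fish, e.g. for
        comparing the output of different movement algorithms."""
        pts = [(x, y) for fid, _, x, y in self.samples if fid == fish_id]
        return [math.dist(a, b) for a, b in zip(pts, pts[1:])]
```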
370

Learning from Access Log to Mitigate Insider Threats

Zhang, Wen, 17 March 2016
As the quantity of data collected, stored, and processed in information systems has grown, so too have insider threats. This type of threat is realized when authorized individuals misuse their privileges to violate privacy or security policies. Over the past several decades, various technologies have been introduced to mitigate the insider threat; they can be roughly partitioned into two categories: 1) prospective and 2) retrospective. Prospective technologies are designed to specify and manage a user's rights so that misuse can be detected and prevented before it transpires. Conversely, retrospective technologies permit users to invoke privileges, but investigate the legitimacy of such actions after the fact. Despite the existence of such strategies, administrators need to answer several critical questions to put them into practice. First, in a given circumstance, which type of strategy (i.e., prospective vs. retrospective) should be adopted? Second, given the type of strategy, what is the best approach to support it operationally? Existing approaches to these questions neglect the fact that the data captured by information systems may itself inform the decision making. As such, the overarching goal of this dissertation is to investigate how best to answer these questions using data-driven approaches. This dissertation makes three technical contributions. The first is a novel approach to quantify tradeoffs between prospective and retrospective strategies: each strategy is translated into a classification model, and the misclassification costs of the models are compared to facilitate decision support. The dissertation then introduces several data-driven approaches to realize the strategies. The second contribution, for prospective strategies, focuses on role-based access control (RBAC): an approach to evolve an existing RBAC policy based on evidence in an access log, which relies on a strategy to promote roles from candidates. The third contribution, for retrospective strategies, is an auditing framework that leverages workflow information to facilitate misuse detection. These methods are empirically validated on three months of access logs (millions of accesses) derived from a real-world information system.
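A minimal sketch of the first contribution's cost comparison, with hypothetical confusion counts and unit costs (the dissertation's actual classification models and cost values are not reproduced here):

```python
# Illustrative sketch: each strategy is translated into a classifier, and the
# strategies are compared by expected misclassification cost. All numbers
# below are invented for this example.

def expected_cost(fp: int, fn: int, cost_fp: float, cost_fn: float) -> float:
    """Total cost of false positives (e.g., blocked legitimate accesses for a
    prospective strategy, wasted audits for a retrospective one) plus false
    negatives (misuse that slips through)."""
    return fp * cost_fp + fn * cost_fn

# Hypothetical confusion counts and per-error costs for each strategy.
prospective = expected_cost(fp=120, fn=4, cost_fp=5.0, cost_fn=500.0)
retrospective = expected_cost(fp=300, fn=1, cost_fp=2.0, cost_fn=500.0)

better = "prospective" if prospective < retrospective else "retrospective"
print(f"prospective={prospective:.0f}, retrospective={retrospective:.0f} "
      f"-> adopt the {better} strategy")
```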
