
Making Trade-offs among Security and Other Requirements during System Design

Elahi, Golnaz 21 August 2012
Employing a design solution can satisfy some requirements while having negative side-effects on other software requirements and project objectives. Ultimately, selecting a design solution from multiple options involves making trade-offs among competing requirements. These trade-offs, especially at the early stages of software development, are often hard to identify or quantify, and can be subjective. Security is one critical requirement among many, and it can cause critical trade-offs and impose severe costs. Damage from security attacks can be overwhelming, and the costs increase every year. The threat of vulnerabilities and their exploitation by potential adversaries calls for careful analysis of security risks and of the trade-offs that security solutions impose, from the viewpoints of both defenders and attackers. Since software developers and analysts are usually not security experts, detecting potential threats within software systems can be problematic. Even when threats are known, the risk factors, i.e., the probability of a successful attack and the resulting damage, are not always known or numerically measurable. In this situation, selecting proper security solutions is challenging, since the mitigating impacts and side-effects of solutions are often not quantifiable. This thesis addresses these challenges in identifying and making trade-offs among security and other system requirements and stakeholders' goals. It introduces a framework for identifying and modeling security risks and requirements trade-offs. The central idea is to analyze security requirements by predicting software vulnerabilities, weaknesses, or flaws that can be exploited to break into the system. Vulnerabilities and exploitation scenarios are specified within goal-oriented requirements models of the system. This approach enables analysis of vulnerability exploitations and their impacts on the running system. The structure of goal-oriented security requirements models makes it possible to trace the ultimate impacts of exploitations on the high-level goals of stakeholders and on design objectives. To evaluate the risk of vulnerabilities, the framework intertwines the Common Vulnerability Scoring System (CVSS) with security requirements risk assessment. The framework also provides a decision aid that takes into account risks, competing requirements, security solutions, their impacts on risks, and their side-effects on other requirements, helping decision makers select among alternative security solutions. The proposed decision analysis method helps analysts make requirements trade-offs systematically in the absence of quantitative data, or when a mixture of quantitative and qualitative data is available.
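
The framework itself is qualitative and model-based, but the flavor of combining CVSS-style risk numbers with qualitative side-effect ratings can be illustrated with a rough sketch. This is not the thesis's actual method; all vulnerability names, scores, labels, and weights below are invented for the example.

```python
# A minimal sketch of scoring alternative security solutions by combining
# quantitative CVSS-style risk numbers with qualitative side-effect labels.
# All vulnerabilities, solutions, scores, and weights here are hypothetical.

# Qualitative contribution labels mapped onto a coarse numeric scale.
QUALITATIVE = {"helps": 1.0, "hurts": -1.0, "unknown": 0.0}

vulnerabilities = {          # CVSS base scores (0.0 - 10.0) for known weaknesses
    "sql_injection": 9.8,
    "session_fixation": 6.5,
}

solutions = {
    "parameterized_queries": {
        "mitigates": {"sql_injection": 0.9},        # fraction of risk removed
        "side_effects": {"usability": "unknown", "cost": "hurts"},
    },
    "web_app_firewall": {
        "mitigates": {"sql_injection": 0.5, "session_fixation": 0.4},
        "side_effects": {"performance": "hurts", "cost": "hurts"},
    },
}

def score(solution, risk_weight=1.0, goal_weight=2.0):
    """Risk reduction minus weighted penalties on other stakeholder goals."""
    reduction = sum(vulnerabilities[v] * f
                    for v, f in solution["mitigates"].items())
    penalty = sum(QUALITATIVE[label]
                  for label in solution["side_effects"].values())
    return risk_weight * reduction + goal_weight * penalty

best = max(solutions, key=lambda name: score(solutions[name]))
print(f"preferred alternative: {best}")
```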

Placement By Marriage

Bian, Huimin 30 July 2008
As the field programmable gate array (FPGA) industry grows device capacity with Moore's law and expands its market to high performance computing, the scalability of its key CAD algorithms emerges as a new priority for delivering a user experience competitive with parallel processors. Among the many walls to overcome, placement stands out due to its critical impact on both frontend synthesis and backend routing. To construct a scalable placement flow, we present three innovations in detailed placement: a legalizer that works well under low whitespace; a wirelength optimizer based on bipartite matching; and a cache-aware annealer. When applied to the hundred-thousand-cell IBM benchmark suite, our detailed placer achieves 27% better wirelength and 8X faster runtime than FastDP, the fastest academic detailed placer reported, and our full placement flow achieves 101X faster runtime, with 5% wirelength overhead, compared to VPR, the de facto standard in FPGA placement.
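
The bipartite-matching idea behind the wirelength optimizer can be sketched as an assignment problem: match a window of movable cells to free slots so that total distance to each cell's connected pins is minimized. The sketch below is a simplification under invented coordinates, not the thesis's optimizer; it uses SciPy's Hungarian-algorithm solver.

```python
# A rough sketch of wirelength-driven detailed placement as bipartite matching:
# assign a small window of movable cells to free slots so that the total
# Manhattan distance to each cell's (fixed) connected pins is minimized.
import numpy as np
from scipy.optimize import linear_sum_assignment

cells = {"c0": [(3, 4), (6, 1)],   # movable cell -> fixed pins it connects to
         "c1": [(0, 0)],
         "c2": [(5, 5), (2, 2)]}
slots = [(1, 1), (4, 4), (6, 2)]   # free placement slots in the window

names = list(cells)
cost = np.array([[sum(abs(sx - px) + abs(sy - py) for px, py in cells[c])
                  for sx, sy in slots]
                 for c in names])

rows, cols = linear_sum_assignment(cost)   # optimal cell-to-slot matching
for r, s in zip(rows, cols):
    print(f"{names[r]} -> slot {slots[s]}")
print("total wirelength proxy:", cost[rows, cols].sum())
```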

50,000 Tiny Videos: A Large Dataset for Non-parametric Content-based Retrieval and Recognition

Karpenko, Alexandre 22 September 2009
This work extends the tiny image data-mining techniques developed by Torralba et al. to videos. A large dataset of over 50,000 videos was collected from YouTube. This is the largest user-labeled research database of videos available to date. We demonstrate that a large dataset of tiny videos achieves high classification precision in a variety of content-based retrieval and recognition tasks using very simple similarity metrics. Content-based copy detection (CBCD) is evaluated on a standardized dataset, and the results are applied to related video retrieval within tiny videos. We use our similarity metrics to improve text-only video retrieval results. Finally, we apply our large labeled video dataset to various classification tasks. We show that tiny videos are better suited for classifying activities than tiny images. Furthermore, we demonstrate that classification can be improved by combining the tiny images and tiny videos datasets.
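
In the tiny-image/tiny-video spirit, the "very simple similarity metrics" amount to nearest-neighbor search over downsampled, flattened clips. A minimal sketch with synthetic stand-in data:

```python
# A minimal sketch of non-parametric retrieval with a very simple similarity
# metric: sum of squared differences between downsampled, flattened videos.
# The data here is synthetic; the real dataset holds 50,000+ YouTube clips.
import numpy as np

rng = np.random.default_rng(0)
# 1,000 "tiny videos": 10 frames of 16x16 grayscale, flattened to one vector.
dataset = rng.random((1000, 10 * 16 * 16)).astype(np.float32)
labels = rng.integers(0, 5, size=1000)          # 5 synthetic activity classes

def retrieve(query, k=50):
    """Return indices of the k nearest videos under SSD."""
    ssd = ((dataset - query) ** 2).sum(axis=1)
    return np.argsort(ssd)[:k]

neighbors = retrieve(dataset[0])
# Non-parametric classification: majority vote over the neighbors' labels.
predicted = np.bincount(labels[neighbors]).argmax()
print("predicted class:", predicted)
```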

A Study of Conflict Detection in Software Transactional Memory

Lupei, Daniel 15 February 2010
Transactional Memory (TM) has been proposed as a simpler parallel programming model than the traditional locking model. However, uptake from the programming community has been slow, primarily because the performance issues of software-based TM strategies are not well understood. In this thesis we conduct a systematic analysis of conflict scenarios that may emerge when enforcing correctness between conflicting transactions. We find that some combinations of conflict detection and resolution strategies perform better than others, depending on the conflict patterns in the application. We validate our findings by implementing several concurrency control strategies and measuring their relative performance. Based on these observations, we introduce partial rollbacks as a mechanism for effectively compensating for the variability in TM algorithm performance. We show that using this mechanism we can obtain close to the overall best performance for a range of conflict patterns in a synthetically generated workload and a realistic game application.
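
To make the two ingredients concrete, here is a toy, single-threaded model of a software transaction with lazy conflict detection (reads are validated at commit) and a partial rollback to a savepoint rather than a full restart. It is a conceptual illustration only, not the thesis's implementation.

```python
# A toy software-TM sketch: buffered writes, lazy validation of reads at
# commit time, and a partial rollback to a savepoint instead of a restart.

memory = {"x": (0, 0), "y": (0, 0)}   # name -> (value, version)

class Transaction:
    def __init__(self):
        self.reads = {}     # name -> version observed at first read
        self.writes = []    # ordered (name, value) write buffer

    def read(self, name):
        for n, v in reversed(self.writes):   # read-your-own-writes
            if n == name:
                return v
        value, version = memory[name]
        self.reads.setdefault(name, version)
        return value

    def write(self, name, value):
        self.writes.append((name, value))

    def savepoint(self):
        return len(self.writes)

    def partial_rollback(self, sp):
        del self.writes[sp:]           # undo only the writes after the savepoint

    def commit(self):
        # Lazy conflict detection: abort if any read location changed under us.
        for name, seen in self.reads.items():
            if memory[name][1] != seen:
                raise RuntimeError("conflict: retry transaction")
        for name, value in self.writes:
            memory[name] = (value, memory[name][1] + 1)

t = Transaction()
t.write("x", t.read("x") + 1)
sp = t.savepoint()
t.write("y", 99)        # speculative work we may need to undo
t.partial_rollback(sp)  # compensate without re-executing the whole transaction
t.commit()
print(memory)           # x committed with a new version, y untouched
```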

Data Recovery For Web Applications

Akkus, Istemi Ekin 14 December 2009
Web applications store their data at the server. Despite several benefits, this design raises a serious problem: a bug or misconfiguration that causes data loss or corruption can affect a large number of users. We describe the design of a generic recovery system for web applications. Our system tracks application requests and reuses the undo logs already kept by databases to selectively recover from corrupting requests and their effects. The main challenge is to correlate requests across the multiple tiers of the application to determine the correct recovery actions. We explore using dependencies both within and across requests at three layers (database, application, and client) to help identify data corruption accurately. We evaluate our system using known bugs and misconfigurations in popular web applications, including Wordpress, Drupal, and Gallery2. Our results show that our system enables recovery from data corruption without loss of critical data, while incurring little overhead for request tracking.
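
The core mechanism, selective undo driven by cross-request dependencies, might be sketched as follows. A real system would reuse the database's own undo logs; this toy keeps before-images in a Python list, and the requests and row keys are invented.

```python
# A simplified sketch of request-level recovery: log a before-image for every
# row a request modifies, link requests that touch the same rows, and undo a
# corrupting request together with the requests that depend on it.

rows = {"post:1": "hello", "user:7": "alice"}
undo_log = []          # (request_id, row_key, before_image)
touched_by = {}        # row_key -> last request that wrote it
depends_on = {}        # request_id -> set of earlier request_ids

def apply_request(req_id, writes):
    depends_on[req_id] = set()
    for key, new_value in writes.items():
        if key in touched_by:                       # cross-request dependency
            depends_on[req_id].add(touched_by[key])
        undo_log.append((req_id, key, rows[key]))
        rows[key] = new_value
        touched_by[key] = req_id

def recover(bad_req):
    # Taint the bad request and everything transitively dependent on it.
    tainted, changed = {bad_req}, True
    while changed:
        changed = False
        for req, deps in depends_on.items():
            if req not in tainted and deps & tainted:
                tainted.add(req)
                changed = True
    for req_id, key, before in reversed(undo_log):  # replay undo in reverse
        if req_id in tainted:
            rows[key] = before

apply_request("r1", {"post:1": "corrupted!"})       # buggy request
apply_request("r2", {"post:1": "corrupted! edited", "user:7": "bob"})
recover("r1")
print(rows)   # post:1 restored; r2's dependent edits rolled back too
```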

On the Design of Peer-assisted Video-on-demand Systems

Wu, Jiahua 17 February 2010
Peer-assisted Video-on-Demand (VoD) systems have not only received substantial recent research attention, but have also been implemented and deployed with success in large-scale real-world streaming systems. Despite their remarkable popularity, the design of such systems is not well understood. In this thesis, we address two design problems in peer-assisted VoD systems. First, we focus on the design of cache replacement algorithms. We construct an analytical framework based on dynamic programming to develop an in-depth understanding of optimal strategies for designing cache replacement algorithms. Second, we shift our attention to the problem of allocating surplus upload bandwidth in multi-channel systems. Through theoretical analysis and realistic simulations, we conclude that surplus upload bandwidth from peers can be utilized more efficiently than with conventional prefetching strategies when it is devoted to redistributing content to channels in a deficit state.
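
A back-of-the-envelope sketch of the surplus-allocation idea: channels whose peers cannot cover their own streaming demand ("deficit" channels) receive the spare upload capacity of "surplus" channels. The channel names and bandwidth figures below are illustrative only, and the proportional split is one simple policy, not the thesis's derived strategy.

```python
# Surplus upload bandwidth from capacity-rich channels is devoted to
# redistributing content in deficit channels, rather than to prefetching.

channels = {
    # name: (total peer upload capacity, total streaming demand), in Mbps
    "news":   (500.0, 650.0),   # deficit channel
    "movies": (800.0, 900.0),   # deficit channel
    "music":  (600.0, 400.0),   # surplus channel
}

surplus = sum(max(cap - dem, 0.0) for cap, dem in channels.values())
deficits = {name: dem - cap
            for name, (cap, dem) in channels.items() if dem > cap}

# Split the surplus across deficit channels in proportion to their shortfall.
total_deficit = sum(deficits.values())
for name, deficit in deficits.items():
    share = surplus * deficit / total_deficit
    print(f"{name}: deficit {deficit:.0f} Mbps, receives {share:.0f} Mbps")
```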

A System for Detecting, Preventing and Exposing Atomicity Violations in Multithreaded Programs

Chew, Lee 13 January 2010
Multi-core machines have become common and have led to an increase in multithreaded software. In turn, the number of concurrency bugs has also increased. Such bugs are elusive and remain difficult to solve despite existing research. This thesis therefore proposes a system that detects, prevents, and optionally helps expose concurrency bugs. Specifically, we focus on bugs caused by atomicity violations, which occur when thread interleaving violates the programmer's assumption that a code section executes atomically. At compile time, our system performs static analysis to identify code sections where violations could occur. At run time, we use debug registers to monitor these sections for interleaving thread accesses that would cause a violation. If such an access is detected, we undo its effects and thus prevent the violation. Optionally, we help expose atomicity violations by perturbing thread scheduling during execution. Our results demonstrate that the system is effective and imposes low overhead.
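
The serializability reasoning at the heart of many atomicity-violation detectors can be shown in a few lines: a pair of consecutive accesses by one thread, interleaved by one remote access, is flagged when no serial order could produce the same result. The four unserializable patterns below follow the classic AVIO-style analysis; this is a conceptual model, not the thesis's debug-register implementation.

```python
# Flag unserializable interleavings of accesses to one shared variable.
# (local access, interleaved remote access, local access)
UNSERIALIZABLE = {
    ("R", "W", "R"),   # two local reads may observe different values
    ("W", "W", "R"),   # local read misses the value the thread just wrote
    ("R", "W", "W"),   # local write clobbers the remote write via a stale read
    ("W", "R", "W"),   # remote read observes an intermediate value
}

def check(trace):
    """trace: list of (thread_id, access_type) events on one shared variable."""
    violations = []
    for i in range(len(trace) - 2):
        (t1, a1), (t2, a2), (t3, a3) = trace[i:i + 3]
        if t1 == t3 and t1 != t2 and (a1, a2, a3) in UNSERIALIZABLE:
            violations.append((i, (a1, a2, a3)))
    return violations

# Thread 0 does a read-modify-write it assumes is atomic; thread 1 writes in
# between, producing the classic R-W-W lost-update pattern.
trace = [(0, "R"), (1, "W"), (0, "W")]
print(check(trace))   # [(0, ('R', 'W', 'W'))]
```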

Facial Feature Point Detection

Chen, Fang 06 December 2011
Facial feature point detection is a key issue in facial image processing. One main challenge is the variation of facial structure due to expressions. This thesis explores more accurate and robust facial feature point detection algorithms that can facilitate research on facial image processing, in particular facial expression analysis. It introduces a facial feature point detection system in which Multilinear Principal Component Analysis is applied to extract highly descriptive features of facial feature points. In addition, to improve the accuracy and efficiency of the system, a skin-color-based face detection algorithm is studied. Experimental results indicate that the system is effective in detecting 20 facial feature points in frontal faces with different expressions, and that it achieves higher accuracy than the state-of-the-art method, BoRMaN.
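
Skin-color face detection of this kind typically thresholds the chroma channels of a YCbCr image. A minimal sketch, using common textbook threshold ranges rather than the thesis's tuned values:

```python
# Skin-color segmentation: convert RGB to YCbCr chroma and threshold Cb/Cr.
import numpy as np

def skin_mask(rgb):
    """rgb: HxWx3 uint8 image. Returns a boolean skin-likelihood mask."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCbCr chroma conversion.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

image = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
mask = skin_mask(image)
print(f"candidate skin pixels: {mask.sum()} of {mask.size}")
```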

Programmer-assisted Automatic Parallelization

Huang, Diego 08 December 2011
Parallel software is now required to exploit the abundance of threads and processors in modern multicore computers. Unfortunately, manual parallelization is too time-consuming and error-prone for all but the most advanced programmers. While automatic parallelization promises threaded software with little programmer effort, current auto-parallelizers are easily thwarted by pointers and other forms of ambiguity in the code. In this dissertation we profile the loops in SPEC CPU2006, categorize them in terms of available parallelism, and focus on promising loops that are not parallelized by IBM's XL C/C++ V10 auto-parallelizer. For those loops we propose methods of improved interaction between the programmer and the compiler to facilitate parallelization. In particular, we (i) suggest ways for the compiler to identify parallelization blockers to the programmer; (ii) suggest ways for the programmer to provide guarantees to the compiler that overcome these blockers; and (iii) evaluate the resulting impact on performance.
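
One classic analysis an auto-parallelizer runs before declaring a loop safe is a cross-iteration dependence test. The sketch below implements the textbook GCD test for array accesses of the form a[c1*i + k1] and a[c2*i + k2]; production compilers combine many such tests with the aliasing guarantees the programmer provides.

```python
# GCD dependence test: a dependence between a[c1*i + k1] and a[c2*j + k2] is
# possible only if gcd(c1, c2) divides (k2 - k1).
from math import gcd

def dependence_possible(c1, k1, c2, k2):
    """True if the two affine array accesses may touch the same element."""
    return (k2 - k1) % gcd(c1, c2) == 0

# for i in range(n): a[2*i] = a[2*i + 1] + 1
# write index 2i, read index 2i+1: gcd(2,2)=2 does not divide 1.
print(dependence_possible(2, 0, 2, 1))   # False: this pair is safe

# for i in range(n): a[2*i] = a[i] + 1
print(dependence_possible(2, 0, 1, 0))   # True: a dependence may exist
```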

Exploring Virtualization Techniques for Branch Outcome Prediction

Sadooghi-Alvandi, Maryam 20 December 2011
Modern processors use branch prediction to predict branch outcomes in order to fetch ahead in the instruction stream, increasing concurrency and performance. Larger predictor tables can improve prediction accuracy, but come at the cost of larger area and longer access delay. This work introduces a new branch predictor design that increases the perceived predictor capacity without increasing its delay, by using a large virtual second-level table allocated in the second-level caches. Virtualization is applied to a state-of-the-art multi-table branch predictor. We evaluate the design using instruction count as a proxy for timing on a set of commercial workloads. For a predictor whose size is determined by access delay constraints rather than area, accuracy can be improved by 8.7%. Alternatively, the design can achieve the same accuracy as a non-virtualized design while using 25% less dedicated storage.
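
A toy model of the virtualization idea: a small, fast first-level table of 2-bit saturating counters backed by a much larger "virtual" second-level table, standing in for the space borrowed from the L2 cache. On an L1 miss the entry is fetched from L2, so capacity grows without lengthening the common-case access path. The table sizes, indexing, and eviction policy here are illustrative, not the paper's design.

```python
L1_SIZE, L2_SIZE = 64, 4096
l1 = {}                              # small dedicated predictor table
l2 = [2] * L2_SIZE                   # large virtual table (2 = weakly taken)

def predict(pc):
    idx = pc % L2_SIZE
    if idx not in l1:                # L1 miss: bring the entry in from L2
        if len(l1) >= L1_SIZE:       # simple eviction: write victim back to L2
            victim, counter = l1.popitem()
            l2[victim] = counter
        l1[idx] = l2[idx]
    return l1[idx] >= 2              # predict taken on the upper counter half

def update(pc, taken):
    idx = pc % L2_SIZE
    counter = l1.get(idx, l2[idx])
    counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    l1[idx] = counter

# A branch at pc=0x40 that is almost always taken trains quickly.
for outcome in [True, True, True, False, True]:
    predict(0x40)
    update(0x40, outcome)
print("predicts taken?", predict(0x40))
```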
