  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Dynamically learning efficient server/client network protocols for networked simulations

Orsten, Sterling 06 1900 (has links)
With the rise of services like Steam and Xbox Live, multiplayer support has become essential to the success of many commercial video games. Explicit server-client synchronisation models are bandwidth-intensive and error-prone to implement, while implicit peer-to-peer synchronisation models are brittle, inflexible, and vulnerable to cheating. We present a generalised server-client network synchronisation model targeted at complex games, such as real-time strategy games, that previously have only been feasible via peer-to-peer techniques. We use prediction, learning, and entropy-coding techniques to learn a bandwidth-efficient incremental game-state representation while guaranteeing both correctness of synchronised data and robustness in the face of unreliable network behaviour. The resulting algorithms are efficient enough to synchronise the state of real-time strategy games such as Blizzard’s Starcraft (which can involve hundreds of in-game characters) using less than three kilobytes per second of bandwidth.
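The core idea of an incremental game-state representation can be illustrated with a minimal sketch, which is not the thesis's actual algorithm (the names and state layout here are hypothetical): the server encodes only the fields that changed since the last acknowledged state, so a game with hundreds of mostly idle units sends very little per tick.

```python
# Hedged sketch: delta encoding of a game state held as a dict of
# entity -> value. Only changed or removed entries cross the network.

def encode_delta(prev, curr):
    """Return the entries of `curr` that differ from `prev`, plus removals."""
    changed = {k: v for k, v in curr.items() if prev.get(k) != v}
    removed = [k for k in prev if k not in curr]
    return {"set": changed, "del": removed}

def apply_delta(prev, delta):
    """Reconstruct the current state on the client from `prev` + delta."""
    state = dict(prev)
    state.update(delta["set"])
    for k in delta["del"]:
        state.pop(k, None)
    return state
```

In a real protocol the delta would then be entropy-coded and guarded against packet loss; this sketch only shows why steady-state bandwidth stays small when few entities move.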
92

Efficient query evaluation using hybrid index organization

Zhou, Ying Jie January 2011 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
93

Mapping BoxTalk to Promela Model

Peng, Yuan January 2007 (has links)
A telecommunication feature is an optional or incremental unit of functionality, such as call display (CD) and call forwarding (CF). A feature interaction occurs when, in the presence of other features, the actual behaviour of a feature becomes inconsistent with its specified behaviour. This feature interaction problem is long-standing in telephony, and it becomes increasingly pressing as more and more sophisticated features are developed and put into use. It takes considerable effort to test that the addition of a new feature to a system does not affect any existing features in an undesired way. Distributed Feature Composition (DFC), proposed by Michael Jackson and Pamela Zave, is an architectural approach to the feature interaction problem. Telecommunication features are modeled as independent components, which we call boxes. Boxes are composed in a pipe-and-filter-like sequence to form an end-to-end call. Our work studies the behaviour of single feature boxes. We translate BoxTalk specifications into a format that is more conducive to automated reasoning, build formal models on the translated format, and check those models with the SPIN model checker against DFC compliance properties written in Linear Temporal Logic (LTL). From BoxTalk specifications to Promela models, the translation takes three steps: 1) explicating BoxTalk, which expands BoxTalk macros and presents its implicit behaviours as explicit transitions; 2) defining BoxTalk semantics in terms of Template Semantics; and 3) constructing a Promela model from the Template Semantics HTS. Our case studies exercised this translation process, and the resulting models were shown to satisfy the desired properties.
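The thesis checks Promela models with SPIN against LTL properties; as a rough, language-neutral illustration of the kind of compliance check involved, here is a toy transition system for a hypothetical feature box (states and labels invented for this sketch) with a simple reachability-based check that the box can always return to its quiescent state.

```python
# Hedged sketch: a feature box as a dict of state -> {event: next_state},
# and a check that every reachable state can get back to "idle".

TRANSITIONS = {
    "idle":   {"setup": "linked"},
    "linked": {"talk": "linked", "teardown": "idle"},
}

def reachable(start, transitions):
    """All states reachable from `start` by following transitions."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(transitions.get(s, {}).values())
    return seen

def always_can_quiesce(transitions, quiescent="idle"):
    """From every reachable state, the quiescent state is reachable again."""
    return all(quiescent in reachable(s, transitions)
               for s in reachable(quiescent, transitions))
```

A real SPIN run explores the same kind of state space exhaustively, but against full LTL formulas rather than plain reachability.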
94

Fault Diagnosis in Enterprise Software Systems Using Discrete Monitoring Data

Reidemeister, Thomas 18 May 2012 (has links)
Success for many businesses depends on their information software systems. Keeping these systems operational is critical, as failure in these systems is costly. Such systems are in many cases sophisticated, distributed, and dynamically composed. To ensure high availability and correct operation, it is essential that failures be detected promptly, their causes diagnosed, and remedial actions taken. Although automated recovery approaches exist for specific problem domains, the problem-resolution process is in many cases manual and painstaking. Computer support personnel put a great deal of effort into resolving the reported failures. The growing size and complexity of these systems creates the need to automate this process. The primary focus of our research is on automated fault diagnosis and recovery using discrete monitoring data such as log files and notifications. Our goal is to quickly pinpoint the root cause of a failure. Our contributions are: modelling discrete monitoring data for automated analysis, automatically learning common symptoms of failures from historic monitoring data and using such models to pinpoint faults, and providing a model for decision-making under uncertainty such that appropriate recovery actions are chosen. Failures in such systems are caused by software defects, human error, hardware failures, environmental conditions, and malicious behaviour. Our primary focus in this thesis is on software defects and misconfiguration.
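As a stand-in for the thesis's model-based diagnosis (the event codes and fault names below are invented for illustration), the symptom-matching idea can be sketched as: signatures of past failures are learned from historic log data, and a new failure is diagnosed by finding the signature its log events most resemble.

```python
# Hedged sketch: diagnose a failure by Jaccard similarity between the
# observed log-event set and symptom signatures from historic failures.

SYMPTOMS = {
    "db_misconfig": {"E101", "E205"},
    "disk_full":    {"E205", "E310", "E311"},
}

def diagnose(events, symptoms):
    """Return the known fault whose signature best matches `events`."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    observed = set(events)
    return max(symptoms, key=lambda fault: jaccard(observed, symptoms[fault]))
```

The thesis additionally weighs uncertainty when choosing a recovery action; this sketch covers only the symptom-to-fault matching step.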
95

On effective fault localization in software debugging /

Qi, Yu. January 2008 (has links)
Thesis (Ph.D.)--University of Texas at Dallas, 2008. / Includes vita. Includes bibliographical references (leaves 106-116)
96

A methodology for developing timing constraints for the Ballistic Missile Defense System /

Miklaski, Michael H. Babbitt, Joel D. January 2003 (has links) (PDF)
Thesis [M.H. Miklaski]-(M.S. in Systems Technology) and (M.S. in Software Engineering)--Naval Postgraduate School, December 2003. Thesis [J.D. Babbitt]-(M.S. in Computer Science)--Naval Postgraduate School, December 2003. / Thesis advisor(s): Man-Tak Shing, James Bret Michael. Includes bibliographical references (p. 287-289). Also available online.
97

Using Topic Models to Support Software Maintenance

Grant, Scott 30 April 2012 (has links)
Latent topic models are statistical structures in which a "latent topic" describes some relationship between parts of the data. Co-maintenance is defined as an observable property of software systems under source control in which source code fragments are modified together in some time frame. When topic models are applied to software systems, latent topics emerge from code fragments. However, it is not yet known what these latent topics mean. In this research, we analyse software maintenance history, and show that latent topics often correspond to code fragments that are maintained together. Moreover, we show that latent topic models can identify such co-maintenance relationships even with no supervision. We can use this correlation both to categorize and understand maintenance history, and to predict future co-maintenance in practice. The relationship between co-maintenance and topics is directly analysed within changelists, with respect to both local pairwise code fragment similarity and global system-wide fragment similarity. This analysis is used to evaluate topic models used with a domain-specific programming language for web service similarity detection, and to estimate appropriate topic counts for modelling source code. / Thesis (Ph.D, Computing) -- Queen's University, 2012-04-30 18:16:04.05
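The co-maintenance signal that the latent topics are shown to correlate with can be sketched without any topic-model machinery (file names here are hypothetical): count how often pairs of code fragments appear together in changelists, then predict a fragment's most likely future co-change partner from those counts.

```python
# Hedged sketch: mining pairwise co-maintenance counts from changelists.

from collections import Counter
from itertools import combinations

def comaintenance_counts(changelists):
    """Count how often each pair of fragments is modified together."""
    pairs = Counter()
    for files in changelists:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

def predict_partner(fragment, pairs):
    """Fragment most often co-modified with `fragment`, or None."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == fragment:
            scores[b] += n
        elif b == fragment:
            scores[a] += n
    return scores.most_common(1)[0][0] if scores else None
```

The thesis's contribution is showing that unsupervised latent topics recover much of this co-maintenance structure directly from the code; the counting above is only the ground truth such topics are evaluated against.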
98

Dynamically learning efficient server/client network protocols for networked simulations

Orsten, Sterling Unknown Date
No description available.
99

Static and dynamic analysis of programs that contain arbitrary interprocedural control flow

Sinha, Saurabh January 2002 (has links)
No description available.
100

Visualizing interaction patterns in program executions

Jerding, Dean Frederick 12 1900 (has links)
No description available.