  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
451

Legal aspects of aviation risk management

Leloudas, Georgios January 2003 (has links)
No description available.
452

Minimizing the naming facilities requiring protection in a computing utility.

Bratt, Richard Glenn. January 1975 (has links)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1975. / Bibliography: leaves 129-130.
453

The design, development and evaluation of a holistic cloud migration decision framework

Mushi, Tumelo Nicholas January 2020 (has links)
No keywords provided in dissertation / Cloud Computing has gained traction since its emergence, and client organisations that want to benefit from the Cloud are looking for ways to migrate their on-premise applications to it. To assist client organisations with migration projects, researchers and practitioners have proposed various Cloud migration approaches. However, these approaches differ in applicability depending on the type of application being migrated and the Cloud Service Provider to which the application is being migrated. This variety of approaches makes Cloud migration decisions complex, as client organisations have to consider different approaches depending on the migration project. The purpose of this dissertation is to create a universal Cloud migration approach that can be applied to every Cloud migration project. In this dissertation, a Cloud migration decision framework is proposed, namely the Holistic Cloud Migration Decision Framework (HCMDF). The research strategy followed is Design Science Research (DSR), selected because the output of the research is an Information Technology (IT) research artefact. By applying the DSR strategy, the HCMDF was successfully developed and evaluated in the real world using an adaptive case study. The analysis of the results indicated that the HCMDF solves the Cloud migration problem and that it can be applied to every Cloud migration project. Throughout the evaluation, areas of improvement were identified, and these will be considered in future research. / School of Computing / M. Tech (Information Technology)
454

Methods and Tools for Practical Software Testing and Maintenance

Saieva, Anthony January 2024 (has links)
As software continues to envelop traditional industries, the need for attention to cybersecurity is higher than ever. Software security helps protect businesses and governments from financial losses due to cyberattacks and data breaches, as well as from reputational damage. In theory, securing software is relatively straightforward: it involves following certain best practices and guidelines to ensure that the software is secure. In practice, however, software security is often much more complicated. It requires a deep understanding of the underlying system and code (including potentially legacy code), as well as a comprehensive understanding of the threats and vulnerabilities that could be present. Additionally, software security involves implementing strategies to protect against those threats and vulnerabilities, which may combine technologies, processes, and procedures. In fact, many real cyberattacks are caused not by zero-day vulnerabilities but by known issues that have not been addressed, so real software security also requires ongoing monitoring and maintenance to ensure that critical systems remain secure. This thesis presents a series of novel techniques that together form an enhanced software maintenance methodology, from initial bug reporting all the way through patch deployment. We begin by introducing Ad Hoc Test Generation, a novel testing technique for the case where a security vulnerability or other critical bug is not detected by the developers' test suite and is discovered only post-deployment; the developers must then quickly devise a new test that reproduces the buggy behavior. Next, the developers need to test whether their candidate patch indeed fixes the bug, without breaking other functionality, while racing to deploy before attackers pounce on exposed user installations. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test-suite generation techniques like symbolic execution. Our prototype of this concept is called ATTUNE. To construct patches, developers maintaining software may in some instances be forced to deal directly with the binary because the source code is no longer available. For these instances, this work presents a transformer-based model called DIRECT that recovers semantically meaningful names for variables and functions whose original names have been lost, giving developers the opportunity to work with a facsimile of the source code that would otherwise be unavailable. In the event that developers need even more support deciphering the decompiled code, we provide another tool called REINFOREST that allows developers to search for similar code, which they can use to further understand the code in question and as a reference when developing a patch. After patches have been written, deployment remains a challenge. In some instances, deploying a patch for the buggy behavior may require supporting legacy systems where software cannot be upgraded without causing compatibility issues. To support these updates, this work introduces the concept of binary patch decomposition, which breaks a software release down into its component parts and allows software administrators to apply only the critical portions without breaking functionality.
We present a novel software patching methodology with which we can recreate bugs, develop patches, and deploy updates in the presence of the typical challenges that arise when patching production software, including deficient test suites, lack of source code, lack of documentation, compatibility issues, and the difficulties associated with patching binaries directly.
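As a rough illustration of the targeted patch-validation idea described in this abstract (not the ATTUNE implementation, which operates at the record-replay and binary-rewriting level), the following Python sketch replays a saved crashing input and a set of previously passing inputs against an original and a candidate-patched binary; all paths and file names are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of targeted patch validation via recorded inputs.

Hypothetical illustration only: it approximates record-replay by feeding
saved inputs to the original and patched binaries over stdin.
"""
import subprocess
from pathlib import Path


def replay(binary: Path, input_file: Path, timeout: int = 10) -> int:
    """Run the binary on one recorded input and return its exit code."""
    with input_file.open("rb") as stdin:
        result = subprocess.run(
            [str(binary)],
            stdin=stdin,
            capture_output=True,
            timeout=timeout,
        )
    return result.returncode


def validate_patch(original: Path, patched: Path,
                   crashing_input: Path, passing_inputs: list[Path]) -> bool:
    """Accept a patch only if it fixes the recorded failure and preserves
    behaviour on the recorded passing inputs."""
    # The bug must reproduce on the original binary ...
    if replay(original, crashing_input) == 0:
        raise RuntimeError("recorded input no longer reproduces the bug")
    # ... and must be fixed by the candidate patch ...
    if replay(patched, crashing_input) != 0:
        return False
    # ... without regressing any previously recorded passing run.
    return all(replay(patched, p) == replay(original, p) for p in passing_inputs)


if __name__ == "__main__":
    ok = validate_patch(
        Path("./service_orig"),            # hypothetical binaries
        Path("./service_patched"),
        Path("recordings/crash_001.bin"),  # hypothetical recordings
        sorted(Path("recordings/pass").glob("*.bin")),
    )
    print("patch accepted" if ok else "patch rejected")
```

A real deployment would replay full execution recordings rather than raw stdin, but the accept/reject logic is the same: reproduce the failure on the original, confirm the fix, and check for regressions on recorded passing runs.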
455

Detection, Triage, and Attribution of PII Phishing Sites

Roellke, Dennis January 2022 (has links)
Stolen personally identifiable information (PII) can be abused to perform a multitude of crimes in the victim's name. For instance, credit card information can be used in the drug business, Social Security Numbers and health IDs can be used in insurance fraud, and passport data can be used for human trafficking or terrorism. Even information typically considered publicly available (e.g. name, birthday, phone number, etc.) can be used for unauthorized registration of services and generation of new accounts using the victim's identity (unauthorized account creation). Accordingly, modern phishing campaigns have outgrown the goal of account takeover and are trending towards more sophisticated goals. While criminal investigations in the real world evolved over centuries, digital forensics is only a few decades into the art. In digital forensics, threat analysts have pioneered the field of enhanced attribution, a study of threat intelligence that aims to find a link between attacks and attackers. Their findings provide valuable information for investigators, ultimately bolster takedown efforts, and help determine the proper course of legal action. Although the overwhelming array of security solutions on offer today suggests great threat-analysis capabilities, vendors share only attack signatures, and additional intelligence remains locked into each vendor's ecosystem. Victims often hesitate to disclose attacks, fearing reputational damage and the accidental disclosure of intellectual property. This phenomenon limits the availability of post-mortem analyses of real-world attacks and often forces third-party investigators, like government agencies, to mine their own data. In the absence of industry data, it can be promising to actively infiltrate fraudsters in an independent sting operation. Intuitively, undercover agents can be used to monitor online markets for illegal offerings; another common industry practice is to trap attackers in monitored sandboxes called honeypots. Using honeypots, investigators lure and deceive an attacker into believing an attack was successful while simultaneously studying the attacker's behavior. Insights gathered from this process allow investigators to examine the latest attack vectors, methodology, and overall trends. For either approach, investigators need additional information about the attacker so that they know what to look for. In the context of phishing attacks, it has repeatedly been proposed to "shoot tracers into the cloud" by stuffing phishing sites with fake information that can later be recognized in one way or another. However, to the best of our knowledge, no existing solution can keep up with modern phishing campaigns, because existing solutions focus on credential stuffing only, while modern campaigns steal more than just user credentials: they increasingly target PII instead. We observe that the use of HTML form input fields is common to both credential-stealing and identity-stealing phishing sites, and we propose to thoroughly evaluate this feature for the detection, triage, and attribution of phishing attacks. This process includes extracting the phishing site's target PII from its HTML <label> tags, investigating how JavaScript code stylometry can be used to fingerprint a phishing site for its detection, and determining commonalities between threat actors' personal styles.
Our evaluation shows that <input> tag identifiers and <label> tags are the most important features for this machine learning classification task, lifting accuracy from 68% without these features to up to 92% when they are included. We show that <input> tag identifiers and code stylometry can also be used to decide whether a phishing site uses cloaking. We then propose to build the first denial-of-phishing engine (DOPE), which handles all phishing, both credential stealing and PII theft. DOPE analyzes HTML <label> tags to learn which information to provide, and we craft this information in a believable manner, meaning that it can be expected to pass the phisher's credibility tests.
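As a rough illustration of the feature family described in this abstract (form-field labels and <input> identifiers used as classification features), the following Python sketch extracts <label> text and <input> id/name attributes with BeautifulSoup and feeds them to a simple bag-of-words classifier. The training files, labels, and model choice are hypothetical and are not the thesis's actual pipeline.

```python
"""Minimal sketch: HTML form-field vocabulary as phishing-detection features."""
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def form_field_tokens(html: str) -> str:
    """Collect <label> text plus <input> id/name attributes into one string."""
    soup = BeautifulSoup(html, "html.parser")
    tokens = [lbl.get_text(" ", strip=True) for lbl in soup.find_all("label")]
    for inp in soup.find_all("input"):
        tokens.extend(filter(None, (inp.get("id"), inp.get("name"))))
    return " ".join(tokens).lower()


# Hypothetical training corpus: raw HTML documents, 1 = phishing, 0 = benign.
pages = [open(p, encoding="utf-8").read()
         for p in ("site_a.html", "site_b.html", "site_c.html")]
labels = [1, 0, 1]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit([form_field_tokens(p) for p in pages], labels)

# Score a new page on its form-field vocabulary alone.
unknown = open("unknown.html", encoding="utf-8").read()
print(clf.predict_proba([form_field_tokens(unknown)]))
```

Even this crude vocabulary signal is plausible as a strong feature, which is consistent with the abstract's report that <input> identifiers and <label> tags lift classification accuracy from 68% to up to 92%.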
456

SECURITY WITHOUT SACRIFICE: MEDIATING SECURITY IN THE HISTORIC CITY HALL

KASPAREK, JASON W. 05 October 2004 (has links)
No description available.
457

Requirements analysis of application software for telemedicine and the health care industry

Sundaram, Senthilnathan 01 July 2002 (has links)
No description available.
458

Network intrusion simulation using OPNET

Razak, Shabana 01 October 2002 (has links)
No description available.
459

Mobile agent file integrity analyzer

Wang, Guantong 01 April 2001 (has links)
No description available.
460

Mitigation of network tampering using dynamic dispatch of mobile agents

Rocke, Adam Jay 01 January 2004 (has links)
No description available.
