561

Cost-effective radiation hardened techniques for microprocessor pipelines

Lin, Yang January 2015 (has links)
The aggressive scaling of semiconductor devices has caused a significant increase in the soft error rate induced by radiation particle strikes. This has led to an increasing need for soft error tolerance techniques to maintain system reliability, even for sea-level commodity computer products. Conventional radiation-hardening techniques, typically used in safety-critical applications, are prohibitively expensive for non-safety-critical microprocessors in terrestrial environments. Providing effective hardening solutions for general logic in microprocessor pipelines, in particular, is a major challenge and remains open. This thesis studies soft error effects on modern microprocessors, with the aim of developing cost-effective soft error mitigation techniques for general logic and providing a comprehensive soft error treatment for commercial microprocessor pipelines. The thesis makes three major contributions. The first contribution proposes two novel radiation-hardened flip-flop architectures, named SETTOFF. A reliability evaluation model, which can statistically analyse the reliability of different circuit architectures, is also developed. Evaluation results for 65nm and 120nm technologies show that SETTOFF provides better error-tolerance capabilities than most previous techniques. Compared to a TMR latch, SETTOFF reduces area, power, and delay overheads by over 50%, 86%, and 78%, respectively. The second contribution proposes a self-checking technique based on the SETTOFF architectures. It overcomes a common limitation of most previous techniques by providing a self-checking capability, which allows SETTOFF to mitigate both errors occurring in the original circuitry and errors occurring in the redundancy added for error tolerance. Evaluation results demonstrate that, for protecting the register file, the self-checking architecture tolerates multiple-bit upsets far better than the traditional ECC technique, with significantly lower power and delay penalties. The third contribution proposes a novel pipeline protection mechanism, achieved by incorporating the SETTOFF-based self-checking cells into the microprocessor pipeline. An architectural replay recovery scheme is developed to recover from the errors detected by the self-checking SETTOFF architecture. The evaluation results show that the proposed mechanism effectively mitigates both SEUs and SETs occurring in different parts of the pipeline, overcoming the drawback of most previous pipeline protection techniques and achieving complete, cost-effective pipeline protection.
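The abstract benchmarks SETTOFF against a TMR latch. As background for that comparison, the short sketch below illustrates the triple-modular-redundancy principle in software: a single upset in one of three redundant copies is masked by a majority vote. The names and the bit-flip model are illustrative assumptions; the actual SETTOFF and TMR cells are circuit-level designs, not Python.

```python
# Illustrative sketch of triple modular redundancy (TMR) voting, the baseline
# the thesis compares SETTOFF against. Purely a software analogy.
import random

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies."""
    return (a & b) | (a & c) | (b & c)

def simulate_seu(word: int, width: int = 32) -> int:
    """Model a single-event upset by flipping one random bit."""
    return word ^ (1 << random.randrange(width))

stored = 0xDEADBEEF
copies = [stored, stored, stored]
copies[random.randrange(3)] = simulate_seu(stored)  # upset exactly one copy
assert majority(*copies) == stored                  # the single upset is masked
```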
562

Domain Name Service Trust Delegation in Cloud Computing: Exploitation, Risks, and Defense

Laprade, Craig 01 January 2021 (has links)
The Domain Name Service (DNS) infrastructure is a global distributed database that links human-readable domain names with the Internet Protocol (IP) addresses of the resources that power the internet. With the explosion of cloud computing over the past decade, an increasing proportion of organizations' computing services has moved from on-premise solutions to cloud providers. These services range from complete DNS management to individual services such as e-mail or a payroll application. Each of these outsourced services requires a trust delegation: the owning organization must advertise to the world, often via DNS records, that another organization can act authoritatively on its behalf. What occurs when these trust delegations are misused? In this work, I explore the methods that can be used to exploit DNS trust delegation and then examine the top 1% of the most popular domains in the world for the presence of these exploitable vulnerabilities. Finally, I conclude with methods of defense against such attacks and publish a novel tool to detect these vulnerabilities.
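The exploitation path the abstract investigates starts from delegation records that point at resources an organization no longer controls. A hedged sketch of one such check, written with the dnspython library, is shown below; the function name, heuristic, and example domain are illustrative assumptions rather than the author's published tool.

```python
# Flag delegation records (e.g. CNAME/NS) whose targets no longer resolve,
# i.e. potentially dangling trust delegations that a third party could claim.
import dns.resolver

def dangling_targets(domain: str, rdtype: str = "CNAME"):
    """Return delegation targets of `domain` that fail to resolve to an address."""
    dangling = []
    try:
        answers = dns.resolver.resolve(domain, rdtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return dangling                      # no delegation records of this type
    for record in answers:
        target = str(record.target).rstrip(".")
        try:
            dns.resolver.resolve(target, "A")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            dangling.append(target)          # target gone: delegation may be claimable
    return dangling

print(dangling_targets("www.example.com"))
```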
563

Towards Polymorphic Systems Engineering

Mathieson, John T.J. 01 January 2021 (has links)
Systems engineering is widely regarded as a full life cycle discipline and provides methodologies and processes to support the design, development, verification, sustainment, and disposal of systems. While this cradle-to-grave concept is well documented throughout the literature, there has been recent and ever-increasing emphasis on evolving and digitally transforming systems engineering methodologies, practices, and tools into a model-based discipline, not only for advancing system development but, perhaps more importantly, for extending agility and adaptability through the later stages of system life cycles: system operations and sustainment. This research adapts the DevOps concept from the software engineering domain (a collaborative merger of system development and system operations) into a Systems Engineering DevOps Lemniscate life cycle model. This progression beyond traditional life cycle models lays a foundation for the continuum of model-based systems engineering artifacts during the life of a system and promotes the coexistence and symbiosis of variants throughout. It does so by merging model-based systems engineering processes, tools, and products into a common, surrogate modeling environment in which the operations and sustainment of a system are tied closely to the curation of a descriptive system model. This model-based approach using descriptive system models, traditionally leveraged for system development, is expanded to include the operational support elements necessary to operate and sustain the system (e.g., executable procedures, command scripts, and maintenance manuals modeled as part of the core system). This evolution of traditional systems engineering practice, focused on digitally transforming and enhancing system operations and sustainment, capitalizes on the ability of model-based systems engineering to embrace change and improve agility in the later life cycle stages, and it emphasizes the existence of polymorphic systems engineering (performing a variety of systems engineering roles in simultaneously occurring life cycle stages to increase system agility). A model-based framework for applying the Systems Engineering DevOps life cycle model is introduced as a new Systems Modeling Language profile. A use case leveraging this "Model-Based System Operations" framework demonstrates how merging operational support elements into a spacecraft system model improves the adaptability of those elements in response to faults, failures, and evolving environments during system operations, exemplifying elements of a DevOps approach to cyber-physical system sustainment.
564

SledgeEDF: Deadline-Driven Serverless for the Edge

McBride, Sean Patrick 01 January 2021 (has links)
Serverless computing has gained mass popularity by offering lower cost, improved elasticity, and greater ease of use. Driven by the need for efficient low-latency computation on resource-constrained infrastructure, it is also becoming a common execution model for edge computing. However, hyperscale cloud mitigations against the serverless cold start problem do not cleanly scale down to tiny 10-100 kW edge sites, causing edge deployments of existing VM- and container-based serverless runtimes to suffer poor tail latency. This is particularly acute considering that future edge computing workloads are expected to have latency requirements ranging from microseconds to seconds. SledgeEDF is the first runtime to apply the traditional real-time systems techniques of admissions control and deadline-driven scheduling to the serverless execution model. It extends previous research on aWsm, an ahead-of-time (AOT) WebAssembly compiler, and Sledge, a single-process WebAssembly-based serverless runtime designed for the edge, yielding a runtime that targets efficient execution of mixed-criticality edge workloads. Evaluations demonstrate that SledgeEDF prevents backpressure due to excessive client requests and eliminates head-of-line blocking, allowing latency-sensitive, high-criticality requests to preempt executing tasks and complete within 10% of their optimal execution time. Taken together, SledgeEDF's admissions controller and deadline-driven scheduler enable it to provide limited guarantees around latency deadlines defined by client service-level objectives.
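The two real-time techniques named in the abstract, admissions control and deadline-driven (EDF) scheduling, can be summarized in a few lines. The sketch below is a generic Python illustration under assumed request fields and a utilization-based admission test; SledgeEDF itself is a WebAssembly-based runtime, so this is not its implementation.

```python
# Generic earliest-deadline-first dispatch plus utilization-based admissions control.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    absolute_deadline: float                     # heap key: earliest deadline first
    name: str = field(compare=False)
    expected_runtime: float = field(compare=False)
    relative_deadline: float = field(compare=False)

class EdfScheduler:
    def __init__(self, utilization_bound: float = 1.0):
        self.bound = utilization_bound           # reject work beyond this load
        self.utilization = 0.0
        self.ready = []

    def admit(self, req: Request, now: float) -> bool:
        """Admissions control: refuse requests that would overload the node."""
        demand = req.expected_runtime / req.relative_deadline
        if self.utilization + demand > self.bound:
            return False                         # shed load instead of queueing it
        self.utilization += demand
        req.absolute_deadline = now + req.relative_deadline
        heapq.heappush(self.ready, req)
        return True

    def next_request(self):
        """Dispatch the admitted request whose deadline expires soonest."""
        return heapq.heappop(self.ready) if self.ready else None

sched = EdfScheduler()
sched.admit(Request(0.0, "bulk-transcode", expected_runtime=0.5, relative_deadline=2.0), now=0.0)
sched.admit(Request(0.0, "sensor-alert", expected_runtime=0.001, relative_deadline=0.01), now=0.0)
print(sched.next_request().name)                 # "sensor-alert" runs first under EDF
```

Rejecting work at admission time, rather than queueing it, is what lets a runtime shed excess client requests before head-of-line blocking can build up.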
565

Revalume: Configurable Employee Evaluations in the Cloud

Li, Terrence Zone 01 March 2017 (has links)
The software industry has seen a shift from annual to more frequent quarterly and even weekly employee reviews. As a result, there is high demand for employee evaluations that are less costly and less time-consuming while still providing key insights for richer interactions between employees and their employers or managers. Tech companies are constantly looking for methods of producing high-quality evaluations to prevent costly turnover. In an industry where software engineers are in high demand, tech companies face a challenging problem. Issues with employee evaluations typically include a lack of performance transparency, unhelpful feedback, a lack of metrics, and a lack of time and resources. This thesis addresses these challenges through the implementation of an employee evaluation tool. Revalume is a cloud-based web application that provides a streamlined solution for creating, routing, completing, and viewing evaluation forms. Revalume allows users to use pre-existing, configurable templates, third-party APIs, and a friendly UI to ease the evaluation process. Revalume was evaluated with a longitudinal, semi-controlled study that demonstrates meaningful improvements over existing solutions.
566

Cause and Prevention of Liver Off-Flavor in Five Beef Chuck Muscles

Wadhwani, Ranjeeta 01 December 2008 (has links)
Liver off-flavor is a sporadic problem that limits consumer acceptance of several beef chuck muscles, including the infraspinatus (flat iron steak). Residual blood hemoglobin is known to contribute to liver off-flavor development. This study was conducted to evaluate factors affecting the development of liver off-flavor after cooking of beef chuck (shoulder) muscles. The study was conducted in three parts. The objective of part 1 was to determine the effects of muscle (infraspinatus, longissimus dorsi, serratus ventralis, supraspinatus, teres major) and processing (with or without carcass electrical stimulation) on residual blood hemoglobin content and total pigment content of raw muscle, and on sensory characteristics after cooking to 71 or 82°C. The objective of part 2 was to evaluate the effect of antioxidant treatment and anaerobic packaging on reducing the incidence of liver and other off-flavors in beef infraspinatus (IF) steaks. The objective of part 3 was to determine the effect of animal age (commercial grade, >42 months, compared to select grade, <30 months), antioxidant treatment, and anaerobic packaging on sensory characteristics of beef IF steaks. Among beef chuck muscles, the infraspinatus had the highest mean liver flavor score of 2.08±1.00, where 2 = slightly intense liver flavor. The other muscles (longissimus dorsi, serratus ventralis, supraspinatus, teres major) had mean liver flavor scores less than 2. Liver flavor score, myoglobin, hemoglobin, and total pigment content were higher (p<0.05) for infraspinatus muscle from older animals. Among select grade muscles, carcass electrical stimulation had no significant effect on liver flavor score. Rancid flavor scores increased significantly from 1.34±0.65 to 1.58±0.84 as internal cook temperature increased from 71 to 82°C, but mean TBA values, a measure of rancidity (0.25±0.15 and 0.29±0.13, respectively), were not affected by cook temperature. Antioxidant treatment significantly reduced TBA values and rancid and liver flavor scores for aerobically packaged steaks (PVC or 80% O2-MAP) but had little effect on scores of steaks in anaerobic packaging (0.4% CO-MAP). The results of this study indicate that infraspinatus steaks from older animals are most likely to have objectionable liver, sour/grassy, or rancid flavors. Objectionable flavor scores were lower in steaks receiving antioxidant injection or packaged anaerobically in 0.4% CO-MAP.
567

Analysis of Lubricants at Trace Levels Using Infrared Spectroscopy

Bandarupalli, Tanmai 01 January 2021 (has links)
Analysis of trace evidence involved in sexual assault investigations holds considerable potential as a newer avenue of identification when larger, bulk evidence is not found or is unreliable. Trace analysis of forensic materials involves common findings such as strands of hair, residues left on clothing, and shards of paint or glass. Recent research on the analysis of trace materials found as evidence in sexual assaults has shown promise in classifying condom and bottled lubricants based on their chemical profiles, which can provide an associative link in an investigation. Few studies, however, have considered the examination of lubricant evidence at the trace level at which it may be found at a crime scene or on a victim. In this study, a new protocol will be tested and established to analyze trace lubricant evidence recovered from a fabric substrate, such as underwear, after sexual assaults using Fourier transform infrared (FTIR) spectroscopy. An experiment is proposed to compare the spectra resulting from FTIR spectroscopic analysis of bulk and trace-level lubricants recovered from a cotton substrate. The resulting spectra will be compared for their similarities using multivariate statistical techniques to test the viability of the approach.
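The abstract does not name the specific multivariate techniques to be used; as one common choice, the sketch below projects unit-normalized spectra into a PCA space and compares distances between a bulk lubricant and its trace-level recovery. The synthetic spectra and the choice of PCA are assumptions for illustration only.

```python
# Compare synthetic "bulk" and "trace" FTIR spectra via PCA scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
wavenumbers = np.linspace(650, 4000, 800)               # typical mid-IR range, cm^-1
bulk = np.exp(-((wavenumbers - 1100) / 40) ** 2)        # synthetic stand-in spectrum
trace = bulk + rng.normal(scale=0.05, size=bulk.shape)  # noisier trace-level recovery
others = rng.normal(scale=0.2, size=(5, bulk.size)) + 0.5

spectra = normalize(np.vstack([bulk, trace, others]))   # unit-norm each spectrum
scores = PCA(n_components=3).fit_transform(spectra)

# A small distance between rows 0 and 1 suggests the trace spectrum retains the
# bulk lubricant's chemical profile despite recovery from the substrate.
print(np.linalg.norm(scores[0] - scores[1]))
print(np.linalg.norm(scores[0] - scores[2:], axis=1).min())
```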
568

Strategies for Enhanced Genetic Analysis of Trace DNA from Touch DNA Evidence and Household Dust

Farash, Katherine 01 January 2015 (has links)
In forensic casework it is often necessary to obtain genetic profiles from crime scene samples that contain increasingly small amounts of genetic material, called Low Template DNA (LTDNA). Two examples of LTDNA sources are touch DNA evidence and dust bunnies. Touch DNA refers to DNA that is left behind through casual contact of a donor with an object or another person. Touch DNA can be used to prove a suspect was present at a crime scene. Dust bunnies, or dust conglomerates, typically contain trapped shed skin cells of anyone in the vicinity along with fibers, dirt, hair, and other trace materials. Dust specimens are a potential source of forensic evidence that has been widely underutilized in the forensic community. This is unfortunate because a dust bunny could be used not only to associate a person with a crime scene, through trace materials such as fibers, but also to positively identify a person, through a DNA profile. For example, if a dust specimen is found on a piece of evidence suspected of being moved from its original location, for instance a body that is too heavy to carry and therefore collects dust while being dragged, then it could be used to link a suspect to a crime scene. Standard methods for obtaining and analyzing touch DNA have been established, but the techniques are not ideal. First, by nature, the 'blind-swabbing' technique, in which cotton swabs or adhesive tape are applied to an area of interest, can artificially create mixtures of biological material that was originally spatially separated. Second, because the amount of DNA present is typically very low, standard analysis methods may not be sensitive enough to produce probative profiles. In the case of mixtures, the minor component's DNA may go undetected. Dust specimens contain degraded genetic material that has been accumulating for an unknown amount of time. Additionally, dust is usually a conglomeration of genetic material from multiple donors, so a mixed profile, if any, is likely to be recovered if standard analysis methods are used. To overcome these obstacles presented by LTDNA, a micro-manipulation and combined cell lysis/direct PCR amplification technique has been developed that is sensitive enough to obtain full or probative STR profiles from single or clumped bio-particles collected from touch DNA and dust evidence. Sources of touch DNA evidence such as worn clothing items, touched objects, and skin/skin mixtures are easily sampled using an adhesive material on a microscope slide. Dust specimens can be dispersed onto an adhesive material as well. Targeted bio-particles are then "picked" with a water-soluble adhesive on a tungsten needle and deposited into a micro-volume STR amplification mix. Individual selection and analysis of isolated bio-particles reduces the chance of mixed profile recovery. To aid in the release of genetic material present in the bio-particles, a lysis mix containing a thermostable proteinase is then added to the sample. Samples are then analyzed using standard capillary electrophoresis (CE) methods. In addition to identifying the donor source of these LTDNA samples, it would be beneficial to a criminal investigation to identify the tissue source of the biological material as well. While it is widely speculated that the material originates from shed skin cells, there is little confirmatory evidence proving this assertion. Knowledge of the nature of the evidence could be vital to prevent its misinterpretation during the investigation and prosecution of a crime.
Here, tissue-specific mRNA biomarkers have been evaluated for their use in tissue source determination using a highly sensitive High Resolution Melt (HRM) assay that detects the selectively amplified targets based on their melt temperatures. Using the enhanced genetic analysis technique described above, DNA profile recovery has been markedly enhanced for touch DNA evidence and dust specimens compared to standard methods. Additionally, the molecular-based characterization method could potentially provide a better understanding of the meaningfulness of the recovered DNA profiles. This optimized strategy provides a method for recovering highly probative data from biological material in low-template samples in an efficient and cost-effective manner.
569

Forensic Analysis Of C-4 And Commercial Blasting Agents For Possible Discrimination

Steele, Katie 01 January 2007 (has links)
The criminal use of explosives has increased in recent years. Political instability and widespread access to the internet, filled with "homemade recipes," are two conjectured causes of the increase. C-4 is a plastic bonded explosive (PBX) composed of 91% of the high explosive RDX, 1.6% processing oils, 5.3% plasticizer, and 2.1% polyisobutylene (PIB). C-4 is most commonly used for military purposes but has also found use in commercial industry. Current methods for the forensic analysis of C-4 are limited to identification of the explosive; however, recent publications have suggested the plausibility of discriminating between C-4 samples based on the processing oils and stable isotope ratios. This research focuses on the discrimination of C-4 samples based on ratios of RDX to HMX, a common impurity resulting from RDX synthesis. The relative amount of HMX is a function of the RDX synthetic route and conditions. RDX was extracted from different C-4 samples and analyzed by ESI-MS-SIM as the chloride adduct, by EI-GC-MS-SIM, and by NICI-GC-MS. RDX/HMX ratios were calculated for each method. An analysis of variance (ANOVA) followed by a Tukey HSD test allowed an overall discriminating power to be assessed for each analytical method. The C-4 processing oils were also extracted and analyzed by direct exposure probe mass spectrometry (DEP-MS) with electron ionization, a technique that requires less than two minutes per analysis. The overall discriminating power of the processing oils was calculated by conducting a series of t-tests. Lastly, a set of heterogeneous commercial blasting agents was analyzed by laser-induced breakdown spectroscopy (LIBS). The data were analyzed by principal components analysis (PCA), and the possibility of creating a searchable library was explored.
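The statistical workflow described, an ANOVA followed by a Tukey HSD on RDX/HMX ratios, can be reproduced generically with standard scientific Python libraries. The ratio values below are made-up placeholders, not the thesis data.

```python
# One-way ANOVA on RDX/HMX ratios across C-4 sources, then Tukey's HSD to see
# which pairs of sources are distinguishable.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
ratios = pd.DataFrame({
    "source": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "rdx_hmx": np.concatenate([rng.normal(loc, 0.3, 5) for loc in (9.0, 11.5, 10.2)]),
})

groups = [g["rdx_hmx"].values for _, g in ratios.groupby("source")]
f_stat, p_value = stats.f_oneway(*groups)        # overall difference among sources
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Pairwise comparisons show which specific C-4 sources can be told apart.
print(pairwise_tukeyhsd(ratios["rdx_hmx"], ratios["source"], alpha=0.05))
```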
570

Advances In Fire Debris Analysis

Williams, Mary 01 January 2007 (has links)
Fire incidents are a major contributor to the number of deaths and property losses within the United States each year. Fire investigations determine the cause of a fire, resulting in an assignment of responsibility. Current methods of fire debris analysis are reviewed, including the preservation, extraction, detection, and characterization of ignitable liquids from fire debris. Leak rates were calculated for the three most common types of fire debris evidence containers, and the consequences of leaking containers on the recovery and characterization of ignitable liquids were demonstrated. The interactions of hydrocarbons with activated carbon during the extraction of ignitable liquids from fire debris were studied. An estimate of the available adsorption sites on the activated carbon surface was calculated based on the number of moles of each hydrocarbon adsorbed onto the activated carbon. Upon saturation of the surface, hydrocarbons with weaker interactions with the activated carbon were displaced by more strongly interacting hydrocarbons, resulting in distortion of the chromatographic profiles used in the interpretation of the GC/MS data. Incorporating an additional sub-sampling step in the separation of ignitable liquids by passive headspace sampling reduces the concentration of ignitable liquid accessible for adsorption and thus avoids saturating the activated carbon. A statistical method of covariance mapping with a coincident measurement, used to compare the GC/MS data sets of two ignitable liquids, was able to distinguish ignitable liquids of different classes, sub-classes, and states of evaporation. In addition, the method was able to distinguish 10 gasoline samples as having originated from different sources with a known statistical certainty. In a blind test, an unknown gasoline sample was correctly identified from the set of 10 gasoline samples without making a Type II error.
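The covariance-mapping comparison mentioned in the abstract can be gestured at with a generic example: replicate chromatograms of two liquids that share run-to-run drift yield a cross-covariance map with structure at coincident peak positions. The synthetic data and the simplified treatment of the coincident measurement below are assumptions, not the thesis's actual method.

```python
# Cross-covariance map between two sets of paired (coincident) chromatogram runs.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 20, 400)                                   # retention time, min

def profile(peaks):
    """Stand-in total-ion chromatogram: a sum of Gaussian peaks."""
    return sum(h * np.exp(-((t - c) / 0.1) ** 2) for c, h in peaks)

base_a = profile([(4.2, 1.0), (7.8, 0.6), (12.5, 0.3)])
base_b = profile([(4.2, 0.9), (7.8, 0.7), (12.5, 0.35)])

n_runs = 12
drift = 1 + rng.normal(scale=0.05, size=(n_runs, 1))          # shared per-run drift
runs_a = drift * base_a + rng.normal(scale=0.01, size=(n_runs, t.size))
runs_b = drift * base_b + rng.normal(scale=0.01, size=(n_runs, t.size))

# Cross-covariance between the two data sets across coincident runs; strong
# positive values where both profiles have peaks indicate correlated variation.
cov_map = np.cov(runs_a, runs_b, rowvar=False)[: t.size, t.size :]
print(cov_map.shape)                                          # (400, 400)
print(cov_map[np.argmax(base_a), np.argmax(base_b)])          # large at the main peaks
```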
