361

Scheduling of 2-operation jobs on a single machine to minimize the number of tardy jobs [electronic resource] / by Radhika M. Yeleswarapu.

Yeleswarapu, Radhika M. January 2003 (has links)
Title from PDF of title page. / Document formatted into pages; contains 80 pages. / Thesis (M.S.I.E.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: This study addresses a distinctive but commonly occurring manufacturing problem: scheduling customized jobs, each consisting of two operations, on a single multi-purpose machine with the objective of minimizing the number of tardy jobs (jobs not completed by their due dates). Each customized job requires one unique operation and one common operation to be complete. We consider the static case, in which all jobs have equal weights and the magnitude of tardiness has no effect on the performance measure. The problem has been proved NP-hard in the literature, so obtaining an optimal solution within reasonable computational time is impractical. To date, only a pseudo-polynomial algorithm has been given for this problem, with no computational experiments designed to demonstrate its efficiency across different problem instances. / ABSTRACT: We propose a heuristic algorithm based on the Moore-Hodgson algorithm, combined with other procedures and optimal-schedule properties from the literature. In the literature, the Moore-Hodgson algorithm efficiently minimizes the number of tardy jobs for the classical single-machine, one-operation problem. The performance of the heuristic is evaluated through extensive computational experiments on large, realistically sized data sets, and the results are compared with solutions obtained by implementing the optimal pseudo-polynomial algorithm. Test data for the computational experiments are generated randomly using MATLAB 6.1. Future directions of research for improving the solutions obtained by the heuristic algorithm are given. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
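As context for the heuristic described above, the classical Moore-Hodgson procedure for the single-machine, one-operation problem can be sketched compactly: schedule jobs in earliest-due-date order and, whenever the newest job would finish late, drop the longest job scheduled so far. The Python sketch below illustrates only this classical building block on a hypothetical instance; it is not the thesis's two-operation heuristic.

```python
import heapq

def moore_hodgson(jobs):
    """Classical Moore-Hodgson algorithm for 1 || sum U_j.

    jobs: list of (processing_time, due_date) pairs.
    Returns (on_time_jobs_in_EDD_order, tardy_jobs).
    """
    scheduled = []   # heap keyed on -processing_time: longest scheduled job on top
    tardy = []
    t = 0            # completion time of the current on-time sequence
    for p, d in sorted(jobs, key=lambda job: job[1]):   # earliest due date first
        heapq.heappush(scheduled, (-p, d))
        t += p
        if t > d:    # the newly added job would finish late:
            neg_p, dropped_d = heapq.heappop(scheduled)  # drop the longest job so far
            tardy.append((-neg_p, dropped_d))
            t += neg_p                                   # neg_p is negative
    on_time = sorted(((-np_, dd) for np_, dd in scheduled), key=lambda job: job[1])
    return on_time, tardy

# Hypothetical instance: with these three jobs, at least one must be tardy.
print(moore_hodgson([(4, 5), (3, 6), (2, 7)]))
# -> ([(3, 6), (2, 7)], [(4, 5)])
```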
362

Electrostatic interactions and exciton coupling in photosynthetic light-harvesting complexes and reaction centers /

Johnson, Ethan Thoreau. January 2002 (has links)
Thesis (Ph. D.)--University of Washington, 2002. / Vita. Includes bibliographical references (leaves 184-198).
363

Algorithms and performance optimization for distributed radar automatic target recognition

Wilcher, John S. 08 June 2015 (has links)
This thesis focuses upon automatic target recognition (ATR) with radar sensors. Recent advancements in ATR have included the processing of target signatures from multiple, spatially-diverse perspectives. The advantage of multiple perspectives in target classification results from the angular sensitivity of reflected radar transmissions. By viewing the target at different angles, the classifier has a better opportunity to distinguish between target classes. This dissertation extends recent advances in multi-perspective target classification by: 1) leveraging bistatic target reflectivity signatures observed from multiple, spatially-diverse radar sensors; and, 2) employing a statistical distance measure to identify radar sensor locations yielding improved classification rates. The algorithms provided in this thesis use high resolution range (HRR) profiles – formed by each participating radar sensor – as input to a multi-sensor classification algorithm derived using the fundamentals of statistical signal processing. Improvements to target classification rates are demonstrated for multiple configurations of transmitter, receiver, and target locations. These improvements are shown to emanate from the multi-static characteristics of a target class’ range profile and not merely from non-coherent gain. The significance of dominant scatterer reflections is revealed in both classification performance and the “statistical distance” between target classes. Numerous simulations have been performed to interrogate the robustness of the derived classifier. Errors in target pose angle and the inclusion of camouflage, concealment, and deception (CCD) effects are considered in assessing the validity of the classifier. Consideration of different transmitter and receiver combinations and low signal-to-noise ratios are analyzed in the context of deterministic, Gaussian, and uniform target pose uncertainty models. Performance metrics demonstrate increases in classification rates of up to 30% for multiple-transmit, multiple-receive platform configurations when compared to multi-sensor monostatic configurations. A distance measure between probable target classes is derived using information theoretic techniques pioneered by Kullback and Leibler. The derived measure is shown to suggest radar sensor placements yielding better target classification rates. The predicted placements consider two-platform and three-platform configurations in a single-transmit, multiple-receive environment. Significant improvements in classification rates are observed when compared to ad-hoc sensor placement. In one study, platform placements identified by the distance measure algorithm are shown to produce classification rates exceeding 98.8% of all possible platform placements.
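To make the role of the Kullback-Leibler-based distance concrete, the sketch below shows one hedged interpretation: if each target class's HRR profile at a candidate platform placement is modelled as a multivariate Gaussian (an assumption made here purely for illustration; the thesis's actual measure and models may differ), a symmetric KL divergence between the class models can rank placements by class separability. The names `rank_placements` and `class_models` are hypothetical.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence D( N(mu0, cov0) || N(mu1, cov1) )."""
    k = mu0.size
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k + logdet1 - logdet0)

def symmetric_kl(model_a, model_b):
    """Symmetrised divergence between two (mean, covariance) class models."""
    (mu_a, cov_a), (mu_b, cov_b) = model_a, model_b
    return kl_gaussian(mu_a, cov_a, mu_b, cov_b) + kl_gaussian(mu_b, cov_b, mu_a, cov_a)

def rank_placements(class_models):
    """class_models: {placement_name: ((mu, cov) for class A, (mu, cov) for class B)}.
    Returns placement names ordered from most to least separable."""
    scores = {name: symmetric_kl(a, b) for name, (a, b) in class_models.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical two-placement comparison using 3-bin HRR class models.
models = {
    "placement_1": ((np.array([1.0, 0.5, 0.2]), np.eye(3)),
                    (np.array([1.1, 0.6, 0.2]), np.eye(3))),
    "placement_2": ((np.array([1.0, 0.5, 0.2]), np.eye(3)),
                    (np.array([2.0, 1.5, 0.9]), np.eye(3))),
}
print(rank_placements(models))   # placement_2 separates the classes better
```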
364

The effects of scaling on bite force and suction index in the eastern hellbender (Cryptobranchus alleganiensis)

Larghi, Nicholas Patrick 01 January 2013 (has links)
The hellbender (Cryptobranchus alleganiensis) is a salamander that grows over a large range of body sizes (2-74 cm total length), making it an ideal organism for examining the effects of body size on morphology and performance. The goal of this study is to investigate how morphology, and with it feeding ability, changes over ontogeny. Cryptobranchus feeds on small aquatic insects as a juvenile and shifts to crayfish as it grows larger. Morphology can be expected to change as an organism grows, and because morphology and performance are closely linked, this morphological change can result in a change in feeding ability. Cryptobranchus alleganiensis is a primarily aquatic salamander that uses both suction feeding and biting. I hypothesized that bite force would scale with positive allometry, reflecting a possible ontogenetic dietary shift in which larger Cryptobranchus favor crayfish. Because suction is the primary mode of feeding throughout ontogeny, suction index was hypothesized to scale with isometry. Fourteen preserved specimens (11.9-34.5 cm SVL) were used to investigate the effects of scaling on suction potential and estimated bite force. Bite force was calculated using a 3D static equilibrium model, and suction potential was quantified as suction index. Bite force scaled with positive allometry, so the animals bite harder relative to body mass as body size increases, whereas suction index showed no effect of body size. These results indicate that Cryptobranchus alleganiensis maintains suction performance across ontogeny but increases its biting performance, consistent with a possible ontogenetic dietary shift toward durophagous prey.
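The scaling analysis described above is, in essence, a log-log regression of performance against body size compared with an isometric expectation (bite force tracks muscle cross-sectional area, so its isometric slope against a linear body measure is 2). The Python sketch below illustrates this general approach on synthetic data with a known exponent; it is not the study's dataset or its exact statistical procedure.

```python
import numpy as np

def allometric_exponent(size, performance):
    """Slope of log10(performance) on log10(size): the exponent b in y = a * x^b."""
    slope, intercept = np.polyfit(np.log10(size), np.log10(performance), 1)
    return slope

# Synthetic data with a built-in exponent of 2.3 (illustration only, not the study's data).
svl = np.linspace(11.9, 34.5, 14)        # snout-vent length, cm
bite_force = 0.01 * svl ** 2.3           # hypothetical bite force, N

b = allometric_exponent(svl, bite_force)
# Under isometry, force ~ muscle cross-sectional area ~ length^2, so b is about 2;
# b > 2 indicates positive allometry, b < 2 negative allometry.
print(f"estimated exponent: {b:.2f} ->",
      "positive allometry" if b > 2 else "isometry/negative allometry")
```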
365

Examining the presence of arching action in edge-stiffened cantilever slab overhangs subjected to a static and fatigue wheel load

Klowak, Chad Steven 01 October 2015 (has links)
Engineers have proposed that arching action may be present in bridge deck cantilever slab overhangs that are stiffened along their longitudinal free edge by a traffic barrier and subjected to a wheel load. The experimental research program consisted of the design, construction, and destructive static and fatigue testing of a full-scale innovative bridge deck slab complete with two traffic barrier walls. The experimental data strongly indicated the presence of arching action in edge-stiffened cantilever slab overhangs subjected to static and fatigue wheel loads. Deflection profiles indicated curvatures that contradict classical flexural behavior. Large tensile strains on the bottom reinforcing mat at all cantilever test locations, together with the observed cracking patterns, indicate behavior typical of arching action. Measured top transverse strains did not agree with flexural theory, and their patterns confirmed earlier research findings that the quantity of top transverse reinforcement may be reduced. Compressive strains measured on the top surface of the cantilever contradicted flexural theory and confirmed the presence of arching action. Punching shear modes of failure observed at all test locations also strengthened the argument for the presence of arching action. Theoretical and analytical modeling techniques validated the experimental test results. Based on the experimental findings and analytical modeling, the researchers confirmed a major presence of arching action in edge-stiffened cantilever slab overhangs subjected to static and fatigue wheel loads. Recommendations include a reduction in the top transverse reinforcement provided in the adjacent internal panel, made possible by arching action, which could contribute to significant initial capital cost savings. Based on the research findings, the report also suggests potential design-code provisions that take the presence of arching action into account. Further research and theoretical modeling are still required to better understand arching action in edge-stiffened cantilever slab overhangs. Additional testing and a demonstration project, complete with civionics and structural health monitoring, will aid engineers in implementing the breakthrough findings highlighted in this study. / February 2016
366

Characteristics of dynamic abnormal grain growth in commercial-purity molybdenum

Worthington, Daniel Lee 06 February 2012 (has links)
Dynamic abnormal grain growth (DAGG) in commercial-purity molybdenum sheets was investigated through a series of tensile tests at temperatures between 1450°C and 1800°C. DAGG is abnormal grain growth (AGG) which requires the presence of concurrent plastic strain. Most AGG phenomena previously documented in the literature can be categorized as static abnormal grain growth (SAGG) because they occur during static annealing, sometimes following plastic strain, but do not occur during plastic deformation. The DAGG boundary migration rate is much faster than the SAGG boundary migration rate, and DAGG may be utilized to obtain large single crystals in the solid state. Dynamic abnormal grains were found to exhibit a crystallographic orientation preference with respect to the specimen geometry, generally described as derivative from a <101> fiber texture. DAGG was found to prefer growth on the surface of the specimen rather than the interior. The growth of dynamic abnormal grains, which initiated and grew during plastic straining, generally ceased when the application of plastic strain was removed. The DAGG boundary migration rate was found to be a direct function of plastic strain accumulation, regardless of the strain-rate. Therefore, it is hypothesized that the rapid boundary migration rate during DAGG results from an enhanced mobility of certain boundaries. A model is proposed based on the rate of boundary unpinning, as mediated by the emission of dislocations from pinning sites. / text
367

Systematic techniques for efficiently checking Software Product Lines

Kim, Chang Hwan Peter 25 February 2014 (has links)
A Software Product Line (SPL) is a family of related programs, each of which is defined by a combination of features. By developing related programs together, an SPL simultaneously reduces programming effort and satisfies multiple sets of requirements. Testing an SPL efficiently is challenging because a property must be checked for all the programs in the SPL, the number of which can be exponential in the number of features. In this dissertation, we present a suite of complementary static and dynamic techniques for efficient testing and runtime monitoring of SPLs, which can be divided into two categories. The first prunes programs, termed configurations, that are irrelevant to the property being tested. More specifically, for a given test, a static analysis identifies features that can influence the test outcome, so that the test needs to be run only on programs that include these features. A dynamic analysis counterpart also eliminates configurations that do not have to be tested, but does so by checking a simpler property and can be faster and more scalable. In addition, for runtime monitoring, a static analysis identifies configurations that can violate a safety property, and only these configurations need to be monitored. When no configurations can be pruned, either by design of the test or due to ineffectiveness of program analyses, runtime similarity between configurations, which arises from their design similarity within a product line, is exploited. In particular, shared execution runs all the configurations together, executing bytecode instructions common to the configurations just once. Deferred execution improves on shared execution by allowing multiple memory locations to be treated as a single memory location, which can increase the amount of sharing for object-oriented programs and for programs using arrays. The techniques have been evaluated, and the results demonstrate that they can be effective and support the idea that, despite the feature combinatorics of an SPL, its structure can be exploited by automated analyses to make testing more efficient. / text
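The configuration-pruning idea described above can be illustrated with a toy sketch: of the 2^n programs induced by n optional features, a test only needs to run on configurations that differ in the features that can influence its outcome. The feature names and the `prune_for_test` helper below are hypothetical; the static and dynamic analyses that actually compute the relevant feature set are not shown.

```python
from itertools import product

FEATURES = ["LOGGING", "CACHE", "ENCRYPT", "COMPRESS"]   # hypothetical feature names

def all_configurations(features):
    """Every feature combination: 2^n candidate programs for n optional features."""
    for bits in product([False, True], repeat=len(features)):
        yield dict(zip(features, bits))

def prune_for_test(features, relevant):
    """One representative configuration per assignment of the features that can
    influence the test outcome; features known to be irrelevant are fixed off."""
    irrelevant = [f for f in features if f not in relevant]
    for bits in product([False, True], repeat=len(relevant)):
        config = dict(zip(relevant, bits))
        config.update({f: False for f in irrelevant})
        yield config

print(sum(1 for _ in all_configurations(FEATURES)))                     # 16 programs
print(sum(1 for _ in prune_for_test(FEATURES, ["CACHE", "ENCRYPT"])))   # 4 to test
```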
368

Toward better server-side Web security

Son, Sooel 25 June 2014 (has links)
Server-side Web applications are constantly exposed to new threats as new technologies emerge. For instance, forced browsing attacks exploit incomplete access-control enforcement to perform security-sensitive operations (such as database writes without proper permission) by invoking unintended program entry points. SQL command injection attacks (SQLCIA) have evolved into NoSQL command injection attacks targeting the increasingly popular NoSQL databases. They may expose internal data, bypass authentication or violate security and privacy properties. Preventing such Web attacks demands defensive programming techniques that require repetitive and error-prone manual coding and auditing. This dissertation presents three methods for improving the security of server-side Web applications against forced browsing and SQL/NoSQL command injection attacks. The first method finds incomplete access-control enforcement. It statically identifies access-control logic that mediates security-sensitive operations and finds missing access-control checks without an a priori specification of an access-control policy. Second, we design, implement and evaluate a static analysis and program transformation tool that finds access-control errors of omission and produces candidate repairs. Our third method dynamically identifies SQL/NoSQL command injection attacks. It computes shadow values to track user-injected data and then parses the original database query in tandem with its shadow value to determine whether user-injected parts serve as code. Remediating Web vulnerabilities and blocking Web attacks are essential for improving Web application security. Automated security tools help developers remediate Web vulnerabilities and block Web attacks while minimizing error-prone human factors. This dissertation describes automated tools implementing the proposed ideas and explores their applications to real-world server-side Web applications. Automated security tools are effective for identifying server-side Web application security holes and are a promising direction toward better server-side Web security. / text
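The shadow-value idea can be illustrated with a deliberately simplified sketch: fill the query template once with the real user input and once with a known benign "shadow" value, then compare the code structure of the two queries after collapsing literals; if the structures differ, the user input has been interpreted as code. The toy tokenizer and the `is_injection` helper below are illustrative assumptions, not the dissertation's actual tool or parser.

```python
import re

TOKEN = re.compile(r"\s*('(?:[^']*)'|\w+|[^\s\w])")

def code_shape(query):
    """Token sequence with string and numeric literals collapsed to a placeholder,
    so only the 'code' part of the query remains."""
    return ["LIT" if tok.startswith("'") or tok.isdigit() else tok.upper()
            for tok in TOKEN.findall(query)]

def is_injection(template, user_value):
    """Toy shadow-value check: the real query must have the same code shape as a
    shadow query built from a benign placeholder value."""
    real = template.format(user_value)
    shadow = template.format("x")            # benign shadow value
    return code_shape(real) != code_shape(shadow)

tpl = "SELECT * FROM users WHERE name = '{}'"
print(is_injection(tpl, "alice"))            # False: input remained data
print(is_injection(tpl, "' OR '1'='1"))      # True: input became SQL code
```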
369

Automatic Extraction of Program Models for Formal Software Verification

de Carvalho Gomes, Pedro January 2015 (has links)
In this thesis we present a study of the generation of abstract program models from programs in real-world programming languages, models that are employed in the formal verification of software. The thesis is divided into three parts, which cover distinct types of software systems, programming languages, verification scenarios, program models and properties. The first part presents an algorithm for the extraction of control flow graphs from sequential Java bytecode programs. The graphs are tailored for a compositional technique for the verification of temporal control flow safety properties. We prove that the extracted models soundly over-approximate the program behaviour w.r.t. sequences of method invocations and exceptions. Therefore, the properties that are established with the compositional technique over the control flow graphs also hold for the programs. We implement the algorithm as ConFlEx and evaluate the tool on a number of test cases. The second part presents a technique to generate program models from incomplete software systems, i.e., programs where the implementation of at least one of the components is not available. We first define a framework to represent incomplete Java bytecode programs, and extend the algorithm presented in the first part to handle missing code. Then, we introduce refinement rules, i.e., conditions for instantiating the missing code, and prove that the rules preserve properties established over control flow graphs extracted from incomplete programs. We have extended ConFlEx to support the new definitions, and re-evaluate the tool, now over test cases of incomplete programs. The third part addresses the verification of multithreaded programs. We present a technique to prove the following property of synchronization with condition variables: "If every thread synchronizing under the same condition variables eventually enters its synchronization block, then every thread will eventually exit the synchronization". To support the verification, we first propose SyncTask, a simple intermediate language for specifying synchronized parallel computations. Then, we propose an annotation language for Java programs to assist the automatic extraction of SyncTask programs, and show that, for correctly annotated programs, the above-mentioned property holds if and only if the corresponding SyncTask program terminates. We reduce the termination problem to a reachability problem on Coloured Petri Nets. We define an algorithm to extract nets from SyncTask programs, and show that a program terminates if and only if its corresponding net always reaches a particular set of dead configurations. The extraction of SyncTask programs and their translation into Petri nets are implemented in the STaVe tool. We evaluate the technique by feeding annotated Java programs to STaVe and then verifying the extracted nets with a standard Coloured Petri Net analysis tool.
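A minimal sketch of the final reduction, reachability analysis over a bounded net, is given below in Python: it explores the finite marking graph of a plain (uncoloured) toy Petri net and checks that every maximal firing sequence ends in a designated set of dead markings, which is the shape of the termination condition described above. The encoding and helper names are assumptions for illustration only; the thesis itself targets Coloured Petri Nets analysed with a standard external tool.

```python
def fire(marking, transition):
    """Fire transition (pre, post) on a marking {place: tokens}; None if disabled."""
    pre, post = transition
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None
    new = dict(marking)
    for p, n in pre.items():
        new[p] -= n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

def always_reaches_dead(transitions, initial, goal_dead):
    """For a bounded net: True iff no infinite firing sequence exists and every
    reachable dead marking belongs to goal_dead."""
    def key(m):  # canonical, zero-free representation of a marking
        return tuple(sorted((p, n) for p, n in m.items() if n))

    goal_keys = {key(g) for g in goal_dead}
    WHITE, GREY, BLACK = 0, 1, 2
    colour, ok = {}, True

    def dfs(marking):
        nonlocal ok
        k = key(marking)
        colour[k] = GREY
        successors = [m for t in transitions if (m := fire(marking, t)) is not None]
        if not successors and k not in goal_keys:
            ok = False                       # deadlock outside the designated set
        for succ in successors:
            sk = key(succ)
            if colour.get(sk, WHITE) == GREY:
                ok = False                   # cycle: a possible infinite run
            elif colour.get(sk, WHITE) == WHITE:
                dfs(succ)
        colour[k] = BLACK

    dfs(initial)
    return ok

# Toy net: two 'ready' tokens are consumed one by one into 'done'.
transitions = [({"ready": 1}, {"done": 1})]
print(always_reaches_dead(transitions, {"ready": 2}, [{"done": 2}]))   # True
```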
370

The effects of processing conditions on static abnormal grain growth in Al-Mg alloy AA5182

Carpenter, Alexander James 17 June 2011 (has links)
Static abnormal grain growth (SAGG) was studied in Al-Mg alloy AA5182 sheet by varying four processing parameters: deformation temperature, strain rate, annealing temperature, and annealing time. SAGG is a secondary recrystallization process related to geometric dynamic recrystallization (GDRX) and requires both deformation at elevated temperature and subsequent static annealing. A minimum temperature is required for both SAGG and GDRX. Recrystallized grains only develop at strains larger than the critical strain for SAGG, ε_SAGG. The size of the recrystallized grains is inversely related to and controlled by the density of SAGG nuclei, which increases as local strain increases. The results of this study suggest that SAGG is controlled by two thermally-activated mechanisms, dynamic recovery and recrystallization. During deformation, dynamic recovery increases as deformation temperature increases or strain rate decreases, increasing the critical strain for SAGG. SAGG is subject to an incubation time that decreases as annealing temperature increases. SAGG can produce grains large enough to reduce yield strength by 20 to 50 percent. The results of this study suggest strategies for avoiding SAGG during hot-metal forming operations by varying processing conditions to increase ε_SAGG. / text
