71 |
An Integrated Seismic Hazard Framework For Liquefaction Triggering Assessment Of Earthfill Dams' Foundation Soils
Unsal Oral, Sevinc, 01 February 2009
Within the confines of this study, the seismic soil liquefaction triggering potential of a dam foundation is assessed within an integrated probabilistic seismic hazard assessment framework. More specifically, the scheme presented here directly integrates effective stress-based seismic soil liquefaction triggering assessment with a seismic hazard analysis framework, supported by an illustrative case. The proposed methodology successively i) processes the discrete stages of the probabilistic seismic hazard workflow upon seismic source characterization, ii) numerically develops the target elastic acceleration response spectra for typical rock sites, covering all the earthquake scenarios, re-grouped with respect to earthquake magnitude and distance, iii) matches strong ground motion records selected from a database with the target response spectra for every defined scenario, and iv) performs 2-D equivalent linear seismic response analyses of a 56 m high earthfill dam founded on 24 m thick alluvial deposits. Results of the seismic response analyses are presented in the form of annual probabilities of excess pore pressure ratios and seismically-induced lateral deformations exceeding various threshold values. To assess the safety of the dam slopes, phi-c reduction based slope stability analyses were also performed for post-liquefaction conditions. After integrating the phi-c reduction analysis results into the probabilistic hazard framework, annual probabilities of the slope factor of safety exceeding various threshold values were estimated. As a concluding remark, probabilities of liquefaction triggering, induced deformations and factors of safety are presented for a service life of 100 years. The proposed probabilistic seismic performance assessment methodology, which incorporates both phi-c reduction based failure probabilities and seismic soil liquefaction-induced deformation potentials, provides dam engineers with a robust way to rationally quantify the level of confidence in their decisions as to whether costly mitigation of dam foundation soils against seismic soil liquefaction triggering hazard and the induced risks is necessary.
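As a pointer to how the final step can be read: a minimal sketch, assuming a Poisson (memoryless) occurrence model, of turning an annual exceedance rate into a probability over the 100-year service life mentioned above. The rates used are hypothetical, not values from the thesis.

```python
# Sketch: converting annual exceedance rates into service-life
# probabilities, as in the final step of the framework described above.
# Assumes a Poisson occurrence model; the rates are hypothetical.
import math

def service_life_probability(annual_rate: float, years: float) -> float:
    """P(at least one exceedance in `years`) under a Poisson model."""
    return 1.0 - math.exp(-annual_rate * years)

# Hypothetical annual rates of exceeding three excess-pore-pressure thresholds
annual_rates = {"r_u > 0.5": 1e-2, "r_u > 0.8": 2e-3, "r_u > 1.0": 5e-4}

for threshold, rate in annual_rates.items():
    p = service_life_probability(rate, 100.0)  # 100-year service life
    print(f"{threshold}: P(exceedance in 100 yr) = {p:.3f}")
```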
|
72 |
Dynamic Effects Of Moving Traffic On Railway Bridges
Cinek, Fatih, 01 May 2010
In this study, the dynamic effects on high-speed railway bridges under moving traffic are investigated. Within this context, a clear definition of the possible dynamic effects is provided and the related studies in the literature are reviewed. In the light of those studies, analytical procedures for finding the critical dynamic responses such as deflections, accelerations and resonance conditions are examined, and a MATLAB program is written to obtain the responses for different train loadings and velocities. The reliability of the program is confirmed by comparing its results with related studies in the literature. In addition to the analytical procedures, the approaches in the European standards concerning the dynamic effects of railway traffic are described. A case study is investigated for a bridge within the scope of the Ankara-Sivas High Speed Railway Project. The bridge is modeled in the finite element program SAP2000 according to the definitions stated in the European standards. It is analysed with a real train, the French TGV, together with the HSLM trains defined in Eurocode, and the results are compared with each other. The study also includes analyses of the bridge for 7 different stiffness values and 3 different mass values to determine the parameters affecting dynamic behaviour.
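For readers unfamiliar with the resonance conditions mentioned above, the classical check used in Eurocode-style assessments can be sketched as follows; the bridge frequency and coach spacing below are assumed values, not data from the thesis.

```python
# Sketch: critical (resonant) train speeds for a simply supported bridge,
# following the classical condition v_i = n0 * d / i. The numbers are
# illustrative assumptions, not data from the thesis.
n0 = 4.0   # first vertical natural frequency of the bridge [Hz] (assumed)
d  = 18.7  # regular spacing of axle groups, e.g. a TGV coach length [m] (assumed)

for i in range(1, 5):
    v = n0 * d / i          # resonant speed [m/s]
    print(f"i={i}: v = {v:6.2f} m/s = {v * 3.6:6.1f} km/h")
```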
|
73 |
Dynamic analysis of multiple-body floating platforms coupled with mooring lines and risers
Kim, Young-Bok, 30 September 2004
A computer program, WINPOST-MULT, is developed for the dynamic analysis of a multiple-body floating system coupled with mooring lines and risers in the presence of waves, winds and currents. The coupled dynamics program for a single platform is extended to multiple-body systems by including all the platforms, mooring lines and risers in a combined matrix equation in the time domain. Compared to iterating between the bodies, the combined matrix method can include the full hydrodynamic interactions among them. Each floating platform is modeled as a rigid body with six degrees of freedom. The first- and second-order wave forces, added mass coefficients, and radiation damping coefficients are calculated with the hydrodynamics program WAMIT for multiple bodies. The time series of wave forces are then generated in the time domain based on the two-term Volterra model. The wind forces are separately generated from the input wind spectrum and a wind force formula, and the current is included through Morison's drag force formula. In the case of the FPSO, the wind and current forces are generated using the respective coefficients given in the OCIMF data sheet. A finite element method is derived for long elastic elements of arbitrary shape and material. The newly developed program is first applied to the system of a turret-moored FPSO and a shuttle tanker in tandem mooring. The dynamics of the turret-moored FPSO in waves, winds and currents are verified against an independent computation and an OTRC experiment. Simulations for the FPSO-shuttle system with a hawser connection are then carried out, and the results are compared with simplified methods that neglect or only partially include hydrodynamic interactions.
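A minimal sketch of the combined-matrix idea, reduced to one degree of freedom (surge) per body for brevity: the hawser coupling the two bodies appears as off-diagonal stiffness terms, so both bodies are advanced in a single solve rather than by iterating between them. All numeric values are illustrative assumptions.

```python
# Sketch: two coupled floating bodies assembled into one matrix equation
# M x'' + C x' + K x = F(t) and stepped in the time domain. Semi-implicit
# Euler is used for simplicity; all values are assumed, not thesis data.
import numpy as np

m1, m2 = 2.4e8, 1.1e8        # mass + added mass of each body [kg] (assumed)
c1, c2 = 1.0e6, 8.0e5        # linearized damping [N*s/m] (assumed)
k1, k2 = 5.0e5, 0.0          # mooring stiffness on each body [N/m] (assumed)
kh = 2.0e5                   # hawser stiffness coupling the bodies [N/m]

M = np.diag([m1, m2])
C = np.diag([c1, c2])
K = np.array([[k1 + kh, -kh],
              [-kh,     k2 + kh]])   # coupling enters the off-diagonal terms

dt, n = 0.1, 6000            # time step [s], number of steps (10 minutes)
x = np.zeros(2); v = np.zeros(2)
for step in range(n):
    t = step * dt
    F = np.array([1.0e5 * np.sin(0.5 * t), 0.0])   # toy wave force on body 1
    a = np.linalg.solve(M, F - C @ v - K @ x)      # one combined solve
    v += a * dt
    x += v * dt
print("final surge offsets [m]:", x)
```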
|
74 |
Towards a Gold Standard for Points-to Analysis
Gutzmann, Tobias, January 2010
Points-to analysis is a static program analysis that computes reference information for a given input program. It serves as input to many client applications in optimizing compilers and software engineering tools. Unfortunately, the Gold Standard – i.e., the exact reference information for a given program – is impossible to compute automatically for all but trivial cases, and thus little can be said about the accuracy of points-to analysis.

This thesis aims at paving the way towards a Gold Standard for points-to analysis. To this end, we discuss theoretical implications and practical challenges that occur when comparing results obtained by different points-to analyses. We also show ways to improve points-to analysis by different means, e.g., combining different analysis implementations, and a novel approach to path sensitivity.

We support our theories with a number of experiments.
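As background for what "reference information" means here, a minimal flow-insensitive (Andersen-style) points-to analysis can be sketched in a few lines. This illustrates the kind of analysis being compared, not the thesis's own implementation.

```python
# Sketch: a tiny flow-insensitive points-to analysis over two statement
# forms, iterated to a fixpoint. Variables map to sets of abstract objects.
def points_to(stmts):
    pts = {}  # variable -> set of abstract objects it may reference
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for op, lhs, rhs in stmts:
            if op == "new":              # lhs = new Obj; rhs names the object
                add = {rhs}
            else:                        # "copy": lhs = rhs, pts(rhs) <= pts(lhs)
                add = pts.get(rhs, set())
            before = pts.setdefault(lhs, set())
            if not add <= before:
                before |= add
                changed = True
    return pts

stmts = [("new", "a", "o1"), ("new", "b", "o2"),
         ("copy", "c", "a"), ("copy", "c", "b")]
print(points_to(stmts))  # c may point to both o1 and o2
```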
|
75 |
Systematic techniques for efficiently checking Software Product Lines
Kim, Chang Hwan Peter, 25 February 2014
A Software Product Line (SPL) is a family of related programs, each of which is defined by a combination of features. By developing related programs together, an SPL simultaneously reduces programming effort and satisfies multiple sets of requirements. Testing an SPL efficiently is challenging because a property must be checked for all the programs in the SPL, the number of which can be exponential in the number of features.

In this dissertation, we present a suite of complementary static and dynamic techniques for efficient testing and runtime monitoring of SPLs, which can be divided into two categories. The first prunes programs, termed configurations, that are irrelevant to the property being tested. More specifically, for a given test, a static analysis identifies features that can influence the test outcome, so that the test needs to be run only on programs that include these features. A dynamic analysis counterpart also eliminates configurations that do not have to be tested, but does so by checking a simpler property, and can be faster and more scalable. In addition, for runtime monitoring, a static analysis identifies configurations that can violate a safety property, and only these configurations need to be monitored.

When no configurations can be pruned, whether by design of the test or due to ineffectiveness of the program analyses, runtime similarity between configurations, arising from design similarity between configurations of a product line, is exploited. In particular, shared execution runs all the configurations together, executing bytecode instructions common to the configurations just once. Deferred execution improves on shared execution by allowing multiple memory locations to be treated as a single memory location, which can increase the amount of sharing for object-oriented programs and for programs using arrays.

The techniques have been evaluated and the results demonstrate that they can be effective, advancing the idea that despite the feature combinatorics of an SPL, its structure can be exploited by automated analyses to make testing more efficient.
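A minimal sketch of the configuration-pruning idea described above: if a static analysis reports that only certain features can influence a test's outcome, the test needs one run per combination of those features, with all other features fixed arbitrarily. The feature names are hypothetical.

```python
# Sketch: enumerate only configurations that vary the features relevant
# to a given test, instead of all 2^n configurations of the product line.
from itertools import product

features = ["Logging", "Cache", "Encryption", "Compression", "Replication"]
relevant = ["Cache", "Encryption"]       # reported by the static analysis (assumed)

def configs_to_test():
    fixed = {f: False for f in features if f not in relevant}
    for values in product([False, True], repeat=len(relevant)):
        yield {**fixed, **dict(zip(relevant, values))}

runs = list(configs_to_test())
print(f"{len(runs)} runs instead of {2 ** len(features)}")   # 4 instead of 32
for cfg in runs:
    print({f: v for f, v in cfg.items() if f in relevant})
```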
|
76 |
Toward better server-side Web security
Son, Sooel, 25 June 2014
Server-side Web applications are constantly exposed to new threats as new technologies emerge. For instance, forced browsing attacks exploit incomplete access-control enforcement to perform security-sensitive operations (such as database writes without proper permission) by invoking unintended program entry points. SQL command injection attacks (SQLCIA) have evolved into NoSQL command injection attacks targeting the increasingly popular NoSQL databases. They may expose internal data, bypass authentication or violate security and privacy properties. Preventing such Web attacks demands defensive programming techniques that require repetitive and error-prone manual coding and auditing. This dissertation presents three methods for improving the security of server-side Web applications against forced browsing and SQL/NoSQL command injection attacks. The first method finds incomplete access-control enforcement. It statically identifies access-control logic that mediates security-sensitive operations and finds missing access-control checks without an a priori specification of an access-control policy. Second, we design, implement and evaluate a static analysis and program transformation tool that finds access-control errors of omission and produces candidate repairs. Our third method dynamically identifies SQL/NoSQL command injection attacks. It computes shadow values to track user-injected values and then parses the original database query in tandem with its shadow value to identify whether user-injected parts serve as code. Remediating Web vulnerabilities and blocking Web attacks are essential for improving Web application security. Automated security tools help developers remediate Web vulnerabilities and block Web attacks while minimizing error-prone human factors. This dissertation describes automated tools implementing the proposed ideas and explores their applications to real-world server-side Web applications. Automated security tools are effective for identifying server-side Web application security holes and a promising direction toward better server-side Web security.
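The shadow-value check can be sketched as follows: rebuild the query with each user-injected substring replaced by a benign placeholder of equal length, tokenize both strings, and flag an attack if the user data changed the token structure (i.e., served as code). The tokenizer below is deliberately tiny, and the whole sketch is a hedged reconstruction of the idea, not the dissertation's implementation.

```python
# Sketch: shadow-value detection of SQL injection via structural comparison.
import re

TOKEN = re.compile(r"\s*(?:(')|([();=])|([^\s'();=]+))")

def tokenize(query):
    kinds = []
    for quote, punct, word in TOKEN.findall(query):
        kinds.append("'" if quote else punct if punct else "w")
    return kinds

def is_injection(template, user_value):
    real   = template.format(user_value)
    shadow = template.format("a" * len(user_value))   # benign shadow value
    return tokenize(real) != tokenize(shadow)         # structure changed?

q = "SELECT * FROM users WHERE name = '{}';"
print(is_injection(q, "alice"))            # False: data stayed data
print(is_injection(q, "x' OR '1'='1"))     # True: quotes became structure
```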
|
77 |
Numerical and Experimental Investigations of the Machinability of Ti6Al4V: Energy Efficiency and Sustainable Cooling/Lubrication Strategies
Pervaiz, Salman, January 2015
Titanium alloys are widely utilized in the aerospace, biomedical, marine, petro-chemical and other demanding industries due to their durability, high fatigue resistance and ability to sustain elevated operating temperatures. Titanium alloys are difficult to machine, and machining them therefore carries a higher environmental burden. Industry is now embracing the sustainable philosophy in order to reduce its carbon footprint. This means that the best sustainable practices have to be used in machining titanium alloys as well, in an effort to reduce the carbon footprint and greenhouse gas (GHG) emissions.

In this thesis, a better understanding was established of the feasibility of shifting from conventional (dry and flood) cooling techniques to vegetable oil based minimum quantity cooling lubrication (MQCL). The machining performance of the MQCL cooling strategies was encouraging, as in most cases the tool life was found to be close to that of the flood strategy, or sometimes even better. The study revealed that the influence of the MQCL (internal) application method on overall machining performance was more evident at higher cutting speeds. In addition to the experimental machinability investigations, Finite Element Modeling (FEM) and Computational Fluid Dynamics (CFD) modeling were employed to predict the energy consumed in machining and the cutting temperature distribution on the cutting tool. All numerical results were found to be in close agreement with the experimental data. The contribution of the thesis should be of interest to those who work in the areas of sustainable machining.
|
78 |
A Model for Run-time Measurement of Input and Round-off Error
Meng, Nicholas Jie, 25 September 2012
For scientists, the accuracy of their results is a constant concern. As the programs they write to support their research grow in complexity, there is a greater need to understand what causes the inaccuracies in their outputs, and how they can be mitigated. This problem is difficult because the inaccuracies in the outputs come from a variety of sources in both the scientific and computing domains. Furthermore, as most programs lack a testing oracle, there is no simple way to validate the results.
We define a model for the analysis of error propagation in software. Its novel combination of interval arithmetic and automatic differentiation allows the error accumulated in an output to be measured at runtime, attributed to individual inputs and functions, and identified as either input error, round-off error, or error from a different source. This allows identification of the subset of inputs and functions most responsible for the error seen in an output, and of how that error can best be mitigated. We demonstrate the effectiveness of our model by analyzing a small case study from the field of nuclear engineering, where we are able to attribute over 99% of the error to 3 functions out of 15 and to identify the causes of the observed error. / Thesis (Master, Computing) – Queen's University, 2012
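A minimal sketch of the model's core combination: a value that carries both an interval (bounding input and round-off error) and forward-mode partial derivatives. Only addition and multiplication are shown, and evaluating derivatives at interval midpoints is a simplification of this sketch; a full implementation would cover all operations and widen intervals for round-off at every step.

```python
# Sketch: interval arithmetic fused with forward-mode automatic
# differentiation, so each value knows its error bounds and sensitivities.
class EVal:
    def __init__(self, lo, hi, grads):
        self.lo, self.hi = lo, hi      # interval enclosing the true value
        self.grads = grads             # partial derivative per input name

    def __add__(self, other):
        g = {k: self.grads.get(k, 0.0) + other.grads.get(k, 0.0)
             for k in {**self.grads, **other.grads}}
        return EVal(self.lo + other.lo, self.hi + other.hi, g)

    def __mul__(self, other):
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        ms, mo = (self.lo + self.hi) / 2, (other.lo + other.hi) / 2
        g = {k: self.grads.get(k, 0.0) * mo + other.grads.get(k, 0.0) * ms
             for k in {**self.grads, **other.grads}}   # product rule at midpoints
        return EVal(min(corners), max(corners), g)

# Hypothetical measured inputs: x = 2.0 +/- 0.01, y = 3.0 +/- 0.02
x = EVal(1.99, 2.01, {"x": 1.0})
y = EVal(2.98, 3.02, {"y": 1.0})
z = x * y + x                  # z = x*y + x, so dz/dx = y + 1, dz/dy = x
print(f"z in [{z.lo:.4f}, {z.hi:.4f}], sensitivities: {z.grads}")
```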
|
79 |
Improving Device Driver Reliability through Decoupled Dynamic Binary Analyses
Ruwase, Olatunji O., 01 May 2013
Device drivers are Operating Systems (OS) extensions that enable the use of I/O devices in computing systems. However, studies have identified drivers as an Achilles’ heel of system reliability, their high fault rate accounting for a significant portion of system failures. Consequently, significant effort has been directed towards improving system robustness by protecting system components (e.g., OS kernel, I/O devices, etc.) from the harmful effects of driver faults. In contrast to prior techniques which focused on preventing unsafe driver interactions (e.g., with the OS kernel), my thesis is that checking a driver’s execution for correctness violations results in the detection and mitigation of more faults.
To validate this thesis, I present Guardrail, a flexible and powerful framework that enables instruction-grained dynamic analysis (e.g., data race detection) of unmodified kernel-mode driver binaries to safeguard I/O operations and devices from driver faults. Guardrail decouples the analysis tool from driver execution to improve performance, and runs it in user-space to simplify the deployment of new tools. Moreover, Guardrail leverages virtualization to be transparent to both the driver and device, and enable support for arbitrary driver/device combinations.
To demonstrate Guardrail’s generality, I implemented three novel dynamic checking tools within the framework for detecting memory faults, data races and DMA faults in drivers. These tools found 25 serious bugs, including previously unknown bugs, in Linux storage and network drivers. Some of the bugs existed in several Linux (and driver) releases, suggesting their elusiveness to existing approaches. Guardrail easily detected these bugs using common driver workloads.
Finally, I present an evaluation of using Guardrail to protect network and storage I/O operations from memory faults, data races and DMA faults in drivers. The results show that with hardware-assisted logging for decoupling the heavyweight analyses from driver execution, standard I/O workloads generally experienced negligible slowdown on their end-to-end performance.
In conclusion, Guardrail's high-fidelity fault detection and efficient monitoring performance make it a promising approach for improving the resilience of computing systems to the wide variety of driver faults.
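The decoupling idea can be sketched as a log-and-replay pipeline: the instrumented driver side only appends compact events to a log, and a separate user-space checker replays the log to detect faults. The event format and the DMA-bounds rule below are hypothetical stand-ins, not Guardrail's actual interface.

```python
# Sketch: a user-space checker that replays a logged driver trace and
# flags DMA accesses outside currently mapped buffers.
def check_dma(log):
    mapped = {}                                   # buffer id -> (base, size)
    for event in log:
        kind = event[0]
        if kind == "map":
            _, buf, base, size = event
            mapped[buf] = (base, size)
        elif kind == "unmap":
            mapped.pop(event[1], None)
        elif kind == "access":                    # device touches memory
            _, addr = event
            ok = any(b <= addr < b + s for b, s in mapped.values())
            if not ok:
                yield f"DMA fault: access to {addr:#x} outside mapped buffers"

trace = [("map", 1, 0x1000, 256),
         ("access", 0x1080),                      # fine: inside buffer 1
         ("unmap", 1),
         ("access", 0x1080)]                      # fault: use after unmap
print(list(check_dma(trace)))
```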
|
80 |
Hermes: A Targeted Fuzz Testing Framework
Shortt, Caleb James, 12 March 2015
The use of security assurance cases (security cases) to provide evidence-based assurance of security properties in software is a young field in software engineering. A security case uses evidence to argue that a particular claim is true. For example, the highest-level claim may be that a given system is sufficiently secure, and it would include sub-claims that break that general claim down into more granular and tangible items, such as evidence or other claims. Random negative testing (fuzz testing) is used as evidence to support security cases and the assurance they provide. Many current approaches apply fuzz testing to a target system for a given amount of time due to resource constraints, which may leave entire sections of code untouched [60]. The results may be used as evidence in a security case, but their quality varies with controllable variables, such as time, and uncontrollable variables, such as the random paths chosen by the fuzz testing engine.

This thesis presents Hermes, a proof-of-concept fuzz testing framework that provides improved evidence for security cases by automatically targeting problem sections in software and selectively fuzz testing them in a repeatable and timely manner. In our experiments, Hermes produced results with target code coverage comparable to a full, exhaustive fuzz test run while significantly reducing the test execution time associated with an exhaustive fuzz test. These results provide a targeted piece of evidence for security cases which can be audited and refined for further assurance. Hermes' design allows it to be easily attached to continuous integration frameworks, where it can be executed alongside other frameworks in a given test suite.
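The targeting idea can be sketched as a coverage-guided loop that scores seed inputs by how much of a designated problem section they reach and mutates only the best ones. Here `run_with_coverage` and the target set are stand-ins for real instrumentation, not Hermes's actual API.

```python
# Sketch: a targeted fuzzing loop that keeps only inputs reaching the
# designated problem functions. All names are hypothetical stand-ins.
import random

TARGET_FUNCS = {"parse_header", "decode_chunk"}    # hypothetical hot spots

def run_with_coverage(data: bytes) -> set:
    """Stand-in for an instrumented execution returning covered functions."""
    covered = {"main"}
    if data.startswith(b"HDR"):
        covered.add("parse_header")
        if len(data) > 8:
            covered.add("decode_chunk")
    return covered

def mutate(data: bytes) -> bytes:
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

seeds = [b"HDR\x00\x01\x02\x03\x04\x05", b"junkjunk"]
for _ in range(200):
    seed = max(seeds, key=lambda s: len(run_with_coverage(s) & TARGET_FUNCS))
    child = mutate(seed)
    if run_with_coverage(child) & TARGET_FUNCS:    # keep only targeted inputs
        seeds.append(child)
print(f"{len(seeds)} targeted seeds retained")
```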
|